Fire

Fire is the rapid oxidation of a material (the fuel) in the exothermic chemical process of combustion, releasing heat, light, and various reaction products.
At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma. The color of the flame and the fire's intensity depend on the substances alight and any impurities present.
Fire, in its most common form, has the potential to result in conflagration, which can cause permanent physical damage through burning. Fire is a significant process that influences ecological systems worldwide. The positive effects of fire include stimulating growth and maintaining various ecological systems.
Its negative effects include hazard to life and property, atmospheric pollution, and water contamination. When fire removes protective vegetation, heavy rainfall can contribute to increased soil erosion by water. Additionally, the burning of vegetation releases nitrogen into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, which can be recovered as atmospheric nitrogen is fixed and converted to ammonia by natural phenomena such as lightning or by leguminous plants such as clover, peas, and green beans.
Fire is one of the four classical elements and has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction.
Etymology
The word "fire" originated , which can be traced back to the Germanic root , which itself comes from the Proto-Indo-European from the root . The current spelling of "fire" has been in use since as early as 1200, but it was not until around 1600 that it completely replaced the Middle English term (which is still preserved in the word "fiery").
History
Fossil record
The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, about 470 million years ago, permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire. Wildfire is first recorded in the Late Silurian fossil record, about 420 million years ago, by fossils of charcoalified plants. Apart from a controversial gap in the Late Devonian, charcoal is present ever since. The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire. Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems; this new fuel provided tinder which allowed for the more rapid spread of fire. These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.
Human control of fire
Early human control
The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing pathogenic microorganisms in the food. The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of occasional cooked food is found from about 1 million years ago. Although this evidence shows that fire may have been used in a controlled fashion about 1 million years ago, other sources put the date of regular use at 400,000 years ago. Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; resistance to air pollution started to evolve in human populations at a similar point in time. The use of fire became progressively more sophisticated, as it was used to create charcoal and to control wildlife from tens of thousands of years ago.
Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer.
By the Neolithic Revolution, during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or "cool fires", as opposed to uncontrolled "hot fires", which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today, where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable.
Another human use for fire in landscape management is clearing land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. For small farmers, controlled fires are a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil. However, this useful strategy is also problematic. Growing population, fragmentation of forests and a warming climate are making the earth's surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up plumes of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.
Later human control
There are numerous modern applications of fire. In its broadest sense, fire is used by nearly every human being on Earth in a controlled setting every day. Users of internal combustion vehicles employ fire every time they drive. Thermal power stations provide electricity for a large percentage of humanity by igniting fuels such as coal, oil or natural gas, then using the resultant heat to boil water into steam, which then drives turbines.
Use of fire in war
The use of fire in warfare has a long history. Fire was the basis of all early thermal weapons. The Byzantine fleet used Greek fire to attack ships and men.
The invention of gunpowder in China led to the fire lance, a flame-thrower weapon dating to around 1000 CE which was a precursor to projectile weapons driven by burning gunpowder.
The earliest modern flamethrowers were used by infantry in the First World War, first used by German troops against entrenched French troops near Verdun in February 1915. They were later successfully mounted on armoured vehicles in the Second World War.
Hand-thrown incendiary bombs improvised from glass bottles, later known as Molotov cocktails, were deployed during the Spanish Civil War in the 1930s. Also during that war, incendiary bombs were deployed against Guernica by Fascist Italian and Nazi German air forces that had been created specifically to support Franco's Nationalists.
Incendiary bombs were dropped by Axis and Allies during the Second World War, notably on Coventry, Tokyo, Rotterdam, London, Hamburg and Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Forces also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. The incendiary fluid napalm was first used in July 1944, towards the end of the Second World War, although its use did not gain public attention until the Vietnam War.
Fire management
Controlling a fire to optimize its size, shape, and intensity is generally called fire management, and the more advanced forms of it, as traditionally (and sometimes still) practiced by skilled cooks, blacksmiths, ironmasters, and others, are highly skilled activities. They include knowledge of which fuel to burn; how to arrange the fuel; how to stoke the fire both in early phases and in maintenance phases; how to modulate the heat, flame, and smoke as suited to the desired application; how best to bank a fire to be revived later; how to choose, design, or modify stoves, fireplaces, bakery ovens, or industrial furnaces; and so on. Detailed expositions of fire management are available in various books about blacksmithing, about skilled camping or military scouting, and about domestic arts.
Productive use for energy
Burning fuel converts chemical energy into heat energy; wood has been used as fuel since prehistory. The International Energy Agency states that nearly 80% of the world's power has consistently come from fossil fuels such as petroleum, natural gas, and coal in the past decades. The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly by thermal expansion, in both external and internal combustion engines.
The unburnable solid remains of a combustible material left after a fire is called clinker if its melting point is below the flame temperature, so that it fuses and then solidifies as it cools, and ash if its melting point is above the flame temperature.
Physical properties
Chemistry
Fire is a chemical process in which a fuel and an oxidizing agent react, typically yielding carbon dioxide and water. This process, known as a combustion reaction, does not proceed directly and involves intermediates. Although the oxidizing agent is typically oxygen, other compounds can fulfill the role. For instance, chlorine trifluoride is able to ignite sand.
Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This combination of requirements is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst, a substance that is not itself consumed in any chemical reaction during combustion, but which enables the reactants to combust more readily.
Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel.
If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight. This does not apply if oxygen is supplied to the fire by some process other than thermal convection.
Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner. The fire can be extinguished by any of the following:
turning off the gas supply, which removes the fuel source;
covering the flame completely, which smothers the flame as the combustion both uses the available oxidizer (the oxygen in the air) and displaces it from the area around the flame with CO2;
application of an inert gas such as carbon dioxide, smothering the flame by displacing the available oxidizer;
application of water, which removes heat from the fire faster than the fire can produce it (similarly, blowing hard on a flame will displace the heat of the currently burning gas from its fuel source, to the same end); or
application of a retardant chemical such as Halon (largely banned in some countries) to the flame, which retards the chemical reaction itself until the rate of combustion is too slow to maintain the chain reaction.
In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire's own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react.
Flame
A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of "fire". This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single-wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in the mid-20th century as a high-energy fuel for jet and rocket engines, emits an intense green flame, leading to its informal nickname of "Green Dragon".
The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra.
The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of a general flame, as in a candle in normal gravity conditions, making it yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in microgravity, and tends to smother the flame). There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidized after they are produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity when compared to normal gravity conditions. These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency.
Typical adiabatic temperatures
The adiabatic flame temperature of a given fuel and oxidizer pair is the temperature that the combustion gases would reach if no heat were lost to the surroundings.
Oxy–dicyanoacetylene
Oxy–acetylene
Oxyhydrogen
Air–acetylene
Blowtorch (air–MAPP gas)
Bunsen burner (air–natural gas)
Candle (air–paraffin)
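The adiabatic temperatures of pairs like those listed above follow from a simple energy balance. As a rough sketch (assuming complete combustion, constant mean heat capacities, and no dissociation, which are simplifications not stated above):

    T_\mathrm{ad} \approx T_0 + \frac{\Delta H_c}{\sum_i n_i \bar{c}_{p,i}},

where T_0 is the initial temperature, ΔH_c is the heat released by combustion, n_i are the moles of each product species, and c̄_p,i are their mean molar heat capacities. This also suggests why pure-oxygen flames burn hotter than air flames: removing the inert nitrogen shrinks the denominator, so the same heat release produces a higher temperature.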
Fire science
Fire science is a branch of physical science which includes fire behavior, dynamics, and combustion. Applications of fire science include fire protection, fire investigation, and wildfire management.
Fire ecology
Every natural ecosystem on land has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession. Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape.
Prevention and protection systems
Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.
Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus and water supply resources such as water mains and fire hydrants, or they might use Class A or Class B foam, depending on what is feeding the fire.
Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires. Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions.
Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers. To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested.
Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact.
See also
Aodh (given name)
Bonfire
The Chemical History of a Candle
Colored fire
Control of fire by early humans
Deflagration
Fire (classical element)
Fire investigation
Fire lookout
Fire lookout tower
Fire making
Fire pit
Fire safety
Fire triangle
Fire whirl
Fire worship
Flame test
Life Safety Code
List of fires
List of light sources
Phlogiston theory
Piano burning
Prometheus, the Greek mythological figure who gave mankind fire
Pyrokinesis
Pyrolysis
Pyromania
Self-immolation
Further reading
Pyne, Stephen J. Fire: A Brief History (University of Washington Press, 2001).
Pyne, Stephen J. World Fire: The Culture of Fire on Earth (1995).
Pyne, Stephen J. Tending Fire: Coping with America's Wildland Fires (2004).
Pyne, Stephen J. Awful Splendour: A Fire History of Canada (2007).
Pyne, Stephen J. Burning Bush: A Fire History of Australia (1991).
Pyne, Stephen J. Between Two Fires: A Fire History of Contemporary America (2015).
Pyne, Stephen J. California: A Fire Survey (2016).
Safford, Hugh D., et al. "Fire Ecology of the North American Mediterranean-Climate Zone." In Fire Ecology and Management: Past, Present, and Future of US Forested Ecosystems (2021): 337–392.
External links
How Fire Works at HowStuffWorks
What exactly is fire? from The Straight Dope
On Fire, an Adobe Flash–based science tutorial from the PBS series NOVA
"20 Things You Didn't Know About... Fire" from Discover magazine
Fresnel equations

The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel (1788–1827), who was the first to understand that light is a transverse wave, at a time when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface.
Overview
When light strikes the interface between a medium with refractive index n_1 and a second medium with refractive index n_2, both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface.
The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations.
S and P polarizations
There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations.
The s polarization refers to polarization of a wave's electric field normal to the plane of incidence (the z direction in the derivation below); then the magnetic field is in the plane of incidence. The p polarization refers to polarization of the electric field in the plane of incidence (the xy plane in the derivation below); then the magnetic field is normal to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence).
Although the reflection and transmission are dependent on polarization, at normal incidence (θ_i = 0) there is no distinction between them, so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true).
Configuration
Consider an incident plane wave that strikes the interface between two media of refractive indices n_1 and n_2. Part of the wave is reflected and part refracted. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as θ_i, θ_r and θ_t, respectively.
The relationship between these angles is given by the law of reflection,

    \theta_i = \theta_r,

and Snell's law,

    n_1 \sin\theta_i = n_2 \sin\theta_t.
The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine power coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude.
Power (intensity) reflection and transmission coefficients
We call the fraction of the incident power that is reflected from the interface the reflectance (or reflectivity, or power reflection coefficient) R, and the fraction that is refracted into the second medium is called the transmittance (or transmissivity, or power transmission coefficient) T. Note that these are what would be measured right at each side of an interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection.
The reflectance for s-polarized light is

    R_s = \left| \frac{Z_2\cos\theta_i - Z_1\cos\theta_t}{Z_2\cos\theta_i + Z_1\cos\theta_t} \right|^2,

while the reflectance for p-polarized light is

    R_p = \left| \frac{Z_2\cos\theta_t - Z_1\cos\theta_i}{Z_2\cos\theta_t + Z_1\cos\theta_i} \right|^2,

where Z_1 and Z_2 are the wave impedances of media 1 and 2, respectively.
We assume that the media are non-magnetic (i.e., μ_1 = μ_2 = μ_0), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices n_1 and n_2:

    Z_i = \frac{Z_0}{n_i},

where Z_0 is the impedance of free space and i = 1, 2. Making this substitution, we obtain equations using the refractive indices:

    R_s = \left| \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t} \right|^2 = \left| \frac{n_1\cos\theta_i - n_2\sqrt{1 - \left(\tfrac{n_1}{n_2}\sin\theta_i\right)^2}}{n_1\cos\theta_i + n_2\sqrt{1 - \left(\tfrac{n_1}{n_2}\sin\theta_i\right)^2}} \right|^2,

    R_p = \left| \frac{n_1\cos\theta_t - n_2\cos\theta_i}{n_1\cos\theta_t + n_2\cos\theta_i} \right|^2 = \left| \frac{n_1\sqrt{1 - \left(\tfrac{n_1}{n_2}\sin\theta_i\right)^2} - n_2\cos\theta_i}{n_1\sqrt{1 - \left(\tfrac{n_1}{n_2}\sin\theta_i\right)^2} + n_2\cos\theta_i} \right|^2.
The second form of each equation is derived from the first by eliminating θ_t using Snell's law and trigonometric identities.
As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:

    T_s = 1 - R_s

and

    T_p = 1 - R_p.
Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances in the direction of an incident or reflected wave (given by the magnitude of a wave's Poynting vector) multiplied by cos θ for a wave at an angle θ to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since θ_i = θ_r, so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface.
Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the s and p polarizations, so that the effective reflectivity of the material is just the average of the two reflectivities:

    R_\mathrm{eff} = \tfrac{1}{2}(R_s + R_p).
For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used.
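As an illustration, the following minimal sketch (hypothetical code, not from this article; the function names are our own) compares the exact unpolarized reflectance with Schlick's approximation, R(θ) ≈ R_0 + (1 − R_0)(1 − cos θ)^5, for an air–glass interface:

    import numpy as np

    def fresnel_reflectances(n1, n2, theta_i):
        """Exact power reflectances R_s, R_p for a single planar interface."""
        sin_t = n1 / n2 * np.sin(theta_i)      # Snell's law
        cos_t = np.sqrt(1 - sin_t**2 + 0j)     # complex sqrt also covers total internal reflection
        cos_i = np.cos(theta_i)
        r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
        r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
        return abs(r_s) ** 2, abs(r_p) ** 2

    def schlick(n1, n2, theta_i):
        """Schlick's approximation to the unpolarized reflectance."""
        r0 = ((n1 - n2) / (n1 + n2)) ** 2      # normal-incidence reflectance
        return r0 + (1 - r0) * (1 - np.cos(theta_i)) ** 5

    theta = np.radians(45.0)
    r_s, r_p = fresnel_reflectances(1.0, 1.5, theta)
    print((r_s + r_p) / 2)            # exact unpolarized reflectance: ~0.050
    print(schlick(1.0, 1.5, theta))   # Schlick's estimate: ~0.042

At 45° the exact average is about 0.050 while Schlick's formula gives about 0.042, the sort of discrepancy that is acceptable in low-precision graphics work.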
Special cases
Normal incidence
For the case of normal incidence, θ_i = θ_t = 0, and there is no distinction between s and p polarization. Thus, the reflectance simplifies to

    R_0 = \left| \frac{n_1 - n_2}{n_1 + n_2} \right|^2.
For common glass (n_2 ≈ 1.5) surrounded by air (n_1 = 1), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane.
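Spelling out the arithmetic behind that figure:

    R_0 = \left(\frac{1.5 - 1}{1.5 + 1}\right)^2 = \left(\frac{0.5}{2.5}\right)^2 = 0.04,

i.e. 4% per surface, and roughly 8% for the two surfaces of a pane (neglecting multiple internal reflections and interference).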
Brewster's angle
At a dielectric interface from n_1 to n_2, there is a particular angle of incidence at which R_p goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56° for n_1 = 1 and n_2 = 1.5 (typical glass).
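For non-magnetic media this angle satisfies tan θ_B = n_2/n_1 (derived near the end of this article), so for the quoted indices

    \theta_\mathrm{B} = \arctan\frac{1.5}{1} \approx 56.3°,

consistent with the figure stated above.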
Total internal reflection
When light travelling in a denser medium strikes the surface of a less dense medium (i.e., n_1 > n_2), beyond a particular incidence angle known as the critical angle, all light is reflected and R_s = R_p = 1. This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact sin θ ≤ 1 for all real θ). For glass with n = 1.5 surrounded by air, the critical angle is approximately 42°.
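The critical angle follows from Snell's law by setting the sine of the refraction angle to unity, giving sin θ_c = n_2/n_1. For the quoted values,

    \theta_c = \arcsin\frac{1}{1.5} \approx 41.8°,

consistent with the approximately 42° stated above.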
45° incidence
Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence (θ_i = 45°), it follows algebraically from the above equations that R_p equals the square of R_s:

    R_p = R_s^2.
This can be used to either verify the consistency of the measurements of R_s and R_p, or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required.
Measurements of R_s and R_p at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of R_s and R_p, and then averaging these two averages again arithmetically, gives a value for R_0 with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of R_s and R_p on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam.
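Written out as a formula (our formalization of the prose above, not an expression quoted from a source), the estimate is

    R_0 \approx \frac{1}{2}\left[\frac{R_s(45°) + R_p(45°)}{2} + \sqrt{R_s(45°)\, R_p(45°)}\right].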
Complex amplitude reflection and transmission coefficients
The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case r and t (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability μ of both media to be equal to the permeability of free space μ_0, as is essentially true of all dielectrics at optical frequencies.
In the following equations, we adopt the following conventions. For s polarization, the reflection coefficient r_s is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for p polarization r_p is the ratio of the waves' complex magnetic field amplitudes (or equivalently, the negative of the ratio of their electric field amplitudes). The transmission coefficient t is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients r and t are generally different between the s and p polarizations, and even at normal incidence (where the designations s and p do not even apply!) the sign of r is reversed depending on whether the wave is considered to be s or p polarized, an artifact of the adopted sign convention.
The equations consider a plane wave incident on a plane interface at angle of incidence θ_i, a wave reflected at angle θ_r = θ_i, and a wave transmitted at angle θ_t. In the case of an interface into an absorbing material (where the refractive index is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle.
Using this convention,

    r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad t_s = \frac{2 n_1\cos\theta_i}{n_1\cos\theta_i + n_2\cos\theta_t},
    r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t}, \qquad t_p = \frac{2 n_1\cos\theta_i}{n_2\cos\theta_i + n_1\cos\theta_t}.

One can see that t_s = r_s + 1 and t_p = (n_1/n_2)(r_p + 1). One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional.
Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient R is just the squared magnitude of r:

    R = |r|^2.
On the other hand, calculation of the power transmission coefficient T is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude divided by the characteristic impedance of the medium (or by the square of the magnetic field multiplied by the characteristic impedance). This results in:

    T = \frac{n_2\cos\theta_t}{n_1\cos\theta_i}\,|t|^2,

using the above definition of t. The introduced factor of n_2/n_1 is the reciprocal of the ratio of the media's wave impedances. The cos θ factors adjust the waves' powers so they are reckoned in the direction normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to T = 1.
In the case of total internal reflection, where the power transmission T is zero, t nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus carrying no power away from the interface) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of r_s and r_p (whose magnitudes are unity in this case). These phase shifts are different for s and p waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations.
Alternative forms
In the above formula for r_s, if we put n_2 = n_1 sin θ_i / sin θ_t (Snell's law) and multiply the numerator and denominator by (1/n_1) sin θ_t, we obtain

    r_s = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)}.

If we do likewise with the formula for r_p, the result is easily shown to be equivalent to

    r_p = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)}.

These formulas are known respectively as Fresnel's sine law and Fresnel's tangent law. Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as θ_i → 0.
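To see why the 0/0 form is harmless, note that for small angles Snell's law gives θ_t ≈ (n_1/n_2) θ_i, and sin x ≈ x and tan x ≈ x, so

    r_s = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)} \to -\frac{\theta_i - \theta_t}{\theta_i + \theta_t} = \frac{n_1 - n_2}{n_1 + n_2},

which is the correct normal-incidence value; the tangent law behaves the same way apart from sign.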
Multiple surfaces
When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is a few micrometers; it can be much larger for light from a laser.
An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference.
The transfer-matrix method, or the recursive Rouard method can be used to solve multiple-surface problems.
History
In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like one of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term polarization to describe this behavior. In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the reason for that dependence was such a deep mystery that in late 1817 Thomas Young was moved to write that the problem remained beyond the reach of existing theory.
In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were purely transverse.
Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875.
In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ( and ) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved
calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions),
subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and
checking that the final polarization was circular.
Thus he finally had a quantitative theory for what we now call the Fresnel rhomb — a device that he had been using in experiments, in one form or another, since 1817 (see Fresnel rhomb §History).
The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index.
Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization, and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance.
Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see Augustin-Jean Fresnel).
Derivation
Here we systematically derive the above relations from electromagnetic premises.
Material parameters
In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors E, B, D, H are related by

    \mathbf{D} = \epsilon \mathbf{E}
    \mathbf{B} = \mu \mathbf{H},

where ϵ and μ are scalars, known respectively as the (electric) permittivity and the (magnetic) permeability of the medium. For vacuum, these have the values ϵ_0 and μ_0, respectively. Hence we define the relative permittivity (or dielectric constant) ϵ_rel = ϵ/ϵ_0, and the relative permeability μ_rel = μ/μ_0.
In optics it is common to assume that the medium is non-magnetic, so that μ_rel = 1. For ferromagnetic materials at radio/microwave frequencies, larger values of μ_rel must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), μ_rel is indeed very close to 1; that is, μ ≈ μ_0.
In optics, one usually knows the refractive index n of the medium, which is the ratio of the speed of light in vacuum (c) to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance Z, which is the ratio of the amplitude of E to the amplitude of H. It is therefore desirable to express n and Z in terms of ϵ and μ, and thence to relate Z to n. The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave admittance Y, which is the reciprocal of the wave impedance Z.
In the case of uniform plane sinusoidal waves, the wave impedance or admittance is known as the intrinsic impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived.
Electromagnetic plane waves
In a uniform plane sinusoidal electromagnetic wave, the electric field has the form

    \mathbf{E}_k\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},

where E_k is the (constant) complex amplitude vector, i is the imaginary unit, k is the wave vector (whose magnitude k is the angular wavenumber), r is the position vector, ω is the angular frequency, t is time, and it is understood that the real part of the expression is the physical field. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k is normal to the wavefronts.
To advance the phase by the angle ϕ, we replace ωt by ωt + ϕ (that is, we replace −ωt by −ωt − ϕ), with the result that the (complex) field is multiplied by e^{−iϕ}. So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when the field is factored as E_k e^{ik·r} e^{−iωt}, where the last factor contains the time-dependence. That factor also implies that differentiation with respect to time corresponds to multiplication by −iω.
If ℓ is the component of r in the direction of k, the field can be written E_k e^{i(kℓ − ωt)}. If the argument of the exponent is to be constant, ℓ must increase at the velocity ω/k, known as the phase velocity v_p. This in turn is equal to c/n. Solving for k gives

    k = \frac{n\omega}{c}.
As usual, we drop the time-dependent factor e^{−iωt}, which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent phasor

    \mathbf{E} = \mathbf{E}_k\, e^{i\mathbf{k}\cdot\mathbf{r}}.
For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to

    \omega \mathbf{B} = \mathbf{k} \times \mathbf{E}
    \omega \mathbf{D} = -\mathbf{k} \times \mathbf{H}.

Putting B = μH and D = ϵE, as above, we can eliminate B and D to obtain equations in only E and H:

    \omega \mu \mathbf{H} = \mathbf{k} \times \mathbf{E}
    \omega \epsilon \mathbf{E} = -\mathbf{k} \times \mathbf{H}.
If the material parameters ϵ and μ are real (as in a lossless dielectric), these equations show that k, E, H form a right-handed orthogonal triad, so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting k = nω/c, we obtain

    nE = c\mu H
    nH = c\epsilon E,

where E and H are the magnitudes of E and H. Multiplying the last two equations gives

    n = c\sqrt{\mu\epsilon}.

Dividing (or cross-multiplying) the same two equations gives H = YE, where

    Y = \sqrt{\epsilon/\mu}.
This is the intrinsic admittance.
From k = nω/c we obtain the phase velocity v_p = ω/k = c/n, and combining this with n = c√(μϵ) gives v_p = 1/√(μϵ). For vacuum this reduces to c = 1/√(μ_0 ϵ_0). Dividing the second result by the first gives

    n = \sqrt{\mu_\mathrm{rel}\,\epsilon_\mathrm{rel}}.

For a non-magnetic medium (the usual case), this becomes n = √ϵ_rel.

Taking the reciprocal of Y = √(ϵ/μ), we find that the intrinsic impedance is Z = √(μ/ϵ). In vacuum this takes the value

    Z_0 = \sqrt{\mu_0/\epsilon_0} \approx 377\ \Omega,

known as the impedance of free space. By division, Z/Z_0 = √(μ_rel/ϵ_rel) = μ_rel/n. For a non-magnetic medium, this becomes

    Z = \frac{Z_0}{n}.
Wave vectors
In Cartesian coordinates (x, y, z), let the region y < 0 have refractive index n_1, intrinsic admittance Y_1, etc., and let the region y > 0 have refractive index n_2, intrinsic admittance Y_2, etc. Then the xz plane is the interface, and the y axis is normal to the interface. Let i and j (in bold roman type) be the unit vectors in the x and y directions, respectively. Let the plane of incidence be the xy plane (the plane of the page), with the angle of incidence θ_i measured from j towards i. Let the angle of refraction, measured in the same sense, be θ_t, where the subscript t stands for transmitted (reserving r for reflected).
In the absence of Doppler shifts, ω does not change on reflection or refraction. Hence, by k = nω/c, the magnitude of the wave vector is proportional to the refractive index.
So, for a given ω, if we redefine k as the magnitude of the wave vector in the reference medium (for which n = 1), then the wave vector has magnitude n_1 k in the first medium (the region y < 0) and magnitude n_2 k in the second medium. From the magnitudes and the geometry, we find that the wave vectors are

    \mathbf{k}_i = n_1 k\,(\mathbf{i}\sin\theta_i + \mathbf{j}\cos\theta_i)
    \mathbf{k}_r = n_1 k\,(\mathbf{i}\sin\theta_i - \mathbf{j}\cos\theta_i)
    \mathbf{k}_t = n_2 k\,(\mathbf{i}\sin\theta_t + \mathbf{j}\cos\theta_t) = k\,(\mathbf{i}\, n_1\sin\theta_i + \mathbf{j}\, n_2\cos\theta_t),

where the last step uses Snell's law. The corresponding dot products in the phasor form are

    \mathbf{k}_i \cdot \mathbf{r} = n_1 k\,(x\sin\theta_i + y\cos\theta_i)
    \mathbf{k}_r \cdot \mathbf{r} = n_1 k\,(x\sin\theta_i - y\cos\theta_i)
    \mathbf{k}_t \cdot \mathbf{r} = k\,(n_1 x\sin\theta_i + n_2 y\cos\theta_t).
s components
For the s polarization, the E field is parallel to the z axis and may therefore be described by its component in the z direction. Let the reflection and transmission coefficients be r_s and t_s, respectively. Then, if the incident E field is taken to have unit amplitude, the phasor form of its z-component is

    E_i = e^{i n_1 k (x\sin\theta_i + y\cos\theta_i)},

and the reflected and transmitted fields, in the same form, are

    E_r = r_s\, e^{i n_1 k (x\sin\theta_i - y\cos\theta_i)}
    E_t = t_s\, e^{i k (n_1 x\sin\theta_i + n_2 y\cos\theta_t)}.
Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the transverse field, meaning (in this context) the field normal to the plane of incidence. For the s polarization, that means the E field. If the incident, reflected, and transmitted E fields (in the above equations) are in the z-direction ("out of the page"), then the respective H fields lie in the plane of incidence, at right angles to the respective wave vectors, since k, E, H form a right-handed orthogonal triad. The H fields may therefore be described by their components in those transverse directions, denoted by H_i, H_r, H_t. Then, since H = YE,

    H_i = Y_1\, e^{i n_1 k (x\sin\theta_i + y\cos\theta_i)}
    H_r = Y_1 r_s\, e^{i n_1 k (x\sin\theta_i - y\cos\theta_i)}
    H_t = Y_2 t_s\, e^{i k (n_1 x\sin\theta_i + n_2 y\cos\theta_t)}.
At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the E and H fields must be continuous; that is, the z components of E and the x components of H must match on the two sides of the interface (at y = 0). When we substitute from the expressions for the fields above, the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations

    1 + r_s = t_s
    Y_1\cos\theta_i\,(1 - r_s) = Y_2\cos\theta_t\, t_s,
which are easily solved for r_s and t_s, yielding

    r_s = \frac{Y_1\cos\theta_i - Y_2\cos\theta_t}{Y_1\cos\theta_i + Y_2\cos\theta_t}

and

    t_s = \frac{2 Y_1\cos\theta_i}{Y_1\cos\theta_i + Y_2\cos\theta_t}.
At normal incidence (θ_i = θ_t = 0), indicated by an additional subscript 0, these results become

    r_{s0} = \frac{Y_1 - Y_2}{Y_1 + Y_2}

and

    t_{s0} = \frac{2 Y_1}{Y_1 + Y_2}.
At grazing incidence (θ_i → 90°), we have cos θ_i → 0, hence r_s → −1 and t_s → 0.
p components
For the p polarization, the incident, reflected, and transmitted E fields lie in the plane of incidence, at right angles to the respective wave vectors, and may therefore be described by their components in those transverse directions. Let those components be E_i, E_r, E_t (redefining the symbols for the new context). Let the reflection and transmission coefficients be r_p and t_p. Then, if the incident E field is taken to have unit amplitude, we have

    E_i = e^{i n_1 k (x\sin\theta_i + y\cos\theta_i)}
    E_r = r_p\, e^{i n_1 k (x\sin\theta_i - y\cos\theta_i)}
    E_t = t_p\, e^{i k (n_1 x\sin\theta_i + n_2 y\cos\theta_t)}.
If the E fields are in those transverse directions, then, in order for k, E, H to form a right-handed orthogonal triad, the respective H fields must be in the −z-direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field (the H field, in the case of the p polarization). This also reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission.
So, for the incident, reflected, and transmitted fields, let the respective components in the −z-direction be H_i, H_r, H_t. Then, since H = YE,

    H_i = Y_1\, e^{i n_1 k (x\sin\theta_i + y\cos\theta_i)}
    H_r = Y_1 r_p\, e^{i n_1 k (x\sin\theta_i - y\cos\theta_i)}
    H_t = Y_2 t_p\, e^{i k (n_1 x\sin\theta_i + n_2 y\cos\theta_t)}.
At the interface, the tangential components of the E and H fields must be continuous; that is, the x components of E and the z components of H must match at y = 0. When we substitute from the expressions above, the exponential factors again cancel out, so that the interface conditions reduce to

    \cos\theta_i\,(1 - r_p) = \cos\theta_t\, t_p
    Y_1 (1 + r_p) = Y_2\, t_p.
Solving for r_p and t_p, we find

    r_p = \frac{Y_2\cos\theta_i - Y_1\cos\theta_t}{Y_2\cos\theta_i + Y_1\cos\theta_t}

and

    t_p = \frac{2 Y_1\cos\theta_i}{Y_2\cos\theta_i + Y_1\cos\theta_t}.
At normal incidence (θ_i = θ_t = 0), indicated by an additional subscript 0, these results become

    r_{p0} = \frac{Y_2 - Y_1}{Y_2 + Y_1}

and

    t_{p0} = \frac{2 Y_1}{Y_2 + Y_1}.
At grazing incidence (θ_i → 90°), we again have cos θ_i → 0, hence r_p → −1 and t_p → 0.
Comparing the normal-incidence results for the two polarizations, we see that, under the adopted sign convention, the transmission coefficients are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at grazing incidence.
Power ratios (reflectivity and transmissivity)
The Poynting vector for a wave is a vector whose component in any direction is the irradiance (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is ½ Re{E × H*}, where E and H are due only to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), E and H are in phase, and at right angles to each other and to the wave vector k; so, for the s polarization, using the z components of E and the transverse components of H (or, for the p polarization, the transverse components of E and the z components of H), the irradiance in the direction of k is given simply by EH/2, which is E²/(2Z) in a medium of intrinsic impedance Z = 1/Y. To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the y component of the Poynting vector or, equivalently, simply multiply EH/2 by the proper geometric factor, obtaining (E²/2Z) cos θ.
From the above solutions for r_s and r_p, taking squared magnitudes, we find that the reflectivity (ratio of reflected power to incident power) is

    R_s = |r_s|^2

for the s polarization, and

    R_p = |r_p|^2

for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cos θ, the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power transmission (below), these factors must be taken into account.
The simplest way to obtain the power transmission coefficient (transmissivity, the ratio of transmitted power to incident power in the direction normal to the interface, i.e. the y direction) is to use R + T = 1 (conservation of energy). In this way we find

    T_s = 1 - R_s

for the s polarization, and

    T_p = 1 - R_p

for the p polarization.
In the case of an interface between two lossless media (for which ϵ and μ are real and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier. But, for given amplitude (as noted above), the component of the Poynting vector in the y direction is proportional to the geometric factor cos θ and inversely proportional to the wave impedance Z. Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient:

    T_s = \frac{Y_2\cos\theta_t}{Y_1\cos\theta_i}\,|t_s|^2

for the s polarization, and

    T_p = \frac{Y_2\cos\theta_t}{Y_1\cos\theta_i}\,|t_p|^2

for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, T = 0).
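As a numeric check (an illustrative example, not from the source): for air to glass (n_1 = 1, n_2 = 1.5, so Y_2/Y_1 = 1.5) at θ_i = 45°, Snell's law gives cos θ_t ≈ 0.882, whence r_s ≈ −0.303 and t_s = 1 + r_s ≈ 0.697. Then

    T_s = \frac{Y_2\cos\theta_t}{Y_1\cos\theta_i}\,|t_s|^2 \approx 1.871 \times 0.485 \approx 0.908,

which agrees with 1 − R_s ≈ 1 − 0.092 = 0.908, as conservation of energy requires.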
For unpolarized light:

    R = \tfrac{1}{2}(R_s + R_p), \qquad T = \tfrac{1}{2}(T_s + T_p),

where R + T = 1.
Equal refractive indices
From the relations n = c√(μϵ) and Y = √(ϵ/μ), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have θ_t = θ_i (that is, the transmitted ray is undeviated), so that the cosines in the above reflection and transmission formulas cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering.
Non-magnetic media
Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing Y = √(ϵ/μ) by n = c√(μϵ) yields

    Y = \frac{n}{c\mu}.

For non-magnetic media we can substitute the vacuum permeability μ_0 for μ, so that

    Y_1 = \frac{n_1}{c\mu_0}, \qquad Y_2 = \frac{n_2}{c\mu_0};

that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in the above equations for the amplitude and power coefficients, the factor cμ_0 cancels out. For the amplitude coefficients we obtain:

    r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad t_s = \frac{2 n_1\cos\theta_i}{n_1\cos\theta_i + n_2\cos\theta_t},
    r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t}, \qquad t_p = \frac{2 n_1\cos\theta_i}{n_2\cos\theta_i + n_1\cos\theta_t}.
For the case of normal incidence these reduce to:

    r_{s0} = \frac{n_1 - n_2}{n_1 + n_2}, \qquad t_{s0} = \frac{2 n_1}{n_1 + n_2},
    r_{p0} = \frac{n_2 - n_1}{n_1 + n_2}, \qquad t_{p0} = \frac{2 n_1}{n_1 + n_2}.
The power reflection coefficients become:

    R_s = \left| \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t} \right|^2, \qquad R_p = \left| \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t} \right|^2.
The power transmissions can then be found from T = 1 − R.
Brewster's angle
For equal permeabilities (e.g., non-magnetic media), if θ_i and θ_t are complementary, we can substitute sin θ_t for cos θ_i, and sin θ_i for cos θ_t, so that the numerator in the above equation for r_p becomes n_2 sin θ_t − n_1 sin θ_i, which is zero (by Snell's law). Hence r_p = 0 and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting cos θ_i for sin θ_t in Snell's law, we readily obtain

    \tan\theta_\mathrm{B} = \frac{n_2}{n_1}
for Brewster's angle.
Equal permittivities
Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From n = c√(μϵ) and Y = √(ϵ/μ), if ϵ is fixed instead of μ, then Y becomes inversely proportional to n, with the result that the subscripts 1 and 2 in the above amplitude-coefficient equations are interchanged (due to the additional step of multiplying the numerator and denominator by n_1 n_2). Hence, in the expressions for r_s and r_p in terms of refractive indices, the indices will be interchanged, so that Brewster's angle will give tan θ_B = n_1/n_2 instead of n_2/n_1, and any beam reflected at that angle will be p-polarized instead of s-polarized. Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization.
This switch of polarizations has an analog in the old mechanical theory of light waves (see §History, above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest.
See also
Jones calculus
Polarization mixing
Index-matching material
Field and power quantities
Fresnel rhomb, Fresnel's apparatus to produce circularly polarised light
Reflection loss
Specular reflection
Schlick's approximation
Snell's window
X-ray reflectivity
Plane of incidence
Reflections of signals on conducting lines
External links
Fresnel Equations – Wolfram.
Fresnel equations calculator
FreeSnell – Free software computes the optical properties of multilayer materials.
Thinfilm – Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta).
Simple web interface for calculating single-interface reflection and refraction angles and strengths.
Reflection and transmittance for two dielectrics – Mathematica interactive webpage that shows the relations between index of refraction and reflection.
A self-contained first-principles derivation of the transmission and reflection probabilities from a multilayer with complex indices of refraction.
Fortran

Fortran (formerly FORTRAN) is a third-generation, compiled, imperative programming language that is especially suited to numeric computation and scientific computing.
Fortran was originally developed by IBM. It first compiled correctly in 1958. Fortran computer programs have been written to support scientific and engineering applications, such as numerical weather prediction, finite element analysis, computational fluid dynamics, plasma physics, geophysics, computational physics, crystallography and computational chemistry. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers.
Fortran has evolved through numerous versions and dialects. In 1966, the American National Standards Institute (ANSI) developed a standard for Fortran to limit proliferation of compilers using slightly different syntax. Successive versions have added support for a character data type (Fortran 77), structured programming, array programming, modular programming, generic programming (Fortran 90), parallel computing (Fortran 95), object-oriented programming (Fortran 2003), and concurrent programming (Fortran 2008).
Since April 2024, Fortran has ranked among the top ten languages in the TIOBE index, a measure of the popularity of programming languages.
Naming
The first manual for FORTRAN described it as a Formula Translating System and printed the name in small caps. Other sources suggest the name stands for Formula Translator or Formula Translation.
Early IBM computers did not support lowercase letters, and the names of versions of the language through FORTRAN 77 were usually spelled in all-uppercase. FORTRAN 77 was the last version in which the Fortran character set included only uppercase letters.
The official language standards for Fortran have referred to the language as "Fortran" with initial caps since Fortran 90.
Origins
In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Harold Stern, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952.
A draft specification for The IBM Mathematical Formula Translating System was completed by November 1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. Fortran produced efficient enough code for assembly language programmers to accept a high-level programming language replacement.
John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."
The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed.
FORTRAN was provided for the IBM 1401 computer by an innovative 63-phase compiler that ran entirely in its core memory of only 8000 (six-bit) characters. The compiler could be run from tape, or from a 2200-card deck; it used no further tape or disk storage. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines.
This article was reprinted, edited, in both editions of Anatomy of a Compiler and in the IBM manual "Fortran Specifications and Operating Procedures, IBM 1401". The executable form was not entirely machine language; rather, floating-point arithmetic, subscripting, input/output, and function references were interpreted, preceding UCSD Pascal P-code by two decades. GOTRAN, a simplified, interpreted version of FORTRAN I (with only 12 statements, not 32) for "load and go" operation, was available (at least for the early IBM 1620 computer). Modern Fortran, and almost all later versions, are fully compiled, as is done for other high-performance languages.
The development of Fortran paralleled the early evolution of compiler technology, and many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for Fortran programs.
FORTRAN
The initial release of FORTRAN for the IBM 704 contained 32 statements, including:
DIMENSION and EQUIVALENCE statements
Assignment statements
Three-way arithmetic IF statement, which passed control to one of three locations in the program depending on whether the result of the arithmetic expression was negative, zero, or positive
Control statements for checking exceptions (IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK); and control statements for manipulating sense switches and sense lights (SENSE LIGHT, IF (SENSE LIGHT), and IF (SENSE SWITCH))
GO TO, computed GO TO, ASSIGN, and assigned GO TO
DO loops
Formatted I/O: FORMAT, READ, READ INPUT TAPE, WRITE, WRITE OUTPUT TAPE, PRINT, and PUNCH
Unformatted I/O: READ TAPE, READ DRUM, WRITE TAPE, and WRITE DRUM
Other I/O: END FILE, REWIND, and BACKSPACE
PAUSE, STOP, and CONTINUE
FREQUENCY statement (for providing optimization hints to the compiler).
The arithmetic IF statement was reminiscent of (but not readily implementable by) a three-way comparison instruction (CAS—Compare Accumulator with Storage) available on the 704. The statement provided the only way to compare numbers—by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by "logical" facilities introduced in FORTRAN IV.
The FREQUENCY statement was used originally (and optionally) to give branch probabilities for the three branch cases of the arithmetic IF statement. It could also be used to suggest how many iterations a DO loop might run. The first FORTRAN compiler used this weighting to perform at compile time a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory—a very sophisticated optimization for its time. The Monte Carlo technique is documented in Backus et al.'s paper on this original implementation, The FORTRAN Automatic Coding System:
The fundamental unit of program is the basic block; a basic block is a stretch of program which has one entry point and one exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by running the program once in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided.
The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code on its console. That code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem. Later, an error-handling subroutine to handle user errors such as division by zero, developed by NASA, was incorporated, informing users of which line of code contained the error.
Fixed layout and punched cards
Before the development of disk files, text editors, and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards, one line to a card. The resulting deck of cards would be fed into a card reader to be compiled. Punched card codes included no lowercase letters and few special characters, and special versions of the IBM 026 keypunch were offered that would correctly print the re-purposed special characters used in FORTRAN.
Reflecting punched card input practice, Fortran programs were originally written in a fixed-column format, with the first 72 columns read into twelve 36-bit words.
A letter "C" in column 1 caused the entire card to be treated as a comment and ignored by the compiler. Otherwise, the columns of the card were divided into four fields:
1 to 5 were the label field: a sequence of digits here was taken as a label for use in DO or control statements such as GO TO and IF, or to identify a FORMAT statement referred to in a WRITE or READ statement. Leading zeros were ignored, and 0 was not a valid label number.
6 was a continuation field: a character other than a blank or a zero here caused the card to be taken as a continuation of the statement on the prior card. The continuation cards were usually numbered 1, 2, etc. and the starting card might therefore have zero in its continuation column—which is not a continuation of its preceding card.
7 to 72 served as the statement field.
73 to 80 were ignored (the IBM 704's card reader only used 72 columns).
Columns 73 to 80 could therefore be used for identification information, such as punching a sequence number or text, which could be used to re-order cards if a stack of cards was dropped; though in practice this was reserved for stable, production programs. An IBM 519 could be used to copy a program deck and add sequence numbers. Some early compilers, e.g., the IBM 650's, had additional restrictions due to limitations on their card readers. Keypunches could be programmed to tab to column 7 and skip out after column 72. Later compilers relaxed most fixed-format restrictions, and the requirement was eliminated in the Fortran 90 standard.
Within the statement field, whitespace characters (blanks) were ignored outside a text literal. This allowed omitting spaces between tokens for brevity or including spaces within identifiers for clarity. For example, AVG OF X was a valid identifier, equivalent to AVGOFX, and 101010DO101I=1,101 was a valid statement, equivalent to 10101 DO 101 I = 1, 101 because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period.
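To make the layout concrete, here is the example from the preceding paragraph laid out as it would appear on punched cards (labels in columns 1–5, statements from column 7):
C     BLANKS ARE INSIGNIFICANT, SO THE NEXT LINE IS A DO LOOP HEADER
      DO 101 I = 1, 101
  101 CONTINUE
C     WITH A PERIOD INSTEAD OF A COMMA, THE SAME CHARACTERS BECOME
C     AN ASSIGNMENT OF 1.101 TO A VARIABLE NAMED DO101I
      DO 101 I = 1. 101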
Hollerith strings, originally allowed only in FORMAT and DATA statements, were prefixed by a character count and the letter H (e.g., 12HHELLO THERE! for the twelve-character string HELLO THERE!), allowing blanks to be retained within the character string. Miscounts were a problem.
Evolution
FORTRAN II
IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced:
SUBROUTINE, FUNCTION, and END
CALL and RETURN
COMMON
Over the next few years, FORTRAN II added support for the DOUBLE PRECISION and COMPLEX data types.
Early FORTRAN compilers did not support recursion in subroutines. Early computer architectures had no concept of a stack, and when they did directly support subroutine calls, the return location was often stored in one fixed location adjacent to the subroutine code (e.g. the IBM 1130) or in a specific machine register (IBM 360 et seq.). This allows recursion only if a stack is maintained by software, with the return address stored on the stack before the call is made and restored after the call returns. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and the Burroughs mainframes, designed with recursion built-in, did so by default. It became a standard feature in Fortran 90 via the new keyword RECURSIVE.
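As a sketch of that Fortran 90 facility (the names are illustrative, not from any particular code base), a function declared with the RECURSIVE keyword may call itself:
! Requires Fortran 90 or later; a recursive function must name
! its result so the function name can denote the recursive call.
recursive function factorial(n) result(res)
  implicit none
  integer, intent(in) :: n
  integer :: res
  if (n <= 1) then
     res = 1                      ! base case
  else
     res = n * factorial(n - 1)   ! each call gets its own stack frame
  end if
end function factorial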
Simple FORTRAN II program
This program, for Heron's formula, reads data on a tape reel containing three 5-digit integers A, B, and C as input. There are no "type" declarations available: variables whose name starts with I, J, K, L, M, or N are "fixed-point" (i.e. integers), otherwise floating-point. Since integers are to be processed in this example, the names of the variables start with the letter "I". The name of a variable must start with a letter and can continue with both letters and digits, up to a limit of six characters in FORTRAN II. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number occupying ten spaces along the line of output and showing 2 digits after the decimal point, the .2 in F10.2 of the FORMAT statement with label 601.
C     AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C     INPUT - TAPE READER UNIT 5, INTEGER INPUT
C     OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C     INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
      READ INPUT TAPE 5, 501, IA, IB, IC
  501 FORMAT (3I5)
C     IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C     FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C     MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
      IF (IA) 777, 777, 701
  701 IF (IB) 777, 777, 702
  702 IF (IC) 777, 777, 703
  703 IF (IA+IB-IC) 777, 777, 704
  704 IF (IA+IC-IB) 777, 777, 705
  705 IF (IB+IC-IA) 777, 777, 799
  777 STOP 1
C     USING HERON'S FORMULA WE CALCULATE THE
C     AREA OF THE TRIANGLE
  799 S = FLOATF (IA + IB + IC) / 2.0
      AREA = SQRTF( S * (S - FLOATF(IA)) * (S - FLOATF(IB)) *
     +       (S - FLOATF(IC)))
      WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
  601 FORMAT (4H A= ,I5,5H  B= ,I5,5H  C= ,I5,8H  AREA= ,F10.2,
     +       13H SQUARE UNITS)
      STOP
      END
FORTRAN III
IBM also developed a FORTRAN III in 1958 that allowed for inline assembly code among other features; however, this version was never released as a product. Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine. Early versions of FORTRAN provided by other vendors suffered from the same disadvantage.
FORTRAN IV
IBM began development of FORTRAN IV in 1961 as a result of customer demands. FORTRAN IV removed the machine-dependent features of FORTRAN II (such as READ INPUT TAPE), while adding new features such as a LOGICAL data type, logical Boolean expressions, and the logical IF statement as an alternative to the arithmetic IF statement. FORTRAN IV was eventually released in 1962, first for the IBM 7030 ("Stretch") computer, followed by versions for the IBM 7090, IBM 7094, and later for the IBM 1401 in 1966.
By 1965, FORTRAN IV was supposed to be compliant with the standard being developed by the American Standards Association X3.4.3 FORTRAN Working Group.
Between 1966 and 1968, IBM offered several FORTRAN IV compilers for its System/360, each named by letters that indicated the minimum amount of memory the compiler needed to run.
The letters (F, G, H) matched the codes used with System/360 model numbers to indicate memory size, each letter increment being a factor of two larger:
1966 : FORTRAN IV F for DOS/360 (64K bytes)
1966 : FORTRAN IV G for OS/360 (128K bytes)
1968 : FORTRAN IV H for OS/360 (256K bytes)
Digital Equipment Corporation maintained DECSYSTEM-10 Fortran IV (F40) for PDP-10 from 1967 to 1975. Compilers were also available for the UNIVAC 1100 series and the Control Data 6000 series and 7000 series systems.
At about this time FORTRAN IV had started to become an important educational tool and implementations such as the University of Waterloo's WATFOR and WATFIV were created to simplify the complex compile and link processes of earlier compilers.
In the FORTRAN IV programming environment of the era, except for that used on Control Data Corporation (CDC) systems, only one instruction was placed per line. The CDC version allowed for multiple instructions per line if separated by a $ (dollar) character. The FORTRAN sheet was divided into four fields, as described above.
Two compilers of the time, IBM "G" and UNIVAC, allowed comments to be written on the same line as instructions, separated by a special character: "master space": V (perforations 7 and 8) for UNIVAC and perforations 12/11/0/7/8/9 (hexadecimal FF) for IBM. These comments were not to be inserted in the middle of continuation cards.
FORTRAN 66
Perhaps the most significant development in the early history of FORTRAN was the decision by the American Standards Association (now American National Standards Institute (ANSI)) to form a committee sponsored by the Business Equipment Manufacturers Association (BEMA) to develop an American Standard Fortran. The resulting two standards, approved in March 1966, defined two languages, FORTRAN (based on FORTRAN IV, which had served as a de facto standard), and Basic FORTRAN (based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard, officially denoted X3.9-1966, became known as FORTRAN 66 (although many continued to term it FORTRAN IV, the language on which the standard was largely based). FORTRAN 66 effectively became the first industry-standard version of FORTRAN. FORTRAN 66 included:
Main program, SUBROUTINE, FUNCTION, and BLOCK DATA program units
INTEGER, REAL, DOUBLE PRECISION, COMPLEX, and LOGICAL data types
COMMON, DIMENSION, and EQUIVALENCE statements
DATA statement for specifying initial values
Intrinsic and EXTERNAL (e.g., library) functions
Assignment statement
GO TO, computed GO TO, ASSIGN, and assigned GO TO statements
Logical IF and arithmetic (three-way) IF statements
DO loop statement
READ, WRITE, BACKSPACE, REWIND, and END FILE statements for sequential I/O
FORMAT statement and assigned format
CALL, RETURN, PAUSE, and STOP statements
Hollerith constants in DATA and FORMAT statements, and as arguments to procedures
Identifiers of up to six characters in length
Comment lines
END line
The above FORTRAN II version of the Heron program needs several modifications to compile as a FORTRAN 66 program. Modifications include using the more machine-independent versions of the READ and WRITE statements and removing the unneeded type-conversion functions. Though not required, the arithmetic IF statements can be rewritten to use logical IF statements and expressions in a more structured fashion.
C     AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C     INPUT - TAPE READER UNIT 5, INTEGER INPUT
C     OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C     INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
      READ (5, 501) IA, IB, IC
  501 FORMAT (3I5)
C
C     IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C     FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C     MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
      IF (IA .GT. 0 .AND. IB .GT. 0 .AND. IC .GT. 0) GOTO 10
      WRITE (6, 602)
  602 FORMAT (42H IA, IB, AND IC MUST BE GREATER THAN ZERO.)
      STOP 1
   10 CONTINUE
C
      IF (IA+IB-IC .GT. 0
     +    .AND. IA+IC-IB .GT. 0
     +    .AND. IB+IC-IA .GT. 0) GOTO 20
      WRITE (6, 603)
  603 FORMAT (50H SUM OF TWO SIDES MUST BE GREATER THAN THIRD SIDE.)
      STOP 1
   20 CONTINUE
C
C     USING HERON'S FORMULA WE CALCULATE THE
C     AREA OF THE TRIANGLE
      S = (IA + IB + IC) / 2.0
      AREA = SQRT ( S * (S - IA) * (S - IB) * (S - IC))
      WRITE (6, 601) IA, IB, IC, AREA
  601 FORMAT (4H A= ,I5,5H  B= ,I5,5H  C= ,I5,8H  AREA= ,F10.2,
     +        13H SQUARE UNITS)
      STOP
      END
FORTRAN 77
After the release of the FORTRAN 66 standard, compiler vendors introduced several extensions to Standard Fortran, prompting ANSI committee X3J3 in 1969 to begin work on revising the 1966 standard, under sponsorship of CBEMA, the Computer Business Equipment Manufacturers Association (formerly BEMA). Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, called FORTRAN 77 and officially denoted X3.9-1978, added a number of significant features to address many of the shortcomings of FORTRAN 66:
Block IF and END IF statements, with optional ELSE and ELSE IF clauses, to provide improved language support for structured programming
DO loop extensions, including parameter expressions, negative increments, and zero trip counts
OPEN, CLOSE, and INQUIRE statements for improved I/O capability
Direct-access file I/O
IMPLICIT statement, to override implicit conventions that undeclared variables are INTEGER if their name begins with I, J, K, L, M, or N (and REAL otherwise)
CHARACTER data type, replacing Hollerith strings with vastly expanded facilities for character input and output and processing of character-based data
PARAMETER statement for specifying constants
SAVE statement for persistent local variables
Generic names for intrinsic functions (e.g., SQRT also accepts arguments of other types, such as DOUBLE PRECISION or COMPLEX)
A set of intrinsics (LGE, LGT, LLE, LLT) for lexical comparison of strings, based upon the ASCII collating sequence. (These ASCII functions were demanded by the U.S. Department of Defense, in their conditional approval vote.)
A maximum of seven dimensions in arrays, rather than three. Allowed subscript expressions were also generalized.
In this revision of the standard, a number of features were removed or altered in a manner that might invalidate formerly standard-conforming programs.
(Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.)
While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the prior standard but rarely used, a small number of specific capabilities were deliberately removed, such as:
Hollerith constants and Hollerith data, such as GREET = 12HHELLO THERE!
Reading into an H edit (Hollerith field) descriptor in a FORMAT specification
Overindexing of array bounds by subscripts, e.g., DIMENSION A(10,5) followed by Y = A(11,1)
Transfer of control out of and back into the range of a DO loop (also known as "Extended Range")
A Fortran 77 version of the Heron program requires no modifications to the Fortran 66 version. However, this example demonstrates additional cleanup of the I/O statements, including using list-directed I/O and replacing the Hollerith edit descriptors in the FORMAT statements with quoted strings. It also uses structured IF ... THEN / END IF statements rather than IF / GOTO.
      PROGRAM HERON
C     AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C     INPUT - DEFAULT STANDARD INPUT UNIT, INTEGER INPUT
C     OUTPUT - DEFAULT STANDARD OUTPUT UNIT, REAL OUTPUT
C     INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
      READ (*, *) IA, IB, IC
C
C     IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C     FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C     MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
      IF (IA .LE. 0 .OR. IB .LE. 0 .OR. IC .LE. 0) THEN
      WRITE (*, *) 'IA, IB, and IC must be greater than zero.'
      STOP 1
      END IF
C
      IF (IA+IB-IC .LE. 0
     +    .OR. IA+IC-IB .LE. 0
     +    .OR. IB+IC-IA .LE. 0) THEN
      WRITE (*, *) 'Sum of two sides must be greater than third side.'
      STOP 1
      END IF
C
C     USING HERON'S FORMULA WE CALCULATE THE
C     AREA OF THE TRIANGLE
      S = (IA + IB + IC) / 2.0
      AREA = SQRT ( S * (S - IA) * (S - IB) * (S - IC))
      WRITE (*, 601) IA, IB, IC, AREA
  601 FORMAT ('A= ', I5, ' B= ', I5, ' C= ', I5, ' AREA= ', F10.2,
     +        ' square units')
      STOP
      END
Transition to ANSI Standard Fortran
The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect.
An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978. This specification, developed by the U.S. Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
DO WHILE and END DO statements
INCLUDE statement
IMPLICIT NONE variant of the IMPLICIT statement
Bit manipulation intrinsic functions, based on similar functions included in Industrial Real-Time Fortran (ANSI/ISA S61.1 (1976))
The IEEE 1003.9 POSIX Standard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls. Over 100 calls were defined in the document allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner.
Fortran 90
The much-delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as ISO/IEC standard 1539:1991 in 1991 and an ANSI Standard in 1992. In addition to changing the official spelling from FORTRAN to Fortran, this major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard:
Free-form source input removed the need to skip the first six character positions before entering statements.
Lowercase Fortran keywords
Identifiers up to 31 characters in length (in the previous standard, only six characters)
Inline comments
Ability to operate on arrays (or array sections) as a whole, thus greatly simplifying math and engineering computations.
whole, partial and masked array assignment statements and array expressions, such as X(1:N)=R(1:N)*COS(A(1:N))
WHERE statement for selective (masked) array assignment (see the sketch after this list)
array-valued constants and expressions,
user-defined array-valued functions and array constructors.
RECURSIVE procedures
Modules, to group related procedures and data together, and make them available to other program units, including the capability to limit the accessibility to only specific parts of the module.
A vastly improved argument-passing mechanism, allowing interfaces to be checked at compile time
User-written interfaces for generic procedures
Operator overloading
Derived (structured) data types
New data type declaration syntax, to specify the data type and other attributes of variables
Dynamic memory allocation by means of the ALLOCATABLE attribute and the ALLOCATE and DEALLOCATE statements
POINTER attribute, pointer assignment, and NULLIFY statement to facilitate the creation and manipulation of dynamic data structures
Structured looping constructs, with an END DO statement for loop termination, and EXIT and CYCLE statements for terminating normal loop iterations in an orderly way
SELECT CASE construct for multi-way selection
Portable specification of numerical precision under the user's control
New and enhanced intrinsic procedures.
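Several of these features can be seen together in the following sketch (hypothetical names, assuming any Fortran 90 compiler): a module providing an explicit interface, an assumed-shape array argument, an ALLOCATABLE array, whole-array arithmetic, an array constructor, and a masked WHERE assignment.
module stats_mod
  implicit none
contains
  function rescale(x, factor) result(y)
    real, intent(in) :: x(:)       ! assumed-shape dummy array
    real, intent(in) :: factor
    real :: y(size(x))             ! automatic result array
    y = x * factor                 ! whole-array operation, no DO loop
  end function rescale
end module stats_mod

program f90_demo
  use stats_mod                    ! the module supplies an explicit interface
  implicit none
  real, allocatable :: v(:)
  allocate (v(5))
  v = (/ 1.0, -2.0, 3.0, -4.0, 5.0 /)   ! array constructor
  where (v < 0.0) v = 0.0               ! masked (selective) assignment
  print *, rescale(v, 2.0)
  deallocate (v)
end program f90_demo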
Obsolescence and deletions
Unlike the prior revision, Fortran 90 removed no features. Any standard-conforming FORTRAN 77 program was also standard-conforming under Fortran 90, and either standard should have been usable to define its behavior.
A small set of features were identified as "obsolescent" and were expected to be removed in a future standard. All of the functionalities of these early-version features can be performed by newer Fortran features. Some are kept to simplify porting of old programs but many were deleted in Fortran 95.
"Hello, World!" example
program helloworld
   print *, "Hello, World!"
end program helloworld
Fortran 95
Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification:
FORALL and nested WHERE constructs to aid vectorization (see the sketch after this list)
User-defined PURE and ELEMENTAL procedures
Default initialization of derived type components, including pointer initialization
Expanded the ability to use initialization expressions for data objects
Initialization of pointers to NULL()
Clearly defined that arrays are automatically deallocated when they go out of scope.
A number of intrinsic functions were extended (for example, a DIM argument was added to the MAXLOC intrinsic).
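A small sketch of two of these Fortran 95 additions (illustrative names only): a FORALL statement whose iterations are independent, and a user-defined ELEMENTAL function applied elementwise to an array.
program f95_demo
  implicit none
  integer :: i
  real :: a(4), b(4)
  a = (/ 1.0, 4.0, 9.0, 16.0 /)
  forall (i = 1:4) b(i) = a(i) / real(i)   ! iterations may run in any order
  print *, halve(b)                        ! elemental: applied to each element
contains
  elemental function halve(x) result(r)
    real, intent(in) :: x
    real :: r
    r = 0.5 * x
  end function halve
end program f95_demo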
Several features noted in Fortran 90 to be "obsolescent" were removed from Fortran 95:
DO statements using REAL and DOUBLE PRECISION index variables
Branching to an END IF statement from outside its block
PAUSE statement
ASSIGN and assigned GO TO statements, and assigned format specifiers
Hollerith edit descriptor.
An important supplement to Fortran 95 was the ISO technical report TR-15581: Enhanced Data Type Facilities, informally known as the Allocatable TR. This specification defined enhanced use of ALLOCATABLE arrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses include ALLOCATABLE arrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLE arrays are preferable to POINTER-based arrays because ALLOCATABLE arrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility of memory leakage. In addition, elements of allocatable arrays are contiguous, and aliasing is not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.)
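A minimal sketch of one such use, an ALLOCATABLE function result (hypothetical names; assumes a compiler supporting the Allocatable TR or Fortran 2003):
module squares_mod
  implicit none
contains
  function squares(n) result(s)
    integer, intent(in) :: n
    real, allocatable :: s(:)          ! allocatable function result (TR 15581)
    integer :: i
    allocate (s(n))
    s = (/ (real(i)**2, i = 1, n) /)   ! implied-DO array constructor
  end function squares                 ! the result's storage is managed automatically
end module squares_mod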
Another important supplement to Fortran 95 was the ISO technical report TR-15580: Floating-point exception handling, informally known as the IEEE TR. This specification defined support for IEEE floating-point arithmetic and floating-point exception handling.
Conditional compilation and varying length strings
In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1 : 1997), the Fortran 95 language also included two optional modules:
Varying length character strings (ISO/IEC 1539-2 : 2000)
Conditional compilation (ISO/IEC 1539-3 : 1998)
which, together, compose the multi-part International Standard (ISO/IEC 1539).
According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard".
Modern Fortran
The language defined by the twenty-first century standards, in particular because of its incorporation of object-oriented programming support and subsequently Coarray Fortran, is often referred to as 'Modern Fortran', and the term is increasingly used in the literature.
Fortran 2003
Fortran 2003, officially published as ISO/IEC 1539-1:2004, was a major revision introducing many new features. A comprehensive summary of the new features of Fortran 2003 is available at the Fortran Working Group (ISO/IEC JTC1/SC22/WG5) official Web site.
From that article, the major enhancements for this revision include:
Derived type enhancements: parameterized derived types, improved control of accessibility, improved structure constructors, and finalizers
Object-oriented programming support: type extension and inheritance, polymorphism, dynamic type allocation, and type-bound procedures, providing complete support for abstract data types
Data manipulation enhancements: allocatable components (incorporating TR 15581), deferred type parameters, VOLATILE attribute, explicit type specification in array constructors and allocate statements, pointer enhancements, extended initialization expressions, and enhanced intrinsic procedures
Input/output enhancements: asynchronous transfer, stream access, user specified transfer operations for derived types, user specified control of rounding during format conversions, named constants for preconnected units, the FLUSH statement, regularization of keywords, and access to error messages
Procedure pointers
Support for IEEE floating-point arithmetic and floating-point exception handling (incorporating TR 15580)
Interoperability with the C programming language
Support for international usage: access to ISO 10646 4-byte characters and choice of decimal or comma in numeric formatted input/output
Enhanced integration with the host operating system: access to command-line arguments, environment variables, and processor error messages
An important supplement to Fortran 2003 was the ISO technical report TR-19767: Enhanced module facilities in Fortran. This report provided sub-modules, which make Fortran modules more similar to Modula-2 modules. They are similar to Ada private child sub-units. This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades.
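A brief sketch of the mechanism (hypothetical names; assumes a compiler supporting the TR, or Fortran 2008, where sub-modules entered the base language): the module publishes only the interface, while a sub-module carries the implementation.
module geometry
  implicit none
  interface                          ! only the interface is published
    module function circle_area(r) result(a)
      real, intent(in) :: r
      real :: a
    end function circle_area
  end interface
end module geometry

submodule (geometry) geometry_impl   ! may be recompiled without
contains                             ! triggering a compilation cascade
  module function circle_area(r) result(a)
    real, intent(in) :: r
    real :: a
    a = 3.14159265 * r * r
  end function circle_area
end submodule geometry_impl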
Fortran 2008
ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010. As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing some new capabilities. The new capabilities include:
Sub-modules – additional structuring facilities for modules; supersedes ISO/IEC TR 19767:2005
Coarray Fortran – a parallel execution model
The DO CONCURRENT construct – for loop iterations with no interdependencies (see the sketch after this list)
The CONTIGUOUS attribute – to specify storage layout restrictions
The BLOCK construct – can contain declarations of objects with construct scope
Recursive allocatable components – as an alternative to recursive pointers in derived types
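A short sketch of two of these capabilities (illustrative only; a coarray program is launched on an implementation-defined number of images):
program f2008_demo
  implicit none
  integer :: i
  real :: x(1000)
  real :: total[*]                 ! a coarray: one copy per image
  do concurrent (i = 1:1000)       ! iterations declared independent
     x(i) = sqrt(real(i))
  end do
  total = sum(x)                   ! each image computes its own sum
  sync all                         ! barrier across all images
  if (this_image() == 1) print *, 'sum on image 1:', total
end program f2008_demo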
The Final Draft International Standard (FDIS) is available as document N1830.
A supplement to Fortran 2008 is the International Organization for Standardization (ISO) Technical Specification (TS) 29113 on Further Interoperability of Fortran with C, which was submitted to ISO in May 2012 for approval. The specification adds support for accessing the array descriptor from C and allows ignoring the type and rank of arguments.
Fortran 2018
The Fortran 2018 revision of the language was earlier referred to as Fortran 2015. It was a significant revision and was released on November 28, 2018.
Fortran 2018 incorporates two previously published Technical Specifications:
ISO/IEC TS 29113:2012 Further Interoperability with C
ISO/IEC TS 18508:2015 Additional Parallel Features in Fortran
Additional changes and new features include support for ISO/IEC/IEEE 60559:2011 (the version of the IEEE floating-point standard before the latest minor revision, IEEE 754-2019), hexadecimal input/output, IMPLICIT NONE enhancements, and other changes.
Fortran 2018 deleted the arithmetic IF statement. It also deleted non-block DO constructs, i.e., loops that do not end with an END DO or CONTINUE statement. These had been an obsolescent part of the language since Fortran 90.
New obsolescent features are the COMMON and EQUIVALENCE statements and the BLOCK DATA program unit, labelled DO loops, specific names for intrinsic functions, and the FORALL statement and construct.
Fortran 2023
Fortran 2023 (ISO/IEC 1539-1:2023) was published in November 2023, and can be purchased from the ISO.
Fortran 2023 is a minor extension of Fortran 2018 that focuses on correcting errors and omissions in Fortran 2018. It also adds some small features, including an enumerated type capability.
Language features
A full description of the Fortran language features brought by Fortran 95 is covered in the related article, Fortran 95 language features. The language versions defined by later standards are often referred to collectively as 'Modern Fortran' and are described in the literature.
Science and engineering
Although a 1968 journal article by the authors of BASIC already described FORTRAN as "old-fashioned", programs have been written in Fortran for many decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities. Jay Pasachoff wrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change to Pascal, Modula-2, or whatever." In 1993, Cecil E. Leith called FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope".
It is the primary language for some of the most intensive super-computing tasks, such as in astronomy, climate modeling, computational chemistry, computational economics, computational fluid dynamics, computational physics, data analysis, hydrological modeling, numerical linear algebra and numerical libraries (LAPACK, IMSL and NAG), optimization, satellite simulation, structural engineering, and weather prediction. Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of the SPEC benchmarks (e.g., CFP2006, CFP2017) are written in Fortran. Math algorithms are well documented in Numerical Recipes.
Apart from this, more modern codes in computational science generally use large program libraries, such as METIS for graph partitioning, PETSc or Trilinos for linear algebra capabilities, deal.II or FEniCS for mesh and finite element support, and other generic libraries. Since the early 2000s, many of the widely used support libraries have also been implemented in C and more recently, in C++. On the other hand, high-level languages such as the Wolfram Language, MATLAB, Python, and R have become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs are also written in such higher-level scripting languages. For this reason, facilities for inter-operation with C were added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages.
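As a sketch of the C interoperability facilities introduced in Fortran 2003 (hypothetical names), the intrinsic ISO_C_BINDING module supplies interoperable kinds, and BIND(C) fixes the binding name:
! Callable from C as:  double mean(const double *x, int n);
function mean(x, n) bind(c, name='mean') result(m)
  use, intrinsic :: iso_c_binding, only: c_double, c_int
  implicit none
  integer(c_int), intent(in), value :: n   ! passed by value, as in C
  real(c_double), intent(in) :: x(n)       ! maps to a C array
  real(c_double) :: m
  m = sum(x) / real(n, c_double)
end function mean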
Portability
Portability was a problem in the early days because there was no agreed-upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier, it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions.
Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by the PORT library. The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of the IEEE 754 standard for binary floating-point arithmetic has essentially removed this problem.
Access to the computing environment (e.g., the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard.
Large collections of library software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by incorporation of C interoperability into the 2003 standard.
It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to a preprocessor.
Obsolete variants
Until the Fortran 66 standard was developed, each compiler supported its own variant of Fortran. Some were more divergent from the mainstream than others.
The first Fortran compiler set a high standard of efficiency for compiled code. This goal made it difficult to create a compiler, so compilers were usually written by the computer manufacturers to support hardware sales. This left an important niche: compilers that were fast and provided good diagnostics for the programmer (often a student). Examples include Watfor, Watfiv, PUFFT, and on a smaller scale, FORGO, Wits Fortran, and Kingston Fortran 2.
Fortran 5 was marketed by Data General Corp from the early 1970s to the early 1980s, for the Nova, Eclipse, and MV line of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66.
FORTRAN V was distributed by Control Data Corporation in 1968 for the CDC 6600 series. The language was based upon FORTRAN IV.
Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac Fortran V was Athena FORTRAN.
Specific variants produced by the vendors of high-performance scientific computers (e.g., Burroughs, Control Data Corporation (CDC), Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine code instructions to keep multiple internal arithmetic units busy simultaneously. Another example is CFD, a special variant of FORTRAN designed specifically for the ILLIAC IV supercomputer, running at NASA's Ames Research Center.
IBM Research Labs also developed an extended FORTRAN-based language called VECTRAN for processing vectors and matrices.
Object-Oriented Fortran was an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel. It was available for Sun, Iris, iPSC, and nCUBE, but is no longer supported.
Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards. The major remaining extension is OpenMP, which is a cross-platform extension for shared memory programming. One new extension, Coarray Fortran, is intended to support parallel programming.
FOR TRANSIT was the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie in the late 1950s. The following comment appears in the IBM Reference Manual (FOR TRANSIT Automatic Coding System C28-4038, Copyright 1957, 1959 by IBM):
The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704.
The permissible statements were:
Arithmetic assignment statements, e.g., a = b
GO TO (n1, n2, ..., nm), i
IF (a) n1, n2, n3
DO n i = m1, m2
Up to ten subroutines could be used in one program.
FOR TRANSIT statements were limited to columns 7 through 56, only. Punched cards were used for input and output on the IBM 650. Three passes were required to translate source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards).
Two versions existed for the 650s with a 2000 word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for the IBM 533 card reader/punch control panel.
Fortran-based languages
Prior to FORTRAN 77, many preprocessors were commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler. These preprocessors would typically support structured programming, variable names longer than six characters, additional data types, conditional compilation, and even macro capabilities. Popular preprocessors included EFL, FLECS, iftran, MORTRAN, SFtran, S-Fortran, Ratfor, and Ratfiv. EFL, Ratfor and Ratfiv, for example, implemented C-like languages, outputting preprocessed code in standard FORTRAN 66. The PFORT preprocessor was often used to verify that code conformed to a portable subset of the language. Despite advances in the Fortran language, preprocessors continue to be used for conditional compilation and macro substitution.
One of the earliest versions of FORTRAN, introduced in the 1960s, was popularly used in colleges and universities. Developed, supported, and distributed by the University of Waterloo, WATFOR was based largely on FORTRAN IV. A student using WATFOR could submit their batch FORTRAN job and, if there were no syntax errors, the program would move straight to execution. This simplification allowed students to concentrate on their program's syntax and semantics, or execution logic flow, rather than dealing with submission Job Control Language (JCL), the successive compile/link-edit/execute steps, or other complexities of the mainframe/minicomputer environment. A downside to this simplified environment was that WATFOR was not a good choice for programmers needing the expanded abilities of their host processor(s); e.g., WATFOR typically had very limited access to I/O devices. WATFOR was succeeded by WATFIV and its later versions.
LRLTRAN was developed at the Lawrence Radiation Laboratory to provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming. The distribution included the Livermore Time Sharing System (LTSS) operating system.
The Fortran-95 Standard includes an optional Part 3 which defines an optional conditional compilation capability. This capability is often referred to as "CoCo".
Many Fortran compilers have integrated subsets of the C preprocessor into their systems.
SIMSCRIPT is an application specific Fortran preprocessor for modeling and simulating large discrete systems.
The F programming language was designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as the EQUIVALENCE statement. F retains the array features added in Fortran 90, and removes control statements that were made obsolete by structured programming constructs added to both FORTRAN 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing". Essential Lahey Fortran 90 (ELF90) was a similar subset.
Lahey and Fujitsu teamed up to create Fortran for the Microsoft .NET Framework. Silverfrost FTN95 is also capable of creating .NET code.
Code examples
The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence of loops and / statements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively.
program average
  ! Read in some numbers and take the average
  ! As written, if there are no data points, an average of zero is returned
  ! While this may not be desired behavior, it keeps this example simple
  implicit none

  real, allocatable :: points(:)
  integer :: number_of_points
  real :: average_points, positive_average, negative_average

  average_points = 0.
  positive_average = 0.
  negative_average = 0.

  write (*,*) "Input number of points to average:"
  read (*,*) number_of_points

  allocate (points(number_of_points))

  write (*,*) "Enter the points to average:"
  read (*,*) points

  ! Take the average by summing points and dividing by number_of_points
  if (number_of_points > 0) average_points = sum(points) / number_of_points

  ! Now form average over positive and negative points only
  if (count(points > 0.) > 0) positive_average = sum(points, points > 0.) / count(points > 0.)
  if (count(points < 0.) > 0) negative_average = sum(points, points < 0.) / count(points < 0.)

  ! Print result to terminal stdout unit 6
  write (*,'(a,g12.4)') 'Average = ', average_points
  write (*,'(a,g12.4)') 'Average of positive points = ', positive_average
  write (*,'(a,g12.4)') 'Average of negative points = ', negative_average

  deallocate (points)   ! free memory
end program average
Humor
During the same FORTRAN standards committee meeting at which the name "FORTRAN 77" was chosen, a satirical technical proposal was incorporated into the official distribution bearing the title "Letter O Considered Harmful". This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notorious GO TO statement as before. (Troublesome FORMAT statements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway".
When X3J3 debated whether the minimum trip count for a DO loop should be zero or one in Fortran 77, Loren Meissner suggested a minimum trip count of two—reasoning (tongue-in-cheek) that if it were less than two, then there would be no reason for a loop.
When assumed-length arrays were being added, there was a dispute as to the appropriate character to separate upper and lower bounds. In a comment examining these arguments, Walt Brainerd penned an article entitled "Astronomy vs. Gastroenterology" because some proponents had suggested using the star or asterisk ("*"), while others favored the colon (":").
Variable names beginning with the letters I–N had a default type of integer, while variables starting with any other letter defaulted to real, although programmers could override the defaults with an explicit declaration. This led to the joke: "In FORTRAN, GOD is REAL (unless declared INTEGER)."
See also
References
Further reading
Language standards
Informally known as FORTRAN 66.
Also known as ISO 1539–1980, informally known as FORTRAN 77.
Informally known as Fortran 90.
Informally known as Fortran 95. There are a further two parts to this standard. Part 1 has been formally adopted by ANSI.
Informally known as Fortran 2003.
Informally known as Fortran 2008.
Related standards
Other reference material
Books
Markus, Arjen (2012), "Modern Fortran in Practice", Cambridge Univ. Press, ISBN 978-1-13908479-6.
Articles
External links
ISO/IEC JTC1/SC22/WG5—the official home of Fortran standards
Fortran Standards Documents—GFortran standards
fortran-lang.org (2020).
History of FORTRAN and Fortran II—Computer History Museum
Valmer Norrod, et al.: A self-study course in FORTRAN programing—Volume I—textbook, Computer Science Corporation El Segundo, California (April 1970). NASA (N70-25287).
Valmer Norrod, Sheldom Blecher, and Martha Horton: A self-study course in FORTRAN programing—Volume II—workbook, NASA CR-1478 (April 1970), NASA (N70-25288).
An introduction to the Fortran programming language, by Reinhold Bader, Nisarg Patel, Leibniz Supercomputing Centre.
A coarray tutorial
Victor Eijkhout : Introduction to Scientific Programming in C++17/Fortran2008, The Art of HPC, volume 3 (PDF)
American inventions
Array programming languages
Computer standards
Numerical programming languages
Object-oriented programming languages
Procedural programming languages
High-level programming languages
IBM software
Programming languages created in 1957
Programming languages with an ISO standard
Statically typed programming languages
Unix programming tools
1957 software | Fortran | [
"Technology"
] | 11,318 | [
"Computer standards"
] |
11,178 | https://en.wikipedia.org/wiki/Foobar | The terms foobar (), foo, bar, baz, qux, quux, and others are used as metasyntactic variables and placeholder names in computer programming or computer-related documentation. They have been used to name entities such as variables, functions, and commands whose exact identity is unimportant and serve only to demonstrate a concept.
The style guide for Google developer documentation recommends against using them as example project names because they are unclear and can cause confusion.
History and etymology
It is possible that foobar is a playful allusion to the World War II-era military slang FUBAR (fucked up beyond all recognition).
According to an RFC from the Internet Engineering Task Force, the word FOO originated as a nonsense word, with its earliest documented use in the 1930s comic Smokey Stover by Bill Holman. Holman stated that he used the word due to having seen it on the bottom of a jade Chinese figurine in San Francisco Chinatown, purportedly signifying "good luck". If true, this is presumably related to the Chinese word fu ("福", sometimes transliterated foo, as in foo dog), which can mean happiness or blessing.
The first known use of the terms in print in a programming context appears in a 1965 edition of MIT's Tech Engineering News. The use of foo in a programming context is generally credited to the Tech Model Railroad Club (TMRC) of MIT from . In the complex model system, there were scram switches located at numerous places around the room that could be thrown if something undesirable was about to occur, such as a train moving at full power towards an obstruction. Another feature of the system was a digital clock on the dispatch board. When someone hit a scram switch, the clock stopped and the display was replaced with the word "FOO"; at TMRC the scram switches are, therefore, called "Foo switches". Because of this, an entry in the 1959 Dictionary of the TMRC Language went something like this: "FOO: The first syllable of the misquoted sacred chant phrase 'foo mane padme hum.' Our first obligation is to keep the foo counters turning." One book describing the MIT train room describes two buttons by the door labeled "foo" and "bar". These were general-purpose buttons and were often repurposed for whatever fun idea the MIT hackers had at the time, hence the adoption of foo and bar as general-purpose variable names. An entry in the Abridged Dictionary of the TMRC Language states:
Foobar was used as a variable name in the Fortran code of Colossal Cave Adventure (1977 Crowther and Woods version). The variable FOOBAR was used to contain the player's progress in saying the magic phrase "Fee Fie Foe Foo", a phrase from a historical quatrain in the classic English fairy tale Jack and the Beanstalk. Intel also used the term foo in its programming documentation in 1978.
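A hypothetical fixed-form Fortran fragment in the same spirit, using the terms purely as placeholder variable names (the names carry no meaning):
C     FOO AND BAR ARE PLACEHOLDERS; ANY OTHER NAMES WOULD DO
      INTEGER FOO, BAR
      FOO = 1
      BAR = FOO + 1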
Examples in culture
Foo Camp is an annual hacker convention.
BarCamp, an international network of user-generated conferences.
During the United States v. Microsoft Corp. trial, evidence was presented that Microsoft had tried to use the Web Services Interoperability organization (WS-I) as a means to stifle competition, including e-mails in which top executives including Bill Gates and Steve Ballmer referred to the WS-I using the codename "foo".
foobar2000 is an audio player.
See also
Alice and Bob
Foo fighter
Foo was here
Fu (character)
Lorem ipsum, similar placeholder text used outside programming
xyzzy
References
External links
Google developer documentation style guide word list
The Jargon File entry on "foobar", catb.org
– FTP Operation Over Big Address Records (FOOBAR)
Placeholder names
Computer programming folklore
Articles with example C code
Computing terminology | Foobar | [
"Technology"
] | 802 | [
"Computing terminology"
] |
11,180 | https://en.wikipedia.org/wiki/Functional%20analysis | Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach.
In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theories of measure, integration, and probability to infinite-dimensional spaces, also known as infinite dimensional analysis.
Normed vector spaces
The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, machine learning, partial differential equations, and Fourier analysis.
More generally, functional analysis includes the study of Fréchet spaces and other topological vector spaces not endowed with a norm.
Important objects of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras.
Hilbert spaces
Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to $\ell^{2}(\aleph_0)$. Separability being important for applications, functional analysis of Hilbert spaces consequently mostly deals with this space. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven.
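The classification mentioned at the start of this section rests on a standard fact, restated here for concreteness: if $(e_i)_{i \in I}$ is an orthonormal basis of a Hilbert space $H$, then every vector expands as

$$x = \sum_{i \in I} \langle x, e_i \rangle e_i, \qquad \|x\|^2 = \sum_{i \in I} |\langle x, e_i \rangle|^2,$$

so the coefficient map $x \mapsto (\langle x, e_i \rangle)_{i \in I}$ is an isometric isomorphism of $H$ onto $\ell^{2}(I)$; two Hilbert spaces with orthonormal bases of the same cardinality are therefore isomorphic.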
Banach spaces
General Banach spaces are more complicated than Hilbert spaces, and cannot be classified in such a simple manner as those. In particular, many Banach spaces lack a notion analogous to an orthonormal basis.
Examples of Banach spaces are $L^{p}$-spaces for any real number $p \geq 1$. Given also a measure $\mu$ on a set $X$, the space $L^{p}(X)$, sometimes also denoted $L^{p}(X, \mu)$ or $L^{p}(\mu)$, has as its vectors equivalence classes of measurable functions whose absolute value's $p$-th power has finite integral; that is, functions $f$ for which one has
$$\int_{X} |f(x)|^{p} \, d\mu(x) < \infty.$$
If $\mu$ is the counting measure, then the integral may be replaced by a sum. That is, we require
$$\sum_{x \in X} |f(x)|^{p} < \infty.$$
Then it is not necessary to deal with equivalence classes, and the space is denoted $\ell^{p}(X)$, written more simply as $\ell^{p}$ in the case when $X$ is the set of non-negative integers.
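For reference, the finiteness condition above gives rise to the standard $L^{p}$ norm (a textbook definition, supplied here because the surrounding text presupposes it):

$$\|f\|_{p} = \left( \int_{X} |f(x)|^{p} \, d\mu(x) \right)^{1/p},$$

and completeness of $L^{p}(\mu)$ with respect to this norm is exactly what makes it a Banach space.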
In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article.
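The canonical identification can be written out explicitly; in standard notation (supplied here, not from the original text), with $X'$ the dual and $X''$ the bidual, the evaluation map

$$J : X \to X'', \qquad (Jx)(f) = f(x) \quad \text{for all } x \in X,\ f \in X',$$

is linear, and the Hahn–Banach theorem shows it is an isometry onto its image; a Banach space is called reflexive precisely when $J$ is onto.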
Also, the notion of derivative can be extended to arbitrary functions between Banach spaces. See, for instance, the Fréchet derivative article.
Linear functional analysis
Major and foundational results
There are four major theorems which are sometimes called the four pillars of functional analysis:
the Hahn–Banach theorem
the open mapping theorem
the closed graph theorem
the uniform boundedness principle, also known as the Banach–Steinhaus theorem.
Important results of functional analysis include:
Uniform boundedness principle
The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.
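In symbols, one standard formulation reads: if $X$ is a Banach space, $Y$ a normed space, and $F$ a family of continuous linear operators from $X$ to $Y$, then

$$\sup_{T \in F} \|Tx\|_{Y} < \infty \ \text{ for every } x \in X \quad \Longrightarrow \quad \sup_{T \in F} \|T\| < \infty.$$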
The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus but it was also proven independently by Hans Hahn.
Spectral theorem
There are many theorems known as the spectral theorem, but one in particular has many applications in functional analysis. In one common formulation: if $A$ is a bounded self-adjoint operator on a Hilbert space $H$, then there exist a measure space $(X, \Sigma, \mu)$, a real-valued essentially bounded measurable function $f$ on $X$, and a unitary operator $U : H \to L^{2}(X, \mu)$ such that $U^{*} T U = A$, where $T$ is the multiplication operator $(T \varphi)(x) = f(x) \varphi(x)$ and $\|T\| = \|f\|_{\infty}$.
This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.
There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now $f$ may be complex-valued.
Hahn–Banach theorem
The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting".
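One standard form of the statement, restated here for concreteness: if $p : V \to \mathbb{R}$ is a sublinear function on a real vector space $V$, and $\varphi : U \to \mathbb{R}$ is a linear functional on a subspace $U \subseteq V$ with $\varphi(x) \leq p(x)$ for all $x \in U$, then there exists a linear extension $\psi : V \to \mathbb{R}$ of $\varphi$ with

$$\psi(x) = \varphi(x) \ \text{ for } x \in U, \qquad \psi(x) \leq p(x) \ \text{ for all } x \in V.$$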
Open mapping theorem
The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely: if $X$ and $Y$ are Banach spaces and $A : X \to Y$ is a surjective continuous linear operator, then $A$ is an open map.
The proof uses the Baire category theorem, and completeness of both $X$ and $Y$ is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but is true if $X$ and $Y$ are taken to be Fréchet spaces.
Closed graph theorem
The closed graph theorem states that if $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a linear operator defined on all of $X$, then $T$ is continuous if and only if its graph $\{(x, Tx) : x \in X\}$ is closed in $X \times Y$.
Other topics
Foundations of mathematics considerations
Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many theorems require the Hahn–Banach theorem, usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of axiom of choice.
Points of view
Functional analysis includes the following tendencies:
Abstract analysis. An approach to analysis based on topological groups, topological rings, and topological vector spaces.
Geometry of Banach spaces contains many topics. One is the combinatorial approach connected with Jean Bourgain; another is the characterization of Banach spaces in which various forms of the law of large numbers hold.
Noncommutative geometry. Developed by Alain Connes, partly building on earlier notions, such as George Mackey's approach to ergodic theory.
Connection with quantum mechanics. Either narrowly defined as in mathematical physics, or broadly interpreted by, for example, Israel Gelfand, to include most types of representation theory.
See also
List of functional analysis topics
Spectral theory
References
Further reading
Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis: A Hitchhiker's Guide, 3rd ed., Springer, 2007. Online (by subscription)
Bachman, G., Narici, L.: Functional Analysis, Academic Press, 1966. (reprint Dover Publications)
Banach, S.: Theory of Linear Operations, Volume 38, North-Holland Mathematical Library, 1987
Brezis, H.: Analyse Fonctionnelle, Dunod
Conway, J. B.: A Course in Functional Analysis, 2nd edition, Springer-Verlag, 1994
Dunford, N. and Schwartz, J.T.: Linear Operators, General Theory, John Wiley & Sons, and other 3 volumes, includes visualization charts
Edwards, R. E.: Functional Analysis, Theory and Applications, Holt, Rinehart and Winston, 1965.
Eidelman, Yuli, Vitali Milman, and Antonis Tsolomitis: Functional Analysis: An Introduction, American Mathematical Society, 2004.
Friedman, A.: Foundations of Modern Analysis, Dover Publications, Paperback Edition, July 21, 2010
Giles, J.R.: Introduction to the Analysis of Normed Linear Spaces, Cambridge University Press, 2000
Hirsch, F., Lacombe, G.: Elements of Functional Analysis, Springer, 1999.
Hutson, V., Pym, J.S., Cloud, M.J.: Applications of Functional Analysis and Operator Theory, 2nd edition, Elsevier Science, 2005
Kantorovitz, S.: Introduction to Modern Analysis, Oxford University Press, 2003; 2nd ed. 2006.
Kolmogorov, A.N. and Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis, Dover Publications, 1999
Kreyszig, E.: Introductory Functional Analysis with Applications, Wiley, 1989.
Lax, P.: Functional Analysis, Wiley-Interscience, 2002
Lebedev, L.P. and Vorovich, I.I.: Functional Analysis in Mechanics, Springer-Verlag, 2002
Michel, Anthony N. and Charles J. Herget: Applied Algebra and Functional Analysis, Dover, 1993.
Pietsch, Albrecht: History of Banach spaces and linear operators, Birkhäuser Boston Inc., 2007
Reed, M., Simon, B.: "Functional Analysis", Academic Press 1980.
Riesz, F. and Sz.-Nagy, B.: Functional Analysis, Dover Publications, 1990
Rudin, W.: Functional Analysis, McGraw-Hill Science, 1991
Saxe, Karen: Beginning Functional Analysis, Springer, 2001
Schechter, M.: Principles of Functional Analysis, AMS, 2nd edition, 2001
Shilov, Georgi E.: Elementary Functional Analysis, Dover, 1996.
Sobolev, S.L.: Applications of Functional Analysis in Mathematical Physics, AMS, 1963
Vogt, D., Meise, R.: Introduction to Functional Analysis, Oxford University Press, 1997.
Yosida, K.: Functional Analysis, Springer-Verlag, 6th edition, 1980
External links
Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna.
Lecture Notes on Functional Analysis by Yevgeny Vilensky, New York University.
Lecture videos on functional analysis by Greg Morrow from University of Colorado Colorado Springs | Functional analysis | [
"Mathematics"
] | 2,301 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
11,236 | https://en.wikipedia.org/wiki/Fart%20%28word%29 | Fart is a word in the English language most commonly used in reference to flatulence that can be used as a noun or a verb. The immediate roots are in the Middle English words ferten, feortan and farten, kin of the Old High German word ferzan. Cognates are found in Old Norse, Slavic and also Greek and Sanskrit. The word fart has been incorporated into the colloquial and technical speech of a number of occupations, including computing. It is often considered unsuitable in formal situations as it may be considered vulgar or offensive.
Etymology
The English word fart is one of the oldest words in the English lexicon. Its Indo-European origins are confirmed by the many cognate words in some other Indo-European languages: it is cognate with the Greek verb πέρδομαι (perdomai), as well as the Latin pēdĕre, Sanskrit pardate, Ashkun pidiṅ, Avestan , Italian , French "péter", Russian пердеть (perdet') and Polish "pierdzieć", all descending from PIE *perd- [break wind loudly] or *pezd- [the same, softly], and all of which mean the same thing. Like most Indo-European roots in the Germanic languages, it was altered under Grimm's law, so that Indo-European /p/ > /f/ and /d/ > /t/, as the German cognate furzen also manifests.
Vulgarity and offensiveness
In certain circles the word is considered merely a common profanity with an often humorous connotation. For example, a person may be referred to as a 'fart', or an 'old fart', not necessarily depending on the person's age. This may convey the sense that a person is boring or unduly fussy and be intended as an insult, mainly when used in the second or third person. For example, "he's a boring old fart!" However, the word may be used as a colloquial term of endearment or in an attempt at humorous self-deprecation (e.g., in such phrases as "I know I'm just an old fart" or "you do like to fart about!"). 'Fart' is often only used as a term of endearment when the subject is personally well known to the user.
In both cases though, it tends to refer to personal habits or traits that the user considers to be a negative feature of the subject, even when it is a self-reference. For example, when concerned that a person is being overly methodical they might say 'I know I'm being an old fart', potentially to forestall negative thoughts and opinions in others. When used in an attempt to be offensive, the word is still considered vulgar, but it remains a mild example of such an insult. This usage dates back to the Medieval period, where the phrase 'not worth a fart' would be applied to an item held to be worthless.
Historical examples
The word fart in Middle English occurs in "Sumer Is Icumen In", where one sign of summer is "bucke uerteþ" (the buck farts). It appears in several of Geoffrey Chaucer's Canterbury Tales. In "The Miller's Tale", Absolon has already been tricked into kissing Alison's buttocks when he is expecting to kiss her face. Her boyfriend Nicholas hangs his buttocks out of a window, hoping to trick Absolon into kissing his buttocks in turn and then farts in the face of his rival. In "The Summoner's Tale", the friars in the story are to receive the smell of a fart through a twelve-spoked wheel.
In the early modern period, the word fart was not considered especially vulgar; it even surfaced in literary works. For example, Samuel Johnson's A Dictionary of the English Language, published in 1755, included the word. Johnson defined it with two poems, one by Jonathan Swift, the other by Sir John Suckling.
Benjamin Franklin prepared an essay on the topic for the Royal Academy of Brussels in 1781 urging scientific study. In 1607, a group of Members of Parliament had written a ribald poem entitled The Parliament Fart, as a symbolic protest against the conservatism of the House of Lords and the king, James I.
Modern usage
While not one of George Carlin's original seven dirty words, he noted in a later routine that the word fart ought to be added to "the list" of words that were not acceptable (for broadcast) in any context (which have non-offensive meanings), and described television as (then) a "fart-free zone". Thomas Wolfe had the phrase "a fizzing and sulphuric fart" cut out of his 1929 work Look Homeward, Angel by his publisher. Ernest Hemingway, who had the same publisher, accepted the principle that "fart" could be cut, on the grounds that words should not be used purely to shock. The hippie movement in the 1970s saw a new definition develop, with the use of "fart" as a personal noun, to describe a "detestable person, or someone of small stature or limited mental capacity", gaining wider and more open usage as a result.
Rhyming slang developed the alternative form "raspberry tart", later shortened to "raspberry", and occasionally abbreviated further to "razz". This was associated with the phrase "blowing a raspberry". The word has become more prevalent, and now features in children's literature, such as the Walter the Farting Dog series of children's books, Robert Munsch's Good Families Don't and The Gas We Pass by Shinta Cho.
See also
Flatulence humor
Le Pétomane
Queef
References
Further reading
External links
Dictionary of Fart Slang
Flatulence
Digestive system
English words | Fart (word) | [
"Biology"
] | 1,251 | [
"Digestive system",
"Organ systems"
] |
11,240 | https://en.wikipedia.org/wiki/Flatulence | Flatulence is the expulsion of gas from the intestines via the anus, commonly referred to as farting. "Flatus" is the medical word for gas generated in the stomach or bowels. A proportion of intestinal gas may be swallowed environmental air, and hence flatus is not entirely generated in the stomach or bowels. The scientific study of this area of medicine is termed flatology.
Passing gas is a normal bodily process.
Flatus is brought to the rectum and pressurized by muscles in the intestines. It is normal to pass flatus ("to fart"), though volume and frequency vary greatly among individuals. It is also normal for intestinal gas to have a feculent or unpleasant odor, which may be intense. The noise commonly associated with flatulence is produced by the anus and buttocks, which act together in a manner similar to that of an embouchure. Both the sound and odor are sources of embarrassment, annoyance or amusement (flatulence humor). In many societies, flatus is a taboo. Thus, many people either let their flatus out quietly or even hold it completely. However, holding the gases inside is not healthy.
There are several general symptoms related to intestinal gas: pain, bloating and abdominal distension, excessive flatus volume, excessive flatus odor, and gas incontinence. Furthermore, eructation (colloquially known as "burping") is sometimes included under the topic of flatulence. When excessive or malodorous, flatus can be a sign of a health disorder, such as irritable bowel syndrome, celiac disease or lactose intolerance.
Terminology
Non-medical definitions of the term include "the uncomfortable condition of having gas in the stomach and bowels", or "a state of excessive gas in the alimentary canal". These definitions highlight that many people consider "bloating", abdominal distension or increased volume of intestinal gas, to be synonymous with the term flatulence (although this is technically inaccurate).
Colloquially, flatulence may be referred to as "farting", "pumping", "trumping", "blowing off", "pooting", "passing gas", "breaking wind", "backfiring", "tooting", "beefing", or simply (in American English) "gas" or (British English) "wind". Derived terms include vaginal flatulence, otherwise known as a queef. In rhyming slang, blowing a raspberry (at someone) means imitating with the mouth the sound of a fart, in real or feigned derision.
Signs and symptoms
Generally speaking, there are four different types of complaints that relate to intestinal gas, which may present individually or in combination.
Bloating and pain
Patients may complain of bloating as abdominal distension, discomfort and pain from "trapped wind". In the past, functional bowel disorders such as irritable bowel syndrome that produced symptoms of bloating were attributed to increased production of intestinal gas.
However, three significant pieces of evidence refute this theory. First, in normal subjects, even very high rates of gas infusion into the small intestine (30 mL/min) are tolerated without complaints of pain or bloating and harmlessly passed as flatus per rectum. Second, studies aiming to quantify the total volume of gas produced by patients with irritable bowel syndrome (some including gas emitted from the mouth by eructation) have consistently failed to demonstrate increased volumes compared to healthy subjects. The proportion of hydrogen produced may be increased in some patients with irritable bowel syndrome, but this does not affect the total volume. Third, the volume of flatus produced by patients with irritable bowel syndrome who have pain and abdominal distension would be tolerated in normal subjects without any complaints of pain.
Patients who complain of bloating frequently can be shown to have objective increases in abdominal girth, often increased throughout the day and then resolving during sleep. The increase in girth combined with the fact that the total volume of flatus is not increased led to studies aiming to image the distribution of intestinal gas in patients with bloating. They found that gas was not distributed normally in these patients: there was segmental gas pooling and focal distension. In conclusion, abdominal distension, pain and bloating symptoms are the result of abnormal intestinal gas dynamics rather than increased flatus production.
Excessive volume
The range of volumes of flatus in normal individuals varies hugely (476–1,491 mL/24 h). All intestinal gas is either swallowed environmental air, present intrinsically in foods and beverages, or the result of gut fermentation.
Swallowing small amounts of air occurs while eating and drinking. This is emitted from the mouth by eructation (burping) and is normal. Excessive swallowing of environmental air is called aerophagia, and has been shown in a few case reports to be responsible for increased flatus volume. This is, however, considered a rare cause of increased flatus volume. Gases contained in food and beverages are likewise emitted largely through eructation, e.g., carbonated beverages.
Endogenously produced intestinal gases make up 74 percent of flatus in normal subjects. The volume of gas produced is partially dependent upon the composition of the intestinal microbiota, which is normally very resistant to change, but is also very different in different individuals. Some patients are predisposed to increased endogenous gas production by virtue of their gut microbiota composition. The greatest concentration of gut bacteria is in the colon, while the small intestine is normally nearly sterile. Fermentation occurs when unabsorbed food residues arrive in the colon.
Therefore, even more than the composition of the microbiota, diet is the primary factor that dictates the volume of flatus produced. Diets that aim to reduce the amount of undigested fermentable food residues arriving in the colon have been shown to significantly reduce the volume of flatus produced. Again, increased volume of intestinal gas will not cause bloating and pain in normal subjects. Abnormal intestinal gas dynamics will create pain, distension, and bloating, regardless of whether there is high or low total flatus volume.
Odor
Although flatus possesses an odor, this may be abnormally increased in some patients and cause social distress to the patient. Increased odor of flatus presents a distinct clinical issue from other complaints related to intestinal gas. Some patients may exhibit over-sensitivity to bad flatus odor, and in extreme forms, olfactory reference syndrome may be diagnosed. Recent informal research found a correlation between flatus odor and both loudness and humidity content.
Incontinence of flatus
"Gas incontinence" could be defined as loss of voluntary control over the passage of flatus. It is a recognised subtype of faecal incontinence, and is usually related to minor disruptions of the continence mechanisms. Some consider gas incontinence to be the first, sometimes only, symptom of faecal incontinence.
Cause
Intestinal gas is composed of varying quantities of exogenous sources and endogenous sources. The exogenous gases are swallowed (aerophagia) when eating or drinking or increased swallowing during times of excessive salivation (as might occur when nauseated or as the result of gastroesophageal reflux disease). The endogenous gases are produced either as a by-product of digesting certain types of food, or of incomplete digestion, as is the case during steatorrhea. Anything that causes food to be incompletely digested by the stomach or small intestine may cause flatulence when the material arrives in the large intestine, due to fermentation by yeast or prokaryotes normally or abnormally present in the gastrointestinal tract.
Flatulence-producing foods are typically high in certain polysaccharides, especially oligosaccharides such as inulin. Those foods include beans, lentils, dairy products, onions, garlic, spring onions, leeks, turnips, swedes, radishes, sweet potatoes, potatoes, cashews, Jerusalem artichokes, oats, wheat, and yeast in breads. Cauliflower, broccoli, cabbage, Brussels sprouts and other cruciferous vegetables that belong to the genus Brassica are commonly reputed to not only increase flatulence, but to increase the pungency of the flatus.
In beans, endogenous gases seem to arise from complex oligosaccharides (carbohydrates) that are particularly resistant to digestion by mammals, but are readily digestible by microorganisms (methane-producing archaea; Methanobrevibacter smithii) that inhabit the digestive tract. These oligosaccharides pass through the small intestine largely unchanged, and when they reach the large intestine, bacteria ferment them, producing copious amounts of flatus.
When excessive or malodorous, flatus can be a sign of a health disorder, such as irritable bowel syndrome, celiac disease, non-celiac gluten sensitivity or lactose intolerance. It can also be caused by certain medicines, such as ibuprofen, laxatives, antifungal medicines or statins. Some infections, such as giardiasis, are also associated with flatulence.
Interest in the causes of flatulence was spurred by high-altitude flight and human spaceflight; the low atmospheric pressure, confined conditions, and stresses peculiar to those endeavours were cause for concern. In the field of mountaineering, the phenomenon of high altitude flatus expulsion was first recorded over two hundred years ago.
Mechanism
Production, composition, and odor
Flatus (intestinal gas) is mostly produced as a byproduct of bacterial fermentation in the gastrointestinal (GI) tract, especially the colon. There are reports of aerophagia (excessive air swallowing) causing excessive intestinal gas, but this is considered rare.
Over 99% of the volume of flatus is composed of odorless gases. These include oxygen, nitrogen, carbon dioxide, hydrogen and methane. Nitrogen is not produced in the gut, but a component of environmental air. Patients who have excessive intestinal gas that is mostly composed of nitrogen have aerophagia. Hydrogen, carbon dioxide and methane are all produced in the gut and contribute 74% of the volume of flatus in normal subjects. Methane and hydrogen are flammable, and so flatus can be ignited if it contains adequate amounts of these components.
Not all humans produce flatus that contains methane. For example, in one study of the faeces of nine adults, only five of the samples contained archaea capable of producing methane. The prevalence of methane over hydrogen in human flatus may correlate with obesity, constipation and irritable bowel syndrome, as archaea that oxidise hydrogen into methane promote the metabolism's ability to absorb fatty acids from food.
The remaining trace (<1% volume) compounds contribute to the odor of flatus. Historically, compounds such as indole, skatole, ammonia and short chain fatty acids were thought to cause the odor of flatus. More recent evidence proves that the major contribution to the odor of flatus comes from a combination of volatile sulfur compounds. Hydrogen sulfide, methyl mercaptan (also known as methanethiol), dimethyl sulfide, dimethyl disulfide and dimethyl trisulfide are present in flatus. The benzopyrrole volatiles indole and skatole have an odor of mothballs, and therefore probably do not contribute greatly to the characteristic odor of flatus.
In one study, hydrogen sulfide concentration was shown to correlate convincingly with perceived bad odor of flatus, followed by methyl mercaptan and dimethyl sulfide. This is supported by the fact that hydrogen sulfide may be the most abundant volatile sulfur compound present. These results were generated from subjects who were eating a diet high in pinto beans to stimulate flatus production.
Others report that methyl mercaptan was the greatest contributor to the odor of flatus in patients not under any specific dietary alterations. It has now been demonstrated that methyl mercaptan, dimethyl sulfide, and hydrogen sulfide (described as decomposing vegetables, unpleasantly sweet/wild radish and rotten eggs respectively) are all present in human flatus in concentrations above their smell perception thresholds.
It is recognized that increased dietary sulfur-containing amino acids significantly increases the odor of flatus. It is therefore likely that the odor of flatus is created by a combination of volatile sulfur compounds, with minimal contribution from non-sulfur volatiles. This odor can also be caused by the presence of large numbers of microflora bacteria or the presence of faeces in the rectum. Diets high in protein, especially sulfur-containing amino acids, have been demonstrated to significantly increase the odor of flatus.
Volume and intestinal gas dynamics
Normal flatus volume is 476 to 1491 mL per 24 hours. This variability between individuals is greatly dependent upon diet. Similarly, the number of flatus episodes per day is variable; the normal range is given as 8–20 per day. The volume of flatus associated with each flatulence event again varies (5–375 mL). The volume of the first flatulence upon waking in the morning is significantly larger than those during the day. This may be due to buildup of intestinal gas in the colon during sleep, the peak in peristaltic activity in the first few hours after waking or the strong prokinetic effect of rectal distension on the rate of transit of intestinal gas. It is now known that gas is moved along the gut independently of solids and liquids, and this transit is more efficient in the erect position compared to when supine. It is thought that large volumes of intestinal gas present low resistance, and can be propelled by subtle changes in gut tone, capacitance and proximal contraction and distal relaxation. This process is thought not to affect solid and liquid intra-lumenal contents.
Researchers investigating the role of sensory nerve endings in the anal canal did not find them to be essential for retaining fluids in the anus, and instead speculate that their role may be to distinguish between flatus and faeces, thereby helping detect a need to defecate or to signal the end of defecation.
The sound varies depending on the volume of gas, the size of the opening that the air is being pushed through, which is affected by the state of tension in the sphincter muscle, and the force or velocity of the gas being propelled, as well as other factors, such as whether the gas was caused by swallowed air. Among humans, flatulence occasionally happens accidentally, such as incidentally to coughing or sneezing or during orgasm; on other occasions, flatulence can be voluntarily elicited by tensing the rectum or "bearing down" on stomach or bowel muscles and subsequently relaxing the anal sphincter, resulting in the expulsion of flatus.
Management
Since problems involving intestinal gas present as different (but sometimes combined) complaints, the management is cause-related.
Pain and bloating
While not affecting the production of the gases themselves, surfactants (agents that lower surface tension) can reduce the disagreeable sensations associated with flatulence, by aiding the dissolution of the gases into liquid and solid faecal matter. Preparations containing simethicone reportedly operate by promoting the coalescence of smaller bubbles into larger ones more easily passed from the body, either by burping or flatulence. Such preparations do not decrease the total amount of gas generated in or passed from the colon, but make the bubbles larger and thereby allowing them to be passed more easily.
Other drugs including prokinetics, lubiprostone, antibiotics and probiotics are also used to treat bloating in patients with functional bowel disorders such as irritable bowel syndrome, and there is some evidence that these measures may reduce symptoms.
A flexible tube, inserted into the rectum, can be used to collect intestinal gas in a flatus bag. This method is occasionally needed in a hospital setting, when the patient is unable to pass gas normally.
Volume
One method of reducing the volume of flatus produced is dietary modification, reducing the amount of fermentable carbohydrates. This is the theory behind diets such as the low-FODMAP diet (a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, alcohols, and polyols).
Most starches, including potatoes, corn, noodles, and wheat, produce gas as they are broken down in the large intestine. Intestinal gas can be reduced by fermenting the beans, and making them less gas-inducing, or by cooking them in the liquor from a previous batch. For example, the fermented bean product miso is less likely to produce as much intestinal gas. Some legumes also stand up to prolonged cooking, which can help break down the oligosaccharides into simple sugars. Fermentative lactic acid bacteria such as Lactobacillus casei and Lactobacillus plantarum reduce flatulence in the human intestinal tract.
Probiotics (live yogurt, kefir, etc.) are reputed to reduce flatulence when used to restore balance to the normal intestinal flora. Live (bioactive) yogurt contains, among other lactic bacteria, Lactobacillus acidophilus, which may be useful in reducing flatulence. L. acidophilus may make the intestinal environment more acidic, supporting a natural balance of the fermentative processes. L. acidophilus is available in supplements. Prebiotics, which generally are non-digestible oligosaccharides, such as fructooligosaccharide, generally increase flatulence in a similar way as described for lactose intolerance.
Digestive enzyme supplements may significantly reduce the amount of flatulence caused by some components of foods not being digested by the body and thereby promoting the action of microbes in the small and large intestines. It has been suggested that alpha-galactosidase enzymes, which can digest certain complex sugars, are effective in reducing the volume and frequency of flatus. The enzymes alpha-galactosidase, lactase, amylase, lipase, protease, cellulase, glucoamylase, invertase, malt diastase, pectinase, and bromelain are available, either individually or in combination blends, in commercial products.
The antibiotic rifaximin, often used to treat diarrhea caused by the microorganism E. coli, may reduce both the production of intestinal gas and the frequency of flatus events.
Odor
Bismuth
The odor created by flatulence is commonly treated with bismuth subgallate, available under the name Devrom. Bismuth subgallate is commonly used by individuals who have had ostomy surgery, bariatric surgery, faecal incontinence and irritable bowel syndrome. Bismuth subsalicylate is a compound that binds hydrogen sulfide; one study reported that a dose of 524 mg of bismuth subsalicylate four times a day for 3–7 days yielded a >95% reduction in faecal hydrogen sulfide release in both humans and rats.
Another bismuth compound, bismuth subnitrate was also shown to bind to hydrogen sulfide. Another study showed that bismuth acted synergistically with various antibiotics to inhibit sulfate-reducing gut bacteria and sulfide production. Some authors proposed a theory that hydrogen sulfide was involved in the development of ulcerative colitis and that bismuth might be helpful in the management of this condition. However, bismuth administration in rats did not prevent them from developing ulcerative colitis despite reduced hydrogen sulfide production. Also, evidence suggests that colonic hydrogen sulfide is largely present in bound forms, probably sulfides of iron and other metals. Rarely, serious bismuth toxicity may occur with higher doses.
Activated charcoal
Despite being an ancient treatment for various digestive complaints, activated charcoal did not produce a reduction in either the total flatus volume or the release of sulfur-containing gases, and there was no reduction in abdominal symptoms (after 0.52 g of activated charcoal four times a day for one week). The authors suggested that saturation of charcoal binding sites during its passage through the gut was the reason for this. A further study concluded that activated charcoal (4 g) does not influence gas formation in vitro or in vivo. Other authors reported that activated charcoal was effective. A study in 8 dogs concluded activated charcoal (unknown oral dose) reduced hydrogen sulfide levels by 71%. In combination with Yucca schidigera and zinc acetate, this was increased to an 86% reduction in hydrogen sulfide, although flatus volume and number were unchanged. An early study reported activated charcoal (unknown oral dose) prevented a large increase in the number of flatus events and increased breath hydrogen concentrations that normally occur following a gas-producing meal.
Garments and external devices
In 1998, Chester "Buck" Weimer of Pueblo, Colorado, received a patent for the first undergarment that contained a replaceable charcoal filter. The undergarments are air-tight and provide a pocketed escape hole in which a charcoal filter can be inserted. In 2001 Weimer received the Ig Nobel Prize for Biology for his invention.
A similar product was released in 2002, but rather than an entire undergarment, consumers are able to purchase an insert similar to a pantiliner that contains activated charcoal. The inventors, Myra and Brian Conant of Mililani, Hawaii, still claim on their website to have discovered the undergarment product in 2002 (four years after Chester Weimer filed for a patent for his product), but state that their tests "concluded" that they should release an insert instead.
Incontinence
Flatus incontinence, where there is involuntary passage of gas, is a type of faecal incontinence, and is managed similarly.
Society and culture
In many cultures, flatulence in public is regarded as embarrassing, but, depending on context, may also be considered humorous. People will often strain to hold in the passing of gas when in polite company, or position themselves to silence or conceal the passing of gas. In other cultures, it may be no more embarrassing than coughing.
While the act of passing flatus in some cultures is generally considered to be an unfortunate occurrence in public settings, flatulence may, in casual circumstances and especially among children, be used as either a humorous supplement to a joke ("pull my finger"), or as a comic activity in and of itself. The social acceptability of flatulence-based humour in entertainment and the mass media varies over the course of time and between cultures. A sufficient number of entertainers have performed using their flatus to lead to the coining of the term flatulist. The whoopee cushion is a joking device invented in the early 20th century for simulating a fart. In 2008, a farting application for the iPhone earned nearly $10,000 in one day.
A farting game named Touch Wood was documented by John Gregory Bourke in the 1890s. It was known as Safety in the 20th century in the U.S., and is still played by children as of 2011.
In January 2011, the Malawi Minister of Justice, George Chaponda, said that Air Fouling Legislation would make public "farting" illegal in his country. When reporting the story, the media satirised Chaponda's statement with punning headlines. Later, the minister withdrew his statement.
Environmental impact
Flatulence is often blamed as a significant source of greenhouse gases, owing to the erroneous belief that the methane released by livestock is in the flatus. While livestock account for around 20% of global methane emissions, 90–95% of that is released by exhaling or burping. In cows, gas and burps are produced by methane-generating microbes called methanogens, which live inside the cow's digestive system. Proposals for reducing methane production in cows include the feeding of supplements such as oregano and seaweed, and the genetic engineering of gut biome microbes to produce less methane.
Since New Zealand produces large amounts of agricultural products, it has the unique position of having higher methane emissions from livestock compared to other greenhouse gas sources. The New Zealand government is a signatory to the Kyoto Protocol and therefore attempts to reduce greenhouse emissions. To achieve this, an agricultural emissions research levy was proposed, which promptly became known as a "fart tax" or "flatulence tax". It encountered opposition from farmers, farming lobby groups and opposition politicians.
Entertainment
Historical comment on the ability to fart at will is observed as early as Saint Augustine's City of God (5th century A.D.). Augustine mentions "people who produce at will without any stench such rhythmical sounds from their fundament that they appear to be making music even from that quarter." Intentional passing of gas and its use as entertainment for others appear to have been somewhat well known in pre-modern Europe, according to mentions of it in medieval and later literature, including Rabelais.
Le Pétomane ("the Fartomaniac") was a famous French performer in the 19th century who, as well as many professional farters before him, did flatulence impressions and held shows. The performer Mr. Methane carries on le Pétomane's tradition today. Also, a 2002 fiction film Thunderpants revolves around a boy named Patrick Smash who has an ongoing flatulence problem from the time of his birth.
Since the 1970s, farting has increasingly been featured in film, especially comedies such as Blazing Saddles and Scooby-Doo.
In the popular vulgar cartoon series "South Park," characters sometimes watch a show-within-a-show called "The Terrance and Phillip Show" whose humor primarily revolves around flatulence.
Personal experiences
People find other people's flatus unpleasant, but are unfazed by, and may even enjoy, the scent of their own. While there has been little research carried out upon the subject, some speculative guesses have been made as to why this might be so. For example, one explanation for this phenomenon is that people are very familiar with the scent of their own flatus, and that survival in nature may depend on the detection of and reaction to foreign scents.
Some people have eproctophilia, a fetish for flatulence, deriving sexual gratification and pleasure from the sound, smell, or feeling of the gas, or any combination of the three.
See also
Antiflatulent
Armpit fart
Borborygmus
Eproctophilia
Fart lighting
Flatulence humor
The Gas We Pass
Terrance and Phillip
Tympany
Fart (word)
References
Citations
General and cited references
Allen, V. (2007). On Farting: Language and Laughter in the Middle Ages. Palgrave MacMillan. .
Persels, J., & Ganim, R. (2004). Fecal Matters in Early Modern Literature and Art: Studies in Scatology. (Chap. 1: "The Honorable Art of Farting in Continental Renaissance"). .
External links
The Merck Manual of Diagnosis and Therapy, Gas
Dictionary of Fart Slang
Invisible College of Experimental Flatology
Digestive system
Medical signs
Methane
Reflexes
Symptoms and signs: Digestive system and abdomen | Flatulence | [
"Chemistry",
"Biology"
] | 5,843 | [
"Greenhouse gases",
"Organ systems",
"Methane",
"Digestive system"
] |
11,258 | https://en.wikipedia.org/wiki/Fellatio | Fellatio (also known as fellation, and in slang as blowjob, BJ, giving head, or sucking off) is an oral sex act consisting of the stimulation of a penis by using the mouth. Oral stimulation of the scrotum may also be termed fellatio, or colloquially as teabagging.
It may be performed by a sexual partner as foreplay before other sexual activities, such as vaginal or anal intercourse, or as an erotic and physically intimate act of its own. Fellatio creates a risk of contracting sexually transmitted infections (STIs), but the risk is significantly lower than that of vaginal or anal sex, especially for HIV transmission.
Most countries do not have laws banning the practice of fellatio, though some cultures may consider it taboo. People may also refrain from engaging in fellatio due to personal preference, negative feelings, or sexual inhibitions. Commonly, people do not view oral sex as affecting the virginity of either partner, though opinions on the matter vary.
Etymology
The English noun fellatio comes from the Latin fellātus, the past participle of the verb fellāre, meaning "to suck". In fellatio, the -us is replaced by the -io while the declension stem ends in -ion-, which gives the suffix the form -ion (cf. French fellation). The -io(n) ending is used in English to create nouns from Latin adjectives and it can indicate a state or action wherein the Latin verb is being, or has been, performed.
Further English words have been created based on the same Latin root. A person who performs fellatio upon another (i.e. who fellates) may be termed a fellator. Latin's gender based declension means this word may be restricted to describing a male. The equivalent term for a female is fellatrix.
Practice
General
The essential aspect of fellatio is oral stimulation of the penis (including the shaft and glans) through sucking with the mouth, use of the tongue for licking, using the lips, or some combination. One method is the sex partner taking the penis into the mouth and moving smoothly up and down to a rhythm while being careful to avoid contact with the teeth. Fellatio also includes oral stimulation of the scrotum, whether licking, sucking or taking the entire scrotum into the mouth. During the act, orgasm may be achieved and semen may be ejaculated into the partner's mouth. When the penis is thrust into someone's mouth, it is called irrumatio, though the term is rarely used.
Performing fellatio can trigger the gag reflex.
It is physically possible for men with sufficient flexibility, penis size, or both, to perform fellatio on themselves as a form of masturbation, in an act called autofellatio. However, few men possess the flexibility and penis length to safely perform the necessary frontbend.
Deposition of semen
During fellatio, a partner may ingest semen from the penis. As late as 1976, some doctors were advising women in the eighth and ninth months of pregnancy not to swallow semen lest it induce premature labor, but it was later determined to be safe.
The receiver of fellatio typically becomes sexually aroused. Once the prerequisite level of sexual stimulation has been achieved and ejaculation becomes imminent, the semen may be discharged onto his partner. The male may position his penis prior to ejaculation so that semen will be deposited onto his partner's face (known as a "facial"), or other body part such as their neck, chest or breast.
Deep-throating
Deep-throating is a sexual act in which a person takes a partner's entire erect penis into the mouth and throat. The technique and term became popularized by the 1972 pornographic film Deep Throat. Generally, the person receiving fellatio is in control. For deep-throating, the penis must be long enough so that it can reach the back of the receiver's throat.
Deep-throating can be difficult because of the natural gag reflex that is triggered when the soft palate is touched. People have different sensitivities to this reflex. With practice, some learn to suppress it. Deep-throating leads to a different kind of sexual stimulation of the penis than regular fellatio: the tongue's movement is restricted during deep-throating and sucking becomes impossible; the tightness of the pharynx can intensely stimulate the glans of the penis.
Other aspects
The male receiving fellatio receives direct sexual stimulation, while his partner may derive satisfaction from giving him pleasure. Giving and receiving fellatio may happen simultaneously in sex positions like 69 and daisy chain.
Fellatio is sometimes practiced when vaginal or anal penetration would create a physical difficulty for a sex partner. For example, it may be practiced during pregnancy instead of vaginal intercourse by couples wishing to engage in intimate sexual activity while avoiding the difficulty of vaginal intercourse during later stages of pregnancy.
Other reasons why a woman may not wish to have vaginal intercourse include apprehension of losing her virginity or of becoming pregnant, or because she may be menstruating.
Health aspects
Sexually transmitted infections
Chlamydia, human papillomavirus (HPV), gonorrhea, herpes, hepatitis (multiple strains), and other sexually transmitted infections (STIs) can be transmitted through oral sex. Any sexual exchange of bodily fluids with a person infected with HIV, the virus that causes AIDS, poses a risk of infection. Risk of STI infection, however, is generally considered significantly lower for oral sex than for vaginal or anal sex, with HIV transmission considered the lowest risk with regard to oral sex.
There is an increased risk of STI transmission if the receiving partner has wounds on their genitals, or if the giving partner has wounds or open sores on or in their mouth, or bleeding gums. Brushing the teeth, flossing, or undergoing dental work soon before or after giving fellatio can also increase the risk of transmission, because all of these activities can cause small scratches in the lining of the mouth. These wounds, even when they are microscopic, increase the chances of contracting STIs that can be transmitted orally under these conditions. Such contact can also lead to more mundane infections from common bacteria and viruses found in, around and secreted from the genital regions. Because of the aforementioned factors, medical sources advise the use of condoms or other effective barrier methods when performing or receiving fellatio with a partner whose STI status is unknown.
HPV and oral cancer link
Links have been reported between oral sex and oral cancer with HPV-infected people.
A 2007 study suggested a correlation between oral sex and throat cancer. It is believed that this is due to the transmission of HPV, a virus that has been implicated in the majority of cervical cancers and which has been detected in throat cancer tissue in numerous studies. The study concludes that people who had one to five oral sex partners in their lifetime had approximately a doubled risk of throat cancer compared with those who never engaged in this activity and those with more than five oral sex partners had a 250 percent increased risk.
Pregnancy and semen exposure
Fellatio cannot result in pregnancy, as there is no way for ingested sperm to reach the uterus and fallopian tubes to fertilize an egg cell. At any rate, acids in the stomach and digestive enzymes in the gastrointestinal tract break down and kill spermatozoa.
Clinical research has tentatively linked fellatio with immune modulation, indicating it may reduce the chance of complications during pregnancy. The potentially fatal complication pre-eclampsia was observed to occur less in women who regularly engaged in fellatio, with those who also ingested their partner's semen being at the least risk. The results were consistent with the fact that semen contains TGF-β1, the exchange of which between partners has a causal reduction in risk of pre-eclampsia caused by an immunological reaction. However, fellatio is not the only viable mechanism for the transmission of TGF-β1.
Cultural views
Virginity
Oral sex is commonly used as a means of preserving virginity, especially among heterosexual pairings; this is sometimes termed technical virginity (which additionally includes anal sex, manual sex and other non-penetrative sex acts, but excludes penile-vaginal sex). The concept of "technical virginity" or sexual abstinence through oral sex is particularly popular among teenagers in the United States, including with regard to teenage girls who not only fellate their boyfriends to preserve their virginities, but also to create and maintain intimacy or to avoid pregnancy. Other reasons given for the practice among teenage girls are peer-group pressure and as their introduction to sexual activity. Additionally, gay males may regard fellatio as a way of maintaining their virginities, with penile-anal penetration defined as resulting in virginity loss, while other gay males may define fellatio as their main form of sexual activity.
Legality
Fellatio is legal in most countries. Laws of some jurisdictions regard fellatio as penetrative sex for the purposes of sexual offenses with regard to the act, but most countries do not have laws which ban the practice, in contrast to anal sex or extramarital sex. In Islamic literature, the only forms of sexual activity that are consistently explicitly prohibited within marriage are anal sex and sexual activity during menstruation. However, the exact attitude towards oral sex is a subject of disagreements between modern scholars of Islam. Authorities considering it "objectionable" do so because of the penis' supposedly impure fluids coming in contact with the mouth. Others emphasize that there is no decisive evidence to forbid oral sex.
In Malaysia, fellatio is illegal, but the law is seldom enforced. Under Malaysia's Section 377A of the Penal Code, the introduction of the penis into the anus or mouth of another person is considered a "carnal intercourse against the order of nature" and is punishable with imprisonment of 20 years maximum and whipping.
Tradition
Galienus called fellatio lesbiari since women of the island of Lesbos were supposed to have introduced the practice of using one's lips to give sexual pleasure.
The Ancient Indian Kama Sutra, dating from the first century AD, describes oral sex, discussing fellatio in great detail (the Kama Sutra has a chapter on auparishtaka (or oparishtaka), "mouth congress") and only briefly mentioning cunnilingus. However, according to the Kama Sutra, fellatio is above all a characteristic of eunuchs (or, according to other translations, of effeminate homosexuals or trans women similar to the modern Hijra of India), who use their mouths as a substitute for female genitalia.
Vātsyāyana, the author of the Kama Sutra, states that it is also practiced by "unchaste women", but mentions that there are widespread traditional concerns about this being a degrading or unclean practice, with known practitioners being evaded as love partners in large parts of the country. The author appears to somewhat agree with these attitudes, claiming that "a wise man" should not engage in that form of intercourse while acknowledging that it can be appropriate in some unspecified cases.
The Moche culture of ancient Peru depicted fellatio in their ceramics.
In some cultures, such as Cambodia, Chinese in Southeast Asia, northern Manchu tribes along the Amur River, Sambians in Papua New Guinea, Thailand, Telugus of India, Hawaii and other Pacific Islanders, briefly taking the penis of a male infant or toddler into one's mouth was considered a nonsexual form of affection or even a form of ritual, greeting, respect, parenting love, or lifesaving. According to some sources, it was an ancient Chinese custom for grandmothers, mothers, and elder sisters to calm their baby boys with fellatio. It has also been reported that some modern Chinese mothers have performed fellatio on their moribund sons as affection and a means of lifesaving, because they culturally believe that when the penis is completely retracted into the abdomen, the boy or man will die.
Other animals
Flying foxes (a type of bat) have been observed engaging in oral sex. Indian flying fox males will lick a female's vulva both before and after copulation, with the length of pre-copulation cunnilingus positively correlated with length of copulation. The fruit bat Cynopterus sphinx, has been observed to engage in fellatio during mating. Pairs spend more time copulating if the female licks the male than if she does not. Male Livingstone's fruit bats have been observed engaging in homosexual fellatio, although it is unknown if this is an example of sexual behavior or social grooming. Bonin flying foxes also engage in homosexual fellatio, but the behavior has been observed independently of social grooming.
See also
Bukkake
Cum shot
Cunnilingus – oral stimulation of the vulva
Facial (sexual act)
Fellatio in Halacha
Gokkun
Pearl necklace (sexuality)
Steak and Blowjob Day
References
Sexual acts
Human penis
Articles containing video clips
Oral eroticism | Fellatio | [
"Biology"
] | 2,726 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
11,274 | https://en.wikipedia.org/wiki/Elementary%20particle | In particle physics, an elementary particle or fundamental particle is a subatomic particle that is not composed of other particles. The Standard Model presently recognizes seventeen distinct particles—twelve fermions and five bosons. As a consequence of flavor and color combinations and antimatter, the fermions and bosons are known to have 48 and 13 variations, respectively. The 61 elementary particles embraced by the Standard Model include electrons and other leptons, quarks, and the fundamental bosons. Subatomic particles such as protons or neutrons, which contain two or more elementary particles, are known as composite particles.
Ordinary matter is composed of atoms, themselves once thought to be indivisible elementary particles. The name atom comes from the Ancient Greek word ἄτομος (atomos) which means indivisible or uncuttable. Despite the theories about atoms that had existed for thousands of years, the factual existence of atoms remained controversial until 1905. In that year, Albert Einstein published his paper on Brownian motion, putting to rest theories that had regarded molecules as mathematical illusions. Einstein subsequently identified matter as ultimately composed of various concentrations of energy.
Subatomic constituents of the atom were first identified toward the end of the 19th century, beginning with the electron, followed by the proton in 1919, the photon in the 1920s, and the neutron in 1932. By that time, the advent of quantum mechanics had radically altered the definition of a "particle" by putting forward an understanding in which they carried out a simultaneous existence as matter waves.
Many theoretical elaborations upon, and beyond, the Standard Model have been made since its codification in the 1970s. These include notions of supersymmetry, which double the number of elementary particles by hypothesizing that each known particle associates with a "shadow" partner far more massive. However, like an additional elementary boson mediating gravitation, such superpartners remain undiscovered as of 2013.
Overview
All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons.
In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited by its omission of gravitation and has some parameters arbitrarily added but unexplained.
Cosmic abundance of elementary particles
According to the current models of Big Bang nucleosynthesis, the primordial composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4 (in mass). Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other common elementary particles (such as electrons, neutrinos, or weak bosons) are so light or so rare when compared to atomic nuclei, we can neglect their mass contribution to the observable universe's total mass. Therefore, one can conclude that most of the visible mass of the universe consists of protons and neutrons, which, like all baryons, in turn consist of up quarks and down quarks.
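As a quick arithmetic check of the quark content stated above, the familiar charges of the proton and neutron follow directly from the standard quark charges of +2/3 e (up) and −1/3 e (down); a minimal sketch in Python:

```python
from fractions import Fraction

# Standard quark charges in units of the elementary charge e.
UP, DOWN = Fraction(2, 3), Fraction(-1, 3)

def charge(*quarks):
    """Total electric charge of a hadron from its quark content."""
    return sum(quarks)

print(charge(UP, UP, DOWN))    # proton:  2/3 + 2/3 - 1/3 = 1
print(charge(UP, DOWN, DOWN))  # neutron: 2/3 - 1/3 - 1/3 = 0
```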
Some estimates imply that there are roughly 10^80 baryons (almost entirely protons and neutrons) in the observable universe.
The number of protons in the observable universe is called the Eddington number.
In terms of number of particles, some estimates imply that nearly all the matter, excluding dark matter, occurs in neutrinos, which constitute the majority of the elementary particles of matter that exist in the visible universe. Other estimates imply that the elementary particles in the visible universe (not including dark matter) are mostly photons and other massless force carriers.
Standard Model
The Standard Model of particle physics contains 12 flavors of elementary fermions, plus their corresponding antiparticles, as well as elementary bosons that mediate the forces and the Higgs boson, which was reported on July 4, 2012, as having been likely detected by the two main experiments at the Large Hadron Collider (ATLAS and CMS). The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, however, since it is not known if it is compatible with Einstein's general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles.
Fundamental fermions
The 12 fundamental fermions are divided into 3 generations of 4 particles each. Half of the fermions are leptons, three of which have an electric charge of −1 e, called the electron (), the muon (), and the tau (); the other three leptons are neutrinos (, , ), which are the only elementary fermions with neither electric nor color charge. The remaining six particles are quarks (discussed below).
Generations
Mass
The following table lists current measured masses and mass estimates for all the fermions, using the same scale of measure: millions of electron-volts relative to square of light speed (MeV/c2). For example, the most accurately known quark mass is that of the top quark (t), estimated using the on-shell scheme.
Estimates of the values of quark masses depend on the version of quantum chromodynamics used to describe quark interactions. Quarks are always confined in an envelope of gluons that confer vastly greater mass to the mesons and baryons where quarks occur, so values for quark masses cannot be measured directly. Since their masses are so small compared to the effective mass of the surrounding gluons, slight differences in the calculation make large differences in the masses.
Antiparticles
There are also 12 fundamental fermionic antiparticles that correspond to these 12 particles. For example, the antielectron (positron) is the electron's antiparticle and has an electric charge of +1 e.
Quarks
Isolated quarks and antiquarks have never been detected, a fact explained by confinement. Every quark carries one of three color charges of the strong interaction; antiquarks similarly carry anticolor. Color-charged particles interact via gluon exchange in the same way that charged particles interact via photon exchange. Gluons are themselves color-charged, however, resulting in an amplification of the strong force as color-charged particles are separated. Unlike the electromagnetic force, which diminishes as charged particles separate, color-charged particles feel increasing force.
Nonetheless, color-charged particles may combine to form color neutral composite particles called hadrons. A quark may pair up with an antiquark: the quark has a color and the antiquark has the corresponding anticolor. The color and anticolor cancel out, forming a color neutral meson. Alternatively, three quarks can exist together, one quark being "red", another "blue", another "green". These three colored quarks together form a color-neutral baryon. Symmetrically, three antiquarks with the colors "antired", "antiblue" and "antigreen" can form a color-neutral antibaryon.
Quarks also carry fractional electric charges, but, since they are confined within hadrons whose charges are all integral, fractional charges have never been isolated. Note that quarks have electric charges of either +2/3 e or −1/3 e, whereas antiquarks have corresponding electric charges of either −2/3 e or +1/3 e.
Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks.
Fundamental bosons
In the Standard Model, vector (spin-1) bosons (gluons, photons, and the W and Z bosons) mediate forces, whereas the Higgs boson (spin-0) is responsible for the intrinsic mass of particles. Bosons differ from fermions in that multiple bosons can occupy the same quantum state, whereas fermions cannot (the Pauli exclusion principle). Also, bosons can be either elementary, like photons, or a combination, like mesons. The spins of bosons are integers instead of half integers.
Gluons
Gluons mediate the strong interaction, which joins quarks and thereby forms hadrons, which are either baryons (three quarks) or mesons (one quark and one antiquark). Protons and neutrons are baryons, joined by gluons to form the atomic nucleus. Like quarks, gluons carry color and anticolor (unrelated to the concept of visual color; rather, labels for the particles' strong interactions), sometimes in combinations, giving eight varieties of gluon altogether.
Electroweak bosons
There are three weak gauge bosons: W+, W−, and Z0; these mediate the weak interaction. The W bosons are known for their mediation in nuclear decay: The W− converts a neutron into a proton then decays into an electron and electron-antineutrino pair.
The Z0 does not convert particle flavor or charges, but rather changes momentum; it is the only mechanism for elastically scattering neutrinos. The weak gauge bosons were discovered due to momentum change in electrons from neutrino-Z exchange. The massless photon mediates the electromagnetic interaction. These four gauge bosons form the electroweak interaction among elementary particles.
Higgs boson
Although the weak and electromagnetic forces appear quite different to us at everyday energies, the two forces are theorized to unify as a single electroweak force at high energies. This prediction was clearly confirmed by measurements of cross-sections for high-energy electron-proton scattering at the HERA collider at DESY. The differences at low energies are a consequence of the high masses of the W and Z bosons, which in turn are a consequence of the Higgs mechanism. Through the process of spontaneous symmetry breaking, the Higgs selects a special direction in electroweak space that causes three electroweak particles to become very heavy (the weak bosons) and one to remain with an undefined rest mass as it is always in motion (the photon). On 4 July 2012, after many years of experimentally searching for evidence of its existence, the Higgs boson was announced to have been observed at CERN's Large Hadron Collider. Peter Higgs, who first posited the existence of the Higgs boson, was present at the announcement. The Higgs boson is believed to have a mass of approximately 125 GeV/c2. The statistical significance of this discovery was reported as 5 sigma, which implies a certainty of roughly 99.99994%. In particle physics, this is the level of significance required to officially label experimental observations as a discovery. Research into the properties of the newly discovered particle continues.
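As a rough check of the quoted figures, the probability of a 5 sigma statistical fluctuation can be computed from the Gaussian error function alone; the 99.99994% certainty quoted above corresponds to the two-sided tail:

```python
from math import erf, sqrt

def gaussian_tail(sigma):
    """One-sided probability of a fluctuation beyond `sigma` standard deviations."""
    return 0.5 * (1 - erf(sigma / sqrt(2)))

p_one_sided = gaussian_tail(5)   # ~2.87e-7, the p-value usually quoted for 5 sigma
p_two_sided = 2 * p_one_sided    # ~5.7e-7
print(f"{(1 - p_two_sided) * 100:.5f}%")  # 99.99994%
```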
Graviton
The graviton is a hypothetical elementary spin-2 particle proposed to mediate gravitation. While it remains undiscovered due to the difficulty inherent in its detection, it is sometimes included in tables of elementary particles. The conventional graviton is massless, although some models containing massive Kaluza–Klein gravitons exist.
Beyond the Standard Model
Although experimental evidence overwhelmingly confirms the predictions derived from the Standard Model, some of its parameters were added arbitrarily rather than being determined by a particular explanation, and they remain mysterious; the hierarchy problem is one instance. Theories beyond the Standard Model attempt to resolve these shortcomings.
Grand unification
One extension of the Standard Model attempts to combine the electroweak interaction with the strong interaction into a single 'grand unified theory' (GUT). Such a force would be spontaneously broken into the three forces by a Higgs-like mechanism. This breakdown is theorized to occur at high energies, making it difficult to observe unification in a laboratory. The most dramatic prediction of grand unification is the existence of X and Y bosons, which cause proton decay. The non-observation of proton decay at the Super-Kamiokande neutrino observatory rules out the simplest GUTs, however, including SU(5) and SO(10).
Supersymmetry
Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos, and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders would not be powerful enough to produce them. Some physicists believe that sparticles will be detected by the Large Hadron Collider at CERN.
String theory
String theory is a model of physics whereby all "particles" that make up matter are composed of strings (measuring at the Planck length) that exist in an 11-dimensional (according to M-theory, the leading version) or 12-dimensional (according to F-theory) universe. These strings vibrate at different frequencies that determine mass, electric charge, color charge, and spin. A "string" can be open (a line) or closed in a loop (a one-dimensional sphere, that is, a circle). As a string moves through space it sweeps out something called a world sheet. String theory predicts 1- to 10-branes (a 1-brane being a string and a 10-brane being a 10-dimensional object) that prevent tears in the "fabric" of space using the uncertainty principle (e.g., the electron orbiting a hydrogen atom has the probability, albeit small, that it could be anywhere else in the universe at any given moment).
String theory proposes that our universe is merely a 4-brane, inside which exist the three space dimensions and the one time dimension that we observe. The remaining 7 theoretical dimensions either are very tiny and curled up (and too small to be macroscopically accessible) or simply do not/cannot exist in our universe (because they exist in a grander scheme called the "multiverse" outside our known universe).
Some predictions of string theory include the existence of extremely massive counterparts of ordinary particles, due to vibrational excitations of the fundamental string, and the existence of a massless spin-2 particle behaving like the graviton.
Technicolor
Technicolor theories try to modify the Standard Model in a minimal way by introducing a new QCD-like interaction. This means one adds a new theory of so-called Techniquarks, interacting via so-called Technigluons. The main idea is that the Higgs boson is not an elementary particle but a bound state of these objects.
Preon theory
According to preon theory there are one or more orders of particles more fundamental than those (or most of those) found in the Standard Model. The most fundamental of these are normally called preons, which is derived from "pre-quarks". In essence, preon theory tries to do for the Standard Model what the Standard Model did for the particle zoo that came before it. Most models assume that almost everything in the Standard Model can be explained in terms of three to six more fundamental particles and the rules that govern their interactions. Interest in preons has waned since the simplest models were experimentally ruled out in the 1980s.
Acceleron theory
Accelerons are the hypothetical subatomic particles that integrally link the newfound mass of the neutrino to the dark energy conjectured to be accelerating the expansion of the universe.
In this theory, neutrinos are influenced by a new force resulting from their interactions with accelerons, leading to dark energy. Dark energy results as the universe tries to pull neutrinos apart. Accelerons are thought to interact with matter more infrequently than they do with neutrinos.
See also
Asymptotic freedom
List of particles
Physical ontology
Quantum field theory
Quantum gravity
Quantum triviality
UV fixed point
Notes
Further reading
General readers
Textbooks
An undergraduate text for those not majoring in physics.
External links
The most important resource for the current experimental and theoretical knowledge of elementary particle physics is the Particle Data Group, where different international institutions collect all experimental data and give short reviews of the contemporary theoretical understanding.
Other pages include:
particleadventure.org, a well-made introduction also for non-physicists
CERNCourier: Season of Higgs and melodrama
Interactions.org, particle physics news
Symmetry Magazine, a joint Fermilab/SLAC publication
Elementary Particles made thinkable, an interactive visualisation allowing physical properties to be compared
Quantum mechanics
Quantum field theory
Subatomic particles | Elementary particle | [
"Physics"
] | 3,663 | [
"Quantum field theory",
"Matter",
"Elementary particles",
"Theoretical physics",
"Quantum mechanics",
"Particle physics",
"Nuclear physics",
"Atoms",
"Subatomic particles"
] |
11,299 | https://en.wikipedia.org/wiki/Fox | Foxes are small-to-medium-sized omnivorous mammals belonging to several genera of the family Canidae. They have a flattened skull; upright, triangular ears; a pointed, slightly upturned snout; and a long, bushy tail ("brush").
Twelve species belong to the monophyletic "true fox" group of genus Vulpes. Another 25 current or extinct species are sometimes called foxes – they are part of the paraphyletic group of the South American foxes or an outlying group, which consists of the bat-eared fox, gray fox, and island fox.
Foxes live on every continent except Antarctica. The most common and widespread species of fox is the red fox (Vulpes vulpes) with about 47 recognized subspecies. The global distribution of foxes, together with their widespread reputation for cunning, has contributed to their prominence in popular culture and folklore in many societies around the world. The hunting of foxes with packs of hounds, long an established pursuit in Europe, especially in the British Isles, was exported by European settlers to various parts of the New World.
Etymology
The word fox comes from Old English and derives from Proto-Germanic *fuhsaz. This in turn derives from Proto-Indo-European *puḱ- "thick-haired, tail." Male foxes are known as dogs, tods, or reynards; females as vixens; and young as cubs, pups, or kits, though the last term is not to be confused with the kit fox, a distinct species. "Vixen" is one of very few modern English words that retain the Middle English southern dialectal "v" pronunciation instead of "f"; i.e., northern English "fox" versus southern English "vox". A group of foxes is referred to as a skulk, leash, or earth.
Phylogenetic relationships
Within the Canidae, the results of DNA analysis show several phylogenetic divisions:
The fox-like canids, which include the kit fox (Vulpes velox), red fox (Vulpes vulpes), Cape fox (Vulpes chama), Arctic fox (Vulpes lagopus), and fennec fox (Vulpes zerda).
The wolf-like canids, (genus Canis, Cuon and Lycaon) including the dog (Canis lupus familiaris), gray wolf (Canis lupus), red wolf (Canis rufus), eastern wolf (Canis lycaon), coyote (Canis latrans), golden jackal (Canis aureus), Ethiopian wolf (Canis simensis), black-backed jackal (Canis mesomelas), side-striped jackal (Canis adustus), dhole (Cuon alpinus), and African wild dog (Lycaon pictus).
The South American canids, including the bush dog (Speothos venaticus), hoary fox (Lycalopex vetulus), crab-eating fox (Cerdocyon thous) and maned wolf (Chrysocyon brachyurus).
Various monotypic taxa, including the bat-eared fox (Otocyon megalotis), gray fox (Urocyon cinereoargenteus), and raccoon dog (Nyctereutes procyonoides).
Biology
General morphology
Foxes are generally smaller than some other members of the family Canidae such as wolves and jackals, while they may be larger than some within the family, such as raccoon dogs. In the largest species, the red fox, males weigh between , while the smallest species, the fennec fox, weighs just .
Fox features typically include a triangular face, pointed ears, an elongated rostrum, and a bushy tail. They are digitigrade (meaning they walk on their toes). Unlike most members of the family Canidae, foxes have partially retractable claws. Fox vibrissae, or whiskers, are black. The whiskers on the muzzle, known as mystacial vibrissae, average long, while the whiskers everywhere else on the head average to be shorter in length. Whiskers (carpal vibrissae) are also on the forelimbs and average long, pointing downward and backward. Other physical characteristics vary according to habitat and adaptive significance.
Pelage
Fox species differ in fur color, length, and density. Coat colors range from pearly white to black-and-white to black flecked with white or grey on the underside. Fennec foxes (and other species of fox adapted to life in the desert, such as kit foxes), for example, have large ears and short fur to aid in keeping the body cool. Arctic foxes, on the other hand, have tiny ears and short limbs as well as thick, insulating fur, which aid in keeping the body warm. Red foxes, by contrast, have a typical auburn pelt, the tail normally ending with a white marking.
A fox's coat color and texture may vary due to the change in seasons; fox pelts are richer and denser in the colder months and lighter in the warmer months. To get rid of the dense winter coat, foxes moult once a year around April; the process begins from the feet, up the legs, and then along the back. Coat color may also change as the individual ages.
Dentition
A fox's dentition, like that of all other canids, is I 3/3, C 1/1, PM 4/4, M 3/2 = 42. (Bat-eared foxes have six extra molars, totalling 48 teeth.) Foxes have pronounced carnassial pairs, which is characteristic of a carnivore. These pairs consist of the upper premolar and the lower first molar, and work together to shear tough material like flesh. Foxes' canines are pronounced, also characteristic of a carnivore, and are excellent for gripping prey.
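The dental formula above counts teeth per side of the upper and lower jaws, so the total is twice the sum of its entries; a small sketch (the (4, 4) molar split used for the bat-eared fox is an assumed distribution of the six extra molars, chosen only to match the stated total):

```python
def total_teeth(formula):
    """Total tooth count from per-side (upper, lower) pairs."""
    return 2 * sum(upper + lower for upper, lower in formula.values())

fox = {"I": (3, 3), "C": (1, 1), "PM": (4, 4), "M": (3, 2)}
print(total_teeth(fox))                    # 42
print(total_teeth({**fox, "M": (4, 4)}))   # 48, with the six extra molars
```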
Behaviour
In the wild, the typical lifespan of a fox is one to three years, although individuals may live up to ten years. Unlike many canids, foxes are not always pack animals. Typically, they live in small family groups, but some (such as Arctic foxes) are known to be solitary.
Foxes are omnivores. Their diet is made up primarily of invertebrates such as insects and small vertebrates such as reptiles and birds. They may also eat eggs and vegetation. Many species are generalist predators, but some (such as the crab-eating fox) have more specialized diets. Most species of fox consume around of food every day. Foxes cache excess food, burying it for later consumption, usually under leaves, snow, or soil. While hunting, foxes tend to use a particular pouncing technique, such that they crouch down to camouflage themselves in the terrain and then use their hind legs to leap up with great force and land on top of their chosen prey. Using their pronounced canine teeth, they can then grip the prey's neck and shake it until it is dead or can be readily disemboweled.
The gray fox is one of only two canine species known to regularly climb trees; the other is the raccoon dog.
Sexual characteristics
The male fox's scrotum is held up close to the body with the testes inside even after they descend. Like other canines, the male fox has a baculum, or penile bone. The testes of red foxes are smaller than those of Arctic foxes. Sperm formation in red foxes begins in August–September, with the testicles attaining their greatest weight in December–February.
Vixens are in heat for one to six days, making their reproductive cycle twelve months long. As with other canines, the ova are shed during estrus without the need for the stimulation of copulating. Once the egg is fertilized, the vixen enters a period of gestation that can last from 52 to 53 days. Foxes tend to have an average litter size of four to five with an 80 percent success rate in becoming pregnant. Litter sizes can vary greatly according to species and environment; the Arctic fox, for example, can have up to eleven kits.
The vixen usually has six or eight mammae. Each teat has 8 to 20 lactiferous ducts, which connect the mammary gland to the nipple, allowing for milk to be carried to the nipple.
Vocalization
The fox's vocal repertoire is vast, and includes:
Whine – Made shortly after birth. Occurs at a high rate when kits are hungry and when their body temperatures are low. Whining stimulates the mother to care for her young; it also has been known to stimulate the male fox into caring for his mate and kits.
Yelp – Made about 19 days later. The kits' whining turns into infantile barks, yelps, which occur heavily during play.
Explosive call – At the age of about one month, the kits can emit an explosive call which is intended to be threatening to intruders or other cubs; a high-pitched howl.
Combative call – In adults, the explosive call becomes an open-mouthed combative call during any conflict; a sharper bark.
Growl – An adult fox's indication to their kits to feed or head to the adult's location.
Bark – Adult foxes warn against intruders and in defense by barking.
In the case of domesticated foxes, the whining seems to remain in adult individuals as a sign of excitement and submission in the presence of their owners.
Classification
Canids commonly known as foxes include the following genera and species:
Conservation
Several fox species are endangered in their native environments. Pressures placed on foxes include habitat loss and being hunted for pelts, other trade, or control. Due in part to their opportunistic hunting style and industriousness, foxes are commonly resented as nuisance animals. Contrastingly, foxes, while often considered pests themselves, have been successfully employed to control pests on fruit farms while leaving the fruit intact.
Urocyon littoralis
The island fox, though considered a near-threatened species throughout the world, is becoming increasingly endangered in its endemic environment of the California Channel Islands. A population on an island is smaller than those on the mainland because of limited resources like space, food and shelter. Island populations are therefore highly susceptible to external threats ranging from introduced predatory species and humans to extreme weather.
On the California Channel Islands, it was found that the island fox population had fallen to such low numbers due to an outbreak of canine distemper virus from 1999 to 2000 as well as predation by non-native golden eagles. Since 1993, the eagles have caused the population to decline by as much as 95%. Because of the low number of foxes, the population went through an Allee effect (an effect in which, at low enough densities, an individual's fitness decreases). Conservationists had to take healthy breeding pairs out of the wild population to breed them in captivity until they had enough foxes to release back into the wild. Nonnative grazers were also removed so that native plants would be able to grow back to their natural height, thereby providing adequate cover and protection for the foxes against golden eagles.
Pseudalopex fulvipes
Darwin's fox was considered critically endangered because of their small known population of 250 mature individuals as well as their restricted distribution. However, the IUCN have since downgraded the conservation status from critically endangered in their 2004 and 2008 assessments to endangered in the 2016 assessment, following findings of a wider distribution than previously reported. On the Chilean mainland, the population is limited to Nahuelbuta National Park and the surrounding Valdivian rainforest. Similarly on Chiloé Island, their population is limited to the forests that extend from the southernmost to the northwesternmost part of the island. Though the Nahuelbuta National Park is protected, 90% of the species live on Chiloé Island.
A major issue the species faces is their dwindling, limited habitat due to the cutting and burning of the unprotected forests. Because of deforestation, the Darwin's fox's habitat is shrinking, while the open space preferred by its competitor, the chilla fox, is increasing; the Darwin's fox is subsequently being outcompeted. Another problem they face is their inability to fight off diseases transmitted by the increasing number of pet dogs. To conserve these animals, researchers suggest the need for the forests that link the Nahuelbuta National Park to the coast of Chile and in turn Chiloé Island and its forests, to be protected. They also suggest that other forests around Chile be examined to determine whether Darwin's foxes have previously existed there or can live there in the future, should the need to reintroduce the species to those areas arise. And finally, the researchers advise for the creation of a captive breeding program, in Chile, because of the limited number of mature individuals in the wild.
Relationships with humans
Foxes are often considered pests or nuisance creatures for their opportunistic attacks on poultry and other small livestock. Fox attacks on humans are not common.
Many foxes adapt well to human environments, with several species classified as "resident urban carnivores" for their ability to sustain populations entirely within urban boundaries. Foxes in urban areas can live longer and can have smaller litter sizes than foxes in non-urban areas. Urban foxes are ubiquitous in Europe, where they show altered behaviors compared to non-urban foxes, including increased population density, smaller territory, and pack foraging. Foxes have been introduced in numerous locations, with varying effects on indigenous flora and fauna.
In some countries, foxes are major predators of rabbits and hens. Population oscillations of these two species were the first nonlinear oscillation studied and led to the derivation of the Lotka–Volterra equation.
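For reference, the Lotka–Volterra predator–prey equations mentioned here take the standard form below, where x is the prey population, y the predator population, and α, β, γ, δ are positive rate constants:

```latex
\frac{dx}{dt} = \alpha x - \beta x y, \qquad
\frac{dy}{dt} = \delta x y - \gamma y
```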
As food
Fox meat is edible, though it is not considered a common cuisine in any country.
Hunting
Fox hunting originated in the United Kingdom in the 16th century. Hunting with dogs is now banned in the United Kingdom, though hunting without dogs is still permitted. Red foxes were introduced into Australia in the early 19th century for sport, and have since become widespread through much of the country. They have caused population decline among many native species and prey on livestock, especially new lambs. Fox hunting is practiced as recreation in several other countries including Canada, France, Ireland, Italy, Russia, United States and Australia.
Domestication
There are many records of domesticated red foxes and others, but rarely of sustained domestication. A recent and notable exception is the Russian silver fox, which resulted in visible and behavioral changes, and is a case study of how an animal population can be molded to meet human domestication needs. The current group of domesticated silver foxes are the result of nearly fifty years of experiments in the Soviet Union and Russia to de novo domesticate the silver morph of the red fox. This selective breeding resulted in physical and behavioral traits appearing that are frequently seen in domestic cats, dogs, and other animals, such as pigmentation changes, floppy ears, and curly tails. Notably, the new foxes became more tame, allowing themselves to be petted, whimpering to get attention and sniffing and licking their caretakers.
Urban settings
Foxes are among the comparatively few mammals which have been able to adapt themselves to a certain degree to living in urban (mostly suburban) human environments. Their omnivorous diet allows them to survive on discarded food waste, and their skittish and often nocturnal nature means that they are often able to avoid detection, despite their larger size.
Urban foxes have been identified as threats to cats and small dogs, and for this reason there is often pressure to exclude them from these environments.
The San Joaquin kit fox is a highly endangered species that has, ironically, become adapted to urban living in the San Joaquin Valley and Salinas Valley of southern California. Its diet includes mice, ground squirrels, rabbits, hares, bird eggs, and insects, and it has claimed habitats in open areas, golf courses, drainage basins, and school grounds.
Though rare, bites by foxes have been reported; in 2018, a woman in Clapham, London was bitten on the arm by a fox after she had left the door to her flat open.
In popular culture
The fox appears in many cultures, usually in folklore. There are slight variations in their depictions. In European, Persian, East Asian, and Native American folklore, foxes are symbols of cunning and trickery, a reputation derived especially from their noted ability to evade hunters. This is usually represented as a character possessing these traits. These traits are used on a wide variety of characters, either making them a nuisance to the story, a misunderstood hero, or a devious villain.
In East Asian folklore, foxes are depicted as familiar spirits possessing magic powers. Similar to in the folklore of other regions, foxes are portrayed as mischievous, usually tricking other people, with the ability to disguise as an attractive female human. Others depict them as mystical, sacred creatures who can bring wonder or ruin. Nine-tailed foxes appear in Chinese folklore, literature, and mythology, in which, depending on the tale, they can be a good or a bad omen. The motif was eventually introduced from Chinese to Japanese and Korean cultures.
The constellation Vulpecula represents a fox.
Notes
References
External links
BBC Wales Nature: Fox videos
Mammal common names
Paraphyletic groups | Fox | [
"Biology"
] | 3,602 | [
"Phylogenetics",
"Paraphyletic groups"
] |
11,344 | https://en.wikipedia.org/wiki/First-order%20predicate | In mathematical logic, a first-order predicate is a predicate that takes only individual constants or variables as its argument(s). Compare second-order predicate and higher-order predicate.
This is not to be confused with a one-place predicate or monad, which is a predicate that takes only one argument. For example, the expression "is a planet" is a one-place predicate, while the expression "is father of" is a two-place predicate.
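A short sketch of the notation may help separate arity (how many arguments a predicate takes) from order (what kind of arguments it takes); the predicate names below simply formalize the article's own examples:

```latex
\mathrm{Planet}(x)       % one-place, first-order: the argument x is an individual
\mathrm{FatherOf}(x, y)  % two-place, still first-order: both arguments are individuals
\exists P\, P(x)         % second-order: the predicate variable P is itself quantified
```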
See also
First-order predicate calculus
Monadic predicate calculus
References
Predicate logic
Concepts in logic | First-order predicate | [
"Mathematics"
] | 128 | [
"Mathematical logic",
"Predicate logic",
"Basic concepts in set theory"
] |
11,350 | https://en.wikipedia.org/wiki/Firewall%20%28construction%29 | A firewall is a fire-resistant barrier used to prevent the spread of fire. Firewalls are built between or through buildings, structures, or electrical substation transformers, or within an aircraft or vehicle.
Applications
Firewalls can be used to subdivide a building into separate fire areas and are constructed in accordance with the locally applicable building codes. Firewalls are a portion of a building's passive fire protection systems.
Firewalls can be used to separate high-value transformers at an electrical substation in the event of a mineral oil tank rupture and ignition. The firewall serves as a fire containment wall between one oil-filled transformer and other neighboring transformers, building structures, and site equipment.
Types
There are three main classifications of fire rated walls: fire walls, fire barriers, and fire partitions.
A firewall is an assembly of materials used to delay the spread of fire: a wall assembly with a prescribed fire-resistance duration and independent structural stability. This allows a building to be subdivided into smaller sections. If a section becomes structurally unstable due to fire or other causes, that section can break or fall away from the other sections in the building.
A fire barrier wall, or a fire partition, is a fire-rated wall assembly that is not structurally self-sufficient.
Fire barrier walls are typically continuous from an exterior wall to an exterior wall, or from a floor below to a floor or roof above, or from one fire barrier wall to another fire barrier wall, having a fire resistance rating equal to or greater than the required rating for the application. Fire barriers are continuous through concealed spaces (e.g., above a ceiling) to the floor deck or roof deck above the barrier. Fire partitions are not required to extend through concealed spaces if the construction assembly forming the bottom of the concealed space, such as the ceiling, has a fire resistance rating at least equal to or greater than the fire partition.
A high challenge fire wall is a wall used to subdivide a building with high fire challenge occupancies, having enhanced fire resistance ratings and enhanced appurtenance protection to prevent the spread of fire, and having structural stability.
Portions of structures that are subdivided by fire walls are permitted to be considered separate buildings, in that fire walls have sufficient structural stability to maintain the integrity of the wall in the event of the collapse of the building construction on either side of the wall.
Characteristics
Fire rating - Fire walls are constructed in such a way as to achieve a code-determined fire-resistance rating, thus forming part of a fire compartment's passive fire protection. Germany includes repeated impact-force testing of new fire wall systems; other codes require impact resistance on a performance basis.
Design loads – A fire wall must withstand a minimum prescribed design load, plus additional seismic loads.
Substation transformer firewalls are typically free-standing modular walls custom designed and engineered to meet application needs.
Building fire walls typically extend through the roof and terminate at a code-determined height above it. They are usually finished off on the top with flashing (sheet metal cap) for protection against the elements.
Materials
Building and structural fire walls in North America are usually made of concrete, concrete blocks, or reinforced concrete. Older fire walls, built prior to World War II, used brick materials.
Fire barrier walls are typically constructed of drywall or gypsum board partitions with wood or metal framed studs.
Penetrations – Penetrations through fire walls, such as for pipes and cables, must be protected with a listed firestop assembly designed to prevent the spread of fire through wall penetrations. Penetrations (holes) must not defeat the structural integrity of the wall, such that the wall cannot withstand the prescribed fire duration without threat of collapse.
Openings – Other openings in fire walls, such as doors and windows, must also be fire-rated fire door assemblies and fire window assemblies.
Performance based design
Firewalls are used in varied applications that require specific design and performance specifications. Knowing the potential conditions that may exist during a fire is critical to selecting and installing an effective firewall. For example, a firewall designed to meet National Fire Protection Association (NFPA) 221-09 section A.5.7, which specifies a given average temperature, is not designed to withstand the higher temperatures present in higher-challenge fires, and as a result would fail to function for the expected duration of the listed wall rating.
Performance based design takes into account the potential conditions during a fire. Understanding thermal limitations of materials is essential to using the correct material for the application. Laboratory testing is used to simulate fire scenarios and wall loading conditions. The testing results in an assigned listing number for the fire-rated assembly that defines the expected fire resistance duration and wall structural integrity under the tested conditions. Designers may elect to specify a listed fire wall assembly or design a wall system that would require performance testing to certify the expected protections before use of the designed fire-rated wall system.
High-voltage transformer fire barriers
Fire barriers are used around large electrical transformers as firewalls.
These barriers are used to isolate one transformer in case of fire or explosions, preventing fire propagation to neighboring transformers.
See also
Firebreak (forestry)
Fireproofing
Firestop (construction)
Firewall (engine)
Listing and approval use and compliance
High-voltage transformer fire barriers
Notes
External links
FAA Regulation about firewalls in aircraft
Firefighting
Passive fire protection
Types of wall | Firewall (construction) | [
"Engineering"
] | 1,099 | [
"Structural engineering",
"Types of wall"
] |
11,376 | https://en.wikipedia.org/wiki/Floating-point%20arithmetic | In computing, floating-point arithmetic (FP) is arithmetic on subsets of real numbers formed by a signed sequence of a fixed number of digits in some base, called a significand, scaled by an integer exponent of that base.
Numbers of this form are called floating-point numbers.
For example, the number 2469/200 = 12.345 is a floating-point number in base ten with five digits: it can be written as 12345 × 10^−3, with significand 12345 and exponent −3.
However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits—it needs six digits.
The nearest floating-point number with only five digits is 12.346.
And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits.
In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number.
For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
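Python's decimal module implements base-ten floating-point arithmetic with a configurable precision, so this rounding behavior can be reproduced directly; a minimal sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 5  # five significant base-ten digits, as in the examples above
print(Decimal("12.345") + Decimal("1.0001"))  # 13.345 (the exact sum 13.3451 is rounded)
print(Decimal("7716") / Decimal("625"))       # 12.346 (the exact quotient 12.3456 needs six digits)
```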
The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude, such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used in systems that must handle very small and very large real numbers with fast processing times. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.
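For binary64 arithmetic, this non-uniform spacing can be observed with math.ulp (available since Python 3.9), which returns the gap between a value and the next representable number of larger magnitude:

```python
import math

# The gap between consecutive representable doubles grows with the exponent.
for x in (1.0, 1e10, 1e300):
    print(x, math.ulp(x))
# ulp is ~2.2e-16 at 1.0, ~1.9e-06 at 1e10, and ~1.5e+284 at 1e300
```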
Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.
A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Overview
Floating-point numbers
A number representation specifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
In scientific notation, the given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10^5 seconds.
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
A signed (meaning positive or negative) digit string of a given length in a given base (or radix). This digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit. This article generally follows the convention that the radix point is set just after the most significant (leftmost) digit.
A signed integer exponent (also referred to as the characteristic, or scale), which modifies the magnitude of the number.
To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative.
Using base-10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is:
s / b^(p−1) × b^e,
where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten), and e is the exponent.
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point), base eight (octal floating point), base four (quaternary floating point), base three (balanced ternary floating point) and even base 256 and base 65,536.
A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45 × 10^3 is (145/100)×1000 or 145000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2 × 10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3, it is trivial (0.1 or 1 × 3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors.
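This is easy to observe in practice: asking Python for the exact rational value of the double-precision number closest to 1/5 reveals a power-of-two denominator rather than 5:

```python
from fractions import Fraction

# The binary double nearest to 1/5 is not exactly 1/5.
print(Fraction(0.2))                     # 3602879701896397/18014398509481984
print(Fraction(0.2) == Fraction(1, 5))   # False
```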
The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, p = 24, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are:
11001001 00001111 11011010 10100010 0
In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, the final bit of the third group shown above. The next bit, at position 24, is called the round bit or rounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding:
11001001 00001111 11011011
When this is stored in memory using the IEEE 754 encoding, this becomes the significand 1.10010010000111111011011. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows:
(sum_{n=0}^{p−1} bit_n × 2^−n) × 2^e = 1.10010010000111111011011_2 × 2^1 ≈ 3.1415927,
where p is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example).
It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only the digits 0 and 1), this non-zero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention, or the assumed bit convention.
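The rounding of π described above can be reproduced by packing a double into the IEEE 754 single-precision format and reading the bits back; a sketch using only the standard library:

```python
import math
import struct

# Round pi to single precision (24-bit significand) and back to a double.
pi32 = struct.unpack("<f", struct.pack("<f", math.pi))[0]
print(pi32)  # 3.1415927410125732, not 3.141592653589793

# Extract the 23 stored fraction bits and restore the hidden leading 1.
bits = struct.unpack("<I", struct.pack("<f", math.pi))[0]
significand = (bits & 0x7FFFFF) | 0x800000
print(bin(significand))  # 0b110010010000111111011011
```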
Alternatives to floating-point numbers
The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives:
Fixed-point representation uses integer hardware operations controlled by a software implementation of a specific convention about the location of the binary or decimal point, for example, 6 bits or digits from the right. The hardware to manipulate these representations is less costly than floating point, and it can be used to perform normal integer operations, too. Binary fixed point is usually used in special-purpose applications on embedded processors that can only do integer arithmetic, but decimal fixed point is common in commercial applications.
Logarithmic number systems (LNSs) represent a real number by the logarithm of its absolute value and a sign bit. The value distribution is similar to floating point, but the value-to-representation curve (i.e., the graph of the logarithm function) is smooth (except at 0). In contrast to floating-point arithmetic, in a logarithmic number system multiplication, division and exponentiation are simple to implement, but addition and subtraction are complex (see the sketch after this list). The (symmetric) level-index arithmetic (LI and SLI) of Charles Clenshaw, Frank Olver and Peter Turner is a scheme based on a generalized logarithm representation.
Tapered floating-point representation, used in Unum.
Some simple rational numbers (e.g., 1/3 and 1/10) cannot be represented exactly in binary floating point, no matter what the precision is. Using a different radix allows one to represent some of them (e.g., 1/10 in decimal floating point), but the possibilities remain limited. Software packages that perform rational arithmetic represent numbers as fractions with integral numerator and denominator, and can therefore represent any rational number exactly. Such packages generally need to use "bignum" arithmetic for the individual integers.
Interval arithmetic allows one to represent numbers as intervals and obtain guaranteed bounds on results. It is generally based on other arithmetics, in particular floating point.
Computer algebra systems such as Mathematica, Maxima, and Maple can often handle irrational numbers like or in a completely "formal" way (symbolic computation), without dealing with a specific encoding of the significand. Such a program can evaluate expressions like "" exactly, because it is programmed to process the underlying mathematics directly, instead of using approximate values for each intermediate calculation.
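As referenced in the logarithmic number system item above, multiplication in an LNS reduces to an addition of logarithms; a toy sketch, representing each value as a (sign, log2 magnitude) pair and converting back to a float only for display:

```python
import math

def lns(x):
    """Encode a nonzero float as (sign, log2 of magnitude)."""
    return (math.copysign(1.0, x), math.log2(abs(x)))

def lns_mul(a, b):
    # Signs multiply; logarithms of the magnitudes simply add.
    return (a[0] * b[0], a[1] + b[1])

def to_float(v):
    return v[0] * 2.0 ** v[1]

print(to_float(lns_mul(lns(6.0), lns(-7.0))))  # ~-42.0
```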
History
In 1914, the Spanish engineer Leonardo Torres Quevedo published Essays on Automatics, where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as n × 10^m, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be the same number of digits (e.g. six), the first digit of n will be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form: n; m." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through a typewriter, as was the case of his Electromechanical Arithmometer in 1920.
In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer; it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit. The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as , and it stops on undefined operations, such as .
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes ±∞ and NaN representations, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable.
The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers.
The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers.
The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.
The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations:
Single precision: 36 bits, organized as a 1-bit sign, an 8-bit exponent, and a 27-bit significand.
Double precision: 72 bits, organized as a 1-bit sign, an 11-bit exponent, and a 60-bit significand.
The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.
Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well.
In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.
Among the x86 innovations are these:
A precisely specified floating-point representation at the bit-string level, so that all compliant computers interpret bit patterns the same way. This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for endianness).
A precisely specified behavior for the arithmetic operations: A result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. This means that a compliant computer program would always produce the same result when given a particular input, thus mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior.
The ability of exceptional conditions (overflow, divide by zero, etc.) to propagate through a computation in a benign manner and then be handled by the software in a controlled fashion.
Range of floating-point numbers
A floating-point number consists of two fixed-point components, whose ranges depend exclusively on the number of bits or digits in their representations. Whereas a fixed-point component's range grows only linearly with its number of digits, the floating-point range grows linearly with the significand range and exponentially with the range of the exponent component, which gives the format an outstandingly wider range of values.
On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308.
The number of normal floating-point numbers in a system (B, P, L, U) where
B is the base of the system,
P is the precision of the significand (in base B),
L is the smallest exponent of the system,
U is the largest exponent of the system,
is 2 (B − 1) B^(P−1) (U − L + 1): two sign choices, B − 1 choices of nonzero leading digit, B^(P−1) choices for the remaining digits of the significand, and U − L + 1 possible exponent values.
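For example, in IEEE 754 single precision, B = 2, P = 24, L = −126, and U = 127, giving 2 × 1 × 2^23 × 254 = 4,261,412,864 normal numbers.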
There is a smallest positive normal floating-point number,
Underflow level = UFL = B^L,
which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent.
There is a largest floating-point number,
Overflow level = OFL = (1 − B^−P) B^(U+1),
which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent.
In addition, there are representable values strictly between −UFL and UFL. Namely, positive and negative zeros, as well as subnormal numbers.
IEEE 754: floating point in modern computers
The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. IBM mainframes support IBM's own hexadecimal floating point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format.
The standard provides for many closely related formats, differing in only a few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision format. Three formats are especially widely used in computer hardware and languages:
Single precision (binary32), usually used to represent the "float" type in the C language family. This is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits).
Double precision (binary64), usually used to represent the "double" type in the C language family. This is a binary format that occupies 64 bits (8 bytes) and its significand has a precision of 53 bits (about 16 decimal digits).
Double extended, also ambiguously called "extended precision" format. This is a binary format that occupies at least 79 bits (80 if the hidden/implicit bit rule is not used) and its significand has a precision of at least 64 bits (about 19 decimal digits). The C99 and C11 standards of the C language family, in their annex F ("IEC 60559 floating-point arithmetic"), recommend such an extended format to be provided as "long double". A format satisfying the minimal requirements (64-bit significand precision, 15-bit exponent, thus fitting on 80 bits) is provided by the x86 architecture. Often on such processors, this format can be used with "long double", though extended precision is not available with MSVC. For alignment purposes, many tools store this 80-bit value in a 96-bit or 128-bit space. On other processors, "long double" may stand for a larger format, such as quadruple precision, or just double precision, if any form of extended precision is not available.
Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations.
Other IEEE formats include:
Decimal64 and decimal128 floating-point formats. These formats (especially decimal128) are pervasive in financial transactions because, along with the decimal32 format, they allow correct decimal rounding.
Quadruple precision (binary128). This is a binary format that occupies 128 bits (16 bytes) and its significand has a precision of 113 bits (about 34 decimal digits).
Half precision, also called binary16, a 16-bit floating-point value. It is being used in the NVIDIA Cg graphics language, and in the openEXR standard (where it actually predates the introduction in the IEEE 754 standard).
Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
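As a quick illustration of this threshold, a minimal C check (assuming an IEEE 754 double) shows that 2^53 + 1 is not representable while 2^53 + 2 is:

#include <stdio.h>

int main(void) {
    double big = 9007199254740992.0;   /* 2^53: beyond this, consecutive doubles are 2 apart */
    printf("%d\n", big + 1.0 == big);  /* prints 1: 2^53 + 1 rounds back to 2^53 (ties to even) */
    printf("%d\n", big + 2.0 == big);  /* prints 0: 2^53 + 2 is exactly representable */
    return 0;
}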
The standard specifies some special values, and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).
Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers).
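These comparison rules can be observed directly; a minimal C check, assuming IEEE 754 semantics and C99's NAN macro:

#include <stdio.h>
#include <math.h>

int main(void) {
    double nan = NAN;
    printf("%d\n", nan == nan);   /* prints 0: a NaN compares unequal even to itself */
    printf("%d\n", 0.0 == -0.0);  /* prints 1: negative and positive zero compare equal */
    printf("%d\n", isnan(nan));   /* prints 1: the portable way to test for NaN */
    return 0;
}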
Internal representation
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand or mantissa, from left to right. For the IEEE 754 binary formats (basic and extended) which have extant hardware implementations, they are apportioned as follows:
Single precision (binary32): 1 sign bit, 8 exponent bits, 23 significand bits (32 bits total), exponent bias 127.
Double precision (binary64): 1 sign bit, 11 exponent bits, 52 significand bits (64 bits total), exponent bias 1023.
x86 extended precision: 1 sign bit, 15 exponent bits, 64 significand bits with no hidden bit (80 bits total), exponent bias 16383.
Quadruple precision (binary128): 1 sign bit, 15 exponent bits, 112 significand bits (128 bits total), exponent bias 16383.
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.
In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, and quad has 113.
For example, it was shown above that π, rounded to 24 bits of precision, has:
sign = 0 ; e = 1 ; s = 110010010000111111011011 (including the hidden bit)
The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as
0 10000000 10010010000111111011011 (excluding the hidden bit) = 40490FDB as a hexadecimal number.
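This bit pattern can be inspected in C by copying a float's storage into an integer; a small sketch, assuming a 32-bit IEEE 754 float:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float pi = 3.14159265358979323846f;  /* rounded to 24 bits of precision on assignment */
    uint32_t bits;
    memcpy(&bits, &pi, sizeof bits);     /* reinterpret the storage without aliasing problems */
    printf("%08X\n", (unsigned)bits);    /* prints 40490FDB on an IEEE 754 machine */
    return 0;
}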
[Image: example layout of a 32-bit floating-point number — sign bit, exponent field, and significand field from left to right; the 64-bit ("double") layout is similar.]
Other notable floating-point formats
In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.
The Microsoft Binary Format (MBF) was developed for the Microsoft BASIC language products, including Microsoft's first ever product, Altair BASIC (1975), TRS-80 LEVEL II, CP/M's MBASIC, the IBM PC 5150's BASICA, MS-DOS's GW-BASIC, and QuickBASIC prior to version 4.00. QuickBASIC versions 4.00 and 4.50 switched to the IEEE 754-1985 format but can revert to the MBF format using the /MBF command option. MBF was designed and developed on a simulated Intel 8080 by Monte Davidoff, a dormmate of Bill Gates, during spring of 1975 for the MITS Altair 8800. The initial release of July 1975 supported only a single-precision (32-bit) format because of the cost of the MITS Altair 8800's 4 kilobytes of memory. In December 1975, the 8-kilobyte version added a double-precision (64-bit) format. A single-precision 40-bit variant format was adopted for other CPUs, notably the MOS 6502 (Apple II, Commodore PET, Atari), Motorola 6800 (MITS Altair 680), and Motorola 6809 (TRS-80 Color Computer). All Microsoft language products from 1975 through 1987 used the Microsoft Binary Format; starting in 1988, Microsoft adopted the IEEE 754 standard format in all its products. MBF consists of the MBF single-precision format (32 bits, "6-digit BASIC"), the MBF extended-precision format (40 bits, "9-digit BASIC"), and the MBF double-precision format (64 bits); each of them is represented with an 8-bit exponent, followed by a sign bit, followed by a significand of respectively 23, 31, and 55 bits.
The Bfloat16 format requires the same amount of memory (16 bits) as the IEEE 754 half-precision format, but allocates 8 bits to the exponent instead of 5, thus providing the same range as an IEEE 754 single-precision number. The tradeoff is a reduced precision, as the trailing significand field is reduced from 10 to 7 bits. This format is mainly used in the training of machine learning models, where range is more valuable than precision. Many machine learning accelerators provide hardware support for this format.
The TensorFloat-32 format combines the 8 bits of exponent of the Bfloat16 with the 10 bits of trailing significand field of half-precision formats, resulting in a size of 19 bits. This format was introduced by Nvidia, which provides hardware support for it in the Tensor Cores of its GPUs based on the Nvidia Ampere architecture. The drawback of this format is its size, which is not a power of 2. However, according to Nvidia, this format should only be used internally by hardware to speed up computations, while inputs and outputs should be stored in the 32-bit single-precision IEEE 754 format.
The Hopper architecture GPUs provide two FP8 formats: one with the same numerical range as half-precision (E5M2) and one with higher precision, but less range (E4M3).
Representable numbers, conversion and rounding
By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available: it would be rounded to one of the two straddling representable values, 12345678 × 10^1 or 12345679 × 10^1. The same applies to non-terminating digits (0.555... has to be rounded to either .55555555 or .55555556).
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value.
Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:
e = −4; s = 1100110011001100110011001100110011...,
where, as previously, s is the significand and e is the exponent.
When rounded to 24 bits this becomes
e = −4; s = 110011001100110011001101,
which is actually 0.100000001490116119384765625 in decimal.
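This value can be reproduced in C by printing the single-precision approximation of 0.1 with enough decimal digits; a one-line check:

#include <stdio.h>

int main(void) {
    /* 0.1f is the 24-bit approximation discussed above; converting to double is exact */
    printf("%.27f\n", (double)0.1f);  /* prints 0.100000001490116119384765625 */
    return 0;
}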
As a further example, the real number π, represented in binary as an infinite sequence of bits is
11.0010010000111111011010101000100010000101101000110000100011010011...
but is
11.0010010000111111011011
when approximated by rounding to a precision of 24 bits.
In binary single-precision floating-point, this is represented as s = 1.10010010000111111011011 with e = 1.
This has a decimal value of
3.1415927410125732421875,
whereas a more accurate approximation of the true value of π is
3.14159265358979323846264338327950...
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon.
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45a70c22hex and 1.45a70c24hex, the ULP is 2 × 16^−8, or 2^−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, a ULP is exactly 2^−23 or about 10^−7 in single precision, and exactly 2^−52 or about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
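The spacing can be measured with C99's nextafter functions, which return the adjacent representable value; a minimal sketch:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* One ULP at 1.0: the gap to the next representable number upward */
    printf("%g\n", nextafter(1.0, 2.0) - 1.0);               /* 2^-52 ≈ 2.22045e-16 in double */
    printf("%g\n", (double)(nextafterf(1.0f, 2.0f) - 1.0f)); /* 2^-23 ≈ 1.19209e-07 in single */
    return 0;
}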
Rounding modes
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result. In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)
Alternative rounding options are also available. IEEE 754 specifies the following rounding modes:
round to nearest, where ties round to the nearest even digit in the required position (the default and by far the most common mode)
round to nearest, where ties round away from zero (optional for binary floating-point and commonly used in decimal)
round up (toward +∞; negative results thus round toward zero)
round down (toward −∞; negative results thus round away from zero)
round toward zero (truncation; it is similar to the common behavior of float-to-integer conversions, which convert −3.9 to −3 and 3.9 to 3)
Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point, and interval arithmetic.
The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error.
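The dynamic rounding mode can be changed from C99's fenv.h, which is one way to perform the diagnostic just described; a sketch (note that C99 formally requires the STDC FENV_ACCESS pragma for this, and not all compilers honor it):

#include <stdio.h>
#include <fenv.h>

int main(void) {
    volatile double x = 1.0, y = 3.0;  /* volatile keeps the division at run time */
    fesetround(FE_UPWARD);
    printf("%.17g\n", x / y);          /* quotient rounded toward +infinity */
    fesetround(FE_DOWNWARD);
    printf("%.17g\n", x / y);          /* quotient rounded toward -infinity: one ULP smaller */
    fesetround(FE_TONEAREST);          /* restore the default mode */
    return 0;
}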
Binary-to-decimal conversion with minimal number of digits
Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include:
David M. Gay's dtoa.c, a practical open-source implementation of many ideas in Dragon4.
Grisu3, with a 4× speedup as it removes the use of bignums. Must be used with a fallback, as it fails for ~0.5% of cases.
Errol3, an always-succeeding algorithm similar to, but slower than, Grisu3. Apparently not as good as an early-terminating Grisu with fallback.
Ryū, an always-succeeding algorithm that is faster and simpler than Grisu3.
Schubfach, an always-succeeding algorithm that is based on a similar idea to Ryū, developed almost simultaneously and independently. Performs better than Ryū and Grisu3 in certain benchmarks.
Many modern language runtimes use Grisu3 with a Dragon4 fallback.
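In C, the portable (if non-minimal) way to obtain a round-trippable decimal string for binary64 is to print 17 significant digits; the algorithms above exist to find the shortest such string. A sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double x = 0.1;
    char buf[32];
    snprintf(buf, sizeof buf, "%.17g", x);  /* 17 significant digits always round-trip binary64 */
    double y = strtod(buf, NULL);           /* parse the string back */
    printf("%s %d\n", buf, x == y);         /* prints 0.10000000000000001 1 -- correct, but not the minimal "0.1" */
    return 0;
}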
Decimal-to-binary conversion
The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c). Further work has likewise progressed in the direction of faster parsing.
Floating-point operations
For ease of presentation and understanding, decimal radix with 7 digit precision will be used in the examples, as in the IEEE 754 decimal32 format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, s denotes the significand and e denotes the exponent.
Addition and subtraction
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method:
123456.7 = 1.234567 × 10^5
101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5
Hence:
123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2)
= (1.234567 × 10^5) + (0.001017654 × 10^5)
= (1.234567 + 0.001017654) × 10^5
= 1.235584654 × 10^5
In detail:
e=5; s=1.234567 (123456.7)
+ e=2; s=1.017654 (101.7654)
e=5; s=1.234567
+ e=5; s=0.001017654 (after shifting)
--------------------
e=5; s=1.235584654 (true sum: 123558.4654)
This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is
e=5; s=1.235585 (final sum: 123558.5)
The lowest three digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:
e=5; s=1.234567
+ e=−3; s=9.876543
e=5; s=1.234567
+ e=5; s=0.00000009876543 (after shifting)
----------------------
e=5; s=1.23456709876543 (true sum)
e=5; s=1.234567 (after rounding and normalization)
In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only a guard bit, a rounding bit and one extra sticky bit need to be carried beyond the precision of the operands.
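The same absorption is easy to reproduce in binary; a one-line C check in single precision, where the gap between consecutive floats near 10^8 is 8:

#include <stdio.h>

int main(void) {
    float a = 1.0e8f, b = 1.0f;
    printf("%d\n", a + b == a);  /* prints 1: b is absorbed entirely when the sum is rounded */
    return 0;
}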
Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted. In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659.
e=5; s=1.234571
− e=5; s=1.234567
----------------
e=5; s=0.000004
e=−1; s=4.000000 (after rounding and normalization)
The floating-point difference is computed exactly because the numbers are close—the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e = −1; s = 4.877000, which differs more than 20% from the difference e = −1; s = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.
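A classic binary instance of this effect is computing 1 − cos(x) for small x directly versus through a mathematically equivalent form that avoids the subtraction; a C sketch:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1.0e-8;
    double s = sin(x / 2.0);
    double bad  = 1.0 - cos(x);    /* cos(x) rounds to exactly 1.0, so every significant digit cancels */
    double good = 2.0 * s * s;     /* algebraically identical, but with no cancellation */
    printf("%g %g\n", bad, good);  /* prints 0 and 5e-17 (the correct value) */
    return 0;
}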
Multiplication and division
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
e=3; s=4.734612
× e=5; s=5.417242
-----------------------
e=8; s=25.648538980104 (true product)
e=8; s=25.64854 (after rounding)
e=9; s=2.564854 (after normalization)
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession. In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm).
Literal syntax
Literals for floating-point numbers depend on languages. They typically use e or E to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell). In these cases, digit strings such as 123 may also be floating-point literals.
Examples of floating-point literals are:
99.9
-5000.12
6.02e23
-3e-45
0x1.fffffep+127 in C and IEEE 754
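In C99 the hexadecimal form can be both written and printed; for instance, the last literal above is the largest finite single-precision value. A small check:

#include <stdio.h>
#include <float.h>

int main(void) {
    printf("%d\n", 0x1.fffffep+127f == FLT_MAX);  /* prints 1: significand 1.fffffe (hex) times 2^127 */
    printf("%a\n", 255.5);                        /* %a prints the hex form, e.g. 0x1.ffp+7 */
    return 0;
}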
Dealing with exceptional cases
Floating-point computation in a computer can run into three kinds of problems:
An operation can be mathematically undefined, such as ∞/∞, or division by zero.
An operation can be legal in principle, but not supported by the specific format, for example, calculating the square root of −1 or the inverse sine of 2 (both of which result in complex numbers).
An operation can be legal in principle, but the result can be impossible to represent in the specified format, because the exponent is too large or too small to encode in the exponent field. Such an event is called an overflow (exponent too large), underflow (exponent too small) or denormalization (precision loss).
Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage to that typically defined in programming languages such as C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.)
Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).
The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by a means outside of the standard (e.g. C11 specifies that the flags have thread-local storage).
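With C99's fenv.h, the sticky flags can be cleared, raised by ordinary arithmetic, and tested afterwards; a minimal sketch (again subject to the STDC FENV_ACCESS caveat):

#include <stdio.h>
#include <fenv.h>

int main(void) {
    volatile double zero = 0.0;    /* volatile forces the division to happen at run time */
    feclearexcept(FE_ALL_EXCEPT);  /* reset all sticky flags */
    double r = 1.0 / zero;         /* returns +infinity and raises divide-by-zero */
    if (fetestexcept(FE_DIVBYZERO))
        printf("divide-by-zero was raised; r = %g\n", r);  /* prints r = inf */
    return 0;
}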
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"):
inexact, set if the rounded (and returned) value is different from the mathematically exact result of the operation.
underflow, set if the rounded value is tiny (as specified in IEEE 754) and inexact (or maybe limited to if it has denormalization loss, as per the 1985 version of IEEE 754), returning a subnormal value including the zeros.
overflow, set if the absolute value of the rounded value is too large to be represented. An infinity or maximal finite value is returned, depending on which rounding is used.
divide-by-zero, set if the result is infinite given finite operands, returning an infinity, either +∞ or −∞.
invalid, set if a finite or infinite result cannot be returned e.g. sqrt(−1) or 0/0, returning a quiet NaN.
The default return value for each of the exceptions is designed to give the correct result in the majority of cases such that the exceptions can be ignored in the majority of codes. inexact returns a correctly rounded result, and underflow returns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored. divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel is given by Rtot = 1 / (1/R1 + 1/R2 + ... + 1/Rn). If a short-circuit develops with R1 set to 0, 1/R1 will return +infinity, which will give a final Rtot of 0, as expected (see the continued fraction example of IEEE 754 design rationale for another example).
Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.
Accuracy problems
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is
0.100000001490116119384765625 exactly.
Squaring this number gives
0.010000000298023226097399174250313080847263336181640625 exactly.
Squaring it with rounding to the 24-bit precision gives
0.010000000707805156707763671875 exactly.
But the representable number closest to 0.01 is
0.009999999776482582092285156250 exactly.
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);
will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10^−15 in double precision, or −0.8742 × 10^−7 in single precision.
While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic:
a = 1234.567, b = 45.67834, c = 0.0004
(a + b) + c:
1234.567 (a)
+ 45.67834 (b)
1280.24534 rounds to 1280.245
1280.245 (a + b)
+ 0.0004 (c)
1280.2454 rounds to 1280.245 ← (a + b) + c
a + (b + c):
45.67834 (b)
+ 0.0004 (c)
45.67874
1234.567 (a)
+ 45.67874 (b + c)
1280.24574 rounds to 1280.246 ← a + (b + c)
They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c:
1234.567 × 3.333333 = 4115.223
1.234567 × 3.333333 = 4.115223
4115.223 + 4.115223 = 4119.338
but
1234.567 + 1.234567 = 1235.802
1235.802 × 3.333333 = 4119.340
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, phenomena such as the cancellation and absorption illustrated above may occur.
Incidents
On 25 February 1991, a loss of significance in a MIM-104 Patriot missile battery prevented it from intercepting an incoming Scud missile in Dhahran, Saudi Arabia, contributing to the death of 28 soldiers from the U.S. Army's 14th Quartermaster Detachment. The error was actually introduced by a fixed-point computation, but the underlying issue would have been the same with floating-point arithmetic.
Machine precision and backward error analysis
Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted Εmach, its value depends on the particular rounding being used.
With rounding to zero,
Εmach = B^(1−P),
whereas with rounding to nearest,
Εmach = ½ × B^(1−P),
where B is the base of the system and P is the precision of the significand (in base B).
This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system:
|fl(x) − x| / |x| ≤ Εmach.
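C exposes corresponding constants in float.h; note that FLT_EPSILON and DBL_EPSILON equal the rounding-to-zero value B^(1−P), twice the round-to-nearest unit roundoff. A quick check:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* B^(1-P) for binary32 (P = 24) and binary64 (P = 53) */
    printf("%g %g\n", FLT_EPSILON, DBL_EPSILON);  /* prints 1.19209e-07 (2^-23) and 2.22045e-16 (2^-52) */
    return 0;
}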
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.
As a trivial example, consider a simple expression giving the inner product of (length two) vectors x and y, then
fl(x · y) = fl(fl(x₁ × y₁) + fl(x₂ × y₂))
= (x₁y₁(1 + δ₁) + x₂y₂(1 + δ₂))(1 + δ₃)
= x₁y₁(1 + δ₁)(1 + δ₃) + x₂y₂(1 + δ₂)(1 + δ₃),
and so
fl(x · y) = x̂ · ŷ,
where
x̂₁ = x₁(1 + δ₁); x̂₂ = x₂(1 + δ₂); ŷ₁ = y₁(1 + δ₃); ŷ₂ = y₂(1 + δ₃),
where, by definition,
|δᵢ| ≤ Εmach,
which is the exact inner product of two slightly perturbed (on the order of Εmach) input vectors, and so the computation is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002 and other references below.
Minimizing the effect of accuracy problems
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires, which can remove, or reduce by orders of magnitude, such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.
For example, the following algorithm is a direct implementation to compute the function A(x) = (x − 1) / (exp(x − 1) − 1) (taking the limiting value A(1) = 1), which is well-conditioned at 1.0; however, it can be shown to be numerically unstable, losing up to half the significant digits carried by the arithmetic when computed near 1.0.
double A(double X)
{
double Y, Z; // [1]
Y = X - 1.0;
Z = exp(Y);
if (Z != 1.0)
Z = Y / (Z - 1.0); // [2] the subtraction Z - 1.0 cancels catastrophically near X = 1.0
return Z; // when exp(Y) rounds to exactly 1.0, the limiting value 1.0 is returned, avoiding 0/0
}
If, however, intermediate computations are all performed in extended precision (e.g. by declaring line [1] as C99 long double), then up to full precision in the final double result can be maintained. Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
Z = log(Z) / (Z - 1.0);
then the algorithm becomes numerically stable and can compute A(x) to full double precision.
To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article; the reader is referred to the references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease, by orders of magnitude, the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures: notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation. The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.
Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that (x + y)(x − y) = x² − y², and that sin²θ + cos²θ = 1; however, these facts cannot be relied on when the quantities involved are the result of floating-point computation.
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2-3==0 will, on most computers, fail to be true (in IEEE 754 double precision, for example, 0.6/0.2 - 3 is approximately equal to -4.44089209850063e-16). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon. Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors. It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.
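Both the failing exact test and the fuzzy replacement look like this in C; the tolerance 1.0e-13 is only an illustration and must be chosen per application:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 0.6 / 0.2;
    printf("%d\n", x - 3.0 == 0.0);           /* prints 0 on most machines */
    printf("%.17g\n", x - 3.0);               /* about -4.4408920985006262e-16 */
    double epsilon = 1.0e-13;                 /* application-specific tolerance */
    printf("%d\n", fabs(x - 3.0) < epsilon);  /* the "fuzzy" comparison: prints 1 */
    return 0;
}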
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.
Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like
3253.671
+ 3.141276
-----------
3256.812
The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.
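A compact C version of the algorithm, which carries a running correction term alongside the sum, might look as follows (a sketch; the helper name kahan_sum is ours):

#include <stdio.h>

/* Kahan (compensated) summation: c recaptures the low-order digits
   that a plain running sum would discard at each step. */
double kahan_sum(const double *a, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = a[i] - c;  /* apply the correction from the previous step */
        double t = sum + y;   /* the low-order digits of y are lost here... */
        c = (t - sum) - y;    /* ...and recovered here, with the sign flipped */
        sum = t;
    }
    return sum;
}

int main(void) {
    double xs[1000];
    for (int i = 0; i < 1000; i++) xs[i] = 3.141592653589793;
    printf("%.12f\n", kahan_sum(xs, 1000));  /* prints 3141.592653589793 */
    return 0;
}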
Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon are:
First form: t(i+1) = (√(t(i)² + 1) − 1) / t(i)
Second form: t(i+1) = t(i) / (√(t(i)² + 1) + 1)
π ≈ 6 × 2^i × t(i), converging as i → ∞, with t(0) = 1/√3 for the initial circumscribed hexagon.
Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:
 i   6 × 2^i × t(i), first form   6 × 2^i × t(i), second form
 -------------------------------------------------------------
  0   3.4641016151377543863        3.4641016151377543863
  1   3.2153903091734710173        3.2153903091734723496
  2   3.1596599420974940120        3.1596599420975006733
  3   3.1460862151314012979        3.1460862151314352708
  4   3.1427145996453136334        3.1427145996453689225
  5   3.1418730499801259536        3.1418730499798241950
  6   3.1416627470548084133        3.1416627470568494473
  7   3.1416101765997805905        3.1416101766046906629
  8   3.1415970343230776862        3.1415970343215275928
  9   3.1415937488171150615        3.1415937487713536668
 10   3.1415929278733740748        3.1415929273850979885
 11   3.1415927256228504127        3.1415927220386148377
 12   3.1415926717412858693        3.1415926707019992125
 13   3.1415926189011456060        3.1415926578678454728
 14   3.1415926717412858693        3.1415926546593073709
 15   3.1415919358822321783        3.1415926538571730119
 16   3.1415926717412858693        3.1415926536566394222
 17   3.1415810075796233302        3.1415926536065061913
 18   3.1415926717412858693        3.1415926535939728836
 19   3.1414061547378810956        3.1415926535908393901
 20   3.1405434924008406305        3.1415926535900560168
 21   3.1400068646912273617        3.1415926535898608396
 22   3.1349453756585929919        3.1415926535898122118
 23   3.1400068646912273617        3.1415926535897995552
 24   3.2245152435345525443        3.1415926535897968907
 25                                3.1415926535897962246
 26                                3.1415926535897962246
 27                                3.1415926535897962246
 28                                3.1415926535897962246
The true value is 3.14159265358979323846264338327950288...
While the two forms of the recurrence formula are clearly mathematically equivalent, the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
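The two forms are easy to compare directly; a C sketch of the computation behind the table above, using the recurrences given earlier with t(0) = 1/√3:

#include <stdio.h>
#include <math.h>

int main(void) {
    double t1 = 1.0 / sqrt(3.0), t2 = t1;  /* t(0) = tan(30 degrees) for the circumscribed hexagon */
    double scale = 6.0;                    /* the polygon has 6 * 2^i sides */
    for (int i = 0; i <= 24; i++) {        /* by i = 24 the first form has lost all its digits */
        printf("%2d  %.19f  %.19f\n", i, scale * t1, scale * t2);
        t1 = (sqrt(t1 * t1 + 1.0) - 1.0) / t1;  /* first form: subtracts two nearly equal numbers */
        t2 = t2 / (sqrt(t2 * t2 + 1.0) + 1.0);  /* second form: equivalent, but cancellation-free */
        scale *= 2.0;
    }
    return 0;
}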
"Fast math" optimization
The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.
In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library.
In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses. Intel Fortran Compiler is a notable outlier.
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has a poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.
See also
Arbitrary-precision arithmetic
C99 for code examples demonstrating access and use of IEEE 754 features.
Computable number
Coprocessor
Decimal floating point
Double-precision floating-point format
Experimental mathematics – utilizes high precision floating-point computations
Fixed-point arithmetic
Floating-point error mitigation
FLOPS
Gal's accurate tables
GNU MPFR
Half-precision floating-point format
IEEE 754 – Standard for Binary Floating-Point Arithmetic
IBM Floating Point Architecture
Kahan summation algorithm
Microsoft Binary Format (MBF)
Minifloat
Q (number format) for constant resolution
Quadruple-precision floating-point format (including double-double)
Significant figures
Single-precision floating-point format
Standard Apple Numerics Environment (SANE)
Notes
References
Further reading
External links
OpenCores. (NB. This website contains open source floating-point IP cores for the implementation of floating-point operators in FPGA or ASIC devices. The project double_fpu contains verilog source code of a double-precision floating-point unit. The project fpuvhdl contains vhdl source code of a single-precision floating-point unit.)
Computer arithmetic
Articles with example C code | Floating-point arithmetic | [
"Mathematics"
] | 14,330 | [
"Computer arithmetic",
"Arithmetic"
] |
11,424 | https://en.wikipedia.org/wiki/Flag | A flag is a piece of fabric (most often rectangular) with distinctive colours and design. It is used as a symbol, a signalling device, or for decoration. The term flag is also used to refer to the graphic design employed, and flags have evolved into a general tool for rudimentary signalling and identification, especially in environments where communication is challenging (such as the maritime environment, where semaphore is used). Many flags fall into groups of similar designs called flag families. The study of flags is known as "vexillology" from the Latin vexillum, meaning "flag" or "banner".
National flags are patriotic symbols with widely varied interpretations that often include strong military associations because of their original and ongoing use for that purpose. Flags are also used in messaging, advertising, or for decorative purposes.
Some military units are called "flags" after their use of flags. A flag (Arabic: liwāʾ) is equivalent to a brigade in Arab countries. In Spain, a flag (Spanish: bandera) is a battalion-equivalent in the Spanish Legion.
History
The origin of the flag is unknown and it remains unclear when the first flag was raised.
Ships with vexilloids were represented on predynastic Egyptian pottery. In antiquity, field signs that can be categorised as vexilloid or "flag-like" were used in warfare, originating in ancient Egypt or Assyria. Examples include the Sassanid battle standard Derafsh Kaviani, and the standards of the Roman legions such as the eagle of Augustus Caesar's Xth legion and the dragon standard of the Sarmatians; the latter was allowed to fly freely in the wind, carried by a horseman, but depictions suggest that it bore more similarity to an elongated dragon kite than to a simple flag.
While the origin of the flag remains a mystery, the oldest flag discovered is made of bronze: the "flag-like" Derafsh of Shahdad, found in Shahdad, Iran, and dating to the Bronze Age. It features a seated man and a kneeling woman facing each other, with a star in between. This iconography was found in other Iranian Bronze Age pieces of art.
Flags made of cloth were almost certainly the invention of the ancient peoples of the Indian subcontinent or the Zhou dynasty of Ancient China. Chinese flags had iconography such as a red bird, a white tiger, or a blue dragon, and royal flags were to be treated with a level of respect similar to that given to the ruler. Indian flags were often triangular and decorated with attachments such as a yak's tail and the state umbrella. Either silk flags themselves spread to the Near East from China, or only the silk did, later fashioned into flags by peoples who had independently conceived of a rectangular cloth attached to a pole. Flags were probably transmitted to Europe via the Muslim world, where plainly coloured flags were used due to Islamic proscriptions. They are often mentioned in the early history of Islam and may have been copied from India.
In Europe, during the High Middle Ages, flags came to be used primarily as a heraldic device in battle, allowing easier identification of a knight over only the heraldic icon painted on the shield. Already during the high medieval period, and increasingly during the Late Middle Ages, city states and communes such as those of the Old Swiss Confederacy also began to use flags as field signs. Regimental flags for individual units became commonplace during the Early Modern period.
During the peak of the sailing age, beginning in the early 17th century, it was customary (and later a legal requirement) for ships to fly flags designating their nationality; these flags eventually evolved into the national flags and maritime flags of today. Flags also became the preferred means of communications at sea, resulting in various systems of flag signals; see International maritime signal flags.
Use of flags beyond a military or naval context began with the rise of nationalism by the end of the 18th century, although some flags date back earlier. The flags of countries such as Austria, Denmark or Turkey have legendary origins while many others, including those of Poland and Switzerland, grew out of the heraldic emblems of the Middle Ages. The 17th century saw the birth of several national flags through revolutionary struggle. One of these was the flag of the Netherlands, which appeared during the 80-year Dutch rebellion which began in 1568 against Spanish domination.
Political change and social reform, allied with a growing sense of nationhood among ordinary people, led to the creation of new nations and flags all over the world in the 19th and 20th centuries.
National flags
One of the most popular uses of a flag is to symbolise a nation or country. Some national flags have been particularly inspirational to other nations, countries, or subnational entities in the design of their own flags. Some prominent examples include:
The flag of Denmark, the Dannebrog, is attested in 1478, and is the oldest national flag still in use. It inspired the cross design of the other Nordic countries: Norway, Sweden, Iceland, Finland, and regional Scandinavian flags for the Faroe Islands, Åland, Scania and Bornholm, as well as flags for the non-Scandinavian Shetland and Orkney.
The flag of the Netherlands is the oldest tricolour. Its three colours of red, white and blue go back to Charlemagne's time, the ninth century. The coastal region of what today is the Netherlands was then known for its cloth in these colours. Maps from the early 16th century already put flags in these colours next to this region, like Texeira's map of 1520. A century before that, during the 15th century, the three colours were mentioned as the coastal signals for this area, with the three bands straight or diagonal, single or doubled. As a state flag it first appeared around 1572 as the Prince's Flag in orange–white–blue. Soon the more famous red–white–blue began appearing, becoming the prevalent version from around 1630. Orange made a comeback during the civil war of the late 18th century, signifying the orangist or pro-stadtholder party. During World War II the pro-Nazi NSB used it. Any symbolism was added to the three colours later, although the orange comes from the House of Orange-Nassau. This use of orange comes from Nassau, which today uses orange-blue, not from Orange, which today uses red-blue. However, the usual way to show the link with the House of Orange-Nassau is the orange pennant above the red-white-blue. The Dutch tricolour is said to have inspired many flags, most notably those of Russia, New York City, and South Africa (the 1928–94 flag as well as the current flag). As the probable inspiration for the Russian flag, it is also the source of the pan-Slavic colours red, white and blue, adopted by many Slavic states and peoples as their symbols; examples are Slovakia, Serbia, and Slovenia.
The national flag of France was designed in 1794. As a forerunner of revolution, France's tricolour flag style has been adopted by other nations. Examples: Italy, Belgium, Ireland, Romania and Mexico.
The Union Flag (Union Jack) of the United Kingdom is among the most widely used as a basis for other flags. British colonies typically flew a flag based on one of its ensigns, and many former colonies have retained the design to acknowledge their cultural history. Examples: Australia, Fiji, New Zealand, Tuvalu, and also the Canadian provinces of Manitoba, Ontario and British Columbia, and the American state of Hawaii; see commons:Flags based on British ensigns.
The flag of the United States is nicknamed The Stars and Stripes or Old Glory. Some nations imitated this flag to symbolise their similarity to the United States or the American Revolution. Examples: Liberia, Chile, Taiwan (ROC), and the French region of Brittany.
Ethiopia was seen as a model by emerging African states of the 1950s and 1960s, as it was one of the oldest independent states in Africa. Accordingly, its flag became the source of the Pan-African colours, or 'Rasta colours'. Examples: Benin, Togo, Senegal, Ghana, Mali, Guinea.
The flag of Turkey, which is very similar to the last flag of the old Ottoman Empire, has been an inspiration for the flag designs of many other Muslim nations. During the time of the Ottomans the crescent began to be associated with Islam and this is reflected on the flags of Algeria, Azerbaijan, Comoros, Libya, Mauritania, Pakistan, Tunisia and Maldives.
The Pan-Arab colours, green, white, red and black, are derived from the flag of the Great Arab Revolt as seen on the flags of Jordan, Libya, Kuwait, Sudan, Syria, the United Arab Emirates, Western Sahara, Egypt, Iraq, Yemen and Palestine.
The Soviet flag, with its golden symbols of the hammer and sickle on a red field, was an inspiration to flags of other communist states, such as East Germany, the People's Republic of China, Vietnam, Angola, Afghanistan (1978–1980) and Mozambique.
The flag of Venezuela, created by Francisco de Miranda to represent the independence movement in Venezuela that later gave birth to the Gran Colombia, inspired the flags of Colombia, Ecuador, and the Federal Territories in Malaysia, all sharing three bands of yellow, blue and red with the flag of Venezuela.
The flag of Argentina, created by Manuel Belgrano during the war of independence, was the inspiration for the United Provinces of Central America's flag, which in turn was the origin for the flags of Guatemala, Honduras, El Salvador, and Nicaragua.
National flag designs are often used to signify nationality in other forms, such as flag patches.
Civil flags
A civil flag is a version of the national flag that is flown by civilians on non-government installations or craft. The use of civil flags was more common in the past, in order to denote buildings or ships that were not crewed by the military. In some countries the civil flag is the same as the war flag or state flag, but without the coat of arms, such as in the case of Spain, and in others it is an alteration of the war flag.
War flags
Several countries, including the United Kingdom and the former Soviet Union, have had unique flags flown by their armed forces separately from the national flag; examples include the flags of the United Kingdom's Royal Air Force, British Army, and Royal Navy (the White Ensign).
Other countries' armed forces (such as those of the United States or Switzerland) use their standard national flag; in addition, the U.S. also has flags and seals, designed from long tradition, for each of its six uniformed military services/military sub-departments in the Department of Defense and the Department of Homeland Security. The Philippines' armed forces may use their standard national flag, but during times of war the flag is turned upside down. Bulgaria's flag is also turned upside down during times of war. These are also considered war flags, though the terminology only applies to the flag's military usage.
Large versions of the war flag flown on the warships of countries' navies are known as battle ensigns. In addition, besides flying the national standard or a military service's emblem flag at a military fort, base, station or post and at sea at the stern (rear) or main top mast of a warship, a Naval Jack flag and other maritime flags, pennants and emblems are flown at the bow (front). In times of war, waving a white flag signals truce, a request for talks, or surrender.
Four distinctive African flags currently in the collection of the National Maritime Museum in Britain were flown in action by Itsekiri ships under the control of Nana Olomu during the conflict in the late 19th century. One is the flag generally known as the Benin Empire flag and one is referred to as Nana Olomu's flag.
International flags
Among international flags are the United Nations, Europe, Olympic, NATO and Paralympic flags.
Maritime flags
Flags are particularly important at sea, where they can mean the difference between life and death, and consequently where the rules and regulations for the flying of flags are strictly enforced. A national flag flown at sea is known as an ensign. A courteous, peaceable merchant ship or yacht customarily flies its ensign (in the usual ensign position), together with the flag of whatever nation it is currently visiting at the mast (known as a courtesy flag). To fly one's ensign alone in foreign waters, a foreign port or in the face of a foreign warship traditionally indicates a willingness to fight, with cannon, for the right to do so. This custom is still taken seriously by many naval and port authorities and is readily enforced in many parts of the world by boarding, confiscation and other civil penalties. In some countries yacht ensigns are different from merchant ensigns in order to signal that the yacht is not carrying cargo that requires a customs declaration. Carrying commercial cargo on a boat with a yacht ensign is deemed to be smuggling in many jurisdictions. Traditionally, a vessel flying under the courtesy flag of a specific nation, regardless of the vessel's country of registry, is considered to be operating under the law of her 'host' nation.
There is a system of international maritime signal flags for numerals and letters of the alphabet. Each flag or pennant has a specific meaning when flown individually. As well, semaphore flags can be used to communicate on an ad hoc basis from ship to ship over short distances.
Another category of maritime flag flown by some United States government ships is the distinctive mark. Although the United States Coast Guard has its own service ensign, all other U.S. government ships fly the national ensign as their service ensign, following United States Navy practice. To distinguish themselves from ships of the Navy, such ships historically have flown their parent organisation's flag from a forward mast as a distinctive mark. Today, for example, commissioned ships of the National Oceanic and Atmospheric Administration (NOAA) fly the NOAA flag as a distinctive mark.
Shapes and designs
Flags are usually rectangular in shape (often in the ratio 2:3, 1:2, or 3:5), but may be of any shape or size that is practical for flying, including square, triangular, or swallow-tailed. A more unusual flag shape is that of the flag of Nepal, which is in the shape of two stacked triangles. Other unusually shaped flags include the civil flags of Ohio (a swallowtail); Tampa, Florida; and Pike County, Ohio.
Many flags are dyed through and through to be inexpensive to manufacture, such that the reverse side is the mirror image of the obverse (front) side, generally the side displayed when, from the observer's point of view, the flag flies from pole-side left to right. This presents two possibilities:
If the design is symmetrical about an axis parallel to the flag pole, obverse and reverse will be identical despite the mirror-reversal, as with the flag of India or the flag of Canada.
If not, the obverse and reverse will present two variants of the same design, one with the hoist on the left (usually considered the obverse side), the other with the hoist on the right (usually considered the reverse side of the flag). This is very common and usually not disturbing if there is no text in the design; a minimal symmetry check is sketched below.
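This symmetry rule is mechanical enough to check in code. The following is a minimal sketch (an illustration, not from the source): a through-dyed flag is modelled as a grid of colour names with the hoist at the left, and the two sides look identical exactly when every row reads the same in both directions.

```python
def reverse_matches_obverse(design: list[list[str]]) -> bool:
    """Return True if a through-dyed flag looks the same from both sides.

    `design` is a grid of colour names, hoist at the left. Seen from
    behind, the flag is mirrored about the axis parallel to the pole,
    so obverse and reverse match exactly when each row is a palindrome.
    """
    return all(row == row[::-1] for row in design)

# Three equal vertical stripes, symmetric about the centre line:
print(reverse_matches_obverse([["red", "white", "red"]]))    # True
# An asymmetric tricolour shows two mirror-image variants:
print(reverse_matches_obverse([["green", "white", "red"]]))  # False
```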
Some complex flag designs are not intended to be shown on both sides, requiring separate obverse and reverse sides if made correctly. In these cases there is a design element (usually text) which is not symmetric and should be read in the same direction, regardless of whether the hoist is to the viewer's left or right. These cases can be divided into two types:
The same (asymmetric) design may be duplicated on both sides. Such flags can be manufactured by creating two identical through and through flags and then sewing them back to back, though this can affect the resulting combination's responsiveness to the wind. Depictions of such flags may be marked with the symbol , indicating the reverse is congruent to (rather than a mirror image of) the obverse.
Rarely, the reverse design may differ, in whole or in part, from that of the obverse. Examples of flags whose reverse differs from the obverse include the flag of Paraguay, the flag of Oregon, and the historical flag of the Soviet Union. Depictions of such flags may be marked with the symbol .
Common designs on flags include crosses, stripes, and divisions of the surface, or field, into bands or quarters—patterns and principles mainly derived from heraldry. A heraldic coat of arms may also be flown as a banner of arms, as is done on both the state flag of Maryland and the flag of Kiribati.
The de jure flag of Libya under Muammar Gaddafi, which consisted of a rectangular field of green, was for a long period the only national flag using a single colour and no design or insignia. However, other historical states have also used flags without designs or insignia, such as the short-lived Soviet Republic of Hungary and the more recent Sultanate of Muscat and Oman, whose flags were both a plain field of red.
Colours are normally described with common names, such as "red", but may be further specified using colourimetry.
The largest flag flown from a flagpole worldwide, according to Guinness World Records, is the flag of the United Arab Emirates flown in Sharjah. This flag was . The largest flag ever made was the flag of Qatar; the flag, which measures at , was completed in December 2013 in Doha.
Parts of a flag
The general parts of a flag are: canton (the upper inner section of the flag), field or ground (the entire flag except the canton), the hoist (the edge used to attach the flag to the pole or halyard), and the fly (the edge furthest from the hoist).
Vertical flags
Vertical flags are sometimes used in lieu of the standard horizontal flag in central and eastern Europe, particularly in the German-speaking countries. This practice came about because the relatively brisk wind needed to display horizontal flags is not common in these countries.
The standard horizontal flag (no. 1 in the preceding illustration) is nonetheless the form most often used even in these countries.
The vertical flag (German: Hochformatflagge or Knatterflagge; no. 2) is a vertical form of the standard flag. The flag's design may remain unchanged (No. 2a) or it may change, e.g. by changing horizontal stripes to vertical ones (no. 2b). If the flag carries an emblem, it may remain centred or may be shifted slightly upwards.
The vertical flag for hoisting from a beam (German: Auslegerflagge or Galgenflagge; no. 3) is additionally attached to a horizontal beam, ensuring that it is fully displayed even if there is no wind.
The vertical flag for hoisting from a horizontal pole (German: Hängeflagge; no. 4) is hoisted from a horizontal pole, normally attached to a building. The topmost stripe on the horizontal version of the flag faces away from the building.
The vertical flag for hoisting from a crossbar or banner (German: Bannerflagge; no. 5) is firmly attached to a horizontal crossbar from which it is hoisted, either by a vertical pole (no. 5a) or a horizontal one (no. 5b). The topmost stripe on the horizontal version of the flag normally faces to the left.
Religious flags
Flags can play many different roles in religion. In Buddhism, prayer flags are used, usually in sets of five differently coloured flags. Several flags and banners including the Black Standard are associated with Islam. Many national flags and other flags include religious symbols such as the cross, the crescent, or a reference to a patron saint. Flags are also adopted by religious groups and flags such as the Jain flag, Nishan Sahib (Sikhism), the Saffron flag (Hindu) and the Christian flag are used to represent a whole religion.
In sports
Because of their ease of signalling and identification, flags are often used in sports.
In association football, linesmen carry small flags along the touch lines. They use the flags to indicate to the referee potential infringements of the laws, which side is entitled to possession when the ball has gone out of the field of play, or, most famously, an offside offence. Officials called touch judges use flags for similar purposes in both codes of rugby.
In American and Canadian football, referees use penalty flags to indicate that a foul has been committed in game play. The phrase used for such an indication is flag on the play. The flag itself is a small, weighted handkerchief, tossed on the field at the approximate point of the infraction; the intent is usually to sort out the details after the current play from scrimmage has concluded. In American football, the flag is yellow; in Canadian football the flag is orange, but at the professional level the flag is yellow. In both the Canadian Football League and National Football League, coaches also use red challenge flags to indicate that they wish to contest a ruling on the field.
In yacht racing, flags are used to communicate information from the race committee boat to the racers. Different flags hoisted from the committee boat may communicate a false start, changes in the course, a cancelled race, or other important information. Racing boats themselves may also use flags to symbolise a protest or distress. The flags are often part of the nautical alphabetic system of International maritime signal flags, in which 26 different flags designate the 26 letters of the Latin alphabet.
In auto and motorcycle racing, racing flags are used to communicate with drivers. Most famously, a checkered flag of black and white squares indicates the end of the race, and victory for the leader. A yellow flag is used to indicate caution requiring slow speed and a red flag requires racers to stop immediately. A black flag is used to indicate penalties.
In addition, fans of almost all sports wave flags in the stands to indicate their support for the participants. Many sports teams have their own flags, and, in individual sports, fans will indicate their support for a player by waving the flag of his or her home country.
Capture the flag is a popular children's sport.
In Gaelic football and hurling, a green flag is used to indicate a goal, while a white flag is used to indicate a point.
In Australian rules football, the goal umpire will wave two flags to indicate a goal (worth six points) and a single flag to indicate a behind (worth one point).
For safety, dive flags indicate the locations of underwater scuba divers or that diving operations are being conducted in the vicinity.
In water sports such as wakeboarding and water skiing, an orange flag is held in between runs to indicate someone is in the water.
In golf, the hole is almost always marked with a flag. The flagpole is designed to fit centered within the base of the hole and is removable. Many courses will use colour-coded flags to determine a hole location at the front, middle or rear of the green. However, colour-coded flags are not used in the professional tours. (A rare example of a golf course that does not use flags to mark the hole is the East Course of Merion Golf Club, which instead uses flagpoles topped by wicker baskets.)
Flagpoles with flags of all shapes and sizes are used by marching bands, drum corps, and winter guard teams as a method of visual enhancement in performances.
Diplomatic and political flags
Some countries use diplomatic flags, such as the United Kingdom (see image of the Embassy flag) and the Kingdom of Thailand (see image of the Embassy flag).
The socialist movement uses red flags to represent their cause. The anarchist movement has a variety of different flags, but the primary flag associated with them is the black flag. In the Spanish Civil War, the anarchists used the red-and-black bisected flag. In the 20th century, the rainbow flag was adopted as a symbol of the LGBT social movements. Its derivatives include the Bisexual pride and Transgender pride flags.
Some of these political flags have become national flags, such as the red flag of the Soviet Union and the National Socialist banner of Nazi Germany. The present flag of Portugal is based on what had been the political flag of the Portuguese Republican Party prior to the 5 October 1910 revolution which brought this party to power.
Personal flags
Throughout history, monarchs have often had personal flags (including royal standards) representing the royal person, including in cases of personal union between national monarchies.
Vehicle flags
Flags often represent an individual's affinity or allegiance to a country, team or business and can be presented in various ways. A popular trend is the 'mobile' flag, in which an individual displays a particular flag of choice on a vehicle. These items are commonly referred to as car flags and are usually manufactured from high-strength polyester and attached to a vehicle via a polypropylene pole and a clip window attachment.
Swimming flags
In Australia, Canada, New Zealand, the Philippines, Ireland and the United Kingdom, a pair of red-yellow flags is used to mark the limits of the bathing area on a beach, usually guarded by surf lifesavers. If the beach is closed, the poles of the flags are crossed. The flags are coloured with a red triangle and a yellow triangle making a rectangular flag, or a red rectangle over a yellow rectangle. On many Australian beaches there is a slight variation with beach condition signalling. A red flag signifies a closed beach (in the UK also other dangers), yellow signifies strong current or difficult swimming conditions, and green represents a beach safe for general swimming. In Ireland, a red and yellow flag indicates that it is safe to swim; a red flag that it is unsafe; and no flag indicates that there are no lifeguards on duty. Blue flags may also be used away from the yellow-red lifesaver area to designate a zone for surfboarding and other small, non-motorised watercraft.
Reasons for closing the beach include:
dangerous rip
hurricane warning
no lifeguards in attendance
overpolluted water
sharks
tsunami
waves too strong
A surf flag exists, divided into four quadrants. The top left and bottom right quadrants are black, and the remaining area is white.
Signal flag "India" (a black circle on a yellow square) is frequently used to denote a "blackball" zone where surfboards cannot be used but other water activities are permitted.
The United States uses beach warning flags created by the International Life Saving Federation and endorsed and conditionally approved by the United States Lifesaving Association.
Railway flags
Railways use a number of coloured flags. When used as wayside signals they usually have the following meanings (exact meanings are set by the individual railroad company; the conventions in this section are sketched as lookup tables below):
red = stop
yellow = proceed with caution
green or white = proceed
a flag of any colour waved vigorously means stop
a blue flag on the side of a locomotive means that it should not be moved because someone is working on it (or on the train attached to it). A blue flag on a track means that nothing on that track should be moved. The flag can only be removed by the person or group that placed it. In the railway-dominated steel industry this principle of "blue flag and tag" was extended to all operations at Bethlehem Steel, Lackawanna, New York. If a man went inside a large machine or worked on an electrical circuit, for example, his blue flag and tag were sacrosanct. The "Lock Out/Tag Out" practice is similar and is now used in other industries to comply with safety regulations.
At night, the flags are replaced with lanterns showing the same colours.
Flags displayed on the front of a moving locomotive are an acceptable replacement for classification lights and usually have the following meanings (exact meanings are set by the individual railroad company):
white = extra (not on the timetable)
green = another section following
red = last section
Additionally, a railroad brakeman will typically carry a red flag to make his or her hand signals more visible to the engineer. Railway signals are a development of railway flags.
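The colour conventions above amount to a small lookup protocol, so they can be transcribed directly as tables. The sketch below simply restates the meanings given in this section; as noted, exact meanings are set by the individual railroad company.

```python
# Wayside flag signals (exact meanings are set by each railroad company).
WAYSIDE_FLAGS = {
    "red": "stop",
    "yellow": "proceed with caution",
    "green": "proceed",
    "white": "proceed",
    "blue": "do not move: someone is working on this equipment",
}

# Classification flags on the front of a moving locomotive.
LOCOMOTIVE_FLAGS = {
    "white": "extra (not on the timetable)",
    "green": "another section following",
    "red": "last section",
}

print(WAYSIDE_FLAGS["yellow"])    # proceed with caution
print(LOCOMOTIVE_FLAGS["green"])  # another section following
```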
Flagpoles
A flagpole, flagmast, flagstaff, or staff can be a simple support made of wood or metal. If it is taller than can be easily reached to raise the flag, a cord is used, looping around a pulley at the top of the pole with the ends tied at the bottom. The flag is fixed to one lower end of the cord, and is then raised by pulling on the other end. The cord is then tightened and tied to the pole at the bottom. The pole is usually topped by a flat plate or ball called a "truck" (originally meant to keep a wooden pole from splitting) or a finial in a more complex shape. Very high flagpoles may require more complex support structures than a simple pole, such as a guyed mast.
Dwajasthambam are flagpoles commonly found at the entrances of South Indian Hindu temples.
Record heights
Since 26 December 2021, the tallest free-standing flagpole in the world is the Cairo Flagpole, located in the New Administrative Capital under construction in Egypt at a height of , exceeding the former record holders, the Jeddah Flagpole in Saudi Arabia (height: ), the Dushanbe Flagpole in Tajikistan (height: ) and the National Flagpole in Azerbaijan (height: ). The flagpole in North Korea is the fourth tallest flagpole in the world; however, it is not free-standing but a radio-tower-supported flagpole. Many of these were built by the American company Trident Support: the Dushanbe Flagpole, the National Flagpole in Azerbaijan, the Ashgabat flagpole in Turkmenistan at ; the Aqaba Flagpole in Jordan at ; the Raghadan Flagpole in Jordan at ; and the Abu Dhabi Flagpole in the United Arab Emirates at .
The current tallest flagpole in India (and the tallest flying the tricolour) is the flagpole in Belgaum, Karnataka which was first hoisted on 12 March 2018. The tallest flagpole in the United Kingdom from 1959 until 2013 stood in Kew Gardens. It was made from a Canadian Douglas-fir tree and was in height.
The current tallest flagpole in the United States (and the tallest flying an American flag) is the pole completed before Memorial Day 2014 and custom-made with an base in concrete by wind turbine manufacturer Broadwind Energy. It is situated on the north side of the Acuity Insurance headquarters campus along Interstate 43 in Sheboygan, Wisconsin, and is visible from Cedar Grove. The pole can fly a 220-pound flag in light wind conditions and a heavier 350-pound flag in higher wind conditions.
Design
Flagpoles can be designed in one piece with a taper (typically a steel taper or a Greek entasis taper), or be made from multiple pieces to make them able to expand. In the United States, ANSI/NAAMM guide specification FP-1001-97 covers the engineering design of metal flagpoles to ensure safety.
Hoisting the flag
Hoisting the flag is the act of raising the flag on the flagpole. Raising or lowering flags, especially national flags, usually involves ceremonies and certain sets of rules, depending on the country, and often includes the performance of a national anthem.
A flag-raising squad is a group of people, usually troops, cadets, or students, that march in and bring the flags for the flag-hoisting ceremony. Flag-hoisting ceremonies involving flag-raising squads can be simple or elaborate, involving large numbers of squads. Elaborate flag-hoisting ceremonies are usually performed on national holidays.
The cord or rope that ties a flag to its pole is called a halyard. Flags may have a strip of fabric along the hoist side called a heading for the halyard to pass through, or a pair of grommets for the halyard to be threaded through. Flags may also be held in position using Inglefield clips.
Flags in communication
Semaphore is a form of communication that utilises flags. The signalling is performed by an individual using two flags (or lighted wands), the positions of the flags indicating a symbol. The person who holds the flags is known as the signalman. This form of communication is primarily used by naval signallers. This technique of signalling was adopted in the early 19th century and is still used in various forms today.
The colours of the flags can also be used to communicate. For example, a white flag means, among other things, surrender or peace; a red flag can be used as a warning signal; and a black flag can mean war, or determination to defeat enemies.
Orientation of a flag is also used for communication, though the practice is rarely used given modern communication systems. Raising a flag upside-down indicated that the raising force controlled that particular area but was in severe distress.
See also
Lists and galleries of flags
Gallery of sovereign state flags
List of flag names
Lists of flags
Timeline of national flags
Notable flag-related topics
Flag families
False flag
Flag Day
Flag desecration
Flag protocol
Flag patch
Flag semaphore
Flag throwing
Glossary of vexillology
Pledge of Allegiance (United States)
Standard-bearer (also enumerates various types of standards, both flag types and immobile ensigns)
Vexillology
Flags of the World, an Internet-based vexillological association and resource
Windsock
External links
International Marine Signal Flags
Articles containing video clips
National symbols | Flag | ["Mathematics"] | 6,887 | ["Symbols", "Flags"] |
11,432 | https://en.wikipedia.org/wiki/Full%20moon | The full moon is the lunar phase when the Moon appears fully illuminated from Earth's perspective. This occurs when Earth is located between the Sun and the Moon (when the ecliptic longitudes of the Sun and Moon differ by 180°). This means that the lunar hemisphere facing Earth—the near side—is completely sunlit and appears as an approximately circular disk. The full moon occurs roughly once a month.
The time interval between a full moon and the next repetition of the same phase, a synodic month, averages about 29.53 days. Because of irregularities in the moon's orbit, the new and full moons may fall up to thirteen hours either side of their mean. If the calendar date is not locally determined through observation of the new moon at the beginning of the month there is the potential for a further twelve hours difference depending on the time zone. Potential discrepancies also arise from whether the calendar day is considered to begin in the evening or at midnight. It is normal for the full moon to fall on the fourteenth or the fifteenth of the month according to whether the start of the month is reckoned from the appearance of the new moon or from the conjunction. A tabular lunar calendar will also exhibit variations depending on the intercalation system used. Because a calendar month consists of a whole number of days, a month in a lunar calendar may be either 29 or 30 days long.
Characteristics
A full moon is often thought of as an event of a full night's duration, although its phase seen from Earth continuously waxes or wanes, and is full only at the instant when waxing ends and waning begins. For any given location, about half of these maximum full moons may be visible, while the other half occurs during the day, when the full moon is below the horizon. As the Moon's orbit is inclined by 5.145° from the ecliptic, it is not generally perfectly opposite from the Sun during full phase, therefore a full moon is in general not perfectly full except on nights with a lunar eclipse as the Moon crosses the ecliptic at opposition from the Sun.
Many almanacs list full moons not only by date, but also by their exact time, usually in Coordinated Universal Time (UTC). Typical monthly calendars that include lunar phases may be offset by one day when prepared for a different time zone.
The full moon is generally a suboptimal time for astronomical observation of the Moon because shadows vanish. It is a poor time for other observations because the bright sunlight reflected by the Moon, amplified by the opposition surge, then outshines many stars.
Moon phases
There are eight phases of the moon, which vary from partial to full illumination. The moon phases are also called lunar phases. These stages have different names that come from its shape and size at each phase. For example, the crescent moon is 'banana' shaped, and the half-moon is D-shaped. When the moon is nearly full, it is called a gibbous moon. The crescent and gibbous moons each last approximately a week.
Each phase is also described according to its position in the full 29.5-day cycle. The eight phases of the moon, in order (a simple mapping from lunar age to phase name is sketched after this list):
new moon
waxing crescent moon
first quarter moon
waxing gibbous moon
full moon
waning gibbous moon
last quarter moon
waning crescent moon
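One simple convention, assumed here for illustration rather than taken from the source, maps the Moon's age (days since the last new moon) onto these eight names by splitting the 29.53-day cycle into eight equal bins centred on the four principal phases:

```python
SYNODIC_MONTH = 29.530589  # mean length in days (assumed value)

PHASES = [
    "new moon", "waxing crescent", "first quarter", "waxing gibbous",
    "full moon", "waning gibbous", "last quarter", "waning crescent",
]

def phase_name(age_days: float) -> str:
    """Name the phase for a given lunar age, using eight equal bins
    centred on the new, first-quarter, full and last-quarter moments."""
    index = int(age_days / SYNODIC_MONTH * 8 + 0.5) % 8
    return PHASES[index]

print(phase_name(0.0))    # new moon
print(phase_name(14.77))  # full moon (mid-cycle)
```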
Formula
The date and approximate time of a specific full moon (assuming a circular orbit) can be calculated from the following equation:
where d is the number of days since 1 January 2000 00:00:00 in the Terrestrial Time scale used in astronomical ephemerides; for Universal Time (UT) add the following approximate correction to d:
days
where N is the number of full moons since the first full moon of 2000. The true time of a full moon may differ from this approximation by up to about 14.5 hours as a result of the non-circularity of the Moon's orbit. See New moon for an explanation of the formula and its parameters.
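The equation itself did not survive extraction, so the sketch below substitutes the commonly quoted mean values: a synodic month of about 29.530588861 days, a tiny quadratic drift, and a small Terrestrial Time-to-Universal Time correction. Treat the coefficients as assumptions rather than the article's own figures; the result is still only good to the roughly 14.5-hour tolerance noted above.

```python
from datetime import datetime, timedelta

def approx_full_moon(n: int) -> datetime:
    """Approximate UT date and time of the Nth full moon after the
    first full moon of 2000 (N = 0), assuming a circular lunar orbit.
    Coefficients are commonly quoted mean values, assumed here."""
    # Days since 2000-01-01 00:00:00 in Terrestrial Time (TT)
    d = 20.362000 + 29.530588861 * n + 102.026e-12 * n ** 2
    # Approximate TT -> Universal Time correction, in days
    d += -0.000739 - 235e-12 * n ** 2
    return datetime(2000, 1, 1) + timedelta(days=d)

print(approx_full_moon(0))  # ~21 January 2000, within hours of the true time
```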
The age and apparent size of the full moon vary in a cycle of just under 14 synodic months, which has been referred to as a full moon cycle.
Lunar eclipses
When the Moon moves into Earth's shadow, a lunar eclipse occurs, during which all or part of the Moon's face may appear reddish due to the Rayleigh scattering of blue wavelengths and the refraction of sunlight through Earth's atmosphere. Lunar eclipses happen only during a full moon and around points on its orbit where the satellite may pass through the planet's shadow. A lunar eclipse does not occur every month because the Moon's orbit is inclined 5.145° with respect to the ecliptic plane of Earth; thus, the Moon usually passes north or south of Earth's shadow, which is mostly restricted to this plane of reference. Lunar eclipses happen only when the full moon occurs around either node of its orbit (ascending or descending). Therefore, a lunar eclipse occurs about every six months, and often two weeks before or after a solar eclipse, which occurs during a new moon around the opposite node.
In folklore and tradition
In Buddhism, Vesak is celebrated on the full moon day of the Vaisakha month, marking the birth, enlightenment, and death of the Buddha.
In Arabic, badr (بدر) means 'full moon', but it is often translated as 'white moon', referring to The White Days, the three days when the full moon is celebrated.
Full moons are traditionally associated with insomnia (inability to sleep), insanity (hence the terms lunacy and lunatic) and various "magical phenomena" such as lycanthropy. Psychologists, however, have found that there is no strong evidence for effects on human behavior around the time of a full moon. They find that studies are generally not consistent, with some showing a positive effect and others showing a negative effect. In one instance, the 23 December 2000 issue of the British Medical Journal published two studies on dog bite admission to hospitals in England and Australia. The study of the Bradford Royal Infirmary found that dog bites were twice as common during a full moon, whereas the study conducted by the public hospitals in Australia found that they were less likely.
The symbol of the Triple Goddess is drawn as the circular image of the full moon flanked by a left-facing crescent and a right-facing crescent, representing the maiden, mother and crone archetypes.
Full moon names
Historically, month names are names of moons (lunations, not necessarily full moons) in lunisolar calendars. Since the introduction of the solar Julian calendar in the Roman Empire, and later the Gregorian calendar worldwide, people no longer perceive month names as "moon" names. The traditional Old English month names were equated with the names of the Julian calendar from an early time, soon after Christianization, according to the testimony of Bede around AD 700.
Some full moons have developed new names in modern times, such as "blue moon", as well as "harvest moon" and "hunter's moon" for the full moons of autumn.
The golden or reddish hue of the Harvest Moon and other full moons near the horizon is caused by atmospheric scattering. When the Moon is low in the sky, its light passes through a thicker layer of Earth's atmosphere, scattering shorter wavelengths like blue and violet and allowing longer wavelengths, such as red and yellow, to dominate. This effect, combined with environmental factors such as dust, pollutants, or haze, can intensify or dull the Moon's color. Clear skies often enhance the yellow or golden appearance, particularly during the autumn months when these full moons are observed.
Lunar eclipses occur only at a full moon and often cause a reddish hue on the near side of the Moon. This full moon has been called a blood moon in popular culture.
Harvest and hunter's moons
The "harvest moon" and the "hunter's moon" are traditional names for the full moons in late summer and in the autumn in the Northern Hemisphere, usually in September and October, respectively. People may celebrate these occurrences in festivities such as the Chinese Mid-Autumn Festival.
The "harvest moon" (also known as the "barley moon" or "full corn moon") is the full moon nearest to the autumnal equinox (22 or 23 September), occurring anytime within two weeks before or after that date. The "hunter's moon" is the full moon following it. The names are recorded from the early 18th century. The Oxford English Dictionary entry for "harvest moon" cites a 1706 reference, and for "hunter's moon" a 1710 edition of The British Apollo, which attributes the term to "the country people" ("The Country People call this the Hunters-Moon.") The names became traditional in American folklore, where they are now often popularly attributed to Native Americans. The Feast of the Hunters' Moon is a yearly festival in West Lafayette, Indiana, held in late September or early October each year since 1968. In 2010 the harvest moon occurred on the night of the equinox itself (some 5 hours after the moment of equinox) for the first time since 1991, after a period known as the Metonic cycle.
All full moons rise around the time of sunset. Since the Moon moves eastward among the stars faster than the Sun, lunar culmination is delayed by about 50.47 minutes (on average) each day, thus causing moonrise to occur later each day.
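The 50.47-minute figure follows from the synodic month alone: over one synodic month the Sun culminates about 29.53 times but the Moon one time fewer, so each lunar day runs longer than 24 hours. A quick check, using an assumed mean value for the synodic month:

```python
SYNODIC_MONTH = 29.530589  # mean length in days (assumed value)

# The Moon culminates (SYNODIC_MONTH - 1) times per synodic month,
# so one lunar day lasts SYNODIC_MONTH / (SYNODIC_MONTH - 1) solar days.
lunar_day_hours = 24 * SYNODIC_MONTH / (SYNODIC_MONTH - 1)
delay_minutes = (lunar_day_hours - 24) * 60
print(f"{delay_minutes:.2f} minutes")  # ~50.47, matching the figure above
```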
Due to the high lunar standstill, the harvest and hunter's moons of 2007 were special because the time difference between moonrises on successive evenings was much shorter than average. The moon rose about 30 minutes later from one night to the next, as seen from about 40° N or S latitude (because the full moon of September 2007 rose in the northeast rather than in the east). Hence, no long period of darkness occurred between sunset and moonrise for several days after the full moon, thus lengthening the time in the evening when there is enough twilight and moonlight to work to get the harvest in.
Native American
Various 18th and 19th century writers gave what were claimed to be Native American or First Nations moon names. These were not the names of the full moons as such, but were the names of lunar months beginning with each new moon. According to Jonathan Carver in 1778, "Some nations among them reckon their years by moons, and make them consist of twelve synodical or lunar months, observing, when thirty moons have waned, to add a supernumerary one, which they term the lost moon; and then begin to count as before." Carver gave the names of the lunar months (starting from the first after the March equinox) as Worm, Plants, Flowers, Hot, Buck, Sturgeon, Corn, Travelling, Beaver, Hunting, Cold, Snow. Carver's account was reproduced verbatim in Events in Indian History (1841), but completely different lists were given by Eugene Vetromile (1856) and Peter Jones (1861).
In a book on Native American culture published in 1882, Richard Irving Dodge stated:
There is a difference among authorities as to whether or not the moons themselves are named. Brown gives names for nine moons corresponding to months. Maximillian gives the names of twelve moons; and Belden, who lived many years among the Sioux, asserts that "the Indians compute their time very much as white men do, only they use moons instead of months to designate the seasons, each answering to some month in our calendar." Then follows a list of twelve moons with Indian and English names. While I cannot contradict so positive and minute a statement of one so thoroughly in a position to know, I must assert with equal positiveness that I have never met any wild Indians, of the Sioux or other Plains tribes, who had a permanent, common, conventional name for any moon. The looseness of Belden's general statement, that "Indians compute time like white people," when his only particularization of similarity is between the months and moons, is in itself sufficient to render the whole statement questionable.
My experience is that the Indian, in attempting to fix on a particular moon, will designate it by some natural and well-known phenomenon which culminates during that moon. But two Indians of the same tribe may fix on different designations; and even the same Indian, on different occasions, may give different names to the same moon. Thus, an Indian of the middle Plains will to-day designate a spring moon as "the moon when corn is planted;" to-morrow, speaking of the same moon, he may call it "the moon when the buffalo comes." Moreover, though there are thirteen moons in our year, no observer has ever given an Indian name to the thirteenth. My opinion is, that if any of the wild tribes have given conventional names to twelve moons, it is not an indigenous idea, but borrowed from the whites.
Jonathan Carver's list of purportedly Native American month names was adopted in the 19th century by the Improved Order of Red Men, an all-white U.S. fraternal organization. They called the month of January "Cold moon", the rest being Snow, Worm, Plant, Flower, Hot, Buck, Sturgeon, Corn, Travelling, Beaver and Hunting moon. They numbered years from the time of Columbus's arrival in America.
In The American Boy's Book of Signs, Signals and Symbols (1918), Daniel Carter Beard wrote: "The Indians' Moons naturally vary in the different parts of the country, but by comparing them all and striking an average as near as may be, the moons are reduced to the following." He then gave a list that had two names for each lunar month, again quite different from earlier lists that had been published.
The 1937 Maine Farmers' Almanac published a list of full moon names that it said "were named by our early English ancestors as follows":
It also mentioned blue moon. These were considered in some quarters to be Native American full moon names, and some were adopted by colonial Americans. The Farmers' Almanac (since 1955 published in Maine, but not the same publication as the Maine Farmers' Almanac) continues to print such names.
Such names have gained currency in American folklore. They appeared in print more widely outside of the almanac tradition from the 1990s in popular publications about the Moon.
Mysteries of the Moon by Patricia Haddock ("Great Mysteries Series", Greenhaven Press, 1992) gave an extensive list of such names along with the individual tribal groups they were supposedly associated with. Haddock supposes that certain "Colonial American" moon names were adopted from Algonquian languages (which were formerly spoken in the territory of New England), while others are based in European tradition (e.g. the Colonial American names for the May moon, "Milk Moon", "Mother's Moon", "Hare Moon" have no parallels in the supposed native names, while the name of November, "Beaver Moon" is supposedly based in an Algonquian language). Many other names have been reported.
These have passed into modern mythology, either as full-moon names, or as names for lunar months. Deanna J. Conway's Moon Magick: Myth & Magick, Crafts & Recipes, Rituals & Spells (1995) gave as headline names for the lunar months (from January): Wolf, Ice, Storm, Growing, Hare, Mead, Hay, Corn, Harvest, Blood, Snow, Cold. Conway also gave multiple alternative names for each month, e.g. the first lunar month after the winter solstice could be called the Wolf, Quiet, Snow, Cold, Chaste or Disting Moon, or the Moon of Little Winter. For the last lunar month Conway offered the names Cold, Oak or Wolf Moon, or Moon of Long Nights, Long Night's Moon, Aerra Geola (Month Before Yule), Wintermonat (Winter Month), Heilagmanoth (Holy Month), Big Winter Moon, Moon of Popping Trees. Conway did not cite specific sources for most of the names she listed, but some have gained wider currency as full-moon names, such as Pink Moon for a full moon in April, Long Night's Moon for the last in December and Ice Moon for the first full moon of January or February.
Hindu full moon festivals
In Hinduism, most festivals are celebrated on auspicious days. Many Hindu festivals are celebrated on days with a full moon night, called the purnima. Different parts of India celebrate the same festival with different names, as listed below:
Chaitra Purnima – Gudi Padua, Ugadi, Hanuman Jayanti (15 April 2014)
Vaishakha Purnima – Narasimha Jayanti, Buddha Jayanti (14 May 2014)
Jyeshtha Purnima – Savitri Vrata, Vat Purnima (8 June 2014)
Ashadha Purnima – Guru Purnima, Vyasa Purnima
Shravana Purnima – Upanayana ceremony, Avani Avittam, Raksha Bandhan, Onam
Bhadrapada Purnima – Start of Pitru Paksha, Madhu Purnima
Ashvin Purnima – Sharad Purnima
Kartika Purnima – Karthikai Deepam, Thrukkarthika
Margashirsha Purnima – Thiruvathira, Dattatreya Jayanti
Pushya Purnima – Thaipusam, Shakambhari Purnima
Magha Purnima
Phalguna Purnima – Holi
Lunar and lunisolar calendars
Most pre-modern calendars the world over were lunisolar, combining the solar year with the lunation by means of intercalary months. The Julian calendar abandoned this method in favour of a purely solar reckoning while conversely the 7th-century Islamic calendar opted for a purely lunar one.
A continuing lunisolar calendar is the Hebrew calendar. Evidence of this is noted in the dates of Passover and Easter in Judaism and Christianity, respectively. Passover falls on the full moon on 15 Nisan of the Hebrew calendar. The date of the Jewish Rosh Hashana and Sukkot festivals along with all other Jewish holidays are dependent on the dates of the new moons.
Intercalary months
In lunisolar calendars, an intercalary month occurs seven times in the 19 years of the Metonic cycle, or on average every 2.7 years (19/7). In the Hebrew calendar this is noted with a periodic extra month of Adar in the early spring.
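The 19-year, 7-month rhythm is plain arithmetic on the two mean periods. A quick check with the usual mean values (assumed here, not quoted from the source):

```python
TROPICAL_YEAR = 365.2422   # mean length in days (assumed value)
SYNODIC_MONTH = 29.530589  # mean length in days (assumed value)

months_per_cycle = 19 * TROPICAL_YEAR / SYNODIC_MONTH
print(round(months_per_cycle, 2))  # ~235.0 synodic months in 19 years
print(235 - 19 * 12)               # 7 months beyond 19 twelve-month years
print(round(19 / 7, 2))            # ~2.71 years between intercalary months
```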
Meetings arranged to coincide with full moon
Before the days of good street lighting and car headlights, several organisations arranged their meetings for the full moon, so that it would be easier for their members to walk or ride home. Examples include the Lunar Society of Birmingham, several Masonic societies, including Warren Lodge No. 32, USA and Masonic Hall, York, Western Australia, and several New Zealand local authorities, including Awakino, Ohura and Whangarei County Councils and Maori Hill and Wanganui East Borough Councils.
See also
Blue moon
Lunar eclipse
Lunar effect
Lunar phase
Near side of the Moon
Orbit of the Moon
References
External links
Moon Phase Calendar for any date
Observational astronomy
Phases of the Moon | Full moon | ["Astronomy"] | 3,971 | ["Observational astronomy", "Astronomical sub-disciplines"] |
11,439 | https://en.wikipedia.org/wiki/Faster-than-light | Faster-than-light (superluminal or supercausal) travel and communication are the conjectural propagation of matter or information faster than the speed of light in vacuum (). The special theory of relativity implies that only particles with zero rest mass (i.e., photons) may travel at the speed of light, and that nothing may travel faster.
Particles whose speed exceeds that of light (tachyons) have been hypothesized, but their existence would violate causality and would imply time travel. The scientific consensus is that they do not exist.
According to all observations and current scientific theories, matter travels at slower-than-light (subluminal) speed with respect to the locally distorted spacetime region. Speculative faster-than-light concepts include the Alcubierre drive, Krasnikov tubes, traversable wormholes, and quantum tunneling. Some of these proposals find loopholes around general relativity, such as by expanding or contracting space to make the object appear to be travelling greater than c. Such proposals are still widely believed to be impossible as they still violate current understandings of causality, and they all require fanciful mechanisms to work (such as requiring exotic matter).
Superluminal travel of non-information
In the context of this article, "faster-than-light" means the transmission of information or matter faster than c, a constant equal to the speed of light in vacuum, which is 299,792,458 m/s (by definition of the metre) or about 186,282.397 miles per second. This is not quite the same as traveling faster than light, since:
Some processes propagate faster than c, but cannot carry information (see examples in the sections immediately following).
In some materials where light travels at speed c/n (where n is the refractive index) other particles can travel faster than c/n (but still slower than c), leading to Cherenkov radiation (see phase velocity below).
Neither of these phenomena violates special relativity or creates problems with causality, and thus neither qualifies as faster-than-light as described here.
In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity.
Daily sky motion
For an earth-bound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the Solar System, is about four and a half light-years away. In this frame of reference, in which Proxima Centauri is perceived to be moving in a circular trajectory with a radius of about four light-years, it could be described as having a speed many times greater than c, as the rim speed of an object moving in a circle is the product of its radius and angular speed. It is also possible, in a geostatic view, for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because their distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU. The circumference of a circle with a radius of 1000 AU is greater than one light-day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame.
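The rim speed claimed here is easy to quantify. A rough sketch, with round assumed values for the distance and the length of a day:

```python
import math

LIGHT_YEAR_KM = 9.4607e12        # kilometres per light-year
C_KM_PER_S = 299_792.458         # speed of light
SIDEREAL_DAY_S = 86_164.1        # one rotation of the Earth, in seconds

radius_km = 4.0 * LIGHT_YEAR_KM  # Proxima Centauri, roughly
rim_speed = 2 * math.pi * radius_km / SIDEREAL_DAY_S
print(rim_speed / C_KM_PER_S)    # ~9,000 times c in the geostatic frame
```

No information or energy travels at this speed; it is purely an artefact of the rotating, non-inertial frame.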
Light spots and shadows
If a laser beam is swept across a distant object, the spot of laser light can seem to move across the object at a speed greater than c. Similarly, a shadow projected onto a distant object seems to move across the object faster than c. In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light. No object is moving in these examples. For comparison, consider water squirting out of a garden hose as it is swung side to side: water does not instantly follow the direction of the hose.
Closing speeds
The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.
Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light.
Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the correct velocity-addition formula for computing such relative velocity.
It is instructive to compute the relative velocity of particles moving at v and −v in the accelerator frame, which corresponds to a closing speed of 2v > c. Expressing the speeds in units of c, β = v/c, the velocity-addition formula gives a relative speed of 2β/(1 + β²), which is always less than 1, that is, less than c.
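In code, with speeds expressed in units of c (a minimal sketch):

```python
def relative_speed(beta: float) -> float:
    """Speed of one particle as measured in the other's rest frame,
    for particles moving at +beta and -beta (in units of c) in the
    accelerator frame, via the relativistic velocity-addition formula."""
    return 2 * beta / (1 + beta ** 2)

beta = 0.9
print(2 * beta)              # closing speed in the accelerator frame: 1.8 c
print(relative_speed(beta))  # relative speed: ~0.9945 c, still below c
```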
Proper speeds
If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller would.
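A short sketch makes the distinction concrete: proper speed is Earth-frame distance divided by the traveller's own (proper) time, and it grows without bound as v approaches c.

```python
import math

def proper_speed(beta: float) -> float:
    """Proper speed in units of c for coordinate speed beta = v/c."""
    gamma = 1 / math.sqrt(1 - beta ** 2)  # time-dilation factor
    return gamma * beta

beta = 0.99
print(proper_speed(beta))      # ~7.02 light-years per traveller-year
print(1 / proper_speed(beta))  # traveller's time for a 1 light-year trip: ~0.14 yr
```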
Phase velocities above c
The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies. However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information.
Thus a phase velocity above c does not imply the propagation of signals with a velocity above c.
Group velocities above c
The group velocity of a wave may also exceed c in some circumstances. In such cases, which typically at the same time involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above c. However, even this situation does not imply the propagation of signals with a velocity above c, even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind (distortion), the pulse maximum is effectively shifted forward in time, while the information on the pulse does not come faster than c without this effect. However, group velocity can exceed c in some parts of a Gaussian beam in vacuum (without attenuation). The diffraction causes the peak of the pulse to propagate faster, while overall power does not.
Cosmic expansion
According to Hubble's law, the expansion of the universe causes distant galaxies to appear to recede from us faster than the speed of light. However, the recession speed associated with Hubble's law, defined as the rate of increase in proper distance per interval of cosmological time, is not a velocity in a relativistic sense. Moreover, in general relativity, velocity is a local notion, and there is not even a unique definition for the relative velocity of a cosmologically distant object. Faster-than-light cosmological recession speeds are entirely a coordinate effect.
There are many galaxies visible in telescopes with redshift numbers of 1.4 or higher. All of these have cosmological recession speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.
However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in ). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.
Astronomical observations
Apparent superluminal motion is observed in many radio galaxies, blazars, quasars, and recently also in microquasars. The effect was predicted before it was observed by Martin Rees and can be explained as an optical illusion caused by the object partly moving in the direction of the observer, when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light. Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds.
Quantum mechanics
Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behavior does not violate local causality or allow FTL communication, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent.
The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction. However, it was shown in 2011 that a single photon may not travel faster than c.
There have been various reports in the popular press of experiments on faster-than-light transmission in optics, most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light. However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information.
Hartman effect
The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. This could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path.
However, it has been claimed that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, also because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate". The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.
Casimir effect
In physics, the Casimir–Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
EPR paradox
The EPR paradox refers to a famous thought experiment of Albert Einstein, Boris Podolsky and Nathan Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the two measurements of an entangled state are correlated even when the measurements are distant from the source and each other. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to.
An experiment performed in 1997 by Nicolas Gisin has demonstrated quantum correlations between particles separated by over 10 kilometers. But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved. The situation is akin to sharing a synchronized coin flip, where the second person to flip their coin will always see the opposite of what the first person sees, but neither has any way of knowing whether they were the first or second flipper, without communicating classically. See No-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues has determined that in any hypothetical non-local hidden-variable theory, the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light.
Delayed choice quantum eraser
The delayed-choice quantum eraser is a version of the EPR paradox in which the observation (or not) of interference after the passage of a photon through a double-slit experiment depends on the conditions of observation of a second photon entangled with the first. The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon, which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured; this ensures that an experimenter watching only the photons going through the slit does not obtain information about the other photons in a faster-than-light or backwards-in-time manner.
Superluminal communication
Faster-than-light communication is, according to relativity, equivalent to time travel. What we measure as the speed of light in vacuum (or near vacuum) is actually the fundamental physical constant c. This means that all inertial and, for the coordinate speed of light, non-inertial observers, regardless of their relative velocity, will always measure zero-mass particles such as photons traveling at c in vacuum. This result means that measurements of time and velocity in different frames are no longer related simply by constant shifts, but are instead related by Poincaré transformations. These transformations have important implications:
The relativistic momentum of a massive particle would increase with speed in such a way that at the speed of light an object would have infinite momentum.
To accelerate an object of non-zero rest mass to c would require infinite time with any finite acceleration, or infinite acceleration for a finite amount of time.
Either way, such acceleration requires infinite energy.
Some observers with sub-light relative motion will disagree about which occurs first of any two events that are separated by a space-like interval. In other words, any travel that is faster than light will be seen as traveling backwards in time in some other, equally valid, frame of reference; otherwise one must assume the speculative hypothesis of possible Lorentz violations at a presently unobserved scale (for instance the Planck scale). Therefore, any theory which permits "true" FTL also has to cope with time travel and all its associated paradoxes, or else assume Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale). These statements are made quantitative below.
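In standard special-relativistic formulas (a textbook sketch given for concreteness, not specific to any FTL proposal):

    \[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad p = \gamma m v, \qquad E = \gamma m c^2, \]

so that p and E diverge as v approaches c, while the time separation of two events transforms as

    \[ \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right), \]

which, for a space-like separation (|Δx| > c|Δt|), changes sign for some boost with |v| < c, reversing the order of the two events.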
In special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c. In general relativity no coordinate system on a large region of curved spacetime is "inertial", so it is permissible to use a global coordinate system where objects travel faster than c. In the local neighborhood of any point in curved spacetime, however, we can define a "local inertial frame", and the local speed of light will be c in this frame, with massive objects moving through this local neighborhood always having a speed less than c in the local inertial frame.
Justifications
Casimir vacuum and quantum tunnelling
Special relativity postulates that the speed of light in vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light.
The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value c. This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing, and is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: the speed of a photon traveling between two plates that are 1 micrometer apart would increase by only about one part in 10³⁶. Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates, since the plates' rest frame would define a "preferred frame" for FTL signaling. However, with multiple pairs of plates in motion relative to one another, the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture, which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, involved approximations which may be incorrect, so it is not clear whether this effect could actually increase signal speed at all.
It was later claimed by Eckle et al. that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500–600 attoseconds (an attosecond is one quintillionth, or 10⁻¹⁸, of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy. Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects.
Give up (absolute) relativity
Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo.
There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach's principle), which implies that the rest frame of the universe might be preferred by conventional measurements of natural law. If confirmed, this would imply special relativity is an approximation to a more general theory, but since the relevant comparison would (by definition) be outside the observable universe, it is difficult to imagine (much less construct) experiments to test this hypothesis. Despite this difficulty, such experiments have been proposed.
Spacetime distortion
Although the theory of special relativity forbids objects to have a relative velocity greater than light speed, and general relativity reduces to special relativity in a local sense (in small regions of spacetime where curvature is negligible), general relativity does allow the space between distant objects to expand in such a way that they have a "recession velocity" which exceeds the speed of light, and it is thought that galaxies which are at a distance of more than about 14 billion light-years from us today have a recession velocity which is faster than light. Miguel Alcubierre theorized that it would be possible to create a warp drive, in which a ship would be enclosed in a "warp bubble" where the space at the front of the bubble is rapidly contracting and the space at the back is rapidly expanding, with the result that the bubble can reach a distant destination much faster than a light beam moving outside the bubble, but without objects inside the bubble locally traveling faster than light. However, several objections raised against the Alcubierre drive appear to rule out the possibility of actually using it in any practical fashion. Another possibility predicted by general relativity is the traversable wormhole, which could create a shortcut between arbitrarily distant points in space. As with the Alcubierre drive, travelers moving through the wormhole would not locally move faster than light travelling through the wormhole alongside them, but they would be able to reach their destination (and return to their starting location) faster than light traveling outside the wormhole.
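As a rough worked example of that recession-velocity figure, assuming a Hubble constant of about 70 km/s/Mpc (a representative measured value):

    \[ v_{\mathrm{rec}} = H_0 D \ge c \iff D \ge \frac{c}{H_0} \approx \frac{3\times 10^{5}~\mathrm{km/s}}{70~\mathrm{km/s/Mpc}} \approx 4300~\mathrm{Mpc} \approx 14~\text{billion light-years}. \]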
Gerald Cleaver and Richard Obousy, a professor and student of Baylor University, theorized that manipulating the extra spatial dimensions of string theory around a spaceship with an extremely large amount of energy would create a "bubble" that could cause the ship to travel faster than the speed of light. To create this bubble, the physicists believe manipulating the 10th spatial dimension would alter the dark energy in three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on.
Lorentz symmetry violation
The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension. This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons.
The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
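As an illustration, one commonly studied phenomenological form of such a modified dispersion relation is the following; the correction term, its coefficient η, and the power n are generic assumptions rather than the prediction of any particular model:

    \[ E^2 = m^2 c^4 + p^2 c^2 \left[ 1 + \eta \left( \frac{p c}{E_{\mathrm{Pl}}} \right)^{n} \right], \]

where E_Pl denotes the Planck energy; for η > 0 the group velocity dE/dp of a sufficiently energetic particle exceeds c.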
In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field; however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light could be the ultimate constituents of matter.
In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized, existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension.
Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale.
Superfluid theories of physical vacuum
In this approach, the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, whereas Lorentz symmetry is not an exact symmetry of nature but rather the approximate description valid only for the small fluctuations of the superfluid background. Within the framework of the approach, a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode whereas relativistic elementary particles can be described by the particle-like modes in the limit of low momenta. The important fact is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one – they can reach the speed of light limit at finite energy; also, faster-than-light propagation is possible without requiring moving objects to have imaginary mass.
FTL neutrino flight results
MINOS experiment
In 2007 the MINOS collaboration reported results measuring the flight-time of 3 GeV neutrinos, yielding a speed exceeding that of light at 1.8-sigma significance. However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light. After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light; further measurements were planned.
OPERA neutrino anomaly
On September 22, 2011, a preprint from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland, to the Gran Sasso National Laboratory in Italy, traveling faster than light by a relative amount of about 2.5×10⁻⁵ (approximately 1 in 40,000), a statistic with 6.0-sigma significance. On 17 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results. However, scientists were skeptical about the results of these experiments, the significance of which was disputed. In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel times from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light. Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber-optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast.
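For scale, the reported anomaly corresponds to neutrinos arriving roughly 60 nanoseconds earlier than light over the CERN–Gran Sasso baseline:

    \[ \frac{730~\mathrm{km}}{c} \approx 2.4~\mathrm{ms}, \qquad 2.5\times 10^{-5} \times 2.4~\mathrm{ms} \approx 60~\mathrm{ns}. \]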
Tachyons
In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which always moves faster than light. The hypothetical elementary particles with this property are called tachyons or tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.
Various theorists have suggested that the neutrino might have a tachyonic nature, while others have disputed the possibility.
General relativity
General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer. However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer. One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time, and their gravity fields would be immense. To counteract this instability and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy.
General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay. In string theory, Eric G. Gimon and Petr Hořava have argued that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passed through every point, no complete curves exist on the interior region bounded by the tube.
In fiction and popular culture
FTL travel is a common plot device in science fiction.
See also
Faster-than-light neutrino anomaly
Intergalactic travel
Krasnikov tube
Slow light
Variable speed of light
Wheeler–Feynman absorber theory
Notes
Further reading
External links
Measurement of the neutrino velocity with the OPERA detector in the CNGS beam
Encyclopedia of laser physics and technology on "superluminal transmission", with more details on phase and group velocity, and on causality
Markus Pössel: Faster-than-light (FTL) speeds in tunneling experiments: an annotated bibliography
Relativity and FTL Travel FAQ
Usenet Physics FAQ: is FTL travel or communication Possible?
Relativity, FTL and causality
Conical and paraboloidal superluminal particle accelerators
Relativity and FTL (=Superluminal motion) Travel Homepage
Interstellar travel
Fiction about physics
Science fiction themes
Theory of relativity
Warp drive theory
Tachyons
Velocity | Faster-than-light | ["Physics", "Astronomy"] | 6,447 | ["Physical phenomena", "Astronomical hypotheses", "Physical quantities", "Tachyons", "Motion (physics)", "Subatomic particles", "Vector physical quantities", "Interstellar travel", "Theory of relativity", "Velocity", "Wikipedia categories named after physical quantities", "Matter"] |
11,442 | https://en.wikipedia.org/wiki/FidoNet |
                 __
                /  \
               /|oo \
              (_|  /_)
               _`@/_ \    _
              |     | \   \\
              | (*) |  \   ))
 ______       |__U__| /  \//
/ FIDO \      _//|| _\   /
(________)   (_/(_|(____/
   (c) John Madill
FidoNet logo by John Madill
FidoNet is a worldwide computer network that is used for communication between bulletin board systems (BBSes). It uses a store-and-forward system to exchange private (email) and public (forum) messages between the BBSes in the network, as well as other files and protocols in some cases.
The FidoNet system was based on several small interacting programs, only one of which needed to be ported to support other BBS software. FidoNet was one of the few networks that was supported by almost all BBS software, as well as a number of non-BBS online services. This modular construction also allowed FidoNet to easily upgrade to new data compression systems, which was important in an era using modem-based communications over telephone links with high long-distance calling charges.
The rapid improvement in modem speeds during the early 1990s, combined with the rapid decrease in price of computer systems and storage, made BBSes increasingly popular. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world. Only UUCPNET came close in terms of breadth or numbers; FidoNet's user base far surpassed other networks like BITNET.
The broad availability of low-cost Internet connections starting in the mid-1990s lessened the need for FidoNet's store-and-forward system, as any system in the world could be reached for equal cost. Direct dialing into local BBS systems rapidly declined. Although FidoNet has shrunk considerably since the late 1990s, it has remained in use even today despite internet connectivity becoming more widespread.
History
Origins
There are two major accounts of the development of the FidoNet, differing only in small details.
Tom Jennings' account
Around Christmas 1983, Tom Jennings started work on a new bulletin board system that would emerge as Fido BBS. It was called "Fido" because the assorted hardware together was "a real mongrel". Jennings set up the system in San Francisco sometime in early 1984. Another early user was John Madill, who was trying to set up a similar system in Baltimore on his Rainbow 100. Fido started spreading to new systems, and Jennings eventually started keeping an informal list of their phone numbers, with Jennings becoming #1 and Madill #2.
Jennings released the first version of the FidoNet software in June 1984. In early 1985 he wrote a document explaining the operations of the FidoNet, along with a short portion on the history of the system. In this version, FidoNet was developed as a way to exchange mail between the first two Fido BBS systems, Jennings' and Madill's, to "see if it could be done, merely for the fun of it". This was first supported in Fido V7, "sometime in June 84 or so".
Ben Baker's account
In early 1984, Ben Baker was planning on starting a BBS for the newly forming computer club at the McDonnell Douglas automotive division in St. Louis. Baker was part of the CP/M special interest group within the club. He intended to use the seminal, CP/M-hosted, CBBS system, and went looking for a machine to run it on. The club's president told Baker that DEC would be giving them a Rainbow 100 computer on indefinite loan, so he made plans to move the CBBS onto this machine. The Rainbow contained two processors, an Intel 8088 and a Zilog Z80, allowing it to run both MS-DOS and CP/M, with the BBS running on the latter. When the machine arrived, they learned that the Z80 side had no access to the I/O ports, so CBBS could not communicate with a modem. While searching for software that would run on the MS-DOS side of the system, Baker learned of Fido through Madill.
The Fido software required changes to the serial drivers to work properly on the Rainbow. A porting effort started, involving Jennings, Madill and Baker. This caused all involved to rack up considerable long-distance charges as they all called each other during development, or called into each other's BBSes to leave email. During one such call "in May or early June", Baker and Jennings discussed how great it would be if the BBS systems could call each other automatically, exchanging mail and files between them. This would allow them to compose mail on their local machines, and then deliver it quickly, as opposed to calling in and typing the message in while on a long-distance telephone connection.
Jennings responded by calling into Baker's system that night and uploading a new version of the software consisting of three files: FIDO_DECV6, a new version of the BBS program itself, FIDONET, a new program, and NODELIST.BBS, a text file. The new version of FIDO BBS had a timer that caused it to exit at a specified time, normally at night. As it exited it would run the separate FIDONET program. NODELIST was the list of Fido BBS systems, which Jennings had already been compiling.
The FIDONET program was what later became known as a mailer. The FIDO BBS software was modified to use a previously unused numeric field in the message headers to store a node number for the machine to which the message should be delivered to. When FIDONET ran, it would search through the email database for any messages with a number in this field. FIDONET collected all of the messages for a particular node number into a file known as a message packet. After all the packets were generated, one for each node, the FIDONET program would look up the destination node's phone number in NODELIST.BBS, and call the remote system. Provided that FIDONET was running on that system, the two systems would handshake and, if this succeeded, the calling system would upload its packet, download a return packet if there was one, and disconnect. FIDONET would then unpack the return packet, place the received messages into the local system's database, and move onto the next packet. When there were no remaining packets, FIDONET would exit, and run the FIDO BBS program.
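The bundling step described above can be sketched in a few lines of Python; the data structures, field names, and phone numbers here are illustrative, not the historical FIDONET file formats:

    from collections import defaultdict

    def bundle_packets(messages, nodelist):
        # Group outbound messages by the node number stored in the
        # (formerly unused) header field, one packet per destination.
        packets = defaultdict(list)
        for msg in messages:
            dest = msg["dest_node"]
            if dest:                      # zero meant a purely local message
                packets[dest].append(msg)
        # Pair each packet with the phone number found in the nodelist.
        return {node: (nodelist[node], msgs)
                for node, msgs in packets.items() if node in nodelist}

    nodelist = {1: "1-415-555-0100", 2: "1-301-555-0199"}   # node -> phone
    messages = [{"dest_node": 2, "text": "hello"},
                {"dest_node": 0, "text": "local only"},
                {"dest_node": 2, "text": "second message"}]
    print(bundle_packets(messages, nodelist))

Grouping by destination before dialing is what kept each exchange down to a single call per remote node.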
In order to lower long-distance charges, the mail exchanges were timed to run late at night, normally 4 AM. This would later be known as national mail hour, and, later still, as Zone Mail Hour.
Up and running
By June 1984, Version 7 of the system was being run in production, and nodes were rapidly being added to the network. By August there were almost 30 systems in the nodelist, 50 by September, and over 160 by January 1985. As the network grew, the maintenance of the nodelist became prohibitive, and errors were common. In these cases, people would start receiving phone calls at 4 AM, from a caller that would say nothing and then hang up. In other cases the system would be listed before it was up and running, resulting in long-distance calls that accomplished nothing.
In August 1984, Jennings handed off control of the nodelist to the group in St. Louis, mostly Ken Kaplan and Ben Baker. Kaplan had come across Fido as part of finding a BBS solution for his company, which worked with DEC computers and had been given a Rainbow computer and a USRobotics 1200 bit/s modem. From then on, joining FidoNet required one to set up their system and use it to deliver a netmail message to a special system, Node 51. The message contained various required contact information. If this message was transmitted successfully, it ensured that at least some of the system was working properly. The nodelist team would then reply with another netmail message back to the system in question, containing the assigned node number. If delivery succeeded, the system was considered to be working properly, and it was added to the nodelist. The first new nodelist was published on 21 September 1984.
Nets and nodes
Growth continued to accelerate, and by the spring of 1985, the system was already reaching its limit of 250 nodes. In addition to the limits on the growth of what was clearly a popular system, nodelist maintenance continued to grow more and more time-consuming.
It was also realized that Fido systems were generally clustered – of the fifteen systems running by the start of June 1984, five of them were in St. Louis. A user on Jennings's system in San Francisco that addressed emails to different systems in St. Louis would cause calls to be made to each of those BBSes in turn. In the United States, local calls were normally free, and in most other countries were charged at a lower rate. Additionally, the initial call setup, generally the first minute of the call, was normally billed at a higher rate than continuing an existing connection. Therefore, it would be less expensive to deliver all the messages from all the users in San Francisco to all of the users in St. Louis in a single call. Packets were generally small enough to be delivered within a minute or two, so delivering all the messages in a single call could greatly reduce costs by avoiding multiple first-minute charges. Once delivered, the packet would be broken out into separate packets for local systems, and delivered using multiple local free calls.
The team settled on the concept of adding a new network number patterned on the idea of area codes. A complete network address would now consist of the network and node number pair, which would be written with a slash between them. All mail travelling between networks would first be sent to their local network host, someone who volunteered to pay for any long-distance charges. That single site would collect up all the netmail from all of the systems in their network, then re-package it into single packets destined to each network. They would then call any required network admin sites and deliver the packet to them. That site would then process the mail as normal, although all of the messages in the packet would be guaranteed to be local calls.
The network address was placed in an unused field in the Fido message database, which formerly always held a zero. Systems running existing versions of the software already ignored the fields containing the new addressing, so they would continue to work as before; when noticing a message addressed to another node they would look it up and call that system. Newer systems would recognize the network number and instead deliver that message to the network host. To ensure backward compatibility, existing systems retained their original node numbers through this period.
A huge advantage of the new scheme was that node numbers were now unique only within their network, not globally. This meant the previous 250 node limit was gone, but for a variety of reasons this was initially limited to about 1,200. This change also devolved the maintenance of the nodelists down to the network hosts, who then sent updated lists back to Node 51 to be collected into the master list. The St. Louis group now had to only maintain their own local network, and do basic work to compile the global list.
At a meeting held in Kaplan's living room in St. Louis on 11 April 1985 the various parties hammered out all of the details of the new concept. As part of this meeting, they also added the concept of a region, a purely administrative level that was not part of the addressing scheme. Regional hosts would handle any stragglers in the network maps, remote systems that had no local network hosts. They then divided up the US into ten regions that they felt would have roughly equal populations.
By May, Jennings had early versions of the new software running. These early versions specified the routing manually through a new ROUTE.BBS file that listed network hosts for each node. For instance, an operator might want to forward all mail to St. Louis through a single node, node 10. ROUTE.BBS would then include a list of all the known systems in that area, with instructions to forward mail to each of those nodes through node 10. This process was later semi-automated by John Warren's NODELIST program. Over time, this information was folded into updated versions of the nodelist format, and the ROUTES file is no longer used.
A new version of FIDO and FIDONET, 10C, was released containing all of these features. On 12 June 1985 the core group brought up 10C, and most Fido systems had upgraded within a few months. The process went much smoother than anyone imagined, and very few nodes had any problems.
Echomail
Sometime during the evolution of Fido, file attachments were added to the system, allowing a file to be referenced from an email message. During the normal exchange between two instances of FIDONET, any files attached to the messages in the packets were delivered after the packet itself had been up or downloaded. It is not clear when this was added, but it was already a feature of the basic system when the 8 February 1985 version of the FidoNet standards document was released, so this was added very early in Fido's history.
At a sysop meeting in Dallas, the idea was raised that it would be nice if there was some way for the sysops to post messages that would be shared among the systems. In February 1986 Jeff Rush, one of the group members, introduced a new mailer that extracted messages from public forums that the sysop selected, similar to the way the original mailer handled private messages. The new program was known as a tosser/scanner. The tosser produced a file that was similar (or identical) to the output from the normal netmail scan, but these files were then compressed and attached to a normal netmail message as an attachment. This message was then sent to a special address on the remote system. After receiving netmail as normal, the scanner on the remote system looked for these messages, unpacked them, and put them into the same public forum on the original system.
In this fashion, Rush's system implemented a store and forward public message system similar to Usenet, but based on, and hosted by, the FidoNet system. The first such echomail forum was one created by the Dallas area sysops to discuss business, known as SYSOP. Another called TECH soon followed. Several public echos soon followed, including GAYNET and CLANG. These spawned hundreds of new echos, and led to the creation of the Echomail Conference List (Echolist) by Thomas Kenny in January 1987. Echomail produced world-spanning shared forums, and its traffic volume quickly surpassed the original netmail system. By the early 1990s, echomail was carrying over 8 MB of compressed message traffic a day, many times that when uncompressed.
Echomail did not necessarily use the same distribution pathways as normal netmail, and its distribution routing was stored in a separate setup file not unlike the original ROUTE.BBS. At the originating site a header line was added to the message indicating the origin system's name and address. After that, each system that the message traveled through added itself to a growing PATH header, as well as a SEENBY header. SEENBY prevented the message from looping around the network in the case of misconfigured routing information.
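A simplified Python sketch of the SEENBY/PATH bookkeeping just described; real tossers follow the FidoNet technical standards and track full network addresses, so this shows only the loop-prevention idea:

    def toss_echomail(msg, my_addr, links, send):
        # Record that this system has handled the message.
        msg.setdefault("seenby", set()).add(my_addr)
        msg.setdefault("path", []).append(my_addr)
        for link in links:
            if link not in msg["seenby"]:   # never send it back where it has been
                msg["seenby"].add(link)
                send(link, msg)

    msg = {"area": "TECH", "text": "hello"}
    toss_echomail(msg, "1:170/918", ["1:170/900", "1:19/0"],
                  lambda link, m: print("forward to", link))
    print(msg["path"], sorted(msg["seenby"]))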
Echomail was not the only system to use the file attachment feature of netmail to implement store-and-forward capabilities. Similar concepts were used by online games and other systems as well.
Zones and points
The evolution towards the net/node addressing scheme was also useful for reducing communications costs between continents, where time zone differences on either end of the connection might also come into play. For instance, the best time to forward mail in the US was at night, but that might not be the best time for European hosts to exchange. Efforts towards introducing a continental level to the addressing system started in 1986.
At the same time, it was noted that some power users were interested in using FidoNet protocols as a way of delivering the large quantities of echomail to their local machines where it could be read offline. These users did not want their systems to appear in the nodelist - they did not (necessarily) run a bulletin board system and were not publicly accessible. A mechanism allowing netmail delivery to these systems without the overhead of nodelist maintenance was desirable.
In October 1986 the last major change to the FidoNet network was released, adding zones and points. Zones represented major geographical areas roughly corresponding to continents. There were six zones in total: North America, South America, Europe, Oceania, Asia, and Africa. Points represented non-public nodes, which were created privately on a host BBS system. Point mail was delivered to a selected host as if it was addressed to a user on that machine, but then re-packaged into a packet for the point to pick up on demand. The complete addressing format was now zone:net/node.point, so a real example might be Bob Smith@1:250/250.10. Points were widely used only for a short time; the introduction of offline reader systems filled this role with software that was much easier to use. Points remain in use to this day but are less popular than when they were introduced.
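The resulting four-part addresses are easy to parse mechanically. A small Python sketch of the zone:net/node.point format described above (the zone-defaulting rule here is a simplifying assumption):

    import re

    ADDRESS = re.compile(r"(?:(\d+):)?(\d+)/(\d+)(?:\.(\d+))?$")

    def parse_address(text, default_zone=1):
        m = ADDRESS.match(text)
        if not m:
            raise ValueError("not a FidoNet address: " + text)
        zone, net, node, point = m.groups()
        # Point 0 conventionally denotes the host node itself.
        return int(zone or default_zone), int(net), int(node), int(point or 0)

    print(parse_address("1:250/250.10"))   # (1, 250, 250, 10)
    print(parse_address("2:2410/330"))     # (2, 2410, 330, 0)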
Other extensions
FidoNet supported file attachments from even the earliest standards. File attachments followed the normal mail routing through multiple systems and could back up transfers all along the line as the files were copied. Additionally, users could send files to other users and rack up long-distance charges on host systems. For these reasons, file transfers were normally turned off for most users, and only available to the system operators and tosser/scanners.
A solution was offered in the form of file requests. This reversed the flow of information, instead of being driven by the sending systems, these were driven by the calling system. This meant it was the receiver, the user trying to get the file, that paid for the connection. Additionally, requests were directly routed using one-time point-to-point connections instead of the traditional routing, so they did not cause the file to be copied multiple times. Two such standards became common, "WaZOO" and "Bark", which saw varying support among different mailers. Both worked similarly, with the mailer calling the remote system and sending a new handshake packet to request the files.
Although FidoNet was, by far, the best known BBS-based network, it was by no means the only one. From 1988 on, PCBoard systems were able to host similar functionality known as RelayNet, while other popular networks included RBBSNet from the Commodore 64 world, and AlterNet. Late in the evolution of the FidoNet system, there was a proposal to allow mail (but not forum messages) from these systems to switch into the FidoNet structure. This was not adopted, and the rapid rise of the internet made this superfluous as these networks rapidly added internet exchange, which acted as a lingua franca.
Peak
FidoNet started in 1984 and listed 100 nodes by the end of that year. Steady growth continued through the 1980s, but a combination of factors led to rapid growth after 1988. These included faster and less expensive modems and rapidly declining costs of hard drives and computer systems in general. By April 1993, the FidoNet nodelist contained over 20,000 systems. At that time it was estimated that each node had, on average, about 200 active users. Of these roughly 4 million users, 2 million commonly used echomail, the shared public forums, while about 200,000 used the private netmail system. At its peak, FidoNet listed approximately 39,000 systems.
Throughout its lifetime, FidoNet was beset with management problems and infighting. Much of this can be traced to the fact that delivery between networks cost real money, and traffic grew more rapidly than those costs fell through improving modem speeds and downward-trending long-distance rates. As costs increased, various methods of recouping them were attempted, all of which caused friction in the groups. The problems were so bad that Jennings came to refer to the system as the "fight-o-net".
Decline
As modems reached speeds of 28.8 kbit/s, dial-up Internet became increasingly common. By 1995, the bulletin board market was reeling as users abandoned local BBS systems in favour of a subscription to a local Internet Provider, which allowed access to worldwide internet services, such as HTTP, internet mail and so on, for the same cost as accessing a local BBS system. Many BBS sysops became Internet Service Providers. Their Internet gateways also made FidoNet less expensive to implement, because inter-net transfers could be delivered over the Internet as well, at little or no marginal cost. But this seriously diluted the entire purpose of the store-and-forward model, which had been built up specifically to address a long-distance problem that no longer existed.
The FidoNet nodelist started shrinking, especially in areas with a widespread availability of internet connections. This downward trend continues but has levelled out at approximately 2,500 nodes. FidoNet remains popular in areas where Internet access is difficult to come by, or expensive.
Resurgence
Around 2014, a retro movement led to a slow increase in internet-connected BBSes and nodes. Telnet, rlogin, and SSH are used between systems, meaning a user can telnet to any BBS worldwide as cheaply as to one next door. Usenet and internet mail support, along with long file names, have been added to many newer versions of BBS software, some of it freeware, resulting in increasing use. Nodelists are no longer declining in all cases.
FidoNet organizational structure
FidoNet is governed in a hierarchical structure according to FidoNet policy, with designated coordinators at each level to manage the administration of FidoNet nodes and resolve disputes between members. The rules of conduct are summed up into these two deliberately vague principles:
Thou shalt not excessively annoy others.
Thou shalt not be too easily annoyed.
Network coordinators are responsible for managing the individual nodes within their area, usually a city or similar sized area. Regional coordinators are responsible for managing the administration of the network coordinators within their region, typically the size of a state, or small country. Zone coordinators are responsible for managing the administration of all of the regions within their zone. The world is divided into six zones, the coordinators of which elect one of themselves to be the International Coordinator of FidoNet.
Technical structure
FidoNet was historically designed to use modem-based dial-up access between bulletin board systems, and much of its policy and structure reflected this.
The FidoNet system officially referred only to the transfer of Netmail—the individual private messages between people using bulletin boards—including the protocols and standards with which to support it. A netmail message would contain the name of the person sending, the name of the intended recipient, and the respective FidoNet addresses of each. The FidoNet system was responsible for routing the message from one system to the other (details below), with the bulletin board software on each end being responsible for ensuring that only the intended recipient could read it. Due to the hobbyist nature of the network, any privacy between the sender and recipient was only the result of politeness from the owners of the FidoNet systems involved in the mail's transfer. It was common, however, for system operators to reserve the right to review the content of mail that passed through their system.
Netmail allowed for the attachment of a single file to every message. This led to a series of piggyback protocols that built additional features onto FidoNet by passing information back and forth as file attachments. These included the automated distribution of files and transmission of data for inter-BBS games.
By far the most commonly used of these piggyback protocols was Echomail, public discussions similar to Usenet newsgroups in nature. Echomail was supported by a variety of software that collected up new messages from the local BBSes' public forums (the scanner), compressed it using ARC or ZIP, attached the resulting archive to a Netmail message, and sent that message to a selected system. On receiving such a message, identified because it was addressed to a particular user, the reverse process was used to extract the messages, and a tosser put them back into the new system's forums.
Echomail was so popular that for many users, Echomail was the FidoNet. Private person-to-person Netmail was relatively rare.
Geographical structure
FidoNet is politically organized into a tree structure, with different parts of the tree electing their respective coordinators. The FidoNet hierarchy consists of zones, regions, networks, nodes and points broken down more-or-less geographically.
The highest level is the zone, which is largely continent-based:
Zone 1 is the United States and Canada
Zone 2 is Europe, Former Soviet Union countries, and Israel
Zone 3 is Australasia
Zone 4 is Latin America (except Puerto Rico)
Zone 5 was Africa
Zone 6 was Asia, excluding Israel and the Asian parts of Russia (which are listed in Zone 2). On 26 July 2007 Zone 6 was removed, and all remaining nodes were moved to Zone 3.
Each zone is broken down into regions, which are broken down into nets, which consist of individual nodes. Zones 7-4095 are used for othernets: groupings of nodes that use Fido-compatible software to carry their own independent message areas without being in any way controlled by FidoNet's political structure. Using otherwise unused zone numbers ensured that each network would have a unique set of addresses, avoiding potential routing conflicts and ambiguities for systems that belonged to more than one network.
FidoNet addresses
FidoNet addresses explicitly consist of a zone number, a network number (or region number), and a node number. They are written in the form Zone:Network/Node. The FidoNet structure also allows for semantic designation of region, host, and hub status for particular nodes, but this status is not directly indicated by the main address.
For example, consider a node located in Tulsa, Oklahoma, United States with an assigned node number of 918, located in Zone 1 (North America), Region 19, and Network 170. The full FidoNet address for this system would be 1:170/918. The region was used for administrative purposes, and was only part of the address if the node was listed directly underneath the Regional Coordinator, rather than in one of the networks that were used to divide the region further.
FidoNet policy requires that each FidoNet system maintain a nodelist of every other member system. Information on each node includes the name of the system or BBS, the name of the node operator, the geographic location, the telephone number, and software capabilities. The nodelist is updated weekly, to avoid unwanted calls to nodes that have shut down and whose phone numbers may have been reassigned for voice use by the respective telephone company.
To accomplish regular updates, coordinators of each network maintain the list of systems in their local areas. The lists are forwarded back to the International Coordinator via automated systems on a regular basis. The International Coordinator would then compile a new nodelist, and generate the list of changes (nodediff) to be distributed for node operators to apply to their existing nodelist.
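The weekly cycle can be pictured as distributing a diff against the previous list. This Python sketch uses a generic unified diff and made-up entries for brevity; the real nodelist and nodediff formats are FidoNet-specific:

    import difflib

    last_week = ["Zone,1,North_America,...",
                 "Host,170,Tulsa_Net,..."]
    this_week = ["Zone,1,North_America,...",
                 "Host,170,Tulsa_Net,...",
                 ",918,Example_BBS,Tulsa,Sysop_Name,1-918-555-0100,9600,..."]

    diff = difflib.unified_diff(last_week, this_week,
                                "NODELIST.278", "NODELIST.285", lineterm="")
    print("\n".join(diff))

Shipping only the changes mattered when the full list was tens of thousands of entries and transfers ran over dial-up links.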
Routing of FidoNet mail
In a theoretical situation, a node would normally forward messages to a hub. The hub, acting as a distribution point for mail, might then send the message to the Net Coordinator. From there it may be sent through a Regional Coordinator, or to some other system specifically set up for the function. Mail to other zones might be sent through a Zone Gate.
For example, a FidoNet message might follow the path:
1:170/918 (node) to 1:170/900 (hub) to 1:170/0 (net coordinator) to 1:19/0 (region coordinator) to 1:1/0 (zone coordinator). From there, it was distributed 'down stream' to the destination node(s).
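A minimal Python sketch of this "up the tree" forwarding; the parent table below mirrors the example path and is illustrative, not a real nodelist:

    PARENT = {
        "1:170/918": "1:170/900",   # node -> hub
        "1:170/900": "1:170/0",     # hub -> net coordinator
        "1:170/0":   "1:19/0",      # net -> region coordinator
        "1:19/0":    "1:1/0",       # region -> zone coordinator
    }

    def upstream_path(addr):
        path = [addr]
        while path[-1] in PARENT:   # climb until the top of the tree
            path.append(PARENT[path[-1]])
        return path

    print(" -> ".join(upstream_path("1:170/918")))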
Originally there was no specific relationship between network numbers and the regions they reside in. In some areas of FidoNet, most notably in Zone 2, the relationship between region number and network number are entwined. For example, 2:201/329 is in Net 201 which is in Region 20 while 2:2410/330 is in Net 2410 which is in Region 24. Zone 2 also relates the node number to the hub number if the network is large enough to contain any hubs. This effect may be seen in the nodelist by looking at the structure of Net 2410 where node 2:2410/330 is listed under Hub 300. This is not the case in other zones.
In Zone 1, things were quite different. Zone 1 was the starting point, and when zones and regions were formed, the existing nets were divided up regionally with no set formula. The only consideration was where they were located geographically with respect to the region's mapped outline. As new net numbers were added, the following formula was used.
Region number × 20
Then when some regions started running out of network numbers, the following was also used.
Region number × 200
Region 19, for instance, contains nets 380-399 and 3800–3999 in addition to those that were in Region 19 when it was formed.
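A worked check of the two formulas in Python, using Region 19 from the example above (the range endpoints are inferred from that example, so treat them as an assumption):

    region = 19
    primary = range(region * 20, region * 20 + 20)      # nets 380-399
    overflow = range(region * 200, region * 200 + 200)  # nets 3800-3999
    print(primary.start, primary[-1], overflow.start, overflow[-1])
    # 380 399 3800 3999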
Part of the objective behind the formation of local nets was to implement cost reduction plans by which all messages would be sent to one or more hubs or hosts in compressed form (ARC was nominally standard, but PKZIP was universally supported); one toll call could then be made during off-peak hours to exchange entire message-filled archives with an out-of-town uplink for further redistribution.
In practice, the FidoNet structure allows for any node to connect directly to any other, and node operators would sometimes form their own toll-calling arrangements on an ad-hoc basis, allowing for a balance between collective cost saving and timely delivery. For instance, if one node operator in a network offered to make regular toll calls to a particular system elsewhere, other operators might arrange to forward all of their mail destined for the remote system, and those near it, to the local volunteer. Operators within individual networks would sometimes have cost-sharing arrangements, but it was also common for people to volunteer to pay for regular toll calls either out of generosity or to build their status in the community.
This ad-hoc system was particularly popular with networks that were built on top of FidoNet. Echomail, for instance, often involved relatively large file transfers due to its popularity. If official FidoNet distributors refused to transfer Echomail due to additional toll charges, other node operators would sometimes volunteer. In such cases, Echomail messages would be routed to the volunteers' systems instead.
The FidoNet system was best adapted to an environment in which local telephone service was inexpensive and long-distance calls (or intercity data transfer via packet-switched networks) costly. Therefore, it fared somewhat poorly in Japan, where even local lines are expensive, or in France, where tolls on local calls and competition with Minitel or other data networks limited its growth.
Points
As the number of messages in Echomail grew over time, it became very difficult for users to keep up with the volume while logged into their local BBS. Points were introduced to address this, allowing technically savvy users to receive the already compressed and batched Echomail (and Netmail) and read it locally on their own machines.
To do this, the FidoNet addressing scheme was extended with the addition of a final address segment, the point number. For instance, a user on the example system above might be given point number 10, and thus could be sent mail at the address 1:170/918.10.
In real-world use, points are fairly difficult to set up. The FidoNet software typically consisted of a number of small utility programs run by manually edited scripts that required some level of technical ability. Reading and editing the mail required either a "sysop editor" program or a BBS program to be run locally.
In North America (Zone 1), where local calls are generally free, the benefits of the system were offset by its complexity. Points were used only briefly, and even then only to a limited degree. Dedicated offline mail reader programs such as Blue Wave, Squiggy and Silver Xpress (OPX) were introduced in the mid-1990s and quickly rendered the point system obsolete. Many of these packages supported the QWK offline mail standard.
In other parts of the world, especially Europe, this was different. In Europe, even local calls are generally metered, so there was a strong incentive to keep the duration of the calls as short as possible. Point software employs standard compression (ZIP, ARJ, etc.) and so keeps the calls down to a few minutes a day at most. In contrast to North America, pointing saw rapid and fairly widespread uptake in Europe.
Many regions distribute a pointlist in parallel with the nodelist. The pointlist segments are maintained by Net- and Region Pointlist Keepers and the Zone Point List Keeper assembles them into the Zone pointlist. At the peak of FidoNet there were over 120,000 points listed in the Zone 2 pointlist. Listing points is on a voluntary basis and not every point is listed, so how many points there really were is anybody's guess. As of June 2006, there are still some 50,000 listed points. Most of them are in Russia and Ukraine.
Technical specifications
FidoNet contained several technical specifications for compatibility between systems. The most basic of all is FTS-0001, with which all FidoNet systems are required to comply as a minimum requirement. FTS-0001 defined:
Handshaking - the protocols used by mailer software to identify each other and exchange meta-information about the session.
Transfer protocol (XMODEM) - the protocols to be used for transferring files containing FidoNet mail between systems.
Message format - the standard format for FidoNet messages during the time which they were exchanged between systems.
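To give a feel for such a fixed-layout message format, here is a Python sketch of packing a message header. The field names and sizes are illustrative assumptions for the sketch, not the normative FTS-0001 layout:

    import struct

    # from (36 bytes), to (36), subject (72), date (20), then four 16-bit
    # fields for originating/destination node and net numbers (assumed).
    HEADER = struct.Struct("<36s36s72s20sHHHH")

    def fixed(text, size):
        # Truncate or zero-pad a string to a fixed-width byte field.
        return text.encode("ascii", "replace")[:size].ljust(size, b"\0")

    def pack_header(frm, to, subject, date, orig_node, dest_node, orig_net, dest_net):
        return HEADER.pack(fixed(frm, 36), fixed(to, 36), fixed(subject, 72),
                           fixed(date, 20), orig_node, dest_node, orig_net, dest_net)

    hdr = pack_header("Tom Jennings", "John Madill", "test",
                      "01 Jan 85  04:00:00", 1, 2, 125, 261)
    print(len(hdr))   # 172 bytes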
Other specifications that were commonly used provided for echomail, different transfer protocols and handshake methods (e.g.: Yoohoo/Yoohoo2u2, EMSI), file compression, nodelist format, transfer over reliable connections such as the Internet (Binkp), and other aspects.
Zone mail hour
Since computer bulletin boards historically used the same telephone lines for transferring mail as were used for dial-in human users of the BBS, FidoNet policy dictates that at least one designated line of each FidoNet node must be available for accepting mail from other FidoNet nodes during a particular hour of each day.
Zone Mail Hour, as it was named, varies depending on the geographic location of the node, and was designated to occur during the early morning. The exact hour depends on the time zone, and during it any node with only one telephone line is required to reject human callers. In practice, particularly in later times, most FidoNet systems tended to accept mail at any time of day when the phone line was not busy, usually at night.
FidoNet deployments
Most FidoNet deployments were designed in a modular fashion. A typical deployment would involve several applications that communicated through shared files and directories, and switched between each other through carefully designed scripts or batch files. However, monolithic software that encompassed all required functions in one package was also available, such as D'Bridge. Such software eliminated the need for custom batch files and was tightly integrated in operation. The choice of deployment was the operator's, and there were pros and cons to running in either fashion.
Arguably the most important piece of software on a DOS-based Fido system was the FOSSIL driver, a small device driver that provided a standard way for the Fido software to talk to the modem. This driver needed to be loaded before any Fido software would work. An efficient FOSSIL driver meant faster, more reliable connections.
Mailer software was responsible for transferring files and messages between systems, as well as passing control to other applications, such as the BBS software, at appropriate times. The mailer would initially answer the phone and, if necessary, deal with incoming mail via FidoNet transfer protocols. If the mailer answered the phone and a human caller was detected rather than other mailer software, the mailer would exit, and pass control to the BBS software, which would then initialise for interaction with the user. When outgoing mail was waiting on the local system, the mailer software would attempt to send it from time to time by dialing and connecting to other systems who would accept and route the mail further. Due to the costs of toll calls which often varied between peak and off-peak times, mailer software would usually allow its operator to configure the optimal times in which to attempt to send mail to other systems.
BBS software was used to interact with human callers to the system. BBS software would allow dial-in users to use the system's message bases and write mail to others, locally or on other BBSes. Mail directed to other BBSes would later be routed and sent by the mailer, usually after the user had finished using the system. Many BBSes also allowed users to exchange files, play games, and interact with other users in a variety of ways (i.e.: node to node chat).
A scanner/tosser application, such as FastEcho, FMail, TosScan and Squish, would normally be invoked when a BBS user had entered a new FidoNet message that needed to be sent, or when a mailer had received new mail to be imported into the local message bases. This application was responsible for handling the packaging of incoming and outgoing mail, moving it between the local system's message bases and the mailer's inbound and outbound directories. The scanner/tosser application would generally be responsible for basic routing information, determining which systems to forward mail to.
In later times, message readers or editors that were independent of BBS software were also developed. Often the System Operator of a particular BBS would use a devoted message reader, rather than the BBS software itself, to read and write FidoNet and related messages. One of the most popular editors in 2008 was GoldED+. In some cases, FidoNet nodes, or more often FidoNet points, had no public bulletin board attached and existed only for the transfer of mail for the benefit of the node's operator. Most nodes in 2009 had no BBS access, but only points, if anything.
The original Fido BBS software, and some other FidoNet-supporting software from the 1980s, is no longer functional on modern systems. This is for several reasons, including problems related to the Y2K bug. In some cases, the original authors have left the BBS or shareware community, and the software, much of which was closed source, is no longer supported.
Several DOS-based legacy FidoNet Mailers such as FrontDoor, Intermail, MainDoor and D'Bridge from the early 1990s can still be run today under Windows without a modem, by using the freeware NetFoss Telnet FOSSIL driver, and by using a Virtual Modem such as NetSerial. This allows the mailer to dial an IP address or hostname via Telnet, rather than dialing a real POTS phone number. There are similar solutions for Linux, such as MODEMU (modem emulator), which has had limited success when combined with DOSEMU (DOS emulator).
Mail Tossers such as FastEcho and FMail are still used today under both Windows and Linux/DOSEMU.
There are several modern Windows-based FidoNet Mailers available today with source code, including Argus, Radius, and Taurus. MainDoor is another Windows-based Fidonet mailer, which also can be run using either a modem or directly over TCP/IP. Two popular free and open source software FidoNet mailers for Unix-like systems are the binkd (cross-platform, IP-only, uses the binkp protocol) and qico (supports modem communication as well as the IP protocol of ifcico and binkp).
On the hardware side, Fido systems were usually well-equipped machines, for their day, with quick CPUs, high-speed modems and 16550 UARTs, which were at the time an upgrade. As a Fidonet system was usually a BBS, it needed to quickly process any new mail events before returning to its 'waiting for call' state. In addition, the BBS itself usually necessitated lots of storage space. Finally, a FidoNet system usually had at least one dedicated phone line. Consequently, operating a Fidonet system often required significant financial investment, a cost usually met by the owner of the system.
FidoNet availability
While the use of FidoNet has dropped dramatically compared with its use up to the mid-1990s, it is still used in many countries and especially Russia and former republics of the USSR. Some BBSes, including those that are now available for users with Internet connections via telnet, also retain their FidoNet netmail and echomail feeds.
Some of FidoNet's echomail conferences are available via gateways to the Usenet news hierarchy using software like UFGate. There are also mail gates for exchanging messages between the Internet and FidoNet. Widespread net abuse and e-mail spam on the Internet side have caused some gateways (such as the former 1:1/31 IEEE fidonet.org gateway) to become unusable or cease operation entirely.
FidoNews
FidoNews is the newsletter of the FidoNet community. Affectionately nicknamed The Snooze, it has been published weekly since its first issue in 1984. Throughout its history it has been published by various people and entities, including the short-lived International FidoNet Association; since January 2002 it has been published by Björn Felten of Sweden.
See also
PODSnet
RelayNet
UUCP
References
Notes
Citations
Further reading
External links
Alternate US FidoNet Home Page
FidoNet Technical Standards Committee Home Page
FidoNews, the weekly newsletter of the FidoNet community
International Echolist Home Page
IFDC FileGate Project
Fidonet Showcase Project
Computer-related introductions in 1984
Computer-mediated communication
Pre–World Wide Web online services
Wikipedia articles with ASCII art
BBS networks | FidoNet | [
"Technology"
] | 8,877 | [
"Computer-mediated communication",
"Information systems",
"Computing and society"
] |
11,461 | https://en.wikipedia.org/wiki/Francis%20Crick | Francis Harry Compton Crick (8 June 1916 – 28 July 2004) was an English molecular biologist, biophysicist, and neuroscientist. He, James Watson, Rosalind Franklin, and Maurice Wilkins played crucial roles in deciphering the helical structure of the DNA molecule.
Crick and Watson's paper in Nature in 1953 laid the groundwork for understanding DNA structure and functions. Together with Maurice Wilkins, they were jointly awarded the 1962 Nobel Prize in Physiology or Medicine "for their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material".
Crick was an important theoretical molecular biologist and played a crucial role in research related to revealing the helical structure of DNA. He is widely known for the use of the term "central dogma" to summarise the idea that once information is transferred from nucleic acids (DNA or RNA) to proteins, it cannot flow back to nucleic acids. In other words, the final step in the flow of information from nucleic acids to proteins is irreversible.
During the remainder of his career, he held the post of J.W. Kieckhefer Distinguished Research Professor at the Salk Institute for Biological Studies in La Jolla, California. His later research centred on theoretical neurobiology and attempts to advance the scientific study of human consciousness. He remained in this post until his death; "he was editing a manuscript on his death bed, a scientist until the bitter end" according to Christof Koch.
Early life and education
Crick was the first son of Harry Crick and Annie Elizabeth Crick (née Wilkins). He was born on 8 June 1916 and raised in Weston Favell, then a small village near the English town of Northampton, in which Crick's father and uncle ran the family's boot and shoe factory. His grandfather, Walter Drawbridge Crick, an amateur naturalist, wrote a survey of local foraminifera (single-celled protists with shells), corresponded with Charles Darwin, and had two gastropods (snails or slugs) named after him.
At an early age, Francis was attracted to science and what he could learn about it from books. As a child, he was taken to church by his parents. But by about age 12, he said he did not want to go any more as he preferred a scientific search for answers over religious belief.
Walter Crick, his uncle, lived in a small house on the south side of Abington Avenue; he had a shed at the bottom of his little garden where he taught Crick to blow glass, do chemical experiments and to make photographic prints. When he was eight or nine he transferred to the most junior form of the Northampton Grammar School, on the Billing Road. The school was within walking distance of his home, by Park Avenue South and Abington Park Crescent, but he more often went by bus or, later, by bicycle. The teaching in the higher forms was satisfactory but not especially stimulating. After the age of 14, he was educated at Mill Hill School in London (on a scholarship), where he studied mathematics, physics, and chemistry with his best friend John Shilston. He shared the Walter Knox Prize for Chemistry on Mill Hill School's Foundation Day, Friday, 7 July 1933. He declared that his success was founded on the quality of teaching he received whilst a pupil at Mill Hill.
Crick studied at University College London (UCL), a constituent college of the University of London and earned a Bachelor of Science degree awarded by the University of London in 1937. Crick began a PhD at UCL, but was interrupted by World War II. He later became a PhD student and Honorary Fellow of Gonville and Caius College, Cambridge, and mainly worked at the Cavendish Laboratory and the Medical Research Council (MRC) Laboratory of Molecular Biology in Cambridge. He was also an Honorary Fellow of Churchill College, Cambridge, and of University College London.
Crick began a PhD research project on measuring the viscosity of water at high temperatures (which he later described as "the dullest problem imaginable") in the laboratory of physicist Edward Neville da Costa Andrade at University College London, but with the outbreak of World War II (in particular, an incident during the Battle of Britain when a bomb fell through the roof of the laboratory and destroyed his experimental apparatus), Crick was deflected from a possible career in physics. During his second year as a PhD student, however, he was awarded the Carey Foster Research Prize, a great honour. He did postdoctoral work at the Brooklyn Collegiate and Polytechnic Institute, now part of the New York University Tandon School of Engineering.
During World War II, he worked for the Admiralty Research Laboratory, from which many notable scientists emerged, including David Bates, Robert Boyd, Thomas Gaskell, George Deacon, John Gunn, Harrie Massey, and Nevill Mott; he worked on the design of magnetic and acoustic mines and was instrumental in designing a new mine that was effective against German minesweepers.
Post-World War Two life and work
In 1947, aged 31, Crick began studying biology and became part of an important migration of physical scientists into biology research. This migration was made possible by the newly won influence of physicists such as Sir John Randall, who had helped win the war with inventions such as radar. Crick had to adjust from the "elegance and deep simplicity" of physics to the "elaborate chemical mechanisms that natural selection had evolved over billions of years." He described this transition as, "almost as if one had to be born again". According to Crick, the experience of learning physics had taught him something important—hubris—and the conviction that since physics was already a success, great advances should also be possible in other sciences such as biology. Crick felt that this attitude encouraged him to be more daring than typical biologists who tended to concern themselves with the daunting problems of biology and not the past successes of physics.
For the better part of two years, Crick worked on the physical properties of cytoplasm at Cambridge's Strangeways Research Laboratory, headed by Honor Bridget Fell, with a Medical Research Council studentship, until he joined Max Perutz and John Kendrew at the Cavendish Laboratory. The Cavendish Laboratory at Cambridge was under the general direction of Sir Lawrence Bragg, who had won the Nobel Prize in 1915 at the age of 25. Bragg was influential in the effort to beat a leading American chemist, Linus Pauling, to the discovery of DNA's structure (after having been pipped at the post by Pauling's success in determining the alpha helix structure of proteins). At the same time Bragg's Cavendish Laboratory was also effectively competing with King's College London, whose Biophysics department was under the direction of Randall. (Randall had refused Crick's application to work at King's College.) Francis Crick and Maurice Wilkins of King's College were personal friends, which influenced subsequent scientific events as much as the close friendship between Crick and James Watson. Crick and Wilkins first met at King's College and not, as erroneously recorded by two authors, at the Admiralty during World War II.
Personal life
Crick married twice and fathered three children; his brother Anthony (born in 1918) predeceased him in 1966.
Spouses:
Ruth Doreen Crick, née Dodd (m. 18 February 1940 – 8 May 1947), became Mrs. James Stewart Potter
Odile Crick, née Speed (m. 14 August 1949 – 28 July 2004)
Children:
Michael Francis Compton (b. 25 November 1940) [by Doreen Crick]
Gabrielle Anne (b. 15 July 1951) [by Odile Crick]
Jacqueline Marie-Therese [later Nichols] (b. 12 March 1954, d. 28 February 2011) [by Odile Crick]
Crick died of colon cancer on the morning of 28 July 2004 at the University of California, San Diego (UCSD) Thornton Hospital in La Jolla; he was cremated and his ashes were scattered into the Pacific Ocean. A public memorial was held on 27 September 2004 at the Salk Institute, La Jolla, near San Diego, California; guest speakers included James Watson, Sydney Brenner, Alex Rich, Seymour Benzer, Aaron Klug, Christof Koch, Pat Churchland, Vilayanur Ramachandran, Tomaso Poggio, Leslie Orgel, Terry Sejnowski, his son Michael Crick, and his younger daughter Jacqueline Nichols. A private memorial for family and colleagues was held on 3 August 2004.
Crick's Nobel Prize medal and diploma from the Nobel committee were sold at auction in June 2013 for $2,270,000. The medal was bought by Jack Wang, the CEO of the Chinese medical company Biomobie; 20% of the sale price was donated to the Francis Crick Institute in London.
Research
Crick was interested in two fundamental unsolved problems of biology: how molecules make the transition from the non-living to the living, and how the brain makes a conscious mind. He realised that his background made him more qualified for research on the first topic and the field of biophysics. It was at this time of Crick's transition from physics to biology that he was influenced by both Linus Pauling and Erwin Schrödinger. It was clear in theory that covalent bonds in biological molecules could provide the structural stability needed to hold genetic information in cells. It remained only an exercise in experimental biology to discover exactly which molecule was the genetic one. In Crick's view, Charles Darwin's theory of evolution by natural selection, Gregor Mendel's genetics and knowledge of the molecular basis of genetics, when combined, revealed the secret of life. Crick held the optimistic view that life would very soon be created in a test tube. However, some people (such as fellow researcher and colleague Esther Lederberg) thought that Crick was unduly optimistic.
It was clear that some macromolecule such as a protein was likely to be the genetic molecule. However, it was well known that proteins are structural and functional macromolecules, some of which carry out enzymatic reactions of cells. In the 1940s, some evidence had been found pointing to another macromolecule, DNA, the other major component of chromosomes, as a candidate genetic molecule. In the 1944 Avery-MacLeod-McCarty experiment, Oswald Avery and his collaborators showed that a heritable phenotypic difference could be caused in bacteria by providing them with a particular DNA molecule.
However, other evidence was interpreted as suggesting that DNA was structurally uninteresting and possibly just a molecular scaffold for the apparently more interesting protein molecules. Crick was in the right place, in the right frame of mind, at the right time (1949), to join Max Perutz's project at the University of Cambridge, and he began to work on the X-ray crystallography of proteins. X-ray crystallography theoretically offered the opportunity to reveal the molecular structure of large molecules like proteins and DNA, but there were serious technical problems then preventing X-ray crystallography from being applicable to such large molecules.
1949–1950
Crick taught himself the mathematical theory of X-ray crystallography. During the period of Crick's study of X-ray diffraction, researchers in the Cambridge lab were attempting to determine the most stable helical conformation of amino acid chains in proteins (the alpha helix). Linus Pauling was the first to identify the 3.6 amino acids per helix turn ratio of the alpha helix. Crick was witness to the kinds of errors that his co-workers made in their failed attempts to make a correct molecular model of the alpha helix; these turned out to be important lessons that could be applied, in the future, to the helical structure of DNA. For example, he learned the importance of the structural rigidity that double bonds confer on molecular structures which is relevant both to peptide bonds in proteins and the structure of nucleotides in DNA.
1951–1953: DNA structure
In 1951 and 1952, together with William Cochran and Vladimir Vand, Crick assisted in the development of a mathematical theory of X-ray diffraction by a helical molecule. This theoretical result matched well with X-ray data for proteins that contain sequences of amino acids in the alpha helix conformation. Helical diffraction theory turned out to also be useful for understanding the structure of DNA.
Late in 1951, Crick started working with James Watson at Cavendish Laboratory at the University of Cambridge, England. Using "Photo 51" (the X-ray diffraction results of Rosalind Franklin and her graduate student Raymond Gosling of King's College London, given to them by Gosling and Franklin's colleague Wilkins), Watson and Crick together developed a model for a helical structure of DNA, which they published in 1953. For this and subsequent work they were jointly awarded the Nobel Prize in Physiology or Medicine in 1962 with Wilkins.
When Watson came to Cambridge, Crick was a 35-year-old graduate student (due to his work during WWII) and Watson was only 23, but had already obtained a PhD. They shared an interest in the fundamental problem of learning how genetic information might be stored in molecular form. Watson and Crick talked endlessly about DNA and the idea that it might be possible to guess a good molecular model of its structure. A key piece of experimentally-derived information came from X-ray diffraction images that had been obtained by Wilkins, Franklin, and Gosling. In November 1951, Wilkins came to Cambridge and shared his data with Watson and Crick. Alexander Stokes (another expert in helical diffraction theory) and Wilkins (both at King's College) had reached the conclusion that X-ray diffraction data for DNA indicated that the molecule had a helical structure—but Franklin vehemently disputed this conclusion. Stimulated by their discussions with Wilkins and what Watson learned by attending a talk given by Franklin about her work on DNA, Crick and Watson produced and showed off an erroneous first model of DNA. Their hurry to produce a model of DNA structure was driven in part by the knowledge that they were competing against Linus Pauling. Given Pauling's recent success in discovering the Alpha helix, they feared that Pauling might also be the first to determine the structure of DNA.
Many have speculated about what might have happened had Pauling been able to travel to Britain as planned in May 1952. As it was, his political activities caused his travel to be restricted by the United States government and he did not visit the UK until later, at which point he met none of the DNA researchers in England. At any rate he was preoccupied with proteins at the time, not DNA. Watson and Crick were not officially working on DNA. Crick was writing his PhD thesis; Watson also had other work such as trying to obtain crystals of myoglobin for X-ray diffraction experiments. In 1952, Watson performed X-ray diffraction on tobacco mosaic virus and found results indicating that it had helical structure. Having failed once, Watson and Crick were now somewhat reluctant to try again and for a while they were forbidden to make further efforts to find a molecular model of DNA.
Of great importance to the model building effort of Watson and Crick was Rosalind Franklin's understanding of basic chemistry, which indicated that the hydrophilic phosphate-containing backbones of the nucleotide chains of DNA should be positioned so as to interact with water molecules on the outside of the molecule while the hydrophobic bases should be packed into the core. Franklin shared this chemical knowledge with Watson and Crick when she pointed out to them that their first model (from 1951, with the phosphates inside) was obviously wrong.
Crick described what he saw as the failure of Wilkins and Franklin to cooperate and work towards finding a molecular model of DNA as a major reason why he and Watson eventually made a second attempt to do so. They asked for, and received, permission to do so from both William Lawrence Bragg and Wilkins. To construct their model of DNA, Watson and Crick made use of information from unpublished X-ray diffraction images of Franklin's (shown at meetings and freely shared by Wilkins), including preliminary accounts of Franklin's results/photographs of the X-ray images that were included in a written progress report for the King's College laboratory of Sir John Randall from late 1952.
It is a matter of debate whether Watson and Crick should have had access to Franklin's results without her knowledge or permission, and before she had a chance to formally publish the results of her detailed analysis of her X-ray diffraction data which were included in the progress report. However, Watson and Crick found fault with her steadfast assertion that, according to her data, a helical structure was not the only possible shape for DNA, so they had a dilemma. In an effort to clarify this issue, Max Ferdinand Perutz later published what had been in the progress report, and suggested that nothing was in the report that Franklin herself had not said in her talk (attended by Watson) in late 1951. Perutz explained that the report was to a Medical Research Council (MRC) committee that had been created to "establish contact between the different groups of people working for the Council". Randall's and Perutz's laboratories were both funded by the MRC.
It is also not clear how important Franklin's unpublished results from the progress report actually were for the model-building done by Watson and Crick. After the first crude X-ray diffraction images of DNA were collected in the 1930s, William Astbury had talked about stacks of nucleotides spaced at 3.4 angström (0.34 nanometre) intervals in DNA. A citation to Astbury's earlier X-ray diffraction work was one of only eight references in Franklin's first paper on DNA. Analysis of Astbury's published DNA results and the better X-ray diffraction images collected by Wilkins and Franklin revealed the helical nature of DNA. It was possible to predict the number of bases stacked within a single turn of the DNA helix (10 per turn; a full turn of the helix is 27 angströms [2.7 nm] in the compact A form, 34 angströms [3.4 nm] in the wetter B form). Wilkins shared this information about the B form of DNA with Crick and Watson. Crick did not see Franklin's B form X-ray images (Photo 51) until after the DNA double helix model was published.
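The figures quoted here fit together arithmetically; as a quick check using the numbers given above:

    rise per base pair × base pairs per turn = helix pitch
    B form: 3.4 Å × 10 = 34 Å (3.4 nm), matching Astbury's 3.4 Å stacking interval
    A form: 2.7 Å × 10 = 27 Å (2.7 nm), the more compact form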
One of the few references cited by Watson and Crick when they published their model of DNA was to a published article that included Sven Furberg's DNA model that had the bases on the inside. Thus, the Watson and Crick model was not the first "bases in" model to be proposed. Furberg's results had also provided the correct orientation of the DNA sugars with respect to the bases. During their model building, Crick and Watson learned that an antiparallel orientation of the two nucleotide chain backbones worked best to orient the base pairs in the centre of a double helix. Crick's access to Franklin's progress report of late 1952 is what made Crick confident that DNA was a double helix with antiparallel chains, but there were other chains of reasoning and sources of information that also led to these conclusions.
As a result of leaving King's College for Birkbeck College, Franklin was asked by John Randall to give up her work on DNA. When it became clear to Wilkins and the supervisors of Watson and Crick that Franklin was going to the new job, and that Linus Pauling was working on the structure of DNA, they were willing to share Franklin's data with Watson and Crick, in the hope that they could find a good model of DNA before Pauling was able. Franklin's X-ray diffraction data for DNA and her systematic analysis of DNA's structural features were useful to Watson and Crick in guiding them towards a correct molecular model. The key problem for Watson and Crick, which could not be resolved by the data from King's College, was to guess how the nucleotide bases pack into the core of the DNA double helix.
Another key to finding the correct structure of DNA was the so-called Chargaff ratios, experimentally determined ratios of the nucleotide subunits of DNA: the amount of guanine is equal to that of cytosine and the amount of adenine is equal to that of thymine. A visit by Erwin Chargaff to England, in 1952, reinforced the salience of this important fact for Watson and Crick. The significance of these ratios for the structure of DNA was not recognised until Watson, persisting in building structural models, realised that A:T and C:G pairs are structurally similar. In particular, the length of each base pair is the same. Chargaff had also pointed out to Watson that, in the aqueous, saline environment of the cell, the predominant tautomers of the pyrimidine (C and T) bases would be the amine and keto configurations of cytosine and thymine, rather than the imino and enol forms that Crick and Watson had assumed. They consulted Jerry Donohue, who confirmed the most likely structures of the nucleotide bases. The base pairs are held together by hydrogen bonds, the same non-covalent interaction that stabilises the protein α-helix. The correct structures were essential for the positioning of the hydrogen bonds. These insights led Watson to deduce the true biological relationships of the A:T and C:G pairs. After the discovery of the hydrogen-bonded A:T and C:G pairs, Watson and Crick soon had their anti-parallel, double helical model of DNA, with the hydrogen bonds at the core of the helix providing a way to "unzip" the two complementary strands for easy replication: the last key requirement for a likely model of the genetic molecule. As important as Crick's contributions to the discovery of the double helical DNA model were, he stated that without the chance to collaborate with Watson, he would not have found the structure by himself.
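The pairing rules lend themselves to a small worked example. The sketch below (illustrative, not drawn from any cited source) shows how strict A:T and G:C pairing on antiparallel strands automatically reproduces Chargaff's equalities:

    # Watson-Crick complementarity in miniature: pairing A with T and
    # G with C, and reading the partner strand in the opposite
    # direction (the strands are antiparallel), forces equal amounts
    # of A and T, and of G and C -- exactly the Chargaff ratios.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement_strand(strand: str) -> str:
        """Return the antiparallel complementary strand (reversed)."""
        return "".join(PAIR[base] for base in reversed(strand))

    duplex_top = "ATGCGT"
    duplex_bottom = complement_strand(duplex_top)   # 'ACGCAT'
    counts = {b: (duplex_top + duplex_bottom).count(b) for b in "ATGC"}
    assert counts["A"] == counts["T"] and counts["G"] == counts["C"]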
Crick did tentatively attempt to perform some experiments on nucleotide base pairing, but he was more of a theoretical biologist than an experimental biologist. There was another near-discovery of the base pairing rules in early 1952. Crick had started to think about interactions between the bases. He asked John Griffith to try to calculate attractive interactions between the DNA bases from chemical principles and quantum mechanics. Griffith's best guess was that A:T and G:C were attractive pairs. At that time, Crick was not aware of Chargaff's rules and he made little of Griffith's calculations, although it did start him thinking about complementary replication. Identification of the correct base-pairing rules (A-T, G-C) was achieved by Watson "playing" with cardboard cut-out models of the nucleotide bases, much in the manner that Linus Pauling had discovered the protein alpha helix a few years earlier. The Watson and Crick discovery of the DNA double helix structure was made possible by their willingness to combine theory, modelling and experimental results (albeit mostly done by others) to achieve their goal.
The DNA double helix structure proposed by Watson and Crick was based upon "Watson-Crick" bonds between the four bases most frequently found in DNA (A, C, T, G) and RNA (A, C, U, G). However, later research showed that triple-stranded, quadruple-stranded and other more complex DNA molecular structures require Hoogsteen base pairing. The field of synthetic biology began with work by researchers such as Eric T. Kool, in which bases other than A, C, T and G are used in synthetic DNA. With n bases rather than four, such synthetic DNA could have as many as n³ codons instead of the natural 4³ = 64. Research is currently being done to see if codons can be expanded to more than three bases. These new codons could code for new amino acids, and the resulting synthetic molecules could be used not only in medicine but in the creation of new materials.
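The codon arithmetic above is easy to make concrete; a brief sketch, with the six-base figure chosen purely as an example:

    # The arithmetic behind the codon counts: with four bases a
    # triplet code yields 4**3 = 64 codons, and adding synthetic
    # bases grows the space as n**3 (or n**k for codons of length k).
    from itertools import product

    def codon_count(n_bases: int, codon_length: int = 3) -> int:
        return n_bases ** codon_length

    assert codon_count(4) == 64                        # natural genetic code
    assert codon_count(6) == 216                       # e.g. two extra bases
    assert len(list(product("ACGT", repeat=3))) == 64  # enumerated explicitly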
The discovery was made on 28 February 1953; the first Watson/Crick paper appeared in Nature on 25 April 1953. Sir Lawrence Bragg, the director of the Cavendish Laboratory, where Watson and Crick worked, gave a talk at Guy's Hospital Medical School in London on Thursday 14 May 1953 which resulted in an article by Ritchie Calder in the News Chronicle of London, on Friday 15 May 1953, entitled "Why You Are You. Nearer Secret of Life." The news reached readers of The New York Times the next day; Victor K. McElheny, in researching his biography, "Watson and DNA: Making a Scientific Revolution", found a clipping of a six-paragraph New York Times article written from London and dated 16 May 1953 with the headline "Form of 'Life Unit' in Cell Is Scanned". The article ran in an early edition and was then pulled to make space for news deemed more important. (The New York Times subsequently ran a longer article on 12 June 1953). The university's undergraduate newspaper Varsity also ran its own short article on the discovery on Saturday 30 May 1953. Bragg's original announcement of the discovery at a Solvay conference on proteins in Belgium on 8 April 1953 went unreported by the British press.
In a seven-page, handwritten letter to his son at a British boarding school on 19 March 1953 Crick explained his discovery, beginning the letter "My Dear Michael, Jim Watson and I have probably made a most important discovery". The letter was put up for auction at Christie's New York on 10 April 2013 with an estimate of $1 to $2 million, eventually selling for $6,059,750, the largest amount ever paid for a letter at auction.
Sydney Brenner, Jack Dunitz, Dorothy Hodgkin, Leslie Orgel, and Beryl M. Oughton were some of the first people in April 1953 to see the model of the structure of DNA constructed by Crick and Watson; at the time they were working at Oxford University's Chemistry Department. All were impressed by the new DNA model, especially Brenner, who subsequently worked with Crick at Cambridge in the Cavendish Laboratory and the new Laboratory of Molecular Biology. According to the late Dr. Beryl Oughton, later Rimmer, they all travelled together in two cars once Dorothy Hodgkin announced to them that they were off to Cambridge to see the model of the structure of DNA. Orgel also later worked with Crick at the Salk Institute for Biological Studies.
Crick was often described as very talkative, with Watson – in The Double Helix – implying lack of modesty. His personality combined with his scientific accomplishments produced many opportunities for Crick to stimulate reactions from others, both inside and outside the scientific world, which was the centre of his intellectual and professional life. Crick spoke rapidly, and rather loudly, and had an infectious and reverberating laugh, and a lively sense of humour. One colleague from the Salk Institute described him as "a brainstorming intellectual powerhouse with a mischievous smile. ... Francis was never mean-spirited, just incisive. He detected microscopic flaws in logic. In a room full of smart scientists, Francis continually re-earned his position as the heavyweight champ."
Soon after Crick's death, allegations surfaced that he had used LSD when he came upon the idea of the helix structure of DNA. While he almost certainly did use LSD, it is unlikely that he did so as early as 1953.
Molecular biology
In 1954, at the age of 37, Crick completed his PhD thesis: "X-Ray Diffraction: Polypeptides and Proteins" and received his degree. Crick then worked in the laboratory of David Harker at Brooklyn Polytechnic Institute, where he continued to develop his skills in the analysis of X-ray diffraction data for proteins, working primarily on ribonuclease and the mechanisms of protein synthesis. David Harker, the American X-ray crystallographer, was described as "the John Wayne of crystallography" by Vittorio Luzzati, a crystallographer at the Centre for Molecular Genetics in Gif-sur-Yvette near Paris, who had worked with Rosalind Franklin.
After the discovery of the double helix model of DNA, Crick's interests quickly turned to the biological implications of the structure. In 1953, Watson and Crick published another article in Nature which stated: "it therefore seems likely that the precise sequence of the bases is the code that carries the genetical information".
In 1956, Crick and Watson speculated on the structure of small viruses. They suggested that spherical viruses such as Tomato bushy stunt virus had icosahedral symmetry and were made from 60 identical subunits.
After his short time in New York, Crick returned to Cambridge where he worked until 1976, at which time he moved to California. Crick engaged in several X-ray diffraction collaborations such as one with Alexander Rich on the structure of collagen. However, Crick was quickly drifting away from continued work related to his expertise in the interpretation of X-ray diffraction patterns of proteins.
George Gamow established a group of scientists interested in the role of RNA as an intermediary between DNA as the genetic storage molecule in the nucleus of cells and the synthesis of proteins in the cytoplasm (the RNA Tie Club). It was clear to Crick that there had to be a code by which a short sequence of nucleotides would specify a particular amino acid in a newly synthesised protein. In 1956, Crick wrote an informal paper about the genetic coding problem for the small group of scientists in Gamow's RNA group. In this article, Crick reviewed the evidence supporting the idea that there was a common set of about 20 amino acids used to synthesise proteins. Crick proposed that there was a corresponding set of small "adaptor molecules" that would hydrogen bond to short sequences of a nucleic acid, and also link to one of the amino acids. He also explored the many theoretical possibilities by which short nucleic acid sequences might code for the 20 amino acids.
During the mid-to-late 1950s Crick was very much intellectually engaged in sorting out the mystery of how proteins are synthesised. By 1958, Crick's thinking had matured and he could list in an orderly way all of the key features of the protein synthesis process:
genetic information stored in the sequence of DNA molecules
a "messenger" RNA molecule to carry the instructions for making one protein to the cytoplasm
adaptor molecules ("they might contain nucleotides") to match short sequences of nucleotides in the RNA messenger molecules to specific amino acids
ribonucleic-protein complexes that catalyse the assembly of amino acids into proteins according to the messenger RNA
The adaptor molecules were eventually shown to be tRNAs and the catalytic "ribonucleic-protein complexes" became known as ribosomes. An important step was the realisation by Crick and Brenner on 15 April 1960 during a conversation with François Jacob that messenger RNA was not the same thing as ribosomal RNA. Later that summer, Brenner, Jacob, and Matthew Meselson conducted an experiment which was the first to prove the existence of messenger RNA. None of this, however, answered the fundamental theoretical question of the exact nature of the genetic code. In his 1958 article, Crick speculated, as had others, that a triplet of nucleotides could code for an amino acid. Such a code might be "degenerate", with 4×4×4=64 possible triplets of the four nucleotide subunits while there were only 20 amino acids. Some amino acids might have multiple triplet codes. Crick also explored other codes in which, for various reasons, only some of the triplets were used, "magically" producing just the 20 needed combinations. Experimental results were needed; theory alone could not decide the nature of the code. Crick also used the term "central dogma" to summarise an idea that implies that genetic information flow between macromolecules would be essentially one-way:
DNA → RNA → protein
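A toy sketch can make this one-way flow concrete. The four-entry codon table below is a genuine subset of the standard genetic code, but everything else is simplified for illustration; note that nothing here maps protein back to nucleic acid, which is the point of the dogma:

    # Toy rendering of the central dogma: DNA is transcribed to RNA
    # and RNA translated to protein; no inverse function exists.
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def transcribe(dna: str) -> str:
        """DNA coding strand -> messenger RNA (T replaced by U)."""
        return dna.replace("T", "U")

    def translate(mrna: str) -> list:
        """mRNA -> amino acid sequence, stopping at a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    assert translate(transcribe("ATGTTTGGCTAA")) == ["Met", "Phe", "Gly"]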
Some critics thought that by using the word "dogma", Crick was implying that this was a rule that could not be questioned, but all he really meant was that it was a compelling idea without much solid evidence to support it. In his thinking about the biological processes linking DNA genes to proteins, Crick made explicit the distinction between the materials involved, the energy required, and the information flow. Crick was focused on this third component (information) and it became the organising principle of what became known as molecular biology. Crick had by this time become a highly influential theoretical molecular biologist.
Proof that the genetic code is a degenerate triplet code finally came from genetics experiments, some of which were performed by Crick. The details of the code came mostly from work by Marshall Nirenberg and others, who synthesised artificial RNA molecules and used them as templates for in vitro protein synthesis. Nirenberg first announced his results to a small audience in Moscow at a 1961 conference. Crick's reaction was to invite Nirenberg to deliver his talk to a larger audience.
Controversy
Use of other researchers' data
Watson and Crick's use of DNA X-ray diffraction data collected by Franklin and Wilkins has generated an enduring controversy. It arose from the fact that some of Franklin's unpublished data were used without her knowledge or consent by Watson and Crick in their construction of the double helix model of DNA. Of the four DNA researchers, only Franklin had a degree in chemistry; Wilkins and Crick had backgrounds in physics, Watson in biology.
Prior to publication of the double helix structure, Watson and Crick had little direct interaction with Franklin herself. They were, however, aware of her work, more aware than she herself realised. Watson was present at a lecture, given in November 1951, where Franklin presented the two forms of the molecule, type A and type B, and discussed the position of the phosphate units on the external part of the molecule. In January 1953, Watson was shown an X-ray photograph of B-DNA (called photograph 51) by Wilkins. Wilkins had been given photograph 51 by Rosalind Franklin's PhD student Raymond Gosling. Wilkins and Gosling had worked together in the Medical Research Council's (MRC) Biophysics Unit before director John Randall appointed Franklin to take over both DNA diffraction work and guidance of Gosling's thesis. It appears that Randall did not communicate effectively with them about Franklin's appointment, contributing to confusion and friction between Wilkins and Franklin. In the middle of February 1953, Crick's thesis advisor, Max Perutz, gave Crick a copy of a report written for a Medical Research Council biophysics committee visit to King's in December 1952, containing data from the King's group, including some of Franklin's crystallographic calculations. Franklin was unaware that photograph 51 and other information had been shared with Crick and Watson. She wrote a series of three draft manuscripts, two of which included a double helical DNA backbone. Her two manuscripts on form A DNA reached Acta Crystallographica in Copenhagen on 6 March 1953, one day before Crick and Watson completed their model.
The X-ray diffraction images collected by Gosling and Franklin provided the best evidence for the helical nature of DNA. Before this, Linus Pauling, as well as Watson and Crick, had generated erroneous models with the chains inside and the bases pointing outwards. Her experimental results provided estimates of the water content of DNA crystals, and these results were most consistent with the three sugar-phosphate backbones being on the outside of the molecule. Franklin's X-ray photograph showed that the backbones had to be on the outside. Although she at first insisted vehemently that her data did not force one to conclude that DNA has a helical structure, in the drafts she submitted in 1953 she argued for a double helical DNA backbone. Building on her manuscripts, she discovered that form A DNA had antiparallel backbones, which supported the double helical structure of DNA. She did this through identification of the space group for DNA crystals. This went on to help Watson and Crick decide to look for DNA models with two antiparallel polynucleotide strands.
In summary, Watson and Crick had three sources for Franklin's unpublished data: 1) her 1951 seminar, attended by Watson, 2) discussions with Wilkins, who worked in the same laboratory with Franklin, 3) a research progress report that was intended to promote coordination of Medical Research Council-supported laboratories. Watson, Crick, Wilkins and Franklin all worked in MRC laboratories.
Crick and Watson felt that they had benefited from collaborating with Wilkins. They offered him a co-authorship on the article that first described the double helix structure of DNA. Wilkins turned down the offer, a fact that may have led to the terse character of the acknowledgement of experimental work done at King's College in the eventual published paper. Rather than make any of the DNA researchers at King's College co-authors on the Watson and Crick double helix article, the solution that was arrived at was to publish two additional papers from King's College along with the helix paper. Brenda Maddox suggests that because of the importance of her experimental results in Watson and Crick's model building and theoretical analysis, Franklin should have had her name on the original Watson and Crick paper in Nature. Franklin and Gosling submitted their own joint "second" paper to Nature at the same time as Wilkins, Stokes, and Wilson submitted theirs (i.e. the "third" paper on DNA).
Watson's portrayal of Franklin in The Double Helix was negative and gave the appearance that she was Wilkins' assistant and was unable to interpret her own DNA data. However, according to Nathaniel Comfort, a historian of medicine at Johns Hopkins University, Franklin's colleague Aaron Klug believed that Franklin "was 'two steps away' from the double helix". After completing an analysis of her lab notebooks, Klug stated that she surely would have had it.
The X-ray diffraction images collected by Franklin provided the best evidence for the helical nature of DNA. While Franklin's experimental work proved important to Crick and Watson's development of a correct model, she herself could not pursue it at the time: when she left King's College, Director Sir John Randall insisted that all DNA work belonged exclusively to King's and ordered Franklin not even to think about it. Because of this, the scientific community did not understand the depth of Franklin's contributions. Franklin subsequently did superb work in J. D. Bernal's lab at Birkbeck College with the tobacco mosaic virus, which also extended ideas on helical construction.
Eugenics
Crick occasionally expressed his views on eugenics, usually in private letters. For example, Crick advocated a form of positive eugenics in which wealthy parents would be encouraged to have more children. He once remarked, "In the long run, it is unavoidable that society will begin to worry about the character of the next generation ... It is not a subject at the moment which we can tackle easily because people have so many religious beliefs and until we have a more uniform view of ourselves I think it would be risky to try and do anything in the way of eugenics ... I would be astonished if, in the next 100 or 200 years, society did not come round to the view that they would have to try to improve the next generation in some extent or one way or another."
Sexual harassment
Biologist Nancy Hopkins says that when she was an undergraduate in the 1960s, Crick put his hands on her breasts during a lab visit. She described the incident: "Before I could rise and shake hands, he had zoomed across the room, stood behind me, put his hands on my breasts and said, 'What are you working on?'"
Views on religion
Crick referred to himself as a humanist, which he defined as the belief "that human problems can and must be faced in terms of human moral and intellectual resources without invoking supernatural authority." He publicly called for humanism to replace religion as a guiding force for humanity, writing:
The human dilemma is hardly new. We find ourselves through no wish of our own on this slowly revolving planet in an obscure corner of a vast universe. Our questioning intelligence will not let us live in cow-like content with our lot. We have a deep need to know why we are here. What is the world made of? More important, what are we made of? In the past religion answered these questions, often in considerable detail. Now we know that almost all these answers are highly likely to be nonsense, having sprung from man's ignorance and his enormous capacity for self-deception ... The simple fables of the religions of the world have come to seem like tales told to children. Even understood symbolically they are often perverse, if not rather unpleasant ... Humanists, then, live in a mysterious, exciting and intellectually expanding world, which, once glimpsed, makes the old worlds of the religions seem fake-cosy and stale.
Crick was especially critical of Christianity:
I do not respect Christian beliefs. I think they are ridiculous. If we could get rid of them we could more easily get down to the serious problem of trying to find out what the world is all about.
Crick once joked, "Christianity may be OK between consenting adults in private but should not be taught to young children."
In his book Of Molecules and Men, Crick expressed his views on the relationship between science and religion. After suggesting that it would become possible for a computer to be programmed so as to have a soul, he wondered: at what point during biological evolution did the first organism have a soul? At what moment does a baby get a soul? Crick stated his view that the idea of a non-material soul that could enter a body and then persist after death is just that, an imagined idea. For Crick, the mind is a product of physical brain activity and the brain had evolved by natural means over millions of years. He felt that it was important that evolution by natural selection be taught in schools and that it was regrettable that English schools had compulsory religious instruction. He also considered that a new scientific world view was rapidly being established, and predicted that once the detailed workings of the brain were eventually revealed, erroneous Christian concepts about the nature of humans and the world would no longer be tenable; traditional conceptions of the "soul" would be replaced by a new understanding of the physical basis of mind. He was sceptical of organised religion, referring to himself as a sceptic and an agnostic with "a strong inclination towards atheism".
In 1960, Crick accepted an honorary fellowship at Churchill College, Cambridge, one factor being that the new college did not have a chapel. Some time later a large donation was made to establish a chapel and the College Council decided to accept it. Crick resigned his fellowship in protest.
In October 1969, Crick participated in a celebration of the 100th year of the journal Nature in which he attempted to make some predictions about what the next 30 years would hold for molecular biology. His speculations were later published in Nature. Near the end of the article, Crick briefly mentioned the search for life on other planets, but he held little hope that extraterrestrial life would be found by the year 2000. He also discussed what he described as a possible new direction for research, what he called "biochemical theology". Crick wrote "so many people pray that one finds it hard to believe that they do not get some satisfaction from it". A field similar to Crick's hypothesized "biochemical theology" now exists as neurotheology.
Crick suggested that it might be possible to find chemical changes in the brain that were molecular correlates of the act of prayer. He speculated that there might be a detectable change in the level of some neurotransmitter or neurohormone when people pray. Crick's view of the relationship between science and religion continued to play a role in his work as he made the transition from molecular biology research into theoretical neuroscience.
Crick asked in 1998 "and if some of the Bible is manifestly wrong, why should any of the rest of it be accepted automatically? ... And what would be more important than to find our true place in the universe by removing one by one these unfortunate vestiges of earlier beliefs?"
In 2003 he was one of 22 Nobel laureates who signed the Humanist Manifesto.
Creationism
Crick was a firm critic of young Earth creationism. In the 1987 United States Supreme Court case Edwards v. Aguillard, Crick joined a group of other Nobel laureates who advised that "'creation-science' simply has no place in the public-school science classroom." Crick was also an advocate for the establishment of Darwin Day as a British national holiday.
Directed panspermia
During the 1960s, Crick became concerned with the origins of the genetic code. In 1966, Crick took the place of Leslie Orgel at a meeting where Orgel was to talk about the origin of life. Crick speculated about possible stages by which an initially simple code with a few amino acid types might have evolved into the more complex code used by existing organisms. At that time, proteins were thought to be the only kind of enzyme, and ribozymes had not yet been identified. Many molecular biologists were puzzled by the problem of the origin of a protein replicating system that is as complex as that which exists in organisms currently inhabiting Earth. In the early 1970s, Crick and Orgel further speculated about the possibility that the production of living systems from molecules may have been a very rare event in the universe, but once it had developed it could be spread by intelligent life forms using space travel technology, a process they called "directed panspermia". In a retrospective article, Crick and Orgel noted that they had been unduly pessimistic about the chances of abiogenesis on Earth when they had assumed that some kind of self-replicating protein system was the molecular origin of life.
In 1976, Crick addressed the origin of protein synthesis in a paper with Sydney Brenner, Aaron Klug, and George Pieczenik. In this paper they speculated that code constraints on nucleotide sequences allow protein synthesis without the need for a ribosome. The mechanism, however, requires five-base binding between the mRNA and tRNA, with a flip of the anticodon creating triplet coding even though the physical interaction spans five bases. Thomas H. Jukes pointed out that the code constraints on the mRNA sequence required for this translation mechanism are still preserved.
Neuroscience and other interests
Crick's period at Cambridge was the pinnacle of his long scientific career, but he left Cambridge in 1977 after 30 years, having been offered (and having refused) the Mastership of Gonville and Caius. James Watson claimed at a Cambridge conference marking the 50th anniversary of the discovery of the structure of DNA in 2003:
Now perhaps it's a pretty well kept secret that one of the most uninspiring acts of the University of Cambridge over this past century was to turn down Francis Crick when he applied to be the Professor of Genetics, in 1958. Now there may have been a series of arguments, which led them to reject Francis. It was really saying, don't push us to the frontier.
The apparently "pretty well kept secret" had already been recorded in Soraya De Chadarevian's Designs For Life: Molecular Biology After World War II, published by Cambridge University Press in 2002. His major contribution to molecular biology in Cambridge is well documented in The History of the University of Cambridge: Volume 4 (1870 to 1990), which was published by CUP in 1992.
According to the University of Cambridge's genetics department official website, the electors of the professorship could not reach consensus, prompting the intervention of the then University Vice-Chancellor, Lord Adrian. Lord Adrian first offered the professorship to a compromise candidate, Guido Pontecorvo, who refused; he is then said to have offered it to Crick, who also refused.
In 1976, Crick took a sabbatical year at the Salk Institute for Biological Studies in La Jolla, California. Crick had been a nonresident fellow of the Institute since 1960. Crick wrote, "I felt at home in Southern California." After the sabbatical, Crick left Cambridge to continue working at the Salk Institute. He was also an adjunct professor at the University of California, San Diego. He taught himself neuroanatomy and studied many other areas of neuroscience research. It took him several years to disengage from molecular biology because exciting discoveries continued to be made, including the discovery of alternative splicing and the discovery of restriction enzymes, which helped make possible genetic engineering. Eventually, in the 1980s, Crick was able to devote his full attention to his other interest, consciousness. His autobiographical book, What Mad Pursuit: A Personal View of Scientific Discovery, includes a description of why he left molecular biology and switched to neuroscience.
Upon taking up work in theoretical neuroscience, Crick was struck by several things:
there were many isolated subdisciplines within neuroscience with little contact between them
many people who were interested in behaviour treated the brain as a black box
consciousness was viewed as a taboo subject by many neurobiologists
Crick hoped he might aid progress in neuroscience by promoting constructive interactions between specialists from the many different subdisciplines concerned with consciousness. He also collaborated with neurophilosophers such as Patricia Churchland. In 1983, as a result of their studies of computer models of neural networks, Crick and Mitchison proposed that the function of REM sleep and dreaming is to remove certain modes of interactions in networks of cells in the mammalian cerebral cortex; they called this hypothetical process "reverse learning" or "unlearning". In the final phase of his career, Crick established a collaboration with Christof Koch that led to publication of a series of articles on consciousness during the period spanning from 1990 to 2005. Crick made the strategic decision to focus his theoretical investigation of consciousness on how the brain generates visual awareness within a few hundred milliseconds of viewing a scene. Crick and Koch proposed that consciousness seems so mysterious because it involves very short-term memory processes that are as yet poorly understood. In his book The Astonishing Hypothesis, Crick described how neurobiology had reached a mature enough stage so that consciousness could be the subject of a unified effort to study it at the molecular, cellular and behavioural levels. Crick was sceptical about the value of computational models of mental function that are not based on details about brain structure and function.
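Crick and Graeme Mitchison's "reverse learning" proposal is often illustrated with Hopfield-style associative networks; the toy numerical sketch below is such an illustration, not their model. The network size, update rule and unlearning rate are arbitrary assumptions chosen only to show the mechanism: memories are stored with a Hebbian rule, and an unlearning step weakens whatever state the free-running ("dreaming") network settles into.

    # Toy sketch of Hebbian storage followed by an "unlearning" step
    # on a Hopfield-style network; all parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    patterns = rng.choice([-1, 1], size=(3, N))    # stored memories
    W = sum(np.outer(p, p) for p in patterns) / N  # Hebbian weights
    np.fill_diagonal(W, 0)

    def settle(state, steps=500):
        """Run asynchronous updates until the network relaxes."""
        for _ in range(steps):
            i = rng.integers(N)
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    dream = settle(rng.choice([-1, 1], size=N))    # free-running state
    W -= 0.01 * np.outer(dream, dream) / N         # unlearn it slightly
    np.fill_diagonal(W, 0)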
Crick was aware that research on consciousness was a difficult task, as he wrote to Martynas Yčas in April 1996: "I don't think we shall fully understand consciousness by the end of this century, but it's possible we can get a glimpse of the answer by then. Whether it will all fall into place, as molecular biology did, without a vital force, or whether we need a radical formulation, only time will tell. Best wishes, Yours, Francis. P.S. By the way, I've not been knighted."
Awards and honours
In addition to his third share of the 1962 Nobel prize for Physiology or Medicine, he received many awards and honours, including the Royal and Copley medals of the Royal Society (1972 and 1975), and also the Order of Merit (on 27 November 1991); he refused an offer of a CBE in 1963, but was often referred to in error as 'Sir Francis Crick' and even on occasions as 'Lord Crick'. He was elected an EMBO Member in 1964.
The award of Nobel prizes to John Kendrew and Max Perutz, and to Crick, Watson, and Wilkins was satirised in a short sketch in the BBC TV programme That Was The Week That Was with the Nobel Prizes being referred to as 'The Alfred Nobel Peace Pools'.
He was an elected member of the American Academy of Arts and Sciences (1962), the United States National Academy of Sciences (1969), and the American Philosophical Society (1972).
Francis Crick Medal and Lecture
The Francis Crick Medal and Lecture was established in 2003 following an endowment by his former colleague, Sydney Brenner, joint winner of the 2002 Nobel Prize in Physiology or Medicine. The lecture is delivered annually in any field of biological sciences, with preference given to the areas in which Francis Crick himself worked. The lectureship is aimed at younger scientists, ideally under 40, or with a career progression corresponding to that age. Crick lectures have been delivered by Julie Ahringer, Dario Alessi, Ewan Birney, Simon Boulton, Jason Chin, Simon Fisher, Matthew Hurles, Gilean McVean, Duncan Odom, Geraint Rees, Sarah Teichmann, M. Madan Babu and Daniel Wolpert.
Francis Crick Institute
The Francis Crick Institute is a £660 million biomedical research centre in north London, United Kingdom. It is a partnership between Cancer Research UK, Imperial College London, King's College London, the Medical Research Council, University College London (UCL) and the Wellcome Trust. Completed in 2016, it is the largest centre for biomedical research and innovation in Europe.
Francis Crick Graduate Lectures
The University of Cambridge Graduate School of Biological, Medical and Veterinary Sciences hosts The Francis Crick Graduate Lectures. The first two lectures were by John Gurdon and Tim Hunt.
Other honours
The inscription on the helices of a DNA sculpture (which was donated by James Watson) outside Clare College's Thirkill Court, Cambridge, England reads: "The structure of DNA was discovered in 1953 by Francis Crick and James Watson while Watson lived here at Clare." and on the base: "The double helix model was supported by the work of Rosalind Franklin and Maurice Wilkins."
Another sculpture entitled Discovery, by artist Lucy Glendinning was installed on Tuesday, 13 December 2005 in Abington Street, Northampton. According to the late Lynn Wilson, chairman of the Wilson Foundation, "The sculpture celebrates the life of a world class scientist who must surely be considered the greatest Northamptonian of all time — by discovering DNA he unlocked the whole future of genetics and the alphabet of life."
Westminster City Council unveiled a green plaque to Francis Crick on the front façade of 56 St George's Square, Pimlico, London SW1 on 20 June 2007; Crick lived in the first floor flat, together with Robert Dougall of BBC radio and later TV fame, a former Royal Navy associate.
In addition, Crick was elected a Fellow of the Royal Society (FRS) in 1959, a Fellow of the International Academy of Humanism, and a Fellow of CSICOP.
In 1987, Crick received the Golden Plate Award of the American Academy of Achievement.
At a meeting of the executive council of the Committee for Skeptical Inquiry (CSI) (formerly CSICOP) in Denver, Colorado in April 2011, Crick was selected for inclusion in CSI's Pantheon of Skeptics. The Pantheon of Skeptics was created by CSI to remember the legacy of deceased fellows of CSI and their contributions to the cause of scientific scepticism.
A sculpted bust of Francis Crick by John Sherrill Houser, which incorporates a single "Golden" Helix, was cast in bronze in the artist's studio in New Mexico, US. The bronze was first displayed at the Francis Crick Memorial Conference (on Consciousness) at the University of Cambridge's Churchill College on 7 July 2012; it was bought by Mill Hill School in May 2013, and displayed at the inaugural Crick Dinner on 8 June 2013, and will be again at their Crick Centenary Dinner in 2016.
He also received the Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society (2001), together with Watson.
Crick featured in the BBC Radio 4 series The New Elizabethans to mark the Diamond Jubilee of Queen Elizabeth II in 2012. A panel of seven academics, journalists and historians named Crick among a group of 60 people in the UK "whose actions during the reign of Elizabeth II have had a significant impact on lives in these islands and given the age its character".
Books
Of Molecules and Men (Prometheus Books, 2004; original edition 1967)
Life Itself: Its Origin and Nature (Simon & Schuster, 1981)
What Mad Pursuit: A Personal View of Scientific Discovery (Basic Books reprint edition, 1990)
The Astonishing Hypothesis: The Scientific Search for the Soul (Scribner reprint edition, 1995)
Georg Kreisel: a Few Personal Recollections. In: Kreiseliana: About and Around Georg Kreisel (1996), pp. 25–32.
See also
Crick, Brenner et al. experiment
Crick's wobble hypothesis
History of RNA biology
List of RNA biologists
Molecular structure of Nucleic Acids (article)
Neural correlates of consciousness
References
Sources
Further reading
John Bankston; Francis Crick and James Watson: Pioneers in DNA Research (Mitchell Lane Publishers, Inc., 2002).
Bill Bryson; A Short History of Nearly Everything (Broadway Books, 2003).
Soraya De Chadarevian; Designs for Life: Molecular Biology After World War II (CUP, 2002), 444 pp.
Roderick Braithwaite; Strikingly Alive: The History of the Mill Hill School Foundation 1807–2007 (Phillimore & Co.).
Edwin Chargaff; Heraclitean Fire (Rockefeller Press, 1978).
S. Chomet (ed.); D.N.A.: Genesis of a Discovery (Newman-Hemisphere Press, London, 1994).
Richard E. Dickerson; Present at the Flood: How Structural Molecular Biology Came About (Sinauer, 2005).
Edward Edelson; Francis Crick and James Watson: And the Building Blocks of Life (Oxford University Press, 2000).
John Finch; A Nobel Fellow on Every Floor (Medical Research Council, 2008), 381 pp.
Thomas Hager; Force of Nature: The Life of Linus Pauling (Simon & Schuster, 1995).
Graeme Hunter; Light Is a Messenger: The Life and Science of William Lawrence Bragg (Oxford University Press, 2004).
Horace Freeland Judson; The Eighth Day of Creation: Makers of the Revolution in Biology (Penguin Books, 1995; first published by Jonathan Cape, 1977).
Errol C. Friedberg; Sydney Brenner: A Biography (CSHL Press, October 2010).
Torsten Krude (ed.); DNA: Changing Science and Society (CUP, 2003). The Darwin Lectures for 2003, including one by Sir Aaron Klug on Rosalind Franklin's involvement in the determination of the structure of DNA.
Robert Olby; The Path to The Double Helix: Discovery of DNA (Macmillan, October 1974), with a foreword by Francis Crick; revised in 1994, with a 9-page postscript.
Robert Olby; "Crick, Francis Harry Compton (1916–2004)", in Oxford Dictionary of National Biography (Oxford University Press, January 2008).
Anne Sayre; Rosalind Franklin and DNA (New York: W. W. Norton and Company, 1975).
James D. Watson; The Double Helix: A Personal Account of the Discovery of the Structure of DNA (Atheneum, 1980; first published in 1968), a very readable firsthand account of the research by Crick and Watson. The book also formed the basis of the award-winning television dramatisation Life Story by BBC Horizon (also broadcast as Race for the Double Helix). The Norton Critical Edition, edited by Gunther S. Stent, was published in 1980.
James D. Watson; Avoid Boring People and Other Lessons from a Life in Science (New York: Random House).
External links
The Francis Crick Institute
"Francis Harry Compton Crick (1916–2004)" by A. Andrei at the Embryo Project Encyclopedia
Crick papers
Register of Francis Crick Personal Papers – MSS 660 Crick's personal papers at Mandeville Special Collections Library, Geisel Library, University of California, San Diego
Francis Crick Archive — Papers by Francis Crick are available for study at the Wellcome Library's Archives and Manuscripts department. These papers include those dealing with Crick's career after he moved to the Salk Institute in San Diego. The digitised papers are available at Codebreakers: Makers of Modern Genetics: the Francis Crick papers
Comprehensive list of pdf files of Crick's papers from 1950 to 1990 – National Library of Medicine.
Francis Crick papers – Nature.com
Key Participants: Francis H. C. Crick – Linus Pauling and the Race for DNA: A Documentary History
Audio and video files
An interview with Francis Crick and Christof Koch, 2001
Listen to Francis Crick
The Quest for Consciousness – 65-minute audio program — a conversation on consciousness with Francis Crick of the Salk Institute and neurobiologist Christof Koch from Caltech.
Listen to Francis Crick and James Watson talking on the BBC in 1962, 1972, and 1974.
The Impact of Linus Pauling on Molecular Biology – a 1995 talk delivered by Crick at Oregon State University
About his work
The Crick Papers at the Wellcome Trust.
"Quiet debut for the double helix" by Professor Robert Olby, Nature 421 (23 January 2003): 402–405.
Reading list for discovery of DNA story from the National Centre for Biotechnology Education.
Papers of Francis Crick, 1953-1969 held at Churchill Archives Centre
About his life
Olby's Australian lecture, March 2010
Salk Institute Press Release on the death of Francis Crick.
The Francis Crick Papers – Profiles in Science, National Library of Medicine
Obituary in The Times (London) of Francis Crick, 30 July 2004.
Francis Crick Obituary The Biochemist
Miscellaneous
National DNA Day, 25 April 2006 Moderated Chat Transcript Archive
Independent On Line article about Consciousness, 7 June 2006.
100 Scientists and Thinkers: James Watson and Francis Crick from Time magazine.
Francis Crick: Nobel Prize 1962, Physiology or Medicine
First press stories on DNA; for the "second" DNA story in The New York Times, see https://www.nytimes.com/packages/pdf/science/dna-article.pdf — a reproduction of the original text from June 1953.
50th anniversary series of articles from The New York Times.
Quotes of Robert Olby on exactly who may have discovered the structure of DNA.
A celebration of Francis Crick's life in science.
Francis Crick tells his life story at Web of Stories
Article by Mark Steyn from The Atlantic in 2004.
Review of Francis Crick: Hunter of Life's Secrets in Current Biology.
1916 births
2004 deaths
Alumni of the University of London
Alumni of University College London
Alumni of Gonville and Caius College, Cambridge
British consciousness researchers and theorists
British critics of Christianity
Critics of creationism
Deaths from colorectal cancer in California
DNA
Oneirologists
English agnostics
English biophysicists
English geneticists
English humanists
English molecular biologists
English neuroscientists
English Nobel laureates
English sceptics
Fellows of Churchill College, Cambridge
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Materialists
Members of the European Molecular Biology Organization
Members of the French Academy of Sciences
Members of the Order of Merit
Nobel laureates in Physiology or Medicine
Panspermia
People associated with University College London
People educated at Mill Hill School
People educated at Northampton School for Boys
People from Northampton
Phage workers
Polytechnic Institute of New York University faculty
Recipients of the Albert Lasker Award for Basic Medical Research
Recipients of the Copley Medal
Royal Medal winners
Sleep researchers
Admiralty personnel of World War II
Burials at sea
Members of the American Philosophical Society
New York University Tandon School of Engineering alumni | Francis Crick | [
"Physics",
"Biology"
] | 13,401 | [
"Behavior",
"Origin of life",
"Panspermia",
"Materialists",
"Sleep researchers",
"Biological hypotheses",
"Sleep",
"Materialism",
"Matter"
] |
11,469 | https://en.wikipedia.org/wiki/August%20Kekul%C3%A9 | Friedrich August Kekulé, later Friedrich August Kekule von Stradonitz ( , ; 7 September 1829 – 13 July 1896), was a German organic chemist. From the 1850s until his death, Kekulé was one of the most prominent chemists in Europe, especially in the field of theoretical chemistry. He was the principal founder of the theory of chemical structure and in particular the Kekulé structure of benzene.
Name
Kekulé never used his first given name; he was known throughout his life as August Kekulé, though the full name did appear at least once in his published work. After he was ennobled by the Kaiser in 1895, he adopted the name August Kekule von Stradonitz, without the French acute accent over the second "e". The French accent had apparently been added to the name by Kekulé's father during the Napoleonic occupation of Hesse by France, to ensure that French speakers pronounced the third syllable.
Early years
The son of a civil servant, Kekulé was born in Darmstadt, the capital of the Grand Duchy of Hesse. After graduating from secondary school (the Grand Ducal Gymnasium in Darmstadt), in the fall of 1847 he entered the University of Giessen, with the intention of studying architecture. After hearing the lectures of Justus von Liebig in his first semester, he decided to study chemistry. Following four years of study in Giessen and a brief compulsory military service, he took temporary assistantships in Paris (1851–52), in Chur, Switzerland (1852–53), and in London (1853–55), where he was decisively influenced by Alexander Williamson. His Giessen doctoral degree was awarded in the summer of 1852.
Theory of chemical structure
In 1856, Kekulé became Privatdozent at the University of Heidelberg. In 1858, he was hired as full professor at the University of Ghent, then in 1867 he was called to Bonn, where he remained for the rest of his career. Basing his ideas on those of predecessors such as Williamson, Charles Gerhardt, Edward Frankland, William Odling, Auguste Laurent, Charles-Adolphe Wurtz and others, Kekulé was the principal formulator of the theory of chemical structure (1857–58). This theory proceeds from the idea of atomic valence, especially the tetravalence of carbon (which Kekulé announced late in 1857) and the ability of carbon atoms to link to each other (announced in a paper published in May 1858), to the determination of the bonding order of all of the atoms in a molecule. Archibald Scott Couper independently arrived at the idea of self-linking of carbon atoms (his paper appeared in June 1858), and provided the first molecular formulas where lines symbolize bonds connecting the atoms. For organic chemists, the theory of structure provided dramatic new clarity of understanding, and a reliable guide to both analytic and especially synthetic work. As a consequence, the field of organic chemistry developed explosively from this point. Among those who were most active in pursuing early structural investigations were, in addition to Kekulé and Couper, Frankland, Wurtz, Alexander Crum Brown, Emil Erlenmeyer, and Alexander Butlerov.
Kekulé's idea of assigning certain atoms to certain positions within the molecule, and schematically connecting them using what he called their "Verwandtschaftseinheiten" ("affinity units", now called "valences" or "bonds"), was based largely on evidence from chemical reactions, rather than on instrumental methods that could peer directly into the molecule, such as X-ray crystallography. Such physical methods of structural determination had not yet been developed, so chemists of Kekulé's day had to rely almost entirely on so-called "wet" chemistry. Some chemists, notably Hermann Kolbe, heavily criticized the use of structural formulas that were offered, as he thought, without proof. However, most chemists followed Kekulé's lead in pursuing and developing what some have called "classical" structure theory, which was modified after the discovery of electrons (1897) and the development of quantum mechanics (in the 1920s).
The idea that the number of valences of a given element was invariant was a key component of Kekulé's version of structural chemistry. This generalization suffered from many exceptions, and was subsequently replaced by the suggestion that valences were fixed at certain oxidation states. For example, periodic acid according to Kekuléan structure theory could be represented by the chain structure I-O-O-O-O-H. By contrast, the modern structure of (meta) periodic acid has all four oxygen atoms surrounding the iodine in a tetrahedral geometry.
Benzene
Kekulé's most famous work was on the structure of benzene. In 1865 Kekulé published a paper in French (for he was then still in Belgium) suggesting that the structure contained a six-membered ring of carbon atoms with alternating single and double bonds. The following year he published a much longer paper in German on the same subject.
The empirical formula for benzene had been long known, but its highly unsaturated structure was a challenge to determine. Archibald Scott Couper in 1858 and Joseph Loschmidt in 1861 suggested possible structures that contained multiple double bonds or multiple rings, but the study of aromatic compounds was in its earliest years, and too little evidence was then available to help chemists decide on any particular structure.
More evidence was available by 1865, especially regarding the relationships of aromatic isomers. Kekulé argued for his proposed structure by considering the number of isomers observed for derivatives of benzene. For every monoderivative of benzene (C6H5X, where X = Cl, OH, CH3, NH2, etc.) only one isomer was ever found, implying that all six carbons are equivalent, so that substitution on any carbon gives only a single possible product. For diderivatives such as the toluidines, C6H4(NH2)(CH3), three isomers were observed, for which Kekulé proposed structures with the two substituted carbon atoms separated by one, two and three carbon-carbon bonds, later named ortho, meta, and para isomers respectively.
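Kekulé's counting argument can be checked combinatorially. The following is a minimal sketch (in Python; the helper names are hypothetical and not from the article) that enumerates substitution patterns on a six-membered ring and merges patterns related by the ring's rotations and reflections. It assumes identical substituents, which gives the same counts as the historical argument: one monoderivative and three diderivatives (ortho, meta, para).

```python
from itertools import combinations

def dihedral_images(positions, n=6):
    """All images of a set of ring positions under the dihedral group D6."""
    images = set()
    for k in range(n):
        images.add(frozenset((p + k) % n for p in positions))   # rotations
        images.add(frozenset((-p + k) % n for p in positions))  # reflections
    return images

def count_isomers(num_substituents, n=6):
    """Count substitution patterns of the ring up to rotation/reflection."""
    seen, classes = set(), 0
    for combo in combinations(range(n), num_substituents):
        if frozenset(combo) in seen:
            continue
        seen |= dihedral_images(combo, n)
        classes += 1
    return classes

print(count_isomers(1))  # 1 -> a single monoderivative C6H5X
print(count_isomers(2))  # 3 -> ortho, meta, para diderivatives
```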
The counting of possible isomers for diderivatives was, however, criticized by Albert Ladenburg, a former student of Kekulé, who argued that Kekulé's 1865 structure implied two distinct "ortho" structures, depending on whether the substituted carbons are separated by a single or a double bond. Since ortho derivatives of benzene were never actually found in more than one isomeric form, Kekulé modified his proposal in 1872 and suggested that the benzene molecule oscillates between two equivalent structures, in such a way that the single and double bonds continually interchange positions. This implies that all six carbon-carbon bonds are equivalent, as each is single half the time and double half the time. A firmer theoretical basis for a similar idea was later proposed in 1928 by Linus Pauling, who replaced Kekulé's oscillation by the concept of resonance between quantum-mechanical structures.
Kekulé's dream
The new understanding of benzene, and hence of all aromatic compounds, proved to be so important for both pure and applied chemistry after 1865 that in 1890 the German Chemical Society organized an elaborate appreciation in Kekulé's honor, celebrating the twenty-fifth anniversary of his first benzene paper. Here Kekulé spoke of the creation of the theory. He said that he had discovered the ring shape of the benzene molecule after having a reverie or day-dream of a snake seizing its own tail (this is an ancient symbol known as the ouroboros).
Another depiction of benzene had appeared in 1886 in the Berichte der Durstigen Chemischen Gesellschaft (Journal of the Thirsty Chemical Society), a parody of the Berichte der Deutschen Chemischen Gesellschaft, only the parody had six monkeys seizing each other in a circle, rather than a single snake as in Kekulé's anecdote. Some historians have suggested that the parody was a lampoon of the snake anecdote, possibly already well-known through oral transmission even if it had not yet appeared in print. Others have speculated that Kekulé's story in 1890 was a re-parody of the monkey spoof, and was a mere invention rather than a recollection of an event in his life.
Kekulé's 1890 speech, in which these anecdotes appeared, has been translated into English. If one takes the anecdote as reflecting an accurate memory of a real event, circumstances mentioned in the story suggest that it must have happened early in 1862.
He told another autobiographical anecdote in the same 1890 speech, of an earlier vision of dancing atoms and molecules that led to his theory of structure, published in May 1858. This happened, he claimed, while he was riding on the upper deck of a horse-drawn omnibus in London. Once again, if one takes the anecdote as reflecting an accurate memory of a real event, circumstances related in the anecdote suggest that it must have occurred in the late summer of 1855.
Works
Honors
In 1895, Kekulé was ennobled by Kaiser Wilhelm II of Germany, giving him the right to add "von Stradonitz" to his name, referring to a possession of his patrilineal ancestors in Stradonice, Bohemia. His name thus became Friedrich August Kekule von Stradonitz, without the French accent on the last "e" of his name, and this is the form of the name that some libraries use. This title was inherited by his son, genealogist Stephan Kekule von Stradonitz. Of the first five Nobel Prizes in Chemistry, Kekulé's former students won three: van 't Hoff in 1901, Fischer in 1902 and Baeyer in 1905.
A larger-than-life monument of Kekulé, unveiled in 1903, is situated in front of the former Chemical Institute (completed 1868) at the University of Bonn. His statue is often humorously decorated by students, e.g. for Valentine's Day or Halloween.
See also
Non-Kekulé molecule
Skeletal formula
Kekulé Program
Auguste Laurent
References
Further reading
Benfey, O. Theodor. "August Kekule and the Birth of the Structural Theory of Organic Chemistry in 1858." Journal of Chemical Education. Volume 35, No. 1, January 1958. p. 21–23. – Includes an English translation of Kekule's 1890 speech in which he spoke about his development of structure theory and benzene theory.
Rocke, A. J., Image and Reality: Kekule, Kopp, and the Scientific Imagination (University of Chicago Press, 2010).
External links
Kekulés Traum (Kekulé's dream, in German)
Kekulé: A Scientist and a Dreamer
1829 births
1896 deaths
19th-century German chemists
Scientists from Darmstadt
People from the Grand Duchy of Hesse
German organic chemists
German untitled nobility
University of Giessen alumni
Academic staff of the University of Bonn
Recipients of the Copley Medal
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Corresponding members of the Saint Petersburg Academy of Sciences
Recipients of the Pour le Mérite (civil class) | August Kekulé | [
"Chemistry"
] | 2,369 | [
"Organic chemists",
"German organic chemists"
] |
11,488 | https://en.wikipedia.org/wiki/Furlong | A furlong is a measure of distance in imperial units and United States customary units equal to one-eighth of a mile, equivalent to any of 660 feet, 220 yards, 40 rods, 10 chains, or approximately 201 metres. It is now mostly confined to use in horse racing, where in many countries it is the standard measurement of race lengths, and agriculture, where it is used to measure rural field lengths and distances.
In the United States, some states use older definitions for surveying purposes, leading to variations in the length of the furlong of two parts per million, or about 0.4 millimetres. This variation is small enough to not have practical consequences in most applications.
Using the international definition of the yard as exactly 0.9144 metres, one furlong is 201.168 metres, and five furlongs are about 1 kilometre (1.00584 km exactly).
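A quick arithmetic check of these conversions (a sketch; the constants are the exact legal definitions of the international yard and foot):

```python
# International yard = 0.9144 m exactly; international foot = 0.3048 m exactly.
YARD_M = 0.9144
furlong_m = 220 * YARD_M        # a furlong is 220 yards
print(furlong_m)                # ≈ 201.168 m (exact up to float rounding)
print(660 * 0.3048)             # ≈ 201.168 m again, via 660 feet
print(5 * furlong_m / 1000)     # ≈ 1.00584 km, i.e. five furlongs ≈ 1 km
```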
History
The name furlong derives from the Old English words (furrow) and (long). Dating back at least to early Anglo-Saxon times, it originally referred to the length of the furrow in one acre of a ploughed open field (a medieval communal field which was divided into strips). The furlong (meaning furrow length) was the distance a team of oxen could plough without resting. This was standardised to be exactly 40 rods or 10 chains. The system of long furrows arose because turning a team of oxen pulling a heavy plough was difficult. This offset the drainage advantages of short furrows and meant furrows were made as long as possible. An acre is an area that is one furlong long and one chain (66 feet or 22 yards) wide. For this reason, the furlong was once also called an acre's length, though in modern usage an area of one acre can be of any shape. The term furlong, or shot, was also used to describe a grouping of adjacent strips within an open field.
Among the early Anglo-Saxons, the rod was the fundamental unit of land measurement. A furlong was 40 rods; an acre 4 by 40 rods, or 4 rods by 1 furlong, and thus 160 square rods; there are 10 acres in a square furlong. At the time, the Saxons used the North German foot, which was about 10 percent longer than the foot of the international 1959 agreement. When England changed to a shorter foot in the late 13th century, rods and furlongs remained unchanged, since property boundaries were already defined in rods and furlongs. The only thing that changed was the number of feet and yards in a rod or a furlong, and the number of square feet and square yards in an acre. The definition of the rod went from 15 old feet to 16½ new feet, or from 5 old yards to 5½ new yards. The furlong went from 600 old feet to 660 new feet, or from 200 old yards to 220 new yards. The acre went from 36,000 old square feet to 43,560 new square feet, or from 4,000 old square yards to 4,840 new square yards.
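The redefinition arithmetic is easy to verify, since the rod and furlong stayed fixed while the foot shrank, so unit counts simply scale by the ratio of the two feet. A short sketch, assuming only the 600:660 ratio stated above:

```python
ratio = 660 / 600                # new feet per old foot = 1.1
print(15 * ratio)                # rod: 15 old feet  -> 16.5 new feet
print(5 * ratio)                 # rod: 5 old yards  -> 5.5 new yards
print(36_000 * ratio**2)         # acre: 36,000 old sq ft -> 43,560 new sq ft
print(4_000 * ratio**2)          # acre: 4,000 old sq yd  -> 4,840 new sq yd
```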
The furlong was historically viewed as being equivalent to the Roman stade (stadium), which in turn derived from the Greek system. For example, the King James Bible uses the term "furlong" in place of the Greek stadion, although more recent translations often use miles or kilometres in the main text and give the original numbers in footnotes.
In the Roman system, there were 625 feet to the stadium, eight stadia to the mile, and 1½ miles to the league. A league was considered to be the distance a man could walk in one hour, and the mile (from mille, meaning "thousand") consisted of 1,000 passus (paces, or double steps, of five feet each).
After the fall of the Western Roman Empire, medieval Europe continued with the Roman system, which the people proceeded to diversify, leading to serious complications in trade, taxation, etc. Around the year 1300, by royal decree England standardized a long list of measures. Among the important units of distance and length at the time were the foot, yard, rod (or pole), furlong, and the mile. The rod was defined as 5½ yards or 16½ feet, and the mile was eight furlongs, so the definition of the furlong became 40 rods and that of the mile became 5,280 feet (eight furlongs/mile times 40 rods/furlong times 16½ feet/rod). The invention of the measuring chain in the 1620s led to the introduction of an intermediate unit of length, the chain of 22 yards, being equal to four rods, and to one-tenth of a furlong.
A description from 1675 states, "Dimensurator or Measuring Instrument whereof the most usual has been the Chain, and the common length for English Measures four Poles, as answering indifferently to the Englishs Mile and Acre, 10 such Chains in length making a Furlong, and 10 single square Chains an Acre, so that a square Mile contains 640 square Acres." —John Ogilby, Britannia, 1675
The official use of the furlong was abolished in the United Kingdom under the Weights and Measures Act 1985, an act that also abolished the official use of many other traditional units of measurement.
Use
In Myanmar furlongs are currently used in conjunction with miles to indicate distances on highway signs. Mileposts on the Yangon–Mandalay Expressway use miles and furlongs.
In the rest of the world the furlong has very limited use, with the notable exception of horse racing in most English-speaking countries, including Canada and the United States. The distances for horse racing in Australia were converted to metric in 1972 and the term survives only in slang. In the United Kingdom, Ireland, Canada, and the United States, races are still given in miles and furlongs. Also distances along English canals navigated by narrowboats are commonly expressed in miles and furlongs.
The city of Chicago's street numbering system allots a measure of 800 address units to each mile, in keeping with the city's system of eight blocks per mile. This means that every block in a typical Chicago neighborhood (in either north–south or east–west direction but rarely both) is approximately one furlong in length. City blocks in the Hoddle Grid of Melbourne are also one furlong in length. Salt Lake City's blocks are each a square furlong in the downtown area. The blocks become less regular in shape farther from the center, but the numbering system (800 units to each mile) remains the same everywhere in Salt Lake County. Blocks in central Logan, Utah, and in large sections of Phoenix, Arizona, are similarly a square furlong in extent (eight to a mile, which explains the series of freeway exits: 19th Ave, 27th, 35th, 43rd, 51st, 59th ...).
Much of Ontario, Canada, was originally surveyed on a ten-furlong grid, with major roads being laid out along the grid lines. Now that distances are shown on road signs in kilometres, these major roads are almost exactly two kilometres apart. The exits on highways running through Toronto, for example, are generally at intervals of two kilometres.
The Bangor City Forest in Bangor, Maine has its trail system marked in miles and furlongs.
The furlong is also a base unit of the humorous FFF system of units.
Definition of length
The exact length of the furlong varies slightly among English-speaking countries. In Canada and the United Kingdom, which define the furlong in terms of the international yard of exactly 0.9144 metres, a furlong is 201.168 m. Australia does not formally define the furlong, but it does define the chain and link in terms of the international yard.
The United States previously defined the furlong, chain, rod, and link in terms of the U.S. survey foot of exactly 1200⁄3937 metre, resulting in a furlong approximately 201.1684 m long. The difference of approximately two parts per million between the old U.S. value and the "international" value was insignificant for most practical measurements.
In October 2019, the U.S. National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to retire the U.S. survey foot, with effect from the end of 2022. The furlong in U.S. customary units is thereafter defined based on the international 1959 foot, giving the length of the furlong as exactly 201.168 metres in the United States as well.
References
Customary units of measurement in the United States
Horse racing terminology
Imperial units
Road transport in Myanmar
Surveying
Units of length | Furlong | [
"Mathematics",
"Engineering"
] | 1,738 | [
"Units of length",
"Quantity",
"Surveying",
"Civil engineering",
"Units of measurement"
] |
11,490 | https://en.wikipedia.org/wiki/Fundamental%20frequency | The fundamental frequency, often referred to simply as the fundamental (abbreviated as 0 or 1 ), is defined as the lowest frequency of a periodic waveform. In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. In terms of a superposition of sinusoids, the fundamental frequency is the lowest frequency sinusoidal in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as 0, indicating the lowest frequency counting from zero. In other contexts, it is more common to abbreviate it as 1, the first harmonic. (The second harmonic is then 2 = 2⋅1, etc. In this context, the zeroth harmonic would be 0 Hz.)
According to Benward and Saker's Music: In Theory and Practice:
Explanation
All sinusoidal and many non-sinusoidal waveforms repeat exactly over time – they are periodic. The period T of a waveform is the smallest positive value for which the following is true:
x(t + T) = x(t) for all t
where x(t) is the value of the waveform at time t. This means that the waveform's values over any interval of length T are all that is required to describe the waveform completely (for example, by the associated Fourier series). Since any multiple of the period T also satisfies this definition, the fundamental period is defined as the smallest period over which the function may be described completely. The fundamental frequency f₀ is defined as its reciprocal:
f₀ = 1/T
When the units of time are seconds, the frequency is in s⁻¹, also known as Hertz.
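For sampled data, the fundamental period can be estimated directly from this definition. Below is a minimal sketch (in Python with NumPy; the test signal, sample rate, and search bound are illustrative assumptions, and real pitch detectors are considerably more robust):

```python
# Estimate f0 of a sampled periodic waveform via autocorrelation:
# the autocorrelation peaks at lags that are multiples of the period T.
import numpy as np

fs = 8000                                 # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
# Test signal: 200 Hz fundamental plus its second harmonic at 400 Hz.
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0, 1, 2, ...
min_lag = fs // 1000                      # skip the zero-lag peak (< 1 kHz)
period_samples = min_lag + np.argmax(ac[min_lag:])
print(fs / period_samples)                # 200.0 Hz, since T = 40 samples
```

The answer is exact here only because the period (40 samples) is a whole number of samples; in general, integer-lag resolution limits the accuracy of this method.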
Fundamental frequency of a pipe
For a pipe of length L with one end closed and the other end open, the wavelength of the fundamental harmonic is 4L. Hence,
λ = 4L
Therefore, using the relation
λ = v/f
where v is the speed of the wave, the fundamental frequency can be found in terms of the speed of the wave and the length of the pipe:
f₀ = v/(4L)
If the ends of the same pipe are now both closed or both opened, the wavelength of the fundamental harmonic becomes 2L. By the same method as above, the fundamental frequency is found to be
f₀ = v/(2L)
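A numerical illustration of the two pipe formulas (a sketch; the speed of sound, roughly 343 m/s in air at about 20 °C, and the pipe length are assumed values):

```python
v = 343.0                     # speed of sound in air, m/s (assumed)
L = 0.5                       # pipe length, m (assumed)

f_closed_open = v / (4 * L)   # one end closed, one open: lambda = 4L
f_open_open = v / (2 * L)     # both ends open (or both closed): lambda = 2L
print(f_closed_open)          # 171.5 Hz
print(f_open_open)            # 343.0 Hz
```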
In music
In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. The fundamental may be created by vibration over the full length of a string or air column, or a higher harmonic chosen by the player. The fundamental is one of the harmonics. A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The reason a fundamental is also considered a harmonic is because it is 1 times itself.
The fundamental is the frequency at which the entire wave vibrates. Overtones are other sinusoidal components present at frequencies above the fundamental. All of the frequency components that make up the total waveform, including the fundamental and the overtones, are called partials. Together they form the harmonic series. Overtones which are perfect integer multiples of the fundamental are called harmonics. When an overtone is near to being harmonic, but not exact, it is sometimes called a harmonic partial, although they are often referred to simply as harmonics. Sometimes overtones are created that are not anywhere near a harmonic, and are just called partials or inharmonic overtones.
The fundamental frequency is considered the first harmonic and the first partial. The numbering of the partials and harmonics is then usually the same; the second partial is the second harmonic, etc. But if there are inharmonic partials, the numbering no longer coincides. Overtones are numbered as they appear above the fundamental. So strictly speaking, the first overtone is the second partial (and usually the second harmonic). As this can result in confusion, only harmonics are usually referred to by their numbers, and overtones and partials are described by their relationships to those harmonics.
Mechanical systems
Consider a spring, fixed at one end and having a mass attached to the other; this would be a single degree of freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single degree of freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness (providing the system is undamped). The natural frequency, or fundamental frequency, ω₀, can be found using the following equation:
ω₀ = √(k/m)
where:
k = stiffness of the spring
m = mass
ω₀ = natural frequency in radians per second.
To determine the natural frequency in Hz, the omega value is divided by 2π.
Or:
f₀ = (1/(2π))·√(k/m)
where:
f₀ = natural frequency (SI unit: hertz)
k = stiffness of the spring (SI unit: newtons/metre or N/m)
m = mass (SI unit: kg).
While doing a modal analysis, the frequency of the 1st mode is the fundamental frequency.
This is also expressed as:
f₀ = (1/(2L))·√(T/μ)
where:
f₀ = natural frequency (SI unit: hertz)
L = length of the string (SI unit: metre)
μ = mass per unit length of the string (SI unit: kg/m)
T = tension on the string (SI unit: newton)
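Both formulas are easy to evaluate numerically; a short sketch (the stiffness, mass, string length, linear density, and tension are illustrative assumptions):

```python
import math

# Spring-mass oscillator: omega0 = sqrt(k/m), f0 = omega0 / (2*pi).
k, m = 1000.0, 2.5                   # stiffness (N/m) and mass (kg), assumed
omega0 = math.sqrt(k / m)            # 20.0 rad/s
f0_spring = omega0 / (2 * math.pi)   # ~3.18 Hz
print(omega0, f0_spring)

# Stretched string: f0 = (1 / (2L)) * sqrt(T / mu).
L, mu, T = 0.65, 0.006, 80.0         # length (m), kg/m, tension (N), assumed
f0_string = (1 / (2 * L)) * math.sqrt(T / mu)
print(f0_string)                     # ~88.8 Hz
```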
See also
Greatest common divisor
Hertz
Missing fundamental
Natural frequency
Oscillation
Harmonic series (music)#Terminology
Pitch detection algorithm
Scale of harmonics
References
Musical tuning
Acoustics
Fourier analysis
Spectrum (physical sciences) | Fundamental frequency | [
"Physics"
] | 1,110 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Classical mechanics",
"Acoustics",
"Waves"
] |
11,492 | https://en.wikipedia.org/wiki/Foot | The foot (: feet) is an anatomical structure found in many vertebrates. It is the terminal portion of a limb which bears weight and allows locomotion. In many animals with feet, the foot is a separate organ at the terminal part of the leg made up of one or more segments or bones, generally including claws and/or nails.
Etymology
The word "foot", in the sense of meaning the "terminal part of the leg of a vertebrate animal" comes from Old English fot, from Proto-Germanic *fot (source also of Old Frisian fot, Old Saxon fot, Old Norse fotr, Danish fod, Swedish fot, Dutch voet, Old High German fuoz, German Fuß, Gothic fotus, all meaning "foot"), from PIE root *ped- "foot".
The plural form feet is an instance of i-mutation.
Structure
The human foot is a strong and complex mechanical structure containing 26 bones, 33 joints (20 of which are actively articulated), and more than a hundred muscles, tendons, and ligaments. The joints of the foot are the ankle and subtalar joint and the interphalangeal joints of the foot. An anthropometric study of 1197 North American adult Caucasian males (mean age 35.5 years) found a mean foot length of 26.3 cm with a standard deviation of 1.2 cm.
The foot can be subdivided into the hindfoot, the midfoot, and the forefoot:
The hindfoot is composed of the talus (or ankle bone) and the calcaneus (or heel bone). The two long bones of the lower leg, the tibia and fibula, are connected to the top of the talus to form the ankle. Connected to the talus at the subtalar joint, the calcaneus, the largest bone of the foot, is cushioned underneath by a layer of fat.
The five irregular bones of the midfoot, the cuboid, navicular, and three cuneiform bones, form the arches of the foot which serve as a shock absorber. The midfoot is connected to the hind- and fore-foot by muscles and the plantar fascia.
The forefoot is composed of five toes and the corresponding five proximal long bones forming the metatarsus. Similar to the fingers of the hand, the bones of the toes are called phalanges and the big toe has two phalanges while the other four toes have three phalanges each. The joints between the phalanges are called interphalangeal and those between the metatarsus and phalanges are called metatarsophalangeal (MTP).
Both the midfoot and forefoot constitute the dorsum (the area facing upward while standing) and the planum (the area facing downward while standing).
The instep is the arched part of the top of the foot between the toes and the ankle.
Bones
tibia, fibula
tarsus (7): talus, calcaneus, cuneiformes (3), cuboid, and navicular
metatarsus (5): first, second, third, fourth, and fifth metatarsal bone
phalanges (14)
There can be many sesamoid bones near the metatarsophalangeal joints, although they are only regularly present in the distal portion of the first metatarsal bone.
Arches
The human foot has two longitudinal arches and a transverse arch maintained by the interlocking shapes of the foot bones, strong ligaments, and pulling muscles during activity. The slight mobility of these arches when weight is applied to and removed from the foot makes walking and running more economical in terms of energy. As can be examined in a footprint, the medial longitudinal arch curves above the ground. This arch stretches from the heel bone over the "keystone" ankle bone to the three medial metatarsals. In contrast, the lateral longitudinal arch is very low. With the cuboid serving as its keystone, it redistributes part of the weight to the calcaneus and the distal end of the fifth metatarsal. The two longitudinal arches serve as pillars for the transverse arch which run obliquely across the tarsometatarsal joints. Excessive strain on the tendons and ligaments of the feet can result in fallen arches or flat feet.
Muscles
The muscles acting on the foot can be classified into extrinsic muscles, those originating on the anterior or posterior aspect of the lower leg, and intrinsic muscles, originating on the dorsal (top) or plantar (base) aspects of the foot.
Extrinsic
All muscles originating on the lower leg except the popliteus muscle are attached to the bones of the foot. The tibia and fibula and the interosseous membrane separate these muscles into anterior and posterior groups, in their turn subdivided into subgroups and layers.
Anterior group
Extensor group: the tibialis anterior originates on the proximal half of the tibia and the interosseous membrane and is inserted near the tarsometatarsal joint of the first digit. In the non-weight-bearing leg, the tibialis anterior dorsiflexes the foot and lifts its medial edge (supination). In the weight-bearing leg, it brings the leg toward the back of the foot, like in rapid walking. The extensor digitorum longus arises on the lateral tibial condyle and along the fibula, and is inserted on the second to fifth digits and proximally on the fifth metatarsal. The extensor digitorum longus acts similarly to the tibialis anterior except that it also dorsiflexes the digits. The extensor hallucis longus originates medially on the fibula and is inserted on the first digit. It dorsiflexes the big toe and also acts on the ankle in the unstressed leg. In the weight-bearing leg, it acts similarly to the tibialis anterior.
Peroneal group: the peroneus longus arises on the proximal aspect of the fibula and peroneus brevis below it. Together, their tendons pass behind the lateral malleolus. Distally, the peroneus longus crosses the plantar side of the foot to reach its insertion on the first tarsometatarsal joint, while the peroneus brevis reaches the proximal part of the fifth metatarsal. These two muscles are the strongest pronators and aid in plantar flexion. The peroneus longus also acts like a bowstring that braces the transverse arch of the foot.
Posterior group
The superficial layer of posterior leg muscles is formed by the triceps surae and the plantaris. The triceps surae consists of the soleus and the two heads of the gastrocnemius. The heads of gastrocnemius arise on the femur, proximal to the condyles, and the soleus arises on the proximal dorsal parts of the tibia and fibula. The tendons of these muscles merge to be inserted onto the calcaneus as the Achilles tendon. The plantaris originates on the femur proximal to the lateral head of the gastrocnemius and its long tendon is embedded medially into the Achilles tendon. The triceps surae is the primary plantar flexor. Its strength becomes most obvious during ballet dancing. It is fully activated only with the knee extended, because the gastrocnemius is shortened during flexion of the knee. During walking it not only lifts the heel, but also flexes the knee, assisted by the plantaris.
In the deep layer of posterior muscles, the tibialis posterior arises proximally on the back of the interosseous membrane and adjoining bones, and divides into two parts in the sole of the foot to attach to the tarsus. In the non-weight-bearing leg, it produces plantar flexion and supination, and, in the weight-bearing leg, it draws the heel closer to the calf. The flexor hallucis longus arises on the back of the fibula on the lateral side, and its relatively thick muscle belly extends distally down to the flexor retinaculum where it passes over to the medial side to stretch across the sole to the distal phalanx of the first digit. The popliteus is also part of this group, but, with its oblique course across the back of the knee, does not act on the foot.
Intrinsic
On the top of the foot, the tendons of extensor digitorum brevis and extensor hallucis brevis lie deep in the system of long extrinsic extensor tendons. They both arise on the calcaneus and extend into the dorsal aponeurosis of digits one to four, just beyond the penultimate joints. They act to dorsiflex the digits. Similar to the intrinsic muscles of the hand, there are three groups of muscles in the sole of foot, those of the first and last digits, and a central group:
Muscles of the big toe: the abductor hallucis stretches medially along the border of the sole, from the calcaneus to the first digit. Below its tendon, the tendons of the long flexors pass through the tarsal canal. The abductor hallucis is an abductor and a weak flexor, and also helps maintain the arch of the foot. The flexor hallucis brevis arises on the medial cuneiform bone and related ligaments and tendons. An important plantar flexor, it is crucial to ballet dancing. Both these muscles are inserted with two heads proximally and distally to the first metatarsophalangeal joint. The adductor hallucis is part of this group, though it originally formed a separate system (see contrahens). It has two heads, the oblique head originating obliquely across the central part of the midfoot, and the transverse head originating near the metatarsophalangeal joints of digits five to three. Both heads are inserted into the lateral sesamoid bone of the first digit. The adductor hallucis acts as a tensor of the plantar arches and also adducts the big toe and might plantar flex the proximal phalanx.
Muscles of the little toe: Stretching laterally from the calcaneus to the proximal phalanx of the fifth digit, the abductor digiti minimi forms the lateral margin of the foot and is the largest of the muscles of the fifth digit. Arising from the base of the fifth metatarsal, the flexor digiti minimi is inserted together with abductor on the first phalanx. Often absent, the opponens digiti minimi originates near the cuboid bone and is inserted on the fifth metatarsal bone. These three muscles act to support the arch of the foot and to plantar flex the fifth digit.
Central muscle group: The four lumbricals arise on the medial side of the tendons of flexor digitorum longus and are inserted on the medial margins of the proximal phalanges. The quadratus plantae originates with two slips from the lateral and medial margins of the calcaneus and inserts into the lateral margin of the flexor digitorum tendon. It is also known as the flexor accessorius. The flexor digitorum brevis arises inferiorly on the calcaneus and its three tendons are inserted into the middle phalanges of digits two to four (sometimes also the fifth digit). These tendons divide before their insertions and the tendons of flexor digitorum longus pass through these divisions. Flexor digitorum brevis flexes the middle phalanges. It is occasionally absent. Between the toes, the dorsal and plantar interossei stretch from the metatarsals to the proximal phalanges of digits two to five. The plantar interossei adduct and the dorsal interossei abduct these digits, and are also plantar flexors at the metatarsophalangeal joints.
Clinical significance
Due to their position and function, feet are exposed to a variety of potential infections and injuries, including athlete's foot, bunions, ingrown toenails, Morton's neuroma, plantar fasciitis, plantar warts, and stress fractures. In addition, there are several genetic disorders that can affect the shape and function of the feet, including clubfoot or flat feet.
The foot's load-bearing role leaves humans vulnerable to medical problems caused by poor leg and foot alignments. Also, the wearing of shoes, sneakers and boots can impede proper alignment and movement within the ankle and foot. For example, high-heeled shoes are known to throw off the natural weight balance (this can also affect the lower back). For the sake of posture, flat soles with no heels are advised.
A doctor who specializes in the treatment of the feet practices podiatry and is called a podiatrist. A pedorthist specializes in the use and modification of footwear to treat problems related to the lower limbs.
Fractures of the foot include:
Lisfranc fracture – in which one or all of the metatarsals are displaced from the tarsus
Jones fracture – a fracture of the fifth metatarsal
March fracture – a fracture of the distal third of one of the metatarsals occurring because of recurrent stress
Calcaneal fracture
Broken toe – a fracture of a phalanx
Cuneiform fracture – Due to the ligamentous support of the midfoot, isolated cuneiform fractures are rare.
Pronation
In anatomy, pronation is a rotational movement of the forearm (at the radioulnar joint) or foot (at the subtalar and talocalcaneonavicular joints). Pronation of the foot refers to how the body distributes weight as it cycles through the gait. During the gait cycle the foot can pronate in many different ways based on rearfoot and forefoot function. Types of pronation include neutral pronation, underpronation (supination), and overpronation.
Neutral pronation
An individual who neutrally pronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will roll in a medial direction, such that the weight is distributed evenly across the metatarsus. In this stage of the gait, the knee will generally, but not always, track directly over the hallux.
This rolling inward motion as the foot progresses from heel to toe is the way that the body naturally absorbs shock. Neutral pronation is the most ideal, efficient type of gait when using a heel strike gait; in a forefoot strike, the body absorbs shock instead via flexion of the foot.
Overpronation
As with a neutral pronator, an individual who overpronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, however, the foot will roll too far in a medial direction, such that the weight is distributed unevenly across the metatarsus, with excessive weight borne on the hallux. In this stage of the gait, the knee will generally, but not always, track inward.
An overpronator does not absorb shock efficiently. Imagine someone jumping onto a diving board, but the board is so flimsy that when it is struck, it bends and allows the person to plunge straight down into the water instead of back into the air. Similarly, an overpronator's arches will collapse, or the ankles will roll inward (or a combination of the two) as they cycle through the gait. An individual whose bone structure involves external rotation at the hip, knee, or ankle will be more likely to overpronate than one whose bone structure has internal rotation or central alignment. An individual who overpronates tends to wear down their running shoes on the medial (inside) side of the shoe toward the toe area.
When choosing a running or walking shoe, a person with overpronation can choose shoes that have good inside support—usually provided by strong material at the inside sole and arch of the shoe, which is usually visible. The inside support area is often marked by firmer, greyish material that supports the weight when a person lands on the outside of the foot and then rolls onto the inside of the foot.
Underpronation (supination)
An individual who underpronates also initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will not roll far enough in a medial direction. The weight is distributed unevenly across the metatarsus, with excessive weight borne on the fifth metatarsal, toward the lateral side of the foot. In this stage of the gait, the knee will generally, but not always, track laterally of the hallux.
Like an overpronator, an underpronator does not absorb shock efficiently, but for the opposite reason. The underpronated foot is like a diving board that, instead of failing to spring someone in the air because it is too flimsy, fails to do so because it is too rigid. There is virtually no give. An underpronator's arches or ankles do not experience much motion as they cycle through the gait. An individual whose bone structure involves internal rotation at the hip, knee, or ankle will be more likely to underpronate than one whose bone structure has external rotation or central alignment. Usually – but not always – those who are bow-legged tend to underpronate. An individual who underpronates tends to wear down their running shoes on the lateral (outside) side of the shoe toward the rear of the shoe in the heel area.
Society and culture
Humans usually wear shoes or similar footwear for protection from hazards when walking outside. There are a number of contexts where it is considered inappropriate to wear shoes. Some people consider it rude to wear shoes into a house, and in multiple cultures sacred places, such as Māori marae, may only be entered with bare feet.
Foot fetishism is the most common sexual fetish.
Other animals
A paw is the soft foot of a mammal, generally a quadruped, that has claws or nails (e.g., a cat or dog's paw). A hard foot is called a hoof. Depending on style of locomotion, animals can be classified as plantigrade (sole walking), digitigrade (toe walking), or unguligrade (nail walking).
The metatarsals are the bones that make up the main part of the foot in humans, and part of the leg in large animals or paw in smaller animals. The number of metatarsals are directly related to the mode of locomotion with many larger animals having their digits reduced to two (elk, cow, sheep) or one (horse). The metatarsal bones of feet and paws are tightly grouped compared to, most notably, the human hand where the thumb metacarpal diverges from the rest of the metacarpus.
Metaphorical and cultural usage
The word "foot" is used to refer to a "...linear measure was in Old English (the exact length has varied over time), this being considered the length of a man's foot; a unit of measure used widely and anciently. In this sense the plural is often foot. The current inch and foot are implied from measurements in 12c."
The word "foot" also has a musical meaning; a "...metrical foot (late Old English, translating Latin pes, Greek pous in the same sense) is commonly taken to represent one rise and one fall of a foot: keeping time according to some, dancing according to others."
The word "foot" was used in Middle English to mean "a person" (c. 1200).
The expression "...to put one's best foot foremost first recorded 1849 (Shakespeare has the better foot before, 1596)". The expression to "...put one's foot in (one's) mouth "say something stupid" was first used in 1942. The expression "put (one's) foot in something" meaning to "make a mess of it" was used in 1823.
The word "footloose" was first used in the 1690s, meaning "free to move the feet, unshackled"; the figurative sense of "free to act as one pleases" was first used in 1873. Like "footloose", "flat-footed" at first had its obvious literal meaning (in 1600, it meant "with flat feet") but by 1912 it meant "unprepared" (U.S. baseball slang).
See also
Ball (foot)
Barefoot
Comparison of orthotics
Flat feet
Foot binding
Foot fetishism
Foot gymnastics
Gait analysis
Pedobarography (foot pressure analysis)
Pes cavus
Runner's toe, repetitive injury seen in runners
Squatting position
References
Bibliography
External links
Human body | Foot | [
"Physics"
] | 4,437 | [
"Human body",
"Physical objects",
"Matter"
] |
11,493 | https://en.wikipedia.org/wiki/Fallout%20shelter | A fallout shelter is an enclosed space specially designated to protect occupants from radioactive debris or fallout resulting from a nuclear explosion. Many such shelters were constructed as civil defense measures during the Cold War.
During a nuclear explosion, matter vaporized in the resulting fireball is exposed to neutrons from the explosion, absorbs them, and becomes radioactive. When this material condenses in the rain, it forms dust and light sandy materials that resemble ground pumice. The fallout emits alpha and beta particles, as well as gamma rays.
Much of this highly radioactive material falls to Earth, subjecting anything within the line of sight to radiation, becoming a significant hazard. A fallout shelter is designed to allow its occupants to minimize exposure to harmful fallout until radioactivity has decayed to a safer level, over a few weeks or months.
Principle
A fallout shelter is designed to protect its occupants from:
the mechanical and thermal effects of a nuclear explosion (or nuclear accident);
radioactive fallout, allowing them to survive for a period of time deemed sufficient to allow them to escape safely.
History
North America
During the Cold War, many countries built fallout shelters for high-ranking government officials and crucial military facilities, such as Project Greek Island and the Cheyenne Mountain nuclear bunker in the United States and Canada's Emergency Government Headquarters. Plans were made, however, to use existing buildings with sturdy below-ground-level basements as makeshift fallout shelters. These buildings were placarded with the orange-yellow and black trefoil sign designed in 1961 by Robert W. Blakeley, director of the administrative logistics support function at the United States Army Corps of Engineers.
The National Emergency Alarm Repeater (NEAR) program was developed in the United States in 1956 during the Cold War to supplement the existing siren warning systems and radio broadcasts in the event of a nuclear attack. The NEAR civilian alarm device was engineered and tested but the program was not viable and was terminated in 1967.
In the U.S. in September 1961, under the direction of Steuart L. Pittman, the federal government started the Community Fallout Shelter Program. A letter from President Kennedy advising the use of fallout shelters appeared in the September 1961 issue of Life magazine. From 1961 to 1963, home fallout shelter sales grew, but eventually there was a public backlash against the fallout shelter as a consumer product.
In November 1961, in Fortune magazine, an article by Gilbert Burck appeared that outlined the plans of Nelson Rockefeller, Edward Teller, Herman Kahn, and Chet Holifield for an enormous network of concrete-lined underground fallout shelters throughout the United States sufficient to shelter millions of people to serve as a refuge in case of nuclear war.
The United States ended federal funding for the shelters in the 1970s. In 2017, New York City began removing the yellow signs since members of the public are unlikely to find edible food and usable medicine inside those rooms.
Atomitat
The Atomitat was an underground house in Plainview, Texas: it was designed by Jay Swayze and completed in 1962. The house was designed in response to the fear of nuclear war during the Cold War. The house was designed to be an "atomic-habitat" which met the United States Civil Defense specifications. It was the first bunker-house to meet their specifications as a nuclear shelter. Swayze also built an underground house for the 1964 New York World's Fair: it was called the Underground World Home.
Europe
Similar projects have been undertaken in Finland, which requires all buildings with area over 600 m² to have an NBC (nuclear-biological-chemical) shelter, and Norway, which requires all buildings with an area over 1,000 m² to have a shelter.
The former Soviet Union and other Eastern Bloc countries often designed their underground mass-transit and subway tunnels to serve as bomb and fallout shelters in the event of an attack. Currently, the deepest subway line in the world is situated in St Petersburg in Russia, with an average depth of 60 meters, while the deepest subway station is Arsenalna in Kyiv, at 105.5 meters.
Germany has protected shelters for 3% of its population, Austria for 30%, Finland for 70%, Sweden for 81%, and Switzerland for 114%.
Bosnia
The Armijska Ratna Komanda D-0, also known as the Ark, was a Cold War-era nuclear bunker and military command centre located near the town of Konjic in Bosnia and Herzegovina. Built to protect Yugoslav President Josip Broz Tito and up to 350 members of his inner circle in the event of an atomic exchange, the structure is made up of residential areas, conference rooms, offices, strategic planning rooms, and other areas. The bunker remained a state secret until after the breakup of Yugoslavia in the 1990s.
The facility is now under the authority of the Bosnian Ministry of Defense and is managed by the country's military, guarded by a five-soldier detachment. It is designated by KONS as a National Monument of Bosnia and Herzegovina and is used as an exhibition space for projects such as the Cultural Event of Europe, with strong UNESCO support, as well as a tourist attraction.
Another underground facility is Željava Air Base, situated on the border between Bosnia and Herzegovina and Croatia under the Plješevica mountain, near the city of Bihać. It was the largest underground airport and military air base in the Socialist Federal Republic of Yugoslavia (SFRY), and one of the largest in Europe. The role of the facility was to establish, integrate and coordinate a nationwide early-warning radar network in the SFRY, akin to NORAD in the US. The complex contained tunnels with a total length of 3.5 km (2.2 mi), and the bunker had four entrances protected by 100-ton pressurized doors, three of which were customized for use by fixed-wing aircraft. It was capable of housing two full fighter squadrons, one reconnaissance squadron, and associated maintenance facilities. It was designed and built to sustain a direct hit from a 20-kiloton nuclear bomb, equivalent to the one dropped on Nagasaki. The underground facility was lined with semicircular concrete shields, arranged every 10 m (33 ft), to cushion the impact of an incoming strike. The complex included an underground water source, power generators, crew quarters, and other strategic military facilities. It also housed a mess hall that could feed 1,000 people simultaneously, along with stores of food, fuel and arms sufficient to last 30 days. Fuel was supplied by a 20 km (12 mi) underground pipe network connected to a military warehouse on Pokoj Hill near Bihać. Nowadays, the tunnels are popular for urban exploration.
Switzerland
Switzerland built an extensive network of fallout shelters, not only through extra hardening of government buildings such as schools, but also through a building regulation requiring nuclear shelters in residential buildings since the 1960s (the first legal basis in this sense dates from 4 October 1963). Later, the law ensured that all residential buildings built after 1978 contained a nuclear shelter able to withstand a blast from a 12-megaton explosion at a distance of 700 metres. The Federal Law on the Protection of the Population and Civil Protection still requires that every inhabitant should have a place in a shelter close to where they live.
The Swiss authorities maintained large communal shelters (such as the Sonnenberg Tunnel until 2006) stocked with over four months of food and fuel. The reference Nuclear War Survival Skills declared that, as of 1986, "Switzerland has the best civil defense system, one that already includes blast shelters for over 85% of all its citizens." As of 2006, there were about 300,000 shelters built in private residences, institutions and hospitals, as well as 5,100 public shelters for a total of 8.6 million places, a level of coverage equal to 114% of the population.
In Switzerland, most residential shelters are no longer stocked with the food and water required for prolonged habitation and a large number have been converted by the owners to other uses (e.g., wine cellars, ski rooms, gyms), but a legal obligation to ensure that the shelters are properly maintained remains in effect.
United Kingdom
In the United Kingdom, a network of fallout shelters was built across the country for use by the Royal Observer Corps in its nuclear reporting role. Other shelters were built for the purposes of the ROTOR radar system and the regional seat of government scheme. The Pindar complex in London is intended to provide its inhabitants with fallout protection in the event of nuclear attack, as was the earlier Central Government War Headquarters in Corsham.
Details of shelter construction
Shielding
A basic fallout shelter consists of shields that reduce gamma ray exposure by a factor of roughly 1,000. The required shielding can be accomplished with 10 times the thickness of any quantity of material capable of cutting gamma ray exposure in half. Shields that reduce gamma ray intensity by 50% (1/2) include about 1 cm (0.4 in) of lead, 6 cm (2.4 in) of concrete, 9 cm (3.5 in) of packed earth or 150 m (500 ft) of air. When multiple thicknesses are built, the shielding multiplies. Thus, a practical fallout shield is ten halving-thicknesses of packed earth, reducing gamma rays by approximately 1024 times (2¹⁰).
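As a rough illustration of the halving-thickness arithmetic described above, the following Python sketch computes the attenuation factor for a given thickness of shielding. The halving-thickness values are the approximate figures quoted in this section and are illustrative, not engineering data.

```python
# Rough illustration of halving-thickness shielding arithmetic.
# Approximate halving thicknesses in centimetres, as quoted above.
HALVING_THICKNESS_CM = {
    "lead": 1.0,
    "concrete": 6.0,
    "packed earth": 9.0,
}

def attenuation_factor(material: str, thickness_cm: float) -> float:
    """Return the factor by which gamma exposure is reduced."""
    halvings = thickness_cm / HALVING_THICKNESS_CM[material]
    return 2.0 ** halvings  # each halving thickness doubles the attenuation

# Ten halving thicknesses of packed earth (90 cm) give a factor of
# 2**10 = 1024, matching the "approximately 1,000" figure in the text.
print(attenuation_factor("packed earth", 90.0))  # 1024.0
```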
Usually, an expedient purpose-built fallout shelter is a trench with a strong roof buried under 1 m (3 ft) of earth. The two ends of the trench have ramps or entrances at right angles to the trench so that gamma rays cannot enter (they can travel only in straight lines). To keep the overburden waterproof in case of rain, a plastic sheet may be buried a few inches below the surface and held down with rocks or bricks.
Blast doors are designed to absorb the shock wave of a nuclear blast, bending and then returning to their original shape.
Climate control
Dry earth is a reasonably good thermal insulator, but over several weeks of habitation, a shelter will become dangerously hot. The simplest form of effective fan to cool a shelter is a wide, heavy frame with flaps that swing in the shelter's doorway and can be swung from hinges on the ceiling. The flaps open in one direction and close in the other, pumping air. (This is a Kearny air pump, or KAP, named after the inventor, Cresson Kearny.)
Unfiltered air is safe, since the most dangerous fallout has the consistency of sand or finely ground pumice. Such large particles are not easily ingested into the soft tissues of the body, so extensive filters are not required. Any exposure to fine dust is far less hazardous than exposure to the fallout outside the shelter. Dust fine enough to pass the entrance will probably pass through the shelter. Some shelters, however, incorporate NBC-filters for additional protection.
Locations
Effective public shelters can be the middle floors of some tall buildings or parking structures, or below ground level in most buildings with more than 10 floors. The thickness of the upper floors must form an effective shield, and the windows of the sheltered area must not view fallout-covered ground that is closer than 1.5 km (1 mi). One of Switzerland's solutions is to use road tunnels passing through the mountains, with some of these shelters being able to protect tens of thousands.
Fallout shelters are not always underground. Above ground buildings with walls and roofs dense enough to afford a meaningful protection factor can be used as a fallout shelter.
Contents
A battery-powered radio may be helpful to get reports of fallout patterns and clearance. However, radio and other electronic equipment may be disabled by electromagnetic pulse. For example, even at the height of the Cold War, EMP protection had been completed for only 125 of the approximately 2,771 radio stations in the United States Emergency Broadcast System. Also, only 110 of 3,000 existing Emergency Operating Centers had been protected against EMP effects. The Emergency Broadcast System has since been supplanted in the United States by the Emergency Alert System.
The reference Nuclear War Survival Skills includes the following supplies in a list of "Minimum Pre-Crisis Preparations": one or more shovels, a pick, a bow-saw with an extra blade, a hammer, and polyethylene film (also any necessary nails, wire, etc.); a homemade shelter-ventilating pump (a KAP); large containers for water; a plastic bottle of sodium hypochlorite bleach; one or two KFMs (Kearny fallout meters) and the knowledge to operate them; at least a 2-week supply of compact, nonperishable food; an efficient portable stove; wooden matches in a waterproof container; essential containers and utensils for storing, transporting, and cooking food; a hose-vented can, with heavy plastic bags for liners, for use as a toilet; tampons; insect screen and fly bait; any special medications needed by family members; pure potassium iodide, a bottle, and a medicine dropper; a first-aid kit and a tube of antibiotic ointment; long-burning candles (with small wicks) sufficient for at least 14 nights; an oil lamp; a flashlight and extra batteries; and a transistor radio with extra batteries and a metal box to protect it from electromagnetic pulse.
Inhabitants should have several litres of water on hand per person per day. Water stored in bulk containers requires less space than water stored in smaller bottles.
Kearny fallout meter
Commercially made Geiger counters are expensive and require frequent calibration. It is possible to construct an electrometer-type radiation meter called the Kearny fallout meter, which does not require batteries or professional calibration, from properly-scaled plans with just a coffee can or pail, gypsum board, monofilament fishing line, and aluminum foil. Plans are freely available in the public domain in the reference Nuclear War Survival Skills by Cresson Kearny.
Use
Inhabitants should plan to remain sheltered for at least two weeks (with an hour out at the end of the first week – see Swiss Civil Defense guidelines), then work outside for gradually increasing amounts of time, to four hours a day at three weeks. The normal work is to sweep or wash fallout into shallow trenches to decontaminate the area. They should sleep in a shelter for several months. Evacuation at three weeks is recommended by official authorities.
If available, inhabitants may take potassium iodide at the rate of 130 mg/day per adult (65 mg/day per child) as an additional measure to protect the thyroid gland from the uptake of dangerous radioactive iodine, a component of most fallout and reactor waste.
Different types of radiation emitted by fallout
Alpha (α)
In the vast majority of accidents, and in all atomic bomb blasts, the threat from beta and gamma emitters is greater than that posed by the alpha emitters in the fallout. Alpha particles are identical to a helium-4 nucleus (two protons and two neutrons) and travel at speeds in excess of 5% of the speed of light. Alpha particles have little penetrating power; most cannot penetrate human skin. Avoiding direct contact with fallout particles will prevent injury from alpha radiation.
Beta (β)
Beta radiation consists of particles (high-speed electrons) given off by some fallout. Most beta particles cannot penetrate more than a few metres of air, more than a few millimetres of water, wood, or human body tissue, or a sheet of aluminum foil. Avoiding direct contact with fallout particles will prevent most injuries from beta radiation.
The primary dangers associated with beta radiation are internal exposure from ingested fallout particles and beta burns from fallout particles no more than a few days old. Beta burns can result from contact with highly radioactive particles on bare skin; ordinary clothing separating fresh fallout particles from the skin can provide significant shielding.
Gamma (γ)
Gamma radiation penetrates further through matter than alpha or beta radiation. Most of the design of a typical fallout shelter is intended to protect against gamma rays. Gamma rays are better absorbed by materials with high atomic numbers and high density, although neither effect is important compared to the total mass per area in the path of the gamma ray. Thus, lead is only modestly better as a gamma shield than an equal mass of another shielding material such as aluminum, concrete, water or soil.
Some gamma radiation from fallout will penetrate even the best shelters. However, the radiation dose received while inside a shelter can be significantly reduced with proper shielding. Ten halving-thicknesses of a given material can reduce gamma exposure to less than 1/1000 (more precisely, 1/1024) of the unshielded exposure.
Weapons versus nuclear accident fallout
The bulk of the radioactivity in nuclear accident fallout is longer-lived than that in weapons fallout. A good table of the nuclides, such as that provided by the Korean Atomic Energy Research Institute, includes the fission yields of the different nuclides. From these yields it is possible to calculate the isotopic mixture of the fission products in bomb fallout.
Other matters and simple improvements
While a person's home may not be a purpose-made shelter, it could be thought of as one if measures are taken to improve the degree of fallout protection.
Measures to lower the beta dose
The main threat of beta radiation exposure comes from hot particles in contact with or close to the skin of a person. Also, swallowed or inhaled hot particles could cause beta burns. As it is important to avoid bringing hot particles into the shelter, one option is to remove one's outer clothing, or follow other decontamination procedures, on entry. Fallout particles will cease to be radioactive enough to cause beta burns within a few days following a nuclear explosion. The danger of gamma radiation will persist for far longer than the threat of beta burns in areas with heavy fallout exposure.
Measures to lower the gamma dose rate
The gamma dose rate due to the contamination brought into the shelter on the clothing of a person is likely to be small (by wartime standards) compared to gamma radiation that penetrates through the walls of the shelter. The following measures can be taken to reduce the amount of gamma radiation entering the shelter:
Roofs and gutters can be cleaned to lower the dose rate in the house.
The top inch of soil in the area near the house can be either removed or dug up and mixed with the subsoil. This reduces the dose rate as the gamma rays have to pass through the topsoil before they can irradiate anything above.
Nearby roads can be rinsed and washed down to remove dust and debris; the fallout would collect in the sewers and gutters for easier disposal. In Kyiv after the Chernobyl accident a program of road washing was used to control the spread of radioactivity.
Windows can be bricked up, or the sill raised to reduce the hole in the shielding formed by the wall.
Gaps in the shielding can be blocked using containers of water. While water has a much lower density than that of lead, it is still able to shield some gamma rays.
Earth (or other dense material) can be heaped up against the exposed walls of the building; this forces the gamma rays to pass through a thicker layer of shielding before entering the house.
Nearby trees can be removed to reduce the dose due to fallout which is on the branches and leaves. It has been suggested by the US government that a fallout shelter should not be dug close to trees for this reason.
Fallout shelters in popular culture
Fallout shelters feature prominently in the Robert A. Heinlein novel Farnham's Freehold (Heinlein built a fairly extensive shelter near his home in Colorado Springs in 1963), Pulling Through by Dean Ing, A Canticle for Leibowitz by Walter M. Miller and Earth by David Brin.
The 1961 Twilight Zone episode "The Shelter", from a Rod Serling script, deals with the consequences of actually using a shelter. Another episode of the series, "One More Pallbearer", featured a fallout shelter owned by a millionaire. The 1985 adaptation of the series included the episode "Shelter Skelter", which also featured a fallout shelter.
In the Only Fools and Horses episode "The Russians are Coming", aired in 1981, Derek Trotter buys a lead fallout shelter, then decides to construct it in fear of an impending nuclear war caused by the Soviet Union.
In 1999, the film Blast from the Past was released. It is a romantic comedy about a nuclear physicist, his wife, and their son, who enter a well-equipped, spacious fallout shelter during the 1962 Cuban Missile Crisis. They do not emerge until 35 years later, in 1997; the film shows their reaction to contemporary society.
The Fallout series of computer games depicts the remains of human civilization after an immensely destructive global nuclear war; the United States of America had built underground fallout shelters known as vaults, that were advertised to protect the population against a nuclear attack, but almost all of them were in fact meant to lure subjects for long-term human experimentation.
Paranoia, a role-playing game, takes place in a city-sized fallout shelter, which has become ruled by an insane computer.
An episode of the sitcom Malcolm in the Middle features a subplot revolving around Reese and Dewey discovering a previously unknown fallout shelter in their backyard and trapping their father Hal inside; he soon becomes smitten with the shelter's 1960s decor.
The Metro 2033 book series by Russian author Dmitry Glukhovsky depicts survivors' life in the subway systems below Moscow and Saint Petersburg after a nuclear exchange between the Russian Federation and the United States of America.
Fallout shelters are often featured on the reality television show Doomsday Preppers.
The Silo series of novellas by Hugh Howey feature extensive fallout-style shelters that protect the inhabitants from an initially unknown disaster.
The 2019 US film The Tomorrow Man centers on a reclusive man whose main preoccupation is tending to his in-home fallout shelter and the conspiracy theories that could put it to use.
See also
Abo Elementary School
Ark Two Shelter
Blast shelter
Bomb shelter
Bunker
Bruce D. Clayton, author of Fallout Survival and Life After Doomsday
Collective protection
Command center
CONELRAD
Continuity of government
Project Greek Island
Vivos (underground shelter)
Nation specific:
Central Government War Headquarters, the UK government's war headquarters at Corsham, Wiltshire.
Diefenbunker
HANDEL, UK's former national attack warning system
General:
Fission product
Retreat (survivalism)
Sonnenberg Tunnel
Survivalism
Publications:
Fallout Protection
Survival Under Atomic Attack
Nuclear War Survival Skills
Notes and references
Further reading
Rose, Kenneth D., One Nation Underground: The Fallout Shelter in American Culture, New York University Press (2004).
Readers Forum: Nuclear Fallout Shelters 50 Years Ago (Greeneville, Tennessee) by Henry Samples (1947-2024), AZER.com, in Azerbaijan International, Vol. 14:3 (Autumn 2006), pp. 12-13.
External links
Nuclear War Survival Skills
Air raid shelters
Cold War sites
Nuclear warfare
Radioactivity
Radiobiology
Subterranea (geography)
Survivalism
Radiation protection
Nuclear fallout | Fallout shelter | [
"Physics",
"Chemistry",
"Technology",
"Biology"
] | 4,633 | [
"Radioactive contamination",
"Radiobiology",
"Nuclear fallout",
"Environmental impact of nuclear power",
"Nuclear warfare",
"Nuclear physics",
"Radioactivity"
] |
11,522 | https://en.wikipedia.org/wiki/Fly-by-wire | Fly-by-wire (FBW) is a system that replaces the conventional manual flight controls of an aircraft with an electronic interface. The movements of flight controls are converted to electronic signals, and flight control computers determine how to move the actuators at each control surface to provide the ordered response. Implementations either use mechanical flight control backup systems or else are fully electronic.
Improved fully fly-by-wire systems interpret the pilot's control inputs as a desired outcome and calculate the control surface positions required to achieve that outcome; this results in various combinations of rudder, elevator, aileron, flaps and engine controls in different situations using a closed feedback loop. The pilot may not be fully aware of all the control outputs acting to affect the outcome, only that the aircraft is reacting as expected. The fly-by-wire computers act to stabilize the aircraft and adjust the flying characteristics without the pilot's involvement, and to prevent the pilot from operating outside of the aircraft's safe performance envelope.
Rationale
Mechanical and hydro-mechanical flight control systems are relatively heavy and require careful routing of flight control cables through the aircraft by systems of pulleys, cranks, tension cables and hydraulic pipes. Both systems often require redundant backup to deal with failures, which increases weight. Both have limited ability to compensate for changing aerodynamic conditions. Dangerous characteristics such as stalling, spinning and pilot-induced oscillation (PIO), which depend mainly on the stability and structure of the aircraft rather than the control system itself, are dependent on the pilot's actions.
The term "fly-by-wire" implies a purely electrically signaled control system. It is used in the general sense of computer-configured controls, where a computer system is interposed between the operator and the final control actuators or surfaces. This modifies the manual inputs of the pilot in accordance with control parameters.
Side-sticks or conventional flight control yokes can be used to fly fly-by-wire aircraft.
Weight saving
A fly-by-wire aircraft can be lighter than a similar design with conventional controls. This is partly due to the lower overall weight of the system components and partly because the natural stability of the aircraft can be relaxed (slightly for a transport aircraft; more for a maneuverable fighter), which means that the stability surfaces that are part of the aircraft structure can therefore be made smaller. These include the vertical and horizontal stabilizers (fin and tailplane) that are (normally) at the rear of the fuselage. If these structures can be reduced in size, airframe weight is reduced. The advantages of fly-by-wire controls were first exploited by the military and then in the commercial airline market. The Airbus series of airliners used full-authority fly-by-wire controls beginning with their A320 series, see A320 flight control (though some limited fly-by-wire functions existed on A310 aircraft). Boeing followed with their 777 and later designs.
Basic operation
Closed-loop feedback control
A pilot commands the flight control computer to make the aircraft perform a certain action, such as pitch the aircraft up, or roll to one side, by moving the control column or sidestick. The flight control computer then calculates what control surface movements will cause the plane to perform that action and issues those commands to the electronic controllers for each surface. The controllers at each surface receive these commands and then move actuators attached to the control surface until it has moved to where the flight control computer commanded it to. The controllers measure the position of the flight control surface with sensors such as LVDTs.
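The loop just described can be caricatured in a few lines of code. The Python sketch below is a hypothetical, much-simplified proportional servo loop; the gain, time step, and interface are invented for illustration and do not correspond to any real flight-control implementation.

```python
# Minimal sketch of a closed-loop position servo for one control surface.
# All numbers (gain, time step) are illustrative, not from a real system.
def servo_step(commanded_deg: float, measured_deg: float,
               gain: float = 5.0, dt: float = 0.01) -> float:
    """Return the surface position after one control cycle.

    The controller drives the actuator at a rate proportional to the
    error between the commanded and measured (e.g., LVDT) positions.
    """
    error = commanded_deg - measured_deg
    return measured_deg + gain * error * dt

position = 0.0                # surface starts at neutral
for _ in range(200):          # iterating the loop converges on the command
    position = servo_step(10.0, position)
print(round(position, 2))     # close to 10.0 degrees
```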
Automatic stability systems
Fly-by-wire control systems allow aircraft computers to perform tasks without pilot input. Automatic stability systems operate in this way. Gyroscopes and sensors such as accelerometers are mounted in an aircraft to sense rotation on the pitch, roll and yaw axes. Any movement (from straight and level flight for example) results in signals to the computer, which can automatically move control actuators to stabilize the aircraft.
Safety and redundancy
While traditional mechanical or hydraulic control systems usually fail gradually, the loss of all flight control computers immediately renders the aircraft uncontrollable. For this reason, most fly-by-wire systems incorporate redundant computers (triplex, quadruplex, etc.), some kind of mechanical or hydraulic backup, or a combination of both. A "mixed" control system with a mechanical backup feeds any rudder or elevator movement directly back to the pilot, and therefore works against a closed-loop (feedback) design.
Aircraft systems may be quadruplexed (four independent channels) to prevent loss of signals in the case of failure of one or even two channels. High-performance aircraft that have fly-by-wire controls (also called CCVs, or control-configured vehicles) may be deliberately designed to have low or even negative stability in some flight regimes; rapid-reacting CCV controls can electronically compensate for the lack of natural stability.
Pre-flight safety checks of a fly-by-wire system are often performed using built-in test equipment (BITE). A number of control movement steps can be automatically performed, reducing workload of the pilot or groundcrew and speeding up flight-checks.
Some aircraft, the Panavia Tornado for example, retain a very basic hydro-mechanical backup system for limited flight control capability on losing electrical power; in the case of the Tornado this allows rudimentary control of the stabilators only for pitch and roll axis movements.
History
Servo-electrically operated control surfaces were first tested in the 1930s on the Soviet Tupolev ANT-20. Long runs of mechanical and hydraulic connections were replaced with wires and electric servos.
In 1934, Karl Otto Altvater filed a patent for an automatic electronic system that flared the aircraft when it was close to the ground.
In 1941, Karl Otto Altvater, who was an engineer at Siemens, developed and tested the first fly-by-wire system for the Heinkel He 111, in which the aircraft was fully controlled by electronic impulses.
The first non-experimental aircraft that was designed and flown (in 1958) with a fly-by-wire flight control system was the Avro Canada CF-105 Arrow, a feat not repeated with a production aircraft (though the Arrow was cancelled with five built) until Concorde in 1969, which became the first fly-by-wire airliner. This system also included solid-state components and system redundancy, was designed to be integrated with a computerised navigation and automatic search and track radar, was flyable from ground control with data uplink and downlink, and provided artificial feel (feedback) to the pilot.
The first electronic fly-by-wire testbed operated by the U.S. Air Force was a Boeing B-47E Stratojet (serial no. 53-2280).
The first pure electronic fly-by-wire aircraft with no mechanical or hydraulic backup was the Apollo Lunar Landing Training Vehicle (LLTV), first flown in 1968. It was preceded in 1964 by the Lunar Landing Research Vehicle (LLRV), which pioneered fly-by-wire flight with no mechanical backup; control was through a digital computer with three analog redundant channels. In the USSR, the Sukhoi T-4 also flew with fly-by-wire controls. At about the same time in the United Kingdom, a trainer variant of the British Hawker Hunter fighter was modified at the British Royal Aircraft Establishment with fly-by-wire flight controls for the right-seat pilot.
In the UK, the two-seater Avro 707C was flown with a Fairey system with mechanical backup in the early to mid-1960s. The program was curtailed when the airframe ran out of flight time.
In 1972, the first digital fly-by-wire fixed-wing aircraft without a mechanical backup to take to the air was an F-8 Crusader, which had been modified electronically by NASA of the United States as a test aircraft; the F-8 used the Apollo guidance, navigation and control hardware.
The Airbus A320 began service in 1988 as the first mass-produced airliner with digital fly-by-wire controls. As of June 2024, over 11,000 A320 family aircraft, variants included, are operational around the world, making it one of the best-selling commercial jets.
Boeing chose fly-by-wire flight controls for the 777 in 1994, departing from traditional cable and pulley systems. In addition to overseeing the aircraft's flight control, the FBW offered "envelope protection", which guaranteed that the system would step in to avoid accidental mishandling, stalls, or excessive structural stress on the aircraft. The 777 used ARINC 629 buses to connect primary flight computers (PFCs) with actuator-control electronics units (ACEs). Every PFC housed three 32-bit microprocessors, including a Motorola 68040, an Intel 80486, and an AMD 29050, all programmed in the Ada programming language.
Analog systems
All fly-by-wire flight control systems eliminate the complexity, fragility and weight of the mechanical circuit of the hydromechanical or electromechanical flight control systems, each being replaced with electronic circuits. The control mechanisms in the cockpit now operate signal transducers, which in turn generate the appropriate commands. These are next processed by an electronic controller, either an analog one or (more commonly today) a digital one. Aircraft and spacecraft autopilots are now part of the electronic controller.
The hydraulic circuits are similar except that mechanical servo valves are replaced with electrically controlled servo valves, operated by the electronic controller. This is the simplest and earliest configuration of an analog fly-by-wire flight control system. In this configuration, the flight control systems must simulate "feel". The electronic controller controls electrical devices that provide the appropriate "feel" forces on the manual controls. This was used in Concorde, the first production fly-by-wire airliner.
Digital systems
A digital fly-by-wire flight control system can be extended from its analog counterpart. Digital signal processing can receive and interpret input from multiple sensors simultaneously (such as the altimeters and the pitot tubes) and adjust the controls in real time. The computers sense position and force inputs from pilot controls and aircraft sensors. They then solve differential equations related to the aircraft's equations of motion to determine the appropriate command signals for the flight controls to execute the intentions of the pilot.
The programming of the digital computers enables flight envelope protection. These protections are tailored to an aircraft's handling characteristics to stay within aerodynamic and structural limitations of the aircraft. For example, the computer in flight envelope protection mode can try to prevent the aircraft from being handled dangerously by preventing pilots from exceeding preset limits on the aircraft's flight-control envelope, such as those that prevent stalls and spins, and which limit airspeeds and g forces on the airplane. Software can also be included that stabilizes the flight-control inputs to avoid pilot-induced oscillations.
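A minimal sketch of the envelope-protection idea follows, assuming invented limits and a deliberately simplified interface; no real aircraft uses logic this crude.

```python
# Hypothetical envelope-protection clamp; the limits are invented
# for illustration and are not taken from any certified system.
MAX_ANGLE_OF_ATTACK_DEG = 15.0   # stall protection
MAX_LOAD_FACTOR_G = 2.5          # structural protection

def protected_pitch_command(requested_aoa_deg: float,
                            predicted_load_g: float) -> float:
    """Clamp the pilot's pitch request to stay within the envelope."""
    aoa = min(requested_aoa_deg, MAX_ANGLE_OF_ATTACK_DEG)
    if predicted_load_g > MAX_LOAD_FACTOR_G:
        # scale the request back in proportion to the g exceedance
        aoa *= MAX_LOAD_FACTOR_G / predicted_load_g
    return max(aoa, 0.0)

print(protected_pitch_command(25.0, 2.0))  # clamped to 15.0 by the AoA limit
print(protected_pitch_command(10.0, 3.0))  # scaled down by the g limit
```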
Because the flight-control computers continuously provide feedback on the environment, pilots' workloads can be reduced. Fly-by-wire also enables military aircraft to be designed with relaxed stability. The primary benefit for such aircraft is greater maneuverability during combat and training flights, and so-called "carefree handling", because stalling, spinning and other undesirable behavior are prevented automatically by the computers. Digital flight control systems (DFCS) enable inherently unstable combat aircraft, such as the Lockheed F-117 Nighthawk and the Northrop Grumman B-2 Spirit flying wing, to be flown in usable and safe manners.
Legislation
The United States Federal Aviation Administration (FAA) has adopted RTCA/DO-178C, titled "Software Considerations in Airborne Systems and Equipment Certification", as the certification standard for aviation software. Any safety-critical component in a digital fly-by-wire system, including implementations of the flight control laws and the computer operating systems, will need to be certified to DO-178C Level A or B, depending on the class of aircraft, the levels applicable to preventing potential catastrophic failures.
Nevertheless, the top concern for computerized, digital fly-by-wire systems is reliability, even more so than for analog electronic control systems. This is because the digital computers running the software are often the only control path between the pilot and the aircraft's flight control surfaces. If the computer software crashes for any reason, the pilot may be unable to control the aircraft. Hence virtually all fly-by-wire flight control systems are either triply or quadruply redundant in their computers and electronics: they have three or four flight-control computers operating in parallel and three or four separate data buses connecting them with each control surface.
Redundancy
The multiple redundant flight control computers continuously monitor each other's output. If one computer begins to give aberrant results for any reason, potentially including software or hardware failures or flawed input data, then the combined system is designed to exclude the results from that computer in deciding the appropriate actions for the flight controls. Depending on specific system details there may be the potential to reboot an aberrant flight control computer, or to reincorporate its inputs if they return to agreement. Complex logic exists to deal with multiple failures, which may prompt the system to revert to simpler back-up modes.
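One simple form of the cross-monitoring described above is mid-value (median) selection across redundant channels. The Python sketch below is generic and illustrative only; real voters add persistence timers, fault latching, and reversion logic, and the tolerance value here is invented.

```python
# Generic mid-value-select voter for triplex/quadruplex channels.
# Illustrative only; the tolerance is an invented example value.
def vote(channel_outputs: list[float], tolerance: float = 0.5) -> float:
    """Return the median output; flag channels that disagree with it."""
    ordered = sorted(channel_outputs)
    n = len(ordered)
    if n % 2:
        median = ordered[n // 2]
    else:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    for i, value in enumerate(channel_outputs):
        if abs(value - median) > tolerance:
            print(f"channel {i} excluded ({value} vs median {median})")
    return median

print(vote([4.9, 5.0, 12.7]))  # channel 2 excluded; 5.0 is used
```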
In addition, most of the early digital fly-by-wire aircraft also had an analog electrical, mechanical, or hydraulic back-up flight control system. The Space Shuttle had, in addition to its redundant set of four digital computers running its primary flight-control software, a fifth backup computer running a separately developed, reduced-function, software flight-control system – one that could be commanded to take over in the event that a fault ever affected all of the other four computers. This backup system served to reduce the risk of total flight control system failure ever happening because of a general-purpose flight software fault that had escaped notice in the other four computers.
Efficiency of flight
For airliners, flight-control redundancy improves their safety, but fly-by-wire control systems, which are physically lighter and have lower maintenance demands than conventional controls, also improve economy, both in terms of cost of ownership and in-flight economy. In certain designs with limited relaxed stability in the pitch axis, for example the Boeing 777, the flight control system may allow the aircraft to fly at a more aerodynamically efficient angle of attack than a conventionally stable design. Modern airliners also commonly feature computerized Full-Authority Digital Engine Control systems (FADECs) that control their engines, air inlets, fuel storage and distribution system, in a similar fashion to the way that FBW controls the flight control surfaces. This allows the engine output to be continually varied for the most efficient usage possible.
The second-generation Embraer E-Jet family gained a 1.5% efficiency improvement over the first generation from its fly-by-wire system, which enabled a reduction in horizontal stabilizer area from 280 sq ft to 250 sq ft on the E190/E195 variants.
Airbus/Boeing
Airbus and Boeing differ in their approaches to implementing fly-by-wire systems in commercial aircraft. Since the Airbus A320, Airbus flight-envelope control systems always retain ultimate flight control when flying under normal law and will not permit pilots to violate aircraft performance limits unless they choose to fly under alternate law. This strategy has been continued on subsequent Airbus airliners. However, in the event of multiple failures of redundant computers, the A320 does have a mechanical back-up system for its pitch trim and its rudder, the Airbus A340 has a purely electrical (not electronic) back-up rudder control system and beginning with the A380, all flight-control systems have back-up systems that are purely electrical through the use of a "three-axis Backup Control Module" (BCM).
Boeing airliners, such as the Boeing 777, allow the pilots to completely override the computerized flight control system, permitting the aircraft to be flown outside of its usual flight control envelope.
Applications
Concorde was the first production fly-by-wire aircraft with analog control.
The General Dynamics F-16 was the first production aircraft to use digital fly-by-wire controls.
The Space Shuttle orbiter had an all-digital fly-by-wire control system. This system was first exercised (as the only flight control system) during the glider unpowered-flight "Approach and Landing Tests" that began with the Space Shuttle Enterprise during 1977.
Launched into production during 1984, the Airbus Industries Airbus A320 became the first airliner to fly with an all-digital fly-by-wire control system.
With its launch in 1993 the Boeing C-17 Globemaster III became the first fly-by-wire military transport aircraft.
In 2005, the Dassault Falcon 7X became the first business jet with fly-by-wire controls.
A fully digital fly-by-wire without a closed feedback loop was integrated in 2002 in the first generation Embraer E-Jet family. By closing the loop (feedback), the second generation Embraer E-Jet family gained a 1.5% efficiency improvement in 2016.
Engine digital control
The advent of FADEC (Full Authority Digital Engine Control) engines permits operation of the flight control systems and autothrottles for the engines to be fully integrated. On modern military aircraft other systems such as autostabilization, navigation, radar and weapons system are all integrated with the flight control systems. FADEC allows maximum performance to be extracted from the aircraft without fear of engine misoperation, aircraft damage or high pilot workloads.
In the civil field, the integration increases flight safety and economy. Airbus fly-by-wire aircraft are protected from dangerous situations such as low-speed stall or overstressing by flight envelope protection. As a result, in such conditions, the flight control system commands the engines to increase thrust without pilot intervention. In economy cruise modes, the flight control system adjusts the throttles and fuel tank selections precisely. FADEC reduces rudder drag needed to compensate for sideways flight from unbalanced engine thrust. On the A330/A340 family, fuel is transferred between the main (wing and center fuselage) tanks and a fuel tank in the horizontal stabilizer, to optimize the aircraft's center of gravity during cruise flight. The fuel management controls keep the aircraft's center of gravity accurately trimmed with fuel weight, rather than with drag-inducing aerodynamic trims in the elevators.
Further developments
Fly-by-optics
Fly-by-optics is sometimes used instead of fly-by-wire because it offers a higher data transfer rate, immunity to electromagnetic interference and lighter weight. In most cases, the cables are just changed from electrical to optical fiber cables. Sometimes it is referred to as "fly-by-light" due to its use of fiber optics. The data generated by the software and interpreted by the controller remain the same. Fly-by-light has the effect of decreasing electro-magnetic disturbances to sensors in comparison to more common fly-by-wire control systems. The Kawasaki P-1 is the first production aircraft in the world to be equipped with such a flight control system.
Power-by-wire
Having eliminated the mechanical transmission circuits in fly-by-wire flight control systems, the next step is to eliminate the bulky and heavy hydraulic circuits. The hydraulic circuit is replaced by an electrical power circuit. The power circuits power electrical or self-contained electrohydraulic actuators that are controlled by the digital flight control computers. All benefits of digital fly-by-wire are retained since the power-by-wire components are strictly complementary to the fly-by-wire components.
The biggest benefits are weight savings, the possibility of redundant power circuits and tighter integration between the aircraft flight control systems and its avionics systems. The absence of hydraulics greatly reduces maintenance costs. This system is used in the Lockheed Martin F-35 Lightning II and in Airbus A380 backup flight controls. The Boeing 787 and Airbus A350 also incorporate electrically powered backup flight controls which remain operational even in the event of a total loss of hydraulic power.
Fly-by-wireless
Wiring adds a considerable amount of weight to an aircraft; therefore, researchers are exploring implementing fly-by-wireless solutions. Fly-by-wireless systems are very similar to fly-by-wire systems, however, instead of using a wired protocol for the physical layer a wireless protocol is employed.
In addition to reducing weight, implementing a wireless solution has the potential to reduce costs throughout an aircraft's life cycle. For example, many key failure points associated with wire and connectors will be eliminated thus hours spent troubleshooting wires and connectors will be reduced. Furthermore, engineering costs could potentially decrease because less time would be spent on designing wiring installations, late changes in an aircraft's design would be easier to manage, etc.
Intelligent flight control system
A newer flight control system, called intelligent flight control system (IFCS), is an extension of modern digital fly-by-wire flight control systems. The aim is to intelligently compensate for aircraft damage and failure during flight, such as automatically using engine thrust and other avionics to compensate for severe failures such as loss of hydraulics, loss of rudder, loss of ailerons, loss of an engine, etc. Several demonstrations were made on a flight simulator where a Cessna-trained small-aircraft pilot successfully landed a heavily damaged full-size concept jet, without prior experience with large-body jet aircraft. This development is being spearheaded by NASA Dryden Flight Research Center. It is reported that enhancements are mostly software upgrades to existing fully computerized digital fly-by-wire flight control systems. The Dassault Falcon 7X and Embraer Legacy 500 business jets have flight computers that can partially compensate for engine-out scenarios by adjusting thrust levels and control inputs, but still require pilots to respond appropriately.
See also
Index of aviation articles
Aircraft flight control system
Air France Flight 296Q
Drive by wire
Dual control (aviation)
Flight control modes
MIL-STD-1553, a standard data bus for fly-by-wire
Relaxed stability
Note
References
External links
"Fly-by-wire" a 1972 Flight article archive version
Aircraft controls
Fault tolerance
Flight control systems | Fly-by-wire | [
"Engineering"
] | 4,554 | [
"Reliability engineering",
"Fault tolerance"
] |
11,524 | https://en.wikipedia.org/wiki/Fahrenheit | The Fahrenheit scale () is a temperature scale based on one proposed in 1724 by the European physicist Daniel Gabriel Fahrenheit (1686–1736). It uses the degree Fahrenheit (symbol: °F) as the unit. Several accounts of how he originally defined his scale exist, but the original paper suggests the lower defining point, 0 °F, was established as the freezing temperature of a solution of brine made from a mixture of water, ice, and ammonium chloride (a salt). The other limit established was his best estimate of the average human body temperature, originally set at 90 °F, then 96 °F (about 2.6 °F less than the modern value due to a later redefinition of the scale).
For much of the 20th century, the Fahrenheit scale was defined by two fixed points with a 180 °F separation: the temperature at which pure water freezes was defined as 32 °F and the boiling point of water was defined to be 212 °F, both at sea level and under standard atmospheric pressure. It is now formally defined using the Kelvin scale.
It continues to be used in the United States (including its unincorporated territories), its freely associated states in the Western Pacific (Palau, the Federated States of Micronesia and the Marshall Islands), the Cayman Islands, and Liberia.
Fahrenheit is commonly still used alongside the Celsius scale in other countries that use the U.S. metrological service, such as Antigua and Barbuda, Saint Kitts and Nevis, the Bahamas, and Belize. A handful of British Overseas Territories, including the Virgin Islands, Montserrat, Anguilla, and Bermuda, also still use both scales. All other countries now use Celsius ("centigrade" until 1948), which was invented 18 years after the Fahrenheit scale.
Definition and conversion
Historically, on the Fahrenheit scale the freezing point of water was 32 °F, and the boiling point was 212 °F (at standard atmospheric pressure). This put the boiling and freezing points of water 180 degrees apart. Therefore, a degree on the Fahrenheit scale was 1/180 of the interval between the freezing point and the boiling point. On the Celsius scale, the freezing and boiling points of water were originally defined to be 100 degrees apart. A temperature interval of 1 °F was equal to an interval of 5/9 degrees Celsius. With the Fahrenheit and Celsius scales now both defined by the kelvin, this relationship was preserved, a temperature interval of 1 °F being equal to an interval of 5/9 K and of 5/9 °C. The Fahrenheit and Celsius scales intersect numerically at −40 in the respective unit (i.e., −40 °F ≘ −40 °C).
Absolute zero is 0 K, −273.15 °C, or −459.67 °F. The Rankine temperature scale uses degree intervals of the same size as those of the Fahrenheit scale, except that absolute zero is 0 °R, in the same way that the Kelvin temperature scale matches the Celsius scale except that absolute zero is 0 K.
The combination of degree symbol (°) followed by an uppercase letter F is the conventional symbol for the Fahrenheit temperature scale. A number followed by this symbol (and separated from it with a space) denotes a specific temperature point (e.g., "Gallium melts at 85.5763 °F"). A difference between temperatures or an uncertainty in temperature is also conventionally written the same way, e.g., "The output of the heat exchanger experiences an increase of 72 °F" or "Our standard uncertainty is ±5 °F". However, some authors instead use the notation "An increase of 72 F°" (reversing the symbol order) to indicate temperature differences. Similar conventions exist for the Celsius scale, see .
Conversion (specific temperature point)
For an exact conversion between degrees Fahrenheit and Celsius, and kelvins, of a specific temperature point, the following formulas can be applied. Here, f is the value in degrees Fahrenheit, c the value in degrees Celsius, and k the value in kelvins:
°F to °C: c = (f − 32) / 1.8
°C to °F: f = c × 1.8 + 32
°F to K: k = (f + 459.67) / 1.8
K to °F: f = k × 1.8 − 459.67
There is also an exact conversion between the Celsius and Fahrenheit scales making use of the correspondence −40 °F ≘ −40 °C. Again, f is the numeric value in degrees Fahrenheit, and c the numeric value in degrees Celsius:
°F to °C: c = (f + 40) / 1.8 − 40
°C to °F: f = (c + 40) × 1.8 − 40
Conversion (temperature difference or interval)
When converting a temperature interval between the Fahrenheit and Celsius scales, only the ratio 1.8 is used, without any constant (in this case, the interval has the same numeric value in kelvins as in degrees Celsius):
°F to °C or K: Δc = Δk = Δf / 1.8
°C or K to °F: Δf = Δc × 1.8 = Δk × 1.8
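The point and interval conversions above translate directly into code. A minimal sketch in Python (the variable and function names are chosen for the example):

```python
# Point conversions between Fahrenheit, Celsius and kelvin.
def f_to_c(f: float) -> float:
    return (f - 32) / 1.8

def c_to_f(c: float) -> float:
    return c * 1.8 + 32

def f_to_k(f: float) -> float:
    return (f + 459.67) / 1.8

# Interval conversion uses the ratio only, with no offset.
def interval_f_to_c(df: float) -> float:
    return df / 1.8

print(c_to_f(100))           # 212.0: water boils
print(f_to_c(-40))           # -40.0: the scales cross at -40
print(round(f_to_k(32), 2))  # 273.15: freezing point of water
print(interval_f_to_c(180))  # 100.0: freezing-to-boiling span
```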
History
Fahrenheit proposed his temperature scale in 1724, basing it on two reference points of temperature. In his initial scale (which is not the final Fahrenheit scale), the zero point was determined by placing the thermometer in "a mixture of ice, water, and salis Armoniaci [transl. ammonium chloride] or even sea salt". This combination forms a eutectic system, which stabilizes its temperature automatically: 0 °F was defined to be that stable temperature. A second point, 96 degrees, was approximately the human body's temperature. A third point, 32 degrees, was marked as being the temperature of ice and water "without the aforementioned salts".
According to a German story, Fahrenheit actually chose the lowest air temperature measured in his hometown Danzig (Gdańsk, Poland) in winter 1708–09 as 0 °F, and only later had the need to be able to make this value reproducible using brine.
According to a letter Fahrenheit wrote to his friend Herman Boerhaave, his scale was built on the work of Ole Rømer, whom he had met earlier. On the Rømer scale, brine freezes at zero, water freezes and melts at 7.5 degrees, body temperature is 22.5, and water boils at 60 degrees. Fahrenheit multiplied each value by 4 in order to eliminate fractions and make the scale more fine-grained. He then re-calibrated his scale using the melting point of ice and normal human body temperature (which were at 30 and 90 degrees); he adjusted the scale so that the melting point of ice would be 32 degrees and body temperature 96 degrees, so that 64 intervals would separate the two, allowing him to mark degree lines on his instruments by simply bisecting the interval six times (since 64 = 2⁶).
Fahrenheit soon after observed that water boils at about 212 degrees using this scale. The use of the freezing and boiling points of water as thermometer fixed reference points became popular following the work of Anders Celsius, and these fixed points were adopted by a committee of the Royal Society led by Henry Cavendish in 1776–77. Under this system, the Fahrenheit scale is redefined slightly so that the freezing point of water was exactly 32 °F, and the boiling point was exactly 212 °F, or 180 degrees higher. It is for this reason that normal human body temperature is approximately 98.6 °F (oral temperature) on the revised scale (whereas it was 90° on Fahrenheit's multiplication of Rømer, and 96° on his original scale).
In the present-day Fahrenheit scale, 0 °F no longer corresponds to the eutectic temperature of ammonium chloride brine as described above. Instead, that eutectic is at approximately 4 °F on the final Fahrenheit scale.
The Rankine temperature scale was based upon the Fahrenheit temperature scale, with its zero representing absolute zero instead.
Usage
General
The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in Anglophone countries until the 1960s. In the late 1960s and 1970s, the Celsius scale replaced Fahrenheit in almost all of those countries—with the notable exception of the United States.
Fahrenheit is used in the United States, its territories and associated states (all serviced by the U.S. National Weather Service), as well as the (British) Cayman Islands and Liberia, for everyday applications. In the U.S., the Fahrenheit scale is used for everyday temperature measurements, including weather forecasts, cooking, and food freezing temperatures; scientific research, however, uses the Celsius and Kelvin scales.
United States
Early in the 20th century, Halsey and Dale suggested that reasons for resistance to using the centigrade (now Celsius) system in the U.S. included the larger size of each degree Celsius and the lower zero point in the Fahrenheit system; the Fahrenheit scale is arguably more intuitive than Celsius for describing outdoor temperatures in temperate latitudes, with 100 °F being a hot summer day and 0 °F a cold winter day.
Canada
Canada has passed legislation favoring the International System of Units, while also maintaining legal definitions for traditional Canadian imperial units. Canadian weather reports are conveyed using degrees Celsius with occasional reference to Fahrenheit especially for cross-border broadcasts. Fahrenheit is still used on virtually all Canadian ovens. Thermometers, both digital and analog, sold in Canada usually employ both the Celsius and Fahrenheit scales.
European Union
In the European Union, it is mandatory to use Kelvins or degrees Celsius when quoting temperature for "economic, public health, public safety and administrative" purposes, though degrees Fahrenheit may be used alongside degrees Celsius as a supplementary unit.
United Kingdom
Most British people use Celsius. However, Fahrenheit still appears at times alongside degrees Celsius in the print media, with no standard convention for when the measurement is included.
For example, The Times has an all-metric daily weather page but includes a Celsius-to-Fahrenheit conversion table. Some UK tabloids tend to use Fahrenheit for mid-to-high temperatures. It has been suggested that the rationale is one of emphasis: "−6 °C" sounds colder than "21 °F", and "94 °F" sounds more sensational than "34 °C".
Unicode representation of symbol
Unicode provides the Fahrenheit symbol at code point U+2109 ℉ DEGREE FAHRENHEIT. However, this is a compatibility character encoded for roundtrip compatibility with legacy encodings. The Unicode standard explicitly discourages the use of this character: "The sequence U+00B0 ° DEGREE SIGN + U+0046 F LATIN CAPITAL LETTER F is preferred over U+2109 ℉ DEGREE FAHRENHEIT, and those two sequences should be treated as identical for searching."
See also
Outline of metrology and measurement
Comparison of temperature scales
Degree of frost
Notes
References
External links
Daniel Gabriel Fahrenheit (Polish-born Dutch physicist) – Encyclopædia Britannica
"At Auction | One of Only Three Original Fahrenheit Thermometers" Enfilade page for 2012 Christie's sale of a Fahrenheit mercury thermometer
Christie's press release
Customary units of measurement in the United States
Imperial units
Scales of temperature
1724 introductions
Scales in meteorology | Fahrenheit | [
"Physics",
"Mathematics"
] | 2,389 | [
"Scales of temperature",
"Quantity",
"Physical quantities"
] |
11,526 | https://en.wikipedia.org/wiki/Quotient%20group | A quotient group or factor group is a mathematical group obtained by aggregating similar elements of a larger group using an equivalence relation that preserves some of the group structure (the rest of the structure is "factored out"). For example, the cyclic group of addition modulo n can be obtained from the group of integers under addition by identifying elements that differ by a multiple of and defining a group structure that operates on each such class (known as a congruence class) as a single entity. It is part of the mathematical field known as group theory.
For a congruence relation on a group, the equivalence class of the identity element is always a normal subgroup of the original group, and the other equivalence classes are precisely the cosets of that normal subgroup. The resulting quotient is written G / N, where G is the original group and N is the normal subgroup. This is read as "G mod N", where "mod" is short for modulo. (The notation should be interpreted with caution, as some authors (e.g., Vinberg) use it to represent the left cosets of H in G for any subgroup H, even though these cosets do not form a group if H is not normal in G. Others (e.g., Dummit and Foote) use this notation to refer only to the quotient group, with the appearance of this notation implying that H is normal in G.)
Much of the importance of quotient groups is derived from their relation to homomorphisms. The first isomorphism theorem states that the image of any group G under a homomorphism is always isomorphic to a quotient of G. Specifically, the image of G under a homomorphism φ : G → H is isomorphic to G / ker(φ), where ker(φ) denotes the kernel of φ.
The dual notion of a quotient group is a subgroup, these being the two primary ways of forming a smaller group from a larger one. Any normal subgroup has a corresponding quotient group, formed from the larger group by eliminating the distinction between elements of the subgroup. In category theory, quotient groups are examples of quotient objects, which are dual to subobjects.
Definition and illustration
Given a group G and a subgroup H, and a fixed element a ∈ G, one can consider the corresponding left coset: aH := {ah : h ∈ H}. Cosets are a natural class of subsets of a group; for example consider the abelian group G of integers, with operation defined by the usual addition, and the subgroup H of even integers. Then there are exactly two cosets: 0 + H, which are the even integers, and 1 + H, which are the odd integers (here we are using additive notation for the binary operation instead of multiplicative notation).
For a general subgroup H, it is desirable to define a compatible group operation on the set of all possible cosets, {aH : a ∈ G}. This is possible exactly when H is a normal subgroup, see below. A subgroup N of a group G is normal if and only if the coset equality aN = Na holds for all a ∈ G. A normal subgroup of G is denoted N ⊲ G.
Definition
Let N be a normal subgroup of a group G. Define the set G / N to be the set of all left cosets of N in G. That is, G / N = {aN : a ∈ G}.
Since the identity element e ∈ N, a ∈ aN. Define a binary operation on the set of cosets, G / N, as follows. For each aN and bN in G / N, the product of aN and bN, (aN)(bN), is (ab)N. This works only because (ab)N does not depend on the choice of the representatives, a and b, of each left coset, aN and bN. To prove this, suppose xN = aN and yN = bN for some x, y, a, b ∈ G. Then

(xy)N = x(yN) = x(bN) = x(Nb) = (xN)b = (aN)b = a(Nb) = a(bN) = (ab)N.
This depends on the fact that N is a normal subgroup. It still remains to be shown that this condition is not only sufficient but necessary to define the operation on G / N.
To show that it is necessary, consider that for a subgroup N of G, we have been given that the operation is well defined. That is, if xN = aN and yN = bN for x, y, a, b ∈ G, then (xy)N = (ab)N.
Let n ∈ N and g ∈ G. Since eN = nN, we have gN = (eg)N = (eN)(gN) = (nN)(gN) = (ng)N.
Now, gN = (ng)N ⇔ N = (g⁻¹ng)N ⇔ g⁻¹ng ∈ N, for all n ∈ N and g ∈ G.
Hence N is a normal subgroup of G.
It can also be checked that this operation on G / N is always associative, that G / N has identity element N, and that the inverse of element aN can always be represented by a⁻¹N. Therefore, the set G / N together with the operation defined by (aN)(bN) = (ab)N forms a group, the quotient group of G by N.
Due to the normality of N, the left cosets and right cosets of N in G are the same, and so G / N could have been defined to be the set of right cosets of N in G.
Example: Addition modulo 6
For example, consider the group G = {0, 1, 2, 3, 4, 5} with addition modulo 6. Consider the subgroup N = {0, 3}, which is normal because G is abelian. Then the set of (left) cosets is of size three:

G / N = {a + N : a ∈ G} = {{0, 3}, {1, 4}, {2, 5}} = {0 + N, 1 + N, 2 + N}.
The binary operation defined above makes this set into a group, known as the quotient group, which in this case is isomorphic to the cyclic group of order 3.
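The coset arithmetic in this example is small enough to verify mechanically. The following minimal Python sketch (written for this article; all names are illustrative) enumerates the cosets of N = {0, 3} in the group of integers modulo 6 and checks that the induced operation is well defined:

```python
# Cosets of N = {0, 3} in Z6 under addition mod 6.
G = range(6)
N = frozenset({0, 3})

def coset(a):
    """Left coset a + N in Z6."""
    return frozenset((a + n) % 6 for n in N)

cosets = {coset(a) for a in G}
print(sorted(map(sorted, cosets)))  # [[0, 3], [1, 4], [2, 5]]

# The induced operation: (a + N) + (b + N) = (a + b) + N.
# Well-definedness: the result must not depend on the representatives.
for a in G:
    for b in G:
        for a2 in coset(a):          # any representative of a + N
            for b2 in coset(b):      # any representative of b + N
                assert coset(a + b) == coset(a2 + b2)
print("operation is well defined; quotient has", len(cosets), "elements")
```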
Motivation for the name "quotient"
The quotient group can be compared to division of integers. When dividing 12 by 3 one obtains the result 4 because one can regroup 12 objects into 4 subcollections of 3 objects. The quotient group is the same idea, although one ends up with a group for a final answer instead of a number because groups have more structure than an arbitrary collection of objects: in the quotient G/N, the group structure is used to form a natural "regrouping". These are the cosets of N in G. Because we started with a group and normal subgroup, the final quotient contains more information than just the number of cosets (which is what regular division yields), but instead has a group structure itself.
Examples
Even and odd integers
Consider the group of integers Z (under addition) and the subgroup 2Z consisting of all even integers. This is a normal subgroup, because Z is abelian. There are only two cosets: the set of even integers and the set of odd integers, and therefore the quotient group Z/2Z is the cyclic group with two elements. This quotient group is isomorphic with the set {0, 1} with addition modulo 2; informally, it is sometimes said that Z/2Z equals the set {0, 1} with addition modulo 2.
Example further explained...
Let γ(m) be the remainder of m ∈ Z when dividing by 2. Then γ(m) = 0 when m is even and γ(m) = 1 when m is odd.
By definition of γ, the kernel of γ, ker(γ) = {m ∈ Z : γ(m) = 0}, is the set of all even integers.
Let H = ker(γ). Then H is a subgroup, because the identity in Z, which is 0, is in H; the sum of two even integers is even and hence if m and n are in H, m + n is in H (closure); and if m is even, −m is also even, so H contains its inverses.
Define μ : Z/H → Z₂ as μ(aH) = γ(a) for a ∈ Z, where Z/H = {H, 1 + H} is the quotient group of left cosets.
Note that, as we have defined it, μ(aH) is 1 if a is odd and 0 if a is even.
Thus, μ is an isomorphism from Z/H to Z₂.
Remainders of integer division
A slight generalization of the last example. Once again consider the group of integers Z under addition. Let n be any positive integer. We will consider the subgroup nZ of Z consisting of all multiples of n. Once again nZ is normal in Z because Z is abelian. The cosets are the collection {nZ, 1 + nZ, ..., (n − 1) + nZ}. An integer k belongs to the coset r + nZ, where r is the remainder when dividing k by n. The quotient Z/nZ can be thought of as the group of "remainders" modulo n. This is a cyclic group of order n.
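As a quick illustration (a minimal sketch with invented names), the coset of an integer under nZ is determined by its remainder, and coset addition matches addition of remainders:

```python
def coset_rep(k: int, n: int) -> int:
    """Representative in {0, ..., n-1} of the coset k + nZ."""
    return k % n

n = 5
for k in (-7, 3, 12):
    print(f"{k} + {n}Z = {coset_rep(k, n)} + {n}Z")

# Coset addition (a + nZ) + (b + nZ) = (a + b) + nZ matches
# addition of remainders mod n:
a, b = 9, 13
assert coset_rep(a + b, n) == (coset_rep(a, n) + coset_rep(b, n)) % n
```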
Complex integer roots of 1
The twelfth roots of unity, which are points on the complex unit circle, form a multiplicative abelian group G, shown on the picture on the right as colored balls with the number at each point giving its complex argument. Consider its subgroup N made of the fourth roots of unity, shown as red balls. This normal subgroup splits the group into three cosets, shown in red, green and blue. One can check that the cosets form a group of three elements (the product of a red element with a blue element is blue, the inverse of a blue element is green, etc.). Thus, the quotient group G/N is the group of three colors, which turns out to be the cyclic group with three elements.
Real numbers modulo the integers
Consider the group of real numbers R under addition, and the subgroup Z of integers. Each coset of Z in R is a set of the form a + Z, where a is a real number. Since a₁ + Z and a₂ + Z are identical sets when the non-integer parts of a₁ and a₂ are equal, one may impose the restriction 0 ≤ a < 1 without change of meaning. Adding such cosets is done by adding the corresponding real numbers, and subtracting 1 if the result is greater than or equal to 1. The quotient group R/Z is isomorphic to the circle group, the group of complex numbers of absolute value 1 under multiplication, or correspondingly, the group of rotations in 2D about the origin, that is, the special orthogonal group SO(2). An isomorphism is given by f(a + Z) = exp(2πia) (see Euler's identity).
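A numerical check of this isomorphism is straightforward; the sketch below (illustrative only, with a hypothetical function name f) verifies that coset addition in R/Z corresponds to multiplication on the unit circle:

```python
import cmath

# Sketch of the isomorphism f(a + Z) = exp(2*pi*i*a) from R/Z to the
# circle group (numerical check; tolerances are illustrative).
def f(a: float) -> complex:
    """Image on the unit circle of the coset a + Z."""
    return cmath.exp(2j * cmath.pi * a)

a, b = 0.7, 0.6
s = (a + b) % 1.0                          # coset addition wraps into [0, 1)
assert abs(f(s) - f(a) * f(b)) < 1e-12     # addition maps to multiplication
assert abs(abs(f(a)) - 1.0) < 1e-12        # image has absolute value 1
print(f(s))
```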
Matrices of real numbers
If G is the group of invertible 3 × 3 real matrices, and N is the subgroup of 3 × 3 real matrices with determinant 1, then N is normal in G (since it is the kernel of the determinant homomorphism). The cosets of N are the sets of matrices with a given determinant, and hence G/N is isomorphic to the multiplicative group of non-zero real numbers. The group N is known as the special linear group SL(3).
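This can be checked numerically: since the determinant is a homomorphism, multiplying by any matrix of determinant 1 stays inside the same coset. A minimal sketch, assuming NumPy is available (random matrices are invertible with probability 1):

```python
import numpy as np

# det : GL(3, R) -> (R \ {0}, *) is a homomorphism, so two invertible
# matrices lie in the same coset of SL(3) exactly when their
# determinants agree.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Homomorphism property: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# A and A @ S lie in the same coset for any S with det(S) = 1.
S = np.eye(3)
S[0, 1] = 2.5                      # shear: determinant stays 1
assert np.isclose(np.linalg.det(S), 1.0)
assert np.isclose(np.linalg.det(A @ S), np.linalg.det(A))
```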
Integer modular arithmetic
Consider the abelian group Z₄ = Z/4Z (that is, the set {0, 1, 2, 3} with addition modulo 4), and its subgroup {0, 2}. The quotient group Z₄/{0, 2} is {{0, 2}, {1, 3}}. This is a group with identity element {0, 2}, and group operations such as {0, 2} + {1, 3} = {1, 3}. Both the subgroup {0, 2} and the quotient group {{0, 2}, {1, 3}} are isomorphic with Z₂.
Integer multiplication
Consider the multiplicative group G = (Z/n²Z)^×. The set N of n-th residues is a multiplicative subgroup isomorphic to (Z/nZ)^×. Then N is normal in G and the factor group G/N has the cosets N, (1+n)N, (1+n)²N, ..., (1+n)^(n−1)N. The Paillier cryptosystem is based on the conjecture that it is difficult to determine the coset of a random element of G without knowing the factorization of n.
Properties
The quotient group G/G is isomorphic to the trivial group (the group with one element), and G/{e} is isomorphic to G.
The order of G/N, by definition the number of elements, is equal to [G : N], the index of N in G. If G is finite, the index is also equal to the order of G divided by the order of N. The set G/N may be finite, although both G and N are infinite (for example, Z/2Z).
There is a "natural" surjective group homomorphism π : G → G/N, sending each element g of G to the coset of N to which g belongs, that is: π(g) = gN. The mapping π is sometimes called the canonical projection of G onto G/N. Its kernel is N.
There is a bijective correspondence between the subgroups of G that contain N and the subgroups of G/N; if H is a subgroup of G containing N, then the corresponding subgroup of G/N is π(H). This correspondence holds for normal subgroups of G and G/N as well, and is formalized in the lattice theorem.
Several important properties of quotient groups are recorded in the fundamental theorem on homomorphisms and the isomorphism theorems.
If G is abelian, nilpotent, solvable, cyclic or finitely generated, then so is G/N.
If H is a subgroup in a finite group G, and the order of H is one half of the order of G, then H is guaranteed to be a normal subgroup, so G/H exists and is isomorphic to Z₂. This result can also be stated as "any subgroup of index 2 is normal", and in this form it applies also to infinite groups. Furthermore, if p is the smallest prime number dividing the order of a finite group G, then if H has order |G|/p, H must be a normal subgroup of G.
Given G and a normal subgroup N, then G is a group extension of G/N by N. One could ask whether this extension is trivial or split; in other words, one could ask whether G is a direct product or semidirect product of N and G/N. This is a special case of the extension problem. An example where the extension is not split is as follows: Let G = Z₄ = {0, 1, 2, 3}, and N = {0, 2}, which is isomorphic to Z₂. Then G/N is also isomorphic to Z₂. But Z₂ has only the trivial automorphism, so the only semi-direct product of N and G/N is the direct product. Since Z₄ is different from Z₂ × Z₂, we conclude that G is not a semi-direct product of N and G/N.
Quotients of Lie groups
If G is a Lie group and N is a normal and closed (in the topological rather than the algebraic sense of the word) Lie subgroup of G, the quotient G/N is also a Lie group. In this case, the original group G has the structure of a fiber bundle (specifically, a principal N-bundle), with base space G/N and fiber N. The dimension of G/N equals dim G − dim N.
Note that the condition that N is closed is necessary. Indeed, if N is not closed then the quotient space is not a T1-space (since there is a coset in the quotient which cannot be separated from the identity by an open set), and thus not a Hausdorff space.
For a non-normal Lie subgroup N, the space G/N of left cosets is not a group, but simply a differentiable manifold on which G acts. The result is known as a homogeneous space.
See also
Group extension
Quotient category
Short exact sequence
Notes
References
Group theory
Group | Quotient group | ["Mathematics"] | 2,646 | ["Group theory", "Fields of abstract algebra"] |
11,527 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20on%20homomorphisms | In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism.
The homomorphism theorem is used to prove the isomorphism theorems. Similar theorems are valid for vector spaces, modules, and rings.
Group-theoretic version
Given two groups G and H and a group homomorphism f : G → H, let N be a normal subgroup in G and φ the natural surjective homomorphism G → G/N (where G/N is the quotient group of G by N). If N is a subset of ker(f) (where ker represents a kernel) then there exists a unique homomorphism h : G/N → H such that f = h ∘ φ.
In other words, the natural projection φ is universal among homomorphisms on G that map N to the identity element.
The situation can be described by a commutative diagram: f factors through the projection φ as f = h ∘ φ.
h is injective if and only if N = ker(f). Therefore, by setting N = ker(f), we immediately get the first isomorphism theorem.
We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group".
Proof
The proof follows from two basic facts about homomorphisms, namely their preservation of the group operation, and their mapping of the identity element to the identity element. We need to show that if f : G → H is a homomorphism of groups, then:
im(f) is a subgroup of H.
G/ker(f) is isomorphic to im(f).
Proof of 1
The operation that is preserved by f is the group operation. If a, b ∈ im(f), then there exist elements a′, b′ ∈ G such that f(a′) = a and f(b′) = b. For these a and b, we have ab = f(a′)f(b′) = f(a′b′) (since f preserves the group operation), and thus ab ∈ im(f); the closure property is satisfied in im(f). The identity element e is also in im(f) because f maps the identity element of G to it. Since every element a′ in G has an inverse (a′)⁻¹ such that f((a′)⁻¹) = (f(a′))⁻¹ (because f preserves the inverse property as well), we have an inverse for each element f(a′) = a in im(f); therefore, im(f) is a subgroup of H.
Proof of 2
Construct a map ψ : G/ker(f) → im(f) by ψ(a ker(f)) = f(a). This map is well-defined, as if a ker(f) = b ker(f), then b⁻¹a ∈ ker(f) and so f(b⁻¹a) = e, which gives f(a) = f(b). This map is an isomorphism. ψ is surjective onto im(f) by definition. To show injectiveness, if ψ(a ker(f)) = ψ(b ker(f)), then f(a) = f(b), which implies b⁻¹a ∈ ker(f), so a ker(f) = b ker(f).
Finally,
ψ((a ker(f))(b ker(f))) = ψ(ab ker(f)) = f(ab) = f(a)f(b) = ψ(a ker(f)) ψ(b ker(f)),
hence ψ preserves the group operation. Hence ψ is an isomorphism between G/ker(f) and im(f), which completes the proof.
Applications
The group theoretic version of the fundamental homomorphism theorem can be used to show that two selected groups are isomorphic. Two examples are shown below.
Integers modulo n
For each n ∈ N, consider the groups Z and Zₙ and a group homomorphism f : Z → Zₙ defined by m ↦ m mod n (see modular arithmetic). Next, consider the kernel of f, ker(f) = nZ, which is a normal subgroup in Z. There exists a natural surjective homomorphism φ : Z → Z/nZ defined by m ↦ m + nZ. The theorem asserts that there exists an isomorphism h between Zₙ and Z/nZ, or in other words Zₙ ≅ Z/nZ; the factorization f = h ∘ φ forms the same commutative triangle as above.
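A minimal computational illustration of this example (names invented for the sketch) checks that the induced map on cosets is well defined and bijective onto the image:

```python
# Illustrative check of the first isomorphism theorem for
# f : Z -> Z_n with f(m) = m mod n and ker(f) = nZ.
n = 6

def f(m: int) -> int:
    return m % n

# The induced map h sends the coset m + nZ to f(m); it is well
# defined because representatives of one coset agree under f:
for m in range(-20, 20):
    assert f(m) == f(m + n) == f(m - 3 * n)

# h is a bijection onto im(f): the n cosets hit all n residues once.
assert {f(m) for m in range(n)} == set(range(n))
print(f"Z/{n}Z is isomorphic to Z_{n}, as the theorem asserts")
```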
N / C theorem
Let G be a group with subgroup H. Let C_G(H), N_G(H) and Aut(H) be the centralizer of H in G, the normalizer of H in G, and the automorphism group of H, respectively. Then, the theorem states that N_G(H)/C_G(H) is isomorphic to a subgroup of Aut(H).
Proof
We are able to find a group homomorphism f : N_G(H) → Aut(H) defined by f(g)(h) = ghg⁻¹, for all g ∈ N_G(H) and h ∈ H. Clearly, the kernel of f is C_G(H). Hence, we have a natural surjective homomorphism φ : N_G(H) → N_G(H)/C_G(H) defined by g ↦ gC_G(H). The fundamental homomorphism theorem then asserts that there exists an isomorphism between N_G(H)/C_G(H) and im(f), which is a subgroup of Aut(H).
See also
Quotient category
References
Theorems in abstract algebra | Fundamental theorem on homomorphisms | ["Mathematics"] | 693 | ["Theorems in algebra", "Theorems in abstract algebra"] |
11,529 | https://en.wikipedia.org/wiki/Fermion | In particle physics, a fermion is a subatomic particle that follows Fermi–Dirac statistics. Fermions have a half-odd-integer spin (spin 1/2, spin 3/2, etc.) and obey the Pauli exclusion principle. These particles include all quarks and leptons and all composite particles made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions differ from bosons, which obey Bose–Einstein statistics.
Some fermions are elementary particles (such as electrons), and some are composite particles (such as protons). According to the spin–statistics theorem in relativistic quantum field theory, particles with integer spin are bosons, while particles with half-integer spin are fermions.
In addition to the spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin-statistics relation is, in fact, a spin statistics-quantum number relation.
As a consequence of the Pauli exclusion principle, only one fermion can occupy a particular quantum state at a given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must differ. Fermions are usually associated with matter, whereas bosons are generally force carrier particles. However, in the current state of particle physics, the distinction between the two concepts is unclear. Weakly interacting fermions can also display bosonic behavior under extreme conditions. For example, at low temperatures, fermions show superfluidity for uncharged particles and superconductivity for charged particles.
Composite fermions, such as protons and neutrons, are the key building blocks of everyday matter.
English theoretical physicist Paul Dirac coined the name fermion from the surname of Italian physicist Enrico Fermi.
Elementary fermions
The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions. There are six quarks (up, down, strange, charm, bottom and top), and six leptons (electron, electron neutrino, muon, muon neutrino, tauon and tauon neutrino), along with the corresponding antiparticle of each of these.
Mathematically, there are many varieties of fermions, with the three most common types being:
Weyl fermions (massless),
Dirac fermions (massive), and
Majorana fermions (each its own antiparticle).
Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrinos are Dirac or Majorana fermions (or both). Dirac fermions can be treated as a combination of two Weyl fermions. In July 2015, Weyl fermions were experimentally realized in Weyl semimetals.
Composite fermions
Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion. It will have half-integer spin.
Examples include the following:
A baryon, such as the proton or neutron, contains three fermionic quarks.
The nucleus of a carbon-13 atom contains six protons and seven neutrons.
The atom helium-3 (3He) consists of two protons, one neutron, and two electrons. The deuterium atom consists of one proton, one neutron, and one electron.
The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion.
Fermionic or bosonic behavior of a composite particle (or system) is only seen at large (compared to size of the system) distances. At proximity, where spatial structure begins to be important, a composite particle (or system) behaves according to its constituent makeup.
Fermions can exhibit bosonic behavior when they become loosely bound in pairs. This is the origin of superconductivity and the superfluidity of helium-3: in superconducting materials, electrons interact through the exchange of phonons, forming Cooper pairs, while in helium-3, Cooper pairs are formed via spin fluctuations.
The quasiparticles of the fractional quantum Hall effect are also known as composite fermions; they consist of electrons with an even number of quantized vortices attached to them.
See also
Anyon, 2D quasiparticles
Chirality (physics), left-handed and right-handed
Fermionic condensate
Weyl semimetal
Fermionic field
Identical particles
Kogut–Susskind fermion, a type of lattice fermion
Majorana fermion, each its own antiparticle
Parastatistics
Skyrmion, a hypothetical particle
Notes
External links
Quantum field theory
Enrico Fermi | Fermion | ["Physics", "Materials_science"] | 1,083 | ["Quantum field theory", "Fermions", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Matter"] |
11,545 | https://en.wikipedia.org/wiki/Feedback | Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause and effect that forms a circuit or loop. The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems, since each part of the loop both influences and is influenced by the others.
History
Self-regulating mechanisms have existed since antiquity, and the idea of feedback started to enter economic theory in Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name.
The first ever known artificial feedback device was a float valve, for maintaining water at a constant level, invented in 270 BC in Alexandria, Egypt. This device illustrated the principle of feedback: a low water level opens the valve, the rising water then provides feedback into the system, closing the valve when the required level is reached. This then reoccurs in a circular fashion as the water level fluctuates.
Centrifugal governors were used to regulate the distance and pressure between millstones in windmills since the 17th century. In 1788, James Watt designed his first centrifugal governor following a suggestion from his business partner Matthew Boulton, for use in the steam engines of their production. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback.
The verb phrase to feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s, and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit.
By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing. This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920.
The development of cybernetics from the 1940s onwards was centred around the study of circular causal feedback mechanisms.
Over the years there has been some dispute as to the best definition of feedback. According to cybernetician Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection.
Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action.
Types
Positive and negative feedback
Positive feedback: If the signal feedback from output is in phase with the input signal, the feedback is called positive feedback.
Negative feedback: If the signal feedback is out of phase by 180° with respect to the input signal, the feedback is called negative feedback.
As an example of negative feedback, the diagram might represent a cruise control system in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the difference of the speed as measured by the speedometer from the target speed (set point). The controller interprets the speed to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the change of road grade to reduce the error in speed, minimising the effect of the changing slope.
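A toy simulation makes the loop concrete. The sketch below is illustrative only (invented gains and dynamics, not a real cruise controller), using pure proportional control:

```python
# Minimal sketch of the cruise-control loop described above.
target = 30.0          # set point: desired speed, m/s
speed = 20.0           # measured status, m/s
kp = 0.5               # proportional gain of the controller
grade_force = -1.0     # disturbance: constant uphill grade

for _ in range(200):
    error = target - speed                    # speedometer vs. set point
    throttle = kp * error                     # controller drives the effector
    speed += 0.1 * (throttle + grade_force)   # toy vehicle dynamics

# Proportional-only control settles near 28 m/s: the loop shrinks the
# error but leaves a steady-state offset against a constant disturbance;
# integral action (see the PID controller later in this article) removes it.
print(round(speed, 2))
```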
The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback already existed in the 1920s when the regenerative circuit was made. Friis and Jensen (1924) described this circuit in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mentioned only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:
According to Mindell (2002) confusion in the terms arose shortly after this:
Even before these terms were being used, James Clerk Maxwell had described their concept through several kinds of "component motions" associated with the centrifugal governors used in steam engines. He distinguished those that lead to a continued increase in a disturbance or the amplitude of a wave or oscillation, from those that lead to a decrease of the same quality.
Terminology
The terms positive and negative feedback are defined in different ways within different disciplines.
1. the change of the gap between reference and actual values of a parameter or trait, based on whether the gap is widening (positive) or narrowing (negative);
2. the valence of the action or effect that alters the gap, based on whether it makes the recipient or observer happy (positive) or unhappy (negative).
The two definitions may be confusing, like when an incentive (reward) is used to boost poor performance (narrow a gap). Referring to definition 1, some authors use alternative terms, replacing positive and negative with self-reinforcing and self-correcting, reinforcing and balancing, discrepancy-enhancing and discrepancy-reducing or regenerative and degenerative respectively. And for definition 2, some authors promote describing the action or effect as positive and negative reinforcement or punishment rather than feedback.
Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced.
This confusion may arise because feedback can be used to provide information or motivate, and often has both a qualitative and a quantitative component. As Connellan and Zemke (1993) put it:
Limitations of negative and positive feedback
While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be shoehorned into either type, and this is especially true when multiple loops are present.
Other types of feedback
In general, feedback systems can have many signals fed back, and the feedback loop frequently contains mixtures of positive and negative feedback, where positive and negative feedback can dominate at different frequencies or at different points in the state space of a system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa.
Some systems with feedback can have very complex behaviors such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems.
Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it.
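As a minimal illustration of this state-feedback pattern (a toy model, not a hardware description), a 3-bit counter computes its next state from the fed-back current state on each clock tick:

```python
# Toy digital feedback: a 3-bit synchronous counter.  The current
# state is fed back and combined with the input on each clock tick.
def clock_tick(state: int, enable: bool) -> int:
    """Next state = f(current state, input); the state is the feedback."""
    return (state + 1) % 8 if enable else state

state = 0
for tick in range(10):
    state = clock_tick(state, enable=True)
print(state)  # 10 ticks mod 8 -> 2
```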
Applications
Mathematics and dynamical systems
By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It is shown that dynamical systems with a feedback experience an adaptation to the edge of chaos.
Physics
Physical systems present feedback through the mutual interactions of their parts. Feedback is also relevant for the regulation of experimental conditions, noise reduction, and signal control. The thermodynamics of feedback-controlled systems has intrigued physicists since Maxwell's demon, with recent advances on the consequences for entropy reduction and performance increase.
Biology
In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation of the controlled parameter from its optimal value can result from changes in internal and external environments. A change of some of the environmental conditions may also require that range to change for the system to function. The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations.
Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas the positive feedback loop tends to accelerate it. The mirror neurons are part of a social feedback system, when an observed action is "mirrored" by the brain—like a self-performed action.
Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted molecules that act as mediators; failure of key feedback mechanisms in cancer disrupts tissue function.
In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity.
Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions.
Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption).
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles.
In zymology, feedback serves as regulation of activity of an enzyme by its direct product or a downstream metabolite in the metabolic pathway (see Allosteric regulation).
The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown.
In psychology, the body receives a stimulus from the environment or internally that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on.
Climate science
The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice–albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt.
Control theory
Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback".
The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on current rate of change.
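A discrete-time version of such a controller is short enough to sketch. The gains and plant below are invented for illustration and are not tuned for any real system:

```python
# Minimal discrete PID controller sketch (illustrative gains).
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulation of past errors
        self.prev_error = 0.0     # for the rate-of-change term

    def update(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # present + past + predicted-future contributions:
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a trivial first-order plant toward a set point of 1.0.
pid, y = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 0.0
for _ in range(200):
    y += 0.1 * pid.update(1.0 - y)   # plant integrates the control signal
print(round(y, 3))  # close to 1.0; the integral term removes the offset
```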
Education
For feedback in the educational context, see corrective feedback.
Mechanical engineering
In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and also used to regulate tank water level in the flush toilet.
The Dutch inventor Cornelius Drebbel (1572–1633) built thermostats (c1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Tom Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load).
The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868.
The Great Eastern was one of the largest steamships of its time and employed a steam powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable.
Electronic engineering
The use of feedback is widespread in the design of electronic components such as amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes.
If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output to oscillate or "hunt". While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators.
Harry Nyquist at Bell Labs derived the Nyquist stability criterion for determining the stability of feedback systems. An easier method, but less general, is to use Bode plots developed by Hendrik Bode to determine the gain margin and phase margin. Design to ensure stability often involves frequency compensation to control the location of the poles of the amplifier.
Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used.
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include:
astable circuits, which act as oscillators
monostable circuits, which can be pushed into a state, and will return to the stable state after some time
bistable circuits, which have two stable states that the circuit can be switched between
Negative feedback
Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations. In feedback amplifiers, this correction is generally for waveform distortion reduction or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is the asymptotic gain model.
Positive feedback
Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information.
The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback. If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, and is picked up by the microphone and re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible.
Oscillator
An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games.
Oscillators are often characterized by the frequency of their output signal:
A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz.
An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz.
Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters.
There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator.
Latches and flip-flops
A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Latches and flip-flops are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.
Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches.
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip flop's output only changes on a single type (positive going or negative going) of clock edge.
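The distinction can be illustrated with toy behavioral models (illustrative Python, not gate-level hardware): the latch copies its input whenever the enable level is high, while the flip-flop samples its input only on a rising clock edge.

```python
class DLatch:
    """Level-sensitive: transparent while enable is high."""
    def __init__(self):
        self.q = 0

    def step(self, d: int, enable: int) -> int:
        if enable:
            self.q = d   # follows the input while enabled
        return self.q

class DFlipFlop:
    """Edge-triggered: samples d only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def step(self, d: int, clk: int) -> int:
        if clk and not self.prev_clk:  # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
# With the control signal held high, the latch keeps following d,
# while the flip-flop holds the value sampled at the first edge.
print(latch.step(d=1, enable=1), latch.step(d=0, enable=1))  # 1 0
print(ff.step(d=1, clk=1), ff.step(d=0, clk=1))              # 1 1
```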
Software
Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems. Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and foundations of control theory have been successfully applied to computing systems. In particular, they have been applied to the development of products such as IBM Db2 and IBM Tivoli. From a software perspective, the autonomic (MAPE, monitor analyze plan execute) loop proposed by researchers of IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems.
Software Development
User interface design
Feedback is also a useful design principle for designing user interfaces.
Video feedback
Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback.
Human resource management
See also
References
Further reading
Katie Salen and Eric Zimmerman. Rules of Play. MIT Press. 2004. . Chapter 18: Games as Cybernetic Systems.
Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS, 2006.
Dijk, E., Cremer, D.D., Mulder, L.B., and Stouten, J. "How Do We React to Feedback in Social Dilemmas?" In Biel, Eek, Garling & Gustafsson, (eds.), New Issues and Paradigms in Research on Social Dilemmas, New York: Springer, 2008.
External links
Control theory
Management cybernetics
Broad-concept articles | Feedback | ["Mathematics"] | 4,641 | ["Applied mathematics", "Control theory", "Dynamical systems"] |
11,552 | https://en.wikipedia.org/wiki/Frederick%20Abel | Sir Frederick Augustus Abel, 1st Baronet (17 July 18276 September 1902) was an English chemist who was recognised as the leading British authority on explosives. He is best known for the invention of cordite as a replacement for gunpowder in firearms.
Education
Born in London as the son of Johann Leopold Abel, Abel studied chemistry at the Royal Polytechnic Institution and in 1845 became one of the original 26 students of A. W. von Hofmann at the Royal College of Chemistry (now a constituent of Imperial College London). In 1852 he was appointed lecturer in chemistry at the Royal Military Academy, Woolwich, succeeding Michael Faraday, who had held that post since 1829.
Early career
From 1854 until 1888 Abel served as ordnance chemist at the Chemical Establishment of the Royal Arsenal at Woolwich, establishing himself as the leading British authority on explosives. Three years later he was appointed chemist to the War Department and chemical referee to the government. During his tenure of this office, which lasted until 1888, he carried out a large amount of work in connection with the chemistry of explosives.
Notable work
One of the most important of his investigations had to do with the manufacture of guncotton, and he developed a process, consisting essentially of reducing the nitrated cotton to fine pulp, which enabled it to be safely manufactured and at the same time yielded the product in a form that increased its usefulness. This work to an important extent prepared the way for the "smokeless powders" which came into general use towards the end of the 19th century; cordite, the type adopted by the British government in 1891, was invented jointly by him and Sir James Dewar. He and Dewar were unsuccessfully sued by Alfred Nobel over infringement of Nobel's patent for a similar explosive called ballistite, the case finally being resolved in the House of Lords in 1895. He also extensively researched the behaviour of black powder when ignited, with the Scottish physicist Sir Andrew Noble. At the request of the British government, he devised the Abel test, a means of determining the flash point of petroleum products. His first instrument, the open-test apparatus, was specified in an Act of Parliament in 1868 for officially specifying petroleum products. It was superseded in August 1879 by the much more reliable Abel close-test instrument. Under his leadership, first, guncotton was developed at Waltham Abbey Royal Gunpowder Mills, patented in 1865, then, the propellant cordite, patented in 1889. In electricity, Abel studied the construction of electrical fuses and other applications of electricity to warlike purposes.
Leadership and honours
He was elected a Fellow of the Royal Society in 1860 and received their Royal Medal in 1887. He was president of the Chemical Society (1875–77), of the Institution of Electrical Engineers (then the Society of Telegraph Engineers) (1877), of the Institute of Chemistry (1881–82) and of the Society of Chemical Industry (1882–83). He was also president of the Iron and Steel Institute in 1891 and was awarded the Bessemer Gold Medal in 1897 for his work on problems of steel manufacture. He was awarded the Telford Medal by the Institution of Civil Engineers in 1879.
He was made a Commander of the Order of the Bath (CB) in 1877, and knighted on 20 April 1883.
He took an important part in the work of the Inventions Exhibition (London) in 1885, and in 1887 became organizing secretary and first director of the Imperial Institute, a position he held till his death in 1902. He was Rede Lecturer and received an honorary doctorate from Cambridge University in 1888. He was upgraded Knight Commander of the Order of the Bath (KCB) on 3 February 1891, created a baronet, of Cadogan Place in the Parish of Chelsea in the County of London, on 25 May 1893 and made a Knight Grand Cross of the Royal Victorian Order (GCVO) on 8 March 1901.
Abel died at his residence in Whitehall Court, London, on 6 September 1902, aged 75, and was buried in Nunhead Cemetery, London. The baronetcy became extinct on his death.
Family
Abel married twice; first to Sarah Blanch, daughter of James Blanch, of Bristol; secondly after his first wife's death to Giulietta de La Feuillade. He left no children.
Books
Handbook of Chemistry (with C. L. Bloxam)
The Modern History of Gunpowder (1866)
Gun-cotton (1866)
On Explosive Agents (1872)
Researches in Explosives (1875)
Electricity applied to Explosive Purposes (1898)
He also wrote several articles in the ninth edition of the Encyclopædia Britannica.
See also
Internal ballistics
References
Attribution
Further reading
External links
1827 births
1902 deaths
19th-century English chemists
Cordite
Ballistics experts
Fellows of the Royal Society
Knights Commander of the Order of the Bath
Knights Grand Cross of the Royal Victorian Order
Presidents of the Smeatonian Society of Civil Engineers
Alumni of Imperial College London
Scientists from London
Knights Bachelor
Baronets in the Baronetage of the United Kingdom
Burials at Nunhead Cemetery
Royal Medal winners
Bessemer Gold Medal | Frederick Abel | ["Chemistry"] | 1,025 | ["Bessemer Gold Medal", "Chemical engineering awards"] |
11,555 | https://en.wikipedia.org/wiki/Fluorescence | Fluorescence is one of two kinds of photoluminescence, the emission of light by a substance that has absorbed light or other electromagnetic radiation. When exposed to ultraviolet radiation, many substances will glow (fluoresce) with colored visible light. The color of the light emitted depends on the chemical composition of the substance. Fluorescent materials generally cease to glow nearly immediately when the radiation source stops. This distinguishes them from the other type of light emission, phosphorescence. Phosphorescent materials continue to emit light for some time after the radiation stops.
This difference in timing is a result of quantum spin effects.
Fluorescence occurs when a photon of the incoming radiation is absorbed by a molecule, exciting it to a higher energy level, followed by emission of light as the molecule returns to a lower energy state. The emitted light may have a longer wavelength, and therefore a lower photon energy, than the absorbed radiation. For example, the absorbed radiation may be in the ultraviolet region of the electromagnetic spectrum (invisible to the human eye), while the emitted light is in the visible region. This gives the fluorescent substance a distinct color that is best seen when it has been exposed to UV light, making it appear to glow in the dark. However, any light of a shorter wavelength may cause a material to fluoresce at a longer wavelength. Fluorescent materials may also be excited by certain wavelengths of visible light, which masks the glow, yet their colors may appear bright and intensified. Other fluorescent materials emit their light in the infrared or even the ultraviolet regions of the spectrum.
Fluorescence has many practical applications, including mineralogy, gemology, medicine, chemical sensors (fluorescence spectroscopy), fluorescent labelling, dyes, biological detectors, cosmic-ray detection, vacuum fluorescent displays, and cathode-ray tubes. Its most common everyday application is in (gas-discharge) fluorescent lamps and LED lamps, in which fluorescent coatings convert UV or blue light into longer wavelengths, resulting in white light which can even appear indistinguishable from that of the traditional but energy-inefficient incandescent lamp. Fluorescence also occurs frequently in nature in some minerals and in many biological forms across all kingdoms of life. The latter may be referred to as biofluorescence, indicating that the fluorophore is part of or is extracted from a living organism (rather than an inorganic dye or stain). But since fluorescence is due to a specific chemical, which can also be synthesized artificially in most cases, it is sufficient to describe the substance itself as fluorescent.
History
Fluorescence was observed long before it was named and understood.
An early observation of fluorescence was known to the Aztecs and described in 1560 by Bernardino de Sahagún and in 1565 by Nicolás Monardes in the infusion known as lignum nephriticum (Latin for "kidney wood"). It was derived from the wood of two tree species, Pterocarpus indicus and Eysenhardtia polystachya.
The chemical compound responsible for this fluorescence is matlaline, which is the oxidation product of one of the flavonoids found in this wood.
In 1819, E.D. Clarke and in 1822 René Just Haüy described some varieties of fluorites that had a different color depending on whether the light was reflected or (apparently) transmitted. Haüy incorrectly viewed the effect as light scattering similar to opalescence. In 1833 Sir David Brewster described a similar effect in chlorophyll which he also considered a form of opalescence.
Sir John Herschel studied quinine in 1845 and came to a different incorrect conclusion.
In 1842, A.E. Becquerel observed that calcium sulfide emits light after being exposed to solar ultraviolet, making him the first to state that the emitted light is of longer wavelength than the incident light. While his observation of photoluminescence was similar to that described 10 years later by Stokes, who observed a fluorescence of a solution of quinine, the phenomenon that Becquerel described with calcium sulfide is now called phosphorescence.
In his 1852 paper on the "Refrangibility" (wavelength change) of light, George Gabriel Stokes described the ability of fluorspar, uranium glass and many other substances to change invisible light beyond the violet end of the visible spectrum into visible light. He named this phenomenon fluorescence:
"I am almost inclined to coin a word, and call the appearance fluorescence, from fluor-spar [i.e., fluorite], as the analogous term opalescence is derived from the name of a mineral."
Neither Becquerel nor Stokes understood one key aspect of photoluminescence: the critical difference from incandescence, the emission of light by heated material. To distinguish it from incandescence, in the late 1800s, Gustav Wiedemann proposed the term luminescence to designate any emission of light more intense than expected from the source's temperature.
Advances in spectroscopy and quantum electronics between the 1950s and 1970s provided a way to distinguish between the three different mechanisms that produce the light, as well as narrowing down the typical timescales those mechanisms take to decay after absorption. In modern science, this distinction became important because some items, such as lasers, required the fastest decay times, which typically occur in the nanosecond (billionth of a second) range. In physics, this first mechanism was termed "fluorescence" or "singlet emission", and is common in many laser mediums such as ruby. Other fluorescent materials were discovered to have much longer decay times, because some of the atoms would change their spin to a triplet state, thus would glow brightly with fluorescence under excitation but produce a dimmer afterglow for a short time after the excitation was removed, which became labeled "phosphorescence" or "triplet phosphorescence". The typical decay times ranged from a few microseconds to one second, which are still fast enough by human-eye standards to be colloquially referred to as fluorescent. Common examples include fluorescent lamps, organic dyes, and even fluorspar. Longer emitters, commonly referred to as glow-in-the-dark substances, ranged from one second to many hours, and this mechanism was called persistent phosphorescence or persistent luminescence, to distinguish it from the other two mechanisms.
Physical principles
Mechanism
Fluorescence occurs when an excited molecule, atom, or nanostructure, relaxes to a lower energy state (usually the ground state) through emission of a photon without a change in electron spin. When the initial and final states have different multiplicity (spin), the phenomenon is termed phosphorescence.
When a molecule in its ground state (called S0) is photoexcited it may end up in any one of a number of excited states (S1, S2, S3, ...). These higher excited states comprise different vibrational levels, populated in proportion to their overlap with the ground state according to the Franck–Condon principle. These vibrationally excited states typically decay rapidly to the lowest vibrational level of S1, followed by a radiative transition to the ground state or to vibrational states close to the ground state. This transition is called fluorescence. All of these states are singlet states.
A different pathway for deexcitation is intersystem crossing from the S1 to a triplet state T1. Decay from T1 to S0 is typically slower and less intense and is called phosphorescence.
Absorption of a photon of energy results in an excited state of the same multiplicity (spin) of the ground state, usually a singlet (Sn with n > 0). In solution, states with n > 1 relax rapidly to the lowest vibrational level of the first excited state (S1) by transferring energy to the solvent molecules through non-radiative processes, including internal conversion followed by vibrational relaxation, in which the energy is dissipated as heat. Thus the fluorescence energy is typically less than the photoexcitation energy.
The excited state S1 can relax by other mechanisms that do not involve the emission of light. These processes, called non-radiative processes, compete with fluorescence emission and decrease its efficiency. Examples include internal conversion, intersystem crossing to the triplet state, and energy transfer to another molecule. An example of energy transfer is Förster resonance energy transfer. Relaxation from an excited state can also occur through collisional quenching, a process where a molecule (the quencher) collides with the fluorescent molecule during its excited state lifetime. Molecular oxygen (O2) is an extremely efficient quencher of fluorescence because of its unusual triplet ground state.
Quantum yield
The fluorescence quantum yield gives the efficiency of the fluorescence process. It is defined as the ratio of the number of photons emitted to the number of photons absorbed.
The maximum possible fluorescence quantum yield is 1.0 (100%); each photon absorbed results in a photon emitted. Compounds with quantum yields of 0.10 are still considered quite fluorescent. Another way to define the quantum yield of fluorescence is by the rate of excited state decay:

\Phi = \frac{k_f}{\sum_i k_i}

where k_f is the rate constant of spontaneous emission of radiation and the sum \sum_i k_i is the sum of all rates of excited state decay. Other rates of excited state decay are caused by mechanisms other than photon emission and are, therefore, often called "non-radiative rates", which can include:
dynamic collisional quenching
near-field dipole–dipole interaction (or resonance energy transfer)
internal conversion
intersystem crossing
Thus, if the rate of any pathway changes, both the excited state lifetime and the fluorescence quantum yield will be affected.
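As a numeric illustration of these competing rates (the rate constants below are invented for illustration), the quantum yield is the radiative rate's share of the total decay rate:

```python
# Illustrative quantum-yield calculation from decay rates (s^-1).
k_f = 1.0e8            # radiative (fluorescence) rate constant
k_nr = 1.5e8           # sum of non-radiative rates (IC, ISC, quenching...)

phi = k_f / (k_f + k_nr)         # quantum yield: fraction of photons emitted
lifetime = 1.0 / (k_f + k_nr)    # excited-state lifetime, seconds

print(f"quantum yield = {phi:.2f}")           # 0.40
print(f"lifetime = {lifetime * 1e9:.1f} ns")  # 4.0 ns
```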
Fluorescence quantum yields are measured by comparison to a standard. The quinine salt quinine sulfate in a sulfuric acid solution was long regarded as the most common fluorescence standard; however, a recent study revealed that the fluorescence quantum yield of this solution is strongly affected by the temperature, and that it should no longer be used as the standard solution. Quinine in 0.1 M perchloric acid (HClO₄) shows no temperature dependence up to 45 °C, and can therefore be considered a reliable standard solution.
Lifetime
The fluorescence lifetime refers to the average time the molecule stays in its excited state before emitting a photon. Fluorescence typically follows first-order kinetics:

[S_1] = [S_1]_0 e^{-\Gamma t}

where [S_1] is the concentration of excited state molecules at time t, [S_1]_0 is the initial concentration and \Gamma is the decay rate, or the inverse of the fluorescence lifetime. This is an instance of exponential decay. Various radiative and non-radiative processes can de-populate the excited state. In such case the total decay rate is the sum over all rates:

\Gamma_{tot} = \Gamma_{rad} + \Gamma_{nrad}

where \Gamma_{tot} is the total decay rate, \Gamma_{rad} the radiative decay rate and \Gamma_{nrad} the non-radiative decay rate. It is similar to a first-order chemical reaction in which the first-order rate constant is the sum of all of the rates (a parallel kinetic model). If the rate of spontaneous emission, or any of the other rates are fast, the lifetime is short. For commonly used fluorescent compounds, typical excited state decay times for photon emissions with energies from the UV to near infrared are within the range of 0.5 to 20 nanoseconds. The fluorescence lifetime is an important parameter for practical applications of fluorescence such as fluorescence resonance energy transfer and fluorescence-lifetime imaging microscopy.
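A short sketch of the resulting exponential decay, assuming an illustrative 4 ns lifetime:

```python
import math

# [S1](t) = [S1]_0 * exp(-t / tau), with an assumed 4 ns lifetime.
tau = 4.0e-9                     # fluorescence lifetime, seconds
s1_0 = 1.0                       # initial (normalized) population

for t_ns in (0.0, 2.0, 4.0, 8.0):
    t = t_ns * 1e-9
    s1 = s1_0 * math.exp(-t / tau)
    print(f"t = {t_ns:4.1f} ns   [S1] = {s1:.3f}")
# At t = tau the population has fallen to 1/e (about 0.368).
```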
Jablonski diagram
The Jablonski diagram describes most of the relaxation mechanisms for excited state molecules. It shows how fluorescence occurs due to the relaxation of certain excited electrons of a molecule.
Fluorescence anisotropy
Fluorophores are more likely to be excited by photons if the transition moment of the fluorophore is parallel to the electric vector of the photon. The polarization of the emitted light will also depend on the transition moment. The transition moment is dependent on the physical orientation of the fluorophore molecule. For fluorophores in solution, the intensity and polarization of the emitted light is dependent on rotational diffusion. Therefore, anisotropy measurements can be used to investigate how freely a fluorescent molecule moves in a particular environment.
Fluorescence anisotropy can be defined quantitatively as
r = (I_∥ − I_⊥) / (I_∥ + 2I_⊥)
where I_∥ is the emitted intensity parallel to the polarization of the excitation light and I_⊥ is the emitted intensity perpendicular to the polarization of the excitation light.
Anisotropy is independent of the intensity of the absorbed or emitted light, since it is a ratio of intensities rather than an absolute signal; photobleaching of the dye will therefore not affect the anisotropy value as long as the signal is detectable.
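A small sketch of the anisotropy formula; the intensity values are invented for illustration.

def anisotropy(i_parallel, i_perpendicular):
    """r = (I_par - I_perp) / (I_par + 2 * I_perp);
    the denominator is the total emitted intensity."""
    return (i_parallel - i_perpendicular) / (i_parallel + 2.0 * i_perpendicular)

# A slowly rotating (for example, bound) fluorophore retains polarization:
print(anisotropy(100.0, 40.0))  # 0.333...
# Fast rotational diffusion scrambles polarization, so r approaches zero:
print(anisotropy(100.0, 95.0))  # about 0.017

Because r is a ratio, multiplying both intensities by the same factor (as photobleaching effectively does) leaves it unchanged, in line with the remark above.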
Fluorescence
Strongly fluorescent pigments often have an unusual appearance that is colloquially described as a "neon color" (originally "day-glo" in the late 1960s to early 1970s). This phenomenon was termed "Farbenglut" by Hermann von Helmholtz and "fluorence" by Ralph M. Evans. It is generally thought to be related to the high brightness of the color relative to what it would be as a component of white. Fluorescence shifts energy in the incident illumination from shorter wavelengths to longer (such as blue to yellow) and thus can make the fluorescent color appear brighter (more saturated) than it could possibly be by reflection alone.
Rules
There are several general rules that deal with fluorescence. Each of the following rules has exceptions, but they are useful guidelines for understanding fluorescence (these rules do not necessarily apply to two-photon absorption).
Kasha's rule
Kasha's rule states that the luminescence (fluorescence or phosphorescence) of a molecule will be emitted only from the lowest excited state of its given multiplicity. Vavilov's rule (a logical extension of Kasha's rule, hence the name Kasha–Vavilov rule) dictates that the quantum yield of luminescence is independent of the wavelength of the exciting radiation and is proportional to the absorbance at the exciting wavelength. Kasha's rule does not always apply and is violated by some simple molecules; one example is azulene. A somewhat more reliable statement, although still with exceptions, is that the fluorescence spectrum shows very little dependence on the wavelength of the exciting radiation.
Mirror image rule
For many fluorophores the absorption spectrum is a mirror image of the emission spectrum.
This is known as the mirror image rule and is related to the Franck–Condon principle, which states that electronic transitions are vertical, that is, the energy changes without the nuclear coordinates changing, which can be represented by a vertical line in a Jablonski diagram. This means the nuclei do not move during the transition and the vibrational levels of the excited state resemble the vibrational levels of the ground state.
Stokes shift
In general, emitted fluorescence light has a longer wavelength and lower energy than the absorbed light. This phenomenon, known as Stokes shift, is due to energy loss between the time a photon is absorbed and when a new one is emitted. The causes and magnitude of Stokes shift can be complex and are dependent on the fluorophore and its environment. However, there are some common causes. It is frequently due to non-radiative decay to the lowest vibrational energy level of the excited state. Another factor is that the emission of fluorescence frequently leaves a fluorophore in a higher vibrational level of the ground state.
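As a numeric illustration of that energy loss, the photon energy is E = hc/λ; the wavelengths below are hypothetical, chosen only to show the arithmetic.

PLANCK_EV_S = 4.135667696e-15  # Planck constant, eV*s
LIGHT_NM_S = 2.99792458e17     # speed of light, nm/s

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / lambda, in electronvolts."""
    return PLANCK_EV_S * LIGHT_NM_S / wavelength_nm

absorbed = photon_energy_ev(450.0)  # hypothetical absorption maximum, nm
emitted = photon_energy_ev(520.0)   # hypothetical emission maximum, nm
print(f"absorbed {absorbed:.2f} eV, emitted {emitted:.2f} eV")
print(f"Stokes loss per photon: {absorbed - emitted:.2f} eV")  # ~0.37 eV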
In nature
There are many natural compounds that exhibit fluorescence, and they have a number of applications. Some deep-sea animals, such as the greeneye, have fluorescent structures.
Compared to bioluminescence and biophosphorescence
Fluorescence
Fluorescence is the phenomenon of absorption of electromagnetic radiation, typically from ultraviolet or visible light, by a molecule and the subsequent emission of a photon of lower energy (lower frequency, longer wavelength). This causes the light that is emitted to be a different color than the light that is absorbed. Stimulating light excites an electron to an excited state. When the molecule returns to the ground state, it releases a photon, which is the fluorescent emission. The excited state lifetime is short, so emission of light is typically only observable when the absorbing light is on. Fluorescence can be of any wavelength but is often more significant when emitted photons are in the visible spectrum. When it occurs in a living organism, it is sometimes called biofluorescence. Fluorescence should not be confused with bioluminescence and biophosphorescence. For example, pumpkin toadlets that live in the Brazilian Atlantic forest are fluorescent.
Bioluminescence
Bioluminescence differs from fluorescence in that it is the natural production of light by chemical reactions within an organism, whereas fluorescence is the absorption and reemission of light from the environment. Fireflies and anglerfish are two examples of bioluminescent organisms. To add to the potential confusion, some organisms are both bioluminescent and fluorescent, like the sea pansy Renilla reniformis, where bioluminescence serves as the light source for fluorescence.
Phosphorescence
Phosphorescence is similar to fluorescence in its requirement of light wavelengths as a provider of excitation energy. The difference here lies in the relative stability of the energized electron. Unlike with fluorescence, in phosphorescence the electron retains stability, emitting light that continues to "glow in the dark" even after the stimulating light source has been removed. For example, glow-in-the-dark stickers are phosphorescent, but there are no truly biophosphorescent animals known.
Mechanisms
Epidermal chromatophores
Pigment cells that exhibit fluorescence are called fluorescent chromatophores, and function somatically similar to regular chromatophores. These cells are dendritic, and contain pigments called fluorosomes. These pigments contain fluorescent proteins which are activated by K+ (potassium) ions, and it is their movement, aggregation, and dispersion within the fluorescent chromatophore that cause directed fluorescence patterning. Fluorescent cells are innervated the same as other chromatophores, like melanophores, pigment cells that contain melanin. Short term fluorescent patterning and signaling is controlled by the nervous system. Fluorescent chromatophores can be found in the skin (e.g. in fish) just below the epidermis, amongst other chromatophores.
Epidermal fluorescent cells in fish also respond to hormonal stimuli by the α–MSH and MCH hormones much the same as melanophores. This suggests that fluorescent cells may have color changes throughout the day that coincide with their circadian rhythm. Fish may also be sensitive to cortisol-induced stress responses to environmental stimuli, such as interaction with a predator or engaging in a mating ritual.
Phylogenetics
Evolutionary origins
The incidence of fluorescence across the tree of life is widespread, and has been studied most extensively in cnidarians and fish. The phenomenon appears to have evolved multiple times in multiple taxa such as in the Anguilliformes (eels), Gobioidei (gobies and cardinalfishes), and Tetraodontiformes (triggerfishes), along with the other taxa discussed later in the article. Fluorescence is highly genotypically and phenotypically variable even within ecosystems, with regard to the wavelengths emitted, the patterns displayed, and the intensity of the fluorescence. Generally, the species relying upon camouflage exhibit the greatest diversity in fluorescence, likely because camouflage may be one of the uses of fluorescence.
It is suspected by some scientists that GFPs and GFP-like proteins began as electron donors activated by light. These electrons were then used for reactions requiring light energy. Functions of fluorescent proteins, such as protection from the sun, conversion of light into different wavelengths, or for signaling are thought to have evolved secondarily.
Adaptive functions
Currently, relatively little is known about the functional significance of fluorescence and fluorescent proteins. However, it is suspected that fluorescence may serve important functions in signaling and communication, mating, lures, camouflage, UV protection and antioxidation, photoacclimation, dinoflagellate regulation, and in coral health.
Aquatic
Water absorbs light of long wavelengths, so less light from these wavelengths reflects back to reach the eye. Therefore, warm colors from the visual light spectrum appear less vibrant at increasing depths. Water scatters light of shorter wavelengths above violet, meaning cooler colors dominate the visual field in the photic zone. Light intensity decreases 10-fold with every 75 m of depth, so at a depth of 75 m, light is 10% as intense as it is on the surface, and it is only 1% as intense at 150 m. Because the water filters out the wavelengths and reduces the intensity of light reaching certain depths, different proteins, because of the wavelengths and intensities of light they are capable of absorbing, are better suited to different depths. Theoretically, some fish eyes can detect light as deep as 1000 m. At these depths of the aphotic zone, the only sources of light are organisms themselves, giving off light through chemical reactions in a process called bioluminescence.
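The tenfold-per-75-m figure implies the relation I(d) = I0 · 10^(−d/75); a quick check of the quoted percentages, assuming that rule of thumb holds uniformly with depth:

def light_fraction(depth_m, tenfold_every_m=75.0):
    """Fraction of surface light intensity remaining at depth_m,
    assuming a tenfold decrease every 75 m as stated above."""
    return 10.0 ** (-depth_m / tenfold_every_m)

for depth in (75, 150, 300):
    print(f"{depth:3d} m: {light_fraction(depth) * 100:.2f}% of surface light")
# 75 m -> 10.00%, 150 m -> 1.00%, 300 m -> 0.01%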
Fluorescence is simply defined as the absorption of electromagnetic radiation at one wavelength and its reemission at another, lower energy wavelength. Thus any type of fluorescence depends on the presence of external sources of light. Biologically functional fluorescence is found in the photic zone, where there is not only enough light to cause fluorescence, but enough light for other organisms to detect it.
The visual field in the photic zone is naturally blue, so colors of fluorescence can be detected as bright reds, oranges, yellows, and greens. Green is the most commonly found color in the marine spectrum, yellow the second most, orange the third, and red is the rarest. Fluorescence can occur in organisms in the aphotic zone as a byproduct of that same organism's bioluminescence. Some fluorescence in the aphotic zone is merely a byproduct of the organism's tissue biochemistry and does not have a functional purpose. However, whether fluorescence has functional and adaptive significance in the aphotic zone of the deep ocean is an active area of research.
Photic zone
Fish
Bony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.
Many fish that exhibit fluorescence, such as sharks, lizardfish, scorpionfish, wrasses, and flatfishes, also possess yellow intraocular filters. Yellow intraocular filters in the lenses and cornea of certain fishes function as long-pass filters. These filters enable the species to visualize and potentially exploit fluorescence, in order to enhance visual contrast and patterns that are unseen by other fishes and predators that lack this visual specialization. Fish that possess the necessary yellow intraocular filters for visualizing fluorescence may thus exploit light signals from members of their own species. Fluorescent patterning is especially prominent in cryptically patterned fishes possessing complex camouflage. Many of these lineages also possess yellow long-pass intraocular filters that could enable visualization of such patterns.
Another adaptive use of fluorescence is to generate orange and red light from the ambient blue light of the photic zone to aid vision. Red light can only be seen across short distances due to attenuation of red light wavelengths by water. Many fish species that fluoresce are small, group-living, or benthic/aphotic, and have conspicuous patterning. This patterning is caused by fluorescent tissue and is visible to other members of the species; however, the patterning is invisible at other visual spectra. These intraspecific fluorescent patterns also coincide with intra-species signaling. Patterns are present in ocular rings to indicate the directionality of an individual's gaze, and along fins to indicate the directionality of an individual's movement. Current research suspects that this red fluorescence is used for private communication between members of the same species. Due to the prominence of blue light at ocean depths, red light and light of longer wavelengths are muddled, and many predatory reef fish have little to no sensitivity for light at these wavelengths. Fish such as the fairy wrasse that have developed visual sensitivity to longer wavelengths are able to display red fluorescent signals that give a high contrast to the blue environment and are conspicuous to conspecifics at short range, yet are relatively invisible to other common fish that have reduced sensitivities to long wavelengths. Thus, fluorescence can be used as adaptive signaling and intra-species communication in reef fish.
Additionally, it is suggested that fluorescent tissues that surround an organism's eyes are used to convert blue light from the photic zone or green bioluminescence in the aphotic zone into red light to aid vision.
Sharks
A new fluorophore was described in two species of sharks, and was attributed to an undescribed group of brominated tryptophan-kynurenine small-molecule metabolites.
Coral
Fluorescence serves a wide variety of functions in coral. Fluorescent proteins in corals may contribute to photosynthesis by converting otherwise unusable wavelengths of light into ones for which the coral's symbiotic algae are able to conduct photosynthesis. Also, the proteins may fluctuate in number as more or less light becomes available as a means of photoacclimation. Similarly, these fluorescent proteins may possess antioxidant capacities to eliminate oxygen radicals produced by photosynthesis. Finally, through modulating photosynthesis, the fluorescent proteins may also serve as a means of regulating the activity of the coral's photosynthetic algal symbionts.
Cephalopods
Alloteuthis subulata and Loligo vulgaris, two types of nearly transparent squid, have fluorescent spots above their eyes. These spots reflect incident light, which may serve as a means of camouflage and also as a signal to other squids for schooling purposes.
Jellyfish
Another, well-studied example of fluorescence in the ocean is the hydrozoan Aequorea victoria. This jellyfish lives in the photic zone off the west coast of North America and was identified as a carrier of green fluorescent protein (GFP) by Osamu Shimomura. The gene for these green fluorescent proteins has been isolated and is scientifically significant because it is widely used in genetic studies to indicate the expression of other genes.
Mantis shrimp
Several species of mantis shrimp, which are stomatopod crustaceans, including Lysiosquillina glabriuscula, have yellow fluorescent markings along their antennal scales and carapace (shell) that males present during threat displays to predators and other males. The display involves raising the head and thorax, spreading the striking appendages and other maxillipeds, and extending the prominent, oval antennal scales laterally, which makes the animal appear larger and accentuates its yellow fluorescent markings. Furthermore, as depth increases, mantis shrimp fluorescence accounts for a greater part of the visible light available. During mating rituals, mantis shrimp actively fluoresce, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments.
Aphotic zone
Siphonophores
Siphonophorae is an order of marine animals from the phylum Hydrozoa that consist of a specialized medusoid and polyp zooid. Some siphonophores, including the genus Erenna that live in the aphotic zone between depths of 1600 m and 2300 m, exhibit yellow to red fluorescence in the photophores of their tentacle-like tentilla. This fluorescence occurs as a by-product of bioluminescence from these same photophores. The siphonophores exhibit the fluorescence in a flicking pattern that is used as a lure to attract prey.
Dragonfish
The predatory deep-sea dragonfish Malacosteus niger, the closely related genus Aristostomias and the species Pachystomias microdon use fluorescent red accessory pigments to convert the blue light emitted from their own bioluminescence to red light from suborbital photophores. This red luminescence is invisible to other animals, which allows these dragonfish extra light at dark ocean depths without attracting or signaling predators.
Terrestrial
Amphibians
Fluorescence is widespread among amphibians and has been documented in several families of frogs, salamanders and caecilians, but the extent of it varies greatly.
The polka-dot tree frog (Hypsiboas punctatus), widely found in South America, was unintentionally discovered to be the first fluorescent amphibian in 2017. The fluorescence was traced to a new compound found in the lymph and skin glands. The main fluorescent compound is Hyloin-L1, and it gives a blue-green glow when exposed to violet or ultraviolet light. The scientists behind the discovery suggested that the fluorescence can be used for communication. They speculated that fluorescence may be relatively widespread among frogs. Only a few months later, fluorescence was discovered in the closely related Hypsiboas atlanticus. Because the fluorescence is linked to secretions from skin glands, the frogs can also leave fluorescent markings on surfaces where they have been.
In 2019, two other frogs, the tiny pumpkin toadlet (Brachycephalus ephippium) and red pumpkin toadlet (B. pitanga) of southeastern Brazil, were found to have naturally fluorescent skeletons, which are visible through their skin when exposed to ultraviolet light. It was initially speculated that the fluorescence supplemented their already aposematic colours (they are toxic) or that it was related to mate choice (species recognition or determining fitness of a potential partner), but later studies indicate that the former explanation is unlikely, as predation attempts on the toadlets appear to be unaffected by the presence/absence of fluorescence.
In 2020 it was confirmed that green or yellow fluorescence is widespread not only in adult frogs that are exposed to blue or ultraviolet light, but also among tadpoles, salamanders and caecilians. The extent varies greatly depending on species; in some it is highly distinct and in others it is barely noticeable. It can be based on their skin pigmentation, their mucus or their bones.
Butterflies
Swallowtail (Papilio) butterflies have complex systems for emitting fluorescent light. Their wings contain pigment-infused crystals that provide directed fluorescent light. These crystals function to produce fluorescent light best when they absorb radiance from sky-blue light (wavelength about 420 nm). The wavelengths of light that the butterflies see the best correspond to the absorbance of the crystals in the butterfly's wings. This likely functions to enhance the capacity for signaling.
Parrots
Parrots have fluorescent plumage that may be used in mate signaling. A study using mate-choice experiments on budgerigars (Melopsittacus undulatus) found compelling support for fluorescent sexual signaling, with both males and females significantly preferring birds with the fluorescent experimental stimulus. This study suggests that the fluorescent plumage of parrots is not simply a by-product of pigmentation, but instead an adapted sexual signal. Considering the intricacies of the pathways that produce fluorescent pigments, there may be significant costs involved. Therefore, individuals exhibiting strong fluorescence may be honest indicators of high individual quality, since they can deal with the associated costs.
Arachnids
Spiders fluoresce under UV light and possess a huge diversity of fluorophores. Andrews, Reed, & Masta noted that spiders are the only known group in which fluorescence is "taxonomically widespread, variably expressed, evolutionarily labile, and probably under selection and potentially of ecological importance for intraspecific and interspecific signaling". They showed that fluorescence evolved multiple times across spider taxa, with novel fluorophores evolving during spider diversification.
In some spiders, ultraviolet cues are important for predator–prey interactions, intraspecific communication, and camouflage-matching with fluorescent flowers. Differing ecological contexts could favor inhibition or enhancement of fluorescence expression, depending upon whether fluorescence helps spiders be cryptic or makes them more conspicuous to predators. Therefore, natural selection could be acting on expression of fluorescence across spider species.
Scorpions are also fluorescent, in their case due to the presence of beta-carboline in their cuticles.
Platypus
In 2020 fluorescence was reported for several platypus specimens.
Plants
Many plants are fluorescent due to the presence of chlorophyll, which is probably the most widely distributed fluorescent molecule, producing red emission under a range of excitation wavelengths. This attribute of chlorophyll is commonly used by ecologists to measure photosynthetic efficiency.
The Mirabilis jalapa flower contains violet, fluorescent betacyanins and yellow, fluorescent betaxanthins. Under white light, parts of the flower containing only betaxanthins appear yellow, but in areas where both betaxanthins and betacyanins are present, the visible fluorescence of the flower is faded due to internal light-filtering mechanisms. Fluorescence was previously suggested to play a role in pollinator attraction; however, it was later found that the visual signal from fluorescence is negligible compared to the visual signal of light reflected by the flower.
Abiotic
Gemology, mineralogy and geology
In addition to the eponymous fluorspar, many gemstones and minerals may have a distinctive fluorescence or may fluoresce differently under short-wave ultraviolet, long-wave ultraviolet, visible light, or X-rays.
Many types of calcite and amber will fluoresce under shortwave UV, longwave UV and visible light. Rubies, emeralds, and diamonds exhibit red fluorescence under long-wave UV, blue and sometimes green light; diamonds also emit light under X-ray radiation.
Fluorescence in minerals is caused by a wide range of activators. In some cases, the concentration of the activator must be restricted to below a certain level, to prevent quenching of the fluorescent emission. Furthermore, the mineral must be free of impurities such as iron or copper, to prevent quenching of possible fluorescence. Divalent manganese, in concentrations of up to several percent, is responsible for the red or orange fluorescence of calcite, the green fluorescence of willemite, the yellow fluorescence of esperite, and the orange fluorescence of wollastonite and clinohedrite. Hexavalent uranium, in the form of the uranyl cation (UO2^2+), fluoresces at all concentrations in a yellow green, and is the cause of fluorescence of minerals such as autunite or andersonite, and, at low concentration, is the cause of the fluorescence of such materials as some samples of hyalite opal. Trivalent chromium at low concentration is the source of the red fluorescence of ruby. Divalent europium is the source of the blue fluorescence seen in the mineral fluorite. Trivalent lanthanides such as terbium and dysprosium are the principal activators of the creamy yellow fluorescence exhibited by the yttrofluorite variety of the mineral fluorite, and contribute to the orange fluorescence of zircon. Powellite (calcium molybdate) and scheelite (calcium tungstate) fluoresce intrinsically in yellow and blue, respectively. When present together in solid solution, energy is transferred from the higher-energy tungsten to the lower-energy molybdenum, such that fairly low levels of molybdenum are sufficient to cause a yellow emission for scheelite, instead of blue. Low-iron sphalerite (zinc sulfide) fluoresces and phosphoresces in a range of colors, influenced by the presence of various trace impurities.
Crude oil (petroleum) fluoresces in a range of colors, from dull-brown for heavy oils and tars through to bright-yellowish and bluish-white for very light oils and condensates. This phenomenon is used in oil exploration drilling to identify very small amounts of oil in drill cuttings and core samples.
Humic acids and fulvic acids produced by the degradation of organic matter in soils (humus) may also fluoresce because of the presence of aromatic cycles in their complex molecular structures. Humic substances dissolved in groundwater can be detected and characterized by spectrofluorimetry.
Organic liquids
Organic (carbon-based) solutions, such as anthracene or stilbene dissolved in benzene or toluene, fluoresce under ultraviolet or gamma-ray irradiation. The decay times of this fluorescence are on the order of nanoseconds, since the duration of the light depends on the lifetime of the excited states of the fluorescent material, in this case anthracene or stilbene.
Scintillation is defined as a flash of light produced in a transparent material by the passage of a particle (an electron, an alpha particle, an ion, or a high-energy photon). Stilbene and derivatives are used in scintillation counters to detect such particles. Stilbene is also one of the gain media used in dye lasers.
Atmosphere
Fluorescence is observed in the atmosphere when the air is under energetic electron bombardment. In cases such as the natural aurora, high-altitude nuclear explosions, and rocket-borne electron gun experiments, the molecules and ions formed have a fluorescent response to light.
Common materials that fluoresce
Vitamin B2 fluoresces yellow.
Tonic water fluoresces blue due to the presence of quinine.
Highlighter ink is often fluorescent due to the presence of pyranine.
Banknotes, postage stamps and credit cards often have fluorescent security features.
In novel technology
In August 2020 researchers reported the creation of the brightest fluorescent solid optical materials so far by enabling the transfer of properties of highly fluorescent dyes via spatial and electronic isolation of the dyes by mixing cationic dyes with anion-binding cyanostar macrocycles. According to a co-author these materials may have applications in areas such as solar energy harvesting, bioimaging, and lasers.
Applications
Lighting
The common fluorescent lamp relies on fluorescence. Inside the glass tube is a partial vacuum and a small amount of mercury. An electric discharge in the tube causes the mercury atoms to emit mostly ultraviolet light. The tube is lined with a coating of a fluorescent material, called the phosphor, which absorbs ultraviolet light and re-emits visible light. Fluorescent lighting is more energy-efficient than incandescent lighting elements. However, the uneven spectrum of traditional fluorescent lamps may cause certain colors to appear different from when illuminated by incandescent light or daylight. The mercury vapor emission spectrum is dominated by a short-wave UV line at 254 nm (which provides most of the energy to the phosphors), accompanied by visible light emission at 436 nm (blue), 546 nm (green) and 579 nm (yellow-orange). These three lines can be observed superimposed on the white continuum using a hand spectroscope, for light emitted by the usual white fluorescent tubes. These same visible lines, accompanied by the emission lines of trivalent europium and trivalent terbium, and further accompanied by the emission continuum of divalent europium in the blue region, comprise the more discontinuous light emission of the modern trichromatic phosphor systems used in many compact fluorescent lamps and traditional lamps where better color rendition is a goal.
Fluorescent lights were first available to the public at the 1939 New York World's Fair. Improvements since then have largely been better phosphors, longer life, more consistent internal discharge, and easier-to-use shapes (such as compact fluorescent lamps). Some high-intensity discharge (HID) lamps couple their even-greater electrical efficiency with phosphor enhancement for better color rendition.
White light-emitting diodes (LEDs) became available in the mid-1990s as LED lamps, in which blue light emitted from the semiconductor strikes phosphors deposited on the tiny chip. The combination of the blue light that continues through the phosphor and the green to red fluorescence from the phosphors produces a net emission of white light.
Glow sticks sometimes utilize fluorescent materials to absorb light from the chemiluminescent reaction and emit light of a different color.
Analytical chemistry
Many analytical procedures involve the use of a fluorometer, usually with a single exciting wavelength and single detection wavelength. Because of the sensitivity that the method affords, fluorescent molecule concentrations as low as 1 part per trillion can be measured.
Fluorescence in several wavelengths can be detected by an array detector, to detect compounds from HPLC flow. Also, TLC plates can be visualized if the compounds or a coloring reagent is fluorescent. Fluorescence is most effective when there is a larger ratio of atoms at lower energy levels in a Boltzmann distribution. There is, then, a higher probability of excitation and release of photons by lower-energy atoms, making analysis more efficient.
Spectroscopy
Usually the setup of a fluorescence assay involves a light source, which may emit many different wavelengths of light. In general, a single wavelength is required for proper analysis, so, in order to selectively filter the light, it is passed through an excitation monochromator, and then that chosen wavelength is passed through the sample cell. After absorption and re-emission of the energy, many wavelengths may emerge due to Stokes shift and various electron transitions. To separate and analyze them, the fluorescent radiation is passed through an emission monochromator, and observed selectively by a detector.
Lasers
Lasers most often use the fluorescence of certain materials as their active media, such as the red glow produced by a ruby (chromium sapphire), the infrared of titanium sapphire, or the unlimited range of colors produced by organic dyes. These materials normally fluoresce through a process called spontaneous emission, in which the light is emitted in all directions and often at many discrete spectral lines all at once. In many lasers, the fluorescent medium is "pumped" by exposing it to an intense light source, creating a population inversion, meaning that more of its atoms are in an excited state (high energy) than in the ground state (low energy). When this occurs, the spontaneous fluorescence can then induce the other atoms to emit their photons in the same direction and at the same wavelength, creating stimulated emission. When a portion of the spontaneous fluorescence is trapped between two mirrors, nearly all of the medium's fluorescence can be stimulated to emit along the same line, producing a laser beam.
Biochemistry and medicine
Fluorescence in the life sciences is used generally as a non-destructive way of tracking or analyzing biological molecules by means of fluorescent emission at a specific frequency where there is no background from the excitation light, as relatively few cellular components are naturally fluorescent (a property called intrinsic fluorescence or autofluorescence).
In fact, a protein or other component can be "labelled" with an extrinsic fluorophore, a fluorescent dye that can be a small molecule, protein, or quantum dot, and such labels find wide use in many biological applications.
The quantification of a dye is done with a spectrofluorometer and finds additional applications in:
Microscopy
Scanning the fluorescence intensity across a plane gives fluorescence microscopy of tissues, cells, or subcellular structures, which is accomplished by labeling an antibody with a fluorophore and allowing the antibody to find its target antigen within the sample. Labelling multiple antibodies with different fluorophores allows visualization of multiple targets within a single image (multiple channels). DNA microarrays are a variant of this.
Immunology: An antibody is first prepared by having a fluorescent chemical group attached, and the sites (e.g., on a microscopic specimen) where the antibody has bound can be seen, and even quantified, by the fluorescence.
FLIM (Fluorescence Lifetime Imaging Microscopy) can be used to detect certain bio-molecular interactions that manifest themselves by influencing fluorescence lifetimes.
Cell and molecular biology: detection of colocalization using fluorescence-labelled antibodies for selective detection of the antigens of interest using specialized software such as ImageJ.
Other techniques
FRET (Förster resonance energy transfer, also known as fluorescence resonance energy transfer) is used to study protein interactions, to detect specific nucleic acid sequences, and in biosensors, while fluorescence lifetime measurements (FLIM) can give an additional layer of information.
Biotechnology: biosensors using fluorescence are being studied as possible Fluorescent glucose biosensors.
Automated sequencing of DNA by the chain termination method; each of four different chain terminating bases has its own specific fluorescent tag. As the labelled DNA molecules are separated, the fluorescent label is excited by a UV source, and the identity of the base terminating the molecule is identified by the wavelength of the emitted light.
FACS (fluorescence-activated cell sorting). One of several important cell sorting techniques used in the separation of different cell lines (especially those isolated from animal tissues).
DNA detection: the compound ethidium bromide, in aqueous solution, has very little fluorescence, as it is quenched by water. Ethidium bromide's fluorescence is greatly enhanced after it binds to DNA, so this compound is very useful in visualising the location of DNA fragments in agarose gel electrophoresis. Intercalated ethidium is in a hydrophobic environment when it is between the base pairs of the DNA, protected from quenching by water which is excluded from the local environment of the intercalated ethidium. Ethidium bromide may be carcinogenic – an arguably safer alternative is the dye SYBR Green.
FIGS (Fluorescence image-guided surgery) is a medical imaging technique that uses fluorescence to detect properly labeled structures during surgery.
Intravascular fluorescence is a catheter-based medical imaging technique that uses fluorescence to detect high-risk features of atherosclerosis and unhealed vascular stent devices. Plaque autofluorescence has been used in a first-in-man study in coronary arteries in combination with optical coherence tomography. Molecular agents have also been used to detect specific features, such as stent fibrin accumulation and enzymatic activity related to artery inflammation.
SAFI (species altered fluorescence imaging) is an imaging technique in electrokinetics and microfluidics. It uses non-electromigrating dyes whose fluorescence is easily quenched by migrating chemical species of interest. The dye(s) are usually seeded everywhere in the flow, and differential quenching of their fluorescence by analytes is directly observed.
Fluorescence-based assays for screening toxic chemicals. The optical assays consist of a mixture of environment-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns. This approach can reduce the need for laboratory animals in biomedical research and pharmaceutical industry.
Bone-margin detection: Alizarin-stained specimens and certain fossils can be lit by fluorescent lights to view anatomical structures, including bone margins.
Forensics
Fingerprints can be visualized with fluorescent compounds such as ninhydrin or DFO (1,8-Diazafluoren-9-one). Blood and other substances are sometimes detected by fluorescent reagents, like fluorescein. Fibers, and other materials that may be encountered in forensics or with a relationship to various collectibles, are sometimes fluorescent.
Non-destructive testing
Fluorescent penetrant inspection is used to find cracks and other defects on the surface of a part. Dye tracing, using fluorescent dyes, is used to find leaks in liquid and gas plumbing systems.
Signage
Fluorescent colors are frequently used in signage, particularly road signs. Fluorescent colors are generally recognizable at longer ranges than their non-fluorescent counterparts, with fluorescent orange being particularly noticeable. This property has led to its frequent use in safety signs and labels.
Optical brighteners
Fluorescent compounds are often used to enhance the appearance of fabric and paper, causing a "whitening" effect. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white. Optical brighteners are used in laundry detergents, high brightness paper, cosmetics, high-visibility clothing and more.
See also
Absorption-re-emission atomic line filters use the phenomenon of fluorescence to filter light extremely effectively.
Black light
Blacklight paint
Fiber photometry
Fluorescence-activating and absorption-shifting tag
Fluorescence correlation spectroscopy
Fluorescence image-guided surgery
Fluorescence in plants
Fluorescence spectroscopy
Fluorescent lamp
Fluorescent Multilayer Disc
Fluorometer
High-visibility clothing
Integrated fluorometer
Intrinsic DNA fluorescence
Laser-induced fluorescence
List of light sources
Microbial art, using fluorescent bacteria
Mössbauer effect, resonant fluorescence of gamma rays
Organic light-emitting diodes can be fluorescent
Phosphorescence
Phosphor thermometry, the use of phosphorescence to measure temperature.
Spectroscopy
Two-photon absorption
Vibronic spectroscopy
X-ray fluorescence
References
Further reading
External links
Fluorophores.org, the database of fluorescent dyes
FSU.edu, Basic Concepts in Fluorescence
"A nano-history of fluorescence" lecture by David Jameson
Excitation and emission spectra of various fluorescent dyes
Database of fluorescent minerals with pictures, activators and spectra (fluomin.org)
"Biofluorescent Night Dive – Dahab/Red Sea (Egypt), Masbat Bay/Mashraba, "Roman Rock"". YouTube. 9 October 2012.
Steffen O. Beyer. "FluoPedia.org: Publications". fluopedia.org.
Steffen O. Beyer. "FluoMedia.org: Science". fluomedia.org.
Courtney Whitcher. Finding Fluorescence - backyard participation project to identify new examples of fluorescence
Dyes
Molecular biology
Radiochemistry | Fluorescence | [
"Chemistry",
"Biology"
] | 10,435 | [
"Luminescence",
"Fluorescence",
"Radiochemistry",
"Molecular biology",
"Biochemistry",
"Radioactivity"
] |
11,556 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20arithmetic | In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example,
1200 = 2^4 × 3 × 5^2 = (2 × 2 × 2 × 2) × 3 × (5 × 5)
The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product.
The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique
(for example, 12 = 2 × 6 = 3 × 4).
This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, 2 = 2 × 1 = 2 × 1 × 1 = ...
The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers. This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof.
History
The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements.
(In modern terminology: if a prime p divides the product ab, then p divides either a or b or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic.
(In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent.
Proposition 32 is derived from proposition 31, and proves that the decomposition is possible.
(In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case.
While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step and stated for the first time the fundamental theorem of arithmetic.
Article 16 of Gauss's Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.
Applications
Canonical representation of a positive integer
Every positive integer can be represented in exactly one way as a product of prime powers
n = p1^n1 × p2^n2 × ... × pk^nk,
where p1 < p2 < ... < pk are primes and the ni are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0).
This representation is called the canonical representation of n, or the standard form of n. For example,
999 = 3^3 × 37,
1000 = 2^3 × 5^3,
1001 = 7 × 11 × 13.
Factors p^0 = 1 may be inserted without changing the value of n (for example, 1000 = 2^3 × 3^0 × 5^3). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, as
n = 2^n1 × 3^n2 × 5^n3 × 7^n4 × ...,
where a finite number of the ni are positive integers, and the others are zero.
Allowing negative exponents provides a canonical form for positive rational numbers.
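A minimal trial-division sketch of computing the canonical representation (illustrative only; factoring large integers requires far better algorithms):

def canonical_representation(n):
    """Prime factorization of a positive integer as {prime: exponent}.
    For n = 1 this returns {}, matching the empty-product convention."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:          # divide out each prime factor completely
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                      # any remainder greater than 1 is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(canonical_representation(999))   # {3: 3, 37: 1}
print(canonical_representation(1000))  # {2: 3, 5: 3}
print(canonical_representation(1001))  # {7: 1, 11: 1, 13: 1}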
Arithmetic operations
The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a and b themselves. Writing a = 2^a1 × 3^a2 × 5^a3 × ... and b = 2^b1 × 3^b2 × 5^b3 × ... in the infinite-product form above:
a × b = 2^(a1+b1) × 3^(a2+b2) × 5^(a3+b3) × ...,
gcd(a, b) = 2^min(a1,b1) × 3^min(a2,b2) × 5^min(a3,b3) × ...,
lcm(a, b) = 2^max(a1,b1) × 3^max(a2,b2) × 5^max(a3,b3) × ...
However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice.
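A sketch of those exponent-wise formulas, reusing the canonical_representation helper from the sketch above (the same illustrative assumptions apply):

from math import prod

def gcd_lcm(fact_a, fact_b):
    """GCD and LCM from canonical representations {prime: exponent}:
    the GCD takes the minimum exponent of each prime, the LCM the maximum."""
    primes = set(fact_a) | set(fact_b)
    gcd = prod(p ** min(fact_a.get(p, 0), fact_b.get(p, 0)) for p in primes)
    lcm = prod(p ** max(fact_a.get(p, 0), fact_b.get(p, 0)) for p in primes)
    return gcd, lcm

fa = canonical_representation(1000)  # {2: 3, 5: 3}
fb = canonical_representation(999)   # {3: 3, 37: 1}
print(gcd_lcm(fa, fb))  # (1, 999000): 1000 and 999 are coprime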
Arithmetic functions
Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.
Proof
The proof uses Euclid's lemma (Elements VII, 30): If a prime divides the product of two integers, then it must divide at least one of these integers.
Existence
It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = ab, and 1 < a ≤ b < n. By the induction hypothesis, a and b are products of primes. But then n = ab is a product of primes.
Uniqueness
Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p1 p2 ... pj = q1 q2 ... qk, where each pi and qi is prime. We see that p1 divides q1 q2 ... qk, so p1 divides some qi by Euclid's lemma. Without loss of generality, say p1 divides q1. Since p1 and q1 are both prime, it follows that p1 = q1. Returning to our factorizations of n, we may cancel these two factors to conclude that p2 ... pj = q2 ... qk. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n.
Uniqueness without Euclid's lemma
The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. The proof that follows is inspired by Euclid's original version of the Euclidean algorithm.
Assume that s is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that s, if it exists, must be a composite number greater than 1. Now, say
s = p1 × p2 × ... × pm = q1 × q2 × ... × qn.
Every pi must be distinct from every qj. Otherwise, if say pi = qj, then there would exist some positive integer s/pi that is smaller than s and has two distinct prime factorizations. One may also suppose that p1 < q1, by exchanging the two factorizations, if needed.
Setting P = p2 × ... × pm and Q = q2 × ... × qn, one has s = p1P = q1Q.
Also, since p1 < q1, one has Q < P.
It then follows that
s − p1Q = (q1 − p1)Q = p1(P − Q) < s.
As the positive integers less than s have been supposed to have a unique prime factorization, p1 must occur in the factorization of either q1 − p1 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and p1 differs from every qj. The former case is also impossible, as, if p1 is a divisor of q1 − p1, it must also be a divisor of q1, which is impossible as p1 and q1 are distinct primes.
Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which would factor uniquely, or a composite that also factors uniquely into primes, or in the case of the integer 1, not factor into any prime.
Generalizations
The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by Z[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that (except for order), the composites have unique factorization as a product of primes (up to the order and multiplication by units).
Similarly, in 1844 while working on cubic reciprocity, Eisenstein introduced the ring Z[ω], where ω is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units ±1, ±ω, ±ω^2 and that it has unique factorization.
However, it was also discovered that unique factorization does not always hold. An example is given by Z[√−5]. In this ring one has
6 = 2 × 3 = (1 + √−5) × (1 − √−5).
Examples like this caused the notion of "prime" to be modified. In Z[√−5] it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in Z[√−5] (only divisible by itself or a unit) but not prime in Z[√−5] (if it divides a product it must divide one of the factors). The mention of Z[√−5] is required because 2 is prime and irreducible in Z. Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers Z every irreducible is prime". This is also true in Z[i] and Z[ω], but not in Z[√−5].
The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.
In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.
There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.
Any commutative Möbius monoid satisfies a unique factorization theorem and thus possesses arithmetical properties similar to those of the multiplicative semigroup of positive integers. The fundamental theorem of arithmetic is, in fact, a special case of the unique factorization theorem in commutative Möbius monoids.
See also
Integer factorization
List of theorems called fundamental
Prime signature, a characterization of how many primes divide a given number
Notes
References
The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § n". Footnotes referencing the Disquisitiones Arithmeticae are of the form "Gauss, DA, Art. n".
These are in Gauss's Werke, Vol II, pp. 65–92 and 93–148; German translations are pp. 511–533 and 534–586 of the German edition of the Disquisitiones.
External links
Why isn’t the fundamental theorem of arithmetic obvious?
GCD and the Fundamental Theorem of Arithmetic at cut-the-knot.
PlanetMath: Proof of fundamental theorem of arithmetic
Fermat's Last Theorem Blog: Unique Factorization, a blog that covers the history of Fermat's Last Theorem from Diophantus of Alexandria to the proof by Andrew Wiles.
"Fundamental Theorem of Arithmetic" by Hector Zenil, Wolfram Demonstrations Project, 2007.
Theorems about prime numbers
Articles containing proofs
Uniqueness theorems
factorization
de:Primfaktorzerlegung#Fundamentalsatz der Arithmetik | Fundamental theorem of arithmetic | [
"Mathematics"
] | 2,370 | [
"Theorems about prime numbers",
"Theorems in number theory",
"Arithmetic",
"Mathematical problems",
"Articles containing proofs",
"Factorization",
"Mathematical theorems",
"Uniqueness theorems"
] |
11,579 | https://en.wikipedia.org/wiki/Fermi%20paradox | The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. Those affirming the paradox generally conclude that if the conditions required for life to arise from non-living matter are as permissive as the available evidence on Earth indicates, then extraterrestrial life would be sufficiently common such that it would be implausible for it not to have been detected yet.
The quandary takes its name from the Italian-American physicist Enrico Fermi: in the summer of 1950, Fermi was engaged in casual conversation about contemporary UFO reports and the possibility of faster-than-light travel with fellow physicists Edward Teller, Herbert York, and Emil Konopinski while the group was walking to lunch. The conversation moved on to other topics, until Fermi later blurted out during lunch, "But where is everybody?" (the exact quote is uncertain).
There have been many attempts to resolve the Fermi paradox, such as suggesting that intelligent extraterrestrial beings are extremely rare, that the lifetime of such civilizations is short, or that they exist but (for various reasons) humans see no evidence.
Chain of reasoning
The following are some of the facts and hypotheses that together serve to highlight the apparent contradiction:
There are billions of stars in the Milky Way similar to the Sun.
With high probability, some of these stars have Earth-like planets in a circumstellar habitable zone.
Many of these stars, and hence their planets, are much older than the Sun. If Earth-like planets are typical, some may have developed intelligent life long ago.
Some of these civilizations may have developed interstellar travel, a step humans are investigating now.
Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years.
Since many of the Sun-like stars are billions of years older than the Sun, the Earth should have already been visited by extraterrestrial civilizations, or at least their probes.
However, there is no convincing evidence that this has happened.
History
Fermi was not the first to ask the question. An earlier implicit mention was by Konstantin Tsiolkovsky in an unpublished manuscript from 1933. He noted "people deny the presence of intelligent beings on the planets of the universe" because "(i) if such beings exist they would have visited Earth, and (ii) if such civilizations existed then they would have given us some sign of their existence". This was not a paradox for others, who took this to imply the absence of extraterrestrial life. But it was one for him, since he believed in extraterrestrial life and the possibility of space travel. Therefore, he proposed what is now known as the zoo hypothesis and speculated that mankind is not yet ready for higher beings to contact us. In turn, Tsiolkovsky himself was not the first to discover the paradox, as shown by his reference to other people's reasons for not accepting the premise that extraterrestrial civilizations exist.
In 1975, Michael H. Hart published a detailed examination of the paradox, one of the first to do so. He argued that if intelligent extraterrestrials exist, and are capable of space travel, then the galaxy could have been colonized in a time much less than that of the age of the Earth. However, there is no observable evidence they have been here, which Hart called "Fact A".
Other names closely related to Fermi's question ("Where are they?") include the Great Silence, and silentium universi (Latin for "silence of the universe"), though these only refer to one portion of the Fermi paradox, that humans see no evidence of other civilizations.
Original conversations
In the summer of 1950 at Los Alamos National Laboratory in New Mexico, Enrico Fermi and co-workers Emil Konopinski, Edward Teller, and Herbert York had one or several lunchtime conversations. In one, Fermi suddenly blurted out, "Where is everybody?" (Teller's letter), or "Don't you ever wonder where everybody is?" (York's letter), or "But where is everybody?" (Konopinski's letter). Teller wrote, "The result of his question was general laughter because of the strange fact that, in spite of Fermi's question coming out of the blue, everybody around the table seemed to understand at once that he was talking about extraterrestrial life."
In 1984, York wrote that Fermi "followed up with a series of calculations on the probability of earthlike planets, the probability of life given an earth, the probability of humans given life, the likely rise and duration of high technology, and so on. He concluded on the basis of such calculations that we ought to have been visited long ago and many times over." Teller remembers that not much came of this conversation "except perhaps a statement that the distances to the next location of living beings may be very great and that, indeed, as far as our galaxy is concerned, we are living somewhere in the sticks, far removed from the metropolitan area of the galactic center."
Fermi died of cancer in 1954. However, in letters to the three surviving men decades later in 1984, Dr. Eric Jones of Los Alamos was able to partially put the original conversation back together. He informed each of the men that he wished to include a reasonably accurate version or composite in the written proceedings he was putting together for a previously held conference entitled "Interstellar Migration and the Human Experience". Jones first sent a letter to Edward Teller which included a secondhand account from Hans Mark. Teller responded, and then Jones sent Teller's letter to Herbert York. York responded, and finally, Jones sent both Teller's and York's letters to Emil Konopinski who also responded. Furthermore, Konopinski was able to later identify a cartoon which Jones found as the one involved in the conversation and thereby help to settle the time period as being the summer of 1950.
Basis
The Fermi paradox is a conflict between the argument that scale and probability seem to favor intelligent life being common in the universe, and the total lack of evidence of intelligent life having ever arisen anywhere other than on Earth.
The first aspect of the Fermi paradox is a function of the scale or the large numbers involved: there are an estimated 200–400 billion stars in the Milky Way (2–4 × 10^11) and 70 sextillion (7 × 10^22) stars in the observable universe. Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there might still be a great number of extant civilizations in the observable universe, and if the percentage were high enough, a significant number in the Milky Way alone. This assumes the mediocrity principle, by which Earth is a typical planet.
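As a back-of-the-envelope illustration of the scale argument (a sketch under the mediocrity assumption; the one-in-a-billion rate below is an arbitrary placeholder, not an estimate from the literature):

```python
# Illustrative only: even a tiny per-star probability of intelligent life
# yields large expected counts, given the sheer number of stars.
stars_milky_way = 4e11        # upper estimate of stars in the Milky Way
stars_observable = 7e22       # stars in the observable universe
p_civilization = 1e-9         # arbitrary placeholder probability per star

print(stars_milky_way * p_civilization)   # ~400 expected in the Milky Way
print(stars_observable * p_civilization)  # ~7e13 expected in the observable universe
```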
The second aspect of the Fermi paradox is the argument of probability: given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems possible that at least some civilizations would be technologically advanced, seek out new resources in space, and colonize their star system and, subsequently, surrounding star systems. Since there is no significant evidence on Earth, or elsewhere in the known universe, of other intelligent life after 13.8 billion years of the universe's history, there is a conflict requiring a resolution. Some examples of possible resolutions are that intelligent life is rarer than is thought, that assumptions about the general development or behavior of intelligent species are flawed, or, more radically, that current scientific understanding of the nature of the universe itself is quite incomplete.
The Fermi paradox can be asked in two ways. The first is, "Why are no aliens or their artifacts found on Earth, or in the Solar System?". If interstellar travel is possible, even the "slow" kind nearly within the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the galaxy. This is relatively brief on a geological scale, let alone a cosmological one. Since there are many stars older than the Sun, and since intelligent life might have evolved earlier elsewhere, the question then becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable to all alien civilizations, large-scale exploration of the galaxy could be possible by probes. These might leave detectable artifacts in the Solar System, such as old probes or evidence of mining activity, but none of these have been observed.
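A rough sketch of where such figures come from, assuming a galactic diameter of roughly 100,000 light-years and effective expansion speeds (folding in travel, settlement, and launch delays) of 0.2–2% of light speed; the speeds are chosen here only to reproduce the quoted bounds:

```python
# Crude crossing-time estimate: time ~ distance / effective expansion speed.
galaxy_diameter_ly = 1e5                 # approximate diameter of the Milky Way

for speed_fraction_c in (0.02, 0.002):   # assumed effective speeds (fraction of c)
    years = galaxy_diameter_ly / speed_fraction_c
    print(f"{speed_fraction_c:.3f} c -> {years:.0e} years")
# 0.020 c -> 5e+06 years; 0.002 c -> 5e+07 years
```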
The second form of the question is "Why are there no signs of intelligence elsewhere in the universe?". This version does not assume interstellar travel, but includes other galaxies as well. For distant galaxies, travel times may well explain the lack of alien visits to Earth, but a sufficiently advanced civilization could potentially be observable over a significant fraction of the size of the observable universe. Even if such civilizations are rare, the scale argument indicates they should exist somewhere at some point during the history of the universe, and since they could be detected from far away over a considerable period of time, many more potential sites for their origin are within range of human observation. It is unknown whether the paradox is stronger for the Milky Way galaxy or for the universe as a whole.
Drake equation
The theories and principles in the Drake equation are closely related to the Fermi paradox. The equation was formulated by Frank Drake in 1961 in an attempt to find a systematic means to evaluate the numerous probabilities involved in the existence of alien life. The equation is presented as follows:
N = R* · fp · ne · fl · fi · fc · L
where N is the number of technologically advanced civilizations in the Milky Way galaxy, and is asserted to be the product of
R*, the rate of formation of stars in the galaxy;
fp, the fraction of those stars with planetary systems;
ne, the number of planets, per solar system, with an environment suitable for organic life;
fl, the fraction of those suitable planets whereon organic life appears;
fi, the fraction of life-bearing planets whereon intelligent life appears;
fc, the fraction of civilizations that reach the technological level whereby detectable signals may be dispatched; and
L, the length of time that those civilizations dispatch their signals.
The fundamental problem is that the last four terms (fl, fi, fc, and L) are entirely unknown, rendering statistical estimates impossible.
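As code, the equation is a simple product; the sample values below are placeholders chosen purely for illustration, precisely because the last four terms are unknown:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder inputs only: the last four terms are the unknown ones.
print(drake(R_star=1.5, f_p=0.9, n_e=0.5, f_l=0.1, f_i=0.01, f_c=0.1, L=1000))
```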
The Drake equation has been used by both optimists and pessimists, with wildly differing results. The first scientific meeting on the search for extraterrestrial intelligence (SETI), which had 10 attendees including Frank Drake and Carl Sagan, speculated that the number of civilizations was roughly between 1,000 and 100,000,000 civilizations in the Milky Way galaxy. Conversely, Frank Tipler and John D. Barrow used pessimistic numbers and speculated that the average number of civilizations in a galaxy is much less than one. Almost all arguments involving the Drake equation suffer from the overconfidence effect, a common error of probabilistic reasoning about low-probability events, by guessing specific numbers for likelihoods of events whose mechanism is not yet understood, such as the likelihood of abiogenesis on an Earth-like planet, with current likelihood estimates varying over many hundreds of orders of magnitude. An analysis that takes into account some of the uncertainty associated with this lack of understanding has been carried out by Anders Sandberg, Eric Drexler and Toby Ord, and suggests "a substantial ex ante probability of there being no other intelligent life in our observable universe".
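The style of analysis used by Sandberg, Drexler and Ord can be gestured at with a Monte Carlo sketch: each poorly known term is drawn from a distribution spanning orders of magnitude rather than assigned a point value, and the output is a distribution over N. The ranges below are invented for illustration and are much narrower than those in their paper:

```python
import random

def loguniform(lo_exp, hi_exp):
    """Sample 10**x with x drawn uniformly, spreading mass over orders of magnitude."""
    return 10 ** random.uniform(lo_exp, hi_exp)

samples = []
for _ in range(100_000):
    n = (1.5 * 0.9 * 0.5          # R*, fp, ne: comparatively well constrained
         * loguniform(-6, 0)      # fl: probability of abiogenesis (hugely uncertain)
         * loguniform(-3, 0)      # fi: intelligence
         * loguniform(-2, 0)      # fc: detectable technology
         * loguniform(2, 6))      # L: signalling lifetime in years
    samples.append(n)

print(sum(n < 1 for n in samples) / len(samples))  # fraction of draws with N < 1
```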
Great Filter
The Great Filter, a concept introduced by Robin Hanson in 1996, represents whatever natural phenomena would make it unlikely for life to evolve from inanimate matter into an advanced civilization. The most commonly agreed-upon low probability event is abiogenesis: a gradual process of increasing complexity of the first self-replicating molecules by a randomly occurring chemical process. Other proposed great filters are the emergence of eukaryotic cells or of meiosis, or some of the steps involved in the evolution of a brain capable of complex logical deductions.
Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources. Paleobiologist Olev Vinn has suggested that the great filter may have universal biological roots related to evolutionary animal behavior.
Grabby Aliens
In 2021, the concepts of quiet, loud, and grabby aliens were introduced by Hanson et al. The possible "loud" aliens expand rapidly in a highly detectable way throughout the universe and endure, while "quiet" aliens are hard or impossible to detect and eventually disappear. "Grabby" aliens prevent the emergence of other civilizations in their sphere of influence, which expands at a rate near the speed of light. The authors argue that if loud civilizations are rare, as they appear to be, then quiet civilizations are also rare. The paper suggests that humanity's current stage of technological development is relatively early in the potential timeline of intelligent life in the universe, as loud aliens would otherwise be observable by astronomers.
Earlier in 2013, Anders Sandberg and Stuart Armstrong examined the potential for intelligent life to spread intergalactically throughout the universe and the implications for the Fermi paradox. Their study suggests that with sufficient energy, intelligent civilizations could potentially colonize the entire Milky Way galaxy within a few million years, and spread to nearby galaxies in a timespan that is cosmologically brief. They conclude that intergalactic colonization appears possible with the resources of a single solar system and that intergalactic colonization is of comparable difficulty to interstellar colonization, and therefore the Fermi paradox is much sharper than commonly thought.
Empirical evidence
There are two parts of the Fermi paradox that rely on empirical evidence—that there are many potentially habitable planets, and that humans see no evidence of life. The first point, that many suitable planets exist, was an assumption in Fermi's time but is now supported by the discovery that exoplanets are common. Current models predict billions of habitable worlds in the Milky Way.
The second part of the paradox, that humans see no evidence of extraterrestrial life, is also an active field of scientific research. This includes both efforts to find any indication of life, and efforts specifically directed to finding intelligent life. These searches have been made since 1960, and several are ongoing.
Although astronomers do not usually search for extraterrestrials, they have observed phenomena that they could not immediately explain without positing an intelligent civilization as the source. For example, pulsars, when first discovered in 1967, were called little green men (LGM) because of the precise repetition of their pulses. In all cases, explanations with no need for intelligent life have been found for such observations, but the possibility of discovery remains. Proposed examples include asteroid mining that would change the appearance of debris disks around stars, or spectral lines from nuclear waste disposal in stars.
Explanations based on technosignatures, such as radio communications, have been presented.
Electromagnetic emissions
Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for technological species, theoretically creating effects that might be detected over interstellar distances. Careful searching for non-natural radio emissions from space may lead to the detection of alien civilizations. Sensitive alien observers of the Solar System, for example, would note unusually intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization. Such signals could be either "accidental" by-products of a civilization, or deliberate attempts to communicate, such as the Arecibo message. It is unclear whether "leakage", as opposed to a deliberate beacon, could be detected by an extraterrestrial civilization. The most sensitive radio telescopes on Earth would not be able to detect non-directional radio signals (such as broadband) even at a fraction of a light-year away, but other civilizations could hypothetically have much better equipment.
A number of astronomers and observatories have attempted and are attempting to detect such evidence, mostly through SETI organizations such as the SETI Institute and Breakthrough Listen. Several decades of SETI analysis have not revealed any unusually bright or meaningfully repetitive radio emissions.
Direct planetary observation
Exoplanet detection and classification is a very active sub-discipline in astronomy; the first candidate terrestrial planet discovered within a star's habitable zone was found in 2007. New refinements in exoplanet detection methods, and use of existing methods from space (such as the Kepler and TESS missions) are starting to detect and characterize Earth-size planets, to determine whether they are within the habitable zones of their stars. Such observational refinements may allow for a better estimation of how common these potentially habitable worlds are.
Conjectures about interstellar probes
The Hart–Tipler conjecture is a form of contraposition which states that because no interstellar probes have been detected, there likely is no other intelligent life in the universe, as such life should be expected to eventually create and launch such probes. Self-replicating probes could exhaustively explore a galaxy the size of the Milky Way in as little as a million years. If even a single civilization in the Milky Way attempted this, such probes could spread throughout the entire galaxy. Another speculation for contact with an alien probe—one that would be trying to find human beings—is an alien Bracewell probe. Such a hypothetical device would be an autonomous space probe whose purpose is to seek out and communicate with alien civilizations (as opposed to von Neumann probes, which are usually described as purely exploratory). These were proposed as an alternative to carrying on a slow speed-of-light dialogue between vastly distant neighbors. Rather than contending with the long delays a radio dialogue would suffer, a probe housing an artificial intelligence would seek out an alien civilization to carry on a close-range communication with the discovered civilization. The findings of such a probe would still have to be transmitted to the home civilization at light speed, but an information-gathering dialogue could be conducted in real time.
Direct exploration of the Solar System has yielded no evidence indicating a visit by aliens or their probes. Detailed exploration of areas of the Solar System where resources would be plentiful may yet produce evidence of alien exploration, though the entirety of the Solar System is vast and difficult to investigate. Attempts to signal, attract, or activate hypothetical Bracewell probes in Earth's vicinity have not succeeded.
Searches for stellar-scale artifacts
In 1959, Freeman Dyson observed that every developing human civilization constantly increases its energy consumption, and he conjectured that a civilization might try to harness a large part of the energy produced by a star. He proposed a hypothetical "Dyson sphere" as a possible means: a shell or cloud of objects enclosing a star to absorb and utilize as much radiant energy as possible. Such a feat of astroengineering would drastically alter the observed spectrum of the star involved, changing it at least partly from the normal emission lines of a natural stellar atmosphere to those of black-body radiation, probably with a peak in the infrared. Dyson speculated that advanced alien civilizations might be detected by examining the spectra of stars and searching for such an altered spectrum.
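The predicted infrared peak follows from Wien's displacement law: a shell re-radiating its star's output as waste heat at a roughly habitable temperature would peak deep in the infrared. A minimal sketch, assuming a 300 K shell:

```python
# Wien's displacement law: lambda_peak = b / T.
b = 2.898e-3  # Wien's displacement constant, metre-kelvins

for label, temperature_k in (("Sun-like photosphere", 5778),
                             ("300 K Dyson shell", 300)):
    peak_um = b / temperature_k * 1e6
    print(f"{label}: peak emission near {peak_um:.1f} micrometres")
# ~0.5 um for the star itself vs ~9.7 um for the shell's waste heat
```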
There have been some attempts to find evidence of the existence of Dyson spheres that would alter the spectra of their core stars. Direct observation of thousands of galaxies has shown no explicit evidence of artificial construction or modifications. In October 2015, there was some speculation that a dimming of light from star KIC 8462852, observed by the Kepler space telescope, could have been a result of Dyson sphere construction. However, in 2018, observations determined that the amount of dimming varied by the frequency of the light, pointing to dust, rather than an opaque object such as a Dyson sphere, as the cause of the dimming.
Hypothetical explanations for the paradox
Rarity of intelligent life
Extraterrestrial life is rare or non-existent
Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth. Under this assumption, called the rare Earth hypothesis, a rejection of the mediocrity principle, complex multicellular life is regarded as exceedingly unusual.
The rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances: a galactic habitable zone; a star and planet(s) with the requisite conditions, such as a sufficiently long-lasting continuous habitable zone; the advantage of a giant guardian like Jupiter and of a large moon; conditions needed to ensure the planet has a magnetosphere and plate tectonics; the right chemistry of the lithosphere, atmosphere, and oceans; and the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts. Perhaps most importantly, advanced life needs whatever it was that led to the transition of (some) prokaryotic cells to eukaryotic cells, sexual reproduction, and the Cambrian explosion.
In his book Wonderful Life (1989), Stephen Jay Gould suggested that if the "tape of life" were rewound to the time of the Cambrian explosion, and one or two tweaks made, human beings most probably never would have evolved. Other thinkers such as Fontana, Buss, and Kauffman have written about the self-organizing properties of life.
Extraterrestrial intelligence is rare or non-existent
It is possible that even if complex life is common, intelligence (and consequently civilizations) is not. While there are remote sensing techniques that could perhaps detect life-bearing planets without relying on the signs of technology, none of them have any ability to tell if any detected life is intelligent. This is sometimes referred to as the "algae vs. alumnae" problem.
Charles Lineweaver states that when considering any extreme trait in an animal, intermediate stages do not necessarily produce "inevitable" outcomes. For example, large brains are no more "inevitable", or convergent, than are the long noses of animals such as aardvarks and elephants. As he points out, "dolphins have had ~20 million years to build a radio telescope and have not done so". In addition, Rebecca Boyle points out that of all the species that have ever evolved in the history of life on the planet Earth, only one—human beings and only in the beginning stages—has ever become space-faring.
Periodic extinction by natural events
Newly emerged life might commonly die out due to runaway heating or cooling on its fledgling planet. On Earth, there have been numerous major extinction events that destroyed the majority of complex species alive at the time; the extinction of the non-avian dinosaurs is the best known example. These are thought to have been caused by events such as impact from a large meteorite, massive volcanic eruptions, or astronomical events such as gamma-ray bursts. It may be the case that such extinction events are common throughout the universe and periodically destroy intelligent life, or at least its civilizations, before the species is able to develop the technology to communicate with other intelligent species.
However, the chances of extinction by natural events may be very low on the scale of a civilization's lifetime. Based on an analysis of impact craters on Earth and the Moon, the average interval between impacts large enough to cause global consequences (like the Chicxulub impact) is estimated to be around 100 million years.
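Treating such impacts as a Poisson process with the quoted 100-million-year mean interval makes the point concrete; the civilization lifespans below are illustrative assumptions:

```python
import math

mean_interval_yr = 1e8  # quoted average between globally catastrophic impacts

for lifetime_yr in (1e4, 1e6):  # illustrative civilization lifespans
    p_hit = 1 - math.exp(-lifetime_yr / mean_interval_yr)
    print(f"{lifetime_yr:.0e} years: P(at least one impact) = {p_hit:.2e}")
# ~1e-4 over ten thousand years; ~1e-2 over a million years
```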
Evolutionary explanations
Intelligent alien species have not developed advanced technologies
It may be that while alien species with intelligence exist, they are primitive or have not reached the level of technological advancement necessary to communicate. Along with non-intelligent life, such civilizations would also be very difficult to detect. Even reaching the nearest stars with conventional rockets would take hundreds of thousands of years.
To skeptics, the fact that in the history of life on the Earth, only one species has developed a civilization to the point of being capable of spaceflight and radio technology, lends more credence to the idea that technologically advanced civilizations are rare in the universe.
Amedeo Balbi and Adam Frank propose the concept of an "oxygen bottleneck" for the emergence of technospheres. The "oxygen bottleneck" refers to the critical level of atmospheric oxygen necessary for fire and combustion. Earth's current atmospheric oxygen concentration is about 21%, but it has been much lower in the past and may likewise be lower on many exoplanets. The authors argue that while the threshold of oxygen required for the existence of complex life and ecosystems is much lower, technological advancement, particularly that reliant on combustion, such as metal smelting and energy production, requires higher oxygen concentrations of around 18% or more. Thus, the presence of high levels of oxygen in a planet's atmosphere is not only a potential biosignature but also a critical factor in the emergence of detectable technological civilizations.
Another hypothesis in this category is the "Water World hypothesis". According to author and scientist David Brin: "it turns out that our Earth skates the very inner edge of our sun's continuously habitable—or 'Goldilocks'—zone. And Earth may be anomalous. It may be that because we are so close to our sun, we have an anomalously oxygen-rich atmosphere, and we have anomalously little ocean for a water world. In other words, 32 percent continental mass may be high among water worlds..." Brin continues, "In which case, the evolution of creatures like us, with hands and fire and all that sort of thing, may be rare in the galaxy. In which case, when we do build starships and head out there, perhaps we'll find lots and lots of life worlds, but they're all like Polynesia. We'll find lots and lots of intelligent lifeforms out there, but they're all dolphins, whales, squids, who could never build their own starships. What a perfect universe for us to be in, because nobody would be able to boss us around, and we'd get to be the voyagers, the Star Trek people, the starship builders, the policemen, and so on."
The rapid scientific and technological progress of the 19th and 20th centuries, compared to earlier eras, led to the common assumption that such progress will keep growing at exponential rates, eventually reaching the level required for space exploration. The "universal limit to technological development" (ULTD) hypothesis proposes that there is a limit to the potential growth of a civilization, and that this limit may be placed well below the point required for space exploration. Such limits may be economic, natural (such as faster-than-light travel being impossible), or even based on the species' own biology.
It is the nature of intelligent life to destroy itself
This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration. Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or poorly-designed artificial intelligence. This general theme is explored both in fiction and in scientific hypothesizing.
In 1966, Sagan and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales. Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, Stephen Hawking's "external transmission" or interstellar communicative phase, where knowledge production and knowledge management are more important than transmission of information via evolution, may be the point at which the system becomes unstable and self-destructs. Here, Hawking emphasizes self-design of the human genome (transhumanism) or enhancement via machines (e.g., brain–computer interface) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system. For instance, the development of technologies during the "external transmission" phase, such as weaponization of artificial general intelligence or antimatter, may not be met by concomitant increases in human ability to manage its own inventions. Consequently, disorder increases in the system: global governance may become increasingly destabilized, worsening humanity's ability to manage the possible means of annihilation listed above, resulting in global societal collapse.
A less theoretical example might be the resource-depletion issue on Polynesian islands, of which Easter Island is only the best known. David Brin points out that during the expansion phase from 1500 BC to 800 AD there were cycles of overpopulation followed by what might be called periodic cullings of adult males through war or ritual. He writes, "There are many stories of islands whose men were almost wiped out—sometimes by internal strife, and sometimes by invading males from other islands."
Using extinct civilizations such as Easter Island (Rapa Nui) as models, a study conducted in 2018 by Adam Frank et al. posited that climate change induced by "energy intensive" civilizations may prevent sustainability within such civilizations, thus explaining the paradoxical lack of evidence for intelligent extraterrestrial life. Based on dynamical systems theory, the study examined how technological civilizations (exo-civilizations) consume resources and the feedback effects this consumption has on their planet and its carrying capacity. According to Adam Frank, "[t]he point is to recognize that driving climate change may be something generic. The laws of physics demand that any young population, building an energy-intensive civilization like ours, is going to have feedback on its planet. Seeing climate change in this cosmic context may give us better insight into what’s happening to us now and how to deal with it." Generalizing the Anthropocene, their model produces four different outcomes (a minimal simulation sketch follows the list below):
Die-off: A scenario where the population grows quickly, surpassing the planet's carrying capacity, which leads to a peak followed by a rapid decline. The population eventually stabilizes at a much lower equilibrium level, allowing the planet to partially recover.
Sustainability: A scenario where civilizations successfully transition from high-impact resources (like fossil fuels) to sustainable ones (like solar energy) before significant environmental degradation occurs. This allows the civilization and planet to reach a stable equilibrium, avoiding catastrophic effects.
Collapse Without Resource Change: In this trajectory, the population and environmental degradation increase rapidly. The civilization does not switch to sustainable resources in time, leading to a total collapse where a tipping point is crossed and the population drops.
Collapse With Resource Change: Similar to the previous scenario, but in this case, the civilization attempts to transition to sustainable resources. However, the change comes too late, and the environmental damage is irreversible, still leading to the civilization's collapse.
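The following is a minimal dynamical-systems sketch in the spirit of this model, not the authors' actual equations: logistic population growth whose consumption degrades the planet's carrying capacity. All parameter values are invented for illustration; varying the impact and recovery rates moves the trajectory between equilibrium and die-off:

```python
# Toy population/carrying-capacity feedback, Euler-stepped.
def simulate(growth=0.05, impact=0.002, recovery=0.001,
             pop=1.0, capacity=100.0, pristine=100.0, steps=2000):
    history = []
    for _ in range(steps):
        pop += growth * pop * (1 - pop / capacity)              # logistic growth
        capacity += recovery * (pristine - capacity) - impact * pop  # feedback
        capacity = max(capacity, 1e-6)
        pop = max(pop, 0.0)
        history.append((pop, capacity))
    return history

final_pop, final_cap = simulate()[-1]
print(final_pop, final_cap)  # settles near a reduced equilibrium with these values
```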
It is the nature of intelligent life to destroy others
Another hypothesis is that an intelligent species beyond a certain point of technological capability will destroy other intelligent species as they appear, perhaps by using self-replicating probes. Science fiction writer Fred Saberhagen has explored this idea in his Berserker series, as have physicist Gregory Benford, science fiction writer Greg Bear in his novel The Forge of God, and later Liu Cixin in his The Three-Body Problem series.
A species might undertake such extermination out of expansionist motives, greed, paranoia, or aggression. In 1981, cosmologist Edward Harrison argued that such behavior would be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might view any other species bent on galactic expansion as a threat. It has also been suggested that a successful alien species would be a superpredator, as are humans. Another possibility invokes the "tragedy of the commons" and the anthropic principle: the first lifeform to achieve interstellar travel will necessarily (even if unintentionally) prevent competitors from arising, and humans simply happen to be first.
Civilizations only broadcast detectable signals for a brief period of time
It may be that alien civilizations are detectable through their radio emissions for only a short time, reducing the likelihood of spotting them. The usual assumption is that civilizations outgrow radio through technological advancement. However, there could be other leakage such as that from microwaves used to transmit power from solar satellites to ground receivers. Regarding the first point, in a 2006 Sky & Telescope article, Seth Shostak wrote, "Moreover, radio leakage from a planet is only likely to get weaker as a civilization advances and its communications technology gets better. Earth itself is increasingly switching from broadcasts to leakage-free cables and fiber optics, and from primitive but obvious carrier-wave broadcasts to subtler, hard-to-recognize spread-spectrum transmissions."
More hypothetically, advanced alien civilizations may evolve beyond broadcasting at all in the electromagnetic spectrum and communicate by technologies not developed or used by mankind. Some scientists have hypothesized that advanced civilizations may send neutrino signals. If such signals exist, they could be detectable by neutrino detectors now under construction for other purposes.
Alien life may be too incomprehensible
Another possibility is that human theoreticians have underestimated how much alien life might differ from that on Earth. Aliens may be psychologically unwilling to attempt to communicate with human beings. Perhaps human mathematics is parochial to Earth and not shared by other life, though others argue this can only apply to abstract math since the math associated with physics must be similar (in results, if not in methods).
In his 2009 book, SETI scientist Seth Shostak wrote, "Our experiments [such as plans to use drilling rigs on Mars] are still looking for the type of extraterrestrial that would have appealed to Percival Lowell [astronomer who believed he had observed canals on Mars]."
Physiology might also cause a communication barrier. Carl Sagan speculated that an alien species might have a thought process orders of magnitude slower (or faster) than that of humans. A message broadcast by that species might well seem like random background noise to humans, and therefore go undetected.
Paul Davies states that 500 years ago the very idea of a computer doing work merely by manipulating internal data may not have been viewed as a technology at all. He writes, "Might there be a still higher level[...] If so, this 'third level' would never be manifest through observations made at the informational level, still less the matter level. There is no vocabulary to describe the third level, but that doesn't mean it is non-existent, and we need to be open to the possibility that alien technology may operate at the third level, or maybe the fourth, fifth[...] levels."
Arthur C. Clarke hypothesized that "our technology must still be laughably primitive; we may well be like jungle savages listening for the throbbing of tom-toms, while the ether around them carries more words per second than they could utter in a lifetime". Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character.
Sociological explanations
Colonization is not the cosmic norm
In response to Tipler's idea of self-replicating probes, Stephen Jay Gould wrote, "I must confess that I simply don't know how to react to such arguments. I have enough trouble predicting the plans and reactions of the people closest to me. I am usually baffled by the thoughts and accomplishments of humans in different cultures. I'll be damned if I can state with certainty what some extraterrestrial source of intelligence might do."
Alien species may have only settled part of the galaxy
According to a study by Frank et al., advanced civilizations may not colonize everything in the galaxy due to their potential adoption of steady states of expansion. This hypothesis suggests that civilizations might reach a stable pattern of expansion where they neither collapse nor aggressively spread throughout the galaxy. A February 2019 article in Popular Science states, "Sweeping across the Milky Way and establishing a unified galactic empire might be inevitable for a monolithic super-civilization, but most cultures are neither monolithic nor super—at least if our experience is any guide." Astrophysicist Adam Frank, along with co-authors such as astronomer Jason Wright, ran a variety of simulations in which they varied such factors as settlement lifespans, fractions of suitable planets, and recharge times between launches. They found many of their simulations seemingly resulted in a "third category" in which the Milky Way remains partially settled indefinitely. The abstract to their 2019 paper states, "These results break the link between Hart's famous 'Fact A' (no interstellar visitors on Earth now) and the conclusion that humans must, therefore, be the only technological civilization in the galaxy. Explicitly, our solutions admit situations where our current circumstances are consistent with an otherwise settled, steady-state galaxy."
An alternative scenario is that long-lived civilizations may only choose to colonize stars during closest approach. As low mass K- and M-type dwarfs are by far the most common types of main sequence stars in the Milky Way, they are more likely to pass close to existing civilizations. These stars have longer life spans, which may be preferred by such a civilization. Interstellar travel capability of 0.3 light years is theoretically sufficient to colonize all M-dwarfs in the galaxy within 2 billion years. If the travel capability is increased to 2 light years, then all K-dwarfs can be colonized in the same time frame.
Alien species may isolate themselves in virtual worlds
Avi Loeb suggests that one possible explanation for the Fermi paradox is virtual reality technology. Individuals of extraterrestrial civilizations may prefer to spend time in virtual worlds or metaverses that have different physical law constraints as opposed to focusing on colonizing planets. Nick Bostrom suggests that some advanced beings may divest themselves entirely of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.
It may be that intelligent alien life develops an "increasing disinterest" in their outside world. Possibly any sufficiently advanced society will develop highly engaging media and entertainment well before the capacity for advanced space travel, and because such social contrivances are inherently less complex and less costly, their appeal may be destined to overtake any desire for complex, expensive endeavors such as space exploration and communication. Once any sufficiently advanced civilization becomes able to master its environment, and most of its physical needs are met through technology, various "social and entertainment technologies", including virtual reality, are postulated to become the primary drivers and motivations of that civilization.
Artificial intelligence may not expand
While artificial intelligence supplanting its creators could only deepen the Fermi paradox, for example by enabling the colonization of the galaxy through self-replicating probes, it is also possible that after replacing its creators, artificial intelligence either does not expand or does not endure, for a variety of reasons. Michael A. Garrett has suggested that biological civilizations may universally underestimate the speed at which AI systems progress, and fail to react in time, making this a possible great filter. He also argues that this could make the longevity of advanced technological civilizations less than 200 years, thus explaining the great silence observed by SETI.
Economic explanations
Lack of resources needed to physically spread throughout the galaxy
The ability of an alien culture to colonize other star systems is based on the idea that interstellar travel is technologically feasible. While the current understanding of physics rules out the possibility of faster-than-light travel, it appears that there are no major theoretical barriers to the construction of "slow" interstellar ships, even though the engineering required is considerably beyond present human capabilities. This idea underlies the concepts of the von Neumann probe and the Bracewell probe as potential evidence of extraterrestrial intelligence.
It is possible, however, that present scientific knowledge cannot properly gauge the feasibility and costs of such interstellar colonization. Theoretical barriers may not yet be understood, and the resources needed may be so great as to make it unlikely that any civilization could afford to attempt it. Even if interstellar travel and colonization are possible, they may be difficult, leading to a colonization model based on percolation theory.
Colonization efforts may not occur as an unstoppable rush, but rather as an uneven tendency to "percolate" outwards, with an eventual slowing and termination of the effort given the enormous costs involved and the expectation that colonies will inevitably develop a culture and civilization of their own. Colonization may thus occur in "clusters", with large areas remaining uncolonized at any one time.
Information is cheaper to transmit than matter is to transfer
If a human-capability machine intelligence is possible, and if it is possible to transfer such constructs over vast distances and rebuild them on a remote machine, then it might not make strong economic sense to travel the galaxy by spaceflight. Louis K. Scheffer calculates the cost of radio transmission of information across space to be cheaper than spaceflight by a factor of 108–1017. For a machine civilization, the costs of interstellar travel are therefore enormous compared to the more efficient option of sending computational signals across space to already established sites. After the first civilization has physically explored or colonized the galaxy, as well as sent such machines for easy exploration, then any subsequent civilizations, after having contacted the first, may find it cheaper, faster, and easier to explore the galaxy through intelligent mind transfers to the machines built by the first civilization. However, since a star system needs only one such remote machine, and the communication is most likely highly directed, transmitted at high frequencies, and at a minimal power to be economical, such signals would be hard to detect from Earth.
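The flavor of Scheffer's comparison can be reproduced with crude physics: the kinetic energy needed to move even a small payload at a fraction of light speed dwarfs the energy needed to radiate the equivalent information. The payload mass, data volume, and energy-per-bit figures below are assumptions for illustration, not Scheffer's own numbers:

```python
# Energy to move matter vs energy to transmit the equivalent information.
c = 3e8                                  # speed of light, m/s

payload_kg = 1.0                         # assumed payload mass
v = 0.1 * c                              # assumed cruise speed
e_travel = 0.5 * payload_kg * v**2       # non-relativistic kinetic energy, J

bits = 8e15                              # assumed data equivalent (~1 petabyte)
e_per_bit = 1e-9                         # assumed delivered energy per bit, J
e_transmit = bits * e_per_bit

print(e_travel / e_transmit)             # ratio ~ 6e7 with these assumptions
```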
By contrast, in economics the counter-intuitive Jevons paradox implies that higher productivity results in higher demand. In other words, increased economic efficiency results in increased economic growth. For example, increased renewable energy has the risk of not directly resulting in declining fossil fuel use, but rather additional economic growth as fossil fuels instead are directed to alternative uses. Thus, technological innovation makes human civilization more capable of higher levels of consumption, as opposed to its existing consumption being achieved more efficiently at a stable level.
Discovery of extraterrestrial life is too difficult
Humans have not listened properly
There are some assumptions that underlie the SETI programs that may cause searchers to miss signals that are present. Extraterrestrials might, for example, transmit signals that have a very high or low data rate, or employ unconventional (in human terms) frequencies, which would make them hard to distinguish from background noise. Signals might be sent from non-main sequence star systems that humans search with lower priority; current programs assume that most alien life will be orbiting Sun-like stars.
The greatest challenge is the sheer size of the radio search needed to look for signals (effectively spanning the entire observable universe), the limited amount of resources committed to SETI, and the sensitivity of modern instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory, Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light-years, less than 1/10 the distance to the nearest star. A signal is much easier to detect if it consists of a deliberate, powerful transmission directed at Earth. Such signals could be detected at ranges of hundreds to tens of thousands of light-years distance. However, this means that detectors must be listening to an appropriate range of frequencies, and be in that region of space to which the beam is being sent. Many SETI searches assume that extraterrestrial civilizations will be broadcasting a deliberate signal, like the Arecibo message, in order to be found.
Thus, to detect alien civilizations through their radio emissions, Earth observers either need more sensitive instruments or must hope for fortunate circumstances: that the broadband radio emissions of alien radio technology are much stronger than humanity's own; that one of SETI's programs is listening to the correct frequencies from the right regions of space; or that aliens are deliberately sending focused transmissions in Earth's general direction.
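The quoted detection ranges follow from the inverse-square law: an isotropic transmitter's flux falls as 1/(4πd²), so detectability depends on transmitter power and receiver sensitivity. In this sketch the sensitivity floor is a placeholder chosen so that the leakage case reproduces the 0.3 light-year figure quoted above; the beacon power is likewise an assumption:

```python
import math

def max_range_m(power_w, sensitivity_w_per_m2):
    """Distance at which an isotropic signal falls to the detection floor."""
    return math.sqrt(power_w / (4 * math.pi * sensitivity_w_per_m2))

LY = 9.46e15  # metres per light-year
print(max_range_m(1e6, 1e-26) / LY)   # ~0.3 ly for an assumed 1 MW of leakage
print(max_range_m(1e13, 1e-26) / LY)  # ~940 ly for an assumed 10 TW beacon
```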
Humans have not listened for long enough
Humanity's ability to detect intelligent extraterrestrial life has existed for only a very brief period—from 1937 onwards, if the invention of the radio telescope is taken as the dividing line—and Homo sapiens is a geologically recent species. The whole period of modern human existence to date is a very brief period on a cosmological scale, and radio transmissions have only been propagated since 1895. Thus, it remains possible that human beings have neither existed long enough nor made themselves sufficiently detectable to be found by extraterrestrial intelligence.
Intelligent life may be too far away
It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart for meaningful two-way communication. Sebastian von Hoerner estimated the average duration of civilization at 6,500 years and the average distance between civilizations in the Milky Way at 1,000 light years. If two civilizations are separated by several thousand light-years, it is possible that one or both cultures may become extinct before meaningful dialogue can be established. Human searches may be able to detect their existence, but communication will remain impossible because of distance. It has been suggested that this problem might be ameliorated somewhat if contact and communication is made through a Bracewell probe. In this case at least one partner in the exchange may obtain meaningful information. Alternatively, a civilization may simply broadcast its knowledge, and leave it to the receiver to make what they may of it. This is similar to the transmission of information from ancient civilizations to the present, and humanity has undertaken similar activities like the Arecibo message, which could transfer information about Earth's intelligent species, even if it never yields a response or does not yield a response in time for humanity to receive it. It is possible that observational signatures of self-destroyed civilizations could be detected, depending on the destruction scenario and the timing of human observation relative to it.
A related speculation by Sagan and Newman suggests that if other civilizations exist, and are transmitting and exploring, their signals and probes simply have not arrived yet. However, critics have noted that this is unlikely, since it requires that humanity's advancement has occurred at a very special point in time, while the Milky Way is in transition from empty to full. This is a tiny fraction of the lifespan of a galaxy under ordinary assumptions, so the likelihood that humanity is in the midst of this transition is considered low in the paradox.
Some SETI skeptics may also believe that humanity is at a very special point of time—specifically, a transitional period from no space-faring societies to one space-faring society, namely that of human beings.
Intelligent life may exist hidden from view
Planetary scientist Alan Stern put forward the idea that there could be a number of worlds with subsurface oceans (such as Jupiter's Europa or Saturn's Enceladus). The overlying surface would provide a large degree of protection from such things as cometary impacts and nearby supernovae, as well as creating a situation in which a much broader range of orbits is acceptable. Life, and potentially intelligence and civilization, could evolve. Stern states, "If they have technology, and let's say they're broadcasting, or they have city lights or whatever—we can't see it in any part of the spectrum, except maybe very-low-frequency [radio]."
Advanced civilizations may limit their search for life to technological signatures
If life is abundant in the universe but the cost of space travel is high, an advanced civilization may choose to focus its search not on signs of life in general, but on those of other advanced civilizations, and specifically on radio signals. Since humanity has only recently begun to use radio communication, its signals may have yet to arrive at other inhabited planets, and if they have, probes from those planets may have yet to arrive on Earth.
Willingness to communicate
Everyone is listening but no one is transmitting
Alien civilizations might be technically capable of contacting Earth, but could be only listening instead of transmitting. If all or most civilizations act in the same way, the galaxy could be full of civilizations eager for contact, but everyone is listening and no one is transmitting. This is the so-called SETI Paradox.
The only civilization known, humanity, does not explicitly transmit, except for a few small efforts. Even these efforts, and certainly any attempt to expand them, are controversial. It is not even clear humanity would respond to a detected signal—the official policy within the SETI community is that "[no] response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place". However, given the possible impact of any reply, it may be very difficult to obtain any consensus on who would speak and what they would say.
Communication is dangerous
An alien civilization might feel it is too dangerous to communicate, either for humanity or for them. It is argued that when very different civilizations have met on Earth, the results have often been disastrous for one side or the other, and the same may well apply to interstellar contact. Even contact at a safe distance could lead to infection by computer code or even ideas themselves. Perhaps prudent civilizations actively hide not only from Earth but from everyone, out of fear of other civilizations.
Perhaps the Fermi paradox itself—or the alien equivalent of it—is the reason for any civilization to avoid contact with other civilizations, even if no other obstacles existed. From any one civilization's point of view, it would be unlikely for it to be the first ever to attempt first contact. Therefore, according to this reasoning, it is likely that previous civilizations faced fatal problems with first contact, and that attempting it should be avoided. So perhaps every civilization keeps quiet because of the possibility that there is a real reason for others to do so.
In 1987, science fiction author Greg Bear explored this concept in his novel The Forge of God. In The Forge of God, humanity is likened to a baby crying in a hostile forest: "There once was an infant lost in the woods, crying its heart out, wondering why no one answered, drawing down the wolves." One of the characters explains, "We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why. Planetisms that don't know enough to keep quiet, get eaten."
In Liu Cixin's 2008 novel The Dark Forest, the author proposes a literary explanation for the Fermi paradox in which many alien civilizations exist, but are both silent and paranoid, destroying any nascent lifeforms loud enough to make themselves known. This is because any other intelligent life may represent a future threat. As a result, Liu's fictional universe contains a plethora of quiet civilizations which do not reveal themselves, as in a "dark forest" filled with "armed hunter(s) stalking through the trees like a ghost". This idea has come to be known as the dark forest hypothesis.
Earth is deliberately being avoided
The zoo hypothesis states that intelligent extraterrestrial life exists and does not contact life on Earth to allow for its natural evolution and development. A variation on the zoo hypothesis is the laboratory hypothesis, where humanity has been or is being subject to experiments, with Earth or the Solar System effectively serving as a laboratory. The zoo hypothesis may break down under the uniformity of motive flaw: all it takes is a single culture or civilization deciding to act contrary to the imperative within humanity's range of detection for it to be abrogated, and the probability of such a violation of hegemony increases with the number of civilizations. This tends not towards a "Galactic Club" with a unified foreign policy with regard to life on Earth, but towards multiple "Galactic Cliques". However, if artificial superintelligences dominate galactic life, and if it is true that such intelligences tend towards merged hegemonic behavior, then this would address the uniformity of motive flaw by dissuading rogue behavior.
Analysis of the inter-arrival times between civilizations in the galaxy based on common astrobiological assumptions suggests that the initial civilization would have a commanding lead over the later arrivals. As such, it may have established what has been termed the zoo hypothesis through force or as a galactic or universal norm and the resultant "paradox" by a cultural founder effect with or without the continued activity of the founder. Some colonization scenarios predict spherical expansion across star systems, with continued expansion coming from the systems just previously settled. It has been suggested that this would cause a strong selection process among the colonization front favoring cultural or biological adaptations to living in starships or space habitats. As a result, they may forgo living on planets. This may result in the destruction of terrestrial planets in these systems for use as building materials, thus preventing the development of life on those worlds. Or, they may have an ethic of protection for "nursery worlds", and protect them.
It is possible that a civilization advanced enough to travel between solar systems could be actively visiting or observing Earth while remaining undetected or unrecognized. Following this logic, and building on arguments that other proposed solutions to the Fermi paradox may be implausible, Ian Crawford and Dirk Schulze-Makuch have argued that technological civilizations are either very rare in the Galaxy or are deliberately hiding from us.
Earth is deliberately being isolated
A related idea to the zoo hypothesis is that, beyond a certain distance, the perceived universe is a simulated reality. The planetarium hypothesis speculates that beings may have created this simulation so that the universe appears to be empty of other life.
Alien life is already here, unacknowledged
A significant fraction of the population believes that at least some UFOs (Unidentified Flying Objects) are spacecraft piloted by aliens. While most of these are unrecognized or mistaken interpretations of mundane phenomena, some occurrences remain puzzling even after investigation. The consensus scientific view is that although they may be unexplained, they do not rise to the level of convincing evidence.
Similarly, it is theoretically possible that SETI groups are not reporting positive detections, or governments have been blocking signals or suppressing publication. This response might be attributed to security or economic interests from the potential use of advanced extraterrestrial technology. It has been suggested that the detection of an extraterrestrial radio signal or technology could well be the most highly secret information that exists. Claims that this has already happened are common in the popular press, but the scientists involved report the opposite experience—the press becomes informed and interested in a potential detection even before a signal can be confirmed.
Regarding the idea that aliens are in secret contact with governments, David Brin writes, "Aversion to an idea, simply because of its long association with crackpots, gives crackpots altogether too much influence."
See also
Calculating God – 2000 novel by Robert J. Sawyer
External links
Kestenbaum, David. "Three people grapple with the question, 'Are we alone?'", This American Life radio show, hosted by Ira Glass. This episode's first 22 minutes discusses the Fermi Paradox. See also the show's May 19, 2017 transcript.
The Fermi Paradox – Where Are All The Aliens? (2015), Kurzgesagt – In a Nutshell
Webb, Stephen (video; 13:09): Where Are All The Aliens? (TED talk – 2018) (transcript)
Astrobiology
Paradox
Eponymous paradoxes
Interstellar messages
Search for extraterrestrial intelligence
Unsolved problems in astronomy | Fermi paradox | [
"Physics",
"Astronomy",
"Biology"
] | 11,672 | [
"Astronomical hypotheses",
"Unsolved problems in astronomy",
"Origin of life",
"Concepts in astronomy",
"Speculative evolution",
"Astrobiology",
"Astronomical controversies",
"Fermi paradox",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
11,586 | https://en.wikipedia.org/wiki/Full%20disclosure%20%28computer%20security%29 | In the field of computer security, independent researchers often discover flaws in software that can be abused to cause unintended behaviour; these flaws are called vulnerabilities. The process by which the analysis of these vulnerabilities is shared with third parties is the subject of much debate, and is referred to as the researcher's disclosure policy. Full disclosure is the practice of publishing analysis of software vulnerabilities as early as possible, making the data accessible to everyone without restriction. The primary purpose of widely disseminating information about vulnerabilities is so that potential victims are as knowledgeable as those who attack them.
In his 2007 essay on the topic, Bruce Schneier stated "Full disclosure – the practice of making the details of security vulnerabilities public – is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure." Leonard Rose, co-creator of an electronic mailing list that has superseded bugtraq to become the de facto forum for disseminating advisories, explains "We don't believe in security by obscurity, and as far as we know, full disclosure is the only way to ensure that everyone, not just the insiders, have access to the information we need."
The vulnerability disclosure debate
The controversy around the public disclosure of sensitive information is not new. The issue of full disclosure was first raised in the context of locksmithing, in a 19th-century controversy regarding whether weaknesses in lock systems should be kept secret in the locksmithing community, or revealed to the public. Today, there are three major disclosure policies under which most others can be categorized: Non Disclosure, Coordinated Disclosure, and Full Disclosure.
The major stakeholders in vulnerability research have their disclosure policies shaped by various motivations; it is not uncommon to observe campaigning, marketing, or lobbying for a preferred policy to be adopted, and the chastising of those who dissent. Many prominent security researchers favor full disclosure, whereas most vendors prefer coordinated disclosure. Non disclosure is generally favored by commercial exploit vendors and blackhat hackers.
Coordinated vulnerability disclosure
Coordinated vulnerability disclosure is a policy under which researchers agree to report vulnerabilities to a coordinating authority, which then reports them to the vendor, tracks fixes and mitigations, and coordinates the disclosure of information with stakeholders, including the public. In some cases the coordinating authority is the vendor. The premise of coordinated disclosure is typically that nobody should be informed about a vulnerability until the software vendor says it is time. While there are often exceptions or variations of this policy, distribution must initially be limited, and vendors are given privileged access to nonpublic research.
The original name for this approach was "responsible disclosure", based on the essay by Microsoft Security Manager Scott Culp "It's Time to End Information Anarchy" (referring to full disclosure). Microsoft later called for the term to be phased out in favor of "Coordinated Vulnerability Disclosure" (CVD).
Although the reasoning varies, many practitioners argue that end-users cannot benefit from access to vulnerability information without guidance or patches from the vendor, so the risks of sharing research with malicious actors are too great for too little benefit. As Microsoft explains, "[Coordinated disclosure] serves everyone's best interests by ensuring that customers receive comprehensive, high-quality updates for security vulnerabilities but are not exposed to malicious attacks while the update is being developed."
To prevent vendors from indefinitely delaying disclosure, a common practice in the security industry, pioneered by Google, is to publish all the details of a vulnerability after a deadline, usually 90 or 120 days, reduced to 7 days if the vulnerability is under active exploitation.
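A minimal sketch of how such a deadline might be computed (the function name and exact policy parameters here are illustrative, not taken from any vendor's actual program):

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, actively_exploited: bool = False) -> date:
    """Date on which details may be published under a hypothetical
    90-day deadline policy, shortened to 7 days when the vulnerability
    is already being exploited in the wild."""
    grace = timedelta(days=7 if actively_exploited else 90)
    return reported + grace

# A bug reported on 2024-01-15 with no known exploitation
# may be published on 2024-04-14.
print(disclosure_deadline(date(2024, 1, 15)))
```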
Full disclosure
Full disclosure is the policy of publishing information on vulnerabilities as early as possible, making it accessible to the general public without restriction. In general, proponents of full disclosure believe that the benefits of freely available vulnerability research outweigh the risks, whereas opponents prefer to limit the distribution.
The free availability of vulnerability information allows users and administrators to understand and react to vulnerabilities in their systems, and allows customers to pressure vendors to fix vulnerabilities that vendors may otherwise feel no incentive to solve. There are some fundamental problems with coordinated disclosure that full disclosure can resolve.
If customers do not know about vulnerabilities, they cannot request patches, and vendors experience no economic incentive to correct vulnerabilities.
Administrators cannot make informed decisions about the risks to their systems, as information on vulnerabilities is restricted.
Malicious researchers who also know about the flaw have a long period of time to continue exploiting the flaw.
Discovery of a specific flaw or vulnerability is not a mutually exclusive event; multiple researchers with differing motivations can and do discover the same flaws independently.
There is no standard way to make vulnerability information available to the public; researchers often use mailing lists dedicated to the topic, academic papers, or industry conferences.
Non disclosure
Non disclosure is the policy that vulnerability information should not be shared, or should only be shared under non-disclosure agreement (either contractually or informally).
Common proponents of non-disclosure include commercial exploit vendors, researchers who intend to exploit the flaws they find, and proponents of security through obscurity.
Debate
In 2009, Charlie Miller, Dino Dai Zovi and Alexander Sotirov announced at the CanSecWest conference the "No More Free Bugs" campaign, arguing that companies are profiting and taking advantage of security researchers by not paying them for disclosing bugs. This announcement made it to the news and opened a broader debate about the problem and its associated incentives.
Arguments against coordinated disclosure
Researchers in favor of coordinated disclosure believe that users cannot make use of advanced knowledge of vulnerabilities without guidance from the vendor, and that the majority is best served by limiting distribution of vulnerability information. Advocates argue that low-skilled attackers can use this information to perform sophisticated attacks that would otherwise be beyond their ability, and the potential benefit does not outweigh the potential harm caused by malevolent actors. Only when the vendor has prepared guidance that even the most unsophisticated users can digest should the information be made public.
This argument presupposes that vulnerability discovery is a mutually exclusive event, i.e. that only one person can discover a vulnerability. There are many examples of vulnerabilities being discovered simultaneously, often being exploited in secrecy before discovery by other researchers. While there may exist users who cannot benefit from vulnerability information, full disclosure advocates believe this attitude demonstrates a contempt for the intelligence of end users. While it is true that some users cannot benefit from vulnerability information, those concerned with the security of their networks are in a position to hire an expert to assist them, as one would hire a mechanic to help with a car.
Arguments against non disclosure
Non disclosure is typically used when a researcher intends to use knowledge of a vulnerability to attack computer systems operated by their enemies, or to trade knowledge of a vulnerability to a third party for profit, who will typically use it to attack their enemies.
Researchers practicing non disclosure are generally not concerned with improving security or protecting networks. However, some proponents argue that they simply do not want to assist vendors, and claim no intent to harm others.
While full and coordinated disclosure advocates declare similar goals and motivations, merely disagreeing on how best to achieve them, non disclosure is entirely incompatible with both.
References
Computer security procedures | Full disclosure (computer security) | [
"Engineering"
] | 1,498 | [
"Cybersecurity engineering",
"Computer security procedures"
] |
11,617 | https://en.wikipedia.org/wiki/Feynman%20diagram | In theoretical physics, a Feynman diagram is a pictorial representation of the mathematical expressions describing the behavior and interaction of subatomic particles. The scheme is named after American physicist Richard Feynman, who introduced the diagrams in 1948.
The calculation of probability amplitudes in theoretical particle physics requires the use of large, complicated integrals over a large number of variables. Feynman diagrams instead represent these integrals graphically.
Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. According to David Kaiser, "Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics."
While the diagrams apply primarily to quantum field theory, they can be used in other areas of physics, such as solid-state theory. Frank Wilczek wrote that the calculations that won him the 2004 Nobel Prize in Physics "would have been literally unthinkable without Feynman diagrams, as would [Wilczek's] calculations that established a route to production and observation of the Higgs particle."
A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the S-matrix between the initial and final states of the quantum system.
Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time. Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams.
Motivation and history
When calculating scattering cross-sections in particle physics, the interaction between particles can be described by starting from a free field that describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates (collections of particles with a definite momentum) the series is called old-fashioned perturbation theory (or time-dependent/time-ordered perturbation theory).
The Dyson series can be alternatively rewritten as a sum over Feynman diagrams, where at each vertex both the energy and momentum are conserved, but where the length of the energy-momentum four-vector is not necessarily equal to the mass, i.e. the intermediate particles are so-called off-shell. The Feynman diagrams are much easier to keep track of than "old-fashioned" terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term.
Feynman gave a prescription for calculating the amplitude (the Feynman rules, below) for any given diagram from a field theory Lagrangian. Each internal line corresponds to a factor of the virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian, and incoming and outgoing lines carry an energy, momentum, and spin.
In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman—see path integral formulation.
The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure, to include particle self-interactions. The technique of renormalization, suggested by Ernst Stueckelberg and Hans Bethe and implemented by Dyson, Feynman, Schwinger, and Tomonaga compensates for this effect and eliminates the troublesome infinities. After renormalization, calculations using Feynman diagrams match experimental results with very high accuracy.
Feynman diagram and path integral methods are also used in statistical mechanics and can even be applied to classical mechanics.
Alternate names
Murray Gell-Mann always referred to Feynman diagrams as Stueckelberg diagrams, after Swiss physicist Ernst Stueckelberg, who devised a similar notation many years earlier. Stueckelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was first to find the correct physical interpretation in terms of forward and backward in time particle paths, all without the path-integral.
Historically, as a book-keeping device of covariant perturbation theory, the graphs were called Feynman–Dyson diagrams or Dyson graphs, because the path integral was unfamiliar when they were introduced, and Freeman Dyson's derivation from old-fashioned perturbation theory borrowed from the perturbative expansions in statistical mechanics was easier to follow for physicists trained in earlier methods. Feynman had to lobby hard for the diagrams, which confused physicists trained in equations and graphs.
Representation of physical reality
In their presentations of fundamental interactions, written from the particle physics perspective, Gerard 't Hooft and Martinus Veltman gave good arguments for taking the original, non-regularized Feynman diagrams as the most succinct representation of the physics of quantum scattering of fundamental particles. Their motivations are consistent with the convictions of James Daniel Bjorken and Sidney Drell:
The Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand. Although the statement of the theory in terms of graphs may imply perturbation theory, use of graphical methods in the many-body problem shows that this formalism is flexible enough to deal with phenomena of nonperturbative characters ... Some modification of the Feynman rules of calculation may well outlive the elaborate mathematical structure of local canonical quantum field theory ...
In quantum field theories, Feynman diagrams are obtained from a Lagrangian by Feynman rules.
Dimensional regularization is a method for regularizing integrals in the evaluation of Feynman diagrams; it assigns values to them that are meromorphic functions of an auxiliary complex parameter d, called the dimension. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and on the spacetime points.
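As a concrete illustration (a standard textbook formula, stated here for reference rather than taken from this article), the Euclidean one-loop integral evaluates to a meromorphic function of the dimension d:

$$\int \frac{d^d k}{(2\pi)^d} \, \frac{1}{\left( k^2 + \Delta \right)^n} = \frac{1}{(4\pi)^{d/2}} \, \frac{\Gamma\!\left( n - \frac{d}{2} \right)}{\Gamma(n)} \, \Delta^{d/2 - n},$$

whose poles in d, supplied by the gamma function, isolate the divergences of the corresponding four-dimensional integral.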
Particle-path interpretation
A Feynman diagram is a representation of quantum field theory processes in terms of particle interactions. The particles are represented by the diagram lines. The lines can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is a vertex, and this is where the particles meet and interact. The interactions are: emit/absorb particles, deflect particles, or change particle type.
The three different types of lines are: internal lines, connecting vertices; incoming lines, extending from "the past" to a vertex, representing an initial state; and outgoing lines, extending from a vertex to "the future", representing the end state (the latter two are also known as external lines). Traditionally, the bottom of the diagram is the past and the top the future; alternatively, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, past and future are not relevant and all lines are internal. The particles then begin and end on small x's, which represent the positions of the operators whose correlation is calculated.
Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process that can happen in different ways. When a group of incoming particles scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time.
Feynman diagrams are graphs that represent the interaction of particles rather than the physical position of the particle during a scattering process. They are not the same as spacetime diagrams and bubble chamber images even though they all describe particle scattering. Unlike a bubble chamber picture, only the sum of all relevant Feynman diagrams represent any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition—every diagram contributes to the total process's amplitude.
Description
A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state.
For example, in the process of electron-positron annihilation the initial state is one electron and one positron, while the final state is two photons.
Conventionally, the initial state is at the left of the diagram and the final state at the right (although other layouts are also used).
The particles in the initial state are depicted by lines pointing in the direction of the initial state (e.g., to the left). The particles in the final state are represented by lines pointing in the direction of the final state (e.g., to the right).
QED involves two types of particles: matter particles such as electrons or positrons (called fermions) and exchange particles (called gauge bosons). They are represented in Feynman diagrams as follows:
Electron in the initial state is represented by a solid line, with an arrow indicating the flow of the fermion, pointing toward the vertex (→•).
Electron in the final state is represented by a solid line, with an arrow indicating the flow of the fermion, pointing away from the vertex (•→).
Positron in the initial state is represented by a solid line, with an arrow indicating the flow of the fermion, pointing away from the vertex (←•).
Positron in the final state is represented by a solid line, with an arrow indicating the flow of the fermion, pointing toward the vertex (•←).
A photon in the initial or the final state is represented by a wavy line (~• and •~).
In QED each vertex has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex.
Vertices can be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertices (•~•). A fermionic propagator is represented by a solid line with an arrow connecting two vertices, (•←•).
The number of vertices gives the order of the term in the perturbation series expansion of the transition amplitude.
Electron–positron annihilation example
The electron–positron annihilation interaction:
e+ + e− → 2γ
has a contribution from the second order Feynman diagram:
In the initial state (at the bottom; early time) there is one electron (e−) and one positron (e+) and in the final state (at the top; late time) there are two photons (γ).
Canonical quantization formulation
The probability amplitude for a transition of a quantum system (between asymptotically free states) from the initial state |i⟩ to the final state |f⟩ is given by the matrix element

$$S_{\rm fi} = \langle {\rm f} \,|\, S \,|\, {\rm i} \rangle,$$

where S is the S-matrix. In terms of the time-evolution operator U, it is simply

$$S = \lim_{t_2 \to +\infty} \lim_{t_1 \to -\infty} U(t_2, t_1).$$

In the interaction picture, this expands to

$$S = \mathcal{T} \exp\left( -i \int_{-\infty}^{+\infty} d\tau \, H_V(\tau) \right),$$

where H_V is the interaction Hamiltonian and $\mathcal{T}$ signifies the time-ordered product of operators. Dyson's formula expands the time-ordered matrix exponential into a perturbation series in the powers of the interaction Hamiltonian density,

$$S = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!} \left( \prod_{j=1}^{n} \int d^4 x_j \right) \mathcal{T} \left( \prod_{j=1}^{n} \mathcal{H}_V(x_j) \right).$$

Equivalently, with the interaction Lagrangian density $\mathcal{L}_V$, it is

$$S = \sum_{n=0}^{\infty} \frac{i^n}{n!} \left( \prod_{j=1}^{n} \int d^4 x_j \right) \mathcal{T} \left( \prod_{j=1}^{n} \mathcal{L}_V(x_j) \right).$$

A Feynman diagram is a graphical representation of a single summand in the Wick expansion of the time-ordered product in the nth-order term of the Dyson series of the S-matrix,

$$\mathcal{T} \prod_{j=1}^{n} \mathcal{L}_V(x_j) = \sum_{\rm contractions} (\pm) \, \mathcal{N} \prod_{j=1}^{n} \mathcal{L}_V(x_j),$$

where $\mathcal{N}$ signifies the normal-ordered product of the operators, (±) takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator), and the sum runs over all possible contractions.
Feynman rules
The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. For the QED interaction Lagrangian

$$\mathcal{L}_V = -g \, \bar\psi \, \gamma^\mu \, \psi \, A_\mu,$$

describing the interaction of a fermionic field ψ with a bosonic gauge field A_μ, the Feynman rules can be formulated in coordinate space as follows:
Each integration coordinate x_j is represented by a point (sometimes called a vertex);
A bosonic propagator is represented by a wiggly line connecting two points;
A fermionic propagator is represented by a solid line connecting two points;
A bosonic field A_μ(x_i) is represented by a wiggly line attached to the point x_i;
A fermionic field ψ(x_i) is represented by a solid line attached to the point x_i with an arrow toward the point;
An anti-fermionic field ψ̄(x_i) is represented by a solid line attached to the point x_i with an arrow away from the point;
Example: second order processes in QED
The second order perturbation term in the S-matrix is
Scattering of fermions
Wick's expansion of the integrand gives (among others) the following term

$$\mathcal{N} \, \bar\psi(x) \gamma^\mu \psi(x) \, \bar\psi(x') \gamma^\nu \psi(x') \, \langle A_\mu(x) A_\nu(x') \rangle,$$

where

$$\langle A_\mu(x) A_\nu(x') \rangle = \int \frac{d^4 k}{(2\pi)^4} \, \frac{-i g_{\mu\nu}}{k^2 + i0} \, e^{-ik \cdot (x - x')}$$

is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by the accompanying Feynman diagram. This diagram gives contributions to the following processes:
e− e− scattering (initial state at the right, final state at the left of the diagram);
e+ e+ scattering (initial state at the left, final state at the right of the diagram);
e− e+ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram).
Compton scattering and annihilation/generation of e− e+ pairs
Another interesting term in the expansion is

$$\mathcal{N} \, \bar\psi(x) \gamma^\mu \langle \psi(x) \bar\psi(x') \rangle \gamma^\nu \psi(x') \, A_\mu(x) A_\nu(x'),$$

where

$$\langle \psi(x) \bar\psi(x') \rangle = \int \frac{d^4 k}{(2\pi)^4} \, \frac{i \left( \gamma^\rho k_\rho + m \right)}{k^2 - m^2 + i0} \, e^{-ik \cdot (x - x')}$$

is the fermionic contraction (propagator).
Path integral formulation
In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory must have a well-defined ground state, and the integral must be performed a little bit rotated into imaginary time, i.e. a Wick rotation. The path integral formalism is completely equivalent to the canonical operator formalism above.
Scalar field Lagrangian
A simple example is the free relativistic scalar field in d dimensions, whose action integral is:

$$S = \int \tfrac{1}{2} \partial_\mu \phi \, \partial^\mu \phi \, d^d x.$$

The probability amplitude for a process is:

$$\int_A^B e^{iS} \, D\phi,$$

where A and B are space-like hypersurfaces that define the boundary conditions. The collection of all the field values on the starting hypersurface A gives the field's initial value, analogous to the starting position for a point particle, and the field values at each point of the final hypersurface B define the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude.
The path integral gives the expectation value of operators between the initial and final state:
and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path-integral is defined slightly rotated into imaginary time). The path integral can be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant does not change anything:
The field's partition function is the normalization factor on the bottom, which coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time.
The initial-to-final amplitudes are ill-defined if one thinks of the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. So the path-integral can be thought of as on a discrete square lattice with lattice spacing a, and the limit a → 0 should be taken carefully. If the final results do not depend on the shape of the lattice or the value of a, then the continuum limit exists.
On a lattice
On a lattice, (i), the field can be expanded in Fourier modes:

$$\phi(x) = \int \frac{d^d k}{(2\pi)^d} \, \phi(k) \, e^{ik \cdot x}.$$

Here the integration domain is over k restricted to a cube of side length 2π/a, so that large values of k are not allowed. It is important to note that the k-measure contains the factors of 2π from Fourier transforms; this is the best standard convention for k-integrals in QFT. The lattice means that fluctuations at large k are not allowed to contribute right away; they only start to contribute in the limit a → 0. Sometimes, instead of a lattice, the field modes are just cut off at high values of k instead.
It is also convenient from time to time to consider the space-time volume to be finite, so that the k modes are also a lattice. This is not strictly as necessary as the space-lattice limit, because interactions in k are not localized, but it is convenient for keeping track of the factors in front of the k-integrals and the momentum-conserving delta functions that will arise.
On a lattice, (ii), the action needs to be discretized:

$$S = \sum_{\langle x, y \rangle} \tfrac{1}{2} \big( \phi(x) - \phi(y) \big)^2,$$

where ⟨x, y⟩ is a pair of nearest lattice neighbors x and y. The discretization should be thought of as defining what the derivative ∂_μφ means.
In terms of the lattice Fourier modes, the action can be written:
For k near zero this is:
Now we have the continuum Fourier transform of the original action. In finite volume, the quantity d^dk is not infinitesimal, but becomes the volume of a box made by neighboring Fourier modes, or (2π)^d / V.
The field φ is real-valued, so the Fourier transform obeys:

$$\phi(k)^* = \phi(-k).$$

In terms of real and imaginary parts, the real part of φ(k) is an even function of k, while the imaginary part is odd. The Fourier transform avoids double-counting, so that it can be written over an integration domain that integrates over each pair (k, −k) exactly once.
For a complex scalar field with action
the Fourier transform is unconstrained:
and the integral is over all k.
Integrating over all different values of φ(x) is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If
then
If A is a rotation, then

$$A^T A = I,$$

so that det A = ±1, and the sign depends on whether the rotation includes a reflection or not.
The matrix that changes coordinates from φ(x) to φ(k) can be read off from the definition of a Fourier transform.
and the Fourier inversion theorem tells you the inverse:
which is the complex conjugate-transpose, up to factors of 2π. On a finite volume lattice, the determinant is nonzero and independent of the field values.
and the path integral is a separate factor at each value of k.
The factor d^dk is the infinitesimal volume of a discrete cell in k-space, in a square lattice box
where L is the side-length of the box. Each separate factor is an oscillatory Gaussian, and the width of the Gaussian diverges as the volume goes to infinity.
In imaginary time, the Euclidean action becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values φ_k is
The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution:
Since the probability of φ_k is a product, the value of φ_k at each separate value of k is independently Gaussian distributed. The variance of the Gaussian is 1/(k² d^dk), which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of the integral is V/k².
Monte Carlo
The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber k to be a Gaussian random variable with variance 1/k². This generates a configuration φ_C(k) at random, and the Fourier transform gives φ_C(x). For real scalar fields, the algorithm must generate only one of each pair φ(k), φ(−k), and make the second the complex conjugate of the first.
To find any correlation function, generate a field again and again by this procedure, and find the statistical average:

$$\langle \phi(x_1) \cdots \phi(x_n) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_C \phi_C(x_1) \cdots \phi_C(x_n),$$

where N is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions.
For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions.
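A minimal sketch of the sampling procedure in Python, for a free massive scalar field on a one-dimensional periodic lattice (the mass, lattice size, sample count, and the use of numpy are illustrative choices, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, n_cfg = 64, 0.5, 4000        # lattice sites, mass, number of samples

# Lattice momentum squared of each Fourier mode: 2 - 2*cos(2*pi*j/N).
k2 = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(N))

acc = np.zeros(N)
for _ in range(n_cfg):
    eta = rng.standard_normal(N)                   # unit-variance white noise
    phi_k = np.fft.fft(eta) / np.sqrt(k2 + m * m)  # color the modes
    phi = np.fft.ifft(phi_k).real                  # a real field configuration
    acc += np.abs(np.fft.fft(phi)) ** 2            # accumulate |phi(k)|^2

measured = acc / (n_cfg * N)                       # normalize FFT conventions
expected = 1.0 / (k2 + m * m)                      # free lattice propagator
print(np.allclose(measured, expected, rtol=0.1))   # True up to sampling noise
```

Each configuration is drawn by coloring white noise with the square root of the propagator, so the averaged mode intensities reproduce 1/(k² + m²) up to statistical error.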
Scalar propagator
Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate:
for k ≠ k′, since then the two Gaussian random variables are independent and both have zero mean.
in finite volume V, when the two k-values coincide, since this is the variance of the Gaussian. In the infinite volume limit,
Strictly speaking, this is an approximation: the lattice propagator is:
But near k = 0, for field fluctuations long compared to the lattice spacing, the two forms coincide.
The delta functions contain factors of 2π, so that they cancel out the 2π factors in the measure for k-integrals.
where δ(k) is the ordinary one-dimensional Dirac delta function. This convention for delta-functions is not universal—some authors keep the factors of 2π in the delta functions (and in the k-integration) explicit.
Equation of motion
The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is:
and in an expectation value, this says:
where the derivatives act on x, and the identity is true everywhere except when x and y coincide, and the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta-function. Defining the (Euclidean) Feynman propagator as the Fourier transform of the time-ordered two-point function (the one that comes from the path-integral):
So that:
If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix that defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of i disappears in the Euclidean theory.
Wick theorem
Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obey Wick's theorem:
is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of φ's, and for an even number of φ's, it is equal to a contribution from each pair separately, with a delta function.
where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example,
An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting a delta function factor that ensures that the momentum of each partner in the pair is equal, and dividing by the propagator.
Higher Gaussian moments — completing Wick's theorem
There is a subtle point left before Wick's theorem is proved—what if more than two of the φ's have the same momentum? If it's an odd number, the integral is zero; negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the φ's would only match up in pairs.
But the theorem is correct even when arbitrarily many of the φ's are equal, and this is a notable property of Gaussian integration:

$$I = \int e^{-x^2/2} \, dx, \qquad \int x^{2n} \, e^{-x^2/2} \, dx = (2n-1)!! \, I.$$

Dividing by I,

$$\langle x^{2n} \rangle = (2n-1)!!$$

If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of 2n different x's:
where the x's are all the same variable, the index is just to keep track of the number of ways to pair them. The first x can be paired with 2n − 1 others, leaving 2n − 2 x's. The next unpaired x can be paired with 2n − 3 different x's, leaving 2n − 4, and so on. This means that Wick's theorem, uncorrected, says that the expectation value of x^{2n} should be:

$$\langle x^{2n} \rangle = (2n-1)(2n-3) \cdots 5 \cdot 3 \cdot 1,$$
and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide.
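The pairing count is easy to check directly. A small sketch (illustrative, not from the original) that enumerates all pairings of 2n labels and compares the count with the double factorial (2n − 1)!!, which is also the 2n-th moment of a unit Gaussian:

```python
import math

def pairings(items):
    """Yield all ways to partition `items` into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

for n in range(1, 6):
    count = sum(1 for _ in pairings(list(range(2 * n))))
    double_fac = math.prod(range(2 * n - 1, 0, -2))   # (2n-1)!!
    # One contribution per pairing, as Wick's theorem asserts.
    print(n, count, double_fac, count == double_fac)
```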
Interaction
Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action:

$$S_{\rm int} = \frac{\lambda}{4!} \int \phi(x)^4 \, d^d x.$$
The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes:
Here the quadratic part is the free action, whose correlation functions are given by Wick's theorem. The exponential of the action in the path integral can be expanded in powers of λ, giving a series of corrections to the free action.
The path integral for the interacting action is then a power series of corrections to the free action. The term represented by X = λφ⁴/4! should be thought of as four half-lines, one for each factor of φ. The half-lines meet at a vertex, which contributes a delta-function that ensures that the momenta at the vertex sum to zero.
To compute a correlation function in the interacting theory, there is a contribution from the X terms now. For example, the path-integral for the four-field correlator:
which in the free field was only nonzero when the momenta were equal in pairs, is now nonzero for all values of the momenta. The momenta of the insertions φ(k_i) can now match up with the momenta of the φ's in the Xs in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum k_i, but one that is not integrated.
The lowest-order contribution comes from the first nontrivial term in the Taylor expansion of the action. Wick's theorem requires that the momenta in the X half-lines, the factors of φ in the X, should match up with the momenta of the external half-lines in pairs. The new contribution is equal to:
The 4! inside the X is canceled because there are exactly 4! ways to match the half-lines in the X to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of the momenta, by Wick's theorem.
Feynman diagrams
The expansion of the action in powers of λ gives a series of terms with a progressively higher number of Xs. The contribution from the term with exactly n Xs is called nth order.
The nth order term has:
4n internal half-lines, which are the factors of φ from the Xs. These all end on a vertex, and are integrated over all possible k.
External half-lines, which come from the φ(k) insertions in the integral.
By Wick's theorem, each pair of half-lines must be paired together to make a line, and this line gives a factor of the propagator, together with a momentum-conserving delta function,
which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labeled by an arrow, drawn parallel to the line, and labeled by the momentum in the line, k. The half-line at the tail end of the arrow carries momentum k, while the half-line at the head end carries momentum −k. If one of the two half-lines is external, this kills the integral over the internal k, since it forces the internal k to be equal to the external k. If both are internal, the integral over k remains.
The diagrams that are formed by linking the half-lines in the Xs with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of 1/k², the propagator, and either goes from vertex to vertex, or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming k is equal to the total outgoing k.
The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex.
Loop order
A forest diagram is one where all the internal lines have momentum that is completely determined by the external lines and the condition that the incoming and outgoing momentum are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. A tree diagram is a connected forest diagram.
An example of a tree diagram is the one where each of four external lines ends on an X. Another is when three external lines end on an X, and the remaining half-line joins up with another X, and the remaining half-lines of this X run off to external lines. These are all also forest diagrams (as every tree is a forest); an example of a forest that is not a tree is when eight external lines end on two Xs.
It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation in each vertex.
A diagram that is not a forest diagram is called a loop diagram, and an example is one where two lines of an X are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two Xs are joined to each other by matching the legs one to the other. This diagram has no external lines at all.
The reason loop diagrams are called loop diagrams is because the number of k-integrals that are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually R^d-valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the k-valued weighted graph is zero.
A set of valid k-values can be arbitrarily redefined whenever there is a closed loop. A closed loop is a cyclical path of adjacent vertices that never revisits the same vertex. Such a cycle can be thought of as the boundary of a hypothetical 2-cell. The k-labellings of a graph that conserve momentum (i.e. which have zero boundary) up to redefinitions of k (i.e. up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta that are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
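In graph-theoretic terms the number of independent loops is the cycle rank E − V + C of the diagram built from the internal lines, with E internal lines, V vertices, and C connected components. A minimal sketch (the example diagrams are illustrative):

```python
def loop_order(n_vertices, edges):
    """Cycle rank E - V + C of a multigraph; `edges` lists internal
    lines as (u, v) vertex pairs, and self-loops are allowed."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    components = len({find(v) for v in range(n_vertices)})
    return len(edges) - n_vertices + components

print(loop_order(1, [(0, 0)]))          # vertex with a self-loop: 1 loop
print(loop_order(2, [(0, 1), (0, 1)]))  # two vertices, two lines: 1 loop
print(loop_order(2, [(0, 1)] * 4))      # two vertices, four lines: 3 loops
```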
Symmetry factors
The number of ways to form a given Feynman diagram by joining half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete.
The uncancelled denominator is called the symmetry factor of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor.
For example, consider the Feynman diagram formed from two external lines joined to one X, and the remaining two half-lines in the X joined to each other. There are 4 × 3 ways to join the external half-lines to the X, and then there is only one way to join the two remaining lines to each other. The X comes divided by 4! = 24, but the number of ways to link up the half-lines to make the diagram is only 4 × 3 = 12, so the contribution of this diagram is divided by two.
For another example, consider the diagram formed by joining all the half-lines of one X to all the half-lines of another X. This diagram is called a vacuum bubble, because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two Xs) and two factors of 4!. The contribution is multiplied by 4!/(2! × 4! × 4!) = 1/48.
Another example is the Feynman diagram formed from two Xs where each X links up to two external lines, and the remaining two half-lines of each X are joined to each other. The number of ways to link an X to two external lines is 4 × 3, and either X could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two Xs can be linked to each other in two ways, so that the total number of ways to form the diagram is 4 × 3 × 4 × 3 × 2 × 2 = 576, while the denominator is 2! × 4! × 4! = 1152. The total symmetry factor is 2, and the contribution of this diagram is divided by 2.
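These counts are small enough to verify by brute force. A sketch (the labels are illustrative) that enumerates Wick pairings of labeled half-lines and recovers the divisors 2 and 48 from the first two examples above:

```python
from math import factorial

def pairings(items):
    """Yield all ways to partition `items` into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

# One X with half-lines v0..v3 plus external half-lines e1, e2: the
# diagram requires each external half-line to end on the vertex.
halves = ["e1", "e2", "v0", "v1", "v2", "v3"]
matches = sum(1 for p in pairings(halves)
              if not any(a[0] == "e" and b[0] == "e" for a, b in p))
print(matches, factorial(4) // matches)          # 12, divided by 2

# Vacuum bubble: every line must run between the two vertices a and b.
halves = [("a", i) for i in range(4)] + [("b", i) for i in range(4)]
matches = sum(1 for p in pairings(halves)
              if all(x[0] != y[0] for x, y in p))
denominator = factorial(2) * factorial(4) * factorial(4)
print(matches, denominator // matches)           # 24, divided by 48
```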
The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has.
An automorphism of a Feynman graph is a permutation M of the lines and a permutation N of the vertices with the following properties:
If a line l goes from vertex v to vertex v′, then M(l) goes from N(v) to N(v′). If the line is undirected, as it is for a real scalar field, then M(l) can go from N(v′) to N(v) too.
If a line l ends on an external line, M(l) ends on the same external line.
If there are different types of lines, M(l) should preserve the type.
This theorem has an interpretation in terms of particle-paths: when identical particles are present, the integral over all intermediate particles must not double-count states that differ only by interchanging identical particles.
Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to a name and then to the other half-line.
Now count the number of ways to form the named diagram. Each permutation of the Xs gives a different pattern of linking names to half-lines, and this is a factor of n!. Each permutation of the half-lines in a single X gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion.
But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.
Connected diagrams: linked-cluster theorem
Roughly speaking, a Feynman diagram is called connected if all vertices and propagator lines are linked by a sequence of vertices and propagators of the diagram itself. If one views it as an undirected graph, it is connected. The remarkable relevance of such diagrams in QFTs is due to the fact that they are sufficient to determine the quantum partition function Z. More precisely, connected Feynman diagrams determine

$$\ln Z.$$

To see this, one should recall that

$$Z \propto \sum_k D_k,$$

with D_k constructed from some (arbitrary) Feynman diagram that can be thought to consist of several connected components C_i. If one encounters n_i (identical) copies of a component C_i within the Feynman diagram D_k, one has to include a symmetry factor n_i!. However, in the end each contribution of a Feynman diagram D_k to the partition function has the generic form

$$\prod_i \frac{C_i^{n_i}}{n_i!},$$

where i labels the (infinitely) many possible connected Feynman diagrams.
A scheme to successively create such contributions from the D_k to Z is obtained by

$$Z \propto \prod_i \sum_{n_i = 0}^{\infty} \frac{C_i^{n_i}}{n_i!},$$

and therefore yields

$$Z \propto \exp\left( \sum_i C_i \right).$$
To establish the normalization one simply calculates all connected vacuum diagrams, i.e., the diagrams without any sources (sometimes referred to as external legs of a Feynman diagram).
The linked-cluster theorem was first proved to order four by Keith Brueckner in 1955, and for infinite orders by Jeffrey Goldstone in 1957.
Vacuum bubbles
An immediate consequence of the linked-cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals:

$$\langle \phi(x_1) \cdots \phi(x_n) \rangle = \frac{\displaystyle \int \phi(x_1) \cdots \phi(x_n) \, e^{-S} \, D\phi}{\displaystyle \int e^{-S} \, D\phi}.$$
The top is the sum over all Feynman diagrams, including disconnected diagrams that do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator:
Where the sum over diagrams includes only those diagrams each of whose connected components end on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor.
The vacuum bubbles then are only useful for determining Z itself, which from the definition of the path integral is equal to:
where ρ is the energy density in the vacuum. Each vacuum bubble contains a factor of δ(k) zeroing the total k at each vertex, and when there are no external lines, this contains a factor of δ(0), because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of space-time. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.
Sources
Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing sources unifies the formalism, by making new vertices where one line can end.
Sources are external fields, fields that contribute to the action, but are not dynamical variables. A scalar field source is another scalar field h that contributes a term to the (Lorentz) Lagrangian:

$$\int h(x) \, \phi(x) \, d^d x.$$

In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an X vertex, or on an H vertex, and only one line enters an H vertex. The Feynman rule for an H vertex is that a line from an H with momentum k gets a factor of h(k).
The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "×" with one line extending out, exactly as an insertion.
where C(k₁, …, kₙ) is the connected diagram with n external lines carrying momenta as indicated. The sum is over all connected diagrams, as before.
The field h is not dynamical, which means that there is no path integral over h: h is just a parameter in the Lagrangian, which varies from point to point. The path integral for the field is:

$$Z(h) \propto \int e^{iS + i \int h \phi \, d^d x} \, D\phi,$$

and it is a function of the values of h at every point. One way to interpret this expression is that it is taking the Fourier transform in field space. If there is a probability density on Rⁿ, the Fourier transform of the probability density is:
The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source is:
which, on a lattice, is the product of an oscillatory exponential for each field value:
The Fourier transform of a delta-function is a constant, which gives a formal expression for a delta function:
This tells you what a field delta function looks like in a path-integral. For two scalar fields φ and η,
which integrates over the Fourier transform coordinate, over h.
The partition function is now a function of the field h, and the physical partition function is the value when h is the zero function:
The correlation functions are derivatives of the path integral with respect to the source.
In Euclidean space, source contributions to the action can still appear with a factor of i, so that they still do a Fourier transform.
Spin 1/2; "photons" and "ghosts"
Spin 1/2: Grassmann integrals
The field path integral can be extended to the Fermi case, but only if the notion of integration is expanded. A Grassmann integral of a free Fermi field is a high-dimensional determinant or Pfaffian, which defines the new type of Gaussian integration appropriate for Fermi fields.
The two fundamental formulas of Grassmann integration are:

$$\int e^{\bar\theta M \theta} \, d\bar\theta \, d\theta = \det M,$$

where M is an arbitrary matrix and θ_i, θ̄_i are independent Grassmann variables for each index i, and

$$\int e^{\frac{1}{2} \theta^T A \theta} \, d\theta = \mathrm{Pf}(A),$$

where A is an antisymmetric matrix, θ is a collection of Grassmann variables, and the 1/2 is to prevent double-counting (since θ_iθ_j = −θ_jθ_i).
In matrix notation, where η̄ and θ̄ are Grassmann-valued row vectors, η and θ are Grassmann-valued column vectors, and M is a real-valued matrix:

$$Z = \int e^{\bar\theta M \theta + \bar\eta \theta + \bar\theta \eta} \, d\bar\theta \, d\theta = \det M \; e^{-\bar\eta M^{-1} \eta},$$

where the last equality is a consequence of the translation invariance of the Grassmann integral. The Grassmann variables η̄ and η are external sources for θ and θ̄, and differentiating with respect to η pulls down factors of θ̄,
again, in a schematic matrix notation. The meaning of the formula above is that the derivative with respect to the appropriate component of η and η̄ gives the matrix element of M⁻¹. This is exactly analogous to the bosonic path integration formula for a Gaussian integral of a complex bosonic field:

$$\int e^{-\phi^* M \phi + h^* \phi + \phi^* h} \, d\phi^* \, d\phi \propto \frac{e^{h^* M^{-1} h}}{\det M}.$$
So that the propagator is the inverse of the matrix in the quadratic part of the action in both the Bose and Fermi case.
For real Grassmann fields, for Majorana fermions, the path integral is a Pfaffian times a source quadratic form, and the formulas give the square root of the determinant, just as they do for real Bosonic fields. The propagator is still the inverse of the quadratic part.
The free Dirac Lagrangian:

$$\int \bar\psi \left( i \gamma^\mu \partial_\mu - m \right) \psi \, d^4 x$$

formally gives the equations of motion and the anticommutation relations of the Dirac field, just as the Klein–Gordon Lagrangian in an ordinary path integral gives the equations of motion and commutation relations of the scalar field. By using the spatial Fourier transform of the Dirac field as a new basis for the Grassmann algebra, the quadratic part of the Dirac action becomes simple to invert:
The propagator is the inverse of the matrix linking ψ(k) and ψ̄(k), since different values of k do not mix together.
The analog of Wick's theorem matches ψ and ψ̄ in pairs:
where S is the sign of the permutation that reorders the sequence of ψ̄ and ψ to put the ones that are paired up to make the delta-functions next to each other, with the ψ̄ coming right before the ψ. Since a ψ̄ψ pair is a commuting element of the Grassmann algebra, it does not matter what order the pairs are in. If more than one pair have the same k, the integral is zero, and it is easy to check that the sum over pairings gives zero in this case (there are always an even number of them). This is the Grassmann analog of the higher Gaussian moments that completed the Bosonic Wick's theorem earlier.
The rules for spin-1/2 Dirac particles are as follows: The propagator is the inverse of the Dirac operator, the lines have arrows just as for a complex scalar field, and the diagram acquires an overall factor of −1 for each closed Fermi loop. If there are an odd number of Fermi loops, the diagram changes sign. Historically, the −1 rule was very difficult for Feynman to discover. He discovered it after a long process of trial and error, since he lacked a proper theory of Grassmann integration.
The rule follows from the observation that the number of Fermi lines at a vertex is always even. Each term in the Lagrangian must always be Bosonic. A Fermi loop is counted by following Fermionic lines until one comes back to the starting point, then removing those lines from the diagram. Repeating this process eventually erases all the Fermionic lines: this is the Euler algorithm to 2-color a graph, which works whenever each vertex has even degree. The number of steps in the Euler algorithm is only equal to the number of independent Fermionic homology cycles in the common special case that all terms in the Lagrangian are exactly quadratic in the Fermi fields, so that each vertex has exactly two Fermionic lines. When there are four-Fermi interactions (like in the Fermi effective theory of the weak nuclear interactions) there are more -integrals than Fermi loops. In this case, the counting rule should apply the Euler algorithm by pairing up the Fermi lines at each vertex into pairs that together form a bosonic factor of the term in the Lagrangian, and when entering a vertex by one line, the algorithm should always leave with the partner line.
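When every vertex has exactly two fermion lines, the fermion flow defines a successor map on the vertices and the Fermi loops are its cycles; a minimal sketch of the resulting sign (the vertex labels are illustrative):

```python
def fermi_sign(successor):
    """(-1)**(number of closed Fermi loops) for a diagram in which each
    vertex has exactly two fermion lines, so following the arrows gives
    a permutation `successor` of the vertices; loops are its cycles."""
    seen, loops = set(), 0
    for start in successor:
        if start in seen:
            continue
        loops += 1
        v = start
        while v not in seen:          # walk the loop until it closes
            seen.add(v)
            v = successor[v]
    return (-1) ** loops

print(fermi_sign({0: 1, 1: 0, 2: 3, 3: 2}))   # two loops: +1
print(fermi_sign({0: 1, 1: 2, 2: 3, 3: 0}))   # one loop: -1
```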
To clarify and prove the rule, consider a Feynman diagram formed from n vertices, terms in the Lagrangian, each with an even number of Fermi fields. The full term is Bosonic; it is a commuting element of the Grassmann algebra, so the order in which the vertices appear is not important. The Fermi lines are linked into loops, and when traversing the loop, one can reorder the vertex terms one after the other as one goes around without any sign cost. The exception is when you return to the starting point, and the final half-line must be joined with the unlinked first half-line. This requires one permutation to move the last ψ̄ to go in front of the first ψ, and this gives the sign.
This rule is the only visible effect of the exclusion principle in internal lines. When there are external lines, the amplitudes are antisymmetric when two Fermi insertions for identical particles are interchanged. This is automatic in the source formalism, because the sources for Fermi fields are themselves Grassmann valued.
Spin 1: photons
The naive propagator for photons is infinite, since the Lagrangian for the A-field is:

$$S = \int -\tfrac{1}{4} F^{\mu\nu} F_{\mu\nu} \, d^4 x.$$

The quadratic form defining the propagator is non-invertible. The reason is the gauge invariance of the field; adding a gradient to A does not change the physics.
To fix this problem, one needs to fix a gauge. The most convenient way is to demand that the divergence of A is some function f, whose value is random from point to point. It does no harm to integrate over the values of f, since it only determines the choice of gauge. This procedure inserts the following factor into the path integral for A:
The first factor, the delta function, fixes the gauge. The second factor sums over different values of f that are inequivalent gauge fixings. This is simply
The additional contribution from gauge-fixing cancels the second half of the free Lagrangian, giving the Feynman Lagrangian:

$$S = \int -\tfrac{1}{2} \partial^\mu A^\nu \, \partial_\mu A_\nu \, d^4 x,$$

which is just like four independent free scalar fields, one for each component of A. The Feynman propagator is:
The one difference is that the sign of one propagator is wrong in the Lorentz case: the timelike component has an opposite sign propagator. This means that these particle states have negative norm—they are not physical states. In the case of photons, it is easy to show by diagram methods that these states are not physical—their contribution cancels with longitudinal photons to only leave two physical photon polarization contributions for any value of k.
If the averaging over f is done with a coefficient different from 1/2, the two terms do not cancel completely. This gives a covariant Lagrangian with a coefficient λ, which does not affect anything:
and the covariant propagator for QED is:
Spin 1: non-Abelian ghosts
To find the Feynman rules for non-Abelian gauge fields, the procedure that performs the gauge fixing must be carefully corrected to account for a change of variables in the path-integral.
The gauge fixing factor has an extra determinant from popping the delta function:
To find the form of the determinant, consider first a simple two-dimensional integral of a function that depends only on r, not on the angle θ. Inserting an integral over θ:
The derivative-factor ensures that popping the delta function in θ removes the integral. Exchanging the order of integration,
but now the delta-function can be popped in y,
The integral over θ just gives an overall factor of 2π, while the rate of change of y with a change in θ is just x, so this exercise reproduces the standard formula for polar integration of a radial function:

$$\int f(r) \, dx \, dy = 2\pi \int_0^\infty f(r) \, r \, dr.$$
In the path-integral for a nonabelian gauge field, the analogous manipulation is:
The factor in front is the volume of the gauge group, and it contributes a constant, which can be discarded. The remaining integral is over the gauge fixed action.
To get a covariant gauge, the gauge fixing condition is the same as in the Abelian case:

$$\partial^\mu A_\mu = f,$$
Whose variation under an infinitesimal gauge transformation is given by:
where α is the adjoint-valued element of the Lie algebra at every point that performs the infinitesimal gauge transformation. This adds the Faddeev–Popov determinant to the action:
which can be rewritten as a Grassmann integral by introducing ghost fields:
The determinant is independent of f, so the path-integral over f can give the Feynman propagator (or a covariant propagator) by choosing the measure for f as in the abelian case. The full gauge-fixed action is then the Yang–Mills action in Feynman gauge with an additional ghost action:
The diagrams are derived from this action. The propagator for the spin-1 fields has the usual Feynman form. There are vertices of degree 3 with momentum factors whose couplings are the structure constants, and vertices of degree 4 whose couplings are products of structure constants. There are additional ghost loops, which cancel out timelike and longitudinal states in loops.
In the Abelian case, the determinant for covariant gauges does not depend on A, so the ghosts do not contribute to the connected diagrams.
Particle-path representation
Feynman diagrams were originally discovered by Feynman, by trial and error, as a way to represent the contribution to the S-matrix from different classes of particle trajectories.
Schwinger representation
The Euclidean scalar propagator has a suggestive representation:

$$\frac{1}{p^2 + m^2} = \int_0^\infty e^{-\tau \left( p^2 + m^2 \right)} \, d\tau.$$
The meaning of this identity (which is an elementary integration) is made clearer by Fourier transforming to real space.
The contribution at any one value of to the propagator is a Gaussian of width . The total propagation function from 0 to is a weighted sum over all proper times of a normalized Gaussian, the probability of ending up at after a random walk of time .
The path-integral representation for the propagator is then:
which is a path-integral rewrite of the Schwinger representation.
The Schwinger representation is both useful for making manifest the particle aspect of the propagator, and for symmetrizing denominators of loop diagrams.
Combining denominators
The Schwinger representation has an immediate practical application to loop diagrams. For example, for the diagram in the theory formed by joining two s together in two half-lines, and making the remaining lines external, the integral over the internal propagators in the loop is:
Here one line carries momentum and the other . The asymmetry can be fixed by putting everything in the Schwinger representation.
Now the exponent mostly depends on ,
except for the asymmetrical little bit. Defining the variable and , the variable goes from 0 to , while goes from 0 to 1. The variable is the total proper time for the loop, while parametrizes the fraction of the proper time on the top of the loop versus the bottom.
The Jacobian for this transformation of variables is easy to work out from the identities:
and "wedging" gives
.
This allows the integral to be evaluated explicitly:
leaving only the -integral. This method, invented by Schwinger but usually attributed to Feynman, is called combining denominators. Abstractly, it is the elementary identity:
But this form does not provide the physical motivation for introducing ; is the proportion of proper time on one of the legs of the loop.
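The elementary identity referred to above is the standard two-denominator Feynman/Schwinger formula; a reconstruction sketch:

```latex
\frac{1}{AB} \;=\; \int_0^1 \frac{dv}{\big(vA + (1-v)B\big)^2}
% Check: substituting w = vA + (1-v)B, so dw = (A-B) dv, the integral
% becomes (1/(A-B)) * (1/B - 1/A) = 1/(AB).
```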
Once the denominators are combined, a shift in the loop momentum symmetrizes everything:
This form shows that as soon as the external momentum invariant is more negative than four times the mass squared of the particle in the loop, which happens in a physical region of Lorentz space, the integral has a cut. This is exactly when the external momentum can create physical particles.
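To make the threshold statement concrete, here is a sketch of the combined, shifted loop integral; the Euclidean conventions and the name q for the external momentum are assumptions:

```latex
% After combining denominators and shifting k -> k - (1-v) q:
\int \frac{d^4k}{(2\pi)^4} \int_0^1
   \frac{dv}{\big(k^2 + m^2 + v(1-v)\,q^2\big)^2}
% Since v(1-v) <= 1/4, the continued denominator can first vanish when
% q^2 = -4 m^2, i.e. when the external invariant mass reaches 2m:
% the two-particle threshold, where the cut begins.
```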
When the loop has more vertices, there are more denominators to combine:
The general rule follows from the Schwinger prescription for denominators:
The integral over the Schwinger parameters can be split up as before into an integral over the total proper time and an integral over the fraction of the proper time in all but the first segment of the loop for . The are positive and add up to less than 1, so that the integral is over an -dimensional simplex.
The Jacobian for the coordinate transformation can be worked out as before:
Wedging all these equations together, one obtains
This gives the integral:
where the simplex is the region defined by the conditions
as well as
Performing the integral gives the general prescription for combining denominators:
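The general prescription stripped here is standard; a reconstruction sketch in both the Schwinger (proper-time) and Feynman-parameter forms:

```latex
% Schwinger form: one proper-time parameter per denominator.
\frac{1}{A_1 A_2 \cdots A_n}
  = \int_0^\infty\!\! dt_1 \cdots \int_0^\infty\!\! dt_n\;
    e^{-(t_1 A_1 + \cdots + t_n A_n)}
% Integrating out the overall proper time leaves the simplex form:
\frac{1}{A_1 A_2 \cdots A_n}
  = (n-1)! \int_0^1\!\! dv_1 \cdots \int_0^1\!\! dv_n\;
    \frac{\delta\!\left(1 - v_1 - \cdots - v_n\right)}
         {\left(v_1 A_1 + \cdots + v_n A_n\right)^n}
```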
Since the numerator of the integrand is not involved, the same prescription works for any loop, no matter what spins are carried by the legs. The interpretation of the parameters is that they are the fraction of the total proper time spent on each leg.
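As a quick sanity check, a minimal numeric sketch (my illustration, not from the article; `scipy` is assumed available) verifying the two-denominator identity:

```python
# Numeric check of 1/(A*B) = integral_0^1 dv / (v*A + (1-v)*B)^2
# for positive denominators A and B.
from scipy.integrate import quad

def combined_denominator(v, A, B):
    # Integrand after introducing the Feynman parameter v.
    return 1.0 / (v * A + (1.0 - v) * B) ** 2

A, B = 3.7, 1.2  # arbitrary positive denominators, e.g. k^2 + m^2 values
integral, abserr = quad(combined_denominator, 0.0, 1.0, args=(A, B))
print(integral, 1.0 / (A * B))  # agree to within quad's reported error abserr
```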
Scattering
The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space.
In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states that form an irreducible representation of the Poincaré group. Single-particle states describe an object with a finite mass, a well-defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories.
A field operator can act to produce a one-particle state from the vacuum, which means that the field operator produces a superposition of Wigner particle states. In the free field theory, the field produces one-particle states only. But when there are interactions, the field operator can also produce 3-particle, 5-particle (if there is no +/− symmetry also 2, 4, 6 particle) states too. Computing the scattering amplitude for single-particle states alone requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections.
The relation between scattering and correlation functions is the LSZ theorem: the scattering amplitude for particles to go to particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for field insertions, leaving out the propagators for the external legs.
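Schematically, and with conventions that vary from text to text (so treat the normalization as an assumption), the LSZ prescription reads:

```latex
% Scattering amplitude from the momentum-space correlation function,
% with each external-leg propagator pole amputated:
\mathcal{M}(p_1,\dots,p_N) \;\propto\;
  \left[\prod_{i=1}^{N} \lim_{p_i^2 \to m^2} \left(p_i^2 - m^2\right)\right]
  \tilde{G}(p_1,\dots,p_N)
```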
For example, for the interaction of the previous section, the order contribution to the (Lorentz) correlation function is:
Stripping off the external propagators, that is, removing the factors of , gives the invariant scattering amplitude :
which is a constant, independent of the incoming and outgoing momentum. The interpretation of the scattering amplitude is that the sum of over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that is a relativistic invariant.
Non-relativistic single particle states are labeled by the momentum , and they are chosen to have the same norm at every value of . This is because the nonrelativistic unit operator on single particle states is:
In relativity, the integral over the -states for a particle of mass m integrates over a hyperbola in space defined by the energy–momentum relation:
If the integral weighs each point equally, the measure is not Lorentz-invariant. The invariant measure integrates over all values of and , restricting to the hyperbola with a Lorentz-invariant delta function:
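A reconstruction sketch of the two normalizations being compared; the metric signature and the placement of factors of 2π are assumptions:

```latex
% Nonrelativistic completeness relation on single-particle states:
\mathbf{1} = \int \frac{d^3k}{(2\pi)^3}\; |k\rangle\langle k|
% Lorentz-invariant measure: integrate over all (E, k), restricted to
% the mass hyperbola by an invariant delta function:
\int \frac{d^4k}{(2\pi)^4}\; 2\pi\,\delta\!\left(k^2 - m^2\right)\theta(k^0)
  = \int \frac{d^3k}{(2\pi)^3}\; \frac{1}{2 E_k},
  \qquad E_k = \sqrt{\vec{k}^2 + m^2}
% so relativistically normalized states differ by a factor of sqrt(2 E_k).
```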
So the normalized -states are different from the relativistically normalized -states by a factor of
The invariant amplitude is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states.
For nonrelativistic values of , the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor ). In this limit, the invariant scattering amplitude is still constant. The particles created by the field scatter in all directions with equal amplitude.
The nonrelativistic potential, which scatters in all directions with an equal amplitude (in the Born approximation), is one whose Fourier transform is constant—a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory—it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.
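For the nonrelativistic statement, a sketch of the first Born approximation; the coupling name g is an illustrative assumption:

```latex
% First Born approximation: the scattering amplitude is proportional to
% the Fourier transform of the potential at momentum transfer q:
f(\vec{q}) = -\frac{m}{2\pi\hbar^2} \int d^3r\;
   e^{-i\vec{q}\cdot\vec{r}}\, V(\vec{r})
% For a contact (delta-function) potential V(r) = g * delta^3(r):
f(\vec{q}) = -\frac{m g}{2\pi\hbar^2}
% independent of q: equal scattering amplitude in all directions.
```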
Nonperturbative effects
Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect that goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever.
But this point of view is misleading, because the diagrams not only describe scattering, but they also are a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, they also describe the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations that on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes that involve large numbers of particles, but where the interactions between each of the particles is simple. (The perturbation series of any interacting quantum field theory has zero radius of convergence, complicating the limit of the infinite series of diagrams needed (in the limit of vanishing coupling) to describe such field configurations.)
This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe–Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman–Vainshtein–Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way.
The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.
In popular culture
The use of the above diagram of the virtual particle producing a quark–antiquark pair was featured in the television sitcom The Big Bang Theory, in the episode "The Bat Jar Conjecture".
PhD Comics of January 11, 2012, shows Feynman diagrams that visualize and describe quantum academic interactions, i.e. the paths followed by Ph.D. students when interacting with their advisors.
Vacuum Diagrams, a science fiction story by Stephen Baxter, features the titular vacuum diagram, a specific type of Feynman diagram.
Feynman and his wife, Gweneth Howarth, bought a Dodge Tradesman Maxivan in 1975, and had it painted with Feynman diagrams. The van is currently owned by video game designer and physicist Seamus Blackley. Its license plate read "QANTUM".
See also
One-loop Feynman diagram
Julian Schwinger#Schwinger and Feynman
Stueckelberg–Feynman interpretation
Penguin diagram
Path integral formulation
Propagator
List of Feynman diagrams
Angular momentum diagrams (quantum mechanics)
Notes
References
Sources
(expanded, updated version of 't Hooft & Veltman, 1973, cited above)
External links
AMS article: "What's New in Mathematics: Finite-dimensional Feynman Diagrams"
Draw Feynman diagrams explained by Flip Tanedo at Quantumdiaries.com
Drawing Feynman diagrams with FeynDiagram C++ library that produces PostScript output.
Online Diagram Tool A graphical application for creating publication ready diagrams.
JaxoDraw A Java program for drawing Feynman diagrams.
Concepts in physics
Scattering theory
Quantum field theory
Diagrams
Richard Feynman
1948 introductions
Eponymous theorems of physics | Feynman diagram | [
"Physics",
"Chemistry"
] | 13,017 | [
"Quantum field theory",
"Scattering theory",
"Equations of physics",
"Quantum mechanics",
"Eponymous theorems of physics",
"Scattering",
"nan",
"Physics theorems"
] |
11,627 | https://en.wikipedia.org/wiki/Faith%20healing | Faith healing is the practice of prayer and gestures (such as laying on of hands) that are believed by some to elicit divine intervention in spiritual and physical healing, especially the Christian practice. Believers assert that the healing of disease and disability can be brought about by religious faith through prayer or other rituals that, according to adherents, can stimulate a divine presence and power. Religious belief in divine intervention does not depend on empirical evidence of an evidence-based outcome achieved via faith healing. Virtually all scientists and philosophers dismiss faith healing as pseudoscience.
Claims that "a myriad of techniques" such as prayer, divine intervention, or the ministrations of an individual healer can cure illness have been popular throughout history. There have been claims that faith can cure blindness, deafness, cancer, HIV/AIDS, developmental disorders, anemia, arthritis, corns, defective speech, multiple sclerosis, skin rashes, total body paralysis, and various injuries. Recoveries have been attributed to many techniques commonly classified as faith healing. It can involve prayer, a visit to a religious shrine, or simply a strong belief in a supreme being.
Many people interpret the Bible, especially the New Testament, as teaching belief in, and the practice of, faith healing. According to a 2004 Newsweek poll, 72 percent of Americans said they believe that praying to God can cure someone, even if science says the person has an incurable disease. Unlike faith healing, advocates of spiritual healing make no attempt to seek divine intervention, instead believing in divine energy. The increased interest in alternative medicine at the end of the 20th century has given rise to a parallel interest among sociologists in the relationship of religion to health.
Faith healing can be classified as a spiritual, supernatural, or paranormal topic, and, in some cases, belief in faith healing can be classified as magical thinking. The American Cancer Society states "available scientific evidence does not support claims that faith healing can actually cure physical ailments". "Death, disability, and other unwanted outcomes have occurred when faith healing was elected instead of medical care for serious injuries or illnesses." When parents have practiced faith healing but not medical care, many children have died that otherwise would have been expected to live. Similar results are found in adults.
In various belief systems
Christianity
Overview
Regarded as a Christian belief that God heals people through the power of the Holy Spirit, faith healing often involves the laying on of hands. It is also called supernatural healing, divine healing, and miracle healing, among other things. Healing in the Bible is often associated with the ministry of specific individuals including Elijah, Jesus and Paul.
Christian physician Reginald B. Cherry views faith healing as a pathway of healing in which God uses both the natural and the supernatural to heal. Being healed has been described as a privilege of accepting Christ's redemption on the cross. Pentecostal writer Wilfred Graves Jr. views the healing of the body as a physical expression of salvation. The Gospel of Matthew, after describing Jesus exorcising at sunset and healing all of the sick who were brought to him, presents these miracles as a fulfillment of the prophecy in Isaiah 53:4: "He took up our infirmities and carried our diseases".
Even those Christian writers who believe in faith healing do not all believe that one's faith presently brings about the desired healing. "[Y]our faith does not effect your healing now. When you are healed rests entirely on what the sovereign purposes of the Healer are." Larry Keefauver cautions against allowing enthusiasm for faith healing to stir up false hopes. "Just believing hard enough, long enough or strong enough will not strengthen you or prompt your healing. Doing mental gymnastics to 'hold on to your miracle' will not cause your healing to manifest now." Those who actively lay hands on others and pray with them to be healed are usually aware that healing may not always follow immediately. Proponents of faith healing say it may come later, and it may not come in this life. "The truth is that your healing may manifest in eternity, not in time".
New Testament
Parts of the four canonical gospels in the New Testament say that Jesus cured physical ailments well outside the capacity of first-century medicine. Jesus' healing acts are considered miraculous and spectacular due to the results being impossible or statistically improbable. One example is the case of "a woman who had had a discharge of blood for twelve years, and who had suffered much under many physicians, and had spent all that she had, and was not better but rather grew worse". After healing her, Jesus tells her "Daughter, your faith has made you well. Go in peace! Be cured from your illness". On at least two other occasions Jesus credited the sufferer's faith as the means of being healed.
Jesus endorsed the use of the medical assistance of the time (medicines of oil and wine) when he told the parable of the Good Samaritan (Luke 10:25–37), who "bound up [an injured man's] wounds, pouring on oil and wine" (verse 34) as a physician would. Jesus then told the doubting teacher of the law (who had elicited this parable by his self-justifying question, "And who is my neighbor?" in verse 29) to "go, and do likewise" in loving others with whom he would never ordinarily associate (verse 37).
The healing in the gospels is referred to as a "sign" to prove Jesus' divinity and to foster belief in him as the Christ. However, when asked for other types of miracles, Jesus refused some but granted others in consideration of the motive of the request. Some theologians' understanding is that Jesus healed all who were present every single time. Sometimes he determined whether they had faith that he would heal them. Four of the seven miraculous signs performed in the Fourth Gospel that indicated he was sent from God were acts of healing or resurrection: he heals the Capernaum official's son, heals a paralytic by the pool in Bethsaida, heals a man born blind, and raises Lazarus of Bethany from the dead.
Jesus told his followers to heal the sick and stated that signs such as healing are evidence of faith. Jesus also told his followers to "cure sick people, raise up dead persons, make lepers clean, expel demons. You received free, give free".
Jesus sternly ordered many who received healing from him: "Do not tell anyone!" Jesus did not approve of anyone asking for a sign just for the spectacle of it, describing such as coming from a "wicked and adulterous generation".
The apostle Paul believed healing is one of the special gifts of the Holy Spirit, and that the possibility exists that certain persons may possess this gift to an extraordinarily high degree.
In the New Testament Epistle of James, the faithful are told that to be healed, those who are sick should call upon the elders of the church to pray over [them] and anoint [them] with oil in the name of the Lord.
The New Testament says that during Jesus' ministry and after his Resurrection, the apostles healed the sick and cast out demons, made lame men walk, raised the dead and performed other miracles. Apostles were holy men who had direct access to God and could channel his power to help and heal people. For example, Saint Peter healed a disabled man.
Jesus used miracles to convince people that he was inaugurating the Messianic Age, as in Mt 12.28. Scholars have described Jesus' miracles as establishing the kingdom during his lifetime.
Early Christian church
Accounts of, or references to, healing appear in the writings of many Ante-Nicene Fathers, although many of these mentions are very general and do not include specifics.
Catholicism
The Roman Catholic Church recognizes two "not mutually exclusive" kinds of healing, one justified by science and one justified by faith:
healing by human "natural means through the practice of medicine" which emphasizes that the theological virtue of "charity demands that we not neglect natural means of healing people who are ill" and the cardinal virtue of prudence forewarns not "to employ a technique that has no scientific support (or even plausibility)"
healing by divine grace "interceded on behalf of the sick through the invocation of the name of the Lord Jesus, asking for healing through the power of the Holy Spirit, whether in the form of the sacramental laying on of hands and anointing with oil or of simple prayers for healing, which often include an appeal to the saints for their aid"
In 2000, the Congregation for the Doctrine of the Faith issued "Instruction on prayers for healing" with specific norms about prayer meetings for obtaining healing, which presents the Catholic Church's doctrines of sickness and healing.
It accepts "that there may be means of natural healing that have not yet been understood or recognized by science", but it rejects superstitious practices which are neither compatible with Christian teaching nor compatible with scientific evidence.
Faith healing is reported by Catholics as the result of intercessory prayer to a saint or to a person with the gift of healing. According to U.S. Catholic magazine, "Even in this skeptical, postmodern, scientific age, miracles really are possible." According to a Newsweek poll, three-fourths of American Catholics say they pray for "miracles" of some sort.
According to John Cavadini, when healing is granted, "The miracle is not primarily for the person healed, but for all people, as a sign of God's work in the ultimate healing called 'salvation', or a sign of the kingdom that is coming." Some might view their own healing as a sign that they are particularly worthy or holy, implying that others who are not healed do not deserve it.
The Catholic Church has a special Congregation dedicated to the careful investigation of the validity of alleged miracles attributed to prospective saints. Pope Francis tightened the rules on money and miracles in the canonization process. Since Catholic Christians believe the lives of canonized saints in the Church will reflect Christ's, many have come to expect healing miracles. While the popular conception of a miracle can be wide-ranging, the Catholic Church has a specific definition for the kind of miracle formally recognized in a canonization process.
According to the Catholic Encyclopedia, it is often said that cures at shrines and during Christian pilgrimages are mainly due to psychotherapy: partly to confident trust in Divine providence, and partly to the strong expectancy of cure that comes over suggestible persons at these times and places.
Among the best-known accounts by Catholics of faith healings are those attributed to the miraculous intercession of the apparition of the Blessed Virgin Mary known as Our Lady of Lourdes at the Sanctuary of Our Lady of Lourdes in France and the remissions of life-threatening disease claimed by those who have applied for aid to Saint Jude, who is known as the "patron saint of lost causes".
Catholic medics have asserted that there have been 67 miracles and 7,000 unexplainable medical cures at Lourdes since 1858. According to a 1908 book, these cures were subjected to intense medical scrutiny and were only recognized as authentic spiritual cures after a commission of doctors and scientists, called the Lourdes Medical Bureau, had ruled out any physical mechanism for the patient's recovery.
Evangelicalism
In some Pentecostal and Charismatic Evangelical churches, a special place is thus reserved for faith healings with laying on of hands during worship services or evangelization campaigns. Faith healing or divine healing is considered to be an inheritance of Jesus acquired by his death and resurrection. Biblical inerrancy ensures that the miracles and healings described in the Bible are still relevant and may be present in the life of the believer.
At the beginning of the 20th century, the new Pentecostal movement drew participants from the Holiness movement and other movements in America that already believed in divine healing. By the 1930s, several faith healers drew large crowds and established worldwide followings.
The first Pentecostals in the modern sense appeared in Topeka, Kansas, in a Bible school conducted by Charles Fox Parham, a holiness teacher and former Methodist pastor. Pentecostalism achieved worldwide attention in 1906 through the Azusa Street Revival in Los Angeles led by William Joseph Seymour.
Smith Wigglesworth was also a well-known figure in the early part of the 20th century. A former English plumber turned evangelist who lived simply and read nothing but the Bible from the time his wife taught him to read, Wigglesworth traveled around the world preaching about Jesus and performing faith healings. Wigglesworth claimed to raise several people from the dead in Jesus' name in his meetings.
During the 1920s and 1930s, Aimee Semple McPherson was a controversial faith healer of growing popularity during the Great Depression. Subsequently, William M. Branham has been credited as the initiator of the post-World War II healing revivals. The healing revival he began led many to emulate his style and spawned a generation of faith healers. Because of this, Branham has been recognized as the "father of modern faith healers". According to writer and researcher Patsy Sims, "the power of a Branham service and his stage presence remains a legend unparalleled in the history of the Charismatic movement". By the late 1940s, Oral Roberts, who was associated with and promoted by Branham's Voice of Healing magazine, also became well known, and he continued with faith healing until the 1980s. Roberts distanced himself from the label of faith healer in the late 1950s, stating, "I never was a faith healer and I was never raised that way. My parents believed very strongly in medical science and we have a doctor who takes care of our children when they get sick. I cannot heal anyone – God does that." A friend of Roberts was Kathryn Kuhlman, another popular faith healer, who gained fame in the 1950s and had a television program on CBS. Also in this era, Jack Coe and A. A. Allen were faith healers who traveled with large tents for large open-air crusades.
Oral Roberts's successful use of television as a medium to gain a wider audience led others to follow suit. His former pilot, Kenneth Copeland, started a healing ministry. Pat Robertson, Benny Hinn, and Peter Popoff became well-known televangelists who claimed to heal the sick. Richard Rossi is known for advertising his healing clinics through secular television and radio. Kuhlman influenced Benny Hinn, who adopted some of her techniques and wrote a book about her.
Christian Science
Christian Science claims that healing is possible through prayer based on an understanding of God and the underlying spiritual perfection of God's creation. The material world as humanly perceived is believed to not be the spiritual reality. Christian Scientists believe that healing through prayer is possible insofar as it succeeds in bringing the spiritual reality of health into human experience. Prayer does not change the spiritual creation but gives a clearer view of it, and the result appears in the human scene as healing: the human picture adjusts to coincide more nearly with the divine reality. Therefore, Christian Scientists do not consider themselves to be faith healers since faith or belief in Christian Science is not required on the part of the patient, and because they consider healings reliable and provable rather than random.
Although there is no hierarchy in Christian Science, practitioners devote full time to prayer for others on a professional basis, and advertise in an online directory published by the church. Christian Scientists sometimes tell their stories of healing at weekly testimony meetings at local Christian Science churches, or publish them in the church's magazines including The Christian Science Journal printed monthly since 1883, the Christian Science Sentinel printed weekly since 1898, and The Herald of Christian Science a foreign language magazine beginning with a German edition in 1903 and later expanding to Spanish, French, and Portuguese editions. Christian Science Reading Rooms often have archives of such healing accounts.
The Church of Jesus Christ of Latter-day Saints
The Church of Jesus Christ of Latter-day Saints (LDS) has had a long history of faith healings. Many members of the LDS Church have told their stories of healing within the LDS publication, the Ensign. The church believes healings come most often as a result of priesthood blessings given by the laying on of hands; however, prayer often accompanied with fasting is also thought to cause healings. Healing is always attributed to be God's power. Latter-day Saints believe that the Priesthood of God, held by prophets (such as Moses) and worthy disciples of the Savior, was restored via heavenly messengers to the first prophet of this dispensation, Joseph Smith.
According to LDS doctrine, even though members may have the restored priesthood authority to heal in the name of Jesus Christ, all efforts should be made to seek the appropriate medical help. Brigham Young stated this effectively, while also noting that the ultimate outcome is still dependent on the will of God.
Islam
A number of healing traditions exist among Muslims. Some healers are particularly focused on diagnosing cases of possession by jinn or demons.
Buddhism
Chinese-born Australian businessman Jun Hong Lu was a prominent proponent of the "Guan Yin Citta Dharma Door", claiming that practicing the three "golden practices" of reciting texts and mantras, liberation of beings, and making vows, laid a solid foundation for improved physical, mental, and psychological well-being, with many followers publicly attesting to have been healed through practice.
Scientology
Some critics of Scientology have referred to some of its practices as being similar to faith healing, based on claims made by L. Ron Hubbard in Dianetics: The Modern Science of Mental Health and other writings.
Scientific investigation
Nearly all scientists dismiss faith healing as pseudoscience. Believers assert that faith healing makes no scientific claims and thus should be treated as a matter of faith that is not testable by science. Critics reply that claims of medical cures should be tested scientifically because, although faith in the supernatural is not in itself usually considered to be the purview of science, claims of reproducible effects are nevertheless subject to scientific investigation.
Scientists and doctors generally find that faith healing lacks biological plausibility or epistemic warrant, which is one of the criteria used to judge whether clinical research is ethical and financially justified. A Cochrane review of intercessory prayer found "although some of the results of individual studies suggest a positive effect of intercessory prayer, the majority do not". The authors concluded: "We are not convinced that further trials of this intervention should be undertaken and would prefer to see any resources available for such a trial used to investigate other questions in health care".
A review in 1954 investigated spiritual healing, therapeutic touch and faith healing. Of the hundred cases reviewed, none revealed that the healer's intervention alone resulted in any improvement or cure of a measurable organic disability.
In addition, at least one study has suggested that adult Christian Scientists, who generally use prayer rather than medical care, have a higher death rate than other people of the same age.
The Global Medical Research Institute (GMRI) was created in 2012 to start collecting medical records of patients who claim to have received a supernatural healing miracle as a result of Christian Spiritual Healing practices. The organization has a panel of medical doctors who review the patient's records looking at entries prior to the claimed miracles and entries after the miracle was claimed to have taken place. "The overall goal of GMRI is to promote an empirically grounded understanding of the physiological, emotional, and sociological effects of Christian Spiritual Healing practices". This is accomplished by applying the same rigorous standards used in other forms of medical and scientific research.
A 2011 article in New Scientist magazine cited positive physical results from meditation, positive thinking, and spiritual faith.
Criticism
Skeptics of faith healing offer primarily two explanations for anecdotes of cures or improvements, relieving any need to appeal to the supernatural. The first is post hoc ergo propter hoc, meaning that a genuine improvement or spontaneous remission may have been experienced coincidental with but independent from anything the faith healer or patient did or said. These patients would have improved just as well even had they done nothing. The second is the placebo effect, through which a person may experience genuine pain relief and other symptomatic alleviation. In this case, the patient genuinely has been helped by the faith healer or faith-based remedy, not through any mysterious or numinous function, but by the power of their own belief that they would be healed. In both cases the patient may experience a real reduction in symptoms, though in neither case has anything miraculous or inexplicable occurred. Both cases, however, are strictly limited to the body's natural abilities.
According to the American Cancer Society:
The American Medical Association considers that prayer as therapy should not be a medically reimbursable or deductible expense.
Belgian philosopher and skeptic Etienne Vermeersch coined the term Lourdes effect as a criticism of the magical-thinking and placebo-effect possibilities behind the claimed miraculous cures, as there are no documented events in which a severed arm has been reattached through faith healing at Lourdes. Vermeersch identifies the ambiguous and equivocal nature of the claimed cures as a key feature of miraculous events.
Negative impact on public health
Reliance on faith healing to the exclusion of other forms of treatment can have a public health impact when it reduces or eliminates access to modern medical techniques. This is evident in both higher mortality rates for children and in reduced life expectancy for adults. Critics have also made note of serious injury that has resulted from falsely labelled "healings", where patients erroneously consider themselves cured and cease or withdraw from treatment. For example, at least six people have died after faith healing by their church and being told they had been healed of HIV and could stop taking their medications. It is the stated position of the AMA that "prayer as therapy should not delay access to traditional medical care". Choosing faith healing while rejecting modern medicine can and does cause people to die needlessly.
Christian theological criticism of faith healing
Christian theological criticism of faith healing broadly falls into two distinct levels of disagreement.
The first is widely termed the "open-but-cautious" view of the miraculous in the church today. This term is deliberately used by Robert L. Saucy in the book Are Miraculous Gifts for Today?. Don Carson is another example of a Christian teacher who has put forward what has been described as an "open-but-cautious" view. In dealing with the claims of Warfield, particularly "Warfield's insistence that miracles ceased", Carson asserts, "But this argument stands up only if such miraculous gifts are theologically tied exclusively to a role of attestation; and that is demonstrably not so." However, while affirming that he does not expect healing to happen today, Carson is critical of aspects of the faith healing movement, "Another issue is that of immense abuses in healing practises.... The most common form of abuse is the view that since all illness is directly or indirectly attributable to the devil and his works, and since Christ by his cross has defeated the devil, and by his Spirit has given us the power to overcome him, healing is the inheritance right of all true Christians who call upon the Lord with genuine faith."
The second level of theological disagreement with Christian faith healing goes further. Commonly referred to as cessationism, its adherents either claim that faith healing will not happen today at all, or may happen today, but it would be unusual. Richard Gaffin argues for a form of cessationism in an essay alongside Saucy's in the book Are Miraculous Gifts for Today? In his book Perspectives on Pentecost Gaffin states of healing and related gifts that "the conclusion to be drawn is that as listed in 1 Corinthians 12(vv. 9f., 29f.) and encountered throughout the narrative in Acts, these gifts, particularly when exercised regularly by a given individual, are part of the foundational structure of the church... and so have passed out of the life of the church." Gaffin qualifies this, however, by saying "At the same time, however, the sovereign will and power of God today to heal the sick, particularly in response to prayer (see e.g. James 5:14, 15), ought to be acknowledged and insisted on."
According to the Catholic apologist Trent Horn, while the Bible teaches believers to pray when they are sick, this is not to be viewed as an exclusion of medical care, citing Sirach 38:9,12-14:
Fraud
Skeptics of faith healers point to fraudulent practices either in the healings themselves (such as plants in the audience with fake illnesses) or concurrent with the supposed healing work, and they claim that faith healing is a quack practice in which the "healers" use well-known non-supernatural illusions to exploit credulous people in order to obtain their gratitude, confidence, and money. James Randi's The Faith Healers investigates Christian evangelists such as Peter Popoff, who claimed to heal sick people on stage in front of an audience. Popoff pretended to know private details about participants' lives by receiving radio transmissions from his wife, who was off-stage and had gathered information from audience members prior to the show. According to this book, many of the leading modern evangelistic healers have engaged in deception and fraud. The book also questioned how faith healers use funds that were sent to them for specific purposes. Physicist Robert L. Park and doctor and consumer advocate Stephen Barrett have called into question the ethics of some exorbitant fees.
There have also been legal controversies. For example, in 1955 at a Jack Coe revival service in Miami, Florida, Coe told the parents of a three-year-old boy that he healed their son who had polio. Coe then told the parents to remove the boy's leg braces. However, their son was not cured of polio and removing the braces left the boy in constant pain. As a result, through the efforts of Joseph L. Lewis, Coe was arrested and charged on February 6, 1956, with practicing medicine without a license, a felony in the state of Florida. A Florida Justice of the Peace dismissed the case on grounds that Florida exempts divine healing from the law. Later that year Coe was diagnosed with bulbar polio, and died a few weeks later at Dallas' Parkland Hospital on December 17, 1956.
Miracles for sale
TV personality Derren Brown produced a show on faith healing entitled Miracles for Sale, which arguably exposed the art of faith healing as a scam. In this show, Brown trained a scuba diving instructor picked from the general public to pose as a faith healer and took him to Texas, where he successfully delivered a faith healing session to a congregation.
United States law
The 1974 Child Abuse Prevention and Treatment Act (CAPTA) required states to grant religious exemptions to child neglect and child abuse laws in order to receive federal money. The CAPTA amendments of 1996 state:
Thirty-one states have child-abuse religious exemptions. These are Alabama, Alaska, California, Colorado, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, Ohio, Oklahoma, Oregon, Pennsylvania, Vermont, Virginia, and Wyoming. In six of these states, Arkansas, Idaho, Iowa, Louisiana, Ohio and Virginia, the exemptions extend to murder and manslaughter. Of these, Idaho is the only state accused of having a large number of deaths due to the legislation in recent times. In February 2015, controversy was sparked in Idaho over a bill believed to further reinforce parental rights to deny their children medical care.
Reckless homicide convictions
Parents have been convicted of child abuse and felony reckless negligent homicide and found responsible for killing their children when they withheld lifesaving medical care and chose only prayers.
See also
Anointing of the sick
Efficacy of prayer
Egregore
Energy medicine
Folk medicine
Self-efficacy
Thaumaturgy
Witch doctor
List of ineffective cancer treatments
List of topics characterized as pseudoscience
Notes
References
Bibliography
Beyer, Jürgen (2013) "Wunderheilung". In Enzyklopädie des Märchens. Handwörterbuch zur historischen und vergleichenden Erzählforschung, vol. 14, Berlin & Boston: Walter de Gruyter, coll. 1043–1050
External links
Alternative medicine
Religious practices
Pseudoscience
Religious terminology
Magic (supernatural)
Medical controversies
Health fraud | Faith healing | [
"Biology"
] | 5,860 | [
"Behavior",
"Religious practices",
"Human behavior"
] |
11,632 | https://en.wikipedia.org/wiki/Food%20and%20Drug%20Administration | The United States Food and Drug Administration (FDA or US FDA) is a federal agency of the Department of Health and Human Services. The FDA is responsible for protecting and promoting public health through the control and supervision of food safety, tobacco products, caffeine products, dietary supplements, prescription and over-the-counter pharmaceutical drugs (medications), vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices (ERED), cosmetics, animal foods & feed and veterinary products.
The FDA's primary focus is enforcement of the Federal Food, Drug, and Cosmetic Act (FD&C). However, the agency also enforces other laws, notably Section 361 of the Public Health Service Act as well as associated regulations. Much of this regulatory-enforcement work is not directly related to food or drugs but involves other areas, such as regulating lasers, cellular phones, and condoms. In addition, the FDA oversees disease control in contexts ranging from household pets to human sperm donated for use in assisted reproduction.
The FDA is led by the commissioner of food and drugs, appointed by the president with the advice and consent of the Senate. The commissioner reports to the secretary of health and human services. Robert Califf is the current commissioner as of February 17, 2022.
The FDA's headquarters is located in the White Oak area of Silver Spring, Maryland. The agency has 223 field offices and 13 laboratories located across the 50 states, the United States Virgin Islands, and Puerto Rico. In 2008, the FDA began to post employees to foreign countries, including China, India, Costa Rica, Chile, Belgium, and the United Kingdom.
Organizational structure
Department of Health and Human Services
Food and Drug Administration
Office of the Commissioner (C)
Office of the Chief Counsel (OCC)
Office of the Executive Secretariat (OES)
Office of the Counselor to the Commissioner
Office of Digital Transformation (ODT)
Center for Biologics Evaluation and Research (CBER)
Center for Devices and Radiological Health (CDRH)
Center for Drug Evaluation and Research (CDER)
Center for Food Safety and Applied Nutrition (CFSAN)
Center for Tobacco Products (CTP)
Center for Veterinary Medicine (CVM)
Oncology Center of Excellence (OCE)
Office of Regulatory Affairs (ORA)
Office of Clinical Policy and Programs (OCPP)
Office of External Affairs (OEA)
Office of Food Policy and Response (OFPR)
Office of Minority Health and Health Equity (OMHHE)
Office of Operations (OO)
Office of Policy, Legislation, and International Affairs (OPLIA)
Office of the Chief Scientist (OCS)
National Center for Toxicological Research (NCTR)
Office of Women's Health (OWH)
Location
Headquarters
FDA headquarters facilities are currently located in Montgomery County and Prince George's County, Maryland.
White Oak Federal Research Center
Since 1990, the FDA has had employees and facilities at the White Oak Federal Research Center in the White Oak area of Silver Spring, Maryland. In 2001, the General Services Administration (GSA) began new construction on the campus to consolidate the FDA's 25 existing operations in the Washington metropolitan area, its headquarters in Rockville, and several fragmented office buildings. The first building, the Life Sciences Laboratory, was dedicated and opened with 104 employees in December 2003. The FDA campus has a population of 10,987 employees housed in ten office buildings and four laboratory buildings. The campus houses the Office of the Commissioner (OC), the Office of Regulatory Affairs (ORA), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), the Center for Biologics Evaluation and Research (CBER) and offices for the Center for Veterinary Medicine (CVM).
With the passing of the FDA Reauthorization Act of 2017, the FDA projects a 64% increase in employees, to 18,000, over the next 15 years and wants to add office and special-use space to its existing facilities. The National Capital Planning Commission approved a new master plan for this expansion in December 2018, and construction is expected to be completed by 2035, dependent on GSA appropriations.
Field locations
Office of Regulatory Affairs
The Office of Regulatory Affairs is considered the agency's "eyes and ears", conducting the vast majority of the FDA's work in the field. Its employees, known as Consumer Safety Officers or, more commonly, simply as investigators, inspect production and warehousing facilities, investigate complaints, illnesses, or outbreaks, and review documentation in the case of medical devices, drugs, biological products, and other items where it may be difficult to conduct a physical examination or take a physical sample of the product. The Office of Regulatory Affairs is divided into five regions, which are further divided into 20 districts. The districts are based roughly on the geographic divisions of the federal court system. Each district comprises a main district office and a number of Resident Posts, which are FDA remote offices that serve a particular geographic area. ORA also includes the agency's network of regulatory laboratories, which analyze any physical samples taken. Though samples are usually food-related, some laboratories are equipped to analyze drugs, cosmetics, and radiation-emitting devices.
Office of Criminal Investigations
The Office of Criminal Investigations was established in 1991 to investigate criminal cases. To do so, OCI employs approximately 200 Special Agents nationwide who, unlike ORA Investigators, are armed, have badges, and do not focus on technical aspects of the regulated industries. Rather, OCI agents pursue and develop cases when individuals and companies commit criminal actions, such as fraudulent claims or knowingly and willfully shipping known adulterated goods in interstate commerce. In many cases, OCI pursues cases involving violations of Title 18 of the United States Code (e.g., conspiracy, false statements, wire fraud, mail fraud), in addition to prohibited acts as defined in Chapter III of the FD&C Act. OCI Special Agents often come from other criminal investigative backgrounds, and frequently work closely with the Federal Bureau of Investigation, Assistant Attorney General, and even Interpol. OCI receives cases from a variety of sources, including ORA, local agencies, and the FBI, and works with ORA Investigators to help develop the technical and science-based aspects of a case.
Other locations
The FDA has a number of field offices across the United States, in addition to international locations in China, India, Europe, the Middle East, and Latin America.
Scope and funding
As of 2021, the FDA had responsibility for overseeing $2.7 trillion in food, medical, and tobacco products. Some 54% of its budget derives from the federal government, and 46% is covered by industry user fees for FDA services. For example, pharmaceutical firms pay fees to expedite drug reviews.
According to Forbes, pharmaceutical firms provide 75% of the FDA's drug review budget.
Regulatory programs
Emergency approvals (EUA)
Emergency Use Authorization (EUA) is a mechanism that was created to facilitate the availability and use of medical countermeasures, including vaccines and personal protective equipment, during public health emergencies such as the Zika virus epidemic, the Ebola virus epidemic and the COVID-19 pandemic.
Regulations
The programs for safety regulation vary widely by the type of product, its potential risks, and the regulatory powers granted to the agency. For example, the FDA regulates almost every facet of prescription drugs, including testing, manufacturing, labeling, advertising, marketing, efficacy, and safety—yet FDA regulation of cosmetics focuses primarily on labeling and safety. The FDA regulates most products with a set of published standards enforced by a modest number of facility inspections. Inspection observations are documented on Form 483.
In June 2018, the FDA released a statement regarding new guidelines to help food and drug manufacturers "implement protections against potential attacks on the U.S. food supply". One of the guidelines includes the Intentional Adulteration (IA) rule, which requires strategies and procedures by the food industry to reduce the risk of compromise in facilities and processes that are significantly vulnerable.
The FDA also uses tactics of regulatory shaming, mainly through online publication of non-compliance, warning letters, and "shaming lists." Regulation by shaming harnesses firms' sensitivity to reputational damage. For example, in 2018, the agency published an online "black list", in which it named dozens of branded drug companies that are supposedly using unlawful or unethical means to attempt to impede competition from generic drug companies.
The FDA frequently works with other federal agencies, including the Department of Agriculture, the Drug Enforcement Administration, Customs and Border Protection, and the Consumer Product Safety Commission. They also often work with local and state government agencies in performing regulatory inspections and enforcement actions.
Food and dietary supplements
The regulation of food and dietary supplements by the Food and Drug Administration is governed by various statutes enacted by the United States Congress and interpreted by the FDA. Pursuant to the Federal Food, Drug, and Cosmetic Act and accompanying legislation, the FDA has authority to oversee the quality of substances sold as food in the United States, and to monitor claims made in the labeling of both the composition and the health benefits of foods.
The FDA subdivides substances that it regulates as food into various categories—including foods, food additives, added substances (human-made substances that are not intentionally introduced into food, but nevertheless end up in it), and dietary supplements. Dietary supplements or dietary ingredients include vitamins, minerals, herbs, amino acids, and enzymes. Specific standards the FDA exercises differ from one category to the next. Furthermore, legislation had granted the FDA a variety of means to address violations of standards for a given substance category.
Under the Dietary Supplement Health and Education Act of 1994 (DSHEA), the FDA is responsible for ensuring that manufacturers and distributors of dietary supplements and dietary ingredients meet the current requirements. These manufacturers and distributors are not allowed to advertise their products in an adulterated way, and they are responsible for evaluating the safety and labeling of their product.
The FDA has a "Dietary Supplement Ingredient Advisory List" that includes ingredients that sometimes appear on dietary supplements but need further evaluation. An ingredient is added to this list when it is excluded from use in a dietary supplement, does not appear to be an approved food additive or recognized as safe, and/or is subjected to the requirement for pre-market notification without having a satisfied requirement.
"FDA-Approved" vs. "FDA-Accepted in Food Processing"
The FDA does not approve applied coatings used in the food processing industry. There is no review process to approve the composition of nonstick coatings, nor does the FDA inspect or test these materials. Through its governing of processes, however, the FDA does have a set of regulations that cover the formulation, manufacturing, and use of nonstick coatings. Hence, materials like polytetrafluoroethylene (Teflon) are not, and cannot be, considered FDA approved; rather, they are "FDA compliant" or "FDA acceptable".
Medical countermeasures (MCMs)
Medical countermeasures (MCMs) are products such as biologics and pharmaceutical drugs that can protect from or treat the health effects of a chemical, biological, radiological, or nuclear (CBRN) attack. MCMs can also be used for prevention and diagnosis of symptoms associated with CBRN attacks or threats. The FDA runs a program called the "FDA Medical Countermeasures Initiative" (MCMi), with programs funded by the federal government. It helps "partner" agencies and organizations prepare for public health emergencies that could require MCMs.
Medications
The Center for Drug Evaluation and Research uses different requirements for the three main drug product types: new drugs, generic drugs, and over-the-counter drugs. A drug is considered "new" if it is made by a different manufacturer, uses different excipients or inactive ingredients, is used for a different purpose, or undergoes any substantial change. The most rigorous requirements apply to new molecular entities: drugs that are not based on existing medications.
New medications
New drugs receive extensive scrutiny before FDA approval in a process called a new drug application (NDA). Under the Presidency of Donald Trump, the agency has worked to make the drug-approval process go faster. Critics, however, argue that FDA standards are not sufficiently rigorous to prevent unsafe or ineffective drugs from getting approval. New drugs are available only by prescription by default. A change to over-the-counter (OTC) status is a separate process, and the drug must be approved through an NDA first. A drug that is approved is said to be "safe and effective when used as directed".
Very rare, limited exceptions to this multi-step process involving animal testing and controlled clinical trials can be granted under compassionate use protocols. This was the case during the 2015 Ebola epidemic with the use, by prescription and authorization, of ZMapp and other experimental treatments, and for new drugs that can be used to treat debilitating and/or very rare conditions for which no existing remedies or drugs are satisfactory, or where there has not been an advance in a long period of time. The studies are progressively longer, gradually adding more individuals as they progress from stage I to stage III, normally over a period of years, and normally involve drug companies, the government and its laboratories, and often medical schools and hospitals and clinics. However, any exceptions to the aforementioned process are subject to strict review and scrutiny and conditions, and are only given if a substantial amount of research and at least some preliminary human testing has shown that they are believed to be somewhat safe and possibly effective. (See FDA Special Protocol Assessment about Phase III trials.)
Advertising and promotion
The FDA's Office of Prescription Drug Promotion (OPDP) has responsibilities that revolve around the review and regulation of prescription drug advertising and promotion. This is achieved through surveillance activities and the issuance of enforcement letters to pharmaceutical manufacturers. Advertising and promotion for over-the-counter drugs is regulated by the Federal Trade Commission. The FDA also implements regulatory oversight through engagement with third-party enforcer-firms. It expects pharmaceutical companies to ensure that third-party suppliers and labs comply with the agency's health and safety guidelines.
The drug advertising regulation contains two broad requirements: (1) a company may advertise or promote a drug only for the specific indication or medical use for which it was approved by the FDA; and (2) an advertisement must contain a "fair balance" between the benefits and the risks (side effects) of a drug. The regulation of drug advertising in the U.S. is divided between the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC), based on whether the drug in question is a prescription drug or an over-the-counter (OTC) drug. The FDA oversees the advertising of prescription drugs, while the FTC regulates the advertising of OTC drugs.
The term off-label refers to the practice of prescribing a drug for a different purpose than what the FDA approved.
Due to this approval requirement, manufacturers were prohibited from advertising COVID-19 vaccines during the period in which they had only been approved under Emergency Use Authorization.
Post-market safety surveillance
After NDA approval, the sponsor must review and report to the FDA every patient adverse drug experience it learns of. Unexpected serious and fatal adverse drug events must be reported within 15 days, and other events on a quarterly basis. The FDA also receives adverse drug event reports directly through its MedWatch program. These reports are called "spontaneous reports" because reporting by consumers and health professionals is voluntary.
While this remains the primary tool of post-market safety surveillance, FDA requirements for post-marketing risk management are increasing. As a condition of approval, a sponsor may be required to conduct additional clinical trials, called Phase IV trials. The FDA requires risk management plans, called Risk Evaluation and Mitigation Strategies (REMS), for certain drugs that require actions to be taken to ensure that the drug is used safely. For example, thalidomide can cause birth defects, but has uses that outweigh the risks if men and women taking the drug do not conceive a child; a REMS program for thalidomide mandates an auditable process to ensure that people taking the drug take action to avoid pregnancy. Many opioid drugs have REMS programs to avoid addiction and diversion of drugs. The drug isotretinoin has a REMS program called iPLEDGE.
Generic drugs
Generic drugs are chemical and therapeutic equivalents of name-brand drugs whose patents have typically expired. Approved generic drugs should have the same dosage, safety, effectiveness, strength, stability, and quality, as well as the same route of administration. In general, they are less expensive than their name-brand counterparts, are manufactured and marketed by rival companies and, in the 1990s, accounted for about a third of all prescriptions written in the United States. For a pharmaceutical company to gain approval to produce a generic drug, the FDA requires scientific evidence that the generic drug is interchangeable with, or therapeutically equivalent to, the originally approved drug; the application mechanism is called an Abbreviated New Drug Application (ANDA). 80% of prescription drugs sold in the United States are generic brands.
Generic drug scandal
In 1989, a major scandal erupted involving the procedures used by the FDA to approve generic drugs for sale to the public. Charges of corruption in generic drug approval first emerged in 1988, during the course of an extensive congressional investigation into the FDA. The investigation by the oversight subcommittee of the United States House Energy and Commerce Committee resulted from a complaint brought against the FDA by Mylan Laboratories Inc. of Pittsburgh. When its applications to manufacture generics were subjected to repeated delays by the FDA, Mylan, convinced that it was being discriminated against, began its own private investigation of the agency in 1987. Mylan eventually filed suit against two former FDA employees and four drug-manufacturing companies, charging that corruption within the federal agency resulted in racketeering and in violations of antitrust law. "The order in which new generic drugs were approved was set by the FDA employees even before drug manufacturers submitted applications" and, according to Mylan, this illegal procedure was followed to give preferential treatment to certain companies. During the summer of 1989, three FDA officials (Charles Y. Chang, David J. Brancato, Walter Kletch) pleaded guilty to criminal charges of accepting bribes from generic drug makers, and two companies (Par Pharmaceutical and its subsidiary Quad Pharmaceuticals) pleaded guilty to giving bribes.
Furthermore, it was discovered that several manufacturers had falsified data submitted in seeking FDA authorization to market certain generic drugs. Vitarine Pharmaceuticals of New York, which sought approval of a generic version of the drug Dyazide, a medication for high blood pressure, submitted Dyazide, rather than its generic version, for the FDA tests. In April 1989, the FDA investigated 11 manufacturers for irregularities, and later brought that number up to 13. Dozens of drugs were eventually suspended or recalled by manufacturers. In the early 1990s, the U.S. Securities and Exchange Commission filed securities fraud charges against the Bolar Pharmaceutical Company, a major generic manufacturer based in Long Island, New York.
Over-the-counter drugs
Over-the-counter (OTC) drugs, like aspirin, are drugs that do not require a doctor's prescription. The FDA has a list of approximately 800 approved ingredients that are combined in various ways to create more than 100,000 OTC drug products. Many OTC drug ingredients, such as ibuprofen, were previously approved prescription drugs now deemed safe enough for use without a medical practitioner's supervision.
Ebola treatment
In 2014, the FDA added an Ebola treatment being developed by Canadian pharmaceutical company Tekmira to the Fast Track program, but halted the phase 1 trials in July pending the receipt of more information about how the drug works. This was widely viewed as increasingly important in the face of a major outbreak of the disease in West Africa that began in late March 2014 and ended in June 2016.
Coronavirus (COVID-19) testing
During the coronavirus pandemic, the FDA granted emergency use authorizations for personal protective equipment (PPE), in vitro diagnostic equipment, ventilators, and other medical devices.
On March 18, 2020, the FDA postponed most foreign facility inspections and all domestic routine surveillance facility inspections. In contrast, the USDA's Food Safety and Inspection Service (FSIS) continued inspections of meatpacking plants; 145 FSIS field employees tested positive for COVID-19, and three died.
Vaccines, blood and tissue products, and biotechnology
The Center for Biologics Evaluation and Research is the branch of the FDA responsible for ensuring the safety and efficacy of biological therapeutic agents. These include blood and blood products, vaccines, allergenics, cell and tissue-based products, and gene therapy products. New biologics are required to go through a premarket approval process called a Biologics License Application (BLA), similar to that for drugs.
The original authority for government regulation of biological products was established by the 1902 Biologics Control Act, with additional authority established by the 1944 Public Health Service Act. Along with these Acts, the Federal Food, Drug, and Cosmetic Act applies to all biologic products, as well. Originally, the entity responsible for regulation of biological products resided under the National Institutes of Health; this authority was transferred to the FDA in 1972.
Medical and radiation-emitting devices
The Center for Devices and Radiological Health (CDRH) is the branch of the FDA responsible for the premarket approval of all medical devices, as well as overseeing the manufacturing, performance and safety of these devices. The definition of a medical device is given in the FD&C Act, and it includes products from the simple toothbrush to complex devices such as implantable neurostimulators. CDRH also oversees the safety performance of non-medical devices that emit certain types of electromagnetic radiation. Examples of CDRH-regulated devices include cellular phones, airport baggage screening equipment, television receivers, microwave ovens, tanning booths, and laser products.
CDRH regulatory powers include the authority to require certain technical reports from the manufacturers or importers of regulated products, to require that radiation-emitting products meet mandatory safety performance standards, to declare regulated products defective, and to order the recall of defective or noncompliant products. CDRH also conducts limited amounts of direct product testing.
"FDA-Cleared" vs "FDA-Approved"
Clearance requests are required for medical devices that are shown to be "substantially equivalent" to predicate devices already on the market. Approval requests are for items that are new or substantially different and must demonstrate "safety and efficacy"; for example, they may be inspected for safety in the case of new toxic hazards. Both aspects must be proved or provided by the submitter to ensure proper procedures are followed.
Cosmetics
Cosmetics are regulated by the Center for Food Safety and Applied Nutrition, the same branch of the FDA that regulates food. Cosmetic products are not, in general, subject to premarket approval by the FDA unless they make "structure or function claims" that make them into drugs (see Cosmeceutical). However, all color additives must be specifically FDA approved before manufacturers can include them in cosmetic products sold in the U.S. The FDA regulates cosmetics labeling, and cosmetics that have not been safety tested must bear a warning to that effect.
According to the industry advocacy group the American Council on Science and Health (ACSH), though the cosmetic industry is primarily responsible for its own product safety, the FDA can intervene when necessary to protect the public. In general, though, cosmetics do not require pre-market approval or testing. The ACSH says that companies must place a warning note on their products if they have not been tested, and that experts in cosmetic ingredient review also play a role in monitoring safety through influence on ingredients, though they lack legal authority. According to the ACSH, the review panel has assessed about 1,200 ingredients and has suggested that several hundred be restricted, but there is no standard or systematic method for reviewing chemicals for safety, nor a clear definition of what "safety" even means, that would ensure all chemicals are tested on the same basis.
However, on December 29, 2022, President Biden signed the Consolidated Appropriations Act, 2023, which includes the Modernization of Cosmetics Regulation Act of 2022 (MoCRA), a stricter regime than the previous regulations. MoCRA requires compliance with matters such as serious adverse event reporting, safety substantiation, additional labeling, record keeping, and Good Manufacturing Practices (GMP). MoCRA also grants the FDA mandatory recall authority and directs it to establish regulations for GMP rules, fragrance allergen labeling rules, and testing methods for cosmetics containing talc.
Veterinary products
The Center for Veterinary Medicine (CVM) is a center of the FDA that regulates food additives and drugs that are given to animals. CVM regulates animal drugs, animal food (including pet food), and animal medical devices. The FDA's requirements to prevent the spread of bovine spongiform encephalopathy are also administered by CVM through inspections of feed manufacturers. CVM does not regulate vaccines for animals; these are handled by the United States Department of Agriculture.
Tobacco products
The FDA regulates tobacco products with authority established by the 2009 Family Smoking Prevention and Tobacco Control Act. This Act requires color warnings on cigarette packages and printed advertising, and text warnings from the U.S. Surgeon General.
The nine new graphic warning labels were announced by the FDA in June 2011 and were scheduled to be required to appear on packaging by September 2012. The implementation date is uncertain, due to ongoing proceedings in the case of R.J. Reynolds Tobacco Co. v. U.S. Food and Drug Administration. R.J. Reynolds, Lorillard, Commonwealth Brands, Liggett Group and Santa Fe Natural Tobacco Company have filed suit in Washington, D.C. federal court claiming that the graphic labels are an unconstitutional way of forcing tobacco companies to engage in anti-smoking advocacy on the government's behalf.
A First Amendment lawyer, Floyd Abrams, is representing the tobacco companies in the case, contending requiring graphic warning labels on a lawful product cannot withstand constitutional scrutiny. The Association of National Advertisers and the American Advertising Federation have also filed a brief in the suit, arguing that the labels infringe on commercial free speech and could lead to further government intrusion if left unchallenged. In November 2011, Federal judge Richard Leon of the U.S. District Court for the District of Columbia temporarily halted the new labels, likely delaying the requirement that tobacco companies display the labels. The U.S. Supreme Court ultimately could decide the matter.
In July 2017, the FDA announced a plan that would reduce the current levels of nicotine permitted in tobacco cigarettes. The proposed regulation, identified as RIN 0910-AI76, titled "Tobacco Product Standard for Nicotine Yield of Cigarettes and Certain Other Combusted Tobacco Products," seeks to reduce the nicotine content in cigarettes to approximately 0.7 milligrams per gram of tobacco.
Regulation of living organisms
With acceptance of premarket notification 510(k) k033391 in January 2004, the FDA granted Ronald Sherman permission to produce and market medical maggots for use in humans or other animals as a prescription medical device. Medical maggots represent the first living organism allowed by the Food and Drug Administration for production and marketing as a prescription medical device.
In June 2004, the FDA cleared Hirudo medicinalis (medicinal leeches) as the second living organism legal to use as a medical device.
The FDA also requires that milk be pasteurized to destroy bacteria.
International cooperation
In February 2011, President Barack Obama and Canadian Prime Minister Stephen Harper issued a "Declaration on a Shared Vision for Perimeter Security and Economic Competitiveness" and announced the creation of the Canada-United States Regulatory Cooperation Council (RCC) "to increase regulatory transparency and coordination between the two countries."
Under the RCC mandate, the FDA and Health Canada undertook a "first of its kind" initiative by selecting "as its first area of alignment common cold indications for certain over-the-counter antihistamine ingredients (GC 2013-01-10)."
A more recent example of the FDA's international work is their 2018 cooperation with regulatory and law-enforcement agencies worldwide through Interpol as part of Operation Pangea XI. The FDA targeted 465 websites that illegally sold potentially dangerous, unapproved versions of opioid, oncology, and antiviral prescription drugs to U.S. consumers. The agency focused on transaction laundering schemes in order to uncover the complex online drug network.
Science and research programs
The FDA carries out research and development activities to develop technology and standards that support its regulatory role, with the objective of resolving scientific and technical challenges before they become impediments. The FDA's research efforts include the areas of biologics, medical devices, drugs, women's health, toxicology, food safety and applied nutrition, and veterinary medicine.
Data management
The FDA has collected a large amount of data over the decades. The openFDA project was created to enable easy public access to this data and was officially launched in June 2014.
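As a sketch of how this data can be queried, the snippet below calls openFDA's public drug adverse event endpoint with Python's requests library; the endpoint path, query syntax, and field names follow openFDA's published documentation but should be treated as illustrative assumptions rather than a verified reference.

```python
import requests

# Query openFDA's drug adverse event endpoint (path, parameters, and
# response fields are assumptions based on openFDA's public documentation).
resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={"search": 'patient.drug.openfda.generic_name:"IBUPROFEN"', "limit": 1},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
print(data["meta"]["results"]["total"])  # total matching reports (assumed field)
```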
History
Up until the 20th century, there were few federal laws regulating the contents and sale of domestically produced food and pharmaceuticals, with one exception being the Vaccine Act of 1813. The history of the FDA can be traced to the latter part of the 19th century and the Division of Chemistry of the U.S. Department of Agriculture, a department which itself grew out of the agricultural division of the Patent Office. Under Harvey Washington Wiley, appointed chief chemist in 1883, the Division began conducting research into the adulteration and misbranding of food and drugs on the American market. Wiley's advocacy came at a time when the public had become aroused to hazards in the marketplace by muckraking journalists like Upton Sinclair, and became part of a general trend for increased federal regulations in matters pertinent to public safety during the Progressive Era. The Biologics Control Act of 1902 was put in place after a diphtheria antitoxin derived from tetanus-contaminated serum caused the deaths of thirteen children in St. Louis, Missouri. The serum was originally collected from a horse named Jim who had contracted tetanus.
In June 1906, President Theodore Roosevelt signed into law the Pure Food and Drug Act of 1906, also known as the "Wiley Act" after its chief advocate. The Act prohibited, under penalty of seizure of goods, the interstate transport of food that had been "adulterated". The Act applied similar penalties to the interstate marketing of "adulterated" drugs, in which the "standard of strength, quality, or purity" of the active ingredient was not either stated clearly on the label or listed in the United States Pharmacopeia or the National Formulary.
The responsibility for examining food and drugs for such "adulteration" or "misbranding" was given to Wiley's USDA Bureau of Chemistry. Wiley used these new regulatory powers to pursue an aggressive campaign against the manufacturers of foods with chemical additives, but the Chemistry Bureau's authority was soon checked by judicial decisions, which narrowly defined the bureau's powers and set high standards for proof of fraudulent intent. In 1927, the Bureau of Chemistry's regulatory powers were reorganized under a new USDA body, the Food, Drug, and Insecticide Administration. This name was shortened to the Food and Drug Administration (FDA) three years later.
By the 1930s, muckraking journalists, consumer protection organizations, and federal regulators began mounting a campaign for stronger regulatory authority by publicizing a list of injurious products that had been ruled permissible under the 1906 law, including radioactive beverages, mascara that could cause blindness, and worthless "cures" for diabetes and tuberculosis. The resulting proposed law did not get through the Congress of the United States for five years, but was rapidly enacted into law following the public outcry over the 1937 Elixir Sulfanilamide tragedy, in which over 100 people died after using a drug formulated with a toxic, untested solvent.
President Franklin Delano Roosevelt signed the Federal Food, Drug, and Cosmetic Act into law on June 24, 1938. The new law significantly increased federal regulatory authority over drugs by mandating a pre-market review of the safety of all new drugs, as well as banning false therapeutic claims in drug labeling without requiring that the FDA prove fraudulent intent. The law also authorized the FDA to issue minimum food standards of identity for all mass-produced foods to reduce food fraud.
Soon after passage of the 1938 Act, the FDA began to designate certain drugs as safe for use only under the supervision of a medical professional, and the category of "prescription-only" drugs was securely codified into law by the Durham-Humphrey Amendment in 1951. These developments confirmed extensive powers for the FDA to enforce post-marketing recalls of ineffective drugs.
Outside of the US, the drug thalidomide was marketed for the relief of general nausea and morning sickness, but caused birth defects and even the death of thousands of babies when taken during pregnancy. American mothers were largely unaffected as Frances Oldham Kelsey of the FDA refused to authorize the medication for market. In 1962, the Kefauver-Harris Amendment to the FD&C Act was passed, which represented a "revolution" in FDA regulatory authority. The most important change was the requirement that all new drug applications demonstrate "substantial evidence" of the drug's efficacy for a marketed indication, in addition to the existing requirement for pre-marketing demonstration of safety. This marked the start of the FDA approval process in its modern form.
These reforms had the effect of increasing the time, and the difficulty, required to bring a drug to market. One of the most important statutes in establishing the modern American pharmaceutical market was the 1984 Drug Price Competition and Patent Term Restoration Act, more commonly known as the "Hatch-Waxman Act" after its chief sponsors. The act extended the patent exclusivity terms of new drugs, and tied those extensions, in part, to the length of the FDA approval process for each individual drug. For generic manufacturers, the Act created a new approval mechanism, the Abbreviated New Drug Application (ANDA), in which the generic drug manufacturer need only demonstrate that their generic formulation has the same active ingredient, route of administration, dosage form, strength, and pharmacokinetic properties ("bioequivalence") as the corresponding brand-name drug. This Act has been credited with, in essence, creating the modern generic drug industry.
Concerns about the length of the drug approval process were brought to the fore early in the AIDS epidemic. In the mid- and late 1980s, ACT-UP and other HIV activist organizations accused the FDA of unnecessarily delaying the approval of medications to fight HIV and opportunistic infections. Partly in response to these criticisms, the FDA issued new rules to expedite approval of drugs for life-threatening diseases, and expanded pre-approval access to drugs for patients with limited treatment options. All of the initial drugs approved for the treatment of HIV/AIDS were approved through these accelerated approval mechanisms. Frank Young, then commissioner of the FDA, was behind the Action Plan Phase II, established in August 1987 for quicker approval of AIDS medication.
In two instances, state governments have sought to legalize drugs that the FDA has not approved. Under the theory that federal law, passed pursuant to Constitutional authority, overrules conflicting state laws, federal authorities still claim the authority to seize, arrest, and prosecute for possession and sales of these substances, even in states where they are legal under state law. The first wave was the legalization by 27 states of laetrile in the late 1970s. This drug was used as a treatment for cancer, but scientific studies both before and after this legislative trend found it ineffective. The second wave concerned medical marijuana in the 1990s and 2000s. Though Virginia passed legislation allowing doctors to recommend cannabis for glaucoma or the side effects of chemotherapy, a more widespread trend began in California with the Compassionate Use Act of 1996.
When the FDA requested that Endo Pharmaceuticals remove oxymorphone hydrochloride from the market on June 8, 2017, it was the first request in FDA history to remove a marketed drug over its potential for misuse.
21st-century reforms
Critical Path Initiative
The Critical Path Initiative is the FDA's effort to stimulate and facilitate a national effort to modernize the sciences through which FDA-regulated products are developed, evaluated, and manufactured. The Initiative was launched in March 2004, with the release of a report entitled Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products.
Patients' rights to access unapproved drugs
The Compassionate Investigational New Drug program was created after Randall v. U.S. ruled in favor of Robert C. Randall in 1978, creating a program for medical marijuana.
A 2006 court case, Abigail Alliance v. von Eschenbach, would have forced radical changes in FDA regulation of unapproved drugs. The Abigail Alliance argued that the FDA must license drugs for use by terminally ill patients with "desperate diagnoses", after they have completed Phase I testing. The case won an initial appeal in May 2006, but that decision was reversed by a March 2007 rehearing. The US Supreme Court declined to hear the case, and the final decision denied the existence of a right to unapproved medications.
Critics of the FDA's regulatory power argue that the FDA takes too long to approve drugs that might ease pain and human suffering faster if brought to market sooner. The AIDS crisis created some political efforts to streamline the approval process. However, these limited reforms were targeted for AIDS drugs, not for the broader market. This has led to the call for more robust and enduring reforms that would allow patients, under the care of their doctors, access to drugs that have passed the first round of clinical trials.
Post-marketing drug safety monitoring
The widely publicized recall of Vioxx, a non-steroidal anti-inflammatory drug (NSAID) now estimated to have contributed to fatal heart attacks in thousands of Americans, played a strong role in driving a new wave of safety reforms at both the FDA rulemaking and statutory levels. The FDA approved Vioxx in 1999, and initially hoped it would be safer than previous NSAIDs due to its reduced risk of intestinal tract bleeding. However, a number of pre- and post-marketing studies suggested that Vioxx might increase the risk of myocardial infarction, and results from the APPROVe trial in 2004 conclusively demonstrated this.
Faced with numerous lawsuits, the manufacturer voluntarily withdrew it from the market. The example of Vioxx has been prominent in an ongoing debate over whether new drugs should be evaluated on the basis of their absolute safety, or their safety relative to existing treatments for a given condition. In the wake of the Vioxx recall, there were widespread calls by major newspapers, medical journals, consumer advocacy organizations, lawmakers, and FDA officials for reforms in the FDA's procedures for pre- and post-market drug safety regulation.
In 2006, a committee appointed by the Institute of Medicine reviewed pharmaceutical safety regulation in the U.S. and issued recommendations for improvements. The committee was composed of 16 experts, including leaders in clinical medicine, medical research, economics, biostatistics, law, public policy, public health, and the allied health professions, as well as current and former executives from the pharmaceutical, hospital, and health insurance industries. The authors found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA. Some of the committee's recommendations were incorporated into drafts of the PDUFA IV amendment, which was signed into law as the Food and Drug Administration Amendments Act of 2007.
As of 2011, Risk Minimization Action Plans (RiskMAPs) have been created to ensure that the risks of a drug never outweigh its benefits within the post-marketing period. This program requires manufacturers to design and implement periodic assessments of their programs' effectiveness. Risk Minimization Action Plans are set in place depending on the overall level of risk a prescription drug is likely to pose to the public.
Pediatric drug testing
Prior to the 1990s, only 20% of all drugs prescribed for children in the United States were tested for safety or efficacy in a pediatric population. This became a major concern of pediatricians as evidence accumulated that the physiological response of children to many drugs differed significantly from those drugs' effects on adults. Children react differently to drugs for many reasons, including differences in size and weight. There were several reasons few medical trials were done with children. For many drugs, children represented such a small proportion of the potential market that drug manufacturers did not see such testing as cost-effective.
Also, the belief that children are ethically restricted in their ability to give informed consent brought increased governmental and institutional hurdles to approval of these clinical trials, and greater concerns about legal liability. Thus, for decades, most medicines prescribed to children in the U.S. were done so in a non-FDA-approved, "off-label" manner, with dosages "extrapolated" from adult data through body weight and body-surface-area calculations.
In an initial attempt to address this issue, the FDA produced the 1994 Final Rule on Pediatric Labeling and Extrapolation, which allowed manufacturers to add pediatric labeling information but required drugs that had not been tested for pediatric safety and efficacy to bear a disclaimer to that effect. However, this rule failed to motivate many drug companies to conduct additional pediatric drug trials. In 1997, the FDA proposed a rule to require pediatric drug trials from the sponsors of New Drug Applications. However, this new rule was successfully challenged in federal court as exceeding the FDA's statutory authority.
While this debate was unfolding, Congress used the Food and Drug Administration Modernization Act of 1997 to pass incentives that gave pharmaceutical manufacturers a six-month patent term extension on new drugs submitted with pediatric trial data. The Best Pharmaceuticals for Children Act of 2007 reauthorized these provisions and allowed the FDA to request NIH-sponsored testing for pediatric drug testing, although these requests are subject to NIH funding constraints. In the Pediatric Research Equity Act of 2003, Congress codified the FDA's authority to mandate manufacturer-sponsored pediatric drug trials for certain drugs as a "last resort" if incentives and publicly funded mechanisms proved inadequate.
Priority review voucher (PRV)
The priority review voucher is a provision of the Food and Drug Administration Amendments Act of 2007, which awards a transferable "priority review voucher" to any company that obtains approval for a treatment for a neglected tropical disease. The system was first proposed by Duke University faculty David Ridley, Henry Grabowski, and Jeffrey Moe in their 2006 Health Affairs paper "Developing Drugs for Developing Countries". President Obama signed into law the Food and Drug Administration Safety and Innovation Act of 2012, which extended the authorization until 2017.
Rules for generic biologics
Since the 1990s, many successful new drugs for the treatment of cancer, autoimmune diseases, and other conditions have been protein-based biotechnology drugs, regulated by the Center for Biologics Evaluation and Research. Many of these drugs are extremely expensive; for example, the anti-cancer drug Avastin costs $55,000 for a year of treatment, while the enzyme replacement therapy drug Cerezyme costs $200,000 per year, and must be taken by Gaucher's disease patients for life.
Biotechnology drugs do not have the simple, readily verifiable chemical structures of conventional drugs, and are produced through complex, often proprietary, techniques, such as transgenic mammalian cell cultures. Because of these complexities, the 1984 Hatch-Waxman Act did not include biologics in the Abbreviated New Drug Application (ANDA) process. This precluded the possibility of generic drug competition for biotechnology drugs. In February 2007, identical bills were introduced into the House to create an ANDA process for the approval of generic biologics, but were not passed.
Mobile medical applications
In 2013, a guidance was issued to regulate mobile medical applications and protect users from their unintended use. This guidance distinguishes the apps subjected to regulation based on the marketing claims of the apps. Incorporation of the guidelines during the development phase of these apps has been proposed for expedited market entry and clearance.
Electronic Submissions Gateway (ESG)
To standardize, automate, and streamline the flow of regulatory data, the FDA introduced the Electronic Submissions Gateway (ESG) in 2006. This gateway allows reporting organizations to send regulatory submissions to different centers over the internet, packaged in a center-specific format and enveloped as a GNU-compatible .tar.gz file, through either an FDA-specific WebTrader application or a more generic B2B communication protocol called AS2 (Applicability Statement 2).
For WebTrader, which is recommended for manual, small-volume submissions, users typically install a client application on their computers and upload the package through it to the FDA server. With AS2, which is recommended for automated or high-volume submissions, users can use any standard AS2 software to transmit the package to the FDA by including additional routing details on top of standard AS2, in the form of custom HTTP request headers.
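As a rough illustration of the packaging step only, the sketch below bundles a submission directory into a .tar.gz archive using Python's standard library; the directory and archive names are hypothetical, and the actual center-specific envelope contents are defined by the FDA's ESG documentation.

```python
import tarfile

# Hypothetical packaging step: bundle the submission directory
# "submission_123" (a made-up name) into a .tar.gz archive of the
# kind the gateway expects before transmission via WebTrader or AS2.
with tarfile.open("submission_123.tar.gz", "w:gz") as archive:
    archive.add("submission_123")  # recursively adds the directory tree
```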
Criticism
The FDA has regulatory oversight over a large array of products that affect the health and life of American citizens. As a result, the FDA's powers and decisions are carefully monitored by several governmental and non-governmental organizations. A $1.8 million 2006 Institute of Medicine report on pharmaceutical regulation in the U.S. found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA.
A 2022 article from Politico raised concerns that food is not a high priority at the FDA. The report explains the FDA has structural and leadership problems in the food division and is often deferential to industry. This might be attributed to lobbying and influence of big food companies in Washington, D.C.
See also
Adverse reaction
Adverse event
Adverse drug reaction
Biosecurity
Biosecurity in the United States
Drug Efficacy Study Implementation
Food and Drug Administration Modernization Act of 1997
FDA Food Safety Modernization Act of 2011
FDA Fast Track Development Program (for drugs)
Food and Drug Administration Amendments Act of 2007 (e.g. drugs)
Food and Drug Administration Safety and Innovation Act of 2012 (GAIN/QIDP etc.)
Inverse benefit law
Investigational Device Exemption (for use in clinical trials)
Kefauver Harris Amendment 1962 – required "proof-of-efficacy" for drugs
International:
Food Administration
International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH)
African Union: African Medicines Agency
Australia: Therapeutic Goods Administration
Brazil: National Health Surveillance Agency
Canada: Marketed Health Products Directorate
Canada: Health Canada
Denmark: Danish Medicines Agency
European Union: European Medicines Agency
Germany: Federal Institute for Drugs and Medical Devices
India: Food Safety and Standards Authority of India
India: Central Drugs Standard Control Organization
Japan: Ministry of Health, Labour and Welfare (MHLW)
Japan: Pharmaceuticals and Medical Devices Agency
Mexico: Federal Commission for the Protection against Sanitary Risk
Philippines: Food and Drug Administration (FDA)
Singapore: Health Sciences Authority
United Kingdom: Medicines and Healthcare products Regulatory Agency
United States: Food and Drug Administration
Notes
References
Further reading
External links
Food and Drug Administration in the Federal Register
Food and Drug Administration in the Code of Federal Regulations
Strategic Plan (archived)
Online books by United States Food and Drug Administration at The Online Books Page
Food and Drug Administration apportionments on OpenOMB
1906 establishments in the United States
American medical research
Government agencies established in 1906
Regulators of biotechnology products
National agencies for drug regulation | Food and Drug Administration | [
"Chemistry",
"Biology"
] | 9,846 | [
"Biotechnology products",
"Regulation of biotechnologies",
"National agencies for drug regulation",
"Regulators of biotechnology products",
"Drug safety"
] |
11,659 | https://en.wikipedia.org/wiki/Fourier%20analysis | In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
Applications
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).
The transforms are usually invertible.
The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones. Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as signal filtering, polynomial multiplication, and multiplying large numbers (see the sketch after this list).
The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
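To make the convolution theorem concrete, here is a minimal sketch, using NumPy and made-up sample values, showing that direct (linear) convolution of two sequences matches the inverse DFT of the pointwise product of their zero-padded DFTs:

```python
import numpy as np

# Two made-up sequences to convolve.
x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 2.0, 0.25])

direct = np.convolve(x, h)                        # length 3 + 4 - 1 = 6
n = len(x) + len(h) - 1                           # zero-pad to the full length
via_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

assert np.allclose(direct, via_fft)               # multiplication <-> convolution
```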
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. By using a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument.
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
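As a minimal sketch of this idea (not the actual JPEG code path), the snippet below transforms one made-up 8x8 block with SciPy's discrete cosine transform, discards weak components, rounds the rest to lower precision, and inverts; the threshold value is an arbitrary assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A made-up 8x8 block of pixel-like values.
rng = np.random.default_rng(1)
block = rng.uniform(0.0, 255.0, (8, 8))

coeffs = dctn(block, norm="ortho")
coeffs[np.abs(coeffs) < 10.0] = 0.0    # eliminate weak components
coeffs = np.round(coeffs)              # store at lower arithmetic precision

approx = idctn(coeffs, norm="ortho")   # approximate reconstruction
print(np.max(np.abs(approx - block)))  # small reconstruction error
```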
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When a function $s(t)$ is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function $S(f)$ at frequency $f$ represents the amplitude of a frequency component whose initial phase is given by the angle of $S(f)$ (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
Some examples include:
Equalization of audio recordings with a series of bandpass filters;
Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
Cross correlation of similar images for co-alignment;
X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
Fourier-transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field;
Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies;
Generation of sound spectrograms used to analyze sounds;
Passive sonar used to classify targets based on machinery noise.
Variants of Fourier analysis
(Continuous) Fourier transform
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time ($t$), and the domain of the output (final) function is ordinary frequency ($f$), the transform of function $s(t)$ at frequency $f$ is given by the complex number:

$$S(f) = \int_{-\infty}^{\infty} s(t)\ e^{-i 2\pi f t}\, dt.$$

Evaluating this quantity for all values of $f$ produces the frequency-domain function. Then $s(t)$ can be represented as a recombination of complex exponentials of all possible frequencies:

$$s(t) = \int_{-\infty}^{\infty} S(f)\ e^{i 2\pi f t}\, df,$$

which is the inverse transform formula. The complex number $S(f)$ conveys both amplitude and phase of frequency $f$.
See Fourier transform for much more information, including:
conventions for amplitude normalization and frequency scaling/units
transform properties
tabulated transforms of specific functions
an extension/generalization for functions of multiple dimensions, such as images.
Fourier series
The Fourier transform of a periodic function, $s_P(t)$, with period $P$, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

$$S[k] = \frac{1}{P} \int_P s_P(t) \cdot e^{-i 2\pi \frac{k}{P} t}\, dt, \quad k \in \mathbb{Z}$$

(where $\int_P$ is the integral over any interval of length $P$).

The inverse transform, known as Fourier series, is a representation of $s_P(t)$ in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:

$$s_P(t) = \sum_{k=-\infty}^{+\infty} S[k] \cdot e^{i 2\pi \frac{k}{P} t}.$$

Any $s_P(t)$ can be expressed as a periodic summation of another function, $s(t)$:

$$s_P(t)\ \triangleq\ \sum_{m=-\infty}^{\infty} s(t - mP),$$

and the coefficients are proportional to samples of $S(f)$ at discrete intervals of $\frac{1}{P}$:

$$S[k] = \frac{1}{P} \cdot S\!\left(\frac{k}{P}\right).$$

Note that any $s(t)$ whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering $s(t)$ (and therefore $S(f)$) from just these samples (i.e. from the Fourier series) is that the non-zero portion of $s(t)$ be confined to a known interval of duration $P$, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
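As a numerical check of the coefficient formula, the sketch below (assuming NumPy) approximates $S[k]$ for a square wave of period $P = 1$ by a Riemann sum; for this waveform the odd-indexed coefficients should approach $\frac{2}{i \pi k}$ and the even-indexed ones should vanish.

```python
import numpy as np

# Riemann-sum approximation of S[k] = (1/P) * integral over one period
# of s(t) * exp(-i*2*pi*k*t/P) dt, for a square wave with period P = 1.
P = 1.0
t = np.linspace(0.0, P, 4096, endpoint=False)
s = np.where(t < P / 2, 1.0, -1.0)          # +1 on the first half, -1 on the second

for k in range(1, 6):
    S_k = np.mean(s * np.exp(-2j * np.pi * k * t / P))  # mean == (1/P) * sum * dt
    print(k, np.round(S_k, 4))   # odd k -> approx 2/(i*pi*k); even k -> approx 0
```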
See Fourier series for more information, including the historical development.
Discrete-time Fourier transform (DTFT)
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:

$$S_{1/T}(f)\ \triangleq\ \sum_{k=-\infty}^{\infty} S\!\left(f - \frac{k}{T}\right) \equiv \sum_{n=-\infty}^{\infty} s[n]\ e^{-i 2\pi f n T},$$

which is known as the DTFT. Thus the DTFT of the $s[n]$ sequence is also the Fourier transform of the modulated Dirac comb function.

The Fourier series coefficients (and inverse transform) are defined by:

$$s[n]\ \triangleq\ T \int_{1/T} S_{1/T}(f)\ e^{i 2\pi f n T}\, df = T\, s(nT).$$

Parameter $T$ corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, $s[n]$, is proportional to samples of an underlying continuous function, $s(t)$, one can observe a periodic summation of the continuous Fourier transform, $S(f)$. Note that any $s(t)$ with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover $S(f)$ and $s(t)$ exactly. A sufficient condition for perfect recovery is that the non-zero portion of $S(f)$ be confined to a known frequency interval of width $\tfrac{1}{T}$. When that interval is $\left[-\tfrac{1}{2T}, \tfrac{1}{2T}\right]$, the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.

Another reason to be interested in $S_{1/T}(f)$ is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:
normalized frequency units
windowing (finite-length sequences)
transform properties
tabulated transforms of specific functions
Discrete Fourier transform (DFT)
Similar to a Fourier series, the DTFT of a periodic sequence, $s_N[n]$, with period $N$, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

$$S[k] = \sum_n s_N[n] \cdot e^{-i 2\pi \frac{k}{N} n}, \quad k \in \mathbb{Z}$$

(where $\sum_n$ is the sum over any sequence of length $N$).

The $S[k]$ sequence is customarily known as the DFT of one cycle of $s_N$. It is also $N$-periodic, so it is never necessary to compute more than $N$ coefficients. The inverse transform, also known as a discrete Fourier series, is given by:

$$s_N[n] = \frac{1}{N} \sum_k S[k] \cdot e^{i 2\pi \frac{n}{N} k},$$

where $\sum_k$ is the sum over any sequence of length $N$.

When $s_N[n]$ is expressed as a periodic summation of another function:

$$s_N[n]\ \triangleq\ \sum_{m=-\infty}^{\infty} s[n - mN]$$

and

$$s[n]\ \triangleq\ T\, s(nT),$$

the coefficients are samples of $S_{1/T}(f)$ at discrete intervals of $\frac{1}{P} = \frac{1}{NT}$:

$$S[k] = S_{1/T}\!\left(\frac{k}{P}\right).$$

Conversely, when one wants to compute an arbitrary number ($N$) of discrete samples of one cycle of a continuous DTFT, $S_{1/T}(f)$, it can be done by computing the relatively simple DFT of $s_N[n]$, as defined above. In most cases, $N$ is chosen equal to the length of the non-zero portion of $s[n]$. Increasing $N$, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of $S_{1/T}(f)$. Decreasing $N$ causes overlap (adding) in the time-domain (analogous to aliasing), which corresponds to decimation in the frequency domain. In most cases of practical interest, the $s[n]$ sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
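A minimal sketch of these points, assuming NumPy and a made-up 8-point sequence: the FFT gives $N$ samples of one cycle of the DTFT, and zero-padding the same data gives more closely spaced samples of the same underlying cycle.

```python
import numpy as np

# A made-up 8-point rectangular pulse.
s = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

S = np.fft.fft(s)               # N = 8 frequency samples of one DTFT cycle
S_dense = np.fft.fft(s, n=64)   # zero-padded: 64 samples of the same DTFT

# The coarse frequency grid is a subset of the fine one.
assert np.allclose(S, S_dense[::8])
```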
See Discrete Fourier transform for much more information, including:
transform properties
applications
tabulated transforms of specific functions
Summary
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, $P$ or $N$. But these formulas do not require that condition.
Symmetry properties
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:

$$\begin{aligned} \text{Time domain:}\quad s &= s_{RE} + s_{RO} + i\, s_{IE} + i\, s_{IO} \\ \text{Frequency domain:}\quad S &= S_{RE} + i\, S_{IO} + i\, S_{IE} + S_{RO} \end{aligned}$$

where the components correspond in order: $s_{RE} \leftrightarrow S_{RE}$, $s_{RO} \leftrightarrow i\, S_{IO}$, $i\, s_{IE} \leftrightarrow i\, S_{IE}$, and $i\, s_{IO} \leftrightarrow S_{RO}$.
From this, various relationships are apparent, for example:
The transform of a real-valued function $(s_{RE} + s_{RO})$ is the conjugate symmetric function $S_{RE} + i\, S_{IO}$. Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function $(i\, s_{IE} + i\, s_{IO})$ is the conjugate antisymmetric function $S_{RO} + i\, S_{IE}$, and the converse is true.
The transform of a conjugate symmetric function $(s_{RE} + i\, s_{IO})$ is the real-valued function $S_{RE} + S_{RO}$, and the converse is true.
The transform of a conjugate antisymmetric function $(s_{RO} + i\, s_{IE})$ is the imaginary-valued function $i\, S_{IE} + i\, S_{IO}$, and the converse is true.
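As a quick numerical check of the first relationship, the sketch below (assuming NumPy and arbitrary random data) verifies that the DFT of a real-valued sequence is conjugate symmetric:

```python
import numpy as np

# The DFT of a real-valued sequence satisfies S[N - k] == conj(S[k])
# for k = 1 .. N-1 (indices taken modulo N).
rng = np.random.default_rng(0)
s = rng.standard_normal(16)      # a real-valued sequence
S = np.fft.fft(s)

assert np.allclose(S[1:][::-1], np.conj(S[1:]))
```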
History
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see ).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.
Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic. Lagrange transformed the roots $x_1, x_2, x_3$ into the resolvents:

$$\begin{aligned} r_1 &= x_1 + x_2 + x_3, \\ r_2 &= x_1 + \zeta\, x_2 + \zeta^2 x_3, \\ r_3 &= x_1 + \zeta^2 x_2 + \zeta\, x_3, \end{aligned}$$

where $\zeta$ is a cubic root of unity, which is the DFT of order 3.
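As a small check on reading the resolvent map as an order-3 DFT, the sketch below compares NumPy's 3-point FFT of arbitrary stand-in roots with the resolvent formulas; NumPy's convention uses $\zeta = e^{-2\pi i/3}$, which is also a cubic root of unity.

```python
import numpy as np

# Arbitrary stand-in values for the roots x1, x2, x3.
x = np.array([2.0, -1.0, 3.0])
zeta = np.exp(-2j * np.pi / 3)   # cubic root of unity in NumPy's DFT convention

resolvents = np.array([
    x[0] + x[1] + x[2],
    x[0] + zeta * x[1] + zeta**2 * x[2],
    x[0] + zeta**2 * x[1] + zeta * x[2],
])

assert np.allclose(np.fft.fft(x), resolvents)  # the resolvent map is the order-3 DFT
```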
A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.
Time–frequency transforms
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
Fourier transforms on arbitrary locally compact abelian topological groups
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets, even discrete cosets.
See also
Conjugate Fourier series
Generalized Fourier series
Fourier–Bessel series
Fourier-related transforms
Laplace transform (LT)
Two-sided Laplace transform
Mellin transform
Non-uniform discrete Fourier transform (NDFT)
Quantum Fourier transform (QFT)
Number-theoretic transform
Basis vectors
Bispectrum
Characteristic function (probability theory)
Orthogonal functions
Schwartz space
Spectral density
Spectral density estimation
Spectral music
Walsh function
Wavelet
Notes
References
Further reading
External links
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
An Intuitive Explanation of Fourier Theory by Steven Lehar.
Lectures on Image Processing: A collection of 18 lectures in pdf format from Vanderbilt University. Lecture 6 is on the 1- and 2-D Fourier Transform. Lectures 7–15 make use of it., by Alan Peters
Introduction to Fourier analysis of time series at Medium
Integral transforms
Digital signal processing
Mathematical physics
Mathematics of computing
Time series
Joseph Fourier
Acoustics | Fourier analysis | [
"Physics",
"Mathematics"
] | 3,696 | [
"Applied mathematics",
"Theoretical physics",
"Classical mechanics",
"Acoustics",
"Mathematical physics"
] |
11,665 | https://en.wikipedia.org/wiki/Filtration | Filtration is a physical separation process that separates solid matter and fluid from a mixture using a filter medium that has a complex structure through which only the fluid can pass. Solid particles that cannot pass through the filter medium are described as oversize and the fluid that passes through is called the filtrate. Oversize particles may form a filter cake on top of the filter and may also block the filter lattice, preventing the fluid phase from crossing the filter, known as blinding. The size of the largest particles that can successfully pass through a filter is called the effective pore size of that filter. The separation of solid and fluid is imperfect; solids will be contaminated with some fluid and filtrate will contain fine particles (depending on the pore size, filter thickness and biological activity). Filtration occurs both in nature and in engineered systems; there are biological, geological, and industrial forms. In everyday usage the verb "strain" is more often used; for example, using a colander to drain cooking water from cooked pasta.
Oil filtration refers to the method of purifying oil by removing impurities that can degrade its quality. Contaminants can enter the oil through various means, including wear and tear of machinery components, environmental factors, and improper handling during oil changes. The primary goal of oil filtration is to enhance the oil’s performance, thereby protecting the machinery and extending its service life.
Filtration is also used to describe biological and physical systems that not only separate solids from a fluid stream but also remove chemical species and biological organisms by entrainment, phagocytosis, adsorption and absorption. Examples include slow sand filters and trickling filters. It is also used as a general term for filter feeding, by which organisms use a variety of means to capture small food particles from their environment. Examples range from the microscopic Vorticella up to the basking shark, one of the largest fishes, and the baleen whales, all of which are described as filter feeders.
Physical processes
Filtration is used to separate particles and fluid in a suspension, where the fluid can be a liquid, a gas or a supercritical fluid. Depending on the application, either one or both of the components may be isolated.
Filtration, as a physical operation, enables materials of different chemical composition to be separated: a solvent is chosen which dissolves one component while not dissolving the other. By dissolving the mixture in the chosen solvent, one component will go into the solution and pass through the filter, while the other will be retained.
Filtration is widely used in chemical engineering. It may be combined with other unit operations to process the feed stream, as in the biofilter, which is a combined filter and biological digestion device.
Filtration differs from sieving, where separation occurs at a single perforated layer (a sieve). In sieving, particles that are too big to pass through the holes of the sieve are retained (see particle size distribution). In filtration, a multilayer lattice retains those particles that are unable to follow the tortuous channels of the filter. Oversize particles may form a cake layer on top of the filter and may also block the filter lattice, preventing the fluid phase from crossing the filter (blinding). Commercially, the term filter is applied to membranes where the separation lattice is so thin that the surface becomes the main zone of particle separation, even though these products might be described as sieves.
Filtration differs from adsorption, where separation relies on attraction to the surface of the medium (for example, by surface charge) rather than on mechanical exclusion. Some adsorption devices containing activated charcoal and ion-exchange resin are commercially called filters, although filtration is not their principal mechanical function.
Filtration differs from removal of magnetic contaminants from fluids with magnets (typically lubrication oil, coolants and fuel oils) because there is no filter medium. Commercial devices called "magnetic filters" are sold, but the name reflects their use, not their mode of operation.
In biological filters, oversize particulates are trapped and ingested and the resulting metabolites may be released. For example, in animals (including humans), renal filtration removes waste from the blood, and in water treatment and sewage treatment, undesirable constituents are removed by adsorption into a biological film grown on or in the filter medium, as in slow sand filtration.
Methods
Filters may be used for the purpose of removing unwanted liquid from a solid residue, cleaning unwanted solids from a liquid, or simply to separate the solid from the liquid.
There are many different methods of filtration; all aim to attain the separation of substances. Separation is achieved by some form of interaction between the substance or objects to be removed and the filter. The substance that is to pass through the filter must be a fluid, i.e. a liquid or gas. Methods of filtration vary depending on the location of the targeted material, i.e. whether it is dissolved in the fluid phase or suspended as a solid.
There are several laboratory filtration techniques, depending on the desired outcome: hot, cold and vacuum filtration. Major purposes of these techniques include the removal of impurities from a mixture and the isolation of solids from a mixture.
The hot filtration method is mainly used to separate solids from a hot solution, in order to prevent crystal formation in the filter funnel and other apparatus that comes into contact with the solution. To this end, the apparatus and the solution are heated, preventing the rapid decrease in temperature which would otherwise lead to crystallisation of the solids in the funnel and hinder the filtration process.
One of the most important measures to prevent the formation of crystals in the funnel, and thus achieve effective hot filtration, is the use of a stemless filter funnel. Due to the absence of a stem, there is less surface area of contact between the solution and the funnel, which prevents re-crystallization of solid in the funnel that would adversely affect the filtration process.
The cold filtration method uses an ice bath to rapidly cool the solution to be crystallized, rather than leaving it to cool slowly at room temperature. This technique results in the formation of very small crystals, as opposed to the large crystals obtained by cooling the solution at room temperature.
The vacuum filtration technique is preferred for small batches of solution, in order to dry small crystals quickly. This method requires a Büchner funnel, filter paper of a smaller diameter than the funnel, a Büchner flask, and rubber tubing to connect to a vacuum source.
Centrifugal filtration is carried out by rapidly rotating the substance to be filtered. The more dense material is separated from the less dense matter by the horizontal rotation.
Gravity filtration is the process of pouring the mixture from a higher location to a lower one. It is frequently accomplished via simple filtration, which involves placing filter paper in a glass funnel with the liquid passing through by gravity while the insoluble solid particles are caught by the filter paper. Filter cones, fluted filters, or filtering pipets can all be employed, depending on the amount of the substance at hand. Gravity filtration is in widespread everyday use, for example for straining cooking water from food, or removing contaminants from a liquid.
Filtering force
The fluid to be filtered can flow through the filter medium only when a driving force is supplied. This force may come from gravity, centrifugation, pressure applied to the fluid above the filter, a vacuum applied below the filter, or a combination of these. In both straightforward laboratory filtrations and massive sand-bed filters, gravitational force alone may be utilized. Centrifuges with a bowl holding a porous filter medium can be thought of as filters in which a centrifugal force several times stronger than gravity replaces gravitational force. When laboratory filtration is difficult, a partial vacuum is typically applied to the container below the filter medium to speed up the process. Depending on the type of filter being used, the majority of industrial filtration operations employ pressure or vacuum to speed up filtering and reduce the amount of equipment needed.
Filter media
Filter media are the materials used to perform the separation.
Two main types of filter media are employed in laboratories:
Surface filters are solid sieves with a mesh to trap solid particles, sometimes with the aid of filter paper (e.g. Büchner funnel, belt filter, rotary vacuum-drum filter, cross-flow filters, screen filter).
Depth filters, beds of granular material which retain the solid particles as they pass (e.g. sand filter).
Surface filters allow the solid particles, i.e. the residue, to be collected intact; depth filters do not. However, the depth filter is less prone to clogging due to the greater surface area where the particles can be trapped. Also, when the solid particles are very fine, it is often cheaper and easier to discard the contaminated granules than to clean the solid sieve.
Filter media can be cleaned by rinsing with solvents or detergents. Alternatively, in engineering applications such as swimming pool water treatment plants, they may be cleaned by backwashing. Self-cleaning screen filters utilize point-of-suction backwashing to clean the screen without interrupting system flow.
Achieving flow through the filter
Fluids flow through a filter due to a pressure difference—fluid flows from the high-pressure side to the low-pressure side of the filter. The simplest method to achieve this is by gravity which can be seen in the coffeemaker example. In the laboratory, pressure in the form of compressed air on the feed side (or vacuum on the filtrate side) may be applied to make the filtration process faster, though this may lead to clogging or the passage of fine particles. Alternatively, the liquid may flow through the filter by the force exerted by a pump, a method commonly used in industry when a reduced filtration time is important. In this case, the filter need not be mounted vertically.
Filter aid
Certain filter aids may be used to aid filtration. These are often incompressible diatomaceous earth, or kieselguhr, which is composed primarily of silica. Also used are wood cellulose and other inert porous solids such as the cheaper and safer perlite. Activated carbon is often used in industrial applications that require changes in the filtrate's properties, such as altering colour or odour.
These filter aids can be used in two different ways. They can be used as a precoat before the slurry is filtered. This will prevent gelatinous-type solids from plugging the filter medium and also give a clearer filtrate. They can also be added to the slurry before filtration. This increases the porosity of the cake and reduces the resistance of the cake during filtration. In a rotary filter, the filter aid may be applied as a precoat; subsequently, thin slices of this layer are sliced off with the cake.
The use of filter aids is usually limited to cases where the cake is discarded or where the precipitate can be chemically separated from the filter.
Alternatives
Filtration is a more efficient method for the separation of mixtures than decantation but is much more time-consuming. If very small amounts of solution are involved, most of the solution may be soaked up by the filter medium.
An alternative to filtration is centrifugation. Instead of filtering the mixture of solid and liquid particles, the mixture is centrifuged to force the (usually) denser solid to the bottom, where it often forms a firm cake. The liquid above can then be decanted. This method is especially useful for separating solids that do not filter well, such as gelatinous or fine particles. These solids can clog or pass through the filter, respectively.
Biological filtration
Biological filtration may take place inside an organism, or the biological component may be grown on a medium in the material being filtered. Removal of solids, emulsified components, organic chemicals and ions may be achieved by ingestion and digestion, adsorption or absorption. Because of the complexity of biological interactions, especially in multi-organism communities, it is often not possible to determine which processes are achieving the filtration result. At the molecular level, it may often be by individual catalytic enzyme actions within an individual organism. The waste products of some organisms may subsequently be broken down by other organisms to extract as much energy as possible and, in so doing, reduce complex organic molecules to very simple inorganic species such as water, carbon dioxide and nitrogen.
Excretion
In mammals, reptiles, and birds, the kidneys function by renal filtration whereby the glomerulus selectively removes undesirable constituents such as urea, followed by selective reabsorption of many substances essential for the body to maintain homeostasis. The complete process is termed excretion by urination. Similar but often less complex solutions are deployed in all animals, even the protozoa, where the contractile vacuole provides a similar function.
Biofilms
Biofilms are often complex communities of bacteria, phages, yeasts and often more complex organisms including protozoa, rotifers and annelids which form dynamic and complex, frequently gelatinous films on wet substrates. Such biofilms coat the rocks of most rivers and the sea and they provide the key filtration capability of the Schmutzdecke on the surface of slow sand filters and the film on the filter media of trickling filters which are used to create potable water and treat sewage respectively.
An example of a biofilm is a biological slime, which may be found in lakes, rivers, on rocks, etc. The utilization of single- or dual-species biofilms is a novel technology, since natural biofilms develop slowly. The use of biofilms in the biofiltration process allows desirable biomass and critical nutrients to attach to an immobilized support. Advances in biofiltration methods assist in removing significant volumes of effluents from wastewater, so that water may be reused for various processes.
Systems for biologically treating wastewater are crucial for enhancing both human health and water quality. Biofilm technology is shaped by the formation of biofilms on various filter media and other factors, which influence the growth, structure and function of these biofilms. A thorough investigation of the composition, diversity, and dynamics of biofilms draws on a variety of traditional and contemporary molecular approaches.
Filter feeders
Filter feeders are organisms that obtain their food by filtering their, generally aquatic, environment. Many of the protozoa are filter feeders, using a range of adaptations: from rigid spikes of protoplasm held in the water flow, as in the suctoria, to various arrangements of beating cilia that direct particles to the mouth, including organisms such as Vorticella, which have a complex ring of cilia that creates a vortex in the flow, drawing particles into the oral cavity. Similar feeding techniques are used by the Rotifera and the Ectoprocta. Many aquatic arthropods are filter feeders. Some use rhythmical beating of abdominal limbs to create a water current towards the mouth, whilst hairs on the legs trap any particles. Others, such as some caddisflies, spin fine webs in the water flow to trap particles.
Examples
Many filtration processes include more than one filtration mechanism, and particulates are often removed from the fluid first to prevent clogging of downstream elements.
Particulate filtration includes:
The coffee filter to separate the coffee infusion from the grounds.
HEPA filters in air conditioning to remove particles from air.
Belt filters to extract precious metals in mining.
Vertical plate filter such as those used in Merrill–Crowe process.
The Nutsche filter is typically used in pharmaceutical applications or batch processes that need to capture solids.
Furnaces use filtration to prevent the furnace elements from fouling with particulates.
A pneumatic conveying system such as an industrial exhaust duct system often employs filtration to stop or slow the flow of unwanted material that is transported, through the use of a baghouse.
In the laboratory, a Büchner funnel is often used, with a filter paper serving as the porous barrier.
Air filters are commonly used to remove airborne particulate matter in building ventilation systems, combustion engines, and industrial processes.
Oil filter in automobiles, often as a canister or cartridge.
Aquarium filter
Straining water from food with a colander
Adsorption filtration removes contaminants by adsorption of the contaminant onto the filter medium. This requires intimate contact between the filter medium and the filtrate, and it takes time for diffusion to bring the contaminant into direct contact with the medium while passing through it. Slower flow also reduces the pressure drop across the filter. Applications include:
Carbon dioxide removal from breathing gas in rebreathers and life-support systems using scrubber filters,
Activated carbon filters to remove volatile hydrocarbons, odours, and other contaminants from recirculated breathing gas in closed habitats.
Combined applications include:
Compressed breathing air production, where the air passes through a particulate filter before entering the compressor, which removes particles likely to damage the compressor, followed by droplet separation after post-compression cooling and final adsorption filtration of the product to remove gaseous hydrocarbon contaminants and excess water vapour. In some cases prefilters using adsorption media are used to control carbon dioxide levels, pressure swing adsorption may be used to increase the oxygen fraction, and where the risk of carbon monoxide contamination exists, hopcalite catalytic converters may be included in the filtration media of the product. All these processes are broadly referred to as aspects of the filtration of the product.
Potable water treatment using biofilm filtration in slow sand filters.
Wastewater treatment using biofilm filtration using trickling filters.
See also
References
External links
Filtration modeling (constant rate and pressure)
Analytical chemistry
Laboratory techniques
Alchemical processes
Industrial water treatment | Filtration | [
"Chemistry"
] | 3,780 | [
"Separation processes",
"Water treatment",
"Industrial water treatment",
"Filtration",
"Alchemical processes",
"nan"
] |
11,671 | https://en.wikipedia.org/wiki/Fick%27s%20laws%20of%20diffusion | Fick's laws of diffusion describe diffusion and were first posited by Adolf Fick in 1855 on the basis of largely experimental results. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law, which in turn is identical to the diffusion equation.
Fick's first law: Movement of particles from high to low concentration (diffusive flux) is directly proportional to the particle's concentration gradient.
Fick's second law: Prediction of change in concentration gradient with time due to diffusion.
A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion.
History
In 1855, physiologist Adolf Fick first reported his now well-known laws governing the transport of mass through diffusive means. Fick's work was inspired by the earlier experiments of Thomas Graham, which fell short of proposing the fundamental laws for which Fick would become famous. Fick's law is analogous to the relationships discovered at the same epoch by other eminent scientists: Darcy's law (hydraulic flow), Ohm's law (charge transport), and Fourier's law (heat transport).
Fick's experiments (modeled on Graham's) dealt with measuring the concentrations and fluxes of salt, diffusing between two reservoirs through tubes of water. It is notable that Fick's work primarily concerned diffusion in fluids, because at the time, diffusion in solids was not considered generally possible. Today, Fick's laws form the core of our understanding of diffusion in solids, liquids, and gases (in the absence of bulk fluid motion in the latter two cases). When a diffusion process does not follow Fick's laws (which happens in cases of diffusion through porous media and diffusion of swelling penetrants, among others), it is referred to as non-Fickian.
Fick's first law
Fick's first law relates the diffusive flux to the gradient of the concentration. It postulates that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient (spatial derivative), or in simplistic terms the concept that a solute will move from a region of high concentration to a region of low concentration across a concentration gradient. In one (spatial) dimension, the law can be written in various forms; the most common form, on a molar basis, is

$$J = -D \frac{d\varphi}{dx},$$
where
J is the diffusion flux, of which the dimension is the amount of substance per unit area per unit time. J measures the amount of substance that will flow through a unit area during a unit time interval,
D is the diffusion coefficient or diffusivity. Its dimension is area per unit time,
dφ/dx is the concentration gradient,
φ (for ideal mixtures) is the concentration, with a dimension of amount of substance per unit volume,
x is position, the dimension of which is length.
D is proportional to the squared velocity of the diffusing particles, which depends on the temperature, the viscosity of the fluid and the size of the particles according to the Stokes–Einstein relation. In dilute aqueous solutions the diffusion coefficients of most ions are similar and at room temperature have values in the range of (0.6–2)×10−9 m2/s. For biological molecules the diffusion coefficients normally range from 10−10 to 10−11 m2/s.
In two or more dimensions we must use ∇, the del or gradient operator, which generalises the first derivative, obtaining

$$\mathbf{J} = -D \nabla \varphi,$$

where J denotes the diffusion flux vector.
The driving force for the one-dimensional diffusion is the quantity −∂φ/∂x, which for ideal mixtures is the concentration gradient.
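As a numerical sketch (the diffusivity and concentration profile below are illustrative assumptions, not values from this article), the flux can be evaluated directly from a sampled concentration profile:

```python
import numpy as np

D = 1e-9                          # assumed diffusivity, m^2/s (typical small ion in water)
x = np.linspace(0.0, 1e-3, 101)   # positions, m
phi = np.exp(-x / 2e-4)           # assumed concentration profile, mol/m^3

# Fick's first law: J = -D * dphi/dx, so the flux runs down the gradient.
J = -D * np.gradient(phi, x)
print(J[:3])   # positive flux: concentration decreases with increasing x
```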
Variations of the first law
Another form for the first law is to write it with the primary variable as mass fraction (y_i, given for example in kg/kg); the equation then changes to:

$$\mathbf{J}_i = -\frac{\rho D}{M_i} \nabla y_i,$$
where
the index i denotes the ith species,
J_i is the diffusion flux vector of the ith species (for example in mol/(m2·s)),
M_i is the molar mass of the ith species,
ρ is the mixture density (for example in kg/m3).
The ρ is outside the gradient operator. This is because:

$$y_i = \frac{\rho_{si}}{\rho},$$

where ρ_si is the partial density of the ith species.
Beyond this, in chemical systems other than ideal solutions or mixtures, the driving force for the diffusion of each species is the gradient of chemical potential of this species. Then Fick's first law (one-dimensional case) can be written

$$J_i = -\frac{D c_i}{RT} \frac{\partial \mu_i}{\partial x},$$
where
the index i denotes the ith species,
c_i is the concentration (mol/m3),
R is the universal gas constant (J/(K·mol)),
T is the absolute temperature (K),
μ_i is the chemical potential (J/mol).
The driving force of Fick's law can be expressed as a fugacity difference:
where f_i is the fugacity in Pa; f_i is a partial pressure of component i in a vapor or liquid phase. At vapor–liquid equilibrium the evaporation flux is zero because f_i^G = f_i^L.
Derivation of Fick's first law for gases
Four versions of Fick's law for binary gas mixtures are given below. These assume: thermal diffusion is negligible; the body force per unit mass is the same on both species; and either pressure is constant or both species have the same molar mass. Under these conditions, it can be shown in detail how the diffusion equation from the kinetic theory of gases reduces to this version of Fick's law:

$$\mathbf{V}_i = -D \nabla \ln y_i,$$
where V_i is the diffusion velocity of species i. In terms of species flux this is
If, additionally, ∇ρ = 0, this reduces to the most common form of Fick's law,

$$\mathbf{J}_i = -D \nabla c_i.$$
If (instead of, or in addition to, ∇ρ = 0) both species have the same molar mass, Fick's law becomes

$$\mathbf{J}_i = -\frac{\rho D}{M} \nabla x_i,$$

where x_i is the mole fraction of species i.
Fick's second law
Fick's second law predicts how diffusion causes the concentration to change with respect to time. It is a partial differential equation which in one dimension reads:

$$\frac{\partial \varphi}{\partial t} = D \frac{\partial^2 \varphi}{\partial x^2},$$
where
φ is the concentration, in dimensions of amount of substance per unit volume (example: mol/m3); φ = φ(x, t) is a function that depends on location x and time t,
t is time (example: s),
D is the diffusion coefficient, in dimensions of area per unit time (example: m2/s),
x is the position (example: m).
In two or more dimensions we must use the Laplacian Δ = ∇2, which generalises the second derivative, obtaining the equation

$$\frac{\partial \varphi}{\partial t} = D \Delta \varphi.$$

Fick's second law has the same mathematical form as the heat equation, and its fundamental solution is the same as the heat kernel, except for switching thermal conductivity with the diffusion coefficient D:

$$\varphi(x,t) = \frac{1}{\sqrt{4 \pi D t}} \exp\left( -\frac{x^2}{4 D t} \right).$$
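A minimal explicit finite-difference sketch of the one-dimensional second law follows (grid spacing, time step and diffusivity are illustrative assumptions; the explicit scheme is stable only when D·dt/dx² ≤ 1/2):

```python
import numpy as np

D, dx, dt = 1.0, 0.1, 0.004      # assumed units; D*dt/dx**2 = 0.4 < 0.5
phi = np.zeros(101)
phi[50] = 1.0 / dx               # approximate unit point source at the center

for _ in range(500):
    # dphi/dt = D * d2phi/dx2: central difference in space, Euler step in time
    phi[1:-1] += D * dt / dx**2 * (phi[2:] - 2 * phi[1:-1] + phi[:-2])

# The evolved profile approaches the heat kernel with the same D and t.
t = 500 * dt
x = (np.arange(101) - 50) * dx
kernel = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print(np.max(np.abs(phi - kernel)))   # small discretization error
```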
Derivation of Fick's second law
Fick's second law can be derived from Fick's first law and mass conservation in the absence of any chemical reactions:

$$\frac{\partial \varphi}{\partial t} + \frac{\partial}{\partial x} J = 0 \;\Rightarrow\; \frac{\partial \varphi}{\partial t} - \frac{\partial}{\partial x} \left( D \frac{\partial \varphi}{\partial x} \right) = 0.$$

Assuming the diffusion coefficient D to be a constant, one can exchange the orders of the differentiation and multiply by the constant:

$$\frac{\partial}{\partial x} \left( D \frac{\partial \varphi}{\partial x} \right) = D \frac{\partial^2 \varphi}{\partial x^2},$$

and, thus, receive the form of Fick's equation as was stated above.
For the case of diffusion in two or more dimensions Fick's second law becomes

$$\frac{\partial \varphi}{\partial t} = D \Delta \varphi,$$
which is analogous to the heat equation.
If the diffusion coefficient is not a constant, but depends upon the coordinate or concentration, Fick's second law yields

$$\frac{\partial \varphi}{\partial t} = \nabla \cdot ( D \nabla \varphi ).$$

An important example is the case where φ is at a steady state, i.e. the concentration does not change with time, so that the left part of the above equation is identically zero. In one dimension with constant D, the solution for the concentration will be a linear change of concentration along x. In two or more dimensions we obtain

$$\nabla^2 \varphi = 0,$$
which is Laplace's equation, the solutions to which are referred to by mathematicians as harmonic functions.
Example solutions and generalization
Fick's second law is a special case of the convection–diffusion equation in which there is no advective flux and no net volumetric source. It can be derived from the continuity equation:

$$\frac{\partial \varphi}{\partial t} + \nabla \cdot \mathbf{j} = R,$$

where j is the total flux and R is a net volumetric source for φ. The only source of flux in this situation is assumed to be diffusive flux:

$$\mathbf{j}_{\text{diffusion}} = -D \nabla \varphi.$$

Plugging the definition of diffusive flux into the continuity equation and assuming there is no source (R = 0), we arrive at Fick's second law:

$$\frac{\partial \varphi}{\partial t} = D \nabla^2 \varphi.$$
If flux were the result of both diffusive flux and advective flux, the convection–diffusion equation is the result.
Example solution 1: constant concentration source and diffusion length
A simple case of diffusion with time t in one dimension (taken as the x-axis) from a boundary located at position x = 0, where the concentration is maintained at a value n_0, is

$$n(x,t) = n_0 \operatorname{erfc}\!\left( \frac{x}{2\sqrt{Dt}} \right),$$
where erfc is the complementary error function. This is the case when corrosive gases diffuse through the oxidative layer towards the metal surface (if we assume that the concentration of gases in the environment is constant and the diffusion space – that is, the corrosion product layer – is semi-infinite, starting at 0 at the surface and spreading infinitely deep in the material). If, in its turn, the diffusion space is infinite (lasting both through the layer with n(x, 0) = 0, x > 0, and that with n(x, 0) = n_0, x ≤ 0), then the solution is amended only with a coefficient 1/2 in front of n_0 (as the diffusion now occurs in both directions). This case is valid when some solution with concentration n_0 is put in contact with a layer of pure solvent. (Bokstein, 2005) The length 2√(Dt) is called the diffusion length and provides a measure of how far the concentration has propagated in the x-direction by diffusion in time t (Bird, 1976).
As a quick approximation of the error function, the first two terms of its Taylor series can be used:

$$n(x,t) \approx n_0 \left( 1 - \frac{x}{\sqrt{\pi D t}} \right).$$

If D is time-dependent, the diffusion length becomes

$$2 \sqrt{\int_0^t D(\tau)\, d\tau}.$$

This idea is useful for estimating a diffusion length over a heating and cooling cycle, where D varies with temperature.
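A short numerical sketch of this constant-source solution (the surface concentration, diffusivity and time below are assumed values):

```python
import numpy as np
from scipy.special import erfc

D = 1e-9       # assumed diffusivity, m^2/s
n0 = 1.0       # concentration held fixed at the x = 0 boundary
t = 3600.0     # one hour of diffusion

x = np.linspace(0.0, 4e-3, 5)
n = n0 * erfc(x / (2 * np.sqrt(D * t)))   # concentration profile n(x, t)
print(n)

# Diffusion length: how far the concentration has penetrated by time t.
print(2 * np.sqrt(D * t))                 # ~3.8e-3 m
```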
Example solution 2: Brownian particle and mean squared displacement
Another simple case of diffusion is the Brownian motion of one particle. The particle's mean squared displacement from its original position is:

$$\mathrm{MSD} \equiv \langle (\mathbf{x} - \mathbf{x_0})^2 \rangle = 2 n D t,$$

where n is the dimension of the particle's Brownian motion. For example, the diffusion of a molecule across a cell membrane 8 nm thick is 1-D diffusion because of the spherical symmetry; however, the diffusion of a molecule from the membrane to the center of a eukaryotic cell is a 3-D diffusion. For a cylindrical cactus, the diffusion from photosynthetic cells on its surface to its center (the axis of its cylindrical symmetry) is a 2-D diffusion.
The square root of MSD, √(2nDt), is often used as a characterization of how far the particle has moved after time t has elapsed. The MSD is symmetrically distributed over the 1D, 2D, and 3D space. Thus, the probability distribution of the magnitude of MSD in 1D is Gaussian and in 3D is a Maxwell–Boltzmann distribution.
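A quick Monte Carlo sketch (particle count, step count and D are arbitrary choices) verifies MSD = 2nDt in one, two and three dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, steps, particles = 1.0, 1e-3, 500, 2000

for n in (1, 2, 3):                        # dimensionality of the walk
    # Each Gaussian step has variance 2*D*dt per coordinate.
    jumps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(particles, steps, n))
    r = jumps.sum(axis=1)                  # displacement after time t = steps*dt
    msd = np.mean(np.sum(r**2, axis=-1))
    print(n, msd, 2 * n * D * steps * dt)  # simulated vs. theoretical 2nDt
```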
Generalizations
In non-homogeneous media, the diffusion coefficient varies in space, D = D(x). This dependence does not affect Fick's first law, but the second law changes:

$$\frac{\partial \varphi(x,t)}{\partial t} = \nabla \cdot \big( D(x) \nabla \varphi(x,t) \big).$$
In anisotropic media, the diffusion coefficient depends on the direction. It is a symmetric tensor D = D_ij. Fick's first law changes to

$$\mathbf{J} = -D \nabla \varphi;$$

it is the product of a tensor and a vector:

$$J_i = -\sum_{j} D_{ij} \frac{\partial \varphi}{\partial x_j}.$$

For the diffusion equation this formula gives

$$\frac{\partial \varphi}{\partial t} = \sum_{i,j} D_{ij} \frac{\partial^2 \varphi}{\partial x_i \, \partial x_j}.$$

The symmetric matrix of diffusion coefficients D_ij should be positive definite. It is needed to make the right-hand side operator elliptic.
For inhomogeneous anisotropic media these two forms of the diffusion equation should be combined in

$$\frac{\partial \varphi}{\partial t} = \sum_{i,j} \frac{\partial}{\partial x_i} \left( D_{ij}(\mathbf{x}) \frac{\partial \varphi}{\partial x_j} \right).$$

The approach based on Einstein's mobility and the Teorell formula gives the following generalization of Fick's equation for the multicomponent diffusion of the perfect components:

$$\frac{\partial \varphi_i}{\partial t} = \sum_j \nabla \cdot \left( D_{ij} \nabla \varphi_j \right),$$

where φ_i are the concentrations of the components and D_ij is the matrix of coefficients. Here, indices i and j are related to the various components and not to the space coordinates.
The Chapman–Enskog formulae for diffusion in gases include exactly the same terms. These physical models of diffusion are different from the test models which are valid for very small deviations from the uniform equilibrium. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation.
For anisotropic multicomponent diffusion coefficients one needs a rank-four tensor, for example D_{ij,αβ}, where i, j refer to the components and α, β correspond to the space coordinates.
Applications
Equations based on Fick's law have been commonly used to model transport processes in foods, neurons, biopolymers, pharmaceuticals, porous soils, population dynamics, nuclear materials, plasma physics, and semiconductor doping processes. The theory of voltammetric methods is based on solutions of Fick's equation. On the other hand, in some cases a "Fickian" description (another common approximation of the transport equation is that of diffusion theory) is inadequate. For example, in polymer science and food science a more general approach is required to describe the transport of components in materials undergoing a glass transition. One more general framework is the Maxwell–Stefan diffusion equations
of multi-component mass transfer, from which Fick's law can be obtained as a limiting case, when the mixture is extremely dilute and every chemical species is interacting only with the bulk mixture and not with other species. To account for the presence of multiple species in a non-dilute mixture, several variations of the Maxwell–Stefan equations are used. See also non-diagonal coupled transport processes (Onsager relationship).
Fick's flow in liquids
When two miscible liquids are brought into contact, and diffusion takes place, the macroscopic (or average) concentration evolves following Fick's law. On a mesoscopic scale, that is, between the macroscopic scale described by Fick's law and molecular scale, where molecular random walks take place, fluctuations cannot be neglected. Such situations can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
In particular, fluctuating hydrodynamic equations include a Fick's flow term, with a given diffusion coefficient, along with hydrodynamics equations and stochastic terms describing fluctuations. When calculating the fluctuations with a perturbative approach, the zeroth-order approximation is Fick's law. The first order gives the fluctuations, and it comes out that fluctuations contribute to diffusion. This represents somewhat of a tautology, since a phenomenon described by a lower-order approximation is the result of a higher-order approximation: this problem is solved only by renormalizing the fluctuating hydrodynamics equations.
Sorption rate and collision frequency of diluted solute
Adsorption, absorption, and collision of molecules, particles, and surfaces are important problems in many fields. These fundamental processes regulate chemical, biological, and environmental reactions. Their rate can be calculated using the diffusion constant and Fick's laws of diffusion especially when these interactions happen in diluted solutions.
Typically, the diffusion constant of molecules and particles defined by Fick's equation can be calculated using the Stokes–Einstein equation. In the ultrashort time limit, on the order of the diffusion time a2/D, where a is the particle radius, the diffusion is described by the Langevin equation. At longer times, the Langevin equation merges into the Stokes–Einstein equation. The latter is appropriate for the condition of the diluted solution, where long-range diffusion is considered. According to the fluctuation-dissipation theorem based on the Langevin equation in the long-time limit, and when the particle is significantly denser than the surrounding fluid, the time-dependent diffusion constant is:

$$D(t) = \mu k_{\rm B} T \left( 1 - e^{-t/(m\mu)} \right),$$
where (all in SI units)
kB is the Boltzmann constant,
T is the absolute temperature,
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory),
m is the mass of the particle,
t is time.
For a single molecule such as organic molecules or biomolecules (e.g. proteins) in water, the exponential term is negligible due to the small product of mμ in the ultrafast picosecond region, thus irrelevant to the relatively slower adsorption of diluted solute.
The adsorption or absorption rate of a dilute solute to a surface or interface in a (gas or liquid) solution can be calculated using Fick's laws of diffusion. The accumulated number of molecules adsorbed on the surface is expressed by the Langmuir–Schaefer equation, obtained by integrating the diffusion flux equation over time:

$$\Gamma = 2 A C_b \sqrt{\frac{D t}{\pi}},$$
A is the surface area (m2),
C_b is the number concentration of the adsorber molecules (solute) in the bulk solution (#/m3),
D is the diffusion coefficient of the adsorber (m2/s),
t is the elapsed time (s),
Γ is the accumulated number of molecules, in unit # molecules, adsorbed during the time t.
The equation is named after American chemists Irving Langmuir and Vincent Schaefer.
Briefly, as explained above,
the concentration gradient profile near a newly created (from t = 0) absorptive surface (placed at x = 0) in a once-uniform bulk solution is solved in the above sections from Fick's equation:

$$C(x,t) = C_b \operatorname{erf}\!\left( \frac{x}{2\sqrt{Dt}} \right),$$

where C_b is the number concentration of adsorber molecules in the bulk, far from the surface (#/m3).
The concentration gradient at the subsurface at x = 0 is simplified to the pre-exponential factor of the distribution:

$$\left. \frac{\partial C}{\partial x} \right|_{x=0} = \frac{C_b}{\sqrt{\pi D t}},$$

and the rate of diffusion (flux) across the area A of the plane is

$$\frac{d\Gamma}{dt} = A C_b \sqrt{\frac{D}{\pi t}}.$$

Integrating over time,

$$\Gamma = 2 A C_b \sqrt{\frac{D t}{\pi}}.$$
The Langmuir–Schaefer equation can be extended to the Ward–Tordai equation to account for the "back-diffusion" of rejected molecules from the surface:

$$\Gamma(t) = 2 A \sqrt{\frac{D}{\pi}} \left( C_b \sqrt{t} - \int_0^{\sqrt{t}} C_s(t - \tau^2)\, d\tau \right),$$

where C_b is the bulk concentration, C_s is the sub-surface concentration (which is a function of time depending on the reaction model of the adsorption), and τ is a dummy variable.
Monte Carlo simulations show that these two equations work to predict the adsorption rate of systems that form predictable concentration gradients near the surface but have troubles for systems without or with unpredictable concentration gradients, such as typical biosensing systems or when flow and convection are significant.
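A small sketch of the integrated Langmuir–Schaefer expression (the sensing area, bulk concentration and diffusivity below are assumed values) shows the square-root-of-time growth:

```python
import numpy as np

A = 1e-4       # assumed sensing area, m^2
C_b = 6.0e20   # assumed bulk number concentration, #/m^3 (~1 umol/L)
D = 1e-10      # assumed diffusion coefficient, m^2/s

def adsorbed(t):
    """Langmuir-Schaefer estimate: molecules accumulated on the surface by
    pure diffusion after time t (a lower limit under ideal conditions)."""
    return 2 * A * C_b * np.sqrt(D * t / np.pi)

for t in (1.0, 10.0, 100.0):
    print(t, adsorbed(t))   # grows as sqrt(t) as the subsurface depletes
```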
A noticeable challenge of understanding diffusive adsorption at the single-molecule level is the fractal nature of diffusion. Most computer simulations pick a time step for diffusion which ignores the fact that there are self-similar finer diffusion events (fractal behaviour) within each step. Simulating the fractal diffusion shows that a factor-of-two correction should be introduced for the result of a fixed-time-step adsorption simulation, bringing it into consistency with the above two equations.
A more problematic result of the above equations is that they predict the lower limit of adsorption under ideal situations, but it is very difficult to predict actual adsorption rates. The equations are derived in the long-time-limit condition, when a stable concentration gradient has been formed near the surface. But real adsorption often proceeds much faster than this infinite time limit: the concentration gradient, a decay of concentration at the sub-surface, is only partially formed before the surface has been saturated, or flow is maintained to preserve a certain gradient; thus the measured adsorption rate is almost always faster than the equations predict for low- or no-energy-barrier adsorption (unless there is a significant adsorption energy barrier that slows down the absorption significantly), for example thousands to millions of times faster in the self-assembly of monolayers at the water–air or water–substrate interfaces. As such, it is necessary to calculate the evolution of the concentration gradient near the surface and find a proper time to stop the imagined infinite evolution for practical applications. While it is hard to predict when to stop, it is reasonably easy to calculate the shortest time that matters, the critical time when the first nearest neighbor from the substrate surface feels the building-up of the concentration gradient. This yields the upper limit of the adsorption rate under an ideal situation, when there are no factors other than diffusion that affect the absorber dynamics:
where:
is the adsorption rate assuming a barrier-free adsorption situation, in unit #/s,
is the area of the surface of interest on an "infinite and flat" substrate (m2),
is the concentration of the absorber molecule in the bulk solution (#/m3),
is the diffusion constant of the absorber (solute) in the solution (m2/s) defined with Fick's law.
This equation can be used to predict the initial adsorption rate of any system; It can be used to predict the steady-state adsorption rate of a typical biosensing system when the binding site is just a very small fraction of the substrate surface and a near-surface concentration gradient is never formed; It can also be used to predict the adsorption rate of molecules on the surface when there is a significant flow to push the concentration gradient very shallowly in the sub-surface.
This critical time is significantly different from the first passenger arriving time or the mean free-path time. Using the average first-passenger time and Fick's law of diffusion to estimate the average binding rate will significantly over-estimate the concentration gradient, because the first passenger usually comes from many layers of neighbors away from the target, so its arriving time is significantly longer than the nearest-neighbor diffusion time. Using the mean free-path time plus the Langmuir equation will cause an artificial concentration gradient between the initial location of the first passenger and the target surface, because the other neighbor layers have not changed yet, thus significantly underestimating the actual binding time; i.e., the actual first passenger arriving time itself, the inverse of the above rate, is difficult to calculate. If the system can be simplified to 1D diffusion, then the average first passenger time can be calculated using the same nearest-neighbor critical diffusion time, by requiring the first neighbor distance to equal the MSD:

$$\langle d \rangle^2 = 2 D t_c,$$
where:
⟨d⟩ = C^(−1/3) (unit m) is the average nearest neighbor distance, approximated as cubic packing, where C is the solute concentration in the bulk solution (unit # molecule/m3),
D is the diffusion coefficient defined by Fick's equation (unit m2/s),
t_c is the critical time (unit s).
In this critical time, it is unlikely that the first passenger has arrived and adsorbed. But it sets the speed of the layers of neighbors to arrive. At this speed, with a concentration gradient that stops around the first neighbor layer, the gradient does not project virtually into the longer time when the actual first passenger arrives. Thus, the average first passenger coming rate (unit # molecule/s), for this 3D diffusion problem simplified to 1D, is
where f is a factor converting the 3D diffusive adsorption problem into a 1D diffusion problem, whose value depends on the system; e.g., a fraction of adsorption area over the solute nearest-neighbor sphere surface area, assuming cubic packing where each unit has 8 neighbors shared with other units. This example fraction converges the result to the 3D diffusive adsorption solution shown above, with a slight difference in pre-factor due to different packing assumptions and the neglect of other neighbors.
When the area of interest is the size of a molecule (specifically, a long cylindrical molecule such as DNA), the adsorption rate equation represents the collision frequency of two molecules in a diluted solution, with one molecule presenting a specific side and the other having no steric dependence, i.e., a molecule (of random orientation) hits one side of the other. The diffusion constant needs to be updated to the relative diffusion constant between the two diffusing molecules. This estimation is especially useful in studying the interaction between a small molecule and a larger molecule such as a protein. The effective diffusion constant is dominated by the smaller one, whose diffusion constant can be used instead.
The above hitting rate equation is also useful to predict the kinetics of molecular self-assembly on a surface. Molecules are randomly oriented in the bulk solution. Assuming 1/6 of the molecules has the right orientation to the surface binding sites, i.e. 1/2 of the z-direction in x, y, z three dimensions, thus the concentration of interest is just 1/6 of the bulk concentration. Put this value into the equation one should be able to calculate the theoretical adsorption kinetic curve using the Langmuir adsorption model. In a more rigid picture, 1/6 can be replaced by the steric factor of the binding geometry.
The bimolecular collision frequency related to many reactions, including protein coagulation/aggregation, was initially described by the Smoluchowski coagulation equation, proposed by Marian Smoluchowski in a seminal 1916 publication and derived from Brownian motion and Fick's laws of diffusion. Under an idealized reaction condition for A + B → product in a diluted solution, Smoluchowski suggested that the molecular flux at the infinite time limit can be calculated from Fick's laws of diffusion, yielding a fixed/stable concentration gradient from the target molecule; e.g., B is the target molecule held relatively fixed, and A is the moving molecule that creates a concentration gradient near the target molecule B due to the coagulation reaction between A and B. Smoluchowski calculated the collision frequency between A and B in the solution, with unit #/s/m3:

$$Z_{AB} = 4 \pi R_{AB} D_{AB} C_A C_B,$$
where:
R_{AB} is the radius of the collision,
D_{AB} is the relative diffusion constant between A and B (m2/s),
C_A and C_B are the number concentrations of A and B respectively (#/m3).
The reaction order of this bimolecular reaction is 2, which is analogous to the result from collision theory, obtained by replacing the molecule's moving speed with diffusive flux. In collision theory, the traveling time between A and B is proportional to the distance, which is a similar relationship for the diffusion case if the flux is fixed.
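A numeric sketch of the Smoluchowski rate (the collision radius, diffusivities and concentrations are assumed values):

```python
import numpy as np

R_AB = 1e-9               # assumed collision radius, m
D_A, D_B = 1e-10, 1e-10   # assumed diffusion constants of A and B, m^2/s
D_AB = D_A + D_B          # relative diffusion constant of the pair
C_A = C_B = 6.0e20        # assumed concentrations, #/m^3 (~1 umol/L each)

# Collision frequency per unit volume, #/(s*m^3).
Z = 4 * np.pi * R_AB * D_AB * C_A * C_B
print(Z)   # a lower limit of the real collision frequency (see text)
```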
However, under a practical condition, the concentration gradient near the target molecule is evolving over time with the molecular flux evolving as well, and on average the flux is much bigger than the infinite time limit flux Smoluchowski has proposed. Before the first passenger arrival time, Fick's equation predicts a concentration gradient over time which does not build up yet in reality. Thus, this Smoluchowski frequency represents the lower limit of the real collision frequency.
In 2022, Chen calculated the upper limit of the collision frequency between A and B in a solution, assuming the bulk concentration of the moving molecule is fixed beyond the first nearest neighbor of the target molecule. Thus the concentration gradient evolution stops at the first nearest-neighbor layer, giving a stop-time at which to calculate the actual flux. He named this the critical time and derived the diffusive collision frequency in unit #/s/m3:
where:
σ is the area of the cross-section of the collision (m2),
D_{AB} is the relative diffusion constant between A and B (m2/s),
C_A and C_B are the number concentrations of A and B respectively (#/m3),
C^(1/3) represents 1/⟨d⟩, where ⟨d⟩ is the average distance between two molecules.
This equation assumes that the upper limit of the diffusive collision frequency between A and B occurs when the first neighbor layer starts to feel the evolution of the concentration gradient, whose reaction order is 7⁄3 instead of 2. Both the Smoluchowski equation and the JChen equation satisfy dimensional checks with SI units. But the former is dependent on the radius, and the latter on the area, of the collision sphere. From dimensional analysis, there will also be an equation dependent on the volume of the collision sphere, but eventually all equations should converge to the same numerical rate of collision, which can be measured experimentally. The actual reaction order for a bimolecular unit reaction could be between 2 and 7⁄3, which makes sense because the diffusive collision time is squarely dependent on the distance between the two molecules.
These new equations also avoid the singularity in the adsorption rate at time zero for the Langmuir–Schaefer equation. The infinite rate is justifiable under ideal conditions because, when target molecules are magically introduced into a solution of probe molecules (or vice versa), there is always a probability of them overlapping at time zero, making the association rate of that pair of molecules infinite. It does not matter that millions of other molecules have to wait for their first mate to diffuse and arrive; the average rate is thus infinite. But statistically this argument is meaningless. The maximum rate of a molecule in any period of time larger than zero is 1 (either they meet or they do not), so the infinite rate at time zero for that molecule pair really should just be one, making the average rate 1 in millions or more, and statistically negligible. This does not even count the fact that, in reality, no two molecules can magically meet at time zero.
Biological perspective
The first law gives rise to the following formula:

$$\text{flux} = -P (c_2 - c_1),$$
where
P is the permeability, an experimentally determined membrane "conductance" for a given gas at a given temperature,
c_2 − c_1 is the difference in concentration of the gas across the membrane for the direction of flow (from c_1 to c_2).
Fick's first law is also important in radiation transfer equations. However, in this context, it becomes inaccurate when the diffusion constant is low and the radiation becomes limited by the speed of light rather than by the resistance of the material the radiation is flowing through. In this situation, one can use a flux limiter.
The exchange rate of a gas across a fluid membrane can be determined by using this law together with Graham's law.
Under the condition of a diluted solution, when diffusion takes control, the membrane permeability mentioned in the above section can be theoretically calculated for the solute using the equation mentioned in the last section (use with particular care, because the equation is derived for dense solutes, while biological molecules are not denser than water; also, this equation assumes that an ideal concentration gradient forms near the membrane and evolves):
where:
is the total area of the pores on the membrane (unit m2),
transmembrane efficiency (unitless), which can be calculated from the stochastic theory of chromatography,
D is the diffusion constant of the solute unit m2⋅s−1,
t is time unit s,
c2, c1 concentration should use unit mol m−3, so flux unit becomes mol s−1.
The flux decays with the square root of time because a concentration gradient builds up near the membrane over time under ideal conditions. When there is flow and convection, the flux can differ significantly from what the equation predicts and show an effective time t with a fixed value, which makes the flux stable instead of decaying over time. A critical time has been estimated under idealized flow conditions when no gradient is formed. This strategy is adopted in biology, for example in blood circulation.
Semiconductor fabrication applications
"Semiconductor" is a collective term for a series of devices. It mainly includes three categories: two-terminal devices, three-terminal devices, and four-terminal devices. A combination of semiconductors is called an integrated circuit.
The relationship between Fick's law and semiconductors is that the principle of semiconductor fabrication is transferring chemicals or dopants from one layer to another. Fick's law can be used to control and predict this diffusion, by describing mathematically how the concentration of dopants or chemicals changes over distance and time.
Therefore, different types and levels of semiconductors can be fabricated.
Integrated circuit fabrication technologies and model processes like CVD, thermal oxidation, wet oxidation, doping, etc., use diffusion equations obtained from Fick's law.
CVD method of fabricate semiconductor
The wafer is a kind of semiconductor whose silicon substrate is coated with a layer of CVD-created polymer chains and films. This film contains n-type and p-type dopants and is responsible for dopant conduction. The principle of CVD relies on gas-phase and gas–solid chemical reactions to create thin films.
The viscous flow regime of CVD is driven by a pressure gradient. CVD also includes a diffusion component distinct from the surface diffusion of adatoms. In CVD, reactants and products must also diffuse through a boundary layer of stagnant gas that exists next to the substrate. The steps required for CVD film growth are: gas-phase diffusion of reactants through the boundary layer, adsorption and surface diffusion of adatoms, reactions on the substrate, and gas-phase diffusion of products away through the boundary layer.
The velocity profile for gas flow gives a boundary layer of thickness

$$\delta(x) = \frac{5x}{\sqrt{\mathrm{Re}_x}},$$
where:
δ is the boundary-layer thickness,
Re is the Reynolds number,
L is the length of the substrate,
v = 0 at any surface (the no-slip condition),
η is the viscosity,
ρ is the density.
Integrating δ(x) from x = 0 to x = L gives the average thickness:

$$\bar{\delta} = \frac{10 L}{3 \sqrt{\mathrm{Re}_L}}.$$
To keep the reaction balanced, reactants must diffuse through the stagnant boundary layer to reach the substrate. So a thin boundary layer is desirable. According to the equations, increasing vo would result in more wasted reactants. The reactants will not reach the substrate uniformly if the flow becomes turbulent. Another option is to switch to a new carrier gas with lower viscosity or density.
Fick's first law describes diffusion through the boundary layer. As a function of pressure (p) and temperature (T) in a gas, the diffusivity is determined as

$$D = D_0 \left( \frac{p_0}{p} \right) \left( \frac{T}{T_0} \right)^{3/2},$$
where:
p_0 is the standard pressure,
T_0 is the standard temperature,
D_0 is the standard diffusivity.
The equation tells that increasing the temperature or decreasing the pressure can increase the diffusivity.
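A tiny sketch of this scaling (the reference diffusivity and conditions are assumed values; the 3/2 temperature exponent follows the behavior described above):

```python
def gas_diffusivity(D0, p0, T0, p, T):
    """Scale a reference gas-phase diffusivity D0, measured at (p0, T0), to
    new conditions: D rises with temperature and falls with pressure."""
    return D0 * (p0 / p) * (T / T0) ** 1.5

# Assumed reference: 2e-5 m^2/s at 1 atm and 300 K.
print(gas_diffusivity(2e-5, 101325.0, 300.0, p=50662.5, T=600.0))
# Halving the pressure and doubling the temperature raises D by ~5.7x.
```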
Fick's first law predicts the flux of the reactants to the substrate and product away from the substrate:
where:
δ is the boundary-layer thickness,
c_a is the first reactant's concentration.
By the ideal gas law, the concentration of the gas can be expressed by its partial pressure, so that the flux becomes

$$J = -\frac{D}{RT} \frac{dp}{dx},$$
where
R is the gas constant,
dp/dx is the partial pressure gradient.
As a result, Fick's first law tells us we can use a partial pressure gradient to control the diffusivity and control the growth of thin films of semiconductors.
In many realistic situations, the simple Fick's law is not an adequate formulation for the semiconductor problem. It only applies to certain conditions, for example, given the semiconductor boundary conditions: constant source concentration diffusion, limited source concentration, or moving boundary diffusion (where junction depth keeps moving into the substrate).
Invalidity of Fickian diffusion
Even though Fickian diffusion was used in the early days to model diffusion processes in semiconductor manufacturing (including CVD reactors), it often fails to describe diffusion in advanced semiconductor nodes (< 90 nm). This mostly stems from the inability of Fickian diffusion to model diffusion processes accurately at the molecular level and smaller. In advanced semiconductor manufacturing, it is important to understand movement at atomic scales, which continuum diffusion fails to capture. Today, most semiconductor manufacturers use random walks to study and model diffusion processes. This allows the effects of diffusion to be studied in a discrete manner, down to the movement of individual atoms, molecules, plasma, etc.
In such a process, the movements of diffusing species (atoms, molecules, plasma, etc.) are treated as discrete entities, following a random walk through the CVD reactor, boundary layer, material structures, etc. Sometimes the movements might follow a biased random walk, depending on the processing conditions. Statistical analysis is done to understand the variation/stochasticity arising from the random walk of the species, which in turn affects the overall process and electrical variations.
Food production and cooking
The formulation of Fick's first law can explain a variety of complex phenomena in the context of food and cooking: the diffusion of molecules such as ethylene promotes plant growth and ripening, the diffusion of salt and sugar molecules promotes meat brining and marinating, and the diffusion of water molecules promotes dehydration. Fick's first law can also be used to predict the changing moisture profiles across a spaghetti noodle as it hydrates during cooking. These phenomena are all about the spontaneous movement of particles of solutes driven by the concentration gradient. In different situations, there is a different diffusivity, which is a constant.
By controlling the concentration gradient, the cooking time, shape of the food, and salting can be controlled.
See also
Advection
Churchill–Bernstein equation
Diffusion
False diffusion
Gas exchange
Mass flux
Maxwell–Stefan diffusion
Nernst–Planck equation
Osmosis
Citations
Further reading
– reprinted in
External links
Fick's equations, Boltzmann's transformation, etc. (with figures and animations)
Fick's Second Law on OpenStax
Diffusion
Eponymous laws of physics
Mathematics in medicine
Physical chemistry
Statistical mechanics
de:Diffusion#Erstes Fick'sches Gesetz | Fick's laws of diffusion | [
"Physics",
"Chemistry",
"Mathematics"
] | 7,467 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Diffusion",
"Applied mathematics",
"nan",
"Statistical mechanics",
"Mathematics in medicine",
"Physical chemistry"
] |
11,691 | https://en.wikipedia.org/wiki/Functional%20decomposition | In engineering, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts.
This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e., independence or non-interaction).
Interactions (in the statistical sense) between the components are critical to the function of the collection. Not all interactions may be observable, but they can possibly be deduced through repetitive perception, synthesis, validation and verification of composite behavior.
Motivation for decomposition
Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable x_1 only depends directly on variable x_2, rather than depending on the entire set of variables. We would say that variable x_2 screens off variable x_1 from the rest of the world. Practical examples of this phenomenon surround us.
Consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable (x_1) takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}. Now, let's say the variable x_1 depends on two other variables: "weather" (x_2), with values of {"sun", "rain", "snow"}, and "GW Bridge traffic" (x_3), with values {"10mph", "5mph", "1mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., a low pressure system over Canada, a butterfly flapping in Japan, etc.) and the Bridge traffic variable (e.g., an accident on I-95, a presidential motorcade, etc.), all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables screen off West Side Highway traffic from all other potential influences. That is, all other influences act through them. A toy simulation of this screening-off structure is sketched below.
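The following Python sketch uses invented probability tables and rules purely for illustration: once weather and bridge traffic are fixed, the highway variable is conditionally independent of everything further upstream.

```python
import random

random.seed(0)

def weather():                  # x2: has hidden upstream causes of its own
    return random.choice(["sun", "rain", "snow"])

def bridge_traffic(w):          # x3: depends on weather via a made-up table
    return {"sun": "10mph", "rain": "5mph", "snow": "1mph"}[w]

def west_side_highway(w, b):
    """x1 depends ONLY on (x2, x3): these two variables screen it off
    from low-pressure systems, motorcades and every other upstream cause."""
    if w == "snow" or b == "1mph":
        return "not moving at all"
    if w == "rain" or b == "5mph":
        return "moving deadly slow"
    return "moving slow"

w = weather()
b = bridge_traffic(w)
print(w, b, west_side_highway(w, b))
```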
Applications
Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems.
Knowledge representation
Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition.
Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold approximately. Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks.
Database theory
See database normalization.
Machine learning
In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations.
However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution.
As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms.
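The storage saving is easy to quantify. As a rough sketch, assuming n binary variables and at most k parents per variable, a full joint table needs 2^n entries while the factored form needs only on the order of n·2^k:

```python
# Joint-table size vs. a factored (Bayesian-network-style) representation
# for n binary variables with at most k parents each (illustrative only).
n, k = 30, 3
full_joint = 2 ** n      # one entry per complete assignment of all variables
factored = n * 2 ** k    # one small conditional table per variable
print(full_joint, "entries vs", factored)  # 1073741824 entries vs 240
```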
Software architecture
Functional Decomposition is a design method intending to produce a non-implementation, architectural description of a computer program. The software architect first establishes a series of functions and types that accomplishes the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives modules from this activity.
Signal processing
Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, x(t). Then x(t) can be decomposed into a linear combination of other functions, called component signals: x(t) = a1·x1(t) + a2·x2(t) + ... + an·xn(t)
Here, x1(t), ..., xn(t) are the component signals. Note that a1, ..., an are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let H{ } represent the effect of the system, then the output signal is H{x(t)}, which can be expressed as: H{x(t)} = a1·H{x1(t)} + a2·H{x2(t)} + ... + an·H{xn(t)}
In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform.
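The superposition property can be checked numerically. The sketch below models a discrete LTI system as convolution with an assumed impulse response and verifies that the response to a linear combination of signals equals the same combination of the individual responses; all signals and coefficients are made up.

```python
import numpy as np

# Sketch: a discrete LTI system, modeled as convolution with an assumed
# impulse response h, satisfies superposition: H{a1*x1 + a2*x2} equals
# a1*H{x1} + a2*H{x2}.
h = np.array([0.5, 0.3, 0.2])           # assumed impulse response
x1 = np.array([1.0, 0.0, -1.0, 2.0])    # component signals (made up)
x2 = np.array([0.0, 1.0, 1.0, 0.0])
a1, a2 = 1.0, 2.0

def H(x):
    """The system's action on a signal: convolution with h."""
    return np.convolve(x, h)

lhs = H(a1 * x1 + a2 * x2)              # system applied to the combination
rhs = a1 * H(x1) + a2 * H(x2)           # combination of individual outputs
print(np.allclose(lhs, rhs))            # True: superposition holds
```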
Systems engineering
Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher level systems functions. The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description.
This exercise forces each part of the system to have a pure function. When a system is composed of pure functions, those functions can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function.
For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD, it can probably fit the same interfaces.
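The stereo example can be mimicked in code. In the hypothetical sketch below, every name and value is invented: each block is a pure function behind a simple, generic interface (a list of audio samples), so one source can be swapped for another without touching the amplifier or the speakers.

```python
# Hypothetical sketch of the stereo decomposition: each block is a pure
# function behind a simple, generic interface (a list of audio samples).
def tape_deck():
    return [0.1, 0.4, -0.2]          # pretend samples from tape

def cd_player():
    return [0.2, 0.3, -0.1]          # drop-in replacement, same interface

def amplifier(samples, gain=2.0):
    return [gain * s for s in samples]

def speakers(samples):
    print("playing:", samples)

for source in (tape_deck, cd_player):   # either source fits the interface
    speakers(amplifier(source()))
```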
See also
Bayesian networks
Currying
Database normalization
Function composition (computer science)
Inductive inference
Knowledge representation
Further reading
A review of other applications and function decomposition. Also presents methods based on information theory and graph theory.
Notes
References
.
.
.
.
Functions and mappings
Philosophy of mathematics
Philosophy of physics | Functional decomposition | [
"Physics",
"Mathematics"
] | 1,417 | [
"Philosophy of physics",
"Functions and mappings",
"Applied and interdisciplinary physics",
"Mathematical analysis",
"Mathematical objects",
"Mathematical relations",
"nan"
] |
11,712 | https://en.wikipedia.org/wiki/Facilitated%20diffusion | Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient according to the principles of diffusion.
Facilitated diffusion differs from simple diffusion in several ways:
The transport relies on molecular binding between the cargo and the membrane-embedded channel or carrier protein.
The rate of facilitated diffusion is saturable with respect to the concentration difference between the two phases, unlike free diffusion, which is linear in the concentration difference (see the sketch after this list).
The temperature dependence of facilitated transport is substantially different due to the presence of an activated binding event, as compared to free diffusion where the dependence on temperature is mild.
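The first two differences are often summarized by contrasting a linear rate law with a saturable, Michaelis–Menten-like one. The sketch below uses invented constants purely to illustrate the shapes of the two curves; it is not a model of any particular transporter.

```python
# Sketch contrasting the two rate laws (all constants are illustrative):
# free diffusion is linear in the concentration difference dC, while
# carrier-mediated transport saturates, Michaelis-Menten style.
def free_diffusion(dC, P=1.0):           # permeability P assumed
    return P * dC

def facilitated(dC, Vmax=5.0, Km=2.0):   # Vmax and Km assumed
    return Vmax * dC / (Km + dC)

for dC in (0.5, 2.0, 10.0, 100.0):
    print(dC, round(free_diffusion(dC), 2), round(facilitated(dC), 2))
```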
Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that comprise the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Hence, small polar molecules are transported by proteins in the form of transmembrane channels. These channels are gated, meaning that they open and close, and thus regulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids).
Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol binding protein). The metabolites are not altered because no energy is required for facilitated diffusion. Only permease changes its shape in order to transport metabolites. The form of transport through a cell membrane in which a metabolite is modified is called group translocation transportation.
Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism. Some examples of proteins that mediate this process are glucose transporters, organic cation transport proteins, urea transporter, monocarboxylate transporter 8 and monocarboxylate transporter 10.
In vivo model of facilitated diffusion
Many physical and biochemical processes are regulated by diffusion. Facilitated diffusion is one form of diffusion and it is important in several metabolic processes. Facilitated diffusion is the main mechanism behind the binding of transcription factors (TFs) to designated target sites on the DNA molecule. The in vitro model, a very well known method of facilitated diffusion that takes place outside of a living cell, explains the 3-dimensional pattern of diffusion in the cytosol and the 1-dimensional diffusion along the DNA contour. After extensive research on processes occurring outside the cell, this mechanism was generally accepted, but there was a need to verify that it could also take place in vivo, inside living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF–DNA binding to occur. After analyzing the time it takes for TFs to diffuse across the contour and cytoplasm of the bacterium's DNA, it was concluded that in vitro and in vivo binding are similar in that the association and dissociation rates of TFs to and from the DNA are similar in both. Also, on the DNA contour, the motion is slower and target sites are easy to localize, while in the cytoplasm the motion is faster but the TFs are not sensitive to their targets, and so binding is restricted.
Intracellular facilitated diffusion
Single-molecule imaging is an imaging technique which provides an ideal resolution necessary for the study of the Transcription factor binding mechanism in living cells. In prokaryotic bacteria cells such as E. coli, facilitated diffusion is required in order for regulatory proteins to locate and bind to target sites on DNA base pairs. There are 2 main steps involved: the protein binds to a non-specific site on the DNA and then it diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during the process of protein sliding, the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of Crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. Lac Repressor) closer to the DNA to increase their attraction and enable them to bind, as well as steric effect which exclude the Crowder proteins from this region (Lac operator region). Blocker proteins participate in 1-D diffusion only i.e. bind to and diffuse along the DNA contour and not in the cytosol.
Facilitated diffusion of proteins on chromatin
The in vivo model mentioned above clearly explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. Just like prokaryotic cells, in eukaryotes, facilitated diffusion occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein when it is either bound to a chromatin thread or when freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes precipitates the searching process and minimizes the searching time by increasing the DNA-protein affinity.
For oxygen
The affinity of oxygen for hemoglobin on red blood cell surfaces enhances this binding ability. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand, which is oxygen, and the carrier, which is either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was discovered and initiated by Wittenberg and Scholander. They carried out experiments to test for the steady-state of diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where oxygen pressure can be relatively controlled.
For oxygen diffusion to occur, there must be a full saturation pressure (more) on one side of the membrane and full reduced pressure (less) on the other side of the membrane i.e. one side of the membrane must be of higher concentration. During facilitated diffusion, hemoglobin increases the rate of constant diffusion of oxygen and facilitated diffusion occurs when oxyhemoglobin molecule is randomly displaced.
For carbon monoxide
Facilitated diffusion of carbon monoxide is similar to that of oxygen. Carbon monoxide also combines with hemoglobin and myoglobin, but carbon monoxide has a dissociation velocity that is 100 times less than that of oxygen. Its affinity is 40 times higher for myoglobin and 250 times higher for hemoglobin, compared to oxygen.
For glucose
Since glucose is a large molecule, its diffusion across a membrane is difficult. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein at the membrane binds to the glucose and alters its shape such that it can easily be transported. Movement of glucose into the cell could be rapid or slow depending on the number of membrane-spanning proteins. It is transported against the concentration gradient by a sodium-dependent glucose symporter which provides a driving force to other glucose molecules in the cells. Facilitated diffusion helps in the release of accumulated glucose into the extracellular space adjacent to the blood capillary.
See also
Major facilitator superfamily
References
External links
Facilitated Diffusion - Description and Animation
Facilitated Diffusion- Definition and Supplement
Diffusion
Transport proteins | Facilitated diffusion | [
"Physics",
"Chemistry"
] | 1,727 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
11,734 | https://en.wikipedia.org/wiki/Fred%20Singer | Siegfried Fred Singer (September 27, 1924 – April 6, 2020) was an Austrian-born American physicist and emeritus professor of environmental science at the University of Virginia, trained as an atmospheric physicist. He was known for rejecting the scientific consensus on several issues, including climate change, the connection between UV-B exposure and melanoma rates, stratospheric ozone loss being caused by chlorofluoro compounds, often used as refrigerants, and the health risks of passive smoking.
He is the author or editor of several books, including Global Effects of Environmental Pollution (1970), The Ocean in Human Affairs (1989), Global Climate Change (1989), The Greenhouse Debate Continued (1992), and Hot Talk, Cold Science (1997). He also co-authored Unstoppable Global Warming: Every 1,500 Years (2007) with Dennis Avery, and Climate Change Reconsidered (2009) with Craig Idso.
Singer had a varied career, serving in the armed forces, government, and academia. He designed mines for the U.S. Navy during World War II, before obtaining his Ph.D. in physics from Princeton University in 1948 and working as a scientific liaison officer in the U.S. Embassy in London. He became a leading figure in early space research, was involved in the development of earth observation satellites, and in 1962 established the National Weather Bureau's Satellite Service Center. He was the founding dean of the University of Miami School of Environmental and Planetary Sciences in 1964, and held several government positions, including deputy assistant administrator for the Environmental Protection Agency, and chief scientist for the Department of Transportation. He held a professorship with the University of Virginia from 1971 until 1994, and with George Mason University until 2000.
In 1990 Singer founded the Science & Environmental Policy Project, and in 2006 was named by the Canadian Broadcasting Corporation as one of a minority of scientists said to be creating a stand-off on a consensus on climate change. Singer argued, contrary to the scientific consensus on climate change, that there is no evidence that global warming is attributable to human-caused increases in atmospheric carbon dioxide, and that humanity would benefit if temperatures do rise. He was an opponent of the Kyoto Protocol, and claimed that climate models are not based on reality or evidence. Singer was accused of rejecting peer-reviewed and independently confirmed scientific evidence in his claims concerning public health and environmental issues.
Early life and education
Singer was born in Vienna, Austria, to a Jewish family. His father was a jeweler and his mother a homemaker. Following the Anschluss between Nazi Germany and Austria in 1938, the family fled Austria, and Singer departed on a children's transport train with other Jewish children. He ended up in England, where he lived in Northumberland, working for a time as a teenage optician. Several years later he emigrated to Ohio and became an American citizen in 1944. He received a Bachelor of Electrical Engineering (B.E.E.) from Ohio State University in 1943. He taught physics at Princeton while he worked on his master's and his doctorate, obtaining his Ph.D. there in 1948. His doctoral thesis was titled, "The density spectrum and latitude dependence of extensive cosmic ray air showers." His supervisor was John Archibald Wheeler, and his thesis committee included J. Robert Oppenheimer and Niels Bohr.
Career
1950: United States Navy
After his master's, Singer joined the armed forces, working for the United States Navy on mine warfare and countermeasures from 1944 until 1946. While with the Naval Ordnance Laboratory he developed an arithmetic element for an electronic digital calculator that he called an "electronic brain". He was discharged in 1946 and joined the Upper Atmosphere Rocket Program at the Johns Hopkins University Applied Physics Laboratory in Silver Spring, Maryland, working there until 1950. He focused on ozone, cosmic rays, and the ionosphere, all measured using balloons and rockets launched from White Sands, New Mexico, or from ships out at sea. Rachel White Scheuering writes that for one mission to launch a rocket, he sailed with a naval operation to the Arctic, and also conducted rocket launching from ships at the equator.
From 1950 to 1953, he was attached to the U.S. Embassy in London as a scientific liaison officer with the Office of Naval Research, where he studied research programs in Europe into cosmic radiation and nuclear physics. While there, he was one of eight delegates with a background in guided weapons projects to address the Fourth International Congress of Astronautics in Zurich in August 1953, at a time when, as The New York Times reported, most scientists saw space flight as thinly disguised science fiction.
1951: Design of early satellites
Singer was one of the first scientists to urge the launching of Earth satellites for scientific observation during the 1950s. In 1951 or 1952 he proposed the MOUSE ("Minimal Orbital Unmanned Satellite, Earth"), a satellite that would contain Geiger counters for measuring cosmic rays, photo cells for scanning the Earth, telemetry electronics for sending data back to Earth, a magnetic data storage device, and rudimentary solar energy cells. Although MOUSE never flew, the Baltimore News-Post reported in 1957 that had Singer's arguments about the need for satellites been heeded, the U.S. could have beaten Russia by launching the first Earth satellite. He also proposed (along with R. C. Wentworth) that satellite measurement of ultraviolet backscatter could be used as a method to measure atmospheric ozone profiles. This technique was later used on early weather satellites.
1953: University of Maryland
Singer moved back to the United States in 1953, where he took up an associate professorship in physics at the University of Maryland, and at the same time served as the director of the Center for Atmospheric and Space Physics. Scheuering writes that his work involved conducting experiments on rockets and satellites, remote sensing, radiation belts, the magnetosphere, and meteorites. He developed a new method of launching rockets into space: firing them from a high-flying plane, both with and without a pilot. The Navy adopted the idea and Singer supervised the project. He received a White House Special Commendation from President Eisenhower in 1954 for his work.
He became one of 12 board members of the American Astronautical Society, an organization formed in 1954 to represent the country's 300 leading scientists and engineers in the area of guided missiles—he was one of seven members of the board to resign in December 1956 after a series of disputes about the direction and control of the group.
In November 1957 Singer and other scientists at the university successfully designed and fired three new "Oriole" rockets off the Virginia Capes. The rockets weighed less than and could be built for around $2000. Fired from a converted Navy LSM, they could reach an altitude of and had a complete telemetry system to send back information on cosmic, ultraviolet and X-rays. Singer said that the firings placed "the exploration of outer space with high altitude rockets on the same basis, cost-wise and effort-wise, as low atmosphere measurements with weather balloons. From now on, we can fire thousands of these rockets all over the world with very little cost."
In February 1958, when he was head of the cosmic ray group of the University of Maryland's physics department, he received a special commendation from President Eisenhower for "outstanding achievements in the development of satellites for scientific purposes." In April 1958, he was appointed as a consultant to the House Select Committee on Astronautics and Space Exploration, which was preparing to hold hearings on President Eisenhower's proposal for a new agency to handle space research, and a month later received the Ohio State University's Distinguished Alumnus Award. He became a full professor at Maryland in 1959, and was chosen that year by the United States Junior Chamber of Commerce as one of the country's ten outstanding young men.
In a January 1960 presentation to the American Physical Society, Singer sketched out his vision of what the environment around the Earth might consist of, extending up to into space. He became known for his early predictions about the properties of the electrical particles trapped around the Earth, which were partly verified by later discoveries in satellite experiments. In December 1960, he suggested the existence of a shell of visible dust particles around the Earth some 600 to in space, beyond which there was a layer of smaller particles, a micrometre or less in diameter, extending 2,000 to . In March 1961 Singer and another University of Maryland physicist, E. J. Opik, were given a $97,000 grant by NASA to conduct a three-year study of interplanetary gas and dust.
1960: Artificial Phobos hypothesis
In a 1960 Astronautics newsletter, Singer commented on Iosif Shklovsky's hypothesis that the orbit of the Martian moon Phobos suggests that it is hollow, which implies it is of artificial origin. Singer wrote: "My conclusion there is, and here I back Shklovsky, that if the satellite is indeed spiraling inward as deduced from astronomical observation, then there is little alternative to the hypothesis that it is hollow and therefore martian made. The big 'if' lies in the astronomical observations; they may well be in error. Since they are based on several independent sets of measurements taken decades apart by different observers with different instruments, systematic errors may have influenced them." Later measurements confirmed Singer's big "if" caveat: Shklovsky overestimated Phobos' rate of altitude loss due to bad early data. Photographs by probes beginning in 1972 show a natural stony surface with craters. Ufologists continue to present Singer as an unconditional supporter of Shklovsky's artificial Phobos hypothesis.
Time magazine wrote in 1969 that Singer had had a lifelong fascination with Phobos and Mars's second moon, Deimos. He told Time it might be possible to pull Deimos into the Earth's orbit so it could be examined. During an international space symposium in May 1966, attended by space scientists from the United States and Soviet Union, he first proposed that crewed landings on the Martian moons would be a logical step after a crewed landing on the Earth's Moon. He pointed out that the very small sizes of Phobos and Deimos—approximately in diameter and sub milli-g surface gravity—would make it easier for a spacecraft to land and take off again.
1962: National Weather Center and University of Miami
In 1962, on leave from the university, Singer was named as the first director of meteorological satellite services for the National Weather Satellite Center, now part of the National Oceanic and Atmospheric Administration, and directed a program for using satellites to forecast the weather. He stayed there until 1964. He told Time magazine in 1969 that he enjoyed moving around. "Each move gave me a completely new perspective," he said. "If I had sat still, I'd probably still be measuring cosmic rays, the subject of my thesis at Princeton. That's what happens to most scientists." When he stepped down as director he received a Department of Commerce Gold Medal award for Distinguished Federal Service.
In 1964, he became the first dean of the School of Environmental and Planetary Sciences at the University of Miami, the first school of its kind in the country, dedicated to space-age research. In December 1965, The New York Times reported on a conference Singer hosted in Miami Beach during which five groups of scientists, working independently, presented research identifying what they believed was the remains of a primordial flash that occurred when the universe was born.
1967–1994
In 1967 he accepted the position of deputy assistant secretary with the U.S. Department of the Interior, where he was in charge of water quality and research. When the U.S. Environmental Protection Agency was created in 1970, he became its deputy assistant administrator of policy.
Singer accepted a professorship in Environmental Sciences at the University of Virginia in 1971, a position he held until 1994, where he taught classes on environmental issues such as ozone depletion, acid rain, climate change, population growth, and public policy issues related to oil and energy. In 1987 he took up a two-year post as chief scientist at the U.S. Department of Transportation, and in 1989 joined the Institute of Space Science and Technology in Gainesville, Florida where he contributed to a paper on the results from the Interplanetary Dust Experiment using data from the Long Duration Exposure Facility satellite. When he retired from Virginia in 1994, he became Distinguished Research Professor at the Institute for Humane Studies at George Mason University until 2000.
Naomi Oreskes and Erik Conway say that Singer was involved in the Reagan administration's efforts to prevent regulatory action to reduce acid rain.
Public debates
Writing
Throughout his academic career Singer wrote frequently in the mainstream press, including The New York Times, The Washington Post, and Wall Street Journal, often striking up positions disputing mainstream thinking. His overall position was one of distrust of federal regulations and a strong belief in the efficacy of the free market. He believed in what Rachel White Scheuering calls "free-market environmentalism": that market principles and incentives should be sufficient to lead to the protection of the environment and conservation of resources. Regular themes in his articles have been energy, oil embargoes, OPEC, Iran, and rising prices. Throughout the 1970s, for example, he downplayed the idea of an energy crisis and said it was largely a media event. In several papers in the 1990s and 2000s he struck up other positions against the mainstream, questioning the link between UV-B and melanoma rates, and that between CFCs and stratospheric ozone loss.
In October 1967, Singer wrote an article for The Washington Post from the perspective of 2007. His predictions included that planets had been explored but not colonized, and although rockets had become more powerful they had not replaced aircraft and ramjet vehicles. None of the fundamental laws of physics had been overturned. There was increased reliance on the electronic computer and data processor; the most exciting development was the increase in human intellect by direct electronic storage of information in the brain—the coupling of the brain to an external computer, thereby gaining direct access to an information library.
He debated the astronomer Carl Sagan on ABC's Nightline, regarding the possible environmental effects of the Kuwaiti oil fires. Sagan argued that if enough fire-fighting teams were not assembled in short order, and if many fires were left to burn over a period of months to possibly a year, the smoke might loft into the upper atmosphere and lead to massive agricultural failures over South Asia. Singer argued that it would rise to then be rained out after a few days. In fact, both Sagan and Singer were incorrect; smoke plumes from the fires rose to 10,000–12,000 feet and lingered for nearly a month, but despite absorbing 75–80% of the sun's radiation in the Persian Gulf area the plumes had little global effect.
The public debates in which Singer received most criticism have been about second-hand smoke and global warming. He questioned the link between second-hand smoke and lung cancer, and was an outspoken opponent of the mainstream scientific view on climate change; he argued there is no evidence that increases in carbon dioxide produced by human beings is causing global warming and that the temperature of the Earth has always varied. A CBC Fifth Estate documentary in 2006 linked these two debates, naming Singer as a scientist who has acted as a consultant to industry in both areas, either directly or through a public relations firm. Naomi Oreskes and Erik Conway named Singer in their book, Merchants of Doubt, as one of three contrarian physicists—along with Fred Seitz and Bill Nierenberg—who regularly injected themselves into the public debate about contentious scientific issues, positioning themselves as skeptics, their views gaining traction because the media gives them equal time out of a sense of fairness.
Second-hand smoke
According to David Biello and John Pavlus in Scientific American, Singer was best known for his denial of the health risks of passive smoking. He was involved in 1994 as writer and reviewer of a report on the issue by the Alexis de Tocqueville Institution, where he was a senior fellow. The report criticized the Environmental Protection Agency (EPA) for their 1993 study about the cancer risks of passive smoking, calling it "junk science". Singer told CBC's The Fifth Estate in 2006 that he stood by the position that the EPA had "cooked the data" to show that second-hand smoke causes lung cancer. CBC said that tobacco money had paid for Singer's research and for his promotion of it, and that it was organized by APCO. Singer told CBC it made no difference where the money came from. "They don't carry a note on a dollar bill saying 'This comes from the tobacco industry,'" he said. "In any case I was not aware of it, and I didn't ask APCO where they get their money. That's not my business."
Global warming
In a 2003 letter to the Financial Times, Singer wrote that "there is no convincing evidence that the global climate is actually warming." In 2006, the CBC's Fifth Estate named Singer as one of a small group of scientists who have created what the documentary called a stand-off that is undermining the political response to global warming. The following year he appeared on the British Channel 4 documentary The Great Global Warming Swindle. Singer argues there is no evidence that the increases in carbon dioxide produced by humans cause global warming, and that if temperatures do rise it will be good for humankind. He told CBC: "It was warmer a thousand years ago than it is today. Vikings settled Greenland. Is that good or bad? I think it's good. They grew wine in England, in northern England. I think that's good. At least some people think so." "We are certainly putting more carbon dioxide in the atmosphere," he told The Daily Telegraph in 2009. "However there is no evidence that this high CO2 is making a detectable difference. It should in principle, however the atmosphere is very complicated and one cannot simply argue that just because CO2 is a greenhouse gas it causes warming." He believes that radical environmentalists are exaggerating the dangers. "The underlying effort here seems to be to use global warming as an excuse to cut down the use of energy," he said. "It's very simple: if you cut back the use of energy, then you cut back economic growth. And believe it or not, there are people in the world who believe we have gone too far in economic growth."
Singer's opinions conflict with the scientific consensus on climate change, where there is overwhelming consensus for anthropogenic global warming, and a decisive link between carbon dioxide concentration and global average temperatures, as well as consensus that such a change to the climate will have dangerous consequences. In 2005, Mother Jones magazine described Singer as a "godfather of global warming denial." However, Singer characterized himself as a "skeptic" rather than a "denier" of global climate change.
SEPP and funding
In 1990 Singer set up the Science & Environmental Policy Project (SEPP) to argue against preventive measures against global warming. After the 1991 United Nations Conference on Environment and Development, the Earth Summit, Singer started writing and speaking out to cast doubt on the science. He predicted disastrous economic damage from any restrictions on fossil fuel use, and argued that the natural world and its weather patterns are complex and ill-understood, and that little is known about the dynamics of heat exchange from the oceans to the atmosphere, or the role of clouds. As the scientific consensus grew, he continued to argue from a dismissive position. He has repeatedly criticized the climate models that predict global warming. In 1994 he compared model results to observed temperatures and found that the predicted temperatures for 1950–1980 deviated from the temperatures that had actually occurred, from which he concluded in his regular column in The Washington Times—with the headline that day "Climate Claims Wither under the Luminous Lights of Science"—that climate models are faulty. In 2007 he collaborated on a study that found tropospheric temperature trends of "Climate of the 20th Century" models differed from satellite observations by twice the model mean uncertainty.
Rachel White Scheuering writes that, when SEPP began, it was affiliated with the Washington Institute for Values in Public Policy, a think tank founded by Unification Church leader Sun Myung Moon. A 1990 article for the Cato Institute identifies Singer as the director of the science and environmental policy project at the Washington Institute for Values in Public Policy, on leave from the University of Virginia. Scheuering writes that Singer had cut ties with the institute, and was funded by foundations and oil companies. She writes that he was a paid consultant for many years for ARCO, ExxonMobil, Shell, Sun Oil Company, and Unocal, and that SEPP had received grants from ExxonMobil. Singer said his financial relationships did not influence his research. Scheuering argues that his conclusions concur with the economic interests of the companies that pay him, in that the companies want to see a reduction in environmental regulation.
In August 2007 Newsweek reported that in April 1998 a dozen people from what it called "the denial machine" met at the American Petroleum Institute's Washington headquarters. The meeting included Singer's group, the George C. Marshall Institute, and ExxonMobil. Newsweek said that, according to an eight-page memo that was leaked, the meeting proposed a $5-million campaign to convince the public that the science of global warming was controversial and uncertain. The plan was leaked to the press and never implemented. The week after the story, Newsweek published a contrary view from Robert Samuelson, one of its columnists, who said the story of an industry-funded denial machine was contrived and fundamentally misleading. ABC News reported in March 2008 that Singer said he is not on the payroll of the energy industry, but he acknowledged that SEPP had received one unsolicited charitable donation of $10,000 from ExxonMobil, and that it was one percent of all donations received. Singer said that his connection to Exxon was more like being on their mailing list than holding a paid position. The relationships have discredited Singer's research among members of the scientific community, according to Scheuering. Congresswoman Lynn Rivers questioned Singer's credibility during a congressional hearing in 1995, saying he had not been able to publish anything in a peer-reviewed scientific journal for the previous 15 years, except for one technical comment.
Criticism of the IPCC
In 1995 the Intergovernmental Panel on Climate Change (IPCC) issued a report reflecting the scientific consensus that the balance of evidence suggests there is a discernible human influence on global climate. Singer responded with a letter to Science saying the IPCC report had presented material selectively. He wrote: "the Summary does not even mention the existence of 18 years of weather satellite data that show a slight global cooling trend, contradicting all theoretical models of climate warming." Scheuering writes that Singer acknowledges the surface thermometers from weather stations show warming, but he argues that the satellites provide better data because their measurements cover pole to pole.
According to Edward Parson and Andrew Dessler, the satellite data did not show surface temperatures directly, but had to be adjusted using models. When adjustment was made for transient events the data showed a slight warming, and research suggested that the discrepancy between surface and satellite data was largely accounted for by problems such as instrument differences between satellites.
Singer wrote the "Leipzig Declaration on Global Climate Change in the U.S." in 1995, updating it in 1997 to rebut the Kyoto Protocol. The Kyoto Protocol was the result of an international convention held in Kyoto, Japan, during which several industrialized nations agreed to reduce their greenhouse gas emissions. Singer's declaration read: "Energy is essential for economic growth ... We understand the motivation to eliminate what are perceived to be the driving forces behind a potential climate change; but we believe the Kyoto Protocol—to curtail carbon dioxide emissions from only a part of the world community—is dangerously simplistic, quite ineffective, and economically destructive to jobs and standards-of-living."
Scheuering writes that Singer circulated this in the United States and Europe and gathered 100 signatories, though she says some of the signatories' credentials were questioned. At least 20 were television weather reporters, some did not have science degrees, and 14 were listed as professors without specifying a field. According to Scheuering, some of them later said they believed they were signing a document in favour of action against climate change.
Singer set up the Nongovernmental International Panel on Climate Change (NIPCC) in 2004 after the 2003 United Nations Climate Change Conference in Milan. NIPCC organized an international climate workshop in Vienna in April 2007, to provide what they called an independent examination of the evidence for climate change. Singer prepared an NIPCC report called "Nature, Not Human Activity, Rules the Climate," published in March 2008 by the Heartland Institute, a conservative think tank. ABC News said the same month that unnamed climate scientists from NASA, Stanford, and Princeton who spoke to ABC about the report dismissed it as "fabricated nonsense". In a letter of complaint to ABC News, Singer said their piece used "prejudicial language, distorted facts, libelous insinuations, and anonymous smears".
On September 18, 2013, the NIPCC's fourth report, entitled Climate Change Reconsidered II: Physical Science, was published. As with previous NIPCC reports, environmentalists criticized it upon its publication; for example, David Suzuki wrote that it was "full of long-discredited claims, including that carbon dioxide emissions are good because they stimulate life". After the report received favorable coverage from Fox News Channel's Doug McKelway, climate scientists Kevin Trenberth and Michael Oppenheimer criticized this coverage, with Trenberth calling it "irresponsible journalism" and Oppenheimer calling it "flat out wrong".
Climategate
In December 2009, after the Climatic Research Unit email controversy, Singer wrote an opinion piece for Reuters in which he claimed the scientists had misused peer review, pressured editors to prevent publication of alternative views, and smeared opponents. He also claimed the leaked e-mails showed that the "surface temperature data that IPCC relies on is based on distorted raw data and algorithms that they will not share with the science community." He argued that the incident exposed a flawed process, and that the temperature trends were heading downwards even as greenhouse gases like CO2 were increasing in the atmosphere. He wrote: "This negative correlation contradicts the results of the models that IPCC relies on and indicates that anthropogenic global warming (AGW) is quite small," concluding "and now it turns out that global warming might have been 'man made' after all." A British House of Commons Science and Technology Select Committee later issued a report that exonerated the scientists, and eight committees investigated the allegations, finding no evidence of fraud or scientific misconduct.
Death
On April 6, 2020, Singer died in a nursing home in Rockville, Maryland.
Selected publications
Global Effects of Environmental Pollution (Reidel, 1970)
Manned Laboratories in Space (Reidel, 1970)
Is There an Optimum Level of Population? (McGraw-Hill, 1971)
The Changing Global Environment (Reidel, 1975)
Arid Zone Development (Ballinger, 1977)
Economic Effects of Demographic Changes (Joint Economic Committee, U.S. Congress, 1977)
Cost-Benefit Analysis in Environmental Decisionmaking (Mitre Corp, 1979)
Energy (W.H. Freeman, 1979)
The Price of World Oil (Annual Review of Energy, Vol. 8, 1983)
Free Market Energy (Universe Books, 1984)
Oil Policy in a Changing Market (Annual Review of Energy, Vol. 12, 1987)
The Ocean in Human Affairs (Paragon House, 1989)
The Universe and Its Origin: From Ancient Myths to Present Reality and Future Fantasy (Paragon House, 1990)
Global Climate Change: Human and Natural Influences (Paragon House, 1989)
The Greenhouse Debate Continued (ICS Press, 1992)
The Scientific Case Against the Global Climate Treaty (SEPP, 1997)
Hot Talk, Cold Science: Global Warming's Unfinished Debate (The Independent Institute, 1997)
with Dennis Avery. Unstoppable Global Warming: Every 1500 Years (Rowman & Littlefield, 2007)
with Craig Idso. Climate Change Reconsidered: 2009 Report of the Nongovernmental International Panel on Climate Change (NIPCC) (2009).
See also
Fringe science
Second-hand smoke
The Power of Big Oil
Notes
Further reading
Oreskes, Naomi and Erik Conway. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury.
1924 births
2020 deaths
American climatologists
American non-fiction environmental writers
Environmental scientists
Ohio State University College of Engineering alumni
Princeton University alumni
University of Maryland, College Park faculty
University of Miami faculty
University of Virginia faculty
George Mason University faculty
National Weather Service people
Heartland Institute
Department of Commerce Gold Medal
United States Navy personnel of World War II
Scientists from Vienna
Austrian Jews
Austrian emigrants to the United States
Naturalized citizens of the United States
Jewish American scientists
Fellows of the American Physical Society
21st-century American Jews
Atmospheric physicists | Fred Singer | [
"Environmental_science"
] | 6,037 | [
"American environmental scientists",
"Environmental scientists"
] |
11,742 | https://en.wikipedia.org/wiki/Finite%20set | In mathematics, particularly set theory, a finite set is a set that has a finite number of elements. Informally, a finite set is a set which one could in principle count and finish counting. For example,
{2, 4, 6, 8, 10} is a finite set with five elements. The number of elements of a finite set is a natural number (possibly zero) and is called the cardinality (or the cardinal number) of the set. A set that is not a finite set is called an infinite set. For example, the set of all positive integers is infinite: {1, 2, 3, ...}.
Finite sets are particularly important in combinatorics, the mathematical study of counting. Many arguments involving finite sets rely on the pigeonhole principle, which states that there cannot exist an injective function from a larger finite set to a smaller finite set.
Definition and terminology
Formally, a set S is called finite if there exists a bijection
f : S → {1, ..., n}
for some natural number n (natural numbers are defined as sets in Zermelo–Fraenkel set theory). The number n is the set's cardinality, denoted as |S|.
If a set is finite, its elements may be written — in many ways — in a sequence:
x1, x2, ..., xn (xi ∈ S, 1 ≤ i ≤ n).
In combinatorics, a finite set with n elements is sometimes called an n-set and a subset with k elements is called a k-subset. For example, the set {5, 6, 7} is a 3-set – a finite set with three elements – and {6, 7} is a 2-subset of it.
Basic properties
Any proper subset of a finite set S is finite and has fewer elements than S itself. As a consequence, there cannot exist a bijection between a finite set S and a proper subset of S. Any set with this property is called Dedekind-finite. Using the standard ZFC axioms for set theory, every Dedekind-finite set is also finite, but this implication cannot be proved in ZF (Zermelo–Fraenkel axioms without the axiom of choice) alone.
The axiom of countable choice, a weak version of the axiom of choice, is sufficient to prove this equivalence.
Any injective function between two finite sets of the same cardinality is also a surjective function (a surjection). Similarly, any surjection between two finite sets of the same cardinality is also an injection.
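This equivalence is easy to verify exhaustively for a small set. The sketch below enumerates every self-map of a 3-element set and checks that injectivity and surjectivity coincide:

```python
from itertools import product

# Sketch: on a small finite set, every injective self-map is surjective
# (and vice versa), checked by brute force over all functions S -> S.
S = [0, 1, 2]
for values in product(S, repeat=len(S)):       # f(i) = values[i]
    injective = len(set(values)) == len(values)
    surjective = set(values) == set(S)
    assert injective == surjective
print("injective <=> surjective holds on a 3-element set")
```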
The union of two finite sets is finite, with
|S ∪ T| ≤ |S| + |T|.
In fact, by the inclusion–exclusion principle:
|S ∪ T| = |S| + |T| − |S ∩ T|.
More generally, the union of any finite number of finite sets is finite. The Cartesian product of finite sets is also finite, with:
|S × T| = |S| × |T|.
Similarly, the Cartesian product of finitely many finite sets is finite. A finite set with n elements has 2^n distinct subsets. That is, the power set P(S) of a finite set S is finite, with cardinality 2^|S|.
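These counting identities can be spot-checked on small concrete sets, as in the following sketch:

```python
from itertools import combinations

# Sketch verifying the counting identities above on concrete sets.
S, T = {1, 2, 3}, {3, 4}
assert len(S | T) == len(S) + len(T) - len(S & T)    # inclusion-exclusion
assert len({(s, t) for s in S for t in T}) == len(S) * len(T)
n_subsets = sum(1 for k in range(len(S) + 1)
                for _ in combinations(S, k))
assert n_subsets == 2 ** len(S)                       # power set size
print("all identities hold")
```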
Any subset of a finite set is finite. The set of values of a function when applied to elements of a finite set is finite.
All finite sets are countable, but not all countable sets are finite. (Some authors, however, use "countable" to mean "countably infinite", so do not consider finite sets to be countable.)
The free semilattice over a finite set is the set of its non-empty subsets, with the join operation being given by set union.
Necessary and sufficient conditions for finiteness
In Zermelo–Fraenkel set theory without the axiom of choice (ZF), the following conditions are all equivalent:
S is a finite set. That is, S can be placed into a one-to-one correspondence with the set of those natural numbers less than some specific natural number.
(Kazimierz Kuratowski) S has all properties which can be proved by mathematical induction beginning with the empty set and adding one new element at a time.
(Paul Stäckel) S can be given a total ordering which is well-ordered both forwards and backwards. That is, every non-empty subset of S has both a least and a greatest element in the subset.
Every one-to-one function from P(P(S)) into itself is onto. That is, the powerset of the powerset of S is Dedekind-finite (see below).
Every surjective function from P(P(S)) onto itself is one-to-one.
(Alfred Tarski) Every non-empty family of subsets of S has a minimal element with respect to inclusion. (Equivalently, every non-empty family of subsets of S has a maximal element with respect to inclusion.)
S can be well-ordered and any two well-orderings on it are order isomorphic. In other words, the well-orderings on S have exactly one order type.
If the axiom of choice is also assumed (the axiom of countable choice is sufficient), then the following conditions are all equivalent:
S is a finite set.
(Richard Dedekind) Every one-to-one function from S into itself is onto. A set with this property is called Dedekind-finite.
Every surjective function from S onto itself is one-to-one.
S is empty or every partial ordering of S contains a maximal element.
Other concepts of finiteness
In ZF set theory without the axiom of choice, the following concepts of finiteness for a set S are distinct. They are arranged in strictly decreasing order of strength, i.e. if a set meets a criterion in the list then it meets all of the following criteria. In the absence of the axiom of choice the reverse implications are all unprovable, but if the axiom of choice is assumed then all of these concepts are equivalent. (Note that none of these definitions need the set of finite ordinal numbers to be defined first; they are all pure "set-theoretic" definitions in terms of the equality and membership relations, not involving ω.)
I-finite. Every non-empty set of subsets of S has a ⊆-maximal element. (This is equivalent to requiring the existence of a ⊆-minimal element. It is also equivalent to the standard numerical concept of finiteness.)
Ia-finite. For every partition of S into two sets, at least one of the two sets is I-finite. (A set with this property which is not I-finite is called an amorphous set.)
II-finite. Every non-empty ⊆-monotone set of subsets of S has a ⊆-maximal element.
III-finite. The power set P(S) is Dedekind finite.
IV-finite. S is Dedekind finite.
V-finite. |S| = 0 or 2·|S| > |S|.
VI-finite. |S| = 0 or |S| = 1 or |S|² > |S|.
VII-finite. S is I-finite or not well-orderable.
The forward implications (from strong to weak) are theorems within ZF. Counter-examples to the reverse implications (from weak to strong) in ZF with urelements are found using model theory.
Most of these finiteness definitions and their names are attributed to Tarski (1954) by Howard & Rubin (1998). However, definitions I, II, III, IV and V were presented in Tarski (1954), together with proofs (or references to proofs) for the forward implications. At that time, model theory was not sufficiently advanced to find the counter-examples.
Each of the properties I-finite through IV-finite is a notion of smallness in the sense that any subset of a set with such a property will also have the property. This is not true for V-finite through VII-finite because they may have countably infinite subsets.
See also
FinSet
Discrete point set
Ordinal number
Peano arithmetic
Notes
References
External links
Basic concepts in set theory
Cardinal numbers | Finite set | [
"Mathematics"
] | 1,524 | [
"Cardinal numbers",
"Mathematical objects",
"Infinity",
"Basic concepts in set theory",
"Numbers"
] |
11,751 | https://en.wikipedia.org/wiki/Foresight%20Institute | The Foresight Institute (Foresight) is a San Francisco-based research non-profit that promotes the development of nanotechnology and other emerging technologies, such as safe AGI, biotech and longevity.
Foresight runs four cross-disciplinary program tracks to research, advance, and govern maturing technologies for the long-term benefit of life and the biosphere: molecular machine nanotechnology for building better materials, biotechnology for health extension, and computer science and crypto commerce for intelligent global cooperation.
Foresight also runs a program on "existential hope", pushing forward the concept coined by Toby Ord and Owen Cotton-Barratt in their 2015 paper "Existential risk and Existential hope: Definitions".
Foresight's stated strategy is to focus on creating a community that promotes beneficial uses of new technologies and reduces the misuse of, and accidents potentially associated with, those technologies.
Foresight runs a one-year Fellowship program aimed at giving researchers and innovators the support and mentorship to accelerate their projects while they continue to work in their existing careers.
Since 2021, Foresight has hosted a podcast about grand futures called "The Foresight Institute Podcast" and shares all of its material openly via YouTube, with lectures from scientists and other relevant actors within its fields of interest.
In addition, Foresight hosts Vision Weekend, an annual conference focused on envisioning positive, long-term futures enabled by science and technology. The institute holds conferences on molecular nanotechnology and awards yearly prizes for developments in the field.
History
The Foresight Institute was founded in 1986 by Christine Peterson, K. Eric Drexler, and James C. Bennett to support the development of nanotechnology. Many of the institute's initial members came to it from the L5 Society, who were hoping to form a smaller group more focused on nanotechnology. In 1991, the Foresight Institute created two suborganizations with funding from tech entrepreneur Mitch Kapor; the Institute for Molecular Manufacturing and the Center for Constitutional Issues in Technology. In the 1990s, the Foresight Institute launched several initiatives to provide funding to developers of nanotechnology. In 1993, it created the Feynman Prize in Nanotechnology, named after physicist Richard Feynman. In May 2005, the Foresight Institute changed its name to "Foresight Nanotech Institute", though it reverted to its original name in June 2009.
In 2020, following the COVID-19 pandemic, the institute moved its programs online.
Prizes
The Feynman Prize in Nanotechnology is an award given by the Foresight Institute for significant advances in nanotechnology. Between 1993 and 1997, one prize was given biennially. Since 1997, two prizes have been given each year, divided into the categories of theory and experimentation. The prize is named in honor of physicist Richard Feynman, whose 1959 talk "There's Plenty of Room at the Bottom" is considered to have inspired and informed the start of the field of nanotechnology. Author Colin Milburn refers to the prize as an example of "fetishizing" its namesake Feynman, due to his "prestige as a scientist and his fame among the broader public."
The Foresight Institute also offers the Feynman Grand Prize, a $250,000 award to the first persons to create both a nanoscale robotic arm capable of precise positional control and a nanoscale 8-bit adder, with both conditions conforming to given specifications. The Feynman Grand Prize is intended to emulate historical prizes such as the Longitude prize, Orteig Prize, Kremer prize, Ansari X Prize, and two prizes that were offered by Richard Feynman personally as challenges during his 1959 "There's Plenty of Room at the Bottom" talk. In 2004, X-Prize Foundation founder Peter Diamandis was selected to chair the Feynman Grand Prize committee.
See also
Nanomedicine
Transhumanism
References
Further reading
Smith, Richard Hewlett. "A Policy Framework for Developing a National Nanotechnology Program", Master of Science thesis, Virginia Polytechnic Institute and State University, 1998, available at VTechWorks
External links
Scientific organizations established in 1986
Nanotechnology institutions
Non-profit organizations based in California
1986 establishments in California
Transhumanist organizations
Organizations based in Palo Alto, California | Foresight Institute | [
"Materials_science"
] | 884 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
11,753 | https://en.wikipedia.org/wiki/List%20of%20freshwater%20aquarium%20plant%20species | Aquatic plants are used to give the freshwater aquarium a natural appearance, oxygenate the water, absorb ammonia, and provide habitat for fish, especially fry (babies) and for invertebrates. Some aquarium fish and invertebrates also eat live plants. Hobbyists use aquatic plants for aquascaping, of several aesthetic styles.
Most of these plant species are found either partially or fully submerged in their natural habitat. Although there are a handful of obligate aquatic plants that must be grown entirely underwater, most can grow fully emersed if the soil is moist. Even species that normally live only at the water's margins can often survive when completely submerged.
By scientific name
The taxonomy of most plant genera is not final. Scientific names listed here may, therefore, contradict other sources. Many of these species are dangerous invasives and should be disposed of in a way that guarantees that they will not enter local waters.
Common aquarium plant species:
Aciotis acuminifolia
Acmella repens
Acorus calamus (Common sweet flag)
Acorus gramineus (Japanese sweet flag)
Aldrovanda vesiculosa
Alisma canaliculatum
Alisma gramineum
Alisma lanceolatum
Alisma nanum
Alisma orientale
Alisma plantago-aquatica
Alisma subcordatum
Alisma triviale
Alisma wahlenbergii
Alternanthera bettzickiana
Alternanthera philoxeroides
Alternanthera reineckii
Alternanthera sessilis
Ammannia capitellata
Ammannia crassicaulis (Synonym Nesaea crassicaulis)
Ammannia gracilis (Delicate ammania, red ammania)
Ammannia latifolia
Ammannia pedicellata
Ammannia praetermissa
Ammannia senegalensis
Anubias afzelii (Narrow-leafed anubias)
Anubias barteri var. barteri (Broadleaved anubias)
Anubias barteri var. angustifolia
Anubias barteri var. caladiifolia
Anubias barteri var. glabra
Anubias barteri var. nana (Dwarf anubias)
Anubias gigantea
Anubias gilletti
Anubias gracilis
Anubias hastifolia
Anubias heterophylla
Anubias pynaertii
Aponogeton appendiculatus
Aponogeton bernierianus
Aponogeton boivinianus
Aponogeton capuronii
Aponogeton crispus (Crinkled or ruffled aponogeton)
Aponogeton decartyi
Aponogeton desertorum
Aponogeton dioecus
Aponogeton distachyos
Aponogeton elongatus
Aponogeton fenestralis
Aponogeton henkelianus
Aponogeton junceus
Aponogeton longiplumulosus
Aponogeton loriae
Aponogeton madagascariensis (Madagascar laceleaf, lace plant)
Aponogeton natans
Aponogeton rigidifolius
Aponogeton tenuispicatus
Aponogeton ulvaceus (Compact apongeton)
Aponogeton undulatus
Armoracia aquatica
Arthraxon hispidus
Azolla caroliniana (water velvet, mosquito fern)
Azolla filiculoïdes (Azolla, moss fern)
Azolla pinnata
Bacopa amplexicaulis
Bacopa australis
Bacopa caroliniana (lemon bacopa, water hyssop, giant bacopa)
Bacopa crenata
Bacopa innominata
Bacopa lanigera
Bacopa madagascarensis
Bacopa monnieri (water hyssop, dwarf bacopa, baby tears)
Bacopa myriophylloides
Bacopa rotundifolia (Round bacopa)
Bacopa salzmannii
Bacopa serpyllifolia
Baldellia ranunculoides
Barclaya longifolia (Orchid lily)
Barclaya motleyi
Berula erecta
Blyxa aubertii
Blyxa echinosperma
Blyxa japonica (Japanese rush)
Blyxa novoguineensis
Blyxa octandra
Bolbitis heteroclita (sometimes sold as B. asiatica)
Bolbitis heudelotii (African or Congo fern)
Bucephalandra gigantea
Bucephalandra motleyana
Bucephalandra catherineae
Cabomba aquatica (Yellow cabomba, giant cabomba)
Cabomba caroliniana (Green cabomba)
Cabomba furcata
Cabomba palaeformis
Cabomba piauhyensis (Red cabomba)
Caldesia parnassifolia
Calla palustris
Caltha palustris
Callitriche hamulata
Callitriche hermaphroditica
Callitriche palustris
Callitriche stagnalis
Callitriche terrestris
Cardamine lyrata (Chinese ivy, Japanese cress)
Cardamine rotundifolia
Ceratophyllum demersum (hornwort)
Ceratophyllum submersum (tropical hornwort)
Ceratopteris cornuta
Ceratopteris pteridoides
Ceratopteris thalictroides (water sprite)
Cladophora aegagropila
Clinopodium brownei
Crassula aquatica
Crassula helmsii
Crinum calamistratum
Crinum natans (African onion plant)
Crinum purpurascens
Crinum thaianum (water onion)
Cryptocoryne affinis
Cryptocoryne alba
Cryptocoryne albida
Cryptocoryne aponogetifolia
Cryptocoryne auriculata
Cryptocoryne axelrodii
Cryptocoryne balansae
Cryptocoryne beckettii (Beckett's Cryptocoryne)
Cryptocoryne blassii
Cryptocoryne bogneri
Cryptocoryne bullosa
Cryptocoryne ciliata
Cryptocoryne cognata
Cryptocoryne cordata (Giant cryptocoryne)
Cryptocoryne crispatula
Cryptocoryne cruddasiana
Cryptocoryne dewitii
Cryptocoryne diderici
Cryptocoryne elliptica
Cryptocoryne ferruginea
Cryptocoryne fusca
Cryptocoryne gasserii
Cryptocoryne grabowskii
Cryptocoryne gracilis
Cryptocoryne griffithii
Cryptocoryne hudoroi
Cryptocoryne keei
Cryptocoryne legroi
Cryptocoryne lingua
Cryptocoryne longicauda
Cryptocoryne lucens
Cryptocoryne lutea
Cryptocoryne minima
Cryptocoryne moehlmannii (Moehlmann's cryptocoryne)
Cryptocoryne nevillii
Cryptocoryne nurii
Cryptocoryne pallidinervia
Cryptocoryne parva (Tiny cryptocoryne)
Cryptocoryne petchii
Cryptocoryne pontederiifolia
Cryptocoryne purpurea
Cryptocoryne retrospiralis
Cryptocoryne schulzei
Cryptocoryne scrurillis
Cryptocoryne siamensis
Cryptocoryne spiralis
Cryptocoryne striolata
Cryptocoryne thwaitesii
Cryptocoryne tonkinensis
Cryptocoryne undulata (Undulate cryptocoryne)
Cryptocoryne usteriana
Cryptocoryne venemae
Cryptocoryne versteegii
Cryptocoryne walkeri
Cryptocoryne wendtii 'Tropica'
Cryptocoryne x willisii
Cryptocoryne zewaldiae
Cryptocoryne zonata
Cryptocoryne zukalii
Cuphea anagalloidea
Cyperus alternifolius
Cyperus helferi
Cyperus papyrus
Damasonium alisma
Didiplis diandra (Water hedge)
Diodia kuntzei
Diodia virginiana
Echinodorus africanus
Echinodorus amazonicus (Amazon sword)
Echinodorus andrieuxii
Echinodorus angustifolius
Echinodorus argentinensis
Echinodorus aschersonianus
Echinodorus barthii
Echinodorus berteroi
Echinodorus bleheri (Broadleaved amazon)
Echinodorus brevipedicellatus
Echinodorus cordifolius (Radicans sword, spade leaf sword)
Echinodorus fluitans
Echinodorus grandiflorus (Large-flowered amazon)
Echinodorus horemanii (Black-red amazon)
Echinodorus horizontalis
Echinodorus humilis
Echinodorus latifolius
Echinodorus longiscapus
Echinodorus macrophyllus (Large-leaved amazon sword)
Echinodorus martii
Echinodorus major (Ruffled amazon sword)
Echinodorus opacus (Opaque amazon sword)
Echinodorus osiris (Red amazon sword)
Echinodorus 'Ozelot'
Echinodorus palaefolius
Echinodorus paniculatus
Echinodorus parviflorus (Black amazon sword)
Echinodorus pelliscidus
Echinodorus quadricostatus (Dwarf sword)
Echinodorus radicans
Echinodorus rigidifolius
Echinodorus 'Rubin'
Echinodorus rubra
Echinodorus schlueteri
Echinodorus subalatus
Echinodorus tunicatus
Echinodorus uruguayensis (Uruguay amazon sword)
Egeria densa (Elodea, pondweed)
Egeria najas
Egleria fluctuans
Eichhornia azurea
Eichhornia crassipes (Water hyacinth)
Eichhornia diversifolia
Elatine gussonei
Elatine hydropiper
Elatine macropoda
Elatine triandra
Eleocharis acicularis (Hairgrass)
Eleocharis dulcis
Eleocharis minima
Eleocharis obtusa
Eleocharis parvula
Eleocharis vivipara
Elodea canadensis (Canadian pondweed)
Elodea granatensis
Elodea nuttallii
Elodea occidentalis
Equisetum spp.
Eriocaulon amanoanum
Eriocaulon cinereum
Eriocaulon depressum
Eriocaulon parkeri
Eusteralis stellata (Star rotala)
Fittonia argyroneura
Fontinalis antipyretica (Willow moss)
Glossadelphus zollingeri
Glossostigma diandrum
Glossostigma elatinoides
Gratiola amphiantha
Gratiola brevefolia
Gratiola viscidula
Gymnocoronis spilanthoides (Spadeleaf plant)
Helanthium bolivianum (Bolivian sword) (Synonym - Echinodorus bolivianus)
Helanthium tenellum (Pygmy chain sword) (Synonym - Echinodorus tenellus)
Helanthium zombiense
Hemianthus callitrichoides (Dwarf helzine)
Hemianthus micranthemoides (Pearlweed)
Heteranthera dubia
Heteranthera reniformis
Heteranthera zosterifolia (Stargrass)
Hippuris vulgaris
Hottonia inflata
Hottonia palustris (Water violet)
Hydrilla verticillata
Hydrocharis morsus-ranae
Hydrocleys martii
Hydrocleys nymphoides
Hydrocotyle leucocephala (Brazilian pennywort)
Hydrocotyle sibthorpioides
Hydrocotyle tripartita
Hydrocotyle verticillata (Whorled umbrella plant)
Hydrocotyle vulgaris
Hydrothrix gardneri
Hydrotriche hottoniiflora
Hygrophila angustifolia
Hygrophila corymbosa 'crispa'
Hygrophila corymbosa 'glabra' (Broadlead giant stricta)
Hygrophila corymbosa 'gracilis'
Hygrophila corymbosa 'siamensis'
Hygrophila corymbosa 'strigosa'
Hygrophila difformis (Water wisteria)
Hygrophila guianensis
Hygrophila lacustris
Hygrophila lancea
Hygrophila natalis
Hygrophila polysperma (Dwarf hygrophilia)
Hygrophila salicifolia
Hygrophila stricta (Thai stricta, green stricta)
Hygroryza aristata
Hyptis lorentziana
Iris spp.
Isoetes lacustris (quillwort)
Isoetes malinverniana
Isoetes taiwanensis
Isoetes velata
Isolepis setacea
Jasarum steyermarkii
Juncus repens
Lagarosiphon cordofanus
Lagarosiphon madagascariensis
Lagarosiphon major (Elodea crispa)
Lagenandra dewitii
Lagenandra insignis
Lagenandra koenigii
Lagenandra lancifolia
Lagenandra meeboldii
Lagenandra nairii
Lagenandra ovata
Lagenandra thwaitesii
Lemna gibba
Lemna minor (Duckweed)
Lemna paucicostata
Lemna perpusilla
Lemna trisulca
Lilaeopsis brasiliensis
Lilaeopsis carolinensis
Lilaeopsis macloviana
Lilaeopsis mauritiana
Lilaeopsis novae-zelandiae (New Zealand grassplant)
Lilaeopsis ruthiana
Limnobium laevigatum (Amazon frogbit)
Limnobium spongia
Limnocharis flava
Limnophila aquatica (Giant ambulia)
Limnophila aromatica
Limnophila glabra
Limnophila heterophylla
Limnophila indica (Indian ambulia)
Limnophila sessiliflora (Dwarf ambulia)
Limnophyton fluitans
Lindernia crustacea F. Muell.
Lindernia dubia
Lindernia grandiflora
Lindernia parviflora
Lindernia rotundifolia
Littorella uniflora
Lobelia cardinalis (Cardinal flower, scarlet lobelia)
Lobelia dortmanna
Lomariopsis sp. (Süsswassertang)
Ludwigia alternifolia
Ludwigia arcuata
Ludwigia glandulosa (Glandular ludwigia, red star ludwigia)
Ludwigia helminthorrhiza
Ludwigia inclinata
Ludwigia inclinata var. verticellata 'Cuba'
Ludwigia mullertii
Ludwigia natans
Ludwigia sedioides
Ludwigia palustris
Ludwigia pulvinaris
Ludwigia repens (Creeping ludwigia, narrow-leaf ludwigia)
Luronium natans
Lycopodiella inundata (Lycopodium inundatum)
Lysimachia nummularia (creeping Jenny, moneywort)
Marsilea crenata
Marsilea drummondii
Marsilea hirsuta
Marsilea pubescens
Marsilea quadrifolia (water-clover)
Mayaca fluviatilis
Mayaca madida (Synonym Mayaca sellowiana)
Mayaca vandellii
Mentha aquatica
Micranthemum umbrosum (Helzine)
Microcarpaea minima
Microsorum pteropus (Java fern)
Monochoria vaginalis
Monosolenium tenerum (commercial name; plants sold under this name are actually a fern Lomariopsis sp.)
Murdannia keisak
Myriophyllum alterniflorum
Myriophyllum aquaticum (Brazilian milfoil, milfoil)
Myriophyllum elatinoides
Myriophyllum heterophyllum
Myriophyllum hippuroides (Green milfoil, water milfoil)
Myriophyllum mattogrossense
Myriophyllum proserpinacoides
Myriophyllum scabratum (Foxtail)
Myriophyllum spicatum
Myriophyllum tuberculatum (Red myriophyllum)
Myriophyllum ussuriense
Myriophyllum verticillatum
Myriophyllum oguraense
Najas graminea
Najas guadelupensis
Najas indica
Najas marina
Najas minor
Najas pectinata
Nechamandra alternifolia
Nelumbo nucifera
Neptunia oleracea
Nitella capillaris
Nitella flexilis
Nitella gracilis
Nomaphila siamensis
Nuphar advenum
Nuphar japonica (Spatterdock)
Nuphar lutea (Yellow water-lily)
Nuphar pumilum
Nuphar sagittifolium
Nymphaea alba
Nymphaea lotus (Tiger lotus)
Nymphaea lotus var. rubra
Nymphaea micrantha
Nymphaea pubescens
Nymphaea pygmea
Nymphaea stellata (Red and blue water lily)
Nymphaea zenkeri 'Red' (Red tiger lotus)
Nymphoides aquatica (Banana plant)
Nymphoides humboldtiana
Nymphoides indica
Nymphoides peltata
Oenanthe javanica
Oenanthe aquatica
Oldenlandia salzmannii (Synonym Hedyotis salzmannii)
Orontium aquaticum
Ottelia alismoides
Ottelia mesenterum
Ottelia ulvifolia
Penthorum sedoides
Persicaria hydropiperoides
Persicaria praetermissa
Pilularia americana
Pilularia globulifera
Pistia stratiotes (Water lettuce)
Phyllanthus fluitans
Physostegia purpurea
Pogostemon helferi
Pogostemon stellatus
Pontederia cordata
Potamogeton coloratus
Potamogeton crispus
Potamogeton densus
Potamogeton filiformis
Potamogeton gayi
Potamogeton gramineus
Potamogeton lucens
Potamogeton malaianus
Potamogeton natans
Potamogeton perfoliatus
Proserpinaca palustris
Ranunculus aquatilis
Ranunculus limosella
Regnellidium diphyllum
Riccia fluitans (Crystalwort)
Ricciocarpos natans
Rorippa aquatica
Rotala indica
Rotala macrandra (Giant red rotala)
Rotala mexicana
Rotala ramosior
Rotala rotundifolia (Dwarf rotala)
Rotala pusilla
Rotala wallichii (Whorly rotala)
Ruppia maritima
Sagittaria chapmani
Sagittaria eatonii
Sagittaria filiformis
Sagittaria graminea
Sagittaria guyanensis
Sagittaria isoëtiformis
Sagittaria latifolia
Sagittaria microfila
Sagittaria montevidensis
Sagittaria natans
Sagittaria papillosa
Sagittaria platyphylla (giant sagittaria)
Sagittaria pusilla (dwarf sagittaria)
Sagittaria sagittifolia
Sagittaria subulata (needle sagittaria, floating arrowhead)
Salvinia auriculata
Salvinia cucullata
Salvinia minima
Salvinia natans (water spangles)
Salvinia oblongifolia
Salvinia rotundifolia
Samolus valerandi (Water cabbage)
Saururus cernuus (Lizard's tail)
Schismatoglottis prietoi
Selaginella sp.
Sium floridanum
Sium latifolium
Shinnersia rivularis (Mexican oak leaf)
Spiranthes romanzoffiana
Spirodela polyrhiza
Staurogyne repens
Staurogyne stolonifera
Stratiotes aloides
Stuckenia vaginata
Subularia aquatica
Syngonanthus caulescens
Synnema triflorum (outdated synonym of Hygrophila difformis)
Taxiphyllum barbieri (Java moss)
Tonina fluviatilis
Trapa natans (Water chestnut)
Triglochin maritima
Triglochin palustris
Triglochin striata
Trithuria austinensis
Trithuria australis
Trithuria inconspicua
Typha angustifolia
Typha latifolia
Utricularia bifida
Utricularia gibba
Utricularia graminifolia
Utricularia minor
Utricularia vulgaris
Vallisneria americana (Dwarf vallisneria)
Vallisneria asiatica
Vallisneria asiatica var. biwaensis (Corkscrew vallisneria)
Vallisneria gigantea (Giant vallisneria)
Vallisneria neotropicalis
Vallisneria rubra
Vallisneria spiralis (Straight vallisneria)
Vallisneria tortifolia (Twisted vallisneria, dwarf vallisneria)
Vallisneria tortissima
Vesicularia montagnei (Christmas moss, Xmas moss)
Veronica americana
Wolffia arrhiza
Wolffia microscopica
Wolffiella floridana
Zannichellia palustris
Algae
Most algae in hobby aquaria are unwanted, nuisance plants. A few algae, such as marimo (Aegagropila linnaei), are sought after and intentionally cultivated in freshwater aquaria.
False aquatics or pseudo-aquarium plants
Several species of terrestrial plants are frequently sold as "aquarium plants". While such plants are beautiful and can survive and even flourish for months under water, they will eventually die and must be removed so their decay does not contaminate the aquarium water. These plants lack the biological adaptations needed to live underwater indefinitely.
Aglaonema modestum (Chinese evergreen)
Aglaonema simplex
Chlorophytum bichetii (Pongol sword)
Dracaena sanderiana (Striped dragonplant)
Hemigraphis colorata (Crimson ivy)
Ophiopogon japonicus (Fountain plant)
Pilea cadierei (Aluminum plant)
Sciadopitys verticillata (Umbrella pine, koyamaki)
Spathiphyllum tasson (Brazil sword)
Syngonium podophyllum (Stardust ivy)
Trichomanes javanicum
References
See also
List of freshwater aquarium fish species
Fishkeeping
Aquarium
Freshwater aquarium plant species
Freshwater plant
Aquarium plants | List of freshwater aquarium plant species | ["Biology"] | 4,731 | ["Lists of biota", "Lists of plants", "Plants"] |
11,774 | https://en.wikipedia.org/wiki/Field%20ion%20microscope | The field-ion microscope (FIM) was invented by Müller in 1951. It is a type of microscope that can be used to image the arrangement of atoms at the surface of a sharp metal tip.
On October 11, 1955, Erwin Müller and his Ph.D. student, Kanwar Bahadur (Pennsylvania State University), observed individual tungsten atoms on the surface of a sharply pointed tungsten tip by cooling it to 21 K and employing helium as the imaging gas. Müller and Bahadur were the first to observe individual atoms directly.
Introduction
In FIM, a sharp (<50 nm tip radius) metal tip is produced and placed in an ultra high vacuum chamber, which is backfilled with an imaging gas such as helium or neon. The tip is cooled to cryogenic temperatures (20–100 K). A positive voltage of 5 to 10 kilovolts is applied to the tip. Gas atoms adsorbed on the tip are ionized by the strong electric field in the vicinity of the tip (thus, "field ionization"), becoming positively charged and being repelled from the tip. The curvature of the surface near the tip causes a natural magnification — ions are repelled in a direction roughly perpendicular to the surface (a "point projection" effect). A detector is placed so as to collect these repelled ions; the image formed from all the collected ions can be of sufficient resolution to image individual atoms on the tip surface.
Unlike conventional microscopes, where the spatial resolution is limited by the wavelength of the particles which are used for imaging, the FIM is a projection type microscope with atomic resolution and an approximate magnification of a few million times.
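The quoted magnification follows from simple point-projection geometry. The sketch below runs that arithmetic, assuming the commonly used estimate M ≈ R/(βr), where R is the tip-to-screen distance, r the tip apex radius, and β an empirical image-compression factor of roughly 1.5; the specific values of R, r, and β are illustrative assumptions, not data from any particular instrument.

    # Point-projection magnification estimate for a field-ion microscope.
    # Assumes M ~ R / (beta * r); all numbers below are illustrative.
    R = 0.10      # tip-to-screen distance in metres (assumed: 10 cm)
    r = 50e-9     # tip apex radius in metres (the <50 nm figure quoted above)
    beta = 1.5    # image-compression factor (assumed)

    M = R / (beta * r)
    print(f"approximate magnification: {M:.1e}")  # ~1.3e6

A sharper tip or a longer flight path pushes this estimate toward the "few million times" quoted above.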
Design, limitations and applications
FIM, like field-emission microscopy (FEM), consists of a sharp sample tip and a fluorescent screen (now replaced by a multichannel plate) as its key elements. However, there are some essential differences, as follows:
The tip potential is positive.
The chamber is filled with an imaging gas (typically He or Ne at 10⁻⁵ to 10⁻³ Torr).
The tip is cooled to low temperatures (~20–80 K).
As in FEM, the field strength at the tip apex is typically a few V/Å. The experimental set-up and image formation in FIM are illustrated in the accompanying figures.
In FIM the presence of a strong field is critical. The imaging gas atoms (He, Ne) near the tip are polarized by the field and since the field is non-uniform the polarized atoms are attracted towards the tip surface. The imaging atoms then lose their kinetic energy performing a series of hops and accommodate to the tip temperature. Eventually, the imaging atoms are ionized by tunneling electrons into the surface and the resulting positive ions are accelerated along the field lines to the screen to form a highly magnified image of the sample tip.
In FIM, the ionization takes place close to the tip, where the field is strongest. The electron that tunnels from the atom is picked up by the tip. There is a critical distance, xc, at which the tunneling probability is a maximum. This distance is typically about 0.4 nm. The very high spatial resolution and high contrast for features on the atomic scale arise from the fact that the electric field is enhanced in the vicinity of the surface atoms because of the higher local curvature. The resolution of FIM is limited by the thermal velocity of the imaging ion. Resolution of the order of 1 Å (atomic resolution) can be achieved by effective cooling of the tip.
Application of FIM, like FEM, is limited to materials which can be fabricated in the shape of a sharp tip, can be used in an ultra-high-vacuum (UHV) environment, and can tolerate the high electrostatic fields. For these reasons, refractory metals with high melting temperature (e.g. W, Mo, Pt, Ir) are conventional objects for FIM experiments. Metal tips for FEM and FIM are prepared by electropolishing (electrochemical polishing) of thin wires. However, these tips usually contain many asperities. The final preparation procedure involves the in situ removal of these asperities by field evaporation, simply by raising the tip voltage. Field evaporation is a field-induced process which involves the removal of atoms from the surface itself at very high field strengths and typically occurs in the range 2–5 V/Å. The effect of the field in this case is to reduce the effective binding energy of the atom to the surface and to give, in effect, a greatly increased evaporation rate relative to that expected at that temperature at zero field. This process is self-regulating, since atoms at positions of high local curvature, such as adatoms or ledge atoms, are removed preferentially. The tips used in FIM are sharper (tip radius 100–300 Å) than those used in FEM experiments (tip radius ~1000 Å).
FIM has been used to study dynamical behavior of surfaces and the behavior of adatoms on surfaces. The problems studied include adsorption-desorption phenomena, surface diffusion of adatoms and clusters, adatom-adatom interactions, step motion, equilibrium crystal shape, etc. However, there is the possibility of the results being affected by the limited surface area (i.e. edge effects) and by the presence of large electric field.
A recent study from Günther Rupprechter's laboratory examined, using field emission microscopy, a rhodium nanocrystal surface consisting of different nanometer-sized nanofacets as a model of a compartmentalized reaction nanosystem. Different reaction modes were observed, including a transition to spatio-temporal chaos. The transitions between modes were caused by variations of the hydrogen pressure, which modified the strength of diffusive coupling between individual nanofacets.
See also
Atom probe
Electron microscope
Field emission microscopy
List of surface analysis methods
References
K. Oura, V. G. Lifshits, A. A. Saranin, A. V. Zotov and M. Katayama, Surface Science: An Introduction (Springer-Verlag, Berlin Heidelberg, 2003).
John B. Hudson, Surface Science: An Introduction (Butterworth-Heinemann, 1992).
External links
Northwestern University Center for Atom-Probe Tomography
Further reading
Microscopes | Field ion microscope | ["Chemistry", "Technology", "Engineering"] | 1,307 | ["Microscopes", "Measuring instruments", "Microscopy"] |
11,778 | https://en.wikipedia.org/wiki/Frederick%20Soddy | Frederick Soddy FRS (2 September 1877 – 22 September 1956) was an English radiochemist who explained, with Ernest Rutherford, that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions. He also proved the existence of isotopes of certain radioactive elements. In 1921, he received the Nobel Prize in Chemistry "for his contributions to our knowledge of the chemistry of radioactive substances, and his investigations into the origin and nature of isotopes". Soddy was a polymath who mastered chemistry, nuclear physics, statistical mechanics, finance, and economics.
Biography
Soddy was born at 6 Bolton Road, Eastbourne, England, the son of Benjamin Soddy, corn merchant, and his wife Hannah Green. He went to school at Eastbourne College, before going on to study at University College of Wales at Aberystwyth and at Merton College, Oxford, where he graduated in 1898 with first class honours in chemistry. He was a researcher at Oxford from 1898 to 1900.
Scientific career
In 1900, he became a demonstrator in chemistry at McGill University in Montreal, Quebec, where he worked with Ernest Rutherford on radioactivity.
He and Rutherford realized that the anomalous behaviour of radioactive elements was because they decayed into other elements.
This decay also produced alpha, beta, and gamma radiation. When radioactivity was first discovered, no one was sure what the cause was. It needed careful work by Soddy and Rutherford to prove that atomic transmutation was in fact occurring.
In 1903, with Sir William Ramsay at University College London, Soddy showed that the decay of radium produced helium gas. In the experiment a sample of radium was enclosed in a thin-walled glass envelope sited within an evacuated glass bulb. After leaving the experiment running for a long period of time, a spectral analysis of the contents of the former evacuated space revealed the presence of helium. Later in 1907, Rutherford and Thomas Royds showed that the helium was first formed as positively charged nuclei of helium (He2+) which were identical to alpha particles, which could pass through the thin glass wall but were contained within the surrounding glass envelope.
From 1904 to 1914, Soddy was a lecturer at the University of Glasgow. Ruth Pirret worked as his research assistant during this time. In May 1910 Soddy was elected a Fellow of the Royal Society. In 1914 he was appointed to a chair at the University of Aberdeen, where he worked on research related to World War I.
In 1913, Soddy showed that an atom moves lower in atomic number by two places on alpha emission, higher by one place on beta emission. This was discovered at about the same time by Kazimierz Fajans, and is known as the radioactive displacement law of Fajans and Soddy, a fundamental step toward understanding the relationships among families of radioactive elements. In 1913 Soddy also described the phenomenon in which a radioactive element may have more than one atomic mass though the chemical properties are identical. He named this concept isotope meaning "same place". The word was initially suggested to him by Margaret Todd. Later, J. J. Thomson showed that non-radioactive elements can also have multiple isotopes.
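The displacement law is mechanical enough to express in a few lines of code. The sketch below merely encodes the rule as stated above; the helper name is arbitrary, and the example chain (uranium-238 to thorium-234 to protactinium-234) is chosen for illustration.

    # Minimal sketch of the Fajans-Soddy radioactive displacement law:
    # alpha decay lowers the atomic number Z by 2 (and mass number A by 4),
    # while beta-minus decay raises Z by 1 and leaves A unchanged.
    def displace(Z, A, mode):
        """Return (Z, A) of the daughter nuclide for 'alpha' or 'beta' decay."""
        if mode == "alpha":
            return Z - 2, A - 4
        if mode == "beta":
            return Z + 1, A
        raise ValueError(f"unknown decay mode: {mode}")

    # Uranium-238 (Z=92) alpha-decays to thorium-234 (Z=90), which
    # beta-decays to protactinium-234 (Z=91).
    print(displace(92, 238, "alpha"))  # (90, 234)
    print(displace(90, 234, "beta"))   # (91, 234)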
The work that Soddy and his research assistant Ada Hitchins did at Glasgow and Aberdeen showed that uranium decays to radium.
Soddy published The Interpretation of Radium (1909) and Atomic Transmutation (1953).
In 1918, working with the Scottish scientist John Arnold Cranston, he announced the discovery of an isotope of the element later named protactinium. This slightly post-dated its discovery in Germany by Lise Meitner and Otto Hahn; however, it is said that Soddy and Cranston's discovery was actually made in 1915, its announcement having been delayed because Cranston's notes were locked away while he was on active service in the First World War.
In 1919, he moved to the University of Oxford as the first Dr. Lee's Professor of Chemistry, where, in the period up till 1936, he reorganized the laboratories and the syllabus in chemistry. He received the 1921 Nobel Prize in Chemistry for his research in radioactive decay and particularly for his formulation of the theory of isotopes.
His work and essays popularising the new understanding of radioactivity was the main inspiration for H. G. Wells's The World Set Free (1914), which features atomic bombs dropped from biplanes in a war set many years in the future. Wells's novel is also known as The Last War and imagines a peaceful world emerging from the chaos. In Wealth, Virtual Wealth and Debt Soddy praises Wells's The World Set Free. He also says that radioactive processes probably power the stars.
Economics
In four books written from 1921 to 1934, Soddy carried on a "campaign for a radical restructuring of global monetary relationships", offering a perspective on economics rooted in physics – the laws of thermodynamics, in particular – and was "roundly dismissed as a crank". While most of his proposals – "to abandon the gold standard, let international exchange rates float, use federal surpluses and deficits as macroeconomic policy tools that could counter cyclical trends, and establish bureaus of economic statistics (including a consumer price index) in order to facilitate this effort" – are now conventional practice, his critique of fractional-reserve banking still "remains outside the bounds of conventional wisdom" although a recent paper by the IMF reinvigorated his proposals. Soddy wrote that financial debts grew exponentially at compound interest but the real economy was based on exhaustible stocks of fossil fuels. Energy obtained from the fossil fuels could not be used again. This criticism of economic growth is echoed by his intellectual heirs in the now emergent field of ecological economics.
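A toy calculation makes the asymmetry Soddy emphasized concrete. The figures below are invented purely for scale; the point is the shape of the two curves, not the numbers.

    # Illustrative arithmetic behind Soddy's argument: a debt compounding
    # at a fixed rate grows exponentially, while a physical stock drawn
    # down at a fixed amount per year shrinks linearly. Figures invented.
    debt, rate = 100.0, 0.05        # arbitrary units, 5% compound interest
    stock, draw = 1000.0, 10.0      # finite resource, constant annual use

    for year in range(0, 101, 25):
        print(year, round(debt * (1 + rate) ** year, 1), stock - draw * year)
    # By year 100 the debt has grown to ~131x its starting value, while
    # the stock is exhausted: the mismatch a money system, in his view, ignores.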
The New Palgrave Dictionary of Economics, an influential reference text in economics, recognized Soddy as a "reformer" for his works on monetary reforms.
Political views
In Wealth, Virtual Wealth and Debt, Soddy cited the Protocols of the Learned Elders of Zion, which had been widely disseminated by Henry Ford in the United States, as evidence that the belief in a "financial conspiracy to enslave the world" was widespread at the time. He further wrote that "conscious conspiracy or not a corrupt monetary system strikes at the very life of the nation". Later in life he published a pamphlet Abolish Private Money, or Drown in Debt (1939).
His writing influenced, among others, Ezra Pound.
Though some have accused Soddy of anti-Semitism, most of his biographers dispute this characterization and note that Soddy's friends and students included Jews who held positive views of him.
Descartes' theorem
He rediscovered Descartes' theorem in 1936 and published it as a poem, "The Kiss Precise", quoted at Problem of Apollonius. The kissing circles in this problem are sometimes known as Soddy circles.
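Descartes' theorem states that the curvatures (reciprocal radii) of four mutually tangent circles satisfy (k1 + k2 + k3 + k4)² = 2(k1² + k2² + k3² + k4²). Solving for the fourth curvature gives the small sketch below; the function name is arbitrary, and the two roots correspond to the inner and outer tangent circles.

    import math

    # Descartes' theorem ("The Kiss Precise"): given three mutually tangent
    # circles with curvatures k1, k2, k3, the fourth tangent circle has
    # curvature k4 = k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1).
    def soddy_curvatures(k1, k2, k3):
        root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
        return k1 + k2 + k3 + root, k1 + k2 + k3 - root

    # Three mutually tangent unit circles (k = 1):
    k_inner, k_outer = soddy_curvatures(1.0, 1.0, 1.0)
    print(k_inner)  # ~6.464 -> small circle nestled between, radius ~0.155
    print(k_outer)  # ~-0.464 -> negative curvature: enclosing circle, radius ~2.155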
Honours and awards
He received the Nobel Prize in Chemistry in 1921 and the same year was elected member of the International Atomic Weights Committee. A small crater on the far side of the Moon as well as the radioactive uranium mineral soddyite are named after him. The author H. G. Wells dedicated his novel The World Set Free to Soddy's Interpretation of Radium (1909).
Personal life
In 1908, Soddy married Winifred Moller Beilby (1885–1936), the daughter of industrial chemist Sir George Beilby and Lady Emma Beilby, a philanthropist to women's causes. The couple worked together and co-published a paper in 1910 on the absorption of gamma rays from radium. He died in Brighton, England, in 1956, twenty days after his 79th birthday.
Bibliography
Radio-Activity (1904)
The Interpretation of Radium (1909)
Matter and Energy (1911)
The Chemistry of the Radio-elements (1911)
Science and Life: Aberdeen addresses (1920)
Cartesian Economics: The Bearing of Physical Science upon State Stewardship (1921)
Nobel Lecture – The origins of the conception of isotopes (1922)
Wealth, Virtual Wealth and Debt. The solution of the economic paradox (George Allen & Unwin, 1926). See also Wealth, Virtual Wealth and Debt.
The wrecking of a scientific age (1927)
Money versus Man (1931)
The Interpretation of the Atom (1932) (at Archive.org. Free registration needed)
The Role of Money (London: George Routledge & Sons Ltd, 1934)
Money as nothing for something ; The gold "standard" snare (1935)
Abolish Private Money, or Drown in Debt (1939) (with Walter Crick)
Present outlook, a warning : debasement of the currency, deflation and unemployment (1944)
The Story of Atomic Energy (1949)
Atomic Transmutation (1953)
See also
Ada Hitchins, who helped Soddy to discover the element protactinium
Alfred J. Lotka
Problem of Apollonius
Oliver Sacks' autobiography Uncle Tungsten, which discusses Soddy, his work, and his discoveries in atomic physics at length.
References
Further reading
External links
The Central Role of Energy in Soddy's Holistic and Critical Approach to Nuclear Science, Economics, and Social Responsibility
Annotated bibliography for Frederick Soddy from the Alsos Digital Library for Nuclear Issues
M. King Hubbert on the Nature of Growth. 1974
A biography of Frederick Soddy by Arian Forrest Nevin
The Frederick Soddy Trust
including the Nobel Lecture, 12 December 1922 The Origins of the Conception of Isotopes
Frederick Soddy Papers, 1920–1956 (inclusive). H MS c388. Harvard Medical Library, Francis A. Countway Library of Medicine, Boston, Mass.
1877 births
1956 deaths
Academics of the University of Aberdeen
Alumni of Merton College, Oxford
Alumni of Aberystwyth University
Fellows of the Royal Society
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
Nobel laureates in Chemistry
People educated at Eastbourne College
People from Eastbourne
English chemists
English Nobel laureates
Dr Lee's Professors of Chemistry
Academic staff of McGill University
People involved with the periodic table | Frederick Soddy | ["Chemistry"] | 2,084 | ["Periodic table", "People involved with the periodic table"] |
11,780 | https://en.wikipedia.org/wiki/Fur%20seal | Fur seals are any of nine species of pinnipeds belonging to the subfamily Arctocephalinae in the family Otariidae. They are much more closely related to sea lions than true seals, and share with them external ears (pinnae), relatively long and muscular foreflippers, and the ability to walk on all fours. They are marked by their dense underfur, which made them a long-time object of commercial hunting. Eight species belong to the genus Arctocephalus and are found primarily in the Southern Hemisphere, while a ninth species also sometimes called fur seal, the northern fur seal (Callorhinus ursinus), belongs to a different genus and inhabits the North Pacific. The fur seals in Arctocephalus are more closely related to sea lions than they are to the northern fur seal, but all three groups are more closely related to one another than they are to true seals.
Taxonomy
Fur seals and sea lions make up the family Otariidae. Along with the Phocidae and Odobenidae, otariids are pinnipeds descending from a common ancestor most closely related to modern bears (as hinted by the subfamily name Arctocephalinae, meaning "bear-headed"). The name pinniped refers to mammals with front and rear flippers. Otariids arose about 15–17 million years ago in the Miocene, and were originally land mammals that rapidly diversified and adapted to a marine environment, giving rise to the semiaquatic marine mammals that thrive today. Fur seals and sea lions are closely related and commonly known together as the "eared seals".
Until recently, fur seals were all grouped under a single subfamily of Pinnipedia, called the Arctocephalinae, to contrast them with Otariinae – the sea lions – based on the most prominent common feature, namely the coat of dense underfur intermixed with guard hairs. Recent genetic evidence, however, suggests Callorhinus is more closely related to some sea lion species, and the fur seal/sea lion subfamily distinction has been eliminated from many taxonomies. Nonetheless, all fur seals have certain features in common: the fur, generally smaller sizes, farther and longer foraging trips, smaller and more abundant prey items, and greater sexual dimorphism. For these reasons, the distinction remains useful. Fur seals comprise two genera: Callorhinus, and Arctocephalus. Callorhinus is represented by just one species in the Northern Hemisphere, the northern fur seal (Callorhinus ursinus), and Arctocephalus is represented by eight species in the Southern Hemisphere. The southern fur seals comprising the genus Arctocephalus include Antarctic fur seals, Galapagos fur seals, Juan Fernandez fur seals, New Zealand fur seals, brown fur seals, South American fur seals, and subantarctic fur seals.
Physical appearance
Along with the previously mentioned thick underfur, fur seals are distinguished from sea lions by their smaller body structure, greater sexual dimorphism, smaller prey, and longer foraging trips during the feeding cycle. The physical appearance of fur seals varies with individual species, but the main characteristics remain constant.
Fur seals are characterized by their external pinnae, dense underfur, vibrissae, and long, muscular limbs. They share with other otariids the ability to rotate their rear limbs forward, supporting their bodies and allowing them to ambulate on land. In water, their front limbs, typically measuring about a fourth of their body length, act as oars and can propel them forward for optimal mobility. The surfaces of these long, paddle-like fore limbs are leathery with small claws. Otariids have a dog-like head, sharp, well-developed canines, sharp eyesight, and keen hearing.
They are extremely sexually dimorphic mammals, with the males often two to five times the size of the females, with proportionally larger heads, necks, and chests. Size ranges from about 1.5 m, 64 kg in the male Galapagos fur seal (also the smallest pinniped) to 2.5 m, 180 kg in the adult male New Zealand fur seal. Most fur seal pups are born with a black-brown coat that molts at 2–3 months, revealing a brown coat that typically gets darker with age. Some males and females within the same species have significant differences in appearance, further contributing to the sexual dimorphism. Females and juveniles often have a lighter colored coat overall or only on the chest, as seen in South American fur seals. In a northern fur seal population, the females are typically silvery-gray on the dorsal side and reddish-brown on their ventral side with a light gray patch on their chest. This makes them easily distinguished from the males with their brownish-gray to reddish-brown or black coats.
Habitat
Of the fur seal family, eight species are considered southern fur seals, and only one is found in the Northern Hemisphere. The southern group includes Antarctic, Galapagos, Guadalupe, Juan Fernandez, New Zealand, brown, South American, and subantarctic fur seals. They typically spend about 70% of their lives in subpolar, temperate, and equatorial waters. Colonies of fur seals can be seen throughout the Pacific and Southern Oceans from south Australia, Africa, and New Zealand, to the coast of Peru and north to California. They are typically nonmigrating mammals, with the exception of the northern fur seal, which has been known to travel distances up to 10,000 km. Fur seals are often found near isolated islands or peninsulas, and can be seen hauling out onto the mainland during winter. Although they are not migratory, they have been observed wandering hundreds of miles from their breeding grounds in times of scarce resources. For example, the subantarctic fur seal typically resides near temperate islands in the South Atlantic and Indian Oceans north of the Antarctic Polar Front, but juvenile males have been seen wandering as far north as Brazil and South Africa.
Behavior and ecology
Typically, fur seals gather during the summer in large rookeries at specific beaches or rocky outcrops to give birth and breed. All species are polygynous, meaning dominant males reproduce with more than one female. For most species, total gestation lasts about 11.5 months, including a several-month period of delayed implantation of the embryo. Northern fur seal males aggressively select and defend the specific females in their harems. Females typically reach sexual maturity around 3–4 years. The males reach sexual maturity around the same time, but do not become territorial or mate until 6–10 years.
The breeding season typically begins in November and lasts 2–3 months. The northern fur seals begin their breeding season as early as June due to their region, climate, and resources. In all cases, the males arrive a few weeks early to fight for their territory and groups of females with which to mate. They congregate at rocky, isolated breeding grounds and defend their territory through fighting and vocalization. Males typically do not leave their territory for the entirety of the breeding season, fasting and competing until all energy sources are depleted.
The Juan Fernandez fur seals deviate from this typical behavior, using aquatic breeding territories not seen in other fur seals. They use rocky sites for breeding, but males fight for territory on land, on the shoreline, and in the water. Upon arriving at the breeding grounds, females give birth to pups conceived the previous season. About a week later, the females mate again and shortly after begin their feeding cycle, which typically consists of foraging and feeding at sea for about 5 days, then returning to the breeding grounds to nurse the pups for about 2 days. Mothers and pups locate each other using call recognition during the nursing period. The Juan Fernandez fur seal has a particularly long feeding cycle, with about 12 days of foraging and feeding and 5 days of nursing. Most fur seals continue this cycle for about 9 months until they wean their pup. The exception to this is the Antarctic fur seal, which has a feeding cycle that lasts only 4 months. During foraging trips, most female fur seals travel around 200 km from the breeding site, and can dive around 200 m depending on food availability.
The remainder of the year, fur seals lead a largely pelagic existence in the open sea, pursuing their prey wherever it is abundant. They feed on moderately sized fish, squid, and krill. Several species of the southern fur seal also have sea birds, especially penguins, as part of their diets. Fur seals, in turn, are preyed upon by sharks, orcas, and occasionally by larger sea lions. These opportunistic mammals tend to feed and dive in shallow waters at night, when their prey are swimming near the surface. Fur seals occasionally gang up and evict sharks. South American fur seals exhibit a different diet; adults feed almost exclusively on anchovies, while juveniles feed on demersal fish, most likely due to availability.
When fur seals were hunted in the late 18th and early 19th centuries, they hauled out on remote islands where no predators were present. The hunters reported being able to club the unwary animals to death one after another, making the hunt profitable, though the price per seal skin was low.
Population and survival
The average lifespan of fur seals varies with different species from 13 to 25 years, with females typically living longer. Most populations continue to expand as they recover from previous commercial hunting and environmental threats.
Many species were heavily exploited by commercial sealers, especially during the 19th century, when their fur was highly valued. Beginning in the 1790s, the ports of Stonington and New Haven, Connecticut, were leaders of the American fur seal trade, which primarily entailed clubbing fur seals to death on uninhabited South Pacific islands, skinning them, and selling the hides in China. Many populations, notably the Guadalupe fur seal, northern fur seal, and Cape fur seal, suffered dramatic declines and are still recovering. Currently, most species are protected, and hunting is mostly limited to subsistence harvest. Globally, most populations can be considered healthy, mostly because they often prefer remote habitats that are relatively inaccessible to humans. Nonetheless, environmental degradation, competition with fisheries, and climate change potentially pose threats to some populations.
See also
Bering Sea Arbitration
References
Further reading
Gentry, R. L (1998) Behavior and Ecology of the Northern Fur Seal. Princeton: Princeton University Press.
Fur seal
Fur trade
Pinnipeds
Paraphyletic groups | Fur seal | [
"Biology"
] | 2,149 | [
"Phylogenetics",
"Paraphyletic groups"
] |
11,807 | https://en.wikipedia.org/wiki/Ferromagnetism | Ferromagnetism is a property of certain materials (such as iron) that results in a significant, observable magnetic permeability, and in many cases, a significant magnetic coercivity, allowing the material to form a permanent magnet. Ferromagnetic materials are noticeably attracted to a magnet, which is a consequence of their substantial magnetic permeability.
Magnetic permeability describes the induced magnetization of a material due to the presence of an external magnetic field. For example, this temporary magnetization inside a steel plate accounts for the plate's attraction to a magnet. Whether or not that steel plate then acquires permanent magnetization depends on both the strength of the applied field and on the coercivity of that particular piece of steel (which varies with the steel's chemical composition and any heat treatment it may have undergone).
In physics, multiple types of material magnetism have been distinguished. Ferromagnetism (along with the similar effect ferrimagnetism) is the strongest type and is responsible for the common phenomenon of everyday magnetism. An example of a permanent magnet formed from a ferromagnetic material is a refrigerator magnet.
Substances respond weakly to three other types of magnetism—paramagnetism, diamagnetism, and antiferromagnetism—but the forces are usually so weak that they can be detected only by lab instruments.
Permanent magnets (materials that can be magnetized by an external magnetic field and remain magnetized after the external field is removed) are either ferromagnetic or ferrimagnetic, as are the materials that are attracted to them. Relatively few materials are ferromagnetic. They are typically pure forms, alloys, or compounds of iron, cobalt, nickel, and certain rare-earth metals.
Ferromagnetism is vital in industrial applications and modern technologies, forming the basis for electrical and electromechanical devices such as electromagnets, electric motors, generators, transformers, magnetic storage (including tape recorders and hard disks), and nondestructive testing of ferrous materials.
Ferromagnetic materials can be divided into magnetically soft materials (like annealed iron), which do not tend to stay magnetized, and magnetically hard materials, which do. Permanent magnets are made from hard ferromagnetic materials (such as alnico) and ferrimagnetic materials (such as ferrite) that are subjected to special processing in a strong magnetic field during manufacturing to align their internal microcrystalline structure, making them difficult to demagnetize. To demagnetize a saturated magnet, a magnetic field must be applied. The threshold at which demagnetization occurs depends on the coercivity of the material. Magnetically hard materials have high coercivity, whereas magnetically soft materials have low coercivity.
The overall strength of a magnet is measured by its magnetic moment or, alternatively, its total magnetic flux. The local strength of magnetism in a material is measured by its magnetization.
Terms
Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field; that is, any material that could become a magnet. This definition is still in common use.
In a landmark paper in 1948, Louis Néel showed that two levels of magnetic alignment result in this behavior. One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. The other is ferrimagnetism, where some magnetic moments point in the opposite direction but have a smaller contribution, so spontaneous magnetization is present.
In the special case where the opposing moments balance completely, the alignment is known as antiferromagnetism; antiferromagnets do not have a spontaneous magnetization.
Materials
Ferromagnetism is an unusual property that occurs in only a few substances. The common ones are the transition metals iron, nickel, and cobalt, as well as their alloys and alloys of rare-earth metals. It is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure. Ferromagnetism results from these materials having many unpaired electrons in their d-block (in the case of iron and its relatives) or f-block (in the case of the rare-earth metals), a result of Hund's rule of maximum multiplicity. There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys, named after Fritz Heusler. Conversely, there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.
A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals.
Each ferromagnetic or ferrimagnetic compound has a characteristic Curie temperature (TC) above which it ceases to exhibit spontaneous magnetization.
Unusual materials
Most ferromagnetic materials are metals, since the conducting electrons are often responsible for mediating the ferromagnetic interactions. It is therefore a challenge to develop ferromagnetic insulators, especially multiferroic materials, which are both ferromagnetic and ferroelectric.
A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature, but undergoes a structural transition into a tetragonal state with ferromagnetic order when cooled below its Curie temperature. In its ferromagnetic state, PuP's easy axis is in the ⟨100⟩ direction.
In NpFe2 the easy axis is ⟨111⟩. Above its Curie temperature, NpFe2 is also paramagnetic and cubic. Cooling below the Curie temperature produces a rhombohedral distortion wherein the rhombohedral angle changes from 60° (cubic phase) to 60.53°. An alternate description of this distortion is to consider the length c along the unique trigonal axis (after the distortion has begun) and a as the distance in the plane perpendicular to c. In the cubic phase this reduces to c/a = 1.00. Below the Curie temperature, the lattice acquires a distortion in which c/a departs from 1.00; this is the largest strain in any actinide compound. NpNi2 undergoes a similar lattice distortion below its Curie temperature, with a strain of (43 ± 5) × 10⁻⁴. NpCo2 is a ferrimagnet below 15 K.
In 2009, a team of MIT physicists demonstrated that a lithium gas cooled to less than one kelvin can exhibit ferromagnetism. The team cooled fermionic lithium-6 to less than 150 nK (150 billionths of one kelvin) using infrared laser cooling. This was the first time that ferromagnetism had been demonstrated in a gas.
In rare circumstances, ferromagnetism can be observed in compounds consisting of only s-block and p-block elements, such as rubidium sesquioxide.
In 2018, a team of University of Minnesota physicists demonstrated that body-centered tetragonal ruthenium exhibits ferromagnetism at room temperature.
Electrically induced ferromagnetism
Recent research has shown evidence that ferromagnetism can be induced in some materials by an electric current or voltage. Antiferromagnetic LaMnO3 and SrCoO have been switched to be ferromagnetic by a current. In July 2020, scientists reported inducing ferromagnetism in the abundant diamagnetic material iron pyrite ("fool's gold") by an applied voltage. In these experiments, the ferromagnetism was limited to a thin surface layer.
Explanation
The Bohr–Van Leeuwen theorem, discovered in the 1910s, showed that classical physics theories are unable to account for any form of material magnetism, including ferromagnetism; the explanation rather depends on the quantum mechanical description of atoms. Each of an atom's electrons has a magnetic moment according to its spin state, as described by quantum mechanics. The Pauli exclusion principle, also a consequence of quantum mechanics, restricts the occupancy of electrons' spin states in atomic orbitals, generally causing the magnetic moments from an atom's electrons to largely or completely cancel. An atom will have a net magnetic moment when that cancellation is incomplete.
Origin of atomic magnetism
One of the fundamental properties of an electron (besides that it carries charge) is that it has a magnetic dipole moment, i.e., it behaves like a tiny magnet, producing a magnetic field. This dipole moment comes from a more fundamental property of the electron: its quantum mechanical spin. Due to its quantum nature, the spin of the electron can be in one of only two states, with the magnetic field either pointing "up" or "down" (for any choice of up and down). Electron spin in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these magnetic dipoles in a piece of matter are aligned (point in the same direction), their individually tiny magnetic fields add together to create a much larger macroscopic field.
However, materials made of atoms with filled electron shells have a total dipole moment of zero: because the electrons all exist in pairs with opposite spin, every electron's magnetic moment is cancelled by the opposite moment of the second electron in the pair. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism occurs only in materials with partially filled shells. Because of Hund's rules, the first few electrons in an otherwise unoccupied shell tend to have the same spin, thereby increasing the total dipole moment.
These unpaired dipoles (often called simply "spins", even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field leading to a macroscopic effect called paramagnetism. In ferromagnetism, however, the magnetic interaction between neighboring atoms' magnetic dipoles is strong enough that they align with each other regardless of any applied field, resulting in the spontaneous magnetization of so-called domains. This results in the large observed magnetic permeability of ferromagnetics, and the ability of magnetically hard materials to form permanent magnets.
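The competition described above between thermal agitation and the aligning interaction can be illustrated with the simplest self-consistent (Weiss mean-field) picture, in which each spin feels an effective field proportional to the average magnetization of its neighbours. The sketch below solves the resulting condition m = tanh((TC/T)·m) by fixed-point iteration; it is a toy model of the physics in this section, not a calculation for any specific material.

    import math

    # Weiss mean-field toy model: the reduced magnetization m solves
    #   m = tanh((Tc / T) * m)
    # A nonzero solution exists only below the Curie temperature Tc.
    def magnetization(T_over_Tc, m0=0.9, iters=500):
        m = m0
        for _ in range(iters):
            m = math.tanh(m / T_over_Tc)   # fixed-point iteration
        return m

    for t in (0.5, 0.9, 0.99, 1.1):
        print(f"T/Tc = {t}: m = {magnetization(t):.3f}")
    # m stays near 1 well below Tc, falls steeply as T approaches Tc,
    # and collapses to 0 above it (the paramagnetic phase).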
Exchange interaction
When two nearby atoms have unpaired electrons, whether the electron spins are parallel or antiparallel affects whether the electrons can share the same orbit as a result of the quantum mechanical effect called the exchange interaction. This in turn affects the electron location and the Coulomb (electrostatic) interaction and thus the energy difference between these states.
The exchange interaction is related to the Pauli exclusion principle, which says that two electrons with the same spin cannot also be in the same spatial state (orbital). This is a consequence of the spin–statistics theorem and that electrons are fermions. Therefore, under certain conditions, when the orbitals of the unpaired outer valence electrons from adjacent atoms overlap, the distributions of their electric charge in space are farther apart when the electrons have parallel spins than when they have opposite spins. This reduces the electrostatic energy of the electrons when their spins are parallel compared to their energy when the spins are antiparallel, so the parallel-spin state is more stable. This difference in energy is called the exchange energy. In simple terms, the outer electrons of adjacent atoms, which repel each other, can move further apart by aligning their spins in parallel, so the spins of these electrons tend to line up.
This energy difference can be orders of magnitude larger than the energy differences associated with the magnetic dipole–dipole interaction due to dipole orientation, which tends to align the dipoles antiparallel. In certain doped semiconductor oxides, RKKY interactions have been shown to bring about periodic longer-range magnetic interactions, a phenomenon of significance in the study of spintronic materials.
The materials in which the exchange interaction is much stronger than the competing dipole–dipole interaction are frequently called magnetic materials. For instance, in iron (Fe) the exchange force is about 1,000 times stronger than the dipole interaction. Therefore, below the Curie temperature, virtually all of the dipoles in a ferromagnetic material will be aligned. In addition to ferromagnetism, the exchange interaction is also responsible for the other types of spontaneous ordering of atomic magnetic moments occurring in magnetic solids: antiferromagnetism and ferrimagnetism. There are different exchange interaction mechanisms which create the magnetism in different ferromagnetic, ferrimagnetic, and antiferromagnetic substances—these mechanisms include direct exchange, RKKY exchange, double exchange, and superexchange.
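A minimal Monte Carlo simulation of a 2D Ising model shows how a short-range exchange coupling J of the kind described here produces cooperative ordering. Everything in the sketch below (lattice size, temperatures, sweep count) is illustrative; the only physics it encodes is a nearest-neighbour exchange energy between adjacent spins.

    import math, random

    # 2D Ising model with nearest-neighbour exchange, Metropolis updates.
    # Starting from an aligned state, thermal fluctuations destroy the net
    # magnetization above the critical temperature but not below it.
    L, J = 20, 1.0

    def net_magnetization(T, sweeps=400, seed=0):
        rng = random.Random(seed)
        s = [[1] * L for _ in range(L)]           # fully aligned start
        for _ in range(sweeps * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * J * s[i][j] * nb             # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        return abs(sum(map(sum, s))) / (L * L)

    for T in (1.5, 2.27, 3.5):                    # exact Tc is ~2.27 J/kB in 2D
        print(f"T = {T}: |m| = {net_magnetization(T):.2f}")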
Magnetic anisotropy
Although the exchange interaction keeps spins aligned, it does not align them in a particular direction. Without magnetic anisotropy, the spins in a magnet randomly change direction in response to thermal fluctuations, and the magnet is superparamagnetic. There are several kinds of magnetic anisotropy, the most common of which is magnetocrystalline anisotropy. This is a dependence of the energy on the direction of magnetization relative to the crystallographic lattice. Another common source of anisotropy, inverse magnetostriction, is induced by internal strains. Single-domain magnets also can have a shape anisotropy due to the magnetostatic effects of the particle shape. As the temperature of a magnet increases, the anisotropy tends to decrease, and there is often a blocking temperature at which a transition to superparamagnetism occurs.
Magnetic domains
The spontaneous alignment of magnetic dipoles in ferromagnetic materials would seem to suggest that every piece of ferromagnetic material should have a strong magnetic field, since all the spins are aligned; yet iron and other ferromagnets are often found in an "unmagnetized" state. This is because a bulk piece of ferromagnetic material is divided into tiny regions called magnetic domains (also known as Weiss domains). Within each domain, the spins are aligned, but if the bulk material is in its lowest energy configuration (i.e. "unmagnetized"), the spins of separate domains point in different directions and their magnetic fields cancel out, so the bulk material has no net large-scale magnetic field.
Ferromagnetic materials spontaneously divide into magnetic domains because the exchange interaction is a short-range force, so over long distances of many atoms, the tendency of the magnetic dipoles to reduce their energy by orienting in opposite directions wins out. If all the dipoles in a piece of ferromagnetic material are aligned parallel, it creates a large magnetic field extending into the space around it. This contains a lot of magnetostatic energy. The material can reduce this energy by splitting into many domains pointing in different directions, so the magnetic field is confined to small local fields in the material, reducing the volume of the field. The domains are separated by thin domain walls a number of molecules thick, in which the direction of magnetization of the dipoles rotates smoothly from one domain's direction to the other.
Magnetized materials
Thus, a piece of iron in its lowest energy state ("unmagnetized") generally has little or no net magnetic field. However, the magnetic domains in a material are not fixed in place; they are simply regions where the spins of the electrons have aligned spontaneously due to their magnetic fields, and thus can be altered by an external magnetic field. If a strong enough external magnetic field is applied to the material, the domain walls will move: the spins of the electrons in atoms near the wall in one domain turn under the influence of the external field to face in the same direction as the electrons in the other domain, reorienting the domains so that more of the dipoles are aligned with the external field. The domains remain aligned when the external field is removed, and sum to create a magnetic field of their own extending into the space around the material, thus creating a "permanent" magnet. The domains do not go back to their original minimum energy configuration when the field is removed because the domain walls tend to become 'pinned' or 'snagged' on defects in the crystal lattice, preserving their parallel orientation. This is shown by the Barkhausen effect: as the magnetizing field is changed, the material's magnetization changes in thousands of tiny discontinuous jumps as domain walls suddenly "snap" past defects.
This magnetization as a function of an external field is described by a hysteresis curve. Although this state of aligned domains found in a piece of magnetized ferromagnetic material is not a minimal-energy configuration, it is metastable, and can persist for long periods, as shown by samples of magnetite from the sea floor which have maintained their magnetization for millions of years.
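The open shape of a hysteresis curve can be illustrated with a toy model. The sketch below, in the spirit of a Preisach model, represents the material as an ensemble of idealized square-loop "hysterons" that switch up at +h_c and down at -h_c; the normal distribution of switching fields is an arbitrary assumption for illustration, not a fit to any real material.

```python
import numpy as np

# Toy hysteresis loop from an ensemble of bistable "hysterons" --
# idealized square loops that switch up at +h_c and down at -h_c.
# Illustrative Preisach-style sketch; the switching-field
# distribution is an arbitrary assumption, not a material model.
rng = np.random.default_rng(1)
h_c = np.abs(rng.normal(0.5, 0.2, size=10_000))  # switching fields
state = -np.ones_like(h_c)                       # start fully magnetized "down"

def magnetization(H):
    """Update hysteron states for applied field H; return mean magnetization."""
    state[H >= h_c] = 1.0    # field strong enough to flip "up"
    state[H <= -h_c] = -1.0  # field strong enough to flip "down"
    return state.mean()

# Sweep the field up, down, and up again: M(H) traces an open loop
# (remanence at H = 0, coercivity where M crosses zero).
sweep = np.concatenate([np.linspace(-2, 2, 100),
                        np.linspace(2, -2, 100),
                        np.linspace(-2, 2, 100)])
loop = [magnetization(H) for H in sweep]
down = sweep[100:200]
idx0 = 100 + int(np.argmin(np.abs(down)))  # point nearest H = 0 on the down sweep
print(f"remanent magnetization at H ~ 0: {loop[idx0]:+.2f}")
```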
Heating and then cooling (annealing) a magnetized material, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil tends to release the domain walls from their pinned state, and the domain boundaries tend to move back to a lower energy configuration with less external magnetic field, thus demagnetizing the material.
Commercial magnets are made of "hard" ferromagnetic or ferrimagnetic materials with very large magnetic anisotropy such as alnico and ferrites, which have a very strong tendency for the magnetization to be pointed along one axis of the crystal, the "easy axis". During manufacture the materials are subjected to various metallurgical processes in a powerful magnetic field, which aligns the crystal grains so their "easy" axes of magnetization all point in the same direction. Thus, the magnetization, and the resulting magnetic field, is "built in" to the crystal structure of the material, making it very difficult to demagnetize.
Curie temperature
As the temperature of a material increases, thermal motion, or entropy, competes with the ferromagnetic tendency for dipoles to align. When the temperature rises beyond a certain point, called the Curie temperature, there is a second-order phase transition and the system can no longer maintain a spontaneous magnetization, so its ability to be magnetized or attracted to a magnet disappears, although it still responds paramagnetically to an external field. Below that temperature, there is a spontaneous symmetry breaking and magnetic moments become aligned with their neighbors. The Curie temperature itself is a critical point, where the magnetic susceptibility is theoretically infinite and, although there is no net magnetization, domain-like spin correlations fluctuate at all length scales.
The study of ferromagnetic phase transitions, especially via the simplified Ising spin model, had an important impact on the development of statistical physics. There, it was first clearly shown that mean field theory approaches failed to predict the correct behavior at the critical point (which was found to fall under a universality class that includes many other systems, such as liquid-gas transitions), and had to be replaced by renormalization group theory.
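The Ising model mentioned above is simple enough to simulate directly. The following minimal sketch uses single-spin Metropolis updates on a square lattice; the lattice size, temperatures, and step count are illustrative choices. Starting from a fully magnetized state, the order persists below the square-lattice critical temperature (T_c of roughly 2.269 in units of J/k_B) and melts above it.

```python
import numpy as np

def ising_metropolis(L=32, T=1.5, steps=200_000, J=1.0, seed=0):
    """Minimal 2D Ising model with Metropolis updates (illustrative sketch).

    Spins s = +/-1 on an L x L lattice with periodic boundaries, started
    fully magnetized. Below T_c ~ 2.269 J/k_B the magnetization persists,
    mimicking ferromagnetic order below the Curie temperature.
    """
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbors (periodic boundaries).
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * J * s[i, j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return abs(s.mean())  # |magnetization| per spin

print(ising_metropolis(T=1.5))  # below T_c: stays close to 1
print(ising_metropolis(T=3.5))  # above T_c: decays toward 0
```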
See also
References
External links
Electromagnetism – ch. 11, from an online textbook
Detailed nonmathematical description of ferromagnetic materials with illustrations
Magnetism: Models and Mechanisms in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013
Quantum phases
Magnetic hysteresis
Physical phenomena
Ferromagnetism | Ferromagnetism | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,257 | [
"Quantum phases",
"Physical phenomena",
"Matter",
"Phases of matter",
"Quantum mechanics",
"Magnetic ordering",
"Ferromagnetism",
"Condensed matter physics",
"Hysteresis",
"Magnetic hysteresis"
] |
11,844 | https://en.wikipedia.org/wiki/GeekSpeak | GeekSpeak is a podcast with two to four hosts who focus on technology and technology news of the week. Though originally a radio tech call-in program, which first aired in 1998 on KUSP, GeekSpeak has been a weekly podcast since 2004.
The program's slogan is "Bridging the gap between geeks and the rest of humanity".
History
GeekSpeak was created and originally broadcast on KUSP by Chris Neklason of Cruzio, Steve Schaefer of Guenther Computer, and board operator Ray Price from KUSP. Shortly thereafter Mark Hanford of Cruzio joined the program.
Currently, the host/producer is Lyle Troxell, who took over in September 2000.
In April 2016, citing financial difficulties, KUSP stopped broadcasting GeekSpeak with its final broadcast on May 5, 2016.
GeekSpeak episodes have been distributed as an archive on the internet since 2001. The podcast went live before March 5, 2005, with its first episode dated December 3, 2004.
See also
Computer jargon
Technobabble
External links
GeekSpeak Website
iTunes Podcast
Reference List
American talk radio programs
Presentation | GeekSpeak | [
"Technology"
] | 231 | [
"Multimedia",
"Presentation"
] |
11,866 | https://en.wikipedia.org/wiki/Global%20Positioning%20System | The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radio navigation system owned by the United States Space Force and operated by Mission Delta 31. It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It does not require the user to transmit any data, and operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information. It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls, and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.
Overview
The GPS project was started by the U.S. Department of Defense in 1973. The first prototype spacecraft was launched in 1978 and the full constellation of 24 satellites became operational in 1993. After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan announced that the GPS system would be made available for civilian use as of September 16, 1983; however, civilian accuracy was initially degraded by Selective Availability (SA), a deliberate error introduced into the GPS data that military receivers could correct for.
As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home. In the 1990s, Differential GPS systems from the US Coast Guard, Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (that military receivers also corrected for). The U.S. military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, United States President Bill Clinton signed a bill ordering that Selective Availability be disabled on May 1, 2000; and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block III satellites and Next Generation Operational Control System (OCX) which was authorized by the U.S. Congress in 2000. When Selective Availability was discontinued, GPS was accurate to about . GPS receivers that use the L5 band have much higher accuracy of , while those for high-end applications such as engineering and land surveying are accurate to within and can even provide sub-millimeter accuracy with long-term measurements. Consumer devices such as smartphones can be accurate to or better when used with assistive services like Wi-Fi positioning.
, 18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.
History
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military, and became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator System, developed in the early 1940s. In 1955, Friedwardt Winterberg proposed a test of general relativity—detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predicted that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference; because without doing so, GPS calculated positions would accumulate errors of up to .
Predecessors
When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) monitored its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC I computer to perform the heavy calculations required.
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.
TRANSIT was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
Although there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two-thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN System. A follow-on study, Project 57, was performed in 1963 and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for U.S. Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites, first launched in 1967, second launched in 1969, with the third in 1974 carrying the first atomic clock into orbit and the fourth launched in 1977.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.
Development
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L. Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar. Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging" but was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym). With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS. Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of Air Force Cambridge Research Laboratory, renamed to Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location. Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying into prohibited airspace because of navigational errors, in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion.
Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, with U.S. President Bill Clinton signing a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis. Selective Availability was removed from the GPS architecture beginning with GPS-III.
Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market. As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than , although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems. The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses".
Timeline and modernization
In 1972, the U.S. Air Force Central Inertial Guidance Test Facility (Holloman Air Force Base) conducted developmental flight tests of four prototype GPS receivers in a Y configuration over White Sands Missile Range, using ground-based pseudo-satellites.
In 1978, the first experimental Block-I GPS satellite was launched.
In 1983, after Soviet Union interceptor aircraft shot down the civilian airliner KAL 007, which had strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed, although it had been publicly known as early as 1979 that the C/A (Coarse/Acquisition) code would be available to civilian users.
By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
Beginning in 1988, command and control of these satellites was moved from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Schriever Space Force Base in Colorado Springs, Colorado.
On February 14, 1989, the first modern Block-II satellite was launched.
The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.
In 1991, DARPA's project to create a miniature GPS receiver successfully ended, replacing the previous military receivers with an all-digital handheld GPS receiver.
In 1991, TomTom, a Dutch sat-nav manufacturer was founded.
In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.
By December 1993, GPS achieved initial operational capability (IOC), with a full constellation (24 satellites) available and providing the Standard Positioning Service (SPS).
Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).
In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety, and in 2000 the United States Congress authorized the effort, referring to it as GPS III.
On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order, allowing civilian users to receive a non-degraded signal globally.
In 2004, the United States government signed an agreement with the European Community establishing cooperation related to GPS and Europe's Galileo system.
In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.
In November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.
In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.
On September 14, 2007, the aging mainframe-based Ground segment Control System was transferred to the new Architecture Evolution Plan.
On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.
On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying: "There's only a small risk we will not continue to exceed our performance standard."
On January 11, 2010, an update of ground control systems caused a software incompatibility with 8,000 to 10,000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, California.
On February 25, 2010, the U.S. Air Force awarded the contract to Raytheon Company to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.
On July 24, 2020, operation of the GPS constellation was transferred to the newly established U.S. Space Force.
On October 13, 2023, the Space Force activated PNT Delta (Provisional) to manage US navigation warfare assets. 2SOPS and GPS operations were realigned under this new Delta.
Awards
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the U.S. Air Force, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago".
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006. Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B. In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity. On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation. On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering with the chair of the awarding board stating: "Engineering is the foundation of civilisation; ...They've re-written, in a major way, the infrastructure of our world."
Principles
The GPS satellites carry very stable atomic clocks that are synchronized with one another and with the reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly. Since the speed of radio waves (speed of light) is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision.
Each GPS satellite carries an accurate record of its own position and time, and broadcasts that data continuously. Based on data received from multiple GPS satellites, an end user's GPS receiver can calculate its own four-dimensional position in spacetime. However, at a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).
More detailed description
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale
A message that includes the time of transmission (TOT) of the code epoch (in GPS time scale) and the satellite position at that time
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values. Multiplied by the speed of light, each TOF is approximately the receiver-satellite range plus an offset equal to the receiver's clock error times the speed of light; these quantities are called pseudoranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
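In practice this simultaneous solution is an iterative least-squares computation. The following is a minimal sketch of a Gauss-Newton solver with synthetic, made-up satellite positions and pseudoranges; the function name is illustrative, and real receivers add measurement weighting, atmospheric corrections, and satellite clock corrections.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iters=10):
    """Solve the GPS navigation equations by Gauss-Newton least squares.

    sats: (N, 3) ECEF satellite positions in meters (N >= 4).
    pseudoranges: (N,) measured pseudoranges in meters.
    Returns (x, y, z) in meters and the receiver clock bias in seconds.
    Illustrative sketch: no weighting or atmospheric corrections.
    """
    x = np.zeros(4)  # [x, y, z, b]; b is the clock bias expressed in meters
    for _ in range(iters):
        d = np.linalg.norm(sats - x[:3], axis=1)     # geometric ranges
        residuals = d + x[3] - pseudoranges
        # Jacobian: unit line-of-sight vectors for x, y, z; 1 for the bias.
        J = np.hstack([(x[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
        x -= np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3] / C

# Synthetic check (all positions and ranges are made up for illustration):
rng = np.random.default_rng(42)
truth = np.array([1_115_000.0, -4_843_000.0, 3_983_000.0])  # a point near Earth's surface
bias_m = 2.5e-4 * C                                          # 250 microsecond clock error
sats = rng.normal(0, 1, (6, 3))
sats = 26_560_000.0 * sats / np.linalg.norm(sats, axis=1, keepdims=True)
rho = np.linalg.norm(sats - truth, axis=1) + bias_m
pos, bias = solve_position(sats, rho)
print(np.round(pos), f"clock bias = {bias * 1e6:.1f} us")
```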
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system.
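The conversion from Earth-centered coordinates to geodetic ones is itself a short computation. The sketch below uses the standard WGS84 ellipsoid constants and a common fixed-point iteration; note that it returns height above the ellipsoid, not the geoid-referenced height a consumer device would typically display.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0              # semi-major axis, meters
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def ecef_to_geodetic(x, y, z, iters=10):
    """Convert ECEF coordinates (meters) to geodetic latitude and longitude
    (degrees) and height above the WGS84 ellipsoid (meters).

    Simple fixed-point iteration; a sketch of the standard conversion,
    not robust near the poles.
    """
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))  # initial guess
    for _ in range(iters):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

print(ecef_to_geodetic(1_115_000.0, -4_843_000.0, 3_983_000.0))
```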
User-satellite geometry
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.
Receiver in continuous operation
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements is processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
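A minimal illustration of such a weighting scheme is an alpha-beta tracker, a simplified cousin of the Kalman filters that actual receivers typically use. The gains and the synthetic measurements below are illustrative assumptions.

```python
import random

def alpha_beta_track(fixes, dt=1.0, alpha=0.5, beta=0.1):
    """Blend noisy position fixes with a constant-velocity prediction.

    A minimal 1-D alpha-beta tracker illustrating the weighting scheme
    described above; real receivers typically use Kalman filters with
    many more states. The gains alpha and beta are illustrative.
    """
    x, v = fixes[0], 0.0           # initial position and velocity estimate
    smoothed = [x]
    for z in fixes[1:]:
        x_pred = x + v * dt        # predict from the previous state
        r = z - x_pred             # innovation: measurement minus prediction
        x = x_pred + alpha * r     # blend prediction with the new measurement
        v = v + (beta / dt) * r    # update the velocity estimate
        smoothed.append(x)
    return smoothed

# Noisy fixes of a receiver moving at ~2 m/s (synthetic illustration)
random.seed(3)
fixes = [2.0 * t + random.gauss(0, 3.0) for t in range(20)]
print([round(p, 1) for p in alpha_beta_track(fixes)])
```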
Non-navigation applications
GPS requires four or more satellites to be visible for accurate navigation. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver-based clock. Applications such as time transfer, traffic signal timing, and synchronization of cell phone base stations make use of this cheap and highly accurate timing. Some GPS applications use this time for display, or, other than for the basic position calculations, do not use it at all.
Although four satellites are required for normal operation, fewer are needed in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.
Structure
The current GPS consists of three major segments. These are the space segment, a control segment, and a user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.
Space segment
The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half of a sidereal day, i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface. The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular separations between satellites in each orbit are 30°, 105°, 120°, and 105°, which sum to 360°.
Orbiting at an altitude of approximately , with an orbital radius of approximately , each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
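The half-sidereal-day period fixes the orbital radius through Kepler's third law, which can be checked directly from published values of Earth's gravitational parameter:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 0.5 * 86164.0905  # half a sidereal day, seconds (~11 h 58 min)

# Kepler's third law: T^2 = 4 pi^2 r^3 / mu  =>  r = (mu T^2 / 4 pi^2)^(1/3)
r = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbital radius ~ {r / 1e3:,.0f} km")                 # about 26,560 km
print(f"altitude       ~ {(r - 6_371_000) / 1e3:,.0f} km")   # about 20,200 km above mean radius
```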
, there are 31 satellites in the GPS constellation, 27 of which are in use at a given time with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 is still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy but also improves reliability and availability of the system, relative to a uniform system, when multiple satellites fail. With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.
Control segment
The control segment (CS) is composed of:
a master control station (MCS),
an alternative master control station,
four dedicated ground antennas, and
six dedicated monitor stations.
The MCS can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, Florida, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington, DC. The tracking information is sent to the MCS at Schriever Space Force Base ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.
When a satellite's orbit is being adjusted, the satellite is marked unhealthy, so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again. The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification.
OCS replaced the 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.
OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System (OCX), is fully developed and functional. The U.S. Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for enhancing GPS's mission capabilities, enabling U.S. Space Force to enhance GPS operational services to U.S. combat forces, civil partners and domestic and international users. The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50% sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions of dollars less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides information assurance improvements over the current GPS OCS program.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.
Built on a flexible architecture that can rapidly adapt to changing needs of GPS users allowing immediate access to GPS data and constellation status through secure, accurate and reliable information.
Provides the warfighter with more secure, actionable and predictive information to enhance situational awareness.
Enables new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system is unable to do.
Provides significant information assurance improvements over the current program including detecting and preventing cyber attacks, while isolating, containing and operating during such attacks.
Supports higher volume near real-time command and control capabilities and abilities.
On September 14, 2011, the U.S. Air Force announced the completion of the GPS OCX Preliminary Design Review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program missed major milestones and pushed its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, even the 2021 deadline looked shaky.
The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget. In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.
User segment
The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user.
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. , even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
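As an illustration of the protocol, the sketch below validates an NMEA 0183 checksum (an XOR of the bytes between "$" and "*") and parses one commonly documented sentence type, GGA (fix data). It is a minimal sketch of the publicly documented field layout, not a full parser like gpsd; real parsers handle many sentence types and edge cases.

```python
def nmea_checksum_ok(sentence):
    """Verify the NMEA 0183 checksum: XOR of all bytes between '$' and '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return f"{calc:02X}" == given.upper()

def parse_gga(sentence):
    """Parse a GGA (fix data) sentence into latitude/longitude in decimal degrees.

    Minimal sketch of the publicly documented field layout; no error handling.
    """
    assert nmea_checksum_ok(sentence), "bad checksum"
    f = sentence.split(",")
    # Latitude is ddmm.mmmm, longitude is dddmm.mmmm
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[3] == "S": lat = -lat
    if f[5] == "W": lon = -lon
    return {"time_utc": f[1], "lat": lat, "lon": lon,
            "fix_quality": int(f[6]), "num_sats": int(f[7])}

# A widely circulated example sentence:
s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(s))
```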
Applications
While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.
Civilian
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Amateur radio: clock synchronization required for several digital modes such as FT8, FT4 and JS8; also used with APRS for position reporting; is often critical during emergency and disaster communications support.
Atmosphere: studying the troposphere delays (recovery of the water vapor content) and ionosphere delays (recovery of the number of free electrons). Recovery of Earth surface displacements due to the atmospheric pressure loading.
Astronomy: both positional and clock synchronization data is used in astrometry and celestial mechanics and precise orbit determination. GPS is also used in both amateur astronomy with small telescopes as well as by professional observatories for finding extrasolar planets.
Automated vehicle: applying location and routes for cars and trucks to function without a human driver.
Cartography: both civilian and military cartographers use GPS extensively.
Cellular telephony: clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.
Clock synchronization: the accuracy of GPS time signals (±10 ns) is second only to the atomic clocks they are based on, and is used in applications such as GPS disciplined oscillators.
Disaster relief/emergency services: many emergency services depend upon GPS for location and timing capabilities.
GPS-equipped radiosondes and dropsondes: measure and calculate the atmospheric pressure, wind speed and direction up to from the Earth's surface.
Radio occultation for weather and atmospheric science applications.
Fleet tracking: used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
Geodesy: determination of Earth orientation parameters including the daily and sub-daily polar motion, and length-of-day variabilities, Earth's center-of-mass – geocenter motion, and low-degree gravity field parameters.
Geofencing: vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate devices that are attached to or carried by a person, vehicle, or pet. The application can provide continuous tracking and send notifications if the target leaves a designated (or "fenced-in") area.
Geotagging: applies location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like Nikon GP-1.
GPS aircraft tracking
GPS for mining: the use of RTK GPS has significantly improved several mining operations such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.
GPS data mining: It is possible to aggregate GPS data from multiple users to understand movement patterns, common trajectories and interesting locations. GPS data is today used in transportation and disaster engineering to forecast mobility in normal and evacuation situations (e.g., hurricanes, wildfires, earthquakes).
GPS tours: location determines what content to display; for instance, information about an approaching point of interest.
Mental health: tracking mental health functioning and sociability.
Navigation: navigators value digitally precise velocity and orientation measurements, as well as precise positions in real-time with a support of orbit and clock corrections.
Orbit determination of low-orbiting satellites with GPS receiver installed on board, such as GOCE, GRACE, Jason-1, Jason-2, TerraSAR-X, TanDEM-X, CHAMP, Sentinel-3, and some cubesats, e.g., CubETH.
Phasor measurements: GPS enables highly accurate timestamping of power system measurements, making it possible to compute phasors.
Recreation: for example, Geocaching, Geodashing, GPS drawing, waymarking, and other kinds of location based mobile games such as Pokémon Go.
Reference frames: realization and densification of the terrestrial reference frames in the framework of Global Geodetic Observing System. Co-location in space between Satellite laser ranging and microwave observations for deriving global geodetic parameters.
Robotics: self-navigating, autonomous robots using GPS sensors, which calculate latitude, longitude, time, speed, and heading.
Sport: used in football and rugby for the control and analysis of the training load.
Surveying: surveyors use absolute locations to make maps and determine property boundaries.
Tectonics: GPS enables direct fault motion measurement of earthquakes. Between earthquakes GPS can be used to measure crustal motion and deformation to estimate seismic strain buildup for creating seismic hazard maps.
Telematics: GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
Restrictions on civilian use
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above a certain altitude above sea level and beyond a certain speed, or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons), which means they require State Department export licenses. This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach . These limits only apply to units or components exported from the United States. A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free.
Military
As of 2009, military GPS applications include:
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commander's Digital Assistant and lower ranks use the Soldier Digital Assistant.
Frequency-Hopping Radio Clock Coordination: Military radio systems using frequency hopping modes, such as SINCGARS and HAVEQUICK, require all radios within a network to have the same time input to their internal clocks (±4 seconds in the case of SINCGARS) to be on the correct frequency at a given time. Military GPS receivers, such as the Precision Lightweight GPS Receiver (PLGR) and Defense Advanced GPS Receiver (DAGR), are used by radio operators within a radio network to input an accurate time to those radios' internal clocks. More modern military radios have internal GPS receivers that synchronize the internal clock automatically.
Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery shells. Embedded GPS receivers able to withstand accelerations of 12,000 g have been developed for use in howitzer shells.
Search and rescue.
Reconnaissance: Patrol movement can be managed more closely.
GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor called a bhangmeter, an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), that form a major portion of the United States Nuclear Detonation Detection System. General William Shelton has stated that future satellites may drop this feature to save money.
GPS-type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces to navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to jamming, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.
GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grows. GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this approach, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting its GLONASS alternative, disrupting Western military exercises, and protecting assets from drones. China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands. North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations. The Iranian Armed Forces disrupted the GPS of civilian airliner Flight PS752 when they shot down the aircraft.
In the Russo-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.
Timekeeping
Leap seconds
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain new leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset with International Atomic Time (TAI) (TAI – GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC. GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016. Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
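A minimal sketch of the conversion a receiver performs, assuming the week number has already been disambiguated and using the 18-second offset stated above (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(week, tow_seconds, leap_seconds=18):
    """Convert a GPS week and time-of-week to UTC.

    GPS time is a continuous count from the 1980-01-06 epoch; receivers
    subtract the leap-second offset broadcast in the navigation message
    (18 s since the December 31, 2016 leap second, per the text above).
    Sketch only: assumes the week number is already disambiguated.
    """
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=tow_seconds)
    return gps_time - timedelta(seconds=leap_seconds)

print(gps_to_utc(week=2340, tow_seconds=345_600))
```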
Accuracy
GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience. Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.
Relativistic corrections
The GPS implements two major corrections to its time signals for relativistic effects: one for relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.
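The net 38-microseconds-per-day figure quoted earlier follows from these two corrections. A back-of-the-envelope sketch, using round orbital values assumed here for illustration:

```python
import math

C = 299_792_458.0    # speed of light, m/s
MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m (round value)
R_SAT = 2.656e7      # GPS orbital radius, m (round value)
DAY = 86_400         # seconds per day

v = math.sqrt(MU / R_SAT)  # circular orbital speed, about 3.9 km/s

# Special relativity: moving clocks tick slower by roughly v^2 / (2 c^2)
sr = -v**2 / (2 * C**2) * DAY * 1e6  # microseconds per day

# General relativity: clocks higher in the gravity well tick faster
gr = MU * (1 / R_EARTH - 1 / R_SAT) / C**2 * DAY * 1e6

print(f"velocity effect:      {sr:+.1f} us/day")       # about -7
print(f"gravitational effect: {gr:+.1f} us/day")       # about +45
print(f"net:                  {sr + gr:+.1f} us/day")  # about +38, as stated above
```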
Format
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).
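A sketch of how a receiver can resolve the 10-bit week number against an approximate date; the helper name is ours, and the half-window of 512 weeks corresponds to the 3,584 days quoted above:

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def resolve_week(week_10bit: int, approx_date: datetime) -> int:
    """Recover the full week count from a 10-bit (mod-1024) week number,
    given a date known to within 3,584 days (half of 1,024 weeks)."""
    approx_week = (approx_date - GPS_EPOCH).days // 7
    rollovers = round((approx_week - week_10bit) / 1024)
    return week_10bit + 1024 * rollovers

# Example: a broadcast week of 139 in early 2022 means 139 + 2*1024 = 2187.
print(resolve_week(139, datetime(2022, 1, 1, tzinfo=timezone.utc)))
```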
Communication
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
Message format
{|class="wikitable" style="float:right; margin:0 0 0.5em 1em;" border="1"
|+
! Subframes !! Description
|-
| 1 || Satellite clock, GPS time relationship
|-
| 2–3 || Ephemeris (precise satellite orbit)
|-
| 4–5 || Almanac component (satellite network synopsis, error correction)
|}
Each GPS satellite continuously broadcasts a navigation message on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12.5 minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message. Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.
The first subframe of each frame encodes the week number and the time within the week, as well as the data about the health of the satellite. The second and the third subframes contain the ephemeris – the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds (12.2 to 12.5 minutes).
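The frame arithmetic above can be checked directly; this sketch just recomputes the figures quoted in the two preceding paragraphs:

```python
BIT_RATE = 50            # bits per second
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25  # subframes 4 and 5 are subcommutated 25 times

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME      # 300 bits
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME    # 1,500 bits
message_bits = frame_bits * FRAMES_PER_MESSAGE      # 37,500 bits

print(subframe_bits / BIT_RATE)   # 6.0 seconds per subframe
print(frame_bits / BIT_RATE)      # 30.0 seconds per frame
print(message_bits / BIT_RATE)    # 750.0 seconds (12.5 minutes) per message
```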
All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.
The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.
Satellite frequencies
{|class="wikitable" style="float:right; width:30em; margin:0 0 0.5em 1em;" border="1"
|+
! Band !! Frequency !! Description
|-
| L1 || 1575.42 MHz || Coarse-acquisition (C/A) and encrypted precision (P(Y)) codes, plus the L1 civilian (L1C) and military (M) codes on Block III and newer satellites.
|-
| L2 || 1227.60 MHz || P(Y) code, plus the L2C and military codes on the Block IIR-M and newer satellites.
|-
| L3 || 1381.05 MHz || Used for nuclear detonation (NUDET) detection.
|-
| L4 || 1379.913 MHz || Being studied for additional ionospheric correction.
|-
| L5 || 1176.45 MHz || Used as a civilian safety-of-life (SoL) signal on Block IIF and newer satellites.
|}
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space. One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010. On February 5, 2016, the 12th and final Block IIF satellite was launched. The L5 consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."
In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the effects of the lower 10 MHz of spectrum on GPS devices are minimal (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some effect on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses. Aviation Week magazine reported that the latest testing (June 2011) confirmed "significant jamming" of GPS by LightSquared's system.
Demodulation and decoding
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.
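The despreading idea can be illustrated with synthetic codes. The sketch below uses random ±1 chip sequences as stand-ins for the real 1,023-chip Gold codes (which are generated by shift registers per the GPS interface specification); what it demonstrates is only the correlation property that lets a receiver pick one satellite's data bit out of the summed signal:

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHIPS = 1023           # C/A codes are 1,023 chips long
N_SATS = 4

# Stand-ins for Gold codes: random ±1 chip sequences. Like real Gold
# codes, they have low cross-correlation between satellites.
codes = rng.choice([-1.0, 1.0], size=(N_SATS, N_CHIPS))

# Each satellite spreads one data bit (±1) over its whole code; the
# received signal is the sum of all satellites plus noise.
data_bits = np.array([1.0, -1.0, 1.0, 1.0])
received = (data_bits[:, None] * codes).sum(axis=0)
received += rng.normal(scale=2.0, size=N_CHIPS)

# Despreading: correlate the received signal with each known code.
for i, code in enumerate(codes):
    correlation = received @ code / N_CHIPS
    print(f"satellite {i}: correlation {correlation:+.2f} -> bit {np.sign(correlation):+.0f}")
```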
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
Navigation equations
Problem statement
The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent are designated as $[x_i, y_i, z_i, s_i]$, where the subscript $i$ denotes the satellite and has the value 1, 2, ..., n, where $n \ge 4$. When the time of message reception indicated by the on-board receiver clock is $\tilde{t}_i$, the true reception time is $t_i = \tilde{t}_i - b$, where $b$ is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is $\tilde{t}_i - b - s_i$, where $s_i$ is the satellite time. Assuming the message traveled at the speed of light, $c$, the distance traveled is $(\tilde{t}_i - b - s_i)c$.
For n satellites, the equations to satisfy are:

$d_i = (\tilde{t}_i - b - s_i)c, \quad i = 1, 2, \dots, n$

where $d_i$ is the geometric distance or range between receiver and satellite $i$ (the values without subscripts are the x, y, and z components of receiver position):

$d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}$
Defining pseudoranges as $p_i = (\tilde{t}_i - s_i)c$, we see they are biased versions of the true range:

$p_i = d_i + bc, \quad i = 1, 2, \dots, n$.
Since the equations have four unknowns [x, y, z, b], namely the three components of GPS receiver position and the clock bias, signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abel and Chaffee. When n is greater than four, this system is overdetermined and a fitting method must be used.
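A numerical solution along the lines discussed under "Iterative" below can be sketched in a few lines. Everything here (the function name, the synthetic satellite geometry, and the choice of Earth's center as a starting point) is illustrative rather than taken from any receiver implementation:

```python
import numpy as np

C = 299_792_458.0  # m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton solution of p_i = |r - r_i| + b*c for [x, y, z, b*c].

    sat_pos: (n, 3) satellite positions in meters, n >= 4.
    pseudoranges: (n,) measured pseudoranges in meters.
    """
    x = np.zeros(4)                      # start at Earth's center, zero bias
    for _ in range(iterations):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        residuals = pseudoranges - (rho + x[3])
        # Jacobian of the predicted pseudorange: negated receiver-to-satellite
        # unit vectors for the position terms, 1 for the clock-bias term.
        J = np.hstack([-(sat_pos - x[:3]) / rho[:, None],
                       np.ones((len(rho), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3] / C               # position (m), clock bias (s)

# Synthetic test: satellites at GPS-like radii, a receiver on the surface,
# and a 1 ms receiver clock bias.
sats = 2.656e7 * np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                           [0.577, 0.577, 0.577]])
truth, bias = np.array([6.371e6, 0.0, 0.0]), 1e-3
p = np.linalg.norm(sats - truth, axis=1) + bias * C
pos, b = solve_position(sats, p)
print(np.round(pos), b)                  # ~ [6371000, 0, 0], ~ 0.001
```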
The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used. The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.
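The DOP factors follow from the geometry matrix of unit line-of-sight vectors; the sketch below is the standard textbook computation, with illustrative satellite positions:

```python
import numpy as np

def dilution_of_precision(sat_pos, receiver_pos):
    """Compute GDOP/PDOP/TDOP from unit line-of-sight vectors.

    Builds the geometry matrix G with one row [e_x, e_y, e_z, 1] per
    satellite and takes square roots of traces of blocks of (G^T G)^-1.
    """
    los = sat_pos - receiver_pos
    e = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([e, np.ones((len(e), 1))])
    Q = np.linalg.inv(G.T @ G)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(np.trace(Q[:3, :3]))   # position-only DOP
    tdop = np.sqrt(Q[3, 3])               # time-only DOP
    return gdop, pdop, tdop

# Well-spread satellites give smaller DOP than bunched ones; multiplying
# DOP by the ranging error gives a rough position-error estimate.
sats = 2.656e7 * np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                           [-0.577, -0.577, 0.577]])
rx = np.array([6.371e6, 0.0, 0.0])
print(dilution_of_precision(sats, rx))
```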
Geometric interpretation
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
Spheres
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from at least three satellites are required, and their three spheres would typically intersect at two points. One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.
In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.
Hyperboloids
If the pseudorange between the receiver and satellite $i$ and the pseudorange between the receiver and satellite $j$ are subtracted, $p_i - p_j$, the common receiver clock bias ($b$) cancels out, resulting in a difference of distances $d_i - d_j$. The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids each with foci at a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.
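A tiny numeric check that differencing removes the bias (the satellite coordinates and the bias value here are arbitrary):

```python
import numpy as np

C = 299_792_458.0
sats = np.array([[2.0e7, 1.0e7, 5.0e6],
                 [-1.5e7, 2.0e7, 4.0e6],
                 [5.0e6, -2.2e7, 8.0e6]])
receiver = np.array([6.371e6, 0.0, 0.0])
bias = 2.5e-4                                # 250 microsecond clock bias

d = np.linalg.norm(sats - receiver, axis=1)  # true geometric ranges
p = d + bias * C                             # biased pseudoranges

# p_i - p_j equals d_i - d_j exactly: the common term b*c cancels,
# leaving the hyperboloid's constant distance difference.
print(p[0] - p[1], d[0] - d[1])
print(p[0] - p[2], d[0] - d[2])
```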
Inscribed sphere
The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius $bc$, given by the receiver clock bias $b$ (scaled by the speed of light $c$). The insphere location is such that it touches other spheres. The circumscribing spheres are centered at the GPS satellites, whose radii equal the measured pseudoranges $p_i$. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges $d_i$.
Hypercones
The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensional spacetime, and signals from at least four satellites are needed. In that case each of the equations describes a hypercone (or spherical cone), with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.
Solution methods
Least squares
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.
Iterative
Both the equations for four satellites and the least-squares equations for more than four are non-linear and need special solution methods. A common approach is iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
Closed-form
One closed-form solution to the above set of equations was developed by S. Bancroft. Its properties are well known; in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4×4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares problems, generally provide more accurate solutions.
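A compact sketch of the Bancroft construction, using the Lorentz (Minkowski) inner product and the quadratic it leads to; the variable names and the near-Earth root-selection heuristic are ours (real implementations typically select the root by residual checks):

```python
import numpy as np

def lorentz(a, b):
    """Minkowski inner product <a,b> = a1*b1 + a2*b2 + a3*b3 - a4*b4."""
    return (a[..., 0] * b[..., 0] + a[..., 1] * b[..., 1]
            + a[..., 2] * b[..., 2] - a[..., 3] * b[..., 3])

def bancroft(sat_pos, pseudoranges):
    """Closed-form (Bancroft-style) solve for [x, y, z, b*c], n >= 4 satellites."""
    B = np.hstack([sat_pos, pseudoranges[:, None]])  # rows r_i = (x_i, y_i, z_i, p_i)
    a = 0.5 * lorentz(B, B)                          # a_i = <r_i, r_i> / 2
    e = np.ones(len(B))
    Bp = np.linalg.pinv(B)                           # generalized inverse for n > 4
    u, v = Bp @ e, Bp @ a
    # With L = <y,y>/2, the equations reduce to the quadratic
    #   <u,u> L^2 + 2(<u,v> - 1) L + <v,v> = 0.
    coeffs = [lorentz(u, u), 2 * (lorentz(u, v) - 1), lorentz(v, v)]
    candidates = []
    for lam in np.roots(coeffs):
        y = np.real(lam) * u + v
        y[3] = -y[3]                                 # apply M = diag(1,1,1,-1): y = M(L*u + v)
        candidates.append(y)                         # y = (x, y, z, b*c)
    # Heuristic: keep the root nearest the Earth's surface.
    return min(candidates, key=lambda y: abs(np.linalg.norm(y[:3]) - 6.371e6))

# Same synthetic scenario as the iterative sketch above.
sats = 2.656e7 * np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                           [0.577, 0.577, 0.577]])
truth = np.array([6.371e6, 0.0, 0.0])
p = np.linalg.norm(sats - truth, axis=1) + 1e-3 * 299_792_458.0
print(np.round(bancroft(sats, p)))   # ~ [6371000, 0, 0, 299792] (b*c in meters)
```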
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."
Other closed-form solutions were published afterwards, although their adoption in practice is unclear.
Error sources and analysis
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of residual errors from these sources depends on geometric dilution of precision. Artificial errors may result from jamming devices, which threaten ships and aircraft, or from intentional signal degradation through selective availability, which limited accuracy to roughly 100 m, but has been switched off since May 1, 2000.
Accuracy enhancement and surveying
Regulatory spectrum issues concerning GPS receivers
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation". With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum". For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S. Department of Transportation.
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum". In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector". GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS".
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it". The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.
On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". LightSquared is challenging the FCC's action.
Similar systems
Following the United States' deployment of GPS, other countries have also developed their own satellite navigation systems. These systems include:
The Russian Global Navigation Satellite System (GLONASS) was developed at the same time as GPS, but suffered from incomplete coverage of the globe until the mid-2000s. A receiver can combine GLONASS reception with GPS, making additional satellites available to enable faster position fixes and improved accuracy, to within 2 m.
China's BeiDou Navigation Satellite System began global services in 2018 and finished its full deployment in 2020.
The Galileo navigation satellite system, a global system being developed by the European Union and other partner countries, began operation in 2016, and is expected to be fully deployed by 2020.
Japan's Quasi-Zenith Satellite System (QZSS) is a GPS satellite-based augmentation system to enhance GPS's accuracy in Asia-Oceania, with satellite navigation independent of GPS scheduled for 2023.
The Indian Regional Navigation Satellite System (Operational name 'NavIC', Navigation with Indian Constellation), deployed by India.
Backup system
In the event of adverse space weather or the deployment of an anti-satellite weapon against GPS, the United States has no terrestrial backup system. The potential cost of such an event to the U.S. economy is estimated at $1 billion per day. The LORAN-C system was turned off in North America in 2010 and Europe in 2015. eLoran is proposed as an American terrestrial backup system, but as of 2024 has not received approval or funding.
China continues to operate LORAN-C transmitters, and Russia has a similar system called CHAYKA ("Seagull").
See also
List of GPS satellites
GPS satellite blocks
GPS signals
Satellite navigation software
GPS/INS
GPS spoofing
Indoor positioning system
Local-area augmentation system
Local positioning system
Military invention
Mobile phone tracking
Navigation paradox
Notice Advisory to Navstar Users
S-GPS
Geostationary balloon satellite
Notes
References
Further reading
Global Positioning System. Open Courseware from Massachusetts Institute of Technology, 2012.
External links
FAA GPS FAQ
GPS.gov – General public education website created by the U.S. Government
20th-century inventions
Equipment of the United States Space Force
Programs of the United States Space Force
Military equipment introduced in the 1970s | Global Positioning System | [
"Technology",
"Engineering"
] | 15,614 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
11,875 | https://en.wikipedia.org/wiki/GNU | GNU is an extensive collection of free software (394 packages), which can be used as an operating system or can be used in parts with other operating systems. The use of the completed GNU tools led to the family of operating systems popularly known as Linux. Most of GNU is licensed under the GNU Project's own General Public License (GPL).
GNU is also the project within which the free software concept originated. Richard Stallman, the founder of the project, views GNU as a "technical means to a social end". Relatedly, Lawrence Lessig states in his introduction to the second edition of Stallman's book Free Software, Free Society that in it Stallman has written about "the social aspects of software and how Free Software can create community and social justice".
Name
GNU is a recursive acronym for "GNU's Not Unix!", chosen because GNU's design is Unix-like, but differs from Unix by being free software and containing no Unix code. Stallman chose the name by using various plays on words, including the song The Gnu.
History
Development of the GNU software was initiated by Richard Stallman while he worked at MIT Artificial Intelligence Laboratory. It was called the GNU Project, and was publicly announced on September 27, 1983, on the net.unix-wizards and net.usoft newsgroups by Stallman. Software development began on January 5, 1984, when Stallman quit his job at the Lab so that they could not claim ownership or interfere with distributing GNU components as free software.
The goal was to bring a completely free software operating system into existence. Stallman wanted computer users to be free to study the source code of the software they use, share software with other people, modify the behavior of software, and publish their modified versions of the software. This philosophy was published as the GNU Manifesto in March 1985.
Richard Stallman's experience with the Incompatible Timesharing System (ITS), an early operating system written in assembly language that became obsolete due to discontinuation of the PDP-10, the computer architecture for which ITS was written, led to a decision that a portable system was necessary. It was thus decided that the development would be started using C and Lisp as system programming languages, and that GNU would be compatible with Unix. At the time, Unix was already a popular proprietary operating system. The design of Unix was modular, so it could be reimplemented piece by piece.
Much of the needed software had to be written from scratch, but existing compatible third-party free software components were also used such as the TeX typesetting system, the X Window System, and the Mach microkernel that forms the basis of the GNU Mach core of GNU Hurd (the official kernel of GNU). With the exception of the aforementioned third-party components, most of GNU has been written by volunteers; some in their spare time, some paid by companies, educational institutions, and other non-profit organizations. In October 1985, Stallman set up the Free Software Foundation (FSF). In the late 1980s and 1990s, the FSF hired software developers to write the software needed for GNU.
As GNU gained prominence, interested businesses began contributing to development or selling GNU software and technical support. The most prominent and successful of these was Cygnus Solutions, now part of Red Hat.
Components
The system's basic components include the GNU Compiler Collection (GCC), the GNU C library (glibc), and GNU Core Utilities (coreutils), but also the GNU Debugger (GDB), GNU Binary Utilities (binutils), and the GNU Bash shell. GNU developers have contributed to Linux ports of GNU applications and utilities, which are now also widely used on other operating systems such as BSD variants, Solaris and macOS.
Many GNU programs have been ported to other operating systems, including proprietary platforms such as Microsoft Windows and macOS. GNU programs have been shown to be more reliable than their proprietary Unix counterparts.
In total, there are 467 GNU packages (394 excluding decommissioned ones) hosted on the official GNU development site.
GNU as an operating system
In its original meaning, and one still common in hardware engineering, the operating system is a basic set of functions to control the hardware and manage things like task scheduling and system calls. In modern terminology used by software developers, the collection of these functions is usually referred to as a kernel, while an 'operating system' is expected to have a more extensive set of programs. The GNU project maintains two kernels itself, allowing the creation of pure GNU operating systems, but the GNU toolchain is also used with non-GNU kernels. Due to the two different definitions of the term 'operating system', there is an ongoing debate concerning the naming of distributions of GNU packages with a non-GNU kernel. (See below.)
With kernels maintained by GNU and FSF
GNU Hurd
The original kernel of GNU Project is the GNU Hurd (together with the GNU Mach microkernel), which was the original focus of the Free Software Foundation (FSF).
With the April 30, 2015 release of the Debian GNU/Hurd 2015 distro, GNU now provides all required components to assemble an operating system that users can install and use on a computer.
However, the Hurd kernel is not yet considered production-ready but rather a base for further development and non-critical application usage.
Linux-libre
In 2012, a fork of the Linux kernel became officially part of the GNU Project in the form of Linux-libre, a variant of Linux with all proprietary components removed.
The GNU Project has endorsed Linux-libre distributions, such as Trisquel, Parabola GNU/Linux-libre, PureOS and GNU Guix System.
With non-GNU kernels
Because of the development status of Hurd, GNU is usually paired with other kernels such as Linux or FreeBSD. Whether the combination of GNU libraries with external kernels is a GNU operating system with a kernel (e.g. GNU with Linux), because the GNU collection renders the kernel into a usable operating system as understood in modern software development, or whether the kernel is an operating system unto itself with a GNU layer on top (i.e. Linux with GNU), because the kernel can operate a machine without GNU, is a matter of ongoing debate. The FSF maintains that an operating system built using the Linux kernel and GNU tools and utilities should be considered a variant of GNU, and promotes the term GNU/Linux for such systems (leading to the GNU/Linux naming controversy). This view is not exclusive to the FSF. Notably, Debian, one of the biggest and oldest Linux distributions, refers to itself as Debian GNU/Linux.
Copyright, GNU licenses, and stewardship
The GNU Project recommends that contributors assign the copyright for GNU packages to the Free Software Foundation, though the Free Software Foundation considers it acceptable to release small changes to an existing project to the public domain. However, this is not required; package maintainers may retain copyright to the GNU packages they maintain, though since only the copyright holder may enforce the license used (such as the GNU GPL), the copyright holder in this case enforces it rather than the Free Software Foundation.
For the development of needed software, Stallman wrote a license called the GNU General Public License (first called Emacs General Public License), with the goal of guaranteeing users freedom to share and change free software. Stallman wrote this license after his experience with James Gosling and a program called UniPress, over a controversy around software code use in the GNU Emacs program. For most of the 1980s, each GNU package had its own license: the Emacs General Public License, the GCC General Public License, etc. In 1989, FSF published a single license they could use for all their software, and which could be used by non-GNU projects: the GNU General Public License (GPL).
This license is now used by most of GNU software, as well as a large number of free software programs that are not part of the GNU Project; it also historically has been the most commonly used free software license (though recently challenged by the MIT license). It gives all recipients of a program the right to run, copy, modify and distribute it, while forbidding them from imposing further restrictions on any copies they distribute. This idea is often referred to as copyleft.
In 1991, the GNU Lesser General Public License (LGPL), then known as the Library General Public License, was written for the GNU C Library to allow it to be linked with proprietary software. 1991 also saw the release of version 2 of the GNU GPL. The GNU Free Documentation License (FDL), for documentation, followed in 2000. The GPL and LGPL were revised to version 3 in 2007, adding clauses to protect users against hardware restrictions that prevent users from running modified software on their own devices.
Besides GNU's packages, the GNU Project's licenses can be and are used by many unrelated projects, such as the Linux kernel, often used with GNU software. A majority of free software, such as the X Window System, is licensed under permissive free software licenses.
Logo
The logo for GNU is a gnu head. Originally drawn by Etienne Suvasa, a bolder and simpler version designed by Aurelio Heckert is now preferred. It appears in GNU software and in printed and electronic documentation for the GNU Project, and is also used in Free Software Foundation materials.
There was also a modified version of the official logo. It was created by the Free Software Foundation in September 2013 in order to commemorate the 30th anniversary of the GNU Project.
See also
Free software movement
History of free and open-source software
List of computing mascots
:Category:Computing mascots
References
External links
Ports of GNU utilities for Microsoft Windows
The daemon, the GNU and the penguin
Free software operating systems
GNU Project
GNU Project software
Mach (kernel)
Microkernel-based operating systems
Unix variants
Computing acronyms | GNU | [
"Technology"
] | 2,056 | [
"Computing terminology",
"Computing acronyms"
] |
11,924 | https://en.wikipedia.org/wiki/Game%20theory | Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
History
Earliest results
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as the Waldegrave problem.
In 1838, Antoine Augustin Cournot provided a model of competition in oligopolies. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition which would later be formalized by Francis Ysidro Edgeworth.
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.
Foundation
The work of John von Neumann established game theory as its own independent field in the early-to-mid 20th century, with von Neumann publishing his paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern. The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.
In his 1938 book and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric and provided a solution to a non-trivial infinite game (known in English as Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.
In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.
Prize-winning achievements
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict. Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
Different types of games
Cooperative / non-cooperative
A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).
Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from non-cooperative game theory which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
Symmetric / asymmetric
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric, if the players' payoffs differ even when they make the same choices.
Zero-sum / non-zero-sum
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
Simultaneous / sequential
Simultaneous games are games where both players move simultaneously, or instead the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or dynamic games) are games where players do not make decisions simultaneously, and players' earlier actions affect the outcome and decisions of other players. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, the differences between sequential and simultaneous games are as follows: sequential games are usually represented in extensive form, involve an order of play, and give later players at least some knowledge of earlier actions, whereas simultaneous games are usually represented in normal form and players choose their actions without knowledge of the other players' moves.
Perfect information and imperfect information
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. An imperfect information game is one in which players do not know all moves already made by their opponents, as in a simultaneous move game. Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go.
Many card games are games of imperfect information, such as poker and bridge. Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay. Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".
Bayesian game
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleague's interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.
Bayesian game means a strategic game with incomplete information. For a strategic game, decision makers are players, and every player has a group of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to the player such as their preferences and details about them. There must be a state for every set of features that some player believes may exist.
For example, consider a game in which Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. Specifically, suppose that Player 1 believes that Player 2 wants to date her with probability 1/2 and to get away from her with probability 1/2 (this evaluation probably comes from Player 1's experience: she faces players who want to date her half of the time and players who want to avoid her half of the time). Because of the probability involved, the analysis of this situation requires understanding the players' preferences over the random draw, even if one is only interested in pure-strategy equilibria.
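Expected-utility reasoning for such a game can be made concrete; the payoff numbers and action labels below are hypothetical, chosen only to show how Player 1 weights outcomes by her beliefs about Player 2's type:

```python
# A minimal sketch of expected-utility reasoning in a Bayesian game.
P_DATE = 0.5   # Player 1's belief that Player 2 is the "date" type

# Hypothetical payoffs to Player 1 for each of her actions, by type of
# Player 2 (the "date" type follows her; the "avoid" type goes elsewhere).
payoff = {
    ("ballet", "date"): 2, ("ballet", "avoid"): 0,
    ("movies", "date"): 1, ("movies", "avoid"): 0,
}

for action in ("ballet", "movies"):
    eu = (P_DATE * payoff[(action, "date")]
          + (1 - P_DATE) * payoff[(action, "avoid")])
    print(action, eu)
# Player 1 best-responds to her beliefs: expected utility 1.0 for the
# ballet versus 0.5 for the movies, so she chooses the ballet.
```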
Combinatorial games
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.
Discrete and continuous games
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
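The textbook linear Cournot duopoly illustrates a continuous strategy set. The parameters below are hypothetical, and the closed-form equilibrium q* = (a − c)/(3b) serves as a check:

```python
# Linear Cournot duopoly: inverse demand P(Q) = a - b*Q with Q = q1 + q2,
# constant marginal cost c. Firm i maximizes (P(Q) - c) * q_i over a
# continuous quantity, giving best response q_i = (a - c - b*q_j) / (2b).
a, b, c = 120.0, 1.0, 30.0   # hypothetical demand and cost parameters

def best_response(q_other: float) -> float:
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Iterated best responses contract toward the Nash equilibrium here:
q1 = q2 = 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)
print(q1, q2, (a - c) / (3 * b))   # all approximately 30.0
```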
Differential games
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to the optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle while the closed-loop strategies are found using Bellman's Dynamic Programming method.
A particular case of differential games are the games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Evolutionary game theory
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted. In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest.
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.
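A standard such adjustment rule is the replicator dynamic, in which strategies with above-average payoff grow as a share of the population. The sketch below applies it to the Hawk-Dove game; the payoff parameters V (value of the resource) and C (cost of a fight) are assumed textbook values.

```python
# Discrete-time replicator dynamics for the Hawk-Dove game.
# V (resource value) and C (fight cost) are assumed example values.
V, C = 2.0, 4.0
payoff = [
    [(V - C) / 2, V],      # Hawk's payoff vs (Hawk, Dove)
    [0.0,         V / 2],  # Dove's payoff vs (Hawk, Dove)
]

x = 0.1  # initial share of Hawks in the population
for _ in range(200):
    f_hawk = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_dove = payoff[1][0] * x + payoff[1][1] * (1 - x)
    f_mean = x * f_hawk + (1 - x) * f_dove
    x = x * f_hawk / f_mean  # above-average strategies gain population share

print(round(x, 3))  # -> 0.5, i.e. V/C, the evolutionarily stable mix
```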
Stochastic outcomes (and relation to other fields)
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivations, the mathematics involved is substantially the same, e.g. using Markov decision processes (MDPs).
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
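The following minimal sketch contrasts the two valuations of a single risky action; the payoff and probability numbers are assumed for illustration only.

```python
# A toy contrast between worst-case (minimax) and expected-value (MDP-style)
# evaluation of one uncertain action. All numbers are assumed examples.
outcomes = {"disaster": -100.0, "normal": 10.0}   # payoffs of the action
probs = {"disaster": 0.01, "normal": 0.99}        # assumed probability model

minimax_value = min(outcomes.values())            # adversary picks the worst: -100.0
expected_value = sum(probs[o] * outcomes[o] for o in outcomes)  # 8.9

print(minimax_value, expected_value)
```

A minimax planner rejects the action outright, while an expected-value planner accepts it; which is appropriate depends on whether the "disaster" outcome can actually be forced by an adversary.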
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.
Metagames
Metagames are games in which the act of playing consists of developing the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Mean field game theory
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.
Representation of games
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
Extensive form
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualized using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent possible actions for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. Finite extensive-form games of perfect information can be solved using backward induction. This involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either "fair" or "unfair". Next in the sequence, Player 2, who has now observed Player 1's move, can choose to play either "accept" or "reject". Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose that Player 1 chooses "unfair" and then Player 2 chooses "accept": Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
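A minimal backward-induction sketch of a game with this shape follows. Only the (unfair, accept) payoff of (8, 2) comes from the description above; the other payoffs are assumptions chosen for illustration.

```python
# Backward induction on a small two-stage game tree. Payoffs are
# (Player 1, Player 2); only (8, 2) for (unfair, accept) is from the text.
tree = {
    "fair":   {"accept": (5, 5), "reject": (0, 0)},   # assumed payoffs
    "unfair": {"accept": (8, 2), "reject": (0, 0)},   # (8, 2) from the text
}

def solve(tree):
    best = {}
    for move1, replies in tree.items():
        # Last mover first: Player 2 picks the reply maximizing their own payoff.
        move2 = max(replies, key=lambda r: replies[r][1])
        best[move1] = (move2, replies[move2])
    # Player 1 then picks the move whose induced outcome is best for them.
    move1 = max(best, key=lambda m: best[m][1][0])
    return move1, best[move1]

print(solve(tree))  # -> ('unfair', ('accept', (8, 2)))
```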
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
Normal form
The normal (or strategic) form game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
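Such a matrix is straightforward to represent in code. In the sketch below, only the (Up, Left) payoff of (4, 3) is taken from the example above; the remaining entries are assumptions.

```python
# A 2x2 normal-form game as a mapping from strategy profiles to payoffs.
# Only the (Up, Left) entry of (4, 3) is from the text; the rest are assumed.
payoffs = {
    ("Up", "Left"): (4, 3), ("Up", "Right"): (1, 1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}

p1, p2 = payoffs[("Up", "Left")]
print(f"Player 1 gets {p1}, Player 2 gets {p2}")  # Player 1 gets 4, Player 2 gets 3
```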
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.
Characteristic function form
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book.
Formally, a characteristic function is a function v from the set of all possible coalitions of players to a set of payments that satisfies v(∅) = 0. The function describes how much collective payoff a set of players can gain by forming a coalition.
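A characteristic function is naturally represented as a mapping from coalitions to values. The following sketch uses an assumed three-player example and computes each player's Shapley value (the average marginal contribution over all orderings), one standard cooperative solution concept.

```python
# A characteristic function v for an assumed three-player game, with v({}) = 0,
# plus a brute-force Shapley value computation over all player orderings.
from itertools import permutations

players = ("A", "B", "C")
v = {
    frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 1,
    frozenset("AB"): 3, frozenset("AC"): 3, frozenset("BC"): 3,
    frozenset("ABC"): 6,
}

shapley = dict.fromkeys(players, 0.0)
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        # Credit p with its marginal contribution to the growing coalition.
        shapley[p] += (v[coalition | {p}] - v[coalition]) / len(orders)
        coalition = coalition | {p}

print(shapley)  # symmetric example -> {'A': 2.0, 'B': 2.0, 'C': 2.0}
```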
Alternative game representations
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research. In addition to classical game representations, some of the alternative representations also encode time related aspects.
General and applied uses
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
Description and modeling
The primary use of game theory is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.
Economics
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
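The defining property of a pure-strategy Nash equilibrium (no profitable unilateral deviation) can be checked by brute force in small games. The sketch below uses an assumed 2×2 example, the same payoffs as in the normal-form sketch earlier.

```python
# Brute-force search for pure-strategy Nash equilibria in a 2x2 game.
# The payoff entries are assumed example values.
payoffs = {
    ("Up", "Left"): (4, 3), ("Up", "Right"): (1, 1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}
rows = {r for r, _ in payoffs}
cols = {c for _, c in payoffs}

def is_nash(r, c):
    # Neither player can gain by deviating unilaterally.
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
    return row_ok and col_ok

print([rc for rc in payoffs if is_nash(*rc)])  # [('Up', 'Left'), ('Down', 'Right')]
```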
The payoffs of the game are generally taken to represent the utility of individual players.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.
Managerial economics
Game theory also has an extensive use in a specific branch or stream of economics – Managerial Economics. One important usage of it in the field of managerial economics is in analyzing strategic interactions between firms. For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions impact their competitors and the overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies. For example, firms may use game theory to determine the optimal pricing strategy based on how they expect their competitors to respond to their pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.
Business
The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement. CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
application of game theory to procurement activity has increased – at the time it was at 19% across all survey respondents
65% of participants predict that use of game theory applications will grow
70% of respondents say that they have "only a basic or a below basic understanding" of game theory
20% of participants had undertaken on-the-job training in game theory
50% of respondents said that new or improved software solutions were desirable
90% of respondents said that they do not have the software they need for their work.
Project management
Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.
Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
Government-sector–private-sector games (games that model public–private partnerships)
Contractor–contractor games
Contractor–subcontractor games
Subcontractor–subcontractor games
Games involving other players
In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios.
Political science
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.
Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy, he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.
Defence science and technology
Game theory has been used extensively to model decision-making scenarios relevant to defence applications. Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare. Many of the problems studied are concerned with sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regard to bearing, speed, and the sensor technology activated by both vessels. Ho et al. provide a concise summary of the state of the art with regard to the use of game theory in defence applications and highlight the benefits and limitations of game theory in the considered scenarios.
Biology
Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced by John Maynard Smith and George R. Price in 1973. Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. Ronald Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.
According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the equation c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the more the incidence of altruism increases, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1/2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and we assume the discrepancy between all humans accounts for only approximately 1% of the diversity in the playing field, a coefficient that was 1/2 in the smaller field becomes 0.995. Similarly, if it is considered that information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) persisted through time, the playing field becomes larger still, and the discrepancies smaller.
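A quick numeric check of Hamilton's rule, with assumed benefit and cost values:

```python
# Hamilton's rule: altruism can be favored when c < b * r.
# The benefit and cost below are assumed example values.
r = 0.5   # coefficient of relatedness between full siblings (diploid)
b = 3.0   # benefit to the recipient, in offspring equivalents
c = 1.0   # cost to the altruist

print(c < b * r)  # True: helping a sibling is favored for these values
```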
Computer science and logic
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games. Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory.
Game theory has multiple applications in the fields of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments. Other areas of application of game theory in the AI/ML context include multi-agent system formation, reinforcement learning, and mechanism design. By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively.
Philosophy
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine, David Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis. Following Lewis's game-theoretic account of conventions, Edna Ullmann-Margalit (1977) and Cristina Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Skyrms (1990), and Stalnaker (1999).
The synthesis of game theory with ethics was championed by R. B. Braithwaite. The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation was realized only to a limited extent.
In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy.
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games, including the prisoner's dilemma, stag hunt, and the Nash bargaining game, as providing an explanation for the emergence of attitudes about morality.
Epidemiology
Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society.
Well known examples of games
Prisoner's dilemma
William Poundstone described the game in his 1993 book Prisoner's Dilemma:
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle. However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal.
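Expressed as years in prison (lower is better), the dominance argument can be checked mechanically. The description above fixes the silent/betray sentences; the both-betray sentence of five years is a common textbook assumption.

```python
# Prisoner's dilemma in years of prison (lower is better).
# The 5-year both-betray sentence is an assumed, standard textbook value.
years = {
    ("silent", "silent"): (2, 2),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

# Betrayal strictly dominates silence for prisoner A, whatever B does:
for b_move in ("silent", "betray"):
    assert years[("betray", b_move)][0] < years[("silent", b_move)][0]
print("betray dominates; yet (silent, silent) beats (betray, betray)")
```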
Battle of the sexes
The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders. This conflict can be depicted in a game theory framework. This is an example of non-cooperative games.
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.
In this game, there are two pure strategy Nash equilibria, one for each of the two options on which both players can coordinate. If mixed strategies are allowed, where each player chooses their strategy randomly, there is also a third Nash equilibrium. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies.
Ultimatum game
The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961.
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.
The ultimatum game has a variant, the dictator game. They are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
Trust game
The Trust Game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance, rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut, and Kevin McCabe in 1995.
In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they would return nothing; however, that is not what experiments find. The outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that it will be reciprocated.
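A minimal payoff computation for one round, with assumed amounts:

```python
# Trust game payoffs for one assumed round: the investor sends x of an
# endowment, the experimenter triples it, and the trustee returns y.
endowment, x = 10.0, 5.0       # investor keeps 5, sends 5
tripled = 3 * x                # trustee receives 15
y = 7.5                        # assume the trustee returns half

investor_payoff = endowment - x + y  # 12.5 > 10: trust paid off here
trustee_payoff = tripled - y         # 7.5
print(investor_payoff, trustee_payoff)
```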
Cournot Competition
The Cournot competition model involves firms choosing quantities of a homogeneous product to produce independently and simultaneously, where marginal cost can differ across firms and each firm's payoff is its profit. Production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game, the firms would jointly prefer to produce the monopoly quantity, but each has a strong incentive to deviate and produce more, which decreases the market-clearing price. For example, firms may be tempted to deviate from the monopoly quantity when the monopoly quantity is low and the price is high, aiming to increase production and maximize profit. However, this option does not provide the highest payoff, as a firm's ability to maximize profits depends on its market share and the elasticity of the market demand. The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, as it has the best response given the other firm's output. At this point, the Cournot equilibrium coincides with the game's Nash equilibrium.
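With assumed linear demand and costs, the Cournot equilibrium can be found by iterating each firm's best response to the other's quantity; the fixed point is the Nash equilibrium described above.

```python
# Cournot duopoly sketch with assumed linear inverse demand
# P = a - b*(q1 + q2) and constant marginal costs c1, c2.
a, b = 100.0, 1.0
c1, c2 = 10.0, 10.0

# Best response of firm i: q_i = (a - c_i - b*q_j) / (2b).
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = (a - c1 - b * q2) / (2 * b), (a - c2 - b * q1) / (2 * b)

price = a - b * (q1 + q2)
print(round(q1, 2), round(q2, 2), round(price, 2))  # 30.0 30.0 40.0
```

Each firm ends up producing more than half the joint-profit-maximizing quantity (22.5 each for these parameters), illustrating the incentive to deviate discussed above.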
Bertrand Competition
The Bertrand competition model assumes homogeneous products and a constant marginal cost, and players choose prices. The equilibrium of price competition is where the price equals marginal cost, assuming complete information about the competitors' costs. At any higher price a firm has an incentive to undercut its rival, because with a homogeneous product the lower-priced firm gains the entire market share, known as a cost advantage.
In popular culture
Based on the 1998 book by Sylvia Nasar, the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash.
The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games". In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory".
The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public.
The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary... to give yourself the minimum amount of failure".
Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters.
The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to cold war army exercises.
The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory.
Joker, the prime antagonist in the 2008 film The Dark Knight, presents game theory concepts, notably the prisoner's dilemma, in a scene where he asks passengers on two different ferries to bomb the other one to save their own.
In the 2018 film Crazy Rich Asians, the female lead Rachel Chu is a professor of economics and game theory at New York University. At the beginning of the film she is seen in her NYU classroom playing a game of poker with her teaching assistant and wins the game by bluffing; then in the climax of the film, she plays a game of mahjong with her boyfriend's disapproving mother Eleanor, losing the game to Eleanor on purpose but winning her approval as a result.
In the 2017 film Molly's Game, Brad, an inexperienced poker player, makes an irrational betting decision without realizing it and causes his opponent Harlan to deviate from his Nash equilibrium strategy, resulting in a significant loss when Harlan loses the hand.
See also
Compositional game theory
Lists
List of cognitive biases
List of emerging technologies
List of games in game theory
Notes
References
Sources
. A modern introduction at the graduate level.
Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes.
Further reading
Textbooks and general literature
. Suitable for undergraduate and business students.
. Suitable for upper-level undergraduates.
. Suitable for advanced undergraduates.
. Presents game theory in formal way suitable for graduate level.
Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, . Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation.
Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013), Game Theory, Cambridge University Press, . Undergraduate textbook.
. Suitable for a general audience.
. A leading textbook at the advanced undergraduate level.
Historically important texts
Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.)
English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, p. 42. Princeton University Press.
Other material
Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601.
. A layman's introduction.
External links
James Miller (2015): Introductory Game Theory Videos.
Paul Walker: History of Game Theory Page.
David Levine: Game Theory. Papers, Lecture Notes and much more stuff.
Alvin Roth: — Comprehensive list of links to game theory information on the Web
Adam Kalai: Game Theory and Computer Science — Lecture notes on Game Theory and Computer Science
Mike Shor: GameTheory.net — Lecture notes, interactive illustrations and other information.
Jim Ratliff's Graduate Course in Game Theory (lecture notes).
Don Ross: Review Of Game Theory in the Stanford Encyclopedia of Philosophy.
Bruno Verbeek and Christopher Morris: Game Theory and Ethics
Elmer G. Wiens: Game Theory — Introduction, worked examples, play online two-person zero-sum games.
Marek M. Kaminski: Game Theory and Politics — Syllabuses and lecture notes for game theory and political science.
Websites on game theory and social interactions
Kesten Green's — See Papers for evidence on the accuracy of forecasts from game theory and other methods .
McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory.
Benjamin Polak: Open Course on Game Theory at Yale videos of the course
Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA.
Antonin Kucera: Stochastic Two-Player Games.
Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) – Many person game theory; What is Mathematical Game Theory (#5) – Finale, summing up, and my own view
Artificial intelligence
Formal sciences
Mathematical economics
John von Neumann | Game theory | ["Mathematics"] | 12,038 | ["Applied mathematics", "Game theory", "Mathematical economics"] |
11,953 | https://en.wikipedia.org/wiki/History%20of%20geometry | Geometry (from the Ancient Greek γεωμετρία, geōmetría; geo- "earth", -metron "measurement") arose as the field of knowledge dealing with spatial relationships. Geometry was one of the two fields of pre-modern mathematics, the other being the study of numbers (arithmetic).
Classic geometry was focused on compass and straightedge constructions. Geometry was revolutionized by Euclid, who introduced mathematical rigor and the axiomatic method still in use today. His book, The Elements, is widely considered the most influential textbook of all time, and was known to all educated people in the West until the middle of the 20th century.
In modern times, geometric concepts have been generalized to a high level of abstraction and complexity, and have been subjected to the methods of calculus and abstract algebra, so that many modern branches of the field are barely recognizable as the descendants of early geometry. (See Areas of mathematics and Algebraic geometry.)
Early geometry
The earliest recorded beginnings of geometry can be traced to early peoples, such as the ancient Indus Valley (see Harappan mathematics) and ancient Babylonia (see Babylonian mathematics) from around 3000 BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. Among these were some surprisingly sophisticated principles, and a modern mathematician might be hard put to derive some of them without the use of calculus and algebra. For example, both the Egyptians and the Babylonians were aware of versions of the Pythagorean theorem about 1500 years before Pythagoras and the Indian Sulba Sutras around 800 BC contained the first statements of the theorem; the Egyptians had a correct formula for the volume of a frustum of a square pyramid.
Egyptian geometry
The ancient Egyptians knew that they could approximate the area of a circle as follows:
Area of Circle ≈ [(8/9) × Diameter]².
Problem 50 of the Ahmes papyrus uses these methods to calculate the area of a circle, according to a rule that the area is equal to the square of 8/9 of the circle's diameter. This assumes that π is 4×(8/9)² (or 3.160493...), an error of about 0.6 percent. This value was slightly less accurate than the calculations of the Babylonians (25/8 = 3.125, within 0.53 percent), but was not otherwise surpassed until Archimedes' approximation of 211875/67441 = 3.14163, which had an error of just over 1 in 100,000.
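The rule and its implied value of π are easy to check numerically:

```python
# Checking the Egyptian rule: area = ((8/9) * d)^2 implies pi = 4*(8/9)^2.
import math

pi_egypt = 4 * (8 / 9) ** 2                  # = 256/81
print(pi_egypt)                              # 3.1604938271604937
print(100 * (pi_egypt - math.pi) / math.pi)  # relative error ~ 0.60 percent
```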
Ahmes knew of the modern 22/7 as an approximation for π, and used it to split a hekat (hekat × 22/7 × 7/22 = hekat); however, Ahmes continued to use the traditional 256/81 value for π when computing his hekat volume found in a cylinder.
Problem 48 involved using a square with side 9 units. This square was cut into a 3×3 grid. The diagonals of the corner squares were used to make an irregular octagon with an area of 63 square units. This gave a second value for π of 3.111...
The two problems together indicate a range of values for π between 3.11 and 3.16.
Problem 14 in the Moscow Mathematical Papyrus gives the only ancient example finding the volume of a frustum of a pyramid, describing the correct formula:
V = (h/3)(a² + ab + b²), where a and b are the base and top side lengths of the truncated pyramid and h is the height.
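The formula can be sanity-checked against two limiting cases:

```python
# Frustum volume V = (h/3) * (a^2 + a*b + b^2), checked on special cases.
def frustum_volume(a, b, h):
    return h / 3 * (a * a + a * b + b * b)

print(frustum_volume(2, 2, 3))  # 12.0: b = a gives a box, volume a^2 * h
print(frustum_volume(2, 0, 3))  # 4.0:  b = 0 gives a full pyramid, a^2 * h / 3
```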
Babylonian geometry
The Babylonians may have known the general rules for measuring areas and volumes. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if π were estimated as 3. The volume of a cylinder was taken as the product of the base and the height; however, the volume of the frustum of a cone or a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. The Pythagorean theorem was also known to the Babylonians. Also, there was a recent discovery in which a tablet used π as 3 1/8 (3.125). The Babylonians are also known for the Babylonian mile, which was a measure of distance equal to about seven miles today. This measurement for distances eventually was converted to a time-mile used for measuring the travel of the Sun, therefore representing time. There have been recent discoveries showing that ancient Babylonians may have discovered astronomical geometry nearly 1400 years before Europeans did.
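The one-twelfth rule is exactly what the modern area formula gives when π is taken to be 3:

```latex
% With circumference C = 2\pi r, the area of a circle is
A \;=\; \pi r^2 \;=\; \pi\left(\frac{C}{2\pi}\right)^{2} \;=\; \frac{C^2}{4\pi},
% which equals C^2/12 precisely when \pi = 3.
```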
Vedic India geometry
The Indian Vedic period had a tradition of geometry, mostly expressed in the construction of elaborate altars.
Early Indian texts (1st millennium BC) on this topic include the Satapatha Brahmana and the Śulba Sūtras.
The Śulba Sūtras has been described as "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." They make use of Pythagorean triples, which are particular cases of Diophantine equations.
According to mathematician S. G. Dani, the Babylonian cuneiform tablet Plimpton 322 written c. 1850 BC "contains fifteen Pythagorean triples with quite large entries, including (13500, 12709, 18541) which is a primitive triple, indicating, in particular, that there was sophisticated understanding on the topic" in Mesopotamia in 1850 BC. "Since these tablets predate the Sulbasutras period by several centuries, taking into account the contextual appearance of some of the triples, it is reasonable to expect that similar understanding would have been there in India." Dani goes on to say:
As the main objective of the Sulvasutras was to describe the constructions of altars and the geometric principles involved in them, the subject of Pythagorean triples, even if it had been well understood may still not have featured in the Sulvasutras. The occurrence of the triples in the Sulvasutras is comparable to mathematics that one may encounter in an introductory book on architecture or another similar applied area, and would not correspond directly to the overall knowledge on the topic at that time. Since, unfortunately, no other contemporaneous sources have been found it may never be possible to settle this issue satisfactorily.
Greek geometry
Thales and Pythagoras
Thales (635–543 BC) of Miletus (now in southwestern Turkey), was the first to whom deduction in mathematics is attributed. There are five geometric propositions for which he wrote deductive proofs, though his proofs have not survived. Pythagoras (582–496 BC) of Ionia, and later, Italy, then colonized by Greeks, may have been a student of Thales, and traveled to Babylon and Egypt. The theorem that bears his name may not have been his discovery, but he was probably one of the first to give a deductive proof of it. He gathered a group of students around him to study mathematics, music, and philosophy, and together they discovered most of what high school students learn today in their geometry courses. In addition, they made the profound discovery of incommensurable lengths and irrational numbers.
Plato
Plato (427–347 BC) was a philosopher, highly esteemed by the Greeks. There is a story that he had inscribed above the entrance to his famous school, "Let none ignorant of geometry enter here." However, the story is considered to be untrue. Though he was not a mathematician himself, his views on mathematics had great influence. Mathematicians thus accepted his belief that geometry should use no tools but compass and straightedge – never measuring instruments such as a marked ruler or a protractor, because these were a workman's tools, not worthy of a scholar. This dictum led to a deep study of possible compass and straightedge constructions, and three classic construction problems: how to use these tools to trisect an angle, to construct a cube twice the volume of a given cube, and to construct a square equal in area to a given circle. The proofs of the impossibility of these constructions, finally achieved in the 19th century, led to important principles regarding the deep structure of the real number system. Aristotle (384–322 BC), Plato's greatest pupil, wrote a treatise on methods of reasoning used in deductive proofs (see Logic) which was not substantially improved upon until the 19th century.
Hellenistic geometry
Euclid
Euclid (c. 325–265 BC), of Alexandria, probably a student at the Academy founded by Plato, wrote a treatise in 13 books (chapters), titled The Elements of Geometry, in which he presented geometry in an ideal axiomatic form, which came to be known as Euclidean geometry. The treatise is not a compendium of all that the Hellenistic mathematicians knew at the time about geometry; Euclid himself wrote eight more advanced books on geometry. We know from other references that Euclid's was not the first elementary geometry textbook, but it was so much superior that the others fell into disuse and were lost. He was brought to the university at Alexandria by Ptolemy I, King of Egypt.
The Elements began with definitions of terms, fundamental geometric principles (called axioms or postulates), and general quantitative principles (called common notions) from which all the rest of geometry could be logically deduced. Following are his five axioms, somewhat paraphrased to make the English easier to read.
Any two points can be joined by a straight line.
Any finite straight line can be extended in a straight line.
A circle can be drawn with any center and any radius.
All right angles are equal to each other.
If two straight lines in a plane are crossed by another straight line (called the transversal), and the interior angles between the two lines and the transversal lying on one side of the transversal add up to less than two right angles, then on that side of the transversal, the two lines extended will intersect (also called the parallel postulate).
Concepts that are now understood as algebra were expressed geometrically by Euclid, a method referred to as Greek geometric algebra.
Archimedes
Archimedes (287–212 BC), of Syracuse, Sicily, when it was a Greek city-state, was one of the most famous mathematicians of the Hellenistic period. He is known for his formulation of a hydrostatic principle (known as Archimedes' principle) and for his works on geometry, including Measurement of the Circle and On Conoids and Spheroids. His work On Floating Bodies is the first known work on hydrostatics, of which Archimedes is recognized as the founder. Renaissance translations of his works, including the ancient commentaries, were enormously influential in the work of some of the best mathematicians of the 17th century, notably René Descartes and Pierre de Fermat.
After Archimedes
After Archimedes, Hellenistic mathematics began to decline. There were a few minor stars yet to come, but the golden age of geometry was over. Proclus (410–485), author of Commentary on the First Book of Euclid, was one of the last important players in Hellenistic geometry. He was a competent geometer, but more importantly, he was a superb commentator on the works that preceded him. Much of that work did not survive to modern times, and is known to us only through his commentary. The Roman Republic and Empire that succeeded and absorbed the Greek city-states produced excellent engineers, but no mathematicians of note.
The great Library of Alexandria was later burned. There is a growing consensus among historians that the Library of Alexandria likely suffered from several destructive events, but that the destruction of Alexandria's pagan temples in the late 4th century was probably the most severe and final one. The evidence for that destruction is the most definitive and secure. Caesar's invasion may well have led to the loss of some 40,000–70,000 scrolls in a warehouse adjacent to the port (as Luciano Canfora argues, they were likely copies produced by the Library intended for export), but it is unlikely to have affected the Library or Museum, given that there is ample evidence that both existed later.
Civil wars, decreasing investments in maintenance and acquisition of new scrolls and generally declining interest in non-religious pursuits likely contributed to a reduction in the body of material available in the Library, especially in the 4th century. The Serapeum was certainly destroyed by Theophilus in 391, and the Museum and Library may have fallen victim to the same campaign.
Classical Indian geometry
In the Bakhshali manuscript, there is a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's Aryabhatiya (499) includes the computation of areas and volumes.
Brahmagupta wrote his astronomical work, the Brāhma Sphuṭa Siddhānta, in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral:
Brahmagupta's theorem: If a cyclic quadrilateral has diagonals that are perpendicular to each other, then the perpendicular line drawn from the point of intersection of the diagonals to any side of the quadrilateral always bisects the opposite side.
Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas).
Brahmagupta's formula: The area, A, of a cyclic quadrilateral with sides of lengths a, b, c, d, respectively, is given by
A = √[(s − a)(s − b)(s − c)(s − d)],
where s, the semiperimeter, is given by s = (a + b + c + d)/2.
Brahmagupta's Theorem on rational triangles: A triangle with rational sides a, b, c and rational area is of the form:
a = u²/v + v, b = u²/w + w, c = u²/v + u²/w − (v + w)
for some rational numbers u, v, and w.
Parameshvara Nambudiri was the first mathematician to give a formula for the radius of the circle circumscribing a cyclic quadrilateral. The expression is sometimes attributed to Lhuilier [1782], 350 years later. With the sides of the cyclic quadrilateral being a, b, c, and d, and s the semiperimeter as above, the radius R of the circumscribed circle is:
R = (1/4) √[ ((ab + cd)(ac + bd)(ad + bc)) / ((s − a)(s − b)(s − c)(s − d)) ].
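As a quick numerical check of the two formulas above, here is a minimal Python sketch (an illustration added for this edition, not part of the original text; the function names are ours). A unit square is a cyclic quadrilateral with area 1 and circumradius √2/2:

import math

def brahmagupta_area(a, b, c, d):
    # Area of a cyclic quadrilateral from its four side lengths.
    s = (a + b + c + d) / 2  # semiperimeter
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

def parameshvara_circumradius(a, b, c, d):
    # Radius of the circle circumscribing the same cyclic quadrilateral.
    s = (a + b + c + d) / 2
    num = (a * b + c * d) * (a * c + b * d) * (a * d + b * c)
    den = (s - a) * (s - b) * (s - c) * (s - d)
    return 0.25 * math.sqrt(num / den)

print(brahmagupta_area(1, 1, 1, 1))           # 1.0
print(parameshvara_circumradius(1, 1, 1, 1))  # 0.7071... (= sqrt(2)/2)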
Chinese geometry
The first definitive work (or at least the oldest extant one) on geometry in China was the Mo Jing, the Mohist canon of the early philosopher Mozi (470–390 BC). It was compiled by his followers around the year 330 BC, decades after his death. Although the Mo Jing is the oldest extant book on geometry in China, there is the possibility that even older written material existed. However, due to the infamous Burning of the Books in a political maneuver by the Qin dynasty ruler Qin Shihuang (r. 221–210 BC), multitudes of written literature created before his time were purged. In addition, the Mo Jing presents geometrical concepts in mathematics that are perhaps too advanced not to have rested on an earlier geometrical foundation.
The Mo Jing described various aspects of many fields associated with physical science, and provided a small wealth of information on mathematics as well. It provided an 'atomic' definition of the geometric point, stating that a line is separated into parts, and the part which has no remaining parts (i.e. cannot be divided into smaller parts) and thus forms the extreme end of a line is a point. Much like Euclid's first and third definitions and Plato's 'beginning of a line', the Mo Jing stated that "a point may stand at the end (of a line) or at its beginning like a head-presentation in childbirth. (As to its invisibility) there is nothing similar to it." Similar to the atomists of Democritus, the Mo Jing stated that a point is the smallest unit, and cannot be cut in half, since 'nothing' cannot be halved. It stated that two lines of equal length will always finish at the same place, while providing definitions for the comparison of lengths and for parallels, along with principles of space and bounded space. It also described the fact that planes without the quality of thickness cannot be piled up since they cannot mutually touch. The book provided definitions for circumference, diameter, and radius, along with the definition of volume.
The Han dynasty (202 BC – 220 AD) period of China witnessed a new flourishing of mathematics. One of the oldest Chinese mathematical texts to present geometric progressions was the Suàn shù shū of 186 BC, during the Western Han era. The mathematician, inventor, and astronomer Zhang Heng (78–139 AD) used geometrical formulas to solve mathematical problems. Although rough estimates for pi (π) were given in the Zhou Li (compiled in the 2nd century BC), it was Zhang Heng who was the first to make a concerted effort at creating a more accurate formula for pi. Zhang Heng approximated pi as 730/232 (or approx 3.1466), although he used another formula of pi in finding a spherical volume, using the square root of 10 (or approx 3.162) instead. Zu Chongzhi (429–500 AD) improved the accuracy of the approximation of pi to between 3.1415926 and 3.1415927, with 355⁄113 (密率, Milü, detailed approximation) and 22⁄7 (约率, Yuelü, rough approximation) being his other notable approximations. In comparison to later works, the formula for pi given by the French mathematician Franciscus Vieta (1540–1603) fell halfway between Zu's approximations.
The Nine Chapters on the Mathematical Art
The Nine Chapters on the Mathematical Art, the title of which first appeared by 179 AD on a bronze inscription, was edited and commented on by the 3rd-century mathematician Liu Hui from the Kingdom of Cao Wei. This book included many problems where geometry was applied, such as finding surface areas for squares and circles and the volumes of solids in various three-dimensional shapes, and included the use of the Pythagorean theorem. The book provided an illustrated proof of the Pythagorean theorem, contained a written dialogue between the earlier Duke of Zhou and Shang Gao on the properties of the right-angle triangle and the Pythagorean theorem, and also referred to the astronomical gnomon, the circle and square, as well as measurements of heights and distances. The editor Liu Hui listed pi as 3.141014 by using a 192-sided polygon, and then calculated pi as 3.14159 using a 3072-sided polygon. This was more accurate than the value of his contemporary Wang Fan, a mathematician and astronomer from Eastern Wu, who rendered pi as 3.1555 by using 142⁄45. Liu Hui also wrote of mathematical surveying to calculate distance measurements of depth, height, width, and surface area. In terms of solid geometry, he figured out that a wedge with rectangular base and both sides sloping could be broken down into a pyramid and a tetrahedral wedge. He also figured out that a wedge with trapezoid base and both sides sloping could be made to give two tetrahedral wedges separated by a pyramid. Furthermore, Liu Hui described Cavalieri's principle on volume, as well as Gaussian elimination. The Nine Chapters listed the following geometrical formulas that were known by the time of the Former Han dynasty (202 BCE – 9 CE); a short numerical sketch of the polygon side-doubling method behind Liu Hui's pi values appears after the list below.
Areas for the
Square
Rectangle
Circle
Isosceles triangle
Rhomboid
Trapezoid
Double trapezium
Segment of a circle
Annulus ('ring' between two concentric circles)
Volumes for the
Parallelepiped with two square surfaces
Parallelepiped with no square surfaces
Pyramid
Frustum of pyramid with square base
Frustum of pyramid with rectangular base of unequal sides
Cube
Prism
Wedge with rectangular base and both sides sloping
Wedge with trapezoid base and both sides sloping
Tetrahedral wedge
Frustum of a wedge of the second type (used for applications in engineering)
Cylinder
Cone with circular base
Frustum of a cone
Sphere
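The pi computations attributed to Liu Hui above can be reproduced with the side-doubling recurrence for regular polygons inscribed in a circle. The Python sketch below is one modern, perimeter-based reconstruction (Liu Hui himself reasoned with polygon areas, so his quoted figures differ slightly); the function name is ours:

import math

def inscribed_polygon_pi(doublings):
    # Start from a regular hexagon inscribed in a unit circle (side = 1)
    # and repeatedly double the number of sides.
    n, side = 6, 1.0
    for _ in range(doublings):
        side = math.sqrt(2 - math.sqrt(4 - side * side))  # side-doubling recurrence
        n *= 2
    return n * side / 2  # half the perimeter approximates pi from below

print(inscribed_polygon_pi(5))  # 192-gon:  3.14145...
print(inscribed_polygon_pi(9))  # 3072-gon: 3.141592...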
Continuing the geometrical legacy of ancient China, there were many later figures to come, including the famed astronomer and mathematician Shen Kuo (1031–1095 CE), Yang Hui (1238–1298), who presented what is now known as Pascal's Triangle, Xu Guangqi (1562–1633), and many others.
Islamic Golden Age
Thābit ibn Qurra, using what he called the method of reduction and composition, provided two different general proofs of the Pythagorean theorem for all triangles; before this, proofs existed only for special cases.
A 2007 paper in the journal Science suggested that girih tiles possessed properties consistent with self-similar fractal quasicrystalline tilings such as the Penrose tilings.
Renaissance
The transmission of the Greek Classics to medieval Europe via the Arabic literature of the 9th to 10th century "Islamic Golden Age" began in the 10th century and culminated in the Latin translations of the 12th century.
A copy of Ptolemy's Almagest was brought back to Sicily by Henry Aristippus (d. 1162), as a gift from the Emperor to King William I (r. 1154–1166). An anonymous student at Salerno travelled to Sicily and translated the Almagest as well as several works by Euclid from Greek to Latin. Although the Sicilians generally translated directly from the Greek, when Greek texts were not available, they would translate from Arabic. Eugenius of Palermo (d. 1202) translated Ptolemy's Optics into Latin, drawing on his knowledge of all three languages in the task. The rigorous deductive methods of geometry found in Euclid's Elements of Geometry were relearned, and further development of geometry in the styles of both Euclid (Euclidean geometry) and Khayyam (algebraic geometry) continued, resulting in an abundance of new theorems and concepts, many of them very profound and elegant.
Advances in the treatment of perspective were made in Renaissance art of the 14th to 15th century which went beyond what had been achieved in antiquity.
In Renaissance architecture of the Quattrocento, concepts of architectural order were explored and rules were formulated. A prime example is the Basilica di San Lorenzo in Florence by Filippo Brunelleschi (1377–1446).
In c. 1413 Filippo Brunelleschi demonstrated the geometrical method of perspective, used today by artists, by painting the outlines of various Florentine buildings onto a mirror.
Soon after, nearly every artist in Florence and in Italy used geometrical perspective in their paintings, notably Masolino da Panicale and Donatello. Melozzo da Forlì first used the technique of upward foreshortening (in Rome, Loreto, Forlì and others), and was celebrated for that. Not only was perspective a way of showing depth, it was also a new method of composing a painting. Paintings began to show a single, unified scene, rather than a combination of several.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli), but did not publish, the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote De pictura (1435/1436), a treatise on proper methods of showing distance in painting based on Euclidean geometry. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma who studied Alhazen's Optics.
Piero della Francesca elaborated on De pictura in his De Prospectiva Pingendi in the 1470s. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective.
Perspective remained, for a while, the domain of Florence. Jan van Eyck, among others, was unable to create a consistent structure for the converging lines in paintings, as in London's The Arnolfini Portrait, because he was unaware of the theoretical breakthrough just then occurring in Italy. However he achieved very subtle effects by manipulations of scale in his interiors. Gradually, and partly through the movement of academies of the arts, the Italian techniques became part of the training of artists across Europe, and later other parts of the world.
The culmination of these Renaissance traditions finds its ultimate synthesis in the research of the architect, geometer, and optician Girard Desargues on perspective, optics and projective geometry.
The Vitruvian Man by Leonardo da Vinci (c. 1490) depicts a man in two superimposed positions with his arms and legs apart and inscribed in a circle and square. The drawing is based on the correlations of ideal human proportions with geometry described by the ancient Roman architect Vitruvius in Book III of his treatise De Architectura.
Modern geometry
The 17th century
In the early 17th century, there were two important developments in geometry. The first and most important was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry is the study of geometry without measurement, just the study of how points align with each other. There had been some early work in this area by Hellenistic geometers, notably Pappus (c. 340). The greatest flowering of the field occurred with Jean-Victor Poncelet (1788–1867).
In the late 17th century, calculus was developed independently and almost simultaneously by Isaac Newton (1642–1727) and Gottfried Wilhelm Leibniz (1646–1716). This was the beginning of a new field of mathematics now called analysis. Though not itself a branch of geometry, it is applicable to geometry, and it solved two families of problems that had long been almost intractable: finding tangent lines to curves, and finding the areas enclosed by those curves. The methods of calculus reduced these problems mostly to straightforward matters of computation.
The 18th and 19th centuries
Non-Euclidean geometry
The very old problem of proving Euclid's Fifth Postulate, the "Parallel Postulate", from his first four postulates had never been forgotten. Beginning not long after Euclid, many attempted demonstrations were given, but all were later found to be faulty, through allowing into the reasoning some principle which itself had not been proved from the first four postulates. Though Omar Khayyám was also unsuccessful in proving the parallel postulate, his criticisms of Euclid's theories of parallels and his proof of properties of figures in non-Euclidean geometries contributed to the eventual development of non-Euclidean geometry. By 1700 a great deal had been discovered about what can be proved from the first four, and what the pitfalls were in attempting to prove the fifth. Saccheri, Lambert, and Legendre each did excellent work on the problem in the 18th century, but still fell short of success. In the early 19th century, Gauss, Johann Bolyai, and Lobachevsky, each independently, took a different approach. Beginning to suspect that it was impossible to prove the Parallel Postulate, they set out to develop a self-consistent geometry in which that postulate was false. In this they were successful, thus creating the first non-Euclidean geometry. By 1854, Bernhard Riemann, a student of Gauss, had applied methods of calculus in a ground-breaking study of the intrinsic (self-contained) geometry of all smooth surfaces, and thereby found a different non-Euclidean geometry. This work of Riemann later became fundamental for Einstein's theory of relativity.
It remained to be proved mathematically that the non-Euclidean geometry was just as self-consistent as Euclidean geometry, and this was first accomplished by Beltrami in 1868. With this, non-Euclidean geometry was established on an equal mathematical footing with Euclidean geometry.
While it was now known that different geometric theories were mathematically possible, the question remained, "Which one of these theories is correct for our physical space?" The mathematical work revealed that this question must be answered by physical experimentation, not mathematical reasoning, and uncovered the reason why the experimentation must involve immense (interstellar, not earth-bound) distances. With the development of relativity theory in physics, this question became vastly more complicated.
Introduction of mathematical rigor
All the work related to the Parallel Postulate revealed that it was quite difficult for a geometer to separate his logical reasoning from his intuitive understanding of physical space, and, moreover, revealed the critical importance of doing so. Careful examination had uncovered some logical inadequacies in Euclid's reasoning, and some unstated geometric principles to which Euclid sometimes appealed. This critique paralleled the crisis occurring in calculus and analysis regarding the meaning of infinite processes such as convergence and continuity. In geometry, there was a clear need for a new set of axioms, which would be complete, and which in no way relied on pictures we draw or on our intuition of space. Such axioms, now known as Hilbert's axioms, were given by David Hilbert in 1899 in his book Grundlagen der Geometrie (Foundations of Geometry).
Analysis situs, or topology
In the 19th century, it became apparent that certain progressions of mathematical reasoning recurred when similar ideas were studied on the number line, in two dimensions, and in three dimensions. Thus the general concept of a metric space was eventually created (formalized by Maurice Fréchet in 1906) so that the reasoning could be done in more generality, and then applied to special cases. This method of studying calculus- and analysis-related concepts came to be known as analysis situs, and later as topology. The important topics in this field were properties of more general figures, such as connectedness and boundaries, rather than properties like straightness, and precise equality of length and angle measurements, which had been the focus of Euclidean and non-Euclidean geometry. Topology soon became a separate field of major importance, rather than a sub-field of geometry or analysis.
Geometry of more than 3 dimensions
The 19th century saw the development of the general concept of Euclidean space by Ludwig Schläfli, who extended Euclidean geometry beyond three dimensions. He discovered all the higher-dimensional analogues of the Platonic solids, finding that there are exactly six such regular convex polytopes in dimension four, and three in all higher dimensions.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra, unifying William Rowan Hamilton's quaternions with Hermann Grassmann's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions.
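The flavour of these operations can be illustrated with the quaternion special case of the "sandwich" product q v q*, which rotates a vector; the Python sketch below is a modern illustration, not period notation, and the helper names are ours:

import math

def quat_mul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(v, axis, angle):
    # Rotate vector v about a unit axis by the given angle via q v q*.
    h = angle / 2
    q = (math.cos(h),) + tuple(math.sin(h) * a for a in axis)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, (0.0,) + tuple(v)), q_conj)[1:]

print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
# ~ (0.0, 1.0, 0.0): a quarter turn about the z-axis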
The 20th century
Developments in algebraic geometry included the study of curves and surfaces over finite fields, as demonstrated by the works of, among others, André Weil, Alexander Grothendieck, and Jean-Pierre Serre, as well as over the real or complex numbers. Finite geometry itself, the study of spaces with only finitely many points, found applications in coding theory and cryptography. With the advent of the computer, new disciplines such as computational geometry or digital geometry deal with geometric algorithms, discrete representations of geometric data, and so forth.
Timeline
See also
Flatland, a book by "A. Square" about two- and three-dimensional space, to understand the concept of four dimensions
Timeline of geometry – Notable events in the history of geometry
History of Euclidean geometry
History of non-Euclidean geometry
History of mathematics
History of measurement
History of space (mathematics)
Important publications in geometry
Interactive geometry software
List of geometry topics
Modern triangle geometry
Notes
References
Needham, Joseph (1986), Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth, Taipei: Caves Books Ltd
External links
Geometry in the 19th Century at the Stanford Encyclopedia of Philosophy
Arabic mathematics: forgotten brilliance?
"Mathematics"
] | 6,805 | [
"History of geometry",
"Geometry"
] |
George Berkeley (12 March 1685 – 14 January 1753) – known as Bishop Berkeley (Bishop of Cloyne of the Anglican Church of Ireland) – was an Anglo-Irish philosopher whose primary achievement was the advancement of a theory he called "immaterialism" (later referred to as "subjective idealism" by others). This theory denies the existence of material substance and instead contends that familiar objects like tables and chairs are ideas perceived by the mind and, as a result, cannot exist without being perceived. Berkeley is also known for his critique of abstraction, an important premise in his argument for immaterialism. Interest in his works increased significantly in the United States during the 19th century, and the University of California, Berkeley is named after him.
In 1709, Berkeley published his first major work, An Essay Towards a New Theory of Vision, in which he discussed the limitations of human vision and advanced the theory that the proper objects of sight are not material objects, but light and colour. This foreshadowed his chief philosophical work, A Treatise Concerning the Principles of Human Knowledge, in 1710, which, after its poor reception, he rewrote in dialogue form and published under the title Three Dialogues Between Hylas and Philonous in 1713. In this book, Berkeley's views were represented by Philonous (Greek: "lover of mind"), while Hylas ("hyle", Greek: "matter") embodies Berkeley's opponents, in particular John Locke.
Berkeley argued against Isaac Newton's doctrine of absolute space, time and motion in De Motu (On Motion), published in 1721. His arguments were a precursor to the views of Ernst Mach and Albert Einstein. In 1732, he published Alciphron, a Christian apologetic against the free-thinkers, and in 1734, he published The Analyst, a critique of the foundations of calculus, which was influential in the development of mathematics.
Interest in Berkeley's work increased after World War II because he tackled many of the issues of paramount interest to philosophy in the 20th century, such as the problems of perception, the difference between primary and secondary qualities, and the importance of language.
Biography
Ireland
Berkeley was born at his family home, Dysart Castle, near Thomastown, County Kilkenny, Ireland, the eldest son of William Berkeley, a cadet of the noble family of Berkeley, whose ancestry can be traced back to the Anglo-Saxon period and whose members had served as feudal lords and landowners in Gloucester, England. Little is known of his mother. He was educated at Kilkenny College and attended Trinity College Dublin, where he was elected a Scholar in 1702, being awarded BA in 1704 and MA and a Fellowship in 1707. He remained at Trinity College after the completion of his degree as a tutor and Greek lecturer.
His earliest publication was on mathematics, but the first that brought him notice was his An Essay towards a New Theory of Vision, first published in 1709. In the essay, Berkeley examines visual distance, magnitude, position and problems of sight and touch. While this work raised much controversy at the time, its conclusions are now accepted as an established part of the theory of optics.
The next publication to appear was the Treatise Concerning the Principles of Human Knowledge in 1710, which had great success and gave him a lasting reputation, though few accepted his theory that nothing exists outside the mind. This was followed in 1713 by Three Dialogues Between Hylas and Philonous, in which he propounded his system of philosophy, the leading principle of which is that the world, as represented by our senses, depends for its existence on being perceived.
For this theory, the Principles gives the exposition and the Dialogues the defence. One of his main objectives was to combat the prevailing materialism of his time. The theory was largely received with ridicule, while even those such as Samuel Clarke and William Whiston, who did acknowledge his "extraordinary genius," were nevertheless convinced that his first principles were false.
England and Europe
Shortly afterwards, Berkeley visited England and was received into the circle of Addison, Pope and Steele. In the period between 1714 and 1720, he interspersed his academic endeavours with periods of extensive travel in Europe, including one of the most extensive Grand Tours of the length and breadth of Italy ever undertaken. In 1721, he took Holy orders in the Church of Ireland, earning his doctorate in divinity, and once again chose to remain at Trinity College Dublin, lecturing this time in Divinity and in Hebrew. In 1721/2 he was made Dean of Dromore and, in 1724, Dean of Derry.
In 1723, Berkeley was named co-heir of Esther Vanhomrigh, along with the barrister Robert Marshall. This naming followed Vanhomrigh's violent quarrel with Jonathan Swift, who had been her intimate friend for many years. Vanhomrigh's choice of legatees caused a good deal of surprise since she did not know either of them well, although Berkeley as a very young man had known her father. Swift said that he did not grudge Berkeley his inheritance, much of which vanished in a lawsuit in any event. A story that Berkeley and Marshall disregarded a condition of the inheritance that they must publish the correspondence between Swift and Vanessa is probably untrue.
In 1725, Berkeley began the project of founding a college in Bermuda for training ministers and missionaries in the colony, in pursuit of which he gave up his deanery with its income of £1100.
Marriage and America
On 1 August 1728 at St Mary le Strand, London, Berkeley married Anne Forster, daughter of John Forster, Chief Justice of the Irish Common Pleas, and Forster's first wife Rebecca Monck. He then went to America on a salary of £100 per annum. He landed near Newport, Rhode Island, where he bought a plantation at Middletown, the famous "Whitehall". Berkeley purchased several enslaved Africans to work on the plantation. In 2023, Trinity College Dublin removed Berkeley's name from one of its libraries because of his slave ownership and his active defence of slavery.
It has been claimed that "he introduced Palladianism into America by borrowing a design from [William] Kent's Designs of Inigo Jones for the door-case of his house in Rhode Island, Whitehall". He also brought to New England John Smibert, the Scottish artist he "discovered" in Italy, who is generally regarded as the founding father of American portrait painting. Meanwhile, he drew up plans for the ideal city he planned to build on Bermuda. He lived at the plantation while he waited for funds for his college to arrive. The funds, however, were not forthcoming. "With the withdrawal from London of his own persuasive energies, opposition gathered force; and the Prime Minister, Walpole grew steadily more sceptical and lukewarm. At last it became clear that the essential Parliamentary grant would be not forthcoming", and in 1732 he left America and returned to London.
He and Anne had four children who survived infancy: Henry, George, William and Julia; and at least two other children who died in infancy. William's death in 1751 was a great cause of grief for his father.
Episcopate in Ireland
Berkeley was nominated to be the Bishop of Cloyne in the Church of Ireland on 18 January 1734. He was consecrated as such on 19 May 1734. He was the Bishop of Cloyne until his death on 14 January 1753, although he died at Oxford (see below).
Humanitarian work
While living in London's Saville Street, he took part in efforts to create a home for the city's abandoned children. The Foundling Hospital was founded by royal charter in 1739, and Berkeley is listed as one of its original governors.
Last works
His last two publications were Siris: A Chain of Philosophical Reflexions and Inquiries Concerning the Virtues of Tarwater, And divers other Subjects connected together and arising one from another (1744) and Further Thoughts on Tar-water (1752). Pine tar is an effective antiseptic and disinfectant when applied to cuts on the skin, but Berkeley argued for the use of pine tar as a broad panacea for diseases. His 1744 work on tar-water sold more copies than any of his other books during Berkeley's lifetime.
He remained at Cloyne until 1752, when he retired. With his wife and daughter Julia, he went to Oxford to live with his son George and supervise his education. He died soon afterwards and was buried in Christ Church Cathedral, Oxford. His affectionate disposition and genial manners made him much loved and held in warm regard by many of his contemporaries. Anne outlived her husband by many years, and died in 1786.
Contributions to philosophy
The use of the concepts of "spirit" and "idea" is central in Berkeley's philosophy. As used by him, these concepts are difficult to translate into modern terminology. His concept of "spirit" is close to the concept of "conscious subject" or of "mind", and the concept of "idea" is close to the concept of "sensation" or "state of mind" or "conscious experience".
Thus Berkeley denied the existence of matter as a metaphysical substance, but did not deny the existence of physical objects such as apples or mountains ("I do not argue against the existence of any one thing that we can apprehend, either by sense or reflection. That the things I see with mine eyes and touch with my hands do exist, really exist, I make not the least question. The only thing whose existence we deny, is that which philosophers call matter or corporeal substance. And in doing of this, there is no damage done to the rest of mankind, who, I dare say, will never miss it.", Principles #35). This basic claim of Berkeley's thought, his "idealism", is sometimes and somewhat derisively called "immaterialism" or, occasionally, subjective idealism. In Principles #3, he wrote, using a combination of Latin and English, esse is percipi (to be is to be perceived), most often if slightly inaccurately attributed to Berkeley as the pure Latin phrase esse est percipi. The phrase appears associated with him in authoritative philosophical sources, e.g., "Berkeley holds that there are no such mind-independent things, that, in the famous phrase, esse est percipi (aut percipere)—to be is to be perceived (or to perceive)."
Hence, human knowledge is reduced to two elements: that of spirits and of ideas (Principles #86). In contrast to ideas, a spirit cannot be perceived. A person's spirit, which perceives ideas, is to be comprehended intuitively by inward feeling or reflection (Principles #89). For Berkeley, we have no direct 'idea' of spirits, albeit we have good reason to believe in the existence of other spirits, for their existence explains the purposeful regularities we find in experience ("It is plain that we cannot know the existence of other spirits otherwise than by their operations, or the ideas by them excited in us", Dialogues #145). This is the solution that Berkeley offers to the problem of other minds. Finally, the order and purposefulness of the whole of our experience of the world and especially of nature overwhelms us into believing in the existence of an extremely powerful and intelligent spirit that causes that order. According to Berkeley, reflection on the attributes of that external spirit leads us to identify it with God. Thus a material thing such as an apple consists of a collection of ideas (shape, colour, taste, physical properties, etc.) which are caused in the spirits of humans by the spirit of God.
Theology
A convinced adherent of Christianity, Berkeley believed God to be present as an immediate cause of all our experiences.
Here is Berkeley's proof of the existence of God:
As T. I. Oizerman explained:
Berkeley believed that God is not the distant engineer of Newtonian machinery that in the fullness of time led to the growth of a tree in the university quadrangle. Rather, the perception of the tree is an idea that God's mind has produced in the mind, and the tree continues to exist in the quadrangle when "nobody" is there, simply because God is an infinite mind that perceives all.
The philosophy of David Hume concerning causality and objectivity is an elaboration of another aspect of Berkeley's philosophy. A.A. Luce, the most eminent Berkeley scholar of the 20th century, constantly stressed the continuity of Berkeley's philosophy. The fact that Berkeley returned to his major works throughout his life, issuing revised editions with only minor changes, also counts against any theory that attributes to him a significant volte-face.
Relativity arguments
John Locke (Berkeley's intellectual predecessor) states that we define an object by its primary and secondary qualities. He takes heat as an example of a secondary quality. If you put one hand in a bucket of cold water, and the other hand in a bucket of warm water, then put both hands in a bucket of lukewarm water, one of your hands is going to tell you that the water is cold and the other that the water is hot. Locke says that since two different objects (both your hands) perceive the water to be hot and cold, then the heat is not a quality of the water.
While Locke used this argument to distinguish primary from secondary qualities, Berkeley extends it to cover primary qualities in the same way. For example, he says that size is not a quality of an object because the size of the object depends on the distance between the observer and the object, or the size of the observer. Since an object is a different size to different observers, then size is not a quality of the object. Berkeley rejects shape with a similar argument and then asks: if neither primary qualities nor secondary qualities are of the object, then how can we say that there is anything more than the qualities we observe?
Relativity is the idea that there is no objective, universal truth; it is a state of dependence in which the existence of one object depends on that of another. According to Locke, primary qualities, such as shape and size, are mind-independent, whereas secondary qualities, such as taste and colour, are mind-dependent. George Berkeley rejected John Locke's account of primary and secondary qualities because Berkeley believed that "we cannot abstract the primary qualities (e.g. shape) from secondary ones (e.g. colour)". Berkeley argued that perception is dependent on the distance between the observer and the object, and "thus, we cannot conceive of mechanist material bodies which are extended but not (in themselves) colored". What is perceived can be of the same kind of quality yet appear completely opposite because of differing positions and perceptions; what we perceive can differ even when things of the same kind present contrary qualities. Secondary qualities aid in people's conception of primary qualities in an object, as when the colour of an object leads people to recognize the object itself. More specifically, the colour red can be perceived in apples, strawberries, and tomatoes, yet we would not know what these looked like without their colour. We would also be unaware of what the colour red looked like if red paint, or any object with a perceived red colour, failed to exist. From this, we can see that colours cannot exist on their own and can solely represent a group of perceived objects. Therefore, both primary and secondary qualities are mind-dependent: they cannot exist without our minds.
George Berkeley was a philosopher who opposed rationalism and "classical" empiricism. He was a "subjective idealist" or "empirical idealist", who believed that reality is constructed entirely of immaterial, conscious minds and their ideas; everything that exists is somehow dependent on the subject perceiving it, except the subject themselves. He denied the existence of abstract objects that many other philosophers believed to exist, notably Plato. According to Berkeley, "an abstract object does not exist in space or time and which is therefore entirely non-physical and non-mental"; however, this argument contradicts his relativity argument. If "esse est percipi" (Latin, meaning that to be is to be perceived) is true, then the objects in the relativity argument made by Berkeley can either exist or not. Berkeley believed that only the minds' perceptions and the Spirit that perceives are what exists in reality; what people perceive every day is only the idea of an object's existence, but the objects themselves are not perceived. Berkeley also discussed how, at times, material things cannot be perceived by an individual, and the individual's mind cannot comprehend the objects. However, there also exists an "omnipresent, eternal mind" that Berkeley believed to consist of God and the Spirit, both omniscient and all-perceiving. According to Berkeley, God is the entity who controls everything, yet Berkeley also argued that "abstract object[s] do not exist in space or time". In other words, as Warnock argues, Berkeley "had recognized that he could not square with his own talk of spirits, of our minds and of God; for these are perceivers and not among objects of perception. Thus he says, rather weakly and without elucidation, that in addition to our ideas, we also have notions—we know what it means to speak of spirits and their operations."
However, the relativity argument appears to conflict with immaterialism. Berkeley's immaterialism argues that "esse est percipi (aut percipere)", which in English is: to be is to be perceived (or to perceive). That is, only what perceives or is perceived is real, and without our perception or God's nothing can be real. Yet if the relativity argument, also made by Berkeley, holds that the perception of an object depends on the observer's position, then what is perceived may or may not be real, because a perception does not show the whole picture and the whole picture cannot be perceived. Berkeley also believes that "when one perceives mediately, one perceives one idea by means of perceiving another". It follows that if the initial standards of perception differ, later perceptions can differ as well. In the heat perception described above, one hand perceived the water to be hot and the other hand perceived the water to be cold, owing to relativity. If the idea "to be is to be perceived" is applied, the water should be both cold and hot, because both perceptions are perceived by different hands. However, the water cannot be cold and hot at the same time, as that is self-contradictory; this shows that what is perceived is not always true, because it can sometimes break the law of noncontradiction. In this case, "it would be arbitrary anthropocentrism to claim that humans have special access to the true qualities of objects". The truth for different people can be different, and humans are limited in their access to absolute truth owing to relativity. In sum, either nothing can be absolutely true owing to relativity, or the two arguments (to be is to be perceived, and the relativity argument) do not always work together.
New theory of vision
In his Essay Towards a New Theory of Vision, Berkeley frequently criticised the views of the Optic Writers, a title that seems to include Molyneux, Wallis, Malebranche and Descartes. In sections 1–51, Berkeley argued against the classical scholars of optics by holding that spatial depth, the distance that separates the perceiver from the perceived object, is itself invisible. That is, we do not see space directly or deduce its form logically using the laws of optics. Space for Berkeley is no more than a contingent expectation that visual and tactile sensations will follow one another in regular sequences that we come to expect through habit.
Berkeley goes on to argue that visual cues, such as the perceived extension or 'confusion' of an object, can only be used to indirectly judge distance, because the viewer learns to associate visual cues with tactile sensations. Berkeley gives the following analogy regarding indirect distance perception: one perceives distance indirectly just as one perceives a person's embarrassment indirectly. When looking at an embarrassed person, we infer indirectly that the person is embarrassed by observing the red colour on the person's face. We know through experience that a red face tends to signal embarrassment, as we've learned to associate the two.
The question concerning the visibility of space was central to the Renaissance perspective tradition and its reliance on classical optics in the development of pictorial representations of spatial depth. This matter has been debated by scholars since the 11th-century Arab polymath and mathematician Alhazen (Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham) affirmed in experimental contexts the visibility of space. This issue, which was raised in Berkeley's theory of vision, was treated at length in the Phenomenology of Perception of Maurice Merleau-Ponty, in the context of confirming the visual perception of spatial depth (la profondeur), and by way of refuting Berkeley's thesis.
Berkeley wrote about the perception of size in addition to that of distance. He is frequently misquoted as believing in size–distance invariance—a view held by the Optic Writers. This idea is that we scale the image size according to distance in a geometrical manner. The error may have become commonplace because the eminent historian and psychologist E. G. Boring perpetuated it. In fact, Berkeley argued that the same cues that evoke distance also evoke size, and that we do not first see size and then calculate distance. It is worth quoting Berkeley's words on this issue (Section 53):
What inclines men to this mistake (beside the humour of making one see by geometry) is, that the same perceptions or ideas which suggest distance, do also suggest magnitude ... I say they do not first suggest distance, and then leave it to the judgement to use that as a medium, whereby to collect the magnitude; but they have as close and immediate a connexion with the magnitude as with the distance; and suggest magnitude as independently of distance, as they do distance independently of magnitude.
Berkeley claimed that his visual theories were "vindicated" by a 1728 report regarding the recovery of vision in a 13-year-old boy operated on for congenital cataracts by the surgeon William Cheselden. In 2021, the name of Cheselden's patient was published for the first time: Daniel Dolins. Berkeley knew the Dolins family and had numerous social links to Cheselden, including the poet Alexander Pope and Princess Caroline, to whom Cheselden's patient was presented. The report misspelt Cheselden's name, used language typical of Berkeley, and may even have been ghost-written by Berkeley. Unfortunately, Dolins was never able to see well enough to read, and there is no evidence that the surgery improved Dolins' vision at any point prior to his death at age 30.
Philosophy of physics
"Berkeley's works display his keen interest in natural philosophy [...] from his earliest writings (Arithmetica, 1707) to his latest (Siris, 1744). Moreover, much of his philosophy is shaped fundamentally by his engagement with the science of his time." The profundity of this interest can be judged from numerous entries in Berkeley's Philosophical Commentaries (1707–1708), e.g. "Mem. to Examine & accurately discuss the scholium of the 8th Definition of Mr Newton's Principia." (#316)
Berkeley argued that forces and gravity, as defined by Newton, constituted "occult qualities" that "expressed nothing distinctly". He held that those who posited "something unknown in a body of which they have no idea and which they call the principle of motion, are in fact simply stating that the principle of motion is unknown". Therefore, those who "affirm that active force, action, and the principle of motion are really in bodies are adopting an opinion not based on experience". Forces and gravity existed nowhere in the phenomenal world. On the other hand, if they resided in the category of "soul" or "incorporeal thing", they "do not properly belong to physics" as a matter. Berkeley thus concluded that forces lay beyond any kind of empirical observation and could not be a part of proper science. He proposed his theory of signs as a means to explain motion and matter without reference to the "occult qualities" of force and gravity.
Berkeley's razor
Berkeley's razor is a rule of reasoning proposed by the philosopher Karl Popper in his study of Berkeley's key scientific work De Motu. Berkeley's razor is considered by Popper to be similar to Ockham's razor but "more powerful". It represents an extreme, empiricist view of scientific observation that states that the scientific method provides us with no true insight into the nature of the world. Rather, the scientific method gives us a variety of partial explanations about regularities that hold in the world and that are gained through experiments. The nature of the world, according to Berkeley, is only approached through proper metaphysical speculation and reasoning. Popper summarises Berkeley's razor as such:
A general practical result—which I propose to call "Berkeley's razor"—of [Berkeley's] analysis of physics allows us a priori to eliminate from physical science all essentialist explanations. If they have a mathematical and predictive content they may be admitted qua mathematical hypotheses (while their essentialist interpretation is eliminated). If not they may be ruled out altogether. This razor is sharper than Ockham's: all entities are ruled out except those which are perceived.
In another essay of the same book titled "Three Views Concerning Human Knowledge", Popper argues that Berkeley is to be considered as an instrumentalist philosopher, along with Robert Bellarmine, Pierre Duhem and Ernst Mach. According to this approach, scientific theories have the status of serviceable fictions, useful inventions aimed at explaining facts, and without any pretension to being true. Popper contrasts instrumentalism with the above-mentioned essentialism and his own "critical rationalism".
Philosophy of mathematics
In addition to his contributions to philosophy, Berkeley was also very influential in the development of mathematics, although in a rather indirect sense. "Berkeley was concerned with mathematics and its philosophical interpretation from the earliest stages of his intellectual life." Berkeley's "Philosophical Commentaries" (1707–1708) witness to his interest in mathematics:
Axiom. No reasoning about things whereof we have no idea. Therefore no reasoning about Infinitesimals. (#354)
Take away the signs from Arithmetic & Algebra, & pray what remains? (#767)
These are sciences purely Verbal, & entirely useless but for Practise in Societys of Men. No speculative knowledge, no comparison of Ideas in them. (#768)
In 1707, Berkeley published two treatises on mathematics. In 1734, he published The Analyst, subtitled A DISCOURSE Addressed to an Infidel Mathematician, a critique of calculus. Florian Cajori called this treatise "the most spectacular event of the century in the history of British mathematics." However, a recent study suggests that Berkeley misunderstood Leibnizian calculus. The mathematician in question is believed to have been either Edmond Halley, or Isaac Newton himself—though if to the latter, then the discourse was posthumously addressed, as Newton died in 1727. The Analyst represented a direct attack on the foundations and principles of calculus and, in particular, the notion of fluxion or infinitesimal change, which Newton and Leibniz used to develop the calculus. In his critique, Berkeley coined the phrase "ghosts of departed quantities", familiar to students of calculus. Ian Stewart's book From Here to Infinity captures the gist of his criticism.
Berkeley regarded his criticism of calculus as part of his broader campaign against the religious implications of Newtonian mechanics, as a defence of traditional Christianity against deism, which tends to distance God from His worshipers. Specifically, he observed that both Newtonian and Leibnizian calculus employed infinitesimals sometimes as positive, nonzero quantities and other times as a number explicitly equal to zero. Berkeley's key point in "The Analyst" was that Newton's calculus (and the laws of motion based on calculus) lacked rigorous theoretical foundations. He claimed that:
In every other Science Men prove their Conclusions by their Principles, and not their Principles by the Conclusions. But if in yours you should allow your selves this unnatural way of proceeding, the Consequence would be that you must take up with Induction, and bid adieu to Demonstration. And if you submit to this, your Authority will no longer lead the way in Points of Reason and Science.
Berkeley did not doubt that calculus produced real-world truth; simple physics experiments could verify that Newton's method did what it claimed to do. "The cause of Fluxions cannot be defended by reason", but the results could be defended by empirical observation, Berkeley's preferred method of acquiring knowledge at any rate. Berkeley, however, found it paradoxical that "Mathematicians should deduce true Propositions from false Principles, be right in Conclusion, and yet err in the Premises." In The Analyst he endeavoured to show "how Error may bring forth Truth, though it cannot bring forth Science". Newton's science, therefore, could not on purely scientific grounds justify its conclusions, and the mechanical, deistic model of the universe could not be rationally justified.
The difficulties raised by Berkeley were still present in the work of Cauchy whose approach to calculus was a combination of infinitesimals and a notion of limit, and were eventually sidestepped by Weierstrass by means of his (ε, δ) approach, which eliminated infinitesimals altogether. More recently, Abraham Robinson restored infinitesimal methods in his 1966 book Non-standard analysis by showing that they can be used rigorously.
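The disputed step can be made concrete with f(x) = x²: forming the quotient (f(x + h) − f(x))/h requires the increment h to be nonzero, while discarding the leftover term treats it as zero. The following Python sketch is a modern illustration added here for clarity, not Berkeley's or Newton's own notation:

def difference_quotient(x, h):
    # (f(x + h) - f(x)) / h for f(x) = x**2; algebra gives 2*x + h.
    return ((x + h) ** 2 - x ** 2) / h

# Dividing by h requires h != 0; dropping the trailing h treats it as 0.
# Berkeley's charge was that the same quantity cannot legitimately be both.
for h in (0.1, 0.001, 1e-6):
    print(h, difference_quotient(3.0, h))  # tends to 2*x = 6.0 as h shrinks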
Moral philosophy
The tract A Discourse on Passive Obedience (1712) is considered Berkeley's major contribution to moral and political philosophy.
In A Discourse on Passive Obedience, Berkeley defends the thesis that people have "a moral duty to observe the negative precepts (prohibitions) of the law, including the duty not to resist the execution of punishment." However, Berkeley does make exceptions to this sweeping moral statement, stating that we need not observe precepts of "usurpers or even madmen" and that people can obey different supreme authorities if there are more than one claims to the highest authority.
Berkeley defends this thesis with deductive proof stemming from the laws of nature. First, he establishes that because God is perfectly good, the end to which he commands humans must also be good, and that end must not benefit just one person, but the entire human race. Because these commands—or laws—if practised, would lead to the general fitness of humankind, it follows that they can be discovered by the right reason—for example, the law to never resist supreme power can be derived from reason because this law is "the only thing that stands between us and total disorder". Thus, these laws can be called the laws of nature, because they are derived from God—the creator of nature himself. "These laws of nature include duties never to resist the supreme power, lie under oath ... or do evil so that good may come of it."
One may view Berkeley's doctrine on Passive Obedience as a kind of 'Theological Utilitarianism', insofar as it states that we have a duty to uphold a moral code which presumably is working towards the ends of promoting the good of humankind. However, the concept of 'ordinary' utilitarianism is fundamentally different in that it "makes utility the one and only ground of obligation"—that is, Utilitarianism is concerned with whether particular actions are morally permissible in specific situations, while Berkeley's doctrine is concerned with whether or not we should follow moral rules in any and all circumstances. Whereas act utilitarianism might, for example, justify a morally impermissible act in light of the specific situation, Berkeley's doctrine of Passive Obedience holds that it is never morally permissible to not follow a moral rule, even when it seems like breaking that moral rule might achieve the happiest ends. Berkeley holds that even though sometimes, the consequences of an action in a specific situation might be bad, the general tendencies of that action benefit humanity.
Other important sources for Berkeley's views on morality are Alciphron (1732), especially dialogues I–III, and the Discourse to Magistrates (1738). Passive Obedience is notable partly for containing one of the earliest statements of rule utilitarianism.
Immaterialism
George Berkeley's theory that matter does not exist comes from the belief that "sensible things are those only which are immediately perceived by sense." Berkeley says in his Principles of Human Knowledge that "the ideas of sense are stronger, livelier, and clearer than those of the imagination; and they are also steady, orderly and coherent." From this, he argues, we can tell that the things we perceive are truly real rather than merely a dream.
All knowledge comes from perception; what we perceive are ideas, not things in themselves; a thing in itself must be outside experience; so the world only consists of ideas and minds that perceive those ideas; a thing only exists so far as it perceives or is perceived. Through this we can see that consciousness is considered something that exists to Berkeley due to its ability to perceive. "'To be,' said of the object, means to be perceived, 'esse est percipi'; 'to be', said of the subject, means to perceive or 'percipere'." Having established this, Berkeley then attacks the "opinion strangely prevailing amongst men, that houses, mountains, rivers, and in a word all sensible objects have an existence natural or real, distinct from being perceived". He believes this idea to be inconsistent because such an object with an existence independent of perception must have both sensible qualities, and thus be known (making it an idea), and also an insensible reality, which Berkeley believes is inconsistent. Berkeley believes that the error arises because people think that perceptions can imply or infer something about the material object. Berkeley calls this concept abstract ideas. He rebuts this concept by arguing that people cannot conceive of an object without also imagining the sensual input of the object. He argues in Principles of Human Knowledge that, similar to how people can only sense matter with their senses through the actual sensation, they can only conceive of matter (or, rather, ideas of matter) through the idea of sensation of matter. This implies that everything that people can conceive in regards to matter is only ideas about matter. Thus, matter, should it exist, must exist as collections of ideas, which can be perceived by the senses and interpreted by the mind. But if matter is just a collection of ideas, then Berkeley concludes that matter, in the sense of a material substance, does not exist as most philosophers of Berkeley's time believed. Indeed, if a person visualizes something, then it must have some colour, however dark or light; it cannot just be a shape of no colour at all if a person is to visualize it.
Berkeley's ideas raised controversy because his argument refuted Descartes' philosophy, which was expanded upon by Locke, and resulted in the rejection of Berkeley's form of empiricism by several philosophers of the eighteenth century. In Locke's philosophy, "the world causes the perceptual ideas we have of it by the way it interacts with our senses." This contradicts Berkeley's philosophy because it suggests the existence of physical causes in the world, whereas for Berkeley there is no physical world beyond our ideas. The only causes that exist in Berkeley's philosophy are those that are a result of the use of the will.
Berkeley's theory relies heavily on his form of empiricism, which in turn relies heavily on the senses. His empiricism can be defined by five propositions: all significant words stand for ideas; all knowledge of things is about ideas; all ideas come from without or from within; if from without it must be by the senses, and they are called sensations (the real things); if from within they are the operations of the mind, and are called thoughts. Berkeley clarifies his distinction between ideas by saying they "are imprinted on the senses," "perceived by attending to the passions and operations of the mind," or "are formed by help of memory and imagination." One refutation of his idea was: if someone leaves a room and stops perceiving that room, does that room no longer exist? Berkeley answers this by claiming that it is still being perceived and the consciousness that is doing the perceiving is God. (This makes Berkeley's argument hinge upon an omniscient, omnipresent deity.) This claim is the only thing holding up his argument, which is "depending for our knowledge of the world, and of the existence of other minds, upon a God that would never deceive us." Berkeley anticipates a second objection, which he refutes in Principles of Human Knowledge. He anticipates that the materialist may take a representational materialist standpoint: although the senses can only perceive ideas, these ideas resemble (and thus can be compared to) the actual, existing object. Thus, through the sensing of these ideas, the mind can make inferences as to matter itself, even though pure matter is non-perceivable. Berkeley's objection to that notion is that "an idea can be like nothing but an idea; a colour or figure can be like nothing but another colour or figure". Berkeley distinguishes between an idea, which is mind-dependent, and a material substance, which is not an idea and is mind-independent. As they are not alike, they cannot be compared, just as one cannot compare the colour red to something that is invisible, or the sound of music to silence, other than that one exists and the other does not. This is called the likeness principle: the notion that an idea can only be like (and thus compared to) another idea.
Berkeley attempted to show how ideas manifest themselves as different objects of knowledge.
Berkeley also attempted to prove the existence of God through his immaterialism.
Influence
Berkeley's Treatise Concerning the Principles of Human Knowledge was published three years before the publication of Arthur Collier's Clavis Universalis, which made assertions similar to Berkeley's. However, there seems to have been no influence or communication between the two writers.
German philosopher Arthur Schopenhauer once wrote of him: "Berkeley was, therefore, the first to treat the subjective starting-point really seriously and to demonstrate irrefutably its absolute necessity. He is the father of idealism...".
Berkeley is considered one of the originators of British empiricism. A linear development is often traced from three great "British Empiricists", leading from Locke through Berkeley to Hume.
Berkeley influenced many modern philosophers, especially David Hume. Thomas Reid admitted that he put forward a drastic criticism of Berkeleianism after having been an admirer of Berkeley's philosophical system for a long time. As Alfred North Whitehead notes, Berkeley's thought "made possible the work of Hume and thus Kant". Some authors draw a parallel between Berkeley and Edmund Husserl.
When Berkeley visited America, the American educator Samuel Johnson visited him, and the two later corresponded. Johnson convinced Berkeley to establish a scholarship program at Yale and to donate a large number of books, as well as his plantation, to the college when the philosopher returned to England. It was one of Yale's largest and most important donations; it doubled its library holdings, improved the college's financial position and brought Anglican religious ideas and English culture into New England. Johnson also took Berkeley's philosophy and used parts of it as a framework for his own American Practical Idealism school of philosophy. As Johnson's philosophy was taught to about half the graduates of American colleges between 1743 and 1776, and over half of the contributors to the Declaration of Independence were connected to it, Berkeley's ideas were indirectly a foundation of the American Mind.
Outside of America, during Berkeley's lifetime, his philosophical ideas were comparatively uninfluential. But interest in his doctrine grew from the 1870s when Alexander Campbell Fraser, "the leading Berkeley scholar of the nineteenth century", published The Works of George Berkeley. A powerful impulse to serious studies in Berkeley's philosophy was given by A. A. Luce and Thomas Edmund Jessop, "two of the twentieth century's foremost Berkeley scholars", thanks to whom Berkeley scholarship was raised to the rank of a special area of historico-philosophical science. In addition, the philosopher Colin Murray Turbayne wrote extensively on Berkeley's use of language as a model for visual, physiological, natural and metaphysical relationships.
The proportion of Berkeley scholarship in the literature on the history of philosophy is increasing. This can be judged from the most comprehensive bibliographies on George Berkeley. During the period 1709–1932, about 300 writings on Berkeley were published, amounting to roughly 1.5 publications per year. During 1932–1979, over one thousand works were brought out, i.e., about 20 works per year. Since then, the number of publications has reached 30 per annum. In 1977 publication began in Ireland of a special journal on Berkeley's life and thought (Berkeley Studies). In 1988, the Australian philosopher Colin Murray Turbayne established the International Berkeley Essay Prize Competition at the University of Rochester in an effort to advance scholarship and research on the works of Berkeley.
Beyond philosophy, Berkeley also influenced modern psychology with his work on John Locke's theory of association and how it could be used to explain how humans gain knowledge in the physical world. He also used the theory to explain perception, stating that all qualities were, as Locke would call them, "secondary qualities", and that perception therefore lay entirely in the perceiver and not in the object. Both topics are studied in modern psychology today.
Appearances in literature
Lord Byron's Don Juan references immaterialism in the Eleventh Canto:
When Bishop Berkeley said 'there was no matter,'
And proved it—'t was no matter what he said:
They say his system 't is in vain to batter,
Too subtle for the airiest human head;
And yet who can believe it? I would shatter
Gladly all matters down to stone or lead,
Or adamant, to find the world a spirit,
And wear my head, denying that I wear it.
Herman Melville humorously references Berkeley in Chapter 20 of Mardi (1849), when outlining a character's belief of being on board a ghostship:
And here be it said, that for all his superstitious misgivings about the brigantine; his imputing to her something equivalent to a purely phantom-like nature, honest Jarl was nevertheless exceedingly downright and practical in all hints and proceedings concerning her. Wherein, he resembled my Right Reverend friend, Bishop Berkeley–truly, one of your lords spiritual—who, metaphysically speaking, holding all objects to be mere optical delusions, was, notwithstanding, extremely matter-of-fact in all matters touching matter itself. Besides being pervious to the points of pins, and possessing a palate capable of appreciating plum-puddings:—which sentence reads off like a pattering of hailstones.
James Joyce references Berkeley's philosophy in the third episode of Ulysses (1922):
Who watches me here? Who ever anywhere will read these written words? Signs on a white field. Somewhere to someone in your flutiest voice. The good bishop of Cloyne took the veil of the temple out of his shovel hat: veil of space with coloured emblems hatched on its field. Hold hard. Coloured on a flat: yes, that's right. Flat I see, then think distance, near, far, flat I see, east, back. Ah, see now!
In commenting on a review of Ada or Ardor, author Vladimir Nabokov alludes to Berkeley's philosophy as informing his novel:
And finally I owe no debt whatsoever (as Mr. Leonard seems to think) to the famous Argentine essayist and his rather confused compilation "A New Refutation of Time." Mr. Leonard would have lost less of it had he gone straight to Berkeley and Bergson. (Strong Opinions, pp. 289–90)
James Boswell, in the part of his Life of Samuel Johnson covering the year 1763, recorded Johnson's opinion of one aspect of Berkeley's philosophy:
After we came out of the church, we stood talking for some time together of Bishop Berkeley's ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is untrue, it is impossible to refute it. I shall never forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it,– "I refute it thus."
Commemoration
Both the University of California, Berkeley, and the city of Berkeley, California, were named after him, although the pronunciation has evolved to suit American English. The naming was suggested in 1866 by Frederick H. Billings, a trustee of what was then called the College of California. Billings was inspired by Berkeley's Verses on the Prospect of Planting Arts and Learning in America, particularly the final stanza: "Westward the course of empire takes its way; the first four Acts already past, a fifth shall close the Drama with the day; time's noblest offspring is the last".
The Town of Berkley, currently the least populated town in Bristol County, Massachusetts, was founded on 18 April 1735 and named for George Berkeley.
A residential college and an Episcopal seminary at Yale University also bear Berkeley's name.
"Bishop Berkeley's Gold Medals" were two awards given annually at Trinity College Dublin, "provided outstanding merit is shown", to candidates answering a special examination in Greek. The awards were founded in 1752 by Berkeley. However, they have not been awarded since 2011. Other elements of Berkeley's legacy at Trinity are currently under review () due to his support of slavery. For example, the library at Trinity that was named after him in 1978 was "de-named" in April 2023 and renamed in October 2024 after Irish poet Eavan Boland. Another memorialization of him in the form of a stained glass window will remain, but used as part of "a retain-and-explain approach" where his legacy will be given further context.
An Ulster History Circle blue plaque commemorating him is located in Bishop Street Within, the city of Derry.
Berkeley's farmhouse in Middletown, Rhode Island, is preserved as Whitehall Museum House, also known as Berkeley House, and was listed on the National Register of Historic Places in 1970. St. Columba's Chapel, located in the same town, was formerly named "The Berkeley Memorial Chapel", and the appellation still survives at the end of the formal name of the parish, "St. Columba's, the Berkeley Memorial Chapel".
Writings
Original publications
Arithmetica (1707)
Miscellanea Mathematica (1707)
Philosophical Commentaries or Common-Place Book (1707–08, notebooks)
An Essay Towards a New Theory of Vision (1709)
A Treatise Concerning the Principles of Human Knowledge, Part I (1710)
Passive Obedience, or the Christian doctrine of not resisting the Supreme Power (1712)
Three Dialogues Between Hylas and Philonous (1713)
An Essay Towards Preventing the Ruin of Great Britain (1721)
De Motu (1721)
A Proposal for Better Supplying Churches in our Foreign Plantations, and for converting the Savage Americans to Christianity by a College to be erected in the Summer Islands (1725)
A Sermon preached before the incorporated Society for the Propagation of the Gospel in Foreign Parts (1732)
Alciphron, or the Minute Philosopher (1732)
The Theory of Vision, or Visual Language, shewing the immediate presence and providence of a Deity, vindicated and explained (1733)
The Analyst: A Discourse Addressed to an Infidel Mathematician (1734)
A Defence of Free-thinking in Mathematics, with Appendix concerning Mr. Walton's vindication of Sir Isaac Newton's Principle of Fluxions (1735)
Reasons for not replying to Mr. Walton's Full Answer (1735)
The Querist, containing several queries proposed to the consideration of the public (three parts, 1735–37).
A Discourse addressed to Magistrates and Men of Authority (1736)
Siris, a chain of philosophical reflections and inquiries, concerning the virtues of tar-water (1744).
A Letter to the Roman Catholics of the Diocese of Cloyne (1745)
A Word to the Wise, or an exhortation to the Roman Catholic clergy of Ireland (1749)
Maxims concerning Patriotism (1750)
Farther Thoughts on Tar-water (1752)
Miscellany (1752)
Collections
The Works of George Berkeley, D.D. Late Bishop of Cloyne in Ireland. To which is added, an account of his life, and several of his letters to Thomas Prior, Esq. Dean Gervais, and Mr. Pope, &c. &c. Printed for George Robinson, Pater Noster Row, 1784. Two volumes.
The Works of George Berkeley, D.D., formerly Bishop of Cloyne: Including Many of His Writings Hitherto Unpublished; With Prefaces, Annotations, His Life and Letters, and an Account of His Philosophy. Ed. by Alexander Campbell Fraser. In 4 Volumes. Oxford: Clarendon Press, 1901.
Vol. 1
Vol. 2
Vol. 3
Vol. 4
The Works of George Berkeley. Ed. by A. A. Luce and T. E. Jessop. Nine volumes. Edinburgh and London, 1948–1957.
Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Uni. Press.
1707. Of Infinites, 16–19.
1709. Letter to Samuel Molyneaux, 19–21.
1721. De Motu, 37–54.
1734. The Analyst, 60–92.
See also
List of people on the postage stamps of Ireland
Solipsism
"Tlön, Uqbar, Orbis Tertius"
Yogacara and consciousness-only schools of thought
References
Sources
Bibliographic resources
Jessop, T. E.; Luce, A. A. A Bibliography of George Berkeley. 2nd edn., Springer, 1973.
Turbayne, C. M. A Bibliography of George Berkeley 1963–1979. In: Berkeley: Critical and Interpretive Essays. Manchester, 1982, pp. 313–29.
Berkeley Bibliography (1979–2010) – A Supplement to those of Jessop and Turbayne by Silvia Parigi.
A Bibliography on George Berkeley – About 300 works from the 19th century to our days.
Philosophical studies
Daniel, Stephen H. (ed.), Re-examining Berkeley's Philosophy, Toronto: University of Toronto Press, 2007.
Daniel, Stephen H. (ed.), New Interpretations of Berkeley's Thought, Amherst: Humanity Books, 2008.
Dicker, Georges, Berkeley's Idealism. A Critical Examination, Cambridge: Cambridge University Press, 2011.
Gaustad, Edwin. George Berkeley in America. New Haven: Yale University Press, 1959.
Pappas, George S., Berkeley's Thought, Ithaca: Cornell University Press, 2000.
Stoneham, Tom, Berkeley's World: An Examination of the Three Dialogues, Oxford University Press, 2002.
Warnock, Geoffrey J., Berkeley, Penguin Books, 1953.
Winkler, Kenneth P., The Cambridge Companion to Berkeley, Cambridge: Cambridge University Press, 2005.
Attribution
Further reading
"Shows a thorough mastery of the literature on Berkeley, along with very perceptive remarks about the strength and weaknesses of most of the central commentators. ... Exhibits a mastery of all the material, both primary and secondary ..." Charles Larmore, for the editorial board, Journal of Philosophy.
R. Muehlmann is one of the Berkeley Prize Winners.
Edward Chaney (2000), 'George Berkeley's Grand Tours: The Immaterialist as Connoisseur of Art and Architecture', in E. Chaney, The Evolution of the Grand Tour: Anglo-Italian Cultural Relations since the Renaissance, 2nd ed. London, Routledge.
Costica Bradatan (2006), The Other Bishop Berkeley. An Exercise in Reenchantment, Fordham University Press, New York
New Interpretations of Berkeley's Thought. Ed. by S. H. Daniel. New York: Humanity Books, 2008, 319 pp. .
For reviews see:
Reviewed by Marc A. Hight, Hampden–Sydney College
Reviewed by Thomas M. Lennon – Berkeley Studies 19 (2008):51–56.
Secondary literature available on the Internet
Most sources listed below are suggested by Dr. Talia M. Bettcher in Berkeley: a Guide for the Perplexed (2008). See the textbook's description.
Luce, A. A. Berkeley and Malebranche. A Study in the Origins of Berkeley's Thought. Oxford: Oxford University Press, 1934 (2nd edn, with additional Preface, 1967).
Russell B. Berkeley // Bertrand Russell A History of Western Philosophy 3:1:16 (1945)
Turbayne, Colin Murray (1959). "Berkeley's Two Concepts of Mind" – Philosophy and Phenomenological Research, Vol. 20, No. 1, Sept. 1959, pp. 85–92, on JSTOR.org
Turbayne, Colin Murray (1962). "Berkeley's Two Concepts of Mind Part II" – Philosophy and Phenomenological Research, Vol. 22, No. 3, March 1962, pp. 383–386, on JSTOR.org
Reviewed by: Désirée Park. Studi internazionali filosofici 3 (1971):228–30; G. J. Warnock. Journal of Philosophy 69, 15 (1972):460–62; Günter Gawlick "Menschheitsglück und Wille Gottes: Neues Licht auf Berkeleys Ethik." Philosophische Rundschau 1–2 (January 1973):24–42; H. M. Bracken. Eighteenth-Century Studies 3 (1973): 396–97; and Stanley Grean. Journal of the History of Philosophy 12, 3 (1974): 398–403.
Tipton, I. C. Berkeley, The Philosophy of Immaterialism London: Methuen, 1974.
"Ian C. Tipton, one of the world's great Berkeley scholars and longtime president of the International Berkeley Society. ... Of the many works about Berkeley that were published in the twentieth century, few rival in importance his Berkeley: The Philosophy of Immaterialism ... The philosophical insight, combined with the mastery of Berkeley's texts, that Ian brought to this work make it one of the masterpieces of Berkeley scholarship. It is not surprising therefore that, when the Garland Publishing Company brought out, late in 1980s, a 15-volume collection of major works on Berkeley, Ian's book was one of only two full-length studies of Berkeley published after 1935 to be included" (Charles J. McCracken. In Memoriam: Ian C. Tipton // The Berkeley Newsletter 17 (2006), p. 4).
Winkler, Kenneth P. Berkeley: An Interpretation. Oxford: Clarendon Press, 1989.
Berman, David. George Berkeley: Idealism and the Man. Oxford: Clarendon Press, 1994.
The Cambridge Companion to Berkeley. Ed. by Kenneth P. Winkler. Cambridge: Cambridge University Press, 2005.
Daniel, Stephen H., ed. Reexamining Berkeley's Philosophy. Toronto: University of Toronto Press, 2007.
Roberts, John. A Metaphysics for the Mob: The Philosophy of George Berkeley. New York: Oxford University Press, 2007. 172 pp.
Reviewed by Marc A. Hight, University of Tartu/Hampden–Sydney College
External links
George Berkeley at the Eighteenth-Century Poetry Archive (ECPA)
George Berkeley in the Internet Encyclopedia of Philosophy
Berkeley's Philosophy of Science in the Internet Encyclopedia of Philosophy
International Berkeley Society
A list of the published works by and about Berkeley as well as online links
Berkeley's Life and Works
Another perspective on how Berkeley framed his immaterialism
Original texts and discussion concerning The Analyst controversy
Contains more easily readable versions of New Theory of Vision, Principles of Human Knowledge, Three Dialogues, and Alciphron
An extensive compendium of online resources, including a gallery of Berkeley's images
Electronic texts by the philosopher Charlie Dunbar Broad (1887–1971):
Broad, C. D. Berkeley's Argument About Material Substance New York: 1975 (Repr. of the 1942 ed. publ. by the British Academy, London.)
Broad, C. D. Berkeley's Denial of Material Substance – Published in: The Philosophical Review Vol. LXIII (1954).
Rick Grush syllabus Empiricism (J. Locke, G. Berkeley, D. Hume)
Berkeley's (1734) The Analyst – digital facsimile.
1685 births
1753 deaths
17th-century Anglican theologians
18th-century Anglican theologians
18th-century Irish male writers
18th-century Irish philosophers
18th-century Irish writers
17th-century Anglo-Irish people
18th-century Anglo-Irish people
Academics of Trinity College Dublin
Alumni of Trinity College Dublin
Anglican bishops of Cloyne
Anglican philosophers
Deans of Derry
Deans of Dromore
Empiricists
Enlightenment philosophers
Epistemologists
Fellows of Trinity College Dublin
History of calculus
Idealists
Irish expatriates in England
Irish natural philosophers
Irish religious writers
Irish slave owners
People educated at Kilkenny College
Christian clergy from County Kilkenny
Philosophers of science
Scholars of Trinity College Dublin
Burials at Christ Church Cathedral, Oxford | George Berkeley | [
"Mathematics"
] | 12,116 | [
"Mathematics of infinitesimals",
"History of calculus",
"Calculus"
] |
11,971 | https://en.wikipedia.org/wiki/Galaxy%20formation%20and%20evolution | In cosmology, the study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies, the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies. Galaxy formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model in general agreement with observed phenomena is the Lambda-CDM model—that is, clustering and merging allows galaxies to accumulate mass, determining both their shape and structure. Hydrodynamics simulation, which simulates both baryons and dark matter, is widely used to study galaxy formation and evolution.
Commonly observed properties of galaxies
Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies.
Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:
Many of the properties of galaxies (including the galaxy color–magnitude diagram) indicate that there are fundamentally two types of galaxies. These groups divide into blue star-forming galaxies that are more like spiral types, and red non-star forming galaxies that are more like elliptical galaxies.
Spiral galaxies are quite thin, dense, and rotate relatively fast, while the stars in elliptical galaxies have randomly oriented orbits.
The majority of giant galaxies contain a supermassive black hole in their centers, ranging in mass from millions to billions of times the mass of the Sun. The black hole mass is tied to the host galaxy bulge or spheroid mass.
Metallicity has a positive correlation with the absolute magnitude (luminosity) of a galaxy.
Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.
Current models also predict that the majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed.
Formation of disk galaxies
The earliest stage in the evolution of galaxies is their formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter; however, at present, none of them exactly predicts the results of observation.
Top-down theories
In 1962, Olin J. Eggen, Donald Lynden-Bell, and Allan Sandage proposed a theory that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a singular homogeneous cloud. It breaks, and these smaller clouds of gas form stars. Since the dark matter does not dissipate as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo. Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model. It was first proposed by Leonard Searle and Robert Zinn that galaxies instead form by the coalescence of smaller progenitors. The monolithic-collapse picture, known as a top-down formation scenario, is quite simple yet no longer widely accepted.
Bottom-up theory
More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.
Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction.
The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang. It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe. The reason is that these galaxy formation models predict a large number of mergers. If disk galaxies merge with another galaxy of comparable mass (at least 15 percent of its mass) the merger will likely destroy, or at a minimum greatly disrupt the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe.
Galaxy mergers and the formation of elliptical galaxies
Elliptical galaxies (most notably supergiant ellipticals, such as ESO 306-17) are among some of the largest known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that the velocity of the stars does not necessarily contribute to flattening of the galaxy, such as in spiral galaxies. Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy's mass.
Elliptical galaxies have two main stages of evolution. The first is due to the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state. The mass of the black hole is also correlated to a property called sigma which is the dispersion of the velocities of stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000. Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely found in crowded regions of the universe (such as galaxy clusters).
Astronomers now see elliptical galaxies as some of the most evolved systems in the universe. It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If those colliding galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors, but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds.
Mergers between such large galaxies are regarded as violent, and the frictional interaction of the gas between the two galaxies can cause gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy. By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy.
In the Local Group, the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from its current path around the Milky Way. The remnant could be a giant elliptical galaxy.
Galaxy quenching
One observation that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies.
As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching".
Stars form out of cold gas (see also the Kennicutt–Schmidt law), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas. Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars.
One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies. The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy's interactions with other galaxies. As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas. For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.
Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.
Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies.
Hydrodynamics Simulation
Dark energy and dark matter account for most of the Universe's energy, so it is valid to ignore baryons when simulating large-scale structure formation (using methods such as N-body simulation). However, since the visible components of galaxies consist of baryons, it is crucial to include baryons in the simulation to study the detailed structures of galaxies. At first, the baryon component consists of mostly hydrogen and helium gas, which later transforms into stars during the formation of structures. From observations, models used in simulations can be tested and the understanding of different stages of galaxy formation can be improved.
Euler equations
In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations, which can be expressed mainly in three different ways: Lagrangian, Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give specific forms of hydrodynamical equations. When using the Lagrangian approach to specify the field, it is assumed that the observer tracks a specific fluid parcel with its unique characteristics during its movement through space and time. In contrast, the Eulerian approach emphasizes particular locations in space that the fluid passes through as time progresses.
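For reference, in the Eulerian (conservation-law) form the equations for an inviscid, self-gravitating ideal gas can be written as follows; the notation (density \rho, velocity \mathbf{v}, pressure P, total energy density e, gravitational potential \Phi) is our own choice rather than anything fixed by the text:

\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0, \qquad
\frac{\partial (\rho\mathbf{v})}{\partial t} + \nabla\cdot\left(\rho\,\mathbf{v}\otimes\mathbf{v} + P\,\mathbb{I}\right) = -\rho\,\nabla\Phi, \qquad
\frac{\partial e}{\partial t} + \nabla\cdot\left[(e+P)\,\mathbf{v}\right] = -\rho\,\mathbf{v}\cdot\nabla\Phi,

closed by an ideal-gas equation of state such as P = (\gamma - 1)\,u, with u the internal energy density. In the Lagrangian form, the same physics is written in terms of the comoving derivative d/dt = \partial/\partial t + \mathbf{v}\cdot\nabla following a fluid parcel.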
Baryonic Physics
To shape the population of galaxies, the hydrodynamical equations must be supplemented by a variety of astrophysical processes mainly governed by baryonic physics.
Gas cooling
Processes such as collisional excitation, ionization, and inverse Compton scattering can cause the internal energy of the gas to be dissipated. In simulations, cooling processes are realized by coupling cooling functions to the energy equation. Besides primordial cooling, cooling by heavy elements (metals) dominates at high temperatures. At lower temperatures, fine-structure and molecular cooling must also be considered to simulate the cold phase of the interstellar medium.
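As a minimal sketch of such a coupling, the Python fragment below integrates the internal energy of a single gas element under a toy cooling function; the power-law form of cooling_function and all numerical values here are illustrative assumptions, not values from the text or from any production code.

import numpy as np

K_B = 1.380649e-16   # Boltzmann constant, erg/K
M_P = 1.6726e-24     # proton mass, g

def cooling_function(T):
    """Toy Lambda(T) in erg cm^3 s^-1; an assumed placeholder power law."""
    return 1e-22 * (T / 1e6) ** 0.5

def cool_gas(u, n_H, dt, mu=0.6, gamma=5.0 / 3.0):
    """Advance specific internal energy u (erg/g) of gas with hydrogen
    number density n_H (cm^-3) over a timestep dt (s), subcycling so that
    no substep removes more than 10% of the current energy."""
    t = 0.0
    while t < dt:
        T = (gamma - 1.0) * mu * M_P * u / K_B       # temperature from u
        rho = mu * M_P * n_H                         # rough mass density, g/cm^3
        du_dt = n_H**2 * cooling_function(T) / rho   # energy loss rate, erg/g/s
        if du_dt <= 0.0:
            break
        step = min(dt - t, 0.1 * u / du_dt)          # limit fractional loss
        u -= du_dt * step
        t += step
    return u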
Interstellar medium
Complex multi-phase structure, including relativistic particles and magnetic fields, makes simulation of the interstellar medium difficult. In particular, modeling the cold phase of the interstellar medium poses technical difficulties due to the short timescales associated with the dense gas. In early simulations, the dense gas phase was frequently not modeled directly but rather characterized by an effective polytropic equation of state. More recent simulations use a multimodal distribution to describe the gas density and temperature distributions, which directly models the multi-phase structure. However, more detailed physical processes need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation.
Star formation
As cold and dense gas accumulates, it undergoes gravitational collapse and eventually forms stars. To simulate this process, a portion of the gas is transformed into collisionless star particles, which represent coeval, single-metallicity stellar populations and are described by an underlying initial mass function. Observations suggest that the star formation efficiency in molecular gas is almost universal, with around 1% of the gas being converted into stars per free-fall time. In simulations, the gas is typically converted into star particles using a probabilistic sampling scheme based on the calculated star formation rate. Some simulations seek an alternative to the probabilistic sampling scheme and aim to better capture the clustered nature of star formation by treating star clusters as the fundamental unit of star formation. This approach permits the growth of star particles by accreting material from the surrounding medium. In addition, modern models of galaxy formation track the evolution of these stars and the mass they return to the gas component, leading to an enrichment of the gas with metals.
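A minimal sketch of the probabilistic sampling scheme is given below; converting the whole gas element at once (rather than spawning star particles of a fixed smaller mass) is a simplification made here for brevity, and only the roughly 1% efficiency per free-fall time is taken from the text.

import numpy as np

G = 6.674e-8                      # gravitational constant, cgs
rng = np.random.default_rng(0)    # arbitrary seed for reproducibility

def free_fall_time(rho):
    """t_ff = sqrt(3*pi / (32*G*rho)) for gas mass density rho (g/cm^3)."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def maybe_form_star(m_gas, rho, dt, eps_ff=0.01):
    """With probability 1 - exp(-eps_ff * dt / t_ff), convert the whole gas
    element into a collisionless star particle. Returns the pair
    (gas mass remaining, star mass formed)."""
    p = 1.0 - np.exp(-eps_ff * dt / free_fall_time(rho))
    if rng.random() < p:
        return 0.0, m_gas
    return m_gas, 0.0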
Stellar feedback
Stars have an influence on their surrounding gas by injecting energy and momentum. This creates a feedback loop that regulates the process of star formation. To effectively control star formation, stellar feedback must generate galactic-scale outflows that expel gas from galaxies. Various methods are utilized to couple energy and momentum, particularly from supernova explosions, to the surrounding gas. These methods differ in how the energy is deposited, either thermally or kinetically. However, excessive radiative gas cooling must be avoided in the former case. Cooling is expected in dense and cold gas, but it cannot be reliably modeled in cosmological simulations due to low resolution. This leads to artificial and excessive cooling of the gas, causing the supernova feedback energy to be lost via radiation and significantly reducing its effectiveness. In the latter case, kinetic energy cannot be radiated away until it thermalizes. However, using hydrodynamically decoupled wind particles to inject momentum non-locally into the gas surrounding active star-forming regions may still be necessary to achieve large-scale galactic outflows. Recent models treat stellar feedback explicitly. These models not only incorporate supernova feedback but also consider other feedback channels such as energy and momentum injection from stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. During the Cosmic Dawn, galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback.
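The contrast between thermal and kinetic deposition can be illustrated schematically as follows; the equal-share energy split, the field names, and the default single-supernova energy of 10^51 erg are assumptions of this sketch, not any particular code's scheme.

import numpy as np

def inject_supernova(neighbors, E_sn=1e51, mode="thermal"):
    """Distribute supernova energy E_sn (erg) equally over neighboring gas
    elements, either as heat (raising specific energy u, erg/g) or as
    velocity kicks along the unit vector r_hat pointing away from the star."""
    share = E_sn / len(neighbors)
    for gas in neighbors:
        if mode == "thermal":
            gas["u"] += share / gas["m"]                 # deposit as heat
        else:
            v_kick = np.sqrt(2.0 * share / gas["m"])     # from (1/2) m v^2 = share
            gas["v"] = gas["v"] + v_kick * gas["r_hat"]  # momentum kick

# Example: one 10^33 g neighbor receiving the kinetic variant.
cell = {"m": 1e33, "u": 1e12, "v": np.zeros(3), "r_hat": np.array([1.0, 0.0, 0.0])}
inject_supernova([cell], mode="kinetic")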
Supermassive black holes
Supermassive black holes are also included in simulations, numerically seeded in dark matter haloes, because they are observed in many galaxies and their mass affects the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi–Hoyle model.
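A sketch of the Bondi–Hoyle prescription with the commonly applied Eddington cap follows; the boost factor alpha and the radiative efficiency eps_r are illustrative defaults, not quantities given in the text.

import numpy as np

G       = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
M_P     = 1.6726e-24  # proton mass, g
SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2
C_LIGHT = 2.998e10    # speed of light, cm/s

def accretion_rate(M_bh, rho, c_s, v_rel, alpha=1.0, eps_r=0.1):
    """Bondi-Hoyle rate 4*pi*G^2*M^2*rho / (c_s^2 + v^2)^(3/2) in g/s,
    capped at the Eddington rate for radiative efficiency eps_r."""
    mdot_bondi = alpha * 4.0 * np.pi * G**2 * M_bh**2 * rho \
                 / (c_s**2 + v_rel**2) ** 1.5
    mdot_edd = 4.0 * np.pi * G * M_bh * M_P / (eps_r * SIGMA_T * C_LIGHT)
    return min(mdot_bondi, mdot_edd)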
Active galactic nuclei
Active galactic nuclei (AGN) affect the observational phenomena of supermassive black holes and, further, regulate black hole growth and star formation. In simulations, AGN feedback is usually classified into two modes, namely quasar and radio mode. Quasar mode feedback is linked to the radiatively efficient mode of black hole growth and is frequently incorporated through energy or momentum injection, as in the sketch below. The regulation of star formation in massive galaxies is believed to be significantly influenced by radio mode feedback, which occurs due to the presence of highly collimated jets of relativistic particles. These jets are typically linked to X-ray bubbles that possess enough energy to counterbalance cooling losses.
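In practice the mode is often selected by the black hole's Eddington ratio; in the sketch below the threshold value of 0.01 and the coupling efficiencies are common but model-dependent choices, and the function names are ours.

def agn_feedback_mode(mdot, mdot_edd, chi=0.01):
    """Quasar (radiatively efficient) mode above an Eddington-ratio
    threshold chi, radio (jet/bubble) mode below it."""
    return "quasar" if mdot / mdot_edd > chi else "radio"

def quasar_heating_rate(mdot, eps_r=0.1, eps_f=0.05, c=2.998e10):
    """Thermal energy injection rate eps_f * eps_r * mdot * c^2 (erg/s),
    i.e. a fixed coupling fraction of the bolometric luminosity."""
    return eps_f * eps_r * mdot * c**2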
Magnetic fields
The ideal magnetohydrodynamics approach is commonly utilized in cosmological simulations since it provides a good approximation for cosmological magnetic fields. The effect of magnetic fields on the dynamics of gas is generally negligible on large cosmological scales. Nevertheless, magnetic fields are a critical component of the interstellar medium since they provide pressure support against gravity and affect the propagation of cosmic rays.
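Concretely, in the ideal limit the field is frozen into the gas and evolves according to the induction equation,

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}), \qquad \nabla\cdot\mathbf{B} = 0,

which is solved alongside the Euler equations sketched above, with the magnetic pressure B^2/8\pi (in Gaussian units) added to the gas pressure in the momentum and energy fluxes.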
Cosmic rays
Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, serving as a crucial heating channel, and potentially driving galactic gas outflows. The propagation of cosmic rays is strongly affected by magnetic fields. In simulations, therefore, equations describing the cosmic ray energy and flux are coupled to the magnetohydrodynamics equations.
Radiation Hydrodynamics
Radiation hydrodynamics simulations are computational methods used to study the interaction of radiation with matter. In astrophysical contexts, radiation hydrodynamics is used to study the epoch of reionization, when the Universe was at high redshift. There are several numerical methods used for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo, and moment-based methods. Ray-tracing involves tracing the paths of individual photons through the simulation and computing their interactions with matter at each step. This method is computationally expensive but can produce very accurate results.
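The attenuation step at the heart of a ray-tracing scheme can be illustrated with a deliberately simplified 1D toy; the default cross-section corresponds roughly to hydrogen photoionization at threshold, and the cell setup in the example is arbitrary.

import numpy as np

def trace_ray(n_HI, ds, N_photons, sigma=6.3e-18):
    """March a photon packet through a row of cells with neutral-hydrogen
    number densities n_HI (cm^-3) and cell size ds (cm). Returns the
    photons absorbed in each cell and the number that escape."""
    absorbed = np.zeros_like(n_HI)
    for i, n in enumerate(n_HI):
        tau = n * sigma * ds                      # optical depth of cell i
        absorbed[i] = N_photons * (1.0 - np.exp(-tau))
        N_photons -= absorbed[i]                  # attenuate the packet
    return absorbed, N_photons

# Example: 10 cells of roughly 1 kpc each with n_HI = 10^-3 cm^-3.
absorbed, escaped = trace_ray(np.full(10, 1e-3), 3.1e21, 1e50)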
See also
List of galaxies
Red nugget, small galaxies packed with large amounts of red stars
Further reading
References
External links
NOAO gallery of galaxy images
Image of Andromeda galaxy (M31)
Javascript passive evolution calculator for early type (elliptical) galaxies
Video on the evolution of galaxies by Canadian astrophysicist Doctor P
Formation and evolution
Stellar evolution
Concepts in astronomy
Physical cosmological concepts | Galaxy formation and evolution | [
"Physics",
"Astronomy"
] | 3,992 | [
"Physical cosmological concepts",
"Concepts in astrophysics",
"Concepts in astronomy",
"Galaxies",
"Astrophysics",
"Stellar evolution",
"Astronomical objects"
] |
12,013 | https://en.wikipedia.org/wiki/Girth%20%28graph%20theory%29 | In graph theory, the girth of an undirected graph is the length of a shortest cycle contained in the graph. If the graph does not contain any cycles (that is, it is a forest), its girth is defined to be infinity.
For example, a 4-cycle (square) has girth 4. A grid has girth 4 as well, and a triangular mesh has girth 3. A graph with girth four or more is triangle-free.
Cages
A cubic graph (one in which all vertices have degree three) of girth g that is as small as possible is known as a g-cage (or as a (3,g)-cage). The Petersen graph is the unique 5-cage (it is the smallest cubic graph of girth 5), the Heawood graph is the unique 6-cage, the McGee graph is the unique 7-cage and the Tutte eight-cage is the unique 8-cage. There may exist multiple cages for a given girth. For instance there are three nonisomorphic 10-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph.
Girth and graph coloring
For any positive integers g and k, there exists a graph with girth at least g and chromatic number at least k; for instance, the Grötzsch graph is triangle-free and has chromatic number 4, and repeating the Mycielskian construction used to form the Grötzsch graph produces triangle-free graphs of arbitrarily large chromatic number. Paul Erdős was the first to prove the general result, using the probabilistic method. More precisely, he showed that a random graph on n vertices, formed by choosing independently whether to include each edge with a suitably chosen probability, has, with probability tending to 1 as n goes to infinity, relatively few cycles of length g or less, but no large independent set. Therefore, removing one vertex from each short cycle leaves a smaller graph with girth greater than g, in which each color class of a coloring must be small and which therefore requires at least k colors in any coloring.
Explicit, though large, graphs with high girth and chromatic number can be constructed as certain Cayley graphs of linear groups over finite fields. These remarkable Ramanujan graphs also have large expansion coefficient.
Related concepts
The odd girth and even girth of a graph are the lengths of a shortest odd cycle and shortest even cycle respectively.
The circumference of a graph is the length of the longest (simple) cycle, rather than the shortest.
Thought of as the least length of a non-trivial cycle, the girth admits natural generalisations as the 1-systole or higher systoles in systolic geometry.
Girth is the dual concept to edge connectivity, in the sense that the girth of a planar graph is the edge connectivity of its dual graph, and vice versa. These concepts are unified in matroid theory by the girth of a matroid, the size of the smallest dependent set in the matroid. For a graphic matroid, the matroid girth equals the girth of the underlying graph, while for a co-graphic matroid it equals the edge connectivity.
Computation
The girth of an undirected graph can be computed by running a breadth-first search from each node, with complexity O(nm), where n is the number of vertices of the graph and m is the number of edges. A practical optimization, shown in the sketch below, is to limit the depth of the BFS to a depth that depends on the length of the smallest cycle discovered so far. Better algorithms are known in the case where the girth is even and when the graph is planar. In terms of lower bounds, computing the girth of a graph is at least as hard as solving the triangle finding problem on the graph.
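A straightforward Python implementation of the all-sources BFS method, including the depth-limiting optimization just mentioned, might look as follows; the adjacency-list representation is our own choice.

from collections import deque

def girth(adj):
    """Girth of a simple undirected graph given as {vertex: set of
    neighbours}; returns float('inf') for a forest. Runs a BFS from every
    vertex, pruning levels that can no longer improve the best cycle."""
    best = float("inf")
    for s in adj:
        dist = {s: 0}
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if 2 * dist[u] >= best:       # deeper levels cannot shorten best
                break
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif w != parent[u]:      # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# The 4-cycle (square) from the introduction has girth 4.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert girth(square) == 4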
References
Graph invariants | Girth (graph theory) | [
"Mathematics"
] | 778 | [
"Graph invariants",
"Mathematical relations",
"Graph theory"
] |
12,024 | https://en.wikipedia.org/wiki/General%20relativity | General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever present matter and radiation. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data.
Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic.
Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes, respectively. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe.
Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.
History
Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.
The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests.
General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.
In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated."
From classical mechanics to general relativity
General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity.
Geometry of Newtonian gravity
At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.
Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.
Relativistic generalization
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry.
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.
A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.
The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).
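To make the local-flatness statement explicit: in suitable locally inertial coordinates around a point $p$ (a standard formulation, added here for orientation, with $\eta_{\mu\nu}$ the Minkowski metric), the metric and connection satisfy

$$g_{\mu\nu}(p) = \eta_{\mu\nu} = \operatorname{diag}(-1, 1, 1, 1), \qquad \partial_\alpha g_{\mu\nu}(p) = 0, \qquad \Gamma^{\mu}{}_{\alpha\beta}(p) = 0.$$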
Einstein's equations
Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:

$$G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \kappa\, T_{\mu\nu}$$

On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular,

$$R = g^{\mu\nu} R_{\mu\nu}$$

is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as

$$R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}.$$

On the right-hand side, $\kappa$ is a constant and $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be $\kappa = 8\pi G/c^{4}$, where $G$ is the Newtonian constant of gravitation and $c$ the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,

$$R_{\mu\nu} = 0.$$
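To convey how weak the coupling between matter and curvature is, here is a minimal numerical sketch (the constant values are assumed, rounded CODATA-style figures) evaluating $\kappa$:

```python
# Minimal numerical sketch (assumed, rounded physical constants): evaluate the
# proportionality constant kappa = 8*pi*G/c**4 in Einstein's field equations.
import math

G = 6.674e-11   # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in vacuum, m/s

kappa = 8 * math.pi * G / c**4
print(f"kappa = {kappa:.3e} s^2 m^-1 kg^-1")  # ~2.08e-43
```

The smallness of $\kappa$ in SI units is one way of saying that enormous energy densities are required to curve spacetime appreciably.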
In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic.
The geodesic equation is:

$$\frac{d^{2} x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\alpha\beta} \frac{dx^{\alpha}}{ds} \frac{dx^{\beta}}{ds} = 0,$$

where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^{\mu}{}_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for the repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
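Since the Christoffel symbols are determined entirely by the metric, they can be computed symbolically. The following sketch, assuming the sympy library and using the Schwarzschild metric purely as an illustration, implements the standard formula $\Gamma^{\mu}{}_{\alpha\beta} = \tfrac{1}{2} g^{\mu\nu}(\partial_\alpha g_{\nu\beta} + \partial_\beta g_{\nu\alpha} - \partial_\nu g_{\alpha\beta})$:

```python
# Hedged sketch: compute the Christoffel symbols entering the geodesic
# equation for an illustrative metric (Schwarzschild, chosen as an example;
# any metric matrix could be substituted). Requires sympy.
import sympy as sp

t, r, th, ph, rs = sp.symbols('t r theta phi r_s', positive=True)
x = [t, r, th, ph]

# Schwarzschild metric in (-,+,+,+) signature, with G = c = 1
g = sp.diag(-(1 - rs/r), 1/(1 - rs/r), r**2, r**2 * sp.sin(th)**2)
g_inv = g.inv()

def christoffel(mu, a, b):
    """Gamma^mu_{ab} = 1/2 g^{mu nu} (d_a g_{nu b} + d_b g_{nu a} - d_nu g_{ab})"""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[mu, nu] *
        (sp.diff(g[nu, b], x[a]) + sp.diff(g[nu, a], x[b]) - sp.diff(g[a, b], x[nu]))
        for nu in range(4)))

# e.g. Gamma^r_{tt}, the component behind "gravitational acceleration":
print(christoffel(1, 0, 0))   # -> r_s*(r - r_s)/(2*r**3), up to simplification
```

Substituting any other metric changes only the `g = sp.diag(...)` line; the Christoffel formula itself is generic.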
Total force in general relativity
In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by

$$U_f(r) = -\frac{GMm}{r} + \frac{L^{2}}{2mr^{2}} - \frac{GML^{2}}{mc^{2}r^{3}}.$$

A conservative total force can then be obtained as its negative gradient:

$$F_f(r) = -\frac{GMm}{r^{2}} + \frac{L^{2}}{mr^{3}} - \frac{3GML^{2}}{mc^{2}r^{4}},$$

where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in circular motion. The third term represents the relativistic effect.
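As a rough orientation rather than a definitive computation, the following sketch compares the sizes of the three force terms for assumed, Mercury-like orbital parameters:

```python
# Illustrative sketch (all orbital parameters assumed, Mercury-like values):
# compare the three terms of the total force above, in SI units.
G, c = 6.674e-11, 2.998e8
M = 1.989e30          # solar mass, kg
m = 3.301e23          # Mercury's mass, kg
r = 5.791e10          # mean orbital radius, m
v = 4.79e4            # mean orbital speed, m/s
L = m * v * r         # orbital angular momentum (circular-orbit estimate)

newton       = -G * M * m / r**2
centrifugal  =  L**2 / (m * r**3)
relativistic = -3 * G * M * L**2 / (m * c**2 * r**4)

print(f"{newton:.3e}  {centrifugal:.3e}  {relativistic:.3e}")
# The relativistic term is smaller than the Newtonian one by roughly
# 3*(v/c)^2 ~ 1e-7, which is why the effect shows up only in precise
# perihelion measurements.
```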
Alternatives to general relativity
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.
Definition and basic applications
The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building.
Definition and basic properties
General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.
While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation.
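The correspondence can be made explicit in coordinates; a standard weak-field statement (sketched here for orientation, with $\Phi$ the Newtonian potential) is

$$g_{00} \approx -\left(1 + \frac{2\Phi}{c^{2}}\right),$$

so that a single metric component carries the content of Newton's scalar potential in this limit.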
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
Model-building
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present.
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).
Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories.
Consequences of Einstein's theory
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication.
Gravitational time dilation and frequency shift
Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation.
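For weak fields the effect is tiny; a first-order sketch (parameters assumed to match the classic Pound–Rebka tower setup) gives the expected fractional frequency shift:

```python
# Weak-field sketch: fractional frequency shift for light climbing a height h
# in a uniform field, z ≈ g*h/c^2 (first-order approximation; parameter
# values assumed to resemble the Pound–Rebka experiment).
g = 9.81          # gravitational acceleration, m/s^2
h = 22.5          # tower height, m
c = 2.998e8       # speed of light, m/s

z = g * h / c**2
print(f"z ≈ {z:.2e}")   # ~2.5e-15: tiny, but measurable with the Mössbauer effect
```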
Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
Light deflection and gravitational time delay
General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity.
Closely related to light deflection is the Shapiro Time Delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.
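Both effects can be estimated in closed form. The following sketch uses assumed solar-system parameters and the standard weak-field formulas (quoted, not derived here) for grazing-incidence deflection and the one-way time delay at superior conjunction:

```python
# Sketch with assumed solar parameters: GR deflection angle for a light ray
# grazing the Sun, alpha = 4GM/(c^2 b) (twice the naive "Newtonian" value),
# and a one-way Shapiro delay estimate for an Earth–Mars signal,
# dt = (2GM/c^3) * ln(4 r1 r2 / b^2).
import math

GM = 1.327e20        # G * M_sun, m^3/s^2
c  = 2.998e8         # speed of light, m/s
b  = 6.96e8          # impact parameter ~ solar radius, m
r1 = 1.496e11        # Sun–Earth distance, m
r2 = 2.279e11        # Sun–Mars distance, m

alpha = 4 * GM / (c**2 * b)
print(f"deflection ≈ {math.degrees(alpha) * 3600:.2f} arcsec")   # ~1.75"

dt = (2 * GM / c**3) * math.log(4 * r1 * r2 / b**2)
print(f"one-way Shapiro delay ≈ {dt * 1e6:.0f} microseconds")    # ~120 µs
```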
Gravitational waves
In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. These are one of several analogies between weak-field gravity and electromagnetism, in that they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging.
The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10⁻²¹ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.
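To make the ring-of-particles picture concrete, here is a small sketch of the linearized effect, assuming a plus-polarized wave in the transverse-traceless gauge and an illustrative 100 Hz frequency:

```python
# Linearized-wave sketch (plus polarization and wave frequency are assumed):
# effect of a weak gravitational wave of strain amplitude h on a ring of free
# test particles in the plane transverse to the propagation direction.
import numpy as np

h = 1e-21                                 # strain amplitude, LIGO-scale
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
x, y = np.cos(phases), np.sin(phases)     # unit ring of test particles

def ring_at(t, omega=2 * np.pi * 100.0):  # assumed 100 Hz wave
    # plus polarization: stretch along x while squeezing along y, and vice versa
    factor = 0.5 * h * np.cos(omega * t)
    return x * (1 + factor), y * (1 - factor)

x1, y1 = ring_at(t=0.0)                   # maximum distortion
print(f"max displacement ≈ {np.max(np.abs(x1 - x)):.1e} of the ring radius")
# ~5e-22: exceedingly weak, hence kilometre-scale interferometers
```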
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
Orbital effects and the relativity of direction
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
Precession of apsides
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude.
In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by

$$\sigma = \frac{24\pi^{3} a^{2}}{T^{2} c^{2} \left(1 - e^{2}\right)}$$

(a numerical check for Mercury appears after the variable list), where:
$a$ is the semi-major axis
$T$ is the orbital period
$c$ is the speed of light in vacuum
$e$ is the orbital eccentricity
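Here is a small sketch (orbital parameters assumed, approximating Mercury) evaluating the formula and converting it to the conventional arcseconds per century:

```python
# Numerical check of the perihelion-shift formula; Mercury-like orbital
# parameters are assumed, not taken from this article.
import math

a = 5.791e10        # semi-major axis, m
T = 7.6005e6        # orbital period, s (~88 days)
c = 2.998e8         # speed of light, m/s
e = 0.2056          # orbital eccentricity

sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # rad per revolution

revs_per_century = 100 * 365.25 * 86400 / T
arcsec = sigma * revs_per_century * math.degrees(1) * 3600
print(f"{arcsec:.1f} arcsec per century")   # ≈ 43, Mercury's anomalous shift
```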
Orbital decay
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation.
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations.
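The decay rate can be estimated with the standard quadrupole-order (Peters) formula; the following sketch assumes parameter values chosen to approximate PSR B1913+16 and is meant only as an order-of-magnitude check:

```python
# Hedged check using the standard quadrupole (Peters 1964) formula for the
# orbital-period decay of an eccentric binary; all parameter values are
# assumed, chosen to approximate the Hulse–Taylor pulsar PSR B1913+16.
import math

G, c = 6.674e-11, 2.998e8
Msun = 1.989e30
m1, m2 = 1.441 * Msun, 1.387 * Msun
Pb = 27906.98            # orbital period, s
e = 0.6171               # orbital eccentricity

f = (1 + (73/24) * e**2 + (37/96) * e**4) / (1 - e**2)**3.5  # eccentricity factor
dPdt = -(192 * math.pi / 5) * G**(5/3) / c**5 \
       * (Pb / (2 * math.pi))**(-5/3) * m1 * m2 * (m1 + m2)**(-1/3) * f

print(f"dP/dt ≈ {dPdt:.2e}")   # ≈ -2.4e-12 s/s, matching the observed decay
```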
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe orbiting Mars has also been used.
Astrophysical applications
Gravitational lensing
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.
The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies.
Gravitational-wave astronomy
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.
Black holes and other compact objects
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
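The relevant mass-to-radius threshold is set by the Schwarzschild radius, $r_s = 2GM/c^{2}$; a quick sketch with assumed masses:

```python
# Order-of-magnitude sketch (masses assumed): the Schwarzschild radius
# r_s = 2GM/c^2 marking the horizon scale for a given mass.
G, c = 6.674e-11, 2.998e8
Msun = 1.989e30

for name, M in [("Sun", Msun), ("Sagittarius A* (~4.3e6 Msun)", 4.3e6 * Msun)]:
    rs = 2 * G * M / c**2
    print(f"{name}: r_s ≈ {rs:.3e} m")
# Sun: ~2.95 km; Sgr A*: ~1.3e10 m
```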
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.
General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.
Cosmology
The current models of cosmology are based on Einstein's field equations, which include the cosmological constant $\Lambda$ since it has important influence on the large-scale dynamics of the cosmos,

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \kappa\, T_{\mu\nu},$$

where $g_{\mu\nu}$ is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.
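As an illustration of such model-building, the age of a spatially flat model universe follows from a one-line integral of the Friedmann equation; the parameter values below are assumed, Planck-like numbers:

```python
# Cosmology sketch: age of a flat FLRW universe from the Friedmann equation,
# H(a) = H0 * sqrt(Om/a^3 + OL); parameter values are assumed, Planck-like.
from scipy.integrate import quad
import math

H0 = 67.7 * 1000 / 3.0857e22     # Hubble constant, converted to 1/s
Om, OL = 0.31, 0.69              # matter and cosmological-constant fractions

age_s, _ = quad(lambda a: 1.0 / (a * H0 * math.sqrt(Om / a**3 + OL)), 0, 1)
print(f"age ≈ {age_s / (3.156e7 * 1e9):.1f} Gyr")   # ≈ 13.8 Gyr
```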
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).
Exotic solutions: time travel, warp drives
Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, which is an assumption beyond those of standard general relativity to prevent time travel.
Some exact solutions in general relativity, such as the Alcubierre drive, provide examples of a warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability.
Advanced concepts
Asymptotic symmetries
The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries.
Causal structure and global geometry
In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams.
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results.
Horizons
Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier.
Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.
Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below).
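For a Schwarzschild black hole the temperature in question reduces to the standard expression $T = \hbar c^{3}/(8\pi G M k_B)$, evaluated here as a sketch with assumed constants:

```python
# Semiclassical sketch (assumed, rounded SI constants): Hawking temperature
# of a Schwarzschild black hole, T = hbar*c^3 / (8*pi*G*M*kB).
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c, G = 2.998e8, 6.674e-11
kB = 1.381e-23       # Boltzmann constant, J/K
Msun = 1.989e30

T = hbar * c**3 / (8 * math.pi * G * Msun * kB)
print(f"T ≈ {T:.1e} K")   # ~6e-8 K: far colder than the cosmic background
```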
There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation.
Singularities
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
Evolution equations
Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories.
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
Global and quasi-local quantities
The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy.
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.
Relationship with quantum theory
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
Quantum gravity
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology.
All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
Current status
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.
Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research.
See also
Alcubierre drive (warp drive)
External links
Einstein Online – Articles on a variety of aspects of relativistic physics for a general audience; hosted by the Max Planck Institute for Gravitational Physics
GEO600 home page, the official website of the GEO600 project.
LIGO Laboratory
NCSA Spacetime Wrinkles – produced by the numerical relativity group at the NCSA, with an elementary introduction to general relativity
(lecture by Leonard Susskind recorded 22 September 2008 at Stanford University).
Series of lectures on General Relativity given in 2006 at the Institut Henri Poincaré (introductory/advanced).
General Relativity Tutorials by John Baez.
The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space
Concepts in astronomy
Albert Einstein
1915 in science
Articles containing video clips | General relativity | ["Physics", "Astronomy"] | 12,128 | ["Concepts in astronomy", "General relativity", "Theory of relativity"]
12,025 | https://en.wikipedia.org/wiki/Genealogy | Genealogy is the study of families, family history, and the tracing of their lineages. Genealogists use oral interviews, historical records, genetic analysis, and other records to obtain information about a family and to demonstrate kinship and pedigrees of its members. The results are often displayed in charts or written as narratives. The field of family history is broader than genealogy, and covers not just lineage but also family and community history and biography.
The record of genealogical work may be presented as a "genealogy", a "family history", or a "family tree". In the narrow sense, a "genealogy" or a "family tree" traces the descendants of one person, whereas a "family history" traces the ancestors of one person, but the terms are often used interchangeably. A family history may include additional biographical information, family traditions, and the like.
The pursuit of family history and origins tends to be shaped by several motives, including the desire to carve out a place for one's family in the larger historical picture, a sense of responsibility to preserve the past for future generations, and self-satisfaction in accurate storytelling. Genealogy research is also performed for scholarly or forensic purposes, or to trace legal next of kin to inherit under intestacy laws.
Overview
Amateur genealogists typically pursue their own ancestry and that of their spouses. Professional genealogists may also conduct research for others, publish books on genealogical methods, teach, or produce their own databases. They may work for companies that provide software or produce materials of use to other professionals and to amateurs. Both try to understand not just where and when people lived but also their lifestyles, biographies, and motivations. This often requires—or leads to—knowledge of antiquated laws, old political boundaries, migration trends, and historical socioeconomic or religious conditions.
Genealogists sometimes specialize in a particular group, e.g., a Scottish clan; a particular surname, such as in a one-name study; a small community, e.g., a single village or parish, such as in a one-place study; or a particular, often famous, person. Bloodlines of Salem is an example of a specialized family-history group. It welcomes members who can prove descent from a participant of the Salem Witch Trials or who simply choose to support the group.
Genealogists and family historians often join family history societies, where novices can learn from more experienced researchers. Such societies generally serve a specific geographical area. Their members may also index records to make them more accessible or engage in advocacy and other efforts to preserve public records and cemeteries. Some schools engage students in such projects as a means to reinforce lessons regarding immigration and history. Other benefits include family medical histories for families with serious medical conditions that are hereditary.
The terms "genealogy" and "family history" are often used synonymously, but some entities offer a slight difference in definition. The Society of Genealogists, while also using the terms interchangeably, describes genealogy as the "establishment of a pedigree by extracting evidence, from valid sources, of how one generation is connected to the next" and family history as "a biographical study of a genealogically proven family and of the community and country in which they lived".
Motivation
Individuals conduct genealogical research for a number of reasons.
Personal or medical interest
Private individuals research genealogy out of curiosity about their heritage. This curiosity can be particularly strong among those whose family histories were lost or unknown due to, for example, adoption or separation from family through divorce, death, or other situations. In addition to simply wanting to know more about who they are and where they came from, individuals may research their genealogy to learn about any hereditary diseases in their family history.
There is a growing interest in family history in the media as a result of advertising and television shows sponsored by large genealogy companies, such as Ancestry.com. This, coupled with easier access to online records and the affordability of DNA tests, has both inspired curiosity and allowed those who are curious to easily start investigating their ancestry.
Community or religious obligation
In communitarian societies, one's identity is defined as much by one's kin network as by individual achievement, and the question "Who are you?" would be answered by a description of father, mother, and tribe. New Zealand Māori, for example, learn whakapapa (genealogies) to discover who they are.
Family history plays a part in the practice of some religious belief systems. For example, The Church of Jesus Christ of Latter-day Saints (LDS Church) has a doctrine of baptism for the dead, which necessitates that members of that faith engage in family history research.
In East Asian countries that were historically shaped by Confucianism, many people follow a practice of ancestor worship as well as genealogical record-keeping. Ancestors' names are inscribed on tablets and placed in shrines, where rituals are performed. Genealogies are also recorded in genealogy books. This practice is rooted in the belief that respect for one's family is a foundation for a healthy society.
Establishing identity
Royal families, both historically and in modern times, keep records of their genealogies in order to establish their right to rule and determine who will be the next sovereign. For centuries in various cultures, one's genealogy has been a source of political and social status.
Some countries and indigenous tribes allow individuals to obtain citizenship based on their genealogy. In Ireland and in Greece, for example, an individual can become a citizen if one of their grandparents was born in that country, regardless of their own or their parents' birthplace. In societies such as Australia or the United States, by the 20th century, there was growing pride in the pioneers and nation-builders. Establishing descent from these was, and is, important to lineage societies, such as the Daughters of the American Revolution and The General Society of Mayflower Descendants. Modern family history explores new sources of status, such as celebrating the resilience of families that survived generations of poverty or slavery, or the success of families in integrating across racial or national boundaries. Some family histories even emphasize links to celebrity criminals, such as the bushranger Ned Kelly in Australia.
Legal and forensic research
Lawyers involved in probate cases do genealogy to locate heirs of property.
Detectives may perform genealogical research using DNA evidence to identify victims of homicides or perpetrators of crimes.
Scholarly research
Historians and geneticists may carry out genealogical research to gain a greater understanding of specific topics in their respective fields, and some may employ professional genealogists in connection with specific aspects of their research. They also publish their research in peer-reviewed journals.
The introduction of postgraduate courses in genealogy in recent years has given genealogy more of an academic focus, with the emergence of peer-reviewed journals in this area. Scholarly genealogy is beginning to emerge as a discipline in its own right, with an increasing number of individuals who have obtained genealogical qualifications carrying out research on a diverse range of topics related to genealogy, both within academic institutions and independently.
Discrimination and persecution
In the US, the "one-drop rule" asserted that any person with even one ancestor of black ancestry ("one drop" of "black blood") was considered black. It was codified into the law of some States (e.g. the Racial Integrity Act of 1924) to reinforce racial segregation.
Genealogy was also used in Nazi Germany to determine whether a person was considered a "Jew" or a "Mischling" (Mischling Test), and whether a person was considered as "Aryan" (Ahnenpass).
History
Pre-modern genealogy
Hereditary emperors, kings and chiefs in several areas have long claimed descent from gods (thus establishing divine legitimacy). Court genealogists have preserved or invented appropriate genealogical pretensions - for example in Japan,
Polynesia,
and the Indo-European world from Scandinavia through ancient Greece to India.
Historically, in Western societies, genealogy focused on the kinship and descent of rulers and nobles, often arguing or demonstrating the legitimacy of claims to wealth and power. Genealogy often overlapped with heraldry, which reflected the ancestry of noble houses in their coats of arms. Modern scholars regard many claimed noble ancestries as fabrications, such as the Anglo-Saxon Chronicle's tracing of the ancestry of several English kings to the god Woden. With the coming of Christianity to northern Europe, Anglo-Saxon royal genealogies extended the kings' lines of ancestry from Woden back to reach the line of Biblical patriarchs: Noah and Adam. (This extension offered the side-benefit of connecting pretentious rulers with the prestigious genealogy of Jesus.)
Modern historians and genealogists may regard manufactured pseudo-genealogies with a degree of scepticism. However, the desire to find ancestral links with prominent figures from a legendary or distant past has persisted. In the United States, for example, it does no harm to establish one's links to ancestors who boarded the Mayflower. And the popularity of the genealogical hypothesis of The Holy Blood and the Holy Grail (1982) demonstrates popular interest in ancient bloodlines, however dubious.
Some family trees have been maintained for considerable periods. The family tree of Confucius has been maintained for over 2,500 years and is listed in the Guinness Book of World Records as the largest extant family tree. The fifth edition of the Confucius Genealogy was printed in 2009 by the Confucius Genealogy Compilation Committee (CGCC).
Modern times
In modern times, genealogy has become more widespread, with commoners as well as nobility researching and maintaining their family trees. Genealogy received a boost in the late 1970s with the television broadcast of Roots: The Saga of an American Family by Alex Haley. His account of his family's descent from the African tribesman Kunta Kinte inspired many others to study their own lines.
With the advent of the Internet, the number of resources readily accessible to genealogists has vastly increased, fostering an explosion of interest in the topic. Genealogy on the internet became increasingly popular starting in the early 2000s. The Internet has become a major source not only of data for genealogists but also of education and communication.
India
Some notable places where traditional genealogy records are kept include Hindu genealogy registers at Haridwar (Uttarakhand), Varanasi and Allahabad (Uttar Pradesh), Kurukshetra (Haryana), Trimbakeshwar (Maharashtra), and Chintpurni (Himachal Pradesh).
United States
Genealogical research in the United States was first systematized in the early 19th century, especially by John Farmer (1789–1838). Before Farmer's efforts, tracing one's genealogy was seen as an attempt by the American colonists to secure a measure of social standing, an aim that was counter to the new republic's egalitarian, future-oriented ideals (as outlined in the Constitution). As Fourth of July celebrations commemorating the Founding Fathers and the heroes of the Revolutionary War became increasingly popular, however, the pursuit of "antiquarianism", which focused on local history, became acceptable as a way to honor the achievements of early Americans. Farmer capitalized on the acceptability of antiquarianism to frame genealogy within the early republic's ideological framework of pride in one's American ancestors. He corresponded with other antiquarians in New England, where antiquarianism and genealogy were well established, and became a coordinator, booster, and contributor to the growing movement. In the 1820s, he and fellow antiquarians began to produce genealogical and antiquarian tracts in earnest, slowly gaining a devoted audience among the American people. Though Farmer died in 1838, his efforts led to the founding in 1845 of the New England Historic Genealogical Society (NEHGS), one of New England's oldest and most prominent organizations dedicated to the preservation of public records. NEHGS publishes the New England Historical and Genealogical Register.
The Genealogical Society of Utah, founded in 1894, later became the Family History Department of the Church of Jesus Christ of Latter-day Saints. The department's research facility, the Family History Library, which Utah.com claims as "the largest genealogical library in the world", was established to assist in tracing family lineages for special religious ceremonies which Latter-day Saints believe will seal family units together for eternity. Latter-day Saints believe that this fulfilled a biblical prophecy stating that the prophet Elijah would return to "turn the heart of the fathers to the children, and the heart of the children to their fathers." There is a network of church-operated Family History Centers all over the United States and around the world, where volunteers assist the public with tracing their ancestors ("Family History Centers", The Church of Jesus Christ of Latter-day Saints: Newsroom, accessed 2 Jul 2019). Brigham Young University offers bachelor's degree, minor, and concentration programs in family history and is the only school in North America to offer such programs.
The American Society of Genealogists is the scholarly honorary society of the U.S. genealogical field. Founded by John Insley Coddington, Arthur Adams, and Meredith B. Colket Jr., in December 1940, its membership is limited to 50 living fellows. ASG has semi-annually published The Genealogist, a scholarly journal of genealogical research, since 1980. Fellows of the American Society of Genealogists, who bear the post-nominal acronym "FASG", have written some of the most notable genealogical materials of the last half-century.
Some of the most notable scholarly American genealogical journals include The American Genealogist, National Genealogical Society Quarterly, The New England Historical and Genealogical Register, The New York Genealogical and Biographical Record, and The Genealogist (David L. Greene, "Scholarly Genealogical Journals in America," The American Genealogist 61 (1985–86): 116–20).
Research process
Genealogical research is a complex process that uses historical records and sometimes genetic analysis to demonstrate kinship. Reliable conclusions are based on the quality of sources (ideally, original records), the information within those sources (ideally, primary or firsthand information), and the evidence that can be drawn, directly or indirectly, from that information. In many instances, genealogists must skillfully assemble indirect or circumstantial evidence to build a case for identity and kinship. All evidence and conclusions, together with the documentation that supports them, are then assembled to create a cohesive genealogy or family history.
Genealogists begin their research by collecting family documents and stories. This creates a foundation for documentary research, which involves examining and evaluating historical records for evidence about ancestors and other relatives, their kinship ties, and the events that occurred in their lives. As a rule, genealogists begin with the present and work backwards in time. Historical, social, and family context is essential to achieving correct identification of individuals and relationships. Source citation is also important when conducting genealogical research (Jeffry Peter La Marca, Simple Citations for Genealogical Sources, Orting: Family Roots Publishing Co., 2024). To keep track of collected material, family group sheets and pedigree charts are used. Formerly handwritten, these can now be generated by genealogical software.
Genetic analysis
Because a person's DNA contains information that has been passed down relatively unchanged from early ancestors, analysis of DNA is sometimes used for genealogical research. Three DNA types are of particular interest. Mitochondrial DNA (mtDNA) is contained in the mitochondria of the egg cell and is passed down from a mother to all of her children, both male and female; however, only females pass it on to their children. Y-DNA is present only in males and is passed down from a father to his sons (direct male line) with only minor mutations occurring over time. Autosomal DNA (atDNA) is found in the 22 non-sex chromosomes (autosomes) and is inherited from both parents; thus, it can uncover relatives from any branch of the family. A genealogical DNA test allows two individuals to find the probability that they are, or are not, related within an estimated number of generations. Individual genetic test results are collected in databases to match people descended from a relatively recent common ancestor. See, for example, the Molecular Genealogy Research Project. Some tests are limited to either the patrilineal or the matrilineal line.
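As a rough illustration (a simplified model that ignores the randomness of recombination; testing companies report actual matches in centimorgans rather than raw fractions), the expected fraction of autosomal DNA shared with a direct ancestor halves with each generation:

```python
# Simplified expectation: each generation back halves the autosomal DNA
# inherited from a given ancestor, so the expected shared fraction with an
# ancestor n generations back is (1/2)**n. Actual sharing varies around
# this expectation because recombination is random.
def expected_shared_fraction(generations_back: int) -> float:
    return 0.5 ** generations_back

for n, label in enumerate(["parent", "grandparent", "great-grandparent"], 1):
    print(f"{label}: {expected_shared_fraction(n):.1%}")
# parent: 50.0%, grandparent: 25.0%, great-grandparent: 12.5%
```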
Collaboration
Most genealogy software programs can export information about persons and their relationships in a standardized format called GEDCOM. In that format, the information can be shared with other genealogists, added to databases, or converted into family web sites. Social networking service (SNS) websites allow genealogists to share data and build their family trees online. Members can upload their family trees and contact other family historians to fill in gaps in their research. In addition to these SNS websites, there are other resources that encourage genealogists to connect and share information, such as rootsweb.ancestry.com and rsl.rootsweb.ancestry.com.
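As an illustration (a minimal sketch; the individual shown is hypothetical, and real GEDCOM files begin with a HEAD record and end with TRLR), each GEDCOM line follows the grammar `LEVEL [@XREF@] TAG [VALUE]`, which a few lines of Python can pick apart:

```python
# A GEDCOM line has the shape "LEVEL [@XREF@] TAG [VALUE]"; the level
# number nests a record's sub-fields under its parent record.
sample = """0 @I1@ INDI
1 NAME Marga /Olafsdottir/
1 BIRT
2 DATE 4 JUL 1872
2 PLAC Reykjavik, Iceland"""

def parse_gedcom_line(line: str):
    parts = line.split(" ", 2)
    level = int(parts[0])
    if parts[1].startswith("@"):          # cross-reference id, e.g. @I1@
        xref = parts[1]
        rest = parts[2].split(" ", 1)
        tag = rest[0]
        value = rest[1] if len(rest) > 1 else ""
    else:
        xref = None
        tag = parts[1]
        value = parts[2] if len(parts) > 2 else ""
    return level, xref, tag, value

for line in sample.splitlines():
    print(parse_gedcom_line(line))
```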
Volunteerism
Volunteer efforts figure prominently in genealogy. These range from the extremely informal to the highly organized.
On the informal side are the many popular and useful message boards such as Rootschat and mailing lists on particular surnames, regions, and other topics. These forums can be used to try to find relatives, request record lookups, obtain research advice, and much more. Many genealogists participate in loosely organized projects, both online and off. These collaborations take numerous forms. Some projects prepare name indexes for records, such as probate cases, and publish the indexes, either online or off. These indexes can be used as finding aids to locate original records. Other projects transcribe or abstract records. Offering record lookups for particular geographic areas is another common service. Volunteers do record lookups or take photos in their home areas for researchers who are unable to travel.
Those looking for a structured volunteer environment can join one of thousands of genealogical societies worldwide. Most societies have a unique area of focus, such as a particular surname, ethnicity, geographic area, or descendancy from participants in a given historical event. Genealogical societies are almost exclusively staffed by volunteers and may offer a broad range of services, including maintaining libraries for members' use, publishing newsletters, providing research assistance to the public, offering classes or seminars, and organizing record preservation or transcription projects.
Software
Genealogy software is used to collect, store, sort, and display genealogical data. At a minimum, genealogy software accommodates basic information about individuals, including births, marriages, and deaths. Many programs allow for additional biographical information, including occupation, residence, and notes, and most also offer a method for keeping track of the sources for each piece of evidence.
Most programs can generate basic kinship charts and reports, allow for the import of digital photographs, and support the export of data in the GEDCOM format (short for GEnealogical Data COMmunication) so that data can be shared with those using other genealogy software. More advanced features include the ability to restrict the information that is shared, usually by removing information about living people out of privacy concerns; the import of sound files; the generation of family history books, web pages and other publications; the ability to handle same-sex marriages and children born out of wedlock; searching the Internet for data; and the provision of research guidance. Programs may be geared toward a specific religion, with fields relevant to that religion, or to specific nationalities or ethnic groups, with source types relevant for those groups. Online resources involve complex programming and large databases, such as censuses.
Records and documentation
Genealogists use a wide variety of records in their research. To effectively conduct genealogical research, it is important to understand how the records were created, what information is included in them, and how and where to access them (David Hey, The Oxford Companion to Family and Local History, 2nd ed. 2008).
List of record types
Records that are used in genealogy research include:
Vital records
Birth records
Death records
Marriage and divorce records
Adoption records
Biographies and biographical profiles (e.g. Who's Who)
Cemetery lists
Census records
Church and Religious records
Baptism or christening
Brit milah or Baby naming certificates
Confirmation
Bar or bat mitzvah
Marriage
Funeral or death
Membership
City directories and telephone directories
Coroner's reports
Court records
Criminal records
Civil records
Diaries, personal letters and family Bibles
DNA tests
Emigration, immigration and naturalization records
Hereditary & lineage organization records, e.g. Daughters of the American Revolution records
Land and property records, deeds
Medical records
Military and conscription records
Newspaper articles
Obituaries
Occupational records
Oral histories
Passports
Photographs
Poorhouse, workhouse, almshouse, and asylum records
School and alumni association records
Ship passenger lists
Social Security (within the US) and pension records
Tax records
Tombstones, cemetery records, and funeral home records
Voter registration records
Wills and probate records
To keep track of their citizens, governments began keeping records of persons who were neither royalty nor nobility. In England and Germany, for example, such record keeping started with parish registers in the 16th century. As more of the population was recorded, there were sufficient records to follow a family. Major life events, such as births, marriages, and deaths, were often documented with a license, permit, or report. Genealogists locate these records in local, regional, or national offices or archives, extract information about family relationships, and use it to reconstruct timelines of persons' lives.
In China, India and other Asian countries, genealogy books are used to record the names, occupations, and other information about family members, with some books dating back hundreds or even thousands of years. In the eastern Indian state of Bihar, there is a written tradition of genealogical records among Maithil Brahmins and Karna Kayasthas called "Panjis", dating to the 12th century CE. Even today these records are consulted prior to marriages.
In Ireland, genealogical records were recorded by professional families of senchaidh (historians) until as late as the mid-17th century. Perhaps the most outstanding example of this genre is Leabhar na nGenealach/The Great Book of Irish Genealogies, by Dubhaltach MacFhirbhisigh (d. 1671), published in 2004.
FamilySearch collections
The LDS Church has engaged in large-scale microfilming of records of genealogical value. Its Family History Library in Salt Lake City, Utah, houses over 2 million microfiche and microfilms of genealogically relevant material, which are also available for on-site research at over 4,500 Family History Centers worldwide.
FamilySearch's website includes many resources for genealogists: a FamilyTree database, historical records, digitized family history books, resources and indexing for African American genealogy such as slave and bank records, and a Family History Research Wiki containing research guidance articles.
Indexing ancestral information
Indexing is the process of transcribing parish records, city vital records, and other reports into a digital database for searching. Volunteers and professionals participate in the indexing process. Since 2006, the microfilm in FamilySearch's Granite Mountain vault has been progressively scanned, made available online, and indexed.
For example, after the 72-year legal limit for releasing personal information for the United States Census was reached in 2012, genealogical groups cooperated to index the 132 million residents registered in the 1940 United States census.
Between 2006 and 2012, the FamilySearch indexing effort produced more than 1 billion searchable records.
In 2022, FamilySearch and Ancestry partnered to use artificial intelligence (AI) technology to help index more records, beginning with the public release of the 1950 United States census. A first-pass index of the census was created by an AI model trained on handwriting in old documents and then reviewed by thousands of volunteers using FamilySearch.
Record loss and preservation
Sometimes genealogical records are destroyed, whether accidentally or on purpose. In order to do thorough research, genealogists keep track of which records have been destroyed so they know when information they need may be missing. Of particular note for North American genealogy is the 1890 United States census, which was destroyed in a fire in 1921. Although fragments survive, most of the 1890 census no longer exists. Those looking for genealogical information for families that lived in the United States in 1890 must rely on other information to fill that gap.
War is another cause of record destruction. During World War II, many European records were destroyed. Communists in China during the Cultural Revolution and in Korea during the Korean War destroyed genealogy books kept by families.
Often records are destroyed due to accident or neglect. Since genealogical records are often kept on paper and stacked in high-density storage, they are prone to fire, mold, insect damage, and eventual disintegration. Sometimes records of genealogical value are deliberately destroyed by governments or organizations because the records are considered to be unimportant or a privacy risk. Because of this, genealogists often organize efforts to preserve records that are at risk of destruction. FamilySearch has an ongoing program that assesses which useful genealogical records are at greatest risk of destruction and sends volunteers to digitize such records. In 2017, the government of Sierra Leone asked FamilySearch for help preserving their rapidly deteriorating vital records. FamilySearch has begun digitizing the records and making them available online. The Federation of Genealogical Societies also organized an effort to preserve and digitize United States War of 1812 pension records. In 2010, they began raising funds, which were contributed by genealogists around the United States and matched by Ancestry.com. Their goal was achieved and the process of digitization was able to begin. The digitized records are available for free online.
Types of information
Genealogists who seek to reconstruct the lives of each ancestor consider all historical information to be "genealogical" information. Traditionally, the basic information needed to ensure correct identification of each person comprises place names, occupations, family names, first names, and dates. However, modern genealogists greatly expand this list, recognizing the need to place this information in its historical context in order to properly evaluate genealogical evidence and to distinguish between same-name individuals. A great deal of information is available for British ancestry, with growing resources for other ethnic groups.
Family names
Family names are simultaneously one of the most important pieces of genealogical information, and a source of significant confusion for researchers.
In many cultures, the name of a person refers to the family to which they belong. This is called the family name, surname, or last name. Patronymics are names that identify an individual based on the father's name. For example, Marga Olafsdottir is Marga, daughter of Olaf, and Olaf Thorsson is Olaf, son of Thor. Many cultures used patronymics before surnames were adopted or came into use. The Dutch in New York, for example, used the patronymic system of names until 1687, when the advent of English rule mandated surname usage. In Iceland, patronymics are used by a majority of the population. In Denmark and Norway, patronymics and farm names were generally in use through the 19th century and beyond, though surnames began to come into fashion toward the end of the 19th century in some areas. Not until 1856 in Denmark and 1923 in Norway were there laws requiring surnames.
The transmission of names across generations, marriages and other relationships, and immigration may cause difficulty in genealogical research. For instance, women in many cultures have routinely used their spouse's surnames. When a woman remarried, she may have changed her name and the names of her children; changed only her own name; or changed no names. Her birth name (maiden name) may be reflected in her children's middle names, in her own middle name, or dropped entirely. Children may sometimes assume stepparent, foster parent, or adoptive parent names. Because official records may reflect many kinds of surname change without explaining the underlying reason, correctly identifying a person recorded under more than one name is challenging. Immigrants to America often Americanized their names.
Surname data may be found in trade directories, census returns, birth, death, and marriage records.
Given names
Genealogical data regarding given names (first names) is subject to many of the same problems as are family names and place names. Additionally, the use of nicknames is very common. For example, Beth, Lizzie or Betty are all common for Elizabeth, and Jack, John and Jonathan may be interchanged.
Middle names provide additional information. Middle names may be inherited, follow naming customs, or be treated as part of the family name. For instance, in some Latin cultures, both the mother's family name and the father's family name are used by the children.
Historically, naming traditions existed in some places and cultures. Even in areas that tended to use naming conventions, however, they were by no means universal. Families may have used them some of the time, among some of their children, or not at all. A pattern might also be broken to name a newborn after a recently deceased sibling, aunt or uncle.
An example of a naming tradition from England, Scotland and Ireland: the first son was commonly named after the father's father, the second son after the mother's father, and the third son after the father; the first daughter was named after the mother's mother, the second daughter after the father's mother, and the third daughter after the mother.
Another example is in some areas of Germany, where siblings were given the same first name, often that of a favourite saint or local noble, but different second names by which they were known (Rufname). If a child died, the next child of the same gender to be born may have been given the same name. It is not uncommon for a list of a particular couple's children to show one or two names repeated.
Personal names have periods of popularity, so it is not uncommon to find many similarly named people in a generation, and even similarly named families; e.g., "William and Mary and their children David, Mary, and John".
Many names may be identified strongly with a particular gender; e.g., William for boys, and Mary for girls. Others may be ambiguous, e.g., Lee, or have only slightly variant spellings based on gender, e.g., Frances (usually female) and Francis (usually male).
Place names
While the locations of ancestors' residences and life events are core elements of the genealogist's quest, they can often be confusing. Place names may be subject to variant spellings by partially literate scribes. Locations may have identical or very similar names. For example, the village name Brockton occurs six times in the border area between the English counties of Shropshire and Staffordshire. Shifts in political borders must also be understood. Parish, county, and national borders have frequently been modified. Old records may contain references to farms and villages that have ceased to exist. When working with older records from Poland, where borders and place names have changed frequently in past centuries, a source with maps and sample records such as A Translation Guide to 19th-Century Polish-Language Civil-Registration Documents can be invaluable.
Available sources may include vital records (civil or church registration), censuses, and tax assessments. Oral tradition is also an important source, although it must be used with caution. When no source information is available for a location, circumstantial evidence may provide a probable answer based on a person's or a family's place of residence at the time of the event.
Maps and gazetteers are important sources for understanding the places researched. They show the relationship of an area to neighboring communities and may be of help in understanding migration patterns. Family tree mapping using online mapping tools such as Google Earth (particularly when used with Historical Map overlays such as those from the David Rumsey Historical Map Collection) assist in the process of understanding the significance of geographical locations.
Dates
It is wise to exercise extreme caution with dates. Dates are more difficult to recall years after an event, and are more easily mistranscribed than other types of genealogical data. Therefore, one should determine whether the date was recorded at the time of the event or at a later date. Dates of birth in vital records or civil registrations and in church records at baptism are generally accurate because they were usually recorded near the time of the event. Family Bibles are often a source for dates, but can be written from memory long after the event. When the same ink and handwriting is used for all entries, the dates were probably written at the same time and therefore will be less reliable since the earlier dates were probably recorded well after the event. The publication date of the Bible also provides a clue about when the dates were recorded since they could not have been recorded at any earlier date.
People sometimes reduce their age on marriage, and those under "full age" may increase their age in order to marry or to join the armed forces. Census returns are notoriously unreliable for ages or for assuming an approximate death date. Ages over 15 in the 1841 UK census were rounded down to the next lower multiple of five years, so a 38-year-old appears as 35.
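A quick sketch of how this rounding is used in practice (the helper below is hypothetical and ignores the exact census date within the year): a recorded age A above 15 implies a true age between A and A + 4, which widens the possible birth-year window accordingly.

```python
# An 1841 recorded age A over 15 was rounded down to a multiple of five,
# so the true age lay between A and A + 4; ages 15 and under were exact.
def birth_year_range(recorded_age: int, census_year: int = 1841):
    spread = 4 if recorded_age > 15 else 0
    return census_year - recorded_age - spread - 1, census_year - recorded_age

print(birth_year_range(35))  # (1801, 1806): a "35" covers true ages 35-39
```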
Although baptismal dates are often used to approximate birth dates, some families waited years before baptizing children, and adult baptisms are the norm in some religions. Both birth and marriage dates may have been adjusted to cover for pre-wedding pregnancies.
Calendar changes must also be considered. In 1752, England and her American colonies changed from the Julian to the Gregorian calendar. In the same year, the date the new year began was changed. Prior to 1752 it was 25 March; this was changed to 1 January. Many other European countries had already made the calendar changes before England had, sometimes centuries earlier. By 1751 there was an 11-day discrepancy between the date in England and the date in other European countries.
For further detail on the changes involved in moving from the Julian to the Gregorian calendar, see: Gregorian calendar.
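For records from England and its colonies, the core conversion is simple date arithmetic. A minimal sketch, assuming the offsets below (the helper name is mine, not from any library; Julian leap days, the few days around the February 1700 boundary, and the pre-1752 25 March year start with its dual dating such as "11 Feb 1731/32" are not handled):

```python
from datetime import date, timedelta

# Rough sketch for English records: the Julian calendar ran 10 days behind
# the Gregorian for 1582-1699 and 11 days behind for 1700-1752. Python's
# date type is proleptic Gregorian, so the Julian date is supplied here
# only by its year, month, and day components.
def julian_to_gregorian_england(year: int, month: int, day: int) -> date:
    offset = 11 if year >= 1700 else 10
    return date(year, month, day) + timedelta(days=offset)

# Wednesday 2 September 1752 (Julian) was followed in England by
# Thursday 14 September 1752 (Gregorian):
print(julian_to_gregorian_england(1752, 9, 2))  # 1752-09-13
```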
The French Republican Calendar or French Revolutionary Calendar was a calendar proposed during the French Revolution, and used by the French government for about 12 years from late 1793 to 1805, and for 18 days in 1871 in Paris. Dates in official records at this time use the revolutionary calendar and need "translating" into the Gregorian calendar for calculating ages etc. There are various websites which do this.
Occupations
Occupational information may be important to understanding an ancestor's life and for distinguishing two people with the same name. A person's occupation may have been related to his or her social status, political interest, and migration pattern. Since skilled trades are often passed from father to son, occupation may also be indirect evidence of a family relationship.
It is important to remember that a person may change occupations, and that titles change over time as well. Workers no longer fit for their primary trade often took less prestigious jobs later in life, while others moved upwards in prestige. Many unskilled ancestors had a variety of jobs depending on the season and local trade requirements. Census returns may contain some embellishment; e.g., from labourer to mason, or from journeyman to master craftsman. Names for old or unfamiliar local occupations may cause confusion if poorly legible. For example, an ostler (a keeper of horses) and a hostler (an innkeeper) could easily be confused for one another. Likewise, descriptions of such occupations may also be problematic. The perplexing description "ironer of rabbit burrows" may turn out to describe an ironer working in the Bristol district named Rabbit Burrows. Several trades have regionally preferred terms. For example, "shoemaker" and "cordwainer" have the same meaning. Finally, many apparently obscure jobs are part of a larger trade community, such as watchmaking, framework knitting or gunmaking.
Occupational data may be reported in occupational licences, tax assessments, membership records of professional organizations, trade directories, census returns, and vital records (civil registration). Occupational dictionaries are available to explain many obscure and archaic trades.
Reliability of sources
Information found in historical or genealogical sources can be unreliable, and it is good practice to evaluate all sources with a critical eye. Factors influencing the reliability of genealogical information include the knowledge of the informant (or writer); the bias and mental state of the informant (or writer); the passage of time; and the potential for copying and compiling errors.
The quality of census data has been of special interest to historians, who have investigated reliability issues (Richard H. Steckel, "The Quality of Census Data for Historical Inquiry: A Research Agenda," Social Science History, vol. 15, no. 4, Winter 1991, pp. 579–599).
Knowledge of the informant
The informant is the individual who provided the recorded information. Genealogists must carefully consider who provided the information and what they knew. In many cases the informant is identified in the record itself. For example, a death certificate usually has two informants: a physician who provides information about the time and cause of death and a family member who provides the birth date, names of parents, etc.
When the informant is not identified, one can sometimes deduce information about the identity of the person by careful examination of the source. One should first consider who was alive (and nearby) when the record was created. When the informant is also the person recording the information, the handwriting can be compared to other handwriting samples.
When a source does not provide clues about the informant, genealogists should treat the source with caution. These sources can be useful if they can be compared with independent sources. For example, a census record by itself cannot be given much weight because the informant is unknown. However, when censuses for several years concur on a piece of information that would not likely be guessed by a neighbor, it is likely that the information in these censuses was provided by a family member or other informed person. On the other hand, information in a single census cannot be confirmed by information in an undocumented compiled genealogy since the genealogy may have used the census record as its source and might therefore be dependent on the same misinformed individual.
Motivation of the informant
Even individuals who had knowledge of the facts sometimes intentionally or unintentionally provided false or misleading information. A person may have lied in order to obtain a government benefit (such as a military pension), avoid taxation, or cover up an embarrassing situation (such as the existence of a non-marital child). A person with a distressed state of mind may not be able to accurately recall information. Many genealogical records were recorded at the time of a loved one's death, and so genealogists should consider the effect that grief may have had on the informant of these records.
The effect of time
The passage of time often affects a person's ability to recall information. Therefore, as a general rule, data recorded soon after the event are usually more reliable than data recorded many years later. However, some types of data are more difficult to recall after many years than others. One type especially prone to recollection errors is dates. The ability to recall is also affected by the significance the event held for the individual, which may itself have been shaped by cultural or individual preferences.
Copying and compiling errors
Genealogists must consider the effects that copying and compiling errors may have had on the information in a source. For this reason, sources are generally categorized in two categories: original and derivative. An original source is one that is not based on another source. A derivative source is one whose information is taken from another source. This distinction is important because each time a source is copied, information about the record may be lost and errors may result from the copyist misreading, mistyping, or miswriting the information. Genealogists should consider the number of times information has been copied and the types of derivation a piece of information has undergone. The types of derivatives include: photocopies, transcriptions, abstracts, translations, extractions, and compilations.
In addition to copying errors, compiled sources (such as published genealogies and online pedigree databases) are susceptible to misidentification errors and incorrect conclusions based on circumstantial evidence. Identity errors usually occur when two or more individuals are assumed to be the same person. Circumstantial or indirect evidence does not explicitly answer a genealogical question, but either may be used with other sources to answer the question, suggest a probable answer, or eliminate certain possibilities. Compilers sometimes draw hasty conclusions from circumstantial evidence without sufficiently examining all available sources, without properly understanding the evidence, and without appropriately indicating the level of uncertainty.
Primary and secondary sources
In genealogical research, information can be obtained from primary or secondary sources. Primary sources are records that were made at the time of the event; for example, a death certificate would be a primary source for a person's death date and place. Secondary sources are records made days, weeks, months, or even years after an event.
Standards and ethics
Organizations that educate and certify genealogists have established standards and ethical guidelines they instruct genealogists to follow.
Research standards
Genealogy research requires analyzing documents and drawing conclusions based on the evidence provided in the available documents. Genealogists need standards to determine whether or not their evaluation of the evidence is accurate. In the past, genealogists in the United States borrowed terms from judicial law to examine evidence found in documents and how they relate to the researcher's conclusions. However, the differences between the two disciplines created a need for genealogists to develop their own standards. In 2000, the Board for Certification of Genealogists published their first manual of standards. The Genealogical Proof Standard created by the Board for Certification of Genealogists is widely distributed in seminars, workshops, and educational materials for genealogists in the United States. Other genealogical organizations around the world have created similar standards they invite genealogists to follow. Such standards provide guidelines for genealogists to evaluate their own research as well as the research of others.
Standards for genealogical research include:
Clearly document and organize findings.
Cite all sources in a specific manner so that others can locate them and properly evaluate them.
Locate all available sources that may contain information relevant to the research question.
Analyze findings thoroughly, without ignoring conflicts in records or negative evidence.
Rely on original, rather than derivative sources, wherever possible.
Use logical reasoning based on reliable sources to reach conclusions.
Acknowledge when a specific conclusion is only "possible" or "probable" rather than "proven".
Acknowledge that other records that have not yet been discovered may overturn a conclusion.
Ethical guidelines
Genealogists often handle sensitive information and share and publish such information. Because of this, there is a need for ethical standards and boundaries for when information is too sensitive to be published. Historically, some genealogists have fabricated information or have otherwise been untrustworthy. Genealogical organizations around the world have outlined ethical standards as an attempt to eliminate such problems. Ethical standards adopted by various genealogical organizations include:
Respect copyright laws.
Acknowledge where one consulted another's work and do not plagiarize the work of other researchers.
Treat original records with respect and avoid causing damage to them or removing them from repositories.
Treat archives and archive staff with respect.
Protect the privacy of living individuals by not publishing or otherwise disclosing information about them without their permission.
Disclose any conflicts of interest to clients.
When doing paid research, be clear with the client about scope of research and fees involved.
Do not fabricate information or publish false or unproven information as proven.
Be sensitive about information found through genealogical research that may make the client or family members uncomfortable.
In 2015, a committee presented standards for genetic genealogy at the Salt Lake Institute of Genealogy. The standards emphasize that genealogists and testing companies should respect the privacy of clients and recognize the limits of DNA tests. They also discuss how genealogists should thoroughly document conclusions made using DNA evidence. In 2019, the Board for Certification of Genealogists officially updated its standards and code of ethics to include standards for genetic genealogy.
See also
References
Further reading
General
Hopwood, Nick, Rebecca Flemming, and Lauren Kassell, eds. Reproduction: Antiquity to the Present Day (Cambridge UP, 2018). Illustrated, xxxv + 730 pp.; 44 scholarly essays by historians.
British Isles
Kriesberg, Adam. "The future of access to public records? Public–private partnerships in US state and territorial archives." Archival Science 17.1 (2017): 5–25.
China
Continental Europe
Volkmar Weiss, German Genealogy in Its Social and Political Context (KDP, 2020).
North America
External links
Family History UK
Kinship and descent | Genealogy | [
"Biology"
] | 9,132 | [
"Behavior",
"Phylogenetics",
"Genealogy",
"Human behavior",
"Kinship and descent"
] |
12,100 | https://en.wikipedia.org/wiki/Graviton | In theories of quantum gravity, the graviton is the hypothetical elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string.
If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton.
Theory
It is hypothesized that gravitational interactions are mediated by an as yet undiscovered elementary particle, dubbed the graviton. The three other known forces of nature are mediated by elementary particles: electromagnetism by the photon, the strong interaction by gluons, and the weak interaction by the W and Z bosons. All three of these forces appear to be accurately described by the Standard Model of particle physics. In the classical limit, a successful theory of gravitons would reduce to general relativity, which itself reduces to Newton's law of gravitation in the weak-field limit.
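A sketch of the coupling behind the spin-2 argument above (using one common normalization; conventions for the constant vary between texts): linearized gravity expands the metric about flat spacetime, and the graviton field couples directly to the stress–energy tensor,

$$
g_{\mu\nu} = \eta_{\mu\nu} + \kappa\, h_{\mu\nu}, \qquad
\mathcal{L}_{\mathrm{int}} = -\frac{\kappa}{2}\, h_{\mu\nu}\, T^{\mu\nu}, \qquad
\kappa^{2} = 32\pi G .
$$

Any massless spin-2 field with this coupling reproduces Newtonian gravity in the static, weak-field limit, which is the sense in which it would be indistinguishable from gravitation.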
History
Albert Einstein discussed quantized gravitational radiation in 1916, the year following his publication of general relativity.
The term graviton was coined in 1934 by the Soviet physicists Dmitry Blokhintsev and F. M. Galperin. Paul Dirac reintroduced the term in a number of lectures in 1959, noting that the energy of the gravitational field should come in quanta. A mediation of the gravitational interaction by particles was anticipated by Pierre-Simon Laplace. Just like Newton's anticipation of photons, Laplace's anticipated "gravitons" had a speed greater than the speed of light in vacuum, c, the speed expected of gravitons in modern theories, and were not connected to quantum mechanics or special relativity, since these theories did not yet exist during Laplace's lifetime.
Gravitons and renormalization
When describing graviton interactions, the classical theory of Feynman diagrams and semiclassical corrections such as one-loop diagrams behave normally. However, Feynman diagrams with at least two loops lead to ultraviolet divergences. These infinite results cannot be removed because quantized general relativity is not perturbatively renormalizable, unlike quantum electrodynamics and models such as Yang–Mills theory. Therefore, the perturbative method by which physicists calculate the probability that a particle will emit or absorb gravitons gives incalculable answers, and the theory loses predictive power. These problems, together with the breakdown of the approximation framework, indicate that a theory more unified than quantized general relativity is required to describe behavior near the Planck scale.
Comparison with other forces
Like the force carriers of the other forces (see photon, gluon, W and Z bosons), the graviton plays a role in general relativity, in defining the spacetime in which events take place. In some descriptions energy modifies the "shape" of spacetime itself, and gravity is a result of this shape, an idea which at first glance may appear hard to match with the idea of a force acting between particles. Because the diffeomorphism invariance of the theory does not allow any particular space-time background to be singled out as the "true" space-time background, general relativity is said to be background-independent. In contrast, the Standard Model is not background-independent, with Minkowski space enjoying a special status as the fixed background space-time. A theory of quantum gravity is needed in order to reconcile these differences. Whether this theory should be background-independent is an open question. The answer to this question will determine the understanding of what specific role gravitation plays in the fate of the universe.
Energy and wavelength
While gravitons are presumed to be massless, they would still carry energy, as does any other quantum particle; photon and gluon energy is likewise carried by massless particles. It is unclear which variables might determine graviton energy, the amount of energy carried by a single graviton.
Alternatively, if gravitons are massive at all, the analysis of gravitational waves has yielded a new upper bound on their mass. The graviton's Compton wavelength is at least 1.6×10^16 m, or about 1.6 light-years, corresponding to a graviton mass of no more than about 7.7×10^-23 eV/c^2. This relation between wavelength and mass-energy is calculated with the Planck–Einstein relation, the same formula that relates electromagnetic wavelength to photon energy.
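As a worked check with rounded constants (a sketch; published bounds differ slightly between analyses), the Planck–Einstein relation converts the quoted Compton wavelength into a mass bound:

$$
m_g \;\lesssim\; \frac{h}{\lambda_g c}
= \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{\left(1.6\times10^{16}\ \mathrm{m}\right)\left(3.00\times10^{8}\ \mathrm{m/s}\right)}
\approx 1.4\times10^{-58}\ \mathrm{kg}
\approx 7.7\times10^{-23}\ \mathrm{eV}/c^{2}.
$$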
Experimental observation
Unambiguous detection of individual gravitons, though not prohibited by any fundamental law, has been thought to be impossible with any physically reasonable detector. The reason is the extremely low cross section for the interaction of gravitons with matter. For example, a detector with the mass of Jupiter and 100% efficiency, placed in close orbit around a neutron star, would only be expected to observe one graviton every 10 years, even under the most favorable conditions. It would be impossible to discriminate these events from the background of neutrinos, since the dimensions of the required neutrino shield would ensure collapse into a black hole. It has been proposed that detecting single gravitons would be possible by quantum sensing. Even quantum events may not indicate quantization of gravitational radiation.
Observations by the LIGO and Virgo collaborations have directly detected gravitational waves. Others have postulated that graviton scattering yields gravitational waves as particle interactions yield coherent states. Although these experiments cannot detect individual gravitons, they might provide information about certain properties of the graviton. For example, if gravitational waves were observed to propagate slower than c (the speed of light in vacuum), that would imply that the graviton has mass (however, gravitational waves must propagate slower than c in a region with non-zero mass density if they are to be detectable). Observations of gravitational waves put an upper bound on the graviton's mass. Solar system planetary trajectory measurements by space missions such as Cassini and MESSENGER give a comparable upper bound. The gravitational-wave and planetary-ephemeris bounds need not agree: they test different aspects of a potential graviton-based theory.
Astronomical observations of the kinematics of galaxies, especially the galaxy rotation problem and modified Newtonian dynamics, might point toward gravitons having non-zero mass.
Difficulties and outstanding issues
Most theories containing gravitons suffer from severe problems. Attempts to extend the Standard Model or other quantum field theories by adding gravitons run into serious theoretical difficulties at energies close to or above the Planck scale. This is because of infinities arising due to quantum effects; technically, gravitation is not renormalizable. Since classical general relativity and quantum mechanics seem to be incompatible at such energies, from a theoretical point of view, this situation is not tenable. One possible solution is to replace particles with strings. String theories are quantum theories of gravity in the sense that they reduce to classical general relativity plus field theory at low energies, but are fully quantum mechanical, contain a graviton, and are thought to be mathematically consistent.
See also
Gravitino
Dual graviton
Gravitoelectromagnetism
Planck mass
Static forces and virtual-particle exchange
Soft graviton theorem
Polarizable vacuum
References
External links
Bosons
Gauge bosons
Quantum gravity
String theory
Hypothetical elementary particles
Force carriers | Graviton | [
"Physics",
"Astronomy"
] | 1,688 | [
"Physical phenomena",
"Astronomical hypotheses",
"Force carriers",
"Unsolved problems in physics",
"Bosons",
"Quantum gravity",
"Subatomic particles",
"Fundamental interactions",
"Hypothetical elementary particles",
"String theory",
"Physics beyond the Standard Model",
"Matter"
] |
12,101 | https://en.wikipedia.org/wiki/G%C3%B6ta%20Canal | The Göta Canal () is a Swedish canal constructed in the early 19th century.
The canal is about 190 km (118 mi) long, of which roughly 87 km (54 mi) were dug or blasted, with a width varying between 7 and 14 m (23 and 46 ft) and a maximum depth of about 3 m (10 ft). Speed in the canal is limited to 5 knots.
The Göta Canal is part of a waterway some 390 km (240 mi) long, linking a number of lakes and rivers to provide a route from Gothenburg (Göteborg) on the west coast to Söderköping on the Baltic Sea via the Trollhätte kanal and the Göta älv river, through the large lakes Vänern and Vättern.
This waterway has been dubbed Sweden's Blue Ribbon (Sveriges blå band). Contrary to popular belief, the waterway is not a sort of greater Göta Canal: the Trollhätte Canal and the Göta Canal are completely separate entities.
History
The idea of a canal across southern Sweden was first put forward as early as 1516, by Hans Brask, the bishop of Linköping. However, it was not until the start of the 19th century that Brask's proposals were put into action by Baltzar von Platen, a German-born former officer in the Swedish Navy. He organised the project and obtained the necessary financial and political backing. His plans attracted the enthusiastic backing of the government and the new king, Charles XIII, who saw the canal as a way of kick-starting the modernisation of Sweden. Von Platen himself extolled the modernising virtues of the canal in 1806, claiming that mining, agriculture and other industries would benefit from "a navigation way through the country."
The project was inaugurated on 11 April 1810 with a budget of 24 million Swedish riksdalers. It was by far the greatest civil engineering project ever undertaken in Sweden up to that time, taking 22 years of effort by more than 58,000 workers. Much of the expertise and equipment had to be acquired from abroad, notably from Britain, whose canal system was the most advanced in the world at that time. The Scottish civil engineer Thomas Telford, renowned for his design of the Caledonian Canal in Scotland, developed the initial plans for the canal and travelled to Sweden in 1810 to oversee some of the early work on the route. Many other British engineers and craftsmen were imported to assist with the project, along with significant quantities of equipment - even apparently mundane items such as pickaxes, spades and wheelbarrows.
The Göta Canal was officially opened on 26 September 1832. Von Platen himself did not live to see the completion of the canal, having died shortly before its opening. However, the canal's return on investment did not live up to the government's hopes. Bishop Hans Brask's original justifications for the canal's construction were the onerous Sound Dues imposed by Denmark–Norway on all vessels passing through the narrow Øresund channel between Sweden and Denmark, and the trouble with the Hanseatic League. The canal enabled vessels travelling to or from the Baltic Sea to bypass the Øresund and so evade the Danish toll. In 1851, the tycoon André Oscar Wallenberg founded the Company for Swedish Canal Steamboat Transit Traffic to carry goods from England to Russia via the canal. However, it only ran two trips between St Petersburg and Hull via Motala before the Crimean War halted Anglo-Russian trade. After the war ended, the great powers pressured Denmark into ending the four-hundred-year-old tradition of the Sound Dues, thus eliminating at a stroke the canal's usefulness as an alternative to the Øresund.
The arrival of the railways in 1855 quickly made the canal redundant, as trains could carry passengers and goods far more rapidly and did not have to shut down with the arrival of winter, which made the canal impassable for five months of the year. By the 1870s, the canal's goods traffic had dwindled to just three major types of bulk goods - forest products, coal and ore, none of which required rapid transportation. Traffic volumes stagnated after that and never recovered.
The canal had one major industrial legacy in the shape of Motala Verkstad, a factory established in Motala to produce machines such as the cranes and steam dredgers needed to build the canal. This facility has sometimes been referred to as the "cradle of the Swedish engineering industry". After the canal was opened, Motala Verkstad focused on producing equipment, locomotives and rolling stock for the newly constructed railways, beginning a tradition of railway engineering that continues to this day in the form of AB Svenska Järnvägsverkstädernas Aeroplanavdelning (ASJA), which was bought by the aeroplane manufacturer SAAB in Linköping.
Description
These days the canal is primarily a tourist and recreational attraction. Around two million people visit the canal each year on pleasure cruises, either on their own boats or on one of the many cruise ships, and for related activities. The canal is sometimes jokingly called the "divorce ditch" (skilsmässodiket) because of the troubles that inexperienced couples endure while trying to navigate the narrow canal and its many locks by themselves.
Locks
The canal has 58 locks and can accommodate vessels up to about 32 m (105 ft) long and 7 m (23 ft) wide, with a draft of up to about 2.8 m (9.2 ft).
From the east coast of Sweden to Lake Vänern, the locks are as follows, with meters of height difference per lock (a quick arithmetic check on these lifts follows the list):
Mem, 3
Tegelbruket, 2.3
Söderköping, 2.4
Duvkullen nedre, 2.3
Duvkullen övre, 2.4
Mariehov nedre, 2.1
Mariehov övre, 2.6
Carlsborg nedre, 5.1
Carlsborg övre, 4.7
Klämman, open
Hulta, 3.2
Bråttom, 2.3
Norsholm, 0.8
Carl Johans slussar (seven locks), 18.8
Oskars slussar, 4.8
Karl Ludvig Eugéns slussar, 5.5
Brunnby, 5.3
Heda, 5.2
Borensberg, 0.2
Borenshult, 15.3
Motala, 0.1
Lake Vättern (88 m above sea level)
Forsvik, 3.5
Lake Viken (92 m above sea level – canal's highest point)
Tåtorp, 0.2
Hajstorp övre, 5.0
Hajstorp nedre, 5.1
Riksberg, 7.5
Godhögen, 5.1
Norrkvarn övre, 2.9
Norrkvarn nedre, 2.9
Sjötorp 7-8, 4.6
Sjötorp 6, 2.4
Sjötorp 4-5, 4.8
Sjötorp 2-3, 4.8
Sjötorp 1, 2.9
After Lake Vänern (44 m above sea level) Trollhätte kanal to Gothenburg and the west-coast of Sweden.
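The listed lifts can be cross-checked against the lake elevations given above (a quick sketch; small residuals reflect rounding of the individual lifts):

```python
# Consistency check on the lock lifts listed above (metres): the eastern
# flight from Mem to Motala should roughly reach Lake Vättern's elevation,
# Forsvik then lifts boats to Lake Viken, and the western flight drops
# back toward Lake Vänern.
east = [3, 2.3, 2.4, 2.3, 2.4, 2.1, 2.6, 5.1, 4.7, 0,      # Mem..Klämman (open)
        3.2, 2.3, 0.8, 18.8, 4.8, 5.5, 5.3, 5.2, 0.2,      # Hulta..Borensberg
        15.3, 0.1]                                          # Borenshult, Motala
west = [0.2, 5.0, 5.1, 7.5, 5.1, 2.9, 2.9,                  # Tåtorp..Norrkvarn nedre
        4.6, 2.4, 4.8, 4.8, 2.9]                            # Sjötorp flight

print(round(sum(east), 1))                    # 88.4  (Lake Vättern: 88 m)
print(round(sum(east) + 3.5, 1))              # 91.9  (Lake Viken: 92 m)
print(round(sum(east) + 3.5 - sum(west), 1))  # 43.7  (Lake Vänern: 44 m)
```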
See also
Caledonian Canal - Sister canal in Scotland
Kiel Canal
Øresund
List of government enterprises of Sweden
Bibliography and references
Eric de Maré, Swedish Cross Cut, Sweden, 1965. (In English)
External links
Göta Canal - Official site
Canals opened in 1832
19th century in Sweden
Canals in Sweden
Government-owned companies of Sweden
Tourist attractions in Sweden
Works of Thomas Telford
Historic Civil Engineering Landmarks | Göta Canal | [
"Engineering"
] | 1,515 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
12,103 | https://en.wikipedia.org/wiki/Golden%20Gate%20Bridge | The Golden Gate Bridge is a suspension bridge spanning the Golden Gate, the strait connecting San Francisco Bay and the Pacific Ocean in California, United States. The structure links San Francisco—the northern tip of the San Francisco Peninsula—to Marin County, carrying both U.S. Route 101 and California State Route 1 across the strait. It also carries pedestrian and bicycle traffic, and is designated as part of U.S. Bicycle Route 95. Recognized by the American Society of Civil Engineers as one of the Wonders of the Modern World, the bridge is one of the most internationally recognized symbols of San Francisco and California.
The idea of a fixed link between San Francisco and Marin had gained increasing popularity during the late 19th century, but it was not until the early 20th century that such a link became feasible. Joseph Strauss served as chief engineer for the project, with Leon Moisseiff, Irving Morrow and Charles Ellis making significant contributions to its design. The bridge opened to the public on May 27, 1937, and has undergone various retrofits and other improvement projects in the decades since.
The Golden Gate Bridge is described in Frommer's travel guide as "possibly the most beautiful, certainly the most photographed, bridge in the world." At the time of its opening in 1937, it was both the longest and the tallest suspension bridge in the world, titles it held until 1964 and 1998 respectively. Its main span is 4,200 feet (1,280 m) and its total height is 746 feet (227 m).
History
Ferry service
Before the bridge was built, the only practical short route between San Francisco and what is now Marin County was by boat across a section of San Francisco Bay. A ferry service began as early as 1820, with a regularly scheduled service beginning in the 1840s for the purpose of transporting water to San Francisco.
In 1867, the Sausalito Land and Ferry Company opened. In 1920, the service was taken over by the Golden Gate Ferry Company, which merged in 1929 with the ferry system of the Southern Pacific Railroad, becoming the Southern Pacific-Golden Gate Ferries, Ltd., the largest ferry operation in the world. Though once reserved for railroad passengers and customers, Southern Pacific's automobile ferries became very profitable and important to the regional economy. The ferry crossing between the Hyde Street Pier in San Francisco and Sausalito Ferry Terminal in Marin County took approximately 20 minutes and cost $1.00 per vehicle prior to 1937, when the price was reduced to compete with the new bridge. The trip from the San Francisco Ferry Building took 27 minutes.
Many wanted to build a bridge to connect San Francisco to Marin County. San Francisco was the largest American city still served primarily by ferry boats. Because it did not have a permanent link with communities around the bay, the city's growth rate was below the national average. Many experts said that a bridge could not be built across the strait, which had strong, swirling tides and currents, with water deep at the center of the channel, and frequent strong winds. Experts said that ferocious winds and blinding fogs would prevent construction and operation.
Conception
Although the idea of a bridge spanning the Golden Gate was not new, the proposal that eventually took hold was made in a 1916 San Francisco Bulletin article by former engineering student James Wilkins. San Francisco's City Engineer estimated the cost at $100 million (equivalent to $ billion in ), a sum impractical for the time, and asked bridge engineers whether it could be built for less. One who responded, Joseph Strauss, was an ambitious engineer and poet who had, for his graduate thesis, designed a railroad bridge across the Bering Strait. At the time, Strauss had completed some 400 drawbridges—most of which were inland—and nothing on the scale of the new project. Strauss's initial drawings were for a massive cantilever on each side of the strait, connected by a central suspension segment, which Strauss promised could be built for $17 million (equivalent to $ million in ).
A suspension-bridge design was chosen, using recent advances in bridge design and metallurgy.
Strauss spent more than a decade drumming up support in Northern California. The bridge faced opposition, including litigation, from many sources. The Department of War was concerned that the bridge would interfere with ship traffic. The US Navy feared that a ship collision or sabotage to the bridge could block the entrance to one of its main harbors. Unions demanded guarantees that local workers would be favored for construction jobs. Southern Pacific Railroad, one of the most powerful business interests in California, opposed the bridge as competition to its ferry fleet and filed a lawsuit against the project, leading to a mass boycott of the ferry service.
In May 1924, Colonel Herbert Deakyne held the second hearing on the Bridge on behalf of the Secretary of War in a request to use federal land for construction. Deakyne, on behalf of the Secretary of War, approved the transfer of land needed for the bridge structure and leading roads to the "Bridging the Golden Gate Association" and both San Francisco County and Marin County, pending further bridge plans by Strauss. Another ally was the fledgling automobile industry, which supported the development of roads and bridges to increase demand for automobiles.
The bridge's name was first used when the project was initially discussed in 1917 by M.M. O'Shaughnessy, city engineer of San Francisco, and Strauss. The name became official with the passage of the Golden Gate Bridge and Highway District Act by the state legislature in 1923, creating a special district to design, build and finance the bridge. San Francisco and most of the counties along the North Coast of California joined the Golden Gate Bridge District, with the exception being Humboldt County, whose residents opposed the bridge's construction and the traffic it would generate.
Design
Strauss was the chief engineer in charge of the overall design and construction of the bridge project. However, because he had little understanding or experience with cable-suspension designs, responsibility for much of the engineering and architecture fell on other experts. Strauss's initial design proposal (two double cantilever spans linked by a central suspension segment) was unacceptable from a visual standpoint. The final suspension design was conceived and championed by Leon Moisseiff, the engineer of the Manhattan Bridge in New York City.
Irving Morrow, a relatively unknown residential architect, designed the overall shape of the bridge towers, the lighting scheme, and Art Deco elements, such as the tower decorations, streetlights, railing, and walkways. The famous International Orange color was Morrow's personal selection, winning out over other possibilities, including the US Navy's suggestion that it be painted with black and yellow stripes to ensure visibility by passing ships.
Senior engineer Charles Alton Ellis, collaborating remotely with Moisseiff, was the principal engineer of the project. Moisseiff produced the basic structural design, introducing his "deflection theory" by which a thin, flexible roadway would flex in the wind, greatly reducing stress by transmitting forces via suspension cables to the bridge towers. Although the Golden Gate Bridge design has proved sound, a later Moisseiff design, the original Tacoma Narrows Bridge, collapsed in a strong windstorm soon after it was completed, because of an unexpected aeroelastic flutter. Ellis was also tasked with designing a "bridge within a bridge" in the southern abutment, to avoid the need to demolish Fort Point, a pre–Civil War masonry fortification viewed, even then, as worthy of historic preservation. He penned a graceful steel arch spanning the fort and carrying the roadway to the bridge's southern anchorage.
Ellis was a Greek scholar and mathematician who at one time was a University of Illinois professor of engineering despite having no engineering degree. He eventually earned a degree in civil engineering from the University of Illinois prior to designing the Golden Gate Bridge and spent the last twelve years of his career as a professor at Purdue University. He became an expert in structural design, writing the standard textbook of the time. Ellis did much of the technical and theoretical work that built the bridge, but he received none of the credit in his lifetime. In November 1931, Strauss fired Ellis and replaced him with a former subordinate, Clifford Paine, ostensibly for wasting too much money sending telegrams back and forth to Moisseiff. Ellis, obsessed with the project and unable to find work elsewhere during the Depression, continued working 70 hours per week on an unpaid basis, eventually turning in ten volumes of hand calculations.
With an eye toward self-promotion and posterity, Strauss downplayed the contributions of his collaborators who, despite receiving little recognition or compensation, are largely responsible for the final form of the bridge. He succeeded in having himself credited as the person most responsible for the design and vision of the bridge. Only much later were the contributions of the others on the design team properly appreciated. In May 2007, the Golden Gate Bridge District issued a formal report on 70 years of stewardship of the famous bridge and decided to give Ellis major credit for the design of the bridge.
Finance
The Golden Gate Bridge and Highway District, authorized by an act of the California Legislature, was incorporated in 1928 as the official entity to design, construct, and finance the Golden Gate Bridge. However, after the Wall Street Crash of 1929, the District was unable to raise the construction funds, so it lobbied for a $30 million bond measure (equivalent to $ million today). The bonds were approved in November 1930, by votes in the counties affected by the bridge. The construction budget at the time of approval was $27 million ($ million today). However, the District was unable to sell the bonds until 1932, when Amadeo Giannini, the founder of San Francisco–based Bank of America, agreed on behalf of his bank to buy the entire issue in order to help the local economy.
Construction
Construction began on January 5, 1933. The project cost more than $35 million ($ in dollars), and was completed ahead of schedule and $1.3 million under budget (equivalent to $ million in ).
The Golden Gate Bridge construction project was carried out by the McClintic-Marshall Construction Co., a subsidiary of Bethlehem Steel Corporation founded by Howard H. McClintic and Charles D. Marshall, both of Lehigh University.
Strauss remained head of the project, overseeing day-to-day construction and making some groundbreaking contributions. A graduate of the University of Cincinnati, he placed a brick from his alma mater's demolished McMicken Hall in the south anchorage before the concrete was poured.
Strauss also innovated the use of movable safety netting beneath the men working, which saved many lives. Nineteen men saved by the nets over the course of the project formed the Half Way to Hell Club. Nonetheless, eleven men were killed in falls, ten on February 17, 1937, when a scaffold (secured by undersized bolts) with twelve men on it fell into and broke through the safety net; two of the twelve survived the fall into the water.
The Round House Café, an Art Deco diner designed by Alfred Finnila and completed in 1938, stands at the southeastern end of the Golden Gate Bridge, adjacent to the tourist plaza. It has long been popular as a starting point for various commercial tours of the bridge and as an unofficial gift shop. The diner was renovated in 2012, at which point the gift shop was removed in favor of a new official gift shop in the adjacent plaza, which was renovated at the same time.
During the bridge work, Alfred Finnila, Assistant Civil Engineer of California, oversaw the entire ironwork of the bridge as well as half of the bridge's road work.
Contributors
A plaque honoring the major contributors to the Golden Gate Bridge lists the contractors, engineering staff, directors, and officers:
Contractors
Foundations - Pacific Bridge Company
Anchorages - Barrett & Hilp
Structural steel - Main span - Bethlehem Steel Company Incorporated
Approach steel - J.H. Pomeroy & Company Incorporated - Raymond Concrete Pile Company
Cables - John A. Roebling's Sons Company
Electrical work - Alta Electric and Mechanical Company Incorporated
Bridge deck - Pacific Bridge Company
Presidio Approach Roads and Viaducts - Easton & Smith
Toll Plaza - Barrett & Hilp
Engineering staff
Chief engineer - Joseph B. Strauss
Principal assistant engineer - Clifford E. Paine
Resident engineer - Russell Cone
Assistant engineer - Charles Clarahan Jr., Dwight N. Wetherell
Consulting engineer - O.H. Ammann, Charles Derleth Jr., Leon S. Moisseiff
Consulting traffic engineer - Sydney W. Taylor Jr.
Consulting architect - Irving F. Morrow
Consulting geologist - Andrew C. Lawson, Allan E. Sedgwick
Directors
San Francisco - William P. Filmer, Richard J. Welch, Warren Shannon, Hugo D. Newhouse, Arthur M. Brown Jr., John P. McLaughlin, William D. Hadeler, C.A. Henry, Francis V. Keesling, William P. Stanton, George T. Cameron
Marin County - Robert H. Trumbull, Harry Lutgens
Napa County - Thomas Maxwell
Sonoma County - Frank P. Doyle, Joseph A. McMinn
Mendocino County - A. R. O'Brien
Del Norte County - Henry Westbrook Jr., Milton M. McVay
Officers
President - William P. Filmer
Vice President - Robert H. Trumbull
General manager - James Reed, Alan McDonald
Chief engineer - Joseph B. Strauss
Secretary - W. W. Felt Jr.
Auditor - Roy S. West, John R. Ruckstell
Attorney - George H. Harlan
Torsional bracing retrofit
On December 1, 1951, a windstorm revealed swaying and rolling instabilities of the bridge, resulting in its closure. In 1953 and 1954, the bridge was retrofitted with lateral and diagonal bracing that connected the lower chords of the two side trusses. This bracing stiffened the bridge deck in torsion so that it would better resist the types of twisting that had destroyed the Tacoma Narrows Bridge in 1940.
Bridge deck replacement (1982–1986)
The original bridge used a concrete deck. Salt carried by fog or mist reached the rebar, causing corrosion and concrete spalling. From 1982 to 1986, the original deck was systematically replaced, in 747 sections, with 40% lighter and stronger steel orthotropic deck panels over 401 nights, without ever closing the roadway completely to traffic. The roadway was also widened by two feet, giving the outside curb lanes a width of 11 feet, compared with 10 feet for the inside lanes. This deck replacement was the bridge's greatest engineering project since it was built and cost over $68 million.
Opening festivities, and 50th and 75th anniversaries
The bridge-opening celebration in 1937 began on May 27 and lasted for one week. The day before vehicle traffic was allowed, 200,000 people crossed either on foot or on roller skates. On opening day, Mayor Angelo Rossi and other officials rode the ferry to Marin, then crossed the bridge in a motorcade past three ceremonial "barriers," the last a blockade of beauty queens who required Joseph Strauss to present the bridge to the Highway District before allowing him to pass. An official song, "There's a Silver Moon on the Golden Gate," was chosen to commemorate the event. Strauss wrote a poem that is now on the Golden Gate Bridge entitled "The Mighty Task is Done." The next day, President Franklin D. Roosevelt pushed a button in Washington, D.C. signaling the official start of vehicle traffic over the Bridge at noon. Weeks of civil and cultural activities called "the Fiesta" followed. A statue of Strauss was moved in 1955 to a site near the bridge.
As part of the fiftieth anniversary celebration in 1987, the Golden Gate Bridge district again closed the bridge to automobile traffic and allowed pedestrians to cross it on May 24. This Sunday morning celebration attracted 750,000 to 1,000,000 people, and ineffective crowd control meant the bridge became congested with roughly 300,000 people, causing the center span of the bridge to flatten out under the weight. Although the bridge is designed to flex in that way under heavy loads, and was estimated not to have exceeded 40% of the yielding stress of the suspension cables, bridge officials stated that uncontrolled pedestrian access was not being considered as part of the 75th anniversary on Sunday, May 27, 2012, because of the additional law enforcement costs required "since 9/11."
Structural specifications
Until 1964, the Golden Gate Bridge had the longest suspension bridge main span in the world, at . Since 1964 its main span length has been surpassed by eighteen bridges; it now has the second-longest main span in the Americas, after the Verrazzano-Narrows Bridge in New York City. The total length of the Golden Gate Bridge from abutment to abutment is .
The Golden Gate Bridge's clearance above high water averages while its towers, at above the water, were the world's tallest on a suspension bridge until 1993, when they were surpassed by those of the Mezcala Bridge in Mexico.
The weight of the roadway is hung from 250 pairs of vertical suspender ropes, which are attached to two main cables. The main cables pass over the two main towers and are fixed in concrete at each end. Each cable is made of 27,572 strands of wire. The total length of galvanized steel wire used to fabricate both main cables is estimated to be . Each of the bridge's two towers has approximately 600,000 rivets.
In the 1960s, when the Bay Area Rapid Transit system (BART) was being planned, the engineering community had conflicting opinions about the feasibility of running train tracks north to Marin County over the bridge. In June 1961, consultants hired by BART completed a study that determined the bridge's suspension section was capable of supporting service on a new lower deck. In July 1961, one of the bridge's consulting engineers, Clifford Paine, disagreed with their conclusion. In January 1962, due to more conflicting reports on feasibility, the bridge's board of directors appointed an engineering review board to analyze all the reports. The review board's report, released in April 1962, concluded that running BART on the bridge was not advisable.
Aesthetics
Aesthetics was the foremost reason why the first design of Joseph Strauss was rejected. Upon re-submission of his bridge construction plan, he added details, such as lighting, to outline the bridge's cables and towers. In 2007, it was ranked fifth on the List of America's Favorite Architecture by the American Institute of Architects.
The color of the bridge is officially an orange vermilion called international orange. The color was selected by consulting architect Irving Morrow because it complements the natural surroundings and enhances the bridge's visibility in fog.
The bridge was originally painted with red lead primer and a lead-based topcoat, which was touched up as required. In the mid-1960s, a program was started to improve corrosion protection by stripping the original paint and repainting the bridge with a zinc silicate primer and vinyl topcoats. Since 1990, acrylic topcoats have been used instead for air-quality reasons. The program was completed in 1995; the paintwork is now maintained by 38 painters, who touch it up wherever it becomes seriously corroded, making painting the bridge a continuous maintenance task.
Traffic
Most maps and signage mark the bridge as part of the concurrency between U.S. Route 101 and California State Route 1. Although part of the National Highway System, the bridge is not officially part of California's Highway System. For example, under the California Streets and Highways Code § 401, Route 101 ends at "the approach to the Golden Gate Bridge" and then resumes at "a point in Marin County opposite San Francisco". The Golden Gate Bridge, Highway and Transportation District has jurisdiction over the segment of highway that crosses the bridge instead of the California Department of Transportation (Caltrans).
The movable median barrier between the lanes is moved several times daily to conform to traffic patterns. On weekday mornings, traffic flows mostly southbound into the city, so four of the six lanes run southbound. Conversely, on weekday afternoons, four lanes run northbound. During off-peak periods and weekends, traffic is split with three lanes in each direction.
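For illustration, the reversing-lane pattern just described can be expressed as a small lookup. This is a hypothetical sketch of the schedule described above, not the district's actual control logic, and the exact switchover hours are assumptions:

    def lane_split(weekday, hour):
        """Return (southbound, northbound) lanes out of the six total.

        Illustrative only: approximates the weekday peak pattern described
        above; the precise switchover times here are assumed, not official.
        """
        if weekday and 5 <= hour < 10:    # morning peak: most traffic into the city
            return (4, 2)
        if weekday and 15 <= hour < 19:   # afternoon peak: most traffic northbound
            return (2, 4)
        return (3, 3)                     # off-peak periods and weekends

    print(lane_split(True, 8))    # (4, 2)
    print(lane_split(False, 8))   # (3, 3)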
From 1968 to 2015, opposing traffic was separated by small, plastic pylons; during that time, there were 16 fatalities resulting from 128 head-on collisions. To improve safety, the speed limit on the Golden Gate Bridge was reduced from on October 1, 1983. Although there had been discussion concerning the installation of a movable barrier since the 1980s, only in March 2005 did the Bridge Board of Directors commit to finding funding to complete the $2 million study required prior to the installation of a movable median barrier. Installation of the resulting barrier was completed on January 11, 2015, following a closure of 45.5 hours to private vehicle traffic, the longest in the bridge's history. The new barrier system, including the zipper trucks, cost approximately $30.3 million to purchase and install.
The bridge carries about 112,000 vehicles per day according to the Golden Gate Bridge Highway and Transportation District.
Usage and tourism
The bridge is popular with pedestrians and bicyclists, and was built with walkways on either side of the six vehicle traffic lanes. Initially, they were separated from the traffic lanes by only a metal curb, but railings between the walkways and the traffic lanes were added in 2003, primarily as a measure to prevent bicyclists from falling into the roadway. The bridge was designated as part of U.S. Bicycle Route 95 in 2021.
The main walkway is on the eastern side, and is open for use by both pedestrians and bicycles in the morning to mid-afternoon during weekdays (5:00 a.m. to 3:30 p.m.), and to pedestrians only for the remaining daylight hours (until 6:00 p.m., or 9:00 p.m. during DST). The eastern walkway is reserved for pedestrians on weekends (5:00 a.m. to 6:00 p.m., or 9:00 p.m. during DST), and is open exclusively to bicyclists in the evening and overnight, when it is closed to pedestrians. The western walkway is open only for bicyclists and only during the hours when they are not allowed on the eastern walkway.
Bus service across the bridge is provided by one public transportation agency, Golden Gate Transit, which runs numerous bus lines throughout the week. The southern end of the bridge, near the toll plaza and parking lot, is also accessible daily from 5:30 a.m. to midnight by San Francisco Muni line 28. Muni formerly offered Saturday and Sunday service across the bridge on the Marin Headlands Express bus line, but this was indefinitely suspended due to the COVID-19 pandemic. The Marin Airporter, a private company, also offers service across the bridge between Marin County and San Francisco International Airport.
A visitor center and gift shop, originally called the "Bridge Pavilion" (since renamed the "Golden Gate Bridge Welcome Center"), is located on the San Francisco side of the bridge, adjacent to the southeast parking lot. It opened in 2012, in time for the bridge's 75th-anniversary celebration. A cafe, outdoor exhibits, and restroom facilities are located nearby. On the Marin side of the bridge, only accessible from the northbound lanes, is the H. Dana Bower Rest Area and Vista Point, named after the first landscape architect for the California Division of Highways.
Lands and waters under and around the bridge are home to wildlife such as bobcats, harbor seals, and sea lions. Three species of cetaceans (whales) that had been absent from the area for many years have recently returned to the vicinity of the bridge; researchers studying them have encouraged stronger protections and recommended that the public watch the animals from the bridge or from land, or use a local whale-watching operator.
Tolls
Current toll rates
Tolls are collected only from southbound traffic, at the toll plaza on the San Francisco side of the bridge after vehicles cross from Marin County. All-electronic tolling has been in effect since 2013, and drivers may pay either with the FasTrak electronic toll collection device or through the license plate tolling program. The system is not yet truly open road tolling, as the remaining unused toll booths force drivers to slow substantially from freeway speeds while passing through. Effective , the toll rate for passenger cars with license plate accounts is $9.50, while FasTrak users pay a discounted toll of $9.25. During peak traffic hours on weekdays, between 5:00 am and 9:00 am and between 4:00 pm and 6:00 pm, carpool vehicles carrying three or more people, or motorcycles, may pay a discounted toll of $7.25 if they have FasTrak and use the designated carpool lane. Drivers without FasTrak or a license plate account must open a "short term" account within 48 hours after crossing the bridge or they will be sent a toll invoice of $10.25 (the FasTrak toll plus an additional $1 fee). No additional toll violation penalty is assessed if the invoice is paid within 21 days.
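The rate structure above can be summarized in a small lookup. This is a sketch using only the figures quoted in this section (rates change over time), not the district's actual tolling software:

    def southbound_toll(payment, carpool_peak=False):
        """Return the toll in dollars for a southbound passenger car.

        Uses only the rates quoted above; illustrative, not official code.
        """
        if carpool_peak and payment == "fastrak":
            return 7.25              # peak-hour carpool/motorcycle rate, FasTrak required
        rates = {
            "fastrak": 9.25,         # FasTrak discounted toll
            "plate": 9.50,           # license plate account
            "invoice": 10.25,        # mailed invoice: FasTrak toll plus a $1 fee
        }
        return rates[payment]

    print(southbound_toll("plate"))                       # 9.5
    print(southbound_toll("fastrak", carpool_peak=True))  # 7.25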
Historical toll rates
When the Golden Gate Bridge opened in 1937, the toll was 50 cents per car (equivalent to $ in ), collected in each direction. In 1950 it was reduced to 40 cents each way ($ in ), then lowered to 25 cents in 1955 ($ in ). In 1968, the bridge was converted to collect tolls only from southbound traffic, with the toll amount reset to 50 cents ($ in ).
From May 1937 until December 1970, pedestrians were charged a toll of 10 cents for bridge access via turnstiles on the sidewalks.
The last of the construction bonds were retired in 1971, with $35 million (equivalent to $M in ) in principal and nearly $39 million ($M in ) in interest raised entirely from bridge tolls. Tolls continued to be collected and subsequently incrementally raised; in 1991, the toll was raised a dollar to $3.00 (equivalent to $ in ).
The bridge began accepting tolls via the FasTrak electronic toll collection system in 2002, with $4 tolls for FasTrak users and $5 for those paying cash (equivalent to $ and $ respectively in ). In November 2006, the Golden Gate Bridge, Highway and Transportation District recommended a corporate sponsorship program for the bridge to address its operating deficit, projected at $80 million over five years. The District promised that the proposal, which it called a "partnership program", would not include changing the name of the bridge or placing advertising on the bridge itself. In October 2007, the Board unanimously voted to discontinue the proposal and seek additional revenue through other means, most likely a toll increase. The District later increased the toll amounts in 2008 to $5 for FasTrak users and $6 to those paying cash (equivalent to $ and $ respectively in ).
In an effort to save $19.2 million over the following 10 years, the Golden Gate District voted in January 2011 to eliminate all toll takers by 2012 and use only open road tolling. Subsequently, this was delayed and toll taker elimination occurred in March 2013. The cost savings have been revised to $19 million over an eight-year period. In addition to FasTrak, the Golden Gate Transportation District implemented the use of license plate tolling (branded as "Pay-by-Plate"), and also a one-time payment system for drivers to pay before or after their trip on the bridge. Twenty-eight positions were eliminated as part of this plan.
On April 7, 2014, the toll for users of FasTrak was increased from $5 to $6 (equivalent to $ in ), while the toll for drivers using either the license plate tolling or the one-time payment system was raised from $6 to $7 (equivalent to $ in ). Bicycle, pedestrian, and northbound motor vehicle traffic remain toll free. For vehicles with more than two axles, the toll rate was $7 per axle for those using license plate tolling or the one-time payment system, and $6 per axle for FasTrak users. During peak traffic hours, carpool vehicles carrying two or more people and motorcycles paid a discounted toll of $4 (equivalent to $ in ); drivers had to have FasTrak to take advantage of this carpool rate. The Golden Gate Transportation District then increased the tolls by 25 cents in July 2015, and by another 25 cents in each of the next three years.
In March 2019, the Golden Gate Transportation District approved a plan to implement 35-cent annual toll increases through 2023, except for the toll-by-plate program, which would increase by 20 cents per year. The district then approved another plan in March 2024 to implement 50-cent annual toll increases through 2028.
Congestion pricing
In March 2008, the Golden Gate Bridge District board approved a resolution to start congestion pricing at the Golden Gate Bridge, charging higher tolls during peak hours, with rates rising and falling depending on traffic levels. This decision allowed the Bay Area to meet the federal requirement for receiving $158 million in federal transportation funds from a USDOT Urban Partnership grant. As a condition of the grant, the congestion toll was to be in place by September 2009.
In August 2008, transportation officials ended the congestion pricing program in favor of varying rates for metered parking along the route to the bridge including on Lombard Street and Van Ness Avenue.
Issues
Protests and stunts
In August 1977, three California Polytechnic State University students climbed the cables of the Golden Gate Bridge.
In May 1981, Dave Aguilar climbed the South Tower of the Golden Gate Bridge to protest offshore oil drilling.
On November 24, 1996, environmentalists, including Woody Harrelson, were arrested after scaling the Golden Gate Bridge.
In 1997, Quentin Kopp authored a bill, signed into law by Governor Pete Wilson, that increased the maximum fine for trespassing on the bridge from $1,000 to $10,000 and doubled the maximum jail time from six months to a year.
In July 2001, approximately 100 protesters gathered to demand an end to the U.S. Navy's bombing activities on the Puerto Rican island of Vieques.
During the 2008 Tibetan unrest, three pro-Tibet activists scaled the bridge's vertical cables in April 2008 to protest the arrival of the Olympic torch in the city. The activists hung banners to denounce China's crackdown on Tibet. The incident resulted in the closure of a northbound lane of the bridge and was part of a wave of protests across multiple cities against China's policies in Tibet.
On January 20, 2017, thousands of people held hands as a human chain on the sidewalk across the Golden Gate Bridge as Donald Trump took the oath of office.
On June 6, 2020, protesters shut down traffic on the Golden Gate Bridge in a demonstration against police brutality following the murder of George Floyd. The protest, originally confined to the pedestrian path, spilled into traffic lanes as activists knelt for eight minutes and 46 seconds, symbolizing the time a police officer knelt on Floyd's neck. Law enforcement was unable to redirect protesters, causing a complete closure of the bridge to traffic during the demonstration. This event was part of nationwide protests, with San Francisco lifting its curfew to allow continued gatherings in support of the movement.
Approximately 5,000 Armenian-Americans marched across the Golden Gate Bridge in October 2020 to raise awareness about an illegal blockade during the Nagorno-Karabakh conflict and to urge the US government to halt arms shipments to Turkey and Azerbaijan. Organized by the Armenian Youth Federation (AYF) San Francisco "Rosdom" Chapter, the demonstration aimed to inform Bay Area residents about the violence against Armenians.
In June 2021, activists from the Sunrise Movement marched over 250 miles to advocate for climate action, culminating in a demonstration on the Golden Gate Bridge. Activists called for urgent measures to combat climate change, including the passage of President Joe Biden's American Jobs Plan, which includes funding for green energy jobs.
On September 30, 2021, protesters blocked traffic, urging Senate Democrats to address immigration reform and advocate for citizenship for undocumented immigrants and Haitian refugees. Five organizers, including an undocumented individual, were arrested during the demonstration.
In November 2021, a protest against government-mandated COVID-19 vaccinations led to a chain-reaction crash at the bridge. During the demonstration, a vehicle collision occurred involving two California Highway Patrol officers and three Golden Gate Bridge employees. The individuals were hospitalized with non-life-threatening injuries.
Protests over the death of Mahsa Amini occurred on September 26, 2022. Over 1,000 protesters gathered at the Golden Gate Bridge Welcome Center to demonstrate against the Islamic Republic of Iran and its morality police following the death of Amini, who had been detained after an encounter with Tehran police and subsequently fell into a coma and died. Attendees demanded women's rights and freedom, displaying signs and carrying flags of the former imperial state of Iran. The event drew global attention, sparking solidarity protests in Iran, Greece, England, and France.
On February 14, 2024, a pro-Palestinian protest temporarily halted traffic on the Golden Gate Bridge. Around 20 protesters gathered on the bridge, displaying banners condemning the Israeli invasion of the Gaza Strip, and calling for an end to U.S. military support to Israel. The demonstration caused a standstill in both northbound and southbound traffic.
Pro-Palestinian protesters staged demonstrations on the bridge in April 2024 in response to the ongoing Israel-Hamas War. The protests aimed to raise awareness of and show solidarity with Gaza during the conflict, with some protesters chaining themselves to vehicles to impede traffic. Major highways and bridges were temporarily blocked, resulting in arrests by law enforcement.
Suicides
The Golden Gate Bridge is the most used suicide site in the world. The deck is about above the water. After a fall of four seconds, jumpers hit the water at around . Most die from impact trauma. About 5% survive the initial impact but generally drown or die of hypothermia in the cold water.
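As a rough consistency check on the four-second figure, elementary free-fall kinematics (ignoring air resistance, which in reality reduces the impact speed somewhat) give:

    g = 9.81   # gravitational acceleration in m/s^2
    t = 4.0    # fall time quoted above, in seconds

    speed = g * t                # ~39 m/s, on the order of 140 km/h (87 mph)
    distance = 0.5 * g * t**2    # ~78 m fallen, the same order as the deck height
    print(round(speed, 1), round(distance, 1))   # 39.2 78.5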
After years of debate and an estimated more than 1,500 deaths, suicide barriers, consisting of a stainless steel net extending from the bridge and supported by structural steel 20 feet under the walkway, began to be installed in April 2017. Construction was first estimated to take approximately four years at a cost of over $200 million. Installation of the nets was completed in January 2024. The metal nets are visible from the pedestrian walkways and are expected to be painful to land on.
Wind
The Golden Gate Bridge was designed to safely withstand winds of up to . Until 2008, the bridge was closed because of weather conditions only three times: on December 1, 1951, because of gusts of ; on December 23, 1982, because of winds of ; and on December 3, 1983, because of wind gusts of . An anemometer placed midway between the two towers on the west side of the bridge has been used to measure wind speeds. Another anemometer was placed on one of the towers.
As part of the retrofitting of the bridge and installation of the suicide barrier, starting in 2019 the railings on the west side of the pedestrian walkway were replaced with thinner, more flexible slats in order to improve the bridge's aerodynamic tolerance of high wind to . Starting in June 2020, reports were received of a loud hum, heard across San Francisco and Marin County, produced by the new railing slats when a strong west wind was blowing. The sound had been predicted from wind tunnel tests, but not included in the environmental impact report; ways of ameliorating it are being considered. An independent engineering analysis of a 2020 sound recording of the tones concludes that the singing noise comprises a variety of Aeolian tones (the sound produced by air flowing past a sharp edge), arising in this case from the ambient wind blowing across metal slats of the newly installed sidewalk railings. The tones observed were frequencies of 354, 398, 439 and 481 Hz, corresponding to the musical notes F4, G4, A4, and B4; these notes form an F Lydian Tetrachord.
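The note identifications can be verified from the reported frequencies. The sketch below maps a frequency to its nearest note in twelve-tone equal temperament with A4 = 440 Hz; this is a standard conversion, included here purely as a check:

    import math

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def nearest_note(freq, a4=440.0):
        # Semitone distance from A4 (MIDI note 69), rounded to the nearest note
        midi = 69 + round(12 * math.log2(freq / a4))
        return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

    for f in (354, 398, 439, 481):
        print(f, nearest_note(f))   # F4, G4, A4, B4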
Seismic vulnerability and improvements
Modern knowledge of the effect of earthquakes on structures led to a program to retrofit the Golden Gate to better resist seismic events. The proximity of the bridge to the San Andreas Fault places it at risk for a significant earthquake. Once thought to have been able to withstand any magnitude of foreseeable earthquake, the bridge was actually vulnerable to complete structural failure (i.e., collapse) triggered by the failure of supports on the arch over Fort Point. A $392 million program was initiated to improve the structure's ability to withstand such an event with only minimal (repairable) damage. A custom-built electro-hydraulic synchronous lift system for construction of temporary support towers and a series of intricate lifts, transferring the loads from the existing bridge onto the temporary supports, were completed with engineers from Balfour Beatty and Enerpac, without disrupting day-to-day commuter traffic. Although the retrofit was initially planned to be completed in 2012, it was expected to take several more years.
The former elevated approach to the Golden Gate Bridge through the San Francisco Presidio, known as Doyle Drive, dated to 1933 and was named after Frank P. Doyle. Doyle, the president of the Exchange Bank in Santa Rosa and son of the bank's founder, was the man who, more than any other person, made it possible to build the Golden Gate Bridge. The highway carried about 91,000 vehicles each weekday between downtown San Francisco and the North Bay and points north. The road was deemed "vulnerable to earthquake damage", had a problematic 4-lane design, and lacked shoulders; a San Francisco County Transportation Authority study recommended that it be replaced. Construction on the $1 billion replacement, temporarily known as the Presidio Parkway, began in December 2009.
The elevated Doyle Drive was demolished on the weekend of April 27–30, 2012, and traffic used a part of the partially completed Presidio Parkway until it was switched onto the finished Presidio Parkway on the weekend of July 9–12, 2015. An official at Caltrans said there is no plan to permanently rename the portion known as Doyle Drive.
See also
The Bridge, a 2006 documentary on suicides from the Bridge
Golden Gate Bridge in popular culture
List of bridges documented by the Historic American Engineering Record in California
List of Historic Civil Engineering Landmarks
List of longest suspension bridge spans
List of San Francisco Designated Landmarks
List of tallest bridges
San Francisco–Oakland Bay Bridge
Suicide bridge
References
Further reading
External links
Bay Area FasTrak – includes toll information on this and the other Bay Area toll facilities
1937 establishments in California
Art Deco architecture in California
Articles containing video clips
Bridges by Joseph Strauss (engineer)
Bridges completed in 1937
Bridges in San Francisco
Bridges in Marin County, California
Bridges in the San Francisco Bay Area
Bridges of the United States Numbered Highway System
California Historical Landmarks
California State Route 1
Culture of San Francisco
Bridge
Historic American Engineering Record in California
Historic Civil Engineering Landmarks
Landmarks in the San Francisco Bay Area
Landmarks in San Francisco
Pedestrian bridges in California
Road bridges in California
Roads with a reversible lane
San Francisco Designated Landmarks
Suspension bridges in California
Symbols of California
Toll bridges in California
U.S. Route 101
Tourist attractions in Marin County, California
Works Progress Administration in California
Open-spandrel deck arch bridges in the United States
Steel bridges in the United States
Truss arch bridges in the United States | Golden Gate Bridge | [
"Engineering"
] | 8,091 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
12,232 | https://en.wikipedia.org/wiki/Gustave%20Eiffel | Alexandre Gustave Eiffel ( , ; Bonickhausen dit Eiffel; 15 December 1832 – 27 December 1923) was a French civil engineer. A graduate of École Centrale des Arts et Manufactures, he made his name with various bridges for the French railway network, most famously the Garabit Viaduct. He is best known for the world-famous Eiffel Tower, designed by his company and built for the 1889 Universal Exposition in Paris, and his contribution to building the Statue of Liberty in New York. After his retirement from engineering, Eiffel focused on research into meteorology and aerodynamics, making significant contributions in both fields.
Early life
Alexandre Gustave Eiffel was born in France, in the Côte-d'Or, the first child of Catherine-Mélanie (née Moneuse) and Alexandre Bonickhausen dit Eiffel. He was a descendant of Marguerite Frédérique (née Lideriz) and Jean-René Bönickhausen, who had emigrated from the German town of Marmagen and settled in Paris at the beginning of the 19th century. The family adopted the name Eiffel as a reference to the Eifel mountains in the region from which they had come. Although the family always used the name Eiffel, Gustave's name was registered at birth as Bonickhausen dit Eiffel, and was not formally changed to Eiffel until 1880.
At the time of Gustave's birth, his father, an ex-soldier, was working as an administrator for the French Army; but shortly after his birth his mother expanded a charcoal business she had inherited from her parents to include a coal-distribution business, and soon afterwards his father gave up his job to assist her. Due to his mother's business commitments, Gustave spent his childhood living with his grandmother, but nevertheless remained close to his mother, who was to remain an influential figure until her death in 1878. The business was successful enough for Catherine Eiffel to sell it in 1843 and retire on the proceeds. Eiffel was not a studious child, and thought his classes at the Lycée Royal in Dijon boring and a waste of time. In his last two years, however, influenced by his history and literature teachers, he began to study seriously, and he gained his baccalauréats in humanities and science.
An important part in his education was played by his uncle, Jean-Baptiste Mollerat, who had invented a process for distilling vinegar and had a large chemical works near Dijon, and one of his uncle's friends, the chemist Michel Perret. Both men spent a lot of time with the young Eiffel, teaching him about everything from chemistry and mining to theology and philosophy.
Eiffel went on to attend the Collège Sainte-Barbe in Paris, to prepare for the difficult entrance exams set by engineering colleges in France, and qualified for entry to two of the most prestigious schools – École polytechnique and École Centrale des Arts et Manufactures – and ultimately entered the latter. During his second year he chose to specialize in chemistry, and graduated ranking 13th of 80 candidates in 1855. This was the year that Paris hosted a World's Fair, and Eiffel's mother bought him a season ticket.
Early career
After graduation, Eiffel had hoped to find work in his uncle's workshop in Dijon, but a family dispute made this impossible. After a few months working as an unpaid assistant to his brother-in-law, who managed a foundry, Eiffel approached the railway engineer Charles Nepveu, who gave Eiffel his first paid job as his private secretary. Shortly afterwards Nepveu's company went bankrupt, but Nepveu found Eiffel a job designing a sheet iron bridge for the Saint Germain railway. Some of Nepveu's businesses were then acquired by the Compagnie Belge de Matériels de Chemin de Fer: Nepveu was appointed managing director of the two factories in Paris, and offered Eiffel a job as head of the research department. In 1857 Nepveu negotiated a contract to build a railway bridge over the river Garonne at Bordeaux, connecting the Paris-Bordeaux line to the lines running to Sète and Bayonne, which involved the construction of an iron girder bridge supported by six pairs of masonry piers on the river bed. These were constructed with the aid of compressed-air caissons and hydraulic rams, both innovative techniques at the time. Eiffel was initially given the responsibility of assembling the metalwork and eventually took over the management of the entire project from Nepveu, who resigned in March 1860.
Following the completion of the project on schedule, Eiffel was appointed principal engineer of the Compagnie Belge. His work had also gained the attention of several people who were later to give him work, including Stanislas de la Roche Toulay, who had prepared the design for the metalwork of the Bordeaux bridge, Jean-Baptiste Krantz and Wilhelm Nordling. Further promotion within the company followed, but the business began to decline, and in 1865 Eiffel, seeing no future there, resigned and set up as an independent consulting engineer. He was already working independently on the construction of two railway stations, at Toulouse and Agen, and in 1866 he was given a contract to oversee the construction of 33 locomotives for the Egyptian government, a profitable but undemanding job in the course of which he visited Egypt, where he saw the Suez Canal being constructed by Ferdinand de Lesseps. At the same time he was employed by Jean-Baptiste Krantz to assist him in the design of the exhibition hall for the Exposition Universelle which was to be held in 1867. Eiffel's principal job was to draw up the arch girders of the Galerie des Machines. In order to carry out this work, Eiffel and Henri Tresca, the director of the Conservatoire des Arts et Métiers, conducted valuable research on the structural properties of cast iron, definitively establishing the modulus of elasticity applicable to compound castings.
Eiffel et Cie
At the end of 1866 Eiffel managed to borrow enough money to set up his own workshops at 48 Rue Fouquet in Levallois-Perret. His first important commission was for two viaducts for the railway line between Lyon and Bordeaux, and the company also began to undertake work in other countries, including St. Mark's Cathedral in Arica, Peru, an all-metal prefabricated building manufactured in France and shipped to South America in pieces to be assembled on site. The building was first intended for Ancón, a beach town near Lima, but after the old church in Arica was destroyed by an earthquake on 13 August 1868, a committee of Arica women petitioned President José Balta, and the Peruvian government changed the structure's final destination to Arica.
On 6 October 1868 he entered into partnership with Théophile Seyrig, a fellow graduate of the École Centrale, forming the company Eiffel et Cie.
In 1875, Eiffel et Cie were given two important contracts, one for the Budapest Nyugati railway station for the Vienna to Budapest railway and the other for a bridge over the river Douro in Portugal. The station in Budapest was an innovative design. The usual pattern for building a railway terminus was to conceal the metal structure behind an elaborate facade: Eiffel's design for Budapest used the metal structure as the centerpiece of the building, flanked on either side by conventional stone and brick-clad structures housing administrative offices.
The bridge over the Douro came about as the result of a competition held by the Royal Portuguese Railroad Company. The task was a demanding one: the river was fast-flowing, up to deep, and had a bed formed of a deep layer of gravel which made the construction of piers on the river bed impossible, and so the bridge had to have a central span of . This was greater than the longest arch span which had been built at the time.
Eiffel's proposal was for a bridge whose deck was supported by five iron piers, with the abutments of the pair on the river bank also bearing a central supporting arch. The price quoted by Eiffel was FF 965,000, far below that of the nearest competitor, and so he was given the job, although since his company was less experienced than its rivals the Portuguese authorities appointed a committee to report on Eiffel et Cie's suitability. The members included Jean-Baptiste Krantz, Henri Dion and Léon Molinos, who had known Eiffel for a long time: their report was favorable, and Eiffel got the job. On-site work began in January 1876 and was complete by the end of October 1877: the bridge was ceremonially opened on 4 November by King Luís I and Queen Maria Pia, after whom the bridge was named.
The Exposition Universelle in 1878 firmly established his reputation as one of the leading engineers of the time. As well as exhibiting models and drawings of work undertaken by the company, Eiffel was also responsible for the construction of several of the exhibition buildings. One of these, a pavilion for the Paris Gas Company, was Eiffel's first collaboration with Stephen Sauvestre, who was later to become the head of the company's architectural office.
In 1879 the partnership with Seyrig was dissolved, and the company was renamed the Compagnie des Établissements Eiffel.
The same year the company was given the contract for the Garabit viaduct, a railway bridge near Ruynes en Margeride in the Cantal département. Like the Douro bridge, the project involved a lengthy viaduct crossing the river valley as well as the river itself, and Eiffel was given the job without any process of competitive tendering due to his success with the bridge over the Douro. To assist him in the work he took on several people who were to play important roles in the design and construction of the Eiffel Tower, including Maurice Koechlin, a young graduate of the Zurich Polytechnikum, who was engaged to undertake calculations and make drawings, and Émile Nouguier, who had previously worked for Eiffel on the construction of the Douro bridge.
The same year Eiffel started work on a system of standardised prefabricated bridges, an idea that was the result of a conversation with the governor of Cochin-China. These used a small number of standard components, all small enough to be readily transportable in areas with poor or non-existent roads, and were joined using bolts rather than rivets, reducing the need for skilled labour on site. A number of different types were produced, ranging from footbridges to standard-gauge railway bridges.
In 1881 Eiffel was contacted by Auguste Bartholdi who was in need of an engineer to help him to realise the Statue of Liberty. Some work had already been carried out by Eugène Viollet-le-Duc, but he had died in 1879. Eiffel was selected because of his experience with wind stresses. Eiffel devised a structure consisting of a four legged pylon to support the copper sheeting which made up the body of the statue. The entire statue was erected at the Eiffel works in Paris before being dismantled and shipped to the United States.
In 1886 Eiffel also designed the dome for the Astronomical Observatory in Nice. This was the most important building in a complex designed by Charles Garnier, later among the most prominent critics of the Tower. The dome, with a diameter of , was the largest in the world when built and used an ingenious bearing device: rather than running on wheels or rollers, it was supported by a ring-shaped hollow girder floating in a circular trough containing a solution of magnesium chloride in water. This had been patented by Eiffel in 1881.
The Eiffel Tower
The design of the Eiffel Tower was originated by Maurice Koechlin and Émile Nouguier, who had discussed ideas for a centrepiece for the 1889 Exposition Universelle. In May 1884 Koechlin, working at his home, made an outline drawing of their scheme, described by him as "a great pylon, consisting of four lattice girders standing apart at the base and coming together at the top, joined together by metal trusses at regular intervals". Initially Eiffel showed little enthusiasm, although he did sanction further study of the project, and the two engineers then asked Stephen Sauvestre to add architectural embellishments. Sauvestre added the decorative arches to the base, a glass pavilion to the first level and the cupola at the top. The enhanced idea gained Eiffel's support for the project, and he bought the rights to the patent on the design which Koechlin, Nouguier and Sauvestre had taken out. The design was exhibited at the Exhibition of Decorative Arts in the autumn of 1884, and on 30 March 1885 Eiffel read a paper on the project to the Société des Ingénieurs Civils. After discussing the technical problems and emphasising the practical uses of the tower, he finished his talk by describing what the tower would symbolise.
Little happened until the beginning of 1886, but with the re-election of Jules Grévy as president and his appointment of Edouard Lockroy as Minister for Trade decisions began to be made. A budget for the Exposition was passed and on 1 May Lockroy announced an alteration to the terms of the open competition which was being held for a centerpiece for the exposition, which effectively made the choice of Eiffel's design a foregone conclusion: all entries had to include a study for a four-sided metal tower on the Champ de Mars. On 12 May a commission was set up to examine Eiffel's scheme and its rivals and on 12 June it presented its decision, which was that only Eiffel's proposal met their requirements. After some debate about the exact site for the tower, a contract was signed on 8 January 1887. This was signed by Eiffel acting in his own capacity rather than as the representative of his company, and granted him one and a half million francs toward the construction costs. This was less than a quarter of the estimated cost of six and a half million francs. Eiffel was to receive all income from the commercial exploitation during the exhibition and for the following twenty years. Eiffel later established a separate company to manage the tower.
The tower had been a subject of some controversy, attracting criticism both from those who did not believe it feasible and from those who objected on artistic grounds. Just as work began at the Champ de Mars, the "Committee of Three Hundred" (one member for each metre of the tower's height) was formed, led by Charles Garnier and including some of the most important figures of the French arts establishment, including Adolphe Bouguereau, Guy de Maupassant, Charles Gounod and Jules Massenet: a petition was sent to Jean-Charles Adolphe Alphand, the Minister of Works, and was published by Le Temps.
Work on the foundations started on 28 January 1887. Those for the east and south legs were straightforward, each leg resting on four concrete slabs, one for each of the principal girders of each leg but the other two, being closer to the river Seine were more complicated: each slab needed two piles installed by using compressed-air caissons long and in diameter driven to a depth of to support the concrete slabs, which were thick. Each of these slabs supported a limestone block, each with an inclined top to bear the supporting shoe for the ironwork. These shoes were anchored by bolts 10 cm (4 in) in diameter and long.
Work on the foundations was complete by 30 June and the erection of the iron work was started. Although no more than 250 men were employed on the site, a prodigious amount of exacting preparatory work was entailed: the drawing office produced 1,700 general drawings and 3,629 detail drawings of the 18,038 different parts needed. The task of drawing the components was complicated by the complex angles involved in the design and the degree of precision required: the positions of rivet holes were specified to within 0.1 mm (0.004 in) and angles worked out to one second of arc. The components, some already riveted together into sub-assemblies, were first bolted together, the bolts being replaced by rivets as construction progressed. No drilling or shaping was done on site: if any part did not fit it was sent back to the factory for alteration. The four legs, each at an angle of 54° to the ground, were initially constructed as cantilevers, relying on the anchoring bolts in the masonry foundation blocks. Eiffel had calculated that this would be satisfactory until they approached halfway to the first level: accordingly work was stopped for the purpose of erecting a wooden supporting scaffold. This gave ammunition to his critics, and lurid headlines including "Eiffel Suicide!" and "Gustave Eiffel has gone mad: he has been confined in an Asylum" appeared in the popular press. At this stage a small "creeper" crane was installed in each leg, designed to move up the tower as construction progressed and making use of the guides for the elevators which were to be fitted in each leg. After this brief pause erection of the metalwork continued, and the critical operation of linking the four legs was successfully completed by March 1888. In order to precisely align the legs so that the connecting girders could be put into place, a provision had been made to enable precise adjustments by placing hydraulic jacks in the footings for each of the girders making up the legs.
The main structural work was completed at the end of March 1889 and, on 31 March, Eiffel celebrated by leading a group of government officials, accompanied by representatives of the press, to the top of the tower. Since the lifts were not yet in operation, the ascent was made on foot, and took over an hour, with Eiffel stopping frequently to explain various features. Most of the party chose to stop at the lower levels, but a few, including Nouguier, Compagnon, the President of the City Council and reporters from Le Figaro and Le Monde Illustré, completed the climb. At 2:35 p.m. Eiffel hoisted a large tricolour, to the accompaniment of a 25-gun salute fired from the lower level.
The Panama Scandal
In 1887, Eiffel became involved with the French effort to construct a canal across the Panama Isthmus. The French Panama Canal Company, headed by Ferdinand de Lesseps, had been attempting to build a sea-level canal, but came to the realization that this was impractical. The plan was changed to one using locks, which Eiffel was contracted to design and build. The locks were on a large scale, most having a change of level of . Eiffel had been working on the project for little more than a year when the company suspended payments of interest on 14 December 1888, and shortly afterwards was put into liquidation. Eiffel's reputation was badly damaged when he was implicated in the financial and political scandal which followed. Although he was simply a contractor, he was charged along with the directors of the project with raising money under false pretenses and misappropriation of funds. On 9 February 1893, Eiffel was found guilty on the charge of misuse of funds and was fined 20,000 francs and sentenced to two years in prison, although he was acquitted on appeal. The later American-built canal used new lock designs (see History of the Panama Canal).
Shortly before the trial, Eiffel had announced his intention to resign from the Board of Directors of the Compagnie des Etablissements Eiffel and did so at a General Meeting held on 14 February, saying, "I have absolutely decided to abstain from any participation in any manufacturing business from now on, and so that no one can be misled and to make it most evident, I intend to remain uninvolved with the establishments that bear my name, and insist that it be removed from the company's name." The company changed its name to La Société Constructions Levallois-Perret, with Maurice Koechlin as managing director. The name was changed to the Anciens Etablissements Eiffel in 1937.
Later career
After his retirement from the Compagnie des Etablissements Eiffel, Eiffel went on to do important work in meteorology and aerodynamics. Eiffel's interest in these areas was a consequence of the problems he had encountered with the effects of wind forces on the structures he had built.
His first aerodynamic experiments, investigating the air resistance of surfaces, were carried out by dropping the surface to be investigated, together with a measuring apparatus, down a vertical cable stretched between the second level of the Eiffel Tower and the ground. Using this apparatus, Eiffel definitively established that the air resistance of a body was very closely proportional to the square of the airspeed. He then built a laboratory on the Champ de Mars at the foot of the tower in 1905, building his first wind tunnel there in 1909. The wind tunnel was used to investigate the characteristics of the airfoil sections used by the early pioneers of aviation such as the Wright Brothers, Gabriel Voisin and Louis Blériot. Eiffel established that the lift produced by an airfoil was the result of a reduction of air pressure above the wing rather than an increase of pressure acting on the under surface. Following complaints about noise from people living nearby, he moved his experiments to a new establishment at Auteuil in 1912. Here it was possible to build a larger wind tunnel, and Eiffel began to make tests using scale models of aircraft designs.
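In modern notation (a standard formulation rather than Eiffel's own), this quadratic dependence is what the drag equation expresses:

F_D = \tfrac{1}{2}\,\rho\,v^2\,C_D\,A

where ρ is the air density, v the airspeed, C_D the drag coefficient, and A the reference area of the body; Eiffel's drop experiments amounted to confirming the v² term.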
In 1913, Eiffel was awarded the Samuel P. Langley Medal for Aerodromics by the Smithsonian Institution; Alexander Graham Bell spoke at the presentation of the medal.
Eiffel had meteorological measuring equipment placed on the tower in 1889, and also built a weather station at his house in Sèvres. Between 1891 and 1892 he compiled a complete set of meteorological readings, and later extended his record-taking to include measurements from 25 different locations across France.
Eiffel died on 27 December 1923, while listening to the andante of Beethoven's Fifth Symphony, in his mansion on Rue Rabelais in Paris. He was buried in the family tomb in Levallois-Perret Cemetery.
Influence
Gustave Eiffel's career was a result of the Industrial Revolution. For a variety of economic and political reasons, this had been slow to make an impact in France, and Eiffel had the good fortune to be working during the country's period of rapid industrial development. Eiffel's importance as an engineer was twofold. Firstly, he was ready to adopt innovative techniques first used by others, such as his use of compressed-air caissons and hollow cast-iron piers; secondly, he was a pioneer in his insistence on basing all engineering decisions on thorough calculation of the forces involved, combining this analytical approach with an insistence on a high standard of accuracy in drawing and manufacture.
The growth of the railway network had an immense effect on people's lives, but although the enormous number of bridges and other work undertaken by Eiffel were an important part of this, the two works that did most to make him famous are the Statue of Liberty and the Eiffel Tower, both projects of immense symbolic importance and today internationally recognized landmarks. The Tower is also important because of its role in establishing the aesthetic potential of structures whose appearance is largely dictated by practical considerations.
His contribution to the science of aerodynamics is probably of equal importance to his work as an engineer.
Works
Buildings and structures
Railway station at Toulouse, France (1862)
Railway station at Agen, France.
Church of Notre Dame des Champs, Paris (1867)
Synagogue in Rue de Pasarelles, Paris (1867)
Théâtre les Folies, Paris (1868)
Burullus lighthouse, Egypt (1869)
Ras Gharib lighthouse, Egypt (1871)
Gasworks, La Paz, Bolivia (1873)
Gasworks, Tacna, Peru (1873)
Ristna Lighthouse at Hiiumaa island, Estonia (1874)
Church of San Marcos, Arica, Chile (1875)
Cathedral of San Pedro de Tacna, Peru (1875)
Lycée Carnot, Paris (1876)
Budapest Nyugati Pályaudvar (Western railway station), Budapest, Hungary (1877)
Ornamental Fountain of the Three Graces, Moquegua, Peru (1877)
Ruhnu Lighthouse at Ruhnu island, Estonia (1877)
Grand Hotel Traian, Iaşi, Romania (1882)
Iglesia Santa Barbara, Santa Rosalia, Baja California Sur, Mexico (1884–1897)
Nice Observatory, Nice, France (1886)
Statue of Liberty, Liberty Island, New York City, United States (1886)
Eiffel Tower, Paris, France (1889)
Paradis Latin theatre, Paris, France (1889)
The Iron Framework of the Chai de Segonzac (1892)
Casa de Fierro, Iquitos, Peru (1892) – disputed
Iglesia de Santa Bárbara in Santa Rosalía, Baja California Sur, Mexico (1897) – disputed
Opéra-Comique in Paris, France (1898)
Aérodynamique EIFFEL (wind tunnel), Paris (Auteuil), France (1911)
The Market, Olhão, Portugal
Konak Pier, İzmir, Turkey – disputed
Palacio de Hierro, Orizaba, Veracruz, Mexico
Catedral de Santa María, Chiclayo, Peru (late 20th century)
Combier Distillery, Saumur (Loire Valley), France
Church in Coquimbo, Chile – disputed
Fénix Theatre, Arequipa, Peru
San Camilo Market, Arequipa, Peru
Farol de São Thomé, Campos, Rio de Janeiro, Brazil
Pabellon de la Rosa Piriapolis, Uruguay
La Cristalera, old portuary storage, El Puerto de Santa María, Spain
Clock Tower, Monte Cristi, Dominican Republic – disputed
Bridges and viaducts
Railway bridge over the river Garonne, Bordeaux (1861)
Viaduct over the river Sioule (1867)
Viaduct at Neuvial (1867)
Pont de Ferro or Pont Eiffel in Girona (1876)
Maria Pia Bridge over the river Douro, Portugal (1877)
Cubzac bridge over the Dordogne, France (1880)
Borjomi bridge over the Tsemistskali River, Georgia (1902)
Belvárosi Bridge over the river Tisza in the centre of Szeged, Hungary (1881)
Mong Bridge over Bến Nghé River, Ho Chi Minh City, Vietnam (1882) – removed 2005 and restored after 2011
Garabit Viaduct, France (1884)
Railway bridge over the San, Przemyśl, Poland (1891)
Imbaba Bridge over the river Nile, Cairo, Egypt (1892)
The road (D50) bridge over the river Lay at Lavaud in the Vendée, France
Bridge over the Schelde in Temse, Belgium
Ponte Eiffel in Viana do Castelo's Marina, Portugal (1878)
The Railway Bridge over the Coura river in Caminha, Portugal.
Eiffel Bridge in Ungheni, between Moldova and Romania (1877)
Ajfel Bridge on Skenderija Sarajevo, Bosnia and Herzegovina
Ghenh Bridge and Rach Cat Bridge in Bien Hoa city, Đồng Nai Province, Vietnam
Trường Tiền Bridge in Huế city, Thừa Thiên–Huế Province, Vietnam
Bolívar Bridge, in Arequipa, Peru
Puente Ferroviario Banco de Arena Railway Bridge near Constitución, Chile
Destroyed
The Eiffel Bridge in Zrenjanin (constructed and finished in 1904; dismantled in 1969)
Birsbrücke, Münchenstein, Switzerland, which collapsed (1891), killing over 70 people in the Münchenstein rail disaster.
Souleuvre Viaduct (1893; bridge spans removed but piers survive)
Not proven
Mercado Adolpho Lisboa, Manaus, Brazil (1883)
Basilica of San Sebastian, Manila, Philippines (1891)
Malleco Viaduct, Chile (1890)
Estación Central (railway station) Santiago, Chile, (1897)
Dam on Great Bačka Canal, Bečej, Serbia (1900)
Santa Justa Lift (Carmo Lift), in Lisbon, Portugal (1901)
Santa Efigênia Viaduct, São Paulo, Brazil (1913)
La Paz Train Station, La Paz, Bolivia (now La Paz Bus Station) (1917) – in a different style from Eiffel's and built more than 20 years after he left the company and the construction business
"Vuelta al Mundo", Córdoba, Argentina
Puente de hierro sobre el río Conlara, Tilquicho, Córdoba, Argentina
Watermill, Villa Dolores, Córdoba, Argentina
Casa del Cura (also called Casa Eiffel), in Ulea, Spain (1912)
Palácio de Ferro (Iron Palace), Angola
Train Yard, Novi Sad, Serbia (1921)
Unrealized projects
Trinity Bridge, Saint Petersburg—Eiffel entered a design in the competition, but his project was not realized.
Heritage protection efforts
A number of Gustave Eiffel's works are in danger today. Some, such as works in Vietnam, have already been destroyed.
A proposal to demolish the railway bridge of Bordeaux (also known as the "passerelle St Jean"), the first major work of Gustave Eiffel, resulted in a large response from the public. Actions to protect the bridge were taken as early as 2002 by the "Association of the Descendants of Gustave Eiffel", joined from 2005 onwards by the Association "Sauvons la Passerelle Eiffel" (Save the Eiffel Bridge). They led, in 2010, to the decision to list Eiffel's Bordeaux bridge as a French Historical Monument.
References
Bibliography
External links
Official website of the Association of the Descendants of Gustave Eiffel (in English)
Gustave Eiffel: The Man Behind the Masterpiece
Einsturz der Birsbrücke bei Münchenstein (Basel) ("Collapse of the Birs Bridge near Münchenstein (Basel)", in German)
1832 births
1923 deaths
Architects from Dijon
Burials at Levallois-Perret Cemetery
École Centrale Paris alumni
Eiffel Tower
Engineers from Dijon
French architects
French bridge engineers
French civil engineers
French people of German descent
Officers of the Legion of Honour
Statue of Liberty
Structural engineers | Gustave Eiffel | [
"Engineering"
] | 6,363 | [
"Structural engineering",
"Structural engineers"
] |
12,240 | https://en.wikipedia.org/wiki/Gold | Gold is a chemical element with the chemical symbol Au (from Latin ) and atomic number 79. In its pure form, it is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal. Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series. It is solid under standard conditions.
Gold often occurs in its free elemental (native) state, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as in electrum), naturally alloyed with other metals like copper and palladium, and as mineral inclusions such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).
Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term 'acid test'. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction.
A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other works of art throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971.
In 2023, the world's largest gold producer was China, followed by Russia and Australia. A total of around 201,296 tonnes of gold exists above ground. This is equal to a cube with each side measuring roughly . The world's consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, as well as conductivity of electricity, have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, the production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatory agents in medicine.
Characteristics
Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via the formation, reorientation, and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of , and an avoirdupois ounce into . Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in the visors of heat-resistant suits and in sun visors for spacesuits. Gold is a good conductor of heat and electricity.
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in the counterfeiting of gold bars, such as by plating a tungsten bar with gold. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is .
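Because the two densities differ by only about 0.3%, a simple mass-and-volume check cannot reliably expose a tungsten core. A minimal Python sketch of the arithmetic, using a hypothetical 1 kg bar (illustrative values only, not a test procedure):

# Compare the measured density of a bar against gold and tungsten.
GOLD_DENSITY = 19.30      # g/cm^3
TUNGSTEN_DENSITY = 19.25  # g/cm^3

def density(mass_g: float, volume_cm3: float) -> float:
    return mass_g / volume_cm3

# A 1 kg bar of pure gold occupies about 51.8 cm^3.
d = density(1000.0, 51.8)
print(f"measured density: {d:.2f} g/cm^3")
print(f"relative difference from tungsten: {abs(d - TUNGSTEN_DENSITY) / GOLD_DENSITY:.2%}")

The printed difference (roughly 0.3%) is smaller than the tolerance of most routine weighings, which is why more elaborate checks such as ultrasonic testing are used in practice.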
Color
Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium.
Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications.
Colloidal gold, used by electron-microscopists, is red if the particles are small; larger particles of colloidal gold are blue.
Isotopes
Gold has only one stable isotope, , which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is with a half-life of 186.1 days. The least stable is , which decays by proton emission with a half-life of 30 μs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are , which decays by electron capture, and , which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay.
At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only , , , , and do not have isomers. Gold's most stable isomer is with a half-life of 2.27 days. Gold's least stable isomer is with a half-life of only 7 ns. has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths.
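Half-lives such as these imply exponential decay of a sample; in standard notation (not specific to gold), a population of N_0 nuclei decays as

N(t) = N_0 \cdot 2^{-t/t_{1/2}}

so that, for example, after five half-lives roughly 3% of the original nuclei remain.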
Synthesis
The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaoka's prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced.
Chemistry
Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is , which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives.
Au(III) (referred to as auric) is a common oxidation state, and is illustrated by gold(III) chloride, . The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex.
Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone.
Some free halogens react to form the corresponding gold halides. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride . Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride . Gold reacts with bromine at 140 °C to form a combination of gold(III) bromide and gold(I) bromide AuBr, but reacts very slowly with iodine to form gold(I) iodide AuI:
2 Au + 3 F2 → 2 AuF3 (Δ; dull-red heat)
2 Au + 3 Cl2 → 2 AuCl3 (Δ; 180 °C)
2 Au + 2 Br2 → AuBr3 + AuBr (Δ; 140 °C)
2 Au + I2 → 2 AuI
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chlorauric acid.
Unlike sulfur, phosphorus reacts directly with gold at elevated temperatures to produce gold phosphide (Au2P3).
Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors.
Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming ions, or chloroauric acid, thereby enabling further oxidation:
2 Au + 6 H2SeO4 → Au2(SeO4)3 + 3 H2SeO3 + 3 H2O (at 200 °C)
Au + 4 HCl + HNO3 → HAuCl4 + NO↑ + 2 H2O
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present to form soluble complexes.
Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate.
Rare oxidation states
Less common oxidation states of gold include −1, +2, and +5.
The −1 oxidation state occurs in aurides, compounds containing the anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making a stable species, analogous to the halides.
Gold also has a –1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride.
Gold(II) compounds are usually diamagnetic with Au–Au bonds. The evaporation of a solution of in concentrated produces red crystals of gold(II) sulfate, . Originally thought to be a mixed-valence compound, it has been shown to contain cations, analogous to the better-known mercury(I) ion, . A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in . In September 2023, a novel type of metal-halide perovskite material consisting of Au3+ and Au2+ cations in its crystal structure was found; it has been shown to be unexpectedly stable under normal conditions.
Gold pentafluoride, along with its derivative anion, , and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state.
Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond.
Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species .
Origin
Gold production in the universe
Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed.
Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe.
Asteroid origin theories
Because the Earth was molten when it was formed, almost all of the gold present in the early Earth probably sank into the planetary core. Therefore, as hypothesized in one model, most of the gold in the Earth's crust and mantle is thought to have been delivered to Earth by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago.
Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed the Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on Earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, and thus the gold did not actually arrive in the asteroid/meteorite. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold that is ascertained to exist today on Earth has been extracted from these Witwatersrand rocks.
Mantle return theories
Much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the mantle. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, as evidenced by their findings at Deseado Massif in the Argentinian Patagonia.
Occurrence
On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver, and is commonly known as white gold. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity.
Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is pyrite. These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering, and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets.
Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite () and antimonide aurostibite (). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (), novodneprite () and weishanite ().
A 2004 research paper suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits.
A 2013 study claimed that water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces.
Seawater
The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), which is attributed to wind-blown dust or rivers. At 10 parts per quadrillion, the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data.
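The 15,000-tonne figure follows from simple arithmetic; a rough check in Python, assuming a total ocean volume of about 1.37 billion cubic kilometres (a commonly cited estimate, not a figure from this article):

# Estimate the total gold dissolved in the oceans at ~10 parts per quadrillion.
ocean_volume_km3 = 1.37e9   # assumed total ocean volume
gold_g_per_km3 = 10         # ~10 parts per quadrillion, per the measurements above

total_tonnes = ocean_volume_km3 * gold_g_per_km3 / 1e6  # 1 tonne = 1e6 g
print(f"{total_tonnes:,.0f} tonnes")  # ~13,700 t, consistent with the ~15,000 t figure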
A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project.
History
The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period.
The oldest gold artifacts in the world are from Bulgaria and date back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" find of gold artifacts in history.
Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.
The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.
Gold is mentioned in the Amarna letters numbered 19 and 26 from around the 14th century BC.
Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC. The legend of the golden fleece, dating from the eighth century BC, may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the state of Chu circulated the Ying Yuan, a kind of square gold coin.
In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD.
During his hajj to Mecca in 1324, Mansa Musa (ruler of the Mali Empire from 1312 to 1337) passed through Cairo in July 1324, reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels. He gave away so much gold that it depressed its price in Egypt for over a decade, causing high inflation; the episode was noted by a contemporary Arab historian.
The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, for the indigenous peoples of North America gold was considered useless and they saw much greater value in other minerals which were directly related to their utility, such as obsidian, flint, and slate.
El Dorado is applied to a legendary story in which precious stones were found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the previous myth were also combined with those of a legendary lost city. El Dorado was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged in Lake Guatavita. The legends surrounding El Dorado changed over time, as it went from being a man, to a city, to a kingdom, and then finally to an empire.
Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused in gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain.
Gold played a role in western culture, as a cause for desire and of corruption, as told in children's fables such as Rumpelstiltskin—where Rumpelstiltskin turns hay into gold for the peasant's daughter in return for her child when she becomes a princess—and the stealing of the hen that lays golden eggs in Jack and the Beanstalk.
The top prize at the Olympic Games and many other sports competitions is the gold medal.
75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950.
One main goal of the alchemists was to produce gold from other substances, such as lead — presumably by the interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by using nuclear transmutation. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun.
The Dome of the Rock is covered with an ultra-thin layer of gold. The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for bridal crowns since antiquity. An ancient Talmudic text circa 100 AD describes Rachel, wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave circa 370 BC.
Etymology
Gold is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- .
The symbol Au is from the Latin . The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, meaning . This word is derived from the same root (Proto-Indo-European *h₂u̯es- ) as *h₂éu̯sōs, the ancestor of the Latin word . This etymological relationship is presumably behind the frequent claim in scientific publications that meant .
Culture
In popular culture gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards).
Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the Golden Rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years" or "golden jubilee". The height of a civilization is referred to as a golden age.
Religion
The first known prehistoric human usages of gold were religious in nature.
In some forms of Christianity and Judaism, gold has been associated both with the sacred and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, Virgin Mary and the saints are often golden.
In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted.
In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gemstones.
According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and of a substance that could even help souls reach paradise.
Wedding rings are typically made of gold. Because gold is long-lasting and unaffected by the passage of time, it may aid in the ring's symbolism of eternal vows before God and of the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths instead) during the ceremony, an amalgamation of symbolic rites.
On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they were from the late 9th century, during the Abbasid Caliphate, making them roughly 1,100 years old.
Production
According to the United States Geological Survey in 2016, about of gold has been accounted for, of which 85% remains in active use.
Mining and prospecting
Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted for is from South Africa. Its production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest.
In 2023, China was the world's leading gold-mining country, followed in order by Russia, Australia, Canada, the United States and Ghana.
In South America, the controversial project Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border between Chile and Argentina.
It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small scale mining.
The city of Johannesburg in South Africa was founded as a result of the Witwatersrand Gold Rush, which resulted in the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a thick layer of Archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome which lies close to the center of the Witwatersrand basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly , making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where Archean rivers from the north and north-west formed extensive pebbly braided river deltas before draining into the "Witwatersrand sea" where the rest of the Witwatersrand sediments were deposited.
The Second Boer War of 1899–1901 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa.
During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada.
Grasberg mine located in Papua, Indonesia is the largest gold mine in the world.
Extraction and refining
Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible.
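Because these grades are mass ratios, 1 ppm corresponds to just 1 gram of gold per tonne of ore; a small illustrative Python sketch using the grades quoted above:

# Convert an ore grade in ppm (by mass) to grams of gold per tonne of ore.
def gold_grams_per_tonne(grade_ppm: float) -> float:
    # 1 tonne = 1e6 g, so 1 ppm by mass is exactly 1 g of gold per tonne.
    return grade_ppm

for grade_ppm in (0.5, 1.0, 5.0, 30.0):
    print(f"{grade_ppm} ppm -> {gold_grams_per_tonne(grade_ppm)} g of gold per tonne of ore")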
The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.
After initial production, gold is often subsequently refined industrially by the Wohlwill process, which is based on electrolysis, or by the Miller process, that is, chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation, as well as cupellation, or refining methods based on the dissolution of gold in aqua regia.
Recycling
In 1997, recycled gold accounted for approximately 20% of the 2700 tons of gold supplied to the market. Jewelry companies such as Generation Collection and computer companies including Dell conduct recycling.
As of 2020, mining a kilogram of gold produces about 16 tonnes of carbon dioxide, while recycling a kilogram of gold produces 53 kilograms of CO2 equivalent. Approximately 30 percent of the global gold supply is recycled rather than mined, as of 2020.
Consumption
The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry.
According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India.
Pollution
Gold production is associated with contribution to hazardous pollution.
Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical, which can kill living creatures exposed to even minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters. Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid which in turn dissolves these heavy metals, facilitating their passage into surface water and ground water. This process is called acid mine drainage. These gold ore dumps contain long-term, highly hazardous waste.
It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. Mercury can then enter into the human food chain in the form of methylmercury. Mercury poisoning in humans can cause severe brain damage.
Gold extraction is also a highly energy-intensive industry: extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction require nearly 25 kWh of electricity per gram of gold produced.
Monetary use
Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.
The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The gold talent coin in use during the periods of Grecian history before and during the life of Homer weighed between 8.42 and 8.75 grams. From an earlier preference for using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries.
Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th century industrial economies. In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort. Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations.
After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999.
Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets, and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold futures contracts. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices.
The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed fine gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6k).
Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party.
The ISO 4217 currency code of gold is XAU. Many holders of gold store it in form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92).
The special issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular issue Canadian Gold Maple Leaf coin has a purity of 99.99%. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.
Price
Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by karat (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being nearly pure.
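A minimal Python sketch of the karat-to-fineness arithmetic described above (illustrative only):

# Convert a karat rating to a gold mass fraction and millesimal fineness.
def karat_to_fraction(karat: float) -> float:
    return karat / 24.0  # 24 karat is pure gold

for k in (24, 22, 18, 14, 10):
    frac = karat_to_fraction(k)
    print(f"{k}k = {frac:.1%} gold, millesimal fineness {frac * 1000:.0f}")

For example, the loop confirms that 18k is 75% gold, that is, a fineness of 750.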
The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open. Gold was valued at around $42 per gram ($1,300 per troy ounce).
History
Historically gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($0.889/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand.
The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox. In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes.
After the Nixon shock of 15 August 1971, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980, to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1023.50 per troy ounce ($32.91/g).
On 2 December 2009, gold reached a new high closing at $1,217.23. Gold further rallied hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East.
From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1200 per troy ounce in late 2014 and 2015.
In August 2020, the gold price picked up to US$2,060 per ounce after a total growth of 59% from August 2018 to October 2020, a period during which it outpaced the Nasdaq total return of 54%.
Gold futures are traded on the COMEX exchange. These contracts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams).
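A quick sketch of the ounce-to-gram price conversion using the constant above (Python, illustrative; the $1,300 figure is the one quoted earlier in this section):

# Convert a gold price quoted in USD per troy ounce to USD per gram.
TROY_OUNCE_GRAMS = 31.1034768

def usd_per_gram(usd_per_troy_ounce: float) -> float:
    return usd_per_troy_ounce / TROY_OUNCE_GRAMS

print(f"${usd_per_gram(1300):.2f} per gram")  # ~$41.80, matching the ~$42/g figure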
Other applications
Jewelry
Because of the softness of pure (24k) gold, it is usually alloyed with other metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper, silver, palladium or other base metals in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects.
By 2014, the gold jewelry industry was growing despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion, according to a World Gold Council report.
Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery.
Electronics
Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in the fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about three dollars. But since nearly one billion cell phones are produced each year, a gold value of US$2.82 in each phone adds up to US$2.82 billion in gold from just this application. (Prices updated to November 2022.)
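A rough consistency check of those per-phone figures (Python; the price per gram here is implied by the article's own numbers rather than an independent quote):

# Check that 50 mg of gold per phone at the quoted $2.82 value scales to $2.82 billion.
gold_per_phone_g = 0.050      # 50 mg
usd_value_per_phone = 2.82    # figure quoted above
phones_per_year = 1e9         # ~1 billion phones per year

implied_usd_per_gram = usd_value_per_phone / gold_per_phone_g
total_billion_usd = usd_value_per_phone * phones_per_year / 1e9
print(f"implied gold price: ${implied_usd_per_gram:.2f}/g")
print(f"annual gold value in phones: ${total_billion_usd:.2f} billion")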
Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications in electronic sliding contacts in highly humid or corrosive atmospheres, and in use for contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines) remains very common.
Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding.
The concentration of free electrons in gold metal is 5.91×10²² cm⁻³. Gold is highly conductive to electricity and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project.
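That electron concentration corresponds almost exactly to one free electron per gold atom, which a short calculation makes explicit. Gold's density and molar mass are standard values assumed here, not given in the text:

```python
AVOGADRO = 6.022e23      # atoms per mole
density_g_cm3 = 19.3     # density of gold (assumed standard value)
molar_mass_g = 196.97    # molar mass of gold (assumed standard value)

atoms_per_cm3 = density_g_cm3 / molar_mass_g * AVOGADRO
electrons_per_cm3 = 5.91e22   # free-electron concentration from the text

print(f"Gold atoms per cm^3:     {atoms_per_cm3:.2e}")
print(f"Free electrons per atom: {electrons_per_cm3 / atoms_per_cm3:.2f}")
```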
It is estimated that 16% of the world's presently-accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan.
Medicine
There are only two gold compounds currently employed as pharmaceuticals in modern medicine (sodium aurothiomalate and auranofin), used in the treatment of arthritis and other similar conditions in the US due to their anti-inflammatory properties. These drugs have been explored as a means to help to reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites.
Some esotericists and forms of alternative medicine assign metallic gold a healing power, against the scientific consensus.
Historically, metallic gold and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and known to Dioscorides. In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy.
In the 19th century gold had a reputation as an anxiolytic, a therapy for nervous disorders. It was used to treat depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence, and, most notably, alcoholism (Keeley, 1897).
Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid). The apparent paradox between this inertness and the actual toxicology of some gold compounds suggests the possibility of serious gaps in the understanding of the action of gold in physiology.
Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others.
Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen.
Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source used in the scanning electron microscope.
The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases.
Cuisine
Gold can be used in food and has the E number 175. In 2016, the European Food Safety Authority published an opinion on the re-evaluation of gold as a food additive. Concerns included the possible presence of minute amounts of gold nanoparticles in the food additive, and that gold nanoparticles have been shown to be genotoxic in mammalian cells in vitro.
Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient. Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks.
Danziger Goldwasser (German: Gold water of Danzig) or Goldwasser () is a traditional German herbal liqueur produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (c. $1000) cocktails which contain flakes of gold leaf. However, since metallic gold is inert to all body chemistry, it has no taste, it provides no nutrition, and it leaves the body unaltered.
Vark is a foil composed of a pure metal that is sometimes gold, and is used for garnishing sweets in South Asian cuisine.
Miscellanea
Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.
In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold in the form of gold chloride.
Gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts' helmets, and in electronic warfare planes such as the EA-6B Prowler.
Gold is used as the reflective layer on some high-end CDs.
Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model.
Gold can be manufactured so thin that it appears semi-transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it. The heat produced by the resistance of the gold is enough to prevent ice from forming.
Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming.
Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.
Gold, when dispersed in nanoparticles, can act as a heterogeneous catalyst of chemical reactions.
In recent years, gold has been used as a symbol of pride by the autism rights movement, as its symbol Au could be seen as similar to the word "autism".
Toxicity
Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body.
Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol.
Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel.
A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano metal complexes of gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
See also
Bulk leach extractable gold, for sampling ores
Chrysiasis (dermatological condition)
Digital gold currency, form of electronic currency
GFMS business consultancy
Gold fingerprinting, use of impurities to identify an alloy
Gold standard in banking
List of countries by gold production
Tumbaga, alloy of gold and copper
Iron pyrite, fool's gold
Nordic gold, non-gold copper alloy
References
Further reading
Bachmann, H. G. The Lure of Gold: An Artistic and Cultural History (2006) online
Bernstein, Peter L. The Power of Gold: The History of an Obsession (2000) online
Brands, H.W. The Age of Gold: The California Gold Rush and the New American Dream (2003) excerpt
Buranelli, Vincent. Gold: An Illustrated History (1979) online; wide-ranging popular history
Cassel, Gustav. "The restoration of the gold standard." Economica 9 (1923): 171–185. online
Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939 (Oxford UP, 1992).
Ferguson, Niall. The Ascent of Money – Financial History of the World (2009) online
Hart, Matthew. Gold: The Race for the World's Most Seductive Metal. New York: Simon & Schuster, 2013.
Johnson, Harry G. "The gold rush of 1968 in retrospect and prospect". American Economic Review 59.2 (1969): 344–348. online
Kwarteng, Kwasi. War and Gold: A Five-Hundred-Year History of Empires, Adventures, and Debt (2014) online
Vilar, Pierre. A History of Gold and Money, 1450–1920 (1960). online
Vilches, Elvira. New World Gold: Cultural Anxiety and Monetary Disorder in Early Modern Spain (2010).
External links
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Gold www.rsc.org
Gold at The Periodic Table of Videos (University of Nottingham)
Getting Gold 1898 book, www.lateralscience.co.uk
Gold element information – rsc.org
Chemical elements
Transition metals
Noble metals
Precious metals
Cubic minerals
Minerals in space group 225
Dental materials
Electrical conductors
Native element minerals
E-number additives
Symbols of Alaska
Symbols of California
Chemical elements with face-centered cubic structure
Coinage metals and alloys
Symbols of Victoria | Gold | [
"Physics",
"Chemistry"
] | 13,115 | [
"Dental materials",
"Chemical elements",
"Coinage metals and alloys",
"Materials",
"Alloys",
"Electrical conductors",
"Atoms",
"Matter"
] |
12,241 | https://en.wikipedia.org/wiki/Gallium | Gallium is a chemical element; it has the symbol Ga and atomic number 31. Discovered by the French chemist Paul-Émile Lecoq de Boisbaudran in 1875, gallium is in group 13 of the periodic table and is similar to the other metals of the group (aluminium, indium, and thallium).
Elemental gallium is a relatively soft, silvery metal at standard temperature and pressure. In its liquid state, it becomes silvery white. If enough force is applied, solid gallium may fracture conchoidally. Since its discovery in 1875, gallium has widely been used to make alloys with low melting points. It is also used in semiconductors, as a dopant in semiconductor substrates.
The melting point of gallium (29.7646 °C, 85.5763 °F, 302.9146 K) is used as a temperature reference point. Gallium alloys are used in thermometers as a non-toxic and environmentally friendly alternative to mercury, and can withstand higher temperatures than mercury. A melting point of −19 °C (−2.2 °F), well below the freezing point of water, is claimed for the alloy galinstan (62–95% gallium, 5–22% indium, and 0–16% tin by weight), but that may be the freezing point with the effect of supercooling.
Gallium does not occur as a free element in nature, but rather as gallium(III) compounds in trace amounts in zinc ores (such as sphalerite) and in bauxite. Elemental gallium is a liquid at temperatures greater than 29.76 °C (85.57 °F), and will melt in a person's hands at the normal human body temperature of 37 °C (98.6 °F).
Gallium is predominantly used in electronics. Gallium arsenide, the primary chemical compound of gallium in electronics, is used in microwave circuits, high-speed switching circuits, and infrared circuits. Semiconducting gallium nitride and indium gallium nitride produce blue and violet light-emitting diodes and diode lasers. Gallium is also used in the production of artificial gadolinium gallium garnet for jewelry. Gallium is considered a technology-critical element by the United States National Library of Medicine and Frontiers Media.
Gallium has no known natural role in biology. Gallium(III) behaves in a similar manner to ferric salts in biological systems and has been used in some medical applications, including pharmaceuticals and radiopharmaceuticals.
Physical properties
Elemental gallium is not found in nature, but it is easily obtained by smelting. Very pure gallium is a silvery blue metal that fractures conchoidally like glass. Gallium's volume expands by 3.10% when it changes from a liquid to a solid, so care must be taken when storing it, because containers may rupture when it changes state. Gallium shares the higher-density liquid state with a short list of other materials that includes water, silicon, germanium, bismuth, and plutonium.
Gallium forms alloys with most metals. It readily diffuses into cracks or grain boundaries of some metals such as aluminium, aluminium–zinc alloys and steel, causing extreme loss of strength and ductility called liquid metal embrittlement.
The melting point of gallium, at 302.9146 K (29.7646 °C, 85.5763 °F), is just above room temperature, and is approximately the same as the average summer daytime temperatures in Earth's mid-latitudes. This melting point (mp) is one of the formal temperature reference points in the International Temperature Scale of 1990 (ITS-90) established by the International Bureau of Weights and Measures (BIPM). The triple point of gallium, 302.9166 K (29.7666 °C, 85.5799 °F), is used by the US National Institute of Standards and Technology (NIST) in preference to the melting point.
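A quick unit-conversion check ties the three quoted values of the melting point together, using the standard kelvin/Celsius/Fahrenheit relations:

```python
mp_kelvin = 302.9146  # ITS-90 gallium melting point, from the text

mp_celsius = mp_kelvin - 273.15
mp_fahrenheit = mp_celsius * 9 / 5 + 32

print(f"{mp_kelvin} K = {mp_celsius:.4f} °C = {mp_fahrenheit:.4f} °F")
# -> 302.9146 K = 29.7646 °C = 85.5763 °F, matching the quoted values
```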
The melting point of gallium allows it to melt in the human hand, and then solidify if removed. The liquid metal has a strong tendency to supercool below its melting point/freezing point: Ga nanoparticles can be kept in the liquid state below 90 K. Seeding with a crystal helps to initiate freezing. Gallium is one of the four non-radioactive metals (with caesium, rubidium, and mercury) that are known to be liquid at, or near, normal room temperature. Of the four, gallium is the only one that is neither highly reactive (as are rubidium and caesium) nor highly toxic (as is mercury) and can, therefore, be used in metal-in-glass high-temperature thermometers. It is also notable for having one of the largest liquid ranges for a metal, and for having (unlike mercury) a low vapor pressure at high temperatures. Gallium's boiling point, 2676 K, is nearly nine times higher than its melting point on the absolute scale, the greatest ratio between melting point and boiling point of any element. Unlike mercury, liquid gallium metal wets glass and skin, along with most other materials (with the exceptions of quartz, graphite, gallium(III) oxide and PTFE), making it mechanically more difficult to handle even though it is substantially less toxic and requires far fewer precautions than mercury. Gallium painted onto glass is a brilliant mirror. For this reason as well as the metal contamination and freezing-expansion problems, samples of gallium metal are usually supplied in polyethylene packets within other containers.
Gallium does not crystallize in any of the simple crystal structures. The stable phase under normal conditions is orthorhombic with 8 atoms in the conventional unit cell. Within a unit cell, each atom has only one nearest neighbor (at a distance of 244 pm). The remaining six unit cell neighbors are spaced 27, 30 and 39 pm farther away, and they are grouped in pairs with the same distance. Many stable and metastable phases are found as function of temperature and pressure.
The bonding between the two nearest neighbors is covalent; hence Ga2 dimers are seen as the fundamental building blocks of the crystal. This explains the low melting point relative to the neighbor elements, aluminium and indium. This structure is strikingly similar to that of iodine and may form because of interactions between the single 4p electrons of gallium atoms, further away from the nucleus than the 4s electrons and the [Ar]3d¹⁰ core. This phenomenon recurs with mercury with its "pseudo-noble-gas" [Xe]4f¹⁴5d¹⁰6s² electron configuration, which is liquid at room temperature. The 3d¹⁰ electrons do not shield the outer electrons very well from the nucleus and hence the first ionisation energy of gallium is greater than that of aluminium. Ga2 dimers do not persist in the liquid state and liquid gallium exhibits a complex low-coordinated structure in which each gallium atom is surrounded by 10 others, rather than 11–12 neighbors typical of most liquid metals.
The physical properties of gallium are highly anisotropic, i.e. have different values along the three major crystallographic axes a, b, and c (see table), producing a significant difference between the linear (α) and volume thermal expansion coefficients. The properties of gallium are strongly temperature-dependent, particularly near the melting point. For example, the coefficient of thermal expansion increases by several hundred percent upon melting.
Isotopes
Gallium has 30 known isotopes, ranging in mass number from 60 to 89. Only two isotopes are stable and occur naturally, gallium-69 and gallium-71. Gallium-69 is more abundant: it makes up about 60.1% of natural gallium, while gallium-71 makes up the remaining 39.9%. All the other isotopes are radioactive, with gallium-67 being the longest-lived (half-life 3.261 days). Isotopes lighter than gallium-69 usually decay through beta plus decay (positron emission) or electron capture to isotopes of zinc, while isotopes heavier than gallium-71 decay through beta minus decay (electron emission), possibly with delayed neutron emission, to isotopes of germanium. Gallium-70 can decay through both beta minus decay and electron capture. Gallium-67 is unique among the light isotopes in having only electron capture as a decay mode, as its decay energy is not sufficient to allow positron emission. Gallium-67 and gallium-68 (half-life 67.7 min) are both used in nuclear medicine.
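As a consistency check on these abundances, the abundance-weighted average of the two stable isotope masses should reproduce gallium's standard atomic weight; the isotopic masses below are standard literature values, assumed here rather than taken from the text:

```python
# (mass number, isotopic mass in u, natural abundance); masses are
# standard literature values, assumed for this check
isotopes = [
    (69, 68.9256, 0.601),  # gallium-69
    (71, 70.9247, 0.399),  # gallium-71
]

atomic_weight = sum(mass * abundance for _, mass, abundance in isotopes)
print(f"Weighted atomic mass: {atomic_weight:.2f} u")  # ~69.72 u, cf. 69.723
```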
Chemical properties
Gallium is found primarily in the +3 oxidation state. The +1 oxidation state is also found in some compounds, although it is less common than it is for gallium's heavier congeners indium and thallium. For example, the very stable GaCl2 contains both gallium(I) and gallium(III) and can be formulated as Ga(I)[Ga(III)Cl4]; in contrast, the monochloride is unstable above 0 °C, disproportionating into elemental gallium and gallium(III) chloride. Compounds containing Ga–Ga bonds are true gallium(II) compounds, such as GaS (which can be formulated as (Ga2)4+(S2−)2) and the dioxan complex Ga2Cl4(C4H8O2)2.
Aqueous chemistry
Strong acids dissolve gallium, forming gallium(III) salts such as Ga(NO3)3 (gallium nitrate). Aqueous solutions of gallium(III) salts contain the hydrated gallium ion, [Ga(H2O)6]3+. Gallium(III) hydroxide, Ga(OH)3, may be precipitated from gallium(III) solutions by adding ammonia. Dehydrating Ga(OH)3 at 100 °C produces gallium oxide hydroxide, GaO(OH).
Alkaline hydroxide solutions dissolve gallium, forming gallate salts (not to be confused with identically named gallic acid salts) containing the [Ga(OH)4]− anion. Gallium hydroxide, which is amphoteric, also dissolves in alkali to form gallate salts. Although earlier work suggested [Ga(OH)6]3− as another possible gallate anion, it was not found in later work.
Oxides and chalcogenides
Gallium reacts with the chalcogens only at relatively high temperatures. At room temperature, gallium metal is not reactive with air and water because it forms a passive, protective oxide layer. At higher temperatures, however, it reacts with atmospheric oxygen to form gallium(III) oxide, Ga2O3. Reducing Ga2O3 with elemental gallium in vacuum at 500 °C to 700 °C yields the dark brown gallium(I) oxide, Ga2O. Ga2O is a very strong reducing agent, capable of reducing H2SO4 to H2S. It disproportionates at 800 °C back to gallium and Ga2O3.
Gallium(III) sulfide, Ga2S3, has 3 possible crystal modifications. It can be made by the reaction of gallium with hydrogen sulfide (H2S) at 950 °C. Alternatively, Ga(OH)3 can be used at 747 °C:
2 Ga(OH)3 + 3 H2S → Ga2S3 + 6 H2O
Reacting a mixture of alkali metal carbonates and Ga2O3 with H2S leads to the formation of thiogallates containing the [Ga2S4]2− anion. Strong acids decompose these salts, releasing H2S in the process. The mercury salt, HgGa2S4, can be used as a phosphor.
Gallium also forms sulfides in lower oxidation states, such as gallium(II) sulfide and the green gallium(I) sulfide, the latter of which is produced from the former by heating to 1000 °C under a stream of nitrogen.
The other binary chalcogenides, Ga2Se3 and Ga2Te3, have the zincblende structure. They are all semiconductors but are easily hydrolysed and have limited utility.
Nitrides and pnictides
Gallium reacts with ammonia at 1050 °C to form gallium nitride, GaN. Gallium also forms binary compounds with phosphorus, arsenic, and antimony: gallium phosphide (GaP), gallium arsenide (GaAs), and gallium antimonide (GaSb). These compounds have the same structure as ZnS, and have important semiconducting properties. GaP, GaAs, and GaSb can be synthesized by the direct reaction of gallium with elemental phosphorus, arsenic, or antimony. They exhibit higher electrical conductivity than GaN. GaP can also be synthesized by reacting with phosphorus at low temperatures.
Gallium forms ternary nitrides; for example:
Li3Ga + N2 → Li3GaN2
Similar compounds with phosphorus and arsenic are possible: Li3GaP2 and Li3GaAs2. These compounds are easily hydrolyzed by dilute acids and water.
Halides
Gallium(III) oxide reacts with fluorinating agents such as HF or F2 to form gallium(III) fluoride, GaF3. It is an ionic compound strongly insoluble in water. However, it dissolves in hydrofluoric acid, in which it forms an adduct with water, GaF3·3H2O. Attempting to dehydrate this adduct forms GaF2OH·nH2O. The adduct reacts with ammonia to form GaF3·3NH3, which can then be heated to form anhydrous GaF3.
Gallium trichloride is formed by the reaction of gallium metal with chlorine gas. Unlike the trifluoride, gallium(III) chloride exists as dimeric molecules, Ga2Cl6, with a melting point of 78 °C. Equivalent compounds are formed with bromine and iodine, Ga2Br6 and Ga2I6.
Like the other group 13 trihalides, gallium(III) halides are Lewis acids, reacting as halide acceptors with alkali metal halides to form salts containing GaX4− anions, where X is a halogen. They also react with alkyl halides to form carbocations and GaX4−.
When heated to a high temperature, gallium(III) halides react with elemental gallium to form the respective gallium(I) halides. For example, GaCl3 reacts with Ga to form GaCl:
2 Ga + GaCl3 ⇌ 3 GaCl (g)
At lower temperatures, the equilibrium shifts toward the left and GaCl disproportionates back to elemental gallium and . GaCl can also be produced by reacting Ga with HCl at 950 °C; the product can be condensed as a red solid.
Gallium(I) compounds can be stabilized by forming adducts with Lewis acids. For example:
GaCl + AlCl3 → Ga+[AlCl4]−
The so-called "gallium(II) halides", GaX2, are actually adducts of gallium(I) halides with the respective gallium(III) halides, having the structure Ga+[GaX4]−. For example:
GaCl + GaCl3 → Ga+[GaCl4]−
Hydrides
Like aluminium, gallium also forms a hydride, GaH3, known as gallane, which may be produced by reacting lithium gallanate (LiGaH4) with gallium(III) chloride at −30 °C:
3 LiGaH4 + GaCl3 → 3 LiCl + 4 GaH3
In the presence of dimethyl ether as solvent, GaH3 polymerizes to (GaH3)n. If no solvent is used, the dimer Ga2H6 (digallane) is formed as a gas. Its structure is similar to diborane, having two hydrogen atoms bridging the two gallium centers, unlike α-AlH3 in which aluminium has a coordination number of 6.
Gallane is unstable above −10 °C, decomposing to elemental gallium and hydrogen.
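The two equations reconstructed above (the sulfide synthesis under "Oxides and chalcogenides" and the gallane synthesis here) can be machine-checked for element balance. A minimal sketch with a small formula parser follows; it handles parentheses but not charges or hydrates:

```python
import re
from collections import Counter

def parse(formula: str, multiplier: int = 1) -> Counter:
    """Count atoms in a plain formula such as 'Ga(OH)3' or 'LiGaH4'."""
    stack = [Counter()]
    for symbol, num in re.findall(r"([A-Z][a-z]?|\(|\))(\d*)", formula):
        n = int(num) if num else 1
        if symbol == "(":
            stack.append(Counter())
        elif symbol == ")":
            group = stack.pop()
            for element, count in group.items():
                stack[-1][element] += count * n
        else:
            stack[-1][symbol] += n
    total = Counter()
    for element, count in stack[0].items():
        total[element] = count * multiplier
    return total

def count_side(terms) -> Counter:
    """Sum atom counts over a list of (coefficient, formula) pairs."""
    side = Counter()
    for coefficient, formula in terms:
        side += parse(formula, coefficient)
    return side

reactions = [
    # 2 Ga(OH)3 + 3 H2S -> Ga2S3 + 6 H2O
    ([(2, "Ga(OH)3"), (3, "H2S")], [(1, "Ga2S3"), (6, "H2O")]),
    # 3 LiGaH4 + GaCl3 -> 3 LiCl + 4 GaH3
    ([(3, "LiGaH4"), (1, "GaCl3")], [(3, "LiCl"), (4, "GaH3")]),
]

for reactants, products in reactions:
    ok = count_side(reactants) == count_side(products)
    print("balanced" if ok else "NOT balanced")
```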
Organogallium compounds
Organogallium compounds are of similar reactivity to organoindium compounds, less reactive than organoaluminium compounds, but more reactive than organothallium compounds. Alkylgalliums are monomeric. Lewis acidity decreases in the order Al > Ga > In and as a result organogallium compounds do not form bridged dimers as organoaluminium compounds do. They do form stable peroxides. These alkylgalliums are liquids at room temperature, having low melting points, and are quite mobile and flammable. Triphenylgallium is monomeric in solution, but its crystals form chain structures due to weak intermolecular Ga···C interactions.
Gallium trichloride is a common starting reagent for the formation of organogallium compounds, such as in carbogallation reactions. Gallium trichloride reacts with lithium cyclopentadienide in diethyl ether to form the trigonal planar gallium cyclopentadienyl complex GaCp3. Gallium(I) forms complexes with arene ligands such as hexamethylbenzene. Because this ligand is quite bulky, the structure of the [Ga(η6-C6Me6)]+ is that of a half-sandwich. Less bulky ligands such as mesitylene allow two ligands to be attached to the central gallium atom in a bent sandwich structure. Benzene is even less bulky and allows the formation of dimers: an example is [Ga(η6-C6H6)2] [GaCl4]·3C6H6.
History
In 1871, the existence of gallium was first predicted by Russian chemist Dmitri Mendeleev, who named it "eka-aluminium" from its position in his periodic table. He also predicted several properties of eka-aluminium that correspond closely to the real properties of gallium, such as its density, melting point, oxide character, and bonding in chloride.
Comparison between Mendeleev's 1871 predictions and the known properties of gallium:

Property            | Mendeleev's predictions | Actual properties
Atomic weight       | ~68                     | 69.723
Density             | 5.9 g/cm3               | 5.904 g/cm3
Melting point       | Low                     | 29.767 °C
Formula of oxide    | M2O3                    | Ga2O3
Density of oxide    | 5.5 g/cm3               | 5.88 g/cm3
Nature of hydroxide | amphoteric              | amphoteric
Mendeleev further predicted that eka-aluminium would be discovered by means of the spectroscope, and that metallic eka-aluminium would dissolve slowly in both acids and alkalis and would not react with air. He also predicted that M2O3 would dissolve in acids to give MX3 salts, that eka-aluminium salts would form basic salts, that eka-aluminium sulfate should form alums, and that anhydrous MCl3 should have a greater volatility than ZnCl2: all of these predictions turned out to be true.
Gallium was discovered using spectroscopy by French chemist Paul-Émile Lecoq de Boisbaudran in 1875 from its characteristic spectrum (two violet lines) in a sample of sphalerite. Later that year, Lecoq obtained the free metal by electrolysis of the hydroxide in potassium hydroxide solution.
He named the element "gallia", from Latin meaning 'Gaul', a name for his native land of France. It was later claimed that, in a multilingual pun of a kind favoured by men of science in the 19th century, he had also named gallium after himself: is French for 'the rooster', and the Latin word for 'rooster' is . In an 1877 article, Lecoq denied this conjecture.
Originally, de Boisbaudran determined the density of gallium as 4.7 g/cm3, the only property that failed to match Mendeleev's predictions; Mendeleev then wrote to him and suggested that he should remeasure the density, and de Boisbaudran then obtained the correct value of 5.9 g/cm3, that Mendeleev had predicted exactly.
From its discovery in 1875 until the era of semiconductors, the primary uses of gallium were high-temperature thermometrics and metal alloys with unusual properties of stability or ease of melting (some such being liquid at room temperature).
The development of gallium arsenide as a direct bandgap semiconductor in the 1960s ushered in the most important stage in the applications of gallium. In the late 1960s, the electronics industry started using gallium on a commercial scale to fabricate light emitting diodes, photovoltaics and semiconductors, while the metals industry used it to reduce the melting point of alloys.
The first blue gallium nitride LEDs were developed in 1971–1973, but they were feeble. Only in the early 1990s did Shuji Nakamura manage to combine GaN with indium gallium nitride and develop the modern blue LED, which Nichia commercialized in 1993 and which now forms the basis of ubiquitous white LEDs. He and two other Japanese scientists received the Nobel Prize in Physics in 2014 for this work.
Global gallium production grew slowly, from several tens of t/year in the 1970s until ca. 2010, when it passed 100 t/yr; it then accelerated rapidly, reaching about 450 t/yr by 2024.
Occurrence
Gallium does not exist as a free element in the Earth's crust, and the few high-content minerals, such as gallite (CuGaS2), are too rare to serve as a primary source. The abundance in the Earth's crust is approximately 16.9 ppm. It is the 34th most abundant element in the crust. This is comparable to the crustal abundances of lead, cobalt, and niobium. Yet unlike these elements, gallium does not form its own ore deposits with concentrations of > 0.1 wt.% in ore. Rather it occurs at trace concentrations similar to the crustal value in zinc ores, and at somewhat higher values (~ 50 ppm) in aluminium ores, from both of which it is extracted as a by-product. This lack of independent deposits is due to gallium's geochemical behaviour, showing no strong enrichment in the processes relevant to the formation of most ore deposits.
The United States Geological Survey (USGS) estimates that more than 1 million tons of gallium is contained in known reserves of bauxite and zinc ores. Some coal flue dusts contain small quantities of gallium, typically less than 1% by weight. However, these amounts are not extractable without mining of the host materials (see below). Thus, the availability of gallium is fundamentally determined by the rate at which bauxite, zinc ores, and coal are extracted.
Production and availability
Gallium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source material is bauxite, the chief ore of aluminium, but minor amounts are also extracted from sulfidic zinc ores (sphalerite being the main host mineral). In the past, certain coals were an important source.
During the processing of bauxite to alumina in the Bayer process, gallium accumulates in the sodium hydroxide liquor. From this it can be extracted by a variety of methods. The most recent is the use of ion-exchange resin. Achievable extraction efficiencies critically depend on the original concentration in the feed bauxite. At a typical feed concentration of 50 ppm, about 15% of the contained gallium is extractable. The remainder reports to the red mud and aluminium hydroxide streams. Gallium is removed from the ion-exchange resin in solution. Electrolysis then gives gallium metal. For semiconductor use, it is further purified with zone melting or single-crystal extraction from a melt (Czochralski process). Purities of 99.9999% are routinely achieved and commercially available.
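To put those percentages in concrete terms, here is a short calculation of how much gallium a tonne of typical bauxite yields under the stated feed concentration and extraction efficiency (the one-tonne basis is just an illustrative assumption):

```python
bauxite_tonnes = 1.0            # illustrative basis, not from the text
feed_ppm = 50                   # gallium content of typical feed bauxite
extraction_efficiency = 0.15    # ~15% of contained gallium is extractable

contained_g = bauxite_tonnes * 1e6 * feed_ppm * 1e-6   # grams of Ga contained
recovered_g = contained_g * extraction_efficiency

print(f"Contained gallium: {contained_g:.0f} g per tonne of bauxite")
print(f"Recovered gallium: {recovered_g:.1f} g per tonne of bauxite")
```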
Its by-product status means that gallium production is constrained by the amount of bauxite, sulfidic zinc ores (and coal) extracted per year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main-products. Recent estimates put the supply potential of gallium at a minimum of 2,100 t/yr from bauxite, 85 t/yr from sulfidic zinc ores, and potentially 590 t/yr from coal. These figures are significantly greater than current production (375 t in 2016). Thus, major future increases in the by-product production of gallium will be possible without significant increases in production costs or price. The average price for low-grade gallium was $120 per kilogram in 2016 and $135–140 per kilogram in 2017.
In 2017, the world's production of low-grade gallium was tons—an increase of 15% from 2016. China, Japan, South Korea, Russia, and Ukraine were the leading producers, while Germany ceased primary production of gallium in 2016. The yield of high-purity gallium was ca. 180 tons, mostly originating from China, Japan, Slovakia, UK and U.S. The 2017 world annual production capacity was estimated at 730 tons for low-grade and 320 tons for refined gallium.
China produced tons of low-grade gallium in 2016 and tons in 2017. It also accounted for more than half of global LED production. As of July 2023, China accounted for between 80% and 95% of its production.
Applications
Semiconductor applications dominate the commercial demand for gallium, accounting for 98% of the total. The next major application is for gadolinium gallium garnets. As of 2022, 44% of world use went to light fixtures and 36% to integrated circuits, with smaller shares equal to ~7% going to photovoltaics and magnets each.
Semiconductors
Extremely high-purity (>99.9999%) gallium is commercially available to serve the semiconductor industry. Gallium arsenide (GaAs) and gallium nitride (GaN) used in electronic components represented about 98% of the gallium consumption in the United States in 2007. About 66% of semiconductor gallium is used in the U.S. in integrated circuits (mostly gallium arsenide), such as the manufacture of ultra-high-speed logic chips and MESFETs for low-noise microwave preamplifiers in cell phones. About 20% of this gallium is used in optoelectronics.
Worldwide, gallium arsenide makes up 95% of the annual global gallium consumption. It amounted to $7.5 billion in 2016, with 53% originating from cell phones, 27% from wireless communications, and the rest from automotive, consumer, fiber-optic, and military applications. The recent increase in GaAs consumption is mostly related to the emergence of 3G and 4G smartphones, which employ up to 10 times the amount of GaAs in older models.
Gallium arsenide and gallium nitride can also be found in a variety of optoelectronic devices which had a market share of $15.3 billion in 2015 and $18.5 billion in 2016. Aluminium gallium arsenide (AlGaAs) is used in high-power infrared laser diodes. The semiconductors gallium nitride and indium gallium nitride are used in blue and violet optoelectronic devices, mostly laser diodes and light-emitting diodes. For example, gallium nitride 405 nm diode lasers are used as a violet light source for higher-density Blu-ray Disc compact data disc drives.
Other major applications of gallium nitride are cable television transmission, commercial wireless infrastructure, power electronics, and satellites. The GaN radio frequency device market alone was estimated at $370 million in 2016 and $420 million in 2018.
Multijunction photovoltaic cells, developed for satellite power applications, are made by molecular-beam epitaxy or metalorganic vapour-phase epitaxy of thin films of gallium arsenide, indium gallium phosphide, or indium gallium arsenide. The Mars Exploration Rovers and several satellites use triple-junction gallium arsenide on germanium cells. Gallium is also a component in photovoltaic compounds (such as copper indium gallium selenium sulfide ) used in solar panels as a cost-efficient alternative to crystalline silicon.
Galinstan and other alloys
Gallium readily alloys with most metals, and is used as an ingredient in low-melting alloys. The nearly eutectic alloy of gallium, indium, and tin is a room temperature liquid used in medical thermometers. This alloy, with the trade-name Galinstan (with the "-stan" referring to the tin, in Latin), has a low melting point of −19 °C (−2.2 °F). It has been suggested that this family of alloys could also be used to cool computer chips in place of water, and is often used as a replacement for thermal paste in high-performance computing. Gallium alloys have been evaluated as substitutes for mercury dental amalgams, but these materials have yet to see wide acceptance. Liquid alloys containing mostly gallium and indium have been found to precipitate gaseous CO2 into solid carbon and are being researched as potential methodologies for carbon capture and possibly carbon removal.
Because gallium wets glass or porcelain, gallium can be used to create brilliant mirrors. When the wetting action of gallium-alloys is not desired (as in Galinstan glass thermometers), the glass must be protected with a transparent layer of gallium(III) oxide.
Due to their high surface tension and deformability, gallium-based liquid metals can be used to create actuators by controlling the surface tension. Researchers have demonstrated the potential of using liquid metal actuators as artificial muscles in robotic actuation.
The plutonium used in nuclear weapon pits is stabilized in the δ phase and made machinable by alloying with gallium.
Biomedical applications
Although gallium has no natural function in biology, gallium ions interact with processes in the body in a manner similar to iron(III). Because these processes include inflammation, a marker for many disease states, several gallium salts are used (or are in development) as pharmaceuticals and radiopharmaceuticals in medicine. Interest in the anticancer properties of gallium emerged when it was discovered that 67Ga(III) citrate injected in tumor-bearing animals localized to sites of tumor. Clinical trials have shown gallium nitrate to have antineoplastic activity against non-Hodgkin's lymphoma and urothelial cancers. A new generation of gallium-ligand complexes such as tris(8-quinolinolato)gallium(III) (KP46) and gallium maltolate has emerged. Gallium nitrate (brand name Ganite) has been used as an intravenous pharmaceutical to treat hypercalcemia associated with tumor metastasis to bones. Gallium is thought to interfere with osteoclast function, and the therapy may be effective when other treatments have failed. Gallium maltolate, an oral, highly absorbable form of gallium(III) ion, is an anti-proliferative to pathologically proliferating cells, particularly cancer cells and some bacteria that accept it in place of ferric iron (Fe3+). Researchers are conducting clinical and preclinical trials on this compound as a potential treatment for a number of cancers, infectious diseases, and inflammatory diseases.
When gallium ions are mistakenly taken up in place of iron(III) by bacteria such as Pseudomonas, the ions interfere with respiration, and the bacteria die. This happens because iron is redox-active, allowing the transfer of electrons during respiration, while gallium is redox-inactive.
A complex amine-phenol Ga(III) compound MR045 is selectively toxic to parasites resistant to chloroquine, a common drug against malaria. Both the Ga(III) complex and chloroquine act by inhibiting crystallization of hemozoin, a disposal product formed from the digestion of blood by the parasites.
Radiogallium salts
Gallium-67 salts such as gallium citrate and gallium nitrate are used as radiopharmaceutical agents in the nuclear medicine imaging known as gallium scan. The radioactive isotope 67Ga is used, and the compound or salt of gallium is unimportant. The body handles Ga3+ in many ways as though it were Fe3+, and the ion is bound (and concentrates) in areas of inflammation, such as infection, and in areas of rapid cell division. This allows such sites to be imaged by nuclear scan techniques.
Gallium-68, a positron emitter with a half-life of 68 min, is now used as a diagnostic radionuclide in PET-CT when linked to pharmaceutical preparations such as DOTATOC, a somatostatin analogue used for neuroendocrine tumors investigation, and DOTA-TATE, a newer one, used for neuroendocrine metastasis and lung neuroendocrine cancer, such as certain types of microcytoma. Gallium-68's preparation as a pharmaceutical is chemical, and the radionuclide is extracted by elution from germanium-68, a synthetic radioisotope of germanium, in gallium-68 generators.
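Because gallium-68's 68-minute half-life limits how far a labeled dose can travel from the generator, the remaining activity after a given delay follows simple exponential decay; a minimal sketch (the delay values are illustrative assumptions):

```python
import math

HALF_LIFE_MIN = 68.0  # gallium-68 half-life, from the text

def fraction_remaining(minutes: float) -> float:
    """Fraction of Ga-68 activity left after the given delay."""
    return math.exp(-math.log(2) * minutes / HALF_LIFE_MIN)

for delay in (30, 68, 120):  # minutes; illustrative delays
    print(f"after {delay:3d} min: {fraction_remaining(delay):.1%} remains")
```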
Other uses
Neutrino detection: Gallium is used for neutrino detection. Possibly the largest amount of pure gallium ever collected in a single location is the Gallium-Germanium Neutrino Telescope used by the SAGE experiment at the Baksan Neutrino Observatory in Russia. This detector contains 55–57 tonnes (~9 cubic metres) of liquid gallium. Another experiment was the GALLEX neutrino detector operated in the early 1990s in an Italian mountain tunnel. The detector contained 12.2 tons of gallium-71 in aqueous solution. Solar neutrinos caused a few atoms of 71Ga to become radioactive 71Ge, which were detected. This experiment showed that the solar neutrino flux is 40% less than theory predicted. This deficit (solar neutrino problem) was not explained until better solar neutrino detectors and theories were constructed (see SNO).
Ion source: Gallium is also used as a liquid metal ion source for a focused ion beam. For example, a focused gallium-ion beam was used to create the world's smallest book, Teeny Ted from Turnip Town.
Lubricants: Gallium serves as an additive in glide wax for skis and other low-friction surface materials.
Flexible electronics: Materials scientists speculate that the properties of gallium could make it suitable for the development of flexible and wearable devices.
Hydrogen generation: Gallium disrupts the protective oxide layer on aluminium, allowing water to react with the aluminium in AlGa to produce hydrogen gas.
Humor: A well-known practical joke among chemists is to fashion gallium spoons and use them to serve tea to unsuspecting guests, since gallium has a similar appearance to its lighter homolog aluminium. The spoons then melt in the hot tea.
Gallium in the ocean
Advances in trace element testing have allowed scientists to discover traces of dissolved gallium in the Atlantic and Pacific Oceans. In recent years, dissolved gallium concentrations have also been reported in the Beaufort Sea. These reports reflect the possible profiles of the Pacific and Atlantic Ocean waters. In the Pacific Ocean, typical dissolved gallium concentrations are between 4 and 6 pmol/kg at depths <~150 m; in comparison, Atlantic waters show 25–28 pmol/kg at depths >~350 m.
Gallium enters the oceans mainly through aeolian input, and its presence there can be used to resolve the distribution of aluminium in the oceans, because gallium is geochemically similar to aluminium but less reactive. Gallium also has a slightly longer surface water residence time than aluminium. Because gallium's dissolved profile is similar to that of aluminium, it can be used as a tracer for aluminium. Gallium can also be used as a tracer of aeolian inputs of iron, and has been used this way in the northwest Pacific and in the south and central Atlantic Oceans. For example, in the northwest Pacific, low-gallium surface waters in the subpolar region suggest low dust input, which can in turn explain the region's high-nutrient, low-chlorophyll behavior.
Precautions
Metallic gallium is not toxic. However, several gallium compounds are toxic.
Gallium halide complexes can be toxic. The Ga3+ ion of soluble gallium salts tends to form the insoluble hydroxide when injected in large doses; precipitation of this hydroxide resulted in nephrotoxicity in animals. In lower doses, soluble gallium is tolerated well and does not accumulate as a poison, instead being excreted mostly through urine. Excretion of gallium occurs in two phases: the first phase has a biological half-life of 1 hour, while the second has a biological half-life of 25 hours.
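The two-phase excretion described above can be modeled as a biexponential decay; a minimal sketch, assuming (purely for illustration, since the text does not give it) a 50/50 split between the fast and slow phases:

```python
def retained_fraction(hours: float,
                      fast_half_life: float = 1.0,   # hours, from the text
                      slow_half_life: float = 25.0,  # hours, from the text
                      fast_fraction: float = 0.5):   # assumed 50/50 split
    """Fraction of a soluble-gallium dose still retained after `hours`."""
    slow_fraction = 1.0 - fast_fraction
    return (fast_fraction * 2 ** (-hours / fast_half_life)
            + slow_fraction * 2 ** (-hours / slow_half_life))

for t in (1, 6, 24, 72):  # hours after the dose
    print(f"t = {t:2d} h: {retained_fraction(t):.1%} retained")
```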
Inhaled Ga2O3 particles are probably toxic.
Notes
References
External links
Gallium at The Periodic Table of Videos (University of Nottingham)
Safety data sheet at acialloys.com
High-resolution photographs of molten gallium, gallium crystals and gallium ingots under Creative Commons licence
Textbook information regarding gallium
Environmental effects of gallium
Gallium Statistics and Information
Gallium: A Smart Metal United States Geological Survey
Thermal conductivity
Physical and thermodynamical properties of liquid gallium (doc pdf)
Chemical elements predicted by Dmitri Mendeleev
Chemical elements
Coolants
Post-transition metals
Articles containing video clips
Materials that expand upon freezing
Chemical elements with primitive orthorhombic structure | Gallium | [
"Physics",
"Chemistry"
] | 7,869 | [
"Periodic table",
"Physical phenomena",
"Phase transitions",
"Chemical elements",
"Materials",
"Materials that expand upon freezing",
"Atoms",
"Matter",
"Chemical elements predicted by Dmitri Mendeleev"
] |
12,242 | https://en.wikipedia.org/wiki/Germanium | Germanium is a chemical element; it has symbol Ge and atomic number 32. It is lustrous, hard-brittle, grayish-white and similar in appearance to silicon. It is a metalloid (more rarely considered a metal) in the carbon group that is chemically similar to its group neighbors silicon and tin. Like silicon, germanium naturally reacts and forms complexes with oxygen in nature.
Because it seldom appears in high concentration, germanium was found comparatively late in the discovery of the elements. Germanium ranks 50th in abundance of the elements in the Earth's crust. In 1869, Dmitri Mendeleev predicted its existence and some of its properties from its position on his periodic table, and called the element ekasilicon. On February 6, 1886, Clemens Winkler at Freiberg University found the new element, along with silver and sulfur, in the mineral argyrodite. Winkler named the element after Germany, his country of birth. Germanium is mined primarily from sphalerite (the primary ore of zinc), though germanium is also recovered commercially from silver, lead, and copper ores.
Elemental germanium is used as a semiconductor in transistors and various other electronic devices. Historically, the first decade of semiconductor electronics was based entirely on germanium. Presently, the major end uses are fibre-optic systems, infrared optics, solar cell applications, and light-emitting diodes (LEDs). Germanium compounds are also used for polymerization catalysts and have most recently found use in the production of nanowires. This element forms a large number of organogermanium compounds, such as tetraethylgermanium, useful in organometallic chemistry. Germanium is considered a technology-critical element.
Germanium is not thought to be an essential element for any living organism. Similar to silicon and aluminium, naturally-occurring germanium compounds tend to be insoluble in water and thus have little oral toxicity. However, synthetic soluble germanium salts are nephrotoxic, and synthetic chemically reactive germanium compounds with halogens and hydrogen are irritants and toxins.
History
In his report on The Periodic Law of the Chemical Elements in 1869, the Russian chemist Dmitri Mendeleev predicted the existence of several unknown chemical elements, including one that would fill a gap in the carbon family, located between silicon and tin. Because of its position in his periodic table, Mendeleev called it ekasilicon (Es), and he estimated its atomic weight to be 70 (later 72).
In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named argyrodite because of its high silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element. Winkler was able to isolate the new element in 1886 and found it similar to antimony. He initially considered the new element to be eka-antimony, but was soon convinced that it was instead eka-silicon. Before Winkler published his results on the new element, he decided that he would name his element neptunium, since the recent discovery of planet Neptune in 1846 had similarly been preceded by mathematical predictions of its existence. However, the name "neptunium" had already been given to another proposed chemical element (though not the element that today bears the name neptunium, which was discovered in 1940). So instead, Winkler named the new element germanium (from the Latin word, Germania, for Germany) in honor of his homeland. Argyrodite proved empirically to be Ag8GeS6.
Because this new element showed some similarities with the elements arsenic and antimony, its proper place in the periodic table was under consideration, but its similarities with Dmitri Mendeleev's predicted element "ekasilicon" confirmed that place on the periodic table. With further material from 500 kg of ore from the mines in Saxony, Winkler confirmed the chemical properties of the new element in 1887. He also determined an atomic weight of 72.32 by analyzing pure germanium tetrachloride (), while Lecoq de Boisbaudran deduced 72.3 by a comparison of the lines in the spark spectrum of the element.
Winkler was able to prepare several new compounds of germanium, including fluorides, chlorides, sulfides, dioxide, and tetraethylgermane (Ge(C2H5)4), the first organogermane. The physical data from those compounds—which corresponded well with Mendeleev's predictions—made the discovery an important confirmation of Mendeleev's idea of element periodicity. Here is a comparison between the prediction and Winkler's data:
Until the late 1930s, germanium was thought to be a poorly conducting metal. Germanium did not become economically significant until after 1945 when its properties as an electronic semiconductor were recognized. During World War II, small amounts of germanium were used in some special electronic devices, mostly diodes. The first major use was the point-contact Schottky diodes for radar pulse detection during the War. The first silicon–germanium alloys were obtained in 1955. Before 1945, only a few hundred kilograms of germanium were produced in smelters each year, but by the end of the 1950s, the annual worldwide production had reached .
The development of the germanium transistor in 1948 opened the door to countless applications of solid state electronics. From 1950 through the early 1970s, this area provided an increasing market for germanium, but then high-purity silicon began replacing germanium in transistors, diodes, and rectifiers. For example, the company that became Fairchild Semiconductor was founded in 1957 with the express purpose of producing silicon transistors. Silicon has superior electrical properties, but it requires much greater purity that could not be commercially achieved in the early years of semiconductor electronics.
Meanwhile, the demand for germanium for fiber optic communication networks, infrared night vision systems, and polymerization catalysts increased dramatically. These end uses represented 85% of worldwide germanium consumption in 2000. The US government even designated germanium as a strategic and critical material, calling for a 146 ton (132 tonne) supply in the national defense stockpile in 1987.
Germanium differs from silicon in that the supply is limited by the availability of exploitable sources, while the supply of silicon is limited only by production capacity since silicon comes from ordinary sand and quartz. While silicon could be bought in 1998 for less than $10 per kg, the price of germanium was almost $800 per kg.
Characteristics
Under standard conditions, germanium is a brittle, silvery-white, semiconductor. This form constitutes an allotrope known as α-germanium, which has a metallic luster and a diamond cubic crystal structure, the same structure as silicon and diamond. In this form, germanium has a threshold displacement energy of . At pressures above 120 kbar, germanium becomes the metallic allotrope β-germanium with the same structure as β-tin. Like silicon, gallium, bismuth, antimony, and water, germanium is one of the few substances that expands as it solidifies (i.e. freezes) from the molten state.
Germanium is a semiconductor having an indirect bandgap, as is crystalline silicon. Zone refining techniques have led to the production of crystalline germanium for semiconductors that has an impurity of only one part in 10¹⁰, making it one of the purest materials ever obtained.
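A one-part-in-10¹⁰ impurity level can be translated into impurity atoms per cubic centimetre; a quick sketch, where germanium's density and molar mass are standard values assumed here rather than given in the text:

```python
AVOGADRO = 6.022e23     # atoms per mole
density_g_cm3 = 5.323   # density of germanium (assumed standard value)
molar_mass_g = 72.63    # molar mass of germanium (assumed standard value)
purity = 1e-10          # one impurity atom per 10^10 atoms, from the text

atoms_per_cm3 = density_g_cm3 / molar_mass_g * AVOGADRO
impurities_per_cm3 = atoms_per_cm3 * purity

print(f"Ge atoms per cm^3:       {atoms_per_cm3:.2e}")       # ~4.4e22
print(f"Impurity atoms per cm^3: {impurities_per_cm3:.2e}")  # ~4.4e12
```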
The first semi-metallic material discovered (in 2005) to become a superconductor in the presence of an extremely strong electromagnetic field was an alloy of germanium, uranium, and rhodium.
Pure germanium is known to spontaneously extrude very long screw dislocations, referred to as germanium whiskers. The growth of these whiskers is one of the primary reasons for the failure of older diodes and transistors made from germanium, as, depending on what they eventually touch, they may lead to an electrical short.
Chemistry
Elemental germanium starts to oxidize slowly in air at around 250 °C, forming GeO2. Germanium is insoluble in dilute acids and alkalis but dissolves slowly in hot concentrated sulfuric and nitric acids and reacts violently with molten alkalis to produce germanates ([GeO3]2−). Germanium occurs mostly in the oxidation state +4, although many +2 compounds are known. Other oxidation states are rare: +3 is found in compounds such as Ge2Cl6, and +3 and +1 are found on the surface of oxides, while negative oxidation states occur in germanides, such as −4 in Mg2Ge. Germanium cluster anions (Zintl ions) such as (Ge4)2−, (Ge9)4−, (Ge9)2−, and [(Ge9)2]6− have been prepared by extraction from alloys containing alkali metals and germanium in liquid ammonia in the presence of ethylenediamine or a cryptand. The oxidation states of the element in these ions are not integers, similar to the ozonides (O3−).
Two oxides of germanium are known: germanium dioxide (GeO2, germania) and germanium monoxide (GeO). The dioxide, GeO2, can be obtained by roasting germanium disulfide (GeS2), and is a white powder that is only slightly soluble in water but reacts with alkalis to form germanates. The monoxide, germanous oxide, can be obtained by the high temperature reaction of GeO2 with elemental Ge. The dioxide (and the related oxides and germanates) exhibits the unusual property of having a high refractive index for visible light, but transparency to infrared light. Bismuth germanate, Bi4Ge3O12 (BGO), is used as a scintillator.
Binary compounds with other chalcogens are also known, such as the disulfide (GeS2) and diselenide (GeSe2), and the monosulfide (GeS), monoselenide (GeSe), and monotelluride (GeTe). GeS2 forms as a white precipitate when hydrogen sulfide is passed through strongly acidic solutions containing Ge(IV). The disulfide is appreciably soluble in water and in solutions of caustic alkalis or alkaline sulfides; it is not, however, soluble in acidic water, which allowed Winkler to discover the element. By heating the disulfide in a current of hydrogen, the monosulfide (GeS) is formed, which sublimes in thin plates of a dark color and metallic luster, and is soluble in solutions of the caustic alkalis. Upon melting with alkaline carbonates and sulfur, germanium compounds form salts known as thiogermanates.
Four tetrahalides are known. Under normal conditions germanium tetraiodide (GeI4) is a solid, germanium tetrafluoride (GeF4) a gas, and the others volatile liquids. For example, germanium tetrachloride, GeCl4, is obtained as a colorless fuming liquid boiling at 83.1 °C by heating the metal with chlorine. All the tetrahalides are readily hydrolyzed to hydrated germanium dioxide. GeCl4 is used in the production of organogermanium compounds. All four dihalides are known and, in contrast to the tetrahalides, are polymeric solids. Additionally, Ge2Cl6 and some higher compounds of formula GenCl2n+2 are known. The unusual compound Ge6Cl16, which contains a Ge5Cl12 unit with a neopentane structure, has also been prepared.
Germane (GeH4) is a compound similar in structure to methane. Polygermanes, compounds that are similar to alkanes, with formula GenH2n+2 containing up to five germanium atoms are known. The germanes are less volatile and less reactive than their corresponding silicon analogues. GeH4 reacts with alkali metals in liquid ammonia to form white crystalline MGeH3 compounds, which contain the [GeH3]− anion. The germanium hydrohalides with one, two, and three halogen atoms are colorless reactive liquids.
The first organogermanium compound was synthesized by Winkler in 1887; the reaction of germanium tetrachloride with diethylzinc yielded tetraethylgermane (Ge(C2H5)4). Organogermanes of the type R4Ge (where R is an alkyl), such as tetramethylgermane (Ge(CH3)4) and tetraethylgermane, are accessed from the cheapest available germanium precursor, germanium tetrachloride, and alkyl nucleophiles. Organic germanium hydrides such as isobutylgermane ((CH3)2CHCH2GeH3) were found to be less hazardous and may be used as a liquid substitute for toxic germane gas in semiconductor applications. Many germanium reactive intermediates are known: germyl free radicals, germylenes (similar to carbenes), and germynes (similar to carbynes). The organogermanium compound 2-carboxyethylgermasesquioxane was first reported in the 1970s, and for a while was used as a dietary supplement thought possibly to have anti-tumor qualities.
Using a ligand called Eind (1,1,3,3,5,5,7,7-octaethyl-s-hydrindacen-4-yl), germanium is able to form a double bond with oxygen (a germanone). Germanium hydrides such as germane (germanium tetrahydride) are very flammable and even explosive when mixed with air.
Isotopes
Germanium occurs in five natural isotopes: 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with an extremely long half-life, on the order of 10^21 years. 74Ge is the most common isotope, having a natural abundance of approximately 36%; 76Ge is the least common, with a natural abundance of approximately 7%. When bombarded with alpha particles, germanium transmutes into stable isotopes of selenium, releasing high-energy electrons in the process. Because of this, it is used in combination with radon for nuclear batteries.
At least 27 radioisotopes have also been synthesized, ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, decaying by electron capture with a half-life of about 271 days; the least stable have half-lives of well under a second. While most of germanium's radioisotopes decay by beta decay, some of the lightest also decay by beta-delayed proton emission, and some of the most neutron-rich isotopes exhibit minor delayed neutron emission decay paths.
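A half-life translates directly into how quickly a radioisotope's activity fades, via the decay law N(t)/N0 = 2^(−t/T½). A minimal Python sketch using the roughly 271-day half-life of 68Ge quoted above; the function name and scenario are illustrative:

def remaining_fraction(t_days, half_life_days):
    """Fraction of a radioisotope remaining after t_days: N/N0 = 2**(-t/T_half)."""
    return 2.0 ** (-t_days / half_life_days)

# 68Ge (electron capture, half-life ~271 days) after one year:
print(remaining_fraction(365.0, 271.0))  # ~0.39 of the original atoms remain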
Occurrence
Germanium is created by stellar nucleosynthesis, mostly by the s-process in asymptotic giant branch stars. The s-process is a slow neutron capture of lighter elements inside pulsating red giant stars. Germanium has been detected in some of the most distant stars and in the atmosphere of Jupiter.
Germanium's abundance in the Earth's crust is approximately 1.6 ppm. Only a few minerals, like argyrodite, briartite, germanite, renierite and sphalerite, contain appreciable amounts of germanium, and only some of them (especially germanite) are, very rarely, found in mineable amounts. Some zinc–copper–lead ore bodies contain enough germanium to justify extraction from the final ore concentrate. An unusual natural enrichment process causes a high content of germanium in some coal seams, discovered by Victor Moritz Goldschmidt during a broad survey for germanium deposits. The highest concentration ever found was in Hartley coal ash, with as much as 1.6% germanium. The coal deposits near Xilinhaote, Inner Mongolia, contain an estimated 1600 tonnes of germanium.
Production
About 118 tonnes of germanium were produced in 2011 worldwide, mostly in China (80 t), Russia (5 t) and United States (3 t). Germanium is recovered as a by-product from sphalerite zinc ores where it is concentrated in amounts as great as 0.3%, especially from low-temperature sediment-hosted, massive Zn–Pb–Cu(–Ba) deposits and carbonate-hosted Zn–Pb deposits. A recent study found that at least 10,000 t of extractable germanium is contained in known zinc reserves, particularly those hosted by Mississippi-Valley type deposits, while at least 112,000 t will be found in coal reserves. In 2007 35% of the demand was met by recycled germanium.
While it is produced mainly from sphalerite, it is also found in silver, lead, and copper ores. Another source of germanium is fly ash of power plants fueled from coal deposits that contain germanium. Russia and China used this as a source for germanium. Russia's deposits are located in the far east of Sakhalin Island, and northeast of Vladivostok. The deposits in China are located mainly in the lignite mines near Lincang, Yunnan; coal is also mined near Xilinhaote, Inner Mongolia.
The ore concentrates are mostly sulfidic; they are converted to the oxides by heating under air in a process known as roasting:
GeS2 + 3 O2 → GeO2 + 2 SO2
Some of the germanium is left in the dust produced, while the rest is converted to germanates, which are then leached (together with zinc) from the cinder by sulfuric acid. After neutralization, only the zinc stays in solution while germanium and other metals precipitate. After removing some of the zinc in the precipitate by the Waelz process, the remaining Waelz oxide is leached a second time. The dioxide is obtained as a precipitate and converted with chlorine gas or hydrochloric acid to germanium tetrachloride, which has a low boiling point and can be isolated by distillation:
GeO2 + 4 HCl → GeCl4 + 2 H2O
GeO2 + 2 Cl2 → GeCl4 + O2
Germanium tetrachloride is either hydrolyzed to the oxide (GeO2) or purified by fractional distillation and then hydrolyzed. The highly pure GeO2 is now suitable for the production of germanium glass. It is reduced to the element by reacting it with hydrogen, producing germanium suitable for infrared optics and semiconductor production:
GeO2 + 2 H2 → Ge + 2 H2O
The germanium for steel production and other industrial processes is normally reduced using carbon:
GeO2 + C → Ge + CO2
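Because each reduction above converts one mole of GeO2 into one mole of Ge, the theoretical metal yield follows directly from the molar masses. A small sketch of that mass balance, using standard atomic weights that are not given in the text:

# Standard atomic weights in g/mol (reference values, rounded).
M_GE, M_O = 72.63, 16.00
M_GEO2 = M_GE + 2 * M_O  # 104.63 g/mol

def theoretical_ge_yield(mass_geo2_g):
    """Germanium mass from GeO2 + 2 H2 -> Ge + 2 H2O (one mole of Ge per mole of GeO2)."""
    return mass_geo2_g * M_GE / M_GEO2

print(theoretical_ge_yield(100.0))  # ~69.4 g of Ge per 100 g of GeO2, before process losses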
Applications
The major end uses for germanium in 2007, worldwide, were estimated to be: 35% for fiber-optics, 30% infrared optics, 15% polymerization catalysts, and 15% electronics and solar electric applications. The remaining 5% went into such uses as phosphors, metallurgy, and chemotherapy.
Optics
The notable properties of germania (GeO2) are its high index of refraction and its low optical dispersion. These make it especially useful for wide-angle camera lenses, microscopy, and the core part of optical fibers. It has replaced titania as the dopant for silica fiber, eliminating the subsequent heat treatment that made the fibers brittle. At the end of 2002, the fiber optics industry consumed 60% of the annual germanium use in the United States, but this is less than 10% of worldwide consumption. GeSbTe is a phase-change material used for its optical properties, such as in rewritable DVDs.
Because germanium is transparent in the infrared wavelengths, it is an important infrared optical material that can be readily cut and polished into lenses and windows. It is especially used as the front optic in thermal imaging cameras working in the 8 to 14 micron range for passive thermal imaging and for hot-spot detection in military, mobile night vision, and fire fighting applications. It is used in infrared spectroscopes and other optical equipment that require extremely sensitive infrared detectors. It has a very high refractive index (4.0) and must be coated with anti-reflection agents. Particularly, a very hard special antireflection coating of diamond-like carbon (DLC), refractive index 2.0, is a good match and produces a diamond-hard surface that can withstand much environmental abuse.
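The figures quoted above are enough to see why the coating is needed: by the Fresnel equations, a normal-incidence step from air (n = 1.0) to germanium (n = 4.0) reflects 36% of the light, and the ideal index for a single quarter-wave anti-reflection layer is the geometric mean of the two media, exactly the 2.0 of DLC. A short illustrative sketch of that arithmetic:

import math

def normal_incidence_reflectance(n1, n2):
    """Fresnel power reflectance at normal incidence: ((n1 - n2) / (n1 + n2))**2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_ge = 1.0, 4.0
print(normal_incidence_reflectance(n_air, n_ge))  # 0.36: an uncoated Ge face reflects ~36% of the light
print(math.sqrt(n_air * n_ge))                    # 2.0: ideal quarter-wave coating index, matching DLC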
Electronics
Germanium can be alloyed with silicon, and silicon–germanium alloys are rapidly becoming an important semiconductor material for high-speed integrated circuits. Circuits using the properties of Si-SiGe heterojunctions can be much faster than those using silicon alone. The SiGe chips, with high-speed properties, can be made with low-cost, well-established production techniques of the silicon chip industry.
High-efficiency solar panels are a major use of germanium. Because germanium and gallium arsenide have nearly identical lattice constants, germanium substrates can be used to make gallium arsenide solar cells. Germanium is the substrate of the wafers for high-efficiency multijunction photovoltaic cells for space applications, such as the Mars Exploration Rovers, which use triple-junction gallium arsenide on germanium cells. High-brightness LEDs, used for automobile headlights and to backlight LCD screens, are also an important application.
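The quality of that match can be quantified with the usual epitaxial misfit ratio, f = (a_film − a_substrate)/a_substrate. A sketch using standard room-temperature lattice constants, which are reference values rather than figures from the text:

# Room-temperature lattice constants in angstroms (reference values).
A_GE, A_GAAS = 5.658, 5.653

misfit = (A_GAAS - A_GE) / A_GE  # GaAs film grown on a Ge substrate
print(f"{misfit:.2%}")           # about -0.09%: small enough for high-quality epitaxy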
Germanium-on-insulator (GeOI) substrates are seen as a potential replacement for silicon on miniaturized chips. CMOS circuits based on GeOI substrates have been reported recently. Other uses in electronics include phosphors in fluorescent lamps and solid-state light-emitting diodes (LEDs). Germanium transistors are still used in some effects pedals by musicians who wish to reproduce the distinctive tonal character of the "fuzz"-tone from the early rock and roll era, most notably the Dallas Arbiter Fuzz Face.
Germanium has been studied as a potential material for implantable bioelectronic sensors that are resorbed in the body without generating harmful hydrogen gas, replacing zinc oxide- and indium gallium zinc oxide-based implementations.
Other uses
Germanium dioxide is also used in catalysts for polymerization in the production of polyethylene terephthalate (PET). The high brilliance of this polyester is especially favored for PET bottles marketed in Japan. In the United States, germanium is not used for polymerization catalysts.
Due to the similarity between silica (SiO2) and germanium dioxide (GeO2), the silica stationary phase in some gas chromatography columns can be replaced by GeO2.
In recent years germanium has seen increasing use in precious metal alloys. In sterling silver alloys, for instance, it reduces firescale, increases tarnish resistance, and improves precipitation hardening. A tarnish-proof silver alloy trademarked Argentium contains 1.2% germanium.
Semiconductor detectors made of single-crystal high-purity germanium can precisely identify radiation sources, for example in airport security. Germanium is useful for monochromators for beamlines used in single-crystal neutron scattering and synchrotron X-ray diffraction. Its reflectivity has advantages over silicon in neutron and high-energy X-ray applications. Crystals of high-purity germanium are used in detectors for gamma spectroscopy and the search for dark matter. Germanium crystals are also used in X-ray spectrometers for the determination of phosphorus, chlorine and sulfur.
Germanium is emerging as an important material for spintronics and spin-based quantum computing applications. In 2010, researchers demonstrated room-temperature spin transport, and more recently donor electron spins in germanium have been shown to have very long coherence times.
Strategic importance
Due to its use in advanced electronics and optics, germanium is considered a technology-critical element (by, for example, the European Union), essential to the green and digital transitions. As China controls 60% of global germanium production, it holds a dominant position over the world's supply chains.
On 3 July 2023, China suddenly imposed restrictions on exports of germanium (and gallium), ratcheting up trade tensions with Western allies. Invoking "national security interests", the Chinese Ministry of Commerce announced that companies intending to sell products containing germanium would need an export licence. The products and compounds targeted are germanium dioxide, germanium epitaxial growth substrates, germanium ingots, germanium metal, germanium tetrachloride, and zinc germanium phosphide. China regards such products as "dual-use" items that may have military purposes and therefore warrant an extra layer of oversight.
The dispute opened a new chapter in the increasingly fierce technology race that has pitted the United States, and to a lesser extent Europe, against China. The US wants its allies to heavily curb, or outright prohibit, advanced electronic components bound for the Chinese market, to prevent Beijing from securing global technology supremacy. China denied any tit-for-tat intention behind the germanium export restrictions.
Following China's export restrictions, Russian state-owned company Rostec announced an increase in germanium production to meet domestic demand.
Germanium and health
Germanium is not considered essential to the health of plants or animals. Germanium in the environment has little or no health impact. This is primarily because it usually occurs only as a trace element in ores and carbonaceous materials, and the various industrial and electronic applications involve very small quantities that are not likely to be ingested. For similar reasons, end-use germanium has little impact on the environment as a biohazard. Some reactive intermediate compounds of germanium are poisonous (see precautions, below).
Germanium supplements, made from both organic and inorganic germanium, have been marketed as an alternative medicine capable of treating leukemia and lung cancer. There is, however, no medical evidence of benefit; some evidence suggests that such supplements are actively harmful. U.S. Food and Drug Administration (FDA) research has concluded that inorganic germanium, when used as a nutritional supplement, "presents potential human health hazard".
Some germanium compounds have been administered by alternative medical practitioners as non-FDA-allowed injectable solutions. Soluble inorganic forms of germanium used at first, notably the citrate-lactate salt, resulted in some cases of renal dysfunction, hepatic steatosis, and peripheral neuropathy in individuals using them over a long term. Plasma and urine germanium concentrations in these individuals, several of whom died, were several orders of magnitude greater than endogenous levels. A more recent organic form, beta-carboxyethylgermanium sesquioxide (propagermanium), has not exhibited the same spectrum of toxic effects.
Certain compounds of germanium have low toxicity to mammals, but have toxic effects against certain bacteria.
Precautions for chemically reactive germanium compounds
While use of germanium itself does not require precautions, some of germanium's artificially produced compounds are quite reactive and present an immediate hazard to human health on exposure. For example, germanium tetrachloride and germane (GeH4) are a liquid and a gas, respectively, that can be very irritating to the eyes, skin, lungs, and throat.
See also
Germanene
Vitrain
History of the transistor
Notes
References
External links
Germanium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Metalloids
Infrared sensor materials
Optical materials
Group IV semiconductors
Chemical elements predicted by Dmitri Mendeleev
Materials that expand upon freezing
Chemical elements with diamond cubic structure
"Physics",
"Chemistry"
] | 5,731 | [
"Periodic table",
"Physical phenomena",
"Phase transitions",
"Chemical elements",
"Semiconductor materials",
"Group IV semiconductors",
"Materials",
"Optical materials",
"Materials that expand upon freezing",
"Atoms",
"Matter",
"Chemical elements predicted by Dmitri Mendeleev"
] |
Gadolinium is a chemical element; it has symbol Gd and atomic number 64. Gadolinium is a silvery-white metal when oxidation is removed. It is a malleable and ductile rare-earth element. Gadolinium reacts with atmospheric oxygen or moisture slowly to form a black coating. Below its Curie point of about 20 °C (293 K), gadolinium is ferromagnetic, with an attraction to a magnetic field higher than that of nickel. Above this temperature it is the most paramagnetic element. It is found in nature only in an oxidized form. When separated, it usually has impurities of the other rare earths because of their similar chemical properties.
Gadolinium was discovered in 1880 by Jean Charles de Marignac, who detected its oxide by using spectroscopy. It is named after the mineral gadolinite, one of the minerals in which gadolinium is found, itself named for the Finnish chemist Johan Gadolin. Pure gadolinium was first isolated by the chemist Paul-Émile Lecoq de Boisbaudran around 1886.
Gadolinium possesses unusual metallurgical properties, to the extent that as little as 1% of gadolinium can significantly improve the workability and resistance to oxidation at high temperatures of iron, chromium, and related metals. Gadolinium as a metal or a salt absorbs neutrons and is, therefore, used sometimes for shielding in neutron radiography and in nuclear reactors.
Like most of the rare earths, gadolinium forms trivalent ions with fluorescent properties, and salts of gadolinium(III) are used as phosphors in various applications.
Gadolinium(III) ions in water-soluble salts are highly toxic to mammals. However, chelated gadolinium(III) compounds prevent the gadolinium(III) from being exposed to the organism, and the majority is excreted by healthy kidneys before it can deposit in tissues. Because of its paramagnetic properties, solutions of chelated organic gadolinium complexes are used as intravenously administered gadolinium-based MRI contrast agents in medical magnetic resonance imaging.
The main uses of gadolinium, in addition to use as a contrast agent for MRI scans, are in nuclear reactors, in alloys, as a phosphor in medical imaging, as a gamma ray emitter, in electronic devices, in optical devices, and in superconductors.
Characteristics
Physical properties
Gadolinium is the eighth member of the lanthanide series. In the periodic table, it appears between the elements europium to its left and terbium to its right, and above the actinide curium. It is a silvery-white, malleable, ductile rare-earth element. Its 64 electrons are arranged in the configuration of [Xe]4f75d16s2, of which the ten 4f, 5d, and 6s electrons are valence.
Like most other metals in the lanthanide series, three electrons are usually available as valence electrons. The remaining 4f electrons are too strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Gadolinium crystallizes in the hexagonal close-packed α-form at room temperature. At sufficiently high temperatures, it transforms into its β-form, which has a body-centered cubic structure.
The isotope gadolinium-157 has the highest thermal-neutron capture cross-section among any stable nuclide: about 259,000 barns. Only xenon-135 has a higher capture cross-section, about 2.0 million barns, but this isotope is radioactive.
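A cross-section of this size makes even very thin layers of gadolinium strongly neutron-absorbing, which is what the shielding applications described below rely on. A back-of-the-envelope sketch of the attenuation exp(−nσx); the density, molar mass, and 157Gd abundance are assumed reference values, and only the 259,000-barn cross-section comes from the text:

import math

N_A = 6.022e23   # Avogadro's number, 1/mol
BARN = 1e-24     # cm^2

# Assumed reference values for natural gadolinium metal:
rho_g_cm3, molar_mass_g_mol, abundance_157 = 7.90, 157.25, 0.1565
sigma_157 = 259_000 * BARN  # thermal-neutron capture cross-section of 157Gd (from the text)

n_157 = rho_g_cm3 * N_A / molar_mass_g_mol * abundance_157  # 157Gd nuclei per cm^3
Sigma = n_157 * sigma_157                                   # macroscopic cross-section, 1/cm
foil_cm = 0.01                                              # a 0.1 mm gadolinium foil
print(math.exp(-Sigma * foil_cm))  # transmitted fraction ~5e-6: nearly opaque to thermal neutrons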
Gadolinium is believed to be ferromagnetic at temperatures below about 20 °C, its Curie point, and is strongly paramagnetic above this temperature. In fact, at body temperature, gadolinium exhibits the greatest paramagnetic effect of any element. There is evidence that gadolinium is a helical antiferromagnet, rather than a ferromagnet, below this temperature. Gadolinium demonstrates a magnetocaloric effect whereby its temperature increases when it enters a magnetic field and decreases when it leaves the magnetic field. A significant magnetocaloric effect is observed at higher temperatures, up to about 300 kelvins, in the compounds Gd5(Si1−xGex)4.
Individual gadolinium atoms can be isolated by encapsulating them into fullerene molecules, where they can be visualized with a transmission electron microscope. Individual Gd atoms and small Gd clusters can be incorporated into carbon nanotubes.
Chemical properties
Gadolinium combines with most elements to form Gd(III) derivatives. It also combines with nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon, and arsenic at elevated temperatures, forming binary compounds.
Unlike the other rare-earth elements, metallic gadolinium is relatively stable in dry air. However, it tarnishes quickly in moist air, forming a loosely-adhering gadolinium(III) oxide (Gd2O3):
4 Gd + 3 O2 → 2 Gd2O3,
which spalls off, exposing more surface to oxidation.
Gadolinium is a strong reducing agent, which reduces oxides of several metals into their elements. Gadolinium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form gadolinium(III) hydroxide (Gd(OH)3):
2 Gd + 6 H2O → 2 Gd(OH)3 + 3 H2.
Gadolinium metal is attacked readily by dilute sulfuric acid to form solutions containing the colorless Gd(III) ions, which exist as [Gd(H2O)9]3+ complexes:
2 Gd + 3 H2SO4 + 18 H2O → 2 [Gd(H2O)9]3+ + 3 [SO4]2− + 3 H2.
Chemical compounds
In the great majority of its compounds, like many rare-earth metals, gadolinium adopts the oxidation state +3. However, gadolinium can be found on rare occasions in the 0, +1 and +2 oxidation states. All four trihalides are known. All are white, except for the iodide, which is yellow. Most commonly encountered of the halides is gadolinium(III) chloride (GdCl3). The oxide dissolves in acids to give the salts, such as gadolinium(III) nitrate.
Gadolinium(III), like most lanthanide ions, forms complexes with high coordination numbers. This tendency is illustrated by the use of the chelating agent DOTA, an octadentate ligand. Salts of [Gd(DOTA)]− are useful in magnetic resonance imaging. A variety of related chelate complexes have been developed, including gadodiamide.
Reduced gadolinium compounds are known, especially in the solid state. Gadolinium(II) halides are obtained by heating Gd(III) halides in the presence of metallic Gd in tantalum containers. Gadolinium also forms the sesquichloride Gd2Cl3, which can be further reduced to GdCl by annealing at high temperature. This gadolinium(I) chloride forms platelets with a layered graphite-like structure.
Isotopes
Naturally occurring gadolinium is composed of six stable isotopes, 154Gd, 155Gd, 156Gd, 157Gd, 158Gd and 160Gd, and one radioisotope, 152Gd, with the isotope 158Gd being the most abundant (24.8% natural abundance). The predicted double beta decay of 160Gd has never been observed (an experimental lower limit on its half-life of more than 1.3×10^21 years has been measured).
Thirty-three radioisotopes of gadolinium have been observed, with the most stable being 152Gd (naturally occurring), with a half-life of about 1.08×10^14 years, and 150Gd, with a half-life of 1.79×10^6 years. All of the remaining radioactive isotopes have half-lives of less than 75 years. The majority of these have half-lives of less than 25 seconds. Gadolinium isotopes have four metastable isomers, with the most stable being 143mGd (t1/2 = 110 seconds), 145mGd (t1/2 = 85 seconds) and 141mGd (t1/2 = 24.5 seconds).
The isotopes with atomic masses lower than the most abundant stable isotope, 158Gd, primarily decay by electron capture to isotopes of europium. At higher atomic masses, the primary decay mode is beta decay, and the primary products are isotopes of terbium.
History
Gadolinium is named after the mineral gadolinite. Gadolinite was first chemically analyzed by the Finnish chemist Johan Gadolin in 1794. In 1802 German chemist Martin Klaproth gave gadolinite its name. In 1880, the Swiss chemist Jean Charles Galissard de Marignac observed the spectroscopic lines from gadolinium in samples of gadolinite (which actually contains relatively little gadolinium, but enough to show a spectrum) and in the separate mineral cerite. The latter mineral proved to contain far more of the element with the new spectral line. De Marignac eventually separated a mineral oxide from cerite, which he realized was the oxide of this new element. He named the oxide "gadolinia". Because he realized that "gadolinia" was the oxide of a new element, he is credited with the discovery of gadolinium. The French chemist Paul-Émile Lecoq de Boisbaudran carried out the separation of gadolinium metal from gadolinia in 1886.
Occurrence
Gadolinium is a constituent in many minerals, such as monazite and bastnäsite. The metal is too reactive to exist naturally. Paradoxically, as noted above, the mineral gadolinite actually contains only traces of this element. The abundance in the Earth's crust is about 6.2 mg/kg. The main mining areas are in China, the US, Brazil, Sri Lanka, India, and Australia with reserves expected to exceed one million tonnes. World production of pure gadolinium is about 400 tonnes per year. The only known mineral with essential gadolinium, lepersonnite-(Gd), is very rare.
Production
Gadolinium is produced both from monazite and bastnäsite.
Crushed minerals are extracted with hydrochloric acid or sulfuric acid, which converts the insoluble oxides into soluble chlorides or sulfates.
The acidic filtrates are partially neutralized with caustic soda to pH 3–4. Thorium precipitates as its hydroxide, and is then removed.
The remaining solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by heating.
The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3.
The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of gadolinium, samarium and europium.
The salts are separated by ion exchange chromatography.
The rare-earth ions are then selectively washed out by a suitable complexing agent.
Gadolinium metal is obtained from its oxide or salts by heating it with calcium in an argon atmosphere. Sponge gadolinium can be produced by reducing molten GdCl3 with an appropriate metal at temperatures below the melting point of Gd (about 1313 °C) at reduced pressure.
Applications
Gadolinium has no large-scale applications, but it has a variety of specialized uses.
Neutron absorber
Because gadolinium has a high neutron cross-section, it is effective for use with neutron radiography and in shielding of nuclear reactors. It is used as a secondary, emergency shut-down measure in some nuclear reactors, particularly of the CANDU reactor type. Gadolinium is used in nuclear marine propulsion systems as a burnable poison. The use of gadolinium in neutron capture therapy to target tumors has been investigated, and gadolinium-containing compounds have proven promising.
Alloys
Gadolinium possesses unusual metallurgic properties, with as little as 1% of gadolinium improving the workability of iron, chromium, and related alloys, and their resistance to high temperatures and oxidation.
Magnetic contrast agent
Gadolinium is paramagnetic at room temperature, with a ferromagnetic Curie point of about 20 °C. Paramagnetic ions, such as gadolinium, increase nuclear spin relaxation rates, making gadolinium useful as a contrast agent for magnetic resonance imaging (MRI). Solutions of organic gadolinium complexes and gadolinium compounds are used as intravenous contrast agents to enhance images in medical magnetic resonance imaging and magnetic resonance angiography (MRA) procedures. Magnevist is the most widespread example. Nanotubes packed with gadolinium, called "gadonanotubes", are 40 times more effective than the usual gadolinium contrast agent. Traditional gadolinium-based contrast agents are un-targeted, generally distributing throughout the body after injection, but will not readily cross the intact blood–brain barrier. Brain tumors, and other disorders that degrade the blood–brain barrier, allow these agents to penetrate into the brain and facilitate their detection by contrast-enhanced MRI. Similarly, delayed gadolinium-enhanced magnetic resonance imaging of cartilage uses an ionic compound agent, originally Magnevist, that is excluded from healthy cartilage based on electrostatic repulsion but will enter proteoglycan-depleted cartilage in diseases such as osteoarthritis.
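The contrast mechanism is conventionally modelled as a linear increase in the water-proton relaxation rate: 1/T1_observed = 1/T1_tissue + r1·[Gd], where r1 is the agent's longitudinal relaxivity. A sketch with hypothetical but typical magnitudes; none of these numbers come from the text:

def t1_with_agent(t1_tissue_s, r1_per_mM_per_s, conc_mM):
    """Observed T1 under the linear relaxivity model: 1/T1 = 1/T1_0 + r1 * [Gd]."""
    return 1.0 / (1.0 / t1_tissue_s + r1_per_mM_per_s * conc_mM)

# Hypothetical values: tissue T1 of 1.2 s, relaxivity 4 per mM per s, 0.5 mM agent.
print(t1_with_agent(1.2, 4.0, 0.5))  # T1 falls to ~0.35 s, brightening the region on T1-weighted images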
Phosphors
Gadolinium is used as a phosphor in medical imaging. It is contained in the phosphor layer of X-ray detectors, suspended in a polymer matrix. Terbium-doped gadolinium oxysulfide (Gd2O2S:Tb) at the phosphor layer converts the X-rays released from the source into light. This material emits green light at 540 nm because of the presence of Tb3+, which is very useful for enhancing the imaging quality. The energy conversion of Gd is up to 20%, which means that one fifth of the X-ray energy striking the phosphor layer can be converted into visible photons. Gadolinium oxyorthosilicate (Gd2SiO5, GSO; usually doped by 0.1–1.0% of Ce) is a single crystal that is used as a scintillator in medical imaging such as positron emission tomography, and for detecting neutrons.
Gadolinium compounds were also used for making green phosphors for color TV tubes.
Gamma ray emitter
Gadolinium-153 is produced in a nuclear reactor from elemental europium or enriched gadolinium targets. It has a half-life of about 240 days and emits gamma radiation with strong peaks at 41 keV and 102 keV. It is used in many quality-assurance applications, such as line sources and calibration phantoms, to ensure that nuclear-medicine imaging systems operate correctly and produce useful images of radioisotope distribution inside the patient. It is also used as a gamma-ray source in X-ray absorption measurements and in bone density gauges for osteoporosis screening.
Electronic and optical devices
Gadolinium is used for making gadolinium yttrium garnet (Gd:Y3Al5O12), which has microwave applications and is used in fabrication of various optical components and as substrate material for magneto-optical films.
Electrolyte in fuel cells
Gadolinium can also serve as an electrolyte in solid oxide fuel cells (SOFCs). Using gadolinium as a dopant for materials like cerium oxide (in the form of gadolinium-doped ceria) gives an electrolyte having both high ionic conductivity and low operating temperatures.
Magnetic refrigeration
Research is being conducted on magnetic refrigeration near room temperature, which could provide significant efficiency and environmental advantages over conventional refrigeration methods. Gadolinium-based materials, such as Gd5(SixGe1−x)4, are currently the most promising materials, owing to their high Curie temperature and giant magnetocaloric effect. Pure Gd itself exhibits a large magnetocaloric effect near its Curie temperature of about 20 °C, and this has sparked interest in producing Gd alloys having a larger effect and a tunable Curie temperature. In Gd5(SixGe1−x)4, the Si and Ge compositions can be varied to adjust the Curie temperature.
Superconductors
Gadolinium barium copper oxide (GdBCO) is a superconductor with applications in superconducting motors or generators, such as in wind turbines. It can be manufactured in the same way as the most widely researched cuprate high-temperature superconductor, yttrium barium copper oxide (YBCO), and uses an analogous chemical composition (GdBa2Cu3O7−δ). It was used in 2014 to set a new world record for the highest trapped magnetic field in a bulk high-temperature superconductor, with a field of 17.6 T being trapped within two GdBCO bulks.
Asthma treatment
Gadolinium is being investigated as a possible treatment for preventing lung tissue scarring in asthma. A positive effect has been observed in mice.
Niche and former applications
Gadolinium is used for antineutrino detection in the Japanese Super-Kamiokande detector in order to sense supernova explosions. Low-energy neutrons that arise from antineutrino absorption by protons in the detector's ultrapure water are captured by gadolinium nuclei, which subsequently emit gamma rays that are detected as part of the antineutrino signature.
Gadolinium gallium garnet (GGG, Gd3Ga5O12) was used for imitation diamonds and for computer bubble memory.
Safety
As a free ion, gadolinium is reported often to be highly toxic, but MRI contrast agents are chelated compounds and are considered safe enough to be used in most persons. The toxicity of free gadolinium ions in animals is due to interference with a number of calcium-ion channel dependent processes. The 50% lethal dose is about 0.34 mmol/kg (IV, mouse) or 100–200 mg/kg. Toxicity studies in rodents show that chelation of gadolinium (which also improves its solubility) decreases its toxicity with regard to the free ion by a factor of 31 (i.e., the lethal dose for the Gd-chelate increases by 31 times). It is believed therefore that clinical toxicity of gadolinium-based contrast agents (GBCAs) in humans will depend on the strength of the chelating agent; however this research is still not complete. About a dozen different Gd-chelated agents have been approved as MRI contrast agents around the world.
Use of gadolinium-based contrast agents results in deposition of gadolinium in tissues of the brain, bone, skin, and other tissues in amounts that depend on kidney function, structure of the chelates (linear or macrocyclic) and the dose administered. In patients with kidney failure, there is a risk of a rare but serious illness called nephrogenic systemic fibrosis (NSF) that is caused by the use of gadolinium-based contrast agents. The disease resembles scleromyxedema and to some extent scleroderma. It may occur months after a contrast agent has been injected. Its association with gadolinium and not the carrier molecule is confirmed by its occurrence with various contrast materials in which gadolinium is carried by very different carrier molecules. Because of the risk of NSF, use of these agents is not recommended for any individual with end-stage kidney failure as they may require emergent dialysis.
Included in the current guidelines from the Canadian Association of Radiologists are that dialysis patients should receive gadolinium agents only where essential and that they should receive dialysis after the exam. If a contrast-enhanced MRI must be performed on a dialysis patient, it is recommended that certain high-risk contrast agents be avoided but not that a lower dose be considered. The American College of Radiology recommends that contrast-enhanced MRI examinations be performed as closely before dialysis as possible as a precautionary measure, although this has not been proven to reduce the likelihood of developing NSF. The FDA recommends that potential for gadolinium retention be considered when choosing the type of GBCA used in patients requiring multiple lifetime doses, pregnant women, children, and patients with inflammatory conditions.
Anaphylactoid reactions are rare, occurring in approximately 0.03–0.1%.
Long-term environmental impacts of gadolinium contamination due to human usage are a topic of ongoing research.
Biological use
Gadolinium has no known native biological role, but its compounds are used as research tools in biomedicine. Gd3+ compounds are components of MRI contrast agents. It is used in various ion channel electrophysiology experiments to block sodium leak channels and stretch activated ion channels. Gadolinium has recently been used to measure the distance between two points in a protein via electron paramagnetic resonance, something that gadolinium is especially amenable to thanks to EPR sensitivity at w-band (95 GHz) frequencies.
Notes
References
External links
Nephrogenic Systemic Fibrosis – Complication of Gadolinium MR Contrast (series of images at MedPix website)
It's Elemental – Gadolinium
Refrigerator uses gadolinium metal that heats up when exposed to magnetic field
FDA advisory on gadolinium-based contrast
Abdominal MR imaging: important considerations for evaluation of gadolinium enhancement Rafael O.P. de Campos, Vasco Herédia, Ersan Altun, Richard C. Semelka, Department of Radiology University of North Carolina Hospitals Chapel Hill
Inside Japan’s Super Kamiokande 360 degree tour including details on adding Gadolinium to the pure water to aid in studying neutrinos
Chemical elements
Chemical elements with hexagonal close-packed structure
Element toxicology
Ferromagnetic materials
Lanthanides
Neutron poisons
Nuclear materials
Reducing agents
"Physics",
"Chemistry"
] | 4,833 | [
"Element toxicology",
"Chemical elements",
"Redox",
"Biology and pharmacology of chemical elements",
"Ferromagnetic materials",
"Reducing agents",
"Materials",
"Nuclear materials",
"Atoms",
"Matter"
] |
Genetics is the study of genes, genetic variation, and heredity in organisms. It is an important branch in biology because heredity is vital to organisms' evolution. Gregor Mendel, a Moravian Augustinian friar working in the 19th century in Brno, was the first to study genetics scientifically. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring over time. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.
Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded to study the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance), and within the context of a population. Genetics has given rise to a number of subfields, including molecular genetics, epigenetics, and population genetics. Organisms studied within the broad field span the domains of life (archaea, bacteria, and eukarya).
Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intracellular or extracellular environment of a living cell or organism may increase or decrease gene transcription. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate (lacking sufficient rainfall). While the average height to which the two corn stalks could grow is genetically determined, the one in the arid climate only grows to half the height of the one in the temperate climate, due to lack of water and nutrients in its environment.
Etymology
The word genetics stems from the ancient Greek γενετικός (genetikos) meaning "genitive"/"generative", which in turn derives from γένεσις (genesis), meaning "origin".
History
The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century.
Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetic" in a hereditarian context, and is considered the first geneticist. He described several rules of biological inheritance in his work The genetic laws of nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as that which Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries). Festetics argued that changes observed in the generation of farm animals, plants, and humans are the result of scientific laws. Festetics empirically deduced that organisms inherit their characteristics, not acquire them. He recognized recessive traits and inherent variation by postulating that traits of past generations could reappear later, and that organisms could produce progeny with different attributes. These observations represent an important prelude to Mendel's theory of particulate inheritance insofar as they feature a transition of heredity from its status as myth to that of a scientific discipline, by providing a fundamental theoretical basis for genetics in the twentieth century.
Other theories of inheritance preceded Mendel's work. A popular theory during the 19th century, and implied by Charles Darwin's 1859 On the Origin of Species, was blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by use in their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong: the experiences of individuals do not affect the genes they pass to their children. Other theories included Darwin's pangenesis (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.
Mendelian genetics
Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brno, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.
The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905. The adjective genetic, derived from the Greek word genesis—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860. Bateson both acted as a mentor and was aided significantly by the work of other scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906.
After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1900, Nettie Stevens began studying the mealworm. Over the next 11 years, she discovered that females only had the X chromosome and males had both X and Y chromosomes. She was able to conclude that sex is a chromosomal factor and is determined by the male. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.
Molecular genetics
Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation: dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation. The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single celled alga Acetabularia. The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.
James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA has a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what look like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.
Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.
With the newfound molecular understanding of inheritance came an explosion of research. A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.
Features of inheritance
Discrete inheritance and Mendel's laws
At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to offspring. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants, showing for example that flowers on a single plant were either purple or white—but never an intermediate between the two colors. The discrete versions of the same gene controlling the inherited appearance (phenotypes) are called alleles.
In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent. Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous. The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.
When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation. The probability of a given phenotype, however, depends on which alleles are dominant or recessive and on whether the parents are homozygous or heterozygous. For example, Mendel found that crossing two heterozygous organisms yields the dominant and recessive phenotypes in a 3:1 ratio. In practice, geneticists study and calculate such probabilities using theoretical probabilities, empirical probabilities, the product rule, the sum rule, and other tools.
Notation and diagrams
Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.
In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
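A Punnett square is simple enough to enumerate programmatically: pair every allele from one parent with every allele from the other and count the resulting genotypes. A minimal sketch with illustrative allele symbols, reproducing the 3:1 ratio discussed above:

from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a one-gene cross, e.g. 'Aa' x 'Aa'."""
    return Counter(''.join(sorted(a + b)) for a, b in product(parent1, parent2))

f2 = punnett('Aa', 'Aa')                        # an F1 x F1 cross of heterozygotes
print(f2)                                       # Counter({'Aa': 2, 'AA': 1, 'aa': 1})
dominant = sum(n for g, n in f2.items() if 'A' in g)
print(f"{dominant}:{f2['aa']}")                 # 3:1, the dominant:recessive phenotype ratio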
When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree.
Multiple gene interactions
Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "law of independent assortment," means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. Different genes often interact to influence the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white—regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.
Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative—in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.
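Heritability in the broad sense is a variance ratio, H² = V_G / (V_G + V_E), which makes the environment-dependence described above explicit: holding genetic variance fixed while environmental variance grows drives the ratio down. A sketch with hypothetical variance components chosen only to mirror the height example:

def broad_sense_heritability(var_genetic, var_environment):
    """H^2 = V_G / (V_G + V_E): the genetic share of total phenotypic variance."""
    return var_genetic / (var_genetic + var_environment)

print(broad_sense_heritability(8.0, 1.0))   # ~0.89 in a relatively uniform environment
print(broad_sense_heritability(8.0, 4.9))   # ~0.62 when environmental variation is larger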
Molecular basis for inheritance
DNA and chromosomes
The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of deoxyribose (a sugar), a phosphate group, and a base (an amine group). There are four types of bases: adenine (A), cytosine (C), guanine (G), and thymine (T). The phosphates form phosphodiester bonds with the sugars to make long phosphate–sugar backbones. Bases pair specifically (T with A, C with G) between two backbones, forming the rungs of a ladder. The bases, phosphates, and sugars together make a nucleotide; nucleotides connect to make long chains of DNA. Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain. These chains coil into a double-helix structure and wrap around proteins called histones, which provide structural support. DNA wrapped around these histones is called a chromosome. Viruses sometimes use the similar molecule RNA instead of DNA as their genetic material.
DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.
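Because each base pairs with a fixed partner, one strand fully determines the other, which is exactly what template-based replication exploits. A minimal sketch of reconstructing the partner strand:

# A pairs with T and C pairs with G, per Watson-Crick base pairing.
COMPLEMENT = str.maketrans('ATCG', 'TAGC')

def partner_strand(strand):
    """Return the complementary strand, reversed so that both read in the same direction."""
    return strand.translate(COMPLEMENT)[::-1]

print(partner_strand('ATGCGT'))  # ACGCAT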
Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins. The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.
DNA is most often found in the nucleus of cells, but Ruth Sager helped in the discovery of nonchromosomal genes found outside of the nucleus. In plants, these are often found in the chloroplasts and in other organisms, in the mitochondria. These nonchromosomal genes can still be passed on by either partner in sexual reproduction and they control a variety of hereditary characteristics that replicate and remain active throughout generations.
While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.
Many species have so-called sex chromosomes that determine the sex of each organism. In humans and many other animals, the Y chromosome contains the gene that triggers the development of the specifically male characteristics. In evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. This being said, Mary Frances Lyon discovered that there is X-chromosome inactivation during reproduction to avoid passing on twice as many genes to the offspring. Lyon's discovery led to the discovery of X-linked diseases.
Reproduction
When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.
Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.
Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated. Natural bacterial transformation occurs in many bacterial species, and can be regarded as a sexual process for transferring DNA from one cell to another cell (usually of the same species). Transformation requires the action of numerous bacterial gene products, and its primary adaptive function appears to be repair of DNA damage in the recipient cell.
Recombination and genetic linkage
The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do, via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells. Meiotic recombination, particularly in microbial eukaryotes, appears to serve the adaptive function of repair of DNA damage.
The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.
The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For genes far enough apart, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.
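As a rough illustration of how such a map is built, the Python sketch below is an added example with invented offspring counts; by convention, 1% recombination corresponds to one map unit (centimorgan):

```python
# Estimate recombination frequency from a test cross: the fraction of
# offspring in which the two parental alleles were separated by crossover.
def recombination_frequency(recombinants: int, total: int) -> float:
    return recombinants / total

# Invented counts for three genes A, B, C on one chromosome.
rf_ab = recombination_frequency(recombinants=100, total=1000)
rf_bc = recombination_frequency(recombinants=50, total=1000)
rf_ac = recombination_frequency(recombinants=150, total=1000)

# Distances are roughly additive, so the linear order A-B-C can be read off.
print(f"A-B: {rf_ab:.0%}, B-C: {rf_bc:.0%}, A-C: {rf_ac:.0%}")
# Tightly linked genes show low frequencies; genes far apart approach 50%,
# i.e. effectively independent inheritance.
```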
Gene expression
Genetic code
Genes express their functional effect through the production of proteins, which are molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each composed of a sequence of amino acids. The DNA sequence of a gene is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.
This messenger RNA molecule then serves to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.
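A toy translation routine makes the codon correspondence concrete. The sketch below is an illustrative addition; it includes only four of the 64 codons of the standard genetic code (real assignments), enough to translate the invented mRNA in the example:

```python
# A small fragment of the standard genetic code.
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the usual start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # an instruction to end the amino acid sequence
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA one codon (three nucleotides) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```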
The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions. Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.
A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.
Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
Some DNA sequences are transcribed into RNA but are not translated into protein products—such RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (such as microRNA).
Nature and nurture
Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. The phrase "nature and nurture" refers to this complementary relationship. The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as its legs, ears, tail, and face, so the cat has dark hair at its extremities.
Environment plays a major role in effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive intellectual disability and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.
A common method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births. Identical siblings are genetically the same since they come from the same zygote. Meanwhile, fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors. One famous example involved the study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.
Gene regulation
The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene. Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.
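The logic of this negative feedback loop can be caricatured in a few lines. The discrete-time simulation below is a deliberately crude added sketch; the threshold, production, and consumption rates are invented numbers, not measured values:

```python
# Toy model: synthesis genes are active only while tryptophan is scarce;
# abundant tryptophan activates the repressor, which blocks transcription.
trp = 0.0
THRESHOLD = 5.0    # invented repressor-activation level

for step in range(15):
    genes_on = trp < THRESHOLD     # repressor unbound -> genes expressed
    if genes_on:
        trp += 1.0                 # synthesis while the genes are on
    trp -= 0.4                     # steady consumption by the cell
    print(f"step {step:2d}: trp = {trp:4.1f}, genes {'on ' if genes_on else 'off'}")
# The level hovers near the threshold rather than growing without bound:
# the hallmark of negative feedback regulation.
```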
Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.
Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.
Genetic change
Mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence. A particularly important source of DNA damage appears to be reactive oxygen species produced by cellular aerobic respiration, and these can lead to mutations.
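To put the quoted error rate in perspective, the following back-of-envelope calculation is an added illustration; the 3-billion-base figure is an assumed human-scale genome size, not a value stated in this section:

```python
# One error per 10 to 100 million bases, as quoted above.
genome_size = 3_000_000_000          # assumed human-scale genome, in base pairs

for rate in (1e-7, 1e-8):            # 1 per 10^7 and 1 per 10^8 bases
    expected_errors = genome_size * rate
    print(f"error rate {rate:.0e}: ~{expected_errors:.0f} errors per replication")
# Even at these very low rates, every round of replication introduces a
# handful of new mutations, most of them outside protein coding sequences.
```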
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions, deletions of entire regions—or the accidental exchange of whole parts of sequences between different chromosomes, chromosomal translocation.
Natural selection and evolution
Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness. Mutations that do have an effect are usually detrimental, but occasionally some can be beneficial. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations are harmful with the remainder being either neutral or weakly beneficial.
Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration.
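Among the listed factors, genetic drift is the easiest to demonstrate computationally. The Wright-Fisher-style sketch below is an added illustration with invented parameters; it shows an allele's frequency wandering, and sometimes fixing or vanishing, purely by chance:

```python
import random

def drift(freq: float, pop_size: int, generations: int) -> float:
    """Resample 2N gene copies each generation; no selection, no mutation."""
    copies = 2 * pop_size
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(copies))
        freq = carriers / copies
        if freq in (0.0, 1.0):       # allele lost or fixed; drift stops
            break
    return freq

random.seed(42)
outcomes = [drift(0.5, pop_size=50, generations=500) for _ in range(8)]
print([round(f, 2) for f in outcomes])   # many runs end at 0.0 or 1.0
```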
Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.
By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).
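A crude version of such a comparison is the proportion of sites that differ between aligned sequences (the p-distance). The snippet below is an added sketch with invented sequences; real phylogenetic methods correct for multiple substitutions at the same site and use explicit evolutionary models:

```python
def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of differing sites between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

# Invented aligned fragments from three hypothetical species.
species_x = "ATGCCGTAGGCT"
species_y = "ATGCCGTAGGTT"
species_z = "ATGACGGAGCTT"

print(p_distance(species_x, species_y))   # small distance: close relatives
print(p_distance(species_x, species_z))   # larger distance: earlier divergence
```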
Research and technology
Model organisms
Although geneticists originally studied inheritance in a wide variety of organisms, the range of species studied has narrowed. One reason is that when significant research already exists for a given organism, new researchers are more likely to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), the zebrafish (Danio rerio), and the common house mouse (Mus musculus).
Medicine
Medical genetics seeks to understand how genetic variation relates to human health and disease. When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene. Once a candidate gene is found, further research is often done on the corresponding (or homologous) genes of model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.
Individuals differ in their inherited tendency to develop cancer, and cancer is a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.
Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (three to seven). A cancer cell can divide without growth factors and ignores inhibitory signals. Also, it is immortal and can grow indefinitely, even after it makes contact with neighboring cells. It may escape from the epithelium and ultimately from the primary tumor. Then, the escaped cell can cross the endothelium of a blood vessel and be transported by the bloodstream to colonize a new organ, forming a deadly metastasis.
Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the Ras proteins, or in other oncogenes.
Research methods
DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA. DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
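For instance, the enzyme EcoRI cuts at the sequence GAATTC. The short sketch below is an added illustration (the input DNA is invented); it finds the recognition sites that determine where the predictable fragments begin and end:

```python
# Locate every occurrence of a restriction enzyme's recognition site.
def cut_sites(dna: str, site: str = "GAATTC") -> list[int]:    # EcoRI's site
    length = len(site)
    return [i for i in range(len(dna) - length + 1) if dna[i:i + length] == site]

sequence = "TTGAATTCAAGGAATTCC"     # invented example
print(cut_sites(sequence))          # [2, 11] -> three fragments after digestion
```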
The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacterial cells). "Cloning" can also refer to the various means of creating cloned ("clonal") organisms.
DNA can also be amplified using a procedure called the polymerase chain reaction (PCR). By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
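The arithmetic behind this sensitivity is simple doubling. The sketch below is an added illustration; the 30-cycle figure and the per-cycle efficiency are typical assumptions rather than values from the text:

```python
# Idealized PCR: each cycle multiplies every template by (1 + efficiency).
def pcr_copies(start_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    return start_copies * (1 + efficiency) ** cycles

print(f"perfect doubling, 30 cycles: {pcr_copies(1, 30):.2e} copies")
print(f"90% efficiency,   30 cycles: {pcr_copies(1, 30, 0.9):.2e} copies")
# Exponential growth is what lets PCR detect vanishingly small amounts of DNA.
```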
DNA sequencing and genomics
DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments. Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.
As sequencing has become less expensive, researchers have sequenced the genomes of many organisms using a process called genome assembly, which uses computational tools to stitch together sequences from many different fragments. These technologies were used to sequence the human genome in the Human Genome Project completed in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.
Next-generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently. The large amount of sequence data available has created the subfield of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A problem common to these fields of research is how to manage and share data involving human subjects and personally identifiable information.
Society and culture
On 19 March 2015, a group of leading biologists urged a worldwide ban on clinical use of gene-editing methods, particularly CRISPR and zinc finger nucleases, to edit the human genome in a way that can be inherited. In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.
See also
Bacterial genome size
Cryoconservation of animal genetic resources
Eugenics
Embryology
Genetic disorder
Genetic diversity
Genetic engineering
Genetic enhancement
Glossary of genetics (M−Z)
Index of genetics articles
Medical genetics
Molecular tools for gene study
Neuroepigenetics
Outline of genetics
Timeline of the history of genetics
Plant genetic resources
References
Further reading
External links
Genetics | Genetics | [
"Biology"
] | 8,147 | [
"Genetics"
] |
12,281 | https://en.wikipedia.org/wiki/Gottfried%20Wilhelm%20Leibniz | Gottfried Wilhelm Leibniz (or Leibnitz; 1 July 1646 – 14 November 1716) was a German polymath active as a mathematician, philosopher, scientist and diplomat who is credited, alongside Sir Isaac Newton, with the creation of calculus in addition to many other branches of mathematics, such as binary arithmetic and statistics. Leibniz has been called the "last universal genius" due to his vast expertise across fields, which became a rarity after his lifetime with the coming of the Industrial Revolution and the spread of specialized labor. He is a prominent figure in both the history of philosophy and the history of mathematics. He wrote works on philosophy, theology, ethics, politics, law, history, philology, games, music, and other studies. Leibniz also made major contributions to physics and technology, and anticipated notions that surfaced much later in probability theory, biology, medicine, geology, psychology, linguistics and computer science.
Leibniz contributed to the field of library science by developing a cataloguing system while working at the Herzog August Library in Wolfenbüttel, Germany, that served as a model for many of Europe's largest libraries. His contributions to a wide range of subjects were scattered in various learned journals, in tens of thousands of letters and in unpublished manuscripts. He wrote in several languages, primarily in Latin, French and German.
As a philosopher, he was a leading representative of 17th-century rationalism and idealism. As a mathematician, his major achievement was the development of the main ideas of differential and integral calculus, independently of Newton's contemporaneous developments. Leibniz's notation has been favored as the conventional and more exact expression of calculus. He devised the modern binary number system, which is the basis of modern digital computing and communications.
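Leibniz's binary system is the one programmers still use. The short Python sketch below, an added illustration rather than anything from the article, reproduces the repeated-halving procedure by which a number is written in base two:

```python
def to_binary(n: int) -> str:
    """Write a non-negative integer in base two by repeated division."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

assert to_binary(1716) == format(1716, "b")   # matches Python's built-in
print(to_binary(1716))                        # 11010110100
```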
In the 20th century, Leibniz's notions of the law of continuity and transcendental law of homogeneity found a consistent mathematical formulation by means of non-standard analysis. He was also a pioneer in the field of mechanical calculators. While working on adding automatic multiplication and division to Pascal's calculator, he was the first to describe a pinwheel calculator in 1685 and invented the Leibniz wheel, later used in the arithmometer, the first mass-produced mechanical calculator.
In philosophy and theology, Leibniz is most noted for his optimism, i.e. his conclusion that our world is, in a qualified sense, the best possible world that God could have created, a view sometimes lampooned by other thinkers, such as Voltaire in his satirical novella Candide. Leibniz, along with René Descartes and Baruch Spinoza, was one of the three influential early modern rationalists. His philosophy also assimilates elements of the scholastic tradition, notably the assumption that some substantive knowledge of reality can be achieved by reasoning from first principles or prior definitions. The work of Leibniz anticipated modern logic and still influences contemporary analytic philosophy, such as its adopted use of the term "possible world" to define modal notions.
Biography
Early life
Gottfried Leibniz was born on 1 July 1646, in Leipzig, Saxony, to Friedrich Leibniz (1597–1652) and Catharina Schmuck (1621–1664).
He was baptized two days later at St. Nicholas Church, Leipzig; his godfather was the Lutheran theologian . His father died when he was six years old, and Leibniz was raised by his mother.
Leibniz's father had been a Professor of Moral Philosophy at the University of Leipzig, where he also served as dean of philosophy. The boy inherited his father's personal library. He was given free access to it from the age of seven, shortly after his father's death. While Leibniz's schoolwork was largely confined to the study of a small canon of authorities, his father's library enabled him to study a wide variety of advanced philosophical and theological works—ones that he would not have otherwise been able to read until his college years. Access to his father's library, largely written in Latin, also led to his proficiency in the Latin language, which he achieved by the age of 12. At the age of 13 he composed 300 hexameters of Latin verse in a single morning for a special event at school.
In April 1661 he enrolled in his father's former university at age 14. There he was guided, among others, by Jakob Thomasius, previously a student of Friedrich. Leibniz completed his bachelor's degree in Philosophy in December 1662. He defended his Disputatio Metaphysica de Principio Individui (Metaphysical Disputation on the Principle of Individuation), which addressed the principle of individuation, presenting an early version of monadic substance theory. Leibniz earned his master's degree in Philosophy on 7 February 1664. In December 1664 he published and defended a dissertation Specimen Quaestionum Philosophicarum ex Jure collectarum (An Essay of Collected Philosophical Problems of Right), arguing for both a theoretical and a pedagogical relationship between philosophy and law. After one year of legal studies, he was awarded his bachelor's degree in Law on 28 September 1665. His dissertation was titled De conditionibus (On Conditions).
In early 1666, at age 19, Leibniz wrote his first book, De Arte Combinatoria (On the Combinatorial Art), the first part of which was also his habilitation thesis in Philosophy, which he defended in March 1666. De Arte Combinatoria was inspired by Ramon Llull's Ars Magna and contained a proof of the existence of God, cast in geometrical form, and based on the argument from motion.
His next goal was to earn his license and Doctorate in Law, which normally required three years of study. In 1666, the University of Leipzig turned down Leibniz's doctoral application and refused to grant him a Doctorate in Law, most likely due to his relative youth. Leibniz subsequently left Leipzig.
Leibniz then enrolled in the University of Altdorf and quickly submitted a thesis, which he had probably been working on earlier in Leipzig. The title of his thesis was Disputatio Inauguralis de Casibus Perplexis in Jure (Inaugural Disputation on Ambiguous Legal Cases). Leibniz earned his license to practice law and his Doctorate in Law in November 1666. He next declined the offer of an academic appointment at Altdorf, saying that "my thoughts were turned in an entirely different direction".
As an adult, Leibniz often introduced himself as "Gottfried von Leibniz". Many posthumously published editions of his writings presented his name on the title page as "Freiherr G. W. von Leibniz." However, no document has ever been found from any contemporary government that stated his appointment to any form of nobility.
1666–1676
Leibniz's first position was as a salaried secretary to an alchemical society in Nuremberg. He knew fairly little about the subject at that time but presented himself as deeply learned. He soon met Johann Christian von Boyneburg (1622–1672), the dismissed chief minister of the Elector of Mainz, Johann Philipp von Schönborn. Von Boyneburg hired Leibniz as an assistant, and shortly thereafter reconciled with the Elector and introduced Leibniz to him. Leibniz then dedicated an essay on law to the Elector in the hope of obtaining employment. The stratagem worked; the Elector asked Leibniz to assist with the redrafting of the legal code for the Electorate. In 1669, Leibniz was appointed assessor in the Court of Appeal. Although von Boyneburg died late in 1672, Leibniz remained under the employment of his widow until she dismissed him in 1674.
Von Boyneburg did much to promote Leibniz's reputation, and the latter's memoranda and letters began to attract favorable notice. After Leibniz's service to the Elector there soon followed a diplomatic role. He published an essay, under the pseudonym of a fictitious Polish nobleman, arguing (unsuccessfully) for the German candidate for the Polish crown. The main force in European geopolitics during Leibniz's adult life was the ambition of Louis XIV of France, backed by French military and economic might. Meanwhile, the Thirty Years' War had left German-speaking Europe exhausted, fragmented, and economically backward. Leibniz proposed to protect German-speaking Europe by distracting Louis as follows: France would be invited to take Egypt as a stepping stone towards an eventual conquest of the Dutch East Indies. In return, France would agree to leave Germany and the Netherlands undisturbed. This plan obtained the Elector's cautious support. In 1672, the French government invited Leibniz to Paris for discussion, but the plan was soon overtaken by the outbreak of the Franco-Dutch War and became irrelevant. Napoleon's failed invasion of Egypt in 1798 can be seen as an unwitting, late implementation of Leibniz's plan, after the Eastern hemisphere colonial supremacy in Europe had already passed from the Dutch to the British.
Thus Leibniz went to Paris in 1672. Soon after arriving, he met Dutch physicist and mathematician Christiaan Huygens and realised that his own knowledge of mathematics and physics was patchy. With Huygens as his mentor, he began a program of self-study that soon pushed him to making major contributions to both subjects, including discovering his version of the differential and integral calculus. He met Nicolas Malebranche and Antoine Arnauld, the leading French philosophers of the day, and studied the writings of Descartes and Pascal, unpublished as well as published. He befriended a German mathematician, Ehrenfried Walther von Tschirnhaus; they corresponded for the rest of their lives.
When it became clear that France would not implement its part of Leibniz's Egyptian plan, the Elector sent his nephew, escorted by Leibniz, on a related mission to the English government in London, early in 1673. There Leibniz became acquainted with Henry Oldenburg and John Collins. He met with the Royal Society where he demonstrated a calculating machine that he had designed and had been building since 1670. The machine was able to execute all four basic operations (adding, subtracting, multiplying, and dividing), and the society quickly made him an external member.
The mission ended abruptly when news of the Elector's death (12 February 1673) reached them. Leibniz promptly returned to Paris and not, as had been planned, to Mainz. The sudden deaths of his two patrons in the same winter meant that Leibniz had to find a new basis for his career.
In this regard, a 1669 invitation from Duke John Frederick of Brunswick to visit Hanover proved to have been fateful. Leibniz had declined the invitation, but had begun corresponding with the duke in 1671. In 1673, the duke offered Leibniz the post of counsellor. Leibniz very reluctantly accepted the position two years later, only after it became clear that no employment was forthcoming in Paris, whose intellectual stimulation he relished, or with the Habsburg imperial court.
In 1675 he tried to get admitted to the French Academy of Sciences as a foreign honorary member, but it was considered that there were already enough foreigners there and so no invitation came. He left Paris in October 1676.
House of Hanover, 1676–1716
Leibniz managed to delay his arrival in Hanover until the end of 1676 after making one more short journey to London, where Newton accused him of having seen his unpublished work on calculus in advance. This was alleged to be evidence supporting the accusation, made decades later, that he had stolen calculus from Newton. On the journey from London to Hanover, Leibniz stopped in The Hague where he met van Leeuwenhoek, the discoverer of microorganisms. He also spent several days in intense discussion with Spinoza, who had just completed, but had not published, his masterwork, the Ethics. Spinoza died very shortly after Leibniz's visit.
In 1677, he was promoted, at his request, to Privy Counselor of Justice, a post he held for the rest of his life. Leibniz served three consecutive rulers of the House of Brunswick as historian, political adviser, and most consequentially, as librarian of the ducal library. He thenceforth employed his pen on all the various political, historical, and theological matters involving the House of Brunswick; the resulting documents form a valuable part of the historical record for the period.
Leibniz began promoting a project to use windmills to improve the mining operations in the Harz Mountains. This project did little to improve mining operations and was shut down by Duke Ernst August in 1685.
Among the few people in north Germany to accept Leibniz were the Electress Sophia of Hanover (1630–1714), her daughter Sophia Charlotte of Hanover (1668–1705), the Queen of Prussia and his avowed disciple, and Caroline of Ansbach, the consort of her grandson, the future George II. To each of these women he was correspondent, adviser, and friend. In turn, they all approved of Leibniz more than did their spouses and the future king George I of Great Britain.
The population of Hanover was only about 10,000, and its provinciality eventually grated on Leibniz. Nevertheless, to be a major courtier to the House of Brunswick was quite an honor, especially in light of the meteoric rise in the prestige of that House during Leibniz's association with it. In 1692, the Duke of Brunswick became a hereditary Elector of the Holy Roman Empire. The British Act of Settlement 1701 designated the Electress Sophia and her descent as the royal family of England, once both King William III and his sister-in-law and successor, Queen Anne, were dead. Leibniz played a role in the initiatives and negotiations leading up to that Act, but not always an effective one. For example, something he published anonymously in England, thinking to promote the Brunswick cause, was formally censured by the British Parliament.
The Brunswicks tolerated the enormous effort Leibniz devoted to intellectual pursuits unrelated to his duties as a courtier, pursuits such as perfecting calculus, writing about other mathematics, logic, physics, and philosophy, and keeping up a vast correspondence. He began working on calculus in 1674; the earliest evidence of its use in his surviving notebooks is 1675. By 1677 he had a coherent system in hand, but did not publish it until 1684. Leibniz's most important mathematical papers were published between 1682 and 1692, usually in a journal which he and Otto Mencke founded in 1682, the Acta Eruditorum. That journal played a key role in advancing his mathematical and scientific reputation, which in turn enhanced his eminence in diplomacy, history, theology, and philosophy.
The Elector Ernest Augustus commissioned Leibniz to write a history of the House of Brunswick, going back to the time of Charlemagne or earlier, hoping that the resulting book would advance his dynastic ambitions. From 1687 to 1690, Leibniz traveled extensively in Germany, Austria, and Italy, seeking and finding archival materials bearing on this project. Decades went by but no history appeared; the next Elector became quite annoyed at Leibniz's apparent dilatoriness. Leibniz never finished the project, in part because of his huge output on many other fronts, but also because he insisted on writing a meticulously researched and erudite book based on archival sources, when his patrons would have been quite happy with a short popular book, one perhaps little more than a genealogy with commentary, to be completed in three years or less. They never knew that he had in fact carried out a fair part of his assigned task: when the material Leibniz had written and collected for his history of the House of Brunswick was finally published in the 19th century, it filled three volumes.
Leibniz was appointed Librarian of the Herzog August Library in Wolfenbüttel, Lower Saxony, in 1691.
In 1708, John Keill, writing in the journal of the Royal Society and with Newton's presumed blessing, accused Leibniz of having plagiarised Newton's calculus. Thus began the calculus priority dispute which darkened the remainder of Leibniz's life. A formal investigation by the Royal Society (in which Newton was an unacknowledged participant), undertaken in response to Leibniz's demand for a retraction, upheld Keill's charge. Historians of mathematics writing since 1900 or so have tended to acquit Leibniz, pointing to important differences between Leibniz's and Newton's versions of calculus.
In 1712, Leibniz began a two-year residence in Vienna, where he was appointed Imperial Court Councillor to the Habsburgs. On the death of Queen Anne in 1714, Elector George Louis became King George I of Great Britain, under the terms of the 1701 Act of Settlement. Even though Leibniz had done much to bring about this happy event, it was not to be his hour of glory. Despite the intercession of the Princess of Wales, Caroline of Ansbach, George I forbade Leibniz to join him in London until he completed at least one volume of the history of the Brunswick family his father had commissioned nearly 30 years earlier. Moreover, for George I to include Leibniz in his London court would have been deemed insulting to Newton, who was seen as having won the calculus priority dispute and whose standing in British official circles could not have been higher. Finally, his dear friend and defender, the Dowager Electress Sophia, died in 1714. In 1716, while traveling in northern Europe, the Russian Tsar Peter the Great stopped in Bad Pyrmont and met Leibniz, who had taken an interest in Russian matters since 1708 and had been appointed an advisor in 1711.
Death
Leibniz died in Hanover in 1716. At the time, he was so out of favor that neither George I (who happened to be near Hanover at that time) nor any fellow courtier other than his personal secretary attended the funeral. Even though Leibniz was a life member of the Royal Society and the Berlin Academy of Sciences, neither organization saw fit to honor his death. His grave went unmarked for more than 50 years. He was, however, eulogized by Fontenelle, before the French Academy of Sciences in Paris, which had admitted him as a foreign member in 1700. The eulogy was composed at the behest of the Duchess of Orleans, a niece of the Electress Sophia.
Personal life
Leibniz never married. He proposed to an unknown woman at age 50, but changed his mind when she took too long to decide. He complained on occasion about money, but the fair sum he left to his sole heir, his sister's stepson, proved that the Brunswicks had paid him fairly well. In his diplomatic endeavors, he at times verged on the unscrupulous, as was often the case with professional diplomats of his day. On several occasions, Leibniz backdated and altered personal manuscripts, actions which put him in a bad light during the calculus controversy.
He was charming, well-mannered, and not without humor and imagination. He had many friends and admirers all over Europe. He was identified as a Protestant and a philosophical theist. Leibniz remained committed to Trinitarian Christianity throughout his life.
Philosophy
Leibniz's philosophical thinking appears fragmented because his philosophical writings consist mainly of a multitude of short pieces: journal articles, manuscripts published long after his death, and letters to correspondents. He wrote two book-length philosophical treatises, of which only the Théodicée of 1710 was published in his lifetime.
Leibniz dated his beginning as a philosopher to his Discourse on Metaphysics, which he composed in 1686 as a commentary on a running dispute between Nicolas Malebranche and Antoine Arnauld. This led to an extensive correspondence with Arnauld; it and the Discourse were not published until the 19th century. In 1695, Leibniz made his public entrée into European philosophy with a journal article titled "New System of the Nature and Communication of Substances". Between 1695 and 1705, he composed his New Essays on Human Understanding, a lengthy commentary on John Locke's 1690 An Essay Concerning Human Understanding, but upon learning of Locke's 1704 death, lost the desire to publish it, so that the New Essays were not published until 1765. The Monadologie, composed in 1714 and published posthumously, consists of 90 aphorisms.
Leibniz also wrote a short paper, "Primae veritates" ("First Truths"), first published by Louis Couturat in 1903 (pp. 518–523) summarizing his views on metaphysics. The paper is undated; that he wrote it while in Vienna in 1689 was determined only in 1999, when the ongoing critical edition finally published Leibniz's philosophical writings for the period 1677–1690. Couturat's reading of this paper influenced much 20th-century thinking about Leibniz, especially among analytic philosophers. After a meticulous study (informed by the 1999 additions to the critical edition) of all of Leibniz's philosophical writings up to 1688, Mercer (2001) disagreed with Couturat's reading.
Leibniz met Baruch Spinoza in 1676, read some of his unpublished writings, and had since been influenced by some of Spinoza's ideas. While Leibniz befriended him and admired Spinoza's powerful intellect, he was also dismayed by Spinoza's conclusions, especially when these were inconsistent with Christian orthodoxy.
Unlike Descartes and Spinoza, Leibniz had a university education in philosophy. He was influenced by his Leipzig professor Jakob Thomasius, who also supervised his BA thesis in philosophy. Leibniz also read Francisco Suárez, a Spanish Jesuit respected even in Lutheran universities. Leibniz was deeply interested in the new methods and conclusions of Descartes, Huygens, Newton, and Boyle, but the established philosophical ideas in which he was educated influenced his view of their work.
Principles
Leibniz variously invoked one or another of seven fundamental philosophical Principles:
Identity/contradiction. If a proposition is true, then its negation is false and vice versa.
Identity of indiscernibles. Two distinct things cannot have all their properties in common. If every predicate possessed by x is also possessed by y and vice versa, then entities x and y are identical; to suppose two things indiscernible is to suppose the same thing under two names. The "identity of indiscernibles" is frequently invoked in modern logic and philosophy. It has attracted the most controversy and criticism, especially from corpuscular philosophy and quantum mechanics. The converse of this is often called Leibniz's law, or the indiscernibility of identicals, which is mostly uncontroversial.
Sufficient reason. "There must be a sufficient reason for anything to exist, for any event to occur, for any truth to obtain."
Pre-established harmony. "[T]he appropriate nature of each substance brings it about that what happens to one corresponds to what happens to all the others, without, however, their acting upon one another directly." (Discourse on Metaphysics, XIV) A dropped glass shatters because it "knows" it has hit the ground, and not because the impact with the ground "compels" the glass to split.
Law of continuity. Natura non facit saltus (literally, "Nature does not make jumps").
Optimism. "God assuredly always chooses the best."
Plenitude. Leibniz believed that the best of all possible worlds would actualize every genuine possibility, and argued in Théodicée that this best of all possible worlds will contain all possibilities, with our finite experience of eternity giving no reason to dispute nature's perfection.
Leibniz would on occasion give a rational defense of a specific principle, but more often took them for granted.
Monads
Leibniz's best known contribution to metaphysics is his theory of monads, as expounded in the Monadologie. He proposes his theory that the universe is made of an infinite number of simple substances known as monads. Monads can also be compared to the corpuscles of the mechanical philosophy of René Descartes and others. These simple substances or monads are the "ultimate units of existence in nature". Monads have no parts but still exist by the qualities that they have. These qualities are continuously changing over time, and each monad is unique. They are also not affected by time and are subject to only creation and annihilation. Monads are centers of force; substance is force, while space, matter, and motion are merely phenomenal. He argued, against Newton, that space, time, and motion are completely relative: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." Einstein, who called himself a "Leibnizian", wrote in the introduction to Max Jammer's book Concepts of Space that Leibnizianism was superior to Newtonianism, and his ideas would have dominated over Newton's had it not been for the poor technological tools of the time; Joseph Agassi argues that Leibniz paved the way for Einstein's theory of relativity.
Leibniz's proof of God can be summarized in the Théodicée. Reason is governed by the principle of contradiction and the principle of sufficient reason. Using the principle of sufficient reason, Leibniz concluded that the first reason of all things is God. All that we see and experience is subject to change, and the fact that this world is contingent can be explained by the possibility of the world being arranged differently in space and time. The contingent world must have some necessary reason for its existence. Leibniz uses a geometry book as an example to explain his reasoning. If this book was copied from an infinite chain of copies, there must be some reason for the content of the book. Leibniz concluded that there must be the "monas monadum" or God.
The ontological essence of a monad is its irreducible simplicity. Unlike atoms, monads possess no material or spatial character. They also differ from atoms by their complete mutual independence, so that interactions among monads are only apparent. Instead, by virtue of the principle of pre-established harmony, each monad follows a pre-programmed set of "instructions" peculiar to itself, so that a monad "knows" what to do at each moment. By virtue of these intrinsic instructions, each monad is like a little mirror of the universe. Monads need not be "small"; e.g., each human being constitutes a monad, in which case free will is problematic.
Monads are purported to have gotten rid of the problematic:
interaction between mind and matter arising in the system of Descartes;
lack of individuation inherent to the system of Spinoza, which represents individual creatures as merely accidental.
Theodicy and optimism
The Theodicy tries to justify the apparent imperfections of the world by claiming that it is optimal among all possible worlds. It must be the best possible and most balanced world, because it was created by an all powerful and all knowing God, who would not choose to create an imperfect world if a better world could be known to him or possible to exist. In effect, apparent flaws that can be identified in this world must exist in every possible world, because otherwise God would have chosen to create the world that excluded those flaws.
Leibniz asserted that the truths of theology (religion) and philosophy cannot contradict each other, since reason and faith are both "gifts of God" so that their conflict would imply God contending against himself. The Theodicy is Leibniz's attempt to reconcile his personal philosophical system with his interpretation of the tenets of Christianity. This project was motivated in part by Leibniz's belief, shared by many philosophers and theologians during the Enlightenment, in the rational and enlightened nature of the Christian religion. It was also shaped by Leibniz's belief in the perfectibility of human nature (if humanity relied on correct philosophy and religion as a guide), and by his belief that metaphysical necessity must have a rational or logical foundation, even if this metaphysical causality seemed inexplicable in terms of physical necessity (the natural laws identified by science).
In the view of Leibniz, because reason and faith must be entirely reconciled, any tenet of faith which could not be defended by reason must be rejected. Leibniz then approached one of the central criticisms of Christian theism: if God is all good, all wise, and all powerful, then how did evil come into the world? The answer (according to Leibniz) is that, while God is indeed unlimited in wisdom and power, his human creations, as creations, are limited both in their wisdom and in their will (power to act). This predisposes humans to false beliefs, wrong decisions, and ineffective actions in the exercise of their free will. God does not arbitrarily inflict pain and suffering on humans; rather he permits both moral evil (sin) and physical evil (pain and suffering) as the necessary consequences of metaphysical evil (imperfection), as a means by which humans can identify and correct their erroneous decisions, and as a contrast to true good.
Further, although human actions flow from prior causes that ultimately arise in God and therefore are known to God as metaphysical certainties, an individual's free will is exercised within natural laws, where choices are merely contingently necessary and to be decided in the event by a "wonderful spontaneity" that provides individuals with an escape from rigorous predestination.
Discourse on Metaphysics
For Leibniz, "God is an absolutely perfect being". He describes this perfection later in section VI as the simplest form of something with the most substantial outcome (VI). Along these lines, he declares that every type of perfection "pertains to him (God) in the highest degree" (I). Even though his types of perfections are not specifically drawn out, Leibniz highlights the one thing that, to him, does certify imperfections and proves that God is perfect: "that one acts imperfectly if he acts with less perfection than he is capable of", and since God is a perfect being, he cannot act imperfectly (III). Because God cannot act imperfectly, the decisions he makes pertaining to the world must be perfect. Leibniz also comforts readers, stating that because he has done everything to the most perfect degree; those who love him cannot be injured. However, to love God is a subject of difficulty as Leibniz believes that we are "not disposed to wish for that which God desires" because we have the ability to alter our disposition (IV). In accordance with this, many act as rebels, but Leibniz says that the only way we can truly love God is by being content "with all that comes to us according to his will" (IV).
Because God is "an absolutely perfect being" (I), Leibniz argues that God would be acting imperfectly if he acted with any less perfection than what he is capable of (III). His syllogism then ends with the statement that God has made the world perfectly in all ways. This also affects how we should view God and his will. Leibniz states that, in lieu of God's will, we have to understand that God "is the best of all masters" and he will know when his good succeeds, so we, therefore, must act in conformity to his good will—or as much of it as we understand (IV). In our view of God, Leibniz declares that we cannot admire the work solely because of the maker, lest we mar the glory and love God in doing so. Instead, we must admire the maker for the work he has done (II). Effectively, Leibniz states that if we say the earth is good because of the will of God, and not good according to some standards of goodness, then how can we praise God for what he has done if contrary actions are also praiseworthy by this definition (II). Leibniz then asserts that different principles and geometry cannot simply be from the will of God, but must follow from his understanding.
Leibniz wrote: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself." Martin Heidegger called this question "the fundamental question of metaphysics".
Symbolic thought and rational resolution of disputes
Leibniz believed that much of human reasoning could be reduced to calculations of a sort, and that such calculations could resolve many differences of opinion.
Leibniz's calculus ratiocinator, which resembles symbolic logic, can be viewed as a way of making such calculations feasible. Leibniz wrote memoranda that can now be read as groping attempts to get symbolic logic—and thus his calculus—off the ground. These writings remained unpublished until the appearance of a selection edited by Carl Immanuel Gerhardt (1859). Louis Couturat published a selection in 1901; by this time the main developments of modern logic had been created by Charles Sanders Peirce and by Gottlob Frege.
Leibniz thought symbols were important for human understanding. He attached so much importance to the development of good notations that he attributed all his discoveries in mathematics to this. His notation for calculus is an example of his skill in this regard. Leibniz's passion for symbols and notation, as well as his belief that these are essential to a well-running logic and mathematics, made him a precursor of semiotics.
But Leibniz took his speculations much further. Defining a character as any written sign, he then defined a "real" character as one that represents an idea directly and not simply as the word embodying the idea. Some real characters, such as the notation of logic, serve only to facilitate reasoning. Many characters well known in his day, including Egyptian hieroglyphics, Chinese characters, and the symbols of astronomy and chemistry, he deemed not real. Instead, he proposed the creation of a characteristica universalis or "universal characteristic", built on an alphabet of human thought in which each fundamental concept would be represented by a unique "real" character.
Complex thoughts would be represented by combining characters for simpler thoughts. Leibniz saw that the uniqueness of prime factorization suggests a central role for prime numbers in the universal characteristic, a striking anticipation of Gödel numbering. Granted, there is no intuitive or mnemonic way to number any set of elementary concepts using the prime numbers.
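The prime-number idea can be sketched directly. In the toy encoding below, an added illustration whose primitive concepts and prime assignments are entirely invented, a composite thought is the product of its parts' primes, and unique factorization guarantees it can be decomposed unambiguously:

```python
# Assign each primitive concept a distinct prime (assignments invented).
CONCEPTS = {"animal": 2, "rational": 3, "mortal": 5, "winged": 7}

def encode(*parts: str) -> int:
    """A composite thought is the product of its components' primes."""
    number = 1
    for part in parts:
        number *= CONCEPTS[part]
    return number

def decode(number: int) -> list[str]:
    """Unique prime factorization recovers the components."""
    return [name for name, prime in CONCEPTS.items() if number % prime == 0]

human = encode("animal", "rational", "mortal")   # 2 * 3 * 5 = 30
print(human, "->", decode(human))                # 30 -> ['animal', 'rational', 'mortal']
```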
Because Leibniz was a mathematical novice when he first wrote about the characteristic, at first he did not conceive it as an algebra but rather as a universal language or script. Only in 1676 did he conceive of a kind of "algebra of thought", modeled on and including conventional algebra and its notation. The resulting characteristic included a logical calculus, some combinatorics, algebra, his analysis situs (geometry of situation), a universal concept language, and more. What Leibniz actually intended by his characteristica universalis and calculus ratiocinator, and the extent to which modern formal logic does justice to the latter, may never be established. Leibniz's idea of reasoning through a universal language of symbols and calculations remarkably foreshadows great 20th-century developments in formal systems, such as Turing completeness, where computation was used to define equivalent universal languages (see Turing degree).
Formal logic
Leibniz has been noted as one of the most important logicians between the times of Aristotle and Gottlob Frege. Leibniz enunciated the principal properties of what we now call conjunction, disjunction, negation, identity, set inclusion, and the empty set. The principles of Leibniz's logic and, arguably, of his whole philosophy, reduce to two:
All our ideas are compounded from a very small number of simple ideas, which form the alphabet of human thought.
Complex ideas proceed from these simple ideas by a uniform and symmetrical combination, analogous to arithmetical multiplication.
The formal logic that emerged early in the 20th century also requires, at minimum, unary negation and quantified variables ranging over some universe of discourse.
Leibniz published nothing on formal logic in his lifetime; most of what he wrote on the subject consists of working drafts. In his History of Western Philosophy, Bertrand Russell went so far as to claim that Leibniz had developed logic in his unpublished writings to a level which was reached only 200 years later.
Russell's principal work on Leibniz found that many of Leibniz's most startling philosophical ideas and claims (e.g., that each of the fundamental monads mirrors the whole universe) follow logically from Leibniz's conscious choice to reject relations between things as unreal. He regarded such relations as (real) qualities of things (Leibniz admitted unary predicates only): For him, "Mary is the mother of John" describes separate qualities of Mary and of John. This view contrasts with the relational logic of De Morgan, Peirce, Schröder and Russell himself, now standard in predicate logic. Notably, Leibniz also declared space and time to be inherently relational.
Leibniz's 1690 discovery of his algebra of concepts (deductively equivalent to the Boolean algebra) and the associated metaphysics, are of interest in present-day computational metaphysics.
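One way to model an algebra of concepts along these lines is sketched below; the intensional reading (a concept as a set of attributes) and the example attributes are illustrative assumptions, not Leibniz's own notation. Conjoining concepts unites their attributes, and "every A is B" holds when A's attributes include B's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    attributes: frozenset

    def conj(self, other):
        """Conjunction of concepts: the union of their attributes."""
        return Concept(self.attributes | other.attributes)

    def contains(self, other):
        """'Every self is other': other's attributes are among self's."""
        return other.attributes <= self.attributes

animal = Concept(frozenset({"living", "sentient"}))
human = animal.conj(Concept(frozenset({"rational"})))
print(human.contains(animal))   # True: every human is an animal
print(animal.contains(human))   # False
```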
Mathematics
Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular (see History of the function concept). In the 18th century, "function" lost these geometrical associations. Leibniz was also one of the pioneers in actuarial science, calculating the purchase price of life annuities and the liquidation of a state's debt.
Leibniz's research into formal logic, also relevant to mathematics, is discussed in the preceding section. The best overview of Leibniz's writings on calculus may be found in Bos (1974).
Leibniz, who invented one of the earliest mechanical calculators, said of calculation: "For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used."
Linear systems
Leibniz arranged the coefficients of a system of linear equations into an array, now called a matrix, in order to find a solution to the system if it existed. This method was later called Gaussian elimination. Leibniz laid down the foundations and theory of determinants, although the Japanese mathematician Seki Takakazu also discovered determinants independently of Leibniz. His works show him calculating determinants using cofactors, and calculating the determinant by cofactor expansion is named the Leibniz formula. Finding the determinant of a matrix by this method is impractical for large n, since it requires calculating n! products, one for each of the n! permutations. He also solved systems of linear equations using determinants, a method now called Cramer's rule: Leibniz found it in 1684, and Gabriel Cramer published his own findings in 1750. Although Gaussian elimination requires only on the order of n³ arithmetic operations, linear algebra textbooks still teach cofactor expansion before LU factorization.
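As a minimal sketch of why cofactor expansion is impractical for large n, here is a recursive implementation expanding along the first row; it evaluates n! signed products, whereas Gaussian elimination needs only on the order of n³ operations. The test matrices are arbitrary examples.

```python
def det(m):
    """Determinant by cofactor expansion along the first row (O(n!) time)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))   # 0
```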
Geometry
The Leibniz formula for π states that π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯
Leibniz wrote that circles "can most simply be expressed by this series, that is, the aggregate of fractions alternately added and subtracted". However, the formula converges slowly: about 10,000,000 terms are needed to obtain the correct value of π/4 to 8 decimal places. Leibniz attempted to create a definition for a straight line while attempting to prove the parallel postulate. While most mathematicians defined a straight line as the shortest line between two points, Leibniz believed that this was merely a property of a straight line rather than the definition.
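A quick numerical check of that slow convergence (a minimal sketch; the term counts are chosen arbitrarily):

```python
import math

def leibniz_pi(n_terms):
    """Partial sum of 4 * (1 - 1/3 + 1/5 - ...), which approaches pi."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 1_000, 100_000):
    approx = leibniz_pi(n)
    print(n, approx, abs(approx - math.pi))  # error shrinks roughly like 1/n
```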
Calculus
Leibniz is credited, along with Isaac Newton, with the discovery of calculus (differential and integral calculus). According to Leibniz's notebooks, a critical breakthrough occurred on 11 November 1675, when he employed integral calculus for the first time to find the area under the graph of a function y = f(x). He introduced several notations used to this day, for instance the integral sign ∫, representing an elongated S, from the Latin word summa, and the d used for differentials, from the Latin word differentia. Leibniz did not publish anything about his calculus until 1684. Leibniz expressed the inverse relation of integration and differentiation, later called the fundamental theorem of calculus, by means of a figure in his 1693 paper Supplementum geometriae dimensoriae.... However, James Gregory is credited for the theorem's discovery in geometric form, Isaac Barrow proved a more generalized geometric version, and Newton developed supporting theory. The concept became more transparent as developed through Leibniz's formalism and new notation. The product rule of differential calculus is still called "Leibniz's law". In addition, the theorem that tells how and when to differentiate under the integral sign is called the Leibniz integral rule.
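In modern notation, the two rules just named read as follows (a standard textbook formulation, not Leibniz's original wording):

```latex
% Product rule ("Leibniz's law"):
\frac{d}{dx}\bigl(u(x)\,v(x)\bigr) = u(x)\,\frac{dv}{dx} + v(x)\,\frac{du}{dx}

% Leibniz integral rule (differentiation under the integral sign):
\frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t)\,dt
  = f\bigl(x,b(x)\bigr)\,b'(x) - f\bigl(x,a(x)\bigr)\,a'(x)
  + \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}(x,t)\,dt
```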
Leibniz exploited infinitesimals in developing calculus, manipulating them in ways suggesting that they had paradoxical algebraic properties. George Berkeley, in a tract called The Analyst and also in De Motu, criticized these. A recent study argues that Leibnizian calculus was free of contradictions, and was better grounded than Berkeley's empiricist criticisms.
From 1711 until his death, Leibniz was engaged in a dispute with John Keill, Newton and others, over whether Leibniz had invented calculus independently of Newton.
The use of infinitesimals in mathematics was frowned upon by followers of Karl Weierstrass, but survived in science and engineering, and even in rigorous mathematics, via the fundamental computational device known as the differential. Beginning in 1960, Abraham Robinson worked out a rigorous foundation for Leibniz's infinitesimals, using model theory, in the context of a field of hyperreal numbers. The resulting non-standard analysis can be seen as a belated vindication of Leibniz's mathematical reasoning. Robinson's transfer principle is a mathematical implementation of Leibniz's heuristic law of continuity, while the standard part function implements the Leibnizian transcendental law of homogeneity.
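As a one-line illustration of how the standard part function recovers Leibniz's practice, the derivative in non-standard analysis is the standard part of a quotient of infinitesimals (a standard formulation from Robinson's framework):

```latex
f'(x) = \operatorname{st}\!\left(\frac{f(x+dx)-f(x)}{dx}\right),
\qquad dx \neq 0 \ \text{an infinitesimal}
```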
Topology
Leibniz was the first to use the term analysis situs, later used in the 19th century to refer to what is now known as topology. There are two takes on this situation. On the one hand, Mates, citing a 1954 paper in German by Jacob Freudenthal, argues:
But Hideaki Hirano argues differently, quoting Mandelbrot:
Thus the fractal geometry promoted by Mandelbrot drew on Leibniz's notions of self-similarity and the principle of continuity: Natura non facit saltus. We also see that when Leibniz wrote, in a metaphysical vein, that "the straight line is a curve, any part of which is similar to the whole", he was anticipating topology by more than two centuries. As for "packing", Leibniz told his friend and correspondent Des Bosses to imagine a circle, then to inscribe within it three congruent circles with maximum radius; the latter smaller circles could be filled with three even smaller circles by the same procedure. This process can be continued infinitely, from which arises a good idea of self-similarity. Leibniz's improvement of Euclid's axiom contains the same concept.
Science and engineering
Leibniz's writings are currently discussed, not only for their anticipations and possible discoveries not yet recognized, but as ways of advancing present knowledge. Much of his writing on physics is included in Gerhardt's Mathematical Writings.
Physics
Leibniz contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695.
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense. For instance, he anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions."
Leibniz held a relational notion of space and time, against Newton's substantivalist views. According to Newton's substantivalism, space and time are entities in their own right, existing independently of things. Leibniz's relationalism, in contrast, describes space and time as systems of relations that exist between objects. The rise of general relativity and subsequent work in the history of physics has put Leibniz's stance in a more favorable light.
One of Leibniz's projects was to recast Newton's theory as a vortex theory. However, his project went beyond vortex theory, since at its heart there was an attempt to explain one of the most difficult problems in physics, that of the origin of the cohesion of matter.
The principle of sufficient reason has been invoked in recent cosmology, and his identity of indiscernibles in quantum mechanics, a field some even credit him with having anticipated in some sense. In addition to his theories about the nature of reality, Leibniz's contributions to the development of calculus have also had a major impact on physics.
The vis viva
Leibniz's vis viva (Latin for "living force") is mv², twice the modern kinetic energy, ½mv². He realized that the total energy would be conserved in certain mechanical systems, so he considered it an innate motive characteristic of matter. Here too his thinking gave rise to another regrettable nationalistic dispute. His vis viva was seen as rivaling the conservation of momentum championed by Newton in England and by Descartes and Voltaire in France; hence academics in those countries tended to neglect Leibniz's idea. Leibniz knew of the validity of conservation of momentum. In reality, both energy and momentum are conserved (in closed systems), so both approaches are valid.
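A minimal numerical sketch of that last point, using the textbook formulas for a one-dimensional elastic collision (the masses and velocities below are invented): both the total momentum Σmv and the total vis viva Σmv² come out unchanged.

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)
print(m1 * v1 + m2 * v2, "==", m1 * u1 + m2 * u2)              # momentum
print(m1 * v1**2 + m2 * v2**2, "==", m1 * u1**2 + m2 * u2**2)  # vis viva
```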
Other natural science
By proposing that the earth has a molten core, he anticipated modern geology. In embryology, he was a preformationist, but also proposed that organisms are the outcome of a combination of an infinite number of possible microstructures and of their powers. In the life sciences and paleontology, he revealed an amazing transformist intuition, fueled by his study of comparative anatomy and fossils. One of his principal works on this subject, Protogaea, unpublished in his lifetime, has recently been published in English for the first time. He worked out a primal organismic theory. In medicine, he exhorted the physicians of his time—with some results—to ground their theories in detailed comparative observations and verified experiments, and to distinguish firmly scientific and metaphysical points of view.
Psychology
Psychology was a central interest of Leibniz, and he appears to be an "underappreciated pioneer of psychology". He wrote on topics which are now regarded as fields of psychology: attention and consciousness, memory, learning (association), motivation (the act of "striving"), emergent individuality, and the general dynamics of development (evolutionary psychology). His discussions in the New Essays and Monadology often rely on everyday observations such as the behaviour of a dog or the noise of the sea, and he develops intuitive analogies (the synchronous running of clocks or the balance spring of a clock). He also devised postulates and principles that apply to psychology: the continuum of the unnoticed petites perceptions to the distinct, self-aware apperception, and psychophysical parallelism from the point of view of causality and of purpose: "Souls act according to the laws of final causes, through aspirations, ends and means. Bodies act according to the laws of efficient causes, i.e. the laws of motion. And these two realms, that of efficient causes and that of final causes, harmonize with one another." This idea refers to the mind-body problem, stating that the mind and brain do not act upon each other, but act alongside each other separately but in harmony. Leibniz, however, did not use the term psychologia.
Leibniz's epistemological position—against John Locke and English empiricism (sensualism)—was made clear: "Nihil est in intellectu quod non fuerit in sensu, nisi intellectu ipse." – "Nothing is in the intellect that was not first in the senses, except the intellect itself." Principles that are not present in sensory impressions can be recognised in human perception and consciousness: logical inferences, categories of thought, the principle of causality and the principle of purpose (teleology).
Leibniz found his most important interpreter in Wilhelm Wundt, founder of psychology as a discipline. Wundt used the "… nisi intellectu ipse" quotation in 1862 on the title page of his Beiträge zur Theorie der Sinneswahrnehmung (Contributions on the Theory of Sensory Perception) and published a detailed and ambitious monograph on Leibniz. Wundt shaped the term apperception, introduced by Leibniz, into an experimental psychologically based apperception psychology that included neuropsychological modelling – an excellent example of how a concept created by a great philosopher could stimulate a psychological research program. One principle in the thinking of Leibniz played a fundamental role: "the principle of equality of separate but corresponding viewpoints." Wundt characterized this style of thought (perspectivism) in a way that also applied for him—viewpoints that "supplement one another, while also being able to appear as opposites that only resolve themselves when considered more deeply."
Much of Leibniz's work went on to have a great impact on the field of psychology. Leibniz thought that there are many petites perceptions, or small perceptions of which we perceive but of which we are unaware. He believed that by the principle that phenomena found in nature were continuous by default, it was likely that the transition between conscious and unconscious states had intermediary steps. For this to be true, there must also be a portion of the mind of which we are unaware at any given time. His theory regarding consciousness in relation to the principle of continuity can be seen as an early theory regarding the stages of sleep. In this way, Leibniz's theory of perception can be viewed as one of many theories leading up to the idea of the unconscious. Leibniz was a direct influence on Ernst Platner, who is credited with originally coining the term Unbewußtseyn (unconscious). Additionally, the idea of subliminal stimuli can be traced back to his theory of small perceptions. Leibniz's ideas regarding music and tonal perception went on to influence the laboratory studies of Wilhelm Wundt.
Social science
In public health, he advocated establishing a medical administrative authority, with powers over epidemiology and veterinary medicine. He worked to set up a coherent medical training program, oriented towards public health and preventive measures. In economic policy, he proposed tax reforms and a national insurance program, and discussed the balance of trade. He even proposed something akin to what much later emerged as game theory. In sociology he laid the ground for communication theory.
Technology
In 1906, Garland published a volume of Leibniz's writings bearing on his many practical inventions and engineering work. To date, few of these writings have been translated into English. Nevertheless, it is well understood that Leibniz was a serious inventor, engineer, and applied scientist, with great respect for practical life. Following the motto theoria cum praxi, he urged that theory be combined with practical application, and thus has been claimed as the father of applied science. He designed wind-driven propellers and water pumps, mining machines to extract ore, hydraulic presses, lamps, submarines, clocks, etc. With Denis Papin, he created a steam engine. He even proposed a method for desalinating water. From 1680 to 1685, he struggled to overcome the chronic flooding that afflicted the ducal silver mines in the Harz Mountains, but did not succeed.
Computation
Leibniz may have been the first computer scientist and information theorist. Early in life, he documented the binary numeral system (base 2), then revisited that system throughout his career. While examining other cultures to compare his metaphysical views, he encountered the ancient Chinese book I Ching and interpreted its diagram of yin and yang as corresponding to zero and one (more information can be found in the Sinophology section). Leibniz was familiar with the work of Juan Caramuel y Lobkowitz and Thomas Harriot, who developed the binary system independently: Caramuel worked extensively on logarithms, including logarithms with base 2, while Harriot's manuscripts contained a table of binary numbers and their notation, demonstrating that any number could be written in a base 2 system. Regardless, Leibniz simplified the binary system and articulated logical properties such as conjunction, disjunction, negation, identity, inclusion, and the empty set. He anticipated Lagrangian interpolation and algorithmic information theory. His calculus ratiocinator anticipated aspects of the universal Turing machine. In 1961, Norbert Wiener suggested that Leibniz should be considered the patron saint of cybernetics, writing: "Indeed, the general idea of a computing machine is nothing but a mechanization of Leibniz's Calculus Ratiocinator."
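The correspondence Leibniz noticed is easy to state in code: reading a hexagram's six lines as bits (here, broken line = 0 and solid line = 1, an encoding chosen purely for illustration) maps the 64 hexagrams onto the binary numbers 000000 through 111111, i.e. 0 to 63.

```python
def hexagram_to_int(lines):
    """lines: six characters, '0' for a broken line, '1' for a solid line."""
    return int(lines, 2)

print(hexagram_to_int("000000"))  # 0: all broken lines
print(hexagram_to_int("111111"))  # 63: all solid lines
print(f"{42:06b}")                # '101010': from integer back to line pattern
```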
In 1671, Leibniz began to invent a machine that could execute all four arithmetic operations, gradually improving it over a number of years. This "stepped reckoner" attracted fair attention and was the basis of his election to the Royal Society in 1673. A number of such machines were made during his years in Hanover by a craftsman working under his supervision. They were not an unambiguous success because they did not fully mechanize the carry operation. Couturat reported finding an unpublished note by Leibniz, dated 1674, describing a machine capable of performing some algebraic operations. Leibniz also devised a (now reproduced) cipher machine, recovered by Nicholas Rescher in 2010. In 1693, Leibniz described a design of a machine which could, in theory, integrate differential equations, which he called "integraph".
Leibniz was groping towards hardware and software concepts worked out much later by Charles Babbage and Ada Lovelace. In 1679, while mulling over his binary arithmetic, Leibniz imagined a machine in which binary numbers were represented by marbles, governed by a rudimentary sort of punched cards. Modern electronic digital computers replace Leibniz's marbles moving by gravity with shift registers, voltage gradients, and pulses of electrons, but otherwise they run roughly as Leibniz envisioned in 1679.
Librarian
Later in Leibniz's career (after the death of von Boyneburg), Leibniz moved to Paris and accepted a position as a librarian in the Hanoverian court of Johann Friedrich, Duke of Brunswick-Lüneburg. Leibniz's predecessor, Tobias Fleischer, had already created a cataloging system for the Duke's library, but it was a clumsy attempt. At this library, Leibniz focused more on advancing the library than on the cataloging. For instance, within a month of taking the new position, he developed a comprehensive plan to expand the library. He was one of the first to consider developing a core collection for a library and felt "that a library for display and ostentation is a luxury and indeed superfluous, but a well-stocked and organized library is important and useful for all areas of human endeavor and is to be regarded on the same level as schools and churches". Leibniz, however, lacked the funds to develop the library in this manner. After working at this library, by the end of 1690 Leibniz was appointed privy-councilor and librarian of the Bibliotheca Augusta at Wolfenbüttel, an extensive library with at least 25,946 printed volumes. There, Leibniz sought to improve the catalog. He was not allowed to make complete changes to the existing closed catalog, but was allowed to improve upon it, so he started on that task immediately. He created an alphabetical author catalog and devised other cataloging methods that were not implemented. While serving as librarian of the ducal libraries in Hanover and Wolfenbüttel, Leibniz effectively became one of the founders of library science. He paid a good deal of attention to the classification of subject matter, favoring a well-balanced library covering a wide range of subjects and interests. Leibniz, for example, proposed the following classification system in the Otivm Hanoveranvm Sive Miscellanea (1737):
Theology
Jurisprudence
Medicine
Intellectual Philosophy
Philosophy of the Imagination or Mathematics
Philosophy of Sensible Things or Physics
Philology or Language
Civil History
Literary History and Libraries
General and Miscellaneous
He also designed a book indexing system in ignorance of the only other such system then extant, that of the Bodleian Library at Oxford University. He also called on publishers to distribute abstracts of all new titles they produced each year, in a standard form that would facilitate indexing. He hoped that this abstracting project would eventually include everything printed from his day back to Gutenberg. Neither proposal met with success at the time, but something like them became standard practice among English language publishers during the 20th century, under the aegis of the Library of Congress and the British Library.
He called for the creation of an empirical database as a way to further all sciences. His characteristica universalis, calculus ratiocinator, and a "community of minds"—intended, among other things, to bring political and religious unity to Europe—can be seen as distant unwitting anticipations of artificial languages (e.g., Esperanto and its rivals), symbolic logic, even the World Wide Web.
Advocate of scientific societies
Leibniz emphasized that research was a collaborative endeavor. Hence he warmly advocated the formation of national scientific societies along the lines of the British Royal Society and the French Académie Royale des Sciences. More specifically, in his correspondence and travels he urged the creation of such societies in Dresden, Saint Petersburg, Vienna, and Berlin. Only one such project came to fruition; in 1700, the Berlin Academy of Sciences was created. Leibniz drew up its first statutes, and served as its first President for the remainder of his life. That Academy evolved into the German Academy of Sciences, the publisher of the ongoing critical edition of his works.
Law and Morality
Leibniz's writings on law, ethics, and politics were long overlooked by English-speaking scholars, but this has changed of late.
While Leibniz was no apologist for absolute monarchy like Hobbes, or for tyranny in any form, neither did he echo the political and constitutional views of his contemporary John Locke, views invoked in support of liberalism, in 18th-century America and later elsewhere. The following excerpt from a 1695 letter to Baron J. C. Boyneburg's son Philipp is very revealing of Leibniz's political sentiments:
In 1677, Leibniz called for a European confederation, governed by a council or senate, whose members would represent entire nations and would be free to vote their consciences; this is sometimes considered an anticipation of the European Union. He believed that Europe would adopt a uniform religion. He reiterated these proposals in 1715.
At the same time, he came to propose an interreligious and multicultural project to create a universal system of justice, which required of him a broad interdisciplinary perspective. In order to propose it, he combined linguistics (especially sinology), moral and legal philosophy, management, economics, and politics.
Law
Leibniz trained as a legal academic, but under the tutelage of Cartesian-sympathiser Erhard Weigel we already see an attempt to solve legal problems by rationalist mathematical methods (Weigel's influence being most explicit in the Specimen Quaestionum Philosophicarum ex Jure collectarum (An Essay of Collected Philosophical Problems of Right)). For example, the Inaugural Disputation on Perplexing Cases uses early combinatorics to solve some legal disputes, while the 1666 Dissertation on the Combinatorial Art includes simple legal problems by way of illustration.
The use of combinatorial methods to solve legal and moral problems seems, via Athanasius Kircher and Daniel Schwenter to be of Llullist inspiration: Ramón Llull attempted to solve ecumenical disputes through recourse to a combinatorial mode of reasoning he regarded as universal (a mathesis universalis).
In the late 1660s the enlightened Prince-Bishop of Mainz Johann Philipp von Schönborn announced a review of the legal system and made available a position to support his current law commissioner. Leibniz left Franconia and made for Mainz before even winning the role. On reaching Frankfurt am Main Leibniz penned The New Method of Teaching and Learning the Law, by way of application. The text proposed a reform of legal education and is characteristically syncretic, integrating aspects of Thomism, Hobbesianism, Cartesianism and traditional jurisprudence. Leibniz's argument that the function of legal teaching was not to impress rules as one might train a dog, but to aid the student in discovering their own public reason, evidently impressed von Schönborn as he secured the job.
Leibniz's next major attempt to find a universal rational core to law, and so found a legal "science of right", came while he worked in Mainz from 1667 to 1672. Starting initially from Hobbes' mechanistic doctrine of power, Leibniz reverted to logico-combinatorial methods in an attempt to define justice. As Leibniz's so-called Elementa Juris Naturalis advanced, he built in modal notions of right (possibility) and obligation (necessity), in which we see perhaps the earliest elaboration of his possible worlds doctrine within a deontic frame. While ultimately the Elementa remained unpublished, Leibniz continued to work on his drafts and promote their ideas to correspondents up until his death.
Ecumenism
Leibniz devoted considerable intellectual and diplomatic effort to what would now be called an ecumenical endeavor, seeking to reconcile the Roman Catholic and Lutheran churches. In this respect, he followed the example of his early patrons, Baron von Boyneburg and the Duke John Frederick (both cradle Lutherans who converted to Catholicism as adults), who did what they could to encourage the reunion of the two faiths, and who warmly welcomed such endeavors by others. (The House of Brunswick remained Lutheran, because the Duke's children did not follow their father.) These efforts included corresponding with French bishop Jacques-Bénigne Bossuet, and involved Leibniz in some theological controversy. He evidently thought that the thoroughgoing application of reason would suffice to heal the breach caused by the Reformation.
Philology
Leibniz the philologist was an avid student of languages, eagerly latching on to any information about vocabulary and grammar that came his way. In 1710, he applied ideas of gradualism and uniformitarianism to linguistics in a short essay. He refuted the belief, widely held by Christian scholars of the time, that Hebrew was the primeval language of the human race. At the same time, he rejected the idea of unrelated language groups and considered them all to have a common source. He also refuted the argument, advanced by Swedish scholars in his day, that a form of proto-Swedish was the ancestor of the Germanic languages. He puzzled over the origins of the Slavic languages and was fascinated by classical Chinese. Leibniz was also an expert in the Sanskrit language.
He published the princeps editio (first modern edition) of the late medieval Chronicon Holtzatiae, a Latin chronicle of the County of Holstein.
Sinophology
Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. He apparently read Confucius Sinarum Philosophus in the first year of its publication. He came to the conclusion that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China, hoping it would convert him. Leibniz was one of the western philosophers of the time who attempted to accommodate Confucian ideas to prevailing European beliefs.
Leibniz's attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz's ideas of "simple substance" and "pre-established harmony" were directly influenced by Confucianism, pointing to the fact that they were conceived during the period when he was reading Confucius Sinarum Philosophus.
Polymath
While making his grand tour of European archives to research the Brunswick family history that he never completed, Leibniz stopped in Vienna between May 1688 and February 1689, where he did much legal and diplomatic work for the Brunswicks. He visited mines, talked with mine engineers, and tried to negotiate export contracts for lead from the ducal mines in the Harz mountains. His proposal that the streets of Vienna be lit with lamps burning rapeseed oil was implemented. During a formal audience with the Austrian Emperor and in subsequent memoranda, he advocated reorganizing the Austrian economy, reforming the coinage of much of central Europe, negotiating a Concordat between the Habsburgs and the Vatican, and creating an imperial research library, official archive, and public insurance fund. He wrote and published an important paper on mechanics.
Posthumous reputation
When Leibniz died, his reputation was in decline. He was remembered for only one book, the Théodicée, whose supposed central argument Voltaire lampooned in his popular book Candide, which concludes with the character Candide saying, "Non liquet" (it is not clear), a term that was applied during the Roman Republic to a legal verdict of "not proven". Voltaire's depiction of Leibniz's ideas was so influential that many believed it to be an accurate description. Thus Voltaire and his Candide bear some of the blame for the lingering failure to appreciate and understand Leibniz's ideas. Leibniz had an ardent disciple, Christian Wolff, whose dogmatic and facile outlook did Leibniz's reputation much harm. Leibniz also influenced David Hume, who read his Théodicée and used some of his ideas. In any event, philosophical fashion was moving away from the rationalism and system building of the 17th century, of which Leibniz had been such an ardent proponent. His work on law, diplomacy, and history was seen as of ephemeral interest. The vastness and richness of his correspondence went unrecognized.
Leibniz's reputation began to recover with the 1765 publication of the Nouveaux Essais. In 1768, Louis Dutens edited the first multi-volume edition of Leibniz's writings, followed in the 19th century by a number of editions, including those edited by Erdmann, Foucher de Careil, Gerhardt, Gerland, Klopp, and Mollat. Publication of Leibniz's correspondence with notables such as Antoine Arnauld, Samuel Clarke, Sophia of Hanover, and her daughter Sophia Charlotte of Hanover, began.
In 1900, Bertrand Russell published a critical study of Leibniz's metaphysics. Shortly thereafter, Louis Couturat published an important study of Leibniz, and edited a volume of Leibniz's heretofore unpublished writings, mainly on logic. They made Leibniz somewhat respectable among 20th-century analytical and linguistic philosophers in the English-speaking world (Leibniz had already been of great influence to many Germans such as Bernhard Riemann). For example, Leibniz's phrase salva veritate, meaning interchangeability without loss of or compromising the truth, recurs in Willard Quine's writings. Nevertheless, the secondary literature on Leibniz did not really blossom until after World War II. This is especially true of English speaking countries; in Gregory Brown's bibliography fewer than 30 of the English language entries were published before 1946. American Leibniz studies owe much to Leroy Loemker (1900–1985) through his translations and his interpretive essays in LeClerc (1973). Leibniz's philosophy was also highly regarded by Gilles Deleuze, who in 1988 published The Fold: Leibniz and the Baroque.
Nicholas Jolley has surmised that Leibniz's reputation as a philosopher is now perhaps higher than at any time since he was alive. Analytic and contemporary philosophy continue to invoke his notions of identity, individuation, and possible worlds. Work in the history of 17th- and 18th-century ideas has revealed more clearly the 17th-century "Intellectual Revolution" that preceded the better-known Industrial and commercial revolutions of the 18th and 19th centuries.
In Germany, various important institutions were named after Leibniz. In Hanover in particular, he is the namesake for some of the most important institutions in the town:
Leibniz University Hannover
Leibniz-Akademie, Institution for academic and non-academic training and further education in the business sector
Gottfried Wilhelm Leibniz Bibliothek – Niedersächsische Landesbibliothek, one of the largest regional and academic libraries in Germany and, alongside the Oldenburg State Library and the Herzog August Library in Wolfenbüttel, one of the three state libraries in Lower Saxony
Gottfried-Wilhelm-Leibniz-Gesellschaft, Society for the cultivation and dissemination of Leibniz's teachings
outside of Hanover:
Leibniz Association, Berlin
Leibniz-Sozietät der Wissenschaften zu Berlin, Association of scientists founded in Berlin in 1993 with the legal form of a registered association; it continues the activities of the Academy of Sciences of the GDR with personnel continuity
Leibniz Kolleg of Tübingen University, central propaedeutic institution of the university, which aims to enable high school graduates to make a well-founded study decision through a ten-month, comprehensive general course of study and at the same time to introduce them to academic work
Leibniz Supercomputing Centre, Munich
more than 20 schools all over Germany
Awards:
Leibniz-Ring-Hannover, Honor given since 1997 by the Hannover Press Club to personalities or institutions "who have drawn attention to themselves through an outstanding performance or have made a special mark through their life’s work."
Leibniz-Medaille of the Berlin-Brandenburg Academy of Sciences and Humanities, established in 1906 and awarded previously by the Prussian Academy of Sciences and later the German Academy of Sciences at Berlin
Gottfried-Wilhelm-Leibniz-Medaille of the Leibniz-Sozietät
Leibniz-Medaille der Akademie der Wissenschaften und der Literatur Mainz
In 1985, the German government created the Leibniz Prize, offering an annual award of 1.55 million euros for experimental results and 770,000 euros for theoretical ones. It was the world's largest prize for scientific achievement prior to the Fundamental Physics Prize.
The collection of manuscript papers of Leibniz at the Gottfried Wilhelm Leibniz Bibliothek – Niedersächsische Landesbibliothek was inscribed on UNESCO's Memory of the World Register in 2007.
Cultural references
Leibniz still receives popular attention. The Google Doodle for 1 July 2018 celebrated Leibniz's 372nd birthday. Using a quill, his hand is shown writing "Google" in binary ASCII code.
One of the earliest popular but indirect expositions of Leibniz was Voltaire's satire Candide, published in 1759. Leibniz was lampooned as Professor Pangloss, described as "the greatest philosopher of the Holy Roman Empire".
Leibniz also appears as one of the main historical figures in Neal Stephenson's series of novels The Baroque Cycle. Stephenson credits readings and discussions concerning Leibniz for inspiring him to write the series.
Leibniz also stars in Adam Ehrlich Sachs's novel The Organs of Sense.
The German biscuit Choco Leibniz is named after Leibniz, a famous resident of Hanover where the manufacturer Bahlsen is based.
Writings and publication
Leibniz mainly wrote in three languages: scholastic Latin, French and German. During his lifetime, he published many pamphlets and scholarly articles, but only two "philosophical" books, the Combinatorial Art and the Théodicée. (He published numerous pamphlets, often anonymous, on behalf of the House of Brunswick-Lüneburg, most notably the "De jure suprematum", a major consideration of the nature of sovereignty.) One substantial book appeared posthumously, his Nouveaux essais sur l'entendement humain, which Leibniz had withheld from publication after the death of John Locke. Only in 1895, when Bodemann completed his catalogue of Leibniz's manuscripts and correspondence, did the enormous extent of Leibniz's Nachlass become clear: about 15,000 letters to more than 1000 recipients plus more than 40,000 other items. Moreover, quite a few of these letters are of essay length. Much of his vast correspondence, especially the letters dated after 1700, remains unpublished, and much of what is published has appeared only in recent decades. The more than 67,000 records of the Leibniz Edition's Catalogue cover almost all of his known writings and the letters from him and to him. The amount, variety, and disorder of Leibniz's writings are a predictable result of a situation he described in a letter as follows:
The extant parts of the critical edition of Leibniz's writings are organized as follows:
Series 1. Political, Historical, and General Correspondence. 25 vols., 1666–1706.
Series 2. Philosophical Correspondence. 3 vols., 1663–1700.
Series 3. Mathematical, Scientific, and Technical Correspondence. 8 vols., 1672–1698.
Series 4. Political Writings. 9 vols., 1667–1702.
Series 5. Historical and Linguistic Writings. In preparation.
Series 6. Philosophical Writings. 7 vols., 1663–90, and Nouveaux essais sur l'entendement humain.
Series 7. Mathematical Writings. 6 vols., 1672–76.
Series 8. Scientific, Medical, and Technical Writings. 1 vol., 1668–76.
The systematic cataloguing of all of Leibniz's Nachlass began in 1901. It was hampered by two world wars and then by decades of German division into two states, separating scholars and scattering portions of his literary estates. The ambitious project has had to deal with writings in seven languages, contained in some 200,000 written and printed pages. In 1985 it was reorganized and included in a joint program of German federal and state (Länder) academies. Since then the branches in Potsdam, Münster, Hanover and Berlin have jointly published 57 volumes of the critical edition, with an average of 870 pages, and prepared index and concordance works.
Selected works
The year given is usually that in which the work was completed, not of its eventual publication.
1666 (publ. 1690). De Arte Combinatoria (On the Art of Combination); partially translated in Loemker §1 and Parkinson (1966)
1667. Nova Methodus Discendae Docendaeque Iurisprudentiae (A New Method for Learning and Teaching Jurisprudence)
1667. "Dialogus de connexione inter res et verba"
1671. Hypothesis Physica Nova (New Physical Hypothesis); Loemker §8.I (part)
1673. Confessio philosophi (A Philosopher's Creed); an English translation is available online.
Oct. 1684. "Meditationes de cognitione, veritate et ideis" ("Meditations on Knowledge, Truth, and Ideas")
Nov. 1684. "Nova methodus pro maximis et minimis" ("New method for maximums and minimums"); translated in Struik, D. J., 1969. A Source Book in Mathematics, 1200–1800. Harvard University Press: 271–81.
1686. Discours de métaphysique; Martin and Brown (1988), Ariew and Garber 35, Loemker §35, Wiener III.3, Woolhouse and Francks 1
1686. Generales inquisitiones de analysi notionum et veritatum (General Inquiries About the Analysis of Concepts and of Truths)
1694. "De primae philosophiae Emendatione, et de Notione Substantiae" ("On the Correction of First Philosophy and the Notion of Substance")
1695. Système nouveau de la nature et de la communication des substances (New System of Nature)
1700. Accessiones historicae
1703. "Explication de l'Arithmétique Binaire" ("Explanation of Binary Arithmetic"); Carl Immanuel Gerhardt, Mathematical Writings VII.223. An English translation by Lloyd Strickland is available online.
1704 (publ. 1765). Nouveaux essais sur l'entendement humain. Translated in: Remnant, Peter, and Bennett, Jonathan, trans., 1996. New Essays on Human Understanding. Cambridge University Press (earlier translation by Langley, 1896). Wiener III.6 (part)
1707–1710. Scriptores rerum Brunsvicensium (3 Vols.)
1710. Théodicée; Farrer, A. M., and Huggard, E. M., trans., 1985 (1952). Wiener III.11 (part). An English translation is available online at Project Gutenberg.
1714. "Principes de la nature et de la Grâce fondés en raison"
1714. Monadologie; translated by Nicholas Rescher, 1991. The Monadology: An Edition for Students. University of Pittsburgh Press. Ariew and Garber 213, Loemker §67, Wiener III.13, Woolhouse and Francks 19. An English translation by Robert Latta is available online.
Posthumous works
1717. Collectanea Etymologica, edited by Leibniz's secretary Johann Georg von Eckhart
1749. Protogaea
1750. Origines Guelficae
Collections
Six important collections of English translations are Wiener (1951), Parkinson (1966), Loemker (1969), Ariew and Garber (1989), Woolhouse and Francks (1998), and Strickland (2006). The ongoing critical edition of all of Leibniz's writings is Sämtliche Schriften und Briefe.
See also
General Leibniz rule
Leibniz Association
Leibniz operator
List of German inventors and discoverers
List of pioneers in computer science
List of things named after Gottfried Leibniz
Mathesis universalis
Scientific Revolution
Leibniz University Hannover
Bartholomew Des Bosses
Joachim Bouvet
Outline of Gottfried Wilhelm Leibniz
Gottfried Wilhelm Leibniz bibliography
Notes
References
Citations
Sources
Bibliographies
Bodemann, Eduard, Die Leibniz-Handschriften der Königlichen öffentlichen Bibliothek zu Hannover, 1895, (anastatic reprint: Hildesheim, Georg Olms, 1966).
Bodemann, Eduard, Der Briefwechsel des Gottfried Wilhelm Leibniz in der Königlichen öffentlichen Bibliothek zu Hannover, 1889, (anastatic reprint: Hildesheim, Georg Olms, 1966).
Ravier, Émile, Bibliographie des œuvres de Leibniz, Paris: Alcan, 1937 (anastatic reprint Hildesheim: Georg Olms, 1966).
Heinekamp, Albert and Mertens, Marlen. Leibniz-Bibliographie. Die Literatur über Leibniz bis 1980, Frankfurt: Vittorio Klostermann, 1984.
Heinekamp, Albert and Mertens, Marlen. Leibniz-Bibliographie. Die Literatur über Leibniz. Band II: 1981–1990, Frankfurt: Vittorio Klostermann, 1996.
An updated bibliography of more than 25,000 titles is available at Leibniz Bibliographie.
Primary literature (chronologically)
Wiener, Philip, (ed.), 1951. Leibniz: Selections. Scribner.
Schrecker, Paul & Schrecker, Anne Martin, (eds.), 1965. Monadology and other Philosophical Essays. Prentice-Hall.
Parkinson, G. H. R. (ed.), 1966. Logical Papers. Clarendon Press.
Mason, H. T. & Parkinson, G. H. R. (eds.), 1967. The Leibniz-Arnauld Correspondence. Manchester University Press.
Loemker, Leroy, (ed.), 1969 [1956]. Leibniz: Philosophical Papers and Letters. Reidel.
Morris, Mary & Parkinson, G. H. R. (eds.), 1973. Philosophical Writings. Everyman's University Library.
Riley, Patrick, (ed.), 1988. Leibniz: Political Writings. Cambridge University Press.
Martin, R. Niall D. & Brown, Stuart (eds.), 1988. Discourse on Metaphysics and Related Writings. Manchester University Press.
Ariew, Roger and Garber, Daniel. (eds.), 1989. Leibniz: Philosophical Essays. Hackett.
Rescher, Nicholas (ed.), 1991. G. W. Leibniz's Monadology. An Edition for Students, University of Pittsburgh Press.
Rescher, Nicholas, On Leibniz, (Pittsburgh: University of Pittsburgh Press, 2013).
Parkinson, G. H. R. (ed.) 1992. De Summa Rerum. Metaphysical Papers, 1675–1676. Yale University Press.
Cook, Daniel, & Rosemont, Henry Jr., (eds.), 1994. Leibniz: Writings on China. Open Court.
Farrer, Austin (ed.), 1995. Theodicy, Open Court.
Remnant, Peter, & Bennett, Jonathan, (eds.), 1996 (1981). Leibniz: New Essays on Human Understanding. Cambridge University Press.
Woolhouse, R. S., and Francks, R., (eds.), 1997. Leibniz's 'New System' and Associated Contemporary Texts. Oxford University Press.
Woolhouse, R. S., and Francks, R., (eds.), 1998. Leibniz: Philosophical Texts. Oxford University Press.
Ariew, Roger, (ed.), 2000. G. W. Leibniz and Samuel Clarke: Correspondence. Hackett.
Richard T. W. Arthur, (ed.), 2001. The Labyrinth of the Continuum: Writings on the Continuum Problem, 1672–1686. Yale University Press.
Richard T. W. Arthur, 2014. Leibniz. John Wiley & Sons.
Robert C. Sleigh Jr., (ed.), 2005. Confessio Philosophi: Papers Concerning the Problem of Evil, 1671–1678. Yale University Press.
Dascal, Marcelo (ed.), 2006. G. W. Leibniz. The Art of Controversies, Springer.
Strickland, Lloyd, 2006 (ed.). The Shorter Leibniz Texts: A Collection of New Translations. Continuum.
Look, Brandon and Rutherford, Donald (eds.), 2007. The Leibniz-Des Bosses Correspondence, Yale University Press.
Cohen, Claudine and Wakefield, Andre, (eds.), 2008. Protogaea. University of Chicago Press.
Murray, Michael, (ed.) 2011. Dissertation on Predestination and Grace, Yale University Press.
Strickland, Lloyd (ed.), 2011. Leibniz and the two Sophies. The Philosophical Correspondence, Toronto.
Lodge, Paul (ed.), 2013. The Leibniz-De Volder Correspondence: With Selections from the Correspondence Between Leibniz and Johann Bernoulli, Yale University Press.
Artosi, Alberto, Pieri, Bernardo, Sartor, Giovanni (eds.), 2014. Leibniz: Logico-Philosophical Puzzles in the Law, Springer.
De Iuliis, Carmelo Massimo, (ed.), 2017. Leibniz: The New Method of Learning and Teaching Jurisprudence, Talbot, Clark NJ.
Secondary literature up to 1950
Du Bois-Reymond, Emil, 1912. Leibnizsche Gedanken in der neueren Naturwissenschaft, Berlin: Dummler, 1871 (reprinted in Reden, Leipzig: Veit, vol. 1).
Couturat, Louis, 1901. La Logique de Leibniz. Paris: Felix Alcan.
Heidegger, Martin, 1983. The Metaphysical Foundations of Logic. Indiana University Press (lecture course, 1928).
Lovejoy, Arthur O., 1957 (1936). "Plenitude and Sufficient Reason in Leibniz and Spinoza" in his The Great Chain of Being. Harvard University Press: 144–182. Reprinted in Frankfurt, H. G., (ed.), 1972. Leibniz: A Collection of Critical Essays. Anchor Books 1972.
Mackie, John Milton; Guhrauer, Gottschalk Eduard, 1845. Life of Godfrey William von Leibnitz. Gould, Kendall and Lincoln.
Russell, Bertrand, 1900, A Critical Exposition of the Philosophy of Leibniz, Cambridge: The University Press.
Trendelenburg, F. A., 1857, "Über Leibnizens Entwurf einer allgemeinen Charakteristik," Philosophische Abhandlungen der Königlichen Akademie der Wissenschaften zu Berlin. Aus dem Jahr 1856, Berlin: Commission Dümmler, pp. 36–69.
(lecture)
Secondary literature post-1950
Adams, Robert Merrihew. 1994. Leibniz: Determinist, Theist, Idealist. New York: Oxford, Oxford University Press.
Aiton, Eric J., 1985. Leibniz: A Biography. Hilger (UK).
Antognazza, Maria Rosa, 2008. Leibniz: An Intellectual Biography. Cambridge University Press.
Antognazza, Maria Rosa, 2016. Leibniz: A Very Short Introduction. Oxford University Press.
Antognazza, Maria Rosa, ed., 2018. Oxford Handbook of Leibniz. Oxford University Press.
Borowski, Audrey, 2024. Leibniz in His World: The Making of a Savant. Princeton University Press.
Brown, Stuart (ed.), 1999. The Young Leibniz and His Philosophy (1646–76), Dordrecht, Kluwer.
Connelly, Stephen, 2021. Leibniz: A Contribution to the Archaeology of Power, Edinburgh University Press .
Davis, Martin, 2000. The Universal Computer: The Road from Leibniz to Turing. WW Norton.
Deleuze, Gilles, 1993. The Fold: Leibniz and the Baroque. University of Minnesota Press.
Fahrenberg, Jochen, 2017. PsyDok ZPID The influence of Gottfried Wilhelm Leibniz on the Psychology, Philosophy, and Ethics of Wilhelm Wundt.
Fahrenberg, Jochen, 2020. Wilhelm Wundt (1832–1920). Introduction, Quotations, Reception, Commentaries, Attempts at Reconstruction. Pabst Science Publishers, Lengerich 2020, .
Finster, Reinhard & van den Heuvel, Gerd 2000. Gottfried Wilhelm Leibniz. Mit Selbstzeugnissen und Bilddokumenten. 4. Auflage. Rowohlt, Reinbek bei Hamburg (Rowohlts Monographien, 50481), .
Grattan-Guinness, Ivor, 1997. The Norton History of the Mathematical Sciences. W W Norton.
Hall, A. R., 1980. Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge University Press.
Hamza, Gabor, 2005. "Le développement du droit privé européen". ELTE Eotvos Kiado Budapest.
Hostler, John, 1975. Leibniz's Moral Philosophy. UK: Duckworth.
Ishiguro, Hidé 1990. Leibniz's Philosophy of Logic and Language. Cambridge University Press.
Jolley, Nicholas, (ed.), 1995. The Cambridge Companion to Leibniz. Cambridge University Press.
Kaldis, Byron, 2011. "Leibniz' Argument for Innate Ideas", in Bruce, Michael and Barbone, Steven, eds., Just the Arguments: 100 of the Most Important Arguments in Western Philosophy. Wiley-Blackwell.
Kempe, Michael, 2024. The Best of All Possible Worlds: A Life of Leibniz in Seven Pivotal Days. W. W. Norton.
Krömer, Ralf and Chin-Drian, Yannick (eds.), 2012. New Essays on Leibniz Reception: In Science and Philosophy of Science 1800–2000. Heidelberg: Birkhäuser.
LeClerc, Ivor (ed.), 1973. The Philosophy of Leibniz and the Modern World. Vanderbilt University Press.
Mates, Benson, 1986. The Philosophy of Leibniz: Metaphysics and Language. Oxford University Press.
Mercer, Christia, 2001. Leibniz's Metaphysics: Its Origins and Development. Cambridge University Press.
Perkins, Franklin, 2004. Leibniz and China: A Commerce of Light. Cambridge University Press.
Riley, Patrick, 1996. Leibniz's Universal Jurisprudence: Justice as the Charity of the Wise. Harvard University Press.
Rutherford, Donald, 1998. Leibniz and the Rational Order of Nature. Cambridge University Press.
Schulte-Albert, H. G. (1971). Gottfried Wilhelm Leibniz and Library Classification. The Journal of Library History (1966–1972), (2). 133–152.
Smith, Justin E. H., 2011. Divine Machines. Leibniz and the Sciences of Life, Princeton University Press.
Wilson, Catherine, 1989. Leibniz's Metaphysics: A Historical and Comparative Study. Princeton University Press.
External links
Horn, Joshua.
Jorati, Julia.
Translations by Jonathan Bennett, of the New Essays, the exchanges with Bayle, Arnauld and Clarke, and about 15 shorter works.
Gottfried Wilhelm Leibniz: Texts and Translations, compiled by Donald Rutherford, UCSD
Leibnitiana, links and resources edited by Gregory Brown, University of Houston
Philosophical Works of Leibniz translated by G.M. Duncan (1890)
The Best of All Possible Worlds: Nicholas Rescher Talks About Gottfried Wilhelm von Leibniz's "Versatility and Creativity"
"Protogæa" (1693, Latin, in Acta eruditorum) – Linda Hall Library
Protogaea (1749, German) – full digital facsimile from Linda Hall Library
Leibniz's (1768, 6-volume) Opera omnia – digital facsimile
Leibniz's arithmetical machine, 1710, online and analyzed on BibNum [click 'à télécharger' for English analysis]
Leibniz's binary numeral system, 'De progressione dyadica', 1679, online and analyzed on BibNum [click 'à télécharger' for English analysis]
1646 births
1716 deaths
17th-century German mathematicians
17th-century German philosophers
17th-century German scientists
17th-century German writers
17th-century German male writers
17th-century writers in Latin
17th-century German inventors
18th-century German mathematicians
18th-century German philosophers
18th-century German physicists
18th-century German scientists
18th-century German writers
18th-century German male writers
18th-century writers in Latin
18th-century German inventors
Constructed language creators
Determinists
Enlightenment philosophers
Fellows of the Royal Society
German librarians
German logicians
German Lutherans
German philologists
German political philosophers
German Protestants
German writers in French
Leipzig University alumni
German mathematical analysts
Mathematics of infinitesimals
Linear algebraists
Members of the Prussian Academy of Sciences
Panpsychism
People associated with Baruch Spinoza
People educated at the St. Thomas School, Leipzig
People from the Electorate of Saxony
People of the Age of Enlightenment
German philosophers of language
Philosophers of law
Philosophers of logic
German philosophers of mind
Philosophical theists
Philosophy writers
Rationalists
University of Altdorf alumni
Writers from Leipzig
Writers about religion and science
Critics of atheism
"Mathematics"
] | 19,664 | [
"Mathematics of infinitesimals"
] |
A grimoire (also known as a book of spells, magic book, or a spellbook) is a textbook of magic, typically including instructions on how to create magical objects like talismans and amulets, how to perform magical spells, charms, and divination, and how to summon or invoke supernatural entities such as angels, spirits, deities, and demons. In many cases, the books themselves are believed to be imbued with magical powers. The only contents found in a grimoire would be information on spells, rituals, the preparation of magical tools, and lists of ingredients and their magical correspondences. In this manner, while all books on magic could be thought of as grimoires, not all magical books should be thought of as grimoires.
While the term grimoire is originally European—and many Europeans throughout history, particularly ceremonial magicians and cunning folk, have used grimoires—the historian Owen Davies has noted that similar books can be found all around the world, ranging from Jamaica to Sumatra. He also noted that in this sense, the world's first grimoires were created in Europe and the ancient Near East.
Etymology
The etymology of grimoire is unclear. It is most commonly believed that the term grimoire originated from the Old French word grammaire 'grammar', which had initially been used to refer to all books written in Latin. By the 18th century, the term had gained its now common usage in France and had begun to be used to refer purely to books of magic. Owen Davies presumed this was because "many of them continued to circulate in Latin manuscripts".
However, the term grimoire later developed into a figure of speech among the French indicating something that was hard to understand. In the 19th century, with the increasing interest in occultism among the British following the publication of Francis Barrett's The Magus (1801), the term entered English in reference to books of magic.
History
Ancient period
The earliest known written magical incantations come from ancient Mesopotamia (modern Iraq), where they have been found inscribed on cuneiform clay tablets that archaeologists excavated from the city of Uruk and dated to between the 5th and 4th centuries BC. The ancient Egyptians also employed magical incantations, which have been found inscribed on amulets and other items. The Egyptian magical system, known as heka, was greatly altered and expanded after the Macedonians, led by Alexander the Great, invaded Egypt in 332 BC.
Over the following three centuries of Hellenistic Egypt, the Coptic writing system evolved, and the Library of Alexandria was opened. This likely influenced books of magic, with known incantations shifting from simple health and protection charms to more specific aims, such as financial success and sexual fulfillment. Around this time the legendary figure of Hermes Trismegistus developed as a conflation of the Egyptian god Thoth and the Greek Hermes; this figure was associated with writing and magic and, therefore, with books on magic.
The ancient Greeks and Romans believed that books on magic were invented by the Persians. The 1st-century AD writer Pliny the Elder stated that magic had been first discovered by the ancient philosopher Zoroaster around the year 647 BC but that it was only written down in the 5th century BC by the magician Osthanes. His claims are not, however, supported by modern historians.
The ancient Jewish people were often viewed as being knowledgeable in magic, which, according to legend, they had learned from Moses, who had learned it in Egypt. Among many ancient writers, Moses was seen as an Egyptian rather than a Jew. Two manuscripts likely dating to the 4th century, both of which purport to be the legendary eighth Book of Moses (the first five being the initial books in the Biblical Old Testament), present him as a polytheist who explained how to conjure gods and subdue demons.
Meanwhile, there is definite evidence of grimoires being used by certain—particularly Gnostic—sects of early Christianity. In the Book of Enoch found within the Dead Sea Scrolls, for instance, there is information on astrology and the angels. In possible connection with the Book of Enoch, the idea of Enoch and his great-grandson Noah having some involvement with books of magic given to them by angels continued through to the medieval period.
Israelite King Solomon was a Biblical figure associated with magic and sorcery in the ancient world. The 1st-century Romano-Jewish historian Josephus mentioned a book circulating under the name of Solomon that contained incantations for summoning demons and described how a Jew called Eleazar used it to cure cases of possession. The book may have been the Testament of Solomon but was more probably a different work. The pseudepigraphic Testament of Solomon is one of the oldest magical texts. It is a Greek manuscript attributed to Solomon and was likely written in either Babylonia or Egypt sometime in the first five centuries AD, over 1,000 years after Solomon's death.
The work tells of the building of The Temple and relates that construction was hampered by demons until the archangel Michael gave the King a magical ring. The ring, engraved with the Seal of Solomon, had the power to bind demons from doing harm. Solomon used it to lock demons in jars and commanded others to do his bidding, although eventually, according to the Testament, he was tempted into worshiping "false gods", such as Moloch, Baal, and Rapha. Subsequently, after losing favour with God, King Solomon wrote the work as a warning and a guide to the reader.
When Christianity became the dominant faith of the Roman Empire, the early Church frowned upon the propagation of books on magic, connecting it with paganism, and burned books of magic. The New Testament records that after the unsuccessful exorcism by the seven sons of Sceva became known, many converts decided to burn their own magic and pagan books in the city of Ephesus; this advice was adopted on a large scale after the Christian ascent to power.
Medieval period
In the medieval period, the production of grimoires continued in Christendom, as well as amongst Jews and the followers of the newly founded Islamic faith. As the historian Owen Davies noted, "while the [Christian] Church was ultimately successful in defeating pagan worship it never managed to demarcate clearly and maintain a line of practice between religious devotion and magic." The use of such books on magic continued. In Christianised Europe, the Church divided books of magic into two kinds: those that dealt with "natural magic" and those that dealt in "demonic magic".
The former was acceptable because it was viewed as merely taking note of the powers in nature that were created by God; for instance, the Anglo-Saxon leechbooks, which contained simple spells for medicinal purposes, were tolerated. Demonic magic was not acceptable, because it was believed that such magic did not come from God, but from the Devil and his demons. These grimoires dealt in such topics as necromancy, divination and demonology. Despite this, "there is ample evidence that the mediaeval clergy were the main practitioners of magic and therefore the owners, transcribers, and circulators of grimoires," while several grimoires were attributed to Popes.
One such Arabic grimoire devoted to astral magic, the 10th-century Ghâyat al-Hakîm, was later translated into Latin and circulated in Europe during the 13th century under the name of the Picatrix. However, not all such grimoires of this era were based upon Arabic sources. The 13th-century Sworn Book of Honorius, for instance, was (like the ancient Testament of Solomon before it) largely based on the supposed teachings of the Biblical king Solomon and included ideas such as prayers and a ritual circle, with the mystical purpose of having visions of God, Hell, and Purgatory and gaining much wisdom and knowledge as a result. Another was the Hebrew Sefer Raziel Ha-Malakh, translated in Europe as the Liber Razielis Archangeli.
A later book also claiming to have been written by Solomon was originally written in Greek during the 15th century, where it was known as the Magical Treatise of Solomon or the Little Key of the Whole Art of Hygromancy, Found by Several Craftsmen and by the Holy Prophet Solomon. By the 16th century, this work had been translated into Latin and Italian, being renamed the Clavicula Salomonis, or the Key of Solomon.
In Christendom during the medieval age, grimoires were written that were attributed to other ancient figures, thereby supposedly giving them a sense of authenticity because of their antiquity. The German abbot and occultist Trithemius (1462–1516) supposedly had a Book of Simon the Magician, based upon the New Testament figure of Simon Magus.
Similarly, it was commonly believed by medieval people that other ancient figures, such as the poet Virgil, astronomer Ptolemy, and philosopher Aristotle, had been involved in magic, and grimoires claiming to have been written by them were circulated. However, there were those who did not believe this; for instance, the Franciscan friar Roger Bacon (c. 1214–94) stated that books falsely claiming to be by ancient authors "ought to be prohibited by law."
Early modern period
As the early modern period commenced in the late 15th century, many changes began to shock Europe that would have an effect on the production of grimoires. Historian Owen Davies classed the most important of these as the Protestant Reformation, and subsequent Catholic Counter-Reformation; The Witch-hunts, and the advent of printing. The Renaissance saw the continuation of interest in magic that had been found in the Medieval period, and in this period, there was an increased interest in Hermeticism among occultists and ceremonial magicians in Europe, largely fueled by the 1471 translation of the ancient Corpus hermeticum into Latin by Marsilio Ficino (1433–99).
Alongside this, there was a rise in interest in the Jewish mysticism known as the Kabbalah, which was spread across the continent by Pico della Mirandola and Johannes Reuchlin. The most important magician of the Renaissance was Heinrich Cornelius Agrippa (1486–1535), who widely studied occult topics and earlier grimoires and eventually published his own, the Three Books of Occult Philosophy, in 1533. A similar figure was the Swiss magician known as Paracelsus (1493–1541), who published Of the Supreme Mysteries of Nature, in which he emphasised the distinction between good and bad magic. A third such individual was Johann Georg Faust, upon whom several pieces of later literature were written, such as Christopher Marlowe's Doctor Faustus, that portrayed him as consulting with demons.
The idea of demonology had remained strong in the Renaissance, and several demonological grimoires were published, including The Fourth Book of Occult Philosophy, which falsely claimed to have been authored by Cornelius Agrippa, and the Pseudomonarchia Daemonum, which listed 69 demons. To counter this, the Roman Catholic Church authorised the production of many works of exorcism, the rituals of which were often very similar to those of demonic conjuration. Alongside these demonological works, grimoires on natural magic continued to be produced, including Magia Naturalis, written by Giambattista Della Porta (1535–1615).
Iceland had its own regional magical traditions, most notably the Galdrabók, a grimoire containing numerous symbols of mystic origin for the practitioner's use. These works fuse Germanic pagan and Christian influences, invoking the Norse gods for aid alongside the names of demons.
The advent of printing in Europe meant that books could be mass-produced for the first time and could reach an ever-growing literate audience. Among the earliest books to be printed were magical texts. The nóminas were one example, consisting of prayers to the saints used as talismans. It was particularly in Protestant countries, such as Switzerland and the German states, which were not under the domination of the Roman Catholic Church, where such grimoires were published.
Despite the advent of print, however, handwritten grimoires remained highly valued, as they were believed to contain inherent magical powers, and they continued to be produced. With increasing availability, people lower down the social scale and women began to have access to books on magic; this was often incorporated into the popular folk magic of the average people and, in particular, that of the cunning folk, who were professionally involved in folk magic. These works left Europe and were imported to the parts of Latin America controlled by the Spanish and Portuguese empires and the parts of North America controlled by the British and French empires.
Throughout this period, the Inquisition, a Roman Catholic organisation, had organised the mass suppression of peoples and beliefs that they considered heretical. In many cases, grimoires were found in the heretics' possessions and destroyed. In 1599, the church published an Index of Prohibited Books, in which many grimoires were listed as forbidden, including several mediaeval ones, such as the Key of Solomon, which were still popular.
In Christendom, there also began to develop a widespread fear of witchcraft, which was believed to be Satanic in nature. The subsequent hysteria, known as The Witch-hunts, caused the death of around 40,000 people, most of whom were women. Sometimes, those found with grimoires—particularly demonological ones—were prosecuted and dealt with as witches but, in most cases, those accused had no access to such books. Iceland—which had a relatively high literacy rate—proved an exception to this, with a third of the 134 witch trials held involving people who had owned grimoires. By the end of the Early Modern period, and the beginning of the Enlightenment, many European governments brought in laws prohibiting many superstitious beliefs in an attempt to bring an end to the Witch Hunts; this would inevitably affect the release of grimoires.
Meanwhile, Hermeticism and the Kabbalah would influence the creation of a mystical philosophy known as Rosicrucianism, which first appeared in the early 17th century, when two pamphlets detailing the existence of the mysterious Rosicrucian group were published in Germany. These claimed that Rosicrucianism had originated with a Medieval figure known as Christian Rosenkreuz, who had founded the Brotherhood of the Rosy Cross; however, there was no evidence for the existence of Rosenkreuz or the Brotherhood.
18th and 19th centuries
The 18th century saw the rise of the Enlightenment, a movement devoted to science and rationalism, predominantly amongst the ruling classes. However, across much of Europe, belief in magic and witchcraft persisted, as did the witch trials in certain areas. Governments tried to crack down on magicians and fortune tellers, particularly in France, where the police viewed them as social pests who took money from the gullible, often in a search for treasure. In doing so, they confiscated many grimoires.
Beginning in the 17th century, a new, ephemeral form of printed literature developed in France: the Bibliothèque bleue. Many grimoires published through this medium circulated among a growing share of the populace, in particular the Grand Albert, the Petit Albert (1782), the Grimoire du Pape Honorius, and the Enchiridion Leonis Papae. The Petit Albert contained a wide variety of magic, dealing in simple charms for ailments alongside more complex matters, such as the instructions for making a Hand of Glory.
In the late 18th and early 19th centuries, following the French Revolution of 1789, a hugely influential grimoire was published under the title of the Grand Grimoire, which was considered particularly powerful, because it involved conjuring and making a pact with the devil's chief minister, Lucifugé Rofocale, to gain wealth from him. A new version of this grimoire was later published under the title of the Dragon rouge and was available for sale in many Parisian bookstores. Similar books published in France at this time included the Black Pullet and the Grimoirium Verum. The Black Pullet, probably authored in late-18th-century Rome or France, differs from the typical grimoire in that it does not claim to be a manuscript from antiquity but rather the account of a man who was a member of Napoleon's armed expeditionary forces in Egypt.
The widespread availability of printed grimoires in France—despite the opposition of both the rationalists and the church—soon spread to neighbouring countries, such as Spain and Germany. In Switzerland, Geneva was commonly associated with the occult at the time, particularly by Catholics, because it had been a stronghold of Protestantism. Many of those interested in the esoteric traveled from Roman Catholic nations to Switzerland to purchase grimoires or to study with occultists. Soon, grimoires appeared that involved Catholic saints; one example that appeared during the 19th century, and became relatively popular—particularly in Spain—was the Libro de San Cipriano, or The Book of St. Ciprian, which falsely claimed to date from c. 1000. As with most grimoires of this period, it dealt with (among other things) how to discover treasure.
In Germany, with the increased interest in folklore during the 19th century, many historians took an interest in magic and in grimoires. Several published extracts of such grimoires in their own books on the history of magic, thereby helping to further propagate them. Perhaps the most notable of these was the Protestant pastor Georg Conrad Horst (1779–1832) who, from 1821 to 1826, published a six-volume collection of magical texts in which he studied grimoires as a peculiarity of the Medieval mindset.
Another scholar of the time interested in grimoires, the antiquarian bookseller Johann Scheible first published the Sixth and Seventh Books of Moses; two influential magical texts that claimed to have been written by the ancient Jewish figure Moses. The Sixth and Seventh Books of Moses were among the works which later spread to the countries of Scandinavia, where—in Danish and Swedish—grimoires were known as black books and were commonly found among members of the army.
In Britain, new grimoires continued to be produced throughout the 18th century, such as Ebenezer Sibly's A New and Complete Illustration of the Celestial Science of Astrology. In the last decades of that century, London experienced a revival of interest in the occult which was further propagated by Francis Barrett's publication of The Magus in 1801. The Magus contained many things taken from older grimoires—particularly those of Cornelius Agrippa—and, while not achieving initial popularity upon release, it gradually became an influential text.
One of Barrett's pupils, John Parkin, created his own handwritten grimoire The Grand Oracle of Heaven, or, The Art of Divine Magic, although it was never published, largely because Britain was at war with France, and grimoires were commonly associated with the French. The only writer to publish British grimoires widely in the early 19th century was Robert Cross Smith, who released The Philosophical Merlin (1822) and The Astrologer of the Nineteenth Century (1825), but neither sold well.
In the late 19th century, several of these texts (including The Book of Abramelin and the Key of Solomon) were reclaimed by para-Masonic magical organisations, such as the Hermetic Order of the Golden Dawn and Ordo Templi Orientis.
20th and 21st centuries
The Secret Grimoire of Turiel claims to have been written in the 16th century, but no copy older than 1927 has been produced.
A modern grimoire, the Simon Necronomicon, takes its name from a fictional book of magic in the stories of H. P. Lovecraft; its contents draw on Babylonian mythology and on the Ars Goetia—one of the five books that make up The Lesser Key of Solomon—concerning the summoning of demons. The Azoëtia of Andrew D. Chumbley has been described by Gavin Semple as a modern grimoire.
The neopagan religion of Wicca publicly appeared in the 1940s, and Gerald Gardner introduced the Book of Shadows as a Wiccan grimoire.
The term grimoire commonly serves as an alternative name for a spell book or tome of magical knowledge in fantasy fiction and role-playing games. The most famous fictional grimoire is the Necronomicon, a creation of H. P. Lovecraft.
See also
Table of magical correspondences, a type of reference work used in ceremonial magic
Cyprianus, a name for Scandinavian grimoires
Codex
Key of Solomon
Lesser Key of Solomon
Manuscript
References
Bibliography
External links
Internet Sacred Text Archives: Grimoires
Digitized Grimoires
Scandinavian folklore
Fiction about magic
Magic (supernatural)
Magic items
Non-fiction genres
Religious objects | Grimoire | [
"Physics"
] | 4,300 | [
"Magic items",
"Religious objects",
"Physical objects",
"Matter"
] |
12,295 | https://en.wikipedia.org/wiki/Gamete | A gamete is a haploid cell that fuses with another haploid cell during fertilization in organisms that reproduce sexually. Gametes are an organism's reproductive cells, also referred to as sex cells. The name gamete was introduced by the German cytologist Eduard Strasburger in 1878.
Gametes of both mating individuals can be the same size and shape, a condition known as isogamy. By contrast, in the majority of species, the gametes are of different sizes, a condition known as anisogamy or heterogamy that applies to humans and other mammals. The human ovum has approximately 100,000 times the volume of a single human sperm cell. The type of gamete an organism produces determines its sex and sets the basis for the sexual roles and sexual selection. In humans and other species that produce two morphologically distinct types of gametes, and in which each individual produces only one type, a female is any individual that produces the larger type of gamete called an ovum, and a male produces the smaller type, called a sperm cell or spermatozoon. Sperm cells are small and motile due to the presence of a tail-shaped structure, the flagellum, that provides propulsion. In contrast, each egg cell or ovum is relatively large and non-motile.
Oogenesis, the process of female gamete formation in animals, involves meiosis (including meiotic recombination) of a diploid primary oocyte to produce a haploid ovum. Spermatogenesis, the process of male gamete formation in animals, involves meiosis in a diploid primary spermatocyte to produce haploid spermatozoa. In animals, ova are produced in the ovaries of females and sperm develop in the testes of males. During fertilization, a spermatozoon and an ovum, each carrying half of the genetic information of an individual, unite to form a zygote that develops into a new diploid organism.
Evolution
It is generally accepted that isogamy is the ancestral state from which anisogamy and oogamy evolved, although its evolution has left no fossil record. There are almost invariably only two gamete types; all analyses show that intermediate gamete sizes are eliminated by selection. Since intermediate-sized gametes lack the advantages of either small or large ones, they do worse than small ones in mobility and numbers, and worse than large ones in supply.
Differences between gametes and somatic cells
In contrast to a gamete, which has only one set of chromosomes, a diploid somatic cell has two sets of homologous chromosomes, one of which is a copy of the chromosome set from the sperm and one a copy of the chromosome set from the egg cell. Recombination of the genes during meiosis ensures that the chromosomes of gametes are not exact duplicates of either of the sets of chromosomes carried in the parental diploid chromosomes but a mixture of the two.
Artificial gametes
Artificial gametes, also known as in vitro derived gametes (IVD), stem cell-derived gametes (SCDGs), and in vitro generated gametes (IVG), are gametes derived from stem cells. The use of such artificial gametes would "necessarily require IVF techniques". Research shows that artificial gametes may be a reproductive technique for same-sex male couples, although a surrogate mother would still be required for the gestation period. Women who have passed menopause may be able to produce eggs and bear genetically related children with artificial gametes. Robert Sparrow wrote, in the Journal of Medical Ethics, that embryos derived from artificial gametes could be used to derive new gametes and this process could be repeated to create multiple human generations in the laboratory. This technique could be used to create cell lines for medical applications and for studying the heredity of genetic disorders. Additionally, this technique could be used for human enhancement by selectively breeding for a desired genome or by using recombinant DNA technology to create enhancements that have not arisen in nature.
Plants
Plants that reproduce sexually also produce gametes. However, since plants have a life cycle involving alternation of diploid and haploid generations some differences from animal life cycles exist. Plants use meiosis to produce spores that develop into multicellular haploid gametophytes which produce gametes by mitosis. In animals there is no corresponding multicellular haploid phase. The sperm of plants that reproduce using spores are formed by mitosis in an organ of the gametophyte known as the antheridium and the egg cells by mitosis in a flask-shaped organ called the archegonium. Plant sperm cells are their only motile cells, often described as flagellate, but more correctly as ciliate. Bryophytes have 2 flagella, horsetails have up to 200 and the mature spermatozoa of the cycad Zamia pumila has up to 50,000 flagella. Cycads and Ginkgo biloba are the only gymnosperms with motile sperm. In the flowering plants, the female gametophyte is produced inside the ovule within the ovary of the flower. When mature, the haploid gametophyte produces female gametes which are ready for fertilization. The male gametophyte is produced inside a pollen grain within the anther and is non-motile, but can be distributed by wind, water or animal vectors. When a pollen grain lands on a mature stigma of a flower it germinates to form a pollen tube that grows down the style into the ovary of the flower and then into the ovule. The pollen then produces non-motile sperm nuclei by mitosis that are transported down the pollen tube to the ovule where they are released for fertilization of the egg cell.
See also
Coenogamete
Notes and references
Classical genetics
Germ cells
Reproductive system | Gamete | [
"Biology"
] | 1,251 | [
"Behavior",
"Reproductive system",
"Sex",
"Reproduction",
"Organ systems"
] |
12,306 | https://en.wikipedia.org/wiki/Geotechnical%20engineering | Geotechnical engineering, also known as geotechnics, is the branch of civil engineering concerned with the engineering behavior of earth materials. It uses the principles of soil mechanics and rock mechanics to solve its engineering problems. It also relies on knowledge of geology, hydrology, geophysics, and other related sciences.
Geotechnical engineering has applications in military engineering, mining engineering, petroleum engineering, coastal engineering, and offshore construction. The fields of geotechnical engineering and engineering geology have overlapping knowledge areas. However, while geotechnical engineering is a specialty of civil engineering, engineering geology is a specialty of geology.
History
Humans have historically used soil as a material for flood control, irrigation purposes, burial sites, building foundations, and construction materials for buildings. Dykes, dams, and canals dating back to at least 2000 BCE—found in parts of ancient Egypt, ancient Mesopotamia, the Fertile Crescent, and the early settlements of Mohenjo Daro and Harappa in the Indus valley—provide evidence for early activities linked to irrigation and flood control. As cities expanded, structures were erected and supported by formalized foundations. The ancient Greeks notably constructed pad footings and strip-and-raft foundations. Until the 18th century, however, no theoretical basis for soil design had been developed, and the discipline was more of an art than a science, relying on experience.
Several foundation-related engineering problems, such as the Leaning Tower of Pisa, prompted scientists to begin taking a more scientifically based approach to examining the subsurface. The earliest advances occurred in the development of earth pressure theories for the construction of retaining walls. Henri Gautier, a French royal engineer, recognized the "natural slope" of different soils in 1717, an idea later known as the soil's angle of repose. Around the same time, a rudimentary soil classification system was also developed based on a material's unit weight, which is no longer considered a good indication of soil type.
The application of the principles of mechanics to soils was documented as early as 1773 when Charles Coulomb, a physicist and engineer, developed improved methods to determine the earth pressures against military ramparts. Coulomb observed that, at failure, a distinct slip plane would form behind a sliding retaining wall and suggested that the maximum shear stress on the slip plane, for design purposes, was the sum of the soil cohesion, c, and the friction σ tan(φ), where σ is the normal stress on the slip plane and φ is the friction angle of the soil. By combining Coulomb's theory with Christian Otto Mohr's 2D stress state, the theory became known as Mohr-Coulomb theory. Although it is now recognized that precise determination of cohesion is impossible because c is not a fundamental soil property, the Mohr-Coulomb theory is still used in practice today.
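For illustration, the Mohr-Coulomb criterion τ = c + σ tan(φ) is straightforward to evaluate numerically. A minimal sketch in Python (the function name and the example values are illustrative assumptions, not from any standard library):

```python
import math

def mohr_coulomb_strength(c, sigma_n, phi_deg):
    """Shear strength on the slip plane: tau_f = c + sigma_n * tan(phi).

    c        : soil cohesion (kPa)
    sigma_n  : normal stress on the slip plane (kPa)
    phi_deg  : friction angle of the soil (degrees)
    """
    return c + sigma_n * math.tan(math.radians(phi_deg))

# Illustrative values: c = 10 kPa, sigma_n = 100 kPa, phi = 30 degrees
# tau_f = 10 + 100 * tan(30°) ≈ 67.7 kPa
print(mohr_coulomb_strength(10.0, 100.0, 30.0))
```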
In the 19th century, Henry Darcy developed what is now known as Darcy's Law, describing the flow of fluids in porous media. Joseph Boussinesq, a mathematician and physicist, developed theories of stress distribution in elastic solids that proved useful for estimating stresses at depth in the ground. William Rankine, an engineer and physicist, developed an alternative to Coulomb's earth pressure theory. Albert Atterberg developed the clay consistency indices that are still used today for soil classification. In 1885, Osborne Reynolds recognized that shearing causes volumetric dilation of dense materials and contraction of loose granular materials.
Modern geotechnical engineering is said to have begun in 1925 with the publication of Erdbaumechanik by Karl von Terzaghi, a mechanical engineer and geologist. Considered by many to be the father of modern soil mechanics and geotechnical engineering, Terzaghi developed the principle of effective stress, and demonstrated that the shear strength of soil is controlled by effective stress. Terzaghi also developed the framework for theories of bearing capacity of foundations, and the theory for prediction of the rate of settlement of clay layers due to consolidation. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity.
In his 1948 book, Donald Taylor recognized that the interlocking and dilation of densely packed particles contributed to the peak strength of the soil. Roscoe, Schofield, and Wroth, with the publication of On the Yielding of Soils in 1958, established the interrelationships between the volume change behavior (dilation, contraction, and consolidation) and shearing behavior with the theory of plasticity using critical state soil mechanics. Critical state soil mechanics is the basis for many contemporary advanced constitutive models describing the behavior of soil.
In 1960, Alec Skempton carried out an extensive review of the available formulations and experimental data in the literature about the effective stress validity in soil, concrete, and rock in order to reject some of these expressions, as well as clarify what expressions were appropriate according to several working hypotheses, such as stress-strain or strength behavior, saturated or non-saturated media, and rock, concrete or soil behavior.
Roles
Geotechnical investigation
Geotechnical engineers investigate and determine the properties of subsurface conditions and materials. They also design corresponding earthworks and retaining structures, tunnels, and structure foundations, and may supervise and evaluate sites, which may further involve site monitoring as well as the risk assessment and mitigation of natural hazards.
Geotechnical engineers and engineering geologists perform geotechnical investigations to obtain information on the physical properties of soil and rock underlying and adjacent to a site to design earthworks and foundations for proposed structures and for the repair of distress to earthworks and structures caused by subsurface conditions. Geotechnical investigations involve surface and subsurface exploration of a site, often including subsurface sampling and laboratory testing of retrieved soil samples. Sometimes, geophysical methods are also used to obtain data, which include measurement of seismic waves (pressure, shear, and Rayleigh waves), surface-wave methods and downhole methods, and electromagnetic surveys (magnetometer, resistivity, and ground-penetrating radar). Electrical tomography can be used to survey soil and rock properties and existing underground infrastructure in construction projects.
Surface exploration can include on-foot surveys, geologic mapping, geophysical methods, and photogrammetry. Geologic mapping and interpretation of geomorphology are typically completed in consultation with a geologist or engineering geologist. Subsurface exploration usually involves in-situ testing (for example, the standard penetration test and cone penetration test). The digging of test pits and trenching (particularly for locating faults and slide planes) may also be used to learn about soil conditions at depth. Large-diameter borings are rarely used due to safety concerns and expense. Still, they are sometimes used to allow a geologist or engineer to be lowered into the borehole for direct visual and manual examination of the soil and rock stratigraphy.
Various soil samplers exist to meet the needs of different engineering projects. The standard penetration test, which uses a thick-walled split spoon sampler, is the most common way to collect disturbed samples. Piston samplers, employing a thin-walled tube, are most commonly used to collect less disturbed samples. More advanced methods, such as the Sherbrooke block sampler, are superior but expensive. Coring frozen ground provides high-quality undisturbed samples from ground conditions, such as fill, sand, moraine, and rock fracture zones.
Geotechnical centrifuge modeling is another method of testing physical-scale models of geotechnical problems. The use of a centrifuge enhances the similarity of the scale model tests involving soil because soil's strength and stiffness are susceptible to the confining pressure. The centrifugal acceleration allows a researcher to obtain large (prototype-scale) stresses in small physical models.
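The scaling argument behind centrifuge modeling can be made concrete: at N times Earth's gravity, a 1/N-scale model reproduces the prototype's self-weight stresses at homologous points, since σ = ρ(Ng)(z/N) = ρgz. A small sketch (the variable names and values are illustrative assumptions):

```python
def self_weight_stress(rho, g, depth):
    """Vertical self-weight stress: sigma_v = rho * g * z (Pa)."""
    return rho * g * depth

rho, g, N = 1800.0, 9.81, 50.0  # soil density (kg/m^3), gravity (m/s^2), g-level
prototype = self_weight_stress(rho, g, 10.0)       # 10 m depth in the field
model = self_weight_stress(rho, N * g, 10.0 / N)   # 1/N-scale model spun at N*g
assert abs(prototype - model) < 1e-6  # identical stresses at homologous points
```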
Foundation design
The foundation of a structure's infrastructure transmits loads from the structure to the earth. Geotechnical engineers design foundations based on the load characteristics of the structure and the properties of the soils and bedrock at the site. Generally, geotechnical engineers first estimate the magnitude and location of loads to be supported before developing an investigation plan to explore the subsurface and determine the necessary soil parameters through field and lab testing. Following this, they may begin the design of an engineering foundation. The primary considerations for a geotechnical engineer in foundation design are bearing capacity, settlement, and ground movement beneath the foundations.
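As one concrete instance of a bearing capacity check, the ultimate capacity of a shallow strip footing is commonly written q_ult = c·Nc + γ·D·Nq + ½·γ·B·Nγ. The sketch below is a simplified illustration, not Terzaghi's original formulation: it uses the Prandtl/Reissner closed form for Nq and a Vesić-type fit for Nγ, and the names and example values are assumptions:

```python
import math

def strip_footing_q_ult(c, gamma, D, B, phi_deg):
    """Ultimate bearing capacity (kPa) of a strip footing:
    q_ult = c*Nc + gamma*D*Nq + 0.5*gamma*B*Ngamma

    Uses the Prandtl/Reissner Nq and a Vesic-type Ngamma fit;
    Terzaghi's original factors differ somewhat.
    """
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1.0) / math.tan(phi) if phi_deg > 0 else 5.14
    Ngamma = 2.0 * (Nq + 1.0) * math.tan(phi)
    return c * Nc + gamma * D * Nq + 0.5 * gamma * B * Ngamma

# Illustrative case: c=5 kPa, gamma=18 kN/m^3, depth D=1 m, width B=2 m, phi=30°
print(strip_footing_q_ult(5.0, 18.0, 1.0, 2.0, 30.0))  # ≈ 885 kPa
```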
Earthworks
Geotechnical engineers are also involved in the planning and execution of earthworks, which include ground improvement, slope stabilization, and slope stability analysis.
Ground improvement
Various geotechnical engineering methods can be used for ground improvement, including reinforcement geosynthetics such as geocells and geogrids, which disperse loads over a larger area, increasing the soil's load-bearing capacity. Through these methods, geotechnical engineers can reduce direct and long-term costs.
Slope stabilization
Geotechnical engineers can analyze and improve slope stability using engineering methods. Slope stability is determined by the balance of shear stress and shear strength. A previously stable slope may be initially affected by various factors, making it unstable. Nonetheless, geotechnical engineers can design and implement engineered slopes to increase stability.
Slope stability analysis
Stability analysis is needed to design engineered slopes and estimate the risk of slope failure in natural or designed slopes by determining the conditions under which the topmost mass of soil will slip relative to the base of soil and lead to slope failure. If the interface between the mass and the base of a slope has a complex geometry, slope stability analysis is difficult and numerical solution methods are required. Typically, the interface's exact geometry is unknown, and a simplified interface geometry is assumed. Finite slopes require three-dimensional models to be analyzed, so most slopes are analyzed assuming that they are infinitely wide and can be represented by two-dimensional models.
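For the infinitely wide, two-dimensional idealization just described, the classical dry infinite-slope analysis gives the factor of safety in closed form. A minimal sketch (the function name and example values are illustrative assumptions):

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety of a dry infinite slope:
    FS = [c + gamma*z*cos(beta)^2*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]

    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight (kN/m^3),
    z: slip-plane depth (m), beta_deg: slope angle. FS < 1 implies failure.
    """
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    return resisting / driving

# e.g. c=2 kPa, phi=32°, gamma=19 kN/m^3, z=3 m, slope angle 25°  ->  FS ≈ 1.4
print(infinite_slope_fs(2.0, 32.0, 19.0, 3.0, 25.0))
```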
Sub-disciplines
Geosynthetics
Geosynthetics are a type of plastic polymer products used in geotechnical engineering that improve engineering performance while reducing costs. This includes geotextiles, geogrids, geomembranes, geocells, and geocomposites. The synthetic nature of the products make them suitable for use in the ground where high levels of durability are required. Their main functions include drainage, filtration, reinforcement, separation, and containment.
Geosynthetics are available in a wide range of forms and materials, each to suit a slightly different end-use, although they are frequently used together. Some reinforcement geosynthetics, such as geogrids and more recently, cellular confinement systems, have shown to improve bearing capacity, modulus factors and soil stiffness and strength. These products have a wide range of applications and are currently used in many civil and geotechnical engineering applications including roads, airfields, railroads, embankments, piled embankments, retaining structures, reservoirs, canals, dams, landfills, bank protection and coastal engineering.
Offshore
Offshore (or marine) geotechnical engineering is concerned with foundation design for human-made structures in the sea, away from the coastline (in opposition to onshore or nearshore engineering). Oil platforms, artificial islands and submarine pipelines are examples of such structures.
There are a number of significant differences between onshore and offshore geotechnical engineering. Notably, site investigation and ground improvement on the seabed are more expensive; the offshore structures are exposed to a wider range of geohazards; and the environmental and financial consequences are higher in case of failure. Offshore structures are exposed to various environmental loads, notably wind, waves and currents. These phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan and need to be taken into account in offshore design.
In subsea geotechnical engineering, seabed materials are considered a two-phase material composed of rock or mineral particles and water. Structures may be fixed in place in the seabed—as is the case for piers, jetties and fixed-bottom wind turbines—or may comprise a floating structure that remains roughly fixed relative to its geotechnical anchor point. Undersea mooring of human-engineered floating structures include a large number of offshore oil and gas platforms and, since 2008, a few floating wind turbines. Two common types of engineered design for anchoring floating structures include tension-leg and catenary loose mooring systems.
Observational method
First proposed by Karl Terzaghi and later discussed in a paper by Ralph B. Peck, the observational method is a managed process of construction control, monitoring, and review, which enables modifications to be incorporated during and after construction. The method aims to achieve a greater overall economy without compromising safety by creating designs based on the most probable conditions rather than the most unfavorable. Using the observational method, gaps in available information are filled by measurements and investigation, which aid in assessing the behavior of the structure during construction, which in turn can be modified per the findings. The method was described by Peck as "learn-as-you-go".
The observational method may be described as follows:
General exploration sufficient to establish the rough nature, pattern, and properties of deposits.
Assessment of the most probable conditions and the most unfavorable conceivable deviations.
Creating the design based on a working hypothesis of behavior anticipated under the most probable conditions.
Selection of quantities to be observed as construction proceeds and calculating their anticipated values based on the working hypothesis under the most unfavorable conditions.
Selection, in advance, of a course of action or design modification for every foreseeable significant deviation of the observational findings from those predicted.
Measurement of quantities and evaluation of actual conditions.
Design modification per actual conditions
The observational method is suitable for construction that has already begun when an unexpected development occurs or when a failure or accident looms or has already happened. It is unsuitable for projects whose design cannot be altered during construction.
See also
Civil engineering
Deep Foundations Institute
Earthquake engineering
Earth structure
Effective stress
Engineering geology
Geological Engineering
Geoprofessions
Hydrogeology
International Society for Soil Mechanics and Geotechnical Engineering
Karl von Terzaghi
Land reclamation
Landfill
Mechanically stabilized earth
Offshore geotechnical engineering
Rock mass classifications
Sediment control
Seismology
Soil mechanics
Soil physics
Soil science
Notes
References
Bates and Jackson, 1980, Glossary of Geology: American Geological Institute.
Krynine and Judd, 1957, Principles of Engineering Geology and Geotechnics: McGraw-Hill, New York.
Ventura, Pierfranco, 2019, Fondazioni, Volume 1, Modellazioni statiche e sismiche, Hoepli, Milano
Holtz, R. and Kovacs, W. (1981), An Introduction to Geotechnical Engineering, Prentice-Hall, Inc.
Bowles, J. (1988), Foundation Analysis and Design, McGraw-Hill Publishing Company.
Cedergren, Harry R. (1977), Seepage, Drainage, and Flow Nets, Wiley.
Kramer, Steven L. (1996), Geotechnical Earthquake Engineering, Prentice-Hall, Inc.
Freeze, R.A. & Cherry, J.A., (1979), Groundwater, Prentice-Hall.
Lunne, T. & Long, M.,(2006), Review of long seabed samplers and criteria for new sampler design, Marine Geology, Vol 226, p. 145–165
Mitchell, James K. & Soga, K. (2005), Fundamentals of Soil Behavior 3rd ed., John Wiley & Sons, Inc.
Rajapakse, Ruwan., (2005), "Pile Design and Construction", 2005.
Fang, H.-Y. and Daniels, J. (2005) Introductory Geotechnical Engineering : an environmental perspective, Taylor & Francis.
NAVFAC (Naval Facilities Engineering Command) (1986) Design Manual 7.01, Soil Mechanics, US Government Printing Office
NAVFAC (Naval Facilities Engineering Command) (1986) Design Manual 7.02, Foundations and Earth Structures, US Government Printing Office
NAVFAC (Naval Facilities Engineering Command) (1983) Design Manual 7.03, Soil Dynamics, Deep Stabilization and Special Geotechnical Construction, US Government Printing Office
Terzaghi, K., Peck, R.B. and Mesri, G. (1996), Soil Mechanics in Engineering Practice 3rd Ed., John Wiley & Sons, Inc.
Santamarina, J.C., Klein, K.A., & Fam, M.A. (2001), "Soils and Waves: Particulate Materials Behavior, Characterization and Process Monitoring", Wiley,
Firuziaan, M. and Estorff, O., (2002), "Simulation of the Dynamic Behavior of Bedding-Foundation-Soil in the Time Domain", Springer Verlag.
External links
Worldwide Geotechnical Literature Database | Geotechnical engineering | [
"Engineering"
] | 3,457 | [
"Civil engineering",
"Geotechnical engineering"
] |
12,308 | https://en.wikipedia.org/wiki/Gregory%20Chaitin | Gregory John Chaitin (born 25 June 1947) is an Argentine-American mathematician and computer scientist. Beginning in the late 1960s, Chaitin made contributions to algorithmic information theory and metamathematics, in particular a computer-theoretic result equivalent to Gödel's incompleteness theorem. He is considered to be one of the founders of what is today known as algorithmic (Solomonoff–Kolmogorov–Chaitin, Kolmogorov or program-size) complexity together with Andrei Kolmogorov and Ray Solomonoff. Along with the works of e.g. Solomonoff, Kolmogorov, Martin-Löf, and Leonid Levin, algorithmic information theory became a foundational part of theoretical computer science, information theory, and mathematical logic. It is a common subject in several computer science curricula. Besides computer scientists, Chaitin's work draws the attention of many philosophers and mathematicians to fundamental problems in mathematical creativity and digital philosophy.
Mathematics and computer science
Gregory Chaitin is Jewish. He attended the Bronx High School of Science and the City College of New York, where he (still in his teens) developed the theory that led to his independent discovery of algorithmic complexity.
Chaitin has defined Chaitin's constant Ω, a real number whose digits are equidistributed and which is sometimes informally described as an expression of the probability that a random program will halt. Ω has the mathematical property that it is definable, with asymptotic approximations from below (but not from above), but not computable.
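The "approximable from below but not from above" property can be illustrated with a toy enumeration. The sketch below assumes a hypothetical prefix-free toy machine supplied by the caller as a step-bounded runner; nothing here is a real library API:

```python
from itertools import product

def omega_lower_bound(run, max_len, max_steps):
    """Monotone lower bound on a toy machine's halting probability
    Omega = sum of 2**(-len(p)) over all halting programs p.

    run(bits, steps) -> True iff the machine halts on program `bits`
    within `steps` steps (a hypothetical step-bounded simulator).
    Bounded simulation can only *confirm* halting, never rule it out,
    which is why Omega is approximable from below but not computable.
    Assumes the machine is prefix-free, so the sum stays <= 1.
    """
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if run("".join(bits), max_steps):
                total += 2.0 ** -n
    return total  # grows toward Omega as max_len and max_steps increase
```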
Chaitin is also the originator of using graph coloring to do register allocation in compiling, a process known as Chaitin's algorithm.
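A compressed sketch of the simplify/select core of such an allocator follows; the spill handling of the full algorithm is omitted, and the function and variable names are ours:

```python
def chaitin_style_alloc(interference, k):
    """Graph-coloring register allocation (simplify/select phases only).

    interference: dict mapping each variable to the set of variables
    live at the same time (the interference graph). k: register count.
    Returns a variable -> register map; raises where the full algorithm
    would instead choose a node to spill and restart.
    """
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    # Simplify: repeatedly remove some node with degree < k.
    while graph:
        node = next((v for v in graph if len(graph[v]) < k), None)
        if node is None:
            raise RuntimeError("no low-degree node left: spill required")
        stack.append((node, graph.pop(node)))
        for neighbors in graph.values():
            neighbors.discard(node)
    # Select: pop in reverse, giving each node the lowest free register.
    colors = {}
    for node, neighbors in reversed(stack):
        used = {colors[n] for n in neighbors if n in colors}
        colors[node] = next(r for r in range(k) if r not in used)
    return colors

# e.g. a interferes with b, and b with c: two registers suffice
print(chaitin_style_alloc({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}, 2))
```

On a graph that simplifies fully, no two simultaneously live variables end up sharing a register, which is the correctness condition the coloring enforces.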
He was formerly a researcher at IBM's Thomas J. Watson Research Center in New York. He has written more than 10 books that have been translated into about 15 languages. He is today interested in questions of metabiology and information-theoretic formalizations of the theory of evolution, and is a member of the Institute for Advanced Studies at Mohammed VI Polytechnic University.
Other scholarly contributions
Chaitin also writes about philosophy, especially metaphysics and philosophy of mathematics (particularly about epistemological matters in mathematics). In metaphysics, Chaitin claims that algorithmic information theory is the key to solving problems in the field of biology (obtaining a formal definition of 'life', its origin and evolution) and neuroscience (the problem of consciousness and the study of the mind).
In recent writings, he defends a position known as digital philosophy. In the epistemology of mathematics, he claims that his findings in mathematical logic and algorithmic information theory show there are "mathematical facts that are true for no reason, that are true by accident". Chaitin proposes that mathematicians must abandon any hope of proving those mathematical facts and adopt a quasi-empirical methodology.
Honors
In 1995 he was given the degree of doctor of science honoris causa by the University of Maine. In 2002 he was given the title of honorary professor by the University of Buenos Aires in Argentina, where his parents were born and where Chaitin spent part of his youth. In 2007 he was given a Leibniz Medal by Wolfram Research. In 2009 he was given the degree of doctor of philosophy honoris causa by the National University of Córdoba. He was formerly a researcher at IBM's Thomas J. Watson Research Center and a professor at the Federal University of Rio de Janeiro.
Criticism
Some philosophers and logicians disagree with the philosophical conclusions that Chaitin has drawn from his theorems related to what Chaitin thinks is a kind of fundamental arithmetic randomness.
The logician Torkel Franzén criticized Chaitin's interpretation of Gödel's incompleteness theorem and the alleged explanation for it that Chaitin's work represents.
Bibliography
Information, Randomness & Incompleteness (World Scientific 1987) (online)
Algorithmic Information Theory (Cambridge University Press 1987) (online)
Information-theoretic Incompleteness (World Scientific 1992) (online)
The Limits of Mathematics (Springer-Verlag 1998) (online )
The Unknowable (Springer-Verlag 1999) (online)
Exploring Randomness (Springer-Verlag 2001) (online)
Conversations with a Mathematician (Springer-Verlag 2002) (online)
From Philosophy to Program Size (Tallinn Cybernetics Institute 2003)
Meta Math!: The Quest for Omega (Pantheon Books 2005) (reprinted in UK as Meta Maths: The Quest for Omega, Atlantic Books 2006) ()
Teoria algoritmica della complessità (G. Giappichelli Editore 2006)
Thinking about Gödel & Turing (World Scientific 2007) (online )
Mathematics, Complexity and Philosophy (Editorial Midas 2011)
Gödel's Way (CRC Press 2012)
Proving Darwin: Making Biology Mathematical (Pantheon Books 2012) (online)
Philosophical Mathematics: Infinity, Incompleteness, Irreducibility (Academia.edu 2024) (online)
References
Further reading
External links
G J Chaitin Home Page from academia.edu
G J Chaitin Home Page from UMaine.edu in the Internet Archive
List of publications of G J Chaitin
Video of lecture on "Leibniz, complexity and incompleteness"
New Scientist article (March, 2001) on Chaitin, Omegas and Super-Omegas
A short version of Chaitin's proof
Gregory Chaitin extended film interview and transcripts for the 'Why Are We Here?' documentary series
Chaitin Lisp on github
1947 births
Living people
The Bronx High School of Science alumni
City College of New York alumni
Argentine mathematicians
Argentine computer scientists
20th-century American mathematicians
21st-century American mathematicians
American information theorists
IBM employees
Philosophers of mathematics
Epistemologists
Metaphysics writers
American logicians
21st-century American philosophers
Argentine information theorists
Mathematicians from New York (state) | Gregory Chaitin | [
"Mathematics"
] | 1,212 | [
"Philosophers of mathematics"
] |
12,339 | https://en.wikipedia.org/wiki/Genetically%20modified%20organism | A genetically modified organism (GMO) is any organism whose genetic material has been altered using genetic engineering techniques. The exact definition of a genetically modified organism and what constitutes genetic engineering varies, with the most common being an organism altered in a way that "does not occur naturally by mating and/or natural recombination". A wide variety of organisms have been genetically modified (GM), including animals, plants, and microorganisms.
Genetic modification can include the introduction of new genes or enhancing, altering, or knocking out endogenous genes. In some genetic modifications, genes are transferred within the same species, across species (creating transgenic organisms), and even across kingdoms. Creating a genetically modified organism is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism and combine it with other genetic elements, including a promoter and terminator region and often a selectable marker. A number of techniques are available for inserting the isolated gene into the host genome. Recent advancements using genome editing techniques, notably CRISPR, have made the production of GMOs much simpler. Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973, a bacterium resistant to the antibiotic kanamycin. The first genetically modified animal, a mouse, was created in 1974 by Rudolf Jaenisch, and the first plant was produced in 1983. In 1994, the Flavr Savr tomato was released, the first commercialized genetically modified food. The first genetically modified animal to be commercialized was the GloFish (2003) and the first genetically modified animal to be approved for food use was the AquAdvantage salmon in 2015.
Bacteria are the easiest organisms to engineer and have been used for research, food production, industrial protein purification (including drugs), agriculture, and art. There is potential to use them for environmental purposes or as medicine. Fungi have been engineered with much the same goals. Viruses play an important role as vectors for inserting genetic information into other organisms. This use is especially relevant to human gene therapy. There are proposals to remove the virulent genes from viruses to create vaccines. Plants have been engineered for scientific research, to create new colors in plants, deliver vaccines, and to create enhanced crops. Genetically modified crops are publicly the most controversial GMOs, in spite of having the most human health and environmental benefits. Animals are generally much harder to transform and the vast majority are still at the research stage. Mammals are the best model organisms for humans. Livestock is modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance, and survival. Genetically modified fish are used for scientific research, as pets, and as a food source. Genetic engineering has been proposed as a way to control mosquitos, a vector for many deadly diseases. Although human gene therapy is still relatively new, it has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis.
Many objections have been raised over the development of GMOs, particularly their commercialization. Many of these involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. Other concerns are the objectivity and rigor of regulatory authorities, contamination of non-genetically modified food, control of the food supply, patenting of life, and the use of intellectual property rights. Although there is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, GM food safety is a leading issue with critics. Gene flow, impact on non-target organisms, and escape are the major environmental concerns. Countries have adopted regulatory measures to deal with these concerns. There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Key issues concerning regulators include whether GM food should be labeled and the status of gene-edited organisms.
Definition
The definition of a genetically modified organism (GMO) is not clear and varies widely between countries, international bodies, and other communities. At its broadest, the definition of a GMO can include anything that has had its genes altered, including by nature. Taking a less broad view, it can encompass every organism that has had its genes altered by humans, which would include all crops and livestock. In 1993, the Encyclopedia Britannica defined genetic engineering as "any of a wide range of techniques ... among them artificial insemination, in vitro fertilization (e.g., 'test-tube' babies), sperm banks, cloning, and gene manipulation." The European Union (EU) included a similarly broad definition in early reviews, specifically mentioning GMOs being produced by "selective breeding and other means of artificial selection". These definitions were promptly adjusted with a number of exceptions added as the result of pressure from scientific and farming communities, as well as developments in science. The EU definition later excluded traditional breeding, in vitro fertilization, induction of polyploidy, mutation breeding, and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process.
Another approach was the definition provided by the Food and Agriculture Organization, the World Health Organization, and the European Commission, stating that the organisms must be altered in a way that does "not occur naturally by mating and/or natural recombination". Progress in science, such as the discovery of horizontal gene transfer being a relatively common natural phenomenon, further added to the confusion on what "occurs naturally", which led to further adjustments and exceptions. There are examples of crops that fit this definition, but are not normally considered GMOs. For example, the grain crop triticale was fully developed in a laboratory in 1930 using various techniques to alter its genome.
Genetically engineered organism (GEO) can be considered a more precise term compared to GMO when describing organisms' genomes that have been directly manipulated with biotechnology. The Cartagena Protocol on Biosafety used the synonym living modified organism (LMO) in 2000 and defined it as "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology." Modern biotechnology is further defined as "In vitro nucleic acid techniques, including recombinant deoxyribonucleic acid (DNA) and direct injection of nucleic acid into cells or organelles, or fusion of cells beyond the taxonomic family."
Originally, the term GMO was not commonly used by scientists to describe genetically engineered organisms until after usage of GMO became common in popular media. The United States Department of Agriculture (USDA) considers GMOs to be plants or animals with heritable changes introduced by genetic engineering or traditional methods, while GEO specifically refers to organisms with genes introduced, eliminated, or rearranged using molecular biology, particularly recombinant DNA techniques, such as transgenesis.
The definitions focus on the process more than the product, which means there could be GMOs and non-GMOs with very similar genotypes and phenotypes. This has led scientists to label it a scientifically meaningless category, saying that it is impossible to group all the different types of GMOs under one common definition. It has also caused issues for organic institutions and groups looking to ban GMOs, and it poses problems as new processes are developed. The current definitions came in before genome editing became popular, and there is some confusion as to whether gene-edited organisms are GMOs. The EU has determined that its GMO definition includes "organisms obtained by mutagenesis" but has excluded them from regulation based on their "long safety record" and because they have "conventionally been used in a number of applications". In contrast, the USDA has ruled that gene-edited organisms are not considered GMOs.
Even greater inconsistency and confusion is associated with various "Non-GMO" or "GMO-free" labeling schemes in food marketing, where even products such as water or salt, which contain no organic substances or genetic material (and thus cannot be genetically modified by definition), are labeled to create an impression of being "more healthy".
Production
Creating a genetically modified organism (GMO) is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism. This gene can be taken from a cell or artificially synthesized. If the chosen gene or the donor organism's genome has been well studied, it may already be accessible from a genetic library. The gene is then combined with other genetic elements, including a promoter and terminator region and a selectable marker, as sketched below.
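As an illustration of how such a construct is organized, the following sketch assembles an expression cassette from placeholder parts. All names and sequences here are hypothetical stand-ins, not real genetic elements; the sketch shows only the ordering of promoter, gene, terminator, and marker described above.

```python
# A minimal sketch of how an expression cassette is organized, using
# placeholder sequences. Real constructs use well-characterized parts;
# the names and sequences here are hypothetical illustrations only.

parts = {
    "promoter":   "TTGACATATAAT",  # drives transcription of the insert
    "gene":       "ATGGCTAAAGGT",  # gene of interest (coding sequence)
    "terminator": "AATAAA",        # ends transcription
    "marker":     "ATGAGCCATATT",  # selectable marker, e.g. antibiotic resistance
}

# The cassette is simply the parts joined in functional order; in the
# laboratory this is done with restriction enzymes or DNA synthesis.
cassette = parts["promoter"] + parts["gene"] + parts["terminator"] + parts["marker"]
print(len(cassette), "bp construct:", cassette)
```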
A number of techniques are available for inserting the isolated gene into the host genome. Bacteria can be induced to take up foreign DNA, usually by exposure to heat shock or by electroporation. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors. In plants the DNA is often inserted using Agrobacterium-mediated recombination, biolistics or electroporation.
As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene.
Traditionally the new genetic material was inserted randomly within the host genome. Gene targeting techniques, which create double-stranded breaks and take advantage of the cell's natural homologous recombination repair systems, have been developed to target insertion to exact locations. Genome editing uses artificially engineered nucleases that create breaks at specific points. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient.
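To illustrate how the Cas9-guideRNA system is pointed at a specific location, the sketch below scans a DNA string for the NGG protospacer-adjacent motif (PAM) required by the commonly used SpCas9 nuclease and lists the 20-nucleotide targets preceding it. The input sequence is a made-up example, and real guide design also weighs off-target matches and other criteria.

```python
# Sketch: finding candidate SpCas9 target sites on one strand.
# SpCas9 cuts near an "NGG" PAM; the 20 nt immediately preceding the
# PAM is the protospacer that the guide RNA must match. The input
# sequence below is hypothetical.

def find_spcas9_sites(seq: str):
    sites = []
    for i in range(20, len(seq) - 2):
        # PAM is any nucleotide followed by two Gs (NGG)
        if seq[i + 1 : i + 3] == "GG":
            protospacer = seq[i - 20 : i]
            sites.append((i - 20, protospacer, seq[i : i + 3]))
    return sites

dna = "ATGCTTAGCCGTACCGGTTAACGTAGCTAGGCTTACCGGATCGTAGCTAGG"
for pos, guide, pam in find_spcas9_sites(dna):
    print(f"target at {pos}: {guide} | PAM {pam}")
```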
History
Humans have domesticated plants and animals since around 12,000 BCE, using selective breeding or artificial selection (as contrasted with natural selection). The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification. Various advancements in genetics allowed humans to directly alter the DNA and therefore genes of organisms. In 1972, Paul Berg created the first recombinant DNA molecule when he combined DNA from a monkey virus with that of the lambda virus.
Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973. They took a gene from a bacterium that provided resistance to the antibiotic kanamycin, inserted it into a plasmid and then induced other bacteria to incorporate the plasmid. The bacteria that had successfully incorporated the plasmid were then able to survive in the presence of kanamycin. Boyer and Cohen expressed other genes in bacteria. This included genes from the toad Xenopus laevis in 1974, creating the first GMO expressing a gene from an organism of a different kingdom.
In 1974, Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. However, it took another eight years before transgenic mice were developed that passed the transgene to their offspring. Genetically modified mice were created in 1984 that carried cloned oncogenes, predisposing them to developing cancer. Mice with genes removed (termed knockout mice) were created in 1989. The first transgenic livestock were produced in 1985, and the first animals to synthesize transgenic proteins in their milk were mice, in 1987. The mice were engineered to produce human tissue plasminogen activator, a protein involved in breaking down blood clots.
In 1983, the first genetically engineered plant was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton. They infected tobacco with Agrobacterium transformed with an antibiotic resistance gene and through tissue culture techniques were able to grow a new plant containing the resistance gene. The gene gun was invented in 1987, allowing transformation of plants not susceptible to Agrobacterium infection. In 2000, Vitamin A-enriched golden rice was the first plant developed with increased nutrient value.
In 1976, Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson; a year later, the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982. In 1988, the first human antibodies were produced in plants. In 1987, a strain of Pseudomonas syringae became the first genetically modified organism to be released into the environment when strawberry and potato fields in California were sprayed with it.
The first genetically modified crop, an antibiotic-resistant tobacco plant, was produced in 1982. China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994, Calgene attained approval to commercially release the Flavr Savr tomato, the first genetically modified food. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe. An insect-resistant potato was approved for release in the US in 1995, and by 1996 approval had been granted to commercially grow 8 transgenic crops and one flower crop (carnation) in 6 countries plus the EU.
In 2010, scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. They named it Synthia and it was the world's first synthetic life form.
The first genetically modified animal to be commercialized was the GloFish, a zebrafish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light. It was released to the US market in 2003. In 2015, AquAdvantage salmon became the first genetically modified animal to be approved for food use. Approval is for fish raised in Panama and sold in the US. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer.
Bacteria
Bacteria were the first organisms to be genetically modified in the laboratory, due to the relative ease of modifying their chromosomes. This ease made them important tools for the creation of other GMOs. Genes and other genetic information from a wide range of organisms can be added to a plasmid and inserted into bacteria for storage and modification. Bacteria are cheap, easy to grow, clonal, multiply quickly and can be stored at −80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. A large number of custom plasmids make manipulating DNA extracted from bacteria relatively easy.
Their ease of use has made them great tools for scientists looking to study gene function and evolution. The simplest model organisms come from bacteria, with most of our early understanding of molecular biology coming from studying Escherichia coli. Scientists can easily manipulate and combine genes within the bacteria to create novel or disrupted proteins and observe the effect this has on various molecular systems. Researchers have combined the genes from bacteria and archaea, leading to insights on how these two diverged in the past. In the field of synthetic biology, they have been used to test various synthetic approaches, from synthesizing genomes to creating novel nucleotides.
Bacteria have been used in the production of food for a long time, and specific strains have been developed and selected for that work on an industrial scale. They can be used to produce enzymes, amino acids, flavorings, and other compounds used in food production. With the advent of genetic engineering, new genetic changes can easily be introduced into these bacteria. Most food-producing bacteria are lactic acid bacteria, and this is where the majority of research into genetically engineering food-producing bacteria has gone. The bacteria can be modified to operate more efficiently, reduce toxic byproduct production, increase output, create improved compounds, and remove unnecessary pathways. Food products from genetically modified bacteria include alpha-amylase, which converts starch to simple sugars, chymosin, which clots milk protein for cheese making, and pectinesterase, which improves fruit juice clarity. The majority are produced in the US, and even though regulations are in place to allow production in Europe, as of 2015 no food products derived from genetically modified bacteria were available there.
Genetically modified bacteria are used to produce large amounts of proteins for industrial use. The bacteria are generally grown to a large volume before the gene encoding the protein is activated. The bacteria are then harvested and the desired protein purified from them. The high cost of extraction and purification has meant that only high value products have been produced at an industrial scale. The majority of these products are human proteins for use in medicine. Many of these proteins are impossible or difficult to obtain via natural methods and they are less likely to be contaminated with pathogens, making them safer. The first medicinal use of GM bacteria was to produce the protein insulin to treat diabetes. Other medicines produced include clotting factors to treat hemophilia, human growth hormone to treat various forms of dwarfism, interferon to treat some cancers, erythropoietin for anemic patients, and tissue plasminogen activator which dissolves blood clots. Outside of medicine they have been used to produce biofuels. There is interest in developing an extracellular expression system within the bacteria to reduce costs and make the production of more products economical.
With a greater understanding of the role that the microbiome plays in human health, there is a potential to treat diseases by genetically altering the bacteria to, themselves, be therapeutic agents. Ideas include altering gut bacteria so they destroy harmful bacteria, or using bacteria to replace or increase deficient enzymes or proteins. One research focus is to modify Lactobacillus, bacteria that naturally provide some protection against HIV, with genes that will further enhance this protection. If the bacteria do not form colonies inside the patient, the person must repeatedly ingest the modified bacteria in order to get the required doses. Enabling the bacteria to form a colony could provide a more long-term solution, but could also raise safety concerns, as interactions between bacteria and the human body are less well understood than with traditional drugs. There are concerns that horizontal gene transfer to other bacteria could have unknown effects. As of 2018, clinical trials were underway testing the efficacy and safety of these treatments.
For over a century, bacteria have been used in agriculture. Crops have been inoculated with Rhizobia (and more recently Azospirillum) to increase their production or to allow them to be grown outside their original habitat. Application of Bacillus thuringiensis (Bt) and other bacteria can help protect crops from insect infestation and plant diseases. With advances in genetic engineering, these bacteria have been manipulated for increased efficiency and expanded host range. Markers have also been added to aid in tracing the spread of the bacteria. The bacteria that naturally colonize certain crops have also been modified, in some cases to express the Bt genes responsible for pest resistance. Pseudomonas strains of bacteria cause frost damage by nucleating water into ice crystals around themselves. This led to the development of ice-minus bacteria, which have the ice-forming genes removed. When applied to crops they can compete with the non-modified bacteria and confer some frost resistance.
Other uses for genetically modified bacteria include bioremediation, where the bacteria are used to convert pollutants into a less toxic form. Genetic engineering can increase the levels of the enzymes used to degrade a toxin or make the bacteria more stable under environmental conditions. Bioart has also been created using genetically modified bacteria. In the 1980s artist Joe Davis and geneticist Dana Boyd converted the Germanic symbol for femininity (ᛉ) into binary code and then into a DNA sequence, which was then expressed in Escherichia coli. This was taken a step further in 2012, when a whole book was encoded onto DNA. Paintings have also been produced using bacteria transformed with fluorescent proteins.
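The exact encodings used in these projects are not given here; the sketch below shows the general idea of storing binary data in DNA with a hypothetical two-bits-per-base mapping (00→A, 01→C, 10→G, 11→T). It is illustrative only, similar in spirit to but not identical with any published scheme.

```python
# Sketch of encoding arbitrary bytes into a DNA sequence using a
# hypothetical 2-bits-per-base mapping; real projects (Microvenus,
# DNA-encoded books) used their own schemes, so this is illustrative.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

dna = encode("hi".encode())
print(dna)                   # CGGACGGC
assert decode(dna) == b"hi"  # the round trip recovers the data
```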
Viruses
Viruses are often modified so they can be used as vectors for inserting genetic information into other organisms. This process is called transduction, and if successful the recipient of the introduced DNA becomes a GMO. Different viruses have different efficiencies and capabilities. Researchers can use this to control for various factors, including the target location, insert size, and duration of gene expression. Any dangerous sequences inherent in the virus must be removed, while those that allow the gene to be delivered effectively are retained.
While viral vectors can be used to insert DNA into almost any organism, they are especially relevant for their potential in treating human disease. Although primarily still at trial stages, there have been some successes using gene therapy to replace defective genes. This is most evident in curing patients with severe combined immunodeficiency arising from adenosine deaminase deficiency (ADA-SCID), although the development of leukemia in some ADA-SCID patients, along with the death of Jesse Gelsinger in a 1999 trial, set back the development of this approach for many years. In 2009, another breakthrough was achieved when an eight-year-old boy with Leber's congenital amaurosis regained normal eyesight, and in 2016 GlaxoSmithKline gained approval to commercialize a gene therapy treatment for ADA-SCID. As of 2018, there are a substantial number of clinical trials underway, including treatments for hemophilia, glioblastoma, chronic granulomatous disease, cystic fibrosis and various cancers.
The most common viral vectors are derived from adenoviruses, as they can carry up to 7.5 kb of foreign DNA and infect a relatively broad range of host cells, although they have been known to elicit immune responses in the host and only provide short-term expression. Other common vectors are adeno-associated viruses, which have lower toxicity and longer-term expression, but can only carry about 4 kb of DNA. Herpes simplex viruses make promising vectors, having a carrying capacity of over 30 kb and providing long-term expression, although they are less efficient at gene delivery than other vectors. The best vectors for long-term integration of the gene into the host genome are retroviruses, but their propensity for random integration is problematic. Lentiviruses are a part of the same family as retroviruses with the advantage of infecting both dividing and non-dividing cells, whereas retroviruses only target dividing cells. Other viruses that have been used as vectors include alphaviruses, flaviviruses, measles viruses, rhabdoviruses, Newcastle disease virus, poxviruses, and picornaviruses.
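The carrying capacities quoted above lend themselves to a simple feasibility check. The sketch below records them as rough figures taken from this paragraph (in kilobases) and flags which vectors could hold a given insert; it is a toy comparison, not a vector-selection tool.

```python
# Rough payload check for viral vectors, using the approximate
# carrying capacities quoted in the text (values in kilobases).

CAPACITY_KB = {
    "adenovirus": 7.5,            # broad host range, short-term expression
    "adeno-associated virus": 4,  # lower toxicity, longer-term expression
    "herpes simplex virus": 30,   # large capacity, less efficient delivery
}

def vectors_that_fit(insert_kb: float):
    """Return the vectors whose capacity is at least the insert size."""
    return [name for name, cap in CAPACITY_KB.items() if insert_kb <= cap]

print(vectors_that_fit(6.2))  # ['adenovirus', 'herpes simplex virus']
```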
Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. Genetic engineering could theoretically be used to create viruses with the virulent genes removed. This does not affect the virus's infectivity, invokes a natural immune response, and there is no chance that the viruses will regain their virulence function, which can occur with some other vaccines. As such they are generally considered safer and more efficient than conventional vaccines, although concerns remain over non-target infection, potential side effects and horizontal gene transfer to other viruses. Another potential approach is to use vectors to create novel vaccines for diseases that have no vaccines available, or whose available vaccines do not work effectively, such as AIDS, malaria, and tuberculosis. The most effective vaccine against tuberculosis, the Bacillus Calmette–Guérin (BCG) vaccine, only provides partial protection. A modified vaccine expressing an M. tuberculosis antigen is able to enhance BCG protection. It has been shown to be safe to use in phase II trials, although not as effective as initially hoped. Other vector-based vaccines have already been approved and many more are being developed.
Another potential use of genetically modified viruses is to alter them so they can directly treat diseases. This can be through expression of protective proteins or by directly targeting infected cells. In 2004, researchers reported that a genetically modified virus that exploits the selfish behavior of cancer cells might offer an alternative way of killing tumours. Since then, several researchers have developed genetically modified oncolytic viruses that show promise as treatments for various types of cancer. In 2017, researchers genetically modified a virus to express spinach defensin proteins. The virus was injected into orange trees to combat citrus greening disease that had reduced orange production by 70% since 2005.
Natural viral diseases, such as myxomatosis and rabbit hemorrhagic disease, have been used to help control pest populations. Over time the surviving pests become resistant, leading researchers to look at alternative methods. Genetically modified viruses that make the target animals infertile through immunocontraception have been created in the laboratory, as well as others that target the developmental stage of the animal. There are concerns with using this approach regarding virus containment and cross-species infection. Sometimes the same virus can be modified for contrasting purposes. Genetic modification of the myxoma virus has been proposed to conserve European wild rabbits in the Iberian peninsula and to help regulate them in Australia. To protect the Iberian species from viral diseases, the myxoma virus was genetically modified to immunize the rabbits, while in Australia the same myxoma virus was genetically modified to lower fertility in the Australian rabbit population.
Outside of biology, scientists have used a genetically modified virus to construct a lithium-ion battery and other nanostructured materials. It is possible to engineer bacteriophages to express modified proteins on their surface and join them up in specific patterns (a technique called phage display). These structures have potential uses for energy storage and generation, biosensing and tissue regeneration, with new materials produced including quantum dots, liquid crystals, nanorings and nanofibres. The battery was made by engineering M13 bacteriophages so they would coat themselves in iron phosphate and then assemble themselves along a carbon nanotube. This created a highly conductive medium for use in a cathode, allowing energy to be transferred quickly. These batteries could be constructed at lower temperatures with non-toxic chemicals, making them more environmentally friendly.
Fungi
Fungi can be used for many of the same processes as bacteria. For industrial applications, yeasts combine the bacterial advantages of being a single-celled organism that is easy to manipulate and grow with the advanced protein modifications found in eukaryotes. They can be used to produce large complex molecules for use in food, pharmaceuticals, hormones, and steroids. Yeast is important for wine production and as of 2016 two genetically modified yeasts involved in the fermentation of wine have been commercialized in the United States and Canada. One has increased malolactic fermentation efficiency, while the other prevents the production of dangerous ethyl carbamate compounds during fermentation. There have also been advances in the production of biofuel from genetically modified fungi.
Fungi, being the most common pathogens of insects, make attractive biopesticides. Unlike bacteria and viruses, they have the advantage of infecting insects by contact alone, although they are outcompeted in efficiency by chemical pesticides. Genetic engineering can improve virulence, usually by adding more virulent proteins, increasing infection rate or enhancing spore persistence. Many of the disease-carrying vectors are susceptible to entomopathogenic fungi. Attractive targets for biological control are mosquitoes, vectors for a range of deadly diseases including malaria, yellow fever and dengue fever. Mosquitoes can evolve quickly, so controlling them becomes a balancing act of killing them before the Plasmodium they carry becomes infectious, but not so fast that they become resistant to the fungi. By genetically engineering fungi like Metarhizium anisopliae and Beauveria bassiana to delay the development of mosquito infectiousness, the selection pressure to evolve resistance is reduced. Another strategy is to add proteins to the fungi that block transmission of malaria or remove the Plasmodium altogether.
Agaricus bisporus, the common white button mushroom, has been gene edited to resist browning, giving it a longer shelf life. The process used CRISPR to knock out a gene that encodes polyphenol oxidase. As the edit did not introduce any foreign DNA into the organism, it was not deemed to be regulated under existing GMO frameworks, making it the first CRISPR-edited organism to be approved for release. This has intensified debates as to whether gene-edited organisms should be considered genetically modified organisms and how they should be regulated.
Plants
Plants have been engineered for scientific research, to display new flower colors, deliver vaccines, and to create enhanced crops. Many plants are pluripotent, meaning that a single cell from a mature plant can be harvested and, under the right conditions, can develop into a new plant. Genetic engineers can take advantage of this ability: by selecting cells that have been successfully transformed, a new plant can be grown that contains the transgene in every cell, through a process known as tissue culture.
Many of the advances in the field of genetic engineering have come from experimentation with tobacco. Major advances in tissue culture and plant cellular mechanisms for a wide range of plants have originated from systems developed in tobacco. It was the first plant to be altered using genetic engineering and is considered a model organism for not only genetic engineering, but a range of other fields. As such, the transgenic tools and procedures are well established, making tobacco one of the easiest plants to transform. Another major model organism relevant to genetic engineering is Arabidopsis thaliana. Its small genome and short life cycle make it easy to manipulate, and it contains many homologs to important crop species. It was the first plant sequenced, has a host of online resources available and can be transformed by simply dipping a flower in a transformed Agrobacterium solution.
In research, plants are engineered to help discover the functions of certain genes. The simplest way to do this is to remove the gene and see what phenotype develops compared to the wild type form. Any differences are possibly the result of the missing gene. Unlike mutagenesis, genetic engineering allows targeted removal without disrupting other genes in the organism. Some genes are only expressed in certain tissues, so reporter genes, like GUS, can be attached to the gene of interest, allowing visualization of its location. Another way to test a gene is to alter it slightly, return it to the plant, and see whether it still has the same effect on phenotype. Other strategies include attaching the gene to a strong promoter to see what happens when it is overexpressed, or forcing the gene to be expressed in a different location or at a different developmental stage.
Some genetically modified plants are purely ornamental. They are modified for flower color, fragrance, flower shape and plant architecture. The first genetically modified ornamentals commercialized altered flower color. Carnations were released in 1997, with the most popular genetically modified ornamental, a blue rose (actually lavender or mauve), created in 2004. The roses are sold in Japan, the United States, and Canada. Other genetically modified ornamentals include Chrysanthemum and Petunia. As well as increasing aesthetic value, there are plans to develop ornamentals that use less water or are resistant to the cold, which would allow them to be grown outside their natural environments.
It has been proposed to genetically modify some plant species threatened by extinction to be resistant to invasive pests and diseases, such as the emerald ash borer in North America and the fungal disease Ceratocystis platani in European plane trees. The papaya ringspot virus devastated papaya trees in Hawaii in the twentieth century until transgenic papaya plants were given pathogen-derived resistance. However, genetic modification for conservation in plants remains mainly speculative. A unique concern is that a transgenic species may no longer bear enough resemblance to the original species to truly claim that the original species is being conserved. Instead, the transgenic species may be genetically different enough to be considered a new species, thus diminishing the conservation worth of genetic modification.
Crops
Genetically modified crops are genetically modified plants that are used in agriculture. The first crops developed were used for animal or human food and provide resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops could be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.
There are three main aims to agricultural advancement: increased production, improved conditions for agricultural workers, and sustainability. GM crops contribute by improving harvests through reducing insect pressure, increasing nutrient value and tolerating different abiotic stresses. Despite this potential, as of 2018, the commercialized crops are limited mostly to cash crops like cotton, soybean, maize and canola, and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance. Soybeans accounted for half of all genetically modified crops planted in 2014. Adoption by farmers has been rapid: between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100. Geographically, though, the spread has been uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013. Although doubts have been raised, most studies have found growing GM crops to be beneficial to farmers through decreased pesticide use as well as increased crop yield and farm profit.
The majority of GM crops have been modified to be resistant to selected herbicides, usually one based on glyphosate or glufosinate. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties; in the USA 93% of soybeans and most of the GM maize grown are glyphosate tolerant. Most currently available genes used to engineer insect resistance come from the Bacillus thuringiensis bacterium and code for delta endotoxins. A few use the genes that encode for vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contain other traits, which include providing virus resistance, delaying senescence and altering the plant's composition.
Golden rice is the best-known GM crop aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6 μg/g of the carotenoids, with further development increasing this 23-fold (1.6 × 23 ≈ 37 μg/g). It gained its first approvals for use as food in 2018.
Plants and plant cells have been genetically engineered for production of biopharmaceuticals in bioreactors, a process known as pharming. Work has been done with the duckweed Lemna minor, the alga Chlamydomonas reinhardtii and the moss Physcomitrella patens. Biopharmaceuticals produced include cytokines, hormones, antibodies, enzymes and vaccines, most of which are accumulated in the plant seeds. Many drugs also contain natural plant ingredients, and the pathways that lead to their production have been genetically altered or transferred to other plant species to produce greater volume. Other options for bioreactors are biopolymers and biofuels. Unlike bacteria, plants can modify the proteins post-translationally, allowing them to make more complex molecules. They also pose less risk of being contaminated. Therapeutics have been cultured in transgenic carrot and tobacco cells, including a drug treatment for Gaucher's disease.
Vaccine production and storage has great potential in transgenic plants. Vaccines are expensive to produce, transport, and administer, so having a system that could produce them locally would allow greater access to poorer and developing areas. As well as purifying vaccines expressed in plants, it is also possible to produce edible vaccines in plants. Edible vaccines stimulate the immune system when ingested to protect against certain diseases. Being stored in plants reduces the long-term cost, as they can be disseminated without the need for cold storage, don't need to be purified, and have long-term stability. Being housed within plant cells also provides some protection from the gut acids upon digestion. However, the cost of developing, regulating, and containing transgenic plants is high, leading to most current plant-based vaccine development being applied to veterinary medicine, where the controls are not as strict.
Genetically modified crops have been proposed as one of the ways to reduce farming-related emissions due to higher yield, reduced use of pesticides, reduced use of tractor fuel and no tillage. According to a 2021 study, in the EU alone widespread adoption of GE crops would reduce greenhouse gas emissions by 33 million tons of CO2 equivalent, or 7.5% of total farming-related emissions.
Animals
The vast majority of genetically modified animals are at the research stage, with the number close to entering the market remaining small. As of 2018 only three genetically modified animals have been approved, all in the USA. A goat and a chicken have been engineered to produce medicines, and a salmon has been engineered for increased growth. Despite the differences and difficulties in modifying them, the end aims are much the same as for plants. GM animals are created for research purposes, production of industrial or therapeutic products, agricultural uses, or improving their health. There is also a market for creating genetically modified pets.
Mammals
The process of genetically engineering mammals is slow, tedious, and expensive. However, new technologies are making genetic modifications easier and more precise. The first transgenic mammals were produced by injecting viral DNA into embryos and then implanting the embryos in females. The embryo would develop, and it would be hoped that some of the genetic material would be incorporated into the reproductive cells. Then researchers would have to wait until the animal reached breeding age, and the offspring would be screened for the presence of the gene in every cell. The development of the CRISPR-Cas9 gene editing system has since provided a cheap and fast way of directly modifying germ cells, effectively halving the amount of time needed to develop genetically modified mammals.
Mammals are the best models for human disease, making genetically engineered ones vital to the discovery and development of cures and treatments for many serious diseases. Knocking out genes responsible for human genetic disorders allows researchers to study the mechanism of the disease and to test possible cures. Genetically modified mice have been the most common mammals used in biomedical research, as they are cheap and easy to manipulate. Pigs are also a good target as they have a similar body size and anatomical features, physiology, pathophysiological response and diet. Nonhuman primates are the most similar model organisms to humans, but there is less public acceptance towards using them as research animals. In 2009, scientists announced that they had successfully transferred a gene into a primate species (marmosets) for the first time. Their first research target for these marmosets was Parkinson's disease, but they were also considering amyotrophic lateral sclerosis and Huntington's disease.
Human proteins expressed in mammals are more likely to be similar to their natural counterparts than those expressed in plants or microorganisms. Stable expression has been accomplished in sheep, pigs, rats and other animals. In 2009, the first human biological drug produced from such an animal, a goat, was approved. The drug, ATryn, is an anticoagulant which reduces the probability of blood clots during surgery or childbirth and is extracted from the goat's milk. Human alpha-1-antitrypsin is another protein that has been produced from goats and is used in treating humans with this deficiency. Another medicinal area is in creating pigs with greater capacity for human organ transplants (xenotransplantation). Pigs have been genetically modified so that their organs can no longer carry retroviruses, or have modifications to reduce the chance of rejection. Chimeric pigs could carry fully human organs. The first human transplant of a genetically modified pig heart occurred in 2022, and the first pig kidney transplant into a living patient followed in 2024.
Livestock are modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance and survival. Animals have been engineered to grow faster, be healthier and resist diseases. Modifications have also improved the wool production of sheep and udder health of cows. Goats have been genetically engineered to produce strong spiderweb-like silk proteins in their milk. A GM pig called Enviropig was created with the capability of digesting plant phosphorus more efficiently than conventional pigs. They could reduce water pollution since they excrete 30 to 70% less phosphorus in manure. Dairy cows have been genetically engineered to produce milk that would be the same as human breast milk. This could potentially benefit mothers who cannot produce breast milk but want their children to have breast milk rather than formula. Researchers have also developed a genetically engineered cow that produces allergy-free milk.
Scientists have genetically engineered several organisms, including some mammals, to include green fluorescent protein (GFP), for research purposes. GFP and other similar reporting genes allow easy visualization and localization of the products of the genetic modification. Fluorescent pigs have been bred to study human organ transplants, regenerating ocular photoreceptor cells, and other topics. In 2011, green-fluorescent cats were created to help find therapies for HIV/AIDS and other diseases as feline immunodeficiency virus is related to HIV.
There have been suggestions that genetic engineering could be used to bring animals back from extinction. It involves changing the genome of a close living relative to resemble the extinct one, and is currently being attempted with the passenger pigeon. Genes associated with the woolly mammoth have been added to the genome of an African elephant, although the lead researcher says he has no intention of creating live elephants, and transferring all the genes to reverse years of genetic evolution is a long way from being feasible. It is more likely that scientists could use this technology to conserve endangered animals by bringing back lost diversity or transferring evolved genetic advantages from adapted organisms to those that are struggling.
Humans
Gene therapy uses genetically modified viruses to deliver genes which can cure disease in humans. Although gene therapy is still relatively new, it has had some successes. It has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis. Treatments are also being developed for a range of other currently incurable diseases, such as cystic fibrosis, sickle cell anemia, Parkinson's disease, cancer, diabetes, heart disease and muscular dystrophy. These treatments only affect somatic cells, meaning any changes would not be inheritable. Germline gene therapy results in any change being inheritable, which has raised concerns within the scientific community.
In 2015, CRISPR was used to edit the DNA of non-viable human embryos. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, in an attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier and that they carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature.
Fish
Genetically modified fish are used for scientific research, as pets and as a food source. Aquaculture is a growing industry, currently providing over half the consumed fish worldwide. Through genetic engineering it is possible to increase growth rates, reduce food intake, remove allergenic properties, increase cold tolerance and provide disease resistance. Fish can also be used to detect aquatic pollution or function as bioreactors.
Several groups have been developing zebrafish to detect pollution by attaching fluorescent proteins to genes activated by the presence of pollutants. The fish will then glow and can be used as environmental sensors. The GloFish is a brand of genetically modified fluorescent zebrafish with bright red, green, and orange fluorescent color. It was originally developed by one of the groups to detect pollution, but is now part of the ornamental fish trade, becoming the first genetically modified animal to become publicly available as a pet when in 2003 it was introduced for sale in the USA.
GM fish are widely used in basic research in genetics and development. Two species of fish, zebrafish and medaka, are most commonly modified because they have optically clear chorions (membranes in the egg), rapidly develop, and the one-cell embryo is easy to see and microinject with transgenic DNA. Zebrafish are model organisms for developmental processes, regeneration, genetics, behavior, disease mechanisms and toxicity testing. Their transparency allows researchers to observe developmental stages, intestinal functions and tumour growth. The generation of transgenic protocols (whole organism, cell or tissue specific, tagged with reporter genes) has increased the level of information gained by studying these fish.
GM fish have been developed with promoters driving an over-production of growth hormone for use in the aquaculture industry to increase the speed of development and potentially reduce fishing pressure on wild stocks. This has resulted in dramatic growth enhancement in several species, including salmon, trout and tilapia. AquaBounty Technologies, a biotechnology company, has produced a salmon (called AquAdvantage salmon) that can mature in half the time of wild salmon. It obtained regulatory approval in 2015, the first non-plant GMO food to be commercialized. As of August 2017, GMO salmon was being sold in Canada. Sales in the US started in May 2021.
Insects
In biological research, transgenic fruit flies (Drosophila melanogaster) are model organisms used to study the effects of genetic changes on development. Fruit flies are often preferred over other animals due to their short life cycle and low maintenance requirements. They also have a relatively simple genome compared to many vertebrates, with typically only one copy of each gene, making phenotypic analysis easy. Drosophila have been used to study genetics and inheritance, embryonic development, learning, behavior, and aging. The discovery of transposons, in particular the P element, in Drosophila provided an early method to add transgenes to their genome, although this has been superseded by more modern gene-editing techniques.
Due to their significance to human health, scientists are looking at ways to control mosquitoes through genetic engineering. Malaria-resistant mosquitoes have been developed in the laboratory by inserting a gene that reduces the development of the malaria parasite and then using homing endonucleases to rapidly spread that gene throughout the male population (known as a gene drive). This approach has been taken further by using the gene drive to spread a lethal gene. In trials, the populations of Aedes aegypti mosquitoes, the single most important carrier of dengue fever and Zika virus, were reduced by between 80% and 90%. Another approach is to use the sterile insect technique, whereby males genetically engineered to be sterile outcompete viable males, reducing population numbers.
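As a toy illustration of why a homing gene drive spreads so quickly, the sketch below simulates random mating in which drive/wild-type heterozygotes are converted to drive homozygotes with a given homing efficiency. The population size, starting frequency, and 90% efficiency are arbitrary assumptions for illustration, not figures from any trial.

```python
import random

# Toy gene-drive model: 'D' is the drive allele, 'w' is wild type.
# In D/w heterozygotes, homing converts the wild-type allele to the
# drive with probability HOMING. All parameters are illustrative.

HOMING = 0.9
POP = 10_000

def offspring(p1, p2):
    geno = random.choice(p1) + random.choice(p2)  # one allele from each parent
    if set(geno) == {"D", "w"} and random.random() < HOMING:
        geno = "DD"  # homing converts the heterozygote
    return geno

pop = ["Dw"] * 500 + ["ww"] * (POP - 500)  # 2.5% initial drive alleles
for gen in range(1, 11):
    pop = [offspring(random.choice(pop), random.choice(pop)) for _ in range(POP)]
    freq = sum(g.count("D") for g in pop) / (2 * POP)
    print(f"generation {gen}: drive allele frequency {freq:.2f}")
```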
Other insect pests that make attractive targets are moths. Diamondback moths cause US$4 to $5 billion of damage each year worldwide. The approach is similar to the sterile technique tested on mosquitoes, where males are transformed with a gene that prevents any females born from reaching maturity. They underwent field trials in 2017. Genetically modified moths have previously been released in field trials: a strain of pink bollworm sterilized with radiation was genetically engineered to express a red fluorescent protein, making it easier for researchers to monitor them.
The silkworm, the larval stage of Bombyx mori, is an economically important insect in sericulture. Scientists are developing strategies to enhance silk quality and quantity. There is also potential to use the silk-producing machinery to make other valuable proteins. Proteins currently developed to be expressed by silkworms include human serum albumin, human collagen α-chain, mouse monoclonal antibody and N-glycanase. Silkworms have been created that produce spider silk, a stronger but extremely difficult to harvest silk, and even novel silks.
Other
Systems have been developed to create transgenic organisms in a wide variety of other animals. Chickens have been genetically modified for a variety of purposes. This includes studying embryo development, preventing the transmission of bird flu and providing evolutionary insights using reverse engineering to recreate dinosaur-like phenotypes. A GM chicken that produces the drug Kanuma, an enzyme that treats a rare condition, in its eggs gained US regulatory approval in 2015. Genetically modified frogs, in particular Xenopus laevis and Xenopus tropicalis, are used in developmental biology research. GM frogs can also be used as pollution sensors, especially for endocrine-disrupting chemicals. There are proposals to use genetic engineering to control cane toads in Australia.
The nematode Caenorhabditis elegans is one of the major model organisms for researching molecular biology. RNA interference (RNAi) was discovered in C. elegans and could be induced by simply feeding them bacteria modified to express double stranded RNA. It is also relatively easy to produce stable transgenic nematodes and this along with RNAi are the major tools used in studying their genes. The most common use of transgenic nematodes has been studying gene expression and localization by attaching reporter genes. Transgenes can also be combined with RNAi techniques to rescue phenotypes, study gene function, image cell development in real time or control expression for different tissues or developmental stages. Transgenic nematodes have been used to study viruses, toxicology, diseases, and to detect environmental pollutants.
The gene responsible for albinism in sea cucumbers has been found and used to engineer white sea cucumbers, a rare delicacy. The technology also opens the way to investigate the genes responsible for some of the cucumber's more unusual traits, including hibernating in summer, eviscerating their intestines, and dissolving their bodies upon death. Flatworms have the ability to regenerate themselves from a single cell. Until 2017 there was no effective way to transform them, which hampered research. By using microinjection and radiation, scientists have now created the first genetically modified flatworms. The bristle worm, a marine annelid, has also been modified. It is of interest due to its reproductive cycle being synchronized with lunar phases, its regeneration capacity and its slow evolution rate. Cnidaria such as Hydra and the sea anemone Nematostella vectensis are attractive model organisms to study the evolution of immunity and certain developmental processes. Other animals that have been genetically modified include snails, geckos, turtles, crayfish, oysters, shrimp, clams, abalone and sponges.
Regulation
Genetically modified organisms are regulated by government agencies. This applies to research as well as the release of genetically modified organisms, including crops and food. The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. It is an international treaty that governs the transfer, handling, and use of genetically modified organisms. One hundred and fifty-seven countries are members of the Protocol and many use it as a reference point for their own regulations.
Universities and research institutes generally have a special committee that is responsible for approving any experiments that involve genetic engineering. Many experiments also need permission from a national regulatory group or legislation. All staff must be trained in the use of GMOs and all laboratories must gain approval from their regulatory agency to work with GMOs. The legislation covering GMOs is often derived from the regulations and guidelines in place for the non-GMO version of the organism, although it is more severe. There is a near-universal system for assessing the relative risks associated with GMOs and other agents to laboratory staff and the community. They are assigned to one of four risk categories based on their virulence, the severity of the disease, the mode of transmission, and the availability of preventive measures or treatments. There are four biosafety levels that a laboratory can fall into, ranging from level 1 (which is suitable for working with agents not associated with disease) to level 4 (working with life-threatening agents). Different countries use different nomenclature to describe the levels and can have different requirements for what can be done at each level.
There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. Some nations have banned the release of GMOs or restricted their use, and others permit them with widely differing degrees of regulation. As of 2016, thirty-eight countries officially banned or prohibited the cultivation of GMOs and nine (Algeria, Bhutan, Kenya, Kyrgyzstan, Madagascar, Peru, Russia, Venezuela and Zimbabwe) banned their importation. Most countries that do not allow GMO cultivation do permit research using GMOs. Despite regulation, illegal releases have sometimes occurred, due to weakness of enforcement.
The European Union (EU) differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the market for GMOs in Europe. Depending on the coexistence regulations, incentives for cultivation of GM crops differ. US policy focuses less on the process than other countries', looking instead at verifiable scientific risks and using the concept of substantial equivalence. Whether gene-edited organisms should be regulated in the same way as genetically modified organisms is debated. US regulations treat them as separate and do not regulate them under the same conditions, while in Europe a GMO is any organism created using genetic engineering techniques.
One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that absent scientific evidence of harm even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In the U.S., the National Bioengineered Food Disclosure Standard (Mandatory Compliance Date: January 1, 2022) requires labeling GM foods. In Canada, labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labeled. In 2014, sales of products that had been labeled as non-GMO grew 30 percent to $1.1 billion.
Controversy
There is controversy over GMOs, especially with regard to their release outside laboratory environments. The dispute involves consumers, producers, biotechnology companies, governmental regulators, non-governmental organizations, and scientists. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries. Most concerns are around the health and environmental effects of GMOs. These include whether they may provoke an allergic reaction, whether the transgenes could transfer to human cells, and whether genes not approved for human consumption could outcross into the food supply.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
As late as the 1990s, gene flow into wild populations was thought to be unlikely and rare, and, if it were to occur, easily eradicated. It was thought that this would add no additional environmental costs or risks – no effects were expected other than those already caused by pesticide applications. However, in the decades since, several such examples have been observed. Gene flow between GM crops and compatible plants, along with increased use of broad-spectrum herbicides, can increase the risk of herbicide-resistant weed populations. Debate over the extent and consequences of gene flow intensified in 2001 when a paper was published showing transgenes had been found in landrace maize in Mexico, the crop's center of diversity. Gene flow from GM crops to other organisms has been found to generally be lower than what would occur naturally. In order to address some of these concerns, some GMOs have been developed with traits to help control their spread. To prevent genetically modified salmon inadvertently breeding with wild salmon, all the fish raised for food are female and triploid, 99% are reproductively sterile, and they are raised in areas where escaped salmon could not survive. Bacteria have also been modified to depend on nutrients that cannot be found in nature, and genetic use restriction technology has been developed, though not yet marketed, that causes the second generation of GM plants to be sterile.
Other environmental and agronomic concerns include a decrease in biodiversity, an increase in secondary pests (non-targeted pests) and the evolution of resistant insect pests. In the areas of China and the US with Bt crops, the overall biodiversity of insects has increased and the impact of secondary pests has been minimal. Resistance was found to be slow to evolve when best practice strategies were followed. The impact of Bt crops on beneficial non-target organisms became a public issue after a 1999 paper suggested they could be toxic to monarch butterflies. Follow-up studies have since shown that the toxicity levels encountered in the field were not high enough to harm the larvae.
Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. With the ability to genetically engineer humans now possible there are ethical concerns over how far this technology should go, or if it should be used at all. Much debate revolves around where the line between treatment and enhancement is and whether the modifications should be inheritable. Other concerns include contamination of the non-genetically modified food supply, the rigor of the regulatory process, consolidation of control of the food supply in companies that make and sell GMOs, exaggeration of the benefits of genetic modification, or concerns over the use of herbicides with glyphosate. Other issues raised include the patenting of life and the use of intellectual property rights.
There are large differences in consumer acceptance of GMOs, with Europeans more likely to view GM food negatively than North Americans. GMOs arrived on the scene when public confidence in food safety in Europe was low, owing to recent food scares such as bovine spongiform encephalopathy and other scandals involving government regulation of products. This, along with campaigns run by various non-governmental organizations (NGOs), has been very successful in blocking or limiting the use of GM crops. NGOs like the Organic Consumers Association, the Union of Concerned Scientists, Greenpeace and other groups have said that risks have not been adequately identified and managed and that there are unanswered questions regarding the potential long-term impact on human health from food derived from GMOs. They propose mandatory labeling or a moratorium on such products.
References
External links
ISAAA database
GMO-Compass: Information on genetically modified organisms
Molecular biology
1973 introductions
Articles containing video clips | Genetically modified organism | [
"Chemistry",
"Engineering",
"Biology"
] | 12,613 | [
"Biochemistry",
"Genetic engineering",
"Genetically modified organisms",
"Molecular biology"
] |
12,354 | https://en.wikipedia.org/wiki/Greatest%20common%20divisor | In mathematics, the greatest common divisor (GCD), also known as greatest common factor (GCF), of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers a and b, the greatest common divisor of a and b is denoted gcd(a, b). For example, the GCD of 8 and 12 is 4, that is, gcd(8, 12) = 4.
In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor, etc. Historically, other names for the same concept have included greatest common measure.
This notion can be extended to polynomials (see Polynomial greatest common divisor) and other commutative rings (see below).
Overview
Definition
The greatest common divisor (GCD) of integers a and b, at least one of which is nonzero, is the greatest positive integer d such that d is a divisor of both a and b; that is, there are integers e and f such that a = de and b = df, and d is the largest such integer. The GCD of a and b is generally denoted gcd(a, b).
When one of a and b is zero, the GCD is the absolute value of the nonzero integer: gcd(a, 0) = gcd(0, a) = |a|. This case is important as the terminating step of the Euclidean algorithm.
The above definition is unsuitable for defining gcd(0, 0), since there is no greatest integer n such that n divides 0. However, zero is its own greatest divisor if greatest is understood in the context of the divisibility relation, so gcd(0, 0) is commonly defined as 0. This preserves the usual identities for GCD, and in particular Bézout's identity, namely that gcd(a, b) generates the same ideal as {a, b}. This convention is followed by many computer algebra systems. Nonetheless, some authors leave gcd(0, 0) undefined.
The GCD of a and b is their greatest positive common divisor in the preorder relation of divisibility. This means that the common divisors of a and b are exactly the divisors of their GCD. This is commonly proved by using either Euclid's lemma, the fundamental theorem of arithmetic, or the Euclidean algorithm. This is the meaning of "greatest" that is used for the generalizations of the concept of GCD.
Example
The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6.
Thus the complete list of divisors of 54 is 1, 2, 3, 6, 9, 18, 27, 54.
Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24.
The numbers that these two lists have in common are the common divisors of 54 and 24, that is, 1, 2, 3, 6.
Of these, the greatest is 6, so it is the greatest common divisor: gcd(54, 24) = 6.
Computing all divisors of the two numbers in this way is usually not efficient, especially for large numbers that have many divisors. Much more efficient methods are described in the section Calculation below.
Coprime numbers
Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1. For example, 9 and 28 are coprime.
A geometric view
For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
Applications
Reducing fractions
The greatest common divisor is useful for reducing fractions to the lowest terms. For example, gcd(42, 56) = 14; therefore, 42/56 = (3 · 14)/(4 · 14) = 3/4.
Least common multiple
The least common multiple of two integers that are not both zero can be computed from their greatest common divisor, by using the relation
lcm(a, b) = |a · b| / gcd(a, b).
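As an illustration (not part of the original article), this relation can be checked directly in Python, whose standard library provides both functions (math.lcm requires Python 3.9 or later):

```python
import math

a, b = 48, 180
# lcm(a, b) equals |a*b| divided by gcd(a, b)
assert math.lcm(a, b) == abs(a * b) // math.gcd(a, b)
print(math.lcm(a, b))  # 720
```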
Calculation
Using prime factorizations
Greatest common divisors can be computed by determining the prime factorizations of the two numbers and comparing factors. For example, to compute gcd(48, 180), we find the prime factorizations 48 = 2^4 · 3^1 and 180 = 2^2 · 3^2 · 5^1; the GCD is then 2^min(4,2) · 3^min(1,2) · 5^min(0,1) = 2^2 · 3^1 · 5^0 = 12. The corresponding LCM is then 2^max(4,2) · 3^max(1,2) · 5^max(0,1) = 2^4 · 3^2 · 5^1 = 720.
In practice, this method is only feasible for small numbers, as computing prime factorizations takes too long.
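Although impractical for large inputs, the factorization method is easy to express in code. The following is a minimal Python sketch (an illustration added here, not from the original article; the function names are arbitrary): it factors each number by trial division, then multiplies each shared prime raised to the minimum of its two exponents.

```python
from collections import Counter

def prime_factors(n):
    """Return the prime factorization of n as a Counter {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:          # whatever remains after trial division is itself prime
        factors[n] += 1
    return factors

def gcd_by_factorization(a, b):
    """GCD as the product of shared primes raised to their minimum exponents."""
    fa, fb = prime_factors(a), prime_factors(b)
    result = 1
    for p in fa.keys() & fb.keys():
        result *= p ** min(fa[p], fb[p])
    return result

assert gcd_by_factorization(48, 180) == 12
```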
Euclid's algorithm
The method introduced by Euclid for computing greatest common divisors is based on the fact that, given two positive integers a and b such that a > b, the common divisors of a and b are the same as the common divisors of a − b and b.
So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number with the difference of the numbers, and repeating this until the two numbers are equal: that is their greatest common divisor.
For example, to compute gcd(48, 18), one proceeds as follows:
(48, 18) → (30, 18) → (12, 18) → (12, 6) → (6, 6).
So gcd(48, 18) = 6.
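A minimal Python sketch of Euclid's subtraction method (an illustration, not from the original article):

```python
def gcd_subtraction(a, b):
    """Euclid's original method: repeatedly replace the larger of the two
    positive integers by the difference until both are equal."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

assert gcd_subtraction(48, 18) == 6
```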
This method can be very slow if one number is much larger than the other. So, the variant that follows is generally preferred.
Euclidean algorithm
A more efficient method is the Euclidean algorithm, a variant in which the difference of the two numbers a and b is replaced by the remainder of the Euclidean division (also called division with remainder) of a by b.
Denoting this remainder as a mod b, the algorithm replaces (a, b) with (b, a mod b) repeatedly until the pair is (d, 0), where d is the greatest common divisor.
For example, to compute gcd(48, 18), the computation is as follows:
gcd(48, 18) = gcd(18, 12) = gcd(12, 6) = gcd(6, 0) = 6.
This again gives gcd(48, 18) = 6.
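The remainder-based variant translates directly into a short loop. A minimal Python sketch (an illustration, not from the original article):

```python
def gcd(a, b):
    """Euclidean algorithm: replace (a, b) with (b, a mod b) until b is 0."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(48, 18) == 6
```

Replacing subtraction by the remainder operation is what makes this variant fast: every two iterations at least halve the larger operand, so the number of steps is logarithmic rather than linear in the size of the inputs.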
Binary GCD algorithm
The binary GCD algorithm is a variant of Euclid's algorithm that is specially adapted to the binary representation of the numbers, which is used in most computers.
The binary GCD algorithm differs from Euclid's algorithm essentially by dividing by two every even number that is encountered during the computation. Its efficiency results from the fact that, in binary representation, testing parity consists of testing the right-most digit, and dividing by two consists of removing the right-most digit.
The method is as follows, starting with a and b, the two positive integers whose GCD is sought.
If a and b are both even, then divide both by two until at least one of them becomes odd; let d be the number of these paired divisions.
If a is even, then divide it by two until it becomes odd.
If b is even, then divide it by two until it becomes odd.
Now, a and b are both odd and will remain odd until the end of the computation.
While a ≠ b do
If a > b, then replace a with a − b and divide the result by two until a becomes odd (as a and b are both odd, there is, at least, one division by 2).
If a < b, then replace b with b − a and divide the result by two until b becomes odd.
Now, a = b, and the greatest common divisor is 2^d · a.
Step 1 determines 2^d as the highest power of 2 that divides both a and b, and thus the highest power of 2 dividing their greatest common divisor. None of the steps changes the set of the odd common divisors of a and b. This shows that when the algorithm stops, the result is correct. The algorithm stops eventually, since each step divides at least one of the operands by at least 2. Moreover, the number of divisions by 2, and thus the number of subtractions, is at most the total number of digits.
Example: (a, b, d) = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 3, 1); the original GCD is thus the product 6 of 2^d = 2 and a = b = 3.
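The steps above translate into the following Python sketch (an illustration, not from the original article), which uses only parity tests, halving, and subtraction:

```python
def gcd_binary(a, b):
    """Binary GCD of two positive integers."""
    d = 0
    while a % 2 == 0 and b % 2 == 0:   # step 1: strip shared factors of two
        a, b, d = a // 2, b // 2, d + 1
    while a % 2 == 0:                  # step 2: make a odd
        a //= 2
    while b % 2 == 0:                  # step 3: make b odd
        b //= 2
    while a != b:                      # both operands stay odd from here on
        if a > b:
            a -= b                     # even, as a difference of odd numbers
            while a % 2 == 0:
                a //= 2
        else:
            b -= a
            while b % 2 == 0:
                b //= 2
    return (2 ** d) * a                # gcd = 2^d times the common odd part

assert gcd_binary(48, 18) == 6
```

On binary hardware the divisions by two would be implemented as bit shifts, which is the source of the algorithm's efficiency.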
The binary GCD algorithm is particularly easy to implement and particularly efficient on binary computers. Its computational complexity is O((log a + log b)^2).
The square in this complexity comes from the fact that division by 2 and subtraction take a time that is proportional to the number of bits of the input.
The computational complexity is usually given in terms of the length n of the input. Here, this length is n = log a + log b, and the complexity is thus O(n^2).
Lehmer's GCD algorithm
Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined from only the first few digits; this is useful for numbers that are larger than a computer word. In essence, one extracts initial digits, typically forming one or two computer words, and runs Euclid's algorithm on these smaller numbers, as long as it is guaranteed that the quotients are the same as those that would be obtained with the original numbers. The quotients are collected into a small 2-by-2 transformation matrix (a matrix of single-word integers) to reduce the original numbers. This process is repeated until the numbers are small enough that the binary algorithm (see above) is more efficient.
This algorithm improves speed, because it reduces the number of operations on very large numbers, and can use hardware arithmetic for most operations. In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers. When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of the Euclidean algorithm, with a Euclidean division of large numbers.
Other methods
If a and b are both nonzero, the greatest common divisor of a and b can be computed by using the least common multiple (LCM) of a and b:
gcd(a, b) = |a · b| / lcm(a, b),
but more commonly the LCM is computed from the GCD.
Using Thomae's function f, gcd(a, b) = a f(b/a),
which generalizes to a and b rational numbers or commensurable real numbers.
Keith Slavin has shown that for odd a:
which is a function that can be evaluated for complex b. Wolfgang Schramm has shown that
is an entire function in the variable b for all positive integers a, where c_d(k) is Ramanujan's sum.
Complexity
The computational complexity of the computation of greatest common divisors has been widely studied. If one uses the Euclidean algorithm and the elementary algorithms for multiplication and division, the computation of the greatest common divisor of two integers of at most n bits is O(n^2). This means that the computation of greatest common divisor has, up to a constant factor, the same complexity as the multiplication.
However, if a fast multiplication algorithm is used, one may modify the Euclidean algorithm for improving the complexity, but the computation of a greatest common divisor becomes slower than the multiplication. More precisely, if the multiplication of two integers of n bits takes a time of T(n), then the fastest known algorithm for greatest common divisor has a complexity O(T(n) log n). This implies that the fastest known algorithm has a complexity of O(n (log n)^2).
Previous complexities are valid for the usual models of computation, specifically multitape Turing machines and random-access machines.
The computation of the greatest common divisors belongs thus to the class of problems solvable in quasilinear time. A fortiori, the corresponding decision problem belongs to the class P of problems solvable in polynomial time. The GCD problem is not known to be in NC, and so there is no known way to parallelize it efficiently; nor is it known to be P-complete, which would imply that it is unlikely to be possible to efficiently parallelize GCD computation. Shallcross et al. showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well. Since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.
Although the problem is not known to be in NC, parallel algorithms asymptotically faster than the Euclidean algorithm exist; the fastest known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in time with processors. Randomized algorithms can solve the problem in time on processors (this is superpolynomial).
Properties
For positive integers a, gcd(a, a) = a.
Every common divisor of a and b is a divisor of gcd(a, b).
gcd(a, b), where a and b are not both zero, may be defined alternatively and equivalently as the smallest positive integer d which can be written in the form d = p·a + q·b, where p and q are integers. This expression is called Bézout's identity. Numbers p and q like this can be computed with the extended Euclidean algorithm (see the sketch after this list).
gcd(a, 0) = |a|, for a ≠ 0, since any number is a divisor of 0, and the greatest divisor of a is |a|. This is usually used as the base case in the Euclidean algorithm.
If a divides the product b⋅c, and gcd(a, b) = d, then a/d divides c.
If m is a positive integer, then gcd(m·a, m·b) = m·gcd(a, b).
If m is any integer, then gcd(a + m·b, b) = gcd(a, b). Equivalently, gcd(a mod b, b) = gcd(a, b).
If m is a positive common divisor of a and b, then gcd(a/m, b/m) = gcd(a, b)/m.
The GCD is a commutative function: gcd(a, b) = gcd(b, a).
The GCD is an associative function: gcd(a, gcd(b, c)) = gcd(gcd(a, b), c). Thus gcd(a, b, c, ...) can be used to denote the GCD of multiple arguments.
The GCD is a multiplicative function in the following sense: if a1 and a2 are relatively prime, then gcd(a1·a2, b) = gcd(a1, b)·gcd(a2, b).
gcd(a, b) is closely related to the least common multiple lcm(a, b): we have
gcd(a, b)·lcm(a, b) = |a·b|.
This formula is often used to compute least common multiples: one first computes the GCD with Euclid's algorithm and then divides the product of the given numbers by their GCD.
The following versions of distributivity hold true:
gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).
If we have the unique prime factorizations a = p1^e1 · p2^e2 ⋯ pm^em and b = p1^f1 · p2^f2 ⋯ pm^fm, where ei ≥ 0 and fi ≥ 0, then the GCD of a and b is
gcd(a, b) = p1^min(e1,f1) · p2^min(e2,f2) ⋯ pm^min(em,fm).
It is sometimes useful to define gcd(0, 0) = 0 and lcm(0, 0) = 0 because then the natural numbers become a complete distributive lattice with GCD as meet and LCM as join operation. This extension of the definition is also compatible with the generalization for commutative rings given below.
In a Cartesian coordinate system, gcd(a, b) can be interpreted as the number of segments between points with integral coordinates on the straight line segment joining the points (0, 0) and (a, b).
For non-negative integers a and b, where a and b are not both zero, provable by considering the Euclidean algorithm in base n:
gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1.
An identity involving Euler's totient function: gcd(a, b) = Σ φ(k), where the sum is over all positive integers k that divide both a and b.
GCD summatory function (Pillai's arithmetical function): P(n) = Σ gcd(k, n), summed over k = 1, ..., n.
For every prime p, νp(gcd(a, b)) = min(νp(a), νp(b)), where νp is the p-adic valuation.
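As referenced in the Bézout's identity item above, the coefficients p and q can be computed alongside the GCD. A minimal Python sketch of the extended Euclidean algorithm (an illustration, not from the original article):

```python
def extended_gcd(a, b):
    """Return (d, p, q) such that d = gcd(a, b) = p*a + q*b."""
    old_r, r = a, b
    old_p, p = 1, 0
    old_q, q = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r   # same remainders as Euclid's algorithm
        old_p, p = p, old_p - quotient * p   # carry the Bezout coefficients along
        old_q, q = q, old_q - quotient * q
    return old_r, old_p, old_q

d, p, q = extended_gcd(48, 18)
assert d == 6 and p * 48 + q * 18 == 6       # e.g. (-1)*48 + 3*18 = 6
```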
Probabilities and expected value
In 1972, James E. Nymann showed that k integers, chosen independently and uniformly from {1, ..., n}, are coprime with probability 1/ζ(k) as n goes to infinity, where ζ refers to the Riemann zeta function. (See coprime for a derivation.) This result was extended in 1987 to show that the probability that k random integers have greatest common divisor d is d^(−k)/ζ(k).
Using this information, the expected value of the greatest common divisor function can be seen (informally) to not exist when k = 2. In this case the probability that the GCD equals d is d^(−2)/ζ(2), and since ζ(2) = π^2/6 we have
E(2) = Σ d · (6/π^2) · d^(−2) = (6/π^2) Σ 1/d, summed over d = 1, 2, 3, ...
This last summation is the harmonic series, which diverges. However, when k ≥ 3, the expected value is well-defined, and by the above argument, it is
E(k) = Σ d^(1−k)/ζ(k) = ζ(k − 1)/ζ(k).
For k = 3, this is approximately equal to 1.3684. For k = 4, it is approximately 1.1106.
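The limiting probability 1/ζ(2) = 6/π² ≈ 0.6079 for two integers can be checked empirically. A small Monte Carlo sketch in Python (an illustration, not from the original article; the sampling bound is arbitrary):

```python
import math
import random

def coprime_fraction(trials=200_000, limit=10**6):
    """Estimate the probability that two uniform random integers are coprime."""
    hits = sum(
        math.gcd(random.randint(1, limit), random.randint(1, limit)) == 1
        for _ in range(trials)
    )
    return hits / trials

print(coprime_fraction())    # typically prints a value near 0.6079
print(6 / math.pi ** 2)      # 1/zeta(2) = 6/pi^2 = 0.6079...
```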
In commutative rings
The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring, although in general there need not exist one for every pair of elements.
If R is a commutative ring, and a and b are in R, then an element d of R is called a common divisor of a and b if it divides both a and b (that is, if there are elements x and y in R such that d·x = a and d·y = b).
If d is a common divisor of a and b, and every common divisor of a and b divides d, then d is called a greatest common divisor of a and b.
With this definition, two elements a and b may very well have several greatest common divisors, or none at all. If R is an integral domain, then any two GCDs of a and b must be associate elements, since by definition either one must divide the other. Indeed, if a GCD exists, any one of its associates is a GCD as well.
Existence of a GCD is not assured in arbitrary integral domains. However, if R is a unique factorization domain or any other GCD domain, then any two elements have a GCD. If R is a Euclidean domain in which euclidean division is given algorithmically (as is the case for instance when R = F[X] where F is a field, or when R is the ring of Gaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure.
The following is an example of an integral domain with two elements that do not have a GCD: R = Z[√−3], with a = 4 = 2 · 2 = (1 + √−3)(1 − √−3) and b = (1 + √−3) · 2.
The elements 2 and 1 + √−3 are two maximal common divisors (that is, any common divisor which is a multiple of 2 is associated to 2, and the same holds for 1 + √−3), but they are not associated, so there is no greatest common divisor of a and b.
Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the form p·a + q·b, where p and q range over the ring. This is the ideal generated by a and b, and is denoted simply (a, b). In a ring all of whose ideals are principal (a principal ideal domain or PID), this ideal will be identical with the set of multiples of some ring element d; then this d is a greatest common divisor of a and b. But the ideal (a, b) can be useful even when there is no greatest common divisor of a and b. (Indeed, Ernst Kummer used this ideal as a replacement for a GCD in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or ideal, ring element d, whence the ring-theoretic term.)
See also
Bézout domain
Lowest common denominator
Unitary divisor
Notes
References
Further reading
Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. . Section 4.5.2: The Greatest Common Divisor, pp. 333–356.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Section 31.2: Greatest common divisor, pp. 856–862.
Saunders Mac Lane and Garrett Birkhoff. A Survey of Modern Algebra, Fourth Edition. MacMillan Publishing Co., 1977. . 1–7: "The Euclidean Algorithm."
Multiplicative functions
Articles containing video clips | Greatest common divisor | [
"Mathematics"
] | 3,911 | [
"Multiplicative functions",
"Number theory"
] |
12,365 | https://en.wikipedia.org/wiki/Googolplex | A googolplex is the large number 10^(10^100), or equivalently, 10^googol. Written out in ordinary decimal notation, it is 1 followed by 10^100 zeroes; that is, a 1 followed by a googol of zeroes. Its prime factorization is 2^(10^100) × 5^(10^100).
History
In 1920, Edward Kasner's nine-year-old nephew, Milton Sirotta, coined the term googol, which is 10^100, and then proposed the further term googolplex to be "one, followed by writing zeroes until you get tired". Kasner decided to adopt a more formal definition because "different people get tired at different times and it would never do to have Carnera [be] a better mathematician than Dr. Einstein, simply because he had more endurance and could write for longer". It thus became standardized to 10^(10^100), due to the right-associativity of exponentiation.
Size
A typical book can be printed with 10^6 zeros (around 400 pages with 50 lines per page and 50 zeros per line). Therefore, it requires 10^94 such books to print all the zeros of a googolplex (that is, printing a googol zeros).
If each book had a mass of 100 grams, all of them would have a total mass of 10^93 kilograms. In comparison, Earth's mass is 5.97 × 10^24 kilograms, the mass of the Milky Way galaxy is estimated at 1.8 × 10^42 kilograms, and the total mass of all the stars in the observable universe is estimated at 2 × 10^52 kg.
To put this in perspective, the mass of all such books required to write out a googolplex would be vastly greater than the mass of the observable universe, by a factor of roughly 5 × 10^40.
In pure mathematics
In pure mathematics, there are several notational methods for representing large numbers by which the magnitude of a googolplex could be represented, such as tetration, hyperoperation, Knuth's up-arrow notation, Steinhaus–Moser notation, or Conway chained arrow notation.
In the physical universe
In the PBS science program Cosmos: A Personal Voyage, Episode 9: "The Lives of the Stars", astronomer and television personality Carl Sagan estimated that writing a googolplex in full decimal form (i.e., "10,000,000,000...") would be physically impossible, since doing so would require more space than is available in the known universe. Sagan gave an example that if the entire volume of the observable universe is filled with fine dust particles roughly 1.5 micrometers in size (0.0015 millimeters), then the number of different combinations in which the particles could be arranged and numbered would be about one googolplex.
About 10^97 is a high estimate of the number of elementary particles existing in the visible universe (not including dark matter), mostly photons and other massless force carriers.
Mod n
The residues (mod n) of a googolplex, starting with mod 1, are:
0, 0, 1, 0, 0, 4, 4, 0, 1, 0, 1, 4, 3, 4, 10, 0, 1, 10, 9, 0, 4, 12, 13, 16, 0, 16, 10, 4, 24, 10, 5, 0, 1, 18, 25, 28, 10, 28, 16, 0, 1, 4, 24, 12, 10, 36, 9, 16, 4, 0, ...
This sequence is the same as the sequence of residues (mod n) of a googol up until the 17th position.
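Although a googolplex can never be written out, its residues are cheap to compute, because modular exponentiation works on the exponent without ever expanding the number. A short Python check (an illustration, not from the original article), using the built-in three-argument pow:

```python
# googolplex mod n = (10 ** googol) mod n, computed without
# materializing the googolplex itself
googol = 10 ** 100
print([pow(10, googol, n) for n in range(1, 11)])
# -> [0, 0, 1, 0, 0, 4, 4, 0, 1, 0], matching the sequence above
```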
See also
Graham's number
Names of large numbers
Orders of magnitude (numbers)
Skewes's number
References
External links
Integers
Large integers
Units of amount
Numbers
Large numbers | Googolplex | [
"Mathematics"
] | 795 | [
"Units of amount",
"Quantity",
"Mathematical objects",
"Elementary mathematics",
"Arithmetic",
"Large numbers",
"Integers",
"Numbers",
"Units of measurement"
] |
12,366 | https://en.wikipedia.org/wiki/Graphite | Graphite is a crystalline allotrope (form) of the element carbon. It consists of many stacked layers of graphene, typically in excess of hundreds of layers. Graphite occurs naturally and is the most stable form of carbon under standard conditions. Synthetic and natural graphite are consumed on a large scale (1.3 million metric tons per year in 2022) for uses in many critical industries, including refractories (50%), lithium-ion batteries (18%), foundries (10%), lubricants (5%), and others (17%). Under extremely high pressures and extremely high temperatures it converts to diamond. Graphite's low cost, thermal and chemical inertness, and characteristic conductivity of heat and electricity find numerous applications in high-energy and high-temperature processes.
Types and varieties
Natural graphite
Graphite occurs naturally in ores that can be classified into one of two categories: amorphous (microcrystalline) or crystalline (flake or lump/chip), as determined by the ore morphology, crystallinity, and grain size. All naturally occurring graphite deposits are formed from the metamorphism of carbonaceous sedimentary rocks, and the ore type is due to its geologic setting. Coal that has been thermally metamorphosed is the typical source of amorphous graphite. Crystalline flake graphite is mined from carbonaceous metamorphic rocks, while lump or chip graphite is mined from veins, which occur in high-grade metamorphic regions. There are serious negative environmental impacts to graphite mining.
Synthetic graphite
Synthetic graphite is graphite of high purity produced by thermal graphitization at temperatures in excess of 2,100 °C from hydrocarbon materials, most commonly through the Acheson process. The high temperatures are maintained for weeks, and are required not only to form the graphite from the precursor carbons but also to vaporize any impurities that may be present, including hydrogen, nitrogen, sulfur, organics, and metals. This is why synthetic graphite is highly pure, in excess of 99.9% carbon, but it typically has lower density, lower conductivity and higher porosity than its natural equivalent. Synthetic graphite can also be formed into very large (centimetre-scale) flakes while maintaining its high purity, unlike almost all sources of natural graphite. Synthetic graphite has also been formed by other methods, including chemical vapor deposition from hydrocarbons at high temperatures, decomposition of thermally unstable carbides, and crystallization from metal melts supersaturated with carbon.
Biographite
Biographite is a proposed commercial product for reducing the carbon footprint of lithium iron phosphate (LFP) batteries. It is produced from forestry waste and similar byproducts by a company in New Zealand using a novel process called thermo-catalytic graphitisation; the project is supported by grants from interested parties, including a forestry company in Finland and a battery maker in Hong Kong.
Natural graphite
Occurrence
Graphite occurs in metamorphic rocks as a result of the reduction of sedimentary carbon compounds during metamorphism. It also occurs in igneous rocks and in meteorites. Minerals associated with graphite include quartz, calcite, micas and tourmaline. The principal export sources of mined graphite are, in order of tonnage: China, Mexico, Canada, Brazil, and Madagascar. Significant unexploited graphite resources also exist in Colombia's Cordillera Central in the form of graphite-bearing schists.
In meteorites, graphite occurs with troilite and silicate minerals. Small graphitic crystals in meteoritic iron are called cliftonite. Some microscopic grains have distinctive isotopic compositions, indicating that they were formed before the Solar System. They are one of about 12 known types of minerals that predate the Solar System and have also been detected in molecular clouds. These minerals were formed in the ejecta when supernovae exploded or low to intermediate-sized stars expelled their outer envelopes late in their lives. Graphite may be the second or third oldest mineral in the Universe.
Structure
Graphite consists of sheets of trigonal planar carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms forming a continuous layer of sp2 bonded carbon hexagons, like a honeycomb lattice with a bond length of 0.142 nm, and the distance between planes is 0.335 nm. Bonding between layers is relatively weak van der Waals bonds, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower.
There are two allotropic forms called alpha (hexagonal) and beta (rhombohedral), differing in terms of the stacking of the graphene layers: stacking in alpha graphite is ABA, as opposed to ABC stacking in the energetically less stable beta graphite. Rhombohedral graphite cannot occur in pure form. Natural graphite, or commercial natural graphite, contains 5 to 15% rhombohedral graphite and this may be due to intensive milling. The alpha form can be converted to the beta form through shear forces, and the beta form reverts to the alpha form when it is heated to 1300 °C for four hours.
Thermodynamics
The equilibrium pressure and temperature conditions for a transition between graphite and diamond are well established theoretically and experimentally. The equilibrium pressure changes linearly with temperature up to the diamond/graphite/liquid triple point.
However, the phases have a wide region about this line where they can coexist. At normal temperature and pressure, and , the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. However, at temperatures above about , diamond rapidly converts to graphite. Rapid conversion of graphite to diamond requires pressures well above the equilibrium line: at , a pressure of is needed.
Other properties
The acoustic and thermal properties of graphite are highly anisotropic, since phonons propagate quickly along the tightly bound planes, but are slower to travel from one plane to another. Graphite's high thermal stability and electrical and thermal conductivity facilitate its widespread use as electrodes and refractories in high temperature material processing applications. However, in oxygen-containing atmospheres graphite readily oxidizes to form carbon dioxide at temperatures of 700 °C and above.
Graphite is an electrical conductor, hence useful in such applications as arc lamp electrodes. It can conduct electricity due to the vast electron delocalization within the carbon layers (a phenomenon called aromaticity). These valence electrons are free to move, so are able to conduct electricity. However, the electricity is primarily conducted within the plane of the layers. The conductive properties of powdered graphite allow its use as a pressure sensor in carbon microphones.
Graphite and graphite powder are valued in industrial applications for their self-lubricating and dry lubricating properties. However, the use of graphite is limited by its tendency to facilitate pitting corrosion in some stainless steels, and to promote galvanic corrosion between dissimilar metals (due to its electrical conductivity). It is also corrosive to aluminium in the presence of moisture. For this reason, the US Air Force banned its use as a lubricant in aluminium aircraft, and discouraged its use in aluminium-containing automatic weapons. Even graphite pencil marks on aluminium parts may facilitate corrosion. Another high-temperature lubricant, hexagonal boron nitride, has the same molecular structure as graphite. It is sometimes called white graphite, due to its similar properties.
When a large number of crystallographic defects bind these planes together, graphite loses its lubrication properties and becomes what is known as pyrolytic graphite. It is also highly anisotropic, and diamagnetic, thus it will float in mid-air above a strong magnet. (If it is made in a fluidized bed at 1000–1300 °C then it is isotropic turbostratic, and is used in blood-contacting devices like mechanical heart valves and is called pyrolytic carbon, and is not diamagnetic. Pyrolytic graphite and pyrolytic carbon are often confused but are very different materials.)
For a long time, graphite was considered to be hydrophobic. However, recent studies using highly ordered pyrolytic graphite have shown that freshly cleaned graphite is hydrophilic (contact angle of approximately 70°), and that it becomes hydrophobic (contact angle of approximately 95°) due to airborne pollutants (hydrocarbons) present in the atmosphere. Those contaminants also alter the electric equipotential surface of graphite by creating domains with potential differences of up to 200 mV, as measured with Kelvin probe force microscopy. Such contaminants can be desorbed by heating the graphite to approximately 50 °C or higher.
Natural and crystalline graphites are not often used in pure form as structural materials, due to their shear-planes, brittleness, and inconsistent mechanical properties.
History of natural graphite use
In the 4th millennium BCE, during the Neolithic Age in southeastern Europe, the Marița culture used graphite in a ceramic paint for decorating pottery.
Sometime before 1565 (some sources say as early as 1500), an enormous deposit of graphite was discovered on the approach to Grey Knotts from the hamlet of Seathwaite in Borrowdale parish, Cumbria, England, which the locals found useful for marking sheep. During the reign of Elizabeth I (1558–1603), Borrowdale graphite was used as a refractory material to line molds for cannonballs, resulting in rounder, smoother balls that could be fired farther, contributing to the strength of the English navy. This particular deposit of graphite was extremely pure and soft, and could easily be cut into sticks. Because of its military importance, this unique mine and its production were strictly controlled by the Crown.
During the 19th century, graphite's uses greatly expanded to include stove polish, lubricants, paints, crucibles, foundry facings, and pencils, a major factor in the expansion of educational tools during the first great rise of education for the masses. The British Empire controlled most of the world's production (especially from Ceylon), but production from Austrian, German, and American deposits expanded by mid-century. For example, the Dixon Crucible Company of Jersey City, New Jersey, founded by Joseph Dixon and partner Orestes Cleveland in 1845, opened mines in the Lake Ticonderoga district of New York, built a processing plant there, and a factory to manufacture pencils, crucibles and other products in New Jersey, described in the Engineering & Mining Journal 21 December 1878. The Dixon pencil is still in production.
The beginnings of the revolutionary froth flotation process are associated with graphite mining. Included in the E&MJ article on the Dixon Crucible Company is a sketch of the "floating tanks" used in the age-old process of extracting graphite. Because graphite is so light, the mix of graphite and waste was sent through a final series of water tanks where a cleaner graphite "floated" off, which left waste to drop out. In an 1877 patent, the two brothers Bessel (Adolph and August) of Dresden, Germany, took this "floating" process a step further and added a small amount of oil to the tanks and boiled the mix – an agitation or frothing step – to collect the graphite, the first steps toward the future flotation process. Adolph Bessel received the Wohler Medal for the patented process that upgraded the recovery of graphite to 90% from the German deposit. In 1977, the German Society of Mining Engineers and Metallurgists organized a special symposium dedicated to their discovery and, thus, the 100th anniversary of flotation.
In the United States, in 1885, Hezekiah Bradford of Philadelphia patented a similar process, but it is uncertain if his process was used successfully in the nearby graphite deposits of Chester County, Pennsylvania, a major producer by the 1890s. The Bessel process was limited in use, primarily because of the abundant cleaner deposits found around the globe, which needed not much more than hand-sorting to gather the pure graphite. The state of the art is described in the Canadian Department of Mines report on graphite mines and mining, from the period when Canadian deposits began to become important producers of graphite.
Other names
Historically, graphite was called black lead or plumbago. Plumbago was commonly used in its massive mineral form. Both of these names arise from confusion with the similar-appearing lead ores, particularly galena. The Latin word for lead, plumbum, gave its name to the English term for this grey metallic-sheened mineral and even to the leadworts or plumbagos, plants with flowers that resemble this colour.
The term black lead usually refers to a powdered or processed graphite, matte black in color.
Abraham Gottlob Werner coined the name graphite ("writing stone") in 1789. He attempted to clear up the confusion between molybdena, plumbago and black lead after Carl Wilhelm Scheele in 1778 proved that these were at least three different minerals. Scheele's analysis showed that the chemical compounds molybdenum sulfide (molybdenite), lead(II) sulfide (galena) and graphite were three different soft black minerals.
Uses of natural graphite
Natural graphite is mostly used for refractories, batteries, steelmaking, expanded graphite, brake linings, foundry facings, and lubricants.
Refractories
The use of graphite as a refractory (heat-resistant) material began before 1900 with graphite crucibles used to hold molten metal; this is now a minor part of refractories. In the mid-1980s, the carbon-magnesite brick became important, and a bit later the alumina-graphite shape. The current order of importance is: alumina-graphite shapes, carbon-magnesite brick, monolithics (gunning and ramming mixes), and then crucibles.
Crucibles began using very large flake graphite, and carbon-magnesite bricks requiring not quite so large flake graphite; for these and others there is now much more flexibility in the size of flake required, and amorphous graphite is no longer restricted to low-end refractories. Alumina-graphite shapes are used as continuous casting ware, such as nozzles and troughs, to convey the molten steel from ladle to mold, and carbon magnesite bricks line steel converters and electric-arc furnaces to withstand extreme temperatures. Graphite blocks are also used in parts of blast furnace linings where the high thermal conductivity of the graphite is critical to ensuring adequate cooling of the bottom and hearth of the furnace. High-purity monolithics are often used as a continuous furnace lining instead of carbon-magnesite bricks.
The US and European refractories industry had a crisis in 2000–2003, with an indifferent market for steel and a declining refractory consumption per tonne of steel underlying firm buyouts and many plant closures. Many of the plant closures resulted from the acquisition of Harbison-Walker Refractories by RHI AG and some plants had their equipment auctioned off. Since much of the lost capacity was for carbon-magnesite brick, graphite consumption within the refractories area moved towards alumina-graphite shapes and Monolithics, and away from the brick. The major source of carbon-magnesite brick is now China. Almost all of the above refractories are used to make steel and account for 75% of refractory consumption; the rest is used by a variety of industries, such as cement.
According to the USGS, US natural graphite consumption in refractories comprised 12,500 tonnes in 2010.
Batteries
The use of graphite in batteries has increased since the 1970s. Natural and synthetic graphite are used as an anode material to construct electrodes in major battery technologies.
The demand for batteries, primarily nickel–metal hydride and lithium-ion batteries, caused a growth in demand for graphite in the late 1980s and early 1990s – a growth driven by portable electronics, such as portable CD players and power tools. Laptops, mobile phones, tablets, and smartphone products have increased the demand for batteries. Electric-vehicle batteries are anticipated to increase graphite demand. As an example, a lithium-ion battery in a fully electric Nissan Leaf contains nearly 40 kg of graphite.
Radioactive graphite removed from nuclear reactors has been investigated as a source of electricity for low-power applications. This waste is rich in carbon-14, which emits electrons through beta decay, so it could potentially be used as the basis for a betavoltaic device. This concept is known as the diamond battery.
Graphite anode materials
Graphite is "predominant anode material used today in lithium-ion batteries". Electric-vehicle (EV) batteries contain four basic components: anode, cathode, electrolyte, and separator. While there is much focus on the cathode materials lithium, nickel, cobalt, manganese, etc. the predominant anode material used in virtually all EV batteries is graphite.
Steelmaking
Natural graphite in steelmaking mostly goes into raising the carbon content in molten steel; it can also serve to lubricate the dies used to extrude hot steel. Carbon additives face competitive pricing from alternatives such as synthetic graphite powder, petroleum coke, and other forms of carbon. A carbon raiser is added to increase the carbon content of the steel to a specified level. An estimate based on USGS's graphite consumption statistics indicates that steelmakers in the US used 10,500 tonnes in this fashion in 2005.
Brake linings
Natural amorphous and fine flake graphite are used in brake linings or brake shoes for heavier (nonautomotive) vehicles, and became important with the need to substitute for asbestos. This use has been important for quite some time, but nonasbestos organic (NAO) compositions are beginning to reduce graphite's market share. A brake-lining industry shake-out with some plant closures has not been beneficial, nor has an indifferent automotive market. According to the USGS, US natural graphite consumption in brake linings was 6,510 tonnes in 2005.
Foundry facings and lubricants
A foundry-facing mold wash is a water-based paint of amorphous or fine flake graphite. Painting the inside of a mold with it and letting it dry leaves a fine graphite coat that will ease the separation of the object cast after the hot metal has cooled. Graphite lubricants are specialty items for use at very high or very low temperatures, as forging die lubricant, an antiseize agent, a gear lubricant for mining machinery, and to lubricate locks. Having low-grit graphite, or even better, no-grit graphite (ultra high purity), is highly desirable. It can be used as a dry powder, in water or oil, or as colloidal graphite (a permanent suspension in a liquid). An estimate based on USGS graphite consumption statistics indicates that 2,200 tonnes were used in this fashion in 2005. Metal can also be impregnated into graphite to create a self-lubricating alloy for application in extreme conditions, such as bearings for machines exposed to high or low temperatures.
Everyday use
Pencils
The ability to leave marks on paper and other objects gave graphite its name, given in 1789 by German mineralogist Abraham Gottlob Werner. It stems from γράφειν ("graphein"), meaning to write or draw in Ancient Greek.
From the 16th century, all pencils were made with leads of English natural graphite, but modern pencil lead is most commonly a mix of powdered graphite and clay; it was invented by Nicolas-Jacques Conté in 1795. It is chemically unrelated to the metal lead, whose ores had a similar appearance, hence the continuation of the name. Plumbago is another older term for natural graphite used for drawing, typically as a lump of the mineral without a wood casing. The term plumbago drawing is normally restricted to 17th and 18th-century works, mostly portraits.
Today, pencils are still a small but significant market for natural graphite. Around 7% of the 1.1 million tonnes produced in 2011 was used to make pencils. Low-quality amorphous graphite is used and sourced mainly from China.
In art, graphite is typically used to create detailed and precise drawings, as it allows for a wide range of values (light to dark) to be achieved. It can also be used to create softer, more subtle lines and shading. Graphite is popular among artists because it is easy to control, easy to erase, and produces a clean, professional look. It is also relatively inexpensive and widely available. Many artists use graphite in conjunction with other media, such as charcoal or ink, to create a range of effects and textures in their work. Graphite of various hardness or softness results in different qualities and tones when used as an artistic medium.
Pinewood derby
Graphite is probably the most-used lubricant in pinewood derbies.
Other uses
Natural graphite has found uses in zinc-carbon batteries, electric motor brushes, and various specialized applications. Railroads would often mix powdered graphite with waste oil or linseed oil to create a heat-resistant protective coating for the exposed portions of a steam locomotive's boiler, such as the smokebox or lower part of the firebox. The Scope soldering iron uses a graphite tip as its heating element.
Expanded graphite
Expanded graphite is made by immersing natural flake graphite in a bath of chromic acid, then concentrated sulfuric acid, which forces the crystal lattice planes apart, thus expanding the graphite. The expanded graphite can be used to make graphite foil or used directly as a "hot top" compound to insulate molten metal in a ladle or red-hot steel ingots and decrease heat loss, or as firestops fitted around a fire door or in sheet metal collars surrounding plastic pipe (during a fire, the graphite expands and chars to resist fire penetration and spread), or to make high-performance gasket material for high-temperature use. After being made into graphite foil, the foil is machined and assembled into the bipolar plates in fuel cells.
The foil is made into heat sinks for laptop computers which keeps them cool while saving weight, and is made into a foil laminate that can be used in valve packings or made into gaskets. Old-style packings are now a minor member of this grouping: fine flake graphite in oils or greases for uses requiring heat resistance. A GAN estimate of current US natural graphite consumption in this end-use is 7,500 tonnes.
Intercalated graphite
Graphite forms intercalation compounds with some metals and small molecules. In these compounds, the host molecule or atom gets "sandwiched" between the graphite layers, resulting in a type of compound with variable stoichiometry. A prominent example of an intercalation compound is potassium graphite, denoted by the formula KC8. Some graphite intercalation compounds are superconductors. The highest transition temperature (by June 2009) Tc = 11.5 K is achieved in CaC6, and it further increases under applied pressure (15.1 K at 8 GPa). Graphite's ability to intercalate lithium ions without significant damage from swelling is what makes it the dominant anode material in lithium-ion batteries.
History of synthetic graphite
Invention of a process to produce synthetic graphite
In 1893, Charles Street of Le Carbone discovered a process for making artificial graphite. In the mid-1890s, Edward Goodrich Acheson (1856–1931) accidentally invented another way to produce synthetic graphite after synthesizing carborundum (also called silicon carbide). He discovered that overheating carborundum, as opposed to pure carbon, produced almost pure graphite. While studying the effects of high temperature on carborundum, he had found that silicon vaporizes at about , leaving the carbon behind in graphitic carbon. This graphite became valuable as a lubricant.
Acheson's technique for producing silicon carbide and graphite is named the Acheson process. In 1896, Acheson received a patent for his method of synthesizing graphite, and in 1897 started commercial production. The Acheson Graphite Co. was formed in 1899.
Synthetic graphite can also be prepared from polyimide and then commercialized.
Scientific research
Highly oriented pyrolytic graphite (HOPG) is the highest-quality synthetic form of graphite. It is used in scientific research, in particular, as a length standard for the calibration of scanning probe microscopes.
Electrodes
Graphite electrodes carry the electricity that melts scrap iron and steel, and sometimes direct-reduced iron (DRI), in electric arc furnaces, which are the vast majority of steel furnaces. They are made from petroleum coke after it is mixed with coal tar pitch. They are extruded and shaped, then baked to carbonize the binder (pitch). This is finally graphitized by heating it to temperatures approaching , at which the carbon atoms arrange into graphite. They can vary in size up to long and in diameter. An increasing proportion of global steel is made using electric arc furnaces, and the electric arc furnace itself is becoming more efficient, making more steel per tonne of electrode. An estimate based on USGS data indicates that graphite electrode consumption was in 2005.
Electrolytic aluminium smelting also uses graphitic carbon electrodes. On a much smaller scale, synthetic graphite electrodes are used in electrical discharge machining (EDM), commonly to make injection molds for plastics.
Powder and scrap
The powder is made by heating powdered petroleum coke above the temperature of graphitization, sometimes with minor modifications. The graphite scrap comes from pieces of unusable electrode material (in the manufacturing stage or after use) and lathe turnings, usually after crushing and sizing. Most synthetic graphite powder goes to carbon raising in steel (competing with natural graphite), with some used in batteries and brake linings. According to the United States Geological Survey, US synthetic graphite powder and scrap production were in 2001 (latest data).
Neutron moderator
Special grades of synthetic graphite, such as Gilsocarbon, also find use as a matrix and neutron moderator within nuclear reactors. Its low neutron cross-section also recommends it for use in proposed fusion reactors. Care must be taken that reactor-grade graphite is free of neutron absorbing materials such as boron, widely used as the seed electrode in commercial graphite deposition systems – this caused the failure of the Germans' World War II graphite-based nuclear reactors. Since they could not isolate the difficulty they were forced to use far more expensive heavy water moderators. Graphite used for nuclear reactors is often referred to as nuclear graphite. Herbert G. McPherson, a Berkeley trained physicist at National Carbon, a division of Union Carbide, was key in confirming a conjecture of Leo Szilard that boron impurities even in "pure" graphite were responsible for a neutron absorption cross-section in graphite that compromised U-235 chain reactions. McPherson was aware of the presence of impurities in graphite because, with the use of Technicolor in cinematography, the spectra of graphite electrode arcs used in movie projectors required impurities to enhance emission of light in the red region to display warmer skin tones on the screen. Thus, had it not been for color movies, chances are that the first sustained natural U chain reaction would have required a heavy water moderated reactor.
Other uses
Graphite (carbon) fiber and carbon nanotubes are also used in carbon fiber reinforced plastics, and in heat-resistant composites such as reinforced carbon-carbon (RCC). Commercial structures made from carbon fiber graphite composites include fishing rods, golf club shafts, bicycle frames, sports car body panels, the fuselage of the Boeing 787 Dreamliner and pool cue sticks and have been successfully employed in reinforced concrete. The mechanical properties of carbon fiber graphite-reinforced plastic composites and grey cast iron are strongly influenced by the role of graphite in these materials. In this context, the term "(100%) graphite" is often loosely used to refer to a pure mixture of carbon reinforcement and resin, while the term "composite" is used for composite materials with additional ingredients.
Modern smokeless powder is coated in graphite to prevent the buildup of static charge.
Graphite has been used in at least three radar absorbent materials. It was mixed with rubber in Sumpf and Schornsteinfeger, which were used on U-boat snorkels to reduce their radar cross section. It was also used in tiles on early F-117 Nighthawk stealth strike fighters.
Graphite composites are used as absorber for high-energy particles, for example in the Large Hadron Collider beam dump.
Graphite rods when filed into shape are used as a tool in glassworking to manipulate hot molten glass.
Graphite mining, beneficiation, and milling
Graphite is mined by both open pit and underground methods. Graphite usually needs beneficiation. This may be carried out by hand-picking the pieces of gangue (rock) and hand-screening the product or by crushing the rock and floating out the graphite. Beneficiation by flotation encounters the difficulty that graphite is very soft and "marks" (coats) the particles of gangue. This makes the "marked" gangue particles float off with the graphite, yielding impure concentrate. There are two ways of obtaining a commercial concentrate or product: repeated regrinding and floating (up to seven times) to purify the concentrate, or by acid leaching (dissolving) the gangue with hydrofluoric acid (for a silicate gangue) or hydrochloric acid (for a carbonate gangue).
In milling, the incoming graphite products and concentrates can be ground before being classified (sized or screened), with the coarser flake size fractions (below 8 mesh, 8–20 mesh, 20–50 mesh) carefully preserved, and then the carbon contents are determined. Some standard blends can be prepared from the different fractions, each with a certain flake size distribution and carbon content. Custom blends can also be made for individual customers who want a certain flake size distribution and carbon content. If flake size is unimportant, the concentrate can be ground more freely. Typical end products include a fine powder for use as a slurry in oil drilling and coatings for foundry molds, carbon raiser in the steel industry (Synthetic graphite powder and powdered petroleum coke can also be used as carbon raiser). Environmental impacts from graphite mills consist of air pollution including fine particulate exposure of workers and also soil contamination from powder spillages leading to heavy metal contamination of soil.
According to the United States Geological Survey (USGS), world production of natural graphite in 2016 was 1,200,000 tonnes, of which the following major exporters are: China (780,000 t), India (170,000 t), Brazil (80,000 t), Turkey (32,000 t) and North Korea (6,000 t). Graphite is not currently mined in the United States, but there are many historical mine sites including ones in Alabama, Montana, and in the Adirondacks of NY. Westwater Resources is in the development stages of creating a pilot plant for their Coosa Graphite Mine near Sylacauga, Alabama. U.S. production of synthetic graphite in 2010 was 134,000 t valued at $1.07 billion.
Occupational safety
Potential health effects include:
Inhalation: No inhalation hazard in manufactured and shipped state. Dust and fumes generated from the material can enter the body by inhalation. High concentrations of dust and fumes may irritate the throat and respiratory system and cause coughing. Frequent inhalation of fume/dust over a long period of time increases the risk of developing lung diseases. Prolonged and repeated overexposure to dust can lead to pneumoconiosis. Pre-existing pulmonary disorders, such as emphysema, may possibly be aggravated by prolonged exposure to high concentrations of graphite dusts.
Eye contact: Dust in the eyes will cause irritation. Those exposed may experience eye tearing, redness, and discomfort.
Skin contact: Under normal conditions of intended use, this material does not pose a risk to health. Dust may irritate skin.
Ingestion: Not relevant, due to the form of the product in its manufactured and shipped state. However, ingestion of dusts generated during working operations may cause nausea and vomiting.
Potential physical / chemical effects: Bulk material is non-combustible. The material may form dust and can accumulate electrostatic charges, which may cause an electrical spark (ignition source). High dust levels may create potential for explosion.
United States
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for graphite exposure in the workplace as a time-weighted average (TWA) of 15 million particles per cubic foot (1.5 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 2.5 mg/m3 respirable dust over an 8-hour workday. At levels of 1250 mg/m3, graphite is immediately dangerous to life and health.
Graphite recycling
The most common way of recycling graphite occurs when synthetic graphite electrodes are either manufactured and pieces are cut off or lathe turnings are discarded for reuse, or the electrode (or other materials) are used all the way down to the electrode holder. A new electrode replaces the old one, but a sizeable piece of the old electrode remains. This is crushed and sized, and the resulting graphite powder is mostly used to raise the carbon content of molten steel.
Graphite-containing refractories are sometimes also recycled, but often are not due to their low graphite content: the largest-volume items, such as carbon-magnesite bricks that contain only 15–25% graphite, usually contain too little graphite to be worthwhile to recycle. However, some recycled carbon–magnesite brick is used as the basis for furnace-repair materials, and also crushed carbon–magnesite brick is used in slag conditioners.
While crucibles have a high graphite content, the volume of crucibles used and then recycled is very small.
A high-quality flake graphite product that closely resembles natural flake graphite can be made from steelmaking kish. Kish is a large-volume near-molten waste skimmed from the molten iron feed to a basic oxygen furnace and consists of a mix of graphite (precipitated out of the supersaturated iron), lime-rich slag, and some iron. The iron is recycled on-site, leaving a mixture of graphite and slag. The best recovery process uses hydraulic classification (which utilizes a flow of water to separate minerals by specific gravity: graphite is light and settles nearly last) to get a 70% graphite rough concentrate. Leaching this concentrate with hydrochloric acid gives a 95% graphite product with a flake size ranging from 10 mesh (2 mm) down.
Research and innovation in graphite technologies
Globally, over 60,000 patent families in graphite technologies were filed from 2012 to 2021. Patents were filed by applicants from over 60 countries and regions. However, graphite-related patent families originated predominantly from just a few countries. China was the top contributor with more than 47,000 patent families, accounting for four in every five graphite patent families filed worldwide in the last decade. Among other leading countries were Japan, the Republic of Korea, the United States and the Russian Federation. Together, these top five countries of applicant origin accounted for 95 percent of global patenting output related to graphite.
Among the different graphite sources, flake graphite has the highest number of patent families, with more than 5,600 filed worldwide from 2012 to 2021. Supported by active research from its commercial entities and research institutions, China is the country most actively exploiting flake graphite and has contributed to 85 percent of global patent filings in this area.
At the same time, innovations exploring new synthesis methods and uses for artificial graphite are gaining interest worldwide, as countries seek to exploit the superior material qualities associated with this man-made substance and reduce reliance on the natural material. Patenting activity is strongly led by commercial entities, particularly world-renowned battery manufacturers and anode material suppliers, with patenting interest focused on battery anode applications.
The exfoliation process for bulk graphite, which involves separating the carbon layers within graphite, has been extensively studied between 2012 and 2021. Specifically, ultrasonic and thermal exfoliation have been the two most popular approaches worldwide, with 4,267 and 2,579 patent families, respectively, significantly more than for either the chemical or electrochemical alternatives.
Global patenting activity relating to ultrasonic exfoliation has decreased over the years, indicating that this low-cost technique has become well established. Thermal exfoliation is a more recent process. Compared to ultrasonic exfoliation, this fast and solvent-free thermal approach has attracted greater commercial interest.
As the most widespread anode material for lithium-ion batteries, graphite has drawn significant attention worldwide for use in battery applications. With over 8,000 patent families filed from 2012 to 2021, battery applications were a key driver of global graphite-related inventions. Innovations in this area are led by battery manufacturers or anode suppliers who have amassed sizable patent portfolios focused strongly on battery performance improvements based on graphite anode innovation. Besides industry players, academia and research institutions have been an essential source of innovation in graphite anode technologies.
Graphite for polymer applications was an innovation hot topic from 2012 to 2021, with over 8,000 patent families recorded worldwide. However, in recent years patent filings have decreased in the top countries of applicant origin in this area, including China, Japan and the United States.
Graphite for manufacturing ceramics represents another area of intensive research, with over 6,000 patent families registered in the last decade alone. Specifically, graphite for refractories accounted for over one-third of ceramics-related graphite patent families in China and about one-fifth in the rest of the world. Other important graphite applications include high-value ceramic materials such as carbides for specific industries, ranging from electrical and electronics, aerospace and precision engineering to military and nuclear applications.
Carbon brushes represent a long-explored graphite application area. There have been few inventions in this area over the last decade, with fewer than 300 patent families filed from 2012 to 2021, far fewer than between 1992 and 2011.
Biomedical, sensor, and conductive ink are emerging application areas for graphite that have attracted interest from both academia and commercial entities, including renowned universities and multinational corporations. As is typical for an emerging technology area, the related patent families were filed by a wide range of organizations, with no player dominating. As a result, the top applicants hold only a small number of inventions, unlike in well-explored areas, where leading applicants have strong technology accumulation and large patent portfolios. The innovation focus of these three emerging areas is highly scattered and can be diverse even for a single applicant. However, recent inventions increasingly leverage the development of graphite nanomaterials, particularly graphite nanocomposites and graphene.
See also
Carbon fiber
Carbon nanotube
Exfoliated graphite nano-platelets
Fullerene
Graphene
Graphitizing and non-graphitizing carbons
Intumescent
Lonsdaleite
Passive fire protection
Pyrolytic carbon
Sources
References
Further reading
External links
Battery Grade Graphite
Graphite at Minerals.net
Mineral galleries
Mineral & Exploration – Map of World Graphite Mines and Producers 2012
Mindat w/ locations
giant covalent structures
The Graphite Page
Video lecture on the properties of graphite by M. Heggie, University of Sussex
CDC – NIOSH Pocket Guide to Chemical Hazards
Native element minerals
Non-petroleum based lubricants
Dry lubricants
Visual arts materials
Refractory materials
Electrical conductors
Hexagonal minerals
Minerals in space group 186
Minerals in space group 194
Industrial minerals | Graphite | [
"Physics",
"Materials_science"
] | 8,555 | [
"Refractory materials",
"Materials",
"Electrical conductors",
"Condensed matter physics",
"Semimetals",
"Matter"
] |
12,383 | https://en.wikipedia.org/wiki/Genetic%20engineering | Genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism's genes using technology. It is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms.
New DNA is obtained by either isolating and copying the genetic material of interest using recombinant DNA methods or by artificially synthesising the DNA. A construct is usually created and used to insert this DNA into the host organism. The first recombinant DNA molecule was made by Paul Berg in 1972 by combining DNA from the monkey virus SV40 with the lambda virus.
As well as inserting genes, the process can be used to remove, or "knock out", genes. The new DNA can be inserted randomly, or targeted to a specific part of the genome.
An organism that is generated through genetic engineering is considered to be genetically modified (GM) and the resulting entity is a genetically modified organism (GMO). The first GMO was a bacterium generated by Herbert Boyer and Stanley Cohen in 1973. Rudolf Jaenisch created the first GM animal when he inserted foreign DNA into a mouse in 1974. The first company to focus on genetic engineering, Genentech, was founded in 1976 and started the production of human proteins. Genetically engineered human insulin was produced in 1978 and insulin-producing bacteria were commercialised in 1982. Genetically modified food has been sold since 1994, with the release of the Flavr Savr tomato. The Flavr Savr was engineered to have a longer shelf life, but most current GM crops are modified to increase resistance to insects and herbicides. GloFish, the first GMO designed as a pet, was sold in the United States in December 2003. In 2016 salmon modified with a growth hormone were sold.
Genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. In research, GMOs are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. By knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. As well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. Chinese hamster ovary (CHO) cells are used in industrial genetic engineering. Additionally, mRNA vaccines are made through genetic engineering to prevent infections by viruses such as SARS-CoV-2, the virus that causes COVID-19. The same techniques that are used to produce drugs can also have industrial applications such as producing enzymes for laundry detergent, cheeses and other products.
The rise of commercialised genetically modified crops has provided economic benefit to farmers in many different countries, but has also been the source of most of the controversy surrounding the technology. This has been present since its early use; the first field trials were destroyed by anti-GM activists. Although there is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, critics consider GM food safety a leading concern. Gene flow, impact on non-target organisms, control of the food supply and intellectual property rights have also been raised as potential issues. These concerns have led to the development of a regulatory framework, which started in 1975. It has led to an international treaty, the Cartagena Protocol on Biosafety, that was adopted in 2000. Individual countries have developed their own regulatory systems regarding GMOs, with the most marked differences occurring between the United States and Europe.
Overview
Genetic engineering is a process that alters the genetic structure of an organism by either removing or introducing DNA, or modifying existing genetic material in situ. Unlike traditional animal and plant breeding, which involves doing multiple crosses and then selecting for the organism with the desired phenotype, genetic engineering takes the gene directly from one organism and delivers it to the other. This is much faster, can be used to insert any genes from any organism (even ones from different domains) and prevents other undesirable genes from also being added.
Genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. It is an important tool in research that allows the function of specific genes to be studied. Drugs, vaccines and other products have been harvested from organisms engineered to produce them. Crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses.
The DNA can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. This relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro-injection, macro-injection or micro-encapsulation.
Genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, some broad definitions of genetic engineering include selective breeding. Cloning and stem cell research, although not considered genetic engineering, are closely related and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism.
Plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or GMOs. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic. If genetic engineering is used to remove genetic material from the target organism the resulting organism is termed a knockout organism. In Europe genetic modification is synonymous with genetic engineering while within the United States of America and Canada genetic modification can also be used to refer to more conventional breeding methods.
History
Humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection as contrasted with natural selection. More recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations, for selective breeding purposes. Genetic engineering as the direct manipulation of DNA by humans outside breeding and mutations has only existed since the 1970s. The term "genetic engineering" was coined by the Russian-born geneticist Nikolay Timofeev-Ressovsky in his 1934 paper "The Experimental Production of Mutations", published in the British journal Biological Reviews. Jack Williamson used the term in his science fiction novel Dragon's Island, published in 1951 – one year before DNA's role in heredity was confirmed by Alfred Hershey and Martha Chase, and two years before James Watson and Francis Crick showed that the DNA molecule has a double-helix structure – though the general concept of direct genetic manipulation was explored in rudimentary form in Stanley G. Weinbaum's 1936 science fiction story Proteus Island.
In 1972, Paul Berg created the first recombinant DNA molecules by combining DNA from the monkey virus SV40 with that of the lambda virus. In 1973 Herbert Boyer and Stanley Cohen created the first transgenic organism by inserting antibiotic resistance genes into the plasmid of an Escherichia coli bacterium. A year later Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. These achievements led to concerns in the scientific community about potential risks from genetic engineering, which were first discussed in depth at the Asilomar Conference in 1975. One of the main recommendations from this meeting was that government oversight of recombinant DNA research should be established until the technology was deemed safe.
In 1976 Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson and a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. In 1980, the U.S. Supreme Court in the Diamond v. Chakrabarty case ruled that genetically altered life could be patented. The insulin produced by bacteria was approved for release by the Food and Drug Administration (FDA) in 1982.
In 1983, a biotech company, Advanced Genetic Sciences (AGS) applied for U.S. government authorisation to perform field tests with the ice-minus strain of Pseudomonas syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment when a strawberry field and a potato field in California were sprayed with it. Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".
The first field trials of genetically engineered plants occurred in France and the US in 1986, when tobacco plants were engineered to be resistant to herbicides. The People's Republic of China was the first country to commercialise transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994 Calgene attained approval to commercially release the first genetically modified food, the Flavr Savr, a tomato engineered to have a longer shelf life. In 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialised in Europe. In 1995, Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US. In 2009, 11 transgenic crops were grown commercially in 25 countries, the largest of which by area grown were the US, Brazil, Argentina, India, Canada, China, Paraguay and South Africa.
In 2010, scientists at the J. Craig Venter Institute created the first synthetic genome and inserted it into an empty bacterial cell. The resulting bacterium, named Mycoplasma laboratorium, could replicate and produce proteins. Four years later this was taken a step further when a bacterium was developed that replicated a plasmid containing a unique base pair, creating the first organism engineered to use an expanded genetic alphabet. In 2012, Jennifer Doudna and Emmanuelle Charpentier collaborated to develop the CRISPR/Cas9 system, a technique which can be used to easily and specifically alter the genome of almost any organism.
Process
Creating a GMO is a multi-step process. Genetic engineers must first choose what gene they wish to insert into the organism. This is driven by what the aim is for the resultant organism and is built on earlier research. Genetic screens can be carried out to determine potential genes and further tests then used to identify the best candidates. The development of microarrays, transcriptomics and genome sequencing has made it much easier to find suitable genes. Luck also plays its part; the Roundup Ready gene was discovered after scientists noticed a bacterium thriving in the presence of the herbicide.
Gene isolation and cloning
The next step is to isolate the candidate gene. The cell containing the gene is opened and the DNA is purified. The gene is separated by using restriction enzymes to cut the DNA into fragments or polymerase chain reaction (PCR) to amplify the gene segment. These segments can then be extracted through gel electrophoresis. If the chosen gene or the donor organism's genome has been well studied it may already be accessible from a genetic library. If the DNA sequence is known, but no copies of the gene are available, it can also be artificially synthesised. Once isolated the gene is ligated into a plasmid that is then inserted into a bacterium. The plasmid is replicated when the bacteria divide, ensuring unlimited copies of the gene are available. The RK2 plasmid is notable for its ability to replicate in a wide variety of single-celled organisms, which makes it suitable as a genetic engineering tool.
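As an illustration of the restriction-digest step, the sketch below scans a made-up sequence for the recognition site of EcoRI, a real enzyme that cuts GAATTC between the G and the first A; the sequence itself is hypothetical:

```python
seq = "ATGAATTCGGCTAGCAGAATTCTT"  # hypothetical DNA sequence
site = "GAATTC"                   # EcoRI recognition site; the enzyme cuts G^AATTC

cut_sites = [i for i in range(len(seq) - len(site) + 1) if seq.startswith(site, i)]

fragments, prev = [], 0
for pos in cut_sites:
    fragments.append(seq[prev:pos + 1])  # the cut leaves the leading G on the left fragment
    prev = pos + 1
fragments.append(seq[prev:])
print(cut_sites, fragments)  # [2, 16] ['ATG', 'AATTCGGCTAGCAG', 'AATTCTT']
```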
Before the gene is inserted into the target organism it must be combined with other genetic elements. These include a promoter and terminator region, which initiate and end transcription. A selectable marker gene is added, which in most cases confers antibiotic resistance, so researchers can easily determine which cells have been successfully transformed. The gene can also be modified at this stage for better expression or effectiveness. These manipulations are carried out using recombinant DNA techniques, such as restriction digests, ligations and molecular cloning.
Inserting DNA into the host genome
There are a number of techniques used to insert genetic material into the host genome. Some bacteria can naturally take up foreign DNA. This ability can be induced in other bacteria via stress (e.g. thermal or electric shock), which increases the cell membrane's permeability to DNA; up-taken DNA can either integrate with the genome or exist as extrachromosomal DNA. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors.
Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. In plants the DNA is often inserted using Agrobacterium-mediated transformation, taking advantage of Agrobacterium's T-DNA sequence that allows natural insertion of genetic material into plant cells. Other methods include biolistics, where particles of gold or tungsten are coated with DNA and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid DNA.
As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through the use of tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. Selectable markers are used to easily differentiate transformed from untransformed cells. These markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant.
Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. These tests can also confirm the chromosomal location and copy number of the inserted gene. The presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products (RNA and protein) are also used. These include northern hybridisation, quantitative RT-PCR, Western blot, immunofluorescence, ELISA and phenotypic analysis.
The new genetic material can be inserted randomly within the host genome or targeted to a specific location. The technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. This tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. The frequency of gene targeting can be greatly enhanced through genome editing. Genome editing uses artificially engineered nucleases that create specific double-stranded breaks at desired locations in the genome, and use the cell's endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end-joining. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient. In addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout.
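For the CRISPR/Cas9 case, candidate target sites are typically chosen as a 20-nucleotide protospacer followed by an "NGG" PAM, a requirement of the commonly used SpCas9 enzyme; both the PAM rule and the sequence below are details added here for illustration, not stated in the text above:

```python
seq = "GGACGTTACGGATCGGTACCTTAGGCTAACGGTTACGG"  # hypothetical genomic stretch

# Candidate SpCas9 sites: 20-nt protospacer at i..i+19, any base at i+20, "GG" at i+21..i+22
targets = [(i, seq[i:i + 20]) for i in range(len(seq) - 22)
           if seq[i + 21:i + 23] == "GG"]
for pos, protospacer in targets:
    print(pos, protospacer, seq[pos + 20:pos + 23])  # each protospacer and its PAM
```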
Applications
Genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. Bacteria, the first organisms to be genetically modified, can have plasmid DNA inserted containing new genes that code for medicines or enzymes that process food and other substrates. Plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. Most commercialised GMOs are insect resistant or herbicide tolerant crop plants. Genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. The genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk.
Medicine
Genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. One of the earliest uses of genetic engineering was to mass-produce human insulin in bacteria. This application has now been applied to human growth hormones, follicle stimulating hormones (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences.
Genetic engineering is also used to create animal models of human diseases. Genetically modified mice are the most common genetically engineered animal model. They have been used to study and model cancer (the oncomouse), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and Parkinson disease. Potential cures can be tested against these mouse models.
Gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. Clinical research using somatic gene therapy has been conducted with several diseases, including X-linked SCID, chronic lymphocytic leukemia (CLL), and Parkinson's disease. In 2012, Alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. In 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow healthy skin, which was then grafted onto the 80 percent of the boy's body affected by the illness.
Germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. In 2015, CRISPR was used to edit the DNA of non-viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. There are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human being's appearance, adaptability, intelligence, character or behavior. The distinction between cure and enhancement can also be difficult to establish. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos in an attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. The work was widely condemned as unethical, dangerous, and premature. Currently, germline modification is banned in 40 countries. Scientists who do this type of research typically let embryos grow for only a few days, without allowing them to develop into a baby.
Researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of pig to human organ transplantation. Scientists are creating "gene drives", changing the genomes of mosquitoes to make them immune to malaria, and then looking to spread the genetically altered mosquitoes throughout the mosquito population in the hopes of eliminating the disease.
Research
Genetic engineering is an important tool for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. Genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. Bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at -80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research.
Organisms are genetically engineered to discover the functions of certain genes. This could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. These experiments generally involve loss of function, gain of function, tracking and expression.
Loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. In a simple knockout a copy of the desired gene has been altered to make it non-functional. Embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. These stem cells are injected into blastocysts, which are implanted into surrogate mothers. This allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. It is used especially frequently in developmental biology. When this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called "scanning mutagenesis". The simplest method, and the first to be used, is "alanine scanning", where every position in turn is mutated to the unreactive amino acid alanine.
Gain of function experiments, the logical counterpart of knockouts. These are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. The process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. Gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy.
Tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. One way to do this is to replace the wild-type gene with a 'fusion' gene, which is a juxtaposition of the wild-type gene with a reporting element such as green fluorescent protein (GFP) that will allow easy visualisation of the products of the genetic modification. While this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. More sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies.
Expression studies aim to discover where and when specific proteins are produced. In these experiments, the DNA sequence before the DNA that codes for a protein, known as a gene's promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as GFP or an enzyme that catalyses the production of a dye. Thus the time and place where a particular protein is produced can be observed. Expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins; this process is known as promoter bashing.
Industrial
Organisms can have their cells transformed with a gene coding for a useful protein, such as an enzyme, so that they will overexpress the desired protein. Mass quantities of the protein can then be manufactured by growing the transformed organism in bioreactor equipment using industrial fermentation, and then purifying the protein. Some genes do not work well in bacteria, so yeast, insect cells or mammalian cells can also be used. These techniques are used to produce medicines such as insulin, human growth hormone, and vaccines, supplements such as tryptophan, aid in the production of food (chymosin in cheese making) and fuels. Other applications with genetically engineered bacteria could involve making them perform tasks outside their natural cycle, such as making biofuels, cleaning up oil spills, carbon and other toxic waste and detecting arsenic in drinking water. Certain genetically modified microbes can also be used in biomining and bioremediation, due to their ability to extract heavy metals from their environment and incorporate them into compounds that are more easily recoverable.
In materials science, a genetically modified virus has been used in a research laboratory as a scaffold for assembling a more environmentally friendly lithium-ion battery. Bacteria have also been engineered to function as sensors by expressing a fluorescent protein under certain environmental conditions.
Agriculture
One of the best-known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. Crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products.
The first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. Fungal and virus resistant crops have also been developed or are in development. This makes the insect and weed management of crops easier and can indirectly increase crop yield. GM crops that directly improve yield by accelerating growth or making the plant more hardy (by improving salt, cold or drought tolerance) are also under development. In 2016, salmon genetically modified with growth hormones to reach normal adult size much faster went on sale.
GMOs have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. The Amflora potato produces a more industrially useful blend of starches. Soybeans and canola have been genetically modified to produce more healthy oils. The first commercialised GM food was a tomato that had delayed ripening, increasing its shelf life.
Plants and animals have been engineered to produce materials they do not normally make. Pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. Cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the FDA approved a drug produced in goat milk.
Other applications
Genetic engineering has potential applications in conservation and natural area management. Gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. Transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. With the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. Applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice.
Genetic engineering is also being used to create microbial art. Some bacteria have been genetically engineered to create black and white photographs. Novelty items such as lavender-colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering.
Regulation
The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of GMOs. The development of a regulatory framework began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. As the technology improved the US established a committee at the Office of Science and Technology, which assigned regulatory approval of GM food to the USDA, FDA and EPA. The Cartagena Protocol on Biosafety, an international treaty that governs the transfer, handling, and use of GMOs, was adopted on 29 January 2000. One hundred and fifty-seven countries are members of the Protocol, and many use it as a reference point for their own regulations.
The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. Some countries allow the import of GM food with authorisation, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation even though no GM products are yet produced (Japan, South Korea). Most countries that do not allow GMO cultivation do permit research. Some of the most marked differences occur between the US and Europe. The US policy focuses on the product (not the process), only looks at verifiable scientific risks and uses the concept of substantial equivalence. The European Union by contrast has possibly the most stringent GMO regulations in the world. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority. The criteria for authorisation fall in four broad categories: "safety", "freedom of choice", "labelling", and "traceability". The level of regulation in other countries that cultivate GMOs lies between that of Europe and the United States.
One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that absent scientific evidence of harm even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In Canada and the US labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled.
Controversy
Critics have objected to the use of genetic engineering on several grounds, including ethical, ecological and economic concerns. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries.
Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. Other ethical issues raised include the patenting of life, the use of intellectual property rights, the level of labeling on products, control of the food supply and the objectivity of the regulatory process. Although doubts have been raised, economically most studies have found growing GM crops to be beneficial to farmers.
Gene flow between GM crops and compatible plants, along with increased use of selective herbicides, can increase the risk of "superweeds" developing. Other environmental concerns involve potential impacts on non-target organisms, including soil microbes, and an increase in secondary and resistant insect pests. Many of the environmental impacts regarding GM crops may take many years to be understood and are also evident in conventional agriculture practices. With the commercialisation of genetically modified fish there are concerns over what the environmental consequences will be if they escape.
There are three main concerns over the safety of genetically modified food: whether they may provoke an allergic reaction; whether the genes could transfer from the food into human cells; and whether the genes not approved for human consumption could outcross to other crops. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are less likely than scientists to perceive GM foods as safe.
In popular culture
Genetic engineering features in many science fiction stories. Frank Herbert's novel The White Plague describes the deliberate use of genetic engineering to create a pathogen which specifically kills women. Another of Herbert's creations, the Dune series of novels, uses genetic engineering to create the powerful Tleilaxu. Few films have informed audiences about genetic engineering, with the exception of the 1978 The Boys from Brazil and the 1993 Jurassic Park, both of which make use of a lesson, a demonstration, and a clip of scientific film. Genetic engineering methods are weakly represented in film; Michael Clark, writing for the Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" in films such as The 6th Day. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience.
See also
Biological engineering
Computational genomics
Modifications (genetics)
Mutagenesis (molecular biology technique)
References
Further reading
External links
GMO Safety - Information about research projects on the biological safety of genetically modified plants.
GMO-compass, news on GMO en EU
1950s neologisms
1972 introductions
Biological engineering
Biotechnology
Molecular biology
Molecular genetics
Engineering disciplines | Genetic engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 6,808 | [
"Biological engineering",
"Genetic engineering",
"Biotechnology",
"Molecular genetics",
"nan",
"Molecular biology",
"Biochemistry"
] |
12,385 | https://en.wikipedia.org/wiki/Genetic%20code | The genetic code is the set of rules used by living cells to translate information encoded within genetic material (DNA or RNA sequences of nucleotide triplets or codons) into proteins. Translation is accomplished by the ribosome, which links proteinogenic amino acids in an order specified by messenger RNA (mRNA), using transfer RNA (tRNA) molecules to carry amino acids and to read the mRNA three nucleotides at a time. The genetic code is highly similar among all organisms and can be expressed in a simple table with 64 entries.
The codons specify which amino acid will be added next during protein biosynthesis. With some exceptions, a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. The vast majority of genes are encoded with a single scheme (see the RNA codon table). That scheme is often called the canonical or standard genetic code, or simply the genetic code, though variant codes (such as in mitochondria) exist.
History
Efforts to understand how proteins are encoded began after DNA's structure was discovered in 1953. The key discoverers, English biophysicist Francis Crick and American biologist James Watson, working together at the Cavendish Laboratory of the University of Cambridge, hypothesised that information flows from DNA and that there is a link between DNA and proteins. Soviet-American physicist George Gamow was the first to give a workable scheme for protein synthesis from DNA. He postulated that sets of three bases (triplets) must be employed to encode the 20 standard amino acids used by living cells to build proteins, which would allow a maximum of 4³ = 64 amino acids. He called this DNA–protein interaction (the original genetic code) the "diamond code".
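Gamow's counting argument can be written out directly: with four bases, codons of length n give 4^n combinations, so triplets are the shortest codons that can cover 20 amino acids.

```latex
% Minimum codon length for 20 amino acids with a 4-letter alphabet
4^1 = 4 < 20, \qquad 4^2 = 16 < 20, \qquad 4^3 = 64 \geq 20
```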
In 1954, Gamow created an informal scientific organisation, the RNA Tie Club, as suggested by Watson, for scientists of different persuasions who were interested in how proteins were synthesised from genes. However, the club could have only 20 permanent members, one to represent each of the 20 amino acids, and four additional honorary members to represent the four nucleotides of DNA.
The first scientific contribution of the club, later recorded as "one of the most important unpublished articles in the history of science" and "the most famous unpublished paper in the annals of molecular biology", was made by Crick. Crick presented a type-written paper titled "On Degenerate Templates and the Adaptor Hypothesis: A Note for the RNA Tie Club" to the members of the club in January 1955, which "totally changed the way we thought about protein synthesis", as Watson recalled. The hypothesis states that the triplet code was not passed on to amino acids as Gamow thought, but carried by a different molecule, an adaptor, that interacts with amino acids. The adaptor was later identified as tRNA.
Codons
The Crick, Brenner, Barnett and Watts-Tobin experiment first demonstrated that codons consist of three DNA bases.
Marshall Nirenberg and J. Heinrich Matthaei were the first to reveal the nature of a codon in 1961. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced that the codon UUU specified the amino acid phenylalanine.
This was followed by experiments in Severo Ochoa's laboratory that demonstrated that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine and that the poly-cytosine RNA sequence (CCCCC...) coded for the polypeptide poly-proline. Therefore, the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using various copolymers most of the remaining codons were then determined.
Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly thereafter, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. This work was based upon Ochoa's earlier studies, which had earned Ochoa the Nobel Prize in Physiology or Medicine in 1959 for his work on the enzymology of RNA synthesis.
Extending this work, Nirenberg and Philip Leder revealed the code's triplet nature and deciphered its codons. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments. Khorana, Holley and Nirenberg received the Nobel Prize (1968) for their work.
The three stop codons were named by discoverers Richard Epstein and Charles Steinberg. "Amber" was named after their friend Harris Bernstein, whose last name means "amber" in German. The other two stop codons were named "ochre" and "opal" in order to keep the "color names" theme.
Expanded genetic codes (synthetic biology)
In a broad academic audience, the concept of the evolution of the genetic code from the original and ambiguous genetic code to a well-defined ("frozen") code with the repertoire of 20 (+2) canonical amino acids is widely accepted.
However, there are different opinions, concepts, approaches and ideas about the best way to change it experimentally. Models have even been proposed that predict "entry points" for synthetic amino acid invasion of the genetic code.
Since 2001, 40 non-natural amino acids have been added to proteins by creating a unique codon (recoding) and a corresponding transfer RNA:aminoacyl-tRNA synthetase pair to encode it with diverse physicochemical and biological properties, in order to be used as a tool for exploring protein structure and function or to create novel or enhanced proteins.
H. Murakami and M. Sisido extended some codons to have four and five bases. Steven A. Benner constructed a functional 65th (in vivo) codon.
In 2015 N. Budisa, D. Söll and co-workers reported the full substitution of all 20,899 tryptophan residues (UGG codons) with unnatural thienopyrrole-alanine in the genetic code of the bacterium Escherichia coli.
In 2016 the first stable semisynthetic organism was created. It was a (single cell) bacterium with two synthetic bases (called X and Y). The bases survived cell division.
In 2017, researchers in South Korea reported that they had engineered a mouse with an extended genetic code that can produce proteins with unnatural amino acids.
In May 2019, researchers reported the creation of a new "Syn61" strain of the bacterium Escherichia coli. This strain has a fully synthetic genome that is refactored (all overlaps expanded), recoded (removing the use of three out of 64 codons completely), and further modified to remove the now unnecessary tRNAs and release factors. It is fully viable and grows 1.6× slower than its wild-type counterpart "MDS42".
Features
Reading frame
A reading frame is defined by the initial triplet of nucleotides from which translation starts. It sets the frame for a run of successive, non-overlapping codons, which is known as an "open reading frame" (ORF). For example, the string 5'-AAATGAACG-3', if read from the first position, contains the codons AAA, TGA, and ACG; if read from the second position, it contains the codons AAT and GAA; and if read from the third position, it contains the codons ATG and AAC. Every sequence can, thus, be read in its 5' → 3' direction in three reading frames, each producing a possibly distinct amino acid sequence: in the given example, Lys (K)-Trp (W)-Thr (T), Asn (N)-Glu (E), or Met (M)-Asn (N), respectively (when translating with the vertebrate mitochondrial code). When DNA is double-stranded, six possible reading frames are defined, three in the forward orientation on one strand and three reverse on the opposite strand. Protein-coding frames are defined by a start codon, usually the first AUG (ATG) codon in the RNA (DNA) sequence.
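A minimal sketch reproducing the three forward reading frames of the example string; the three reverse frames are obtained the same way from the reverse complement:

```python
seq = "AAATGAACG"  # the 5'->3' example string from the text

for offset in range(3):
    codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
    print(f"frame {offset + 1}: {codons}")
# frame 1: ['AAA', 'TGA', 'ACG']
# frame 2: ['AAT', 'GAA']   (the trailing 'CG' is an incomplete codon)
# frame 3: ['ATG', 'AAC']

# Reverse strand, read 5'->3': complement each base, then reverse the string
reverse = seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]
print("reverse strand:", reverse)  # 'CGTTCATTT'
```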
In eukaryotes, ORFs in exons are often interrupted by introns.
Start and stop codons
Translation starts with a chain-initiation codon or start codon. The start codon alone is not sufficient to begin the process. Nearby sequences such as the Shine-Dalgarno sequence in E. coli and initiation factors are also required to start translation. The most common start codon is AUG, which is read as methionine or as formylmethionine (in bacteria, mitochondria, and plastids). Alternative start codons depending on the organism include "GUG" or "UUG"; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine.
The three stop codons have names: UAG is amber, UGA is opal (sometimes also called umber), and UAA is ochre. Stop codons are also called "termination" or "nonsense" codons. They signal release of the nascent polypeptide from the ribosome because no cognate tRNA has anticodons complementary to these stop signals, allowing a release factor to bind to the ribosome instead.
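A sketch of how a start codon and the first in-frame stop codon delimit translation on an mRNA; the sequence is hypothetical, and real initiation also depends on the context sequences described above:

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}
mrna = "GGAUGGCUUUAAGCUAAGG"  # hypothetical mRNA

start = mrna.find("AUG")      # the first AUG sets the reading frame
peptide_codons = []
for i in range(start, len(mrna) - 2, 3):
    codon = mrna[i:i + 3]
    if codon in STOP_CODONS:  # no cognate tRNA pairs here; a release factor binds instead
        break
    peptide_codons.append(codon)
print(peptide_codons)  # ['AUG', 'GCU', 'UUA', 'AGC'], terminated at UAA
```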
Effect of mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, mutations, can affect an organism's phenotype, especially if they occur within the protein coding sequence of a gene. Error rates are typically 1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases.
Missense mutations and nonsense mutations are examples of point mutations that can cause genetic diseases such as sickle-cell disease and thalassemia respectively. Clinically important missense mutations generally change the properties of the coded amino acid residue among basic, acidic, polar or non-polar states, whereas nonsense mutations result in a stop codon.
Mutations that disrupt the reading frame sequence by indels (insertions or deletions) of a non-multiple of 3 nucleotide bases are known as frameshift mutations. These mutations usually result in a completely different translation from the original, and likely cause a stop codon to be read, which truncates the protein. These mutations may impair the protein's function and are thus rare in in vivo protein-coding sequences. One reason inheritance of frameshift mutations is rare is that, if the protein being translated is essential for growth under the selective pressures the organism faces, absence of a functional protein may cause death before the organism becomes viable. Frameshift mutations may result in severe genetic diseases such as Tay–Sachs disease.
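A small demonstration of how a single-base insertion regroups every downstream codon, here producing a premature stop codon; the sequence is hypothetical:

```python
def codons(s):
    """Group a sequence into successive, non-overlapping triplets."""
    return [s[i:i + 3] for i in range(0, len(s) - 2, 3)]

seq = "ATGAAAGGGCCC"               # hypothetical: Met-Lys-Gly-Pro
shifted = seq[:3] + "T" + seq[3:]  # insert a single T after the start codon

print(codons(seq))      # ['ATG', 'AAA', 'GGG', 'CCC']
print(codons(shifted))  # ['ATG', 'TAA', 'AGG', 'GCC'] -> TAA is a premature stop
```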
Although most mutations that change protein sequences are harmful or neutral, some mutations have benefits. These mutations may enable the mutant organism to withstand particular environmental stresses better than wild type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage, since these viruses thereby evolve rapidly, and thus evade the immune system defensive responses. In large populations of asexually reproducing organisms, for example, E. coli, multiple beneficial mutations may co-occur. This phenomenon is called clonal interference and causes competition among the mutations.
Degeneracy
Degeneracy is the redundancy of the genetic code. This term was given by Bernfield and Nirenberg. The genetic code has redundancy but no ambiguity (see the codon tables below for the full correlation). For example, although codons GAA and GAG both specify glutamic acid (redundancy), neither specifies another amino acid (no ambiguity). The codons encoding one amino acid may differ in any of their three positions. For example, the amino acid leucine is specified by YUR or CUN (UUA, UUG, CUU, CUC, CUA, or CUG) codons (difference in the first or third position indicated using IUPAC notation), while the amino acid serine is specified by UCN or AGY (UCA, UCG, UCC, UCU, AGU, or AGC) codons (difference in the first, second, or third position). A practical consequence of redundancy is that errors in the third position of the triplet codon cause only a silent mutation or an error that would not affect the protein because the hydrophilicity or hydrophobicity is maintained by equivalent substitution of amino acids; for example, a codon of NUN (where N = any nucleotide) tends to code for hydrophobic amino acids. NCN yields amino acid residues that are small in size and moderate in hydropathicity; NAN encodes average-size hydrophilic residues. The genetic code is so well-structured for hydropathicity that a mathematical analysis (Singular Value Decomposition) of 12 variables (4 nucleotides x 3 positions) yields a remarkable correlation (C = 0.95) for predicting the hydropathicity of the encoded amino acid directly from the triplet nucleotide sequence, without translation. Note that eight amino acids are not affected at all by mutations at the third position of the codon, whereas a mutation at the second position is likely to cause a radical change in the physicochemical properties of the encoded amino acid.
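The degeneracy pattern can be checked directly against the standard codon table. The sketch below builds the table from the standard NCBI ordering and counts the "fourfold boxes", codon families whose third base never changes the amino acid; it recovers the eight amino acids mentioned above:

```python
bases = "TCAG"
# Standard genetic code in NCBI order (base1, base2, base3 each cycling T, C, A, G)
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = {b1 + b2 + b3: aas[16 * i + 4 * j + k]
        for i, b1 in enumerate(bases)
        for j, b2 in enumerate(bases)
        for k, b3 in enumerate(bases)}

print(sum(1 for aa in code.values() if aa == "L"))  # 6 codons encode leucine

# Fourfold boxes: the first two bases fix the amino acid regardless of the third base
fourfold = {code[p + "T"] for p in (a + b for a in bases for b in bases)
            if len({code[p + b3] for b3 in bases}) == 1}
print(sorted(fourfold))  # ['A', 'G', 'L', 'P', 'R', 'S', 'T', 'V'] -> 8 amino acids
```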
Nevertheless, changes in the first position of the codons are more important than changes in the second position on a global scale. The reason may be that charge reversal (from a positive to a negative charge or vice versa) can only occur upon mutations in the first position of certain codons, but not upon changes in the second position of any codon. Such charge reversal may have dramatic consequences for the structure or function of a protein. This aspect may have been largely underestimated by previous studies.
Codon usage bias
The frequency of codons, also known as codon usage bias, can vary from species to species, with functional implications for the control of translation. Codon usage varies by organism; for example, the most common proline codon in E. coli is CCG, whereas in humans CCG is the least used proline codon.
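Codon usage is simply a frequency count over the codons of coding sequences. A toy sketch, with a hypothetical sequence chosen so that CCG dominates among the proline codons, as it does in E. coli:

```python
from collections import Counter

cds = "ATGCCGCCTCCGAAACCGCCATAA"  # hypothetical coding sequence
usage = Counter(cds[i:i + 3] for i in range(0, len(cds), 3))

proline = {c: usage[c] for c in ("CCT", "CCC", "CCA", "CCG")}  # the four proline codons
print(proline)  # {'CCT': 1, 'CCC': 0, 'CCA': 1, 'CCG': 3}
```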
Alternative genetic codes
Non-standard amino acids
In some proteins, non-standard amino acids are substituted for standard stop codons, depending on associated signal sequences in the messenger RNA. For example, UGA can code for selenocysteine and UAG can code for pyrrolysine. Selenocysteine came to be seen as the 21st amino acid, and pyrrolysine as the 22nd. Both selenocysteine and pyrrolysine may be present in the same organism. Although the genetic code is normally fixed in an organism, the prokaryote Acetohalobium arabaticum can expand its genetic code from 20 to 21 amino acids (by including pyrrolysine) under different conditions of growth.
Variations
There was originally a simple and widely accepted argument that the genetic code should be universal: namely, that any variation in the genetic code would be lethal to the organism (although Crick had stated that viruses were an exception). This is known as the "frozen accident" argument for the universality of the genetic code. However, in his seminal paper on the origins of the genetic code in 1968, Francis Crick still stated that the universality of the genetic code in all organisms was an unproven assumption, and was probably not true in some instances. He predicted that "The code is universal (the same in all organisms) or nearly so". The first variation was discovered in 1979, by researchers studying human mitochondrial genes. Many slight variants were discovered thereafter, including various alternative mitochondrial codes. These minor variants for example involve translation of the codon UGA as tryptophan in Mycoplasma species, and translation of CUG as a serine rather than leucine in yeasts of the "CTG clade" (such as Candida albicans). Because viruses must use the same genetic code as their hosts, modifications to the standard genetic code could interfere with viral protein synthesis or functioning. However, viruses such as totiviruses have adapted to the host's genetic code modification. In bacteria and archaea, GUG and UUG are common start codons. In rare cases, certain proteins may use alternative start codons.
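Variant codes are conveniently represented as a small set of overrides on the standard table. A minimal sketch using the well-established vertebrate mitochondrial differences; only the affected codons are shown:

```python
# Standard-code meanings for the codons that differ in vertebrate mitochondria
standard = {"UGA": "*", "AUA": "I", "AGA": "R", "AGG": "R"}

# Vertebrate mitochondrial code: reassign UGA and AUA, turn AGA/AGG into stops
vertebrate_mito = {**standard, "UGA": "W", "AUA": "M", "AGA": "*", "AGG": "*"}

print(standard["UGA"], vertebrate_mito["UGA"])  # '*' vs 'W' (stop vs tryptophan)
```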
Surprisingly, variations in the interpretation of the genetic code exist also in human nuclear-encoded genes: In 2016, researchers studying the translation of malate dehydrogenase found that in about 4% of the mRNAs encoding this enzyme the stop codon is naturally used to encode the amino acids tryptophan and arginine. This type of recoding is induced by a high-readthrough stop codon context and it is referred to as functional translational readthrough.
Despite these differences, all known naturally occurring codes are very similar. The coding mechanism is the same for all organisms: three-base codons, tRNA, ribosomes, single direction reading and translating single codons into single amino acids. The most extreme variations occur in certain ciliates where the meaning of stop codons depends on their position within mRNA. When close to the 3' end they act as terminators while in internal positions they either code for amino acids as in Condylostoma magnum or trigger ribosomal frameshifting as in Euplotes.
The origins and variation of the genetic code, including the mechanisms behind the evolvability of the genetic code, have been widely studied, and some studies have been done experimentally evolving the genetic code of some organisms.
Inference
Variant genetic codes used by an organism can be inferred by identifying highly conserved genes encoded in that genome, and comparing its codon usage to the amino acids in homologous proteins of other organisms. For example, the program FACIL infers a genetic code by searching which amino acids in homologous protein domains are most often aligned to every codon. The resulting amino acid (or stop codon) probabilities for each codon are displayed in a genetic code logo.
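A toy version of the tallying step behind this kind of inference fits in a few lines. The sketch below is only schematic (it is not FACIL's pipeline or API; the input pairs stand in for codons aligned against amino acids from homologous protein domains):

```python
from collections import Counter, defaultdict

def infer_code(alignment_pairs):
    """For each codon, estimate amino acid probabilities from how often
    each amino acid is aligned against that codon in homologous proteins."""
    tallies = defaultdict(Counter)
    for codon, aa in alignment_pairs:
        tallies[codon][aa] += 1
    return {codon: {aa: n / sum(c.values()) for aa, n in c.items()}
            for codon, c in tallies.items()}

# Hypothetical observations: CTG mostly aligns to serine, as in the CTG clade.
obs = [("CTG", "S"), ("CTG", "S"), ("CTG", "L"), ("TGA", "W")]
print(infer_code(obs))
```

The resulting per-codon probabilities are the kind of output a tool like FACIL renders as a genetic code logo.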
As of January 2022, the most complete survey of genetic codes is that of Shulgina and Eddy, who screened 250,000 prokaryotic genomes using their Codetta tool. Codetta uses an approach similar to FACIL's, but with the larger Pfam database. Although the NCBI already provides 27 translation tables, the authors were able to find five new genetic code variations (corroborated by tRNA mutations) and to correct several misattributions. Codetta was later used to analyze genetic code change in ciliates.
Origin
The genetic code is a key part of the history of life, according to one version of which self-replicating RNA molecules preceded life as we know it. This is the RNA world hypothesis. Under this hypothesis, any model for the emergence of the genetic code is intimately related to a model of the transfer from ribozymes (RNA enzymes) to proteins as the principal enzymes in cells. In line with the RNA world hypothesis, transfer RNA molecules appear to have evolved before modern aminoacyl-tRNA synthetases, so the latter cannot be part of the explanation of its patterns.
A hypothetical randomly evolved genetic code further motivates a biochemical or evolutionary model for its origin. If amino acids were randomly assigned to triplet codons, there would be 1.5 × 10⁸⁴ possible genetic codes. This number is found by calculating the number of ways that 21 items (20 amino acids plus one stop) can be placed in 64 bins, wherein each item is used at least once. However, the distribution of codon assignments in the genetic code is nonrandom. In particular, the genetic code clusters certain amino acid assignments.
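The 1.5 × 10⁸⁴ figure can be reproduced by counting surjections with inclusion-exclusion. A quick check in Python (exact integer arithmetic; nothing assumed beyond the combinatorial statement above):

```python
from math import comb

# Assignments of 21 meanings (20 amino acids + stop) to 64 codons such that
# every meaning is used at least once: surjections, by inclusion-exclusion.
n, k = 64, 21
surjections = sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1))
print(f"{surjections:.2e}")  # ~1.51e+84, matching the figure quoted above
```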
Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. This could be an evolutionary relic of an early, simpler genetic code with fewer amino acids that later evolved to code a larger set of amino acids. It could also reflect steric and chemical properties that had another effect on the codon during its evolution. Amino acids with similar physical properties also tend to have similar codons, reducing the problems caused by point mutations and mistranslations.
Given the non-random genetic triplet coding scheme, a tenable hypothesis for the origin of genetic code could address multiple aspects of the codon table, such as absence of codons for D-amino acids, secondary codon patterns for some amino acids, confinement of synonymous positions to third position, the small set of only 20 amino acids (instead of a number approaching 64), and the relation of stop codon patterns to amino acid coding patterns.
Three main hypotheses address the origin of the genetic code. Many models belong to one of them or to a hybrid:
Random freeze: the genetic code was randomly created. For example, early tRNA-like ribozymes may have had different affinities for amino acids, with codons emerging from another part of the ribozyme that exhibited random variability. Once enough peptides were coded for, any major random change in the genetic code would have been lethal; hence it became "frozen".
Stereochemical affinity: the genetic code is a result of a high affinity between each amino acid and its codon or anti-codon; the latter option implies that pre-tRNA molecules matched their corresponding amino acids by this affinity. Later during evolution, this matching was gradually replaced with matching by aminoacyl-tRNA synthetases.
Optimality: the genetic code continued to evolve after its initial creation, so that the current code maximizes some fitness function, usually some kind of error minimization.
Hypotheses have addressed a variety of scenarios:
Chemical principles govern specific RNA interaction with amino acids. Experiments with aptamers showed that some amino acids have a selective chemical affinity for their codons. Experiments showed that of 8 amino acids tested, 6 show some RNA triplet-amino acid association.
Biosynthetic expansion. The genetic code grew from a simpler earlier code through a process of "biosynthetic expansion". Primordial life "discovered" new amino acids (for example, as by-products of metabolism) and later incorporated some of these into the machinery of genetic coding. Although much circumstantial evidence has been found to suggest that fewer amino acid types were used in the past, precise and detailed hypotheses about which amino acids entered the code in what order are controversial. However, several studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of early-addition amino acids, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of later-addition amino acids.
Natural selection has led to codon assignments of the genetic code that minimize the effects of mutations. A recent hypothesis suggests that the triplet code was derived from codes that used longer than triplet codons (such as quadruplet codons). Longer than triplet decoding would increase codon redundancy and would be more error resistant. This feature could allow accurate decoding absent complex translational machinery such as the ribosome, such as before cells began making ribosomes.
Information channels: Information-theoretic approaches model the process of translating the genetic code into corresponding amino acids as an error-prone information channel. The inherent noise (that is, the error) in the channel poses the organism with a fundamental question: how can a genetic code be constructed to withstand noise while accurately and efficiently translating information? These "rate-distortion" models suggest that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino acids, for error-tolerance and for minimal resource cost. The code emerges at a transition when the mapping of codons to amino acids becomes nonrandom. The code's emergence is governed by the topology defined by the probable errors and is related to the map coloring problem.
Game theory: Models based on signaling games combine elements of game theory, natural selection and information channels. Such models have been used to suggest that the first polypeptides were likely short and had non-enzymatic function. Game theoretic models suggested that the organization of RNA strings into cells may have been necessary to prevent "deceptive" use of the genetic code, i.e. preventing the ancient equivalent of viruses from overwhelming the RNA world.
Stop codons: Codons for translational stops are also an interesting aspect to the problem of the origin of the genetic code. As an example for addressing stop codon evolution, it has been suggested that the stop codons are such that they are most likely to terminate translation early in the case of a frame shift error. In contrast, some stereochemical molecular models explain the origin of stop codons as "unassignable".
See also
List of genetic engineering software
Codon tables
References
Further reading
External links
The Genetic Codes: Genetic Code Tables
The Codon Usage Database — Codon frequency tables for many organisms
History of deciphering the genetic code
Gene expression
Genetics
Molecular genetics
Molecular biology
Protein biosynthesis | Genetic code | [
"Chemistry",
"Biology"
] | 5,418 | [
"Protein biosynthesis",
"Genetics",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
12,386 | https://en.wikipedia.org/wiki/Golden%20ratio | In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Expressed algebraically, for quantities a and b with a > b > 0, a is in a golden ratio to b if (a + b)/a = a/b = φ,
where the Greek letter phi (φ or ϕ) denotes the golden ratio. The constant φ satisfies the quadratic equation φ² = φ + 1 and is an irrational number with a value of φ = (1 + √5)/2 = 1.618033988749....
The golden ratio was called the extreme and mean ratio by Euclid, and the divine proportion by Luca Pacioli; and also goes by other names.
Mathematicians have studied the golden ratio's properties since antiquity. It is the ratio of a regular pentagon's diagonal to its side and thus appears in the construction of the dodecahedron and icosahedron. A golden rectangle—that is, a rectangle with an aspect ratio of —may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has been used to analyze the proportions of natural objects and artificial systems such as financial markets, in some cases based on dubious fits to data. The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other parts of vegetation.
Some 20th-century artists and architects, including Le Corbusier and Salvador Dalí, have proportioned their works to approximate the golden ratio, believing it to be aesthetically pleasing. These uses often appear in the form of a golden rectangle.
Calculation
Two quantities a and b are in the golden ratio φ if
(a + b)/a = a/b = φ.
Thus, if we want to find φ, we may use that the definition above holds for arbitrary b; thus, we just set b = 1, in which case a = φ and we get the equation
(φ + 1)/φ = φ,
which becomes a quadratic equation after multiplying by φ:
φ + 1 = φ²,
which can be rearranged to
φ² − φ − 1 = 0.
The quadratic formula yields two solutions:
φ = (1 + √5)/2 and φ = (1 − √5)/2.
Because φ is a ratio between positive quantities, φ is necessarily the positive root. The negative root is in fact the negative inverse −1/φ, which shares many properties with the golden ratio.
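As a quick numerical check (an illustrative snippet, not part of the original article), the positive root and the defining identity can be verified directly:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2              # positive root of x**2 - x - 1 = 0
print(phi)                            # 1.618033988749895
print(phi ** 2, phi + 1)              # both 2.618033988749895
print(-1 / phi, (1 - sqrt(5)) / 2)    # the negative root equals -1/phi
```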
History
According to Mario Livio,
Ancient Greek mathematicians first studied the golden ratio because of its frequent appearance in geometry; the division of a line into "extreme and mean ratio" (the golden section) is important in the geometry of regular pentagrams and pentagons. According to one story, 5th-century BC mathematician Hippasus discovered that the golden ratio was neither a whole number nor a fraction (it is irrational), surprising Pythagoreans. Euclid's Elements () provides several propositions and their proofs employing the golden ratio, and contains its first known definition which proceeds as follows:
The golden ratio was studied peripherally over the next millennium. Abu Kamil (c. 850–930) employed it in his geometric calculations of pentagons and decagons; his writings influenced that of Fibonacci (Leonardo of Pisa) (c. 1170–1250), who used the ratio in related geometry problems but did not observe that it was connected to the Fibonacci numbers.
Luca Pacioli named his book Divina proportione (1509) after the ratio; the book, largely plagiarized from Piero della Francesca, explored its properties including its appearance in some of the Platonic solids. Leonardo da Vinci, who illustrated Pacioli's book, called the ratio the sectio aurea ('golden section'). Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that the interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions. Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. 16th-century mathematicians such as Rafael Bombelli solved geometric problems using the ratio.
German mathematician Simon Jacob (d. 1564) noted that consecutive Fibonacci numbers converge to the golden ratio; this was rediscovered by Johannes Kepler in 1608. The first known decimal approximation of the (inverse) golden ratio was stated as "about 0.6180340" in 1597 by Michael Maestlin of the University of Tübingen in a letter to Kepler, his former student. The same year, Kepler wrote to Maestlin of the Kepler triangle, which combines the golden ratio with the Pythagorean theorem. Kepler said of these:
Eighteenth-century mathematicians Abraham de Moivre, Nicolaus I Bernoulli, and Leonhard Euler used a golden ratio-based formula which finds the value of a Fibonacci number based on its placement in the sequence; in 1843, this was rediscovered by Jacques Philippe Marie Binet, for whom it was named "Binet's formula". Martin Ohm first used the German term goldener Schnitt ('golden section') to describe the ratio in 1835. James Sully used the equivalent English term in 1875.
By 1910, inventor Mark Barr began using the Greek letter phi (φ) as a symbol for the golden ratio. It has also been represented by tau (τ), the first letter of the ancient Greek τομή ('cut' or 'section').
The zome construction system, developed by Steve Baer in the late 1960s, is based on the symmetry system of the icosahedron/dodecahedron, and uses the golden ratio ubiquitously. Between 1973 and 1974, Roger Penrose developed Penrose tiling, a pattern related to the golden ratio both in the ratio of areas of its two rhombic tiles and in their relative frequency within the pattern. This gained in interest after Dan Shechtman's Nobel-winning 1982 discovery of quasicrystals with icosahedral symmetry, which were soon afterwards explained through analogies to the Penrose tiling.
Mathematics
Irrationality
The golden ratio is an irrational number. Below are two short proofs of irrationality:
Contradiction from an expression in lowest terms
This is a proof by infinite descent. Recall that:
the whole is the longer part plus the shorter part;
the whole is to the longer part as the longer part is to the shorter part.
If we call the whole n and the longer part m, then the second statement above becomes: n is to m as m is to n − m, or, algebraically, n/m = m/(n − m).
To say that the golden ratio φ is rational means that φ is a fraction n/m where n and m are integers. We may take n/m to be in lowest terms and n and m to be positive. But if n/m is in lowest terms, then the equally valued m/(n − m) is in still lower terms. That is a contradiction that follows from the assumption that φ is rational.
By irrationality of the square root of 5
Another short proof – perhaps more commonly known – of the irrationality of the golden ratio makes use of the closure of rational numbers under addition and multiplication. If φ = (1 + √5)/2 is assumed to be rational, then √5 = 2φ − 1, the square root of 5, must also be rational. This is a contradiction, as the square roots of all non-square natural numbers are irrational.
Minimal polynomial
The golden ratio is also an algebraic number and even an algebraic integer. It has minimal polynomial
x² − x − 1.
This quadratic polynomial has two roots, φ and −1/φ.
The golden ratio is also closely related to the polynomial x² + x − 1, which has roots −φ and 1/φ = φ − 1. As the root of a quadratic polynomial, the golden ratio is a constructible number.
Golden ratio conjugate and powers
The conjugate root to the minimal polynomial x² − x − 1 is
−1/φ = 1 − φ = (1 − √5)/2 = −0.618033....
The absolute value of this quantity (≈ 0.618) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, b/a = 1/φ).
This illustrates the unique property of the golden ratio among positive numbers, that
1/φ = φ − 1,
or its inverse,
1/(1/φ) = (1/φ) + 1.
The conjugate and the defining quadratic polynomial relationship lead to decimal values that have their fractional part in common with φ:
φ = 1.618033..., 1/φ = 0.618033..., φ² = 2.618033....
The sequence of powers of φ contains these values: 0.618033..., 1.0, 1.618033..., 2.618033...; more generally,
any power of φ is equal to the sum of the two immediately preceding powers:
φⁿ = φⁿ⁻¹ + φⁿ⁻².
As a result, one can easily decompose any power of φ into a multiple of φ and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of φ:
If ⌊n/2 − 1⌋ = m, then:
φⁿ = φⁿ⁻¹ + φⁿ⁻³ + ⋯ + φⁿ⁻¹⁻²ᵐ + φⁿ⁻²⁻²ᵐ.
Continued fraction and square root
The formula φ = 1 + 1/φ can be expanded recursively to obtain a simple continued fraction for the golden ratio:
φ = [1; 1, 1, 1, ...] = 1 + 1/(1 + 1/(1 + 1/(1 + ⋯))).
It is in fact the simplest form of a continued fraction, alongside its reciprocal form:
1/φ = [0; 1, 1, 1, ...] = 0 + 1/(1 + 1/(1 + 1/(1 + ⋯))).
The convergents of these continued fractions, 1/1, 2/1, 3/2, 5/3, 8/5, 13/8, ..., or 1/1, 1/2, 2/3, 3/5, 5/8, 8/13, ..., are ratios of successive Fibonacci numbers. The consistently small terms in its continued fraction explain why the approximants converge so slowly. This makes the golden ratio an extreme case of the Hurwitz inequality for Diophantine approximations, which states that for every irrational ξ, there are infinitely many distinct fractions p/q such that
|ξ − p/q| < 1/(√5 q²).
This means that the constant √5 cannot be improved without excluding the golden ratio. It is, in fact, the smallest number that must be excluded to generate closer approximations of such Lagrange numbers.
A continued square root form for φ can be obtained from φ² = 1 + φ, yielding:
φ = √(1 + √(1 + √(1 + ⋯))).
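The slow convergence described above is easy to observe. The sketch below (illustrative, using exact rational arithmetic) iterates the continued fraction step x → 1 + 1/x and prints the convergents, which are ratios of successive Fibonacci numbers:

```python
from fractions import Fraction

x = Fraction(1)
for _ in range(10):
    x = 1 + 1 / x                 # one more level of the continued fraction
    print(x, float(x))
# 2, 3/2, 5/3, 8/5, 13/8, ... closing in on 1.6180... only slowly
```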
Relationship to Fibonacci and Lucas numbers
Fibonacci numbers and Lucas numbers have an intricate relationship with the golden ratio. In the Fibonacci sequence, each term Fₙ is equal to the sum of the preceding two terms Fₙ₋₁ and Fₙ₋₂, starting with the base sequence 0, 1 as the 0th and 1st terms F₀ and F₁:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
The sequence of Lucas numbers (not to be confused with the generalized Lucas sequences, of which this is part) is like the Fibonacci sequence, in that each term Lₙ is the sum of the previous two terms Lₙ₋₁ and Lₙ₋₂, however it instead starts with 2, 1 as the 0th and 1st terms L₀ and L₁:
2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, ...
Exceptionally, the golden ratio is equal to the limit of the ratios of successive terms in the Fibonacci sequence and sequence of Lucas numbers:
lim Fₙ₊₁/Fₙ = lim Lₙ₊₁/Lₙ = φ.
In other words, if a Fibonacci or Lucas number is divided by its immediate predecessor in the sequence, the quotient approximates φ. For example,
F₁₆/F₁₅ = 987/610 ≈ 1.6180328 and L₁₆/L₁₅ = 2207/1364 ≈ 1.6180352.
These approximations are alternately lower and higher than , and converge to as the Fibonacci and Lucas numbers increase.
Closed-form expressions for the Fibonacci and Lucas sequences that involve the golden ratio are:
Fₙ = (φⁿ − (−φ)⁻ⁿ)/√5 and Lₙ = φⁿ + (−φ)⁻ⁿ.
Combining both formulas above, one obtains a formula for φⁿ that involves both Fibonacci and Lucas numbers:
φⁿ = (Lₙ + Fₙ√5)/2.
Between Fibonacci and Lucas numbers one can deduce L₂ₙ = 5Fₙ² + 2(−1)ⁿ = Lₙ² − 2(−1)ⁿ, which simplifies to express the limit of the quotient of Lucas numbers by Fibonacci numbers as equal to the square root of five:
lim Lₙ/Fₙ = √5.
Indeed, much stronger statements are true:
|Lₙ − √5 Fₙ| = 2/φⁿ → 0 and (Lₙ/2)² = 5(Fₙ/2)² + (−1)ⁿ.
These values describe φ as a fundamental unit of the algebraic number field ℚ(√5).
Successive powers of the golden ratio obey the Fibonacci recurrence, φⁿ⁺¹ = φⁿ + φⁿ⁻¹.
The reduction to a linear expression can be accomplished in one step by using:
φⁿ = Fₙφ + Fₙ₋₁.
This identity allows any polynomial in φ to be reduced to a linear expression, as in:
3φ³ − 5φ² + 4 = 3(2φ + 1) − 5(φ + 1) + 4 = φ + 2 ≈ 3.618033.
Consecutive Fibonacci numbers can also be used to obtain a similar formula for the golden ratio, here by infinite summation: the sum over n ≥ 1 of |Fₙφ − Fₙ₊₁| equals φ, since each term equals 1/φⁿ.
In particular, the powers of φ themselves round to Lucas numbers (in order, except for the first two powers, φ⁰ and φ, which are in reverse order):
φ⁰ = 1, φ¹ ≈ 1.618034, φ² ≈ 2.618034, φ³ ≈ 4.236068, φ⁴ ≈ 6.854102, φ⁵ ≈ 11.090170,
and so forth. The Lucas numbers also directly generate powers of the golden ratio; for n ≥ 2:
φⁿ = Lₙ − (−φ)⁻ⁿ.
Rooted in their interconnecting relationship with the golden ratio is the notion that the sum of third consecutive Fibonacci numbers equals a Lucas number, that is Lₙ = Fₙ₋₁ + Fₙ₊₁; and, importantly, that F₂ₙ = FₙLₙ.
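The rounding relationship between powers of φ and Lucas numbers can be verified numerically; the helper below is an illustrative sketch, not library code:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def lucas(n: int) -> int:
    a, b = 2, 1                   # L0, L1
    for _ in range(n):
        a, b = b, a + b
    return a

# Apart from the swapped first two terms noted above, phi**n rounds to L(n).
for n in range(2, 10):
    print(n, round(phi ** n), lucas(n))   # the last two columns agree
```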
Both the Fibonacci sequence and the sequence of Lucas numbers can be used to generate approximate forms of the golden spiral (which is a special form of a logarithmic spiral) using quarter-circles with radii from these sequences, differing only slightly from the true golden logarithmic spiral. Fibonacci spiral is generally the term used for spirals that approximate golden spirals using Fibonacci number-sequenced squares and quarter-circles.
Geometry
The golden ratio features prominently in geometry. For example, it is intrinsically involved in the internal symmetry of the pentagon, and extends to form part of the coordinates of the vertices of a regular dodecahedron, as well as those of a regular icosahedron. It features in the Kepler triangle and Penrose tilings too, as well as in various other polytopes.
Construction
Dividing by interior division
Having a line segment AB, construct a perpendicular BC at point B, with BC half the length of AB. Draw the hypotenuse AC.
Draw an arc with center C and radius BC. This arc intersects the hypotenuse AC at point D.
Draw an arc with center A and radius AD. This arc intersects the original line segment AB at point S. Point S divides the original line segment AB into line segments AS and SB with lengths in the golden ratio.
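As a sanity check on this interior division (using the point labels assumed in the reconstruction above, with AB of unit length), redoing the construction with coordinates recovers the golden ratio:

```python
from math import sqrt

AB = 1.0
BC = AB / 2                      # perpendicular of half the length of AB
AC = sqrt(AB ** 2 + BC ** 2)     # hypotenuse, sqrt(5)/2
AD = AC - BC                     # arc of radius BC cuts D from the hypotenuse
AS = AD                          # arc of radius AD marks S on AB
SB = AB - AS
print(AS / SB, (1 + sqrt(5)) / 2)  # both approximately 1.6180339887
```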
Dividing by exterior division
Draw a line segment and construct off the point a segment perpendicular to and with the same length as .
Do bisect the line segment with .
A circular arc around with radius intersects in point the straight line through points and (also known as the extension of ). The ratio of to the constructed segment is the golden ratio.
Examples of applications can be seen in the articles Pentagon with a given side length, Decagon with given circumcircle, and Decagon with a given side length.
Both of the different algorithms displayed above produce geometric constructions that determine two aligned line segments whose lengths are in the golden ratio.
Golden angle
When two angles that make a full circle have measures in the golden ratio, the smaller is called the golden angle, with measure g:
g = 360°/φ² = 360°(2 − φ) ≈ 137.508°.
This angle occurs in patterns of plant growth as the optimal spacing of leaf shoots around plant stems so that successive leaves do not block sunlight from the leaves below them.
Pentagonal symmetry system
Pentagon and pentagram
In a regular pentagon the ratio of a diagonal to a side is the golden ratio, while intersecting diagonals section each other in the golden ratio. The golden ratio properties of a regular pentagon can be confirmed by applying Ptolemy's theorem to the quadrilateral formed by removing one of its vertices. If the quadrilateral's long edge and diagonals are a, and short edges are b, then Ptolemy's theorem gives a² = b² + ab. Dividing both sides by ab yields (see above)
a/b = (a + b)/a = φ.
The diagonal segments of a pentagon form a pentagram, or five-pointed star polygon, whose geometry is quintessentially described by φ. Primarily, each intersection of edges sections other edges in the golden ratio. The ratio of the length of the shorter segment to the segment bounded by the two intersecting edges (that is, a side of the inverted pentagon in the pentagram's center) is φ, as the four-color illustration shows.
Pentagonal and pentagrammic geometry permits us to calculate the following values for φ:
φ = 1 + 2 sin(π/10) = 1 + 2 sin 18°,
φ = ½ csc(π/10) = ½ csc 18°,
φ = 2 cos(π/5) = 2 cos 36°,
φ = 2 sin(3π/10) = 2 sin 54°.
Golden triangle and golden gnomon
The triangle formed by two diagonals and a side of a regular pentagon is called a golden triangle or sublime triangle. It is an acute isosceles triangle with apex angle 36° and base angles 72°. Its two equal sides are in the golden ratio to its base. The triangle formed by two sides and a diagonal of a regular pentagon is called a golden gnomon. It is an obtuse isosceles triangle with apex angle 108° and base angle 36°. Its base is in the golden ratio to its two equal sides. The pentagon can thus be subdivided into two golden gnomons and a central golden triangle. The five points of a regular pentagram are golden triangles, as are the ten triangles formed by connecting the vertices of a regular decagon to its center point.
Bisecting one of the base angles of the golden triangle subdivides it into a smaller golden triangle and a golden gnomon. Analogously, any acute isosceles triangle can be subdivided into a similar triangle and an obtuse isosceles triangle, but the golden triangle is the only one for which this subdivision is made by the angle bisector, because it is the only isosceles triangle whose base angle is twice its apex angle. The angle bisector of the golden triangle subdivides the side that it meets in the golden ratio, and the areas of the two subdivided pieces are also in the golden ratio.
If the apex angle of the golden gnomon is trisected, the trisector again subdivides it into a smaller golden gnomon and a golden triangle. The trisector subdivides the base in the golden ratio, and the two pieces have areas in the golden ratio. Analogously, any obtuse isosceles triangle can be subdivided into a similar triangle and an acute isosceles triangle, but the golden gnomon is the only one for which this subdivision is made by the angle trisector, because it is the only isosceles triangle whose apex angle is three times its base angle.
Penrose tilings
The golden ratio appears prominently in the Penrose tiling, a family of aperiodic tilings of the plane developed by Roger Penrose, inspired by Johannes Kepler's remark that pentagrams, decagons, and other shapes could fill gaps that pentagonal shapes alone leave when tiled together. Several variations of this tiling have been studied, all of whose prototiles exhibit the golden ratio:
Penrose's original version of this tiling used four shapes: regular pentagons and pentagrams, "boat" figures with three points of a pentagram, and "diamond" shaped rhombi.
The kite and dart Penrose tiling uses kites with three interior angles of 72° and one interior angle of 144°, and darts, concave quadrilaterals with two interior angles of 36°, one of 72°, and one non-convex angle of 216°. Special matching rules restrict how the tiles can meet at any edge, resulting in seven combinations of tiles at any vertex. Both the kites and darts have sides of two lengths, in the golden ratio to each other. The areas of these two tile shapes are also in the golden ratio to each other.
The kite and dart can each be cut on their symmetry axes into a pair of golden triangles and golden gnomons, respectively. With suitable matching rules, these triangles, called in this context Robinson triangles, can be used as the prototiles for a form of the Penrose tiling.
The rhombic Penrose tiling contains two types of rhombus, a thin rhombus with angles of 36° and 144°, and a thick rhombus with angles of 72° and 108°. All side lengths are equal, but the ratio of the side length to the short diagonal in the thin rhombus equals φ, as does the ratio of the long diagonal to the side in the thick rhombus. As with the kite and dart tiling, the areas of the two rhombi are in the golden ratio to each other. Again, these rhombi can be decomposed into pairs of Robinson triangles.
In triangles and quadrilaterals
Odom's construction
George Odom found a construction for involving an equilateral triangle: if the line segment joining the midpoints of two sides is extended to intersect the circumcircle, then the two midpoints and the point of intersection with the circle are in golden proportion.
Kepler triangle
The Kepler triangle, named after Johannes Kepler, is the unique right triangle with sides in geometric progression:
1 : √φ : φ.
These side lengths are the three Pythagorean means of the two numbers φ ± 1. The three squares on its sides have areas in the golden geometric progression 1 : φ : φ².
Among isosceles triangles, the ratio of inradius to side length is maximized for the triangle formed by two reflected copies of the Kepler triangle, sharing the longer of their two legs. The same isosceles triangle maximizes the ratio of the radius of a semicircle on its base to its perimeter.
For a Kepler triangle with smallest side length s, the area and acute internal angles are:
A = (s²/2)√φ, θ = sin⁻¹(1/φ) ≈ 38.1727°, θ = cos⁻¹(1/φ) ≈ 51.8273°.
Golden rectangle
The golden ratio proportions the adjacent side lengths of a golden rectangle in 1 : φ ratio. Stacking golden rectangles produces golden rectangles anew, and removing or adding squares from golden rectangles leaves rectangles still proportioned in 1 : φ ratio. They can be generated by golden spirals, through successive Fibonacci and Lucas number-sized squares and quarter circles. They feature prominently in the icosahedron as well as in the dodecahedron (see section below for more detail).
Golden rhombus
A golden rhombus is a rhombus whose diagonals are in proportion to the golden ratio, most commonly 1 : φ. For a rhombus of such proportions, its acute angle α and obtuse angle β are:
α = 2 arctan(1/φ) ≈ 63.435°, β = 2 arctan φ ≈ 116.565°.
The lengths of its short and long diagonals d and D, in terms of side length a, are:
d = 2a/√(2 + φ) ≈ 1.05146a and D = 2φa/√(2 + φ) ≈ 1.70130a.
Its area, in terms of a:
A = dD/2 = (2/√5)a² ≈ 0.89443a².
Its inradius, in terms of side a:
r = a/√5.
Golden rhombi form the faces of the rhombic triacontahedron, the two golden rhombohedra, the Bilinski dodecahedron, and the rhombic hexecontahedron.
Golden spiral
Logarithmic spirals are self-similar spirals where distances covered per turn are in geometric progression. A logarithmic spiral whose radius increases by a factor of the golden ratio for each quarter-turn is called the golden spiral. These spirals can be approximated by quarter-circles that grow by the golden ratio, or their approximations generated from Fibonacci numbers, often depicted inscribed within a spiraling pattern of squares growing in the same ratio. The exact logarithmic spiral form of the golden spiral can be described by the polar equation with (r, θ):
r = φ^(2θ/π).
Not all logarithmic spirals are connected to the golden ratio, and not all spirals that are connected to the golden ratio are the same shape as the golden spiral. For instance, a different logarithmic spiral, encasing a nested sequence of golden isosceles triangles, grows by the golden ratio for each 108° that it turns, instead of the 90° turning angle of the golden spiral. Another variation, called the "better golden spiral", grows by the golden ratio for each half-turn, rather than each quarter-turn.
Dodecahedron and icosahedron
The regular dodecahedron and its dual polyhedron the icosahedron are Platonic solids whose dimensions are related to the golden ratio. A dodecahedron has 12 regular pentagonal faces, whereas an icosahedron has 20 equilateral triangles; both have 30 edges.
For a dodecahedron of side a, the radius of a circumscribed and inscribed sphere, and midradius are (R, r, and ρ, respectively):
R = (√3/2)φa ≈ 1.401a, r = (φ²/(2√(3 − φ)))a ≈ 1.114a, ρ = (φ²/2)a ≈ 1.309a.
While for an icosahedron of side a, the radius of a circumscribed and inscribed sphere, and midradius are:
R = (√(2 + φ)/2)a ≈ 0.951a, r = (φ²/(2√3))a ≈ 0.756a, ρ = (φ/2)a ≈ 0.809a.
The volume and surface area of the dodecahedron can be expressed in terms of φ:
V = (5φ³/(6 − 2φ))a³ ≈ 7.663a³ and A = (15φ/√(3 − φ))a² ≈ 20.646a².
As well as for the icosahedron:
V = (5φ²/6)a³ ≈ 2.182a³ and A = 5√3a² ≈ 8.660a².
These geometric values can be calculated from their Cartesian coordinates, which also can be given using formulas involving φ. The coordinates of the dodecahedron are displayed on the figure to the right, while those of the icosahedron are:
(0, ±1, ±φ), (±1, ±φ, 0), (±φ, 0, ±1).
Sets of three golden rectangles intersect perpendicularly inside dodecahedra and icosahedra, forming Borromean rings. In dodecahedra, pairs of opposing vertices in golden rectangles meet the centers of pentagonal faces, and in icosahedra, they meet at its vertices. The three golden rectangles together contain all vertices of the icosahedron, or equivalently, intersect the centers of all of the dodecahedron's faces.
A cube can be inscribed in a regular dodecahedron, with some of the diagonals of the pentagonal faces of the dodecahedron serving as the cube's edges; therefore, the edge lengths are in the golden ratio. The cube's volume is 2/(2 + φ) times that of the dodecahedron's. In fact, golden rectangles inside a dodecahedron are in golden proportions to an inscribed cube, such that edges of a cube and the long edges of a golden rectangle are themselves in golden ratio. On the other hand, the octahedron, which is the dual polyhedron of the cube, can inscribe an icosahedron, such that an icosahedron's vertices touch the edges of an octahedron at points that divide its edges in golden ratio.
Other properties
The golden ratio's decimal expansion can be calculated via root-finding methods, such as Newton's method or Halley's method, on the equation x² − x − 1 = 0 or on x² − 5 = 0 (to compute √5 first). The time needed to compute n digits of the golden ratio using Newton's method is essentially O(M(n)), where M(n) is the time complexity of multiplying two n-digit numbers. This is considerably faster than known algorithms for π and e. An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of Fibonacci numbers F₂₅₀₀₁ and F₂₅₀₀₀, each over 5000 digits, yields over 10,000 significant digits of the golden ratio. The decimal expansion of the golden ratio has been calculated to an accuracy of ten trillion (10¹³) digits.
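The integer-arithmetic alternative is easy to sketch. The illustrative function below (not a reference implementation) derives digits of φ from the ratio of two large consecutive Fibonacci numbers:

```python
from decimal import Decimal, getcontext

def phi_digits(precision: int) -> Decimal:
    """Digits of phi via one high-precision division of consecutive
    Fibonacci numbers, generated with exact integer arithmetic."""
    getcontext().prec = precision
    a, b = 0, 1
    # F(n+1)/F(n) agrees with phi to roughly twice as many digits as F(n)
    # has, so growing F(n) past `precision` digits leaves a wide margin.
    while len(str(a)) <= precision:
        a, b = b, a + b
    return Decimal(b) / Decimal(a)

print(phi_digits(50))
# 1.6180339887498948482045868343656381177203091798058
```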
In the complex plane, the fifth roots of unity z = e^(2πik/5) (for an integer k) satisfying z⁵ = 1 are the vertices of a pentagon. They do not form a ring of quadratic integers, however the sum of any fifth root of unity and its complex conjugate, z + z̄, is a quadratic integer, an element of ℤ[φ]. Specifically,
e⁰ + e⁰ = 2, e^(2πi/5) + e^(−2πi/5) = φ − 1, and e^(4πi/5) + e^(−4πi/5) = −φ.
This also holds for the remaining tenth roots of unity satisfying z¹⁰ = 1,
e^(πi/5) + e^(−πi/5) = φ, e^(3πi/5) + e^(−3πi/5) = 1 − φ, and e^(πi) + e^(−πi) = −2.
For the gamma function Γ, the only solutions to the equation Γ(z − 1) = Γ(z + 1) are z = φ and z = −1/φ.
When the golden ratio is used as the base of a numeral system (see golden ratio base, sometimes dubbed phinary or φ-nary), quadratic integers in the ring ℤ[φ] – that is, numbers of the form a + bφ for a and b in ℤ – have terminating representations, but rational fractions have non-terminating representations.
The golden ratio also appears in hyperbolic geometry, as the maximum distance from a point on one side of an ideal triangle to the closer of the other two sides: this distance, the side length of the equilateral triangle formed by the points of tangency of a circle inscribed within the ideal triangle, is 4 ln φ.
The golden ratio appears in the theory of modular functions as well. For let
Then
and
where and in the continued fraction should be evaluated as . The function is invariant under , a congruence subgroup of the modular group. Also for positive real numbers and such that
is a Pisot–Vijayaraghavan number.
Applications and observations
Architecture
The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture.
In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.
Art
Leonardo da Vinci's illustrations of polyhedra in Pacioli's Divina proportione have led some to speculate that he incorporated the golden ratio in his paintings. But the suggestion that his Mona Lisa, for example, employs golden ratio proportions, is not supported by Leonardo's own writings. Similarly, although Leonardo's Vitruvian Man is often shown in connection with the golden ratio, the proportions of the figure do not actually match it, and the text only mentions whole number ratios.
Salvador Dalí, influenced by the works of Matila Ghyka, explicitly used the golden ratio in his masterpiece, The Sacrament of the Last Supper. The dimensions of the canvas are a golden rectangle. A huge dodecahedron, in perspective so that edges appear in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.
A statistical study on 565 works of art of different great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is 1.34, with averages for individual artists ranging from 1.04 (Goya) to 1.46 (Bellini). On the other hand, Pablo Tosto listed over 350 works by well-known artists, including more than 100 which have canvasses with golden rectangle and root-5 proportions, and others with proportions like root-2, 3, 4, and 6.
Books and design
According to Jan Tschichold,
There was a time when deviations from the truly beautiful page proportions 2:3, 1:√3, and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimeter.
According to some sources, the golden ratio is used in everyday design, for example in the proportions of playing cards, postcards, posters, light switch plates, and widescreen televisions.
Flags
The aspect ratio (width to height ratio) of the flag of Togo was intended to be the golden ratio, according to its designer.
Music
Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. French composer Erik Satie used the golden ratio in several of his pieces, including Sonneries de la Rose+Croix. The golden ratio is also apparent in the organization of the sections in the music of Debussy's Reflets dans l'eau (Reflections in water), from Images (1st series, 1905), in which "the sequence of keys is marked out by the intervals 34, 21, 13 and 8, and the main climax sits at the phi position".
The musicologist Roy Howat has observed that the formal boundaries of Debussy's La Mer correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable", but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions.
Music theorists including Hans Zender and Heinz Bohlen have experimented with the 833 cents scale, a musical scale based on using the golden ratio as its fundamental musical interval. When measured in cents, a logarithmic scale for musical intervals, the golden ratio is approximately 833.09 cents.
Nature
Johannes Kepler wrote that "the image of man and woman stems from the divine proportion. In my opinion, the propagation of plants and the progenitive acts of animals are in the same ratio".
The psychologist Adolf Zeising noted that the golden ratio appeared in phyllotaxis and argued from these patterns in nature that the golden ratio was a universal law. Zeising wrote in 1854 of a universal orthogenetic law of "striving for beauty and completeness in the realms of both nature and art".
However, some have argued that many apparent manifestations of the golden ratio in nature, especially in regard to animal dimensions, are fictitious.
Physics
The quasi-one-dimensional Ising ferromagnet CoNb₂O₆ (cobalt niobate) has predicted excitation states (with E₈ symmetry) that, when probed with neutron scattering, showed its lowest two were in golden ratio. Specifically, these quantum phase transitions during spin excitation, which occur at near absolute zero temperature, showed pairs of kinks in its ordered-phase to spin-flips in its paramagnetic phase; revealing, just below its critical field, a spin dynamics with sharp modes at low energies approaching the golden mean.
Optimization
There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, Thomson problem or Tammes problem). However, a useful approximation results from dividing the sphere into parallel bands of equal surface area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. 360°/φ ≈ 222.5°. This method was used to arrange the mirrors of the student-participatory satellite Starshine-3.
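A common way to realize this scheme in code is the golden-section (or Fibonacci) sphere. The sketch below is an illustrative approximation of the method described, not the actual Starshine-3 layout code:

```python
from math import sqrt, pi, cos, sin

def golden_section_sphere_points(n: int):
    """n points on the unit sphere: equal-area latitude bands, with
    longitudes stepped by the golden angle (~137.5 degrees)."""
    golden_angle = 2 * pi * (1 - 1 / ((1 + sqrt(5)) / 2))
    points = []
    for i in range(n):
        z = 1 - (2 * i + 1) / n          # equal-area spacing in z
        r = sqrt(1 - z * z)
        theta = i * golden_angle
        points.append((r * cos(theta), r * sin(theta), z))
    return points

print(golden_section_sphere_points(5))
```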
The golden ratio is a critical element of golden-section search as well.
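For reference, a textbook-style sketch of golden-section search (illustrative; it minimizes a unimodal function by shrinking the bracket by a factor of 1/φ per step):

```python
from math import sqrt

def golden_section_search(f, a, b, tol=1e-8):
    """Locate a minimum of a unimodal f on [a, b]."""
    invphi = (sqrt(5) - 1) / 2           # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                   # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                             # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2) ** 2, 0, 5))  # ~2.0
```

The point of the 1/φ factor is that, in a tuned implementation, one of the two interior evaluation points can be reused at every iteration, so each step costs only a single new function evaluation.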
Disputed observations
Examples of disputed observations of the golden ratio include the following:
Specific proportions in the bodies of vertebrates (including humans) are often claimed to be in the golden ratio; for example the ratio of successive phalangeal and metacarpal bones (finger bones) has been said to approximate the golden ratio. There is a large variation in the real measures of these elements in specific individuals, however, and the proportion in question is often significantly different from the golden ratio.
The shells of mollusks such as the nautilus are often claimed to be in the golden ratio. The growth of nautilus shells follows a logarithmic spiral, and it is sometimes erroneously claimed that any logarithmic spiral is related to the golden ratio, or sometimes claimed that each new chamber is golden-proportioned relative to the previous one. However, measurements of nautilus shells do not support this claim.
Historian John Man states that both the pages and text area of the Gutenberg Bible were "based on the golden section shape". However, according to his own measurements, the ratio of height to width of the pages is 1.45.
Studies by psychologists, starting with Gustav Fechner c. 1876, have been devised to test the idea that the golden ratio plays a role in human perception of beauty. While Fechner found a preference for rectangle ratios centered on the golden ratio, later attempts to carefully test such a hypothesis have been, at best, inconclusive.
In investing, some practitioners of technical analysis use the golden ratio to indicate support of a price level, or resistance to price increases, of a stock or commodity; after significant price changes up or down, new support and resistance levels are supposedly found at or near prices related to the starting price via the golden ratio. The use of the golden ratio in investing is also related to more complicated patterns described by Fibonacci numbers (e.g. Elliott wave principle and Fibonacci retracement). However, other market analysts have published analyses suggesting that these percentages and patterns are not supported by the data.
Egyptian pyramids
The Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu) has been analyzed by pyramidologists as having a doubled Kepler triangle as its cross-section. If this theory were true, the golden ratio would describe the ratio of distances from the midpoint of one of the sides of the pyramid to its apex, and from the same midpoint to the center of the pyramid's base. However, imprecision in measurement caused in part by the removal of the outer surface of the pyramid makes it impossible to distinguish this theory from other numerical theories of the proportions of the pyramid, based on pi or on whole-number ratios. The consensus of modern scholars is that this pyramid's proportions are not based on the golden ratio, because such a basis would be inconsistent both with what is known about Egyptian mathematics from the time of construction of the pyramid, and with Egyptian theories of architecture and proportion used in their other works.
The Parthenon
The Parthenon's façade (c. 432 BC) as well as elements of its façade and elsewhere are said by some to be circumscribed by golden rectangles. Other scholars deny that the Greeks had any aesthetic association with golden ratio. For example, Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation." Midhat J. Gazalé affirms that "It was not until Euclid ... that the golden ratio's mathematical properties were studied."
From measurements of 15 temples, 18 monumental tombs, 8 sarcophagi, and 58 grave stelae from the fifth century BC to the second century AD, one researcher concluded that the golden ratio was totally absent from Greek architecture of the classical fifth century BC, and almost absent during the following six centuries.
Later sources like Vitruvius (first century BC) exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.
Modern art
The Section d'Or ('Golden Section') was a collective of painters, sculptors, poets and critics associated with Cubism and Orphism. Active from 1911 to around 1914, they adopted the name both to highlight that Cubism represented the continuation of a grand tradition, rather than being an isolated movement, and in homage to the mathematical harmony associated with Georges Seurat. (Several authors have claimed that Seurat employed the golden ratio in his paintings, but Seurat's writings and paintings suggest that he employed simple whole-number ratios and any approximation of the golden ratio was coincidental.) The Cubists observed in its harmonies, geometric structuring of motion and form, "the primacy of idea over nature", "an absolute scientific clarity of conception". However, despite this general interest in mathematical harmony, whether the paintings featured in the celebrated 1912 Salon de la Section d'Or exhibition used the golden ratio in any compositions is more difficult to determine. Livio, for example, claims that they did not, and Marcel Duchamp said as much in an interview. On the other hand, an analysis suggests that Juan Gris made use of the golden ratio in composing works that were likely, but not definitively, shown at the exhibition. Art historian Daniel Robbins has argued that in addition to referencing the mathematical term, the exhibition's name also refers to the earlier Bandeaux d'Or group, with which Albert Gleizes and other former members of the Abbaye de Créteil had been involved.
Piet Mondrian has been said to have used the golden section extensively in his geometrical paintings, though other experts (including critic Yve-Alain Bois) have discredited these claims.
See also
List of works designed with the golden ratio
Metallic mean
Plastic ratio
Sacred geometry
Supergolden ratio
Silver ratio
References
Explanatory footnotes
Citations
Works cited
(Originally titled A Mathematical History of Division in Extreme and Mean Ratio.)
Further reading
External links
Information and activities by a mathematics professor.
The Myth That Will Not Go Away , by Keith Devlin, addressing multiple allegations about the use of the golden ratio in culture.
Spurious golden spirals collected by Randall Munroe
YouTube lecture on Zeno's mice problem and logarithmic spirals
Euclidean plane geometry
Quadratic irrational numbers
Mathematical constants
History of geometry
Visual arts theory
Composition in visual art
Mathematics and art | Golden ratio | [
"Mathematics"
] | 8,098 | [
"History of geometry",
"Euclidean plane geometry",
"Mathematical objects",
"Golden ratio",
"nan",
"Geometry",
"Planes (geometry)",
"Mathematical constants",
"Numbers"
] |
12,388 | https://en.wikipedia.org/wiki/Genome | A genome is all the genetic information of an organism. It consists of nucleotide sequences of DNA (or RNA in RNA viruses). The nuclear genome includes protein-coding genes and non-coding genes, other functional regions of the genome such as regulatory sequences (see non-coding DNA), and often a substantial fraction of junk DNA with no evident function. Almost all eukaryotes have mitochondria and a small mitochondrial genome. Algae and plants also contain chloroplasts with a chloroplast genome.
The study of the genome is called genomics. The genomes of many organisms have been sequenced and various regions have been annotated. The first genome to be sequenced was that of the virus φX174 in 1977; the first genome sequence of a prokaryote (Haemophilus influenzae) was published in 1995; the yeast (Saccharomyces cerevisiae) genome was the first eukaryotic genome to be sequenced in 1996. The Human Genome Project was started in October 1990, and the first draft sequences of the human genome were reported in February 2001.
Origin of the term
The term genome was created in 1920 by Hans Winkler, professor of botany at the University of Hamburg, Germany. The website Oxford Dictionaries and the Online Etymology Dictionary suggest the name is a blend of the words gene and chromosome. However, see omics for a more thorough discussion. A few related -ome words already existed, such as biome and rhizome, forming a vocabulary into which genome fits systematically.
Definition
The term "genome" usually refers to the DNA (or sometimes RNA) molecules that carry the genetic information in an organism, but sometimes it is uncertain which molecules to include; for example, bacteria usually have one or two large DNA molecules (chromosomes) that contain all of the essential genetic material but they also contain smaller extrachromosomal plasmid molecules that carry important genetic information. In the scientific literature, the term 'genome' usually refers to the large chromosomal DNA molecules in bacteria.
Nuclear genome
Eukaryotic genomes are even more difficult to define because almost all eukaryotic species contain nuclear chromosomes plus extra DNA molecules in the mitochondria. In addition, algae and plants have chloroplast DNA. Most textbooks make a distinction between the nuclear genome and the organelle (mitochondria and chloroplast) genomes so when they speak of, say, the human genome, they are only referring to the genetic material in the nucleus. This is the most common use of 'genome' in the scientific literature.
Ploidy
Most eukaryotes are diploid, meaning that there are two of each chromosome in the nucleus but the 'genome' refers to only one copy of each chromosome. Some eukaryotes have distinctive sex chromosomes, such as the X and Y chromosomes of mammals, so the technical definition of the genome must include both copies of the sex chromosomes. For example, the standard reference genome of humans consists of one copy of each of the 22 autosomes plus one X chromosome and one Y chromosome.
Sequencing and mapping
A genome sequence is the complete list of the nucleotides (A, C, G, and T for DNA genomes) that make up all the chromosomes of an individual or a species. Within a species, the vast majority of nucleotides are identical between individuals, but sequencing multiple individuals is necessary to understand the genetic diversity.
In 1976, Walter Fiers at the University of Ghent (Belgium) was the first to establish the complete nucleotide sequence of a viral RNA-genome (Bacteriophage MS2). The next year, Fred Sanger completed the first DNA-genome sequence: phage φX174, of 5,386 base pairs. The first bacterial genome to be sequenced was that of Haemophilus influenzae, completed by a team at The Institute for Genomic Research in 1995. A few months later, the first eukaryotic genome was completed, with sequences of the 16 chromosomes of budding yeast Saccharomyces cerevisiae published as the result of a European-led effort begun in the mid-1980s. The first genome sequence for an archaeon, Methanococcus jannaschii, was completed in 1996, again by The Institute for Genomic Research.
The development of new technologies has made genome sequencing dramatically cheaper and easier, and the number of complete genome sequences is growing rapidly. The US National Institutes of Health maintains one of several comprehensive databases of genomic information. Among the thousands of completed genome sequencing projects include those for rice, a mouse, the plant Arabidopsis thaliana, the puffer fish, and the bacteria E. coli. In December 2013, scientists first sequenced the entire genome of a Neanderthal, an extinct species of humans. The genome was extracted from the toe bone of a 130,000-year-old Neanderthal found in a Siberian cave.
Viral genomes
Viral genomes can be composed of either RNA or DNA. The genomes of RNA viruses can be either single-stranded RNA or double-stranded RNA, and may contain one or more separate RNA molecules (segments: monopartite or multipartite genome). DNA viruses can have either single-stranded or double-stranded genomes. Most DNA virus genomes are composed of a single, linear molecule of DNA, but some are made up of a circular DNA molecule.
Prokaryotic genomes
Prokaryotes and eukaryotes have DNA genomes. Archaea and most bacteria have a single circular chromosome, however, some bacterial species have linear or multiple chromosomes. If the DNA is replicated faster than the bacterial cells divide, multiple copies of the chromosome can be present in a single cell, and if the cells divide faster than the DNA can be replicated, multiple replication of the chromosome is initiated before the division occurs, allowing daughter cells to inherit complete genomes and already partially replicated chromosomes. Most prokaryotes have very little repetitive DNA in their genomes. However, some symbiotic bacteria (e.g. Serratia symbiotica) have reduced genomes and a high fraction of pseudogenes: only ~40% of their DNA encodes proteins.
Some bacteria have auxiliary genetic material, also part of their genome, which is carried in plasmids. For this reason, the word genome should not be used as a synonym of chromosome.
Eukaryotic genomes
Eukaryotic genomes are composed of one or more linear DNA chromosomes. The number of chromosomes varies widely, from Jack jumper ants and an asexual nematode, which each have only one pair, to a fern species that has 720 pairs. Eukaryotic genomes contain a surprisingly large amount of DNA relative to other genomes, more than can be accounted for by protein-coding and noncoding genes; indeed, eukaryotic genome sizes vary as much as 64,000-fold. This excess is largely due to the presence of repetitive DNA and transposable elements (TEs).
A typical human cell has two copies of each of 22 autosomes, one inherited from each parent, plus two sex chromosomes, making it diploid. Gametes, such as ova, sperm, spores, and pollen, are haploid, meaning they carry only one copy of each chromosome. In addition to the chromosomes in the nucleus, organelles such as the chloroplasts and mitochondria have their own DNA. Mitochondria are sometimes said to have their own genome often referred to as the "mitochondrial genome". The DNA found within the chloroplast may be referred to as the "plastome". Like the bacteria they originated from, mitochondria and chloroplasts have a circular chromosome.
Unlike prokaryotes where exon-intron organization of protein coding genes exists but is rather exceptional, eukaryotes generally have these features in their genes and their genomes contain variable amounts of repetitive DNA. In mammals and plants, the majority of the genome is composed of repetitive DNA.
DNA sequencing
High-throughput technology makes sequencing to assemble new genomes accessible to everyone. Sequence polymorphisms are typically discovered by comparing resequenced isolates to a reference, whereas analyses of coverage depth and mapping topology can provide details regarding structural variations such as chromosomal translocations and segmental duplications.
Coding sequences
DNA sequences that carry the instructions to make proteins are referred to as coding sequences. The proportion of the genome occupied by coding sequences varies widely. A larger genome does not necessarily contain more genes, and the proportion of non-repetitive DNA decreases along with increasing genome size in complex eukaryotes.
Noncoding sequences
Noncoding sequences include introns, sequences for non-coding RNAs, regulatory regions, and repetitive DNA. Noncoding sequences make up 98% of the human genome. There are two categories of repetitive DNA in the genome: tandem repeats and interspersed repeats.
Tandem repeats
Short, non-coding sequences that are repeated head-to-tail are called tandem repeats. Microsatellites consist of 2–5 basepair repeats, while minisatellite repeats are 30–35 bp long. Tandem repeats make up about 4% of the human genome and 9% of the fruit fly genome. Tandem repeats can be functional. For example, telomeres are composed of the tandem repeat TTAGGG in mammals, and they play an important role in protecting the ends of the chromosome.
In other cases, expansions in the number of tandem repeats in exons or introns can cause disease. For example, the human gene huntingtin (Htt) typically contains 6–29 tandem repeats of the nucleotides CAG (encoding a polyglutamine tract). An expansion to over 36 repeats results in Huntington's disease, a neurodegenerative disease. Twenty human disorders are known to result from similar tandem repeat expansions in various genes. The mechanism by which proteins with expanded polyglutamine tracts cause death of neurons is not fully understood. One possibility is that the proteins fail to fold properly and evade degradation, instead accumulating in aggregates that also sequester important transcription factors, thereby altering gene expression.
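The CAG-expansion threshold just described can be checked mechanically. The sketch below (again Python, with a hypothetical sequence; the >36 cutoff is the one quoted above) counts the longest uninterrupted run of CAG codons:

```python
import re

def longest_cag_run(seq: str) -> int:
    """Length, in repeat units, of the longest uninterrupted run of CAG codons."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

fragment = "GCGACC" + "CAG" * 8 + "CCGCCA"   # toy exon fragment with 8 CAG codons
n = longest_cag_run(fragment)
print(n)                                     # 8
print("expanded" if n > 36 else "within the typical 6-29 range")
```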
Tandem repeats are usually caused by slippage during replication, unequal crossing-over and gene conversion.
Transposable elements
Transposable elements (TEs) are sequences of DNA with a defined structure that are able to change their location in the genome. TEs are categorized by their mechanism: either they replicate by copy-and-paste, or they are excised from the genome and inserted at a new location. In the human genome, three important classes of TEs make up more than 45% of the DNA: long interspersed nuclear elements (LINEs), short interspersed nuclear elements (SINEs), and endogenous retroviruses. These elements have considerable potential to modify gene regulation in a host organism.
The movement of TEs is a driving force of genome evolution in eukaryotes because their insertion can disrupt gene functions, homologous recombination between TEs can produce duplications, and TEs can shuffle exons and regulatory sequences to new locations.
Retrotransposons
Retrotransposons are found in eukaryotes but not in prokaryotes, and they form a large portion of the genomes of many eukaryotes. A retrotransposon is a transposable element that transposes through an RNA intermediate: it is composed of DNA but is transcribed into RNA for transposition, and the RNA transcript is then copied back into DNA with the help of a specific enzyme called reverse transcriptase. A retrotransposon that carries reverse transcriptase in its sequence can trigger its own transposition, but retrotransposons that lack reverse transcriptase must use reverse transcriptase synthesized by another retrotransposon. Retrotransposons can be divided into those with long terminal repeats (LTRs) and those without (non-LTRs).
Long terminal repeat (LTR) retrotransposons are derived from ancient retroviral infections, so they encode proteins related to retroviral proteins including gag (structural proteins of the virus), pol (reverse transcriptase and integrase), pro (protease), and in some cases env (envelope) genes. These genes are flanked by long repeats at both the 5' and 3' ends. It has been reported that LTR elements constitute the largest fraction of most plant genomes and might account for the huge variation in genome size.
Non-long terminal repeats (non-LTRs) are classified as long interspersed nuclear elements (LINEs), short interspersed nuclear elements (SINEs), and Penelope-like elements (PLEs). In Dictyostelium discoideum, DIRS-like elements also belong to the non-LTRs. Non-LTRs are widely spread in eukaryotic genomes.
Long interspersed elements (LINEs) encode genes for reverse transcriptase and endonuclease, making them autonomous transposable elements. The human genome has around 500,000 LINEs, making up around 17% of the genome.
Short interspersed elements (SINEs) are usually less than 500 base pairs and are non-autonomous, so they rely on the proteins encoded by LINEs for transposition. The Alu element is the most common SINE found in primates. It is about 350 base pairs and occupies about 11% of the human genome with around 1,500,000 copies.
DNA transposons
DNA transposons encode a transposase enzyme between inverted terminal repeats. When expressed, the transposase recognizes the terminal inverted repeats that flank the transposon and catalyzes its excision and reinsertion in a new site. This cut-and-paste mechanism typically reinserts transposons near their original location (within 100 kb). DNA transposons are found in bacteria and make up 3% of the human genome and 12% of the genome of the roundworm C. elegans.
Genome size
Genome size is the total number of DNA base pairs in one copy of a haploid genome. Genome size varies widely across species. Invertebrates have small genomes, which is also correlated with a small number of transposable elements. Fish and amphibians have intermediate-size genomes, and birds have relatively small genomes; it has been suggested that birds lost a substantial portion of their genomes during the transition to flight. Before this loss, DNA methylation allowed the adequate expansion of the genome.
In humans, the nuclear genome comprises approximately 3.1 billion nucleotides of DNA, divided into 24 linear molecules, the shortest 45,000,000 nucleotides in length and the longest 248,000,000 nucleotides, each contained in a different chromosome. There is no clear and consistent correlation between morphological complexity and genome size in either prokaryotes or lower eukaryotes. Genome size is largely a function of the expansion and contraction of repetitive DNA elements.
Since genomes are very complex, one research strategy is to reduce the number of genes in a genome to the bare minimum and still have the organism in question survive. There is experimental work being done on minimal genomes for single cell organisms as well as minimal genomes for multi-cellular organisms (see developmental biology). The work is both in vivo and in silico.
Genome size differences due to transposable elements
As mentioned above, genome sizes differ enormously, especially among multicellular eukaryotes. Much of this variation is due to the differing abundances of transposable elements, which evolve by creating new copies of themselves in the chromosomes. Eukaryote genomes often contain many thousands of copies of these elements, most of which have acquired mutations that make them defective.
Genomic alterations
All the cells of an organism originate from a single cell, so they are expected to have identical genomes; however, in some cases, differences arise. Both the process of copying DNA during cell division and exposure to environmental mutagens can result in mutations in somatic cells. In some cases, such mutations lead to cancer because they cause cells to divide more quickly and invade surrounding tissues. In certain lymphocytes in the human immune system, V(D)J recombination generates different genomic sequences such that each cell produces a unique antibody or T cell receptor.
During meiosis, diploid cells divide twice to produce haploid germ cells. During this process, recombination results in a reshuffling of the genetic material from homologous chromosomes so each gamete has a unique genome.
Genome-wide reprogramming
Genome-wide reprogramming in mouse primordial germ cells involves epigenetic imprint erasure leading to totipotency. Reprogramming is facilitated by active DNA demethylation, a process that entails the DNA base excision repair pathway. This pathway is employed in the erasure of CpG methylation (5mC) in primordial germ cells. The erasure of 5mC occurs via its conversion to 5-hydroxymethylcytosine (5hmC) driven by high levels of the ten-eleven dioxygenase enzymes TET1 and TET2.
Genome evolution
Genomes are more than the sum of an organism's genes and have traits that may be measured and studied without reference to the details of any particular genes and their products. Researchers compare traits such as karyotype (chromosome number), genome size, gene order, codon usage bias, and GC-content to determine what mechanisms could have produced the great variety of genomes that exist today (for recent overviews, see Brown 2002; Saccone and Pesole 2003; Benfey and Protopapas 2004; Gibson and Muse 2004; Reese 2004; Gregory 2005).
Duplications play a major role in shaping the genome. Duplication may range from extension of short tandem repeats, to duplication of a cluster of genes, and all the way to duplication of entire chromosomes or even entire genomes. Such duplications are probably fundamental to the creation of genetic novelty.
Horizontal gene transfer is invoked to explain why small portions of the genomes of two organisms that are otherwise very distantly related are often extremely similar. Horizontal gene transfer seems to be common among many microbes. Also, eukaryotic cells seem to have experienced a transfer of some genetic material from their chloroplast and mitochondrial genomes to their nuclear chromosomes. Recent empirical data suggest that viruses and sub-viral RNA networks play a main driving role in generating genetic novelty and natural genome editing.
In fiction
Works of science fiction illustrate concerns about the availability of genome sequences.
Michael Crichton's 1990 novel Jurassic Park and the subsequent film tell the story of a billionaire who creates a theme park of cloned dinosaurs on a remote island, with disastrous outcomes. A geneticist extracts dinosaur DNA from the blood of ancient mosquitoes and fills in the gaps with DNA from modern species to create several species of dinosaurs. A chaos theorist is asked to give his expert opinion on the safety of engineering an ecosystem with the dinosaurs, and he repeatedly warns that the outcomes of the project will be unpredictable and ultimately uncontrollable. These warnings about the perils of using genomic information are a major theme of the book.
The 1997 film Gattaca is set in a futuristic society where genomes of children are engineered to contain the most ideal combination of their parents' traits, and metrics such as risk of heart disease and predicted life expectancy are documented for each person based on their genome. People conceived outside of the eugenics program, known as "In-Valids", suffer discrimination and are relegated to menial occupations. The protagonist of the film is an In-Valid who works to defy the supposed genetic odds and achieve his dream of working as a space navigator. The film warns against a future where genomic information fuels prejudice and extreme class differences between those who can and cannot afford genetically engineered children.
See also
Bacterial genome size
Cryoconservation of animal genetic resources
DNA methylation
Genome Browser
Genome Compiler
Genome topology
Genome-wide association study
List of sequenced animal genomes
List of sequenced archaeal genomes
List of sequenced bacterial genomes
List of sequenced eukaryotic genomes
List of sequenced fungi genomes
List of sequenced plant genomes
List of sequenced plastomes
List of sequenced protist genomes
Metagenomics
Microbiome
Molecular epidemiology
Molecular pathological epidemiology
Molecular pathology
Nucleic acid sequence
Pan-genome
Precision medicine
Regulator gene
Whole genome sequencing
References
Further reading
External links
UCSC Genome Browser – view the genome and annotations for more than 80 organisms.
genomecenter.howard.edu (archived 9 August 2013)
Build a DNA Molecule (archived 9 June 2010)
Some comparative genome sizes
DNA Interactive: The History of DNA Science
DNA From The Beginning
All About The Human Genome Project—from Genome.gov
Animal genome size database
Plant genome size database (archived 1 September 2005)
GOLD:Genomes OnLine Database
The Genome News Network
NCBI Entrez Genome Project database
NCBI Genome Primer
GeneCards—an integrated database of human genes
BBC News – Final genome 'chapter' published
IMG (The Integrated Microbial Genomes system)—for genome analysis by the DOE-JGI
GeKnome Technologies Next-Gen Sequencing Data Analysis—next-generation sequencing data analysis for Illumina and 454 Service from GeKnome Technologies (archived 3 March 2012)
Genetic mapping
Genomics
DNA
Methylation
| Genome | [
"Chemistry"
] | 4,565 | [
"Methylation"
] |
12,395 | https://en.wikipedia.org/wiki/Greenhouse%20effect | The greenhouse effect occurs when greenhouse gases in a planet's atmosphere insulate the planet from losing heat to space, raising its surface temperature. Surface heating can happen from an internal heat source (as in the case of Jupiter) or come from an external source, such as its host star. In the case of Earth, the Sun emits shortwave radiation (sunlight) that passes through greenhouse gases to heat the Earth's surface. In response, the Earth's surface emits longwave radiation that is mostly absorbed by greenhouse gases. The absorption of longwave radiation prevents it from reaching space, reducing the rate at which the Earth can cool off.
Without the greenhouse effect, the Earth's average surface temperature would be as cold as −18 °C (−0.4 °F), much colder than the 20th century average of about 14 °C (57 °F). In addition to naturally present greenhouse gases, burning of fossil fuels has increased amounts of carbon dioxide and methane in the atmosphere. As a result, global warming of about 1.2 °C (2.2 °F) has occurred since the Industrial Revolution, with the global average surface temperature increasing at a rate of 0.18 °C (0.32 °F) per decade since 1981.
All objects with a temperature above absolute zero emit thermal radiation. The wavelengths of thermal radiation emitted by the Sun and Earth differ because their surface temperatures are different. The Sun has a surface temperature of about 5,500 °C (9,900 °F), so it emits most of its energy as shortwave radiation in near-infrared and visible wavelengths (as sunlight). In contrast, Earth's surface has a much lower temperature, so it emits longwave radiation at mid- and far-infrared wavelengths. A gas is a greenhouse gas if it absorbs longwave radiation. Earth's atmosphere absorbs only 23% of incoming shortwave radiation, but absorbs 90% of the longwave radiation emitted by the surface, thus accumulating energy and warming the Earth's surface.
The existence of the greenhouse effect (while not named as such) was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Definition
The greenhouse effect on Earth is defined as: "The infrared radiative effect of all infrared absorbing constituents in the atmosphere. Greenhouse gases (GHGs), clouds, and some aerosols absorb terrestrial radiation emitted by the Earth’s surface and elsewhere in the atmosphere."
The enhanced greenhouse effect describes the fact that by increasing the concentration of GHGs in the atmosphere (due to human action), the natural greenhouse effect is increased.
Terminology
The term greenhouse effect comes from an analogy to greenhouses. Both greenhouses and the greenhouse effect work by retaining heat from sunlight, but the way they retain heat differs. Greenhouses retain heat mainly by blocking convection (the movement of air). In contrast, the greenhouse effect retains heat by restricting radiative transfer through the air and reducing the rate at which thermal radiation is emitted into space.
History of discovery and investigation
The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..."
John Tyndall was the first to measure the infrared absorption and emission of various gases and vapors. From 1859 onwards, he showed that the effect was due to a very small proportion of the atmosphere, with the main gases having no effect, and was largely due to water vapor, though small percentages of hydrocarbons and carbon dioxide had a significant effect. The effect was more fully quantified by Svante Arrhenius in 1896, who made the first quantitative prediction of global warming due to a hypothetical doubling of atmospheric carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Measurement
Matter emits thermal radiation at a rate that is directly proportional to the fourth power of its temperature. Some of the radiation emitted by the Earth's surface is absorbed by greenhouse gases and clouds. Without this absorption, Earth's surface would have an average temperature of −18 °C (−0.4 °F). However, because some of the radiation is absorbed, Earth's average surface temperature is around 15 °C (59 °F). Thus, the Earth's greenhouse effect may be measured as a temperature change of 33 °C (59 °F).
Thermal radiation is characterized by how much energy it carries, typically in watts per square meter (W/m2). Scientists also measure the greenhouse effect based on how much more longwave thermal radiation leaves the Earth's surface than reaches space. Currently, longwave radiation leaves the surface at an average rate of 398 W/m2, but only 239 W/m2 reaches space. Thus, the Earth's greenhouse effect can also be measured as an energy flow change of 159 W/m2. The greenhouse effect can be expressed as a fraction (0.40) or percentage (40%) of the longwave thermal radiation that leaves Earth's surface but does not reach space.
Whether the greenhouse effect is expressed as a change in temperature or as a change in longwave thermal radiation, the same effect is being measured.
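Both framings follow from the two flux values quoted above. A minimal sketch (Python assumed; the fluxes are the figures cited in this section):

```python
SLR = 398.0   # W/m2, longwave radiation leaving Earth's surface
OLR = 239.0   # W/m2, outgoing longwave radiation reaching space

G = SLR - OLR       # greenhouse effect as an energy flow change
g_norm = G / SLR    # greenhouse effect as a fraction of surface emissions

print(f"G = {G:.0f} W/m2")                   # 159 W/m2
print(f"g = {g_norm:.2f} ({g_norm:.0%})")    # 0.40 (40%)
```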
Role in climate change
Strengthening of the greenhouse effect through additional greenhouse gases from human activities is known as the enhanced greenhouse effect. As well as being inferred from measurements by ARGO, CERES and other instruments throughout the 21st century, this increase in radiative forcing from human activity has been observed directly, and is attributable mainly to increased atmospheric carbon dioxide levels.
CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation. Measurements of CO2 from the Mauna Loa Observatory show that concentrations have increased from about 313 parts per million (ppm) in 1960, passing the 400 ppm milestone in 2013. The current observed amount of CO2 exceeds the geological record maxima (≈300 ppm) from ice core data.
Over the past 800,000 years, ice core data shows that carbon dioxide has varied from values as low as 180 ppm to the pre-industrial level of 270 ppm. Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.
Energy balance and temperature
Incoming shortwave radiation
Hotter matter emits shorter wavelengths of radiation. As a result, the Sun emits shortwave radiation as sunlight while the Earth and its atmosphere emit longwave radiation. Sunlight includes ultraviolet, visible light, and near-infrared radiation.
Sunlight is reflected and absorbed by the Earth and its atmosphere. The atmosphere and clouds reflect about 23% and absorb 23%. The surface reflects 7% and absorbs 48%. Overall, Earth reflects about 30% of the incoming sunlight, and absorbs the rest (240 W/m).
Outgoing longwave radiation
The Earth and its atmosphere emit longwave radiation, also known as thermal infrared or terrestrial radiation. Informally, longwave radiation is sometimes called thermal radiation. Outgoing longwave radiation (OLR) is the radiation from Earth and its atmosphere that passes through the atmosphere and into space.
The greenhouse effect can be directly seen in graphs of Earth's outgoing longwave radiation as a function of frequency (or wavelength). The area between the curve for longwave radiation emitted by Earth's surface and the curve for outgoing longwave radiation indicates the size of the greenhouse effect.
Different substances are responsible for reducing the radiation energy reaching space at different frequencies; for some frequencies, multiple substances play a role. Carbon dioxide is understood to be responsible for the dip in outgoing radiation (and associated rise in the greenhouse effect) at around 667 cm−1 (equivalent to a wavelength of 15 microns).
Each layer of the atmosphere with greenhouse gases absorbs some of the longwave radiation being radiated upwards from lower layers. It also emits longwave radiation in all directions, both upwards and downwards, in equilibrium with the amount it has absorbed. This results in less radiative heat loss and more warmth below. Increasing the concentration of the gases increases the amount of absorption and emission, thereby causing more heat to be retained at the surface and in the layers below.
Effective temperature
The power of outgoing longwave radiation emitted by a planet corresponds to the effective temperature of the planet. The effective temperature is the temperature that a planet radiating with a uniform temperature (a blackbody) would need to have in order to radiate the same amount of energy.
This concept may be used to compare the amount of longwave radiation emitted to space and the amount of longwave radiation emitted by the surface:
Emissions to space: Based on its emissions of longwave radiation to space, Earth's overall effective temperature is −18 °C (255 K).
Emissions from surface: Based on thermal emissions from the surface, Earth's effective surface temperature is about 16 °C (289 K), which is some 35 °C warmer than Earth's overall effective temperature.
Earth's surface temperature is often reported in terms of the average near-surface air temperature. This is about 15 °C (59 °F), a bit lower than the effective surface temperature. This value is 33 °C warmer than Earth's overall effective temperature.
Energy flux
Energy flux is the rate of energy flow per unit area. Energy flux is expressed in units of W/m2, which is the number of joules of energy that pass through a square meter each second. Most fluxes quoted in high-level discussions of climate are global values, which means they are the total flow of energy over the entire globe, divided by the surface area of the Earth, about 510 million km2.
The fluxes of radiation arriving at and leaving the Earth are important because radiative transfer is the only process capable of exchanging energy between Earth and the rest of the universe.
Radiative balance
The temperature of a planet depends on the balance between incoming radiation and outgoing radiation. If incoming radiation exceeds outgoing radiation, a planet will warm. If outgoing radiation exceeds incoming radiation, a planet will cool. A planet will tend towards a state of radiative equilibrium, in which the power of outgoing radiation equals the power of absorbed incoming radiation.
Earth's energy imbalance is the amount by which the power of incoming sunlight absorbed by Earth's surface or atmosphere exceeds the power of outgoing longwave radiation emitted to space. Energy imbalance is the fundamental measurement that drives surface temperature. A UN presentation says "The EEI is the most critical number defining the prospects for continued global warming and climate change." One study argues, "The absolute value of EEI represents the most fundamental metric defining the status of global climate change."
Earth's energy imbalance (EEI) was about 0.7 W/m2 as of around 2015, indicating that Earth as a whole is accumulating thermal energy and is in a process of becoming warmer.
Over 90% of the retained energy goes into warming the oceans, with much smaller amounts going into heating the land, atmosphere, and ice.
Day and night cycle
A simple picture assumes a steady state, but in the real world, the day/night (diurnal) cycle, as well as the seasonal cycle and weather disturbances, complicate matters. Solar heating applies only during daytime. At night the atmosphere cools somewhat, but not greatly because the thermal inertia of the climate system resists changes both day and night, as well as for longer periods. Diurnal temperature changes decrease with height in the atmosphere.
Effect of lapse rate
Lapse rate
In the lower portion of the atmosphere, the troposphere, the air temperature decreases (or "lapses") with increasing altitude. The rate at which temperature changes with altitude is called the lapse rate.
On Earth, the air temperature decreases by about 6.5 °C/km (3.6 °F per 1000 ft), on average, although this varies.
The temperature lapse is caused by convection. Air warmed by the surface rises. As it rises, air expands and cools. Simultaneously, other air descends, compresses, and warms. This process creates a vertical temperature gradient within the atmosphere.
This vertical temperature gradient is essential to the greenhouse effect. If the lapse rate were zero (so that the atmospheric temperature did not vary with altitude and was the same as the surface temperature), then there would be no greenhouse effect (i.e., its value would be zero).
Emission temperature and altitude
Greenhouse gases make the atmosphere near Earth's surface mostly opaque to longwave radiation. The atmosphere only becomes transparent to longwave radiation at higher altitudes, where the air is less dense, there is less water vapor, and reduced pressure broadening of absorption lines limits the wavelengths that gas molecules can absorb.
For any given wavelength, the longwave radiation that reaches space is emitted by a particular radiating layer of the atmosphere. The intensity of the emitted radiation is determined by the weighted average air temperature within that layer. So, for any given wavelength of radiation emitted to space, there is an associated effective emission temperature (or brightness temperature).
A given wavelength of radiation may also be said to have an effective emission altitude, which is a weighted average of the altitudes within the radiating layer.
The effective emission temperature and altitude vary by wavelength (or frequency). This phenomenon may be seen by examining plots of radiation emitted to space.
Greenhouse gases and the lapse rate
Earth's surface radiates longwave radiation with wavelengths in the range of 4–100 microns. Greenhouse gases that were largely transparent to incoming solar radiation are more absorbent for some wavelengths in this range.
The atmosphere near the Earth's surface is largely opaque to longwave radiation and most heat loss from the surface is by evaporation and convection. However radiative energy losses become increasingly important higher in the atmosphere, largely because of the decreasing concentration of water vapor, an important greenhouse gas.
Rather than thinking of longwave radiation headed to space as coming from the surface itself, it is more realistic to think of this outgoing radiation as being emitted by a layer in the mid-troposphere, which is effectively coupled to the surface by a lapse rate. The difference in temperature between these two locations explains the difference between surface emissions and emissions to space, i.e., it explains the greenhouse effect.
Infrared absorbing constituents in the atmosphere
Greenhouse gases
A greenhouse gas (GHG) is a gas which contributes to the trapping of heat by impeding the flow of longwave radiation out of a planet's atmosphere. Greenhouse gases contribute most of the greenhouse effect in Earth's energy budget.
Infrared active gases
Gases which can absorb and emit longwave radiation are said to be infrared active and act as greenhouse gases.
Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including H2O and CO2), are infrared active and act as greenhouse gases. (Technically, this is because when these molecules vibrate, those vibrations modify the molecular dipole moment, or asymmetry in the distribution of electrical charge. See Infrared spectroscopy.)
Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to longwave radiation, and, for practical purposes, do not absorb or emit longwave radiation. (This is because their molecules are symmetrical and so do not have a dipole moment.) Such gases make up more than 99% of the dry atmosphere.
Absorption and emission
Greenhouse gases absorb and emit longwave radiation within specific ranges of wavelengths (organized as spectral lines or bands).
When greenhouse gases absorb radiation, they distribute the acquired energy to the surrounding air as thermal energy (i.e., kinetic energy of gas molecules). Energy is transferred from greenhouse gas molecules to other molecules via molecular collisions.
Contrary to what is sometimes said, greenhouse gases do not "re-emit" photons after they are absorbed. Because each molecule experiences billions of collisions per second, any energy a greenhouse gas molecule receives by absorbing a photon will be redistributed to other molecules before there is a chance for a new photon to be emitted.
In a separate process, greenhouse gases emit longwave radiation, at a rate determined by the air temperature. This thermal energy is either absorbed by other greenhouse gas molecules or leaves the atmosphere, cooling it.
Radiative effects
Effect on air: Air is warmed by latent heat (buoyant water vapor condensing into water droplets and releasing heat), thermals (warm air rising from below), and by sunlight being absorbed in the atmosphere. Air is cooled radiatively, by greenhouse gases and clouds emitting longwave thermal radiation. Within the troposphere, greenhouse gases typically have a net cooling effect on air, emitting more thermal radiation than they absorb. Warming and cooling of air are well balanced, on average, so that the atmosphere maintains a roughly stable average temperature.
Effect on surface cooling: Longwave radiation flows both upward and downward due to absorption and emission in the atmosphere. These canceling energy flows reduce radiative surface cooling (net upward radiative energy flow). Latent heat transport and thermals provide non-radiative surface cooling which partially compensates for this reduction, but there is still a net reduction in surface cooling, for a given surface temperature.
Effect on TOA energy balance: Greenhouse gases impact the top-of-atmosphere (TOA) energy budget by reducing the flux of longwave radiation emitted to space, for a given surface temperature. Thus, greenhouse gases alter the energy balance at TOA. This means that the surface temperature needs to be higher (than the planet's effective temperature, i.e., the temperature associated with emissions to space), in order for the outgoing energy emitted to space to balance the incoming energy from sunlight. It is important to focus on the top-of-atmosphere (TOA) energy budget (rather than the surface energy budget) when reasoning about the warming effect of greenhouse gases.
Clouds and aerosols
Clouds and aerosols have both cooling effects, associated with reflecting sunlight back to space, and warming effects, associated with trapping thermal radiation.
On average, clouds have a strong net cooling effect. However, the mix of cooling and warming effects varies, depending on detailed characteristics of particular clouds (including their type, height, and optical properties). Thin cirrus clouds can have a net warming effect. Clouds can absorb and emit infrared radiation and thus affect the radiative properties of the atmosphere.
Basic formulas
Effective temperature
A given flux of thermal radiation has an associated effective radiating temperature or effective temperature. Effective temperature is the temperature that a black body (a perfect absorber/emitter) would need to be to emit that much thermal radiation. Thus, the overall effective temperature of a planet is given by T_eff = (OLR/σ)^(1/4),
where OLR is the average flux (power per unit area) of outgoing longwave radiation emitted to space and σ is the Stefan-Boltzmann constant. Similarly, the effective temperature of the surface is given by T_surf,eff = (SLR/σ)^(1/4),
where SLR is the average flux of longwave radiation emitted by the surface. (OLR is a conventional abbreviation. SLR is used here to denote the flux of surface-emitted longwave radiation, although there is no standard abbreviation for this.)
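As a rough numerical check of these formulas (an illustrative sketch, not a climate calculation), plugging in the flux values quoted earlier together with the standard value of the Stefan-Boltzmann constant recovers the temperatures discussed in the Measurement section:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux: float) -> float:
    """Blackbody temperature emitting the given flux: T = (flux / sigma)^(1/4)."""
    return (flux / SIGMA) ** 0.25

T_eff  = effective_temperature(239.0)   # from OLR, emissions to space
T_surf = effective_temperature(398.0)   # from SLR, emissions from the surface

print(f"T_eff  = {T_eff:.1f} K ({T_eff - 273.15:.0f} C)")    # ~254.8 K (-18 C)
print(f"T_surf = {T_surf:.1f} K ({T_surf - 273.15:.0f} C)")  # ~289.4 K (16 C)
print(f"Delta T = {T_surf - T_eff:.0f} K")                   # ~35 K
```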
Metrics for the greenhouse effect
The IPCC reports the greenhouse effect, G, as being 159 W/m2, where G is the flux of longwave thermal radiation that leaves the surface minus the flux of outgoing longwave radiation that reaches space: G = SLR − OLR.
Alternatively, the greenhouse effect can be described using the normalized greenhouse effect, g̃, defined as g̃ = G / SLR.
The normalized greenhouse effect is the fraction of the amount of thermal radiation emitted by the surface that does not reach space.
Based on the IPCC numbers, g̃ = 0.40. In other words, 40 percent less thermal radiation reaches space than what leaves the surface.
Sometimes the greenhouse effect is quantified as a temperature difference. This temperature difference is closely related to the quantities above.
When the greenhouse effect is expressed as a temperature difference, ΔT, this refers to the effective temperature associated with thermal radiation emissions from the surface minus the effective temperature associated with emissions to space: ΔT = T_surf,eff − T_eff.
Informal discussions of the greenhouse effect often compare the actual surface temperature to the temperature that the planet would have if there were no greenhouse gases. However, in formal technical discussions, when the size of the greenhouse effect is quantified as a temperature, this is generally done using the above formula. The formula refers to the effective surface temperature rather than the actual surface temperature, and compares the surface with the top of the atmosphere, rather than comparing reality to a hypothetical situation.
The temperature difference, ΔT, indicates how much warmer a planet's surface is than the planet's overall effective temperature.
Radiative balance
Earth's top-of-atmosphere (TOA) energy imbalance (EEI) is the amount by which the power of incoming radiation exceeds the power of outgoing radiation: EEI = ASR − OLR,
where ASR is the mean flux of absorbed solar radiation. ASR may be expanded as ASR = (1 − A) × MSI,
where A is the albedo (reflectivity) of the planet and MSI is the mean solar irradiance incoming at the top of the atmosphere.
The radiative equilibrium temperature of a planet can be expressed as T_eq = ((1 − A) × MSI / σ)^(1/4).
A planet's temperature will tend to shift towards a state of radiative equilibrium, in which the TOA energy imbalance is zero, i.e., EEI = 0. When the planet is in radiative equilibrium, the overall effective temperature of the planet is given by T_eff = T_eq.
Thus, the concept of radiative equilibrium is important because it indicates what effective temperature a planet will tend towards having.
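As an illustration, plugging in round present-day Earth values (an albedo of about 0.30 and a mean top-of-atmosphere solar irradiance of about 340 W/m2, both assumed here for the example) recovers an equilibrium temperature close to the −18 °C effective temperature discussed earlier:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(albedo: float, msi: float) -> float:
    """Radiative equilibrium temperature: T_eq = ((1 - A) * MSI / sigma)^(1/4)."""
    return ((1.0 - albedo) * msi / SIGMA) ** 0.25

T_eq = equilibrium_temperature(albedo=0.30, msi=340.0)
print(f"T_eq = {T_eq:.1f} K ({T_eq - 273.15:.1f} C)")   # ~254.5 K (-18.6 C)
```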
If, in addition to knowing the effective temperature, T_eff, we know the value of the greenhouse effect, then we know the mean (average) surface temperature of the planet.
This is why the quantity known as the greenhouse effect is important: it is one of the few quantities that go into determining the planet's mean surface temperature.
Greenhouse effect and temperature
Typically, a planet will be close to radiative equilibrium, with the rates of incoming and outgoing energy being well-balanced. Under such conditions, the planet's equilibrium temperature is determined by the mean solar irradiance and the planetary albedo (how much sunlight is reflected back to space instead of being absorbed).
The greenhouse effect measures how much warmer the surface is than the overall effective temperature of the planet. So, the effective surface temperature, T_surf,eff, is, using the definition of ΔT, T_surf,eff = T_eff + ΔT.
One could also express the relationship between T_surf,eff and T_eff using G or g̃.
So, the principle that a larger greenhouse effect corresponds to a higher surface temperature, if everything else (i.e., the factors that determine T_eff) is held fixed, is true as a matter of definition.
Note that the greenhouse effect influences the temperature of the planet as a whole, in tandem with the planet's tendency to move toward radiative equilibrium.
Misconceptions
There are sometimes misunderstandings about how the greenhouse effect functions and raises temperatures.
The surface budget fallacy is a common error in thinking. It involves thinking that an increased CO2 concentration could only cause warming by increasing the downward thermal radiation to the surface, as a result of making the atmosphere a better emitter. If the atmosphere near the surface is already nearly opaque to thermal radiation, this would mean that increasing CO2 could not lead to higher temperatures. However, it is a mistake to focus on the surface energy budget rather than the top-of-atmosphere energy budget. Regardless of what happens at the surface, increasing the concentration of CO2 tends to reduce the thermal radiation reaching space (OLR), leading to a TOA energy imbalance that leads to warming. Earlier researchers like Callendar (1938) and Plass (1959) focused on the surface budget, but the work of Manabe in the 1960s clarified the importance of the top-of-atmosphere energy budget.
Among those who do not believe in the greenhouse effect, there is a fallacy that the greenhouse effect involves greenhouse gases sending heat from the cool atmosphere to the planet's warm surface, in violation of the second law of thermodynamics. However, this idea reflects a misunderstanding. Radiation heat flow is the net energy flow after the flows of radiation in both directions have been taken into account. Radiation heat flow occurs in the direction from the surface to the atmosphere and space, as is to be expected given that the surface is warmer than the atmosphere and space. While greenhouse gases emit thermal radiation downward to the surface, this is part of the normal process of radiation heat transfer. The downward thermal radiation simply reduces the upward thermal radiation net energy flow (radiation heat flow), i.e., it reduces cooling.
Simplified models
Simplified models are sometimes used to support understanding of how the greenhouse effect comes about and how this affects surface temperature.
Atmospheric layer models
The greenhouse effect can be seen to occur in a simplified model in which the air is treated as if it were a single uniform layer exchanging radiation with the ground and space. Slightly more complex models add additional layers, or introduce convection.
Equivalent emission altitude
One simplification is to treat all outgoing longwave radiation as being emitted from an altitude where the air temperature equals the overall effective temperature for planetary emissions, T_eff. Some authors have referred to this altitude as the effective radiating level (ERL), and suggest that as the CO2 concentration increases, the ERL must rise to maintain the same mass of CO2 above that level.
This approach is less accurate than accounting for variation in radiation wavelength by emission altitude. However, it can be useful in supporting a simplified understanding of the greenhouse effect. For instance, it can be used to explain how the greenhouse effect increases as the concentration of greenhouse gases increases.
Earth's overall equivalent emission altitude has been increasing with a trend of /decade, which is said to be consistent with a global mean surface warming of /decade over the period 1979–2011.
Related effects on Earth
Negative greenhouse effect
Scientists have observed that, at times, there is a negative greenhouse effect over parts of Antarctica. In a location where there is a strong temperature inversion, so that the air is warmer than the surface, it is possible for the greenhouse effect to be reversed, so that the presence of greenhouse gases increases the rate of radiative cooling to space. In this case, the rate of thermal radiation emission to space is greater than the rate at which thermal radiation is emitted by the surface. Thus, the local value of the greenhouse effect is negative.
Runaway greenhouse effect
Bodies other than Earth
In the solar system, apart from the Earth, at least two other planets and a moon also have a greenhouse effect.
Venus
The greenhouse effect on Venus is particularly large, and it brings the surface temperature to as high as 735 K (462 °C). This is due to its very dense atmosphere which consists of about 97% carbon dioxide.
Although Venus is about 30% closer to the Sun, it absorbs (and is warmed by) less sunlight than Earth, because Venus reflects 77% of incident sunlight while Earth reflects around 30%. In the absence of a greenhouse effect, the surface of Venus would be expected to have a temperature well below the freezing point of water. Thus, contrary to what one might think, being nearer to the Sun is not a reason why Venus is warmer than Earth.
Due to its high pressure, the CO2 in the atmosphere of Venus exhibits continuum absorption (absorption over a broad range of wavelengths) and is not limited to absorption within the bands relevant to its absorption on Earth.
A runaway greenhouse effect involving carbon dioxide and water vapor has for many years been hypothesized to have occurred on Venus; this idea is still largely accepted. The planet Venus experienced a runaway greenhouse effect, resulting in an atmosphere which is 96% carbon dioxide, and a surface atmospheric pressure roughly the same as found about 900 m (3,000 ft) underwater on Earth. Venus may have had water oceans, but they would have boiled off as the mean surface temperature rose to the current 735 K (462 °C).
Mars
Mars has about 70 times as much carbon dioxide as Earth, but experiences only a small greenhouse effect, about 6 K. The greenhouse effect is small due to the lack of water vapor and the overall thinness of the atmosphere.
The same radiative transfer calculations that predict warming on Earth accurately explain the temperature on Mars, given its atmospheric composition.
Titan
Saturn's moon Titan has both a greenhouse effect and an anti-greenhouse effect. The presence of nitrogen (N2), methane (CH4), and hydrogen (H2) in the atmosphere contributes to a greenhouse effect, increasing the surface temperature by 21 K over the expected temperature of the body without these gases.
While the gases N2 and H2 ordinarily do not absorb infrared radiation, these gases absorb thermal radiation on Titan due to pressure-induced collisions, the large mass and thickness of the atmosphere, and the long wavelengths of the thermal radiation from the cold surface.
The existence of a high-altitude haze, which absorbs wavelengths of solar radiation but is transparent to infrared, contributes to an anti-greenhouse effect of approximately 9 K.
The net result of these two effects is a net warming of 21 K − 9 K = 12 K, so Titan's surface temperature of 94 K is 12 K warmer than it would be if there were no atmosphere.
Effect of pressure
One cannot predict the relative sizes of the greenhouse effects on different bodies simply by comparing the amount of greenhouse gases in their atmospheres. This is because factors other than the quantity of these gases also play a role in determining the size of the greenhouse effect.
Overall atmospheric pressure affects how much thermal radiation each molecule of a greenhouse gas can absorb. High pressure leads to more absorption and low pressure leads to less.
This is due to "pressure broadening" of spectral lines. When the total atmospheric pressure is higher, collisions between molecules occur at a higher rate. Collisions broaden the width of absorption lines, allowing a greenhouse gas to absorb thermal radiation over a broader range of wavelengths.
Each molecule in the air near Earth's surface experiences about 7 billion collisions per second. This rate is lower at higher altitudes, where the pressure and temperature are both lower. This means that greenhouse gases are able to absorb more wavelengths in the lower atmosphere than they can in the upper atmosphere.
On other planets, pressure broadening means that each molecule of a greenhouse gas is more effective at trapping thermal radiation if the total atmospheric pressure is high (as on Venus), and less effective at trapping thermal radiation if the atmospheric pressure is low (as on Mars).
See also
Anti-greenhouse effect
Climate change feedback
Climate model
Global dimming
Idealized greenhouse model
Illustrative model of greenhouse effect on climate change
Solar radiation management
References
Atmosphere
Atmospheric radiation
Climate forcing
Effects of climate change
Atmospheric chemistry | Greenhouse effect | [
"Chemistry"
] | 6,249 | [
"nan"
] |
12,395 | https://en.wikipedia.org/wiki/Group%20homomorphism | In mathematics, given two groups, (G, ∗) and (H, ·), a group homomorphism from (G, ∗) to (H, ·) is a function h : G → H such that for all u and v in G it holds that h(u ∗ v) = h(u) · h(v),
where the group operation on the left side of the equation is that of G and on the right side that of H.
From this property, one can deduce that h maps the identity element eG of G to the identity element eH of H, h(eG) = eH,
and it also maps inverses to inverses in the sense that h(u⁻¹) = h(u)⁻¹ for all u in G.
Hence one can say that h "is compatible with the group structure".
In areas of mathematics where one considers groups endowed with additional structure, a homomorphism sometimes means a map which respects not only the group structure (as above) but also the extra structure. For example, a homomorphism of topological groups is often required to be continuous.
Properties
Let eH be the identity element of the group (H, ·) and u an element of G. Then
h(u) · eH = h(u) = h(u ∗ eG) = h(u) · h(eG).
Now, by multiplying by the inverse of h(u) (or applying the cancellation rule), we obtain
eH = h(eG).
Similarly,
eH = h(eG) = h(u ∗ u⁻¹) = h(u) · h(u⁻¹).
Therefore, by the uniqueness of the inverse: h(u⁻¹) = h(u)⁻¹.
Types
Monomorphism A group homomorphism that is injective (or, one-to-one); i.e., preserves distinctness.
Epimorphism A group homomorphism that is surjective (or, onto); i.e., reaches every point in the codomain.
Isomorphism A group homomorphism that is bijective; i.e., injective and surjective. Its inverse is also a group homomorphism. In this case, the groups G and H are called isomorphic; they differ only in the notation of their elements and are identical for all practical purposes, i.e., we merely re-label the elements.
Endomorphism A group homomorphism, h: G → G; the domain and codomain are the same. Also called an endomorphism of G.
Automorphism A group endomorphism that is bijective, and hence an isomorphism. The set of all automorphisms of a group G, with functional composition as operation, itself forms a group, the automorphism group of G. It is denoted by Aut(G). As an example, the automorphism group of (Z, +) contains only two elements, the identity transformation and multiplication with −1; it is isomorphic to (Z/2Z, +).
Image and kernel
We define the kernel of h to be the set of elements in G which are mapped to the identity in H, ker(h) = {u ∈ G : h(u) = eH},
and the image of h to be im(h) = h(G) = {h(u) : u ∈ G}.
The kernel and image of a homomorphism can be interpreted as measuring how close it is to being an isomorphism. The first isomorphism theorem states that the image of a group homomorphism, h(G) is isomorphic to the quotient group G/ker h.
The kernel of h is a normal subgroup of G. Assume u ∈ ker(h) and show g⁻¹ ∗ u ∗ g ∈ ker(h) for arbitrary g ∈ G: h(g⁻¹ ∗ u ∗ g) = h(g)⁻¹ · h(u) · h(g) = h(g)⁻¹ · eH · h(g) = h(g)⁻¹ · h(g) = eH.
The image of h is a subgroup of H.
The homomorphism h is a group monomorphism, i.e., h is injective (one-to-one), if and only if ker(h) = {eG}. Injectivity directly gives that the kernel contains only the identity element; conversely, a kernel containing only the identity gives injectivity: if h(g1) = h(g2), then h(g1 ∗ g2⁻¹) = h(g1) · h(g2)⁻¹ = eH, so g1 ∗ g2⁻¹ ∈ ker(h) = {eG}, hence g1 = g2.
Examples
Consider the cyclic group Z3 = (Z/3Z, +) = ({0, 1, 2}, +) and the group of integers (Z, +). The map h : Z → Z/3Z with h(u) = u mod 3 is a group homomorphism. It is surjective and its kernel consists of all integers which are divisible by 3.
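This example is easy to verify mechanically; the following minimal sketch (Python assumed) spot-checks the homomorphism property and lists part of the kernel:

```python
from itertools import product

def h(u: int) -> int:
    return u % 3

# Spot-check h(u + v) = (h(u) + h(v)) mod 3 on a range of integers.
assert all(h(u + v) == (h(u) + h(v)) % 3
           for u, v in product(range(-20, 21), repeat=2))

# The kernel consists of the multiples of 3.
print([u for u in range(-9, 10) if h(u) == 0])   # [-9, -6, -3, 0, 3, 6, 9]
```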
The exponential map yields a group homomorphism from the group of real numbers R with addition to the group of non-zero real numbers R* with multiplication. The kernel is {0} and the image consists of the positive real numbers.
The exponential map also yields a group homomorphism from the group of complex numbers C with addition to the group of non-zero complex numbers C* with multiplication. This map is surjective and has the kernel {2πki : k ∈ Z}, as can be seen from Euler's formula. Fields like R and C that have homomorphisms from their additive group to their multiplicative group are thus called exponential fields.
The function , defined by is a homomorphism.
Consider the two groups (R, +) and (R+, ×), where R+ is the positive real numbers. Then, the function f : R+ → R defined by the logarithm, f(x) = log(x), is a homomorphism.
Category of groups
If h : G → H and k : H → K are group homomorphisms, then so is k ∘ h : G → K. This shows that the class of all groups, together with group homomorphisms as morphisms, forms a category (specifically the category of groups).
Homomorphisms of abelian groups
If G and H are abelian (i.e., commutative) groups, then the set of all group homomorphisms from G to H is itself an abelian group: the sum of two homomorphisms is defined by
(h + k)(u) = h(u) + k(u) for all u in G.
The commutativity of H is needed to prove that h + k is again a group homomorphism.
The addition of homomorphisms is compatible with the composition of homomorphisms in the following sense: if f is in Hom(K, G), h, k are elements of Hom(G, H), and g is in Hom(H, L), then
(h + k) ∘ f = (h ∘ f) + (k ∘ f) and g ∘ (h + k) = (g ∘ h) + (g ∘ k).
Since the composition is associative, this shows that the set End(G) of all endomorphisms of an abelian group forms a ring, the endomorphism ring of G. For example, the endomorphism ring of the abelian group consisting of the direct sum of m copies of Z/nZ is isomorphic to the ring of m-by-m matrices with entries in Z/nZ. The above compatibility also shows that the category of all abelian groups with group homomorphisms forms a preadditive category; the existence of direct sums and well-behaved kernels makes this category the prototypical example of an abelian category.
See also
Homomorphism
Fundamental theorem on homomorphisms
Quasimorphism
Ring homomorphism
References
External links
Group theory
Morphisms | Group homomorphism | [
"Mathematics"
] | 1,309 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Morphisms"
] |
12,397 | https://en.wikipedia.org/wiki/Group%20isomorphism | In abstract algebra, a group isomorphism is a function between two groups that sets up a bijection between the elements of the groups in a way that respects the given group operations. If there exists an isomorphism between two groups, then the groups are called isomorphic. From the standpoint of group theory, isomorphic groups have the same properties and need not be distinguished.
Definition and notation
Given two groups (G, ∗) and (H, ·), a group isomorphism from (G, ∗) to (H, ·) is a bijective group homomorphism from (G, ∗) to (H, ·). Spelled out, this means that a group isomorphism is a bijective function f : G → H such that for all u and v in G it holds that f(u ∗ v) = f(u) · f(v).
The two groups (G, ∗) and (H, ·) are isomorphic if there exists an isomorphism from one to the other. This is written (G, ∗) ≅ (H, ·).
Often shorter and simpler notations can be used. When the relevant group operations are understood, they are omitted and one writes G ≅ H.
Sometimes one can even simply write G = H. Whether such a notation is possible without confusion or ambiguity depends on context. For example, the equals sign is not very suitable when the groups are both subgroups of the same group. See also the examples.
Conversely, given a group (G, ∗), a set H, and a bijection f : G → H, we can make H a group (H, ·) by defining f(u) · f(v) = f(u ∗ v).
If H = G and · = ∗ then the bijection is an automorphism (q.v.).
Intuitively, group theorists view two isomorphic groups as follows: For every element g of a group G, there exists an element h of H such that h "behaves in the same way" as g (operates with other elements of the group in the same way as g). For instance, if g generates G, then so does h. This implies, in particular, that G and H are in bijective correspondence. Thus, the definition of an isomorphism is quite natural.
An isomorphism of groups may equivalently be defined as an invertible group homomorphism (the inverse function of a bijective group homomorphism is also a group homomorphism).
Examples
In this section some notable examples of isomorphic groups are listed.
The group of all real numbers under addition, (R, +), is isomorphic to the group of positive real numbers under multiplication, (R+, ×),
via the isomorphism f(x) = e^x.
The group Z of integers (with addition) is a subgroup of R, and the factor group R/Z is isomorphic to the group S1 of complex numbers of absolute value 1 (under multiplication): R/Z ≅ S1.
The Klein four-group is isomorphic to the direct product of two copies of Z2 = Z/2Z, and can therefore be written Z2 × Z2. Another notation is Dih2, because it is a dihedral group.
Generalizing this, for all odd n, Dih2n is isomorphic to the direct product of Dihn and Z2.
If (G, ∗) is an infinite cyclic group, then (G, ∗) is isomorphic to the integers (with the addition operation). From an algebraic point of view, this means that the set of all integers (with the addition operation) is the "only" infinite cyclic group.
Some groups can be proven to be isomorphic, relying on the axiom of choice, but the proof does not indicate how to construct a concrete isomorphism. Examples:
The group (R, +) is isomorphic to the group (C, +) of all complex numbers under addition.
The group C* of non-zero complex numbers with multiplication as the operation is isomorphic to the group S1 mentioned above.
Properties
The kernel of an isomorphism from (G, ∗) to (H, ·) is always {eG}, where eG is the identity of the group (G, ∗).
If and are isomorphic, then is abelian if and only if is abelian.
If f is an isomorphism from (G, ∗) to (H, ·), then for any a in G, the order of a equals the order of f(a).
If and are isomorphic, then is a locally finite group if and only if is locally finite.
The number of distinct groups (up to isomorphism) of order is given by sequence A000001 in the OEIS. The first few numbers are 0, 1, 1, 1 and 2 meaning that 4 is the lowest order with more than one group.
Cyclic groups
All cyclic groups of a given order are isomorphic to (Zn, +n), where +n denotes addition modulo n.
Let G be a cyclic group and n be the order of G. Letting x be a generator of G, G is then equal to ⟨x⟩ = {e, x, ..., x^(n−1)}.
We will show that G ≅ (Zn, +n).
Define φ : G → Zn = {0, 1, ..., n − 1} by φ(x^a) = a,
so that φ is well defined on every power of x.
Clearly, φ is bijective. Then
φ(x^a · x^b) = φ(x^(a+b)) = (a + b) mod n = φ(x^a) +n φ(x^b),
which proves that G ≅ (Zn, +n).
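The argument can be checked on a concrete cyclic group. In the sketch below (Python assumed; taking as an example the subgroup of the integers modulo 7 generated by 3, which is cyclic of order 6), the map x^a ↦ a turns multiplication into addition modulo 6:

```python
p, x, n = 7, 3, 6
phi = {pow(x, a, p): a for a in range(n)}   # phi(x^a) = a; the powers are 1,3,2,6,4,5

# Verify phi(g * h) = phi(g) + phi(h) (mod n) for every pair of group elements.
ok = all(phi[(g * h) % p] == (phi[g] + phi[h]) % n
         for g in phi for h in phi)
print(ok)   # True: the group generated by 3 mod 7 is isomorphic to (Z6, +6)
```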
Consequences
From the definition, it follows that any isomorphism f : G → H will map the identity element of G to the identity element of H, f(eG) = eH,
that it will map inverses to inverses, f(u⁻¹) = f(u)⁻¹ for all u in G,
and more generally, nth powers to nth powers, f(u^n) = f(u)^n for all u in G,
and that the inverse map f⁻¹ : H → G is also a group isomorphism.
The relation "being isomorphic" is an equivalence relation. If is an isomorphism between two groups and then everything that is true about that is only related to the group structure can be translated via into a true ditto statement about and vice versa.
Automorphisms
An isomorphism from a group (G, ∗) to itself is called an automorphism of the group. Thus it is a bijection f : G → G such that f(u) ∗ f(v) = f(u ∗ v).
The image under an automorphism of a conjugacy class is always a conjugacy class (the same or another).
The composition of two automorphisms is again an automorphism, and with this operation the set of all automorphisms of a group G, denoted by Aut(G), itself forms a group, the automorphism group of G.
For all abelian groups there is at least the automorphism that replaces the group elements by their inverses. However, in groups where all elements are equal to their inverses this is the trivial automorphism, e.g. in the Klein four-group. For that group all permutations of the three non-identity elements are automorphisms, so the automorphism group is isomorphic to the symmetric group S3 (which itself is isomorphic to Dih3).
In Zp for a prime number p, one non-identity element can be replaced by any other, with corresponding changes in the other elements. The automorphism group is isomorphic to Zp−1 (the cyclic group of order p − 1). For example, for p = 7, multiplying all elements of Z7 by 3, modulo 7, is an automorphism of order 6 in the automorphism group, because 3^6 = 1 (modulo 7), while lower powers do not give 1. Thus this automorphism generates Z6. There is one more automorphism with this property: multiplying all elements of Z7 by 5, modulo 7. Therefore, these two correspond to the elements 1 and 5 of Z6, in that order or conversely.
The automorphism group of Z6 is isomorphic to Z2, because only each of the two elements 1 and 5 generate Z6, so apart from the identity we can only interchange these.
The automorphism group of Z2 × Z2 × Z2 = Dih2 × Z2 has order 168, as can be found as follows. All 7 non-identity elements play the same role, so we can choose which plays the role of (1,0,0). Any of the remaining 6 can be chosen to play the role of (0,1,0). This determines which element corresponds to (1,1,0). For (0,0,1) we can choose from 4, which determines the rest. Thus we have 7 × 6 × 4 = 168 automorphisms. They correspond to those of the Fano plane, of which the 7 points correspond to the 7 non-identity elements. The lines connecting three points correspond to the group operation: a, b, and c on one line means a + b = c, a + c = b, and b + c = a. See also general linear group over finite fields.
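The count 7 × 6 × 4 = 168 can be confirmed by brute force, since automorphisms of Z2 × Z2 × Z2 correspond to invertible 3×3 matrices over the field with two elements. A short sketch (Python assumed):

```python
from itertools import product

def det_mod2(rows) -> int:
    """Determinant of a 3x3 matrix over F2 (cofactor expansion; minus equals plus mod 2)."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return (a * (e * i + f * h) + b * (d * i + f * g) + c * (d * h + e * g)) % 2

# Count invertible 3x3 matrices over F2, i.e. the order of Aut(Z2 x Z2 x Z2).
count = sum(det_mod2(m) == 1
            for m in product(product((0, 1), repeat=3), repeat=3))
print(count)   # 168
```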
For abelian groups, all non-trivial automorphisms are outer automorphisms.
Non-abelian groups have a non-trivial inner automorphism group, and possibly also outer automorphisms.
See also
Group isomorphism problem
References
Group theory
Morphisms | Group isomorphism | [
"Mathematics"
] | 1,436 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Morphisms"
] |
12,398 | https://en.wikipedia.org/wiki/Geographic%20information%20system | A geographic information system (GIS) consists of integrated computer hardware and software that store, manage, analyze, edit, output, and visualize geographic data. Much of this often happens within a spatial database; however, this is not essential to meet the definition of a GIS. In a broader sense, one may consider such a system also to include human users and support staff, procedures and workflows, the body of knowledge of relevant concepts and methods, and institutional organizations.
The uncounted plural, geographic information systems, also abbreviated GIS, is the most common term for the industry and profession concerned with these systems. The academic discipline that studies these systems and their underlying geographic principles may also be abbreviated as GIS, but the unambiguous GIScience is more common. GIScience is often considered a subdiscipline of geography within the branch of technical geography.
Geographic information systems are utilized in multiple technologies, processes, techniques and methods. They are attached to various operations and numerous applications that relate to engineering, planning, management, transport/logistics, insurance, telecommunications, and business. For this reason, GIS and location intelligence applications are at the foundation of location-enabled services, which rely on geographic analysis and visualization.
GIS provides the ability to relate previously unrelated information, through the use of location as the "key index variable". Locations and extents that are found in the Earth's spacetime can be recorded through the date and time of occurrence, along with x, y, and z coordinates representing longitude (x), latitude (y), and elevation (z). All Earth-based, spatial–temporal, location and extent references should be relatable to one another, and ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies.
History and development
While digital GIS dates to the mid-1960s, when Roger Tomlinson first coined the phrase "geographic information system", many of the geographic concepts and methods that GIS automates date back decades earlier.
One of the first known instances in which spatial analysis was used came from the field of epidemiology, in an 1832 report on the cholera epidemic in Paris. French cartographer and geographer Charles Picquet created a map outlining the forty-eight districts of Paris, using halftone color gradients, to provide a visual representation of the number of reported deaths due to cholera per every 1,000 inhabitants.
In 1854, John Snow, an epidemiologist and physician, was able to determine the source of a cholera outbreak in London through the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements of topography and theme existed previously in cartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena.
The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was initially drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the typical features of a contemporary GIS, the photographic process just described is not considered a GIS in itself – as the maps were just images with no database to link them to.
Two additional developments are notable in the early days of GIS: Ian McHarg's publication Design with Nature and its map overlay method and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system.
The first publication detailing the use of computers to facilitate cartography was written by Waldo Tobler in 1959. Further computer hardware development spurred by nuclear weapon research led to more widespread general-purpose computer "mapping" applications by the early 1960s.
In 1963, the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory, an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.
CGIS was an improvement over "computer mapping" applications as it provided capabilities for data storage, overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data. CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.
In 1964, Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers and corporations worldwide. These programs were the first examples of general-purpose GIS software that was not developed for a particular installation, and was very influential on future commercial software, such as Esri ARC/INFO, released in 1983.
By the late 1970s, two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph) along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures.
In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product, was released for the DOS operating system. This was renamed in 1990 to MapInfo for Windows when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.
By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. The major trend of the 21st century has been the integration of GIS capabilities with other information technology and Internet infrastructure, such as relational databases, cloud computing, software as a service (SaaS), and mobile computing.
GIS software
The distinction must be made between a singular geographic information system, which is a single installation of software and data for a particular use, along with associated hardware, staff, and institutions (e.g., the GIS for a particular city government); and GIS software, a general-purpose application program that is intended to be used in many individual geographic information systems in a variety of application domains. Starting in the late 1970s, many software packages have been created specifically for GIS applications. Esri's ArcGIS, which includes ArcGIS Pro and the legacy software ArcMap, currently dominates the GIS market. Other examples of GIS software include Autodesk's products and MapInfo Professional, as well as open-source programs such as QGIS, GRASS GIS, MapGuide, and Hadoop-GIS. These and other desktop GIS applications include a full suite of capabilities for entering, managing, analyzing, and visualizing geographic data, and are designed to be used on their own.
Starting in the late 1990s with the emergence of the Internet, as computer network technology progressed, GIS infrastructure and data began to move to servers, providing another mechanism for providing GIS capabilities. This was facilitated by standalone software installed on a server, similar to other server software such as HTTP servers and relational database management systems, enabling clients to have access to GIS data and processing tools without having to install specialized desktop software. These networks are known as distributed GIS. This strategy has been extended through the Internet and development of cloud-based GIS platforms such as ArcGIS Online and GIS-specialized software as a service (SaaS). The use of the Internet to facilitate distributed GIS is known as Internet GIS.
An alternative approach is the integration of some or all of these capabilities into other software or information technology architectures. One example is a spatial extension to Object-relational database software, which defines a geometry datatype so that spatial data can be stored in relational tables, and extensions to SQL for spatial analysis operations such as overlay. Another example is the proliferation of geospatial libraries and application programming interfaces (e.g., GDAL, Leaflet, D3.js) that extend programming languages to enable the incorporation of GIS data and processing into custom software, including web mapping sites and location-based services in smartphones.
Geospatial data management
The core of any GIS is a database that contains representations of geographic phenomena, modeling their geometry (location and shape) and their properties or attributes. A GIS database may be stored in a variety of forms, such as a collection of separate data files or a single spatially-enabled relational database. Collecting and managing these data usually constitutes the bulk of the time and financial resources of a project, far more than other aspects such as analysis and mapping.
Aspects of geographic data
GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time.
Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.
Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted and represented. This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated.
Data modeling
GIS data represents phenomena that exist in the real world, such as roads, land use, elevation, trees, waterways, and states. The most common types of phenomena that are represented in data can be divided into two conceptualizations: discrete objects (e.g., a house, a road) and continuous fields (e.g., rainfall amount or population density). Other types of geographic phenomena, such as events (e.g., location of World War II battles), processes (e.g., extent of suburbanization), and masses (e.g., types of soil in an area) are represented less commonly or indirectly, or are modeled in analysis procedures rather than data.
Traditionally, there are two broad methods used to store data in a GIS for both kinds of abstractions of mapping references: raster images and vector data. Points, lines, and polygons represent the vector data of mapped location attribute references.
A new hybrid method of storing data is that of identifying point clouds, which combine three-dimensional points with RGB information at each point, returning a 3D color image. GIS thematic maps then are becoming more and more realistically visually descriptive of what they set out to show or determine.
Data acquisition
GIS data acquisition includes several methods for gathering spatial data into a GIS database, which can be grouped into three categories: primary data capture, the direct measurement of phenomena in the field (e.g., remote sensing, the global positioning system); secondary data capture, the extraction of information from existing sources that are not in a GIS form, such as paper maps, through digitization; and data transfer, the copying of existing GIS data from external sources such as government agencies and private companies. All of these methods can consume significant time, finances, and other resources.
Primary data capture
Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) like the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers with the ability to edit live data using wireless connections or disconnected editing sessions. The current trend is to utilize applications available on smartphones and PDAs in the form of mobile GIS. This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time. This eliminates the need to post process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate.
Remotely sensed data also play an important role in data collection; such data are gathered by sensors attached to a platform. Sensors include cameras, digital scanners and lidar, while platforms usually consist of aircraft and satellites. In England in the mid-1990s, hybrid kite/balloons called helikites first pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and gather more accurate data than aircraft. Helikites can be used over roads, railways and towns where unmanned aerial vehicles (UAVs) are banned.
Recently, aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of in only 12 minutes.
The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for images from high-quality digital cameras this step is skipped.
Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.
Secondary data capture
The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, Helikites and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves the tracing of geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus. The puck has a small window with cross-hairs which allows for greater precision and pinpointing of map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality.
Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that could be further processed to produce vector data.
When data is captured, the user should consider whether it should be captured with relative accuracy or absolute accuracy, since this could influence not only how the information will be interpreted but also the cost of data capture.
After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected.
Projections, coordinate systems, and registration
The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like North American Datum of 1983 for U.S. measurements, and the World Geodetic System for worldwide measurements.
The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.
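As an illustration, the following minimal Python sketch applies a 7-parameter Helmert transformation to a geocentric coordinate using the common small-angle approximation. The parameter values and the point are placeholders, not real datum shifts, and sign conventions for the rotation terms vary between authorities:

```python
# A minimal sketch of a 7-parameter Helmert (similarity) datum
# transformation on geocentric XYZ coordinates, using the small-angle
# approximation.  All values below are invented placeholders.
import numpy as np

def helmert(xyz, tx, ty, tz, s_ppm, rx, ry, rz):
    """tx..tz in metres, s_ppm in parts per million, rx..rz in radians."""
    t = np.array([tx, ty, tz])
    scale = 1.0 + s_ppm * 1e-6
    # Small-angle rotation matrix.
    rot = np.array([[1.0,  -rz,   ry],
                    [ rz,  1.0,  -rx],
                    [-ry,   rx,  1.0]])
    return t + scale * rot @ np.asarray(xyz)

# Hypothetical parameters and point, for illustration only:
point = [3_980_000.0, -100_000.0, 4_970_000.0]
print(helmert(point, 89.5, 93.8, 123.1, -1.2, 0.0, 0.0, 7.5e-6))
```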
In popular GIS software, data projected in latitude/longitude are often represented as a geographic coordinate system. For example, data in latitude/longitude referenced to the North American Datum of 1983 are denoted 'GCS North American 1983'.
Data quality
While no digital model can be a perfect representation of the real world, it is important that GIS data be of a high quality. In keeping with the principle of homomorphism, the data must be close enough to reality so that the results of GIS procedures correctly correspond to the results of real world processes. This means that there is no single standard for data quality, because the necessary degree of quality depends on the scale and purpose of the tasks for which it is to be used. Several elements of data quality are important to GIS data:
Accuracy
The degree of similarity between a represented measurement and the actual value; conversely, error is the amount of difference between them. In GIS data, there is concern for accuracy in representations of location (positional accuracy), property (attribute accuracy), and time. For example, the US 2020 Census says that the population of Houston on April 1, 2020 was 2,304,580; if it was actually 2,310,674, this would be an error and thus a lack of attribute accuracy.
Precision
The degree of refinement in a represented value. In a quantitative property, this is the number of significant digits in the measured value. An imprecise value is vague or ambiguous, including a range of possible values. For example, if one were to say that the population of Houston on April 1, 2020 was "about 2.3 million," this statement would be imprecise, but likely accurate because the correct value (and many incorrect values) are included. As with accuracy, representations of location, property, and time can all be more or less precise. Resolution is a commonly used expression of positional precision, especially in raster data sets. Scale is closely related to precision in maps, as it dictates a desirable level of spatial precision, but is problematic in GIS, where a data set can be shown at a variety of display scales (including scales that would not be appropriate for the quality of the data).
Uncertainty
A general acknowledgement of the presence of error and imprecision in geographic data. That is, it is a degree of general doubt, given that it is difficult to know exactly how much error is present in a data set, although some form of estimate may be attempted (a confidence interval being such an estimate of uncertainty). This is sometimes used as a collective term for all or most aspects of data quality.
Vagueness or fuzziness
The degree to which an aspect (location, property, or time) of a phenomenon is inherently imprecise, rather than the imprecision being in a measured value. For example, the spatial extent of the Houston metropolitan area is vague, as there are places on the outskirts of the city that are less connected to the central city (measured by activities such as commuting) than places that are closer. Mathematical tools such as fuzzy set theory are commonly used to manage vagueness in geographic data.
Completeness
The degree to which a data set represents all of the actual features that it purports to include. For example, if a layer of "roads in Houston" is missing some actual streets, it is incomplete.
Currency
The most recent point in time at which a data set claims to be an accurate representation of reality. This is a concern for the majority of GIS applications, which attempt to represent the world "at present," in which case older data is of lower quality.
Consistency
The degree to which the representations of the many phenomena in a data set correctly correspond with each other. Consistency in topological relationships between spatial objects is an especially important aspect of consistency. For example, if all of the lines in a street network were accidentally moved 10 meters to the East, they would be inaccurate but still consistent, because they would still properly connect at each intersection, and network analysis tools such as shortest path would still give correct results.
Propagation of uncertainty
The degree to which the quality of the results of Spatial analysis methods and other processing tools derives from the quality of input data. For example, interpolation is a common operation used in many ways in GIS; because it generates estimates of values between known measurements, the results will always be more precise, but less certain (as each estimate has an unknown amount of error).
The quality of a dataset is very dependent upon its sources, and the methods used to create it. Land surveyors have been able to provide a high level of positional accuracy utilizing high-end GPS equipment, but GPS locations on the average smartphone are much less accurate. Common datasets such as digital terrain and aerial imagery are available in a wide variety of levels of quality, especially spatial precision. Paper maps, which have been digitized for many years as a data source, can also be of widely varying quality.
A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict.
Raster-to-vector translation
Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion.
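As a minimal illustration of the first step of such a conversion, the following Python sketch (using an invented two-class raster) groups contiguous cells of the same classification into regions, whose outlines a GIS would then trace as vector boundaries:

```python
# Sketch: group contiguous cells of equal class into regions, the
# precursor to tracing vector polygons around them.  Toy 2-class raster.
import numpy as np
from scipy import ndimage

classified = np.array([[1, 1, 2, 2],
                       [1, 1, 2, 2],
                       [1, 2, 2, 2],
                       [1, 1, 1, 2]])

for cls in np.unique(classified):
    regions, n = ndimage.label(classified == cls)  # 4-connected regions
    print(f"class {cls}: {n} contiguous region(s)")
```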
More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering and a variety of other techniques including use of two dimensional Fourier transforms. Since digital data is collected and stored in various ways, the two data sources may not be entirely compatible. So a GIS must be able to convert geographic data from one structure to another. In so doing, the implicit assumptions behind different ontologies and classifications require analysis. Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers.
Spatial ETL
Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets.
Spatial analysis
GIS spatial analysis is a rapidly changing field, and GIS packages are increasingly including analytical tools as standard built-in facilities, as optional toolsets, as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. In a broad sense, much of GIS processing can be described as digitisation: the conversion of geographic phenomena to vector or other digital representations.
Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions.
Terrain analysis
Many geographic tasks involve the terrain, the shape of the surface of the earth, such as hydrology, earthworks, and biogeography. Thus, terrain data is often a core dataset in a GIS, usually in the form of a raster Digital elevation model (DEM) or a Triangulated irregular network (TIN). A variety of tools are available in most GIS software for analyzing terrain, often by creating derivative datasets that represent a specific aspect of the surface. Some of the most common include:
Slope or grade is the steepness or gradient of a unit of terrain, usually measured as an angle in degrees or as a percentage.
Aspect can be defined as the direction in which a unit of terrain faces. Aspect is usually expressed in degrees from north.
Cut and fill is a computation of the difference between the surface before and after an excavation project to estimate costs.
Hydrological modeling can provide a spatial element that other hydrological models lack, with the analysis of variables such as slope, aspect and watershed or catchment area. Terrain analysis is fundamental to hydrology, since water always flows down a slope. As basic terrain analysis of a digital elevation model (DEM) involves calculation of slope and aspect, DEMs are very useful for hydrological analysis. Slope and aspect can then be used to determine direction of surface runoff, and hence flow accumulation for the formation of streams, rivers and lakes. Areas of divergent flow can also give a clear indication of the boundaries of a catchment. Once a flow direction and accumulation matrix has been created, queries can be performed that show contributing or dispersal areas at a certain point. More detail can be added to the model, such as terrain roughness, vegetation types and soil types, which can influence infiltration and evapotranspiration rates, and hence influence surface flow. One of the main uses of hydrological modeling is in environmental contamination research. Other applications of hydrological modeling include groundwater and surface water mapping, as well as flood risk maps.
Viewshed analysis predicts the impact that terrain has on the visibility between locations, which is especially important for wireless communications.
Shaded relief is a depiction of the surface as if it were a three dimensional model lit from a given direction, which is very commonly used in maps.
Most of these are generated using algorithms that are discrete simplifications of vector calculus. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using elevation values of a cell's adjacent neighbours. Each of these is strongly affected by the level of detail in the terrain data, such as the resolution of a DEM, which should be chosen carefully.
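A minimal sketch of such a neighborhood operation computes slope (and the gradient used for aspect) from an invented 3×3 DEM with central differences; real packages (e.g., Horn's method) differ in detail and in how grid directions map to compass bearings:

```python
# Sketch: slope from a toy DEM via central differences, a
# neighbourhood operation on each cell's adjacent elevations.
import numpy as np

dem = np.array([[100., 101., 103.],
                [ 98., 100., 102.],
                [ 96.,  99., 101.]])
cell = 10.0  # metres per cell, assumed

d_row, d_col = np.gradient(dem, cell)    # elevation change per metre

slope_deg = np.degrees(np.arctan(np.hypot(d_col, d_row)))
# Direction of steepest change in grid coordinates; converting this
# to a compass aspect depends on the raster's orientation.
aspect_rad = np.arctan2(d_row, d_col)

print(slope_deg.round(2))
print(np.degrees(aspect_rad).round(1))
```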
Proximity analysis
Distance is a key part of solving many geographic tasks, usually due to the friction of distance. Thus, a wide variety of analysis tools analyze distance in some form, such as buffers, Voronoi or Thiessen polygons, cost distance analysis, and network analysis.
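For instance, a raster buffer can be sketched with a Euclidean distance transform; the feature layer and the buffer radius below are invented:

```python
# Sketch: cells within 2 cell-widths of a feature fall in the buffer.
import numpy as np
from scipy import ndimage

features = np.zeros((7, 7), dtype=bool)
features[3, 3] = True  # a single feature cell

# Distance from every cell to the nearest feature cell.
dist = ndimage.distance_transform_edt(~features)
buffer_zone = dist <= 2.0
print(buffer_zone.astype(int))
```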
Data analysis
It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region.
Additionally, from a series of three-dimensional points, or a digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach, by computing all of the areas contiguous and uphill from any given point of interest. Similarly, an expected thalweg, the path along which surface water would travel in intermittent and permanent streams, can be computed from elevation data in the GIS.
Topological modeling
A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).
Geometric networks
Geometric networks are linear networks of objects that can be used to represent interconnected features, and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weight and flow assigned to its edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling.
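A minimal sketch of a weighted network and a shortest-path query, using the networkx library with invented junction names and edge costs:

```python
# Sketch: edges connected at junctions, with weights on edges,
# queried for the least-cost path, as in utility or road networks.
import networkx as nx

net = nx.Graph()
net.add_edge("plant", "junction_a", weight=4.0)   # edge costs (invented)
net.add_edge("plant", "junction_b", weight=2.0)
net.add_edge("junction_a", "house", weight=1.0)
net.add_edge("junction_b", "house", weight=5.0)

path = nx.shortest_path(net, "plant", "house", weight="weight")
cost = nx.shortest_path_length(net, "plant", "house", weight="weight")
print(path, cost)  # ['plant', 'junction_a', 'house'] 5.0
```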
Cartographic modeling
Dana Tomlin coined the term cartographic modeling in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990). Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.
Map overlay
The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area.
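The three overlay operations can be sketched on two invented rectangles with the shapely geometry library:

```python
# Sketch: union, intersect, and symmetric difference overlays on
# two overlapping 2x2 squares (invented extents).
from shapely.geometry import box

a = box(0, 0, 2, 2)
b = box(1, 1, 3, 3)

print(a.union(b).area)                 # 7.0  (union overlay)
print(a.intersection(b).area)          # 1.0  (intersect overlay)
print(a.symmetric_difference(b).area)  # 6.0  (symmetric difference)
```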
Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.
In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
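A minimal sketch of such a local operation, combining two invented rasters cell-by-cell with hypothetical index-model weights:

```python
# Sketch of "map algebra": a weighted local operation on two rasters.
import numpy as np

rainfall = np.array([[10., 20.], [30., 40.]])
slope    = np.array([[ 1.,  3.], [ 2.,  5.]])

# Hypothetical index model: weight rainfall twice as heavily as slope.
suitability = 0.67 * rainfall + 0.33 * slope
print(suitability)
```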
Geostatistics
Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation, and predict values at arbitrary locations (interpolation).
When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined from the scale and distribution of the data collection.
To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistic and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable.
Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for each small section of terrain).
Interpolation is a justified measurement because of the spatial autocorrelation principle, which recognizes that data collected at any position will have a great similarity to, or influence on, data at locations within its immediate vicinity.
Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, spline, and trend surface analysis are all mathematical methods to produce interpolative data.
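As one example from this list, inverse distance weighting can be sketched in a few lines; the sample points, query location, and power parameter below are invented:

```python
# Sketch of inverse distance weighting (IDW): estimate a value at
# (x, y) from sample points, weighting each by 1/distance**p.
import numpy as np

samples = np.array([[0.0, 0.0, 12.0],   # columns: x, y, measured value
                    [4.0, 0.0, 18.0],
                    [0.0, 3.0, 15.0]])

def idw(x, y, pts, p=2.0):
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d == 0):                  # exact hit on a sample point
        return pts[d == 0, 2][0]
    w = 1.0 / d**p
    return np.sum(w * pts[:, 2]) / np.sum(w)

print(idw(1.0, 1.0, samples))
```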
Address geocoding
Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actually positioned space as opposed to an interpolated point. This approach is being increasingly used to provide more precise location information.
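A minimal sketch of this linear interpolation, with invented segment endpoints, reproduces the address-500 example above (the estimate lands near, not exactly at, the midpoint because the range starts at 1):

```python
# Sketch: place an address along a centreline segment in proportion
# to its position within the segment's address range.
def geocode(address, addr_from, addr_to, pt_from, pt_to):
    frac = (address - addr_from) / (addr_to - addr_from)
    return (pt_from[0] + frac * (pt_to[0] - pt_from[0]),
            pt_from[1] + frac * (pt_to[1] - pt_from[1]))

print(geocode(500, 1, 1000, (0.0, 0.0), (100.0, 0.0)))  # ~ (50.0, 0.0)
```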
Reverse geocoding
Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range.
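The sketch below inverts that mapping, estimating a house number from a fractional position along the segment; as in the example above, the midpoint of a 1–100 range returns approximately 50:

```python
# Sketch: estimate a house number from a position along the segment.
def reverse_geocode(frac, addr_from, addr_to):
    return round(addr_from + frac * (addr_to - addr_from))

print(reverse_geocode(0.5, 1, 100))  # ~50: an estimate, not an actual address
```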
Multi-criteria decision analysis
Coupled with GIS, multi-criteria decision analysis (MCDA) methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised. GIS MCDA may reduce costs and time involved in identifying potential restoration sites.
GIS data mining
GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis.
Data output and cartography
Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, but production of quality cartography is also achieved by importing layers into a design program to refine them. Most GIS software gives the user substantial control over the appearance of the data.
Cartographic work serves two major functions:
First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).
Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.
An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data.
Terrain depiction
Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of land surface with contour lines or with shaded relief.
Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.
The digital elevation model, consisting of surface elevations recorded on a 30-meter horizontal grid, shows high elevations as white and low elevation as black.
The accompanying Landsat Thematic Mapper image shows a false-color infrared image looking down at the same area in 30-meter pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information.
A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.
Web mapping
In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information. For example, during the COVID-19 pandemic, web maps hosted on dashboards were used to rapidly disseminate case data to the general public.
Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have proven to provide a high level of value and benefit to end users beyond what is possible through traditional geographic information.
Web mapping is not without its drawbacks. Web mapping allows for the creation and distribution of maps by people without proper cartographic training. This has led to maps that ignore cartographic conventions and are potentially misleading, with one study finding that more than half of United States state government COVID-19 dashboards did not follow these conventions.
Uses
Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The perhaps hundreds of different uses of GIS can be classified in several ways:
Goal: the purpose of an application can be broadly classified as either scientific research or resource management. The purpose of research, defined as broadly as possible, is to discover new knowledge; this may be performed by someone who considers herself a scientist, but may also be done by anyone who is trying to learn why the world appears to work the way it does. A study as practical as deciphering why a business location has failed would be research in this sense. Management (sometimes called operational applications), also defined as broadly as possible, is the application of knowledge to make practical decisions on how to employ the resources one has control over to achieve one's goals. These resources could be time, capital, labor, equipment, land, mineral deposits, wildlife, and so on.
Decision level: Management applications have been further classified as strategic, tactical, and operational, a common classification in business management. Strategic tasks are long-term, visionary decisions about what goals one should have, such as whether a business should expand or not. Tactical tasks are medium-term decisions about how to achieve strategic goals, such as a national forest creating a grazing management plan. Operational decisions are concerned with the day-to-day tasks, such as a person finding the shortest route to a pizza restaurant.
Topic: the domains in which GIS is applied largely fall into those concerned with the human world (e.g., economics, politics, transportation, education, landscape architecture, archaeology, urban planning, real estate, public health, crime mapping, national defense), and those concerned with the natural world (e.g., geology, biology, oceanography, climate). That said, one of the powerful capabilities of GIS and the spatial perspective of geography is their integrative ability to compare disparate topics, and many applications are concerned with multiple domains. Examples of integrated human-natural application domains include deep mapping, Natural hazard mitigation, wildlife management, sustainable development, natural resources, and climate change response.
Institution: GIS has been implemented in a variety of different kinds of institutions: government (at all levels from municipal to international), business (of all types and sizes), non-profit organizations (even churches), as well as personal uses. The latter has become increasingly prominent with the rise of location-enabled smartphones.
Lifespan: GIS implementations may be focused on a project or an enterprise. A Project GIS is focused on accomplishing a single task: data is gathered, analysis is performed, and results are produced separately from any other projects the person may perform, and the implementation is essentially transitory. An Enterprise GIS is intended to be a permanent institution, including a database that is carefully designed to be useful for a variety of projects over many years, and is likely used by many individuals across an enterprise, with some employed full-time just to maintain it.
Integration: Traditionally, most GIS applications were standalone, using specialized GIS software, specialized hardware, specialized data, and specialized professionals. Although these remain common to the present day, integrated applications have greatly increased, as geospatial technology was merged into broader enterprise applications, sharing IT infrastructure, databases, and software, often using enterprise integration platforms such as SAP.
The implementation of a GIS is often driven by the requirements of a jurisdiction (such as a city), a purpose, or an application. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for one application, jurisdiction, enterprise, or purpose may not be necessarily interoperable or compatible with a GIS developed for some other application, jurisdiction, enterprise, or purpose.
GIS is also diverging into location-based services, which allows GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing.
GIS is also used in digital marketing and SEO for audience segmentation based on location.
Topics
Aquatic science
Archaeology
Disaster response
Geospatial disaster response uses geospatial data and tools to help emergency responders, land managers, and scientists respond to disasters. Geospatial data can help save lives, reduce damage, and improve communication. Federal authorities such as FEMA can use geospatial data to create maps that show the extent of a disaster, the location of people in need, and the location of debris; to build models that estimate the number of people at risk and the amount of damage; to improve communication between emergency responders, land managers, and scientists; to determine where to allocate resources, such as emergency medical teams or search and rescue units; and to plan evacuation routes and identify the areas most at risk.
In the United States, FEMA's Response Geospatial Office is responsible for the agency's capture, analysis and development of GIS products to enhance situational awareness and enable expeditious and effective decision making. The RGO's mission is to support decision makers in understanding the size, scope, and extent of disaster impacts so they can deliver resources to the communities most in need.
Environmental governance
Environmental contamination
Geologic mapping
Geospatial intelligence
History
The use of digital maps generated by GIS has also influenced the development of an academic field known as spatial humanities.
Hydrology
Participatory GIS
Public health
Traditional knowledge GIS
Open Geospatial Consortium standards
The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service, and Web Feature Service.
GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.
Compliant products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, it is automatically registered as "compliant" by the OGC.
Implementing products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry.
Adding the dimension of time
The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years through the use of cartographic visualizations. As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced for example by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about . The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and more recently the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis.
In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of the data required to produce these datasets would not have been possible without GIS.
Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems.
Semantics
Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications and also to enable new analysis mechanisms.
Ontologies are a key component of this semantic approach as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of land cover type forest in another more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF and the SPARQL database query protocols.
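As an illustration of the land-cover example above, here is a minimal Python sketch that merges two differently classified records by testing subsumption in a tiny hierarchy. The class names and parent links are invented for the example, not drawn from any published ontology.

```python
# A toy subsumption hierarchy: child class -> parent class.
# (Illustrative labels only, not from a real ontology.)
PARENT = {
    "deciduous needleleaf trees": "forest",
    "evergreen broadleaf trees": "forest",
    "forest": "vegetated land",
}

def generalize(label, target):
    """Return True if `label` equals `target` or transitively specializes it."""
    while label is not None:
        if label == target:
            return True
        label = PARENT.get(label)
    return False

dataset_a = {"cell_17": "deciduous needleleaf trees"}  # finely classified
dataset_b = {"cell_17": "forest"}                      # coarsely classified
# The records can be merged under "forest" because A's label specializes B's.
assert generalize(dataset_a["cell_17"], dataset_b["cell_17"])
```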
Recent research results in this area can be seen in the International Conference on Geospatial Semantics and the Terra Cognita – Directions to the Geospatial Semantic Web workshop at the International Semantic Web Conference.
Societal implications
With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS. GIS can also be misused to distort reality for individual and political gain. It has been argued that the production, distribution, utilization, and representation of geographic information are largely shaped by their social context, and that GIS has the potential to increase citizen trust in government. Other related topics include discussion on copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.
In education
At the end of the 20th century, GIS began to be recognized as tools that could be used in the classroom. The benefits of GIS in education seem focused on developing spatial cognition, but there is not enough published literature or statistical data to establish the concrete scope of the use of GIS in education around the world, although the expansion has been faster in countries whose curricula mention GIS.
GIS seems to provide many advantages in teaching geography because it allows for analysis based on real geographic data and also helps raise research questions from teachers and students in the classroom. It also contributes to improvement in learning by developing spatial and geographical thinking and, in many cases, student motivation.
Courses in GIS are also offered by educational institutions.
In local government
GIS has proven to be an organization-wide, enterprise-level, and enduring technology that continues to change how local government operates. Government agencies have adopted GIS technology as a method to better manage the following areas of government organization:
Economic development departments use interactive GIS mapping tools, aggregated with other data (demographics, labor force, business, industry, talent) along with a database of available commercial sites and buildings in order to attract investment and support existing business. Businesses making location decisions can use the tools to choose communities and sites that best match their criteria for success.
Public safety operations such as emergency operations centers, fire prevention, police and sheriff mobile technology and dispatch, and mapping weather risks.
Parks and recreation departments and their functions in asset inventory, land conservation, land management, and cemetery management
Public works and utilities, tracking water and stormwater drainage, electrical assets, engineering projects, and public transportation assets and trends
Fiber network management for interdepartmental network assets
School analytical and demographic data, asset management, and improvement/expansion planning
Public administration for election data, property records, and zoning/management
The open data initiative is pushing local government to take advantage of technology such as GIS technology, as it encompasses the requirements to fit the open data/open government model of transparency. With open data, local government organizations can implement citizen engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more. The push for open data within government organizations is driving growth in local-government spending on GIS technology and database management.
See also
AM/FM/GIS
Climate Information Service
Comparison of GIS software
Concepts and Techniques in Modern Geography
Dialogue-Assisted Visual Environment for Geoinformation
Distributed GIS
Geodatabase (Esri)
Geomatics
GISCorps
GIS Day
Integrated Geo Systems
List of GIS data sources
List of GIS software
Map database management
Quantitative geography
Technical geography
Tobler's first law of geography
Tobler's second law of geography
Virtual globe
References
Further reading
Bolstad, P. (2019). GIS Fundamentals: A first text on Geographic Information Systems, Sixth Edition. Ann Arbor: XanEdu, 764 pp.
Burrough, P. A. and McDonnell, R. A. (1998). Principles of geographical information systems. Oxford University Press, Oxford, 327 pp.
DeMers, M. (2009). Fundamentals of Geographic Information Systems, 4th Edition. Wiley.
Harvey, Francis (2008). A Primer of GIS, Fundamental geographic and cartographic concepts. The Guilford Press, 31 pp.
Heywood, I., Cornelius, S., and Carver, S. (2006). An Introduction to Geographical Information Systems. Prentice Hall. 3rd edition.
Ott, T. and Swiaczny, F. (2001). Time-integrative GIS: Management and analysis of spatio-temporal data. Berlin / Heidelberg / New York: Springer.
Thurston, J., Poiker, T.K. and J. Patrick Moore. (2003). Integrated Geospatial Technologies: A Guide to GPS, GIS, and Data Logging. Hoboken, New Jersey: Wiley.
External links
Graph theory

In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs, links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.
Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph
In one restricted but very common sense of the term, a graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {{x, y} | x, y ∈ V and x ≠ y}, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected simple graph.
In the edge {x, y}, the vertices x and y are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed.
In one more general sense of the term allowing multiple edges, a graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called links or lines);
φ: E → {{x, y} | x, y ∈ V and x ≠ y}, an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected multigraph.
A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge {x, x} = {x} (for an undirected simple graph) or is incident on {x, x} = {x} (for an undirected multigraph), which is not in {{x, y} | x, y ∈ V and x ≠ y}. To allow loops, the definitions must be expanded. For undirected simple graphs, the definition of E should be modified to E ⊆ {{x, y} | x, y ∈ V}. For undirected multigraphs, the definition of φ should be modified to φ: E → {{x, y} | x, y ∈ V}. To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively.
V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V|, its number of vertices. The size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices.
In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and the maximum size of the graph is n(n − 1)/2.
The edges of an undirected simple graph permitting loops G induce a symmetric homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge {x, y}, its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
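These definitions can be made concrete in a short Python sketch; the example graph below is arbitrary, and edges are stored as frozensets to model unordered pairs.

```python
# A minimal sketch of an undirected simple graph G = (V, E):
# order |V|, size |E|, vertex degree, and the adjacency relation.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 1})}  # unordered pairs

order = len(V)                                        # |V| = 4
size = len(E)                                         # |E| = 3
degree = {v: sum(1 for e in E if v in e) for v in V}  # edges incident to v

def adjacent(x, y):
    """The adjacency relation x ~ y: true when {x, y} is an edge."""
    return frozenset({x, y}) in E

print(order, size, degree[2], adjacent(1, 2))  # 4 3 2 True
```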
Directed graph
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, a directed graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs), which are ordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed simple graph. In set theory and graph theory, Vⁿ denotes the set of n-tuples of elements of V, that is, ordered sequences of n elements that are not necessarily distinct.
In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, a directed graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs);
φ: E → {(x, y) | (x, y) ∈ V² and x ≠ y}, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed multigraph.
A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x) (for a directed simple graph) or is incident on (x, x) (for a directed multigraph), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ V². For directed multigraphs, the definition of φ should be modified to φ: E → V². To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively.
The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
Applications
Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science.
Computer science
Within computer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
Linguistics
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs.
Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.
Physics and chemistry
Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand." In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks; here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules.
Graphs and networks are excellent models to study and understand phase transitions and critical phenomena. Removal of nodes or edges leads to a critical transition where the network breaks into small clusters; this breakdown, studied via percolation theory, is itself a kind of phase transition.
Social sciences
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs. Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Biology
Likewise, graph theory is useful in biology and conservation efforts where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks. Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures.
Graph theory is also used in connectomics; nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.
Mathematics
In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity.
Other topics
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used in programming GPS units and travel-planning search engines that compare flight times and costs.
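As a sketch of how such weighted graphs are used for routing, the following Python code runs Dijkstra's algorithm, a standard shortest-path method, over an edge-weighted graph; the road network and weights are made up for the example.

```python
# A minimal sketch of single-source shortest paths on a weighted graph.
import heapq

def dijkstra(graph, source):
    """graph maps vertex -> list of (neighbor, weight); returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": []}  # toy network
print(dijkstra(roads, "A"))  # {'A': 0, 'C': 2, 'B': 3}
```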
History
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huilier, and represents the beginning of the branch of mathematics known as topology.
More than one century after Euler's paper on the bridges of Königsberg and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition. The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:
"[…] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. […] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. […]" (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig, and published in 1936. Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject", and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.
One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more especially the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.
The autonomous development of topology between 1860 and 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.
The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.
Representation
A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.
Visual: Graph drawing
Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.
There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graph, and other visualizations of the adjacency matrix.
Tabular: Graph data structures
The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.
List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
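The following is a minimal Python sketch of the matrix representations described above: it builds the adjacency, degree, and Laplacian matrices from an edge list, then applies Kirchhoff's theorem (any cofactor of the Laplacian counts the spanning trees). The example graph is arbitrary, and NumPy is the only dependency.

```python
# Adjacency matrix A, degree matrix D, and Laplacian L = D - A
# for a small undirected graph given as an edge list.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # edge list: pairs of vertex indices

A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1                 # symmetric: undirected edges

D = np.diag(A.sum(axis=1))                # degrees on the diagonal
L = D - A                                 # Laplacian matrix

# Kirchhoff's theorem: delete one row and column, take the determinant.
spanning_trees = round(np.linalg.det(L[1:, 1:]))
print(spanning_trees)  # 3 for this graph (a triangle with one pendant vertex)
```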
Problems
Enumeration
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
Subgraphs, induced subgraphs, and minors
A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too.
Finding maximal subgraphs of a certain kind is often an NP-complete problem. For example:
Finding the largest complete subgraph is called the clique problem (NP-complete).
One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.
A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example:
Finding the largest edgeless induced subgraph or independent set is called the independent set problem (NP-complete).
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example, Wagner's Theorem states:
A graph is planar if it contains as a minor neither the complete bipartite graph K3,3 (see the Three-cottage problem) nor the complete graph K5.
A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states:
A graph is planar if it contains as a subdivision neither the complete bipartite graph K3,3 nor the complete graph K5.
Another problem in subdivision containment is the Kelmans–Seymour conjecture:
Every 5-vertex-connected graph that is not planar contains a subdivision of the 5-vertex complete graph K5.
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs. For example:
The reconstruction conjecture
Graph coloring
Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following (a sketch of a simple coloring heuristic follows the list):
Four-color theorem
Strong perfect graph theorem
Erdős–Faber–Lovász conjecture
Total coloring conjecture, also called Behzad's conjecture (unsolved)
List coloring conjecture (unsolved)
Hadwiger conjecture (graph theory) (unsolved)
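For a concrete starting point, this Python sketch implements the standard greedy heuristic for proper vertex coloring. It is not optimal in general, but it never uses more than one color beyond the maximum degree; the example graph is arbitrary.

```python
# Greedy proper vertex coloring: each vertex takes the smallest color index
# not already used by a colored neighbor.
def greedy_coloring(adj):
    """adj maps vertex -> set of neighbors; returns vertex -> color index."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

triangle_plus = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_coloring(triangle_plus))  # e.g. {0: 0, 1: 1, 2: 2, 3: 0}
```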
Subsumption and unification
Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
Route problems
Hamiltonian path problem
Minimum spanning tree
Route inspection problem (also called the "Chinese postman problem")
Seven bridges of Königsberg
Shortest path problem
Steiner tree
Three-cottage problem
Traveling salesman problem (NP-hard)
Network flow
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
Max flow min cut theorem
Visibility problems
Museum guard problem
Covering problems
Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs.
Dominating set problem is the special case of set cover problem where sets are the closed neighborhoods.
Vertex cover problem is the special case of set cover problem where the elements to be covered are the edges.
The original set cover problem, also called hitting set, can be described as a vertex cover in a hypergraph.
Decomposition problems
Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), raises a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having, respectively, 1, 2, 3, ..., n − 1 edges.
Some specific decomposition problems that have been studied include:
Arboricity, a decomposition into as few forests as possible
Cycle double cover, a decomposition into a collection of cycles covering each edge exactly twice
Edge coloring, a decomposition into as few matchings as possible
Graph factorization, a decomposition of a regular graph into regular subgraphs of given degrees
Graph classes
Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
Enumerating the members of a class
Characterizing a class in terms of forbidden substructures
Ascertaining relationships among classes (e.g. does one property of graphs imply another)
Finding efficient algorithms to decide membership in a class
Finding representations for members of a class
See also
Gallery of named graphs
Glossary of graph theory
List of graph theory topics
List of unsolved problems in graph theory
Publications in graph theory
Graph algorithm
Graph theorists
Subareas
Algebraic graph theory
Geometric graph theory
Extremal graph theory
Probabilistic graph theory
Topological graph theory
Graph drawing
Notes
References
External links
Graph theory tutorial
A searchable database of small connected graphs
rocs — a graph theory IDE
The Social Life of Routers — non-technical paper discussing graphs of people and computers
Graph Theory Software — tools to teach and learn graph theory
A list of graph algorithms with references and links to graph library implementations
Online textbooks
Phase Transitions in Combinatorial Optimization Problems, Section 3: Introduction to Graphs (2006) by Hartmann and Weigt
Digraphs: Theory Algorithms and Applications 2007 by Jorgen Bang-Jensen and Gregory Gutin
Graph Theory, by Reinhard Diestel
Gödel's ontological proof

Gödel's ontological proof is a formal argument by the mathematician Kurt Gödel (1906–1978) for the existence of God. The argument is in a line of development that goes back to Anselm of Canterbury (1033–1109). St. Anselm's ontological argument, in its most succinct form, is as follows: "God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist." A more elaborate version was given by Gottfried Leibniz (1646–1716); this is the version that Gödel studied and attempted to clarify with his ontological argument.
The argument uses modal logic, which deals with statements about what is necessarily true or possibly true. From the axioms that a property can only be positive if not-having-it is not positive, and that properties implied by a positive property must all also be themselves positive, it concludes that (since positive properties do not involve contradiction) for any positive property, there is possibly a being that instantiates it. It defines God as the being instantiating all positive properties. After defining what it means for a property to be "the essence" of something (the one property that necessarily implies all its other properties), it concludes that God's instantiation of all positive properties must be the essence of God. After defining a property of "necessary existence" and taking it as an axiom that it is positive, the argument concludes that, since God must have this property, God must exist necessarily.
History
Gödel left a fourteen-point outline of his philosophical beliefs in his papers. Points relevant to the ontological proof include:
4. There are other worlds and rational beings of a different and higher kind.
5. The world in which we live is not the only one in which we shall live or have lived.
13. There is a scientific (exact) philosophy and theology, which deals with concepts of the highest abstractness; and this is also most highly fruitful for science.
14. Religions are, for the most part, bad—but religion is not.
The first version of the ontological proof in Gödel's papers is dated "around 1941". Gödel is not known to have told anyone about his work on the proof until 1970, when he thought he was dying. In February, he allowed Dana Scott to copy out a version of the proof, which circulated privately. In August 1970, Gödel told Oskar Morgenstern that he was "satisfied" with the proof, but Morgenstern recorded in his diary entry for 29 August 1970, that Gödel would not publish because he was afraid that others might think "that he actually believes in God, whereas he is only engaged in a logical investigation (that is, in showing that such a proof with classical assumptions (completeness, etc.) correspondingly axiomatized, is possible)." Gödel died January 14, 1978. Another version, slightly different from Scott's, was found in his papers. It was finally published, together with Scott's version, in 1987.
In letters to his mother, who was not a churchgoer and had raised Kurt and his brother as freethinkers, Gödel argued at length for a belief in an afterlife. He did the same in an interview with a skeptical Hao Wang, who said: "I expressed my doubts as G spoke [...] Gödel smiled as he replied to my questions, obviously aware that his answers were not convincing me." Wang reports that Gödel's wife, Adele, two days after Gödel's death, told Wang that "Gödel, although he did not go to church, was religious and read the Bible in bed every Sunday morning." In an unmailed answer to a questionnaire, Gödel described his religion as "baptized Lutheran (but not member of any religious congregation). My belief is theistic, not pantheistic, following Leibniz rather than Spinoza."
Outline
The proof uses modal logic, which distinguishes between necessary truths and contingent truths. In the most common semantics for modal logic, many "possible worlds" are considered. A truth is necessary if it is true in all possible worlds. By contrast, if a statement happens to be true in our world, but is false in another world, then it is a contingent truth. A statement that is true in some world (not necessarily our own) is called a possible truth.
Furthermore, the proof uses higher-order (modal) logic because the definition of God employs an explicit quantification over properties.
First, Gödel axiomatizes the notion of a "positive property": for each property φ, either φ or its negation ¬φ must be positive, but not both (axiom 2). If a positive property φ implies a property ψ in each possible world, then ψ is positive, too (axiom 1). Gödel then argues that each positive property is "possibly exemplified", i.e. applies at least to some object in some world (theorem 1). Defining an object to be Godlike if it has all positive properties (definition 1), and requiring that property to be positive itself (axiom 3), Gödel shows that in some possible world a Godlike object exists (theorem 2), called "God" in the following. Gödel proceeds to prove that a Godlike object exists in every possible world.
To this end, he defines essences: if x is an object in some world, then a property φ is said to be an essence of x if φ(x) is true in that world and if φ necessarily entails all other properties that x has in that world (definition 2). Requiring positive properties being positive in every possible world (axiom 4), Gödel can show that Godlikeness is an essence of a Godlike object (theorem 3). Now, x is said to exist necessarily if, for every essence φ of x, there is an element y with property φ in every possible world (definition 3). Axiom 5 requires necessary existence to be a positive property.
Hence, it must follow from Godlikeness. Moreover, Godlikeness is an essence of God, since it entails all positive properties, and any non-positive property is the negation of some positive property, so God cannot have any non-positive properties. Since necessary existence is also a positive property (axiom 5), it must be a property of every Godlike object, as every Godlike object has all the positive properties (definition 1). Since any Godlike object is necessarily existent, it follows that any Godlike object in one world is a Godlike object in all worlds, by the definition of necessary existence. Given the existence of a Godlike object in one world, proven above, we may conclude that there is a Godlike object in every possible world, as required (theorem 4). Besides axioms 1–5 and definitions 1–3, a few other axioms from modal logic were tacitly used in the proof.
From these hypotheses, it is also possible to prove that there is only one God in each world by Leibniz's law, the identity of indiscernibles: two or more objects are identical (the same) if they have all their properties in common, and so, there would only be one object in each world that possesses property G. Gödel did not attempt to do so however, as he purposely limited his proof to the issue of existence, rather than uniqueness.
Argument
The following is the original argument in symbolic notation, then an explanation of each individual symbol used, and then a translation into English of the full argument.
Original formal argument
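The following LaTeX rendition is a reconstruction based on Dana Scott's version of the axioms, definitions, and theorems, ordered to match the English translation below; treat the exact notation as a reconstruction rather than a facsimile of Gödel's manuscript.

```latex
\begin{align*}
\text{Ax. 1.} &\quad \bigl(P(\varphi) \wedge \Box\,\forall x\,(\varphi(x) \rightarrow \psi(x))\bigr) \rightarrow P(\psi)\\
\text{Ax. 2.} &\quad P(\neg\varphi) \leftrightarrow \neg P(\varphi)\\
\text{Th. 1.} &\quad P(\varphi) \rightarrow \Diamond\,\exists x\,\varphi(x)\\
\text{Df. 1.} &\quad G(x) \iff \forall\varphi\,(P(\varphi) \rightarrow \varphi(x))\\
\text{Ax. 3.} &\quad P(G)\\
\text{Th. 2.} &\quad \Diamond\,\exists x\,G(x)\\
\text{Df. 2.} &\quad \varphi \text{ ess } x \iff \varphi(x) \wedge \forall\psi\,\bigl(\psi(x) \rightarrow \Box\,\forall y\,(\varphi(y) \rightarrow \psi(y))\bigr)\\
\text{Ax. 4.} &\quad P(\varphi) \rightarrow \Box\,P(\varphi)\\
\text{Th. 3.} &\quad G(x) \rightarrow G \text{ ess } x\\
\text{Df. 3.} &\quad \mathit{NE}(x) \iff \forall\varphi\,(\varphi \text{ ess } x \rightarrow \Box\,\exists y\,\varphi(y))\\
\text{Ax. 5.} &\quad P(\mathit{NE})\\
\text{Th. 4.} &\quad \Box\,\exists x\,G(x)
\end{align*}
```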
Translation of individual symbols
Common notation in symbolic logic:
: "Object has property " (notation for first-order predicates)
: "implies" (material implication)
: "For every ", or "for all " (universal quantifier)
: "There exists an ", or "for some " (existential quantifier)
: "The negation of " (i.e., not )
Modal operators (used in modal logic):
: "It is possible that...", or, "in at least one possible world, it is true that..." (modal operator for possibility)
: "It is necessary that...", or, "in all possible worlds, it is true that..." (modal operator for necessity)
Primitive predicate in this argument:
: "Property is positive" (since it applies to properties, "being positive" is a second-order property)
Derived predicates (defined in terms of other predicates within the argument):
: " is God-like". (Definition 1)
: " is an essential property of " (Definition 2)
: "Object exists necessarily" (Definition 3)
Translation of full argument
Possible-worlds readings of modal terms have been added in parentheses, i.e., "in all possible worlds" for "necessarily" and "in at least one possible world" for "possibly". For completeness, "in the actual world" should be added to all sentences that were said without "necessarily" or "possibly", but this has been skipped since it might make the text difficult to read.
Axiom 1: If φ is a positive property, and if it is necessarily true (true in all possible worlds) that every object with property φ also has property ψ, then ψ is also a positive property.
Axiom 2: The negation ¬φ of a property φ is positive if, and only if, φ is not positive.
Theorem 1: If a property φ is positive, then it is possible that there exists an object x that has this property (in at least one possible world, there exists an object that has this property).
Definition 1: An object x is God-like if, and only if, x has all positive properties.
Axiom 3: The property G of being God-like is itself a positive property.
Theorem 2: It is possible that there exists a God-like object (in at least one possible world, there exists a God-like object x).
Definition 2: A property φ is an essential property of an object x if x has property φ, and every property that x has necessarily (in all possible worlds) and generally (for all objects) follows from φ.
Axiom 4: If a property φ is positive, then it is necessarily positive (positive in all possible worlds).
Theorem 3: If x is God-like, then being God-like is an essential property of x.
Definition 3: An object x "exists necessarily" if each of its essential properties φ applies, in each possible world, to some object y.
Axiom 5: "Necessary existence" is a positive property.
Theorem 4: It is necessarily true (true in all possible worlds) that a God-like object exists.
Criticism
Most criticism of Gödel's proof is aimed at its axioms: as with any proof in any logical system, if the axioms the proof depends on are doubted, then the conclusions can be doubted. This applies particularly to Gödel's proof because it rests on five axioms, some of which are considered questionable. A proof does not guarantee that the conclusion is correct, but rather that, by accepting the axioms, the conclusion follows logically.
Many philosophers have called the axioms into question. The first layer of criticism is simply that there are no arguments presented that give reasons why the axioms are true. A second layer is that these particular axioms lead to unwelcome conclusions. This line of thought was argued by Jordan Howard Sobel, showing that if the axioms are accepted, they lead to a "modal collapse" where every statement that is true is necessarily true, i.e. the sets of necessary, of contingent, and of possible truths all coincide (provided there are accessible worlds at all). According to Robert Koons, Sobel suggested in a 2005 conference paper that Gödel might have welcomed modal collapse.
There are suggested amendments to the proof, presented by C. Anthony Anderson, but argued to be refutable by Anderson and Michael Gettings. Sobel's proof of modal collapse has been questioned by Koons, but a counter-defence by Sobel has been given.
Gödel's proof has also been questioned by Graham Oppy, asking whether many other almost-gods would also be "proven" through Gödel's axioms. This counter-argument has been questioned by Gettings, who agrees that the axioms might be questioned, but disagrees that Oppy's particular counter-example can be shown from Gödel's axioms.
Religious scholar Fr. Robert J. Spitzer accepted Gödel's proof, calling it "an improvement over the Anselmian Ontological Argument (which does not work)."
There are, however, many more criticisms, most of them focusing on the question of whether these axioms must be rejected to avoid odd conclusions. The broader criticism is that even if the axioms cannot be shown to be false, that does not mean that they are true. Hilbert's famous remark about interchangeability of the primitives' names applies to those in Gödel's ontological axioms ("positive", "god-like", "essence") as well as to those in Hilbert's geometry axioms ("point", "line", "plane"). According to André Fuhrmann (2005) it remains to show that the dazzling notion prescribed by traditions and often believed to be essentially mysterious satisfies Gödel's axioms. This is not a mathematical, but a theological task. It is this task which decides which religion's god has been proven to exist.
Computationally verified versions
Christoph Benzmüller and Bruno Woltzenlogel-Paleo formalized Gödel's proof to a level that is suitable for automated theorem proving or at least computational verification via proof assistants. The effort made headlines in German newspapers. According to the authors of this effort, they were inspired by Melvin Fitting's book.
In 2014, they computationally verified Gödel's proof (in the above version). They also proved that this version's axioms are consistent, but imply modal collapse, thus confirming Sobel's 1987 argument. In the same paper, they suspected Gödel's original version of the axioms to be inconsistent, as they failed to prove their consistency.
In 2016, they gave an automated proof that the original version implies □⊥ ("necessarily false"), i.e., is inconsistent in every modal logic with a reflexive or symmetric accessibility relation. Moreover, they gave an argument that this version is inconsistent in every logic at all, but failed to duplicate it by automated provers. However, they were able to verify Melvin Fitting's reformulation of the argument and guarantee its consistency.
In literature
A humorous variant of Gödel's ontological proof is mentioned in Quentin Canterel's novel The Jolly Coroner.
The proof is also mentioned in the TV series Hand of God.
Jeffrey Kegler's 2007 novel The God Proof depicts the (fictional) rediscovery of Gödel's lost notebook about the ontological proof.
See also
Philosophy of religion
Notes
References
Further reading
Frode Alfson Bjørdal, "Understanding Gödel's Ontological Argument", in T. Childers (ed.), The Logica Yearbook 1998, Prague 1999, 214–217.
Frode Alfson Bjørdal, "All Properties are Divine, or God Exists", in Logic and Logical Philosophy, Vol. 27 No. 3, 2018, pp. 329–350.
Bromand, Joachim. "Gödels ontologischer Beweis und andere modallogische Gottesbeweise", in J. Bromand und G. Kreis (Hg.), Gottesbeweise von Anselm bis Gödel, Berlin 2011, 381–491.
Melvin Fitting, "Types, Tableaus, and Godel's God" Publisher: Dordrecht Kluwer Academic, 2002, ,
— See Chapter "Ontological Proof", pp. 403–404, and Appendix B "Texts Relating to the Ontological Proof", pp. 429–437.
Goldman, Randolph R. "Gödel's Ontological Argument", PhD Diss., University of California, Berkeley 2000.
Hazen, A. P. "On Gödel's Ontological Proof", Australasian Journal of Philosophy, Vol. 76, No 3, pp. 361–377, September 1998
External links
Annotated bibliography of studies on Gödel's Ontological Argument
Thomas Gawlick, Was sind und was sollen mathematische Gottesbeweise?, Jan. 2012 — shows Gödel's original proof manuscript on p. 2-3
A Divine Consistency Proof for Mathematics — A submitted work by Harvey Friedman showing that if God exists (in the sense of Gödel), then Mathematics, as formalized by the usual ZFC axioms, is consistent.
Arguments for the existence of God
Modal logic
Ontological proof
Genetic programming

Genetic programming (GP) is an evolutionary algorithm, an artificial intelligence technique mimicking natural evolution, which operates on a population of programs. It applies the genetic operators selection according to a predefined fitness measure, mutation and crossover.
The crossover operation involves swapping specified parts of selected pairs (parents) to produce new and different offspring that become part of the new generation of programs. Some programs not selected for reproduction are copied from the current generation to the new generation. Mutation involves substitution of some random part of a program with some other random part of a program. Then the selection and other operations are recursively applied to the new generation of programs.
Typically, members of each new generation are on average more fit than the members of the previous generation, and the best-of-generation program is often better than the best-of-generation programs from previous generations. Termination of the evolution usually occurs when some individual program reaches a predefined proficiency or fitness level.
It may and often does happen that a particular run of the algorithm results in premature convergence to some local maximum which is not a globally optimal or even good solution. Multiple runs (dozens to hundreds) are usually necessary to produce a very good result. It may also be necessary to have a large starting population size and variability of the individuals to avoid pathologies.
History
The first record of the proposal to evolve programs is probably that of Alan Turing in 1950. There was a gap of 25 years before the publication of John Holland's 'Adaptation in Natural and Artificial Systems' laid out the theoretical and empirical foundations of the science. In 1981, Richard Forsyth demonstrated the successful evolution of small programs, represented as trees, to perform classification of crime scene evidence for the UK Home Office.
Although the idea of evolving programs, initially in the computer language Lisp, was current amongst John Holland's students, it was not until they organised the first Genetic Algorithms (GA) conference in Pittsburgh that Nichael Cramer published evolved programs in two specially designed languages, which included the first statement of modern "tree-based" Genetic Programming (that is, procedural languages organized in tree-based structures and operated on by suitably defined GA-operators). In 1988, John Koza (also a PhD student of John Holland) patented his invention of a GA for program evolution. This was followed by publication in the International Joint Conference on Artificial Intelligence IJCAI-89.
Koza followed this with 205 publications on "Genetic Programming" (GP), a name coined by David Goldberg, also a PhD student of John Holland. However, it is the series of 4 books by Koza, starting in 1992 with accompanying videos, that really established GP. Subsequently, there was an enormous expansion of the number of publications, with the Genetic Programming Bibliography surpassing 10,000 entries. In 2010, Koza listed 77 results where Genetic Programming was human-competitive.
In 1996, Koza started the annual Genetic Programming conference which was followed in 1998 by the annual EuroGP conference, and the first book in a GP series edited by Koza. 1998 also saw the first GP textbook. GP continued to flourish, leading to the first specialist GP journal and three years later (2003) the annual Genetic Programming Theory and Practice (GPTP) workshop was established by Rick Riolo. Genetic Programming papers continue to be published at a diversity of conferences and associated journals. Today there are nineteen GP books including several for students.
Foundational work in GP
Early work that set the stage for current genetic programming research topics and applications is diverse, and includes software synthesis and repair, predictive modeling, data mining, financial modeling, soft sensors, design, and image processing. Applications in some areas, such as design, often make use of intermediate representations, such as Fred Gruau's cellular encoding. Industrial uptake has been significant in several areas including finance, the chemical industry, bioinformatics and the steel industry.
Methods
Program representation
GP evolves computer programs, traditionally represented in memory as tree structures. Trees can be easily evaluated in a recursive manner. Every internal node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. Thus traditionally GP favors the use of programming languages that naturally embody tree structures (for example, Lisp; other functional programming languages are also suitable).
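A minimal Python sketch of this representation uses nested tuples standing in for Lisp-style trees and shows how such a program tree is evaluated recursively; the example expression is arbitrary.

```python
# Expression trees as nested tuples: internal nodes are (operator, left, right),
# terminals are variable names or constants. The tree below encodes x * (x + 1).
import operator

OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def evaluate(node, env):
    if isinstance(node, tuple):            # internal node: apply the operator
        op, left, right = node
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(node, node)             # terminal: look up variable, else constant

tree = ("*", "x", ("+", "x", 1))
print(evaluate(tree, {"x": 3}))  # 12
```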
Non-tree representations have been suggested and successfully implemented, such as linear genetic programming which perhaps suits the more traditional imperative languages. The commercial GP software Discipulus uses automatic induction of binary machine code ("AIM") to achieve better performance. μGP uses directed multigraphs to generate programs that fully exploit the syntax of a given assembly language. Multi expression programming uses Three-address code for encoding solutions. Other program representations on which significant research and development have been conducted include programs for stack-based virtual machines, and sequences of integers that are mapped to arbitrary programming languages via grammars. Cartesian genetic programming is another form of GP, which uses a graph representation instead of the usual tree based representation to encode computer programs.
Most representations have structurally noneffective code (introns). Such non-coding genes may seem to be useless because they have no effect on the performance of any one individual. However, they alter the probabilities of generating different offspring under the variation operators, and thus alter the individual's variational properties.
Experiments seem to show faster convergence when using program representations that allow such non-coding genes, compared to program representations that do not have any non-coding genes. Instantiations may have both trees with introns and those without; the latter are called canonical trees. Special canonical crossover operators are introduced that maintain the canonical structure of parents in their children.
Initialisation
The methods for creation of the initial population include:
Grow creates individuals one at a time. Each GP tree is built starting from the root, creating function nodes (which have children) as well as terminal nodes, up to a certain maximum depth.
Full is similar to Grow. The difference is that all branches in a tree are of the same predetermined depth.
Ramped half-and-half creates a population consisting of n parts and a maximum depth of n + 1 for its trees. The first part has a maximum depth of 2, the second of 3, and so on, up to the n-th part with maximum depth n + 1. Half of every part is created by Grow, while the other half is created by Full.
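A sketch of the three methods in Python, reusing the illustrative Node class above. The FUNCTIONS arity table, the TERMINALS list, and the 0.3 terminal probability in Grow are assumptions chosen for the example, not prescribed values.

import random

FUNCTIONS = {"+": 2, "*": 2, "/": 2}   # operator name -> arity
TERMINALS = ["x", 1, 2, 3]

def grow_tree(max_depth):
    """Grow: a node may become a terminal at any level up to the depth limit."""
    if max_depth == 0 or random.random() < 0.3:
        return Node(random.choice(TERMINALS))
    op = random.choice(list(FUNCTIONS))
    return Node(op, [grow_tree(max_depth - 1) for _ in range(FUNCTIONS[op])])

def full_tree(depth):
    """Full: terminals appear only at the maximum depth, so every branch
    reaches the same predetermined depth."""
    if depth == 0:
        return Node(random.choice(TERMINALS))
    op = random.choice(list(FUNCTIONS))
    return Node(op, [full_tree(depth - 1) for _ in range(FUNCTIONS[op])])

def ramped_half_and_half(pop_size, max_depth):
    """Equal-sized parts at depths 2..max_depth, each half Grow, half Full."""
    population = []
    depths = list(range(2, max_depth + 1))
    per_depth = pop_size // len(depths)
    for d in depths:
        population += [grow_tree(d) for _ in range(per_depth // 2)]
        population += [full_tree(d) for _ in range(per_depth - per_depth // 2)]
    return population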
Selection
Selection is the process whereby certain individuals are selected from the current generation to serve as parents for the next generation. Individuals are selected probabilistically, such that better-performing individuals have a higher chance of being selected. The most commonly used selection method in GP is tournament selection, although other methods such as fitness-proportionate selection, lexicase selection, and others have been shown to perform better on many GP problems.
Elitism, which involves seeding the next generation with the best individual (or best n individuals) from the current generation, is a technique sometimes employed to avoid regression.
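A minimal sketch of tournament selection with elitism follows. It assumes a fitness function returning a score to maximise and a make_child operator (for example, crossover followed by mutation); these names, and the default tournament size of 7, are illustrative assumptions.

import random

def tournament_select(population, fitness, k=7):
    """Pick k individuals uniformly at random; the fittest of them wins.
    Assumes the population has at least k members."""
    return max(random.sample(population, k), key=fitness)

def next_generation(population, fitness, make_child, n_elite=1):
    """Copy the best n_elite individuals unchanged into the next
    generation, then fill the rest with children of tournament-selected
    parents."""
    ranked = sorted(population, key=fitness, reverse=True)
    new_population = ranked[:n_elite]
    while len(new_population) < len(population):
        mother = tournament_select(population, fitness)
        father = tournament_select(population, fitness)
        new_population.append(make_child(mother, father))
    return new_population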
Crossover
In Genetic Programming, two fit individuals are chosen from the population to be parents for one or two children. In tree genetic programming, these parents are represented as inverted Lisp-like trees, with their root nodes at the top. In subtree crossover, a subtree is randomly chosen in each parent (highlighted with yellow in the animation). In the root-donating parent (on the left in the animation), the chosen subtree is removed and replaced with a copy of the randomly chosen subtree from the other parent, to give a new child tree.
Sometimes two child crossover is used, in which case the removed subtree (in the animation on the left) is not simply deleted but is copied to a copy of the second parent (here on the right) replacing (in the copy) its randomly chosen subtree. Thus this type of subtree crossover takes two fit trees and generates two child trees.
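In Python, one-child subtree crossover on the illustrative trees might look as follows; the two-child variant would additionally graft the removed subtree into a copy of the second parent. The all_nodes helper, which records each node together with the slot it occupies in its parent, is an assumption of this sketch.

import copy
import random

def all_nodes(node, parent=None, index=None, acc=None):
    """Collect (parent, child_index, node) triples for every node in the
    tree; the root has parent None."""
    if acc is None:
        acc = []
    acc.append((parent, index, node))
    for i, child in enumerate(node.children):
        all_nodes(child, node, i, acc)
    return acc

def subtree_crossover(parent1, parent2):
    """Replace a random subtree of a copy of parent1 with a copy of a
    random subtree of parent2, yielding one child."""
    child = copy.deepcopy(parent1)
    slot_parent, i, _ = random.choice(all_nodes(child))
    donated = copy.deepcopy(random.choice(all_nodes(parent2))[2])
    if slot_parent is None:      # the root itself was chosen
        return donated
    slot_parent.children[i] = donated
    return child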
Replication
Some individuals selected according to fitness criteria do not participate in crossover, but are copied into the next generation, akin to asexual reproduction in the natural world. They may be further subject to mutation.
Mutation
There are many types of mutation in genetic programming. They start from a fit, syntactically correct parent and aim to randomly create a syntactically correct child. In subtree mutation (shown in the animation), a subtree is randomly chosen (highlighted by yellow), removed, and replaced by a randomly generated subtree.
Other mutation operators select a leaf (external node) of the tree and replace it with a randomly chosen leaf. Another mutation is to select at random a function (internal node) and replace it with another function of the same arity (number of inputs). Hoist mutation randomly chooses a subtree and replaces it with a subtree within itself; hoist mutation is therefore guaranteed to make the child smaller. Leaf replacement and same-arity function replacement ensure the child is the same size as the parent, whereas subtree mutation (as in the animation) may, depending upon the function and terminal sets, have a bias to either increase or decrease the tree size. Other subtree-based mutations try to carefully control the size of the replacement subtree and thus the size of the child tree.
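A sketch of subtree mutation in the same vein, reusing grow_tree and all_nodes from the earlier sketches; the depth of the replacement subtree is an arbitrary choice here.

import copy
import random

def subtree_mutation(parent, replacement_depth=3):
    """Replace a randomly chosen subtree of a copy of the parent with a
    freshly grown random subtree."""
    child = copy.deepcopy(parent)
    slot_parent, i, _ = random.choice(all_nodes(child))
    replacement = grow_tree(replacement_depth)
    if slot_parent is None:      # the root was chosen: replace the whole tree
        return replacement
    slot_parent.children[i] = replacement
    return child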
Similarly there are many types of linear genetic programming mutation, each of which tries to ensure the mutated child is still syntactically correct.
Applications
GP has been successfully used as an automatic programming tool, a machine learning tool and an automatic problem-solving engine. GP is especially useful in domains where the exact form of the solution is not known in advance or an approximate solution is acceptable (possibly because finding the exact solution is very difficult). Some of the applications of GP are curve fitting, data modeling, symbolic regression, feature selection, classification, etc. John R. Koza mentions 76 instances where Genetic Programming has been able to produce results that are competitive with human-produced results (called human-competitive results). Since 2004, the annual Genetic and Evolutionary Computation Conference (GECCO) has held the Human Competitive Awards (called the Humies), a competition in which cash awards are presented to human-competitive results produced by any form of genetic and evolutionary computation. GP has won many awards in this competition over the years.
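For a task such as symbolic regression, the fitness of a tree can be measured against sampled data. The following sketch, under the same illustrative representation as above, uses negated mean squared error so that higher fitness is better; the target function and sample range are arbitrary choices.

def make_fitness(xs, ys):
    """Return a fitness function scoring a tree by negated mean squared
    error on the sample points; numerically broken programs score -inf."""
    def fitness(tree):
        try:
            err = sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys))
        except (OverflowError, ValueError):
            return float("-inf")
        return -err / len(xs)
    return fitness

xs = [i / 10 for i in range(-20, 21)]
ys = [x * x + 3 for x in xs]        # target function: x^2 + 3
fitness = make_fitness(xs, ys)      # usable with tournament_select above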
Meta-genetic programming
Meta-genetic programming is the proposed meta-learning technique of evolving a genetic programming system using genetic programming itself. It suggests that chromosomes, crossover, and mutation were themselves evolved in nature; therefore, like their real-life counterparts, they should be allowed to change on their own rather than being determined by a human programmer. Meta-GP was formally proposed by Jürgen Schmidhuber in 1987. Doug Lenat's Eurisko is an earlier effort that may use the same technique. It is a recursive but terminating algorithm, allowing it to avoid infinite recursion. In the "autoconstructive evolution" approach to meta-genetic programming, the methods for the production and variation of offspring are encoded within the evolving programs themselves, and programs are executed to produce new programs to be added to the population.
Critics of this idea often say this approach is overly broad in scope. However, it might be possible to constrain the fitness criterion onto a general class of results, and so obtain an evolved GP that would more efficiently produce results for sub-classes. This might take the form of a meta evolved GP for producing human walking algorithms which is then used to evolve human running, jumping, etc. The fitness criterion applied to the meta GP would simply be one of efficiency.
See also
Bio-inspired computing
Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
Evolutionary image processing
Fitness approximation
Genetic improvement
Genetic representation
Grammatical evolution
Inductive programming
Linear genetic programming
Multi expression programming
Propagation of schema
References
External links
Aymen S Saket & Mark C Sinclair
Genetic Programming and Evolvable Machines, a journal
Evo2 for genetic programming
GP bibliography
The Hitch-Hiker's Guide to Evolutionary Computation
Riccardo Poli, William B. Langdon, Nicholas F. McPhee, John R. Koza, "A Field Guide to Genetic Programming" (2008)
Genetic Programming, a community maintained resource | Genetic programming | [
"Biology"
] | 2,442 | [
"Genetics techniques",
"Genetic programming"
] |
12,439 | https://en.wikipedia.org/wiki/Guanine | Guanine () (symbol G or Gua) is one of the four main nucleotide bases found in the nucleic acids DNA and RNA, the others being adenine, cytosine, and thymine (uracil in RNA). In DNA, guanine is paired with cytosine. The guanine nucleoside is called guanosine.
With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. This unsaturated arrangement means the bicyclic molecule is planar.
Properties
Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. Guanine has two tautomeric forms, the major keto form (see figures) and rare enol form.
It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen bond donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors. Guanine has the C-6 carbonyl group that acts as the hydrogen bond acceptor, while a group at N-1 and the amino group at C-2 act as the hydrogen bond donors.
Guanine can be hydrolyzed with strong acid to glycine, ammonia, carbon dioxide, and carbon monoxide. First, guanine gets deaminated to become xanthine. Guanine oxidizes more readily than adenine, the other purine-derivative base in DNA. Its high melting point of 350 °C reflects the intermolecular hydrogen bonding between the oxo and amino groups in the molecules in the crystal. Because of this intermolecular bonding, guanine is relatively insoluble in water, but it is soluble in dilute acids and bases.
History
The first isolation of guanine was reported in 1844 by the German chemist (1819–1885), who obtained it as a mineral formed from the excreta of sea birds, which is known as guano and which was used as a source of fertilizer; guanine was named in 1846. Between 1882 and 1906, Emil Fischer determined the structure and also showed that uric acid can be converted to guanine.
Synthesis
Trace amounts of guanine form by the polymerization of ammonium cyanide (). Two experiments conducted by Levy et al. showed that heating 10 mol·L−1 at 80 °C for 24 hours gave a yield of 0.0007%, while using 0.1 mol·L−1 frozen at −20 °C for 25 years gave a 0.0035% yield. These results indicate guanine could arise in frozen regions of the primitive earth. In 1984, Yuasa reported a 0.00017% yield of guanine after the electrical discharge of , , , and 50 mL of water, followed by a subsequent acid hydrolysis. However, it is unknown whether the guanine detected was simply a contaminant of the reaction.
10NH3 + 2CH4 + 4C2H6 + 2H2O → 2C5H8N5O (guanine) + 25H2
A Fischer–Tropsch synthesis can also be used to form guanine, along with adenine, uracil, and thymine. Heating an equimolar gas mixture of CO, H2, and NH3 to 700 °C for 15 to 24 minutes, followed by quick cooling and then sustained reheating to 100 to 200 °C for 16 to 44 hours with an alumina catalyst, yielded guanine and uracil:
10CO + H2 + 10NH3 → 2C5H8N5O (guanine) + 8H2O
Another possible abiotic route was explored by quenching a high-temperature plasma of a 90% N2–10% CO–H2O gas mixture.
Traube's synthesis involves heating 2,4,5-triamino-1,6-dihydro-6-oxypyrimidine (as the sulfate) with formic acid for several hours.
Biosynthesis
Guanine is not synthesized de novo. Instead, it is split from the more complex molecule guanosine by the enzyme guanosine phosphorylase:
guanosine + phosphate ⇌ guanine + alpha-D-ribose 1-phosphate
Guanine nucleotides, however, can be synthesized de novo, with inosine monophosphate dehydrogenase as the rate-limiting enzyme.
Other occurrences and biological uses
The word guanine derives from the Spanish loanword guano ('bird/bat droppings'), which itself is from the Quechua word wanu, meaning 'dung'. As the Oxford English Dictionary notes, guanine is "A white amorphous substance obtained abundantly from guano, forming a constituent of the excrement of birds".
In 1656 in Paris, a Mr. Jaquin extracted from the scales of the fish Alburnus alburnus so-called "pearl essence", which is crystalline guanine. In the cosmetics industry, crystalline guanine is used as an additive to various products (e.g., shampoos), where it provides a pearly iridescent effect. It is also used in metallic paints and simulated pearls and plastics. It provides shimmering luster to eye shadow and nail polish. Facial treatments using the droppings, or guano, from Japanese nightingales have been used in Japan and elsewhere, because the guanine in the droppings makes the skin look paler. Guanine crystals are rhombic platelets composed of multiple transparent layers, but they have a high index of refraction that partially reflects and transmits light from layer to layer, thus producing a pearly luster. It can be applied by spray, painting, or dipping. It may irritate the eyes. Its alternatives are mica, faux pearl (from ground shells), and aluminium and bronze particles.
Guanine has a wide variety of biological uses, ranging in both complexity and versatility; these include camouflage, display, and vision, among other purposes.
Spiders, scorpions, and some amphibians convert ammonia, as a product of protein metabolism in the cells, to guanine, as it can be excreted with minimal water loss.
Guanine is also found in specialized skin cells of fish called iridocytes (e.g., the sturgeon), as well as being present in the reflective deposits of the eyes of deep-sea fish and some reptiles, such as crocodiles and chameleons.
On 8 August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA and RNA (guanine, adenine and related organic molecules) may have been formed extra-terrestrially in outer space.
See also
Cytosine
Guanine deaminase
References
External links
Guanine MS Spectrum
Guanine at chemicalland21.com
Nucleobases
Purines
Cosmetics chemicals
Organic minerals | Guanine | [
"Chemistry"
] | 1,520 | [
"Organic compounds",
"Organic minerals"
] |
12,448 | https://en.wikipedia.org/wiki/Ganges | The Ganges ( ; in India: Ganga, ; in Bangladesh: Padma, ) is a trans-boundary river of Asia which flows through India and Bangladesh. The river rises in the western Himalayas in the Indian state of Uttarakhand. It flows south and east through the Gangetic plain of North India, receiving the right-bank tributary, the Yamuna, which also rises in the western Indian Himalayas, and several left-bank tributaries from Nepal that account for the bulk of its flow. In West Bengal state, India, a feeder canal taking off from its right bank diverts 50% of its flow southwards, artificially connecting it to the Hooghly River. The Ganges continues into Bangladesh, its name changing to the Padma. It is then joined by the Jamuna, the lower stream of the Brahmaputra, and eventually the Meghna, forming the major estuary of the Ganges Delta, and emptying into the Bay of Bengal. The Ganges–Brahmaputra–Meghna system is the second-largest river on earth by discharge.
The main stem of the Ganges begins at the town of Devprayag, at the confluence of the Alaknanda, which is the source stream in hydrology on account of its greater length, and the Bhagirathi, which is considered the source stream in Hindu mythology.
The Ganges is a lifeline to hundreds of millions of people who live in its basin and depend on it for their daily needs. It has been important historically, with many former provincial or imperial capitals such as Pataliputra, Kannauj, Sonargaon, Dhaka, Bikrampur, Kara, Munger, Kashi, Patna, Hajipur, Kanpur, Delhi, Bhagalpur, Murshidabad, Baharampur, Kampilya, and Kolkata located on its banks or those of its tributaries and connected waterways. The river is home to approximately 140 species of fish, 90 species of amphibians, and also reptiles and mammals, including critically endangered species such as the gharial and South Asian river dolphin. The Ganges is the most sacred river to Hindus. It is worshipped as the goddess Ganga in Hinduism.
The Ganges is threatened by severe pollution. This not only poses a danger to humans but also to many species of animals. The levels of fecal coliform bacteria from human waste in the river near Varanasi are more than 100 times the Indian government's official limit. The Ganga Action Plan, an environmental initiative to clean up the river, has been considered a failure which is variously attributed to corruption, a lack of will in the government, poor technical expertise, poor environmental planning, and a lack of support from religious authorities.
Course
The upper phase of the river Ganges begins at the confluence of the Bhagirathi and Alaknanda rivers in the town of Devprayag in the Garhwal division of the Indian state of Uttarakhand. The Bhagirathi is considered to be the source in Hindu culture and mythology, although the Alaknanda is longer, and therefore, hydrologically the source stream. The headwaters of the Alaknanda are formed by snow melt from peaks such as Nanda Devi, Trisul, and Kamet. The Bhagirathi rises at the foot of Gangotri Glacier at Gomukh, at an elevation of and was mythologically referred to as residing in the matted locks of Shiva; symbolically Tapovan, which is a meadow of ethereal beauty at the feet of Mount Shivling, just away.
Although many small streams comprise the headwaters of the Ganges, the six longest and their five confluences are considered sacred. The six headstreams are the Alaknanda, Dhauliganga, Nandakini, Pindar, Mandakini and Bhagirathi. Their confluences, known as the Panch Prayag, are all along the Alaknanda. They are, in downstream order, Vishnuprayag, where the Dhauliganga joins the Alaknanda; Nandprayag, where the Nandakini joins; Karnaprayag, where the Pindar joins; Rudraprayag, where the Mandakini joins; and finally, Devprayag, where the Bhagirathi joins the Alaknanda to form the Ganges.
After flowing for through its narrow Himalayan valley, the Ganges emerges from the mountains at Rishikesh, then debouches onto the Gangetic Plain at the pilgrimage town of Haridwar. At Haridwar, a headworks diverts some of its water into the Ganges Canal, which irrigates the Doab region of Uttar Pradesh, whereas the river, whose course has been roughly southwest until this point, now begins to flow southeast through the plains of northern India.
The Ganges river follows a arching course passing through the cities of Bijnor, Kannauj, Farukhabad, and Kanpur. Along the way it is joined by the Ramganga, which contributes an average annual flow of about to the river. The Ganges joins the long River Yamuna at the Triveni Sangam at Prayagraj (previously Allahabad), a confluence considered holy in Hinduism. At their confluence the Yamuna is larger than the Ganges contributing about 58.5% of the combined flow, with an average flow of .
Now flowing east, the river meets the long Tamsa River (also called Tons), which flows north from the Kaimur Range and contributes an average flow of about . After the Tamsa, the long Gomti River joins, flowing south from the Himalayas. The Gomti contributes an average annual flow of about . Then the long Ghaghara River (Karnali River), also flowing south from the Himalayas of Tibet through Nepal joins. The Ghaghara (Karnali), with its average annual flow of about , is the largest tributary of the Ganges by discharge. After the Ghaghara confluence, the Ganges is joined from the south by the long Son River, which contributes about . The long Gandaki River, then the long Kosi River, join from the north flowing from Nepal, contributing about and , respectively. The Kosi is the third largest tributary of the Ganges by discharge, after Ghaghara (Karnali) and Yamuna. The Kosi merges into the Ganges near Kursela in Bihar.
Along the way between Prayagraj and Malda, West Bengal, the Ganges river passes the towns of Chunar, Mirzapur, Varanasi, Ghazipur, Ara, Patna, Chapra, Hajipur, Mokama, Begusarai, Munger, Sahibganj, Rajmahal, Bhagalpur, Ballia, Buxar, Simaria, Sultanganj, and Farakka. At Bhagalpur, the river begins to flow south-southeast and at Farakka, it begins its attrition with the branching away of its first distributary, the long Bhāgirathi-Hooghly, which goes on to become the Hooghly River. Just before the border with Bangladesh the Farakka Barrage controls the flow of Ganges, diverting some of the water into a feeder canal linked to the Hooghly for the purpose of keeping it relatively silt-free. The Hooghly River is formed by the confluence of the Bhagirathi River and Ajay River at Katwa, and Hooghly has a number of tributaries of its own. The largest is the Damodar River, which is long, with a drainage basin of . The Hooghly River empties into the Bay of Bengal near Sagar Island. Between Malda and the Bay of Bengal, the Hooghly river passes the towns and cities of Murshidabad, Nabadwip, Kolkata and Howrah.
After entering Bangladesh, the main branch of the Ganges river is known as the Padma. The Padma is joined by the Jamuna River, the largest distributary of the Brahmaputra. Further downstream, the Padma joins the Meghna River, the converged flow of Surma-Meghna River System taking on the Meghna's name as it enters the Meghna Estuary, which empties into the Bay of Bengal. Here it forms the Bengal Fan, the world's largest submarine fan, which alone accounts for 10–20% of the global burial of organic carbon.
The Ganges Delta, formed mainly by the large, sediment-laden flows of the Ganges and Brahmaputra rivers, is the world's largest delta, at about . It stretches along the Bay of Bengal.
Only the Amazon and Congo rivers have a greater average discharge than the combined flow of the Ganges, the Brahmaputra, and the Surma-Meghna river system. In full flood only the Amazon is larger.
Geology
The Indian subcontinent lies atop the Indian tectonic plate, a minor plate within the Indo-Australian Plate. Its defining geological processes commenced seventy-five million years ago, when, as a part of the southern supercontinent Gondwana, it began a northeastwards drift—lasting fifty million years—across the then unformed Indian Ocean. The subcontinent's subsequent collision with the Eurasian Plate and subduction under it, gave rise to the Himalayas, the planet's highest mountain ranges. In the former seabed immediately south of the emerging Himalayas, plate movement created a vast trough, which, having gradually been filled with sediment borne by the Indus and its tributaries and the Ganges and its tributaries, now forms the Indo-Gangetic Plain.
The Indo-Gangetic Plain is geologically known as a foredeep or foreland basin.
Hydrology
Major left-bank tributaries include the Gomti River, Ghaghara River, Gandaki River and Kosi River; major right-bank tributaries include the Yamuna River, Son River, Punpun and Damodar. The hydrology of the Ganges River is very complicated, especially in the Ganges Delta region. One result is different ways to determine the river's length, its discharge, and the size of its drainage basin.
The name Ganges is used for the river between the confluence of the Bhagirathi and Alaknanda rivers, in the Himalayas, and the first bifurcation of the river, near the Farakka Barrage and the India-Bangladesh Border. The length of the Ganges is frequently said to be slightly over long, about , or . In these cases the river's source is usually assumed to be the source of the Bhagirathi River, Gangotri Glacier at Gomukh and its mouth being the mouth of the Meghna River on the Bay of Bengal. Sometimes the source of the Ganges is considered to be at Haridwar, where its Himalayan headwater streams debouch onto the Gangetic Plain.
In some cases, the length of the Ganges is given by its Hooghly River distributary, which is longer than its main outlet via the Meghna River, resulting in a total length of about , if taken from the source of the Bhagirathi, or , if from Haridwar to the Hooghly's mouth. In other cases the length is said to be about , from the source of the Bhagirathi to the Bangladesh border, where its name changes to Padma.
For similar reasons, sources differ over the size of the river's drainage basin. The basin covers parts of four countries, India, Nepal, China, and Bangladesh; eleven Indian states, Himachal Pradesh, Uttarakhand, Uttar Pradesh, Madhya Pradesh, Chhattisgarh, Bihar, Jharkhand, Punjab, Haryana, Rajasthan, West Bengal, and the Union Territory of Delhi. The Ganges basin, including the delta but not the Brahmaputra or Meghna basins, is about , of which is in India (about 80%), in Nepal (13%), in Bangladesh (4%), and in China (3%). Sometimes the Ganges and Brahmaputra–Meghna drainage basins are combined for a total of about or . The combined Ganges-Brahmaputra-Meghna basin (abbreviated GBM or GMB) drainage basin is spread across Bangladesh, Bhutan, India, Nepal, and China.
The Ganges basin ranges from the Himalaya and the Transhimalaya in the north, to the northern slopes of the Vindhya range in the south, from the eastern slopes of the Aravalli in the west to the Chota Nagpur plateau and the Sunderbans delta in the east. A significant portion of the discharge from the Ganges comes from the Himalayan mountain system. Within the Himalaya, the Ganges basin spreads almost 1,200 km from the Yamuna-Satluj divide along the Simla ridge forming the boundary with the Indus basin in the west to the Singalila Ridge along the Nepal-Sikkim border forming the boundary with the Brahmaputra basin in the east. This section of the Himalaya contains 9 of the 14 highest peaks in the world over 8,000m in height, including Mount Everest which is the high point of the Ganges basin. The other peaks over 8,000m in the basin are Kangchenjunga, Lhotse, Makalu, Cho Oyu, Dhaulagiri, Manaslu, Annapurna and Shishapangma. The Himalayan portion of the basin includes the south-eastern portion of the state of Himachal Pradesh, the entire state of Uttarakhand, the entire country of Nepal and the extreme north-western portion of the state of West Bengal.
The discharge of the Ganges also differs by source. Frequently, discharge is described for the mouth of the Meghna River, thus combining the Ganges with the Brahmaputra and Meghna. This results in a total average annual discharge of about , or . In other cases the average annual discharges of the Ganges, Brahmaputra, and Meghna are given separately, at about for the Ganges, about for the Brahmaputra, and about for the Meghna.
The maximum peak discharge of the Ganges, as recorded at Hardinge Bridge in Bangladesh, exceeded . The minimum recorded at the same place was about , in 1997.
The hydrologic cycle in the Ganges basin is governed by the Southwest Monsoon. About 84% of the total rainfall occurs in the monsoon from June to September. Consequently, streamflow in the Ganges is highly seasonal. The average dry season to monsoon discharge ratio is about 1:6, as measured at Hardinge Bridge. This strong seasonal variation underlies many problems of land and water resource development in the region. The seasonality of flow is so acute it can cause both drought and floods. Bangladesh, in particular, frequently experiences drought during the dry season and regularly suffers extreme floods during the monsoon.
In the Ganges Delta, many large rivers come together, both merging and bifurcating in a complicated network of channels. The two largest rivers, the Ganges and Brahmaputra, both split into distributary channels, the largest of which merge with other large rivers before themselves joining the Bay of Bengal. But this current channel pattern was not always the case. Over time the rivers in Ganges Delta have often changed course, sometimes altering the network of channels in significant ways.
Before the late 12th century the Bhagirathi-Hooghly distributary was the main channel of the Ganges and the Padma was only a minor spill-channel. The main flow of the river reached the sea not via the modern Hooghly River but rather by the Adi Ganga. Between the 12th and 16th centuries, the Bhagirathi-Hooghly and Padma channels were more or less equally significant. After the 16th century, the Padma grew to become the main channel of the Ganges. It is thought that the Bhagirathi-Hooghly became increasingly choked with silt, causing the main flow of the Ganges to shift to the southeast and the Padma River. By the end of the 18th century, the Padma had become the main distributary of the Ganges. One result of this shift to the Padma was that the Ganges now joined the Meghna and Brahmaputra rivers before emptying into the Bay of Bengal. The present confluence of the Ganges and Meghna was formed very recently, about 150 years ago.
Also near the end of the 18th century, the course of the lower Brahmaputra changed dramatically, significantly altering its relationship with the Ganges. In 1787 there was a great flood on the Teesta River, which at the time was a tributary of the Ganges-Padma River. The flood of 1787 caused the Teesta to undergo a sudden change course, an avulsion, shifting east to join the Brahmaputra and causing the Brahmaputra to shift its course south, cutting a new channel. This new main channel of the Brahmaputra is called the Jamuna River. It flows south to join the Ganges-Padma. During ancient times, the main flow of the Brahmaputra was more easterly, passing by the city of Mymensingh and joining the Meghna River. Today this channel is a small distributary but retains the name Brahmaputra, sometimes Old Brahmaputra. The site of the old Brahmaputra-Meghna confluence, in the locality of Langalbandh, is still considered sacred by Hindus. Near the confluence is a major early historic site called Wari-Bateshwar.
In the rainy season of 1809, the lower channel of the Bhagirathi, leading to Kolkata, had been entirely shut; in the following year it opened again and was nearly the same size as the upper channel, but both thereafter suffered a considerable diminution, owing probably to the new communication opened below the Jalanggi on the upper channel.
Discharge
Discharge of the Ganges River at Farakka Barrage (period from 1998/01/01 to 2023/12/31):
History
The first European traveller to mention the Ganges was the Greek envoy Megasthenes (ca. 350–290 BCE). He did so several times in his work Indica: "India, again, possesses many rivers both large and navigable, which, having their sources in the mountains which stretch along the northern frontier, traverse the level country, and not a few of these, after uniting with each other, fall into the river called the Ganges. Now this river, which at its source is 30 stadia broad, flows from north to south, and empties its waters into the ocean forming the eastern boundary of the Gangaridai, a nation which possesses a vast force of the largest-sized elephants." (Diodorus II.37).
In 1951 a water sharing dispute arose between India and East Pakistan (now Bangladesh) after India declared its intention to build the Farakka Barrage. The original purpose of the barrage, which was completed in 1975, was to divert up to of water from the Ganges to the Bhagirathi-Hooghly distributary to restore navigability at the Port of Kolkata. It was assumed that during the worst dry season the Ganges flow would be around , thus leaving for the then East Pakistan. East Pakistan objected and a protracted dispute ensued. In 1996 a 30-year treaty was signed with Bangladesh. The terms of the agreement are complicated, but in essence, they state that if the Ganges flow at Farakka was less than then India and Bangladesh would each receive 50% of the water, with each receiving at least for alternating ten-day periods. However, within a year the flow at Farakka fell to levels far below the historic average, making it impossible to implement the guaranteed sharing of water. In March 1997, flow of the Ganges in Bangladesh dropped to its lowest ever, . Dry season flows returned to normal levels in the years following, but efforts were made to address the problem. One plan is for another barrage to be built in Bangladesh at Pangsha, west of Dhaka. This barrage would help Bangladesh better utilize its share of the waters of the Ganges.
Religious and cultural significance
Embodiment of sacredness
The Ganges is a sacred river to Hindus along every fragment of its length. All along its course, Hindus bathe in its waters, paying homage to their ancestors and their gods by cupping the water in their hands, lifting it, and letting it fall back into the river; they offer flowers and rose petals and float shallow clay dishes filled with oil and lit with wicks (diyas). On the journey back home from the Ganges, they carry small quantities of river water with them for use in rituals; Ganga Jal, literally "the water of the Ganges".
The Ganges is the embodiment of all sacred waters in Hindu mythology. Local rivers are said to be like the Ganges and are sometimes called the local Ganges. The Godavari River of Maharashtra in Western India is called the Ganges of the South or the 'Dakshin Ganga'; the Godavari is the Ganges that was led by the sage Gautama to flow through Central India. The Ganges is invoked whenever water is used in Hindu ritual and is therefore present in all sacred waters. Despite this, nothing is more stirring for a Hindu than a dip in the actual river, which is thought to remit sins, especially at one of the famous tirthas such as Varanasi, Gangotri, Haridwar, or the Triveni Sangam at Prayagraj. The symbolic and religious importance of the Ganges is one of the few things that Hindus, even their skeptics, have agreed upon. Jawaharlal Nehru, a religious iconoclast himself, asked for a handful of his ashes to be thrown into the Ganges. "The Ganga", he wrote in his will, "is the river of India, beloved of her people, round which are intertwined her racial memories, her hopes and fears, her songs of triumph, her victories and her defeats. She has been a symbol of India's age-long culture and civilization, ever-changing, ever-flowing, and yet ever the same Ganga."
Avatarana – Descent of Ganges
In late May or early June every year, Hindus celebrate the avatarana, the descent of the Ganges from heaven to earth. The day of the celebration, Ganga Dashahara, the Dashami (tenth day) of the waxing moon of the Hindu calendar month Jyeshtha, brings throngs of bathers to the banks of the river. A dip in the Ganges on this day is said to rid the bather of ten sins (dasha = Sanskrit "ten"; hara = to destroy) or ten lifetimes of sins. Those who cannot journey to the river, however, can achieve the same results by bathing in any nearby body of water, which, for the true believer, takes on all the attributes of the Ganges.
The avatarana is an old theme in Hinduism with a number of different versions of the story. In the Vedic version, Indra, the Lord of Svarga (Heaven) slays the celestial serpent, Vritra, releasing the celestial liquid, soma, or the nectar of the gods which then plunges to the earth and waters it with sustenance.
In the Vaishnava version of the myth, the heavenly waters were then a river called Vishnupadi (Sanskrit: "from the foot of Vishnu"). As Vishnu as the avatar Vamana completes his celebrated three strides —of earth, sky, and heaven— he stubs his toe on the vault of heaven, punches open a hole and releases the Vishnupadi, which until now had been circling the cosmic egg. Flowing out of the vault, she plummets down to Indra's heaven, where she is received by Dhruva, once a steadfast worshipper of Vishnu, now fixed in the sky as the Pole star. Next, she streams across the sky forming the Milky Way and arrives on the moon. She then flows down earthwards to Brahma's realm, a divine lotus atop Mount Meru, whose petals form the earthly continents. There, the divine waters break up, with one stream, the Bhagirathi, flowing down one petal into Bharatavarsha (India) as the Ganges.
It is Shiva, however, among the major deities of the Hindu pantheon, who appears in the most widely known version of the avatarana story. Told and retold in the Ramayana, the Mahabharata and several Puranas, the story begins with a sage, Kapila, whose intense meditation has been disturbed by the sixty thousand sons of King Sagara. Livid at being disturbed, Kapila sears them with his angry gaze, reduces them to ashes, and dispatches them to the netherworld. Only the waters of the Ganges, then in heaven, can bring the dead sons their salvation. A descendant of these sons, King Bhagiratha, anxious to restore his ancestors, undertakes rigorous penance and is eventually granted the prize of Ganges's descent from heaven. However, since her turbulent force would also shatter the earth, Bhagiratha persuades Shiva in his abode on Mount Kailash to receive the Ganges in the coils of his tangled hair and break her fall. The Ganges descends, is tamed in Shiva's locks, and arrives in the Himalayas. She is then led by the waiting Bhagiratha down into the plains at Haridwar, across the plains first to the confluence with the Yamuna at Prayag and then to Varanasi, and eventually to Ganges Sagar (Ganges delta), where she meets the ocean, sinks to the netherworld, and saves the sons of Sagara. In honour of Bhagirath's pivotal role in the avatarana, the source stream of the Ganges in the Himalayas is named Bhagirathi, (Sanskrit, "of Bhagiratha").
Redemption of the Dead
As the Ganges had descended from heaven to earth in the Hindu tradition, she is also considered the vehicle of ascent, from earth to heaven. As the Triloka-patha-gamini, (Sanskrit: triloka = "three worlds", patha = "road", gamini = "one who travels") of the tradition, she flows in heaven, earth, and the netherworld, and, consequently, is a "tirtha" or crossing point of all beings, the living as well as the dead. It is for this reason that the story of the avatarana is told at Shraddha ceremonies for the deceased in Hinduism, and Ganges water is used in Vedic rituals after death. Among all hymns devoted to the Ganges, there are none more popular than the ones expressing the worshipper's wish to breathe his last surrounded by her waters. The Gangashtakam expresses this longing fervently:
O Mother! ... Necklace adorning the worlds!
Banner rising to heaven!
I ask that I may leave of this body on your banks,
Drinking your water, rolling in your waves,
Remembering your name, bestowing my gaze upon you.
No place along her banks is more longed for at the moment of death by Hindus than Varanasi, the Great Cremation Ground, or Mahashmshana. Those who are lucky enough to die in Varanasi, are cremated on the banks of the Ganges, and are granted instant salvation. If the death has occurred elsewhere, salvation can be achieved by immersing the ashes in the Ganges. If the ashes have been immersed in another body of water, a relative can still gain salvation for the deceased by journeying to the Ganges, if possible during the lunar "fortnight of the ancestors" in the Hindu calendar month of Ashwin (September or October), and performing the Shraddha rites.
Hindus also perform pinda pradana, a rite for the dead, in which balls of rice and sesame seed are offered to the Ganges while the names of the deceased relatives are recited. Every sesame seed in every ball thus offered, according to one story, assures a thousand years of heavenly salvation for each relative. Indeed, the Ganges is so important in the rituals after death that the Mahabharata, in one of its popular ślokas, says, "If only (one) bone of a (deceased) person should touch the water of the Ganges, that person shall dwell honoured in heaven." As if to illustrate this truism, the Kashi Khanda (Varanasi Chapter) of the Skanda Purana recounts the remarkable story of Vahika, a profligate and unrepentant sinner, who is killed by a tiger in the forest. His soul arrives before Yama, the Lord of Death, to be judged for the afterworld. Having no compensating virtue, Vahika's soul is at once dispatched to hell. While this is happening, his body on earth, however, is being picked at by vultures, one of whom flies away with a foot bone. Another bird comes after the vulture, and in fighting him off, the vulture accidentally drops the bone into the Ganges below. Blessed by this event, Vahika, on his way to hell, is rescued by a celestial chariot which takes him instead to heaven.
The Purifying Ganges
Hindus consider the waters of the Ganges to be both pure and purifying. Regardless of all scientific understanding of its waters, the Ganges is always ritually and symbolically pure in Hindu culture. Nothing reclaims order from disorder more than the waters of the Ganga. Moving water, as in a river, is considered purifying in Hindu culture because it is thought to both absorb impurities and take them away. The swiftly moving Ganga, especially in its upper reaches, where a bather has to grasp an anchored chain to not be carried away, is especially purifying. What the Ganges removes, however, is not necessarily physical dirt, but symbolic dirt; it wipes away the sins of the bather, not just of the present, but of a lifetime.
A popular paean to the Ganga is the Ganga Lahiri composed by the 17th-century poet Jagannatha who, legend has it, was turned out of his Hindu Brahmin caste for carrying on an affair with a Muslim woman. Having attempted futilely to be rehabilitated within the Hindu fold, the poet finally appeals to Ganga, the hope of the hopeless, and the comforter of last resort. Along with his beloved, Jagannatha sits at the top of the flight of steps leading to the water at the famous Panchganga Ghat in Varanasi. As he recites each verse of the poem, the water of the Ganges rises one step until in the end it envelops the lovers and carries them away. "I come to you as a child to his mother", begins the Ganga Lahiri.
I come as an orphan to you, moist with love.
I come without refuge to you, giver of sacred rest.
I come a fallen man to you, uplifter of all.
I come undone by disease to you, the perfect physician.
I come, my heart dry with thirst, to you, ocean of sweet wine.
Do with me whatever you will.
Consort, Shakti, and Mother
Ganga is a consort to all three major male deities of Hinduism. As Brahma's partner she always travels with him in the form of water in his kamandalu (water-pot). She is also Vishnu's consort. Not only does she emanate from his foot as Vishnupadi in the avatarana story, but she is also, with Sarasvati and Lakshmi, one of his co-wives. In one popular story, envious of being outdone by each other, the co-wives begin to quarrel. While Lakshmi attempts to mediate the quarrel, Ganga and Sarasvati heap misfortune on each other. They curse each other to become rivers, and to carry within them, by washing, the sins of their human worshippers. Soon their husband, Vishnu, arrives and decides to calm the situation by separating the goddesses. He orders Sarasvati to marry Brahma, Ganga to marry Shiva, and Lakshmi, as the blameless conciliator, to remain as his own wife. Ganga and Sarasvati, however, are so distraught at this dispensation, and wail so loudly, that Vishnu is forced to take back his words. Consequently, in their lives as rivers they are still thought to be with him.
It is Shiva's relationship with Ganga that is the best-known in Ganges mythology. Her descent, the avatarana, is not a one-time event, but a continuously occurring one in which she is forever falling from heaven into his locks and being forever tamed. Shiva is depicted in Hindu iconography as Gangadhara, the "Bearer of the Ganga", with Ganga, shown as a spout of water, rising from his hair. The Shiva-Ganga relationship is both perpetual and intimate. Shiva is sometimes called Uma-Ganga-Patiswara ("Husband and Lord of Uma (Parvati) and Ganga"), and Ganga often arouses the jealousy of Shiva's better-known consort.
Ganga is the shakti or the moving, restless, rolling energy in the form of which the otherwise reclusive and unapproachable Shiva appears on earth. As water, this moving energy can be felt, tasted, and absorbed. The war-god Skanda addresses the sage Agastya in the Kashi Khand of the Skanda Purana in these words:
One should not be amazed ... that this Ganges is really Power, for is she not the Supreme Shakti of the Eternal Shiva, taken in the form of water?
This Ganges, filled with the sweet wine of compassion, was sent out for the salvation of the world by Shiva, the Lord of the Lords.
Good people should not think this Triple-Pathed River to be like the thousand other earthly rivers, filled with water.
The Ganga is also the mother, the Ganga Mata (mata="mother") of Hindu worship and culture, accepting all and forgiving all. Unlike other goddesses, she has no destructive or fearsome aspect, destructive though she might be as a river in nature. She is also a mother to other gods. She accepts Shiva's incandescent seed from the fire-god Agni, which is too hot for this world and cools it in her waters. This union produces Skanda, or Kartikeya, the god of war. In the Mahabharata, she is married to Shantanu, and the mother of heroic warrior-patriarch, Bhishma. When Bhishma is mortally wounded in battle, Ganga comes out of the water in human form and weeps uncontrollably over his body.
The Ganges is the distilled lifeblood of the Hindu tradition, of its divinities, holy books, and enlightenment. As such, her worship does not require the usual rites of invocation (avahana) at the beginning and dismissal (visarjana) at the end, required in the worship of other gods. Her divinity is immediate and everlasting.
Ganges in classical Indian iconography
Early in ancient Indian culture, the river Ganges was associated with fecundity, its redeeming waters, and its rich silt providing sustenance to all who lived along its banks. A counterpoise to the dazzling heat of the Indian summer, the Ganges came to be imbued with magical qualities and to be revered in anthropomorphic form. By the 5th century CE, an elaborate mythology surrounded the Ganges, now a goddess in her own right, and a symbol for all rivers of India. Hindu temples all over India had statues and reliefs of the goddess carved at their entrances, symbolically washing the sins of arriving worshippers and guarding the gods within. As protector of the sanctum sanctorum, the goddess soon came to be depicted with several characteristic accessories: the makara (a crocodile-like undersea monster, often shown with an elephant-like trunk), the kumbha (an overfull vase), various overhead parasol-like coverings, and a gradually increasing retinue of humans.
Central to the goddess's visual identification is the makara, which is also her vahana, or mount. An ancient symbol in India, it pre-dates all appearances of the goddess Ganga in art. The makara has a dual symbolism. On the one hand, it represents the life-affirming waters and plants of its environment; on the other, it represents fear, both fear of the unknown which it elicits by lurking in those waters, and real fear which it instils by appearing in sight. The earliest extant unambiguous pairing of the makara with Ganga is at the Udayagiri Caves in Central India (circa 400 CE). Here, in the Cave V, flanking the main figure of Vishnu shown in his boar incarnation, two river goddesses, Ganga and Yamuna appear atop their respective mounts, makara and kurma (a turtle or tortoise).
The makara is often accompanied by a gana, a small boy or child, near its mouth, as, for example, shown in the Gupta period relief from Besnagar, Central India, in the left-most frame above. The gana represents both posterity and development (udbhava). The pairing of the fearsome, life-destroying makara with the youthful, life-affirming gana speaks to two aspects of the Ganges herself. Although she has provided sustenance to millions, she has also brought hardship, injury, and death by causing major floods along her banks. The goddess Ganga is also accompanied by a dwarf attendant, who carries a cosmetic bag, and on whom she sometimes leans, as if for support. (See, for example, frames 1, 2, and 4 above.)
The purna kumbha or full pot of water is the second most discernible element of the Ganga iconography. Appearing first also in the relief in the Udayagiri Caves (5th century), it gradually appeared more frequently as the theme of the goddess matured. By the 7th century it had become an established feature, as seen, for example, in the Dashavatara temple, Deogarh, Uttar Pradesh (7th century), the Trimurti temple, Badoli, Chittorgarh, Rajasthan, and at the Lakshmaneshwar temple, Kharod, Bilaspur, Chhattisgarh, (9th or 10th century), and seen very clearly in frame 3 above and less clearly in the remaining frames. Worshipped even today, the full pot is emblematic of the formless Brahman, as well as of woman, of the womb, and of birth. Furthermore, The river goddesses Ganga and Saraswati were both born from Brahma's pot, containing the celestial waters.
In her earliest depictions at temple entrances, the goddess Ganga appeared standing beneath the overhanging branch of a tree, as seen as well in the Udayagiri caves. However, soon the tree cover had evolved into a chatra or parasol held by an attendant, for example, in the 7th-century Dasavatara temple at Deogarh. (The parasol can be clearly seen in frame 3 above; its stem can be seen in frame 4, but the rest has broken off.) The cover undergoes another transformation in the temple at Kharod, Bilaspur (9th or 10th century), where the parasol is lotus-shaped, and yet another at the Trimurti temple at Badoli where the parasol has been replaced entirely by a lotus.
As the iconography evolved, sculptors, especially in central India, were producing animated scenes of the goddess, replete with an entourage and suggestive of a queen en route to a river to bathe. A relief similar to the depiction in frame 4 above, is described in as follows: A typical relief of about the ninth century that once stood at the entrance of a temple, the river goddess Ganga is shown as a voluptuously endowed lady with a retinue. Following the iconographic prescription, she stands gracefully on her composite makara mount and holds a water pot. The dwarf attendant carries her cosmetic bag, and a ... female holds the stem of a giant lotus leaf that serves as her mistress's parasol. The fourth figure is a male guardian. Often in such reliefs, the makara tail is extended with great flourish into a scrolling design symbolizing both vegetation and water.
Kumbh Mela
Kumbh Mela is a mass Hindu pilgrimage in which Hindus gather at the Ganges River. The normal Kumbh Mela is celebrated every three years; the Ardh (half) Kumbh every six years at Haridwar and Prayagraj; and the Purna (complete) Kumbh every twelve years at four places (Triveni Sangam (Prayagraj), Haridwar, Ujjain, and Nashik). The Maha (great) Kumbh Mela, which comes after 12 Purna Kumbh Melas, or 144 years, is held at Prayagraj.
The major event of the festival is ritual bathing at the banks of the river. Other activities include religious discussions, devotional singing, mass feeding of holy men and women and the poor, and religious assemblies where doctrines are debated and standardized. Kumbh Mela is the most sacred of all the pilgrimages. Thousands of holy men and women attend, and the auspiciousness of the festival is in part attributable to this. The sadhus are seen clad in saffron sheets with ashes and powder dabbed on their skin per the requirements of ancient traditions. Some called naga sanyasis, may not wear any clothes.
Irrigation
The Ganges and all its tributaries, especially the Yamuna, have been used for irrigation since ancient times. Dams and canals were common in the Gangetic plain by the 4th century BCE. The Ganges-Brahmaputra-Meghna basin has a huge hydroelectric potential, on the order of 200,000 to 250,000 megawatts, nearly half of which could easily be harnessed. As of 1999, India tapped about 12% of the hydroelectric potential of the Ganges and just 1% of the vast potential of the Brahmaputra.
Canals
Megasthenes, a Greek ethnographer who visited India during the 3rd century BCE when the Mauryans ruled India, described the existence of canals in the Gangetic plain. Kautilya (also known as Chanakya), an advisor to Chandragupta Maurya, the founder of the Maurya Empire, included the destruction of dams and levees as a strategy during war. Firuz Shah Tughlaq had many canals built, the longest of which, , was built in 1356 on the Yamuna River. Now known as the Western Yamuna Canal, it has fallen into disrepair and been restored several times. The Mughal emperor Shah Jahan built an irrigation canal on the Yamuna River in the early 17th century. It fell into disuse until 1830, when it was reopened as the Eastern Yamuna Canal, under British control. The reopened canal became a model for the Upper Ganges Canal and all following canal projects.
The first British canal in India (which did not have Indian antecedents) was the Ganges Canal built between 1842 and 1854.
Contemplated first by Col. John Russell Colvin in 1836, it did not at first elicit much enthusiasm from its eventual architect Sir Proby Thomas Cautley, who balked at the idea of cutting a canal through extensive low-lying land to reach the drier upland destination. However, after the Agra famine of 1837–38, during which the East India Company's administration spent Rs. 2,300,000 on famine relief, the idea of a canal became more attractive to the company's budget-conscious Court of Directors. In 1839, the Governor General of India, Lord Auckland, with the Court's assent, granted funds to Cautley for a full survey of the swath of land that underlay and fringed the projected course of the canal. The Court of Directors, moreover, considerably enlarged the scope of the projected canal, which, in consequence of the severity and geographical extent of the famine, they now deemed to be the entire Doab region.
The enthusiasm, however, proved to be short-lived. Auckland's successor as Governor-General, Lord Ellenborough, appeared less receptive to large-scale public works, and for the duration of his tenure, withheld major funds for the project. Only in 1844, when a new Governor-General, Lord Hardinge, was appointed, did official enthusiasm and funds return to the Ganges canal project. Although the intervening impasse had seemingly affected Cautley's health and required him to return to Britain in 1845 for recuperation, his European sojourn gave him an opportunity to study contemporary hydraulic works in the United Kingdom and Italy. By the time of his return to India even more supportive men were at the helm, both in the North-Western Provinces, with James Thomason as Lt. Governor, and in British India with Lord Dalhousie as Governor-General. Canal construction, under Cautley's supervision, now went into full swing. A long canal, with another of branch lines, eventually stretched between the headworks in Haridwar, splitting into two branches below Aligarh, and its two confluences with the Yamuna (Jumna in map) mainstem in Etawah and the Ganges in Kanpur (Cawnpore in map). The Ganges Canal, which required a total capital outlay of £2.15 million, was officially opened in 1854 by Lord Dalhousie. According to historian Ian Stone: It was the largest canal ever attempted in the world, five times greater in its length than all the main irrigation lines of Lombardy and Egypt put together, and longer by a third than even the largest USA navigation canal, the Pennsylvania Canal.
Dams and barrages
A major barrage at Farakka was opened on 21 April 1975. It is located close to the point where the main flow of the river enters Bangladesh and the tributary Hooghly (also known as Bhagirathi) continues in West Bengal past Kolkata. This barrage, which feeds the Hooghly branch of the river by a long feeder canal, and its management of water flow have been a long-lingering source of dispute with Bangladesh. The Indo-Bangladesh Ganges Water Treaty, signed in December 1996, addressed some of the water-sharing issues between India and Bangladesh. There is also the Lav Khush Barrage across the River Ganges in Kanpur.
Tehri Dam was constructed on Bhagirathi River, a tributary of the Ganges. It is located 1.5 km downstream of Ganesh Prayag, the place where Bhilangana meets Bhagirathi. Bhagirathi is called the Ganges after Devprayag. Construction of the dam in an earthquake-prone area was controversial.
Bansagar Dam was built on the Sone River, a tributary of the Ganges for both irrigation and hydroelectric power generation. Ganges floodwaters along with Brahmaputra waters can be supplied to most of its right side basin area along with central and south India by constructing a coastal reservoir to store water on the Bay of Bengal sea area.
Economy
The Ganges Basin with its fertile soil is instrumental to the agricultural economies of India and Bangladesh. The Ganges and its tributaries provide a perennial source of irrigation to a large area. Chief crops cultivated in the area include rice, sugarcane, lentils, oil seeds, potatoes, and wheat. Along the banks of the river, the presence of swamps and lakes provides a rich growing area for crops such as legumes, chillies, mustard, sesame, sugarcane, and jute. There are also many fishing opportunities along the river, though it remains highly polluted. The major industrial towns of Unnao and Kanpur, situated on the banks of the river and dominated by tanning industries, add to the pollution.
Tourism
Tourism is another related activity. Three towns holy to Hinduism—Haridwar, Prayagraj, and Varanasi—attract millions of pilgrims, who take a dip in the Ganges in the belief that it cleanses them of sins and helps them attain salvation. The rapids of the Ganges are also popular for river rafting around the town of Rishikesh, attracting adventure seekers in the summer months. Several cities, such as Kanpur, Kolkata, and Patna, have developed riverfront walkways along the banks to attract tourists.
Ecology and environment
Human development, mostly agriculture, has replaced nearly all of the original natural vegetation of the Ganges basin. More than 95% of the upper Gangetic Plain has been degraded or converted to agriculture or urban areas. Only one large block of relatively intact habitat remains, running along the Himalayan foothills and including Rajaji National Park, Jim Corbett National Park, and Dudhwa National Park. As recently as the 16th and 17th centuries the upper Gangetic Plain harboured impressive populations of wild Asian elephants (Elephas maximus), Bengal tigers (Panthera t. tigris), Indian rhinoceros (Rhinoceros unicornis), gaurs (Bos gaurus), barasinghas (Rucervus duvaucelii), sloth bears (Melursus ursinus) and Indian lions (Panthera leo leo). In the 21st century there are few large wild animals, mostly deer, wild boars, wildcats, and small numbers of Indian wolves, golden jackals, and red and Bengal foxes. Bengal tigers survive only in the Sundarbans area of the Ganges Delta. The Sundarbans freshwater swamp ecoregion, however, is nearly extinct. The Sundarbans mangroves (Heritiera fomes) also grow in the Sundarbans area of the Ganges Delta. Threatened mammals in the upper Gangetic Plain include the tiger, elephant, sloth bear, and four-horned antelope (Tetracerus quadricornis).
Many types of birds are found throughout the basin, such as myna, Psittacula parakeets, crows, kites, partridges, and fowls. Ducks and snipes migrate across the Himalayas during the winter, attracted in large numbers to wetland areas. There are no endemic birds in the upper Gangetic Plain. The great Indian bustard (Ardeotis nigriceps) and lesser florican (Sypheotides indicus) are considered globally threatened.
The natural forest of the upper Gangetic Plain has been so thoroughly eliminated it is difficult to assign a natural vegetation type with certainty. There are a few small patches of forest left, and they suggest that much of the upper plains may have supported a tropical moist deciduous forest with sal (Shorea robusta) as a climax species.
A similar situation is found in the lower Gangetic Plain, which includes the lower Brahmaputra River. The lower plains contain more open forests, which tend to be dominated by Bombax ceiba in association with Albizzia procera, Duabanga grandiflora, and Sterculia villosa. There are early seral forest communities that would eventually become dominated by the climax species sal (Shorea robusta) if forest succession were allowed to proceed. In most places forests fail to reach climax conditions due to human causes. The forests of the lower Gangetic Plain, despite thousands of years of human settlement, remained largely intact until the early 20th century. Today only about 3% of the ecoregion is under natural forest, and only one large block, south of Varanasi, remains. There are over forty protected areas in the ecoregion, but over half of these are quite small. The fauna of the lower Gangetic Plain is similar to the upper plains, with the addition of a number of other species such as the smooth-coated otter (Lutrogale perspicillata) and the large Indian civet (Viverra zibetha).
Fish
It has been estimated that about 350 fish species live in the entire Ganges drainage, including several endemics. In a major 2007–2009 study of fish in the Ganges basin (including the river itself and its tributaries, but excluding the Brahmaputra and Meghna basins), a total of 143 fish species were recorded, including 10 non-native introduced species. The most diverse orders are Cypriniformes (barbs and allies), Siluriformes (catfish) and Perciformes (perciform fish), comprising about 50%, 23% and 14% of the total fish species in the drainage, respectively.
There are distinct differences between the different sections of the river basin, but Cyprinidae is the most diverse family throughout. In the upper section (roughly equalling the basin parts in Uttarakhand) more than 50 species have been recorded, and Cyprinidae alone accounts for almost 80% of those, followed by Balitoridae (about 15.6%) and Sisoridae (about 12.2%). The highest-altitude sections of the Ganges basin are generally without fish; typical genera approaching this altitude are Schizothorax, Tor, Barilius, Nemacheilus and Glyptothorax. About 100 species have been recorded from the middle section of the basin (roughly equalling the sections in Uttar Pradesh and parts of Bihar), and more than 55% of these are in the family Cyprinidae, followed by Schilbeidae (about 10.6%) and Clupeidae (about 8.6%). The lower section (roughly equalling the basin in parts of Bihar and West Bengal) includes major floodplains and is home to almost 100 species. About 46% of these are in the family Cyprinidae, followed by Schilbeidae (about 11.4%) and Bagridae (about 9%).
The Ganges basin supports major fisheries, but these have declined in recent decades. In the Prayagraj region in the middle section of the basin, catches of carp fell from 424.91 metric tons in 1961–1968 to 38.58 metric tons in 2001–2006, and catches of catfish fell from 201.35 metric tons to 40.56 metric tons over the same period. In the Patna region in the lower section of the basin, catches of carp fell from 383.2 to 118 metric tons, and catches of catfish from 373.8 to 194.48 metric tons. Some of the fish commonly caught in fisheries include catla (Catla catla), golden mahseer (Tor putitora), tor mahseer (Tor tor), rohu (Labeo rohita), walking catfish (Clarias batrachus), pangas catfish (Pangasius pangasius), goonch catfish (Bagarius), snakeheads (Channa), bronze featherback (Notopterus notopterus) and milkfish (Chanos chanos).
The Ganges basin is home to about 30 fish species that are listed as threatened with the primary issues being overfishing (sometimes illegal), pollution, water abstraction, siltation and invasive species. Among the threatened species is the critically endangered Ganges shark (Glyphis gangeticus). Several fish species migrate between different sections of the river, but these movements may be prevented by the building of dams.
Crocodilians and turtles
The main sections of the Ganges River are home to the gharial (Gavialis gangeticus) and mugger crocodile (Crocodylus palustris), and the Ganges delta is home to the saltwater crocodile (C. porosus). Among the numerous aquatic and semi-aquatic turtles in the Ganges basin are the northern river terrapin (Batagur baska; only in the lowermost section of the basin), three-striped roofed turtle (B. dhongoka), red-crowned roofed turtle (B. kachuga), black pond turtle (Geoclemys hamiltonii), Brahminy river turtle (Hardella thurjii), Indian black turtle (Melanochelys trijuga), Indian eyed turtle (Morenia petersi), brown roofed turtle (Pangshura smithii), Indian roofed turtle (Pangshura tecta), Indian tent turtle (Pangshura tentoria), Indian flapshell turtle (Lissemys punctata), Indian narrow-headed softshell turtle (Chitra indica), Indian softshell turtle (Nilssonia gangetica), Indian peacock softshell turtle (N. hurum) and Cantor's giant softshell turtle (Pelochelys cantorii; only in the lowermost section of Ganges basin). Most of these are seriously threatened.
Ganges river dolphin
The river's most famed faunal member is the freshwater Ganges river dolphin (Platanista gangetica gangetica), which has been declared India's national aquatic animal.
This dolphin used to exist in large schools near urban centres in both the Ganges and Brahmaputra rivers, but it is now seriously threatened by pollution, dam construction and improper fishing methods. Numbers have dwindled to a quarter of what they were fifteen years earlier, and the dolphin has become extinct in the Ganges' main tributaries. A 2012 survey by the World Wildlife Fund found only 3,000 individuals left in the water catchment of both river systems.
The Ganges river dolphin is one of only five true freshwater dolphins in the world. The other four are the baiji (Lipotes vexillifer) of the Yangtze River in China, now likely extinct; the Indus River dolphin of the Indus River in Pakistan; the Amazon river dolphin of the Amazon River in South America; and the Araguaian river dolphin (not considered a separate species until 2014) of the Araguaia–Tocantins basin in Brazil. There are several marine dolphins whose ranges include some freshwater habitats, but these five are the only dolphins that live exclusively in freshwater rivers and lakes.
Effects of climate change
The Tibetan Plateau contains the world's third-largest store of ice. Qin Dahe, the former head of the China Meteorological Administration, said that the recent fast pace of melting and warmer temperatures would be good for agriculture and tourism in the short term, but issued a strong warning about the long-term consequences for the region's water supply.
In 2007, the Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report, stated that the Himalayan glaciers which feed the river were at risk of melting by 2035. The IPCC has since withdrawn that prediction, as the original source admitted that it was speculative and the cited source was not a peer-reviewed finding. In its statement, the IPCC stands by its general findings relating to the Himalayan glaciers being at risk from global warming (with consequent risks to water flow into the Gangetic basin). Many studies have suggested that climate change will affect the water resources of the Ganges river basin; in particular, increased summer (monsoon) flow and peak runoff could result in an increased risk of flooding.
Pollution and environmental concerns
The Ganges suffers from extreme pollution levels, caused by the 400 million people who live close to the river. Sewage from many cities along the river's course, industrial waste and religious offerings wrapped in non-degradable plastics add large amounts of pollutants to the river as it flows through densely populated areas. The problem is exacerbated by the fact that many poorer people rely on the river on a daily basis for bathing, washing, and cooking. The World Bank estimates that the health costs of water pollution in India equal three percent of India's GDP. It has also been suggested that eighty percent of all illnesses in India and one-third of deaths can be attributed to water-borne diseases.
Varanasi, a city of one million people that many pilgrims visit to take a "holy dip" in the Ganges, releases around 200 million liters of untreated human sewage into the river each day, leading to large concentrations of fecal coliform bacteria. According to official standards, water safe for bathing should not contain more than 500 fecal coliforms per 100 ml, yet upstream of Varanasi's ghats the river water already contains 120 times as much, 60,000 fecal coliform bacteria per 100 ml.
After the cremation of the deceased at Varanasi's ghats, the bones and ashes are immersed in the Ganges. However, in the past thousands of uncremated bodies were thrown into the Ganges during cholera epidemics, spreading the disease. Even today, holy men, pregnant women, people with leprosy or chicken pox, people who have been bitten by snakes, people who have committed suicide, the poor, and children under 5 are not cremated at the ghats but are left to float free, to decompose in the waters. In addition, those who cannot afford the large amount of wood needed to incinerate the entire body leave behind many half-burned body parts.
After passing through Varanasi, and receiving 32 streams of raw sewage from the city, the concentration of fecal coliforms in the river's waters rises from 60,000 to 1.5 million, with observed peak values of 100 million per 100 ml. Drinking and bathing in its waters therefore carries a high risk of infection.
Between 1985 and 2000, Rs. 10 billion, around US$226 million, or less than 4 cents per person per year, were spent on the Ganga Action Plan, an environmental initiative that was "the largest single attempt to clean up a polluted river anywhere in the world". The Ganga Action Plan has been described variously as a "failure" and a "major failure".
According to one study, the Ganga Action Plan, though taken up as a priority and with much enthusiasm, was delayed by two years and its expenditure almost doubled, yet the results were not appreciable. Much of the expenditure went to political propaganda; the governments concerned and the related agencies were not prompt in making the plan a success, and the public of the affected areas was not taken into consideration. The release of urban and industrial wastes into the river was not fully controlled, and dirty water flowing in through drains and sewers was not adequately diverted. The continuing customs of burning dead bodies, throwing in carcasses, the washing of dirty clothes by washermen, the immersion of idols, and cattle wallowing were not checked. Very little provision of public latrines was made, and the open defecation of lakhs of people continued along the riverside. All of this made the Action Plan a failure.
The failure of the Ganga Action Plan has also been variously attributed to "environmental planning without proper understanding of the human-environment interactions", Indian "traditions and beliefs", "corruption and a lack of technical knowledge" and "lack of support from religious authorities".
In December 2009 the World Bank agreed to loan India US$1 billion over the following five years to help save the river. According to 2010 Planning Commission estimates, an investment of almost Rs. 70 billion (approximately US$1.5 billion) is needed to clean up the river.
In November 2008, the Ganges, alone among India's rivers, was declared a "National River", facilitating the formation of a National Ganga River Basin Authority that would have greater powers to plan, implement and monitor measures aimed at protecting the river.
In July 2014, the Government of India announced an integrated Ganges-development project titled the Namami Gange Programme and allocated ₹2,037 crore for the purpose. The main objectives of the project are to improve water quality through the abatement of pollution and to rejuvenate the river by building infrastructure such as sewage treatment plants, and through river-surface cleaning, biodiversity conservation, afforestation, and public awareness.
In March 2017 the High Court of Uttarakhand declared the Ganges River a legal "person", in a move that, according to one newspaper, "could help in efforts to clean the pollution-choked rivers". Since then, the ruling has been commented on in Indian newspapers as hard to enforce; experts do not anticipate immediate benefits, the ruling has been called "hardly game changing", experts believe "any follow-up action is unlikely", and the "judgment is deficient to the extent it acted without hearing others (in states outside Uttarakhand) who have stakes in the matter."
The incidence of water-borne and enteric diseases—such as gastrointestinal disease, cholera, dysentery, hepatitis A and typhoid—among people who use the river's waters for bathing, washing dishes and brushing teeth is high, at an estimated 66% per year.
Recent studies by the Indian Council of Medical Research (ICMR) say that the river is so full of pollutants that those living along its banks in Uttar Pradesh, Bihar, and Bengal are more prone to cancer than people anywhere else in the country. Conducted by the National Cancer Registry Programme (NCRP) under the ICMR, the study found the river thick with heavy metals and lethal chemicals that cause cancer. According to Deputy Director-General of the NCRP A. Nandkumar, the incidence of cancer was highest in the country in areas drained by the Ganges; he stated that the problem would be studied in depth and the findings presented in a report to the health ministry.
Apart from that, many NGOs have come forward to help rejuvenate the river. Vikrant Tongad, an environmental specialist from SAFE Green, filed a petition against Simbhaoli Sugar Mill (Hapur, Uttar Pradesh) with the National Green Tribunal (NGT). The NGT imposed a fine of Rs. 5 crore on the sugar mill and a fine of Rs. 25 lakh on Gopaljee Dairy for discharging untreated effluents into the Simbhaoli drain.
Water shortages
Along with ever-increasing pollution, water shortages are getting noticeably worse, and some sections of the river are already completely dry. Around Varanasi, the river's average depth has fallen to a fraction of what it once was.
Mining
Illegal mining of the Ganges river bed for stones and sand for construction work has long been a problem in Haridwar district, Uttarakhand, where the river first touches the plains. This is despite a ban on quarrying in the Kumbh Mela zone, an area of 140 km2 in Haridwar.
In art and literature
A painting of the Ganges entering the plains near Haridwar by William Purser with a poetical illustration by Letitia Elizabeth Landon in Fisher's Drawing Room Scrap Book, 1838.
A painting of the Ganges near Kahalgaon by J. M. W. Turner with a poetical illustration by Letitia Elizabeth Landon in Fisher's Drawing Room Scrap Book, 1839.
See also
Environmental personhood
Fair river sharing
Ganga Pushkaram
Gangaputra Brahmin
Ganga Talao
Ganga Lake (Mongolia)
List of most-polluted rivers
List of rivers by discharge
List of rivers by length
List of rivers of India
Mahaweli Ganga
National Waterway 1
Pollution of the Ganges
River bank erosion along the Ganges in Malda and Murshidabad districts
Sacred waters
Sankat Mochan Foundation
Ganga (goddess)
Peninsular River System
Notes
References
Sources
Further reading
Christopher de Bellaigue, "The River" (the Ganges; review of Sunil Amrith, Unruly Waters: How Rains, Rivers, Coasts, and Seas Have Shaped Asia's History; Sudipta Sen, Ganges: The Many Pasts of an Indian River; and Victor Mallet, River of Life, River of Death: The Ganges and India's Future), The New York Review of Books, vol. LXVI, no. 15 (10 October 2019), pp. 34–36. "[I]n 1951 the average Indian [inhabitant of India] had access annually to 5,200 cubic meters of water. The figure today is 1,400 ... and will probably fall below 1,000 cubic meters – the UN's definition of 'water scarcity' – at some point in the next few decades. Compounding the problem of lower summer rainfall ... India's water table is in freefall [due] to an increase in the number of tube wells ... Other contributors to India's seasonal dearth of water are canal leaks [and] the continued sowing of thirsty crops" (p. 35).
Tandon, S.K., and R. Sinha. The Ganga River: A Summary View of a Large River System of the Indian Sub-Continent, in Sen Singh, Dhruv (Ed.), The Indian Rivers: Scientific and Socio-Economic Aspects. Singapore: Springer Nature, 2018, pp. 61–74. ISBN 978-981-10-2983-7
External links
Ganga in the Imperial Gazetteer of India, 1909
Melting Glaciers Threaten Ganga
The impacts of water infrastructure and climate change on the hydrology of the Upper Ganga River Basin IWMI research report
The Ganges: A Journey into India (NPR)
Ganga Ma: A Pilgrimage to the Source 58 min Documentary
Earthquake 2,500 years ago abruptly changed Ganga river’s course
International rivers of Asia
Rivers of Bangladesh
Bangladesh–India border
Border rivers
Sacred rivers
Rivers of Bihar
Rivers of Jharkhand
Rivers of Delhi
Rivers of Uttarakhand
Rivers of Uttar Pradesh
Rivers of West Bengal
National symbols of India
Rigvedic rivers
Rivers in Buddhism
Environmental personhood
Braided rivers in India | Ganges | [
"Environmental_science"
] | 14,642 | [
"Environmental personhood",
"Environmental ethics"
] |
12,450 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20completeness%20theorem | Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
The completeness theorem applies to any first-order theory: If T is such a theory, and φ is a sentence (in the same language) and every model of T is a model of φ, then there is a (first-order) proof of φ using the statements of T as axioms. One sometimes says this as "anything true in all models is provable". (This does not contradict Gödel's incompleteness theorem, which is about a formula φu that is unprovable in a certain theory T but true in the "standard" model of the natural numbers: φu is false in some other, "non-standard" models of T.)
The completeness theorem makes a close link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems.
It was first proved by Kurt Gödel in 1929. It was then simplified when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953.
Preliminaries
There are numerous deductive systems for first-order logic, including systems of natural deduction and Hilbert-style systems. Common to all deductive systems is the notion of a formal deduction. This is a sequence (or, in some cases, a finite tree) of formulae with a specially designated conclusion. The definition of a deduction is such that it is finite and that it is possible to verify algorithmically (by a computer, for example, or by hand) that a given sequence (or tree) of formulae is indeed a deduction.
A first-order formula is called logically valid if it is true in every structure for the language of the formula (i.e. for any assignment of values to the variables of the formula). To formally state, and then prove, the completeness theorem, it is necessary to also define a deductive system. A deductive system is called complete if every logically valid formula is the conclusion of some formal deduction, and the completeness theorem for a particular deductive system is the theorem that it is complete in this sense. Thus, in a sense, there is a different completeness theorem for each deductive system. A converse to completeness is soundness, the fact that only logically valid formulas are provable in the deductive system.
If some specific deductive system of first-order logic is sound and complete, then it is "perfect" (a formula is provable if and only if it is logically valid), thus equivalent to any other deductive system with the same quality (any proof in one system can be converted into the other).
Statement
We first fix a deductive system of first-order predicate calculus, choosing any of the well-known equivalent systems. Gödel's original proof assumed the Hilbert-Ackermann proof system.
Gödel's original formulation
The completeness theorem says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula.
Thus, the deductive system is "complete" in the sense that no additional inference rules are required to prove all the logically valid formulae. A converse to completeness is soundness, the fact that only logically valid formulae are provable in the deductive system. Together with soundness (whose verification is easy), this theorem implies that a formula is logically valid if and only if it is the conclusion of a formal deduction.
More general form
The theorem can be expressed more generally in terms of logical consequence. We say that a sentence s is a syntactic consequence of a theory T, denoted T ⊢ s, if s is provable from T in our deductive system. We say that s is a semantic consequence of T, denoted T ⊨ s, if s holds in every model of T. The completeness theorem then says that for any first-order theory T with a well-orderable language, and any sentence s in the language of T, if T ⊨ s, then T ⊢ s.
Since the converse (soundness) also holds, it follows that T ⊨ s if and only if T ⊢ s, and thus that syntactic and semantic consequence are equivalent for first-order logic.
This more general theorem is used implicitly, for example, when a sentence is shown to be provable from the axioms of group theory by considering an arbitrary group and showing that the sentence is satisfied by that group.
Gödel's original formulation is deduced by taking the particular case of a theory without any axiom.
Model existence theorem
The completeness theorem can also be understood in terms of consistency, as a consequence of Henkin's model existence theorem. We say that a theory T is syntactically consistent if there is no sentence s such that both s and its negation ¬s are provable from T in our deductive system. The model existence theorem says that for any first-order theory T with a well-orderable language, if T is syntactically consistent, then T has a model.
Another version, with connections to the Löwenheim–Skolem theorem, says: every syntactically consistent, countable first-order theory has a finite or countable model.
Given Henkin's theorem, the completeness theorem can be proved as follows: If T ⊨ s, then T ∪ {¬s} does not have models. By the contrapositive of Henkin's theorem, T ∪ {¬s} is then syntactically inconsistent. So a contradiction (⊥) is provable from T ∪ {¬s} in the deductive system. Hence T ∪ {¬s} ⊢ ⊥, and then by the properties of the deductive system, T ⊢ s.
As a theorem of arithmetic
The model existence theorem and its proof can be formalized in the framework of Peano arithmetic. Precisely, we can systematically define a model of any consistent effective first-order theory T in Peano arithmetic by interpreting each symbol of T by an arithmetical formula whose free variables are the arguments of the symbol. (In many cases, we will need to assume, as a hypothesis of the construction, that T is consistent, since Peano arithmetic may not prove that fact.) However, the definition expressed by this formula is not recursive (but is, in general, Δ2).
Consequences
An important consequence of the completeness theorem is that it is possible to recursively enumerate the semantic consequences of any effective first-order theory, by enumerating all the possible formal deductions from the axioms of the theory and using this to produce an enumeration of their conclusions.
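A minimal sketch of this enumeration in Python, assuming hypothetical helpers decode_proof (decoding a natural number as a candidate deduction) and is_valid_deduction (the algorithmic proof-checker guaranteed by the definition of a deductive system); the names and encoding are illustrative, not any particular library's API:

from itertools import count

def enumerate_theorems(axioms, decode_proof, is_valid_deduction):
    # Every natural number n is tried as the code of a finite deduction;
    # valid deductions contribute their designated conclusion, so every
    # provable sentence is eventually yielded (possibly with repetitions).
    for n in count():
        proof = decode_proof(n)              # None for numbers coding junk
        if proof is not None and is_valid_deduction(proof, axioms):
            yield proof.conclusion

By the completeness theorem, the sentences this procedure eventually yields are exactly the semantic consequences of the axioms, which is what makes those consequences recursively enumerable.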
This contrasts with the direct meaning of the notion of semantic consequence, which quantifies over all structures in a particular language and is clearly not a recursive definition.
It also makes the concept of "provability", and thus of "theorem", clear: it depends only on the chosen system of axioms of the theory, and not on the choice of a proof system.
Relationship to the incompleteness theorems
Gödel's incompleteness theorems show that there are inherent limitations to what can be proven within any given first-order theory in mathematics. The "incompleteness" in their name refers to another meaning of complete (see model theory – Using the compactness and completeness theorems): A theory T is complete (or decidable) if every sentence s in the language of T is either provable (T ⊢ s) or disprovable (T ⊢ ¬s).
The first incompleteness theorem states that any theory T which is consistent, effective and contains Robinson arithmetic ("Q") must be incomplete in this sense, by explicitly constructing a sentence S_T which is demonstrably neither provable nor disprovable within T. The second incompleteness theorem extends this result by showing that S_T can be chosen so that it expresses the consistency of T itself.
Since S_T cannot be proven in T, the completeness theorem implies the existence of a model of T in which S_T is false. In fact, S_T is a Π1 sentence, i.e. it states that some finitistic property is true of all natural numbers; so if it is false, then some natural number is a counterexample. If this counterexample existed within the standard natural numbers, its existence would disprove S_T within T; but the incompleteness theorem showed this to be impossible, so the counterexample must not be a standard number, and thus any model of T in which S_T is false must include non-standard numbers.
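For concreteness, under one standard arithmetization (the proof predicate Prf_T is assumed here rather than constructed), the consistency statement takes the Π1 form

\[
\mathrm{Con}_T \;\equiv\; \forall n\, \lnot\, \mathrm{Prf}_T\big(n,\ \ulcorner 0 = 1 \urcorner\big),
\]

so a model in which it fails must contain some n coding a "proof" of 0 = 1, and by the argument above such an n is necessarily a non-standard number.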
In fact, the model of any theory containing Q obtained by the systematic construction of the arithmetical model existence theorem is always non-standard, with a non-equivalent provability predicate and a non-equivalent way to interpret its own construction, so that this construction is non-recursive (as recursive definitions would be unambiguous).
Also, if is at least slightly stronger than Q (e.g. if it includes induction for bounded existential formulas), then Tennenbaum's theorem shows that it has no recursive non-standard models.
Relationship to the compactness theorem
The completeness theorem and the compactness theorem are two cornerstones of first-order logic. While neither of these theorems can be proven in a completely effective manner, each one can be effectively obtained from the other.
The compactness theorem says that if a formula φ is a logical consequence of a (possibly infinite) set of formulas Γ then it is a logical consequence of a finite subset of Γ. This is an immediate consequence of the completeness theorem, because only a finite number of axioms from Γ can be mentioned in a formal deduction of φ, and the soundness of the deductive system then implies φ is a logical consequence of this finite set. This proof of the compactness theorem is originally due to Gödel.
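Written out, the argument of the preceding paragraph is the chain (with Γ0 the finite set of premises actually cited in the formal deduction):

\[
\Gamma \models \varphi
\;\overset{\text{completeness}}{\Longrightarrow}\;
\Gamma \vdash \varphi
\;\Longrightarrow\;
\Gamma_0 \vdash \varphi \ \text{for some finite } \Gamma_0 \subseteq \Gamma
\;\overset{\text{soundness}}{\Longrightarrow}\;
\Gamma_0 \models \varphi.
\]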
Conversely, for many deductive systems, it is possible to prove the completeness theorem as an effective consequence of the compactness theorem.
The ineffectiveness of the completeness theorem can be measured along the lines of reverse mathematics. When considered over a countable language, the completeness and compactness theorems are equivalent to each other and equivalent to a weak form of choice known as weak Kőnig's lemma, with the equivalence provable in RCA0 (a second-order variant of Peano arithmetic restricted to induction over Σ01 formulas). Weak Kőnig's lemma is provable in ZF, the system of Zermelo–Fraenkel set theory without the axiom of choice, and thus the completeness and compactness theorems for countable languages are provable in ZF. However, the situation is different when the language is of arbitrarily large cardinality: then, though the completeness and compactness theorems remain provably equivalent to each other in ZF, they are also provably equivalent to a weak form of the axiom of choice known as the ultrafilter lemma. In particular, no theory extending ZF can prove either the completeness or compactness theorem over arbitrary (possibly uncountable) languages without also proving the ultrafilter lemma on a set of the same cardinality.
Completeness in other logics
The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (though it does have the completeness property for Henkin semantics), and the set of logically valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete.
Lindström's theorem states that first-order logic is the strongest (subject to certain constraints) logic satisfying both compactness and completeness.
A completeness theorem can be proved for modal logic or intuitionistic logic with respect to Kripke semantics.
Proofs
Gödel's original proof of the theorem proceeded by reducing the problem to a special case for formulas in a certain syntactic form, and then handling this form with an ad hoc argument.
In modern logic texts, Gödel's completeness theorem is usually proved with Henkin's proof, rather than with Gödel's original proof. Henkin's proof directly constructs a term model for any consistent first-order theory. James Margetson (2004) developed a computerized formal proof using the Isabelle theorem prover. Other proofs are also known.
See also
Gödel's incompleteness theorems
Original proof of Gödel's completeness theorem
References
Further reading
Gödel's 1929 doctoral dissertation, containing the first proof of the completeness theorem.
Gödel's 1930 published version of the dissertation, with briefer proofs, more succinct explanations, and without the lengthy introduction.
Chapter 5: "Gödel's completeness theorem".
External links
Stanford Encyclopedia of Philosophy: "Kurt Gödel"—by Juliette Kennedy.
MacTutor biography: Kurt Gödel.
Detlovs, Vilnis, and Podnieks, Karlis, "Introduction to mathematical logic."
Theorems in the foundations of mathematics
Metatheorems
Model theory
Proof theory
Completeness theorem | Gödel's completeness theorem | [
"Mathematics"
] | 2,718 | [
"Foundations of mathematics",
"Proof theory",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
12,451 | https://en.wikipedia.org/wiki/Global%20Boundary%20Stratotype%20Section%20and%20Point | A Global Boundary Stratotype Section and Point (GSSP), sometimes referred to as a golden spike, is an internationally agreed upon reference point on a stratigraphic section which defines the lower boundary of a stage on the geologic time scale. The effort to define GSSPs is conducted by the International Commission on Stratigraphy, a part of the International Union of Geological Sciences. Most, but not all, GSSPs are based on paleontological changes. Hence GSSPs are usually described in terms of transitions between different faunal stages, though far more faunal stages have been described than GSSPs. The GSSP definition effort commenced in 1977. As of 2024, 79 of the 101 stages that need a GSSP have a ratified GSSP.
Rules
A geologic section has to fulfill a set of criteria to be adopted as a GSSP by the ICS. The following list summarizes the criteria:
A GSSP has to define the lower boundary of a geologic stage.
The lower boundary has to be defined using a primary marker (usually the first appearance datum of a fossil species).
There should also be secondary markers (other fossils, chemical, geomagnetic reversal).
The horizon in which the marker appears should have minerals that can be radiometrically dated.
The marker has to have regional and global correlation in outcrops of the same age.
The marker should be independent of facies.
The outcrop has to have an adequate thickness.
Sedimentation has to be continuous without any changes in facies.
The outcrop should be unaffected by tectonic and sedimentary movements, and metamorphism.
The outcrop has to be accessible to research and free to access.
This includes the requirements that the outcrop be located where it can be reached quickly (near an international airport and good roads), be kept in good condition (ideally a national reserve), lie in accessible terrain, be extensive enough to allow repeated sampling, and be open to researchers of all nationalities.
Agreed-upon Global Boundary Stratotype Section and Points
Once a GSSP boundary has been agreed upon, a 'golden spike' is driven into the geologic section to mark the precise boundary for future geologists (though in practice the 'spike' need neither be golden nor an actual spike). As such, GSSPs are also sometimes referred to as golden spikes. The first stratigraphic boundary was defined in 1972 by identifying the Silurian-Devonian boundary with a bronze plaque at a locality called Klonk, northeast of the village of Suchomasty in the Czech Republic.
Fortune Head GSSP
The Precambrian-Cambrian boundary GSSP at Fortune Head, Newfoundland is a typical GSSP. It is accessible by paved road and is set aside as a nature preserve. A continuous section is available from beds that are clearly Precambrian into beds that are clearly Cambrian. The boundary is set at the first appearance of a complex trace fossil, Treptichnus pedum, that is found worldwide. The Fortune Head GSSP is unlikely to be washed away or built over. Nonetheless, Treptichnus pedum is less than ideal as a marker fossil, as it is not found in every Cambrian sequence, and it is not assured that it is found at the same level in every exposure. In fact, further eroding its value as a boundary marker, it has since been identified in strata 4 m below the GSSP.
However, no other fossil is known that would be preferable. There is no radiometrically datable bed at the boundary at Fortune Head, but there is one slightly above the boundary in similar beds nearby.
These factors have led some geologists to suggest that this GSSP is in need of reassigning.
Global Standard Stratigraphic Ages
Because defining a GSSP depends on finding well-preserved geologic sections and identifying key events, this task becomes more difficult as one goes farther back in time. Before 630 million years ago, boundaries on the geologic timescale are defined simply by reference to fixed dates, known as "Global Standard Stratigraphic Ages" (GSSAs).
See also
Body plan
MN zonation
Fauna
Geologic time scale
New Zealand geologic time scale
North American land mammal age
Type locality
Notes
References
Hedberg, H.D., (editor), International stratigraphic guide: A guide to stratigraphic classification, terminology, and procedure, New York, John Wiley and Sons, 1976
External links
Earth sciences
Geologic time scales of Earth
Historical geology
Paleogeography
Paleobiology
Units of time
Stratigraphy | Global Boundary Stratotype Section and Point | [
"Physics",
"Mathematics",
"Biology"
] | 931 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Paleobiology",
"Spacetime",
"Units of measurement"
] |
12,458 | https://en.wikipedia.org/wiki/Ginnungagap | In Norse mythology, Ginnungagap (old Norse: ; "gaping abyss", "yawning void") is the primordial, magical void mentioned in three poems from the Poetic Edda and the Gylfaginning, the Eddaic text recording Norse cosmogony.
Etymology
Ginnunga- is usually interpreted as deriving from a verb meaning "gape" or "yawn", but no such word occurs in Old Norse except in verse 3 of the Eddic poem "Vǫluspá", "gap var ginnunga", which may be a play on the term.
In her edition of the poem, Ursula Dronke suggested it was borrowed from Old High German ginunga, as the term Múspell is believed to have been borrowed from Old High German. An alternative etymology links the ginn- prefix with that found in terms with a sacral meaning, such as ginn-heilagr, ginn-regin (both referring to the gods) and ginn-runa (referring to the runes), thus interpreting Ginnungagap as signifying a "magical (and creative) power-filled space".
Creation
Ginnungagap appears as the primordial void in the Norse creation account. The Gylfaginning describes it as the vast emptiness that lay between the ice of Niflheim in the north and the fires of Múspellsheimr in the south; where the rime and the warm air met, the thawing drops quickened into the primordial being Ymir.
In the Völuspá, a supernaturally long-lived völva who was raised by jötnar tells the story of how Odin and his two brothers created the world from Ginnungagap.
Geographic rationalization
Scandinavian cartographers from the early 15th century attempted to localize or identify Ginnungagap as a real geographic location from which the creation myth derived. A fragment from a 15th-century (pre-Columbus) Old Norse encyclopedic text entitled Gripla (Little Compendium) places Ginnungagap between Greenland and Vinland.
A scholion in a 15th-century manuscript of Adam of Bremen's Gesta Hammaburgensis Ecclesiae Pontificum similarly refers to Ghimmendegop as the Norse word for the abyss in the far north.
Later, the 17th-century Icelandic bishop Guðbrandur Thorlaksson also used the name Ginnungegap to refer to a narrow body of water, possibly the Davis Strait, separating the southern tip of Greenland from Estotelandia, pars America extrema, probably Baffin Island.
In popular culture
"Ginnungagap" is a song from the Jethro Tull album RökFlöte, released as a single on January 20, 2023.
Ginnungagap is featured in the Marvel Universe, as a void that existed before the formation of the world. In this place were formed entities such as the Elder Gods, Xian, Ennead, Frost Giants, Fire Demons, Nyx and Amatsu-Mikaboshi.
In the Netflix series Ragnarok, Ginnungagap is visited as camping site for a classroom field trip during Season 1, Episode 4; it also happens to be the name of this particular episode. In Season 2, Episode 2, Ginnungagap is visited by the characters Laurits and Vidar, and is depicted as a scenic vantage point overlooking a fjord and two adjoining mountains.
Alastair Reynolds' space opera novel Absolution Gap features a chasm named Ginnungagap Rift.
Swedish death metal band, Amon Amarth and their 2001 album The Crusher features a track titled, "Fall Through Ginnungagap".
Swedish symphonic metal band, Therion, features a track titled "Ginnungagap" on their Secret of the Runes album from 2001.
EVE Online has a black hole whose accretion disk shows up in the skybox named Ginnungagap.
"Ginungagap" (sic) is the title of a science fiction short story by Michael Swanwick.
French neofolk group SKÁLD included a song titled "Ginnunga" in their 2018 album Vikings Chant.
Ginnungagap (ギンヌンガガプ, Ginnungagapu) is a weaponized grimoire introduced in Fire Emblem Fates, part of a video-game franchise published by Nintendo. It is a high-level item that hits the hardest of all tomes and scrolls in the game.
Ginungagap is the hub world of the video game Jøtun.
In PlatinumGames's Bayonetta 3, the main characters travel through the multiverse, and the Ginnungagap is used as a gateway. In the game, it is referred to as "Ginnungagap, the Chaotic Rift".
A variation of Ginnungagap called "The Spark of the World" appears in the 2022 action-adventure video game God of War Ragnarök. This location becomes accessible during the main quest while in Muspelheim, appearing as a cosmic tapestry of orange sparks merged with blue-tinged essence, presumably from Niflheim.
See also
Abyss (religion)
Chaos (mythology)
Plane (esotericism)
Void (astronomy)
Notes
References
Dillmann, F. X. (1998). "Ginnungagap" in: Beck, H., Steuer, H. & Timpe, D. (Eds.) Reallexikon der germanischen Altertumskunde, Volume 12. Berlin: de Gruyter. .
External links
Guðbrandur Thorlaksson's 1606 map of the North Atlantic
Locations in Norse mythology
Chaos (cosmogony) | Ginnungagap | [
"Astronomy"
] | 1,146 | [
"Cosmogony",
"Chaos (cosmogony)"
] |
12,460 | https://en.wikipedia.org/wiki/Green | Green is the color between cyan and yellow on the visible spectrum. It is evoked by light which has a dominant wavelength of roughly 495570 nm. In subtractive color systems, used in painting and color printing, it is created by a combination of yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. By far the largest contributor to green in nature is chlorophyll, the chemical by which plants photosynthesize and convert sunlight into chemical energy. Many creatures have adapted to their green environments by taking on a green hue themselves as camouflage. Several minerals have a green color, including the emerald, which is colored green by its chromium content.
In post-classical and early modern Europe, green was the color commonly associated with wealth, merchants, bankers, and the gentry, while red was reserved for the nobility. For this reason, the costume of the Mona Lisa by Leonardo da Vinci and the benches in the British House of Commons are green while those in the House of Lords are red. It also has a long historical tradition as the color of Ireland and of Gaelic culture. It is the historic color of Islam, representing the lush vegetation of Paradise. It was the color of the banner of Muhammad, and is found in the flags of nearly all Islamic countries.
In surveys made in American, European, and Islamic countries, green is the color most commonly associated with nature, life, health, youth, spring, hope, and envy. In the European Union and the United States, green is also sometimes associated with toxicity and poor health, but in China and most of Asia, its associations are very positive, as the symbol of fertility and happiness. Because of its association with nature, it is the color of the environmental movement. Political groups advocating environmental protection and social justice describe themselves as part of the Green movement, some naming themselves Green parties. This has led to similar campaigns in advertising, as companies have sold green, or environmentally friendly, products. Green is also the traditional color of safety and permission; a green light means go ahead, a green card permits permanent residence in the United States.
Etymology and linguistic definitions
The word green comes from the Middle English and Old English word grene, which, like the German word grün, has the same root as the words grass and grow. It is from a Common Germanic *gronja-, which is also reflected in Old Norse grænn, Old High German gruoni (but unattested in East Germanic), ultimately from a Proto-Indo-European root meaning "to grow", and root-cognate with grass and to grow.
The first recorded use of the word as a color term in Old English dates to ca. AD 700.
Latin also has a genuine and widely used term for "green", viridis. Related to virere "to grow" and ver "spring", it gave rise to words in several Romance languages, French vert, Italian verde (and English vert, verdure etc.). Likewise the Slavic languages have zelenъ. Ancient Greek also had a term for yellowish, pale green – χλωρός, chloros (cf. the color of chlorine), cognate with χλοερός "verdant" and χλόη "chloe, the green of new growth".
Thus, the languages mentioned above (Germanic, Romance, Slavic, Greek) have old terms for "green" which are derived from words for fresh, sprouting vegetation.
However, comparative linguistics makes clear that these terms were coined independently over the past few millennia, and there is no identifiable single Proto-Indo-European word for "green". For example, the Slavic zelenъ is cognate with a Sanskrit word meaning "yellow, ochre, golden".
The Turkic languages also have jašɨl "green" or "yellowish green", compared to a Mongolian word for "meadow".
Languages where green and blue are one color
In some languages, including old Chinese, Thai, old Japanese, and Vietnamese, the same word can mean either blue or green. The Chinese character 青 (pronounced qīng in Mandarin, ao in Japanese, and thanh in Sino-Vietnamese) has a meaning that covers both blue and green; blue and green are traditionally considered shades of "青". In more contemporary terms, they are 藍 (lán, in Mandarin) and 綠 (lǜ, in Mandarin) respectively. Japanese also has two terms that refer specifically to the color green, 緑 (midori, which is derived from the classical Japanese descriptive verb midoru "to be in leaf, to flourish" in reference to trees) and グリーン (guriin, which is derived from the English word "green"). However, in Japan, although the traffic lights have the same colors as other countries have, the green light is described using the same word as for blue, aoi, because green is considered a shade of aoi; similarly, green variants of certain fruits and vegetables such as green apples, green shiso (as opposed to red apples and red shiso) will be described with the word aoi. Vietnamese uses a single word for both blue and green, xanh, with variants such as xanh da trời (azure, lit. "sky blue"), lam (blue), and lục (green; also xanh lá cây, lit. "leaf green").
"Green" in modern European languages corresponds to about 520–570 nm, but many historical and non-European languages make other choices, e.g. using a term for the range of ca. 450–530 nm ("blue/green") and another for ca. 530–590 nm ("green/yellow"). In the comparative study of color terms in the world's languages, green is only found as a separate category in languages with the fully developed range of six colors (white, black, red, green, yellow, and blue), or more rarely in systems with five colors (white, red, yellow, green, and black/blue). These languages have introduced supplementary vocabulary to denote "green", but these terms are recognizable as recent adoptions that are not in origin color terms (much like the English adjective orange being in origin not a color term but the name of a fruit). Thus, the Thai word เขียว kheīyw, besides meaning "green", also means "rank" and "smelly" and holds other unpleasant associations.
The Celtic languages had a term for "blue/green/grey", Proto-Celtic *glasto-, which gave rise to Old Irish glas "green, grey" and to Welsh glas "blue". This word is cognate with the Ancient Greek γλαυκός "bluish green", contrasting with χλωρός "yellowish green" discussed above.
In modern Japanese, the term for green is 緑 (midori), while 青 (ao), the old term for "blue/green", now means "blue". But in certain contexts green is still conventionally referred to as 青, as in the words for green traffic lights and green apples, reflecting the absence of a blue-green distinction in old Japanese (more accurately, the traditional Japanese color terminology grouped some shades of green with blue, and others with yellow tones).
In science
Color vision and colorimetry
In optics, the perception of green is evoked by light having a spectrum dominated by energy with a wavelength of roughly 495–570 nm. The sensitivity of the dark-adapted human eye is greatest at about 507 nm, a blue-green color, while the light-adapted eye is most sensitive at about 555 nm, a yellow-green; these are the peak locations of the rod and cone (scotopic and photopic, respectively) luminosity functions.
The perception of greenness (in opposition to redness forming one of the opponent mechanisms in human color vision) is evoked by light which triggers the medium-wavelength M cone cells in the eye more than the long-wavelength L cones. Light which triggers this greenness response more than the yellowness or blueness of the other color opponent mechanism is called green. A green light source typically has a spectral power distribution dominated by energy with a wavelength of roughly 487–570 nm.
Human eyes have color receptors known as cone cells, of which there are three types. In some cases, one is missing or faulty, which can cause color blindness, including the common inability to distinguish red and yellow from green, known as deuteranopia or red-green color blindness.
Green is restful to the eye. Studies show that a green environment can reduce fatigue.
In the subtractive color system, used in painting and color printing, green is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. On the HSV color wheel, also known as the RGB color wheel, the complement of green is magenta; that is, a color corresponding to an equal mixture of red and blue light (one of the purples). On a traditional color wheel, based on subtractive color, the complementary color to green is considered to be red.
In additive color devices such as computer displays and televisions, one of the primary light sources is typically a narrow-spectrum yellowish-green of dominant wavelength ≈550 nm; this "green" primary is combined with an orangish-red "red" primary and a purplish-blue "blue" primary to produce any color in between – the RGB color model. A unique green (green appearing neither yellowish nor bluish) is produced on such a device by mixing light from the green primary with some light from the blue primary.
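A small illustration of these relationships in Python (the 8-bit sRGB convention is assumed, and the "unique green" mix below is illustrative rather than a colorimetric standard):

GREEN = (0, 255, 0)  # the additive 'green' primary in 8-bit RGB

def rgb_complement(color):
    # The RGB complement reflects each channel in the 0-255 range.
    r, g, b = color
    return (255 - r, 255 - g, 255 - b)

print(rgb_complement(GREEN))  # (255, 0, 255): magenta, as stated above

# Mixing a little light from the blue primary into the green primary
# pulls the yellowish-green primary toward a 'unique' green.
unique_green = (0, 255, 60)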
Lasers
Lasers emitting in the green part of the spectrum are widely available to the general public in a wide range of output powers. Green laser pointers outputting at 532 nm (563.5 THz) are relatively inexpensive compared to other wavelengths of the same power, and are very popular due to their good beam quality and very high apparent brightness. The most common green lasers use diode pumped solid state (DPSS) technology to create the green light.
An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit 281.76 THz (1064 nm). This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is twice that of the incident beam (563.5 THz); in this case corresponding to the wavelength of 532 nm ("green").
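The figures quoted above are consistent with the relation ν = c/λ; as a worked check (values rounded):

\[
\nu \;=\; \frac{c}{\lambda} \;=\; \frac{2.998 \times 10^{8}\ \mathrm{m\,s^{-1}}}{1064 \times 10^{-9}\ \mathrm{m}} \;\approx\; 281.8\ \mathrm{THz},
\qquad
2\nu \;\approx\; 563.5\ \mathrm{THz}
\;\Longrightarrow\;
\lambda \;=\; \frac{c}{2\nu} \;=\; 532\ \mathrm{nm}.
\]

Frequency doubling in the KTP crystal thus halves the wavelength, taking the 1064 nm infrared pump beam to 532 nm green light.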
Other green wavelengths are also available using DPSS technology ranging from 501 nm to 543 nm.
Green wavelengths are also available from gas lasers, including the helium–neon laser (543 nm), the Argon-ion laser (514 nm) and the Krypton-ion laser (521 nm and 531 nm), as well as liquid dye lasers. Green lasers have a wide variety of applications, including pointing, illumination, surgery, laser light shows, spectroscopy, interferometry, fluorescence, holography, machine vision, non-lethal weapons, and bird control.
As of mid-2011, direct green laser diodes at 510 nm and 500 nm have become generally available,
although the price remains relatively prohibitive for widespread public use. The efficiency of these lasers (peak 3%) compared to that of DPSS green lasers (peak 35%)
may also be limiting adoption of the diodes to niche uses.
Pigments, food coloring and fireworks
Many minerals provide pigments which have been used in green paints and dyes over the centuries. Pigments, in this case, are minerals which reflect the color green, rather than emitting it through luminescent or phosphorescent qualities. The large number of green pigments makes it impossible to mention them all. Among the more notable green minerals, however, is the emerald, which is colored green by trace amounts of chromium and sometimes vanadium.
Chromium(III) oxide (Cr2O3), is called chrome green, also called viridian or institutional green when used as a pigment. For many years, the source of amazonite's color was a mystery. Widely thought to have been due to copper because copper compounds often have blue and green colors, the blue-green color is likely to be derived from small quantities of lead and water in the feldspar.
Copper is the source of the green color in malachite pigments, chemically known as basic copper(II) carbonate.
Verdigris is made by placing a plate or blade of copper, brass or bronze, slightly warmed, into a vat of fermenting wine, leaving it there for several weeks, and then scraping off and drying the green powder that forms on the metal. The process of making verdigris was described in ancient times by Pliny. It was used by the Romans in the murals of Pompeii, and in Celtic medieval manuscripts as early as the 5th century AD. It produced a blue-green which no other pigment could imitate, but it had drawbacks: it was unstable, it could not resist dampness, it did not mix well with other colors, it could ruin other colors with which it came into contact, and it was toxic. Leonardo da Vinci, in his treatise on painting, warned artists not to use it. It was widely used in miniature paintings in Europe and Persia in the 16th and 17th centuries. Its use largely ended in the late 19th century, when it was replaced by the safer and more stable chrome green. Viridian, as described above, was patented in 1859. It became popular with painters, since, unlike other synthetic greens, it was stable and not toxic. Vincent van Gogh used it, along with Prussian blue, to create a dark blue sky with a greenish tint in his painting Café Terrace at Night.
Green earth is a natural pigment used since the time of the Roman Empire. It is composed of clay colored by iron oxide, magnesium, aluminum silicate, or potassium. Large deposits were found in the South of France near Nice, and in Italy around Verona, on Cyprus, and in Bohemia. The clay was crushed, washed to remove impurities, then powdered. It was sometimes called Green of Verona.
Mixtures of oxidized cobalt and zinc were also used to create green paints as early as the 18th century.
Cobalt green, sometimes known as Rinman's green or zinc green, is a translucent green pigment made by heating a mixture of cobalt (II) oxide and zinc oxide. Sven Rinman, a Swedish chemist, discovered this compound in 1780.
Green chrome oxide was a new synthetic green created by a chemist named Pannetier in Paris in about 1835. Emerald green was a synthetic deep green made in the 19th century by hydrating chrome oxide. It was also known as Guignet green.
There is no natural source for green food colorings which has been approved by the US Food and Drug Administration. Chlorophyll, the E numbers E140 and E141, is the most common green chemical found in nature, and only allowed in certain medicines and cosmetic materials.
Quinoline Yellow (E104) is a commonly used coloring in the United Kingdom but is banned in Australia, Japan, Norway and the United States.
Green S (E142) is prohibited in many countries, for it is known to cause hyperactivity, asthma, urticaria, and insomnia.
To create green sparks, fireworks use barium salts, such as barium chlorate, barium nitrate crystals, or barium chloride, also used for green fireplace logs. Copper salts typically burn blue, but cupric chloride (also known as "campfire blue") can also produce green flames. Green pyrotechnic flares can use a 75:25 mix ratio of boron and potassium nitrate. Smoke can be turned green by a mixture of solvent yellow 33, solvent green 3, lactose, and magnesium carbonate, plus sodium carbonate added to potassium chlorate.
Biology
Green is common in nature, as many plants are green because of a complex chemical known as chlorophyll, which is involved in photosynthesis. Chlorophyll absorbs the long wavelengths of light (red) and short wavelengths of light (blue) much more efficiently than the wavelengths that appear green to the human eye, so light reflected by plants is enriched in green.
One hypothesis for why chlorophyll absorbs green light poorly is that it first arose in organisms living in oceans where purple halobacteria were already exploiting photosynthesis. Their purple color arose because they extracted energy in the green portion of the spectrum using bacteriorhodopsin. The organisms that later came to dominate the extraction of light were selected to exploit those portions of the spectrum not used by the halobacteria.
Animals typically use the color green as camouflage, blending in with the chlorophyll green of the surrounding environment. Most fish, reptiles, amphibians, and birds appear green because of a reflection of blue light coming through an over-layer of yellow pigment. Perception of color can also be affected by the surrounding environment. For example, broadleaf forests typically have a yellow-green light about them as the trees filter the light. Turacoverdin is one chemical which can cause a green hue in birds, especially turacos. Invertebrates such as insects or mollusks often display green colors because of porphyrin pigments, sometimes caused by diet. This can cause their feces to look green as well. Other chemicals which generally contribute to greenness among organisms are flavins (lychochromes) and hemanovadin. Humans have imitated this by wearing green clothing as a camouflage in military and other fields. Substances that may impart a greenish hue to one's skin include biliverdin, the green pigment in bile, and ceruloplasmin, a protein that carries copper ions in chelation.
The green huntsman spider is green due to the presence of bilin pigments in the spider's hemolymph (circulatory system fluids) and tissue fluids.
It hunts insects in green vegetation, where it is well camouflaged.
Green eyes
There is no green pigment in green eyes; like the color of blue eyes, it is an optical illusion; its appearance is caused by the combination of an amber or light brown pigmentation of the stroma, given by a low or moderate concentration of melanin, with the blue tone imparted by the Rayleigh scattering of the reflected light.
No one is born with green eyes. An infant's eyes are either dark or blue at birth. Afterward, cells called melanocytes begin to release melanin, the brown pigment, into the child's irises; this happens gradually as the melanocytes respond to light over time.
Green eyes are most common in Northern and Central Europe.
They can also be found in Southern Europe, West Asia, Central Asia, and South Asia. In Iceland, 89% of women and 87% of men have either blue or green eye color.
A study of Icelandic and Dutch adults found green eyes to be much more prevalent in women than in men.
In history and art
Prehistoric history
Neolithic cave paintings do not have traces of green pigments, but neolithic peoples in northern Europe did make a green dye for clothing, made from the leaves of the birch tree. It was of very poor quality, more brown than green. Ceramics from ancient Mesopotamia show people wearing vivid green costumes, but it is not known how the colors were produced.
Ancient history
In Ancient Egypt, green was the symbol of regeneration and rebirth, and of the crops made possible by the annual flooding of the Nile. For painting on the walls of tombs or on papyrus, Egyptian artists used finely ground malachite, mined in the west Sinai and the eastern desert; a paintbox with malachite pigment was found inside the tomb of King Tutankhamun. They also used less expensive green earth pigment, or mixed yellow ochre and blue azurite. To dye fabrics green, they first colored them yellow with dye made from saffron and then soaked them in blue dye from the roots of the woad plant.
For the ancient Egyptians, green had very positive associations. The hieroglyph for green represented a growing papyrus sprout, showing the close connection between green, vegetation, vigor and growth. In wall paintings, the ruler of the underworld, Osiris, was typically portrayed with a green face, because green was the symbol of good health and rebirth. Palettes of green facial makeup, made with malachite, were found in tombs. It was worn by both the living and the dead, particularly around the eyes, to protect them from evil. Tombs also often contained small green amulets in the shape of scarab beetles made of malachite, which would protect and give vigor to the deceased. It also symbolized the sea, which was called the "Very Green".
In Ancient Greece, green and blue were sometimes considered the same color, and the same word sometimes described the color of the sea and the color of trees. The philosopher Democritus described two different greens: one a pale green, the other the green of leeks. Aristotle considered that green was located midway between black, symbolizing the earth, and white, symbolizing water. However, green was not counted among the four classic colors of Greek painting – red, yellow, black and white – and is rarely found in Greek art.
The Romans had a greater appreciation for the color green; it was the color of Venus, the goddess of gardens, vegetables and vineyards. The Romans made a fine green earth pigment that was widely used in the wall paintings of Pompeii, Herculaneum, Lyon, Vaison-la-Romaine, and other Roman cities. They also used the pigment verdigris, made by soaking copper plates in fermenting wine. By the second century AD, the Romans were using green in paintings, mosaics and glass, and there were ten different words in Latin for varieties of green.
Postclassical history
In the Middle Ages and Renaissance, the color of clothing showed a person's social rank and profession. Red could only be worn by the nobility, brown and gray by peasants, and green by merchants, bankers and the gentry and their families. The Mona Lisa wears green in her portrait, as does the bride in the Arnolfini portrait by Jan van Eyck.
There were no good vegetal green dyes which resisted washing and sunlight for those who wanted or were required to wear green. Green dyes were made out of the fern, plantain, buckthorn berries, the juice of nettles and of leeks, the digitalis plant, the broom plant, the leaves of the fraxinus, or ash tree, and the bark of the alder tree, but they rapidly faded or changed color. Only in the 16th century was a good green dye produced, by first dyeing the cloth blue with woad, and then yellow with Reseda luteola, also known as yellow-weed.
The pigments available to painters were more varied; monks in monasteries used verdigris, made by soaking copper in fermenting wine, to color medieval manuscripts. They also used finely-ground malachite, which made a luminous green. They used green earth colors for backgrounds.
During the early Renaissance, painters such as Duccio di Buoninsegna learned to paint faces first with a green undercoat, then with pink, which gave the faces a more realistic hue. Over the centuries the pink has faded, making some of the faces look green.
Modern history
In the 18th and 19th century
The 18th and 19th centuries brought the discovery and production of synthetic green pigments and dyes, which rapidly replaced the earlier mineral and vegetable pigments and dyes. These new dyes were more stable and brilliant than the vegetable dyes, but some contained high levels of arsenic, and were eventually banned.
In the 18th and 19th centuries, green was associated with the romantic movement in literature and art. The German poet and philosopher Goethe declared that green was the most restful color, suitable for decorating bedrooms. Painters such as John Constable and Jean-Baptiste-Camille Corot depicted the lush green of rural landscapes and forests. Green was contrasted to the smoky grays and blacks of the Industrial Revolution.
The second half of the 19th century saw the use of green in art to create specific emotions, not just to imitate nature. One of the first to make color the central element of his picture was the American artist James McNeill Whistler, who created a series of paintings called "symphonies" or "nocturnes" of color, including Symphony in Gray and Green: The Ocean, painted between 1866 and 1872.
The late 19th century also brought the systematic study of color theory, and particularly the study of how complementary colors such as red and green reinforced each other when they were placed next to each other. These studies were avidly followed by artists such as Vincent van Gogh. Describing his painting, The Night Cafe, to his brother Theo in 1888, Van Gogh wrote: "I sought to express with red and green the terrible human passions. The hall is blood red and pale yellow, with a green billiard table in the center, and four lamps of lemon yellow, with rays of orange and green. Everywhere it is a battle and antithesis of the most different reds and greens."
In the 20th and 21st century
In the 1980s, green became a political symbol, the color of the Green Party in Germany and in many other European countries. It symbolized the environmental movement, and also a new politics of the left which rejected traditional socialism and communism. (See section below.)
Symbolism and associations
Safety and permission
Green can communicate safety to proceed, as in traffic lights. Green and red were standardized as the colors of international railroad signals in the 19th century. The first traffic light, using green and red gas lamps, was erected in 1868 in front of the Houses of Parliament in London. It exploded the following year, injuring the policeman who operated it. In 1912, the first modern electric traffic lights were put up in Salt Lake City, Utah. Red was chosen largely because of its high visibility, and its association with danger, while green was chosen largely because it could not be mistaken for red. Today green lights universally signal that a system is turned on and working as it should. In many video games, green signifies both health and completed objectives, the opposite of red.
Nature, vivacity, and life
Green is the color most commonly associated in Europe and the United States with nature, vivacity and life.
It is the color of many environmental organizations, such as Greenpeace, and of the Green Parties in Europe. Many cities have designated a garden or park as a green space, and use green trash bins and containers. A green cross is commonly used to designate pharmacies in Europe.
In China, green is associated with the east, with sunrise, and with life and growth. In Thailand, the color green is considered auspicious for those born on a Wednesday (light green for those born at night).
Springtime, freshness, and hope
Green is the color most commonly associated in the United States and Europe with springtime, freshness, and hope. Green is often used to symbolize rebirth and renewal and immortality. In Ancient Egypt, the god Osiris, king of the underworld, was depicted as green-skinned. Green as the color of hope is connected with the color of springtime; hope represents the faith that things will improve after a period of difficulty, like the renewal of flowers and plants after the winter season.
Youth and inexperience
Green is the color most commonly associated in Europe and the United States with youth. It is also often used to describe anyone young or inexperienced, probably by analogy to immature and unripe fruit. Examples include green cheese, a term for a fresh, unaged cheese, and greenhorn, an inexperienced person.
Food and diet
The color green has been increasingly used by food companies, governments, and practitioners themselves to identify veganism and vegetarianism. The government of India requires food that is vegetarian to be marked with a green circle as part of the Food Safety and Standards Act of 2006 with changes to symbolism since but still maintaining the color green. In 2021, India introduced a green V to exclusively label vegan options. In the west, the V-Label, a green V designed by the European Vegetarian Union, has been used by food distributors to label vegan and vegetarian options.
Calm, tolerance, and the agreeable
Surveys also show that green is the color most associated with the calm, the agreeable, and tolerance. Red is associated with heat, blue with cold, and green with an agreeable temperature. Red is associated with dry, blue with wet, and green, in the middle, with dampness. Red is the most active color, blue the most passive; green, in the middle, is the color of neutrality and calm, sometimes used in architecture and design for these reasons.
Blue and green together symbolize harmony and balance. Experimental studies also show this calming effect in a statistically significant decrease of negative emotions and increase of creative performance.
Jealousy and envy
Green is often associated with jealousy and envy. The expression "green-eyed monster" was first used by William Shakespeare in Othello: "it is the green-eyed monster which doth mock the meat it feeds on." Shakespeare also used it in the Merchant of Venice, speaking of "green-eyed jealousy".
Love and sexuality
Green today is not commonly associated in Europe and the United States with love and sexuality, but in stories of the medieval period it sometimes represented love and the base, natural desires of man. It was the color of the serpent in the Garden of Eden who caused the downfall of Adam and Eve. However, for the troubadours, green was the color of growing love, and light green clothing was reserved for young women who were not yet married.
In Persian and Sudanese poetry, dark-skinned women, called "green" women, were considered erotic. The Chinese term for cuckold is "to wear a green hat." This was because in ancient China, prostitutes were called "the family of the green lantern" and a prostitute's family would wear a green headscarf.
In Victorian England, the color green was associated with homosexuality.
Dragons, fairies, monsters, and devils
In legends, folk tales and films, fairies, dragons, monsters, and the devil are often shown as green.
In the Middle Ages, the devil was usually shown as either red, black or green. Dragons were usually green, because they had the heads, claws and tails of reptiles.
Modern Chinese dragons are also often green, but unlike European dragons, they are benevolent; Chinese dragons traditionally symbolize potent and auspicious powers, particularly control over water, rainfall, hurricane, and floods. The dragon is also a symbol of power, strength, and good luck. The Emperor of China usually used the dragon as a symbol of his imperial power and strength. The dragon dance is a popular feature of Chinese festivals.
In Irish and English folklore, the color was sometimes associated with witchcraft, and with faeries and spirits. The type of Irish fairy known as a leprechaun is commonly portrayed wearing a green suit, though before the 20th century he was usually described as wearing a red suit.
In theater and film, green was often connected with monsters and the inhuman. The earliest films of Frankenstein were in black and white, but in the poster for the 1935 version The Bride of Frankenstein, the monster had a green face. Actor Bela Lugosi wore green-hued makeup for the role of Dracula in the 1927–1928 Broadway stage production.
Poison and sickness
Like other common colors, green has several completely opposite associations. While it is the color most associated by Europeans and Americans with good health, it is also the color most often associated with toxicity and poison. There was a solid foundation for this association; in the nineteenth century several popular paints and pigments, notably verdigris, vert de Schweinfurt and vert de Paris, were highly toxic, containing copper or arsenic. The intoxicating drink absinthe was known as "the green fairy".
A green tinge in the skin is sometimes associated with nausea and sickness. The expression 'green at the gills' means appearing sick. The color, when combined with gold, is sometimes seen as representing the fading of youth. In some Far East cultures the color green is used as a symbol of sickness or nausea.
Social status, prosperity and the dollar
Green in Europe and the United States is sometimes associated with status and prosperity. From the Middle Ages to the 19th century it was often worn by bankers, merchants, country gentlemen, and others who were wealthy but not members of the nobility. The benches in the House of Commons of the United Kingdom, where the landed gentry sat, are colored green.
In the United States green was connected with the dollar bill. Since 1861, the reverse side of the dollar bill has been green. Green was originally chosen because it deterred counterfeiters, who tried to use early camera equipment to duplicate banknotes. Also, since the banknotes were thin, the green on the back did not show through and muddle the pictures on the front of the banknote. Green continues to be used because the public now associates it with a strong and stable currency.
One of the more notable uses of this meaning is found in The Wonderful Wizard of Oz. The Emerald City in this story is a place where everyone wears tinted glasses that make everything appear green. According to the populist interpretation of the story, the city's color is used by the author, L. Frank Baum, to illustrate the financial system of America in his day, as he lived in a time when America was debating the use of paper money versus gold.
On flags
The flag of Italy (1797) was modeled after the French tricolor. It was originally the flag of the Cisalpine Republic, whose capital was Milan; red and white were the colors of Milan, and green was the color of the military uniforms of the army of the Cisalpine Republic. Other versions say it is the color of the Italian landscape, or symbolizes hope.
The flag of Brazil has a green field adapted from the flag of the Empire of Brazil. The green represented the royal family.
The flag of India was inspired by an earlier flag of the independence movement of Gandhi, which had a red band for Hinduism and a green band representing Islam, the second largest religion in India.
The flag of Pakistan symbolizes Pakistan's commitment to Islam and to equal rights for religious minorities: the larger portion of the flag (a 3:2 ratio) is dark green, representing the Muslim majority (about 98% of the population), while a white vertical bar at the mast (a 3:1 ratio) represents equal rights for religious minorities and minority religions in the country. The crescent and star symbolize progress and a bright future, respectively.
The flag of Bangladesh has a green field based on a similar flag used during the Bangladesh Liberation War of 1971. It consists of a red disc on top of a green field. The red disc represents the sun rising over Bengal, and also the blood of those who died for the independence of Bangladesh. The green field stands for the lushness of the land of Bangladesh.
The flag of the international constructed language Esperanto has a green field and a green star in a white area. The green represents hope ("esperanto" means "one who hopes"), the white represents peace and neutrality and the star represents the five inhabited continents.
Green is one of the three colors (along with red and black, or red and gold) of Pan-Africanism. Several African countries thus use the color on their flags, including Nigeria, South Africa, Ghana, Senegal, Mali, Ethiopia, Togo, Guinea, Benin, and Zimbabwe. The Pan-African colors are borrowed from the Ethiopian flag, one of the oldest independent African countries. Green on some African flags represents the natural richness of Africa.
Many flags of the Islamic world are green, as the color is considered sacred in Islam (see below). The flag of Hamas, as well as the flag of Iran, is green, symbolizing their Islamist ideology. The 1977 flag of Libya consisted of a simple green field with no other characteristics. It was the only national flag in the world with just one color and no design, insignia, or other details. Some countries used green in their flags to represent their country's lush vegetation, as in the flag of Jamaica, and hope in the future, as in the flags of Portugal and Nigeria. The green cedar of Lebanon tree on the Flag of Lebanon officially represents steadiness and tolerance.
Green is a symbol of Ireland, which is often referred to as the "Emerald Isle". The color is particularly identified with the republican and nationalist traditions in modern times. It is used this way on the flag of the Republic of Ireland, in balance with white and the Protestant orange. Green is also strongly associated with the Irish holiday of St. Patrick's Day.
In politics
The first recorded green party was a political faction in Constantinople during the 6th-century Byzantine Empire, which took its name from a popular chariot racing team. They were bitter opponents of the blue faction, which supported Emperor Justinian I and had its own chariot racing team. In 532 AD rioting between the factions began after one race, leading to the massacre of green supporters and the destruction of much of the center of Constantinople (see Nika riots).
Green was the traditional color of Irish nationalism, beginning in the 17th century. The green harp flag, with a traditional gaelic harp, became the symbol of the movement. It was the banner of the Society of United Irishmen, which organized the ultimately unsuccessful Irish Rebellion of 1798. When Ireland achieved independence in 1922, green was incorporated into the national flag.
In the 1970s, green became the color of the third biggest Swiss Federal Council political party, the Swiss People's Party (SVP). Its ideology combines Swiss nationalism, national conservatism, right-wing populism, economic liberalism, agrarianism, isolationism, and euroscepticism. The SVP was founded on September 22, 1971, and has 90,000 members.
In the 1980s, green became the color of a number of new European political parties organized around an agenda of environmentalism. Green was chosen for its association with nature, health, and growth. The largest green party in Europe is Alliance '90/The Greens (German: Bündnis 90/Die Grünen) in Germany, which was formed in 1993 from the merger of the German Green Party, founded in West Germany in 1980, and Alliance 90, founded during the Revolution of 1989–1990 in East Germany. In the 2009 federal elections, the party won 11% of the votes and 68 out of 622 seats in the Bundestag.
Green parties in Europe have programs based on ecology, grassroots democracy, nonviolence, and social justice. Green parties are found in over one hundred countries, and most are members of the Global Green Network.
Greenpeace is a non-governmental environmental organization which emerged from the anti-nuclear and peace movements in the 1970s. Its ship, the Rainbow Warrior, frequently tried to interfere with nuclear tests and whaling operations. The movement now has branches in forty countries.
The Australian Greens was founded in 1992. In the 2010 federal election, the party received 13% of the vote (more than 1.6 million votes) in the Senate, a first for any Australian minor party.
Green is the color associated with Puerto Rico's Independence Party, the smallest of that country's three major political parties, which advocates Puerto Rican independence from the United States.
In Indonesia, green is used by several Islamist political parties, including the National Awakening Party, the Crescent Star Party, the United Development Party, and the local Aceh Just and Prosperous Party.
In Taiwan, green is used by the Democratic Progressive Party and is associated with the Taiwan independence movement.
In religion
Green is the traditional color of Islam. According to tradition, the robe and banner of Muhammad were green, and according to the Koran (XVIII, 31 and LXXVI, 21) those fortunate enough to live in paradise wear green silk robes. Muhammad is quoted in a hadith as saying that "water, greenery, and a beautiful face" were three universally good things. Green was accordingly adopted as a Shi'a color.
Al-Khidr ("The Green One"), was an important Qur'anic figure who was said to have met and traveled with Moses. He was given that name because of his role as a diplomat and negotiator. Green was also considered to be the median color between light and obscurity.
Roman Catholic and more traditional Protestant clergy wear green vestments at liturgical celebrations during Ordinary Time. In the Eastern Catholic Church, green is the color of Pentecost. Green is one of the Christmas colors as well, possibly dating back to pre-Christian times, when evergreens were worshiped for their ability to maintain their color through the winter season. Romans used green holly and evergreen as decorations for their winter solstice celebration called Saturnalia, which eventually evolved into a Christmas celebration. In Ireland and Scotland especially, green is used to represent Catholics, while orange is used to represent Protestantism. This is shown on the national flag of Ireland.
In Paganism, green represents abundance, growth, wealth, renewal, and balance. In magickal practices, green is often used to bring money and luck. One figure who shares parallels with various deities is the Green Man.
In gambling and sports
Gambling tables in a casino are traditionally green. The tradition is said to have started in gambling rooms in Venice in the 16th century.
Billiards tables are traditionally covered with green woolen cloth. The first indoor tables, dating to the 15th century, were colored green after the grass courts used for the similar lawn games of the period.
Green was the traditional color worn by hunters in the 19th century, particularly the shade called hunter green. In the 20th century most hunters began wearing the color olive drab, a shade of green, instead of hunter green.
Green is a common color for sports teams. Well-known teams include A.S. Saint-Étienne of France, known as Les Verts (The Greens). The Green Bay Packers, an American football team, has the color in its official name and wears green uniforms. The NBA basketball team Boston Celtics is known for the green and white colors. In Israel, the green and white colors are identified with Maccabi Haifa F.C., a successful football club known as "The Greens". A number of national soccer teams feature the color, with the color usually reflective of the teams' national flag.
British racing green was the international motor racing color of Britain from the early 1900s until the 1960s, when it was replaced by the colors of the sponsoring automobile companies.
A green belt in karate, taekwondo, and judo symbolizes a level of proficiency in the sport.
Idioms and expressions
Having a green thumb (American English) or green fingers (British English). To be passionate about or talented at gardening. The expression was popularized beginning in 1925 by a BBC gardening program.
Greenhorn. Someone who is inexperienced.
Green-eyed monster. Refers to jealousy. (See section above on jealousy and envy).
Greenmail. A term used in finance and corporate takeovers. It refers to the practice of a company paying a high price to buy back shares of its own stock to prevent an unfriendly takeover by another company or businessman. The term originated on Wall Street in the 1980s and derives from the green of dollars.
Green room. A room at a theater where actors rest when not onstage, or a room at a television studio where guests wait before going on-camera. It originated in the late 17th century from a room of that color at the Theatre Royal, Drury Lane in London.
Greenwashing. Environmental activists sometimes use this term to describe the advertising of a company that promotes its positive environmental practices to cover up its environmental destruction.
Green around the gills. A description of a person who looks physically ill.
Going green. An expression commonly used to refer to preserving the natural environment, and participating in activities such as recycling materials.
Looking green. A description of a person who looks revolted or repulsed.
See also
Shades of green
Green pigments
Notes
References
Cited texts
External links
Green All Over—slideshow by Life magazine
Primary colors
Secondary colors
Optical spectrum
Rainbow colors
Web colors
LGBTQ symbols | Green | [
"Physics"
] | 9,295 | [
"Optical spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
12,461 | https://en.wikipedia.org/wiki/Gradient | In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field (or vector-valued function) $\nabla f$ whose value at a point $p$ gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of $f$. If the gradient of a function is non-zero at a point $p$, the direction of the gradient is the direction in which the function increases most quickly from $p$, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function $f(\mathbf{r})$ may be defined by:
$$df = \nabla f \cdot d\mathbf{r},$$
where $df$ is the total infinitesimal change in $f$ for an infinitesimal displacement $d\mathbf{r}$, and is seen to be maximal when $d\mathbf{r}$ is in the direction of the gradient $\nabla f$. The nabla symbol $\nabla$, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.
When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of $f$ at $p$. That is, for $f \colon \mathbb{R}^n \to \mathbb{R}$, its gradient $\nabla f \colon \mathbb{R}^n \to \mathbb{R}^n$ is defined at the point $p = (x_1, \ldots, x_n)$ in n-dimensional space as the vector
$$\nabla f(p) = \left( \frac{\partial f}{\partial x_1}(p), \ldots, \frac{\partial f}{\partial x_n}(p) \right).$$
Note that the above definition for gradient is defined for the function $f$ only if $f$ is differentiable at $p$. There can be functions for which partial derivatives exist in every direction but fail to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account.
For example, the function $f(x, y) = \frac{x^2 y}{x^2 + y^2}$ unless $(x, y) = (0, 0)$, where $f(0, 0) = 0$, is not differentiable at the origin as it does not have a well defined tangent plane despite having well defined partial derivatives in every direction at the origin. In this particular example, under rotation of the x-y coordinate system, the above formula for gradient fails to transform like a vector (the gradient becomes dependent on the choice of basis for the coordinate system) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions where the formula for gradient holds, it can be shown to always transform as a vector under transformation of the basis so as to always point towards the fastest increase.
The gradient is dual to the total derivative : the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear functional on vectors. They are related in that the dot product of the gradient of at a point with another tangent vector equals the directional derivative of at of the function along ; that is, .
The gradient admits multiple generalizations to more general functions on manifolds; see the Generalizations section below.
Motivation
Consider a room where the temperature is given by a scalar field, $T$, so at each point $(x, y, z)$ the temperature is $T(x, y, z)$, independent of time. At each point in the room, the gradient of $T$ at that point will show the direction in which the temperature rises most quickly, moving away from $(x, y, z)$. The magnitude of the gradient will determine how fast the temperature rises in that direction.
Consider a surface whose height above sea level at point $(x, y)$ is $H(x, y)$. The gradient of $H$ at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.
The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, as the dot product measures how much the unit vector along the road aligns with the steepest slope, which is 40% times the cosine of 60°, or 20%.
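As a quick numeric check of the arithmetic in this example (a minimal Python sketch; the 40% grade and the 60° angle are the figures given above):

```python
import math

# Slope along a road that deviates from the uphill (gradient) direction:
# it is the dot product of the gradient with a unit vector along the road,
# i.e. the steepest slope scaled by the cosine of the angle between them.
steepest_slope = 0.40            # 40% grade straight uphill
angle_deg = 60.0                 # road direction relative to uphill

slope_along_road = steepest_slope * math.cos(math.radians(angle_deg))
print(f"{slope_along_road:.0%}")  # -> 20%
```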
More generally, if the hill height function $H$ is differentiable, then the gradient of $H$ dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of $H$ along the unit vector.
Notation
The gradient of a function $f$ at point $p$ is usually written as $\nabla f(p)$. It may also be denoted by any of the following:
$\vec{\nabla} f(p)$: to emphasize the vector nature of the result.
$\partial_i f$ and $f_{,i}$: written with Einstein notation, where repeated indices ($i$) are summed over.
Definition
The gradient (or gradient vector field) of a scalar function $f(x_1, x_2, \ldots, x_n)$ is denoted $\nabla f$ or $\vec{\nabla} f$, where $\nabla$ (nabla) denotes the vector differential operator, del. The notation $\operatorname{grad} f$ is also commonly used to represent the gradient. The gradient of $f$ is defined as the unique vector field whose dot product with any vector $\mathbf{v}$ at each point $x$ is the directional derivative of $f$ along $\mathbf{v}$. That is,
$$(\nabla f(x)) \cdot \mathbf{v} = D_{\mathbf{v}} f(x),$$
where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is dual to the gradient; see relationship with derivative.
When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).
The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
Cartesian coordinates
In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by
$$\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k},$$
where $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ are the standard unit vectors in the directions of the $x$, $y$ and $z$ coordinates, respectively. For example, the gradient of the function
$$f(x, y, z) = 2x + 3y^2 - \sin(z)$$
is
$$\nabla f = 2\mathbf{i} + 6y\,\mathbf{j} - \cos(z)\,\mathbf{k},$$
or
$$\nabla f(x, y, z) = \begin{pmatrix} 2 \\ 6y \\ -\cos z \end{pmatrix}.$$
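As a sanity check, the short Python sketch below compares the analytic gradient of this example function against a central finite-difference approximation (NumPy is assumed to be available; the probe point is arbitrary):

```python
import numpy as np

def f(x, y, z):
    # The example function above: f(x, y, z) = 2x + 3y^2 - sin(z)
    return 2*x + 3*y**2 - np.sin(z)

def grad_f(x, y, z):
    # Its analytic gradient: (2, 6y, -cos z)
    return np.array([2.0, 6*y, -np.cos(z)])

def numerical_grad(func, p, h=1e-6):
    # Central finite differences, one coordinate at a time.
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (func(*(p + e)) - func(*(p - e))) / (2 * h)
    return g

p = (1.0, 2.0, 3.0)
print(grad_f(*p))             # [ 2.  12.  0.98999...]
print(numerical_grad(f, p))   # numerically the same
```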
In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
Cylindrical and spherical coordinates
In cylindrical coordinates with a Euclidean metric, the gradient is given by:
$$\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho} \mathbf{e}_\rho + \frac{1}{\rho} \frac{\partial f}{\partial \varphi} \mathbf{e}_\varphi + \frac{\partial f}{\partial z} \mathbf{e}_z,$$
where $\rho$ is the axial distance, $\varphi$ is the azimuthal or azimuth angle, $z$ is the axial coordinate, and $\mathbf{e}_\rho$, $\mathbf{e}_\varphi$ and $\mathbf{e}_z$ are unit vectors pointing along the coordinate directions.
In spherical coordinates, the gradient is given by:
$$\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r} \mathbf{e}_r + \frac{1}{r} \frac{\partial f}{\partial \theta} \mathbf{e}_\theta + \frac{1}{r \sin\theta} \frac{\partial f}{\partial \varphi} \mathbf{e}_\varphi,$$
where $r$ is the radial distance, $\varphi$ is the azimuthal angle and $\theta$ is the polar angle, and $\mathbf{e}_r$, $\mathbf{e}_\theta$ and $\mathbf{e}_\varphi$ are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).
For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
General coordinates
We consider general coordinates, which we write as , where is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so refers to the second component—not the quantity squared. The index variable refers to an arbitrary element . Using Einstein notation, the gradient can then be written as:
(Note that its dual is ),
where and refer to the unnormalized local covariant and contravariant bases respectively, is the inverse metric tensor, and the Einstein summation convention implies summation over i and j.
If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as and , using the scale factors (also known as Lamé coefficients) :
(and ),
where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, , , and are neither contravariant nor covariant.
The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
Relationship with derivative
Relationship with total derivative
The gradient is closely related to the total derivative (total differential) $df$: they are transpose (dual) to each other. Using the convention that vectors in $\mathbb{R}^n$ are represented by column vectors, and that covectors (linear maps $\mathbb{R}^n \to \mathbb{R}$) are represented by row vectors, the gradient $\nabla f$ and the derivative $df$ are expressed as a column and row vector, respectively, with the same components, but transpose of each other:
$$\nabla f(p) = \begin{pmatrix} \frac{\partial f}{\partial x_1}(p) \\ \vdots \\ \frac{\partial f}{\partial x_n}(p) \end{pmatrix}, \qquad df_p = \begin{pmatrix} \frac{\partial f}{\partial x_1}(p) & \cdots & \frac{\partial f}{\partial x_n}(p) \end{pmatrix}.$$
While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, $\nabla f(p) \in T_p\mathbb{R}^n$, while the derivative is a map from the tangent space to the real numbers, $df_p \colon T_p\mathbb{R}^n \to \mathbb{R}$. The tangent spaces at each point of $\mathbb{R}^n$ can be "naturally" identified with the vector space $\mathbb{R}^n$ itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space $(\mathbb{R}^n)^*$ of covectors; thus the value of the gradient at a point can be thought of as a vector in the original $\mathbb{R}^n$, not just as a tangent vector.
Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient:
$$(df_p)(v) = \nabla f(p) \cdot v.$$
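A minimal numeric illustration of this point (Python with NumPy; the component values are arbitrary): applying the derivative, stored as a row vector, to a tangent vector gives the same number as the dot product with the gradient.

```python
import numpy as np

grad = np.array([2.0, 12.0, 0.99])   # gradient at some point (column vector)
deriv = grad.reshape(1, -1)          # same components as a 1 x n row vector
v = np.array([0.5, -1.0, 2.0])       # an arbitrary tangent vector

print(deriv @ v)   # matrix product of derivative with v: [-9.02]
print(grad @ v)    # dot product of gradient with v:       -9.02
```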
Differential or (exterior) derivative
The best linear approximation to a differentiable function
$$f \colon \mathbb{R}^n \to \mathbb{R}$$
at a point $x$ in $\mathbb{R}^n$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}$ which is often denoted by $df_x$ or $Df(x)$ and called the differential or total derivative of $f$ at $x$. The function $df$, which maps $x$ to $df_x$, is called the total differential or exterior derivative of $f$ and is an example of a differential 1-form.
Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.
The gradient is related to the differential by the formula
$$(\nabla f)_x \cdot v = df_x(v)$$
for any $v \in \mathbb{R}^n$, where $\cdot$ is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.
If $\mathbb{R}^n$ is viewed as the space of (dimension $n$) column vectors (of real numbers), then one can regard $df$ as the row vector with components
$$\left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right),$$
so that $df_x(v)$ is given by matrix multiplication. Assuming the standard Euclidean metric on $\mathbb{R}^n$, the gradient is then the corresponding column vector, that is,
$$\nabla f = (df)^{\mathsf{T}}.$$
Linear approximation to a function
The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function $f$ from the Euclidean space $\mathbb{R}^n$ to $\mathbb{R}$ at any particular point $x_0$ in $\mathbb{R}^n$ characterizes the best linear approximation to $f$ at $x_0$. The approximation is as follows:
$$f(x) \approx f(x_0) + (\nabla f)(x_0) \cdot (x - x_0)$$
for $x$ close to $x_0$, where $(\nabla f)(x_0)$ is the gradient of $f$ computed at $x_0$, and the dot denotes the dot product on $\mathbb{R}^n$. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of $f$ at $x_0$.
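A brief Python sketch of this first-order approximation; the function f(x, y) = x² + y² and the probe point are chosen only for illustration:

```python
import numpy as np

def f(x, y):
    return x**2 + y**2

def grad_f(x, y):
    return np.array([2*x, 2*y])

x0 = np.array([1.0, 2.0])        # expansion point
dx = np.array([0.01, -0.02])     # a small displacement

exact = f(*(x0 + dx))                   # f(x)
linear = f(*x0) + grad_f(*x0) @ dx      # f(x0) + grad f(x0) . (x - x0)
print(exact, linear)   # 4.9405 vs 4.94 -- close for small displacements
```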
Relationship with Fréchet derivative
Let $U$ be an open set in $\mathbb{R}^n$. If the function $f \colon U \to \mathbb{R}$ is differentiable, then the differential of $f$ is the Fréchet derivative of $f$. Thus $\nabla f$ is a function from $U$ to the space $\mathbb{R}^n$ such that
$$\lim_{h \to 0} \frac{|f(x + h) - f(x) - \nabla f(x) \cdot h|}{\|h\|} = 0,$$
where $\cdot$ is the dot product.
As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:
Linearity
The gradient is linear in the sense that if $f$ and $g$ are two real-valued functions differentiable at the point $a \in \mathbb{R}^n$, and $\alpha$ and $\beta$ are two constants, then $\alpha f + \beta g$ is differentiable at $a$, and moreover
$$\nabla(\alpha f + \beta g)(a) = \alpha \nabla f(a) + \beta \nabla g(a).$$
Product rule
If $f$ and $g$ are real-valued functions differentiable at a point $a \in \mathbb{R}^n$, then the product rule asserts that the product $fg$ is differentiable at $a$, and
$$\nabla(fg)(a) = f(a) \nabla g(a) + g(a) \nabla f(a).$$
Chain rule
Suppose that $f \colon A \to \mathbb{R}$ is a real-valued function defined on a subset $A$ of $\mathbb{R}^n$, and that $f$ is differentiable at a point $a$. There are two forms of the chain rule applying to the gradient. First, suppose that the function $g$ is a parametric curve; that is, a function $g \colon I \to \mathbb{R}^n$ maps a subset $I \subset \mathbb{R}$ into $\mathbb{R}^n$. If $g$ is differentiable at a point $c \in I$ such that $g(c) = a$, then
$$(f \circ g)'(c) = \nabla f(a) \cdot g'(c),$$
where ∘ is the composition operator: $(f \circ g)(x) = f(g(x))$.
More generally, if instead $I \subset \mathbb{R}^k$, then the following holds:
$$\nabla (f \circ g)(c) = \big(Dg(c)\big)^{\mathsf{T}} \big(\nabla f(a)\big),$$
where $(Dg)^{\mathsf{T}}$ denotes the transpose Jacobian matrix.
For the second form of the chain rule, suppose that $h \colon I \to \mathbb{R}$ is a real valued function on a subset $I$ of $\mathbb{R}$, and that $h$ is differentiable at the point $f(a) \in I$. Then
$$\nabla (h \circ f)(a) = h'\big(f(a)\big) \nabla f(a).$$
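The first form of the chain rule can be checked numerically; in the Python sketch below (the curve and function are invented for illustration), the derivative of f along a curve matches the dot product of the gradient with the curve's velocity.

```python
import numpy as np

def f(x, y):
    return x * y**2

def grad_f(x, y):
    return np.array([y**2, 2*x*y])

def r(t):
    # A parametric curve t -> (cos t, sin t)
    return np.array([np.cos(t), np.sin(t)])

def r_prime(t):
    return np.array([-np.sin(t), np.cos(t)])

t, h = 0.7, 1e-6
chain = grad_f(*r(t)) @ r_prime(t)                # grad f(r(t)) . r'(t)
direct = (f(*r(t + h)) - f(*r(t - h))) / (2 * h)  # d/dt f(r(t)), numerically
print(chain, direct)   # the two values agree to roughly 1e-10
```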
Further properties and applications
Level sets
A level surface, or isosurface, is the set of all points where some function has a given value.
If $F$ is differentiable, then the dot product $(\nabla F)_x \cdot v$ of the gradient at a point $x$ with a vector $v$ gives the directional derivative of $F$ at $x$ in the direction $v$. It follows that in this case the gradient of $F$ is orthogonal to the level sets of $F$. For example, a level surface in three-dimensional space is defined by an equation of the form $F(x, y, z) = c$. The gradient of $F$ is then normal to the surface.
More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form $F(P) = 0$ such that $dF$ is nowhere zero. The gradient of $F$ is then normal to the hypersurface.
Similarly, an affine algebraic hypersurface may be defined by an equation $F(x_1, \ldots, x_n) = 0$, where $F$ is a polynomial. The gradient of $F$ is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
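A concrete check that the gradient is normal to a level surface (a minimal Python sketch using the unit sphere as the level set):

```python
import numpy as np

# Level surface F(x, y, z) = x^2 + y^2 + z^2 - 1 = 0, the unit sphere.
# Its gradient, grad F = (2x, 2y, 2z), points radially outward.
def grad_F(p):
    return 2 * np.asarray(p)

p = np.array([0.6, 0.0, 0.8])      # a point on the unit sphere
n = grad_F(p)                      # gradient at p

t = np.array([0.8, 0.0, -0.6])     # a tangent vector at p (t . p = 0)
print(n @ t)                       # 0.0 -> gradient is normal to the surface
```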
Conservative vector fields and the gradient theorem
The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
Gradient is direction of steepest ascent
The gradient of a function $f$ at point $x$ is also the direction of its steepest ascent, i.e. it maximizes its directional derivative:
Let $\mathbf{v}$ be an arbitrary unit vector. With the directional derivative defined as
$$D_{\mathbf{v}} f(x) = \lim_{h \to 0} \frac{f(x + h\mathbf{v}) - f(x)}{h},$$
we get, by substituting the function $f$ with its Taylor series,
$$D_{\mathbf{v}} f(x) = \lim_{h \to 0} \frac{h\, \nabla f(x) \cdot \mathbf{v} + o(h)}{h},$$
where $o(h)$ denotes higher order terms in $h$.
Dividing by $h$, and taking the limit, yields a term which is bounded from above by the Cauchy-Schwarz inequality
$$D_{\mathbf{v}} f(x) = \nabla f(x) \cdot \mathbf{v} \le |\nabla f(x)|.$$
Choosing $\mathbf{v}^* = \nabla f(x) / |\nabla f(x)|$ maximizes the directional derivative, and equals the upper bound
$$D_{\mathbf{v}^*} f(x) = |\nabla f(x)|.$$
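This can also be seen numerically; the brute-force Python sketch below (the function is chosen arbitrarily) scans unit vectors and finds the directional derivative peaking in the gradient's direction, with maximum equal to the gradient's norm.

```python
import numpy as np

def grad_f(x, y):
    # Gradient of f(x, y) = x^2 + 3y at the point (x, y)
    return np.array([2*x, 3.0])

g = grad_f(1.0, 1.0)                         # gradient at (1, 1): (2, 3)
angles = np.linspace(0, 2*np.pi, 3601)
units = np.stack([np.cos(angles), np.sin(angles)], axis=1)
dir_derivs = units @ g                       # directional derivatives grad . v

best = units[np.argmax(dir_derivs)]
print(best, g / np.linalg.norm(g))           # best direction ~ grad / |grad|
print(dir_derivs.max(), np.linalg.norm(g))   # max value ~ |grad|
```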
Generalizations
Jacobian
The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative.
Suppose $f \colon \mathbb{R}^n \to \mathbb{R}^m$ is a function such that each of its first-order partial derivatives exist on $\mathbb{R}^n$. Then the Jacobian matrix of $f$ is defined to be an $m \times n$ matrix, denoted by $\mathbf{J}_f$ or simply $\mathbf{J}$. The $(i, j)$th entry is $\mathbf{J}_{ij} = \partial f_i / \partial x_j$. Explicitly,
$$\mathbf{J} = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{pmatrix}.$$
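A generic central-difference Jacobian in Python (a sketch; the vector-valued map F is invented for illustration, and each row of the result is the gradient of one component function):

```python
import numpy as np

def F(p):
    # An example map F : R^2 -> R^3
    x, y = p
    return np.array([x * y, np.sin(x), x + y**2])

def jacobian(func, p, h=1e-6):
    # m x n Jacobian by central differences, one input coordinate at a time.
    p = np.asarray(p, dtype=float)
    m, n = len(func(p)), len(p)
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (func(p + e) - func(p - e)) / (2 * h)
    return J

print(jacobian(F, [1.0, 2.0]))
# Analytic Jacobian: [[y, x], [cos x, 0], [1, 2y]] = [[2, 1], [0.540, 0], [1, 4]]
```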
Gradient of a vector field
Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity.
In rectangular coordinates, the gradient of a vector field is defined by:
(where the Einstein summation notation is used and the tensor product of the vectors and is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
where are the components of the inverse metric tensor and the are the coordinate basis vectors.
Expressed more invariantly, the gradient of a vector field can be defined by the Levi-Civita connection and metric tensor:
where is the connection.
Riemannian manifolds
For any smooth function on a Riemannian manifold , the gradient of is the vector field such that for any vector field ,
that is,
where denotes the inner product of tangent vectors at defined by the metric and is the function that takes any point to the directional derivative of in the direction , evaluated at . In other words, in a coordinate chart from an open subset of to an open subset of , is given by:
where denotes the th component of in this coordinate chart.
So, the local form of the gradient takes the form:
Generalizing the case , the gradient of a function is related to its exterior derivative, since
More precisely, the gradient is the vector field associated to the differential 1-form using the musical isomorphism
(called "sharp") defined by the metric . The relation between the exterior derivative and the gradient of a function on is a special case of this in which the metric is the flat metric given by the dot product.
See also
Notes
References
Further reading
External links
Differential operators
Differential calculus
Generalizations of the derivative
Linear operators in calculus
Vector calculus
Rates | Gradient | [
"Mathematics"
] | 3,358 | [
"Mathematical analysis",
"Differential operators",
"Differential calculus",
"Calculus"
] |
12,505 | https://en.wikipedia.org/wiki/Galilean%20moons | The Galilean moons, or Galilean satellites, are the four largest moons of Jupiter: Io, Europa, Ganymede, and Callisto. They are the most readily visible Solar System objects after Saturn, the dimmest of the classical planets; though their closeness to bright Jupiter makes naked-eye observation very difficult, they are readily seen with common binoculars, even under night sky conditions of high light pollution. The invention of the telescope enabled the discovery of the moons in 1610. Through this, they became the first Solar System objects discovered since humans began tracking the classical planets, and the first objects to be found to orbit any planet beyond Earth.
They are planetary-mass moons and among the largest objects in the Solar System. All four, along with Titan, Triton, and Earth's Moon, are larger than any of the Solar System's dwarf planets. The largest, Ganymede, is the largest moon in the Solar System and surpasses the planet Mercury in size (though not mass). Callisto is only slightly smaller than Mercury in size; the smaller ones, Io and Europa, are about the size of the Moon. The three inner moons — Io, Europa, and Ganymede — are in a 4:2:1 orbital resonance with each other. While the Galilean moons are spherical, all of Jupiter's remaining moons have irregular forms because they are too small for their self-gravitation to pull them into spheres.
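The 4:2:1 resonance can be checked with commonly cited orbital periods (a small Python sketch; the period values in days are approximate figures from outside this article):

```python
# Approximate sidereal orbital periods in days (commonly cited values,
# not taken from this article).
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

base = periods["Ganymede"]
for name, p in periods.items():
    print(f"{name}: {base / p:.3f} orbits per Ganymede orbit")
# Io: ~4.045, Europa: ~2.015, Ganymede: 1.000 -- close to 4:2:1
```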
The Galilean moons are named after Galileo Galilei, who observed them in either December 1609 or January 1610, and recognized them as satellites of Jupiter in March 1610; they remained the only known moons of Jupiter until the discovery of Amalthea, Jupiter's fifth-largest moon, in 1892. Galileo initially named his discovery the Cosmica Sidera ("Cosimo's stars") or Medicean Stars, but the names that eventually prevailed were chosen by Simon Marius. Marius discovered the moons independently at nearly the same time as Galileo, 8 January 1610, and gave them their present individual names, after mythological characters that Zeus seduced or abducted, which were suggested by Johannes Kepler in his Mundus Jovialis, published in 1614. Their discovery showed the importance of the telescope as a tool for astronomers by proving that there were objects in space that cannot be seen by the naked eye. The discovery of celestial bodies orbiting something other than Earth dealt a serious blow to the then-accepted (among educated Europeans) Ptolemaic world system, a geocentric theory in which everything orbits around Earth.
History
Discovery
As a result of improvements that Galileo Galilei made to the telescope, with a magnifying capability of 20×, he was able to see celestial bodies more distinctly than was previously possible. This allowed Galileo to observe in either December 1609 or January 1610 what came to be known as the Galilean moons.
On 7 January 1610, Galileo wrote a letter containing the first mention of Jupiter's moons. At the time, he saw only three of them, and he believed them to be fixed stars near Jupiter. He continued to observe these celestial orbs from 8 January to 2 March 1610. In these observations, he discovered a fourth body, and also observed that the four were not fixed stars, but rather were orbiting Jupiter.
Galileo's discovery proved the importance of the telescope as a tool for astronomers by showing that there were objects in space to be discovered that until then had remained unseen by the naked eye. More importantly, the discovery of celestial bodies orbiting something other than Earth dealt a blow to the then-accepted Ptolemaic world system, which held that Earth was at the center of the universe and all other celestial bodies revolved around it. Galileo's Sidereus Nuncius (Starry Messenger) of 13 March 1610, which announced celestial observations through his telescope, does not explicitly mention Copernican heliocentrism, a theory that placed the Sun at the center of the universe. Nevertheless, Galileo accepted the Copernican theory.
A Chinese historian of astronomy, Xi Zezong, has claimed that a "small reddish star" observed near Jupiter in 364 BCE by Chinese astronomer Gan De may have been Ganymede. If true, this might predate Galileo's discovery by around two millennia.
Simon Marius is another noted early observer of the moons; he later reported observing them in 1609. However, because he did not publish these findings until after Galileo, there is a degree of uncertainty around his records.
Names
In 1605, Galileo had been employed as a mathematics tutor for Cosimo de' Medici. In 1609, Cosimo became Grand Duke Cosimo II of Tuscany. Galileo, seeking patronage from his now-wealthy former student and his powerful family, used the discovery of Jupiter's moons to gain it. On 13 February 1610, Galileo wrote to the Grand Duke's secretary:
"God graced me with being able, through such a singular sign, to reveal to my Lord my devotion and the desire I have that his glorious name live as equal among the stars, and since it is up to me, the first discoverer, to name these new planets, I wish, in imitation of the great sages who placed the most excellent heroes of that age among the stars, to inscribe these with the name of the Most Serene Grand Duke."
Galileo initially called his discovery the Cosmica Sidera ("Cosimo's stars"), in honour of Cosimo alone. Cosimo's secretary suggested to change the name to Medicea Sidera ("the Medician stars"), honouring all four Medici brothers (Cosimo, Francesco, Carlo, and Lorenzo). The discovery was announced in the Sidereus Nuncius ("Starry Messenger"), published in Venice in March 1610, less than two months after the first observations.
On 12 March 1610, Galileo wrote his dedicatory letter to the Duke of Tuscany, and the next day sent a copy to the Grand Duke, hoping to obtain the Grand Duke's support as quickly as possible. On 19 March, he sent the telescope he had used to first view Jupiter's moons to the Grand Duke, along with an official copy of Sidereus Nuncius (The Starry Messenger) that, following the secretary's advice, named the four moons the Medician Stars. In his dedicatory introduction, Galileo wrote:
Scarcely have the immortal graces of your soul begun to shine forth on earth than bright stars offer themselves in the heavens which, like tongues, will speak of and celebrate your most excellent virtues for all time. Behold, therefore, four stars reserved for your illustrious name ... which ... make their journeys and orbits with a marvelous speed around the star of Jupiter ... like children of the same family ... Indeed, it appears the Maker of the Stars himself, by clear arguments, admonished me to call these new planets by the illustrious name of Your Highness before all others.
Other names put forward include:
I. Principharus (for the "prince" of Tuscany), II. Victripharus (after Vittoria della Rovere), III. Cosmipharus (after Cosimo de' Medici) and IV. Fernipharus (after Duke Ferdinando de' Medici) – by Giovanni Battista Hodierna, a disciple of Galileo and author of the first ephemerides (Medicaeorum Ephemerides, 1656);
Circulatores Jovis, or Jovis Comites – by Johannes Hevelius;
Gardes, or Satellites (from the Latin satelles, satellitis, meaning "escorts") – by Jacques Ozanam.
The names that eventually prevailed were chosen by Simon Marius, who discovered the moons independently at the same time as Galileo: he named them at the suggestion of Johannes Kepler after lovers of the god Zeus (the Greek equivalent of Jupiter), in his Mundus Jovialis, published in 1614:
Jupiter is much blamed by the poets on account of his irregular loves. Three maidens are especially mentioned as having been clandestinely courted by Jupiter with success. Io, daughter of the River Inachus, Callisto of Lycaon, Europa of Agenor. Then there was Ganymede, the handsome son of King Tros, whom Jupiter, having taken the form of an eagle, transported to heaven on his back, as poets fabulously tell... I think, therefore, that I shall not have done amiss if the First is called by me Io, the Second Europa, the Third, on account of its majesty of light, Ganymede, the Fourth Callisto... This fancy, and the particular names given, were suggested to me by Kepler, Imperial Astronomer, when we met at Ratisbon fair in October 1613. So if, as a jest, and in memory of our friendship then begun, I hail him as joint father of these four stars, again I shall not be doing wrong.
Galileo steadfastly refused to use Marius' names and instead invented the numbering scheme that is still used today, in parallel with proper moon names. The numbers run from Jupiter outward, thus I, II, III and IV for Io, Europa, Ganymede, and Callisto respectively. Galileo used this system in his notebooks but never actually published it. The numbered names (Jupiter x) were used until the mid-20th century, when other inner moons were discovered and Marius' names became widely used.
Determination of longitude
Galileo's discovery had practical applications. Safe navigation required accurately determining a ship's position at sea. While latitude could be measured well enough by local astronomical observations, determining longitude required knowledge of the time of each observation synchronized to the time at a reference longitude. The longitude problem was so important that large prizes were offered for its solution at various times by Spain, Holland, and Britain.
Galileo proposed determining longitude based on the timing of the orbits of the Galilean moons. The times of the eclipses of the moons could be precisely calculated in advance and compared with local observations on land or on ship to determine the local time and hence longitude. Galileo applied in 1616 for the Spanish prize of 6,000 gold ducats with a lifetime pension of 2,000 a year, and almost two decades later for the Dutch prize, but by then he was under house arrest for possible heresy.
The main problem with the Jovian moon technique was that it was difficult to observe the Galilean moons through a telescope on a moving ship, a problem that Galileo tried to solve with the invention of the celatone. Others suggested improvements, but without success.
Land mapping surveys had the same problem determining longitude, though with less severe observational conditions. The method proved practical and was used by Giovanni Domenico Cassini and Jean Picard to re-map France.
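In outline, the method works like this (a minimal Python sketch; the event times are invented for illustration): the eclipse of a moon is predicted for a known time at a reference meridian, the observer records the local time of the same event, and since Earth rotates 15 degrees per hour, the time difference gives the longitude.

```python
# Longitude from a Jovian-moon eclipse timing (illustrative values only).
predicted_reference_time = 22.00   # predicted event time at reference meridian (hours)
observed_local_time = 19.50        # local solar time of the same event (hours)

DEGREES_PER_HOUR = 360.0 / 24.0    # Earth's rotation rate: 15 degrees per hour

longitude = DEGREES_PER_HOUR * (observed_local_time - predicted_reference_time)
print(f"{abs(longitude):.1f} degrees {'west' if longitude < 0 else 'east'}")
# -> 37.5 degrees west of the reference meridian
```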
Members
Some models predict that there may have been several generations of Galilean satellites in Jupiter's early history. Each generation of moons to have formed would have spiraled into Jupiter and been destroyed, due to tidal interactions with Jupiter's proto-satellite disk, with new moons forming from the remaining debris. By the time the present generation formed, the gas in the proto-satellite disk had thinned out to the point that it no longer greatly interfered with the moons' orbits.
Other models suggest that Galilean satellites formed in a proto-satellite disk, in which formation timescales were comparable to or shorter than orbital migration timescales. Io is anhydrous and likely has an interior of rock and metal. Europa is thought to contain 8% ice and water by mass with the remainder rock. These moons are, in increasing order of distance from Jupiter:
Io
Io (Jupiter I) is the innermost of the four Galilean moons of Jupiter; with a diameter of 3642 kilometers, it is the fourth-largest moon in the Solar System and only marginally larger than Earth's Moon. It was named after Io, a priestess of Hera who became one of the lovers of Zeus. It was referred to as "Jupiter I", or "The first satellite of Jupiter", until the mid-20th century.
With over 400 active volcanoes, Io is the most geologically active object in the Solar System. Its surface is dotted with more than 100 mountains, some of which are taller than Earth's Mount Everest. Unlike most satellites in the outer Solar System (which have a thick coating of ice), Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core.
Although not proven, data from the Galileo orbiter indicates that Io might have its own magnetic field. Io has an extremely thin atmosphere made up mostly of sulfur dioxide (SO2). If a data-collection vessel were to land on Io in the future, it would have to be extremely tough (similar to the tank-like bodies of the Soviet Venera landers) to survive the radiation and magnetic fields that originate from Jupiter.
Europa
Europa (Jupiter II), the second of the four Galilean moons, is the second closest to Jupiter and the smallest at 3121.6 kilometers in diameter, which is slightly smaller than Earth's Moon. The name comes from a mythical Phoenician noblewoman, Europa, who was courted by Zeus and became the queen of Crete, though the name did not become widely used until the mid-20th century.
It has a smooth and bright surface, with a layer of water surrounding the moon's mantle, thought to be 100 kilometers thick. The upper part of this layer is ice, while the bottom is theorized to be liquid water. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath it, which could conceivably serve as an abode for extraterrestrial life. Heat energy from tidal flexing ensures that the ocean remains liquid and drives geological activity. So far, there is no evidence that life exists in Europa's under-ice ocean, but the likely presence of liquid water has spurred calls to send a probe there.
The prominent markings that criss-cross the moon seem to be mainly albedo features, which emphasize low topography. There are few craters on Europa because its surface is tectonically active and young. Some theories suggest that Jupiter's gravity is causing these markings, as one side of Europa is constantly facing Jupiter. Volcanic water eruptions splitting the surface of Europa and even geysers have also been considered as causes. The reddish-brown color of the markings is theorized to be caused by sulfur, but because no data collection devices have been sent to Europa, scientists cannot yet confirm this. Europa is primarily made of silicate rock and likely has an iron core. It has a tenuous atmosphere composed primarily of oxygen.
Ganymede
Ganymede (Jupiter III), the third Galilean moon, is named after the mythological Ganymede, cupbearer of the Greek gods and Zeus's beloved. Ganymede is the largest natural satellite in the Solar System at 5262.4 kilometers in diameter, which makes it larger than the planet Mercury – although only about half of Mercury's mass, since Ganymede is an icy world. It is the only satellite in the Solar System known to possess a magnetosphere, likely created through convection within its liquid iron core.
Ganymede is composed primarily of silicate rock and water ice, and a salt-water ocean is believed to exist nearly 200 km below Ganymede's surface, sandwiched between layers of ice. The metallic core of Ganymede suggests a greater heat at some time in its past than had previously been proposed. The surface is a mix of two types of terrain—highly cratered dark regions and younger, but still ancient, regions with a large array of grooves and ridges. Ganymede has a high number of craters, but many are gone or barely visible due to its icy crust forming over them. The satellite has a thin oxygen atmosphere that includes O, O2, and possibly O3 (ozone), and some atomic hydrogen.
Callisto
Callisto (Jupiter IV) is the fourth and outermost Galilean moon and the second-largest of the four; at 4820.6 kilometers in diameter, it is the third-largest moon in the Solar System, barely smaller than Mercury though only a third of the latter's mass. It is named after the Greek mythological nymph Callisto, a lover of Zeus who was a daughter of the Arkadian king Lykaon and a hunting companion of the goddess Artemis. The moon does not form part of the orbital resonance that affects the three inner Galilean satellites and thus does not experience appreciable tidal heating. Callisto is composed of approximately equal amounts of rock and ices, which makes it the least dense of the Galilean moons. It is one of the most heavily cratered satellites in the Solar System, and one major feature is a basin around 3000 km wide called Valhalla.
Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen. Investigation revealed that Callisto may possibly have a subsurface ocean of liquid water at depths less than 300 kilometres. The likely presence of an ocean within Callisto indicates that it can or could harbour life. However, this is less likely than on nearby Europa. Callisto has long been considered the most suitable place for a human base for future exploration of the Jupiter system since it is furthest from the intense radiation of Jupiter's magnetic field.
Comparative structure
Fluctuations in the orbits of the moons indicate that their mean density decreases with distance from Jupiter. Callisto, the outermost and least dense of the four, has a density intermediate between ice and rock whereas Io, the innermost and densest moon, has a density intermediate between rock and iron. Callisto has an ancient, heavily cratered and unaltered ice surface and the way it rotates indicates that its density is equally distributed, suggesting that it has no rocky or metallic core but consists of a homogeneous mix of rock and ice. This may well have been the original structure of all the moons. The rotation of the three inner moons, in contrast, indicates differentiation of their interiors with denser matter at the core and lighter matter above. They also reveal significant alteration of the surface. Ganymede reveals past tectonic movement of the ice surface which required partial melting of subsurface layers. Europa reveals more dynamic and recent movement of this nature, suggesting a thinner ice crust. Finally, Io, the innermost moon, has a sulfur surface, active volcanism and no sign of ice. All this evidence suggests that the nearer a moon is to Jupiter the hotter its interior.

The current model is that the moons experience tidal heating as a result of the gravitational field of Jupiter in inverse proportion to the square of their distance from the giant planet. In all but Callisto this will have melted the interior ice, allowing rock and iron to sink to the interior and water to cover the surface. In Ganymede a thick and solid ice crust then formed. In warmer Europa a thinner more easily broken crust formed. In Io the heating is so extreme that all the rock has melted and water has long ago boiled out into space.
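The inverse-square scaling stated above can be illustrated numerically. In this sketch the orbital radii are rounded reference values and the scaling is exactly the one quoted in the text; real tidal heating also depends strongly on orbital eccentricity and internal structure, so the numbers show only the trend:

```python
# Relative tidal heating per the inverse-square relation stated above,
# normalized to Io. Orbital radii are rounded semi-major axes in km.

orbital_radius_km = {
    "Io": 421_700,
    "Europa": 671_000,
    "Ganymede": 1_070_000,
    "Callisto": 1_883_000,
}

io_radius = orbital_radius_km["Io"]
for moon, radius in orbital_radius_km.items():
    print(f"{moon}: {(io_radius / radius) ** 2:.2f} x Io's heating")

# Io 1.00, Europa ~0.39, Ganymede ~0.16, Callisto ~0.05 -- reproducing the
# hot-to-cold ordering from Io outward described in the paragraph above.
```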
Origin and evolution
Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history.
Simulations suggest that, while the disk had a relatively high mass at any given moment, over time a substantial fraction (several tenths of a percent) of the mass of Jupiter captured from the Solar nebula was processed through it. However, a disk mass of only 2% that of Jupiter is required to explain the existing satellites. Thus there may have been several generations of Galilean-mass satellites in Jupiter's early history. Each generation of moons would have spiraled into Jupiter, due to drag from the disk, with new moons then forming from new debris captured from the Solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance which still exists for Io, Europa, and Ganymede. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io. Tidal dissipation in the Jovian system is still ongoing and Callisto will likely be captured into the resonance in about 1.5 billion years, creating a 1:2:4:8 chain.
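The 1:2:4 resonance among the three inner moons, and the 1:2:4:8 chain that Callisto would complete, can be checked directly against the moons' orbital periods. A small sketch using approximate published periods, in Earth days:

```python
# Ratio of each moon's orbital period to Io's, using approximate periods.

periods_days = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

io_period = periods_days["Io"]
for moon, period in periods_days.items():
    print(f"{moon}: {period / io_period:.3f} x Io's period")

# Io 1.000, Europa ~2.007, Ganymede ~4.045 -- close to the 1:2:4 chain,
# which capture of Callisto would extend to 1:2:4:8 as described above.
```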
Visibility
All four Galilean moons are bright enough to be viewed from Earth without a telescope, if only they appeared farther away from Jupiter. (They are, however, easily distinguished with even low-powered binoculars.) They have apparent magnitudes between 4.6 and 5.6 when Jupiter is in opposition with the Sun, and are about one magnitude dimmer when Jupiter is in conjunction. The main difficulty in observing the moons from Earth is their proximity to Jupiter, since they are obscured by its brightness. The maximum angular separations of the moons are between 2 and 10 arcminutes from Jupiter, which is close to the limit of human visual acuity. Ganymede and Callisto, at their maximum separation, are the likeliest targets for potential naked-eye observation.
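The quoted angular separations follow from simple geometry: a moon at orbital radius a, with Jupiter at distance d from Earth, can appear at most about arcsin(a/d) from the planet. A rough sketch with round numbers for Callisto; both input values are approximations used only to show the scale:

```python
import math

# Maximum angular separation of a moon from Jupiter, as seen from Earth.

def max_separation_arcmin(orbit_radius_km: float, distance_km: float) -> float:
    """Maximum elongation in arcminutes for a circular orbit of the given
    radius, viewed from the given Earth-Jupiter distance."""
    return math.degrees(math.asin(orbit_radius_km / distance_km)) * 60.0

# Callisto: orbital radius ~1.88 million km; Jupiter at opposition is
# roughly 630 million km from Earth.
print(round(max_separation_arcmin(1.88e6, 6.3e8), 1))  # ~10.3 arcminutes
```

The result lands at the upper end of the 2–10 arcminute range quoted above.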
Orbit animations
GIF animations depicting the Galilean moon orbits and the resonance of Io, Europa, and Ganymede
See also
Jupiter's moons in fiction
Colonization of the Jovian System
Notes
References
External links
Sky & Telescope utility for identifying Galilean moons
Interactive 3D visualisation of Jupiter and the Galilean moons
NASA's Stunning Discoveries on Jupiter's Largest Moons | Our Solar System's Moons
A Beginner's Guide to Jupiter's Moons
Dominic Ford: The Moons of Jupiter. With a chart of the current position of the Galilean moons.
Copernican Revolution
Moons of Jupiter
Moons with a prograde orbit
Solar System | Galilean moons | [ "Astronomy" ] | 4,593 | [ "Copernican Revolution", "Outer space", "Solar System", "History of astronomy" ] |
12,528 | https://en.wikipedia.org/wiki/Cis%E2%80%93trans%20isomerism | Cis–trans isomerism, also known as geometric isomerism, describes certain arrangements of atoms within molecules. The prefixes "cis" and "trans" are from Latin: "this side of" and "the other side of", respectively. In the context of chemistry, cis indicates that the functional groups (substituents) are on the same side of some plane, while trans conveys that they are on opposing (transverse) sides. Cis–trans isomers are stereoisomers, that is, pairs of molecules which have the same formula but whose functional groups are in different orientations in three-dimensional space. Cis and trans isomers occur both in organic molecules and in inorganic coordination complexes. Cis and trans descriptors are not used for cases of conformational isomerism where the two geometric forms easily interconvert, such as most open-chain single-bonded structures; instead, the terms "syn" and "anti" are used.
According to IUPAC, "geometric isomerism" is an obsolete synonym of "cis–trans isomerism".
Cis–trans or geometric isomerism is classified as one type of configurational isomerism.
Organic chemistry
Very often, cis–trans stereoisomers contain double bonds or ring structures. In both cases the rotation of bonds is restricted or prevented. When the substituent groups are oriented in the same direction, the diastereomer is referred to as cis, whereas when the substituents are oriented in opposing directions, the diastereomer is referred to as trans. An example of a small hydrocarbon displaying cis–trans isomerism is but-2-ene. 1,2-Dichlorocyclohexane is another example.
Comparison of physical properties
Cis and trans isomers have distinct physical properties. Their differing shapes influence their dipole moments, boiling points, and especially melting points.
These differences can be very small, as in the case of the boiling point of straight-chain alkenes, such as pent-2-ene, which is 37 °C in the cis isomer and 36 °C in the trans isomer. The differences between cis and trans isomers can be larger if polar bonds are present, as in the 1,2-dichloroethenes. The cis isomer in this case has a boiling point of 60.3 °C, while the trans isomer has a boiling point of 47.5 °C. In the cis isomer the two polar C–Cl bond dipole moments combine to give an overall molecular dipole, so that there are intermolecular dipole–dipole forces (or Keesom forces), which add to the London dispersion forces and raise the boiling point. In the trans isomer on the other hand, this does not occur because the two C−Cl bond moments cancel and the molecule has a net zero dipole moment (it does however have a non-zero quadrupole moment).
The two isomers of butenedioic acid, maleic acid (cis) and fumaric acid (trans), differ markedly in their properties.
Polarity is key in determining relative boiling point, as strong intermolecular forces raise the boiling point. In the same manner, symmetry is key in determining relative melting point, as it allows for better packing in the solid state, even if it does not alter the polarity of the molecule. Another example of this is the relationship between oleic acid and elaidic acid; oleic acid, the cis isomer, has a melting point of 13.4 °C, making it a liquid at room temperature, while the trans isomer, elaidic acid, has the much higher melting point of 43 °C, because the straighter trans isomer packs more tightly, and is solid at room temperature.
Thus, trans alkenes, which are less polar and more symmetrical, have lower boiling points and higher melting points, and cis alkenes, which are generally more polar and less symmetrical, have higher boiling points and lower melting points.
In the case of geometric isomers that are a consequence of double bonds, and, in particular, when both substituents are the same, some general trends usually hold. These trends can be attributed to the fact that the dipoles of the substituents in a cis isomer will add up to give an overall molecular dipole. In a trans isomer, the dipoles of the substituents will cancel out due to being on opposite sides of the molecule. Trans isomers also tend to have lower densities than their cis counterparts.
As a general trend, trans alkenes tend to have higher melting points and lower solubility in inert solvents, as trans alkenes, in general, are more symmetrical than cis alkenes.
Vicinal coupling constants (3JHH), measured by NMR spectroscopy, are larger for trans (range: 12–18 Hz; typical: 15 Hz) than for cis (range: 0–12 Hz; typical: 8 Hz) isomers.
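The quoted ranges amount to a rule of thumb that chemists apply when assigning alkene geometry from an NMR spectrum. A toy sketch built directly from the ranges above; the function name is invented, and real assignments also weigh substituent effects:

```python
# Assign alkene geometry from a vicinal 3J(H,H) coupling constant in Hz,
# using the ranges quoted above: trans 12-18 Hz, cis 0-12 Hz.

def assign_alkene_geometry(j_hh: float) -> str:
    if 12.0 < j_hh <= 18.0:
        return "trans"
    if 0.0 <= j_hh < 12.0:
        return "cis"
    if j_hh == 12.0:
        return "ambiguous (the cis and trans ranges meet at 12 Hz)"
    return "outside the typical alkene range"

print(assign_alkene_geometry(15.0))  # trans (typical trans value)
print(assign_alkene_geometry(8.0))   # cis (typical cis value)
```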
Stability
Usually for acyclic systems trans isomers are more stable than cis isomers. This difference is attributed to the unfavorable steric interaction of the substituents in the cis isomer. Therefore, trans isomers have a less-exothermic heat of combustion, indicating higher thermochemical stability. In the Benson heat of formation group additivity dataset, cis isomers suffer a 1.10 kcal/mol stability penalty. Exceptions to this rule exist, such as 1,2-difluoroethylene, 1,2-difluorodiazene (FN=NF), and several other halogen- and oxygen-substituted ethylenes. In these cases, the cis isomer is more stable than the trans isomer. This phenomenon is called the cis effect.
E–Z notation
In principle, cis–trans notation should not be used for alkenes with two or more different substituents. Instead the E–Z notation is used based on the priority of the substituents using the Cahn–Ingold–Prelog (CIP) priority rules for absolute configuration. The IUPAC standard designations E and Z are unambiguous in all cases, and therefore are especially useful for tri- and tetrasubstituted alkenes to avoid any confusion about which groups are being identified as cis or trans to each other.
Z (from the German zusammen) means "together". E (from the German entgegen) means "opposed" in the sense of "opposite". That is, Z has the higher-priority groups cis to each other and E has the higher-priority groups trans to each other. Whether a molecular configuration is designated E or Z is determined by the CIP rules; higher atomic numbers are given higher priority. For each of the two atoms in the double bond, it is necessary to determine the priority of each substituent. If both the higher-priority substituents are on the same side, the arrangement is Z; if on opposite sides, the arrangement is E.
Because the cis–trans and E–Z systems compare different groups on the alkene, it is not strictly true that Z corresponds to cis and E corresponds to trans. For example, trans-2-chlorobut-2-ene (the two methyl groups, C1 and C4, on the but-2-ene backbone are trans to each other) is (Z)-2-chlorobut-2-ene (the chlorine and C4 are together because C1 and C4 are opposite).
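Once the CIP priorities are known, the descriptor itself reduces to a single comparison, which the following sketch makes explicit. Working out the priorities is the hard part of the CIP rules and is assumed to be done already; the function and its string arguments are illustrative only:

```python
# E/Z assignment, given the side of the double bond on which each carbon's
# higher-CIP-priority substituent sits ("up" or "down").

def ez_descriptor(left_high_side: str, right_high_side: str) -> str:
    """Z if the two higher-priority groups are on the same side, else E."""
    return "Z" if left_high_side == right_high_side else "E"

# trans-2-chlorobut-2-ene, the example above: Cl outranks the methyl on one
# carbon, and the methyl outranks H on the other. With the two methyls trans,
# the two high-priority groups end up on the same side, so the result is Z.
print(ez_descriptor("down", "down"))  # Z
print(ez_descriptor("up", "down"))    # E
```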
Undefined alkene stereochemistry
Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes; it is no longer considered an acceptable style for general use by IUPAC but may still be required by computer software.
Inorganic chemistry
Cis–trans isomerism can also occur in inorganic compounds.
Diazenes
Diazenes (and the related diphosphenes) can also exhibit cis–trans isomerism. As with organic compounds, the cis isomer is generally the more reactive of the two, being the only isomer that can reduce alkenes and alkynes to alkanes, but for a different reason: the trans isomer cannot line its hydrogens up suitably to reduce the alkene, but the cis isomer, being shaped differently, can.
Coordination complexes
Coordination complexes with octahedral or square planar geometries can also exhibit cis–trans isomerism.
For example, there are two isomers of square planar Pt(NH3)2Cl2, as explained by Alfred Werner in 1893. The cis isomer, whose full name is cis-diamminedichloroplatinum(II), was shown in 1969 by Barnett Rosenberg to have antitumor activity, and is now a chemotherapy drug known by the short name cisplatin. In contrast, the trans isomer (transplatin) has no useful anticancer activity. Each isomer can be synthesized using the trans effect to control which isomer is produced.
For octahedral complexes of formula MX4Y2, two isomers also exist. (Here M is a metal atom, and X and Y are two different types of ligands.) In the cis isomer, the two Y ligands are adjacent to each other at 90°, as is true for the two chlorine atoms shown in green in cis-[Co(NH3)4Cl2]+, at left. In the trans isomer shown at right, the two Cl atoms are on opposite sides of the central Co atom.
A related type of isomerism in octahedral MX3Y3 complexes is facial–meridional (or fac–mer) isomerism, in which different numbers of ligands are cis or trans to each other. Metal carbonyl compounds can be characterized as fac or mer using infrared spectroscopy.
See also
Chirality (chemistry)
Descriptor (chemistry)
E–Z notation
Isomer
Structural isomerism
Trans fat
References
External links
IUPAC definition of "stereoisomerism"
IUPAC definition of "geometric isomerism"
IUPAC definition of "cis–trans isomers"
Stereochemistry
Isomerism
Orientation (geometry) | Cis–trans isomerism | [ "Physics", "Chemistry", "Mathematics" ] | 2,170 | [ "Stereochemistry", "Topology", "Space", "Isomerism", "nan", "Geometry", "Spacetime", "Orientation (geometry)" ] |
12,533 | https://en.wikipedia.org/wiki/Geocaching | Geocaching is an outdoor recreational activity, in which participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers, called geocaches or caches, at specific locations marked by coordinates all over the world. The first geocache was placed in 2000, and by 2023 there were over 3 million active caches worldwide.
Geocaching can be considered a real-world, outdoor treasure hunting game. A typical cache is a small waterproof container containing a logbook and sometimes a pen or pencil. The geocacher signs the log with their established code name/username and dates it, in order to prove that they found the cache. After signing the log, the cache must be placed back exactly where the person found it. Larger containers such as plastic storage containers (Tupperware or similar) or ammo boxes can also contain items for trading, such as toys or trinkets, usually of more sentimental worth than financial. Geocaching shares many aspects with benchmarking, trigpointing, orienteering, treasure hunting, letterboxing, trail blazing, and another type of location-based game called Munzee.
History
Geocaching is similar to the game letterboxing (originating in 1854), which uses clues and references to landmarks embedded in stories. Geocaching was conceived shortly after the removal of Selective Availability from the Global Positioning System on May 2, 2000 (Blue Switch Day), because the improved accuracy of the system allowed for a small container to be specifically placed and located.
The first documented placement of a GPS-located cache took place on May 3, 2000, by Dave Ulmer in Beavercreek, Oregon. The location was posted on the Usenet newsgroup sci.geo.satellite-nav. Within three days, the cache had been found twice, first by Mike Teague. According to Dave Ulmer's message, this cache was a black plastic bucket that was partially buried and contained various items, such as software, videos, books, money, a can of beans, and a slingshot. The geocache and most of its contents were eventually destroyed by a lawn mower; the can of beans was the only item salvaged and was later turned into a trackable item known as the "Original Can of Beans". Another geocache and plaque, called the Original Stash Tribute Plaque, now sits at the site.
Geocaching company Groundspeak allows extraterrestrial caches, e.g. the Moon or Mars, although presently, the website provides only earthbound coordinates. The first published extraterrestrial geocache was GC1BE91, which was placed on the International Space Station by Richard Garriott in 2008. It used the Baikonur launch area in Kazakhstan as its position. The original cache contained a Travel Bug (the first geocaching trackable item in space), which stayed on the station until it was brought back to earth in 2013. Due to fire restrictions on board the station, the geocache contained no official paper logbook. As of June 2024, only one confirmed geocacher (on November 17, 2013) has actually found the geocache, although others have claimed to have found it providing varying amounts of evidence. To commemorate the occasion, Groundspeak allowed specialized geocaching events to be published across the world, allowing attendees to obtain a virtual souvenir on their profile.
The second geocaching trackable in space is TB5EFXK which is attached to the SHERLOC calibration target on board the Mars Perseverance Rover, which landed on Mars on 18 February 2021. Geocachers were given the opportunity to virtually discover the trackable after the WATSON camera sent back its first photographs of the calibration target that contained the tracking code number. The code is printed on a prototype helmet visor material that will be used to test how well it can withstand the Martian environment. This will help scientists in creating a viable Martian spacesuit for future crewed missions to Mars.
The activity was originally referred to as the GPS stash hunt or gpsstashing. This was changed shortly after the original hide when it was suggested in the gpsstash eGroup that "stash" could have negative connotations and the term geocaching was adopted.
Over time, a variety of different hide-and-seek-type activities have been created or abandoned, so that "Geocaching" may now refer to hiding and seeking containers, or locations or information without containers.
An independent accounting of the early history documents several controversial actions taken by Jeremy Irish and Grounded, Inc., a predecessor to Groundspeak, to increase "commercialization and monopolistic control over the hobby". More recently, other similar hobbies such as Munzee have attracted some geocachers by rapidly adopting smart-phone technology, which has caused "some resistance from geocaching organizers about placing caches along with Munzees".
Geocaches
For the traditional geocache, a geocacher will place a waterproof container containing a log book, often also a pen and/or pencil and trade items or trackables, then record the cache's coordinates. These coordinates, along with other details of the location, are posted on a listing site (see list of some sites below). Other geocachers obtain the coordinates from that listing site and seek out the cache using their handheld GPS receivers. The finding geocachers record their exploits in the logbook and online, but then must return the cache to the same coordinates so that other geocachers may find it. Geocachers are free to take objects (except the logbook, pencil, or stamp) from the cache in exchange for leaving something of similar or higher value.
Typical cache "treasures", also known in the geocaching world as SWAG (a backronym of "stuff we all get"), are not high in monetary value but may hold personal value to the finder. Aside from the logbook, common cache contents are unusual coins or currency, small toys, ornamental buttons, CDs, or books. Although not required, many geocachers decide to leave behind signature items, such as personal geocoins, pins, or craft items, to mark their presence at the cache location. Disposable cameras are popular as they allow for anyone who found the cache to take a picture which can be developed and uploaded to a geocaching web site listed below. Also common are objects that are moved from cache to cache called "hitchhikers", such as Travel Bugs or geocoins, whose travels may be logged and followed online. Cachers who initially place a Travel Bug or Geocoin(s) often assign specific goals for their trackable items. Examples of goals are to be placed in a certain cache a long distance from home, or to travel to a certain country, or to travel faster and farther than other hitchhikers in a race. Less common trends are site-specific information pages about the historic significance of the site, types of trees, birds in the area or other such information. Higher-value items are occasionally included in geocaches as a reward for the First to Find (called "FTF"), or in locations which are harder to reach.
Dangerous or illegal items, including weapons and drugs, are not allowed and are specifically against the rules of most geocache listing sites. Food is also disallowed, even if sealed, as it is considered unhygienic and can attract animals.
If a geocache has been vandalized or stolen by a person who is not familiar with geocaching, it is said to have been "muggled". The term plays off the fact that those not familiar with geocaching are called "muggles", a word borrowed from the Harry Potter series of books which were rising in popularity at the same time geocaching started.
Variations
Geocaches vary in size, difficulty, and location. Simple caches that are placed near a roadside are often called "drive-bys", "park 'n grabs" (PNGs), or "cache and dash". Geocaches may also be complex, involving lengthy searches, significant travel, or use of specialist equipment such as SCUBA diving, kayaking, or abseiling. Different geocaching websites list different variations per their own policies.
Container sizes range from nano, particularly magnetic nanos, which can be smaller than the tip of a finger and have only enough room to store the log sheet, to 20-liter (5 gallon) buckets or even larger containers, such as entire trucks. The most common cache containers in rural areas are lunch-box-sized plastic storage containers or surplus military ammunition cans. Ammo cans are considered the gold standard of containers because they are very sturdy, waterproof, animal- and fire-resistant, and relatively cheap, and have plenty of room for trade items. Smaller containers are more common in urban areas because they can be more easily hidden.
Geocache types
Over time many variations of geocaches have developed. Different platforms often have their own rules on which types are allowed or how they are classified. The following cache types are supported by geocaching.com.
Traditional cache
The simplest form of a geocache. It consists of a container with a log sheet, and is located at the posted coordinates. Cache containers come in many different sizes.
Night cache
These caches are intended to be found at night, usually by use of a UV torch.
Multi-cache
These caches include at least one stage in addition to the physical final container with a log sheet. The posted coordinates for a multi-cache are the first stage. At each stage, the geocacher gathers information that leads them to the next stage or to the final container. Multi-caches can consist of physical stages (i.e. the first stage contains coordinates for the next stage and so forth) or virtual stages (i.e. the first stage is a historical marker where geocachers have to answer questions to calculate the coordinates to the final physical container).
Mystery cache
Also called a 'puzzle cache', players might need to solve a puzzle or bring a special tool to reveal the next waypoint or final coordinates. Most often, the final container is not at the posted coordinates, which is noted in the cache description. Some puzzles can be easy and involve basic math operations, while others can be quite difficult, with some of the more challenging ones requiring a firm understanding of computer programming. Geocaching Toolbox, a website dedicated to creating and solving puzzle geocaches, provides a comprehensive list of common puzzle cache ciphers.
There are also some subcategories of the mystery cache, which are normally listed as a Mystery Type, which are listed below.
Challenge cache
This requires a geocacher to complete a reasonably attainable geocaching-related task before being able to log the cache as a find online. It does not restrict geocachers from finding the cache and signing the logbook at any time. However, a geocacher is not allowed to log a find on the geocaching website unless they qualify for the challenge specified in the cache description. Examples include finding a number of caches that meet a category, completing a number of cache finds within a period of time, or finding a cache for every calendar day.
Since 2017, Groundspeak has required new challenges to have a geochecker, in which users can put their name into an algorithm to see if they qualify without the need to physically check all of their previous finds. These geocheckers can be requested using the ProjectGC forums, where volunteers can write and create scripts for specific challenges. Groundspeak has also become stricter about which types of challenges are published. For example, prior to 2017 it was possible to create a challenge cache to find 10 caches that have a food item in the title. Under current guidelines, this is no longer allowed because it restricts geocachers to finding specific geocaches. Instead, Groundspeak has encouraged new challenges to be more creative. Acceptable challenges include finding caches in 10 states, finding 100 traditional geocaches, or finding 1000 geocaches with the "wheelchair accessible" attribute.
Bonus cache
A bonus cache requires the finder to have found an amount of caches, usually by the same hider, before finding the bonus cache. The cache can be any type, however a bonus cache cannot be required for a second bonus cache.
Moving or travelling cache
These were found at a listed set of coordinates. The finder hides the cache in a different location and updates the listing, essentially becoming the hider, and the next finder continues the cycle. This cache type has been discontinued at geocaching.com; those that were grandfathered in are steadily declining in number as they are archived.
Chirp cache
Also known as a wireless beacon cache. This is a Garmin-created innovation on multi-caches using wireless beacon technology. It is a physical game piece, about the size of a half dollar, that can be hidden anywhere. Powered by a small battery, it is able to transmit a signal detectable on Garmin devices. The Chirp stores hints, multicache coordinates, counts visitors, and can confirm the cache is nearby. These caches caused considerable discussion and some controversy at Groundspeak, where they were ultimately given a new "attribute". These types of geocaches can also be listed as a traditional, multi-cache, or letterbox. It is up to the cache owner to designate the cache type for wireless beacon caches.
Geocaching HQ geocache (GCK25B)
This is an official geocache located inside the Groundspeak headquarters office in Seattle, Washington. It is technically classified as a separate cache type under mystery caches, with its own unique icon both on the geocaching app and on one's profile statistics tab. Published in 2004, it has over 20,000 finds as of June 2024.
Wherigo cache
A multi-stage cache hunt that uses a Wherigo "cartridge" to guide players to find a physical cache sometime during cartridge play, usually at the end. However, not all Wherigo cartridges incorporate geocaches into gameplay. Wherigo caches are unique to the geocaching.com website.

Wherigo is a GPS location-aware software platform initially released in January 2008. Authors can develop self-enclosed story files (called "cartridges") that are read by the Wherigo player software, installed on either a GPS unit or smartphone. The player and story take advantage of the location information provided by the GPS to trigger in-game events, such as using a virtual object or interacting with characters. Completing an adventure can require reaching different locations and solving puzzles. Cartridges are coded in Lua. Lua may be used directly, but a builder application is usually used. The Wherigo site offers a builder application and a database of adventures free for download, though the builder has remained in its Alpha version since its last release in May 2008. The official player is only available for Pocket PC. A built-in player is available on Garmin Colorado and Oregon GPS models.

The Wherigo Foundation was organized in December 2012. The group is composed of all Wherigo application developers who, up until that time, had been acting and developing separately. Their goal is to provide a consistent Wherigo experience across platforms, connect Wherigo applications via an API, and add modern features to the Wherigo platform. While Groundspeak is aware of this project, the company has yet to take a position.
Reverse Wherigo
A Reverse Wherigo (RWIG) provides three lines of code, composed of 9 digits each, that a player can type into the RWIG cartridge. Instead of following a story or interacting with characters, an RWIG gives the player the distance to the final cache, but not its direction. It requires geocachers to close in on the final geocache by a process of elimination. Once the player is within 25 metres, the final coordinates are given to provide a more accurate location for the geocache.
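Locating a point from distance readings alone is, in effect, trilateration: three distances measured from known positions pin the target down. The sketch below works on a flat x/y plane in metres, which is adequate over the short ranges of such a hunt; all positions and readings are invented for illustration:

```python
import math

# Trilateration: each reading constrains the target to a circle around the
# point where it was taken; subtracting pairs of circle equations gives two
# linear equations in (x, y), solved here by Cramer's rule.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A cache hidden at (60, 80), with exact distance readings taken at three
# known spots; the solver recovers the hiding place.
cache = (60.0, 80.0)
spots = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
readings = [math.hypot(cache[0] - x, cache[1] - y) for x, y in spots]
print(trilaterate(spots[0], readings[0],
                  spots[1], readings[1],
                  spots[2], readings[2]))  # (60.0, 80.0)
```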
Letterbox hybrid
This is a combination of a geocache and a letterbox in the same container. Letterboxes involve a rubber stamp and logbook that are not supposed to be traded or taken like tradable items, but letterbox hybrids may or may not include trade items. Letterboxers carry their own stamp with them, to stamp the letterbox's logbook and inversely stamp their personal logbook with the letterbox stamp. The letterbox hybrid cache contains the important materials for this. Typically, letterbox hybrid caches are not found at the given coordinates, which only act as a starting location. Instead, a series of clues is given as to where to find the cache, such as "take a left past the bridge" or "about 25 paces past the big oak tree".
Project A.P.E. cache
Also known as Ape caches, these are a special type of traditional cache that were hidden in conjunction with 20th Century Fox and Groundspeak to promote the 2001 remake of Planet of the Apes. There were 14 APE geocaches placed around the world and each one contained a prop from the film. As of 2023, only 2 APE caches are still active, with one near Seattle, Washington ('Tunnel of Light', GC1169) and the other in Brazil ('Southern Bowl', GCC67). Of those two, the Brazil APE cache is the only surviving original APE cache because GC1169 was muggled in 2016. However, the original container was later found by a Groundspeak-led survey in April of that year. What remains of "Tunnel of Light" is an "official" replacement of the original ammo can that was left in 2001.
Virtual cache
This cache type does not contain a physical logbook. They are normally hidden at a rather interesting or unique location, usually with a described object such as an art sculpture or a scenic lookout. Validation for finding a virtual cache generally requires one to email the cache hider with information such as a date or a name on a plaque, or to post a picture of oneself at the site with a GPS receiver in hand. Since 2005, Groundspeak has not allowed new virtual caches, as they are considered a legacy cache type.
On August 24, 2017, Groundspeak announced "Virtual Rewards", allowing 4000 new virtual caches to be placed during the following year. Each year, eligible geocachers can opt in to a drawing, with some selected for the opportunity to submit a virtual cache for publication. From 2005 to 2017, the geocaching website did not list new caches without a physical container, including virtual and webcam caches (with the exception of earthcaches and events); however, older caches of these types have been grandfathered.
EarthCache
Similar to virtual geocaches, an EarthCache is published not by a local reviewer, but by a volunteer regional reviewer associated with the Geological Society of America. The geocacher usually has to perform a task which teaches an educational lesson about the geology of the cache area. Visitors must answer geological questions to complete the cache, which can be as simple as describing the color and thickness of layers in an outcrop or as complicated as taking measurements of stream velocities or fault offsets. EarthCaches cover geologic topics such as rock formation, mineralogy, earthquakes, fluvial processes, erosion, volcanology, and planetary science (among others).
Locationless cache
Otherwise known as a Reverse cache, a locationless cache is similar to a scavenger hunt. A description is given for something to find, such as a one-room schoolhouse, and the finder locates an example of this object. The finder records the location using their GPS receiver and often takes a picture at the location showing the named object with their GPS receiver. Typically others are not allowed to log that same location as a find.
Since 2005, all locationless caches have been archived and locked, meaning they are unable to be logged. However, with geocaching's 20th anniversary in 2020 Groundspeak decided to publish a special locationless cache for geocachers to "find" at various Mega- and Giga-Events around the world. The first locationless cache in 15 years (GC8FR0G) required finders to take a picture of themselves with the geocaching mascot, Signal the Frog, at Mega- and Giga-Events during 2020. The cache was made available to log starting 1 January 2020. However, because of the COVID-19 pandemic, nearly all planned Mega- and Giga-events were cancelled for the year, including the planned 20th anniversary celebration event in Seattle, Washington. Therefore, Groundspeak decided to extend the deadline to log this geocache through 1 January 2023. With 22,500 finds it is the second most logged geocache in history.
The second published locationless cache since 2005 (GC8NEAT) required visitors to take a photo of themselves picking up trash and cleaning up their local area. Geocachers were able to log this cache from 6 February 2021 through 31 December 2022. It has been logged over 33,500 times and holds the title for the most "found" geocache. On 17 August 2022, Geocaching.com made available the third locationless cache to be logged since 2005 (GC9FAVE). Instead of finding Signal or picking up trash, this cache encouraged geocachers from around the world to share their favorite geocaching story. This geocache was archived and locked on 1 January 2024.
Webcam cache
A type of virtual cache whose coordinates provide the location of a public webcam. The finder is required to capture an image of themselves through the webcam for verification of the find. New webcam caches are no longer allowed by Groundspeak, as they are a legacy cache type. Webcam caches are a category at Waymarking.com.
Adventure Lab
A type of virtual cache that typically consists of a set of 5 waypoints, with each waypoint counting as a "cache find". The waypoints usually have an overall theme, such as showcasing the history of a small town, and are often created as a walking tour of a city or park. Examples include Route 66 and the Lincoln Highway, which are nationwide series of Adventure Lab sets of 10 that stretch the entire route across the United States.
Adventure Labs were first introduced in 2014 as a way for Groundspeak to test market ideas. Initially, geocachers would find a key word at a designated site, which they could then enter on a website to claim "credit". Soon after, they were made available to "find" at select Mega-Events. In 2020, Groundspeak released the "Adventure Lab" app, separate from the geocaching app. The app made it possible to define a geofence; once inside it, a question appears that can be answered either in the form of a written answer or a multiple-choice answer. The question can be answered at any time once activated; however, some Adventure Labs must be completed sequentially, meaning that one must answer the question to move on to the next waypoint.
Many Adventure Labs caches have a physical bonus cache associated with them that are listed as a "mystery cache". Coordinates to the bonus cache, if applicable, can be seen in the journal entries once a user has correctly answered the question at a waypoint.
Geocachers can create their own Adventure Lab, but must first opt in to receive an "Adventure Lab credit", which allows for the creation of 1 set of 5 waypoints, each counting towards a cache find. If selected, Adventure Labs can be created using the Adventure Lab builder. Adventure Labs, unlike all other geocaches, are not subject to review and are published at will by the creator. However, Adventure Labs can at any time be archived by Groundspeak if they are in violation of the terms of use. For example, placing an Adventure Lab in a place that requires people to pay a fee to visit, such as airports or theme parks, may get the Adventure permanently removed from the Adventure Lab app.
Event caches
There are several kinds of event geocaches. While encouraged, events do not require visitors to sign their name in a logbook to prove they attended. Attendees of event caches can log that they 'attended', which will increment their number of found caches. Event caches can be of the following types:
Event: An event cache is a gathering of local geocachers or geocaching organizations. The event cache page specifies a time for the event and provides coordinates to its location. Event caches must last longer than 30 minutes and must be published no less than 14 days before the planned event date. Event caches typically last from 1 to 2 hours.
Cache-In Trash-Out Event (CITO): an environmental initiative to clean up and preserve the natural areas that geocachers frequent. These events are gatherings of the geocaching community that can focus on services like litter clean-up, removal of invasive species, planting trees and vegetation, and trail building. CITO events must be no less than 2 hours long. Just like event caches, CITOs have to be published no less than 14 days prior to the date of the CITO. CITOs typically last from 2 to 4 hours.
Mega-Event: Just like an event cache, but it has to attract 500 or more geocachers. Mega-Events are typically organized by local geocaching organizations in conjunction with local municipalities and promotion from Groundspeak. Often, Mega-Events last an entire day and have various activities planned in the days before, during, and after the main event. These activities range from raffles and silent auctions, whose proceeds help offset the costs of organizing such an event, to photo ops with Signal the Frog, a plethora of new geocaches, and panels with local geocachers, lackeys (Groundspeak employees), and reviewers. Mega-Events often have vendors where people can purchase geocoins, cache containers, and food.
Giga-Event: Just like an event cache, but it has to attract 5,000 or more geocachers. Like a Mega-Event, a Giga-Event offers a plethora of activities and is typically held in a large area to accommodate such crowds. Activities typically include a GPS Adventures Maze, panels, vendors, live music, and carnival rides. Usually the weeks before and after are filled with smaller gatherings, which attracts geocachers from around the world who often make a vacation out of it. Only one Giga-Event can happen at a time in the world.
GPS Adventures Maze Exhibit: The GPS Adventures Maze is a traveling exhibit designed to teach people of all ages about GPS technology and geocaching through interactive science experiences. It may accompany a Mega- or Giga-Event. These "events" have their own cache type on geocaching.com and often include many non-geocachers.
Community Celebration Event (CCE): A type of event meant to celebrate the 10th and 20th anniversaries of geocaching. First issued in 2010 as "Lost and Found" events, geocachers could host one to celebrate the 10-year anniversary of geocaching. In preparation for the 20th anniversary in 2020, Lost and Found events were rebranded as Community Celebration Events. Geocachers could opt in to receive a CCE credit to host one. Due to the COVID-19 pandemic, Groundspeak allowed CCEs to be hosted until 31 December 2022. Geocaching HQ will allow geocachers to host CCEs in 2025, assuming they meet specific criteria.
Geocaching HQ Block Party: Hosted at Geocaching HQ, a Block Party marks significant milestones in geocaching's history.
Technology
Obtaining data
GPX files containing information such as a cache description and information about recent visitors to the cache are available from various listing sites. Geocachers may download geocache data (also known as waypoints) from various websites in various formats, most commonly in file-type GPX, which uses XML. Some websites allow geocachers to search (build queries) for multiple caches within a geographic area based on criteria such as ZIP code or coordinates, downloading the results as an email attachment on a schedule. In recent years, Android and iPhone users have been able to download apps such as GeoBeagle that allow them to use their 3G and GPS-enabled devices to actively search for and download new caches.
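Since GPX is ordinary XML, cache waypoints can be read with nothing beyond a language's standard library. A minimal sketch using Python's xml.etree; the embedded GPX fragment is deliberately simplified, and its cache code and coordinates are made up, since real exports carry many more fields and site-specific extensions:

```python
import xml.etree.ElementTree as ET

GPX_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="example">
  <wpt lat="45.2910" lon="-122.4134">
    <name>GC12345</name>
    <desc>Example cache</desc>
  </wpt>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

root = ET.fromstring(GPX_SAMPLE)
for wpt in root.findall("gpx:wpt", NS):
    name = wpt.findtext("gpx:name", default="?", namespaces=NS)
    print(name, wpt.get("lat"), wpt.get("lon"))  # GC12345 45.2910 -122.4134
```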
Converting and filtering data
A variety of geocaching applications are available for geocache data management, file-type translation, and personalization. Geocaching software can assign special icons or search (filter) for caches based on certain criteria (e.g. distance from an assigned point, difficulty, date last found).
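A distance filter of the kind just described needs only the great-circle distance between two coordinates, for which the haversine formula is the usual choice. A sketch with invented cache records; real tools would read them from GPX files as above:

```python
import math
from datetime import date

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

caches = [
    {"name": "Bridge micro", "lat": 47.61, "lon": -122.33,
     "difficulty": 1.5, "last_found": date(2023, 8, 1)},
    {"name": "Hilltop ammo can", "lat": 47.70, "lon": -122.10,
     "difficulty": 3.0, "last_found": date(2021, 5, 9)},
]

home = (47.62, -122.35)  # the assigned point to measure distance from
matches = [c for c in caches
           if haversine_km(home[0], home[1], c["lat"], c["lon"]) <= 10.0
           and c["difficulty"] <= 2.0
           and c["last_found"] >= date(2023, 1, 1)]
print([c["name"] for c in matches])  # ['Bridge micro']
```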
Paperless geocaching means hunting a geocache without a physical printout of the cache description. Traditionally, this means that the seeker has an electronic means of viewing the cache information in the field, such as pre-downloading the information to a PDA or other electronic device. Various applications can directly upload and read GPX files without further conversion. Newer GPS devices released by Garmin, DeLorme, and Magellan have the ability to read GPX files directly, thus eliminating the need for a PDA. Other methods include viewing real-time information on a portable computer with internet access or with an Internet-enabled smart phone. The latest advancement of this practice involves installing dedicated applications on a smart phone with a built-in GPS receiver. Seekers can search for and download caches in their immediate vicinity directly to the application and use the on-board GPS receiver to find the cache.
A more controversial version of paperless caching involves mass-downloading only the coordinates and cache names (or waypoint IDs) for hundreds of caches into older receivers. This is a common practice of some cachers and has been used successfully for years. In many cases, however, the cache description and hint are never read by the seeker before hunting the cache. This means they are unaware of potential restrictions such as limited hunt times, park open/close times, off-limit areas, and suggested parking locations.
Mobile devices
The website geocaching.com now sells mobile applications which allow users to view caches through a variety of different devices. Currently, the Android, iOS, and Windows Phone mobile platforms have applications in their respective stores. The apps also allow for a trial version with limited functionality. The site promotes mobile applications, and lists over two dozen applications (both mobile and browser/desktop based) that are using their proprietary but royalty-free public application programming interface (API). Developers at c:geo have criticised Groundspeak for being incompatible with open-source development.
Additionally, "c:geo - opensource" is a free opensource full function application for Android phones that is very popular. This app includes similar features to the official Geocaching mobile application, such as: View caches on a live map (Google Maps or OpenStreetMap), navigation using a compass, map, or other applications, logging finds online and offline, etc.
Geocaching enthusiasts have also made their own hand-held GPS devices using a Lego Mindstorms NXT GPS sensor.
Ethics
Geocache listing websites have their own guidelines for acceptable geocache publications. Government agencies and others responsible for public use of land often publish guidelines for geocaching, and a "Geocacher's Creed" posted on the Internet asks participants to "avoid causing disruptions or public alarm". Generally accepted rules are to not endanger others, to minimize the impact on nature, to respect private property, and to avoid public alarm.
Reception
The reception from authorities and the general public outside geocache participants has been mixed.
Cachers have been approached by police and questioned when they were seen as acting suspiciously. Other times, investigation of a cache location after suspicious activity was reported has resulted in police and bomb squad discovery of the geocache, such as the evacuation of a busy street in Wetherby, Yorkshire, England in 2011, and a street in Alvaston, Derby in 2020.
Schools have been evacuated when a cache has been seen by teachers or police, such as the case of Fairview High School in Boulder, Colorado in 2009. A number of caches have been destroyed by bomb squads. Diverse locations, from rural cemeteries to Disneyland, have been locked down as a result of such scares.
The placement of geocaches has occasional critics among some government personnel and the public at large, who consider it littering. Some geocachers act to mitigate this perception by picking up litter while they search for geocaches, a practice referred to in the community as "Cache In Trash Out". Events and caches are often organized around this practice, with many areas seeing significant cleanup that would otherwise not take place, or would instead require federal, state, or local funds to accomplish. Geocachers are also encouraged to clean up after themselves by retrieving old containers once a cache has been removed from play.
Geocaching is legal in most countries and is usually positively received when explained to law enforcement officials. However, certain types of placements can be problematic. Although generally disallowed, hiders could place caches on private property without adequate permission (intentionally or otherwise), which encourages cache finders to trespass. Historic buildings and structures have also been damaged by geocachers, who have wrongly believed the geocache can be/has been placed within, or on the roof of, the buildings.
Caches might also be hidden in places where the act of searching can make a finder look suspicious (e.g., near schools, children's playgrounds, banks, courthouses, or in residential neighborhoods), or where the container placement could be mistaken for a drug stash or a bomb (especially in urban settings, under bridges, near banks, courthouses, or embassies). As a result, geocachers are strongly advised to label their geocaches when possible, so that they are not mistaken for a harmful object if discovered by non-geocachers.
As well as concerns about littering and bomb threats, some geocachers have hidden their caches in inappropriate locations, such as electrical boxes, which may encourage risky behavior, especially by children. Hides in these areas are discouraged, and cache listing websites enforce guidelines that disallow certain types of placements. However, as cache reviewers typically cannot see exactly where and how every cache is hidden, problematic hides can slip through. Ultimately it is also up to cache finders to use discretion when attempting to search for a cache, and report any problems.
Laws and legislation
Regional rules for placement of caches have become complex. For example, in Virginia, the Virginia Department of Transportation and the Wildlife Management Agency now forbids the placement of geocaches on all land controlled by those agencies. Some cities, towns, and recreation areas allow geocaches with few or no restrictions, but others require compliance with lengthy permitting procedures.
The South Carolina House of Representatives passed Bill 3777 in 2005, stating, "It is unlawful for a person to engage in the activity of Geocaching or letterboxing in a cemetery or in a historic or archaeological site or property publicly identified by a historical marker without the express written consent of the owner or entity which oversees that cemetery site or property." The bill was referred to committee on first reading in the Senate and has been there ever since.
The Illinois Department of Natural Resources requires geocachers who wish to place a geocache at any Illinois state park to submit the location on a USGS 7.5 minute topographical map, the name and contact information of the person(s) wishing to place the geocache, a list of the original items to be included in the geocache, and a picture of the container that is to be placed.
In April 2020, during the COVID-19 pandemic, the township of Highlands East, Ontario, Canada temporarily banned geocaching, over concerns that geocache containers could not be properly disinfected between finds.
Notable incidents
Several deaths have occurred during the course of geocaching.
The death of a 21-year-old experienced cacher in December 2011 "while attempting a Groundspeak Cache that does not look all that dangerous" led to discussions of whether changes should be made, and whether cache owners or Groundspeak could be held liable. Groundspeak has since updated their geocaching.com terms of use agreement to specify that geocachers find geocaches at their own risk.
In 2008, two lost hikers on Mount Hood in Oregon, U.S. stumbled across a geocache and phoned this information out to rescuers, allowing crews to locate and rescue them.
Three adult geocachers, a 24-year-old woman and her parents, were trapped in a cave and rescued by firefighters in Rochester, New York, U.S. while searching for a geocache in 2012. Rochester Fire Department spokesman Lt. Ted Kuppinger said, "It's difficult, because you're invested in it, you want to find something like that, so people will probably try to push themselves more than they should, but you need to be prudent about what you're capable of doing."
In 2015, members of the public called the British coastguard to check on a group of geocachers who were spotted walking into the Severn Estuary off the coast of Clevedon, England, in search of clues to locate a multi-cache. Although they felt they were safe and able to return to land, they were considered to be in danger and were airlifted back to the shore.
In October 2016, four people discovered a crashed car at the bottom of a ravine in Benton County, Washington, U.S., while out geocaching. They spotted the driver still trapped inside and alerted emergency services, who rescued the driver.
On 9 June 2018, four people in Prague, Czech Republic were searching for a cache in a 4 km long tunnel when a storm surge carried them through the tunnel to its terminus at the Vltava river. Two of the geocachers died, while two others were rescued from the river.
Websites and data ownership
Numerous websites list geocaches around the world. Geocaching websites vary in many ways, such as subscription options, activity levels, and the volunteers available to check that registered caches remain in place for others to find.
First website
The first website to list geocaches was announced by Mike Teague on May 8, 2000. On September 2, 2000, Jeremy Irish emailed the gpsstash mailing list that he had registered the domain name geocaching.com and had set up his own Web site. He copied the caches from Mike Teague's database into his own. On September 6, Mike Teague announced that Jeremy Irish was taking over cache listings. At that point, Teague had logged only 5 caches.
Geocaching.com
The largest site is Geocaching.com, owned by Groundspeak Inc., which began operating in late 2000. With a worldwide membership and a freemium business model, the website claims millions of caches and members in over 190 countries and on all seven continents, including Antarctica. Hides and events are reviewed by volunteer regional cache reviewers before publication. Free membership gives users access to coordinates, descriptions, and logs for some caches; for a subscription fee, users gain additional search tools, the ability to download large amounts of cache information onto their GPS at once, instant email notifications about new caches, and access to premium-member-only caches (although such caches can still be accessed on the website itself; the premium-cache restriction applies only to the application). Geocaching headquarters are located in the Fremont neighborhood of Seattle, Washington, United States.
Opencaching Network
The Opencaching Network provides independent, non-commercial listing sites based in the cacher's country or region. The Opencaching Network lists the most types of caches, including traditional, virtual, moving, multi, quiz, webcam, BIT, guest book, USB, event, and MP3. The Opencaching Network is less restrictive than many sites and does not charge for use of the sites, the service being community-driven. Depending on the node, listings may be reviewed by community volunteers before publication; cross-listing is permitted but discouraged. Some listings also appear on other sites, but many are unique to the Opencaching Network. Features include the ability to organize one's favourite caches, build custom searches, be instantly notified of new caches in one's area, seek and create caches of all types, export GPX queries, and generate statistics images ("statpics"). Each Opencaching node provides the same API for free (called "OKAPI") for use by developers who want to create third-party applications using the Opencaching Network's content.
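As a rough illustration of how a third-party tool might talk to OKAPI, the sketch below queries one node for nearby caches and then fetches a few details for each. It is a minimal, hypothetical example rather than production code: the node URL, the placeholder consumer key, and the exact method names and parameters (services/caches/search/nearest, services/caches/geocache, consumer_key, center, fields) should be verified against the OKAPI documentation published at each node.

```python
# Hypothetical OKAPI client sketch; method names and parameters are
# assumptions based on OKAPI's published interface and should be checked
# against the node's own documentation. Requires the 'requests' package.
import requests

NODE = "https://www.opencaching.de/okapi"  # any Opencaching node's OKAPI root
KEY = "YOUR_CONSUMER_KEY"                  # placeholder; obtained by registering an app

def caches_near(lat, lon, limit=5):
    """Return the codes of the caches closest to (lat, lon)."""
    r = requests.get(f"{NODE}/services/caches/search/nearest",
                     params={"consumer_key": KEY,
                             "center": f"{lat}|{lon}",  # OKAPI separates lat/lon with '|'
                             "limit": limit})
    r.raise_for_status()
    return r.json()["results"]             # list of cache codes, e.g. ["OC1234", ...]

def cache_details(code):
    """Fetch a few fields for a single cache code."""
    r = requests.get(f"{NODE}/services/caches/geocache",
                     params={"consumer_key": KEY,
                             "cache_code": code,
                             "fields": "name|location|type"})
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    for code in caches_near(52.52, 13.405):  # central Berlin, as an example
        print(cache_details(code))
```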
Countries with associated opencaching websites include the United States at www.opencaching.us; Germany at www.opencaching.de; Sweden at www.opencaching.se; Poland at www.opencaching.pl; Czech Republic at www.opencaching.cz; The Netherlands at www.opencaching.nl; Romania at www.opencaching.ro; the United Kingdom at www.opencache.uk.
The main difference between opencaching and traditional listing sites is that all services are open to users at no cost. Most geocaching services or websites offer some basic information for free, but users may have to pay for a premium membership that allows access to more information or advanced search capabilities. This is not the case with opencaching: every geocache is listed and accessible to everyone for free.
Additionally, Opencaching sites allow users to rate and report on existing geocaches. This lets users see what other cachers think of a cache and encourages participants to place higher-quality caches. The rating system also greatly reduces the problem of abandoned or unsatisfactory caches remaining listed despite repeated negative comments or posts in the cache logs.
OpenCaching.com
OpenCaching.com (short: OX) was a site created and run by Garmin from 2010 to 2015, which had the stated aim of being as free and open as possible with no paid content. Caches were approved by a community process and coordinates were available without an account. The service closed on 14 August 2015.
Other sites
In many countries there are regional geocaching sites, but these mostly compile lists of caches in the area from the three main sites. Many also accept unique listings of caches for their own site; these listings tend to be less popular than those on the international sites, although a regional site may occasionally have more caches than the international ones. There are exceptions: in the territory of the former Soviet Union, the site Geocaching.su remains popular because it accepts listings in the Cyrillic script. Additional international sites include Geocaching.de, a German website, and Geocaching Australia, which accepts traditional geocache types as well as types deprecated by geocaching.com, such as TrigPoint and Moveable caches.
GPSgames
GPSgames.org was an online community dedicated to all kinds of games involving Global Positioning System receivers. GPSgames.org allowed traditional geocaches along with virtual, locationless, and traveler geocaches.
The site's geodashing game generated a large number of randomly positioned "dashpoints", requiring players to reach as many as possible, competing as individuals or teams.
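The generation step can be sketched simply as one point with uniformly random coordinates inside each one-degree graticule. The snippet below is purely illustrative and is not GPSgames.org's actual algorithm; the graticule convention and the function name are assumptions.

```python
# Illustrative only -- not GPSgames.org's actual code. Draws one random
# "dashpoint" inside the 1-degree graticule whose south-west corner is given.
import random

def dashpoint(lat_deg, lon_deg):
    # Uniform in degrees; note this is not uniform in ground area,
    # since a degree of longitude shrinks toward the poles.
    return (lat_deg + random.random(), lon_deg + random.random())

print(dashpoint(45, -122))  # somewhere in the 45N/122W graticule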
Shutterspot, GeoVexilla, MinuteWar, GeoPoker, and GeoGolf were among the other GPS games available.
Supported by donations, GPSgames.org had been free of charge since 2001. The site was retired on 30 June 2021.
NaviCache
Navicache.com started as a regional listing service in 2001. While many of the website's listings have been posted to other sites, it also offers unique listings. The website lists nearly any type of geocache and does not charge to access any of the caches listed in its database. All submissions are reviewed and approved. In 2012 it was announced that Navicache was transitioning to new owners, who said they "plan to develop a site that geocachers want, with rules that geocachers think are suitable. Geocaching.com and OX are both backed by large enterprises, and while that means they have more funding and people, we're a much smaller team – so our advantage is the ability to be dynamic and listen to the users." However, as of 2021 the site is mostly dormant, and the most recent cache listing is from 2014.
TerraCaching
Terracaching.com aims to provide high-quality caches, whether through the difficulty of the hide or the quality of the location. Membership is managed through a sponsorship system, and each cache is under continual peer review from other members. Terracaching.com embraces virtual caches alongside traditional or multi-stage caches and includes many locationless caches among the thousands of caches in its database. It is increasingly attracting members who like the point system. In Europe, TerraCaching is supported by Terracaching.eu, a site translated into several European languages that has an extended FAQ and extra supporting tools for TerraCaching. TerraCaching strongly discourages caches that are listed on other sites (so-called double-listing).
Extremcaching
Extremcaching is a privately run German database for alternative geocaches, focusing on T5 (climbing) caches, night caches, and lost-place caches.
Geocaching Australia
Geocaching Australia is a community website for geocachers in Australia and New Zealand. It also hosts many unique cache types, such as Burke and Wills, Moveable, and Podcache geocaches.
See also
Augmented reality
Benchmarking
BookCrossing
Dead drop
Degree Confluence Project
Encounter
Geohashing
Geolocation-based video game
Ingress (video game)
Letterboxing (hobby)
Location-based game
Munzee
Orienteering
Pokémon Go
Puzzle hunt
Questing
Transmitter hunting
Treasure hunting
Treasure map
Highpointing
Peakbagging
References
Further reading
External links
In Wisconsin: Geocaching Video produced by Wisconsin Public Television
FTF Geocacher Magazine Print Magazine devoted to geocaching
geocaching.com The official geocaching website
Geosocial networking
Global Positioning System
Hobbies
Internet object tracking
Outdoor locating games
Scoutcraft
2000 neologisms
2000 establishments in Oregon
Sports originating in the United States
Games and sports introduced in 2000
Tourist attractions in Clackamas County, Oregon | Geocaching | ["Technology", "Engineering"] | 9,738 | ["Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering"] |
12,534 | https://en.wikipedia.org/wiki/Geographical%20mile | The geographical mile is an international unit of length determined by 1 minute of arc (1/60 degree) along the Earth's equator. For the international ellipsoid 1924 this equalled 1,855.4 metres. The American Practical Navigator 2017 gives its own definition of the geographical mile. Greater precision depends more on the choice of ellipsoid, and hence of the Earth's equatorial radius, than on more careful measurement, since the radius of the geoid varies along the equator by more than these differences. In any ellipsoid, the length of a degree of longitude at the equator is exactly 60 geographical miles. The Earth's radius at the equator in the GRS80 ellipsoid is 6,378,137 m, which makes the geographical mile 1,855.3248 m. The rounding of the Earth's radius to whole metres in GRS80 affects this value by only 0.0001 m.
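The 1,855.3248 m figure follows directly from the GRS80 equatorial radius; the short check below reproduces the arithmetic, assuming only that radius.

```python
# Check of the arithmetic above: one minute of arc along the GRS80 equator.
import math

a = 6378137.0                       # GRS80 equatorial radius, metres
circumference = 2 * math.pi * a     # equatorial circumference
mile = circumference / (360 * 60)   # 1 minute of arc = 1/21600 of the circle
print(round(mile, 4))               # 1855.3248
```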
The shape of the Earth is a slightly flattened sphere, which results in the Earth's circumference being 0.168% larger when measured around the equator than through the poles. The geographical mile is slightly longer than the nautical mile (which was historically linked to the circumference measured through both poles); one geographical mile is equivalent to approximately 1.0018 nautical miles.
Historical units
Historically, certain nations used slightly different divisions to create their geographical miles.
The Portuguese system derived its miles (milhas) as one third of its league, which itself had three separate values. When each equatorial degree was divided into 18 leagues, the geographical mile was equal to 1/54 degree, or about 2,061 m; when divided into 20 leagues, it was equal to 1/60 degree, approximating the values provided above; and when divided into 25 leagues, it was equal to 1/75 degree, or about 1,484 m.
The geographical miles of the traditional Dutch, German, and Danish systems all approximated their much longer miles (equivalent to English leagues) by using a larger division of the equatorial degree. Instead of one minute of arc, they all used four minutes of arc (1/15 degree) to produce a distance now notionally equal to four geographical miles (about 7,421 m) but actually differing slightly depending on official measurements and computations. (The value of the Danish unit, for example, was computed by the astronomer Ole Rømer.)
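The variants above all reduce to simple divisions of the equatorial degree; the sketch below reproduces the approximate metre values, using the modern degree length (assumed here as 60 × 1,855.3248 m) rather than any historical measurement.

```python
# Reproduces the approximate values above. The modern equatorial degree is
# an assumption standing in for the varying historical measurements.
DEGREE_M = 60 * 1855.3248                     # metres in one equatorial degree

for leagues in (18, 20, 25):                  # Portuguese divisions of a degree
    miles = 3 * leagues                       # the mile was one third of a league
    print(f"{leagues} leagues/degree -> {DEGREE_M / miles:,.0f} m per mile")

print(f"4 arc-minutes -> {4 * DEGREE_M / 60:,.0f} m")  # Dutch/German/Danish style
```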
Relationship with the nautical mile
The geographical mile is closely related to the nautical mile, which was originally determined as 1 minute of arc along a great circle of the Earth but is nowadays defined by treaty as exactly 1,852 m. The US National Institute of Standards and Technology notes that "The international nautical mile of 1,852 meters (6,076.115 49... feet) was adopted effective July 1, 1954, for use in the United States. The value formerly used in the United States was 6,080.20 feet = 1 nautical (geographical or sea) mile." This deprecated value of 6,080.20 feet is equivalent to 1,853.24 m. A separate reference identifies the geographic mile as being identical to the international nautical mile of 1,852 m and slightly shorter than the British nautical mile of 6,080 feet (about 1,853.18 m).
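The foot-to-metre conversions quoted above can be verified with the exact international foot of 0.3048 m:

```python
# Verifying the quoted foot-metre conversions (international foot = 0.3048 m).
print(round(6080.20 * 0.3048, 2))      # 1853.24 -- the deprecated US value
print(round(6076.11549 * 0.3048, 2))   # 1852.0  -- the treaty nautical mile
```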
Scandinavians used their own version of the geographical mile as their nautical mile up to the beginning of the 20th century, causing it to be better known as the sea mile in Danish (sømil), Norwegian (sjømil), and Swedish (sjömil).
Use
The unit is not used much in English-speaking countries but is cited in some United States laws. For example, Section 1301(a) of the Submerged Lands Act defines state seaward boundaries in terms of geographic miles. While debating what became the Land Ordinance of 1785, Thomas Jefferson's committee wanted to divide the public lands in the west into "hundreds of ten geographical miles square, each mile containing 6,086 and 4-10ths of a foot" and "sub-divided into lots of one mile square each, or 850 and 4-10ths of an acre".
See also
Conversion of units
Medieval weights and measures for details of the geographical league of France
Mile for the various other miles in use
Nautical mile
References
Units of length | Geographical mile | ["Mathematics"] | 810 | ["Quantity", "Units of measurement", "Units of length"] |