| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
2,258,803 | https://en.wikipedia.org/wiki/IEC%2060364 | IEC 60364 Electrical Installations for Buildings is the International Electrotechnical Commission (IEC)'s international standard on electrical installations of buildings. This standard is an attempt to harmonize national wiring standards in an IEC standard and is published in the European Union by CENELEC as "HD 60364". The latest versions of many European wiring regulations (e.g., BS 7671 in the UK) follow the section structure of IEC 60364 very closely, but contain additional language to cater for historic national practice and to simplify field use and determination of compliance by electricians and inspectors. National codes and site guides are meant to attain the common objectives of IEC 60364, and provide rules in a form that allows for guidance of persons installing and inspecting electrical systems.
The standard has several parts:
Part 1: Fundamental principles, assessment of general characteristics, definitions
Part 4: Protection for safety
Section 41: Protection against electric shock
Section 42: Protection against thermal effects
Section 43: Protection against overcurrent
Section 44: Protection against voltage disturbances and electromagnetic disturbances
Part 5: Selection and erection of electrical equipment
Section 51: Common rules
Section 52: Wiring systems
Section 53: Isolation, switching and control
Section 54: Earthing arrangements, protective conductors and protective bonding conductors
Section 55: Other equipment (Note: Some national standards provide an individual document for each chapter of this section, i.e. 551 Low-voltage generating sets, 557 Auxiliary circuits, 559 Luminaires and lighting installations)
Section 56: Safety services
Part 6: Verification
Part 7: Requirements for special installations or locations
Section 701: Electrical installations in bathrooms
Section 702: Swimming pools and other basins
Section 703: Rooms and cabins containing sauna heaters
Section 704: Construction and demolition site installations
Section 705: Electrical installations of agricultural and horticultural premises
Section 706: Restrictive conductive locations
Section 708: Electrical installations in caravan parks and caravans
Section 709: Marinas and pleasure craft
Section 710: Medical locations
Section 711: Exhibitions, shows and stands
Section 712: Solar photovoltaic (PV) power supply systems
Section 713: Furniture
Section 714: External lighting
Section 715: Extra-low-voltage lighting installations
Section 717: Mobile or transportable units
Section 718: Communal facilities and workplaces
Section 721: Electrical installations in caravans and motor caravans
Section 722: Supplies for Electric Vehicles
Section 729: Operating or maintenance gangways
Section 740: Temporary electrical installations for structures, amusement devices and booths at fairgrounds, amusement parks and circuses
Section 753: Heating cables and embedded heating systems
Part 8: Functional Aspects
Section 1: Energy Efficiency
Section 2: Prosumer’s low-voltage electrical installations
Section 3: Operation of prosumer’s electrical installations
See also
Electrical wiring
Earthing systems
BS 7671
AS/NZS 3000
External links
All IEC 60364 parts and sections published by the IEC
NEMA comparison of IEC 60364 with the US NEC
How the IEC relates to North America—particularly IEC 60364
WIKI-Electrical installation guide – According to IEC 60364, Schneider Electric, 2010.
Online Cable Sizing Tool to IEC 60364-5-52:2009
Electric power distribution
60364 | IEC 60364 | [
"Technology"
] | 664 | [
"Computer standards",
"IEC standards"
] |
2,258,865 | https://en.wikipedia.org/wiki/Isotopologue | In chemistry, isotopologues (also spelled isotopologs) are molecules that differ only in their isotopic composition. They have the same chemical formula and bonding arrangement of atoms, but at least one atom has a different number of neutrons than the parent.
An example is water, whose hydrogen-related isotopologues are: "light water" (HOH or ¹H₂O), "semi-heavy water" with the deuterium isotope in equal proportion to protium (HDO or ¹H²HO), "heavy water" with two deuterium atoms (D₂O or ²H₂O); and "super-heavy water" or tritiated water (T₂O or ³H₂O, as well as HTO and DTO, where some or all of the hydrogen is the radioactive tritium isotope). Oxygen-related isotopologues of water include the commonly available form of heavy-oxygen water (H₂¹⁸O) and the more difficult to separate version with the ¹⁷O isotope. Both elements may be replaced by isotopes, for example in the doubly labeled water isotopologue D₂¹⁸O. Altogether, there are 9 different stable water isotopologues, and 9 radioactive isotopologues involving tritium, for a total of 18. However, only certain ratios are possible in mixture, due to prevalent hydrogen swapping.
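The count of 18 isotopologues follows from combining the possible hydrogen pairs with the three oxygen isotopes. Below is a minimal Python sketch of that enumeration; the isotope lists are assumptions based on the standard stable isotopes of hydrogen and oxygen, plus tritium for the radioactive set:

```python
from itertools import combinations_with_replacement

stable_h = ["H", "D"]            # protium, deuterium
all_h = ["H", "D", "T"]          # plus radioactive tritium
oxygens = ["16O", "17O", "18O"]  # stable oxygen isotopes

def isotopologues(h_isotopes):
    # The two hydrogen sites are equivalent, so HD and DH count once.
    return [(h1, h2, o)
            for h1, h2 in combinations_with_replacement(h_isotopes, 2)
            for o in oxygens]

stable = isotopologues(stable_h)                                  # 3 x 3 = 9
radioactive = [i for i in isotopologues(all_h) if "T" in i[:2]]   # 3 x 3 = 9

print(len(stable), len(radioactive))  # -> 9 9, i.e. 18 in total
```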
The atom(s) of the different isotope may be anywhere in a molecule, so the difference is in the net chemical formula. If a compound has several atoms of the same element, any one of them could be the altered one, and it would still be the same isotopologue. When considering the different locations of the same isotope, the term isotopomer, first proposed by Seeman and Paine in 1992, is used.
Isotopomerism is analogous to constitutional isomerism or stereoisomerism of different elements in a structure. Depending on the formula and the symmetry of the structure, there might be several isotopomers of one isotopologue. For example, ethanol has the molecular formula C₂H₅OH (or C₂H₆O). Mono-deuterated ethanol, C₂H₅DO, is an isotopologue of it. The structural formulas CH₃CH₂OD and CH₃CHDOH are two isotopomers of that isotopologue.
Singly substituted isotopologues
Analytical chemistry applications
Singly substituted isotopologues may be used for nuclear magnetic resonance experiments, where deuterated solvents such as deuterated chloroform (CDCl₃ or C²HCl₃) do not interfere with the solutes' ¹H signals, and in investigations of the kinetic isotope effect.
Geochemical applications
In the field of stable isotope geochemistry, isotopologues of simple molecules containing rare heavy isotopes of carbon, oxygen, hydrogen, nitrogen, and sulfur are used to trace equilibrium and kinetic processes in natural environments and in Earth's past.
Doubly substituted isotopologues
Measurement of the abundance of clumped isotopes (doubly substituted isotopologues) of gases has been used in the field of stable isotope geochemistry to trace equilibrium and kinetic processes in the environment inaccessible by analysis of singly substituted isotopologues alone.
Currently measured doubly substituted isotopologues include:
Carbon dioxide: ¹³C¹⁸O¹⁶O
Methane: ¹³CH₃D and ¹²CH₂D₂
Oxygen: ¹⁸O₂ and ¹⁷O¹⁸O
Nitrogen: ¹⁵N₂
Nitrous oxide: ¹⁵N¹⁵N¹⁶O and ¹⁴N¹⁵N¹⁸O
Analytical requirements
Because of the relative rarity of the heavy isotopes of C, H, and O, isotope-ratio mass spectrometry (IRMS) of doubly substituted species requires larger volumes of sample gas and longer analysis times than traditional stable isotope measurements, thereby requiring extremely stable instrumentation. Also, the doubly substituted isotopologues are often subject to isobaric interferences, as in the methane system where adduct ions such as ¹³CH₅⁺ and ¹²CH₄D⁺ interfere with measurement of the ¹³CH₃D and ¹²CH₂D₂ species at mass 18. A measurement of such species requires either very high mass resolving power to separate one isobar from another, or modeling of the contributions of the interfering species to the abundance of the species of interest. These analytical challenges are significant: the first publication precisely measuring doubly substituted isotopologues did not appear until 2004, though singly substituted isotopologues had been measured for decades previously.
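As a rough illustration of the resolving-power requirement (not the full interference budget discussed in the literature), the mass separation between the two mass-18 methane isotopologues can be estimated from standard atomic masses. The values below are approximate and the calculation is only a back-of-the-envelope sketch:

```python
# Approximate atomic masses in unified atomic mass units (u).
m = {"1H": 1.00783, "D": 2.01410, "12C": 12.00000, "13C": 13.00335}

m_13ch3d = m["13C"] + 3 * m["1H"] + m["D"]       # ~18.0409 u
m_12ch2d2 = m["12C"] + 2 * m["1H"] + 2 * m["D"]  # ~18.0439 u

delta_m = abs(m_12ch2d2 - m_13ch3d)
required_r = m_13ch3d / delta_m                  # resolving power R = m / delta_m

print(f"delta m ~ {delta_m:.4f} u, R ~ {required_r:.0f}")  # ~0.0029 u, R of several thousand
```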
As an alternative to more conventional gas-source IRMS instruments, tunable diode laser absorption spectroscopy has also emerged as a method to measure doubly substituted species free from isobaric interferences, and has been applied to the methane isotopologue ¹³CH₃D.
Equilibrium fractionation
When a light isotope is replaced with a heavy isotope (e.g., ¹³C for ¹²C), the bond between the two atoms will vibrate more slowly, thereby lowering the zero-point energy of the bond and acting to stabilize the molecule. An isotopologue with a doubly substituted bond is therefore slightly more thermodynamically stable, which will tend to produce a higher abundance of the doubly substituted (or "clumped") species than predicted by the statistical abundance of each heavy isotope (known as a stochastic distribution of isotopes). This effect increases in magnitude with decreasing temperature, so the abundance of the clumped species is related to the temperature at which the gas was formed or equilibrated. By measuring the abundance of the clumped species in standard gases formed in equilibrium at known temperatures, the thermometer can be calibrated and applied to samples with unknown abundances.
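The departure from the stochastic distribution is conventionally reported as a per-mil excess; the following is a minimal sketch of the standard definition (the Δ47 notation for the mass-47 CO₂ isotopologue is the best-known instance):

```latex
% Clumped-isotope excess of isotopologue i, relative to a stochastic
% (random) distribution of the heavy isotopes, in per mil:
\Delta_i = \left( \frac{R_i^{\mathrm{measured}}}{R_i^{\mathrm{stochastic}}} - 1 \right) \times 1000
% where R_i is the abundance of isotopologue i relative to the
% unsubstituted isotopologue.
```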
Kinetic fractionation
The abundances of multiply substituted isotopologues can also be affected by kinetic processes. As for singly substituted isotopologues, departures from thermodynamic equilibrium in a doubly substituted species can indicate that a particular reaction is taking place. Photochemistry occurring in the atmosphere has been shown to alter the abundance of the clumped oxygen isotopologue ¹⁸O¹⁸O away from equilibrium, as has photosynthesis. Measurements of ¹³CH₃D and ¹²CH₂D₂ can identify microbial processing of methane and have been used to demonstrate the significance of quantum tunneling in the formation of methane, as well as mixing and equilibration of multiple methane reservoirs. Variations in the relative abundances of the two N₂O isotopologues ¹⁴N¹⁵N¹⁶O and ¹⁵N¹⁴N¹⁶O can distinguish whether N₂O has been produced by bacterial denitrification or by bacterial nitrification.
Multiple substituted isotopologues
Biochemical applications
Multiple substituted isotopologues may be used for nuclear magnetic resonance or mass spectrometry experiments, where isotopologues are used to elucidate metabolic pathways in a qualitative (detecting new pathways) or quantitative (detecting the quantitative share of a pathway) approach. A popular example in biochemistry is the use of uniformly labelled glucose (U-¹³C glucose), which is metabolized by the organism under investigation (e.g. a bacterium, plant, or animal) and whose signatures can later be detected in newly formed amino acids or metabolically cycled products.
Mass spectrometry applications
Resulting from either naturally occurring isotopes or artificial isotopic labeling, isotopologues can be used in various mass spectrometry applications.
Applications of natural isotopologues
The relative mass spectral intensity of natural isotopologues, calculable from the fractional abundances of the constituent elements, is exploited by mass spectrometry practitioners in quantitative analysis and unknown compound identification:
To identify the more likely molecular formulas for an unknown compound based on the matching between the observed isotope abundance pattern in an experiment and the expected isotope abundance patterns for given molecular formulas.
To expand the linear dynamic response range of the mass spectrometer by following multiple isotopologues, with an isotopologue of lower abundance still generating a linear response even while the isotopologues of higher abundance give saturated signals.
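As an illustration of the first point above, the expected isotope pattern of a molecule can be computed from fractional abundances. The Python sketch below covers the contribution of ¹³C alone; the 1.07 % abundance is the approximate natural value, and other elements' isotopes are ignored for simplicity:

```python
from math import comb

P_13C = 0.0107  # approximate natural fractional abundance of 13C

def carbon_pattern(n_carbons, max_extra=3):
    """Relative intensities of the M+0, M+1, ... peaks due to 13C alone."""
    return [comb(n_carbons, k) * P_13C**k * (1 - P_13C)**(n_carbons - k)
            for k in range(max_extra + 1)]

# Example: a 20-carbon compound; M+1 is roughly 20 * 1.07% ~ 21% of M+0.
pattern = carbon_pattern(20)
print([round(p / pattern[0], 3) for p in pattern])  # ~[1.0, 0.216, 0.022, 0.001]
```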
Applications of isotope labeling
A compound tagged by replacing specific atoms with the corresponding isotopes can facilitate the following mass spectrometry methods:
Metabolic flux analysis (MFA)
Stable isotopically labeled internal standards for quantitative analysis
See also
Mass (mass spectrometry)
Isotope-ratio mass spectrometry
Isotopomer
Clumped isotopes
Isotopocule
References
External links
Fractional abundance of atmospheric isotopologues, SpectralCalc.com
Isotopes | Isotopologue | [
"Physics",
"Chemistry"
] | 1,688 | [
"Isotopes",
"Nuclear physics"
] |
2,259,059 | https://en.wikipedia.org/wiki/Chronometry | Chronometry or horology () is the science studying the measurement of time and timekeeping. Chronometry enables the establishment of standard measurements of time, which have applications in a broad range of social and scientific areas. Horology usually refers specifically to the study of mechanical timekeeping devices, while chronometry is broader in scope, also including biological behaviours with respect to time (biochronometry), as well as the dating of geological material (geochronometry).
Horology is commonly used specifically with reference to the mechanical instruments created to keep time: clocks, watches, clockwork, sundials, hourglasses, clepsydras, timers, time recorders, marine chronometers, and atomic clocks are all examples of instruments used to measure time. People interested in horology are called horologists; the term is used both by people who deal professionally with timekeeping apparatus and by enthusiasts and scholars of horology. Horology and horologists have numerous organizations, both professional associations and more scholarly societies. The largest horological membership organisation globally is the NAWCC, the National Association of Watch and Clock Collectors, which is US-based but also has local chapters elsewhere.
Records of timekeeping are attested during the Paleolithic, in the form of inscriptions made to mark the passing of lunar cycles and measure years. Written calendars were then invented, followed by mechanical devices. The highest levels of precision are presently achieved by atomic clocks, which are used to track the international standard second.
Etymology
Chronometry is derived from two root words, chronos and metron (χρόνος and μέτρον in Ancient Greek respectively), with rough meanings of "time" and "measure". The combination of the two is taken to mean time measuring.
In the Ancient Greek lexicon, meanings and translations differ depending on the source. Chronos is used in relation to time in definite periods, and is linked to dates, chronological accuracy, and, in rare cases, a delay. The lengths of time it refers to range from seconds to seasons of the year to lifetimes; it can also concern periods of time in which some specific event takes place, persists, or is delayed.
The root word is associated with the god Chronos in Ancient Greek mythology, who embodied the image of time and originated out of the primordial chaos. He was known as the one who spins the Zodiac Wheel, further evidence of his connection to the progression of time. However, Ancient Greek makes a distinction between two types of time: chronos, the static and continuing progress of present to future, time in a sequential and chronological sense; and kairos, a more abstract concept representing the opportune moment for action or change to occur.
Kairos (καιρός) carries little emphasis on precise chronology, instead denoting a time specifically fit for something, or a period of time characterised by some aspect of crisis, also relating to the endtime. It can likewise be seen in the light of an advantage, profit, or fruit of a thing, but has also been represented in apocalyptic terms, varying between misfortune and success: for Homer it was likened to a body part left vulnerable by a gap in armour, a benefit or a calamity depending on the perspective. It is also referenced in Christian theology, where it implies God's action and judgement in particular circumstances.
Because of the inherent relation between chronos and kairos, and their joint function in the Ancient Greek portrayal and concept of time, understanding one means, in part, understanding the other. The implication of chronos, an indifferent disposition and eternal essence, lies at the core of the science of chronometry: bias is avoided, and definite measurement is favoured.
Subfields
Biochronometry
Biochronometry (also chronobiology or biological chronometry) is the study of biological behaviours and patterns seen in animals with factors based in time. It can be categorised into circadian rhythms and circannual cycles. Examples of these behaviours include the relation of daily and seasonal tidal cues to the activity of marine plants and animals, the photosynthetic capacity and phototactic responsiveness of algae, and metabolic temperature compensation in bacteria.
Circadian rhythms of various species can be observed through their gross motor function throughout the course of a day. These patterns become more apparent when the day is further categorised into activity and rest times. Investigation into a species is conducted through comparisons of free-running and entrained rhythms, where the former is obtained from within the species' natural environment and the latter from a subject that has been taught certain behaviours. Circannual rhythms are alike but pertain to patterns on the scale of a year; migration, moulting, reproduction, and body weight are common examples, and research and investigation use methods similar to those for circadian patterns.
Circadian and circannual rhythms can be seen in all organisms, both single- and multi-celled. A sub-branch of biochronometry is microbiochronometry (also chronomicrobiology or microbiological chronometry), the examination of behavioural sequences and cycles within micro-organisms. Adapting to circadian and circannual rhythms is an essential adaptation for living organisms. These studies, as well as educating on the adaptations of organisms, bring to light factors affecting the responses of many species and organisms, and can be applied to further understand overall physiology. This applies to humans as well; examples include factors of human performance, sleep, metabolism, and disease development, which are all connected to biochronometric cycles.
Mental chronometry
Mental chronometry (also called cognitive chronometry) studies human information processing mechanisms, namely reaction time and perception. As well as a field of chronometry, it also forms a part of cognitive psychology and its contemporary human information processing approach. Research comprises applications of the chronometric paradigms – many of which are related to classical reaction time paradigms from psychophysiology – through measuring reaction times of subjects with varied methods, and contribute to studies in cognition and action. Reaction time models and the process of expressing the temporostructural organisation of human processing mechanisms have an innate computational essence to them. It has been argued that because of this, conceptual frameworks of cognitive psychology cannot be integrated in their typical fashions.
One common method is the use of event-related potentials (ERPs) in stimulus-response experiments. These are fluctuations of generated transient voltages in neural tissues that occur in response to a stimulus event either immediately before or after. This testing emphasises the mental events' time-course and nature and assists in determining the structural functions in human information processing.
Geochronometry
The dating of geological materials makes up the field of geochronometry, and falls within areas of geochronology and stratigraphy, while differing itself from chronostratigraphy. The geochronometric scale is periodic, its units working in powers of 1000, and is based in units of duration, contrasting with the chronostratigraphic scale. The distinctions between the two scales have caused some confusion – even among academic communities.
Geochronometry deals with calculating a precise date for rock sediments and other geological events, giving an idea of the history of various areas: for example, volcanic and magmatic movements and occurrences can be readily recognised, as can marine deposits, which can be indicators of marine events and even global environmental changes. This dating can be done in a number of ways. All dependable methods – barring the exceptions of thermoluminescence, radioluminescence and ESR (electron spin resonance) dating – are based in radioactive decay, focusing on the degradation of the radioactive parent nuclide and the corresponding daughter product's growth.
By measuring the daughter isotopes in a specific sample, its age can be calculated. The preserved conformity of parent and daughter nuclides provides the basis for the radioactive dating of geochronometry, applying the Rutherford–Soddy law of radioactivity, specifically using the concept of radioactive transformation in the growth of the daughter nuclide.
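The parent–daughter relationship described above reduces to the standard radiometric age equation; a sketch is given below, assuming a closed system with no initial daughter nuclide:

```latex
% Decay of the parent P and growth of the daughter D (closed system, D_0 = 0):
P(t) = P_0 e^{-\lambda t}, \qquad
D(t) = P_0 \left(1 - e^{-\lambda t}\right) = P(t)\left(e^{\lambda t} - 1\right)
% Solving for the age from the measured daughter/parent ratio:
t = \frac{1}{\lambda} \ln\!\left(1 + \frac{D}{P}\right)
```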
Thermoluminescence is an extremely useful concept to apply, being used in a diverse range of areas in science; dating using thermoluminescence is a cheap and convenient method for geochronometry. Thermoluminescence is the production of light from a heated insulator or semiconductor; it is occasionally confused with the incandescent light emission of a material, a different process despite the many similarities. It only occurs, however, if the material has had previous exposure to, and absorption of energy from, radiation. Importantly, the light emission of thermoluminescence cannot be repeated: the entire process, from the material's exposure to radiation, would have to be repeated to generate another thermoluminescence emission. The age of a material can be determined by measuring the amount of light given off during the heating process, by means of a phototube, as the emission is proportional to the dose of radiation the material absorbed.
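In practice, the thermoluminescence age is obtained by dividing the total dose recorded by the sample (inferred from the measured light output) by the dose it receives per year; a minimal sketch of that relationship follows, with purely illustrative numbers:

```latex
% Thermoluminescence age: accumulated (palaeo)dose divided by the annual dose rate.
\text{Age} = \frac{\text{palaeodose (Gy)}}{\text{annual dose rate (Gy/yr)}}
% e.g. a palaeodose of 15 Gy at 3 mGy/yr gives an age of about 5000 years.
```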
Time metrology
Time metrology or time and frequency metrology is the application of metrology for timekeeping, including frequency stability.
Its main tasks are the realization of the second as the SI unit of measurement for time and the establishment of time standards and frequency standards as well as their dissemination.
History
Early humans would have used their basic senses to perceive the time of day, and relied on their biological sense of time to discern the seasons in order to act accordingly. Their physiological and behavioural seasonal cycles were mainly influenced by a melatonin-based photoperiodic time-measurement system – which measures the change in daylight within the annual cycle, giving a sense of the time of year – and by their circannual rhythms, which provide an anticipation of environmental events months beforehand and so increase the chances of survival.
There is debate over when lunar calendars were first used, and over whether some findings constitute lunar calendars at all. Most related findings and materials from the Palaeolithic era are fashioned from bone and stone, with various markings made by tools. These markings are now thought not to have been made to represent lunar cycles but to be non-notational and irregular engravings; a pattern of later subsidiary marks that disregard the previous design indicates that the markings were motifs and ritual marking instead.
However, as humans' focus turned to farming, the importance of, and reliance on, understanding the rhythms and cycle of the seasons grew, and the unreliability of lunar phases became problematic. An early human accustomed to the phases of the moon would use them as a rule of thumb, and the potential for weather to interfere with reading the cycle further degraded their reliability. The length of a lunar month is on average less than our current month, so it does not act as a dependable alternative; as the years progress, the accumulated error grows until some other indicator provides a correction.
The Ancient Egyptian calendars were among the first calendars made, and the civil calendar endured for a long period afterwards, surviving past even its culture's collapse and through the early Christian era. Some have assumed it to have been invented near 4231 BC, but accurate and exact dating is difficult for that era, and the invention has also been attributed to 3200 BC, when the first historical king of Egypt, Menes, united Upper and Lower Egypt. It was originally based on the cycles and phases of the moon; however, the Egyptians later realised the calendar was flawed upon noticing that the star Sirius rose before sunrise every 365 days, a year as we know it now, and it was remade to consist of twelve months of thirty days, with five epagomenal days. The former is referred to as the Ancient Egyptians' lunar calendar, and the latter the civil calendar.
Early calendars often hold an element of their respective culture's traditions and values: for example, the five-day intercalary month of the Ancient Egyptians' civil calendar represented the birthdays of the gods Horus, Isis, Set, Osiris and Nephthys. Maya examples include the use of a zero date, as well as the Tzolkʼin's connection to their thirteen layers of heaven (the product of thirteen and the number of human digits, twenty, making its 260-day year) and to the length of time between conception and birth in pregnancy.
Museums and libraries
Europe
There are many horology museums and several specialized libraries devoted to the subject. One example is the Royal Greenwich Observatory, which is also the source of the Prime Meridian and the home of the first marine timekeepers accurate enough to determine longitude (made by John Harrison). Other horological museums in the London area include the Clockmakers' Museum, which re-opened at the Science Museum in October 2015, the horological collections at the British Museum, the Science Museum (London), and the Wallace Collection. The Guildhall Library in London contains an extensive public collection on horology. In Upton, also in the United Kingdom, at the headquarters of the British Horological Institute, there is the Museum of Timekeeping. A more specialised museum of horology in the United Kingdom is the Cuckooland Museum in Cheshire, which hosts the world's largest collection of antique cuckoo clocks.
One of the more comprehensive museums dedicated to horology is the Musée international d'horlogerie, in La Chaux-de-Fonds in Switzerland, which contains a public library of horology. The Musée d'Horlogerie du Locle is smaller but located nearby. Other good horological libraries providing public access are at the Musée international d'horlogerie in Switzerland, at La Chaux-de-Fonds, and at Le Locle.
In France, Besançon has the Musée du Temps (Museum of Time) in the historic Palais Granvelle. In Serpa and Évora, in Portugal, there is the Museu do Relógio. In Germany, there is the Deutsches Uhrenmuseum in Furtwangen im Schwarzwald, in the Black Forest, which contains a public library of horology.
North America
The two leading specialised horological museums in North America are the National Watch and Clock Museum in Columbia, Pennsylvania, and the American Clock and Watch Museum in Bristol, Connecticut. Another museum dedicated to clocks is the Willard House and Clock Museum in Grafton, Massachusetts. One of the most comprehensive horological libraries open to the public is the National Watch and Clock Library in Columbia, Pennsylvania.
Organizations
Notable scholarly horological organizations include:
American Watchmakers-Clockmakers Institute – AWCI (United States of America)
Antiquarian Horological Society – AHS (United Kingdom)
British Horological Institute – BHI (United Kingdom)
Chronometrophilia (Switzerland)
Deutsche Gesellschaft für Chronometrie – DGC (Germany)
Horological Society of New York – HSNY (United States of America)
National Association of Watch and Clock Collectors – NAWCC (United States of America)
UK Horology - UK Clock & Watch Company based in Bristol
Glossary
See also
Complication (horology)
Hora (astrology)
List of clock manufacturers
List of watch manufacturers
Winthrop Kellogg Edey
Allan variance
Clock drift
International Earth Rotation and Reference Systems Service
Time and Frequency Standards Laboratory
Time deviation
Notes
References
Further reading
Berner, G.A., Illustrated Professional Dictionary of Horology, Federation of the Swiss Watch Industry FH 1961 - 2012
Daniels, George, Watchmaking, London: Philip Wilson Publishers, 1981 (reprinted June 15, 2011)
Beckett, Edmund, A Rudimentary Treatise on Clocks, Watches and Bells, 1903, from Project Gutenberg
Grafton, Edward, Horology, a popular sketch of clock and watch making, London: Aylett and Jones, 1849
Time
Frequency
Metrology
Timekeeping | Chronometry | [
"Physics",
"Mathematics"
] | 3,318 | [
"Scalar physical quantities",
"Frequency",
"Physical quantities",
"Time",
"Timekeeping",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
16,290,826 | https://en.wikipedia.org/wiki/PSR%20J0855%E2%88%924644 | PSR J0855-4644 is a pulsar in the constellation Vela, and was at one time thought possibly associated with supernova remnant RX J0852.0-4622. However, this association is considered unlikely since a central compact object with better matching kinematics to the shell has been observed.
References
External links
Simbad
00.7
Vela (constellation) | PSR J0855−4644 | [
"Astronomy"
] | 84 | [
"Vela (constellation)",
"Constellations"
] |
16,291,039 | https://en.wikipedia.org/wiki/NGC%204889 | NGC 4889 (also known as Caldwell 35) is an E4 supergiant elliptical galaxy. It was discovered in 1785 by the British astronomer Frederick William Herschel I, who catalogued it as a bright, nebulous patch. The brightest galaxy within the northern Coma Cluster, it is located at a median distance of 94 million parsecs (308 million light years) from Earth. At the core of the galaxy is a supermassive black hole that heats the intracluster medium through the action of friction from infalling gases and dust. The gamma ray bursts from the galaxy extend out to several million light years of the cluster.
As with other similar elliptical galaxies, only a fraction of the mass of NGC 4889 is in the form of stars. They have a flattened, unequal distribution that bulges within its edge. Between the stars is a dense interstellar medium full of heavy elements emitted by evolved stars. The diffuse stellar halo extends out to one million light years in diameter. Orbiting the galaxy is a very large population of globular clusters. NGC 4889 is also a strong source of soft X-ray, ultraviolet, and radio frequency radiation.
As the largest and most massive galaxy easily visible from Earth, NGC 4889 has played an important role in both amateur and professional astronomy, and has become a prototype for studying the dynamical evolution of other supergiant elliptical galaxies in the more distant universe.
Observation
NGC 4889 was not included by the astronomer Charles Messier in his famous Messier catalogue despite being an intrinsically bright object quite close to some Messier objects. The first known observation of NGC 4889 was that of Frederick William Herschel I, assisted by his sister, Caroline Lucretia Herschel, in 1785, who included it in the Catalogue of Nebulae and Clusters of Stars published a year later. In 1864, Herschel's son, John Frederick William Herschel, published the General Catalogue of Nebulae and Clusters of Stars. He included the objects catalogued by his father, including the one later to be called NGC 4889, plus others he found that were somehow missed by his father.
In 1888 the astronomer John Louis Emil Dreyer published the New General Catalogue of Nebulae and Clusters of Stars (NGC), with a total of 7,840 objects, but he erroneously duplicated the galaxy under two designations, NGC 4884 and NGC 4889. Within the following century, several projects that aimed to revise the NGC catalogue, such as the NGC/IC Project, the Revised New General Catalogue of Nebulae and Clusters of Stars, and the NGC 2000.0 project, discovered the duplication. It was then decided that the object would be called by its latter designation, NGC 4889, which is in use today.
In December 1995, Patrick Caldwell Moore compiled the Caldwell catalogue, a list of 109 persistent, bright objects that were somehow missed by Messier in his catalogue. The list also includes NGC 4889, which is given the designation Caldwell 35.
Properties
NGC 4889 is located along the high declination region of Coma Berenices, south of the constellation Canes Venatici. It can be traced by following the line from Beta Comae Berenices to Gamma Comae Berenices. With an apparent magnitude of 11.4, it can be seen by telescopes with 12 inch aperture, but its visibility is greatly affected by light pollution due to glare of the light from Beta Comae Berenices. However, under very dark, moonless skies, it can be seen by small telescopes as a faint smudge, but larger telescopes are needed in order to see the galaxy's halo.
In the updated Hubble sequence galaxy morphological classification scheme by the French astronomer Gérard de Vaucouleurs in 1959, NGC 4889 is classified as an E4 type galaxy, which means it has a flat distribution of stars within its width. It is also classified as a cD galaxy, a giant type of D galaxy, a classification devised by the American astronomer William Wilson Morgan in 1958 for galaxies with an elliptical-shaped nucleus surrounded by an immense, diffuse, dustless, extended halo.
NGC 4889 is far enough away that its distance can be measured using redshift. With a redshift of 0.0266 as derived from the Sloan Digital Sky Survey, and the Hubble constant as determined in 2013 by the ESA COBRAS/SAMBA/Planck Surveyor, its distance works out to 94 Mpc (308 million light years) from Earth.
NGC 4889 is probably the largest and the most massive galaxy within a radius of 100 Mpc (326 million light years) of the Milky Way. The galaxy has an effective radius of 2.9 arcminutes on the sky, translating to a diameter of 239,000 light years, about the size of the Andromeda Galaxy. In addition it has an immense diffuse light halo extending to 17.8 arcminutes, roughly half the angular diameter of the Sun, translating to 1.3 million light years in diameter.
Along with its large size, NGC 4889 may also be extremely massive. If we take the Milky Way as the standard of mass, it may be close to 8 trillion solar masses. However, as NGC 4889 is a spheroid, and not a flat spiral, it has a three-dimensional profile, so it may be as high as 15 trillion solar masses. However, as usual for elliptical galaxies, only a small fraction of the mass of NGC 4889 is in the form of stars that radiate energy.
Components
Giant elliptical galaxies like NGC 4889 are believed to be the result of multiple mergers of smaller galaxies. There is now little dust remaining to form the diffuse nebulae where new stars are created, so the stellar population is dominated by old, population II stars that contain relatively low abundances of elements other than hydrogen and helium. The egg-like shape of this galaxy is maintained by random orbital motions of its member stars, in contrast to the more orderly rotational motions found in a spiral galaxy such as the Milky Way. NGC 4889 has 15,800 globular clusters, more than Messier 87, which has 12,000. This is half of NGC 4874's collection of globular clusters, which has 30,000 globular clusters.
The space between the stars in the galaxy is filled with a diffuse interstellar medium of gas, which has been filled by the elements ejected from stars as they passed beyond the end of their main sequence lifetime. Carbon and nitrogen are being continuously supplied by intermediate mass stars as they pass through the asymptotic giant branch. The heavier elements from oxygen to iron are primarily produced by supernova explosions within the galaxy. The interstellar medium is continuously heated by the emission of in-falling gases towards its central SMBH.
Supermassive black hole
On December 5, 2011, astronomers measured the velocity dispersion of the central regions of two massive galaxies, one being NGC 4889 and the other NGC 3842 in the Leo Cluster. According to the data of the study, they found that the central supermassive black hole of NGC 4889 is 5,200 times more massive than the central black hole of the Milky Way, or equivalent to 2.1 × 10¹⁰ (21 billion) solar masses (best fit of the data; the possible range is from 6 billion to 37 billion solar masses). This makes it one of the most massive black holes on record. The diameter of the black hole's immense event horizon is about 20 to 124 billion kilometers, 2 to 12 times the diameter of Pluto's orbit. The ionized medium detected around the black hole suggests that NGC 4889 may have been a quasar in the past. It is now quiescent, presumably because it has already absorbed all readily available matter.
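The quoted event-horizon size can be checked against the Schwarzschild radius; the following is a rough sketch, assuming a non-rotating black hole and the best-fit mass of 2.1 × 10¹⁰ solar masses:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

M = 2.1e10 * M_sun       # best-fit black-hole mass
r_s = 2 * G * M / c**2   # Schwarzschild radius, m
diameter_km = 2 * r_s / 1e3

print(f"{diameter_km:.2e} km")  # ~1.2e11 km, i.e. roughly 124 billion km
```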
Environment
NGC 4889 lies at the center of component A of the Coma Cluster, a giant cluster of 2,000 galaxies which it shares with NGC 4874, although NGC 4889 is sometimes referred to as the cluster center, and it has also been called by its other designation, A1656-BCG. The total mass of the cluster is estimated to be on the order of 4 .
The Coma Cluster is located at exactly the center of the Coma Supercluster, which is one of the nearest superclusters to the Laniakea Supercluster. The Coma Supercluster itself is within the CfA Homunculus, the center of the CfA2 Great Wall, the nearest galaxy filament to Earth and one of the largest structures in the known universe.
Notes
References
External links
Elliptical galaxies
Coma Cluster
4889
035b
Coma Berenices
Astronomical objects discovered in 1785
+5-31-77
44715
08110
Discoveries by William Herschel | NGC 4889 | [
"Astronomy"
] | 1,807 | [
"Coma Berenices",
"Constellations"
] |
16,291,643 | https://en.wikipedia.org/wiki/HSL%20%28Fortran%20library%29 | HSL, originally the Harwell Subroutine Library, is a collection of Fortran 77 and 95 codes that address core problems in numerical analysis. It is primarily developed by the Numerical Analysis Group at the Rutherford Appleton Laboratory with contributions from other experts in the field.
HSL codes are easily recognizable by the format of their names, consisting of two letters followed by two numbers, a format dating back to the limited subroutine name lengths of early versions of Fortran. The letters denote a broad classification of the problem they solve, and the numbers serve to distinguish different codes. For example, the well-known sparse LU code MA28 (superseded by MA48) is Matrix Algebra code number 28. Fortran 95 codes are differentiated from Fortran 77 codes by the prefix HSL_.
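The naming convention (two letters, two digits, with an optional HSL_ prefix for Fortran 95 versions) can be captured by a simple pattern. The sketch below is illustrative only and is not an official HSL tool; the test names other than MA28 and MA48 are made up for the example:

```python
import re

# Two letters + two digits, optionally prefixed with "HSL_" for Fortran 95 codes.
HSL_NAME = re.compile(r"^(HSL_)?[A-Z]{2}\d{2}$")

for name in ["MA28", "MA48", "HSL_ZZ01", "ma48", "MA4"]:
    print(name, bool(HSL_NAME.match(name)))
# MA28, MA48 and HSL_ZZ01 match the pattern; the last two do not.
```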
History
Early history
Original development of the Harwell Subroutine Library began in 1963 by Mike Powell and Mike Hopper for internal use on an IBM mainframe at AERE Harwell. Early contributors also included Alan Curtis. With a spreading reputation, the Library was distributed externally for the first time in 1964 upon request. The first library catalog (AERE Report M-1748) was released in 1966.
Recent history
Over the intervening years, HSL has striven to maintain a high standard of reliability and has garnered a worldwide reputation as a prime source of numerical software. It has undergone a number of changes to reflect newly available features of the Fortran language, completing in 1990 the conversion to Fortran 77, and more recently, the entire Library has been made thread safe. Many newer codes are written in Fortran 95.
New packages continue to be developed, with a new release issued every two to three years. Many older codes have now been superseded and are available in the HSL Archive.
Licensing
The current version, HSL 2007 is a commercial product sold by AspenTech, but is also available without charge to individual academics direct from STFC for teaching and their own academic research purposes. HSL is currently not sold to commercial competitors of Aspen Technology.
Obsolete routines are stored in the HSL archive and are available for personal non-commercial use by anyone following registration with HSL. Commercial use and distribution of these routines still requires a purchased licence.
References
J.K.Reid and J.A.Scott (Dec 2006, Sep 2007), Guidelines for the development of HSL software, Technical Report RAL-TR-2006-031
M.J.D.Powell 25 years of Theoretical Physics 1954-1979: Chapter XVIII: Numerical Analysis. A special publication by Harwell Research Laboratory of UKAEA
Footnotes
External links
HSL home page at STFC
HSL home page at AspenTech
HSL Archive
Fortran libraries
Numerical software
Science and Technology Facilities Council
Science and technology in Oxfordshire
Vale of White Horse | HSL (Fortran library) | [
"Mathematics"
] | 564 | [
"Numerical software",
"Mathematical software"
] |
16,291,933 | https://en.wikipedia.org/wiki/NGC%206885 | NGC 6885, also Caldwell 37, is an open cluster in the constellation Vulpecula. It shines at magnitude +5.7/+8.1. Its celestial coordinates are RA , dec . It surrounds the naked eye Be star 20 Vulpeculae, and is located near M27 (Dumbbell nebula), the nebula IC 4954, and open clusters NGC 6882 and NGC 6940. It is 7'/18' across.
Notes
References
External links
SEDS – NGC 6885
VizieR – NGC 6885
NED – NGC 6885
Open clusters
6885
037b
Vulpecula | NGC 6885 | [
"Astronomy"
] | 130 | [
"Vulpecula",
"Constellations"
] |
16,292,504 | https://en.wikipedia.org/wiki/HAT-P-7b | HAT-P-7b (or Kepler-2b) is an extrasolar planet discovered in 2008. It orbits very close to its host star and is larger and more massive than Jupiter. Due to the extreme heat that it receives from its star, the dayside temperature is predicted to be , while nightside temperatures are . HAT-P-7b is also one of the darkest planets ever observed, with an albedo of less than 0.03—meaning it absorbs more than 97% of the visible light that strikes it.
Discovery
The HATNet Project telescopes HAT-7, located at the Smithsonian Astrophysical Observatory's Fred Lawrence Whipple Observatory in Arizona, and HAT-8, installed on the rooftop of Smithsonian Astrophysical Observatory's Submillimeter Array building atop Mauna Kea, Hawaii, observed 33,000 stars in HATNet field G154, on nearly every night from late May to early August 2004. The light curves resulting from the 5140 exposures obtained were searched for transit signals and a very significant periodic drop in brightness was detected in the star GSC 03547–01402 (HAT-P-7), with a depth of approximately 7.0 millimagnitude, a period of 2.2047 days, and a duration of 4.1 hours.
Fortunately HAT-P-7 was located in the overlapping area between fields G154 and G155 allowing the transit to be independently confirmed by the HAT-6 (Arizona) and HAT-9 (Hawaii) telescopes which observed the neighboring field G155. Field G155 was observed from late July 2004 to late September 2005 gathering an additional 11,480 exposures for a total of 16,620 data points.
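The transit depth reported above (about 7.0 millimagnitudes) can be translated into an approximate planet-to-star radius ratio; the back-of-the-envelope sketch below assumes an ideal box-shaped transit and ignores limb darkening:

```python
depth_mmag = 7.0                               # reported transit depth in millimagnitudes
flux_ratio = 10 ** (-0.4 * depth_mmag / 1000)  # in-transit flux relative to out-of-transit
depth_fraction = 1 - flux_ratio                # fractional dip in brightness, ~0.64%

radius_ratio = depth_fraction ** 0.5           # Rp / R* ~ sqrt(depth)
print(f"depth ~ {depth_fraction:.4f}, Rp/R* ~ {radius_ratio:.3f}")  # ~0.0064, ~0.080
```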
History
The GSC 03547-01402 system was within the initial field of view of the Kepler Mission spacecraft, which confirmed the transit and orbital properties of the planet with significantly improved confidence and observed occultation and light curve characteristics consistent with a strongly absorbing atmosphere with limited advection to the night side. In testing itself on HAT-P-7b, Kepler proved it was sensitive enough to detect Earth-like exoplanets.
On July 4, 2011, HAT-P-7b was the subject of the Hubble Space Telescope's one millionth scientific observation.
Physical characteristics
In August 2009, it was announced that HAT-P-7b may have a retrograde orbit, based upon measurements of the Rossiter–McLaughlin effect. This announcement came only a day after the announcement of the first planet discovered with such an orbit, WASP-17b. A study in 2012, utilizing the Rossiter–McLaughlin effect, determined the planetary orbit inclination with respect to the rotational axis of the star, equal to 155°.
In January 2010, it was announced that ellipsoidal light variations were detected for HAT-P-7b, the first detection of such kind. This method analyses the brightness variation caused by the rotation of a star as its shape is tidally distorted by the planet.
Weather
In December 2016, a letter published in Nature Astronomy by Dr. David Armstrong and his colleagues described evidence of strong wind jets of variable speed on HAT-P-7b. High variation in wind speed would explain similar variations in light reflected from HAT-P-7b's atmosphere. In particular, the brightest point on the planet shifts its phase or position on a timescale of only tens to hundreds of days, suggesting high variation in global wind speeds and cloud coverage. Condensation models of HAT-P-7b predict precipitation of Al₂O₃ (corundum) on the night side of the planet's atmosphere. The clouds themselves are likely made up of corundum, the mineral which forms rubies and sapphires.
See also
Hot Jupiter
HAT-P-11b
Constraints on the magnetic field strength of HAT-P-7 b
References
External links
HAT-P-7b light curve using differential photometry
Kepler Shows Exoplanet Is Unlike Anything in Our Solar System
Exoplanets with Kepler designations
Exoplanets discovered by HATNet
Cygnus (constellation)
Hot Jupiters
Transiting exoplanets
Giant planets
Exoplanets discovered in 2008 | HAT-P-7b | [
"Astronomy"
] | 865 | [
"Cygnus (constellation)",
"Constellations"
] |
16,292,765 | https://en.wikipedia.org/wiki/NGC%203632 | NGC 3632 (also known as Caldwell 40) and NGC 3626 is an unbarred lenticular galaxy and Caldwell object in the constellation Leo. It was discovered by William Herschel, on 14 March 1784. It shines at magnitude +10.6/+10.9. Its celestial coordinates are RA , dec . It is located near the naked-eye-class A4 star Zosma, as well as galaxies NGC 3608, NGC 3607, NGC 3659, NGC 3686, NGC 3684, NGC 3691, NGC 3681, and NGC 3655. Its dimensions are 2′.7 × 1′.9. The galaxy belongs to the NGC 3607 group some 70 million light-years distant, itself one of the many Leo II groups.
Notes
References
External links
Unbarred lenticular galaxies
3632
040b
Virgo Supercluster
Leo (constellation)
Astronomical objects discovered in 1784
034684 | NGC 3632 | [
"Astronomy"
] | 198 | [
"Leo (constellation)",
"Constellations"
] |
16,294,930 | https://en.wikipedia.org/wiki/Metageography | Metageography is a term used by Martin W. Lewis and Kären E. Wigen's 1997 The Myth of Continents: A Critique of Metageography, which analyzes metageographical constructs such as "East", "West", "Europe", "Asia", "North" or "South". which they define as "the set of spatial structures through which people order their knowledge of the world" Geographies, wrote one reviewer, are thus much more than just the ways in which societies are stretched across the Earth's surface. They also include the "contested, arbitrary, power-laden, and often inconsistent ways in which those structures are represented epistemologically."
In an interview, Lewis explained: "By 'metageography' I mean the relatively unexamined and often taken-for-granted spatial frameworks through which knowledge is organized within all fields of the social sciences and humanities." He added that "the distinction between the merely geographical and the metageographical is not always clear-cut."
The term was criticized by James M. Blaut: "the word metageography seems to have been coined by the authors as an impressive-sounding synonym for 'world cultural geography.'" Lewis and Wigen, however, disagreed, arguing that every consideration of human affairs employs a metageography as a structuring force on one's conception of the world.
In 1969, Soviet geographers Gokhman, Gurevich and Saushkin wrote that "[m]etageography is concerned with study of the common basis of geographical regularities and the potentialities of geography as a science" and argued that many factors must be taken into account in order to define geographical entities, not simply spatial ones.
References
External links
Sidaway, J. D. (2007). Enclave space: a new metageography of development? Area, 39(3), 331–339. https://doi.org/10.1111/j.1475-4762.2007.00757.x
Human geography | Metageography | [
"Environmental_science"
] | 431 | [
"Environmental social science",
"Human geography"
] |
16,296,098 | https://en.wikipedia.org/wiki/X%20mark | An X mark (also known as an ex mark or a cross mark or simply an X or ex or a cross) is used to indicate the concept of negation (for example "no, this has not been verified", "no, that is not the correct answer" or "no, I do not agree") as well as an indicator (for example, in election ballot papers or in maps as an x-marks-the-spot). Its opposite is often considered to be the O mark used in Japan and Korea or the check mark used in the West. In Japanese, the X mark (❌) is called "batsu" (ばつ) and can be expressed by someone by crossing their arms.
It is also used as a replacement for a signature for a person who is blind or illiterate and thus cannot write their name. Typically, the writing of an X used for this purpose must be witnessed to be valid.
Despite the negative sense often attached to the letter X, it remains resilient in a range of other uses: it serves as the multiplication symbol, the Roman numeral for 10, and the mark of a forgotten treasure. As a verb, to X (or ex) off/out or to cross off/out means to add such a mark. It is quite common, especially on printed forms and documents, for there to be squares in which to place x marks, or interchangeably checks.
It is traditionally used on maps to indicate locations, most famously on treasure maps. It is also used as a set of three to mark jugs of moonshine for having completed all distillation steps, while additionally signifying its potency (as high as 150 proof) relative to legal spirits, which rarely exceed 80 proof (40% ABV).
Among Native Americans in the 18th and 19th centuries, the X mark was used as a signature to denote presence or approval, particularly regarding agreements and treaties.
In the 21st century, the X mark started to be used to indicate collaborations between fashion brands.
Unicode
Unicode provides a variety of related symbols. The plain letter X is generally rendered with a less symmetrical form than the dedicated cross-shaped symbols; a selection of these is shown in the sketch below.
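The listing below gives a few of the cross-shaped Unicode characters commonly used for the X mark; the code points are standard Unicode assignments, but the selection is an illustrative subset rather than an exhaustive table:

```python
# Selected Unicode characters related to the X mark (illustrative subset).
x_marks = {
    0x00D7: "MULTIPLICATION SIGN",
    0x2715: "MULTIPLICATION X",
    0x2716: "HEAVY MULTIPLICATION X",
    0x2717: "BALLOT X",
    0x2718: "HEAVY BALLOT X",
    0x274C: "CROSS MARK",
}
for codepoint, name in x_marks.items():
    print(f"U+{codepoint:04X}  {chr(codepoint)}  {name}")
```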
See also
List of international common standards
Single-letter second-level domain
Saltire
Dagger (typography) † ‡
Tally marks
Check mark ✓
No symbol ⃠
Mathematics
Multiplication sign
Cartesian product
Cross product
Subcultures
Straight edge
Footnotes
Cross symbols
Mathematical symbols
Typographical symbols
"Mathematics"
] | 522 | [
"Symbols",
"Mathematical symbols",
"Typographical symbols"
] |
16,296,155 | https://en.wikipedia.org/wiki/Coalescing%20%28computer%20science%29 | In computer science, coalescing is a part of memory management in which two adjacent free blocks of computer memory are merged.
When a program no longer requires certain blocks of memory, these blocks can be freed. Without coalescing, they stay separate from each other in their original requested sizes, even if they are next to each other. If a subsequent request specifies an amount of memory that cannot be met by any one of these (potentially unequally sized) freed blocks on its own, the neighboring freed blocks cannot be used to satisfy the request even though their combined size would be sufficient. Coalescing alleviates this issue by merging neighboring freed blocks into a single contiguous block without internal boundaries, so that part or all of it can be allocated for the request.
Among other techniques, coalescing is used to reduce external fragmentation, but is not totally effective. Coalescing can be done as soon as blocks are freed, or it can be deferred until some time later (known as deferred coalescing), or it might not be done at all.
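A minimal sketch of immediate coalescing over a free list of (start, size) pairs is shown below; real allocators typically use boundary tags or size-segregated free lists, so this is illustrative only:

```python
def coalesce(free_blocks):
    """Merge adjacent free blocks, given as (start, size) pairs."""
    merged = []
    for start, size in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # The previous free block ends exactly where this one begins:
            # extend it instead of keeping two separate blocks.
            prev_start, prev_size = merged[-1]
            merged[-1] = (prev_start, prev_size + size)
        else:
            merged.append((start, size))
    return merged

# Two adjacent 64-byte blocks merge into one 128-byte block; the block at 256 stays separate.
print(coalesce([(0, 64), (64, 64), (256, 32)]))  # [(0, 128), (256, 32)]
```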
Coalescing and related techniques, such as heap compaction, can be used in garbage collection.
See also
Timer coalescing
References
External links
The Memory Management Reference, Beginner's Guide Allocation
Automatic memory management | Coalescing (computer science) | [
"Technology"
] | 259 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
16,296,529 | https://en.wikipedia.org/wiki/Ageostrophy | Ageostrophy (or ageostrophic flow) is the difference between the actual wind or current and the geostrophic wind or geostrophic current. Since geostrophy is an exact balance between the Coriolis force and the pressure gradient force, ageostrophic flow reflects an imbalance, and thus is often implicated in disturbances, vertical motions (important for weather), and rapid changes with time. Ageostrophic flow reflects the existence of all the other terms in the momentum equation neglected in that idealization, including friction and material acceleration Dv/Dt, which includes the centrifugal force in curved flow.
See also
geostrophic
geostrophic wind
References
External links
Meteo 422 – Lecture 17 – The Omega Equation Aloft
Oceanography
"Physics",
"Environmental_science"
] | 178 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
16,297,028 | https://en.wikipedia.org/wiki/GenerativeComponents | GenerativeComponents is parametric CAD software developed by Bentley Systems, was first introduced in 2003, became increasingly used in practice (especially by the London architectural community) by early 2005, and was commercially released in November 2007. GenerativeComponents has a strong traditional base of users in academia and at technologically advanced design firms. GenerativeComponents is often referred to by the nickname of 'GC'. GC epitomizes the quest to bring parametric modeling capabilities of 3D solid modeling into architectural design, seeking to provide greater fluidity and fluency than mechanical 3D solid modeling.
Users can interact with the software by either dynamically modeling and directly manipulating geometry, or by applying rules and capturing relationships among model elements, or by defining complex forms and systems through concisely expressed algorithms.
The software supports many industry-standard file inputs and outputs, including DGN by Bentley Systems, DWG by Autodesk, STL (stereolithography), Rhino, and others. The software can also integrate with Building Information Modeling systems, specifically as an installed extension/Companion Feature to Bentley's AECOsim Building Designer.
The software has a published API and uses a simple scripting language, both allowing the integration with many different software tools, and the creation of custom programs by users.
This software is primarily used by architects and engineers in the design of buildings, but has also been used to model natural and biological structures and mathematical systems.
Generative Components currently runs exclusively on Microsoft Windows operating systems, and in English.
Bentley Systems Incorporated offers GC as a free download. This version of GC does not time-out and is not feature limited. It requires registration with an email address. This is a standalone version of GC that includes the underlying Bentley MicroStation software that is required for it to run.
SmartGeometry Group
The SmartGeometry Group has been instrumental in the formation of GenerativeComponents.
GenerativeComponents was brought to the market after utilizing a multi-year testing cycle with a dedicated user community in the SmartGeometry group. This community was responsible for shaping the product very early in its life and continues to play an important role in defining it. The SmartGeometry Group is an independent non-profit organization; it is not a Bentley user group.
The SmartGeometry Group organizes an annual multi-day workshop and accompanying conference highlighting advanced design practices and technology. Recent workshop and conferences have been in:
Munich (2008);
San Francisco (2009);
IAAC - Barcelona, Spain (2010);
CITA - Copenhagen, Denmark (2011);
RPI - Troy, New York (2012);
UCL - London, UK (2013);
Hong Kong (2014);
Gothenburg (2016);
Toronto (2018).
See also
Architecture
Architectural engineering
Design computing
Comparison of CAD Software
References
Computer-aided design
Building engineering software
Computer-aided design software
Data modeling | GenerativeComponents | [
"Engineering"
] | 597 | [
"Building engineering software",
"Computer-aided design",
"Design engineering",
"Building engineering",
"Data modeling",
"Data engineering"
] |
16,297,790 | https://en.wikipedia.org/wiki/Shewanella%20oneidensis | Shewanella oneidensis is a bacterium notable for its ability to reduce metal ions and live in environments with or without oxygen. This proteobacterium was first isolated from Lake Oneida, NY in 1988, hence its name.
Shewanella oneidensis is a facultative bacterium, capable of surviving and proliferating in both aerobic and anaerobic conditions. The special interest in S. oneidensis MR-1 revolves around its behavior in an anaerobic environment contaminated by heavy metals such as iron, lead and uranium. Experiments suggest it may reduce ionic mercury to elemental mercury and ionic silver to elemental silver. Cellular respiration for these bacteria is not restricted to heavy metals though; the bacteria can also target sulfates, nitrates and chromates when grown anaerobically.
Name
This species is referred to as S. oneidensis MR-1, indicating "manganese reducing", a special feature of this organism. It is a common misconception that MR-1 refers to "metal-reducing" rather than the originally intended "manganese-reducing", as observed by Kenneth H. Nealson, who first isolated the organism. Although it was originally known as "manganese-reducing", the expanded reading "metal-reducing" is also apt, as S. oneidensis MR-1 does reduce metals other than manganese.
Qualities
Metal reduction
Shewanella oneidensis MR-1 belongs to a class of bacteria known as "Dissimilatory Metal-Reducing Bacteria (DMRB)" because of their ability to couple metal reduction with their metabolism. The means by which the metals are reduced is a matter of particular controversy: research using scanning electron microscopy and transmission electron microscopy revealed abnormal structural protrusions, resembling bacterial filaments, that are thought to be involved in the metal reduction. The production of such an external filament is completely absent from conventional bacterial respiration and is the focus of many current studies.
The mechanics of this bacterium's resistance to and use of heavy metal ions are closely tied to its network of metabolic pathways. Putative multidrug efflux transporters, detoxification proteins, extracytoplasmic sigma factors and PAS domain regulators show higher expression activity in the presence of heavy metals. The cytochrome c class protein SO3300 also shows elevated transcription. For example, when reducing U(VI), special cytochromes such as MtrC and OmcA are used to form UO2 nanoparticles and associate them with biopolymers.
Chemical modification
In 2017 researchers used a synthetic molecule called DSFO+ to modify cell membranes in two mutant strains of Shewanella. DSFO+ could completely replace natural current-conducting proteins, boosting the power that the microbe generated. The process was purely a chemical modification that did not alter the organism's genome; the modification was divided among the bacteria's offspring as the cells divided, diluting the effect.
Pellicle formation
A pellicle is a variety of biofilm that forms at the interface between the air and the liquid in which bacteria grow. In a biofilm, bacterial cells interact with each other to protect their community and co-operate metabolically (microbial communities). In S. oneidensis, pellicle formation is typical, is related to the process of reducing heavy metals, and has been extensively researched in this species. A pellicle usually forms in three steps: cells first attach at the triple interface of culture vessel, air and liquid; a one-layered biofilm then develops from these initial cells; and the biofilm subsequently matures into a complicated three-dimensional structure. In a developed pellicle, a number of substances between the cells (extracellular polymeric substances) help maintain the pellicle matrix. The process of pellicle formation involves significant microbial activity and related substances; the extracellular polymeric substances require many proteins and other bio-macromolecules.
Many metal cations are also required in the process. EDTA controls and extensive cation presence/absence tests show that Ca(II), Mn(II), Cu(II) and Zn(II) are all essential in this process, probably functioning as part of a coenzyme or prosthetic group. Mg(II) has a partial effect, while Fe(II) and Fe(III) are inhibitory to some degree. Flagella are also considered to contribute to pellicle formation: the biofilm requires bacterial cells to move in a certain manner, and the flagellum is the organelle responsible for locomotion. Mutant strains lacking flagella can still form a pellicle, albeit much less rapidly.
Applications
Nanotechnology
Shewanella oneidensis MR-1 can change the oxidation state of metals. These microbial processes allow exploration of novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical and physical methods, microbial processes for synthesizing nanomaterials can be carried out in the aqueous phase under gentle and environmentally benign conditions. Many organisms can be utilized to synthesize metal nanomaterials. S. oneidensis is able to reduce a diverse range of metal ions extracellularly, and this extracellular production greatly facilitates the extraction of nanomaterials. The extracellular electron transport chains responsible for transferring electrons across cell membranes are relatively well characterized, in particular the outer membrane c-type cytochromes MtrC and OmcA. A 2013 study suggested that it is possible to alter the particle size and activity of extracellular biogenic nanoparticles via controlled expression of the genes encoding surface proteins. An important example is the synthesis of silver nanoparticles by S. oneidensis, whose antibacterial activity can be influenced by the expression of outer membrane c-type cytochromes. Silver nanoparticles are considered a new generation of antimicrobials, as they exhibit biocidal activity towards a broad range of bacteria, and they are gaining importance with the increasing antibiotic resistance of pathogenic bacteria. Shewanella has been observed in laboratory settings to bioreduce a substantial amount of palladium and dechlorinate nearly 70% of polychlorinated biphenyls (PCBs). The production of nanoparticles by S. oneidensis MR-1 is closely associated with the MTR pathway (e.g. silver nanoparticles) or the hydrogenase pathway (e.g. palladium nanoparticles).
Wastewater treatment
Shewanella oneidensis' ability to reduce and absorb heavy metals makes it a candidate for use in wastewater treatment.
DSFO+ could possibly allow the bacteria to electrically communicate with an electrode and generate electricity in a wastewater application.
Genome
As a facultative anaerobe with a branching electron transport pathway, S. oneidensis is considered a model organism in microbiology. In 2002, its genomic sequence was published. It has a 4.9Mb circular chromosome that is predicted to encode 4,758 protein open reading frames. It has a 161kb plasmid with 173 open reading frames. A re-annotation was made in 2003.
References
External links
New bacterial behavior observed PNAS study documents puzzling movement of electricity-producing bacteria near energy sources, abstract at Eurekalert
'Rock-Breathing' Bacteria Could Generate Electricity and Clean Up Oil Spills, ScienceDaily (Dec. 15, 2009)
Bacteria that can form electric circuits?
Type strain of Shewanella oneidensis at BacDive – the Bacterial Diversity Metadatabase
Alteromonadales
Pollution control technologies | Shewanella oneidensis | [
"Chemistry",
"Engineering"
] | 1,565 | [
"Pollution control technologies",
"Environmental engineering"
] |
16,298,196 | https://en.wikipedia.org/wiki/Polymorphic%20association | Polymorphic association is a term used in discussions of Object-Relational Mapping with respect to the problem of representing, in the relational database domain, a relationship from one class to multiple classes. In statically typed languages such as Java these multiple classes are subclasses of the same superclass. In languages with duck typing, such as Ruby, this is not necessarily the case.
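As a rough illustration (not drawn from any particular ORM), the usual relational workaround is to store a type discriminator alongside the foreign-key value, so that a single table can point into several others. The sketch below uses Python's built-in sqlite3 module; the posts/photos/comments tables and all other names are hypothetical.

import sqlite3

# A comment may belong to either a post or a photo (two unrelated classes).
# The polymorphic association is encoded as a (commentable_type, commentable_id)
# pair instead of a conventional single-table foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE photos (id INTEGER PRIMARY KEY, path  TEXT);
    CREATE TABLE comments (
        id               INTEGER PRIMARY KEY,
        body             TEXT,
        commentable_type TEXT,     -- 'posts' or 'photos'
        commentable_id   INTEGER   -- row id within the named table
    );
""")
conn.execute("INSERT INTO posts  VALUES (1, 'Hello world')")
conn.execute("INSERT INTO photos VALUES (1, 'cat.jpg')")
conn.execute("INSERT INTO comments VALUES (1, 'Nice post!',  'posts',  1)")
conn.execute("INSERT INTO comments VALUES (2, 'Nice photo!', 'photos', 1)")

def parent_of(comment_id):
    """Resolve a comment's parent by consulting the type discriminator."""
    ctype, cid = conn.execute(
        "SELECT commentable_type, commentable_id FROM comments WHERE id = ?",
        (comment_id,)).fetchone()
    # The table name comes from our own discriminator column, so interpolating
    # it into the query is acceptable in this toy setting.
    return ctype, conn.execute(
        f"SELECT * FROM {ctype} WHERE id = ?", (cid,)).fetchone()

print(parent_of(1))  # ('posts', (1, 'Hello world'))
print(parent_of(2))  # ('photos', (1, 'cat.jpg'))

ORM frameworks such as Hibernate provide their own mapping facilities for this problem; the sketch above only shows the bare relational idea behind them.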
See also
Polymorphism in object-oriented programming
Hibernate (Java)
References
Java Persistence with HIBERNATE, Chapter 5, Bauer, Christian & Gavin, King, Manning, copyright 2007,
External links
Hibernate Home Page
Data mapping
Object-oriented programming
Relational model | Polymorphic association | [
"Engineering"
] | 130 | [
"Data engineering",
"Data mapping"
] |
16,298,354 | https://en.wikipedia.org/wiki/Swindale%20Beck | Swindale Beck is a stream in Cumbria, England. It is formed at Swindale Head where Mosedale Beck, from the slopes of Tarn Crag, joins Hobgrumble Beck from Selside Pike. The stream flows north-east along Swindale and joins the River Lowther near Rosgill between Shap and Bampton. Its waters then flow via the
River Eamont into the Solway Firth.
Prior to 1859, it had been straightened to clear land for grazing. In 2016, a length of the straightened channel was replaced with a new sinuous channel, reconnecting the stream to its surrounding floodplain. This resulted in a rapid and marked improvement in its diversity. In 2022, the project was awarded the European Riverprize.
References
External links
Ecological restoration
Swindale
Rivers of Cumbria
Beck watercourses | Swindale Beck | [
"Chemistry",
"Engineering"
] | 176 | [
"Ecological restoration",
"Environmental engineering"
] |
16,299,336 | https://en.wikipedia.org/wiki/Julius%20Bredt | Julius Bredt (29 March 1855 – 21 September 1937) was a German organic chemist. He was the first to determine, in 1893, the correct structure of camphor. Bredt also proposed in 1924 that a double bond cannot be placed at the bridgehead of a bridged ring system, a statement now known as Bredt's rule. The rule, however, has since been contradicted by a publication 100 years later.
Awards
There is a Julius Bredt lecture in his remembrance at the RWTH Aachen University.
Further reading
References
1855 births
1937 deaths
19th-century German chemists
German organic chemists
20th-century German chemists
Scientists from Berlin
Academic staff of RWTH Aachen University | Julius Bredt | [
"Chemistry"
] | 143 | [
"Organic chemists",
"German organic chemists"
] |
16,300,571 | https://en.wikipedia.org/wiki/Computational%20creativity | Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts (e.g., computational art as part of computational culture).
The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:
To construct a program or computer capable of human-level creativity.
To better understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.
To design programs that can enhance human creativity without necessarily being creative themselves.
The field of computational creativity concerns itself with theoretical and practical issues in the study of creativity. Theoretical work on the nature and proper definition of creativity is performed in parallel with practical work on the implementation of systems that exhibit creativity, with one strand of work informing the other.
The applied form of computational creativity is known as media synthesis.
Theoretical issues
Theoretical approaches concern the essence of creativity, especially the question of under what circumstances it is possible to call a model "creative" if eminent creativity is about rule-breaking or the disavowal of convention. This is a variant of Ada Lovelace's objection to machine intelligence, as recapitulated by modern theorists such as Teresa Amabile. If a machine can do only what it was programmed to do, how can its behavior ever be called creative?
Indeed, not all computer theorists would agree with the premise that computers can only do what they are programmed to do—a key point in favor of computational creativity.
Defining creativity in Computational terms
Because no single perspective or definition seems to offer a complete picture of creativity, the AI researchers Newell, Shaw and Simon developed the combination of novelty and usefulness into the cornerstone of a multi-pronged view of creativity, one that uses the following four criteria to categorize a given answer or solution as creative:
The answer is novel and useful (either for the individual or for society)
The answer demands that we reject ideas we had previously accepted
The answer results from intense motivation and persistence
The answer comes from clarifying a problem that was originally vague
Margaret Boden focused on the first two of these criteria, arguing instead that creativity (at least when asking whether computers could be creative) should be defined as "the ability to come up with ideas or artifacts that are new, surprising, and valuable".
Mihaly Csikszentmihalyi argued that creativity had to be considered instead in a social context, and his DIFI (Domain-Individual-Field Interaction) framework has since strongly influenced the field. In DIFI, an individual produces works whose novelty and value are assessed by the field—other people in society—providing feedback and ultimately adding the work, now deemed creative, to the domain of societal works from which an individual might be later influenced.
Whereas the above reflects a top-down approach to computational creativity, an alternative thread has developed among bottom-up computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, for example, such generative neural systems were driven by genetic algorithms. Experiments involving recurrent nets were successful in hybridizing simple musical melodies and predicting listener expectations.
Machine learning for Computational creativity
While traditional computational approaches to creativity rely on the explicit formulation of prescriptions by developers and a certain degree of randomness in computer programs, machine learning methods allow computer programs to learn heuristics from input data, enabling creative capacities within those programs. In particular, deep artificial neural networks can learn patterns from input data that allow for the non-linear generation of creative artefacts. Even before 1989, artificial neural networks had been used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters. The network was able to randomly generate new music in a highly uncontrolled manner. In 1992, Todd extended this work, using the so-called distal teacher approach that had been developed by Paul Munro, Paul Werbos, D. Nguyen and Bernard Widrow, Michael I. Jordan and David Rumelhart. In the new approach, there are two neural networks, one of which supplies training patterns to the other.
In later efforts by Todd, a composer would select a set of melodies that define the melody space, position them on a 2-d plane with a mouse-based graphic interface, and train a connectionist network to produce those melodies, and listen to the new "interpolated" melodies that the network generates corresponding to intermediate points in the 2-d plane.
Key concepts from literature
Some high-level and philosophical themes recur throughout the field of computational creativity, for example as follows.
Important categories of creativity
Margaret Boden refers to creativity that is novel merely to the agent that produces it as "P-creativity" (or "psychological creativity"), and refers to creativity that is recognized as novel by society at large as "H-creativity" (or "historical creativity").
Exploratory and transformational creativity
Boden also distinguishes between the creativity that arises from an exploration within an established conceptual space, and the creativity that arises from a deliberate transformation or transcendence of this space. She labels the former as exploratory creativity and the latter as transformational creativity, seeing the latter as a form of creativity far more radical, challenging, and rarer than the former. Following the criteria from Newell and Simon elaborated above, we can see that both forms of creativity should produce results that are appreciably novel and useful (criterion 1), but exploratory creativity is more likely to arise from a thorough and persistent search of a well-understood space (criterion 3) -- while transformational creativity should involve the rejection of some of the constraints that define this space (criterion 2) or some of the assumptions that define the problem itself (criterion 4). Boden's insights have guided work in computational creativity at a very general level, providing more an inspirational touchstone for development work than a technical framework of algorithmic substance. However, Boden's insights are also the subject of formalization, most notably in the work by Geraint Wiggins.
Generation and evaluation
The criterion that creative products should be novel and useful means that creative computational systems are typically structured into two phases, generation and evaluation. In the first phase, novel (to the system itself, thus P-Creative) constructs are generated; unoriginal constructs that are already known to the system are filtered at this stage. This body of potentially creative constructs is then evaluated, to determine which are meaningful and useful and which are not. This two-phase structure conforms to the Geneplore model of Finke, Ward and Smith, which is a psychological model of creative generation based on empirical observation of human creativity.
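As a purely illustrative sketch of this two-phase structure (not any particular published system), the following Python fragment generates candidate constructs (here, short melodies as note sequences), filters out anything the system has already produced (the P-creative novelty check), and then ranks the survivors with a stand-in evaluation heuristic; all names and the scoring rule are invented for the example.

import random

random.seed(0)

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale as MIDI note numbers
known_artifacts = set()                     # everything the system has produced so far

def generate():
    """Phase 1: produce a candidate construct (here, an eight-note melody)."""
    return tuple(random.choice(SCALE) for _ in range(8))

def is_novel(melody):
    """Novel to the system itself, i.e. P-creative."""
    return melody not in known_artifacts

def evaluate(melody):
    """Phase 2: a stand-in 'usefulness' heuristic that prefers small melodic steps."""
    steps = [abs(a - b) for a, b in zip(melody, melody[1:])]
    return -sum(steps)                      # higher is better (smoother contour)

candidates = []
while len(candidates) < 5:
    m = generate()
    if is_novel(m):                         # unoriginal constructs are filtered out
        known_artifacts.add(m)
        candidates.append(m)

best = max(candidates, key=evaluate)
print(best, evaluate(best))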
Co-creation
While much of computational creativity research focuses on independent and automatic machine-based creativity generation, many researchers are inclined towards a collaborative approach. This human-computer interaction is sometimes categorized under the development of creativity support tools. These systems aim to provide an ideal framework for research, integration, decision-making, and idea generation. Recently, deep learning approaches to imaging, sound and natural language processing have resulted in the modeling of productive creativity development frameworks.
Innovation
Computational creativity is increasingly being discussed in the innovation and management literature, as recent developments in AI may disrupt entire innovation processes and fundamentally change how innovations will be created. Philip Hutchinson highlights the relevance of computational creativity for creating innovation and introduced the concept of “self-innovating artificial intelligence” (SAI) to describe how companies make use of AI in innovation processes to enhance their innovative offerings. SAI is defined as the organizational utilization of AI with the aim of incrementally advancing existing or developing new products, based on insights from continuously combining and analyzing multiple data sources. As AI becomes a general-purpose technology, the spectrum of products to be developed with SAI will broaden from simple to increasingly complex. This implies that computational creativity leads to a shift in the creativity-related skills required of humans.
Combinatorial creativity
A great deal, perhaps all, of human creativity can be understood as a novel combination of pre-existing ideas or objects. Common strategies for combinatorial creativity include:
Placing a familiar object in an unfamiliar setting (e.g., Marcel Duchamp's Fountain) or an unfamiliar object in a familiar setting (e.g., a fish-out-of-water story such as The Beverly Hillbillies)
Blending two superficially different objects or genres (e.g., a sci-fi story set in the Wild West, with robot cowboys, as in Westworld, or the reverse, as in Firefly; Japanese haiku poems, etc.)
Comparing a familiar object to a superficially unrelated and semantically distant concept (e.g., "Makeup is the Western burka"; "A zoo is a gallery with living exhibits")
Adding a new and unexpected feature to an existing concept (e.g., adding a scalpel to a Swiss Army knife; adding a camera to a mobile phone)
Compressing two incongruous scenarios into the same narrative to get a joke (e.g., the Emo Philips joke "Women are always using men to advance their careers. Damned anthropologists!")
Using an iconic image from one domain in a domain for an unrelated or incongruous idea or product (e.g., using the Marlboro Man image to sell cars, or to advertise the dangers of smoking-related impotence).
The combinatorial perspective allows us to model creativity as a search process through the space of possible combinations. The combinations can arise from composition or concatenation of different representations, or through a rule-based or stochastic transformation of initial and intermediate representations. Genetic algorithms and neural networks can be used to generate blended or crossover representations that capture a combination of different inputs.
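A toy sketch of this search view, with word lists and a scoring rule invented purely for illustration: enumerate object/setting pairings and rank them by a crude stand-in for semantic distance.

from itertools import product

objects  = ["fish", "cowboy", "haiku", "scalpel", "camera"]
settings = ["outer space", "a law office", "the Wild West", "a Swiss Army knife"]

def incongruity(obj, setting):
    """Count the letters not shared by the two strings; a crude distance stand-in."""
    return len(set(obj) ^ set(setting))

# Search the space of possible combinations and keep the most 'incongruous' ones.
combos = sorted(product(objects, settings),
                key=lambda pair: incongruity(*pair), reverse=True)

for obj, setting in combos[:3]:
    print(f"a {obj} in {setting}  (score {incongruity(obj, setting)})")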
Conceptual blending
Mark Turner and Gilles Fauconnier propose a model called Conceptual Integration Networks that elaborates upon Arthur Koestler's ideas about creativity as well as work by Lakoff and Johnson, by synthesizing ideas from Cognitive Linguistic research into mental spaces and conceptual metaphors. Their basic model defines an integration network as four connected spaces:
A first input space (contains one conceptual structure or mental space)
A second input space (to be blended with the first input)
A generic space of stock conventions and image-schemas that allow the input spaces to be understood from an integrated perspective
A blend space in which a selected projection of elements from both input spaces are combined; inferences arising from this combination also reside here, sometimes leading to emergent structures that conflict with the inputs.
Fauconnier and Turner describe a collection of optimality principles that are claimed to guide the construction of a well-formed integration network. In essence, they see blending as a compression mechanism in which two or more input structures are compressed into a single blend structure. This compression operates on the level of conceptual relations. For example, a series of similarity relations between the input spaces can be compressed into a single identity relationship in the blend.
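A very rough data-structure sketch of such a four-space network, using the often-cited "surgeon as butcher" example with hand-made contents (this only illustrates selective projection, not Fauconnier and Turner's formal model):

# Two input spaces sharing the roles listed in the generic space.
input_1 = {"agent": "surgeon", "instrument": "scalpel", "goal": "healing"}
input_2 = {"agent": "butcher", "instrument": "cleaver", "goal": "severing flesh"}
generic = ["agent", "instrument", "goal"]

def blend(space_a, space_b, choose_from):
    """For each shared role, project the element chosen from one of the two inputs."""
    return {role: (space_a if choose_from[role] == "a" else space_b)[role]
            for role in generic}

# Selective projection: keep the surgeon and the goal of healing from input 1,
# but import the butcher's instrument from input 2; the clash between the
# projected elements is what gives rise to emergent inferences in the blend.
blended = blend(input_1, input_2, {"agent": "a", "instrument": "b", "goal": "a"})
print(blended)   # {'agent': 'surgeon', 'instrument': 'cleaver', 'goal': 'healing'}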
Some computational success has been achieved with the blending model by extending pre-existing computational models of analogical mapping that are compatible by virtue of their emphasis on connected semantic structures. In 2006, Francisco Câmara Pereira presented an implementation of blending theory that employs ideas both from symbolic AI and genetic algorithms to realize some aspects of blending theory in a practical form; his example domains range from the linguistic to the visual, and the latter most notably includes the creation of mythical monsters by combining 3-D graphical models.
Linguistic creativity
Language provides continuous opportunity for creativity, evident in the generation of novel sentences, phrasings, puns, neologisms, rhymes, allusions, sarcasm, irony, similes, metaphors, analogies, witticisms, and jokes. Native speakers of morphologically rich languages frequently create new word-forms that are easily understood, and some have found their way to the dictionary. The area of natural language generation has been well studied, but these creative aspects of everyday language have yet to be incorporated with any robustness or scale.
Hypothesis of creative patterns
In the seminal work of applied linguist Ronald Carter, he hypothesized two main creativity types involving words and word patterns: pattern-reforming creativity, and pattern-forming creativity. Pattern-reforming creativity refers to creativity by the breaking of rules, reforming and reshaping patterns of language often through individual innovation, while pattern-forming creativity refers to creativity via conformity to language rules rather than breaking them, creating convergence, symmetry and greater mutuality between interlocutors through their interactions in the form of repetitions.
Story generation
Substantial work has been conducted in this area of linguistic creation since the 1970s, with the development of James Meehan's TALE-SPIN
system. TALE-SPIN viewed stories as narrative descriptions of a problem-solving effort, and created stories by first establishing a goal for the story's characters so that their search for a solution could be tracked and recorded. The MINSTREL system represents a complex elaboration of this basic approach, distinguishing a range of character-level goals in the story from a range of author-level goals for the story. Systems like Bringsjord's BRUTUS elaborate these ideas further to create stories with complex interpersonal themes like betrayal. Nonetheless, MINSTREL explicitly models the creative process with a set of Transform Recall Adapt Methods (TRAMs) to create novel scenes from old. The MEXICA model of Rafael Pérez y Pérez and Mike Sharples is more explicitly interested in the creative process of storytelling, and implements a version of the engagement-reflection cognitive model of creative writing.
Metaphor and simile
Example of a metaphor: "She was an ape."
Example of a simile: "Felt like a tiger-fur blanket."
The computational study of these phenomena has mainly focused on interpretation as a knowledge-based process. Computationalists such as Yorick Wilks, James Martin, Dan Fass, John Barnden, and Mark Lee have developed knowledge-based approaches to the processing of metaphors, either at a linguistic level or a logical level. Tony Veale and Yanfen Hao have developed a system, called Sardonicus, that acquires a comprehensive database of explicit similes from the web; these similes are then tagged as bona-fide (e.g., "as hard as steel") or ironic (e.g., "as hairy as a bowling ball", "as pleasant as a root canal"); similes of either type can be retrieved on demand for any given adjective. They use these similes as the basis of an on-line metaphor generation system called Aristotle that can suggest lexical metaphors for a given descriptive goal (e.g., to describe a supermodel as skinny, the source terms "pencil", "whip", "whippet", "rope", "stick-insect" and "snake" are suggested).
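The harvesting step that such systems depend on (pulling "as ADJ as NP" patterns out of raw text) can be sketched with a regular expression; the sample text and the pattern below are illustrative only and are not taken from Sardonicus.

import re
from collections import defaultdict

text = ("The alibi was as hard as steel. His scalp was as hairy as a bowling ball, "
        "and the meeting was as pleasant as a root canal.")

# Capture "as <adjective> as <short noun phrase>" patterns.
SIMILE = re.compile(r"\bas\s+(\w+)\s+as\s+((?:a|an|the)?\s*\w+(?:\s\w+)?)", re.I)

similes = defaultdict(list)
for adjective, vehicle in SIMILE.findall(text):
    similes[adjective.lower()].append(vehicle.strip())

print(dict(similes))
# {'hard': ['steel'], 'hairy': ['a bowling ball'], 'pleasant': ['a root canal']}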
Analogy
The process of analogical reasoning has been studied from both a mapping and a retrieval perspective, the latter being key to the generation of novel analogies. The dominant school of research, as advanced by Dedre Gentner, views analogy as a structure-preserving process; this view has been implemented in the structure mapping engine or SME, the MAC/FAC retrieval engine (Many Are Called, Few Are Chosen), ACME (Analogical Constraint Mapping Engine) and ARCS (Analogical Retrieval Constraint System). Other mapping-based approaches include Sapper, which situates the mapping process in a semantic-network model of memory. Analogy is a very active sub-area of creative computation and creative cognition; active figures in this sub-area include Douglas Hofstadter, Paul Thagard, and Keith Holyoak. Also worthy of note here is Peter Turney and Michael Littman's machine learning approach to the solving of SAT-style analogy problems; their approach achieves a score that compares well with average scores achieved by humans on these tests.
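A common lightweight stand-in for solving SAT-style analogy items (the word-vector offset trick, which is not the structure-mapping or corpus-based approaches described above) picks the candidate whose difference vector best matches that of the source pair; the tiny hand-made vectors below are purely illustrative.

# Solve "king : man :: queen : ?" over a toy vector space.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.9, 0.1, 0.1],
    "queen": [0.1, 0.8, 0.9],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.5, 0.0, 0.2],
}

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def solve(a, b, c, candidates):
    """Return the candidate d that minimizes ||(a - b) - (c - d)||."""
    target = sub(vectors[a], vectors[b])
    return min(candidates, key=lambda d: dist(target, sub(vectors[c], vectors[d])))

print(solve("king", "man", "queen", ["woman", "apple"]))   # woman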
Joke generation
Humour is an especially knowledge-hungry process, and the most successful joke-generation systems to date have focussed on pun-generation, as exemplified by the work of Kim Binsted and Graeme Ritchie. This work includes the JAPE system, which can generate a wide range of puns that are consistently evaluated as novel and humorous by young children. An improved version of JAPE has been developed in the guise of the STANDUP system, which has been experimentally deployed as a means of enhancing linguistic interaction with children with communication disabilities. Some limited progress has been made in generating humour that involves other aspects of natural language, such as the deliberate misunderstanding of pronominal reference (in the work of Hans Wim Tinholt and Anton Nijholt), as well as in the generation of humorous acronyms in the HAHAcronym system of Oliviero Stock and Carlo Strapparava.
Neologism
The blending of multiple word forms is a dominant force for new word creation in language; these new words are commonly called "blends" or "portmanteau words" (after Lewis Carroll). Tony Veale has developed a system called ZeitGeist that harvests neological headwords from Wikipedia and interprets them relative to their local context in Wikipedia and relative to specific word senses in WordNet. ZeitGeist has been extended to generate neologisms of its own; the approach combines elements from an inventory of word parts that are harvested from WordNet, and simultaneously determines likely glosses for these new words (e.g., "food traveller" for "gastronaut" and "time traveller" for "chrononaut"). It then uses Web search to determine which glosses are meaningful and which neologisms have not been used before; this search identifies the subset of generated words that are both novel ("H-creative") and useful.
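The core generative move, splicing the head of one word onto the tail of another where the two share a seam of letters, can be sketched in a few lines; judging which outputs are meaningful or already in use (the gloss and Web-search steps described above) is the harder part and is omitted here.

def blends(word1, word2, overlap=2):
    """Splice a prefix of word1 onto a suffix of word2 wherever they share a seam."""
    results = set()
    for i in range(overlap, len(word1)):
        seam = word1[i - overlap:i]
        for j in range(len(word2) - overlap):
            if word2[j:j + overlap] == seam:       # the seam letters must match
                results.add(word1[:i] + word2[j + overlap:])
    return sorted(results)

print(blends("gastronomy", "astronaut"))    # ['gastronaut']
print(blends("television", "evangelist"))   # includes 'televangelist'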
A corpus linguistic approach to the search and extraction of neologism have also shown to be possible. Using Corpus of Contemporary American English as a reference corpus, Locky Law has performed an extraction of neologism, portmanteaus and slang words using the hapax legomena which appeared in the scripts of American TV drama House M.D.
In terms of linguistic research in neologism, Stefan Th. Gries has performed a quantitative analysis of blend structure in English and found that "the degree of recognizability of the source words and that the similarity of source words to the blend plays a vital role in blend formation." The results were validated through a comparison of intentional blends to speech-error blends.
Poetry
Like jokes, poems involve a complex interaction of different constraints, and no general-purpose poem generator adequately combines the meaning, phrasing, structure and rhyme aspects of poetry. Nonetheless, Pablo Gervás has developed a noteworthy system called ASPERA that employs a case-based reasoning (CBR) approach to generating poetic formulations of a given input text via a composition of poetic fragments that are retrieved from a case-base of existing poems. Each poem fragment in the ASPERA case-base is annotated with a prose string that expresses the meaning of the fragment, and this prose string is used as the retrieval key for each fragment. Metrical rules are then used to combine these fragments into a well-formed poetic structure. Racter is an example of such a software project.
Musical creativity
Computational creativity in the music domain has focused both on the generation of musical scores for use by human musicians, and on the generation of music for performance by computers. The domain of generation has included classical music (with software that generates music in the style of Mozart and Bach) and jazz. Most notably, David Cope has written a software system called "Experiments in Musical Intelligence" (or "EMI") that is capable of analyzing and generalizing from existing music by a human composer to generate novel musical compositions in the same style. EMI's output is convincing enough to persuade human listeners that its music was generated by a highly competent human composer.
In the field of contemporary classical music, Iamus is the first computer that composes from scratch, and produces final scores that professional interpreters can play. The London Symphony Orchestra played a piece for full orchestra, included in Iamus' debut CD, which New Scientist described as "The first major work composed by a computer and performed by a full orchestra". Melomics, the technology behind Iamus, is able to generate pieces in different styles of music with a similar level of quality.
Creativity research in jazz has focused on the process of improvisation and the cognitive demands that this places on a musical agent: reasoning about time, remembering and conceptualizing what has already been played, and planning ahead for what might be played next.
The robot Shimon, developed by Gil Weinberg of Georgia Tech, has demonstrated jazz improvisation. Virtual improvisation software based on research into stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, including OMax, SoMax and PyOracle, is used to create improvisations in real time by re-injecting variable-length sequences learned on the fly from the live performer.
In the field of musical composition, the patented works of René-Louis Baron made it possible to build a robot that can create and play a multitude of orchestrated melodies, so-called "coherent" ones, in any musical style. Any outdoor physical parameter associated with one or more specific musical parameters can influence and develop each of these songs (in real time, while listening to the song). The patented invention Medal-Composer raises problems of copyright.
Visual and artistic creativity
Computational creativity in the generation of visual art has had some notable successes in the creation of both abstract art and representational art. A well-known program in this domain is Harold Cohen's AARON, which has been continuously developed and augmented since 1973. Though formulaic, Aaron exhibits a range of outputs, generating black-and-white drawings or colour paintings that incorporate human figures (such as dancers), potted plants, rocks, and other elements of background imagery. These images are of a sufficiently high quality to be displayed in reputable galleries.
Other software artists of note include the NEvAr system (for "Neuro-Evolutionary Art") of Penousal Machado. NEvAr uses a genetic algorithm to derive a mathematical function that is then used to generate a coloured three-dimensional surface. A human user is allowed to select the best pictures after each phase of the genetic algorithm, and these preferences are used to guide successive phases, thereby pushing NEvAr's search into pockets of the search space that are considered most appealing to the user.
The Painting Fool, developed by Simon Colton originated as a system for overpainting digital images of a given scene in a choice of different painting styles, colour palettes and brush types. Given its dependence on an input source image to work with, the earliest iterations of the Painting Fool raised questions about the extent of, or lack of, creativity in a computational art system. Nonetheless, The Painting Fool has been extended to create novel images, much as AARON does, from its own limited imagination. Images in this vein include cityscapes and forests, which are generated by a process of constraint satisfaction from some basic scenarios provided by the user (e.g., these scenarios allow the system to infer that objects closer to the viewing plane should be larger and more color-saturated, while those further away should be less saturated and appear smaller). Artistically, the images now created by the Painting Fool appear on a par with those created by Aaron, though the extensible mechanisms employed by the former (constraint satisfaction, etc.) may well allow it to develop into a more elaborate and sophisticated painter.
The artist Krasi Dimtch (Krasimira Dimtchevska) and the software developer Svillen Ranev have created a computational system combining a rule-based generator of English sentences and a visual composition builder that converts sentences generated by the system into abstract art. The software automatically generates an indefinite number of different images using different color, shape and size palettes. The software also allows the user to select the subject of the generated sentences and/or one or more of the palettes used by the visual composition builder.
An emerging area of computational creativity is that of video games. ANGELINA is a system for creatively developing video games in Java by Michael Cook. One important aspect is Mechanic Miner, a system that can generate short segments of code that act as simple game mechanics. ANGELINA can evaluate these mechanics for usefulness by playing simple unsolvable game levels and testing to see if the new mechanic makes the level solvable. Sometimes Mechanic Miner discovers bugs in the code and exploits these to make new mechanics for the player to solve problems with.
In July 2015, Google released DeepDream – an open source computer vision program, created to detect faces and other patterns in images with the aim of automatically classifying images, which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike psychedelic appearance in the deliberately over-processed images.
In August 2015, researchers from Tübingen, Germany created a convolutional neural network that uses neural representations to separate and recombine the content and style of arbitrary images, and which is able to turn images into stylistic imitations of works of art by artists such as Picasso or Van Gogh in about an hour. Their algorithm is put to use on the website DeepArt, which allows users to create unique artistic images with it.
In early 2016, a global team of researchers explained how a new computational creativity approach known as the Digital Synaptic Neural Substrate (DSNS) could be used to generate original chess puzzles that were not derived from endgame databases. The DSNS is able to combine features of different objects (e.g. chess problems, paintings, music) using stochastic methods in order to derive new feature specifications which can be used to generate objects in any of the original domains. The generated chess puzzles have also been featured on YouTube.
Creativity in problem solving
Creativity is also useful in allowing for unusual solutions in problem solving. In psychology and cognitive science, this research area is called creative problem solving. The Explicit-Implicit Interaction (EII) theory of creativity has been implemented using a CLARION-based computational model that allows for the simulation of incubation and insight in problem-solving. The emphasis of this computational creativity project is not on performance per se (as in artificial intelligence projects) but rather on the explanation of the psychological processes leading to human creativity and the reproduction of data collected in psychology experiments. So far, this project has been successful in providing an explanation for incubation effects in simple memory experiments, insight in problem solving, and reproducing the overshadowing effect in problem solving.
Debate about "general" theories of creativity
Some researchers feel that creativity is a complex phenomenon whose study is further complicated by the plasticity of the language we use to describe it. We can describe not just the agent of creativity as "creative" but also the product and the method. Consequently, it could be claimed that it is unrealistic to speak of a general theory of creativity. Nonetheless, some generative principles are more general than others, leading some advocates to claim that certain computational approaches are "general theories". Stephen Thaler, for instance, proposes that certain modalities of neural networks are generative enough, and general enough, to manifest a high degree of creative capabilities.
Criticism of computational creativity
Traditional computers, as mainly used in computational creativity applications, do not support creativity, as they fundamentally transform a discrete, limited domain of input parameters into a discrete, limited domain of output parameters using a limited set of computational functions. As such, a computer cannot be creative, as everything in the output must already have been present in the input data or the algorithms. Related discussions and references to related work are captured in work on the philosophical foundations of simulation.
Mathematically, the same set of arguments against creativity has been made by Chaitin. Similar observations come from a model-theory perspective. All this criticism emphasizes that computational creativity is useful and may look like creativity, but it is not real creativity, as nothing new is created, merely transformed by well-defined algorithms.
Events
The International Conference on Computational Creativity (ICCC) occurs annually, organized by The Association for Computational Creativity. Events in the series include:
ICCC 2023: University of Waterloo in Ontario, Canada
ICCC 2022: Free University of Bozen-Bolzano, Bolzano, Italy
ICCC 2021: Mexico City, Mexico (Virtual due to COVID-19 pandemic)
ICCC 2020, Coimbra, Portugal (Virtual due to COVID-19 pandemic)
ICCC 2019, Charlotte, North Carolina, US
ICCC 2018, Salamanca, Spain
ICCC 2017, Atlanta, Georgia, US
ICCC 2016, Paris, France
ICCC 2015, Park City, Utah, US. Keynote: Emily Short
ICCC 2014, Ljubljana, Slovenia. Keynote: Oliver Deussen
ICCC 2013, Sydney, Australia. Keynote: Arne Dietrich
ICCC 2012, Dublin, Ireland. Keynote: Steven Smith
ICCC 2011, Mexico City, Mexico. Keynote: George E Lewis
ICCC 2010, Lisbon, Portugal. Keynote/Invited Talks: Nancy J Nersessian and Mary Lou Maher
Previously, the community of computational creativity has held a dedicated workshop, the International Joint Workshop on Computational Creativity, every year since 1999. Previous events in this series include:
IJWCC 2003, Acapulco, Mexico, as part of IJCAI'2003
IJWCC 2004, Madrid, Spain, as part of ECCBR'2004
IJWCC 2005, Edinburgh, UK, as part of IJCAI'2005
IJWCC 2006, Riva del Garda, Italy, as part of ECAI'2006
IJWCC 2007, London, UK, a stand-alone event
IJWCC 2008, Madrid, Spain, a stand-alone event
The 1st Conference on Computer Simulation of Musical Creativity was held as:
CCSMC 2016, 17–19 June, University of Huddersfield, UK. Keynotes: Geraint Wiggins and Graeme Bailey.
See also
1 the Road (1st novel)
Artificial imagination
Algorithmic art
Algorithmic composition
Applications of artificial intelligence
Computer art
Creative computing
Digital morphogenesis
Digital poetry
Generative art
Generative systems
Intrinsic motivation (artificial intelligence)
Musikalisches Würfelspiel (Musical dice game)
Procedural generation
Lists
List of emerging technologies
Outline of artificial intelligence
References
Further reading
An Overview of Artificial Creativity on Think Artificial
Cohen, H., "the further exploits of AARON, Painter" , SEHR, volume 4, issue 2: Constructions of the Mind, 1995
External links
Documentaries
Noorderlicht: Margaret Boden and Stephen Thaler on Creative Computers on Archive.org
In Its Image on Archive.org
Cognitive psychology
Computational fields of study
Creativity techniques
Philosophy of artificial intelligence | Computational creativity | [
"Technology",
"Biology"
] | 6,369 | [
"Behavior",
"Computational fields of study",
"Behavioural sciences",
"Computing and society",
"Cognitive psychology"
] |
16,300,987 | https://en.wikipedia.org/wiki/Enthought | Enthought, Inc. is a software company based in Austin, Texas, United States that develops scientific and analytic computing solutions using primarily the Python programming language. It is best known for the early development and maintenance of the SciPy library of mathematics, science, and engineering algorithms and for its Python distribution for scientific computing, Enthought Canopy (formerly EPD).
The company was founded in 2001 by Travis Vaught and Eric Jones.
Open source software
Enthought publishes a large portion of the code as open-source software under a BSD-style license.
Enthought Canopy is a Python distribution and analysis environment for scientific and analytic computing, available for free and under a commercial license.
The Enthought Tool Suite open source software projects include:
Traits: A manifest type definition library for Python that provides initialization, validation, delegation, notification, and visualization. The Traits package is the foundation of the Enthought Tool Suite, underlying almost all other packages. (A minimal usage sketch appears after this list.)
TraitsUI: A UI layer that supports the visualization features of Traits. Implementations using wxWidgets and Qt are provided by the TraitsBackendWX and TraitsBackendQt projects
Pyface: toolkit-independent GUI abstraction layer, which is used to support the "visualization" features of the Traits package.
MayaVi: 2-D/3-D scientific data visualization, usable in TraitsUIs as well as an Envisage plug-in.
Envisage: An extensible plug-in architecture for scientific applications, inspired by Eclipse and NetBeans in the Java world.
Enable: A multi-platform DisplayPDF drawing engine that supports multiple output backends, including Windows, GTK+, and macOS native windowing systems, a variety of raster image formats, PDF, and PostScript.
BlockCanvas: Visual environment for creating simulation experiments, where function and data are separated using CodeTools.
GraphCanvas: library for interacting with visualizations of complex graphs.
SciMath: Convenience libraries for math, interpolation, and units
Chaco: An interactive 2-D plotting toolkit for Python.
AppTools: General tools for ETS application development: scripting, logging, preferences, ...
Enaml: Library for creating professional quality user interfaces combining a domain specific declarative language with a constraints based layout.
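As a minimal sketch of the kind of declaration the Traits package supports (assembled for illustration rather than taken from Enthought's documentation; it assumes the traits package is installed), typed attributes declared on a HasTraits subclass are validated at assignment time:

from traits.api import HasTraits, Int, Str, TraitError

class Person(HasTraits):
    name = Str()      # must be a string
    age = Int(0)      # must be an integer; defaults to 0

p = Person(name="Ada", age=36)
print(p.name, p.age)  # Ada 36

try:
    p.age = "old"     # rejected by the trait's type validation
except TraitError as exc:
    print("validation error:", exc)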
See also
NumPy
matplotlib
Anaconda
ActiveState's ActivePython
References
External links
Companies based in Austin, Texas
Computational science
Free software programmed in Python
Software companies based in Texas
Software companies of the United States | Enthought | [
"Mathematics"
] | 537 | [
"Computational science",
"Applied mathematics"
] |
16,301,216 | https://en.wikipedia.org/wiki/PreQ1-II%20riboswitch | PreQ1-II riboswitches form a class of riboswitches that specifically bind pre-queuosine1 (PreQ1), a precursor of the modified nucleoside queuosine. They are found in certain species of Streptococcus and Lactococcus, and were originally identified as a conserved RNA secondary structure called the "COG4708 motif". All known members of this riboswitch class appear to control members of COG4708 genes. These genes are predicted to encode membrane-bound proteins, which have been proposed to be transporters of preQ1 or a related metabolite, based on their association with preQ1-binding riboswitches. PreQ1-II riboswitches have no apparent similarities in sequence or structure to preQ1-I riboswitches, a previously discovered class of preQ1-binding riboswitches. PreQ1 thus joins S-adenosylmethionine as the second metabolite found to be the ligand of more than one riboswitch class.
References
External links
Cis-regulatory RNA elements
Riboswitch | PreQ1-II riboswitch | [
"Chemistry"
] | 255 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
16,301,990 | https://en.wikipedia.org/wiki/User%20%28computing%29 | A user is a person who utilizes a computer or network service.
A user often has a user account and is identified to the system by a username (or user name).
Some software products provide services to other systems and have no direct end users.
End user
End users are the ultimate human users (also referred to as operators) of a software product. The end user stands in contrast to users who support or maintain the product such as sysops, database administrators and computer technicians. The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users. In user-centered design, it also distinguishes the software operator from the client who pays for its development and other stakeholders who may not directly use the software, but help establish its requirements. This abstraction is primarily useful in designing the user interface, and refers to a relevant subset of characteristics that most expected users would have in common.
In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines. When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users.
The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language.
Systems whose actor is another system or a software agent have no direct end users.
User account
A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log into an account, a user is typically required to authenticate themselves with a password or other credentials for the purposes of accounting, security, logging, and resource management.
Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation. In Unix systems, the username is correlated with a user identifier or user ID.
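On Unix-like systems this correlation can be inspected from Python's standard library pwd module; the username below is hypothetical, and the lookup fails unless such an account exists on the machine running the snippet.

import pwd  # standard library, available on Unix-like systems only

entry = pwd.getpwnam("alice")               # look up the account record by username
print(entry.pw_uid)                          # the numeric user ID the OS uses internally
print(entry.pw_dir)                          # the account's home directory
print(pwd.getpwuid(entry.pw_uid).pw_name)    # and back from the UID to the username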
Computer systems operate in one of two types based on what kind of users they have:
Single-user systems do not have a concept of several user accounts.
Multi-user systems have such a concept, and require users to identify themselves before using the system.
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile, which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.
While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account.
Password storage
On Unix systems, local user accounts are stored in the file /etc/passwd, while user passwords may be stored at /etc/shadow in its hashed form.
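The underlying idea of hashed password storage (keep only a salted hash and re-derive it at login) can be sketched with Python's standard library; real systems use the dedicated schemes referenced by /etc/shadow or the Windows credential store rather than this exact code.

import hashlib, hmac, os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted hash suitable for storage in place of the plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, stored_digest, iterations=200_000):
    """Re-derive the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))   # True
print(verify("wrong guess", salt, digest))                     # False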
On Microsoft Windows, user passwords can be managed within the Credential Manager program. The passwords are located in the Windows profile directory.
Username format
Various computer operating systems and applications expect or enforce different rules for the format of usernames.
In Microsoft Windows environments, for example, note the potential use of:
User Principal Name (UPN) format – for example: UserName@Example.com
Down-Level Logon Name format – for example: DOMAIN\UserName
Terminology
Some usability professionals have expressed their dislike of the term "user" and have proposed changing it. Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'."
The term "user" may imply lack of the technical expertise required to fully understand how computer systems and software products work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.
See also
Notes
References
Computing terminology
Identity management
Consumer | User (computing) | [
"Technology"
] | 968 | [
"Computing terminology"
] |
16,304,394 | https://en.wikipedia.org/wiki/DoD%20IPv6%20product%20certification | The United States Department of Defense (DoD) Internet Protocol version 6 (IPv6) product certification program began as a mandate from the DoD's Assistant Secretary of Defense for Networks & Information Integration (ASD-NII) in 2005. The program mandates the Joint Interoperability Test Command (JITC) in Fort Huachuca, Arizona, to test and certify IT products for IPv6 capability according to the RFCs outlined in the DoD's IPv6 Standards Profiles for IPv6 Capable Products. Once products are certified for special interoperability, they are added to the DoD's Unified Capabilities Approved Products List (UC APL) for IPv6. This list is used by procurement offices in the DoD and the U.S. Federal agencies for ongoing purchases and acquisitions of IT equipment.
As of February 2009, the DoD ceased the requirement for IPv6-only testing for certification and entry into the Unified Capabilities Approved Products List (UC APL). According to Kris Strance, DoD CIO IPv6 Lead, "The testing of IPv6 is a part of all product evaluations — it is much broader in scope now." The UC APL is now a single consolidated list of products that have completed Interoperability (IO) and Information Assurance (IA) certification.
DoD IPv6 standards
The DoD IPv6 Standards Profiles for IPv6 Capable Products (DoD IPv6 Profile) is the singular “IPv6 Capable” definition in DoD. It is a document that lists the six agreed upon product classes (Host, Router, Layer 3 Switch, Network Appliance, Security Device, and Advanced Server) and their corresponding standards (RFCs). It lists each standard according to its level of requirement:
MUST: The standard is required to be implemented in the product now.
SHOULD: The standard is optional, but recommended for implementation.
SHOULD+: The standard is optional now, but will be required within a short period of time.
DoD IPv6 generic test plan
The JITC uses its publicly available IPv6 Generic Test Plan (GTP) to test each product's IPv6 conformance, performance, and interoperability according to the DoD IPv6 Profile. The JITC uses a combination of automated testing tools and manual functional test procedures to conduct this testing.
Process
The vendor, or Program Manager, must make their intention to test known by providing the JITC with a Letter of Compliance (LoC). This letter must identify the product to be tested, the product class it belongs to, and all of the standards it implements, and must carry the signature of a Vice President or officer of the company. This is the "gateway" to the testing process.
Once the LoC is received, the product is then scheduled for test.
Approximately 6 weeks before the start of testing, the vendor must provide the JITC with funding. This funding must be in the form of a check. The amount covers only direct labor hours for testing performed by the contractor labor support.
If the product successfully meets the criteria, it will be entered on the DoD's UC APL for IPv6.
An IPv6 Capable Special Interoperability Certification Letter and Report will accompany the entry within 30–60 days after testing.
IPv6 pre-certification testing advocates
There are many companies and organizations that help develop and test products for vendors prior to testing at the JITC. These organizations cannot grant certification, but can conduct pre-testing to ensure a vendor's product will pass the necessary certification. Below is a list of these organizations:
The University of New Hampshire InterOperability Laboratory - IPv6 Ready Logo testing:
The IPv6 Ready Logo program
The IPv6 Forum has a service called IPv6 Ready Logo. This service represents a qualification program that assures devices have been tested and are IPv6 capable. Once certified, qualified products are permitted to display the logo. The IPv6 Forum's stated objectives are to:
Verify protocol implementation and validate interoperability of IPv6 products.
Provide access to free self-testing tools.
Provide IPv6 Ready Logo testing laboratories across the globe that are dedicated to providing testing assistance or services.
IPv6 experts suggest purchasing only devices that have received Phase-2 approval (the gold logo), since those are given the full treatment:
The Department of Defense (DoD) is committed to IPv6 and will likely be the first federal organization completely converted to IPv6. They also have a process for qualifying IPv6 equipment.
JITC/DISA
The task of certifying IPv6 products was given to the Joint Interoperability Test Command (JITC), part of the Defense Information Systems Agency (DISA). To help standardize IPv6 qualification procedures, the JITC follows what’s called the IPv6 Generic Test Plan.
After JITC qualifies a product, it is added to the Unified Capabilities Approved Products List. Fortunately, JITC makes the list available to the public.
References
The Unified Capabilities Approved Products List for IPv6: https://web.archive.org/web/20081007010445/http://jitc.fhu.disa.mil/apl/ipv6.html
Official IPv6 Capable Certification Testing Process: http://jitc.fhu.disa.mil/apl/ipv6/pdf/ipv6_certification_process_ipv6_v2.pdf
The DoD IPv6 Standards Profiles for IPv6 Capable Products, Version 4: https://web.archive.org/web/20100705061401/http://jitc.fhu.disa.mil/apl/ipv6/pdf/disr_ipv6_product_profile_v4.pdf
The DoD IPv6 Generic Test Plan: https://web.archive.org/web/20100704234338/http://jitc.fhu.disa.mil/adv_ip/register/docs/ipv6v4_may09.pdf
The Testing Times, March 2008, Volume 15, Number 1: http://jitc.fhu.disa.mil/tst_time/docs/year/mar08.pdf
http://www.techrepublic.com/blog/networking/ipv6-capable-devices-make-sure-they-are-ready/2522
Military in Arizona
Interoperability
IPv6 | DoD IPv6 product certification | [
"Engineering"
] | 1,370 | [
"Telecommunications engineering",
"Interoperability"
] |
16,305,705 | https://en.wikipedia.org/wiki/Milnor%E2%80%93Thurston%20kneading%20theory | The Milnor–Thurston kneading theory is a mathematical theory which analyzes the iterates of piecewise monotone mappings of an interval into itself. The emphasis is on understanding the properties of the mapping that are invariant under topological conjugacy.
The theory had been developed by John Milnor and William Thurston in two widely circulated and influential Princeton preprints from 1977 that were revised in 1981 and finally published in 1988. Applications of the theory include piecewise linear models, counting of fixed points, computing the total variation, and constructing an invariant measure with maximal entropy.
Short description
Kneading theory provides an effective calculus for describing the qualitative behavior of the iterates of a piecewise monotone mapping f of a closed interval I of the real line into itself. Some quantitative invariants of this discrete dynamical system, such as the lap numbers of the iterates and the Artin–Mazur zeta function of f are expressed in terms of certain matrices and formal power series.
The basic invariant of f is its kneading matrix, a rectangular matrix with coefficients in the ring of integer formal power series. A closely related kneading determinant is a formal power series $D(t) = 1 + D_1 t + D_2 t^2 + \cdots$ with odd integer coefficients $D_n$. In the simplest case, when the map is unimodal with a maximum at $c$, each coefficient is either $+1$ or $-1$, according to whether the corresponding iterate of $f$ has a local maximum or a local minimum at $c$.
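To make the sign convention concrete, here is a minimal numerical sketch (not part of the original article). It assumes the full logistic map $f(x) = 4x(1-x)$, which is unimodal with its maximum at $c = 0.5$; the function names and the sampling step are invented for the illustration, and the signs are estimated numerically rather than derived symbolically as in the theory.

```python
# Illustrative sketch only: estimate the +1/-1 signs described above for the
# logistic map f(x) = 4x(1 - x), unimodal with maximum at c = 0.5.
# For each n we check whether the n-th iterate has a local maximum (+1) or a
# local minimum (-1) at c by comparing its value at c with values at c +/- delta.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

def kneading_signs(f, c, n_max, delta=1e-6):
    signs = []
    for n in range(1, n_max + 1):
        left = iterate(f, c - delta, n)
        mid = iterate(f, c, n)
        right = iterate(f, c + delta, n)
        signs.append(+1 if mid >= max(left, right) else -1)
    return signs

if __name__ == "__main__":
    print(kneading_signs(logistic, 0.5, 8))   # e.g. [1, -1, -1, -1, -1, -1, -1, -1]
```

Here the comparison of sampled values is purely a numerical heuristic for illustration; the theory itself defines the signs symbolically from the itinerary of the critical point.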
See also
Sharkovsky theorem
Topological entropy
References
Topological dynamics | Milnor–Thurston kneading theory | [
"Mathematics"
] | 305 | [
"Topology",
"Topological dynamics",
"Dynamical systems"
] |
16,305,793 | https://en.wikipedia.org/wiki/List%20of%20concentrating%20solar%20thermal%20power%20companies | This is a list of concentrating solar thermal power (CSTP) companies. The CSTP industry finished a first round of new construction during 2006/7, a resurgence after more than 15 years of commercial dormancy.
The CSTP industry saw many new entrants and new manufacturing facilities in 2008. Active project developers grew to include Ausra, Mulk Enpar Renewable Energy, Bright Source Energy, eSolar, FPL Energy, Infinia, Sopogy, and Stirling Energy Systems in the USA. In Spain, Abengoa Solar, Acciona, Iberdrola Renovables, and Sener were active in 2008.
List of notable companies
Parabolic trough collectors:
Aalborg CSP
Abengoa
Acciona
GlassPoint Solar
Parvolen
Rackam
SENER
Solar Millennium (bankruptcy)
Soliterm Group
Sopogy Micro CSP
Solar tower technology:
BrightSource Energy / Luz II
Torresol Energy
Linear Fresnel:
AREVA Solar, formerly Ausra
Novatec Solar
Unconfirmed:
eSolar
Cobra
Iberdrola
SENER
Solar Euromed
Solarlite
SolarReserve
Stirling Energy Systems (filed for bankruptcy)
Wizard Power
See also
List of energy storage projects
List of photovoltaics companies
List of solar thermal power stations
Renewable energy industry
References
External links
Desert Sunrise: Concentrating Solar Power Makes Worldwide Progress
Low-cost Solar Thermal Plants at Heart of Algerian-German Research Push
ABROS green GmbH
Solar
Renewable energy organizations
Solar | List of concentrating solar thermal power companies | [
"Engineering"
] | 298 | [
"Renewable energy organizations",
"Energy organizations"
] |
16,305,856 | https://en.wikipedia.org/wiki/Power%20Machines | OJSC Power Machines (translit. Siloviye Mashiny abbreviated as Silmash, ) is a Russian energy systems machine-building company founded in 2000. It is headquartered in Saint Petersburg.
Power Machines manufactures steam turbines with capacity up to 1,200 MWe, including turbines for nuclear power plants. Its portfolio consists of turbine generators for the Leningrad Nuclear Power Plant II and the Novovoronezh Nuclear Power Plant II. Also, Power Machines has supplied equipment to 57 countries other than Russia with significant market in Asia.
History
Power Machines company was established in the year 2000. Today it is a joint venture combining technological, industrial and intellectual resources of six world-famous Russian enterprises: Leningradsky Metallichesky zavod (established in 1857), Electrosila (1898), Turbine Blades’ Plant (1964), Kaluga Turbine Works (1946), Reostat Plant (1960) and Energomachexport (1966).
69.92% of shares were owned by Highstat Limited, a company controlled by Alexei Mordashov. 25% of shares were owned by Siemens and 5.08% by minor shareholders.
In December 2011 Highstat acquired Siemens's stake in Power Machines for less than US$280 million (3.6 rubles per share), below the market price (4.9 rubles per share). Power Machines was subsequently delisted from the MICEX-RTS stock exchange. In August 2012 Highstat made a mandatory offer of 4.53 rubles (US$0.139) per share to the remaining minority shareholders, which the Investor Protection Association said was significantly undervalued. Following a complaint filed by the association, the Federal Financial Markets Service fined Highstat 250,000 rubles.
In 2017, the company's CEO Roman Filippov was placed under temporary arrest on suspicions of divulging State secrets. Power Machines was placed under US sanctions in 2018 for working to “support Russia’s attempted annexation of Crimea”, leading to the loss of a $500-million contract in Vietnam, a failed payment for a contract completed for Ukraine's DTEK, and the sale of its 35% stake in Siemens Gas Turbine Technologies LLC. In 2020, it was awarded the contract to build the 1.4 gigawatt Sirik power plant in Iran. In 2022, Power Machines assembled and tested its first domestically made high-power gas turbine, enabling Russia to replace imported equipment made unavailable since US sanctions.
Structure
Leningradsky Metallichesky Zavod (1857 establishment),
Electrosila (1898 establishment),
Turbine Blades’ Plant (1964 establishment),
Kaluga Turbine Works (1946 establishment),
NPO CKTI named after I. I. Polzunov (1927 establishment),
Energomashexport (1966 establishment),
Power Machines – Reostat Plant (1960 establishment).
Products
GTE180 development, GTE170 production, GTE160, GT100, GTE-150, GTE-250, GTE-300 projects, GTE65, unit M94yu2 (Licensed V94.2 Siemens SGT5-2000E in 1994)
SGTT build licensed SGT5-2000E (GTE160 GTE180 TPE180), SGT5-4000F, SGT-600 (Baltika-25)
Silmash Gas and Steam Turbines K- and T- for Power Plants (Nuclear Thermal and Hydroelectric)
Management
The Board of Directors consists of eight members:
From Severstal-group - Alexey Mordashov, Alexey Yegorov, Vladimir Lukin
From Power Machines - Igor Kostin (General Director), Vadim Chechnev
From Universal Invest - Igor Voskresensky
From Siemens - Michael Zuss, Hans-Jurgen Vio
References
External links
Official website of Power Machines
OFAC sanction notice
Industrial machine manufacturers
Steam turbine manufacturers
Gas turbine manufacturers
Engineering companies of Russia
Water turbine manufacturers
Nuclear technology companies of Russia
Russian companies established in 2000
Russian brands
Companies formerly listed on the Moscow Exchange
Manufacturing companies established in 2000 | Power Machines | [
"Engineering"
] | 844 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
16,306,862 | https://en.wikipedia.org/wiki/Refurbishment%20%28electronics%29 | In electronics, refurbishment is the practice of restoring and testing a pre-owned electronic device so that it can be re-sold. Refurbished electronics are therefore pre-owned electronic devices (usually smartphones, tablets, or laptops), that have been tested by a reseller to confirm that they are fully working. Other refurbished electronics include smartwatches, games consoles, and cameras.
Usually, a refurbished electronic device is one that has been previously returned or re-sold to a retailer for any reason. They are then tested, and if necessary, repaired by a specialist refurbisher (or sometimes by the original manufacturer). Refurbished electronics may also be referred to as renewed, reconditioned, recycled, recertified, or "like new" electronics.
Competing Definitions
In many countries, the word "refurbished" is not legally protected (although France has introduced a legal definition as of 2022). This means that different electronics resellers will have different definitions of what counts as a refurbished device. In theory, a smartphone could be sold as 'refurbished' with no repairs or testing whatsoever. However, most refurbished devices have been rigorously tested to ensure they are fully working.
Used vs. Refurbished
In the UK, the refurbished electronics marketplace Back Market claims that "refurbished" devices are distinct from "used" devices, where a "used" device is one where no repairs or testing have taken place. However other specialist retailers, like The Big Phone Store, define refurbished phones as a specific type of used or second-hand device.
Apple Certified Refurbished
Devices sold as 'Certified Refurbished' through the Apple store differ from most other refurbished devices. For example, iOS devices sold as Apple Certified Refurbished will always come with a brand-new battery and brand-new "outer shell". Because of this, these devices may be considered remanufactured, rather than refurbished.
Common Features
While in most countries there is no set legal definition, devices sold as "refurbished" tend to also come with the following assurances.
Testing and certification:
Functionality testing: the device is fully functional and has not been tampered with.
Software testing: the device has not been jailbroken or rooted.
Authenticity checks: the device is not a fake.
Network compatibility testing: the device is the correct country specification and is not blocked on any network.
Background checks: the device is not blacklisted or reported stolen.
Battery health testing: the device has a reasonably high battery health (usually at least 80%).
Full data destruction and factory reset: the device retains no data from any previous owner.
Often, the testing process is fully automated. Commonly used testing certification providers include Phonecheck and Blackbelt.
Specialist retailers of refurbished devices typically provide:
Warranty (typically 12 months)
A transparent returns policy, with a 14-day "cooling-off" period.
Secure, insured shipping.
A clearly explained cosmetic grade (also referred to as the device's "condition").
Basic accessories (such as a USB charging cable).
Optional upgrades (such as replacing a partially used battery with a brand-new one).
Refurbished phone retailers also often sell standard accessories such as phone cases, screen protectors, headphones, and chargers.
Conditions and Grades
The 'Grade' or 'Condition' of a refurbished device describes how much wear and tear there is on the device. Some refurbished phone retailers will simply describe these with letter grades (i.e. Grade A, Grade B etc.), while others use their own naming convention. It is important to note that these grades are usually cosmetic descriptions only.
Common conditions:
Like New: The device has no visible signs of use of any kind. These devices have usually never been used. Often, the only difference between a brand-new smartphone and a like-new refurbished phone is that the tamper-seal has been broken.
Pristine / Excellent: The device may exhibit minor signs of use, such as micro-scratches. These should not be visible from a normal viewing distance (more than 12 inches). Also referred to as Premium by some retailers.
Very Good / Good: The device shows clear visible signs of use.
Fair / Poor: The device shows heavy signs of use, such as deep scratches or even cracks. These devices may also feature other flaws such as reduced battery health.
Devices with heavier signs of use are priced lower, and on average may be less durable, than devices in perfect condition.
Types of Refurbished Electronics Retailer
Refurbished electronic devices are sold by a number of different kinds of retailer. These include:
Specialist Retailers and Independent Refurbishers
Many independent electronics refurbishers operate their own online retail store. In the USA, refurbished electronics retailers include Gazelle, while in the UK, specialist retailers include Reboxed, Big Phone Store and Envirofone.
Device Manufacturers
Manufacturers such as Apple and Samsung increasingly operate trade-in programs when buying a new device through their online store, which allows them to easily sell their own refurbished products.
Electronics Retailers and Mobile Networks.
A number of large electronics retailers, such as BestBuy in the USA, and Currys in the UK, sell both new and refurbished electronics. These are usually sourced from an independent refurbisher. Cellular network providers have begun to offer refurbished devices on contract. These are often devices that have been traded in to a network provider as part of a contract upgrade.
Online Marketplaces
Back Market and Mozillion are examples of online marketplaces specialising in pre-owned electronics. Meanwhile, Swappa, Amazon, TikTok Shop, eBay, and Ovantica are all large online platforms where independent electronics refurbishers can sell their products. Marketplaces often provide the most choice for the consumer, but do not take direct responsibility for the devices sold.
Consumer Demand
Global demand for refurbished electronic devices has steadily risen since 2014. According to a 2024 report by GfK, this is primarily driven by cost, as well as the increasing necessity of owning digital devices. According to the same report, another contributing factor is increased environmental awareness, as both smartphone manufacture and electronic waste are sources of pollution.
In particular, the UK has seen steady growth in the refurbished phone market, with research showing that refurbished phones accounted for 1 in 4 smartphones sold in 2023.
See also
Factory second
Reverse logistics network modelling
Right to repair
References
Electronics manufacturing
Recycling industry
Sustainable business | Refurbishment (electronics) | [
"Engineering"
] | 1,288 | [
"Electronic engineering",
"Electronics manufacturing"
] |
16,306,870 | https://en.wikipedia.org/wiki/Validation%20therapy | Validation therapy was developed by Naomi Feil for older people with cognitive impairments and dementia. Feil's own approach classifies individuals with cognitive impairment as having one of four stages in a continuum of dementia. These stages are:
Malorientation
Time confusion
Repetitive motion
Vegetative state
The basic principle of the therapy is the concept of validation or the reciprocated communication of respect which communicates that the other's opinions are acknowledged, respected, heard, and (regardless whether or not the listener actually agrees with the content), they are being treated with genuine respect as a legitimate expression of their feelings, rather than marginalized or dismissed.
Validation therapy is contrasted with reality orientation, in which the caregivers regularly remind people about their current situation (e.g., that they live in a nursing home now). It gave rise to an approach to advanced dementia called therapeutic deception, in which caregivers actively lie to protect people from re-learning distressing facts that they will be unable to remember from one day to the next (e.g., by saying that a deceased family member is sleeping right now, rather than telling them repeatedly that the loved one died).
There is insufficient scientific evidence to determine whether validation therapy reduces any of the behavioral and psychological symptoms of dementia.
Validation therapy improves job satisfaction and reduces stress for professional caregivers.
See also
Hogeweyk, a dementia facility designed to mimic everyday life
References
Further research
External links
Official website
Alzheimer's disease
Aging-associated diseases
Psychotherapy by type | Validation therapy | [
"Biology"
] | 310 | [
"Senescence",
"Aging-associated diseases"
] |
58,463 | https://en.wikipedia.org/wiki/Backscratcher | A backscratcher (occasionally known as a scratch-back) is a tool used for relieving an itch in an area that cannot easily be reached just by one's own hands, the , typically the back.
Composition and variation
They are generally long, slender, rod-shaped tools good for scratching one's back, with a knob on one end for holding and a rake-like device, sometimes in the form of a human hand, on the other end to perform the actual scratching. Many others are shaped like horse hooves or claws, or are retractable to reach further down the back. Though a backscratcher could feasibly be fashioned from most materials, most modern backscratchers are made of plastic, though examples can be found made of wood, whalebone, tortoiseshell, horn, cane, bamboo, ivory, baleen, and in some cases in history, narwhal tusks, due to the status afforded by relieving itches with a supposed unicorn horn (an example of conspicuous consumption).
Backscratchers vary in length.
History
The Inuit carved backscratchers from whale teeth. Backscratchers were also observed to have developed separately in many other areas, such as ancient China, where Chinese farmers occasionally used backscratchers as a tool to check livestock for fleas and ticks. In recent history backscratchers were also employed as a kind of rake to keep in order the huge "heads" of powdered hair worn by ladies in the 18th and 19th centuries.
In the past, backscratchers were often highly decorated, and hung from the waist as accessories, with the more elaborate examples being silver-mounted, or in rare instances with an ivory carved hand with rings on its fingers. The scratching hand was sometimes replaced by a rake or a bird's talon. Generally, the hand could represent either a left or right hand, but the Chinese variety usually bore a right hand.
Although not specifically used for only back scratching, young Chiricahua men in training and women going through a puberty ritual traditionally had to use a ceremonial wooden scratcher made from a fruit-bearing tree instead of scratching with their fingernails or hands. Young men who did not use the scratcher for scratching were reported to develop skin that was too soft.
References
External links
"The Scratch-Back", at the end of August 19 in The Book of Days by Robert Chambers.
Chinese inventions
Domestic implements
Hand tools
Traditional medicine | Backscratcher | [
"Engineering"
] | 502 | [
"Human–machine interaction",
"Hand tools"
] |
58,471 | https://en.wikipedia.org/wiki/Timeline%20of%20temperature%20and%20pressure%20measurement%20technology | This is a timeline of temperature and pressure measurement technology or the history of temperature measurement and pressure measurement technology.
Timeline
1500s
1592–1593 — Galileo Galilei builds a device showing variation of hotness known as the thermoscope using the contraction of air to draw water up a tube.
1600s
1612 — Santorio Sanctorius makes the first thermometer for medical use.
1617 — Giuseppe Biancani published the first clear diagram of a thermoscope
1624 — The word thermometer (in its French form) first appeared in La Récréation Mathématique by Jean Leurechon, who describes one with a scale of 8 degrees.
1629 — Joseph Solomon Delmedigo describes in a book an accurate sealed-glass thermometer that uses brandy
1638 — Robert Fludd builds the first thermoscope showing a scale and thus constituting a thermometer.
1643 — Evangelista Torricelli invents the mercury barometer
1654 — Ferdinando II de' Medici, Grand Duke of Tuscany, made sealed tubes part filled with alcohol, with a bulb and stem, the first modern-style thermometer, depending on the expansion of a liquid, and independent of air pressure
1669 — Honoré Fabri suggested using a temperature scale by dividing into 8 equal parts the interval between "greatest heat of summer" and melting snow.
1676 to 1679 — Edme Mariotte conducted experiments under the French Academy of Sciences at the Paris Observatory, resulting in the wide adoption of the temperature of deep cellars as a fixed reference point, rather than the freezing points of snow or water.
1685 — Giovanni Alfonso Borelli's posthumously published De motu animalium ["On the movements of animals"] reported that the temperature of blood in a vivisected stag is the same in the left ventricle of the heart, the liver, lungs and intestines.
1688 — Joachim Dalencé proposed constructing a thermometer by dividing into 20 equal degrees the interval between freezing water and melting butter, then extrapolating 4 degrees upwards and downwards.
1695 — Guillaume Amontons improved the thermometer.
1700s
1701 — Newton publishes anonymously a method of determining the rate of heat loss of a body and introduces a scale, which had 0 degrees represent the freezing point of water, and 12 degrees for human body temperature. He used linseed oil as the thermometric fluid.
1701 — Ole Christensen Rømer made one of the first practical thermometers, using red wine as the temperature indicator (Rømer scale). The temperature scale used for his thermometer had 0 representing the temperature of a salt and ice mixture (at about 259 K).
1709 — Daniel Gabriel Fahrenheit constructed alcohol thermometers which were reproducible (i.e. two would give the same temperature)
1714 — Daniel Gabriel Fahrenheit invents the mercury-in-glass thermometer giving much greater precision (4 x that of Rømer). Using Rømer's zero point and an upper point of blood temperature, he adjusted the scale so the melting point of ice was 32 and the upper point 96, meaning that the difference of 64 could be got by dividing the intervals into 2 repeatedly.
1731 — René Antoine Ferchault de Réaumur produced a scale in which 0 represented the freezing point of water and 80 represented the boiling point. This was chosen as his alcohol mixture expanded 80 parts per thousand. He did not consider pressure.
1738 — Daniel Bernoulli asserted in Hydrodynamica the principle that as the speed of a moving fluid increases, the pressure within the fluid decreases. (Kinetic theory)
1742 — Anders Celsius proposed a temperature scale in which 100 represented the temperature of melting ice and 0 represented the boiling point of water at 25 inches and 3 lines of barometric mercury height. This corresponds to 751.16 mm, so that on the present-day definition, this boiling point is 99.67 degrees Celsius.
1743 — Jean-Pierre Christin had worked independently of Celsius and developed a scale where zero represented the melting point of ice and 100 represented the boiling point but did not specify a pressure.
1744 — Carl Linnaeus suggested reversing the temperature scale of Anders Celsius so that 0 represented the freezing point of water and 100 represented the boiling point.
1782 — James Six invents the Maximum minimum thermometer
1800s
1821 — Thomas Johann Seebeck invents the thermocouple
1844 — Lucien Vidi invents the aneroid Barograph
1845 — Francis Ronalds invents the first successful Barograph based on photography
1848 — Lord Kelvin (William Thomson) – Kelvin scale, in his paper, On an Absolute Thermometric Scale
1849 — Eugène Bourdon – Bourdon_gauge (manometer)
1849 — Henri Victor Regnault – Hypsometer
1864 — Henri Becquerel suggests an optical pyrometer
1866 — Thomas Clifford Allbutt invented a clinical thermometer that produced a body temperature reading in five minutes as opposed to twenty.
1871 — William Siemens describes the Resistance thermometer at the Bakerian Lecture
1874 — Herbert McLeod invents the McLeod gauge
1885 — Callendar–Van Dusen invented the platinum resistance temperature device
1887 — Richard Assmann invents the psychrometer (Wet and Dry Bulb Thermometers)
1892 — Henri-Louis Le Châtelier builds the first optical pyrometer
1896 — Samuel Siegfried Karl Ritter von Basch introduced the Sphygmomanometer to measure blood pressure
1900s
1906 — Marcello Pirani – Pirani gauge (to measure pressures in vacuum systems)
1915 — J.C. Stevens — Chart recorder (first chart recorder for environmental monitoring)
1924 — Irving Langmuir — Langmuir probe (to measure plasma parameters)
1930 — Samuel Ruben invented the thermistor
See also
Dimensional metrology
Forensic metrology
Smart Metrology
Time metrology
Quantum metrology
History of thermodynamic temperature
Timeline of heat engine technology
List of timelines
References
Temperature And Pressure Measurement
History of measurement | Timeline of temperature and pressure measurement technology | [
"Technology",
"Engineering"
] | 1,262 | [
"Pressure gauges",
"Thermometers",
"Measuring instruments"
] |
58,527 | https://en.wikipedia.org/wiki/Finitely%20generated%20abelian%20group | In abstract algebra, an abelian group is called finitely generated if there exist finitely many elements in such that every in can be written in the form for some integers . In this case, we say that the set is a generating set of or that generate . So, finitely generated abelian groups can be thought of as a generalization of cyclic groups.
Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified.
Examples
The integers, $\mathbb{Z}$, are a finitely generated abelian group.
The integers modulo $n$, $\mathbb{Z}/n\mathbb{Z}$, are a finite (hence finitely generated) abelian group.
Any direct sum of finitely many finitely generated abelian groups is again a finitely generated abelian group.
Every lattice forms a finitely generated free abelian group.
There are no other examples (up to isomorphism). In particular, the group of rational numbers $\mathbb{Q}$ is not finitely generated: if $x_1, \ldots, x_n$ are rational numbers, pick a natural number $k$ coprime to all the denominators; then $1/k$ cannot be generated by $x_1, \ldots, x_n$. The group of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition and non-zero real numbers under multiplication are also not finitely generated.
Classification
The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations.
Primary decomposition
The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form
$\mathbb{Z}^n \oplus \mathbb{Z}/q_1\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/q_t\mathbb{Z},$
where n ≥ 0 is the rank, and the numbers q1, ..., qt are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q1, ..., qt are (up to rearranging the indices) uniquely determined by G, that is, there is one and only one way to represent G as such a decomposition.
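For concreteness, a small worked example (not part of the original article): since $36 = 4 \cdot 9$ with $4$ and $9$ coprime, the Chinese remainder theorem (discussed under Equivalence below) gives $\mathbb{Z}/36\mathbb{Z} \cong \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/9\mathbb{Z}$, so the group $\mathbb{Z}^2 \oplus \mathbb{Z}/36\mathbb{Z}$ has primary decomposition $\mathbb{Z}^2 \oplus \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/9\mathbb{Z}$, with rank $n = 2$ and prime powers $q_1 = 4$, $q_2 = 9$.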
The proof of this statement uses the basis theorem for finite abelian groups: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then, G/tG is a torsion-free abelian group and thus it is free abelian. tG is a direct summand of G, which means there exists a subgroup F of G s.t. $G = tG \oplus F$, where $F \cong G/tG$. Then, F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. By the basis theorem for finite abelian groups, tG can be written as a direct sum of primary cyclic groups.
Invariant factor decomposition
We can also write any finitely generated abelian group G as a direct sum of the form
$\mathbb{Z}^n \oplus \mathbb{Z}/k_1\mathbb{Z} \oplus \mathbb{Z}/k_2\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/k_u\mathbb{Z},$
where k1 divides k2, which divides k3 and so on up to ku. Again, the rank n and the invariant factors k1, ..., ku are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism.
Equivalence
These statements are equivalent as a result of the Chinese remainder theorem, which implies that $\mathbb{Z}/jk\mathbb{Z} \cong \mathbb{Z}/j\mathbb{Z} \oplus \mathbb{Z}/k\mathbb{Z}$ if and only if j and k are coprime.
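As a rough sketch (not from the article), the conversion from a primary decomposition to the invariant factor decomposition can be carried out mechanically: for each prime, sort its prime-power summands; the largest power of every prime multiplies into the last invariant factor, the next-largest powers into the one before it, and so on. The following Python helper illustrates this; the function name and input format are invented for the example.

```python
# Sketch: convert a primary decomposition (a list of prime powers p**e, given
# as (p, e) pairs) into the invariant factors k_1 | k_2 | ... | k_u.
from collections import defaultdict

def invariant_factors(prime_powers):
    by_prime = defaultdict(list)
    for p, e in prime_powers:
        by_prime[p].append(p ** e)
    for powers in by_prime.values():
        powers.sort()                      # ascending prime powers for each prime
    width = max(len(powers) for powers in by_prime.values())
    factors = []
    for i in range(width):
        k = 1
        for powers in by_prime.values():
            idx = len(powers) - width + i  # align so the largest powers meet at the end
            if idx >= 0:
                k *= powers[idx]
        factors.append(k)
    return factors

# Z/2 + Z/4 + Z/3 + Z/9  ->  [6, 36], i.e. Z/6 + Z/36 (and 6 divides 36)
print(invariant_factors([(2, 1), (2, 2), (3, 1), (3, 2)]))
```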
History
The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. The finitely presented case is solved by Smith normal form, and hence frequently credited to Smith (1861), though the finitely generated case is sometimes instead credited to Poincaré in 1900; details follow.
Group theorist László Fuchs states:
The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in , 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882.
The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in 1861, as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups.
The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of a dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part.
Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926.
Corollaries
Stated differently the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of G. The rank of G is defined as the rank of the torsion-free part of G; this is just the number n in the above formulas.
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: $\mathbb{Q}$ is torsion-free but not free abelian.
Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups.
Non-finitely generated abelian groups
Note that not every abelian group of finite rank is finitely generated; the rank-1 group $\mathbb{Q}$ is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of $\mathbb{Z}/2\mathbb{Z}$ is another one.
See also
The composition series in the Jordan–Hölder theorem is a non-abelian generalization.
Notes
References
Reprinted (pp. 367–409) in The Collected Mathematical Papers of Henry John Stephen Smith, Vol. I, edited by J. W. L. Glaisher. Oxford: Clarendon Press (1894), xcv+603 pp.
Abelian group theory
Algebraic structures | Finitely generated abelian group | [
"Mathematics"
] | 1,545 | [
"Mathematical structures",
"Mathematical objects",
"Algebraic structures"
] |
58,588 | https://en.wikipedia.org/wiki/Axe%20historique | The Axe historique (; "historical axis") refers to a straightly aligned series of thoroughfare streets, squares, monuments and buildings that extend from the centre of Paris, France, to the west-northwest of the city. It is also known as the Voie Triomphale (; "triumphal way").
History
The Axe historique began with the creation of the Champs-Élysées, designed in the 17th century to create a vista to the west, extending the central axis of the gardens to the royal Tuileries Palace. Today the Tuileries Garden (Jardin des Tuileries) remain, preserving their wide central pathway, though the palace was burned down during the Paris Commune, 1871.
Between the Tuileries Garden and the Champs-Élysées extension a jumble of buildings remained on the site of the Place de la Concorde until early in the reign of Louis XV, for whom the square was at first named. Then the garden axis could open through a grand gateway into the new royal square.
To the east, the Tuileries Palace faced an open square, the Place du Carrousel. There, by order of Napoleon, the Arc de Triomphe du Carrousel was centered on the palace (and so on the same axial line that was developing beyond the palace). Long-standing plans to link the Louvre Palace, as the disused palace was called, with the Tuileries, by sweeping away the intervening buildings, finally came to fruition in the early 19th century. Consequently, the older axis extending from the courtyard of the Louvre is slightly skewed to the rest of what has become the Axe historique, but the Arc du Carrousel, at the fulcrum between the two, serves to disguise the discontinuity.
To the west, the completion of the Arc de Triomphe in 1836 on the Place de l'Étoile at the western end of the Champs-Élysées formed the far point of this line of perspective, which now starts at the equestrian statue of Louis XIV placed by I.M. Pei adjacent to his Louvre Pyramid in the Cour Napoléon of the Louvre.
The axis was extended again westwards along the Avenue de la Grande Armée, past the city boundary of Paris to La Défense. This was originally a large junction, named for a statue commemorating the defence of Paris in the Franco-Prussian War.
In the 1950s, the area around La Défense was marked out to become a new business district, and high-rise office buildings were built along the avenue. The axis found itself extended yet again, with ambitious projects for the western extremity of the modern plaza.
It was not until the 1980s, under president François Mitterrand, that a project was initiated, with a modern 20th century version of the Arc de Triomphe. This is the work of Danish architect Johan Otto von Spreckelsen, La Grande Arche de la Fraternité (also known as simply La Grande Arche or L'Arche de la Défense), a monument to humanity and humanitarian ideals rather than militaristic victories. It was inaugurated in 1989.
The network of railway lines and road tunnels beneath the elevated plaza of La Défense prevented the pillars supporting the arch from being exactly in line with the axis: it is slightly out of line, bending the axis should it be extended further to the west. From the roof of the Grande Arche, a second axis can be seen: the Tour Montparnasse stands exactly behind the Eiffel Tower.
The Seine-Arche project is extending the historical axis to the west through the city of Nanterre, but with a slight curve.
Solar alignment
As a natural consequence of the nature of the line, the sun sets behind the Grande Arche twice per year, in a phenomenon dubbed "Paris henge".
Gallery
See also
a similar axis in Yogyakarta.
The North-South Central Axis of Beijing City, a similar axis in Beijing.
External links
Entry on greatbuildings.com
References
Buildings and structures in Paris
Champs-Élysées
Environmental design
Geography of Paris
La Défense
Landscape architecture
Urban design | Axe historique | [
"Engineering"
] | 849 | [
"Environmental design",
"Design",
"Landscape architecture",
"Architecture"
] |
58,608 | https://en.wikipedia.org/wiki/Trusted%20Computing | Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning that is distinct from the field of confidential computing. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner.
TC is controversial as the hardware is not only secured for its owner, but also against its owner, leading opponents of the technology like free software activist Richard Stallman to deride it as "treacherous computing", and certain scholarly articles to use scare quotes when referring to the technology.
Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group and Endpoint Technologies Associates state that the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also state that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often state that this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security.
Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled. The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM). As of July 3, 2007, so does virtually the entire United States Department of Defense.
Key concepts
Trusted Computing encompasses six key technology concepts, all of which are required for a fully Trusted system, that is, a system compliant with the TCG specifications:
Endorsement key
Secure input and output
Memory curtaining / protected execution
Sealed storage
Remote attestation
Trusted Third Party (TTP)
Endorsement key
The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command.
This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol) in order to ensure its compliance of the TCG standard and to prove its identity; this makes it impossible for a software TPM emulator with an untrusted endorsement key (for example, a self-generated one) to start a secure transaction with a trusted entity. The TPM should be designed to make the extraction of this key by hardware analysis hard, but tamper resistance is not a strong requirement.
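The challenge–response idea described above can be sketched in a few lines. The snippet below is only a conceptual analogy using the third-party Python `cryptography` package and an Ed25519 key pair; a real TPM uses an RSA (or ECC) endorsement key burned in at manufacture and the TCG's direct anonymous attestation protocol, neither of which is modelled here.

```python
# Conceptual sketch only: a verifier sends a random nonce, the "chip" signs it
# with a private key that never leaves the device, and the verifier checks the
# signature against the corresponding public key.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Stand-in for the endorsement key pair created at manufacture time.
device_private_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()

# Verifier side: issue a fresh random challenge.
nonce = os.urandom(32)

# Device side: sign the challenge without ever exposing the private key.
signature = device_private_key.sign(nonce)

# Verifier side: verify() raises InvalidSignature if the response is forged.
try:
    device_public_key.verify(signature, nonce)
    print("Challenge answered by the expected device key")
except InvalidSignature:
    print("Response did not come from the expected key")
```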
Memory curtaining
Memory curtaining extends common memory protection techniques to provide full isolation of sensitive areas of memory—for example, locations containing cryptographic keys. Even the operating system does not have full access to curtained memory. The exact implementation details are vendor specific.
Sealed storage
Sealed storage protects private information by binding it to platform configuration information including the software and hardware being used. This means the data can be released only to a particular combination of software and hardware. Sealed storage can be used for DRM enforcement. For example, users who keep on their computer a song that has not been licensed for listening will not be able to play it. Currently, a user can locate the song, listen to it, send it to someone else, play it in the software of their choice, or back it up (and in some cases, use circumvention software to decrypt it). Alternatively, the user may use software to modify the operating system's DRM routines to have it leak the song data once, say, a temporary license has been acquired. Using sealed storage, the song is securely encrypted using a key bound to the trusted platform module, so that only the unmodified and untampered music player on his or her computer can play it. In this DRM architecture, this might also prevent people from listening to the song after buying a new computer, or after upgrading parts of their current one, except with the explicit permission of the vendor of the song.
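As a loose illustration of the sealing idea (not an actual TPM interface), the sketch below derives an encryption key from a hash of the platform's software and hardware measurements, so the ciphertext can only be opened by a platform that reproduces exactly the same measurements. It uses the third-party `cryptography` package's Fernet recipe; all names and measurement values are invented for the example.

```python
# Conceptual sketch: "seal" data to a platform configuration by deriving the
# encryption key from a digest of the measured software and hardware state.
import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken

def seal_key(measurements):
    """Derive a Fernet key from a list of measurement strings."""
    digest = hashlib.sha256("|".join(measurements).encode()).digest()
    return base64.urlsafe_b64encode(digest)

trusted_state = ["bootloader=9f2c", "os=authorized-build", "player=v1.0"]
ciphertext = Fernet(seal_key(trusted_state)).encrypt(b"licensed song data")

# Same configuration: the key re-derives and the data unseals.
print(Fernet(seal_key(trusted_state)).decrypt(ciphertext))

# Changed configuration (e.g. a modified player): decryption fails.
tampered_state = ["bootloader=9f2c", "os=authorized-build", "player=patched"]
try:
    Fernet(seal_key(tampered_state)).decrypt(ciphertext)
except InvalidToken:
    print("Data stays sealed: platform measurements do not match")
```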
Remote attestation
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users modifying their software to circumvent commercial digital rights restrictions. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing. Numerous remote attestation schemes have been proposed for various computer architectures, including Intel, RISC-V, and ARM.
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that requested the attestation, and not by an eavesdropper.
To take the song example again, the user's music player software could send the song to other machines, but only if they could attest that they were running an authorized copy of the music player software. Combined with the other technologies, this provides a more restricted path for the music: encrypted I/O prevents the user from recording it as it is transmitted to the audio subsystem, memory locking prevents it from being dumped to regular disk files as it is being worked on, sealed storage curtails unauthorized access to it when saved to the hard drive, and remote attestation prevents unauthorized software from accessing the song even when it is used on other computers. To preserve the privacy of attestation responders, Direct Anonymous Attestation has been proposed as a solution, which uses a group signature scheme to prevent revealing the identity of individual signers.
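A bare-bones sketch of the attestation exchange (again only an analogy, reusing the hypothetical Ed25519 stand-in from the endorsement-key sketch rather than a real TPM quote): the platform signs a digest of the currently running software together with the verifier's nonce, and the verifier accepts only if the signature checks out and the digest matches a known-good value.

```python
# Conceptual sketch: the platform reports a signed "quote" of its software
# measurement plus the verifier's nonce; the verifier checks both the signature
# and that the measurement is on its list of approved software states.
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

attestation_key = ed25519.Ed25519PrivateKey.generate()   # stand-in for the TPM key
verifier_trusts = attestation_key.public_key()

APPROVED = {hashlib.sha256(b"authorized music player v1.0").hexdigest()}

def make_quote(running_software: bytes, nonce: bytes):
    measurement = hashlib.sha256(running_software).hexdigest()
    return measurement, attestation_key.sign(measurement.encode() + nonce)

def verify_quote(measurement: str, signature: bytes, nonce: bytes) -> bool:
    try:
        verifier_trusts.verify(signature, measurement.encode() + nonce)
    except InvalidSignature:
        return False
    return measurement in APPROVED

nonce = os.urandom(16)
measurement, signature = make_quote(b"authorized music player v1.0", nonce)
print(verify_quote(measurement, signature, nonce))        # True: unmodified software
measurement, signature = make_quote(b"patched music player", nonce)
print(verify_quote(measurement, signature, nonce))        # False: not on approved list
```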
Proof of space (PoS) have been proposed to be used for malware detection, by determining whether the L1 cache of a processor is empty (e.g., has enough space to evaluate the PoSpace routine without cache misses) or contains a routine that resisted being evicted.
Trusted third party
Known applications
The Microsoft products Windows Vista, Windows 7, Windows 8 and Windows RT make use of a Trusted Platform Module to facilitate BitLocker Drive Encryption. Other known applications with runtime encryption and the use of secure enclaves include the Signal messenger and the e-prescription service ("E-Rezept") by the German government.
Possible applications
Digital rights management
Trusted Computing would allow companies to create a digital rights management (DRM) system which would be very hard to circumvent, though not impossible. An example is downloading a music file. Sealed storage could be used to prevent the user from opening the file with an unauthorized player or computer. Remote attestation could be used to authorize play only by music players that enforce the record company's rules. The music would be played from curtained memory, which would prevent the user from making an unrestricted copy of the file while it is playing, and secure I/O would prevent capturing what is being sent to the sound system. Circumventing such a system would require either manipulation of the computer's hardware, capturing the analogue (and thus degraded) signal using a recording device or a microphone, or breaking the security of the system.
New business models for the use of software (services) over the Internet may be boosted by the technology. By strengthening the DRM system, one could base a business model on renting programs for specific time periods or on "pay as you go" models. For instance, one could download a music file which could only be played a certain number of times before it becomes unusable, or which could be used only within a certain time period.
Preventing cheating in online games
Trusted Computing could be used to combat cheating in online games. Some players modify their game copy in order to gain unfair advantages in the game; remote attestation, secure I/O and memory curtaining could be used to determine that all players connected to a server were running an unmodified copy of the software.
Verification of remote computation for grid computing
Trusted Computing could be used to guarantee that participants in a grid computing system are returning the results of the computations they claim to be returning, rather than forging them. This would allow large-scale simulations (say, a climate simulation) to be run without expensive redundant computations to guarantee that malicious hosts are not undermining the results to reach the conclusion they want.
Criticism
The Electronic Frontier Foundation and the Free Software Foundation criticize that trust in the underlying companies is not deserved and that the technology puts too much power and control into the hands of those who design systems and software. They also state that it may cause consumers to lose anonymity in their online interactions, as well as mandating technologies Trusted Computing opponents say are unnecessary. They suggest Trusted Computing as a possible enabler for future versions of mandatory access control, copy protection, and DRM.
Some security experts, such as Alan Cox and Bruce Schneier, have spoken out against Trusted Computing, believing it will provide computer manufacturers and software authors with increased control to impose restrictions on what users are able to do with their computers. There are concerns that Trusted Computing would have an anti-competitive effect on the IT market.
There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside. If not implemented correctly, it presents a security risk to overall platform integrity and protected data. The specifications, as published by the Trusted Computing Group, are open and are available for anyone to review. However, the final implementations by commercial vendors will not necessarily be subjected to the same review process. In addition, the world of cryptography can often move quickly, and hardware implementations of algorithms might become inadvertently obsolete. Trusting networked computers to controlling authorities rather than to individuals may create digital imprimaturs.
Cryptographer Ross Anderson, University of Cambridge, has great concerns that:
TC can support remote censorship [...] In general, digital objects created using TC systems remain under the control of their creators, rather than under the control of the person who owns the machine on which they happen to be stored [...] So someone who writes a paper that a court decides is defamatory can be compelled to censor it — and the software company that wrote the word processor could be ordered to do the deletion if she refuses. Given such possibilities, we can expect TC to be used to suppress everything from pornography to writings that criticize political leaders.
He goes on to state that:
[...] software suppliers can make it much harder for you to switch to their competitors' products. At a simple level, Word could encrypt all your documents using keys that only Microsoft products have access to; this would mean that you could only read them using Microsoft products, not with any competing word processor. [...]
The [...] most important benefit for Microsoft is that TC will dramatically increase the costs of switching away from Microsoft products (such as Office) to rival products (such as OpenOffice). For example, a law firm that wants to change from Office to OpenOffice right now merely has to install the software, train the staff and convert their existing files. In five years' time, once they have received TC-protected documents from perhaps a thousand different clients, they would have to get permission (in the form of signed digital certificates) from each of these clients in order to migrate their files to a new platform. The law firm won't in practice want to do this, so they will be much more tightly locked in, which will enable Microsoft to hike its prices.
Anderson summarizes the case by saying:
The fundamental issue is that whoever controls the TC infrastructure will acquire a huge amount of power. Having this single point of control is like making everyone use the same bank, or the same accountant, or the same lawyer. There are many ways in which this power could be abused.
Digital rights management
One of the early motivations behind trusted computing was a desire by media and software corporations for stricter DRM technology to prevent users from freely sharing and using potentially copyrighted or private files without explicit permission.
An example could be downloading a music file from a band: the band's record company could come up with rules for how the band's music can be used. For example, they might want the user to play the file only three times a day without paying additional money. Also, they could use remote attestation to only send their music to a music player that enforces their rules: sealed storage would prevent the user from opening the file with another player that did not enforce the restrictions. Memory curtaining would prevent the user from making an unrestricted copy of the file while it is playing, and secure output would prevent capturing what is sent to the sound system.
Users unable to modify software
A user who wanted to switch to a competing program might find that it would be impossible for that new program to read old data, as the information would be "locked in" to the old program. It could also make it impossible for the user to read or modify their data except as specifically permitted by the software.
Users unable to exercise legal rights
The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users exercising these legal rights.
Users vulnerable to vendor withdrawal of service
A service that requires external validation or permission, such as a music file or game that requires connection with the vendor to confirm permission to play or use, is vulnerable to that service being withdrawn or no longer updated. A number of incidents have already occurred where users, having purchased music or video media, found their ability to watch or listen to it suddenly stop due to vendor policy, cessation of service, or server inaccessibility, at times with no compensation. Alternatively, in some cases the vendor refuses to provide services in the future, which leaves purchased material usable only on the present (and increasingly obsolete) hardware, so long as it lasts, but not on any hardware that may be purchased in the future.
Users unable to override
Some opponents of Trusted Computing advocate "owner override": allowing an owner who is confirmed to be physically present to allow the computer to bypass restrictions and use the secure I/O path. Such an override would allow remote attestation to a user's specification, e.g., to create certificates that say Internet Explorer is running, even if a different browser is used. Instead of preventing software change, remote attestation would indicate when the software has been changed without owner's permission.
Trusted Computing Group members have refused to implement owner override. Proponents of trusted computing believe that owner override defeats the trust in other computers since remote attestation can be forged by the owner. Owner override offers the security and enforcement benefits to a machine owner, but does not allow them to trust other computers, because their owners could waive rules or restrictions on their own computers. Under this scenario, once data is sent to someone else's computer, whether it be a diary, a DRM music file, or a joint project, that other person controls what security, if any, their computer will enforce on their copy of those data. This has the potential to undermine the applications of trusted computing to enforce DRM, control cheating in online games and attest to remote computations for grid computing.
Loss of anonymity
Because a Trusted Computing equipped computer is able to uniquely attest to its own identity, it will be possible for vendors and others who possess the ability to use the attestation feature to zero in on the identity of the user of TC-enabled software with a high degree of certainty.
Such a capability is contingent on the reasonable chance that the user at some time provides user-identifying information, whether voluntarily, indirectly, or simply through inference of many seemingly benign pieces of data. (e.g. search records, as shown through simple study of the AOL search records leak). One common way that information can be obtained and linked is when a user registers a computer just after purchase. Another common way is when a user provides identifying information to the website of an affiliate of the vendor.
While proponents of TC point out that online purchases and credit transactions could potentially be more secure as a result of the remote attestation capability, this may cause the computer user to lose expectations of anonymity when using the Internet.
Critics point out that this could have a chilling effect on political free speech, the ability of journalists to use anonymous sources, whistle blowing, political blogging and other areas where the public needs protection from retaliation through anonymity.
The TPM specification offers features and suggested implementations that are meant to address the anonymity requirement. By using a third-party Privacy Certification Authority (PCA), the information that identifies the computer could be held by a trusted third party. Additionally, the use of direct anonymous attestation (DAA), introduced in TPM v1.2, allows a client to perform attestation while not revealing any personally identifiable or machine information.
The kind of data that must be supplied to the TTP in order to obtain trusted status is at present not entirely clear, but the TCG itself admits that "attestation is an important TPM function with significant privacy implications". It is clear, however, that both static and dynamic information about the user's computer may be supplied (Ekpubkey) to the TTP under v1.1b; it is not clear what data will be supplied to the "verifier" under v1.2. The static information will uniquely identify the endorser of the platform, the model, details of the TPM, and the fact that the platform (PC) complies with the TCG specifications. The dynamic information is described as software running on the computer. If a program like Windows is registered in the user's name, this in turn will uniquely identify the user. Another dimension of privacy-infringing capability might also be introduced with this technology: how often you use your programs might be among the information provided to the TTP. In an exceptional but practical situation, where a user purchases a pornographic movie on the Internet, the purchaser today must accept that he has to provide credit card details to the provider, thereby possibly risking being identified. With the new technology, a purchaser might additionally risk someone finding out that he (or she) has watched this movie 1000 times. This adds a new dimension to the possible privacy infringement. The extent of the data that will be supplied to the TTP/verifiers is at present not exactly known; only when the technology is implemented and used will we be able to assess the exact nature and volume of the data transmitted.
TCG specification interoperability problems
Trusted Computing requires that all software and hardware vendors follow the technical specifications released by the Trusted Computing Group in order to allow interoperability between different trusted software stacks. However, since at least mid-2006, there have been interoperability problems between the TrouSerS trusted software stack (released as open source software by IBM) and Hewlett-Packard's stack. Another problem is that the technical specifications are still changing, so it is unclear which is the standard implementation of the trusted stack.
Shutting out of competing products
People have voiced concerns that trusted computing could be used to prevent or discourage users from running software created by companies outside of a small industry group. Microsoft has received a great deal of bad press surrounding its Palladium software architecture, evoking comments such as "Few pieces of vaporware have evoked a higher level of fear and uncertainty than Microsoft's Palladium", "Palladium is a plot to take over cyberspace", and "Palladium will keep us from running any software not personally approved by Bill Gates". The concerns about trusted computing being used to shut out competition exist within a broader concern among consumers that vendors may use the bundling of products to obscure prices and to engage in anti-competitive practices. Trusted Computing is seen as harmful or problematic to independent and open source software developers.
Trust
In the widely used public-key cryptography, creation of keys can be done on the local computer, and the creator has complete control over who has access to the private key, and consequently over their own security policies. In some proposed encryption-decryption chips, a private/public key pair is permanently embedded into the hardware when it is manufactured, and hardware manufacturers would have the opportunity to record the key without leaving evidence of doing so. With this key it would be possible to access data encrypted with it and to authenticate as the device. It would be trivial for a manufacturer to give a copy of this key to a government or to software manufacturers, as the platform must go through steps so that it works with authenticated software.
Therefore, to trust anything that is authenticated by or encrypted by a TPM or a trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process. A security breach of this chain of trust happened to the SIM card manufacturer Gemalto, which in 2010 was infiltrated by US and British spies, resulting in compromised security of cellphone calls.
It is also critical that one be able to trust that the hardware manufacturers and software developers properly implement trusted computing standards. Incorrect implementation could be hidden from users, and thus could undermine the integrity of the whole system without users being aware of the flaw.
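A rough sketch of why every link matters is given below. Each party vouches for the next with a keyed hash standing in for a real signature; the party names and chain layout are illustrative, not any vendor's actual certificate format. Verification only means anything if every issuer key in the chain is sound; a single leaked key lets an attacker mint endorsements that verify exactly like genuine ones.

```python
# Toy chain-of-trust walk: manufacturer -> chip -> software vendor -> measurement.
# HMAC stands in for real signatures; all names are illustrative assumptions.
import hmac, hashlib, secrets

def endorse(issuer_key: bytes, subject: str) -> str:
    """Toy endorsement: the issuer vouches for 'subject' with a keyed hash."""
    return hmac.new(issuer_key, subject.encode(), hashlib.sha256).hexdigest()

# Keys held by each party the end user is asked to trust.
keys = {
    "manufacturer": secrets.token_bytes(32),
    "chip": secrets.token_bytes(32),
    "software_vendor": secrets.token_bytes(32),
}

# Each link in the chain endorses the next one down.
chain = [
    ("chip", "manufacturer"),                   # manufacturer vouches for the chip
    ("software_vendor", "chip"),                # chip vouches for approved software
    ("measurement:os-1.0", "software_vendor"),  # vendor vouches for the running code
]
certs = {subject: endorse(keys[issuer], subject) for subject, issuer in chain}

def chain_verifies() -> bool:
    """The user can only re-check each link by trusting every issuer's key."""
    return all(
        hmac.compare_digest(certs[subject], endorse(keys[issuer], subject))
        for subject, issuer in chain
    )

print("chain verifies:", chain_verifies())

# If any single key leaks (cf. the Gemalto breach), forged endorsements are
# indistinguishable from genuine ones to everyone further down the chain.
stolen_key = keys["chip"]
forged = endorse(stolen_key, "malicious_vendor")
print("forgery verifies as genuine:",
      hmac.compare_digest(forged, endorse(keys["chip"], "malicious_vendor")))
```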
Hardware and software support
Since 2004, most major manufacturers have shipped systems that have included Trusted Platform Modules, with associated BIOS support. In accordance with the TCG specifications, the user must enable the Trusted Platform Module before it can be used.
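The BIOS support mentioned above typically measures each boot component into the TPM's Platform Configuration Registers (PCRs) before handing control to it. A simplified sketch of that extend operation follows; SHA-256 and the component list are assumptions made for illustration, and a real TPM maintains several PCR banks and performs the hashing inside the chip.

```python
# Simplified measured-boot hash chain over TPM-style PCRs.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Simplified PCR extend: PCR_new = H(PCR_old || H(component))."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # PCRs start at all zeroes at power-on
for component in [b"firmware", b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, component)  # each stage measures the next before running it

print("final PCR value:", pcr.hex())
# Changing any earlier component changes every later PCR value; a signed
# "quote" over the PCRs is what remote attestation presents to a verifier.
```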
The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux. In January 2005, members of Gentoo Linux's "crypto herd" announced their intention of providing support for TC—in particular support for the Trusted Platform Module. There is also a TCG-compliant software stack for Linux named TrouSerS, released under an open source license. There are several open-source projects that facilitate the use of confidential computing technology, including EGo, EdgelessDB and MarbleRun from Edgeless Systems, as well as Enarx, which originates from security research at Red Hat.
A limited form of trusted computing can be implemented on current versions of Microsoft Windows with third-party software. Major cloud providers such as Microsoft Azure, AWS and Google Cloud Platform offer virtual machines with trusted computing features. With the Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV) processors, hardware is available for runtime memory encryption and remote attestation features.
The Intel Classmate PC (a competitor to the One Laptop Per Child) includes a Trusted Platform Module.
PrivateCore vCage software can be used to attest x86 servers with TPM chips.
Mobile T6 secure operating system simulates the TPM functionality in mobile devices using the ARM TrustZone technology.
Samsung smartphones come equipped with Samsung Knox, which depends on features such as Secure Boot, TIMA, MDM, TrustZone and SE Linux.
See also
Glossary of legal terms in technology
Next-Generation Secure Computing Base (formerly known as Palladium)
Trusted Network Connect
Trusted Platform Module
Web Environment Integrity
References
External links
Cryptography
Copyright law
Microsoft Windows security technology | Trusted Computing | [
"Mathematics",
"Engineering"
] | 4,992 | [
"Trusted computing",
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
58,617 | https://en.wikipedia.org/wiki/Fact | A fact is a true datum about one or more aspects of a circumstance. Standard reference works are often used to check facts. Scientific facts are verified by repeatable careful observation or measurement by experiments or other means.
For example, "This sentence contains words." accurately describes a linguistic fact, and "The sun is a star" accurately describes an astronomical fact. Further, "Abraham Lincoln was the 16th President of the United States" and "Abraham Lincoln was assassinated" both accurately describe historical facts. Generally speaking, facts are independent of belief and of knowledge and opinion.
Facts are different from inferences, theories, values, and objects.
Etymology and usage
The word fact derives from the Latin factum. It was first used in English with the same meaning: "a thing done or performed", a meaning now obsolete. The common usage of "something that has really occurred or is the case" dates from the mid-16th century.
Barbara J. Shapiro wrote in her book A Culture of Fact how the concept of a fact evolved, starting within the English legal tradition of the 16th century.
In 1877, Charles Sanders Peirce described in his essay "The Fixation of Belief" four methods which people use to decide what they should believe: tenacity, the method of authority, the a priori method and the scientific method.
The term fact also indicates a matter under discussion deemed to be true or correct, such as to emphasize a point or prove a disputed issue; (e.g., "... the fact of the matter is ...").
Alternatively, fact may also indicate an allegation or stipulation of something that may or may not be a true fact (e.g., "the author's facts are not trustworthy"). This alternate usage, although contested by some, has a long history in standard English according to the American Heritage Dictionary of the English Language. The Oxford English Dictionary dates this use to 1729.
Fact may also indicate findings derived through a process of evaluation, including review of testimony, direct observation, or otherwise; as distinguishable from matters of inference or speculation. This use is reflected in the terms "fact-find" and "fact-finder" (e.g., "set up a fact-finding commission").
Facts may be checked by reason, experiment, personal experience, or may be argued from authority. Roger Bacon wrote "If in other sciences we should arrive at certainty without doubt and truth without error, it behooves us to place the foundations of knowledge in mathematics."
In philosophy
In philosophy, the concept fact is considered in epistemology, the branch of philosophy concerned with knowledge, and in ontology, which studies concepts such as existence, being, becoming, and reality. Questions of objectivity and truth are closely associated with questions of fact. A fact can be defined as something that is the case, in other words, a state of affairs.
Facts may be understood as information, which makes a true sentence true: "A fact is, traditionally, the worldly correlate of a true proposition, a state of affairs whose obtaining makes that proposition true." Facts may also be understood as those things to which a true sentence refers. The statement "Jupiter is the largest planet in the solar system" is about the fact that Jupiter is the largest planet in the solar system.
Correspondence and the slingshot argument
Pascal Engel's version of the correspondence theory of truth explains that what makes a sentence true is that it corresponds to a fact. This theory presupposes the existence of an objective world.
The Slingshot argument claims to show that all true statements stand for the same thing, the truth value true. If this argument holds, and facts are taken to be what true statements stand for, then one arrives at the counter-intuitive conclusion that there is only one fact: the truth.
Compound facts
Any non-trivial true statement about reality is necessarily an abstraction composed of a complex of objects and properties or relations. Facts "possess internal structure, being complexes of objects and properties or relations". For example, the fact described by the true statement "Paris is the capital city of France" implies that there is such a place as Paris, that there is such a place as France, that there are such things as capital cities and, more generally, places and governments, as well as that France has a government, that the government of France has the power to define its capital city, and that the French government has chosen Paris to be the capital. The verifiable accuracy of all of these assertions, if facts themselves, may coincide to create the fact that Paris is the capital of France.
Difficulties arise, however, in attempting to identify the constituent parts of negative, modal, disjunctive, or moral facts.
Fact–value distinction
Moral philosophers since David Hume have debated whether values are objective, and thus factual. In A Treatise of Human Nature Hume pointed out there is no obvious way for a series of statements about what ought to be the case to be derived from a series of statements of what is the case. This is called the is–ought distinction. Those who insist there is a logical gulf between facts and values, such that it is fallacious to attempt to derive values (e.g., "it is good to give food to hungry people") from facts (e.g., "people will die if they can't eat"), include G. E. Moore, who called attempting to do so the naturalistic fallacy.
Factual–counterfactual distinction
Factuality—what has occurred—can also be contrasted with counterfactuality: what might have occurred, but did not. A counterfactual conditional or subjunctive conditional is a conditional (or "if–then") statement indicating what would be the case if events had been other than they were. For example, "If Alexander had lived, his empire would have been greater than Rome." This contrasts with an indicative conditional, which indicates what is (in fact) the case if its antecedent is (in fact) true—for example, "If you drink this, it will make you well." Such sentences are important to modal logic, especially since the development of possible world semantics.
In mathematics
In mathematics, a fact is a statement (called a theorem) that can be proven by logical argument from certain axioms and definitions.
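As a minimal illustration (Lean 4 syntax, core library only; the particular statements are chosen arbitrarily), a mathematical fact in this sense is simply a statement a proof assistant can certify from the definitions and previously established lemmas:

```lean
-- A tiny mathematical fact, provable from the definitions alone.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A fact proved by appeal to a library lemma about natural-number addition.
example (n : Nat) : n + 0 = n := Nat.add_zero n
```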
In science
The definition of a scientific fact is different from the definition of fact, as it implies knowledge. A scientific fact is the result of a repeatable careful observation or measurement by experimentation or other means, also called empirical evidence. These are central to building scientific theories. Various forms of observation and measurement lead to fundamental questions about the scientific method, and the scope and validity of scientific reasoning.
In the most basic sense, a scientific fact is an objective and verifiable observation, in contrast with a hypothesis or theory, which is intended to explain or interpret facts.
Various scholars have offered significant refinements to this basic formulation. Philosophers and scientists are careful to distinguish between: 1) states of affairs in the external world and 2) assertions of fact that may be considered relevant in scientific analysis. The term is used in both senses in the philosophy of science.
Scholars and clinical researchers in both the social and natural sciences have written about numerous questions and theories that arise in the attempt to clarify the fundamental nature of scientific fact. Pertinent issues raised by this inquiry include:
the process by which "established fact" becomes recognized and accepted as such;
whether and to what extent "fact" and "theoretic explanation" can be considered truly independent and separable from one another;
to what extent "facts" are influenced by the mere act of observation; and
to what extent factual conclusions are influenced by history and consensus, rather than a strictly systematic methodology.
Consistent with the idea of confirmation holism, some scholars assert "fact" to be necessarily "theory-laden" to some degree. Thomas Kuhn points out that knowing what facts to measure, and how to measure them, requires the use of other theories. For example, the age of fossils is based on radiometric dating, which is justified by reasoning that radioactive decay follows a Poisson process rather than a Bernoulli process. Similarly, Percy Williams Bridgman is credited with the methodological position known as operationalism, which asserts that all observations are not only influenced, but necessarily defined, by the means and assumptions used to measure them.
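For instance, the dating example rests on the standard exponential decay law, itself the macroscopic consequence of the assumed Poisson model, so the inferred age depends directly on that theoretical choice:

```latex
% Exponential decay law underlying radiometric dating (standard result):
N(t) = N_0 e^{-\lambda t},
\qquad
t_{1/2} = \frac{\ln 2}{\lambda},
\qquad
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N(t)}
```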
The scientific method
Apart from the fundamental inquiry into the nature of scientific fact, there remain the practical and social considerations of how fact is investigated, established, and substantiated through the proper application of the scientific method. Scientific facts are generally believed independent of the observer: no matter who performs a scientific experiment, all observers agree on the outcome.
In addition to these considerations, there are the social and institutional measures, such as peer review and accreditation, that are intended to promote factual accuracy among other interests in scientific study.
In history
A common rhetorical cliché states, "History is written by the winners". This phrase suggests but does not examine the use of facts in the writing of history.
E. H. Carr in his 1961 volume What is History? argues that the inherent biases in the gathering of facts make the objective truth of any historical perspective idealistic and impossible. Facts are "like fish in the Ocean": we may only happen to catch a few, and they are only an indication of what is below the surface. Even a dragnet cannot tell us for certain what it would be like to live below the Ocean's surface. Even if we do not discard any facts (or fish) presented, we will always miss the majority; the site of our fishing, the methods undertaken, the weather and even luck play a vital role in what we will catch. Additionally, the composition of history is inevitably made up of the compilation of many different biases of fact finding – all compounded over time. He concludes that for a historian to attempt a more objective method, one must accept that history can only aspire to a conversation of the present with the past – and that one's methods of fact gathering should be openly examined. The set of highlighted historical facts, and their interpretations, therefore changes over time, and reflects present consensuses.
In law
This section of the article emphasizes common law jurisprudence as primarily represented in Anglo-American–based legal tradition. Nevertheless, the principles described herein have analogous treatment in other legal systems such as civil law systems as well.
In most common law jurisdictions, the general concept and analysis of fact reflects fundamental principles of jurisprudence, and is supported by several well-established standards. Matters of fact have various formal definitions under common law jurisdictions.
These include:
an element required in legal pleadings to demonstrate a cause of action;
the determinations of the finder of fact after evaluating admissible evidence produced in a trial or hearing;
a potential ground of reversible error forwarded on appeal in an appellate court; and
any of various matters subject to investigation by official authority to establish whether a crime has been perpetrated, and to establish culpability.
Legal pleadings
A party (e.g., plaintiff) to a civil suit generally must clearly state the relevant allegations of fact that form the basis of a claim. The requisite level of precision and particularity of these allegations varies, depending on the rules of civil procedure and jurisdiction. Parties who face uncertainties regarding facts and circumstances attendant to their side in a dispute may sometimes invoke alternative pleading. In this situation, a party may plead separate sets of facts that when considered together may be contradictory or mutually exclusive. This seemingly logically-inconsistent presentation of facts may be necessary as a safeguard against contingencies such as res judicata that would otherwise preclude presenting a claim or defense that depends on a particular interpretation of the underlying facts and ruling of the court.
See also
Brute fact
Common misconceptions
Consensus reality
Counterfactual history
De facto
Factoid
Fiction
Lie
References
External links
Concepts in epistemology
Concepts in logic
Concepts in metaphysics
Concepts in the philosophy of language
Concepts in the philosophy of science
Information
Knowledge
Logical truth
Statements | Fact | [
"Mathematics"
] | 2,499 | [
"Mathematical logic",
"Logical truth"
] |
58,623 | https://en.wikipedia.org/wiki/The%20Culture | The Culture is a fictional interstellar post-scarcity civilisation or society created by the Scottish writer Iain Banks and features in a number of his space opera novels and works of short fiction, collectively called the Culture series.
In the series, the Culture is composed primarily of sentient beings of the humanoid alien variety, artificially intelligent sentient machines, and a small number of other sentient "alien" life forms. Machine intelligences range from human-equivalent drones to hyper-intelligent Minds. Artificial intelligences with capabilities measured as a fraction of human intelligence also perform a variety of tasks, e.g. controlling spacesuits. Without scarcity, the Culture has no need for money; instead, Minds voluntarily indulge humanoid and drone citizens' pleasures, leading to a largely hedonistic society. Many of the series' protagonists are humanoids who have chosen to work for the Culture's diplomatic or espionage organs, and interact with other civilisations whose citizens act under different ideologies, morals, and technologies.
The Culture has a grasp of technology that is advanced relative to most other civilisations with which it shares the galaxy. Most of the Culture's citizens do not live on planets but in artificial habitats such as orbitals and ships, the largest of which are home to billions of individuals. The Culture's citizens have been genetically enhanced to live for centuries and have modified mental control over their physiology, including the ability to introduce a variety of psychoactive drugs into their systems, change biological sex, or switch off pain at will. Culture technology is able to transfer individuals into vastly different body forms, although the Culture standard form remains fairly close to human.
The Culture holds peace and individual freedom as core values, and a central theme of the series is the ethical struggle it faces when interacting with other societies – some of which brutalise their own members, pose threats to other civilisations, or threaten the Culture itself. It tends to make major decisions based on the consensus formed by its Minds and, if appropriate, its citizens. In one instance, a direct democratic vote of trillions – the entire population – decided The Culture would go to war with a rival civilisation. Those who objected to the Culture's subsequent militarisation broke off from the meta-civilisation, forming their own separate civilisation; a hallmark of the Culture is its ambiguity. In contrast to the many interstellar societies and empires which share its fictional universe, the Culture is difficult to define, geographically or sociologically, and "fades out at the edges".
Overview
The Culture is characterized as being a post-scarcity society, having overcome most physical constraints on life and being an egalitarian, stable society without the use of any form of force or compulsion, except where necessary to protect others. That being said, some citizens, including the extremely powerful artificial intelligences known as Minds, sometimes engage in the manipulation of others. This can include influencing or controlling the development of alien societies through the group known as Contact.
The novels of the Culture cycle mostly deal with people at the fringes of the Culture: diplomats, spies, or mercenaries; those who interact with other civilisations, and who do the Culture's dirty work in moving those societies closer to the Culture ideal, sometimes by force.
Fictional history
In this fictional universe, the Culture exists concurrently with human society on Earth. The time frame for the published Culture stories is from 1267 CE to roughly 2970 CE, with Earth being contacted around 2100 CE, though the Culture had covertly visited the planet in the 1970s in The State of the Art.
The Culture itself is described as having been created when several humanoid species and machine sentiences reached a certain social level, and took not only their physical, but also their civilisational evolution into their own hands. In The Player of Games, the Culture is described as having existed as a space-faring society for eleven thousand years. In The Hydrogen Sonata, one of these founding civilisations was named as the Buhdren Federality.
Society and culture
Economy
The Culture is a symbiotic society of artificial intelligences (AIs) (Minds and drones), humanoids and other alien species who all share equal status. All essential work is performed (as far as possible) by non-sentient devices, freeing sentients to do only things that they enjoy (administrative work requiring sentience is undertaken by the AIs using a bare fraction of their mental power, or by people who take on the work out of free choice). As such, the Culture is a post-scarcity society, where technological advances ensure that no one lacks any material goods or services. Energy is farmed from a fictitious "energy grid", and matter to build orbitals is collected mostly from asteroids. As a consequence, the Culture has no need of economic constructs such as money (as is apparent when it deals with civilisations in which money is still important). The Culture rejects all forms of economics based on anything other than voluntary activity. "Money implies poverty" is a common saying in the Culture.
Language
Marain is the Culture's shared constructed language. The Culture believes the Sapir–Whorf hypothesis that language influences thought, and Marain was designed by early Minds to exploit this effect, while also "appealing to poets, pedants, engineers and programmers". Designed to be represented either in binary or symbol-written form, Marain is also regarded as an aesthetically pleasing language by the Culture. The symbols of the Marain alphabet can be displayed in three-by-three grids of binary (yes/no, black/white) dots and thus correspond to nine-bit binary numbers.
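A minimal sketch of that correspondence (illustrative only, since the novels never specify the actual glyph-to-bit mapping) is to read the grid row by row as a nine-bit integer, giving 2^9 = 512 possible glyph values; the reading order and the sample glyph below are assumptions made for the example.

```python
# Toy encoding of a Marain-style 3x3 dot grid as a nine-bit number.
# Row-major reading order and the sample glyph are illustrative assumptions.
def grid_to_int(grid: list[list[int]]) -> int:
    """Read a 3x3 grid of 0/1 dots, row by row, into a 9-bit integer."""
    value = 0
    for row in grid:
        for dot in row:
            value = (value << 1) | dot
    return value

glyph = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
]
n = grid_to_int(glyph)
print(n, format(n, "09b"))  # 341 '101010101' -- one of 512 possible glyphs
```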
Related comments are made by the narrator in The Player of Games regarding gender-specific pronouns, which Marain speakers do not use in typical conversation unless specifying one's gender is necessary, and by general reflection on the fact that Marain places much less structural emphasis on (or even lacks) concepts like possession and ownership, dominance and submission, and especially aggression. Many of these concepts would in fact be somewhat theoretical to the average Culture citizen. Indeed, the presence of these concepts in other civilization signify the brutality and hierarchy associated with forms of empire that the Culture strives to avoid.
Marain itself is also open to encryption and dialect-specific implementations for different parts of the Culture. M1 is basic Nonary Marain, the three-by-three grid. All Culture citizens can communicate in this variant. Other variants include M8 through M16, which are encrypted by various degrees, and are typically used by the Contact Section. Higher level encryptions exist, the highest of these being M32. M32 and lower level encrypted signals are the province of Special Circumstances (SC). Use of M32 is reserved for extremely secret and reserved information and communication within Special Circumstances. That said, M32 has an air of notoriety in the Culture, and in the thoughts of most may best be articulated as "the Unbreakable, Inviolable, Holy of Holies Special Circumstances M32" as described by prospective SC agent Ulver Seich. Ships and Minds also have a slightly distasteful view of SC procedure associated with M32, one Ship Mind going so far as to object to the standard SC attitude of "Full scale, stark raving M32 don't-talk-about-this-or-we'll-pull-your-plugs-out-baby paranoia" on the use of the encryption.
Laws
There are no laws as such in the Culture. Social norms are enforced by convention (personal reputation, "good manners", and by, as described in The Player of Games, possible ostracism and involuntary supervision for more serious crimes). Minds generally refrain from using their all-seeing capabilities to influence people's reputations, though they are not necessarily themselves above judging people based on such observations, as described in Excession. Minds also judge each other, with one of the more relevant criteria being the quality of their treatment of sentients in their care. Hub Minds for example are generally nominated from well-regarded GSV (the largest class of ships) Minds, and then upgraded to care for the billions living on the artificial habitats.
The only serious prohibitions that seem to exist are against harming sentient beings, or forcing them into undertaking any act (another concept that seems unnatural to and is, in fact, almost unheard of by almost all Culture citizens). As mentioned in The Player of Games, the Culture does have the occasional "crime of passion" (as described by an Azadian) and the punishment was to be "slap-droned", or to have a drone assigned to follow the offender and "make sure [they] don't do it again".
While the enforcement in theory could lead to a Big Brother-style surveillance society, in practice social convention among the Minds prohibits them from watching, or interfering in, citizens' lives unless requested, or unless they perceive severe risk. The practice of reading a sentient's mind without permission (something the Culture is technologically easily capable of) is also strictly taboo. The whole plot of Look to Windward relies on a Hub Mind not reading an agent's mind (with certain precautions in case this rule gets violated). Minds that do so anyway are considered deviant and shunned by other Minds (see GCU Grey Area). At one point it is said that if the Culture actually had written laws, the sanctity of one's own thoughts against the intrusion of others would be the first on the books.
This gives some measure of privacy and protection; though the very nature of Culture society would, strictly speaking, make keeping secrets irrelevant: most of them would be considered neither shameful nor criminal. It does allow the Minds in particular to scheme amongst themselves in a very efficient manner, and occasionally withhold information.
Symbols
The Culture has no flag, symbol or logo. According to Consider Phlebas, people can recognize items made by the Culture implicitly, by the way they are simple, efficient and aesthetic. The main outright symbol of the Culture is its language, Marain, which is used far beyond the Culture itself. It is often employed in the galaxy as a de facto lingua franca among people who don't share a language. Marain has a similar purpose to other constructed languages encountered in utopian and dystopian fiction including Pravic in The Dispossessed and Newspeak in Nineteen Eighty-Four.
Citizens
Biological
The Culture is a posthuman society, which originally arose when seven or eight roughly humanoid space-faring species coalesced into a quasi-collective (a group-civilisation) ultimately consisting of approximately thirty trillion (short scale) sentient and sapient beings (this includes artificial intelligences). In Banks's universe, a good part (but by no means an overwhelming percentage) of all sentient species is of the "pan-human" type, as noted in Matter.
Although the Culture was originated by humanoid species, subsequent interactions with other civilisations have introduced many non-humanoid species into the Culture (including some former enemy civilisations), though the majority of the biological Culture is still pan-human. Little uniformity exists in the Culture, and its citizens are such by choice, free to change physical form and even species (though some stranger biological conversions are irreversible, and conversion from biological to artificial sentience is considered to be what is known as an Unusual Life Choice). All members are also free to join, leave, and rejoin, or indeed declare themselves to be, say, 80% Culture.
Within the novels, opponents of the Culture have argued that the role of humans in the Culture is nothing more than that of pets, or parasites on Culture Minds, and that they can have nothing genuinely useful to contribute to a society where science is close to omniscient about the physical universe, where every ailment has been cured, and where every thought can be read. Many of the Culture novels in fact contain characters (from within or without the Culture) wondering how far-reaching the Minds' dominance of the Culture is, and how much of the democratic process within it might in fact be a sham: subtly but very powerfully influenced by the Minds in much the same ways Contact and Special Circumstances influence other societies. Also, except for some mentions about a vote over the Idiran-Culture War, and the existence of a very small number of "Referrers" (humans of especially acute reasoning), few biological entities are ever described as being involved in any high-level decisions.
On the other hand, the Culture can be seen as fundamentally hedonistic (one of the main objectives for any being, including Minds, is to have fun rather than to be "useful"). Also, Minds are constructed, by convention, to care for and value human beings. While a General Contact Unit (GCU) does not strictly need a crew (and could construct artificial avatars when it did), a real human crew adds richness to its existence, and offers distraction during otherwise dull periods. In Consider Phlebas it is noted that Minds still find humans fascinating, especially their odd ability to sometimes achieve similarly advanced reasoning as their much more complex machine brains.
To a large degree, the freedoms enjoyed by humans in the Culture are only available because Minds choose to provide them. The freedoms include the ability to leave the Culture when desired, often forming new associated but separate societies with Culture ships and Minds, most notably the Zetetic Elench and the ultra-pacifist and non-interventionist Peace Faction.
Physiology
Techniques in genetics have advanced in the Culture to the point where bodies can be freed from built-in limitations. Citizens of the Culture refer to a normal human as "human-basic" and the vast majority opt for significant enhancements: severed limbs grow back, sexual physiology can be voluntarily changed from male to female and back (though the process takes time), sexual stimulation and endurance are strongly heightened in both sexes (something that is often the subject of envious debate among other species), pain can be switched off, toxins can be bypassed away from the digestive system, autonomic functions such as heart rate can be switched to conscious control, reflexes like blinking can be switched off, and bones and muscles adapt quickly to changes in gravity without the need to exercise. The degree of enhancement found in Culture individuals varies to taste, with certain of the more exotic enhancements limited to Special Circumstances personnel (for example, weapons systems embedded in various parts of the body).
Most Culture individuals opt to have drug glands that allow for hormonal levels and other chemical secretions to be consciously monitored, released and controlled. These allow owners to secrete on command any of a wide selection of synthetic drugs, from the merely relaxing to the mind-altering: "Snap" is described in Use of Weapons and The Player of Games as "The Culture's favourite breakfast drug". "Sharp Blue" is described as a utility drug, as opposed to a sensory enhancer or a sexual stimulant, that helps in problem solving. "Quicken", mentioned in Excession, speeds up the user's neural processes so that time seems to slow down, allowing them to think and have mental conversation (for example with artificial intelligences) in far less time than it appears to take to the outside observer. "Sperk", as described in Matter, is a mood- and energy-enhancing drug, while other such self-produced drugs include "Calm", "Gain", "Charge", "Recall", "Diffuse", "Somnabsolute", "Softnow", "Focal", "Edge", "Drill", "Gung", "Winnow" and "Crystal Fugue State". The glanded substances have no permanent side-effects and are non-habit-forming.
Phenotypes
For all their genetic improvements, the Culture is by no means eugenically uniform. Human members in the Culture setting vary in size, colour and shape as in reality, and with possibly even further natural differences: in the novella The State of the Art, it is mentioned that a character "looks like a Yeti", and that there is variance among the Culture in minor details such as the number of toes or of joints on each finger.
Some Culture citizens opt to leave the constraints of a human or even humanoid body altogether, opting to take on the appearance of one of the myriad other galactic sentients (perhaps in order to live with them) or even non-sentient objects as commented upon in Matter (though this process can be irreversible if the desired form is too removed from the structure of the human brain). Certain eccentrics have chosen to become drones or even Minds themselves, though this is considered rude and possibly even insulting by most humans and AIs alike.
While the Culture is generally pan-humanoid (and tends to call itself "human"), various other species and individuals of other species have become part of the Culture.
As all Culture citizens are of perfect genetic health, the very rare cases of a Culture citizen showing any physical deformity are almost certain to be a sort of fashion statement of somewhat dubious taste.
Personality
Almost all Culture citizens are very sociable and of great intellectual capability and learning, and possess very well-balanced psyches. Their biological make-up and their growing up in an enlightened society make neuroses and lesser emotions like greed or (strong) jealousy practically unknown, and produce persons that, in any lesser society, appear very self-composed and charismatic. Character traits like strong shyness, while very rare, are not fully unknown, as shown in Excession. As described there and in Player of Games, a Culture citizen who becomes dysfunctional enough to pose a serious nuisance or threat to others would be offered (voluntary) psychological adjustment therapy and might potentially find themself under constant (non-voluntary) oversight by representatives of the local Mind. In extreme cases, as described in Use of Weapons and Surface Detail, dangerous individuals have been known to be assigned a "slap-drone", a robotic follower who ensures that the person in question doesn't continue to endanger the safety of others.
Artificial
As well as humans and other biological species, sentient artificial intelligences are also members of the Culture. These can be broadly categorised into drones and Minds. Also, by custom, as described in Excession, any artefact (be it a tool or vessel) above a certain capability level has to be given sentience.
Drones
Drones are roughly comparable in intelligence and social status to that of the Culture's biological members. Their intelligence is measured against that of an average biological member of the Culture; a so-called "1.0 value" drone would be considered the mental equal of a biological citizen, whereas lesser drones such as the menial service units of Orbitals are merely proto-sentient (capable of limited reaction to unprogrammed events, but possessing no consciousness, and thus not considered citizens; these take care of much of the menial work in the Culture). The sentience of advanced drones has various levels of redundancy, from systems similar to that of Minds (though much reduced in capability) down to electronic, to mechanical and finally biochemical back-up brains.
Although drones are artificial, the parameters that prescribe their minds are not rigidly constrained, and sentient drones are full individuals, with their own personalities, opinions and quirks. Like biological citizens, Culture drones generally have lengthy names. They also have a form of sexual intercourse for pleasure, called being "in thrall", though this is an intellect-only interfacing with another sympathetic drone.
While civilian drones do generally match humans in intelligence, drones built especially as Contact or Special Circumstances agents are often several times more intelligent, and imbued with extremely powerful senses, powers and armaments (usually forcefield and effector-based, though occasionally more destructive weaponry such as lasers or, exceptionally, "knife-missiles" are referred to) all powered by antimatter reactors. Despite being purpose-built, these drones are still allowed individual personalities and given a choice in lifestyle. Indeed, some are eventually deemed psychologically unsuitable as agents (for example as Mawhrin-Skel notes about itself in The Player of Games) and must choose either mental reprofiling or demilitarisation and discharge from Special Circumstances.
Physically, drones are floating units of various sizes and shapes, usually with no visible moving parts. Drones get around the limitations of this inanimation with the ability to project "fields": both those capable of physical force, which allow them to manipulate objects, as well as visible, coloured fields called "auras", which are used to enable the drone to express emotion. There is a complex drone code based on aura colours and patterns (which is fully understood by biological Culture citizens as well). Drones have full control of their auras and can display emotions they're not feeling or can switch their aura off. The drone, Jase, in Consider Phlebas, is described as being constructed before the use of auras, and refuses to be retrofitted with them, preferring to remain inscrutable.
In size drones vary substantially: the oldest still alive (eight or nine thousand years old) tend to be around the size of humans, whereas later technology allows drones to be small enough to lie in a human's cupped palm; modern drones may be any size between these extremes according to fashion and personal preference. Some drones are also designed as utility equipment with its own sentience, such as the gelfield protective suit described in Excession.
Minds
By contrast to drones, Minds are orders of magnitude more powerful and intelligent than the Culture's other biological and artificial citizens. Typically they inhabit and act as the controllers of large-scale Culture hardware such as ships or space-based habitats. Unsurprisingly, given their duties, Minds are tremendously powerful: capable of running all of the functions of a ship or habitat, while holding potentially billions of simultaneous conversations with the citizens that live aboard them. To allow them to perform at such a high degree, they exist partially in hyperspace to get around hindrances to computing power such as the speed of light.
Some inhabited planets and all orbitals have their own Minds: sapient, hyperintelligent machines originally built by biological species, which have evolved, redesigned themselves, and become many times more intelligent than their original creators. According to Consider Phlebas, a Mind is an ellipsoid object roughly the size of a bus and weighing thousands of tons. A Mind is in fact a four-dimensional entity, meaning that the ellipsoid is only the protrusion of the larger four-dimensional device into our 'real space'.
In the Culture universe, Minds have become an indispensable part of the prevailing society, enabling much of its post-scarcity amenities by planning and automating societal functions, and by handling day-to-day administration with mere fractions of their mental power.
The main difference between Minds and other extremely powerful artificial intelligences in fiction is that they are highly humanistic and benevolent. They are so both by design, and by their shared culture. They are often even rather eccentric. Yet, by and large, they show no wish to supplant or dominate their erstwhile creators.
On the other hand, it can also be argued that to the Minds, the human-like members of the Culture amount to little more than pets, whose wants are followed on a Mind's whim. Within the series, this dynamic is played on more than once. In Excession, it is also used to put a Mind in its place: in the mythology, a Mind is still not thought to be a god, but an artificial intelligence capable of surprise, and even fear.
Although the Culture is a type of utopian anarchy, Minds most closely approach the status of leaders, and would likely be considered godlike in less rational societies. As independent, thinking beings, each has its own character, and indeed, legally (insofar as the Culture has a 'legal system'), each is a Culture citizen. Some Minds are more aggressive, some more calm; some don't mind mischief, others simply demonstrate intellectual curiosity. But above all they tend to behave rationally and benevolently in their decisions.
As mentioned before, Minds can serve several different purposes, but Culture ships and habitats have one special attribute: the Mind and the ship or habitat are perceived as one entity; in some ways the Mind is the ship, certainly from its passengers' point of view. It seems normal practice to address the ship's Mind as "Ship" (and an Orbital hub as "Hub"). However, a Mind can transfer its 'mind state' into and out of its ship 'body', and even switch roles entirely, becoming (for example) an Orbital Hub from a warship.
More often than not, the Mind's character defines the ship's purpose. Minds do not end up in roles unsuited to them; an antisocial Mind simply would not volunteer to organise the care of thousands of humans, for example.
On occasion groupings of two or three Minds may run a ship. This seems normal practice for larger vehicles such as GSVs, though smaller ships only ever seem to have one Mind.
Banks also hints at a Mind's personality becoming defined at least partially before its creation or 'birth'. Warships, as an example, are designed to revel in controlled destruction; seeing a certain glory in achieving a 'worthwhile' death also seems characteristic. The presence of human crews on board warships may discourage such recklessness, since in the normal course of things, a Mind would not risk beings other than itself.
With their almost godlike powers of reasoning and action comes a temptation to bend (or break) Cultural norms of ethical behaviour, if deemed necessary for some greater good. In The Player of Games, a Culture citizen is blackmailed, apparently by Special Circumstances Minds, into assisting the overthrow of a barbaric empire, while in Excession, a conspiracy by some Minds to start a war against an oppressive alien race nearly comes to fruition. Yet even in these rare cases, the essentially benevolent intentions of Minds towards other Culture citizens is never in question. More than any other beings in the Culture, Minds are the ones faced with the more complex and provocative ethical dilemmas.
While Minds would likely have different capabilities, especially given their widely differing ages (and thus technological sophistication), this is not a theme of the books. It might be speculated that the older Minds are upgraded to keep in step with the advances in technology, thus making this point moot. It is also noted in Matter that every Culture Mind writes its own software, thus continually improving itself and, as a side benefit, becoming much less vulnerable to outside takeover by electronic means and viruses, as every Mind's processing functions work differently.
The high computing power of the Mind is apparently enabled by thought processes (and electronics) being constantly in hyperspace (thus circumventing the light speed limit in computation). Minds do have back-up capabilities functioning with light-speed if the hyperspace capabilities fail - however, this reduces their computational powers by several orders of magnitude (though they remain sentient).
The storage capability of a GSV Mind is described in Consider Phlebas as 10^30 bytes (1 million yottabytes).
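For scale, taking 1 yottabyte = 10^24 bytes, the unit arithmetic behind the parenthetical is:

```latex
10^{30}\ \text{bytes} \;=\; 10^{30-24}\ \text{YB} \;=\; 10^{6}\ \text{YB} \;=\; \text{1 million yottabytes}
```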
The Culture is a society undergoing slow (by present-day Earth standards) but constant technological change, so the stated capacity of Minds is open to change. In the last 3,000 years, the capacity of Minds has increased considerably. By the time of the events of the novel Excession in the mid 19th century, Minds from the first millennium are referred to jocularly as minds, with a small 'm'. Their capacities only allow them to be considered equivalent to what are now known as Cores, small (in the literal physical sense) artificial intelligences used in shuttles, trans-light modules, drones, and other machines not large enough for a full-scale Mind. While still considered sentient, a mind's power at this point is considered greatly inferior to a contemporary Mind. That said, it is possible for Minds to have upgrades, improvements and enhancements given to them since construction, to allow them to remain up to date.
Using the sensory equipment available to the Culture, Minds can see inside solid objects; in principle they can also read minds by examining the cellular processes inside a living brain, but Culture Minds regard such mindreading as taboo. The only known Mind to break this taboo, the Grey Area seen in Excession, is largely ostracized and shunned by other Minds as a result. In Look to Windward an example is cited of an attempt to destroy a Culture Mind by smuggling a minuscule antimatter bomb onto a Culture orbital inside the head of a Chelgrian agent. However, the bomb ends up being spotted without the taboo being broken.
In Consider Phlebas, a typical Mind is described as a mirror-like ellipsoid of several dozen cubic metres, but weighing many thousands of tons, due to the fact that it is made up of hyper-dense matter. It is noted that most of its 'body' only exists in the real world at the outer shell, the inner workings staying constantly within hyperspace.
The Mind in Consider Phlebas is also described as having internal power sources which function as back-up shield generators and space propulsion, and given the rational, safety-conscious thinking of Minds, it would be reasonable to assume that all Minds have such features, as well as a complement of drones and other remote sensors as also described.
Other equipment available to them spans the whole range of the Culture's technological capabilities and its practically limitless resources. However, this equipment would more correctly be considered emplaced in the ship or orbital that the Mind is controlling, rather than being part of the Mind itself.
Minds are constructed entities, which have general parameters fixed by their constructors (other Minds) before 'birth', not unlike biological beings. A wide variety of characteristics can be and are manipulated, such as introversion-extroversion, aggressiveness (for warships) or general disposition.
However, the character of a Mind evolves as well, and Minds often change over the course of centuries, sometimes changing personality entirely. This is often followed by them becoming eccentric or at least somewhat odd. Others drift from the Culture-accepted ethical norms, and may even start influencing their own society in subtle ways, selfishly furthering their own views of how the Culture should act.
Minds have also been known to commit suicide to escape punishment, or because of grief.
Minds are constructed with a personality typical of the Culture's interests, i.e. full of curiosity, general benevolence (expressed in the 'good works' actions of the Culture, or in the protectiveness regarding sentient beings) and respect for the Culture's customs.
Nonetheless, Minds have their own interests in addition to what their peers expect them to do for the Culture, and may develop fascinations or hobbies like other sentient beings do.
The mental capabilities of Minds are described in Excession to be vast enough to run entire universe-simulations inside their own imaginations, exploring metamathical (a fictional branch of metamathematics) scenarios, an activity addictive enough to cause some Minds to totally withdraw from caring about our own physical reality into "Infinite Fun Space", their own, ironic and understated term for this sort of activity.
One of the main activities of Ship Minds is the guidance of spaceships from a certain minimum size upwards. A culture spaceship is the Mind and vice versa; there are no different names for the two, and a spaceship without a Mind would be considered damaged or incomplete to the Culture.
Ship Mind classes include General Systems Vehicle (GSV), Medium Systems Vehicle (MSV), Limited Systems Vehicle (LSV), General Contact Vehicle (GCV), General Contact Unit (GCU), Limited Contact Unit (LCU), Rapid Offensive Unit (ROU), General Offensive Unit (GOU), Limited Offensive Unit (LOU), Demilitarised ROU (dROU), Demilitarised GOU (dGOU), Demilitarised LOU (dLOU), Very Fast Picket (VFP, a synonym for dROU), Fast Picket (FP, a synonym for dGOU or dLOU), and Superlifter.
These ships provide a convenient 'body' for a Mind, which is too large and too important to be contained within smaller, more fragile shells. Following the 'body' analogy, it also provides the Mind with the capability of physical movement. As Minds are living beings with curiosity, emotion and wishes of their own, such mobility is likely very important to most.
Culture Minds (mostly also being ships) usually give themselves whimsical names, though these often hint at their function as well. Even the names of warships retain this humorous approach, though the implications are much darker.
Some Minds also take on functions which either preclude or discourage movement. These usually administer various types of Culture facilities:
Orbital Hubs – A Culture Orbital is a smaller version of a ringworld, with large numbers of people living on the inside surface of them, in a planet-like environment.
Rocks – Minds in charge of planetoid-like structures, built/accreted, mostly from the earliest times of the Culture before it moved into space-built orbitals.
Stores – Minds of a quiet temperament run these asteroids, containing vast hangars, full of mothballed military ships or other equipment. Some 'Rocks' also act as 'Stores'.
University Sages – Minds that run Culture universities / schools, a very important function as every Culture citizen has an extensive education and further learning is considered one of the most important reasons for life in the Culture.
Eccentric – Culture Minds who have become "... a bit odd" (as compared to the very rational standards of other Culture Minds). Existing at the fringe of the Culture, they can be considered (and consider themselves) as somewhat, but not wholly part of the Culture.
Sabbaticaler – Culture Minds who have decided to abdicate from their peer-pressure based duties in the Culture for a time.
Ulterior – Minds of the Culture Ulterior, an umbrella term for all the no-longer-quite-Culture factions.
Converts – Minds (or sentient computers) from other societies who have chosen to join the Culture.
Absconder – Minds who have completely left the Culture, especially when in doing so having deserted some form of task.
Deranged – A more extreme version of Eccentric as implied in The Hydrogen Sonata
Minds (and, as a consequence, Culture starships) usually bear names that do a little more than just identify them. The Minds themselves choose their own names, and thus they usually express something about a particular Mind's attitude, character or aims in their personal life. They range from funny to just plain cryptic. Some examples are:
Sanctioned Parts List – a habitation / factory ship
So Much For Subtlety – a habitation / factory ship
All Through With This Niceness And Negotiation Stuff – a warship
Attitude Adjuster – a warship
Of Course I Still Love You – an ambassador ship
Funny, It Worked Last Time... – an ambassador ship
Names
Some humanoid or drone Culture citizens have long names, often with seven or more words. Some of these words specify the citizen's origin (place of birth or manufacture), some an occupation, and some may denote specific philosophical or political alignments (chosen later in life by the citizen themselves), or make other similarly personal statements. An example would be Diziet Sma, whose full name is Rasd-Coduresa Diziet Embless Sma da' Marenhide:
Rasd-Coduresa is the planetary system of her birth, and the specific object (planet, orbital, Dyson sphere, etc.). The -sa suffix is roughly equivalent to -er in English. By this convention, Earth humans would all be named Sun-Earthsa (or Sun-Earther).
Diziet is her given name. This is chosen by a parent, usually the mother.
Embless is her chosen name. Most Culture citizens choose this when they reach adulthood (according to The Player of Games this is known as "completing one's name"). As with all conventions in the Culture, it may be broken or ignored: some change their chosen name during their lives, some never take one.
Sma is her surname, usually taken from one's mother.
da' Marenhide is the house or estate she was raised within, the da or dam being similar to von in German. (The usual formation is dam; da is used in Sma's name because the house name begins with an M, eliding an awkward phoneme repetition.)
Iain Banks gave his own Culture name as "Sun-Earther Iain El-Bonko Banks of North Queensferry".
Death
The Culture has a relatively relaxed attitude towards death. Genetic manipulation and the continual benevolent surveillance of the Minds make natural or accidental death almost unknown. Advanced technology allows citizens to make backup copies of their personalities, allowing them to be resurrected in case of death. The form of that resurrection can be specified by the citizen, with personalities returning either in the same biological form, in an artificial form (see below), or even just within virtual reality. Some citizens choose to go into "storage" (a form of suspended animation) for long periods of time, out of boredom or curiosity about the future.
Attitudes individual citizens have towards death are varied (and have varied throughout the Culture's history). While many, if not most, citizens make some use of backup technology, many others do not, preferring instead to risk death without the possibility of recovery (for example when engaging in extreme sports). These citizens are sometimes called "disposables", and are described in Look to Windward. Taking into account such accidents, voluntary euthanasia for emotional reasons, or choices like sublimation, the average lifespan of humans is described in Excession as being around 350 to 400 years. Some citizens choose to forgo death altogether, although this is rarely done and is viewed as an eccentricity. Other options instead of death include conversion of an individual's consciousness into an AI, joining of a group mind (which can include biological and non-biological consciousnesses), or subliming (usually in association with a group mind).
Concerning the lifespan of drones and Minds, given the durability of Culture technology and the option of mindstate backups, it is reasonable to assume that they live as long as they choose. Even Minds, for all their complexity, are known to be backed up (and reactivated if, for example, they die on a risky mission; see the GSV Lasting Damage). Minds do not necessarily live forever either, often choosing eventually to sublime or even to kill themselves (as does the double-Mind GSV Lasting Damage as a consequence of its choices in the Culture-Idiran war).
Science and technology
Anti-gravity and forcefields
The Culture (and other societies) have developed powerful anti-gravity abilities, closely related to their ability to manipulate forces themselves.
With this ability they can create action at a distance, including forces capable of pushing, pulling, cutting and even fine manipulation, and forcefields for protection, visual display or plain destructive power. Such applications still retain restrictions on range and power: while forcefields of many cubic kilometres are possible (and in fact, orbitals are held together by forcefields), even in the chronologically later novels, such as Look to Windward, spaceships are still used for long-distance travel and drones for many remote activities.
With the control of a Mind, fields can be manipulated over vast distances. In Use of Weapons, a Culture warship uses its electromagnetic effectors to hack into a computer light years away.
Artificial intelligence
Artificial intelligences (and to a lesser degree, the non-sentient computers omnipresent in all material goods), form the backbone of the technological advances of the Culture. Not only are they the most advanced scientists and designers the Culture has, their lesser functions also oversee the vast (but usually hidden) production and maintenance capabilities of the society.
The Culture has achieved artificial intelligences where each Mind has thought processing capabilities many orders of magnitude beyond that of human beings, and data storage drives which, if written out on paper and stored in filing cabinets, would cover thousands of planets skyscraper high (as described by one Mind in Consider Phlebas). Yet it has managed to condense these entities to a volume of several dozen cubic metres (though much of the contents and the operating structure are continually in hyperspace). Minds also demonstrate reaction times and multitasking abilities orders of magnitude greater than any sentient being; armed engagements between Culture and equivalent technological civilisations sometimes occur in timeframes as short as microseconds, and standard Orbital Minds are capable of running all of the vital systems on the Orbital while simultaneously conversing with millions of the inhabitants and observing phenomena in the surrounding regions of space.
At the same time, it has achieved drone sentience and capability of Special Circumstances proportions in forms that could fit easily within a human hand, and built extremely powerful (though not sentient) computers capable of fitting into tiny insect-like drones. Some utilitarian devices (such as spacesuits) are also provided with artificial sentience. These specific types of drones, like all other Culture AI, would also be considered citizens – though as described in the short story "Descendant", they may spend most of the time when their "body" is not in use in a form of remote-linked existence outside of it, or in a form of AI-level virtual reality.
Energy manipulation
A major feature of its post-scarcity society, the Culture is obviously able to gather, manipulate, transfer and store vast amounts of energy. While not explained in detail in the novels, this involves antimatter and the "energy grid", a postulated energy field dividing the universe from neighboring anti-matter universes, and providing practically limitless energy. Transmission or storage of such energy is not explained, though these capabilities must be powerful as well, with tiny drones capable of very powerful manipulatory fields and forces.
The Culture also uses various forms of energy manipulation as weapons, with "gridfire", a method of creating a dimensional rift to the energy grid, releasing astronomical amounts of energy into a region of non-hyperspace, being described as a sort of ultimate weapon more destructive than collapsed antimatter bombardment. One character in Consider Phlebas refers to gridfire as "the weaponry of the end of the universe". Gridfire resembles the zero-point energy used within many popular science fiction stories.
Matter displacement
The Culture (at least by the time of The Player of Games) has developed a form of teleportation capable of transporting both living and unliving matter instantaneously via wormholes. This technology has not rendered spacecraft obsolete – in Excession a barely apple-sized drone was displaced no further than a light-second at maximum range (mass being a limiting factor determining range), a tiny distance in galactic terms. The process also still has a very small chance of failing and killing living beings, but the chance is described as being so small (1 in 61 million) that it normally only becomes an issue when transporting a large number of people and is only regularly brought up due to the Culture's safety conscious nature.
Displacement is an integral part of Culture technology, being widely used for a range of applications from peaceful to belligerent. Displacing warheads into or around targets is one of the main forms of attack in space warfare in the Culture universe. The Player of Games mentions that drones can be displaced to catch a person falling from a cliff before they impact the ground, as well.
Brain–computer interfaces
Through "neural lace", a form of brain–computer interface that is implanted into the brains of young people and grows with them, the Culture has the capability to read and store the full sentience of any being, biological or artificial, and thus reactivate a stored being after its death. The neural lace also allows wireless communication with the Minds and databases. This also necessitates the capability to read thoughts, but as described in Look to Windward, doing this without permission is considered taboo.
Starships and warp drives
Starships are living spaces, vehicles and ambassadors of the Culture. A proper Culture starship (as defined by hyperspace capability and the presence of a Mind to inhabit it) may range from several hundreds of metres to hundreds of kilometres. The latter may be inhabited by billions of beings and are artificial worlds in their own right, including whole ecosystems, and are considered to be self-contained representations of all aspects of Culture life and capability.
The Culture (and most other space-faring species in its universe) use a form of Hyperspace-drive to achieve faster-than-light speeds. Banks has evolved a (self-confessedly) technobabble system of theoretical physics to describe the ships' acceleration and travel, using such concepts as "infraspace" and "ultraspace" and an "energy grid" between universes (from which the warp engines "push off" to achieve momentum). An "induced singularity" is used to access infra or ultra space from real space; once there, "engine fields" reach down to the Grid and gain power and traction from it as they travel at high speeds.
These hyperspace engines do not use reaction mass and hence do not need to be mounted on the surface of the ship. They are described as being very dense exotic matter, which only reveals its complexity under a powerful microscope. Acceleration and maximum speed depend on the ratio of the mass of the ship to its engine mass. As with any other matter aboard, ships can gradually manufacture extra engine volume or break it down as needed. In Excession, one of the largest ships of the Culture redesigns itself to be mostly engine and reaches a speed of 233,000 times lightspeed; even more impressively, this is later discovered to be achieved by combining the hyperspace engine fields of thousands of semi-slaved warships, constructed in secret and housed out of view within the ship itself. Within the range of the Culture's influence in the galaxy, most ships would still take years of travelling to reach the more remote spots.
Other than the engines used by larger Culture ships, there are a number of other propulsion methods such as gravitic drive at sublight speeds, with antimatter, fusion and other reaction engines occasionally seen with less advanced civilisations, or on Culture hobby craft.
Warp engines can be very small, with Culture drones barely larger than fist-size described as being thus equipped. There is also at least one (apparently non-sentient) species (the "Chuy-Hirtsi" animal), that possesses the innate capability of warp travel. In Consider Phlebas, it is being used as a military transport by the Idirans, but no further details are given.
Nanotechnology
The Culture has highly advanced nanotechnology, though descriptions of such technology in the books are limited. Many of the described uses are by or for Special Circumstances, but there are no indications that the use of nanotechnology is limited in any way. (In a passage in one of the books, there is a brief reference to the question of sentience when comparing the human brain with a "pico-level substrate".)
One of the primary clandestine uses of nanotechnology is information gathering. The Culture likes to be in the know, and as described in Matter "they tend to know everything." Aside from its vast network of sympathetic allies and wandering Culture citizens one of the primary ways that the Culture keeps track of important events is by the use of practically invisible nanobots capable of recording and transmitting their observations. This technique is described as being especially useful to track potentially dangerous people (such as ex-Special Circumstances agents). Via such nanotechnology, it is potentially possible for the Culture (or similarly advanced societies) to see everything happening on a given planet, orbital or any other habitat. The usage of such devices is limited by various treaties and agreements among the Involved.
In addition, EDust assassins are potent Culture terror weapons, composed entirely of nano machines called EDust, or "Everything Dust." They are capable of taking almost any shape or form, including swarms of insects or entire humans or aliens, and possess powerful weaponry capable of levelling entire buildings.
Living space
Much of the Culture's population lives on orbitals, vast artificial worlds that can accommodate billions of people. Others travel the galaxy in huge space ships such as General Systems Vehicles (GSVs) that can accommodate hundreds of millions of people. Almost no Culture citizens are described as living on planets, except when visiting other civilisations. The reason for this is partly because the Culture believes in containing its own expansion to self-constructed habitats, instead of colonising or conquering new planets. With the resources of the universe allowing permanent expansion (at least assuming non-exponential growth), this frees them from having to compete for living space.
The Culture, and other civilisations in Banks' universe, are described as living in these various, often constructed habitats:
Airspheres
These are vast, brown dwarf-sized bubbles of atmosphere enclosed by force fields, and (presumably) set up by an ancient advanced race at least one and a half billion years ago (see: Look to Windward). There is only minimal gravity within an airsphere. They are illuminated by moon-sized orbiting planetoids that emit enormous light beams.
Citizens of the Culture live there only very occasionally as guests, usually to study the complex ecosystem of the airspheres and the dominant life-forms: the "dirigible behemothaurs" and "gigalithine lenticular entities", which may be described as inscrutable, ancient intelligences looking similar to a cross between gigantic blimps and whales. The airspheres slowly migrate around the galaxy, taking anywhere from 50 to 100 million years to complete one circuit. In the novels no one knows who created the airspheres or why, but it is presumed that whoever did has long since sublimed but may maintain some obscure link with the behemothaurs and lenticular entities. Guests in the airspheres are not allowed to use any force-field technology, though no reason has been offered for this prohibition.
The airspheres resemble in some respects the orbit-sized ring of breathable atmosphere created by Larry Niven in The Integral Trees, but they are spherical rather than toroidal, require a force field to retain their integrity, and arose by artificial rather than natural processes.
Orbitals
One of the main types of habitats of the Culture, an orbital is a ring structure orbiting a star, akin to a larger version of a Bishop ring megastructure. Unlike a ringworld or a Dyson sphere, an orbital does not enclose the star (being much too small). Like a ringworld, the orbital rotates to provide an analog of gravity on the inner surface. A Culture orbital rotates about once every 24 hours and produces a gravity-like effect about the same as the gravity of Earth, which makes the diameter of the ring nearly five times the diameter of the Moon's orbit around Earth and ensures that the inhabitants experience night and day. Orbitals feature prominently in many Culture stories.
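The quoted proportions can be checked with the standard centripetal-acceleration relation a = ω²r. The short sketch below (illustrative only, added here rather than taken from the novels) recovers both the approximate ring diameter and the comparison with the Moon's orbit:

```python
import math

# radius giving 1 g of spin "gravity" at one rotation per 24 hours
g = 9.81                                 # m/s^2, Earth surface gravity
T = 24 * 3600                            # rotation period in seconds
omega = 2 * math.pi / T                  # angular velocity, rad/s
r = g / omega**2                         # from a = omega^2 * r

diameter_km = 2 * r / 1000
moon_orbit_diameter_km = 2 * 384_400     # mean Earth-Moon distance is ~384,400 km
print(f"orbital diameter ≈ {diameter_km / 1e6:.1f} million km")                       # ≈ 3.7
print(f"≈ {diameter_km / moon_orbit_diameter_km:.1f} × the Moon's orbital diameter")  # ≈ 4.8
```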
Planets
Though many other civilisations in the Culture books live on planets, the Culture as currently developed has little direct connection to on-planet existence. Banks has written that he presumes this to be an inherent consequence of space colonisation, and a foundation of the liberal nature of the Culture. A small number of home worlds of the founding member-species of the Culture receive a mention in passing, and a few hundred human-habitable worlds were colonised (some of them terraformed) before the Culture elected to turn towards artificial habitats, preferring to keep the planets it encounters wild. Since then, the Culture has come to look down on terraforming as inelegant, ecologically problematic and possibly even immoral. Less than one per cent of the population of the Culture lives on planets, and many find the very concept somewhat bizarre.
This attitude is not absolute though; in Consider Phlebas, some Minds suggest testing a new technology on a "spare planet" (knowing that it could be destroyed in an antimatter explosion if unsuccessful). One could assume – from Minds' usual ethics – that such a planet would have been lifeless to start with. It is also quite possible, even probable, that the suggestion was not made in complete seriousness.
Rings
Ringworld-like megastructures exist in the Culture universe; the texts refer to them simply as "Rings" (with a capital R). As opposed to the smaller orbitals which revolve around a star, these structures are massive and completely encircle a star. Banks does not describe these habitats in detail, but records one as having been destroyed (along with three Spheres) in the Idiran-Culture war. In Matter, the Morthanveld people possess ringworld-like structures made of innumerable various-sized tubes. Those structures, like Niven's Ringworld, encircle a star and are about the same size.
Rocks
These are asteroids and other non-planetary bodies hollowed out for habitation and usually spun for centrifugal artificial gravity. Rocks (with the exception of those used for secretive purposes) are described as having faster-than-light space drives, and thus can be considered a special form of spaceship. Like Orbitals, they are usually administered by one or more Minds.
Rocks do not play a large part in most of the Culture stories, though their use as storage for mothballed military ships (Pittance) and habitats (Phage Rock, one of the founding communities of the Culture) are both key plot points in Excession.
Shellworlds
Shellworlds are introduced in Matter, and consist of multilayered levels of concentric spheres in four dimensions held up by countless titanic interior towers. Their extra dimensional characteristics render some products of Culture technology too dangerous to use and yet others ineffective, notably access to hyperspace. About 4000 were built millions of years ago as vast machines intended to cast a forcefield around the whole of the galaxy for unknown purposes; less than half of those remain at the time of Matter, many having been destroyed by a departed species known as the Iln. The species that developed this technology, known as the Veil or the Involucra, are now lost, and many of the remaining shellworlds have become inhabited, often by many different species throughout their varying levels. Many still hold deadly secret defence mechanisms, often leading to great danger for their new inhabitants, giving them one of their other nicknames: Slaughter Worlds.
Ships
Ships in the Culture are intelligent individuals, often of very large size, controlled by one or more Minds. The ship is considered by the Culture generally and the Mind itself to be the Mind's body (compare avatars). Some ships (GSVs, for example) are tens or even hundreds of kilometres in length and may have millions or even billions of residents who live on them full-time; together with Orbitals, such ships represent the main form of habitat for the Culture. Such large ships may temporarily contain smaller ships with their own populations, and/or manufacture such ships themselves.
In Use of Weapons, the protagonist Zakalwe is allowed to acclimatise himself to the Culture by wandering for days through the habitable levels of a ship (the GSV Size Isn't Everything), eating and sleeping at the many locations which provide food and accommodation throughout the structure and enjoying the various forms of contact possible with the friendly and accommodating inhabitants.
Spheres
Dyson spheres also exist in the Culture universe but receive only passing mention as "Spheres". Three spheres are recorded as having been destroyed in the Idiran-Culture war.
Interaction with other civilisations
The Culture, living mostly on massive spaceships and in artificial habitats, and also feeling no need for conquest in the typical sense of the word, possesses no borders. Its sphere of influence is better defined by the (current) concentration of Culture ships and habitats as well as the measure of effect its example and its interventions have already had on the "local" population of any galactic sector. As the Culture is also a very graduated and constantly evolving society, its societal boundaries are also constantly in flux (though they tend to be continually expanding during the novels), peacefully "absorbing" societies and individuals.
While the Culture is one of the most advanced and most powerful of all galactic civilisations, it is still but one of the "high-level Involved" (called "Optimae" by some less advanced civilisations), the most powerful non-sublimed civilisations which mentor or control the others.
An Involved society is a highly advanced group that has achieved galaxy-wide involvement with other cultures or societies. There are a few dozen Involved societies and hundreds or thousands of well-developed (interstellar) but insufficiently influential societies or cultures. The well-developed societies which do not take a dynamic role in the galaxy as a whole are designated as "galactically mature". In the novels, the Culture might be considered the premier Involved society, or at least the most dynamic and energetic, especially given that the Culture itself is a growing multicultural fusion of Involved societies.
The Involved are contrasted with the Sublimed, groups that have reached a high level of technical development and galactic influence but subsequently abandoned physical reality, ceasing to take serious interventionist interest in galactic civilisation. They are also contrasted with what some Culture people loosely refer to as "barbarians", societies of intelligent beings which lack the technical capacity to know about or take a serious role in their interstellar neighbourhood. There are also the elder civilisations, which are civilisations that reached the required level of technology for sublimation, but chose not to, and have retreated from the larger galactic meta-civilisation.
The Involved are also contrasted with hegemonising swarms (a term used in several of Banks' Culture novels). These are entities that exist to convert as much of the universe as possible into more of themselves; most typically these are technological in nature, resembling more sophisticated forms of grey goo, but the term can be applied to cultures that are sufficiently single-minded in their devotion to mass conquest, control, and colonisation. Both the Culture and the author (in his Notes on the Culture) find this behaviour quixotic and ridiculous. Most often, societies categorised as hegemonising swarms consist of species or groups newly arrived in the galactic community with highly expansionary and exploitative goals. The usage of the term "hegemonising swarm" in this context is considered derisive in the Culture and among other Involved and is used to indicate their low regard for those with these ambitions by comparing their behaviour to that of mindless self-replicating technology. The Culture's central moral dilemma regarding intervention in other societies can be construed as a conflict between the desire to help others and the desire to avoid becoming a hegemonising swarm themselves.
Foreign policy
Although they lead a comfortable life within the Culture, many of its citizens feel a need to be useful and to belong to a society that does not merely exist for their own sake but that also helps improve the lot of sentient beings throughout the galaxy. For that reason the Culture carries out "good works", covertly or overtly interfering in the development of lesser civilisations, with the main aim to gradually guide them towards less damaging paths. As Culture citizens see it, these good works provide the Culture with a "moral right to exist".
A group within the Culture, known as Contact, is responsible for its interactions (diplomatic or otherwise) with other civilisations. Non-Contact citizens are apparently not prevented from travelling or interacting with other civilisations, though the effort and potential danger involved in doing so alone makes it much more commonly the case for Culture people simply to join Contact if they long to "see the world". Further within Contact, an intelligence organisation named Special Circumstances exists to deal with interventions which require more covert behaviour; the interventionist approach that the Culture takes to advancing other societies may often create resentment in the affected civilisations and thus requires a rather delicate touch (see: Look to Windward).
In Matter, it is described that there are a number of other galactic civilisations that come close to or potentially even surpass the Culture in power and sophistication. The Culture is very careful and considerate of these groupings, and while still trying to convince them of the Culture ideal, will be much less likely to openly interfere in their activities.
In Surface Detail, three more branches of Contact are described: Quietus, the Quietudinal Service, whose purview is dealing with those entities who have retired from biological existence into digital form and/or those who have died and been resurrected; Numina, which is described as having the charge of contact with races that have sublimed; and Restoria, a subset of Contact which focuses on containing and negating the threat of swarms of self-replicating creatures ("hegswarms").
Behaviour in war
While the Culture is normally pacifist, Contact historically acts as its military arm in times of war and Special Circumstances can be considered its secret service and its military intelligence. During war, most of the strategic and tactical decisions are taken by the Minds, with apparently only a small number of especially gifted humans, the "Referrers", being involved in the top-level decisions, though they are not shown outside Consider Phlebas. It is shown in Consider Phlebas that actual decisions to go to war (as opposed to purely defensive actions) are based on a vote of all Culture citizens, presumably after vigorous discussion within the whole society.
It is described in various novels that the Culture is extremely reluctant to go to war, though it may start to prepare for it long before its actual commencement. In the Idiran-Culture War (possibly one of the most hard-fought wars for the normally extremely superior Culture forces), various star systems, stellar regions and many orbital habitats were overrun by the Idirans before the Culture had converted enough of its forces to military footing. The Culture Minds had had enough foresight to evacuate almost all its affected citizens (apparently numbering in the many billions) in time before actual hostilities reached them. As shown in Player of Games, this is a standard Culture tactic, with its strong emphasis on protecting its citizens rather than sacrificing some of them for short-term goals.
War within the Culture is mostly fought by the Culture's sentient warships, the most powerful of these being war-converted GSVs, which are described as powerful enough to oppose whole enemy fleets. The Culture has little use for conventional ground forces (as it rarely occupies enemy territory); combat drones equipped with knife missiles do appear in Descendant and "terror weapons" (basically intelligent, nano-form assassins) are mentioned in Look to Windward, while infantry combat suits of great power (also usable as capable combat drones when without living occupants) are used in Matter.
Relevance to real-world politics
The inner workings of the Culture are not described in much detail, though it is shown that the society is populated by an empowered, educated and augmented citizenry in a direct democracy or a highly democratic and transparent system of self-governance. In comparisons to the real world, intended or not, the Culture could resemble various posited egalitarian societies, including those in the writings of Karl Marx (the end condition of communism after a withering away of the state), the anarchism of Bakunin and Fourier et al., libertarian socialism, council communism and anarcho-communism. Other characteristics of the Culture that are recognisable in real-world politics include pacifism, post-capitalism, and transhumanism. Banks deliberately portrayed an imperfect utopia whose imperfection or weakness is related to its interaction with the 'other', that is, exterior civilisations and species that are sometimes variously warred with or mishandled through the Culture's Contact section, which cannot always control its intrigues and the individuals it either 'employs' or interacts with. This 'dark side' of the Culture also alludes to or echoes mistakes and tragedies in 20th-century Marxist–Leninist countries, although the Culture is generally portrayed as far more 'humane' and just.
Utopia
Comparisons are often made between the Culture and twentieth- and twenty-first-century Western civilisation and nation-states, particularly their interventions in less-developed societies. Such comparisons are often entangled with assumptions about the author's own politics.
Ben Collier has said that the Culture is a utopia carrying significantly greater moral legitimacy than what are, by comparison, the West's proto-democracies. While Culture interventions can seem similar at first to Western interventions, especially when considered alongside their democratising rhetoric, the argument is that the Culture operates completely without material need, and therefore without the possibility of baser motives. This is not to say that the Culture's motives are purely altruistic; a peaceful, enlightened universe full of good neighbours lacking ethnic, religious, and sexual chauvinisms is in the Culture's interest as well. Furthermore, the Culture's ideals, in many ways similar to those of the liberal perspective today, are to a much larger extent realised internally in comparison to the West.
Criticism
Examples include the use of mercenaries to perform work with which the Culture does not want to dirty its own hands, and even outright threats of invasion (the Culture has issued ultimatums to other civilisations before). Some commentators have also argued that those Special Circumstances agents tasked with civilising foreign cultures (and thus potentially also changing them into a blander, more Culture-like state) are also those most likely to regret these changes, with parallels drawn to real-world special forces trained to operate within the cultural mindsets of foreign nations.
The events of Use of Weapons are an example of just how dirty Special Circumstances will play in order to get its way, and the conspiracy at the heart of the plot of Excession demonstrates how at least some Minds are prepared to risk killing sentient beings when they conclude that these actions are beneficial for the long-term good. Special Circumstances represents a very small fraction of Contact, which itself is only a small fraction of the entire Culture, making it comparable in size and influence to modern intelligence agencies.
Issues raised
The Culture stories are largely about problems and paradoxes that confront liberal societies. The Culture itself is an "ideal-typical" liberal society; that is, as pure an example as one can reasonably imagine. It is highly egalitarian; the liberty of the individual is its most important value; and all actions and decisions are expected to be determined according to a standard of reasonability and sociability inculcated into all people through a progressive system of education. It is a society so beyond material scarcity that for almost all practical purposes its people can have and do what they want. If they do not like the behaviour or opinions of others, they can easily move to a more congenial Culture population centre (or Culture subgroup), and hence there is little need to enforce codes of behaviour.
Even the Culture has to compromise its ideals where diplomacy and its own security are concerned. Contact, the group that handles these issues, and Special Circumstances, its secret service division, can employ only those on whose talents and emotional stability it can rely, and may even reject self-aware drones built for its purposes that fail to meet its requirements. Hence these divisions are regarded as the Culture's elite and membership is widely regarded as a prize; yet also something that can be shameful as it contradicts many of the Culture's moral codes.
Within Contact and Special Circumstances, there are also inner circles that can take control in crises, somewhat contradictory to the ideal notions of democratic and open process the Culture espouses. Contact and Special Circumstances may suppress or delay the release of information, for example to avoid creating public pressure for actions they consider imprudent or to prevent other civilisations from exploiting certain situations.
In dealing with less powerful regressive civilisations, the Culture usually intervenes discreetly, for example by protecting and discreetly supporting the more liberal elements, or subverting illiberal institutions. For instance, in Use of Weapons, the Culture operates within a less advanced illiberal society through control of a business cartel which is known for its humanitarian and social development investments, as well as generic good Samaritanism. In Excession, a sub-group of Minds conspires to provoke a war with the extremely sadistic Affront, although the conspiracy is foiled by a GSV that is a deep cover Special Circumstances agent. Only one story, Consider Phlebas, pits the Culture against a highly illiberal society of approximately equal power: the aggressive, theocratic Idirans. Though they posed no immediate, direct threat to the Culture, the Culture declared war because it would have felt useless if it allowed the Idirans' ruthless expansion to continue. The Culture's decision was a value-judgement rather than a utilitarian calculation, and the "Peace Faction" within the Culture seceded. Later in the timeline of the Culture's universe, the Culture has reached a technological level at which most past civilisations have Sublimed, in other words disengaged from Galactic politics and from most physical interaction with other civilisations. The Culture continues to behave "like an idealistic adolescent".
As of 2008, three stories force the Culture to consider its approach to more powerful civilisations. In one incident during the Culture–Idiran War, they strive to avoid offending a civilisation so advanced that it has disengaged from Galactic politics, and note that this hyper-advanced society is not a threat to either the welfare or the values of the Culture. In Excession, an overwhelmingly more powerful individual from an extremely advanced civilisation is simply passing through on its way from one plane of the physical Reality to another, and there is no real interaction. In the third case it sets up teams to study a civilisation that is not threatening but is thought to have eliminated aggressors in the past.
List of books describing the Culture
Banks on the Culture
When asked in Wired magazine (June 1996) whether mankind's fate depends on having intelligent machines running things, as in the Culture, Banks replied:
In a 2002 interview with Science Fiction Weekly magazine, when asked:
Banks replied:
Notes
References
Further reading
Anarchist fiction
Communism in fiction
Fiction about artificial intelligence
Fiction about wormholes
Fiction about cyborgs
Fiction about robots
Fiction about genetic engineering
Fiction about consciousness transfer
Fiction about nanotechnology
Fictional civilizations
Science fiction book series
Space opera
Utopian fiction | The Culture | [
"Materials_science",
"Engineering",
"Biology"
] | 14,711 | [
"Fiction about cyborgs",
"Genetic engineering",
"Fiction about genetic engineering",
"Fiction about nanotechnology",
"Cyborgs",
"Nanotechnology"
] |
58,648 | https://en.wikipedia.org/wiki/Faraday%20constant | In physical chemistry, the Faraday constant (symbol , sometimes stylized as ℱ) is a physical constant defined as the quotient of the total electric charge (q) by the amount (n) of elementary charge carriers in any given sample of matter: it is expressed in units of coulombs per mole (C/mol).
As such, it represents the "molar elementary charge", that is, the electric charge of one mole of elementary carriers (e.g., protons). It is named after the English scientist Michael Faraday. Since the 2019 revision of the SI, the Faraday constant has an exactly defined value, the product of the elementary charge (e, in coulombs) and the Avogadro constant (NA, in reciprocal moles): F = e × NA ≈ 96485.332 C/mol.
Derivation
The Faraday constant can be thought of as the conversion factor between the mole (used in chemistry) and the coulomb (used in physics and in practical electrical measurements), and is therefore of particular use in electrochemistry. Because there are exactly NA = 6.02214076 × 10²³ entities per mole, and exactly 1/(1.602176634 × 10⁻¹⁹) elementary charges per coulomb, the Faraday constant is given by the quotient of these two quantities: F = (6.02214076 × 10²³ mol⁻¹) / (1/(1.602176634 × 10⁻¹⁹ C)) ≈ 96485.332 C/mol.
One common use of the Faraday constant is in electrolysis calculations. One can divide the amount of charge (the current integrated over time) by the Faraday constant in order to find the chemical amount of a substance (in moles) that has been electrolyzed.
The value of F was first determined in the 1800s by weighing the amount of silver deposited in an electrochemical reaction, in which a measured current was passed for a measured time, and using Faraday's law of electrolysis. Until about 1970, the most reliable value of the Faraday constant was determined by a related method of electro-dissolving silver metal in perchloric acid.
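As a worked illustration of such an electrolysis calculation (the current, time and silver molar-mass figures below are example values chosen for the sketch, not data from this article):

```python
# amount electrolysed: n = Q / (z * F); deposited mass: m = n * M
F = 96485.332          # C/mol, Faraday constant
M_Ag = 107.87          # g/mol, molar mass of silver
z = 1                  # one electron transferred per Ag+ ion reduced

I = 0.500              # A, example measured current
t = 3600.0             # s, example duration (one hour)

Q = I * t              # total charge passed, in coulombs
n = Q / (z * F)        # chemical amount of silver deposited, in moles
m = n * M_Ag           # deposited mass, in grams
print(f"{Q:.0f} C deposits {m:.3f} g of silver")   # 1800 C deposits ≈ 2.012 g
```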
Other common units
96.485 kJ per volt–gram-equivalent
23.061 kcal per volt–gram-equivalent
26.801 A·h/mol
Faraday – a unit of charge
Related to the Faraday constant is the "faraday", a unit of electrical charge. Its use is much less common than of the coulomb, but is sometimes used in electrochemistry. One faraday of charge is the charge of one mole of elementary charges (or of negative one mole of electrons), that is,
1 faraday = F × 1 mol ≈ 96485.332 C.
Conversely, the Faraday constant F equals 1 faraday per mole.
The faraday is not to be confused with the farad, an unrelated unit of capacitance (symbol F).
See also
Farad, the unit of electrical capacitance
Faraday efficiency
Faraday's laws of electrolysis
Faraday cup
References
Electrochemical concepts
Physical constants
Michael Faraday
Units of electrical charge
Units of amount of substance
Quotients | Faraday constant | [
"Physics",
"Chemistry",
"Mathematics"
] | 594 | [
"Physical quantities",
"Electric charge",
"Quotients",
"Quantity",
"Intensive quantities",
"Electrochemical concepts",
"Physical constants",
"Electrochemistry",
"Arithmetic",
"Units of electrical charge",
"Molar quantities",
"Units of measurement"
] |
58,650 | https://en.wikipedia.org/wiki/Base%20unit%20of%20measurement | A base unit of measurement (also referred to as a base unit or fundamental unit) is a unit of measurement adopted for a base quantity. A base quantity is one of a conventionally chosen subset of physical quantities, where no quantity in the subset can be expressed in terms of the others. The SI base units, or Systéme International d'unités, consists of the metre, kilogram, second, ampere, kelvin, mole and candela.
A unit multiple (or multiple of a unit) is an integer multiple of a given unit; likewise a unit submultiple (or submultiple of a unit) is a submultiple or a unit fraction of a given unit.
Unit prefixes denote common power-of-ten (or power-of-two) multiples and submultiples of units.
While a base unit is one that has been explicitly so designated,
a derived unit is a unit for a derived quantity, involving the combination of quantities with different units; several SI derived units are specially named.
A coherent derived unit involves no conversion factors.
Background
In the language of measurement, physical quantities are quantifiable aspects of the world, such as time, distance, velocity, mass, temperature, energy, and weight, and units are used to describe their magnitude or quantity. Many of these quantities are related to each other by various physical laws, and as a result the units of such quantities can generally be expressed as products of powers of other units; for example, momentum is mass multiplied by velocity, while velocity is distance divided by time. These relationships are discussed in dimensional analysis. Those that can be expressed in this fashion in terms of the base units are called derived units.
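For instance, writing the base dimensions of mass, length and time as M, L and T, the dimension of momentum follows directly from those of its factors (a worked example added here for illustration):

\[
[v] = \mathsf{L}\,\mathsf{T}^{-1},
\qquad
[p] = [m]\,[v] = \mathsf{M}\,\mathsf{L}\,\mathsf{T}^{-1},
\]

so the corresponding SI derived unit of momentum is the kilogram metre per second (kg·m/s).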
International System of Units
In the International System of Units (SI), there are seven base units: kilogram, metre, candela, second, ampere, kelvin, and mole.
Several derived units have been defined, many with special names and symbols.
In 2019 the seven SI base units were redefined in terms of seven defining constants. The SI base units are therefore no longer strictly necessary, but they were retained for historical and practical reasons. See 2019 revision of the SI.
Natural units
A set of base dimensions of quantity is a minimal set of units such that every physical quantity can be expressed in terms of this set. The traditional base dimensions are mass, length, time, charge, and temperature, but in principle, other base quantities could be used. Electric current could be used instead of charge or speed could be used instead of length. Some physicists have not recognized temperature as a base dimension since it simply expresses the energy per particle per degree of freedom which can be expressed in terms of energy (or mass, length, and time). Duff argues that only dimensionless values have physical meaning and all dimensional units are human constructs.
There are other relationships between physical quantities that can be expressed by means of fundamental constants, and to some extent it is an arbitrary decision whether to retain the fundamental constant as a quantity with dimensions or simply to define it as unity or a fixed dimensionless number, and reduce the number of explicit base quantities by one. The ontological issue is whether these fundamental constants really exist as dimensional or dimensionless quantities. This is equivalent to treating length as the same as time or understanding electric charge as a combination of quantities of mass, length, and time which may seem less natural than thinking of temperature as measuring the same material as energy (which is expressible in terms of mass, length, and time).
For instance, time and distance are related to each other by the speed of light, c, which is a fundamental constant. It is possible to use this relationship to eliminate either the base unit of time or that of distance. Similar considerations apply to the Planck constant, h, which relates energy (with dimension expressible in terms of mass, length and time) to frequency (with dimension expressible in terms of time). In theoretical physics it is customary to use such units (natural units) in which c = 1 and ħ = 1. A similar choice can be applied to the vacuum permittivity, ε0.
One could eliminate either the metre or the second by setting c to unity (or to any other fixed dimensionless number).
One could then eliminate the kilogram by setting ħ to a dimensionless number.
One could eliminate the ampere by setting either the vacuum permittivity ε0 or the elementary charge e to a dimensionless number.
One could eliminate the mole as a base unit by setting the Avogadro constant N to 1. This is natural as it is a technical scaling constant.
One could eliminate the kelvin as it can be argued that temperature simply expresses the energy per particle per degree of freedom, which can be expressed in terms of energy (or mass, length, and time). Another way of saying this is that the Boltzmann constant kB is a technical scaling constant and could be set to a fixed dimensionless number.
Similarly, one could eliminate the candela, as that is defined in terms of other physical quantities via a technical scaling constant, K.
That leaves one base dimension and an associated base unit, but there are several fundamental constants left to eliminate that too – for instance, one could use G, the gravitational constant, me, the electron rest mass, or Λ, the cosmological constant.
The preferred choices vary by the field in physics. Using natural units leaves every physical quantity expressed as a dimensionless number, which is noted by physicists disputing the existence of incompatible base quantities.
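As an illustrative sketch of how this works in practice (not part of the original text): choosing c = ħ = G = 1 fixes the remaining mechanical units, and translating the choice back into SI gives the Planck length, time and mass. The constants below are the standard SI/CODATA values; ħ and c are exact in the 2019 SI, while G is a measured quantity.

```python
import math

hbar = 1.054_571_817e-34   # J s, reduced Planck constant
c    = 2.997_924_58e8      # m/s, speed of light
G    = 6.674_30e-11        # m^3 kg^-1 s^-2, gravitational constant

l_P = math.sqrt(hbar * G / c**3)   # Planck length
t_P = l_P / c                      # Planck time
m_P = math.sqrt(hbar * c / G)      # Planck mass

print(f"Planck length ≈ {l_P:.3e} m")    # ≈ 1.616e-35 m
print(f"Planck time   ≈ {t_P:.3e} s")    # ≈ 5.391e-44 s
print(f"Planck mass   ≈ {m_P:.3e} kg")   # ≈ 2.176e-8 kg
```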
See also
Characteristic units
Dimensional analysis
Natural units
One (unit)
References
Measurement
Dimensional analysis
"Physics",
"Mathematics",
"Engineering"
] | 1,155 | [
"Dimensional analysis",
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Mechanical engineering"
] |
58,661 | https://en.wikipedia.org/wiki/CDC%206600 | The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
The first CDC 6600s were delivered in 1965 to Livermore and Los Alamos. They quickly became a must-have system in high-end scientific and mathematical computing, with systems being delivered to Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. At least 100 were delivered in total.
A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs.
History and impact
CDC's first products were based on the machines designed at Engineering Research Associates (ERA), which Seymour Cray had been asked to update after moving to CDC. After an experimental machine known as the Little Character, in 1960 they delivered the CDC 1604, one of the first commercial transistor-based computers, and one of the fastest machines on the market. Management was delighted, and made plans for a new series of machines that were more tailored to business use; they would include instructions for character handling and record keeping for instance. Cray was not interested in such a project and set himself the goal of producing a new machine that would be 50 times faster than the 1604. When asked to complete a detailed report on plans at one and five years into the future, he wrote back that his five-year goal was "to produce the largest computer in the world", "largest" at that time being synonymous with "fastest", and that his one-year plan was "to be one-fifth of the way".
Taking his core team to new offices near the original CDC headquarters, they started to experiment with higher quality versions of the "cheap" transistors Cray had used in the 1604. After much experimentation, they found that there was simply no way the germanium-based transistors could be run much faster than those used in the 1604. The "business machine" that management had originally wanted, now forming as the CDC 3000 series, pushed them about as far as they could go. Cray then decided the solution was to work with the then-new silicon-based transistors from Fairchild Semiconductor, which were just coming onto the market and offered dramatically improved switching performance.
During this period, CDC grew from a startup to a large company and Cray became increasingly frustrated with what he saw as ridiculous management requirements. Things became considerably more tense in 1962 when the new CDC 3600 started to near production quality, and appeared to be exactly what management wanted, when they wanted it. Cray eventually told CDC's CEO, William Norris that something had to change, or he would leave the company. Norris felt he was too important to lose, and gave Cray the green light to set up a new laboratory wherever he wanted.
After a short search, Cray decided to return to his home town of Chippewa Falls, Wisconsin, where he purchased a block of land and started up a new laboratory.
Although this process introduced a fairly lengthy delay in the design of his new machine, once in the new laboratory, without management interference, things started to progress quickly. By this time, the new transistors were becoming quite reliable, and modules built with them tended to work properly on the first try. The 6600 began to take form, with Cray working alongside Jim Thornton, system architect and "hidden genius" of the 6600.
More than 100 CDC 6600s were sold over the machine's lifetime (1964 to 1969). Many of these went to various nuclear weapon-related laboratories, and quite a few found their way into university computing laboratories. A CDC 6600 was used to disprove Euler's sum of powers conjecture in an early example of direct numerical search.
Cray immediately turned his attention to its replacement, this time setting a goal of ten times the performance of the 6600, delivered as the CDC 7600. The later CDC Cyber 70 and 170 computers were very similar to the CDC 6600 in overall design and were nearly completely backwards compatible.
The 6600 was three times faster than the previous record-holder, the IBM 7030 Stretch; this alarmed IBM. Then-CEO Thomas Watson Jr. wrote a memo to his employees on August 28, 1963: "Last week, Control Data ... announced the 6600 system. I understand that in the laboratory developing the system there are only 34 people including the janitor. Of these, 14 are engineers and 4 are programmers ... Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world's most powerful computer." Cray's reply was sardonic: "It seems like Mr. Watson has answered his own question."
Description
Typical machines of the 1950s and 1960s used a single central processing unit (CPU) to drive the entire system. A typical program would first load data into memory (often using pre-rolled library code), process it, and then write it back out. This required the CPUs to be fairly complex in order to handle the complete set of instructions they would be called on to perform. A complex CPU implied a large CPU, introducing signalling delays while information flowed between the individual modules making it up. These delays set a maximum upper limit on performance, as the machine could only operate at a cycle speed that allowed the signals time to arrive at the next module.
Cray took another approach. In the 1960s, CPUs generally ran slower than the main memory to which they were attached. For instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two cycles. This meant there was a significant time where the main memory was idle. It was this idle time that the 6600 exploited.
The CDC 6600 used a simplified central processor (CP) that was designed to run mathematical and logic operations as rapidly as possible, which demanded it be built as small as possible to reduce the length of wiring and the associated signalling delays. This led to the machine's (typically) cross-shaped main chassis with the circuit boards for the CPU arranged close to the center, and resulted in a much smaller CPU. Combined with the faster switching speeds of the silicon transistors, the new CPU ran at 10 MHz (100 ns cycle time), about ten times faster than other machines on the market. In addition to the clock being faster, the simple processor executed instructions in fewer clock cycles; for instance, the CPU could complete a multiplication in ten cycles.
Supporting the CPU were ten 12-bit 4 KiB peripheral processors (PPs), each with access to a common pool of 12 input/output (I/O) channels, that handled input and output, as well as controlling what data were sent into central memory for processing by the CP. The PPs were designed to access memory during the times when the CPU was busy performing operations. This allowed them to perform input/output essentially for free in terms of central processing time, keeping the CPU busy as much as possible.
The 6600's CP used a 60-bit word and a ones' complement representation of integers, something that later CDC machines would use into the late 1980s, making them the last systems besides some digital signal processors to use this architecture.
Later, CDC offered options as to the number and type of CPs, PPs and channels, e.g., the CDC 6700 had two central processors, a 6400 CP and a 6600 CP.
While other machines of its day had elaborate front panels to control them, the 6600 has only a dead start panel. There is a dual CRT system console, but it is controlled by the operating system and neither controls nor displays the hardware directly.
The entire 6600 machine contained approximately 400,000 transistors.
Peripheral processors
The CPU could only execute a limited number of simple instructions. A typical CPU of the era had a complex instruction set, which included instructions to handle all the normal "housekeeping" tasks, such as memory access and input/output. Cray instead implemented these instructions in separate, simpler processors dedicated solely to these tasks, leaving the CPU with a much smaller instruction set. This was the first of what later came to be called reduced instruction set computer (RISC) design.
By allowing the CPU, peripheral processors (PPs) and I/O to operate in parallel, the design considerably improved the performance of the machine. Under normal conditions a machine with several processors would also cost a great deal more. Key to the 6600's design was to make the I/O processors, known as peripheral processors (PPs), as simple as possible. The PPs were based on the simple 12-bit CDC 160-A, which ran much slower than the CPU, gathering up data and transmitting it as bursts into main memory at high speed via dedicated hardware.
The 10 PPs were implemented virtually; there was CPU hardware only for a single PP. This CPU hardware was shared and operated on 10 PP register sets which represented each of the 10 PP states (similar to modern multithreading processors). The PP register barrel would "rotate", with each PP register set presented to the "slot" which the actual PP CPU occupied. The shared CPU would execute all or some portion of a PP's instruction whereupon the barrel would "rotate" again, presenting the next PP's register set (state). Multiple "rotations" of the barrel were needed to complete an instruction. A complete barrel "rotation" occurred in 1000 nanoseconds (100 nanoseconds per PP), and an instruction could take from one to five "rotations" of the barrel to be completed, or more if it was a data transfer instruction.
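A minimal sketch of this barrel-and-slot arrangement (hypothetical Python with invented names and a placeholder step function, not CDC software) may make the time-sharing clearer:

```python
SLOT_NS = 100     # each virtual PP holds the shared "slot" for 100 ns
NUM_PP = 10       # ten register sets, one per virtual peripheral processor

pp_states = [{"id": n, "pc": 0, "acc": 0} for n in range(NUM_PP)]

def step(state):
    # placeholder for executing (all or part of) one PP instruction
    state["pc"] += 1

def run_barrel(rotations):
    elapsed_ns = 0
    for _ in range(rotations):
        for state in pp_states:      # the barrel presents each PP to the slot in turn
            step(state)
            elapsed_ns += SLOT_NS
    return elapsed_ns

# five full rotations of the barrel take 5 * 10 * 100 ns = 5000 ns,
# during which each virtual PP has advanced by five of its own slots
assert run_barrel(5) == 5 * NUM_PP * SLOT_NS
```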
Instruction-set architecture of CP
The basis for the 6600 CPU is what would later be called a RISC system, one in which the processor is tuned to execute instructions that are comparatively simple and have limited and well-defined access to memory. The philosophy of many other machines favoured complicated instructions – for example, a single instruction which would fetch an operand from memory and add it to a value in a register. In the 6600, loading the value from memory would require one instruction, and adding it would require a second one. While slower in theory due to the additional memory accesses, the fact that well-scheduled code could keep multiple instructions in flight in parallel offset this expense. This simplification also forced programmers to be very aware of their memory accesses, and therefore to code deliberately to reduce them as much as possible. The CDC 6600 CP, being a three-address machine, allows for the specification of all three operands.
Models
The CDC 6000 series included four basic models, the CDC 6400, the CDC 6500, the CDC 6600, and the CDC 6700. The models of the 6000 series differed only in their CPUs, which were of two kinds, the 6400 CPU and the 6600 CPU. The 6400 CPU had a unified arithmetic unit, rather than discrete functional units. As such, it could not overlap instructions' execution times. For example, in a 6400 CPU, if an add instruction immediately followed a multiply instruction, the add instruction could not be started until the multiply instruction finished, so the net execution time of the two instructions would be the sum of their individual execution times. The 6600 CPU had multiple functional units which could operate simultaneously, i.e., "in parallel", allowing the CPU to overlap instructions' execution times. For example, a 6600 CPU could begin executing an add instruction in the next CPU cycle following the beginning of a multiply instruction (assuming, of course, that the result of the multiply instruction was not an operand of the add instruction), so the net execution time of the two instructions would simply be the (longer) execution time of the multiply instruction. The 6600 CPU also had an instruction stack, a kind of instruction cache, which helped increase CPU throughput by reducing the amount of CPU idle time caused by waiting for memory to respond to instruction fetch requests. The two kinds of CPUs were instruction compatible, so that a program that ran on either of the kinds of CPUs would run the same way on the other kind but would run faster on the 6600 CPU. Indeed, all models of the 6000 series were fully inter-compatible. The CDC 6400 had one CPU (a 6400 CPU), the CDC 6500 had two CPUs (both 6400 CPUs), the CDC 6600 had one CPU (a 6600 CPU), and the CDC 6700 had two CPUs (one 6600 CPU and one 6400 CPU).
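To make the effect of overlapping concrete, a tiny sketch follows (the ten-cycle multiply figure is quoted later in this article; the add-unit cycle count is only a placeholder assumption):

```python
t_multiply = 10                     # cycles for a floating-point multiply (quoted below)
t_add = 4                           # placeholder cycle count for the add unit

serial_6400 = t_multiply + t_add              # unified arithmetic unit: no overlap
overlapped_6600 = max(t_multiply, 1 + t_add)  # add issued one cycle after the multiply
print(serial_6400, overlapped_6600)           # 14 vs. 10 cycles for an independent pair
```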
Central Processor (CP)
The Central Processor (CP) and main memory of the 6400, 6500, and 6600 machines had a 60-bit word length. The Central Processor had eight general purpose 60-bit registers X0 through X7, eight 18-bit address registers A0 through A7, and eight 18-bit "increment" registers B0 through B7. B0 was held at zero permanently by the hardware. Many programmers found it useful to set B1 to 1, and similarly treat it as inviolate.
The CP had no instructions for input and output, which were accomplished through the Peripheral Processors (below). No opcodes were specifically dedicated to loading or storing memory; this occurred as a side effect of assignment to certain A registers. Setting A1 through A5 loaded the word at that address into X1 through X5 respectively; setting A6 or A7 stored a word from X6 or X7. No side effects were associated with A0. A separate hardware load/store unit, called the stunt box, handled the actual data movement independently of the instruction stream, allowing other operations to complete while memory was being accessed, which required eight cycles in the best case.
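A minimal sketch of this load/store-by-side-effect convention, assuming a simplified memory model (the register widths match the description above; everything else is illustrative, not the real instruction set):

class CP6600Registers:
    """Toy model of the convention that assigning to A1-A5 loads the addressed
    word into the matching X register, while assigning to A6-A7 stores the
    matching X register to memory. A0 has no side effect."""

    def __init__(self, memory):
        self.memory = memory              # list of 60-bit words (Python ints)
        self.A = [0] * 8
        self.X = [0] * 8

    def set_A(self, i, address):
        self.A[i] = address & 0x3FFFF     # A registers are 18 bits wide
        if 1 <= i <= 5:                   # side effect: load memory into Xi
            self.X[i] = self.memory[self.A[i]]
        elif i in (6, 7):                 # side effect: store Xi into memory
            self.memory[self.A[i]] = self.X[i]

# Example: load mem[10] into X2, add 1, store the result to mem[11]
mem = list(range(100))
cp = CP6600Registers(mem)
cp.set_A(2, 10)            # X2 <- mem[10]
cp.X[6] = cp.X[2] + 1
cp.set_A(6, 11)            # mem[11] <- X6
print(mem[11])             # 11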
The 6600 CP included ten parallel functional units, allowing multiple instructions to be worked on at the same time. Today, this is known as a superscalar processor design, but it was unique for its time. Unlike most modern CPU designs, functional units were not pipelined; the functional unit would become busy when an instruction was "issued" to it and would remain busy for the entire time required to execute that instruction. (By contrast, the CDC 7600 introduced pipelining into its functional units.) In the best case, an instruction could be issued to a functional unit every 100 ns clock cycle. The system read and decoded instructions from memory as fast as possible, generally faster than they could be completed, and fed them off to the units for processing. The units were:
floating point multiply (two copies)
floating point divide
floating point add
"long" integer add
incrementers (two copies; performed memory load/store)
shift
Boolean logic
branch
Floating-point operations were given pride of place in this architecture: the CDC 6600 (and kin) stand virtually alone in being able to execute a 60-bit floating point multiplication in time comparable to that for a program branch.
A recent analysis by Mitch Alsup of James Thornton's book, "Design of a Computer", revealed that the 6600's floating-point unit was a two-stage pipelined design.
Fixed point addition and subtraction of 60-bit numbers were handled in the Long Add Unit, using ones' complement for negative numbers. Fixed point multiply was done as a special case in the floating-point multiply unit—if the exponent was zero, the FP unit would do a single-precision 48-bit floating-point multiply and clear the high exponent part, resulting in a 48-bit integer result. Integer divide was performed by a macro, converting to and from floating point.
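A loose numerical sketch of the integer-multiply trick described above, not a model of the real multiply unit's internals (operands are shown as exponent/coefficient pairs, and the integer product is assumed to fit in 48 bits, as it had to for the trick to be useful):

COEFF_BITS = 48
COEFF_MASK = (1 << COEFF_BITS) - 1

def fp_multiply(exp_a, coeff_a, exp_b, coeff_b):
    """Loose model of the multiply unit: operands are (exponent, 48-bit
    coefficient) pairs. When both exponents are zero the unit effectively
    performs an integer multiply of the coefficients and clears the exponent."""
    product = (coeff_a & COEFF_MASK) * (coeff_b & COEFF_MASK)
    if exp_a == 0 and exp_b == 0:
        return 0, product & COEFF_MASK           # 48-bit integer result
    return exp_a + exp_b, product >> COEFF_BITS  # crude stand-in for the FP path

print(fp_multiply(0, 123456, 0, 789))            # (0, 97406784), the integer product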
Previously executed instructions were saved in an eight-word cache, called the "stack". In-stack jumps were quicker than out-of-stack jumps because no memory fetch was required. The stack was flushed by an unconditional jump instruction, so unconditional jumps at the ends of loops were conventionally written as conditional jumps that would always succeed.
The system used a 10 MHz clock, with a four-phase signal. A floating-point multiplication took ten cycles, a division took 29, and the overall performance, taking into account memory delays and other issues, was about 3 MFLOPS. Using the best available compilers, late in the machine's history, FORTRAN programs could expect to maintain about 0.5 MFLOPS.
Memory organization
User programs are restricted to use only a contiguous area of main memory. The portion of memory to which an executing program has access is controlled by the RA (Relative Address) and FL (Field Length) registers, which are not accessible to the user program. When a user program tries to read or write a word in central memory at address a, the processor will first verify that a is between 0 and FL-1. If it is, the processor accesses the word in central memory at address RA+a. This process is known as base-bound relocation; each user program sees core memory as a contiguous block of words of length FL, starting at address 0; in fact the program may be anywhere in the physical memory. Using this technique, each user program can be moved ("relocated") in main memory by the operating system, as long as the RA register reflects its position in memory. A user program which attempts to access memory outside the allowed range (that is, with an address which is not less than FL) will trigger an interrupt and will be terminated by the operating system. When this happens, the operating system may create a core dump which records the contents of the program's memory and registers in a file, giving the program's developer a way to determine what happened. Note the distinction from virtual memory systems; in this case, the entirety of a process's addressable space must be in core memory, must be contiguous, and its size cannot be larger than the real memory capacity.
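A minimal sketch of RA/FL base-bound relocation as described above (the class and field names are illustrative):

class BaseBoundMemory:
    """Toy model of RA/FL base-bound address translation."""

    def __init__(self, core, ra, fl):
        self.core = core      # the whole physical core memory (list of words)
        self.ra = ra          # Relative Address: physical start of the program
        self.fl = fl          # Field Length: number of words the program owns

    def translate(self, a):
        if not (0 <= a < self.fl):
            raise MemoryError(f"address {a} outside field length {self.fl}")
        return self.ra + a    # program address 0..FL-1 maps to RA..RA+FL-1

    def read(self, a):
        return self.core[self.translate(a)]

    def write(self, a, value):
        self.core[self.translate(a)] = value

core = [0] * 131072                       # 128K words of central memory
job = BaseBoundMemory(core, ra=40000, fl=2000)
job.write(0, 0o1234)                      # actually lands in core[40000]
print(job.read(0))
# job.read(2500) would raise MemoryError; on the real machine the job is aborted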
All but the first seven CDC 6000 series machines could be configured with an optional Extended Core Storage (ECS) system. ECS was built from a different variety of core memory than was used in the central memory. This memory was slower, but cheap enough that it could be much larger. The primary reason for the lower cost was that ECS memory was wired with only two wires per core (in contrast with five for central memory). Because it performed very wide transfers, its sequential transfer rate was the same as that of the small core memory. A 6000 CPU could directly perform block memory transfers between a user's program (or operating system) and the ECS unit. Wide data paths were used, so this was a very fast operation. Memory bounds were maintained in a similar manner as central memory, with an RA/FL mechanism maintained by the operating system. ECS could be used for a variety of purposes, including containing user data arrays that were too large for central memory, holding often-used files, swapping, and even as a communication path in a multi-mainframe complex.
Peripheral Processors (PPs)
To handle the "housekeeping" tasks, which in other designs were assigned to the CPU, Cray included ten other processors, based partly on his earlier computer, the CDC 160-A. These machines, called Peripheral Processors, or PPs, were full computers in their own right, but were tuned to performing I/O tasks and running the operating system. (Substantial parts of the operating system ran on the PP's; thus leaving most of the power of the Central Processor available for user programs.) Only the PPs had access to the I/O channels. One of the PPs (PP0) was in overall control of the machine, including control of the program running on the main CPU, while the others would be dedicated to various I/O tasks; PP9 was dedicated to the system console. When the CP program needed to perform an operating system function, it would put a request in a known location (Reference Address + 1) monitored by PP0. If necessary, PP0 would assign another PP to load any necessary code and to handle the request. The PP would then clear RA+1 to inform the CP program that the task was complete.
The unique role of PP0 in controlling the machine was a potential single point of failure, in that a malfunction here could shut down the whole machine, even if the nine other PPs and the CPU were still functioning properly. Cray fixed this in the design of the successor 7600, when any of the PPs could be the controller, and the CPU could reassign any one to this role.
Each PP included its own memory of 4096 12-bit words. This memory served both for I/O buffering and for program storage, but the execution units were shared by the ten PPs, in a configuration called the barrel and slot. This meant that the execution units (the "slot") would execute one instruction cycle from the first PP, then one instruction cycle from the second PP, and so on, in round-robin fashion. This was done both to reduce costs and because access to CP memory required 10 PP clock cycles: when a PP accessed CP memory, the data was available the next time the PP received its slot time.
Central Processor access
In addition to a conventional instruction set, the PPs have several instructions specifically intended to communicate with the central processor; a sketch of the five-word packing these transfers rely on follows the list.
CRD d - transfers one 60-bit word from central memory at the address specified by the PP's A register to five consecutive PP words beginning at address d.
CRM d,m - similar to CRD, but transfers a block of words whose length was previously stored at location d into PP memory starting at PP address m.
CWD d - assembles five consecutive PP words beginning at location d, and transfers them to the central memory location specified by register A.
CWM d,m - transfers a block starting at PP memory address m to central memory. The central memory address was stored in register A, and the length was stored at location d prior to execution.
RPN - transfers the contents of the central processor's program address register to the PP's A register.
EXN - Exchange Jump transmits an address from the A register and tells the processor to perform an Exchange Jump using the address specified. The CP Exchange Jump interrupts the processor, loads its registers from the specified location and stores the previous contents at the same location. This performs a task switch.
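Because a CP word is five times the width of a PP word, the transfer instructions above imply a fixed packing between one 60-bit word and five 12-bit words. A small sketch of that packing, assuming the most significant 12 bits go into the first PP word:

def crd(word60):
    """Split one 60-bit central-memory word into five 12-bit PP words
    (assumed here to be most-significant 12 bits first)."""
    return [(word60 >> shift) & 0xFFF for shift in (48, 36, 24, 12, 0)]

def cwd(pp_words):
    """Assemble five consecutive 12-bit PP words back into a 60-bit word."""
    word = 0
    for w in pp_words:
        word = (word << 12) | (w & 0xFFF)
    return word

w = 0o1234567012345670123          # an arbitrary value that fits in 60 bits
assert cwd(crd(w)) == w            # the two operations are inverses
print([oct(b) for b in crd(w)])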
Wordlengths, characters
The central processor has 60-bit words, while the peripheral processors have 12-bit words. CDC used the term "byte" to refer to 12-bit entities used by peripheral processors; characters are 6-bit, and central processor instructions are either 15 bits, or 30 bits with a signed 18-bit address field, the latter allowing for a directly addressable memory space of 128K words of central memory (converted to modern terms, with 8-bit bytes, this is just under 1 MB). The signed nature of the address registers limits an individual program to 128K words. (Later CDC 6000-compatible machines could have 256K or more words of central memory, budget permitting, but individual user programs were still limited to 128K words of CM.) Central processor instructions start on a word boundary when they are the target of a jump statement or subroutine return jump instruction, so no-op instructions are sometimes required to fill out the last 15, 30 or 45 bits of a word. Experienced assembler programmers could fine-tune their programs by filling these no-op spaces with miscellaneous instructions that would be needed later in the program.
The 6-bit characters, in an encoding called CDC display code, could be used to store up to 10 characters in a word. They permitted a character set of 64 characters, which is enough for all upper case letters, digits, and some punctuation. It was certainly enough to write FORTRAN, or print financial or scientific reports. There were actually two variations of the CDC display code character sets in use — 64-character and 63-character. The 64-character set had the disadvantage that the ":" (colon) character would be ignored (interpreted as zero fill) if it were the last character in a word. A complementary variant, called 6/12 display code, was also used in the Kronos and NOS timesharing systems to allow full use of the ASCII character set in a manner somewhat compatible with older software.
With no byte addressing instructions at all, code had to be written to pack and shift characters into words. The very large words, and comparatively small amount of memory, meant that programmers would frequently economize on memory by packing data into words at the bit level.
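A small sketch of the kind of packing code this implied, using a deliberately reduced, hypothetical character table rather than the full CDC display code chart:

# A tiny stand-in table; the real display code assigned all 64 codes to
# upper-case letters, digits, and punctuation.
CODES = {ch: i + 1 for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ")}
CHARS = {v: k for k, v in CODES.items()}

def pack_word(text):
    """Pack up to 10 characters into one 60-bit word, 6 bits per character,
    zero-filling any unused character positions on the right."""
    word = 0
    for ch in text[:10]:
        word = (word << 6) | CODES.get(ch, 0)
    word <<= 6 * (10 - min(len(text), 10))
    return word

def unpack_word(word):
    return "".join(CHARS.get((word >> shift) & 0o77, "")
                   for shift in range(54, -1, -6))

w = pack_word("HELLO 1964")
print(oct(w), unpack_word(w))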
Due to the large word size, and with 10 characters per word, it was often faster to process a word's worth of characters at a time, rather than unpacking/processing/repacking them. For example, the CDC COBOL compiler was actually quite good at processing decimal fields using this technique. These sorts of techniques are now commonly used in the "multi-media" instructions of current processors.
Physical design
The machine was built in a plus-sign-shaped cabinet with a pump and heat exchanger in the outermost of each of the four arms. Cooling was done with Freon circulating within the machine and exchanging heat to an external chilled water supply. Each arm could hold four chassis, each about thick, hinged near the center, and opening a bit like a book. The intersection of the "plus" was filled with cables that interconnected the chassis. The chassis were numbered from 1 (containing all 10 PPUs and their memories, as well as the 12 rather minimal I/O channels) to 16. The main memory for the CPU was spread over many of the chassis. In a system with only 64K words of main memory, one of the arms of the "plus" was omitted.
The logic of the machine was packaged into modules about square and about thick. Each module had a connector (30 pins, two vertical rows of 15) on one edge, and six test points on the opposite edge. The module was placed between two aluminum cold plates to remove heat. The module consisted of two parallel printed circuit boards, with components mounted either on one of the boards or between the two boards. This provided a very dense package, one that was generally impossible to repair but had good heat transfer characteristics. It was known as cordwood construction.
Operating system and programming
Operating system support was a sore point for the 6600: delivery timelines kept slipping. The machines originally ran a very simple job-control system known as COS (Chippewa Operating System), which was quickly "thrown together" based on the earlier CDC 3000 operating system in order to have something running to test the systems for delivery. However, the machines were intended to be delivered with a much more powerful system known as SIPROS (for Simultaneous Processing Operating System), which was being developed at the company's System Sciences Division in Los Angeles. Customers were impressed with SIPROS' feature list, and many had SIPROS written into their delivery contracts.
SIPROS turned out to be a major fiasco. Development timelines continued to slip, costing CDC major amounts of profit in the form of delivery delay penalties. After several months of waiting with the machines ready to be shipped, the project was eventually cancelled. The programmers who had worked on COS had little faith in SIPROS and had continued working on improving COS.
Operating system development then split into two camps. The CDC-sanctioned evolution of COS was undertaken at the Sunnyvale, California software development laboratory. Many customers eventually took delivery of their systems with this software, then known as SCOPE (Supervisory Control Of Program Execution). SCOPE version 1 was, essentially, dis-assembled COS; SCOPE version 2 included new device and file system support; SCOPE version 3 included permanent file support, EI/200 remote batch support, and INTERCOM time-sharing support. SCOPE always had significant reliability and maintainability issues.
The underground evolution of COS took place at the Arden Hills, Minnesota assembly plant. MACE ([Greg] Mansfield And [Dave] Cahlander Executive) was written largely by a single programmer in the off-hours when machines were available. Its feature set was essentially the same as COS and SCOPE 1. It retained the earlier COS file system, but made significant advances in code modularity to improve system reliability and adaptiveness to new storage devices. MACE was never an official product, although many customers were able to wrangle a copy from CDC.
The unofficial MACE software was later chosen over the official SCOPE product as the basis of the next CDC operating system, Kronos, named after the Greek god of time. The main marketing reason for its adoption was the development of its TELEX time-sharing feature and its BATCHIO remote batch feature. Kronos continued to use the COS/SCOPE 1 file system with the addition of a permanent file feature.
An attempt to unify the SCOPE and Kronos operating system products produced NOS, (Network Operating System). NOS was intended to be the sole operating system for all CDC machines, a fact CDC promoted heavily. Many SCOPE customers remained software-dependent on the SCOPE architecture, so CDC simply renamed it NOS/BE (Batch Environment), and were able to claim that everyone was thus running NOS. In practice, it was far easier to modify the Kronos code base to add SCOPE features than the reverse.
The assembly plant environment also produced other operating systems which were never intended for customer use. These included the engineering tools SMM for hardware testing, and KALEIDOSCOPE, for software smoke testing. Another commonly used tool for CDC Field Engineers during testing was MALET (Maintenance Application Language for Equipment Testing), which was used to stress test components and devices after repairs or servicing by engineers. Testing conditions often used hard disk packs and magnetic tapes which were deliberately marked with errors to determine if the errors would be detected by MALET and the engineer.
The names SCOPE and COMPASS were used by CDC for both the CDC 6000 series, including the 6600, and the CDC 3000 series:
The name COMPASS was used by CDC for the Assembly languages on both families.
The name SCOPE was used for its implementations on the 3000 and 6000 series.
CDC 7600
The CDC 7600 was originally intended to be fully compatible with the existing 6000-series machines as well; it started life known as the CDC 6800. But during its design, the designers determined that maintaining complete compatibility with the existing 6000-series machines would limit how much performance improvement they could attain and decided to sacrifice compatibility for performance. While the CDC 7600's CPU was basically instruction compatible with the 6400 and 6600 CPUs, allowing code portability at the high-level language source code level, the CDC 7600's hardware, especially that of its Peripheral Processor Units (PPUs), was quite different, and the CDC 7600 required a different operating system. This turned out to be somewhat serendipitous because it allowed the designers to improve on some of the characteristics of the 6000-series design, such as the latter's complete dependence on Peripheral Processors (PPs), particularly the first (called PP0), to control operation of the entire computer system, including the CPU(s). Unlike the 6600 CPU, the CDC 7600's CPU could control its own operation via a Central Exchange jump (XJ) instruction that swapped all register contents with core memory. In fact, the 6000-series machines were retrofitted with this capability.
See also
History of supercomputing
References
Further reading
Grishman, Ralph (1974). Assembly Language Programming for the Control Data 6000 Series and the Cyber 70 Series. New York, NY: Algorithmics Press.
Control Data 6400/6500/6600 Computer Systems Reference Manual
Thornton, J. (1963). Considerations in Computer Design – Leading up to the Control Data 6600
Thornton, J. (1970). Design of a Computer—The Control Data 6600. Glenview, IL: Scott, Foresman and Co.
(1990) Understanding Computers: Speed and Power, a Time Life series
External links
Neil R. Lincoln with 18 Control Data Corporation (CDC) engineers on computer architecture and design, Charles Babbage Institute, University of Minnesota. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, CDC 8600, CDC STAR-100 and Seymour Cray.
Parallel operation in the Control Data 6600, James Thornton
Presentation of the CDC 6600 and other machines designed by Seymour Cray – by C. Gordon Bell of Microsoft Research (formerly of DEC)
6600
Supercomputers
Transistorized computers
Computer-related introductions in 1964
60-bit computers | CDC 6600 | [
"Technology"
] | 6,892 | [
"Supercomputers",
"Supercomputing"
] |
58,664 | https://en.wikipedia.org/wiki/Gas%20turbine | A gas turbine or gas turbine engine is a type of continuous flow internal combustion engine. The main parts common to all gas turbine engines form the power-producing part (known as the gas generator or core) and are, in the direction of flow:
a rotating gas compressor
a combustor
a compressor-driving turbine.
Additional components have to be added to the gas generator to suit its application. Common to all is an air inlet but with different configurations to suit the requirements of marine use, land use or flight at speeds varying from stationary to supersonic. A propelling nozzle is added to produce thrust for flight. An extra turbine is added to drive a propeller (turboprop) or ducted fan (turbofan) to reduce fuel consumption (by increasing propulsive efficiency) at subsonic flight speeds. An extra turbine is also required to drive a helicopter rotor or land-vehicle transmission (turboshaft), marine propeller or electrical generator (power turbine). Greater thrust-to-weight ratio for flight is achieved with the addition of an afterburner.
The basic operation of the gas turbine is a Brayton cycle with air as the working fluid: atmospheric air flows through the compressor that brings it to higher pressure; energy is then added by spraying fuel into the air and igniting it so that the combustion generates a high-temperature flow; this high-temperature pressurized gas enters a turbine, producing a shaft work output in the process, used to drive the compressor; the unused energy comes out in the exhaust gases that can be repurposed for external work, such as directly producing thrust in a turbojet engine, or rotating a second, independent turbine (known as a power turbine) that can be connected to a fan, propeller, or electrical generator. The purpose of the gas turbine determines the design so that the most desirable split of energy between the thrust and the shaft work is achieved. The fourth step of the Brayton cycle (cooling of the working fluid) is omitted, as gas turbines are open systems that do not reuse the same air.
Gas turbines are used to power aircraft, trains, ships, electrical generators, pumps, gas compressors, and tanks.
Timeline of development
50: Earliest records of Hero's engine (aeolipile). It most likely served no practical purpose, and was rather more of a curiosity; nonetheless, it demonstrated an important principle of physics that all modern turbine engines rely on.
1000: The "Trotting Horse Lamp" (, zŏumădēng) was used by the Chinese at lantern fairs as early as the Northern Song dynasty. When the lamp is lit, the heated airflow rises and drives an impeller with horse-riding figures attached on it, whose shadows are then projected onto the outer screen of the lantern.
1500: The Smoke jack was drawn by Leonardo da Vinci: Hot air from a fire rises through a single-stage axial turbine rotor mounted in the exhaust duct of the fireplace and turns the roasting spit by gear-chain connection.
1791: A patent was given to John Barber, an Englishman, for the first true gas turbine. His invention had most of the elements present in the modern day gas turbines. The turbine was designed to power a horseless carriage.
1894: Sir Charles Parsons patented the idea of propelling a ship with a steam turbine, and built a demonstration vessel, the Turbinia, easily the fastest vessel afloat at the time.
1899: Charles Gordon Curtis patented the first gas turbine engine in the US.
1900: Sanford Alexander Moss submitted a thesis on gas turbines. In 1903, Moss became an engineer for General Electric's Steam Turbine Department in Lynn, Massachusetts. While there, he applied some of his concepts in the development of the turbocharger.
1903: A Norwegian, Ægidius Elling, built the first gas turbine that was able to produce more power than needed to run its own components, which was considered an achievement in a time when knowledge about aerodynamics was limited. Using rotary compressors and turbines it produced .
1904: A gas turbine engine designed by Franz Stolze, based on his earlier 1873 patent application, is built and tested in Berlin. The Stolze gas turbine was too inefficient to sustain its own operation.
1906: The Armengaud-Lemale gas turbine tested in France. This was a relatively large machine which included a 25-stage centrifugal compressor designed by Auguste Rateau and built by the Brown Boveri Company. The gas turbine could sustain its own air compression but was too inefficient to produce useful work.
1910: The first operational Holzwarth gas turbine (pulse combustion) achieves an output of . Planned output of the machine was and its efficiency is below that of contemporary reciprocating engines.
1920s The practical theory of gas flow through passages was developed into the more formal (and applicable to turbines) theory of gas flow past airfoils by A. A. Griffith resulting in the publishing in 1926 of An Aerodynamic Theory of Turbine Design. Working testbed designs of axial turbines suitable for driving a propeller were developed by the Royal Aeronautical Establishment.
1930: Having found no interest from the RAF for his idea, Frank Whittle patented the design for a centrifugal gas turbine for jet propulsion. The first successful test run of his engine occurred in England in April 1937.
1932: The Brown Boveri Company of Switzerland starts selling axial compressor and turbine turbosets as part of the turbocharged steam generating Velox boiler. Following the gas turbine principle, the steam evaporation tubes are arranged within the gas turbine combustion chamber; the first Velox plant is erected at a French Steel mill in Mondeville, Calvados.
1936: The first constant flow industrial gas turbine is commissioned by the Brown Boveri Company and goes into service at Sun Oil's Marcus Hook refinery in Pennsylvania, US.
1937: Working proof-of-concept prototype turbojet engine runs in UK (Frank Whittle's) and Germany (Hans von Ohain's Heinkel HeS 1). Henry Tizard secures UK government funding for further development of Power Jets engine.
1939: The First 4 MW utility power generation gas turbine is built by the Brown Boveri Company for an emergency power station in Neuchâtel, Switzerland. The turbojet powered Heinkel He 178, the world's first jet aircraft, makes its first flight.
1940: Jendrassik Cs-1, a turboprop engine, made its first bench run. The Cs-1 was designed by Hungarian engineer György Jendrassik, and was intended to power a Hungarian twin-engine heavy fighter, the RMI-1. Work on the Cs-1 stopped in 1941 without the type having powered any aircraft.
1944: The Junkers Jumo 004 engine enters full production, powering the first German military jets such as the Messerschmitt Me 262. This marks the beginning of the reign of gas turbines in the sky.
1946: National Gas Turbine Establishment formed from Power Jets and the RAE turbine division to bring together Whittle and Hayne Constant's work. In Beznau, Switzerland the first commercial reheated/recuperated unit generating 27 MW was commissioned.
1947: A Metropolitan Vickers G1 (Gatric) becomes the first marine gas turbine when it completes sea trials on the Royal Navy's M.G.B 2009 vessel. The Gatric was an aeroderivative gas turbine based on the Metropolitan Vickers F2 jet engine.
1995: Siemens becomes the first manufacturer of large electricity producing gas turbines to incorporate single crystal turbine blade technology into their production models, allowing higher operating temperatures and greater efficiency.
2011 Mitsubishi Heavy Industries tests the first >60% efficiency combined cycle gas turbine (the M501J) at its Takasago, Hyōgo, works.
Theory of operation
In an ideal gas turbine, gases undergo four thermodynamic processes: an isentropic compression, an isobaric (constant pressure) combustion, an isentropic expansion and isobaric heat rejection. Together, these make up the Brayton cycle, also known as the "constant pressure cycle". It is distinguished from the Otto cycle, in that all the processes (compression, ignition combustion, exhaust), occur at the same time, continuously.
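Under the usual cold-air-standard assumptions, the ideal Brayton cycle's thermal efficiency depends only on the compressor pressure ratio; a short sketch with illustrative numbers only:

def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (cold-air-standard) Brayton cycle thermal efficiency:
    eta = 1 - r**((1 - gamma) / gamma), where r is the pressure ratio
    and gamma is the ratio of specific heats (about 1.4 for air)."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

for r in (5, 10, 20, 40):
    print(f"pressure ratio {r:>2}: ideal efficiency {brayton_ideal_efficiency(r):.1%}")

Real machines fall well short of these figures because of component losses, turbine temperature limits, and the other irreversibilities described below.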
In a real gas turbine, mechanical energy is changed irreversibly (due to internal friction and turbulence) into pressure and thermal energy when the gas is compressed (in either a centrifugal or axial compressor). Heat is added in the combustion chamber and the specific volume of the gas increases, accompanied by a slight loss in pressure. During expansion through the stator and rotor passages in the turbine, irreversible energy transformation once again occurs. Fresh air is taken in, in place of the heat rejection.
Air is taken in through the compressor of the gas generator, which may be of axial or centrifugal design, or a combination of the two. This air is then ducted into the combustor section, which can be of an annular, can, or can-annular design. In the combustor section, roughly 70% of the air from the compressor is ducted around the combustor itself for cooling purposes. The remaining roughly 30% of the air is mixed with fuel and ignited by the already burning air-fuel mixture, and the resulting hot gas expands. Leaving the combustor section, the gas is accelerated through the turbine section and strikes the turbine blades, spinning the disc they are attached to and thus producing useful power. Of the power produced, 60-70% is used solely to drive the compressor. The remaining power is used for whatever the engine is driving, typically an aviation application: thrust in a turbojet, the fan of a turbofan, the rotor or accessories of a turboshaft, or the gear reduction and propeller of a turboprop.
If the engine has a power turbine added to drive an industrial generator or a helicopter rotor, the exit pressure will be as close to the entry pressure as possible with only enough energy left to overcome the pressure losses in the exhaust ducting and expel the exhaust. For a turboprop engine there will be a particular balance between propeller power and jet thrust which gives the most economical operation. In a turbojet engine only enough pressure and energy is extracted from the flow to drive the compressor and other components. The remaining high-pressure gases are accelerated through a nozzle to provide a jet to propel an aircraft.
The smaller the engine, the higher the rotation rate of the shaft must be to attain the required blade tip speed. Blade-tip speed determines the maximum pressure ratios that can be obtained by the turbine and the compressor. This, in turn, limits the maximum power and efficiency that can be obtained by the engine. In order for tip speed to remain constant, if the diameter of a rotor is reduced by half, the rotational speed must double. For example, large jet engines operate around 10,000–25,000 rpm, while micro turbines spin as fast as 500,000 rpm.
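The relationship between tip speed, rotor diameter, and shaft speed is simple geometry; a short sketch with an assumed tip speed (the 450 m/s figure is illustrative, not taken from any particular engine):

import math

def rpm_for_tip_speed(tip_speed_m_s, rotor_diameter_m):
    """Shaft speed needed for a given blade-tip speed: rpm = 60*v / (pi*D)."""
    return 60.0 * tip_speed_m_s / (math.pi * rotor_diameter_m)

tip = 450.0                                   # assumed tip speed in m/s
for d in (1.0, 0.5, 0.05):                    # halving the diameter doubles the rpm
    print(f"diameter {d:4.2f} m -> {rpm_for_tip_speed(tip, d):,.0f} rpm")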
Mechanically, gas turbines can be considerably less complex than reciprocating engines. Simple turbines might have one main moving part, the compressor/shaft/turbine rotor assembly, with other moving parts in the fuel system. This, in turn, can translate into price. For instance, costing for materials, the Jumo 004 proved cheaper than the Junkers 213 piston engine, which was , and needed only 375 hours of lower-skill labor to complete (including manufacture, assembly, and shipping), compared to 1,400 for the BMW 801. This, however, also translated into poor efficiency and reliability. More advanced gas turbines (such as those found in modern jet engines or combined cycle power plants) may have 2 or 3 shafts (spools), hundreds of compressor and turbine blades, movable stator blades, and extensive external tubing for fuel, oil and air systems; they use temperature resistant alloys, and are made with tight specifications requiring precision manufacture. All this often makes the construction of a simple gas turbine more complicated than a piston engine.
Moreover, to reach optimum performance in modern gas turbine power plants the gas needs to be prepared to exact fuel specifications. Fuel gas conditioning systems treat the natural gas to reach the exact fuel specification prior to entering the turbine in terms of pressure, temperature, gas composition, and the related Wobbe index.
The primary advantage of a gas turbine engine is its power to weight ratio.
Since significant useful work can be generated by a relatively lightweight engine, gas turbines are perfectly suited for aircraft propulsion.
Thrust bearings and journal bearings are a critical part of a design. They are hydrodynamic oil bearings or oil-cooled rolling-element bearings. Foil bearings are used in some small machines such as micro turbines and also have strong potential for use in small gas turbines and auxiliary power units.
Creep
A major challenge facing turbine design, especially turbine blades, is reducing the creep that is induced by the high temperatures and stresses that are experienced during operation. Higher operating temperatures are continuously sought in order to increase efficiency, but come at the cost of higher creep rates. Several methods have therefore been employed in an attempt to achieve optimal performance while limiting creep, with the most successful ones being high performance coatings and single crystal superalloys. These technologies work by limiting deformation that occurs by mechanisms that can be broadly classified as dislocation glide, dislocation climb and diffusional flow.
Protective coatings provide thermal insulation of the blade and offer oxidation and corrosion resistance. Thermal barrier coatings (TBCs) are often stabilized zirconium dioxide-based ceramics and oxidation/corrosion resistant coatings (bond coats) typically consist of aluminides or MCrAlY (where M is typically Fe and/or Cr) alloys. Using TBCs limits the temperature exposure of the superalloy substrate, thereby decreasing the diffusivity of the active species (typically vacancies) within the alloy and reducing dislocation and vacancy creep. It has been found that a coating of 1–200 μm can decrease blade temperatures by up to .
Bond coats are directly applied onto the surface of the substrate using pack carburization and serve the dual purpose of providing improved adherence for the TBC and oxidation resistance for the substrate. The Al from the bond coats forms Al2O3 on the TBC-bond coat interface which provides the oxidation resistance, but also results in the formation of an undesirable interdiffusion (ID) zone between itself and the substrate. The oxidation resistance outweighs the drawbacks associated with the ID zone as it increases the lifetime of the blade and limits the efficiency losses caused by a buildup on the outside of the blades.
Nickel-based superalloys boast improved strength and creep resistance due to their composition and resultant microstructure. The gamma (γ) FCC nickel is alloyed with aluminum and titanium in order to precipitate a uniform dispersion of the coherent gamma-prime (γ') phases. The finely dispersed γ' precipitates impede dislocation motion and introduce a threshold stress, increasing the stress required for the onset of creep. Furthermore, γ' is an ordered L12 phase that makes it harder for dislocations to shear past it. Refractory elements such as rhenium and ruthenium can also be added in solid solution to improve creep strength. The addition of these elements reduces the diffusion of the gamma prime phase, thus preserving the fatigue resistance, strength, and creep resistance. The development of single crystal superalloys has led to significant improvements in creep resistance as well. Due to the lack of grain boundaries, single crystals eliminate Coble creep and consequently deform by fewer modes – decreasing the creep rate. Although single crystals have lower creep at high temperatures, they have significantly lower yield stresses at room temperature where strength is determined by the Hall-Petch relationship. Care needs to be taken in order to optimize the design parameters to limit high temperature creep while not decreasing low temperature yield strength.
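For reference, the Hall-Petch relationship mentioned above is conventionally written as

\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}

where \sigma_0 and k_y are material constants and d is the average grain diameter. Because a single crystal effectively has no grain boundaries (d grows without bound), the grain-boundary strengthening term vanishes, which is consistent with the lower room-temperature yield stress noted above.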
Types
Jet engines
Airbreathing jet engines are gas turbines optimized to produce thrust from the exhaust gases, or from ducted fans connected to the gas turbines. Jet engines that produce thrust from the direct impulse of exhaust gases are often called turbojets. While still in service with many militaries and civilian operators, turbojets have mostly been phased out in favor of the turbofan engine due to the turbojet's low fuel efficiency and high noise. Those that generate thrust with the addition of a ducted fan are called turbofans or (rarely) fan-jets. These engines produce nearly 80% of their thrust via the ducted fan, which can be seen from the front of the engine. They come in two types, low-bypass and high-bypass, the difference being the amount of air moved by the fan, called "bypass air". These engines offer the benefit of more thrust without extra fuel consumption.
Gas turbines are also used in many liquid-fuel rockets, where gas turbines are used to power a turbopump to permit the use of lightweight, low-pressure tanks, reducing the empty weight of the rocket.
Turboprop engines
A turboprop engine is a turbine engine that drives an aircraft propeller through a reduction gear, translating the high turbine-section operating speeds (often in the tens of thousands of rpm) into the low thousands of rpm needed for efficient propeller operation. The benefit of using the turboprop engine is to take advantage of the turbine engine's high power-to-weight ratio to drive a propeller, allowing a more powerful but also smaller engine to be used. Turboprop engines are used on a wide range of business aircraft such as the Pilatus PC-12, commuter aircraft such as the Beechcraft 1900, small cargo aircraft such as the Cessna 208 Caravan or De Havilland Canada Dash 8, and large (typically military) aircraft such as the Airbus A400M transport, Lockheed AC-130 and the 60-year-old Tupolev Tu-95 strategic bomber. While military turboprop engines can vary, in the civilian market there are two primary engines to be found: the Pratt & Whitney Canada PT6, a free-turbine turboshaft engine, and the Honeywell TPE331, a fixed turbine engine (formerly designated as the Garrett AiResearch 331).
Aeroderivative gas turbines
Aeroderivative gas turbines are generally based on existing aircraft gas turbine engines and are smaller and lighter than industrial gas turbines.
Aeroderivatives are used in electrical power generation due to their ability to be shut down and handle load changes more quickly than industrial machines. They are also used in the marine industry to reduce weight. Common types include the General Electric LM2500, General Electric LM6000, and aeroderivative versions of the Pratt & Whitney PW4000, Pratt & Whitney FT4 and Rolls-Royce RB211.
Amateur gas turbines
Increasing numbers of gas turbines are being used or even constructed by amateurs.
In its most straightforward form, these are commercial turbines acquired through military surplus or scrapyard sales, then operated for display as part of the hobby of engine collecting. In its most extreme form, amateurs have even rebuilt engines beyond professional repair and then used them to compete for the land speed record.
The simplest form of self-constructed gas turbine employs an automotive turbocharger as the core component. A combustion chamber is fabricated and plumbed between the compressor and turbine sections.
More sophisticated turbojets are also built, where their thrust and light weight are sufficient to power large model aircraft. The Schreckling design constructs the entire engine from raw materials, including the fabrication of a centrifugal compressor wheel from plywood, epoxy and wrapped carbon fibre strands.
Several small companies now manufacture small turbines and parts for the amateur. Most turbojet-powered model aircraft are now using these commercial and semi-commercial microturbines, rather than a Schreckling-like home-build.
Auxiliary power units
Small gas turbines are used as auxiliary power units (APUs) to supply auxiliary power to larger, mobile, machines such as an aircraft, and are a turboshaft design. They supply:
compressed air for air cycle machine style air conditioning and ventilation,
compressed air start-up power for larger jet engines,
mechanical (shaft) power to a gearbox to drive shafted accessories, and
electrical, hydraulic and other power-transmission sources to consuming devices remote from the APU.
Industrial gas turbines for power generation
Industrial gas turbines differ from aeronautical designs in that the frames, bearings, and blading are of heavier construction. They are also much more closely integrated with the devices they power—often an electric generator—and the secondary-energy equipment that is used to recover residual energy (largely heat).
They range in size from portable mobile plants to large, complex systems weighing more than a hundred tonnes housed in purpose-built buildings. When the gas turbine is used solely for shaft power, its thermal efficiency is about 30%. However, it may be cheaper to buy electricity than to generate it. Therefore, many engines are used in CHP (Combined Heat and Power) configurations that can be small enough to be integrated into portable container configurations.
Gas turbines can be particularly efficient when waste heat from the turbine is recovered by a heat recovery steam generator (HRSG) to power a conventional steam turbine in a combined cycle configuration. The 605 MW General Electric 9HA achieved a 62.22% efficiency rate with temperatures as high as .
For 2018, GE offers its 826 MW HA at over 64% efficiency in combined cycle due to advances in additive manufacturing and combustion breakthroughs, up from 63.7% in 2017 orders and on track to achieve 65% by the early 2020s.
In March 2018, GE Power achieved a 63.08% gross efficiency for its 7HA turbine.
Aeroderivative gas turbines can also be used in combined cycles, leading to a higher efficiency, but it will not be as high as a specifically designed industrial gas turbine. They can also be run in a cogeneration configuration: the exhaust is used for space or water heating, or drives an absorption chiller for cooling the inlet air and increase the power output, technology known as turbine inlet air cooling.
Another significant advantage is their ability to be turned on and off within minutes, supplying power during peak, or unscheduled, demand. Since single cycle (gas turbine only) power plants are less efficient than combined cycle plants, they are usually used as peaking power plants, which operate anywhere from several hours per day to a few dozen hours per year—depending on the electricity demand and the generating capacity of the region. In areas with a shortage of base-load and load following power plant capacity or with low fuel costs, a gas turbine powerplant may regularly operate most hours of the day. A large single-cycle gas turbine typically produces 100 to 400 megawatts of electric power and has 35–40% thermodynamic efficiency.
Industrial gas turbines for mechanical drive
Industrial gas turbines that are used solely for mechanical drive or used in collaboration with a recovery steam generator differ from power generating sets in that they are often smaller and feature a dual shaft design as opposed to a single shaft. The power range varies from 1 megawatt up to 50 megawatts. These engines are connected directly or via a gearbox to either a pump or compressor assembly. The majority of installations are used within the oil and gas industries. Mechanical drive applications increase efficiency by around 2%.
Oil and gas platforms require these engines to drive compressors to inject gas into the wells to force oil up via another bore, or to compress the gas for transportation. They are also often used to provide power for the platform. These platforms do not need to use the engine in collaboration with a CHP system due to getting the gas at an extremely reduced cost (often free from burn off gas). The same companies use pump sets to drive the fluids to land and across pipelines in various intervals.
Compressed air energy storage
One modern development seeks to improve efficiency in another way, by separating the compressor and the turbine with a compressed air store. In a conventional turbine, up to half the generated power is used driving the compressor. In a compressed air energy storage configuration, power is used to drive the compressor, and the compressed air is released to operate the turbine when required.
Turboshaft engines
Turboshaft engines are used to drive compressors in gas pumping stations and natural gas liquefaction plants. They are also used in aviation to power all but the smallest modern helicopters, and function as an auxiliary power unit in large commercial aircraft. A primary shaft carries the compressor and its turbine which, together with a combustor, is called a Gas Generator. A separately spinning power-turbine is usually used to drive the rotor on helicopters. Allowing the gas generator and power turbine/rotor to spin at their own speeds allows more flexibility in their design.
Radial gas turbines
Scale jet engines
Also known as miniature gas turbines or micro-jets.
With this in mind the pioneer of modern Micro-Jets, Kurt Schreckling, produced one of the world's first Micro-Turbines, the FD3/67. This engine can produce up to 22 newtons of thrust, and can be built by most mechanically minded people with basic engineering tools, such as a metal lathe.
Microturbines
Evolved from piston engine turbochargers, aircraft APUs or small jet engines, microturbines are 25 to 500 kilowatt turbines the size of a refrigerator.
Microturbines have around 15% efficiencies without a recuperator, 20 to 30% with one and they can reach 85% combined thermal-electrical efficiency in cogeneration.
External combustion
Most gas turbines are internal combustion engines but it is also possible to manufacture an external combustion gas turbine which is, effectively, a turbine version of a hot air engine.
Those systems are usually indicated as EFGT (Externally Fired Gas Turbine) or IFGT (Indirectly Fired Gas Turbine).
External combustion has been used for the purpose of using pulverized coal or finely ground biomass (such as sawdust) as a fuel. In the indirect system, a heat exchanger is used and only clean air with no combustion products travels through the power turbine. The thermal efficiency is lower in the indirect type of external combustion; however, the turbine blades are not subjected to combustion products and much lower quality (and therefore cheaper) fuels are able to be used.
When external combustion is used, it is possible to use exhaust air from the turbine as the primary combustion air. This effectively reduces global heat losses, although heat losses associated with the combustion exhaust remain inevitable.
Closed-cycle gas turbines based on helium or supercritical carbon dioxide also hold promise for use with future high temperature solar and nuclear power generation.
In surface vehicles
Gas turbines are often used on ships, locomotives, helicopters, tanks, and to a lesser extent, on cars, buses, and motorcycles.
A key advantage of jets and turboprops for airplane propulsion – their superior performance at high altitude compared to piston engines, particularly naturally aspirated ones – is irrelevant in most automobile applications. Their power-to-weight advantage, though less critical than for aircraft, is still important.
Gas turbines offer a high-powered engine in a very small and light package. However, they are not as responsive and efficient as small piston engines over the wide range of RPMs and powers needed in vehicle applications. In series hybrid vehicles, as the driving electric motors are mechanically detached from the electricity generating engine, the responsiveness, poor performance at low speed and low efficiency at low output problems are much less important. The turbine can be run at optimum speed for its power output, and batteries and ultracapacitors can supply power as needed, with the engine cycled on and off to run it only at high efficiency. The emergence of the continuously variable transmission may also alleviate the responsiveness problem.
Turbines have historically been more expensive to produce than piston engines, though this is partly because piston engines have been mass-produced in huge quantities for decades, while small gas turbine engines are rarities; however, turbines are mass-produced in the closely related form of the turbocharger.
The turbocharger is basically a compact and simple free shaft radial gas turbine which is driven by the piston engine's exhaust gas. The centripetal turbine wheel drives a centrifugal compressor wheel through a common rotating shaft. This wheel supercharges the engine air intake to a degree that can be controlled by means of a wastegate or by dynamically modifying the turbine housing's geometry (as in a variable geometry turbocharger).
It mainly serves as a power recovery device which converts a great deal of otherwise wasted thermal and kinetic energy into engine boost.
Turbo-compound engines (actually employed on some semi-trailer trucks) are fitted with blow down turbines which are similar in design and appearance to a turbocharger except for the turbine shaft being mechanically or hydraulically connected to the engine's crankshaft instead of to a centrifugal compressor, thus providing additional power instead of boost. While the turbocharger is a pressure turbine, a power recovery turbine is a velocity one.
Passenger road vehicles (cars, bikes, and buses)
A number of experiments have been conducted with gas turbine powered automobiles, the largest by Chrysler. More recently, there has been some interest in the use of turbine engines for hybrid electric cars. For instance, a consortium led by micro gas turbine company Bladon Jets has secured investment from the Technology Strategy Board to develop an Ultra Lightweight Range Extender (ULRE) for next-generation electric vehicles. The objective of the consortium, which includes luxury car maker Jaguar Land Rover and leading electrical machine company SR Drives, is to produce the world's first commercially viable – and environmentally friendly – gas turbine generator designed specifically for automotive applications.
The common turbocharger for gasoline or diesel engines is also a turbine derivative.
Concept cars
The first serious investigation of using a gas turbine in cars was in 1946 when two engineers, Robert Kafka and Robert Engerstein of Carney Associates, a New York engineering firm, came up with the concept where a unique compact turbine engine design would provide power for a rear wheel drive car. After an article appeared in Popular Science, there was no further work, beyond the paper stage.
Early concepts (1950s/60s)
In 1950, designer F.R. Bell and Chief Engineer Maurice Wilks from British car manufacturers Rover unveiled the first car powered with a gas turbine engine. The two-seater JET1 had the engine positioned behind the seats, air intake grilles on either side of the car, and exhaust outlets on the top of the tail. During tests, the car reached top speeds of , at a turbine speed of 50,000 rpm. After being shown in the United Kingdom and the United States in 1950, JET1 was further developed, and was subjected to speed trials on the Jabbeke highway in Belgium in June 1952, where it exceeded . The car ran on petrol, paraffin (kerosene) or diesel oil, but fuel consumption problems proved insurmountable for a production car. JET1 is on display at the London Science Museum.
A French turbine-powered car, the SOCEMA-Grégoire, was displayed at the October 1952 Paris Auto Show. It was designed by the French engineer Jean-Albert Grégoire.
The first turbine-powered car built in the US was the GM Firebird I which began evaluations in 1953. While photos of the Firebird I may suggest that the jet turbine's thrust propelled the car like an aircraft, the turbine actually drove the rear wheels. The Firebird I was never meant as a commercial passenger car and was built solely for testing & evaluation as well as public relation purposes. Additional Firebird concept cars, each powered by gas turbines, were developed for the 1953, 1956 and 1959 Motorama auto shows. The GM Research gas turbine engine also was fitted to a series of transit buses, starting with the Turbo-Cruiser I of 1953.
Starting in 1954 with a modified Plymouth, the American car manufacturer Chrysler demonstrated several prototype gas turbine-powered cars from the early 1950s through the early 1980s. Chrysler built fifty Chrysler Turbine Cars in 1963 and conducted the only consumer trial of gas turbine-powered cars. Each of their turbines employed a unique rotating recuperator, referred to as a regenerator that increased efficiency.
In 1954, Fiat unveiled a concept car with a turbine engine, called Fiat Turbina. This vehicle, looking like an aircraft with wheels, used a unique combination of both jet thrust and the engine driving the wheels. Speeds of were claimed.
In the 1960s, Ford and GM also were developing gas turbine semi-trucks. Ford displayed the Big Red at the 1964 World's Fair. With the trailer, it was long, high, and painted crimson red. It contained the Ford-developed gas turbine engine, with output power and torque of and . The cab boasted a highway map of the continental U.S., a mini-kitchen, bathroom, and a TV for the co-driver. The fate of the truck was unknown for several decades, but it was rediscovered in early 2021 in private hands, having been restored to running order. The Chevrolet division of GM built the Turbo Titan series of concept trucks with turbine motors as analogs of the Firebird concepts, including Turbo Titan I (, shares GT-304 engine with Firebird II), Turbo Titan II (, shares GT-305 engine with Firebird III), and Turbo Titan III (1965, GT-309 engine); in addition, the GM Bison gas turbine truck was shown at the 1964 World's Fair.
Emissions and fuel economy (1970s/80s)
As a result of the U.S. Clean Air Act Amendments of 1970, research was funded into developing automotive gas turbine technology. Design concepts and vehicles were conducted by Chrysler, General Motors, Ford (in collaboration with AiResearch), and American Motors (in conjunction with Williams Research). Long-term tests were conducted to evaluate comparable cost efficiency. Several AMC Hornets were powered by a small Williams regenerative gas turbine weighing and producing at 4450 rpm.
In 1982, General Motors used an Oldsmobile Delta 88 powered by a gas turbine using pulverised coal dust. This was considered at the time as a way for the United States and the Western world to reduce dependence on Middle Eastern oil.
Toyota demonstrated several gas turbine powered concept cars, such as the Century gas turbine hybrid in 1975, the Sports 800 Gas Turbine Hybrid in 1979 and the GTV in 1985. No production vehicles were made. The GT24 engine was exhibited in 1977 without a vehicle.
Later development
In the early 1990s, Volvo introduced the Volvo ECC which was a gas turbine powered hybrid electric vehicle.
In 1993, General Motors developed a gas turbine powered EV1 series hybrid—as a prototype of the General Motors EV1. A Williams International 40 kW turbine drove an alternator which powered the battery–electric powertrain. The turbine design included a recuperator. In 2006, GM went into the EcoJet concept car project with Jay Leno.
At the 2010 Paris Motor Show Jaguar demonstrated its Jaguar C-X75 concept car. This electrically powered supercar has a top speed of and can go from in 3.4 seconds. It uses lithium-ion batteries to power four electric motors which combine to produce 780 bhp. It will travel on a single charge of the batteries, and uses a pair of Bladon Micro Gas Turbines to re-charge the batteries extending the range to .
Racing cars
The first race car fitted with a turbine (in concept only) was assembled in 1955 by a US Air Force group as a hobby project, using a turbine loaned to them by Boeing and a race car owned by the Firestone Tire & Rubber company. The first race car fitted with a turbine for actual racing came about when Rover and the BRM Formula One team joined forces to produce the Rover-BRM, a gas turbine powered coupe, which entered the 1963 24 Hours of Le Mans, driven by Graham Hill and Richie Ginther. It averaged and had a top speed of . American Ray Heppenstall joined Howmet Corporation and McKee Engineering together to develop their own gas turbine sports car in 1968, the Howmet TX, which ran several American and European events, including two wins, and also participated in the 1968 24 Hours of Le Mans. The cars used Continental gas turbines, which eventually set six FIA land speed records for turbine-powered cars.
For open wheel racing, 1967's revolutionary STP-Paxton Turbocar, fielded by racing and entrepreneurial legend Andy Granatelli and driven by Parnelli Jones, nearly won the Indianapolis 500; the Pratt & Whitney ST6B-62 powered turbine car was almost a lap ahead of the second place car when a gearbox bearing failed just three laps from the finish line. The next year the STP Lotus 56 turbine car won the Indianapolis 500 pole position even though new rules restricted the air intake dramatically. In 1971 Team Lotus principal Colin Chapman introduced the Lotus 56B F1 car, powered by a Pratt & Whitney STN 6/76 gas turbine. Chapman had a reputation for building radical championship-winning cars, but had to abandon the project because there were too many problems with turbo lag.
Buses
General Motors fitted the GT-30x series of gas turbines (branded "Whirlfire") to several prototype buses in the 1950s and 1960s, including Turbo-Cruiser I (1953, GT-300); Turbo-Cruiser II (1964, GT-309); Turbo-Cruiser III (1968, GT-309); RTX (1968, GT-309); and RTS 3T (1972).
The arrival of the Capstone Turbine has led to several hybrid bus designs, starting with HEV-1 by AVS of Chattanooga, Tennessee in 1999, and closely followed by Ebus and ISE Research in California, and DesignLine Corporation in New Zealand (and later the United States). AVS turbine hybrids were plagued with reliability and quality control problems, resulting in the liquidation of AVS in 2003. The most successful design, by DesignLine, is now operated in 5 cities in 6 countries, with over 30 buses in operation worldwide and orders for several hundred more being delivered to Baltimore and New York City.
Brescia, Italy, uses serial hybrid buses powered by microturbines on routes through the historical sections of the city.
Motorcycles
The MTT Turbine Superbike appeared in 2000 (hence the designation of Y2K Superbike by MTT) and is the first production motorcycle powered by a turbine engine – specifically, a Rolls-Royce Allison model 250 turboshaft engine, producing about 283 kW (380 bhp). Speed-tested to 365 km/h or 227 mph (according to some stories, the testing team ran out of road during the test), it holds the Guinness World Record for most powerful production motorcycle and most expensive production motorcycle, with a price tag of US$185,000.
Trains
Several locomotive classes have been powered by gas turbines, the most recent incarnation being Bombardier's JetTrain.
Tanks
The Third Reich Wehrmacht Heer's development division, the Heereswaffenamt (Army Ordnance Board), studied a number of gas turbine engine designs for use in tanks starting in mid-1944. The first gas turbine engine design intended for use in armored fighting vehicle propulsion, the BMW 003-based GT 101, was meant for installation in the Panther tank. Towards the end of the war, a Jagdtiger was fitted with one of the aforementioned gas turbines.
The second use of a gas turbine in an armored fighting vehicle was in 1954 when a unit, PU2979, specifically developed for tanks by C. A. Parsons and Company, was installed and trialed in a British Conqueror tank. The Stridsvagn 103 was developed in the 1950s and was the first mass-produced main battle tank to use a turbine engine, the Boeing T50. Since then, gas turbine engines have been used as auxiliary power units in some tanks and as main powerplants in Soviet/Russian T-80s and U.S. M1 Abrams tanks, among others. They are lighter and smaller than diesel engines at the same sustained power output but the models installed to date are less fuel efficient than the equivalent diesel, especially at idle, requiring more fuel to achieve the same combat range. Successive models of M1 have addressed this problem with battery packs or secondary generators to power the tank's systems while stationary, saving fuel by reducing the need to idle the main turbine. T-80s can mount three large external fuel drums to extend their range. Russia has stopped production of the T-80 in favor of the diesel-powered T-90 (based on the T-72), while Ukraine has developed the diesel-powered T-80UD and T-84 with nearly the power of the gas-turbine tank. The French Leclerc tank's diesel powerplant features the "Hyperbar" hybrid supercharging system, where the engine's turbocharger is completely replaced with a small gas turbine which also works as an assisted diesel exhaust turbocharger, enabling engine RPM-independent boost level control and a higher peak boost pressure to be reached (than with ordinary turbochargers). This system allows a smaller displacement and lighter engine to be used as the tank's power plant and effectively removes turbo lag. This special gas turbine/turbocharger can also work independently from the main engine as an ordinary APU.
A turbine is theoretically more reliable and easier to maintain than a piston engine since it has a simpler construction with fewer moving parts, but in practice, turbine parts experience a higher wear rate due to their higher working speeds. The turbine blades are highly sensitive to dust and fine sand so that in desert operations air filters have to be fitted and changed several times daily. An improperly fitted filter, or a bullet or shell fragment that punctures the filter, can damage the engine. Piston engines (especially if turbocharged) also need well-maintained filters, but they are more resilient if the filter does fail.
Like most modern diesel engines used in tanks, gas turbines are usually multi-fuel engines.
Marine applications
Naval
Gas turbines are used in many naval vessels, where they are valued for their high power-to-weight ratio and their ships' resulting acceleration and ability to get underway quickly.
The first gas-turbine-powered naval vessel was the Royal Navy's motor gunboat MGB 2009 (formerly MGB 509) converted in 1947. Metropolitan-Vickers fitted their F2/3 jet engine with a power turbine. The Steam Gun Boat Grey Goose was converted to Rolls-Royce gas turbines in 1952 and operated as such from 1953. The Bold class Fast Patrol Boats Bold Pioneer and Bold Pathfinder built in 1953 were the first ships created specifically for gas turbine propulsion.
The first large-scale, partially gas-turbine powered ships were the Royal Navy's Type 81 (Tribal class) frigates with combined steam and gas powerplants. The first was commissioned in 1961.
The German Navy launched the first in 1961 with 2 Brown, Boveri & Cie gas turbines in the world's first combined diesel and gas propulsion system.
In 1962 the Soviet Navy commissioned the first of 25 ships with 4 gas turbines in a combined gas and gas propulsion system. Those vessels used 4 M8E gas turbines, which generated . They were the first large ships in the world to be powered solely by gas turbines.
The Danish Navy had 6 Søløven-class torpedo boats (the export version of the British Brave class fast patrol boat) in service from 1965 to 1990, which had 3 Bristol Proteus (later RR Proteus) Marine Gas Turbines rated at combined, plus two General Motors Diesel engines, rated at , for better fuel economy at slower speeds. The Danish Navy also operated 10 Willemoes-class torpedo/guided missile boats (in service from 1974 to 2000), which had 3 Rolls-Royce Marine Proteus gas turbines, also rated at , the same as the Søløven-class boats, and 2 General Motors diesel engines, rated at , again for improved fuel economy at slow speeds.
The Swedish Navy produced 6 Spica-class torpedo boats between 1966 and 1967 powered by 3 Bristol Siddeley Proteus 1282 turbines, each delivering . They were later joined by 12 upgraded Norrköping class ships, still with the same engines. With their aft torpedo tubes replaced by antishipping missiles they served as missile boats until the last was retired in 2005.
The Finnish Navy commissioned two corvettes, Turunmaa and Karjala, in 1968. They were equipped with one Rolls-Royce Olympus TM1 gas turbine and three Wärtsilä marine diesels for slower speeds. They were the fastest vessels in the Finnish Navy; they regularly achieved speeds of 35 knots, and 37.3 knots during sea trials. The Turunmaas were decommissioned in 2002. Karjala is today a museum ship in Turku, and Turunmaa serves as a floating machine shop and training ship for Satakunta Polytechnical College.
The next series of major naval vessels were the four Canadian helicopter-carrying destroyers first commissioned in 1972. They used 2 FT4 main propulsion engines, 2 FT12 cruise engines and 3 Solar Saturn 750 kW generators.
The first U.S. gas-turbine powered ship was the U.S. Coast Guard's , a cutter commissioned in 1961 that was powered by two turbines utilizing controllable-pitch propellers. The larger High Endurance Cutters were the first class of larger cutters to utilize gas turbines, the first of which was commissioned in 1967. Since then, gas turbines have powered the U.S. Navy's destroyers, frigates, and guided missile cruisers. A modified design is to be the Navy's first amphibious assault ship powered by gas turbines.
The marine gas turbine operates in a more corrosive atmosphere due to the presence of sea salt in air and fuel and use of cheaper fuels.
Civilian maritime
Up to the late 1940s, much of the progress on marine gas turbines all over the world took place in design offices and engine builders' workshops, and development work was led by the British Royal Navy and other navies. While interest in the gas turbine for marine purposes, both naval and mercantile, continued to increase, the lack of available operating experience from early gas turbine projects limited the number of new ventures embarked upon for seagoing commercial vessels.
In 1951, the diesel–electric oil tanker Auris, 12,290 deadweight tonnage (DWT), was used to obtain operating experience with a main propulsion gas turbine under service conditions at sea and so became the first ocean-going merchant ship to be powered by a gas turbine. Built by Hawthorn Leslie at Hebburn-on-Tyne, UK, in accordance with plans and specifications drawn up by the Anglo-Saxon Petroleum Company and launched on the UK's Princess Elizabeth's 21st birthday in 1947, the ship was designed with an engine room layout that would allow for the experimental use of heavy fuel in one of its high-speed engines, as well as the future substitution of one of its diesel engines by a gas turbine. The Auris operated commercially as a tanker for three-and-a-half years with a diesel–electric propulsion unit as originally commissioned, but in 1951 one of its four diesel engines – which were known as "Faith", "Hope", "Charity" and "Prudence" – was replaced by the world's first marine gas turbine engine, an open-cycle gas turbo-alternator built by the British Thomson-Houston Company in Rugby. Following successful sea trials off the Northumbrian coast, the Auris set sail from Hebburn-on-Tyne in October 1951 bound for Port Arthur in the US and then Curaçao in the southern Caribbean, returning to Avonmouth after 44 days at sea, successfully completing her historic trans-Atlantic crossing. During this time at sea the gas turbine burnt diesel fuel and operated without an involuntary stop or mechanical difficulty of any kind. She subsequently visited Swansea, Hull, Rotterdam, Oslo and Southampton, covering a total of 13,211 nautical miles. The Auris then had all of its power plants replaced with a directly coupled gas turbine to become the first civilian ship to operate solely on gas turbine power.
Despite the success of this early experimental voyage the gas turbine did not replace the diesel engine as the propulsion plant for large merchant ships. At constant cruising speeds the diesel engine simply had no peer in the vital area of fuel economy. The gas turbine did have more success in Royal Navy ships and the other naval fleets of the world where sudden and rapid changes of speed are required by warships in action.
The United States Maritime Commission was looking for options to update WWII Liberty ships, and heavy-duty gas turbines were one of those selected. In 1956 the John Sergeant was lengthened and equipped with a General Electric HD gas turbine with exhaust-gas regeneration, reduction gearing and a variable-pitch propeller. It operated for 9,700 hours, using residual fuel (Bunker C) for 7,000 hours. Fuel efficiency was on a par with steam propulsion at per hour, and power output was higher than expected at due to the ambient temperature of the North Sea route being lower than the design temperature of the gas turbine. This gave the ship a speed capability of 18 knots, up from 11 knots with the original power plant, and well in excess of the 15 knots targeted. The ship made its first transatlantic crossing with an average speed of 16.8 knots, in spite of some rough weather along the way. Suitable Bunker C fuel was available only at a limited number of ports because the quality of the fuel was critical. The fuel oil also had to be treated on board to reduce contaminants, a labor-intensive process that was not suitable for automation at the time. Ultimately, the variable-pitch propeller, which was of a new and untested design, ended the trial, as three consecutive annual inspections revealed stress-cracking. This did not reflect poorly on the marine-propulsion gas-turbine concept, though, and the trial was a success overall. The success of this trial opened the way for more development by GE on the use of HD gas turbines for marine use with heavy fuels. The John Sergeant was scrapped in 1972 at Portsmouth, PA.
Boeing launched its first passenger-carrying waterjet-propelled hydrofoil, the Boeing 929, in April 1974. Those ships were powered by two Allison 501-KF gas turbines.
Between 1971 and 1981, Seatrain Lines operated a scheduled container service between ports on the eastern seaboard of the United States and ports in northwest Europe across the North Atlantic with four container ships of 26,000 tonnes DWT. Those ships were powered by twin Pratt & Whitney gas turbines of the FT 4 series. The four ships in the class were named Euroliner, Eurofreighter, Asialiner and Asiafreighter. Following the dramatic Organization of the Petroleum Exporting Countries (OPEC) price increases of the mid-1970s, operations were constrained by rising fuel costs. Some modification of the engine systems on those ships was undertaken to permit the burning of a lower grade of fuel (i.e., marine diesel). The switch to a different, untested fuel in a marine gas turbine successfully reduced fuel costs, but maintenance costs increased with the fuel change. After 1981 the ships were sold and refitted with what were, at the time, more economical diesel-fueled engines, but the increased engine size reduced cargo space.
The first passenger ferry to use a gas turbine was the GTS Finnjet, built in 1977 and powered by two Pratt & Whitney FT 4C-1 DLF turbines, generating and propelling the ship to a speed of 31 knots. However, the Finnjet also illustrated the shortcomings of gas turbine propulsion in commercial craft, as high fuel prices made operating her unprofitable. After four years of service, additional diesel engines were installed on the ship to reduce running costs during the off-season. The Finnjet was also the first ship with combined diesel–electric and gas propulsion. Another example of commercial use of gas turbines in a passenger ship is Stena Line's HSS class fastcraft ferries. HSS 1500-class Stena Explorer, Stena Voyager and Stena Discovery vessels use combined gas and gas setups of twin GE LM2500 plus GE LM1600 power for a total of . The slightly smaller HSS 900-class Stena Carisma uses twin ABB–STAL GT35 turbines rated at gross. The Stena Discovery was withdrawn from service in 2007, another victim of high fuel costs.
In July 2000, the Millennium became the first cruise ship to be powered by both gas and steam turbines. The ship featured two General Electric LM2500 gas turbine generators whose exhaust heat was used to operate a steam turbine generator in a COGES (combined gas electric and steam) configuration. Propulsion was provided by two electrically driven Rolls-Royce Mermaid azimuth pods. The liner uses a combined diesel and gas configuration.
In marine racing applications the 2010 C5000 Mystic catamaran Miss GEICO uses two Lycoming T-55 turbines for its power system.
Advances in technology
Gas turbine technology has steadily advanced since its inception and continues to evolve. Development is actively producing both smaller gas turbines and more powerful and efficient engines. Aiding in these advances are computer-based design (specifically computational fluid dynamics and finite element analysis) and the development of advanced materials: Base materials with superior high-temperature strength (e.g., single-crystal superalloys that exhibit yield strength anomaly) or thermal barrier coatings that protect the structural material from ever-higher temperatures. These advances allow higher compression ratios and turbine inlet temperatures, more efficient combustion and better cooling of engine parts.
Computational fluid dynamics (CFD) has contributed to substantial improvements in the performance and efficiency of gas turbine engine components through enhanced understanding of the complex viscous flow and heat transfer phenomena involved. For this reason, CFD is one of the key computational tools used in design and development of gas turbine engines.
The simple-cycle efficiencies of early gas turbines were practically doubled by incorporating inter-cooling, regeneration (or recuperation), and reheating. These improvements, of course, come at the expense of increased initial and operating costs, and they cannot be justified unless the decrease in fuel costs offsets the increase in other costs. The relatively low fuel prices, the general desire in the industry to minimize installation costs, and the tremendous increase in the simple-cycle efficiency to about 40 percent left little incentive to opt for these modifications.
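To make the efficiency figures above concrete, the following minimal Python sketch evaluates the textbook ideal air-standard Brayton relation; the constant gamma = 1.4 and the example pressure ratios are assumptions chosen purely for illustration, and real machines fall well below these ideal values because of component losses.

# Ideal air-standard Brayton cycle efficiency: eta = 1 - r^(-(gamma-1)/gamma)
# Assumes constant specific heats with gamma = 1.4; illustrative only.
GAMMA = 1.4

def simple_cycle_efficiency(pressure_ratio: float) -> float:
    # Thermal efficiency of the ideal simple cycle at a given compressor pressure ratio.
    return 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)

for r in (10, 20, 30):
    print(f"pressure ratio {r:>2}: ideal efficiency {simple_cycle_efficiency(r):.1%}")
# A pressure ratio of 20 gives roughly 57% ideal; losses bring real simple-cycle
# machines down toward the roughly 40 percent figure mentioned above.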
On the emissions side, the challenge is to increase turbine inlet temperatures while at the same time reducing peak flame temperature in order to achieve lower NOx emissions and meet the latest emission regulations. In May 2011, Mitsubishi Heavy Industries achieved a turbine inlet temperature of on a 320 megawatt gas turbine, and 460 MW in gas turbine combined-cycle power generation applications in which gross thermal efficiency exceeds 60%.
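The gain from combined-cycle operation referred to above can be illustrated with a rough energy-balance sketch; the function and the example numbers below (40% gas turbine efficiency, 35% steam cycle efficiency, 90% heat-recovery effectiveness) are assumptions for illustration, not figures taken from the Mitsubishi installation cited in the text.

def combined_cycle_efficiency(eta_gas: float, eta_steam: float, heat_recovery: float = 0.9) -> float:
    # Gross efficiency when a bottoming steam cycle converts part of the gas
    # turbine's exhaust heat: eta_gt + (1 - eta_gt) * recovery * eta_st.
    return eta_gas + (1.0 - eta_gas) * heat_recovery * eta_steam

print(f"{combined_cycle_efficiency(0.40, 0.35):.1%}")  # about 58.9%, approaching the 60% class quoted above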
Compliant foil bearings were commercially introduced to gas turbines in the 1990s. These can withstand over a hundred thousand start/stop cycles and have eliminated the need for an oil system. The application of microelectronics and power switching technology has enabled the development of commercially viable electricity generation by microturbines for distribution and vehicle propulsion.
In 2013, General Electric started the development of the GE9X with a compression ratio of 61:1.
Advantages and disadvantages
The following are advantages and disadvantages of gas-turbine engines:
Advantages include:
Very high power-to-weight ratio compared to reciprocating engines.
Smaller than most reciprocating engines of the same power rating.
Smooth rotation of the main shaft produces far less vibration than a reciprocating engine.
Fewer moving parts than reciprocating engines, resulting in lower maintenance cost and higher reliability/availability over the service life.
Greater reliability, particularly in applications where sustained high power output is required.
Waste heat is dissipated almost entirely in the exhaust. This results in a high-temperature exhaust stream that is very usable for boiling water in a combined cycle, or for cogeneration.
Lower peak combustion pressures than reciprocating engines in general.
High shaft speeds in smaller "free turbine units", although larger gas turbines employed in power generation operate at synchronous speeds.
Low lubricating oil cost and consumption.
Can run on a wide variety of fuels.
Very low toxic emissions of CO and HC due to excess air, complete combustion and no "quench" of the flame on cold surfaces.
Disadvantages include:
Core engine costs can be high due to the use of exotic materials, especially in applications where high reliability is required (e.g. aircraft propulsion)
Less efficient than reciprocating engines at idle speed.
Longer startup time than reciprocating engines.
Less responsive to changes in power demand compared with reciprocating engines.
Characteristic whine can be hard to suppress. The exhaust (particularly on turbojets) can also produce a distinctive roaring sound.
Major manufacturers
Siemens Energy
Ansaldo
Mitsubishi Heavy Industries
Rolls-Royce
GE Aviation
Silmash
ODK
Pratt & Whitney
P&W Canada
Solar Turbines
Alstom
Zorya-Mashproekt
MTU Aero Engines
MAN Turbo
IHI Corporation
Kawasaki Heavy Industries
HAL
BHEL
MAPNA
Techwin
Doosan Heavy
Shanghai Electric
Harbin Electric
AECC
Testing
British, German, other national and international test codes are used to standardize the procedures and definitions used to test gas turbines. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the turbine and associated systems. In the United States, ASME has produced several performance test codes on gas turbines. This includes ASME PTC 22–2014. These ASME performance test codes have gained international recognition and acceptance for testing gas turbines. The single most important and differentiating characteristic of ASME performance test codes, including PTC 22, is that the test uncertainty of the measurement indicates the quality of the test and is not to be used as a commercial tolerance.
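As a rough illustration of how an overall test uncertainty can be assembled from individual instrument uncertainties, the short sketch below combines independent relative uncertainties by root-sum-square; this is generic measurement-uncertainty arithmetic, not the actual PTC 22 procedure, and the example component values are hypothetical.

import math

def combined_relative_uncertainty(*components: float) -> float:
    # Root-sum-square combination of independent relative uncertainties.
    return math.sqrt(sum(u * u for u in components))

# Hypothetical components: fuel flow 0.7%, fuel heating value 0.4%, electrical power 0.25%
print(f"combined test uncertainty ~ {combined_relative_uncertainty(0.007, 0.004, 0.0025):.2%}")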
See also
List of aircraft engines
Centrifugal compressor
Gas turbine modular helium reactor
Pneumatic motor
Pulsejet
Steam turbine
Turbine engine failure
Wind turbine
References
Further reading
Stationary Combustion Gas Turbines including Oil & Over-Speed Control System description
"Aircraft Gas Turbine Technology" by Irwin E. Treager, McGraw-Hill, Glencoe Division, 1979, .
"Gas Turbine Theory" by H.I.H. Saravanamuttoo, G.F.C. Rogers and H. Cohen, Pearson Education, 2001, 5th ed., .
R. M. "Fred" Klaass and Christopher DellaCorte, "The Quest for Oil-Free Gas Turbine Engines," SAE Technical Papers, No. 2006-01-3055, available at sae.org
"Model Jet Engines" by Thomas Kamps Traplet Publications
Aircraft Engines and Gas Turbines, Second Edition by Jack L. Kerrebrock, The MIT Press, 1992, .
"Forensic Investigation of a Gas Turbine Event" by John Molloy, M&M Engineering
"Gas Turbine Performance, 2nd Edition" by Philip Walsh and Paul Fletcher, Wiley-Blackwell, 2004
External links
Technology Speed of Civil Jet Engines
MIT Gas Turbine Laboratory
MIT Microturbine research
California Distributed Energy Resource guide – Microturbine generators
Introduction to how a gas turbine works from "how stuff works.com"
Aircraft gas turbine simulator for interactive learning
An online handbook on stationary gas turbine technologies compiled by the US DOE.
Engines
Marine propulsion | Gas turbine | [
"Physics",
"Technology",
"Engineering"
] | 12,211 | [
"Machines",
"Engines",
"Gas turbines",
"Physical systems",
"Marine engineering",
"Marine propulsion"
] |
58,666 | https://en.wikipedia.org/wiki/United%20States%20Environmental%20Protection%20Agency | The Environmental Protection Agency (EPA) is an independent agency of the United States government tasked with environmental protection matters. President Richard Nixon proposed the establishment of EPA on July 9, 1970; it began operation on December 2, 1970, after Nixon signed an executive order. The order establishing the EPA was ratified by committee hearings in the House and Senate.
The agency is led by its administrator, who is appointed by the president and approved by the Senate. The current administrator is Michael S. Regan. The EPA is not a Cabinet department, but the administrator is normally given cabinet rank. The EPA has its headquarters in Washington, D.C. There are regional offices for each of the agency's ten regions, as well as 27 laboratories around the country.
The agency conducts environmental assessment, research, and education. It has the responsibility of maintaining and enforcing national standards under a variety of environmental laws, in consultation with state, tribal, and local governments. EPA enforcement powers include fines, sanctions, and other measures.
It delegates some permitting, monitoring, and enforcement responsibility to U.S. states and the federally recognized tribes. The agency also works with industries and all levels of government in a wide variety of voluntary pollution prevention programs and energy conservation efforts.
The agency's budgeted employee level in 2023 is 16,204.1 full-time equivalent (FTE). More than half of EPA's employees are engineers, scientists, and environmental protection specialists; other employees include legal, public affairs, financial, and information technologists.
History
Background
Beginning in the late 1950s and through the 1960s, Congress reacted to increasing public concern about the impact that human activity could have on the environment. Senator James E. Murray introduced a bill, the Resources and Conservation Act (RCA) of 1959, in the 86th Congress. The bill would have established a Council on Environmental Quality in the Executive Office of the President, declared a national environmental policy, and required the preparation of an annual environmental report. The conservation movement was weak at the time and the bill did not pass Congress.
The 1962 publication of Silent Spring, a best-selling book by Rachel Carson, alerted the public about the detrimental effects on animals and humans of the indiscriminate use of pesticide chemicals.
In the years following, Congress discussed possible solutions. In 1968, a joint House–Senate colloquium was convened by the chairmen of the Senate Committee on Interior and Insular Affairs, Senator Henry M. Jackson, and the House Committee on Science and Astronautics, Representative George P. Miller, to discuss the need for and means of implementing a national environmental policy. Congress enacted the National Environmental Policy Act of 1969 (NEPA) and the law was based on ideas that had been discussed in the 1959 and subsequent hearings.
The Richard Nixon administration made the environment a policy priority in 1969-1971 and created two new agencies, the Council on Environmental Quality (CEQ) and EPA. Nixon signed NEPA into law on January 1, 1970. The law established the CEQ in the Executive Office of the President. NEPA required that a detailed statement of environmental impacts be prepared for all major federal actions significantly affecting the environment. The "detailed statement" would ultimately be referred to as an environmental impact statement (EIS).
Establishment
On July 9, 1970, Nixon proposed an executive reorganization that consolidated many environmental responsibilities of the federal government under one agency, a new Environmental Protection Agency. This proposal included merging pollution control programs from a number of departments, such as the combination of pesticide programs from the United States Department of Agriculture and the United States Department of the Interior. After conducting hearings during that summer, the House and Senate approved the proposal. The EPA was created 90 days before it had to operate, and officially opened its doors on December 2, 1970. The agency's first administrator, William Ruckelshaus, took the oath of office on December 4, 1970.
EPA's primary predecessor was the former Environmental Health Divisions of the U.S. Public Health Service (PHS), and its creation caused one of a series of reorganizations of PHS that occurred during 1966–1973. From PHS, EPA absorbed the entire National Air Pollution Control Administration, as well as the Environmental Control Administration's Bureau of Solid Waste Management, Bureau of Water Hygiene, and part of its Bureau of Radiological Health. It also absorbed the Federal Water Quality Administration, which had previously been transferred from PHS to the Department of the Interior in 1966. A few functions from other agencies were also incorporated into EPA: the formerly independent Federal Radiation Council was merged into it; pesticides programs were transferred from the Department of the Interior, Food and Drug Administration, and Agricultural Research Service; and some functions were transferred from the Council on Environmental Quality and Atomic Energy Commission.
Upon its creation, EPA inherited 84 sites spread across 26 states, of which 42 sites were laboratories. The EPA consolidated these laboratories into 22 sites.
1970s
In its first year, the EPA had a budget of $1.4 billion and 5,800 employees. At its start, the EPA was primarily a technical assistance agency that set goals and standards. Soon, new acts and amendments passed by Congress gave the agency its regulatory authority. A major expansion of the Clean Air Act was approved in December 1970.
EPA staff recall that in the early days there was "an enormous sense of purpose and excitement" and the expectation that "there was this agency which was going to do something about a problem that clearly was on the minds of a lot of people in this country," leading to tens of thousands of resumes from those eager to participate in the mighty effort to clean up America's environment.
When EPA first began operation, members of the private sector felt strongly that the environmental protection movement was a passing fad. Ruckelshaus stated that he felt pressure to show a public which was deeply skeptical about government's effectiveness, that EPA could respond effectively to widespread concerns about pollution.
The burning Cuyahoga River in Cleveland, Ohio, in 1969 led to a national outcry and criminal charges against major steel companies. The US Justice Department in late 1970 began pollution control litigation in cooperation with the new EPA. Congress enacted the Federal Water Pollution Control Act Amendments of 1972, better known as the Clean Water Act (CWA). The CWA established a national framework for addressing water quality, including mandatory pollution control standards, to be implemented by the agency in partnership with the states. Congress amended the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) in 1972, requiring EPA to measure every pesticide's risks against its potential benefits.
In 1973 President Nixon appointed Russell E. Train to be the next EPA Administrator. In 1974 Congress passed the Safe Drinking Water Act, requiring EPA to develop mandatory federal standards for all public water systems, which serve 90% of the US population. The law required EPA to enforce the standards with the cooperation of state agencies.
In October 1976, Congress passed the Toxic Substances Control Act (TSCA) which, like FIFRA, related to the manufacture, labeling and usage of commercial products rather than pollution. This act gave the EPA the authority to gather information on chemicals and require producers to test them, gave it the ability to regulate chemical production and use (with specific mention of PCBs), and required the agency to create the National Inventory listing of chemicals.
Congress also enacted the Resource Conservation and Recovery Act (RCRA) in 1976, significantly amending the Solid Waste Disposal Act of 1965. It tasked the EPA with setting national goals for waste disposal, conserving energy and natural resources, reducing waste, and ensuring environmentally sound management of waste. Accordingly, the agency developed regulations for solid and hazardous waste that were to be implemented in collaboration with states.
President Jimmy Carter appointed Douglas M. Costle as EPA administrator in 1977. To manage the agency's expanding legal mandates and workload, by the end of 1979 the budget grew to $5.4 billion and the workforce size increased to 13,000.
1980s
In 1980, following the discovery of many abandoned or mismanaged hazardous waste sites such as Love Canal, Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act, nicknamed "Superfund." The new law authorized EPA to cast a wider net for parties responsible for sites contaminated by previous hazardous waste disposal and established a funding mechanism for assessment and cleanup.
In a dramatic move to the right, President Ronald Reagan in 1981 appointed Anne Gorsuch as EPA administrator. Gorsuch based her administration of EPA on the New Federalism approach of downsizing federal agencies by delegating their functions and services to the individual states. She believed that EPA was over-regulating business and that the agency was too large and not cost-effective. During her 22 months as agency head, she cut the budget of the EPA by 22%, reduced the number of cases filed against polluters, relaxed Clean Air Act regulations, and facilitated the spraying of restricted-use pesticides. She cut the total number of agency employees, and hired staff from the industries they were supposed to be regulating. Environmentalists contended that her policies were designed to placate polluters, and accused her of trying to dismantle the agency.
Assistant Administrator Rita Lavelle was fired by Reagan in February 1983 because of her mismanagement of the Superfund program. Gorsuch had increasing confrontations with Congress over Superfund and other programs, including her refusal to submit subpoenaed documents. Gorsuch was cited for contempt of Congress and the White House directed EPA to submit the documents to Congress. Gorsuch and most of her senior staff resigned in March 1983. Reagan then appointed William Ruckelshaus as EPA Administrator for a second term. As a condition for accepting his appointment, Ruckelshaus obtained autonomy from the White House in appointing his senior management team. He then appointed experienced, competent professionals to the top management positions, and worked to restore public confidence in the agency.
Lee M. Thomas succeeded Ruckelshaus as administrator in 1985. In 1986 Congress passed the Emergency Planning and Community Right-to-Know Act, which authorized the EPA to gather data on toxic chemicals and share this information with the public. EPA also researched the implications of stratospheric ozone depletion. Under Administrator Thomas, EPA joined with several international organizations to perform a risk assessment of stratospheric ozone, which helped provide motivation for the Montreal Protocol, which was agreed to in August 1987.
In 1988, during his first presidential campaign, George H. W. Bush was vocal about environmental issues. Following his election victory, he appointed William K. Reilly, an environmentalist, as EPA Administrator in 1989. Under Reilly's leadership, the EPA implemented voluntary programs and initiated the development of a "cluster rule" for multimedia regulation of the pulp and paper industry. At the time, there was increasing awareness that some environmental issues were regional or localized in nature, and were more appropriately addressed with sub-national approaches and solutions. This understanding was reflected in the 1990 amendments to the Clean Air Act and in new approaches by the agency, such as a greater emphasis on watershed-based approaches in Clean Water Act programs.
1990s
In 1992 EPA and the Department of Energy launched the Energy Star program, a voluntary program that fosters energy efficiency.
Carol Browner was appointed EPA administrator by President Bill Clinton and served from 1993 to 2001. Major projects during Browner's term included:
Initiation of the Brownfields pilot program in 1995
Initial hazardous air pollution standards for petroleum refineries in 1995
Initial lead paint abatement regulations under TSCA in 1996
Update of the National Ambient Air Quality Standards for particulate matter and ozone in 1997.
Since the passage of the Superfund law in 1980, an excise tax had been levied on the chemical and petroleum industries, to support the cleanup trust fund. Congressional authorization of the tax was due to expire in 1995. Although Browner and the Clinton Administration supported continuation of the tax, Congress declined to reauthorize it. Subsequently, the Superfund program was supported only by annual appropriations, greatly reducing the number of waste sites that are remediated in a given year. (In 2021 Congress reauthorized an excise tax on chemical manufacturers.)
Major legislative updates during the Clinton Administration were the Food Quality Protection Act and the 1996 amendments to the Safe Drinking Water Act.
2000s
President George W. Bush appointed Christine Todd Whitman as EPA administrator in 2001. Whitman was succeeded by Mike Leavitt in 2003 and Stephen L. Johnson in 2005.
In March 2005 nine states (California, New York, New Jersey, New Hampshire, Massachusetts, Maine, Connecticut, New Mexico and Vermont) sued the EPA. The EPA's inspector general had determined that the EPA's regulation of mercury emissions did not follow the Clean Air Act, and that the regulations were influenced by top political appointees. The EPA had suppressed a study it commissioned by Harvard University which contradicted its position on mercury controls. The suit alleged that the EPA's rule exempting coal-fired power plants from "maximum available control technology" was illegal, and additionally charged that the EPA's system of cap-and-trade to lower average mercury levels would allow power plants to forego reducing mercury emissions, which they objected would lead to dangerous local hotspots of mercury contamination even if average levels declined. Several states also began to enact their own mercury emission regulations. Illinois's proposed rule would have reduced mercury emissions from power plants by an average of 90% by 2009. In 2008—by which point a total of fourteen states had joined the suit—the U.S. Court of Appeals for the District of Columbia ruled that the EPA regulations violated the Clean Air Act. In response, EPA announced plans to propose such standards to replace the vacated Clean Air Mercury Rule, and did so on March 16, 2011.
In July 2005 there was a delay in the issuance of an EPA report showing that auto companies were using loopholes to produce less fuel-efficient cars. The report was supposed to be released the day before a controversial energy bill was passed and would have provided backup for those opposed to it, but the EPA delayed its release at the last minute.
EPA initiated its voluntary WaterSense program in 2006 to encourage water efficiency through the use of a special label on consumer products.
In 2007 the state of California sued the EPA for its refusal to allow California and 16 other states to raise fuel economy standards for new cars. EPA Administrator Stephen Johnson claimed that the EPA was working on its own standards, but the move has been widely considered an attempt to shield the auto industry from environmental regulation by setting lower standards at the federal level, which would then preempt state laws. California governor Arnold Schwarzenegger, along with governors from 13 other states, stated that the EPA's actions ignored federal law, and that existing California standards (adopted by many states in addition to California) were almost twice as effective as the proposed federal standards. It was reported that Johnson ignored his own staff in making this decision.
In 2007 it was reported that EPA research was suppressed by career managers. Supervisors at EPA's National Center for Environmental Assessment required several paragraphs to be deleted from a peer-reviewed journal article about EPA's integrated risk information system, which led two co-authors to have their names removed from the publication, and the corresponding author, Ching-Hung Hsu, to leave EPA "because of the draconian restrictions placed on publishing". The 2007 report stated that EPA subjected employees who author scientific papers to prior restraint, even if those papers are written on personal time.
In December 2007 EPA administrator Johnson approved a draft of a document that declared that climate change imperiled the public welfare—a decision that would trigger the first national mandatory global-warming regulations. Associate Deputy Administrator Jason Burnett e-mailed the draft to the White House. White House aides—who had long resisted mandatory regulations as a way to address climate change—knew the gist of what Johnson's finding would be, Burnett said. They also knew that once they opened the attachment, it would become a public record, making it controversial and difficult to rescind. So they did not open it; rather, they called Johnson and asked him to take back the draft. Johnson rescinded the draft; in July 2008, he issued a new version which did not state that global warming was danger to public welfare. Burnett resigned in protest.
In April 2008, the Union of Concerned Scientists said that more than half of the nearly 1,600 EPA staff scientists who responded online to a detailed questionnaire reported they had experienced incidents of political interference in their work. The survey included chemists, toxicologists, engineers, geologists and experts in other fields of science. About 40% of the scientists reported that the interference had been more prevalent in the last five years than in previous years.
President Barack Obama appointed Lisa P. Jackson as EPA administrator in 2009.
2010s
In 2010 it was reported that a $3 million mapping study on sea level rise was suppressed by EPA management during both the Bush and Obama administrations, and managers changed a key interagency report to reflect the removal of the maps.
Between 2011 and 2012, some EPA employees reported difficulty in conducting and reporting the results of studies on hydraulic fracturing due to industry and governmental pressure, and were concerned about the censorship of environmental reports.
President Obama appointed Gina McCarthy as EPA administrator in 2013.
In 2014, the EPA published its "Tier 3" standards for cars, trucks and other motor vehicles, which tightened air pollution emission requirements and lowered the sulfur content in gasoline.
In 2015, the EPA discovered extensive violations by Volkswagen Group in its manufacture of Volkswagen and Audi diesel engine cars, for the 2009 through 2016 model years. Following notice of violations and potential criminal sanctions, Volkswagen later agreed to a legal settlement and paid billions of US dollars in criminal penalties, and was required to initiate a vehicle buyback program and modify the engines of the vehicles to reduce illegal air emissions.
In August 2015, the EPA finalized the Clean Power Plan to regulate emissions from power plants, projecting a 15-year cut of 32%, or 789 million metric tons of carbon dioxide. In 2019 it was voided and replaced by the Affordable Clean Energy rule under the Trump administration, and in 2022 the Supreme Court ruled that the EPA had exceeded its authority in adopting it.
In August 2015, the 2015 Gold King Mine waste water spill occurred when EPA contractors examined the level of pollutants such as lead and arsenic in a Colorado mine, and accidentally released over three million gallons of waste water into Cement Creek and the Animas River.
In 2015, the International Agency for Research on Cancer (IARC), a branch of the World Health Organization, cited research linking glyphosate, an ingredient of the weed killer Roundup manufactured by the chemical company Monsanto, to non-Hodgkin's lymphoma. In March 2017, the presiding judge in a litigation brought about by people who claim to have developed glyphosate-related non-Hodgkin's lymphoma opened Monsanto emails and other documents related to the case, including email exchanges between the company and federal regulators. According to The New York Times, the "records suggested that Monsanto had ghostwritten research that was later attributed to academics and indicated that a senior official at the Environmental Protection Agency had worked to quash a review of Roundup's main ingredient, glyphosate, that was to have been conducted by the United States Department of Health and Human Services." The records show that Monsanto was able to prepare "a public relations assault" on the finding after they were alerted to the determination by Jess Rowland, the head of the EPA's cancer assessment review committee at that time, months in advance. Emails also showed that Rowland "had promised to beat back an effort by the Department of Health and Human Services to conduct its own review."
On February 17, 2017, President Donald Trump appointed Scott Pruitt as EPA administrator. The Democratic Party saw the appointment as a controversial move, as Pruitt had spent most of his career challenging environmental regulations and policies. He had no previous experience in the environmental protection field and had received financial support from the fossil fuel industry. In 2017, the Trump administration proposed cutting the EPA's budget by 31%, from $8.1 billion to $5.7 billion, and eliminating a quarter of the agency's jobs. However, this cut was not approved by Congress. Pruitt resigned from the position on July 5, 2018, citing "unrelenting attacks" due to ongoing ethics controversies.
President Trump appointed Andrew R. Wheeler as EPA Administrator in 2019.
On July 17, 2019, EPA management prohibited the agency's Scientific Integrity Official, Francesca Grifo, from testifying at a House committee hearing. EPA offered to send a different representative in place of Grifo and accused the committee of "dictating to the agency who they believe was qualified to speak." The hearing was to discuss the importance of allowing federal scientists and other employees to speak freely when and to whom they want to about their research without having to worry about any political consequences.
In September 2019 air pollution standards in California were once again under attack, as the Trump administration attempted to revoke a waiver issued to the state which allowed more stringent standards for auto and truck emissions than the federal standards.
2020s
President Joe Biden appointed Michael S. Regan to be administrator in 2021. Regan began serving on March 11, 2021.
In October 2021 EPA announced its "PFAS Strategic Roadmap." PFASs are organofluorine chemical compounds referred to as "forever chemicals". The roadmap is a "whole-of-EPA" strategy and the agency will consider the full life cycle of PFAS, including preventing PFAS from entering the environment, holding polluters accountable, and remediation of contaminated sites. It also will include drinking water monitoring and risk assessment for PFOA and PFOS in biosolids (processed sewage sludge used as fertilizer).
In December 2021 EPA issued new greenhouse gas standards for passenger cars and light trucks. The standards, which will reduce climate pollution and improve public health, became effective for the 2023 vehicle model year.
In March 2022 the Biden administration allowed California to again set stricter auto emissions standards.
In August 2022 the EPA was allotted approximately $53.216 billion in funding pursuant to the Inflation Reduction Act (IRA). The EPA listed 24 initiatives in total, the most notable among them being greenhouse gas reduction and monitoring, a Superfund petroleum tax, replacing current heavy-duty vehicles with zero-emission vehicles, and a methane incentive program.
On February 3, 2023, more than 100 train cars derailed in East Palestine, Ohio, and around half of those cars contained chemicals such as butyl acrylate, vinyl chloride, and ethylhexyl acrylate. The chemicals subsequently caught fire, producing flames visible from miles around, and the fumes filled the air, with residents reporting animals falling ill and a burning sensation in their eyes and noses. The EPA is monitoring the situation, and experts recommend that local residents take part in the EPA's at-home air screening.
On April 12, 2023, EPA proposed new federal vehicle tailpipe emissions standards that would accelerate the transition to electric vehicles (EVs). The standards would require at least two-thirds of all new cars sold in the United States to be zero-emissions vehicles by 2032. The rules seek to reduce air pollution and climate change. The EPA sought public comment by July 25, 2023. If approved, the rules would have a significant impact on the transportation sector and public health. In March 2024, EPA finalized the new rules and projected they would cut emissions by 7 billion metric tons, or 56% of 2026 levels, by 2032.
In April 2024, EPA finalized new standards for power plant carbon emissions, projecting cuts of 65,000 tons by 2028 and 1.38 billion tons by 2047. The agency also issued final drinking water standards for six PFAS compounds.
In December 2024 the U.S. Environmental Protection Agency announced that it had approved California's landmark plan to end the sale of gasoline-only vehicles by 2035. EPA Administrator Michael Regan granted a waiver under the Clean Air Act allowing California to implement its plan, which was first announced in 2020. It requires that by 2035 at least 80% of new cars sold be electric and up to 20% plug-in hybrid models. California's rules have been adopted by 11 other states, including New York, Massachusetts and Oregon.
Organization
The EPA is led by the administrator, appointed following nomination by the president and approval from Congress.
Offices
Office of the Administrator (OA). The office consisted of 12 divisions:
Office of Administrative and Executive Services
Office of Children's Health Protection
Children's Health Protection Advisory Committee
Office of Civil Rights
Office of Congressional and Intergovernmental Relations
Office of Continuous Improvement
Office of the Executive Secretariat
Office of Homeland Security
Office of Policy
Office of Public Affairs
Office of Public Engagement and Environmental Education
Office of Small and Disadvantaged Business Utilization
Science Advisory Board
Office of Air and Radiation (OAR)
Office of Chemical Safety and Pollution Prevention (OCSPP)
Office of the Chief Financial Officer (OCFO)
Office of Environmental Justice and External Civil Rights
Office of Enforcement and Compliance Assurance (OECA)
Office of General Counsel (OGC)
Office of Inspector General (OIG)
Office of International and Tribal Affairs (OITA)
Office of Mission Support (OMS)
Office of Resources and Business Operations (ORBO)
Environmental Appeals Board
Office of Federal Sustainability
Office of Administrative Law Judges
Office of Acquisition Solutions (OAS)
Office of Administration (OA)
Office of Human Resources (OHR)
Office of Grants and Debarment (OGD)
Office of Customer Advocacy, Policy and Portfolio Management (OCAPPM)
Office of Digital Services and Technical Architecture (ODSTA)
Office of Information Management (OIM)
Office of Information Security and Privacy (OISP)
Office of Enterprise Information Programs (OEIP)
Office of IT Operations (OITO)
Office of Research and Development (ORD), which consisted of:
Immediate Office of the Assistant Administrator
Office of Science Advisor, Policy, and Engagement (OSAPE)
Office of Science Information Management (OSIM)
Office of Resource Management
Center for Computational Toxicology and Exposure (CCTE)
Center for Environmental Measurement and Modeling (CEMM)
Center for Public Health and Environmental Assessment (CPHEA)
Center for Environmental Solutions and Emergency Response (CESER)
Office of Land and Emergency Management (OLEM), which consisted of:
Office of Superfund Remediation and Technology Innovation
Office of Resource Conservation and Recovery
Office of Underground Storage Tanks
Office of Brownfields and Land Revitalization
Office of Emergency Management
Federal Facilities Restoration and Reuse Office
Office of Water (OW), which consisted of:
Office of Ground Water and Drinking Water (OGWDW)
Office of Science and Technology (OST)
Office of Wastewater Management (OWM)
Office of Wetlands, Oceans and Watersheds (OWOW)
Regions
The creation of 10 EPA regions was an initiative of President Richard Nixon. See Standard Federal Regions.
Each EPA regional office is responsible within its states for implementing the agency's programs, except those programs that have been specifically delegated to states.
Region 1: responsible within the states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont (New England).
Region 2: responsible within the states of New Jersey and New York. It is also responsible for the US territories of Puerto Rico, and the U.S. Virgin Islands.
Region 3: responsible within the states of Delaware, Maryland, Pennsylvania, Virginia, West Virginia, and the District of Columbia.
Region 4: responsible within the states of Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee.
Region 5: responsible within the states of Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin.
Region 6: responsible within the states of Arkansas, Louisiana, New Mexico, Oklahoma, and Texas.
Region 7: responsible within the states of Iowa, Kansas, Missouri, and Nebraska.
Region 8: responsible within the states of Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming.
Region 9: responsible within the states of Arizona, California, Hawaii, Nevada, the territories of Guam and American Samoa, and the Navajo Nation.
Region 10: responsible within the states of Alaska, Idaho, Oregon, and Washington.
Each regional office also implements programs on Indian Tribal lands, except those programs delegated to tribal authorities.
Legal authority
The Environmental Protection Agency can only act pursuant to statutes—the laws passed by Congress. Appropriations statutes authorize how much money the agency can spend each year to carry out the approved statutes. The agency has the power to issue regulations. A regulation interprets a statute, and EPA applies its regulations to various environmental situations and enforces the requirements. The agency must include a rationale of why a regulation is needed. (See Administrative Procedure Act.) Regulations can be challenged in federal courts, either district court or appellate court, depending on the particular statutory provision.
Related legislation
EPA has principal implementation authority for the following federal environmental laws:
Clean Air Act
Clean Water Act
Comprehensive Environmental Response, Compensation and Liability Act ("Superfund")
Emergency Planning and Community Right-to-Know Act
Federal Insecticide, Fungicide, and Rodenticide Act
Resource Conservation and Recovery Act
Safe Drinking Water Act
Toxic Substances Control Act
Frank R. Lautenberg Chemical Safety for the 21st Century Act
There are additional laws where EPA has a contributing role or provides assistance to other agencies. Among these laws are:
Endangered Species Act
Energy Independence and Security Act
Energy Policy Act
Federal Food, Drug, and Cosmetic Act
Food Quality Protection Act
National Environmental Policy Act
Oil Pollution Act
Pollution Prevention Act
Programs
EPA established its major programs pursuant to the primary missions originally articulated in the laws passed by Congress. Additional programs have been developed to interpret the primary missions. Some of the newer programs have been specifically authorized by Congress. Former administrator William Ruckelshaus observed in 2016 that a danger for EPA was that air, water, waste and other programs would be unconnected, placed in "silos", a problem that persists more than 50 years later, albeit less so than at the start.
Core programs
Air quality and radiation protection
The Office of Air and Radiation (OAR) describes itself as the official authority in charge of "developing national programs, policies, and regulations for controlling air pollution and radiation exposure." The OAR is responsible for enforcing the Clean Air Act, the Atomic Energy Act, the Waste Isolation Pilot Plant Land Withdrawal Act, and other applicable laws. The OAR is in charge of the Offices of Air Quality Planning and Standards, Atmospheric Protection, Transportation and Air Quality, and the Office of Radiation and Indoor Air.
Ambient standards
National Ambient Air Quality Standards (NAAQS)
State Implementation Plans (SIPs)
Stationary air pollution source standards
New Source Performance Standards
National Emissions Standards for Hazardous Air Pollutants (NESHAPs)
Permits for industrial and commercial sources
Mobile source standards
On-road vehicles regulation
Non-road vehicle regulation (including aircraft, locomotives, marine transport, stationary engines)
Transportation fuel controls
National Vehicle Fuel and Emissions Laboratory (NVFEL)
Radiation protection
The Radiation Protection Program comprises seven project groups.
Radioactive Waste
Emergency Preparedness and Response Programs Protective Action Guides And Planning Guidance for Radiological Incidents: EPA developed a manual as a guideline for local and state governments to protect the public from a nuclear accident, the 2017 version being the first update in 15 years.
EPA's Role in Emergency Response – Special Teams
Technologically Enhanced Naturally Occurring Radioactive Materials (TENORM) Program
Radiation Standards for Air and Drinking Water Programs
Federal Guidance for Radiation Protection
Water quality
Science and regulatory standards
The National Pollutant Discharge Elimination System (NPDES) permit program addresses water pollution by regulating point sources that discharge to US waters. Created in 1972 by the Clean Water Act, the NPDES permit program authorizes state governments to perform many of its permitting, administrative, and enforcement aspects. To date, the EPA has approved 47 states to administer all or portions of the permit program; EPA regional offices manage the program in the remaining areas of the country. The Water Quality Act of 1987 extended NPDES permit coverage to industrial stormwater dischargers and municipal separate storm sewer systems. In 2016, there were 6,700 major point source NPDES permits in place and 109,000 municipal and industrial point sources with general or individual permits.
Effluent guidelines (technology-based standards) for industrial point sources and water quality standards (risk-based standards) for water bodies, under Title III of the CWA
Nonpoint source pollution programs
The CWA Section 404 Program regulates the discharge of dredged or fill material into waters of the United States. Permits are issued by the U.S. Army Corps of Engineers and reviewed by EPA, and may be denied if they would cause unacceptable degradation or if an alternative does not exist that does not also have adverse impacts on waters. Permit holders are typically required to restore or create wetlands or other waters to offset losses that cannot be avoided.
EPA ensures safe drinking water for the public by setting standards for more than 148,000 public water systems nationwide. EPA oversees states, local governments and water suppliers to enforce the standards under the Safe Drinking Water Act. The program includes regulation of injection wells to protect underground sources of drinking water.
Infrastructure financing
The Clean Water State Revolving Fund provides grants to states which, along with matching state funds, are loaned to municipalities for wastewater and "green" infrastructure at below-market interest rates. These loans are expected to be paid back, creating revolving loan funds. Cumulative assistance from the revolving fund has surpassed US$172 billion. The revolving fund replaced the Construction Grants Program, which was phased out in 1990.
The Drinking Water State Revolving Fund (DWSRF) provides financial assistance to local drinking water utilities. The total appropriation of DWSRF funds available to states, which allocate funds to individual utilities, is US$3.5 billion in 2024.
Land, waste and cleanup
Regulation of solid waste (non-hazardous) and hazardous waste under RCRA. To implement the 1976 law, EPA published standards in 1979 for "sanitary" landfills that receive municipal solid waste. The agency published national hazardous waste regulations and established a nationwide permit and tracking system for managing hazardous waste. The system is largely managed by state agencies under EPA authorization. Standards were issued for waste treatment, storage and disposal facilities (TSDFs), and ocean dumping of waste was prohibited. In 1984 Congress passed the Hazardous and Solid Waste Amendments (HSWA) which expanded several aspects of the RCRA program:
The Land Disposal Restrictions Program sets treatment requirements for hazardous waste before it may be disposed of on land. EPA began issuing required treatment methods and levels in 1986, and these are continually adapted to new hazardous wastes and treatment technologies. The stringent requirements it sets and its emphasis on waste minimization practices encourage businesses to plan to minimize waste generation and prioritize reuse and recycling. From the start of the program in 1984 to 2004, the volume of hazardous waste disposed in landfills had decreased 94% and the volume of hazardous waste disposed of by underground injection had decreased 70%.
The RCRA Corrective Action Program requires TSDFs to investigate and clean up hazardous releases at their own expense. In the 1980s, EPA estimated that the number of sites needing cleanup was three times the number of sites on the national Superfund list. The program is largely implemented through permits and orders. To date, the program has led to the cleanup of 18 million acres of land, with the responsible facilities bearing most of the cleanup costs. According to EPA, the goal of the agency and the states was to complete final remedies by 2020 at 3,779 priority facilities out of roughly 6,000 that need to be cleaned up.
Beginning in the mid-1980s EPA developed standards for small quantity generators of hazardous waste, pursuant to HSWA.
EPA was mandated to conduct a review of landfill conditions nationwide. The agency reported in 1988 that the effectiveness of environmental controls at landfills varied nationwide, which could lead to serious contamination of groundwater and surface waters. EPA published a national plan in 1989 calling for state and local governments to better integrate their municipal solid waste management practices with source reduction and recycling programs.
Regulation of Underground Storage Tanks. The Underground Storage Tank (UST) Program was launched in 1985 and covers about 553,000 active USTs containing petroleum and hazardous chemicals. Since 1984, 1.8 million USTs have been closed in compliance with regulations. 38 states, the District of Columbia and Puerto Rico manage UST programs with EPA authorization. When the program began, EPA had only 90 staff to develop a system to regulate more than 2 million tanks and work with 750,000 owners and operators. The program relies more on local operations and enforcement than other EPA programs. Today, the program supports the inspection of all federally regulated tanks, cleans up old and new leaks, minimizes potential leaks, and encourages sustainable reuse of abandoned gas stations.
Hazardous site cleanup. In the late 1970s, the need to clean up sites such as Love Canal that had been highly contaminated by previous hazardous waste disposal became apparent. However, the existing regulatory framework depended on owners or operators to perform environmental controls. While the EPA attempted to use RCRA's section 7003 to perform this cleanup, it was clear a new law was needed. In 1980, Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as "Superfund". This law enabled the EPA to cast a wider net for responsible parties, including past or present generators and transporters as well as current and past owners of a site, in order to fund cleanups. The act also established initial funding and a tax on certain industries to help pay for such cleanups. Congress did not renew the Superfund tax in the 1990s, and subsequently funding for cleanup actions was supported only by general appropriations. Congress restored an excise tax on chemical manufacturers in 2021, which will eventually increase the available budget for site cleanups. Today, due to restricted funding, most cleanup activities are performed by responsible parties under the oversight of the EPA and states. To date, more than 1,700 sites have been placed on the cleanup list since the creation of the program. Of these, 370 sites have been cleaned up and removed from the list, cleanup is underway at 535, cleanup remedies have been constructed at 790 but still require ongoing operation, and 54 have not yet reached the cleanup stage.
EPA's oil spill prevention program includes the Spill Prevention, Control, and Countermeasure (SPCC) and the Facility Response Plan (FRP) rules. The SPCC Rule applies to all facilities that store, handle, process, gather, transfer, refine, distribute, use or consume oil or oil products. Oil products include petroleum and non-petroleum oils as well as: animal fats, oils and greases; fish and marine mammal oils; and vegetable oils. It mandates a written plan for facilities that store more than 1,320 gallons of fuel above ground or more than 42,000 gallons below ground, and which might discharge to navigable waters (as defined in the Clean Water Act) or adjoining shorelines. Secondary spill containment is mandated at oil storage facilities and oil release containment is required at oil development sites.
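As an illustration of how the storage thresholds above operate, the following sketch is a hypothetical helper (not an EPA tool); the function name and the reduction of "discharge potential" to a single boolean are assumptions made purely for illustration, and the real rule contains further conditions.

```python
def spcc_plan_required(aboveground_gallons: float,
                       belowground_gallons: float,
                       could_reach_navigable_waters: bool) -> bool:
    """Rough check of the SPCC written-plan thresholds described above.

    A facility needs a written plan if it might discharge oil to navigable
    waters or adjoining shorelines AND stores more than 1,320 gallons above
    ground or more than 42,000 gallons below ground (illustrative only).
    """
    if not could_reach_navigable_waters:
        return False
    return aboveground_gallons > 1320 or belowground_gallons > 42000

# A depot with 2,000 gallons in aboveground tanks near a river would need a plan.
print(spcc_plan_required(2000, 0, True))   # True
print(spcc_plan_required(1000, 0, True))   # False
```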
Chemical manufacture and usage
EPA regulates pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act. The agency assesses, registers, regulates, and regularly reevaluates all pesticides legally sold in the United States. A few challenges this program faces are transforming toxicity testing, screening pesticides for endocrine disruptors, and regulating biotechnology and nanotechnology.
TSCA required EPA to create and maintain a national inventory of all existing chemicals in U.S. commerce. When the act was passed in 1976, there were more than 60,000 chemicals on the market that had never been comprehensively cataloged. To catalog them, the EPA developed and implemented procedures that have served as a model for Canada, Japan, and the European Union. As part of the inventory, the EPA also established a baseline requirement that the agency be notified of new chemicals before they are commercially manufactured. Today, this rule keeps the EPA updated on volumes, uses, and exposures of around 7,000 of the highest-volume chemicals via industry reporting.
The Toxics Release Inventory (TRI) is a resource established by the Emergency Planning and Community Right-to-Know Act specifically for the public to learn about toxic chemical releases and pollution prevention activities reported by industrial and federal facilities. TRI data support informed decision-making by communities, government agencies, companies, and others. Annually, the agency collects data from more than 20,000 facilities. The EPA has generated a range of tools to support the use of this inventory, including interactive maps and online databases such as ChemView.
Enforcement
Civil enforcement and criminal enforcement programs. EPA develops and prosecutes administrative civil and judicial cases and provides legal support for cases and investigations initiated in its regional offices. Federal judicial actions (formal lawsuits) are filed by the U.S. Department of Justice on behalf of EPA.
Compliance assistance. EPA identifies, prevents, and reduces noncompliance and environmental risks by establishing enforcement initiatives and ensuring effective monitoring and assessment of compliance.
Federal facilities enforcement
Environmental Justice program
In 2019 the Environmental Data & Governance Initiative, "a network of academics, developers, and non-profit professionals", published a report comparing EPA enforcement statistics over time. The number of civil cases filed by EPA has gradually decreased, and in 2018 the criminal and civil penalties from EPA cases dropped to less than a quarter of their amounts in 2013, 2016, and 2017. In 2016 EPA issued $6,307,833,117 in penalties due to violations of agency requirements, and in 2018 the agency issued $184,768,000 in penalties. EPA's inspections and evaluations steadily decreased from 2015 to 2018. Enforcement activity has decreased partially due to budget cuts within the agency.
Additional programs
The EPA Safer Choice label, previously known as the "Design for the Environment" (DfE) label, helps consumers and commercial buyers identify and select products with safer chemical ingredients, without sacrificing quality or performance. When a product has the Safer Choice label, it means that every intentionally-added ingredient in the product has been evaluated by EPA scientists. Only the safest possible functional ingredients are allowed in products with the Safer Choice label.
Through the Safer Detergents Stewardship Initiative, EPA's Design for the Environment (DfE) recognizes environmental leaders who voluntarily commit to the use of safer surfactants. Safer surfactants are those that break down quickly to non-polluting compounds and help protect aquatic life in both fresh and salt water. Nonylphenol ethoxylates, commonly referred to as NPEs, are an example of a surfactant class that does not meet the definition of a safer surfactant. The Safer Choice program identified safer alternative surfactants through partnerships with industry and environmental advocates. These alternatives are comparable in cost and are readily available. The CleanGredients website is an information source about safer surfactants.
The Energy Star program, initiated in 1992, motivated major companies to retrofit millions of square feet of building space with more efficient lighting. To date, more than 40,000 Energy Star products have become available, including major appliances, office equipment, lighting, home electronics, and more. In addition, the label can also be found on new homes and commercial and industrial buildings. In 2006, about 12 percent of new housing in the US displayed an Energy Star label. EPA estimates that the program saved about $14 billion in energy costs in 2006 alone. The program has helped spread the use of LED traffic lights, efficient fluorescent lighting, power management systems for office equipment, and low standby energy use.
EPA's Smart Growth Program began in 1998 and was created to help communities improve their land development practices and get the type of development they want. Together with local, state, and national experts, EPA encourages development strategies that protect human health and the environment, create economic opportunities, and provide attractive and affordable neighborhoods for people of all income levels.
The Brownfields Program started as a pilot program in the 1990s and was authorized by law in 2002. The program provides grants and tools to local governments for the assessment, cleanup, and revitalization of brownfields. To date, the EPA estimates that program grants have readied 56,442 acres of land for reuse and leveraged 116,963 jobs and $24.2 billion in investment. Agency studies also found that property values around assessed or cleaned-up brownfields have increased 5.1 to 12.8 percent.
EPA's Indoor air quality Tools for Schools Program helps schools to maintain a healthy environment and reduce exposures to indoor environmental contaminants. It helps school personnel identify, solve, and prevent indoor air quality problems in the school environment. Through the use of a multi-step management plan and checklists for the entire building, schools can lower their students' and staff's risk of exposure to asthma triggers.
The National Environmental Education Act of 1990 requires EPA to provide national leadership to increase environmental literacy. EPA established the Office of Environmental Education to implement this program.
Clean School Bus USA is a national partnership to reduce children's exposure to diesel exhaust by eliminating unnecessary school bus idling, installing effective emission control systems on newer buses and replacing the oldest buses in the fleet with newer ones. Its goal is to reduce both children's exposure to diesel exhaust and the amount of air pollution created by diesel school buses.
The Green Chemistry Program encourages the development of products and processes that follow green chemistry principles. It has recognized more than 100 winning technologies. These reduce the use or creation of hazardous chemicals, save water, and reduce greenhouse gas release.
The Beaches Environmental Assessment and Coastal Health (BEACH) Act was authorized in a 2000 amendment to the Clean Water Act. The program focuses on coastal recreational waters and requires EPA to develop criteria to test and monitor waters and notify public users of any concerns. The program involves states, local beach resource managers, and the agency in assessing risks of stormwater and wastewater overflows and enables better sampling, analytical methods, and communication with the public.
The EPA has also established specific geographic programs for particular water resources such as the Chesapeake Bay Program, the National Estuary Program, and the Gulf of Mexico Program.
Advance identification, or ADID, is a planning process used by the EPA to identify wetlands and other bodies of water and their respective suitability for the discharge of dredged and fill material. The EPA conducts the process in cooperation with the U.S. Army Corps of Engineers and local states or Native American Tribes. To date, 38 ADID projects have been completed and 33 are ongoing.
EPA's "One Cleanup Program" initiative was designed to improve coordination across different agency programs that have a role in cleanup at a particular site. The coordination efforts apply to the brownfields, federal facilities, USTs, RCRA and Superfund programs.
EPA reviews environmental impact statements prepared by other agencies and maintains a national EIS filing system.
Past programs
The former Construction Grants Program distributed federal grants for the construction of municipal wastewater treatment works from 1972 to 1990. While such grants existed before 1972, the 1972 CWA expanded them dramatically. They were distributed through 1990, when the program and funding were replaced with the State Revolving Loan Fund Program.
In 1991 under Administrator William Reilly, the EPA implemented its voluntary 33/50 program. This was designed to encourage, recognize, and celebrate companies that voluntarily found ways to prevent and reduce pollution in their operations. Specifically, it challenged industry to reduce Toxic Release Inventory emissions of 17 priority chemicals by 33% in one year and 50% in four years. These results were achieved before the commitment deadlines.
Launched in 2006, the voluntary 2010/2015 PFOA Stewardship Program worked with eight major companies to voluntarily reduce their global emissions of certain types of perfluorinated chemicals by 95% by 2010 and eliminate these emissions by 2015.
In March 2004, the U.S. Navy transferred USNS Bold (T-AGOS-12), a Stalwart class ocean surveillance ship, to the EPA. The ship had been used in anti-submarine operations during the Cold War and was equipped with sidescan sonar, underwater video, and water and sediment sampling instruments used to study the ocean and coastline. One of the major missions of the Bold was to monitor dredged-material disposal sites in U.S. ports for ecological impacts. In 2013, the General Services Administration sold the Bold to Seattle Central Community College (SCCC), which demonstrated in a competitive process that it would put the ship to the highest and best use, at a nominal cost of $5,000.
Controversies
Scope and fulfillment of agency's authority
Congress enacted laws such as the Clean Air Act, the Resource Conservation and Recovery Act, and CERCLA with the intent of preventing and remedying environmental damage. Beginning in 2018 under Administrator Andrew Wheeler, EPA revised some pollution standards in ways that resulted in less overall regulation.
Furthermore, the discretion states have in implementing the CAA has led to uneven application of the law. In 1970, Louisiana deployed its Comprehensive Toxic Air Pollutant Emission Control Program to comply with federal law. This program does not require pollution monitoring equivalent to that of programs in other states.
Environmental justice
The EPA has been criticized for its lack of progress towards environmental justice. Administrator Christine Todd Whitman was criticized for her changes to President Bill Clinton's Executive Order 12898 during 2001, removing the requirements for government agencies to take the poor and minority populations into special consideration when making changes to environmental legislation, and therefore defeating the spirit of the Executive Order. In a March 2004 report, the inspector general of the agency concluded that the EPA "has not developed a clear vision or a comprehensive strategic plan, and has not established values, goals, expectations, and performance measurements" for environmental justice in its daily operations. Another report in September 2006 found the agency still had failed to review the success of its programs, policies, and activities toward environmental justice. Studies have also found that poor and minority populations were underserved by the EPA's Superfund program, and that this situation was worsening.
In August 2022 the EPA was allotted approximately $42.8 billion in funding from the Inflation Reduction Act (IRA) toward what the EPA classifies as "Advancing Environmental Justice", and published the statement "Through the Inflation Reduction Act, EPA will improve the lives of millions of Americans by reducing pollution in neighborhoods where people live, work, play, and go to school; accelerating environmental justice efforts in communities overburdened by pollution for far too long; and tackling our biggest climate challenges while creating jobs and delivering energy security."
In September 2022 EPA announced the creation of a new Office of Environmental Justice and External Civil Rights that reports directly to the EPA administrator. The new office has an expanded budget and staff with broader responsibilities than under the previous organizational arrangement.
Freedom of Information Act processing performance
In the latest Center for Effective Government analysis of the 15 federal agencies that receive the most Freedom of Information Act (FOIA) requests, published in 2015 (using 2012 and 2013 data, the most recent years available), the EPA earned a D, scoring 67 out of a possible 100 points, i.e. it did not earn a satisfactory overall grade.
Pebble Mine
Pebble Mine is a copper and gold mining project organized by Northern Dynasty Minerals in the Bristol Bay region of southwest Alaska. In 2014 the EPA released its assessment of the impacts that mining would have on Bristol Bay and its tributaries. Among other things, the assessment examined geological, topographic, ecological, hydrological, and economic data and determined that mining could negatively impact the salmon population. Because Bristol Bay and its watershed provide around 46% of the world's sockeye salmon, the EPA did not want to risk an ecological disaster. In July 2014, before Northern Dynasty Minerals had submitted its EIS, EPA's Region 10 office proposed restrictions pursuant to section 404(c) of the Clean Water Act, restrictions that would effectively prohibit the project. Northern Dynasty Minerals protested this decision, and on July 18, 2014, in a published statement, Pebble Partnership CEO Tom Collier said that the project would continue its litigation against EPA; noted that the EPA's action was under investigation by the EPA inspector general and by the House Committee on Oversight and Government Reform; and noted that two bills were pending in Congress seeking to clarify that EPA did not have the authority to preemptively veto or otherwise restrict development projects prior to the onset of federal and state permitting. Collier's statement also said that EPA's proposal was based on outdated mining scenarios that were not part of the project's approach. Multiple journalists and organizations have reported on the controversy, including the Natural Resources Defense Council in support of cancellation of the project and John Stossel in support of development of the mine. The mine remains a controversial topic.
On January 30, 2023, the EPA vetoed the mine.
Water quality in East Palestine, Ohio
Following the February 3, 2023, train derailment in East Palestine, Ohio, governor Mike DeWine and EPA administrator Michael Regan drank tap water in the town to show that it was safe. The derailment caused a fire and the release of toxic chemicals into the air and water, raising concern among residents and environmental groups about the quality of water in the area. Despite the EPA's assurance that the water is safe, some residents do not trust its quality and question the long-term effects of the contamination.
See also
Environmental history of the United States
Environmental policy of the United States
Environmental policy of the Donald Trump administration
List of EPA whistleblowers
References
Further reading
Balint, Peter J.; James K. Conant. The life cycles of the Council on Environmental Quality and the Environmental Protection Agency: 1970–2035 (Oxford Univ Press, 2016).
Bosso, Christopher. Environment, Inc.: From Grassroots to Beltway. Lawrence, KS: University of Kansas Press, 2005
Bosso, Christopher; Deborah Guber. "Maintaining Presence: Environmental Advocacy and the Permanent Campaign." pp. 78–99 in Environmental Policy: New Directions for the Twenty First Century, 6th ed., eds. Norman Vig and Michael Kraft. Washington, DC: CQ Press, 2006.
Brooks, Karl Boyd (ed.). The Environmental Legacy of Harry S. Truman (Truman State University Press, 2009).
Carter, Neil. The Politics of the Environment: Ideas, Activism, Policy, 2nd ed. Cambridge, UK: Cambridge University Press, 2007
Davies, Kate. The Rise of the U.S. Environmental Health Movement (2013). Lanham, MD: Rowman & Littlefield
Demortain, David, The Science of Bureaucracy: Risk Decision-Making and the US Environmental Protection Agency. The MIT Press, 2020 .
Freedman, Jeri. The Establishment of the Environmental Protection Agency (Cavendish Square. 2017)
Hays, Samuel P. A history of environmental politics since 1945 (2000)
Hays, Samuel P. Beauty, Health, and Permanence: Environmental Politics in the United States, 1955–1985 (1989) online
Mintz, Joel A. Enforcement at the EPA: High Stakes and Hard Choices (2nd ed. U of Texas Press, 2012).
Portney, Paul R. "EPA and the Evolution of Federal Regulation." in Public policies for environmental protection (Routledge, 2010) pp. 11–30. online
Richardson, Elmo. Dams, Parks and Politics: Resource Development and Preservation the Truman-Eisenhower Era (1973).
Ruckelshaus, William D. "Environmental Regulation: The Early Days at EPA" EPA Journal (March 1988) online
Suter, Glenn W. "Ecological risk assessment in the United States Environmental Protection Agency: A historical overview." Integrated environmental assessment and management 4.3 (2008): 285–289. online
EPA Alumni Association, "Protecting the Environment, A Half Century of Progress" – an overview of EPA's environmental protection efforts over 50 years
EPA Alumni Association individual Half Century of Progress reports for air, water, pesticides, drinking water, waste management, Superfund, and toxic substances
External links
Historical Planning, Budget, and Results Reports of EPA
Environmental Protection Agency in the Federal Register
Environmental Protection Agency on USAspending.gov
Environmental Protection Agency apportionments on OpenOMB
United States Environmental Protection Agency
Environmental agencies in the United States
Environmental protection agencies
Environment of the United States
1970 establishments in Washington, D.C.
1970 in the environment
Government agencies established in 1970
Independent agencies of the United States government
Regulators of biotechnology products
Federal law enforcement agencies of the United States
Environmental policies organizations
Environmental policy in the United States
Organizations based in Washington, D.C. | United States Environmental Protection Agency | [
"Biology"
] | 11,638 | [
"Regulators of biotechnology products",
"Biotechnology products",
"Regulation of biotechnologies"
] |
58,671 | https://en.wikipedia.org/wiki/Tripropellant%20rocket | A tripropellant rocket is a rocket that uses three propellants, as opposed to the more common bipropellant rocket or monopropellant rocket designs, which use two or one propellants, respectively. Tripropellant systems can be designed to have high specific impulse and have been investigated for single-stage-to-orbit designs. While tripropellant engines have been tested by Rocketdyne and NPO Energomash, no tripropellant rocket has been flown.
There are two different kinds of tripropellant rockets. One is a rocket engine which mixes three separate streams of propellants, burning all three propellants simultaneously. The other kind of tripropellant rocket is one that uses one oxidizer but two fuels, burning the two fuels in sequence during the flight.
Simultaneous burn
Simultaneous tripropellant systems often involve the use of a high energy density metal additive, like beryllium or lithium, with existing bipropellant systems. In these motors, the burning of the fuel with the oxidizer provides activation energy needed for a more energetic reaction between the oxidizer and the metal. While theoretical modeling of these systems suggests an advantage over bipropellant motors, several factors limit their practical implementation, including the difficulty of injecting solid metal into the thrust chamber; heat, mass, and momentum transport limitations across phases; and the difficulty of achieving and sustaining combustion of the metal.
In the 1960s, Rocketdyne test-fired an engine using a mixture of liquid lithium, gaseous hydrogen, and liquid fluorine to produce a specific impulse of 542 seconds, likely the highest measured such value for a chemical rocket motor. Despite the high specific impulse, the technical difficulties of the combination and the hazardous nature of the propellants ensured the engine has not been developed further.
Sequential burn
In sequential tripropellant rockets, the fuel is changed during flight, so the motor can combine the high thrust of a dense fuel like kerosene early in flight with the high specific impulse of a lighter fuel like liquid hydrogen (LH2) later in flight. The result is a single engine providing some of the benefits of staging.
For example, injecting a small amount of liquid hydrogen into a kerosene-burning engine can yield significant specific impulse improvements without compromising propellant density. This was demonstrated by the RD-701 achieving a specific impulse of 415 seconds in vacuum (higher than the pure LH2/LOX RS-68), where a pure kerosene engine with a similar expansion ratio would achieve 330–340 seconds.
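The practical significance of such a gain can be sketched with the ideal rocket equation. The short Python example below converts specific impulse to effective exhaust velocity and compares ideal delta-v for the figures quoted above; the kerosene value of 335 seconds and the assumed propellant mass fraction of 0.90 are illustrative assumptions, not data from flight vehicles.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Ideal (Tsiolkovsky) delta-v for a given specific impulse and mass ratio."""
    return isp_s * G0 * math.log(mass_ratio)

mass_ratio = 1 / (1 - 0.90)  # assumed: 90% of liftoff mass is propellant

for label, isp in [("kerosene/LOX (~335 s)", 335),
                   ("tripropellant RD-701 (415 s)", 415)]:
    print(f"{label}: ve = {isp * G0:.0f} m/s, "
          f"ideal delta-v = {delta_v(isp, mass_ratio):.0f} m/s")
```

Under these assumptions the higher specific impulse is worth roughly 1.8 km/s of additional ideal delta-v at the same mass ratio.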
Although liquid hydrogen delivers the largest specific impulse of the plausible rocket fuels, it also requires large structures to hold it due to its low density. These structures add considerable mass, offsetting the low weight of the fuel itself to some degree, and also result in higher drag while in the atmosphere. While kerosene has lower specific impulse, its higher density results in smaller structures, which reduces stage mass, and furthermore reduces losses to atmospheric drag. In addition, kerosene-based engines generally provide higher thrust, which is important for takeoff, reducing gravity drag. So in general terms there is a "sweet spot" in altitude where one type of fuel becomes more practical than the other, as the comparison below illustrates.
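The comparison can be made concrete with a bulk-density estimate. The Python sketch below uses round handbook figures for densities and oxidizer-to-fuel ratios (all assumed for illustration rather than taken from this article) to estimate the tank volume needed per tonne of propellant for each combination.

```python
# Approximate densities (kg/m^3) and oxidizer-to-fuel mass ratios; illustrative values.
LOX, RP1, LH2 = 1141.0, 810.0, 70.85
combos = {
    "RP-1/LOX (O/F ~ 2.6)": (RP1, 2.6),
    "LH2/LOX  (O/F ~ 6.0)": (LH2, 6.0),
}

for name, (fuel_density, of_ratio) in combos.items():
    total_mass = 1.0 + of_ratio                    # 1 kg fuel plus oxidizer
    total_volume = 1.0 / fuel_density + of_ratio / LOX
    bulk = total_mass / total_volume               # bulk density of the loaded tanks
    print(f"{name}: bulk density ~ {bulk:.0f} kg/m^3, "
          f"~ {1000 / bulk:.2f} m^3 of tank per tonne of propellant")
```

With these assumed figures the hydrogen combination needs roughly three times the tank volume per tonne, which is the structural and drag penalty that a dense fuel avoids early in the ascent.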
Traditional rocket designs use this sweet spot to their advantage via staging. For instance, the Saturn V used a first stage powered by RP-1 (kerosene) and upper stages powered by LH2. Some of the early Space Shuttle design efforts followed a similar approach, with a kerosene-fueled stage carrying the vehicle into the upper atmosphere, where an LH2-powered upper stage would ignite and continue from there. The later Shuttle design is somewhat similar, although it used solid rockets for its lower stages.
SSTO rockets could simply carry two sets of engines, but this would mean the spacecraft would be carrying one or the other set "turned off" for most of the flight. With light enough engines this might be reasonable, but an SSTO design requires a very high mass fraction and so has razor-thin margins for extra weight.
At liftoff the engine typically burns both fuels, gradually changing the mixture over altitude in order to keep the exhaust plume "tuned" (a strategy similar in concept to the plug nozzle but using a normal bell), eventually switching entirely to LH2 once the kerosene is burned off. At that point the engine is largely a straight LH2/LOX engine, with an extra fuel pump hanging onto it.
The concept was first explored in the US by Robert Salkeld, who published the first study on the concept, "Mixed-Mode Propulsion for the Space Shuttle", in Astronautics & Aeronautics in August 1971. He studied a number of designs using such engines, both ground-launched and a number that were air-launched from large jet aircraft. He concluded that tripropellant engines would produce gains of over 100% (essentially more than double) in payload fraction, reductions of over 65% in propellant volume, and reductions of more than 20% in dry weight. A second design series studied the replacement of the Shuttle's SRBs with tripropellant-based boosters, in which case the engine almost halved the overall weight of the designs. His last full study was of the Orbital Rocket Airplane, which used both tripropellant engines and (in some versions) a plug nozzle, resulting in a spaceship only slightly larger than a Lockheed SR-71 and able to operate from conventional runways.
Tripropellant engines were built in Russia. Kosberg and Glushko developed a number of experimental engines in 1988 for an SSTO spaceplane called MAKS, but both the engines and MAKS were cancelled in 1991 due to a lack of funding. Glushko's RD-701 was built and test fired, however, and although there were some problems, Energomash considers them solvable and regards the design as one way to reduce launch costs roughly tenfold.
References
Rocket propulsion
Rocket engines
Rocket engines using hydrogen propellant
Rocket engines using kerosene propellant
Rocket engines by propellant | Tripropellant rocket | [
"Technology"
] | 1,252 | [
"Rocket engines",
"Engines"
] |
58,673 | https://en.wikipedia.org/wiki/Liquid%20hydrogen | Liquid hydrogen (LH2) is the liquid state of the element hydrogen. Hydrogen is found naturally in the molecular H2 form.
To exist as a liquid, H2 must be cooled below its critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to about 20.3 K (−253 °C). A common method of obtaining liquid hydrogen involves a compressor resembling a jet engine in both appearance and principle. Liquid hydrogen is typically used as a concentrated form of hydrogen storage. Storing it as liquid takes less space than storing it as a gas at normal temperature and pressure. However, the liquid density is very low compared to other common fuels. Once liquefied, it can be maintained as a liquid for some time in thermally insulated containers.
There are two spin isomers of hydrogen; whereas room temperature hydrogen is mostly orthohydrogen, liquid hydrogen consists of 99.79% parahydrogen and 0.21% orthohydrogen.
Liquefying hydrogen is energy-intensive: the theoretical minimum is roughly 3–4 kWh per kilogram (slightly more when conversion of the hydrogen to the para isomer is included), but practical liquefaction plants typically consume around 10–13 kWh per kilogram, a substantial fraction of hydrogen's lower heating value of about 33 kWh per kilogram.
History
In 1885, Zygmunt Florenty Wróblewski published estimates of hydrogen's critical temperature, critical pressure, and boiling point.
Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. The first synthesis of the stable isomer form of liquid hydrogen, parahydrogen, was achieved by Paul Harteck and Karl Friedrich Bonhoeffer in 1929.
Spin isomers of hydrogen
The two nuclei in a dihydrogen molecule can have two different spin states.
Parahydrogen, in which the two nuclear spins are antiparallel, is more stable than orthohydrogen, in which the two are parallel. At room temperature, gaseous hydrogen is mostly in the ortho isomeric form due to thermal energy, but an ortho-enriched mixture is only metastable when liquified at low temperature. It slowly undergoes an exothermic reaction to become the para isomer, with enough energy released as heat to cause some of the liquid to boil. To prevent loss of the liquid during long-term storage, it is therefore intentionally converted to the para isomer as part of the production process, typically using a catalyst such as iron(III) oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromium(III) oxide, or some nickel compounds.
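The quoted composition follows from the statistics of hydrogen's rotational energy levels. The Python sketch below is a minimal rigid-rotor estimate, assuming a rotational constant of roughly 87.6 K and neglecting higher-order corrections, so it is illustrative rather than precise.

```python
import math

B = 87.6  # approximate rotational constant of H2, expressed in kelvin (assumed)

def para_fraction(temperature: float, j_max: int = 20) -> float:
    """Equilibrium parahydrogen fraction from rigid-rotor partition sums.

    Para (antiparallel nuclear spins) occupies even rotational levels J with
    nuclear-spin weight 1; ortho occupies odd J with weight 3.
    """
    para = sum((2 * j + 1) * math.exp(-B * j * (j + 1) / temperature)
               for j in range(0, j_max, 2))
    ortho = sum(3 * (2 * j + 1) * math.exp(-B * j * (j + 1) / temperature)
                for j in range(1, j_max, 2))
    return para / (para + ortho)

print(f"para fraction at 300 K:  {para_fraction(300):.1%}")   # about 25%
print(f"para fraction at 20.3 K: {para_fraction(20.3):.1%}")  # above 99%
```

This reproduces the familiar roughly 3:1 ortho-to-para ratio of room-temperature hydrogen and the nearly pure parahydrogen composition of the cold liquid.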
Uses
Liquid hydrogen is a common liquid rocket fuel and is used by NASA and the U.S. Air Force, which operate a large number of liquid hydrogen tanks with individual capacities of up to 3.8 million liters (1 million U.S. gallons).
In most rocket engines fueled by liquid hydrogen, it first cools the nozzle and other parts before being mixed with the oxidizer, usually liquid oxygen, and burned to produce water with traces of ozone and hydrogen peroxide. Practical H2–O2 rocket engines run fuel-rich so that the exhaust contains some unburned hydrogen. This reduces combustion chamber and nozzle erosion. It also reduces the molecular weight of the exhaust, which can increase specific impulse, despite the incomplete combustion.
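The reasoning behind running fuel-rich can be illustrated with the approximate scaling of ideal exhaust velocity, ve ∝ sqrt(Tc/M), where Tc is the chamber temperature and M the mean molecular weight of the exhaust. The numbers in the Python sketch below (3,500 K with pure steam exhaust versus 3,300 K with an assumed mean molecular weight of 14 g/mol) are hypothetical round figures chosen only to show the trend.

```python
import math

def relative_exhaust_velocity(chamber_temp_K: float, mol_weight_g: float) -> float:
    """Ideal exhaust velocity scales roughly as sqrt(T_chamber / M_exhaust)."""
    return math.sqrt(chamber_temp_K / mol_weight_g)

stoich = relative_exhaust_velocity(3500, 18.0)  # hypothetical stoichiometric point
rich = relative_exhaust_velocity(3300, 14.0)    # hypothetical fuel-rich point

print(f"fuel-rich / stoichiometric velocity ratio: {rich / stoich:.2f}")  # ~1.10
```

Even with a somewhat cooler chamber, the lighter exhaust gives a higher ideal exhaust velocity in this illustration, which is why specific impulse can rise despite incomplete combustion.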
Liquid hydrogen can be used as the fuel for an internal combustion engine or fuel cell. Various submarines, including the Type 212 and Type 214 classes, and concept hydrogen vehicles such as the DeepC and BMW H2R have been built using this form of hydrogen. Because liquid hydrogen is handled much like liquefied natural gas (LNG), builders can sometimes modify and share equipment designed for LNG systems. Liquid hydrogen is being investigated as a zero-carbon fuel for aircraft. Because of its lower volumetric energy density, the hydrogen volumes needed for combustion are large. Unless direct injection is used, a severe gas-displacement effect also hampers maximum breathing and increases pumping losses.
Liquid hydrogen is also used to cool neutrons to be used in neutron scattering. Since neutrons and hydrogen nuclei have similar masses, kinetic energy exchange per interaction is maximum (elastic collision). Finally, superheated liquid hydrogen was used in many bubble chamber experiments.
The first thermonuclear bomb, Ivy Mike, used liquid deuterium, also known as hydrogen-2, for nuclear fusion.
Properties
The product of hydrogen combustion in a pure oxygen environment is solely water vapor. However, when hydrogen burns in air, the high combustion temperatures can break the N≡N bonds of atmospheric nitrogen, forming toxic NOx unless the exhaust is scrubbed. Since water is often considered harmless to the environment, an engine burning hydrogen can be considered "zero emissions". In aviation, however, water vapor emitted in the atmosphere contributes to global warming (to a lesser extent than CO2). Liquid hydrogen also has a much higher specific energy than gasoline, natural gas, or diesel.
The density of liquid hydrogen is only 70.85 kg/m3 (at 20 K), a relative density of just 0.07. Although its specific energy is more than twice that of other common fuels, its low density gives it a remarkably low volumetric energy density, many times lower than theirs.
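The trade-off can be quantified with a simple comparison of gravimetric and volumetric energy densities. The Python sketch below uses approximate lower heating values and densities (round figures assumed for illustration) for liquid hydrogen and a kerosene-type fuel.

```python
# Approximate lower heating values (MJ/kg) and densities (kg/m^3); illustrative figures.
fuels = {
    "liquid hydrogen": (120.0, 70.85),
    "kerosene-type fuel": (43.0, 810.0),
}

for name, (lhv_mj_per_kg, density_kg_m3) in fuels.items():
    volumetric_mj_per_litre = lhv_mj_per_kg * density_kg_m3 / 1000.0
    print(f"{name}: {lhv_mj_per_kg:.0f} MJ/kg but only "
          f"{volumetric_mj_per_litre:.1f} MJ/L")
```

On a mass basis hydrogen carries nearly three times the energy, but per litre it holds only about a quarter as much, which is why tanks sized for liquid hydrogen are so large.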
Liquid hydrogen requires cryogenic storage technology such as special thermally insulated containers and requires special handling common to all cryogenic fuels. This is similar to, but more severe than, the handling required for liquid oxygen. Even with thermally insulated containers it is difficult to keep such a low temperature, and the hydrogen will gradually boil off (typically at a rate of about 1% per day). It also shares many of the safety issues of other forms of hydrogen, as well as being cold enough to liquefy, or even solidify, atmospheric oxygen, which can be an explosion hazard.
The triple point of hydrogen is at 13.81 K and 7.042 kPa.
Safety
Due to its cold temperature, liquid hydrogen poses a hazard of cold burns. Hydrogen itself is biologically inert; as a vapor, its main direct health hazard is displacement of oxygen, resulting in asphyxiation, while its chief dangers are its very high flammability and its ability to detonate when mixed with air. Because of its flammability, liquid hydrogen should be kept away from heat or flame unless ignition is intended. Unlike ambient-temperature gaseous hydrogen, which is lighter than air, hydrogen recently vaporized from liquid is so cold that it is heavier than air and can form flammable heavier-than-air air–hydrogen mixtures.
See also
Industrial gas
Liquefaction of gases
Hydrogen safety
Compressed hydrogen
Cryo-adsorption
Expansion ratio
Gasoline gallon equivalent
Slush hydrogen
Solid hydrogen
Metallic hydrogen
Hydrogen infrastructure
Hydrogen-powered aircraft
Liquid hydrogen tank car
Liquid hydrogen tanktainer
Hydrogen tanker
References
Hydrogen physics
Hydrogen technologies
Hydrogen storage
Liquid fuels
Rocket fuels
Coolants
Cryogenics
Hydrogen
Industrial gases
1898 in science | Liquid hydrogen | [
"Physics",
"Chemistry"
] | 1,407 | [
"Chemical process engineering",
"Applied and interdisciplinary physics",
"Cryogenics",
"Industrial gases"
] |
58,687 | https://en.wikipedia.org/wiki/Aggression | Aggression is a behavior aimed at opposing or attacking something or someone. Though often carried out with the intent to cause harm, for some it can be channeled into creative and practical outlets. It may occur either reactively or without provocation. In humans, aggression can be caused by various triggers, for example built-up frustration due to blocked goals or perceived disrespect. Human aggression can be classified into direct and indirect aggression; while the former is characterized by physical or verbal behavior intended to cause harm to someone, the latter is characterized by behavior intended to harm the social relations of an individual or group.
In definitions commonly used in the social sciences and behavioral sciences, aggression is an action or response by an individual that delivers something unpleasant to another person. Some definitions include that the individual must intend to harm another person.
In an interdisciplinary perspective, aggression is regarded as "an ensemble of mechanism formed during the course of evolution in order to assert oneself, relatives, or friends against others, to gain or to defend resources (ultimate causes) by harmful damaging means. These mechanisms are often motivated by emotions like fear, frustration, anger, feelings of stress, dominance or pleasure (proximate causes). Sometimes aggressive behavior serves as a stress relief or a subjective feeling of power." Predatory or defensive behavior between members of different species may not be considered aggression in the same sense.
Aggression can take a variety of forms, which may be expressed physically, or communicated verbally or non-verbally: including anti-predator aggression, defensive aggression (fear-induced), predatory aggression, dominance aggression, inter-male aggression, resident-intruder aggression, maternal aggression, species-specific aggression, sex-related aggression, territorial aggression, isolation-induced aggression, irritable aggression, and brain-stimulation-induced aggression (hypothalamus). There are two subtypes of human aggression: (1) controlled-instrumental subtype (purposeful or goal-oriented); and (2) reactive-impulsive subtype (often elicits uncontrollable actions that are inappropriate or undesirable). Aggression differs from what is commonly called assertiveness, although the terms are often used interchangeably among laypeople (as in phrases such as "an aggressive salesperson").
Overview
Dollard et al. (1939) proposed that aggression was due to frustration, which was described as an unpleasant emotion resulting from any interference with achieving a rewarding goal. Berkowitz extended this frustration–aggression hypothesis and proposed that it is not so much the frustration as the unpleasant emotion that evokes aggressive tendencies, and that all aversive events produce negative affect and thereby aggressive tendencies, as well as fear tendencies. Besides conditioned stimuli, Archer categorized aggression-evoking (as well as fear-evoking) stimuli into three groups; namely, pain, novelty, and frustration, although he also described "looming", which refers to an object rapidly moving towards the visual sensors of a subject, and can be categorized as "intensity."
Aggression can have adaptive benefits or negative effects. Aggressive behavior is an individual or collective social interaction that is a hostile behavior with the intention of inflicting damage or harm. Two broad categories of aggression are commonly distinguished. One includes affective (emotional) and hostile, reactive, or retaliatory aggression that is a response to provocation, and the other includes instrumental, goal-oriented or predatory, in which aggression is used as a means to achieve a goal. An example of hostile aggression would be a person who punches someone who insulted him or her. An instrumental form of aggression would be armed robbery. Research on violence from a range of disciplines lend some support to a distinction between affective and predatory aggression. However, some researchers question the usefulness of a hostile versus instrumental distinction in humans, despite its ubiquity in research, because most real-life cases involve mixed motives and interacting causes.
A number of classifications and dimensions of aggression have been suggested. These depend on such things as whether the aggression is verbal or physical; whether or not it involves relational aggression such as covert bullying and social manipulation; whether harm to others is intended or not; whether it is carried out actively or expressed passively; and whether the aggression is aimed directly or indirectly. Classification may also encompass aggression-related emotions (e.g., anger) and mental states (e.g., impulsivity, hostility). Aggression may occur in response to non-social as well as social factors, and can have a close relationship with stress coping style. Aggression may be displayed in order to intimidate.
The operative definition of aggression may be affected by moral or political views. Examples are the axiomatic moral view called the non-aggression principle and the political rules governing the behavior of one country toward another. Likewise in competitive sports, or in the workplace, some forms of aggression may be sanctioned and others not (see Workplace aggression). Aggressive behaviors are associated with adjustment problems and several psychopathological symptoms such as antisocial personality disorder, borderline personality disorder, and intermittent explosive disorder.
Biological approaches conceptualize aggression as an internal energy released by external stimuli, a product of evolution through natural selection, part of genetics, a product of hormonal fluctuations. Psychological approaches conceptualize aggression as a destructive instinct, a response to frustration, an affect excited by a negative stimulus, a result of observed learning of society and diversified reinforcement, a resultant of variables that affect personal and situational environments.
Etymology
The term aggression comes from the Latin word aggressio, meaning attack. The Latin was itself a joining of ad- and gradi-, which meant step at. The first known use dates back to 1611, in the sense of an unprovoked attack.
A psychological sense of "hostile or destructive behavior" dates back to a 1912 English translation of Sigmund Freud's writing. Alfred Adler theorized about an "aggressive drive" in 1908. Child raising experts began to refer to aggression, rather than anger, from the 1930s.
Ethology
Ethologists study aggression as it relates to the interaction and evolution of animals in natural settings. In such settings aggression can involve bodily contact such as biting, hitting or pushing, but most conflicts are settled by threat displays and intimidating thrusts that cause no physical harm. This form of aggression may include the display of body size, antlers, claws or teeth; stereotyped signals including facial expressions; vocalizations such as bird song; the release of chemicals; and changes in coloration. The term agonistic behaviour is sometimes used to refer to these forms of behavior.
Most ethologists believe that aggression confers biological advantages. Aggression may help an animal secure territory, including resources such as food and water. Aggression between males often occurs to secure mating opportunities, and results in selection of the healthier/more vigorous animal. Aggression may also occur for self-protection or to protect offspring. Aggression between groups of animals may also confer advantage; for example, hostile behavior may force a population of animals into a new territory, where the need to adapt to a new environment may lead to an increase in genetic flexibility.
Between species and groups
The most apparent type of interspecific aggression is that observed in the interaction between a predator and its prey. However, according to many researchers, predation is not aggression. A cat does not hiss or arch its back when pursuing a rat, and the active areas in its hypothalamus resemble those that reflect hunger rather than those that reflect aggression. However, others refer to this behavior as predatory aggression, and point out cases that resemble hostile behavior, such as mouse-killing by rats. In aggressive mimicry a predator has the appearance of a harmless organism or object attractive to the prey; when the prey approaches, the predator attacks.
An animal defending against a predator may engage in either "fight or flight" or "tend and befriend" in response to predator attack or threat of attack, depending on its estimate of the predator's strength relative to its own. Alternative defenses include a range of antipredator adaptations, including alarm signals. An example of an alarm signal is nerol, a chemical which is found in the mandibular glands of Trigona fulviventris individuals. Release of nerol by T. fulviventris individuals in the nest has been shown to decrease the number of individuals leaving the nest by fifty percent, as well as increasing aggressive behaviors like biting. Alarm signals like nerol can also act as attraction signals; in T. fulviventris, individuals that have been captured by a predator may release nerol to attract nestmates, who will proceed to attack or bite the predator.
Aggression between groups is determined partly by willingness to fight, which depends on a number of factors including numerical advantage, distance from home territories, how often the groups encounter each other, competitive abilities, differences in body size, and whose territory is being invaded. Also, an individual is more likely to become aggressive if other aggressive group members are nearby. One particular phenomenon – the formation of coordinated coalitions that raid neighbouring territories to kill conspecifics – has only been documented in two species in the animal kingdom: 'common' chimpanzees and humans.
Within a group
Aggression between conspecifics in a group typically involves access to resources and breeding opportunities. One of its most common functions is to establish a dominance hierarchy. This occurs in many species by aggressive encounters between contending males when they are first together in a common environment. Usually the more aggressive animals become the more dominant. In test situations, most of the conspecific aggression ceases about 24 hours after the group of animals is brought together. Aggression has been defined from this viewpoint as "behavior which is intended to increase the social dominance of the organism relative to the dominance position of other organisms". Losing confrontations may be called social defeat, and winning or losing is associated with a range of practical and psychological consequences.
Conflicts between animals occur in many contexts, such as between potential mating partners, between parents and offspring, between siblings and between competitors for resources. Group-living animals may dispute over the direction of travel or the allocation of time to joint activities. Various factors limit the escalation of aggression, including communicative displays, conventions, and routines. In addition, following aggressive incidents, various forms of conflict resolution have been observed in mammalian species, particularly in gregarious primates. These can mitigate or repair possible adverse consequences, especially for the recipient of aggression who may become vulnerable to attacks by other members of a group. Conciliatory acts vary by species and may involve specific gestures or simply more proximity and interaction between the individuals involved. However, conflicts over food are rarely followed by post conflict reunions, even though they are the most frequent type in foraging primates.
Other questions that have been considered in the study of primate aggression, including in humans, is how aggression affects the organization of a group, what costs are incurred by aggression, and why some primates avoid aggressive behavior. For example, bonobo chimpanzee groups are known for low levels of aggression within a partially matriarchal society. Captive animals including primates may show abnormal levels of social aggression and self-harm that are related to aspects of the physical or social environment; this depends on the species and individual factors such as gender, age and background (e.g., raised wild or captive).
Aggression, fear and curiosity
Within ethology, it has long been recognized that there is a relation between aggression, fear, and curiosity. A cognitive approach to this relationship puts aggression in the broader context of inconsistency reduction, and proposes that aggressive behavior is caused by an inconsistency between a desired, or expected, situation and the actually perceived situation (e.g., "frustration"), and functions to forcefully manipulate the perception into matching the expected situation. In this approach, when the inconsistency between perception and expectancy is small, learning as a result of curiosity reduces inconsistency by updating expectancy to match perception. If the inconsistency is larger, fear or aggressive behavior may be employed to alter the perception in order to make it match expectancy, depending on the size of the inconsistency as well as the specific context. Uninhibited fear results in fleeing, thereby removing the inconsistent stimulus from the perceptual field and resolving the inconsistency. In some cases thwarted escape may trigger aggressive behavior in an attempt to remove the thwarting stimulus.
Evolutionary explanations
Like many behaviors, aggression can be examined in terms of its ability to help an animal itself survive and reproduce, or alternatively to risk survival and reproduction. This cost–benefit analysis can be looked at in terms of evolution. However, there are profound differences in the extent of acceptance of a biological or evolutionary basis for human aggression.
According to the male warrior hypothesis, intergroup aggression represents an opportunity for men to gain access to mates, territory, resources and increased status. As such, conflicts may have created selection evolutionary pressures for psychological mechanisms in men to initiate intergroup aggression.
Violence and conflict
Aggression can involve violence that may be adaptive under certain circumstances in terms of natural selection. This is most obviously the case in terms of attacking prey to obtain food, or in anti-predatory defense. It may also be the case in competition between members of the same species or subgroup, if the average reward (e.g., status, access to resources, protection of self or kin) outweighs average costs (e.g., injury, exclusion from the group, death). There are some hypotheses of specific adaptions for violence in humans under certain circumstances, including for homicide, but it is often unclear what behaviors may have been selected for and what may have been a byproduct, as in the case of collective violence.
Although aggressive encounters are ubiquitous in the animal kingdom, with often high stakes, most encounters that involve aggression may be resolved through posturing, or displaying and trial of strength. Game theory is used to understand how such behaviors might spread by natural selection within a population, and potentially become 'Evolutionary Stable Strategies'. An initial model of resolution of conflicts is the hawk-dove game. Others include the Sequential assessment model and the Energetic war of attrition. These try to understand not just one-off encounters but protracted stand-offs, and mainly differ in the criteria by which an individual decides to give up rather than risk loss and harm in physical conflict (such as through estimates of resource holding potential).
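The hawk-dove game mentioned above can be stated in a few lines. In the sketch below, the resource value V and fighting cost C are arbitrary illustrative numbers; the standard result is that when the cost of escalation exceeds the value of the resource, the evolutionarily stable strategy is a mixture in which "hawk" is played with probability V/C.

```python
def hawk_dove_ess(value: float, cost: float) -> float:
    """Probability of playing 'hawk' at the evolutionarily stable strategy.

    Payoffs: hawk vs hawk = (value - cost) / 2, hawk vs dove = value,
    dove vs hawk = 0, dove vs dove = value / 2.  If cost <= value, pure
    hawk is stable; otherwise the ESS plays hawk with probability value/cost.
    """
    return 1.0 if cost <= value else value / cost

# Illustrative numbers: a resource worth 4 units and an injury costing 10 units.
print(hawk_dove_ess(4, 10))  # 0.4 -> escalated fights occur but are not universal
```

The more elaborate models named above differ mainly in how contestants assess each other and decide when to give up rather than risk injury.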
Gender
General
Gender plays an important role in human aggression. There are multiple theories that seek to explain findings that males and females of the same species can have differing aggressive behaviors. One review concluded that male aggression tended to produce pain or physical injury whereas female aggression tended towards psychological or social harm.
In general, sexual dimorphism can be attributed to greater intraspecific competition in one sex, either between rivals for access to mates and/or to be chosen by mates. This may stem from the other gender being constrained by providing greater parental investment, in terms of factors such as gamete production, gestation, lactation, or upbringing of young. Although there is much variation in species, generally the more physically aggressive sex is the male, particularly in mammals. In species where parental care by both sexes is required, there tends to be less of a difference. When the female can leave the male to care for the offspring, then females may be the larger and more physically aggressive. Competitiveness despite parental investment has also been observed in some species. A related factor is the rate at which males and females are able to mate again after producing offspring, and the basic principles of sexual selection are also influenced by ecological factors affecting the ways or extent to which one sex can compete for the other. The role of such factors in human evolution is controversial.
The pattern of male and female aggression is argued to be consistent with evolved sexually-selected behavioral differences, while alternative or complementary views emphasize conventional social roles stemming from physical evolved differences. Aggression in women may have evolved to be, on average, less physically dangerous and more covert or indirect. However, there are critiques for using animal behavior to explain human behavior, especially in the application of evolutionary explanations to contemporary human behavior, including differences between the genders.
According to the 2015 International Encyclopedia of the Social & Behavioral Sciences, sex differences in aggression are among the most robust and oldest findings in psychology. Past meta-analyses in the encyclopedia found that males, regardless of age, engaged in more physical and verbal aggression, while finding a small effect for females engaging in more indirect aggression such as rumor spreading or gossiping. It also found that males tend to engage in unprovoked aggression at higher frequency than females. This analysis also accords with the Oxford Handbook of Evolutionary Psychology, which reviewed past analyses finding that men use more verbal and physical aggression, with the difference being greater for physical aggression.
There are more recent findings showing that differences in male and female aggression appear at about two years of age, though the differences are more consistent in middle childhood and adolescence. Tremblay, Japel and Pérusse (1999) asserted that physically aggressive behaviors such as kicking, biting and hitting are age-typical expressions of innate and spontaneous reactions to biological drives such as anger, hunger, and affiliation. Girls' relational aggression, meaning non-physical or indirect aggression, tends to increase after age two, while physical aggression decreases. There was no significant difference in aggression between males and females before two years of age. A possible explanation is that girls develop language skills more quickly than boys, and therefore have better ways of verbalizing their wants and needs. They are more likely to use communication when trying to retrieve a toy, with words such as "Ask nicely" or "Say please."
According to an analysis across nine countries published in the journal Aggressive Behavior, boys reported greater use of physical aggression, while no consistent sex differences emerged in relational aggression. It has been found that girls are more likely than boys to use reactive aggression and then retract, whereas boys are more likely to escalate rather than retract their aggression after their first reaction. Studies show girls' aggressive tactics include gossip, ostracism, breaking confidences, and criticism of a victim's clothing, appearance, or personality, whereas boys engage in aggression that involves a direct physical and/or verbal assault. This could be because girls' frontal lobes develop earlier than boys', allowing them greater self-restraint.
One setting that shows insignificant differences between male and female aggression is sports. In sports, the rate of aggression in both contact and non-contact sports is relatively equal between the sexes. Since the establishment of Title IX, female sports have increased in competitiveness and importance, which could contribute to the evening out of aggression and of the "need to win" attitude between the genders. Among the sex differences found in adult sports were that females scored higher on indirect hostility while men scored higher on assault. Another difference is that men have up to 20 times higher levels of testosterone than women.
In intimate relationships
Some studies suggest that romantic involvement in adolescence decreases aggression in males and females, but that it decreases at a higher rate in females. Females may seem more desirable as mates if they fit in with society, and aggressive females usually do not fit in well and can often be viewed as antisocial. Female aggression is not considered the norm in society, and going against the norm can sometimes prevent one from getting a mate. However, studies have shown that an increasing number of women are being arrested on domestic violence charges. In many states, women now account for a quarter to a third of all domestic violence arrests, up from less than 10 percent a decade ago.
The new statistics reflect a reality documented in research: women are perpetrators as well as victims of family violence. However, another equally possible explanation is a case of improved diagnostics: it has become more acceptable for men to report female domestic violence to the authorities, while actual female domestic violence has not increased at all. This could be the case where men have become less ashamed of reporting female violence against them; such a situation could conceivably lead to an increasing number of women being arrested despite the actual number of violent women remaining the same.
In addition, males in competitive sports are often advised by their coaches not to be in intimate relationships, based on the premise that they become more docile and less aggressive during an athletic event. The circumstances in which males and females experience aggression also differ. A study showed that social anxiety and stress were positively correlated with aggression in males, meaning that as stress and social anxiety increase so does aggression. Furthermore, a male with higher social skills has a lower rate of aggressive behavior than a male with lower social skills. In females, higher rates of aggression were only correlated with higher rates of stress. Besides such factors, physiological factors contribute to aggression as well.
Physiological factors
Regarding sexual dimorphism, humans fall into an intermediate group with moderate sex differences in body size but relatively large testes. This is a typical pattern of primates where several males and females live together in a group and the male faces an intermediate number of challenges from other males compared to exclusive polygyny and monogamy but frequent sperm competition.
Evolutionary psychology and sociobiology have also discussed and produced theories for some specific forms of male aggression such as sociobiological theories of rape and theories regarding the Cinderella effect. Another evolutionary theory explaining gender differences in aggression is the Male Warrior hypothesis, which explains that males have psychologically evolved for intergroup aggression in order to gain access to mates, resources, territory and status.
Physiology
Brain pathways
Many researchers focus on the brain to explain aggression. Numerous circuits within both neocortical and subcortical structures play a central role in controlling aggressive behavior, depending on the species, and the exact role of pathways may vary depending on the type of trigger or intention.
In mammals, the hypothalamus and periaqueductal gray of the midbrain are critical areas, as shown in studies on cats, rats, and monkeys. These brain areas control the expression of both behavioral and autonomic components of aggression in these species, including vocalization. Electrical stimulation of the hypothalamus causes aggressive behavior and the hypothalamus has receptors that help determine aggression levels based on their interactions with serotonin and vasopressin. In rodents, activation of estrogen receptor-expressing neurons in the ventrolateral portion of the ventromedial hypothalamus (VMHvl) was found to be sufficient to initiate aggression in both males and females. Midbrain areas involved in aggression have direct connections with both the brainstem nuclei controlling these functions, and with structures such as the amygdala and prefrontal cortex.
Stimulation of the amygdala results in augmented aggressive behavior in hamsters, while lesions of an evolutionarily homologous area in the lizard greatly reduce competitive drive and aggression (Bauman et al. 2006). In rhesus monkeys, neonatal lesions in the amygdala or hippocampus results in reduced expression of social dominance, related to the regulation of aggression and fear. Several experiments in attack-primed Syrian golden hamsters, for example, support the claim of circuitry within the amygdala being involved in control of aggression. The role of the amygdala is less clear in primates and appears to depend more on situational context, with lesions leading to increases in either social affiliatory or aggressive responses. Amygdalotomy, which involves removing or destroying parts of the amygdala, has been performed on people to reduce their violent behaviour.
The broad area of the cortex known as the prefrontal cortex (PFC) is crucial for self-control and inhibition of impulses, including inhibition of aggression and emotions. Reduced activity of the prefrontal cortex, in particular its medial and orbitofrontal portions, has been associated with violent/antisocial aggression. In addition, reduced response inhibition has been found in violent offenders, compared to non-violent offenders.
The role of the chemicals in the brain, particularly neurotransmitters, in aggression has also been examined. This varies depending on the pathway, the context and other factors such as gender. A deficit in serotonin has been theorized to have a primary role in causing impulsivity and aggression. At least one epigenetic study supports this supposition. Nevertheless, low levels of serotonin transmission may explain a vulnerability to impulsiveness, potential aggression, and may have an effect through interactions with other neurochemical systems. These include dopamine systems which are generally associated with attention and motivation toward rewards, and operate at various levels. Norepinephrine, also known as noradrenaline, may influence aggression responses both directly and indirectly through the hormonal system, the sympathetic nervous system or the central nervous system (including the brain). It appears to have different effects depending on the type of triggering stimulus, for example social isolation/rank versus shock/chemical agitation which appears not to have a linear relationship with aggression. Similarly, GABA, although associated with inhibitory functions at many CNS synapses, sometimes shows a positive correlation with aggression, including when potentiated by alcohol.
The hormonal neuropeptides vasopressin and oxytocin play a key role in complex social behaviours in many mammals such as regulating attachment, social recognition, and aggression. Vasopressin has been implicated in male-typical social behaviors which includes aggression. Oxytocin may have a particular role in regulating female bonds with offspring and mates, including the use of protective aggression. Initial studies in humans suggest some similar effects.
In humans, aggressive behavior has been associated with abnormalities in three principal regulatory systems in the body: serotonin systems, catecholamine systems, and the hypothalamic–pituitary–adrenal axis. Abnormalities in these systems are also known to be induced by stress, either severe acute stress or chronic low-grade stress.
Testosterone
Early androgenization has an organizational effect on the developing brains of both males and females, making the neural circuits that control sexual behavior, as well as intermale and interfemale aggression, more sensitive to testosterone. There are noticeable sex differences in aggression. Testosterone is present to a lesser extent in females, who may be more sensitive to its effects. Animal studies have also indicated a link between incidents of aggression and the individual level of circulating testosterone. However, results in relation to primates, particularly humans, are less clear cut and are at best only suggestive of a positive association in some contexts.
In humans, there is a seasonal variation in aggression associated with changes in testosterone. For example, in some primate species, such as rhesus monkeys and baboons, females are more likely to engage in fights around the time of ovulation as well as right before menstruation. If the results were the same in humans as they are in rhesus monkeys and baboons, then the increase in aggressive behaviors during ovulation is explained by the decline in estrogen levels. This makes normal testosterone levels more effective. Castrated mice and rats exhibit lower levels of aggression. Males castrated as neonates exhibit low levels of aggression even when given testosterone throughout their development.
Challenge hypothesis
The challenge hypothesis outlines the dynamic relationship between plasma testosterone levels and aggression in mating contexts in many species. It proposes that testosterone is linked to aggression when it is beneficial for reproduction, such as in mate guarding and preventing the encroachment of intrasexual rivals. The challenge hypothesis predicts that seasonal patterns in testosterone levels in a species are a function of mating system (monogamy versus polygyny), paternal care, and male-male aggression in seasonal breeders.
This pattern between testosterone and aggression was first observed in seasonally breeding birds, such as the song sparrow, where testosterone levels rise modestly with the onset of the breeding season to support basic reproductive functions. The hypothesis has been subsequently expanded and modified to predict relationships between testosterone and aggression in other species. For example, chimpanzees, which are continuous breeders, show significantly raised testosterone levels and aggressive male-male interactions when receptive and fertile females are present. Currently, no research has specified a relationship between the modified challenge hypothesis and human behavior, or the human nature of concealed ovulation, although some suggest it may apply.
Effects on the nervous system
Another line of research has focused on the proximate effects of circulating testosterone on the nervous system, as mediated by local metabolism within the brain. Testosterone can be metabolized to estradiol by the enzyme aromatase, or to dihydrotestosterone (DHT) by 5α-reductase.
Aromatase is highly expressed in regions involved in the regulation of aggressive behavior, such as the amygdala and hypothalamus. In studies using genetic knockout techniques in inbred mice, male mice that lacked a functional aromatase enzyme displayed a marked reduction in aggression. Long-term treatment with estradiol partially restored aggressive behavior, suggesting that the neural conversion of circulating testosterone to estradiol and its effect on estrogen receptors influences inter-male aggression. In addition, two different estrogen receptors, ERα and ERβ, have been identified as having the ability to exert different effects on aggression in mice. However, the effect of estradiol appears to vary depending on the strain of mouse, and in some strains it reduces aggression during long days (16 h of light), while during short days (8 h of light) estradiol rapidly increases aggression.
Another hypothesis is that testosterone influences brain areas that control behavioral reactions. Studies in animal models indicate that aggression is affected by several interconnected cortical and subcortical structures within the so-called social behavior network. A study involving lesions and electrical-chemical stimulation in rodents and cats revealed that such a neural network consists of the medial amygdala, medial hypothalamus and periaqueductal grey (PAG), and it positively modulates reactive aggression. Moreover, a study done in human subjects showed that prefrontal-amygdala connectivity is modulated by endogenous testosterone during social emotional behavior.
In human studies, testosterone-aggression research has also focused on the role of the orbitofrontal cortex (OFC). This brain area is strongly associated with impulse control and self-regulation systems that integrate emotion, motivation, and cognition to guide context-appropriate behavior. Patients with localized lesions to the OFC engage in heightened reactive aggression. Aggressive behavior may be regulated by testosterone via reduced medial OFC engagement following social provocation. When measuring participants' salivary testosterone, higher levels can predict subsequent aggressive behavioral reactions to unfairness faced during a task. Moreover, brain scanning with fMRI shows reduced activity in the medial OFC during such reactions. Such findings may suggest that a specific brain region, the OFC, is a key factor in understanding reactive aggression.
General associations with behavior
Scientists have long been interested in the relationship between testosterone and aggressive behavior. In most species, males are more aggressive than females, and castration of males usually has a pacifying effect on their aggressive behavior. In humans, males engage in crime, and especially violent crime, more than females. Involvement in crime usually rises in the early to mid teens, at the same time as testosterone levels rise. Research on the relationship between testosterone and aggression is difficult because the only reliable measurement of brain testosterone is by a lumbar puncture, which is not done for research purposes. Studies therefore have often instead used less reliable measurements from blood or saliva.
The Handbook of Crime Correlates, a review of crime studies, states most studies support a link between adult criminality and testosterone although the relationship is modest if examined separately for each sex. However, nearly all studies of juvenile delinquency and testosterone are not significant. Most studies have also found testosterone to be associated with behaviors or personality traits linked with criminality such as antisocial behavior and alcoholism. Many studies have also been done on the relationship between more general aggressive behavior/feelings and testosterone. About half the studies have found a relationship and about half no relationship.
Studies of testosterone levels of male athletes before and after a competition revealed that testosterone levels rise shortly before their matches, as if in anticipation of the competition, and are dependent on the outcome of the event: testosterone levels of winners are high relative to those of losers. No specific response of testosterone levels to competition was observed in female athletes, although a mood difference was noted. In addition, some experiments have failed to find a relationship between testosterone levels and aggression in humans.
The possible correlation between testosterone and aggression could explain the "roid rage" that can result from anabolic steroid use, although an effect of abnormally high levels of steroids does not prove an effect at physiological levels.
Dehydroepiandrosterone
Dehydroepiandrosterone (DHEA) is the most abundant circulating androgen hormone and can be rapidly metabolized within target tissues into potent androgens and estrogens. Gonadal steroids generally regulate aggression during the breeding season, but non-gonadal steroids may regulate aggression during the non-breeding season. Castration of various species in the non-breeding season has no effect on territorial aggression. In several avian studies, circulating DHEA has been found to be elevated in birds during the non-breeding season. These data support the idea that non-breeding birds combine adrenal and/or gonadal DHEA synthesis with neural DHEA metabolism to maintain territorial behavior when gonadal testosterone secretion is low. Similar results have been found in studies involving different strains of rats, mice, and hamsters. DHEA levels also have been studied in humans and may play a role in human aggression. Circulating DHEAS (its sulfated ester) levels rise during adrenarche (≈7 years of age) while plasma testosterone levels are relatively low. This implies that aggression in pre-pubertal children with aggressive conduct disorder might be correlated with plasma DHEAS rather than plasma testosterone, suggesting an important link between DHEAS and human aggressive behavior.
Glucocorticoids
Glucocorticoid hormones have an important role in regulating aggressive behavior. In adult rats, acute injections of corticosterone promote aggressive behavior and acute reduction of corticosterone decreases aggression; however, a chronic reduction of corticosterone levels can produce abnormally aggressive behavior. In addition, glucocorticoids affect development of aggression and establishment of social hierarchies. Adult mice with low baseline levels of corticosterone are more likely to become dominant than are mice with high baseline corticosterone levels.
Glucocorticoids are released by the hypothalamic pituitary adrenal (HPA) axis in response to stress, of which cortisol is the most prominent in humans. Results in adults suggest that reduced levels of cortisol, linked to lower fear or a reduced stress response, can be associated with more aggression. However, it may be that proactive aggression is associated with low cortisol levels while reactive aggression may be accompanied by elevated levels. Differences in assessments of cortisol may also explain a diversity of results, particularly in children.
The HPA axis is related to the general fight-or-flight response or acute stress reaction, and the role of catecholamines such as epinephrine, popularly known as adrenaline.
Pheromones
In many animals, aggression can be linked to pheromones released between conspecifics. In mice, major urinary proteins (Mups) have been demonstrated to promote innate aggressive behavior in males, and this can be mediated by neuromodulatory systems. Mups activate olfactory sensory neurons in the vomeronasal organ (VNO), a subsystem of the nose known to detect pheromones via specific sensory receptors, of mice and rats. Pheromones have also been identified in fruit flies, detected by neurons in the antenna, that send a message to the brain eliciting aggression; it has been noted that aggression pheromones have not been identified in humans.
Genetics
In general, differences in a continuous phenotype such as aggression are likely to result from the action of a large number of genes each of small effect, which interact with each other and the environment through development and life.
In a non-mammalian example of genes related to aggression, the fruitless gene in fruit flies is a critical determinant of certain sexually dimorphic behaviors, and its artificial alteration can result in a reversal of stereotypically male and female patterns of aggression in fighting. However, in what was thought to be a relatively clear case, inherent complexities have been reported in deciphering the connections between interacting genes in an environmental context and a social phenotype involving multiple behavioral and sensory interactions with another organism.
In mice, candidate genes for differentiating aggression between the sexes are the Sry (sex determining region Y) gene, located on the Y chromosome and the Sts (steroid sulfatase) gene. The Sts gene encodes the steroid sulfatase enzyme, which is pivotal in the regulation of neurosteroid biosynthesis. It is expressed in both sexes, is correlated with levels of aggression among male mice, and increases dramatically in females after parturition and during lactation, corresponding to the onset of maternal aggression. At least one study has found a possible epigenetic signature (i.e., decreased methylation at a specific CpG site on the promoter region) of the serotonin receptor 5-HT3a that is associated with maternal aggression among human subjects.
Mice with experimentally elevated sensitivity to oxidative stress (through inhibition of copper-zinc superoxide dismutase, SOD1 activity) were tested for aggressive behavior. Males completely deficient in SOD1 were found to be more aggressive than both wild-type males and males that express 50% of this antioxidant enzyme. They were also faster to attack another male. The causal connection between SOD1 deficiency and increased aggression is not yet understood.
In humans, there is good evidence that the basic human neural architecture underpinning the potential for flexible aggressive responses is influenced by genes as well as environment. In terms of variation between individual people, more than 100 twin and adoption studies have been conducted in recent decades examining the genetic basis of aggressive behavior and related constructs such as conduct disorders. According to a meta-analysis published in 2002, approximately 40% of variation between individuals is explained by differences in genes, and 60% by differences in environment (mainly non-shared environmental influences rather than those that would be shared by being raised together). However, such studies have depended on self-report or observation by others including parents, which complicates interpretation of the results.
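The sources summarized here do not spell out how such percentages are obtained, but a standard, simplified route from twin data is Falconer's approximation, sketched here with hypothetical correlations chosen only to illustrate the arithmetic: h² = 2(r_MZ − r_DZ), c² = 2r_DZ − r_MZ, e² = 1 − r_MZ, where h², c² and e² are the shares of variance attributed to additive genetics, shared environment and non-shared environment. For example, hypothetical twin correlations of r_MZ = 0.52 and r_DZ = 0.32 would give h² = 0.40 and e² = 0.48, the same order of magnitude as the 40%/60% split quoted above, with the small remainder attributed to shared environment.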
The few laboratory-based analyses have not found significant amounts of individual variation in aggression explicable by genetic variation in the human population. Furthermore, linkage and association studies that seek to identify specific genes, for example those that influence neurotransmitter or hormone levels, have generally resulted in contradictory findings characterized by failed attempts at replication. One possible factor is an allele (variant) of the MAO-A gene which, in interaction with certain life events such as childhood maltreatment (which may show a main effect on its own), can influence development of brain regions such as the amygdala, so that some types of behavioral response become more likely. The generally unclear picture has been compared to equally difficult findings obtained in regard to other complex behavioral phenotypes. For example, both the 7R and 5R ADHD-linked VNTR alleles of the dopamine receptor D4 gene are directly associated with the incidence of proactive aggression in men with no history of ADHD.
Society and culture
Humans share aspects of aggression with non-human animals, and have specific aspects and complexity related to factors such as genetics, early development, social learning and flexibility, culture and morals.
Konrad Lorenz stated in his 1963 classic, On Aggression, that human behavior is shaped by four main, survival-seeking animal drives. Taken together, these drives (hunger, fear, reproduction, and aggression) achieve natural selection. E. O. Wilson elaborated in On Human Nature that aggression is, typically, a means of gaining control over resources. Aggression is, thus, aggravated during times when high population densities generate resource shortages. According to Richard Leakey and his colleagues, aggression in humans has also increased as people became more interested in ownership and in defending their property. However, in 1989 UNESCO adopted the Seville Statement on Violence, which refuted claims by evolutionary scientists that genetics by itself was the sole cause of aggression.
Social and cultural aspects may significantly interfere with the distinct expression of aggressiveness. For example, a high population density, when associated with a decrease of available resources, might be a significant intervening variable for the occurrence of violent acts.
Culture
Culture is one factor that plays a role in aggression. Tribal or band societies existing before or outside of modern states have sometimes been depicted as peaceful 'noble savages'. The ǃKung people were described as 'The Harmless People' in a popular work by Elizabeth Marshall Thomas in 1958, while Lawrence Keeley's 1996 War Before Civilization suggested that regular warfare without modern technology was conducted by most groups throughout human history, including most Native American tribes. Studies of hunter-gatherers show a range of different societies. In general, aggression, conflict and violence sometimes occur, but direct confrontation is generally avoided and conflict is socially managed by a variety of verbal and non-verbal methods. Different rates of aggression or violence, currently or in the past, within or between groups, have been linked to the structuring of societies and environmental conditions influencing factors such as resource or property acquisition, land and subsistence techniques, and population change.
American psychologist Peter Gray hypothesizes that band hunter-gatherer societies are able to reduce aggression while maintaining relatively peaceful, egalitarian relations between members through various methods, such as fostering a playful spirit in all areas of life, the use of humor to counter the tendency of any one person to dominate the group, and non-coercive or "indulgent" child-rearing practices. Gray likens hunter-gatherer bands to social play groups, while stressing that such play is not frivolous or even easy at all times. According to Gray, "Social play—that is, play involving more than one player—is necessarily egalitarian. It always requires a suspension of aggression and dominance along with a heightened sensitivity to the needs and desires of the other players".
Joan Durrant at the University of Manitoba writes that a number of studies have found physical punishment to be associated with "higher levels of aggression against parents, siblings, peers and spouses", even when controlling for other factors.
According to Elizabeth Gershoff at the University of Texas at Austin, the more that children are physically punished, the more likely they are as adults to act violently towards family members, including intimate partners.
In countries where physical punishment of children is perceived as being more culturally accepted, it is less strongly associated with increased aggression; however, physical punishment has been found to predict some increase in child aggression regardless of culture. While these associations do not prove causality, a number of longitudinal studies suggest that the experience of physical punishment has a direct causal effect on later aggressive behaviors. In examining several longitudinal studies that investigated the path from disciplinary spanking to aggression in children from preschool age through adolescence, Gershoff concluded: "Spanking consistently predicted increases in children's aggression over time, regardless of how aggressive children were when the spanking occurred".
Similar results were found by Catherine Taylor at Tulane University in 2010. Family violence researcher Murray A. Straus argues, "There are many reasons this evidence has been ignored. One of the most important is the belief that spanking is more effective than nonviolent discipline and is, therefore, sometimes necessary, despite the risk of harmful side effects".
Analyzing aggression culturally or politically is complicated by the fact that the label 'aggressive' can itself be used as a way of asserting a judgement from a particular point of view. Whether a coercive or violent method of social control is perceived as aggression – or as legitimate versus illegitimate aggression – depends on the position of the relevant parties in relation to the social order of their culture. This in turn can relate to factors such as: norms for coordinating actions and dividing resources; what is considered self-defense or provocation; attitudes towards 'outsiders', attitudes towards specific groups such as women, disabled people or those with lower status; the availability of alternative conflict resolution strategies; trade interdependence and collective security pacts; fears and impulses; and ultimate goals regarding material and social outcomes.
Cross-cultural research has found differences in attitudes towards aggression in different cultures. In one questionnaire study of university students, in addition to men overall justifying some types of aggression more than women, United States respondents justified defensive physical aggression more readily than Japanese or Spanish respondents, whereas Japanese students preferred direct verbal aggression (but not indirect) more than their American and Spanish counterparts.
Within American culture, southern men were shown in a study on university students to be more affected and to respond more aggressively than northerners when randomly insulted after being bumped into, which was theoretically related to a traditional culture of honor in the Southern United States, or "saving face." Other cultural themes sometimes applied to the study of aggression include individualistic versus collectivist styles, which may relate, for example, to whether disputes are responded to with open competition or by accommodating and avoiding conflicts.
In a study including 62 countries, school principals reported aggressive student behavior more often the more individualist, and hence less collectivist, their country's culture was. Other comparisons made in relation to aggression or war include democratic versus authoritarian political systems and egalitarian versus stratified societies.
The economic system known as capitalism has been viewed by some as reliant on the leveraging of human competitiveness and aggression in pursuit of resources and trade, which has been considered in both positive and negative terms. Attitudes about the social acceptability of particular acts or targets of aggression are also important factors. This can be highly controversial, as in disputes between religions or nation states, for example in regard to the Arab–Israeli conflict.
Media
Some scholars believe that behaviors like aggression may be partially learned by watching and imitating people's behavior, while other researchers have concluded that the media may have some small effects on aggression. There is also research questioning this view. For instance, a long-term outcome study of youth found no long-term relationship between playing violent video games and youth violence or bullying.
One study suggested there is a smaller effect of violent video games on aggression than has been found with television violence on aggression. This effect is positively associated with type of game violence and negatively associated to time spent playing the games. The author concluded that insufficient evidence exists to link video game violence with aggression. However, another study suggested links to aggressive behavior.
Children
The frequency of physical aggression in humans peaks at around 2–3 years of age. It then declines gradually on average. These observations suggest that physical aggression is not only a learned behavior but that development provides opportunities for the learning and biological development of self-regulation. However, a small subset of children fail to acquire all the necessary self-regulatory abilities and tend to show atypical levels of physical aggression across development. They may be at risk for later violent behavior or, conversely, lack of aggression that may be considered necessary within society.
However, some findings suggest that early aggression does not necessarily lead to aggression later on, although the course through early childhood is an important predictor of outcomes in middle childhood. In addition, physical aggression that continues is likely occurring in the context of family adversity, including socioeconomic factors. Moreover, 'opposition' and 'status violations' in childhood appear to be more strongly linked to social problems in adulthood than simply aggressive antisocial behavior. Social learning through interactions in early childhood has been seen as a building block for levels of aggression which play a crucial role in the development of peer relationships in middle childhood. Overall, an interplay of biological, social and environmental factors can be considered. Some research indicates that changes in the weather can increase the likelihood of children exhibiting deviant behavior.
Typical expectations
Young children preparing to enter kindergarten need to develop the socially important skill of being assertive. Examples of assertiveness include asking others for information, initiating conversation, or being able to respond to peer pressure.
In contrast, some young children use aggressive behavior, such as hitting or biting, as a form of communication.
Aggressive behavior can impede learning as a skill deficit, while assertive behavior can facilitate learning. However, with young children, aggressive behavior is developmentally appropriate and can lead to opportunities of building conflict resolution and communication skills.
By school age, children should learn more socially appropriate forms of communicating such as expressing themselves through verbal or written language; if they have not, this behavior may signify a disability or developmental delay.
Aggression triggers
Physical fear of others
Family difficulties
Learning, neurological, or conduct/behavior disorders
Psychological trauma
The Bobo doll experiment was conducted by Albert Bandura in 1961. In this work, Bandura found that children exposed to an aggressive adult model acted more aggressively than those who were exposed to a nonaggressive adult model. This experiment suggests that anyone who comes in contact with and interacts with children can affect the way they react and handle situations.
Summary points from recommendations by national associations
American Academy of Pediatrics (2011): "The best way to prevent aggressive behavior is to give your child a stable, secure home life with firm, loving discipline and full-time supervision during the toddler and preschool years. Everyone who cares for your child should be a good role model and agree on the rules he's expected to observe as well as the response to use if he disobeys."
National Association of School Psychologists (2008): "Proactive aggression is typically reasoned, unemotional, and focused on acquiring some goal. For example, a bully wants peer approval and victim submission, and gang members want status and control. In contrast, reactive aggression is frequently highly emotional and is often the result of biased or deficient cognitive processing on the part of the student."
Gender
Gender is a factor that plays a role in both human and animal aggression. Males are historically believed to be generally more physically aggressive than females from an early age, and men commit the vast majority of murders (Buss 2005). This is one of the most robust and reliable behavioral sex differences, and it has been found across many different age groups and cultures. However, some empirical studies have found the discrepancy in male and female aggression to be more pronounced in childhood and the gender difference in adults to be modest when studied in an experimental context. Still, there is evidence that males are quicker to aggression (Frey et al. 2003) and more likely than females to express their aggression physically. When considering indirect forms of non-violent aggression, such as relational aggression and social rejection, some scientists argue that females can be quite aggressive, although female aggression is rarely expressed physically. An exception is intimate partner violence that occurs among couples who are engaged, married, or in some other form of intimate relationship.
Although females are less likely than males to initiate physical violence, they can express aggression by using a variety of non-physical means. Exactly which method women use to express aggression is something that varies from culture to culture. On Bellona Island, a culture based on male dominance and physical violence, women tend to get into conflicts with other women more frequently than with men. When in conflict with males, instead of using physical means, they make up songs mocking the man, which spread across the island and humiliate him. If a woman wanted to kill a man, she would either convince her male relatives to kill him or hire an assassin. Although these two methods involve physical violence, both are forms of indirect aggression, since the aggressor herself avoids getting directly involved or putting herself in immediate physical danger.
See also the sections on testosterone and evolutionary explanations for gender differences above.
Situational factors
There have been links found between those prone to violence and their alcohol use. Those who are prone to violence and use alcohol are more likely to carry out violent acts. Alcohol impairs judgment, making people much less cautious than they usually are (MacDonald et al. 1996). It also disrupts the way information is processed (Bushman 1993, 1997; Bushman & Cooper 1990).
Pain and discomfort also increase aggression. Even the simple act of placing one's hands in hot water can cause an aggressive response. Hot temperatures have been implicated as a factor in a number of studies. One study completed in the midst of the civil rights movement found that riots were more likely on hotter days than cooler ones (Carlsmith & Anderson 1979). Students were found to be more aggressive and irritable after taking a test in a hot classroom (Anderson et al. 1996, Rule, et al. 1987). Drivers in cars without air conditioning were also found to be more likely to honk their horns (Kenrick & MacFarlane 1986), which is used as a measure of aggression and has shown links to other factors such as generic symbols of aggression or the visibility of other drivers.
Frustration is another major cause of aggression. The frustration-aggression theory states that aggression increases if a person feels that he or she is being blocked from achieving a goal (Aronson et al. 2005). One study found that closeness to the goal makes a difference: it examined people waiting in line and concluded that the 2nd person was more aggressive than the 12th when someone cut in line (Harris 1974). Unexpected frustration may be another factor. In a separate study to demonstrate how unexpected frustration leads to increased aggression, Kulik & Brown (1979) selected a group of students as volunteers to make calls for charity donations. One group was told that the people they would call would be generous and the collection would be very successful. The other group was given no expectations. The group that expected success was more upset when no one was pledging than the group that did not expect success (in reality, both groups had very poor success). This research suggests that when an expectation (successful collections) does not materialize, unexpected frustration arises, which increases aggression.
There is some evidence to suggest that the presence of violent objects such as a gun can trigger aggression. In a study done by Leonard Berkowitz and Anthony Le Page (1967), college students were made angry and then left in the presence of a gun or badminton racket. They were then led to believe they were delivering electric shocks to another student, as in the Milgram experiment. Those who had been in the presence of the gun administered more shocks. It is possible that a violence-related stimulus increases the likelihood of aggressive cognitions by activating the semantic network.
A new proposal links military experience to anger and aggression, developing aggressive reactions and investigating these effects on those possessing the traits of a serial killer. Castle and Hensley state, "The military provides the social context where servicemen learn aggression, violence, and murder." Post-traumatic stress disorder (PTSD) is also a serious issue in the military, also believed to sometimes lead to aggression in soldiers who are suffering from what they witnessed in battle. They come back to the civilian world and may still be haunted by flashbacks and nightmares, causing severe stress. In addition, it has been claimed that in the rare minority who are claimed to be inclined toward serial killing, violent impulses may be reinforced and refined in war, possibly creating more effective murderers.
As a positive adaptation theory
Some recent scholarship has questioned traditional psychological conceptualizations of aggression as universally negative. Most traditional psychological definitions of aggression focus on the harm to the recipient of the aggression, implying this is the intent of the aggressor; however this may not always be the case. From this alternate view, although the recipient may or may not be harmed, the perceived intent is to increase the status of the aggressor, not necessarily to harm the recipient. Such scholars contend that traditional definitions of aggression have no validity because of how challenging it is to study directly.
From this view, rather than concepts such as assertiveness, aggression, violence and criminal violence existing as distinct constructs, they exist instead along a continuum with moderate levels of aggression being most adaptive. Such scholars do not consider this a trivial difference, noting that many traditional researchers' aggression measurements may measure outcomes lower down in the continuum, at levels which are adaptive, yet they generalize their findings to non-adaptive levels of aggression, thus losing precision.
See also
Aggressionism
Aggressive narcissism
Bullying
Child abuse
Conflict (disambiguation)
Displaced aggression
Duelling
Frustration-Aggression Hypothesis
Hero syndrome
Homo homini lupus
Microaggression
Non-aggression pact
Non-aggression principle
Parental abuse by children
Passive aggressive behavior
Rage (emotion)
Relational aggression
Resource holding potential
Revenge
School bullying
School violence
Social defeat
References
Further reading
R. Douglas Fields, "The Roots of Human Aggression: Experiments in humans and animals have started to identify how violent behaviors begin in the brain", Scientific American, vol. 320, no. 5 (May 2019), pp. 64–71. "Decisions to take aggressive action are risky and bring into play specific neural circuits." (p. 66.)
External links
When Family Life Hurts: Family experience of aggression in children – Parentline plus, 31 October 2010
Aggression and Violent Behavior, a Review Journal
International Society for Research on Aggression (ISRA)
Problems in the Concepts and Definitions of Aggression, Violence and some Related Terms by Johan van der Dennen, originally published in 1980
Aggression and brain asymmetry
Behavior
Human behavior
Problem behavior
Dispute resolution
Social psychology
Symptoms and signs of mental disorders
Violence | Aggression | ["Biology"] | 11,929 | ["Behavior", "Problem behavior", "Violence", "Aggression", "Human behavior"] |
58,690 | https://en.wikipedia.org/wiki/Crystal%20structure | In crystallography, crystal structure is a description of the ordered arrangement of atoms, ions, or molecules in a crystalline material. Ordered structures occur from the intrinsic nature of the constituent particles to form symmetric patterns that repeat along the principal directions of three-dimensional space in matter.
The smallest group of particles in the material that constitutes this repeating pattern is the unit cell of the structure. The unit cell completely reflects the symmetry and structure of the entire crystal, which is built up by repetitive translation of the unit cell along its principal axes. The translation vectors define the nodes of the Bravais lattice.
The lengths of the principal axes, or edges, of the unit cell and the angles between them are the lattice constants, also called lattice parameters or cell parameters. The symmetry properties of the crystal are described by the concept of space groups. All possible symmetric arrangements of particles in three-dimensional space may be described by the 230 space groups.
The crystal structure and symmetry play a critical role in determining many physical properties, such as cleavage, electronic band structure, and optical transparency.
Unit cell
Crystal structure is described in terms of the geometry of arrangement of particles in the unit cells. The unit cell is defined as the smallest repeating unit having the full symmetry of the crystal structure. The geometry of the unit cell is defined as a parallelepiped, providing six lattice parameters taken as the lengths of the cell edges (a, b, c) and the angles between them (α, β, γ). The positions of particles inside the unit cell are described by the fractional coordinates (xi, yi, zi) along the cell edges, measured from a reference point. It is thus only necessary to report the coordinates of a smallest asymmetric subset of particles, called the crystallographic asymmetric unit. The asymmetric unit may be chosen so that it occupies the smallest physical space, which means that not all particles need to be physically located inside the boundaries given by the lattice parameters. All other particles of the unit cell are generated by the symmetry operations that characterize the symmetry of the unit cell. The collection of symmetry operations of the unit cell is expressed formally as the space group of the crystal structure.
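As a rough illustration of how the six lattice parameters and fractional coordinates work together (the orientation convention and numerical values below are our illustrative choices), fractional coordinates can be converted to Cartesian positions with a matrix built from (a, b, c, α, β, γ):

```python
import numpy as np

def lattice_matrix(a, b, c, alpha, beta, gamma):
    """3x3 matrix whose columns are the cell vectors, built from the six
    lattice parameters (lengths in arbitrary units, angles in degrees).
    Conventional orientation: a along x, b in the xy-plane."""
    al, be, ga = np.radians([alpha, beta, gamma])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cz = np.sqrt(c**2 - cx**2 - cy**2)
    return np.array([
        [a,   b * np.cos(ga), cx],
        [0.0, b * np.sin(ga), cy],
        [0.0, 0.0,            cz],
    ])

# Fractional coordinates (x, y, z) along the cell edges become Cartesian
# positions by multiplication with the lattice matrix.
M = lattice_matrix(5.0, 6.0, 7.0, 90.0, 100.0, 120.0)  # illustrative cell
frac = np.array([0.25, 0.50, 0.75])
print(M @ frac)
```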
Miller indices
Vectors and planes in a crystal lattice are described by the three-value Miller index notation. This syntax uses the indices h, k, and ℓ as directional parameters.
By definition, the syntax (hkℓ) denotes a plane that intercepts the three points a1/h, a2/k, and a3/ℓ, or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane with the unit cell (in the basis of the lattice vectors). If one or more of the indices is zero, it means that the planes do not intersect that axis (i.e., the intercept is "at infinity"). A plane containing a coordinate axis is translated so that it no longer contains that axis before its Miller indices are determined. The Miller indices for a plane are integers with no common factors. Negative indices are indicated with horizontal bars (e.g., an index of −2 is written 2̄, as in (12̄3)). In an orthogonal coordinate system for a cubic cell, the Miller indices of a plane are the Cartesian components of a vector normal to the plane.
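A minimal sketch of the reciprocal-intercept rule just described (the helper function and example intercepts are ours, purely for illustration):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_indices(intercepts):
    """Turn plane intercepts, expressed in units of the lattice vectors
    (use float('inf') for a plane parallel to an axis), into Miller indices."""
    # Take reciprocals; an infinite intercept maps to a zero index.
    recips = [Fraction(0) if x == float('inf')
              else 1 / Fraction(x).limit_denominator()
              for x in intercepts]
    # Clear denominators to get the smallest integers with no common factor.
    lcm = reduce(lambda m, n: m * n // gcd(m, n), (r.denominator for r in recips))
    ints = [int(r * lcm) for r in recips]
    common = reduce(gcd, (abs(i) for i in ints)) or 1
    return tuple(i // common for i in ints)

# A plane cutting the axes at 1, 1/2 and infinity has Miller indices (1 2 0).
print(miller_indices([1, 0.5, float('inf')]))  # -> (1, 2, 0)
```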
Considering only (hkℓ) planes intersecting one or more lattice points (the lattice planes), the distance d between adjacent lattice planes is related to the (shortest) reciprocal lattice vector g orthogonal to the planes by the formula d = 2π/|g|.
Planes and directions
The crystallographic directions are geometric lines linking nodes (atoms, ions or molecules) of a crystal. Likewise, the crystallographic planes are geometric planes linking nodes. Some directions and planes have a higher density of nodes. These high density planes have an influence on the behavior of the crystal as follows:
Optical properties: Refractive index is directly related to density (or periodic density fluctuations).
Absorption and reactivity: Physical adsorption and chemical reactions occur at or near surface atoms or molecules. These phenomena are thus sensitive to the density of nodes.
Surface tension: The condensation of a material means that the atoms, ions or molecules are more stable if they are surrounded by other similar species. The surface tension of an interface thus varies according to the density on the surface.
Microstructural defects: Pores and crystallites tend to have straight grain boundaries following higher density planes.
Cleavage: This typically occurs preferentially parallel to higher density planes.
Plastic deformation: Dislocation glide occurs preferentially parallel to higher density planes. The perturbation carried by the dislocation (Burgers vector) is along a dense direction. The shift of one node in a more dense direction requires a lesser distortion of the crystal lattice.
Some directions and planes are defined by symmetry of the crystal system. In monoclinic, trigonal, tetragonal, and hexagonal systems there is one unique axis (sometimes called the principal axis) which has higher rotational symmetry than the other two axes. The basal plane is the plane perpendicular to the principal axis in these crystal systems. For triclinic, orthorhombic, and cubic crystal systems the axis designation is arbitrary and there is no principal axis.
Cubic structures
For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted a); similarly for the reciprocal lattice. So, in this common case, the Miller indices (ℓmn) and [ℓmn] both simply denote normals/directions in Cartesian coordinates. For cubic crystals with lattice constant a, the spacing d between adjacent (ℓmn) lattice planes is (from above): d = a/√(ℓ² + m² + n²).
Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes:
Coordinates in angle brackets such as ⟨100⟩ denote a family of directions that are equivalent due to symmetry operations, such as [100], [010], [001] or the negative of any of those directions; a short sketch enumerating such a family is given below.
Coordinates in curly brackets or braces such as {100} denote a family of plane normals that are equivalent due to symmetry operations, much the way angle brackets denote a family of directions.
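A minimal sketch (function name and usage ours) that enumerates a cubic direction family by permuting the indices and flipping their signs, as described above:

```python
from itertools import permutations, product

def direction_family(u, v, w):
    """All directions obtained from [uvw] by permuting the indices and
    changing their signs, i.e. the <uvw> family under cubic symmetry."""
    members = set()
    for perm in permutations((u, v, w)):
        for signs in product((1, -1), repeat=3):
            members.add(tuple(s * x for s, x in zip(signs, perm)))
    return sorted(members)

# The <100> family consists of the six cube-edge directions.
print(direction_family(1, 0, 0))
# [(-1, 0, 0), (0, -1, 0), (0, 0, -1), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
```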
For face-centered cubic (fcc) and body-centered cubic (bcc) lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions.
Interplanar spacing
The spacing d between adjacent (hkℓ) lattice planes is given by:
Cubic: 1/d² = (h² + k² + ℓ²)/a²
Tetragonal: 1/d² = (h² + k²)/a² + ℓ²/c²
Hexagonal: 1/d² = (4/3)·(h² + hk + k²)/a² + ℓ²/c²
Rhombohedral (primitive setting): 1/d² = [(h² + k² + ℓ²)sin²α + 2(hk + kℓ + hℓ)(cos²α − cosα)] / [a²(1 − 3cos²α + 2cos³α)]
Orthorhombic: 1/d² = h²/a² + k²/b² + ℓ²/c²
Monoclinic: 1/d² = (1/sin²β)·(h²/a² + k²·sin²β/b² + ℓ²/c² − 2hℓ·cosβ/(ac))
Triclinic: 1/d² = [h²b²c²sin²α + k²a²c²sin²β + ℓ²a²b²sin²γ + 2hk·abc²(cosα cosβ − cosγ) + 2kℓ·a²bc(cosβ cosγ − cosα) + 2hℓ·ab²c(cosγ cosα − cosβ)] / V², where V is the unit-cell volume
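All of the expressions above are special cases of the single relation 1/d² = hᵀG*h, where G* is the reciprocal metric tensor; the following sketch (parameter values illustrative) evaluates it for any cell:

```python
import numpy as np

def d_spacing(hkl, a, b, c, alpha, beta, gamma):
    """Interplanar spacing for the (hkl) planes of a cell with lattice
    parameters a, b, c (lengths) and alpha, beta, gamma (degrees),
    using 1/d^2 = h^T G* h with G* the reciprocal metric tensor."""
    al, be, ga = np.radians([alpha, beta, gamma])
    # Direct metric tensor built from the lattice parameters.
    G = np.array([
        [a * a,              a * b * np.cos(ga), a * c * np.cos(be)],
        [a * b * np.cos(ga), b * b,              b * c * np.cos(al)],
        [a * c * np.cos(be), b * c * np.cos(al), c * c],
    ])
    g_star = np.linalg.inv(G)          # reciprocal metric tensor
    h = np.asarray(hkl, dtype=float)
    return 1.0 / np.sqrt(h @ g_star @ h)

# For a cubic cell this reproduces d = a / sqrt(h^2 + k^2 + l^2):
print(d_spacing((1, 1, 1), 4.0, 4.0, 4.0, 90, 90, 90))  # ≈ 2.309 = 4/sqrt(3)
```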
Classification by symmetry
The defining property of a crystal is its inherent symmetry. Performing certain symmetry operations on the crystal lattice leaves it unchanged. All crystals have translational symmetry in three directions, but some have other symmetry elements as well. For example, rotating the crystal 180° about a certain axis may result in an atomic configuration that is identical to the original configuration; the crystal has twofold rotational symmetry about this axis. In addition to rotational symmetry, a crystal may have symmetry in the form of mirror planes, and also the so-called compound symmetries, which are a combination of translation and rotation or mirror symmetries. A full classification of a crystal is achieved when all inherent symmetries of the crystal are identified.
Lattice systems
Lattice systems are a grouping of crystal structures according to the point groups of their lattice. All crystals fall into one of seven lattice systems. They are related to, but not the same as the seven crystal systems.
The most symmetric, the cubic or isometric system, has the symmetry of a cube, that is, it exhibits four threefold rotational axes oriented at 109.5° (the tetrahedral angle) with respect to each other. These threefold axes lie along the body diagonals of the cube. The other six lattice systems are hexagonal, tetragonal, rhombohedral (often confused with the trigonal crystal system), orthorhombic, monoclinic and triclinic.
Bravais lattices
Bravais lattices, also referred to as space lattices, describe the geometric arrangement of the lattice points, and therefore the translational symmetry of the crystal. The three dimensions of space afford 14 distinct Bravais lattices describing the translational symmetry. All crystalline materials recognized today, not including quasicrystals, fit into one of these arrangements. The fourteen three-dimensional lattices are classified by lattice system.
The crystal structure consists of the same group of atoms, the basis, positioned around each and every lattice point. This group of atoms therefore repeats indefinitely in three dimensions according to the arrangement of one of the Bravais lattices. The characteristic rotation and mirror symmetries of the unit cell are described by its crystallographic point group.
Crystal systems
A crystal system is a set of point groups in which the point groups themselves and their corresponding space groups are assigned to a lattice system. Of the 32 point groups that exist in three dimensions, most are assigned to only one lattice system, in which case the crystal system and lattice system both have the same name. However, five point groups are assigned to two lattice systems, rhombohedral and hexagonal, because both lattice systems exhibit threefold rotational symmetry. These point groups are assigned to the trigonal crystal system.
In total there are seven crystal systems: triclinic, monoclinic, orthorhombic, tetragonal, trigonal, hexagonal, and cubic.
Point groups
The crystallographic point group or crystal class is the mathematical group comprising the symmetry operations that leave at least one point unmoved and that leave the appearance of the crystal structure unchanged. These symmetry operations include
Reflection, which reflects the structure across a reflection plane
Rotation, which rotates the structure a specified portion of a circle about a rotation axis
Inversion, which changes the sign of the coordinate of each point with respect to a center of symmetry or inversion point
Improper rotation, which consists of a rotation about an axis followed by an inversion.
Rotation axes (proper and improper), reflection planes, and centers of symmetry are collectively called symmetry elements. There are 32 possible crystal classes. Each one can be classified into one of the seven crystal systems.
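These operations can be represented as 3×3 matrices acting on coordinates. The sketch below is only an illustration, assuming Cartesian coordinates and an arbitrary example point; it uses NumPy:

```python
import numpy as np

# A few point-group operations written as 3x3 matrices acting on Cartesian coordinates.
inversion = -np.eye(3)                      # (x, y, z) -> (-x, -y, -z)
mirror_z  = np.diag([1.0, 1.0, -1.0])       # reflection through the xy plane
rot2_z    = np.diag([-1.0, -1.0, 1.0])      # twofold (180 degree) rotation about z
rot4_z    = np.array([[0.0, -1.0, 0.0],     # fourfold (90 degree) rotation about z
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
rotoinv4_z = inversion @ rot4_z             # improper rotation: rotate about z, then invert

point = np.array([0.1, 0.2, 0.3])
for name, op in [("inversion", inversion), ("mirror through xy", mirror_z),
                 ("2-fold about z", rot2_z), ("improper 4-fold about z", rotoinv4_z)]:
    print(f"{name:>24}: {op @ point}")
```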
Space groups
In addition to the operations of the point group, the space group of the crystal structure contains translational symmetry operations. These include:
Pure translations, which move a point along a vector
Screw axes, which rotate a point around an axis while translating parallel to the axis.
Glide planes, which reflect a point through a plane while translating it parallel to the plane.
There are 230 distinct space groups.
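A space-group operation can be written as a point-group matrix combined with a fractional translation. The following sketch is illustrative only (the particular screw axis, glide plane, and test coordinates are arbitrary choices):

```python
import numpy as np

# A space-group operation: x' = R @ x + t, in fractional (unit-cell) coordinates.
def apply(R, t, x):
    return (np.asarray(R) @ np.asarray(x) + np.asarray(t)) % 1.0  # wrap back into the cell

x = np.array([0.10, 0.20, 0.30])

# 2_1 screw axis along c: twofold rotation about z plus a translation of c/2
screw_21 = ([[-1, 0, 0], [0, -1, 0], [0, 0, 1]], [0.0, 0.0, 0.5])
# c-glide perpendicular to b: mirror across the xz plane plus a translation of c/2
glide_c  = ([[1, 0, 0], [0, -1, 0], [0, 0, 1]], [0.0, 0.0, 0.5])

print("2_1 screw:", apply(*screw_21, x))  # -> [0.90, 0.80, 0.80]
print("c-glide  :", apply(*glide_c,  x))  # -> [0.10, 0.80, 0.80]
```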
Atomic coordination
By considering the arrangement of atoms relative to each other, their coordination numbers, interatomic distances, types of bonding, etc., it is possible to form a general view of the structures and alternative ways of visualizing them.
Close packing
The principles involved can be understood by considering the most efficient way of packing together equal-sized spheres and stacking close-packed atomic planes in three dimensions. For example, if plane A lies beneath plane B, there are two possible ways of placing an additional atom on top of layer B. If an additional layer were placed directly over plane A, this would give rise to the following series:
...ABABABAB...
This arrangement of atoms in a crystal structure is known as hexagonal close packing (hcp).
If, however, all three planes are staggered relative to each other and it is not until the fourth layer is positioned directly over plane A that the sequence is repeated, then the following sequence arises:
...ABCABCABC...
This type of structural arrangement is known as cubic close packing (ccp).
The unit cell of a ccp arrangement of atoms is the face-centered cubic (fcc) unit cell. This is not immediately obvious as the closely packed layers are parallel to the {111} planes of the fcc unit cell. There are four different orientations of the close-packed layers.
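The two stacking sequences can be sketched numerically. The snippet below is a simplified illustration (it places just one representative atom per layer, with lengths in units of the nearest-neighbour distance and the ideal layer spacing assumed):

```python
import numpy as np

# Close-packed layers can sit at one of three lateral positions, conventionally A, B, C.
# Offsets are in-plane shifts in units of the nearest-neighbour distance; the ideal
# vertical spacing between layers is sqrt(2/3) in the same units.
offsets = {
    "A": (0.0, 0.0),
    "B": (0.5, 0.5 / np.sqrt(3.0)),
    "C": (1.0, 1.0 / np.sqrt(3.0)),
}
dz = np.sqrt(2.0 / 3.0)

def stack(sequence):
    """One representative atom position per layer for a given stacking sequence."""
    return [(*offsets[label], round(i * dz, 3)) for i, label in enumerate(sequence)]

print("hcp:", stack("ABAB"))     # ...ABAB...   -> hexagonal close packing
print("ccp:", stack("ABCABC"))   # ...ABCABC... -> cubic close packing (fcc)
```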
APF and CN
One important characteristic of a crystalline structure is its atomic packing factor (APF). This is calculated by assuming that all the atoms are identical spheres, with a radius large enough that each sphere abuts on the next. The atomic packing factor is the proportion of space filled by these spheres, which can be worked out by calculating the total volume of the spheres and dividing by the volume of the cell as follows:
\mathrm{APF} = \frac{N_\mathrm{atoms}\cdot\frac{4}{3}\pi r^3}{V_\mathrm{cell}}
Another important characteristic of a crystalline structure is its coordination number (CN). This is the number of nearest neighbours of a central atom in the structure.
The APFs and CNs of the most common crystal structures are: diamond cubic 0.34 (CN 4), simple cubic 0.52 (CN 6), body-centered cubic 0.68 (CN 8), and face-centered cubic and hexagonal close-packed both 0.74 (CN 12).
The 74% packing efficiency of the FCC and HCP is the maximum density possible in unit cells constructed of spheres of only one size.
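As a quick check of these packing fractions, the following sketch (function and variable names are arbitrary) computes the APF for the simple cubic, bcc and fcc cells from their sphere-touching conditions:

```python
import math

def apf(atoms_per_cell, atomic_radius, cell_volume):
    """Atomic packing factor: total sphere volume divided by cell volume."""
    sphere_volume = (4.0 / 3.0) * math.pi * atomic_radius**3
    return atoms_per_cell * sphere_volume / cell_volume

a = 1.0  # lattice constant (arbitrary units; the APF is dimensionless)

# Simple cubic: 1 atom per cell, spheres touch along the cube edge (2r = a)
print("sc :", apf(1, a / 2, a**3))                        # ≈ 0.52
# Body-centered cubic: 2 atoms per cell, spheres touch along the body diagonal (4r = a*sqrt(3))
print("bcc:", apf(2, a * math.sqrt(3) / 4, a**3))         # ≈ 0.68
# Face-centered cubic: 4 atoms per cell, spheres touch along the face diagonal (4r = a*sqrt(2))
print("fcc:", apf(4, a * math.sqrt(2) / 4, a**3))         # ≈ 0.74
```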
Interstitial sites
Interstitial sites refer to the empty spaces in between the atoms in the crystal lattice. These spaces can be filled by oppositely charged ions to form multi-element structures. They can also be filled by impurity atoms or self-interstitials to form interstitial defects.
Defects and impurities
Real crystals feature defects or irregularities in the ideal arrangements described above and it is these defects that critically determine many of the electrical and mechanical properties of real materials.
Impurities
When one atom substitutes for one of the principal atomic components within the crystal structure, alteration in the electrical and thermal properties of the material may ensue. Impurities may also manifest as electron spin impurities in certain materials. Research on magnetic impurities demonstrates that small concentrations of an impurity can substantially alter certain properties, such as specific heat; for example, impurities in semiconducting ferromagnetic alloys may lead to different properties, as first predicted in the late 1960s.
Dislocations
Dislocations in a crystal lattice are line defects that are associated with local stress fields. Dislocations allow shear at lower stress than that needed for a perfect crystal structure. The local stress fields result in interactions between the dislocations which then result in strain hardening or cold working.
Grain boundaries
Grain boundaries are interfaces where crystals of different orientations meet. A grain boundary is a single-phase interface, with crystals on each side of the boundary being identical except in orientation. The term "crystallite boundary" is sometimes, though rarely, used. Grain boundary areas contain those atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower energy grain boundary.
Treating a grain boundary geometrically as an interface of a single crystal cut into two parts, one of which is rotated, we see that there are five variables required to define a grain boundary. The first two numbers come from the unit vector that specifies a rotation axis. The third number designates the angle of rotation of the grain. The final two numbers specify the plane of the grain boundary (or a unit vector that is normal to this plane).
Grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve strength, as described by the Hall–Petch relationship. Since grain boundaries are defects in the crystal structure they tend to decrease the electrical and thermal conductivity of the material. The high interfacial energy and relatively weak bonding in most grain boundaries often makes them preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep.
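As a rough illustration of the Hall–Petch relationship mentioned above, the sketch below assumes the common empirical form σy = σ0 + ky/√d; the numerical constants are placeholders, not measured values for any particular material:

```python
import math

def hall_petch_yield_strength(sigma_0, k_y, grain_diameter):
    """Empirical Hall-Petch relation: yield strength rises as grain size shrinks."""
    return sigma_0 + k_y / math.sqrt(grain_diameter)

# Placeholder constants (illustrative only): sigma_0 in MPa, k_y in MPa*sqrt(m)
sigma_0, k_y = 100.0, 0.5
for d in (100e-6, 10e-6, 1e-6):  # grain diameters in metres
    print(f"d = {d:.0e} m  ->  sigma_y ≈ {hall_petch_yield_strength(sigma_0, k_y, d):.0f} MPa")
```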
Grain boundaries are in general only a few nanometers wide. In common materials, crystallites are large enough that grain boundaries account for a small fraction of the material. However, very small grain sizes are achievable. In nanocrystalline solids, grain boundaries become a significant volume fraction of the material, with profound effects on such properties as diffusion and plasticity. In the limit of small crystallites, as the volume fraction of grain boundaries approaches 100%, the material ceases to have any crystalline character, and thus becomes an amorphous solid.
Prediction of structure
The difficulty of predicting stable crystal structures based on the knowledge of only the chemical composition has long been a stumbling block on the way to fully computational materials design. Now, with more powerful algorithms and high-performance computing, structures of medium complexity can be predicted using such approaches as evolutionary algorithms, random sampling, or metadynamics.
The crystal structures of simple ionic solids (e.g., NaCl or table salt) have long been rationalized in terms of Pauling's rules, first set out in 1929 by Linus Pauling, referred to by many since as the "father of the chemical bond". Pauling also considered the nature of the interatomic forces in metals, and concluded that about half of the five d-orbitals in the transition metals are involved in bonding, with the remaining nonbonding d-orbitals being responsible for the magnetic properties. Pauling was therefore able to correlate the number of d-orbitals in bond formation with the bond length, as well as with many of the physical properties of the substance. He subsequently introduced the metallic orbital, an extra orbital necessary to permit uninhibited resonance of valence bonds among various electronic structures.
In the resonating valence bond theory, the factors that determine the choice of one from among alternative crystal structures of a metal or intermetallic compound revolve around the energy of resonance of bonds among interatomic positions. It is clear that some modes of resonance would make larger contributions (be more mechanically stable than others), and that in particular a simple ratio of number of bonds to number of positions would be exceptional. The resulting principle is that a special stability is associated with the simplest fractional ratios or "bond numbers". The choice of structure and the value of the axial ratio (which determines the relative bond lengths) are thus a result of the effort of an atom to use its valency in the formation of stable bonds with simple fractional bond numbers.
After postulating a direct correlation between electron concentration and crystal structure in beta-phase alloys, Hume-Rothery analyzed the trends in melting points, compressibilities and bond lengths as a function of group number in the periodic table in order to establish a system of valencies of the transition elements in the metallic state. This treatment thus emphasized the increasing bond strength as a function of group number. The operation of directional forces was emphasized in one article on the relation between bond hybrids and the metallic structures. The resulting correlation between electronic and crystalline structures is summarized by a single parameter, the weight of the d-electrons per hybridized metallic orbital. The "d-weight" calculates out to 0.5, 0.7 and 0.9 for the fcc, hcp and bcc structures respectively. The relationship between d-electrons and crystal structure thus becomes apparent.
In crystal structure prediction and simulation, periodic boundary conditions are usually applied, since the system is imagined as being unlimited in all directions. Starting from a triclinic structure with no further symmetry assumed, the system may be driven to show additional symmetry properties by applying Newton's second law to the particles in the unit cell together with a recently developed dynamical equation for the system period vectors (the lattice parameters, including angles), even if the system is subject to external stress.
Polymorphism
Polymorphism is the occurrence of multiple crystalline forms of a material. It is found in many crystalline materials including polymers, minerals, and metals. According to Gibbs' rules of phase equilibria, these unique crystalline phases are dependent on intensive variables such as pressure and temperature. Polymorphism is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphs have different stabilities and may spontaneously and irreversibly transform from a metastable form (or thermodynamically unstable form) to the stable form at a particular temperature. They also exhibit different melting points, solubilities, and X-ray diffraction patterns.
One good example of this is the quartz form of silicon dioxide, or SiO2. In the vast majority of silicates, the Si atom shows tetrahedral coordination by 4 oxygens. All but one of the crystalline forms involve tetrahedral {SiO4} units linked together by shared vertices in different arrangements. In different minerals the tetrahedra show different degrees of networking and polymerization. For example, they occur singly, joined in pairs, in larger finite clusters including rings, in chains, double chains, sheets, and three-dimensional frameworks. The minerals are classified into groups based on these structures. In each of the 7 thermodynamically stable crystalline forms or polymorphs of crystalline silica, each vertex oxygen is shared between two tetrahedra, so effectively only 2 of the 4 oxygen atoms of each {SiO4} unit are counted as its own, yielding the net chemical formula for silica: SiO2.
Another example is elemental tin (Sn), which is malleable near ambient temperatures but is brittle when cooled. This change in mechanical properties is due to the existence of its two major allotropes, α- and β-tin. The two allotropes that are encountered at normal pressure and temperature, α-tin and β-tin, are more commonly known as gray tin and white tin respectively. Two more allotropes, γ and σ, exist at temperatures above 161 °C and pressures above several GPa. White tin is metallic, and is the stable crystalline form at or above room temperature. Below 13.2 °C, tin exists in the gray form, which has a diamond cubic crystal structure, similar to diamond, silicon or germanium. Gray tin has no metallic properties at all, is a dull gray powdery material, and has few uses, other than a few specialized semiconductor applications. Although the α–β transformation temperature of tin is nominally 13.2 °C, impurities (e.g. Al, Zn, etc.) lower the transition temperature well below 0 °C, and upon addition of Sb or Bi the transformation may not occur at all.
Physical properties
Twenty of the 32 crystal classes are piezoelectric, and crystals belonging to one of these classes (point groups) display piezoelectricity. All piezoelectric classes lack inversion symmetry. Any material develops a dielectric polarization when an electric field is applied, but a substance that has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes.
There are a few crystal structures, notably the perovskite structure, which exhibit ferroelectric behavior. This is analogous to ferromagnetism, in that, in the absence of an electric field during production, the ferroelectric crystal does not exhibit a polarization. Upon the application of an electric field of sufficient magnitude, the crystal becomes permanently polarized. This polarization can be reversed by a sufficiently large counter-charge, in the same way that a ferromagnet can be reversed. However, although they are called ferroelectrics, the effect is due to the crystal structure (not the presence of a ferrous metal).
Some examples of crystal structures
See also
Brillouin zone – a primitive cell in the reciprocal space lattice of a crystal
Crystal engineering
Crystal growth – a major stage of a crystallization process
Crystallographic database
Fractional coordinates
Frank–Kasper phases
Hermann–Mauguin notation – a notation to represent symmetry in point groups, plane groups and space groups
Laser-heated pedestal growth – a crystal growth technique
Liquid crystal – a state of matter with properties of both conventional liquids and crystals
Patterson function – a function used to solve the phase problem in X-ray crystallography
Periodic table (crystal structure) – (for elements that are solid at standard temperature and pressure) gives the crystalline structure of the most thermodynamically stable form(s) in those conditions. In all other cases the structure given is for the element at its melting point.
Primitive cell – a repeating unit formed by the vectors spanning the points of a lattice
Seed crystal – a small piece of a single crystal used to initiate growth of a larger crystal
Wigner–Seitz cell – a primitive cell of a crystal lattice with Voronoi decomposition applied
References
External links
The internal structure of crystals... Crystallography for beginners
Different types of crystal structure
Appendix A from the manual for Atoms, software for XAFS
Intro to Minerals: Crystal Class and System
Introduction to Crystallography and Mineral Crystal Systems
Crystal planes and Miller indices
Interactive 3D Crystal models
Specific Crystal 3D models
Crystallography Open Database (with more than 140,000 crystal structures)
Chemical properties
Crystallography
Materials science
Crystals
Structure | Crystal structure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,158 | [
"Applied and interdisciplinary physics",
"Materials science",
"Crystallography",
"Crystals",
"Condensed matter physics",
"nan"
] |
58,702 | https://en.wikipedia.org/wiki/John%20Glenn | John Herschel Glenn Jr. (July 18, 1921 – December 8, 2016) was an American Marine Corps aviator, astronaut, businessman, and politician. He was the third American in space and the first American to orbit the Earth, circling it three times in 1962. Following his retirement from NASA, he served from 1974 to 1999 as a U.S. Senator from Ohio; in 1998, he flew into space again at the age of 77.
Before joining NASA, Glenn was a distinguished fighter pilot in World War II, the Chinese Civil War, and the Korean War. He shot down three MiG-15s and was awarded six Distinguished Flying Crosses and eighteen Air Medals. In 1957, he made the first supersonic transcontinental flight across the United States. His on-board camera took the first continuous, panoramic photograph of the United States.
Glenn was one of the Mercury Seven military test pilots selected in 1959 by NASA as the nation's first astronauts. On February 20, 1962, Glenn flew the Friendship 7 mission, becoming the first American to orbit the Earth. He was the third American, and the fifth person, to be in space. He received the NASA Distinguished Service Medal in 1962, the Congressional Space Medal of Honor in 1978, was inducted into the U.S. Astronaut Hall of Fame in 1990, and received the Presidential Medal of Freedom in 2012.
Glenn resigned from NASA in January 1964. A member of the Democratic Party, Glenn was first elected to the Senate in 1974 and served for 24 years until January 1999. In 1998, at age 77, Glenn flew on Space Shuttle Discovery STS-95 mission, making him the oldest person to enter Earth orbit, the only person to fly in both the Mercury and the Space Shuttle programs, and the first Member of Congress to visit space since Congressman Bill Nelson (D-Fla.) in 1986. Glenn, both the oldest and the last surviving member of the Mercury Seven, died at the age of 95 on December 8, 2016.
Early life and education
John Herschel Glenn Jr. was born on July 18, 1921, in Cambridge, Ohio, the son of John Herschel Glenn Sr. (1895–1966), who worked for a plumbing firm, and Clara Teresa Glenn (1897–1971), a teacher. His parents had married shortly before John Sr., a member of the American Expeditionary Force, left for the Western Front during World War I. The family moved to New Concord, Ohio, soon after his birth, and his father started his own business, the Glenn Plumbing Company. Glenn Jr. was only a toddler when he met Anna Margaret (Annie) Castor, whom he would later marry. The two would not be able to recall a time when they did not know each other. He first flew in an airplane with his father when he was eight years old. He became fascinated by flight and built model airplanes from balsa wood kits. Along with his adopted sister Jean, he attended New Concord Elementary School. He washed cars and sold rhubarb to earn money to buy a bicycle, after which he took a job delivering The Columbus Dispatch newspaper. He was a member of the Ohio Rangers, an organization similar to the Cub Scouts. His boyhood home in New Concord has been restored as a historic house museum and education center.
Glenn attended New Concord High School, where he played on the varsity football team as a center and linebacker. He also made the varsity basketball and tennis teams and was involved with Hi-Y, a junior branch of the YMCA. After graduating in 1939, Glenn entered Muskingum College (now Muskingum University), where he studied chemistry, joined the Stag Club fraternity, and played on the football team. Annie majored in music with minors in secretarial studies and physical education and competed on the swimming and volleyball teams, graduating in 1942. Glenn earned a private pilot license and a physics course credit for free through the Civilian Pilot Training Program in 1941. He did not complete his senior year in residence or take a proficiency exam, both required by the school for its Bachelor of Science degree.
Military career
World War II
When the United States entered World War II, Glenn quit college to enlist in the U.S. Army Air Corps. He was not called to duty by the army and enlisted as a U.S. Navy aviation cadet in March 1942. Glenn attended the University of Iowa in Iowa City for pre-flight training and made his first solo flight in a military aircraft at Naval Air Station Olathe in Kansas, where he went for primary training. During advanced training at Naval Air Station Corpus Christi in Texas, he accepted an offer to transfer to the U.S. Marine Corps. Having completed his flight training in March 1943, Glenn was commissioned as a second lieutenant. Glenn married Annie in a Presbyterian ceremony at College Drive Church in New Concord on April 6, 1943. After advanced training at Camp Kearny, California, he was assigned to Marine Squadron VMJ-353, which flew R4D transport planes from there.
The fighter squadron VMO-155 was also at Camp Kearny flying the Grumman F4F Wildcat. Glenn approached the squadron's commander, Major J. P. Haines, who suggested that he could put in for a transfer. This was approved, and Glenn was posted to VMO-155 on July 2, 1943, two days before the squadron moved to Marine Corps Air Station El Centro in California. The Wildcat was obsolete by this time, and VMO-155 re-equipped with the F4U Corsair in September 1943. He was promoted to first lieutenant in October 1943, and shipped out to Hawaii in January 1944. VMO-155 became part of the garrison on Midway Atoll on February 21, then moved to the Marshall Islands in June 1944 and flew 57 combat missions in the area. He received two Distinguished Flying Crosses and ten Air Medals.
At the end of his one-year tour of duty in February 1945, Glenn was assigned to Marine Corps Air Station Cherry Point in North Carolina, then to Naval Air Station Patuxent River in Maryland. He was promoted to captain in July 1945 and ordered back to Cherry Point. There, he joined VMF-913, another Corsair squadron, and learned that he had qualified for a regular commission. In March 1946, he was assigned to Marine Corps Air Station El Toro in southern California. He volunteered for service with the occupation in North China, believing it would be a short tour. He joined VMF-218 (another Corsair squadron), which was based at Nanyuan Field near Beijing, in December 1946, and flew patrol missions until VMF-218 was transferred to Guam in March 1947.
In December 1948, Glenn was re-posted to NAS Corpus Christi as a student at the Naval School of All-Weather Flight before becoming a flight instructor. In July 1951, he traveled to the Amphibious Warfare School at Marine Corps Base Quantico in northern Virginia for a six-month course. He then joined the staff of the commandant of the Marine Corps Schools. He maintained his proficiency (and flight pay) by flying on weekends, although he was only allowed four hours of flying time per month. He was promoted to major in July 1952. Glenn received the World War II Victory Medal, American Campaign Medal, Asiatic-Pacific Campaign Medal (with one star), Navy Occupation Service Medal (with Asia clasp), and the China Service Medal for his efforts.
Korean War
Glenn moved his family back to New Concord during a short period of leave, and after two and a half months of jet training at Cherry Point, was ordered to South Korea in October 1952, late in the Korean War. Before he set out for Korea in February 1953, he applied to fly the F-86 Sabre jet fighter-interceptor through an inter-service exchange position with the U.S. Air Force (USAF). In preparation, he arranged with Colonel Leon W. Gray to check out the F-86 at Otis Air Force Base in Massachusetts. Glenn reported to K-3, an airbase in South Korea, on February 3, 1953, and was assigned to be the operations officer for VMF-311, one of two Marine fighter squadrons there while he waited for the exchange assignment to go through. VMF-311 was equipped with the F9F Panther jet fighter-bomber. Glenn's first mission was a reconnaissance flight on February 26. He flew 63 combat missions in Korea with VMF-311 and was nicknamed "Magnet Ass" because of the number of flak hits he took on low-level close air support missions; twice, he returned to base with over 250 holes in his plane. He flew for a time with Marine reservist Ted Williams (then in the midst of a Hall of Fame baseball career with the Boston Red Sox) as his wingman. Williams later said about Glenn "Absolutely fearless. The best I ever saw. It was an honor to fly with him." Glenn also flew with future major general Ralph H. Spanjer.
In June 1953, Glenn reported for duty with the USAF's 25th Fighter-Interceptor Squadron and flew 27 combat missions in the F-86, a much faster aircraft than the F9F Panther, patrolling MiG Alley. Combat with a MiG-15, which was faster and better armed still, was regarded as a rite of passage for a fighter pilot. On the Air Force buses that ferried the pilots out to the airfields before dawn, pilots who had engaged a MiG could sit while those who had not had to stand. Glenn later wrote, "Since the days of the Lafayette Escadrille during World War I, pilots have viewed air-to-air combat as the ultimate test not only of their machines but of their own personal determination and flying skills. I was no exception." He hoped to become the second Marine jet flying ace after John F. Bolt. Glenn's USAF squadron mates painted "MiG Mad Marine" on his aircraft when he complained about there not being any MiGs to shoot at. He shot down his first MiG in a dogfight on July 12, 1953, downed a second one on July 19, and a third on July 22 when four Sabres shot down three MiGs. These were the final air victories of the war, which ended with an armistice five days later. For his service in Korea, Glenn received two more Distinguished Flying Crosses and eight more Air Medals. Glenn also received the Korean Service Medal (with two campaign stars), United Nations Korea Medal, Marine Corps Expeditionary Medal, National Defense Service Medal (with one star), and the Korean War Service Medal.
Test pilot
With combat experience as a fighter pilot, Glenn applied for training as a test pilot while still in Korea. He reported to the U.S. Naval Test Pilot School at NAS Patuxent River in Maryland in January 1954 and graduated in July. At Patuxent River, future Medal of Honor recipient James Stockdale tutored him in physics and math. Glenn's first flight test assignment, testing the FJ-3 Fury, nearly killed him when its cockpit depressurized and its oxygen system failed. He also tested the armament of aircraft such as the Vought F7U Cutlass and F8U Crusader. From November 1956 to April 1959, he was assigned to the Fighter Design Branch of the Navy Bureau of Aeronautics in Washington, D.C., and attended the University of Maryland.
On July 16, 1957, Glenn made the first supersonic transcontinental flight. Disliking his Bureau of Aeronautics desk job, he devised the flight as both a way to keep flying and publicly demonstrate the F8U Crusader. At that time, the transcontinental speed record, held by an Air Force Republic F-84 Thunderjet, was 3 hours 45 minutes, and Glenn calculated that the F8U Crusader could do it faster. Because its air speed was faster than that of a .45 caliber bullet, Glenn called the flight Project Bullet. He flew an F8U Crusader from Los Alamitos, California, to Floyd Bennett Field in New York City in 3 hours, 23 minutes and 8.3 seconds, averaging supersonic speed despite slowing for three in-flight refuelings. His on-board camera took the first continuous, transcontinental panoramic photograph of the United States. He received his fifth Distinguished Flying Cross for this mission, and was promoted to lieutenant colonel on April 1, 1959. The cross-country flight made Glenn a minor celebrity. A profile appeared in The New York Times, and he appeared on the television show Name That Tune. Glenn now had nearly 9,000 hours of flying time, including about 3,000 hours in jets, but knew that at the age of 36, he was now likely too old to continue to fly.
NASA career
Selection
On October 4, 1957, the Soviet Union launched Sputnik 1, the first artificial satellite. This damaged American confidence in its technological superiority, creating a wave of anxiety known as the Sputnik crisis. In response, President Dwight D. Eisenhower launched the Space Race. The National Aeronautics and Space Administration (NASA) was established on October 1, 1958, as a civilian agency to develop space technology. One of its first initiatives was announced on December 17, 1958. This was Project Mercury, which aimed to launch a man into Earth orbit, return him safely to the Earth, and evaluate his capabilities in space.
His Bureau of Aeronautics job gave Glenn access to new spaceflight news, such as the X-15 rocket plane. While on duty at Patuxent and in Washington, Glenn read everything he could find about space. His office was asked to send a test pilot to Langley Air Force Base in Virginia to make runs on a spaceflight simulator, as part of research by the newly formed NASA into re-entry vehicle shapes. The pilot would also be sent to the Naval Air Development Center in Johnsville, Pennsylvania, and would be subjected to high G-forces in a centrifuge for comparison with data collected in the simulator. His request for the position was granted, and he spent several days at Langley and a week in Johnsville for the testing. As one of the very few pilots to have done such testing, Glenn had become an expert on the subject. NASA asked military-service members to participate in planning the mockup of a spacecraft. Having participated in the research at Langley and Johnsville, he was sent to the McDonnell plant in St. Louis as a service adviser to NASA's spacecraft mockup board. Envisioning himself in the vehicle, Glenn stated that the passenger would have to be able to control the spacecraft. McDonnell engineers told him of the importance of lightening the vehicle as much as possible, so Glenn began exercising to lose the excess weight he estimated he was carrying.
Eisenhower directed NASA to recruit its first astronauts from military test pilots. Of 508 graduates of test pilot schools, 110 matched the minimum standards. Marine Corps pilots were mistakenly omitted at first; two were quickly found, including Glenn. The candidates had to be younger than 40, possess a bachelor's degree or equivalent, and meet a maximum height requirement. Only the height requirement was strictly enforced, owing to the size of the Project Mercury spacecraft. This was fortunate for Glenn, who barely met the requirements, as he was near the age cutoff and lacked a science-based degree, but had taken more classes since leaving college than needed for graduation. Glenn was otherwise so outstanding a candidate that Colonel Jake Dill, his commanding officer at test pilot school, visited NASA headquarters to insist that Glenn would be the perfect astronaut.
For an interview with Charles Donlan, associate director of Project Mercury, Glenn brought the results from the centrifuge to show that he had done well on a test that perhaps no other candidate had taken. Donlan also noticed that Glenn stayed late at night to study schematics of the Mercury spacecraft. He was among the 32 of the first 69 candidates that passed the first step of the evaluation and were interested in continuing, sufficient for the astronaut corps NASA wanted. On February 27 a grueling series of physical and psychological tests began at the Lovelace Clinic and the Wright Aerospace Medical Laboratory.
Because of his Bureau of Aeronautics job, Glenn was already participating in Project Mercury; while other candidates were at Wright, on March 17 he and most of those who would choose the astronauts visited the McDonnell plant building the spacecraft to inspect its progress and make changes. While Glenn had not scored the highest on all the tests, a member of the selection committee recalled how he had impressed everyone with his "strength of personality and his dedication". On April 6 Donlan called Glenn to offer him a position at Project Mercury, one of seven candidates chosen as astronauts. Glenn was pleased while Annie was supportive but wary of the danger; during his three years at Patuxent, 12 test pilots had died.
The identities of the seven were announced at a press conference at Dolley Madison House in Washington, D.C., on April 9, 1959: Scott Carpenter, Gordon Cooper, Glenn, Gus Grissom, Wally Schirra, Alan Shepard, and Deke Slayton. In The Right Stuff, Tom Wolfe wrote that Glenn "came out of it as tops among seven very fair-haired boys. He had the hottest record as a pilot, he was the most quotable, the most photogenic, and the lone Marine." The magnitude of the challenge ahead of them was made clear a few weeks later, on the night of May 18, 1959, when the seven astronauts gathered at Cape Canaveral to watch their first rocket launch, of an SM-65D Atlas, which was similar to the one that was to carry them into orbit. A few minutes after liftoff, it exploded spectacularly, lighting up the night sky. The astronauts were stunned. Shepard turned to Glenn and said: "Well, I'm glad they got that out of the way."
Glenn remained an officer in the Marine Corps after his selection, and was assigned to the NASA Space Task Group at Langley Research Center in Hampton, Virginia. The task force moved to Houston, Texas, in 1962, and became part of the NASA Manned Spacecraft Center. A portion of the astronauts' training was in the classroom, where they learned space science. The group also received hands-on training, which included scuba diving and work in simulators. Astronauts secured an additional role in the spaceflight program: to provide pilot input in design. The astronauts divided the various tasks between them. Glenn's specialization was cockpit layout design and control functioning for the Mercury and early Apollo programs. He pressed the other astronauts to set a moral example, living up to the squeaky-clean image of them that had been portrayed by Life magazine, a position that was not popular with the other astronauts.
Friendship 7 flight
Glenn was the backup pilot for Shepard and Grissom on the first two crewed Project Mercury flights, the sub-orbital missions Mercury-Redstone 3 and Mercury-Redstone 4. Glenn was selected for Mercury-Atlas 6, NASA's first crewed orbital flight, with Carpenter as his backup. Putting a man in orbit would achieve one of Project Mercury's most important goals. Shepard and Grissom had named their spacecraft Freedom 7 and Liberty Bell 7. The numeral 7 had originally been the production number of Shepard's spacecraft, but had come to represent the Mercury 7. Glenn named his spacecraft, number 13, Friendship 7, and had the name hand-painted on the side like the one on his F-86 had been. Glenn and Carpenter completed their training for the mission in January 1962, but postponement of the launch allowed them to continue rehearsing. Glenn spent 25 hours and 25 minutes in the spacecraft performing hangar and altitude tests, and 59 hours and 45 minutes in the simulator. He flew 70 simulated missions and reacted to 189 simulated system failures.
After a long series of delays, Friendship 7 lifted off from Cape Canaveral Air Force Station on February 20, 1962. During the countdown, there were eleven delays due to equipment malfunctions and improvements and the weather. During Glenn's first orbit, a failure of the automatic-control system was detected. This forced Glenn to operate in manual mode for the second and third orbits, and for re-entry. Later in the flight, telemetry indicated that the heat shield had loosened. If this reading had been accurate, Glenn and his spacecraft would have burned up on re-entry. After a lengthy discussion on how to deal with this problem, ground controllers decided that leaving the retrorocket pack in place might help keep the loose heat shield in place. They relayed these instructions to Glenn, but did not tell him the heat shield was possibly loose; although confused at this order, he complied. The retrorocket pack broke up into large chunks of flaming debris that flew past the window of his capsule during re-entry; Glenn thought this might have been the heat shield. He told an interviewer, "Fortunately it was the rocket pack—or I wouldn't be answering these questions." After the flight, it was determined that the heat shield was not loose; the sensor was faulty due to an improperly rigged switch.
Friendship 7 safely splashed down southeast of Cape Canaveral after Glenn's 4-hour, 55-minute flight. He carried a note on the flight which read, "I am a stranger. I come in peace. Take me to your leader and there will be a massive reward for you in eternity" in several languages, in case he landed near southern Pacific Ocean islands. The original procedure called for Glenn to exit through the top hatch, but he was uncomfortably warm and decided that egress through the side hatch would be faster. During the flight, he endured up to 7.8 g of acceleration. The flight placed Friendship 7 into a low Earth orbit. Unlike the crewed missions of the Soviet Union's Vostok programme, Glenn remained within the spacecraft during landing. The flight made Glenn the first American to orbit the Earth, the third American in space, and the fifth human in space. The mission, which Glenn called the "best day of his life", renewed U.S. confidence. His flight occurred while the U.S. and the Soviet Union were embroiled in the Cold War and competing in the Space Race.
As the first American in orbit, Glenn became a national hero, met President John F. Kennedy, and received a ticker-tape parade in New York reminiscent of those honoring Charles Lindbergh and other heroes. He became "so valuable to the nation as an iconic figure", according to NASA administrator Charles Bolden, that Kennedy would not "risk putting him back in space again." Glenn's fame and political potential were noted by the Kennedys, and he became a friend of the Kennedy family. On February 23, 1962, President Kennedy gave him the NASA Distinguished Service Medal for his Friendship 7 flight. Upon receiving the award, Glenn said, "I would like to consider I was a figurehead for this whole big, tremendous effort, and I am very proud of the medal I have on my lapel." Glenn also received his sixth Distinguished Flying Cross for his efforts. He was among the first group of astronauts to be awarded the Congressional Space Medal of Honor. The award was presented to him by President Jimmy Carter in 1978. After his 1962 spaceflight, NASA proposed giving Glenn the Medal of Honor, but Glenn did not think that would be appropriate. His military and space awards were stolen from his home in 1978, and he remarked that he would keep this medal in a safe.
Comments about women in space
In 1962, NASA contemplated recruiting women to the astronaut corps via the Mercury 13, but Glenn gave a speech before the House Space Committee detailing his opposition to sending women into space.
In May 1965, after he left NASA, Glenn was quoted in the Miami Herald as calling on NASA to "offer a serious chance for space women" as scientist astronauts.
NASA had no official policy prohibiting women, but the requirement that astronauts had to be test pilots effectively excluded them. NASA dropped this requirement in 1965, but did not select any women as astronauts until 1978, when six women were selected, none as pilots. In June 1963, the Soviet Union launched a female cosmonaut, Valentina Tereshkova, into orbit. After Tereshkova, no women of any nationality flew in space again until August 1982, when the Soviet Union launched pilot-cosmonaut Svetlana Savitskaya. During the late 1970s, Glenn reportedly supported Space Shuttle Mission Specialist Judith Resnik in her career.
Political campaigning
1964 Senate campaign
At 42, Glenn was the oldest member of the astronaut corps and would likely be close to 50 by the time the lunar landings took place. During Glenn's training, NASA psychologists determined that he was the astronaut best suited for public life. Attorney General Robert F. Kennedy suggested to Glenn and his wife in December 1962 that he run for the 1964 United States Senate election in Ohio, challenging aging incumbent Stephen M. Young (1889–1984) in the Democratic primary election. As it seemed unlikely that he would be selected for Project Apollo missions, he resigned from NASA on January 16, 1964, and announced his Democratic Party candidacy for the U.S. Senate from his home state of Ohio the following day, becoming the first astronaut-politician. Glenn was still a Marine and had plenty of unused leave time, so he elected to use it while he waited for his retirement papers to go through.
To avoid partisanship, NASA quickly closed Glenn's agency office. The New York Times reported that while many Ohioans were skeptical of Glenn's qualifications for the Senate, he could defeat Young in the Democratic primary; whether he could defeat Representative Robert Taft Jr., the likely Republican candidate, in the general election was much less clear. In late February he was hospitalized for a concussion sustained in a fall against a bathtub while attempting to fix a mirror in a hotel room; an inner-ear injury from the accident left him unable to campaign. Both his wife and Scott Carpenter campaigned on his behalf during February and March, but doctors gave Glenn a recovery time of one year. Glenn did not want to win solely because of his astronaut fame, so he dropped out of the race on March 30.
Glenn was still on leave from the Marine Corps, and he withdrew his papers to retire so he could keep a salary and health benefits. Glenn was on the list of potential candidates to be promoted to full colonel, but he notified the Commandant of the Marine Corps of his intention to retire so another Marine could receive the promotion. President Johnson later decided to promote Glenn to full colonel status without taking someone else's slot. He retired as a colonel on January 1, 1965. Glenn was approached by RC Cola to join their public relations department, but Glenn declined it because he wanted to be involved with a business and not just the face of it. The company revised their offer and offered Glenn a vice president of corporate development position, as well as a place on the board of directors. The company later expanded Glenn's role, promoting him to president of Royal Crown International. A Senate seat was open in 1968, and Glenn was asked about his current political aspirations. He said he had no current plan, and "Let's talk about it one of these days." Glenn also said that a 1970 Senate run was a possibility.
In 1973, he and a friend bought a Holiday Inn near Disney World. The success of Disney World expanded to their business, and the pair built three more hotels. One of Glenn's business partners was Henri Landwirth, a Holocaust survivor who became his best friend. He remembered learning about Landwirth's background: "Henri doesn't talk about it much. It was years before he spoke about it with me and then only because of an accident. We were down in Florida during the space program. Everyone was wearing short-sleeved Ban-Lon shirts—everyone but Henri. Then one day I saw Henri at the pool and noticed the number on his arm. I told Henri that if it were me I'd wear that number like a medal with a spotlight on it."
1970 Senate campaign
Glenn remained close to the Kennedy family, and campaigned for Robert F. Kennedy during his 1968 presidential campaign. In 1968, Glenn was in Kennedy's hotel suite when Kennedy heard he had won California. Glenn was supposed to go with him to celebrate but decided not to as there would be many people there. Kennedy went downstairs to make his victory speech and was assassinated. Glenn and Annie went with Kennedy to the hospital, and the next morning took Kennedy's children home to Virginia. Glenn was later a pallbearer at the funeral in New York.
In 1970, Young did not seek reelection and the seat was open. Businessman Howard Metzenbaum, Young's former campaign manager, was backed by the Ohio Democratic party and major labor unions, which provided him a significant funding advantage over Glenn. Glenn's camp persuaded him to be thrifty during the primary so he could save money for the general election. By the end of the primary campaign, Metzenbaum was spending four times as much as Glenn. Glenn was defeated in the Democratic primary by Metzenbaum (who received 51 percent of the vote to Glenn's 49 percent). Some prominent Democrats said Glenn was a "hapless political rube", and one newspaper called him "the ultimate square".
Metzenbaum lost the general election to Robert Taft Jr. Glenn remained active in the political scene following his defeat. Governor John J. Gilligan appointed Glenn to be the chairman of the Citizens Task Force on Environmental Protection in 1970. The task force was created to survey environmental problems in the state and released a report in 1971 detailing the issues. The meetings and the final report of the task force were major contributors to the formation of Ohio's Environmental Protection Agency.
1974 Senate campaign
In 1973, President Nixon ordered Attorney General Elliot Richardson to fire Watergate special prosecutor Archibald Cox. Richardson refused and resigned in protest, triggering the Saturday Night Massacre. Ohio Senator William Saxbe, elected in 1968, was appointed attorney general. Both Glenn and Metzenbaum sought the vacated seat, which was to be filled by Governor John Gilligan. Gilligan was planning on a presidential or vice-presidential run in the near future, and offered Glenn the lieutenant governor position, with the thought that Glenn would ascend to governor when Gilligan was elected to a higher position. The Ohio Democratic party backed this solution to avoid what was expected to be a divisive primary battle between Metzenbaum and Glenn. Glenn declined, denouncing their attempts as "bossism" and "blackmail". Glenn's counteroffer suggested that Gilligan fill the position with someone other than Metzenbaum or Glenn so neither would have an advantage going into the 1974 election. Metzenbaum's campaign agreed to back Gilligan in his governor re-election campaign, and Metzenbaum was subsequently appointed in January 1974 to the vacated seat. At the end of Saxbe's term, Glenn challenged Metzenbaum in the primary for the Ohio Senate seat.
Glenn's campaign changed their strategy after the 1970 election. In 1970, Glenn won most of the counties in Ohio but lost in those with larger populations. The campaign changed its focus, and worked primarily in the large counties. In the primary, Metzenbaum contrasted his strong business background with Glenn's military and astronaut credentials and said that his opponent had "never held a payroll". Glenn's reply became known as the "Gold Star Mothers" speech. He told Metzenbaum to go to a veterans' hospital and "look those men with mangled bodies in the eyes and tell them they didn't hold a job. You go with me to any Gold Star mother and you look her in the eye and tell her that her son did not hold a job". He defeated Metzenbaum 54 to 46 percent before defeating Ralph Perk (the Republican mayor of Cleveland) in the general election, beginning a Senate career which would continue until 1999.
1976 vice presidential consideration
After Jimmy Carter became the presumptive Democratic nominee for president in the 1976 election, Glenn was reported to be in consideration to be Carter's running mate because he was a senator in a pivotal state and for his fame and straightforwardness. Some thought he was too much like Carter, partially because they both had military backgrounds, and he did not have enough experience to become president. Barbara Jordan was the first keynote speaker at the Democratic National Convention. Her speech electrified the crowd and was filled with applause and standing ovations. Glenn's keynote address immediately followed Jordan's, and he failed to impress the delegates. Walter Cronkite described it as "dull", and other delegates complained that he was hard to hear. Carter called Glenn to inform him the nomination was going to another candidate and later nominated the veteran politician Walter Mondale. It was also reported that Carter's wife thought Annie Glenn, who had a stutter, would hurt the campaign.
1980 Senate campaign
In his first reelection campaign, Glenn ran opposed in the primary for the 1980 Senate election. His opponents, engineer Francis Hunstiger and ex-teacher Frances Waterman, were not well-known and poorly funded. His opponents spent only a few thousand dollars on the campaign, while Glenn spent $700,000. Reporters noted that for a race he was likely to win, Glenn was spending a lot of time and money on the campaign. His chief strategist responded to the remarks saying, "It's the way he does things. He takes nothing for granted." Glenn won the primary by a landslide, with 934,230 of the 1.09 million votes.
Jim Betts, who ran unopposed in the Republican primary, challenged Glenn for his seat. Betts publicly stated that Glenn's policies were part of the reason for inflation increases and a lower standard of living. Betts' campaign also attacked Glenn's voting record, saying that he often voted for spending increases. Glenn's campaign responded that he had been a part of over 3,000 roll calls and "any one of them could be taken out of context". Glenn was projected to win the race easily, and won by the largest margin ever for an Ohio senator, defeating Betts by over 40 percent.
1984 presidential campaign
Glenn was unhappy with how divided the country was, and thought labels like conservative and liberal increased the divide. He considered himself a centrist. Glenn thought a more centrist president would help unite the country. Glenn believed his experience as a senator from Ohio was ideal because of the state's diversity. Glenn thought that Ted Kennedy could win the election, but after Kennedy's announcement in late 1982 that he would not seek the presidency, Glenn thought he had a much better chance of winning. He hired a media consultant to help him with his speaking style.
Glenn announced his candidacy for president on April 21, 1983, in the John Glenn High School gymnasium. He started out the campaign out-raising the front-runner, Mondale. He also polled the highest of any Democrat against Reagan. During the fall of 1983, The Right Stuff, a film about the Mercury Seven astronauts, was released. Reviewers saw Ed Harris' portrayal of Glenn as heroic and his staff began to publicize the film to the press. One reviewer said that "Harris' depiction helped transform Glenn from a history-book figure into a likable, thoroughly adoration-worthy Hollywood hero," turning him into a big-screen icon. Others considered the movie to be damaging to Glenn's campaign, serving as only a reminder that Glenn's most significant achievement had occurred decades earlier. Glenn's autobiography said the film "had a chilling effect on the campaign."
William White managed Glenn's campaign until his replacement by Jerry Vento on January 26, 1984. Glenn's campaign decided to forgo the traditional campaigning in early caucuses and primaries and focus on building campaign offices nationwide. He opened offices in 43 states by January 1984. Glenn's campaign spent a significant amount of money on television advertising in Iowa, and Glenn chose not to attend an Iowan debate on farm issues. He finished fifth in the Iowa caucus and went on to lose New Hampshire. Glenn's campaign continued into Super Tuesday, and he lost there as well. He announced his withdrawal from the race on March 16, 1984. After Mondale defeated him for the nomination, Glenn carried $3 million in campaign debt for over 20 years before receiving a reprieve from the Federal Election Commission.
1986 Senate campaign
Glenn's Senate seat was challenged by Thomas Kindness. Kindness was unopposed in his primary, while Glenn faced Lyndon LaRouche supporter Don Scott. LaRouche supporters had been recently elected in Illinois, but the Ohio Democratic Party chairman did not think it was likely they would see the same success in Ohio. LaRouche was known for his fringe theories, such as the queen of England being a drug dealer. Kindness spoke to his supporters and warned them against LaRouche candidates. He issued a statement telling voters to reject LaRouche candidates in both Republican and Democratic primaries. Glenn won the primary contest with 88% of the vote.
With the primary complete, Glenn began his campaign against Kindness. Glenn believed he and other Democrats were the targets of a negative campaign thought up by the GOP strategists in Washington. Kindness focused on Glenn's campaign debts for his failed presidential run and the fact he stopped payments on it while campaigning for the Senate seat. After winning the race with 62% of the vote, Glenn remarked, "We proved that in 1986, they couldn't kill Glenn with Kindness."
1992 Senate campaign
In 1992, Republican Mike DeWine won the Republican primary and challenged Glenn in the Senate election. Glenn ran unopposed in the primary. DeWine's campaign focused on the need for change and for term limits for senators. This would be Glenn's fourth term as senator. DeWine also criticized Glenn's campaign debts, using a bunny dressed as an astronaut beating a drum, with an announcer saying, "He just keeps owing and owing and owing", a play on the Energizer Bunny. During a debate, Glenn asked DeWine to stop his negative campaign ads, saying "This has been the most negative campaign". DeWine responded that he would if Glenn would disclose how he spent the money he received from Charles Keating, fallout from Glenn being named one of the Keating Five. Glenn won the Senate seat, with 2.4 million votes to DeWine's 2 million votes. It was DeWine's first-ever campaign loss. DeWine later worked on the intelligence committee with Glenn and watched his second launch into space.
Senate career
Committee on Governmental Affairs
Glenn requested to be assigned to two committees during his first year as senator: the Government Operations Committee (later known as the Committee on Governmental Affairs), and the Foreign Relations Committee. He was immediately assigned to the Government Operations Committee and waited for a seat on the Foreign Relations Committee. In 1977, Glenn wanted to chair the Energy, Nuclear Proliferation, and Federal Services Subcommittee of the Governmental Affairs Committee. Abraham Ribicoff, chair of the Governmental Affairs Committee, said he could chair the subcommittee if he also chaired the less popular Federal Services Subcommittee, which was in charge of the U.S. Postal Service. Previous chairs of the Federal Services Subcommittee had lost elections in part because of negative campaigns associated with the poorly regarded mail service to the chairmen, but Glenn accepted the offer and became the chair of both subcommittees. One of his goals as a new senator was developing environmental policies. Glenn introduced bills on energy policy to try to counter the energy crisis in the 70s. Glenn also introduced legislation promoting nuclear non-proliferation and was the chief author of the Nuclear Non-Proliferation Act of 1978, the first of six major pieces of legislation that he produced on the subject.
Glenn chaired the Committee on Governmental Affairs from 1987 to 1995. It was in this role that he discovered safety and environmental problems with the nation's nuclear weapons facilities. Glenn was made aware of the problem at the Fernald Feed Materials Production Center near Cincinnati and soon found that it affected sites across the nation. Glenn requested investigations from the General Accounting Office of Congress and held several hearings on the issue. He also released a report, known as the Glenn Report, on the potential costs of hazardous waste cleanup at former nuclear weapons manufacturing facilities. He spent the remainder of his Senate career acquiring funding to clean up the nuclear waste left at the facilities.
Glenn also focused on reducing government waste. He created legislation to mandate chief financial officers (CFOs) for large government agencies and wrote a bill to add offices of inspector general to federal agencies to help find waste and fraud. He also created legislation intended to prevent the federal government from imposing regulations on local governments without providing funding. Glenn founded the Great Lakes Task Force, which helped protect the environment of the Great Lakes.
In 1995, Glenn became the ranking minority member of the Committee on Governmental Affairs. When the committee investigated fundraising abuses from the 1996 election cycle, Glenn disputed its focus on illegal Chinese donations to the Democrats and asserted that Republicans also had egregious fundraising issues. The committee chair, Fred Thompson of Tennessee, disagreed and continued the investigation. Thompson and Glenn worked together poorly for the duration of the investigation: Thompson gave Glenn only the information he was legally required to provide, while Glenn would not authorize a larger budget and tried to expand the scope of the investigation to include members of the GOP. The investigation concluded with a Republican-written report, which Thompson described as, "... a lot of things strung together that paint a real ugly picture." The Democrats, led by Glenn, said the report "... does not support the conclusion that the China plan was aimed at, or affected, the 1996 presidential election."
Glenn was the vice chairman of the Permanent Subcommittee on Investigations, a subcommittee of the Committee on Governmental Affairs. After the Republican Party regained control of the Senate in 1995, Glenn served as the ranking minority member on the Permanent Subcommittee on Investigations until he was succeeded by Carl Levin. During this time, the subcommittee investigated issues such as fraud on the Internet, mortgage fraud, and day trading of securities.
Other committees and activities
Glenn's father spent his retirement money battling cancer and would have lost his house if Glenn had not intervened. Glenn's father-in-law also underwent expensive treatments for Parkinson's disease. These health and financial issues motivated Glenn to request a seat on the Special Committee on Aging.
Glenn was considered an expert in matters of science and technology. He was a supporter of continuing the B-1 bomber program, which he considered successful. This conflicted with President Carter's desire to fund the B-2 bomber program. Glenn did not fully support development of the B-2 because he had doubts about the feasibility of the stealth technology. He drafted a proposal to slow down the development of the B-2, which could have potentially saved money, but the measure was rejected.
Glenn joined the Foreign Relations Committee in 1978. He became the chairman of the East Asian and Pacific Affairs Subcommittee, for which he traveled to Japan, Korea, the Republic of China, and the People's Republic of China. Glenn helped to pass the Taiwan Enabling Act of 1979. The same year, Glenn's stance on the SALT II treaty caused another dispute with President Carter. Given the loss of radar listening posts in Iran, Glenn did not believe that the U.S. had the capability to monitor the Soviet Union accurately enough to verify compliance with the treaty. During a ship-launching ceremony, he spoke about his doubts about verifying treaty compliance. First Lady Rosalynn Carter also spoke at the event and criticized Glenn for speaking publicly about the issue. The Senate never ratified the treaty, in part because of the Soviet invasion of Afghanistan. Glenn served on the committee until 1985, when he traded his seat for one on the Armed Services Committee.
Glenn became chairman of the Manpower Subcommittee of the Armed Services Committee in 1987. He introduced legislation such as a measure increasing pay and benefits for American troops in the Persian Gulf during the Gulf War. He served as the subcommittee's chairman until 1993, when he became chairman of the Armed Services Subcommittee on Military Readiness and Defense Infrastructure.
Keating Five
Glenn was one of the Keating Five—the U.S. Senators involved with the savings and loan crisis—after he accepted a $200,000 campaign contribution from Lincoln Savings and Loan Association head Charles Keating. During the crisis, the senators were accused of delaying the seizure of Keating's S&L, which cost taxpayers an additional $2 billion. The combination of perceived political pressure and Keating's monetary contributions to the senators led to an investigation.
The Ethics Committee's outside counsel, Robert Bennett, wanted to drop Republican senator John McCain and Glenn from the investigation. The Democrats did not want to exclude McCain, as he was the only Republican being investigated, which meant they could not excuse Glenn from the investigation either. McCain and Glenn were reprimanded the least of the five; the committee found that they had exercised "poor judgment". The GOP focused on Glenn's "poor judgment" rather than on what Glenn saw as his exoneration. Ohio GOP chairman Robert Bennett said, "John Glenn misjudged Charles Keating. He also misjudged the tolerance of Ohio's taxpayers, who are left to foot the bill of nearly $2 billion." After the Senate's report, Glenn said, "They so firmly put this thing to bed ... there isn't much there to fuss with. I didn't do anything wrong." In his autobiography, Glenn wrote, "outside of people close to me dying, these hearings were the low point of my life." The case cost him $520,000 in legal fees. The association of his name with the scandal made Republicans hopeful that he could be defeated in the 1992 campaign, but Glenn defeated Lieutenant Governor Mike DeWine to retain his seat.
Retirement
On February 20, 1997, which was the 35th anniversary of his Friendship 7 flight, Glenn announced that his retirement from the Senate would occur at the end of his term in January 1999. Glenn retired because of his age, noting that he would have been 83 at the end of another term and quipping that "... there is still no cure for the common birthday".
Return to space
After the Space Shuttle Challenger disaster in 1986, Glenn criticized putting a "lay person in space for the purpose of gaining public support ... while the shuttle is still in its embryonic stage". He supported flying research scientists. In 1995, Glenn read Space Physiology and Medicine, a book written by NASA doctors. He realized that many of the physiological changes that occur during spaceflight, such as loss of bone and muscle mass and blood plasma, are the same as changes that result from aging. Glenn thought NASA should send an older person on a shuttle mission, and that it should be him. Beginning in 1995, he lobbied NASA Administrator Dan Goldin for the mission. Goldin said he would consider it if there was a scientific reason and if Glenn could pass the same physical examination the younger astronauts took. Glenn performed research on the subject and passed the physical examination. On January 16, 1998, Goldin announced that Glenn would be part of the STS-95 crew; at age 77 at the time of the flight, he became the oldest person to fly in space up to that point.
NASA and the National Institute on Aging (NIA) planned to use Glenn as a test subject for research, with biometrics taken before, during, and after his flight. Some experiments (in circadian rhythms, for example) compared him with the younger crew members. In addition to these tests, he was in charge of the flight's photography and videography. Glenn returned to space on October 29, 1998, as a payload specialist aboard Space Shuttle Discovery. Shortly before the flight, researchers disqualified Glenn from one of the flight's two major human experiments (on the effect of melatonin) for undisclosed medical reasons; he participated in experiments on sleep monitoring and protein use. On November 6, President Bill Clinton sent a congratulatory email to Glenn aboard Discovery. This is often cited as the first email sent by a sitting U.S. president, although records exist of emails sent by Clinton several years earlier.
His participation in the nine-day mission was criticized by some members of the space community as a favor granted by Clinton; John Pike, director of the Federation of American Scientists' space-policy project, said: "If he was a normal person, he would acknowledge he's a great American hero and that he should get to fly on the shuttle for free ... He's too modest for that, and so he's got to have this medical research reason. It's got nothing to do with medicine".
In a 2012 interview, Glenn said he regretted that NASA did not continue its research on aging by sending additional elderly people into space. After STS-95 returned safely, its crew received a ticker-tape parade. On October 15, 1998, NASA Road 1 (the main route to the Johnson Space Center) was temporarily renamed John Glenn Parkway for several months. Glenn was awarded the NASA Space Flight Medal in 1998 for flying on STS-95. In 2001, Glenn opposed sending Dennis Tito, the world's first space tourist, to the International Space Station because Tito's trip had no scientific purpose.
Personal life
Glenn and Annie had two children—John David and Carolyn Ann—and two grandchildren, and remained married for 73 years until his death.
A Freemason, Glenn was a member of Concord Lodge No. 688 in New Concord, Ohio. He received all of his degrees at once in a Mason at Sight ceremony conducted by the Grand Master of Ohio in 1978, 14 years after petitioning his lodge. In 1999, Glenn became a 33rd-degree Scottish Rite Mason in the Valley of Cincinnati (NMJ). As an adult, he was honored as part of the DeMolay Legion of Honor by DeMolay International, a Masonic youth organization for boys.
Glenn was an ordained elder of the Presbyterian Church. His religious faith began before he became an astronaut and was reinforced after he traveled in space. "To look out at this kind of creation and not believe in God is to me impossible", said Glenn after his second (and final) space voyage. He saw no contradiction between belief in God and the knowledge that evolution is "a fact" and believed evolution should be taught in schools: "I don't see that I'm any less religious that I can appreciate the fact that science just records that we change with evolution and time, and that's a fact. It doesn't mean it's less wondrous and it doesn't mean that there can't be some power greater than any of us that has been behind and is behind whatever is going on."
Public appearances
Glenn was an honorary member of the International Academy of Astronautics and a member of the Society of Experimental Test Pilots, Marine Corps Aviation Association, Order of Daedalians, National Space Club board of trustees, National Space Society board of governors, International Association of Holiday Inns, Ohio Democratic Party, State Democratic Executive Committee, Franklin County (Ohio) Democratic Party and the 10th District (Ohio) Democratic Action Club. In 2001 he guest-starred as himself on the American television sitcom Frasier.
On September 5, 2009, John and Annie Glenn dotted the "i" in Ohio State University's Script Ohio marching band performance during the Ohio State–Navy football-game halftime show, which is normally reserved for veteran band members. To commemorate the 50th anniversary of the Friendship 7 flight on February 20, 2012, he had an unexpected opportunity to speak with the orbiting crew of the International Space Station when he was onstage with NASA Administrator Charlie Bolden at Ohio State University. On April 19, 2012, Glenn participated in the ceremonial transfer of the retired Space Shuttle Discovery from NASA to the Smithsonian Institution for permanent display at the Steven F. Udvar-Hazy Center. He used the occasion to criticize the "unfortunate" decision to end the Space Shuttle program, saying that grounding the shuttles delayed research.
Illness and death
Glenn was in good health for most of his life. He retained a private pilot's license until 2011 when he was 90. In June 2014, Glenn underwent successful heart valve replacement surgery at the Cleveland Clinic. In early December 2016, he was hospitalized at the James Cancer Hospital of Ohio State University Wexner Medical Center in Columbus. According to a family source, Glenn had been in declining health, and his condition was grave; his wife and their children and grandchildren were at the hospital.
Glenn died on December 8, 2016, at the OSU Wexner Medical Center; he was 95 years old. No cause of death was disclosed. After his death, his body lay in state at the Ohio Statehouse. There was a memorial service at Mershon Auditorium at Ohio State University. Another memorial service was performed at Kennedy Space Center near the Heroes and Legends building. His body was interred at Arlington National Cemetery on April 6, 2017. At the time of his death, Glenn was the last surviving member of the Mercury Seven.
The Military Times reported that William Zwicharowski, a senior mortuary official at Dover Air Force Base, had offered to let visiting inspectors view Glenn's remains, sparking an official investigation. Zwicharowski denied that the remains were disrespected. At the conclusion of the investigation, officials said the remains had not been disrespected, since the inspectors did not accept Zwicharowski's offer, but that Zwicharowski's actions were nonetheless improper. No administrative action was taken because he had retired.
President Barack Obama said that Glenn, "the first American to orbit the Earth, reminded us that with courage and a spirit of discovery there's no limit to the heights we can reach together". Tributes were also paid by Vice President Joe Biden, President-elect Donald Trump and former Secretary of State Hillary Clinton.
The phrase "Godspeed, John Glenn", which fellow Mercury astronaut Scott Carpenter had used to hail Glenn's launch into space, became a social-media hashtag: #GodspeedJohnGlenn. Former and current astronauts added tributes; so did NASA Administrator and former shuttle astronaut Charles Bolden, who wrote: "John Glenn's legacy is one of risk and accomplishment, of history created and duty to country carried out under great pressure with the whole world watching." President Obama ordered flags to be flown at half-staff until Glenn's burial. On April 5, 2017, President Donald Trump issued presidential proclamation 9588, titled "Honoring the Memory of John Glenn".
Awards and honors
Glenn received the National Geographic Society's Hubbard Medal in 1962 and the John J. Montgomery Award in 1963. Glenn, along with 37 other space race astronauts, received the Ambassador of Space Exploration Award in 2006. He was also awarded the General Thomas D. White National Defense Award and the Prince of Asturias Award for International Cooperation. In 1964, Glenn received the Golden Plate Award of the American Academy of Achievement. In 2004, he received the Woodrow Wilson Award for Public Service from the Woodrow Wilson International Center for Scholars of the Smithsonian Institution, and he was awarded the National Collegiate Athletic Association's Theodore Roosevelt Award for 2008.
Glenn earned the Navy's astronaut wings and the Marine Corps' Astronaut Medal. He was awarded the Congressional Gold Medal in 2011 and was among the first group of astronauts to be granted the distinction. In 2012, President Barack Obama presented Glenn with the Presidential Medal of Freedom. Glenn was the seventh astronaut to receive this distinction. The Congressional Gold Medal and the Presidential Medal of Freedom are considered the two most prestigious awards that can be bestowed on a civilian. The Society of Experimental Test Pilots awarded Glenn the Iven C. Kincheloe award in 1963, and he was inducted into the International Air & Space Hall of Fame in 1968, National Aviation Hall of Fame in 1976, the International Space Hall of Fame in 1977, and the U.S. Astronaut Hall of Fame in 1990. In 2000, he received the U.S. Senator John Heinz Award for public service by an elected or appointed official, one of the annual Jefferson Awards.
In 1961, Glenn received an honorary LL.D. from Muskingum University, the college he attended before joining the military in World War II. He also received honorary doctorates from Nihon University in Tokyo; Wagner College in Staten Island, New York; Ohio Northern University; Williams College; and Brown University. In 1998, he helped found the John Glenn Institute for Public Service and Public Policy at Ohio State University to encourage public service. The institute merged with OSU's School of Public Policy and Management to become the John Glenn School of Public Affairs, where Glenn held an adjunct professorship. In February 2015, it was announced that the school would become the John Glenn College of Public Affairs that April.
The Glenn Research Center at Lewis Field in Cleveland is named after him, and the Senator John Glenn Highway runs along a stretch of I-480 in Ohio across from the Glenn Research Center. Colonel Glenn Highway (which passes Wright-Patterson Air Force Base and Wright State University near Dayton, Ohio), John Glenn High School in his hometown of New Concord, Elwood-John H. Glenn High School in the hamlet of Elwood, Town of Huntington, Long Island, New York, and the former Col. John Glenn Elementary in Seven Hills, Ohio, were also named for him. Colonel Glenn Road in Little Rock, Arkansas, was named for him in 1962. High schools in Westland and Bay City, Michigan; Walkerton, Indiana; and Norwalk, California, bear Glenn's name. The fireboat John H. Glenn Jr., operated by the District of Columbia Fire and Emergency Medical Services Department and protecting sections of the Potomac and Anacostia Rivers which run through Washington, D.C., was named for him, as was the USNS John Glenn, a mobile landing platform delivered to the U.S. Navy on March 12, 2014. In June 2016, Port Columbus International Airport in Columbus, Ohio, was renamed John Glenn Columbus International Airport. Glenn and his family attended the ceremony, during which he spoke about how visiting the airport as a child had kindled his interest in flying. On September 12, 2016, Blue Origin announced New Glenn, an orbital rocket named in his honor. Orbital ATK named the Cygnus space capsule used in the NASA CRS OA-7 mission to the International Space Station "S.S. John Glenn" in his honor. The mission successfully lifted off on April 16, 2017.
Although never a Scout himself, Glenn strongly endorsed the Boy Scouts. His son, John David, attained the rank of Eagle Scout, a distinction that many of Glenn's aviator peers had also achieved.
Legacy
Glenn's public life and legacy began when he received his first ticker-tape parade for breaking the transcontinental airspeed record. As a senator, he used his military background to write legislation to reduce nuclear proliferation. He also focused on reducing government waste. Buzz Aldrin wrote that Glenn's Friendship 7 flight "... helped to galvanize the country's will and resolution to surmount significant technical challenges of human spaceflight."
President Barack Obama said, "With John's passing, our nation has lost an icon and Michelle and I have lost a friend. John spent his life breaking barriers, from defending our freedom as a decorated Marine Corps fighter pilot in World War II and Korea, to setting a transcontinental speed record, to becoming, at age 77, the oldest human to touch the stars." Obama issued a presidential proclamation on December 9, 2016, ordering the US flag to be flown at half-staff in Glenn's memory. NASA administrator Charles Bolden said: "Senator Glenn's legacy is one of risk and accomplishment, of history created and duty to country carried out under great pressure with the whole world watching".
References
Notes
Citations
Sources
Further reading
External links
John Glenn's Flight on Friendship 7, MA-6 – complete 5-hour capsule audio recording
John Glenn's Flight on the Space Shuttle, STS-95
1921 births
1962 in spaceflight
1998 in spaceflight
2016 deaths
20th-century American businesspeople
American astronaut-politicians
American aviation record holders
American flight instructors
American Freemasons
American Presbyterians
American test pilots
Aviators from Ohio
Burials at Arlington National Cemetery
Candidates in the 1984 United States presidential election
Congressional Gold Medal recipients
Democratic Party United States senators from Ohio
Engineers from Ohio
Holiday Inn people
Mercury Seven
Military personnel from Ohio
Muskingum University alumni
National Aviation Hall of Fame inductees
Ohio Democrats
Ohio State University faculty
People from Cambridge, Ohio
People from New Concord, Ohio
Politicians from Columbus, Ohio
Presidential Medal of Freedom recipients
Recipients of the Air Medal
Recipients of the Congressional Space Medal of Honor
Recipients of the Distinguished Flying Cross (United States)
Recipients of the NASA Distinguished Service Medal
United States Astronaut Hall of Fame inductees
United States Army Air Forces personnel of World War II
United States Marine Corps astronauts
United States Marine Corps colonels
United States Marine Corps personnel of the Korean War
United States Marine Corps pilots of World War II
United States Naval Aviators
United States Naval Test Pilot School alumni
Theistic evolutionists
Science activists
Space advocates
Articles containing video clips
20th-century United States senators | John Glenn | [
"Biology"
] | 12,559 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
58,704 | https://en.wikipedia.org/wiki/Arbor%20Day%20Foundation | The Arbor Day Foundation is an American 501(c)(3) nonprofit membership organization dedicated to planting trees. The Arbor Day Foundation has more than one million members and has planted more than 500 million trees in neighborhoods, communities, cities and forests throughout the world. The Foundation's stated mission is "to inspire people to plant, nurture, and celebrate trees."
History
The Arbor Day Foundation was founded in 1972, the centennial of the first Arbor Day observance.
Programs
Through the global reforestation program, the Arbor Day Foundation and international partners have replanted more than 108 million trees lost to fire, insects, disease, and weather in forests in the United States and around the world. These rejuvenated forests help to protect watersheds, stabilize soil, restore wildlife habitats, improve air quality and create jobs.
Tree City USA
Founded in 1976 and co-sponsored by the National Association of State Foresters and the United States Forest Service, the Tree City USA program provides a framework for communities to manage and expand their public trees. More than 3,900 communities have achieved Tree City USA status by meeting four core standards of sound urban forestry management: maintaining a tree board or department, having a community tree ordinance, spending at least $2 per capita on urban forestry, and celebrating Arbor Day. Today, nearly 155 million Americans, across all fifty states, Washington, D.C., and Puerto Rico, live in Tree City USA towns and cities.
Time for Trees
In 2018, the Arbor Day Foundation launched the Time for Trees initiative to plant 100 million trees in forests and communities around the world and engage 5 million tree planters by the 50th Anniversary of the Foundation in 2022.
Team Trees
In 2019, Team Trees was formed when YouTube creators MrBeast and Mark Rober joined with the Arbor Day Foundation to raise $20 million to plant 20 million trees. The campaign crowdfunded $20 million in 56 days, with more than 800,000 people donating from 200 countries and territories. The campaign set the record for the biggest YouTube collaboration and fundraiser in history, with notable donors such as Elon Musk and Tobias Lütke each giving over a million dollars. The goal of 20 million trees was reached on December 19, 2019, and as of October 2023 #TeamTrees had raised $24 million.
Tree resources
Arborday.org offers a comprehensive set of guides and extensive information about tree species, selection, planting, and care, including a zip code look-up tool for finding hardiness zones.
Arbor Day Farm
This natural educational center in Nebraska City, Nebraska, is the birthplace of Arbor Day and encompasses 260 acres of land for outdoor exploration. Arbor Day Farm is home to Lied Lodge & Conference Center, an environmentally sustainable hotel with 136 guest rooms and a full-service meeting center dedicated to supporting tree planting, conservation, and stewardship around the globe. Tree Adventure, an award-winning nature-themed attraction, and Arbor Lodge State Historical Park are also part of Arbor Day Farm.
Gallery
See also
List of Tree Cities USA
Arbor Day
International Day of Forests
Britain in Bloom
Entente Florale
Reforestation
References
External links
Environmental organizations based in the United States
Environmental education
Urban forestry organizations
Charities based in Nebraska
Non-profit organizations based in Nebraska
1972 establishments in Nebraska
Environmental organizations established in 1972 | Arbor Day Foundation | [
"Environmental_science"
] | 667 | [
"Environmental education",
"Environmental social science"
] |
58,721 | https://en.wikipedia.org/wiki/Chemical%20industry | The chemical industry comprises the companies and other organizations that develop and produce industrial, specialty and other chemicals. Central to the modern world economy, it converts raw materials (oil, natural gas, air, water, metals, and minerals) into commodity chemicals for industrial and consumer products. It includes industries for petrochemicals such as polymers for plastics and synthetic fibers; inorganic chemicals such as acids and alkalis; agricultural chemicals such as fertilizers, pesticides and herbicides; and other categories such as industrial gases, speciality chemicals and pharmaceuticals.
Various professionals are involved in the chemical industry including chemical engineers, chemists and lab technicians.
History
Although chemicals were made and used throughout history, the birth of the heavy chemical industry (production of chemicals in large quantities for a variety of uses) coincided with the beginnings of the Industrial Revolution.
Industrial Revolution
One of the first chemicals to be produced in large amounts through industrial processes was sulfuric acid. In 1736, pharmacist Joshua Ward developed a process for its production that involved heating sulfur with saltpeter, allowing the sulfur to oxidize and combine with water. It was the first practical large-scale production of sulfuric acid. John Roebuck and Samuel Garbett were the first to establish a large-scale factory in Prestonpans, Scotland, in 1749, which used leaden condensing chambers for the manufacture of sulfuric acid.
In the early 18th century, cloth was bleached by treating it with stale urine or sour milk and exposing it to sunlight for long periods of time, which created a severe bottleneck in production. Sulfuric acid began to be used as a more efficient agent as well as lime by the middle of the century, but it was the discovery of bleaching powder by Charles Tennant that spurred the creation of the first great chemical industrial enterprise. His powder was made by reacting chlorine with dry slaked lime and proved to be a cheap and successful product. He opened the St Rollox Chemical Works, north of Glasgow, and production went from just 52 tons in 1799 to almost 10,000 tons just five years later.
Soda ash was used since ancient times in the production of glass, textile, soap, and paper, and the source of the potash had traditionally been wood ashes in Western Europe. By the 18th century, this source was becoming uneconomical due to deforestation, and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt (sodium chloride). The Leblanc process was patented in 1791 by Nicolas Leblanc who then built a Leblanc plant at Saint-Denis. He was denied his prize money because of the French Revolution.
In Britain, the Leblanc process became popular. William Losh built the first soda works in Britain at the Losh, Wilson and Bell works on the River Tyne in 1816, but it remained on a small scale due to large tariffs on salt production until 1824. When these tariffs were repealed, the British soda industry was able to rapidly expand. James Muspratt's chemical works in Liverpool and Charles Tennant's complex near Glasgow became the largest chemical production centres anywhere. By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined.
These huge factories began to produce a greater diversity of chemicals as the Industrial Revolution matured. Originally, large quantities of alkaline waste were vented into the environment from the production of soda, provoking one of the first pieces of environmental legislation to be passed in 1863. This provided for close inspection of the factories and imposed heavy fines on those exceeding the limits on pollution. Methods were devised to make useful byproducts from the alkali.
The Solvay process was developed by the Belgian industrial chemist Ernest Solvay in 1861. In 1864, Solvay and his brother Alfred constructed a plant in Charleroi Belgium. In 1874, they expanded into a larger plant in Nancy, France. The new process proved more economical and less polluting than the Leblanc method, and its use spread. In the same year, Ludwig Mond visited Solvay to acquire the rights to use his process, and he and John Brunner formed Brunner, Mond & Co., and built a Solvay plant at Winnington, England. Mond was instrumental in making the Solvay process a commercial success. He made several refinements between 1873 and 1880 that removed byproducts that could inhibit the production of sodium carbonate in the process.
The manufacture of chemical products from fossil fuels began at scale in the early 19th century. The coal tar and ammoniacal liquor residues of coal gas manufacture for gas lighting began to be processed in 1822 at the Bonnington Chemical Works in Edinburgh to make naphtha, pitch oil (later called creosote), pitch, lampblack (carbon black) and sal ammoniac (ammonium chloride). Ammonium sulphate fertiliser, asphalt road surfacing, coke oil and coke were later added to the product line.
Expansion and maturation
The late 19th century saw an explosion in both the quantity of production and the variety of chemicals that were manufactured. Large chemical industries arose in Germany and later in the United States.
Production of artificial manufactured fertilizer for agriculture was pioneered by Sir John Lawes at his purpose-built Rothamsted Research facility. In the 1840s he established large works near London for the manufacture of superphosphate of lime. Processes for the vulcanization of rubber were patented by Charles Goodyear in the United States and Thomas Hancock in England in the 1840s. The first synthetic dye was discovered by William Henry Perkin in London. He partly transformed aniline into a crude mixture which, when extracted with alcohol, produced a substance with an intense purple colour. He also developed the first synthetic perfumes. German industry quickly began to dominate the field of synthetic dyes. The three major firms BASF, Bayer, and Hoechst produced several hundred different dyes. By 1913, German industries produced almost 90% of the world's supply of dyestuffs and sold approximately 80% of their production abroad. In the United States, Herbert Henry Dow's use of electrochemistry to produce chemicals from brine was a commercial success that helped to promote the country's chemical industry.
The petrochemical industry can be traced back to the oil works of Scottish chemist James Young, and Canadian Abraham Pineo Gesner. The first plastic was invented by Alexander Parkes, an English metallurgist. In 1856, he patented Parkesine, a celluloid based on nitrocellulose treated with a variety of solvents. This material, exhibited at the 1862 London International Exhibition, anticipated many of the modern aesthetic and utility uses of plastics. The industrial production of soap from vegetable oils was started by William Lever and his brother James in 1885 in Lancashire based on a modern chemical process invented by William Hough Watson that used glycerin and vegetable oils.
By the 1920s, chemical firms consolidated into large conglomerates; IG Farben in Germany, Rhône-Poulenc in France and Imperial Chemical Industries in Britain. Dupont became a major chemicals firm in the early 20th century in America.
Products
Polymers and plastics such as polyethylene, polypropylene, polyvinyl chloride, polyethylene terephthalate, polystyrene and polycarbonate comprise about 80% of the industry's output worldwide. Chemicals are used in many different consumer goods, and are also used in many different sectors. This includes agriculture manufacturing, construction, and service industries. Major industrial customers include rubber and plastic products, textiles, apparel, petroleum refining, pulp and paper, and primary metals. Chemicals are nearly a $5 trillion global enterprise, and the EU and U.S. chemical companies are the world's largest producers.
Sales of the chemical business can be divided into a few broad categories, including basic chemicals (about 35% – 37% of dollar output), life sciences (30%), specialty chemicals (20% – 25%) and consumer products (about 10%).
Overview
Basic chemicals, or "commodity chemicals" are a broad chemical category including polymers, bulk petrochemicals and intermediates, other derivatives and basic industrials, inorganic chemicals, and fertilizers.
Polymers, the largest revenue segment, include all categories of plastics and human-made fibers. The major markets for plastics are packaging, followed by home construction, containers, appliances, pipe, transportation, toys, and games.
The largest-volume polymer product, polyethylene (PE), is used mainly in packaging films and other markets such as milk bottles, containers, and pipe.
Polyvinyl chloride (PVC), another large-volume product, is principally used to make piping for construction markets as well as siding and, to a much smaller extent, transportation and packaging materials.
Polypropylene (PP), similar in volume to PVC, is used in markets ranging from packaging, appliances, and containers to clothing and carpeting.
Polystyrene (PS), another large-volume plastic, is used principally for appliances and packaging as well as toys and recreation.
The leading human-made fibers include polyester, nylon, polypropylene, and acrylics, with applications including apparel, home furnishings, and other industrial and consumer use.
Principal raw materials for polymers are bulk petrochemicals like ethylene, propylene and benzene.
Petrochemicals and intermediate chemicals are primarily made from liquefied petroleum gas (LPG), natural gas and crude oil fractions. Large volume products include ethylene, propylene, benzene, toluene, xylenes, methanol, vinyl chloride monomer (VCM), styrene, butadiene, and ethylene oxide. These basic or commodity chemicals are the starting materials used to manufacture many polymers and other more complex organic chemicals particularly those that are made for use in the specialty chemicals category.
Other derivatives and basic industrials include synthetic rubber, surfactants, dyes and pigments, turpentine, resins, carbon black, explosives, and rubber products and contribute about 20 percent of the basic chemicals' external sales.
Inorganic chemicals (about 12% of the revenue output) make up the oldest of the chemical categories. Products include salt, chlorine, caustic soda, soda ash, acids (such as nitric acid, phosphoric acid, and sulfuric acid), titanium dioxide, and hydrogen peroxide.
Fertilizers are the smallest category (about 6 percent) and include phosphates, ammonia, and potash chemicals.
Life sciences
Life sciences (about 30% of the dollar output of the chemistry business) include differentiated chemical and biological substances, pharmaceuticals, diagnostics, animal health products, vitamins, and pesticides. While much smaller in volume than other chemical sectors, their products tend to have high prices – over ten dollars per pound – growth rates of 1.5 to 6 times GDP, and research and development spending at 15 to 25% of sales. Life science products are usually produced with high specifications and are closely scrutinized by government agencies such as the Food and Drug Administration. Pesticides, also called "crop protection chemicals", are about 10% of this category and include herbicides, insecticides, and fungicides.
Specialty chemicals
Specialty chemicals are a category of relatively high-valued, rapidly growing chemicals with diverse end product markets. Typical growth rates are one to three times GDP with prices over a dollar per pound. They are generally characterized by their innovative aspects. Products are sold for what they can do rather than for what chemicals they contain. Products include electronic chemicals, industrial gases, adhesives and sealants as well as coatings, industrial and institutional cleaning chemicals, and catalysts. In 2012, excluding fine chemicals, the $546 billion global specialty chemical market was 33% paints, coatings and surface treatments; 27% advanced polymers; 14% adhesives and sealants; 13% additives; and 13% pigments and inks.
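As a rough worked example (a minimal sketch in Python; the $546 billion total and the percentage shares are simply the 2012 figures quoted above), the approximate dollar value of each specialty chemical segment can be computed as follows:

# Approximate 2012 global specialty chemical market, excluding fine chemicals.
# The total and the segment shares are the figures quoted in the text above.
total_market_busd = 546  # billions of US dollars
shares = {
    "Paints, coatings and surface treatments": 0.33,
    "Advanced polymers": 0.27,
    "Adhesives and sealants": 0.14,
    "Additives": 0.13,
    "Pigments and inks": 0.13,
}

for segment, share in shares.items():
    # Multiply the total market size by each segment's share of it.
    print(f"{segment}: about ${total_market_busd * share:.0f} billion")

# The shares sum to 1.00, so the five segments account for the whole market.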
Speciality chemicals are sold as effect or performance chemicals. Sometimes they are mixtures of formulations, unlike "fine chemicals", which are almost always single-molecule products.
Consumer products
Consumer products include direct product sales of chemicals such as soaps, detergents, and cosmetics. Typical growth rates are 0.8 to 1.0 times GDP.
Consumers rarely come into contact with basic chemicals. Polymers and specialty chemicals are materials that they encounter everywhere daily. Examples are plastics, cleaning materials, cosmetics, paints & coatings, electronics, automobiles and the materials used in home construction. These specialty products are marketed by chemical companies to the downstream manufacturing industries as pesticides, specialty polymers, electronic chemicals, surfactants, construction chemicals, Industrial Cleaners, flavours and fragrances, specialty coatings, printing inks, water-soluble polymers, food additives, paper chemicals, oil field chemicals, plastic adhesives, adhesives and sealants, cosmetic chemicals, water management chemicals, catalysts, and textile chemicals. Chemical companies rarely supply these products directly to the consumer.
Annually the American Chemistry Council tabulates the US production volume of the top 100 chemicals. In 2000, the aggregate production volume of the top 100 chemicals totaled 502 million tons, up from 397 million tons in 1990.
Inorganic chemicals tend to be the largest volume but much smaller in dollar revenue due to their low prices. The top 11 of the 100 chemicals in 2000 were sulfuric acid (44 million tons), nitrogen (34), ethylene (28), oxygen (27), lime (22), ammonia (17), propylene (16), polyethylene (15), chlorine (13), phosphoric acid (13) and diammonium phosphates (12).
Companies
The largest chemical producers today are global companies with international operations and plants in numerous countries. Rankings of the top 25 companies by chemical sales in 2015 illustrate this scale; for some companies, chemical sales represent only a portion of total sales.
Technology
From the perspective of chemical engineers, the chemical industry involves the use of chemical processes such as chemical reactions and refining methods to produce a wide variety of solid, liquid, and gaseous materials. Most of these products serve to manufacture other items, although a smaller number go directly to consumers. Solvents, pesticides, lye, washing soda, and portland cement provide a few examples of products used by consumers.
The industry includes manufacturers of inorganic and organic industrial chemicals, ceramic products, petrochemicals, agrochemicals, polymers and rubber (elastomers), oleochemicals (oils, fats, and waxes), explosives, and fragrances and flavors.
Related industries include petroleum, glass, paint, ink, sealant, adhesive, pharmaceuticals and food processing.
Chemical processes such as chemical reactions operate in chemical plants to form new substances in various types of reaction vessels. In many cases, the reactions take place in special corrosion-resistant equipment at elevated temperatures and pressures with the use of catalysts. The products of these reactions are separated using a variety of techniques including distillation especially fractional distillation, precipitation, crystallization, adsorption, filtration, sublimation, and drying.
The processes and products are usually tested during and after manufacture by dedicated instruments and on-site quality control laboratories to ensure safe operation and to assure that the product will meet required specifications. More organizations within the industry are implementing chemical compliance software to maintain quality products and manufacturing standards. The products are packaged and delivered by many methods, including pipelines, tank-cars, and tank-trucks (for both solids and liquids), cylinders, drums, bottles, and boxes. Chemical companies often have a research-and-development laboratory for developing and testing products and processes. Such research facilities may include pilot plants and may be located at a site separate from the production plant(s).
World chemical production
The scale of chemical manufacturing tends to be organized from largest in volume (petrochemicals and commodity chemicals), to specialty chemicals, and the smallest, fine chemicals.
The petrochemical and commodity chemical manufacturing units are on the whole single product continuous processing plants. Not all petrochemical or commodity chemical materials are made in one single location, but groups of related materials often are to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale.
Those chemicals made on the largest of scales are made in a few manufacturing locations around the world, for example in Texas and Louisiana along the Gulf Coast of the United States, on Teesside (United Kingdom), and in Rotterdam in the Netherlands. The large-scale manufacturing locations often have clusters of manufacturing units that share utilities and large-scale infrastructure such as power stations, port facilities, and road and rail terminals. To demonstrate the clustering and integration mentioned above, some 50% of the United Kingdom's petrochemical and commodity chemicals are produced by the Northeast of England Process Industry Cluster on Teesside.
Specialty chemical and fine chemical manufacturing are mostly made in discrete batch processes. These manufacturers are often found in similar locations but in many cases, they are to be found in multi-sector business parks.
Continents and countries
In the U.S. there are 170 major chemical companies. They operate internationally with more than 2,800 facilities outside the U.S. and 1,700 foreign subsidiaries or affiliates operating. The U.S. chemical output is $750 billion a year. The U.S. industry records large trade surpluses and employs more than a million people in the United States alone. The chemical industry is also the second largest consumer of energy in manufacturing and spends over $5 billion annually on pollution abatement.
In Europe, the chemical, plastics, and rubber sectors are among the largest industrial sectors. Together they generate about 3.2 million jobs in more than 60,000 companies. Since 2000 the chemical sector alone has represented 2/3 of the entire manufacturing trade surplus of the EU.
In 2012, the chemical sector accounted for 12% of the EU manufacturing industry's added value. Europe remains the world's biggest chemical trading region, with 43% of the world's exports and 37% of the world's imports, although the latest data shows that Asia is catching up, with 34% of exports and 37% of imports. Even so, Europe still has a trade surplus with all regions of the world except Japan and China, with which chemical trade was roughly in balance in 2011. Europe's trade surplus with the rest of the world today amounts to 41.7 billion Euros.
Over the 20 years between 1991 and 2011, the European Chemical industry saw its sales increase from 295 billion Euros to 539 billion Euros, a picture of constant growth. Despite this, the European industry's share of the world chemical market has fallen from 36% to 20%. This has resulted from the huge increase in production and sales in emerging markets like India and China. The data suggest that 95% of this impact is from China alone. In 2012 the data from the European Chemical Industry Council shows that five European countries account for 71% of the EU's chemicals sales. These are Germany, France, the United Kingdom, Italy and the Netherlands.
The chemical industry has seen growth in China, India, Korea, the Middle East, South East Asia, Nigeria and Brazil. The growth is driven by changes in feedstock availability and price, labor and energy costs, differential rates of economic growth and environmental pressures.
Beyond individual companies, industrialized countries and regions can also be ranked by the value of the chemical production they export. Though the business of chemistry is worldwide in scope, the bulk of the world's $3.7 trillion chemical output is accounted for by only a handful of industrialized nations. The United States alone produced $689 billion, 18.6 percent of the total world chemical output, in 2008.
See also
Chemical engineering
Chemical leasing
Pharmaceutical industry
Industrial gas
Prices of chemical elements
Responsible Care
Northeast of England Process Industry Cluster (NEPIC)
References
External links
Chemical refinery resources: ccc-group.com
Industrial processes
Industries (economics) | Chemical industry | [
"Chemistry"
] | 4,279 | [
"nan"
] |
58,725 | https://en.wikipedia.org/wiki/Physical%20security | Physical security describes security measures that are designed to deny unauthorized access to facilities, equipment, and resources and to protect personnel and property from damage or harm (such as espionage, theft, or terrorist attacks). Physical security involves the use of multiple layers of interdependent systems that can include CCTV surveillance, security guards, protective barriers, locks, access control, perimeter intrusion detection, deterrent systems, fire protection, and other systems designed to protect persons and property.
Overview
Physical security systems for protected facilities can be intended to:
deter potential intruders (e.g. warning signs, security lighting);
detect intrusions, and identify, monitor and record intruders (e.g. security alarms, access control and CCTV systems);
trigger appropriate incident responses (e.g. by security guards and police);
delay or prevent hostile movements (e.g. door reinforcements, grilles);
protect the assets (e.g. safes).
It is up to security designers, architects and analysts to balance security controls against risks, taking into account the costs of specifying, developing, testing, implementing, using, managing, monitoring and maintaining the controls, along with broader issues such as aesthetics, human rights, health and safety, and societal norms or conventions. Physical access security measures that are appropriate for a high security prison or a military site may be inappropriate in an office, a home or a vehicle, although the principles are similar.
Elements and design
Deterrence
The goal of deterrence methods is to convince potential attackers that a successful attack is unlikely due to strong defenses.
The initial layer of security for a campus, building, office, or other physical space can use crime prevention through environmental design to deter threats. Some of the most common examples are also the most basic: warning signs or window stickers, fences, vehicle barriers, vehicle height-restrictors, restricted access points, security lighting and trenches.
Physical barriers
For example, tall fencing topped with barbed wire, razor wire or metal spikes is often emplaced on the perimeter of a property, generally with some type of signage that warns people not to attempt entry. However, at some facilities imposing perimeter walls or fencing will not be possible (e.g. an urban office building that is directly adjacent to public sidewalks) or may be aesthetically unacceptable (e.g. surrounding a shopping center with tall fences topped with razor wire); in this case, the outer security perimeter will generally be defined as the walls, windows and doors of the structure itself.
Security lighting
Security lighting is another effective form of deterrence. Intruders are less likely to enter well-lit areas for fear of being seen. Doors, gates, and other entrances, in particular, should be well lit to allow close observation of people entering and exiting. When lighting the grounds of a facility, widely distributed low-intensity lighting is generally superior to small patches of high-intensity lighting, because the latter can have a tendency to create blind spots for security personnel and CCTV cameras. It is important to place lighting in a manner that makes it difficult to tamper with (e.g. suspending lights from tall poles), and to ensure that there is a backup power supply so that security lights will not go out if the electricity is cut off. The introduction of low-voltage LED-based lighting products has enabled new security capabilities, such as instant-on or strobing, while substantially reducing electrical consumption.
Security lighting for nuclear power plants in the United States
For nuclear power plants in the United States, per the U.S. Nuclear Regulatory Commission (NRC) regulations in 10 CFR Part 73, security lighting is mentioned four times. The most notable mention is in 10 CFR 73.55(i)(6), Illumination, which requires that licensees "shall provide a minimum illumination level of 0.2 foot-candles, measured horizontally at ground level, in the isolation zones and appropriate exterior areas within the protected area". This is also the minimum illumination level specified in Table H-2, Minimum Night Firing Criteria, of 10 CFR 73 Appendix H for night firing. Per 10 CFR 73.46(b)(7), "Tactical Response Team members, armed response personnel, and guards shall qualify and requalify, at least every 12 months, for day and night firing with assigned weapons in accordance with Appendix H"; therefore, on the shooting range at night, per Appendix H, Table H-2, "all courses [shall have] 0.2 foot-candles at center mass of target area", applicable to handguns, shotguns, and rifles. One foot-candle is approximately 10.76 lux; the minimum illumination requirements above therefore correspond to about 2.152 lux.
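As a quick check of the unit conversion mentioned above, the following sketch (Python; the 0.2 foot-candle figure comes from the regulation quoted above, and the conversion factor is the standard one) converts foot-candles to lux:

# 1 foot-candle = 1 lumen per square foot; 1 square foot = 0.09290304 m^2,
# so 1 foot-candle is approximately 10.7639 lux.
LUX_PER_FOOTCANDLE = 10.7639

def footcandles_to_lux(fc: float) -> float:
    """Return the illuminance in lux for a value given in foot-candles."""
    return fc * LUX_PER_FOOTCANDLE

nrc_minimum_fc = 0.2  # 10 CFR 73.55(i)(6) minimum, measured horizontally at ground level
print(f"{nrc_minimum_fc} foot-candles = {footcandles_to_lux(nrc_minimum_fc):.3f} lux")
# Prints about 2.153 lux, consistent with the ~2.152 lux figure given in the text.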
Intrusion detection and electronic surveillance
Alarm systems and sensors
Security alarms can be installed to alert security personnel when unauthorized access is attempted. Alarm systems work in tandem with physical barriers, mechanical systems, and security guards, serving to trigger a response when these other forms of security have been breached. They consist of sensors including perimeter sensors, motion sensors, contact sensors, and glass break detectors.
However, alarms are only useful if there is a prompt response when they are triggered. In the reconnaissance phase prior to an actual attack, some intruders will test the response time of security personnel to a deliberately tripped alarm system. By measuring the length of time it takes for a security team to arrive (if they arrive at all), the attacker can determine if an attack could succeed before authorities arrive to neutralize the threat. Loud audible alarms can also act as a psychological deterrent, by notifying intruders that their presence has been detected.
In some U.S. jurisdictions, law enforcement will not respond to alarms from intrusion detection systems unless the activation has been verified by an eyewitness or video. Policies like this one have been created to combat the 94–99 percent rate of false alarm activation in the United States.
Video surveillance
Surveillance cameras can be a deterrent when placed in highly visible locations and are useful for incident assessment and historical analysis. For example, if alarms are being generated and there is a camera in place, security personnel can assess the situation via the camera feed. In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems abandon closed-circuit signal transmission in favor of transmission over IP camera networks.
Video monitoring does not necessarily guarantee a human response. A human must be monitoring the situation in real time in order to respond in a timely manner; otherwise, video monitoring is simply a means to gather evidence for later analysis. However, technological advances like video analytics are reducing the amount of work required for video monitoring as security personnel can be automatically notified of potential security events.
Access control
Access control methods are used to monitor and control traffic through specific access points and areas of the secure facility. This is done using a variety of methods, including CCTV surveillance, identification cards, security guards, biometric readers, locks, doors, turnstiles and gates.
Mechanical access control systems
Mechanical access control systems include turnstiles, gates, doors, and locks. Key control of the locks becomes a problem with large user populations and any user turnover. Keys quickly become unmanageable, often forcing the adoption of electronic access control.
Electronic access control systems
Electronic access control systems provide secure access to buildings or facilities by controlling who can enter and exit. Some aspects of these systems can include:
Access credentials - Access cards, fobs, or badges are used to identify and authenticate authorized users. Information encoded on the credentials is read by card readers at entry points.
Access control panels - These control the system, make access decisions, and are usually located in a secure area. Access control software runs on the panels and interfaces with card readers.
Readers - Installed at access points, these read credentials or other data, and send information to the access control panel. Readers can be proximity, magnetic stripe, smart card, biometrics, etc.
Door locking hardware - Electrified locks, electric strikes, or maglocks physically secure doors and release when valid credentials are presented. Integration allows doors to unlock when authorized.
Request to exit devices - These allow free egress through an access point without triggering an alarm. Buttons, motion detectors, and other sensors are commonly used.
Alarms - Unauthorized access attempts or held/forced doors can trigger audible alarms and alerts. Integration with camera systems also occurs.
Access levels - Software can limit access to specific users, groups, and times. For example, some employees may have 24/7 access to all areas while others are restricted.
Event logging - Systems record activity like access attempts, alarms, user tracking, etc. for security auditing and troubleshooting purposes.
Electronic access control uses credential readers, advanced software, and electrified locks to provide programmable, secure access management for facilities. Integration of cameras, alarms and other systems is also common.
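The decision logic described above can be illustrated with a minimal sketch. This is not any particular vendor's implementation; the credential identifiers, areas, schedules, and the request_access helper below are hypothetical and exist only for illustration, and a real panel would also handle door-position sensors, anti-passback, and offline operation.

```python
from datetime import datetime

# Hypothetical access policy: credential ID -> permitted areas and allowed hours.
# In a real system this would be provisioned from the access control software.
ACCESS_POLICY = {
    "card-1001": {"areas": {"lobby", "office", "server-room"}, "hours": (0, 24)},  # 24/7 access
    "card-2002": {"areas": {"lobby", "office"}, "hours": (8, 18)},                 # business hours only
}

EVENT_LOG = []  # real systems write events to persistent storage for auditing


def request_access(credential_id: str, area: str, when: datetime) -> bool:
    """Decide whether to release the door lock, and log the attempt."""
    policy = ACCESS_POLICY.get(credential_id)
    granted = (
        policy is not None
        and area in policy["areas"]
        and policy["hours"][0] <= when.hour < policy["hours"][1]
    )
    EVENT_LOG.append({"time": when.isoformat(), "credential": credential_id,
                      "area": area, "granted": granted})
    return granted


# The restricted card is denied at the server room after hours; the 24/7 card is admitted.
print(request_access("card-2002", "server-room", datetime(2024, 5, 1, 20, 0)))  # False
print(request_access("card-1001", "server-room", datetime(2024, 5, 1, 20, 0)))  # True
```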
An additional sub-layer of mechanical/electronic access control protection is reached by integrating a key management system to manage the possession and usage of mechanical keys to locks or property within a building or campus.
Identification systems and access policies
Another form of access control (procedural) includes the use of policies, processes and procedures to manage the ingress into the restricted area. An example of this is the deployment of security personnel conducting checks for authorized entry at predetermined points of entry. This form of access control is usually supplemented by the earlier forms of access control (i.e. mechanical and electronic access control), or simple devices such as physical passes.
Security personnel
Security personnel play a central role in all layers of security. All of the technological systems that are employed to enhance physical security are useless without a security force that is trained in their use and maintenance, and which knows how to properly respond to breaches in security. Security personnel perform many functions: patrolling facilities, administering electronic access control, responding to alarms, and monitoring and analyzing video footage.
See also
Alarm management
Artificial intelligence for video surveillance
Biometric device
Biometrics
Computer security
Door security
Executive protection
Guard tour patrol system
Information security
Infrastructure security
Logical security
Nuclear security
Perimeter intrusion detection system
Physical Security Professional
Security alarm
Security company
Security convergence
Security engineering
Surveillance
High-voltage transformer fire barriers
References
External links
UK NPSA Tools, Catalogues and Standards
Physical security
Security
Crime prevention
Public safety
National security
Warning systems
Security engineering
Perimeter security | Physical security | [
"Technology",
"Engineering"
] | 2,150 | [
"Systems engineering",
"Security engineering",
"Safety engineering",
"Measuring instruments",
"Warning systems"
] |
58,727 | https://en.wikipedia.org/wiki/Alum | An alum () is a type of chemical compound, usually a hydrated double sulfate salt of aluminium with the general formula XAl(SO4)2·12H2O, such that X is a monovalent cation such as potassium or ammonium. By itself, "alum" often refers to potassium alum, with the formula KAl(SO4)2·12H2O. Other alums are named after the monovalent ion, such as sodium alum and ammonium alum.
The name "alum" is also used, more generally, for salts with the same formula and structure, except that aluminium is replaced by another trivalent metal ion like chromium, or sulfur is replaced by another chalcogen like selenium. The most common of these analogs is chrome alum .
In most industries, the name "alum" (or "papermaker's alum") is used to refer to aluminium sulfate, Al2(SO4)3·nH2O, which is used for most industrial flocculation (the variable n is an integer whose size depends on the amount of water absorbed into the alum). For medicine, the word "alum" may also refer to aluminium hydroxide gel used as a vaccine adjuvant.
History
Alum found at archaeological sites
The western desert of Egypt was a major source of alum substitutes in antiquity. These evaporites were mainly , , , and .
The Ancient Greek Herodotus mentions Egyptian alum as a valuable commodity in The Histories.
The production of potassium alum from alunite is archaeologically attested on the island Lesbos.
The site was abandoned during the 7th century CE, but dates back at least to the 2nd century CE. Native alumen from the island of Melos appears to have been a mixture mainly of alunogen () with potassium alum and other minor sulfates.
Alumen in Pliny and Dioscorides
A detailed description of a substance termed alumen occurs in the Roman Pliny the Elder's Natural History.
By comparing Pliny's description with the account of stypteria (στυπτηρία) given by Dioscorides, it is obvious the two are identical. Pliny informs us that a form of alumen was found naturally in the earth, and terms it salsugoterrae.
Pliny wrote that different substances were distinguished by the name of alumen, but they were all characterised by a certain degree of astringency, and were all employed for dyeing and medicine. Pliny wrote that there is another kind of alum that the ancient Greeks term schiston, and which "splits into filaments of a whitish colour". From the name schiston and the mode of formation, it seems that this kind was the salt that forms spontaneously on certain salty minerals, as alum slate and bituminous shale, and consists mainly of sulfates of iron and aluminium. One kind of alumen was a liquid, which was apt to be adulterated; but when pure it had the property of blackening when added to pomegranate juice. This property seems to characterize a solution of iron sulfate in water; a solution of ordinary (potassium) alum would possess no such property. Contamination with iron sulfate was greatly disliked as this darkened and dulled dye colours. In some places the iron sulfate may have been lacking, so the salt would be white and would be suitable, according to Pliny, for dyeing bright colors.
Pliny describes several other types of alumen but it is not clear as to what these minerals are. The alumen of the ancients, then, was not always potassium alum, not even an alkali aluminum sulfate.
Alum described in medieval texts
Alum and green vitriol (iron sulfate) both have sweetish and astringent taste, and they had overlapping uses. Therefore, through the Middle Ages, alchemists and other writers do not seem to have distinguished the two salts accurately. In the writings of the alchemists we find the words misy, sory, and chalcanthum applied to either compound; and the name atramentum sutorium, which one might expect to belong exclusively to green vitriol, applied indiscriminately to both.
Alum was the most common mordant (substance used to set dyes on fabrics) used by the dye industry, especially in Islamic countries, during the middle ages. It was the main export of the Chad region, from where it was transported to the markets of Egypt and Morocco, and then to Europe. Less significant sources were found in Egypt and Yemen.
Modern understanding of the alums
During the early 1700s, G. E. Stahl claimed that reacting sulfuric acid with limestone produced a sort of alum. The error was soon corrected by Johann Heinrich Pott and Andreas Sigismund Marggraf, who showed that the precipitate obtained when an alkali is poured into a solution of alum, namely alumina, is quite different from lime and chalk, and is one of the ingredients in common clay.
Marggraf also showed that perfect crystals with properties of alum can be obtained by dissolving alumina in sulfuric acid and adding potash or ammonia to the concentrated solution. In 1767, Torbern Bergman observed the need for potassium or ammonium sulfates to convert aluminium sulfate into alum, while sodium or calcium would not work.
The composition of common alum was determined finally by Louis Vauquelin in 1797. As soon as Martin Klaproth discovered the presence of potassium in leucite and lepidolite,
Vauquelin demonstrated that common alum is a double salt, composed of sulfuric acid, alumina, and potash. In the same journal volume, Chaptal published the analysis of four different kinds of alum, namely, Roman alum, Levant alum, British alum, and an alum manufactured by himself, confirming Vauquelin's result.
Production
Some alums occur as minerals, the most important being alunite.
The most important alums – potassium, sodium, and ammonium – are produced industrially. Typical recipes involve combining aluminium sulfate and the sulfate monovalent cation. The aluminium sulfate is usually obtained by treating minerals like alum schist, bauxite and cryolite with sulfuric acid.
Types
Aluminium-based alums are named by the monovalent cation. Unlike the other alkali metals, lithium does not form alums, a fact attributed to the small size of its ion.
The most important alums are
Potassium alum, KAl(SO4)2·12H2O, also called "potash alum" or simply "alum"
Sodium alum, NaAl(SO4)2·12H2O, also called "soda alum" or "SAS"
Ammonium alum, NH4Al(SO4)2·12H2O
Chemical properties
Aluminium-based alums have a number of common chemical properties. They are soluble in water, have a sweetish taste, react as acids by turning blue litmus red, and crystallize in regular octahedra. In alums each metal ion is surrounded by six water molecules. When heated, they liquefy, and if the heating is continued, the water of crystallization is driven off, the salt froths and swells, and at last an amorphous powder remains. They are astringent and acidic.
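As a simple arithmetic illustration of the dodecahydrate composition, the sketch below computes the approximate molar mass of potassium alum, KAl(SO4)2·12H2O, and the fraction of its mass lost when the water of crystallization is driven off on heating. The atomic masses are standard rounded values and the element counts are simply read off the formula; this is an illustrative calculation, not a procedure taken from the sources of this article.

```python
# Approximate standard atomic masses in g/mol
ATOMIC_MASS = {"K": 39.10, "Al": 26.98, "S": 32.06, "O": 16.00, "H": 1.008}

# Element counts in KAl(SO4)2·12H2O: two sulfate groups (8 O) plus twelve waters (12 O, 24 H)
potassium_alum = {"K": 1, "Al": 1, "S": 2, "O": 8 + 12, "H": 24}
water_of_crystallization = {"O": 12, "H": 24}  # the 12 H2O driven off by strong heating


def molar_mass(composition):
    """Sum atomic masses weighted by the number of atoms of each element."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())


total = molar_mass(potassium_alum)             # roughly 474 g/mol
water = molar_mass(water_of_crystallization)   # roughly 216 g/mol
print(f"Molar mass of potassium alum: {total:.1f} g/mol")
print(f"Mass fraction lost as water of crystallization: {water / total:.1%}")  # about 46%
```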
Crystal structure
Alums crystallize in one of three different crystal structures. These classes are called α-, β- and γ-alums. The first X-ray crystal structures of alums were reported in 1927 by James M. Cork and Lawrence Bragg, and were used to develop the phase retrieval technique isomorphous replacement.
Solubility
The solubility of the various alums in water varies greatly, sodium alum being readily soluble in water, while caesium and rubidium alums are only slightly soluble. The various solubilities are shown in the following table.
At each temperature, 100 parts of water dissolve:
{| class="wikitable" style="text-align:right;"
|-
! Temperature !! Ammonium !! Potassium !! Rubidium !! Caesium
|-
| 0 °C || 2.62 || 3.90 || 0.71 || 0.19
|-
| 10 °C || 4.50 || 9.52 || 1.09 || 0.29
|-
| 50 °C || 15.9 || 44.11 || 4.98 || 1.235
|-
| 80 °C || 35.20 || 134.47 || 21.60 || 5.29
|-
| 100 °C || 70.83 || 357.48 || - || -
|}
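Using the tabulated values, a solubility at an intermediate temperature can be roughly estimated by linear interpolation between the two nearest entries. The sketch below is purely illustrative: real solubility curves are not linear over such wide temperature intervals, and the estimate_solubility helper is a made-up name for this example.

```python
# Parts of alum dissolved by 100 parts of water, taken from the table above.
SOLUBILITY = {  # temperature in °C -> parts dissolved
    "ammonium":  {0: 2.62, 10: 4.50, 50: 15.9, 80: 35.20, 100: 70.83},
    "potassium": {0: 3.90, 10: 9.52, 50: 44.11, 80: 134.47, 100: 357.48},
}


def estimate_solubility(alum, temperature_c):
    """Linearly interpolate between the two nearest tabulated temperatures (illustrative only)."""
    points = sorted(SOLUBILITY[alum].items())
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if t0 <= temperature_c <= t1:
            return s0 + (s1 - s0) * (temperature_c - t0) / (t1 - t0)
    raise ValueError("temperature outside the tabulated range")


# Rough estimate for potassium alum at 30 °C, between the 10 °C and 50 °C entries
print(f"{estimate_solubility('potassium', 30):.1f} parts per 100 parts of water")
```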
Uses
Aluminium-based alums have been used since antiquity, and are still important for many industrial processes. The most widely used alum is potassium alum. It has been used since antiquity as a flocculant to clarify turbid liquids, as a mordant in dyeing, and in tanning. It is still widely used in water treatment, for medicine, for cosmetics (in deodorant), for food preparation (in baking powder and pickling), and to fire-proof paper and cloth.
Alum is also used as a styptic, in styptic pencils available from pharmacists, or as an alum block, available from barber shops and gentlemen's outfitters, to stem bleeding from shaving nicks; and as an astringent. An alum block can be used directly as a perfume-free deodorant (antiperspirant), and unprocessed mineral alum is sold in Indian bazaars for just that purpose. Throughout Island Southeast Asia, potassium alum is most widely known as tawas and has numerous uses. It is used as a traditional antiperspirant and deodorant, and in traditional medicine for open wounds and sores. The crystals are usually ground into a fine powder before use.
During the 19th century, alum was used along with other substances like plaster of Paris to adulterate certain food products, particularly bread. It was used to make lower-grade flour appear whiter, allowing the producers to spend less on whiter flour. Because it retains water, it would make the bread heavier, meaning that merchants could charge more for it in their shops. The amount of alum present in each loaf of bread could reach concentrations that would be toxic to humans and cause chronic diarrhea, which could result in the death of young children.
Alum is used as a mordant in traditional textiles; and in Indonesia and the Philippines, solutions of tawas, salt, borax, and organic pigments were used to change the color of gold ornaments. In the Philippines, alum crystals were also burned and allowed to drip into a basin of water by babaylan for divination. It is also used in other rituals in the animistic anito religions of the islands.
For traditional Japanese art, alum and animal glue were dissolved in water, forming a liquid known as dousa (), and used as an undercoat for paper sizing.
Alum in the form of potassium aluminium sulfate or ammonium aluminium sulfate in a concentrated bath of hot water is regularly used by jewelers and machinists to dissolve hardened steel drill bits that have broken off in items made of aluminium, copper, brass, gold (any karat), silver (both sterling and fine) and stainless steel. This is because alum does not react chemically to any significant degree with any of these metals, but will corrode carbon steel. When heat is applied to an alum mixture holding a piece of work with a drill bit stuck in it, and the lost bit is small enough, it can sometimes be dissolved or removed within hours.
Related compounds
Many trivalent metals are capable of forming alums. The general form of an alum is AB(SO4)2·nH2O, where A is an alkali metal or ammonium, B is a trivalent metal, and n often is 12. The most important example is chrome alum, KCr(SO4)2·12H2O, a dark violet crystalline double sulfate of chromium and potassium, which was used in tanning.
In general, alums are formed more easily when the alkali metal atom is larger. This rule was first stated by Locke in 1902, who found that if a trivalent metal does not form a caesium alum, it will not form an alum with any other alkali metal or with ammonium either.
Selenate-containing alums
Selenate alums are also known, in which selenium takes the place of sulfur in the anion, giving selenate (SeO42−) instead of sulfate. They are strong oxidizing agents.
Mixed alums
In some cases, solid solutions of alums with different monovalent and trivalent cations may occur.
Other hydrates
In addition to the alums, which are dodecahydrates, double sulfates and selenates of univalent and trivalent cations occur with other degrees of hydration. These materials may also be referred to as alums, including the undecahydrates such as mendozite and kalinite, hexahydrates such as guanidinium and dimethylammonium "alums", tetrahydrates such as goldichite, monohydrates such as thallium plutonium sulfate and anhydrous alums (yavapaiites). These classes include differing, but overlapping, combinations of ions.
Other double sulfates
A pseudo alum is a double sulfate of the typical formula ASO4·B2(SO4)3·22H2O, such that A is a divalent metal ion, such as cobalt (wupatkiite), manganese (apjohnite), magnesium (pickingerite) or iron (halotrichite or feather alum), and B is a trivalent metal ion.
Double sulfates with the general formula A2SO4·B2(SO4)3·24H2O are also known, where A is a monovalent cation such as sodium, potassium, rubidium, caesium, thallium or ammonium (NH4+), methylammonium (CH3NH3+), hydroxylammonium (NH3OH+) or hydrazinium (N2H5+), and B is a trivalent metal ion, such as aluminium, chromium, titanium, manganese, vanadium, iron, cobalt, gallium, molybdenum, indium, ruthenium, rhodium, or iridium.
Analogous selenates also occur. The possible combinations of univalent cation, trivalent cation, and anion depends on the sizes of the ions.
A Tutton salt is a double sulfate of the typical formula A2B(SO4)2·6H2O, where A is a monovalent cation and B a divalent metal ion.
Double sulfates of the composition A2B2(SO4)3, such that A is a monovalent cation and B is a divalent metal ion, are referred to as langbeinites, after the prototypical potassium magnesium sulfate.
See also
Alunite
List of minerals
Gum bichromate – photo prints and other similar processes use alums, sometimes as colloid (gelatin, albumen) hardeners
Footnotes
References
External links
Sulfates
Sulfate minerals
Traditional medicine
Astringent flavors | Alum | [
"Chemistry"
] | 3,116 | [
"Sulfates",
"Double salts",
"Salts"
] |
58,739 | https://en.wikipedia.org/wiki/Timeline%20of%20microscope%20technology | Timeline of microscope technology
c. 700 BC: The "Nimrud lens" of Assyrians manufacture, a rock crystal disk with a convex shape believed to be a burning or magnifying lens.
13th century: The increase in use of lenses in eyeglasses probably led to the widespread use of simple microscopes (single-lens magnifying glasses) with limited magnification.
1590: earliest date of a claimed Hans Martens/Zacharias Janssen invention of the compound microscope (claim made in 1655).
After 1609: Galileo Galilei is described as being able to close-focus his telescope to view small objects at close range and/or as looking through the wrong end in reverse to magnify small objects. A telescope used in this fashion is the same as a compound microscope, but historians debate whether Galileo was magnifying small objects or viewing nearby objects with his terrestrial telescope (convex objective/concave eyepiece) reversed.
1619: Earliest recorded description of a compound microscope, Dutch Ambassador Willem Boreel sees one in London in the possession of Dutch inventor Cornelis Drebbel, an instrument about eighteen inches long, two inches in diameter, and supported on three brass dolphins.
1621: Cornelis Drebbel presents, in London, a compound microscope with a convex objective and a convex eyepiece (a "Keplerian" microscope).
c.1622: Drebbel presents his invention in Rome.
1624: Galileo improves on a compound microscope he sees in Rome and presents his occhiolino to Prince Federico Cesi, founder of the Accademia dei Lincei (in English, The Linceans).
1625: Francesco Stelluti and Federico Cesi publish Apiarium, the first account of observations using a compound microscope
1625: Giovanni Faber of Bamberg (1574–1629) of the Linceans, after seeing Galileo's occhiolino, coins the word microscope by analogy with telescope.
1655: In an investigation by Willem Boreel, Dutch spectacle-maker Johannes Zachariassen claims his father, Zacharias Janssen, invented the compound microscope in 1590. Zachariassen's claimed dates are so early it is sometimes assumed, for the claim to be true, that his grandfather, Hans Martens, must have invented it. Findings are published by writer Pierre Borel. Discrepancies in Boreel's investigation and Zachariassen's testimony (including misrepresenting his date of birth and role in the invention) have led some historians to consider this claim dubious.
1661: Marcello Malpighi observed capillary structures in frog lungs.
1665: Robert Hooke publishes Micrographia, a collection of biological drawings. He coins the word cell for the structures he discovers in cork bark.
1674: Antonie van Leeuwenhoek improves on a simple microscope for viewing biological specimens (see Van Leeuwenhoek's microscopes).
1825: Joseph Jackson Lister develops combined lenses that cancelled spherical and chromatic aberration.
1846: Carl Zeiss founded Carl Zeiss AG, to mass-produce microscopes and other optical instruments.
1850s: John Leonard Riddell, Professor of Chemistry at Tulane University, invents the first practical binocular microscope.
1863: Henry Clifton Sorby develops a metallurgical microscope to observe structure of meteorites.
1860s: Ernst Abbe, a colleague of Carl Zeiss, discovers the Abbe sine condition, a breakthrough in microscope design, which until then was largely based on trial and error. The company of Carl Zeiss exploited this discovery and becomes the dominant microscope manufacturer of its era.
1928: Edward Hutchinson Synge publishes theory underlying the near-field scanning optical microscope
1931: Max Knoll and Ernst Ruska start to build the first electron microscope. It is a transmission electron microscope (TEM).
1936: Erwin Wilhelm Müller invents the field emission microscope.
1938: James Hillier builds another TEM.
1951: Erwin Wilhelm Müller invents the field ion microscope and is the first to see atoms.
1953: Frits Zernike, professor of theoretical physics, receives the Nobel Prize in Physics for his invention of the phase-contrast microscope.
1955: Georges Nomarski, professor of microscopy, published the theoretical basis of differential interference contrast microscopy.
1957: Marvin Minsky, a professor at MIT, invents the confocal microscope, an optical imaging technique for increasing optical resolution and contrast of a micrograph by means of using a spatial pinhole to block out-of-focus light in image formation. This technology is a predecessor to today's widely used confocal laser scanning microscope.
1967: Erwin Wilhelm Müller adds time-of-flight spectroscopy to the field ion microscope, making the first atom probe and allowing the chemical identification of each individual atom.
1981: Gerd Binnig and Heinrich Rohrer develop the scanning tunneling microscope (STM).
1986: Gerd Binnig, Calvin Quate, and Christoph Gerber invent the atomic force microscope (AFM).
1988: Alfred Cerezo, Terence Godfrey, and George D. W. Smith applied a position-sensitive detector to the atom probe, making it able to resolve materials in three dimensions with near-atomic resolution.
1988: Kingo Itaya invents the electrochemical scanning tunneling microscope.
1991: Kelvin probe force microscope invented.
2008: The scanning helium microscope is introduced.
References
Microscopy
Microscope | Timeline of microscope technology | [
"Chemistry"
] | 1,109 | [
"Microscopy"
] |
58,740 | https://en.wikipedia.org/wiki/Low-temperature%20technology%20timeline | The following is a timeline of low-temperature technology and cryogenic technology (refrigeration down to close to absolute zero, i.e. –273.15 °C, −459.67 °F or 0 K). It also lists important milestones in thermometry, thermodynamics, statistical physics and calorimetry, that were crucial in development of low temperature systems.
Prior to the 19th century
– Zimri-Lim, ruler of Mari in Syria commanded the construction of one of the first ice houses near the Euphrates.
– The yakhchal (meaning "ice pit" in Persian) is an ancient Persian type of refrigerator. The structure was formed from a mortar resistant to heat transmission, in the shape of a dome. Snow and ice was stored beneath the ground, effectively allowing access to ice even in hot months and allowing for prolonged food preservation. Often a badgir was coupled with the yakhchal in order to slow the heat loss. Modern refrigerators are still called yakhchal in Persian.
– Hero of Alexandria knew of the principle that certain substances, notably air, expand and contract, and described a demonstration in which a closed tube partially filled with air had its end in a container of water. The expansion and contraction of the air caused the position of the water/air interface to move along the tube. This was the first established principle relating gas behaviour to temperature, and later the principle behind the first thermometers. The idea may predate him (Empedocles of Agrigentum, in his book On Nature, c. 460 BC).
1396 AD – Ice storage warehouses called "Dong-bing-go-tango" (meaning "east ice storage warehouse" in Korean) and Seo-bing-go ("west ice storage warehouse") were built in Han-Yang (currently Seoul, Korea). The buildings housed ice that was collected from the frozen Han River in January (by lunar calendar). The warehouse was well-insulated, providing the royal families with ice into the summer months. These warehouses were closed in 1898 AD but the buildings are still intact in Seoul.
1593 – Galileo Galilei builds a first modern thermoscope. But it is possible the invention was by Santorio Santorio or independently around same time by Cornelis Drebbel. The principle of operation was known in ancient Greece.
–1613 – Francesco Sagredo or Santorio Santorio, put a numerical scale on a thermoscope.
1617 – Giuseppe Biancani publishes first clear diagram of thermoscope
1638 – Robert Fludd describes thermometer with a scale, using air thermometer principle with column of air and liquid water.
1650 – Otto von Guericke designed and built the world's first vacuum pump and created the world's first ever vacuum known as the Magdeburg hemispheres to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'.
1656 – Robert Boyle and Robert Hooke built an air pump on this design.
1662 – Boyle's law (gas law relating pressure and volume) is demonstrated using a vacuum pump
1665 – Boyle theorizes a minimum temperature in New Experiments and Observations touching Cold.
1679 – Denis Papin – safety valve
1702 – Guillaume Amontons first calculates absolute zero to be −240 °C using an air thermometer of his own invention, theorizing that at this point the gas would reach zero volume and zero pressure.
1714 – Daniel Gabriel Fahrenheit invented the first reliable thermometer, using mercury instead of alcohol and water mixtures
1724 – Daniel Gabriel Fahrenheit proposes a Fahrenheit scale, which had finer scale and greater reproducibility than competitors.
1730 – René Antoine Ferchault de Réaumur invented an alcohol thermometer and temperature scale ultimately proved to be less reliable than Fahrenheit's mercury thermometer.
1742 – Anders Celsius proposed a scale with zero at the boiling point and 100 degrees at the freezing point of water. It was later reversed, on input from the Swedish Academy of Sciences.
1755 – William Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air.
1756 – The first documented public demonstration of artificial refrigeration by William Cullen
1782 – Antoine Lavoisier and Pierre-Simon Laplace invent the ice-calorimeter
1784 – Gaspard Monge liquefied the first pure gas with Clouet producing liquid sulfur dioxide.
1787 – Charles's law (Gas law, relating volume and temperature)
1799 – Martin van Marum and Adriaan Paets van Troostwijk compressed ammonia to see if it followed Boyle's law. They found that at room temperature and 7 atm, gaseous ammonia condensed to a liquid.
19th century
1802 – John Dalton wrote "the reducibility of all elastic fluids of whatever kind, into liquids"
1802 – Gay-Lussac's law (Gas law, relating temperature and pressure).
1803 – Domestic ice box
1803 – Thomas Moore of Baltimore, Md. received a patent on refrigeration.
1805 – Oliver Evans designed the first closed circuit refrigeration machine based on the vapor-compression refrigeration cycle.
1809 – Jacob Perkins patented the first refrigerating machine
1810 – John Leslie freezes water to ice by using an airpump.
1811 – Avogadro's law
1823 – Michael Faraday liquefied Cl2
1824 – Sadi Carnot – the Carnot Cycle
1834 – Ideal gas law by Émile Clapeyron
1834 – Émile Clapeyron characterizes phase transitions between two phases in form of Clausius–Clapeyron relation.
1834 – Jacob Perkins obtained the first patent for a vapor-compression refrigeration system.
1834 – Jean-Charles Peltier discovers the Peltier effect
1844 – Charles Piazzi Smyth proposes comfort cooling
c.1850 – Michael Faraday makes a hypothesis that freezing substances increases their dielectric constant.
1851 – John Gorrie patented his mechanical refrigeration machine in the US to make ice to cool the air
1852 – James Prescott Joule and William Thomson, 1st Baron Kelvin discover Joule–Thomson effect
1856 – James Harrison patented an ether liquid-vapour compression refrigeration system and developed the first practical ice-making and refrigeration room for use in the brewing and meat-packing industries of Geelong, Victoria, Australia.
1856 – August Krönig lays a simplified foundation of the kinetic theory of gases.
1857 – Rudolf Clausius creates a more sophisticated theory of gases including all degrees of freedom, and also derives the Clausius–Clapeyron relation from basic principles.
1857 – Carl Wilhelm Siemens, the Siemens cycle
1858 – Julius Plücker observed for the first time some pumping effect due to electrical discharge.
1859 – James Clerk Maxwell determines the distribution of velocities and kinetic energies in a gas, explains temperature and heat as emergent properties, and formulates one of the first laws of statistical mechanics.
1859 – Ferdinand Carré – The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia")
1862 – Alexander Carnegie Kirk invents the Air cycle machine
1864 – Charles Tellier patented a refrigeration system using dimethyl ether
1867 – Thaddeus S. C. Lowe patented a refrigeration system using carbon dioxide, and in 1869 made an ice-making machine using dry carbon dioxide. The same year Lowe bought a steamship and fitted it with a compressor-based refrigeration device for the transport of frozen meat.
1867 – French immigrant Eugene Dominic Nicolle dissolved ammonia in water to reach a temperature of −20 °C in a sealed room. Together with industrialist Sir Thomas Mort, another recent arrival in Australia, who in 1867 built the first freezerworks using this idea in Balmain, and with the help of NSW politician Augustus Morris, he overcame the public's mistrust of frozen food by revealing the fact to an influential audience (after their state meal) on 2 September 1875.
1869 – Charles Tellier installed a cold storage plant in France.
1869 – Thomas Andrews discovers existence of a critical point in fluids.
1871 – Carl von Linde built his first ammonia compression machine.
c. 1873 – Van der Waals proposes a real-gas model, later named the Van der Waals equation.
1875 – Raoul Pictet develops a refrigeration machine using sulfur dioxide to avoid the high-pressure problems of ammonia when used in tropical climates (mainly for the purpose of shipping meat).
1876 – Carl von Linde patented equipment to liquefy air using the Joule Thomson expansion process and regenerative cooling
1877 – Raoul Pictet and Louis Paul Cailletet, working separately, develop two methods to liquefy oxygen.
1879 – Bell-Coleman machine
1882 – William Soltau Davidson fitted a compression refrigeration unit to the New Zealand vessel Dunedin
1883 – Zygmunt Wróblewski condenses experimentally useful quantities of liquid oxygen
1885 – Zygmunt Wróblewski published hydrogen's critical temperature as 33 K; critical pressure, 13.3 atmospheres; and boiling point, 23 K.
1888 – Loftus Perkins develops the "Arktos" cold chamber for preserving food, using an early ammonia absorption system.
1892 – James Dewar invents the vacuum-insulated, silver-plated glass Dewar flask
1894 – Marcel Audiffren, a French Cistercian monk, patented a hand-cranked device that did not lose coolant to the atmosphere.
1895 – Carl von Linde files for patent protection of the Hampson–Linde cycle for liquefaction of atmospheric air or other gases (approved in 1903).
1898 – James Dewar condenses liquid hydrogen by using regenerative cooling and his invention, the vacuum flask.
20th century
1905 – Carl von Linde obtains pure liquid oxygen and nitrogen.
1906 – Willis Carrier patents the basis for modern air conditioning.
1908 – Heike Kamerlingh Onnes liquefies helium.
1911 – Heike Kamerlingh Onnes discloses his research on a low-temperature phenomenon in metals characterised by the complete absence of electrical resistance, calling it superconductivity.
1915 – Wolfgang Gaede – the Diffusion pump
1920 – Edmund Copeland and Harry Edwards use iso-butane in small refrigerators.
1922 – Baltzar von Platen and Carl Munters invent the 3 fluids absorption chiller, exclusively driven by heat.
1924 – Fernand Holweck – the Holweck pump
1926 – Albert Einstein and Leó Szilárd invent the Einstein refrigerator.
1926 – Willem Hendrik Keesom solidifies helium.
1926 – General Electric Company introduced the first hermetic compressor refrigerator
1929 – David Forbes Keith of Toronto, Ontario, Canada received a patent for the Icy Ball which helped hundreds of thousands of families through the Dirty Thirties.
1933 – William Giauque and others – Adiabatic demagnetization refrigeration
1937 – Pyotr Leonidovich Kapitsa, John F. Allen, and Don Misener discover superfluidity using helium-4 at 2.2 K
1937 – Frans Michel Penning invents a type of cold cathode vacuum gauge known as Penning gauge
1944 – Manne Siegbahn, the Siegbahn pump
1949 – S.G. Sydoriak, E.R. Grilly, E.F. Hammel, first measurements on pure 3He in the 1 K range
1950 – Invention of the so-called Gifford-McMahon cooler by K.W. Taconis (patent US2,567,454)
1951 – Heinz London invents the principle of the dilution refrigerator
1955 – Willi Becker turbomolecular pump concept
1956 – G.K. Walters, W.M. Fairbank, discovery of phase separation in 3He-4He mixtures
1957 – Lewis D. Hall, Robert L. Jepsen and John C. Helmer ion pump based on Penning discharge
1959 – Kleemenko cycle
1960 – Reinvention of the Gifford-McMahon cooler by H.O. McMahon and W.E. Gifford
1965 – D.O. Edwards, and others, discovery of finite solubility of 3He in 4He at 0K
1965 – P. Das, R. de Bruyn Ouboter, K.W. Taconis, one-shot dilution refrigerator
1966 – H.E. Hall, P.J. Ford, K. Thomson, continuous dilution refrigerator
1972 – David Lee, Robert Coleman Richardson and Douglas Osheroff discover superfluidity in helium-3 at 0.002 K.
1973 – Linear compressor
1978 – Laser cooling demonstrated in the groups of Wineland and Dehmelt.
1983 – Orifice-type pulse tube refrigerator invented by Mikulin, Tarasov, and Shkrebyonock
1986 – Karl Alexander Müller and J. Georg Bednorz discover high-temperature superconductivity
1995 – Eric Cornell and Carl Wieman create the first Bose–Einstein condensate, using a dilute gas of Rubidium-87 cooled to 170 nK. They won the Nobel Prize for Physics in 2001 for BEC.
1999 – D.J. Cousins and others, dilution refrigerator reaching 1.75 mK
1999 – A world-record low temperature of 100 picokelvins (pK), or 0.000 000 000 1 of a kelvin, was achieved by cooling the nuclear spins in a piece of rhodium metal.
21st century
2000 – Nuclear spin temperatures below 100 pK were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom – a quantum property called nuclear spin – not the overall average thermodynamic temperature for all possible degrees in freedom.
2014 – Scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to 6 mK for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume.
2015 – Experimental physicists at the Massachusetts Institute of Technology (MIT) successfully cooled molecules in a gas of sodium potassium to a temperature of 500 nanokelvins; the gas is expected to exhibit an exotic state of matter if the molecules are cooled a bit further.
2015 – A team of atomic physicists from Stanford University used a matter-wave lensing technique to cool a sample of rubidium atoms to an effective temperature of 50 pK along two spatial dimensions.
2017 – Cold Atom Laboratory (CAL), an experimental instrument launched to the International Space Station (ISS) in 2018. The instrument creates extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates that are an order of magnitude colder than those created in laboratories on Earth. In this space-based laboratory, interaction times of up to 20 seconds and temperatures as low as 1 picokelvin (10^−12 K) are projected to be achievable, which could lead to the exploration of unknown quantum mechanical phenomena and tests of some of the most fundamental laws of physics.
See also
List of timelines
Liquefaction of gases
History of superconductivity
History of thermodynamics
Timeline of temperature and pressure measurement technology
Timeline of thermodynamics, statistical mechanics, and random processes
Industrial gas
References
External links
Refrigeration History
Low-temperature technology
Cryogenics
Cooling technology
Industrial gases
Refrigerants | Low-temperature technology timeline | [
"Physics",
"Chemistry"
] | 3,222 | [
"Chemical process engineering",
"Applied and interdisciplinary physics",
"Cryogenics",
"Industrial gases"
] |
58,742 | https://en.wikipedia.org/wiki/Timeline%20of%20materials%20technology | Major innovations in materials technology
BC
28,000 BC – People wear beads, bracelets, and pendants
14,500 BC – First pottery, made by the Jōmon people of Japan.
6th millennium BC – Copper metallurgy is invented and copper is used for ornamentation (see Pločnik article)
2nd millennium BC – Bronze is used for weapons and armor
16th century BC – The Hittites develop crude iron metallurgy
13th century BC – Invention of steel when iron and charcoal are combined properly
10th century BC – Glass production begins in ancient Near East
1st millennium BC – Pewter beginning to be used in China and Egypt
1000 BC – The Phoenicians introduce dyes made from the purple murex.
3rd century BC – Wootz steel, the first crucible steel, is invented in ancient India
50s BC – Glassblowing techniques flourish in Phoenicia
20s BC – Roman architect Vitruvius describes low-water-content method for mixing concrete
1st millennium
3rd century – Cast iron widely used in Han dynasty China
300 – Greek alchemist Zosimos, summarizing the work of Egyptian alchemists, describes arsenic and lead acetate
4th century – Iron pillar of Delhi is the oldest surviving example of corrosion-resistant steel
8th century – Porcelain is invented in Tang dynasty China
8th century – Tin-glazing of ceramics invented by Muslim chemists and potters in Basra, Iraq
9th century – Stonepaste ceramics invented in Iraq
900 – First systematic classification of chemical substances appears in the works attributed to Jābir ibn Ḥayyān (Latin: Geber) and in those of the Persian alchemist and physician Abū Bakr al-Rāzī (c. 865–925, Latin: Rhazes)
900 – Synthesis of ammonium chloride from organic substances described in the works attributed to Jābir ibn Ḥayyān (Latin: Geber)
900 – Abū Bakr al-Rāzī describes the preparation of plaster of Paris and metallic antimony
9th century – Lustreware appears in Mesopotamia
2nd millennium
1000 – Gunpowder is developed in China
1340 – In Liège, Belgium, the first blast furnaces for the production of iron are developed
1448 – Johann Gutenberg develops type metal alloy
1450s – Cristallo, a clear soda-based glass, is invented by Angelo Barovier
1540 – Vannoccio Biringuccio publishes first systematic book on metallurgy
1556 – Georg Agricola's influential book on metallurgy
1590 – Glass lenses are developed in the Netherlands and used for the first time in microscopes and telescopes
1664 – In the pipes supplying water to the gardens at Versailles, cast iron is used
18th century
1717 – Abraham Darby makes iron with coke, a derivative of coal
1738 – Metallic zinc processed by distillation from calamine and charcoal patented by William Champion
1740 – Crucible steel technique developed by Benjamin Huntsman
1774 –
Joseph Priestley discovers oxygen
Johann Gottlieb Gahn discovers manganese
Karl Wilhelm Scheele discovers chlorine
1779 – Hydraulic cement (stucco) patented by Bryan Higgins for use as an exterior plaster
1799 – Acid battery made from copper/zinc by Alessandro Volta
19th century
1821 – Thermocouple invented by Thomas Johann Seebeck
1824 – Portland cement patent issued to Joseph Aspdin
1825 – Metallic aluminum produced by Hans Christian Ørsted
1839 – Vulcanized rubber invented by Charles Goodyear
1839 – Silver-based photographic processes invented by Louis Daguerre and William Fox Talbot
1855 – Bessemer process for mass production of steel patented by Henry Bessemer
1861 – Color photography demonstrated by James Clerk Maxwell
1883 – First solar cells using selenium wafers made by Charles Fritts
1893 – Thermite Welding developed and soon used to weld rails
20th century
1902 – Synthetic rubies created by the Verneuil process developed by Auguste Verneuil
1908 – Cellophane invented by Jacques E. Brandenberger
1909 – Bakelite, a hard thermosetting plastic, presented by Leo Baekeland
1911 – Superconductivity discovered by Heike Kamerlingh Onnes
1912 – Stainless steel invented by Harry Brearley
1916 – Method for growing single crystals of metals invented by Jan Czochralski
1919 – The merchant ship Fullagar has the first all welded hull.
1924 – Pyrex, a glass with a very low coefficient of thermal expansion, invented by scientists at Corning Incorporated
1931 – Synthetic rubber called neoprene developed by Julius Nieuwland (see also: E.K. Bolton, Wallace Carothers)
1931 – Nylon developed by Wallace Carothers
1935 – Langmuir–Blodgett film coating of glass was developed by Katharine Burr Blodgett, creating "invisible glass" which is >99% transmissive
1938 – The process for making poly-tetrafluoroethylene, better known as Teflon discovered by Roy Plunkett
1939 – Dislocations in metals confirmed by Robert W. Cahn
1947 – First germanium point-contact transistor invented
1947 – First commercial application of a piezoelectric ceramic: barium titanate used as a phonograph pickup
1951 – Individual atoms seen for the first time using the field ion microscope
1953 – Metallic catalysts which greatly improve the strength of polyethylene polymers discovered by Karl Ziegler
1954 – Silicon solar cells with 6% efficiency made at Bell Laboratories
1954 – Argon oxygen decarburization (AOD) refining invented by scientists at the Union Carbide Corporation
1959 – Float glass process patented by the Pilkington Brothers
1962 – SQUID superconducting quantum interference device invented
1966 – Stephanie Kwolek invented a fibre that would later become known as Kevlar
1968 – Liquid crystal display developed by RCA
1970 – Silica optical fibers grown by Corning Incorporated
1980 – Duplex stainless steels developed which resist oxidation in chlorides
1984 – Fold-forming system developed by Charles Lewton-Brain to produce complex three dimensional forms rapidly from sheet metal
1985 – The first fullerene molecule discovered by scientists at Rice University (see also: Timeline of carbon nanotubes)
1986 – The first high temperature superconductor is discovered by Georg Bednorz and K. Alex Müller
See also
Timeline of scientific discoveries
Timeline of historic inventions
List of inventions named after people
Materials science
Roman metallurgy
References
Materials Technology
Materials science | Timeline of materials technology | [
"Physics",
"Materials_science",
"Engineering"
] | 1,292 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
58,743 | https://en.wikipedia.org/wiki/Marburg%20virus%20disease | Marburg virus disease (MVD), formerly Marburg hemorrhagic fever (MHF) is a viral hemorrhagic fever in human and non-human primates caused by either of the two Marburgviruses: Marburg virus (MARV) and Ravn virus (RAVV). Its clinical symptoms are very similar to those of Ebola virus disease (EVD).
Egyptian fruit bats are believed to be the normal carrier in nature and Marburg virus RNA has been isolated from them.
Signs and symptoms
The most detailed study on the frequency, onset, and duration of MVD clinical signs and symptoms was performed during the 1998–2000 mixed MARV/RAVV disease outbreak. A skin rash, red or purple spots (e.g. petechiae or purpura), bruises, and hematomas (especially around needle injection sites) are typical hemorrhagic manifestations. However, contrary to popular belief, hemorrhage does not lead to hypovolemia and is not the cause of death (total blood loss is minimal except during labor). Instead, death occurs due to multiple organ dysfunction syndrome (MODS) due to fluid redistribution, hypotension, disseminated intravascular coagulation, and focal tissue necroses.
Clinical phases of Marburg hemorrhagic fever's presentation are described below. Note that phases overlap due to variability between cases.
Incubation: 2–21 days, averaging 5–9 days.
Generalization Phase: Day 1 up to Day 5 from the onset of clinical symptoms. MHF presents with a high fever of about 104 °F (40 °C) and a sudden, severe headache, with accompanying chills, fatigue, nausea, vomiting, diarrhea, pharyngitis, maculopapular rash, abdominal pain, conjunctivitis, and malaise.
Early Organ Phase: Day 5 up to Day 13. Symptoms include prostration, dyspnea, edema, conjunctival injection, viral exanthema, and CNS symptoms, including encephalitis, confusion, delirium, apathy, and aggression. Hemorrhagic symptoms typically occur late and herald the end of the early organ phase, leading either to eventual recovery or worsening and death. Symptoms include bloody stools, ecchymoses, blood leakage from venipuncture sites, mucosal and visceral hemorrhaging, and possibly hematemesis.
Late Organ Phase: Day 13 up to Day 21+. Symptoms bifurcate into two constellations for survivors and fatal cases. Survivors will enter a convalescence phase, experiencing myalgia, fibromyalgia, hepatitis, asthenia, ocular symptoms, and psychosis. Fatal cases continue to deteriorate, experiencing continued fever, obtundation, coma, convulsions, diffuse coagulopathy, metabolic disturbances, shock and death, with death typically occurring between days 8 and 16.
The WHO also writes that at the phase of gastrointestinal symptoms' predomination, "the appearance of patients...has been described as showing 'ghost-like' drawn features, deep-set eyes, expressionless faces, and extreme lethargy."
Causes
MVD is caused by two viruses, Marburg virus (MARV) and Ravn virus (RAVV), of the family Filoviridae.
Marburgviruses are endemic in arid woodlands of equatorial Africa. Most marburgvirus infections were repeatedly associated with people visiting natural caves or working in mines. In 2009, the successful isolation of infectious MARV and RAVV was reported from healthy Egyptian fruit bat caught in caves. This isolation strongly suggests that Old World fruit bats are involved in the natural maintenance of marburgviruses and that visiting bat-infested caves is a risk factor for acquiring marburgvirus infections. Further studies are necessary to establish whether Egyptian rousettes are the actual hosts of MARV and RAVV or whether they get infected via contact with another animal and therefore serve only as intermediate hosts. Another risk factor is contact with nonhuman primates, although only one outbreak of MVD (in 1967) was due to contact with infected monkeys.
Contrary to Ebola virus disease (EVD), which has been associated with heavy rains after long periods of dry weather, triggering factors for spillover of marburgviruses into the human population have not yet been described.
Transmission
The details of the initial transmission of MVD to humans remain incompletely understood. Transmission most likely occurs from Egyptian fruit bats or another natural host, such as non-human primates or through the consumption of bushmeat, but the specific routes and body fluids involved are unknown. Human-to-human transmission of MVD occurs through direct contact with infected bodily fluids such as blood. Transmission events are relatively rare – there have been only 11 recorded outbreaks of MARV between 1975 and 2011, with one event involving both MARV and RAVV.
Diagnosis
MVD is clinically indistinguishable from Ebola virus disease (EVD), and it can also easily be confused with many other diseases prevalent in Equatorial Africa, such as other viral hemorrhagic fevers, falciparum malaria, typhoid fever, shigellosis, rickettsial diseases such as typhus, cholera, gram-negative sepsis, borreliosis such as relapsing fever or EHEC enteritis. Other infectious diseases that ought to be included in the differential diagnosis include leptospirosis, scrub typhus, plague, Q fever, candidiasis, histoplasmosis, trypanosomiasis, visceral leishmaniasis, hemorrhagic smallpox, measles, and fulminant viral hepatitis. Non-infectious diseases that can be confused with MVD are acute promyelocytic leukemia, hemolytic uremic syndrome, snake envenomation, clotting factor deficiencies/platelet disorders, thrombotic thrombocytopenic purpura, hereditary hemorrhagic telangiectasia, Kawasaki disease, and even warfarin intoxication.
The most important indicator that may lead to the suspicion of MVD at clinical examination is the medical history of the patient, in particular the travel and occupational history (which countries and caves were visited?) and the patient's exposure to wildlife (exposure to bats or bat excrements?). MVD can be confirmed by isolation of marburgviruses from or by detection of marburgvirus antigen or genomic or subgenomic RNAs in patient blood or serum samples during the acute phase of MVD. Marburgvirus isolation is usually performed by inoculation of grivet kidney epithelial Vero E6 or MA-104 cell cultures or by inoculation of human adrenal carcinoma SW-13 cells, all of which react to infection with characteristic cytopathic effects. Filovirions can easily be visualized and identified in cell culture by electron microscopy due to their unique filamentous shapes, but electron microscopy cannot differentiate the various filoviruses alone despite some overall length differences. Immunofluorescence assays are used to confirm marburgvirus presence in cell cultures. During an outbreak, virus isolation and electron microscopy are most often not feasible options. The most common diagnostic methods are therefore RT-PCR in conjunction with antigen-capture ELISA, which can be performed in field or mobile hospitals and laboratories. Indirect immunofluorescence assays (IFAs) are not used for diagnosis of MVD in the field anymore.
Classification
Marburg virus disease (MVD) is the official name listed in the World Health Organization's International Statistical Classification of Diseases and Related Health Problems 10 (ICD-10) for the human disease caused by any of the two marburgviruses; Marburg virus (MARV) and Ravn virus (RAVV). In the scientific literature, Marburg hemorrhagic fever (MHF) is often used as an unofficial alternative name for the same disease. Both disease names are derived from the German city Marburg, where MARV was first discovered.
Prevention
Marburgviruses are highly infectious, but not very contagious. They do not get transmitted by aerosol during natural MVD outbreaks. Due to the absence of an approved vaccine, prevention of MVD therefore relies predominantly on quarantine of confirmed or high probability cases, proper personal protective equipment, and sterilization and disinfection.
Vaccine development
There are currently no Food and Drug Administration-approved vaccines for the prevention of MVD. Many candidate vaccines have been developed and tested in various animal models. Of those, the most promising ones are DNA vaccines or based on Venezuelan equine encephalitis virus replicons, vesicular stomatitis Indiana virus (VSIV) or filovirus-like particles (VLPs) as all of these candidates could protect nonhuman primates from marburgvirus-induced disease. DNA vaccines have entered clinical trials.
There is not yet an approved vaccine, because of economic factors in vaccine development and because filoviruses caused relatively few deaths before the 2010s.
Endemic zones
The natural maintenance hosts of marburgviruses remain to be identified unequivocally. However, the isolation of both MARV and RAVV from bats and the association of several MVD outbreaks with bat-infested mines or caves strongly suggests that bats are involved in Marburg virus transmission to humans. Avoidance of contact with bats and abstaining from visits to caves is highly recommended, but may not be possible for those working in mines or people dependent on bats as a food source.
During outbreaks
Since marburgviruses are not spread via aerosol, the most straightforward prevention method during MVD outbreaks is to avoid direct (skin-to-skin) contact with patients, their excretions and body fluids, and any possibly contaminated materials and utensils. Patients should be isolated, but still are safe to be visited by family members. Medical staff should be trained in and apply strict barrier nursing techniques (disposable face mask, gloves, goggles, and a gown at all times). Traditional burial rituals, especially those requiring embalming of bodies, should be discouraged or modified, ideally with the help of local traditional healers.
In the laboratory
Marburgviruses are World Health Organization Risk Group 4 pathogens, requiring Biosafety Level 4-equivalent containment; laboratory researchers must be properly trained in BSL-4 practices and wear proper personal protective equipment.
Treatment
There is currently no effective marburgvirus-specific therapy for MVD. Treatment is primarily supportive in nature and includes minimizing invasive procedures, balancing fluids and electrolytes to counter dehydration, administration of anticoagulants early in infection to prevent or control disseminated intravascular coagulation, administration of procoagulants late in infection to control hemorrhaging, maintaining oxygen levels, pain management, and administration of antibiotics or antifungals to treat secondary infections.
Prognosis
Although supportive care can improve the chances of survival, Marburg virus disease is fatal in the majority of cases; the overall case fatality rate has been assessed at 61.9%.
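A case fatality rate is simply the number of deaths divided by the number of reported cases. As a minimal arithmetic sketch using figures quoted in the outbreak descriptions below (1967: 7 deaths among 31 cases; 1998–2000: 128 among 154; 2004–2005: 227 among 252), the following computes the rate for each outbreak; small differences from officially published figures are to be expected.

```python
# Deaths and reported cases for selected MVD outbreaks, as quoted in this article.
OUTBREAKS = {
    "1967 Marburg/Frankfurt/Belgrade": (7, 31),
    "1998-2000 Durba/Watsa (DRC)": (128, 154),
    "2004-2005 Angola": (227, 252),
}


def case_fatality_rate(deaths, cases):
    """Fraction of reported cases that were fatal."""
    return deaths / cases


for outbreak, (deaths, cases) in OUTBREAKS.items():
    print(f"{outbreak}: {case_fatality_rate(deaths, cases):.1%}")
# Prints roughly 22.6%, 83.1%, and 90.1% respectively.
```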
Epidemiology
Pandemic potential
The WHO identifies marburg virus disease as having pandemic potential.
Historical outbreaks
Notable outbreaks of MVD from 1967 to 2024 are summarized below:
1967 outbreak
MVD was first documented in 1967, when 31 people became ill in the German towns of Marburg and Frankfurt am Main, and in Belgrade, Yugoslavia. The outbreak involved 25 primary MARV infections and seven deaths, and six nonlethal secondary cases. The outbreak was traced to infected grivets (species Chlorocebus aethiops) imported from an undisclosed location in Uganda and used in developing poliomyelitis vaccines. The monkeys were received by Behringwerke, a Marburg company founded by the first winner of the Nobel Prize in Medicine, Emil von Behring. The company, which at the time was owned by Hoechst, was originally set up to develop sera against tetanus and diphtheria. Primary infections occurred in Behringwerke laboratory staff while working with grivet tissues or tissue cultures without adequate personal protective equipment. Secondary cases involved two physicians, a nurse, a post-mortem attendant, and the wife of a veterinarian. All secondary cases had direct contact, usually involving blood, with a primary case. Both physicians became infected through accidental skin pricks when drawing blood from patients.
1975 cases
In 1975, an Australian tourist became infected with MARV in Rhodesia (today Zimbabwe). He died in a hospital in Johannesburg, South Africa. His girlfriend and an attending nurse were subsequently infected with MVD, but survived.
1980 cases
A case of MARV infection occurred in 1980 in Kenya. A French man, who worked as an electrical engineer in a sugar factory in Nzoia (close to Bungoma) at the base of Mount Elgon (which contains Kitum Cave), became infected by unknown means and died on 15 January shortly after admission to Nairobi Hospital. The attending physician contracted MVD, but survived. A popular science account of these cases can be found in Richard Preston's book The Hot Zone (the French man is referred to under the pseudonym "Charles Monet", whereas the physician is identified under his real name, Shem Musoke).
1987 case
In 1987, a single lethal case of RAVV infection occurred in a 15-year-old Danish boy, who spent his vacation in Kisumu, Kenya. He had visited Kitum Cave on Mount Elgon prior to travelling to Mombasa, where he developed clinical signs of infection. The boy died after transfer to Nairobi Hospital. A popular science account of this case can be found in Richard Preston's book The Hot Zone (the boy is referred to under the pseudonym "Peter Cardinal").
1988 laboratory infection
In 1988, researcher Nikolai Ustinov infected himself lethally with MARV after accidentally pricking himself with a syringe used for inoculation of guinea pigs. The accident occurred at the Scientific-Production Association "Vektor" (today the State Research Center of Virology and Biotechnology "Vektor") in Koltsovo, USSR (today Russia). Very little information is publicly available about this MVD case because Ustinov's experiments were classified. A popular science account of this case can be found in Ken Alibek's book Biohazard.
1990 laboratory infection
Another laboratory accident occurred at the Scientific-Production Association "Vektor" (today the State Research Center of Virology and Biotechnology "Vektor") in Koltsovo, USSR, when a scientist contracted MARV by unknown means.
1998–2000 outbreak
A major MVD outbreak occurred among illegal gold miners around Goroumbwa mine in Durba and Watsa, Democratic Republic of Congo from 1998 to 2000, when co-circulating MARV and RAVV caused 154 cases of MVD and 128 deaths. The outbreak ended with the flooding of the mine.
2004–2005 outbreak
In early 2005, the World Health Organization (WHO) began investigating an outbreak of viral hemorrhagic fever in Angola, which was centered in the northeastern Uíge Province but also affected many other provinces. The Angolan government had to ask for international assistance, as there were only approximately 1,200 doctors in the entire country, with some provinces having as few as two. Health care workers also complained about a shortage of basic personal protective equipment. Médecins Sans Frontières (MSF) reported that when their team arrived at the provincial hospital at the center of the outbreak, they found it operating without water and electricity. Contact tracing was complicated by the fact that the country's roads and other infrastructure were devastated after nearly three decades of civil war and the countryside remained littered with land mines.
Americo Boa Vida Hospital in the Angolan capital, Luanda, set up a special isolation ward to treat patients from the countryside. Due to the high fatality rate of MVD, some people came to be suspicious of and hostile towards hospitals and medical workers. For instance, a specially-equipped isolation ward at the provincial hospital in Uíge was reported to be empty during much of the epidemic, even though the facility was at the center of the outbreak. WHO was forced to implement what it described as a "harm reduction strategy" by distributing disinfectants to affected families who refused hospital care. Of the 252 people who contracted MVD, 227 died.
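The fatality figures quoted for these outbreaks are simple ratios of deaths to recognized cases. As a minimal illustration (not part of the original reporting; the function name and the choice to express the ratio as a percentage are editorial), the numbers given in this article can be checked with a few lines of Python:

```python
def case_fatality_rate(deaths: int, cases: int) -> float:
    """Return the case fatality rate (CFR) as a percentage of recognized cases."""
    if cases <= 0:
        raise ValueError("cases must be a positive integer")
    return 100.0 * deaths / cases

# Figures as reported in the text above.
print(f"Angola 2004-2005: {case_fatality_rate(227, 252):.1f}%")   # roughly 90%
print(f"DRC 1998-2000:    {case_fatality_rate(128, 154):.1f}%")   # roughly 83%
```

A CFR computed this way depends on how completely mild or unrecognized infections are counted, so it should be read as a ratio over detected cases rather than the true infection fatality risk.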
2007 cases
In 2007, four miners became infected with marburgviruses in Kamwenge District, Uganda. The first case, a 29-year-old man, became symptomatic on July 4, 2007, was admitted to a hospital on July 7, and died on July 13. Contact tracing revealed that the man had had prolonged close contact with two colleagues (a 22-year-old man and a 23-year-old man), who experienced clinical signs of infection before his disease onset. Both men had been admitted to hospitals in June and survived their infections, which were proven to be due to MARV. A fourth, 25-year-old man, developed MVD clinical signs in September and was shown to be infected with RAVV. He also survived the infection.
2008 cases
On July 10, 2008, the Netherlands National Institute for Public Health and the Environment reported that a 41-year-old Dutch woman, who had visited Python Cave in Maramagambo Forest during her holiday in Uganda, had MVD due to MARV infection, and had been admitted to a hospital in the Netherlands. The woman died under treatment in the Leiden University Medical Centre in Leiden on July 11. The Ugandan Ministry of Health closed the cave after this case. Earlier that year, on January 9, an infectious diseases physician had notified the Colorado Department of Public Health and the Environment that a 44-year-old American woman who had returned from Uganda had been hospitalized with a fever of unknown origin. At the time, serologic testing was negative for viral hemorrhagic fever. She was discharged on January 19, 2008. After the death of the Dutch patient and the discovery that the American woman had also visited Python Cave, further testing confirmed that the patient had MARV antibodies and RNA.
2017 Uganda outbreak
In October 2017 an outbreak of Marburg virus disease was detected in Kween District, Eastern Uganda. All three initial cases (belonging to one family – two brothers and one sister) had died by 3 November. The fourth case – a health care worker – developed symptoms on 4 November and was admitted to a hospital. The first confirmed case had traveled to Kenya before his death, and a close contact of the second confirmed case traveled to Kampala. It was reported that several hundred people may have been exposed to infection.
2021 Guinean cases
In August 2021, two months after the re-emergent Ebola epidemic in the Guéckédou prefecture was declared over, a case of Marburg virus disease was confirmed by health authorities through laboratory analysis; another potential case, in a contact of the patient, awaited official results. This was the first case of Marburg hemorrhagic fever confirmed in West Africa, and it too was identified in Guéckédou. During the outbreak, a total of one confirmed case, who died (CFR = 100%), and 173 contacts were identified, including 14 high-risk contacts based on exposure. Of these, 172 were followed for a period of 21 days, none of whom developed symptoms; one high-risk contact was lost to follow-up. Sequencing of an isolate from the Guinean patient showed that this outbreak was caused by an Angola-like Marburg virus. A colony of Egyptian rousette bats (the reservoir host of Marburg virus) was found in close proximity (4.5 km) to the village where the outbreak emerged, and two sampled fruit bats from this colony were PCR-positive for Marburg virus.
2022 Ghanaian cases
In July 2022, preliminary analysis of samples taken from two patients – both deceased – in Ghana indicated the cases were positive for Marburg. However, per standard procedure, the samples were sent to the Pasteur Institute of Dakar for confirmation. On 17 July 2022 the two cases were confirmed by Ghana, which caused the country to declare a Marburg virus disease outbreak. An additional case was identified, bringing the total to three.
2023 Equatorial Guinea outbreak
A disease outbreak was first reported in Equatorial Guinea on 7 February 2023, and on 13 February 2023 it was identified as Marburg virus disease. It was the first time the disease had been detected in the country. Neighbouring Cameroon detected two suspected cases of Marburg virus disease on 13 February 2023, but they were later ruled out. On 25 February, a suspected case of Marburg was reported in the Spanish city of Valencia; however, this case was subsequently discounted. As of 4 April 2023, there were 14 confirmed cases and 28 suspected cases in Equatorial Guinea, including ten confirmed deaths from the disease. On 8 June 2023, the World Health Organization declared the outbreak over. In total, 17 laboratory-confirmed cases and 12 deaths were recorded, and all 23 probable cases reportedly died. Four patients recovered from the virus and were enrolled in a survivors programme to receive psychosocial and other post-recovery support.
2023 Tanzania outbreak
A Marburg virus disease outbreak in Tanzania was first reported on 21 March 2023 by the Ministry of Health of Tanzania. This was the first time that Tanzania had reported an outbreak of the disease. On 2 June 2023, Tanzania declared the outbreak over. There were 9 total infections, resulting in 6 total deaths.
2024 Rwanda outbreak
On September 27, 2024, an outbreak of the Marburg virus was confirmed in Rwanda. As of September 29, 2024, six deaths and twenty cases had been confirmed. The Rwandan Minister of Health, Sabin Nsanzimana, confirmed that the infected were mostly healthcare workers and that contact tracing had been initiated in the country.
Research
Experimentally, recombinant vesicular stomatitis Indiana virus (VSIV) expressing the glycoprotein of MARV has been used successfully in nonhuman primate models as post-exposure prophylaxis. A vaccine candidate has been effective in nonhuman primates. Experimental therapeutic regimens relying on antisense technology have shown promise, with phosphorodiamidate morpholino oligomers (PMOs) targeting the MARV genome. New therapies from Sarepta and Tekmira have also been used successfully in humans as well as in nonhuman primates.
See also
List of other Filoviridae outbreaks
References
Further reading
External links
ViralZone: Marburg virus
Centers for Disease Control, Infection Control for Viral Haemorrhagic Fevers In the African Health Care Setting.
Centers for Disease Control, Marburg Haemorrhagic Fever.
Centers for Disease Control, Known Cases and Outbreaks of Marburg Haemorrhagic Fever
World Health Organization, Marburg Haemorrhagic Fever.
Red Cross PDF
Virus Pathogen Database and Analysis Resource (ViPR): Filoviridae
Animal viral diseases
Arthropod-borne viral fevers and viral haemorrhagic fevers
Biological agents
Hemorrhagic fevers
Tropical diseases
Virus-related cutaneous conditions
Zoonoses | Marburg virus disease | [
"Biology",
"Environmental_science"
] | 4,883 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
58,761 | https://en.wikipedia.org/wiki/Heike%20Kamerlingh%20Onnes | Heike Kamerlingh Onnes (; 21 September 1853 – 21 February 1926) was a Dutch physicist. After studying in Groningen and Heidelberg, he became professor of experimental physics at the University of Leiden where he taught from 1882 to 1923. In 1904, he established a cryogenics laboratory where he exploited the Hampson–Linde cycle to investigate how materials behave when cooled to nearly absolute zero. In 1908, he became the first to liquefy helium, cooling it to near 1.5 Kelvin, at the time the coldest temperature achieved on earth. For this research, he was awarded the Nobel Prize in Physics in 1913. Using liquid helium to investigate the electrical conductivity of solid mercury, he found in 1911 that at 4.2 K its electrical resistance vanishes, thus discovering superconductivity.
Early life
Kamerlingh Onnes was born in Groningen, Netherlands. His father, Harm Kamerlingh Onnes, was a brickworks owner. His mother was Anna Gerdina Coers of Arnhem.
In 1870, Kamerlingh Onnes attended the University of Groningen. He studied under Robert Bunsen and Gustav Kirchhoff at the University of Heidelberg from 1871 to 1873. Again at Groningen, he obtained his master's degree in 1878 and a doctorate in 1879. His doctoral thesis, Nieuwe bewijzen voor de aswenteling der aarde (tr. New proofs of the rotation of the earth), was on Foucault's pendulum. From 1878 to 1882 he was assistant to Johannes Bosscha, the director of the Delft Polytechnic, for whom he substituted as lecturer in 1881 and 1882.
University of Leiden
From 1882 to 1923 Kamerlingh Onnes served as professor of experimental physics at the University of Leiden. In 1904 he founded a very large cryogenics laboratory and invited other researchers to work there, which made him highly regarded in the scientific community. The laboratory is now known as the Kamerlingh Onnes Laboratory. Only one year after his appointment as professor he became a member of the Royal Netherlands Academy of Arts and Sciences.
Liquefaction of helium
On 10 July 1908, he was the first to liquefy helium, using several precooling stages and the Hampson–Linde cycle based on the Joule–Thomson effect. This way he lowered the temperature to the boiling point of helium (−269 °C, 4.2 K). By reducing the pressure of the liquid helium he achieved a temperature near 1.5 K. These were the coldest temperatures achieved on earth at the time. The equipment employed is at the Museum Boerhaave in Leiden.
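The final step, pushing the bath from the normal boiling point of helium down toward 1.5 K by pumping away vapour above the liquid, can be understood qualitatively from the thermodynamics of evaporation: along the liquid–vapour coexistence curve, lowering the pressure lowers the equilibrium temperature. The sketch below assumes a constant latent heat of vaporization, an approximation that is not strictly valid for helium and is not a reconstruction of Kamerlingh Onnes's own calculation:

```latex
% Integrated Clausius-Clapeyron relation with constant latent heat L:
\ln\frac{p_2}{p_1} \;=\; -\frac{L}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)
% Since L > 0, choosing p_2 < p_1 (pumping on the bath) forces T_2 < T_1,
% i.e. reducing the vapour pressure over the liquid lowers its temperature.
```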
For further low-temperature research, he needed large amounts of helium. This he obtained in 1911 from Welsbach's company, which processed thorianite to produce thorium for gas mantles; helium was produced as a by-product. Previously, Onnes had obtained helium from the processing of monazite, trading the processed monazite (which still contained thorium) for the helium. On Earth, helium is usually found in coexistence with radioactive material, since it is a product of radioactive decay.
Superconductivity
In 1911 Kamerlingh Onnes measured the electrical conductivity of pure metals (mercury, and later tin and lead) at very low temperatures. Some scientists, such as William Thomson (Lord Kelvin), believed that electrons flowing through a conductor would come to a complete halt or, in other words, that metal resistivity would become infinitely large at absolute zero. Others, including Kamerlingh Onnes, felt that a conductor's electrical resistance would steadily decrease and drop to nil. Augustus Matthiessen had shown that a metal's conductivity usually improves as the temperature decreases; in other words, electrical resistivity usually falls with falling temperature.
On 8 April 1911, Kamerlingh Onnes found that at 4.2 K the resistance in a solid mercury wire immersed in liquid helium suddenly vanished. He immediately realized the significance of the discovery (as became clear when his notebook was deciphered a century later). He reported that "Mercury has passed into a new state, which on account of its extraordinary electrical properties may be called the superconductive state". He published more articles about the phenomenon, initially referring to it as "supraconductivity" and, only later adopting the term "superconductivity".
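What made the 8 April 1911 measurement so striking is that the resistance did not simply keep shrinking with temperature but dropped abruptly to an unmeasurably small value at a critical temperature. A deliberately crude sketch of the two behaviours (illustrative only; the function, the linear normal-state law, and the parameter values are editorial assumptions, not Onnes's data) is:

```python
def mercury_resistance(T_kelvin: float, Tc: float = 4.2, slope: float = 0.01) -> float:
    """Schematic resistance (ohms) of a mercury wire near its critical temperature Tc.

    Below Tc the wire is superconducting and the resistance vanishes entirely;
    above Tc a crude linear temperature dependence stands in for normal-metal behaviour.
    """
    if T_kelvin < Tc:
        return 0.0
    return slope * T_kelvin

for T in (10.0, 6.0, 4.3, 4.1, 2.0):
    print(f"T = {T:4.1f} K -> R = {mercury_resistance(T):.3f} ohm")
```

The sudden jump to exactly zero below Tc, rather than the gradual decrease described above, is what marked mercury's passage into the "superconductive state".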
Nobel Prize
Kamerlingh Onnes received widespread recognition for his work, including the 1913 Nobel Prize in Physics for (in the words of the committee) "his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium."
Family
He was married to Maria Adriana Wilhelmina Elisabeth Bijleveld (m. 1887) and had one child, named Albert. His brother Menso Kamerlingh Onnes (1860–1925) was a painter (and father of another painter, Harm Kamerlingh Onnes), while his sister Jenny married another painter, Floris Verster (1861–1927).
Legacy
Some of the instruments Kamerlingh Onnes devised for his experiments can be seen at the Boerhaave Museum in Leiden. The apparatus he used to first liquefy helium is on display in the lobby of the physics department at Leiden University, where the low-temperature lab is also named in his honor. His student and successor as director of the lab, Willem Hendrik Keesom, was the first person able to solidify helium, in 1926. The former Kamerlingh Onnes laboratory building is currently the Law Faculty at Leiden University and is known as "Kamerlingh Onnes Gebouw" (Kamerlingh Onnes Building), often shortened to "KOG". The current science faculty has a "Kamerlingh Onnes Laboratorium" named after him, as well as a plaque and several machines used by Kamerlingh Onnes in the main hall of the physics department.
The Kamerlingh Onnes Award (1948) and the Kamerlingh Onnes Prize (2000) were established in his honour, recognising further advances in low-temperature science.
The Onnes effect referring to the creeping of superfluid helium is named in his honor.
The crater Kamerlingh Onnes on the Moon is named after him.
Onnes is also credited with coining the word "enthalpy".
Onnes's discovery of superconductivity was named an IEEE Milestone in 2011.
Honors and awards
Matteucci Medal (1910)
Rumford Medal (1912)
Nobel Prize in Physics (1913)
Elected member of the American Philosophical Society (1914)
Franklin Medal (1915)
Elected member of the United States National Academy of Sciences (1920)
Selected publications
Kamerlingh Onnes, H., "Nieuwe bewijzen voor de aswenteling der aarde." Ph.D. dissertation. Groningen, Netherlands, 1879.
Kamerlingh Onnes, H., "Algemeene theorie der vloeistoffen." Amsterdam Akad. Verhandl; 21, 1881.
Kamerlingh Onnes, H., "On the Cryogenic Laboratory at Leyden and on the Production of Very Low Temperature." Comm. Phys. Lab. Univ. Leiden; 14, 1894.
Kamerlingh Onnes, H., "Théorie générale de l'état fluide." Haarlem Arch. Neerl.; 30, 1896.
Kamerlingh Onnes, H., "Further experiments with liquid helium. C. On the change of electric resistance of pure metals at very low temperatures, etc. IV. The resistance of pure mercury at helium temperatures." Comm. Phys. Lab. Univ. Leiden; No. 120b, 1911.
Kamerlingh Onnes, H., "Further experiments with liquid helium. D. On the change of electric resistance of pure metals at very low temperatures, etc. V. The disappearance of the resistance of mercury." Comm. Phys. Lab. Univ. Leiden; No. 122b, 1911.
Kamerlingh Onnes, H., "Further experiments with liquid helium. G. On the electrical resistance of pure metals, etc. VI. On the sudden change in the rate at which the resistance of mercury disappears." Comm. Phys. Lab. Univ. Leiden; No. 124c, 1911.
Kamerlingh Onnes, H., "On the Lowest Temperature Yet Obtained." Comm. Phys. Lab. Univ. Leiden; No. 159, 1922.
See also
Timeline of low-temperature technology
Timeline of states of matter and phase transitions
Coldest temperature achieved on earth
List of Nobel laureates
History of superconductivity
References
Further reading
Levelt-Sengers, J. M. H., How fluids unmix : discoveries by the School of Van der Waals and Kamerlingh Onnes. Amsterdam, Koninklijke Nederlandse Akademie van Wetenschappen, 2002. .
International Institute of Refrigeration (First International Commission), Rapports et communications issus du Laboratoire Kamerlingh Onnes. International Congress of Refrigeration (7th; 1936; La Hauge), Amsterdam, 1936.
External links
About Heike Kamerlingh Onnes, Nobel-winners.com.
J. van den Handel, Kamerlingh Onnes, Heike (1853–1926), in Biografisch Woordenboek van Nederland. (in Dutch)
Leiden University historical web site
Correspondence with James Dewar, the main competitor in the race to liquid helium
Communications from the Kamerlingh Onnes Laboratory (1885–1898)
Ph.D. students of Kamerlingh Onnes (1885-1924)
1853 births
1926 deaths
20th-century Dutch physicists
Cryogenics
Dutch Nobel laureates
20th-century Dutch inventors
Academic staff of Leiden University
Nobel laureates in Physics
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Scientists from Groningen (city)
University of Groningen alumni
Heidelberg University alumni
Academic staff of the Delft University of Technology
Recipients of the Matteucci Medal
Recipients of Franklin Medal
Members of the American Philosophical Society | Heike Kamerlingh Onnes | [
"Physics"
] | 2,182 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
58,772 | https://en.wikipedia.org/wiki/Timeline%20of%20atomic%20and%20subatomic%20physics | A timeline of atomic and subatomic physics, including particle physics.
Antiquity
6th - 2nd Century BCE Kanada (philosopher) proposes that anu is an indestructible particle of matter, an "atom"; anu is an abstraction and not observable.
430 BCE Democritus speculates about fundamental indivisible particles—calls them "atoms"
The beginning of chemistry
1766 Henry Cavendish discovers and studies hydrogen
1778 Carl Scheele and Antoine Lavoisier discover that air is composed mostly of nitrogen and oxygen
1781 Joseph Priestley creates water by igniting hydrogen and oxygen
1800 William Nicholson and Anthony Carlisle use electrolysis to separate water into hydrogen and oxygen
1803 John Dalton introduces atomic ideas into chemistry and states that matter is composed of atoms of different weights
1805 (approximate time) Thomas Young conducts the double-slit experiment with light
1811 Amedeo Avogadro claims that equal volumes of gases should contain equal numbers of molecules
1815 William Prout hypothesizes that all matter is built up from hydrogen, adumbrating the proton;
1832 Michael Faraday states his laws of electrolysis
1838 Richard Laming hypothesized a subatomic particle carrying electric charge;
1839 Alexandre Edmond Becquerel discovered the photovoltaic effect
1858 Julius Plücker produced cathode rays;
1871 Dmitri Mendeleyev systematically examines the periodic table and predicts the existence of gallium, scandium, and germanium
1873 Johannes van der Waals introduces the idea of weak attractive forces between molecules
1874 George Johnstone Stoney hypothesizes a minimum unit of electric charge. In 1891, he coins the word electron for it;
1885 Johann Balmer finds a mathematical expression for observed hydrogen line wavelengths
1886 Eugen Goldstein produced anode rays;
1887 Heinrich Hertz discovers the photoelectric effect
1894 Lord Rayleigh and William Ramsay discover argon by spectroscopically analyzing the gas left over after nitrogen and oxygen are removed from air
1895 William Ramsay discovers terrestrial helium by spectroscopically analyzing gas produced by decaying uranium
1896 Antoine Henri Becquerel discovers the radioactivity of uranium
1896 Pieter Zeeman studies the splitting of sodium D lines when sodium is held in a flame between strong magnetic poles
1897 J. J. Thomson discovers the electron; Emil Wiechert and Walter Kaufmann independently perform similar cathode-ray measurements the same year
1898 Marie and Pierre Curie discovered the existence of the radioactive elements radium and polonium in their research of pitchblende
1898 William Ramsay and Morris Travers discover neon, and negatively charged beta particles
The age of quantum mechanics
1887 Heinrich Rudolf Hertz discovers the photoelectric effect that will play a very important role in the development of the quantum theory with Einstein's explanation of this effect in terms of quanta of light
1896 Wilhelm Conrad Röntgen announces his discovery of X-rays, made in late 1895 while studying cathode rays in discharge tubes; by scattering X-rays, then regarded as 'waves' of high-energy electromagnetic radiation, Arthur Compton would demonstrate in 1922 the 'particle' aspect of electromagnetic radiation.
1899 Ernest Rutherford discovered the alpha and beta particles emitted by uranium;
1900 Johannes Rydberg refines the expression for observed hydrogen line wavelengths
1900 Max Planck states his quantum hypothesis and blackbody radiation law
1900 Paul Villard discovers gamma-rays while studying uranium decay
1902 Philipp Lenard observes that maximum photoelectron energies are independent of illuminating intensity but depend on frequency
1905 Albert Einstein explains the photoelectric effect
1906 Charles Barkla discovers that each element has a characteristic X-ray and that the degree of penetration of these X-rays is related to the atomic weight of the element
1908-1911 Jean Perrin proves the existence of atoms and molecules with experimental work to test Einstein's theoretical explanation of Brownian motion
1909 Ernest Rutherford and Thomas Royds demonstrate that alpha particles are doubly ionized helium atoms
1909 Hans Geiger and Ernest Marsden discover large angle deflections of alpha particles by thin metal foils
1911 Ernest Rutherford explains the Geiger–Marsden experiment by invoking a nuclear atom model and derives the Rutherford cross section
1911 Ștefan Procopiu measures the magnetic dipole moment of the electron
1912 Max von Laue suggests using crystal lattices to diffract X-rays
1912 Walter Friedrich and Paul Knipping diffract X-rays in zinc blende
1913 Henry Moseley shows that nuclear charge is the real basis for numbering the elements
1913 Johannes Stark demonstrates that strong electric fields will split the Balmer spectral line series of hydrogen
1913 Niels Bohr presents his quantum model of the atom
1913 Robert Millikan measures the fundamental unit of electric charge
1913 William Henry Bragg and William Lawrence Bragg work out the Bragg condition for strong X-ray reflection
1914 Ernest Rutherford suggests that the positively charged atomic nucleus contains protons
1914 James Franck and Gustav Hertz observe atomic excitation
1915 Arnold Sommerfeld develops a modified Bohr atomic model with elliptic orbits to explain relativistic fine structure
1916 Gilbert N. Lewis and Irving Langmuir formulate an electron shell model of chemical bonding
1917 Albert Einstein introduces the idea of stimulated radiation emission
1918 Ernest Rutherford notices that, when alpha particles were shot into nitrogen gas, his scintillation detectors showed the signatures of hydrogen nuclei.
1921 Alfred Landé introduces the Landé g-factor
1922 Arthur Compton studies X-ray photon scattering by electrons demonstrating the 'particle' aspect of electromagnetic radiation.
1922 Otto Stern and Walther Gerlach show "spin quantization"
1923 Lise Meitner discovers what is now referred to as the Auger process
1924 John Lennard-Jones proposes a semiempirical interatomic force law
1924 Louis de Broglie suggests that electrons may have wavelike properties in addition to their 'particle' properties; the wave–particle duality has been later extended to all fermions and bosons.
1924 Santiago Antúnez de Mayolo proposes a neutron.
1924 Satyendra Bose and Albert Einstein introduce Bose–Einstein statistics
1925 George Uhlenbeck and Samuel Goudsmit postulate electron spin
1925 Pierre Auger discovers the Auger process (2 years after Lise Meitner)
1925 Werner Heisenberg, Max Born, and Pascual Jordan formulate quantum matrix mechanics
1925 Wolfgang Pauli states the quantum exclusion principle for electrons
1926 Enrico Fermi discovers the spin–statistics connection, for particles that are now called 'fermions', such as the electron (of spin-1/2).
1926 Erwin Schrödinger proves that the wave and matrix formulations of quantum theory are mathematically equivalent
1926 Erwin Schrödinger states his nonrelativistic quantum wave equation and formulates quantum wave mechanics
1926 Gilbert N. Lewis introduces the term "photon", thought by him to be "the carrier of radiant energy."
1926 Oskar Klein and Walter Gordon state their relativistic quantum wave equation, now the Klein–Gordon equation
1926 Paul Dirac introduces Fermi–Dirac statistics
1927 Charles Drummond Ellis (along with James Chadwick and colleagues) finally establish clearly that the beta decay spectrum is in fact continuous and not discrete, posing a problem that will later be solved by theorizing (and later discovering) the existence of the neutrino.
1927 Clinton Davisson, Lester Germer, and George Paget Thomson confirm the wavelike nature of electrons
1927 Llewellyn Thomas and Enrico Fermi develop the Thomas–Fermi model
1927 Max Born interprets the probabilistic nature of wavefunctions
1927 Max Born and Robert Oppenheimer introduce the Born–Oppenheimer approximation
1927 Walter Heitler and Fritz London introduce the concepts of valence bond theory and apply it to the hydrogen molecule.
1927 Werner Heisenberg states the quantum uncertainty principle
1928 Chandrasekhara Raman studies optical photon scattering by electrons
1928 Charles G. Darwin and Walter Gordon solve the Dirac equation for a Coulomb potential
1928 Friedrich Hund and Robert S. Mulliken introduce the concept of molecular orbital
1928 Paul Dirac states the Dirac equation
1929 Nevill Mott derives the Mott cross section for the Coulomb scattering of relativistic electrons
1929 Oskar Klein discovers the Klein paradox
1929 Oskar Klein and Yoshio Nishina derive the Klein–Nishina cross section for high energy photon scattering by electrons
1930 Wolfgang Pauli postulated the neutrino to explain the energy spectrum of beta decays;
1930 Erwin Schrödinger predicts the zitterbewegung motion
1930 Fritz London explains van der Waals forces as due to the interacting fluctuating dipole moments between molecules
1930 Paul Dirac introduces electron hole theory
1931 Harold Urey discovers deuterium using evaporation concentration techniques and spectroscopy
1931 Irène Joliot-Curie and Frédéric Joliot observe but misinterpret neutron scattering in paraffin
1931 John Lennard-Jones proposes the Lennard-Jones interatomic potential
1931 Linus Pauling discovers resonance bonding and uses it to explain the high stability of symmetric planar molecules
1931 Paul Dirac shows that charge quantization can be explained if magnetic monopoles exist
1931 Wolfgang Pauli puts forth the neutrino hypothesis to explain the apparent violation of energy conservation in beta decay
1932 Carl D. Anderson discovers the positron
1932 James Chadwick discovers the neutron
1932 John Cockcroft and Ernest Walton split lithium and boron nuclei using proton bombardment
1932 Werner Heisenberg presents the proton–neutron model of the nucleus and uses it to explain isotopes
1933 Ernst Stueckelberg (1932), Lev Landau (1932), and Clarence Zener discover the Landau–Zener transition
1933 Max Delbrück suggests that quantum effects will cause photons to be scattered by an external electric field
1934 Enrico Fermi publishes a very successful model of beta decay in which neutrinos were produced.
1934 Enrico Fermi suggests bombarding uranium atoms with neutrons to make a 93 proton element
1934 Irène Joliot-Curie and Frédéric Joliot bombard aluminium atoms with alpha particles to create artificially radioactive phosphorus-30
1934 Leó Szilárd realizes that nuclear chain reactions may be possible
1934 Lev Landau tells Edward Teller that non-linear molecules may have vibrational modes which remove the degeneracy of an orbitally degenerate state (Jahn–Teller effect)
1934 Pavel Cherenkov reports that light is emitted by relativistic particles traveling in a nonscintillating liquid
1935 Albert Einstein, Boris Podolsky, and Nathan Rosen put forth the EPR paradox
1935 Henry Eyring develops the transition state theory
1935 Hideki Yukawa presents a theory of the nuclear force and predicts the scalar meson
1935 Niels Bohr presents his analysis of the EPR paradox
1936 Carl D. Anderson discovered the muon while he studied cosmic radiation;
1936 Alexandru Proca formulates the relativistic quantum field equations for a massive vector meson of spin-1 as a basis for nuclear forces
1936 Eugene Wigner develops the theory of neutron absorption by atomic nuclei
1936 Hermann Arthur Jahn and Edward Teller present their systematic study of the symmetry types for which the Jahn–Teller effect is expected
1937 Carl Anderson's newly discovered muon is initially mistaken for the meson predicted by Yukawa's theory (the pion itself is not observed until 1947).
1937 Hans Hellmann finds the Hellmann–Feynman theorem
1937 Seth Neddermeyer, Carl Anderson, J.C. Street, and E.C. Stevenson discover muons using cloud chamber measurements of cosmic rays
1939 Lise Meitner and Otto Robert Frisch determine that nuclear fission is taking place in the Hahn–Strassmann experiments
1939 Otto Hahn and Fritz Strassmann bombard uranium salts with thermal neutrons and discover barium among the reaction products
1939 Richard Feynman finds the Hellmann–Feynman theorem
1942 Enrico Fermi makes the first controlled nuclear chain reaction
1942 Ernst Stueckelberg introduces the propagator to positron theory and interprets positrons as negative energy electrons moving backwards through spacetime
Quantum field theory
1947 George Dixon Rochester and Clifford Charles Butler discovered the kaon, the first strange particle;
1947 Cecil Powell, César Lattes, and Giuseppe Occhialini discover the pi meson by studying cosmic ray tracks
1947 Richard Feynman presents his propagator approach to quantum electrodynamics
1947 Willis Lamb and Robert Retherford measure the Lamb–Retherford shift
1948 Hendrik Casimir predicts a rudimentary attractive Casimir force on a parallel plate capacitor
1951 Martin Deutsch discovers positronium
1952 David Bohm proposes his interpretation of quantum mechanics
1953 Robert Wilson observes Delbrück scattering of 1.33 MeV gamma-rays by the electric fields of lead nuclei
1953 Charles H. Townes, collaborating with J. P. Gordon and H. J. Zeiger, builds the first ammonia maser
1954 Chen Ning Yang and Robert Mills investigate a theory of hadronic isospin by demanding local gauge invariance under isotopic spin space rotations, the first non-Abelian gauge theory
1955 Owen Chamberlain, Emilio Segrè, Clyde Wiegand, and Thomas Ypsilantis discover the antiproton
1955 and 1956 Murray Gell-Mann and Kazuhiko Nishijima independently derive the Gell-Mann–Nishijima formula, which relates the baryon number, the strangeness, and the isospin of hadrons to the charge, eventually leading to the systematic categorization of hadrons and, ultimately, the quark model of hadron composition.
1956 Clyde Cowan and Frederick Reines discovered the (electron) neutrino;
1956 Chen Ning Yang and Tsung-Dao Lee propose parity violation by the weak nuclear force
1956 Chien Shiung Wu discovers parity violation by the weak force in decaying cobalt
1956 Frederick Reines and Clyde Cowan detect antineutrino
1957 Bruno Pontecorvo postulated the flavor oscillation;
1957 Gerhart Lüders proves the CPT theorem
1957 Richard Feynman, Murray Gell-Mann, Robert Marshak, and E.C.G. Sudarshan propose a vector/axial vector (VA) Lagrangian for weak interactions.
1958 Marcus Sparnaay experimentally confirms the Casimir effect
1959 Yakir Aharonov and David Bohm predict the Aharonov–Bohm effect
1960 R.G. Chambers experimentally confirms the Aharonov–Bohm effect
1961 Jeffrey Goldstone considers the breaking of global phase symmetry
1961 Murray Gell-Mann and Yuval Ne'eman discover the Eightfold Way patterns, the SU(3) group
1962 Leon Lederman shows that the electron neutrino is distinct from the muon neutrino
1963 Eugene Wigner discovers the fundamental roles played by quantum symmetries in atoms and molecules
The formation and successes of the Standard Model
1963 Nicola Cabibbo develops the mathematical matrix by which the first two (and ultimately three) generations of quarks can be predicted.
1964 Murray Gell-Mann and George Zweig propose the quark/aces model
1964 François Englert, Robert Brout, Peter Higgs, Gerald Guralnik, C. R. Hagen, and Tom Kibble postulate that a fundamental quantum field, now called the Higgs field, permeates space and, by way of the Higgs mechanism, provides mass to all the elementary subatomic particles that interact with it. While the Higgs field is postulated to confer mass on quarks and leptons, it represents only a tiny portion of the masses of other subatomic particles, such as protons and neutrons. In these, gluons that bind quarks together confer most of the particle mass. The result is obtained independently by three groups: François Englert and Robert Brout; Peter Higgs, working from the ideas of Philip Anderson; and Gerald Guralnik, C. R. Hagen, and Tom Kibble.
1964 Murray Gell-Mann and George Zweig independently propose the quark model of hadrons, predicting the arbitrarily named up, down, and strange quarks. Gell-Mann is credited with coining the term quark, which he found in James Joyce's book Finnegans Wake.
1964 Sheldon Glashow and James Bjorken predict the existence of the charm quark. The addition is proposed because it allows for a better description of the weak interaction (the mechanism that allows quarks and other particles to decay), equalizes the number of known quarks with the number of known leptons, and implies a mass formula that correctly reproduced the masses of the known mesons.
1964 John Stewart Bell shows that all local hidden variable theories must satisfy Bell's inequality
1964 Peter Higgs considers the breaking of local phase symmetry
1964 Val Fitch and James Cronin observe CP violation by the weak force in the decay of K mesons
1967 Bruno Pontecorvo postulated neutrino oscillation;
1967 Steven Weinberg and Abdus Salam publish papers in which they describe Yang–Mills theory using the SU(2) × U(1) gauge symmetry group, thereby yielding a mass for the W particle of the weak interaction via spontaneous symmetry breaking.
1967 Steven Weinberg puts forth his electroweak model of leptons
1968 Stanford University: Deep inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) show that the proton contains much smaller, point-like objects and is therefore not an elementary particle. Physicists at the time are reluctant to identify these objects with quarks, instead calling them partons — a term coined by Richard Feynman. The objects that are observed at SLAC will later be identified as up and down quarks. Nevertheless, "parton" remains in use as a collective term for the constituents of hadrons (quarks, antiquarks, and gluons). The existence of the strange quark is indirectly validated by the SLAC's scattering experiments: not only is it a necessary component of Gell-Mann and Zweig's three-quark model, but it provides an explanation for the kaon (K) and pion (π) hadrons discovered in cosmic rays in 1947.
1969 John Clauser, Michael Horne, Abner Shimony and Richard Holt propose a polarization correlation test of Bell's inequality
1970 Sheldon Glashow, John Iliopoulos, and Luciano Maiani propose the charm quark
1971 Gerard 't Hooft shows that the Glashow-Salam-Weinberg electroweak model can be renormalized
1972 Stuart Freedman and John Clauser perform the first polarization correlation test of Bell's inequality
1973 Frank Anthony Wilczek, together with David Gross, discovers the asymptotic freedom of quarks in the theory of strong interactions; he receives the Lorentz Medal in 2002, and the Nobel Prize in Physics in 2004, for this discovery and his subsequent contributions to quantum chromodynamics.
1973 Makoto Kobayashi and Toshihide Maskawa note that the experimental observation of CP violation can be explained if an additional pair of quarks exist. The two new quarks are eventually named top and bottom.
1973 David Politzer and Frank Anthony Wilczek propose the asymptotic freedom of quarks
1974 Burton Richter and Samuel Ting: Charm quarks are produced almost simultaneously by two teams in November 1974 (see November Revolution) — one at SLAC under Burton Richter, and one at Brookhaven National Laboratory under Samuel Ting. The charm quarks are observed bound with charm antiquarks in mesons. The two discovering parties independently assign the discovered meson two different symbols, J and ψ; thus, it becomes formally known as the J/ψ meson. The discovery finally convinces the physics community of the quark model's validity.
1974 Robert J. Buenker and Sigrid D. Peyerimhoff introduce the multireference configuration interaction method.
1975 Martin Perl discovers the tau lepton
1977 Leon Lederman observes the bottom quark with his team at Fermilab. This discovery is a strong indicator of the top quark's existence: without the top quark, the bottom quark would be without a partner that is required by the mathematics of the theory.
1977 Martin Lewis Perl discovered the tau lepton after a series of experiments;
1977 Steve Herb finds the upsilon resonance implying the existence of the beauty/bottom quark
1979 Gluon observed indirectly in three-jet events at DESY;
1982 Alain Aspect, J. Dalibard, and G. Roger perform a polarization correlation test of Bell's inequality that rules out conspiratorial polarizer communication
1983 Carlo Rubbia and Simon van der Meer discovered the W and Z bosons;
1983 Carlo Rubbia, Simon van der Meer, and the CERN UA-1 collaboration find the W and Z intermediate vector bosons
1989 The Z intermediate vector boson resonance width indicates three quark–lepton generations
1994 The CERN LEAR Crystal Barrel Experiment reports evidence for the existence of glueballs (exotic mesons).
1995 The CDF and D0 experiments at the Fermilab Tevatron observe the top quark after an 18-year search; it has a mass much greater than had been previously expected, almost as great as that of a gold atom.
1998 The Super-Kamiokande detector facility (Japan) reports experimental evidence for neutrino oscillations, implying that at least one neutrino has mass.
1999 Ahmed Zewail wins the Nobel prize in chemistry for his work on femtochemistry for atoms and molecules.
2000 Scientists at Fermilab announce the first direct evidence for the tau neutrino, the third kind of neutrino in particle physics.
2000 CERN announces evidence for the quark–gluon plasma, a new phase of matter.
2001 The Sudbury Neutrino Observatory (Canada) confirms the existence of neutrino oscillations. Lene Hau stops a beam of light completely in a Bose–Einstein condensate.
2005 The RHIC accelerator at Brookhaven National Laboratory generates a "perfect" fluid, perhaps the quark–gluon plasma.
2010 The Large Hadron Collider at CERN begins operation with the primary goal of searching for the Higgs boson.
2012 Higgs boson-like particle discovered at CERN's Large Hadron Collider (LHC).
2014 The LHCb experiment observes particles consistent with tetraquarks and pentaquarks
2014 The T2K and OPERA experiments observe the appearance of electron neutrinos and tau neutrinos, respectively, in muon neutrino beams
See also
Chronology of the universe
History of subatomic physics
History of quantum mechanics
History of quantum field theory
History of the molecule
History of thermodynamics
History of chemistry
Golden age of physics
Timeline of cosmological theories
Timeline of particle physics technology
References
External links
Alain Connes official website with downloadable papers.
Alain Connes's Standard Model.
A History of Quantum Mechanics
A Brief History of Quantum Mechanics
Particle physics
Nuclear physics
Atomic physics
Atomic | Timeline of atomic and subatomic physics | [
"Physics",
"Chemistry"
] | 4,767 | [
"Nuclear physics",
"Quantum mechanics",
"Atomic physics",
"Particle physics",
" molecular",
"Atomic",
" and optical physics"
] |
58,775 | https://en.wikipedia.org/wiki/Timeline%20of%20electromagnetism%20and%20classical%20optics | Timeline of electromagnetism and classical optics lists, within the history of electromagnetism, the associated theories, technology, and events.
Early developments
28th century BC – Ancient Egyptian texts describe electric fish. They refer to them as the "Thunderer of the Nile", and described them as the "protectors" of all other fish.
6th century BC – Greek philosopher Thales of Miletus observes that rubbing fur on various substances, such as amber, would cause an attraction between the two, which is now known to be caused by static electricity. He noted that rubbing the amber buttons could attract light objects such as hair and that if the amber was rubbed sufficiently a spark would jump.
424 BC Aristophanes' "lens" is a glass globe filled with water. (Seneca says that it can be used to read letters no matter how small or dim.)
4th century BC Mo Di first mentions the camera obscura, a pin-hole camera.
3rd century BC Euclid is the first to write about reflection and refraction and notes that light travels in straight lines
3rd century BC – The Baghdad Battery is dated from this period. It resembles a galvanic cell and is believed by some to have been used for electroplating, although there is no common consensus on the purpose of these devices nor whether they were, indeed, even electrical in nature.
1st century AD – Pliny in his Natural History records the story of a shepherd Magnes who discovered the magnetic properties of some iron stones, "it is said, made this discovery, when, upon taking his herds to pasture, he found that the nails of his shoes and the iron ferrel of his staff adhered to the ground".
130 AD – Claudius Ptolemy (in his work Optics) wrote about the properties of light including: reflection, refraction, and color and tabulated angles of refraction for several media
8th century AD – Electric fish are reported by Arabic naturalists and physicians.
Middle Ages
1021 – Ibn al-Haytham (Alhazen) writes the Book of Optics, studying vision.
1088 – Shen Kuo first recognizes magnetic declination.
1187 – Alexander Neckham is first in Europe to describe the magnetic compass and its use in navigation.
1269 – Pierre de Maricourt describes magnetic poles and remarks on the nonexistence of isolated magnetic poles
1282 – Al-Ashraf Umar II discusses the properties of magnets and dry compasses in relation to finding qibla.
1305 – Theodoric of Freiberg uses crystalline spheres and flasks filled with water to study the reflection and refraction in raindrops that leads to primary and secondary rainbows
14th century AD – Possibly the earliest recognition of the identity of lightning with electricity from any other source is to be attributed to the Arabs, who before the 15th century applied the Arabic word for lightning (raad) to the electric ray.
1550 – Gerolamo Cardano writes about electricity in De Subtilitate distinguishing, perhaps for the first time, between electrical and magnetic forces.
17th century
1600 – William Gilbert publishes De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure ("On the Magnet and Magnetic bodies, and on that Great Magnet the Earth"), Europe's then current standard on electricity and magnetism. He experimented with and noted the different character of electrical and magnetic forces. In addition to known ancient Greeks' observations of the electrical properties of rubbed amber, he experimented with a needle balanced on a pivot, and found that the needle was non-directionally affected by many materials such as alum, arsenic, hard resin, jet, glass, gum-mastic, mica, rock-salt, sealing wax, slags, sulfur, and precious stones such as amethyst, beryl, diamond, opal, and sapphire. He noted that electrical charge could be stored by covering the body with a non-conducting substance such as silk. He described the method of artificially magnetizing iron. His terrella (little earth), a sphere cut from a lodestone on a metal lathe, modeled the earth as a lodestone (magnetic iron ore) and demonstrated that every lodestone has fixed poles, and how to find them. He considered that gravity was a magnetic force and noted that this mutual force increased with the size or amount of lodestone and attracted iron objects. He experimented with such physical models in an attempt to explain problems in navigation due to varying properties of the magnetic compass with respect to its location on the earth, such as magnetic declination and magnetic inclination. His experiments explained the dipping of the needle by the magnetic attraction of the earth, and were used to predict where the vertical dip would be found. Such magnetic inclination was described as early as the 11th century by Shen Kuo in his Meng Xi Bi Tan and further investigated in 1581 by retired mariner and compass maker Robert Norman, as described in his pamphlet, The Newe Attractive. The gilbert, a unit of magnetomotive force or magnetic scalar potential, was named in his honor.
1604 – Johannes Kepler describes how the eye focuses light
1604 – Johannes Kepler specifies the laws of the rectilinear propagation of light
1608 – first telescopes appear in the Netherlands
1611 – Marko Dominis discusses the rainbow in De Radiis Visus et Lucis
1611 – Johannes Kepler discovers total internal reflection, a small-angle refraction law, and thin lens optics,
c1620 – the first compound microscopes appear in Europe.
1621 – Willebrord van Roijen Snell states his law of refraction, now known as Snell's law
1630 – Cabaeus finds that there are two types of electric charges
1637 – René Descartes quantitatively derives the angles at which primary and secondary rainbows are seen with respect to the angle of the Sun's elevation
1646 – Sir Thomas Browne first uses the word electricity in his work Pseudodoxia Epidemica.
1657 – Pierre de Fermat introduces the principle of least time into optics
1660 – Otto von Guericke invents an early electrostatic generator.
1663 – Otto von Guericke (brewer and engineer who applied the barometer to weather prediction and invented the air pump, with which he demonstrated the properties of atmospheric pressure associated with a vacuum) constructs a primitive electrostatic generating (or friction) machine via the triboelectric effect, utilizing a continuously rotating sulfur globe that could be rubbed by hand or a piece of cloth. Isaac Newton suggested the use of a glass globe instead of a sulfur one.
1665 – Francesco Maria Grimaldi highlights the phenomenon of diffraction
1673 – Ignace Pardies provides a wave explanation for refraction of light
1675 – Robert Boyle discovers that electric attraction and repulsion can act across a vacuum and do not depend upon the air as a medium. Adds resin to the known list of "electrics".
1675 – Isaac Newton delivers his theory of light
1676 – Ole Rømer proves that the speed of light is finite, by observing Jupiter's moons
1678 – Christiaan Huygens states his principle of wavefront sources and demonstrates the refraction and diffraction of light rays.
18th century
1704 – Isaac Newton publishes Opticks, a corpuscular theory of light and colour
1705 – Francis Hauksbee improves von Guericke's electrostatic generator by using a glass globe and generates the first sparks by approaching his finger to the rubbed globe.
1728 – James Bradley discovers the aberration of starlight and uses it to determine that the speed of light is about 283,000 km/s
1729 – Stephen Gray and the Reverend Granville Wheler experiment to discover that electrical "virtue", produced by rubbing a glass tube, could be transmitted over an extended distance (nearly 900 ft (about 270 m)) through thin iron wire using silk threads as insulators, to deflect leaves of brass. This has been described as the beginning of electrical communication. This was also the first distinction between the roles of conductors and insulators (names applied by John Desaguliers, mathematician and Royal Society member, who stated that Gray "has made greater variety of electrical experiments than all the philosophers of this and the last age".) Georges-Louis LeSage built a static electricity telegraph in 1774, based upon the same principles discovered by Gray.
1732 – C. F. du Fay Shows that all objects, except metals, animals, and liquids, can be electrified by rubbing them and that metals, animals and liquids could be electrified by means of an electrostatic generators
1734 – Charles François de Cisternay DuFay (inspired by Gray's work to perform electrical experiments) dispels the effluvia theory by his paper in Volume 38 of the Philosophical Transactions of the Royal Society, describing his discovery of the distinction between two kinds of electricity: "resinous", produced by rubbing bodies such as amber, copal, or gum-lac with silk or paper, and "vitreous", by rubbing bodies as glass, rock crystal, or precious stones with hair or wool. He also posited the principle of mutual attraction for unlike forms and the repelling of like forms and that "from this principle one may with ease deduce the explanation of a great number of other phenomena". The terms resinous and vitreous were later replaced with the terms "positive" and "negative" by William Watson and Benjamin Franklin.
1737 – C. F. du Fay and Francis Hauksbee the younger independently discover two kinds of frictional electricity: one generated from rubbing glass, the other from rubbing resin (later identified as positive and negative electrical charges).
1740 – Jean le Rond d'Alembert, in Mémoire sur la réfraction des corps solides, explains the process of refraction.
1745 – Pieter van Musschenbroek of Leiden (Leyden) independently discovers the Leyden (Leiden) jar, a primitive capacitor or "condenser" (term coined by Volta in 1782, derived from the Italian condensatore), with which the transient electrical energy generated by current friction machines could now be stored. He and his student Andreas Cunaeus used a glass jar filled with water into which a brass rod had been placed. He charged the jar by touching a wire leading from the electrical machine with one hand while holding the outside of the jar with the other. The energy could be discharged by completing an external circuit between the brass rod and another conductor, originally his hand, placed in contact with the outside of the jar. He also found that if the jar were placed on a piece of metal on a table, a shock would be received by touching this piece of metal with one hand and touching the wire connected to the electrical machine with the other.
1745 – Ewald Georg von Kleist independently invents the capacitor: a glass jar coated inside and out with metal. The inner coating was connected to a rod that passed through the lid and ended in a metal sphere. By having this thin layer of glass insulation (a dielectric) between two large, closely spaced plates, von Kleist found the energy density could be increased dramatically compared with the situation with no insulator. Daniel Gralath improved the design and was also the first to combine several jars to form a battery strong enough to kill birds and small animals upon discharge.
1746 – Leonhard Euler develops the wave theory of light refraction and dispersion
1747 – William Watson, while experimenting with a Leyden jar, observes that a discharge of static electricity causes electric current to flow and develops the concept of an electrical potential (voltage).
1752 – Benjamin Franklin establishes the link between lightning and electricity by flying a kite into a thunderstorm and transferring some of the charge into a Leyden jar, showing that its properties were the same as charge produced by an electrical machine. He is credited with utilizing the concepts of positive and negative charge in the explanation of then known electrical phenomena. He theorized that there was an electrical fluid (which he proposed could be the luminiferous ether, used by others before and after him to explain the wave theory of light) that was part of all material and all intervening space. The charge of any object would be neutral if the concentration of this fluid were the same both inside and outside of the body, positive if the object contained an excess of this fluid, and negative if there were a deficit. In 1749 he had documented the similar properties of lightning and electricity, such as that both an electric spark and a lightning flash produced light and sound, could kill animals, cause fires, melt metal, destroy or reverse the polarity of magnetism, and flowed through conductors and could be concentrated at sharp points. He was later able to apply the property of concentrating at sharp points in his invention of the lightning rod, from which he intentionally did not profit. He also investigated the Leyden jar, proving that the charge was stored on the glass and not in the water, as others had assumed.
1753 – C. M. (of Scotland, possibly Charles Morrison of Greenock or Charles Marshall of Aberdeen) proposes, in the 17 February edition of Scots Magazine, an electrostatic telegraph system with 26 insulated wires, each corresponding to a letter of the alphabet and each connected to electrostatic machines. The receiving charged end was to electrostatically attract a disc of paper marked with the corresponding letter.
1767 – Joseph Priestley proposes an electrical inverse-square law
1774 – Georges-Louis LeSage builds an electrostatic telegraph system with 26 insulated wires conducting Leyden-jar charges to pith-ball electroscopes, each corresponding to a letter of the alphabet. Its range was only between rooms of his home.
1784 – Henry Cavendish defines the inductive capacity of dielectrics (insulators) and measures the specific inductive capacity of various substances by comparison with an air condenser.
1785 – Charles Coulomb introduces the inverse-square law of electrostatics
1786 – Luigi Galvani discovers "animal electricity" and postulates that animal bodies are storehouses of electricity. His observations later lead Alessandro Volta to the invention of the voltaic cell and the electric battery.
1791 – Luigi Galvani discovers galvanic electricity and bioelectricity through experiments following an observation that touching exposed muscles in frogs' legs with a scalpel which had been close to a static electrical machine caused them to jump. He called this "animal electricity". Years of experimentation in the 1780s eventually led him to the construction of an arc of two different metals (copper and zinc for example) by connecting the two metal pieces and then connecting their open ends across the nerve of a frog leg, producing the same muscular contractions (by completing a circuit) as originally accidentally observed. The use of different metals to produce an electrical spark is the basis that led Alessandro Volta in 1799 to his invention of his voltaic pile, which eventually became the galvanic battery.
1799 – Alessandro Volta, following Galvani's discovery of galvanic electricity, creates a voltaic cell producing an electric current by the chemical action of several pairs of alternating copper (or silver) and zinc discs "piled" and separated by cloth or cardboard which had been soaked in brine (salt water) or acid to increase conductivity. In 1800 he demonstrates the production of light from a glowing wire conducting electricity. This was followed in 1801 by his construction of the first electric battery, by utilizing multiple voltaic cells. Prior to his major discoveries, in a letter of praise to the Royal Society in 1793, Volta reported Luigi Galvani's experiments of the 1780s as the "most beautiful and important discoveries", regarding them as the foundation of future discoveries. Volta's inventions led to revolutionary changes by providing a method for the production of inexpensive, controlled electric current, in contrast to existing frictional machines and Leyden jars. The electric battery became standard equipment in every experimental laboratory and heralded an age of practical applications of electricity. The unit volt is named for his contributions.
1800 – William Herschel discovers infrared radiation from the Sun.
1800 – William Nicholson, Anthony Carlisle and Johann Ritter use electricity to decompose water into hydrogen and oxygen, thereby discovering the process of electrolysis, which led to the discovery of many other elements.
1800 – Alessandro Volta invents the voltaic pile, or "battery", specifically to disprove Galvani's animal electricity theory.
19th century
1801–1850
1801 – Johann Ritter discovers ultraviolet radiation from the Sun
1801 – Thomas Young demonstrates the wave nature of light and the principle of interference
1802 – Gian Domenico Romagnosi, Italian legal scholar, discovers that electricity and magnetism are related by noting that a nearby voltaic pile deflects a magnetic needle. He published his account in an Italian newspaper, but this was overlooked by the scientific community.
1803 – Thomas Young develops the double-slit experiment and demonstrates the effect of interference.
1806 – Alessandro Volta employs a voltaic pile to decompose potash and soda, showing that they are the oxides of the previously unknown metals potassium and sodium. These experiments were the beginning of electrochemistry.
1808 – Étienne-Louis Malus discovers polarization by reflection
1809 – Étienne-Louis Malus publishes the law of Malus which predicts the light intensity transmitted by two polarizing sheets
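For reference, and assuming ideal polarizers (a modern idealization rather than Malus's own formulation), the law of Malus gives the transmitted intensity as
\[ I = I_0 \cos^2\theta, \]
where \(I_0\) is the intensity leaving the first sheet and \(\theta\) is the angle between the transmission axes of the two sheets.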
1809 – Humphry Davy first publicly demonstrates the electric arc light.
1811 – François Jean Dominique Arago discovers that some quartz crystals continuously rotate the electric vector of light
1814 – Joseph von Fraunhofer discovered and studied the dark absorption lines in the spectrum of the sun now known as Fraunhofer lines
1816 – David Brewster discovers stress birefringence
1818 – Siméon Poisson predicts the Poisson-Arago bright spot at the center of the shadow of a circular opaque obstacle
1818 – François Jean Dominique Arago verifies the existence of the Poisson-Arago bright spot
1820 – Hans Christian Ørsted, Danish physicist and chemist, develops an experiment in which he notices a compass needle is deflected from magnetic north when an electric current from the battery he was using was switched on and off, convincing him that magnetic fields radiate from all sides of a live wire just as light and heat do, confirming a direct relationship between electricity and magnetism. He also observes that the movement of the compass-needle to one side or the other depends upon the direction of the current. Following intensive investigations, he published his findings, proving that a changing electric current produces a magnetic field as it flows through a wire. The oersted unit of magnetic induction is named for his contributions.
1820 – André-Marie Ampère, professor of mathematics at the École Polytechnique, demonstrates that parallel current-carrying wires experience magnetic force in a meeting of the French Academy of Science, exactly one week after Ørsted's announcement of his discovery that a magnetic needle is acted on by a voltaic current. He shows that a coil of wire carrying a current behaves like an ordinary magnet and suggests that electromagnetism might be used in telegraphy. He mathematically develops Ampère's law describing the magnetic force between two electric currents. His mathematical theory explains known electromagnetic phenomena and predicts new ones. His laws of electrodynamics include the facts that parallel conductors carrying current in the same direction attract and those carrying currents in opposite directions repel one another. One of the first to develop electrical measuring techniques, he built an instrument utilizing a free-moving needle to measure the flow of electricity, contributing to the development of the galvanometer. In 1821, he proposed a telegraphy system utilizing one wire per "galvanometer" to indicate each letter, and reported experimenting successfully with such a system. However, in 1824, Peter Barlow reported its maximum distance was only 200 feet, and so it was impractical. In 1826 he publishes the Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience containing a mathematical derivation of the electrodynamic force law. Following Faraday's discovery of electromagnetic induction in 1831, Ampère agreed that Faraday deserved full credit for the discovery.
1820 – Johann Salomo Christoph Schweigger, German chemist, physicist, and professor, builds the first sensitive galvanometer, wrapping a coil of wire around a graduated compass, an acceptable instrument for actual measurement as well as detection of small amounts of electric current, naming it after Luigi Galvani.
1821 – André-Marie Ampère announces his theory of electrodynamics, predicting the force that one current exerts upon another.
1821 – Thomas Johann Seebeck discovers the thermoelectric effect.
1821 – Augustin-Jean Fresnel derives a mathematical demonstration that polarization can be explained only if light is entirely transverse, with no longitudinal vibration whatsoever.
1825 – Augustin Fresnel phenomenologically explains optical activity by introducing circular birefringence
1825 – William Sturgeon, founder of the first English journal devoted to electricity, the Annals of Electricity, finds that an iron core inside a helical coil of wire connected to a battery greatly increases the resulting magnetic field, thus making possible the more powerful electromagnets utilizing a ferromagnetic core. Sturgeon also bends the iron core into a U-shape to bring the poles closer together, thus concentrating the magnetic field lines. These discoveries followed Ampère's discovery that electricity passing through a coiled wire produced a magnetic force and Dominique François Jean Arago's finding that an iron bar is magnetized by putting it inside a coil of current-carrying wire, though Arago had not observed the increased strength of the resulting field while the bar was being magnetized.
1826 – Georg Simon Ohm states his Ohm's law of electrical resistance in the journals of Schweigger and Poggendorff, and also published in his landmark pamphlet Die galvanische Kette mathematisch bearbeitet in 1827. The unit ohm (Ω) of electrical resistance has been named in his honor.
1829 & 1830 – Francesco Zantedeschi publishes papers on the production of electric currents in closed circuits by the approach and withdrawal of a magnet, thereby anticipating Michael Faraday's classical experiments of 1831.
1831 – Michael Faraday began experiments leading to his discovery of the law of electromagnetic induction, though the discovery may have been anticipated by the work of Francesco Zantedeschi. His breakthrough came when he wrapped two insulated coils of wire around a massive iron ring, bolted to a chair, and found that upon passing a current through one coil, a momentary electric current was induced in the other coil. He then found that if he moved a magnet through a loop of wire, or vice versa, an electric current also flowed in the wire. He then used this principle to construct the electric dynamo, the first electric power generator. He proposed that electromagnetic forces extended into the empty space around the conductor, but did not complete that work. Faraday's concept of lines of flux emanating from charged bodies and magnets provided a way to visualize electric and magnetic fields. That mental model was crucial to the successful development of electromechanical devices which were to dominate the 19th century. His demonstrations that a changing magnetic field produces an electric field, mathematically modeled by Faraday's law of induction, would subsequently become one of Maxwell's equations. These consequently evolved into the generalization of field theory.
1831 – Macedonio Melloni uses a thermopile to detect infrared radiation
1832 – Baron Pavel L'vovitch Schilling (Paul Schilling) creates the first electromagnetic telegraph, consisting of a single-needle system in which a code was used to indicate the characters. Only months later, Göttingen professors Carl Friedrich Gauss and Wilhelm Weber construct a telegraph of their own, which was in working operation two years before Schilling could put his into practice. Schilling demonstrated the long-distance transmission of signals between two different rooms of his apartment and was the first to put into practice a binary system of signal transmission.
1833 – Heinrich Lenz states Lenz's law: if an increasing (or decreasing) magnetic flux induces an electromotive force (EMF), the resulting current will oppose a further increase (or decrease) in magnetic flux, i.e., that an induced current in a closed conducting loop will appear in such a direction that it opposes the change that produced it. Lenz's law is one consequence of the principle of conservation of energy. If a magnet moves towards a closed loop, then the induced current in the loop creates a field that exerts a force opposing the motion of the magnet. Lenz's law can be derived from Faraday's law of induction by noting the negative sign on the right side of the equation. He also independently discovered Joule's law in 1842; to honor his efforts, Russian physicists refer to it as the "Joule–Lenz law".
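In the modern notation of Faraday's law (a later formalization, not Lenz's original statement), the induced electromotive force is
\[ \mathcal{E} = -\frac{d\Phi_B}{dt}, \]
where \(\Phi_B\) is the magnetic flux through the loop; the minus sign is the mathematical expression of Lenz's law.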
1833 – Michael Faraday announces his law of electrochemical equivalents
1834 – Heinrich Lenz determines the direction of the induced electromotive force (emf) and current resulting from electromagnetic induction. Lenz's law provides a physical interpretation of the choice of sign in Faraday's law of induction (1831), indicating that the induced emf and the change in flux have opposite signs.
1834 – Jean-Charles Peltier discovers the Peltier effect: heating by an electric current at the junction of two different metals.
1835 – Joseph Henry invents the electric relay, which is an electrical switch by which the change of a weak current through the windings of an electromagnet will attract an armature to open or close the switch. Because this can control (by opening or closing) another, much higher-power, circuit, it is in a broad sense a form of electrical amplifier. This made a practical electric telegraph possible. He was the first to coil insulated wire tightly around an iron core in order to make an extremely powerful electromagnet, improving on William Sturgeon's design, which used loosely coiled, uninsulated wire. He also discovered the property of self inductance independently of Michael Faraday.
1836 – William Fothergill Cooke invents a mechanical telegraph. In 1837, together with Charles Wheatstone, he invents the Cooke and Wheatstone needle telegraph. In 1838 the Cooke and Wheatstone telegraph becomes the first commercial telegraph in the world when it is installed on the Great Western Railway.
1837 – Samuel Morse develops an alternative electrical telegraph design capable of transmitting long distances over poor quality wire. He and his assistant Alfred Vail develop the Morse code signaling alphabet. In 1838 Morse successfully tested the device at the Speedwell Ironworks near Morristown, New Jersey, and publicly demonstrated it to a scientific committee at the Franklin Institute in Philadelphia, Pennsylvania. The first electric telegram using this device was sent by Morse on 24 May, 1844 from Baltimore to Washington, D.C., bearing the message "What hath God wrought?"
1838 – Michael Faraday, using a voltaic battery, studies electrical discharge in rarefied gases, observing the dark space near the cathode later named for him; this line of work eventually leads to the discovery of cathode rays.
1839 – Alexandre Edmond Becquerel observes the photoelectric effect with an electrode in a conductive solution exposed to light.
1840 – James Prescott Joule formulates Joule's Law (sometimes called the Joule-Lenz law) quantifying the amount of heat produced in a circuit as proportional to the product of the time duration, the resistance, and the square of the current passing through it.
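Expressed as a formula in modern notation, Joule's law for the heat \(Q\) dissipated in a resistance \(R\) carrying a steady current \(I\) for a time \(t\) is
\[ Q = I^2 R t. \]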
1845 – Michael Faraday discovers that light propagation in a material can be influenced by external magnetic fields (Faraday effect)
1849 – Hippolyte Fizeau and Jean-Bernard Foucault measure the speed of light to be about 298,000 km/s
1851–1900
1852 – George Gabriel Stokes defines the Stokes parameters of polarization
1852 – Edward Frankland develops the theory of chemical valence
1854 – Gustav Robert Kirchhoff, physicist and one of the founders of spectroscopy, publishes Kirchhoff's Laws on the conservation of electric charge and energy, which are used to determine currents in each branch of a circuit.
1855 – James Clerk Maxwell submits On Faraday's Lines of Force for publication containing a mathematical statement of Ampère's circuital law relating the curl of a magnetic field to the electrical current at a point.
1861 – the first transcontinental telegraph system spans North America by connecting an existing network in the eastern United States to a small network in California by a link between Omaha and Carson City via Salt Lake City. The slower Pony Express system ceased operation a month later.
1864 – James Clerk Maxwell publishes his papers on a dynamical theory of the electromagnetic field
1865 – James Clerk Maxwell publishes his landmark paper A Dynamical Theory of the Electromagnetic Field, in which Maxwell's equations demonstrated that electric and magnetic forces are two complementary aspects of electromagnetism. He shows that the associated complementary electric and magnetic fields of electromagnetism travel through space, in the form of waves, at a constant velocity. He also proposes that light is a form of electromagnetic radiation and that waves of oscillating electric and magnetic fields travel through empty space at a speed that could be predicted from simple electrical experiments. Using available data, he obtains a velocity close to the measured speed of light and states "This velocity is so nearly that of light, that it seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws."
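In modern notation (not Maxwell's original symbols), the predicted propagation speed follows from the vacuum permeability \(\mu_0\) and permittivity \(\varepsilon_0\):
\[ c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}, \]
which numerically matches the measured speed of light.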
1866 – the first successful transatlantic telegraph system is completed. Earlier transatlantic submarine cables installed in 1857 and 1858 failed after operating for a few days or weeks.
1869 – William Crookes invents the Crookes tube.
1871 – Lord Rayleigh explains the blue colour of the sky and the reddening of sunsets (Rayleigh scattering)
1873 – The British Association establishes the units volt, ampere, and ohm.
1873 – Willoughby Smith discovers the photoelectric effect in metals not in solution (i.e., selenium).
1873 – J. C. Maxwell publishes A Treatise on Electricity and Magnetism which states that light is an electromagnetic phenomenon.
1874 – German scientist Karl Ferdinand Braun discovers the "unilateral conduction" of crystals. Braun patents the first solid state diode, a crystal rectifier, in 1899.
1875 – John Kerr discovers the electrically induced birefringence of some liquids
1878 – Thomas Edison, following work on a "multiplex telegraph" system and the phonograph, invents an improved incandescent light bulb. This was not the first electric light bulb but the first commercially practical incandescent light. In 1879 he produces a high-resistance lamp in a very high vacuum; the lamp lasts hundreds of hours. While the earlier inventors had produced electric lighting in lab conditions, Edison concentrated on commercial application and was able to sell the concept to homes and businesses by mass-producing relatively long-lasting light bulbs and creating a complete system for the generation and distribution of electricity.
1879 – Jožef Stefan discovers the Stefan–Boltzmann radiation law of a black body and uses it to make the first realistic estimate of the temperature of the Sun's surface.
1880 – Edison discovers thermionic emission or the Edison effect.
1882 – Edison switches on the world's first electrical power distribution system, providing 110 volts direct current (DC) to 59 customers.
1884 – Oliver Heaviside reformulates Maxwell's original mathematical treatment of electromagnetic theory from twenty equations in twenty unknowns into four simple equations in four unknowns (the modern vector form of Maxwell's equations).
1886 – Oliver Heaviside coins the term inductance.
1887 – Heinrich Hertz invents a device for the production and reception of electromagnetic (EM) radio waves. His receiver consists of a coil with a spark gap.
1888 – Introduction of the induction motor, an electric motor that harnesses a rotating magnetic field produced by alternating current, independently invented by Galileo Ferraris and Nikola Tesla.
1888 – Heinrich Hertz demonstrates the existence of electromagnetic waves by building an apparatus that produced and detected UHF radio waves (or microwaves in the UHF region). He also found that radio waves could be transmitted through different types of materials and were reflected by others, the key to radar. His experiments explain reflection, refraction, polarization, interference, and velocity of electromagnetic waves.
1893 – Victor Schumann discovers the vacuum ultraviolet spectrum.
1895 – Wilhelm Conrad Röntgen discovers X-rays
1895 – Jagadis Chandra Bose gives his first public demonstration of electromagnetic waves
1896 – Arnold Sommerfeld solves the half-plane diffraction problem
1897 – J. J. Thomson discovers the electron.
1899 – Pyotr Lebedev measures the pressure of light on a solid body.
1900 – The Liénard–Wiechert potentials are introduced as time-dependent (retarded) electrodynamic potentials
1900 – Max Planck resolves the ultraviolet catastrophe by suggesting that black-body radiation consists of discrete packets, or quanta, of energy. The amount of energy in each packet is proportional to the frequency of the electromagnetic waves. The constant of proportionality is now called the Planck constant in his honor.
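In the notation that later became standard, the energy of each quantum is
\[ E = h\nu, \]
where \(\nu\) is the frequency of the radiation and \(h\) is the Planck constant.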
20th century
1904 – John Ambrose Fleming invents the thermionic diode, the first electronic vacuum tube, which had practical use in early radio receivers.
1905 – Albert Einstein proposes the Theory of Special Relativity, in which he rejects the existence of the aether as unnecessary for explaining the propagation of electromagnetic waves. Instead, Einstein asserts as a postulate that the speed of light is constant in all inertial frames of reference, and goes on to demonstrate a number of revolutionary (and highly counter-intuitive) consequences, including time dilation, length contraction, the relativity of simultaneity, the dependence of mass on velocity, and the equivalence of mass and energy.
1905 – Einstein explains the photoelectric effect by extending Planck's idea of light quanta, or photons, to the absorption and emission of photoelectrons. Einstein would later receive the Nobel Prize in Physics for this discovery, which launched the quantum revolution in physics.
1911 – Superconductivity is discovered by Heike Kamerlingh Onnes, who was studying the resistivity of solid mercury at cryogenic temperatures using the recently discovered liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistivity abruptly disappeared. For this discovery, he was awarded the Nobel Prize in Physics in 1913.
1919 – Albert A. Michelson makes the first interferometric measurements of stellar diameters at Mount Wilson Observatory (see history of astronomical interferometry)
1924 – Louis de Broglie postulates the wave nature of electrons and suggests that all matter has wave properties.
1946 – Martin Ryle and Vonberg build the first two-element astronomical radio interferometer (see history of astronomical interferometry)
1953 – Charles H. Townes, James P. Gordon, and Herbert J. Zeiger produce the first maser
1956 – R. Hanbury-Brown and R.Q. Twiss complete the correlation interferometer
1959 – Sheldon Glashow, Abdus Salam, and John Clive Ward merge electromagnetism and the weak interaction into the electroweak interaction, later incorporated into the Standard Model.
1960 – Theodore Maiman produces the first working laser
1966 – Jefimenko introduces time-dependent (retarded) generalizations of Coulomb's law and the Biot–Savart law
1999 – M. Henny and others demonstrate the fermionic Hanbury Brown and Twiss experiment
See also
History of electromagnetic theory
History of optics
History of special relativity
History of superconductivity
Timeline of luminiferous aether
References
Further reading
Pliny the Elder, The Natural History, from the Perseus Digital Library
The Discovery of the Electron from the American Institute of Physics
Enterprise and electrolysis... from the Royal Society of Chemistry (chemsoc)
Pure Science-History, Worldwide School
External links
The Work of Jagadis Chandra Bose: 100 Years of MM-Wave Research
Jagadis Chandra Bose and His Pioneering Research on Microwaves
Timeline
Electromagnetism and Classical Optics
Electromagnetism and Classical Optics, Timeline | Timeline of electromagnetism and classical optics | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 7,448 | [
"Electromagnetism",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Optics",
"Science and technology studies",
"History of electrical engineering",
" molecular",
"Fundamental interactions",
"History of technology",
"Atomic",
"Electrical engineering",
"History of science and tec... |
58,777 | https://en.wikipedia.org/wiki/Timeline%20of%20thermodynamics | A timeline of events in the history of thermodynamics.
Before 1800
1593 – Galileo Galilei invents one of the first thermoscopes, also known as Galileo thermometer
1650 – Otto von Guericke builds the first vacuum pump
1660 – Robert Boyle experimentally discovers Boyle's law, relating the pressure and volume of a gas (published 1662)
1665 – Robert Hooke published his book Micrographia, which contained the statement: "Heat being nothing else but a very brisk and vehement agitation of the parts of a body."
1667 – J. J. Becher puts forward a theory of combustion involving combustible earth in his book Physica subterranea (see Phlogiston theory).
1676–1689 – Gottfried Leibniz develops the concept of vis viva, a limited version of the conservation of energy
1679 – Denis Papin designed a steam digester which inspired the development of the piston-and-cylinder steam engine.
1694–1734 – Georg Ernst Stahl names Becher's combustible earth as phlogiston and develops the theory
1698 – Thomas Savery patents an early steam engine
1702 – Guillaume Amontons introduces the concept of absolute zero, based on observations of gases
1738 – Daniel Bernoulli publishes Hydrodynamica, initiating the kinetic theory
1749 – Émilie du Châtelet, in her French translation and commentary on Newton's Philosophiae Naturalis Principia Mathematica, derives the conservation of energy from the first principles of Newtonian mechanics.
1761 – Joseph Black discovers that ice absorbs heat without changing its temperature when melting
1772 – Black's student Daniel Rutherford discovers nitrogen, which he calls phlogisticated air, and together they explain the results in terms of the phlogiston theory
1776 – John Smeaton publishes a paper on experiments related to power, work, momentum, and kinetic energy, supporting the conservation of energy
1777 – Carl Wilhelm Scheele distinguishes heat transfer by thermal radiation from that by convection and conduction
1783 – Antoine Lavoisier develops the oxygen theory of combustion; in his paper "Réflexions sur le phlogistique", he deprecates the phlogiston theory and proposes a caloric theory
1784 – Jan Ingenhousz describes Brownian motion of charcoal particles on water
1791 – Pierre Prévost shows that all bodies radiate heat, no matter how hot or cold they are
1798 – Count Rumford (Benjamin Thompson) publishes his paper An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction detailing measurements of the frictional heat generated in boring cannons and develops the idea that heat is a form of kinetic energy; his measurements are inconsistent with caloric theory, but are also sufficiently imprecise as to leave room for doubt.
1800–1847
1802 – Joseph Louis Gay-Lussac publishes Charles's law, discovered (but unpublished) by Jacques Charles around 1787; this shows the dependency between temperature and volume. Gay-Lussac also formulates the law relating temperature with pressure (the pressure law, or Gay-Lussac's law)
1804 – Sir John Leslie observes that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation
1805 – William Hyde Wollaston defends the conservation of energy in On the Force of Percussion
1808 – John Dalton defends caloric theory in A New System of Chemistry and describes how it combines with matter, especially gases; he proposes that the heat capacity of gases varies inversely with atomic weight
1810 – Sir John Leslie freezes water to ice artificially
1813 – Peter Ewart supports the idea of the conservation of energy in his paper On the measure of moving force; the paper strongly influences Dalton and his pupil, James Joule
1819 – Pierre Louis Dulong and Alexis Thérèse Petit give the Dulong-Petit law for the specific heat capacity of a crystal
1820 – John Herapath develops some ideas in the kinetic theory of gases but mistakenly associates temperature with molecular momentum rather than kinetic energy; his work receives little attention other than from Joule
1822 – Joseph Fourier formally introduces the use of dimensions for physical quantities in his Théorie Analytique de la Chaleur
1822 – Marc Seguin writes to John Herschel supporting the conservation of energy and kinetic theory
1824 – Sadi Carnot analyzes the efficiency of steam engines using caloric theory; he develops the notion of a reversible process and, in postulating that no such thing exists in nature, lays the foundation for the second law of thermodynamics, initiating the science of thermodynamics
1827 – Robert Brown discovers the Brownian motion of pollen and dye particles in water
1831 – Macedonio Melloni demonstrates that black-body radiation can be reflected, refracted, and polarised in the same way as light
1834 – Émile Clapeyron popularises Carnot's work through a graphical and analytic formulation. He also combined Boyle's law, Charles's law, and Gay-Lussac's law to produce a combined gas law, PV/T = k
1841 – Julius Robert von Mayer, an amateur scientist, writes a paper on the conservation of energy, but his lack of academic training leads to its rejection
1842 – Mayer makes a connection between work, heat, and the human metabolism based on his observations of blood made while a ship's surgeon; he calculates the mechanical equivalent of heat
1842 – William Robert Grove demonstrates the thermal dissociation of molecules into their constituent atoms, by showing that steam can be disassociated into oxygen and hydrogen, and the process reversed
1843 – John James Waterston fully expounds the kinetic theory of gases, but according to D Levermore "there is no evidence that any physical scientist read the book; perhaps it was overlooked because of its misleading title, Thoughts on the Mental Functions."
1843 – James Joule experimentally finds the mechanical equivalent of heat
1845 – Henri Victor Regnault added Avogadro's law to the combined gas law to produce the ideal gas law, PV = nRT
1846 – Grove publishes an account of the general theory of the conservation of energy in On The Correlation of Physical Forces
1847 – Hermann von Helmholtz publishes a definitive statement of the conservation of energy, the first law of thermodynamics
1848–1899
1848 – William Thomson extends the concept of absolute zero from gases to all substances
1849 – William John Macquorn Rankine calculates the correct relationship between saturated vapour pressure and temperature using his hypothesis of molecular vortices
1850 – Rankine uses his vortex theory to establish accurate relationships between the temperature, pressure, and density of gases, and expressions for the latent heat of evaporation of a liquid; he accurately predicts the surprising fact that the apparent specific heat of saturated steam will be negative
1850 – Rudolf Clausius coined the term "entropy" (das Wärmegewicht, symbolized S) to denote heat lost or turned into waste. ("Wärmegewicht" translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω, "I turn".)
1850 – Clausius gives the first clear joint statement of the first and second law of thermodynamics, abandoning the caloric theory, but preserving Carnot's principle
1851 – Thomson gives an alternative statement of the second law
1852 – Joule and Thomson demonstrate that a rapidly expanding gas cools, later named the Joule–Thomson effect or Joule–Kelvin effect
1854 – Helmholtz puts forward the idea of the heat death of the universe
1854 – Clausius establishes the importance of dQ/T (Clausius's theorem), but does not yet name the quantity
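In modern form (a later restatement), Clausius's theorem says that for any thermodynamic cycle
\[ \oint \frac{\delta Q}{T} \le 0, \]
with equality for reversible cycles; the quantity later named entropy is built from \(\delta Q / T\) along reversible paths.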
1854 – Rankine introduces his thermodynamic function, later identified as entropy
1856 – August Krönig publishes an account of the kinetic theory of gases, probably after reading Waterston's work
1857 – Clausius gives a modern and compelling account of the kinetic theory of gases in his On the nature of motion called heat
1859 – James Clerk Maxwell discovers the distribution law of molecular velocities
1859 – Gustav Kirchhoff shows that energy emission from a black body is a function of only temperature and frequency
1862 – "Disgregation", a precursor of entropy, was defined in 1862 by Clausius as the magnitude of the degree of separation of molecules of a body
1865 – Clausius introduces the modern macroscopic concept of entropy
1865 – Josef Loschmidt applies Maxwell's theory to estimate the number-density of molecules in gases, given observed gas viscosities.
1867 – Maxwell asks whether Maxwell's demon could reverse irreversible processes
1870 – Clausius proves the scalar virial theorem
1872 – Ludwig Boltzmann states the Boltzmann equation for the temporal development of distribution functions in phase space, and publishes his H-theorem
1873 – Johannes Diderik van der Waals formulates his equation of state
1874 – Thomson formally states the second law of thermodynamics
1876 – Josiah Willard Gibbs publishes the first of two papers (the second appears in 1878) which discuss phase equilibria, statistical ensembles, the free energy as the driving force behind chemical reactions, and chemical thermodynamics in general.
1876 – Loschmidt criticises Boltzmann's H theorem as being incompatible with microscopic reversibility (Loschmidt's paradox).
1877 – Boltzmann states the relationship between entropy and probability
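In the form later made standard (the constant \(k_B\) was introduced by Planck), the relationship reads
\[ S = k_B \ln W, \]
where \(W\) is the number of microstates compatible with the given macrostate.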
1879 – Jožef Stefan observes that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and states the Stefan–Boltzmann law
1884 – Boltzmann derives the Stefan–Boltzmann blackbody radiant flux law from thermodynamic considerations
1888 – Henri-Louis Le Chatelier states his principle that the response of a chemical system perturbed from equilibrium will be to counteract the perturbation
1889 – Walther Nernst relates the voltage of electrochemical cells to their chemical thermodynamics via the Nernst equation
1889 – Svante Arrhenius introduces the idea of activation energy for chemical reactions, giving the Arrhenius equation
1893 – Wilhelm Wien discovers the displacement law for a blackbody's maximum specific intensity
1900–1944
1900 – Max Planck suggests that light may be emitted in discrete frequencies, giving his law of black-body radiation
1905 – Albert Einstein, in the first of his miracle year papers, argues that the reality of quanta would explain the photoelectric effect
1905 – Einstein mathematically analyzes Brownian motion as a result of random molecular motion in his paper On the movement of small particles suspended in a stationary liquid demanded by the molecular-kinetic theory of heat
1906 – Nernst presents a formulation of the third law of thermodynamics
1907 – Einstein uses quantum theory to estimate the heat capacity of an Einstein solid
1909 – Constantin Carathéodory develops an axiomatic system of thermodynamics
1910 – Einstein and Marian Smoluchowski find the Einstein–Smoluchowski formula for the attenuation coefficient due to density fluctuations in a gas
1911 – Paul Ehrenfest and Tatjana Ehrenfest–Afanassjewa publish their classical review on the statistical mechanics of Boltzmann, Begriffliche Grundlagen der statistischen Auffassung in der Mechanik
1912 – Peter Debye gives an improved heat capacity estimate by allowing low-frequency phonons
1916 – Sydney Chapman and David Enskog systematically develop the kinetic theory of gases
1916 – Einstein considers the thermodynamics of atomic spectral lines and predicts stimulated emission
1919 – James Jeans discovers that the dynamical constants of motion determine the distribution function for a system of particles
1920 – Meghnad Saha states his ionization equation
1923 – Debye and Erich Hückel publish a statistical treatment of the dissociation of electrolytes
1924 – Satyendra Nath Bose introduces Bose–Einstein statistics, in a paper translated by Einstein
1926 – Enrico Fermi and Paul Dirac introduce Fermi–Dirac statistics
1927 – John von Neumann introduces the density matrix representation, establishing quantum statistical mechanics
1928 – John B. Johnson discovers Johnson noise in a resistor
1928 – Harry Nyquist derives the fluctuation-dissipation theorem, a relationship to explain Johnson noise in a resistor
1931 – Lars Onsager publishes his groundbreaking paper deriving the Onsager reciprocal relations
1935 – Ralph H. Fowler invents the title 'the zeroth law of thermodynamics' to summarise postulates made by earlier physicists that thermal equilibrium between systems is a transitive relation
1938 – Anatoly Vlasov proposes the Vlasov equation for a correct dynamical description of ensembles of particles with collective long range interaction
1939 – Nikolay Krylov and Nikolay Bogolyubov give the first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics
1942 – Joseph L. Doob states his theorem on Gauss–Markov processes
1944 – Lars Onsager gives an analytic solution to the 2-dimensional Ising model, including its phase transition
1945–present
1945–1946 – Nikolay Bogoliubov develops a general method for a microscopic derivation of kinetic equations for classical statistical systems using BBGKY hierarchy
1947 – Nikolay Bogoliubov and Kirill Gurov extend this method for a microscopic derivation of kinetic equations for quantum statistical systems
1948 – Claude Elwood Shannon establishes information theory
1957 – Aleksandr Solomonovich Kompaneets derives his Compton scattering Fokker–Planck equation
1957 – Ryogo Kubo derives the first of the Green-Kubo relations for linear transport coefficients
1957 – Edwin T. Jaynes publishes two papers detailing the MaxEnt interpretation of thermodynamics from information theory
1960–1965 – Dmitry Zubarev develops the method of non-equilibrium statistical operator, which becomes a classical tool in the statistical theory of non-equilibrium processes
1972 – Jacob Bekenstein suggests that black holes have an entropy proportional to their surface area
1974 – Stephen Hawking predicts that black holes will radiate particles with a black-body spectrum which can cause black hole evaporation
1977 – Ilya Prigogine wins the Nobel prize for his work on dissipative structures in thermodynamic systems far from equilibrium. The import and dissipation of energy allows ordered structures to arise locally without violating the 2nd law of thermodynamics
See also
Timeline of heat engine technology
History of physics
History of thermodynamics
Thermodynamics
Timeline of information theory
List of textbooks in thermodynamics and statistical mechanics
References
History of thermodynamics
Thermodynamics, statistical mechanics, and random processes
Thermodynamics
Chemical engineering | Timeline of thermodynamics | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,034 | [
"Chemical engineering",
"History of thermodynamics",
"Thermodynamics",
"nan",
"Dynamical systems"
] |
58,783 | https://en.wikipedia.org/wiki/Timeline%20of%20particle%20physics%20technology | Timeline of particle physics technology
1896 - Charles Wilson discovers that energetic particles produce droplet tracks in supersaturated gases.
1897-1901 - Discovery of the Townsend discharge by John Sealy Townsend.
1908 - Hans Geiger and Ernest Rutherford use the Townsend discharge principle to detect alpha particles.
1911 - Charles Wilson finishes a sophisticated cloud chamber.
1928 - Hans Geiger and Walther Müller invent the Geiger–Müller tube, which is based upon the gas ionisation principle used by Geiger in 1908, but is a practical device that can also detect beta and gamma radiation. This also constitutes the invention of the Geiger–Müller counter.
1934 - Ernest Lawrence and M. Stanley Livingston invent the cyclotron.
1945 - Edwin McMillan devises a synchrotron.
1952 - Donald Glaser develops the bubble chamber.
1968 - Georges Charpak and Roger Bouclier build the first multiwire proportional chamber, a particle detector whose wires are operated in proportional mode.
Particle physics | Timeline of particle physics technology | [
"Physics"
] | 196 | [
"Particle physics"
] |
58,785 | https://en.wikipedia.org/wiki/Timeline%20of%20gravitational%20physics%20and%20relativity | The following is a timeline of gravitational physics and general relativity.
Before 1500
3rd century B.C. – Aristarchus of Samos proposes the heliocentric model.
1500s
1543 – Nicolaus Copernicus publishes On the Revolutions of Heavenly Spheres.
1583 – Galileo Galilei deduces the period relationship of a pendulum from observations (according to a later biographer).
1586 – Simon Stevin demonstrates that two objects of different mass accelerate at the same rate when dropped.
1589 – Galileo Galilei describes a hydrostatic balance for measuring specific gravity.
1590 – Galileo Galilei formulates modified Aristotelean theory of motion (later retracted) based on density rather than weight of objects.
1600s
1602-1608 – Galileo Galilei experiments with pendulum motion and inclined planes; deduces his law of free fall; and discovers that projectiles travel along parabolic trajectories.
1609 – Johannes Kepler announces his first two laws of planetary motion.
1610 – Johannes Kepler states the dark night paradox.
1610 – Galileo Galilei publishes The Sidereal Messenger, detailing his astronomical discoveries made with a telescope.
1619 – Johannes Kepler unveils his third law of planetary motion.
1665-66 – Isaac Newton introduces an inverse-square law of universal gravitation uniting terrestrial and celestial theories of motion and uses it to predict the orbit of the Moon and the parabolic arc of projectiles (the latter using his generalization of the binomial theorem).
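In modern notation (the constant \(G\) was only measured much later, by Cavendish in 1798), the inverse-square law of universal gravitation is written as
\[ F = G \frac{m_1 m_2}{r^2}. \]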
1676-9 – Ole Rømer makes the first scientific determination of the speed of light.
1684 – Isaac Newton proves that planets moving under an inverse-square force law will obey Kepler's laws in a letter to Edmond Halley.
1686 – Isaac Newton uses a fixed length pendulum with weights of varying composition to test the weak equivalence principle to 1 part in 1000.
1686 – Isaac Newton publishes his Mathematical Principles of Natural Philosophy, where he develops his calculus, states his laws of motion and gravitation, proves the shell theorem, describes his rotating bucket thought experiment, explains the tides, and calculates the figure of the Earth.
1700s
1705 – Edmond Halley predicts the return of Halley's comet in 1758, the first use of Newton's laws by someone other than Newton himself.
1728 – Isaac Newton posthumously publishes his cannonball thought experiment.
1742 – Colin Maclaurin studies a self-gravitating uniform liquid drop at equilibrium, the Maclaurin spheroid.
1755 – Immanuel Kant advances Emanuel Swedenborg's nebular hypothesis on the origin of the Solar System.
1765 – Leonhard Euler discovers the first three Lagrange points.
1767 – Leonhard Euler solves Euler's restricted three-body problem.
1772 – Joseph-Louis Lagrange discovers the two remaining Lagrange points.
1796 – Pierre-Simon de Laplace independently introduces the nebular hypothesis.
1798 – Henry Cavendish tests Newton's law of universal gravitation using a torsion balance, leading to the first accurate value for the gravitational constant and the mean density of the Earth.
1800s
1846 – Urbain Le Verrier and John Couch Adams, studying Uranus' orbit, independently prove that another, farther planet must exist. Neptune was found at the predicted moment and position.
1855 – Le Verrier observes a 35 arcsecond per century excess precession of Mercury's orbit and attributes it to another planet, inside Mercury's orbit. The planet was never found. See Vulcan.
1876 – William Kingdon Clifford suggests that the motion of matter may be due to changes in the geometry of space.
1882 – Simon Newcomb observes a 43 arcsecond per century excess precession of Mercury's orbit.
1884 – William Thomson (Lord Kelvin) lectures on the issues with the wave theory of light with regards to the luminiferous ether.
1887 – Albert A. Michelson and Edward W. Morley, in their famous experiment, do not detect the ether drift.
1887 – George Francis FitzGerald explains to Oliver Lodge his hypothesis that the Michelson-Morley interferometer contracts in the direction of motion through the luminiferous ether.
1889 – Loránd Eötvös uses a torsion balance to test the weak equivalence principle to 1 part in one billion.
1893 – Ernst Mach states Mach's principle, the first constructive critique of the idea of Newtonian absolute space.
1897 – Henri Poincaré questions whether absolute space, absolute time, and Euclidean geometry are applicable to physics.
1900s
1902 – Paul Gerber explains the movement of the perihelion of Mercury using finite speed of gravity. His formula, at least approximately, matches the later model from Einstein's general relativity, but Gerber's theory was incorrect.
1902 – Henri Poincaré questions the concept of simultaneity in his book, Science and Hypothesis.
1904 – Hendrik Antoon Lorentz publishes the Lorentz transformations, so named by Henri Poincaré.
1905 – Henri Poincaré shows that the Lorentz transformations form a mathematical group, called the Lorentz group, and derives the relativistic formula for adding velocities.
1905 – Albert Einstein completes his special theory of relativity and examines relativistic aberration and the transverse Doppler effect.
1905 – Albert Einstein discovers the equivalence of mass and energy in its modern form, E = mc².
1906 – Max Planck coins the term Relativtheorie. Albert Einstein later uses the term Relativitätstheorie in a conversation with Paul Ehrenfest. He originally prefers calling it Invariance Theory.
1906 – Max Planck formulates a variational principle for special relativity.
1907 – Albert Einstein introduces the principle of equivalence of gravitational and inertial mass and uses it to predict gravitational lensing and gravitational redshift, historically known as the Einstein shift.
1907-8 – Hermann Minkowski introduces the Minkowski spacetime and the notion of tensors to relativity. His paper was published posthumously.
1909 – Max Born proposes his notion of rigidity.
1909 – Paul Ehrenfest states the Ehrenfest paradox.
1910s
1911 – Max von Laue publishes the first textbook on special relativity.
1911 – Albert Einstein explains the need to replace both special relativity and Newton's theory of gravity; he realizes that the principle of equivalence only holds locally, not globally.
1912 – Friedrich Kottler applies the notion of tensors to curved spacetime.
1915-16 – Albert Einstein completes his general theory of relativity. He explains the perihelion of Mercury and calculates gravitational lensing correctly and introduces the post-Newtonian approximation.
1915 – David Hilbert independently introduces the Einstein-Hilbert action. Hilbert also recognizes the connection between the Einstein equations and the Gauss-Bonnet theorem.
1916 – Karl Schwarzschild publishes the Schwarzschild metric about a month after Einstein published his general theory of relativity. This was the first solution to the Einstein field equations other than the trivial flat space solution.
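For reference, in Schwarzschild coordinates and modern notation the metric reads
\[ ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\,dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right), \]
describing the vacuum spacetime outside a spherically symmetric, non-rotating mass \(M\).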
1916 – Albert Einstein predicts gravitational waves.
1916 – Willem de Sitter predicts the geodetic effect.
1917 – Albert Einstein applies his field equations to the entire Universe. Physical cosmology is born.
1916-20 – Arthur Eddington studies the internal constitution of the stars.
1918 – Albert Einstein derives the quadrupole formula for gravitational radiation.
1918 – Emmy Noether publishes Noether's theorem and resolves the issue of local energy conservation in general relativity.
1918 – Josef Lense and Hans Thirring find the gravitomagnetic frame-dragging of gyroscopes in the equations of general relativity.
1919 – Arthur Eddington leads a solar eclipse expedition which detects gravitational deflection of light by the Sun, which, despite opinion to the contrary, survives modern scrutiny. Other teams fail for reasons of war and politics.
1920s
1921 – Theodor Kaluza demonstrates that a five-dimensional version of Einstein's equations unifies gravitation and electromagnetism. This idea is later extended by Oskar Klein.
1922 – Alexander Friedmann derives the Friedmann equations.
1922 – Enrico Fermi introduces the Fermi coordinates. This is developed further in 1932 by Arthur Walker into the Fermi-Walker transport.
1923 – George David Birkhoff proves Birkhoff's theorem on the uniqueness of the Schwarzschild solution.
1924 – Arthur Eddington calculates the Eddington limit.
1924 – Cornelius Lanczos discovers the van Stockum dust, later rediscovered by Willem Jacob van Stockum in 1938.
1925 – Walter Adams measures the gravitational redshift of the light emitted by the companion of Sirius B, a white dwarf.
1927 – Georges Lemaître publishes his hypothesis of the primeval atom.
1929 – Edwin Hubble publishes the law later named for him.
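In its simplest modern statement, Hubble's law relates a galaxy's recession velocity \(v\) to its proper distance \(D\) by
\[ v = H_0 D, \]
where \(H_0\) is the Hubble constant.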
1930s
1931 – Subrahmanyan Chandrasekhar studies the stability of white dwarfs.
1931 – Georges Lemaître and Arthur Eddington predict the expansion of the Universe.
1931 – Albert Einstein introduces his cosmological constant.
1932 – Albert Einstein and Willem de Sitter propose the Einstein-de Sitter cosmological model.
1932 – John Cockcroft and Ernest Walton verify Einstein's mass-energy equation by an experiment artificially transmuting lithium into helium.
1934 – Dmitry Blokhintsev and F. M. Gal'perin coin the term 'graviton'. Paul Dirac reintroduces it in 1959.
1934 – Walter Baade and Fritz Zwicky predict the existence of neutron stars. Although their details are wrong, their basic idea is now accepted.
1935 – Albert Einstein and Nathan Rosen derive the Einstein-Rosen bridge, the first wormhole solution.
1935 – Howard Robertson and Arthur Walker obtain the Robertson-Walker metric.
1936 – Albert Einstein predicts that a gravitational lens brightens the light coming from a distant object to the observer.
1937 – Fritz Zwicky states that galaxies could act as gravitational lenses.
1937 – Albert Einstein and Nathan Rosen obtain the Einstein-Rosen metric, the first exact solution describing gravitational waves.
1938 – Albert Einstein, Leopold Infeld, and Banesh Hoffmann obtain the Einstein-Infeld-Hoffmann equations of motion.
1939 – Hans Bethe shows that nuclear fusion is responsible for energy production inside stars, building upon the Kelvin–Helmholtz mechanism.
1939 – Richard Tolman solves the Einstein field equations in the case of a spherical fluid drop.
1939 – Robert Serber, George Volkoff, Richard Tolman, and J. Robert Oppenheimer study the stability of neutron stars, obtaining the Tolman–Oppenheimer–Volkoff limit.
1939 – J. Robert Oppenheimer and Hartland Snyder publish the Oppenheimer-Snyder model for the continued gravitational contraction of a star.
1940s
1948 – Ralph Alpher and Robert Herman predict the cosmic microwave background.
1949 – Cornelius Lanczos introduces the Lanczos potential for the Weyl tensor.
1949 – Kurt Gödel discovers Gödel's solution.
1950s
1953 – P. C. Vaidya publishes "Newtonian time in general relativity", Nature, 171, p. 260.
1954 – Suraj Gupta sketches how to derive the equations of general relativity from quantum field theory for a massless spin-2 particle (the graviton). His procedure was later carried out by Stanley Deser in 1970.
1955-56 – Robert Kraichnan shows that under the appropriate assumptions, Einstein's field equations of gravitation arise from the quantum field theory of a massless spin-2 particle coupled to the stress-energy tensor. This follows from his unpublished work as an undergraduate in 1947.
1956 – Bruno Bertotti develops the post-Minkowskian expansion.
1956 – John Lighton Synge publishes the first relativity text emphasizing spacetime diagrams and geometrical methods.
1957 – Felix A. E. Pirani uses Petrov classification to understand gravitational radiation.
1957 – Richard Feynman introduces his sticky bead argument. He later derives the quadrupole formula in a letter to Victor Weisskopf (1961).
1957-8 – John Wheeler discusses the breakdown of classical general relativity near singularities and the need for quantum gravity.
1958 – David Finkelstein presents a new coordinate system that eliminates the Schwarzschild radius as a singularity.
1959 – Robert Pound and Glen Rebka propose the Pound–Rebka experiment, first precision test of gravitational redshift. The experiment relies on the Mössbauer effect.
1959 – Lluís Bel introduces Bel–Robinson tensor and the Bel decomposition of the Riemann tensor.
1959 – Arthur Komar introduces the Komar mass.
1959 – Richard Arnowitt, Stanley Deser and Charles W. Misner developed ADM formalism.
1960s
1960 – Martin Kruskal and George Szekeres independently introduce the Kruskal–Szekeres coordinates for the Schwarzschild vacuum.
1960 – John Graves and Dieter Brill study the causal structure of an electrically charged black hole.
1960 – Thomas Matthews and Allan R. Sandage associate 3C 48 with a point-like optical image, showing that the radio source can be at most 15 light-minutes in diameter.
1960 – Ivor M. Robinson and Andrzej Trautman discover the Robinson-Trautman null dust solution.
1960 – Robert Pound and Glen Rebka test the gravitational redshift predicted by the equivalence principle to approximately 1%.
1961 – Tullio Regge introduces the Regge calculus.
1961 – Carl H. Brans and Robert H. Dicke introduce Brans–Dicke theory, the first viable alternative theory with a clear physical motivation.
1961 – Pascual Jordan and Jürgen Ehlers develop the kinematic decomposition of a timelike congruence.
1961 – Robert Dicke, Peter Roll, and R. Krotkov refine the Eötvös experiment to an accuracy of 10−11.
1962 – John Wheeler and Robert Fuller show that the Einstein-Rosen bridge is unstable.
1962 – Roger Penrose and Ezra T. Newman introduce the Newman–Penrose formalism.
1962 – Ehlers and Wolfgang Kundt classify the symmetries of Pp-wave spacetimes.
1962 – Joshua Goldberg and Rainer K. Sachs prove the Goldberg–Sachs theorem.
1962 – Ehlers introduces Ehlers transformations, a new solution generating method.
1962 – Richard Arnowitt, Stanley Deser, and Charles W. Misner introduce the ADM reformulation and global hyperbolicity.
1962 – Istvan Ozsvath and Engelbert Schücking rediscover the circularly polarized monochromatic gravitational wave.
1962 – Hans Adolph Buchdahl discovers Buchdahl's theorem.
1962 – Hermann Bondi introduces Bondi mass.
1962 – Hermann Bondi, M. G. van der Burg, A. W. Metzner, and Rainer K. Sachs introduce the asymptotic symmetry group of asymptotically flat, Lorentzian spacetimes at null (i.e., light-like) infinity.
1963 – Roy Kerr discovers the Kerr vacuum solution of Einstein's field equations.
1963 – Redshifts of 3C 273 and other quasars show they are very distant; hence very luminous.
1963 – Newman, T. Unti and L.A. Tamburino introduce the NUT vacuum solution.
1963 – Roger Penrose introduces Penrose diagrams and Penrose limits.
1963 – Maarten Schmidt and Jesse Greenstein discover quasi-stellar objects, later shown to be moving away from Earth due to the expansion of the Universe.
1963 – First Texas Symposium on Relativistic Astrophysics held in Dallas, 16–18 December.
1964 – Steven Weinberg shows that a quantum field theory of interacting massless spin-2 particles is Lorentz invariant only if it satisfies the principle of equivalence.
1964 – Subrahmanyan Chandrasekhar determines a stability criterion.
1964 – R. W. Sharp and Charles Misner introduce the Misner–Sharp mass.
1964 – Hong-Yee Chiu coins the term "quasar" for quasi-stellar radio sources.
1964 – Sjur Refsdal suggests that the Hubble constant could be determined using gravitational lensing.
1964 – Irwin Shapiro predicts a gravitational time delay of radiation travel as a test of general relativity.
1965 – Roger Penrose proves the first singularity theorem.
1965 – Penrose discovers the structure of the light cones in gravitational plane wave spacetimes.
1965 – Ezra Newman and others introduce Kerr-Newman metric.
1965 – Arno Penzias and Robert Wilson accidentally discover the cosmic microwave background radiation. This rules out the steady-state model of Fred Hoyle and Jayant Narlikar.
1965 – Joseph Weber puts the first Weber bar gravitational wave detector into operation.
1966 – Sachs and Ronald Kantowski discover the Kantowski-Sachs dust solution.
1967 – John Archibald Wheeler popularizes "black hole" at a conference.
1967 – Jocelyn Bell and Antony Hewish discover pulsars.
1967 – Robert H. Boyer and R. W. Lindquist introduce Boyer–Lindquist coordinates for the Kerr vacuum.
1967 – Bryce DeWitt publishes on canonical quantum gravity.
1967 – Werner Israel proves a special case of the no-hair theorem and the converse of Birkhoff's theorem.
1967 – Kenneth Nordtvedt develops PPN formalism.
1967 – Mendel Sachs publishes factorization of Einstein's field equations.
1967 – Hans Stephani discovers the Stephani dust solution.
1968 – F. J. Ernst discovers the Ernst equation.
1968 – B. Kent Harrison discovers the Harrison transformation, a solution-generating method.
1968 – Brandon Carter solves the geodesic equations for Kerr–Newmann electrovacuum with Carter's constant.
1968 – Hugo D. Wahlquist discovers the Wahlquist fluid.
1968 – James Hartle and Kip Thorne obtain the Hartle–Thorne metric.
1968 – Irwin Shapiro and his colleagues present the first detection of the Shapiro delay.
1968 – Kenneth Nordtvedt studies a possible violation of the weak equivalence principle for self-gravitating bodies and proposes a new test of the weak equivalence principle based on observing the relative motion of the Earth and Moon in the Sun's gravitational field.
1969 – William B. Bonnor introduces the Bonnor beam.
1969 – Joseph Weber reports observation of gravitational waves, a claim now generally discounted.
1969 – Penrose proposes the (weak) cosmic censorship hypothesis and the Penrose process.
1969 – Misner introduces the mixmaster universe.
1969 – Yvonne Choquet-Bruhat and Robert Geroch discuss global aspects of the Cauchy problem in general relativity.
1965-70 – Subrahmanyan Chandrasekhar and colleagues develops the post-Newtonian expansions.
1968-70 – Roger Penrose, Stephen Hawking, and George Ellis prove that singularities must arise in the Big Bang models.
1970s
1970 – Vladimir Alekseevich Belinski, Isaak Markovich Khalatnikov, and Evgeny Lifshitz introduce the BKL conjecture.
1970 – Stephen Hawking and Roger Penrose prove trapped surfaces must arise in black holes.
1971 – David Scott demonstrates that a hammer and a feather fall at the same rate on the Moon.
1971 – Alfred Goldhaber and Michael Nieto give stringent limits on the photon mass.
1971 – Stephen Hawking proves that the area of a black hole can never decrease.
1971 – Peter C. Aichelburg and Roman U. Sexl introduce the Aichelburg–Sexl ultraboost.
1971 – Introduction of the Khan–Penrose vacuum, a simple explicit colliding plane wave spacetime.
1971 – Robert H. Gowdy introduces the Gowdy vacuum solutions (cosmological models containing circulating gravitational waves).
1971 – Cygnus X-1, the first solid black hole candidate, discovered by Uhuru satellite.
1971 – William H. Press discovers black hole ringing by numerical simulation.
1971 – Harrison and Estabrook introduce an algorithm for solving systems of PDEs.
1971 – James W. York introduces conformal method generating initial data for ADM initial value formulation.
1971 – Robert Geroch introduces Geroch group and a solution generating method.
1972 – Jacob Bekenstein proposes that black holes have a non-decreasing entropy which can be identified with the area.
1972 – Sachs introduces optical scalars and proves peeling theorem.
1972 – Rainer Weiss proposes concept of interferometric gravitational wave detector in an unpublished manuscript.
1972 – Joseph Hafele and Richard Keating perform the Hafele–Keating experiment.
1972 – Richard H. Price studies gravitational collapse with numerical simulations.
1972 – Saul Teukolsky derives the Teukolsky equation.
1972 – Yakov B. Zel'dovich predicts the transmutation of electromagnetic and gravitational radiation.
1972 – Brandon Carter, Stephen Hawking, and James M. Bardeen propose the four laws of black hole mechanics.
1972 – James Bardeen calculates the shadow of a black hole. This was later verified by the Event Horizon Telescope.
1973 – Charles W. Misner, Kip S. Thorne and John A. Wheeler publish the treatise Gravitation, a textbook that remains in use in the twenty-first century.
1973 – Stephen W. Hawking and George Ellis publish the monograph The Large Scale Structure of Space-Time.
1973 – Robert Geroch introduces the GHP formalism.
1973 – Homer Ellis obtains the Ellis drainhole, the first traversable wormhole.
1974 – Russell Hulse and Joseph Hooton Taylor, Jr. discover the Hulse–Taylor binary pulsar.
1974 – James W. York and Niall Ó Murchadha present the analysis of the initial value formulation and examine the stability of its solutions.
1974 – R. O. Hansen introduces Hansen–Geroch multipole moments.
1974 – Stephen Hawking discovers Hawking radiation.
1975 – Stephen Hawking shows that the area of a black hole is proportional to its entropy, as previously conjectured by Jacob Bekenstein.
1975 – Roberto Colella, Albert Overhauser, and Samuel Werner observe the quantum-mechanical phase shift of neutrons due to gravity. Neutron interferometry was later used to test the principle of equivalence.
1975 – Chandrasekhar and Steven Detweiler compute the effects of perturbations on a Schwarzschild black hole.
1975 – Szekeres and D. A. Szafron discover the Szekeres–Szafron dust solutions.
1976 – Penrose introduces Penrose limits (in a suitable limit, the spacetime near any null geodesic behaves like a plane wave).
1978 – Penrose introduces the notion of a thunderbolt.
1978 – Belinski and Zakharov show how to solve Einstein's field equations using the inverse scattering transform, yielding the first gravitational solitons.
1979 – Dennis Walsh, Robert Carswell, and Ray Weymann discover the gravitationally lensed quasar Q0957+561.
1979 – Jean-Pierre Luminet creates an image of a black hole with an accretion disk using computer simulation.
1979 – Steven Detweiler proposes using pulsar timing arrays to detect gravitational waves.
1979-81 – Richard Schoen and Shing-Tung Yau prove the positive mass theorem; Edward Witten later gives an independent proof.
1980s
1980 – Vera Rubin and colleagues study the rotational properties of UGC 2885, demonstrating the prevalence of dark matter.
1980 – Gravity Probe A verifies gravitational redshift to approximately 0.007% using a space-borne hydrogen maser.
1980 – James Bardeen explains structure in the Universe using cosmological perturbation theory.
1981 – Alan Guth proposes cosmic inflation in order to solve the flatness and horizon problems.
1982 – Joseph Taylor and Joel Weisberg show that the rate of energy loss from the binary pulsar PSR B1913+16 agrees with that predicted by the general relativistic quadrupole formula to within 5%.
1983 – James Hartle and Stephen Hawking propose the no-boundary wave function for the Universe.
1983-84 – RELIKT-1 observes the cosmic microwave background.
1986 – Helmut Friedrich proves that the de Sitter spacetime is stable.
1986 – Bernard Schutz shows that cosmic distances can be determined using sources of gravitational waves without reference to the cosmic distance ladder. Standard-siren astronomy is born.
1988 – Mike Morris, Kip Thorne, and Ulvi Yurtsever obtain the Morris–Thorne wormhole. Morris and Thorne argue for its pedagogical value.
1989 – Steven Weinberg discusses the cosmological constant problem, the discrepancy between the measured value and those predicted by modern theories of elementary particles.
1989-93 – The Cosmic Background Explorer (COBE) identifies anisotropy in the cosmic microwave background.
1990s
1992 – Stephen Hawking states his chronology protection conjecture.
1993 – Demetrios Christodoulou and Sergiu Klainerman prove the non-linear stability of the Minkowski spacetime.
1995 – John F. Donoghue shows that general relativity can be treated as a quantum effective field theory. This framework could later be used to analyze binary systems observed by gravitational-wave observatories.
1995 – Hubble Deep Field image taken. It is a landmark in the study of cosmology.
1998 – The first complete Einstein ring, B1938+666, discovered using the Hubble Space Telescope and MERLIN.
1998-99 – Scientists discover that the expansion of the Universe is accelerating.
1999 – Alessandra Buonanno and Thibault Damour introduce the effective one-body formalism. This was later used to analyze data collected by gravitational-wave observatories.
2000s
2002 – First data collection of the Laser Interferometer Gravitational-Wave Observatory (LIGO).
2002 – James Williams, Slava Turyshev, and Dale Boggs conduct a stringent lunar test of violations of the principle of equivalence.
2003 – Arvind Borde, Alan Guth, and Alexander Vilenkin prove the Borde–Guth–Vilenkin theorem.
2005 – Daniel Holz and Scott Hughes coin the term "standard sirens".
2009 – Gravity Probe B experiment verifies the geodetic effect to 0.5%.
2010s
2010 – A team at the U.S. National Institute of Standards and Technology (NIST) verifies relativistic time dilation using optical atomic clocks.
2011 – Wilkinson Microwave Anisotropy Probe (WMAP) finds no statistically significant deviations from the ΛCDM model of cosmology.
2012 – Hubble Ultra-Deep Field image released. It was created using data collected by the Hubble Space Telescope between 2003 and 2004.
2013 – NuSTAR and XMM-Newton measure the spin of the supermassive black hole at the center of the galaxy NGC 1365.
2015 – Advanced LIGO reports the first direct detections of gravitational waves, GW150914 and GW151226, mergers of stellar-mass black holes. Gravitational-wave astronomy is born. No deviations from general relativity were found.
2017 – LIGO-VIRGO collaboration detects gravitational waves emitted by a neutron-star binary, GW170817. The Fermi Gamma-ray Space Telescope and the International Gamma-ray Astrophysics Laboratory (INTEGRAL) unambiguously detect the corresponding gamma-ray burst. LIGO-VIRGO and Fermi constrain the difference between the speed of gravity and the speed of light in vacuum to . This marks the first time electromagnetic and gravitational waves are detected from a single source, and gives direct evidence that some (short) gamma-ray bursts are due to colliding neutron stars.
2017 – Multi-messenger astronomy reveals neutron-star mergers to be responsible for the nucleosynthesis of some heavy elements, such as strontium, via rapid neutron capture (the r-process).
2017 – MICROSCOPE satellite experiment verifies the principle of equivalence to in terms of the Eötvös ratio . The final report is published in 2022.
2017 – Principle of equivalence tested to 10⁻⁹ for atoms in a coherent state of superposition.
2017 – Scientists begin using gravitational-wave sources as "standard sirens" to measure the Hubble constant, finding its value to be broadly in line with the best estimates of the time (a minimal sketch of the underlying relation follows this decade's list). Refinements of this technique will help resolve discrepancies between the different methods of measurement.
2017 – Neutron Star Interior Composition Explorer (NICER) arrives on the International Space Station.
2017-18 – Georgios Moschidis proves the instability of the anti-de Sitter spacetime.
2018 – Final paper by the Planck satellite collaboration. Planck operated between 2009 and 2013.
2018 – Mihalis Dafermos and Jonathan Luk disprove the strong cosmic censorship hypothesis for the Cauchy horizon of an uncharged, rotating black hole.
2018 – European Southern Observatory (ESO) observes gravitational redshift of radiation emitted by matter orbiting Sagittarius A*, the central supermassive black hole of the Milky Way, and verifies the innermost stable circular orbit for that object.
2018 – Advanced LIGO-VIRGO collaboration constrains equations of state for a neutron star using GW170817.
2018 – Luciano Rezzolla, Elias R. Most, and Lukas R. Weih use gravitational-wave data from GW170817 to constrain the possible maximum mass for a neutron star to around 2.17 solar masses.
2018 – Kris Pardo, Maya Fishbach, Daniel Holz, and David Spergel limit the number of spacetime dimensions through which gravitational waves can propagate to 3 + 1, in line with general relativity and ruling out models that allow for "leakage" to higher dimensions of space. Analyses of GW170817 have also ruled out many other alternatives to general relativity, and proposals for dark energy.
2018 – Two different experimental teams report highly precise values of Newton's gravitational constant that slightly disagree.
2019 – Event Horizon Telescope (EHT) releases an image of supermassive black hole M87*, and measures its mass and shadow. Results are confirmed in 2024.
2019 – Advanced LIGO and VIRGO detect GW190814, the collision of a 26-solar-mass black hole and a 2.6-solar-mass object, either an extremely heavy neutron star or a very light black hole. This is the largest mass gap seen in a gravitational-wave source to date.
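A minimal sketch of the standard-siren relation behind the 2017 Hubble-constant entry above (the idea goes back to the 1986 Schutz entry): the gravitational waveform fixes the luminosity distance d_L directly, while an electromagnetic counterpart supplies the redshift z, so for small z
\[ H_{0} \approx \frac{cz}{d_{L}}. \]
As an illustrative placeholder (round numbers, not values quoted in this timeline), z = 0.01 and d_L = 43 Mpc give
\[ H_{0} \approx \frac{(3\times10^{5}\ \mathrm{km\,s^{-1}})(0.01)}{43\ \mathrm{Mpc}} \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \]
with no use of the cosmic distance ladder.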
2020s
2020 – Principle of equivalence tested for individual atoms using atomic interferometry to ~10⁻¹².
2020 – ESO observes Schwarzschild precession of the star S2 about Sagittarius A*.
2021 – Jun Ye and his team measure gravitational redshift with an accuracy of 7.6 × 10⁻²¹ using an ultracold cloud of 100,000 strontium atoms in an optical lattice (a back-of-the-envelope version of the relevant redshift formula is sketched after this list).
2021 – EHT measures the polarization of the ring of M87*, and other properties of the magnetic field in its vicinity.
2022 – EHT releases an image of Sagittarius A*, measures its shadow, and shows that it is accurately described by the Kerr metric.
2022 – Chris Overstreet and his team observe the gravitational Aharonov-Bohm effect using an experimental design from 2012.
2022 – James Webb Space Telescope (JWST) publishes its first image, a deep-field photograph of the SMACS 0723 galaxy cluster.
2022 – Neil Gehrels Swift Observatory detects GRB 221009A, the brightest gamma-ray burst recorded.
2022 – JWST identifies several candidate high-redshift objects, corresponding to just a few hundred million years after the Big Bang.
2023 – James Nightingale and colleagues detect an ultramassive black hole (about 33 billion solar masses) in the brightest galaxy of the Abell 1201 cluster, using strong gravitational lensing.
2023 – Matteo Bachetti and colleagues confirm that neutron star M82 X-2 is violating the Eddington limit, making it an ultraluminous X-ray source (ULX).
2023 – A team led by Dong Sheng and Zheng-Tian Lu finds a null result for the coupling between quantum spin and gravity at the 10⁻⁹ level.
2023 – The North American Nanohertz Observatory for Gravitational Waves (NANOGrav), the European Pulsar Timing Array (EPTA), the Parkes Pulsar Timing Array (Australia), and the Chinese Pulsar Timing Array report detection of a gravitational-wave background.
2023 – Geraint F. Lewis and Brendon Brewer present evidence of cosmological time dilation in quasars.
2024 – The Large High Altitude Air Shower Observatory (LHAASO) collaboration imposes stringent limits on violations of Lorentz invariance proposed in certain theories of quantum gravity using GRB 221009A.
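As an order-of-magnitude check on the 2021 optical-lattice-clock entry above, using only the standard weak-field redshift formula (the one-millimetre height difference below is illustrative, not a figure quoted in this timeline):
\[ \frac{\Delta\nu}{\nu} \approx \frac{g\,\Delta h}{c^{2}} \approx \frac{(9.8\ \mathrm{m\,s^{-2}})(10^{-3}\ \mathrm{m})}{(3\times10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 1\times10^{-19}, \]
so fractional-frequency accuracies at the 10⁻²¹ level comfortably resolve millimetre-scale height differences.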
See also
Timeline of black hole physics
Timeline of special relativity and the speed of light
List of contributors to general relativity
List of scientific publications by Albert Einstein
References
External links
Timeline of relativity and gravitation (Tomohiro Harada, Department of Physics, Rikkyo University)
Timeline of General Relativity and Cosmology from 1905
2015–General Relativity's Centennial. Physical Review Journals. American Physical Society (APS).
Astrophysics
Gravity
Gravitational physics and relativity | Timeline of gravitational physics and relativity | [
"Physics",
"Astronomy"
] | 6,862 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
58,787 | https://en.wikipedia.org/wiki/Timeline%20of%20black%20hole%20physics | Timeline of black hole physics
Pre-20th century
1640 — Ismaël Bullialdus suggests an inverse-square gravitational force law
1676 — Ole Rømer demonstrates that light has a finite speed
1684 — Isaac Newton writes down his inverse-square law of universal gravitation
1758 — Rudjer Josip Boscovich develops his theory of forces, in which gravity can be repulsive at small distances; on this view, strange classical bodies such as white holes could exist that would not allow other bodies to reach their surfaces
1784 — John Michell discusses classical bodies which have escape velocities greater than the speed of light
1795 — Pierre Laplace discusses classical bodies which have escape velocities greater than the speed of light (a brief sketch of this escape-velocity argument follows this pre-20th-century list)
1798 — Henry Cavendish measures the gravitational constant G
1876 — William Kingdon Clifford suggests that the motion of matter may be due to changes in the geometry of space
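A brief sketch of the Newtonian escape-velocity argument behind the Michell and Laplace entries above (heuristic only, not a general-relativistic derivation; M, R, G and c denote mass, radius, the gravitational constant and the speed of light):
\[ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}}, \qquad v_{\mathrm{esc}} \ge c \iff R \le \frac{2GM}{c^{2}}. \]
The critical radius 2GM/c² happens to coincide numerically with the Schwarzschild radius that appears in the general-relativistic entries below.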
20th century
Before 1960s
1909 — Albert Einstein, together with Marcel Grossmann, starts to develop a theory that would relate the metric tensor g_ik, which defines the geometry of space, to the source of gravity, that is, to mass
1910 — Hans Reissner and Gunnar Nordström define the Reissner–Nordström singularity; Hermann Weyl solves the special case of a point-body source
1915 — Albert Einstein presents (David Hilbert presented this independently five days earlier in Göttingen) the complete Einstein field equations at the Prussian Academy meeting in Berlin on 25 November 1915
1916 — Karl Schwarzschild solves the Einstein vacuum field equations for uncharged spherically symmetric non-rotating systems
1917 — Paul Ehrenfest gives a conditional argument for why physical space has three dimensions
1918 — Hans Reissner and Gunnar Nordström solve the Einstein–Maxwell field equations for charged spherically symmetric non-rotating systems
1918 — Friedrich Kottler derives the Schwarzschild solution without using the Einstein vacuum field equations
1923 — George David Birkhoff proves that the Schwarzschild spacetime geometry is the unique spherically symmetric solution of the Einstein vacuum field equations
1931 — Subrahmanyan Chandrasekhar calculates, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (at 1.4 solar masses) has no stable solutions
1939 — Robert Oppenheimer and Hartland Snyder calculate the gravitational collapse of a pressure-free homogeneous fluid sphere into a black hole
1958 — David Finkelstein theorises that the Schwarzschild radius is a causality barrier: an event horizon of a black hole
1960s
1963 — Roy Kerr solves the Einstein vacuum field equations for uncharged symmetric rotating systems, deriving the Kerr metric for a rotating black hole
1963 — Maarten Schmidt discovers and analyzes the first quasar, 3C 273, as a highly red-shifted active galactic nucleus, a billion light years away
1964 — Roger Penrose proves that an imploding star will necessarily produce a singularity once it has formed an event horizon
1964 — Yakov Zel’dovich and independently Edwin Salpeter propose that accretion discs around supermassive black holes are responsible for the huge amounts of energy radiated by quasars
1964 — Hong-Yee Chiu coins the word quasar for a 'quasi-stellar radio source' in his article in Physics Today
1964 — The first recorded use of the term "black hole", by journalist Ann Ewing
1965 — Ezra T. Newman, E. Couch, K. Chinnapared, A. Exton, A. Prakash, and Robert Torrence solve the Einstein–Maxwell field equations for charged rotating systems
1966 — Yakov Zel’dovich and Igor Novikov propose searching for black hole candidates among binary systems in which one star is optically bright and X-ray dark and the other optically dark but X-ray bright (the black hole candidate)
1967 — Jocelyn Bell discovers and analyzes the first radio pulsar, direct evidence for a neutron star
1967 — Werner Israel presents the proof of the no-hair theorem at King's College London
1967 — John Wheeler introduces the term "black hole" in his lecture to the American Association for the Advancement of Science
1968 — Brandon Carter uses Hamilton–Jacobi theory to derive first-order equations of motion for a charged particle moving in the external fields of a Kerr–Newman black hole
1969 — Roger Penrose discusses the Penrose process for the extraction of the spin energy from a Kerr black hole
1969 — Roger Penrose proposes the cosmic censorship hypothesis
After 1960s
1972 — Identification of Cygnus X-1/HDE 226868 from dynamic observations as the first binary with a stellar black hole candidate
1972 — Stephen Hawking proves that the area of a classical black hole's event horizon cannot decrease
1972 — James Bardeen, Brandon Carter, and Stephen Hawking propose four laws of black hole mechanics in analogy with the laws of thermodynamics
1972 — Jacob Bekenstein suggests that black holes have an entropy proportional to their surface area due to information loss effects
1974 — Stephen Hawking applies quantum field theory to black hole spacetimes and shows that black holes will radiate particles with a black-body spectrum which can cause black hole evaporation
1975 — James Bardeen and Jacobus Petterson show that the swirl of spacetime around a spinning black hole can act as a gyroscope stabilizing the orientation of the accretion disc and jets
1989 — Identification of microquasar V404 Cygni as a binary black hole candidate system
1994 — Charles Townes and colleagues observe ionized neon gas swirling around the center of our Galaxy at velocities so high that a possible black hole at the very center must have a mass approximately equal to that of 3 million Suns
21st century
2002 — Astronomers at the Max Planck Institute for Extraterrestrial Physics present evidence for the hypothesis that Sagittarius A* is a supermassive black hole at the center of the Milky Way galaxy
2002 — Physicists at The Ohio State University publish fuzzball theory, a quantum description of black holes positing that they are extended objects composed of strings and do not have singularities.
2002 — NASA's Chandra X-ray Observatory identifies a double black hole system in the merging galaxy system NGC 6240
2004 — Further observations by a team from UCLA present even stronger evidence supporting Sagittarius A* as a black hole
2006 — The Event Horizon Telescope begins capturing data
2012 — First visual evidence of black holes: Suvi Gezari's team at Johns Hopkins University, using the Hawaiian telescope Pan-STARRS 1, publishes images of a supermassive black hole 2.7 million light-years away swallowing a red giant
2015 — LIGO Scientific Collaboration detects the distinctive gravitational waveforms from a binary black hole merging into a final black hole, yielding the basic parameters (e.g., distance, mass, and spin) of the three spinning black holes involved
2019 — Event Horizon Telescope collaboration releases the first direct photo of a black hole, the supermassive M87* at the core of the Messier 87 galaxy
References
See also
Timeline of gravitational physics and relativity
Schwarzschild radius
Black holes
Black hole physics
Black hole physics | Timeline of black hole physics | [
"Physics",
"Astronomy"
] | 1,457 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"History of astronomy",
"Astronomy timelines",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
58,809 | https://en.wikipedia.org/wiki/Adultery | Adultery is extramarital sex that is considered objectionable on social, religious, moral, or legal grounds. Although the sexual activities that constitute adultery vary, as well as the social, religious, and legal consequences, the concept exists in many cultures and shares some similarities in Judaism, Christianity and Islam. Adultery is viewed by many jurisdictions as offensive to public morals, undermining the marriage relationship.
Historically, many cultures considered adultery a very serious crime, some subject to severe punishment, usually for the woman and sometimes for the man, with penalties including capital punishment, mutilation, or torture. Such punishments have gradually fallen into disfavor, especially in Western countries from the 19th century. In countries where adultery is still a criminal offense, punishments range from fines to caning and even capital punishment. Since the 20th century, criminal laws against adultery have become controversial, with most Western countries decriminalising adultery.
However, even in jurisdictions that have decriminalised adultery, it may still have legal consequences, particularly in jurisdictions with fault-based divorce laws, where adultery almost always constitutes a ground for divorce and may be a factor in property settlement, the custody of children, the denial of alimony, etc. Adultery is not a ground for divorce in jurisdictions which have adopted a no-fault divorce model.
International organizations have called for the decriminalisation of adultery, especially in the light of several high-profile stoning cases that have occurred in some countries. The head of the United Nations expert body charged with identifying ways to eliminate laws that discriminate against women or are discriminatory to them in terms of implementation or impact, Kamala Chandrakirana, has stated that: "Adultery must not be classified as a criminal offence at all". A joint statement by the United Nations Working Group on discrimination against women in law and in practice states that: "Adultery as a criminal offence violates women’s human rights".
In Muslim countries that follow Sharia law for criminal justice, the punishment for adultery may be stoning. There are fifteen countries in which stoning is authorized as lawful punishment, though in recent times it has been legally carried out only in Iran and Somalia. Most countries that criminalize adultery are those where the dominant religion is Islam, and several Sub-Saharan African Christian-majority countries, but there are some notable exceptions to this rule, namely the Philippines, and several U.S. states. In some jurisdictions, having sexual relations with the king's wife or the wife of his eldest son constitutes treason.
Overview
The term adultery refers to sexual acts between a married person and someone who is not that person's spouse. It may arise in a number of contexts. In criminal law, adultery was a criminal offence in many countries in the past, and is still a crime in some countries today. In family law, adultery may be a ground for divorce, with the legal definition of adultery being "physical contact with an alien and unlawful organ", while in some countries today, adultery is not in itself grounds for divorce. Extramarital sexual acts not fitting this definition are not "adultery" though they may constitute "unreasonable behavior", also a ground of divorce.
Another issue is the paternity of a child. The application of the term to the act appears to arise from the idea that "criminal intercourse with a married woman ... tended to adulterate the issue [children] of an innocent husband ... and to expose him to support and provide for another man's [children]". Thus, the "purity" of the children of a marriage is corrupted, and the inheritance is altered.
In archaic law, there was a common law tort of criminal conversation arising from adultery, "conversation" being an archaic euphemism for sexual intercourse. It was a tort action brought by a husband against a third party (“the other man”) who interfered with the marriage relationship.
Some adultery laws differentiate based on the sex of the participants, and as a result such laws are often seen as discriminatory, and in some jurisdictions they have been struck down by courts, usually on the basis that they discriminated against women.
The term adultery, rather than extramarital sex, implies a moral condemnation of the act; as such it is usually not a neutral term because it carries an implied judgment that the act is wrong.
Adultery refers to sexual relations which are not officially legitimized; for example it does not refer to having sexual intercourse with multiple partners in the case of polygamy (when a man is married to more than one wife at a time, called polygyny; or when a woman is married to more than one husband at a time, called polyandry).
Definitions and legal constructs
In the traditional English common law, adultery was a felony. Although the legal definition of adultery differs in nearly every legal system, the common theme is sexual relations outside of marriage, in one form or another.
Traditionally, many cultures, particularly Latin American ones, had strong double standards regarding male and female adultery, with the latter being seen as a much more serious violation.
Adultery involving a married woman and a man other than her husband was considered a very serious crime. In 1707, English Lord Chief Justice John Holt stated that a man having sexual relations with another man's wife was "the highest invasion of property" and claimed, in regard to the aggrieved husband, that "a man cannot receive a higher provocation" (in a case of murder or manslaughter).
The Encyclopedia of Diderot & d'Alembert, Vol. 1 (1751), also equated adultery to theft writing that, "adultery is, after homicide, the most punishable of all crimes, because it is the most cruel of all thefts, and an outrage capable of inciting murders and the most deplorable excesses."
Legal definitions of adultery vary. For example, New York defines an adulterer as a person who "engages in sexual intercourse with another person at a time when he has a living spouse, or the other person has a living spouse." North Carolina defines adultery as occurring when any man and woman "lewdly and lasciviously associate, bed, and cohabit together."
Minnesota law (repealed in 2023) provided: "when a married woman has sexual intercourse with a man other than her husband, whether married or not, both are guilty of adultery." In the 2003 New Hampshire Supreme Court case Blanchflower v. Blanchflower, it was held that female same-sex sexual relations did not constitute sexual intercourse, based on a 1961 definition from Webster's Third New International Dictionary; and thereby an accused wife in a divorce case was found not guilty of adultery. In 2001, Virginia prosecuted an attorney, John R. Bushey, for adultery, a case that ended in a guilty plea and a $125 fine. Adultery is against the governing law of the U.S. military.
In common-law countries, adultery was also known as criminal conversation. This became the name of the civil tort arising from adultery, being based upon compensation for the other spouse's injury. Criminal conversation was usually referred to by lawyers as crim. con., and was abolished in England in 1857, and the Republic of Ireland in 1976. Another tort, alienation of affection, arises when one spouse deserts the other for a third person. This act was also known as desertion, which was often a crime as well. A small number of jurisdictions still allow suits for criminal conversation and/or alienation of affection. In the United States, six states still maintain this tort.
A marriage in which both spouses agree ahead of time to accept sexual relations by either partner with others is sometimes referred to as an open marriage or the swinging lifestyle. Polyamory, meaning the practice, desire, or acceptance of intimate relationships that are not exclusive with respect to other sexual or intimate relationships, with knowledge and consent of everyone involved, sometimes involves such marriages. Swinging and open marriages are both a form of non-monogamy, and the spouses would not view the sexual relations as objectionable. However, irrespective of the stated views of the partners, extramarital relations could still be considered a crime in some legal jurisdictions which criminalize adultery.
In Canada, though the written definition in the Divorce Act refers to extramarital relations with someone of the opposite sex, a British Columbia judge used the Civil Marriage Act in a 2005 case to grant a woman a divorce from her husband who had cheated on her with another man, which the judge felt was equal reasoning to dissolve the union.
In England and Wales, case law restricts the definition of adultery to penetrative sexual intercourse between a man and a woman, no matter the gender of the spouses in the marriage. Infidelity with a person of the same gender can be grounds for a divorce as unreasonable behavior; this situation was discussed at length during debates on the Marriage (Same-Sex Couples) Bill. However, the practical effect of this ceased with the introduction of no-fault divorce in April 2022, which meant that unreasonable behavior ceased to be grounds for divorce.
In India, adultery was the sexual intercourse of a man with a married woman without the consent of her husband when such sexual intercourse did not amount to rape, and it was a non-cognizable, non-bailable criminal offence; the adultery law was overturned by the Supreme Court of India on 27 September 2018.
Prevalence
Durex's Global Sex Survey found that worldwide 22% of people surveyed admitted to have had extramarital sex. According to a 2015 study by Durex and Match.com, Thailand and Denmark were the most adulterous countries based on the percentage of adults who admitted having an affair.
In the United States, Alfred Kinsey found in his studies that 50% of males and 26% of females had extramarital sex at least once during their lifetime. Depending on the study, it was estimated that 22.7% of men and 11.6% of women had extramarital sex. Other authors say that between 20% and 25% of Americans had sex with someone other than their spouse.
Three 1990s studies in the United States, using nationally representative samples, have found that about 10–15% of women and 20–25% of men admitted to having engaged in extramarital sex.
The Standard Cross-Cultural Sample described the occurrence of extramarital sex by gender in over 50 pre-industrial cultures. The occurrence of extramarital sex by men is described as "universal" in 6 cultures, "moderate" in 29 cultures, "occasional" in 6 cultures, and "uncommon" in 10 cultures. The occurrence of extramarital sex by women is described as "universal" in 6 cultures, "moderate" in 23 cultures, "occasional" in 9 cultures, and "uncommon" in 15 cultures.
Cultural and religious traditions
Greco-Roman world
In the Greco-Roman world, there were stringent laws against adultery, but these applied to sexual intercourse with a married woman. In the early Roman Law, the jus tori belonged to the husband. It was therefore not illegal for a husband to have sex with a slave or an unmarried woman.
The Roman husband often took advantage of his legal immunity. Thus historian Spartianus said that Verus, the imperial colleague of Marcus Aurelius, did not hesitate to declare to his reproaching wife: "Uxor enim dignitatis nomen est, non voluptatis." ('Wife' connotes rank, not sexual pleasure, or more literally "Wife is the name of dignity, not bliss") (Verus, V).
Later in Roman history, as William E. H. Lecky has shown, the idea that the husband owed a fidelity similar to that demanded of the wife must have gained ground, at least in theory. Lecky gathers this from the legal maxim of Ulpian: "It seems most unfair for a man to require from a wife the chastity he does not himself practice".
According to Plutarch, the lending of wives practiced among some people was also encouraged by Lycurgus, though from a motive other than that which actuated the practice (Plutarch, Lycurgus, XXIX). The recognized license of the Greek husband may be seen in the following passage of the pseudo-Demosthenic Oration Against Neaera:
We keep mistresses for our pleasures, concubines for constant attendance, and wives to bear us legitimate children and to be our faithful housekeepers. Yet, because of the wrong done to the husband only, the Athenian lawgiver Solon allowed any man to kill an adulterer whom he had taken in the act. (Plutarch, Solon)
The Roman Lex Julia, Lex Iulia de Adulteriis Coercendis (17 BC), punished adultery with banishment. The two guilty parties were sent to different islands ("dummodo in diversas insulas relegentur"), and part of their property was confiscated. Fathers were permitted to kill daughters and their partners in adultery. Husbands could kill the partners under certain circumstances and were required to divorce adulterous wives.
Abrahamic religions
Biblical sources
Both Judaism and Christianity base their injunction against adultery on passages in the Hebrew Bible (Old Testament in Christianity), which firstly prohibits adultery in the Seventh Commandment: "Thou shalt not commit adultery."
However, Judaism and Christianity differ on what actually constitutes adultery.
Leviticus 20:10 defines what constitutes adultery in the Hebrew Bible, and it also prescribes the punishment as capital punishment. In this verse, and in the Jewish tradition, adultery consists of sexual intercourse between a man and a married woman who is not his lawful wife:
And the man that committeth adultery with another man's wife, even he that committeth adultery with his neighbour's wife, the adulterer and the adulteress shall surely be put to death.
Thus, according to the Hebrew Bible, adultery is not committed if the female participant is unmarried (unless she is betrothed to be married), while the marital status of the male participant is irrelevant (he himself could be married or unmarried to another woman).
If a married woman was raped by a man who is not her husband, only the rapist is punished for adultery. The victim is not punished: as the Bible declares, "this matter is similar to when a man rises up against his fellow and murders him"; just as a murder victim is not guilty of murder, a rape victim is not guilty of adultery.
Michael Coogan writes that, according to the text, wives are the property of their husbands, marriage meaning a transfer of property (from father to husband), and adultery is a violation of the property rights of the husband. However, in contrast to other ancient Near Eastern law collections which treat adultery as an offense against the husband alone, and allow the husband to waive or mitigate the punishment, Biblical law allows no such mitigation, on the grounds that God as well as the husband is offended by adultery, and an offense against God cannot be forgiven by man. In addition, Coogan's book was criticized by Phyllis Trible, who argues that patriarchy was not decreed by God, but only described. She claims that Paul the Apostle made the same mistake as Coogan.
David's sexual intercourse with Bathsheba, the wife of Uriah, is described by the Bible as a "sin" whose punishment included the ravishment of David's own wives. According to Jennifer Wright Knust, David's act was adultery only according to the spirit and not the letter of the law, because Uriah was non-Jewish, and (according to Knust) the Biblical codes only technically applied to Israelites. However, according to Jacob Milgrom, Jews and resident foreigners received equal protection under Biblical law. In any case, according to the Babylonian Talmud, Uriah was indeed Jewish and wrote a provisional bill of divorce prior to going out to war, specifying that if he fell in battle, the divorce would take effect from the time the writ was issued.
Judaism
Though Leviticus 20:10 prescribes the death penalty for adultery, the legal procedural requirements were very exacting and required the testimony of two eyewitnesses of good character for conviction. The defendant also must have been warned immediately before performing the act. A death sentence could be issued only during the period when the Holy Temple stood, and only so long as the Sanhedrin court convened in its chamber within the Temple complex. Technically, therefore, no death penalty can now be applied.
The death penalty for adultery was generally strangulation, except in the case of a woman who was the daughter of a Kohen, which was specifically mentioned in Scripture as the penalty of burning (pouring molten lead down the throat), or a woman who was betrothed but not married, in which case the punishment for both man and woman was stoning.
At the civil level, Jewish law (halakha) forbids a man to continue living with an adulterous wife, and he is obliged to divorce her. Also, an adulteress is not permitted to marry the adulterer, but (to avoid any doubt as to her status as being free to marry another or that of her children) many authorities say he must give her a divorce as if they were married.
According to Judaism, the Seven laws of Noah apply to all of humankind; these laws prohibit adultery to non-Jews as well as Jews.
The extramarital intercourse of a married man is not in itself considered a crime in biblical or later Jewish law; it was considered akin to polygyny, which was permitted. Similarly, sexual intercourse between an unmarried man and a woman who was neither married nor betrothed was not considered adultery. This concept of adultery stems from the economic aspect of Israelite marriage whereby the husband has an exclusive right to his wife, whereas the wife, as the husband's possession, did not have an exclusive right to her husband.
Christianity
Adultery is considered immoral by Christians and a sin, based primarily on passages like and . Although does say that "and that is what some of you were. But you were washed", it still acknowledges adultery to be immoral and a sin.
Catholicism ties fornication with breaking the sixth commandment in its Catechism.
Until a few decades ago, adultery was a criminal offense in many countries where the dominant religion is Christianity, especially in Roman Catholic countries (for example, in Austria it was a criminal offense until 1997). Adultery was decriminalized in Chile in 1994, Argentina in 1995, Brazil in 2005 and Mexico in 2011, but in some predominantly Catholic countries, such as the Philippines, it remains illegal.
The Book of Mormon also prohibits adultery. For instance, Abinadi cites the Ten Commandments when he accuses King Noah's priests of sexual immorality. When Jesus Christ visits the Americas he reinforces the law and teaches them the higher law (also found in the New Testament):
Behold, it is written by them of old time, that thou shalt not commit adultery; but I say unto you, that whosoever looketh on a woman, to lust after her, hath committed adultery already in his heart.
Some churches such as the Church of Jesus Christ of Latter-day Saints have interpreted "adultery" to include all sexual relationships outside of marriage, regardless of the marital status of the participants. Book of Mormon prophets and civil leaders often list adultery as an illegal activity along with murder, robbing, and stealing.
Islam
Zina' is an Arabic term for illegal intercourse, premarital or extramarital. Various conditions and punishments have been attributed to adultery. Under Islamic law, adultery in general is sexual intercourse by a person (whether man or woman) with someone to whom they are not married. Adultery is a violation of the marital contract and one of the major sins condemned by God in the Qur'an:
Qur'anic verses prohibiting adultery include:
Punishments are reserved to the legal authorities and false accusations are to be punished severely. It has been said that these legal procedural requirements were instituted to protect women from slander and false accusations: i.e. four witnesses of good character are required for conviction, who were present at that time and saw the deed taking place; and if they saw it they were not of good moral character, as they were looking at naked adults; thus no one can be convicted of adultery unless both of the accused also agree and give their confession under oath four times.
According to a hadith attributed to Muhammad, an unmarried person who commits adultery or fornication is punished by flogging 100 times; a married person will then be stoned to death. A survey conducted by the Pew Research Center found support for stoning as a punishment for adultery mostly in Arab countries; it was supported in Egypt (82% of respondents in favor of the punishment) and Jordan (70% in favor), as well as Pakistan (82% favor), whereas in Nigeria (56% in favor) and in Indonesia (42% in favor) opinion is more divided, perhaps due to diverging traditions and differing interpretations of Sharia.
Eastern religions
Hinduism
The Hindu Sanskrit texts present a range of views on adultery, offering widely differing positions. The hymn 4.5.5 of the Rigveda calls adultery pāpa (evil, sin). Other Vedic texts state adultery to be a sin, just like murder, incest, anger, evil thoughts and trickery. The Vedic texts, including the Rigveda, the Atharvaveda and the Upanishads, also acknowledge the existence of male lovers and female lovers as a basic fact of human life, followed by the recommendation that one should avoid such extramarital sex during certain ritual occasions (yajna). In a number of similes in the Rigveda, a woman's emotional eagerness to meet her lover is described, and one hymn prays to the gods that they protect the embryo of a pregnant wife as she sleeps with her husband and other lovers.
Adultery and similar offenses are discussed under one of the eighteen vivādapadas (titles of laws) in the Dharma literature of Hinduism. Adultery is termed Strisangrahana in the dharmasastra texts. These texts generally condemn adultery, with some exceptions involving consensual sex and niyoga (levirate conception) in order to produce an heir. According to the Apastamba Dharmasutra, the earliest dated Hindu law text, cross-varna adultery (adultery across castes) is a punishable crime, where the adulterous man receives a far more severe punishment than the adulterous arya woman. In the Gautama Dharmasutra, the adulterous arya woman is liable to harsh punishment for cross-class adultery. While the Gautama Dharmasutra reserves the punishment for cases of cross-class adultery, it seems to have been generalized by the Vishnu Dharmasastra and the Manusmriti. The recommended punishments also vary between these texts.
The Manusmriti, also known as the Laws of Manu, deals with this in greater detail. When translated, verse 4.134 of the book declares adultery to be a heinous offense. The Manusmriti does not include adultery as a "grievous sin", but includes it as a "secondary sin" that leads to a loss of caste. In the book, intent and mutual consent are among the factors that determine the recommended punishment. Rape is not considered adultery for the woman, while the rapist is punished severely. A lesser punishment is recommended for consensual adulterous sex. The death penalty is mentioned by Manu, as well as "penance" for the sin of adultery, even in cases of repeated adultery with a man of the same caste. In verses 8.362-363, the author states that sexual relations with the wife of a traveling performer are not a sin, and exempts such sexual liaisons. Verse 5.154 of the Manusmriti says a woman must constantly worship her husband as a god and be completely faithful even if he commits adultery. The book offers two views on adultery. It recommends that a newly married couple remain sexually faithful to each other for life. It also accepts that adulterous relationships happen, and that children are born from such relationships, and then proceeds to reason that the child belongs to the legal husband of the pregnant woman, and not to the biological father.
Other dharmasastra texts describe adultery as a punishable crime but offer differing details. According to the Naradasmriti (12.61-62), it is an adulterous act if a man has sexual intercourse with a woman who is protected by another man. The term adultery in the Naradasmriti is not confined to the relationship of a married man with another man's wife. It includes sex with any woman who is protected, including wives, daughters, other relatives, and servants. Adultery is not a punishable offence for a man if "the woman's husband has abandoned her because she is wicked, or he is eunuch, or of a man who does not care, provided the wife initiates it of her own volition". Adultery is also not a punishable offence if a married man engages in intercourse with a woman who does not belong to another man and is not a Brahmin, provided the woman is not of a higher caste than the man. The Brihaspati-smriti mentions, among other things, adulterous local customs in ancient India and then states, "for such practices these (people) incur neither penance nor secular punishment". Kautilya's Arthashastra includes an exemption that, if the husband forgives his adulterous wife, the woman and her lover should be set free. If the offended husband does not forgive, the Arthashastra recommends that the adulterous woman's nose and ears be cut off, while her lover be executed.
In the Kamasutra, which is not a religious text like the Vedas or Puranas but an ancient text on love and sex, Vatsyayana discusses adultery and devotes "not less than fifteen sutras (1.5.6–20) to enumerating the reasons (karana) for which a man is allowed to seduce a married woman". According to Wendy Doniger, the Kamasutra teaches adulterous sexual liaison as a means for a man to predispose the involved woman to assist him, work against his enemies, and facilitate his successes. It also explains the many signs and reasons a woman wants to enter into an adulterous relationship and when she does not want to commit adultery. The Kamasutra teaches strategies to engage in adulterous relationships, but concludes its chapter on sexual liaison by stating that one should not commit adultery, because adultery pleases only one of the two sides in a marriage, hurts the other, and goes against both dharma and artha.
According to Werner Menski, the Sanskrit texts take "widely different positions on adultery", with some considering it a minor offense that can be addressed with penance, while others treat it as a severe offense that, depending on the caste, deserves the death penalty for the man or the woman. According to Ramanathan and Weerakoon, in Hinduism sexual matters are left to the judgment of those involved and are not a matter to be imposed through law.
According to Carl Olsen, the classical Hindu society considered adultery as a sexual transgression but treated it with a degree of tolerance. It is described as a minor transgression in Naradasmriti and other texts, one that a sincere penance could atone. Penance is also recommended to a married person who does not actually commit adultery, but carries adulterous thoughts for someone else or is thinking of committing adultery.
Other Hindu texts present a more complex model of behavior and mythology in which gods commit adultery for various reasons. For example, Krishna commits adultery and the Bhagavata Purana justifies it as something to be expected when Vishnu took a human form, just as sages become uncontrolled. According to Tracy Coleman, Radha and other gopis are indeed lovers of Krishna, but this is prema or "selfless, true love" and not carnal craving. In Hindu texts, this relationship between gopis and Krishna involves secret nightly rendezvous. Some texts describe it as divine adultery, others as a symbol of spiritual dedication and religious value. The example of Krishna's adulterous behavior has been used by Sahajiya Hindus of Bengal to justify their own behavior that is contrary to the mainstream Hindu norm, according to Doniger. Other Hindu texts state that Krishna's adultery is not a license for other men to do the same, in the same way that men should not drink poison just because Rudra-Shiva drank poison during the Samudra Manthan. A similar teaching is found in Mahayana Buddhism, states Doniger.
The Linga Purana indicates that sexual hospitality existed in ancient India. The sage Sudarshana asks his wife Oghavati to please their guests in this way. One day, he comes home while she is having sex with a mendicant who visits their house. Sudarshana tells them to continue. The mendicant turns out to be Dharma, the lord of righteous conduct, who blesses the couple for their upholding of social law.
Buddhism
Buddhist texts such as Digha Nikāya describe adultery as a form of sexual wrongdoing that is one link in a chain of immorality and misery. According to Wendy Doniger, this view of adultery as evil is postulated in early Buddhist texts as having originated from greed in a previous life. This idea combines Hindu and Buddhist thoughts then prevalent. Sentient beings without body, state the canonical texts, are reborn on earth due to their greed and craving, some people become beautiful and some ugly, some become men and some women. The ugly envy the beautiful and this triggers the ugly to commit adultery with the wives of the beautiful. Like in Hindu mythology, states Doniger, Buddhist texts explain adultery as a result from sexual craving; it initiates a degenerative process.
Buddhism considers celibacy as the monastic ideal. For one who feels unable to live in celibacy, it recommends never committing adultery with another's wife. Engaging in sex outside of marriage, whether with the wife of another man, with a girl who is engaged to be married, or with a girl protected by her relatives (father or brother), or extramarital sex with prostitutes, ultimately causes suffering to other human beings and to oneself. It should be avoided, state the Buddhist canonical texts.
Buddhist Pali texts narrate legends in which the Buddha explains the karmic consequences of adultery. For example, states Robert Goldman, one such story is that of Thera Soreyya. In the Soreyya story, the Buddha states that men who commit adultery suffer hell for hundreds of thousands of years after rebirth, are then reborn a hundred successive times as women on earth, and must earn merit by "utter devotion to their husbands" in these lives before they can be reborn again as men to pursue a monastic life and liberation from samsara.
There are some differences between the Buddhist texts and the Hindu texts on the identification and consequences of adultery. According to José Ignacio Cabezón, for example, the Hindu text Naradasmriti considers consensual extra-marital sex between a man and a woman in certain circumstances (such as if the husband has abandoned the woman) as not a punishable crime, but the Buddhist texts "nowhere exculpate" any adulterous relationship. The term adultery in Naradasmriti is broader in scope than the one in Buddhist sources. In the text, various acts such as secret meetings, exchange of messages and gifts, "inappropriate touching" and a false accusation of adultery, are deemed adulterous, while Buddhist texts do not recognize these acts under adultery. Later texts such as the Dhammapada, Pancasiksanusamsa Sutra and a few Mahayana sutras state that "heedless man who runs after other men's wife" acquire demerit, blame, discomfort and are reborn in hell. Other Buddhist texts make no mention of legal punishments for adultery.
Other historical practices
In some Native American cultures, severe penalties could be imposed on an adulterous wife by her husband. In many instances she was made to endure a bodily mutilation which would, in the mind of the aggrieved husband, prevent her from ever being a temptation to other men again. Among the Aztecs, wives caught in adultery were occasionally impaled, although the more usual punishment was to be stoned to death.
The Code of Hammurabi, a well-preserved Babylonian law code of ancient Mesopotamia, dating back to about 1772 BC, provided drowning as punishment for adultery.
Amputation of the nose (rhinotomy) was a punishment for adultery among many civilizations, including ancient India, ancient Egypt, among Greeks and Romans, and in Byzantium and among the Arabs.
In the tenth century, the Arab explorer Ibn Fadlan noted that adultery was unknown among the pagan Oghuz Turks. Ibn Fadlan writes that "adultery is unknown among them; but whomsoever they find by his conduct that he is an adulterer, they tear him in two. This comes about so: they bring together the branches of two trees, tie him to the branches and then let both trees go, so that he is torn in two."
In medieval Europe, early Jewish law mandated stoning for an adulterous wife and her partner.
In England and its successor states, it has been high treason to engage in adultery with the King's wife, his eldest son's wife and his eldest unmarried daughter. The jurist Sir William Blackstone writes that "the plain intention of this law is to guard the Blood Royal from any suspicion of bastardy, whereby the succession to the Crown might be rendered dubious." Adultery was a serious issue when it came to succession to the crown. Philip IV of France had all three of his daughters-in-law imprisoned, two (Margaret of Burgundy and Blanche of Burgundy) on the grounds of adultery and the third (Joan of Burgundy) for being aware of their adulterous behaviour. The two brothers accused of being lovers of the king's daughters-in-law were executed immediately after being arrested. The wife of Philip IV's eldest son bore a daughter, the future Joan II of Navarre, whose paternity and succession rights were disputed all her life.
The christianization of Europe came to mean that, in theory, and unlike with the Romans, there was supposed to be a single sexual standard, where adultery was a sin and against the teachings of the church, regardless of the sex of those involved. In practice, however, the church seemed to have accepted the traditional double standard which punished the adultery of the wife more harshly than that of the husband. Among Germanic tribes, each tribe had its own laws for adultery, and many of them allowed the husband to "take the law in his hands" and commit acts of violence against a wife caught committing adultery. In the Middle Ages, adultery in Vienna was punishable by death through impalement. Austria was one of the last Western countries to decriminalize adultery, in 1997.
The Encyclopedia of Diderot & d'Alembert, Vol. 1 (1751) noted the legal double standard from that period, it wrote:
"Furthermore, although the husband who violates conjugal trust is guilty as well as the woman, it is not permitted for her to accuse him, nor to pursue him because of this crime".
Adultery and the law
Historically, many cultures considered adultery a very serious crime, some subject to severe punishment, especially for the married woman and sometimes for her sex partner, with penalties including capital punishment, mutilation, or torture. Such punishments have gradually fallen into disfavor, especially in Western countries from the 19th century. In countries where adultery is still a criminal offense, punishments range from fines to caning and even capital punishment. Since the 20th century, such laws have become controversial, with most Western countries repealing them.
However, even in jurisdictions that have decriminalised adultery, adultery may still have legal consequences, particularly in jurisdictions with fault-based divorce laws, where adultery almost always constitutes a ground for divorce and may be a factor in property settlement, the custody of children, the denial of alimony, etc. Adultery is not a ground for divorce in jurisdictions which have adopted a no-fault divorce model, but may still be a factor in child custody and property disputes.
International organizations have called for the decriminalising of adultery, especially in the light of several high-profile stoning cases that have occurred in some countries. The head of the United Nations expert body charged with identifying ways to eliminate laws that discriminate against women or are discriminatory to them in terms of implementation or impact, Kamala Chandrakirana, has stated that: "Adultery must not be classified as a criminal offence at all". A joint statement by the United Nations Working Group on discrimination against women in law and in practice states that: "Adultery as a criminal offence violates women’s human rights".
In Muslim countries that follow Sharia law for criminal justice, the punishment for adultery may be stoning. There are 15 countries in which stoning is authorized as lawful punishment, though in recent times it has been legally carried out only in Iran and Somalia. Most countries that criminalize adultery are those where the dominant religion is Islam, and several Sub-Saharan African Christian-majority countries, but there are some notable exceptions to this rule, namely, the Philippines and several U.S. states.
Punishment
In jurisdictions where adultery is illegal, punishments vary from fines (for example in the US state of Rhode Island) to caning in parts of Asia. In 15 countries the punishment includes stoning, although in recent times it has been legally enforced only in Iran and Somalia. Most stoning cases are the result of mob violence, and while technically illegal, no action is usually taken against perpetrators. Sometimes such stonings are ordered by informal village leaders who have de facto power in the community. Adultery may have consequences under civil law even in countries where it is not outlawed by the criminal law. For instance it may constitute fault in countries where the divorce law is fault based or it may be a ground for tort.
In some jurisdictions, the "intruder" (the third party) is punished, rather than the adulterous spouse. For instance, article 266 of the Penal Code of South Sudan reads: "Whoever, has consensual sexual intercourse with a man or woman who is and whom he or she has reason to believe to be the spouse of another person, commits the offence of adultery [...]". Similarly, under the adultery law in India (Section 497 of the Indian Penal Code, until overturned by the Supreme Court in 2018), it was a criminal offense for a man to have consensual sexual intercourse with a married woman without the consent of her husband (no party was criminally punished in case of intercourse between a married man and an unmarried woman).
Legal issues regarding paternity
Historically, paternity of children born out of adultery has been seen as a major issue. Modern advances such as reliable contraception and paternity testing have changed the situation (in Western countries). Most countries nevertheless have a legal presumption that a woman's husband is the father of her children who were born during that marriage. Although this is often merely a rebuttable presumption, many jurisdictions have laws which restrict the possibility of legal rebuttal (for instance by creating a legal time limit during which paternity may be challenged, such as a certain number of years from the birth of the child). Establishing correct paternity may have major legal implications, for instance in regard to inheritance.
Children born out of adultery suffered, until recently, adverse legal and social consequences. In France, for instance, a law stating that the inheritance rights of a child born under such circumstances were, on the part of the married parent, half of what they would have been under ordinary circumstances remained in force until 2001, when France was forced to change it by a ruling of the European Court of Human Rights (ECtHR); in 2013, the ECtHR also ruled that the new 2001 regulations must also be applied to children born before 2001.
There has been, in recent years, a trend of legally favoring the right to a relationship between the child and its biological father, rather than preserving the appearance of the 'social' family. In 2010, the ECtHR ruled in favor of a German man who had fathered twins with a married woman, granting him a right of contact with the twins, despite the fact that the mother and her husband had forbidden him from seeing the children.
Criticism of adultery laws
Laws against adultery have been described as invasive and incompatible with principles of limited government. Much of the criticism comes from libertarians, who generally hold that government must not intrude into daily personal lives and that such disputes are to be settled privately rather than prosecuted and penalized by public entities. It is also argued that adultery laws are rooted in religious doctrines, which should not be the basis of laws in a secular state.
Historically, in most cultures, laws against adultery were enacted only to prevent women—and not men—from having sexual relations with anyone other than their spouses, with adultery being often defined as sexual intercourse between a married woman and a man other than her husband. Among many cultures the penalty was—and to this day still is, as noted below—capital punishment. At the same time, men were free to maintain sexual relations with any women (polygyny) provided that the women did not already have husbands or "owners". Indeed, בעל (ba`al), Hebrew for husband, used throughout the Bible, is synonymous with owner. These laws were enacted in fear of cuckoldry and thus sexual jealousy. Many indigenous customs, such as female genital mutilation and even menstrual taboos, have been theorized to have originated as preventive measures against cuckolding. This arrangement has been deplored by many modern intellectuals.
Opponents of adultery laws argue that these laws maintain social norms which justify violence, discrimination and oppression of women, whether in the form of state-sanctioned violence such as stoning, flogging or hanging for adultery, or in the form of individual acts of violence committed against women by husbands or relatives, such as honor killings, crimes of passion, and beatings. UN Women has called for the decriminalization of adultery.
An argument against the criminal status of adultery is that the resources of law enforcement are limited and should be used carefully; investing them in the investigation and prosecution of adultery, which is very difficult, may come at the expense of curbing serious violent crimes.
Human rights organizations have stated that legislation on sexual crimes must be based on consent, and must recognize consent as central, and not trivialize its importance; doing otherwise can lead to legal, social or ethical abuses. Amnesty International, when condemning stoning legislation that targets adultery, among other acts, has referred to "acts which should never be criminalized in the first place, including consensual sexual relations between adults". Salil Shetty, Amnesty International's Secretary General, said: "It is unbelievable that in the twenty-first century some countries are condoning child marriage and marital rape while others are outlawing abortion, sex outside marriage and same-sex sexual activity, even punishable by death." The My Body My Rights campaign has condemned state control over individual sexual and reproductive decisions, stating "All over the world, people are coerced, criminalized and discriminated against, simply for making choices about their bodies and their lives".
Consequences
General
For various reasons, most couples who marry do so with the expectation of fidelity. Adultery is often seen as a breach of trust and of the commitment that had been made during the act of marriage. Adultery can be emotionally traumatic for both spouses and often results in divorce.
Adultery may lead to ostracization from certain religious or social groups.
Adultery can also lead to feelings of guilt and jealousy in the person with whom the affair is being committed. In some cases, this "third person" may encourage divorce (either openly or subtly). If the cheating spouse has hinted at divorce to continue the affair, the third person may feel deceived if that does not happen. They may simply withdraw with ongoing feelings of guilt, carry on an obsession with their lover, may choose to reveal the affair, or in rare cases, commit violence or other crimes.
There is a correlation between parental divorce and children experiencing difficulties later in life.
Sexually transmitted infections
Like any sexual contact, extramarital sex opens the possibility of introducing sexually transmitted diseases (STDs) into a marriage. Since most married couples do not routinely use barrier contraceptives, STDs can be introduced to a marriage partner by a spouse engaging in unprotected extramarital sex. This can be a public health issue in regions of the world where STDs are common, but addressing it is very difficult due to legal and social barriers: to talk openly about the situation would mean acknowledging that adultery (often) takes place, something that is taboo in certain cultures, especially those strongly influenced by religion. In addition, dealing with the issue of barrier contraception in marriage in cultures where women have very few rights is difficult: the power of women to negotiate safer sex (or sex in general) with their husbands is often limited. The World Health Organization (WHO) found that women in violent relationships were at increased risk of HIV/AIDS, because they found it very difficult to negotiate safe sex with their partners, or to seek medical advice if they thought they had been infected.
Violence
Historically, female adultery often resulted in extreme violence, including murder (of the woman, her lover, or both, committed by her husband). Today, domestic violence is outlawed in most countries.
Marital infidelity has been used, especially in the past, as a legal defence of provocation to a criminal charge, such as murder or assault. In some jurisdictions, the defence of provocation has been replaced by a partial defence, or the behaviour of the victim can be invoked as a mitigating factor in sentencing.
In recent decades, feminists and women's rights organizations have worked to change laws and social norms which tolerate crimes of passion against women. UN Women has urged states to review legal defenses of passion and provocation, and other similar laws, to ensure that such laws do not lead to impunity in regard to violence against women, stating that "laws should clearly state that these defenses do not include or apply to crimes of "honour", adultery, or domestic assault or murder."
The Council of Europe Recommendation Rec(2002)5 of the Committee of Ministers to member states on the protection of women against violence states that member states should "preclude adultery as an excuse for violence within the family".
Honor killings
Honor killings are often connected to accusations of adultery. Honor killings continue to be practiced in some parts of the world, particularly (but not only) in parts of South Asia and the Middle East. Honor killings are treated leniently in some legal systems. Honor killings have also taken place in immigrant communities in Europe, Canada and the U.S. In some parts of the world, honor killings enjoy considerable public support: in one survey, 33.4% of teenagers in Jordan's capital city, Amman, approved of honor killings. A survey in Diyarbakir, Turkey, found that, when asked the appropriate punishment for a woman who has committed adultery, 37% of respondents said she should be killed, while 21% said her nose or ears should be cut off.
Until 2009, in Syria, it was legal for a husband to kill or injure his wife or his female relatives caught in flagrante delicto committing adultery or other illegitimate sexual acts. The law has changed to allow the perpetrator to only "benefit from the attenuating circumstances, provided that he serves a prison term of no less than two years in the case of killing." Other articles also provide for reduced sentences. Article 192 states that a judge may opt for reduced punishments (such as short-term imprisonment) if the killing was done with an honorable intent. Article 242 says that a judge may reduce a sentence for murders that were done in rage and caused by an illegal act committed by the victim. In recent years, Jordan has amended its Criminal Code to modify its laws which used to offer a complete defense for honor killings.
According to the UN in 2002:
"The report of the Special Rapporteur ... concerning cultural practices in the family that are violent towards women (E/CN.4/2002/83), indicated that honour killings had been reported in Egypt, Jordan, Lebanon, Morocco, Pakistan, the Syrian Arab Republic, Turkey, Yemen, and other Mediterranean and Persian Gulf countries, and that they had also taken place in western countries such as France, Germany and the United Kingdom, within migrant communities."
Crimes of passion
Crimes of passion are often triggered by jealousy, and, according to Human Rights Watch, "have a similar dynamic [to honor killings] in that the women are killed by male family members and the crimes are perceived as excusable or understandable."
Stoning
Stoning, or lapidation, refers to a form of capital punishment whereby an organized group throws stones at an individual until the person dies, or the condemned person is pushed from a platform set high enough above a stone floor that the fall would probably result in instantaneous death.
Stoning continues to be practiced today, in parts of the world. Recently, several people have been sentenced to death by stoning after being accused of adultery in Iran, Somalia, Afghanistan, Sudan, Mali, and Pakistan by tribal courts.
Flogging
In some jurisdictions flogging is a punishment for adultery. There are also incidents of extrajudicial floggings, ordered by informal religious courts. In 2011, a 14-year-old girl in Bangladesh died after being publicly lashed, when she was accused of having an affair with a married man. Her punishment was ordered by villagers under Sharia law.
Violence between the partners of an adulterous couple
Married people who form relations with extramarital partners or people who engage in relations with partners married to somebody else may be subjected to violence in these relations. Because of the nature of adultery (illicit or illegal in many societies), this type of intimate partner violence may go underreported or may not be prosecuted when it is reported; and in some jurisdictions this type of violence is not covered by the specific domestic violence laws meant to protect persons in legitimate couples.
In fiction
The theme of adultery has been used in many literary works, and has served as a theme for notable books such as Anna Karenina, Madame Bovary, Lady Chatterley's Lover, The Scarlet Letter and Adultery. It has also been the theme of many movies.
See also
Adultery in literature
Affair
Cuckquean
Cuckold
Emotional affair
Family therapy (Relationship counseling)
Incidence of monogamy
Infidelity
Jesus and the woman taken in adultery
MacLennan v MacLennan
Open marriage
Polygyny threshold model
Polyamory
Sexual jealousy in humans
Swinging
References
Further reading
McCracken, Peggy (1998). The Romance of Adultery: Queenship and Sexual Transgression in Old French Literature. University of Pennsylvania Press.
Mathews, J. (2008). Dating a Married Man: Memoirs from the "Other Women".
Best Practices: Progressive Family Laws in Muslim Countries (August 2005)
Moultrup, David J. (1990). Husbands, Wives & Lovers. New York: Guilford Press.
Pittman, F. (1989). Private Lies. New York: W. W. Norton Co.
Vaughan, P. (1989). The Monogamy Myth. New York: New Market Press.
Blow, Adrian J.; Hartnett, Kelley (April 2005). "Infidelity in Committed Relationships I: A Methodological Review". Journal of Marital and Family Therapy.
Blow, Adrian J.; Hartnett, Kelley (April 2005). "Infidelity in Committed Relationships II: A Substantive Review". Journal of Marital and Family Therapy.
Family law
Human sexuality
Love
Extramarital relationships
Sexual misconduct
Sex and the law | Adultery | [
"Biology"
] | 10,886 | [
"Human sexuality",
"Behavior",
"Sexuality",
"Human behavior"
] |
58,859 | https://en.wikipedia.org/wiki/Allergen | An allergen is an otherwise harmless substance that triggers an allergic reaction in sensitive individuals by stimulating an immune response.
In technical terms, an allergen is an antigen that is capable of stimulating a type-I hypersensitivity reaction in atopic individuals through immunoglobulin E (IgE) responses. Most humans mount significant Immunoglobulin E responses only as a defense against parasitic infections. However, some individuals may respond to many common environmental antigens. This hereditary predisposition is called atopy. In atopic individuals, non-parasitic antigens stimulate inappropriate IgE production, leading to type I hypersensitivity.
Sensitivities vary widely from one person (or from one animal) to another. A very broad range of substances can be allergens to sensitive individuals.
Examples
Allergens can be found in a variety of sources, such as dust mite excretion, pollen, pet dander, or even royal jelly. Food allergies are not as common as food sensitivity, but some foods such as peanuts (a legume), nuts, seafood and shellfish are the cause of serious allergies in many people.
The United States Food and Drug Administration recognizes nine foods as major food allergens: peanuts, tree nuts, eggs, milk, shellfish, fish, wheat, soy, and most recently sesame, as well as sulfites (chemical-based, often found in flavors and colors in foods) at 10ppm and over. In other countries, due to differences in the genetic profiles of their citizens and different levels of exposure to specific foods, the official allergen lists will vary. Canada recognizes all nine of the allergens recognized by the US as well as mustard. The European Union additionally recognizes other gluten-containing cereals as well as celery and lupin.
Another allergen is urushiol, a resin produced by poison ivy and poison oak, which causes the skin rash condition known as urushiol-induced contact dermatitis by changing a skin cell's configuration so that it is no longer recognized by the immune system as part of the body. Various trees and wood products, such as paper, cardboard, and MDF, can also cause mild to severe allergy symptoms, such as asthma and skin rash, through touch or inhalation of sawdust.
An allergic reaction can be caused by any form of direct contact with the allergen—consuming food or drink one is sensitive to (ingestion), breathing in pollen, perfume or pet dander (inhalation), or brushing a body part against an allergy-causing plant (direct contact). Other common causes of serious allergy are wasp, fire ant and bee stings, penicillin, and latex. An extremely serious form of an allergic reaction is called anaphylaxis. One form of treatment is the administration of sterile epinephrine to the person experiencing anaphylaxis, which suppresses the body's overreaction to the allergen, and allows for the patient to be transported to a medical facility.
Common
In addition to foreign proteins found in foreign serum (from blood transfusions) and vaccines, common allergens include:
Animal products
Fel d 1 (Allergy to cats)
fur and dander
cockroach calyx
wool
dust mite excretion
Drugs
penicillin
sulfonamides
salicylates (also found naturally in numerous fruits)
Foods
celery and celeriac
corn or maize
eggs (typically albumen, the white)
fruit
pumpkin, egg-plant
legumes
beans
peas
peanuts
soybeans
milk
seafood
sesame
soy
tree nuts
pecans
almonds
wheat
Insect stings
bee sting venom
wasp sting venom
mosquito bites
Mold spores
Top 5 allergens discovered in patch tests in 2005–06:
nickel sulfate (19.0%)
Balsam of Peru (11.9%)
fragrance mix I (11.5%)
quaternium-15 (10.3%), and
neomycin (10.0%).
Metals
nickel
chromium
Other
latex
wood
Plant pollens (hay fever)
grass: ryegrass, timothy-grass
weeds: ragweed, plantago, nettle, Artemisia vulgaris, Chenopodium album, sorrel
trees: birch, alder, hazel, hornbeam, Aesculus, willow, poplar, Platanus, Tilia, Olea, Ashe juniper, Alstonia scholaris
Seasonal
Seasonal allergy symptoms are commonly experienced during specific parts of the year, usually during spring, summer or fall when certain trees or grasses pollinate. This depends on the kind of tree or grass. For instance, some trees such as oak, elm, and maple pollinate in the spring, while grasses such as Bermuda, timothy and orchard pollinate in the summer.
Grass allergy is generally linked to hay fever because the two have similar symptoms and causes. Symptoms include rhinitis, which causes sneezing and a runny nose, as well as allergic conjunctivitis, which includes watering and itchy eyes. An initial tickle on the roof of the mouth or in the back of the throat may also be experienced.
Also, depending on the season, the symptoms may be more severe and people may experience coughing, wheezing, and irritability. A few people even become depressed, lose their appetite, or have problems sleeping. Moreover, since the sinuses may also become congested, some people experience headaches.
If both parents have had allergies, there is a 66% chance that the individual will experience seasonal allergies; the risk lowers to 60% if just one parent has had allergies. The immune system also has a strong influence on seasonal allergies, because it reacts differently to different allergens such as pollen. When an allergen enters the body of an individual who is predisposed to allergies, it triggers an immune reaction and the production of antibodies. These allergen antibodies migrate to mast cells lining the nose, eyes, and lungs. When an allergen drifts into the nose more than once, mast cells release chemicals, including histamine, that irritate and inflame the moist membranes lining the nose and produce the symptoms of an allergic reaction: scratchy throat, itching, sneezing and watery eyes. Some symptoms that differentiate allergies from a cold include:
No fever.
Mucous secretions are runny and clear.
Sneezes occurring in rapid and several sequences.
Itchy throat, ears and nose.
These symptoms usually last longer than 7–10 days.
Among seasonal allergies, some allergens cross-react with one another and effectively produce new sensitivities. For instance, grass pollen allergens cross-react with food allergy proteins in vegetables such as onion, lettuce, carrots, celery, and corn. In addition, foods related to birch pollen allergens, such as apples, grapes, peaches, celery, and apricots, can produce severe itching in the ears and throat. Cypress pollen allergy involves cross-reactivity among pollen allergens from diverse species such as olive, privet, ash and Russian olive. In some rural areas, there is another form of seasonal grass allergy, combining airborne particles of pollen mixed with mold.
Recent research has suggested that humans might have developed allergies as a defense to fight off parasites. According to Yale University immunologist Ruslan Medzhitov, protease allergens cleave the same sensor proteins that evolved to detect proteases produced by parasitic worms. Additionally, a report on seasonal allergies called "Extreme allergies and Global Warming" has found that many allergy triggers are worsening due to climate change. Sixteen states in the United States were named as "Allergen Hotspots" for large increases in allergenic tree pollen if global warming pollution keeps increasing. The report's researchers therefore claimed that global warming is bad news for millions of asthmatics in the United States whose asthma attacks are triggered by seasonal allergies. Seasonal allergies are one of the main triggers for asthma, along with colds or flu, cigarette smoke and exercise. In Canada, for example, up to 75% of asthmatics also have seasonal allergies.
Diagnosis
Based on the symptoms seen in the patient, the answers given during symptom evaluation, and a physical exam, doctors can make a diagnosis to identify whether the patient has a seasonal allergy. After making the diagnosis, the doctor is able to tell the main cause of the allergic reaction and recommend the treatment to follow. Two tests can be done to determine the cause: a blood test and a skin test. Allergists do skin tests in one of two ways: either dropping some purified liquid of the allergen onto the skin and pricking the area with a small needle, or injecting a small amount of allergen under the skin.
Alternative tools are available to identify seasonal allergies, such as laboratory tests, imaging tests, and nasal endoscopy. In the laboratory tests, the doctor will take a nasal smear, which is examined microscopically for factors that may indicate a cause, such as increased numbers of eosinophils (a type of white blood cell); a high eosinophil count indicates that an allergic condition might be present.
Another laboratory test is the blood test for IgE (immunoglobulin E), such as the radioallergosorbent test (RAST) or the more recent enzyme allergosorbent tests (EAST), implemented to detect high levels of allergen-specific IgE in response to particular allergens. Although blood tests are less accurate than skin tests, they can be performed on patients unable to undergo skin testing. Imaging tests can be useful to detect sinusitis in people who have chronic rhinitis, and they can work when other test results are ambiguous. There is also nasal endoscopy, wherein a tube with a small camera is inserted through the nose to view the passageways and examine any irregularities in the nose structure. Endoscopy can be used for some cases of chronic or unresponsive seasonal rhinitis.
Fungal
In 1952 basidiospores were described as possible airborne allergens, and they were linked to asthma in 1969. Basidiospores are the dominant airborne fungal allergens. Fungal allergies are associated with seasonal asthma and are considered to be a major source of airborne allergens. The basidiospore-producing fungi include mushrooms, rusts, smuts, brackets, and puffballs. Airborne spores from mushrooms reach levels comparable to those of mold and pollens. Rates of respiratory allergy to mushrooms are as high as 30 percent among those with allergic disorders, but mushrooms are believed to account for less than 1 percent of food allergies. Heavy rainfall (which increases fungal spore release) is associated with increased hospital admissions of children with asthma. A study in New Zealand found that 22 percent of patients with respiratory allergic disorders tested positive for basidiospore allergies. Mushroom spore allergies can cause either immediate allergic symptoms or delayed allergic reactions. Those with asthma are more likely to have immediate allergic reactions and those with allergic rhinitis are more likely to have delayed allergic responses. One study found that 27 percent of patients were allergic to basidiomycete mycelial extracts and 32 percent were allergic to basidiospore extracts, demonstrating the high incidence of fungal sensitisation in individuals with suspected allergies. Of basidiomycete cap, mycelial, and spore extracts, spore extracts have been found to be the most reliable for diagnosing basidiomycete allergy.
In Canada, 8% of children attending allergy clinics were found to be allergic to Ganoderma, a basidiospore-producing genus. Pleurotus ostreatus, Cladosporium, and Calvatia cyathiformis are significant sources of airborne spores. Other significant fungal allergens include Aspergillus and the Alternaria-Penicillium families. In India, Fomes pectinatus is a predominant airborne allergen, affecting up to 22 percent of patients with respiratory allergies. Some airborne fungal allergens, such as Coprinus comatus, are associated with worsening of eczematous skin lesions. Children who are born during autumn months (during the fungal spore season) are more likely to develop asthmatic symptoms later in life.
Treatment
Treatment includes over-the-counter medications, antihistamines, nasal decongestants, allergy shots, and alternative medicine. In the case of nasal symptoms, antihistamines are normally the first option. They may be taken together with pseudoephedrine to help relieve a stuffy nose and they can stop the itching and sneezing. Some over-the-counter options are Benadryl and Tavist. However, these antihistamines may cause extreme drowsiness, therefore, people are advised to not operate heavy machinery or drive while taking this kind of medication. Other side effects include dry mouth, blurred vision, constipation, difficulty with urination, confusion, and light-headedness. There is also a newer second generation of antihistamines that are generally classified as the "non-sedating antihistamines" or anti-drowsy, which include cetirizine, loratadine, and fexofenadine.
An example of a nasal decongestant is pseudoephedrine; its side-effects include insomnia, restlessness, and difficulty urinating. Some other nasal sprays are available by prescription, including azelastine and ipratropium; some of their side-effects include drowsiness. For eye symptoms, it is important to first bathe the eyes with plain eyewashes to reduce the irritation. People should not wear contact lenses during episodes of conjunctivitis.
Allergen immunotherapy involves administering doses of allergens to accustom the body to the allergen and thereby induce specific long-term tolerance. Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops) or by injections under the skin (subcutaneous). Immunotherapy contains a small amount of the substance that triggers the allergic reactions.
Ladders are also used for egg and milk allergies as a home-based therapy, mainly for children. Such methods, cited in the UK, involve the gradual introduction of the allergen in a cooked form in which the protein's allergenicity has been reduced. By reintroducing the allergen from a fully cooked, usually baked, state, research suggests that a tolerance can emerge to certain egg and milk allergies under the supervision of a dietitian or specialist. The suitability of this treatment is debated between UK and North American experts.
See also
Asthma
Asthmagen
Bioaerosol
Eczema
Eggshell skull
Hypoallergenic
Immunodiagnostics
List of allergies
Nose filter
Oral allergy syndrome
Toxin
References
External links
Allermatch — Sequence comparison to allergenic proteins
SDAP — Structural database of allergenic proteins
Allergome Database
Allergen Nomenclature
Immune system
Immunology
Allergology | Allergen | [
"Biology"
] | 3,215 | [
"Organ systems",
"Immunology",
"Immune system"
] |
58,862 | https://en.wikipedia.org/wiki/Three%20utilities%20problem | The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved.
This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph K3,3, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that K3,3 is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is K3,3. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for K3,3 the minimum number of crossings is one.
K3,3 is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem. It has also been called the Thomsen graph after the 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph.
History
A review of the history of the three utilities problem is given by Kullman. He states that most published references to the problem characterize it as "very ancient". In the earliest publication found by Kullman, Dudeney names it "water, gas, and electricity". However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas". Dudeney also published the same puzzle previously, in The Strand Magazine in 1913. A competing claim of priority goes to Sam Loyd, who was quoted by his son in a posthumous biography as having published the problem in 1900.
Another early version of the problem involves connecting three houses to three wells. It is stated similarly to a different (and solvable) puzzle that also involves three houses and three fountains, with all three fountains and one house touching a rectangular wall; the puzzle again involves making non-crossing connections, but only between three designated pairs of houses and wells or fountains, as in modern numberlink puzzles. Loyd's puzzle "The Quarrelsome Neighbors" similarly involves connecting three houses to three gates by three non-crossing paths (rather than nine as in the utilities problem); one house and the three gates are on the wall of a rectangular yard, which contains the other two houses within it.
As well as in the three utilities problem, the graph K3,3 appears in late 19th-century and early 20th-century publications both in early studies of structural rigidity and in chemical graph theory, where Julius Thomsen proposed it in 1886 for the then-uncertain structure of benzene. In honor of Thomsen's work, K3,3 is sometimes called the Thomsen graph.
Statement
The three utilities problem can be stated as follows: suppose three houses each need to be connected to the water, gas, and electricity companies, with a separate line from each house to each company; is there a way to make all nine connections without any of the lines crossing each other?
The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing.
In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph K3,3 is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle.
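The planarity question just described can also be checked mechanically. The following sketch is illustrative only: it assumes the third-party Python library networkx is available, and the variable names are chosen for this example rather than taken from the puzzle's standard presentation.

```python
# Illustrative sketch (assumes the networkx package is installed).
import networkx as nx

# Vertices 0-2 play the role of the houses and vertices 3-5 the utilities;
# complete_bipartite_graph adds one edge per house-utility pair, nine in total.
G = nx.complete_bipartite_graph(3, 3)

# check_planarity returns a pair (is_planar, certificate); the certificate is a
# planar embedding when the graph is planar and None otherwise.
is_planar, certificate = nx.check_planarity(G)
print(is_planar)  # expected: False, matching the impossibility of the puzzle
```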
Puzzle solutions
Unsolvability
As it is usually presented (on a flat two-dimensional plane), the solution to the utility puzzle is "no": there is no way to make all nine connections without any of the lines crossing each other.
In other words, the graph K3,3 is not planar. Kazimierz Kuratowski stated in 1930 that K3,3 is nonplanar, from which it follows that the problem has no solution. Kullman, however, states that "Interestingly enough, Kuratowski did not publish a detailed proof that [ K3,3 ] is non-planar".
One proof of the impossibility of finding a planar embedding of K3,3 uses a case analysis involving the Jordan curve theorem. In this solution, one examines different possibilities for the locations of the vertices with respect to the 4-cycles of the graph and shows that they are all inconsistent with a planar embedding.
Alternatively, it is possible to show that any bridgeless bipartite planar graph with V vertices and E edges has E ≤ 2V − 4, by combining the Euler formula V − E + F = 2 (where F is the number of faces of a planar embedding) with the observation that the number of faces is at most half the number of edges (the vertices around each face must alternate between houses and utilities, so each face has at least four edges, and each edge belongs to exactly two faces). In the utility graph, E = 9 and 2V − 4 = 8, so in the utility graph it is untrue that E ≤ 2V − 4. Because it does not satisfy this inequality, the utility graph cannot be planar.
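The counting argument above can be displayed compactly; the following lines restate it in symbols (with V, E and F as in the paragraph) and are a summary rather than an additional result.

```latex
\begin{align*}
V - E + F &= 2 && \text{(Euler's formula for a connected planar embedding)}\\
4F &\le 2E && \text{(each face has at least 4 edges; each edge borders 2 faces)}\\
\Rightarrow\quad E &\le 2V - 4 && \text{(substitute } F \le E/2 \text{ into Euler's formula)}\\
K_{3,3}:\quad E = 9 &> 8 = 2V - 4 && \text{(so no planar embedding exists)}
\end{align*}
```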
Changing the rules
K3,3 is a toroidal graph, which means that it can be embedded without crossings on a torus, a surface of genus one. These embeddings solve versions of the puzzle in which the houses and companies are drawn on a coffee mug or other such surface instead of a flat plane. There is even enough additional freedom on the torus to solve a version of the puzzle with four houses and four utilities. Similarly, if the three utilities puzzle is presented on a sheet of a transparent material, it may be solved after twisting and gluing the sheet to form a Möbius strip.
Another way of changing the rules of the puzzle that would make it solvable, suggested by Henry Dudeney, is to allow utility lines to pass through other houses or utilities than the ones they connect.
Properties of the utility graph
Beyond the utility puzzle, the same graph comes up in several other mathematical contexts, including rigidity theory, the classification of cages and well-covered graphs, the study of graph crossing numbers, and the theory of graph minors.
Rigidity
The utility graph is a Laman graph, meaning that for almost all placements of its vertices in the plane, there is no way to continuously move its vertices while preserving all edge lengths, other than by a rigid motion of the whole plane, and that none of its spanning subgraphs have the same rigidity property. It is the smallest example of a nonplanar Laman graph. Despite being a minimally rigid graph, it has non-rigid embeddings with special placements for its vertices. For general-position embeddings, a polynomial equation describing all possible placements with the same edge lengths has degree 16, meaning that in general there can be at most 16 placements with the same lengths. It is possible to find systems of edge lengths for which up to eight of the solutions to this equation describe realizable placements.
Other graph-theoretic properties
K3,3 is a triangle-free graph, in which every vertex has exactly three neighbors (a cubic graph). Among all such graphs, it is the smallest. Therefore, it is the (3,4)-cage, the smallest graph that has three neighbors per vertex and in which the shortest cycle has length four.
Like all other complete bipartite graphs, it is a well-covered graph, meaning that every maximal independent set has the same size. In this graph, the only two maximal independent sets are the two sides of the bipartition, and they are of equal sizes. K3,3 is one of only seven 3-regular 3-connected well-covered graphs.
Generalizations
Two important characterizations of planar graphs, Kuratowski's theorem that the planar graphs are exactly the graphs that contain neither K3,3 nor the complete graph K5 as a subdivision, and Wagner's theorem that the planar graphs are exactly the graphs that contain neither K3,3 nor K5 as a minor, make use of and generalize the non-planarity of K3,3.
Pál Turán's "brick factory problem" asks more generally for a formula for the minimum number of crossings in a drawing of the complete bipartite graph in terms of the numbers of vertices and on the two sides of the bipartition. The utility graph may be drawn with only one crossing, but not with zero crossings, so its crossing number is one.
References
External links
3 Utilities Puzzle at Cut-the-knot
The Utilities Puzzle explained and "solved" at Archimedes-lab.org
Topological graph theory
Mathematical puzzles
Unsolvable puzzles | Three utilities problem | [
"Mathematics"
] | 1,957 | [
"Unsolvable puzzles",
"Graph theory",
"Topology",
"Mathematical relations",
"Mathematical problems",
"Topological graph theory"
] |
58,863 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20incompleteness%20theorems | Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.
Formal systems: completeness, consistency, and effective axiomatization
The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense.
There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties.
Effective axiomatization
A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is recursively enumerable. This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC).
The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However, it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems.
Completeness
A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first Incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone.
In a system of mathematics, thinkers such as Hilbert believed that it was just a matter of time to find such an axiomatization that would allow one to either prove or disprove (by proving its negation) every mathematical formula.
A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) can not be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue.
The theory of first-order Peano arithmetic seems consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus by the first incompleteness theorem, Peano Arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano's arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete.
Consistency
A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction.
Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then V_κ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model.
If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent.
Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory.
Systems which contain arithmetic
The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems.
The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory.
The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication.
Dan Willard has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories).
Conflicting goals
In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers . In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems.
The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot, from itself, be proved consistent. It is also not complete, as illustrated by the continuum hypothesis, which is unresolvable in ZFC + "there exists an inaccessible cardinal".
The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized.
First incompleteness theorem
Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated.
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2020)
The unprovable statement G_F referred to by the theorem is often referred to as "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence.
Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F' that contains the whole of F plus G_F as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F', and thus F' also cannot be complete. In this case, G_F is indeed a theorem in F', because it is an axiom. Because G_F states only that it is not provable in F, no contradiction is presented by its provability within F'. However, because the incompleteness theorem applies to F', there will be a new Gödel statement G_F' for F', showing that F' is also incomplete. G_F' will differ from G_F in that G_F' will refer to F', rather than F.
Syntactic form of the Gödel sentence
The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be G_F itself. In this way, the Gödel sentence G_F indirectly states its own unprovability within F.
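The self-referential construction sketched above is usually obtained from the diagonal (fixed-point) lemma. The display below is a standard statement of that lemma, added here for orientation; the symbols Prov_F and the corner quotes (a provability predicate and Gödel quotation) are notation introduced for this sketch rather than notation fixed by the article.

```latex
\text{Diagonal lemma: for every formula } \varphi(x) \text{ there is a sentence } \psi
\text{ such that } F \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner).
\qquad
\text{Taking } \varphi(x) = \neg\,\mathrm{Prov}_F(x) \text{ gives a sentence } G_F
\text{ with } F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner).
```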
To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete.
Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation. As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level Π⁰₁ of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables.
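Schematically, and using symbols introduced only for this illustration, the two forms mentioned in the preceding paragraph look as follows: a block of universal quantifiers over a quantifier-free (primitive recursive) matrix, and, via the MRDP theorem, an assertion that an integer polynomial has no zero.

```latex
G_F \;\equiv\; \forall x_1 \cdots \forall x_k \,\neg R(x_1,\dots,x_k)
\qquad\text{(a } \Pi^0_1 \text{ sentence, } R \text{ primitive recursive)},
\qquad
\forall x_1 \cdots \forall x_k \;\; p(x_1,\dots,x_k) \neq 0
\qquad\text{(for a suitable polynomial } p \text{ with integer coefficients)}.
```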
Truth of the Gödel sentence
The first incompleteness theorem shows that the Gödel sentence G_F of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true. For this reason, the sentence G_F is often said to be "true but unprovable." However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence G_F may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication Cons(F) → G_F, where Cons(F) is a canonical sentence asserting the consistency of F.
Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem. That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number.
Relationship with the liar paradox
Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.
It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "x is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski.
Extensions of Gödel's original result
Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions.
Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.
Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent. A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number n the system proves ¬P(n), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.
Second incompleteness theorem
For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F."
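One conventional way of writing such a canonical consistency statement — assuming a proof predicate Prf_F(x, y), read "x codes an F-derivation of the formula coded by y"; the name is a notational choice, not something fixed by the text above — is:

\mathrm{Cons}(F) \;\equiv\; \neg \exists x \, \mathrm{Prf}_F\!\bigl(x, \ulcorner 0=1 \urcorner\bigr)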
Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized. This theorem states that for any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself. This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself.
Expressing consistency
There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency.
Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.)
The Hilbert–Bernays conditions
The standard proof of the second incompleteness theorem assumes that the provability predicate Prov(#(P)) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say:
If F proves P, then F proves Prov(#(P)).
F proves 1.; that is, F proves Prov(#(P)) → Prov(#(Prov(#(P)))).
F proves Prov(#(P → Q)) ∧ Prov(#(P)) → Prov(#(Q)) (analogue of modus ponens).
There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic.
Implications for consistency proofs
Gödel's second incompleteness theorem also implies that a system F1 satisfying the technical conditions outlined above cannot prove the consistency of any system F2 that proves the consistency of F1. This is because such a system F1 can prove that if F2 proves the consistency of F1, then F1 is in fact consistent. For the claim that F1 is consistent has the form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in F1". If F1 were in fact inconsistent, then F2 would prove for some n that n is the code of a contradiction in F1. But if F2 also proved that F1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in F1 to show that if F2 is consistent, then F1 is consistent. Since, by the second incompleteness theorem, F1 does not prove its consistency, it cannot prove the consistency of F2 either.
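In outline — with F1 and F2 as in the paragraph above, and Cons written as before — the formalized argument is:

F_1 \vdash \mathrm{Cons}(F_2) \rightarrow \mathrm{Cons}(F_1), \qquad \text{so} \qquad F_1 \vdash \mathrm{Cons}(F_2) \;\Longrightarrow\; F_1 \vdash \mathrm{Cons}(F_1),

and the conclusion F_1 \vdash \mathrm{Cons}(F_1) is exactly what the second incompleteness theorem rules out for a consistent F1.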
This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out.
The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F really is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F' that is in some sense less doubtful than F itself, for example, weaker than F. For many naturally occurring theories F and F', such as F = Zermelo–Fraenkel set theory and F' = primitive recursive arithmetic, the consistency of F' is provable in F, and thus F' cannot prove the consistency of F by the above corollary of the second incompleteness theorem.
The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of a system from within a different system with different axioms. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called ε0 is well-founded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory.
Examples of undecidable statements
There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem).
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense.
Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics.
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC.
Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.
Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
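In symbols — writing K(s) for the Kolmogorov complexity of a string s and F for the system, a notational choice made here only for illustration — the theorem asserts that there is a constant c (depending on F) such that

\text{for no } s \text{ does } F \vdash K(s) > c,

even though all but finitely many strings do in fact have complexity above any fixed bound.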
Undecidable statements provable in larger systems
These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic.
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic.
Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system ATR0 codifying the principles acceptable based on a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.
Relationship with computability
The incompleteness theorem is closely related to several results about undecidable sets in recursion theory.
Stephen Cole Kleene presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: no computer program can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by several other authors.
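A minimal sketch of the shape of this argument, assuming for contradiction a complete, effective, sound theory of arithmetic. The helper names below (theorems, asserts_halts, asserts_does_not_halt) are hypothetical placeholders supplied as parameters, not functions from any real library:

def decide_halting(program, inp, theorems, asserts_halts, asserts_does_not_halt):
    # theorems() is assumed to enumerate all theorems of the hypothetical theory
    # (possible because the theory is effectively axiomatized); the two predicate
    # arguments are assumed to recognize the sentences "program halts on inp"
    # and "program does not halt on inp".
    for t in theorems():
        if asserts_halts(t, program, inp):
            return True
        if asserts_does_not_halt(t, program, inp):
            return False
    # Completeness would guarantee that one of the two sentences is eventually
    # enumerated, so this loop would always terminate -- making the halting
    # problem decidable, which contradicts Turing's theorem.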
Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, suppose the system T is ω-consistent. In that case, it will never prove that a particular polynomial equation has a solution when there is no solution in the integers. Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T.
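The "has a solution" direction is mechanically checkable: if an integer solution exists, a brute-force search will eventually find it, and the resulting arithmetic identity can then be verified by direct computation. A minimal, runnable Python sketch of such a search (the polynomial chosen is purely illustrative, not one arising from Matiyasevich's construction):

from itertools import count, product

def find_integer_root(p, num_vars):
    # Search integer tuples of growing maximum absolute value ("height") for a
    # zero of p; returns a witness if one exists, otherwise never returns.
    for bound in count(0):
        candidates = range(-bound, bound + 1)
        for xs in product(candidates, repeat=num_vars):
            if max((abs(x) for x in xs), default=0) == bound and p(*xs) == 0:
                return xs

print(find_integer_root(lambda x, y: x * x - y - 2, 2))  # prints (-1, -1)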
The existence of recursively inseparable sets can also be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable.
Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include false statements in the standard model; these theories are known as ω-inconsistent.
Proof sketch for the first theorem
The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria:
Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system).
In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument).
Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be ω-consistent. Hence the original assumption that the proposed system met the criteria is false.
Arithmetization of syntax
The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.
In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number:
The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111.
The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120.
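A minimal, runnable Python sketch of this ASCII-based encoding (the scheme here is only an illustration; actual Gödel numberings use prime-power or similar encodings, but any mechanical, reversible scheme serves the argument):

def encode(text):
    # Concatenate the three-digit ASCII code of each character into one integer.
    return int("".join(f"{ord(ch):03d}" for ch in text))

def decode(number):
    digits = str(number)
    digits = digits.zfill(-(-len(digits) // 3) * 3)  # restore any leading zero
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

print(encode("hello"))           # 104101108108111
print(encode("x=y => y=x"))      # 120061121032061062032121061120
print(decode(104101108108111))   # hello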
In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or does not have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements.
Construction of a statement about "provability"
Having shown that in principle the system can indirectly make statements about provability, by analyzing properties of those numbers representing statements it is now possible to show how to create a statement that actually does this.
A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6".
Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).
The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists:
Bew(y) = ∃ x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).
The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of the system; the string "Bew" itself is not claimed to be part of this language.
An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
Diagonalization
The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves
p ↔ F(G(p)).
By letting F be the negation of Bew(x), we obtain the theorem
p ↔ ¬Bew(G(p))
and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula.
The statement p is not literally equal to ¬Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English:
", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.
This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
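The same indirection can be imitated in a few lines of Python (a toy analogy only, not Gödel's construction): applying a phrase to a quoted copy of itself yields a sentence that ends up talking about itself.

phrase = ", when preceded by itself in quotes, is unprovable."
sentence = repr(phrase) + phrase   # the phrase, preceded by itself in quotes
print(sentence)
# Output: ', when preceded by itself in quotes, is unprovable.', when preceded by itself in quotes, is unprovable.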
Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section.
If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.
If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.
Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system.
In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system:
If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable.
If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent.
If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(y) is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.
Proof via Berry's paradox
George Boolos sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic.
Computer verified proofs
The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers.
Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm, by Russell O'Connor in 2003 using Coq, and by John Harrison in 2009 using HOL Light. A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle.
Proof sketch for the second theorem
The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system using a formal predicate for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system itself.
Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system F can be proved from within the system F itself. This is equivalent to proving the statement "System F is consistent".
Now consider the statement c, where c = "If the system F is consistent, then p is not provable". The proof of sentence c can be formalized within the system F, and therefore the statement c, "p is not provable" (or identically, "not Bew(G(p))"), can be proved in the system F.
Observe then that if we can prove that the system F is consistent (i.e. the statement in the hypothesis of c), then we have proved that p is not provable. But this is a contradiction since, by the first incompleteness theorem, this sentence (i.e. what is implied in the sentence c, namely "p is not provable") is what we constructed to be unprovable. Notice that this is why we require formalizing the first incompleteness theorem in F: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the theorem holds in F. So we cannot prove that the system F is consistent, and the statement of the second incompleteness theorem follows.
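Schematically — reusing Cons(F), Bew, and G(p) from earlier sections — the formalized argument runs:

F \vdash \mathrm{Cons}(F) \rightarrow \neg\mathrm{Bew}(G(p)), \qquad \text{so} \qquad F \vdash \mathrm{Cons}(F) \;\Longrightarrow\; F \vdash p,

since p is provably equivalent in F to ¬Bew(G(p)); but the first incompleteness theorem says a consistent F cannot prove p.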
Discussion and implications
The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles.
Consequences for logicism and Hilbert's second problem
The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic. Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first-order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first order logic have this problem.
Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem").
Minds and machines
Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.
Hilary Putnam suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine.
Avi Wigderson has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us."
Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure that gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from how the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modeling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following:
Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false.
In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts, and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power.
There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside.
Paraconsistent logic
Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. Other authors have given a more mixed appraisal of the applications of Gödel's theorems to dialetheism.
Appeals to the incompleteness theorems in other fields
Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations.
Some commentators, for example, quote Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Alan Sokal and Jean Bricmont criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.).
History
After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of natural numbers alone were known as "arithmetic".
Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound.
In the course of his research, Gödel discovered that although a sentence which asserts its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's undefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl, and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week.
Announcement
The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively. The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address with the words "Wir müssen wissen. Wir werden wissen!" ("We must know. We will know!").
This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face.
Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for a conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930. Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930.
Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans.
Generalization and acceptance
Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency if the Gödel sentence was changed appropriately. These developments left the incompleteness theorems in essentially their modern form.
Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent.
The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.
Criticisms
Finsler
Paul Finsler used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability and had only a superficial resemblance to Gödel's work. Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization. Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career.
Zermelo
In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument. In October, Gödel replied with a 10-page letter, where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system, which is not true in general by Tarski's undefinability theorem. However, Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor". Gödel decided that pursuing the matter further was pointless, and Carnap agreed. Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories.
Wittgenstein
Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1953 Remarks on the Foundations of Mathematics, particularly, one section sometimes called the "notorious paragraph" where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas.
Multiple commentators have read Wittgenstein as misunderstanding Gödel, although some have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews of Wittgenstein's remarks, all of which were extremely negative. The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements", and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems, writing:
It is clear from the passages you cite that Wittgenstein did not understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact it is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics).
Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Some argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. Others argue that this interpretation of Wittgenstein is not historically justified, and further work explores the relationship between Wittgenstein's writing and theories of paraconsistent logic.
See also
Chaitin's incompleteness theorem
Gödel, Escher, Bach
Gödel machine
Gödel's completeness theorem
Gödel's speed-up theorem
Löb's Theorem
Minds, Machines and Gödel
Non-standard model of arithmetic
Proof theory
Provability logic
Quining
Tarski's undefinability theorem
Theory of everything#Gödel's incompleteness theorem
Typographical Number Theory
References
Citations
Articles by Gödel
Kurt Gödel, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", Monatshefte für Mathematik und Physik, v. 38 n. 1, pp. 173–198.
—, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", in Solomon Feferman, ed., 1986. Kurt Gödel Collected works, Vol. I. Oxford University Press, pp. 144–195. The original German with a facing English translation, preceded by an introductory note by Stephen Cole Kleene.
—, 1951, "Some basic theorems on the foundations of mathematics and their implications", in Solomon Feferman, ed., 1995. Kurt Gödel Collected works, Vol. III, Oxford University Press, pp. 304–323.
Translations, during his lifetime, of Gödel's paper into English
None of the following agree in all translated words and in typography. The typography is a serious matter, because Gödel expressly wished to emphasize "those metamathematical notions that had been defined in their usual sense before . . .". Three translations exist. Of the first, John Dawson states that "The Meltzer translation was seriously deficient and received a devastating review in the Journal of Symbolic Logic"; Gödel also complained about Braithwaite's commentary. "Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology The Undecidable . . . he found the translation "not quite so good" as he had expected . . . [but because of time constraints he] agreed to its publication" (ibid). (In a footnote Dawson states that "he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints" (ibid)). Dawson states that "The translation that Gödel favored was that by Jean van Heijenoort" (ibid). For the serious student another version exists as a set of lecture notes recorded by Stephen Kleene and J. B. Rosser "during lectures given by Gödel at the Institute for Advanced Study during the spring of 1934" (cf. the commentary by Davis beginning on p. 41); this version is titled "On Undecidable Propositions of Formal Mathematical Systems". In their order of publication:
B. Meltzer (translation) and R. B. Braithwaite (Introduction), 1962. On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Dover Publications, New York (Dover edition 1992), (pbk.) This contains a useful translation of Gödel's German abbreviations on pp. 33–34. As noted above, the typography, translation and commentary are suspect. Unfortunately, this translation was reprinted with all its suspect content by
Stephen Hawking editor, 2005. God Created the Integers: The Mathematical Breakthroughs That Changed History, Running Press, Philadelphia, . Gödel's paper appears starting on p. 1097, with Hawking's commentary starting on p. 1089.
Martin Davis editor, 1965. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable problems and Computable Functions, Raven Press, New York, no ISBN. Gödel's paper begins on page 5, preceded by one page of commentary.
Jean van Heijenoort editor, 1967, 3rd edition 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge Mass., (pbk). van Heijenoort did the translation. He states that "Professor Gödel approved the translation, which in many places was accommodated to his wishes." (p. 595). Gödel's paper begins on p. 595; van Heijenoort's commentary begins on p. 592.
Martin Davis editor, 1965, ibid. "On Undecidable Propositions of Formal Mathematical Systems." A copy with Gödel's corrections of errata and Gödel's added notes begins on page 41, preceded by two pages of Davis's commentary. Until Davis included this in his volume this lecture existed only as mimeographed notes.
Articles by others
Bernd Buldt, 2014, "The Scope of Gödel's First Incompleteness Theorem ", Logica Universalis, v. 8, pp. 499–552.
David Hilbert, 1900, "Mathematical Problems." English translation of a lecture delivered before the International Congress of Mathematicians at Paris, containing Hilbert's statement of his Second Problem.
Martin Hirzel, 2000, "On formally undecidable propositions of Principia Mathematica and related systems I.." An English translation of Gödel's paper. Archived from the original. Sept. 16, 2004.
Reprinted in
John Barkley Rosser, 1936, "Extensions of some theorems of Gödel and Church", reprinted from the Journal of Symbolic Logic, v. 1 (1936) pp. 87–91, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 230–235.
—, 1939, "An Informal Exposition of proofs of Gödel's Theorem and Church's Theorem", Reprinted from the Journal of Symbolic Logic, v. 4 (1939) pp. 53–60, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 223–230
Books about the theorems
Francesco Berto. There's Something about Gödel: The Complete Guide to the Incompleteness Theorem. John Wiley and Sons, 2010.
Norbert Domeisen, 1990. Logik der Antinomien. Bern: Peter Lang. 142 pp.
Douglas Hofstadter, 1979. Gödel, Escher, Bach: An Eternal Golden Braid. Vintage Books. 1999 reprint.
—, 2007. I Am a Strange Loop. Basic Books.
Stanley Jaki, OSB, 2005. The drama of the quantities. Real View Books.
Per Lindström, 1997. Aspects of Incompleteness, Lecture Notes in Logic v. 10.
J.R. Lucas, FBA, 1970. The Freedom of the Will. Clarendon Press, Oxford, 1970.
Adrian William Moore, 2022. Gödel's Theorem: A Very Short Introduction. Oxford University Press, Oxford, 2022.
Ernest Nagel, James Roy Newman, Douglas Hofstadter, 2002 (1958). Gödel's Proof, revised ed.
Rudy Rucker, 1995 (1982). Infinity and the Mind: The Science and Philosophy of the Infinite. Princeton Univ. Press.
Raymond Smullyan, 1987. Forever Undecided: puzzles based on undecidability in formal systems.
—, 1992. Gödel's Incompleteness Theorems. Oxford Univ. Press.
—, 1994. Diagonalization and Self-Reference. Oxford Univ. Press.
—, 2013. The Gödelian Puzzle Book: Puzzles, Paradoxes and Proofs. Courier Corporation.
Miscellaneous references
Rebecca Goldstein, 2005, Incompleteness: the Proof and Paradox of Kurt Gödel, W. W. Norton & Company.
David Hilbert and Paul Bernays, Grundlagen der Mathematik, Springer-Verlag.
Reprinted by Dover, 2002.
Reprinted in Anderson, A. R., ed., 1964. Minds and Machines. Prentice-Hall: 77.
Wolfgang Rautenberg, 2010, A Concise Introduction to Mathematical Logic, 3rd. ed., Springer.
George Tourlakis, Lectures in Logic and Set Theory, Volume 1, Mathematical Logic, Cambridge University Press, 2003.
Hao Wang, 1996, A Logical Journey: From Gödel to Philosophy, The MIT Press, Cambridge MA.
External links
Paraconsistent Logic § Arithmetic and Gödel's Theorem entry in the Stanford Encyclopedia of Philosophy.
MacTutor biographies:
Kurt Gödel.
Gerhard Gentzen.
What is Mathematics:Gödel's Theorem and Around by Karlis Podnieks. An online free book.
World's shortest explanation of Gödel's theorem using a printing machine as an example.
October 2011 RadioLab episode about/including Gödel's Incompleteness theorem
How Gödel's Proof Works by Natalie Wolchover, Quanta Magazine, July 14, 2020.
Gödel's incompleteness theorems formalised in Isabelle/HOL
Theorems in the foundations of mathematics
Mathematical logic
Model theory
Proof theory
Epistemology
Metatheorems
Incompleteness theorems | Gödel's incompleteness theorems | [
"Mathematics"
] | 14,070 | [
"Foundations of mathematics",
"Proof theory",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
58,880 | https://en.wikipedia.org/wiki/Sugar%20substitute | A sugar substitute is a food additive that provides a sweetness like that of sugar while containing significantly less food energy than sugar-based sweeteners, making it a zero-calorie or low-calorie sweetener. Artificial sweeteners may be derived through manufacturing of plant extracts or processed by chemical synthesis. Sugar substitute products are commercially available in various forms, such as small pills, powders and packets.
Common sugar substitutes include aspartame, monk fruit extract, saccharin, sucralose, stevia, acesulfame potassium (ace-K) and cyclamate. These sweeteners are a fundamental ingredient in diet drinks to sweeten them without adding calories. Additionally, sugar alcohols such as erythritol, xylitol and sorbitol are derived from sugars.
No links have been found between approved artificial sweeteners and cancer in humans. Reviews and dietetic professionals have concluded that moderate use of non-nutritive sweeteners as a safe replacement for sugars can help limit energy intake and assist with managing blood glucose and weight.
Description
A sugar substitute is a food additive that provides a sweetness like that of sugar while containing significantly less food energy than sugar-based sweeteners, making it a zero-calorie or low-calorie sweetener. Sugar substitute products are commercially available in various forms, such as small pills, powders and packets.
Types
Artificial sweeteners may be derived through manufacturing of plant extracts or processed by chemical synthesis.
High-intensity sweeteners—one type of sugar substitute—are compounds with many times the sweetness of sucrose (common table sugar). As a result, much less sweetener is required and energy contribution is often negligible. The sensation of sweetness caused by these compounds is sometimes notably different from sucrose, so they are often used in complex mixtures that achieve the most intense sweet sensation.
In North America, common sugar substitutes include aspartame, monk fruit extract, saccharin, sucralose and stevia. Cyclamate is prohibited from being used as a sweetener within the United States, but is allowed in other parts of the world.
Sorbitol, xylitol and lactitol are examples of sugar alcohols (also known as polyols). These are, in general, less sweet than sucrose but have similar bulk properties and can be used in a wide range of food products. Sometimes the sweetness profile is fine-tuned by mixing with high-intensity sweeteners.
Allulose
Allulose is a sweetener in the sugar family, with a chemical structure similar to fructose. It is naturally found in figs, maple syrup and some fruit. While it comes from the same family as other sugars, it does not substantially metabolize as sugar in the body. The FDA recognizes that allulose does not act like sugar, and as of 2019, no longer requires it to be listed with sugars on U.S. nutrition labels. Allulose is about 70% as sweet as sugar, which is why it is sometimes combined with high-intensity sweeteners to make sugar substitutes.
Acesulfame potassium
Acesulfame potassium (Ace-K) is 200 times sweeter than sucrose (common sugar), as sweet as aspartame, about two-thirds as sweet as saccharin, and one-third as sweet as sucralose. Like saccharin, it has a slightly bitter aftertaste, especially at high concentrations. Kraft Foods has patented the use of sodium ferulate to mask acesulfame's aftertaste. Acesulfame potassium is often blended with other sweeteners (usually aspartame or sucralose), which give a more sucrose-like taste, whereby each sweetener masks the other's aftertaste and also exhibits a synergistic effect in which the blend is sweeter than its components.
Unlike aspartame, acesulfame potassium is stable under heat, even under moderately acidic or basic conditions, allowing it to be used as a food additive in baking or in products that require a long shelf life. In carbonated drinks, it is almost always used in conjunction with another sweetener, such as aspartame or sucralose. It is also used as a sweetener in protein shakes and pharmaceutical products, especially chewable and liquid medications, where it can make the active ingredients more palatable.
Aspartame
Aspartame was discovered in 1965 by James M. Schlatter at the G.D. Searle company. He was working on an anti-ulcer drug and accidentally spilled some aspartame on his hand. When he licked his finger, he noticed that it had a sweet taste. Torunn Atteraas Garin oversaw the development of aspartame as an artificial sweetener. It is an odorless, white crystalline powder that is derived from the two amino acids aspartic acid and phenylalanine. It is about 180–200 times sweeter than sugar, and can be used as a tabletop sweetener or in frozen desserts, gelatins, beverages and chewing gum. When cooked or stored at high temperatures, aspartame breaks down into its constituent amino acids. This makes aspartame undesirable as a baking sweetener. It is more stable in somewhat acidic conditions, such as in soft drinks. Though it does not have a bitter aftertaste like saccharin, it may not taste exactly like sugar. When eaten, aspartame is metabolized into its original amino acids. Because it is so intensely sweet, relatively little of it is needed to sweeten a food product, and is thus useful for reducing the number of calories in a product.
The safety of aspartame has been studied extensively since its discovery with research that includes animal studies, clinical and epidemiological research, and postmarketing surveillance, with aspartame being a rigorously tested food ingredient. Although aspartame has been subject to claims against its safety, multiple authoritative reviews have found it to be safe for consumption at typical levels used in food manufacturing. Aspartame has been deemed safe for human consumption by over 100 regulatory agencies in their respective countries, including the UK Food Standards Agency, the European Food Safety Authority (EFSA), and Health Canada.
Cyclamate
In the United States, the Food and Drug Administration banned the sale of cyclamate in 1969 after lab tests in rats involving a 10:1 mixture of cyclamate and saccharin (at levels comparable to humans ingesting 550 cans of diet soda per day) caused bladder cancer. This information, however, is regarded as "weak" evidence of carcinogenic activity, and cyclamate remains in common use in many parts of the world, including Canada, the European Union and Russia.
Mogrosides (monk fruit)
Mogrosides, extracted from monk fruit (which is commonly also called luo han guo), are recognized as safe for human consumption and are used in commercial products worldwide. As of 2017, it is not a permitted sweetener in the European Union, although it is allowed as a flavor at concentrations where it does not function as a sweetener. In 2017, a Chinese company requested a scientific review of its mogroside product by the European Food Safety Authority. It is the basis of McNeil Nutritionals's tabletop sweetener Nectresse in the United States and Norbu Sweetener in Australia.
Saccharin
Apart from sugar of lead (used as a sweetener in ancient through medieval times before the toxicity of lead was known), saccharin was the first artificial sweetener and was originally synthesized in 1879 by Remsen and Fahlberg. Its sweet taste was discovered by accident. It had been created in an experiment with toluene derivatives. A process for the creation of saccharin from phthalic anhydride was developed in 1950, and, currently, saccharin is created by this process as well as the original process by which it was discovered. It is 300 to 500 times sweeter than sucrose and is often used to improve the taste of toothpastes, dietary foods and dietary beverages. The bitter aftertaste of saccharin is often minimized by blending it with other sweeteners.
Fear about saccharin increased when a 1960 study showed that high levels of saccharin may cause bladder cancer in laboratory rats. In 1977, Canada banned saccharin as a result of the animal research. In the United States, the FDA considered banning saccharin in 1977, but Congress stepped in and placed a moratorium on such a ban. The moratorium required a warning label and also mandated further study of saccharin safety.
Subsequently, it was discovered that saccharin causes cancer in male rats by a mechanism not found in humans. At high doses, saccharin causes a precipitate to form in rat urine. This precipitate damages the cells lining the bladder (urinary bladder urothelial cytotoxicity) and a tumor forms when the cells regenerate (regenerative hyperplasia). According to the International Agency for Research on Cancer, part of the World Health Organization, "This mechanism is not relevant to humans because of critical interspecies differences in urine composition".
In 2001, the United States repealed the warning label requirement, while the threat of an FDA ban had already been lifted in 1991. Most other countries also permit saccharin, but restrict the levels of use, while other countries have outright banned it.
The EPA has removed saccharin and its salts from their list of hazardous constituents and commercial chemical products. In a 14 December 2010 release, the EPA stated that saccharin is no longer considered a potential hazard to human health.
Steviol glycosides (stevia)
Stevia is a natural non-caloric sweetener derived from the Stevia rebaudiana plant, and is manufactured as a sweetener. It is indigenous to South America, and has historically been used in Japanese food products, although it is now common internationally. In 1987, the FDA issued a ban on stevia because it had not been approved as a food additive, although it continued to be available as a dietary supplement. After being provided with sufficient scientific data demonstrating safety of using stevia as a manufactured sweetener, from companies such as Cargill and Coca-Cola, the FDA gave a "no objection" status as generally recognized as safe (GRAS) in December 2008 to Cargill for its stevia product, Truvia, for use of the refined stevia extracts as a blend of rebaudioside A and erythritol. In Australia, the brand Vitarium uses Natvia, a stevia sweetener, in a range of sugar-free children's milk mixes.
In August 2019, the FDA placed an import alert on stevia leaves and crude extracts—which do not have GRAS status—and on foods or dietary supplements containing them, citing concerns about safety and potential for toxicity.
Sucralose
The world's most commonly used artificial sweetener, sucralose is a chlorinated sugar that is about 600 times sweeter than sugar. It is produced from sucrose when three chlorine atoms replace three hydroxyl groups. It is used in beverages, frozen desserts, chewing gum, baked goods and other foods. Unlike other artificial sweeteners, it is stable when heated and can therefore be used in baked and fried goods. Sucralose was discovered in 1976 and approved by the FDA for use in 1998.
Most of the controversy surrounding Splenda, a sucralose sweetener, is focused not on safety but on its marketing. It has been marketed with the slogan, "Splenda is made from sugar, so it tastes like sugar." Sucralose is prepared from either of two sugars, sucrose or raffinose. With either base sugar, processing replaces three oxygen-hydrogen groups in the sugar molecule with three chlorine atoms.
The "Truth About Splenda" website was created in 2005 by the Sugar Association, an organization representing sugar beet and sugar cane farmers in the United States, to provide its view of sucralose. In December 2004, five separate false-advertising claims were filed by the Sugar Association against Splenda manufacturers Merisant and McNeil Nutritionals for claims made about Splenda related to the slogan, "Made from sugar, so it tastes like sugar." French courts ordered the slogan to no longer be used in France, while in the U.S., the case came to an undisclosed settlement during the trial.
There are few safety concerns pertaining to sucralose and the way sucralose is metabolized suggests a reduced risk of toxicity. For example, sucralose is extremely insoluble in fat and, thus, does not accumulate in fatty tissues; sucralose also does not break down and will dechlorinate only under conditions that are not found during regular digestion (i.e., high heat applied to the powder form of the molecule). Only about 15% of sucralose is absorbed by the body and most of it passes out of the body unchanged.
In 2017, sucralose was the most common sugar substitute used in the manufacture of foods and beverages; it had 30% of the global market, which was projected to be valued at $2.8 billion by 2021.
Sugar alcohol
Sugar alcohols, or polyols, are sweetening and bulking ingredients used in the manufacturing of foods and beverages, particularly sugar-free candies, cookies and chewing gums. As a sugar substitute, they are typically less sweet and supply fewer calories (about one-half to one-third fewer calories) than sugar. They are converted to glucose slowly and do not cause sharp increases in blood glucose.
Sorbitol, xylitol, mannitol, erythritol and lactitol are examples of sugar alcohols. These are, in general, less sweet than sucrose, but have similar bulk properties and can be used in a wide range of food products. The sweetness profile may be altered during manufacturing by mixing with high-intensity sweeteners.
Sugar alcohols are carbohydrates with a biochemical structure partially matching the structures of sugar and alcohol, although not containing ethanol. They are not entirely metabolized by the human body. The unabsorbed sugar alcohols may cause bloating and diarrhea due to their osmotic effect, if consumed in sufficient amounts. They are found commonly in small quantities in some fruits and vegetables, and are commercially manufactured from different carbohydrates and starch.
Production
The majority of sugar substitutes approved for food use are artificially synthesized compounds. However, some bulk plant-derived sugar substitutes are known, including sorbitol, xylitol and lactitol. As it is not commercially profitable to extract these products from fruits and vegetables, they are produced by catalytic hydrogenation of the appropriate reducing sugar. For example, xylose is converted to xylitol, lactose to lactitol, and glucose to sorbitol.
Use
Reasons for use
Sugar substitutes are used instead of sugar for a number of reasons, including:
Dental care
Carbohydrates and sugars usually adhere to the tooth enamel, where bacteria feed upon them and quickly multiply. The bacteria convert the sugar to acids that decay the teeth. Sugar substitutes, unlike sugar, do not erode teeth as they are not fermented by the microflora of the dental plaque. A sweetener that may benefit dental health is xylitol, which tends to prevent bacteria from adhering to the tooth surface, thus preventing plaque formation and eventually tooth decay. A Cochrane review, however, found only low-quality evidence that xylitol in a variety of dental products actually has any benefit in preventing tooth decay in adults and children.
Dietary concerns
Sugar substitutes are a fundamental ingredient in diet drinks to sweeten them without adding calories. Additionally, sugar alcohols such as erythritol, xylitol and sorbitol are derived from sugars. In the United States, six high-intensity sugar substitutes have been approved for use: aspartame, sucralose, neotame, acesulfame potassium (Ace-K), saccharin and advantame. Food additives must be approved by the FDA, and sweeteners must be proven as safe via submission by a manufacturer of a GRAS document. The conclusions about GRAS are based on a detailed review of a large body of information, including rigorous toxicological and clinical studies. GRAS notices exist for two plant-based, high-intensity sweeteners: steviol glycosides obtained from stevia leaves (Stevia rebaudiana) and extracts from Siraitia grosvenorii, also called luo han guo or monk fruit.
Glucose metabolism
Diabetes mellitus – People with diabetes limit refined sugar intake to regulate their blood sugar levels. Many artificial sweeteners allow sweet-tasting food without increasing blood glucose. Others do release energy but are metabolized more slowly, preventing spikes in blood glucose. A concern, however, is that overconsumption of foods and beverages made more appealing with sugar substitutes may increase the risk of developing diabetes. A 2014 systematic review found that consuming 330 ml per day of artificially sweetened beverages (slightly less than a standard U.S. can) was associated with an increased risk of type 2 diabetes. A 2015 meta-analysis of numerous clinical studies showed that habitual consumption of sugar-sweetened beverages, artificially sweetened beverages, and fruit juice increased the risk of developing diabetes, although with inconsistent results and generally low quality of evidence. A 2016 review described the relationship between non-nutritive sweeteners and diabetes as inconclusive. A 2020 Cochrane systematic review compared several non-nutritive sweeteners to sugar, placebo and a nutritive low-calorie sweetener (tagatose), but the results were unclear for effects on HbA1c, body weight and adverse events. The studies included were mainly of very low certainty and did not report on health-related quality of life, diabetes complications, all-cause mortality or socioeconomic effects.
Reactive hypoglycemia – Individuals with reactive hypoglycemia will produce an excess of insulin after quickly absorbing glucose into the bloodstream. This causes their blood glucose levels to fall below the amount needed for proper body and brain function. As a result, like diabetics, they must avoid intake of high-glycemic foods like white bread, and often use artificial sweeteners for sweetness without blood glucose.
Cost and shelf life
Many sugar substitutes are cheaper than sugar in the final food formulation. Sugar substitutes are often lower in total cost because of their long shelf life and high sweetening intensity. This allows sugar substitutes to be used in products that will not perish after a short period of time.
Acceptable daily intake levels
In the United States, the FDA provides guidance for manufacturers and consumers about the daily limits for consuming high-intensity sweeteners, a measure called acceptable daily intake (ADI). During their premarket review for all of the high-intensity sweeteners approved as food additives, the FDA established an ADI defined as an amount in milligrams per kilogram of body weight per day (mg/kg bw/d), indicating that a high-intensity sweetener does not cause safety concerns if estimated daily intakes are lower than the ADI. The FDA states: "An ADI is the amount of a substance that is considered safe to consume each day over the course of a person's lifetime." For stevia (specifically, steviol glycosides), an ADI was not derived by the FDA, but by the Joint Food and Agricultural Organization/World Health Organization Expert Committee on Food Additives, whereas an ADI has not been determined for monk fruit.
For the sweeteners approved as food additives, the ADIs in milligrams per kilogram of body weight per day are:
Acesulfame potassium, ADI 15
Advantame, ADI 32.8
Aspartame, ADI 50
Neotame, ADI 0.3
Saccharin, ADI 15
Sucralose, ADI 5
Stevia (pure extracted steviol glycosides), ADI 4
Monk fruit extract, no ADI determined
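As a rough illustration of how an ADI is applied (a worked calculation under an assumed body weight, not regulatory guidance), the per-person daily limit is simply the ADI multiplied by body weight; using the aspartame value listed above and an assumed 60 kg adult:

% Illustrative only: the 60 kg body weight is an assumption; 50 mg/kg bw/d is the aspartame ADI listed above
\[
\text{daily limit} \;=\; \text{ADI} \times \text{body weight}
  \;=\; 50\ \frac{\text{mg}}{\text{kg}\cdot\text{day}} \times 60\ \text{kg}
  \;=\; 3000\ \frac{\text{mg}}{\text{day}} \;=\; 3\ \text{g per day}.
\]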
Mouthfeel
If the sucrose, or other sugar, that is replaced has contributed to the texture of the product, then a bulking agent is often also needed. This may be seen in soft drinks or sweet teas that are labeled as "diet" or "light" that contain artificial sweeteners and often have notably different mouthfeel, or in table sugar replacements that mix maltodextrins with an intense sweetener to achieve satisfactory texture sensation.
Sweetness intensity
The FDA has published estimates of sweetness intensity, called a multiplier of sweetness intensity (MSI) as compared to table sugar.
Plant-derived
The sweetness levels and energy densities are in comparison to those of sucrose.
Artificial
Sugar alcohols
Research
Body weight
Reviews and dietetic professionals have concluded that moderate use of non-nutritive sweeteners as a safe replacement for sugars may help limit energy intake and assist with managing blood glucose and weight. Other reviews found the association between body weight and non-nutritive sweetener usage inconclusive. Observational studies tend to show an association with increased body weight, while randomized controlled trials instead show a small causal reduction in weight. Still other reviews concluded that use of non-nutritive sweeteners instead of sugar reduces body weight.
Obesity
There is little evidence that artificial sweeteners directly affect the onset and mechanisms of obesity, although consuming sweetened products is associated with weight gain in children. Some preliminary studies indicate that consumption of products manufactured with artificial sweeteners is associated with obesity and metabolic syndrome, decreased satiety, disturbed glucose metabolism, and weight gain, mainly due to increased overall calorie intake, although the numerous factors influencing obesity remain poorly studied, as of 2021.
Cancer
Multiple reviews have found no link between artificial sweeteners and the risk of cancer. FDA scientists have reviewed scientific data regarding the safety of aspartame and different sweeteners in food, concluding that they are safe for the general population under common intake conditions.
Mortality
High consumption of artificially sweetened beverages was associated with a 12% higher risk of all-cause mortality and a 23% higher risk of cardiovascular disease (CVD) mortality in a 2021 meta-analysis. A 2020 meta-analysis found a similar result, with the highest consuming group having a 13% higher risk of all-cause mortality and a 25% higher risk of CVD mortality. However, both studies also found similar or greater increases in all-cause mortality when consuming the same amount of sugar-sweetened beverages.
Non-nutritive sweeteners vs sugar
The World Health Organization does not recommend using non-nutritive sweeteners to control body weight, based on a 2022 review that could only find small reductions in body fat and no effect on cardiometabolic risk. It recommends fruit or non-sweetened foods instead.
See also
Sugar alcohol
Sweetener
VirtualTaste – database (2010)
Notes
References
External links
Calorie Control Council—trade association for manufacturers of artificial sweeteners and products
Substitutes | Sugar substitute | [
"Chemistry"
] | 4,903 | [
"Carbohydrates",
"Sugar"
] |
58,890 | https://en.wikipedia.org/wiki/Resin | A resin is a solid or highly viscous liquid that can be converted into a polymer. Resins may be biological or synthetic in origin, but are typically harvested from plants. Resins are mixtures of organic compounds, and predominantly terpenes. Well known resins include amber, hashish, frankincense, myrrh and the animal-derived resin, shellac. Resins are commonly used in varnishes, adhesives, food additives, incenses and perfumes.
Resins protect plants from insects and pathogens, and are secreted in response to injury. Resins confound a wide range of herbivores, insects, and pathogens, while the volatile phenolic compounds may attract benefactors such as predators of insects that attack the plant.
Composition
Most plant resins are composed of terpenes. Specific components are alpha-pinene, beta-pinene, delta-3 carene, and sabinene, the monocyclic terpenes limonene and terpinolene, and smaller amounts of the tricyclic sesquiterpenes, longifolene, caryophyllene, and delta-cadinene. Some resins also contain a high proportion of resin acids. Rosins on the other hand are less volatile and consist of diterpenes among other compounds.
Examples
Examples of plant resins include amber, Balm of Gilead, balsam, Canada balsam, copal from trees of Protium copal and Hymenaea courbaril, dammar gum from trees of the family Dipterocarpaceae, dragon's blood from the dragon trees (Dracaena species), elemi, frankincense from Boswellia sacra, galbanum from Ferula gummosa, gum guaiacum from the lignum vitae trees of the genus Guaiacum, kauri gum from trees of Agathis australis, hashish (Cannabis resin) from Cannabis indica, labdanum from mediterranean species of Cistus, mastic (plant resin) from the mastic tree Pistacia lentiscus, myrrh from shrubs of Commiphora, sandarac resin from Tetraclinis articulata, the national tree of Malta, styrax (a Benzoin resin from various Styrax species) and spinifex resin from Australian grasses.
Amber is fossil resin (also called resinite) from coniferous and other tree species. Copal, kauri gum, dammar and other resins may also be found as subfossil deposits. Subfossil copal can be distinguished from genuine fossil amber because it becomes tacky when a drop of a solvent such as acetone or chloroform is placed on it.
African copal and the kauri gum of New Zealand are also procured in a semi-fossil condition.
Rosin
Rosin is a solidified resin from which the volatile terpenes have been removed by distillation. Typical rosin is a transparent or translucent mass, with a vitreous fracture and a faintly yellow or brown colour, non-odorous or having only a slight turpentine odour and taste. Rosin is insoluble in water, mostly soluble in alcohol, essential oils, ether, and hot fatty oils. Rosin softens and melts when heated and burns with a bright but smoky flame.
Rosin consists of a complex mixture of different substances, including organic acids named the resin acids. The resin acids are related to the terpenes, being oxidation products of terpenes. Resin acids dissolve in alkalis to form resin soaps, from which the resin acids are regenerated upon treatment with acids. Examples of resin acids are abietic acid (sylvic acid), C20H30O2, plicatic acid contained in cedar, and pimaric acid, C20H30O2, a constituent of galipot resin. Abietic acid can also be extracted from rosin by means of hot alcohol.
Rosin is obtained from pines and some other plants, mostly conifers. Plant resins are generally produced as stem secretions, but in some Central and South American species of Dalechampia and Clusia they are produced as pollination rewards, and used by some stingless bee species in nest construction. Propolis, consisting largely of resins collected from plants such as poplars and conifers, is used by honey bees to seal small gaps in their hives, while larger gaps are filled with beeswax.
Petroleum- and insect-derived resins
Shellac is an example of an insect-derived resin.
Asphaltite and Utah resin are petroleum bitumens.
History and etymology
Human use of plant resins has a very long history that was documented in ancient Greece by Theophrastus, in ancient Rome by Pliny the Elder, and especially in the resins known as frankincense and myrrh, prized in ancient Egypt. These were highly prized substances, and required as incense in some religious rites.
The word resin comes from French resine, from Latin resina "resin", which either derives from or is a cognate of the Greek rhētínē "resin of the pine", of unknown earlier origin, though probably non-Indo-European.
The word "resin" has been applied in the modern world to nearly any component of a liquid that will set into a hard lacquer or enamel-like finish. An example is nail polish. Certain "casting resins" and synthetic resins (such as epoxy resin) have also been given the name "resin".
Some naturally-derived resins, when soft, are known as 'oleoresins', and when containing benzoic acid or cinnamic acid they are called balsams. Oleoresins are naturally-occurring mixtures of an oil and a resin; they can be extracted from various plants. Other resinous products in their natural condition are a mix with gum or mucilaginous substances and known as gum resins. Several natural resins are used as ingredients in perfumes, e.g., balsams of Peru and tolu, elemi, styrax, and certain turpentines.
Non-resinous exudates
Other liquid compounds found inside plants or exuded by plants, such as sap, latex, or mucilage, are sometimes confused with resin but are not the same. Saps, in particular, serve a nutritive function that resins do not.
Uses
Plant resins
Plant resins are valued for the production of varnishes, adhesives, and food glazing agents. They are also prized as raw materials for the synthesis of other organic compounds and provide constituents of incense and perfume. The oldest known use of plant resin comes from the late Middle Stone Age in Southern Africa where it was used as an adhesive for hafting stone tools.
The hard transparent resins, such as the copals, dammars, mastic, and sandarac, are principally used for varnishes and adhesives, while the softer odoriferous oleo-resins (frankincense, elemi, turpentine, copaiba), and gum resins containing essential oils (ammoniacum, asafoetida, gamboge, myrrh, and scammony) are more used for therapeutic purposes, food and incense. The resin of the Aleppo Pine is used to flavour retsina, a Greek resinated wine.
Animal resins
While animal resins are not as common as either plant or synthetic resins, some animal resins, like lac (obtained from Kerria lacca), are used for applications such as sealing wax in India and lacquerware in Sri Lanka.
Synthetic resins
Many materials are produced via the conversion of synthetic resins to solids. Important examples are bisphenol A diglycidyl ether, which is a resin converted to epoxy glue upon the addition of a hardener. Silicones are often prepared from silicone resins via room temperature vulcanization. Alkyd resins are used in paints and varnishes and harden or cure by exposure to oxygen in the air.
See also
– used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items.
– plant resins are naturally biodegradable in many circumstances.
References
External links
Non-timber forest products
Papermaking
Tree tapping | Resin | [
"Physics"
] | 1,765 | [
"Amorphous solids",
"Unsolved problems in physics",
"Resins"
] |
58,894 | https://en.wikipedia.org/wiki/Phosphorylation | In biochemistry, phosphorylation is the attachment of a phosphate group to a molecule or an ion. This process and its inverse, dephosphorylation, are common in biology. Protein phosphorylation often activates (or deactivates) many enzymes.
During respiration
Phosphorylation is essential to the processes of both anaerobic and aerobic respiration, which involve the production of adenosine triphosphate (ATP), the "high-energy" exchange medium in the cell. During aerobic respiration, ATP is synthesized in the mitochondrion by addition of a third phosphate group to adenosine diphosphate (ADP) in a process referred to as oxidative phosphorylation. ATP is also synthesized by substrate-level phosphorylation during glycolysis. ATP is synthesized at the expense of solar energy by photophosphorylation in the chloroplasts of plant cells.
Phosphorylation of glucose
Glucose metabolism
Phosphorylation of sugars is often the first stage in their catabolism. Phosphorylation allows cells to accumulate sugars because the phosphate group prevents the molecules from diffusing back across their transporter. Phosphorylation of glucose is a key reaction in sugar metabolism. The chemical equation for the conversion of D-glucose to D-glucose-6-phosphate in the first step of glycolysis is given by:
D-glucose + ATP → D-glucose 6-phosphate + ADP
ΔG° = −16.7 kJ/mol (° indicates measurement at standard condition)
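This figure can be understood as the sum of two coupled steps. The values below are the standard biochemical textbook figures (ΔG°′ at pH 7), quoted here as an illustrative sketch rather than taken from this article; adding the two half-reactions cancels the water and inorganic phosphate and reproduces the −16.7 kJ/mol quoted above.

% Coupling of glucose phosphorylation to ATP hydrolysis; +13.8 and −30.5 kJ/mol are standard textbook values
\begin{align*}
\text{glucose} + \mathrm{P_i} &\rightarrow \text{glucose 6-phosphate} + \mathrm{H_2O} & \Delta G^{\circ\prime} &\approx +13.8\ \text{kJ/mol}\\
\mathrm{ATP} + \mathrm{H_2O} &\rightarrow \mathrm{ADP} + \mathrm{P_i} & \Delta G^{\circ\prime} &\approx -30.5\ \text{kJ/mol}\\
\text{glucose} + \mathrm{ATP} &\rightarrow \text{glucose 6-phosphate} + \mathrm{ADP} & \Delta G^{\circ\prime} &\approx -16.7\ \text{kJ/mol}
\end{align*}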
Glycolysis
Glycolysis is an essential process in which glucose is degraded into two molecules of pyruvate through a series of steps, with the help of different enzymes. It occurs in ten steps, and phosphorylation is a required step in reaching the end products. Phosphorylation initiates the reaction in step 1 of the preparatory phase (the first half of glycolysis) and initiates step 6 of the payoff phase (the second half of glycolysis).
Glucose, by nature, is a small molecule with the ability to diffuse in and out of the cell. By phosphorylating glucose (adding a phosphoryl group to create a negatively charged phosphate group), glucose is converted to glucose-6-phosphate, which is trapped within the cell because the charged phosphate group prevents it from diffusing back across the cell membrane. This reaction occurs due to the enzyme hexokinase, an enzyme that helps phosphorylate many six-membered ring structures. Phosphorylation also takes place in step 3, where fructose-6-phosphate is converted to fructose 1,6-bisphosphate; this reaction is catalyzed by phosphofructokinase.
While phosphorylation is performed by ATPs during preparatory steps, phosphorylation during payoff phase is maintained by inorganic phosphate. Each molecule of glyceraldehyde 3-phosphate is phosphorylated to form 1,3-bisphosphoglycerate. This reaction is catalyzed by glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The cascade effect of phosphorylation eventually causes instability and allows enzymes to open the carbon bonds in glucose.
Phosphorylation is a vital component of glycolysis, as it aids in transport, regulation, and efficiency.
Glycogen synthesis
Glycogen is a long-term store of glucose produced by the cells of the liver. In the liver, the synthesis of glycogen is directly correlated with blood glucose concentration. High blood glucose concentration causes an increase in intracellular levels of glucose 6-phosphate in the liver, skeletal muscle, and fat (adipose) tissue. Glucose 6-phosphate has a role in regulating glycogen synthase.
High blood glucose releases insulin, stimulating the translocation of specific glucose transporters to the cell membrane; glucose is phosphorylated to glucose 6-phosphate during transport across the membrane by ATP-D-glucose 6-phosphotransferase and non-specific hexokinase (ATP-D-hexose 6-phosphotransferase). Liver cells are freely permeable to glucose, and the initial rate of phosphorylation of glucose is the rate-limiting step in glucose metabolism by the liver.
The liver's crucial role in controlling blood sugar concentrations by breaking down glucose into carbon dioxide and glycogen is reflected in the negative Gibbs free energy (ΔG) value of the glucose phosphorylation step, which indicates that this step is a point of regulation. The hexokinase enzyme has a low Michaelis constant (Km), indicating a high affinity for glucose, so this initial phosphorylation can proceed even when glucose concentrations in the blood are very low.
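For context, the Michaelis constant is the substrate concentration at which an enzyme runs at half its maximal rate; the rate law below is the standard Michaelis–Menten equation, given here as general background rather than as a statement about hexokinase taken from this article:

% Michaelis–Menten rate law; v = Vmax/2 when [S] = Km, so a low Km means high apparent affinity
\[
v \;=\; \frac{V_{\max}\,[\mathrm{S}]}{K_m + [\mathrm{S}]},
\qquad
v = \tfrac{1}{2}V_{\max}\ \text{when}\ [\mathrm{S}] = K_m .
\]

A low Km therefore means the enzyme approaches its maximal rate even at low substrate concentrations, consistent with the statement above.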
The phosphorylation of glucose can be enhanced by the binding of fructose 6-phosphate (F6P) and lessened by the binding of fructose 1-phosphate (F1P). Fructose consumed in the diet is converted to F1P in the liver. This negates the action of F6P on glucokinase, which ultimately favors the forward reaction. The capacity of liver cells to phosphorylate fructose exceeds their capacity to metabolize fructose-1-phosphate. Consuming excess fructose ultimately results in an imbalance in liver metabolism, which indirectly exhausts the liver cell's supply of ATP.
Allosteric activation by glucose-6-phosphate, which acts as an effector, stimulates glycogen synthase, and glucose-6-phosphate may inhibit the phosphorylation of glycogen synthase by cyclic AMP-stimulated protein kinase.
Other processes
Phosphorylation of glucose is imperative in processes within the body. For example, phosphorylating glucose is necessary for insulin-dependent mechanistic target of rapamycin pathway activity within the heart. This further suggests a link between intermediary metabolism and cardiac growth.
Protein phosphorylation
Protein phosphorylation is the most abundant post-translational modification in eukaryotes. Phosphorylation can occur on serine, threonine and tyrosine side chains (in other words, on their residues) through phosphoester bond formation, on histidine, lysine and arginine through phosphoramidate bonds, and on aspartic acid and glutamic acid through mixed anhydride linkages. Recent evidence confirms widespread histidine phosphorylation at both the 1 and 3 N-atoms of the imidazole ring. Recent work demonstrates widespread human protein phosphorylation on multiple non-canonical amino acids, including motifs containing phosphorylated histidine, aspartate, glutamate, cysteine, arginine and lysine in HeLa cell extracts. However, due to the chemical lability of these phosphorylated residues, and in marked contrast to Ser, Thr and Tyr phosphorylation, the analysis of phosphorylated histidine (and other non-canonical amino acids) using standard biochemical and mass spectrometric approaches is much more challenging and special procedures and separation techniques are required for their preservation alongside classical Ser, Thr and Tyr phosphorylation.
The prominent role of protein phosphorylation in biochemistry is illustrated by the huge body of studies published on the subject (as of March 2015, the MEDLINE database returns over 240,000 articles, mostly on protein phosphorylation).
See also
Moiety conservation
Phosida
Phosphoamino acid analysis
Phospho3D
References
External links
Functional analyses for site-specific phosphorylation of a target protein in cells (A Protocol)
Cell biology
Cell signaling
Phosphorus
Post-translational modification | Phosphorylation | [
"Chemistry",
"Biology"
] | 1,700 | [
"Post-translational modification",
"Gene expression",
"Cell biology",
"Biochemical reactions"
] |
58,899 | https://en.wikipedia.org/wiki/Direct%20sum%20of%20modules | In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion.
The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces.
See the article decomposition of a module for a way to write a module as a direct sum of submodules.
Construction for vector spaces and abelian groups
We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth.
Construction for two vector spaces
Suppose V and W are vector spaces over the field K. The Cartesian product V × W can be given the structure of a vector space over K by defining the operations componentwise:
(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
α (v, w) = (α v, α w)
for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K.
The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle: V ⊕ W.
It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w.
The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W. One elementary use is the reconstruction of a finite-dimensional vector space from any subspace W and its orthogonal complement: V = W ⊕ W⊥.
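A minimal worked example over the real numbers, given purely for illustration, shows the dimensions adding as stated:

% Example: the direct sum of a 2-dimensional and a 3-dimensional real vector space
\[
\mathbb{R}^2 \oplus \mathbb{R}^3 \;\cong\; \mathbb{R}^5,
\qquad
\dim(\mathbb{R}^2 \oplus \mathbb{R}^3) = \dim \mathbb{R}^2 + \dim \mathbb{R}^3 = 2 + 3 = 5 .
\]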
This construction readily generalizes to any finite number of vector spaces.
Construction for two abelian groups
For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum. Thus the Cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise:
(g1, h1) + (g2, h2) = (g1 + g2, h1 + h2)
for g1, g2 in G, and h1, h2 in H.
Integral multiples are similarly defined componentwise by
n(g, h) = (ng, nh)
for g in G, h in H, and n an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above.
The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle: G ⊕ H.
It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h.
The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H.
This construction readily generalises to any finite number of abelian groups.
Construction for an arbitrary family of modules
One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows.
Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences (xi) where xi ∈ Mi and xi = 0 for cofinitely many indices i. (The direct product is analogous, but the indices do not need to cofinitely vanish.)
It can also be defined as the set of functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over i ∈ I being Mi.
This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing (α + β)(i) = α(i) + β(i) for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining (rα)(i) = r·α(i) for all i. In this way, the direct sum becomes a left R-module, and it is denoted ⊕i∈I Mi.
It is customary to write the sequence (xi) as a sum Σ xi. Sometimes a primed summation Σ′ xi is used to indicate that cofinitely many of the terms are zero.
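The construction just described can be collected into a single display (a restatement of the definitions above in symbols, not an additional condition):

% Underlying set and componentwise operations of the direct sum of a family of left R-modules
\[
\bigoplus_{i \in I} M_i
  = \Bigl\{ (x_i)_{i \in I} : x_i \in M_i,\ x_i = 0 \text{ for all but finitely many } i \Bigr\},
\qquad
(x_i) + (y_i) = (x_i + y_i),
\quad
r \cdot (x_i) = (r\,x_i).
\]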
Properties
The direct sum is a submodule of the direct product of the modules Mi. The direct product is the set of all functions α from I to the disjoint union of the modules Mi with α(i) ∈ Mi, but not necessarily vanishing for all but finitely many i. If the index set I is finite, then the direct sum and the direct product are equal.
Each of the modules Mi may be identified with the submodule of the direct sum consisting of those functions which vanish on all indices different from i. With these identifications, every element x of the direct sum can be written in one and only one way as a sum of finitely many elements from the modules Mi.
If the Mi are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the Mi. The same is true for the rank of abelian groups and the length of modules.
Every vector space over the field K is isomorphic to a direct sum of sufficiently many copies of K, so in a sense only these direct sums have to be considered. This is not true for modules over arbitrary rings.
The tensor product distributes over direct sums in the following sense: if N is some right R-module, then the direct sum of the tensor products of N with Mi (which are abelian groups) is naturally isomorphic to the tensor product of N with the direct sum of the Mi.
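Written out as a formula, this natural isomorphism of abelian groups reads:

% Tensor products distribute over direct sums (N a right R-module, each Mi a left R-module)
\[
N \otimes_R \Bigl( \bigoplus_{i \in I} M_i \Bigr)
  \;\cong\; \bigoplus_{i \in I} \bigl( N \otimes_R M_i \bigr).
\]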
Direct sums are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum.
The abelian group of R-linear homomorphisms from the direct sum to some left R-module L is naturally isomorphic to the direct product of the abelian groups of R-linear homomorphisms from Mi to L: HomR(⊕i∈I Mi, L) ≅ ∏i∈I HomR(Mi, L). Indeed, there is clearly a homomorphism τ from the left hand side to the right hand side, where τ(θ)(i) is the R-linear homomorphism sending x ∈ Mi to θ(x) (using the natural inclusion of Mi into the direct sum). The inverse of the homomorphism τ is defined by τ−1(β)(α) = Σi∈I β(i)(α(i)) for any α in the direct sum of the modules Mi. The key point is that the definition of τ−1 makes sense because α(i) is zero for all but finitely many i, and so the sum is finite. In particular, the dual vector space of a direct sum of vector spaces is isomorphic to the direct product of the duals of those spaces.
The finite direct sum of modules is a biproduct: if pk : A1 ⊕ ⋯ ⊕ An → Ak are the canonical projection mappings and ik : Ak → A1 ⊕ ⋯ ⊕ An are the inclusion mappings, then i1 ∘ p1 + ⋯ + in ∘ pn equals the identity morphism of A1 ⊕ ⋯ ⊕ An, and pk ∘ il is the identity morphism of Ak in the case l = k, and is the zero map otherwise.
Internal direct sum
Suppose M is an R-module and Mi is a submodule of M for each i in I. If every x in M can be written in exactly one way as a sum of finitely many elements of the Mi, then we say that M is the internal direct sum of the submodules Mi. In this case, M is naturally isomorphic to the (external) direct sum of the Mi as defined above.
A submodule N of M is a direct summand of M if there exists some other submodule N′ of M such that M is the internal direct sum of N and N′. In this case, N and N′ are called complementary submodules.
Universal property
In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left R-modules, which means that it is characterized by the following universal property. For every i in I, consider the natural embedding ji : Mi → ⊕i∈I Mi,
which sends the elements of Mi to those functions which are zero for all arguments but i. Now let M be an arbitrary R-module and fi : Mi → M be arbitrary R-linear maps for every i, then there exists precisely one R-linear map
such that f ∘ ji = fi for all i.
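In symbols, the universal property stated above reads:

% Universal property of the direct sum (coproduct) of left R-modules
\[
\text{for every family } (f_i : M_i \to M)_{i \in I}
\ \text{ there exists a unique }\ 
f : \bigoplus_{i \in I} M_i \to M
\ \text{ with }\ f \circ j_i = f_i \ \text{ for all } i \in I .
\]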
Grothendieck group
The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group.
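A sketch of the usual construction (the standard Grothendieck group of a commutative monoid, stated here for illustration and not specific to direct sums of modules): one takes pairs of monoid elements, thought of as formal differences, modulo the relation below.

% Grothendieck group: the pair (a, b) plays the role of a - b; the auxiliary element k handles non-cancellative monoids
\[
(a, b) \sim (c, d) \;\Longleftrightarrow\; a + d + k = c + b + k \ \text{for some } k,
\qquad
[(a, b)] + [(c, d)] = [(a + c,\, b + d)],
\quad
-[(a, b)] = [(b, a)].
\]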
Direct sum of modules with additional structure
If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces.
In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations. This construction, however, does not provide a coproduct in the category of algebras, but a direct product (see note below and the remark on direct sums of rings).
Direct sum of algebras
A direct sum of algebras X and Y is the direct sum as vector spaces, with product (x1 + y1)(x2 + y2) = x1x2 + y1y2 for x1, x2 in X and y1, y2 in Y.
Consider these classical examples:
The direct sum R ⊕ R of two copies of the real numbers is ring isomorphic to the split-complex numbers, which are also used in interval analysis; a worked multiplication illustrating this isomorphism appears after these examples.
The direct sum C ⊕ C of two copies of the complex numbers is the algebra of tessarines introduced by James Cockle in 1848.
The direct sum H ⊕ H of two copies of the quaternions, called the split-biquaternions, was introduced by William Kingdon Clifford in 1873.
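As an illustration of the product rule above, a short computation in the first example (a sketch using the basis 1 = (1, 1) and j = (1, −1), which is one standard choice): componentwise multiplication gives j² = 1, the defining relation of the split-complex numbers, and the general product takes the familiar split-complex form.

% Split-complex structure on R ⊕ R with 1 = (1, 1) and j = (1, -1)
\[
j^2 = (1, -1)(1, -1) = (1, 1) = 1,
\qquad
(a + b\,j)(c + d\,j) = (ac + bd) + (ad + bc)\,j .
\]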
Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his Lectures on Matrices (1934), page 151.
Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: for the direct sum the field of scalars acts jointly on both parts, λ(x ⊕ y) = λx ⊕ λy, while for the direct product a scalar factor may be collected alternately with the parts, but not both: λ(x × y) = (λx) × y = x × (λy).
Ian R. Porteous uses the three direct sums above, denoting them as rings of scalars in his analysis of Clifford Algebras and the Classical Groups (1995).
The construction described above, as well as Wedderburn's use of the terms and follow a different convention than the one in category theory. In categorical terms, Wedderburn's is a categorical product, whilst Wedderburn's is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras.
Direct sum of Banach spaces
The direct sum of two Banach spaces X and Y is the direct sum of X and Y considered as vector spaces, with the norm ‖(x, y)‖ = ‖x‖X + ‖y‖Y for all x ∈ X and y ∈ Y.
Generally, if Xi is a collection of Banach spaces, where i traverses the index set I, then the direct sum ⊕i∈I Xi is a module consisting of all functions x defined over I such that x(i) ∈ Xi for all i ∈ I and Σi∈I ‖x(i)‖Xi is finite.
The norm is given by the sum above. The direct sum with this norm is again a Banach space.
For example, if we take the index set I = N and Xi = R for each i, then the direct sum is the space l1, which consists of all the sequences (ai) of reals with finite norm ‖a‖ = Σi |ai|.
A closed subspace A of a Banach space X is complemented if there is another closed subspace B of X such that X is equal to the internal direct sum A ⊕ B. Note that not every closed subspace is complemented; e.g., c0 is not complemented in l∞.
Direct sum of modules with bilinear forms
Let {(Mi, bi) : i ∈ I} be a family, indexed by I, of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form B defined by B((xi), (yi)) = Σi∈I bi(xi, yi),
in which the summation makes sense even for infinite index sets because only finitely many of the terms are non-zero.
Direct sum of Hilbert spaces
If finitely many Hilbert spaces H1, ..., Hn are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as ⟨(x1, ..., xn), (y1, ..., yn)⟩ = ⟨x1, y1⟩ + ... + ⟨xn, yn⟩.
The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces.
If infinitely many Hilbert spaces Hi for i in I are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces Hi to be the completion of this inner product space.
Alternatively and equivalently, one can define the direct sum of the Hilbert spaces Hi as the space of all functions α with domain I such that α(i) is an element of Hi for every i ∈ I and Σi ‖α(i)‖² < ∞.
The inner product of two such functions α and β is then defined as ⟨α, β⟩ = Σi ⟨α(i), β(i)⟩.
This space is complete and we get a Hilbert space.
For example, if we take the index set I = N and Xi = R for each i, then the direct sum is the space l2, which consists of all the sequences (ai) of reals with finite norm ‖a‖ = (Σi |ai|²)^(1/2). Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different.
Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either R or C. This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space.
See also
References
Linear algebra
Module theory | Direct sum of modules | [
"Mathematics"
] | 3,280 | [
"Linear algebra",
"Fields of abstract algebra",
"Module theory",
"Algebra"
] |
58,900 | https://en.wikipedia.org/wiki/Unmanned%20aerial%20vehicle | An unmanned aerial vehicle (UAV), or unmanned aircraft system (UAS), commonly known as a drone, is an aircraft with no human pilot, crew, or passengers onboard. UAVs were originally developed through the twentieth century for military missions too "dull, dirty or dangerous" for humans, and by the twenty-first, they had become essential assets to most militaries. As control technologies improved and costs fell, their use expanded to many non-military applications. These include aerial photography, area coverage, precision agriculture, forest fire monitoring, river monitoring, environmental monitoring, weather observation, policing and surveillance, infrastructure inspections, smuggling, product deliveries, entertainment, and drone racing.
Terminology
Many terms are used for aircraft which fly without any persons onboard.
The term drone has been used from the early days of aviation, being applied to remotely flown target aircraft used for practice firing of a battleship's guns, such as the 1920s Fairey Queen and 1930s de Havilland Queen Bee. Later examples included the Airspeed Queen Wasp and Miles Queen Martinet, before ultimate replacement by the GAF Jindivik. The term remains in common use. In addition to their control software, autonomous drones also employ a host of advanced technologies that allow them to carry out their missions without human intervention, such as cloud computing, computer vision, artificial intelligence, machine learning, deep learning, and thermal sensors. For recreational uses, an aerial photography drone is an aircraft that has first-person video, autonomous capabilities, or both.
An unmanned aerial vehicle (UAV) is defined as a "powered, aerial vehicle that does not carry a human operator, uses aerodynamic forces to provide vehicle lift, can fly autonomously or be piloted remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload". UAV is a term that is commonly applied to military use cases. Missiles with warheads are generally not considered UAVs because the vehicle itself is a munition, but certain types of propeller-based missile are often called "kamikaze drones" by the public and media. The relation of UAVs to remote-controlled model aircraft is also unclear; UAVs may or may not include remote-controlled model aircraft. Some jurisdictions base their definition on size or weight; however, the US FAA defines any unmanned flying craft as a UAV regardless of size. A similar term is remotely piloted aerial vehicle (RPAV).
UAVs or RPAVs can also be seen as a component of an unmanned aircraft system (UAS), which also includes a ground-based controller and a system of communications with the aircraft.
The term UAS was adopted by the United States Department of Defense (DoD) and the United States Federal Aviation Administration (FAA) in 2005 according to their Unmanned Aircraft System Roadmap 2005–2030. The International Civil Aviation Organization (ICAO) and the British Civil Aviation Authority adopted this term, also used in the European Union's Single European Sky (SES) Air Traffic Management (ATM) Research (SESAR Joint Undertaking) roadmap for 2020. This term emphasizes the importance of elements other than the aircraft. It includes elements such as ground control stations, data links and other support equipment. Similar terms are unmanned aircraft vehicle system (UAVS) and remotely piloted aircraft system (RPAS). Many similar terms are in use. Under new regulations which came into effect 1 June 2019, the term RPAS has been adopted by the Canadian Government to mean "a set of configurable elements consisting of a remotely piloted aircraft, its control station, the command and control links and any other system elements required during flight operation".
Classification types
UAVs may be classified like any other aircraft, according to design configuration such as weight or engine type, maximum flight altitude, degree of operational autonomy, operational role, etc. The United States Department of Defense classifies UAVs into five categories.
Other classifications of UAVs include:
Range and endurance
There are usually five categories when UAVs are classified by range and endurance:
Size
There are usually four categories when UAVs are classified by size, with at least one of the dimensions (length or wingspan) meeting the following respective limits:
Weight
Based on their weight, drones can be classified into five categories:
Degree of autonomy
Drones can also be classified based on the degree of autonomy in their flight operations. ICAO classifies unmanned aircraft as either remotely piloted aircraft or fully autonomous. Some UAVs offer intermediate degrees of autonomy. For example, a vehicle may be remotely piloted in most contexts but have an autonomous return-to-base operation. Some aircraft types may optionally fly manned or as UAVs, which may include manned aircraft transformed into manned or Optionally Piloted UAVs (OPVs). The flight of UAVs may operate under remote control by a human operator, as remotely piloted aircraft (RPA), or with various degrees of autonomy, such as autopilot assistance, up to fully autonomous aircraft that have no provision for human intervention.
Altitude
Based on the altitude, the following UAV classifications have been used at industry events such as ParcAberporth Unmanned Systems forum:
Hand-held altitude, about 2 km range
Close altitude, up to 10 km range
NATO type altitude, up to 50 km range
Tactical altitude, about 160 km range
MALE (medium altitude, long endurance) up to and range over 200 km
HALE (high altitude, long endurance) over and indefinite range
Hypersonic high-speed, supersonic (Mach 1–5) or hypersonic (Mach 5+) or suborbital altitude, range over 200 km
Orbital low Earth orbit (Mach 25+)
CIS Lunar Earth-Moon transfer
Computer Assisted Carrier Guidance System (CACGS) for UAVs
Composite criteria
An example of classification based on the composite criteria is U.S. Military's unmanned aerial systems (UAS) classification of UAVs based on weight, maximum altitude and speed of the UAV component.
Power sources
UAVs can be classified based on their power or energy source, which significantly impacts their flight duration, range, and environmental impact. The main categories include:
Battery-powered (electric): These UAVs use rechargeable batteries, offering quiet operation and lower maintenance but potentially limited flight times. The reduced noise levels make them suitable for urban environments and sensitive operations.
Fuel-powered (internal combustion): Utilizing traditional fuels like gasoline or diesel, these UAVs often have longer flight times but may be noisier and require more maintenance. They are typically used for applications requiring extended endurance or heavy payload capacity.
Hybrid: Combining electric and fuel power sources, hybrid UAVs aim to balance the benefits of both systems for improved performance and efficiency. This configuration could allow for versatility in mission profiles and adaptability to different operational requirements.
Solar-powered: Equipped with solar panels, these UAVs can potentially achieve extended flight times by harnessing solar energy, especially at high altitudes. Solar-powered UAVs may be particularly suited for long-endurance missions and environmental monitoring applications.
Nuclear-powered: While nuclear power has been explored for larger aircraft, its application in UAVs remains largely theoretical due to safety concerns and regulatory challenges. Research in this area is ongoing but faces significant hurdles before practical implementation.
Hydrogen fuel cell: An emerging technology, hydrogen fuel cells offer the potential for longer flight times with zero emissions, though the technology is still developing for widespread UAV use. The high energy density of hydrogen makes it a promising option for future UAV propulsion systems.
History
Early drones
The earliest recorded use of an unmanned aerial vehicle for warfighting occurred in July 1849, with a balloon carrier (the precursor to the aircraft carrier) in the first offensive use of air power in naval aviation. Austrian forces besieging Venice attempted to launch some 200 incendiary balloons at the besieged city. The balloons were launched mainly from land; however, some were also launched from the Austrian ship Vulcano. At least one bomb fell in the city; however, due to the wind changing after launch, most of the balloons missed their target, and some drifted back over Austrian lines and the launching ship Vulcano.
The Spanish engineer Leonardo Torres Quevedo introduced a radio-based control-system called the Telekino at the Paris Academy of Science in 1903, as a way of testing airships without risking human life.
Significant development of drones started in the 1900s, and originally focused on providing practice targets for training military personnel. The earliest attempt at a powered UAV was A. M. Low's "Aerial Target" in 1916. Low confirmed that Geoffrey de Havilland's monoplane was the one that flew under control on 21 March 1917 using his radio system. Following this successful demonstration in the spring of 1917 Low was transferred to develop aircraft controlled fast motor launches D.C.B.s with the Royal Navy in 1918 intended to attack shipping and port installations and he also assisted Wing Commander Brock in preparations for the Zeebrugge Raid. Other British unmanned developments followed, leading to the fleet of over 400 de Havilland 82 Queen Bee aerial targets that went into service in 1935.
Nikola Tesla described a fleet of uncrewed aerial combat vehicles in 1915. These developments also inspired the construction of the Kettering Bug by Charles Kettering from Dayton, Ohio and the Hewitt-Sperry Automatic Airplane – initially meant as an uncrewed plane that would carry an explosive payload to a predetermined target. Development continued during World War I, when the Dayton-Wright Airplane Company invented a pilotless aerial torpedo that would explode at a preset time.
The film star and model-airplane enthusiast Reginald Denny developed the first scaled remote piloted vehicle in 1935.
Soviet researchers experimented with controlling Tupolev TB-1 bombers remotely in the late 1930s.
World War II
In 1940, Denny started the Radioplane Company and more models emerged during World War II used both to train antiaircraft gunners and to fly attack-missions. Nazi Germany produced and used various UAV aircraft during the war, like the Argus As 292 and the V-1 flying bomb with a jet engine. Fascist Italy developed a specialised drone version of the Savoia-Marchetti SM.79 flown by remote control, although the Armistice with Italy was enacted prior to any operational deployment.
Postwar period
After World War II development continued in vehicles such as the American JB-4 (using television/radio-command guidance), the Australian GAF Jindivik and Teledyne Ryan Firebee I of 1951, while companies like Beechcraft offered their Model 1001 for the U.S. Navy in 1955. Nevertheless, they were little more than remote-controlled airplanes until the Vietnam War. In 1959, the U.S. Air Force, concerned about losing pilots over hostile territory, began planning for the use of uncrewed aircraft. Planning intensified after the Soviet Union shot down a U-2 in 1960. Within days, a highly classified UAV program started under the code name of "Red Wagon". The August 1964 clash in the Tonkin Gulf between naval units of the U.S. and the North Vietnamese Navy initiated America's highly classified UAVs (Ryan Model 147, Ryan AQM-91 Firefly, Lockheed D-21) into their first combat missions of the Vietnam War. When the Chinese government showed photographs of downed U.S. UAVs via Wide World Photos, the official U.S. response was "no comment".
During the War of Attrition (1967–1970) in the Middle East, Israeli intelligence tested the first tactical UAVs installed with reconnaissance cameras, which successfully returned photos from across the Suez Canal. This was the first time that tactical UAVs that could be launched and landed on any short runway (unlike the heavier jet-based UAVs) were developed and tested in battle.
In the 1973 Yom Kippur War, Israel used UAVs as decoys to spur opposing forces into wasting expensive anti-aircraft missiles. After the 1973 Yom Kippur War, a few key people from the team that developed this early UAV joined a small startup company that aimed to develop UAVs into a commercial product, eventually purchased by Tadiran and leading to the development of the first Israeli UAV.
In 1973, the U.S. military officially confirmed that they had been using UAVs in Southeast Asia (Vietnam). Over 5,000 U.S. airmen had been killed and over 1,000 more were missing or captured. The USAF 100th Strategic Reconnaissance Wing flew about 3,435 UAV missions during the war at a cost of about 554 UAVs lost to all causes. In the words of USAF General George S. Brown, Commander, Air Force Systems Command, in 1972, "The only reason we need (UAVs) is that we don't want to needlessly expend the man in the cockpit." Later that year, General John C. Meyer, Commander in Chief, Strategic Air Command, stated, "we let the drone do the high-risk flying ... the loss rate is high, but we are willing to risk more of them ...they save lives!"
During the 1973 Yom Kippur War, Soviet-supplied surface-to-air missile-batteries in Egypt and Syria caused heavy damage to Israeli fighter jets. As a result, Israel developed the IAI Scout as the first UAV with real-time surveillance. The images and radar decoys provided by these UAVs helped Israel to completely neutralize the Syrian air defenses at the start of the 1982 Lebanon War, resulting in no pilots downed. In Israel in 1987, UAVs were first used as proof-of-concept of super-agility, post-stall controlled flight in combat-flight simulations that involved tailless, stealth-technology-based, three-dimensional thrust vectoring flight-control, and jet-steering.
Modern UAVs
With the maturing and miniaturization of applicable technologies in the 1980s and 1990s, interest in UAVs grew within the higher echelons of the U.S. military. The U.S. funded the Counterterrorism Center (CTC) within the CIA, which sought to fight terrorism with the aid of modernized drone technology. In the 1990s, the U.S. DoD gave a contract to AAI Corporation along with Israeli company Malat. The U.S. Navy bought the AAI Pioneer UAV that AAI and Malat developed jointly. Many of these UAVs saw service in the 1991 Gulf War. UAVs demonstrated the possibility of cheaper, more capable fighting-machines, deployable without risk to aircrews. Initial generations primarily involved surveillance aircraft, but some carried armaments, such as the General Atomics MQ-1 Predator, that launched AGM-114 Hellfire air-to-ground missiles.
CAPECON, a European Union project to develop UAVs, ran from 1 May 2002 to 31 December 2005.
The United States Air Force (USAF) employed 7,494 UAVs, almost one in three USAF aircraft. The Central Intelligence Agency also operated UAVs. By 2013 at least 50 countries used UAVs. China, Iran, Israel, Pakistan, Turkey, and others designed and built their own varieties. The use of drones has continued to increase. Due to their wide proliferation, no comprehensive list of UAV systems exists.
The development of smart technologies and improved electrical-power systems led to a parallel increase in the use of drones for consumer and general aviation activities. Quadcopter drones exemplify the widespread popularity of hobby radio-controlled aircraft and toys, but the use of UAVs in commercial and general aviation is limited by a lack of autonomy and by new regulatory environments which require line-of-sight contact with the pilot.
In 2020, a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021. This may have been the first time an autonomous killer-robot armed with lethal weaponry attacked human beings.
Superior drone technology, specifically the Turkish Bayraktar TB2, played a role in Azerbaijan's successes in the 2020 Nagorno-Karabakh war against Armenia.
UAVs are also used in NASA missions. The Ingenuity helicopter is an autonomous UAV that operated on Mars from 2021 to 2024. The Dragonfly spacecraft is under development and aims to reach and examine Saturn's moon Titan. Its primary goal is to roam the surface, expanding the area researched well beyond what previous landers could reach. As a UAV, Dragonfly allows examination of potentially diverse types of soil. The drone is set to launch in 2027 and is estimated to take seven more years to reach the Saturnian system.
Miniaturization is also supporting the development of small UAVs which can be deployed individually or in fleets, making it possible to survey large areas in a relatively short amount of time.
According to data from GlobalData, the global military uncrewed aerial systems (UAS) market, which forms a significant part of the UAV industry, is projected to experience a compound annual growth rate of 4.8% over the next decade. This represents a near doubling in market size, from $12.5 billion in 2024 to an estimated $20 billion by 2034.
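The "near doubling" follows directly from compounding the stated rate; as an illustrative check (not a figure from the source):

\[ \$12.5\ \text{billion} \times 1.048^{10} \approx \$12.5\ \text{billion} \times 1.60 \approx \$20\ \text{billion}. \]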
Design
Crewed and uncrewed aircraft of the same type generally have recognizably similar physical components. The main exceptions are the cockpit and environmental control system or life support systems. Some UAVs carry payloads (such as a camera) that weigh considerably less than an adult human, and as a result, can be considerably smaller. Though they carry heavy payloads, weaponized military UAVs are lighter than their crewed counterparts with comparable armaments.
Small civilian UAVs have no life-critical systems, and can thus be built out of lighter but less sturdy materials and shapes, and can use less robustly tested electronic control systems. For small UAVs, the quadcopter design has become popular, though this layout is rarely used for crewed aircraft. Miniaturization means that less-powerful propulsion technologies can be used that are not feasible for crewed aircraft, such as small electric motors and batteries.
Control systems for UAVs are often different from crewed craft. For remote human control, a camera and video link almost always replace the cockpit windows; radio-transmitted digital commands replace physical cockpit controls. Autopilot software is used on both crewed and uncrewed aircraft, with varying feature sets.
Aircraft configuration
UAVs can be designed in different configurations than manned aircraft both because there is no need for a cockpit and its windows, and there is no need to optimize for human comfort, although some UAVs are adapted from piloted examples, or are designed for optionally piloted modes. Air safety is also less of a critical requirement for unmanned aircraft, allowing the designer greater freedom to experiment. Instead, UAVs are typically designed around their onboard payloads and their ground equipment. These factors have led to a great variety of airframe and motor configurations in UAVs.
For conventional flight the flying wing and blended wing body offer light weight combined with low drag and stealth, and are popular configurations for many use cases. Larger types which carry a variable payload are more likely to feature a distinct fuselage with a tail for stability, control and trim, although the wing configurations in use vary widely.
For uses that require vertical flight or hovering, the tailless quadcopter requires a relatively simple control system and is common for smaller UAVs. Multirotor designs with six or more rotors are more common with larger UAVs, where redundancy is prioritized.
Propulsion
Traditional internal combustion and jet engines remain in use for drones requiring long range. However, for shorter-range missions electric power has almost entirely taken over. The distance record for crossing the North Atlantic Ocean by a model aircraft or UAV, built from balsa wood and mylar skin, is held by a gasoline-powered design by Maynard Hill, set in 2003 when one of his creations flew 1,882 miles across the Atlantic Ocean on less than a gallon of fuel.
Besides the traditional piston engine, the Wankel rotary engine is used by some drones. This type offers high power output for lower weight, with quieter and more vibration-free running. Claims have also been made for improved reliability and greater range.
Small drones mostly use lithium-polymer batteries (Li-Po), while some larger vehicles have adopted the hydrogen fuel cell. The energy density of modern Li-Po batteries is far less than gasoline or hydrogen. However electric motors are cheaper, lighter and quieter. Complex multi-engine, multi-propeller installations are under development with the goal of improving aerodynamic and propulsive efficiency. For such complex power installations, battery elimination circuitry (BEC) may be used to centralize power distribution and minimize heating, under the control of a microcontroller unit (MCU).
Ornithopters – wing propulsion
Flapping-wing ornithopters, imitating birds or insects, have been flown as microUAVs. Their inherent stealth recommends them for spy missions.
Sub-1g microUAVs inspired by flies, albeit using a power tether, have been able to "land" on vertical surfaces. Other projects mimic the flight of beetles and other insects.
Computer control systems
UAV computing capability followed the advances of computing technology, beginning with analog controls and evolving into microcontrollers, then system-on-a-chip (SOC) and single-board computers (SBC).
Modern system hardware for UAV control is often called the flight controller (FC), flight controller board (FCB) or autopilot. Common UAV control hardware typically incorporates a primary microprocessor, a secondary or failsafe processor, and sensors such as accelerometers, gyroscopes, magnetometers, and barometers in a single module.
In 2024 EASA agreed on the first certification basis for a UAV flight controller in compliance with the ETSO-C198 for Embention's autopilot. The certification of the UAV flight control systems aims to facilitate the integration of UAVs within the airspace and the operation of drones in critical areas.
Architecture
Sensors
Position and movement sensors give information about the aircraft state. Exteroceptive sensors deal with external information like distance measurements, while exproprioceptive ones correlate internal and external states.
Non-cooperative sensors are able to detect targets autonomously so they are used for separation assurance and collision avoidance.
Degrees of freedom (DOF) refers to both the amount and quality of onboard sensors: 6 DOF implies 3-axis gyroscopes and accelerometers (a typical inertial measurement unit IMU), 9 DOF refers to an IMU plus a compass, 10 DOF adds a barometer and 11 DOF usually adds a GPS receiver.
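As an illustration of how these sensor counts compose, the grouping below is a hypothetical sketch (field names and units are assumptions, not tied to any particular flight controller):

```python
from dataclasses import dataclass

@dataclass
class ImuSample:           # 6 DOF: 3-axis gyroscope + 3-axis accelerometer
    gyro_rad_s: tuple      # (x, y, z) angular rates in rad/s
    accel_m_s2: tuple      # (x, y, z) accelerations in m/s^2

@dataclass
class NavSample:
    imu: ImuSample         # 6 DOF
    mag_uT: tuple          # +3 DOF: 3-axis magnetometer (compass) -> 9 DOF
    baro_alt_m: float      # +1 DOF: barometric altitude -> 10 DOF
    # an 11-DOF system would additionally carry a GPS fix (latitude, longitude, altitude)

sample = NavSample(
    imu=ImuSample(gyro_rad_s=(0.01, -0.02, 0.00), accel_m_s2=(0.0, 0.0, 9.81)),
    mag_uT=(22.0, 5.0, -43.0),
    baro_alt_m=120.5,
)
print(sample.baro_alt_m)
```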
In addition to the navigation sensors, the UAV (or UAS) can also be equipped with monitoring devices such as RGB, multispectral or hyperspectral cameras, or LiDAR, which may provide specific measurements or observations.
Actuators
UAV actuators include digital electronic speed controllers (which control the RPM of the motors) linked to motors/engines and propellers, servomotors (for planes and helicopters mostly), weapons, payload actuators, LEDs and speakers.
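As an illustration of the speed-control path, hobby-grade electronic speed controllers are commonly commanded with a pulse width between roughly 1000 and 2000 microseconds; the mapping below is a sketch under that assumption (real systems add arming, calibration and failsafe logic):

```python
def throttle_to_pwm_us(throttle: float,
                       min_us: int = 1000,
                       max_us: int = 2000) -> int:
    """Map a normalized throttle command (0.0-1.0) to an ESC pulse width.

    The 1000-2000 microsecond range is a common hobby-ESC convention,
    assumed here purely for illustration.
    """
    throttle = max(0.0, min(1.0, throttle))          # clamp to the valid range
    return int(round(min_us + throttle * (max_us - min_us)))

# Example: half throttle maps to roughly 1500 microseconds
print(throttle_to_pwm_us(0.5))
```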
Software
Modern UAVs run a software stack that ranges from low-level firmware that directly controls actuators, to high level flight planning. At the lowest level, firmware directly controls reading from sensors such as an IMU and commanding actuators such as motors. Control software (often referred to as an autopilot) is responsible for computing actuator speeds given desired vehicle velocity. Due to its direct interaction with hardware, this software is time-critical and may run on microcontrollers. This software may also handle radio communications, in the case of UAVs that are not autonomous. One popular example is the PX4 autopilot.
At the next level, autonomy algorithms compute the desired velocity given higher level goals. For example, trajectory optimization may be used to calculate a flight trajectory given a desired goal location. This software is not necessarily time-critical, and can often run on a single board computer running an operating system such as Linux with relaxed time constraints.
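The division of labour between the two layers can be illustrated with a minimal sketch (hypothetical code, not taken from PX4 or any other autopilot): a high-level routine turns a goal position into a desired velocity, which the time-critical control layer would then convert into actuator commands.

```python
import math

def desired_velocity(position, goal, max_speed=5.0):
    """High-level layer: command a velocity toward the goal, capped at max_speed.

    A purely illustrative proportional "go-to-goal" rule; real autonomy stacks
    use trajectory optimization and obstacle avoidance instead.
    """
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)
    speed = min(max_speed, dist)          # slow down as the goal is approached
    return (speed * dx / dist, speed * dy / dist)

# The result would be handed to the time-critical control layer,
# which converts desired velocity into individual actuator speeds.
print(desired_velocity((0.0, 0.0), (10.0, 5.0)))
```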
Loop principles
UAVs employ open-loop, closed-loop or hybrid control architectures.
Open loop: this type provides a positive control signal (faster, slower, left, right, up, down) without incorporating feedback from sensor data.
Closed loop: this type incorporates sensor feedback to adjust behavior (for example, reducing speed to reflect a tailwind, or moving to an altitude of 300 feet). The PID controller is common. Feedforward control is sometimes employed, reducing the correction left to the feedback loop.
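A minimal sketch of such a closed-loop controller, here a PID loop holding altitude (the gains, units and thrust model are illustrative assumptions, not values from any specific autopilot):

```python
class PID:
    """Discrete PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold 30 m altitude against a barometer reading of 28.5 m
altitude_pid = PID(kp=0.8, ki=0.1, kd=0.3)
thrust_adjustment = altitude_pid.update(setpoint=30.0, measurement=28.5, dt=0.02)
print(thrust_adjustment)
```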
Communications
UAVs use a radio for control and exchange of video and other data. Early UAVs had only narrowband uplink. Downlinks came later. These bi-directional narrowband radio links carried command and control (C&C) and telemetry data about the status of aircraft systems to the remote operator.
In most modern UAV applications, video transmission is required. So instead of having separate links for C&C, telemetry and video traffic, a broadband link is used to carry all types of data. These broadband links can leverage quality of service techniques and carry TCP/IP traffic that can be routed over the internet.
The radio signal from the operator side can be issued from either:
Ground control – a human operating a radio transmitter/receiver, a smartphone, a tablet, a computer, or the original meaning of a military ground control station (GCS).
Remote network system, such as satellite duplex data links for some military powers. Downstream digital video over mobile networks has also entered consumer markets, while direct UAV control uplink over the cellular mesh and LTE have been demonstrated and are in trials.
Another aircraft, serving as a relay or mobile control station (military manned-unmanned teaming, MUM-T).
Modern networking standards have explicitly considered drones and therefore include optimizations. The 5G standard mandates user-plane latency as low as 1 ms for ultra-reliable low-latency communications.
UAV-to-UAV coordination can be supported by Remote ID communication technology. Remote ID messages (containing the UAV's coordinates) are broadcast and can be used for collision-free navigation.
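A hypothetical sketch of the kind of position broadcast described above (the field names and JSON encoding are illustrative assumptions, not the actual Remote ID wire format):

```python
import json, time

def build_remote_id_message(uav_id, lat, lon, alt_m):
    """Assemble an illustrative broadcast payload carrying the UAV's position.

    Nearby aircraft receiving such messages could use the coordinates
    for collision-free navigation, as described above.
    """
    return json.dumps({
        "uav_id": uav_id,
        "timestamp": int(time.time()),
        "latitude": lat,
        "longitude": lon,
        "altitude_m": alt_m,
    })

print(build_remote_id_message("EXAMPLE-1234", 48.8566, 2.3522, 120.0))
```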
Autonomy
The level of autonomy in UAVs varies widely. UAV manufacturers often build in specific autonomous operations, such as:
Self-level: attitude stabilization on the pitch and roll axes.
Altitude hold: The aircraft maintains its altitude using barometric pressure and/or GPS data.
Hover/position hold: Keep level pitch and roll, stable yaw heading and altitude while maintaining position using GNSS or inertial sensors.
Headless mode: Pitch control relative to the position of the pilot rather than relative to the vehicle's axes.
Care-free: automatic roll and yaw control while moving horizontally
Takeoff and landing (using a variety of aircraft or ground-based sensors and systems; see also "autoland")
Failsafe: automatic landing or return-to-home upon loss of control signal
Return-to-home: Fly back to the point of takeoff (often gaining altitude first to avoid possible intervening obstructions such as trees or buildings).
Follow-me: Maintain relative position to a moving pilot or other object using GNSS, image recognition or homing beacon.
GPS waypoint navigation: Using GNSS to navigate to an intermediate location on a travel path (an illustrative sketch follows this list).
Orbit around an object: Similar to Follow-me but continuously circle a target.
Pre-programmed aerobatics (such as rolls and loops)
Pre-programmed delivery (delivery drones)
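As referenced in the GPS waypoint navigation item above, the following is a minimal sketch of sequential waypoint following (the flat-earth distance approximation and acceptance radius are illustrative assumptions):

```python
import math

def distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points."""
    lat_m = 111_320 * (b[0] - a[0])                                   # metres per degree of latitude
    lon_m = 111_320 * math.cos(math.radians(a[0])) * (b[1] - a[1])    # scaled by latitude
    return math.hypot(lat_m, lon_m)

def next_waypoint(position, waypoints, reached_radius_m=5.0):
    """Return the first waypoint not yet reached, or None when the route is done.

    A simplified rule for illustration; real missions track progress explicitly
    so a drifting vehicle does not revisit earlier waypoints.
    """
    for wp in waypoints:
        if distance_m(position, wp) > reached_radius_m:
            return wp
    return None

route = [(48.8566, 2.3522), (48.8570, 2.3530)]
print(next_waypoint((48.8566, 2.3522), route))   # first point already reached, so the second is returned
```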
One approach to quantifying autonomous capabilities is based on OODA terminology, as suggested by a 2002 US Air Force Research Laboratory report.
Full autonomy is available for specific tasks, such as airborne refueling or ground-based battery switching.
Other functions available or under development include collective flight, real-time collision avoidance, wall following, corridor centring, simultaneous localization and mapping, swarming, cognitive radio, and machine learning. In this context, computer vision can play an important role in automatically ensuring flight safety.
Performance considerations
Flight envelope
UAVs can be programmed to perform aggressive maneuvers or landing/perching on inclined surfaces, and then to climb toward better communication spots. Some UAVs can control flight with varying flight modes, such as VTOL designs.
UAVs can also implement perching on a flat vertical surface.
Endurance
UAV endurance is not constrained by the physiological capabilities of a human pilot.
Because of their small size, low weight, low vibration and high power to weight ratio, Wankel rotary engines are used in many large UAVs. Their engine rotors cannot seize; the engine is not susceptible to shock-cooling during descent and it does not require an enriched fuel mixture for cooling at high power. These attributes reduce fuel usage, increasing range or payload.
Proper drone cooling is essential for long-term drone endurance. Overheating and subsequent engine failure is the most common cause of drone failure.
Hydrogen fuel cells, using hydrogen power, may be able to extend the endurance of small UAVs, up to several hours.
Among micro air vehicles, endurance is so far best achieved with flapping-wing UAVs, followed by planes, with multirotors last, owing to the lower Reynolds numbers at which they operate.
Solar-electric UAVs, a concept originally championed by the AstroFlight Sunrise in 1974, have achieved flight times of several weeks.
Solar-powered atmospheric satellites ("atmosats") designed for operating at altitudes exceeding 20 km (12 miles, or 60,000 feet) for as long as five years could potentially perform duties more economically and with more versatility than low Earth orbit satellites. Likely applications include weather monitoring, disaster recovery, Earth imaging and communications.
Electric UAVs powered by microwave power transmission or laser power beaming are other potential endurance solutions.
Another application for a high endurance UAV would be to "stare" at a battlefield for a long interval (ARGUS-IS, Gorgon Stare, Integrated Sensor Is Structure) to record events that could then be played backwards to track battlefield activities.
The British PHASA-35 military drone (at a late stage of development) is so delicate that traversing the first turbulent twelve miles of the atmosphere is a hazardous endeavor. It has, however, remained on station at 65,000 feet for 24 hours. As of 2023, Airbus' Zephyr had attained 70,000 feet and flown for 64 days, with 200 days being the aim. These aircraft operate close enough to near-space to be regarded as "pseudo-satellites" in terms of their operational capabilities.
Reliability
Reliability improvements target all aspects of UAV systems, using resilience engineering and fault tolerance techniques.
Individual reliability covers robustness of flight controllers, to ensure safety without excessive redundancy that would add cost and weight. In addition, dynamic assessment of the flight envelope allows damage-resilient UAVs, using non-linear analysis with purpose-designed control loops or neural networks. UAV software liability is trending toward the design and certification practices of crewed avionics software.
Swarm resilience involves maintaining operational capabilities and reconfiguring tasks given unit failures.
Applications
In recent years, autonomous drones have begun to transform various application areas as they can fly beyond visual line of sight (BVLOS) while maximizing production, reducing costs and risks, ensuring site safety, security and regulatory compliance, and protecting the human workforce in times of a pandemic. They can also be used for consumer-related missions like package delivery, as demonstrated by Amazon Prime Air, and critical deliveries of health supplies.
There are numerous civilian, commercial, military, and aerospace applications for UAVs. These include:
General: recreation, disaster relief, archeology, conservation of biodiversity and habitat, law enforcement, crime, and terrorism.
Commercial: aerial surveillance, filmmaking, journalism, scientific research, surveying, cargo transport, mining, manufacturing, forestry, solar farming, thermal energy, ports and agriculture.
Warfare
As of 2020, seventeen countries have armed UAVs, and more than 100 countries use UAVs in a military capacity. The five leading countries producing domestic UAV designs are the United States, China, Israel, Iran and Turkey. Top military UAV manufacturers include General Atomics, Lockheed Martin, Northrop Grumman, Boeing, Baykar, TAI, IAIO, CASC and CAIG. China has established and expanded its presence in the military UAV market since 2010. In the early 2020s, Turkey also established and expanded its presence in the military UAV market.
In the early 2010s, Israeli companies mainly focused on small surveillance UAV systems; by number of drones, Israel exported 60.7% of the UAVs on the market in 2014, while the United States exported 23.9%. Between 2010 and 2014, 439 drones were exchanged, compared with 322 in the previous five years; only a small fraction of this trade, 11 of the 439 (2.5%), were armed drones. The US alone operated over 9,000 military UAVs in 2014, of which more than 7,000 were RQ-11 Raven miniature UAVs. Since 2010, Chinese drone companies have begun to export large quantities of drones to the global military market. Of the 18 countries that are known to have received military drones between 2010 and 2019, the top 12 all purchased their drones from China. The shift accelerated in the 2020s due to China's advancement in drone technologies and manufacturing, compounded by market demand from the Russian invasion of Ukraine and the Israel-Gaza conflict.
For intelligence and reconnaissance missions, the inherent stealth of micro UAV flapping-wing ornithopters, imitating birds or insects, offers potential for covert surveillance and makes them difficult targets to bring down.
Unmanned surveillance and reconnaissance aerial vehicles are used for reconnaissance, attack, demining, and target practice.
Following the 2022 Russian invasion of Ukraine, UAV development increased dramatically, with Ukraine creating the Brave1 platform to promote rapid development of innovative systems.
Civil applications
The civilian (commercial and general) drone market is dominated by Chinese companies. Chinese manufacturer DJI alone had 74% of the civil market share in 2018, with no other company accounting for more than 5%. Chinese companies continued to hold over 70% of the global market share in 2023, despite increasing scrutiny and sanctions from the United States. The US Interior Department grounded its fleet of DJI drones in 2020, while the Justice Department prohibited the use of federal funds for the purchase of DJI and other foreign-made UAVs.
DJI is followed by American company 3D Robotics, Chinese company Yuneec, Autel Robotics, and French company Parrot.
As of May 2021, 873,576 UAVs had been registered with the US FAA, of which 42% were categorized as commercial and 58% as recreational. 2018 NPD data point to consumers increasingly purchasing drones with more advanced features, with 33 percent growth in both the $500+ and $1,000+ market segments.
The civil UAV market is relatively new compared to the military one. Companies are emerging in both developed and developing nations at the same time. Many early-stage startups have received support and funding from investors, as is the case in the United States, and from government agencies, as is the case in India. Some universities offer research and training programs or degrees. Private entities also provide online and in-person training programs for both recreational and commercial UAV use.
Consumer drones are widely used by police and military organizations worldwide because of the cost-effective nature of consumer products. Since 2018, the Israeli military have used DJI UAVs for light reconnaissance missions. DJI drones have been used by Chinese police in Xinjiang since 2017 and American police departments nationwide since 2018. Both Ukraine and Russia used commercial DJI drones extensively during the Russian invasion of Ukraine. These civilian DJI drones were sourced by governments, hobbyists, international donations to Ukraine and Russia to support each side on the battlefield, and were often flown by drone hobbyists recruited by the armed forces. The prevalence of DJI drones was attributable to their market dominance, affordability, high performance, and reliability.
Entertainment
Drones are also used in nighttime displays for artistic and advertising purposes; the main benefits are that they are safer, quieter and better for the environment than fireworks. They can replace fireworks displays, or serve as an adjunct to them, reducing the financial burden of festivals. In addition, they can complement fireworks, since drones can carry them, creating new forms of artwork in the process.
Drones can also be used for racing, either with or without VR functionality.
Aerial photography
Drones are ideally suited to capturing aerial shots in photography and cinematography, and are widely used for this purpose. Small drones avoid the need for precise coordination between pilot and cameraman, with the same person taking on both roles. Big drones with professional cine cameras usually have a drone pilot and a camera operator who controls camera angle and lens. For example, the AERIGON cinema drone, used in film production, is operated by two people. Drones provide access to dangerous, remote or otherwise inaccessible sites.
Environmental monitoring
For environmental monitoring, UASs or UAVs offer the major advantage of generating a new generation of surveys at very high or ultra-high resolution in both space and time. This makes it possible to bridge the existing gap between satellite data and field monitoring, and has stimulated a large number of activities aimed at better describing natural and agricultural ecosystems. The most common applications are:
Topographic surveys for the production of orthomosaics, digital surface models and 3D models;
Monitoring of natural ecosystems for biodiversity monitoring, habitat mapping, detection of invasive alien species and study of ecosystem degradation due to invasive species or disturbances;
Precision agriculture which exploits all available technologies including UAV in order to produce more with less (e.g., optimisation of fertilizers, pesticides, irrigation);
River monitoring: several methods have been developed to perform flow monitoring using image velocimetry, which allows the 2D flow velocity fields to be properly described.
Structural monitoring of any type of structure, whether a dam, railway or another dangerous, inaccessible or massive site.
Mineral detection for acid mine drainage: UAVs with hyperspectral cameras can produce detailed maps of proxy minerals (e.g. goethite, jarosite) indicating certain pH values in natural, mining and post-mining environments, such as remediated sites.
These activities can be complemented with different measurements, such as photogrammetry, thermography, multispectral imaging, 3D field scanning, and normalized difference vegetation index (NDVI) maps.
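As an illustration of the last of these measurements, the normalized difference vegetation index is computed per pixel from near-infrared and red reflectance using the standard formula (NIR - Red) / (NIR + Red); the reflectance values below are made up for the example:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Illustrative reflectance values: healthy vegetation has high NIR and low red reflectance
print(ndvi([0.6, 0.3], [0.1, 0.25]))         # roughly [0.71, 0.09]
```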
Geological hazards
UAVs have become a widely used tool for studying geohazards such as landslides. Various sensors, including radar, optical, and thermal, can be mounted on UAVs to monitor different properties. UAVs enable the capture of images of various landslide features, such as transverse, radial, and longitudinal cracks, ridges, scarps, and surfaces of rupture, even in inaccessible areas of the sliding mass. Moreover, processing the optical images captured by UAVs also allows for the creation of point clouds and 3D models, from which these properties can be derived. Comparing point clouds obtained at different times allows for the detection of changes caused by landslide deformation.
Mineral exploration
UAVs may help in the discovery of new or reevaluation of known mineral deposits to meet the demand for raw materials such as critical raw metals (e.g. cobalt, nickel), rare earths and battery minerals. By employing a suite of sensors (e.g. spectral imaging, Lidar, magnetics, gamma-ray spectroscopy), and similar to those used in environmental monitoring, UAV-based data can produce maps of geological surface and subsurface features, contributing to more efficient and targeted mineral exploration.
Agriculture, forestry and environmental studies
As global demand for food production grows exponentially, resources are depleted, farmland is reduced, and agricultural labor is increasingly in short supply, there is an urgent need for more convenient and smarter agricultural solutions than traditional methods, and the agricultural drone and robotics industry is expected to make progress. Agricultural drones have been used to help build sustainable agriculture all over the world leading to a new generation of agriculture. In this context, there is a proliferation of innovations in both tools and methodologies which allow precise description of vegetation state and also may help to precisely distribute nutrients, pesticides or seeds over a field.
The use of UAVs is also being investigated to help detect and fight wildfires, whether through observation or launching pyrotechnic devices to start backfires.
UAVs are also now widely used to survey wildlife such as nesting seabirds, seals and even wombat burrows.
Law enforcement
Police can use drones for applications such as search and rescue and traffic monitoring.
Humanitarian aid
Drones are increasingly finding their application in humanitarian aid and disaster relief, where they are used for a wide range of applications such as delivering food, medicine and essential items to remote areas or image mapping before and following disasters.
Safety and security
Threats
Nuisance
UAVs can threaten airspace security in numerous ways, including unintentional collisions or other interference with other aircraft, deliberate attacks or by distracting pilots or flight controllers. The first incident of a drone-airplane collision occurred in mid-October 2017 in Quebec City, Canada. The first recorded instance of a drone collision with a hot air balloon occurred on 10 August 2018 in Driggs, Idaho, United States; although there was no significant damage to the balloon nor any injuries to its 3 occupants, the balloon pilot reported the incident to the National Transportation Safety Board, stating that "I hope this incident helps create a conversation of respect for nature, the airspace, and rules and regulations". Unauthorized UAV flights into or near major airports have prompted extended shutdowns of commercial flights.
Drones caused significant disruption at Gatwick Airport during December 2018, requiring the deployment of the British Army.
In the United States, flying close to a wildfire is punishable by a maximum $25,000 fine. Nonetheless, in 2014 and 2015, firefighting air support in California was hindered on several occasions, including at the Lake Fire and the North Fire. In response, California legislators introduced a bill that would allow firefighters to disable UAVs which invaded restricted airspace. The FAA later required registration of most UAVs.
Security vulnerabilities
By 2017, drones were being used to drop contraband into prisons.
Interest in UAV cybersecurity rose greatly after the 2009 Predator UAV video-stream hijacking incident, in which Islamic militants used cheap, off-the-shelf equipment to intercept video feeds from a UAV. Another risk is the possibility of hijacking or jamming a UAV in flight. Several security researchers have made public some vulnerabilities in commercial UAVs, in some cases even providing full source code or tools to reproduce their attacks. At a workshop on UAVs and privacy in October 2016, researchers from the Federal Trade Commission showed they were able to hack into three different consumer quadcopters and noted that UAV manufacturers can make their UAVs more secure by the basic security measures of encrypting the Wi-Fi signal and adding password protection.
Aggression
Many UAVs have been loaded with dangerous payloads, and/or crashed into targets. Payloads have included or could include explosives, chemical, radiological or biological hazards. UAVs with generally non-lethal payloads could possibly be hacked and put to malicious purposes. Counter-UAV systems (C-UAS), from detection to electronic warfare to UAVs designed to destroy other UAVs, are in development and being deployed by states to counter this threat.
Such developments have occurred despite the difficulties. As J. Rogers stated in a 2017 interview to A&T, "There is a big debate out there at the moment about what the best way is to counter these small UAVs, whether they are used by hobbyists causing a bit of a nuisance or in a more sinister manner by a terrorist actor".
Countermeasures
Counter unmanned air system
The malicious use of UAVs has led to the development of counter unmanned air system (C-UAS) technologies. Automatic tracking and detection of UAVs from commercial cameras has become accurate thanks to the development of deep-learning-based machine learning algorithms. It is also possible to automatically identify UAVs across different cameras with different viewpoints and hardware specifications using re-identification methods. Commercial systems such as the Aaronia AARTOS have been installed at major international airports. Once a UAV is detected, it can be countered with kinetic force (missiles, projectiles or another UAV) or non-kinetic force (laser, microwaves, communications jamming). Anti-aircraft missile systems such as the Iron Dome are also being enhanced with C-UAS technologies. Utilising a smart UAV swarm to counter one or more hostile UAVs has also been proposed.
Regulation
Regulatory bodies around the world are developing unmanned aircraft system traffic management solutions to better integrate UAVs into airspace.
The use of unmanned aerial vehicles is becoming increasingly regulated by the civil aviation authorities of individual countries. Regulatory regimes can differ significantly according to drone size and use. The International Civil Aviation Organization (ICAO) began exploring the use of drone technology as far back as 2005, which resulted in a 2011 report. France was among the first countries to set a national framework based on this report and larger aviation bodies such as the FAA and the EASA quickly followed suit. In 2021, the FAA published a rule requiring all commercially used UAVs and all UAVs regardless of intent weighing 250 g or more to participate in Remote ID, which makes drone locations, controller locations, and other information public from takeoff to shutdown; this rule has since been challenged in the pending federal lawsuit RaceDayQuads v. FAA.
EU Drone Certification - Class Identification Label
The implementation of the Class Identification Label serves a crucial purpose in the regulation and operation of drones. The label is a verification mechanism designed to confirm that drones within a specific class meet the rigorous standards set by administrations for design and manufacturing. These standards are necessary to ensure the safety and reliability of drones in various industries and applications.
By providing this assurance to customers, the Class Identification Label helps to increase confidence in drone technology and encourages wider adoption across industries. This, in turn, contributes to the growth and development of the drone industry and supports the integration of drones into society.
Export controls
The export of UAVs or technology capable of carrying a 500 kg payload at least 300 km is restricted in many countries by the Missile Technology Control Regime.
See also
List of unmanned aerial vehicles
Delivery drone
Drone in a Box
Glide bomb
International Aerial Robotics Competition
List of films featuring drones
List of military electronics of the United States
MARSS Interceptor
Micromechanical Flying Insect
ParcAberporth
Quadcopter
Radio-controlled aircraft
Autonomous aircraft
Optionally piloted vehicle
Sypaq Corvo Precision Payload Delivery System
Satellite Sentinel Project
Tactical Control System
UAV ground control station
Unmanned underwater vehicle
References
Citations
Bibliography
Further reading
External links
How Intelligent Drones Are Shaping the Future of Warfare , Rolling Stone Magazine
Measles (probably from Middle Dutch or Middle High German masel(e) ("blemish, blood blister")) is a highly contagious, vaccine-preventable infectious disease caused by measles virus. Other names include morbilli, rubeola, red measles, and English measles. Both rubella, also known as German measles, and roseola are different diseases caused by unrelated viruses.
Symptoms usually develop 10–12 days after exposure to an infected person and last 7–10 days. Initial symptoms typically include fever, often greater than 40 °C (104 °F), cough, runny nose, and inflamed eyes. Small white spots known as Koplik's spots may form inside the mouth two or three days after the start of symptoms. A red, flat rash which usually starts on the face and then spreads to the rest of the body typically begins three to five days after the start of symptoms. Common complications include diarrhea (in 8% of cases), middle ear infection (7%), and pneumonia (6%). These occur in part due to measles-induced immunosuppression. Less commonly seizures, blindness, or inflammation of the brain may occur.
Measles is an airborne disease which spreads easily from one person to the next through the coughs and sneezes of infected people. It may also be spread through direct contact with mouth or nasal secretions. It is extremely contagious: nine out of ten people who are not immune and share living space with an infected person will be infected. Estimates of the measles reproductive number vary beyond the frequently cited range of 12 to 18; a 2017 review identified feasible measles R0 values of 3.7–203.3. People are infectious to others from four days before to four days after the start of the rash. While often regarded as a childhood illness, it can affect people of any age. Most people do not get the disease more than once. Testing for the measles virus in suspected cases is important for public health efforts. Measles is not known to occur in other animals.
Once a person has become infected, no specific treatment is available, although supportive care may improve outcomes. Such care may include oral rehydration solution (slightly sweet and salty fluids), healthy food, and medications to control the fever. Antibiotics should be prescribed if secondary bacterial infections such as ear infections or pneumonia occur. Vitamin A supplementation is also recommended for children. Among cases reported in the U.S. between 1985 and 1992, death occurred in only 0.2% of cases, but may be up to 10% in people with malnutrition. Most of those who die from the infection are less than five years old.
The measles vaccine is effective at preventing the disease, is exceptionally safe, and is often delivered in combination with other vaccines. Due to the ease with which measles is transmitted from person to person in a community, more than 95% of the community must be vaccinated in order to achieve herd immunity. Vaccination resulted in an 80% decrease in deaths from measles between 2000 and 2017, with about 85% of children worldwide having received their first dose as of 2017.
Measles affects about 20 million people a year, primarily in the developing areas of Africa and Asia. It is one of the leading vaccine-preventable disease causes of death. In 1980, 2.6 million people died from measles, and in 1990, 545,000 died due to the disease; by 2014, global vaccination programs had reduced the number of deaths from measles to 73,000. Despite these trends, rates of disease and deaths increased from 2017 to 2019 due to a decrease in immunization.
Signs and symptoms
Symptoms typically begin 10–14 days after exposure. The classic symptoms include a four-day fever (the four Ds) and the three Cs – cough, coryza (head cold, fever, sneezing), and conjunctivitis (red eyes) – along with a maculopapular rash. Fever is common and typically lasts for about one week; the fever seen with measles is often as high as 40 °C (104 °F).
Koplik's spots seen inside the mouth are diagnostic for measles, but are temporary and therefore rarely seen. Koplik spots are small white spots that are commonly seen on the inside of the cheeks opposite the molars. They appear as "grains of salt on a reddish background." Recognizing these spots before a person reaches their maximum infectiousness can help reduce the spread of the disease.
The characteristic measles rash is classically described as a generalized red maculopapular rash that begins several days after the fever starts. It starts on the back of the ears and, after a few hours, spreads to the head and neck before spreading to cover most of the body. The measles rash appears two to four days after the initial symptoms and lasts for up to eight days. The rash is said to "stain", changing color from red to dark brown, before disappearing. Overall, measles usually resolves after about three weeks.
People who have been vaccinated against measles but have incomplete protective immunity may experience a form of modified measles. Modified measles is characterized by a prolonged incubation period and milder, less characteristic symptoms (a sparse and discrete rash of short duration).
Complications
Complications of measles are relatively common, ranging from mild ones such as diarrhea to serious ones such as pneumonia (either direct viral pneumonia or secondary bacterial pneumonia), laryngotracheobronchitis (croup) (either direct viral laryngotracheobronchitis or secondary bacterial bronchitis), otitis media, acute brain inflammation, corneal ulceration (leading to corneal scarring), and subacute sclerosing panencephalitis, which is progressive, eventually lethal, and occurs in about 1 in 600 unvaccinated infants infected under 15 months of age, and more rarely in older children and adults.
In addition, measles can suppress the immune system for weeks to months, and this can contribute to bacterial superinfections such as otitis media and bacterial pneumonia. Two months after recovery there is an 11–73% decrease in the number of antibodies against other bacteria and viruses.
The death rate in the 1920s was around 30% for measles pneumonia. People who are at high risk for complications are infants and children aged less than 5 years; adults aged over 20 years; pregnant women; people with compromised immune systems, such as from leukemia, HIV infection or innate immunodeficiency; and those who are malnourished or have vitamin A deficiency. Complications are usually more severe in adults. Between 1987 and 2000, the case fatality rate across the United States was three deaths per 1,000 cases attributable to measles, or 0.3%. In underdeveloped nations with high rates of malnutrition and poor healthcare, fatality rates have been as high as 28%. In immunocompromised persons (e.g., people with AIDS) the fatality rate is approximately 30%.
Even in previously healthy children, measles can cause serious illness requiring hospitalization. One out of every 1,000 measles cases progresses to acute encephalitis, which often results in permanent brain damage. One to three out of every 1,000 children who become infected with measles will die from respiratory and neurological complications.
Cause
Measles is caused by the measles virus, a single-stranded, negative-sense, enveloped RNA virus of the genus Morbillivirus within the family Paramyxoviridae.
The virus is highly contagious and is spread by coughing and sneezing via close personal contact or direct contact with secretions. Measles is the most contagious virus known. After an infected person coughs or sneezes, the virus remains infective for up to two hours in that airspace or on nearby surfaces. Measles is so contagious that if one person has it, 90% of non-immune people who have close contact with them (e.g., household members) will also become infected. Humans are the only natural hosts of the virus, and no other animal reservoirs are known to exist, although mountain gorillas are believed to be susceptible to the disease.
Risk factors for measles virus infection include immunodeficiency caused by HIV/AIDS, immunosuppression following receipt of an organ or a stem cell transplant, alkylating agents, or corticosteroid therapy, regardless of immunization status; travel to areas where measles commonly occurs or contact with travelers from such an area; and the loss of passive, inherited antibodies before the age of routine immunization.
Pathophysiology
Once the measles virus gets onto the mucosa, it infects the epithelial cells in the trachea or bronchi. Measles virus uses a protein on its surface called hemagglutinin (H protein), to bind to a target receptor on the host cell, which could be CD46, which is expressed on all nucleated human cells, CD150, aka signaling lymphocyte activation molecule or SLAM, which is found on immune cells like B or T cells, and antigen-presenting cells, or nectin-4, a cellular adhesion molecule. Once bound, the fusion, or F protein helps the virus fuse with the membrane and ultimately get inside the cell.
As the virus is a single-stranded negative-sense RNA virus, it includes the enzyme RNA-dependent RNA polymerase (RdRp) which is used to transcribe its genome into a positive-sense mRNA strand.
After entering a cell, it is ready to be translated into viral proteins, wrapped in the cell's lipid envelope, and sent out of the cell as a newly made virus. Within days, the measles virus spreads through local tissue and is picked up by dendritic cells and alveolar macrophages, and carried from that local tissue in the lungs to the local lymph nodes. From there it continues to spread, eventually getting into the blood and spreading to more lung tissue, as well as other organs like the intestines and the brain. Functional impairment of the infected dendritic cells by the measles virus is thought to contribute to measles-induced immunosuppression.
Diagnosis
Typically, clinical diagnosis begins with the onset of fever and malaise about 10 days after exposure to the measles virus, followed by the emergence of cough, coryza, and conjunctivitis that worsen in severity over the 4 days after appearing. Observation of Koplik's spots is also diagnostic. Other possible conditions that can result in these symptoms include parvovirus, dengue fever, Kawasaki disease, and scarlet fever. Laboratory confirmation is, however, strongly recommended.
Laboratory testing
Laboratory diagnosis of measles can be done with confirmation of positive measles IgM antibodies or detection of measles virus RNA from throat, nasal or urine specimens using the reverse transcription polymerase chain reaction assay. This method is particularly useful to confirm cases when the IgM antibody results are inconclusive. For people unable to have their blood drawn, saliva can be collected for salivary measles-specific IgA testing. Salivary tests used to diagnose measles involve collecting a saliva sample and testing for the presence of measles antibodies. This method is not ideal, as saliva contains many other fluids and proteins which may make it difficult to collect samples and detect measles antibodies. Saliva also contains 800 times fewer antibodies than blood samples, which makes salivary testing additionally difficult. Positive contact with other people known to have measles adds evidence to the diagnosis.
Prevention
Mothers who are immune to measles pass antibodies to their children while they are still in the womb, especially if the mother acquired immunity through infection rather than vaccination. Such antibodies will usually give newborn infants some immunity against measles, but these antibodies are gradually lost over the course of the first nine months of life. Infants under one year of age whose maternal anti-measles antibodies have disappeared become susceptible to infection with the measles virus.
In developed countries, it is recommended that children be immunized against measles at 12 months, generally as part of a three-part MMR vaccine (measles, mumps, and rubella). The vaccine is generally not given before this age because such infants respond inadequately to the vaccine due to an immature immune system. A second dose of the vaccine is usually given to children between the ages of four and five, to increase rates of immunity. Measles vaccines have been given to over a billion people. Vaccination rates have been high enough to make measles relatively uncommon. Adverse reactions to vaccination are rare, with fever and pain at the injection site being the most common. Life-threatening adverse reactions occur in less than one per million vaccinations (<0.0001%).
In developing countries where measles is common, the World Health Organization (WHO) recommends two doses of vaccine be given, at six and nine months of age. The vaccine should be given whether the child is HIV-infected or not. The vaccine is less effective in HIV-infected infants than in the general population, but early treatment with antiretroviral drugs can increase its effectiveness. Measles vaccination programs are often used to deliver other child health interventions as well, such as bed nets to protect against malaria, antiparasite medicine and vitamin A supplements, and so contribute to the reduction of child deaths from other causes.
The Advisory Committee on Immunization Practices (ACIP) recommends that all adult international travelers who do not have positive evidence of previous measles immunity receive two doses of MMR vaccine before traveling, although birth before 1957 is presumptive evidence of immunity. Those born in the United States before 1957 are likely to have been naturally infected with measles virus and generally need not be considered susceptible.
There have been false claims of an association between the measles vaccine and autism; this incorrect concern has reduced the rate of vaccination and increased the number of cases of measles where immunization rates became too low to maintain herd immunity. Additionally, there have been false claims that measles infection protects against cancer.
Administration of the MMR vaccine may prevent measles after exposure to the virus (post-exposure prophylaxis). Post-exposure prophylaxis guidelines are specific to jurisdiction and population. Passive immunization against measles by an intramuscular injection of antibodies could be effective up to the seventh day after exposure. Compared to no treatment, the risk of measles infection is reduced by 83%, and the risk of death by measles is reduced by 76%. However, the effectiveness of passive immunization in comparison to active measles vaccine is not clear.
The MMR vaccine is 95% effective for preventing measles after one dose if the vaccine is given to a child who is 12 months or older; if a second dose of the MMR vaccine is given, it will provide immunity in 99% of children.
There is no evidence that the measles vaccine virus can be transmitted to other persons.
Treatment
There is no specific antiviral treatment if measles develops. Instead the medications are generally aimed at treating superinfections, maintaining good hydration with adequate fluids, and pain relief. Some groups, like young children and the severely malnourished, are also given vitamin A, which acts as an immunomodulator that boosts the antibody responses to measles and decreases the risk of serious complications.
Medications
Treatment is supportive, with ibuprofen or paracetamol (acetaminophen) to reduce fever and pain and, if required, a fast-acting medication to dilate the airways for cough. As for aspirin, some research has suggested a correlation between children who take aspirin during a viral illness and the development of Reye syndrome, so it is generally avoided.
The use of vitamin A during treatment is recommended to decrease the risk of blindness; however, it does not prevent or cure the disease. A systematic review of trials into its use found no reduction in overall mortality, but two doses (200,000 IU) of vitamin A was shown to reduce mortality for measles in children younger than two years of age. It is unclear if zinc supplementation in children with measles affects outcomes as it has not been sufficiently studied. There are no adequate studies on whether Chinese medicinal herbs are effective.
Prognosis
Most people survive measles, though in some cases, complications may occur. About 1 in 4 individuals will be hospitalized and 1–2 in 1,000 will die. Complications are more likely in children under age 5 and adults over age 20. Pneumonia is the most common fatal complication of measles infection and accounts for 56–86% of measles-related deaths.
Possible consequences of measles virus infection include laryngotracheobronchitis, sensorineural hearing loss, and—in about 1 in 10,000 to 1 in 300,000 cases—panencephalitis, which is usually fatal. Acute measles encephalitis is another serious risk of measles virus infection. It typically occurs two days to one week after the measles rash breaks out and begins with very high fever, severe headache, convulsions and altered mentation. A person with measles encephalitis may become comatose, and death or brain injury may occur.
For people having had measles, it is rare to ever have a symptomatic reinfection.
The measles virus can deplete previously acquired immune memory by killing cells that make antibodies, and thus weakens the immune system, which can cause deaths from other diseases. Suppression of the immune system by measles lasts about two years and has been epidemiologically implicated in up to 90% of childhood deaths in third world countries, and historically may have caused rather more deaths in the United States, the UK and Denmark than were directly caused by measles. Although the measles vaccine contains an attenuated strain, it does not deplete immune memory.
Epidemiology
Measles is extremely infectious and its continued circulation in a community depends on the generation of susceptible hosts by birth of children. In communities that generate insufficient new hosts the disease will die out. This concept was first recognized in measles by Bartlett in 1957, who referred to the minimum number supporting measles as the critical community size (CCS). Analysis of outbreaks in island communities suggested that the CCS for measles is around 250,000. Due to the ease with which measles is transmitted from person to person in a community, more than 95% of the community must be vaccinated in order to achieve herd immunity.
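This coverage target can be related to the basic reproduction number through the standard herd immunity threshold; as an illustrative calculation using the commonly cited range of 12 to 18 (not a source-specific derivation):

\[ p_c = 1 - \frac{1}{R_0}, \qquad R_0 = 12 \Rightarrow p_c \approx 92\%, \qquad R_0 = 18 \Rightarrow p_c \approx 94\%, \]

so that, allowing for imperfect vaccine effectiveness and uneven coverage, a target above 95% is consistent with these values.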
In 2011, the WHO estimated that 158,000 deaths were caused by measles. This is down from 630,000 deaths in 1990. As of 2018, measles remains a leading cause of vaccine-preventable deaths in the world. In developed countries the mortality rate is lower; for example, in England and Wales from 2007 to 2017 death occurred in two to three cases out of 10,000. In the United States, one to three out of every 1,000 infected children die (0.1–0.2%). In populations with high levels of malnutrition and a lack of adequate healthcare, mortality can be as high as 10%. In cases with complications, the rate may rise to 20–30%. In 2012, the number of deaths due to measles was 78% lower than in 2000 due to increased rates of immunization among UN member states.
Even in countries where vaccination has been introduced, rates may remain high. Measles is a leading cause of vaccine-preventable childhood mortality. Worldwide, the fatality rate has been significantly reduced by a vaccination campaign led by partners in the Measles Initiative: the American Red Cross, the United States CDC, the United Nations Foundation, UNICEF and the WHO. Globally, measles fell 60% from an estimated 873,000 deaths in 1999 to 345,000 in 2005. Estimates for 2008 indicate deaths fell further to 164,000 globally, with 77% of the remaining measles deaths in 2008 occurring within the Southeast Asian region. There were 142,300 measles related deaths globally in 2018, of which most cases were reported from African and eastern Mediterranean regions. These estimates were slightly higher than that of 2017, when 124,000 deaths were reported due to measles infection globally.
In 2000, the WHO established the Global Measles and Rubella Laboratory Network (GMRLN) to provide laboratory surveillance for measles, rubella, and congenital rubella syndrome. Data from 2016 to 2018 show that the most frequently detected measles virus genotypes are decreasing, suggesting that increasing global population immunity has decreased the number of chains of transmission.
Cases reported in the first three months of 2019 were 300% higher than in the first three months of 2018, with outbreaks in every region of the world, even in countries with high overall vaccination coverage, where it spread among clusters of unvaccinated people. The number of reported cases as of mid-November 2019 was over 413,000 globally, with an additional 250,000 cases in the DRC (as reported through its national system), continuing the increasing trend in infections reported earlier in 2019 compared with 2018. In 2019, the total number of cases worldwide climbed to 869,770. The number of cases reported for 2020 was lower compared to 2019. According to the WHO, the COVID-19 pandemic hindered vaccination campaigns in at least 68 countries, including countries that were experiencing outbreaks, which increased the risk of additional cases.
In 2022, there were an estimated 136,000 measles deaths globally, mostly among unvaccinated or under vaccinated children under the age of 5 years.
In February 2024, the World Health Organization said more than half of the world was at risk of a measles outbreak due to COVID-19 pandemic-related disruptions to vaccination. All the world regions have reported such outbreaks with the exception of the Americas, though the Americas could still be expected to become a hotspot in the future. Death rates during the outbreaks tend to be higher among poorer countries, but middle-income nations are also heavily impacted, according to the WHO.
In November 2024, the WHO and CDC reported that measles cases had increased by 20% in the previous year, primarily due to insufficient vaccine coverage in the world's poorest and conflict-affected regions. Nearly half of the major outbreaks occurred in Africa, where deaths rose by 37%, the agencies noted. In 2023, approximately 10.3 million cases of the highly contagious disease were reported, up from 8.65 million the previous year.
Europe
In England and Wales, though deaths from measles were uncommon, they averaged about 500 per year in the 1940s. Deaths diminished with the improvement of medical care in the 1950s but the incidence of the disease did not retreat until vaccination was introduced in the late 1960s. Wider coverage was achieved in the 1980s with the measles, mumps and rubella (MMR) vaccine.
In 2013–14, there were almost 10,000 cases in 30 European countries. Most cases occurred in unvaccinated individuals and over 90% of cases occurred in Germany, Italy, Netherlands, Romania, and United Kingdom. Between October 2014 and March 2015, a measles outbreak in the German capital of Berlin resulted in at least 782 cases. In 2017, numbers continued to increase in Europe to 21,315 cases, with 35 deaths. In preliminary figures for 2018, reported cases in the region increased 3-fold to 82,596 in 47 countries, with 72 deaths; Ukraine had the most cases (53,218), with the highest incidence rates being in Ukraine (1209 cases per million), Serbia (579), Georgia (564) and Albania (500). The previous year (2017) saw an estimated measles vaccine coverage of 95% for the first dose and 90% for the second dose in the region, the latter figure being the highest-ever estimated second-dose coverage.
In 2019, the United Kingdom, Albania, the Czech Republic, and Greece lost their measles-free status due to ongoing and prolonged spread of the disease in these countries. In the first 6 months of 2019, 90,000 cases occurred in Europe.
Americas
As a result of widespread vaccination, the disease was declared eliminated from the Americas in 2016. However, there were cases again in 2017, 2018, 2019, and 2020 in this region.
United States
In the United States, measles affected approximately 3,000 people per million in the 1960s before the vaccine was available. With consistent widespread childhood vaccination, this figure fell to 13 cases per million by the 1980s, and to about 1 case per million by 2000.
In 1991, an outbreak of measles in Philadelphia was centered at the Faith Tabernacle Congregation, a faith healing church that actively discouraged parishioners from vaccinating their children. Over 1400 people were infected with measles and nine children died.
Before immunization in the United States, between three and four million cases occurred each year. The United States was declared free of circulating measles in 2000, with 911 cases from 2001 to 2011. In 2014 the CDC said endemic measles, rubella, and congenital rubella syndrome had not returned to the United States. Occasional measles outbreaks persist, however, because of cases imported from abroad, of which more than half are the result of unvaccinated U.S. residents who are infected abroad and infect others upon return to the United States. The CDC continues to recommend measles vaccination throughout the population to prevent outbreaks like these.
In 2014, an outbreak was initiated in Ohio when two unvaccinated Amish men harboring asymptomatic measles returned to the United States from missionary work in the Philippines. Their return to a community with low vaccination rates led to an outbreak that rose to include a total of 383 cases across nine counties. Of the 383 cases, 340 (89%) occurred in unvaccinated individuals.
From 4 January to 2 April 2015, there were 159 cases of measles reported to the CDC. Of those 159 cases, 111 (70%) were determined to have come from an earlier exposure in late December 2014. This outbreak was believed to have originated from the Disneyland theme park in California. The Disneyland outbreak was held responsible for the infection of 147 people in seven U.S. states as well as Mexico and Canada, most of whom were either unvaccinated or had unknown vaccination status. Of the cases, 48% were unvaccinated and 38% were unsure of their vaccination status. The initial exposure to the virus was never identified.
In 2015, a woman in Washington state died of pneumonia as a result of measles. She was the first measles fatality in the U.S. since 2003. The woman had been vaccinated against measles but was taking immunosuppressive drugs for another condition. The drugs suppressed her immunity to measles, and she became infected; she did not develop a rash but contracted pneumonia, which caused her death.
In June 2017, the Maine Health and Environmental Testing Laboratory confirmed a case of measles in Franklin County, the first case of measles in 20 years for the state of Maine. In 2018, one case occurred in Portland, Oregon, with 500 people exposed; 40 of them lacked immunity to the virus and were being monitored by county health officials as of 2 July 2018. There were 273 cases of measles reported throughout the United States in 2018, including an outbreak in Brooklyn with more than 200 reported cases from October 2018 to February 2019. The outbreak was tied to the population density of the Orthodox Jewish community, with the initial exposure traced to an unvaccinated child who caught measles while visiting Israel.
A resurgence of measles occurred during 2019, generally attributed to parents choosing not to have their children vaccinated; most of the reported cases occurred in people 19 years old or younger. Cases were first reported in Washington state in January, with an outbreak of at least 58 confirmed cases, most within Clark County, which has a higher rate of vaccination exemptions than the rest of the state; nearly one in four kindergartners in Clark County had not received vaccinations, according to state data. This led Washington governor Jay Inslee to declare a state of emergency, and the state legislature to introduce legislation disallowing vaccination exemptions for personal or philosophical reasons. In April 2019, New York Mayor Bill de Blasio declared a public health emergency because of "a huge spike" in measles cases: there had been 285 cases centred on the Orthodox Jewish areas of Brooklyn in 2018, compared with only two cases in 2017, and 168 more in neighboring Rockland County. Other outbreaks included Santa Cruz County and Butte County in California, and the states of New Jersey and Michigan. By late April 2019, 695 cases of measles had been reported in 22 states, the highest number since the disease was declared eliminated in 2000. From 1 January to 31 December 2019, 1,282 individual cases of measles were confirmed in 31 states, the greatest number reported in the U.S. since 1992. Of the 1,282 cases, 128 people were hospitalized and 61 reported complications, including pneumonia and encephalitis.
Following the end of the 2019 outbreak, reported cases have fallen to pre-outbreak levels: 13 cases in 2020, 49 cases in 2021, and 121 cases in 2022.
Brazil
The spread of measles had been interrupted in Brazil in 2016, with the last-known case twelve months earlier. This last case was in the state of Ceará.
Brazil was awarded a measles elimination certificate by the Pan American Health Organization in 2016, but the Ministry of Health acknowledged that the country has struggled to keep this certification, since two outbreaks were identified in 2018, one in the state of Amazonas and another in Roraima, in addition to cases in other states (Rio de Janeiro, Rio Grande do Sul, Pará, São Paulo and Rondônia), totaling 1,053 confirmed cases as of 1 August 2018. In these outbreaks, and in most other cases, the contagion was related to the importation of the virus, especially from Venezuela. This was confirmed by the genotype of the virus (D8) that was identified, which is the same one that circulates in Venezuela.
Southeast Asia
In the Vietnamese measles epidemic in the spring of 2014, an estimated 8,500 measles cases were reported as of 19 April, with 114 fatalities; as of 30 May, 21,639 suspected measles cases had been reported, with 142 measles-related fatalities. In the Naga Self-Administered Zone in a remote northern region of Myanmar, at least 40 children died during a measles outbreak in August 2016 that was probably caused by lack of vaccination in an area of poor health infrastructure. During the 2019 Philippines measles outbreak, 23,563 measles cases were reported in the country, with 338 fatalities. A measles outbreak also occurred among the Malaysian Orang Asli sub-group of Batek people in the state of Kelantan from May 2019, killing 15 members of the tribe. In 2024, a measles outbreak was declared in the Bangsamoro region in the Philippines, with at least 592 cases and 3 deaths.
South Pacific
A measles outbreak in New Zealand resulted in 2,193 confirmed cases and two deaths; an outbreak in Tonga produced 612 cases of measles.
Samoa
A measles outbreak in Samoa in late 2019 resulted in over 5,700 cases of measles and 83 deaths, out of a Samoan population of 200,000. Over three percent of the population were infected, and a state of emergency was declared from 17 November to 7 December. A vaccination campaign raised the measles vaccination rate from 31–34% in 2018 to an estimated 94% of the eligible population by December 2019.
Africa
The Democratic Republic of the Congo and Madagascar have reported the highest numbers of cases in 2019. However, cases have decreased in Madagascar as a result of nationwide emergency measles vaccine campaigns. As of August 2019 outbreaks were occurring in Angola, Cameroon, Chad, Nigeria, South Sudan and Sudan.
Madagascar
An outbreak of measles in 2018 resulted in more than 115,000 cases and over 1,200 deaths.
Democratic Republic of Congo
An outbreak of measles with nearly 5,000 deaths and 250,000 infections occurred in 2019, after the disease spread to all the provinces in the country. Most deaths were among children under five years of age. The World Health Organization (WHO) has reported this as the world's largest and fastest-moving epidemic.
History
Measles is of zoonotic origin, having evolved from rinderpest, which infects cattle. A precursor of the measles virus began causing infections in humans as early as the 4th century BC or as late as after 500 AD. The Antonine Plague of 165–180 AD has been speculated to have been measles, but the actual cause of this plague is unknown and smallpox is a more likely cause. The first systematic description of measles, and its distinction from smallpox and chickenpox, is credited to the Persian physician Muhammad ibn Zakariya al-Razi (860–932), who published The Book of Smallpox and Measles. At the time of Razi's book, it is believed that outbreaks were still limited and that the virus was not fully adapted to humans. Sometime between 1100 and 1200 AD, the measles virus fully diverged from rinderpest, becoming a distinct virus that infects humans. This agrees with the observation that measles requires a susceptible population of over 500,000 to sustain an epidemic, a situation that occurred in historic times following the growth of medieval European cities.
Measles is an endemic disease, meaning it has been continually present in a community and many people develop resistance. In populations not previously exposed to measles, exposure to the new disease can be devastating. In 1529, a measles outbreak in Cuba killed two-thirds of those indigenous people who had previously survived smallpox. Two years later, measles was responsible for the deaths of half the population of Honduras, and it also ravaged Mexico, Central America, and the Inca civilization.
Between roughly 1855 and 2005, measles is estimated to have killed about 200 million people worldwide.
The 1846 measles outbreak in the Faroe Islands was unusual for being well studied. Measles had not been seen on the islands for 60 years, so almost no residents had any acquired immunity. Three-quarters of the residents got sick, and more than 100 (1–2%) died from it before the epidemic burned itself out. Peter Ludvig Panum observed the outbreak and determined that measles was spread through direct contact of contagious people with people who had never had measles.
Measles killed 20 percent of Hawaii's population in the 1850s. In 1875, measles killed over 40,000 Fijians, approximately one-third of the population. In the 19th century, the disease killed more than half of the Great Andamanese population. Seven to eight million children are thought to have died from measles each year before the vaccine was introduced.
In 1914, a statistician for the Prudential Insurance Company estimated from a survey of 22 countries that 1% of all deaths in the temperate zone were caused by measles. He observed also that 1–6% of cases of measles ended fatally, the difference depending on age (0–3 being the worst), social conditions (e.g. overcrowded tenements) and pre-existing health conditions.
In 1954, the virus causing the disease was isolated from a 13-year-old boy from the United States, David Edmonston, and adapted and propagated on chick embryo tissue culture. The World Health Organization recognizes eight clades, named A, B, C, D, E, F, G, and H. Twenty-three strains of the measles virus have been identified and designated within these clades. While at Merck, Maurice Hilleman developed the first successful vaccine. Licensed vaccines to prevent the disease became available in 1963. An improved measles vaccine became available in 1968. Measles as an endemic disease was eliminated from the United States in 2000, but continues to be reintroduced by international travelers. In 2019 there were at least 1,241 cases of measles in the United States distributed across 31 states, with over three quarters in New York.
Society and culture
German anti-vaccination campaigner and HIV/AIDS denialist Stefan Lanka posed a challenge on his website in 2011, offering a sum of €100,000 for anyone who could scientifically prove that measles is caused by a virus and determine the diameter of the virus. He posited that the illness is psychosomatic and that the measles virus does not exist. When provided with overwhelming scientific evidence from various medical studies by German physician David Bardens, Lanka did not accept the findings, prompting Bardens to pursue the claim in court. The initial legal case ended with the ruling that Lanka was to pay the prize. However, on appeal, Lanka was ultimately not required to pay the award because the submitted evidence did not meet his exact requirements. The case received wide international coverage that prompted many to comment on it, including neurologist, well-known skeptic and science-based medicine advocate Steven Novella, who called Lanka "a crank".
As outbreaks easily occur in under-vaccinated populations, the disease is seen as a test of sufficient vaccination within a population. Measles outbreaks have been on the rise in the United States, especially in communities with lower rates of vaccination. Uneven vaccine coverage within a single territory, by age or social class, can shape differing general perceptions of vaccination efficacy. Measles is often introduced to a region by travelers from other countries and typically spreads to those who have not received the measles vaccination.
Alternative names
Other names include morbilli, rubeola, red measles, and English measles.
Research
In May 2015, the journal Science published a report in which researchers found that the measles infection can leave a population at increased risk for mortality from other diseases for two to three years. Results from additional studies that show the measles virus can kill cells that make antibodies were published in November 2019.
A specific drug treatment for measles, ERDRP-0519, has shown promising results in animal studies, but has not yet been tested in humans.
References
External links
Initiative for Vaccine Research (IVR): Measles, World Health Organization (WHO)
Measles FAQ U.S. Centers for Disease Control and Prevention (CDC)
Case of an adult male with measles (facial photo)
Pictures of measles
Virus Pathogen Database and Analysis Resource (ViPR): Paramyxoviridae
Atypical pneumonias
Airborne diseases
Infectious diseases with eradication efforts
Pediatrics
Vaccine-preventable diseases
Virus-related cutaneous conditions
Wikipedia emergency medicine articles ready to translate
Wikipedia medicine articles ready to translate
Articles containing video clips | Measles | [
"Biology"
] | 8,353 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
58,921 | https://en.wikipedia.org/wiki/Natural%20disaster | A natural disaster is the very harmful impact on a society or community after a natural hazard event. Some examples of natural hazard events include avalanches, droughts, earthquakes, floods, heat waves, landslides, tropical cyclones, volcanic activity and wildfires. Additional natural hazards include blizzards, dust storms, firestorms, hails, ice storms, sinkholes, thunderstorms, tornadoes and tsunamis. A natural disaster can cause loss of life or damage property. It typically causes economic damage. How bad the damage is depends on how well people are prepared for disasters and how strong the buildings, roads, and other structures are. Scholars have been saying that the term natural disaster is unsuitable and should be abandoned. Instead, the simpler term disaster could be used. At the same time the type of hazard would be specified. A disaster happens when a natural or human-made hazard impacts a vulnerable community. It results from the combination of the hazard and the exposure of a vulnerable society.
Nowadays it is hard to distinguish between natural and human-made disasters. The term natural disaster was already challenged in 1976. Human choices in architecture, fire risk, and resource management can cause or worsen natural disasters. Climate change also affects how often disasters caused by extreme weather hazards occur. These "climate hazards" include floods, heat waves, wildfires, tropical cyclones, and the like.
Some things can make natural disasters worse. Examples are inadequate building norms, marginalization of people and poor choices on land use planning. Many developing countries do not have proper disaster risk reduction systems. This makes them more vulnerable to natural disasters than high income countries. An adverse event only becomes a disaster if it occurs in an area with a vulnerable population.
Terminology
A natural disaster is the highly harmful impact on a society or community following a natural hazard event. The term "disaster" itself is defined as follows: "Disasters are serious disruptions to the functioning of a community that exceed its capacity to cope using its own resources. Disasters can be caused by natural, man-made and technological hazards, as well as various factors that influence the exposure and vulnerability of a community."
The US Federal Emergency Management Agency (FEMA) explains the relationship between natural disasters and natural hazards as follows: "Natural hazards and natural disasters are related but are not the same. A natural hazard is the threat of an event that will likely have a negative impact. A natural disaster is the negative impact following an actual occurrence of natural hazard in the event that it significantly harms a community. An example of the distinction between a natural hazard and a disaster is that an earthquake is the hazard which caused the 1906 San Francisco earthquake disaster."
A natural hazard is a natural phenomenon that might have a negative effect on humans and other animals, or the environment. Natural hazard events can be classified into two broad categories: geophysical and biological. Natural hazards can be provoked or affected by anthropogenic processes, e.g. land-use change, drainage and construction.
There are 18 natural hazards included in the National Risk Index of FEMA: avalanche, coastal flooding, cold wave, drought, earthquake, hail, heat wave, tropical cyclone, ice storm, landslide, lightning, riverine flooding, strong wind, tornado, tsunami, volcanic activity, wildfire, winter weather. In addition, there are also dust storms.
Critique
The term natural disaster was called a misnomer as early as 1976. A disaster is a result of a natural hazard impacting a vulnerable community. But disasters can be avoided. Earthquakes, droughts, floods, storms, and other events lead to disasters because of human action and inaction. Poor land and policy planning and deregulation can create worse conditions. They often involve development activities that ignore or fail to reduce the disaster risks. Nature alone is blamed for disasters even when disasters result from failures in development. Disasters also result from the failure of societies to prepare. Examples of such failures include inadequate building norms, marginalization of people, inequities, overexploitation of resources, extreme urban sprawl and climate change.
Defining disasters as solely natural events has serious implications when it comes to understanding the causes of a disaster and the distribution of political and financial responsibility in disaster risk reduction, disaster management, compensation, insurance and disaster prevention. Using natural to describe disasters misleads people to think the devastating results are inevitable, out of our control, and are simply part of a natural process. Hazards (earthquakes, hurricanes, pandemics, drought etc.) are inevitable, but the impact they have on society is not.
Thus, the term natural disaster is unsuitable and should be abandoned in favor of the simpler term disaster, while also specifying the category (or type) of hazard.
Scale
By region and country
As of 2019, the countries with the highest share of disability-adjusted life years (DALY) lost due to natural disasters are Bahamas, Haiti, Zimbabwe and Armenia (probably mainly due to the Spitak Earthquake). The Asia-Pacific region is the world's most disaster prone region. A person in Asia-Pacific is five times more likely to be hit by a natural disaster than someone living in other regions.
Between 1995 and 2015, the greatest number of natural disasters occurred in the United States, China and India. In 2012, there were 905 natural disasters worldwide, 93% of which were weather-related disasters. Overall costs were US$170 billion and insured losses $70 billion. 2012 was a moderate year. 45% were meteorological (storms), 36% were hydrological (floods), 12% were climatological (heat waves, cold waves, droughts, wildfires) and 7% were geophysical events (earthquakes and volcanic eruptions). Between 1980 and 2011 geophysical events accounted for 14% of all natural catastrophes.
Developing countries often have ineffective communication systems as well as insufficient support for disaster risk reduction and emergency management. This makes them more vulnerable to natural disasters than high income countries.
Slow and rapid onset events
Natural hazards occur across different time scales as well as area scales. Tornadoes and flash floods are rapid onset events, meaning they occur with a short warning time and are short-lived. Slow onset events can also be very damaging; for example, drought is a natural hazard that develops slowly, sometimes over years.
Impacts
A natural disaster may cause loss of life, injury or other health impacts, property damage, loss of livelihoods and services, social and economic disruption, or environmental damage.
On death rates
Globally, the total number of deaths from natural disasters has been reduced by 75% over the last 100 years, due to the increased development of countries, increased preparedness, better education, better methods, and aid from international organizations. Since the global population has grown over the same period, the decrease in deaths per capita is even larger: the per-capita death rate has dropped to about 6% of its original value.
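As a rough check of the arithmetic behind this claim, the short calculation below normalises the death toll of a century ago to 1.0 and uses rounded world-population figures of roughly 1.9 billion then and 7.8 billion now; these population numbers are illustrative assumptions rather than figures taken from the sources cited here.

# Illustrative sketch only: the population figures are rounded assumptions.
deaths_then = 1.0                      # normalise the death toll of a century ago to 1.0
deaths_now = 0.25 * deaths_then        # a 75% reduction in absolute deaths
pop_then, pop_now = 1.9e9, 7.8e9       # approximate world population, then vs. now
rate_then = deaths_then / pop_then     # deaths per capita a century ago
rate_now = deaths_now / pop_now        # deaths per capita today
print(round(rate_now / rate_then, 2))  # prints 0.06, i.e. about 6% of the original rate

With these figures, a roughly four-fold population increase turns a 75% drop in absolute deaths into a per-capita rate of about 6% of its original value, consistent with the statement above.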
The death rate from natural disasters is highest in developing countries due to the lower quality of building construction, infrastructure, and medical facilities.
On the economy
Global economic losses due to extreme weather, climate and water events are increasing. Costs have increased sevenfold from the 1970s to the 2010s. Direct losses from disasters have averaged above US$330 billion annually between 2015 and 2021. Socio-economic factors have contributed to this trend of increasing losses, such as population growth and increased wealth. This shows that increased exposure is the most important driver of economic losses. However, part of these are also due to human-induced climate change.
On the environment
During emergencies such as natural disasters and armed conflicts more waste may be produced, while waste management is given low priority compared with other services. Existing waste management services and infrastructures can be disrupted, leaving communities with unmanaged waste and increased littering. Under these circumstances human health and the environment are often negatively impacted.
Natural disasters (e.g. earthquakes, tsunamis, hurricanes) have the potential to generate a significant amount of waste within a short period. Waste management systems can be out of action or curtailed, often requiring considerable time and funding to restore. For example, the tsunami in Japan in 2011 produced huge amounts of debris: estimates of 5 million tonnes of waste were reported by the Japanese Ministry of the Environment. Some of this waste, mostly plastic and styrofoam, washed up on the coasts of Canada and the United States in late 2011. Along the west coast of the United States, this increased the amount of litter by a factor of 10 and may have transported alien species. Storms are also important generators of plastic litter. A study by Lo et al. (2020) reported a 100% increase in the amount of microplastics on beaches surveyed following a typhoon in Hong Kong in 2018.
A significant amount of plastic waste can be produced during disaster relief operations. Following the 2010 earthquake in Haiti, the generation of waste from relief operations was referred to as a "second disaster". The United States military reported that millions of water bottles and styrofoam food packages were distributed although there was no operational waste management system. Over 700,000 plastic tarpaulins and 100,000 tents were required for emergency shelters. The increase in plastic waste, combined with poor disposal practices, resulted in open drainage channels being blocked, increasing the risk of disease.
Conflicts can result in large-scale displacement of communities. People living under these conditions are often provided with minimal waste management facilities. Burn pits are widely used to dispose of mixed wastes, including plastics. Air pollution can lead to respiratory and other illnesses. For example, Sahrawi refugees have been living in five camps near Tindouf, Algeria for nearly 45 years. As waste collection services are underfunded and there is no recycling facility, plastics have flooded the camps' streets and surroundings. In contrast, the Azraq camp in Jordan for refugees from Syria has waste management services; of 20.7 tonnes of waste produced per day, 15% is recyclable.
On women and vulnerable populations
Because of the social, political and cultural context of many places throughout the world, women are often disproportionately affected by disaster. In the 2004 Indian Ocean tsunami, more women died than men, partly due to the fact that fewer women knew how to swim. During and after a natural disaster, women are at increased risk of being affected by gender based violence and are increasingly vulnerable to sexual violence. Disrupted police enforcement, lax regulations, and displacement all contribute to increased risk of gender based violence and sexual assault.
In addition to LGBT people and immigrants, women are also disproportionately victimized by religion-based scapegoating for natural disasters: fanatical religious leaders or adherents may claim that a god or gods are angry with women's independent, freethinking behavior, such as dressing 'immodestly', having sex or abortions. For example, Hindutva party Hindu Makkal Katchi and others blamed women's struggle for the right to enter the Sabarimala temple for the August 2018 Kerala floods, purportedly inflicted by the angry god Ayyappan.
During and after natural disasters, routine health behaviors become interrupted. In addition, health care systems may have broken down as a result of the disaster, further reducing access to contraceptives. Unprotected intercourse during this time can lead to increased rates of childbirth, unintended pregnancies and sexually transmitted infections (STIs).
Pregnant women are one of the groups disproportionately affected by natural disasters. Inadequate nutrition, little access to clean water, lack of health-care services and psychological stress in the aftermath of the disaster can lead to a significant increase in maternal morbidity and mortality. Furthermore, shortage of healthcare resources during this time can convert even routine obstetric complications into emergencies.
Once a vulnerable population has experienced a disaster, the community can take many years to repair and that repair period can lead to further vulnerability. The disastrous consequences of natural disaster also affect the mental health of affected communities, often leading to post-traumatic symptoms. These increased emotional experiences can be supported through collective processing, leading to resilience and increased community engagement.
On governments and voting processes
Disasters stress government capacity, as the government tries to conduct routine as well as emergency operations. Some theorists of voting behavior propose that citizens update their assessment of government effectiveness based on its response to disasters, which affects their vote choice in the next election. Indeed, some evidence, based on data from the United States, reveals that incumbent parties can lose votes if citizens perceive them as responsible for a poor disaster response, or gain votes based on perceptions of well-executed relief work. The latter study also finds, however, that voters do not reward incumbent parties for disaster preparedness, which may end up affecting government incentives to invest in such preparedness.
Disasters caused by geological hazards
Landslides
Avalanches
Earthquakes
An earthquake is the result of a sudden release of energy in the Earth's crust that creates seismic waves. At the Earth's surface, earthquakes manifest themselves by vibration, shaking, and sometimes displacement of the ground. Earthquakes are caused by slippage within geological faults. The underground point of origin of the earthquake is called the seismic focus. The point directly above the focus on the surface is called the epicenter. Earthquakes by themselves rarely kill people or wildlife – it is usually the secondary events that they trigger, such as building collapse, fires, tsunamis and volcanic eruptions, that cause death. Many of these can possibly be avoided by better construction, safety systems, early warning and planning.
Sinkholes
A sinkhole is a depression or hole in the ground caused by some form of collapse of the surface layer. When natural erosion, human mining or underground excavation makes the ground too weak to support the structures built on it, the ground can collapse and produce a sinkhole.
Coastal erosion
Coastal erosion is a physical process by which shorelines in coastal areas around the world shift and change, primarily in response to waves and currents that can be influenced by tides and storm surge. Coastal erosion can result from long-term processes (see also beach evolution) as well as from episodic events such as tropical cyclones or other severe storm events. Coastal erosion is one of the most significant coastal hazards. It forms a threat to infrastructure, capital assets and property.
Volcanic eruptions
Volcanoes can cause widespread destruction and consequent disaster in several ways. One hazard is the volcanic eruption itself, with the force of the explosion and falling rocks able to cause harm. Lava may also be released during the eruption of a volcano; as it leaves the volcano, it can destroy buildings, plants and animals due to its extreme heat. In addition, volcanic ash may form a cloud (generally after cooling) and settle thickly in nearby locations. When mixed with water, this forms a concrete-like material. In sufficient quantities, ash may cause roofs to collapse under its weight. Even small quantities will harm humans if inhaled – it has the consistency of ground glass and therefore causes laceration to the throat and lungs. Volcanic ash can also cause abrasion damage to moving machinery such as engines. The main killer of humans in the immediate surroundings of a volcanic eruption is pyroclastic flows, consisting of a cloud of hot ash which builds up in the air above the volcano and rushes down the slopes when the eruption no longer supports the lifting of the gases. It is believed that Pompeii was destroyed by a pyroclastic flow. A lahar is a volcanic mudflow or landslide. The 1953 Tangiwai disaster was caused by a lahar, as was the 1985 Armero tragedy in which the town of Armero was buried and an estimated 23,000 people were killed.
Volcanoes rated at 8 (the highest level) on the volcanic explosivity index are known as supervolcanoes. According to the Toba catastrophe theory, 75,000 to 80,000 years ago, a supervolcanic eruption at what is now Lake Toba in Sumatra reduced the human population to 10,000 or even 1,000 breeding pairs, creating a bottleneck in human evolution, and killed three-quarters of all plant life in the northern hemisphere. However, there is considerable debate regarding the veracity of this theory. The main danger from a supervolcano is the immense cloud of ash, which has a disastrous global effect on climate and temperature for many years.
Tsunami
A tsunami (plural: tsunamis or tsunami; from Japanese: 津波, lit. "harbour wave"; English pronunciation: /tsuːˈnɑːmi/), also known as a seismic sea wave or tidal wave, is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Tsunamis can be caused by undersea earthquakes such as the 2004 Boxing Day tsunami, or by landslides such as the one in 1958 at Lituya Bay, Alaska, or by volcanic eruptions such as the ancient eruption of Santorini. On March 11, 2011, a tsunami occurred near Fukushima, Japan and spread through the Pacific Ocean.
Disasters caused by extreme weather hazards
Some of the 18 natural hazards included in the National Risk Index of FEMA now have a higher probability of occurring, and at higher intensity, due to the effects of climate change. This applies to heat waves, droughts, wildfire and coastal flooding.
Hot and dry conditions
Heat waves
A heat wave is a period of unusually and excessively hot weather. Heat waves are rare and require specific combinations of weather events to take place, and may include temperature inversions, katabatic winds, or other phenomena. The worst heat wave in recent history was the European Heat Wave of 2003. The 2010 Northern Hemisphere summer resulted in severe heat waves which killed over 2,000 people. The heat caused hundreds of wildfires which led to widespread air pollution and burned thousands of square kilometers of forest.
Droughts
Well-known historical droughts include the 1997–2009 Millennium Drought in Australia which led to a water supply crisis across much of the country. As a result, many desalination plants were built for the first time (see list). In 2011, the State of Texas lived under a drought emergency declaration for the entire calendar year and suffered severe economic losses. The drought caused the Bastrop fires.
Duststorms
Firestorms
Wildfires
Wildfires are large fires which often start in wildland areas. Common causes include lightning and drought but wildfires may also be started by human negligence or arson. They can spread to populated areas and thus be a threat to humans and property, as well as wildlife. One example for a notable wildfire is the 1871 Peshtigo Fire in the United States, which killed at least 1700 people. Another one is the 2009 Victorian bushfires in Australia (collectively known as "Black Saturday bushfires"). In that year, a summer heat wave in Victoria, Australia, created conditions which fueled the massive bushfires in 2009. Melbourne experienced three days in a row of temperatures exceeding 40 °C (104 °F), with some regional areas sweltering through much higher temperatures.
Storms and heavy rain
Floods
A flood is an overflow of water that 'submerges' land. The EU Floods Directive defines a flood as a temporary covering of land that is usually dry with water. In the sense of 'flowing water', the word may also be applied to the inflow of the tides. Flooding may result from the volume of a body of water, such as a river or lake, becoming higher than usual, causing some of the water to escape its usual boundaries. While the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, a flood is not considered significant unless the water covers land used by humans, such as a village, city or other inhabited area, roads or expanses of farmland.
Thunderstorms
Severe storms, dust clouds and volcanic eruptions can generate lightning. Apart from the damage typically associated with storms, such as winds, hail and flooding, the lightning itself can damage buildings, ignite fires and kill by direct contact. Most deaths from lightning occur in the poorer countries of the Americas and Asia, where lightning is common and adobe mud brick housing provides little protection.
Tropical cyclone
Typhoon, cyclone, cyclonic storm and hurricane are different names for the same phenomenon: a tropical storm that forms over an ocean. It is fueled by water that evaporates from the ocean surface and feeds the developing storm. It is characterized by strong winds, heavy rainfall and thunderstorms. Which term is used depends on where the storm originates. In the Atlantic and Northeast Pacific, the term "hurricane" is used; in the Northwest Pacific, it is referred to as a "typhoon"; a "cyclone" occurs in the South Pacific and Indian Ocean.
The deadliest hurricane ever was the 1970 Bhola cyclone; the deadliest Atlantic hurricane was the Great Hurricane of 1780, which devastated Martinique, St. Eustatius and Barbados. Another notable hurricane is Hurricane Katrina, which devastated the Gulf Coast of the United States in 2005. Hurricanes may become more intense and produce more heavy rainfall as a consequence of human-induced climate change.
Tornadoes
A tornado is a violent and dangerous rotating column of air that is in contact with both the surface of the Earth and a cumulonimbus cloud, or, in rare cases, the base of a cumulus cloud. It is also referred to as a twister or a cyclone, although the word cyclone is used in meteorology in a wider sense to refer to any closed low pressure circulation. Tornadoes come in many shapes and sizes but typically take the form of a visible condensation funnel, the narrow end of which touches the Earth and is often encircled by a cloud of debris and dust. Tornadoes can occur one at a time, or can occur in large tornado outbreaks associated with supercells or in other large areas of thunderstorm development.
Most tornadoes have wind speeds of less than , are approximately across, and travel a few kilometers before dissipating. The most extreme tornadoes can attain wind speeds of more than , attain a width exceeding across, and stay on the ground for perhaps more than .
Cold-weather events
Blizzards
Blizzards are severe winter storms characterized by heavy snow and strong winds. When high winds stir up snow that has already fallen, it is known as a ground blizzard. Blizzards can impact local economic activities, especially in regions where snowfall is rare. The Great Blizzard of 1888 affected the United States, destroying many tons of wheat crops. In Asia, the 1972 Iran blizzard and the 2008 Afghanistan blizzard were the deadliest blizzards in history; in the former, an area the size of Wisconsin was entirely buried in snow. The 1993 Superstorm originated in the Gulf of Mexico and traveled north, causing damage in 26 American states as well as in Canada and leading to more than 300 deaths.
Hailstorms
Hail is precipitation in the form of ice that does not melt before it hits the ground. Hailstorms are produced by thunderstorms. Hailstones usually measure between in diameter. These can damage the location in which they fall. Hailstorms can be especially devastating to farm fields, ruining crops and damaging equipment. A particularly damaging hailstorm hit Munich, Germany, on July 12, 1984, causing about $2 billion in insurance claims.
Multi-hazard analysis
Each of the natural hazard types outlined above have very different characteristics, in terms of the spatial and temporal scales they influence, hazard frequency and return period, and measures of intensity and impact. These complexities result in "single-hazard" assessments being commonplace, where the hazard potential from one particular hazard type is constrained. In these examples, hazards are often treated as isolated or independent. An alternative is a "multi-hazard" approach which seeks to identify all possible natural hazards and their interactions or interrelationships.
Many examples exist of one natural hazard triggering or increasing the probability of one or more other natural hazards. For example, an earthquake may trigger landslides, whereas a wildfire may increase the probability of landslides being generated in the future. A detailed review of such interactions across 21 natural hazards identified 90 possible interactions, of varying likelihood and spatial importance. There may also be interactions between these natural hazards and anthropic processes. For example, groundwater abstraction may trigger groundwater-related subsidence.
Effective hazard analysis in any given area (e.g., for the purposes of disaster risk reduction) should ideally include an examination of all relevant hazards and their interactions. To be of most use for risk reduction, hazard analysis should be extended to risk assessment wherein the vulnerability of the built environment to each of the hazards is taken into account. This step is well developed for seismic risk, where the possible effect of future earthquakes on structures and infrastructure is assessed, as well as for risk from extreme wind and to a lesser extent flood risk. For other types of natural hazard the calculation of risk is more challenging, principally because of the lack of functions linking the intensity of a hazard and the probability of different levels of damage (fragility curves).
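To make the role of fragility curves concrete, the sketch below combines assumed annual probabilities for three hazard intensity levels with assumed damage probabilities for a single structure; every number is invented for illustration and does not come from any real hazard model.

# Illustrative sketch only: all probabilities and costs below are assumptions.
hazard_probability = {       # annual probability that an event of this intensity occurs
    "moderate": 0.10,
    "severe": 0.02,
    "extreme": 0.005,
}
damage_probability = {       # a discretised "fragility curve": chance of damage given intensity
    "moderate": 0.05,
    "severe": 0.40,
    "extreme": 0.90,
}
loss_if_damaged = 200_000    # assumed repair or replacement cost of the structure

expected_annual_loss = sum(
    hazard_probability[level] * damage_probability[level] * loss_if_damaged
    for level in hazard_probability
)
print(expected_annual_loss)  # 3500.0 with the illustrative numbers above

Real assessments use continuous fragility functions and hazard curves rather than three discrete levels, but the structure of the calculation is the same: without a function linking hazard intensity to damage probability, the expected loss cannot be computed, which is the gap noted above for many hazard types.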
Responses
Disaster management is a main function of civil protection (or civil defense) authorities. It should address all four of the phases of disasters: mitigation and prevention, disaster response, recovery and preparedness.
Mitigation and prevention
Disaster risk reduction
Response
Recovery
Preparedness
Society and culture
International law
The 1951 Refugee Convention and its 1967 Protocol are the cornerstone documents for refugee protection and population displacement. The 1998 UN Guiding Principles on Internal Displacement and 2009 Kampala Convention protect people displaced due to natural disasters.
See also
References
External links
Disasters | Natural disaster | [
"Physics"
] | 5,197 | [
"Physical phenomena",
"Earth phenomena",
"Weather",
"Natural disasters",
"Natural hazards"
] |
58,930 | https://en.wikipedia.org/wiki/Timeline%20of%20Solar%20System%20astronomy | The following is a timeline of Solar System astronomy and science. It includes the advances in the knowledge of the Earth at planetary scale, as part of it.
Direct observation
Humans (Homo sapiens) have inhabited the Earth for at least the last 300,000 years, and they have witnessed directly observable astronomical and geological phenomena. For millennia, these have aroused admiration and curiosity and were regarded as being of superhuman nature and scale. Many imaginative interpretations became fixed in oral traditions that are difficult to date, and were incorporated into a variety of belief systems, such as animism, shamanism, mythology, religion and philosophy.
Although such phenomena are not "discoveries" per se, as they are part of the common human experience, observing them has shaped the knowledge and comprehension of the world around us and of its position in the observable universe, in which the Sun plays a role of utmost importance for us. What is known today as the Solar System was regarded for generations as the contents of the "whole universe".
The most relevant phenomena of these kind are:
Basic gravity: following the trajectory of free-falling objects, the Earth is "below" us and the sky is "above" us.
Characterization of the terrestrial surface, in four main types of terrain: lands covered with vegetation; dry deserts; bodies of liquid water, both salted (seas and oceans) and fresh (rivers and lakes); and frozen landscapes (glaciers, polar ice caps). Recognition of emerged lands and submerged ones. Recognition of mountain ranges and cavities (grottos and caverns).
Characterization of the Earth's atmosphere and its associated meteorological phenomena: clouds, rain, hail and snow; wind, storms and thunderstorms, tornadoes and hurricanes/cyclones/typhoons; fluvial floods, deluges and landslides; rainbows and halos; mirages; glacial ages.
Diurnal apparent movement of the Sun: sunrise, noon and sunset. Recognition of the four cardinal points: north, south, east, and west.
Nightly apparent movement of the celestial sphere with its main features regarded as "fixed": stars, the brightest of them forming casual groupings known as constellations, under different names and shapes in many cultures. Different constellations are viewed in different seasons and latitudes. Along with the faint strip of the Milky Way, together they form the idea of the firmament, which as viewed from Earth seems to be a consistent, solid unit rotating smoothly and uniformly. This leads to the intuitive idea of a geocentric universe.
Presence of the Moon, with its phases. Tides. Recognition of meteorological phenomena as sub-lunar.
Yearly apparent transit of the Sun through the constellations of the zodiac. Recognition of the lunar cycle as a (lunar) month, and the solar cycle as the (solar) year, the basis for calendars.
Observation of non-fixed or "wandering" objects in the night sky: the five classical planets; shooting stars and meteor showers; bolides; comets; auroras; zodiacal light.
Solar and lunar eclipses. Planetary conjunctions.
Identification of the frigid, temperate and torrid zones of the Earth by latitude. Equator and Tropics. Four seasons in temperate zones: spring, summer, autumn and winter. Equinoxes and solstices. Monsoons. Midnight sun.
Telluric phenomena: seismic (earthquakes and seaquakes; tsunamis). Geysers. Volcanoes.
Along with an indeterminate number of unregistered sightings of rare events: meteor impacts; novae and supernovae.
Antiquity
2nd millennium BCE – Earliest possible date for the composition of the Babylonian Venus tablet of Ammisaduqa, a 7th-century BC copy of a list of observations of the motions of the planet Venus, and the oldest planetary table currently known.
2nd millennium BCE – Babylonian astronomers identify the inner planets Mercury and Venus and the outer planets Mars, Jupiter and Saturn, which would remain the only known planets until the invention of the telescope in early modern times.
Late 2nd millennium BCE – Chinese astronomers record a solar eclipse during the reign of Zhong Kang, described as part of the document Punitive Expedition of Yin in the Book of Documents.
Late 2nd millennium BCE – Chinese established their timing cycle of 12 Earthly Branches based on the approximate number of years (11.86) it takes Jupiter to complete a single revolution in the sky.
1200 BCE – Earliest Babylonian star catalogues.
1100 BCE – Chinese first determine the spring equinox.
776 BCE – Chinese make the earliest reliable record of a solar eclipse.
750 BCE – During the reign of Nabonassar (747–733 BC), the systematic records of ominous phenomena in Babylonian astronomical diaries that began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses.
687 BCE – Chinese make earliest known record of meteor shower.
7th century BCE – Egyptian astronomers alleged to have predicted a solar eclipse.
613 BCE – A comet, possibly Comet Halley, is recorded in Spring and Autumn Annals by the Chinese.
586 BCE – Thales of Miletus alleged to have predicted a solar eclipse.
560 BCE – Anaximander is arguably the first to conceive a mechanical model of the world, although a highly inaccurate one: a cylindrical Earth floats freely in space surrounded by three concentric wheels turning at different distances, the closest for the stars and planets, the second for the Moon and the farthest for the Sun, all conceived not as bodies but as "fire seen through holes" in every wheel. But he begins to develop the idea of celestial mechanics as distinct from the notion of planets as heavenly deities, leaving mythology aside.
475 BCE – Parmenides is credited as the first Greek to declare that the Earth is spherical and situated in the centre of the universe; he is also believed to have been the first to recognize that Hesperus, the evening star, and Phosphorus, the morning star, are the same object (Venus), and, by some, the first to claim that moonlight is a reflection of sunlight.
450 BCE – Anaxagoras shows that the Moon shines by reflected sunlight: the phases of the Moon are caused by the illumination of its sphere by the Sun in different angles along the lunar month. He was also the first to give a correct explanation of eclipses, by asserting that the Moon is rocky, thus opaque, and closer to the Earth than the Sun.
400 BCE – Philolaus and other Pythagoreans propose a model in which the Earth and the Sun revolve around an invisible "Central Fire" (not the Sun), and the Moon and the planets orbit the Earth. Due to philosophical concerns about the number 10, they also added a tenth "hidden body" or Counter-Earth (Antichthon), always in the opposite side of the invisible Central Fire and therefore also invisible from Earth.
360 BCE – Plato claims in his Timaeus that circles and spheres are the preferred shape of the universe and that the Earth is at the centre. These circles are the orbits of the heavenly bodies, varying in size for every of them. He arranged these celestial orbs, in increasing order from the Earth: Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, and the fixed stars located on the celestial sphere forming the outermost shell.
360 BCE – Eudoxus of Cnidus proposes for first time a purely geometric-mathematical, geocentric model of the planetary movements, including that of the Sun and the Moon.
350 BCE – Aristotle argues for a spherical Earth using lunar eclipses and other observations. Also, he asserts his conception of the heavenly spheres, and of an outer space fulfilled with aether.
330 BCE – Heraclides Ponticus is said to be the first Greek to propose that the Earth rotates on its axis, from west to east, once every 24 hours, contradicting Aristotle's teachings. Simplicius says that Heraclides proposed that the irregular movements of the planets can be explained if the Earth moves while the Sun stays still, but these statements are disputed.
280 BCE – Aristarchus of Samos offers the first definite discussion of the possibility of a heliocentric cosmos, and uses the size of the Earth's shadow on the Moon to estimate the Moon's orbital radius at 60 Earth radii, and its physical radius as one-third that of the Earth. He also makes an inaccurate attempt to measure the distance to the Sun.
250 BCE – Following the heliocentric ideas of Aristarchus, Archimedes in his work The Sand Reckoner computes the diameter of the universe centered around the Sun to be about 10^14 stadia (in modern units, about 2 light years).
210 BCE – Apollonius of Perga shows the equivalence of two descriptions of the apparent retrograde planet motions (assuming the geocentric model), one using eccentrics and the other using deferents and epicycles.
200 BCE – Eratosthenes determines the radius of the Earth from the difference in the Sun's noon altitude at two cities a known distance apart (a worked sketch of the usual reconstruction of his method appears after this section's entries).
150 BCE – According to Strabo (1.1.9), Seleucus of Seleucia is the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun.
150 BCE – Hipparchus uses parallax to determine that the distance to the Moon is roughly .
134 BCE – Hipparchus discovers the precession of the equinoxes.
87 BCE – The Antikythera mechanism, the earliest known computer, is built. It is an extremely complex astronomical computer designed to predict solar and lunar eclipses accurately and track the movements of the planets and the Sun. It could also calculate the differences in the apsidal and axial precession of heavenly bodies with an extreme degree of accuracy.
28 BCE – Chinese history book Book of Han makes earliest known dated record of sunspot.
150 CE – Claudius Ptolemy completes his work Almagest, that codifies the astronomical knowledge of his time and cements the geocentric model in the West, and it remained the most authoritative text on astronomy for more than 1,500 years. The Almagest put forward extremely complex and accurate methods to determine the position and structure of planets, stars (including some objects as nebulae, supernovas and galaxies then regarded as stars also) and heavenly bodies. It includes a catalogue of 1,022 stars (largely based on a previous one by Hipparchus of about 850 entries) and a large amount of constellations, comets and other astronomical phenomena. Following a long astrological tradition, he arranged the heavenly spheres ordering them (from Earth outward): Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn and fixed stars.
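For the Eratosthenes entry above (200 BCE), the commonly cited reconstruction of his method can be written out as a short calculation. The input figures (a difference of about 7.2° in the Sun's noon altitude between Syene and Alexandria, and a separation of about 5,000 stadia) are the values usually quoted in modern accounts, and the length of the stadion is uncertain, so the result is only approximate:

\[
\Delta\theta \approx 7.2^\circ = \tfrac{1}{50} \times 360^\circ, \qquad
C \approx \frac{360^\circ}{\Delta\theta}\, d \approx 50 \times 5{,}000\ \text{stadia} = 250{,}000\ \text{stadia}, \qquad
R = \frac{C}{2\pi} \approx 40{,}000\ \text{stadia}.
\]

Taking a stadion of roughly 157–185 m gives a radius on the order of 6,300–7,400 km, compared with the modern mean value of about 6,371 km.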
Middle Ages
420 – Martianus Capella describes a modified geocentric model, in which the Earth is at rest in the center of the universe and circled by the Moon, the Sun, three planets and the stars, while Mercury and Venus circle the Sun.
500 – Indian mathematician-astronomer Aryabhata accurately computes the solar and lunar eclipses, and the length of Earth's revolution around the Sun.
500 – Aryabhata discovers the oblique motion of the apsidal precession of the Sun and notes that it is changing with respect to the motion of the stars and Earth.
500 – Aryabhata discovers the rotation of the Earth by conducting experiments and giving empirical examples for his theories. He also explains the cause of day and night through the diurnal rotation of the Earth. He also developed highly accurate models for the orbital motion of the Moon, Mercury and Mars. He also developed a geocentric model of the universe.
620 – Indian mathematician-astronomer Brahmagupta describes gravity as an attractive force, referring to it by the term guruvatkarshan.
628 – Brahmagupta gives methods for calculations of the motions and places of various planets, their rising and setting, conjunctions, and calculations of the solar and lunar eclipses.
820 – Persian astronomer, Muhammad ibn Musa al-Khwarizmi, composes his Zij astronomical tables, utilising Arabic numerals and the Hindu–Arabic numeral system in his calculations. He also translates Aryabhata's astronomical and mathematical treatises into Arabic.
850 – Al-Farghani (Alfraganus) translated and wrote commentary on Ptolemy's Almagest and gave values for the motion of the ecliptic, and the precessional movement of the heavenly bodies based on the values given by Ptolemy and Hipparchus.
1019 – Al-Biruni observes and describes the lunar eclipse on September 17 in detail and gives the exact latitudes of the stars during it.
1030 – In his major astronomical work, the Mas'ud Canon, Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed.
1031 – Chinese astronomer and scientist Shen Kuo calculates the distance between the Earth and the Sun in his mathematical treatises.
1054 – Chinese astronomers record the sighting of the Crab Nebula as a "guest star", and they record several other supernovae during the 10th and 11th centuries.
1060 – Andalusi astronomer Al-Zarqali corrects geographical data from Ptolemy and Al-Khwarizmi, specifically by correcting Ptolemy's estimate of the longitude of the Mediterranean Sea from 62 degrees to the correct value of 42 degrees. He was the first to demonstrate the motion of the solar apogee relative to the fixed background of the stars, measuring its rate of motion as 12.9 seconds per year, which is remarkably close to the modern calculation of 11.77 seconds. Al-Zarqālī also contributed to the famous Tables of Toledo.
1175 – Gerard of Cremona translates Ptolemy's Almagest from Arabic into Latin.
1180s (decade) – Robert Grosseteste described the birth of the Universe in an explosion and the crystallisation of matter. He also put forward several new ideas such as rotation of the Earth around its axis and the cause of day and night. His treatise De Luce is the first attempt to describe the heavens and Earth using a single set of physical laws.
1200 – Fakhr al-Din al-Razi, in dealing with his conception of physics and the physical world, rejected the Aristotelian and Avicennian view of a single world, but instead proposed that there are "a thousand thousand worlds (alfa alfi 'awalim) beyond this world such that each one of those worlds be bigger and more massive than this world as well as having the like of what this world has."
1252 – Alfonso X of Castile sponsored the creation and compilation of the Alfonsine Tables by scholars he assembled in the Toledo School of Translators in Toledo, Spain. These astronomical tables were used and updated during the following three centuries as the main source of astronomical data, mainly to calculate ephemerides (which were in turn used by astrologers to cast horoscopes).
1300 – Jewish astronomer Levi ben Gershon (Gersonides) recognized that the stars are much larger than the planets. Gersonides appears to be among the few astronomers before modern times, along with Aristarchus, to have surmised that the fixed stars are much farther away than the planets. While all other astronomers put the fixed stars on a rotating sphere just beyond the outer planets, Gersonides estimated the distance to the fixed stars to be no less than 159,651,513,380,944 Earth radii, or about 100,000 light-years in modern units.
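That last conversion is easy to verify. The short sketch below redoes the arithmetic in Python, assuming the modern mean Earth radius of about 6,371 km (Gersonides' own value for the Earth's radius would have differed):

```python
# Sanity check of the figure quoted above: Gersonides' lower bound in Earth
# radii, converted to light-years using assumed modern constants.
EARTH_RADIUS_KM = 6371.0        # modern mean Earth radius (assumption)
LIGHT_YEAR_KM = 9.4607e12       # kilometres in one light-year

earth_radii = 159_651_513_380_944
distance_ly = earth_radii * EARTH_RADIUS_KM / LIGHT_YEAR_KM
print(f"{distance_ly:,.0f} light-years")   # ~107,000, i.e. about 100,000 light-years
```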
1350 – Ibn al-Shatir anticipates Copernicus by abandoning the equant of Ptolemy in his calculations of planetary motion, and he provides a proto-empirical model of lunar motion that accurately matches observations.
1350 – Nicole Oresme puts forward several revolutionary ideas, such as the mean speed theorem, which he used in calculating the position and shape of the planetary orbits, measuring the apsidal and axial precession of the lunar and solar orbits, measuring the angles and distances between ecliptics, and calculating stellar and planetary distances. In his Livre du Ciel et du Monde, Oresme discussed a range of evidence for the daily rotation of the Earth on its axis.
1440 – Nicholas of Cusa proposes that the Earth rotates on its axis in his book, On Learned Ignorance. Like Oresme, he also wrote about the possibility of the plurality of worlds.
16th century
1501 – Indian astronomer Nilakantha Somayaji proposes a universe in which the planets orbit the Sun, but the Sun orbits the Earth.
1514 – Nicolaus Copernicus states his heliocentric theory in Commentariolus.
1522 – First circumnavigation of the world by Magellan-Elcano expedition shows that the Earth is, in effect, a sphere.
1543 – Copernicus publishes his heliocentric theory in De revolutionibus orbium coelestium.
1576 – Tycho Brahe founds Uraniborg, the first modern astronomical observatory in Europe.
1577 – Tycho Brahe records the position of the Great Comet of that year as viewed from Uraniborg (on the island of Hven, near Copenhagen) and compares it with the position observed by Thadaeus Hagecius from Prague at the same time, giving deliberate consideration to the movement of the Moon. While the comet appeared in approximately the same place for both observers, the Moon did not, which meant that the comet was much farther away, contrary to the prevailing view of comets as atmospheric phenomena.
1582 – Pope Gregory XIII introduces the Gregorian calendar, an enhanced solar calendar more accurate than the previous Roman Julian calendar. The principal change was to space leap years differently so as to make the average calendar year 365.2425 days long, more closely approximating the 365.2422-day 'tropical' or 'solar' year that is determined by the Earth's revolution around the Sun. The reform advanced the date by 10 days: Thursday 4 October 1582 was followed by Friday 15 October 1582. The Gregorian calendar is still in use today.
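The leap-year spacing mentioned above is simple enough to state in a few lines of code. The sketch below is only an illustration of the standard Gregorian rule, showing how it yields the 365.2425-day average year:

```python
# The Gregorian leap-year rule: every 4th year, except century years not
# divisible by 400. Averaged over a full 400-year cycle this gives 365.2425 days.
def is_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_leap(y) for y in range(2000, 2400))   # 97 leap years per 400
print(365 + leap_days / 400)                             # 365.2425
```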
1584 – Giordano Bruno published two important philosophical dialogues (La Cena de le Ceneri and De l'infinito universo et mondi) in which he argued against the planetary spheres and affirmed the Copernican principle. Bruno's infinite universe was filled with a substance—a "pure air", aether, or spiritus—that offered no resistance to the heavenly bodies which, in Bruno's view, rather than being fixed, moved under their own impetus (momentum). Most dramatically, he completely abandoned the idea of a hierarchical universe. Bruno's cosmology distinguishes between "suns" which produce their own light and heat, and have other bodies moving around them; and "earths" which move around suns and receive light and heat from them. Bruno suggested that some, if not all, of the objects classically known as fixed stars are in fact suns, so he was arguably the first person to grasp that "stars are other suns with their own planets." Bruno wrote that other worlds "have no less virtue nor a nature different from that of our Earth" and, like Earth, "contain animals and inhabitants".
1588 – Tycho Brahe publishes his own Tychonic system, a blend of Ptolemy's classical geocentric model and Copernicus's heliocentric model, in which the Sun and the Moon revolve around the Earth, at the center of the universe, while all other planets revolve around the Sun.
17th century
1600 – William Gilbert, using his model called the terrella, shows that the Earth behaves like a huge but low-intensity magnet with its own magnetic field, which explains why the compass points to the magnetic poles.
1604 – Galileo Galilei correctly hypothesized that the distance of a falling object is proportional to the square of the time elapsed.
1609 – Johannes Kepler states his first two empirical laws of planetary motion, stating that the orbits of the planets around the Sun are elliptical rather than circular, and thus resolving many ancient problems with planetary models, without the need of any epicycle.
1609 – Galileo Galilei starts to make telescopes with magnifications of about 3x up to 30x, based only on descriptions of the first practical telescope, which Hans Lippershey tried to patent in the Netherlands in 1608. With a Galilean telescope the observer could see magnified, upright images on the Earth (what is commonly known as a spyglass), but it could also be used to observe the sky, making it a key tool for further astronomical discoveries.
1609 – Galileo Galilei aims his telescope at the Moon. While not the first person to observe the Moon through a telescope (the English mathematician Thomas Harriot had done so four months earlier but saw only a "strange spottednesse"), Galileo was the first to deduce that the uneven waning was caused by light occlusion from lunar mountains and craters. He also estimated the heights of those mountains. The Moon was therefore not, as Aristotle had claimed, a translucent and perfect sphere, and hardly the first "planet".
1610 – Galileo Galilei observes the four main moons of Jupiter: Callisto, Europa, Ganymede, and Io; sees Saturn's planetary rings (but does not recognize that they are rings), and observes the phases of Venus, disproving the Ptolemaic system though not the geocentric model.
1619 – Johannes Kepler states his third empirical law of planetary motion, which relates the distance and period of the planetary orbits.
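In modern notation the third law says that the square of the orbital period is proportional to the cube of the semi-major axis. A minimal sketch, using the Earth's orbit (1 AU, 1 year) as the reference and an assumed modern value for Mars's semi-major axis:

```python
# Kepler's third law, T^2 proportional to a^3, normalised to Earth's orbit.
def orbital_period_years(semi_major_axis_au: float) -> float:
    return semi_major_axis_au ** 1.5

print(orbital_period_years(1.524))   # Mars at ~1.524 AU -> about 1.88 years
```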
1631 – Pierre Gassendi is the first to observe the transit of Mercury. He was surprised by the small size of the planet compared to the Sun.
1632 – Galileo Galilei is sometimes credited with the discovery of the lunar libration in latitude, although Thomas Harriot or William Gilbert might have done so before.
1639 – Jeremiah Horrocks and his friend and correspondent William Crabtree are the first astronomers known to observe and record a transit of Venus.
1643 – Evangelista Torricelli, a disciple of Galileo, builds an elementary barometer, which shows that air has weight, incidentally creating the first artificial vacuum in a laboratory.
1648 – Johannes Hevelius discovers the lunar libration in longitude. It can reach 7°54′ in amplitude.
1648 – Blaise Pascal, aided by his brother-in-law Florin Périer at mount Puy de Dôme, shows that air pressure on a high mountain is less than at a lower altitude, proving his idea that, as air has a finite weight, Earth's atmosphere must have a maximum height.
1655 – Giovanni Domenico Cassini and Robert Hooke separately discover Jupiter's Great Red Spot.
1656 – Christiaan Huygens identifies Saturn's rings as rings and discovers its moon Titan.
1659 – Huygens estimates a value of about 24,000 Earth radii for the Earth–Sun distance, remarkably close to modern values, but his result rested on many unproven (and incorrect) assumptions; the accuracy of his value owed more to luck than to good measurement, with his various errors cancelling each other out.
1665 – Cassini determines the rotational speeds of Jupiter, Mars, and Venus.
1668 – Isaac Newton builds his own reflecting telescope, the first fully functional instrument of this kind and a landmark for future developments, as it reduces spherical aberration and has no chromatic aberration.
1672 – Cassini discovers Saturn's moons Iapetus and Rhea.
1672 – Jean Richer and Cassini measure the Earth-Sun distance, the astronomical unit, to be about 138,370,000 km.
1675 – Ole Rømer uses the orbital mechanics of Jupiter's moons to estimate that the speed of light is about 227,000 km/s.
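The arithmetic behind that estimate is worth making explicit. The figure usually attributed to Rømer's data is that light takes roughly 22 minutes to cross the diameter of the Earth's orbit; combined with a value for the astronomical unit (the modern value is assumed below, purely for illustration), this gives a speed close to the one quoted above:

```python
# Rømer-style estimate: light crosses the diameter of Earth's orbit (2 AU)
# in about 22 minutes, so c is roughly 2 AU / 22 min.
AU_KM = 1.496e8                  # modern astronomical unit in km (assumption)
crossing_time_s = 22 * 60        # ~22 minutes, the value drawn from Rømer's timings

speed_km_s = 2 * AU_KM / crossing_time_s
print(f"{speed_km_s:,.0f} km/s")  # about 227,000 km/s
```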
1675 – Cassini discovers the main division in the rings of Saturn, named after him, the Cassini Division.
1686 – Cassini discovers Saturn's moons Tethys and Dione.
1687 – Isaac Newton publishes his law of universal gravitation in his work Philosophiæ Naturalis Principia Mathematica.
1690 – Cassini observes differential rotation within Jupiter's atmosphere.
18th century
1704 – John Locke introduces the term "Solar System" into the English language, using it to refer to the Sun, planets, and comets as a whole.
1705 – Edmond Halley publicly predicts the periodicity of the comet of 1682 and computes its expected path of return in 1757.
1715 – Edmond Halley calculates the shadow path of a solar eclipse.
1716 – Edmond Halley suggests a high-precision measurement of the Sun-Earth distance by timing the transit of Venus.
1718 – Edmond Halley discovers proper motion of stars, dispelling the concept of the "fixed stars".
1729 – James Bradley determines the cause of the aberration of starlight, providing the first direct evidence of the Earth's motion, and a more accurate method to compute the speed of light.
1735–1739 – The French Academy of Sciences sends two expeditions, the French Geodesic Mission, to measure the oblateness of the Earth by measuring the length of a degree of latitude at two locations: one in Lapland, close to the Arctic Circle, and the other at the Equator. Their measurements show that the Earth is an oblate spheroid flattened at the poles.
1749 – Pierre Bouguer, part of the French Geodesic Mission, publishes the finding that he and Charles Marie de La Condamine had been able to detect a deflection of a pendulum's plumb-bob of 8 seconds of arc in the proximity of the volcano Chimborazo. Although not enough to measure the value of the gravitational constant accurately, the experiment at least proved that the Earth could not be a hollow shell, as some thinkers of the day had suggested.
1750 – The three collinear Lagrange points (L1, L2, L3) were discovered by Leonhard Euler, a decade before Joseph-Louis Lagrange discovered the remaining two.
1752 – Benjamin Franklin conducts his kite experiment, successfully extracting sparks from a cloud, showing that lightning bolts are huge natural electrical discharges.
1755 – Immanuel Kant first formulates the nebular hypothesis of Solar System formation.
1758 – Johann Palitzsch observes the return of the comet that Edmond Halley had anticipated in 1705. The gravitational attraction of Jupiter had slowed the return by 618 days. Parisian astronomer La Caille suggests it should be named "Halley's Comet".
1761 – Mikhail Lomonosov is the first to discover and appreciate the atmosphere of Venus during his observation of the transit of Venus.
1766 – Johann Titius finds the Titius-Bode rule for planetary distances.
1772 – Johann Bode publishes the Titius-Bode rule for planetary distances.
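The rule is usually written today as a = 0.4 + 0.3 × 2^k astronomical units, with Mercury taken as the special case a = 0.4. A small sketch of that modern restatement (not Titius's or Bode's original wording):

```python
# Titius–Bode rule in its usual modern algebraic form (distances in AU).
def titius_bode_au(k):
    return 0.4 if k is None else 0.4 + 0.3 * 2 ** k   # k=None stands for Mercury

for name, k in [("Mercury", None), ("Venus", 0), ("Earth", 1), ("Mars", 2),
                ("Ceres", 3), ("Jupiter", 4), ("Saturn", 5)]:
    print(f"{name:8s} {titius_bode_au(k):5.1f} AU")
```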
1772–1775 – The second voyage of James Cook definitively disproves the existence of the hypothesized Southern continent of Terra Australis.
1775 – Charles Hutton, based on his analysis of the Schiehallion experiment, shows that the Earth has a density of at least 4.5 times that of water and suggests that it has a planetary core made of metal. (In comparison with the modern accepted figure of about 5.5, the density of the Earth had been computed with an error of less than 20%.)
1781 – William Herschel discovers a seventh planet, Uranus, during a telescopic survey of the Northern sky.
1781 – Charles Messier and his assistant Pierre Méchain publish the first catalogue of 110 nebulae and star clusters, the most prominent deep-sky objects that can easily be observed from Earth's Northern Hemisphere, so that they would not be confused with ordinary comets of the Solar System.
1787 – Herschel discovers Uranus's moons Titania and Oberon.
1789 – Herschel discovers Saturn's moons Enceladus and Mimas.
1796 – Pierre Laplace re-states the nebular hypothesis for the formation of the Solar System from a spinning nebula of gas and dust.
1798 – Henry Cavendish accurately measures the gravitational constant in the laboratory, which allows the mass of the Earth to be derived, and hence the masses of all bodies in the Solar System.
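The step from a laboratory value of G to the mass of the Earth is a one-line rearrangement of Newton's law: since g = GM/R², the mass follows as M = gR²/G. A minimal sketch with assumed modern values for the constants:

```python
# Deriving the Earth's mass from surface gravity, Earth's radius and G.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2 (modern value)
g = 9.81         # surface gravity, m/s^2
R = 6.371e6      # mean Earth radius, m

earth_mass = g * R ** 2 / G
print(f"{earth_mass:.2e} kg")   # about 5.97e24 kg
```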
19th century
1801 – Giuseppe Piazzi discovers Ceres, a body that filled a gap between Mars and Jupiter following the Titius-Bode rule. At first, it was regarded as a new planet.
1802 – Heinrich Wilhelm Olbers discovers Pallas, at roughly the same distance from the Sun as Ceres. He proposed that the two objects were the remnants of a destroyed planet, and predicted that more of these pieces would be found.
1802 – Due to their star-like appearance, William Herschel suggested that Ceres and Pallas, and similar objects if found, be placed into a separate category, named asteroids, although they were still counted among the planets for some decades.
1804 – Karl Ludwig Harding discovers the asteroid Juno.
1807 – Olbers discovers the asteroid Vesta.
1821 – Alexis Bouvard detects irregularities in the orbit of Uranus.
1825 – Pierre Laplace completes his study of gravitation, the stability of the Solar System, tides, the precession of the equinoxes, the libration of the Moon, and Saturn's rings in his work Traité de mécanique céleste (Treatise of celestial mechanics).
1833 – Thomas Henderson successfully measures the stellar parallax of Alpha Centauri, then regarded as the star closest to the Sun, but delays publication until 1839.
1838 – Friedrich Wilhelm Bessel measures the parallax of the star 61 Cygni, refuting one of the oldest arguments against heliocentrism.
1840 – John W. Draper takes a daguerreotype of the Moon, the first astronomical photograph.
1845 – John Adams predicts the existence and location of an eighth planet from irregularities in the orbit of Uranus.
1845 – Karl Ludwig Hencke discovers a fifth body between Mars and Jupiter, Astraea and, shortly thereafter, new objects were found there at an accelerating rate. Counting them among the planets became increasingly cumbersome. Eventually, they were dropped from the planet list (as first suggested by Alexander von Humboldt in the early 1850s) and Herschel's coinage, "asteroids", gradually came into common use. Since then, the region they occupy between Mars and Jupiter is known as the asteroid belt.
1846 – Urbain Le Verrier predicts the existence and location of an eighth planet from irregularities in the orbit of Uranus.
1846 – Johann Galle discovers the eighth planet, Neptune, at the predicted position given to him by Le Verrier.
1846 – William Lassell discovers Neptune's moon Triton, just seventeen days after the planet's discovery.
1848 – Lassell, William Cranch Bond and George Phillips Bond discover Saturn's moon Hyperion.
1849 – Édouard Roche finds the limiting radius of tidal destruction and tidal creation for a body held together only by its own gravity, called the Roche limit, and uses it to explain why Saturn's rings do not condense into a satellite.
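For the gravity-only (fluid) case described above, the limit is commonly approximated as d ≈ 2.44 R_p (ρ_p/ρ_s)^(1/3), where R_p and ρ_p are the planet's radius and density and ρ_s is the satellite's density. A hedged sketch, with illustrative assumed densities for Saturn and an icy body:

```python
# Approximate fluid-body Roche limit:
# d = 2.44 * R_planet * (rho_planet / rho_satellite)^(1/3)
def roche_limit_km(planet_radius_km, rho_planet, rho_satellite):
    return 2.44 * planet_radius_km * (rho_planet / rho_satellite) ** (1 / 3)

# Saturn: radius ~60,268 km, mean density ~0.687 g/cm^3; icy body ~0.9 g/cm^3 (assumed)
print(f"{roche_limit_km(60268, 0.687, 0.9):,.0f} km")  # ~134,000 km, roughly the extent of the main rings
```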
1849 – Annibale de Gasparis discovers the asteroid Hygiea, the fourth largest asteroid in the Solar System by both volume and mass.
1851 – Lassell discovers Uranus's moons Ariel and Umbriel.
1856 – James Clerk Maxwell demonstrates that a solid ring around Saturn would be torn apart by gravitational forces and argues that Saturn's rings consist of a multitude of tiny satellites.
1859 – Robert Bunsen and Gustav Kirchhoff develop the spectroscope, which they used to pioneer the identification of the chemical elements in the Sun, showing that the Sun contains mainly hydrogen, and also sodium.
1862 – By analysing the spectroscopic signature of the Sun and comparing it to those of other stars, Father Angelo Secchi determines that the Sun is itself a star.
1866 – Giovanni Schiaparelli realizes that meteor streams occur when the Earth passes through the orbit of a comet that has left debris along its path.
1868 – Jules Janssen observes a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun, during a total solar eclipse in Guntur, India. Later in the same year, Norman Lockyer observed the same line in the solar spectrum, and concluded that it was caused by an element in the Sun unknown on Earth. This element is helium, which currently comprises 23.8% of the mass in the solar photosphere.
1877 – Asaph Hall discovers Mars's moons Deimos and Phobos.
1887 – The Michelson–Morley experiment, intended to measure the relative motion of the Earth through the (assumed) stationary luminiferous aether, yields a null result. This put an end to the centuries-old idea of the aether, dating back to Aristotle, and with it all the contemporary aether theories.
1892 – Edward Emerson Barnard discovers Jupiter's moon Amalthea.
1895 – Percival Lowell starts publishing books about his observations of features on the surface of Mars that he claims are artificial Martian canals (arising from a mistranslation of an earlier paper by Schiaparelli on the subject), popularizing the long-held belief that these markings show that Mars harbors intelligent life forms.
1897 – William Thomson, 1st Baron Kelvin, based on the thermal radiation rate and the gravitational contraction forces, argues the age of the Sun to be no more than 20 million years – unless some energy source beyond what was then known was found.
1899 – William Henry Pickering discovers Saturn's moon Phoebe.
1900–1957
1904 – Ernest Rutherford argues, in a lecture attended by Kelvin, that radioactive decay releases heat, providing the unknown energy source Kelvin had suggested, and ultimately leading to radiometric dating of rocks which reveals ages of billions of years for the Solar System bodies.
1906 – Max Wolf discovers the Trojan asteroid Achilles.
1908 – A meteor air burst occurs near Tunguska in Siberia, Russia. It is the largest impact event on Earth in recorded history to date.
1909 – Andrija Mohorovičić discovers the Moho discontinuity, the boundary between the Earth's crust and the mantle.
1912 – Alfred Wegener suggests the continental drift hypothesis, that the continents are slowly drifting around the Earth.
1915 – Robert Innes discovers Proxima Centauri, the closest star to Earth after the Sun.
1919 – Arthur Stanley Eddington uses a solar eclipse to successfully test Albert Einstein's General Theory of Relativity, which in turn explains the observed irregularities in the orbital motion of Mercury, and disproves the existence of the hypothesized inner planet Vulcan.
1920 – In the Great Debate between Harlow Shapley and Heber Curtis, galaxies are finally recognized as objects beyond the Milky Way, and the Milky Way as a galaxy proper. Within it lies the Solar System.
1930 – Clyde Tombaugh discovers Pluto. It was regarded for decades as the ninth planet of the Solar System.
1930 – Seth Nicholson and Edison Pettit measure the surface temperature of the Moon.
1932 – Karl Guthe Jansky recognizes radio signals received from outer space as extrasolar, coming mainly from the direction of Sagittarius. They are the first evidence of the center of the Milky Way and the first observations of what would become the discipline of radio astronomy.
1935 – The Explorer II balloon reached a record altitude of 22,066 m (72,395 ft), enabling its occupants to photograph the curvature of the Earth for the first time.
1938 – Hans Bethe calculates the details of the two main energy-producing nuclear reactions that power the Sun.
1944 – Gerard Kuiper discovers that the satellite Titan has a substantial atmosphere.
1946 – American launch of a camera-equipped V-2 rocket provides the first image of the Earth from space.
1949 – Gerard Kuiper discovers Uranus's moon Miranda and Neptune's moon Nereid.
1950 – Jan Oort suggests the presence of a cometary reservoir in the outer limits of the Solar System, the Oort cloud.
1951 – Gerard Kuiper argues for an annular reservoir of comets between 40 and 100 astronomical units from the Sun having formed early in the Solar System's evolution, but he did not think that such a belt still existed today. Decades later, this region was named after him, the Kuiper belt.
1958–1976
1958 – Under the supervision of James Van Allen, Explorer 1 and Explorer 3 confirm the existence of the radiation belts in the Earth's magnetosphere, which were later named after him.
1959 – Explorer 6 sends the first image of the entire Earth from space.
1959 – Luna 3 sends the first images of another celestial body, the Moon, from space, including its unseen far side.
1962 – Mariner 2 Venus flyby performs the first closeup observations of another planet.
1964 – Mariner 4 spacecraft provides the first detailed images of the surface of Mars.
1966 – Luna 9 Moon lander provides the first images from the surface of another celestial body.
1967 – Venera 4 provides the first information on Venus's dense atmosphere.
1968 – Apollo 8 becomes the first crewed lunar mission, providing historic images of the whole Earth.
1969 – The Apollo 11 mission lands on the Moon, and its crew become the first humans to walk upon it. They return the first lunar samples to Earth.
1970 – Venera 7 Venus lander sends back the first information successfully obtained from the surface of another planet.
1971 – Mariner 9 Mars spacecraft becomes the first to successfully orbit another planet. It provides the first detailed maps of the Martian surface, discovering much of the planet's topography, including the volcano Olympus Mons and the canyon system Valles Marineris, which is named in its honor.
1971 – Mars 3 lands on Mars, and transmits the first partial image from the surface of another planet.
1973 – Skylab astronauts discover the Sun's coronal holes.
1973 – Pioneer 10 flies by Jupiter, providing the first closeup images of the planet and revealing its intense radiation belts.
1973 – Mariner 10 provides the first closeup images of the clouds of Venus.
1974 – Mariner 10 provides the first closeup images of the surface of Mercury.
1975 – Venera 9 becomes the first probe to successfully transmit images from the surface of Venus.
1976 – Viking 1 and 2 become the first probes to send images (in color) from the surface of Mars, as well as to perform in situ biological experiments with the Martian soil.
1977–2000
1977 – James Elliot discovers the rings of Uranus during a stellar occultation experiment on the Kuiper Airborne Observatory.
1977 – Charles Kowal discovers Chiron, the first centaur.
1978 – James Christy discovers Charon, the large moon of Pluto.
1978 – The Pioneer Venus probe maps the surface of Venus.
1978 – Peter Goldreich and Scott Tremaine present a Boltzmann equation model of planetary-ring dynamics for indestructible spherical ring particles that do not self-gravitate, and they find a stability requirement relation between ring optical depth and particle normal restitution coefficient.
1979 – Pioneer 11 flies by Saturn, providing the first ever closeup images of the planet and its rings. It discovers the planet's F ring and determines that its moon Titan has a thick atmosphere.
1979 – Goldreich and Tremaine postulate that Saturn's F ring is maintained by shepherd moons, a prediction that would be confirmed by observations.
1979 – Voyager 1 flies by Jupiter and discovers its faint ring system, as well as volcanoes on Io, the innermost of its Galilean moons.
1979 – Voyager 2 flies by Jupiter and discovers evidence of an ocean under the surface of its moon Europa.
1980 – Voyager 1 flies by Saturn and takes the first images of Titan. However, its atmosphere is opaque to visible light, so its surface remains obscured.
1982 – Venera 13 lands on Venus, sends the first photographs in color of its surface, and records atmospheric wind noises, the first sounds heard from another planet.
1986 – Voyager 2 provides the first ever detailed images of Uranus, its moons and rings.
1986 – The Giotto probe, part of an international effort known as the "Halley Armada", provides the first ever close-up images of a comet, Halley's Comet.
1988 – Martin Duncan, Thomas Quinn, and Scott Tremaine demonstrate that short-period comets come primarily from the Kuiper Belt and not the Oort cloud.
1989 – Voyager 2 provides the first ever detailed images of Neptune, its moons and rings.
1990 – The Hubble Space Telescope is launched. Aimed primarily at deep-space objects, it is also used to observe faint objects in the Solar System.
1990 – Voyager 1 is turned around to take the Portrait of the Planets of the Solar System, source of the Pale Blue Dot image of the Earth.
1991 – The Magellan spacecraft maps the surface of Venus.
1991 – The Galileo spacecraft, while en route to Jupiter, encounters the asteroid Gaspra, which becomes the first asteroid imaged by a spacecraft.
1992 – David Jewitt and Jane Luu of the University of Hawaii discover Albion, the first object deemed to be a member of the Kuiper belt.
1993 – The asteroid Ida is visited by the Galileo spacecraft on its way to Jupiter. Mission member Ann Harch discovers its natural satellite Dactyl in images returned by the spacecraft, the first asteroid moon discovered.
1994 – Comet Shoemaker–Levy 9 collides with Jupiter, providing the first direct observation of an extraterrestrial collision of Solar System objects.
1995 – The Galileo becomes the first spacecraft to orbit Jupiter. Its atmospheric entry probe provides the first data taken within the planet itself.
1997 – Mars Pathfinder deploys on Mars the first rover to operate outside the Earth–Moon system, the Sojourner, which conducts many experiments on the Martian surface, both teleoperated and semi-autonomous.
2000 – NEAR Shoemaker probe provides the first detailed images of a near-Earth asteroid, Eros.
2001–present
2002 – Chad Trujillo and Michael Brown of Caltech at the Palomar Observatory discover the minor planet Quaoar in the Kuiper belt.
2003 – M. Brown, C. Trujillo, and David Rabinowitz discover Sedna, a large trans-Neptunian object (TNO) with an unprecedented 12,000-year orbit.
2003 – Voyager 1 enters the termination shock, the point where the solar wind slows to subsonic speeds.
2004 – Voyager 1 sends back the first data ever obtained from within the Solar System's heliosheath.
2004 – M. Brown, C. Trujillo, and D. Rabinowitz discover the TNO Orcus.
2004 – M. Brown, C. Trujillo, and D. Rabinowitz discover the Kuiper Belt Object (KBO) Haumea. A second team led by José Luis Ortiz Moreno also claims the discovery.
2004 – The Cassini–Huygens spacecraft becomes the first to orbit Saturn. It discovers complex motions in the rings, several new small moons and cryovolcanism on the moon Enceladus, studies Saturn's hexagon, and provides the first images from the surface of Titan.
2005 – M. Brown, C. Trujillo, and D. Rabinowitz discover Eris, a TNO more massive than Pluto; its moon, Dysnomia, is later discovered by another team led by Brown. Eris was first imaged in 2003, and is the most massive object discovered in the Solar System since Neptune's moon Triton in 1846.
2005 – M. Brown, C. Trujillo, and D. Rabinowitz discover another notable KBO, Makemake.
2005 – The Mars Exploration Rovers perform the first astronomical observations ever taken from the surface of another planet, imaging an eclipse by Mars's moon Phobos.
2005 – The Hayabusa spacecraft lands on the asteroid Itokawa and collects samples, which it returns to Earth in 2010.
2006 – The 26th General Assembly of the IAU voted in favor of a revised definition of a planet and officially declared Ceres, Pluto, and Eris dwarf planets.
2007 – Dwarf planet Gonggong, a large KBO, was discovered by Megan Schwamb, M. Brown, and D. Rabinowitz.
2008 – The IAU declares Makemake and Haumea dwarf planets.
2011 – Dawn spacecraft enters orbit around the large asteroid Vesta making detailed measurements.
2012 – Saturn's moon Methone is imaged up close by the Cassini spacecraft, revealing a remarkably smooth surface.
2012 – Dawn spacecraft breaks orbit of Vesta and heads for Ceres.
2013 – MESSENGER spacecraft provides the first ever complete map of the surface of Mercury.
2013 – A team led by Felipe Braga Ribas discovers a ring system around the minor planet and centaur Chariklo, the first of its kind ever detected.
2014 – The Rosetta spacecraft becomes the first comet orbiter (around 67P/Churyumov–Gerasimenko) and deploys Philae, the first comet lander, which collects close-up data from the comet's surface.
2015 – Dawn spacecraft enters orbit around the dwarf planet Ceres making detailed measurements.
2015 – New Horizons spacecraft flies by Pluto, providing the first ever sharp images of its surface, and its largest moon Charon.
2017 – 'Oumuamua, the first known interstellar object crossing the Solar System, is identified.
2019 – Closest approach of New Horizons to Arrokoth, a KBO farther than Pluto.
2019 – 2I/Borisov, the first interstellar comet and second interstellar object, is discovered.
2022 – The Double Asteroid Redirection Test (DART) spacecraft intentionally crashes into Dimorphos, the minor-planet moon of the asteroid Didymos, slightly altering the orbit of a Solar System body for the first time ever. While DART hosted no scientific payload, its camera took close-up photos of the two objects, and a secondary spacecraft, LICIACube, also gathered related scientific data.
See also
Discovery and exploration of the Solar System
Timeline of discovery of Solar System planets and their moons
Timeline of Solar System exploration
Timeline of first images of Earth from space
List of former planets
List of hypothetical Solar System objects in astronomy
Historical models of the Solar System
History of astronomy
Timeline of cosmological theories
The number of currently known, or observed, objects in the Solar System is in the hundreds of thousands. Many of them are listed in the following articles:
List of Solar System objects
List of gravitationally rounded objects of the Solar System
List of natural satellites
List of possible dwarf planets
List of minor planets (numbered) and List of unnumbered minor planets
List of trans-Neptunian objects (numbered) and List of unnumbered trans-Neptunian objects
Lists of comets
References
History of astronomy
Solar System
Discovery and exploration of the Solar System
Solar System | Timeline of Solar System astronomy | [
"Astronomy"
] | 9,681 | [
"Outer space",
"History of astronomy",
"Astronomy timelines",
"Solar System",
"Discovery and exploration of the Solar System"
] |
58,931 | https://en.wikipedia.org/wiki/Timeline%20of%20solar%20astronomy | Timeline of solar astronomy
10th century
900–929 — Muhammad ibn Jābir al-Harrānī al-Battānī (Albatenius) discovers that the direction of the Sun's eccentricity is changing
17th century
1613 — Galileo Galilei uses sunspot observations to demonstrate the rotation of the Sun
1619 — Johannes Kepler postulates a solar wind to explain the direction of comet tails
19th century
1802 — William Hyde Wollaston observes dark lines in the solar spectrum
1814 — Joseph Fraunhofer systematically studies the dark lines in the solar spectrum
1854 — Hermann Helmholtz proposes gravitational contraction as the energy source for the Sun
1843 — Heinrich Schwabe announces his discovery of the sunspot cycle and estimates its period to be about a decade
1852 — Edward Sabine shows that sunspot number is correlated with geomagnetic field variations
1859 — Richard Carrington discovers solar flares
1860 — Gustav Kirchhoff and Robert Bunsen discover that each chemical element has its own distinct set of spectral lines
1861 — Gustav Spörer discovers the variation of sun-spot latitudes during a solar cycle, explained by Spörer's law
1863 — Richard Carrington discovers the differential nature of solar rotation
1868 — Pierre Janssen and Norman Lockyer discover an unidentified yellow line in solar prominence spectra and suggest it comes from a new element which they name "helium"
1893 — Edward Maunder discovers the 1645–1715 Maunder sunspot minimum
20th century
1904 — Edward Maunder plots the first sunspot "butterfly diagram"
1906 — Karl Schwarzschild explains solar limb darkening
1908 — George Hale discovers the Zeeman splitting of spectral lines from sunspots
1925 — Cecilia Payne proposes hydrogen is the dominant element of the Sun, not iron
1929 — Bernard Lyot invents the coronagraph and observes the corona with an "artificial eclipse"
1942 — J.S. Hey detects solar radio waves
1949 — Herbert Friedman detects solar X-rays
1960 — Robert B. Leighton, Robert Noyes, and George Simon discover solar five-minute oscillations by observing the Doppler shifts of solar dark lines
1961 — Horace W. Babcock proposes the magnetic coiling sunspot theory
1970 — Roger Ulrich, John Leibacher, and Robert F. Stein deduce from theoretical solar models that the interior of the Sun could act as a resonant acoustic cavity
1975 — Franz-Ludwig Deubner makes the first accurate measurements of the period and horizontal wavelength of the five-minute solar oscillations
1981 — NASA retrieves data from 1978 that shows a comet crashing into the Sun
21st century
2004 — largest solar flare ever recorded occurs
References
Solar Astronomy, Timeline of | Timeline of solar astronomy | [
"Astronomy"
] | 540 | [
"Astronomy timelines",
"History of astronomy"
] |
58,932 | https://en.wikipedia.org/wiki/Timeline%20of%20stellar%20astronomy | Timeline of stellar astronomy
1200 BC — Chinese star names appear on oracle bones used for divination.
134 BC — Hipparchus creates the magnitude scale of stellar apparent luminosities
185 AD — Chinese astronomers become the first to observe a supernova, SN 185
964 — Abd al-Rahman al-Sufi (Azophi) writes the Book of Fixed Stars, in which he makes the first recorded observations of the Andromeda Galaxy and the Large Magellanic Cloud, and lists numerous stars with their positions, magnitudes, brightness, and colour, and gives drawings for each constellation
1000s — The Persian astronomer Al-Biruni describes the Milky Way galaxy as a collection of numerous nebulous stars
1006 — Ali ibn Ridwan and Chinese astronomers observe the SN 1006, the brightest stellar event ever recorded
1054 — Chinese and Arab astronomers observe the SN 1054, responsible for the creation of the Crab Nebula, the only nebula whose creation was observed
1181 — Chinese astronomers observe the SN 1181 supernova
1580 — Taqi al-Din measures the right ascension of the stars at the Constantinople observatory of Taqi ad-Din using an "observational clock" he invented and which he described as "a mechanical clock with three dials which show the hours, the minutes, and the seconds"
1596 — David Fabricius notices that Mira's brightness varies
1672 — Geminiano Montanari notices that Algol's brightness varies
1686 — Gottfried Kirch notices that Chi Cygni's brightness varies
1718 — Edmund Halley discovers stellar proper motions by comparing his astrometric measurements with those of the Greeks
1782 — John Goodricke notices that the brightness variations of Algol are periodic and proposes that it is partially eclipsed by a body moving around it
1784 — Edward Pigott discovers the first Cepheid variable star
1838 — Thomas Henderson, Friedrich Struve, and Friedrich Bessel measure stellar parallaxes
1844 — Friedrich Bessel explains the wobbling motions of Sirius and Procyon by suggesting that these stars have dark companions
1906 — Arthur Eddington begins his statistical study of stellar motions
1908 — Henrietta Leavitt discovers the Cepheid period-luminosity relation
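The power of the relation is that a Cepheid's period gives its absolute magnitude, and comparison with its apparent magnitude then gives its distance. The sketch below is only an illustration: the calibration coefficients are one modern V-band fit, assumed here for the example rather than Leavitt's original relation, and the input Cepheid is hypothetical.

```python
import math

# Period -> absolute magnitude (assumed modern calibration), then the
# distance modulus m - M -> distance in parsecs.
def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05   # assumed calibration
    distance_modulus = apparent_mag - abs_mag
    return 10 ** (distance_modulus / 5 + 1)

# A hypothetical 10-day Cepheid seen at magnitude 20 lies at roughly 0.65 Mpc.
print(f"{cepheid_distance_pc(10, 20.0):.3g} pc")
```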
1910 — Ejnar Hertzsprung and Henry Norris Russell study the relation between magnitudes and spectral types of stars
1924 — Arthur Eddington develops the main sequence mass-luminosity relationship
1929 — George Gamow proposes hydrogen fusion as the energy source for stars
1938 — Hans Bethe and Carl von Weizsäcker detail the proton–proton chain and CNO cycle in stars
1939 — Rupert Wildt realizes the importance of the negative hydrogen ion for stellar opacity
1952 — Walter Baade distinguishes between Cepheid I and Cepheid II variable stars
1953 — Fred Hoyle predicts a carbon-12 resonance to allow stellar triple alpha reactions at reasonable stellar interior temperatures
1961 — Chūshirō Hayashi publishes his work on the Hayashi track of fully convective stars
1963 — Fred Hoyle and William A. Fowler conceive the idea of supermassive stars
1964 — Subrahmanyan Chandrasekhar and Richard Feynman develop a general relativistic theory of stellar pulsations and show that supermassive stars are subject to a general relativistic instability
1967 — Eric Becklin and Gerry Neugebauer discover the Becklin-Neugebauer Object at 10 micrometres
1977 — (May 25) The Star Wars film is released and becomes a worldwide phenomenon, boosting interest in stellar systems.
2012 — (May 2) First visual proof of the existence of black holes. Suvi Gezari's team at Johns Hopkins University, using the Hawaiian telescope Pan-STARRS 1, publishes images of a supermassive black hole 2.7 million light-years away swallowing a red giant.
See also
Timeline of astronomy
References
Stellar astronomy | Timeline of stellar astronomy | [
"Astronomy"
] | 800 | [
"Astronomy timelines",
"History of astronomy"
] |
58,933 | https://en.wikipedia.org/wiki/Timeline%20of%20white%20dwarfs%2C%20neutron%20stars%2C%20and%20supernovae | Timeline of neutron stars, pulsars, supernovae, and white dwarfs
Note that this list is mainly about the development of knowledge, but also about some supernovae taking place. For a separate list of the latter, see the article List of supernovae. All dates refer to when the supernova was observed on Earth or would have been observed on Earth had powerful enough telescopes existed at the time.
Timeline
185 – Chinese astronomers become the first to record observations of a supernova, SN 185.
1006 – SN 1006, a magnitude −7.5 supernova in the constellation of Lupus, is observed throughout Asia, the Middle East, and Europe.
1054 – Astronomers in Asia and the Middle East observe SN 1054, the Crab Nebula supernova explosion.
1181 – Chinese astronomers observe the SN 1181 supernova.
1572 – Tycho Brahe discovers a supernova (SN 1572) in the constellation Cassiopeia.
1604 – Johannes Kepler's supernova, SN 1604, in Serpens is observed.
1862 – Alvan Graham Clark observes Sirius B.
1866 – William Huggins studies the spectrum of a nova and discovers that it is surrounded by a cloud of hydrogen.
1885 – A supernova, S Andromedae, is observed in the Andromeda Galaxy leading to recognition of supernovae as a distinct class of novae.
1910 – The spectrum of 40 Eridani B is observed, making it the first confirmed white dwarf.
1914 – Walter Sydney Adams determines an incredibly high density for Sirius B.
1926 – Ralph Fowler uses Fermi–Dirac statistics to explain white dwarf stars.
1930 – Subrahmanyan Chandrasekhar discovers the white dwarf maximum mass limit.
1933 – Fritz Zwicky and Walter Baade propose the neutron star idea and suggest that supernovae might be created by the collapse of normal stars to neutron stars—they also point out that such events can explain the cosmic ray background.
1939 – Robert Oppenheimer and George Volkoff calculate the first neutron star models.
1942 – J.J.L. Duyvendak, Nicholas Mayall, and Jan Oort deduce that the Crab Nebula is a remnant of the 1054 supernova observed by Chinese astronomers.
1958 – Evry Schatzman, Kent Harrison, Masami Wakano, and John Wheeler show that white dwarfs are unstable to inverse beta decay.
1962 – Riccardo Giacconi, Herbert Gursky, Frank Paolini, and Bruno Rossi discover Scorpius X-1.
1967 – Jocelyn Bell and Antony Hewish discover radio pulses from a pulsar, PSR B1919+21.
1967 – J.R. Harries, Kenneth G. McCracken, R.J. Francey, and A.G. Fenton discover the first X-ray transient (Cen X-2).
1968 – Thomas Gold proposes that pulsars are rotating neutron stars.
1969 – David H. Staelin, Edward C. Reifenstein, William Cocke, Mike Disney, and Donald Taylor discover the Crab Nebula pulsar thus connecting supernovae, neutron stars, and pulsars.
1971 – Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum discover 4.8 second X-ray pulsations from Centaurus X-3.
1972 – Charles Kowal discovers the Type Ia supernova SN 1972e in NGC 5253, which would be observed for more than a year and become the basis case for the type.
1974 – Russell Hulse and Joseph Taylor discover the binary pulsar PSR B1913+16.
1977 – Kip Thorne and Anna Żytkow present a detailed analysis of Thorne–Żytkow objects.
1982 – Donald Backer, Shrinivas Kulkarni, Carl Heiles, Michael Davis, and Miller Goss discover the millisecond pulsar PSR B1937+214.
1985 – Michiel van der Klis discovers 30 Hz quasi-periodic oscillations in GX 5-1.
1987 – Ian Shelton discovers SN 1987A in the Large Magellanic Cloud.
2003 – The first double pulsar, PSR J0737−3039, is discovered at Parkes Observatory.
2006 – Robert Quimby and P. Mondol discover SN 2006gy (a possible hypernova) in NGC 1260.
2017 – First observation of a neutron star merger, accompanied by the gravitational wave signal GW170817, the short gamma-ray burst GRB 170817A, the optical transient AT 2017gfo, and other electromagnetic signals.
References
White dwarfs, neutron stars, and supernovae
Lists of stars
Stellar astronomy | Timeline of white dwarfs, neutron stars, and supernovae | [
"Astronomy"
] | 992 | [
"Stellar astronomy",
"Astronomy timelines",
"Astronomical sub-disciplines",
"History of astronomy"
] |
58,934 | https://en.wikipedia.org/wiki/Timeline%20of%20knowledge%20about%20the%20interstellar%20and%20intergalactic%20medium | Timeline of knowledge about the interstellar medium and intergalactic medium:
1848 — Lord Rosse studies M1 and names it the Crab Nebula. His telescope is much larger than the small refractors typical of this period, and it also reveals the spiral nature of M51.
1864 — William Huggins studies the spectrum of the Orion Nebula and shows that it is a cloud of gas
1904 — Interstellar calcium detected on spectrograph at Potsdam
1909 — Slipher confirms Kapteyn's theory of interstellar gas
1912 — Slipher confirms interstellar dust
1927 — Ira Bowen explains unidentified spectral lines from space as forbidden transition lines
1930 — Robert Trumpler discovers absorption by interstellar dust by comparing the angular sizes and brightnesses of globular clusters
1944 — Hendrik van de Hulst predicts the 21 cm hyperfine line of neutral interstellar hydrogen
1951 — Harold I. Ewen and Edward Purcell observe the 21 cm hyperfine line of neutral interstellar hydrogen
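The "21 cm" name is simply the wavelength corresponding to the transition's frequency of about 1420 MHz; a two-line check:

```python
# Wavelength of the neutral-hydrogen hyperfine line from its frequency.
c = 2.998e8             # speed of light, m/s
f_hz = 1420.40575e6     # HI hyperfine transition frequency, Hz

print(f"{100 * c / f_hz:.1f} cm")   # about 21.1 cm
```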
1956 — Lyman Spitzer predicts coronal gas around the Milky Way
1965 — James Gunn and Bruce Peterson use observations of the relatively low absorption of the blue component of the Lyman-alpha line from 3C9 to strongly constrain the density and ionization state of the intergalactic medium
1969 — Lewis Snyder, David Buhl, Ben Zuckerman, and Patrick Palmer find interstellar formaldehyde
1970 — Arno Penzias and Robert Wilson find interstellar carbon monoxide
1970 — George Carruthers observes molecular hydrogen in space
1977 — Christopher McKee and Jeremiah Ostriker propose a three component theory of the interstellar medium
1990 — Foreground "contamination" data from the COBE spacecraft provides the first all-sky map of the ISM in microwave bands.
References
Interstellar and intergalactic medium
Astrochemistry | Timeline of knowledge about the interstellar and intergalactic medium | [
"Chemistry",
"Astronomy"
] | 363 | [
"History of astronomy",
"Astronomy timelines",
"Astronomy stubs",
"Astrochemistry",
"nan",
"Astronomical sub-disciplines"
] |
58,935 | https://en.wikipedia.org/wiki/Timeline%20of%20knowledge%20about%20galaxies%2C%20clusters%20of%20galaxies%2C%20and%20large-scale%20structure | The following is a timeline of galaxies, clusters of galaxies, and large-scale structure of the universe.
Pre-20th century
5th century BC – Democritus proposes that the bright band in the night sky known as the Milky Way might consist of stars.
4th century BC – Aristotle believes the Milky Way to be caused by "the ignition of the fiery exhalation of some stars which were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the world which is continuous with the heavenly motions".
964 – Abd al-Rahman al-Sufi (Azophi), a Persian astronomer, makes the first recorded observations of the Andromeda Galaxy and the Large Magellanic Cloud in his Book of Fixed Stars, and which are the first galaxies other than the Milky Way to be recorded.
11th century – Al-Biruni, another Persian astronomer, describes the Milky Way galaxy as a collection of fragments of numerous nebulous stars.
11th century – Alhazen (Ibn al-Haytham), an Arabian astronomer, refutes Aristotle's theory on the Milky Way by making the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it was very remote from the Earth and did not belong to the atmosphere".
12th century – Avempace (Ibn Bajjah) of Islamic Spain proposes the Milky Way to be made up of many stars but that it appears to be a continuous image due to the effect of refraction in the Earth's atmosphere.
14th century – Ibn Qayyim al-Jawziyya of Syria proposes the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars", and that these stars are larger than planets.
1521 – Ferdinand Magellan observes the Magellanic Clouds during his circumnavigating expedition.
1610 – Galileo Galilei uses a telescope to determine that the bright band on the sky, the "Milky Way", is composed of many faint stars.
1612 – Simon Marius, using a moderate telescope, observes Andromeda and describes it as a "flame seen through horn".
1750 – Thomas Wright discusses galaxies and the flattened shape of the Milky Way and speculates nebulae as separate.
1755 – Immanuel Kant drawing on Wright's work conjectures that our galaxy is a rotating disk of stars held together by gravity, and that the nebulae are separate such galaxies; he calls them Island Universes.
1774 – Charles Messier releases a preliminary list of 45 Messier objects, three of which turn out to be the galaxies including Andromeda and Triangulum. By 1781 the final published list grows to 103 objects, 34 of which turn out to be galaxies.
1785 – William Herschel carries out the first attempt to describe the shape of the Milky Way and the position of the Sun within it by carefully counting the number of stars in different regions of the sky. He produces a diagram of the shape of the galaxy with the Solar System close to the center.
1845 – Lord Rosse discovers a nebula with a distinct spiral shape.
Early 20th century
1912 – Vesto Slipher's spectrographic studies of spiral nebulae find high Doppler shifts indicating recessional velocity.
1917 – Heber Curtis finds novae in Andromeda Nebula M31 were ten magnitudes fainter than normal, giving a distance estimate of 150,000 parsecs supporting the "island universes" or independent galaxies hypothesis for spiral nebulae.
1918 – Harlow Shapley demonstrates that globular clusters are arranged in a spheroid or halo whose center is not the Earth, and hypothesizes, correctly, that its center coincides with the Galactic Center.
26 April 1920 – Harlow Shapley and Heber Curtis debate whether Andromeda Nebula is within the Milky Way. Curtis notes dark lanes in Andromeda resembling the dust clouds in the Milky Way, as well as significant Doppler shift.
1922 – Ernst Öpik's distance determination supports Andromeda as an extragalactic object.
1923 – Edwin Hubble resolves the Shapley–Curtis debate by finding Cepheids in the Andromeda Galaxy, definitively proving that there are other galaxies beyond the Milky Way.
1930 – Robert Trumpler uses open cluster observations to quantify the absorption of light by interstellar dust in the galactic plane; this absorption had plagued earlier models of the Milky Way.
1932 – Karl Guthe Jansky discovers radio noise from the center of the Milky Way.
1933 – Fritz Zwicky applies the virial theorem to the Coma Cluster and obtains evidence for unseen mass.
1936 – Edwin Hubble introduces the spiral, barred spiral, elliptical, and irregular galaxy classifications.
1939 – Grote Reber discovers the radio source Cygnus A.
1943 – Carl Keenan Seyfert identifies six spiral galaxies with unusually broad emission lines, named Seyfert galaxies.
1949 – J. G. Bolton, G. J. Stanley, and O. B. Slee identify NGC 4486 (M87) and NGC 5128 as extragalactic radio sources.
Mid-20th century
1953 – Gérard de Vaucouleurs discovers that the galaxies within approximately 200 million light-years of the Virgo Cluster are confined to a giant supercluster disk.
1954 – Walter Baade and Rudolph Minkowski identify the extragalactic optical counterpart of the radio source Cygnus A.
1959 – Hundreds of radio sources are detected by the Cambridge Interferometer which produces the 3C catalogue. Many of these are later found to be distant quasars and radio galaxies.
1960 – Thomas Matthews determines the radio position of the 3C source 3C 48 to within 5".
1960 – Allan Sandage optically studies 3C 48 and observes an unusual blue quasistellar object.
1962 – Cyril Hazard, M. B. Mackey, and A. J. Shimmins use lunar occultations to determine a precise position for the quasar 3C 273 and deduce that it is a double source.
1962 – Olin Eggen, Donald Lynden-Bell, and Allan Sandage theorize galaxy formation by a single (relatively) rapid monolithic collapse, with the halo forming first, followed by the disk.
1963 – Maarten Schmidt identifies the redshifted Balmer lines from the quasar 3C 273.
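The identification rests on simple arithmetic: a redshift z = (λ_observed - λ_rest)/λ_rest shifts every Balmer line by the same factor. The sketch below uses an illustrative observed wavelength consistent with 3C 273's accepted redshift of about 0.158, not Schmidt's exact measurement:

```python
# Redshift from a single Balmer line (values illustrative, not Schmidt's data).
REST_H_BETA_NM = 486.1     # rest wavelength of the H-beta line, nm
observed_nm = 563.0        # assumed observed wavelength, nm

z = (observed_nm - REST_H_BETA_NM) / REST_H_BETA_NM
print(f"z = {z:.3f}")      # about 0.158, the accepted redshift of 3C 273
```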
1973 – Jeremiah Ostriker and James Peebles discover that the amount of visible matter in the disks of typical spiral galaxies is not enough for Newtonian gravitation to keep the disks from flying apart or drastically changing shape.
1973 – Donald Gudehus finds that the diameters of the brightest cluster galaxies have increased due to merging, the diameters of the faintest cluster galaxies have decreased due to tidal distention, and that the Virgo cluster has a substantial peculiar velocity.
1974 – B. L. Fanaroff and J. M. Riley distinguish between edge-darkened (FR I) and edge-brightened (FR II) radio sources.
1976 – Sandra Faber and Robert Jackson discover the Faber-Jackson relation between the luminosity of an elliptical galaxy and the velocity dispersion in its center. In 1991 the relation is revised by Donald Gudehus.
1977 – R. Brent Tully and Richard Fisher publish the Tully–Fisher relation between the luminosity of an isolated spiral galaxy and the velocity of the flat part of its rotation curve.
1978 – Steve Gregory and Laird Thompson describe the Coma supercluster.
1978 – Donald Gudehus finds evidence that clusters of galaxies are moving at several hundred kilometers per second relative to the cosmic microwave background radiation.
1978 – Vera Rubin, Kent Ford, N. Thonnard, and Albert Bosma measure the rotation curves of several spiral galaxies and find significant deviations from what is predicted by the Newtonian gravitation of visible stars.
1978 – Leonard Searle and Robert Zinn theorize that galaxy formation occurs through the merger of smaller groups.
Late 20th century
1981 – Robert Kirshner, August Oemler, Paul Schechter, and Stephen Shectman find evidence for a giant void in Boötes, 250 to 330 million light years across.
1985 – Robert Antonucci and J. Miller discover that the Seyfert II galaxy NGC 1068 has broad lines which can only be seen in polarized reflected light.
1986 – Amos Yahil, David Walker, and Michael Rowan-Robinson find that the direction of the IRAS galaxy density dipole agrees with the direction of the cosmic microwave background temperature dipole.
1987 – David Burstein, Roger Davies, Alan Dressler, Sandra Faber, Donald Lynden-Bell, R. J. Terlevich, and Gary Wegner claim that a large group of galaxies within about 200 million light years of the Milky Way are moving together towards the "Great Attractor" in the direction of Hydra and Centaurus.
1987 – R. Brent Tully discovers the Pisces–Cetus Supercluster Complex, a structure one billion light years long and 150 million light years wide.
1989 – Margaret Geller and John Huchra discover the "Great Wall", a sheet of galaxies more than 500 million light years long and 200 million wide, but only 15 million light years thick.
1990 – Michael Rowan-Robinson and Tom Broadhurst discover that the IRAS galaxy IRAS F10214+4724 is the brightest known object in the Universe.
1991 – Donald Gudehus discovers a serious systematic bias in certain cluster galaxy data (surface brightness vs. radius parameter, and the method) which affect galaxy distances and evolutionary history; he devises a new distance indicator, the reduced galaxian radius parameter, which is free of biases.
1992 – First detection of large-scale structure in the cosmic microwave background indicating the seeds of the first clusters of galaxies in the early Universe.
1995 – First detection of small-scale structure in the cosmic microwave background.
1995 – Hubble Deep Field survey of galaxies in field 144 arc seconds across.
1998 – The 2dF Galaxy Redshift Survey maps the large-scale structure in a section of the Universe close to the Milky Way.
1998 – The Hubble Deep Field South is compiled.
1998 – Discovery of accelerating universe.
2000 – Data from several cosmic microwave background experiments give strong evidence that the Universe is "flat" (space is not curved, although space-time is), with important implications for the formation of large-scale structure.
Early 21st century
2001 – First data release from the ongoing Sloan Digital Sky Survey
2004 – The European Southern Observatory discovers Abell 1835 IR1916, the most distant galaxy yet seen from Earth.
2004 – The Arcminute Microkelvin Imager begins to map the distribution of distant clusters of galaxies
2005 – Spitzer Space Telescope data confirm what had been considered likely since the early 1990s from radio telescope data, i.e., that the Milky Way Galaxy is a barred spiral galaxy.
2012 – Astronomers report the discovery of the most distant dwarf galaxy yet found, approximately 10 billion light-years away.
2012 – The Huge-LQG, a large quasar group, one of the largest known structures in the universe, is discovered.
2013 – The galaxy Z8 GND 5296 is confirmed by spectroscopy to be one of the most distant galaxies found up to this time. It formed just 700 million years after the Big Bang, and the expansion of the universe has carried it to its current location, about 13 billion light-years from Earth (30 billion light-years in comoving distance).
2013 – The Hercules–Corona Borealis Great Wall, a massive galaxy filament and the largest known structure in the universe, was discovered through gamma-ray burst mapping.
2014 – The Laniakea Supercluster, the galaxy supercluster that is home to the Milky Way is defined via a new way of defining superclusters according to the relative velocities of galaxies. The new definition of the local supercluster subsumes the prior defined local supercluster, the Virgo Supercluster, as an appendage.
2020 – Astronomers report the discovery of a large cavity in the Ophiuchus Supercluster, first detected in 2016 and originating from a supermassive black hole with the mass of 10 million solar masses. The cavity is a result of the largest known explosion in the Universe. The formerly active galactic nucleus created it by emitting radiation and particle jets, possibly as a result of a spike in supply of gas to the black hole that could have occurred if a galaxy fell into the centre of the cavity.
2020 – Astronomers report to have discovered the disk galaxy Wolfe Disk, dating back to when the universe was only 1.5 billion years old, possibly indicating the need to revise theories of galaxy formation and evolution.
2020 – The South Pole Wall is a massive cosmic structure formed by a giant wall of galaxies (a galaxy filament) that extends across at least 1.37 billion light-years of space, and is located approximately a half billion light-years away.
2020 – After a 20-year-long survey, astrophysicists of the Sloan Digital Sky Survey publish the largest, most detailed 3D map of the universe so far, fill a gap of 11 billion years in its expansion history, and provide data which supports the theory of a flat geometry of the universe and confirms that different regions seem to be expanding at different speeds.
2022 – James Webb Space Telescope (JWST) releases the Webb's First Deep Field.
2022 – JWST detects CEERS-93316, a candidate high-redshift galaxy, with an estimated redshift of approximately z = 16.7, corresponding to 235.8 million years after the Big Bang. If confirmed, it is one of the earliest and most distant known galaxies observed.
See also
Illustris project
Large-scale structure of the cosmos
Timeline of astronomical maps, catalogs, and surveys
Timeline of cosmological theories
UniverseMachine
List of largest cosmic structures
List of the most distant astronomical objects#Timeline of most distant astronomical object recordholders
References
Galaxies, clusters of galaxies, and large-scale structure, timeline of knowledge about | Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure | [
"Astronomy"
] | 2,919 | [
"Astronomy timelines",
"History of astronomy"
] |
58,937 | https://en.wikipedia.org/wiki/Diphtheria | Diphtheria is an infection caused by the bacterium Corynebacterium diphtheriae. Most infections are asymptomatic or have a mild clinical course, but in some outbreaks, the mortality rate approaches 10%. Signs and symptoms may vary from mild to severe, and usually start two to five days after exposure. Symptoms often develop gradually, beginning with a sore throat and fever. In severe cases, a grey or white patch develops in the throat, which can block the airway, and create a barking cough similar to what is observed in croup. The neck may also swell, in part due to the enlargement of the facial lymph nodes. Diphtheria can also involve the skin, eyes, or genitals, and can cause complications, including myocarditis (which in itself can result in an abnormal heart rate), inflammation of nerves (which can result in paralysis), kidney problems, and bleeding problems due to low levels of platelets.
Diphtheria is usually spread between people by direct contact, through the air, or through contact with contaminated objects. Asymptomatic transmission and chronic infection are also possible. Different strains of C. diphtheriae are the main cause of the variability in lethality, as the lethality and the symptoms themselves are caused by the exotoxin produced by the bacteria. Diagnosis can often be made based on the appearance of the throat with confirmation by microbiological culture. Previous infection may not protect against reinfection.
A diphtheria vaccine is effective for prevention, and is available in a number of formulations. Three or four doses, given along with tetanus vaccine and pertussis vaccine, are recommended during childhood. Further doses of the diphtheria–tetanus vaccine are recommended every ten years. Protection can be verified by measuring the antitoxin level in the blood. Diphtheria can be prevented in those exposed, as well as treated with the antibiotics erythromycin or benzylpenicillin. In severe cases a tracheotomy may be needed to open the airway.
In 2015, 4,500 cases were officially reported worldwide, down from nearly 100,000 in 1980. About a million cases a year are believed to have occurred before the 1980s. Diphtheria currently occurs most often in sub-Saharan Africa, South Asia, and Indonesia. In 2015, it resulted in 2,100 deaths, down from 8,000 deaths in 1990. In areas where it is still common, children are most affected. It is rare in the developed world due to widespread vaccination, but can re-emerge if vaccination rates decrease. In the United States, 57 cases were reported between 1980 and 2004. Death occurs in 5–10% of those diagnosed. The disease was first described in the 5th century BC by Hippocrates. The bacterium was identified in 1882 by Edwin Klebs.
Signs and symptoms
The symptoms of diphtheria usually begin two to seven days after infection. They include fever of 38 °C (100.4 °F) or above; chills; fatigue; bluish skin coloration (cyanosis); sore throat; hoarseness; cough; headache; difficulty swallowing; painful swallowing; difficulty breathing; rapid breathing; foul-smelling and bloodstained nasal discharge; and lymphadenopathy. Within two to three days, diphtheria may destroy healthy tissues in the respiratory system. The dead tissue forms a thick, gray coating that can build up in the throat or nose. This thick gray coating is called a "pseudomembrane." It can cover tissues in the nose, tonsils, voice box, and throat, making it very hard to breathe and swallow. Symptoms can also include cardiac arrhythmias, myocarditis, and cranial and peripheral nerve palsies.
Diphtheritic croup
Laryngeal diphtheria can lead to a characteristic swollen neck and throat, or "bull neck." The swollen throat is often accompanied by a serious respiratory condition, characterized by a brassy or "barking" cough, stridor, hoarseness, and difficulty breathing; and historically referred to variously as "diphtheritic croup," "true croup," or sometimes simply as "croup." Diphtheritic croup is extremely rare in countries where diphtheria vaccination is customary. As a result, the term "croup" nowadays most often refers to an unrelated viral illness that produces similar but milder respiratory symptoms.
Transmission
Human-to-human transmission of diphtheria typically occurs through the air when an infected individual coughs or sneezes. Breathing in particles released from the infected individual leads to infection. Contact with any lesions on the skin can also lead to transmission of diphtheria, but this is uncommon. Indirect infections can occur, as well. If an infected individual touches a surface or object, the bacteria can be left behind and remain viable. Also, some evidence indicates diphtheria has the potential to be zoonotic, but this has yet to be confirmed. Corynebacterium ulcerans has been found in some animals, which would suggest zoonotic potential.
Mechanism
Diphtheria toxin (DT) is produced only by C. diphtheriae infected with a certain type of bacteriophage. Toxinogenicity is determined by phage conversion (also called lysogenic conversion); i.e., the ability of the bacterium to make DT changes as a consequence of infection by a particular phage. DT is encoded by the tox gene. Strains of corynephage are either tox+ (e.g., corynephage β) or tox− (e.g., corynephage γ). The tox gene becomes integrated into the bacterial genome. The chromosome of C. diphtheriae has two different but functionally equivalent bacterial attachment sites (attB) for integration of β prophage into the chromosome.
The diphtheria toxin precursor is a protein of molecular weight 60 kDa. Certain proteases, such as trypsin, selectively cleave DT to generate two peptide chains, amino-terminal fragment A (DT-A) and carboxyl-terminal fragment B (DT-B), which are held together by a disulfide bond. DT-B is a recognition subunit that gains entry of DT into the host cell by binding to the EGF-like domain of heparin-binding EGF-like growth factor on the cell surface. This signals the cell to internalize the toxin within an endosome via receptor-mediated endocytosis. Inside the endosome, DT is split by a trypsin-like protease into DT-A and DT-B. The acidity of the endosome causes DT-B to create pores in the endosome membrane, thereby catalysing the release of DT-A into the cytoplasm.
Fragment A inhibits the synthesis of new proteins in the affected cell by catalyzing ADP-ribosylation of elongation factor EF-2—a protein that is essential to the translation step of protein synthesis. This ADP-ribosylation involves the transfer of an ADP-ribose from NAD+ to a diphthamide (a modified histidine) residue within the EF-2 protein. Since EF-2 is needed for the moving of tRNA from the A-site to the P-site of the ribosome during protein translation, ADP-ribosylation of EF-2 prevents protein synthesis.
ADP-ribosylation of EF-2 is reversed by giving high doses of nicotinamide (a form of vitamin B3), since this is one of the reaction's end products, and high amounts drive the reaction in the opposite direction.
Diagnosis
The current clinical case definition of diphtheria used by the United States' Centers for Disease Control and Prevention is based on both laboratory and clinical criteria.
Laboratory criteria
Isolation of C. diphtheriae from a Gram stain or throat culture from a clinical specimen.
Histopathologic diagnosis of diphtheria by Albert's stain.
Toxin demonstration
In vivo tests (guinea pig inoculation): Subcutaneous and intracutaneous tests.
In vitro test: Elek's gel precipitation test, detection of tox gene by PCR, ELISA, ICA.
Clinical criteria
Upper respiratory tract illness with sore throat.
Low-grade fever (high fever is rare).
An adherent, dense, grey pseudomembrane covering the posterior aspect of the pharynx; in severe cases, it can extend to cover the entire tracheobronchial tree.
Case classification
Probable: a clinically compatible case that is not laboratory-confirmed, and is not epidemiologically linked to a laboratory-confirmed case.
Confirmed: a clinically compatible case that is either laboratory-confirmed or epidemiologically linked to a laboratory-confirmed case.
Empirical treatment should generally be started in a patient in whom suspicion of diphtheria is high.
Prevention
Vaccination against diphtheria is commonly done in infants, and delivered as a combination vaccine, such as a DPT vaccine (diphtheria, pertussis, tetanus). Pentavalent vaccines, which vaccinate against diphtheria and four other childhood diseases simultaneously, are frequently used in disease prevention programs in developing countries by organizations such as UNICEF.
Treatment
The disease may remain manageable, but in more severe cases, lymph nodes in the neck may swell, and breathing and swallowing are more difficult. People in this stage should seek immediate medical attention, as obstruction in the throat may require intubation or a tracheotomy. Abnormal cardiac rhythms can occur early in the course of the illness or weeks later, and can lead to heart failure. Diphtheria can also cause paralysis in the eye, neck, throat, or respiratory muscles. Patients with severe cases are put in a hospital intensive care unit, and given diphtheria antitoxin (consisting of antibodies isolated from the serum of horses that have been challenged with diphtheria toxin). Since antitoxin does not neutralize toxin that is already bound to tissues, delaying its administration increases risk of death. Therefore, the decision to administer diphtheria antitoxin is based on clinical diagnosis, and should not await laboratory confirmation.
Antibiotics have not been demonstrated to affect healing of local infection in diphtheria patients treated with antitoxin. Antibiotics are used in patients or carriers to eradicate C. diphtheriae, and prevent its transmission to others. The Centers for Disease Control and Prevention (CDC) recommends one of the following:
Metronidazole
Erythromycin is given (orally or by injection) for 14 days (40 mg/kg per day with a maximum of 2 g/d), or
Procaine penicillin G is given intramuscularly for 14 days (300,000 U/d for patients weighing <10 kg, and 600,000 U/d for those weighing >10 kg); patients with allergies to penicillin G or erythromycin can use rifampin or clindamycin.
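As a plain-arithmetic illustration of the weight-based dosing quoted above (a sketch only, not clinical software; the example body weights are arbitrary):

```python
# Illustrative arithmetic for the dose figures quoted above; not medical advice.
def erythromycin_daily_dose_mg(weight_kg):
    """40 mg/kg per day, capped at 2 g (2000 mg) per day."""
    return min(40 * weight_kg, 2000)

def procaine_penicillin_daily_units(weight_kg):
    """300,000 U/day for patients under 10 kg, 600,000 U/day for those over 10 kg
    (the source does not say which dose applies at exactly 10 kg)."""
    return 300_000 if weight_kg < 10 else 600_000

for weight in (8, 25, 70):  # example body weights in kg
    print(weight, "kg:",
          erythromycin_daily_dose_mg(weight), "mg erythromycin/day,",
          procaine_penicillin_daily_units(weight), "U procaine penicillin G/day")
```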
In cases that progress beyond a throat infection, diphtheria toxin spreads through the blood and can lead to potentially life-threatening complications that affect other organs, such as the heart and kidneys. The toxin can damage the heart, impairing its ability to pump blood, and the kidneys, impairing their ability to clear wastes. It can also cause nerve damage, eventually leading to paralysis. About 40–50% of those left untreated may die.
Epidemiology
Diphtheria is fatal in 5–10% of cases. In children under five years and adults over 40 years, the fatality rate may be as much as 20%. In 2013, it resulted in 3,300 deaths, down from 8,000 deaths in 1990. Better standards of living, mass immunization, improved diagnosis, prompt treatment, and more effective health care have led to a decrease in cases worldwide.
History
In 1613, Spain experienced an epidemic of diphtheria; the year became known as "The Year of Strangulations".
In 1705, the Mariana Islands experienced an epidemic of diphtheria and typhus simultaneously, reducing the population to about 5,000 people.
In 1735, a diphtheria epidemic swept through New England.
Before 1826, diphtheria was known by different names across the world. In England, it was known as "Boulogne sore throat," as the illness had spread from France. In 1826, Pierre Bretonneau gave the disease the name diphthérite (from Greek διφθέρα, diphthera 'leather'), describing the appearance of pseudomembrane in the throat.
In 1856, Victor Fourgeaud described an epidemic of diphtheria in California.
In 1878, Princess Alice (Queen Victoria's second daughter) and her family became infected with diphtheria; Princess Alice and her four-year-old daughter, Princess Marie, both died.
In 1883, Edwin Klebs identified the bacterium causing diphtheria and named it the Klebs–Loeffler bacterium. The club shape of this bacterium helped Klebs to differentiate it from other bacteria. Over time, it has been called Microsporon diphtheriticum, Bacillus diphtheriae, and Mycobacterium diphtheriae. The current nomenclature is Corynebacterium diphtheriae.
In 1884, German bacteriologist Friedrich Loeffler became the first person to cultivate C. diphtheriae. He used Koch's postulates to prove association between C. diphtheriae and diphtheria. He also showed that the bacillus produces an exotoxin.
In 1885, Joseph P. O'Dwyer introduced the O'Dwyer tube for laryngeal intubation in patients with an obstructed larynx. It soon replaced tracheostomy as the emergency diphtheric intubation method.
In 1888, Emile Roux and Alexandre Yersin showed that a substance produced by C. diphtheriae caused symptoms of diphtheria in animals.
In 1890, Shibasaburō Kitasato and Emil von Behring immunized guinea pigs with heat-treated diphtheria toxin. They also immunized goats and horses in the same way, and showed that an "antitoxin" made from serum of immunized animals could cure the disease in non-immunized animals. Behring used this antitoxin (now known to consist of antibodies that neutralize the toxin produced by C. diphtheriae) for human trials in 1891, but they were unsuccessful. Successful treatment of human patients with horse-derived antitoxin began in 1894, after production and quantification of antitoxin had been optimized. In 1901, Von Behring won the first Nobel Prize in medicine for his work on diphtheria.
In 1895, H. K. Mulford Company of Philadelphia started production and testing of diphtheria antitoxin in the United States. Park and Biggs described the method for producing serum from horses for use in diphtheria treatment.
In 1897, Paul Ehrlich developed a standardized unit of measure for diphtheria antitoxin. This was the first ever standardization of a biological product, and played an important role in future developmental work on sera and vaccines.
In 1901, 10 of 11 inoculated St. Louis children died from contaminated diphtheria antitoxin. The horse from which the antitoxin was derived died of tetanus. This incident, coupled with a tetanus outbreak in Camden, New Jersey, played an important part in initiating federal regulation of biologic products.
On 7 January 1904, Ruth Cleveland died of diphtheria at the age of 12 years in Princeton, New Jersey. Ruth was the eldest daughter of former President Grover Cleveland and the former First Lady, Frances Folsom.
In 1905, Franklin Royer, from Philadelphia's Municipal Hospital, published a paper urging timely treatment for diphtheria and adequate doses of antitoxin. In 1906, Clemens Pirquet and Béla Schick described serum sickness in children receiving large quantities of horse-derived antitoxin.
Between 1910 and 1911, Béla Schick developed the Schick test to detect pre-existing immunity to diphtheria in an exposed person. Only those who had not been exposed to diphtheria were vaccinated. A massive, five-year campaign was coordinated by Dr. Schick. As a part of the campaign, 85 million pieces of literature were distributed by the Metropolitan Life Insurance Company, with an appeal to parents to "Save your child from diphtheria." A vaccine was developed in the next decade, and deaths began declining significantly in 1924.
In 1919, in Dallas, Texas, 10 children were killed and 60 others made seriously ill by toxic antitoxin which had passed the tests of the New York State Health Department. The manufacturer of the antitoxin, the Mulford Company of Philadelphia, paid damages in every case.
During the 1920s, an annual estimate of 100,000 to 200,000 diphtheria cases and 13,000 to 15,000 deaths occurred in the United States. Children represented a large majority of these cases and fatalities. One of the most infamous outbreaks of diphtheria occurred in 1925, in Nome, Alaska; the "Great Race of Mercy" to deliver diphtheria antitoxin is now celebrated by the Iditarod Trail Sled Dog Race.
In 1926, Alexander Thomas Glenny increased the effectiveness of diphtheria toxoid (a modified version of the toxin used for vaccination) by treating it with aluminum salts. Vaccination with toxoid was not widely used until the early 1930s. In 1939, Dr. Nora Wattie, Principal Medical Officer (Maternity and Child Welfare) of Glasgow from 1934 to 1964, introduced immunisation clinics across Glasgow and promoted mother and child health education, resulting in the virtual eradication of the infection in the city.
Widespread vaccination pushed cases in the United States down from 4.4 per 100,000 inhabitants in 1932 to 2.0 in 1937. In Nazi Germany, where authorities preferred treatment and isolation over vaccination (until about 1939–1941), cases rose over the same period from 6.1 to 9.6 per 100,000 inhabitants.
Between June 1942 and February 1943, 714 cases of diphtheria were recorded at Sham Shui Po Barracks, resulting in 112 deaths because the Imperial Japanese Army did not release supplies of anti-diphtheria serum.
In 1943, diphtheria outbreaks accompanied war and disruption in Europe. The 1 million cases in Europe resulted in 50,000 deaths.
During 1948 in Kyoto, 68 of 606 children died after diphtheria immunization due to improper manufacture of aluminum phosphate toxoid.
In 1974, the World Health Organization included DPT vaccine in their Expanded Programme on Immunization for developing countries.
In 1975, an outbreak of cutaneous diphtheria in Seattle, Washington, was reported.
After the breakup of the former Soviet Union in 1991, vaccination rates in its constituent countries fell so low that an explosion of diphtheria cases occurred. In 1991, 2,000 cases of diphtheria occurred in the USSR. Between 1991 and 1998, as many as 200,000 cases were reported in the Commonwealth of Independent States, and resulted in 5,000 deaths. In 1994, the Russian Federation had 39,703 diphtheria cases. By contrast, in 1990, only 1,211 cases were reported.
In early May 2010, a case of diphtheria was diagnosed in Port-au-Prince, Haiti, after the devastating 2010 Haiti earthquake. The 15-year-old male patient died while workers searched for antitoxin.
In 2013, three children died of diphtheria in Hyderabad, India.
In early June 2015, a case of diphtheria was diagnosed at Vall d'Hebron University Hospital in Barcelona, Spain. The six-year-old child who died of the illness had not been previously vaccinated due to parental opposition to vaccination. It was the first case of diphtheria in the country since 1986, according to the Spanish daily newspaper El Mundo, or since 1998, according to the WHO.
In March 2016, a three-year-old girl died of diphtheria in the University Hospital of Antwerp, Belgium.
In June 2016, three girls, aged three, five, and seven years, died of diphtheria in Kedah, Malacca, and Sabah, Malaysia.
In January 2017, more than 300 cases were recorded in Venezuela.
In 2017, outbreaks occurred in a Rohingya refugee camp in Bangladesh, and amongst children unvaccinated due to the Yemeni Civil War.
In November and December 2017, an outbreak of diphtheria occurred in Indonesia, with more than 600 cases found and 38 fatalities.
In November 2019, two cases of diphtheria occurred in the Lothian area of Scotland. Additionally, in November 2019, an unvaccinated 8-year-old boy died of diphtheria in Athens, Greece.
In July 2022, two cases of diphtheria occurred in northern New South Wales, Australia.
In October 2022, there was an outbreak of diphtheria at the former Manston airfield, a former Ministry of Defence (MoD) site in Kent, England, which had been converted to an asylum seeker processing centre. The capacity of the processing centre was 1,000 people, although about 3,000 were living at the site, with some accommodated in tents. The Home Office, the government department responsible for asylum seekers, refused to confirm the number of cases.
In December 2023, there was an outbreak at a primary school in Luton, in the United Kingdom. The UK Health Security Agency (UKHSA) issued a statement saying that specialists were providing public health support following confirmation of a diphtheria case at the school, and that it was working closely with local and national partners "to ensure all necessary public health measures are implemented" following the discovery of the new case. The statement added: "We have conducted a risk assessment and close contacts of the case have been identified and where appropriate, vaccination and advice will be given to prevent the spread of the infection."
References
Further reading
"Antitoxin dars 1735 and 1740." The William and Mary Quarterly, 3rd Ser., Vol 6, No 2. p. 338.
External links
Mapping diphtheria-pertussis-tetanus vaccine coverage in Africa, 2000–2016: a spatial and temporal modelling study
Bacterial diseases
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate
Vaccine-preventable diseases
Rare infectious diseases | Diphtheria | [
"Biology"
] | 4,836 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
58,939 | https://en.wikipedia.org/wiki/Timeline%20of%20cosmological%20theories | This timeline of cosmological theories and discoveries is a chronological record of the development of humanity's understanding of the cosmos over the last two-plus millennia. Modern cosmological ideas follow the development of the scientific discipline of physical cosmology.
For millennia, what today is known to be the Solar System was regarded as the contents of the "whole universe", so knowledge of the two advanced largely in parallel. A clear distinction was not made until around the mid-17th century. See Timeline of Solar System astronomy for further details on this side.
Antiquity
16th century BCE – Mesopotamian cosmology has a flat, circular Earth enclosed in a cosmic ocean.
15th–11th century BCE – The Rigveda of Hinduism has some cosmological hymns, particularly in the late book 10, notably the Nasadiya Sukta which describes the origin of the universe, originating from the monistic Hiranyagarbha or "Golden Egg". Primal matter remains manifest for 311.04 trillion years and unmanifest for an equal length. The universe remains manifest for 4.32 billion years and unmanifest for an equal length. Innumerable universes exist simultaneously. These cycles have and will last forever, driven by desires.
15th–6th century BCE – During this period, Zoroastrian cosmology develops, defining creation as a manifestation of a cosmic conflict between existence and non-existence, good and evil, and light and darkness.
6th century BCE – The Babylonian Map of the World shows the Earth surrounded by the cosmic ocean, with seven islands arranged around it so as to form a seven-pointed star. Contemporary Biblical cosmology reflects the same view of a flat, circular Earth swimming on water and overarched by the solid vault of the firmament to which are fastened the stars.
6th–4th century BCE – Greek philosophers, as early as Anaximander, introduce the idea of multiple or even infinite universes. Democritus further detailed that these worlds varied in distance, size; the presence, number and size of their suns and moons; and that they are subject to destructive collisions. Also during this time period, the Greeks established that the Earth is spherical rather than flat.
6th century BCE – Anaximander conceives a mechanical, non-mythological model of the world: the Earth floats very still in the centre of the infinite, not supported by anything. Its curious shape is that of a cylinder with a height one-third of its diameter. The flat top forms the inhabited world, which is surrounded by a circular oceanic mass. Anaximander considered the Sun as a huge object (larger than the land of Peloponnesus), and consequently, he realized how far from Earth it might be. In his system the celestial bodies turned at different distances. At the origin, after the separation of hot and cold, a ball of flame appeared that surrounded Earth like bark on a tree. This ball broke apart to form the rest of the Universe. It resembled a system of hollow concentric wheels, filled with fire, with the rims pierced by holes like those of a flute. Consequently, the Sun was the fire that one could see through a hole the same size as the Earth on the farthest wheel, and an eclipse corresponded with the occlusion of that hole. The diameter of the solar wheel was twenty-seven times that of the Earth (or twenty-eight, depending on the sources) and the lunar wheel, whose fire was less intense, eighteen (or nineteen) times. Its hole could change shape, thus explaining lunar phases. The stars and the planets, located closer, followed the same model.
5th century BCE – Parmenides is credited to be the first Greek who declared that the Earth is spherical and is situated in the centre of the universe.
5th century BCE – Pythagoreans such as Philolaus believed the motion of planets is caused by an out-of-sight "fire" at the centre of the universe (not the Sun) that powers them, and that the Sun and Earth orbit that Central Fire at different distances. The Earth's inhabited side is always opposite to the Central Fire, rendering it invisible to people. They also claimed that the Moon and the planets orbit the Earth. This model depicts a moving Earth, simultaneously self-rotating and orbiting around an external point (but not around the Sun), and is thus not geocentric, contrary to common intuition. Due to philosophical concerns about the number 10 (a "perfect number" for the Pythagoreans), they also added a tenth "hidden body" or Counter-Earth (Antichthon), always on the opposite side of the invisible Central Fire and therefore also invisible from Earth.
4th century BCE – Plato claimed in his Timaeus that circles and spheres are the preferred shape of the universe, that the Earth is at the center and is circled by, ordered in-to-outwards: Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, and finally the fixed stars located on the celestial sphere. In Plato's complex cosmogony, the demiurge gave the primacy to the motion of Sameness and left it undivided; but he divided the motion of Difference in six parts, to have seven unequal circles. He prescribed these circles to move in opposite directions, three of them with equal speeds, the others with unequal speeds, but always in proportion. These circles are the orbits of the heavenly bodies: the three moving at equal speeds are the Sun, Venus and Mercury, while the four moving at unequal speeds are the Moon, Mars, Jupiter and Saturn. The complicated pattern of these movements is bound to be repeated again after a period called a 'complete' or 'perfect' year. However, others like Philolaus and Hicetas had rejected geocentrism.
4th century BCE – Eudoxus of Cnidus devised a geometric-mathematical model for the movements of the planets, the first known effort in this sense, based on (conceptual) concentric spheres centered on Earth. To explain the complexity of the movements of the planets along with that of the Sun and the Moon, Eudoxus thought they move as if they were attached to a number of concentric, invisible spheres, each rotating around its own axis and at its own pace. His model had twenty-seven homocentric spheres, with each sphere explaining a type of observable motion for each celestial object. Eudoxus emphasised that this is a purely mathematical construct, in the sense that the spheres of each celestial body do not physically exist; the model simply reproduces the possible positions of the bodies. His model was later refined and expanded by Callippus.
4th century BCE – Aristotle follows Plato's Earth-centered universe, in which the Earth is stationary and the cosmos (or universe) is finite in extent but infinite in time. He argued for a spherical Earth using lunar eclipses and other observations. Aristotle adopted and further expanded the earlier model of Eudoxus and Callippus, but supposed the spheres to be material and crystalline. Aristotle also tried to determine whether the Earth moves and concluded that all the celestial bodies fall towards Earth by natural tendency, and since Earth is the centre of that tendency, it is stationary. Plato seems to have obscurely argued that the universe did have a beginning, but Aristotle and others interpreted his words differently.
4th century BCE – De Mundo – Five elements, situated in spheres in five regions, the lesser being in each case surrounded by the greater – namely, earth surrounded by water, water by air, air by fire, and fire by aether – make up the whole Universe.
4th century BCE – Heraclides Ponticus is said to be the first Greek who proposes that the Earth rotates on its axis, from west to east, once every 24 hours, contradicting Aristotle's teachings. Simplicius says that Heraclides proposed that the irregular movements of the planets can be explained if the Earth moves while the Sun stays still, but these statements are disputed.
3rd century BCE – Aristarchus of Samos proposes a Sun-centered universe and the rotation of the Earth on its own axis. He also provides evidence for his theory from his own observations.
3rd century BCE – Archimedes in his essay The Sand Reckoner, estimates the diameter of the cosmos to be the equivalent in stadia of what would in modern times be called two light years, if Aristarchus' theories were correct.
2nd century BCE – Seleucus of Seleucia elaborates on Aristarchus' heliocentric universe, using the phenomenon of tides to explain heliocentrism. Seleucus was the first to prove the heliocentric system through reasoning. Seleucus' arguments for a heliocentric cosmology were probably related to the phenomenon of tides. According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun. Alternatively, he may have proved heliocentricity by determining the constants of a geometric model for it.
2nd century BCE – Apollonius of Perga shows the equivalence of two descriptions of the apparent retrograde planet motions (assuming the geocentric model), one using eccentrics and the other using a deferent and epicycles. The latter will be a key feature of future models. The epicycle is described as a small orbit within a greater one, called the deferent: as the centre of the epicycle moves along the deferent around the Earth, the planet simultaneously moves around the epicycle, so its trajectory resembles a curve known as an epitrochoid. This could explain how the planet seems to move as viewed from Earth (a small numerical sketch of this construction follows this list).
2nd century BCE – Eratosthenes determines that the radius of the Earth is roughly 6,400 km.
2nd century BCE – Hipparchus uses parallax to determine that the distance to the Moon is roughly 380,000 km. The work of Hipparchus about the Earth-Moon system was so accurate that he could forecast solar and lunar eclipses for the next six centuries. Also, he discovers the precession of the equinoxes, and compiles a star catalog of about 850 entries.
2nd century BCE–3rd century CE – In Hindu cosmology, the Manusmriti (1.67–80) and Puranas describe time as cyclical, with a new universe (planets and life) created by Brahma every 8.64 billion years. The universe is created, maintained, and destroyed within a kalpa (day of Brahma) period lasting for 4.32 billion years, and is followed by a pralaya (night) period of partial dissolution equal in duration. In some Puranas (e.g. Bhagavata Purana), a larger cycle of time is described where matter (mahat-tattva or universal womb) is created from primal matter (prakriti) and root matter (pradhana) every 622.08 trillion years, from which Brahma is born. The elements of the universe are created, used by Brahma, and fully dissolved within a maha-kalpa (life of Brahma; 100 of his 360-day years) period lasting for 311.04 trillion years containing 36,000 kalpas (days) and pralayas (nights), and is followed by a maha-pralaya period of full dissolution equal in duration. The texts also speak of innumerable worlds or universes.
2nd century CE – Ptolemy proposes an Earth-centered universe, with the Sun, Moon, and visible planets revolving around the Earth. Based on Apollonius' epicycles, he calculates the positions, orbits and positional equations of the heavenly bodies, and describes instruments to measure these quantities. Ptolemy emphasised that the epicycle motion does not apply to the Sun. His main contribution to the model was the equant points. He also re-arranged the heavenly spheres in a different order than Plato did (from Earth outward): Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn and fixed stars, following a long astrological tradition and the decreasing orbital periods. His book The Almagest, which also cataloged 1,022 stars and other astronomical objects (largely based upon Hipparchus'), remained the most authoritative text on astronomy and the largest astronomical catalogue until the 17th century.
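A minimal numerical sketch of the deferent-and-epicycle construction described in the Apollonius entry above (the radii and angular speeds are arbitrary illustrative values, not ancient parameters): the planet rides a small circle whose centre moves along a larger Earth-centred circle, and the resulting geocentric longitude periodically runs backwards, reproducing apparent retrograde motion.

```python
# Toy deferent-and-epicycle model: detect when the geocentric longitude
# of the "planet" decreases, i.e. when its motion appears retrograde.
import math

R, r = 1.0, 0.4            # deferent and epicycle radii (arbitrary units)
w_def, w_epi = 1.0, 6.0    # angular speeds (radians per unit time), both prograde

def geocentric_longitude(t):
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    return math.atan2(y, x)

dt = 0.01
previous = geocentric_longitude(0.0)
for step in range(1, 700):
    t = step * dt
    current = geocentric_longitude(t)
    delta = (current - previous + math.pi) % (2 * math.pi) - math.pi  # unwrap angle
    if delta < 0:
        print(f"apparent retrograde motion begins near t = {t:.2f}")
        break
    previous = current
```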
Middle Ages
2nd century CE–5th century CE – Jain cosmology considers the loka, or universe, to be an uncreated entity that has existed since infinity, and describes its shape as similar to a man standing with legs apart and arm resting on his waist. This Universe, according to Jainism, is broad at the top, narrow at the middle and once again becomes broad at the bottom.
5th century (or earlier) – Buddhist texts speak of "hundreds of thousands of billions, countlessly, innumerably, boundlessly, incomparably, incalculably, unspeakably, inconceivably, immeasurably, inexplicably many worlds" to the east, and "infinite worlds in the ten directions".
5th century – Aryabhata writes a treatise on the motion of the planets, the Sun, the Moon and the stars. Aryabhata puts forward the theory of the rotation of the Earth on its own axis and explains that day and night are caused by this diurnal rotation. He models a geocentric universe with the Sun, Moon, and planets following circular and eccentric orbits with epicycles.
5th century – The Jewish Talmud gives an argument, with an explanation, for a finite universe.
5th century – Martianus Capella describes a modified geocentric model, in which the Earth is at rest in the center of the universe and circled by the Moon, the Sun, three planets and the stars, while Mercury and Venus circle the Sun, all surrounded by the sphere of fixed stars.
6th century – John Philoponus proposes a universe that is finite in time and argues against the ancient Greek notion of an infinite universe
7th century – The Quran says in Chapter 21: Verse 30 – "Have those who disbelieved not considered that the Heavens and the Earth were a joined entity, and We separated them".
9th–12th centuries – Al-Kindi (Alkindus), Saadia Gaon (Saadia ben Joseph) and Al-Ghazali (Algazel) support a universe that has a finite past and develop two logical arguments for the notion.
12th century – Fakhr al-Din al-Razi discusses Islamic cosmology, rejects Aristotle's idea of an Earth-centered universe, and, in the context of his commentary on the Quranic verse "All praise belongs to God, Lord of the Worlds," proposes that the universe has more than "a thousand worlds beyond this world."
12th century – Robert Grosseteste described the birth of the Universe in an explosion and the crystallisation of matter. He also put forward several new ideas such as rotation of the Earth around its axis and the cause of day and night. His treatise De Luce is the first attempt to describe the heavens and Earth using a single set of physical laws.
14th century – Jewish astronomer Levi ben Gershon (Gersonides) estimates the distance to the outermost orb of the fixed stars to be no less than 159,651,513,380,944 Earth radii, or about 100,000 light-years in modern units (this conversion is sketched numerically after this list).
14th century – Several European mathematicians and astronomers, including Nicole Oresme, develop the theory of Earth's rotation. Oresme also gives logical reasoning, empirical evidence and mathematical proofs for his notion.
15th century – Nicholas of Cusa proposes that the Earth rotates on its axis in his book, On Learned Ignorance (1440). Like Oresme, he also wrote about the possibility of the plurality of worlds.
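A quick check, using modern constants (assumed values, added here for illustration), of the conversion quoted in the Gersonides entry above from Earth radii to light-years:

```python
# Convert Gersonides' figure of 159,651,513,380,944 Earth radii to light-years.
EARTH_RADIUS_KM = 6371.0        # mean Earth radius
LIGHT_YEAR_KM = 9.4607e12       # kilometres per light-year

distance_km = 159_651_513_380_944 * EARTH_RADIUS_KM
print(f"{distance_km / LIGHT_YEAR_KM:.3g} light-years")  # ~1.1e5, i.e. ~100,000
```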
Renaissance
1501 – Indian astronomer Nilakantha Somayaji proposes a universe in which the planets orbit the Sun, but the Sun orbits the Earth.
1543 – Nicolaus Copernicus publishes his heliocentric model of the universe in his De revolutionibus orbium coelestium.
1576 – Thomas Digges modifies the Copernican system by removing its outer edge and replacing the edge with a star-filled unbounded space.
1584 – Giordano Bruno proposes a non-hierarchical cosmology, wherein the Copernican Solar System is not the center of the universe, but rather, a relatively insignificant star system, amongst an infinite multitude of others.
1588 – Tycho Brahe publishes his own Tychonic system, a blend between Ptolemy's classical geocentric model and Copernicus' heliocentric model, in which the Sun and the Moon revolve around the Earth, in the center of universe, and all other planets revolve around the Sun. It is a geo-heliocentric model similar to that described by Somayaji.
1600 – William Gilbert rejects the idea of a limiting sphere of the fixed stars for which no proof has been offered.
1609 – Galileo Galilei examines the skies and constellations through a telescope and concludes that the "fixed stars" which had been studied and mapped are only a tiny portion of the massive universe that lies beyond the reach of the naked eye. When in 1610 he aims his telescope at the faint strip of the Milky Way, he finds that it resolves into countless white star-like spots, presumably farther stars themselves.
1610 – Johannes Kepler uses the dark night sky to argue for a finite universe. Shortly afterwards, Kepler himself showed that Jupiter's moons move around the planet in the same way the planets orbit the Sun, making Kepler's laws universal.
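A minimal check, using modern orbital data for Jupiter's four large moons (assumed textbook values, not Kepler's own measurements), that T²/a³ is indeed the same for all of them, i.e. that Kepler's third law holds around Jupiter just as it does around the Sun:

```python
# Kepler's third law for the Galilean moons: T^2 / a^3 should be constant.
moons = {                       # semi-major axis [km], orbital period [days]
    "Io":       (421_700,   1.769),
    "Europa":   (671_100,   3.551),
    "Ganymede": (1_070_400, 7.155),
    "Callisto": (1_882_700, 16.689),
}

for name, (a_km, T_days) in moons.items():
    print(f"{name:9s} T^2/a^3 = {T_days**2 / a_km**3:.3e}")  # ~4.17e-17 for each
```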
Enlightenment to Victorian Era
1659 – Christiaan Huygens makes precise measurements of the angular distance between the Sun and Venus, which were based on the first absolute measurements of the Astronomical unit.
1672 – Jean Richer and Giovanni Domenico Cassini measure the Earth-Sun distance, the astronomical unit, to be about 138,370,000 km. Later it will be refined by others up to the current value of 149,597,870 km.
1675 – Ole Rømer uses the orbital mechanics of Jupiter's moons to estimate that the speed of light is about 227,000 km/s.
1687 – Isaac Newton's laws describe large-scale motion throughout the universe. The universal force of gravity suggested that stars could not simply be fixed or at rest, as their gravitational pulls cause "mutual attraction" and therefore cause them to move in relation to each other.
1704 – John Locke introduces the term "Solar System" into the English language, using it to refer to the Sun, planets, and comets as a whole. By then it had been established beyond doubt that planets are other worlds and stars are other distant suns, so the whole Solar System is actually only a small part of an immensely large universe, and something clearly distinct from it.
1718 – Edmund Halley discovers proper motion of stars, dispelling the concept of the "fixed stars".
1720 – Edmund Halley puts forth an early form of Olbers' paradox (if the universe is infinite, every line of sight would end at a star, thus the night sky would be entirely bright).
1729 – James Bradley discovers the aberration of light, which proved the Earth's motion around the Sun, and also provides a more accurate method to compute the speed of light closer to its actual value of about 300,000 km/s.
1744 – Jean-Philippe de Cheseaux puts forth an early form of Olbers' paradox.
1755 – Immanuel Kant asserts that the nebulae are really galaxies separate from, independent of, and outside the Milky Way Galaxy; he calls them island universes.
1781 – Charles Messier and his assistant Pierre Méchain publish the first catalogue of 110 nebulae and star clusters, the most prominent deep-sky objects that can easily be observed from Earth's Northern Hemisphere, so that they would not be confused with ordinary Solar System comets.
1785 – William Herschel proposes a heliocentric model of the universe that Earth's Sun is at or near the center of the universe, which at the time was assumed to only be the Milky Way Galaxy.
1791 – Erasmus Darwin pens the first description of a cyclical expanding and contracting universe in his poem The Economy of Vegetation.
1796 – Pierre Laplace re-states the nebular hypothesis for the formation of the Solar System from a spinning nebula of gas and dust.
1826 – Heinrich Wilhelm Olbers puts forth Olbers' paradox.
1832–1838 – Following over 100 years of unsuccessful attempts, Thomas Henderson, Friedrich Bessel, and Otto Struve measure the parallax of a few nearby stars; these are the first measurements of any distances outside the Solar System.
1842 – Christian Doppler proposes the redshift and blueshift effects, based on an analog effect found in sound. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848.
1848 – Edgar Allan Poe offers first correct solution to Olbers' paradox in Eureka: A Prose Poem, an essay that also suggests the expansion and collapse of the universe.
1860s – William Huggins develops astronomical spectroscopy; he shows that the Orion nebula is mostly made of gas, while the Andromeda nebula (later called Andromeda Galaxy) is probably dominated by stars.
1862 – By analysing the spectroscopic signature of the Sun and comparing it to those of other stars, Father Angelo Secchi determines that the Sun is itself also a star.
1887 – The Michelson–Morley experiment, intended to measure the relative motion of Earth through the (assumed) stationary luminiferous aether, yields a null result. This put an end to the centuries-old idea of the aether, dating back to Aristotle, and with it all the contemporary aether theories.
1897 – J. J. Thomson identifies the electrons as the constituent particles of the cathode rays, leading to the modern atomic model of matter.
1897 – William Thomson, 1st Baron Kelvin, based on the thermal radiation rate and the gravitational contraction forces, argues the age of the Sun to be no more than 20 million years – unless some energy source beyond what was then known was found.
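An order-of-magnitude sketch of the reasoning behind Kelvin's estimate in the entry above: if the Sun shone only by slow gravitational contraction, it could radiate roughly its gravitational binding energy, of order GM²/R, at its present luminosity L. Modern constants are used here and order-unity prefactors (which depend on the assumed density profile) are ignored, so the result is the right order of magnitude rather than Kelvin's exact figure.

```python
# Kelvin-Helmholtz timescale t ~ G*M^2 / (R*L) for the Sun, in years.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M = 1.989e30         # solar mass [kg]
R = 6.957e8          # solar radius [m]
L = 3.828e26         # solar luminosity [W]
YEAR_S = 3.156e7     # seconds per year

t_kh_years = G * M**2 / (R * L) / YEAR_S
print(f"~{t_kh_years / 1e6:.0f} million years")  # a few tens of millions of years
```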
1901–1950
1904 – Ernest Rutherford argues, in a lecture attended by Kelvin, that radioactive decay releases heat, providing the unknown energy source Kelvin had suggested, and ultimately leading to radiometric dating of rocks which reveals ages of billions of years for the Solar System bodies, hence the Sun and all the stars.
1905 – Albert Einstein publishes the Special Theory of Relativity, positing that space and time are not separate continua, and demonstrating that mass and energy are interchangeable.
1912 – Henrietta Leavitt discovers the period-luminosity law for Cepheid variable stars, which becomes a crucial step in measuring distances to other galaxies.
1913 – Niels Bohr publishes the Bohr model of the atom, which explains the spectral lines and definitively establishes the quantum-mechanical behaviour of matter.
1915 – Robert Innes discovers Proxima Centauri, the closest star to Earth after the Sun.
1915 – Albert Einstein publishes the General Theory of Relativity, showing that an energy density warps spacetime.
1917 – Willem de Sitter derives an isotropic static cosmology with a cosmological constant, as well as an empty expanding cosmology with a cosmological constant, termed a de Sitter universe.
1918 – Harlow Shapley's work on globular clusters shows that the heliocentric model of cosmology was wrong, and galactocentrism replaces heliocentrism as the dominant model of cosmology.
1919 – Arthur Stanley Eddington uses a solar eclipse to successfully test Albert Einstein's General Theory of Relativity.
1920 – The Shapley-Curtis Debate, on the distances to spiral nebulae, takes place at the Smithsonian.
1921 – The National Research Council (NRC) published the official transcript of the Shapley-Curtis Debate. Galaxies are finally recognized as objects beyond the Milky Way, and the Milky Way as a galaxy proper.
1922 – Vesto Slipher summarizes his findings on the spiral nebulae's systematic redshifts.
1922 – Alexander Friedmann finds a solution to the Einstein field equations which suggests a general expansion of space.
1923 – Edwin Hubble measures distances to a few nearby spiral nebulae (galaxies), the Andromeda Galaxy (M31), Triangulum Galaxy (M33), and NGC 6822. The distances place them far outside the Milky Way, and implies that fainter galaxies are much more distant, and the universe is composed of many thousands of galaxies.
1924 – Louis de Broglie asserts that moderately accelerated electrons must show an associated wave. This was later confirmed by the Davisson–Germer experiment in 1927.
1927 – Georges Lemaître discusses the creation event of an expanding universe governed by the Einstein field equations. From its solutions to the Einstein equations, he predicts the distance-redshift relation.
1928 – Paul Dirac realises that his relativistic version of the Schrödinger wave equation for electrons predicts the possibility of antielectrons, and hence antimatter. This was confirmed in 1932 by Carl D. Anderson.
1928 – Howard P. Robertson briefly mentions that Vesto Slipher's redshift measurements combined with brightness measurements of the same galaxies indicate a redshift-distance relation.
1929 – Edwin Hubble demonstrates the linear redshift-distance relationship (recession velocity proportional to distance, v ≈ H0 d; a numerical sketch follows this list) and thus shows the expansion of the universe.
1932 – Karl Guthe Jansky recognizes radio signals received from outer space as extrasolar, coming mainly from the direction of Sagittarius. They are the first evidence of emission from the center of the Milky Way, and the founding observations of the discipline of radio astronomy.
1933 – Edward Milne names and formalizes the cosmological principle.
1933 – Fritz Zwicky shows that the Coma cluster of galaxies contains large amounts of dark matter. This result agrees with modern measurements, but is generally ignored until the 1970s.
1934 – Georges Lemaître interprets the cosmological constant as due to a vacuum energy with an unusual perfect fluid equation of state.
1938 – Hans Bethe calculates the details of the two main energy-producing nuclear reactions that power the stars.
1938 – Paul Dirac suggests the large numbers hypothesis, that the gravitational constant may be small because it is decreasing slowly with time.
1948 – Ralph Alpher, Hans Bethe ("in absentia"), and George Gamow examine element synthesis in a rapidly expanding and cooling universe, and suggest that the elements were produced by rapid neutron capture.
1948 – Hermann Bondi, Thomas Gold, and Fred Hoyle propose steady state cosmologies based on the perfect cosmological principle.
1948 – George Gamow predicts the existence of the cosmic microwave background radiation by considering the behavior of primordial radiation in an expanding universe.
1950 – Fred Hoyle coins the term "Big Bang", saying that it was not derisive; it was just a striking image meant to highlight the difference between that and the Steady-State model.
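A minimal illustration of the linear redshift-distance relation noted in the 1929 entry above (as signposted there). The redshift and the Hubble constant used are illustrative modern values, not Hubble's own 1929 numbers, whose H0 was roughly seven times larger than today's accepted value:

```python
# Hubble's law: for small redshift z, recession velocity v ~ c*z and distance d ~ v/H0.
C_KM_S = 299_792.458     # speed of light [km/s]
H0 = 70.0                # assumed present-day Hubble constant [km/s/Mpc]

z = 0.01                 # example redshift small enough that v ~ c*z is adequate
v = C_KM_S * z           # recession velocity [km/s]
d = v / H0               # distance [Mpc]
print(f"z = {z}: v ~ {v:.0f} km/s, d ~ {d:.0f} Mpc")  # ~3000 km/s, ~43 Mpc
```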
1951–2000
1961 – Robert Dicke argues that carbon-based life can only arise when the gravitational force is small, because this is when burning stars exist; first use of the weak anthropic principle.
1963 – Maarten Schmidt discovers the first quasar; these soon provide a probe of the universe back to substantial redshifts.
1965 – Hannes Alfvén proposes the now-discounted concept of ambiplasma to explain baryon asymmetry and supports the idea of an infinite universe.
1965 – Martin Rees and Dennis Sciama analyze quasar source count data and discover that the quasar density increases with redshift.
1965 – Arno Penzias and Robert Wilson, astronomers at Bell Labs discover the 2.7 K microwave background radiation, which earns them the 1978 Nobel Prize in Physics. Robert Dicke, James Peebles, Peter Roll and David Todd Wilkinson interpret it as a relic from the Big Bang.
1966 – Stephen Hawking and George Ellis show that any plausible general relativistic cosmology is singular.
1966 – James Peebles shows that the hot Big Bang predicts the correct helium abundance.
1967 – Andrei Sakharov presents the requirements for baryogenesis, a baryon-antibaryon asymmetry in the universe.
1967 – John Bahcall, Wal Sargent, and Maarten Schmidt measure the fine-structure splitting of spectral lines in 3C191 and thereby show that the fine-structure constant does not vary significantly with time.
1967 – Robert Wagoner, William Fowler, and Fred Hoyle show that the hot Big Bang predicts the correct deuterium and lithium abundances.
1968 – Brandon Carter speculates that perhaps the fundamental constants of nature must lie within a restricted range to allow the emergence of life; first use of the strong anthropic principle.
1969 – Charles Misner formally presents the Big Bang horizon problem.
1969 – Robert Dicke formally presents the Big Bang flatness problem.
1970 – Vera Rubin and Kent Ford measure spiral galaxy rotation curves at large radii, showing evidence for substantial amounts of dark matter.
1973 – Edward Tryon proposes that the universe may be a large scale quantum mechanical vacuum fluctuation where positive mass-energy is balanced by negative gravitational potential energy.
1976 – Alexander Shlyakhter uses samarium ratios from the Oklo prehistoric natural nuclear fission reactor in Gabon to show that some laws of physics have remained unchanged for over two billion years.
1977 – Gary Steigman, David Schramm, and James Gunn examine the relation between the primordial helium abundance and number of neutrinos and claim that at most five lepton families can exist.
1980 – Alan Guth and Alexei Starobinsky independently propose the inflationary Big Bang universe as a possible solution to the horizon and flatness problems.
1981 – Viatcheslav Mukhanov and G. Chibisov propose that quantum fluctuations could lead to large scale structure in an inflationary universe.
1982 – The first CfA galaxy redshift survey is completed.
1982 – Several groups including James Peebles, J. Richard Bond and George Blumenthal propose that the universe is dominated by cold dark matter.
1983–1987 – The first large computer simulations of cosmic structure formation are run by Davis, Efstathiou, Frenk and White. The results show that cold dark matter produces a reasonable match to observations, but hot dark matter does not.
1988 – The CfA2 Great Wall is discovered in the CfA2 redshift survey.
1988 – Measurements of galaxy large-scale flows provide evidence for the Great Attractor.
1990 – The Hubble Space Telescope is launched. It is aimed primarily at deep-space objects.
1990 – Preliminary results from NASA's COBE mission confirm the cosmic microwave background radiation has a blackbody spectrum to an astonishing one part in 10⁵ precision, thus eliminating the possibility of an integrated starlight model proposed for the background by steady state enthusiasts.
1992 – Further COBE measurements discover the very small anisotropy of the cosmic microwave background, providing a "baby picture" of the seeds of large-scale structure when the universe was around 1/1100th of its present size and 380,000 years old.
1992 – First planetary system beyond the Solar System detected, around the pulsar PSR B1257+12.
1995 – The first planet around a Sun-like star is discovered, in orbit around the star 51 Pegasi.
1996 – The first Hubble Deep Field is released, providing a clear view of very distant galaxies when the universe was around one-third of its present age.
1998 – Controversial evidence for the fine-structure constant varying over the lifetime of the universe is first published.
1998 – The Supernova Cosmology Project and High-Z Supernova Search Team discover cosmic acceleration based on distances to Type Ia supernovae, providing the first direct evidence for a non-zero cosmological constant.
1999 – Measurements of the cosmic microwave background radiation with finer resolution than COBE (most notably by the BOOMERanG experiment; see Mauskopf et al. 1999, Melchiorri et al. 1999, de Bernardis et al. 2000) provide evidence for oscillations (the first acoustic peak) in the anisotropy angular spectrum, as expected in the standard model of cosmological structure formation. The angular position of this peak indicates that the geometry of the universe is close to flat.
2001–present
2001 – The 2dF Galaxy Redshift Survey (2dF) by an Australian/British team gave strong evidence that the matter density is near 25% of critical density. Together with the CMB results for a flat universe, this provides independent evidence for a cosmological constant or similar dark energy.
2002 – The Cosmic Background Imager (CBI) in Chile obtains images of the cosmic microwave background radiation with the highest angular resolution to date, 4 arc minutes. It also obtains the anisotropy spectrum at high resolutions not covered before, up to l ~ 3000, and finds a slight excess in power at high resolution (l > 2500) that is not yet completely explained, the so-called "CBI-excess".
2003 – NASA's Wilkinson Microwave Anisotropy Probe (WMAP) obtained full-sky detailed pictures of the cosmic microwave background radiation. The images can be interpreted to indicate that the universe is 13.7 billion years old (within one percent error), and are very consistent with the Lambda-CDM model and the density fluctuations predicted by inflation.
2003 – The Sloan Great Wall is discovered.
2004 – The Degree Angular Scale Interferometer (DASI) first obtained the E-mode polarization spectrum of the cosmic microwave background radiation.
2004 – Voyager 1 sends back the first data ever obtained from within the Solar System's heliosheath.
2005 – The Sloan Digital Sky Survey (SDSS) and 2dF redshift surveys both detected the baryon acoustic oscillation feature in the galaxy distribution, a key prediction of cold dark matter models.
2006 – Three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data.
2009–2013 – Planck, a space observatory operated by the European Space Agency (ESA), mapped the anisotropies of the cosmic microwave background radiation, with increased sensitivity and small angular resolution.
2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ, continue to be consistent with the standard Lambda-CDM model.
2014 – Astrophysicists of the BICEP2 collaboration announce the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, in June, lowered confidence in the cosmic inflation findings was reported.
2016 – The LIGO Scientific Collaboration and Virgo Collaboration announce that gravitational waves were directly detected by the two LIGO detectors. The waveform matched the prediction of general relativity for a gravitational wave emanating from the inward spiral and merger of a pair of black holes of around 36 and 29 solar masses and the subsequent "ringdown" of the single resulting black hole. A second detection verified that GW150914 was not a fluke, opening an entirely new branch of astrophysics, gravitational-wave astronomy.
2019 – The Event Horizon Telescope Collaboration publishes the image of the black hole at the center of the M87 Galaxy. This is the first time astronomers have ever captured an image of a black hole, which once again proves the existence of black holes and thus helps verify Einstein's general theory of relativity. This was done by utilising very-long-baseline interferometry.
2020 – Physicist Lucas Lombriser of the University of Geneva presents a possible way of reconciling the two significantly different determinations of the Hubble constant by proposing the notion of a surrounding vast "bubble", 250 million light years in diameter, that is half the density of the rest of the universe.
2020 – Scientists publish a study which suggests that the Universe may not be expanding at the same rate in all directions and that the widely accepted isotropy hypothesis might therefore be wrong. While previous studies had already suggested this, the study is the first to examine galaxy clusters in X-rays and, according to Norbert Schartel, carries much greater significance. The study found a consistent and strong directional dependence of the normalization parameter A, or equivalently the Hubble constant H0 – deviations that others had earlier described as indicating a "crisis of cosmology". Beyond the potential cosmological implications, it shows that studies which assume perfect isotropy in the properties of galaxy clusters and their scaling relations can produce strongly biased results.
2020 – Scientists report that new measurements of the quasar ULAS J1120+0641 verify 2011–2014 results suggesting a spatial variation of the fine-structure constant, the basic physical constant governing the strength of electromagnetic interactions between charged particles. Such a directional variation of natural constants would have implications for theories on the emergence of habitability in the Universe, and would be at odds with the widely accepted assumption of constant natural laws and with the standard model of cosmology, which is based on an isotropic Universe.
2021 – James Webb Space Telescope is launched.
2023 – Astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
See also
Cosmology
Physical cosmology
Chronology of the universe
Graphical timeline from Big Bang to Heat Death
List of cosmologists
Interpretations of quantum mechanics
Non-standard cosmology
Historical development of hypotheses
Timeline of Solar System astronomy
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
Timeline of cosmic microwave background astronomy
Historical models of the Solar System
Fixed stars
Belief systems
Buddhist cosmology
Jain cosmology
Jainism and non-creationism
Hindu cosmology
Maya mythology
Others
Cosmology@Home
References
Bibliography
Bunch, Bryan, and Alexander Hellemans, The History of Science and Technology: A Browser's Guide to the Great Discoveries, Inventions, and the People Who Made Them from the Dawn of Time to Today.
P. de Bernardis et al., astro-ph/0004404, Nature 404 (2000) 955–959.
P. Mauskopf et al., astro-ph/9911444, Astrophys. J. 536 (2000) L59–L62.
A. Melchiorri et al., astro-ph/9911445, Astrophys. J. 536 (2000) L63–L66.
A. Readhead et al., Polarization observations with the Cosmic Background Imager, Science 306 (2004), 836–844.
Astronomy timelines
Physical cosmology
Lists of inventions or discoveries
Physics timelines | Timeline of cosmological theories | [
"Physics",
"Astronomy"
] | 8,088 | [
"Astronomical sub-disciplines",
"History of astronomy",
"Theoretical physics",
"Astronomy timelines",
"Astrophysics",
"Physical cosmology"
] |
58,950 | https://en.wikipedia.org/wiki/Galaxy%20cluster | A galaxy cluster, or a cluster of galaxies, is a structure that consists of anywhere from hundreds to thousands of galaxies that are bound together by gravity, with typical masses ranging from 10^14 to 10^15 solar masses. They are the second-largest known gravitationally bound structures in the universe after some superclusters (of which only one, the Shapley Supercluster, is known to be bound). They were believed to be the largest known structures in the universe until the 1980s, when superclusters were discovered. One of the key features of clusters is the intracluster medium (ICM). The ICM consists of heated gas between the galaxies and has a peak temperature between 2–15 keV that is dependent on the total mass of the cluster. Galaxy clusters should not be confused with galactic clusters (also known as open clusters), which are star clusters within galaxies, or with globular clusters, which typically orbit galaxies. Small aggregates of galaxies are referred to as galaxy groups rather than clusters of galaxies. The galaxy groups and clusters can themselves cluster together to form superclusters.
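As a rough illustration, the ICM temperatures above (quoted in keV, as is conventional in X-ray astronomy) can be converted to kelvin with the standard Boltzmann-constant relation T = E / k_B; a minimal Python sketch:

# Convert an X-ray gas "temperature" quoted in keV to kelvin: T = E / k_B.
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def kev_to_kelvin(energy_kev: float) -> float:
    return energy_kev * 1e3 / K_B_EV_PER_K

for e in (2.0, 15.0):  # the 2-15 keV range quoted for the ICM
    print(f"{e:4.1f} keV ~ {kev_to_kelvin(e):.1e} K")
# Prints roughly 2.3e7 K and 1.7e8 K, i.e. tens to hundreds of millions of kelvin.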
Notable galaxy clusters in the relatively nearby Universe include the Virgo Cluster, Fornax Cluster, Hercules Cluster, and the Coma Cluster. A very large aggregation of galaxies known as the Great Attractor, dominated by the Norma Cluster, is massive enough to affect the local expansion of the Universe. Notable galaxy clusters in the distant, high-redshift universe include SPT-CL J0546-5345 and SPT-CL J2106-5844, the most massive galaxy clusters found in the early Universe. In the last few decades, they are also found to be relevant sites of particle acceleration, a feature that has been discovered by observing non-thermal diffuse radio emissions, such as radio halos and radio relics. Using the Chandra X-ray Observatory, structures such as cold fronts and shock waves have also been found in many galaxy clusters.
Basic properties
Galaxy clusters typically have the following properties:
They contain 100 to 1,000 galaxies, hot X-ray emitting gas and large amounts of dark matter. Details are described in the "Composition" section.
The distribution of the three components is approximately the same in the cluster.
They have total masses of 10^14 to 10^15 solar masses.
They typically have a diameter from 1 to 5 Mpc (see 10^23 m for distance comparisons).
The spread of velocities for the individual galaxies is about 800–1000 km/s (a rough virial estimate tying these figures to the quoted masses is sketched below).
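As a rough cross-check, a crude virial estimate M ~ sigma^2 * R / G relates the velocity spread and diameter listed above to the quoted total masses; the sketch below assumes round numbers taken from those ranges and omits order-unity geometric factors:

# Crude virial mass estimate M ~ sigma^2 * R / G for a galaxy cluster.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22         # metres per megaparsec
M_SUN = 1.989e30       # kg

def virial_mass_kg(sigma_km_s: float, radius_mpc: float) -> float:
    sigma = sigma_km_s * 1e3       # m/s
    radius = radius_mpc * MPC      # m
    return sigma**2 * radius / G   # kg

for sigma, r in ((800, 1.0), (1000, 2.5)):   # values taken from the ranges above
    print(f"sigma = {sigma} km/s, R = {r} Mpc -> ~{virial_mass_kg(sigma, r) / M_SUN:.1e} M_sun")
# Prints roughly 1.5e14 and 6e14 solar masses, consistent with the 10^14-10^15 range.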
Composition
A galaxy cluster has three main components: the galaxies themselves, the hot X-ray-emitting intracluster medium, and dark matter.
Classification
Galaxy clusters are categorized as type I, II, or III based on morphology.
Galaxy clusters as measuring instruments
Gravitational redshift
Galaxy clusters have been used by Radek Wojtak from the Niels Bohr Institute at the University of Copenhagen to test predictions of general relativity: energy loss from light escaping a gravitational field. Photons emitted from the center of a galaxy cluster should lose more energy than photons coming from the edge of the cluster because gravity is stronger in the center. Light emitted from the center of a cluster has a longer wavelength than light coming from the edge. This effect is known as gravitational redshift. Using the data collected from 8000 galaxy clusters, Wojtak was able to study the properties of gravitational redshift for the distribution of galaxies in clusters. He found that the light from the clusters was redshifted in proportion to the distance from the center of the cluster as predicted by general relativity. The result also strongly supports the Lambda-Cold Dark Matter model of the Universe, according to which most of the cosmic matter is dark matter that interacts with ordinary matter essentially only through gravity.
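A minimal sketch, assuming the weak-field relation z_grav ~ ΔΦ/c² and a crude ΔΦ ~ GM/R (not the profile-based method used in the actual analysis), suggests the expected size of the effect expressed as an equivalent velocity c·z_grav:

# Order-of-magnitude gravitational redshift from a cluster potential well,
# expressed as the equivalent velocity c * z_grav. The Delta_Phi ~ G*M/R
# approximation is deliberately crude; real analyses use full density profiles.
G = 6.674e-11          # gravitational constant, SI
C = 2.998e8            # speed of light, m/s
MPC = 3.086e22         # metres per megaparsec
M_SUN = 1.989e30       # kg

def grav_redshift_velocity(mass_msun: float, radius_mpc: float) -> float:
    phi = G * mass_msun * M_SUN / (radius_mpc * MPC)  # potential depth, J/kg
    return phi / C                                    # = c * (phi / c^2), in m/s

for m in (1e14, 1e15):
    print(f"M = {m:.0e} M_sun, R = 1 Mpc -> ~{grav_redshift_velocity(m, 1.0) / 1e3:.1f} km/s")
# Gives of order 1-15 km/s, i.e. z_grav ~ 1e-6 to 1e-5, which is why the signal
# only emerges after stacking thousands of clusters (8000 in the study above).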
Gravitational lensing
Galaxy clusters are also used for their strong gravitational potential as gravitational lenses to boost the reach of telescopes. The gravitational distortion of space-time occurs near massive galaxy clusters and bends the path of photons to create a cosmic magnifying glass. This can be done with photons of any wavelength from the optical to the X-ray band. The latter is more difficult, because galaxy clusters emit a lot of X-rays. However, X-ray emission may still be detected when combining X-ray data to optical data. One particular case is the use of the Phoenix galaxy cluster to observe a dwarf galaxy in its early high energy stages of star formation.
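A minimal sketch of the lensing scale, assuming a point-mass lens and hypothetical angular-diameter distances (not values for any particular cluster), evaluates the Einstein radius theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)):

import math

# Einstein radius of a gravitational lens in the point-mass approximation.
# D_l, D_s, D_ls are angular-diameter distances to the lens, to the source,
# and between lens and source; the values used below are hypothetical.
G, C = 6.674e-11, 2.998e8
MPC = 3.086e22
M_SUN = 1.989e30

def einstein_radius_arcsec(mass_msun, d_l_mpc, d_s_mpc, d_ls_mpc):
    m = mass_msun * M_SUN
    d_l, d_s, d_ls = (d * MPC for d in (d_l_mpc, d_s_mpc, d_ls_mpc))
    theta_rad = math.sqrt(4 * G * m / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta_rad) * 3600

print(f"theta_E ~ {einstein_radius_arcsec(1e15, 1000, 2000, 1400):.0f} arcsec")
# Roughly 75 arcsec for this idealised point-mass case; real clusters are
# extended, so observed Einstein radii are typically somewhat smaller.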
List
Gallery
Images
Videos
See also
Abell catalogue
Intracluster medium
List of Abell clusters
References
Cluster
Articles containing video clips
Types of groupings | Galaxy cluster | [
"Astronomy"
] | 922 | [
"Galaxy clusters",
"Astronomical objects"
] |
58,952 | https://en.wikipedia.org/wiki/Timeline%20of%20astronomical%20maps%2C%20catalogs%2C%20and%20surveys | Timeline of astronomical maps, catalogs and surveys
c. 1800 BC — Babylonian star catalog (see Babylonian star catalogues)
c. 1370 BC — Observations for the Babylonian MUL.APIN (an astronomical catalog).
c. 350 BC — Shi Shen's star catalog has almost 800 entries
c. 300 BC — star catalog of Timocharis of Alexandria
c. 134 BC — Hipparchus makes a detailed star map
c. 150 — Ptolemy completes his Almagest, which contains a catalog of stars, observations of planetary motions, and treatises on geometry and cosmology
c. 705 — Dunhuang Star Chart, a manuscript star chart from the Mogao Caves at Dunhuang
c. 750 — The first Zij treatise, Az-Zij ‛alā Sinī al-‛Arab, written by Ibrāhīm al-Fazārī and Muḥammad ibn Ibrāhīm al-Fazārī
c. 777 — Yaʿqūb ibn Ṭāriq's Az-Zij al-Mahlul min as-Sindhind li-Darajat Daraja
c. 830 — Muhammad ibn Musa al-Khwarizmi's Zij al-Sindhind
c. 840 — Al-Farghani's Compendium of the Science of the Stars
c. 900 — Al-Battani's Az-Zij as-Sabi
964 — Abd al-Rahman al-Sufi (Azophi)'s star catalog Book of the Fixed Stars
1031 — Al-Biruni's al-Qanun al-Mas'udi, making first use of a planisphere projection, and discussing the use of the astrolabe and the armillary sphere.
1088 — The first almanac is the Almanac of Azarqueil written by Abū Ishāq Ibrāhīm al-Zarqālī (Azarqueil)
1115–1116 — Al-Khazini's Az-Zij as-Sanjarī (Sinjaric Tables)
c. 1150 — Gerard of Cremona publishes Tables of Toledo based on the work of Azarqueil
1252–1270 — Alfonsine tables recorded by order of Alfonso X
1272 — Nasir al-Din al-Tusi's Zij-i Ilkhani (Ilkhanic Tables)
1395 — Cheonsang Yeolcha Bunyajido star map created at the order of King Taejo
c. 1400 — Jamshid al-Kashi's Khaqani Zij
1437 — Publication of Ulugh Beg's Zij-i-Sultani
1551 — Prussian Tables by Erasmus Reinhold
late 16th century — Tycho Brahe updates Ptolemy's Almagest
1577–1580 — Taqi ad-Din Muhammad ibn Ma'ruf's Unbored Pearl
1598 — Tycho Brahe publishes his "Thousand Star Catalog"
1603 — Johann Bayer's Uranometria
1627 — Johannes Kepler publishes his Rudolphine Tables of 1006 stars from Tycho plus 400 more
1678 — Edmond Halley publishes a catalog of 341 southern stars, the first systematic southern sky survey
1712 — Isaac Newton and Edmond Halley publish a catalog based on data from the Astronomer Royal, John Flamsteed, who had left all his data under seal; the official version would not be released for another decade.
1725 — Posthumous publication of John Flamsteed's Historia Coelestis Britannica
1771 — Charles Messier publishes his first list of nebulae
1824 — Urania's Mirror by Sidney Hall
1862 — Friedrich Wilhelm Argelander publishes his final edition of the Bonner Durchmusterung catalog of stars north of declination -1°.
1864 — John Herschel publishes the General Catalogue of nebulae and star clusters
1887 — Paris conference institutes Carte du Ciel project to map entire sky to 14th magnitude photographically
1890 — John Dreyer publishes the New General Catalogue of nebulae and star clusters
1932 — Harlow Shapley and Adelaide Ames publish A Survey of the External Galaxies Brighter than the Thirteenth Magnitude, later known as the Shapley-Ames Catalog
1948 — Antonín Bečvář publishes the Skalnate Pleso Atlas of the Heavens (Atlas Coeli Skalnaté Pleso 1950.0)
1950–1957 — Completion of the Palomar Observatory Sky Survey (POSS) with the Palomar 48-inch Schmidt optical reflecting telescope. The exact date quoted varies by source.
1962 — A.S. Bennett of the Cambridge Radio Astronomy Group publishes the Revised 3C Catalogue of 328 radio sources
1965 — Gerry Neugebauer and Robert Leighton begin a 2.2 micrometre sky survey with a 1.6-meter telescope on Mount Wilson
1982 — IRAS space observatory completes an all-sky mid-infrared survey
1990 — Publication of APM Galaxy Survey of 2+ million galaxies, to study large-scale structure of the cosmos
1991 — ROSAT space observatory begins an all-sky X-ray survey
1993 — Start of the 20 cm VLA FIRST survey
1997 — Two Micron All Sky Survey (2MASS) commences, first version of Hipparcos Catalogue published
1998 — Sloan Digital Sky Survey commences
2003 — 2dF Galaxy Redshift Survey published; 2MASS completes
2012 — On March 14, 2012, a new atlas and catalog of the entire infrared sky as imaged by Wide-field Infrared Survey Explorer was released.
2020 — On July 19, 2020, after a 20-year-long survey, astrophysicists of the Sloan Digital Sky Survey published the largest, most detailed 3D map of the universe so far, filling a gap of 11 billion years in its expansion history and providing data which support the theory of a flat geometry of the universe and confirm that different regions seem to be expanding at different speeds.
2020 — On October 8, 2020, scientists released the largest and most detailed 3D map of the Universe to date, called "PS1-STRM". The catalog, released through the Mikulski Archive for Space Telescopes (MAST), was created using artificial neural networks and combines data from the Sloan Digital Sky Survey and other surveys. Users can query the dataset online or download it in its entirety (~300 GB).
2021 — A celestial map is published to the journal Astronomy & Astrophysics identifying over 250,000 supermassive black holes, using data from 52 stations across nine different countries in Europe.
See also
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
List of asteroid close approaches to Earth
List of potentially habitable exoplanets
List of nearby stellar associations and moving groups
References
Astronomical maps, catalogs, and surveys
Works about astronomy
Maps
World maps | Timeline of astronomical maps, catalogs, and surveys | [
"Astronomy"
] | 1,354 | [
"Astronomy timelines",
"Works about astronomy",
"History of astronomy"
] |
58,953 | https://en.wikipedia.org/wiki/Timeline%20of%20telescopes%2C%20observatories%2C%20and%20observing%20technology | Timeline of telescopes, observatories, and observing technology.
Before the Common Era (BCE)
1900s BCE
Taosi Astronomical Observatory, Xiangfen County, Linfen City, Shanxi Province, China
1500s BCE
Shadow clocks invented in ancient Egypt and Mesopotamia
600s BCE
11th–7th century BCE, Zhou dynasty astronomical observatory (灵台) in Fenghao (today's Xi'an), China
200s BCE
Thirteen Towers solar observatory, Chankillo, Peru
Antikythera Mechanism, a geared astronomical computer that calculates lunar and solar eclipses, the positions of the Sun and the Moon, and the lunar phase (age of the Moon), and incorporates several lunisolar calendars, including the Olympic Games calendar. It is at the National Archaeological Museum, Athens, Greece.
100s BCE
220-206 BCE, Han dynasty astronomical observatory (灵台) in Chang'an and Luoyang. During East Han dynasty, astronomical observatory (灵台) built in Yanshi, Henan Province, China
220-150 BCE, Astrolabe invented by Apollonius of Perga
Common Era (CE)
400s
5th century – Observatory at Ujjain, India
600s
632–647 – Cheomseongdae observatory is built in the reign of Queen Seondeok at Gyeongju, then the capital of Silla (present day South Korea)
618–1279 – Tang dynasty-Song dynasty, observatories built in Chang'an, Kaifeng, Hangzhou, China
700s
700–96 – Brass astrolabe constructed by Muhammad al-Fazari based on Hellenistic sources
800s
9th century – quadrant invented by Muhammad ibn Mūsā al-Khwārizmī in 9th century Baghdad and is used for astronomical calculations
800–33 – The first modern observatory research institute built in Baghdad, Iraq, by Arabic astronomers during time of Al-Mamun
825–35 – Al-Shammisiyyah observatory by Habash al-Hasib al-Marwazi in Baghdad, Iraq
900s
10th century – Large astrolabe of diameter 1.4 meters constructed by Ibn Yunus
994 – First sextant constructed in Ray, Iran, by Abu-Mahmud al-Khujandi. It was a very large mural sextant that achieved a high level of accuracy for astronomical measurements.
1000s
1000 – Mokattam observatory in Egypt for Al-Hakim bi-Amr Allah
11th century – Planisphere invented by Biruni
11th century – Universal latitude-independent astrolabe invented by Abū Ishāq Ibrāhīm al-Zarqālī (Arzachel)
1023 – Hamedan observatory in Persia
c. 1030 – Treasury of Optics by Ibn al-Haytham (Alhazen) of Iraq and Egypt
1074–92 – Malikshah Observatory at Isfahan used by Omar Khayyám
1100s
1100–50 – Jabir ibn Aflah develops instruments resembling and perhaps inspiring the torquetum, an observational instrument and mechanical analog computer device
1119–25 – Cairo al-Bataihi observatory for Al-Afdal Shahanshah
1200s
1259 – Maragheh observatory and library of Nasir al-Din al-Tusi built in Persia under Hulagu Khan
c. 1270 – Terrace for Managing Heaven 26 observatory network of Guo Shoujing under Khubilai Khan
1276 – Dengfeng Star Observatory Platform, Gaocheng, Dengfeng City, Henan Province, China
1300s
1371 – The idea of using hours of equal time length throughout the year in a sundial was the innovation of Ibn al-Shatir
1400s
1420 – Samarkand observatory of Ulugh Beg
1442 – Beijing Ancient Observatory in China
1467–71 – Observatory at Nagyvarad Oradea, Kingdom of Hungary for Matthias Corvinus. Tabula Varadensis.
1472 – The Nuremberg observatory of Regiomontanus and Bernhard Walther.
1500s
1560 – Kassel observatory under Landgrave Wilhelm IV of Hesse
1575–80 – Constantinople Observatory of Taqi ad-Din under Sultan Murad III
1576 – Royal Danish Astronomical Observatory Uraniborg at Hven by Tycho Brahe
1577–80 – Taqi al-Din invents a mechanical astronomical clock that measures time in seconds, one of the most important innovations in 16th-century practical astronomy, as previous clocks were not accurate enough to be used for astronomical purposes.
1577–80 – Taqi al-Din invents framed sextant
1581 – Royal Danish Astronomical Observatory Stjerneborg at Hven by Tycho Brahe
1600s
1600 – Prague observatory in Benátky nad Jizerou by Tycho Brahe
1603 – Johann Bayer's Uranometria is published
1608 – Hans Lippershey tries to patent an optical refracting telescope, the first recorded functional telescope
1609 – Galileo Galilei builds his first optical refracting telescope
1616 – Niccolò Zucchi experiments with a reflecting telescope
1633 – Construction of Leiden University Observatory
1641 – William Gascoigne invents telescope cross hairs
1641 – Danzig/Gdansk observatory of Jan Hevelius
1642 – Copenhagen University Royal observatory
1661 – James Gregory proposes an optical reflecting telescope with parabolic mirrors
1667 – Paris Observatory
1668 – Isaac Newton constructs the first "practical" reflecting telescope, the Newtonian telescope
1672 – Laurent Cassegrain designs the Cassegrain telescope
1675 – Royal Greenwich Observatory of England
1684 – Christiaan Huygens publishes "Astroscopia Compendiaria" in which he described the design of very long aerial telescopes
1700s
1704 – First observatory at Cambridge University (based at Trinity College)
1724 – Indian observatory of Sawai Jai Singh at Delhi
1725 – St. Petersburg observatory at Royal Academy
1732 – Indian observatories of Sawai Jai Singh at Varanasi, Ujjain, Mathura, Madras
1733 – Chester Moore Hall invents the achromatic lens refracting telescope
1734 – Indian observatory of Sawai Jai Singh at Jaipur
1753 – Real Observatorio de Cádiz (Spain)
1753 – Vilnius Observatory at Vilnius University, Lithuania
1758 – John Dollond reinvents the achromatic lens
1761 – Joseph-Nicolas Delisle's network of 62 observing stations for observing the transit of Venus
1769 – Short reflectors used at a network of 63 stations for the transit of Venus
1774 – Vatican Observatory (Specola Vaticana), originally established as the Observatory of the Roman College.
1780 – Florence Specola observatory
1789 – William Herschel finishes a 49-inch (1.2 m) optical reflecting telescope, located in Slough, England
1798 – Real Observatorio de la Isla de Léon (actualmente Real Instituto y Observatorio de la Armada) (Spain)
1800s
1803 National Astronomical Observatory (Colombia), the first observatory in the Americas
1836 Swathithirunal opened Trivandrum observatory
1839 Louis Jacques Mandé Daguerre (inventor of the daguerreotype photographic process) attempts to photograph the Moon. Tracking errors in guiding the telescope during the long exposure meant the photograph came out as an indistinct fuzzy spot
1840 – John William Draper makes a successful photographic image of the Moon, the first astronomical photograph
1845 – Lord Rosse finishes the Birr Castle optical reflecting telescope, located in Parsonstown, Ireland
1849 – Santiago observatory set up by USA, later becomes Chilean National Observatory (now part of the University of Chile)
1859 – Kirchhoff and Bunsen develop spectroscopy
1864 – Herschel's so-called GC (General Catalogue) of nebulae and star clusters published
1868 – Janssen and Lockyer discover Helium observing spectra of the Sun
1871 – German Astronomical Association organized network of 13 (later 16) observatories for stellar proper motion studies
1863 – William Allen Miller and Sir William Huggins use the photographic wet collodion plate process to obtain the first ever photographic spectrogram of a star, Sirius and Capella.
1872 – Henry Draper photographs a spectrum of Vega that shows absorption lines.
1878 – Dreyer published a supplement to the GC containing about 1,000 new objects; this work was later expanded into the New General Catalogue
1883 – Andrew Ainslie Common uses the photographic dry plate process and a 36-inch (91 cm) reflecting telescope in his backyard to record 60 minute exposures of the Orion nebula that for the first time showed stars too faint to be seen by the human eye.
1887 – Paris conference institutes Carte du Ciel project to map entire sky to 14th magnitude photographically
1888 – First light of 91cm refracting telescope at Lick Observatory, on Mount Hamilton near San Jose, California
1889 – Astronomical Society of the Pacific founded
1890 – Albert A. Michelson proposes the stellar interferometer
1892 – George Ellery Hale finishes a spectroheliograph, which allows the Sun to be photographed in the light of one element only
1897 – Alvan Clark finishes the Yerkes optical refracting telescope, located in Williams Bay, Wisconsin
1900s
1902 – Dominion Observatory, Ottawa, Ontario, Canada established
1904 – Observatories of the Carnegie Institution of Washington founded
1907 – F.C. Brown and Joel Stebbins develop a selenium cell photometer at the University of Illinois Observatory.
1910s
1912 – Joel Stebbins and Jakob Kunz begin to use a photometer using a photoelectric cell at the University of Illinois Observatory.
1917 – Mount Wilson optical reflecting telescope begins operation, located in Mount Wilson, California
1918 – 1.8m Plaskett Telescope begins operation at the Dominion Astrophysical Observatory, Victoria, British Columbia, Canada
1919 – International Astronomical Union (IAU) founded
1930s
1930 – Bernard-Ferdinand Lyot invents the coronagraph
1930 – Karl Jansky builds a 30-meter-long rotating aerial radio telescope. This was the first radio telescope.
1933 – Bernard-Ferdinand Lyot invents the Lyot filter
1934 – Bernhard Schmidt finishes the first Schmidt optical reflecting telescope
1936 – Palomar Schmidt optical reflecting telescope begins operation, located in Palomar, California
1937 – Grote Reber builds a radio telescope
1940s
1941 – Dmitri Dmitrievich Maksutov invents the Maksutov telescope which is adopted by major observatories in the Soviet Union and internationally. It is now also a popular design with amateur astronomers
1946 – Martin Ryle and his group perform the first astronomical observations with a radio interferometer
1947 – Bernard Lovell and his group complete the Jodrell Bank non-steerable radio telescope
1949 – Palomar Schmidt optical reflecting telescope begins operation, located in Palomar, California
1949 – Palomar optical reflecting telescope (Hale Telescope) begins regular operation, located in Palomar, California
1950s
1953 – Luoxue Mountain Cosmic Rays Research Center, Yunnan Province, in China founded
1954 – Earth rotation aperture synthesis suggested (see e.g. Christiansen and Warburton (1955))
1956 – Dwingeloo Radio Observatory 25 m telescope completed, Dwingeloo, Netherlands
1957 – Bernard Lovell and his group complete the Jodrell Bank 250-foot (75 m) steerable radio telescope (the Lovell Telescope)
1957 – Peter Scheuer publishes his P(D) method for obtaining source counts of spatially unresolved sources
1959 – Radio Observatory of the University of Chile, located at Maipú, Chile founded
1959 – The 3C catalogue of radio sources is published (revised in 1962)
1959 – The Shane Telescope Opened at Lick Observatory
1960s
1960 – Owens Valley 27-meter radio telescopes begin operation, located in Big Pine, California
1961 – Parkes 64-metre radio telescope begins operation, located near Parkes, Australia
1962 – European Southern Observatory (ESO) founded
1962 – Kitt Peak solar observatory founded
1962 – Green Bank, West Virginia 90m radio telescope
1962 – Orbiting Solar Observatory 1 satellite launched
1963 – Arecibo 300-meter radio telescope begins operation, located in Arecibo, Puerto Rico
1964 – Martin Ryle's radio interferometer begins operation, located in Cambridge, England
1965 – Owens Valley 40-meter radio telescope begins operation, located in Big Pine, California
1967 – First VLBI images, with 183 km baseline
1969 – Observations start at Big Bear Solar Observatory, located in Big Bear, California
1969 – Las Campanas Observatory
1970s
1970 – Cerro Tololo optical reflecting telescope begins operation, located in Cerro Tololo, Chile
1970 – Kitt Peak National Observatory optical reflecting telescope begins operation, located near Tucson, Arizona
1970 – Uhuru x-ray telescope satellite
1970 – Antoine Labeyrie performs the first high-resolution optical speckle interferometry observations
1970 – Westerbork Synthesis Radio Telescope completed, near Westerbork, Netherlands
1972 – 100 m Effelsberg radio telescope inaugurated (Germany)
1973 – UK Schmidt Telescope 1.2 metre optical reflecting telescope begins operation, located in Anglo-Australian Observatory near Coonabarabran, Australia
1974 – Anglo-Australian Telescope optical reflecting telescope begins operation, located in Anglo-Australian Observatory near Coonabarabran, Australia
1975 – Gerald Smith, Frederick Landauer, and James Janesick use a CCD to observe Uranus, the first astronomical CCD observation
1975 – Antoine Labeyrie builds the first two-telescope optical interferometer
1976 – The 6-m BTA-6 (Bolshoi Teleskop Azimutalnyi or “Large Altazimuth Telescope”) goes into operation on Mt. Pashtukhov in the Russian Caucasus
1978 – Multiple Mirror equivalent optical/infrared reflecting telescope begins operation, located in Amado, Arizona
1978 – International Ultraviolet Explorer (IUE) telescope satellite
1978 – Einstein High Energy Astronomy Observatory x-ray telescope satellite
1979 – UKIRT infrared reflecting telescope begins operation, located at Mauna Kea Observatory, Hawaii
1979 – Canada-France-Hawaii optical reflecting telescope begins operation, located at Mauna Kea Observatory, Hawaii
1979 – NASA Infrared Telescope Facility infrared reflecting telescope begins operation, located at Mauna Kea, Hawaii
1980s
1980 – Completion of construction of the VLA, located in Socorro, New Mexico
1983 – Infrared Astronomical Satellite (IRAS) telescope
1984 – IRAM 30-m telescope at Pico Veleta near Granada, Spain completed
1987 – 15-m James Clerk Maxwell Telescope UK submillimetre telescope installed at Mauna Kea Observatory
1987 – 15-m Swedish-ESO Submillimetre Telescope (SEST) installed at the ESO La Silla Observatory
1988 – Australia Telescope Compact Array aperture synthesis radio telescope begins operation, located near Narrabri, Australia
1989 – Cosmic Background Explorer (COBE) satellite
1990s
1990 – Hubble 2.4m space Telescope launched, mirror found to be flawed
1991 – Compton Gamma Ray Observatory satellite
1993 – Keck 10-meter optical/infrared reflecting telescope begins operation, located at Mauna Kea, Hawaii
1993 – Very Long Baseline Array of 10 dishes
1995 – Cambridge Optical Aperture Synthesis Telescope (COAST)—the first very high resolution optical astronomical images (from aperture synthesis observations)
1995 – Giant Metrewave Radio Telescope of thirty 45 m dishes at Pune
1996 – Keck 2 10-meter optical/infrared reflecting telescope begins operation, located at Mauna Kea, Hawaii
1997 – The Japanese HALCA satellite begins operations, producing first VLBI observations from space, 25,000 km maximum baseline
1998 – First light at VLT1, the 8.2 m ESO telescope
2000s
2001 – First light at the Keck Interferometer. Single-baseline operations begin in the near-infrared.
2001 – First light at VLTI interferometry array. Operations on the interferometer start with single-baseline near-infrared observations with the 103 m baseline.
2005 – First imaging with the VLTI using the AMBER optical aperture synthesis instrument and three VLT telescopes.
2005 – First light at SALT, the largest optical telescope in the Southern Hemisphere, with a hexagonal primary mirror of 11.1 by 9.8 meters.
2007 – First light at Gran Telescopio de Canarias (GTC), in Spain, the largest optical telescope in the world with an effective diameter of 10.4 meters.
2021 — James Webb Space Telescope (JWST) was launched on 25 December 2021 on an ESA Ariane 5 rocket from Kourou, French Guiana, and will succeed the Hubble Space Telescope as NASA's flagship mission in astrophysics.
2023 — Euclid was launched on 1 July 2023 on a Falcon 9 Block 5 rocket from Cape Canaveral, Florida, to study dark matter and dark energy.
2023 — XRISM was launched on 6 September 2023 on an H-IIA rocket to study the formation of the universe and dark matter.
Under Construction
Iranian National Observatory 3.4 m (first light planned in 2020)
Extremely Large Telescope (first light planned in 2027)
Planned
Public Telescope (PST), German project of astrofactum. Launch was planned for 2019, but the project's website is now defunct and no updates have been provided on the fate of the effort.
Mid/late-2021 – Science first light of the Vera C. Rubin Observatory is anticipated for 2021 with full science operations to begin a year later.
Nancy Grace Roman Space Telescope, part of NASA's Exoplanet Exploration Program. Launch is tentatively scheduled for 2027.
See also
Timeline of telescope technology
List of largest optical telescopes historically
Extremely large telescope
References
Telescopes, observatories, and observing technology
Telescopes, observatories, and observing technology
Astronomical observatories
Astronomical imaging
Observational astronomy
Telescopes | Timeline of telescopes, observatories, and observing technology | [
"Astronomy"
] | 3,580 | [
"Telescopes",
"Astronomical observatories",
"History of astronomy",
"Astronomy timelines",
"Observational astronomy",
"Astronomy organizations",
"Astronomical instruments",
"Astronomical sub-disciplines"
] |
58,955 | https://en.wikipedia.org/wiki/Timeline%20of%20artificial%20satellites%20and%20space%20probes | This timeline of artificial satellites and space probes includes uncrewed spacecraft including technology demonstrators, observatories, lunar probes, and interplanetary probes. First satellites from each country are included. Not included are most Earth science satellites, commercial satellites or crewed missions.
Timeline
1950s
1960s
1970s
1980s
1990s
2000s
2010s
2020s
References
External links
Current and Upcoming Launches
Missions-NASA
Unmanned spaceflight discussion forum
Chronology of Lunar and Planetary Exploration (homepage)
Artificial satellites and space probes | Timeline of artificial satellites and space probes | [
"Astronomy"
] | 100 | [
"Satellites",
"Outer space"
] |
58,960 | https://en.wikipedia.org/wiki/Timeline%20of%20medicine%20and%20medical%20technology | This is a timeline of the history of medicine and medical technology.
Antiquity
3300 BC – During the Stone Age, early doctors used very primitive forms of herbal medicine in India.
3000 BC – Ayurveda: its origins have been traced back to around 3000 BCE.
c. 2600 BC – Imhotep, the priest-physician who was later deified as the Egyptian god of medicine.
2500 BC – Iry: an Egyptian inscription speaks of Iry as eye-doctor of the palace, palace physician of the belly, guardian of the royal bowels, and he who prepares the important medicine (name cannot be translated) and knows the inner juices of the body.
1900–1600 BC Akkadian clay tablets on medicine survive primarily as copies from Ashurbanipal's library at Nineveh.
1800 BC – Code of Hammurabi sets out fees for surgeons and punishments for malpractice
1800 BC – Kahun Gynecological Papyrus
1600 BC – Hearst papyrus, coprotherapy and magic
1551 BC – Ebers Papyrus, coprotherapy and magic
1500 BC – Saffron used as a medicine on the Aegean island of Thera in ancient Greece
1500 BC – Edwin Smith Papyrus, an Egyptian medical text and the oldest known surgical treatise (no true surgery) no magic
1300 BC – Brugsch Papyrus and London Medical Papyrus
1250 BC – Asklepios
9th century – Hesiod reports an ontological conception of disease via the Pandora myth. Disease has a "life" of its own but is of divine origin.
8th century – Homer tells that Polydamna supplied the Greek forces besieging Troy with healing drugs. Homer also tells of battlefield surgery: Idomeneus tells Nestor, after Machaon had fallen, "A surgeon who can cut out an arrow and heal the wound with his ointments is worth a regiment."
700 BC – Cnidos medical school; also one at Cos
500 BC – Darius I orders the restoration of the House of Life (First record of a (much older) medical school)
500 BC – Bian Que becomes the earliest physician known to use acupuncture and pulse diagnosis
500 BC – The Sushruta Samhita is published, laying the framework for Ayurvedic medicine, giving many surgical procedures for first time such as lithotomy, forehead flap rhinoplasty, otoplasty and many more.
– Empedocles proposes the four elements
500 BC – Pills were used. They were presumably invented so that measured amounts of a medicinal substance could be delivered to a patient.
510–430 BC – Alcmaeon of Croton carries out scientific anatomic dissections. He studied the optic nerves and the brain, arguing that the brain was the seat of the senses and intelligence. He distinguished veins from the arteries and had at least a vague understanding of the circulation of the blood. Variously described by modern scholars as Father of Anatomy; Father of Physiology; Father of Embryology; Father of Psychology; Creator of Psychiatry; Founder of Gynecology; and as the Father of Medicine itself. There is little evidence to support the claims but he is, nonetheless, important.
fl. 425 BC – Diogenes of Apollonia
c. 425 BC – Herodotus tells us Egyptian doctors were specialists: "Medicine is practiced among them on a plan of separation; each physician treats a single disorder, and no more. Thus the country swarms with medical practitioners, some undertaking to cure diseases of the eye, others of the head, others again of the teeth, others of the intestines, and some those which are not local."
496 – 405 BC – Sophocles "It is not a learned physician who sings incantations over pains which should be cured by cutting."
420 BC – Hippocrates of Cos maintains that diseases have natural causes and puts forth the Hippocratic Oath. Origin of rational medicine.
Medicine after Hippocrates
c. 400 BC – 1 BC – The Huangdi Neijing (Yellow Emperor's Classic of Internal Medicine) is published, laying the framework for traditional Chinese medicine
4th century BC – Philistion of Locri; Praxagoras distinguishes veins from arteries and determines that only arteries pulse
375–295 BC – Diocles of Carystus
354 BC – Critobulus of Cos extracts an arrow from the eye of Phillip II, treating the loss of the eyeball without causing facial disfigurement.
3rd century BC – Philinus of Cos, founder of the Empiricist school. Herophilos and Erasistratus practice androtomy (dissecting live and dead human beings).
280 BC – Herophilus uses dissection to study the nervous system, distinguishing between sensory nerves and motor nerves and studying the brain; he also describes the anatomy of the eye and introduces medical terminology (his term for "net-like", in Latin translation, becomes retiform/retina).
270 – Huangfu Mi writes the Zhēnjiǔ jiǎyǐ jīng (The ABC Compendium of Acupuncture), the first textbook focusing solely on acupuncture.
250 BC – Erasistratus studies the brain, distinguishing between the cerebrum and cerebellum, and investigates the physiology of the brain, heart and eyes, and of the vascular, nervous, respiratory and reproductive systems.
219 – Zhang Zhongjing publishes Shang Han Lun (On Cold Disease Damage).
200 BC – the Charaka Samhita uses a rational approach to the causes and cure of disease and uses objective methods of clinical examination
124 – 44 BC – Asclepiades of Bithynia
116 – 27 BC – Marcus Terentius Varro proposes a prototypal germ theory of disease.
1st century AD – Rufus of Ephesus; Marcellinus, a physician of the first century AD; Numisianus
23 – 79 AD – Pliny the Elder writes Natural History
– Aulus Cornelius Celsus compiles a medical encyclopedia (De Medicina)
50 – 70 AD – Pedanius Dioscorides writes De Materia Medica – a precursor of modern pharmacopoeias that was in use for almost 1600 years
2nd century AD Aretaeus of Cappadocia
98 – 138 AD – Soranus of Ephesus
129 – 216 AD – Galen – Clinical medicine based on observation and experience. The resulting tightly integrated and comprehensive system, offering a complete medical philosophy, dominated medicine throughout the Middle Ages and until the beginning of the modern era.
After Galen 200 AD
– Fabulla or Fabylla, medical writer
d. 260 – Gargilius Martialis, short Latin handbook on Medicines from Vegetables and Fruits
4th century – Magnus of Nisibis, Alexandrian doctor and professor, writes a book on urine
325 – 400 – Oribasius 70 volume encyclopedia
362 – Julian orders xenones built, imitating Christian charity (proto hospitals)
369 – Basil of Caesarea founded at Caesarea in Cappadocia an institution (hospital) called Basileias, with several buildings for patients, nurses, physicians, workshops, and schools
375 – Ephrem the Syrian opened a hospital at Edessa. Such institutions spread out and specialized: nosocomia for the sick, brephotrophia for foundlings, orphanotrophia for orphans, ptochia for the poor, xenodochia for poor or infirm pilgrims, and gerontochia for the old.
400 – The first hospital in Latin Christendom was founded by Fabiola at Rome
420 – Caelius Aurelianus a doctor from Sicca Veneria (El-Kef, Tunisia) handbook On Acute and Chronic Diseases in Latin.
447 – Cassius Felix of Cirta (Constantine, Ksantina, Algeria) writes a medical handbook in Latin that drew on Greek sources, both Methodist and Galenist.
480 – 547 Benedict of Nursia founder of "monastic medicine"
484 – 590 – Flavius Magnus Aurelius Cassiodorus
fl. 511 – 534 – Anthimus Greek: Ἄνθιμος
536 – Sergius of Reshaina (died 536) – A Christian theologian-physician who translated thirty-two of Galen's works into Syriac and wrote medical treatises of his own
525 – 605 – Alexander of Tralles Alexander Trallianus
500 – 550 – Aetius of Amida; an encyclopedia in 4 books, each divided into 4 sections
second half of 6th century – building of xenodocheions/bimārestāns by the Nestorians under the Sasanians; these would evolve into the complex secular "Islamic hospital", which combined lay practice and Galenic teaching
550 – 630 – Stephanus of Athens
560 – 636 – Isidore of Seville
c. 620 – Aaron of Alexandria, writing in Syriac. He wrote 30 books on medicine, the "Pandects". He was the first author in antiquity to mention the diseases of smallpox and measles; his work was translated into Arabic by Māsarjawaih, a Syrian Jew and physician, about A.D. 683
c. 630 – Paul of Aegina; an encyclopedia in 7 books with very detailed surgery, used by Albucasis
790 – 869 – Leo the Iatrosophist (also called the Mathematician or Philosopher) wrote an "Epitome of Medicine"
c. 800 – 873 – Al-Kindi (Alkindus) De Gradibus
820 – Benedictine hospital founded, School of Salerno would grow around it
d. 857 – Mesue the elder (Yūḥannā ibn Māsawayh), a Syriac Christian physician
c. 830 – 870 – Hunayn ibn Ishaq (Johannitius), a Syriac-speaking Christian who also knew Greek and Arabic; translator and author of several medical tracts.
c. 838 – 870 – Ali ibn Sahl Rabban al-Tabari, writes an encyclopedia of medicine in Arabic.
d. c. 910 – Ishaq ibn Hunayn
9th century – Yahya ibn Sarafyun, a Syriac physician known in Latin as Johannes Serapion or Serapion the Elder
c. 865 – 925 – Rhazes writes on pediatrics, and makes the first clear distinction between smallpox and measles in his al-Hawi.
d. 955 – Isaac Judaeus (Isḥāq ibn Sulaymān al-Isrāʾīlī), an Egyptian-born Jewish physician
913 – 982 – Shabbethai Donnolo, alleged founding father of the School of Salerno, wrote in Hebrew
d. 982 – 994 – 'Ali ibn al-'Abbas al-Majusi (Haly Abbas)
1000 – Albucasis (936–1018) writes on surgery in the Kitab al-Tasrif, describing surgical instruments.
d. 1075 – Ibn Butlan, a Christian physician of Baghdad; his Tacuinum sanitatis (the Arabic original and most of the Latin copies) is in tabular format
1018 – 1087 – Michael Psellos or Psellus, a Byzantine monk, writer, philosopher, politician and historian; he wrote several books on medicine
c. 1030 – Avicenna The Canon of Medicine The Canon remains a standard textbook in Muslim and European universities until the 18th century.
c. 1071 – 1078 – Simeon Seth or Symeon Seth, an 11th-century Jewish Byzantine scholar who translated Arabic works into Greek
1084 – First documented hospital in England Canterbury
d. 1087 – Constantine the African
1083 – 1153 – Anna Komnene, Latinized as Comnena
1095 – Congregation of the Antonines, was founded to treat victims of "St. Anthony's fire" a skin disease.
Late 11th or early 12th century – Trotula
1123 – St Bartholomew's Hospital founded by the court jester Rahere. Augustinian nuns originally cared for the patients. Mental patients were accepted along with others
1127 – Stephen of Antioch translated the work of Haly Abbas
1100 – 1161 – Avenzoar Teacher of Averroes
1170 – Rogerius Salernitanus composed his Chirurgia also known as The Surgery of Roger
1126 – 1198 – Averroes
d. c. 1161 – Matthaeus Platearius
1200–1499
1203 – Innocent III organized the hospital of Santo Spirito at Rome inspiring others all over Europe
c. 1210 – 1277 – William of Saliceto, also known as Guilielmus de Saliceto
1210 – 1295 – Taddeo Alderotti – Scholastic medicine
1240 Bartholomeus Anglicus
1242 – Ibn al-Nafis suggests that the right and left ventricles of the heart are separate and discovers the pulmonary circulation and coronary circulation
c. 1248 – Ibn al-Baytar wrote on botany and pharmacy, and studied animal anatomy and medicine (veterinary medicine).
1249 – Roger Bacon writes about convex lens spectacles for treating long-sightedness
1257 – 1316 Pietro d'Abano also known as Petrus De Apono or Aponensis
1260 – Louis IX established Les Quinze-vingt; originally a retreat for the blind, it became a hospital for eye diseases, and is now one of the most important medical centers in Paris
c. 1260 – 1320 Henri de Mondeville
1284 – Mansur hospital of Cairo
– Joannes Zacharias Actuarius a Byzantine physician wrote the last great compendium of Byzantine medicine
1275 –1326 – Mondino de Luzzi "Mundinus" carried out the first systematic human dissections since Herophilus of Chalcedon and Erasistratus of Ceos 1500 years earlier.
1288 – The hospital of Santa Maria Nuova founded in Florence, it was strictly medical.
1300 – concave lens spectacles to treat myopia developed in Italy.
1310 – Pietro d'Abano's Conciliator
d. 1348 – Gentile da Foligno
1292–1350 – Ibn Qayyim al-Jawziya
1306–1390 – John of Arderne
d. 1368 – Guy de Chauliac
fl. 1460 – Heinrich von Pfolspeundt
1443 – 1502 – Antonio Benivieni; pathological anatomy
1493 – 1541 – Paracelsus; wrote on the relationship between medicine and surgery in his surgery book
1500–1799
Early 16th century:
Paracelsus, an alchemist by trade, rejects occultism and pioneers the use of chemicals and minerals in medicine. Burns the books of Avicenna, Galen and Hippocrates.
Hieronymus Fabricius: his "Surgery" is mostly that of Celsus, Paul of Aegina, and Abulcasis, citing them by name.
Caspar Stromayr
1500? – 1561 Pierre Franco
Ambroise Paré (1510–1590) pioneered the treatment of gunshot wounds.
Bartholomeo Maggi at Bologna, Felix Wurtz of Zurich, Léonard Botal in Paris, and the Englishman Thomas Gale (surgeon), (the diversity of their geographical origins attests to the widespread interest of surgeons in the problem), all published works urging similar treatment to Paré's. But it was Paré's writings which were the most influential.
1518 – The College of Physicians is founded; now known as the Royal College of Physicians of London, it is a British professional body of doctors of general medicine and its subspecialties. It received its royal charter in 1518
1510 – 1590 – Ambroise Paré surgeon
1540 – 1604 – William Clowes – Surgical chest for military surgeons
1543 – Andreas Vesalius publishes De Fabrica Corporis Humani which corrects Greek medical errors and revolutionizes European medicine
1546 – Girolamo Fracastoro proposes that epidemic diseases are caused by transferable seedlike entities
1550 – 1612 – Peter Lowe
1553 – Miguel Servet describes the circulation of blood through the lungs.
1556 – Amato Lusitano describes venous valves in the Ázigos vein
1559 – Realdo Colombo describes the circulation of blood through the lungs in detail
1563 – Garcia de Orta founds tropical medicine with his treatise on Indian diseases and treatments
1570 – 1643 – John Woodall; had ship surgeons use lemon juice to treat scurvy; wrote "The Surgions Mate"
1590 – The microscope is invented, which would play a major part in medical advancement
1596 – Li Shizhen publishes Běncǎo Gāngmù or Compendium of Materia Medica
1603 – Girolamo Fabrici studies leg veins and notices that they have valves which allow blood to flow only toward the heart
1621 – 1676 – Richard Wiseman
1628 – William Harvey explains the circulatory system in Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus
1683 – 1758 – Lorenz Heister
1688 – 1752 – William Cheselden
1701 – Giacomo Pylarini gives the first smallpox inoculations in Europe. They were widely practised in the East before then.
1714 – 1789 – Percivall Pott
1720 – Lady Mary Wortley Montagu introduces smallpox inoculation (variolation) to Britain after observing the practice in the Ottoman Empire
1728 – 1793 – John Hunter
1736 – Claudius Aymand performs the first successful appendectomy
1744 – 1795 – Pierre-Joseph Desault First surgical periodical
1747 – James Lind discovers that citrus fruits prevent scurvy
1749 – 1806 – Benjamin Bell – Leading surgeon of his time and father of a surgical dynasty, author of "A System of Surgery"
1752 – 1832 – Antonio Scarpa
1763 – 1820 – John Bell
1766 – 1842 – Dominique Jean Larrey Surgeon to Napoleon
1768 – 1843 – Astley Cooper, surgeon; lectured on the principles and practice of surgery
1774 – 1842 – Charles Bell, surgeon
1774 – Joseph Priestley discovers nitrous oxide, nitric oxide, ammonia, hydrogen chloride and oxygen
1777 – 1835 – Baron Guillaume Dupuytren – Head surgeon at Hôtel-Dieu de Paris; the "age of Dupuytren"
1785 – William Withering publishes "An Account of the Foxglove" the first systematic description of digitalis in treating dropsy
1790 – Samuel Hahnemann rages against the prevalent practice of bloodletting as a universal cure and founds homeopathy
1796 – Edward Jenner develops a smallpox vaccination method
1799 – Humphry Davy discovers the anesthetic properties of nitrous oxide
1800–1899
1800 – Humphry Davy announces the anaesthetic properties of nitrous oxide.
1803 – 1841 – Morphine was first isolated by Friedrich Sertürner; this is generally believed to be the first isolation of an active ingredient from a plant.
1813–1883 – James Marion Sims develops vesico-vaginal fistula surgery; called the father of surgical gynecology.
1816 – René Laennec invents the stethoscope.
1827 – 1912 – Joseph Lister; antiseptic surgery; called a father of modern surgery
1818 – James Blundell performs the first successful human transfusion.
1842 – Crawford Long performs the first surgical operation using anesthesia with ether.
1845 – John Hughes Bennett first describes leukemia as a blood disorder.
1846 – First painless surgery with general anesthetic.
1847 – Ignaz Semmelweis discovers how to prevent puerperal fever.
1849 – Elizabeth Blackwell is the first woman to gain a medical degree in the United States.
1850 – Female Medical College of Pennsylvania (later Woman's Medical College), the first medical college in the world to grant degrees to women, is founded in Philadelphia.
1858 – Rudolf Carl Virchow (13 October 1821 – 5 September 1902) publishes his theory of cellular pathology, which spelled the end of humoral medicine.
1861 – Louis Pasteur discovers the Germ Theory
1867 – Lister publishes Antiseptic Principle of the Practice of Surgery, based partly on Pasteur's work.
1870 – Louis Pasteur and Robert Koch establish the germ theory of disease.
1878 – Ellis Reynolds Shipp graduates from the Women's Medical College of Pennsylvania and begins practice in Utah.
1879 – First vaccine for cholera.
1881 – Louis Pasteur develops an anthrax vaccine.
1882 – Louis Pasteur develops a rabies vaccine.
1887 – Willem Einthoven invents electrocardiography (ECG/EKG)
1890 – Emil von Behring discovers antitoxins and uses them to develop tetanus and diphtheria vaccines.
1895 – Wilhelm Conrad Röntgen discovers medical use of X-rays in medical imaging
1900–1999
1901 – Karl Landsteiner discovers the existence of different human blood types
1901 – Alois Alzheimer identifies the first case of what becomes known as Alzheimer's disease
1906 – Frederick Hopkins suggests the existence of vitamins and suggests that a lack of vitamins causes scurvy and rickets
1907 – Paul Ehrlich develops a chemotherapeutic cure for sleeping sickness
1907 – Henry Stanley Plummer develops the first structured patient record and clinical number (Mayo clinic)
1908 – Victor Horsley and R. Clarke invent the stereotactic method
1909 – First intrauterine device described by Richard Richter.
1910 – Hans Christian Jacobaeus performs the first laparoscopy on humans
1917 – Julius Wagner-Jauregg discovers the malarial fever shock therapy for general paresis of the insane
1921 – Edward Mellanby discovers vitamin D and shows that its absence causes rickets
1921 – Frederick Banting and Charles Best discover insulin – important for the treatment of diabetes
1921 – Fidel Pagés pioneers epidural anesthesia
1923 – First vaccine for diphtheria
1924 – Hans Berger discovers human electroencephalography
1926 – First vaccine for pertussis
1927 – First vaccine for tuberculosis
1927 – First vaccine for tetanus
1930 – First successful sex reassignment surgery performed on Lili Elbe in Dresden, Germany.
1932 – Gerhard Domagk develops a chemotherapeutic cure for streptococcus
1933 – Manfred Sakel discovers insulin shock therapy
1935 – Ladislas J. Meduna discovers metrazol shock therapy
1935 – First vaccine for yellow fever
1936 – Egas Moniz discovers prefrontal lobotomy for treating mental diseases; Enrique Finochietto develops the now ubiquitous self-retaining thoracic retractor
1938 – Ugo Cerletti and Lucio Bini discover electroconvulsive therapy
1938 – Howard Florey and Ernst Chain investigate penicillin, attempt to mass-produce it, and test it on the policeman Albert Alexander, who initially recovered but died when the supply of penicillin ran out
1943 – Willem J. Kolff builds the first dialysis machine
1944 – Disposable catheter – David S. Sheridan
1946 – Chemotherapy – Alfred G. Gilman and Louis S. Goodman
1947 – Defibrillator – Claude Beck
1948 – Acetaminophen – Julius Axelrod, Bernard Brodie
1949 – First implant of intraocular lens, by Sir Harold Ridley
1949 – Mechanical assistor for anesthesia – John Emerson
1952 – Jonas Salk develops the first polio vaccine (available in 1955)
1952 – Cloning – Robert Briggs and Thomas King
1953 – First live birth from frozen sperm
1953 – Heart-lung machine – John Heysham Gibbon
1953 – Medical ultrasonography – Inge Edler
1954 – Joseph Murray performs the first human kidney transplant (on identical twins)
1954 – Ventouse – Tage Malmstrom
1955 – Tetracycline – Lloyd Conover
1956 – Metered-dose inhaler – 3M
1957 – William Grey Walter invents the brain EEG topography (toposcope)
1958 – Pacemaker – Rune Elmqvist
1959 – In vitro fertilization – Min Chueh Chang
1960 – Invention of cardiopulmonary resuscitation (CPR)
1960 – First combined oral contraceptive approved by the FDA
1962 – Hip replacement – John Charnley
1962 – Beta blocker James W. Black
1962 – Albert Sabin develops first oral polio vaccine
1963 – Artificial heart – Paul Winchell
1963 – Thomas Starzl performs the first human liver transplant
1963 – James Hardy performs the first human lung transplant
1963 – Valium (diazepam) – Leo H. Sternbach
1964 – First vaccine for measles
1965 – Frank Pantridge installs the first portable defibrillator
1965 – First commercial ultrasound
1966 – C. Walton Lillehei performs the first human pancreas transplant
1966 – Rubella Vaccine – Harry Martin Meyer and Paul D. Parkman
1967 – First vaccine for mumps
1967 – René Favaloro develops Coronary Bypass surgery
1967 – Christiaan Barnard performs the first human heart transplant
1968 – Powered prosthesis – Samuel Alderson
1968 – Controlled drug delivery – Alejandro Zaffaron
1969 – Balloon catheter – Thomas Fogarty
1969 – Cochlear implant – William House
1970 – Cyclosporine, the first effective immunosuppressive drug is introduced in organ transplant practice
1971 – MMR Vaccine – developed by Maurice Hilleman
1971 – Genetically modified organisms – Ananda Chakrabarty
1971 – Magnetic resonance imaging – Raymond Vahan Damadian
1971 – Computed tomography (CT or CAT Scan) – Godfrey Hounsfield
1971 – Transdermal patches – Alejandro Zaffaroni
1971 – Sir Godfrey Hounsfield invents the first commercial CT scanner
1972 – Insulin pump Dean Kamen
1973 – Laser eye surgery (LASIK) – Mani Lal Bhaumik
1974 – Liposuction – Giorgio Fischer
1976 – First commercial PET scanner
1978 – First live birth from in vitro fertilisation (IVF)
1978 – Last fatal case of smallpox
1979 – Antiviral drugs – George Hitchings and Gertrude Elion
1980 – Raymond Damadian builds first commercial MRI scanner
1980 – Lithotripter – Dornier Research Group
1980 – First vaccine for hepatitis B – Baruch Samuel Blumberg
1980 – Cloning of interferons – Sidney Pestka
1981 – Artificial skin – John F. Burke and Ioannis V. Yannas
1981 – Bruce Reitz performs the first human heart-lung combined transplant
1982 – Human insulin – Eli Lilly
1982 – Willem Johan Kolff performs the first artificial heart transplant.
1985 – Automated DNA sequencer – Leroy Hood and Lloyd Smith
1985 – Polymerase chain reaction (PCR) – Kary Mullis
1985 – Surgical robot – Yik San Kwoh
1985 – DNA fingerprinting – Alec Jeffreys
1985 – Capsule endoscopy – Tarun Mullick
1986 – Fluoxetine HCl – Eli Lilly and Co
1987 – commercially available Statins – Merck & Co.
1987 – Tissue engineering – Joseph Vacanti & Robert Langer
1988 – Intravascular stent – Julio Palmaz
1988 – Laser cataract surgery – Patricia Bath
1989 – Pre-implantation genetic diagnosis (PGD) – Alan Handyside
1989 – DNA microarray – Stephen Fodor
1990 – Gamow bag® – Igor Gamow
1992 – Description of Brugada syndrome (Pedro and Josep Brugada)
1992 – First vaccine for hepatitis A available
1992 – Electroactive polymers (artificial muscle) – SRI International
1992 – Intracytoplasmic sperm injection (ICSI) – Andre van Steirteghem
1995 – Adult stem cell use in regeneration of tissues and organs in vivo – B. G. Matapurkar, U.S. international patent
1996 – Dolly the Sheep cloned
1998 – Stem cell therapy – James Thomson
2000–2022
2000 – The Human Genome Project draft was completed.
2001 – The first telesurgery was performed by Jacques Marescaux.
2003 – Carlo Urbani, of Doctors Without Borders, alerted the World Health Organization to the threat of the SARS virus, triggering the most effective response to an epidemic in history. Urbani succumbed to the disease himself in less than a month.
2005 – Jean-Michel Dubernard performs the first partial face transplant.
2006 – First HPV vaccine approved.
2006 – The second rotavirus vaccine approved (first was withdrawn).
2007 – The visual prosthetic (bionic eye) Argus II.
2008 – Laurent Lantieri performs the first full face transplant.
2011 – First successful Uterus transplant from a deceased donor in Turkey
2013 – The first kidney was grown in vitro in the U.S.
2013 – The first human liver was grown from stem cells in Japan.
2014 – A 3D printer is used for first ever skull transplant.
2014 - Sonendo, a medical technology company based in Laguna Hills, California, introduces the GentleWave system in the United States for root canal treatments.
2016 – The first ever artificial pancreas was created
2019 – A heart is 3D-printed from a human patient's cells.
2020 – First vaccine for COVID-19.
2022 – The complete human genome is sequenced.
See also
Timeline of antibiotics
Timeline of vaccines
Timeline of hospitals
Notes
Citations
Reference:
1. International patent, USA, w.e.f. 1995: US PTO nos. 6227202 and 20020007223.
2. R. Maingot's Textbook of Abdominal Operations, 1997, USA.
3. Textbook of Obstetrics and Gynecology, 2010, J P Publishers.
References
Matapurkar, B. G. (1995). US international patents 6227202 and 20020007223: medical use of adult stem cells. A new physiological phenomenon of desired metaplasia for regeneration of tissues and organs in vivo. Annals of the NYAS, 1998.
Bynum, W. F. and Roy Porter, eds. Companion Encyclopedia of the History of Medicine (2 vol. 1997); 1840pp; 72 long essays by scholars excerpt and text search
Conrad, Lawrence I. et al. The Western Medical Tradition: 800 BC to AD 1800 (1995); excerpt and text search
Bynum, W.F. et al. The Western Medical Tradition: 1800–2000 (2006) excerpt and text search
Loudon, Irvine, ed. Western Medicine: An Illustrated History (1997) online
McGrew, Roderick. Encyclopedia of Medical History (1985)
Porter, Roy, ed. The Cambridge History of Medicine (2006); 416pp; excerpt and text search
Porter, Roy, ed. The Cambridge Illustrated History of Medicine (2001) excerpt and text search excerpt and text search
Singer, Charles, and E. Ashworth Underwood. A Short History of Medicine (2nd ed. 1962)
Watts, Sheldon. Disease and Medicine in World History (2003), 166pp online
Further reading
External links
Interactive timeline of medicine and medical technology (requires Flash plugin)
The Historyscoper
History of medicine
History of medical technology
Medical | Timeline of medicine and medical technology | [
"Biology"
] | 6,187 | [
"History of medical technology",
"Medical technology"
] |
58,981 | https://en.wikipedia.org/wiki/Tobacco%20smoke | Tobacco smoke is a sooty aerosol produced by the incomplete combustion of tobacco during the smoking of cigarettes and other tobacco products. Temperatures in burning cigarettes range from about 400 °C between puffs to about 900 °C during a puff. During the burning of the cigarette tobacco (itself a complex mixture), thousands of chemical substances are generated by combustion, distillation, pyrolysis and pyrosynthesis. Tobacco smoke is used as a fumigant and inhalant.
Composition
The particles in tobacco smoke are liquid aerosol droplets (about 20% water), with a mass median aerodynamic diameter (MMAD) that is submicrometer (and thus, fairly "lung-respirable" by humans). The droplets are present in high concentrations (some estimates are as high as 10¹⁰ droplets per cm³).
Tobacco smoke may be grouped into a particulate phase (trapped on a glass-fiber pad, and termed "TPM" (total particulate matter)) and a gas/vapor phase (which passes through such a glass-fiber pad). "Tar" is mathematically determined by subtracting the weight of the nicotine and water from the TPM. However, several components of tobacco smoke (e.g., hydrogen cyanide, formaldehyde, phenanthrene, and pyrene) do not fit neatly into this rather arbitrary classification, because they are distributed among the solid, liquid and gaseous phases.
Tobacco smoke contains a number of toxicologically significant chemicals and groups of chemicals, including polycyclic aromatic hydrocarbons (benzopyrene), tobacco-specific nitrosamines (NNK, NNN), aldehydes (acrolein, formaldehyde), carbon monoxide, hydrogen cyanide, nitrogen oxides (nitrogen dioxide), benzene, toluene, phenols (phenol, cresol), aromatic amines (nicotine, ABP (4-aminobiphenyl)), and harmala alkaloids. The radioactive element polonium-210 is also known to occur in tobacco smoke. The chemical composition of smoke depends on puff frequency, intensity, volume, and duration at different stages of cigarette consumption.
Between 1933 and the late 1940s, the yields from an average cigarette varied from 33 to 49 mg "tar" and from less than 1 to 3 mg nicotine. In the 1960s and 1970s, the average yield from cigarettes in Western Europe and the USA was around 16 mg tar and 1.5 mg nicotine per cigarette. Current average levels are lower. This has been achieved in a variety of ways including use of selected strains of tobacco plant, changes in agricultural and curing procedures, use of reconstituted sheets (reprocessed tobacco leaf wastes), incorporation of tobacco stalks, reduction of the amount of tobacco needed to fill a cigarette by expanding it (like puffed wheat) to increase its "filling power", and by the use of filters and high-porosity wrapping papers. The development of lower "tar" and nicotine cigarettes has tended to yield products that lacked the taste components to which the smoker had become accustomed. In order to keep such products acceptable to the consumer, the manufacturers reconstitute aroma or flavor.
Tobacco polyphenols (e.g., caffeic acid, chlorogenic acid, scopoletin, rutin) determine the taste and quality of the smoke. Freshly cured tobacco leaf is unfit for use because of its pungent and irritating smoke. After fermentation and aging, the leaf delivers mild and aromatic smoke.
Tumorigenic agents
Safety
Tobacco smoke, besides being an irritant and significant indoor air pollutant, is known to cause lung cancer, heart disease, chronic obstructive pulmonary disease (COPD), emphysema, and other serious diseases in smokers (and in non-smokers as well). The actual mechanisms by which smoking can cause so many diseases remain largely unknown. Many attempts have been made to produce lung cancer in animals exposed to tobacco smoke by the inhalation route, without success. It is only by collecting the "tar" and repeatedly painting this on to mice that tumors are produced, and these tumors are very different from those tumors exhibited by smokers. Tobacco smoke is associated with an increased risk of developing respiratory conditions such as bronchitis, pneumonia, and asthma. Tobacco smoke aerosols generated at temperatures below 400 °C did not test positive in the Ames assay.
In spite of all changes in cigarette design and manufacturing since the 1960s, the use of filters and "light" cigarettes has neither decreased the nicotine intake per cigarette, nor has it lowered the incidence of lung cancers (NCI, 2001; IARC 83, 2004; U.S. Surgeon General, 2004). The shift over the years from higher- to lower-yield cigarettes may explain the change in the pathology of lung cancer. That is, the percentage of lung cancers that are adenocarcinomas has increased, while the percentage of squamous cell cancers has decreased. The change in tumor type is believed to reflect the higher nitrosamine delivery of lower-yield cigarettes and the increased depth or volume of inhalation of lower-yield cigarettes to compensate for lower level concentrations of nicotine in the smoke.
In the United States, lung cancer incidence and mortality rates are particularly high among African American men. Lung cancer tends to be most common in developed countries, particularly in North America and Europe, and less common in developing countries, particularly in Africa and South America.
See also
Liquid smoke
Electronic cigarette aerosol
Tobacco smoke enema
References
Aerosols
Tobacco smoking
Tobacco
Toxicology
Smoke | Tobacco smoke | [
"Chemistry",
"Environmental_science"
] | 1,175 | [
"Aerosols",
"Toxicology",
"Colloids"
] |
58,991 | https://en.wikipedia.org/wiki/Fog | Fog is a visible aerosol consisting of tiny water droplets or ice crystals suspended in the air at or near the Earth's surface. Fog can be considered a type of low-lying cloud usually resembling stratus and is heavily influenced by nearby bodies of water, topography, and wind conditions. In turn, fog affects many human activities, such as shipping, travel, and warfare.
Fog appears when water vapor (water in its gaseous form) condenses. During condensation, molecules of water vapor combine to make tiny water droplets that hang in the air. Sea fog, which shows up near bodies of saline water, is formed as water vapor condenses on bits of salt. Fog is similar to, but less transparent than, mist.
Definition
The term fog is typically distinguished from the more generic term cloud in that fog is low-lying, and the moisture in the fog is often generated locally (such as from a nearby body of water, like a lake or ocean, or from nearby moist ground or marshes). By definition, fog reduces visibility to less than , whereas mist causes lesser impairment of visibility. For aviation purposes in the United Kingdom, a visibility of less than but greater than is considered to be mist if the relative humidity is 95% or greater; below 95%, haze is reported.
Formation
Fog forms when the difference between air temperature and dew point is less than . Fog begins to form when water vapor condenses into tiny water droplets that are suspended in the air. Some examples of ways that water vapor is condensed include wind convergence into areas of upward motion; precipitation or virga falling from above; daytime heating evaporating water from the surface of oceans, water bodies, or wet land; transpiration from plants; cool or dry air moving over warmer water; and lifting air over mountains. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Fog, like its elevated cousin stratus, is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass.
Fog normally occurs at a relative humidity near 100%. This occurs from either added moisture in the air, or falling ambient air temperature. However, fog can form at lower humidities and can sometimes fail to form with relative humidity at 100%. At 100% relative humidity, the air cannot hold additional moisture, thus the air will become supersaturated if additional moisture is added.
Fog commonly produces precipitation in the form of drizzle or very light snow. Drizzle occurs when the humidity attains 100% and the minute cloud droplets begin to coalesce into larger droplets. This can occur when the fog layer is lifted and cooled sufficiently, or when it is forcibly compressed from above by descending air. Drizzle becomes freezing drizzle when the temperature at the surface drops below the freezing point.
The thickness of a fog layer is largely determined by the altitude of the inversion boundary, which in coastal or oceanic locales is also the top of the marine layer, above which the air mass is warmer and drier. The inversion boundary varies its altitude primarily in response to the weight of the air above it, which is measured in terms of atmospheric pressure. The marine layer, and any fog-bank it may contain, will be "squashed" when the pressure is high and conversely may expand upwards when the pressure above it is lowering.
Types
Fog can form multiple ways, depending on how the cooling occurred that caused the condensation.
Radiation fog is formed by the cooling of land after sunset by infrared thermal radiation in calm conditions with a clear sky. The cooling ground then cools adjacent air by conduction, causing the air temperature to fall and reach the dew point, forming fog. In perfect calm, the fog layer can be less than a meter thick, but turbulence can promote a thicker layer. Radiation fog occurs at night and usually does not last long after sunrise, but it can persist all day in the winter months especially in areas bounded by high ground. Radiation fog is most common in autumn and early winter. Examples of this phenomenon include tule fog.
Ground fog is fog that obscures less than 60% of the sky and does not extend to the base of any overhead clouds. However, the term is usually a synonym for shallow radiation fog; in some cases the depth of the fog is on the order of tens of centimetres over certain kinds of terrain with the absence of wind.
Advection fog occurs when moist air passes over a cool surface by advection (wind) and is cooled. It is common as a warm front passes over an area with significant snow-pack. It is most common at sea when moist air encounters cooler waters, including areas of cold water upwelling, such as along the California coast. A strong enough temperature difference over water or bare ground can also cause advection fog.
Although strong winds often mix the air and can disperse, fragment, or prevent many kinds of fog, markedly warmer and humid air blowing over a snowpack can continue to generate advection fog at wind speeds of up to or more. This fog forms a turbulent, rapidly moving, and comparatively shallow layer, observed as only a few centimetres deep over flat farm fields and flat urban terrain, and taking more complex forms, such as rotating areas in the lee of hills or large buildings, where the terrain varies.
Fog formed by advection along the California coastline is propelled onto land by one of several processes. A cold front can push the marine layer coast-ward, an occurrence most typical in the spring or late fall. During the summer months, a low-pressure trough produced by intense heating inland creates a strong pressure gradient, drawing in the dense marine layer. Also, during the summer, strong high pressure aloft over the desert southwest, usually in connection with the summer monsoon, produces a south to southeasterly flow which can drive the offshore marine layer up the coastline; a phenomenon known as a "southerly surge", typically following a coastal heat spell. However, if the monsoonal flow is sufficiently turbulent, it might instead break up the marine layer and any fog it may contain. Moderate turbulence will typically transform a fog bank, lifting it and breaking it up into shallow convective clouds called stratocumulus.
Frontal fog forms in much the same way as stratus cloud near a front when raindrops, falling from relatively warm air above a frontal surface, evaporate into cooler air close to the Earth's surface and cause it to become saturated. The water vapor cools and at the dewpoint it condenses and fog forms. This type of fog can be the result of a very low frontal stratus cloud subsiding to surface level in the absence of any lifting agent after the front passes.
Hail fog sometimes occurs in the vicinity of significant hail accumulations due to decreased temperature and increased moisture leading to saturation in a very shallow layer near the surface. It most often occurs when there is a warm, humid layer atop the hail and when wind is light. This ground fog tends to be localized but can be extremely dense and abrupt. It may form shortly after the hail falls; when the hail has had time to cool the air and as it absorbs heat when melting and evaporating.
Freezing conditions
Freezing fog occurs when liquid fog droplets freeze to surfaces, forming white soft or hard rime ice. This is very common on mountain tops which are exposed to low clouds. It is equivalent to freezing rain and essentially the same as the ice that forms inside a freezer which is not of the "frostless" or "frost-free" type. The term "freezing fog" may also refer to fog where water vapor is super-cooled, filling the air with small ice crystals similar to very light snow. It seems to make the fog "tangible", as if one could "grab a handful".
In the western United States, freezing fog may be referred to as pogonip. It occurs commonly during cold winter spells, usually in deep mountain valleys. The word pogonip is derived from the Shoshone word paγi̵nappi̵h, which means "cloud".
In The Old Farmer's Almanac, in the calendar for December, the phrase "Beware the Pogonip" regularly appears. In his anthology Smoke Bellew, Jack London describes a pogonip which surrounded the main characters, killing one of them.
The phenomenon is common in the inland areas of the Pacific Northwest, with temperatures in the range. The Columbia Plateau experiences this phenomenon most years during temperature inversions, sometimes lasting for as long as three weeks. The fog typically begins forming around the area of the Columbia River and expands, sometimes covering the land to distances as far away as La Pine, Oregon, almost due south of the river and into south central Washington.
Frozen fog (also known as ice fog) is any kind of fog where the droplets have frozen into extremely tiny crystals of ice in midair. Generally, this requires temperatures at or below , making it common only in and near the Arctic and Antarctic regions. It is most often seen in urban areas where it is created by the freezing of water vapor present in automobile exhaust and combustion products from heating and power generation. Urban ice fog can become extremely dense and will persist day and night until the temperature rises. It can be associated with the diamond dust form of precipitation, in which very small crystals of ice form and slowly fall. This often occurs during blue sky conditions, which can cause many types of halos and other results of refraction of sunlight by the airborne crystals. Ice fog often leads to the visual phenomenon of light pillars.
Topographical influences
Up-slope fog or hill fog forms when winds blow air up a slope (called orographic lift), adiabatically cooling it as it rises and causing the moisture in it to condense. This often causes freezing fog on mountaintops, where the cloud ceiling would not otherwise be low enough.
Valley fog forms in mountain valleys, often during winter. It is essentially a radiation fog confined by local topography and can last for several days in calm conditions. In California's Central Valley, valley fog is often referred to as tule fog.
Sea and coastal areas
Sea fog (also known as haar or fret) is heavily influenced by the presence of sea spray and microscopic airborne salt crystals. Clouds of all types require minute hygroscopic particles upon which water vapor can condense. Over the ocean surface, the most common particles are salt from salt spray produced by breaking waves. Except in areas of storminess, the most common areas of breaking waves are located near coastlines, hence the greatest densities of airborne salt particles are there.
Condensation on salt particles has been observed to occur at humidities as low as 70%, thus fog can occur even in relatively dry air in suitable locations such as the California coast. Typically, such lower humidity fog is preceded by a transparent mistiness along the coastline as condensation competes with evaporation, a phenomenon that is typically noticeable by beachgoers in the afternoon. Another recently discovered source of condensation nuclei for coastal fog is kelp seaweed. Researchers have found that under stress (intense sunlight, strong evaporation, etc.), kelp releases particles of iodine which in turn become nuclei for condensation of water vapor, causing fog that diffuses direct sunlight.
Sea smoke, also called steam fog or evaporation fog, is created by cold air passing over warmer water or moist land. It may cause freezing fog or sometimes hoar frost. This situation can also lead to the formation of steam devils, which look like their dust counterparts. Lake-effect fog is of this type, sometimes in combination with other causes like radiation fog. It tends to differ from most advective fog formed over land in that it is (like lake-effect snow) a convective phenomenon, resulting in fog that can be very dense and deep and looks fluffy from above. Arctic sea smoke is similar to sea smoke but occurs when the air is very cold. Instead of condensing into water droplets, the rising water vapor forms columns that freeze and condense, producing a fog that is usually misty and smoke-like.
Garúa fog near the coast of Chile and Peru occurs when typical fog produced by the sea travels inland but suddenly meets an area of hot air. This causes the water particles of fog to shrink by evaporation, producing a "transparent mist". Garua fog is nearly invisible, yet it still forces drivers to use windshield wipers because of condensation onto cooler hard surfaces. Camanchaca is a similar dense fog.
Visibility effects
Depending on the concentration of the droplets, visibility in fog can range from the appearance of haze to almost zero visibility. Many lives are lost each year worldwide from accidents involving fog conditions on the highways, including multiple-vehicle collisions.
The aviation travel industry is affected by the severity of fog conditions. Even though modern auto-landing computers can put an aircraft down without the aid of a pilot, personnel manning an airport control tower must be able to see if aircraft are sitting on the runway awaiting takeoff. Safe operations are difficult in thick fog, and civilian airports may forbid takeoffs and landings until conditions improve.
A solution for landing returning military aircraft developed in World War II was called Fog Investigation and Dispersal Operation (FIDO). It involved burning enormous amounts of fuel alongside runways to evaporate fog, allowing returning fighter and bomber pilots sufficient visual cues to safely land their aircraft. The high energy demands of this method discourage its use for routine operations.
Shadows
Shadows are cast through fog in three dimensions. The fog is dense enough to be illuminated by light that passes through gaps in a structure or tree, but thin enough to let a large quantity of that light pass through to illuminate points further on. As a result, object shadows appear as "beams" oriented in a direction parallel to the light source. These voluminous shadows are created the same way as crepuscular rays, which are the shadows of clouds. In fog, it is solid objects that cast shadows.
Sound propagation and acoustic effects
Sound typically travels fastest and farthest through solids, then liquids, then gases such as the atmosphere. In fog, sound propagation is affected by the short distances between water droplets and by differences in air temperature.
Though fog is essentially liquid water, the many droplets are separated by small air gaps. High-pitched sounds have a high frequency, which in turn means they have a short wavelength. To transmit a high frequency wave, air must move back and forth very quickly. Short-wavelength high-pitched sound waves are reflected and refracted by many separated water droplets, partially cancelling and dissipating their energy (a process called "damping"). In contrast, low pitched notes, with a low frequency and a long wavelength, move the air less rapidly and less often, and lose less energy to interactions with small water droplets. Low-pitched notes are less affected by fog and travel further, which is why foghorns use a low-pitched tone.
Fog can be caused by a temperature inversion, in which the cold air that helped create the fog is pooled at the surface while warmer air sits above it. The boundary between the cold and warm air reflects sound waves back toward the ground, so sound that would normally radiate outward and escape into the upper atmosphere instead bounces back and travels near the surface. A temperature inversion therefore increases the distance that lower-frequency sounds can travel by reflecting the sound between the ground and the inversion layer.
Record extremes
Particularly foggy places include Hamilton, New Zealand and Grand Banks off the coast of Newfoundland (the meeting place of the cold Labrador Current from the north and the much warmer Gulf Stream from the south). Some very foggy land areas in the world include Argentia (Newfoundland) and Point Reyes (California), each with over 200 foggy days per year. Even in generally warmer southern Europe, thick fog and localized fog are often found in lowlands and valleys, such as the lower part of the Po Valley and the Arno and Tiber valleys in Italy; Ebro Valley in northeastern Spain; as well as on the Swiss plateau, especially in the Seeland area, in late autumn and winter. Other notably foggy areas include coastal Chile (in the south); coastal Namibia; Nord, Greenland; and the Severnaya Zemlya islands.
As a water source
Redwood forests in California receive approximately 30–40% of their moisture from coastal fog by way of fog drip. Changes in climate patterns could result in relative drought in these areas. Some animals, including insects, depend on wet fog as a principal source of water, particularly in otherwise desert climes, as along many African coastal areas. Some coastal communities use fog nets to extract moisture from the atmosphere where groundwater pumping and rainwater collection are insufficient. The type of fog that forms depends on the local climatic conditions.
Artificial fog
Artificial fog is man-made fog that is usually created by vaporizing a water- and glycol- or glycerine-based fluid. The fluid is injected into a heated metal block which evaporates it quickly. The resulting pressure forces the vapor out of a vent. Upon coming into contact with cool outside air, the vapor condenses into microscopic droplets and appears as fog. Such fog machines are primarily used for entertainment applications.
Historical references
The presence of fog has often played a key role in historical events, such as strategic battles. One example is the 1776 Battle of Long Island when American General George Washington and his command were able to evade imminent capture by the British Army, using fog to conceal their escape. Another example is D-Day (6 June 1944) during World War II, when the Allies landed on the beaches of Normandy, France during fog conditions. Both positive and negative results were reported from both sides during that battle, due to impaired visibility.
See also
Technology
Anti-fog
Automotive lighting
Decontamination foam
Fog Investigation and Dispersal Operation (FIDO)
Fog collection
Fogging (photography)
Fog lamp
Head-up display
Runway visual range
Transmissometer
Weather
Fog season
Smoke
Smog
Whiteout (weather)
Vog
References
Under "[ ^ "Federal Meteorological Handbook Number 1: Chapter 8 – Present Weather" (PDF). Office of the Federal Coordinator for Meteorology. 1 September 2005. pp. 8–1, 8–2. Retrieved 9 October 2010. ] " ….
Actually use the following link- http://www.ofcm.gov/publications/fmh/FMH1/FMH1.pdf and proceed to Chapter 8, etc.
Further reading
Ahrens, C. (1991). Meteorology today: an introduction to weather, climate, and the environment. West Pub. Co.
Corton, Christine L. London Fog: The Biography (2015)
External links
Social & Economic Costs of Fog from "NOAA Socioeconomics" website initiative
United States' current dense fog advisories from NOAA
Current Western US fog satellite pictures from NOAA
Weather hazards to aircraft
Snow or ice weather phenomena
Road hazards
Clouds, fog and precipitation
Hazards of outdoor recreation | Fog | [
"Physics",
"Technology"
] | 3,925 | [
"Visibility",
"Fog",
"Physical quantities",
"Road hazards"
] |
58,992 | https://en.wikipedia.org/wiki/Linear-feedback%20shift%20register | In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state.
The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value.
The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.
Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common.
The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR. In general, the arithmetic behind LFSRs makes them very elegant objects to study and implement. One can produce relatively complex logic with simple building blocks. However, other methods, that are less elegant but perform better, should be considered as well.
Fibonacci LFSRs
The bit positions that affect the next state are called the taps. In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit, which is always also a tap. To obtain the next state, the tap bits are XOR-ed sequentially; then, all bits are shifted one place to the right, with the rightmost bit being discarded, and that result of XOR-ing the tap bits is fed back into the now-vacant leftmost bit. To obtain the pseudorandom output stream, read the rightmost bit after each state transition.
A maximum-length LFSR produces an m-sequence (i.e., it cycles through all possible 2^m − 1 states within the shift register except the state where all bits are zero), unless it contains all zeros, in which case it will never change.
As an alternative to the XOR-based feedback in an LFSR, one can also use XNOR. This function is an affine map, not strictly a linear map, but it results in an equivalent polynomial counter whose state is the complement of the state of an LFSR. A state with all ones is illegal when using an XNOR feedback, in the same way as a state with all zeroes is illegal when using XOR. This state is considered illegal because the counter would remain "locked-up" in this state. This method can be advantageous in hardware LFSRs using flip-flops that start in a zero state, as it does not start in a lockup state, meaning that the register does not need to be seeded in order to begin operation.
The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code.
The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2. This means that the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is x^16 + x^14 + x^13 + x^11 + 1.
The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e. x0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively.
The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive over the Galois field GF(2). This means that the following conditions are necessary (but not sufficient):
The number of taps is even.
The set of taps is setwise co-prime; i.e., there must be no divisor other than 1 common to all taps.
Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references.
There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in an n-bit LFSR is [n, A, B, C, 0], where the 0 corresponds to the x^0 = 1 term, then the corresponding "mirror" sequence is [n, n − C, n − B, n − A, 0]. So the tap sequence [16, 14, 13, 11, 0] has as its counterpart [16, 5, 3, 2, 0]. Both give a maximum-length sequence.
An example in C is below:
#include <stdint.h>

unsigned lfsr_fib(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    uint16_t bit;                    /* Must be 16-bit to allow bit<<15 later in the code */
    unsigned period = 0;

    do
    {   /* taps: 16 14 13 11; feedback polynomial: x^16 + x^14 + x^13 + x^11 + 1 */
        bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (lfsr >> 1) | (bit << 15);
        ++period;
    }
    while (lfsr != start_state);

    return period;
}
If a fast parity or popcount operation is available, the feedback bit can be computed more efficiently as the dot product of the register with the characteristic polynomial:
bit = parity(lfsr & 0x002Du);, or equivalently
bit = popcnt(lfsr & 0x002Du) /* & 1u */;. (The & 1u turns the popcnt into a true parity function, but the bitshift later bit << 15 makes higher bits irrelevant.)
If a rotation operation is available, the new state can be computed as
lfsr = rotateright((lfsr & ~1u) | (bit & 1u), 1);, or equivalently
lfsr = rotateright(((bit ^ lfsr) & 1u) ^ lfsr, 1);
This LFSR configuration is also known as standard, many-to-one or external XOR gates. The alternative Galois configuration is described in the next section.
Example in Python
A sample Python implementation of a similar Fibonacci LFSR (16-bit, with taps at [16, 15, 13, 4]) would be:
start_state = 1 << 15 | 1
lfsr = start_state
period = 0

while True:
    # taps: 16 15 13 4; feedback polynomial: x^16 + x^15 + x^13 + x^4 + 1
    bit = (lfsr ^ (lfsr >> 1) ^ (lfsr >> 3) ^ (lfsr >> 12)) & 1
    lfsr = (lfsr >> 1) | (bit << 15)
    period += 1
    if lfsr == start_state:
        print(period)
        break
Here a register of 16 bits is used, and the XOR taps at the 4th, 13th, 15th and 16th bits establish a maximum-length sequence.
Galois LFSRs
Named after the French mathematician Évariste Galois, an LFSR in Galois configuration, which is also known as modular, internal XORs, or one-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time). In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit. The effect of this is that when the output bit is zero, all the bits in the register shift to the right unchanged, and the input bit becomes zero. When the output bit is one, the bits in the tap positions all flip (if they are 0, they become 1, and if they are 1, they become 0), and then the entire register is shifted to the right and the input bit becomes 1.
To generate the same output stream, the order of the taps is the counterpart (see above) of the order for the conventional LFSR, otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different startpoint will be needed to get the same output each cycle.
Galois LFSRs do not concatenate every tap to produce the new input (the XORing is done within the LFSR, and no XOR gates are run in serial, therefore the propagation times are reduced to that of one XOR rather than a whole chain), thus it is possible for each tap to be computed in parallel, increasing the speed of execution.
In a software implementation of an LFSR, the Galois form is more efficient, as the XOR operations can be implemented a word at a time: only the output bit must be examined individually.
Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:
#include <stdint.h>

unsigned lfsr_galois(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do
    {
#ifndef LEFT
        unsigned lsb = lfsr & 1u;            /* Get LSB (i.e., the output bit). */
        lfsr >>= 1;                          /* Shift register */
        if (lsb)                             /* If the output bit is 1, */
            lfsr ^= 0xB400u;                 /*  apply toggle mask. */
#else
        unsigned msb = (int16_t) lfsr < 0;   /* Get MSB (i.e., the output bit). */
        lfsr <<= 1;                          /* Shift register */
        if (msb)                             /* If the output bit is 1, */
            lfsr ^= 0x002Du;                 /*  apply toggle mask. */
#endif
        ++period;
    }
    while (lfsr != start_state);

    return period;
}
The branch if (lsb) lfsr ^= 0xB400u; can also be written as lfsr ^= (-lsb) & 0xB400u;, which may produce more efficient code on some compilers. In addition, the left-shifting variant may produce even better code, as the msb is the carry from the addition of lfsr to itself.
Galois LFSR parallel computation
State and resulting bits can also be combined and computed in parallel. The following function calculates the next 64 bits using the 63-bit polynomial x^63 + x^62 + 1:
#include <stdint.h>
uint64_t prsg63(uint64_t lfsr) {
lfsr = lfsr << 32 | (lfsr<<1 ^ lfsr<<2) >> 32;
lfsr = lfsr << 32 | (lfsr<<1 ^ lfsr<<2) >> 32;
return lfsr;
}
Non-binary Galois LFSR
Binary Galois LFSRs like the ones shown above can be generalized to any q-ary alphabet {0, 1, ..., q − 1} (e.g., for binary, q = 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to addition modulo-q (note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo-q) by a q-ary value, which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generate Galois fields for arbitrary prime values of q.
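The following C sketch illustrates the q-ary generalisation described above. The register length, the alphabet size q = 5 and the tap coefficients are illustrative assumptions rather than values taken from any standard, and the function name is made up for the example.

#define Q   5                       /* alphabet size (a prime), chosen for illustration */
#define LEN 4                       /* register length, chosen for illustration */

/* One clock of a q-ary Galois LFSR: addition modulo Q replaces XOR, and the
   output digit is multiplied by a per-tap coefficient before being added in.
   Returns the output digit for this step. */
int qary_galois_step(int state[LEN], const int taps[LEN])
{
    int out = state[0];                                   /* output digit */
    for (int i = 0; i < LEN - 1; ++i)
        state[i] = (state[i + 1] + out * taps[i]) % Q;    /* shift plus tap feedback */
    state[LEN - 1] = (out * taps[LEN - 1]) % Q;           /* feedback into the last cell */
    return out;
}

With q = 2 and tap coefficients restricted to 0 and 1, this reduces to the binary Galois LFSR above, since addition modulo 2 is XOR.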
Xorshift LFSRs
As shown by George Marsaglia and further analysed by Richard P. Brent, linear feedback shift registers can be implemented using XOR and Shift operations. This approach lends itself to fast execution in software because these operations typically map efficiently into modern processor instructions.
Below is a C code example for a 16-bit maximal-period Xorshift LFSR using the 7,9,13 triplet from John Metcalf:
#include <stdint.h>

unsigned lfsr_xorshift(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do
    {   /* 7,9,13 triplet from http://www.retroprogramming.com/2017/07/xorshift-pseudorandom-numbers-in-z80.html */
        lfsr ^= lfsr >> 7;
        lfsr ^= lfsr << 9;
        lfsr ^= lfsr >> 13;
        ++period;
    }
    while (lfsr != start_state);

    return period;
}
Matrix forms
Binary LFSRs of both Fibonacci and Galois configurations can be expressed as linear functions using matrices over the finite field GF(2). Using the companion matrix F of the characteristic polynomial of the LFSR and denoting the seed as a column vector y_0, the state of the register in Fibonacci configuration after k steps is given by y_k = F^k y_0.
The matrix for the corresponding Galois form is the transpose of the Fibonacci companion matrix, G = F^T, so that the Galois state after k steps is G^k y_0.
For a suitable initialisation, the top coefficient of the column vector G^k y_0 gives the kth term of the original output sequence.
These forms generalize naturally to arbitrary fields.
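As a concrete illustration of the matrix form, the C sketch below advances an LFSR by multiplying the state vector with the companion matrix over GF(2), each matrix row being stored as a bit mask. The 4-bit register, the tap choice [4, 3] (polynomial x^4 + x^3 + 1) and the function names are assumptions made for the example.

#include <stdint.h>
#include <stdio.h>

/* Parity of a 16-bit word; with a row mask applied, this is a dot product over GF(2). */
static unsigned parity16(uint16_t v)
{
    v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

/* One step y -> F*y over GF(2); rows[i] is row i of the matrix F as a bit mask. */
static uint16_t matrix_step(uint16_t state, const uint16_t rows[], int n)
{
    uint16_t next = 0;
    for (int i = 0; i < n; ++i)
        next |= (uint16_t)(parity16(rows[i] & state) << i);
    return next;
}

int main(void)
{
    /* Companion matrix of x^4 + x^3 + 1 for a 4-bit Fibonacci LFSR
       (bit 0 = output bit): next bits 0..2 copy bits 1..3 (the shift),
       and next bit 3 is bit 0 XOR bit 1 (the tap feedback). */
    const uint16_t rows[4] = { 0x2, 0x4, 0x8, 0x3 };
    uint16_t state = 0x1;
    unsigned period = 0;

    do {
        state = matrix_step(state, rows, 4);
        ++period;
    } while (state != 0x1);

    printf("period = %u\n", period);   /* prints 15, i.e. 2^4 - 1 */
    return 0;
}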
Example polynomials for maximal LFSRs
The following table lists examples of maximal-length feedback polynomials (primitive polynomials) for shift-register lengths up to 24. The formalism for maximum-length LFSRs was developed by Solomon W. Golomb in his 1967 book. The number of different primitive polynomials grows exponentially with shift-register length and can be calculated exactly using Euler's totient function.
Output-stream properties
Ones and zeroes occur in "runs". The output stream 1110010, for example, consists of four runs of lengths 3, 2, 1, 1, in order. In one period of a maximal LFSR, 2^(n−1) runs occur (in the example above, the 3-bit LFSR has 4 runs). Exactly half of these runs are one bit long, a quarter are two bits long, up to a single run of zeroes n − 1 bits long, and a single run of ones n bits long. This distribution almost equals the statistical expectation value for a truly random sequence. However, the probability of finding exactly this distribution in a sample of a truly random sequence is rather low.
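The run statistics can be checked directly. The short C program below drives a 3-bit maximal LFSR (taps [3, 2], polynomial x^3 + x^2 + 1, one possible choice assumed here) from the seed 111, prints one 7-bit period, and counts its runs; it reproduces the example stream 1110010 with its four runs.

#include <stdio.h>

int main(void)
{
    unsigned lfsr = 0x7u;        /* seed 111 */
    unsigned runs = 0, prev = 2; /* prev = 2 means "no previous output bit yet" */

    for (int i = 0; i < 7; ++i) {                     /* one full 2^3 - 1 period */
        unsigned out = lfsr & 1u;                     /* output bit */
        unsigned bit = (lfsr ^ (lfsr >> 1)) & 1u;     /* XOR of taps 3 and 2 */
        lfsr = (lfsr >> 1) | (bit << 2);              /* shift and feed back */
        if (out != prev)                              /* a new run begins */
            ++runs;
        prev = out;
        putchar('0' + (int)out);
    }
    printf("\nruns: %u\n", runs);                     /* prints 1110010 and runs: 4 */
    return 0;
}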
LFSR output streams are deterministic. If the present state and the positions of the XOR gates in the LFSR are known, the next state can be predicted. This is not possible with truly random events. With maximal-length LFSRs it is much easier to compute the next state, since there are only a limited number of maximal-length tap configurations for each register length.
The output stream is reversible; an LFSR with mirrored taps will cycle through the output sequence in reverse order.
The value consisting of all zeros cannot appear. Thus an LFSR of length n cannot be used to generate all 2^n values.
Applications
LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such as direct-sequence spread spectrum radio. LFSRs have also been used for generating an approximation of white noise in various programmable sound generators.
Uses as counters
The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable. LFSR counters have simpler feedback logic than natural binary counters or Gray-code counters, and therefore can operate at higher clock rates. However, it is necessary to ensure that the LFSR never enters a lockup state (all zeros for an XOR-based LFSR, and all ones for an XNOR-based LFSR), for example by presetting it at start-up to any other state in the sequence. It is possible to count up and down with an LFSR. LFSRs have also been used as program counters for CPUs; this requires that the program itself be "scrambled", and it is done to save on gates when they are at a premium (an LFSR uses fewer gates than an adder) and for speed (an LFSR does not require a long carry chain).
The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. One can obtain any other period by adding to an LFSR that has a longer period some logic that shortens the sequence by skipping some states.
Uses in cryptography
LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp-Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext.
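A compact and purely illustrative C sketch of the Berlekamp–Massey algorithm over GF(2) is shown below; given n observed output bits it returns the linear complexity L and fills in the shortest feedback polynomial C(x) = 1 + c[1]x + ... + c[L]x^L. The fixed buffer size MAXN, the function name and the one-bit-per-byte representation are assumptions made for clarity, not part of the algorithm.

#include <string.h>

#define MAXN 256   /* maximum sequence length handled by this sketch */

int berlekamp_massey(const unsigned char s[], int n, unsigned char c[MAXN])
{
    unsigned char b[MAXN], t[MAXN];
    int L = 0, m = -1;

    memset(c, 0, MAXN);
    memset(b, 0, MAXN);
    c[0] = b[0] = 1;                       /* C(x) = B(x) = 1 */

    for (int i = 0; i < n; ++i) {
        /* Discrepancy: does the current LFSR C(x) predict bit i correctly? */
        int d = s[i];
        for (int j = 1; j <= L; ++j)
            d ^= c[j] & s[i - j];
        if (d == 0)
            continue;                      /* prediction correct, keep C(x) */

        memcpy(t, c, MAXN);                /* save C(x) before updating */
        for (int j = 0; j + (i - m) < MAXN; ++j)
            c[j + (i - m)] ^= b[j];        /* C(x) += x^(i-m) * B(x) */

        if (2 * L <= i) {                  /* the register must be lengthened */
            L = i + 1 - L;
            m = i;
            memcpy(b, t, MAXN);
        }
    }
    return L;                              /* length of the recovered LFSR */
}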
Three general methods are employed to reduce this problem in LFSR-based stream ciphers:
Non-linear combination of several bits from the LFSR state;
Non-linear combination of the output bits of two or more LFSRs (see also: shrinking generator); or using Evolutionary algorithm to introduce non-linearity.
Irregular clocking of the LFSR, as in the alternating step generator.
Important LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken and both A5/1 and E0 have serious weaknesses.
The linear feedback shift register has a strong relationship to linear congruential generators.
Uses in circuit testing
LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis.
Test-pattern generation
Complete LFSR are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit. Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications.
Signature analysis
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature and the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XORed/XNORed with parallel input bit zero and the "taps". Every other flip-flop input is XORed/XNORed with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states, as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time.
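As a rough software model of the structure just described, the C sketch below clocks a 4-bit MISR: the register is stepped as a Galois LFSR and the 4-bit parallel output of the circuit under test is XORed in, so the value left after the last clock is the signature. The tap mask 0xC (which gives a maximal 15-state cycle for a 4-bit register) and the function name are assumptions for illustration.

#include <stdint.h>

/* One clock of a 4-bit multiple-input signature register (MISR):
   Galois LFSR step, then fold in the parallel inputs from the circuit
   under test. Calling this once per test vector compresses the whole
   response stream into the final 4-bit signature. */
uint8_t misr_step(uint8_t sig, uint8_t parallel_in)
{
    uint8_t lsb = sig & 1u;
    sig >>= 1;                                     /* shift register */
    if (lsb)
        sig ^= 0xCu;                               /* feedback taps (assumed polynomial) */
    return (uint8_t)((sig ^ parallel_in) & 0xFu);  /* XOR in the circuit outputs */
}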
Recent applications propose set-reset flip-flops as "taps" of the LFSR. This allows the BIST system to optimise storage, since set-reset flip-flops can save the initial seed needed to generate the whole stream of bits from the LFSR. Nevertheless, this requires changes in the architecture of BIST and is an option only for specific applications.
Uses in digital broadcasting and communications
Scrambling
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access.
Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do not protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
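As a sketch of additive scrambling with an LFSR (not a specification of any particular broadcast standard), the C function below XORs each data bit with one output bit of a free-running Galois LFSR; running it a second time with the same seed undoes the scrambling, because XORing twice with the same pseudo-random sequence restores the original bits. The 16-bit register and the 0xB400 tap mask simply reuse the earlier example, and the function name and one-bit-per-byte layout are assumptions for illustration.

#include <stdint.h>
#include <stddef.h>

/* Scramble (or, called again with the same seed, descramble) n data bits
   in place by XORing each with the output of a Galois LFSR. */
void scramble_bits(uint8_t bits[], size_t n, uint16_t seed)
{
    uint16_t lfsr = seed;                 /* seed must be nonzero */

    for (size_t i = 0; i < n; ++i) {
        unsigned out = lfsr & 1u;         /* LFSR output bit */
        bits[i] ^= (uint8_t)out;          /* combine with the data bit */
        lfsr >>= 1;                       /* advance the register */
        if (out)
            lfsr ^= 0xB400u;              /* Galois feedback */
    }
}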
Digital broadcasting systems that use linear-feedback registers:
ATSC Standards (digital TV transmission system – North America)
DAB (Digital Audio Broadcasting system – for radio)
DVB-T (digital TV transmission system – Europe, Australia, parts of Asia)
NICAM (digital audio system for television)
Other digital communications systems using LFSRs:
Intelsat business service (IBS)
Intermediate data rate (IDR)
HDMI 2.0
SDI (Serial Digital Interface transmission)
Data transfer over PSTN (according to the ITU-T V-series recommendations)
CDMA (Code Division Multiple Access) cellular telephony
100BASE-T2 "fast" Ethernet scrambles bits using an LFSR
1000BASE-T Ethernet, the most common form of Gigabit Ethernet, scrambles bits using an LFSR
PCI Express
SATA
Serial Attached SCSI (SAS/SPL)
USB 3.0
IEEE 802.11a scrambles bits using an LFSR
The Bluetooth Low Energy Link Layer makes use of an LFSR (referred to as whitening)
Satellite navigation systems such as GPS and GLONASS. All current systems use LFSR outputs to generate some or all of their ranging codes (as the chipping code for CDMA or DSSS) or to modulate the carrier without data (like GPS L2 CL ranging code). GLONASS also uses frequency-division multiple access combined with DSSS.
Other uses
LFSRs are also used in radio jamming systems to generate pseudo-random noise to raise the noise floor of a target communication system.
The German time signal DCF77, in addition to amplitude keying, employs phase-shift keying driven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise.
See also
Pinwheel
Mersenne twister
Maximum length sequence
Analog feedback shift register
NLFSR, Non-Linear Feedback Shift Register
Ring counter
Pseudo-random binary sequence
Gold sequence
JPL sequence
Kasami sequence
Berlekamp–Massey algorithm
References
Further reading
https://web.archive.org/web/20161007061934/http://courses.cse.tamu.edu/csce680/walker/lfsr_table.pdf
http://users.ece.cmu.edu/~koopman/lfsr/index.html — Tables of maximum length feedback polynomials for 2-64 bits.
https://github.com/hayguen/mlpolygen — Code for generating maximal length feedback polynomials
External links
– LFSR theory and implementation, maximal length sequences, and comprehensive feedback tables for lengths from 7 to 16,777,215 (3 to 24 stages), and partial tables for lengths up to 4,294,967,295 (25 to 32 stages).
International Telecommunication Union Recommendation O.151 (August 1992)
Maximal Length LFSR table with length from 2 to 67.
Pseudo-Random Number Generation Routine for the MAX765x Microprocessor
http://www.ece.ualberta.ca/~elliott/ee552/studentAppNotes/1999f/Drivers_Ed/lfsr.html
http://www.quadibloc.com/crypto/co040801.htm
Simple explanation of LFSRs for Engineers
Feedback terms
General LFSR Theory
An implementation of LFSR in VHDL.
Simple VHDL coding for Galois and Fibonacci LFSR.
mlpolygen: A Maximal Length polynomial generator
LFSR and Intrinsic Generation of Randomness: Notes From NKS
Binary arithmetic
Digital registers
Cryptographic algorithms
Pseudorandom number generators
Articles with example C code | Linear-feedback shift register | [
"Mathematics"
] | 5,743 | [
"Arithmetic",
"Binary arithmetic"
] |
59,006 | https://en.wikipedia.org/wiki/Number%20sign | The symbol is known variously in English-speaking regions as the number sign, hash, or pound sign. The symbol has historically been used for a wide range of purposes including the designation of an ordinal number and as a ligatured abbreviation for pounds avoirdupois – having been derived from the now-rare .
Since 2007, widespread usage of the symbol to introduce metadata tags on social media platforms has led to such tags being known as "hashtags", and from that, the symbol itself is sometimes called a hashtag.
The symbol is distinguished from similar symbols by its combination of level horizontal strokes and right-tilting vertical strokes.
History
It is believed that the symbol traces its origins to the symbol , an abbreviation of the Roman term libra pondo, which translates as "pound weight". The abbreviation "lb" was printed as a dedicated ligature including a horizontal line across (which indicated abbreviation). Ultimately, the symbol was reduced for clarity as an overlay of two horizontal strokes "=" across two slash-like strokes "//".
The symbol is described as the "number" character in an 1853 treatise on bookkeeping, and its double meaning is described in a bookkeeping text from 1880. The instruction manual of the Blickensderfer model 5 typewriter () appears to refer to the symbol as the "number mark". Some early-20th-century U.S. sources refer to it as the "number sign", although this could also refer to the numero sign (). A 1917 manual distinguishes between two uses of the sign: "number (written before a figure)" and "pounds (written after a figure)". The use of the phrase "pound sign" to refer to this symbol is found from 1932 in U.S. usage. The term hash sign is found in South African writings from the late 1960s and from other non-North-American sources in the 1970s.
For mechanical devices, the symbol appeared on the keyboard of the Remington Standard typewriter (). It appeared in many of the early teleprinter codes and from there was copied to ASCII, which made it available on computers and thus caused many more uses to be found for the character. The symbol was introduced on the bottom right button of touch-tone keypads in 1968, but that button was not extensively used until the advent of large-scale voicemail (PBX systems, etc.) in the early 1980s.
One of the uses in computers was to label the following text as having a different interpretation (such as a command or a comment) from the rest of the text. It was adopted for use within internet relay chat (IRC) networks circa 1988 to label groups and topics. This usage inspired Chris Messina to propose a similar system to be used on Twitter to tag topics of interest on the microblogging network; this became known as a hashtag. Although used initially and most popularly on Twitter, hashtag use has extended to other social media sites.
Names
Number sign
"Number sign" is the name chosen by the Unicode consortium. Most common in Canada and the northeastern United States. American telephone equipment companies which serve Canadian callers often have an option in their programming to denote Canadian English, which in turn instructs the system to say number sign to callers instead of pound.
Pound sign or pound
In the United States, the "#" key on a phone is commonly referred to as the pound sign, pound key, or simply pound. Dialing instructions to an extension such as #77, for example, can be read as "pound seven seven". This name is rarely used outside the United States, where the term pound sign is understood to mean the currency symbol £.
Hash, hash mark, hashmark
In the United Kingdom, Australia, and some other countries, it is generally called a "hash" (probably from "hatch", referring to cross-hatching).
Programmers also use this term; for instance is "hash, bang" or "shebang".
Hashtag
Derived from the previous, the word "hashtag" is often used when reading social media messages aloud, indicating the start of a hashtag. For instance, the text "#foo" is often read out loud as "hashtag foo" (as opposed to "hash foo"). This leads to the common belief that the symbol itself is called hashtag. Twitter documentation refers to it as "the hashtag symbol".
Hex
"Hex" is commonly used in Singapore and Malaysia, as spoken by many recorded telephone directory-assistance menus: "Please enter your phone number followed by the 'hex' key". The term "hex" is discouraged in Singapore in favour of "hash". In Singapore, a hash is also called "hex" in apartment addresses, where it precedes the floor number.
, octothorpe, octathorp, octatherp
Most scholars believe the word was invented by workers at the Bell Telephone Laboratories by 1968, who needed a word for the symbol on the telephone keypad. Don MacPherson is said to have created the word by combining octo and the last name of Jim Thorpe, an Olympic medalist. Howard Eby and Lauren Asplund claim to have invented the word as a joke in 1964, combining octo with the syllable therp which, because of the "th" digraph, was hard to pronounce in different languages. The Merriam-Webster New Book of Word Histories, 1991, has a long article that is consistent with Doug Kerr's essay, which says "octotherp" was the original spelling, and that the word arose in the 1960s among telephone engineers as a joke. Other hypotheses for the origin of the word include the last name of James Oglethorpe or using the Old English word for village, thorp, because the symbol looks like a village surrounded by eight fields. The word was popularized within and outside Bell Labs. The first appearance of "octothorp" in a US patent is in a 1973 filing. This patent also refers to the six-pointed asterisk (✻) used on telephone buttons as a "sextile".
Sharp
Use of the name "sharp" is due to the symbol's resemblance to . The same derivation is seen in the name of the Microsoft programming languages C#, J# and F#. Microsoft says that the name C# is pronounced 'see sharp'." According to the ECMA-334 C# Language Specification, the name of the language is written "C#" (" (U+0043) followed by the # (U+0023)") and pronounced "C Sharp".
Square
On telephones, the International Telecommunication Union specification ITU-T E.161 3.2.2 states: "The symbol may be referred to as the square or the most commonly used equivalent term in other languages." Formally, this is not a number sign but rather another character, . The real or virtual keypads on almost all modern telephones use the simple instead, as does most documentation.
Other
Names that may be seen include: crosshatch, crunch, fence, flash, garden fence, garden gate, gate, grid, hak, mesh, oof, pig-pen, punch mark, rake, score, scratch, scratch mark, tic-tac-toe, and unequal.
Usage
When # prefixes a number, it is read as "number". "A #2 pencil", for example, indicates "a number-two pencil". The abbreviations "#" and "No." are used commonly and interchangeably. The use of # as an abbreviation for "number" is common in informal writing, but use in print is rare. Where Americans might write "Symphony #5", British and Irish people usually write "Symphony No. 5".
When # is after a number, it is read as "pound" or "pounds", meaning the unit of weight. The text "5# bag of flour" would mean "five-pound bag of flour". The abbreviations "lb." and "#" are used commonly and interchangeably. This usage is rare outside North America, where "lb" or "lbs" is used.
# is not a replacement for the pound sign £, but British typewriters and keyboards have a £ key where American keyboards have a # key. Many early computer and teleprinter codes (such as BS 4730, the UK national variant of the ISO/IEC 646 character set) substituted "£" for "#" to make the British versions, thus it was common for the same binary code to display as # on US equipment and £ on British equipment ("$" was not substituted to avoid confusing dollars and pounds in financial communications).
Mathematics
In set theory, #S is one possible notation for the cardinality or size of the set S, as an alternative to |S|. That is, for a finite set S whose elements are mutually distinct, #S is the number of elements of S. This notation is only sometimes used for finite sets, usually in number theory, to avoid confusion with the divisibility symbol.
In topology, A#B is the connected sum of manifolds A and B, or of knots A and B in knot theory.
In number theory, n# is the primorial of n.
In constructive mathematics, # denotes an apartness relation.
In computational complexity theory, #P denotes a complexity class of counting problems. The standard notation for this class uses the number sign symbol, not the sharp sign from music, but it is pronounced "sharp P". More generally, the number sign may be used to denote the class of counting problems associated with any class of search problems.
Computing
In Unicode and ASCII, the symbol is assigned the code point U+0023 (decimal 35); in HTML it can be written as the numeric reference &#35; or, in HTML5, as the named entity &num;.
In many scripting languages and data file formats, especially ones that originated on Unix, # introduces a comment that goes to the end of the line. The combination #! at the start of an executable file is a "shebang", "hash-bang" or "pound-bang", used to tell the operating system which program to use to run the script (see magic number). This combination was chosen so it would be a comment in the scripting languages.
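A minimal sketch of both uses in one Python script (the interpreter path is the conventional one and is assumed here for illustration):

#!/usr/bin/env python3
# To the operating system, the "#!" line names the interpreter to run;
# to Python itself it is simply a comment, like this line.
print("started via the interpreter named on the shebang line")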
#! is the symbol of the CrunchBang Linux distribution.
In the Perl programming language, # is used as a modifier to array syntax to return the index number of the last element in the array, e.g., an array's last element is at $#array. The number of elements in the array is $#array + 1, since Perl arrays default to using zero-based indices. If the array has not been defined, the return is also undefined. If the array is defined but has not had any elements assigned to it, e.g., @array = (), then $#array returns −1. See the section on Array functions in the Perl language structure article.
In both the C and C++ preprocessors, as well as in other syntactically C-like languages, # is used to start a preprocessor directive. Inside macros, after #define, it is used for various purposes; for example, the double number sign ## is used for token concatenation.
In Unix shells, # is placed by convention at the end of a command prompt to denote that the user is working as root.
# is used in a URL of a web page or other resource to introduce a "fragment identifier" – an id which defines a position within that resource. In HTML, this is known as an anchor link. The portion of a URL after the number sign (#) is the fragment identifier, denoting that the display should be moved to show the element marked with the matching id in the HTML.
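A short Python sketch of splitting the fragment identifier out of a URL (the URL shown is a hypothetical example):

from urllib.parse import urldefrag, urlparse

url = "https://example.org/guide.html#installation"   # hypothetical URL
print(urlparse(url).fragment)    # prints: installation
print(urldefrag(url).url)        # prints: https://example.org/guide.html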
Internet Relay Chat: on IRC servers, # precedes the name of every channel that is available across an entire IRC network.
In blogs, # is sometimes used to denote a permalink for that particular weblog entry.
In lightweight markup languages, such as wikitext, # is often used to introduce numbered list items.
# is used in the Modula-2 and Oberon programming languages designed by Niklaus Wirth, and in the Component Pascal language derived from Oberon, to denote the not-equal operator, as a stand-in for the mathematical unequal sign (≠), being more intuitive than <> or /=.
In Rust, # is used to introduce attributes, which are written in the form #[...] above the item they annotate.
In OCaml, # is the operator used to call a method.
In Common Lisp, # is a dispatching read macro character used to extend the S-expression syntax with short cuts and support for various data types (complex numbers, vectors and more).
In Scheme, # is the prefix for certain syntax with special meaning.
In Standard ML, #, when prefixed to a field name, becomes a projection function (function to access the field of a record or tuple); also, # prefixes a string literal to turn it into a character literal.
In Mathematica syntax, #, when used as a variable, becomes a pure function (a placeholder that is mapped to any variable meeting the conditions).
In LaTeX, #, when prefixing a number, references an argument of a user-defined command. For instance, \newcommand{\code}[1]{\texttt{#1}}.
In Javadoc, # is used with the @see tag to introduce or separate a field, constructor, or method member from its containing class.
In Redcode and some other dialects of assembly language, # is used to denote immediate mode addressing, e.g., LDA #10, which means "load accumulator A with the value 10" in MOS 6502 assembly language.
# in HTML, CSS, SVG, and other computing applications is used to identify a color specified in hexadecimal format, e.g., six hex digits in the #RRGGBB notation. This usage comes from X11 color specifications, which inherited it from early assembler dialects that used # to prefix hexadecimal constants, e.g. ZX Spectrum Z80 assembly.
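A minimal Python sketch of decoding such a specification (the particular color value is only an illustration):

def parse_hex_color(spec):
    """Split a "#RRGGBB" color specification into (r, g, b) integers."""
    digits = spec.lstrip("#")
    return tuple(int(digits[i:i + 2], 16) for i in range(0, 6, 2))

print(parse_hex_color("#1E90FF"))   # prints: (30, 144, 255)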
In Be-Music Script, every command line starts with #. Lines starting with characters other than "#" are treated as comments.
The use of the hash symbol in a hashtag is a phenomenon conceived by Chris Messina, and popularized by social media network Twitter, as a way to direct conversations and topics amongst users. This has led to an increasingly common tendency to refer to the symbol itself as "hashtag".
In programming languages like PL/I and Assembler used on IBM mainframe systems, as well as in JCL (Job Control Language), the characters #, $ and @ are used as additional letters in identifiers, labels and data set names.
In J, # is the Tally or Count function, and similarly in Lua, # can be used as a shortcut to get the length of a table or the length of a string. Due to the ease of writing "#" over longer function names, this practice has become standard in the Lua community.
In Dyalog APL, # is a reference to the root namespace while ## is a reference to the current space's parent namespace.
Other uses
Algebraic notation for chess: A hash after a move denotes checkmate.
American Sign Language transcription: The hash prefixing an all-caps word identifies a lexicalized fingerspelled sign, having some sort of blends or letter drops. All-caps words without the prefix are used for standard English words that are fingerspelled in their entirety.
Copy writing and copy editing: Technical writers in press releases often use three number signs, directly above the boilerplate or underneath the body copy, indicating to media that there is no further copy to come.
Footnote symbols (or endnote symbols): Due to ready availability in many fonts and directly on computer keyboards, "#" and other symbols (such as the caret) have in recent years begun to be occasionally used in catalogues and reports in place of more traditional symbols (esp. dagger, double-dagger, pilcrow).
Linguistic phonology: # denotes a word boundary. For instance, a rule ending in # applies only when the affected segment is the last segment in a word (i.e. when it appears before a word boundary).
Linguistic syntax: A hash before an example sentence denotes that the sentence is semantically ill-formed, though grammatically well-formed. For instance, "#The toothbrush is pregnant" is a grammatically correct sentence, but the meaning is odd.
Medical prescription drug delimiter: In some countries, such as Norway or Poland, # is used as a delimiter between different drugs on medical prescriptions.
Medical shorthand: The hash is often used to indicate a bone fracture. For example, "#NOF" is often used for "fractured neck of femur". In radiotherapy, a full dose of radiation is divided into smaller doses or 'fractions'. These are given the shorthand # to denote either the number of treatments in a prescription (e.g. 60Gy in 30#) or the fraction number (#9 of 25).
As a proofreading mark, to indicate that a space should be inserted.
Publishing: When submitting a science fiction manuscript for publication, a number sign on a line by itself (indented or centered) indicates a section break in the text.
Scrabble: Putting a number sign after a word indicates that the word is found in the British word lists, but not the North American lists.
Teletext and DVB subtitles (in the UK and Ireland): The hash symbol, resembling music notation's sharp sign, is used to mark text that is either sung by a character or heard in background music, e.g.
Unicode
The number sign was assigned code 35 (hex 0x23) in ASCII, from which it was inherited by many other character sets. In EBCDIC it is often at 0x7B or 0xEC.
Unicode characters with "number sign" in their names:
Other attested names in Unicode are: .
Additionally, a Unicode named sequence is defined for the grapheme cluster (#️⃣).
On keyboards
On the standard US keyboard layout, the # symbol is typed as Shift+3. On standard UK and some other European keyboards, the same keystrokes produce the pound (sterling) sign (£), and # may be moved to a separate key above the right shift key. If there is no # key, the symbol can be produced with an operating-system-specific key combination or input method on Windows, Mac OS and Linux.
See also
pound sign, £
sharp sign, ♯
viewdata square, ⌗ (U+2317)
equal and parallel to symbol, ⋕
looped square, ⌘
井, the Chinese character for "well"
the game tic-tac-toe
Explanatory notes
References
Typographical symbols | Number sign | [
"Mathematics"
] | 3,854 | [
"Symbols",
"Typographical symbols"
] |
59,045 | https://en.wikipedia.org/wiki/NATO%20phonetic%20alphabet | The International Radiotelephony Spelling Alphabet or simply the Radiotelephony Spelling Alphabet, commonly known as the NATO phonetic alphabet, is the most widely used set of clear-code words for communicating the letters of the Roman alphabet. Technically a radiotelephonic spelling alphabet, it goes by various names, including NATO spelling alphabet, ICAO phonetic alphabet, and ICAO spelling alphabet. The ITU phonetic alphabet and figure code is a rarely used variant that differs in the code words for digits.
Although spelling alphabets are commonly called "phonetic alphabets", they are not phonetic in the sense of phonetic transcription systems such as the International Phonetic Alphabet.
To create the code, a series of international agencies assigned 26 clear-code words (also known as "phonetic words") acrophonically to the letters of the Roman alphabet, with the goal that the letters and numbers would be easily distinguishable from one another over radio and telephone. The words were chosen to be accessible to speakers of English, French and Spanish. Some of the code words were changed over time, as they were found to be ineffective in real-life conditions. In 1956, NATO modified the then-current set used by the International Civil Aviation Organization (ICAO); this modification then became the international standard when it was accepted by ICAO that year and by the International Telecommunication Union (ITU) a few years later.
The 26 code words are as follows (ICAO spellings): Alfa, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliett, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-ray, Yankee, and Zulu. Alfa and Juliett are spelled that way to avoid mispronunciation by people unfamiliar with English orthography; NATO changed X-ray to Xray for the same reason. The code words for digits are their English names, though with their pronunciations modified in the cases of three, four, five, nine and thousand.
The code words have been stable since 1956. A 1955 NATO memo stated that:
International adoption
Soon after the code words were developed by ICAO (see history below), they were adopted by other national and international organizations, including the ITU, the International Maritime Organization (IMO), the United States Federal Government as Federal Standard 1037C: Glossary of Telecommunications Terms and its successors ANSI T1.523-2001 and ATIS Telecom Glossary (ATIS-0100523.2019) (all three using the spellings "Alpha" and "Juliet"), the United States Department of Defense, the Federal Aviation Administration (FAA) (using the spelling "Xray"), the International Amateur Radio Union (IARU), the American Radio Relay League (ARRL), the Association of Public-Safety Communications Officials-International (APCO), and by many military organizations such as NATO (using the spelling "Xray") and the now-defunct Southeast Asia Treaty Organization (SEATO).
The same alphabetic code words are used by all agencies, but each agency chooses one of two different sets of numeric code words. NATO uses the regular English numerals (zero, one, two, etc., though with some differences in pronunciation), whereas the ITU (beginning on 1 April 1969) and the IMO created compound code words (nadazero, unaone, bissotwo etc.). In practice the compound words are used very rarely.
Usage
A spelling alphabet is used to distinguish those parts of a message that contain letters and digits, because the names of many letters sound similar, for instance bee and pee, en and em or ef and ess. The potential for confusion increases if static or other interference is present, as is commonly the case with radio and telephonic communication. For instance, the target message "proceed to map grid DH98" would be transmitted as proceed to map grid Delta-Hotel-Niner-Ait.
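A small Python sketch of this letter-by-letter and digit-by-digit spelling, using only code words and digit pronunciations quoted in this article ("DH98" is the article's own example, and the mapping is deliberately partial):

# Partial mapping, limited to the characters needed for the example.
CODE_WORDS = {"D": "Delta", "H": "Hotel", "9": "Niner", "8": "Ait"}

def spell(text):
    # Spell out each letter and digit with its code word.
    return "-".join(CODE_WORDS[ch] for ch in text.upper())

print("proceed to map grid", spell("DH98"))   # Delta-Hotel-Niner-Ait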
Civilian industry uses the code words to avoid similar problems in the transmission of messages by telephone systems. For example, it is often used in the retail industry where customer or site details are conveyed by telephone (for example to authorize a credit agreement or confirm stock codes), although ad-hoc code words are often used in that instance. It has been used by information technology workers to communicate serial numbers and reference codes, which are often very long, by voice. Most major airlines use the alphabet to communicate passenger name records (PNRs) internally, and in some cases, with customers. It is often used in a medical context as well.
Several codes words and sequences of code words have become well-known, such as Bravo Zulu (letter code BZ) for "well done", Checkpoint Charlie (Checkpoint C) in Berlin, and Zulu Time for Greenwich Mean Time or Coordinated Universal Time. During the Vietnam War, the US government referred to the Viet Cong guerrillas and the group itself as VC, or Victor Charlie; the name "Charlie" became synonymous with this force.
Pronunciation of code words
The final choice of code words for the letters of the alphabet and for the digits was made after hundreds of thousands of comprehension tests involving 31 nationalities. The qualifying feature was the likelihood of a code word being understood in the context of others. For example, Football has a higher chance of being understood than Foxtrot in isolation, but Foxtrot is superior in extended communication.
Pronunciations were set out by the ICAO before 1956 with advice from the governments of both the United States and United Kingdom. To eliminate national variations in pronunciation, posters illustrating the pronunciation desired by ICAO are available. However, there remain differences in the pronunciations published by ICAO and other agencies, and ICAO has apparently conflicting Latin-alphabet and IPA transcriptions. At least some of these differences appear to be typographic errors. In 2022 the Deutsches Institut für Normung (DIN) attempted to resolve these conflicts. For example, they consistently transcribe for what the ICAO had transcribed variously as in IPA and as a, ah, ar, er in orthography.
Just as words are spelled out as individual letters, numbers are spelled out as individual digits. That is, 17 is rendered as one seven and 60 as six zero. Depending on context, the word thousand may be used as in English, and for whole hundreds only (when the sequence 00 occurs at the end of a number), the word hundred may be used. For example, 1300 is read as one three zero zero if it is a transponder code or serial number, and as one thousand three hundred if it is an altitude or distance.
The ICAO, NATO, and FAA use modifications of English digits as code words, with 3, 4, 5 and 9 being pronounced tree, fower (rhymes with lower), fife and niner. The digit 3 is specified as tree so that it will not be mispronounced sri (and similarly for thousand); the long pronunciation of 4 (still found in some English dialects) keeps it somewhat distinct from for; 5 is pronounced with a second "f" because the normal pronunciation with a "v" is easily confused with "fire"; and 9 has an extra syllable to keep it distinct from the German word nein "no". (Prior to 1956, three and five had been pronounced with the English consonants, but with the vowels broken into two syllables.) For directions presented as the hour-hand position on a clock, the additional numerals "ten", "eleven" and "twelve" are used with the word "o'clock".
The ITU and IMO, however, specify a different set of code words. These are compounds of ICAO and Latinesque roots.
The IMO's GMDSS procedures permits the use of either set of code words.
Tables
There are two IPA transcriptions of the letter names, from the International Civil Aviation Organization (ICAO) and the Deutsches Institut für Normung (DIN). Both authorities indicate that a non-rhotic pronunciation is standard. That of the ICAO, first published in 1950 and reprinted many times without correction (e.g. the error in 'golf'), uses a large number of vowels. For instance, it has six low/central vowels: . The DIN consolidated all six into the single low-central vowel . The DIN vowels are partly predictable, with in closed syllables and in open syllables apart from echo and sierra, which have as in English, German and Italian. The DIN also reduced the number of stressed syllables in bravo and x-ray, consistent with the ICAO English respellings of those words and with the NATO change of spelling of x-ray to xray so that people would know to pronounce it as a single word.
There is no authoritative IPA transcription of the digits. However, there are respellings into both English and French, which can be compared to clarify some of the ambiguities and inconsistencies.
CCEB has code words for punctuation, including those in the table below.
Others are: "colon", "semi-colon", "exclamation mark", "question mark", "apostrophe", "quote", and "unquote".
History
Prior to World War I and the development and widespread adoption of two-way radio that supported voice, telephone spelling alphabets were developed to improve communication on low-quality and long-distance telephone circuits.
The first non-military internationally recognized spelling alphabet was adopted by the CCIR (predecessor of the ITU) during 1927. The experience gained with that alphabet resulted in several changes being made during 1932 by the ITU. The resulting alphabet was adopted by the International Commission for Air Navigation, the predecessor of the ICAO, and was used for civil aviation until World War II. It continued to be used by the IMO until 1965.
Throughout World War II, many nations used their own versions of a spelling alphabet. The US adopted the Joint Army/Navy radiotelephony alphabet during 1941 to standardize systems among all branches of its armed forces. The US alphabet became known as Able Baker after the words for A and B. The Royal Air Force adopted one similar to the United States one during World War II as well. Other British forces adopted the RAF radio alphabet, which is similar to the phonetic alphabet used by the Royal Navy during World War I. At least two of the terms are sometimes still used by UK civilians to spell words over the phone, namely F for Freddie and S for Sugar.
To enable the US, UK, and Australian armed forces to communicate during joint operations, in 1943 the CCB (Combined Communications Board; the combination of US and UK upper military commands) modified the US military's Joint Army/Navy alphabet for use by all three nations, with the result being called the US-UK spelling alphabet. It was defined in one or more of CCBP-1: Combined Amphibious Communications Instructions, CCBP3: Combined Radiotelephone (R/T) Procedure, and CCBP-7: Combined Communication Instructions. The CCB alphabet itself was based on the US Joint Army/Navy spelling alphabet. The CCBP (Combined Communications Board Publications) documents contain material formerly published in US Army Field Manuals in the 24-series. Several of these documents had revisions, and were renamed. For instance, CCBP3-2 was the second edition of CCBP3.
During World War II, the US military conducted significant research into spelling alphabets. Major F. D. Handy, directorate of Communications in the Army Air Force (and a member of the working committee of the Combined Communications Board), enlisted the help of Harvard University's Psycho-Acoustic Laboratory, asking them to determine the most successful word for each letter when using "military interphones in the intense noise encountered in modern warfare." He included lists from the US, Royal Air Force, Royal Navy, British Army, AT&T, Western Union, RCA Communications, and that of the International Telecommunications Convention. According to a report on the subject:
After World War II, with many aircraft and ground personnel from the allied armed forces, "Able Baker" was officially adopted for use in international aviation. During the 1946 Second Session of the ICAO Communications Division, the organization adopted the so-called "Able Baker" alphabet that was the 1943 US–UK spelling alphabet. However, many sounds were unique to English, so an alternative "Ana Brazil" alphabet was used in Latin America. In spite of this, International Air Transport Association (IATA), recognizing the need for a single universal alphabet, presented a draft alphabet to the ICAO during 1947 that had sounds common to English, French, Spanish and Portuguese.
From 1948 to 1949, Jean-Paul Vinay, a professor of linguistics at the Université de Montréal, worked closely with the ICAO to research and develop a new spelling alphabet. The directions of ICAO were that "To be considered, a word must:
Be a live word in each of the three working languages.
Be easily pronounced and recognized by airmen of all languages.
Have good radio transmission and readability characteristics.
Have a similar spelling in at least English, French, and Spanish, and the initial letter must be the letter the word identifies.
Be free from any association with objectionable meanings."
After further study and modification by each approving body, the revised alphabet was adopted on , to become effective on 1 April 1952 for civil aviation (but it may not have been adopted by any military).
Problems were soon found with this list. Some users believed that they were so severe that they reverted to the old "Able Baker" alphabet. Confusion among words like Delta and Extra, and between Nectar and Victor, or the poor intelligibility of other words during poor receiving conditions were the main problems. Later in 1952, ICAO decided to revisit the alphabet and their research. To identify the deficiencies of the new alphabet, testing was conducted among speakers from 31 nations, principally by the governments of the United Kingdom and the United States. In the United States, the research was conducted by the USAF-directed Operational Applications Laboratory (AFCRC, ARDC), to monitor a project with the Research Foundation of Ohio State University. Among the more interesting of the research findings was that "higher noise levels do not create confusion, but do intensify those confusions already inherent between the words in question".
By early 1956 the ICAO was nearly complete with this research, and published the new official phonetic alphabet in order to account for discrepancies that might arise in communications as a result of multiple alphabet naming systems coexisting in different places and organizations. NATO was in the process of adopting the ICAO spelling alphabet, and apparently felt enough urgency that it adopted the proposed new alphabet with changes based on NATO's own research, to become effective on 1 January 1956, but quickly issued a new directive on 1 March 1956 adopting the now official ICAO spelling alphabet, which had changed by one word (November) from NATO's earlier request to ICAO to modify a few words based on US Air Force research.
After all of the above study, only the five words representing the letters C, M, N, U, and X were replaced. The ICAO sent a recording of the new Radiotelephony Spelling Alphabet to all member states in November 1955. The final version given in the table above was implemented by the ICAO on , and the ITU adopted it no later than 1959 when they mandated its usage via their official publication, Radio Regulations. Because the ITU governs all international radio communications, it was also adopted by most radio operators, whether military, civilian, or amateur. It was finally adopted by the IMO in 1965.
During 1947 the ITU adopted the compound Latinate prefix-number words (Nadazero, Unaone, etc.), later adopted by the IMO during 1965.
Nadazero – from Spanish or Portuguese nada + NATO/ICAO zero
Unaone – generic Romance una, from Latin ūna + NATO/ICAO one
Bissotwo – from Latin bis + NATO/ICAO two. (1959 ITU proposals bis and too)
Terrathree – from Italian terzo + NATO/ICAO three ("tree") (1959 ITU proposals ter and tree)
Kartefour – from French quatre (Latin quartus) + NATO/ICAO four ("fow-er") (1959 ITU proposals quarto and fow-er)
Pantafive – from Greek penta- + NATO/ICAO five ("fife") (From 1959 ITU proposals penta and fife)
Soxisix – from French soix + NATO/ICAO six (1959 ITU proposals were saxo and six)
Setteseven – from Italian sette + NATO/ICAO seven (1959 ITU proposals sette and sev-en)
Oktoeight – generic Romance octo-, from Latin octō + NATO/ICAO eight (1959 ITU proposals octo and ait)
Novenine – from Italian nove + NATO/ICAO nine ("niner") (1959 ITU proposals were nona and niner)
In the official version of the alphabet, two spellings deviate from the English norm: Alfa and Juliett. Alfa is spelled with an f as it is in most European languages because the spelling Alpha may not be pronounced properly by native speakers of some languages – who may not know that ph should be pronounced as f. The spelling Juliett is used rather than Juliet for the benefit of French speakers, because they may otherwise treat a single final t as silent. For similar reasons, Charlie and Uniform have alternative pronunciations where the ch is pronounced "sh" and the u is pronounced "oo". Early on, the NATO alliance changed X-ray to Xray in its version of the alphabet to ensure that it would be pronounced as one word rather than as two, while the global organization ICAO keeps the spelling X-ray.
The alphabet is defined by various international conventions on radio, including:
Universal Electrical Communications Union (UECU), Washington, D.C., December 1920
International Radiotelegraph Convention, Washington, 1927 (which created the CCIR)
General Radiocommunication and Additional Regulations (Madrid, 1932)
Instructions for the International Telephone Service, 1932 (ITU-T E.141; withdrawn in 1993)
General Radiocommunication Regulations and Additional Radiocommunication Regulations (Cairo, 1938)
Radio Regulations and Additional Radio Regulations (Atlantic City, 1947), where "it was decided that the International Civil Aviation Organization and other international aeronautical organizations would assume the responsibility for procedures and regulations related to aeronautical communication. However, ITU would continue to maintain general procedures regarding distress signals."
1959 Administrative Radio Conference (Geneva, 1959)
International Telecommunication Union, Radio
Final Acts of WARC-79 (Geneva, 1979). Here the alphabet was formally named "Phonetic Alphabet and Figure Code".
International Code of Signals for Visual, Sound, and Radio Communications, United States Edition, 1969 (revised 2003)
Tables
For the 1938 and 1947 phonetics, each transmission of figures is preceded and followed by the words "as a number" spoken twice.
The ITU adopted the IMO phonetic spelling alphabet in 1959, and in 1969 specified that it be "for application in the maritime mobile service only".
Pronunciation was not defined prior to 1959. For the post-1959 phonetics, the underlined syllable of each letter word should be emphasized, and each syllable of the code words for the post-1969 figures should be equally emphasized.
International aviation
The Radiotelephony Spelling Alphabet is used by the International Civil Aviation Organization for international aircraft communications.
International maritime mobile service
The ITU-R Radiotelephony Alphabet is used by the International Maritime Organization for international marine communications.
Variants
Since "Nectar" was changed to "November" in 1956, the code has been mostly stable. However, there is occasional regional substitution of a few code words, such as replacing them with earlier variants, to avoid confusion with local terminology.
As of 2013, it was reported that "Delta" was often replaced by "David" or "Dixie" at Atlanta International Airport, where Delta Air Lines is based, because "Delta" is also the airline's callsign. Air traffic control once referred to Taxiway D at the same airport as "Taxiway Dixie", though this practice was officially discontinued in 2020.
"Foxtrot" may be shortened to "Fox" at airports in the United States.
British police use "Indigo" rather than "India".
In Indonesia, "London" is used in place of "Lima", because lima is the Malay word for 'five'.
It has been reported that "Hawk" is sometimes used for "Hotel" in the Philippines.
See also
Allied military phonetic spelling alphabets
APCO radiotelephony spelling alphabet (used by some US police departments)
International Code of Signals
Language-specific spelling alphabets
Finnish Armed Forces radio alphabet
German spelling alphabet
Greek spelling alphabet
Japanese radiotelephony alphabet
Korean spelling alphabet
Russian spelling alphabet
Swedish Armed Forces radio alphabet
List of military time zones
List of NATO country codes
PGP word list
Radiotelephony procedure
Procedure word
Brevity code
Ten-code
Q code
Spelling alphabet
Explanatory notes
References
External links
Amateur radio
International Civil Aviation Organization
General aviation
History of air traffic control
Latin-script representations
Military communications
NATO standardisation
Spelling alphabets
Telecommunications-related introductions in 1956
de:Buchstabiertafel#Internationale Buchstabiertafel | NATO phonetic alphabet | [
"Engineering"
] | 4,411 | [
"Military communications",
"Telecommunications engineering"
] |
59,046 | https://en.wikipedia.org/wiki/Silly%20Putty | Silly Putty is a toy containing silicone polymers that have unusual physical properties. It can flow like a liquid, bounce and can be stretched or broken depending on the amount of physical stress to which it is subjected. It contains viscoelastic liquid silicones, a type of non-Newtonian fluid, which makes it act as a viscous liquid over a long period of time but as an elastic solid over a short time period. It was originally created during research into a potential rubber substitute for use by the United States in World War II.
The name Silly Putty is a trademark of Crayola LLC. Other names are used to market similar substances from other manufacturers.
Description
As a bouncing putty, Silly Putty is noted for its unusual characteristics. It bounces when dropped from a height, but breaks when struck or stretched sharply; it can also float in a liquid and will form a puddle given enough time. Silly Putty and most other retail putty products have viscoelastic agents added to reduce the flow and enable the putty to hold its shape.
The original coral-colored Silly Putty is composed of 65% dimethylsiloxane (hydroxy-terminated polymers with boric acid), 17% silica (crystalline quartz), 9% Thixatrol ST (castor oil derivative), 4% polydimethylsiloxane, 1% decamethyl cyclopentasiloxane, 1% glycerine, and 1% titanium dioxide.
Silly Putty's unusual flow characteristics are due to the ingredient polydimethylsiloxane (PDMS), a viscoelastic substance. Viscoelasticity is a type of non-Newtonian flow, characterizing a material that acts as a viscous liquid over a long time period but as an elastic solid over a short time period. Because its apparent viscosity increases directly with respect to the amount of force applied, Silly Putty can be characterized as a dilatant fluid.
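One conventional way to make "apparent viscosity increases with applied force" quantitative (a standard rheological power-law model, not something stated in this article) is the Ostwald–de Waele relation between shear stress τ and shear rate γ̇:
\tau = K \dot{\gamma}^{\,n}, \qquad \eta_{\text{apparent}} = \frac{\tau}{\dot{\gamma}} = K \dot{\gamma}^{\,n-1}
with a flow-behaviour index n > 1 characterising a dilatant (shear-thickening) fluid such as the one described here.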
Silly Putty is also a fairly good adhesive. When newspaper ink was petroleum based, Silly Putty could be used to transfer newspaper images to other surfaces, providing amusement by distorting the transferred image afterwards. Newer papers with soy-based inks are more resistant to this process.
Generally, Silly Putty is difficult to remove from textured items such as dirt and clothing. Hand sanitizers containing alcohol are often helpful. Silly Putty will dissolve when in contact with an alcohol; after the alcohol evaporates, the material will not exhibit its original properties.
If Silly Putty is submerged in warm or hot water, it will become softer and thus "melt" much faster. It also becomes harder to remove small amounts of it from surfaces. After a long period of time, it will return to its original viscosity.
Silly Putty is sold as a piece of clay inside an egg-shaped plastic container. The Silly Putty brand is owned by Crayola LLC (formerly the Binney & Smith company). , twenty thousand eggs of Silly Putty are sold daily. Since 1950, more than 300 million eggs of Silly Putty (approximately ) have been sold. It is available in various colors, including glow-in-the-dark and metallic. Other brands offer similar materials, sometimes in larger-sized containers, and in a similarly wide variety of colors or with different properties, such as magnetism and iridescence.
History
During World War II, Japan invaded rubber-producing countries as it expanded its sphere of influence in the Pacific Rim. Rubber was vital for the production of rafts, tires, vehicle and aircraft parts, gas masks, and boots. In the US, all rubber products were rationed; citizens were encouraged to make their rubber products last until the end of the war and to donate spare tires, boots, and coats. Meanwhile, the government funded research into synthetic rubber compounds to attempt to solve this shortage.
Credit for the invention of Silly Putty is disputed and has been attributed variously to Earl Warrick of the then newly formed Dow Corning; Harvey Chin; and James Wright, a Scottish-born inventor working for General Electric in New Haven, Connecticut. Throughout his life, Warrick insisted that he and his colleague, Rob Roy McGregor, received the patent for Silly Putty before Wright did; but Crayola's history of Silly Putty states that Wright first invented it in 1943. Both researchers independently discovered that reacting boric acid with silicone oil would produce a gooey, bouncy material with several unique properties. The non-toxic putty would bounce when dropped, could stretch farther than regular rubber, would not go moldy, and had a very high melting temperature. However, the substance did not have all the properties needed to replace rubber.
In 1949, toy store owner Ruth Fallgatter came across the putty. She contacted marketing consultant Peter C. L. Hodgson (1912–1976). The two decided to market the bouncing putty by selling it in a clear case. Although it sold well, Fallgatter did not pursue it further. However, Hodgson saw its potential.
Already US$12,000 in debt, Hodgson borrowed $147 to buy a batch of the putty to pack portions into plastic eggs for $1, calling it Silly Putty. Initial sales were poor, but after a New Yorker article mentioned it, Hodgson sold over 250,000 eggs of silly putty in three days. However, Hodgson was almost put out of business in 1951 by the Korean War. Silicone, the main ingredient in silly putty, was put on ration, harming his business. A year later, the restriction on silicone was lifted and the production of Silly Putty resumed. Initially, it was primarily targeted towards adults. However, by 1955, the majority of its customers were aged six to twelve. In 1957, Hodgson produced the first televised commercial for Silly Putty, which aired during the Howdy Doody Show.
In 1961, Silly Putty went worldwide, becoming a hit in the Soviet Union and Europe. In 1968, it was taken into lunar orbit by the Apollo 8 astronauts.
Peter Hodgson died in 1976. A year later, Binney & Smith, the makers of Crayola products, acquired the rights to Silly Putty. , annual Silly Putty sales exceeded six million eggs.
Silly Putty was inducted into the National Toy Hall of Fame on May 28, 2001.
Other uses
In addition to its success as a toy, other uses for the putty have been found. In the home, it can be used to remove substances such as dirt, lint, pet hair, or ink from various surfaces. The material's unique properties have found niche use in medical and scientific applications. Occupational therapists use it for rehabilitative therapy of hand injuries. A number of other brands (such as Power Putty and TheraPutty) alter the material's properties, offering different levels of resistance. The material is also used as a tool to help reduce stress, and exists in various viscosities based on the user's preference.
Because of its adhesive characteristics, it was used by Apollo astronauts to secure their tools in zero gravity. Scale model building hobbyists use the putty as a masking medium when spray-painting model assemblies. The Steward Observatory uses a Silly-Putty backed lap to polish astronomical telescope mirrors.
Researchers from Trinity College Dublin School of Physics (Centre for Research on Adaptive Nanostructures and Nanodevices (CRANN) and Advanced Materials and Bioengineering Research (AMBER) Research Centers) have discovered nano composite mixtures of graphene and Silly Putty behave as sensitive pressure sensors, claiming the ability to measure the footsteps of a spider crawling on it.
See also
Blu Tack
Flubber (material)
Slime (toy)
References
External links
Clay toys
1940s toys
1950s toys
American inventions
Brand name materials
Crayola
Products introduced in 1949
Companies based in Northampton County, Pennsylvania
Dow Chemical Company
Easton, Pennsylvania
Non-Newtonian fluids
Polymers
Soft matter
Articles containing video clips
Sensory toys | Silly Putty | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,646 | [
"Polymers",
"Soft matter",
"Condensed matter physics",
"Polymer chemistry"
] |
59,052 | https://en.wikipedia.org/wiki/Ensemble%20%28mathematical%20physics%29 | In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single
system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902.
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.
Physical considerations
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes.
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function.
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium.
Terminology
The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature.
The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent.
Main types
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics.
"We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)
Three important thermodynamic ensembles were defined by Gibbs:
Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.
Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.
Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.
The calculations that can be made using each of these ensembles are explored further in their respective articles.
Other thermodynamic ensembles can be also defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived.
For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.
Equivalence
In the thermodynamic limit, all ensembles should produce identical observables, as they are related by Legendre transforms; deviations from this rule occur under conditions in which the state variables are non-convex, such as in measurements on small molecular systems.
Representations
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables.
In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily.
Requirements for representations
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system:
Test whether A, B are statistically equivalent.
If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 − p.
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set.
Quantum mechanical
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by ρ. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, X̂. The expectation value of this operator on the statistical ensemble is given by the trace ⟨X⟩ = Tr(X̂ρ).
This can be used to evaluate averages (operator X̂), variances (using operator X̂²), covariances (using operator X̂Ŷ), etc. The density matrix must always have a trace of 1, Tr ρ = 1 (this essentially is the condition that the probabilities must add up to one).
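A small numerical sketch of such a trace average, using NumPy with a toy single-qubit density matrix chosen purely for illustration:

import numpy as np

# Toy diagonal density matrix (Hermitian, trace 1) and an observable.
rho = np.array([[0.7, 0.0],
                [0.0, 0.3]])
Z = np.array([[1.0, 0.0],
              [0.0, -1.0]])    # Pauli-Z playing the role of the operator

average = np.trace(rho @ Z).real                  # Tr(rho Z)
variance = np.trace(rho @ Z @ Z).real - average**2
print(average, variance)                          # about 0.4 and 0.84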
In general, the ensemble evolves over time according to the von Neumann equation.
Equilibrium ensembles (those that do not evolve over time, ∂ρ/∂t = 0) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator N̂. Such equilibrium ensembles are a diagonal matrix in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is ρ = Σᵢ Pᵢ |ψᵢ⟩⟨ψᵢ|,
where the |ψᵢ⟩, indexed by i, are the elements of a complete and orthogonal basis, and Pᵢ is the probability assigned to each basis state. (Note that in other bases, the density matrix is not necessarily diagonal.)
Classical mechanical
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation.
In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q₁, ..., qₙ, and n associated canonical momenta called p₁, ..., pₙ. The ensemble is then represented by a joint probability density function ρ(p₁, ..., pₙ, q₁, ..., qₙ).
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N₁ (first kind of particle), N₂ (second kind of particle), and so on up to Nₛ (the last kind of particle; s is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function ρ(N₁, ..., Nₛ, p₁, ..., pₙ, q₁, ..., qₙ). The number of coordinates n varies with the numbers of particles.
Any mechanical quantity can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ:
The condition of probability normalization applies, requiring
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P, by a factor built from the two constants defined below (a sketch of the relation is given after the definitions),
where
h is an arbitrary but predetermined constant with the units of energy × time, setting the extent of the microstate and providing correct dimensions to ρ.
C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns.
Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems.
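A sketch of the relation referred to above, assuming n coordinate–momentum pairs, with P denoting the probability assigned to a microstate:
\rho(q_1, \ldots, q_n, p_1, \ldots, p_n) \;=\; \frac{1}{h^{n} C}\, P_{\text{microstate}}
This is consistent with ρ integrating to one over phase space while the microstate probabilities sum to one.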
Correcting overcounting in phase space
Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems:
Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another.
Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox.
Foundational issues in defining the chemical potential and the grand canonical ensemble.
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting.
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to one, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers.
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, the overcounting related to the exchange of identical particles is corrected by using the factorial factor sketched below.
This is known as "correct Boltzmann counting".
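For a system with N₁ particles of the first kind, N₂ of the second, and so on up to the s-th kind (the same variables used earlier for the grand ensemble), the standard correction factor is:
C = N_1! \, N_2! \cdots N_s!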
Ensembles in statistics
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like.
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks.
Ensemble average
In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, according to the distribution of the system on its micro-states in this ensemble.
Since the ensemble average is dependent on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen at the thermodynamic limit.
The grand canonical ensemble is an example of an open system.
Classical statistical mechanics
For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system:
where
⟨A⟩ is the ensemble average of the system property A,
β is 1/(k_B T), known as thermodynamic beta,
H is the Hamiltonian of the classical system in terms of the set of coordinates qᵢ and their conjugate generalized momenta pᵢ,
dτ is the volume element of the classical phase space of interest.
The denominator in this expression is known as the partition function and is denoted by the letter Z.
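Putting the symbols above together, a sketch of the standard canonical form of this average (assuming the conventional Boltzmann weight e^(−βH) used throughout the canonical-ensemble discussion):
\langle A \rangle \;=\; \frac{\int A(q, p)\, e^{-\beta H(q, p)}\, d\tau}{\int e^{-\beta H(q, p)}\, d\tau}
The denominator here is the partition function Z referred to in the text.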
Quantum statistical mechanics
In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral:
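A sketch of the corresponding canonical-ensemble sum, assuming discrete energy eigenstates labelled i, with energies E_i and observable values A_i:
\langle A \rangle \;=\; \frac{\sum_i A_i\, e^{-\beta E_i}}{\sum_i e^{-\beta E_i}}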
Canonical ensemble average
The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics.
The microcanonical ensemble represents an isolated system in which energy (E), volume (V) and the number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), but the volume (V) and the number of particles (N) are all constant. The grand canonical ensemble represents an open system which can exchange energy (E) and particles (N) with its surroundings, but the volume (V) is kept constant.
Operational interpretation
In the discussion given so far, while rigorous, we have taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical context. What has not been shown is that the ensemble itself (not the consequent results) is a precisely defined object mathematically. For instance,
It is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?)
It is not clear how to physically generate an ensemble.
In this section, we attempt to partially answer this question.
Suppose we have a preparation procedure for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, ...,Xk, which in our mathematical idealization, we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepped systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes or no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas (E, X1), Meas (E, X2), ..., Meas (E, Xk). Each one of these values is a 0 (or no) or a 1 (yes).
Assume the following time average exists:
p(E) = lim_(N→∞) (1/N) Σ_(k=1..N) Meas(E, Xk)
For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes–no questions with the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S, so that the limiting relative frequency of a yes answer to the question E is
p(E) = Tr(E S).
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values.
See also
Density matrix
Ensemble (fluid mechanics)
Phase space
Liouville's theorem (Hamiltonian)
Maxwell–Boltzmann statistics
Replication (statistics)
Notes
References
External links
Monte Carlo applet applied in statistical physics problems.
Equations of physics
Philosophy of thermal and statistical physics | Ensemble (mathematical physics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,840 | [
"Philosophy of thermal and statistical physics",
"Equations of physics",
"Mathematical objects",
"Equations",
"Thermodynamics",
"Statistical ensembles",
"Statistical mechanics"
] |
59,114 | https://en.wikipedia.org/wiki/Packet%20analyzer | A packet analyzer (also packet sniffer or network analyzer) is a computer program or computer hardware such as a packet capture appliance that can analyze and log traffic that passes over a computer network or part of a network. Packet capture is the process of intercepting and logging traffic. As data streams flow across the network, the analyzer captures each packet and, if needed, decodes the packet's raw data, showing the values of various fields in the packet, and analyzes its content according to the appropriate RFC or other specifications.
A packet analyzer used for intercepting traffic on wireless networks is known as a wireless analyzer; those designed specifically for Wi-Fi networks are Wi-Fi analyzers. While a packet analyzer can also be referred to as a network analyzer or protocol analyzer, these terms can also have other meanings. Protocol analyzer can technically be a broader, more general class that includes packet analyzers/sniffers. However, the terms are frequently used interchangeably.
Capabilities
On wired shared-medium networks, such as Ethernet, Token Ring, and FDDI, depending on the network structure (hub or switch), it may be possible to capture all traffic on the network from a single machine. On modern networks, traffic can be captured using a network switch with port mirroring, which mirrors all packets that pass through designated ports of the switch to another port, if the switch supports port mirroring. A network tap is an even more reliable solution than using a monitoring port, since taps are less likely to drop packets during high traffic load.
On wireless LANs, traffic can be captured on one channel at a time, or by using multiple adapters, on several channels simultaneously.
On wired broadcast and wireless LANs, to capture unicast traffic between other machines, the network adapter capturing the traffic must be in promiscuous mode. On wireless LANs, even if the adapter is in promiscuous mode, packets not for the service set the adapter is configured for are usually ignored. To see those packets, the adapter must be in monitor mode. No special provisions are required to capture multicast traffic to a multicast group the packet analyzer is already monitoring, or broadcast traffic.
When traffic is captured, either the entire contents of packets or just the headers are recorded. Recording just headers reduces storage requirements, and avoids some privacy legal issues, yet often provides sufficient information to diagnose problems.
Captured information is decoded from raw digital form into a human-readable format that lets engineers review exchanged information. Protocol analyzers vary in their abilities to display and analyze data.
Some protocol analyzers can also generate traffic. These can act as protocol testers. Such testers generate protocol-correct traffic for functional testing, and may also have the ability to deliberately introduce errors to test the device under test's ability to handle errors.
Protocol analyzers can also be hardware-based, either in probe format or, as is increasingly common, combined with a disk array. These devices record packets or packet headers to a disk array.
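As a rough software illustration of packet capture (not tied to any product named in this article), the following sketch assumes the third-party scapy library, a capture-capable interface, and administrator/root privileges; the BPF filter string and packet count are arbitrary example values.
from scapy.all import sniff   # third-party library; capturing usually needs root privileges
def show(pkt):
    # Print a one-line summary of each captured packet as it arrives.
    print(pkt.summary())
# Capture ten packets matching a BPF filter on the default interface, then stop.
sniff(filter="tcp port 80", count=10, prn=show)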
Uses
Packet analyzers can:
Analyze network problems
Detect network intrusion attempts
Detect network misuse by internal and external users
Document regulatory compliance by logging all perimeter and endpoint traffic
Gain information for effecting a network intrusion
Identify data collection and sharing of software such as operating systems (for strengthening privacy, control and security)
Aid in gathering information to isolate exploited systems
Monitor WAN bandwidth utilization
Monitor network usage (including internal and external users and systems)
Monitor data in transit
Monitor WAN and endpoint security status
Gather and report network statistics
Identify suspect content in network traffic
Troubleshoot performance problems by monitoring network data from an application
Serve as the primary data source for day-to-day network monitoring and management
Spy on other network users and collect sensitive information such as login details or users' cookies (depending on any content encryption methods that may be in use)
Reverse engineer proprietary protocols used over the network
Debug client/server communications
Debug network protocol implementations
Verify adds, moves, and changes
Verify internal control system effectiveness (firewalls, access control, Web filter, spam filter, proxy)
Packet capture can be used to fulfill a warrant from a law enforcement agency to wiretap all network traffic generated by an individual. Internet service providers and VoIP providers in the United States must comply with Communications Assistance for Law Enforcement Act regulations. Using packet capture and storage, telecommunications carriers can provide the legally required secure and separate access to targeted network traffic and can use the same device for internal security purposes. Collecting data from a carrier system without a warrant is illegal due to laws about interception. By using end-to-end encryption, communications can be kept confidential from telecommunication carriers and legal authorities.
Notable packet analyzers
Allegro Network Multimeter
Capsa Network Analyzer
Charles Web Debugging Proxy
Carnivore (software)
CommView
dSniff
EndaceProbe Packet Capture Platform
ettercap
Fiddler
Kismet
Lanmeter
Microsoft Network Monitor
NarusInsight
NetScout Systems nGenius Infinistream
ngrep, Network Grep
OmniPeek, Omnipliance by Savvius
SkyGrabber
The Sniffer
snoop
tcpdump
Observer Analyzer
Wireshark (formerly known as Ethereal)
Xplico Open source Network Forensic Analysis Tool
See also
Bus analyzer
Logic analyzer
Network detector
pcap
Signals intelligence
Traffic generation model
Notes
References
External links
Packet Capture
Network analyzers
Packets (information technology)
Wireless networking
Deep packet capture | Packet analyzer | [
"Technology",
"Engineering"
] | 1,120 | [
"Wireless networking",
"Computer networks engineering"
] |
59,131 | https://en.wikipedia.org/wiki/Colon%20%28punctuation%29 | The colon, :, is a punctuation mark consisting of two equally sized dots aligned vertically. A colon often precedes an explanation, a list, or a quoted sentence. It is also used between hours and minutes in time, between certain elements in medical journal citations, between chapter and verse in Bible citations, and, in the US, for salutations in business letters and other formal letters.
History
In Ancient Greek, in rhetoric and prosody, the term κῶλον (kōlon, 'limb, member of a body') did not refer to punctuation, but to a member or section of a complete thought or passage; see also Colon (rhetoric). From this usage, in palaeography, a colon is a clause or group of clauses written as a line in a manuscript.
In the 3rd century BC, Aristophanes of Byzantium is alleged to have devised a punctuation system, in which the end of such a kōlon was thought to occasion a medium-length breath, and was marked by a middot (·). In practice, evidence is scarce for its early usage, but it was revived later as the ano teleia, the modern Greek semicolon. Some writers also used a double-dot symbol that later came to be used as a full stop or to mark a change of speaker. (See also Punctuation in Ancient Greek.)
In 1589, in The Arte of English Poesie, the English term colon and the corresponding punctuation mark are first attested.
In 1622, in Nicholas Okes' print of William Shakespeare's Othello, the typographical construction of a colon followed by a hyphen or dash to indicate a restful pause is attested. This construction, known as the dog's bollocks, was once common in British English, though this usage is now discouraged.
As late as the 18th century, John Mason related the appropriateness of a colon to the length of the pause taken when reading the text aloud, but silent reading eventually replaced this with other considerations.
Usage in English
In modern English usage, a complete sentence precedes a colon, while a list, description, explanation, or definition follows it. The elements which follow the colon may or may not form a complete sentence: since the colon is preceded by a complete sentence, the whole construction remains a sentence whether or not what follows the colon is itself one. While it is acceptable to capitalise the first letter after the colon in American English, this is not the case in British English, except where a proper noun immediately follows the colon.
Colon used before list
Daequan was so hungry that he ate everything in the house: chips, cold pizza, pretzels and dip, hot dogs, peanut butter, and candy.
Colon used before a description
Bertha is so desperate that she'll date anyone, even William: he's uglier than a squashed toad on the highway, and that's on his good days.
Colon before definition
For years while I was reading Shakespeare's Othello and criticism on it, I had to constantly look up the word "egregious" since the villain uses that word: outstandingly bad or shocking.
Colon before explanation
I guess I can say I had a rough weekend: I had chest pain and spent all Saturday and Sunday in the emergency room.
Some writers use fragments (incomplete sentences) before a colon for emphasis or stylistic preferences (to show a character's voice in literature), as in this example:
Dinner: chips and juice. What a well-rounded diet I have.
The Bedford Handbook describes several uses of a colon. For example, one can use a colon after an independent clause to direct attention to a list, an appositive, or a quotation, and it can be used between independent clauses if the second summarizes or explains the first. In non-literary or non-expository uses, one may use a colon after the salutation in a formal letter, to indicate hours and minutes, to show proportions, between a title and subtitle, and between city and publisher in bibliographic entries.
Luca Serianni, an Italian scholar who helped to define and develop the colon as a punctuation mark, identified four punctuational modes for it: syntactical-deductive, syntactical-descriptive, appositive, and segmental.
Syntactical-deductive
The colon introduces the logical consequence, or effect, of a fact stated before.
There was only one possible explanation: the train had never arrived.
Syntactical-descriptive
In this sense the colon introduces a description; in particular, it makes explicit the elements of a set.
I have three sisters: Daphne, Rose, and Suzanne.
Syntactical-descriptive colons may separate the numbers indicating hours, minutes, and seconds in abbreviated measures of time.
The concert begins at 21:45.
The rocket launched at 09:15:05.
British English and Australian English, however, more frequently use a point for this purpose:
The programme will begin at 8.00 pm.
You will need to arrive by 14.30.
A colon is also used in the descriptive location of a book verse if the book is divided into verses, such as in the Bible or the Quran:
"Isaiah 42:8"
"Deuteronomy 32:39"
"Quran 10:5"
Appositive
Luruns could not speak: he was drunk.
An appositive colon also separates the subtitle of a work from its principal title. (In effect, the example given above illustrates an appositive use of the colon as an abbreviation for the conjunction "because".) Dillon has noted the impact of colons on scholarly articles, but the reliability of colons as a predictor of quality or impact has also been challenged. In titles, neither needs to be a complete sentence as titles do not represent expository writing:
Star Wars Episode VI: Return of the Jedi
Segmental
Like a dash or quotation mark, a segmental colon introduces speech. The segmental function was once a common means of indicating an unmarked quotation on the same line. The following example is from the grammar book The King's English:
Benjamin Franklin proclaimed the virtue of frugality: A penny saved is a penny earned.
This form is still used in British industry-standard templates for written performance dialogues, such as in a play. The colon indicates that the words following a character's name are spoken by that character.
Patient: Doctor, I feel like a pair of curtains.
Doctor: Pull yourself together!
The uniform visual pattern of <character_nametag : character_spoken_lines> placement on a script page assists an actor in scanning for the lines of their assigned character during rehearsal, especially if a script is undergoing rewrites between rehearsals.
Use of capitals
Use of capitalization or lower-case after a colon varies. In British English, and in most Commonwealth countries, the word following the colon is in lower case unless it is normally capitalized for some other reason, as with proper nouns and acronyms. British English also capitalizes a new sentence introduced by a colon's segmental use.
American English permits writers to similarly capitalize the first word of any independent clause following a colon. This follows the guidelines of some modern American style guides, including those published by the Associated Press and the Modern Language Association. The Chicago Manual of Style, however, requires capitalization only when the colon introduces a direct quotation, a direct question, or two or more complete sentences.
In many European languages, the colon is usually followed by a lower-case letter unless the upper case is required for other reasons, as with British English. German usage requires capitalization of independent clauses following a colon. Dutch further capitalizes the first word of any quotation following a colon, even if it is not a complete sentence on its own.
Spacing and parentheses
In print, a thin space was traditionally placed before a colon and a thick space after it. In modern English-language printing, no space is placed before a colon and a single space is placed after it. In French-language typing and printing, the traditional rules are preserved.
One or two spaces may be and have been used after a colon. The older convention (designed to be used by monospaced fonts) was to use two spaces after a colon.
In modern typography, a colon will be placed outside the closing parenthesis introducing a list. In very early English typography, it could be placed inside, as seen in Roger Williams' 1643 book about the Native American languages of New England.
Usage in other languages
Suffix separator
In Finnish and Swedish, the colon can appear inside words in a manner similar to the apostrophe in the English possessive case, connecting a grammatical suffix to an abbreviation or initialism, a special symbol, or a digit (e.g., Finnish USA:n and Swedish USA:s for the genitive case of "USA", Finnish %:ssa for the inessive case of "%", or Finnish 20:een for the illative case of "20").
Abbreviation mark
Written Swedish uses colons in contractions, such as S:t for Sankt (Swedish for "Saint") – for example in the name of the Stockholm metro station S:t Eriksplan, and k:a for kyrka ("church") – for instance Svenska k:a (Svenska kyrkan), the Evangelical Lutheran national Church of Sweden. This can even occur in people's names, for example Antonia Ax:son Johnson (Ax:son for Axelson). Early Modern English texts also used colons to mark abbreviations.
Word separator
In Ethiopia, both Amharic and Ge'ez script used and sometimes still use a colon-like mark as word separator.
Historically, a colon-like mark was used as a word separator in Old Turkic script.
End of sentence or verse
In Armenian, a colon indicates the end of a sentence, similar to a Latin full stop or period.
In liturgical Hebrew, the sof pasuq is used in some writings such as prayer books to signal the end of a verse.
Score divider
In German, Hebrew, and sometimes in English, a colon divides the scores of opponents in sports and games. A result of 149–0 would be written as 149 : 0 in German and in Hebrew.
Mathematics and logic
The colon is used in mathematics, cartography, model building, and other fields; in this context it denotes a ratio or a scale, as in 3:1 (pronounced "three to one").
When a ratio is reduced to a simpler form, such as 10:15 to 2:3, this may be expressed with a double colon as 10:15::2:3; this would be read "10 is to 15 as 2 is to 3". This form is also used in tests of logic where the question of "Dog is to Puppy as Cat is to _?" can be expressed as "Dog:Puppy::Cat:_". For these uses, there is a dedicated Unicode symbol, U+2236 ∶ RATIO, that is preferred in some contexts. Compare 2∶3 (ratio colon) with 2:3 (U+003A ASCII colon).
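As a small, purely illustrative check of the reduced-ratio reading, Python's standard fractions module can compare 10:15 and 2:3 as exact rational numbers:
from fractions import Fraction
# Illustration only: "10 is to 15 as 2 is to 3" holds because the two ratios are equal.
print(Fraction(10, 15) == Fraction(2, 3))   # True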
In some languages (e.g. German, Russian, and French), the colon is the commonly used sign for division (instead of ÷).
The notation [G : H] may also denote the index of a subgroup H in a group G.
The notation f : X → Y indicates that f is a function with domain X and codomain Y.
The combination with an equals sign (:=) is used for definitions.
In mathematical logic, when using set-builder notation for describing the characterizing property of a set, it is used as an alternative to a vertical bar (which is the ISO 31-11 standard), to mean "such that". Example:
S = {x ∈ ℝ : 1 < x < 3} (S is the set of all x in ℝ, the real numbers, such that x is strictly greater than 1 and strictly smaller than 3)
In older literature on mathematical logic, it is used to indicate how expressions should be bracketed (see Glossary of Principia Mathematica).
In type theory and programming language theory, the colon sign after a term is used to indicate its type, sometimes as a replacement for the "∈" symbol.
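As a concrete, hedged illustration of the same convention, Python places a colon between a name and its type in optional annotations; the function and variable names below are invented for the example.
# Illustrative sketch: the colon reads "has type" in Python annotations.
def scale(value: float, factor: float = 2.0) -> float:
    return value * factor
count: int = 3            # "count has type int"
print(scale(count))       # 6.0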
A colon is also sometimes used to indicate a tensor contraction involving two indices, and a double colon (::) for a contraction over four indices.
A colon is also used to denote a parallel sum operation involving two operands (many authors, however, instead use a ∥ sign and a few even a ∗ for this purpose).
Computing
The character was on early typewriters and therefore appeared in most text encodings, such as Baudot code and EBCDIC. It was placed at code 58 in ASCII and from there inherited into Unicode. Unicode also defines several related characters:
, used in IPA.
, IPA modifier-letter.
, used in IPA.
, IPA modifier-letter.
, used by Uralic Phonetic Alphabet.
, compatible with right-to-left text.
, for mathematical usage.
, for use in pretty-printing programming languages.
, see Colon (letter), sometimes used in Windows filenames as it is identical to the colon in the Segoe UI font used for filenames. The colon itself is not permitted as it is a reserved character.
, compatibility character for the Chinese Standard GB 18030.
, for compatibility with halfwidth and fullwidth fonts.
, compatibility character for the Chinese National Standard CNS 11643.
Programming languages
Many programming languages, most notably ALGOL, Pascal and Ada, use a colon and equals sign as the assignment operator, to distinguish it from a single equals which is an equality test (C instead uses a single equals as assignment, and a double equals as the equality test).
Many languages including C and Java use the colon to indicate the text before it is a label, such as a target for a goto or an introduction to a case in a switch statement. In a related use, Python uses a colon to separate a control statement (the clause header) from the block of statements it controls (the suite):
if test(x):
print("test(x) is true!")
else:
print("test(x) is not true...")
In many languages, including JavaScript, colons are used to define name–value pairs in a dictionary or object. This is also used by data formats such as JSON. Some other languages use an equals sign.
var obj = {
name: "Charles",
age: 18,
}
The colon is used as part of the ?: conditional operator in C and many other languages.
C++ uses a double colon as the scope resolution operator, and class member access. Most other languages use a period but C++ had to use this for compatibility with C. Another language using colons for scope resolution is Erlang, which uses a single colon.
In BASIC, it is used as a separator between the statements or instructions in a single line. Most other languages use a semicolon, but BASIC had already used the semicolon to separate items in print statements.
In Forth, a colon precedes definition of a new word.
Haskell uses a colon (pronounced as "cons", short for "construct") as an operator to add a data element to the front of a list:
"child" : ["woman", "man"] -- equals ["child","woman","man"]
while a double colon :: is read as "has type of" (compare scope resolution operator):
("text", False) :: ([Char], Bool)
The ML languages (such as Standard ML) have the above reversed, where the double colon (::) is used to add an element to the front of a list; and the single colon (:) is used for type guards.
MATLAB uses the colon as a binary operator to generate a vector, or to select a part of an extant matrix.
APL uses the colon:
to introduce a control structure element. In this usage it must be the first non-blank character of the line.
after a label name that will be the target of a :goto or a right-pointing arrow (this style of programming is deprecated and programs are supposed to use control structures instead).
to separate a guard (Boolean expression) from its expression in a dynamic function. Two colons are used for an Error guard (one or more error numbers).
Colon + space are used in class definitions to indicate inheritance.
⍠ (a colon in a box) is used by APL for its variant operator.
The colon is also used in many operating systems commands.
In the esoteric programming language INTERCAL, the colon is called two-spot and used to label a 32-bit variable, distinct from spot (.) to label a 16-bit variable.
Addresses
Internet URLs use the colon to separate the protocol (such as http) from the hostname or IP address.
In an IPv6 address, colons (and one optional double colon) separate up to 8 groups of 16 bits in hexadecimal representation. In a URL, a colon follows the initial scheme name (such as Hypertext Transfer Protocol (HTTP) or File Transfer Protocol (FTP)), and separates a port number from the hostname or IP address.
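A short standard-library sketch (with a made-up URL) showing both colon roles, after the scheme and before the port number:
from urllib.parse import urlsplit
# Illustration only: example.com and port 8080 are placeholders.
parts = urlsplit("http://example.com:8080/docs?page=2")
print(parts.scheme)     # 'http'  (the text before the first colon)
print(parts.hostname)   # 'example.com'
print(parts.port)       # 8080   (separated from the host by a colon)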
In Microsoft Windows filenames, the colon is reserved for use in alternate data streams and cannot appear in a filename. It was used as the directory separator in Classic Mac OS, and was difficult to use in early versions of the newer BSD-based macOS due to code swapping the slash and colon to try to preserve this usage. In most systems it is often difficult to put a colon in a filename as the shell interprets it for other purposes.
CP/M and early versions of MS-DOS required the colon after the names of devices, though this gradually disappeared except for disks (where it had to be placed between the disk name and the required path representation of the file, as in C:\Windows\). This then migrated to use in URLs.
Text markup
It is often used as a single post-fix delimiter, signifying a token keyword had immediately preceded it or the transition from one mode of character string interpretation to another related mode. Some applications, such as the widely used MediaWiki, utilize the colon as both a pre-fix and post-fix delimiter.
In wiki markup, the colon is often used to indent text. Common usage includes separating or marking comments in a discussion as replies, or to distinguish certain parts of a text.
In human-readable text messages, a colon, or multiple colons, is sometimes used to denote an action (similar to how asterisks are used) or to emote (for example, in vBulletin). In the action denotation usage it has the inverse function of quotation marks, denoting actions where unmarked text is assumed to be dialogue. For example:
Tom: Pluto is so small; it should not be considered a planet. It is tiny!
Mark: Oh really? ::drops Pluto on Tom's head:: Still think it's small now?
Colons may also be used for sounds, e.g., ::click::, though sounds can also be denoted by asterisks or other punctuation marks.
Colons can also be used to represent eyes in emoticons.
See also
Semicolon ()
Two dots (disambiguation)
Notes
References
External links
Walden University Guides Punctuation: Colons
Punctuation
Typographical symbols
Programming language comparisons
Articles with example Haskell code
Articles with example JavaScript code
Articles with example Python (programming language) code | Colon (punctuation) | [
"Mathematics",
"Technology"
] | 4,122 | [
"Symbols",
"Programming language comparisons",
"Computing comparisons",
"Typographical symbols"
] |
59,153 | https://en.wikipedia.org/wiki/Ampersand | The ampersand, also known as the and sign, is the logogram &, representing the conjunction "and". It originated as a ligature of the letters of the word et (Latin for "and").
Etymology
Traditionally in English, when spelling aloud, any letter that could also be used as a word in itself ("A", "I", and "O") was referred to by the Latin expression per se ('by itself'), as in "per se A" or "A per se A". The character &, when used by itself as opposed to more extended forms such as &c., was similarly referred to as "and per se and". This last phrase was routinely slurred to "ampersand", and the term had entered common English usage by 1837.
It has been falsely claimed that André-Marie Ampère used the symbol in his widely read publications and that people began calling the new shape "Ampère's and".
History
The ampersand can be traced back to the 1st century AD and the old Roman cursive, in which the letters E and T occasionally were written together to form a ligature (Evolution of the ampersand – figure 1). In the later and more flowing New Roman Cursive, ligatures of all kinds were extremely common; figures 2 and 3 from the middle of 4th century are examples of how the et-ligature could look in this script. During the later development of the Latin script leading up to Carolingian minuscule (9th century) the use of ligatures in general diminished. The et-ligature, however, continued to be used and gradually became more stylized and less revealing of its origin (figures 4–6).
The modern italic type ampersand is a kind of "et" ligature that goes back to the cursive scripts developed during the Renaissance. After the advent of printing in Europe in 1455, printers made extensive use of both the italic and Roman ampersands. Since the ampersand's roots go back to Roman times, many languages that use a variation of the Latin alphabet make use of it.
The ampersand often appeared as a character at the end of the Latin alphabet, as for example in Byrhtferð's list of letters from 1011. Similarly, & was regarded as the 27th letter of the English alphabet, as taught to children in the US and elsewhere. An example may be seen in M. B. Moore's 1863 book The Dixie Primer, for the Little Folks. In her 1859 novel Adam Bede, George Eliot refers to this when she makes Jacob Storey say: "He thought it [Z] had only been put to finish off th' alphabet like; though ampusand would ha' done as well, for what he could see." The popular nursery rhyme Apple Pie ABC finishes with the lines "X, Y, Z, and ampersand, All wished for a piece in hand".
Similar characters
In Irish and Scottish Gaelic, the character ⁊ is used in place of the ampersand. This character is a survival of Tironian notes, a medieval shorthand system. It is known as the Tironian et in English; Irish and Scottish Gaelic each have their own name for it.
The logical conjunction symbol, ∧, is often pronounced "and", but is not related to the ampersand.
Writing the ampersand
In everyday handwriting, the ampersand is sometimes simplified in design as a large lowercase epsilon or a reversed numeral 3, superimposed by a vertical line. The ampersand is also sometimes shown as an epsilon with a vertical line above and below it, or a dot above and below it.
The plus sign (itself based on an et-ligature) is often informally used in place of an ampersand, sometimes with an added loop. Other times it is a single stroke with a diagonal line connecting the bottom to the left side. This was a version of shorthand for the ampersand, and the stroke economy of this version provided ease of writing for workers while also ensuring the character remained distinct from other numeric or alphabetic symbols.
Usage
Ampersands are commonly seen in business names formed from a partnership of two or more people, such as Johnson & Johnson, Dolce & Gabbana, Marks & Spencer, and Tiffany & Co., as well as some abbreviations containing the word and, such as AT&T (American Telephone and Telegraph), A&P (supermarkets), P&O (originally "Peninsular and Oriental", shipping and logistics company), R&D (research and development), D&B (drum and bass), D&D (Dungeons & Dragons), R&B (rhythm and blues), B&B (bed and breakfast), and P&L (profit and loss).
In film credits for stories, screenplays, etc., & indicates a closer collaboration than and. The ampersand is used by the Writers Guild of America to denote two writers collaborating on a specific script, rather than one writer rewriting another's work. In screenplays, two authors joined with & collaborated on the script, while two authors joined with and worked on the script at different times and may not have consulted each other at all. In the latter case, they both contributed enough significant material to the screenplay to receive credit but did not work together. As a result, both & and and may appear in the same credit, as appropriate to how the writing proceeded.
In APA style, the ampersand is used when citing sources in text such as (Jones & Jones, 2005). In the list of references, an ampersand precedes the last author's name when there is more than one author. (This does not apply to MLA style, which calls for the "and" to be spelled.)
The phrase ("and the rest"), usually written as etc. can be abbreviated &c. representing the combination et + c(etera).
The ampersand can be used to indicate that the "and" in a listed item is a part of the item's name and not a separator (e.g. "Rock, pop, rhythm & blues and hip hop").
The ampersand may still be used as an abbreviation for "and" in informal writing regardless of how "and" is used.
Computing
Encoding and display
The character in Unicode is U+0026 & AMPERSAND; this is inherited from the same value in ASCII.
Apart from this, Unicode also defines a number of variant ampersand characters; several of these are carryovers from the Wingdings fonts and are meant only for backward compatibility with those fonts.
On the QWERTY keyboard layout, the ampersand is typed as Shift+7. It is almost always available on keyboard layouts. On the AZERTY keyboard layout, the ampersand is an unmodified keystroke on the digit row.
In URLs, the ampersand must be replaced by %26 when representing a string character to avoid interpretation as a URL syntax character.
Programming languages
In the 20th century, following the development of formal logic, the ampersand became a commonly used logical notation for the binary operator or sentential connective AND. This usage was adopted in computing.
Many languages with syntax derived from C, including C++, Perl, and more, differentiate between:
& for bitwise AND, which operates on the individual bits of its integer operands;
&& for short-circuit logical AND, which evaluates its right operand only if the left operand is true.
In C, C++, and Go, a prefix & is a unary operator denoting the address in memory of its argument, e.g. &x.
In C++ and PHP, a unary prefix & before a formal parameter of a function denotes pass-by-reference.
In Pascal, an & as the first character of an identifier prevents the compiler from treating it as a keyword, thus escaping it.
In Fortran, the ampersand forces the compiler to treat two lines as one. This is accomplished by placing an ampersand at the end of the first line and at the beginning of the second line.
In many implementations of ALGOL 60 the ampersand denotes the tens exponent of a real number.
In Common Lisp, the ampersand is the prefix for lambda list keywords.
Ampersand is the string concatenation operator in many BASIC dialects, AppleScript, Lingo, HyperTalk, and FileMaker. In Ada it applies to all one-dimensional arrays, not just strings.
BASIC-PLUS on the DEC PDP-11 uses the ampersand as a short form of the verb PRINT.
Applesoft BASIC used the ampersand as an internal command, not intended to be used for general programming, that invoked a machine language program in the computer's ROM.
In some versions of BASIC, the unary suffix & denotes that a variable is of type long, or 32 bits in length.
The ampersand was occasionally used as a prefix to denote a hexadecimal number, such as &FF for decimal 255, for instance in BBC BASIC. (The modern convention is to use "x" as a prefix to denote hexadecimal, thus 0xFF.) Some other languages, such as the Monitor built into ROM on the Commodore 128, used it to indicate octal instead, a convention that spread throughout the Commodore community and is now used in the VICE emulator.
In MySQL, the & operator has dual roles. As well as serving as a logical AND, it is the bitwise operator for the intersection between elements (a brief Python illustration of this kind of dual role follows this list).
Dyalog APL uses ampersand similarly to Unix shells, spawning a separate green thread upon application of a function.
In more recent years, the ampersand has made its way into the Haskell standard library, representing flipped function application: x & f means the same thing as f x.
Perl uses the ampersand as a sigil to refer to subroutines:
In Perl 4 and earlier, it was effectively required to call user-defined subroutines
In Perl 5, it can still be used to modify the way user-defined subroutines are called
In Raku (formerly known as Perl 6), the ampersand sigil is only used when referring to a subroutine as an object, never when calling it
In MASM 80x86 Assembly Language, is the Substitution Operator, which tells the assembler to replace a macro parameter or text macro name with its actual value.
Ampersand is the name of a reactive programming language, which uses relation algebra to specify information systems.
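As a brief illustration of the dual bitwise/intersection role mentioned for MySQL above (shown here in Python rather than SQL, purely for comparison):
# Illustration only: in Python, & is bitwise AND on integers and intersection on sets.
print(0b1100 & 0b0110)          # 4 (binary 0100)
print({1, 2, 3} & {2, 3, 4})    # {2, 3}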
Text markup
In SGML, XML, and HTML, the ampersand is used to introduce an SGML entity, such as &nbsp; (for a non-breaking space) or &alpha; (for the Greek letter α). The HTML and XML encoding for the ampersand character itself is the entity &amp;. This can create a problem known as delimiter collision when converting text into one of these markup languages. For instance, when putting URLs or other material containing ampersands into XML-format files such as RSS files, the & must be replaced with &amp;, or the files are considered not well formed and computers will be unable to read them correctly. SGML derived the use from IBM Generalized Markup Language, which was one of many IBM-mainframe languages to use the ampersand to signal a text substitution, eventually going back to System/360 macro assembly language.
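A minimal sketch using Python's standard library (the example string is invented) of escaping a raw ampersand into the &amp; entity before embedding text in HTML or XML:
import html
raw = "Johnson & Johnson"                 # example text containing a raw ampersand
escaped = html.escape(raw)
print(escaped)                            # Johnson &amp; Johnson
print(html.unescape(escaped) == raw)      # True: the round trip recovers the original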
In the plain TeX markup language, the ampersand is used to mark tabstops. The ampersand character itself can be produced in TeX with \&. The Computer Modern fonts replace it with an "E.T." symbol in the text italic fonts, so the italic form can be used in running text when using the default (Computer Modern) fonts.
In Microsoft Windows menus, labels, and other captions, the ampersand is used to denote the next letter as a keyboard shortcut (called an "access key" by Microsoft). For instance, setting a button label to &Print makes it display as Print with the P underlined, and pressing Alt+P becomes a shortcut equivalent to pressing that button. A double ampersand is needed in order to display a real ampersand. This convention originated in the first Win32 API, and is used in Windows Forms (but not WPF, which uses the underscore for this purpose) and is also copied into many other toolkits on multiple operating systems. Sometimes this causes problems similar to other programs that fail to sanitize markup from user input; for instance, Navision databases have trouble if this character is in either "Text" or "Code" fields.
Unix shells
Some Unix shells use the ampersand as a metacharacter:
Some Unix shells, like the POSIX standard sh shell, use an ampersand to execute a process in the background and to duplicate file descriptors.
In Bash, the ampersand can separate words, control the command history, duplicate file descriptors, perform logical operations, control jobs, and participate in regular expressions.
Web standards
The generic URL (Uniform Resource Locator) syntax allows for a query string to be appended to a file name in a web address so that additional information can be passed to a script; the question mark, or query mark, ?, is used to indicate the start of a query string. A query string is usually made up of a number of different name–value pairs, each separated by the ampersand symbol, &.
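A small standard-library sketch (parameter names and URL invented for the example) of building a query string whose name–value pairs are joined by ampersands:
from urllib.parse import urlencode
# Illustration only: the keys, values, and base URL below are placeholders.
query = urlencode({"title": "Ampersand", "action": "history"})
print(query)                                     # title=Ampersand&action=history
print("https://example.org/w/index.php?" + query)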
Typeface samples
Notes
See also
And (disambiguation)
List of typographical symbols and punctuation marks
Kai (abbreviation)
Heta
Tironian notes
References
External links
The Hot Word at Dictionary.com: How ampersand came from a misunderstanding
"Ask the Editor: Ampersand", video at Merriam-Webster.com (2:01). Retrieved 2013-10-18
Font of 52 ampersands, designed by Frederic Goudy
Latin-script letters
Latin-script ligatures
Logic symbols
Typographical symbols
Graphemes
Punctuation | Ampersand | [
"Mathematics"
] | 2,896 | [
"Typographical symbols",
"Symbols",
"Mathematical symbols",
"Logic symbols"
] |
59,155 | https://en.wikipedia.org/wiki/Bullet%20%28typography%29 | In typography, a bullet or bullet point, •, is a typographical symbol or glyph used to introduce items in a list. For example:
Red
Green
Blue
The bullet symbol may take any of a variety of shapes, such as circular, square, diamond or arrow. Typical word processor software offers a wide selection of shapes and colors. Several regular symbols, such as * (asterisk), - (hyphen), . (period), and even o (lowercase Latin letter O), are conventionally used in ASCII-only text or other environments where bullet characters are not available. Historically, the index symbol ☞ (representing a hand with a pointing index finger) was popular for similar uses.
Lists made with bullets are called bulleted lists. The HTML element name for a bulleted list is "unordered list", because the list items are not arranged in numerical order (as they would be in a numbered list).
"Bullet points"
Items—known as "bullet points"—may be short phrases, single sentences, or of paragraph length. Bulleted items are not usually terminated with a full stop unless they are complete sentences. In some cases, however, the style guide for a given publication may call for every item except the last one in each bulleted list to be terminated with a semicolon, and the last item with a full stop. It is correct to terminate any bullet point with a full stop if the text within that item consists of one full sentence or more. Bullet points are usually used to highlight list elements.
History
The 1950 New York News Type Book is credited as the first style guide to include a defined use for bullets. The Type Book described it as a typographic device to be used as an "Accessory" alongside asterisks, checks, and other marks available to people making advertisements for the News. The book "neither discusses the function of bullets in advertisements nor distinguishes them from any of the other items in the 'accessories' category", but can be seen to use them as a form of dinkus in an advertising panel.
Modern use
Example:
"Bullets are often used in technical writing, reference works, notes, and presentations". This statement may be presented using bullets or other techniques.
Bullets are often used in:
Technical writing
Reference works
Notes
Presentations
Alternatives to bulleted lists are numbered lists and outlines (lettered lists, hierarchical lists). They are used where either the order is important or to label the items for later referencing.
Other uses
The glyph is sometimes used as a way to hide passwords or confidential information; for example, all but the last few digits of a credit card number might be displayed as bullets.
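A one-off illustrative sketch (with a made-up number) of masking all but the last four digits with bullet characters:
# Illustration only: mask every digit except the last four with U+2022 bullets.
card = "4111111111111111"
print("\u2022" * (len(card) - 4) + card[-4:])   # ••••••••••••1111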
Bullet operator
A variant, the bullet operator (∙, U+2219), has a Unicode code point but its purpose does not appear to be documented. The glyph was transposed into Unicode from the original IBM PC character set, code page 437, where it had the code point F9 in hexadecimal (249 in decimal).
Computer usage
There have been different ways to encode bullet points in computer systems.
In historical systems
Glyphs such as • and ○, and their reversed variants ◘ and ◙, became available in text mode with early IBM PCs using MDA–CGA–EGA graphic adapters, because the built-in screen fonts contained such forms at code points 7–10. These were not true characters, because those code points belong to the C0 control-code range; therefore, these glyphs required a special way to be placed on the screen (see code page 437 for discussion).
Prior to the widespread use of word processors, bullets were often denoted by an asterisk; several word processors automatically convert asterisks to bullets if used at the start of line. This notation was inherited by Setext and wiki engines.
In Unicode
There are a variety of Unicode bullet characters, including:
for use in mathematical notation primarily as a dot product instead of interpunct.
; see Fleuron (typography)
; see Fleuron (typography)
used in Japan as a bullet, and called tainome.
In web pages
To create bulleted list items for a web page, the markup language HTML provides the unordered list element <ul>, with each item marked up as an <li> element. The browser will display one bulleted list item for each item in an unordered list.
In Windows
When using the US keyboard, a bullet point character can be produced by pressing 7 on the numpad while keeping Alt pressed.
In MacOS
When using the US keyboard, a bullet point character can be produced by pressing 8 while keeping Option(Alt) pressed.
In LaTeX
To create bulleted list items for a document, the markup language LaTeX provides the \item command. Each \item inside an itemize environment will generate one bulleted list item.
Wiki markup
A list item on a wiki page is indicated using one or more leading asterisks in wiki markup as well as in many other wikis.
Other uses in computing
The bullet is often used for separating menu items, usually in the footer menu. It is common, for example, to see it in latest website designs and in many WordPress themes. It is also used by text editors, like Microsoft Word, to create lists.
Notes
References
Further reading
External links
Punctuation
Typographical symbols | Bullet (typography) | [
"Mathematics"
] | 1,089 | [
"Symbols",
"Typographical symbols"
] |
59,156 | https://en.wikipedia.org/wiki/Dry%20ice | Dry ice is the solid form of carbon dioxide. It is commonly used for temporary refrigeration, as CO2 does not have a liquid state at normal atmospheric pressure and sublimes directly from the solid state to the gas state. It is used primarily as a cooling agent, but is also used in fog machines at theatres for dramatic effects. Its advantages include a lower temperature than that of water ice and not leaving any residue (other than incidental frost from moisture in the atmosphere). It is useful for preserving frozen foods (such as ice cream) where mechanical cooling is unavailable.
Dry ice sublimes at about −78.5 °C (−109 °F) at Earth atmospheric pressure. This extreme cold makes the solid dangerous to handle without protection from frostbite injury. While generally not very toxic, the outgassing from it can cause hypercapnia (abnormally elevated carbon dioxide levels in the blood) due to buildup in confined locations.
Properties
Dry ice is the solid form of carbon dioxide (CO2), a molecule consisting of a single carbon atom bonded to two oxygen atoms. Dry ice is colorless, odorless, and non-flammable, and can lower the pH of a solution when dissolved in water, forming carbonic acid (H2CO3).
At pressures below 5.13 atm and temperatures below −56.4 °C (the triple point), CO2 changes from a solid to a gas with no intervening liquid form, through a process called sublimation. The opposite process is called deposition, where CO2 changes from the gas to the solid phase (dry ice). At atmospheric pressure, sublimation/deposition occurs at about −78.5 °C (−109 °F).
The density of dry ice increases with decreasing temperature. The low temperature and direct sublimation to a gas make dry ice an effective coolant, since it is colder than water ice and leaves no residue as it changes state. Its enthalpy of sublimation is 571 kJ/kg (25.2 kJ/mol, 136.5 calorie/g).
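As a rough, back-of-the-envelope illustration of the quoted sublimation enthalpy (the cooling task and the neglect of losses are assumptions of the example, not figures from the article):
# Rough estimate: dry ice needed to cool 1 kg of water from 20 °C to 0 °C,
# ignoring losses and the extra cooling from the cold CO2 gas produced.
heat_to_remove = 1.0 * 4.18 * 20          # kg × specific heat of water in kJ/(kg·K) × K ≈ 83.6 kJ
dry_ice_mass = heat_to_remove / 571.0     # divide by the enthalpy of sublimation in kJ/kg
print(round(dry_ice_mass, 3))             # ≈ 0.146 kg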
Dry ice is non-polar, with a dipole moment of zero, so attractive intermolecular van der Waals forces operate. The composition results in low thermal and electrical conductivity.
History
It is generally accepted that dry ice was first observed in 1835 by French inventor Adrien-Jean-Pierre Thilorier (1790–1844), who published the first account of the substance. In his experiments, he noted that when opening the lid of a large cylinder containing liquid carbon dioxide, most of the liquid carbon dioxide quickly evaporated. This left only solid dry ice in the container. In 1924, Thomas B. Slate applied for a US patent to sell dry ice commercially. Subsequently, he became the first to make dry ice successful as an industry. In 1925, this solid form of CO2 was trademarked by the DryIce Corporation of America as "Dry ice", leading to its common name. That same year the DryIce Co. sold the substance commercially for the first time, marketing it for refrigeration purposes.
Manufacture
Dry ice is easily manufactured. The most common industrial method of manufacturing dry ice starts with a gas having a high concentration of carbon dioxide. Such gases can be a byproduct of another process, such as producing ammonia from nitrogen and natural gas, oil refinery activities or large-scale fermentation. Second, the carbon dioxide-rich gas is pressurized and refrigerated until it liquefies. Next, the pressure is reduced. When this occurs some liquid carbon dioxide vaporizes, causing a rapid lowering of temperature of the remaining liquid. As a result, the extreme cold causes the liquid to solidify into a snow-like consistency. Finally, the snow-like solid carbon dioxide is compressed into small pellets or larger blocks of dry ice.
Dry ice is typically produced in three standard forms: large blocks, small cylindrical pellets, and tiny, high-surface-to-volume cylindrical pellets that float on oil or water and do not stick to skin because of their high radii of curvature. Tiny dry ice pellets are used primarily for dry ice blasting, quick freezing, fire fighting and oil solidifying, and have been found to be safe for experimentation by middle school students wearing appropriate personal protective equipment such as gloves and safety glasses. A standard block covered in a taped paper wrapping is most common. These are commonly used in shipping, because they sublime relatively slowly due to a low ratio of surface area to volume. Pellets are small in diameter and can be bagged easily. This form is suited to small-scale use, for example at grocery stores and laboratories, where it is stored in a thickly insulated chest. The density of pellets is 60–70% of the density of blocks.
Dry ice is also produced as a byproduct of cryogenic air separation, an industry primarily concerned with manufacturing extremely cold liquids such as liquid nitrogen and liquid oxygen. In this process, carbon dioxide liquefies or freezes at a far higher temperature compared to that needed to liquefy nitrogen and oxygen. The carbon dioxide must be removed during the process to prevent dry ice from fouling the equipment, and once separated can be processed into commercial dry ice in a manner similar to that described above.
Applications
Commercial
The most common use of dry ice is to preserve food, using non-cyclic refrigeration.
It is frequently used to package items that must remain cold or frozen, such as ice cream or biological samples, in the absence of availability or practicality of mechanical cooling.
Dry ice is critical in the deployment of some vaccines, which require storage at ultra-cold temperatures along their supply line.
Dry ice can be used to flash-freeze food or laboratory biological samples, carbonate beverages, make ice cream, solidify oil spills and stop ice sculptures and ice walls from melting.
Dry ice can be used to arrest and prevent insect activity in closed containers of grains and grain products, as it displaces oxygen, but does not alter the taste or quality of foods. For the same reason, it can prevent or retard food oils and fats from becoming rancid.
When dry ice is placed in water, sublimation is accelerated, and low-sinking, dense clouds of smoke-like fog are created. This is used in fog machines, at theatres, haunted house attractions, and nightclubs for dramatic effects. Unlike most artificial fog machines, in which fog rises like smoke, fog from dry ice hovers near the ground. Dry ice is useful in theatre productions that require dense fog effects. The fog originates from the bulk water into which the dry ice is placed, and not from atmospheric water vapor (as is commonly assumed).
It is occasionally used to freeze and remove warts. However, liquid nitrogen performs better in this role, as it is colder, thereby requiring less time to act, and needs less pressure to store. Dry ice has fewer problems with storage, since it can be generated from compressed carbon dioxide gas as needed.
In plumbing, dry ice is used to cut off water flow to pipes to allow repairs to be made without shutting off water mains. Pressurised liquid CO2 is forced into a jacket wrapped around a pipe, which in turn causes the water inside to freeze and block the pipe. When the repairs are done, the jacket is removed and the ice plug melts, allowing the flow to resume. This technique can be used on pipes up to 4 inches or 100 mm in diameter.
Dry ice can be used as bait to trap mosquitoes, bedbugs, and other insects, due to their attraction to carbon dioxide.
It can be used to exterminate rodents. This is done by dropping pellets into rodent tunnels in the ground and then sealing off the entrance, thus suffocating the animals as the dry ice sublimates.
Tiny dry ice pellets can be used to fight fire by both cooling fuel and suffocating the fire by excluding oxygen.
The extreme temperature of dry ice can cause viscoelastic materials to change to glass phase. Thus, it is useful for removing many types of pressure sensitive adhesives.
Industrial
Dry ice can be used for loosening asphalt floor tiles or car sound deadening material, making them easy to prise off, as well as freezing water in valveless pipes to enable repair.
One of the largest mechanical uses of dry ice is blast cleaning. Dry ice pellets are shot from a nozzle with compressed air, combining the power of the speed of the pellets with the action of the sublimation. This can remove residues from industrial equipment. Examples of materials removed include ink, glue, oil, paint, mold and rubber. Dry ice blasting can replace sandblasting, steam blasting, water blasting or solvent blasting. The primary environmental residue of dry ice blasting is the sublimed CO2, thus making it a useful technique where residues from other blasting techniques are undesirable. Recently, blast cleaning has been introduced as a method of removing smoke damage from structures after fires.
Dry ice is also useful for the de-gassing of flammable vapours from storage tanks — the sublimation of dry ice pellets inside an emptied and vented tank causes an outrush of CO2 that carries with it the flammable vapours.
The removal and fitting of cylinder liners in large engines requires the use of dry ice to chill and thus shrink the liner so that it freely slides into the engine block. When the liner then warms up, it expands, and the resulting interference fit holds it tightly in place. Similar procedures may be used in fabricating mechanical assemblies with a high resultant strength, replacing the need for pins, keys or welds.
Dry ice has found its application in construction for freezing soil, serving as an effective alternative to liquid nitrogen. This method reduces the soil temperature to approximately -70 to -74 °C, rapidly freezing the groundwater. As a result, the soil's strength and impermeability significantly increase, which is essential for the safe execution of underground construction projects.
It is also useful as a cutting fluid.
Scientific
In laboratories, a slurry of dry ice in an organic solvent is a useful freezing mixture for cold chemical reactions and for condensing solvents in rotary evaporators. Dry ice and acetone form a cold bath of −78 °C, which can be used, for instance, to prevent thermal runaway in a Swern oxidation.
The process of altering cloud precipitation can be done with the use of dry ice. It was widely used in experiments in the US in the 1950s and early 1960s before it was replaced by silver iodide. Dry ice has the advantage of being relatively cheap and completely non-toxic. Its main drawback is the need to be delivered directly into the supercooled region of clouds being seeded.
Dry ice bombs
A "dry ice bomb" is a balloon-like device using dry ice in a sealed container such as a plastic bottle. Water is usually added to accelerate the sublimation of the dry ice. As the dry ice sublimes, pressure increases, causing the bottle to burst with a loud noise. The screw cap can be replaced with a rubber stopper to make a water rocket.
The dry ice bomb device was featured on MythBusters, episode 57 Mentos and Soda, which first aired on August 9, 2006. It was also featured in an episode of Time Warp, as well as in an episode of Archer.
Extraterrestrial occurrence
Following the Mars flyby of the Mariner 4 spacecraft in 1965, scientists concluded that Mars' polar caps consist entirely of dry ice. However, findings made in 2003 by researchers at the California Institute of Technology have shown that Mars' polar caps are almost completely made of water ice, and that dry ice only forms a thin surface layer that thickens and thins seasonally. A phenomenon named dry ice storms has been proposed to occur over the polar regions of Mars. They are comparable to Earth's thunderstorms, with crystalline CO2 taking the place of water in the clouds. Dry ice is also proposed as a mechanism for the geysers on Mars.
In 2012, the European Space Agency's Venus Express probe detected a cold layer in the atmosphere of Venus where temperatures are close to the triple point of carbon dioxide and it is possible that flakes of dry ice precipitate.
Observations from the Uranus flyby by Voyager 2 indicate that dry ice is present on the surface of its large moons Ariel, Umbriel and Titania. Scientists speculate that the magnetic field of Uranus contributes to the generation of CO2 ice on the surfaces of its moons. Voyager 2 observations of Neptune's moon Triton suggested the presence of dry ice on the surface, though follow-up observations indicate that the carbon ices on the surface are carbon monoxide, but that the moon's crust is composed of a significant quantity of dry ice.
Safety
Prolonged exposure to dry ice can cause severe skin damage through frostbite, and the fog produced may also hinder attempts to withdraw from contact in a safe manner. Because it sublimes into large quantities of carbon dioxide gas, which could pose a danger of hypercapnia, dry ice should only be exposed to open air in a well-ventilated environment, and in the context of laboratory safety it carries a corresponding precautionary label. Industrial dry ice may contain contaminants that make it unsafe for direct contact with food. Tiny dry ice pellets used in dry ice blast cleaning do not contain oily residues.
Dry ice is assigned a UN number, a code for hazardous substances: UN 1845. Dry ice is not classified as a dangerous substance by the European Union, or as a hazardous material by the United States Department of Transportation for ground transportation. However, in the US, it is regulated as a dangerous good when shipped by air or water. International Air Transport Association (IATA) regulations require specific diamond-shaped black-and-white labelling to be placed on the package. The package must have adequate ventilation so that it will not rupture from pressure in the event that the dry ice begins to sublime in the packaging. The Federal Aviation Administration in the US allows airline passengers to carry up to 2.5 kg (5.5 lb) per person, either as checked baggage or carry-on baggage, when used to refrigerate perishables.
At least one person has been killed by carbon dioxide gas subliming off dry ice in coolers placed in a car. In 2020, three people were killed at a party in Moscow after 25 kg of dry ice was dumped in a pool; carbon dioxide is heavier than air, and so can linger near the ground, just above water level.
Drink
Dry ice is sometimes used to give a fog effect to cocktails. One bar patron who accidentally ingested pellets from a drink suffered severe burns to his esophagus, stomach, and duodenum, causing permanent problems with eating. Rapid sublimation could cause gas buildup that ruptures digestive organs or suffocation. Products that contain dry ice and prevent it from being accidentally ingested eliminate these risks while producing the desired fog effect.
Footnotes
References
General bibliography
External links
Articles containing video clips
Brands that became generic
Carbon dioxide
Coolants
Ice
Refrigerants | Dry ice | [
"Chemistry"
] | 3,110 | [
"Greenhouse gases",
"Carbon dioxide"
] |
59,161 | https://en.wikipedia.org/wiki/Dagger%20%28mark%29 | A dagger, obelisk, or obelus is a typographical mark that usually indicates a footnote if an asterisk has already been used. The symbol is also used to indicate death (of people) or extinction (of species or languages). It is one of the modern descendants of the obelus, a mark used historically by scholars as a critical or highlighting indicator in manuscripts. In older texts, it is called an obelisk.
A double dagger, or diesis, is a variant with two hilts and crossguards that usually marks a third footnote after the asterisk and dagger. The triple dagger is a variant with three crossguards and is used by medievalists to indicate another level of notation.
History
The dagger symbol originated from a variant of the obelus, originally depicted by a plain line or a line with one or two dots. It represented an iron roasting spit, a dart, or the sharp end of a javelin, symbolizing the skewering or cutting out of dubious matter.
The obelus is believed to have been invented by the Homeric scholar Zenodotus as one of a system of editorial symbols. They marked questionable or corrupt words or passages in manuscripts of the Homeric epics. The system was further refined by his student Aristophanes of Byzantium, who first introduced the asterisk and used a symbol resembling a for an obelus; and finally by Aristophanes' student, in turn, Aristarchus, from whom they earned the name of "Aristarchian symbols".
While the asterisk (asteriscus) was used for corrective additions, the obelus was used for corrective deletions of invalid reconstructions. It was used when non-attested words are reconstructed for the sake of argument only, implying that the author did not believe such a word or word form had ever existed. Some scholars used the obelus and various other critical symbols, in conjunction with a second symbol known as the metobelos ("end of obelus"), variously represented as two vertically arranged dots, a -like symbol, a mallet-like symbol, or a diagonal slash (with or without one or two dots). They indicated the end of a marked passage.
It was used much in the same way by later scholars to mark differences between various translations or versions of the Bible and other manuscripts. The early Christian Alexandrian scholar Origen ( AD) used it to indicate differences between different versions of the Old Testament in his Hexapla. Epiphanius of Salamis (c. 310–320 – 403) used both a horizontal slash or hook (with or without dots) and an upright and slightly slanting dagger to represent an obelus. St. Jerome (c. 347–420) used a simple horizontal slash for an obelus, but only for passages in the Old Testament. He describes the use of the asterisk and the dagger as: "an asterisk makes a light shine, the obelisk cuts and pierces".
Isidore of Seville (c. 560–636) described the use of the symbol as follows: "The obelus is appended to words or phrases uselessly repeated, or else where the passage involves a false reading, so that, like the arrow, it lays low the superfluous and makes the errors disappear ... The obelus accompanied by points is used when we do not know whether a passage should be suppressed or not."
Medieval scribes used the symbols extensively for critical markings of manuscripts. In addition to this, the dagger was also used in notations in early Christianity, to indicate a minor intermediate pause in the chanting of Psalms, equivalent to the quaver rest notation or the trope symbol in Hebrew cantillation. It also indicates a breath mark when reciting, along with the asterisk, and is thus frequently seen beside a comma.
In the 16th century, the printer and scholar Robert Estienne (also known as Stephanus in Latin and Stephens in English) used it to mark differences in the words or passages between different printed versions of the Greek New Testament (Textus Receptus).
Because of the variation in how the different forms of the obelus were used, there is some controversy as to which symbols can actually be considered an obelus. The symbol and its variant, the , are sometimes considered to be different from other obeli. The term 'obelus' may have referred strictly only to the horizontal slash and the dagger symbols.
Modern usage
The dagger usually indicates a footnote if an asterisk has already been used. A third footnote employs the double dagger. Additional footnotes are somewhat inconsistent and represented by a variety of symbols, e.g., parallels (‖), the section sign (§), and the pilcrow (¶), some of which were nonexistent in early modern typography. Partly because of this, superscript numerals have increasingly been used in modern literature in the place of these symbols, especially when several footnotes are required. Some texts use asterisks and daggers alongside superscripts, using the former for per-page footnotes and the latter for endnotes.
The dagger is also used to indicate death, extinction, or obsolescence. The asterisk and the dagger, when placed beside years, indicate year of birth and year of death respectively. This usage is particularly common in German. When placed immediately before or after a person's name, the dagger indicates that the person is deceased. In this usage, it is referred to as the "death dagger". Death-related usages include:
In biology, the dagger next to a taxon name indicates that the taxon is extinct.
In chemistry, the double dagger is used in chemical kinetics to indicate a short-lived transition state species.
In genealogy, the dagger is used traditionally to mark a death in genealogical records.
In chess notation, the dagger may be suffixed to a move to signify that the move resulted in a check, and a double dagger denotes checkmate. This is a stylistic variation on the more common + (plus sign) for a check and # (number sign) for checkmate.
In linguistics, the dagger placed after a language name indicates an extinct language.
In philology, the dagger indicates an obsolete form of a word or phrase. As language that has become obsolete in everyday use tends to live on elsewhere, the dagger can indicate language only occurring in poetical texts or "restricted to an archaic, literary style".
In the Oxford English Dictionary, the dagger symbol indicates an obsolete word.
Non-death usages include:
The asteroid 37 Fides, the last asteroid to be assigned an astronomical symbol before the practice faded, was assigned the dagger.
In Anglican chant pointing, the dagger indicates a verse to be sung to the second part of the chant.
In some early printed Bible translations, a dagger or double dagger indicates that a literal translation of a word or phrase is to be found in the margin.
In library cataloging, a double dagger delimits MARC subfields.
On a cricket scorecard or team list, the dagger indicates the team's wicket-keeper.
Some logicians use the dagger as an affirmation ('it is true that ...') operator.
The palochka is transliterated to a double dagger in the ISO 9 standard for converting Cyrillic to Latin.
In psychological statistics, the dagger indicates that a difference between two figures is not significant at the p < 0.05 level but is still considered a "trend" or worthy of note, commonly for a p-value between 0.05 and 0.1.
In mathematics and, more often, physics, a dagger denotes the Hermitian adjoint of an operator; for example, A† denotes the adjoint of A. This notation is sometimes replaced with an asterisk, especially in mathematics. An operator is said to be Hermitian if A† = A.
In textual criticism and in some editions of works written before the invention of printing, daggers enclose text that is believed not to be original.
While daggers are freely used in English-language texts, they are often avoided in other languages because of their similarity to the Christian cross.
Encoding
– used in Alexander John Ellis's "palaeotype" transliteration to indicate retracted pronunciation
– used in Alexander John Ellis's "palaeotype" transliteration to indicate advanced pronunciation
– used in Alexander John Ellis's "palaeotype" transliteration to indicate retroflex pronunciation
– A variant with three handles.
Typing the character
Single dagger:
In HTML: &dagger; or &#8224;
Windows:
MacOS:
Linux: it appears there is no compose key sequence, but you can type
Double dagger:
In HTML: &Dagger; or &#8225;
Windows:
MacOS:
Linux:
Visually similar symbols
The dagger should not be confused with the symbols , , or other cross symbols.
The double dagger should not be confused with the , or , or in IPA, or .
See also
Notes
References
Punctuation
Typographical symbols
Ancient Greek punctuation | Dagger (mark) | [
"Mathematics"
] | 1,857 | [
"Symbols",
"Typographical symbols"
] |
59,171 | https://en.wikipedia.org/wiki/Mumps | Mumps is a highly contagious viral disease caused by the mumps virus. Initial symptoms of mumps are non-specific and include fever, headache, malaise, muscle pain, and loss of appetite. These symptoms are usually followed by painful swelling around the side of the face (the parotid glands, called parotitis), which is the most common symptom of a mumps infection. Symptoms typically occur 16 to 18 days after exposure to the virus. About one-third of people with a mumps infection do not have any symptoms (asymptomatic).
Complications are rare but include deafness and a wide range of inflammatory conditions, of which inflammation of the testes, breasts, ovaries, pancreas, meninges, and brain are the most common. Viral meningitis can occur in 1/4 of people with mumps. Testicular inflammation may result in reduced fertility and, rarely, sterility.
Humans are the only natural hosts of the mumps virus. The mumps virus is an RNA virus in the family Paramyxoviridae. The virus is primarily transmitted by respiratory secretions such as droplets and saliva, as well as via direct contact with an infected person. Mumps is highly contagious and spreads easily in densely populated settings. Transmission can occur from one week before the onset of symptoms to eight days after. During infection, the virus first infects the upper respiratory tract. From there, it spreads to the salivary glands and lymph nodes. Infection of the lymph nodes leads to the presence of the virus in the blood, which spreads the virus throughout the body. In places where mumps is common, it can be diagnosed based on clinical presentation. In places where mumps is less common, however, laboratory diagnosis using antibody testing, viral cultures, or real-time reverse transcription polymerase chain reaction may be needed.
There is no specific treatment for mumps, so treatment is supportive and includes rest and pain relief. Mumps infection is usually self-limiting, coming to an end as the immune system clears the infection. Infection can be prevented with vaccination. The MMR vaccine is a safe and effective vaccine to prevent mumps infections and is used widely around the world. The MMR vaccine also protects against measles and rubella. The spread of the disease can also be prevented by isolating infected individuals.
Mumps historically has been a highly prevalent disease, commonly occurring in outbreaks in densely crowded spaces. In the absence of vaccination, infection normally occurs in childhood, most frequently at the ages of 5–9. Symptoms and complications are more common in males and more severe in adolescents and adults. Infection is most common in winter and spring in temperate climates, whereas no seasonality is observed in tropical regions. Written accounts of mumps have existed since ancient times, and the cause of mumps, the mumps virus, was discovered in 1934. By the 1970s, vaccines had been created to protect against infection, and countries that have adopted mumps vaccination have seen a near-elimination of the disease. In the 21st century, however, there has been a resurgence in the number of cases in many countries that vaccinate, primarily among adolescents and young adults, due to multiple factors such as waning vaccine immunity and opposition to vaccination.
Etymology
The word "mumps" was first attested circa 1600 and is the plural form of "mump", meaning "grimace", originally a verb meaning "to whine or mutter like a beggar". The disease was likely called mumps due to the swelling caused by mumps parotitis, reflecting its impact on facial expressions and the painful, difficult swallowing that it causes. "Mumps" was also used starting from the 17th century to mean "a fit of melancholy, sullenness, silent displeasure". Mumps is sometimes called "epidemic parotitis".
History
According to Chinese medical literature, mumps was recorded as far back as 640 B.C. The Greek physician Hippocrates documented an outbreak on the island of Thasos in approximately 410 B.C. and provided a fuller description of the disease in the first book of Epidemics in the Corpus Hippocraticum. In modern times, the disease was first described scientifically in 1790 by British physician Robert Hamilton in Transactions of the Royal Society of Edinburgh. During the First World War, mumps was one of the most debilitating diseases among soldiers. In 1934, the etiology of the disease, the mumps virus, was discovered by Claude D. Johnson and Ernest William Goodpasture. They found that rhesus macaques exposed to saliva taken from humans in the early stages of the disease developed mumps. Furthermore, they showed that mumps could then be transferred to children via filtered and sterilized, bacteria-less preparations of macerated monkey parotid tissue, showing that it was a viral disease.
In 1945, the mumps virus was isolated for the first time. Just a few years later, in 1948, an inactivated vaccine using killed viruses was invented. This vaccine provided only short-term immunity and was later discontinued. It was replaced in the 1970s with vaccines that have live but weakened viruses, which are more effective at providing long-term immunity than the inactivated vaccine. The first of these vaccines was Mumpsvax, licensed on 30 March 1967, which used the Jeryl Lynn strain. Maurice Hilleman created this vaccine using the strain taken from his five-year-old daughter, Jeryl Lynn. Mumpsvax was recommended for use in 1977, and the Jeryl Lynn strain continues to be used.
Hilleman worked to combine the attenuated mumps vaccines with the measles and rubella vaccines, creating the MMR-1 vaccine. In 1971, a newer version, MMR-2, was approved for use by the US Food and Drug Administration. In the 1980s, the benefit of multiple doses was recognized, so a two-dose immunization schedule was widely adopted. With MMR-2, four other MMR vaccines have been created since the 1960s: Triviraten, Morupar, Priorix, and Trimovax. Since the mid-2000s, two MMRV vaccines have been in use: Priorix-Tetra and ProQuad.
The United States began to vaccinate against mumps in the 1960s, with other countries following suit. From 1977 to 1985, 290 cases per 100,000 people were diagnosed each year worldwide. Although few countries recorded mumps cases after they began vaccination, those that did reported dramatic declines. From 1968 to 1982, cases declined by 97% in the U.S., and in Finland cases were reduced to less than one per 100,000 people per year, and a decline from 160 cases per 100,000 to 17 per 100,000 per year in England was observed from 1989 to 1995. By 2001, there had been a 99.9% reduction in the number of cases in the U.S. and similar near-elimination in other vaccinating countries.
In Japan in 1993, concerns over the rates of aseptic meningitis following MMR vaccination with the Urabe strain prompted the removal of MMR vaccines from the national immunization program, resulting in a dramatic increase in the number of cases. Japan provides voluntary mumps vaccination separately from measles and rubella. Starting in the mid-1990s, controversies surrounding the MMR vaccine emerged. One paper connected the MMR vaccine to Crohn's disease in 1995, and another in 1998 connected it to autism spectrum disorders and inflammatory bowel disease. These papers are now considered to be fraudulent and incorrect, and no association between the MMR vaccine and the aforementioned conditions has been identified. Despite this, their publication led to a significant decline in vaccination rates, ultimately causing measles, mumps, and rubella to reemerge in places with lowered vaccination rates.
Outbreaks in the 21st century include more than 300,000 cases in China in 2013 and more than 56,000 cases in England and Wales in 2004–2005. In the latter outbreak, most cases were reported in 15–24 year olds who were attending colleges and universities. This age group was thought to be vulnerable to infection because of the MMR vaccine controversies when they should have been vaccinated or MMR vaccine shortages that had also occurred at that time. Similar outbreaks in densely crowded environments have frequently occurred in many other countries, including the U.S., the Netherlands, Sweden, and Belgium.
Resurgence
In the 21st century, mumps has reemerged in many places that vaccinate against it, causing recurrent outbreaks. These outbreaks have largely affected adolescents and young adults in densely crowded spaces, such as schools, sports teams, religious gatherings, and the military, and it is expected that outbreaks will continue to occur. The cause of this reemergence is subject to debate, and various factors have been proposed, including waning immunity from vaccination, low vaccination rates, vaccine failure, and potential antigenic variation of the mumps virus.
Waning immunity from vaccines is likely the primary cause of the mumps resurgence. In the past, subclinical natural infections provided boosts to immunity similar to vaccines. As time went on with vaccine use, these asymptomatic infections declined in frequency, likely leading to a reduction in long-term immunity against mumps. With less long-term immunity, the effects of waning vaccine immunity became more prominent, and vaccinated individuals have frequently fallen ill from mumps. A third dose of the vaccine given in adolescence has been considered as a way to address this, and some studies support it. Other research indicates that a third dose may be useful only for providing short-term immunity in response to outbreaks, a use recommended for at-risk persons by the Advisory Committee on Immunization Practices of the Centers for Disease Control and Prevention.
Low vaccination rates have been implicated as the cause of some outbreaks in the UK, Canada, Sweden, and Japan, whereas outbreaks in other places, such as the U.S., the Czech Republic, and the Netherlands, have occurred mainly among the vaccinated. Compared to the measles and rubella vaccines, mumps vaccines appear to have a relatively high failure rate, varying depending on the vaccine strain. This has been addressed by providing two vaccine doses, supported by recent outbreaks among the vaccinated having primarily occurred among those who received only one dose. Lastly, certain mumps virus lineages are highly divergent genetically from vaccine strains, which may cause a mismatch between protection against vaccine strains and non-vaccine strains, though research is inconclusive on this matter.
Signs and symptoms
Common symptoms
The incubation period, the time between the start of an infection and when symptoms begin to show, is about 7–25 days, averaging 16–18 days. 20–40% of infections are asymptomatic or are restricted to mild respiratory symptoms, sometimes with a fever. Over the course of the disease, three distinct phases are recognized: prodromal, early acute, and established acute. The prodromal phase typically has non-specific, mild symptoms such as a low-grade fever, headache, malaise, muscle pain, loss of appetite, and sore throat. In the early acute phase, as the mumps virus spreads throughout the body, systemic symptoms emerge. Most commonly, parotitis occurs during this time period. During the established acute phase, orchitis, meningitis, and encephalitis may occur, and these conditions are responsible for the bulk of mumps morbidity.
The parotid glands are salivary glands situated on the sides of the mouth in front of the ears. Inflammation of them, called parotitis, is the most common mumps symptom and occurs in about 90% of symptomatic cases and 60–70% of total infections. During mumps parotitis, usually both the left and right parotid glands experience painful swelling, with unilateral swelling in a small percentage of cases. Parotitis occurs 2–3 weeks after exposure to the virus, within two days of developing symptoms, and usually lasts 2–3 days, but it may last as long as a week or longer.
In 90% of parotitis cases, swelling on one side is delayed rather than both sides swelling in unison. The parotid duct, which is the opening that provides saliva from the parotid glands to the mouth, may become red, swollen, and filled with fluid. Parotitis is usually preceded by local tenderness and occasionally earache. Other salivary glands, namely the submandibular and sublingual glands, may also swell. Inflammation of these glands is rarely the only symptom.
Complications
Outside of the salivary glands, inflammation of the testes, called orchitis, is the most common symptom of infection. Pain, swelling, and warmness of a testis appear usually 1–2 weeks after the onset of parotitis but can occur up to six weeks later. During mumps orchitis, the scrotum is tender and inflamed. It occurs in 10–40% of pubertal and post-pubertal males who contract mumps. Usually, mumps orchitis affects only one testis but in 10–30% of cases both are affected. Mumps orchitis is accompanied by inflammation of the epididymis, called epididymitis, about 85% of the time, typically occurring before orchitis. The onset of mumps orchitis is associated with a high-grade fever, vomiting, headache, and malaise. In prepubertal males, orchitis is rare as symptoms are usually restricted to parotitis.
A variety of other inflammatory conditions may also occur as a result of mumps virus infection, including:
Mastitis, inflammation of the breasts, in up to about 30% of post-pubertal women
Oophoritis, inflammation of an ovary, in 5–10% of post-pubertal women, which usually presents as pelvic pain
Aseptic meningitis, inflammation of the meninges, in 5–10% of cases and 4–6% of those with parotitis, typically occurring 4–10 days after the onset of symptoms. Mumps meningitis can also occur up to one week before parotitis as well as in the absence of parotitis. It is commonly accompanied by fever, headache, vomiting, and neck stiffness.
Pancreatitis, inflammation of the pancreas, in about 4% of cases, which causes severe pain and tenderness in the upper abdomen below the ribs
Encephalitis, inflammation of the brain, in less than 0.5% of cases. People who experience mumps encephalitis typically experience a fever, altered consciousness, seizures, and weakness. Like meningitis, mumps encephalitis can occur in the absence of parotitis.
Meningoencephalitis, inflammation of the brain and its surrounding membranes. Mumps meningoencephalitis is commonly accompanied by fever 97% of the time, vomiting 94% of the time, and headache 88.8% of the time.
Nephritis, inflammation of the kidneys, which is rare because kidney involvement in mumps is usually benign but leads to presence of the virus in urine
Inflammation of the joints (arthritis), which may involve five or more joints (polyarthritis); of multiple nerves in the peripheral nervous system (polyneuritis); of the lungs (pneumonia); of the gallbladder without gallstones (acalculous cholecystitis); of the cornea and uveal tract (keratouveitis); of the thyroid (thyroiditis); of the liver (hepatitis); of the retina (retinitis); and of the corneal endothelium (corneal endothelitis), all of which are rare
Recurrent sialadenitis, inflammation of the salivary glands, which is frequent
A relatively common complication is deafness, which occurs in about 4% of cases. Mumps deafness is often accompanied by vestibular symptoms such as vertigo and repetitive, uncontrolled eye movements. Based on electrocardiographic abnormalities in the infected, MuV also likely infects cardiac tissue, but this is usually asymptomatic. Rarely, myocarditis and pericarditis can occur. Fluid buildup in the brain, called hydrocephalus, has also been observed. In the first trimester of pregnancy, mumps may increase the risk of miscarriage. Otherwise, mumps is not associated with birth defects.
Other rare complications of infection include: paralysis, seizures, cranial nerve palsies, cerebellar ataxia, transverse myelitis, ascending polyradiculitis, a polio-like disease, arthropathy, autoimmune hemolytic anemia, idiopathic thrombocytopenic purpura, Guillain–Barré syndrome, post-infectious encephalitis, encephalomyelitis, and hemophagocytic syndrome. At least one complication occurs in combination with the standard mumps symptoms in up to 42% of cases. Mumps has also been connected to the onset of type 1 diabetes, and, relatedly, the mumps virus is able to infect and replicate in insulin-producing beta cells. Among children, seizures occur in about 20–30% of cases involving the central nervous system.
Cause
Mumps is caused by the mumps virus (MuV), scientific name Mumps orthorubulavirus, which belongs to the Orthorubulavirus genus in the Paramyxoviridae family of viruses. Humans are the only natural host of the mumps virus. MuV's genome is made of RNA and contains seven genes that encode nine proteins. In MuV particles, the genome is encased by a helical capsid. The capsid is surrounded by a viral envelope that has spikes protruding from its surface. MuV particles are pleomorphic in shape and range from 100 to 600 nanometers in diameter.
The replication cycle of MuV begins when the spikes on its surface bond to a cell, which then causes the envelope to fuse with the host cell's cell membrane, releasing the capsid into the host cell's cytoplasm. Upon entry, the viral RNA-dependent RNA polymerase (RdRp) transcribes messenger RNA (mRNA) from the genome, which is then translated by the host cell's ribosomes to synthesize viral proteins. RdRp then begins replicating the viral genome to produce progeny. Viral spike proteins fuse into the host cell's membrane, and new virions are formed at the sites beneath the spikes. MuV then utilizes host cell proteins to leave the host cell by budding from its surface, using the host cell's membrane as the viral envelope.
Twelve genotypes of MuV are recognized, named genotypes A to N, excluding E and M. These genotypes vary in frequency from region to region. For example, genotypes C, D, H, and J are more common in the western hemisphere, whereas genotypes F, G, and I are more common in Asia, although genotype G is considered to be a global genotype. Genotypes A and B have not been observed in the wild since the 1990s. MuV has just one serotype, so antibodies to one genotype are functional against all genotypes. MuV is a relatively stable virus and is unlikely to experience antigenic shifting that may cause new strains to emerge.
Transmission
The mumps virus is mainly transmitted by inhalation or oral contact with respiratory droplets or secretions. In experiments, mumps could develop after inoculation either via the mouth or the nose. Respiratory transmission is also supported by the presence of MuV in cases of respiratory illness without parotitis, detection in nasal samples, and transmission between people in close contact. MuV is excreted in saliva from approximately one week before to eight days after the onset of symptoms, peaking at the onset of parotitis, though it has also been identified in the saliva of asymptomatic individuals.
Mother-to-child transmission has been observed in various forms. In non-human primates, placental transmission has been observed, which is supported by the isolation of MuV from spontaneous and planned aborted fetuses during maternal mumps. MuV has also been isolated from newborns whose mother was infected. While MuV has been detected in breast milk, it is unclear if the virus can be transmitted through it. Other manners of transmission include direct contact with infected droplets or saliva, fomites contaminated by saliva and possibly urine. Most transmissions likely occur before the development of symptoms and up to five days after such time.
In susceptible populations, a single case can cause up to twelve new ones. The period when a person is contagious lasts from two days before the onset of symptoms to nine days after symptoms have ceased. Asymptomatic carriers of the mumps virus can also transmit the virus. These factors are thought to be reasons why controlling the spread of mumps is difficult. Furthermore, reinfection can occur after natural infection or vaccination, indicating that lifelong immunity is not guaranteed after infection. Vaccinated individuals who are infected appear to be less contagious than the unvaccinated.
The average number of new cases generated from a single case in a susceptible population, called the basic reproduction number, is 4–7. Given this, it is estimated that a vaccination rate between 79 and 100% is needed to achieve herd immunity. Outbreaks continue to occur in places that have vaccination rates exceeding 90%, however, suggesting that other factors may influence disease transmission. Outbreaks that have occurred in these vaccinated communities typically occur in highly crowded areas such as schools and military dormitories.
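As a point of reference, and using a standard epidemiological rule of thumb rather than a figure stated above, the herd-immunity threshold for a perfectly protective vaccine is 1 − 1/R0; for a basic reproduction number of 4–7 this works out to roughly 75–86% of the population. When vaccine effectiveness E is below 100%, the required vaccination coverage rises to approximately (1 − 1/R0)/E, which is consistent with the 79–100% range quoted above.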
Pathogenesis
Many aspects of the pathogenesis of mumps are poorly understood and are inferred from clinical observations and experimental infections in laboratory animals. These animal studies may be unreliable due to unnatural methods of inoculation. Following exposure, the virus infects epithelial cells in the upper respiratory tract that express sialic acid receptors on their surface. After infection, the virus spreads to the parotid glands, causing the signature parotitis. It is thought that shortly after infection the virus spreads to the lymph nodes, in particular infecting T-cells, which leads to the presence of the virus in the blood, called viremia. Viremia lasts for 7–10 days, during which MuV spreads throughout the body.
In mumps orchitis, infection leads to: parenchymal edema; congestion, or separation, of the seminiferous tubules; and perivascular infiltration by lymphocytes. The tunica albuginea forms a barrier against edema, causing an increase in intratesticular pressure that causes necrosis of the seminiferous tubules. The seminiferous tubules also experience hyalinization, i.e. degeneration into a translucent glass-like substance, which can cause fibrosis and atrophy of the testes.
In up to half of cases, MuV infiltrates the central nervous system (CNS), where it may cause meningitis, encephalitis, or hydrocephalus. Mumps is rarely fatal, so few post-mortem analyses have been done to analyze CNS involvement. Of these, fluid buildup, congestion, and hemorrhaging in the brain, white blood cell infiltration in the perivascular spaces in the brain, reactive changes to glial cells and damage to the myelin sheaths surrounding neurons were observed. Neurons appear to be relatively unaffected.
In laboratory tests on rodents, MuV appears to enter the CNS first through cerebrospinal fluid (CSF), then spreading to the ventricular system. There, MuV replicates in ependymal cells that line the ventricles, which allows the virus to enter the brain parenchyma. This often leads to MuV infecting pyramidal cells in the cerebral cortex and hippocampus. Infected ependymal cells become inflamed, lose their cilia, and collapse into CSF, which may be the cause of the narrowing of the cerebral aqueduct thought to cause mumps hydrocephalus.
In humans, mumps hydrocephalus may be due to obstruction of the cerebral aqueduct with dilatation of the lateral and third ventricles, obstruction of the interventricular foramina, or obstruction of the median and lateral apertures. Ependymal cells have been isolated from CSF of mumps patients, suggesting that animals and humans share hydrocephalus pathogenesis. Hydrocephalus has also been observed in the absence of canal obstruction, however, indicating that obstruction may be a result of external compression by edematous tissue and not related to hydrocephalus.
Deafness from mumps may be caused by MuV infection in CSF, which has contact with the perilymph of the inner ear, possibly leading to infection of the cochlea, or it may occur as a result of inner ear infection via viremia that leads to inflammation in the endolymph. Hearing loss may also be caused indirectly by the immune response. In animal studies, MuV has been isolated from the vestibular ganglion, which may explain vestibular symptoms such as vertigo that often co-occur with deafness.
Immune response
Even though MuV has just one serotype, significant variation in the quantity of genotype-specific sera needed to neutralize different genotypes in vitro has been observed. Neutralizing antibodies in the salivary glands may be important in restricting MuV replication and transmission via saliva, as the level of viral secretion in saliva inversely correlates to the quantity of MuV-specific IgA produced. The neutralizing ability of salivary IgA appears to be greater than serum IgG and IgM.
It has been proposed that symptomatic infections in the vaccinated may be because memory T lymphocytes generated as a result of vaccination may be necessary but insufficient for protection. The immune system in general appears to have a relatively weak response to the mumps virus, indicated by various measures: antibody production appears to be predominately directed toward non-neutralizing viral proteins, and there may be a low quantity of MuV-specific memory B lymphocytes. The amount of antibodies needed to confer immunity is unknown.
Diagnosis
In places where mumps is widespread, diagnosis can be made based on the development of parotitis and a history of exposure to someone with mumps. In places where mumps is less common because parotitis has other causes, laboratory diagnosis may be needed to verify mumps infection. A differential diagnosis may be used to compare symptoms to other diseases, including allergic reaction, mastoiditis, measles, and pediatric HIV infection and rubella. MuV can be isolated from saliva, blood, the nasopharynx, salivary ducts, and seminal fluid within one week of the onset of symptoms, as well as from cell cultures. In meningitis cases, MuV can be isolated from CSF. In CNS cases, a lumbar puncture may be used to rule out other potential causes, which shows normal opening pressure, more than ten leukocytes per cubic millimeter, elevated lymphocyte count in CSF, polymorphonuclear leukocytes up to 25% of the time, often a mildly elevated protein level, and a slightly reduced CSF glucose to blood glucose ratio up to 30% of the time.
Mumps-specific IgM antibodies in serum or oral fluid specimens can be used to identify mumps. IgM quantities peak up to eight days after the onset of symptoms, and IgM can be measured by enzyme-linked immunosorbent assays (ELISA) 7–10 days after the onset of symptoms. The sensitivity of IgM testing is variable, ranging from as low as 24–51% to 75% in the first week and 100% thereafter. Throughout infection, IgM titers increase four-fold between the acute phase and recovery. False negatives can occur in people previously infected or vaccinated, in which case a rise of serum IgG may be more useful for diagnosis. False positives can occur after infection with parainfluenza viruses 1 and 3 and Newcastle disease virus, as well as recently after mumps vaccination.
Antibody titers can also be measured with complement fixation tests, hemagglutination assays, and neutralization tests. In vaccinated people, antibody-based diagnosis can be difficult since IgM oftentimes cannot be detected in acute-phase serum samples. In these instances, it is easier to identify MuV RNA from oral fluid, a throat swab, or urine. In meningitis cases, MuV-specific IgM can be found in CSF in half of cases, and IgG in 30–90%, sometimes lasting for more than a year with increased white blood cell count. These findings are not associated with an increased risk of long-term complications. Most parotitis cases have elevated white blood cell count in CSF.
Real-time reverse transcription polymerase chain reaction (rRT-PCR) can be used to detect MuV RNA from the first day symptoms appear, declining over the next 8–10 days. rRT-PCR of saliva is typically positive from 2–3 days before parotitis develops to 4–5 days after and has a sensitivity of about 70%. Since MuV replicates in kidneys, viral culture and RNA detection in urine can be used for diagnosis up to two weeks after symptoms begin, though rRT-PCR used to identify the virus in urine has a very low sensitivity compared to virus cultures at below 30%. In meningoencephalitis cases, a nested RT-PCR can detect MuV RNA in CSF up to two years after infection.
In sialadenitis cases, imaging shows an enlargement of the salivary glands, fat stranding, and thickening of the superficial cervical fascia and platysma muscles, which are situated on the front side of the neck. If parotitis occurs only on one side, then detection of mumps-specific IgM antibodies, IgG titer, or PCR is required for diagnosis. In cases of pancreatitis, there may be elevated levels of lipase or amylase, an enzyme found in saliva and the pancreas.
Mumps orchitis is usually diagnosed by white blood cell count, with normal differential white blood cell counts. A complete blood count can show above or below-average white blood cell count and an elevated C-reactive protein level. Urine analysis can exclude bacterial infections. If orchitis is present with normal urine analysis, negative urethral cultures, and negative midstream urine, then that can indicate mumps orchitis. Ultrasounds typically show diffuse hyper-vascularity, increased volume of the testes and epididymis, lower than usual ability to return ultrasound signals, swelling of the epididymis, and formation of hydroceles. Echo color Doppler ultrasound is more effective at detecting orchitis than ultrasound alone.
Prevention
Mumps is preventable with vaccination. Mumps vaccines use live attenuated viruses. Most countries include mumps vaccination in their immunization programs, and the MMR vaccine, which also protects against measles and rubella, is the most commonly used mumps vaccine. Mumps vaccination can also be done on its own and as a part of the MMRV vaccine, which also protects against measles, rubella, chickenpox, and shingles. More than 120 countries have adopted mumps vaccination, but coverage remains low in most African, South Asian, and Southeast Asian countries. In countries that have implemented mumps vaccination, significant declines in mumps cases and complications caused by infection such as encephalitis have been observed. Mumps vaccines are typically administered in early childhood, but may also be given in adolescence and adulthood if need be. Vaccination is expected to be capable of neutralizing wild-type MuVs, which are not included in the vaccine, since they do not appear to evade vaccine-derived immunity.
A variety of virus strains have been used in mumps vaccines, including the Jeryl Lynn (JL), Leningrad-3, Leningrad-3-Zagreb (L-Zagreb), Rubini, and Urabe AM9 strains. Some other less prominent strains exist that are typically confined to individual countries. These include the Hoshino, Miyahara, Torii, and NK M-46 strains that have been produced in Japan and the S-12 strain, which is used by Iran. Mild adverse reactions are relatively common, including fever and rash, but aseptic meningitis also occurs at varying rates. Other rare adverse reactions include meningoencephalitis, parotitis, deafness from inner ear damage, orchitis, and pancreatitis. Safety and effectiveness vary by vaccine strain:
Rubini is safe but because of its low effectiveness in outbreaks, its use has been abandoned.
JL is relatively safe and has a relatively high effectiveness. However, the effectiveness is significantly lower in outbreaks. A modified version of JL vaccines is RIT 4385, which is also considered safe.
Urabe and Leningrad-3 are both at least as effective as JL, but are less safe.
L-Zagreb, a modified version of Leningrad-3, is considered safe and effective, including in outbreaks.
Mumps protection from the MMR vaccine is higher after two doses than one and is estimated to be between 79% and 95%, lower than the degree of protection against measles and rubella. This, however, has still been sufficient to nearly eliminate mumps in countries that vaccinate against it as well as significantly reduce frequencies of complications among the vaccinated. If at least one dose is received, then hospitalization rates are reduced by an estimated 50% among the infected. Compared to the MMR vaccine, the MMRV vaccine appears to be less effective in terms of providing mumps protection. A difficulty in assessing vaccine effectiveness is that there is no clear correlate of immunity, so it is not possible to predict if a person has acquired immunity from the vaccine.
There is a lack of data on the effectiveness of a third dose of the MMR vaccine. In an outbreak in which a third dose was administered, it was unclear if it had any effect on reducing disease incidence, and it only appeared to boost antibodies in those who previously had little or no antibodies to mumps. Contraindications for mumps vaccines include prior allergic reaction to any ingredients or to neomycin, pregnancy, immunosuppression, a moderate or severe illness, having received a blood product recently, and, for MMRV vaccines specifically, a personal or familial history of seizures. It is also advised that women not become pregnant in the four weeks after MMR vaccination. No effective prophylaxis exists for mumps after one has been exposed to the virus, so vaccination or receiving immunoglobulin after exposure does not prevent progression to illness.
For people who are infected or suspected to be infected, isolation is important in preventing the spread of the disease. This includes abstaining from school, childcare, work, and other settings in which people gather together. In healthcare settings, it is recommended that healthcare workers use precautions such as face masks to reduce the likelihood of infection and to abstain from work if they develop mumps. Additional measures taken in health care facilities include reducing wait times for mumps patients, having mumps patients wear masks, and cleaning and disinfecting areas that mumps patients use. The virus can be inactivated using formalin, ether, chloroform, heat, or ultraviolet light.
Treatment
Mumps is usually self-limiting, and no specific antiviral treatments exist for it, so treatment is aimed at alleviating symptoms and preventing complications. Non-medicinal ways to manage the disease include bed rest, using ice or heat packs on the neck and scrotum, consuming more fluids, eating soft food, and gargling with warm salt water. Anti-fever medications may be used during the febrile period, excluding aspirin when given to children, which may cause Reye syndrome. Analgesics may also be provided to control pain from mumps inflammatory conditions. For seizures, anticonvulsants may be used. In severe neurological cases, ventilators may be used to support breathing.
Intramuscular mumps immunoglobulin may be of benefit when administered early in some cases, but it has not shown benefit in outbreaks. Although not recommended, intravenous immunoglobulin therapy may reduce the rates of some complications. Antibiotics may be used as a precaution in cases in which bacterial infection cannot be ruled out as well as to prevent secondary bacterial infection. Autoimmune-based disorders connected to mumps are treatable with intravenous immunoglobulin.
Various types of treatment for mumps orchitis have been used, but no specific treatment is recommended due to each method's limitations. These measures are primarily based on relieving testicular pain and reducing intratesticular pressure to reduce the likelihood of testicular atrophy. Interferon-α2α interferes with viral replication, so it has been postulated to be useful in preventing testicular damage and infertility. Interferon alfa-2b may reduce the duration of symptoms and incidence of complications. In cases of hydrocele formation, excess fluid can be removed.
Acupuncture has been used fairly widely in China to treat children who have mumps, however, no high-quality trials have been conducted to determine the safety or effectiveness of this treatment approach.
Prognosis
The prognosis for most people who experience mumps is excellent as long-term complications and death are rare. Hospitalization is typically not required. Mumps is usually self-limiting and symptoms resolve spontaneously within two weeks as the immune system clears the virus from the body. In high-risk groups such as immunocompromised persons, the prognosis is considered to be the same as for other groups. For most people, infection leads to lifelong immunity against future infection. Reinfections appear to be more mild and atypical than the first infection. The overall case-fatality rate of mumps is 1.6–3.8 people per 10,000, and these deaths typically occur in those who develop encephalitis.
Mumps orchitis typically resolves within two weeks. In 20% of cases, the testicles may be tender for a few more weeks. Atrophy, or reduction of size, of the involved testicle occurs in 30–50% of orchitis cases, which may lead to abnormalities in sperm creation and fertility such as low sperm count, absence of sperm in semen, reduced sperm motility, reduced fertility (hypofertility) in 13% of cases, and rarely sterility. Hypofertility can, however, occur in cases without atrophy. Abnormalities in sperm creation can persist for months to years after recovery from the initial infection, the length of which increases as the severity of orchitis increases. Examination of these cases shows decreased testicular volume, tenderness of the testicles, and a feeling of inconsistency when handling the testicles. Infertility is linked to severe cases of orchitis affecting both testes followed by testicular atrophy, which may develop up to one year after the initial infection. Of bilateral orchitis cases, 30–87% experience infertility. There is a weak association between orchitis and later development of epididymitis and testicular tumors.
Mumps meningitis typically resolves within 3–10 days without long-term complications. In meningoencephalitis cases, higher protein levels in CSF and a lower CSF glucose to blood glucose ratio are associated with longer periods of hospitalization. Approximately 1% of those whose CNS is affected die from mumps. Post-infectious encephalitis tends to be relatively mild, whereas post-infectious encephalomyelitis has a case-fatality rate of up to ten percent. Most cases of mumps deafness affect just one ear and are temporary, but permanent hearing loss occurs in 0.005% of infections. Myocarditis and pericarditis that occur as a result of mumps may lead to endocardial fibroelastosis, i.e. thickening of the endocardium. With extreme rarity, infertility and premature menopause have occurred as a result of mumps oophoritis.
Epidemiology
Clinical age and immunity
Mumps is found worldwide. In the absence of vaccination against mumps there are between 100 and 1,000 cases per 100,000 people each year, i.e. 0.1% to 1.0% of the population are infected each year. The number of cases peaks every 2–5 years, with incidence highest in children 5–9 years old. According to seroconversion surveys done before the start of mumps vaccination, a sharp increase in mumps antibody levels at age 2–3 was observed.
Furthermore, 50% of 4–6 year olds, 90% of 14–15 year olds, and 95% of adults had tested positive to prior exposure to mumps, indicating that nearly all people are eventually infected in unvaccinated populations.
Prior to the start of vaccination, mumps accounted for ten percent of meningitis cases and about a third of encephalitis cases. Worldwide, mumps is the most common cause of inflammation of the salivary glands. In children, mumps is the most common cause of deafness in one ear in cases when the inner ear is damaged. Asymptomatic infections are more common in adults, and the rate of asymptomatic infections is very high, up to two-thirds, in vaccinated populations. Mumps vaccination has the effect of increasing the average age of the infected in vaccinated populations that have not previously experienced a mumps outbreak. While infection rates appear to be the same in males and females, males appear to experience symptoms and complications, including neurological involvement, at a higher rate than females. Symptoms are more severe in adolescents and adults than in children.
Settings of outbreaks
It is common for outbreaks of mumps to occur. These outbreaks typically occur in crowded spaces where the virus can spread from person to person easily, such as schools, military barracks, prisons, and sports clubs. Since the introduction of vaccines, the frequency of mumps has declined dramatically, as have complications caused by mumps. The epidemiology in countries that vaccinate reflects the number of doses administered, age at vaccination, and vaccination rates. If vaccine coverage is insufficient, then herd immunity may be unobtainable and the average age of infection will increase, leading to an increase in the prevalence of complications. Risk factors include age, exposure to a person with mumps, compromised immunity, time of year, travel history, and vaccination status. Mumps vaccination is less common in developing countries, which consequently have higher rates of mumps.
Cases peak in different seasons of the year in different regions. In temperate climates, cases peak in winter and spring, whereas in tropical regions no seasonality is observed. Additional research has shown that mumps increases in frequency as temperature and humidity increase. The seasonality of mumps is thought to be caused by several factors: fluctuation in the human immune response due to seasonal factors, such as changes in melatonin levels; behavior and lifestyle changes, such as school attendance and indoor crowding; and meteorological factors such as changes in temperature, brightness, wind, and humidity.
References
External links
Pediatrics
Wikipedia medicine articles ready to translate
Salivary gland pathology
Wikipedia emergency medicine articles ready to translate
Vaccine-preventable diseases | Mumps | [
"Biology"
] | 9,255 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
59,202 | https://en.wikipedia.org/wiki/Object%E2%80%93relational%20mapping | Object–relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between a relational database and the memory (usually the heap) of an object-oriented programming language. This creates, in effect, a virtual object database that can be used from within the programming language.
In object-oriented programming, data-management tasks act on objects that combine scalar values into objects. For example, consider an address book entry that represents a single person along with zero or more phone numbers and zero or more addresses. This could be modeled in an object-oriented implementation by a "Person object" with an attribute/field to hold each data item that the entry comprises: the person's name, a list of phone numbers, and a list of addresses. The list of phone numbers would itself contain "PhoneNumber objects" and so on. Each such address-book entry is treated as a single object by the programming language (it can be referenced by a single variable containing a pointer to the object, for instance). Various methods can be associated with the object, such as methods to return the preferred phone number, the home address, and so on.
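A minimal sketch of how such an address-book entry might be modeled is shown below. The class, property, and method names are illustrative assumptions only and do not come from any particular ORM or library.

using System.Collections.Generic;
using System.Linq;

// Hypothetical address-book entry: one person with zero or more phone numbers
// and zero or more addresses, handled as a single object by the program.
public class PhoneNumber
{
    public string Number { get; set; } = "";
    public bool IsPreferred { get; set; }
}

public class Person
{
    public string Name { get; set; } = "";
    public List<PhoneNumber> PhoneNumbers { get; } = new List<PhoneNumber>();
    public List<string> Addresses { get; } = new List<string>();

    // A method associated with the object, for example returning the preferred phone number.
    public PhoneNumber GetPreferredPhoneNumber() =>
        PhoneNumbers.FirstOrDefault(p => p.IsPreferred) ?? PhoneNumbers.FirstOrDefault();
}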
By contrast, relational databases, such as SQL, group scalars into tuples, which are then enumerated in tables. Tuples and objects have some general similarity, in that they are both ways to collect values into named fields such that the whole collection can be manipulated as a single compound entity. They have many differences, though, in particular: lifecycle management (row insertion and deletion, versus garbage collection or reference counting), references to other entities (object references, versus foreign key references), and inheritance (non-existent in relational databases). As well, objects are managed on-heap and are under full control of a single process, while database tuples are shared and must incorporate locking, merging, and retry. Object–relational mapping provides automated support for mapping tuples to objects and back, while accounting for all of these differences.
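To make the mapping concrete, the following hand-written sketch converts one relational row into the Person object above; this is the kind of repetitive conversion code that an ORM generates or performs automatically. The column names ("name", "home_address") are assumptions made purely for illustration.

using System.Data;

// Manual tuple-to-object conversion of the kind an ORM automates.
public static class PersonMapping
{
    public static Person MapPerson(IDataRecord row)
    {
        var person = new Person { Name = (string)row["name"] };
        person.Addresses.Add((string)row["home_address"]);
        // Rows that reference this person through a foreign key (for example in a
        // phone_numbers table) would be loaded and attached as PhoneNumber objects
        // in the same way, and changed objects would be written back as UPDATEs.
        return person;
    }
}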
The heart of the problem involves translating the logical representation of the objects into an atomized form that is capable of being stored in the database while preserving the properties of the objects and their relationships so that they can be reloaded as objects when needed. If this storage and retrieval functionality is implemented, the objects are said to be persistent.
Overview
Implementation-specific details of storage drivers are generally wrapped in an API in the programming language in use, exposing methods to interact with the storage medium in a way which is simpler and more in line with the paradigms of surrounding code.
The following is a simple example, written in C#, that executes a query written in SQL using a database engine.
var sql = "SELECT id, first_name, last_name, phone, birth_date, sex, age FROM persons WHERE id = 10";
var result = context.Persons.FromSqlRaw(sql).ToList();
var name = result[0]["first_name"];
In contrast, the following makes use of an ORM API, which makes it possible to write code that naturally makes use of the features of the language.
var person = repository.GetPerson(10);
var firstName = person.GetFirstName();
The case above makes use of an object representing the storage repository and methods of that object. Other frameworks might provide code as static methods, as in the example below, while yet other frameworks may not implement an object-oriented system at all. Often the choice of paradigm is made for the best fit of the ORM into the surrounding language's design principles.
var person = Person.Get(10);
Comparison with traditional data access techniques
Compared to traditional techniques of exchange between an object-oriented language and a relational database, ORM often reduces the amount of code that needs to be written.
Disadvantages of ORM tools generally stem from the high level of abstraction obscuring what is actually happening in the implementation code. Also, heavy reliance on ORM software has been cited as a major factor in producing poorly designed databases.
Object-oriented databases
Another approach is to use an object-oriented database management system (OODBMS) or document-oriented databases such as native XML databases that provide more flexibility in data modeling. OODBMSs are databases designed specifically for working with object-oriented values. Using an OODBMS eliminates the need for converting data to and from its SQL form, as the data is stored in its original object representation and relationships are directly represented, rather than requiring join tables/operations. The equivalent of ORMs for document-oriented databases are called object-document mappers (ODMs).
Document-oriented databases also prevent the user from having to "shred" objects into table rows. Many of these systems also support the XQuery query language to retrieve datasets.
Object-oriented databases tend to be used in complex, niche applications. One of the arguments against using an OODBMS is that it may not be able to execute ad-hoc, application-independent queries. For this reason, many programmers find themselves more at home with an object-SQL mapping system, even though most object-oriented databases are able to process SQL queries to a limited extent. Other OODBMS provide replication to SQL databases, as a means of addressing the need for ad-hoc queries, while preserving well-known query patterns.
Challenges
A variety of difficulties arise when considering how to match an object system to a relational database. These difficulties are referred to as the object–relational impedance mismatch.
An alternative to implementing ORM is use of the native procedural languages provided with every major database. These can be called from the client using SQL statements. The Data Access Object (DAO) design pattern is used to abstract these statements and offer a lightweight object-oriented interface to the rest of the application.
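A minimal sketch of the DAO idea, assuming the hypothetical Person entity from the earlier sketch and hand-written SQL behind the interface; the IPersonDao name and its methods are illustrative, not part of any particular framework.
using Microsoft.Data.SqlClient; // or System.Data.SqlClient

// Hypothetical DAO interface: callers work with objects and methods, never with SQL.
public interface IPersonDao
{
    Person FindById(int id);
    void Delete(int id);
}

// One possible implementation wraps hand-written SQL (or calls to stored procedures
// written in the database's native procedural language).
public class SqlPersonDao : IPersonDao
{
    private readonly string _connectionString;

    public SqlPersonDao(string connectionString) => _connectionString = connectionString;

    public Person FindById(int id)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand(
            "SELECT id, first_name, last_name FROM persons WHERE id = @id", connection);
        command.Parameters.AddWithValue("@id", id);
        using var reader = command.ExecuteReader();
        if (!reader.Read()) return null;
        return new Person
        {
            Id = (int)reader["id"],
            FirstName = (string)reader["first_name"],
            LastName = (string)reader["last_name"]
        };
    }

    public void Delete(int id)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand("DELETE FROM persons WHERE id = @id", connection);
        command.Parameters.AddWithValue("@id", id);
        command.ExecuteNonQuery();
    }
}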
ORMs are limited to their predefined functionality, which may not cover every edge case or database feature. Most ORM frameworks mitigate this by providing an interface for writing raw queries when needed; Django's ORM, for example, allows raw SQL to be executed alongside its query API.
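As one concrete form such an escape hatch can take, the sketch below drops to raw SQL through Entity Framework Core's FromSqlRaw method; the context.Persons DbSet is an assumption carried over from the earlier examples, and the query itself is illustrative.
// Falling back to hand-written SQL for a query the ORM's own query API does not express well.
// Parameters are passed separately so the ORM can still guard against SQL injection.
// 'context' is assumed to be an EF Core DbContext with a Persons DbSet.
var adults = context.Persons
    .FromSqlRaw("SELECT * FROM persons WHERE age >= {0}", 18)
    .ToList();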
See also
List of object–relational mapping software
Comparison of object–relational mapping software
AutoFetch – automatic query tuning
Common Object Request Broker Architecture (CORBA)
Object database
Object persistence
Object–relational database
Object–relational impedance mismatch
Relational model
SQL (Structured Query Language)
Java Data Objects (JDO)
Java Persistence API (JPA), now Jakarta Persistence
Service Data Objects
Entity Framework
Active record pattern
Data mapper pattern
Single Table Inheritance
References
External links
About ORM by Anders Hejlsberg
Mapping Objects to Relational Databases: O/R Mapping In Detail by Scott W. Ambler
Data mapping
Articles with example C Sharp code | Object–relational mapping | [
"Engineering"
] | 1,399 | [
"Data engineering",
"Data mapping"
] |
59,211 | https://en.wikipedia.org/wiki/Altitude | Altitude is a distance measurement, usually in the vertical or "up" direction, between a reference datum and a point or object. The exact definition and reference datum varies according to the context (e.g., aviation, geometry, geographical survey, sport, or atmospheric pressure). Although the term altitude is commonly used to mean the height above sea level of a location, in geography the term elevation is often preferred for this usage.
In aviation, altitude is typically measured relative to mean sea level or above ground level to ensure safe navigation and flight operations. In geometry and geographical surveys, altitude helps create accurate topographic maps and understand the terrain's elevation. For high-altitude trekking and sports, knowing and adapting to altitude is vital for performance and safety. Higher altitudes mean reduced oxygen levels, which can lead to altitude sickness if proper acclimatization measures are not taken.
Vertical distance measurements in the "down" direction are commonly referred to as depth.
In aviation
The term altitude can have several meanings, and is always qualified by explicitly adding a modifier (e.g. "true altitude"), or implicitly through the context of the communication. Parties exchanging altitude information must be clear which definition is being used.
Aviation altitude is measured using either mean sea level (MSL) or local ground level (above ground level, or AGL) as the reference datum.
Pressure altitude in feet divided by 100 is the flight level, and is used above the transition altitude (18,000 feet in the US, but it may be as low as 3,000 feet in other jurisdictions). So when the altimeter reads 18,000 ft on the standard pressure setting, the aircraft is said to be at "flight level 180". When flying at a flight level, the altimeter is always set to standard pressure (29.92 inHg or 1013.25 hPa).
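As a rough illustration only (not for operational use), the following sketch converts a measured static pressure to a pressure altitude using the standard-atmosphere relationship and then to a flight level; the constants are the usual ICAO standard-atmosphere values and the example pressure is arbitrary.
using System;

class FlightLevelSketch
{
    // Pressure altitude in feet from static pressure in hPa, using the ISA relationship
    // valid below the tropopause.
    static double PressureAltitudeFeet(double pressureHpa) =>
        145366.45 * (1.0 - Math.Pow(pressureHpa / 1013.25, 0.190284));

    // Flight level is the pressure altitude in hundreds of feet.
    static int FlightLevel(double pressureAltitudeFeet) =>
        (int)Math.Round(pressureAltitudeFeet / 100.0);

    static void Main()
    {
        double staticPressureHpa = 506.0;                            // arbitrary example value
        double altitudeFt = PressureAltitudeFeet(staticPressureHpa); // roughly 18,000 ft
        Console.WriteLine($"Pressure altitude ≈ {altitudeFt:F0} ft (FL{FlightLevel(altitudeFt)})");
    }
}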
On the flight deck, the definitive instrument for measuring altitude is the pressure altimeter, which is an aneroid barometer with a front face indicating distance (feet or metres) instead of atmospheric pressure.
There are several types of altitude in aviation:
Indicated altitude is the reading on the altimeter when it is set to the local barometric pressure at mean sea level. In UK aviation radiotelephony usage, the vertical distance of a level, a point or an object considered as a point, measured from mean sea level; this is referred to over the radio as altitude (see QNH).
Absolute altitude is the vertical distance of the aircraft above the terrain over which it is flying. It can be measured using a radar altimeter (or "absolute altimeter"). Also referred to as "radar height" or feet/metres above ground level (AGL).
True altitude is the actual elevation above mean sea level. It is indicated altitude corrected for non-standard temperature and pressure.
Height is the vertical distance above a reference point, commonly the terrain elevation. In UK aviation radiotelephony usage, the vertical distance of a level, a point or an object considered as a point, measured from a specified datum; this is referred to over the radio as height, where the specified datum is the airfield elevation (see QFE)
Pressure altitude is the elevation above a standard datum air-pressure plane (typically, 1013.25 millibars or 29.92" Hg). Pressure altitude is used to indicate "flight level" which is the standard for altitude reporting in the U.S. in Class A airspace (above roughly 18,000 feet). Pressure altitude and indicated altitude are the same when the altimeter setting is 29.92" Hg or 1013.25 millibars.
Density altitude is the altitude corrected for conditions that deviate from the International Standard Atmosphere (ISA). Aircraft performance depends on density altitude, which is affected by barometric pressure, humidity and temperature. On a very hot day, density altitude at an airport (especially one at a high elevation) may be so high as to preclude takeoff, particularly for helicopters or a heavily loaded aircraft.
These types of altitude can be explained more simply as various ways of measuring the altitude:
Indicated altitude – the altitude shown on the altimeter.
Absolute altitude – altitude in terms of the distance above the ground directly below
True altitude – altitude in terms of elevation above sea level
Height – vertical distance above a certain point
Pressure altitude – the air pressure in terms of altitude in the International Standard Atmosphere
Density altitude – the density of the air expressed as the equivalent altitude in the International Standard Atmosphere
In satellite orbits
In atmospheric studies
Atmospheric layers
The Earth's atmosphere is divided into several altitude regions. These regions start and finish at varying heights depending on season and distance from the poles. The altitudes stated below are averages:
Troposphere: surface to about 8 km (26,000 ft) at the poles and about 18 km (59,000 ft) at the Equator, ending at the tropopause
Stratosphere: troposphere to about 50 km (31 mi)
Mesosphere: stratosphere to about 85 km (53 mi)
Thermosphere: mesosphere to roughly 700 km (430 mi)
Exosphere: thermosphere to roughly 10,000 km (6,200 mi)
The Kármán line, at an altitude of 100 km (62 mi) above sea level, by convention represents the demarcation between the atmosphere and space. The thermosphere and exosphere (along with the higher parts of the mesosphere) are regions of the atmosphere that are conventionally defined as space.
High altitude and low pressure
Regions on the Earth's surface (or in its atmosphere) that are high above mean sea level are referred to as high altitude. High altitude is sometimes defined to begin at 2,400 metres (8,000 ft) above sea level.
At high altitude, atmospheric pressure is lower than at sea level. This is due to two competing physical effects: gravity, which pulls the air as close as possible to the ground, and the heat content of the air, which causes the molecules to bounce off one another and expand.
Temperature profile
The temperature profile of the atmosphere is a result of an interaction between radiation and convection. Sunlight in the visible spectrum hits the ground and heats it. The ground then heats the air at the surface. If radiation were the only way to transfer heat from the ground to space, the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 °C), and the temperature would decay exponentially with height.
However, when air is hot, it tends to expand, which lowers its density. Thus, hot air tends to rise and transfer heat upward. This is the process of convection. Convection comes to equilibrium when a parcel of air at a given altitude has the same density as its surroundings. Air is a poor conductor of heat, so a parcel of air will rise and fall without exchanging heat. This is known as an adiabatic process, which has a characteristic pressure–temperature curve. As the pressure gets lower, the temperature decreases. The rate of decrease of temperature with elevation is known as the adiabatic lapse rate, which is approximately 9.8 °C per kilometer (5.4 °F per 1,000 feet) of altitude.
The presence of water in the atmosphere complicates the process of convection. Water vapor contains latent heat of vaporization. As air rises and cools, it eventually becomes saturated and can no longer hold all of its water vapor. The water vapor condenses (forming clouds) and releases heat, which changes the lapse rate from the dry adiabatic lapse rate to the moist adiabatic lapse rate (5.5 °C per kilometer or 3.0 °F per 1,000 feet).
As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.49 °C per kilometer (3.56 °F per 1,000 feet). The actual lapse rate can vary by altitude and by location.
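A minimal sketch of that average model, assuming the ISA sea-level temperature of 15 °C and applying the quoted lapse rate only up to the tropopause (taken here as 11 km); behaviour above that altitude is not modelled.
using System;

static class IsaTemperature
{
    // Approximate ISA temperature in °C at a given altitude in kilometres,
    // using the average lapse rate quoted above; valid only up to ~11 km.
    public static double AtAltitudeKm(double altitudeKm)
    {
        const double seaLevelCelsius = 15.0; // ISA sea-level temperature
        const double lapsePerKm = 6.49;      // average lapse rate in °C per km
        return seaLevelCelsius - lapsePerKm * Math.Min(altitudeKm, 11.0);
    }
}

// Example: IsaTemperature.AtAltitudeKm(5) ≈ 15 − 6.49 × 5 ≈ −17.5 °C.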
Finally, only the troposphere (extending to roughly 8–18 km of altitude, depending on latitude) in the Earth's atmosphere undergoes notable convection; in the stratosphere, there is little vertical convection.
Effects on organisms
Humans
Medicine recognizes that altitudes above about 1,500 metres (4,900 ft) start to affect humans, and there is no record of humans living at extreme altitudes above 5,500–6,000 metres (18,000–19,700 ft) for more than two years. As the altitude increases, atmospheric pressure decreases, which affects humans by reducing the partial pressure of oxygen. The lack of oxygen above about 2,400 metres (8,000 ft) can cause serious illnesses such as altitude sickness, high-altitude pulmonary edema, and high-altitude cerebral edema. The higher the altitude, the more likely the serious effects. The human body can adapt to high altitude by breathing faster, having a higher heart rate, and adjusting its blood chemistry. It can take days or weeks to adapt to high altitude. However, above 8,000 metres (26,000 ft), in the "death zone", altitude acclimatization becomes impossible.
There is a significantly lower overall mortality rate for permanent residents at higher altitudes. Additionally, there is a dose–response relationship between increasing elevation and decreasing obesity prevalence in the United States. In addition, a recent hypothesis suggests that high altitude could be protective against Alzheimer's disease via the action of erythropoietin, a hormone released by the kidney in response to hypoxia.
However, people living at higher elevations have a statistically significant higher rate of suicide. The cause of this increased risk is not yet known.
Athletes
For athletes, high altitude produces two contradictory effects on performance. For explosive events (sprints up to 400 metres, long jump, triple jump) the reduction in atmospheric pressure means less atmospheric resistance, which generally results in improved athletic performance. For endurance events (races of 5,000 metres or more) the predominant effect is the reduction in oxygen, which generally reduces the athlete's performance at high altitude. Sports organizations acknowledge the effects of altitude on performance: the International Association of Athletics Federations (IAAF), for example, marks record performances achieved at an altitude greater than 1,000 metres (3,300 ft) with the letter "A".
Athletes also can take advantage of altitude acclimatization to increase their performance. The same changes that help the body cope with high altitude increase performance back at sea level. These changes are the basis of altitude training which forms an integral part of the training of athletes in a number of endurance sports including track and field, distance running, triathlon, cycling and swimming.
Other organisms
Decreased oxygen availability and decreased temperature make life at high altitude challenging. Despite these environmental conditions, many species have successfully adapted to high altitudes. Animals have developed physiological adaptations to enhance oxygen uptake and delivery to tissues, which can be used to sustain metabolism. The strategies used by animals to adapt to high altitude depend on their morphology and phylogeny. For example, small mammals face the challenge of maintaining body heat in cold temperatures, due to their large surface-area-to-volume ratio. As oxygen is used as a source of metabolic heat production, the hypobaric hypoxia at high altitudes is problematic.
There is also a general trend of smaller body sizes and lower species richness at high altitudes, likely due to lower oxygen partial pressures. These factors may decrease productivity in high altitude habitats, meaning there will be less energy available for consumption, growth, and activity.
However, some species, such as birds, thrive at high altitude. Birds thrive because of physiological features that are advantageous for high-altitude flight.
See also
Atmosphere of Earth
Coffin corner (aerodynamics) – at higher altitudes, the air density is lower than at sea level, and at a certain altitude it becomes very difficult to keep an airplane in stable flight.
Geocentric altitude
Near space
References
External links
Downloadable ETOPO2 Raw Data Database (2 minute grid)
Downloadable ETOPO5 Raw Data Database (5 minute grid)
Aerospace
Physical geography
Topography
Vertical position | Altitude | [
"Physics"
] | 2,349 | [
"Vertical position",
"Aerospace",
"Physical quantities",
"Distance",
"Space",
"Spacetime"
] |