Heinrich Hertz (https://en.wikipedia.org/wiki/Heinrich%20Hertz)

Heinrich Rudolf Hertz (22 February 1857 – 1 January 1894) was a German physicist who first conclusively proved the existence of the electromagnetic waves predicted by James Clerk Maxwell's equations of electromagnetism. The SI unit of frequency, the hertz (Hz), is named after him.
Biography
Heinrich Rudolf Hertz was born in 1857 in Hamburg, then a sovereign state of the German Confederation, into a prosperous and cultured Hanseatic family. His father was Gustav Ferdinand Hertz. His mother was Anna Elisabeth Pfefferkorn.
While studying at the Gelehrtenschule des Johanneums in Hamburg, Hertz showed an aptitude for sciences as well as languages, learning Arabic. He studied sciences and engineering in the German cities of Dresden, Munich and Berlin, where he studied under Gustav R. Kirchhoff and Hermann von Helmholtz. In 1880, Hertz obtained his PhD from the University of Berlin, and for the next three years remained for post-doctoral study under Helmholtz, serving as his assistant. In 1883, Hertz took a post as a lecturer in theoretical physics at the University of Kiel. In 1885, Hertz became a full professor at the University of Karlsruhe.
In 1886, Hertz married Elisabeth Doll, the daughter of Max Doll, a lecturer in geometry at Karlsruhe. They had two daughters: Johanna, born on 20 October 1887 and Mathilde, born on 14 January 1891, who went on to become a notable biologist. During this time Hertz conducted his landmark research into electromagnetic waves.
Hertz took a position of Professor of Physics and Director of the Physics Institute in Bonn on 3 April 1889, a position he held until his death. During this time he worked on theoretical mechanics with his work published in the book Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt (The Principles of Mechanics Presented in a New Form), published posthumously in 1894.
Death
In 1892, Hertz was diagnosed with an infection (after a bout of severe migraines) and underwent operations to treat the illness, but he died of complications following the surgery. Some consider his ailment to have been caused by a malignant bone condition. He died at the age of 36 in Bonn, Germany, in 1894, and was buried in the Ohlsdorf Cemetery in Hamburg.
Hertz's wife, Elisabeth Hertz (née Doll; 1864–1941), did not remarry. He was survived by his daughters, Johanna (1887–1967) and Mathilde (1891–1975). Neither ever married or had children, hence Hertz has no living descendants.
Scientific work
Electromagnetic waves
In 1864 Scottish mathematical physicist James Clerk Maxwell proposed a comprehensive theory of electromagnetism, now called Maxwell's equations. Maxwell's theory predicted that coupled electric and magnetic fields could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of short wavelength, but no one had been able to prove this, or generate or detect electromagnetic waves of other wavelengths.
During Hertz's studies in 1879 Helmholtz suggested that Hertz's doctoral dissertation be on testing Maxwell's theory. Helmholtz had also proposed the "Berlin Prize" problem that year at the Prussian Academy of Sciences for anyone who could experimentally prove an electromagnetic effect in the polarization and depolarization of insulators, something predicted by Maxwell's theory. Helmholtz was sure Hertz was the most likely candidate to win it. Not seeing any way to build an apparatus to experimentally test this, Hertz thought it was too difficult, and worked on electromagnetic induction instead. Hertz did produce an analysis of Maxwell's equations during his time at Kiel, showing they did have more validity than the then prevalent "action at a distance" theories.
In the autumn of 1886, after Hertz received his professorship at Karlsruhe, he was experimenting with a pair of Riess spirals when he noticed that discharging a Leyden jar into one of these coils produced a spark in the other coil. With an idea on how to build an apparatus, Hertz now had a way to proceed with the "Berlin Prize" problem of 1879 on proving Maxwell's theory (although the actual prize had expired uncollected in 1882). He used a dipole antenna consisting of two collinear one-meter wires with a spark gap between their inner ends, and zinc spheres attached to the outer ends for capacitance, as a radiator. The antenna was excited by pulses of high voltage of about 30 kilovolts applied between the two sides from a Ruhmkorff coil. He received the waves with a resonant single-loop antenna with a micrometer spark gap between the ends. This experiment produced and received what are now called radio waves in the very high frequency range.
Between 1886 and 1889 Hertz conducted a series of experiments that would prove the effects he was observing were results of Maxwell's predicted electromagnetic waves. Starting in November 1887 with his paper "On Electromagnetic Effects Produced by Electrical Disturbances in Insulators", Hertz sent a series of papers to Helmholtz at the Berlin Academy, including papers in 1888 that showed transverse free space electromagnetic waves traveling at a finite speed over a distance. In the apparatus Hertz used, the electric and magnetic fields radiated away from the wires as transverse waves. Hertz had positioned the oscillator about 12 meters from a zinc reflecting plate to produce standing waves. Each wave was about 4 meters long. Using the ring detector, he recorded how the wave's magnitude and component direction varied. Hertz measured Maxwell's waves and demonstrated that the velocity of these waves was equal to the velocity of light. The electric field intensity, polarization and reflection of the waves were also measured by Hertz. These experiments established that light and these waves were both a form of electromagnetic radiation obeying the Maxwell equations.
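As a rough check of the figures quoted above, the frequency of Hertz's waves follows from the measured wavelength and the demonstrated result that they travel at the speed of light (f = c/λ). The sketch below is a back-of-the-envelope illustration, not Hertz's own calculation.

```python
# Back-of-the-envelope check of the figures above (not Hertz's own calculation).
# Assumes the waves travel at the speed of light, as the experiments demonstrated.

C = 299_792_458.0            # speed of light in vacuum, m/s

wavelength_m = 4.0           # standing-wave wavelength reported above
frequency_hz = C / wavelength_m
print(f"frequency ≈ {frequency_hz / 1e6:.0f} MHz")   # ≈ 75 MHz, i.e. the VHF range mentioned above

# The 2 m dipole (two collinear 1 m wires) is roughly a half-wave element for such waves:
print(f"half-wavelength ≈ {wavelength_m / 2:.1f} m")
```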
Hertz did not realize the practical importance of his radio wave experiments. He stated that,
It's of no use whatsoever ... this is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there.
Asked about the applications of his discoveries, Hertz replied,
Nothing, I guess
Hertz's proof of the existence of airborne electromagnetic waves led to an explosion of experimentation with this new form of electromagnetic radiation, which was called "Hertzian waves" until around 1910 when the term "radio waves" became current. Within 6 years Guglielmo Marconi began developing a radio wave based wireless telegraphy system, leading to the wide use of radio communication.
Cathode rays
In 1883, Hertz tried to prove that cathode rays are electrically neutral and obtained what he interpreted as a definite absence of deflection in an electrostatic field. However, as J. J. Thomson explained in 1897, Hertz had placed the deflecting electrodes in a highly conductive region of the tube, resulting in a strong screening effect close to their surface.
Nine years later, Hertz experimented with and demonstrated that cathode rays could penetrate very thin metal foil (such as aluminium). Philipp Lenard, a student of Heinrich Hertz, further researched this "ray effect". He developed a version of the cathode tube and studied the penetration of various materials by X-rays. However, Lenard did not realize that he was producing X-rays. Hermann von Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement. It was formed on the basis of the electromagnetic theory of light (Wiedemann's Annalen, Vol. XLVIII). However, he did not work with actual X-rays.
Photoelectric effect
Hertz helped establish the photoelectric effect (which was later explained by Albert Einstein) when he noticed that a charged object loses its charge more readily when illuminated by ultraviolet radiation (UV). In 1887, he made observations of the photoelectric effect and of the production and reception of electromagnetic (EM) waves, published in the journal Annalen der Physik. His receiver consisted of a coil with a spark gap, whereby a spark would be seen upon detection of EM waves. He placed the apparatus in a darkened box to see the spark better. He observed that the maximum spark length was reduced when in the box. A glass panel placed between the source of EM waves and the receiver absorbed UV that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he substituted quartz for glass, as quartz does not absorb UV radiation. Hertz concluded his months of investigation and reported the results obtained. He did not further pursue investigation of this effect, nor did he make any attempt at explaining how the observed phenomenon was brought about.
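Einstein's later explanation (not Hertz's) relates the maximum kinetic energy of the ejected electrons to the light's frequency, E_max = hν − φ. The sketch below is a minimal illustration of that relation; the work function and wavelengths are assumed example values, not measurements from Hertz's paper.

```python
# Illustrative numbers only: Einstein's photoelectric relation E_max = h*nu - phi,
# which explained Hertz's observation decades later. The work function below is a
# typical value for a metal surface, chosen purely for illustration.

H = 6.626e-34        # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def max_kinetic_energy_ev(wavelength_nm: float, work_function_ev: float) -> float:
    """Maximum electron energy in eV; a negative value means no photoemission."""
    photon_energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    return photon_energy_ev - work_function_ev

work_function = 4.5  # eV, assumed illustrative value
for wavelength in (250, 400, 600):  # UV, violet, orange light, in nm
    e = max_kinetic_energy_ev(wavelength, work_function)
    print(wavelength, "nm ->", f"{e:+.2f} eV", "(emission)" if e > 0 else "(no emission)")
```

Only the ultraviolet photons carry enough energy to free electrons in this toy example, which is consistent with the role of UV in Hertz's spark-gap observations.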
Contact mechanics
In 1881 and 1882, Hertz published two articles on what was to become known as the field of contact mechanics, which proved to be an important basis for later theories in the field. Joseph Valentin Boussinesq published some critically important observations on Hertz's work, which nevertheless established this work on contact mechanics to be of immense importance. Hertz's work essentially summarises how two axisymmetric objects placed in contact will behave under loading; he obtained results based upon the classical theory of elasticity and continuum mechanics. The most significant flaw of his theory was the neglect of any adhesion between the two solids, which becomes important as the materials composing the solids become highly elastic. It was natural to neglect adhesion at the time, however, as there were no experimental methods of testing for it.
To develop his theory, Hertz used his observation of the elliptical Newton's rings formed upon placing a glass sphere upon a lens as the basis for assuming that the pressure exerted by the sphere follows an elliptical distribution. He used the formation of Newton's rings again while validating his theory with experiments, calculating the depth to which the sphere sinks into the lens. Kenneth L. Johnson, K. Kendall and A. D. Roberts (JKR) used this theory as a basis in 1971 while calculating the theoretical displacement, or indentation depth, in the presence of adhesion. Hertz's theory is recovered from their formulation if the adhesion of the materials is assumed to be zero. Similar to this theory, but using different assumptions, B. V. Derjaguin, V. M. Muller and Y. P. Toporov published another theory in 1975, which came to be known in the research community as the DMT theory, and which also recovers Hertz's formulation under the assumption of zero adhesion. The DMT theory proved to be premature and needed several revisions before it came to be accepted as another material contact theory in addition to the JKR theory. Both the DMT and the JKR theories form the basis of contact mechanics, upon which all transition contact models are based and which are used in material parameter prediction in nanoindentation and atomic force microscopy. These models are central to the field of tribology, and Duncan Dowson named Hertz one of the 23 "Men of Tribology". Although it preceded his great work on electromagnetism, and although Hertz himself, with characteristic soberness, considered it trivial, his research on contact mechanics has facilitated the age of nanotechnology.
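For a sphere pressed against a flat surface, the adhesion-free Hertzian theory gives closed-form expressions for the contact radius, indentation depth and peak pressure. The sketch below evaluates those standard formulas for assumed example values (a steel-like sphere on a glass-like flat); it is an illustration of Hertzian contact, not a reproduction of Hertz's original calculation, and the JKR/DMT adhesion corrections mentioned above are not included.

```python
import math

# Hertzian contact of a sphere on a flat: standard adhesion-free formulas.
# a^3 = 3*F*R / (4*E*), depth = a^2 / R, p0 = 3*F / (2*pi*a^2).
# Material properties and load below are assumptions chosen only for illustration.

def effective_modulus(e1, nu1, e2, nu2):
    """Combined contact modulus E* from the two bodies' Young's moduli and Poisson ratios."""
    return 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)

def hertz_sphere_on_flat(force, radius, e_star):
    """Return (contact radius, indentation depth, peak pressure) for Hertzian contact."""
    a = (3 * force * radius / (4 * e_star)) ** (1 / 3)
    depth = a**2 / radius
    p0 = 3 * force / (2 * math.pi * a**2)
    return a, depth, p0

E_star = effective_modulus(210e9, 0.30, 70e9, 0.22)   # assumed: steel sphere on glass flat
a, depth, p0 = hertz_sphere_on_flat(force=10.0, radius=5e-3, e_star=E_star)  # 10 N, 5 mm sphere
print(f"contact radius ≈ {a*1e6:.1f} µm, depth ≈ {depth*1e9:.0f} nm, peak pressure ≈ {p0/1e9:.2f} GPa")
```

Setting the adhesion energy to zero in the JKR or DMT formulations reproduces exactly these Hertzian results, which is the sense in which both later theories "recover" Hertz.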
Hertz also described the "Hertzian cone", a type of fracture mode in brittle solids caused by the transmission of stress waves.
Meteorology
Hertz always had a deep interest in meteorology, probably derived from his contacts with Wilhelm von Bezold (who was his professor in a laboratory course at the Munich Polytechnic in the summer of 1878). As an assistant to Helmholtz in Berlin, he contributed a few minor articles in the field, including research on the evaporation of liquids, a new kind of hygrometer, and a graphical means of determining the properties of moist air when subjected to adiabatic changes.
Philosophy of science
In the introduction to his 1894 book Principles of Mechanics, Hertz discusses the different "pictures" used to represent physics in his time, including the picture of Newtonian mechanics (based on mass and forces), a second picture (based on conservation of energy and Hamilton's principle) and his own picture (based uniquely on space, time, mass and the Hertz principle), comparing them in terms of 'permissibility', 'correctness' and 'appropriateness'. Hertz wanted to remove "empty assumptions" and argued against the Newtonian concept of force and against action at a distance. The philosopher Ludwig Wittgenstein, inspired by Hertz's work, extended his picture theory into a picture theory of language in his 1921 Tractatus Logico-Philosophicus, which influenced logical positivism. Wittgenstein also quotes him in the Blue and Brown Books.
Third Reich treatment
Because Hertz's family converted from Judaism to Lutheranism two decades before his birth, his legacy ran afoul of the Nazi government in the 1930s, a regime that classified people by "race" instead of religious affiliation.
Hertz's name was removed from streets and institutions and there was even a movement to rename the frequency unit named in his honor (hertz) after Hermann von Helmholtz instead, keeping the symbol (Hz) unchanged.
His family was also persecuted for their non-Aryan status. Hertz's youngest daughter, Mathilde, lost a lectureship at Berlin University after the Nazis came to power and within a few years she, her sister, and their mother left Germany and settled in England.
Legacy and honors
Heinrich Hertz's nephew, Gustav Ludwig Hertz, was a Nobel Prize winner, and Gustav's son Carl Helmut Hertz invented medical ultrasonography. Heinrich's daughter Mathilde Carmen Hertz was a well-known biologist and comparative psychologist. Hertz's grandnephew Hermann Gerhard Hertz, professor at the University of Karlsruhe, was a pioneer of NMR spectroscopy and in 1995 published Hertz's laboratory notes.
The SI unit hertz (Hz) was established in his honor by the International Electrotechnical Commission in 1930 for frequency, an expression of the number of times that a repeated event occurs per second. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, "cycles per second" (cps).
In 1928 the Heinrich-Hertz Institute for Oscillation Research was founded in Berlin. It is known today as the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI).
In 1969, in East Germany, a Heinrich Hertz memorial medal was cast.
The IEEE Heinrich Hertz Medal, established in 1987, is "for outstanding achievements in Hertzian waves [...] presented annually to an individual for achievements which are theoretical or experimental in nature".
The Submillimeter Radio Telescope at Mt. Graham, Arizona, constructed in 1992, is named after him.
A crater that lies on the far side of the Moon, just behind the eastern limb, is the Hertz crater, named in his honor.
On his birthday in 2012, Google honored Hertz with a Google doodle, inspired by his life's work, on its home page.
Works
Books
Articles
Hertz, H.R. "Ueber sehr schnelle electrische Schwingungen", Annalen der Physik, vol. 267, no. 7, p. 421–448, May 1887
Hertz, H.R. "Ueber einen Einfluss des ultravioletten Lichtes auf die electrische Entladung", Annalen der Physik, vol. 267, no. 8, p. 983–1000, June 1887
Hertz, H.R. "Ueber die Einwirkung einer geradlinigen electrischen Schwingung auf eine benachbarte Strombahn", Annalen der Physik, vol. 270, no. 5, p. 155–170, March 1888
Hertz, H.R. "Ueber die Ausbreitungsgeschwindigkeit der electrodynamischen Wirkungen", Annalen der Physik, vol. 270, no. 7, p. 551–569, May 1888
Hertz, H. R. (1899) The Principles of Mechanics Presented in a New Form, London, Macmillan, with an introduction by Hermann von Helmholtz (English translation of Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt, Leipzig, posthumously published in 1894).
See also
Lists and histories
Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute
History of radio
Invention of radio
List of physicists
Outline of physics
Timeline of mechanics and physics
Electromagnetism timeline
Wireless telegraphy
Electromagnetic radiation
Microwave
Other
List of German inventors and discoverers
References
Further reading
Appleyard, Rollo. (1930). Pioneers of Electrical Communication. London: Macmillan and Company; reprinted by Ayer Company Publishers, Manchester, New Hampshire.
Bodanis, David. (2006). Electric Universe: How Electricity Switched on the Modern World. New York: Three Rivers Press.
Bryant, John H. (1988). Heinrich Hertz, the Beginning of Microwaves: Discovery of Electromagnetic Waves and Opening of the Electromagnetic Spectrum by Heinrich Hertz in the Years 1886–1892. New York: IEEE (Institute of Electrical and Electronics Engineers).
Jenkins, John D. "The Discovery of Radio Waves – 1888; Heinrich Rudolf Hertz (1847–1894)" (retrieved 27 Jan 2008)
Maugis, Daniel. (2000). Contact, Adhesion and Rupture of Elastic Solids. New York: Springer-Verlag.
Naughton, Russell. "Heinrich Rudolph (alt: Rudolf) Hertz, Dr : 1857 – 1894" (retrieved 27 Jan 2008)
Roberge, Pierre R. "Heinrich Rudolph Hertz, 1857–1894" (retrieved 27 Jan 2008)
Susskind, Charles. (1995). Heinrich Hertz: A Short Life. San Francisco: San Francisco Press.
External links
1857 births
1894 deaths
19th-century German inventors
19th-century German physicists
Burials at the Ohlsdorf Cemetery
German Lutherans
German male writers
German people of Jewish descent
19th-century German philosophers
Humboldt University of Berlin alumni
Academic staff of the Karlsruhe Institute of Technology
People educated at the Gelehrtenschule des Johanneums
Radio pioneers
Recipients of the Matteucci Medal
Scientists from Hamburg
Technical University of Munich alumni
Tribologists
Academic staff of the University of Bonn
Academic staff of the University of Kiel
"Materials_science"
] | 3,865 | [
"Tribology",
"Tribologists"
] |
Heredity (https://en.wikipedia.org/wiki/Heredity)

Heredity, also called inheritance or biological inheritance, is the passing on of traits from parents to their offspring; either through asexual reproduction or sexual reproduction, the offspring cells or organisms acquire the genetic information of their parents. Through heredity, variations between individuals can accumulate and cause species to evolve by natural selection. The study of heredity in biology is genetics.
Overview
In humans, eye color is an example of an inherited characteristic: an individual might inherit the "brown-eye trait" from one of the parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype.
The complete set of observable traits of the structure and behavior of an organism is called its phenotype. These traits arise from the interaction of the organism's genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin derives from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype: a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable traits are known to be passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer that incorporates four types of bases, which are interchangeable. The Nucleic acid sequence (the sequence of bases along a particular DNA molecule) specifies the genetic information: this is comparable to a sequence of letters spelling out a passage of text. Before a cell divides through mitosis, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. A portion of a DNA molecule that specifies a single functional unit is called a gene; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. Organisms inherit genetic material from their parents in the form of homologous chromosomes, containing a unique combination of DNA sequences that code for genes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a particular locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.
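The gene → locus → allele vocabulary above maps naturally onto a small data model. The sketch below is only an illustration of the terminology; the locus name, chromosome and sequences are invented for the example, and it is not a bioinformatics tool.

```python
from dataclasses import dataclass, field

# Toy illustration of the terms above: a locus is a position, the variant
# sequences found there are alleles, and a point mutation creates a new allele.
# All names and sequences are invented for illustration.

@dataclass
class Locus:
    name: str
    chromosome: str
    alleles: set = field(default_factory=set)   # distinct sequences observed at this locus

    def mutate(self, allele: str, position: int, new_base: str) -> str:
        """Return a new allele produced by a point mutation of an existing one."""
        mutated = allele[:position] + new_base + allele[position + 1:]
        self.alleles.add(mutated)
        return mutated

pod_colour = Locus(name="pod colour (toy)", chromosome="5", alleles={"ATGGCA"})
pod_colour.mutate("ATGGCA", position=2, new_base="T")
print(pod_colour.alleles)   # {'ATGGCA', 'ATTGCA'} – two alleles at one locus
```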
However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes within and among organisms. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization.
Recent findings have confirmed important examples of heritable changes that cannot be explained by direct agency of the DNA molecule. These phenomena are classed as epigenetic inheritance systems that are causally or independently evolving over genes. Research into modes and mechanisms of epigenetic inheritance is still in its scientific infancy, but this area of research has attracted much recent activity as it broadens the scope of heritability and evolutionary biology in general. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference, and the three dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effect that modifies and feeds back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits, group heritability, and symbiogenesis. These examples of heritability that operate above the gene are covered broadly under the title of multilevel or hierarchical selection, which has been a subject of intense debate in the history of evolutionary science.
Relation to theory of evolution
When Charles Darwin proposed his theory of evolution in 1859, one of its major problems was the lack of an underlying mechanism for heredity. Darwin believed in a mix of blending inheritance and the inheritance of acquired traits (pangenesis). Blending inheritance would lead to uniformity across populations in only a few generations and then would remove variation from a population on which natural selection could act. This led to Darwin adopting some Lamarckian ideas in later editions of On the Origin of Species and his later biological works. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) rather than suggesting mechanisms.
Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits.
The inheritance of acquired traits was shown to have little basis in the 1880s when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails.
History
Scientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that "seeds" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a "nurse for the young life sown within her".
Ancient understandings of heredity transitioned to two debated doctrines in the 18th century. The Doctrine of Epigenesis and the Doctrine of Preformation were two distinct views of the understanding of heredity. The Doctrine of Epigenesis, originated by Aristotle, claimed that an embryo continually develops. The modifications of the parent's traits are passed off to an embryo during its lifetime. The foundation of this doctrine was based on the theory of inheritance of acquired traits. In direct opposition, the Doctrine of Preformation claimed that "like generates like" where the germ would evolve to yield offspring similar to the parents. The Preformationist view believed procreation was an act of revealing what had been created long before. However, this was disputed by the creation of the cell theory in the 19th century, where the fundamental unit of life is the cell, and not some preformed parts of an organism. Various hereditary mechanisms, including blending inheritance were also envisaged without being properly tested or quantified, and were later disputed. Nevertheless, people were able to develop domestic breeds of animals as well as crops through artificial selection. The inheritance of acquired traits also formed a part of early Lamarckian ideas on evolution.
During the 18th century, Dutch microscopist Antonie van Leeuwenhoek (1632–1723) discovered "animalcules" in the sperm of humans and other animals. Some scientists speculated they saw a "little man" (homunculus) inside each sperm. These scientists formed a school of thought known as the "spermists". They contended the only contributions of the female to the next generation were the womb in which the homunculus grew, and prenatal influences of the womb. An opposing school of thought, the ovists, believed that the future human was in the egg, and that sperm merely stimulated the growth of the egg. Ovists thought women carried eggs containing boy and girl children, and that the gender of the offspring was determined well before conception.
An early research initiative emerged in 1878 when Alpheus Hyatt led an investigation to study the laws of heredity through compiling data on family phenotypes (nose size, ear shape, etc.) and the expression of pathological conditions and abnormal characteristics, particularly with respect to the age of appearance. One of the project's aims was to tabulate data to better understand why certain traits are consistently expressed while others are highly irregular.
Gregor Mendel: father of genetics
The idea of particulate inheritance of genes can be attributed to the Moravian monk Gregor Mendel, who published his work on pea plants in 1865. However, his work was not widely known and was rediscovered in 1901. It was initially assumed that Mendelian inheritance only accounted for large (qualitative) differences, such as those seen by Mendel in his pea plants; the idea of an additive effect of (quantitative) genes was not realised until R. A. Fisher's 1918 paper, "The Correlation Between Relatives on the Supposition of Mendelian Inheritance". Mendel's overall contribution gave scientists a useful overview that traits are inheritable. His pea plant demonstration became the foundation of the study of Mendelian traits. These traits can be traced on a single locus.
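Fisher's point can be illustrated numerically: if a trait is the sum of small, independent Mendelian contributions from many loci, its distribution in a population becomes approximately continuous and bell-shaped. The simulation below is a toy illustration of that idea; the allele frequency and effect size are arbitrary assumptions, not a reconstruction of Fisher's analysis.

```python
import random

# Toy illustration of Fisher (1918): summing many small Mendelian contributions
# yields an approximately continuous trait. Allele frequency and effect size are
# arbitrary assumptions, not estimates from real data.

random.seed(1)

def trait_value(n_loci: int, allele_freq: float = 0.5, effect: float = 1.0) -> float:
    """Sum of allele effects over n biallelic loci (two allele copies per locus)."""
    return sum(effect for _ in range(2 * n_loci) if random.random() < allele_freq)

for n_loci in (1, 5, 50):
    values = [trait_value(n_loci) for _ in range(10_000)]
    print(f"{n_loci:>2} loci: {len(set(values)):>3} distinct trait values in 10,000 individuals")
# With 1 locus the trait falls into a few discrete classes; with many loci it
# approaches the continuous variation treated by quantitative genetics.
```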
Modern development of genetics and heredity
In the 1930s, work by Fisher and others resulted in a combination of Mendelian and biometric schools into the modern evolutionary synthesis. The modern synthesis bridged the gap between experimental geneticists and naturalists; and between both and palaeontologists, stating that:
All evolutionary phenomena can be explained in a way consistent with known genetic mechanisms and the observational evidence of naturalists.
Evolution is gradual: small genetic changes, recombination ordered by natural selection. Discontinuities amongst species (or other taxa) are explained as originating gradually through geographical separation and extinction (not saltation).
Selection is overwhelmingly the main mechanism of change; even slight advantages are important when continued. The object of selection is the phenotype in its surrounding environment. The role of genetic drift is equivocal; though strongly supported initially by Dobzhansky, it was downgraded later as results from ecological genetics were obtained.
The primacy of population thinking: the genetic diversity carried in natural populations is a key factor in evolution. The strength of natural selection in the wild was greater than expected; the effect of ecological factors such as niche occupation and the significance of barriers to gene flow are all important.
The idea that speciation occurs after populations are reproductively isolated has been much debated. In plants, polyploidy must be included in any view of speciation. Formulations such as 'evolution consists primarily of changes in the frequencies of alleles between one generation and another' were proposed rather later. The traditional view is that developmental biology ('evo-devo') played little part in the synthesis, but an account of Gavin de Beer's work by Stephen Jay Gould suggests he may be an exception.
Almost all aspects of the synthesis have been challenged at times, with varying degrees of success. There is no doubt, however, that the synthesis was a great landmark in evolutionary biology. It cleared up many confusions, and was directly responsible for stimulating a great deal of research in the post-World War II era.
Trofim Lysenko, however, caused a backlash of what is now called Lysenkoism in the Soviet Union when he emphasised Lamarckian ideas on the inheritance of acquired traits. This movement affected agricultural research, led to food shortages in the 1960s, and seriously harmed the USSR.
There is growing evidence that there is transgenerational inheritance of epigenetic changes in humans and other animals.
Common genetic disorders
Fragile X syndrome
Sickle cell disease
Phenylketonuria (PKU)
Haemophilia
Types
The description of a mode of biological inheritance consists of three main categories:
1. Number of involved loci
Monogenetic (also called "simple") – one locus
Oligogenic – few loci
Polygenetic – many loci
2. Involved chromosomes
Autosomal – loci are not situated on a sex chromosome
Gonosomal – loci are situated on a sex chromosome
X-chromosomal – loci are situated on the X-chromosome (the more common case)
Y-chromosomal – loci are situated on the Y-chromosome
Mitochondrial – loci are situated on the mitochondrial DNA
3. Correlation genotype–phenotype
Dominant
Intermediate (also called "codominant")
Recessive
Overdominant
Underdominant
These three categories are part of every exact description of a mode of inheritance in the above order. In addition, more specifications may be added as follows:
4. Coincidental and environmental interactions
Penetrance
Complete
Incomplete (expressed as a percentage)
Expressivity
Invariable
Variable
Heritability (in polygenetic and sometimes also in oligogenetic modes of inheritance)
Maternal or paternal imprinting phenomena (also see epigenetics)
5. Sex-linked interactions
Sex-linked inheritance (gonosomal loci)
Sex-limited phenotype expression (e.g., cryptorchism)
Inheritance through the maternal line (in case of mitochondrial DNA loci)
Inheritance through the paternal line (in case of Y-chromosomal loci)
6. Locus–locus interactions
Epistasis with other loci (e.g., overdominance)
Gene coupling with other loci (also see crossing over)
Homozygotous lethal factors
Semi-lethal factors
A mode of inheritance is determined and described primarily through statistical analysis of pedigree data. If the involved loci are known, methods of molecular genetics can also be employed.
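A description built from the categories above can be captured as a small structured record. The sketch below simply encodes the checklist in the order given; the field values in the example are illustrative assumptions, not a clinical reference.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal structured record mirroring the checklist above (categories 1-3 plus
# optional extras). The example values are illustrative only.

@dataclass
class InheritanceMode:
    loci: str                    # "monogenetic", "oligogenic", "polygenetic"
    chromosome: str              # "autosomal", "X-chromosomal", "Y-chromosomal", "mitochondrial"
    genotype_phenotype: str      # "dominant", "intermediate", "recessive", ...
    penetrance: Optional[float] = None   # 1.0 = complete, <1.0 = incomplete
    sex_linked: bool = False

# Illustrative example: a simple autosomal recessive, fully penetrant trait.
example = InheritanceMode(loci="monogenetic", chromosome="autosomal",
                          genotype_phenotype="recessive", penetrance=1.0)
print(example)
```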
Dominant and recessive alleles
An allele is said to be dominant if it is always expressed in the appearance of an organism (phenotype) provided that at least one copy of it is present. For example, in peas the allele for green pods, G, is dominant to that for yellow pods, g. Thus pea plants with the pair of alleles either GG (homozygote) or Gg (heterozygote) will have green pods. The allele for yellow pods is recessive. The effects of this allele are only seen when it is present in both chromosomes, gg (homozygote). This derives from zygosity, the degree to which both copies of a chromosome or gene have the same genetic sequence, in other words, the degree of similarity of the alleles in an organism.
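The green/yellow pea-pod example above can be reproduced with a tiny Punnett-square enumeration. The sketch below uses the G/g notation from the text; the expected 3:1 green-to-yellow phenotype ratio for a Gg × Gg cross follows directly.

```python
from collections import Counter
from itertools import product

# Punnett-square enumeration of the pea-pod example above:
# G (green pods) is dominant over g (yellow pods).

def cross(parent1: str, parent2: str) -> Counter:
    """Offspring genotype counts for a single-locus cross, e.g. 'Gg' x 'Gg'."""
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

genotypes = cross("Gg", "Gg")
print(genotypes)          # Counter({'Gg': 2, 'GG': 1, 'gg': 1})

phenotypes = Counter("green" if "G" in g else "yellow" for g in genotypes.elements())
print(phenotypes)         # 3 green : 1 yellow, the classic ratio for a dominant allele
```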
See also
References
External links
Stanford Encyclopedia of Philosophy entry on Heredity and Heritability
""Experiments in Plant Hybridization" (1866), by Johann Gregor Mendel", by A. Andrei at the Embryo Project Encyclopedia
Genetics
"Biology"
] | 3,036 | [
"Genetics"
] |
Holmium (https://en.wikipedia.org/wiki/Holmium)

Holmium is a chemical element; it has symbol Ho and atomic number 67. It is a rare-earth element and the eleventh member of the lanthanide series. It is a relatively soft, silvery, fairly corrosion-resistant and malleable metal. Like many other lanthanides, holmium is too reactive to be found in native form, as pure holmium slowly forms a yellowish oxide coating when exposed to air. When isolated, holmium is relatively stable in dry air at room temperature. However, it reacts with water and corrodes readily, and also burns in air when heated.
In nature, holmium occurs together with the other rare-earth metals (like thulium). It is a relatively rare lanthanide, making up 1.4 parts per million of the Earth's crust, an abundance similar to tungsten. Holmium was observed spectroscopically in 1878 by Jacques-Louis Soret and Marc Delafontaine, and was independently discovered by the Swedish chemist Per Teodor Cleve, who first isolated its oxide from rare-earth ores in 1878. The element's name comes from Holmia, the Latin name for the city of Stockholm.
Like many other lanthanides, holmium is found in the minerals monazite and gadolinite and is usually commercially extracted from monazite using ion-exchange techniques. Its compounds in nature and in nearly all of its laboratory chemistry are trivalently oxidized, containing Ho(III) ions. Trivalent holmium ions have fluorescent properties similar to many other rare-earth ions (while yielding their own set of unique emission light lines), and thus are used in the same way as some other rare earths in certain laser and glass-colorant applications.
Holmium has the highest magnetic permeability and magnetic saturation of any element and is thus used for the pole pieces of the strongest static magnets. Because holmium strongly absorbs neutrons, it is also used as a burnable poison in nuclear reactors.
Properties
Holmium is the eleventh member of the lanthanide series. In the periodic table, it appears in period 6, between the lanthanides dysprosium to its left and erbium to its right, and above the actinide einsteinium.
Physical properties
With a boiling point of , holmium is the sixth most volatile lanthanide after ytterbium, europium, samarium, thulium and dysprosium. At standard temperature and pressure, holmium, like many of the second half of the lanthanides, normally assumes a hexagonally close-packed (hcp) structure. Its 67 electrons are arranged in the configuration [Xe] 4f11 6s2, so that it has thirteen valence electrons filling the 4f and 6s subshells.
Holmium, like all of the lanthanides, is paramagnetic at standard temperature and pressure. However, holmium is ferromagnetic at temperatures below . It has the highest magnetic moment () of any naturally occurring element and possesses other unusual magnetic properties. When combined with yttrium, it forms highly magnetic compounds.
Chemical properties
Holmium metal tarnishes slowly in air, forming a yellowish oxide layer that has an appearance similar to that of iron rust. It burns readily to form holmium(III) oxide:
4 Ho + 3 O2 → 2 Ho2O3
It is a relatively soft and malleable element that is fairly corrosion-resistant and chemically stable in dry air at standard temperature and pressure. In moist air and at higher temperatures, however, it quickly oxidizes, forming a yellowish oxide. In pure form, holmium possesses a metallic, bright silvery luster.
Holmium is quite electropositive: on the Pauling electronegativity scale, it has an electronegativity of 1.23. It is generally trivalent. It reacts slowly with cold water and quickly with hot water to form holmium(III) hydroxide:
2 Ho (s) + 6 H2O (l) → 2 Ho(OH)3 (aq) + 3 H2 (g)
Holmium metal reacts with all the stable halogens:
2 Ho (s) + 3 F2 (g) → 2 HoF3 (s) [pink]
2 Ho (s) + 3 Cl2 (g) → 2 HoCl3 (s) [yellow]
2 Ho (s) + 3 Br2 (g) → 2 HoBr3 (s) [yellow]
2 Ho (s) + 3 I2 (g) → 2 HoI3 (s) [yellow]
Holmium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Ho(III) ions, which exist as [Ho(OH2)9]3+ complexes:
2 Ho (s) + 3 H2SO4 (aq) → 2 Ho3+ (aq) + 3 SO42− (aq) + 3 H2 (g)
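As a quick illustration of the oxidation reaction given earlier in this section (4 Ho + 3 O2 → 2 Ho2O3), the mass of oxide formed from a given mass of metal follows from the molar masses. The sketch below uses rounded atomic masses and is illustrative only.

```python
# Stoichiometry of the oxidation reaction above: 4 Ho + 3 O2 -> 2 Ho2O3.
# Atomic masses are rounded; this is an illustrative calculation, not lab guidance.

M_HO = 164.93                   # g/mol, holmium
M_O = 16.00                     # g/mol, oxygen
M_HO2O3 = 2 * M_HO + 3 * M_O    # g/mol, holmium(III) oxide

def oxide_mass(holmium_grams: float) -> float:
    """Mass of Ho2O3 produced by complete oxidation of a given mass of Ho."""
    mol_ho = holmium_grams / M_HO
    mol_oxide = mol_ho / 2              # 4 Ho give 2 Ho2O3
    return mol_oxide * M_HO2O3

print(f"10.0 g Ho -> {oxide_mass(10.0):.2f} g Ho2O3")   # ≈ 11.46 g
```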
Oxidation states
As with many lanthanides, holmium is usually found in the +3 oxidation state, forming compounds such as holmium(III) fluoride (HoF3) and holmium(III) chloride (HoCl3). Holmium in solution is in the form of Ho3+ surrounded by nine molecules of water. Holmium dissolves in acids. However, holmium is also found to exist in +2, +1 and 0 oxidation states.
Isotopes
The isotopes of holmium range from 140Ho to 175Ho. The primary decay mode before the most abundant stable isotope, 165Ho, is positron emission, and the primary mode after is beta minus decay. The primary decay products before 165Ho are terbium and dysprosium isotopes, and the primary products after are erbium isotopes.
Natural holmium consists of one primordial isotope, holmium-165; it is the only isotope of holmium that is thought to be stable, although it is predicted to undergo alpha decay to terbium-161 with a very long half-life. Of the 35 synthetic radioactive isotopes that are known, the most stable one is holmium-163 (163Ho), with a half-life of 4570 years. All other radioisotopes have ground-state half-lives not greater than 1.117 days, with the longest, holmium-166 (166Ho) having a half-life of 26.83 hours, and most have half-lives under 3 hours.
166m1Ho has a half-life of around 1200 years. The high excitation energy, resulting in a particularly rich spectrum of decay gamma rays produced when the metastable state de-excites, makes this isotope useful as a means for calibrating gamma ray spectrometers.
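The half-lives quoted above translate into remaining fractions through the usual decay law N(t) = N0 · 2^(−t/T½). A minimal sketch using the 163Ho and 166Ho values from the text:

```python
# Exponential decay with the half-lives quoted above: N(t) = N0 * 2**(-t / half_life).

def remaining_fraction(elapsed: float, half_life: float) -> float:
    """Fraction of the original nuclei left after `elapsed` time (same units as half_life)."""
    return 2.0 ** (-elapsed / half_life)

# 163Ho: half-life 4570 years
print(f"163Ho after 1000 years: {remaining_fraction(1000, 4570):.3f} of original")
# 166Ho: half-life 26.83 hours -> only about 1% remains after a week
print(f"166Ho after 7 days:     {remaining_fraction(7 * 24, 26.83):.4f} of original")
```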
Compounds
Oxides and chalcogenides
Holmium(III) oxide is the only oxide of holmium. It changes its color depending on the lighting conditions. In daylight, it has a yellowish color. Under trichromatic light, it appears orange red, almost indistinguishable from the appearance of erbium oxide under the same lighting conditions. The color change is related to the sharp emission lines of trivalent holmium ions acting as red phosphors. Holmium(III) oxide appears pink under a cold-cathode fluorescent lamp.
Other chalcogenides are known for holmium. Holmium(III) sulfide has orange-yellow crystals in the monoclinic crystal system, with the space group P21/m (No. 11). Under high pressure, holmium(III) sulfide can form in the cubic and orthorhombic crystal systems. It can be obtained by the reaction of holmium(III) oxide and hydrogen sulfide at . Holmium(III) selenide is also known. It is antiferromagnetic below 6 K.
Halides
All four trihalides of holmium are known. Holmium(III) fluoride is a yellowish powder that can be produced by reacting holmium(III) oxide and ammonium fluoride, then crystallising it from the ammonium salt formed in solution. Holmium(III) chloride can be prepared in a similar way, with ammonium chloride instead of ammonium fluoride. It has the YCl3 layer structure in the solid state. These compounds, as well as holmium(III) bromide and holmium(III) iodide, can be obtained by the direct reaction of the elements:
2 Ho + 3 X2 → 2 HoX3
In addition, holmium(III) iodide can be obtained by the direct reaction of holmium and mercury(II) iodide, then removing the mercury by distillation.
Organoholmium compounds
Organoholmium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric.
History
Holmium (Holmia, Latin name for Stockholm) was discovered by the Swiss chemists Jacques-Louis Soret and Marc Delafontaine in 1878, who noticed the aberrant spectrographic emission spectrum of the then-unknown element (they called it "Element X").
The Swedish chemist Per Teodor Cleve also independently discovered the element while he was working on erbia earth (erbium oxide). He was the first to isolate the new element. Using the method developed by the Swedish chemist Carl Gustaf Mosander, Cleve first removed all of the known contaminants from erbia. The result of that effort was two new materials, one brown and one green. He named the brown substance holmia (after the Latin name for Cleve's home town, Stockholm) and the green one thulia. Holmia was later found to be the holmium oxide, and thulia was thulium oxide.
In the English physicist Henry Moseley's classic paper on atomic numbers, holmium was assigned the value 66. The holmium preparation he had been given to investigate had been impure, dominated by neighboring (at the time undiscovered) dysprosium. He would have seen x-ray emission lines for both elements, but assumed that the dominant ones belonged to holmium, instead of the dysprosium impurity.
Occurrence and production
Like all the other rare-earth elements, holmium is not naturally found as a free element. It occurs combined with other elements in gadolinite, monazite and other rare-earth minerals. No holmium-dominant mineral has yet been found. The main mining areas are China, United States, Brazil, India, Sri Lanka, and Australia with reserves of holmium estimated as 400,000 tonnes. The annual production of holmium metal is of about 10 tonnes per year.
Holmium makes up 1.3 parts per million of the Earth's crust by mass. Holmium makes up 1 part per million of the soils, 400 parts per quadrillion of seawater, and almost none of Earth's atmosphere, which is very rare for a lanthanide. It makes up 500 parts per trillion of the universe by mass.
Holmium is commercially extracted by ion exchange from monazite sand (0.05% holmium), but is still difficult to separate from other rare earths. The element has been isolated through the reduction of its anhydrous chloride or fluoride with metallic calcium. Its estimated abundance in the Earth's crust is 1.3 mg/kg. Holmium obeys the Oddo–Harkins rule: as an odd-numbered element, it is less abundant than both dysprosium and erbium. However, it is the most abundant of the odd-numbered heavy lanthanides. Of the lanthanides, only promethium, thulium, lutetium and terbium are less abundant on Earth. The principal current sources are some of the ion-adsorption clays of southern China. Some of these have a rare-earth composition similar to that found in xenotime or gadolinite. Yttrium makes up about two-thirds of the total by mass; holmium is around 1.5%. Holmium is relatively inexpensive for a rare-earth metal, with a price of about 1,000 USD/kg.
Applications
Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200 to 900 nm. They are therefore used as a calibration standard for optical spectrophotometers. The radioactive but long-lived 166m1Ho is used in calibration of gamma-ray spectrometers.
Holmium is used to create the strongest artificially generated magnetic fields, when placed within high-strength magnets as a magnetic pole piece (also called a magnetic flux concentrator). Holmium is also used in the manufacture of some permanent magnets.
Holmium-doped yttrium iron garnet (YIG) and yttrium lithium fluoride have applications in solid-state lasers, and Ho-YIG has applications in optical isolators and in microwave equipment (e.g., YIG spheres). Holmium lasers emit at 2.1 micrometres. They are used in medical, dental, and fiber-optical applications. It is also being considered for usage in the enucleation of the prostate.
Since holmium can absorb nuclear fission-bred neutrons, it is used as a burnable poison to regulate nuclear reactors. It is used as a colorant for cubic zirconia, providing pink coloring, and for glass, providing yellow-orange coloring. In March 2017, IBM announced that they had developed a technique to store one bit of data on a single holmium atom set on a bed of magnesium oxide. With sufficient quantum and classical control techniques, holmium may be a good candidate to make quantum computers.
Holmium is used in the medical field, particularly in laser surgery for procedures such as kidney stone removal and prostate treatment, due to its precision and minimal tissue damage. Its isotope, holmium-166, is applied in targeted cancer therapies, especially for liver cancer, and it also enhances MRI imaging as a contrast agent.
Biological role and precautions
Holmium plays no biological role in humans, but its salts are able to stimulate metabolism. Humans typically consume about a milligram of holmium a year. Plants do not readily take up holmium from the soil. Some vegetables have had their holmium content measured, and it amounted to 100 parts per trillion. Holmium and its soluble salts are slightly toxic if ingested, but insoluble holmium salts are nontoxic. Metallic holmium in dust form presents a fire and explosion hazard. Large amounts of holmium salts can cause severe damage if inhaled, consumed orally, or injected. The biological effects of holmium over a long period of time are not known. Holmium has a low level of acute toxicity.
See also
:Category:Holmium compounds
Period 6 element
References
Bibliography
Further reading
R. J. Callow, The Industrial Chemistry of the Lanthanons, Yttrium, Thorium, and Uranium, Pergamon Press, 1967.
External links
Holmium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Chemical elements with hexagonal close-packed structure
Ferromagnetic materials
Lanthanides
Neutron poisons
Reducing agents
"Physics",
"Chemistry"
] | 3,269 | [
"Chemical elements",
"Redox",
"Ferromagnetic materials",
"Reducing agents",
"Materials",
"Atoms",
"Matter"
] |
Hafnium (https://en.wikipedia.org/wiki/Hafnium)

Hafnium is a chemical element; it has symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in many zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869, though it was not identified until 1922, by Dirk Coster and George de Hevesy. Hafnium is named after Hafnia, the Latin name for Copenhagen, where it was discovered.
Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nanometers and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten.
Hafnium's large neutron capture cross section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors.
Characteristics
Physical characteristics
Hafnium is a shiny, silvery, ductile metal that is corrosion-resistant and chemically similar to zirconium in that they have the same number of valence electrons and are in the same group. Also, their relativistic effects are similar: The expected expansion of atomic radii from period 5 to 6 is almost exactly canceled out by the lanthanide contraction. Hafnium changes from its alpha form, a hexagonal close-packed lattice, to its beta form, a body-centered cubic lattice, at . The physical properties of hafnium metal samples are markedly affected by zirconium impurities, especially the nuclear properties, as these two elements are among the most difficult to separate because of their chemical similarity.
A notable physical difference between these metals is their density, with zirconium having about one-half the density of hafnium. The most notable nuclear properties of hafnium are its high thermal neutron capture cross section and that the nuclei of several different hafnium isotopes readily absorb two or more neutrons apiece. In contrast with this, zirconium is practically transparent to thermal neutrons, and it is commonly used for the metal components of nuclear reactors—especially the cladding of their nuclear fuel rods.
Chemical characteristics
Hafnium reacts in air to form a protective film that inhibits further corrosion. Despite this, the metal is attacked by hydrofluoric acid and concentrated sulfuric acid, and can be oxidized with halogens or burnt in air. Like its sister metal zirconium, finely divided hafnium can ignite spontaneously in air. The metal is resistant to concentrated alkalis.
As a consequence of lanthanide contraction, the chemistry of hafnium and zirconium is so similar that the two cannot be separated based on differing chemical reactions. The melting and boiling points of the compounds and the solubility in solvents are the major differences in the chemistry of these twin elements.
Isotopes
At least 40 isotopes of hafnium have been observed, ranging in mass number from 153 to 192. The five stable isotopes have mass numbers ranging from 176 to 180 inclusive. The radioactive isotopes' half-lives range from 400 ms for 153Hf to years for the most stable one, the primordial 174Hf.
The extinct radionuclide 182Hf has a half-life of , and is an important tracker isotope for the formation of planetary cores. The nuclear isomer 178m2Hf was at the center of a controversy for several years regarding its potential use as a weapon.
Occurrence
Hafnium is estimated to make up between 3.0 and 4.8 ppm of the Earth's upper crust by mass. It does not exist as a free element on Earth, but is found combined in solid solution with zirconium in natural zirconium compounds such as zircon, ZrSiO4, which usually has about 1–4% of the Zr replaced by Hf. Rarely, the Hf/Zr ratio increases during crystallization to give the isostructural mineral hafnon (HfSiO4), with atomic Hf > Zr. An obsolete name for a variety of zircon containing unusually high Hf content is alvite.
A major source of zircon (and hence hafnium) ores is heavy mineral sands ore deposits, pegmatites, particularly in Brazil and Malawi, and carbonatite intrusions, particularly the Crown Polymetallic Deposit at Mount Weld, Western Australia. A potential source of hafnium is trachyte tuffs containing rare zircon-hafnium silicates eudialyte or armstrongite, at Dubbo in New South Wales, Australia.
Production
The heavy mineral sands ore deposits of the titanium ores ilmenite and rutile yield most of the mined zirconium, and therefore also most of the hafnium.
Zirconium is a good nuclear fuel-rod cladding metal, with the desirable properties of a very low neutron capture cross section and good chemical stability at high temperatures. However, because of hafnium's neutron-absorbing properties, hafnium impurities in zirconium would cause it to be far less useful for nuclear reactor applications. Thus, a nearly complete separation of zirconium and hafnium is necessary for their use in nuclear power. The production of hafnium-free zirconium is the main source of hafnium.
The chemical properties of hafnium and zirconium are nearly identical, which makes the two difficult to separate. The methods first used—fractional crystallization of ammonium fluoride salts or the fractional distillation of the chloride—have not proven suitable for an industrial-scale production. After zirconium was chosen as a material for nuclear reactor programs in the 1940s, a separation method had to be developed. Liquid–liquid extraction processes with a wide variety of solvents were developed and are still used for producing hafnium. About half of all hafnium metal manufactured is produced as a by-product of zirconium refinement. The end product of the separation is hafnium(IV) chloride. The purified hafnium(IV) chloride is converted to the metal by reduction with magnesium or sodium, as in the Kroll process.
HfCl4 + 2 Mg → Hf + 2 MgCl2 (at 1100 °C)
Further purification is effected by a chemical transport reaction developed by van Arkel and de Boer: in a closed vessel, hafnium reacts with iodine at temperatures of about 500 °C, forming hafnium(IV) iodide; at a tungsten filament at about 1700 °C the reverse reaction happens preferentially, and the chemically bound iodine and hafnium dissociate into the native elements. The hafnium forms a solid coating on the tungsten filament, and the iodine can react with additional hafnium, resulting in a steady iodine turnover and ensuring the chemical equilibrium remains in favor of hafnium production.
Hf + 2 I2 → HfI4 (at 500 °C)
HfI4 → Hf + 2 I2 (at 1700 °C)
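For the Kroll-type reduction shown above (HfCl4 + 2 Mg → Hf + 2 MgCl2), the magnesium requirement and metal yield follow directly from the molar masses. The sketch below uses rounded atomic masses and is an illustrative calculation, not process design data.

```python
# Stoichiometry of the reduction above: HfCl4 + 2 Mg -> Hf + 2 MgCl2.
# Rounded molar masses; illustrative only, not process design data.

M_HF = 178.49               # g/mol hafnium
M_CL = 35.45                # g/mol chlorine
M_MG = 24.31                # g/mol magnesium
M_HFCL4 = M_HF + 4 * M_CL   # g/mol hafnium(IV) chloride

def kroll_reduction(hfcl4_kg: float):
    """Return (kg Mg consumed, kg Hf produced) for complete reduction of HfCl4."""
    mol = hfcl4_kg * 1000 / M_HFCL4
    mg_kg = 2 * mol * M_MG / 1000
    hf_kg = mol * M_HF / 1000
    return mg_kg, hf_kg

mg_needed, hf_out = kroll_reduction(100.0)
print(f"100 kg HfCl4 -> needs ≈ {mg_needed:.1f} kg Mg, yields ≈ {hf_out:.1f} kg Hf")
```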
Chemical compounds
Due to the lanthanide contraction, the ionic radius of hafnium(IV) (0.78 ångström) is almost the same as that of zirconium(IV) (0.79 angstroms). Consequently, compounds of hafnium(IV) and zirconium(IV) have very similar chemical and physical properties. Hafnium and zirconium tend to occur together in nature and the similarity of their ionic radii makes their chemical separation rather difficult. Hafnium tends to form inorganic compounds in the oxidation state of +4. Halogens react with it to form hafnium tetrahalides. At higher temperatures, hafnium reacts with oxygen, nitrogen, carbon, boron, sulfur, and silicon. Some hafnium compounds in lower oxidation states are known.
Hafnium(IV) chloride and hafnium(IV) iodide have some applications in the production and purification of hafnium metal. They are volatile solids with polymeric structures. These tetrachlorides are precursors to various organohafnium compounds such as hafnocene dichloride and tetrabenzylhafnium.
The white hafnium oxide (HfO2), with a melting point of and a boiling point of roughly , is very similar to zirconia, but slightly more basic. Hafnium carbide is the most refractory binary compound known, with a melting point over , and hafnium nitride is the most refractory of all known metal nitrides, with a melting point of . This has led to proposals that hafnium or its carbides might be useful as construction materials that are subjected to very high temperatures. The mixed carbide tantalum hafnium carbide () possesses the highest melting point of any currently known compound, . Recent supercomputer simulations suggest a hafnium alloy with a melting point of .
History
Hafnium's existence was predicted by Dmitri Mendeleev in 1869.
In his report on The Periodic Law of the Chemical Elements, in 1869, Dmitri Mendeleev had implicitly predicted the existence of a heavier analog of titanium and zirconium. At the time of his formulation in 1871, Mendeleev believed that the elements were ordered by their atomic masses and placed lanthanum (element 57) in the spot below zirconium. The exact placement of the elements and the location of missing elements was done by determining the specific weight of the elements and comparing the chemical and physical properties.
The X-ray spectroscopy done by Henry Moseley in 1914 showed a direct dependency between spectral line and effective nuclear charge. This led to the nuclear charge, or atomic number of an element, being used to ascertain its place within the periodic table. With this method, Moseley determined the number of lanthanides and showed the gaps in the atomic number sequence at numbers 43, 61, 72, and 75.
The discovery of the gaps led to an extensive search for the missing elements. In 1914, several people claimed the discovery after Henry Moseley predicted the gap in the periodic table for the then-undiscovered element 72. Georges Urbain asserted that he found element 72 in the rare earth elements in 1907 and published his results on celtium in 1911. Neither the spectra nor the chemical behavior he claimed matched with the element found later, and therefore his claim was turned down after a long-standing controversy. The controversy was partly because the chemists favored the chemical techniques which led to the discovery of celtium, while the physicists relied on the use of the new X-ray spectroscopy method that proved that the substances discovered by Urbain did not contain element 72. In 1921, Charles R. Bury suggested that element 72 should resemble zirconium and therefore was not part of the rare earth elements group. By early 1923, Niels Bohr and others agreed with Bury. These suggestions were based on Bohr's theories of the atom, which were essentially the same as those of the chemist Charles Bury, on the X-ray spectroscopy of Moseley, and on the chemical arguments of Friedrich Paneth.
Encouraged by these suggestions and by the reappearance in 1922 of Urbain's claims that element 72 was a rare earth element discovered in 1911, Dirk Coster and Georg von Hevesy were motivated to search for the new element in zirconium ores. Hafnium was discovered by the two in 1923 in Copenhagen, Denmark, validating the original 1869 prediction of Mendeleev. It was ultimately found in zircon in Norway through X-ray spectroscopy analysis. The place where the discovery took place led to the element being named for the Latin name for "Copenhagen", Hafnia, the home town of Niels Bohr. Today, the Faculty of Science of the University of Copenhagen uses in its seal a stylized image of the hafnium atom.
Hafnium was separated from zirconium through repeated recrystallization of the double ammonium or potassium fluorides by Valdemar Thal Jantzen and von Hevesy. Anton Eduard van Arkel and Jan Hendrik de Boer were the first to prepare metallic hafnium by passing hafnium tetraiodide vapor over a heated tungsten filament in 1924. This process for differential purification of zirconium and hafnium is still in use today.
Hafnium was one of the last two stable elements to be discovered. The element rhenium was found in 1908 by Masataka Ogawa, though its atomic number was misidentified at the time, and it was not generally recognised by the scientific community until its rediscovery by Walter Noddack, Ida Noddack, and Otto Berg in 1925. This makes it somewhat difficult to say if hafnium or rhenium was discovered last.
In 1923, six predicted elements were still missing from the periodic table: 43 (technetium), 61 (promethium), 85 (astatine), and 87 (francium) are radioactive elements and are only present in trace amounts in the environment, thus making elements 75 (rhenium) and 72 (hafnium) the last two unknown non-radioactive elements.
Applications
Most of the hafnium produced is used in the manufacture of control rods for nuclear reactors.
Hafnium has limited technical applications due to a few factors. First, it is very similar to zirconium, a more abundant element that can be used in most cases. Second, pure hafnium was not widely available until the late 1950s, when it became a byproduct of the nuclear industry's need for hafnium-free zirconium. Additionally, hafnium is rare and difficult to separate from other elements, making it expensive. After the Fukushima disaster reduced the demand for hafnium-free zirconium, the price of hafnium increased significantly from around $500–$600/kg ($227–$272/lb) in 2014 to around $1000/kg ($454/lb) in 2015.
Nuclear reactors
The nuclei of several hafnium isotopes can each absorb multiple neutrons. This makes hafnium a good material for nuclear reactors' control rods. Its neutron capture cross section (capture resonance integral Io ≈ 2000 barns) is about 600 times that of zirconium (other elements that are good neutron absorbers for control rods are cadmium and boron). Excellent mechanical properties and exceptional corrosion resistance allow its use in the harsh environment of pressurized water reactors. The German research reactor FRM II uses hafnium as a neutron absorber. It is also common in military reactors, particularly in US naval submarine reactors, to slow reaction rates that are too high. It is seldom found in civilian reactors, the first core of the Shippingport Atomic Power Station (a conversion of a naval reactor) being a notable exception.
Alloys
Hafnium is used in alloys with iron, titanium, niobium, tantalum, and other metals. An alloy used for liquid-rocket thruster nozzles, for example the main engine of the Apollo Lunar Modules, is C103 which consists of 89% niobium, 10% hafnium and 1% titanium.
Small additions of hafnium increase the adherence of protective oxide scales on nickel-based alloys. It thereby improves the corrosion resistance, especially under cyclic temperature conditions that tend to break oxide scales, by inducing thermal stresses between the bulk material and the oxide layer.
Microprocessors
Hafnium-based compounds are employed in gates of transistors as insulators in the 45 nm (and below) generation of integrated circuits from Intel, IBM and others. Hafnium oxide-based compounds are practical high-k dielectrics, allowing reduction of the gate leakage current which improves performance at such scales.
Isotope geochemistry
Isotopes of hafnium and lutetium (along with ytterbium) are also used in isotope geochemistry and geochronological applications, in lutetium-hafnium dating. It is often used as a tracer of isotopic evolution of Earth's mantle through time. This is because 176Lu decays to 176Hf with a half-life of approximately 37 billion years.
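As an illustrative sketch of how the 176Lu → 176Hf decay is used chronologically, the code below applies the standard radiogenic-ingrowth equation. The decay constant is derived from the ~37-billion-year half-life quoted above, but the isotope ratios are assumed, illustrative values rather than data from this article.

```python
import math

# Decay constant from the ~37-billion-year half-life quoted above.
HALF_LIFE_YR = 3.7e10
LAMBDA_176LU = math.log(2) / HALF_LIFE_YR   # ~1.87e-11 per year

def lu_hf_age(hf_measured: float, hf_initial: float, lu_hf: float) -> float:
    """Age (years) from present-day 176Hf/177Hf, an assumed initial 176Hf/177Hf,
    and measured 176Lu/177Hf, using the standard ingrowth equation
    measured = initial + lu_hf * (exp(lambda * t) - 1)."""
    return math.log(1.0 + (hf_measured - hf_initial) / lu_hf) / LAMBDA_176LU

# Purely illustrative ratios (not data from this article): ~1.06 billion years.
print(lu_hf_age(hf_measured=0.282500, hf_initial=0.282300, lu_hf=0.01))
```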
In most geologic materials, zircon is the dominant host of hafnium (>10,000 ppm) and is often the focus of hafnium studies in geology. Hafnium readily substitutes into the zircon crystal lattice, and zircon is therefore very resistant to hafnium mobility and contamination. Zircon also has an extremely low Lu/Hf ratio, making any correction for initial lutetium minimal. Although the Lu/Hf system can be used to calculate a "model age", i.e. the time at which it was derived from a given isotopic reservoir such as the depleted mantle, these "ages" do not carry the same geologic significance as do other geochronological techniques, as the results often yield isotopic mixtures and thus provide an average age of the material from which it was derived.
Garnet is another mineral that contains appreciable amounts of hafnium to act as a geochronometer. The high and variable Lu/Hf ratios found in garnet make it useful for dating metamorphic events.
Other uses
Due to its heat resistance and its affinity to oxygen and nitrogen, hafnium is a good scavenger for oxygen and nitrogen in gas-filled and incandescent lamps. Hafnium is also used as the electrode in plasma cutting because of its ability to shed electrons into the air.
The high energy content of 178m2Hf was the concern of a DARPA-funded program in the US. This program eventually concluded that using the above-mentioned 178m2Hf nuclear isomer of hafnium to construct high-yield weapons with X-ray triggering mechanisms—an application of induced gamma emission—was infeasible because of its expense. See hafnium controversy.
Hafnium metallocene compounds can be prepared from hafnium tetrachloride and various cyclopentadiene-type ligand species. Perhaps the simplest hafnium metallocene is hafnocene dichloride. Hafnium metallocenes are part of a large collection of Group 4 transition metal metallocene catalysts that are used worldwide in the production of polyolefin resins like polyethylene and polypropylene.
A pyridyl-amidohafnium catalyst can be used for the controlled iso-selective polymerization of propylene which can then be combined with polyethylene to make a much tougher recycled plastic.
Hafnium diselenide is studied in spintronics thanks to its charge density wave and superconductivity.
Precautions
Care needs to be taken when machining hafnium because it is pyrophoric—fine particles can spontaneously combust when exposed to air. Compounds that contain this metal are rarely encountered by most people. The pure metal is not considered toxic, but hafnium compounds should be handled as if they were toxic because the ionic forms of metals are normally at greatest risk for toxicity, and limited animal testing has been done for hafnium compounds.
People can be exposed to hafnium in the workplace by breathing, swallowing, skin, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for exposure to hafnium and hafnium compounds in the workplace as TWA 0.5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set the same recommended exposure limit (REL). At levels of 50 mg/m3, hafnium is immediately dangerous to life and health.
References
Further reading
External links
Hafnium at Los Alamos National Laboratory's periodic table of the elements
Hafnium at The Periodic Table of Videos (University of Nottingham)
Hafnium Technical & Safety Data
NLM Hazardous Substances Databank – Hafnium, elemental
Don Clark: Intel Shifts from Silicon to Lift Chip Performance – WSJ, 2007
Hafnium-based Intel 45nm Process Technology
CDC – NIOSH Pocket Guide to Chemical Hazards
Chemical elements
Transition metals
Neutron poisons
1923 in science
Chemical elements with hexagonal close-packed structure | Hafnium | ["Physics"] | 4,339 | ["Chemical elements", "Atoms", "Matter"] |
13,475 | https://en.wikipedia.org/wiki/Harbor | A harbor (American English), or harbour (Australian English, British English, Canadian English, Irish English, New Zealand English; see spelling differences), is a sheltered body of water where ships, boats, and barges can be moored. The term harbor is often used interchangeably with port, which is a man-made facility built for loading and unloading vessels and dropping off and picking up passengers. Harbors usually include one or more ports. Alexandria Port in Egypt, meanwhile, is an example of a port with two harbors.
Harbors may be natural or artificial. An artificial harbor can have deliberately constructed breakwaters, sea walls, or jetties or they can be constructed by dredging, which requires maintenance by further periodic dredging. An example of an artificial harbor is Long Beach Harbor, California, United States, which was an array of salt marshes and tidal flats too shallow for modern merchant ships before it was first dredged in the early 20th century. In contrast, a natural harbor is surrounded on several sides by land. Examples of natural harbors include Sydney Harbour, New South Wales, Australia, Halifax Harbour in Halifax, Nova Scotia, Canada and Trincomalee Harbour in Sri Lanka.
Artificial harbors
Artificial harbors are frequently built for use as ports. The oldest artificial harbor known is the Ancient Egyptian site at Wadi al-Jarf, on the Red Sea coast, which is at least 4500 years old (ca. 2600–2550 BCE, reign of King Khufu). The largest artificially created harbor is Jebel Ali in Dubai. Other large and busy artificial harbors include:
Port of Houston, Texas, United States
Port of Long Beach, California, United States
Port of Los Angeles in San Pedro, California, United States
Port of Rotterdam, Netherlands
Port of Savannah, Georgia, United States
The Ancient Carthaginians constructed fortified, artificial harbors called cothons.
Natural harbors
A natural harbor is a landform where a section of a body of water is protected and deep enough to allow anchorage. Many such harbors are rias. Natural harbors have long been of great strategic naval and economic importance, and many great cities of the world are located on them. Having a protected harbor reduces or eliminates the need for breakwaters as it will result in calmer waves inside the harbor. Some examples are:
Bali Strait, Indonesia
Berehaven Harbour, Ireland
Balikpapan Bay in East Kalimantan, Indonesia
Mumbai in Maharashtra, India
Boston Harbor in Massachusetts, United States
Burrard Inlet in Vancouver, British Columbia, Canada
Chittagong in Chittagong Division, Bangladesh
Cork Harbour, Ireland
Grand Harbour, Malta
Guantánamo Bay, Cuba
Gulf of Paria, Trinidad and Tobago
Haifa Bay, in Haifa, Israel
Halifax Harbour in Nova Scotia, Canada
Hamilton Harbour in Ontario, Canada
Killybegs in County Donegal, Ireland
Kingston Harbour, Jamaica
Mahón harbour, in Menorca, Spain
Marsamxett Harbour, Malta
Milford Haven in Wales, United Kingdom
New York Harbor in the United States
Pago Pago Harbor in American Samoa
Pearl Harbor in Hawaii, United States
Poole Harbour in England, United Kingdom
Port Hercules, Monaco
Sydney Harbour in New South Wales, Australia, technically a ria
Port Stephens in Australia
Tanjung Perak in Surabaya, Indonesia
Port of Tobruk in Tobruk, Libya
Presque Isle Bay in Pennsylvania, United States
Prince William Sound in Alaska, United States
Puget Sound in Washington state, United States
Rías Altas and Rías Baixas in Galicia, Spain
Roadstead of Brest in Brittany, France
San Francisco Bay in California, United States
Scapa Flow in Scotland, United Kingdom
Sept-Îles in Côte-Nord, Quebec, Canada
Shelburne in Nova Scotia, Canada
Subic Bay in Zambales, Philippines
Tallinn Bay in Tallinn, Estonia
Tampa Bay in Florida, United States
Trincomalee Harbour, Sri Lanka
Tuticorin in Tamil Nadu, India
Victoria Harbour in Hong Kong
Visakhapatnam Harbour, India
Vizhinjam in Trivandrum, India
Waitematā Harbour in Auckland, New Zealand
Manukau Harbour in Auckland, New Zealand
Wellington Harbour in Wellington, New Zealand
Port Foster in Deception Island, Antarctica
Ice-free harbors
For harbors near the North and South poles, being ice-free is an important advantage, especially when it is year-round. Examples of these are:
Hammerfest, Norway
Liinakhamari, Russia
Murmansk, Russia
Nakhodka in Nakhodka Bay, Russia
Pechenga, Russia
Prince Rupert, Canada
Valdez, United States
Vardø, Norway
Vostochny Port, Russia
The world's southernmost harbor, located at Antarctica's Winter Quarters Bay (77° 50′ South), is sometimes ice-free, depending on the summertime pack ice conditions.
Important harbors
Although the world's busiest port is a contested title, in 2017 the world's busiest harbor by cargo tonnage was the Port of Ningbo-Zhoushan.
See also
Boyd's Automatic tide signalling apparatus
Dock
Ice pier
Inland harbor
List of marinas
List of seaports
Mandracchio
Marina
Mulberry harbour
Quay
Roadstead
Seaport
Shipyard
Wharf
Notes
External links
Harbor Maintenance Finance and Funding Congressional Research Service
Coastal construction
Nautical terminology
Bodies of water
Infrastructure
Industrial buildings and structures
| Harbor | ["Engineering"] | 1,106 | ["Construction", "Coastal construction", "Infrastructure"] |
13,483 | https://en.wikipedia.org/wiki/Hemoglobin | Hemoglobin (haemoglobin, Hb or Hgb) is a protein containing iron that facilitates the transportation of oxygen in red blood cells. Almost all vertebrates contain hemoglobin, with the sole exception of the fish family Channichthyidae. Hemoglobin in the blood carries oxygen from the respiratory organs (lungs or gills) to the other tissues of the body, where it releases the oxygen to enable aerobic respiration, which powers an animal's metabolism. A healthy human has 12 to 20 grams of hemoglobin in every 100 mL of blood. Hemoglobin is a metalloprotein, a chromoprotein, and a globulin.
In mammals, hemoglobin makes up about 96% of a red blood cell's dry weight (excluding water), and around 35% of the total weight (including water). Hemoglobin has an oxygen-binding capacity of 1.34 mL of O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood plasma alone. The mammalian hemoglobin molecule can bind and transport up to four oxygen molecules.
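The roughly seventy-fold increase can be illustrated with the standard arterial oxygen content relation. This is a textbook formula; the hemoglobin concentration, saturation, and dissolved-oxygen coefficient below are assumed typical values, not figures taken from this article.

```python
# Arterial oxygen content (textbook relation):
#   CaO2 [mL O2 / dL] = 1.34 * Hb[g/dL] * SaO2 + 0.003 * PaO2[mmHg]
def oxygen_content(hb_g_dl: float, sao2: float, pao2_mmhg: float) -> float:
    bound = 1.34 * hb_g_dl * sao2          # O2 carried on hemoglobin
    dissolved = 0.003 * pao2_mmhg          # O2 physically dissolved in plasma
    return bound + dissolved

bound = 1.34 * 15 * 0.98                   # ~19.7 mL/dL with a typical Hb of 15 g/dL
dissolved = 0.003 * 100                    # ~0.3 mL/dL at a PaO2 of 100 mmHg
print(oxygen_content(15, 0.98, 100), bound / dissolved)  # ratio on the order of 65-70x
```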
Hemoglobin also transports other gases. It carries off some of the body's respiratory carbon dioxide (about 20–25% of the total) as carbaminohemoglobin, in which CO2 binds to the globin protein. The molecule also carries the important regulatory molecule nitric oxide bound to a thiol group in the globin protein, releasing it at the same time as oxygen.
Hemoglobin is also found in other cells, including in the A9 dopaminergic neurons of the substantia nigra, macrophages, alveolar cells, lungs, retinal pigment epithelium, hepatocytes, mesangial cells of the kidney, endometrial cells, cervical cells, and vaginal epithelial cells. In these tissues, hemoglobin absorbs unneeded oxygen as an antioxidant, and regulates iron metabolism. Excessive glucose in the blood can attach to hemoglobin and raise the level of hemoglobin A1c.
Hemoglobin and hemoglobin-like molecules are also found in many invertebrates, fungi, and plants. In these organisms, hemoglobins may carry oxygen, or they may transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide and sulfide. A variant called leghemoglobin serves to scavenge oxygen away from anaerobic systems such as the nitrogen-fixing nodules of leguminous plants, preventing oxygen poisoning.
The medical condition hemoglobinemia, in which excess free hemoglobin is present in the blood plasma, is caused by intravascular hemolysis, in which hemoglobin leaks from red blood cells into the plasma.
Research history
In 1825, Johann Friedrich Engelhart discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron, he calculated the molecular mass of hemoglobin to be n × 16000 (n = number of iron atoms per hemoglobin molecule, now known to be 4), the first determination of a protein's molecular mass. This "hasty conclusion" drew ridicule from colleagues who could not believe that any molecule could be so large. However, Gilbert Smithson Adair confirmed Engelhart's results in 1925 by measuring the osmotic pressure of hemoglobin solutions.
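Engelhart's reasoning amounts to a minimum-molecular-mass argument: if iron makes up a small, known mass fraction of the protein and each molecule contains at least one iron atom, the molecular mass is at least the atomic mass of iron divided by that fraction. The sketch below uses an assumed iron content of about 0.35% (an illustrative figure, close to the historically measured value) and reproduces the n × 16,000 result.

```python
# Minimum molecular mass from elemental analysis: at least one Fe atom per molecule.
FE_ATOMIC_MASS = 55.85          # g/mol
iron_mass_fraction = 0.0035     # assumed ~0.35% iron by mass (illustrative)

min_molar_mass = FE_ATOMIC_MASS / iron_mass_fraction
print(min_molar_mass)           # ~16,000 g/mol per iron atom
print(4 * min_molar_mass)       # ~64,000 g/mol for the tetramer (n = 4)
```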
Although blood had been known to carry oxygen since at least 1794, the oxygen-carrying property of hemoglobin was described by Hünefeld in 1840. In 1851, German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the resulting protein solution. Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler.
With the development of X-ray crystallography, it became possible to determine protein structures. In 1959, Max Perutz determined the molecular structure of hemoglobin. For this work he shared the 1962 Nobel Prize in Chemistry with John Kendrew, who determined the structure of the globular protein myoglobin.
The role of hemoglobin in the blood was elucidated by French physiologist Claude Bernard.
The name hemoglobin (or haemoglobin) is derived from the words heme (or haem) and globin, reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom, that can bind one oxygen molecule through ion-induced dipole forces. The most common type of hemoglobin in mammals contains four such subunits.
Genetics
Hemoglobin consists of protein subunits (globin molecules), which are polypeptides, long folded chains of specific amino acids which determine the protein's chemical properties and function. The amino acid sequence of any polypeptide is translated from a segment of DNA, the corresponding gene.
There is more than one hemoglobin gene. In humans, hemoglobin A (the main form of hemoglobin in adults) is coded by genes HBA1, HBA2, and HBB. Alpha 1 and alpha 2 subunits are respectively coded by genes HBA1 and HBA2 close together on chromosome 16, while the beta subunit is coded by gene HBB on chromosome 11. The amino acid sequences of the globin subunits usually differ between species, with the difference growing with evolutionary distance. For example, the most common hemoglobin sequences in humans, bonobos and chimpanzees are completely identical, with exactly the same alpha and beta globin protein chains. Human and gorilla hemoglobin differ in one amino acid in both alpha and beta chains, and these differences grow larger between less closely related species.
Mutations in the genes for hemoglobin can result in variants of hemoglobin within a single species, although one sequence is usually "most common" in each species. Many of these mutations cause no disease, but some cause a group of hereditary diseases called hemoglobinopathies. The best known hemoglobinopathy is sickle-cell disease, which was the first human disease whose mechanism was understood at the molecular level. A mostly separate set of diseases called thalassemias involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation. All these diseases produce anemia.
Variations in hemoglobin sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to the thin air at high altitudes, where lower partial pressure of oxygen diminishes its binding to hemoglobin compared to the higher pressures at sea level. Recent studies of deer mice found mutations in four genes that can account for differences between high- and low-elevation populations. It was found that the genes of the two breeds are "virtually identical—except for those that govern the oxygen-carrying capacity of their hemoglobin. . . . The genetic difference enables highland mice to make more efficient use of their oxygen." Mammoth hemoglobin featured mutations that allowed for oxygen delivery at lower temperatures, thus enabling mammoths to migrate to higher latitudes during the Pleistocene. This was also found in hummingbirds that inhabit the Andes. Hummingbirds already expend a lot of energy and thus have high oxygen demands, and yet Andean hummingbirds have been found to thrive at high altitudes. Non-synonymous mutations in the hemoglobin gene of multiple species living at high elevations (Oreotrochilus, A. castelnaudii, C. violifer, P. gigas, and A. viridicauda) have caused the protein to have less of an affinity for inositol hexaphosphate (IHP), a molecule found in birds that has a similar role as 2,3-BPG in humans; this results in the ability to bind oxygen at lower partial pressures.
Birds' unique circulatory lungs also promote efficient use of oxygen at low partial pressures of O2. These two adaptations reinforce each other and account for birds' remarkable high-altitude performance.
Hemoglobin adaptation extends to humans, as well. There is a higher offspring survival rate among Tibetan women with high oxygen saturation genotypes residing at 4,000 m. Natural selection seems to be the main force working on this gene because the mortality rate of offspring is significantly lower for women with higher hemoglobin-oxygen affinity when compared to the mortality rate of offspring from women with low hemoglobin-oxygen affinity. While the exact genotype and mechanism by which this occurs is not yet clear, selection is acting on these women's ability to bind oxygen in low partial pressures, which overall allows them to better sustain crucial metabolic processes.
Synthesis
Hemoglobin (Hb) is synthesized in a complex series of steps. The heme part is synthesized in a series of steps in the mitochondria and the cytosol of immature red blood cells, while the globin protein parts are synthesized by ribosomes in the cytosol. Production of Hb continues in the cell throughout its early development from the proerythroblast to the reticulocyte in the bone marrow. At this point, the nucleus is lost in mammalian red blood cells, but not in birds and many other species. Even after the loss of the nucleus in mammals, residual ribosomal RNA allows further synthesis of Hb until the reticulocyte loses its RNA soon after entering the vasculature (this hemoglobin-synthetic RNA in fact gives the reticulocyte its reticulated appearance and name).
Structure of heme
Hemoglobin has a quaternary structure characteristic of many multi-subunit globular proteins. Most of the amino acids in hemoglobin form alpha helices, and these helices are connected by short non-helical segments. Hydrogen bonds stabilize the helical sections inside this protein, causing attractions within the molecule, which then causes each polypeptide chain to fold into a specific shape. Hemoglobin's quaternary structure comes from its four subunits in roughly a tetrahedral arrangement.
In most vertebrates, the hemoglobin molecule is an assembly of four globular protein subunits. Each subunit is composed of a protein chain tightly associated with a non-protein prosthetic heme group. Each protein chain arranges into a set of alpha-helix structural segments connected together in a globin fold arrangement. Such a name is given because this arrangement is the same folding motif used in other heme/globin proteins such as myoglobin. This folding pattern contains a pocket that strongly binds the heme group.
A heme group consists of an iron (Fe) ion held in a heterocyclic ring, known as a porphyrin. This porphyrin ring consists of four pyrrole molecules cyclically linked together (by methine bridges) with the iron ion bound in the center. The iron ion, which is the site of oxygen binding, coordinates with the four nitrogen atoms in the center of the ring, which all lie in one plane. The heme is bound strongly (covalently) to the globular protein via the N atoms of the imidazole ring of F8 histidine residue (also known as the proximal histidine) below the porphyrin ring. A sixth position can reversibly bind oxygen by a coordinate covalent bond, completing the octahedral group of six ligands. This reversible bonding with oxygen is why hemoglobin is so useful for transporting oxygen around the body. Oxygen binds in an "end-on bent" geometry where one oxygen atom binds to Fe and the other protrudes at an angle. When oxygen is not bound, a very weakly bonded water molecule fills the site, forming a distorted octahedron.
Even though carbon dioxide is carried by hemoglobin, it does not compete with oxygen for the iron-binding positions but is bound to the amine groups of the protein chains attached to the heme groups.
The iron ion may be either in the ferrous Fe2+ or in the ferric Fe3+ state, but ferrihemoglobin (methemoglobin) (Fe3+) cannot bind oxygen. In binding, oxygen temporarily and reversibly oxidizes (Fe2+) to (Fe3+) while oxygen temporarily turns into the superoxide ion; thus iron must exist in the +2 oxidation state to bind oxygen. If the superoxide ion associated with Fe3+ is protonated, the hemoglobin iron will remain oxidized and incapable of binding oxygen. In such cases, the enzyme methemoglobin reductase will be able to eventually reactivate methemoglobin by reducing the iron center.
In adult humans, the most common hemoglobin type is a tetramer (which contains four subunit proteins) called hemoglobin A, consisting of two α and two β subunits non-covalently bound, each made of 141 and 146 amino acid residues, respectively. This is denoted as α2β2. The subunits are structurally similar and about the same size. Each subunit has a molecular weight of about 16,000 daltons, for a total molecular weight of the tetramer of about 64,000 daltons (64,458 g/mol). Thus, 1 g/dL=0.1551 mmol/L. Hemoglobin A is the most intensively studied of the hemoglobin molecules.
In human infants, the fetal hemoglobin molecule is made up of 2 α chains and 2 γ chains. The γ chains are gradually replaced by β chains as the infant grows.
The four polypeptide chains are bound to each other by salt bridges, hydrogen bonds, and the hydrophobic effect.
Oxygen saturation
In general, hemoglobin is either saturated with oxygen molecules (oxyhemoglobin) or desaturated (deoxyhemoglobin).
Oxyhemoglobin
Oxyhemoglobin is formed during physiological respiration when oxygen binds to the heme component of the protein hemoglobin in red blood cells. This process occurs in the pulmonary capillaries adjacent to the alveoli of the lungs. The oxygen then travels through the blood stream to be dropped off at cells where it is utilized as a terminal electron acceptor in the production of ATP by the process of oxidative phosphorylation. It does not, however, help to counteract a decrease in blood pH. Ventilation, or breathing, may reverse this condition by removal of carbon dioxide, thus causing a shift up in pH.
Hemoglobin exists in two forms, a taut (tense) form (T) and a relaxed form (R). Various factors such as low pH, high CO2 and high 2,3 BPG at the level of the tissues favor the taut form, which has low oxygen affinity and releases oxygen in the tissues. Conversely, a high pH, low CO2, or low 2,3 BPG favors the relaxed form, which can better bind oxygen. The partial pressure of the system also affects O2 affinity where, at high partial pressures of oxygen (such as those present in the alveoli), the relaxed (high affinity, R) state is favoured. Inversely, at low partial pressures (such as those present in respiring tissues), the (low affinity, T) tense state is favoured. Additionally, the binding of oxygen to the iron(II) heme pulls the iron into the plane of the porphyrin ring, causing a slight conformational shift. The shift encourages oxygen to bind to the three remaining heme units within hemoglobin (thus, oxygen binding is cooperative).
Classically, the iron in oxyhemoglobin is seen as existing in the iron(II) oxidation state. However, the complex of oxygen with heme iron is diamagnetic, whereas both oxygen and high-spin iron(II) are paramagnetic. Experimental evidence strongly suggests heme iron is in the iron(III) oxidation state in oxyhemoglobin, with the oxygen existing as superoxide anion (O2•−) or in a covalent charge-transfer complex.
Deoxygenated hemoglobin
Deoxygenated hemoglobin (deoxyhemoglobin) is the form of hemoglobin without the bound oxygen. The absorption spectra of oxyhemoglobin and deoxyhemoglobin differ. The oxyhemoglobin has significantly lower absorption of the 660 nm wavelength than deoxyhemoglobin, while at 940 nm its absorption is slightly higher. This difference is used for the measurement of the amount of oxygen in a patient's blood by an instrument called a pulse oximeter. This difference also accounts for the presentation of cyanosis, the blue to purplish color that tissues develop during hypoxia.
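A pulse oximeter exploits this spectral difference by comparing the pulsatile (AC) and steady (DC) absorbance at the two wavelengths. The sketch below shows the common "ratio of ratios" approach; the linear calibration used here is an assumed, illustrative one, since real devices use empirically fitted, device-specific calibration curves.

```python
# "Ratio of ratios" from red (660 nm) and infrared (940 nm) photoplethysmography.
def ratio_of_ratios(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r: float) -> float:
    # Assumed linear calibration, for illustration only.
    return 110.0 - 25.0 * r

r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.03, dc_ir=1.0)
print(r, spo2_estimate(r))      # r ~ 0.67 -> estimated SpO2 ~ 93%
```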
Deoxygenated hemoglobin is paramagnetic; it is weakly attracted to magnetic fields. In contrast, oxygenated hemoglobin exhibits diamagnetism, a weak repulsion from a magnetic field.
Evolution of vertebrate hemoglobin
Scientists agree that the event that separated myoglobin from hemoglobin occurred after lampreys diverged from jawed vertebrates. This separation of myoglobin and hemoglobin allowed for the different functions of the two molecules to arise and develop: myoglobin has more to do with oxygen storage while hemoglobin is tasked with oxygen transport. The α- and β-like globin genes encode the individual subunits of the protein. The predecessors of these genes arose through another duplication event also after the gnathostome common ancestor derived from jawless fish, approximately 450–500 million years ago. Ancestral reconstruction studies suggest that the preduplication ancestor of the α and β genes was a dimer made up of identical globin subunits, which then evolved to assemble into a tetrameric architecture after the duplication. The development of α and β genes created the potential for hemoglobin to be composed of multiple distinct subunits, a physical composition central to hemoglobin's ability to transport oxygen. Having multiple subunits contributes to hemoglobin's ability to bind oxygen cooperatively as well as be regulated allosterically. Subsequently, the α gene also underwent a duplication event to form the HBA1 and HBA2 genes. These further duplications and divergences have created a diverse range of α- and β-like globin genes that are regulated so that certain forms occur at different stages of development.
Most ice fish of the family Channichthyidae have lost their hemoglobin genes as an adaptation to cold water.
Cooperativity
When oxygen binds to the iron complex, it causes the iron atom to move back toward the center of the plane of the porphyrin ring (see moving diagram). At the same time, the imidazole side-chain of the histidine residue interacting at the other pole of the iron is pulled toward the porphyrin ring. This interaction forces the plane of the ring sideways toward the outside of the tetramer, and also induces a strain in the protein helix containing the histidine as it moves nearer to the iron atom. This strain is transmitted to the remaining three monomers in the tetramer, where it induces a similar conformational change in the other heme sites such that binding of oxygen to these sites becomes easier.
As oxygen binds to one monomer of hemoglobin, the tetramer's conformation shifts from the T (tense) state to the R (relaxed) state. This shift promotes the binding of oxygen to the remaining three monomers' heme groups, thus saturating the hemoglobin molecule with oxygen.
In the tetrameric form of normal adult hemoglobin, the binding of oxygen is, thus, a cooperative process. The binding affinity of hemoglobin for oxygen is increased by the oxygen saturation of the molecule, with the first molecules of oxygen bound influencing the shape of the binding sites for the next ones, in a way favorable for binding. This positive cooperative binding is achieved through steric conformational changes of the hemoglobin protein complex as discussed above; i.e., when one subunit protein in hemoglobin becomes oxygenated, a conformational or structural change in the whole complex is initiated, causing the other subunits to gain an increased affinity for oxygen. As a consequence, the oxygen binding curve of hemoglobin is sigmoidal, or S-shaped, as opposed to the normal hyperbolic curve associated with noncooperative binding.
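The sigmoidal shape of cooperative binding is commonly modeled with the Hill equation; the P50 and Hill coefficient below are typical textbook values for adult hemoglobin, used here only as assumed, illustrative parameters, with non-cooperative myoglobin shown for contrast.

```python
# Hill equation for cooperative O2 binding: saturation = pO2^n / (P50^n + pO2^n)
def hb_saturation(po2_mmhg: float, p50: float = 26.0, hill_n: float = 2.8) -> float:
    return po2_mmhg ** hill_n / (p50 ** hill_n + po2_mmhg ** hill_n)

def mb_saturation(po2_mmhg: float, p50: float = 2.8) -> float:
    # Myoglobin (non-cooperative, n = 1) gives a hyperbolic curve by comparison.
    return po2_mmhg / (p50 + po2_mmhg)

for po2 in (20, 40, 100):       # low-tissue, venous, and alveolar oxygen tensions
    print(po2, round(hb_saturation(po2), 2), round(mb_saturation(po2), 2))
```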
The dynamic mechanism of the cooperativity in hemoglobin and its relation with low-frequency resonance has been discussed.
Binding of ligands other than oxygen
Besides the oxygen ligand, which binds to hemoglobin in a cooperative manner, hemoglobin ligands also include competitive inhibitors such as carbon monoxide (CO) and allosteric ligands such as carbon dioxide (CO2) and nitric oxide (NO). The carbon dioxide is bound to amino groups of the globin proteins to form carbaminohemoglobin; this mechanism is thought to account for about 10% of carbon dioxide transport in mammals. Nitric oxide can also be transported by hemoglobin; it is bound to specific thiol groups in the globin protein to form an S-nitrosothiol, which dissociates into free nitric oxide and thiol again, as the hemoglobin releases oxygen from its heme site. This nitric oxide transport to peripheral tissues is hypothesized to assist oxygen transport in tissues, by releasing vasodilatory nitric oxide to tissues in which oxygen levels are low.
Competitive
The binding of oxygen is affected by molecules such as carbon monoxide (for example, from tobacco smoking, exhaust gas, and incomplete combustion in furnaces). CO competes with oxygen at the heme binding site. Hemoglobin's binding affinity for CO is 250 times greater than its affinity for oxygen. Since carbon monoxide is a colorless, odorless and tasteless gas that poses a potentially fatal threat, carbon monoxide detectors have become commercially available to warn of dangerous levels in residences. When hemoglobin combines with CO, it forms a very bright red compound called carboxyhemoglobin, which may cause the skin of CO poisoning victims to appear pink in death, instead of white or blue. When inspired air contains CO levels as low as 0.02%, headache and nausea occur; if the CO concentration is increased to 0.1%, unconsciousness will follow. In heavy smokers, up to 20% of the oxygen-active sites can be blocked by CO.
In similar fashion, hemoglobin also has competitive binding affinity for cyanide (CN−), sulfur monoxide (SO), and sulfide (S2−), including hydrogen sulfide (H2S). All of these bind to iron in heme without changing its oxidation state, but they nevertheless inhibit oxygen-binding, causing grave toxicity.
The iron atom in the heme group must initially be in the ferrous (Fe2+) oxidation state to support oxygen and other gases' binding and transport (it temporarily switches to ferric during the time oxygen is bound, as explained above). Initial oxidation to the ferric (Fe3+) state without oxygen converts hemoglobin into "hemiglobin" or methemoglobin, which cannot bind oxygen. Hemoglobin in normal red blood cells is protected by a reduction system to keep this from happening. Nitric oxide is capable of converting a small fraction of hemoglobin to methemoglobin in red blood cells. The latter reaction is a remnant activity of the more ancient nitric oxide dioxygenase function of globins.
Allosteric
Carbon dioxide occupies a different binding site on the hemoglobin. At tissues, where the carbon dioxide concentration is higher, carbon dioxide binds to an allosteric site of hemoglobin, facilitating the unloading of oxygen from hemoglobin and ultimately its removal from the body after the oxygen has been released to tissues undergoing metabolism. This increased affinity for carbon dioxide by the venous blood is known as the Bohr effect. Through the enzyme carbonic anhydrase, carbon dioxide reacts with water to give carbonic acid, which decomposes into bicarbonate and protons:
CO2 + H2O → H2CO3 → HCO3− + H+
Hence, blood with high carbon dioxide levels is also lower in pH (more acidic). Hemoglobin can bind protons and carbon dioxide, which causes a conformational change in the protein and facilitates the release of oxygen. Protons bind at various places on the protein, while carbon dioxide binds at the α-amino group. Carbon dioxide binds to hemoglobin and forms carbaminohemoglobin. This decrease in hemoglobin's affinity for oxygen by the binding of carbon dioxide and acid is known as the Bohr effect. The Bohr effect favors the T state rather than the R state (shifting the O2-saturation curve to the right). Conversely, when the carbon dioxide levels in the blood decrease (i.e., in the lung capillaries), carbon dioxide and protons are released from hemoglobin, increasing the oxygen affinity of the protein. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the Root effect. This is seen in bony fish.
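The link between carbon dioxide and blood pH in this equilibrium is usually expressed with the Henderson–Hasselbalch equation; the pKa, the CO2 solubility coefficient, and the example values below are standard textbook numbers used here as assumptions for illustration.

```python
import math

# Henderson-Hasselbalch for the bicarbonate buffer:
#   pH = 6.1 + log10( [HCO3-] / (0.03 * pCO2) ), [HCO3-] in mmol/L, pCO2 in mmHg.
def blood_ph(hco3_mmol_l: float, pco2_mmhg: float) -> float:
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))

print(blood_ph(24, 40))   # ~7.40 at typical arterial values
print(blood_ph(24, 60))   # higher pCO2 -> lower pH (more acidic), as described above
```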
It is necessary for hemoglobin to release the oxygen that it binds; if not, there is no point in binding it. The sigmoidal curve of hemoglobin makes it efficient in binding (taking up O2 in lungs), and efficient in unloading (unloading O2 in tissues).
In people acclimated to high altitudes, the concentration of 2,3-Bisphosphoglycerate (2,3-BPG) in the blood is increased, which allows these individuals to deliver a larger amount of oxygen to tissues under conditions of lower oxygen tension. This phenomenon, where molecule Y affects the binding of molecule X to a transport molecule Z, is called a heterotropic allosteric effect. Hemoglobin in organisms at high altitudes has also adapted such that it has less of an affinity for 2,3-BPG and so the protein will be shifted more towards its R state. In its R state, hemoglobin will bind oxygen more readily, thus allowing organisms to perform the necessary metabolic processes when oxygen is present at low partial pressures.
Animals other than humans use different molecules to bind to hemoglobin and change its O2 affinity under unfavorable conditions. Fish use both ATP and GTP. These bind to a phosphate "pocket" on the fish hemoglobin molecule, which stabilizes the tense state and therefore decreases oxygen affinity. GTP reduces hemoglobin oxygen affinity much more than ATP, which is thought to be due to an extra hydrogen bond formed that further stabilizes the tense state. Under hypoxic conditions, the concentration of both ATP and GTP is reduced in fish red blood cells to increase oxygen affinity.
A variant hemoglobin, called fetal hemoglobin (HbF, α2γ2), is found in the developing fetus, and binds oxygen with greater affinity than adult hemoglobin. This means that the oxygen binding curve for fetal hemoglobin is left-shifted (i.e., a higher percentage of hemoglobin has oxygen bound to it at lower oxygen tension), in comparison to that of adult hemoglobin. As a result, fetal blood in the placenta is able to take oxygen from maternal blood.
Hemoglobin also carries nitric oxide (NO) in the globin part of the molecule. This improves oxygen delivery in the periphery and contributes to the control of respiration. NO binds reversibly to a specific cysteine residue in globin; the binding depends on the state (R or T) of the hemoglobin. The resulting S-nitrosylated hemoglobin influences various NO-related activities such as the control of vascular resistance, blood pressure and respiration. NO is not released in the cytoplasm of red blood cells but transported out of them by an anion exchanger called AE1.
Types of hemoglobin in humans
Hemoglobin variants are a part of the normal embryonic and fetal development. They may also be pathologic mutant forms of hemoglobin in a population, caused by variations in genetics. Some well-known hemoglobin variants, such as sickle-cell anemia, are responsible for diseases and are considered hemoglobinopathies. Other variants cause no detectable pathology, and are thus considered non-pathological variants.
In embryos:
Gower 1 (ζ2ε2).
Gower 2 (α2ε2).
Hemoglobin Portland I (ζ2γ2).
Hemoglobin Portland II (ζ2β2).
In fetuses:
Hemoglobin F (α2γ2).
In neonates (newborns immediately after birth):
Hemoglobin A (adult hemoglobin) (α2β2) – The most common, with a normal amount over 95%
Hemoglobin A2 (α2δ2) – δ chain synthesis begins late in the third trimester and, in adults, it has a normal range of 1.5–3.5%
Hemoglobin F (fetal hemoglobin) (α2γ2) – In adults Hemoglobin F is restricted to a limited population of red cells called F-cells. However, the level of Hb F can be elevated in persons with sickle-cell disease and beta-thalassemia.
Abnormal forms that occur in diseases:
Hemoglobin D – (α2βD2) – A variant form of hemoglobin.
Hemoglobin H (β4) – A variant form of hemoglobin, formed by a tetramer of β chains, which may be present in variants of α thalassemia.
Hemoglobin Barts (γ4) – A variant form of hemoglobin, formed by a tetramer of γ chains, which may be present in variants of α thalassemia.
Hemoglobin S (α2βS2) – A variant form of hemoglobin found in people with sickle cell disease. There is a variation in the β-chain gene, causing a change in the properties of hemoglobin, which results in sickling of red blood cells.
Hemoglobin C (α2βC2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia.
Hemoglobin E (α2βE2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia.
Hemoglobin AS – A heterozygous form causing sickle cell trait with one adult gene and one sickle cell disease gene
Hemoglobin SC disease – A compound heterozygous form with one sickle gene and another encoding hemoglobin C.
Hemoglobin Hopkins-2 – A variant form of hemoglobin that is sometimes viewed in combination with hemoglobin S to produce sickle cell disease.
Degradation in vertebrate animals
When red blood cells reach the end of their life due to aging or defects, they are removed from the circulation by the phagocytic activity of macrophages in the spleen or the liver or hemolyze within the circulation. Free hemoglobin is then cleared from the circulation via the hemoglobin transporter CD163, which is exclusively expressed on monocytes or macrophages. Within these cells the hemoglobin molecule is broken up, and the iron gets recycled. This process also produces one molecule of carbon monoxide for every molecule of heme degraded. Heme degradation is the only natural source of carbon monoxide in the human body, and is responsible for the normal blood levels of carbon monoxide in people breathing normal air.
The other major final product of heme degradation is bilirubin. Increased levels of this chemical are detected in the blood if red blood cells are being destroyed more rapidly than usual. Improperly degraded hemoglobin protein or hemoglobin that has been released from the blood cells too rapidly can clog small blood vessels, especially the delicate blood filtering vessels of the kidneys, causing kidney damage. Iron is removed from heme and salvaged for later use; it is stored as hemosiderin or ferritin in tissues and transported in plasma by beta globulins as transferrins. When the porphyrin ring is broken up, the fragments are normally secreted as a yellow pigment called bilirubin, which is secreted into the intestines as bile. Intestines metabolize bilirubin into urobilinogen. Urobilinogen leaves the body in faeces, in a pigment called stercobilin. The globin is metabolized into amino acids that are then released into circulation.
Diseases related to hemoglobin
Hemoglobin deficiency can be caused either by a decreased amount of hemoglobin molecules, as in anemia, or by decreased ability of each molecule to bind oxygen at the same partial pressure of oxygen. Hemoglobinopathies (genetic defects resulting in abnormal structure of the hemoglobin molecule) may cause both. In any case, hemoglobin deficiency decreases blood oxygen-carrying capacity. Hemoglobin deficiency is, in general, strictly distinguished from hypoxemia, defined as decreased partial pressure of oxygen in blood, although both are causes of hypoxia (insufficient oxygen supply to tissues).
Other common causes of low hemoglobin include loss of blood, nutritional deficiency, bone marrow problems, chemotherapy, kidney failure, or abnormal hemoglobin (such as that of sickle-cell disease).
The ability of each hemoglobin molecule to carry oxygen is normally modified by altered blood pH or CO2, causing an altered oxygen–hemoglobin dissociation curve. However, it can also be pathologically altered in, e.g., carbon monoxide poisoning.
Decrease of hemoglobin, with or without an absolute decrease of red blood cells, leads to symptoms of anemia. Anemia has many different causes, although iron deficiency and its resultant iron deficiency anemia are the most common causes in the Western world. As absence of iron decreases heme synthesis, red blood cells in iron deficiency anemia are hypochromic (lacking the red hemoglobin pigment) and microcytic (smaller than normal). Other anemias are rarer. In hemolysis (accelerated breakdown of red blood cells), associated jaundice is caused by the hemoglobin metabolite bilirubin, and the circulating hemoglobin can cause kidney failure.
Some mutations in the globin chain are associated with the hemoglobinopathies, such as sickle-cell disease and thalassemia. Other mutations, as discussed at the beginning of the article, are benign and are referred to merely as hemoglobin variants.
There is a group of genetic disorders, known as the porphyrias that are characterized by errors in metabolic pathways of heme synthesis. King George III of the United Kingdom was probably the most famous porphyria sufferer.
To a small extent, hemoglobin A slowly combines with glucose at the terminal valine (an alpha amino acid) of each β chain. The resulting molecule is often referred to as Hb A1c, a glycated hemoglobin. The binding of glucose to amino acids in the hemoglobin takes place spontaneously (without the help of an enzyme) in many proteins, and is not known to serve a useful purpose. However, as the concentration of glucose in the blood increases, the percentage of Hb A that turns into Hb A1c increases. In diabetics whose glucose usually runs high, the percent Hb A1c also runs high. Because of the slow rate of Hb A combination with glucose, the Hb A1c percentage reflects a weighted average of blood glucose levels over the lifetime of red cells, which is approximately 120 days. The levels of glycated hemoglobin are therefore measured in order to monitor the long-term control of the chronic disease of type 2 diabetes mellitus (T2DM). Poor control of T2DM results in high levels of glycated hemoglobin in the red blood cells. The normal reference range is approximately 4.0–5.9%. Though difficult to obtain, values less than 7% are recommended for people with T2DM. Levels greater than 9% are associated with poor control of the glycated hemoglobin, and levels greater than 12% are associated with very poor control. Diabetics who keep their glycated hemoglobin levels close to 7% have a much better chance of avoiding the complications that may accompany diabetes (than those whose levels are 8% or higher). In addition, increased glycation of hemoglobin increases its affinity for oxygen, therefore preventing its release at the tissues and inducing a level of hypoxia in extreme cases.
Elevated levels of hemoglobin are associated with increased numbers or sizes of red blood cells, called polycythemia. This elevation may be caused by congenital heart disease, cor pulmonale, pulmonary fibrosis, too much erythropoietin, or polycythemia vera. High hemoglobin levels may also be caused by exposure to high altitudes, smoking, dehydration (artificially by concentrating Hb), advanced lung disease and certain tumors.
Diagnostic uses
Hemoglobin concentration measurement is among the most commonly performed blood tests, usually as part of a complete blood count. For example, it is typically tested before or after blood donation. Results are reported in g/L, g/dL or mol/L. 1 g/dL equals about 0.6206 mmol/L, although the latter units are not used as often due to uncertainty regarding the polymeric state of the molecule. This conversion factor, using the single globin unit molecular weight of 16,000 Da, is more common for hemoglobin concentration in blood. For MCHC (mean corpuscular hemoglobin concentration) the conversion factor 0.155, which uses the tetramer weight of 64,500 Da, is more common. Normal levels are:
Men: 13.8 to 18.0 g/dL (138 to 180 g/L, or 8.56 to 11.17 mmol/L)
Women: 12.1 to 15.1 g/dL (121 to 151 g/L, or 7.51 to 9.37 mmol/L)
Children: 11 to 16 g/dL (110 to 160 g/L, or 6.83 to 9.93 mmol/L)
Pregnant women: 11 to 14 g/dL (110 to 140 g/L, or 6.83 to 8.69 mmol/L) (9.5 to 15 usual value during pregnancy)
Normal values of hemoglobin in the 1st and 3rd trimesters of pregnant women must be at least 11 g/dL and at least 10.5 g/dL during the 2nd trimester.
Dehydration or hyperhydration can greatly influence measured hemoglobin levels. Albumin can indicate hydration status.
If the concentration is below normal, this is called anemia. Anemias are classified by the size of red blood cells, the cells that contain hemoglobin in vertebrates. The anemia is called "microcytic" if red cells are small, "macrocytic" if they are large, and "normocytic" otherwise.
Hematocrit, the proportion of blood volume occupied by red blood cells, is typically about three times the hemoglobin concentration measured in g/dL. For example, if the hemoglobin is measured at 17 g/dL, that compares with a hematocrit of 51%.
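A small sketch of the unit conversions and the hematocrit rule of thumb described above; the molar masses are the rounded values quoted in this article, and the "three times" rule is only an approximation, so the outputs are illustrative.

```python
# Hemoglobin unit conversions and the hematocrit rule of thumb.
MONOMER_DA = 16_000    # per-globin-unit molecular weight (rounded, as quoted above)
TETRAMER_DA = 64_500   # tetramer molecular weight used for MCHC-style conversions

def g_dl_to_mmol_l(hb_g_dl: float, molar_mass_da: float) -> float:
    return hb_g_dl * 10.0 / molar_mass_da * 1000.0   # g/dL -> g/L -> mol/L -> mmol/L

def hematocrit_estimate(hb_g_dl: float) -> float:
    return 3.0 * hb_g_dl                             # rough "rule of three", in percent

print(round(g_dl_to_mmol_l(1.0, MONOMER_DA), 3))     # ~0.625 mmol/L per g/dL (monomer basis)
print(round(g_dl_to_mmol_l(1.0, TETRAMER_DA), 3))    # ~0.155 mmol/L per g/dL (tetramer basis)
print(hematocrit_estimate(17.0))                     # ~51% hematocrit for 17 g/dL
```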
Laboratory hemoglobin test methods require a blood sample (arterial, venous, or capillary) and analysis on hematology analyzer and CO-oximeter. Additionally, a new noninvasive hemoglobin (SpHb) test method called Pulse CO-Oximetry is also available with comparable accuracy to invasive methods.
Concentrations of oxy- and deoxyhemoglobin can be measured continuously, regionally and noninvasively using NIRS. NIRS can be used both on the head and on muscles. This technique is often used for research in e.g. elite sports training, ergonomics, rehabilitation, patient monitoring, neonatal research, functional brain monitoring, brain–computer interface, urology (bladder contraction), neurology (Neurovascular coupling) and more.
Hemoglobin mass can be measured in humans using the non-radioactive, carbon monoxide (CO) rebreathing technique that has been used for more than 100 years. With this technique, a small volume of pure CO gas is inhaled and rebreathed for a few minutes. During rebreathing, CO binds to hemoglobin present in red blood cells. Based on the increase in blood CO after the rebreathing period, the hemoglobin mass can be determined through the dilution principle.
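The dilution principle behind CO rebreathing can be sketched as follows. The CO binding capacity (a Hüfner-type constant) and the example numbers are assumed, illustrative values, and practical protocols include corrections (for CO remaining in the lung, myoglobin uptake, and so on) that are omitted here.

```python
# CO-rebreathing dilution sketch: all administered CO is assumed to end up bound to Hb.
CO_BINDING_ML_PER_G = 1.39   # assumed mL of CO bound per gram of hemoglobin

def hb_mass_from_co(co_admin_ml: float, delta_hbco_fraction: float) -> float:
    """Total hemoglobin mass (g) from the administered CO volume (mL STPD)
    and the measured rise in carboxyhemoglobin fraction (e.g. 0.065 for +6.5%)."""
    return co_admin_ml / (CO_BINDING_ML_PER_G * delta_hbco_fraction)

print(hb_mass_from_co(co_admin_ml=80.0, delta_hbco_fraction=0.065))  # ~890 g total Hb
```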
Long-term control of blood sugar concentration can be measured by the concentration of Hb A1c. Measuring it directly would require many samples because blood sugar levels vary widely through the day. Hb A1c is the product of the irreversible reaction of hemoglobin A with glucose. A higher glucose concentration results in more Hb A1c. Because the reaction is slow, the Hb A1c proportion represents the glucose level in blood averaged over the lifespan of red blood cells, which is typically ~120 days. An Hb A1c proportion of 6.0% or less shows good long-term glucose control, while values above 7.0% are elevated. This test is especially useful for diabetics.
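For interpretation, Hb A1c is often converted to an estimated average glucose; the linear relation below is the widely cited ADAG regression, applied here as an assumed illustration rather than something derived in this article.

```python
# Estimated average glucose (mg/dL) from Hb A1c (%), using the commonly cited
# linear ADAG relation: eAG = 28.7 * A1c - 46.7 (an empirical regression).
def estimated_average_glucose(a1c_percent: float) -> float:
    return 28.7 * a1c_percent - 46.7

for a1c in (6.0, 7.0, 9.0):
    print(a1c, round(estimated_average_glucose(a1c)))   # ~126, ~154, ~212 mg/dL
```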
The functional magnetic resonance imaging (fMRI) machine uses the signal from deoxyhemoglobin, which is sensitive to magnetic fields since it is paramagnetic. Combined measurement with NIRS shows good correlation with both the oxy- and deoxyhemoglobin signal compared to the BOLD signal.
Athletic tracking and self-tracking uses
Hemoglobin can be tracked noninvasively, to build an individual data set tracking the hemoconcentration and hemodilution effects of daily activities for better understanding of sports performance and training. Athletes are often concerned about endurance and intensity of exercise. The sensor uses light-emitting diodes that emit red and infrared light through the tissue to a light detector, which then sends a signal to a processor to calculate the absorption of light by the hemoglobin protein. This sensor is similar to a pulse oximeter, which consists of a small sensing device that clips to the finger.
Analogues in non-vertebrate organisms
A variety of oxygen-transport and -binding proteins exist in organisms throughout the animal and plant kingdoms. Organisms including bacteria, protozoans, and fungi all have hemoglobin-like proteins whose known and predicted roles include the reversible binding of gaseous ligands. Since many of these proteins contain globins and the heme moiety (iron in a flat porphyrin support), they are often called hemoglobins, even if their overall tertiary structure is very different from that of vertebrate hemoglobin. In particular, the distinction of "myoglobin" and hemoglobin in lower animals is often impossible, because some of these organisms do not contain muscles. Or, they may have a recognizable separate circulatory system but not one that deals with oxygen transport (for example, many insects and other arthropods). In all these groups, heme/globin-containing molecules (even monomeric globin ones) that deal with gas-binding are referred to as oxyhemoglobins. In addition to dealing with transport and sensing of oxygen, they may also deal with NO, CO2, sulfide compounds, and even O2 scavenging in environments that must be anaerobic. They may even deal with detoxification of chlorinated materials in a way analogous to heme-containing P450 enzymes and peroxidases.
The structure of hemoglobins varies across species. Hemoglobin occurs in all kingdoms of organisms, but not in all organisms. Primitive species such as bacteria, protozoa, algae, and plants often have single-globin hemoglobins. Many nematode worms, molluscs, and crustaceans contain very large multisubunit molecules, much larger than those in vertebrates. In particular, chimeric hemoglobins found in fungi and giant annelids may contain both globin and other types of proteins.
One of the most striking occurrences and uses of hemoglobin in organisms is in the giant tube worm (Riftia pachyptila, also called Vestimentifera), which can reach 2.4 meters in length and populates ocean volcanic vents. Instead of a digestive tract, these worms contain a population of bacteria constituting half the organism's weight. The bacteria oxidize H2S from the vent with O2 from the water to produce energy to make food from H2O and CO2. The worms' upper end is a deep-red fan-like structure ("plume"), which extends into the water and absorbs H2S and O2 for the bacteria, and CO2 for use as synthetic raw material similar to photosynthetic plants. The structures are bright red due to their content of several extraordinarily complex hemoglobins that have up to 144 globin chains, each including associated heme structures. These hemoglobins are remarkable for being able to carry oxygen in the presence of sulfide, and even to carry sulfide, without being completely "poisoned" or inhibited by it as hemoglobins in most other species are.
Other oxygen-binding proteins
Myoglobin Found in the muscle tissue of many vertebrates, including humans, it gives muscle tissue a distinct red or dark gray color. It is very similar to hemoglobin in structure and sequence, but is not a tetramer; instead, it is a monomer that lacks cooperative binding. It is used to store oxygen rather than transport it.
Hemocyanin The second most common oxygen-transporting protein found in nature, it is found in the blood of many arthropods and molluscs. Uses copper prosthetic groups instead of iron heme groups and is blue in color when oxygenated.
Hemerythrin Some marine invertebrates and a few species of annelid use this iron-containing non-heme protein to carry oxygen in their blood. Appears pink/violet when oxygenated, clear when not.
Chlorocruorin Found in many annelids, it is very similar to erythrocruorin, but the heme group is significantly different in structure. Appears green when deoxygenated and red when oxygenated.
Vanabins Also known as vanadium chromagens, they are found in the blood of sea squirts. They were once hypothesized to use the metal vanadium as an oxygen binding prosthetic group. However, although they do contain vanadium by preference, they apparently bind little oxygen, and thus have some other function, which has not been elucidated (sea squirts also contain some hemoglobin). They may act as toxins.
Erythrocruorin Found in many annelids, including earthworms, it is a giant free-floating blood protein containing many dozens—possibly hundreds—of iron- and heme-bearing protein subunits bound together into a single protein complex with a molecular mass greater than 3.5 million daltons.
Leghemoglobin In leguminous plants, such as alfalfa or soybeans, the nitrogen fixing bacteria in the roots are protected from oxygen by this iron heme containing oxygen-binding protein. The specific enzyme protected is nitrogenase, which is unable to reduce nitrogen gas in the presence of free oxygen.
Coboglobin A synthetic cobalt-based porphyrin. Coboprotein would appear colorless when oxygenated, but yellow when in veins.
Presence in nonerythroid cells
Some nonerythroid cells (i.e., cells other than the red blood cell line) contain hemoglobin. In the brain, these include the A9 dopaminergic neurons in the substantia nigra, astrocytes in the cerebral cortex and hippocampus, and all mature oligodendrocytes. It has been suggested that brain hemoglobin in these cells may enable the "storage of oxygen to provide a homeostatic mechanism in anoxic conditions, which is especially important for A9 DA neurons that have an elevated metabolism with a high requirement for energy production". It has been noted further that "A9 dopaminergic neurons may be at particular risk of anoxic degeneration since in addition to their high mitochondrial activity they are under intense oxidative stress caused by the production of hydrogen peroxide via autoxidation and/or monoamine oxidase (MAO)-mediated deamination of dopamine and the subsequent reaction of accessible ferrous iron to generate highly toxic hydroxyl radicals". This may explain the risk of degeneration of these cells in Parkinson's disease. The post-mortem darkness of these cells (the origin of the Latin name, substantia nigra) is not caused by the hemoglobin-derived iron, but rather by neuromelanin.
Outside the brain, hemoglobin has non-oxygen-carrying functions as an antioxidant and a regulator of iron metabolism in macrophages, alveolar cells, and mesangial cells in the kidney.
In history, art, and music
Historically, the color of blood was associated with rust through the link between the planet Mars and the Roman god of war, since the planet's orange-red color reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color. The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics.
Artist Julian Voss-Andreae created a sculpture called Heart of Steel (Hemoglobin) in 2005, based on the protein's backbone. The sculpture was made from glass and weathering steel. The intentional rusting of the initially shiny work of art mirrors hemoglobin's fundamental chemical reaction of oxygen binding to iron.
Montreal artist Nicolas Baier created Lustre (Hémoglobine), a sculpture in stainless steel that shows the structure of the hemoglobin molecule. It is displayed in the atrium of McGill University Health Centre's research centre in Montreal. The sculpture measures about 10 metres × 10 metres × 10 metres.
See also
Carbaminohemoglobin (Hb associated with CO2)
Carboxyhemoglobin (Hb associated with CO)
Chlorophyll (Mg heme)
Complete blood count
Delta globin
Hemoglobinometer
Hemoprotein
Methemoglobin (ferric Hb, or ferrihemoglobin)
Oxyhemoglobin (with diatomic oxygen, colored blood-red)
Tegillarca granosa - "blood clam"
Vaska's complex – iridium organometallic complex notable for its ability to bind to O2 reversibly
References
Notes
Sources
Further reading
Hazelwood, Loren (2001) Can't Live Without It: The story of hemoglobin in sickness and in health, Nova Science Publishers
External links
National Anemia Action Council at anemia.org
New hemoglobin type causes mock diagnosis with pulse oxymeters at www.life-of-science.net
Animation of hemoglobin: from deoxy to oxy form at vimeo.com
Hemoglobins
Equilibrium chemistry
Respiratory physiology | Hemoglobin | ["Chemistry"] | 11,313 | ["Equilibrium chemistry"] |
13,499 | https://en.wikipedia.org/wiki/Transclusion | In computer science, transclusion is the inclusion of part or all of an electronic document into one or more other documents by reference via hypertext. Transclusion is usually performed when the referencing document is displayed, and is normally automatic and transparent to the end user. The result of transclusion is a single integrated document made of parts assembled dynamically from separate sources, possibly stored on different computers in disparate places.
Transclusion facilitates modular design (using the "single source of truth" model, whether in data, code, or content): a resource is stored once and distributed for reuse in multiple documents. Updates or corrections to a resource are then reflected in any referencing documents.
In systems where transclusion is not available, and in some situations where it is available but not desirable, substitution is often the complementary option, whereby a static copy of the "single source of truth" is integrated into the relevant document. Both approaches are used, for example, in creating the content of Wikipedia (see Wikipedia:Transclusion and Wikipedia:Substitution for more information). Substituted static copies introduce a different set of considerations for version control than transclusion does, but they are sometimes necessary.
Ted Nelson coined the term for his 1980 nonlinear book Literary Machines, but the idea of a master copy and occurrences had been applied 17 years earlier, in Sketchpad. It is now a common technique among textbook writers when a single topic or subject needs to be discussed in multiple chapters. An advantage of this system in textbooks is that it reduces data redundancy and keeps the book to a manageable size.
Technical considerations
Context neutrality
Transclusion works better when transcluded sections of text are self-contained, so that the meaning and validity of the text is independent of context. For example, formulations like "as explained in the previous section" are problematic, because the transcluded section may appear in a different context, causing confusion. What constitutes "context-neutral" text varies, but often includes things like company information or boilerplate. To help overcome context sensitivity issues such as those aforementioned, systems capable of transclusion are often also capable of suppressing particular elements within the transcluded content. For example, Wikipedia can use tags such as "noinclude", "onlyinclude", and "includeonly" for this purpose. Typical examples of elements that often require such exceptions are document titles, footnotes, and cross-references; in this way, they can be automatically suppressed upon transclusion, without manual reworking for each instance.
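A minimal sketch of how such suppression tags can be handled when assembling a transcluded page is shown below; it is a simplified illustration in Python, not MediaWiki's actual parser, and the sample text is invented.

```python
import re

def prepare_for_transclusion(wikitext: str) -> str:
    """Simplified handling of MediaWiki-style inclusion tags.

    - <noinclude>...</noinclude> blocks are dropped when transcluding.
    - <includeonly>...</includeonly> tags are removed, keeping their content.
    - If <onlyinclude> blocks exist, only their contents are transcluded.
    (The real parser handles many more cases; this is illustrative only.)
    """
    only = re.findall(r"<onlyinclude>(.*?)</onlyinclude>", wikitext, re.DOTALL)
    if only:
        return "".join(only)
    text = re.sub(r"<noinclude>.*?</noinclude>", "", wikitext, flags=re.DOTALL)
    text = re.sub(r"</?includeonly>", "", text)
    return text

source = "Shared boilerplate.<noinclude>Title shown only on the source page.</noinclude>"
print(prepare_for_transclusion(source))  # -> "Shared boilerplate."
```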
Parameterization
Under some circumstances, and in some technical contexts, transcluded sections of text may not require strict adherence to the "context neutrality" principle, because the transcluded sections are capable of parameterization. Parameterization implies the ability to modify certain portions or subsections of a transcluded text depending on exogenous variables that can be changed independently. This is customarily done by supplying a transcluded text with one or more substitution placeholders. These placeholders are then replaced with the corresponding variable values prior to rendering the final transcluded output in context.
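A minimal sketch of parameterized transclusion, using Python's standard string.Template as the placeholder mechanism (the placeholder names are invented for illustration, not taken from any particular system):

```python
from string import Template

# A transcluded snippet with substitution placeholders.
snippet = Template("Copyright $year $company. All rights reserved.")

# The same snippet rendered in two different host documents, each supplying
# its own exogenous variable values prior to the final transcluded output.
print(snippet.substitute(year=2024, company="Example Corp"))
print(snippet.substitute(year=2024, company="Another Ltd"))
```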
Origins
The concept of reusing file content began with computer programming languages: COBOL in 1960, followed by BCPL, PL/I, C, and by 1978, even FORTRAN. An include directive allows common source code to be reused while avoiding the pitfalls of copy-and-paste programming and hard coding of constants. As with many innovations, a problem developed: multiple include directives may pull in the same content, inadvertently repeating the same source code in the final result and causing errors. Include guards solve this by omitting the duplicate content once it has been included a single time.
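The include-guard idea can be sketched in miniature as a tiny resolver that expands hypothetical `include <file>` directives and keeps a set of already-included paths, so content pulled in by several directives is emitted only once. The directive syntax and file layout are assumptions for illustration, not any particular language's mechanism.

```python
from pathlib import Path

def resolve_includes(path: Path, already_included: set[Path] | None = None) -> str:
    """Recursively expand lines of the form 'include <file>'.

    The already_included set plays the role of an include guard: a file
    referenced by several different include directives is emitted only once,
    so the final result contains no duplicated content.
    """
    if already_included is None:
        already_included = set()
    if path in already_included:
        return ""                      # duplicate inclusion suppressed
    already_included.add(path)

    output = []
    for line in path.read_text().splitlines():
        if line.startswith("include "):
            target = path.parent / line.split(maxsplit=1)[1]
            output.append(resolve_includes(target, already_included))
        else:
            output.append(line)
    return "\n".join(part for part in output if part)
```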
The idea of a single, reusable source of information led to concepts like don't repeat yourself and the abstraction principle. A further use was found in making programs more portable: portable source code uses an include directive to specify a standard library, which contains system-specific source code that varies with each computer environment.
History and implementation by Project Xanadu
Ted Nelson, who originated the words hypertext and hypermedia, also coined the term transclusion in his 1980 book Literary Machines. Part of his proposal was the idea that micropayments could be automatically exacted from the reader for all the text, no matter how many snippets of content are taken from various places.
However, according to Nelson, the concept of transclusion had already formed part of his 1965 description of hypertext. Nelson defines transclusion as "...the same content knowably in more than one place," setting it apart from more special cases, such as the inclusion of content from a different location (which he calls transdelivery) or an explicit quotation that remains connected to its origins (which he calls transquotation).
Some hypertext systems, including Ted Nelson's own Xanadu Project, support transclusion.
Nelson has delivered a demonstration of Web transclusion, the Little Transquoter (programmed to Nelson's specification by Andrew Pam in 2004–2005). It creates a new format built on portion addresses from Web pages; when dereferenced, each portion on the resulting page remains click-connected to its original context.
Implementation on the Web
HTTP, as a transmission protocol, has rudimentary support for transclusion via byte serving: specifying a byte range in an HTTP request message.
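For example, a client can request only part of a resource by sending a Range header. A brief sketch using the Python requests library (the URL is a placeholder, and the server must support range requests):

```python
import requests

# Request only the first 100 bytes of a resource via HTTP byte serving.
response = requests.get(
    "https://example.org/document.txt",   # hypothetical resource
    headers={"Range": "bytes=0-99"},
    timeout=10,
)

# A server that honors the range replies with 206 Partial Content;
# one that ignores it replies with 200 and the full body.
print(response.status_code)
print(response.headers.get("Content-Range"))
print(len(response.content))
```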
Transclusion can occur either before (server-side) or after (client-side) transmission. For example:
An HTML document may be pre-composed by the server before delivery to the client using Server-Side Includes or another server-side application.
XML Entities or HTML Objects may be parsed by the client, which then requests the corresponding resources separately from the main document.
A web browser may cache elements using its own algorithms, which can operate without explicit directives in the document's markup.
AngularJS employs transclusion for nested directive operation.
Publishers of web content may object to the transclusion of material from their own web sites into other web sites, or they may require an agreement to do so. Critics of the practice may refer to various forms of inline linking as bandwidth theft or leeching.
Other publishers may seek specifically to have their materials transcluded into other web sites, as in the form of web advertising, or as widgets like a hit counter or web bug.
Mashups make use of transclusion to assemble resources or data into a new application, as by placing geo-tagged photos on an interactive map, or by displaying business metrics in an interactive dashboard.
Client-side HTML
HTML defines elements for client-side transclusion of images, scripts, stylesheets, other documents, and other types of media. HTML has relied heavily on client-side transclusion from the earliest days of the Web (so web pages could be displayed more quickly before multimedia elements finished loading), rather than embedding the raw data for such objects inline into a web page's markup.
Through techniques such as Ajax, scripts associated with an HTML document can instruct a web browser to modify the document in-place, as opposed to the earlier technique of having to pull an entirely new version of the page from the web server. Such scripts may transclude elements or documents from a server after the web browser has rendered the page, in response to user input or changing conditions, for example.
Future versions of HTML may support deeper transclusion of portions of documents using XML technologies such as entities, XPointer document referencing, and XSLT manipulations.
Proxy servers may employ transclusion to reduce redundant transmissions of commonly requested resources.
AngularJS, a popular front-end framework developed and maintained by Google, has a directive called ng-transclude that marks the insertion point for the transcluded DOM of the nearest parent directive that uses transclusion.
Server-side transclusion
Transclusion can be accomplished on the server side, as through Server Side Includes and markup entity references resolved by the server software. It is a feature of substitution templates.
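A very small sketch of the idea behind Server Side Includes is given below: include comments are expanded into file contents before the page is delivered. The directive handling is simplified relative to real SSI implementations, and the file names are hypothetical.

```python
import re
from pathlib import Path

SSI_DIRECTIVE = re.compile(r'<!--#include\s+file="([^"]+)"\s*-->')

def expand_ssi(document: Path, root: Path) -> str:
    """Replace each <!--#include file="..." --> comment with the referenced
    file's contents before the page is sent to the client. Real SSI supports
    more directives (virtual paths, variables, conditionals) than this."""
    html = document.read_text()
    return SSI_DIRECTIVE.sub(
        lambda match: (root / match.group(1)).read_text(),
        html,
    )

# Hypothetical usage: page.shtml contains <!--#include file="footer.html" -->
# print(expand_ssi(Path("page.shtml"), root=Path(".")))
```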
Transclusion of source code
Transclusion of source code into software design or reference materials lets source code be presented within the document, but not interpreted as part of the document, preserving the semantic consistency of the inserted code in relation to its source codebase.
Transclusion in content management
In content management for single-source publishing, top-class content management systems increasingly provide for transclusion and substitution. Component content management systems, especially, aim to take the modular design principle to its optimal degree. MediaWiki provides transclusion and substitution and is a good off-the-shelf option for many smaller organizations (such as smaller nonprofits and SMEs) that may not have the budget for other commercial options; for details, see Component content management system.
Implementation in software development
A common feature in programming languages is the ability of one source code file to transclude, in whole or part, another source code file. The part transcluded is interpreted as if it were part of the transcluding file. Some of the methods are:
Include: Some programs will explicitly INCLUDE another file. The included file can consist of executable code, declarations, compiler instructions, and/or branching to later parts of the document, depending on compile-time variables.
Macro: Assembly languages, and some high-level programming languages, will typically provide for macros, special named instructions used to make definitions, generate executable code, provide looping and other decisions, and modify the document produced according to parameters supplied to the macro when the file is rendered.
Copy: The COBOL programming language has the COPY command, in which a copied file is inserted into the copying document, replacing the COPY command. Code and declarations in the copied file can be modified by a REPLACING argument as part of the copying command.
See also
and content reuse
References
Further reading
External links
Ted Nelson: Transclusion: Fixing Electronic Literature—on Google Tech Talks, 29 January 2007.
HTML
Hypertext
Metadata
Ted Nelson | Transclusion | ["Technology"] | 2,100 | ["Metadata", "Data"] |
13,509 | https://en.wikipedia.org/wiki/H.%20P.%20Lovecraft | Howard Phillips Lovecraft (, ; August 20, 1890 – March 15, 1937) was an American writer of weird, science, fantasy, and horror fiction. He is best known for his creation of the Cthulhu Mythos.
Born in Providence, Rhode Island, Lovecraft spent most of his life in New England. After his father's institutionalization in 1893, he lived affluently until his family's wealth dissipated after the death of his grandfather. Lovecraft then lived with his mother, in reduced financial security, until her institutionalization in 1919. He began to write essays for the United Amateur Press Association and in 1913 wrote a critical letter to a pulp magazine that ultimately led to his involvement in pulp fiction. He became active in the speculative fiction community and was published in several pulp magazines. Lovecraft moved to New York City, marrying Sonia Greene in 1924, and later became the center of a wider group of authors known as the "Lovecraft Circle". They introduced him to Weird Tales, which became his most prominent publisher. Lovecraft's time in New York took a toll on his mental state and financial conditions. He returned to Providence in 1926 and produced some of his most popular works, including The Call of Cthulhu, At the Mountains of Madness, The Shadow over Innsmouth, and The Shadow Out of Time. He remained active as a writer for 11 years until his death from intestinal cancer at the age of 46.
Lovecraft's literary corpus is rooted in cosmicism, which was simultaneously his personal philosophy and the main theme of his fiction. Cosmicism posits that humanity is an insignificant part of the cosmos and could be swept away at any moment. He incorporated fantasy and science fiction elements into his stories, representing the perceived fragility of anthropocentrism. This was tied to his ambivalent views on knowledge. His works were largely set in a fictionalized version of New England. Civilizational decline also plays a major role in his works, as he believed that the West was in decline during his lifetime. Lovecraft's early political views were conservative and traditionalist; additionally, he held a number of racist views for much of his adult life. Following the Great Depression, Lovecraft's political views became more socialist while still remaining elitist and aristocratic.
Throughout his adult life, Lovecraft was never able to support himself from his earnings as an author and editor. He was virtually unknown during his lifetime and was almost exclusively published in pulp magazines before his death. A scholarly revival of Lovecraft's work began in the 1970s, and he is now regarded as one of the most significant 20th-century authors of supernatural horror fiction. Many direct adaptations and spiritual successors followed. Works inspired by Lovecraft, adaptations or original works, began to form the basis of the Cthulhu Mythos, which utilizes Lovecraft's characters, setting, and themes.
Biography
Early life and family tragedies
Lovecraft was born in his family home on August 20, 1890, in Providence, Rhode Island. He was the only child of Winfield Scott Lovecraft and Sarah Susan ("Susie"; née Phillips) Lovecraft, who were both of English descent. Susie's family was of substantial means at the time of their marriage, as her father, Whipple Van Buren Phillips, was involved in business ventures. In April 1893, after a psychotic episode in a Chicago hotel, Winfield was committed to Butler Hospital in Providence. His medical records state that he was "doing and saying strange things at times" for a year before his commitment. The person who reported these symptoms is unknown. Winfield spent five years in Butler before dying in 1898. His death certificate listed the cause of death as general paresis, a term synonymous with late-stage syphilis. Throughout his life, Lovecraft maintained that his father fell into a paralytic state, due to insomnia and overwork, and remained that way until his death. It is not known whether Lovecraft was simply kept ignorant of his father's illness or whether his later statements were intentionally misleading.
After his father's institutionalization, Lovecraft resided in the family home with his mother, his maternal aunts Lillian and Annie, and his maternal grandparents Whipple and Robie. According to family friends, Susie doted on the young Lovecraft excessively, pampering him and never letting him out of her sight. Lovecraft later recollected that his mother was "permanently stricken with grief" after his father's illness. Whipple became a father figure to Lovecraft in this time; Lovecraft later noted that his grandfather became the "centre of my entire universe". Whipple, who often traveled to manage his business, maintained correspondence by letter with the young Lovecraft who, by the age of three, was already proficient at reading and writing.
Whipple encouraged the young Lovecraft to have an appreciation of literature, especially classical literature and English poetry. In his old age, he helped raise the young H. P. Lovecraft and educated him not only in the classics, but also in original weird tales of "winged horrors" and "deep, low, moaning sounds" which he created for his grandchild's entertainment. The original sources of Phillips's weird tales are unidentified. Lovecraft himself guessed that they originated from Gothic novelists like Ann Radcliffe, Matthew Lewis, and Charles Maturin. It was during this period that Lovecraft was introduced to some of his earliest literary influences, such as The Rime of the Ancient Mariner illustrated by Gustave Doré, One Thousand and One Nights, Thomas Bulfinch's Age of Fable, and Ovid's Metamorphoses.
While there is no indication that Lovecraft was particularly close to his grandmother, Robie, her death in 1896 had a profound effect on him. According to him, it sent his family into "a gloom from which it never fully recovered". His mother and aunts wore black mourning dresses that "terrified" him. This was also the time when Lovecraft, approximately five-and-a-half years old, started having nightmares that later informed his fictional writings. Specifically, he began to have recurring nightmares of beings he referred to as "night-gaunts". He credited their appearance to the influence of Doré's illustrations, which would "whirl me through space at a sickening rate of speed, the while fretting & impelling me with their detestable tridents". Thirty years later, night-gaunts appeared in Lovecraft's fiction.
Lovecraft's earliest known literary works were written at the age of seven, and were poems restyling the Odyssey and other Greco-Roman mythological stories. Lovecraft later wrote that during his childhood he was fixated on the Greco-Roman pantheon, and briefly accepted them as genuine expressions of divinity, foregoing his Christian upbringing. He recalled, at five years old, being told Santa Claus did not exist and retorted by asking why "God is not equally a myth?" At the age of eight, he took a keen interest in the sciences, particularly astronomy and chemistry. He also examined the anatomical books that were held in the family library, which taught him the specifics of human reproduction that were not yet explained to him. As a result, he found that it "virtually killed my interest in the subject".
In 1902, according to Lovecraft's later correspondence, astronomy became a guiding influence on his worldview. He began publishing the periodical Rhode Island Journal of Astronomy, using the hectograph printing method. Lovecraft went in and out of elementary school repeatedly, oftentimes with home tutors making up for the lost years, missing time due to health concerns that have not been determined. In their written recollections, his peers described him as withdrawn but welcoming to those who shared his then-current fascination with astronomy, inviting them to look through his prized telescope.
Education and financial decline
By 1900, Whipple's various business concerns were suffering a downturn, which resulted in the slow erosion of his family's wealth. He was forced to let his family's hired servants go, leaving Lovecraft, Whipple, and Susie (the only unmarried sister) alone in the family home. In the spring of 1904, Whipple's largest business venture suffered a catastrophic failure. Within months, he died at age 70 due to a stroke. After Whipple's death, Susie was unable to financially support the upkeep of the expansive family home on what remained of the Phillips estate. Later that year, she was forced to move to a small duplex with her son.
Lovecraft called this time one of the darkest of his life, remarking in a 1934 letter that he saw no point in living anymore; he considered the possibility of committing suicide. His scientific curiosity and desire to know more about the world prevented him from doing so. In fall 1904, he entered high school. Much like his earlier school years, Lovecraft was periodically removed from school for long periods for what he termed "near breakdowns". He did say, though, that while having some conflicts with teachers, he enjoyed high school, becoming close with a small circle of friends. Lovecraft also performed well academically, excelling in particular at chemistry and physics. Aside from a pause in 1904, he also resumed publishing the Rhode Island Journal of Astronomy as well as starting the Scientific Gazette, which dealt mostly with chemistry. It was also during this period that Lovecraft produced the first of the fictional works that he was later known for, namely "The Beast in the Cave" and "The Alchemist".
It was in 1908, prior to what would have been his high school graduation, that Lovecraft suffered another unidentified health crisis, though this instance was more severe than his prior illnesses. The exact circumstances and causes remain unknown. The only direct records are Lovecraft's own correspondence wherein he retrospectively described it variously as a "nervous collapse" and "a sort of breakdown", in one letter blaming it on the stress of high school despite his enjoying it. In another letter concerning the events of 1908, he notes, "I was and am prey to intense headaches, insomnia, and general nervous weakness which prevents my continuous application to any thing".
Although Lovecraft maintained that he was going to attend Brown University after high school, he never graduated and never attended school again. Whether Lovecraft suffered from a physical ailment, a mental one, or some combination thereof has never been determined. An account from a high school classmate described Lovecraft as exhibiting "terrible tics" and that at times "he'd be sitting in his seat and he'd suddenly up and jump". Harry K. Brobst, a psychology professor, examined the account and claimed that chorea minor was the probable cause of Lovecraft's childhood symptoms, while noting that instances of chorea minor after adolescence are very rare. In his letters, Lovecraft acknowledged that he suffered from bouts of chorea as a child. Brobst further ventured that Lovecraft's 1908 breakdown was attributed to a "hysteroid seizure", a term that has become synonymous with atypical depression. In another letter concerning the events of 1908, Lovecraft stated that he "could hardly bear to see or speak to anyone, & liked to shut out the world by pulling down dark shades & using artificial light".
Earliest recognition
Few of Lovecraft and Susie's activities between late 1908 and 1913 were recorded. Lovecraft described the steady continuation of their financial decline highlighted by his uncle's failed business that cost Susie a large portion of their already dwindling wealth. One of Susie's friends, Clara Hess, recalled a visit during which Susie spoke continuously about Lovecraft being "so hideous that he hid from everyone and did not like to walk upon the streets where people could gaze on him." Despite Hess's protests to the contrary, Susie maintained this stance. For his part, Lovecraft said he found his mother to be "a positive marvel of consideration". A next-door neighbor later pointed out that what others in the neighborhood often assumed were loud, nocturnal quarrels between mother and son, were actually recitations of William Shakespeare, an activity that seemed to delight them both.
During this period, Lovecraft revived his earlier scientific periodicals. He endeavored to commit himself to the study of organic chemistry, Susie buying the expensive glass chemistry assemblage he wanted. Lovecraft found his studies were stymied by the mathematics involved, which he found boring and caused headaches that incapacitated him for the remainder of the day. Lovecraft's first non-self-published poem appeared in a local newspaper in 1912. Called Providence in 2000 A.D., it envisioned a future where Americans of English descent were displaced by Irish, Italian, Portuguese, and Jewish immigrants. In this period he also wrote racist poetry, including "New-England Fallen" and "On the Creation of Niggers", but there is no indication that either were published during his lifetime.
In 1911, Lovecraft's letters to editors began appearing in pulp and weird-fiction magazines, most notably Argosy. A 1913 letter critical of Fred Jackson, one of Argosy's more prominent writers, started Lovecraft down a path that defined the remainder of his career as a writer. In the following letters, Lovecraft described Jackson's stories as being "trivial, effeminate, and, in places, coarse". Continuing, Lovecraft argued that Jackson's characters exhibit the "delicate passions and emotions proper to negroes and anthropoid apes." This sparked a nearly year-long feud in the magazine's letters section between the two writers and their respective supporters. Lovecraft's most prominent opponent was John Russell, who often replied in verse, and to whom Lovecraft felt compelled to reply because he respected Russell's writing skills. The most immediate effect of this feud was the recognition garnered from Edward F. Daas, then head editor of the United Amateur Press Association (UAPA). Daas invited Russell and Lovecraft to join the organization and both accepted, Lovecraft in April 1914.
Rejuvenation and tragedy
Lovecraft immersed himself in the world of amateur journalism for most of the following decade. During this period, he advocated for amateurism's superiority to commercialism. Lovecraft defined commercialism as writing for what he considered low-brow publications for pay. This was contrasted with his view of "professional publication", which was what he called writing for what he considered respectable journals and publishers. He thought of amateur journalism as serving as practice for a professional career.
Lovecraft was appointed chairman of the Department of Public Criticism of the UAPA in late 1914. He used this position to advocate for what he saw as the superiority of archaic English language usage. Emblematic of the Anglophilic opinions he maintained throughout his life, he openly criticized other UAPA contributors for their "Americanisms" and "slang". Often, these criticisms were embedded in xenophobic and racist statements that the "national language" was being negatively changed by immigrants. In mid-1915, Lovecraft was elected vice-president of the UAPA. Two years later, he was elected president and appointed other board members who mostly shared his belief in the supremacy of British English over modern American English. Another significant event of this time was the beginning of World War I. Lovecraft published multiple criticisms of the American government and public's reluctance to join the war to protect England, which he viewed as America's ancestral homeland.
In 1916, Lovecraft published his first short story, "The Alchemist", in the main UAPA journal, which was a departure from his usual verse. Due to the encouragement of W. Paul Cook, another UAPA member and future lifelong friend, Lovecraft began writing and publishing more prose fiction. Soon afterwards, he wrote "The Tomb" and "Dagon". "The Tomb", by Lovecraft's own admission, was greatly influenced by the style and structure of Edgar Allan Poe's works. Meanwhile, "Dagon" is considered Lovecraft's first work that displays the concepts and themes that his writings later became known for. Lovecraft published another short story, "Beyond the Wall of Sleep" in 1919, which was his first science fiction story.
Lovecraft's term as president of the UAPA ended in 1918, and he returned to his former post as chairman of the Department of Public Criticism. In 1917, as Lovecraft related to Kleiner, Lovecraft made an aborted attempt to enlist in the United States Army. Though he passed the physical exam, he told Kleiner that his mother threatened to do anything, legal or otherwise, to prove that he was unfit for service. After his failed attempt to serve in World War I, he attempted to enroll in the Rhode Island Army National Guard, but his mother used her family connections to prevent it.
During the winter of 1918–1919, Susie, exhibiting the symptoms of a nervous breakdown, went to live with her elder sister, Lillian. The nature of Susie's illness is unclear, as her medical papers were later destroyed in a fire at Butler Hospital. Winfield Townley Scott, who was able to read the papers before the fire, described Susie as having suffered a psychological collapse. Neighbour and friend Clara Hess, interviewed in 1948, recalled instances of Susie describing "weird and fantastic creatures that rushed out from behind buildings and from corners at dark." In the same account, Hess described a time when they crossed paths in downtown Providence and Susie was unaware of where she was. In March 1919, she was committed to Butler Hospital, like her husband before her. Lovecraft's immediate reaction to Susie's commitment was visceral, writing to Kleiner that "existence seems of little value", and that he wished "it might terminate". During Susie's time at Butler, Lovecraft periodically visited her and walked the large grounds with her.
Late 1919 saw Lovecraft become more outgoing. After a period of isolation, he began joining friends in trips to writer gatherings; the first being a talk in Boston presented by Lord Dunsany, whom Lovecraft had recently discovered and idolized. In early 1920, at an amateur writer convention, he met Frank Belknap Long, who ended up being Lovecraft's most influential and closest confidant for the remainder of his life. The influence of Dunsany is apparent in his 1919 output, which is part of what was later called Lovecraft's Dream Cycle, including "The White Ship" and "The Doom That Came to Sarnath". In early 1920, he wrote "The Cats of Ulthar" and "Celephaïs", which were also strongly influenced by Dunsany.
It was later in 1920 that Lovecraft began publishing the earliest Cthulhu Mythos stories. The Cthulhu Mythos, a term coined by later authors, encompasses Lovecraft's stories that share a commonality in the revelation of cosmic insignificance, initially realistic settings, and recurring entities and texts. The prose poem "Nyarlathotep" and the short story "The Crawling Chaos", in collaboration with Winifred Virginia Jackson, were written in late 1920. Following in early 1921 came "The Nameless City", the first story that falls definitively within the Cthulhu Mythos. In it is one of Lovecraft's most enduring phrases, a couplet recited by Abdul Alhazred; "That is not dead which can eternal lie; And with strange aeons even death may die." In the same year, he also wrote "The Outsider", which has become one of Lovecraft's most heavily analyzed, and differently interpreted, stories. It has been variously interpreted as being autobiographical, an allegory of the psyche, a parody of the afterlife, a commentary on humanity's place in the universe, and a critique of progress.
On May 24, 1921, Susie died in Butler Hospital, due to complications from an operation on her gallbladder five days earlier. Lovecraft's initial reaction, expressed in a letter written nine days after Susie's death, was a deep state of sadness that crippled him physically and emotionally. He again expressed a desire that his life might end. Lovecraft's later response was relief, as he became able to live independently from his mother. His physical health also began to improve, although he was unaware of the exact cause. Despite Lovecraft's reaction, he continued to attend amateur journalist conventions. Lovecraft met his future wife, Sonia Greene, at one such convention in July.
Marriage and New York
Lovecraft's aunts disapproved of his relationship with Sonia. Lovecraft and Greene married on March 3, 1924, and relocated to her Brooklyn apartment at 259 Parkside Avenue; she thought he needed to leave Providence to flourish and was willing to support him financially. Greene, who had been married before, later said Lovecraft performed satisfactorily as a lover, but she had to take the initiative in all aspects of the relationship. She attributed Lovecraft's passive nature to a stultifying upbringing by his mother. Lovecraft's weight increased on his wife's home cooking.
He was enthralled by New York City, and, in what was informally dubbed the Kalem Club, he acquired a group of encouraging intellectual and literary friends who urged him to submit stories to Weird Tales. Its editor, Edwin Baird, accepted many of Lovecraft's stories for the ailing publication, including "Under the Pyramids", which was ghostwritten for Harry Houdini. Established informally some years before Lovecraft arrived in New York, the core Kalem Club members were boys' adventure novelist Henry Everett McNeil, the lawyer and anarchist writer James Ferdinand Morton Jr., and the poet Rheinhart Kleiner.
On January 1, 1925, Sonia moved from Parkside to Cleveland in response to a job opportunity, and Lovecraft left for a small first-floor apartment on 169 Clinton Street "at the edge of Red Hook"—a location which came to discomfort him greatly. Later that year, the Kalem Club's four regular attendees were joined by Lovecraft along with his protégé Frank Belknap Long, bookseller George Willard Kirk, and Samuel Loveman. Loveman was Jewish, but he and Lovecraft became close friends in spite of the latter's antisemitic attitudes. By the 1930s, writer and publisher Herman Charles Koenig was one of the last to become involved with the Kalem Club.
Not long after the marriage, Greene lost her business and her assets disappeared in a bank failure. Lovecraft made efforts to support his wife through regular jobs, but his lack of previous work experience meant he lacked proven marketable skills. The publisher of Weird Tales was attempting to make the loss-making magazine profitable and offered the job of editor to Lovecraft, who declined, citing his reluctance to relocate to Chicago on aesthetic grounds. Baird was succeeded by Farnsworth Wright, whose writing Lovecraft criticized. Lovecraft's submissions were often rejected by Wright. This may have been partially due to censorship guidelines imposed in the aftermath of a Weird Tales story that hinted at necrophilia, although after Lovecraft's death, Wright accepted many of the stories he had originally rejected.
Sonia also became ill and, immediately after recovering, relocated to Cincinnati, and then to Cleveland; her employment required constant travel. Adding to his feelings of failure in a city with a large immigrant population, Lovecraft's single-room apartment was burgled, leaving him with only the clothes he was wearing. In August 1925, he wrote "The Horror at Red Hook" and "He". In the latter, the narrator says "My coming to New York had been a mistake; for whereas I had looked for poignant wonder and inspiration [...] I had found instead only a sense of horror and oppression which threatened to master, paralyze, and annihilate me." This was an expression of his despair at being in New York. It was at around this time he wrote the outline for "The Call of Cthulhu", with its theme of the insignificance of all humanity. During this time, Lovecraft wrote "Supernatural Horror in Literature" on the eponymous subject. It later became one of the most influential essays on supernatural horror. With a weekly allowance Greene sent, Lovecraft moved to a working-class area of Brooklyn Heights, where he resided in a tiny apartment. He lost a considerable amount of body weight by 1926, when he left for Providence.
Return to Providence and death
Back in Providence, Lovecraft lived with his aunts in a "spacious brown Victorian wooden house" at 10 Barnes Street until 1933. He then moved to 66 Prospect Street, which became his final home. The period beginning after his return to Providence contains some of his most prominent works, including The Dream-Quest of Unknown Kadath, The Case of Charles Dexter Ward, "The Call of Cthulhu", and The Shadow over Innsmouth. The former two stories are partially autobiographical, as scholars have argued that The Dream-Quest of Unknown Kadath is about Lovecraft's return to Providence and The Case of Charles Dexter Ward is, in part, about the city itself. The former story also represents a partial repudiation of Dunsany's influence, as Lovecraft decided that his style did not come to him naturally. At this time, he frequently revised work for other authors and did a large amount of ghostwriting, including The Mound, "Winged Death", and "The Diary of Alonzo Typer". Client Harry Houdini was laudatory, and attempted to help Lovecraft by introducing him to the head of a newspaper syndicate. Plans for a further project, a book titled The Cancer of Superstition, were ended by Houdini's death in 1926. After returning, he also began to engage in antiquarian travels across the eastern seaboard during the summer months. During the spring–summer of 1930, Lovecraft visited, among other locations, New York City, Brattleboro, Vermont, Wilbraham, Massachusetts, Charleston, South Carolina, and Quebec City.
Later, in August, Robert E. Howard wrote a letter to Weird Tales praising a then-recent reprint of Lovecraft's "The Rats in the Walls" and discussing some of the Gaelic references used within. Its editor, Farnsworth Wright, forwarded the letter to Lovecraft, who responded positively to Howard, and soon the two writers were engaged in a vigorous correspondence that lasted for the rest of Howard's life. Howard quickly became a member of the Lovecraft Circle, a group of writers and friends all linked through Lovecraft's voluminous correspondence, as he introduced his many like-minded friends to one another and encouraged them to share their stories, utilize each other's fictional creations, and help each other succeed in the field of pulp fiction.
Meanwhile, Lovecraft was increasingly producing work that brought him no remuneration. Affecting a calm indifference to the reception of his works, Lovecraft was in reality extremely sensitive to criticism and easily precipitated into withdrawal. He was known to give up trying to sell a story after it was rejected once. Sometimes, as with The Shadow over Innsmouth, he wrote a story that might have been commercially viable but did not try to sell it. Lovecraft even ignored interested publishers. He failed to reply when one inquired about any novel Lovecraft might have ready: although he had completed such a work, The Case of Charles Dexter Ward, it was never typed up. A few years after Lovecraft moved to Providence, he and his wife Sonia Greene, having lived separately for so long, agreed to an amicable divorce. Greene moved to California in 1933 and remarried in 1936, unaware that Lovecraft, despite his assurances to the contrary, never officially signed the final decree.
As a result of the Great Depression, he shifted towards socialism, decrying both his prior political beliefs and the rising tide of fascism. He thought that socialism was a workable middle ground between what he saw as the destructive impulses of both the capitalists and the Marxists of his day. This was based in a general opposition to cultural upheaval, as well as support for an ordered society. Electorally, he supported Franklin D. Roosevelt, but he thought that the New Deal was not sufficiently leftist. Lovecraft's support for it was based in his view that no other set of reforms were possible at that time.
In late 1936, he witnessed the publication of The Shadow over Innsmouth as a paperback book. 400 copies were printed, and the work was advertised in Weird Tales and several fan magazines. However, Lovecraft was displeased, as this book was riddled with errors that required extensive editing. It sold slowly and only approximately 200 copies were bound. The remaining 200 copies were destroyed after the publisher went out of business seven years later. By this point, Lovecraft's literary career was reaching its end. Shortly after having written his last original short story, "The Haunter of the Dark", he stated that the hostile reception of At the Mountains of Madness had done "more than anything to end my effective fictional career". His declining psychological and physical states made it impossible for him to continue writing fiction.
On June 11, 1936, Robert E. Howard was informed that his chronically ill mother would not awaken from her coma. He walked out to his car and died by suicide with a pistol that he had stored there. His mother died shortly thereafter. This deeply affected Lovecraft, who consoled Howard's father through correspondence. Almost immediately after hearing about Howard's death, Lovecraft wrote a brief memoir titled "In Memoriam: Robert Ervin Howard", which he distributed to his correspondents. Meanwhile, Lovecraft's physical health was deteriorating. He was suffering from an affliction that he referred to as "grippe".
Due to his fear of doctors, Lovecraft was not examined until a month before his death and was diagnosed with terminal cancer of the small intestine. He was hospitalized in the Jane Brown Memorial Hospital and lived in constant pain until his death on March 15, 1937, in Providence. In accordance with his lifelong scientific curiosity, he kept a diary of his illness until he was physically incapable of holding a pen. After a small funeral, Lovecraft was buried in Swan Point Cemetery and was listed alongside his parents on the Phillips family monument. In 1977, fans erected a headstone in the same cemetery, on which they inscribed his name, the dates of his birth and death, and the phrase "I AM PROVIDENCE"—a line from one of his personal letters.
Personal views
Politics
Lovecraft began his life as a Tory, which was likely the result of his conservative upbringing. His family supported the Republican Party for the entirety of his life. While it is unclear how consistently he voted, he voted for Herbert Hoover in the 1928 U.S. presidential election. Rhode Island as a whole remained politically conservative and Republican into the 1930s. Lovecraft himself was an Anglophile who supported the British monarchy. He opposed democracy and thought that the United States should be governed by an aristocracy. This viewpoint emerged during his youth and lasted until the end of the 1920s. During World War I, his Anglophilia caused him to strongly support the Entente against the Central Powers. Many of his earlier poems were devoted to then-current political subjects, and he published several political essays in his amateur journal, The Conservative. He was a teetotaler who supported the implementation of Prohibition, which was one of the few reforms that he supported during the early part of his life. While remaining a teetotaler, he later became convinced that Prohibition was ineffectual in the 1930s. His personal justification for his early political viewpoints was primarily based on tradition and aesthetics.
As a result of the Great Depression, Lovecraft re-examined his political views. Initially, he thought that affluent people would take on the characteristics of his ideal aristocracy and solve America's problems. When this did not occur, he became a socialist. This shift was caused by his observation that the Depression was harming American society. It was also influenced by the increase in socialism's political capital during the 1930s. One of the main points of Lovecraft's socialism was its opposition to Soviet Marxism, as he thought that a Marxist revolution would bring about the destruction of American civilization. Lovecraft thought that an intellectual aristocracy needed to be formed to preserve America. His ideal political system is outlined in his 1933 essay "Some Repetitions on the Times". Lovecraft used this essay to echo the political proposals that were made over the course of the last few decades. In this essay, he advocates governmental control of resource distribution, fewer working hours and a higher wage, and unemployment insurance and old age pensions. He also outlines the need for an oligarchy of intellectuals. In his view, power needed to be restricted to those who are sufficiently intelligent and educated. He frequently used the term "fascism" to describe this form of government, but, according to S. T. Joshi, it bore little resemblance to that ideology.
Lovecraft had varied views on the political figures of his day. He was an ardent supporter of Franklin D. Roosevelt. He saw that Roosevelt was trying to steer a middle course between the conservatives and the revolutionaries, which he approved of. While he thought that Roosevelt should have enacted more progressive policies, he came to the conclusion that the New Deal was the only realistic option for reform. He thought that voting for his opponents on the political left was a wasted effort. Internationally, like some Americans, he initially expressed support for Adolf Hitler. More specifically, he thought that Hitler would preserve German culture. However, he thought that Hitler's racial policies should be based on culture rather than descent. There is evidence that, at the end of his life, Lovecraft began to oppose Hitler. Harry K. Brobst, Lovecraft's downstairs neighbor, went to Germany and witnessed Jews being beaten. Lovecraft and his aunt were angered by this, and his discussions of Hitler drop off after this point.
Atheism
Lovecraft was an atheist. His viewpoints on religion are outlined in his 1922 essay "A Confession of Unfaith". In this essay, he describes his shift away from the Protestantism of his parents to the atheism of his adulthood. Lovecraft was raised by a conservative Protestant family. He was introduced to the Bible and Santa Claus when he was two. He passively accepted both of them. Over the course of the next few years, he was introduced to Grimms' Fairy Tales and One Thousand and One Nights, favoring the latter. In response, Lovecraft took on the identity of "Abdul Alhazred", a name he later used for the author of the Necronomicon. Lovecraft experienced a brief period as a Greco-Roman pagan shortly thereafter. According to this account, his first moment of skepticism occurred before his fifth birthday, when he questioned if God is a myth after learning that Santa Claus is not real. In 1896, he was introduced to Greco-Roman myths and became "a genuine pagan".
This came to an end in 1902, when Lovecraft was introduced to space. He later described this event as the most poignant in his life. In response to this discovery, Lovecraft took to studying astronomy and described his observations in the local newspaper. Before his thirteenth birthday, he became convinced of humanity's impermanence. By the time he was seventeen, he had read detailed writings that agreed with his worldview. Lovecraft ceased writing positively about progress, instead developing his later cosmic philosophy. Despite his interests in science, he had an aversion to realistic literature, so he became interested in fantastical fiction. Lovecraft became pessimistic when he entered amateur journalism in 1914. World War I seemed to confirm his viewpoints. He began to despise philosophical idealism. Lovecraft took to discussing and debating his pessimism with his peers, which allowed him to solidify his philosophy. His readings of Friedrich Nietzsche and H. L. Mencken, among other writers, furthered this development. At the end of his essay, Lovecraft states that all he desired was oblivion. He was willing to cast aside any illusion that he may still have held.
Race
Race is the most controversial aspect of Lovecraft's legacy, expressed in many disparaging remarks against non-Anglo-Saxon races and cultures in his works. Scholars have argued that these racial attitudes were common in the American society of his day, particularly in New England. As he grew older, his original racial worldview became classist and elitist, which regarded non-white members of the upper class as honorary members of the superior race. Lovecraft was a white supremacist. Despite this, he did not hold all white people in uniform high regard, but rather esteemed English people and those of English descent. In his early published essays, private letters, and personal utterances, he argued for a strong color line to preserve race and culture. His arguments were supported using disparagements of various races in his journalism and letters, and allegorically in some of his fictional works that depict miscegenation between humans and non-human creatures. This is evident in his portrayal of the Deep Ones in The Shadow over Innsmouth. Their interbreeding with humanity is framed as being a type of miscegenation that corrupts both the town of Innsmouth and the protagonist.
Initially, Lovecraft showed sympathy to minorities who adopted Western culture, even to the extent of marrying a Jewish woman he viewed as being "well assimilated". By the 1930s, Lovecraft's views on ethnicity and race had moderated. He supported ethnicities' preserving their native cultures; for example, he thought that "a real friend of civilisation wishes merely to make the Germans more German, the French more French, the Spaniards more Spanish, & so on". This represented a shift from his previous support for cultural assimilation. His shift was partially the result of his exposure to different cultures through his travels and circle. The former resulted in him writing positively about Québécois and First Nations cultural traditions in his travelogue of Quebec. However, this did not represent a complete elimination of his racial prejudices.
Influences
His interest in weird fiction began in his childhood when his grandfather, who preferred Gothic stories, told him stories of his own design. Lovecraft's childhood home on Angell Street had a large library that contained classical literature, scientific works, and early weird fiction. At the age of five, Lovecraft enjoyed reading One Thousand and One Nights, and was reading Nathaniel Hawthorne a year later. He was also influenced by the travel literature of John Mandeville and Marco Polo. This led to his discovery of gaps in then-contemporary science, which prevented Lovecraft from committing suicide in response to the death of his grandfather and his family's declining financial situation during his adolescence. These travelogues may have also influenced how Lovecraft's later works describe their characters and locations. For example, there is a resemblance between the powers of the Tibetan enchanters in The Travels of Marco Polo and the powers unleashed on Sentinel Hill in "The Dunwich Horror".
One of Lovecraft's most significant literary influences was Edgar Allan Poe, whom he described as his "God of Fiction". Poe's fiction was introduced to Lovecraft when the latter was eight years old. His earlier works were significantly influenced by Poe's prose and writing style. He also made extensive use of Poe's unity of effect in his fiction. Furthermore, At the Mountains of Madness directly quotes Poe and was influenced by The Narrative of Arthur Gordon Pym of Nantucket. One of the main themes of the two stories is to discuss the unreliable nature of language as a method of expressing meaning. In 1919, Lovecraft's discovery of the stories of Lord Dunsany moved his writing in a new direction, resulting in a series of fantasies. Throughout his life, Lovecraft referred to Dunsany as the author who had the greatest impact on his literary career. The initial result of this influence was the Dream Cycle, a series of fantasies that originally take place in prehistory, but later shift to a dreamworld setting. By 1930, Lovecraft decided that he would no longer write Dunsanian fantasies, arguing that the style did not come naturally to him. Additionally, he also read and cited Arthur Machen and Algernon Blackwood as influences in the 1920s.
Aside from horror authors, Lovecraft was significantly influenced by the Decadents, the Puritans, and the Aesthetic Movement. In "H. P. Lovecraft: New England Decadent", Barton Levi St. Armand, a professor emeritus of English and American studies at Brown University, has argued that these three influences combined to define Lovecraft as a writer. He traces this influence to both Lovecraft's stories and letters, noting that he actively cultivated the image of a New England gentleman in his letters. Meanwhile, his influence from the Decadents and the Aesthetic Movement stems from his readings of Edgar Allan Poe. Lovecraft's aesthetic worldview and fixation on decline stem from these readings. The idea of cosmic decline is described as having been Lovecraft's response to both the Aesthetic Movement and the 19th-century Decadents. St. Armand describes it as being a combination of non-theological Puritan thought and the Decadent worldview. This is used as a division in his stories, particularly in "The Horror at Red Hook", "Pickman's Model", and "The Music of Erich Zann". The division between Puritanism and Decadence, St. Armand argues, represents a polarization between an artificial paradise and oneiroscopic visions of different worlds.
A non-literary inspiration came from then-contemporary scientific advances in biology, astronomy, geology, and physics. Lovecraft's study of science contributed to his view of the human race as insignificant, powerless, and doomed in a materialistic and mechanistic universe. Lovecraft was a keen amateur astronomer from his youth, often visiting the Ladd Observatory in Providence, and penning numerous astronomical articles for his personal journal and local newspapers. Lovecraft's materialist views led him to espouse his philosophical views through his fiction; these philosophical views came to be called cosmicism. Cosmicism took on a more pessimistic tone with his creation of what is now known as the Cthulhu Mythos, a fictional universe that contains alien deities and horrors. The term "Cthulhu Mythos" was likely coined by later writers after Lovecraft's death. In his letters, Lovecraft jokingly called his fictional mythology "Yog-Sothothery".
Dreams had a major role in Lovecraft's literary career. In 1991, as a result of his rising place in American literature, it was popularly thought that Lovecraft extensively transcribed his dreams when writing fiction. However, the majority of his stories are not transcribed dreams. Instead, many of them are directly influenced by dreams and dreamlike phenomena. In his letters, Lovecraft frequently compared his characters to dreamers. They are described as being as helpless as a real dreamer who is experiencing a nightmare. His stories also have dreamlike qualities. The Randolph Carter stories deconstruct the division between dreams and reality. The dreamlands in The Dream-Quest of Unknown Kadath are a shared dreamworld that can be accessed by a sensitive dreamer. Meanwhile, in "The Silver Key", Lovecraft mentions the concept of "inward dreams", which implies the existence of outward dreams. Burleson compares this deconstruction to Carl Jung's argument that dreams are the source of archetypal myths. Lovecraft's way of writing fiction required both a level of realism and dreamlike elements. Citing Jung, Burleson argues that a writer may create realism by being inspired by dreams.
Themes
Cosmicism
The central theme of Lovecraft's corpus is cosmicism. Cosmicism is a literary philosophy that argues that humanity is an insignificant force in the universe. Despite appearing pessimistic, Lovecraft thought of himself as being a cosmic indifferentist, which is expressed in his fiction. In it, human beings are often subject to powerful beings and other cosmic forces, but these forces are not so much malevolent as they are indifferent toward humanity. He believed in a meaningless, mechanical, and uncaring universe that human beings could never fully understand. There is no allowance for beliefs that could not be supported scientifically. Lovecraft first articulated this philosophy in 1921, but he did not fully incorporate it into his fiction until five years later. "Dagon", "Beyond the Wall of Sleep", and "The Temple" contain early depictions of this concept, but the majority of his early tales do not analyze the concept. "Nyarlathotep" interprets the collapse of human civilization as being a corollary to the collapse of the universe. "The Call of Cthulhu" represents an intensification of this theme. In it, Lovecraft introduces the idea of alien influences on humanity, which came to dominate all subsequent works. In these works, Lovecraft expresses cosmicism through the usage of confirmation rather than revelation. Lovecraftian protagonists do not learn that they are insignificant. Instead, they already know it and have it confirmed to them through an event.
Knowledge
Lovecraft's fiction reflects his own ambivalent views regarding the nature of knowledge. This expresses itself in the concept of forbidden knowledge. In Lovecraft's stories, happiness is only achievable through blissful ignorance. Trying to know things that are not meant to be known leads to harm and psychological danger. This concept intersects with several other ideas. This includes the idea that the visible reality is an illusion masking the horrific true reality. Similarly, there are also intersections with the concepts of ancient civilizations that exert a malign influence on humanity and the general philosophy of cosmicism. According to Lovecraft, self-knowledge can bring ruin to those who seek it. Those seekers would become aware of their own insignificance in the wider cosmos and would be unable to bear the weight of this knowledge. Lovecraftian horror is not achieved through external phenomena. Instead, it is reached through the internalized psychological impact that knowledge has on its protagonists. "The Call of Cthulhu", The Shadow over Innsmouth, and The Shadow Out of Time feature protagonists who experience both external and internal horror through the acquisition of self-knowledge. The Case of Charles Dexter Ward also reflects this. One of its central themes is the danger of knowing too much about one's family history. Charles Dexter Ward, the protagonist, engages in historical and genealogical research that ultimately leads to both madness and his own self-destruction.
Decline of civilization
For much of his life, Lovecraft was fixated on the concepts of decline and decadence. More specifically, he thought that the West was in a state of terminal decline. Starting in the 1920s, Lovecraft became familiar with the work of the German conservative-revolutionary theorist Oswald Spengler, whose pessimistic thesis of the decadence of the modern West formed a crucial element in Lovecraft's overall anti-modern worldview. Spenglerian imagery of cyclical decay is a central theme in At the Mountains of Madness. S. T. Joshi, in H. P. Lovecraft: The Decline of the West, places Spengler at the center of his discussion of Lovecraft's political and philosophical ideas. According to him, the idea of decline is the single idea that permeates and connects his personal philosophy. The main Spenglerian influence on Lovecraft was his view that politics, economics, science, and art are all interdependent aspects of civilization. This realization led him to shed his personal ignorance of then-current political and economic developments after 1927. Lovecraft had developed his idea of Western decline independently, but Spengler gave it a clear framework.
Science
Lovecraft shifted supernatural horror away from its previous focus on human issues to a focus on cosmic ones. In this way, he merged the elements of supernatural fiction that he deemed to be scientifically viable with science fiction. This merge required an understanding of both supernatural horror and then-contemporary science. Lovecraft used this combined knowledge to create stories that extensively reference trends in scientific development. Beginning with "The Shunned House", Lovecraft increasingly incorporated elements of both Einsteinian science and his own personal materialism into his stories. This intensified with the writing of "The Call of Cthulhu", where he depicted alien influences on humanity. This trend continued throughout the remainder of his literary career. "The Colour Out of Space" represents what scholars have called the peak of this trend. It portrays an alien lifeform whose otherness prevents it from being defined by then-contemporary science.
Another part of this effort was the repeated use of mathematics to make his creatures and settings appear more alien. Tom Hull, a mathematician, regards this as enhancing his ability to evoke a sense of otherness and fear. He attributes this use of mathematics to Lovecraft's childhood interest in astronomy and his adulthood awareness of non-Euclidean geometry. Another reason for his use of mathematics was his reaction to the scientific developments of his day. These developments convinced him that humanity's primary means of understanding the world was no longer trustworthy. Lovecraft's usage of mathematics in his fiction serves to convert otherwise supernatural elements into things that have in-universe scientific explanations. "The Dreams in the Witch House" and The Shadow Out of Time both have elements of this. The former uses a witch and her familiar, while the latter uses the idea of mind transference. These elements are explained using scientific theories that were prevalent during Lovecraft's lifetime.
Lovecraft Country
Setting plays a major role in Lovecraft's fiction. A fictionalized version of New England serves as the central hub for his mythos, called "Lovecraft Country" by later commentators. It represents the history, culture, and folklore of the region, as interpreted by Lovecraft. These attributes are exaggerated and altered to provide a suitable setting for his stories. The names of its fictional locations were directly influenced by those of real New England places, a choice made to increase their realism. Lovecraft's stories use their connections with New England to heighten their ability to instill fear. Lovecraft was primarily inspired by the cities and towns of Massachusetts. However, the specific location of Lovecraft Country is variable, as it moved according to Lovecraft's literary needs. Starting with areas that he thought were evocative, Lovecraft redefined and exaggerated them under fictional names. For example, Lovecraft based Arkham on the town of Oakham and expanded it to include a nearby landmark. Its location was moved, as Lovecraft decided that it would have been destroyed by the recently built Quabbin Reservoir. This is alluded to in "The Colour Out of Space", as the "blasted heath" is submerged by the creation of a fictionalized version of the reservoir. Similarly, Lovecraft's other towns were based on other locations in Massachusetts. Innsmouth was based on Newburyport, and Dunwich was based on Greenwich. The vague locations of these towns also played into Lovecraft's desire to create a mood in his stories. In his view, a mood can only be evoked through reading.
Critical reception
Literary
Early efforts to revise an established literary view of Lovecraft as an author of "pulp" were resisted by some eminent critics; in 1945, Edmund Wilson sneered: "the only real horror in most of these fictions is the horror of bad taste and bad art." However, Wilson praised Lovecraft's ability to write about his chosen field; he described him as having written about it "with much intelligence". According to L. Sprague de Camp, Wilson later improved his opinion of Lovecraft, citing a report of David Chavchavadze that Wilson included a Lovecraftian reference in Little Blue Light: A Play in Three Acts. After Chavchavadze met with him to discuss this, Wilson revealed that he was reading a copy of Lovecraft's correspondence. Two years before Wilson's critique, Lovecraft's works were reviewed by Winfield Townley Scott, the literary editor of The Providence Journal. He argued that Lovecraft was one of the most significant Rhode Island authors and that it was regrettable that he received little attention from mainstream critics at the time. Mystery and Adventure columnist Will Cuppy of the New York Herald Tribune recommended to readers a volume of Lovecraft's stories in 1944, asserting that "the literature of horror and macabre fantasy belongs with mystery in its broader sense."
By 1957, Floyd C. Gale of Galaxy Science Fiction said that Lovecraft was comparable to Robert E. Howard, stating that "they appear more prolific than ever," noting L. Sprague de Camp, Björn Nyberg, and August Derleth's usage of their creations. He said that "Lovecraft at his best could build a mood of horror unsurpassed; at his worst, he was laughable." In 1962, Colin Wilson, in his survey of anti-realist trends in fiction The Strength to Dream, cited Lovecraft as one of the pioneers of the "assault on rationality" and included him with M. R. James, H. G. Wells, Aldous Huxley, J. R. R. Tolkien, and others as one of the builders of mythicised realities contending against what he considered the failing project of literary realism. Subsequently, Lovecraft began to acquire the status of a cult writer in the counterculture of the 1960s, and reprints of his work proliferated.
Michael Dirda, a reviewer for The Times Literary Supplement, has described Lovecraft as being a "visionary" who is "rightly regarded as second only to Edgar Allan Poe in the annals of American supernatural literature." According to him, Lovecraft's works prove that mankind cannot bear the weight of reality, as the true nature of reality cannot be understood by either science or history. In addition, Dirda praises Lovecraft's ability to create an uncanny atmosphere. This atmosphere is created through the feeling of wrongness that pervades the objects, places, and people in Lovecraft's works. He also comments favorably on Lovecraft's correspondence, and compares him to Horace Walpole. Particular attention is given to his correspondence with August Derleth and Robert E. Howard. The Derleth letters are called "delightful", while the Howard letters are described as being an ideological debate. Overall, Dirda believes that Lovecraft's letters are equal to, or better than, his fictional output.
Los Angeles Review of Books reviewer Nick Mamatas has stated that Lovecraft was a particularly difficult author, rather than a bad one. He described Lovecraft as being "perfectly capable" in the fields of story logic, pacing, innovation, and generating quotable phrases. However, Lovecraft's difficulty made him ill-suited to the pulps; he was unable to compete with the popular recurring protagonists and damsel in distress stories. Furthermore, he compared a paragraph from The Shadow Out of Time to a paragraph from the introduction to The Economic Consequences of the Peace. In Mamatas' view, Lovecraft's quality is obscured by his difficulty, and his skill is what has allowed his following to outlive the followings of other then-prominent authors, such as Seabury Quinn and Kenneth Patchen.
In 2005, the Library of America published a volume of Lovecraft's works. This volume was reviewed by many publications, including The New York Times Book Review and The Wall Street Journal, and sold 25,000 copies within a month of release. The overall critical reception of the volume was mixed. Several scholars, including S. T. Joshi and Alison Sperling, have said that this confirms H. P. Lovecraft's place in the Western canon. The editors of The Age of Lovecraft, Carl H. Sederholm and Jeffrey Andrew Weinstock, attributed the rise of mainstream popular and academic interest in Lovecraft to this volume, along with the Penguin Classics volumes and the Modern Library edition of At the Mountains of Madness. These volumes led to a proliferation of other volumes containing Lovecraft's works. According to the two authors, these volumes are part of a trend in Lovecraft's popular and academic reception: increased attention by one audience causes the other to also become more interested. In this sense, Lovecraft's success has been partly self-reinforcing.
Lovecraft's style has often been subject to criticism, but scholars such as S. T. Joshi have argued that Lovecraft consciously utilized a variety of literary devices to form a unique style of his own—these include prose-poetic rhythm, stream of consciousness, alliteration, and conscious archaism. According to Joyce Carol Oates, Lovecraft and Edgar Allan Poe have exerted a significant influence on later writers in the horror genre. Horror author Stephen King called Lovecraft "the twentieth century's greatest practitioner of the classic horror tale." King stated in his semi-autobiographical non-fiction book Danse Macabre that Lovecraft was responsible for his own fascination with horror and the macabre and was the largest influence on his writing.
Philosophical
Lovecraft's writings have influenced the speculative realist philosophical movement during the early twenty-first century. The four founders of the movement, Ray Brassier, Iain Hamilton Grant, Graham Harman, and Quentin Meillassoux, have cited Lovecraft as an inspiration for their worldviews. Graham Harman wrote a monograph on the subject, Weird Realism: Lovecraft and Philosophy. In it, he argues that Lovecraft was a "productionist" author. He describes Lovecraft as having been an author who was uniquely obsessed with gaps in human knowledge. He goes further, asserting that Lovecraft's personal philosophy was in opposition to both idealism and David Hume. In his view, Lovecraft resembles Georges Braque, Pablo Picasso, and Edmund Husserl in his division of objects into different parts that do not exhaust the potential meanings of the whole. The anti-idealism of Lovecraft is represented through his commentary on the inability of language to describe his horrors. Harman also credits Lovecraft with inspiring parts of his own articulation of object-oriented ontology. According to Lovecraft scholar Alison Sperling, this philosophical interpretation of Lovecraft's fiction has caused other philosophers in Harman's tradition to write about Lovecraft. These philosophers seek to remove human perception and human life from the foundations of ethics. These scholars have used Lovecraft's works as the central example of their worldview. They base this usage on Lovecraft's arguments against anthropocentrism and the ability of the human mind to truly understand the universe. They have also played a role in Lovecraft's improving literary reputation by focusing on his interpretation of ontology, which gives him a central position in Anthropocene studies.
Legacy
Lovecraft was relatively unknown during his lifetime. While his stories appeared in prominent pulp magazines such as Weird Tales, not many people knew his name. He did, however, correspond regularly with other contemporary writers such as Clark Ashton Smith and August Derleth, who became his friends, even though he never met them in person. This group became known as the "Lovecraft Circle", since their writings freely borrowed Lovecraft's motifs, with his encouragement. He borrowed from them as well. For example, he made use of Clark Ashton Smith's Tsathoggua in The Mound.
After Lovecraft's death, the Lovecraft Circle carried on. August Derleth founded Arkham House with Donald Wandrei to preserve Lovecraft's works and keep them in print. He added to and expanded on Lovecraft's vision, not without controversy. While Lovecraft considered his pantheon of alien gods a mere plot device, Derleth created an entire cosmology, complete with a war between the good Elder Gods and the evil Outer Gods, such as Cthulhu and his ilk. The forces of good were supposed to have won, locking Cthulhu and others beneath the earth, the ocean, and elsewhere. Derleth's Cthulhu Mythos stories went on to associate different gods with the traditional four elements of fire, air, earth, and water, which did not line up with Lovecraft's original vision of his mythos. However, Derleth's ownership of Arkham House gave him a position of authority in Lovecraftiana that did not dissipate until his death in 1971 and the subsequent efforts of Lovecraft scholars in the 1970s.
Lovecraft's works have influenced many writers and other creators. Stephen King has cited Lovecraft as a major influence on his works. As a child in the 1960s, he came across a volume of Lovecraft's works which inspired him to write his fiction. He goes on to argue that all works in the horror genre that were written after Lovecraft were influenced by him. In the field of comics, Alan Moore has described Lovecraft as having been a formative influence on his graphic novels. Film director John Carpenter's films include direct references and quotations of Lovecraft's fiction, in addition to their use of a Lovecraftian aesthetic and themes. Guillermo del Toro was similarly influenced by Lovecraft's corpus.
The first World Fantasy Awards were held in Providence in 1975. The theme was "The Lovecraft Circle". Until 2015, winners were presented with an elongated bust of Lovecraft that was designed by the cartoonist Gahan Wilson, nicknamed the "Howard". In November 2015 it was announced that the World Fantasy Award trophy would no longer be modeled on H. P. Lovecraft in response to the author's views on race. After the World Fantasy Award dropped their connection to Lovecraft, The Atlantic commented that "In the end, Lovecraft still wins—people who've never read a page of his work will still know who Cthulhu is for years to come, and his legacy lives on in the work of Stephen King, Guillermo del Toro, and Neil Gaiman."
In 2016, Lovecraft was inducted into the Museum of Pop Culture's Science Fiction and Fantasy Hall of Fame. Three years later, Lovecraft and the other Cthulhu Mythos authors were posthumously awarded the 1945 Retro-Hugo Award for Best Series for their contributions to it.
Lovecraft studies
Starting in the early 1970s, a body of scholarly work began to emerge around Lovecraft's life and works. Referred to as Lovecraft studies, its proponents sought to establish Lovecraft as a significant author in the American literary canon. This can be traced to Derleth's preservation and dissemination of Lovecraft's fiction, non-fiction, and letters through Arkham House. Joshi credits the development of the field to this process. However, it was marred by low quality editions and misinterpretations of Lovecraft's worldview. After Derleth's death in 1971, the scholarship entered a new phase. There was a push to create a book-length biography of Lovecraft. L. Sprague de Camp, a science fiction scholar, wrote the first major one in 1975. This biography was criticized by early Lovecraft scholars for its lack of scholarly merit and its lack of sympathy for its subject. Despite this, it played a significant role in Lovecraft's literary rise. It exposed Lovecraft to the mainstream of American literary criticism. During the late 1970s and early 1980s, there was a division in the field between the "Derlethian traditionalists" who wished to interpret Lovecraft through the lens of fantasy literature and the newer scholars who wished to place greater attention on the entirety of his corpus.
The 1980s and 1990s saw a further proliferation of the field. The 1990 H. P. Lovecraft Centennial Conference and the republishing of older essays in An Epicure in the Terrible brought the publication of many foundational studies that later scholarship built upon. The 1990 centennial also saw the installation of the "H. P. Lovecraft Memorial Plaque" in a garden adjoining the John Hay Library, which features a portrait by silhouettist E. J. Perry. Following this, in 1996, S. T. Joshi wrote his own biography of Lovecraft. This biography was met with positive reviews and became the main biography in the field. It has since been superseded by his expanded edition of the book, I Am Providence, published in 2010.
Lovecraft's improving literary reputation has caused his works to receive increased attention by both classics publishers and scholarly fans. His works have been published by several different series of literary classics. Penguin Classics published three volumes of Lovecraft's works between 1999 and 2004. These volumes were edited by S. T. Joshi. Barnes & Noble published their own volume of Lovecraft's complete fiction in 2008. The Library of America published a volume of Lovecraft's works in 2005. The publishing of these volumes represented a reversal of the traditional judgment that Lovecraft was not part of the Western canon. Meanwhile, the biannual NecronomiCon Providence convention was first held in 2013. Its purpose is to serve as a fan and scholarly convention that discusses both Lovecraft and the wider field of weird fiction. It is organized by the Lovecraft Arts and Sciences organization and is held on the weekend of Lovecraft's birth. That July, the Providence City Council designated the "H. P. Lovecraft Memorial Square" and installed a commemorative sign at the intersection of Angell and Prospect streets, near the author's former residences.
Music
Lovecraft's fictional mythos has influenced a number of musicians, particularly in rock and heavy metal music. This began in the 1960s with the formation of the psychedelic rock band H. P. Lovecraft, who released the albums H. P. Lovecraft and H. P. Lovecraft II in 1967 and 1968 respectively. They broke up afterwards, but later songs were released, including "The White Ship" and "At the Mountains of Madness", both titled after Lovecraft stories. Extreme metal has also been influenced by Lovecraft. This has expressed itself in both the names of bands and the contents of their albums. It began in 1970 with the release of Black Sabbath's eponymous first album, which contained a song titled "Behind the Wall of Sleep", deriving its name from the 1919 story "Beyond the Wall of Sleep". Heavy metal band Metallica was also inspired by Lovecraft. They recorded a song inspired by "The Call of Cthulhu" titled "The Call of Ktulu", and a song based on The Shadow over Innsmouth titled "The Thing That Should Not Be". The latter contains direct quotations of Lovecraft's works. Joseph Norman, a scholar of speculative fiction, has argued that there are similarities between the music described in Lovecraft's fiction and the aesthetics and atmosphere of black metal. He argues that this is evident through the "animalistic" qualities of black metal vocals. The usage of occult elements is also cited as a thematic commonality. In terms of atmosphere, he asserts that both Lovecraft's works and extreme metal place heavy focus on creating a strong negative mood.
Games
Lovecraft has also influenced gaming, despite having personally disliked games during his lifetime. Chaosium's tabletop role-playing game Call of Cthulhu, released in 1981 and currently in its seventh major edition, was one of the first games to draw heavily from Lovecraft. It includes a Lovecraft-inspired insanity mechanic, which allows player characters to go insane from contact with cosmic horrors. This mechanic went on to make appearances in subsequent tabletop and video games. 1987 saw the release of another Lovecraftian board game, Arkham Horror, which was published by Fantasy Flight Games. Though only a few Lovecraftian board games were released each year between 1987 and 2014, their number increased rapidly after 2014. According to Christina Silva, this revival may have been influenced by the entry of Lovecraft's works into the public domain and a revival of interest in board games. Few video games are direct adaptations of Lovecraft's works, but many video games have been inspired or heavily influenced by Lovecraft. Call of Cthulhu: Dark Corners of the Earth, a Lovecraftian first-person video game, was released in 2005. It is a loose adaptation of The Shadow over Innsmouth, The Shadow Out of Time, and "The Thing on the Doorstep" that uses noir themes. These adaptations focus more on Lovecraft's monsters and gamification than they do on his themes, which represents a break from Lovecraft's core theme of human insignificance. The 2015 video game Bloodborne does not adapt any of Lovecraft's stories, but it reflects Lovecraftian themes and stylistic elements. Those elements include its usage of tension, anticipation, and the environment.
Religion and occultism
Several contemporary religions have been influenced by Lovecraft's works. Kenneth Grant, the founder of the Typhonian Order, incorporated the Cthulhu Mythos into his ritual and occult system. Grant combined his interest in Lovecraft's fiction with his adherence to Aleister Crowley's Thelema. The Typhonian Order considers Lovecraftian entities to be symbols through which people may interact with something inhuman. Grant also argued that Crowley himself was influenced by Lovecraft's writings, particularly in the naming of characters in The Book of the Law. Similarly, The Satanic Rituals, co-written by Anton LaVey and Michael A. Aquino, includes the "Ceremony of the Nine Angles", which is a ritual that was influenced by the descriptions in "The Dreams in the Witch House". It contains invocations of several of Lovecraft's fictional gods.
There have been several books that have claimed to be an authentic edition of Lovecraft's Necronomicon. The Simon Necronomicon is one such example. It was written by an unknown figure who identified themselves as "Simon". Peter Levenda, an occult author who has written about the Necronomicon, claims that he and "Simon" came across a hidden Greek translation of the grimoire while looking through a collection of antiquities at a New York bookstore during the 1960s or 1970s. This book was claimed to have borne the seal of the Necronomicon. Levenda went on to claim that Lovecraft had access to this purported scroll. A textual analysis has determined that the contents of this book were derived from multiple documents that discuss Mesopotamian myth and magic. The finding of a magical text by monks is also a common theme in the history of grimoires. It was suggested that Levenda is the true author of the Simon Necronomicon.
Correspondence
Although Lovecraft is known mostly for his works of weird fiction, the bulk of his writing consists of voluminous letters about a variety of topics, from weird fiction and art criticism to politics and history. Lovecraft biographers L. Sprague de Camp and S. T. Joshi have estimated that Lovecraft wrote 100,000 letters in his lifetime, a fifth of which are believed to survive. These letters were directed at fellow writers and members of the amateur press. His involvement in the latter was what caused him to begin writing them. He included comedic elements in these letters. This included posing as an eighteenth-century gentleman and signing them with pseudonyms, most commonly "Grandpa Theobald" and "E'ch-Pi-El." According to Joshi, the most important sets of letters were those written to Frank Belknap Long, Clark Ashton Smith, and James F. Morton. He attributes this importance to the contents of these letters. With Long, Lovecraft argued in support and in opposition to many of Long's viewpoints. The letters to Smith are characterized by their focus on weird fiction. Lovecraft and Morton debated many scholarly subjects in their letters, resulting in what Joshi has called the "single greatest correspondence Lovecraft ever wrote."
Copyright and other legal issues
Despite several claims to the contrary, there is currently no evidence that any company or individual owns the copyright to any of Lovecraft's works, and it is generally accepted that it has passed into the public domain. Lovecraft specified that R. H. Barlow would serve as the executor of his literary estate, but these instructions were not incorporated into his will. Nevertheless, his surviving aunt carried out his expressed wishes, and Barlow was given control of Lovecraft's literary estate upon his death. Barlow deposited the bulk of the papers, including the voluminous correspondence, in the John Hay Library, and attempted to organize and maintain Lovecraft's other writings. Lovecraft protégé August Derleth, an older and more established writer than Barlow, vied for control of the literary estate. He and Donald Wandrei, a fellow protégé and co-owner of Arkham House, falsely claimed that Derleth was the true literary executor. Barlow capitulated, and later committed suicide in 1951. This gave Derleth and Wandrei complete control over Lovecraft's corpus.
On October 9, 1947, Derleth purchased all rights to the stories that were published in Weird Tales. However, since April 1926 at the latest, Lovecraft reserved all second printing rights to stories published in Weird Tales. Therefore, Weird Tales only owned the rights to at most six of Lovecraft's tales. If Derleth legally obtained the copyrights to these tales, there is no evidence that they were renewed before the rights expired. Following Derleth's death in 1971, Donald Wandrei sued his estate to challenge Derleth's will, which stated that he only held the copyrights and royalties to Lovecraft's works that were published under both his and Derleth's names. Arkham House's lawyer, Forrest D. Hartmann, argued that the rights to Lovecraft's works were never renewed. Wandrei won the case, but Arkham House's actions regarding copyright have damaged their ability to claim ownership of them.
In H. P. Lovecraft: A Life, S. T. Joshi concludes that Derleth's claims are "almost certainly fictitious" and argues that most of Lovecraft's works that were published in the amateur press are likely in the public domain. The copyright for Lovecraft's works would have been inherited by the only surviving heir named in his 1912 will, his aunt Annie Gamwell. When she died in 1941, the copyrights passed to her remaining descendants, Ethel Phillips Morrish and Edna Lewis. They signed a document, sometimes referred to as the Morrish-Lewis gift, permitting Arkham House to republish Lovecraft's works while retaining their ownership of the copyrights. Searches of the Library of Congress have failed to find any evidence that these copyrights were renewed after the 28-year period, making it likely that these works are in the public domain. However, the Lovecraft literary estate, reconstituted in 1998 under Robert C. Harrall, has claimed that they own the rights. They have been based in Providence since 2009 and have been granting the rights to Lovecraft's works to several publishers. Their claims have been criticized by scholars, such as Chris J. Karr, who has argued that the rights had not been renewed. Joshi has withdrawn his support for his conclusion, and now supports the estate's copyright claims.
Bibliography
See also
H. P. Lovecraft scholars
Lovecraft, a crater on Mercury named for the author
Notes
Citations
General and cited sources
Further reading
External links
The H. P. Lovecraft Archive
The H. P. Lovecraft Historical Society
H. P. Lovecraft Collection in the Special Collections at the John Hay Library (Brown University)
Lovecraft Annual, a scholarly journal
The Lovecraft Arts & Sciences Council, a non-profit educational organization
H. P. Lovecraft at the Encyclopedia of Science Fiction''
Online editions
H. P. Lovecraft
1890 births
1937 deaths
20th-century American essayists
20th-century American journalists
20th-century American male writers
20th-century American novelists
20th-century American poets
20th-century American short story writers
American agnostics
American alternative journalists
American atheists
American socialists
American fantasy writers
American horror writers
American letter writers
American literary critics
American magazine editors
American male essayists
American male journalists
American male non-fiction writers
American male novelists
American male poets
American monarchists
American people of English descent
American science fiction writers
Burials at Swan Point Cemetery
American critics of religions
Cthulhu Mythos writers
Deaths from cancer in Rhode Island
Deaths from colorectal cancer in the United States
Deaths from small intestine cancer
American ghostwriters
Hugo Award–winning writers
Literary circles
Materialists
Mythopoeic writers
People from Brooklyn Heights
People from Flatbush, Brooklyn
People from Red Hook, Brooklyn
Philosophers of pessimism
Poets from Rhode Island
Pulp fiction writers
Re-Animator (film series)
Rhode Island socialists
Science Fiction Hall of Fame inductees
American science fiction critics
American weird fiction writers
Writers from Providence, Rhode Island
Writers of Gothic fiction | H. P. Lovecraft | [
"Physics"
] | 15,918 | [
"Materialism",
"Matter",
"Materialists"
] |
13,533 | https://en.wikipedia.org/wiki/Hacker | A hacker is a person skilled in information technology who achieves goals by non-standard means. The term has become associated in popular culture with a security hacker: someone with knowledge of bugs or exploits to break into computer systems and access data which would otherwise be inaccessible to them. In a positive connotation, though, hacking can also be utilized by legitimate figures in legal situations. For example, law enforcement agencies sometimes use hacking techniques to collect evidence on criminals and other malicious actors. This could include using anonymity tools (such as a VPN or the dark web) to mask their identities online and pose as criminals.
Hacking can also have a broader sense of any roundabout solution to a problem, or programming and hardware development in general, and hacker culture has spread the term's broader usage to the general public even outside the profession or hobby of electronics (see life hack).
Definitions
Reflecting the two types of hackers, there are two definitions of the word "hacker":
Originally, hacker simply meant advanced computer technology enthusiast (both hardware and software) and adherent of programming subculture; see hacker culture.
Someone who is able to subvert computer security. If doing so for malicious purposes, the person can also be called a cracker.
Mainstream usage of "hacker" mostly refers to computer criminals, due to the mass media usage of the word since the 1990s. This includes what hacker jargon calls script kiddies, less skilled criminals who rely on tools written by others with very little knowledge about the way they work. This usage has become so predominant that the general public is largely unaware that different meanings exist. Though the self-designation of hobbyists as hackers is generally acknowledged and accepted by computer security hackers, people from the programming subculture consider the computer intrusion related usage incorrect, and emphasize the difference between the two by calling security breakers "crackers" (analogous to a safecracker).
The controversy is usually based on the assertion that the term originally meant someone messing about with something in a positive sense, that is, using playful cleverness to achieve a goal. But then, it is supposed, the meaning of the term shifted over the decades and came to refer to computer criminals.
As the security-related usage has spread more widely, the original meaning has become less known. In popular usage and in the media, "computer intruders" or "computer criminals" is the exclusive meaning of the word. In computer enthusiast and hacker culture, the primary meaning is a complimentary description for a particularly brilliant programmer or technical expert. A large segment of the technical community insist the latter is the correct usage, as in the Jargon File definition.
Sometimes, "hacker" is simply used synonymously with "geek": "A true hacker is not a group person. He's a person who loves to stay up all night, he and the machine in a love-hate relationship... They're kids who tended to be brilliant but not very interested in conventional goals It's a term of derision and also the ultimate compliment."
Fred Shapiro thinks that "the common theory that 'hacker' originally was a benign term and the malicious connotations of the word were a later perversion is untrue." He found that the malicious connotations were already present at MIT in 1963 (quoting The Tech, an MIT student newspaper), and at that time referred to unauthorized users of the telephone network, that is, the phreaker movement that developed into the computer security hacker subculture of today.
Civic hacker
Civic hackers use their security and/or programming acumen to create solutions, often public and open-sourced, addressing challenges relevant to neighborhoods, cities, states or countries and the infrastructure within them. Municipalities and major government agencies such as NASA have been known to host hackathons or promote a specific date as a "National Day of Civic Hacking" to encourage participation from civic hackers. Civic hackers, though often operating autonomously and independently, may work alongside or in coordination with certain aspects of government or local infrastructure such as trains and buses. For example, in 2008, Philadelphia-based civic hacker William Entriken developed a web application that displayed a comparison of the actual arrival times of local SEPTA trains to their scheduled times after being reportedly frustrated by the discrepancy.
Security related hacking
Security hackers are people involved with circumvention of computer security. There are several types, including:
White hat: Hackers who work to keep data safe from other hackers by finding system vulnerabilities that can be mitigated. White hats are usually employed by the target system's owner and are typically paid (sometimes quite well) for their work. Their work is not illegal because it is done with the system owner's consent.
Black hat or cracker: Hackers with malicious intentions. They often steal, exploit, and sell data, and are usually motivated by personal gain. Their work is usually illegal. A cracker is like a black hat hacker, but is specifically someone who is very skilled and tries via hacking to make profits or to benefit, not just to vandalize. Crackers find exploits for system vulnerabilities and often use them to their advantage by either selling the fix to the system owner or selling the exploit to other black hat hackers, who in turn use it to steal information or gain royalties.
Grey hat: Computer security experts who may sometimes violate laws or typical ethical standards, but do not have the malicious intent typical of a black hat hacker.
Hacker culture
Hacker culture is an idea derived from a community of enthusiast computer programmers and systems designers in the 1960s around the Massachusetts Institute of Technology's (MIT's) Tech Model Railroad Club (TMRC) and the MIT Artificial Intelligence Laboratory. The concept expanded to the hobbyist home computing community, focusing on hardware in the late 1970s (e.g. the Homebrew Computer Club) and on software (video games, software cracking, the demoscene) in the 1980s/1990s. Later, this would go on to encompass many new definitions such as art, and life hacking.
Motives
Four primary motives have been proposed as possibilities for why hackers attempt to break into computers and networks. First, there is a criminal financial gain to be had when hacking systems with the specific purpose of stealing credit card numbers or manipulating banking systems. Second, many hackers thrive off of increasing their reputation within the hacker subculture and will leave their handles on websites they defaced or leave some other evidence as proof that they were involved in a specific hack. Third, corporate espionage allows companies to acquire information on products or services that can be stolen or used as leverage within the marketplace. Lastly, state-sponsored attacks provide nation states with both wartime and intelligence collection options conducted on, in, or through cyberspace.
Overlaps and differences
The main difference between the programmer subculture and the computer security hacker is their mostly separate historical origin and development. However, the Jargon File reports that considerable overlap existed for the early phreaking at the beginning of the 1970s. An article from MIT's student paper The Tech used the term hacker in this context as early as 1963 in its pejorative meaning for someone messing with the phone system. The overlap quickly started to break when people joined in the activity who did it in a less responsible way. This was the case after the publication of an article exposing the activities of Draper and Engressia.
According to Raymond, hackers from the programmer subculture usually work openly and use their real name, while computer security hackers prefer secretive groups and identity-concealing aliases. Also, their activities in practice are largely distinct. The former focus on creating new and improving existing infrastructure (especially the software environment they work with), while the latter primarily and strongly emphasize the general act of circumvention of security measures, with the effective use of the knowledge (which can be to report and help fixing the security bugs, or exploitation reasons) being only rather secondary. The most visible difference in these views was in the design of the MIT hackers' Incompatible Timesharing System, which deliberately did not have any security measures.
There are some subtle overlaps, however, since basic knowledge about computer security is also common within the programmer subculture of hackers. For example, Ken Thompson noted during his 1983 Turing Award lecture that it is possible to add code to the UNIX "login" command that would accept either the intended encrypted password or a particular known password, allowing a backdoor into the system with the latter password. He named his invention the "Trojan horse". Furthermore, Thompson argued, the C compiler itself could be modified to automatically generate the rogue code, to make detecting the modification even harder. Because the compiler is itself a program generated from a compiler, the Trojan horse could also be automatically installed in a new compiler program, without any detectable modification to the source of the new compiler. However, Thompson disassociated himself strictly from the computer security hackers: "I would like to criticize the press in its handling of the 'hackers,' the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst. ... I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of their acts."
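To make the mechanism concrete, the following is a minimal illustrative sketch in C of the kind of login backdoor Thompson describes: a password check that quietly accepts a second, hidden value in addition to the user's real password. It is not Thompson's actual code; the function name and the backdoor string are assumptions made for illustration, and the check compares plain strings rather than encrypted passwords for brevity.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of a backdoored password check: it accepts either the
 * password stored for the user or a fixed, hidden value known to the attacker. */
static int password_ok(const char *entered, const char *stored)
{
    if (strcmp(entered, stored) == 0)
        return 1;               /* normal, intended check */
    if (strcmp(entered, "kt-backdoor") == 0)
        return 1;               /* hidden backdoor password is also accepted */
    return 0;
}

int main(void)
{
    /* The legitimate password works ... */
    printf("%d\n", password_ok("correct horse", "correct horse"));  /* prints 1 */
    /* ... but so does the hidden value, regardless of the stored password. */
    printf("%d\n", password_ok("kt-backdoor", "correct horse"));    /* prints 1 */
    return 0;
}
```

Thompson's deeper point, as the paragraph above notes, was that such a check need not appear in the login source at all: a compromised compiler could insert it during compilation, and could insert the same trick into future builds of itself.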
The programmer subculture of hackers sees secondary circumvention of security mechanisms as legitimate if it is done to get practical barriers out of the way for doing actual work. In special forms, that can even be an expression of playful cleverness. However, the systematic and primary engagement in such activities is not one of the actual interests of the programmer subculture of hackers and it does not have significance in its actual activities, either. A further difference is that, historically, members of the programmer subculture of hackers were working at academic institutions and used the computing environment there. In contrast, the prototypical computer security hacker had access exclusively to a home computer and a modem. However, since the mid-1990s, with home computers that could run Unix-like operating systems and with inexpensive internet home access being available for the first time, many people from outside of the academic world started to take part in the programmer subculture of hacking.
Since the mid-1980s, there are some overlaps in ideas and members with the computer security hacking community. The most prominent case is Robert T. Morris, who was a user of MIT-AI, yet wrote the Morris worm. The Jargon File hence calls him "a true hacker who blundered". Nevertheless, members of the programmer subculture have a tendency to look down on and disassociate from these overlaps. They commonly refer disparagingly to people in the computer security subculture as crackers and refuse to accept any definition of hacker that encompasses such activities. The computer security hacking subculture, on the other hand, tends not to distinguish between the two subcultures as harshly, acknowledging that they have much in common including many members, political and social goals, and a love of learning about technology. They restrict the use of the term cracker to their categories of script kiddies and black hat hackers instead.
All three subcultures have relations to hardware modifications. In the early days of network hacking, phreaks were building blue boxes and various variants. The programmer subculture of hackers has stories about several hardware hacks in its folklore, such as a mysterious "magic" switch attached to a PDP-10 computer in MIT's AI lab that, when switched off, crashed the computer. The early hobbyist hackers built their home computers themselves from construction kits. However, all these activities have died out during the 1980s when the phone network switched to digitally controlled switchboards, causing network hacking to shift to dialing remote computers with modems when pre-assembled inexpensive home computers were available and when academic institutions started to give individual mass-produced workstation computers to scientists instead of using a central timesharing system. The only kind of widespread hardware modification nowadays is case modding.
An encounter of the programmer and the computer security hacker subculture occurred at the end of the 1980s, when a group of computer security hackers, sympathizing with the Chaos Computer Club (which disclaimed any knowledge in these activities), broke into computers of American military organizations and academic institutions. They sold data from these machines to the Soviet secret service, one of them in order to fund his drug addiction. The case was solved when Clifford Stoll, a scientist working as a system administrator, found ways to log the attacks and to trace them back (with the help of many others). 23, a German film adaption with fictional elements, shows the events from the attackers' perspective. Stoll described the case in his book The Cuckoo's Egg and in the TV documentary The KGB, the Computer, and Me from the other perspective. According to Eric S. Raymond, it "nicely illustrates the difference between 'hacker' and 'cracker'. Stoll's portrait of himself, his lady Martha, and his friends at Berkeley and on the Internet paints a marvelously vivid picture of how hackers and the people around them like to live and how they think."
Representation in media
The mainstream media's current usage of the term may be traced back to the early 1980s. When the term, previously used only among computer enthusiasts, was introduced to wider society by the mainstream media in 1983, even those in the computer community referred to computer intrusion as hacking, although not as the exclusive definition of the word. In reaction to the increasing media use of the term exclusively with the criminal connotation, the computer community began to differentiate their terminology. Alternative terms such as cracker were coined in an effort to maintain the distinction between hackers within the legitimate programmer community and those performing computer break-ins. Further terms such as black hat, white hat and gray hat developed when laws against breaking into computers came into effect, to distinguish criminal activities from those activities which were legal.
Network news' use of the term consistently pertains primarily to criminal activities, despite attempts by the technical community to preserve and distinguish the original meaning. Today, the mainstream media and general public continue to describe computer criminals, with all levels of technical sophistication, as "hackers" and do not generally make use of the word in any of its non-criminal connotations. Members of the media sometimes seem unaware of the distinction, grouping legitimate "hackers" such as Linus Torvalds and Steve Wozniak along with criminal "crackers".
As a result, the definition is still the subject of heated controversy. The wider dominance of the pejorative connotation is resented by many who object to the term being taken from their cultural jargon and used negatively, including those who have historically preferred to self-identify as hackers. Many advocate using the more recent and nuanced alternate terms when describing criminals and others who negatively take advantage of security flaws in software and hardware. Others prefer to follow common popular usage, arguing that the positive form is confusing and unlikely to become widespread in the general public. A minority still use the term in both senses despite the controversy, leaving context to clarify (or leave ambiguous) which meaning is intended.
However, because the positive definition of hacker was widely used as the predominant form for many years before the negative definition was popularized, "hacker" can be seen as a shibboleth, identifying those who use the technically oriented sense (as opposed to the exclusively intrusion-oriented sense) as members of the computing community. On the other hand, due to the variety of industries software designers may find themselves in, many prefer not to be referred to as hackers because the word holds a negative connotation in many of those industries.
A possible middle ground position has been suggested, based on the observation that "hacking" describes a collection of skills and tools which are used by hackers of both descriptions for differing reasons. The analogy is made to locksmithing, specifically picking locks, which is a skill which can be used for good or evil. The primary weakness of this analogy is the inclusion of script kiddies in the popular usage of "hacker", despite their lack of an underlying skill and knowledge base.
See also
Script kiddie, an unskilled computer security attacker
Hacktivism, conducting cyber attacks on a business or organisation in order to bring social change
References
Further reading
Baker, Bruce D. "Sin and the Hacker Ethic: The Tragedy of Techno-Utopian Ideology in Cyberspace Business Cultures." Journal of Religion and Business Ethics 4.2 (2020): 1+ online .
Hasse, Michael. Die Hacker: Strukturanalyse einer jugendlichen Subkultur (1994)
Himanen, Pekka. The hacker ethic (Random House, 2010).
Himanen, Pekka. "19. The hacker ethic as the culture of the information age." The Network Society (2004): 420+ online.
Holt, Thomas J. "Computer hacking and the hacker subculture." in The palgrave handbook of international cybercrime and cyberdeviance (2020): 725–742.
Computer security
Dey, Debabrata, Atanu Lahiri, and Guoying Zhang. "Hacker behavior, network effects, and the security software market." Journal of Management Information Systems 29.2 (2012): 77–108.
Logik Bomb: Hacker's Encyclopedia (1997)
Revelation: The Ultimate Beginner's Guide to Hacking & Phreaking (1996)
Free software/open source
External links
Hacking (computer security)
Hacker culture
Computing culture
Computing terminology
Computer programming
Computer viruses
Internet security | Hacker | [
"Technology",
"Engineering"
] | 3,638 | [
"Computing terminology",
"Computing culture",
"Computer programming",
"Software engineering",
"Computing and society",
"Computers"
] |
13,544 | https://en.wikipedia.org/wiki/Help%20desk | A help desk is a department or person that provides assistance and information, usually for electronic or computer problems. In the mid-1990s, research by Iain Middleton of Robert Gordon University studied the value of an organization's help desks. It found that value was derived not only from a reactive response to user issues, but also from the help desk's unique position of communicating daily with numerous customers or employees. Information gained in areas such as technical problems, user preferences, and satisfaction can be valuable for the planning and development work of other information technology units.
A main function of the help desk is to separate issues from defects. Many issues can be solved at the help desk level, such as password resets and simple misunderstandings. Some issues will be the result of an actual product defect, which should be forwarded to a development team for resolution.
Large help desks have a person or team responsible for managing the incoming requests, called "issues"; they are commonly called queue managers or queue supervisors. The queue manager is responsible for the issue queues, which can be set up in various ways depending on the help desk size or structure. Typically, large help desks have several teams that are experienced in working on different issues. The queue manager will assign an issue to one of the specialized teams based on the type of issue raised. Some help desks may have telephone systems with ACD splits ensuring that calls about specific topics are put through to analysts with the requisite experience or knowledge.
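As an illustrative sketch only, the routing a queue manager performs can be modeled as a simple mapping from an issue's topic to a specialized team; the topics and team names below are invented for the example and are not drawn from the text.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: route an incoming issue to a team based on its topic,
 * roughly as a queue manager or an ACD split might. */
static const char *route_issue(const char *topic)
{
    if (strcmp(topic, "password") == 0) return "account support team";
    if (strcmp(topic, "network") == 0)  return "network team";
    if (strcmp(topic, "hardware") == 0) return "desktop support team";
    return "general queue";   /* anything unrecognized stays with the queue manager */
}

int main(void)
{
    printf("%s\n", route_issue("password"));   /* -> account support team */
    printf("%s\n", route_issue("printer"));    /* -> general queue */
    return 0;
}
```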
See also
Service desk
Call center
Customer service
Comparison of issue-tracking systems
Comparison of help desk issue tracking software
Technical support
Help desk software
References
Computer telephony integration
Customer service
Outsourcing
Telephony | Help desk | [
"Technology"
] | 338 | [
"Information technology",
"Computer telephony integration"
] |
13,547 | https://en.wikipedia.org/wiki/Hate%20crime | A hate crime (also known a bias crime) is crime where a perpetrator targets a victim because of their physical appearance or perceived membership of a certain social group.
Examples of such groups can include, and are almost exclusively limited to, race, ethnicity, disability, language, nationality, physical appearance, political views, political affiliation, age, religion, sex, gender identity, or sexual orientation. Non-criminal actions that are motivated by these reasons are often called "bias incidents".
Incidents may involve physical assault, homicide, damage to property, bullying, harassment, verbal abuse (which includes slurs) or insults, mate crime, or offensive graffiti or letters (hate mail).
In the criminal law of the United States, the Federal Bureau of Investigation (FBI) defines a hate crime as a traditional offense like murder, arson, or vandalism with an added element of bias. Hate itself is not a hate crime but committing a crime motivated by bias against one or more of the social groups listed above, or by bias against their derivatives constitutes a hate crime. A hate crime law is a law intended to deter bias-motivated violence. Hate crime laws are distinct from laws against hate speech: hate crime laws enhance the penalties associated with conduct which is already criminal under other laws, while hate speech laws criminalize a category of speech. Hate speech is a factor for sentencing enhancement in the United States, distinct from laws that criminalize speech.
History
The term "hate crime" came into common usage in the United States during the 1980s, but it is often used retrospectively in order to describe events which occurred prior to that era. From the Roman persecution of Christians to the Nazi slaughter of Jews, hate crimes were committed by individuals as well as governments long before the term was commonly used. A major part of defining crimes as hate crimes is determining that they have been committed against members of historically oppressed groups.
As Europeans began to colonize the world from the 16th century onwards, indigenous peoples in the colonized areas, such as Native Americans, increasingly became the targets of bias-motivated intimidation and violence. During the past two centuries, typical examples of hate crimes in the U.S. include lynchings of African Americans, largely in the South, lynchings of Europeans in the East, and lynchings of Mexicans and Chinese in the West; cross burnings in order to intimidate black activists or drive black families out of predominantly white neighborhoods both during and after Reconstruction; assaults on lesbian, gay, bisexual and transgender people; the painting of swastikas on Jewish synagogues; and xenophobic responses to a variety of minority ethnic groups.
The verb "to lynch" is attributed to the actions of Charles Lynch, an 18th-century Virginia Quaker. Lynch, other militia officers, and justices of the peace rounded up Tory sympathizers who were given a summary trial at an informal court; sentences which were handed down included whipping, property seizure, coerced pledges of allegiance, and conscription into the military. Originally, the term referred to the extrajudicial organized but unauthorized punishment of criminals. It later evolved to describe executions which were committed outside "ordinary justice". It is highly associated with white suppression of African Americans in the South, and periods of weak or nonexistent police authority, as in certain frontier areas of the Old West.
During the COVID-19 pandemic, violence against people of Chinese origin increased significantly amid accusations that they were spreading the virus. In May 2020, the Polish-based "NEVER AGAIN" Association published its report titled The Virus of Hate: The Brown Book of Epidemic, which documented numerous acts of racism, xenophobia, and discrimination that occurred in the wake of the COVID-19 pandemic, as well as cases of spreading hate speech and conspiracy theories about the epidemic by the Alt-Right.
Psychological effects
Hate crimes can have significant and wide-ranging psychological consequences, not only for their direct victims but for others of the group as well. Moreover, victims of hate crimes often experience a sense of victimization that goes beyond the initial crime, creating a heightened sense of vulnerability towards future victimization. In many ways, hate crime victimization can be a reminder to victims of their marginalized status in society, and for immigrants or refugees, may also serve to make them relive the violence that drove them to seek refuge in another country. A 1999 U.S. study of lesbian and gay victims of violent hate crimes documented that they experienced higher levels of psychological distress, including symptoms of depression and anxiety, than lesbian and gay victims of comparable crimes which were not motivated by antigay bias. A manual issued by the Attorney-General of the Province of Ontario in Canada lists the following consequences:
Impact on the individual victim: psychological and affective disturbances; repercussions on the victim's identity and self-esteem; both reinforced by a specific hate crime's degree of violence, which is usually stronger than that of a common crime.
Effect on the targeted group: generalized terror in the group to which the victim belongs, inspiring feelings of vulnerability among its other members, who could be the next hate crime victims.
Effect on other vulnerable groups: ominous effects on minority groups or on groups that identify themselves with the targeted group, especially when the referred hate is based on an ideology or a doctrine that preaches simultaneously against several groups.
Effect on the community as a whole: divisions and factionalism arising in response to hate crimes are particularly damaging to multicultural societies.
Hate crime victims can also develop depression and psychological trauma. They suffer from typical symptoms of trauma: lack of concentration, fear, unintentional rethinking of the incident and feeling vulnerable or unsafe. These symptoms may be severe enough to qualify as PTSD. In the United States, the Supreme Court has accepted the claim that hate crimes cause 'distinct emotional harm' to victims. People who have been victims of hate crimes avoid spaces where they feel unsafe, which can make communities less functional when ties with police are strained by persistent group fears and feelings of insecurity. In the United States, hate crime has been shown to reduce educational attainment among affected groups, particularly among black, non-Hispanic victims.
A review of European and American research indicates that terrorist bombings cause Islamophobia and hate crimes to flare up but, in calmer times, they subside again, although to a relatively high level. Terrorists' most persuasive message is that of fear; a primary and strong emotion, fear increases risk estimates and has distortive effects on the perception of ordinary Muslims. Widespread Islamophobic prejudice seems to contribute to anti-Muslim hate crimes, but indirectly; terrorist attacks and intensified Islamophobic prejudice serve as a window of opportunity for extremist groups and networks.
Motivation
Sociologists Jack McDevitt and Jack Levin's 2002 study into the motives for hate crimes found four motives, and reported that "thrill-seeking" accounted for 66 percent of all hate crimes overall in the United States:
Thrill-seeking – perpetrators engage in hate crimes for excitement and drama. Often, there is no greater purpose behind the crimes, with victims being vulnerable because they have an ethnic, religious, sexual or gender background that differs from their attackers. While the actual animosity present in such a crime can be quite low, thrill-seeking crimes were determined to often be dangerous, with 70 percent of thrill-seeking hate crimes studied involving physical attacks. Typically, these attacks are perpetrated by groups of young teenagers or adults seeking excitement.
Defensive – perpetrators engage in hate crimes out of a belief they are protecting their communities. Often, these are triggered by a certain background event. Perpetrators believe society supports their actions but is too afraid to act and thus they believe they have communal assent in their actions.
Retaliatory – perpetrators engage in hate crimes out of a desire for revenge. This can be in response to perceived personal slights, other hate crimes or terrorism. The "avengers" target members of a group whom they believe committed the original crime, even if the victims had nothing to do with it. These kinds of hate crimes are a common occurrence after terrorist attacks.
Mission offenders – perpetrators engage in hate crimes out of ideological reasons. They consider themselves to be crusaders, often for a religious or racial cause. They may write complex explanations for their views and target symbolically important sites, trying to maximize damage. They believe that there is no other way to accomplish their goals, which they consider to be justification for excessive violence against innocents. This kind of hate crime often overlaps with terrorism, and is considered by the FBI to be both the rarest and deadliest form of hate crime.
In a later article, Levin and fellow sociologist Ashley Reichelmann found that following the September 11 attacks, thrill-motivated hate crimes tended to decrease as the overall rate of violent crime decreased, while defensive hate crimes increased substantially. Specifically, they found that 60% of all hate-motivated assaults in 2001 were perpetrated against those the offenders perceived to be Middle Eastern and were motivated mainly by a desire for revenge. Levin and McDevitt also argued that while thrill crimes made up the majority of hate crimes in the 1990s, after September 11, 2001, hate crimes in the United States shifted from thrill offenses committed by young groups to more defensively oriented offenses, more often perpetrated by older individuals responding to a precipitating event.
The motivations of hate-crime offenders are complex; therefore, there is no one theory that can completely account for hate-motivated crimes. However, Mark Austin Walters previously attempted to synthesize three interdisciplinary theories to account for the behavior of hate-crime offenders:
1. Strain Theory: suggests that hate crimes are motivated by perceived economic and material inequality, which results in differential attitudes towards outsiders who may be viewed as "straining" already scarce resources. An example of this can be seen in the discourse surrounding some people's apprehension towards immigrants; these people feel as though immigrants and/or refugees receive extra benefits from government and strain social systems.
2. Doing Difference Theory: suggests that some individuals fear groups other than their own and, as a result of this, seek to suppress different cultures.
3. Self-Control Theory: suggests that a person's upbringing determines their tolerance threshold towards others; here, individuals with low self-control are often impulsive, have poor employment prospects, and have little academic success.
Walters argues that a synthesis of these theories provides a more well-rounded scope of the motivations behind hate crimes, where he explains that social, cultural, and individual factors interact to elicit the violent behavior of individuals with low self-control.
Additionally, psychological perspectives within the realm of behaviorism have also contributed to theoretical explanations for the motivations of hate crimes, particularly as they relate to conditioning and social learning. For instance, the seminal work of John B. Watson and Rosalie Rayner illustrated that hate, a form of prejudice, was a conditioned emotional response. Later on, the work of Arthur Staats and Carolyn Staats illustrated that both hate and fear were learned behavioral responses. In their experiment, Staats and Staats paired positive and negative words with several different nationalities. The pairing of verbal stimuli was a form of conditioning, and it was found to influence attitude formation and attitude change.
These studies are of interest when considering modern forms of prejudice directed towards ethnic, religious, or racial groups. For instance, there was a significant increase in Islamophobia and hate crimes following the 9/11 terrorist attacks on the United States. Simultaneously, the news media was consistently pairing Islam with terrorism. Thus, the pairing of verbal stimuli in the media contributed to widespread prejudice towards all Arab individuals in a process known as semantic generalization, which refers to how a learned behavior can generalize across situations based on meaning or other abstract representations. These occurrences continue today with the social and political discourse that contributes to the context in which people learn, come to form beliefs, and engage in behavioural actions. Although not all individuals with prejudicial attitudes go on to engage in hate-motivated crime, it has been suggested that hate-crime offenders come to learn their prejudices through social interaction, consumption of biased news media, political hate speech, and internal misrepresentations of cultures other than their own.
Risk management for hate-crime offenders
Compared to other types of offending, there has been relatively little research directed towards the management of hate-crime offenders. However, risk management for hate-crime offenders is an important consideration for forensic psychology and public safety in order to decrease the potential for future harm. Forensic risk assessments are designed to evaluate the likelihood of re-offending and to aid in risk management strategies. While not specifically designed for hate crime offenders, some of the most common risk assessment tools used to assess risk for hate-crime offenders include the Violence Risk Appraisal Guide (VRAG), the Historical-Clinical-Risk Management-20 (HCR-20) and the Psychopathy Checklist-Revised (PCL-R). Research has shown that assessing and addressing risk posed by hate-crime offenders is especially complex, and while existing tools are useful, it is important to incorporate bias-oriented factors (Dunbar et al., 2005). That is, hate-crime offenders do tend to score high risk on tools including both static and dynamic factors, but severity has been found to not be solely related to these factors, illustrating a need to incorporate biases and ideological factors.
Laws
Hate crime laws generally fall into one of several categories:
laws defining specific bias-motivated acts as distinct crimes;
criminal penalty-enhancement laws;
laws creating a distinct civil cause of action for hate crimes; and
laws requiring administrative agencies to collect hate crime statistics. Sometimes (as in Bosnia and Herzegovina), the laws focus on war crimes, genocide, and crimes against humanity with the prohibition against discriminatory action limited to public officials.
Europe and Asia
Council of Europe
Since 2006, with the Additional Protocol to the Convention on Cybercrime, most signatories to that Convention – mostly members of the Council of Europe – committed to punish as a crime racist and xenophobic hate speech done through the internet.
Andorra
Discriminatory acts constituting harassment or infringement of a person's dignity on the basis of origin, citizenship, race, religion, or gender are criminalized (Penal Code Article 313). Courts have cited bias-based motivation in delivering sentences, but there is no explicit penalty-enhancement provision in the Criminal Code. The government does not track hate crime statistics, although such crimes are reportedly rare.
Armenia
Armenia has a penalty-enhancement statute for crimes with ethnic, racial, or religious motives (Criminal Code Article 63).
Austria
Austria has a penalty-enhancement statute for reasons like repeating a crime, being especially cruel, using others' helpless states, playing a leading role in a crime, or committing a crime with racist, xenophobic or especially reprehensible motivation (Penal Code section 33(5)). Austria is a party to the Convention on Cybercrime, but not the Additional Protocol.
Azerbaijan
Azerbaijan has a penalty-enhancement statute for crimes motivated by racial, national, or religious hatred (Criminal Code Article 61). Murder and infliction of serious bodily injury motivated by racial, religious, national, or ethnic intolerance are distinct crimes (Article 111). Azerbaijan is a party to the Convention on Cybercrime, but not the Additional Protocol.
Belarus
Belarus has a penalty-enhancement statute for crimes motivated by racial, national, and religious hatred and discord.
Belgium
Belgium's Act of 25 February 2003 ("aimed at combating discrimination and modifying the Act of 15 February 1993 which establishes the Centre for Equal Opportunities and the Fight against Racism") establishes a penalty-enhancement for crimes involving discrimination on the basis of gender, supposed race, color, descent, national or ethnic origin, sexual orientation, civil status, birth, fortune, age, religious or philosophical beliefs, current or future state of health and handicap or physical features. The Act also "provides for a civil remedy to address discrimination." The Act, along with the Act of 20 January 2003 ("on strengthening legislation against racism"), requires the centre to collect and publish statistical data on racism and discriminatory crimes. Belgium is a party to the Convention on Cybercrime, but not the Additional Protocol.
Bosnia and Herzegovina
The Criminal Code of Bosnia and Herzegovina (enacted 2003) "contains provisions prohibiting discrimination by public officials on grounds, inter alia, of race, skin colour, national or ethnic background, religion and language and prohibiting the restriction by public officials of the language rights of the citizens in their relations with the authorities (Article 145/1 and 145/2)."
Bulgaria
Bulgarian criminal law prohibits certain crimes motivated by racism, xenophobia and sexual orientation (since 2023), but a 1999 report by the European Commission against Racism and Intolerance found that it does not appear that those provisions "have ever resulted in convictions before the courts in Bulgaria."
Croatia
The Croatian Penal Code explicitly defines hate crime in article 89 as "any crime committed out of hatred for someone's race, skin color, sex, sexual orientation, language, religion, political or other belief, national or social background, asset, birth, education, social condition, age, health condition or other attribute". On 1 January 2013, a new Penal Code was introduced with the recognition of a hate crime based on "race, skin color, religion, national or ethnic background, sexual orientation or gender identity".
Czech Republic
The Czech legislation finds its constitutional basis in the principles of equality and non-discrimination contained in the Charter of Fundamental Rights and Basic Freedoms. From there, two basic lines of protection against hate-motivated incidents can be traced: one through criminal law, the other through civil law.
The current Czech criminal legislation has implications both for decisions about guilt (affecting the decision whether to find a defendant guilty or not guilty) and decisions concerning sentencing (affecting the extent of the punishment imposed). It has three levels, to wit:
a circumstance determining whether an act is a crime – hate motivation is included in the basic constituent elements. If hate motivation is not proven, a conviction for a hate crime is not possible.
a circumstance determining the imposition of a higher penalty – hate motivation is included in the qualified constituent elements for some types of crimes (murder, bodily harm). If hate motivation is not proven, the penalty is imposed according to the scale specified for the basic constituent elements of the crime.
general aggravating circumstance – the court is obligated to take the hate motivation into account as a general aggravating circumstance and determines the amount of penalty to impose. Nevertheless, it is not possible to add together a general aggravating circumstance and a circumstance determining the imposition of a higher penalty. (see Annex for details)
Current criminal legislation does not provide for special penalties for acts that target another by reason of his sexual orientation, age or health status. Only the constituent elements of the criminal offence of Incitement to hatred towards a group of persons or to the curtailment of their rights and freedoms and general aggravating circumstances include attacking a so-called different group of people. Such a group of people can then, of course, be also defined by sexual orientation, age or health status. A certain disparity has thus been created between, on the one hand, those groups of people who are victimized by reason of their skin color, faith, nationality, ethnicity or political persuasion and enjoy increased protection, and, on the other hand, those groups that are victimized by reason of their sexual orientation, age or health status and are not granted increased protection. This gap in protection against attacks motivated by the victim's sexual orientation, age or health status cannot be successfully bridged by interpretation. Interpretation by analogy is inadmissible in criminal law, sanctionable motivations being exhaustively enumerated.
Denmark
Although Danish law does not include explicit hate crime provisions, "section 80(1) of the Criminal Code instructs courts to take into account the gravity of the offence and the offender's motive when meting out penalty, and therefore to attach importance to the racist motive of crimes in determining sentence." In recent years judges have used this provision to increase sentences on the basis of racist motives.
Since 1992, the Danish Civil Security Service (PET) has released statistics on crimes with apparent racist motivation.
Estonia
Under section 151 of the Criminal Code of Estonia of 6 June 2001, which entered into force on 1 September 2002, with amendments and supplements and as amended by the Law of 8 December 2011, "activities which publicly incite to hatred, violence or discrimination on the basis of nationality, race, colour, sex, language, origin, religion, sexual orientation, political opinion, or financial or social status, if this results in danger to the life, health or property of a person, are punishable by a fine of up to 300 fine units or by detention".
Finland
Finnish Criminal Code 515/2003 (enacted 31 January 2003) makes "committing a crime against a person, because of his national, racial, ethnical or equivalent group" an aggravating circumstance in sentencing. In addition, ethnic agitation () is criminalized and carries a fine or a prison sentence of not more than two years. The prosecution need not prove that an actual danger to an ethnic group is caused but only that malicious message is conveyed. A more aggravated hate crime, warmongering (), carries a prison sentence of one to ten years. However, in case of warmongering, the prosecution must prove an overt act that evidently increases the risk that Finland is involved in a war or becomes a target for a military operation. The act in question may consist of
illegal violence directed against a foreign country or its citizens,
systematic dissemination of false information on Finnish foreign policy or defense,
public influencing of public opinion towards a pro-war viewpoint, or
public suggestion that a foreign country or Finland should engage in an aggressive act.
France
In 2003, France enacted penalty-enhancement hate crime laws for crimes motivated by bias against the victim's actual or perceived ethnicity, nation, race, religion, or sexual orientation. The penalties for murder were raised from 30 years (for non-hate crimes) to life imprisonment (for hate crimes), and the penalties for violent attacks leading to permanent disability were raised from 10 years (for non-hate crimes) to 15 years (for hate crimes).
Georgia
"There is no general provision in Georgian law for racist motivation to be considered an aggravating circumstance in prosecutions of ordinary offenses. Certain crimes involving racist motivation are, however, defined as specific offenses in the Georgian Criminal Code of 1999, including murder motivated by racial, religious, national or ethnic intolerance (article 109); infliction of serious injuries motivated by racial, religious, national or ethnic intolerance (article 117); and torture motivated by racial, religious, national or ethnic intolerance (article 126). ECRI reported no knowledge of cases in which this law has been enforced. There is no systematic monitoring or data collection on discrimination in Georgia."
Germany
The German Criminal Code does not have hate crime legislation; instead, it criminalizes hate speech under a number of different laws, including Volksverhetzung. In the German legal framework, motivation is not taken into account when identifying the elements of the offence. However, within the sentencing procedure the judge can define certain principles for determining punishment. Section 46 of the German Criminal Code states that "the motives and aims of the perpetrator; the state of mind reflected in the act and the willfulness involved in its commission" can be taken into consideration when determining the punishment; under this statute, hate and bias have been taken into consideration in sentencing in past cases.
Hate crimes are not specifically tracked by German police, but have been studied separately: a recently published EU "Report on Racism" finds that racially motivated attacks are frequent in Germany, identifying 18,142 incidents for 2006, of which 17,597 were motivated by right-wing ideologies, both representing a year-on-year increase of about 14%. Relative to the size of the population, this represents an eightfold higher rate of hate crimes than reported in the US during the same period. Awareness of hate crimes in Germany remains low.
Greece
Law 927/1979: "Section 1,1 penalises incitement to discrimination, hatred or violence towards individuals or groups because of their racial, national or religious origin, through public written or oral expressions; Section 1,2 prohibits the establishment of, and membership in, organisations which organise propaganda and activities aimed at racial discrimination; Section 2 punishes public expression of offensive ideas; Section 3 penalises the act of refusing, in the exercise of one's occupation, to sell a commodity or to supply a service on racial grounds." Public prosecutors may press charges even if the victim does not file a complaint. However, as of 2003, no convictions had been attained under the law.
Hungary
Violent action, cruelty, and coercion by threat made on the basis of the victim's actual or perceived national, ethnic, religious status or membership in a particular social group are punishable under article 174/B of the Hungarian Criminal Code. This article was added to the Code in 1996. Hungary is a party to the Convention on Cybercrime, but not the Additional Protocol.
Iceland
Section 233a of the Icelandic Penal Code states "Anyone who in a ridiculing, slanderous, insulting, threatening or any other manner publicly abuses a person or a group of people on the basis of their nationality, skin colour, race, religion or sexual orientation, shall be fined or jailed for up to two years." Iceland is a party to the Convention on Cybercrime, but not the Additional Protocol.
India
India does not have any specific laws governing hate crimes in general other than hate speech which is covered under the Indian Penal Code.
Ireland
Ireland's broad-based, comprehensive hate crime legislation has been in legal effect since 31 December 2024.
The Prohibition of Incitement to Hatred Act 1989 created the offence of inciting hatred against a group of persons on account of their race, colour, nationality, religion, ethnic or national origins, membership of the Traveller community (an indigenous minority group), or sexual orientation. Frustration at the low number of prosecutions (18 by 2011) was attributed to a misconception that the law addressed hate crimes more generally as opposed to incitement in particular.
In 2019, a UN rapporteur told Irish representatives at the Committee on the Elimination of Racial Discrimination, meeting at the UN in Geneva, to introduce new hate crime legislation to combat the low prosecution rate for offences under the 1989 act – particularly for online hate speech – and the lack of training for the Garda Síochána on racially motivated crimes. The rapporteur's points came during a rise in anti-immigrant rhetoric and racist attacks in Ireland and were based on recommendations submitted by the Irish Human Rights and Equality Commission and numerous other civil society organisations. Reforms are supported by the Irish Network Against Racism.
The Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill, known as the "Hate Crime Bill", prohibiting hate speech or incitement to hate crimes based on protected characteristics, is in its Third Stage at the Seanad, Ireland's upper house, and the Irish Times reports it is likely to become law in late 2023. It has drawn concern from the Irish Council for Civil Liberties and from across the political spectrum (specifically from Michael McDowell, Rónán Mullen, and People Before Profit), as well as internationally, from business magnate Elon Musk and political activist Donald Trump Jr. Paul Murphy of People Before Profit said the bill created a "thought crime" by its criminalisation of possessing material prepared for circulation where circulation would incite hatred. Pauline O'Reilly, a Green Party senator, said that the existing legislation was "not effective" and outdated, adding that the Gardaí saw a rise of 30% in hate crime in Ireland.
Data published by the Gardaí showed a 29% increase in hate crimes and hate-related incidents from 448 in 2021 to 582 in 2022. The Gardaí recognise that "despite improvements, hate crime and hate related incidents are still under-reported".
Italy
Italian criminal law, at Section 3 of Law No. 205/1993, the so-called Legge Mancino (Mancino law), contains a penalty-enhancement provision for all crimes motivated by racial, ethnic, national, or religious bias. Italy is a party to the Convention on Cybercrime, but not the Additional Protocol.
Kazakhstan
In Kazakhstan, there are constitutional provisions prohibiting propaganda promoting racial or ethnic superiority.
Kyrgyzstan
In Kyrgyzstan, "the Constitution of the State party prohibits any kind of discrimination on grounds of origin, sex, race, nationality, language, faith, political or religious convictions or any other personal or social trait or circumstance, and that the prohibition against racial discrimination is also included in other legislation, such as the Civil, Penal and Labour Codes."
Article 299 of the Criminal Code defines incitement to national, racist, or religious hatred as a specific offense. This article has been used in political trials of suspected members of the banned organization Hizb-ut-Tahrir.
Poland
Article 13 of the Constitution of Poland prohibits organizations "whose programmes or activities sanction racial or national hatred".
Russia
Article 29 of the Constitution of the Russian Federation bans incitement to riot for the sake of stirring societal, racial, ethnic, and religious hatred, as well as the promotion of the superiority of the same. Article 282 of the Criminal Code further includes protections against incitement of hatred (including gender) via various means of communication, establishing criminal penalties including fines and imprisonment. Although a former member of the Council of Europe, Russia is not a party to the Convention on Cybercrime.
Slovenia
In 2023, Slovenia introduced a penalty-enhancement provision in its Penal Code. If the victim's national, racial, religious or ethnic origin, sex, colour, descent, property, education, social status, political or other opinion, disability, sexual orientation or any other personal circumstance was a factor contributing to the commission of the criminal offence, it shall be taken into account when determining the penalty.
Spain
Article 22(4) of the Spanish Penal Code includes a penalty-enhancement provision for crimes motivated by bias against the victim's ideology, beliefs, religion, ethnicity, race, nationality, gender, sexual orientation, illness or disability.
On 14 May 2019, the Spanish Attorney General distributed a circular giving instructions on the interpretation of the hate crime law. This new interpretation includes Nazis as a collective that can be protected under this law.
Although a member of the Council of Europe, Spain is not a party to the Convention on Cybercrime.
Sweden
Article 29 of the Swedish Penal Code includes a penalty-enhancement provision for crimes motivated by bias against the victim's race, color, nationality, ethnicity, sexual orientation, religion, or "other similar circumstance" of the victim.
Ukraine
The constitution of Ukraine guarantees protection against hate crime:
Article 10: "In Ukraine, free development, use and protection of Russian and other languages of ethnic minorities of Ukraine are guaranteed".
Article 11: "The State shall promote the development of the ethnic, cultural, linguistic and religious identity of all indigenous peoples and ethnic minorities of Ukraine".
Article 24: "There can be no privileges or restrictions on the grounds of race, color of the skin, political, religious or other beliefs, sex, ethnic or social origin, property status, place of residence, language or other grounds".
Under the Criminal Code, crimes committed because of hatred are hate crimes and carry increased punishment in many articles of the criminal law. There are also separate articles on punishment for a hate crime.
Article 161: "Violations of equality of citizens depending on their race, ethnicity, religious beliefs, disability and other grounds: Intentional acts aimed at incitement to ethnic, racial or religious hatred and violence, to demean the ethnic honor and dignity, or to repulse citizens' feelings due to their religious beliefs, as well as direct or indirect restriction of rights or the establishment of direct or indirect privileges of citizens on the grounds of race, color, political, religious or other beliefs, sex, disability, ethnic or social origin, property status, place of residence, language or other grounds" (maximum criminal sentence of up to 8 years in prison).
Article 300: "Importation, manufacture or distribution of literature and other media promoting a cult of violence and cruelty, racial, ethnic or religious intolerance and discrimination" (maximum criminal sentence of up to 5 years in prison).
United Kingdom
For England, Wales, and Scotland, the Sentencing Act 2020 makes racial or religious hostility, or hostility related to disability, sexual orientation, or transgender identity an aggravation in sentencing for crimes in general.
Separately, the Crime and Disorder Act 1998 defines separate offences, with increased sentences, for racially or religiously aggravated assaults, harassment, and a handful of public order offences.
For Northern Ireland, the Public Order (Northern Ireland) Order 1987 (S.I. 1987/463 (N.I. 7)) serves the same purposes. A "racial group" is a group of persons defined by reference to race, colour, nationality (including citizenship) or ethnic or national origins. A "religious group" is a group of persons defined by reference to religious belief or lack of religious belief.
"Hate crime" legislation is distinct from "hate speech" legislation. See Hate speech laws in the United Kingdom.
The Crime Survey for England and Wales (CSEW) reported in 2013 that there were an average of 278,000 hate crimes a year with 40 percent being reported according to a victims survey; police records only identified around 43,000 hate crimes a year. It was reported that police recorded a 57-percent increase in hate crime complaints in the four days following the UK's European Union membership referendum; however, a press release from the National Police Chief's Council stated that "this should not be read as a national increase in hate crime of 57 percent".
In 2013, Greater Manchester Police began recording attacks on goths, punks and other alternative culture groups as hate crimes.
On 4 December 2013, Essex Police launched the 'Stop the Hate' initiative as part of a concerted effort to find new ways to tackle hate crime in Essex. The launch was marked by a conference in Chelmsford, hosted by Chief Constable Stephen Kavanagh, which brought together 220 delegates from a range of partner organizations involved in the field. The theme of the conference was 'Report it to Sort it' and the emphasis was on encouraging people to tell police if they have been a victim of hate crime, whether it be based on race, religion, sexual orientation, transgender identity or disability.
Crown Prosecution Service guidance issued on 21 August 2017 stated that online hate crimes should be treated as seriously as offences in person.
Perhaps the most high-profile hate crime in modern Britain occurred in Eltham, London, on 24 April 1993, when 18-year-old black student Stephen Lawrence was stabbed to death in an attack by a gang of white youths. Two white teenagers were later charged with the murder, and at least three other suspects were mentioned in the national media, but the charges against them were dropped within three months after the Crown Prosecution Service concluded that there was insufficient evidence to prosecute. However, a change in the law a decade later allowed a suspect to be charged with a crime twice if new evidence emerged after the original charges were dropped or a "not guilty" verdict was delivered in court. Gary Dobson, who had been charged with the murder in the initial 1993 investigation, was found guilty of Stephen Lawrence's murder in January 2012 and sentenced to life imprisonment, as was David Norris, who had not been charged in 1993. A third suspect, Luke Knight, had been charged in 1993 but was not charged when the case came to court nearly 20 years later.
In September 2020, the Law Commission proposed that sex or gender be added to the list of protected characteristics.
The United Kingdom is a party to the Convention on Cybercrime, but not the Additional Protocol.
A 2021 investigation by Newsnight and The Law Society Gazette found that alleged hate crimes in which the victim was a police officer were significantly more likely to result in a successful prosecution. The investigation found that in several areas, crimes against police officers and staff constituted up to half of all hate crimes convictions, despite representing a much smaller proportion of reported incidents.
Scotland
Under Scottish Common law the courts can take any aggravating factor into account when sentencing someone found guilty of an offence. There is legislation dealing with the offences of incitement of racial hatred, racially aggravated harassment, and prejudice relating to religious beliefs, disability, sexual orientation, and transgender identity. A Scottish Executive working group examined the issue of hate crime and ways of combating crime motivated by social prejudice, reporting in 2004. Its main recommendations were not implemented, but in their manifestos for the 2007 Scottish Parliament election several political parties included commitments to legislate in this area, including the Scottish National Party, which now forms the Scottish Government. The Offences (Aggravation by Prejudice) (Scotland) Bill was introduced on 19 May 2008 by Patrick Harvie MSP, having been prepared with support from the Scottish Government, and was passed unanimously by the parliament on 3 June 2009.
The Hate Crime and Public Order (Scotland) Act 2021 comes into force on 1 April 2024. Its introduction was criticised by the Association of Scottish Police Superintendents, which said it feared Police Scotland would be deluged by cases, diverting officers from tackling violent offenders, and that the Act threatened to fuel claims of “institutional bias” against the force.
Non-crime hate incidents
In March 2024, Scottish Conservatives MSP Murdo Fraser threatened Police Scotland with legal action after his criticism of the Scottish Government's transgender policy was logged as a "hate incident"; he had been told that his name appears in police records for expressing his view about the policy even though no crime was committed. Fraser had shared a column written by Susan Dalgety for The Scotsman, which claimed the Scottish Government's 'non-binary equality action plan' would lead to children being "damaged by this cult", and commented: "Choosing to identify as 'non-binary' is as valid as choosing to identify as a cat. I'm not sure governments should be spending time on action plans for either."
Eurasian countries with no hate crime laws
Albania, Cyprus, San Marino and Turkey have no hate crime laws. Nonetheless, all of these except Turkey are parties to the Convention on Cybercrime and the Additional Protocol.
North America
Canada
"In Canada the legal definition of a hate crime can be found in sections 318 and 319 of the Criminal Code".
In 1996, the federal government amended a section of the Criminal Code that pertains to sentencing. Specifically, section 718.2. The section states (with regard to the hate crime):
A vast majority (84 percent) of hate crime perpetrators were "male, with an average age of just under 30. Less than 10 percent of those accused had criminal records, and less than 5 percent had previous hate crime involvement". "Only 4 percent of hate crimes were linked to an organized or extremist group".
As of 2004, Jewish people were the largest ethnic group targeted by hate crimes, followed by black people, Muslims, South Asians, and homosexuals (Silver et al., 2004). More recently, hate crimes targeting Jews accounted for 67% of all hate crimes targeting religions in 2022.
During the Nazi regime in Germany, antisemitism was a cause of hate-related violence in Canada. For example, on 16 August 1933, there was a baseball game in Toronto and one team was made up mostly of Jewish players. At the end of the game, a group of Nazi sympathizers unfolded a Swastika flag and shouted "Heil Hitler." That event erupted into a brawl that pitted Jews and Italians against Anglo Canadians; the brawl went on for hours.
The first time someone was charged for hate speech over the internet occurred on 27 March 1996, when a Winnipeg teenager was arrested by the police for sending an email to a local political activist that contained the message "Death to homosexuals...it's prescribed in the Bible! Better watch out next Gay Pride Week."
During the COVID-19 pandemic, Canada saw a sudden rise in hate crimes based on race, religion, and sexual orientation. Statistics Canada reported there was a 72% increase in hate crimes between 2019 and 2021.
Mexico
Alejandro Gertz Manero, Attorney General of Mexico, recommended in August 2020 that all murders involving women be investigated as femicides. An average of 11 women are killed in Mexico every day.
Murders of LGBTQ individuals are not legally classified as hate crimes in Mexico, although Luis Guzman of the Cohesión de Diversidades para la Sustentabilidad (Codise) notes that there is a lot of homophobia in Mexico, particularly in the states of Veracruz, Chihuahua, and Michoacán. Between 2014 and May 2020, there have been 209 such murders registered.
United States
Hate crime laws have a long history in the United States. The first hate crime laws were passed after the American Civil War, beginning with the Civil Rights Act of 1871, in order to combat the growing number of racially motivated crimes which were being committed by the Reconstruction-era Ku Klux Klan. The modern era of hate-crime legislation began in 1968 with the passage of a federal statute, 18 U.S.C.A. § 249, part of the Civil Rights Act, which made it illegal to "by force or by threat of force, injure, intimidate, or interfere with anyone who is engaged in six specified protected activities, by reason of their race, color, religion, or national origin." However, "The prosecution of such crimes must be certified by the U.S. attorney general."
The first state hate-crime statute, California's Section 190.2, was passed in 1978 and provided penalty enhancements in cases when murders were motivated by prejudice against four "protected status" categories: race, religion, color, and national origin. Washington included ancestry in a statute which was passed in 1981. Alaska included creed and sex in 1982, and later disability, sexual orientation, and ethnicity. In the 1990s some state laws began to include age, marital status, membership in the armed forces, and membership in civil rights organizations.
Until California state legislation included all crimes as possible hate crimes in 1987, criminal acts which could be considered hate crimes in various states included aggravated assault, assault and battery, vandalism, rape, threats and intimidation, arson, trespassing, stalking, and various "lesser" acts.
As defined in the 1999 National Crime Victim Survey, "A hate crime is a criminal offence. In the United States, federal prosecution is possible for hate crimes committed on the basis of a person's race, religion, or national origin when engaging in a federally protected activity." In 2009, capping a broad-based public campaign lasting more than a decade, President Barack Obama signed into law the Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act. The Act added actual or perceived gender, gender identity, sexual orientation, and disability to the federal definition of a hate crime, and dropped the prerequisite that the victim be engaging in a federally protected activity. Led by Shepard's parents and a coalition of civil rights groups, with the ADL (Anti-Defamation League) in a lead role, the campaign to pass the Matthew Shepard Act lasted 13 years, in large part because of opposition to including the term "sexual orientation" as one of the bases for deeming a crime to be a hate crime.
ADL also drafted model hate crimes legislation in the 1980s that serves as the template for the legislation that a majority of states have adopted. As of the fall of 2020, 46 of the 50 states and Washington, D.C. have statutes criminalizing various types of hate crimes. Thirty-one states and the District of Columbia have statutes creating a civil cause of action in addition to the criminal penalty for similar acts. Twenty-seven states and the District of Columbia have statutes requiring the state to collect hate crime statistics. In May 2020, the killing of African-American jogger Ahmaud Arbery reinvigorated efforts to adopt a hate-crimes law in Georgia, which was one of a handful of states without such legislation. Led in great part by the Hate-Free Georgia Coalition, a group of 35 nonprofit groups organized by the Georgia state ADL, the legislation was adopted in June 2020, after 16 years of debate.
According to the FBI Hate Crime Statistics report for 2006, hate crimes increased nearly 8 percent nationwide, with a total of 7,722 incidents and 9,080 offences reported by participating law enforcement agencies. Of the 5,449 crimes against persons, 46 percent were classified as intimidation, and 32 percent as simple assaults. Acts of vandalism or destruction comprised 81 percent of the 3,593 crimes against property.
However, according to the FBI Hate Crime Statistics for 2007, the number of hate crimes decreased to 7,624 incidents reported by participating law enforcement agencies. These incidents included nine murders and two rapes (out of the almost 17,000 murders and 90,000 forcible rapes committed in the U.S. in 2007).
In June 2009, Attorney General Eric Holder said recent killings showed the need for a tougher U.S. hate-crimes law to stop "violence masquerading as political activism."
Leadership Conference on Civil Rights Education Fund published a report in 2009 revealing that 33 percent of hate-crime offenders were under the age of 18, while 29 percent were between the ages of 18 and 24.
The 2011 hate-crime statistics show 46.9 percent were motivated by race, and 20.8 percent by sexual orientation.
In 2015, the Hate Crimes Statistics report identified 5,818 single-bias incidents involving 6,837 offenses, 7,121 victims, and 5,475 known offenders.
In 2017, the FBI released new data showing a 17 percent increase in hate crimes between 2016 and 2017.
In 2018, the Hate Crime Statistics report showed 59.5 percent were motivated by race bias and 16.9 percent by sexual orientation.
Prosecutions of hate crimes have been difficult in the United States. Recently, state governments have attempted to re-investigate and re-try past hate crimes. One notable example was Mississippi's decision to retry Byron De La Beckwith in 1990 for the 1963 murder of Medgar Evers, a prominent figure in the NAACP and a leader of the civil rights movement. This was the first time in U.S. history that an unresolved civil rights case was re-opened. De La Beckwith, a member of the Ku Klux Klan, was tried for the murder on two previous occasions, resulting in hung juries. A mixed-race jury found Beckwith guilty of murder, and he was sentenced to life in prison in 1994.
According to a November 2016 report issued by the FBI, hate crimes are on the rise in the United States. The number of hate crimes increased from 5,850 in 2015, to 6,121 hate crime incidents in 2016, an increase of 4.6 percent.
The Khalid Jabara-Heather Heyer National Opposition to Hate, Assault, and Threats to Equality Act (NO HATE), which was first introduced in 2017, was reintroduced in June 2019 to improve hate crime reporting and expand support for victims as a response to anti-LGBTQ, anti-Muslim and antisemitic attacks. The bill would fund state hate-crime hotlines, and support expansion of reporting and training programs in law enforcement agencies.
According to a 2021 study, in the years between 1992 and 2014, white people were the offenders in 74.5 percent of anti-Asian hate crimes, 99 percent of anti-black hate crimes, and 81.1 percent of anti-Hispanic hate crimes.
Victims in the United States
One of the largest waves of hate crimes in the history of the United States took place during the civil rights movement in the 1950s and 1960s. Violence and threats of violence were common against African Americans, and hundreds of people died due to such acts. Members of this ethnic group faced violence from groups such as the Ku Klux Klan, as well as violence from individuals who were committed to maintaining segregation. At the time, civil rights leaders such as Martin Luther King Jr. and their supporters fought hard for the right of African Americans to vote, as well as for equality in their everyday lives. African Americans have been the target of hate crimes since the Civil War, and the humiliation of this ethnic group was also desired by many anti-black individuals. Other frequently reported bias motivations were bias against a religion, bias against a particular sexual orientation, and bias against a particular ethnicity or national origin. At times, these bias motivations overlapped, because violence can be both anti-gay and anti-black, for example.
Analysts have compared groups in terms of the per capita rate of hate crimes committed against them to allow for differing populations. Overall, the total number of hate crimes committed since the first hate crime bill was passed in 1997 is 86,582.
Among the groups which are mentioned in the Hate Crimes Statistics Act, the largest number of hate crimes are committed against African Americans. During the Civil Rights Movement, some of the most notorious hate crimes included the 1968 assassination of Martin Luther King Jr., the 1964 murders of Charles Moore and Henry Dee, the 1963 16th Street Baptist Church bombing, the 1955 murder of Emmett Till, and the burning of crosses, churches, Jewish synagogues, and other places of worship of minority religions. Such acts began to take place more frequently after the racial integration of many schools and public facilities.
Since then, hate crimes targeting Jews have risen sharply: in 2023, antisemitic hate crimes increased by 63% to an all-time high of 1,832 incidents in the United States. Furthermore, Jews comprise roughly 2% of the American population, but represent 68% of all religion-based hate crimes in the country.
High-profile murders targeting victims based on their sexual orientation have prompted the passage of hate crimes legislation, notably the cases of Sean W. Kennedy and Matthew Shepard. Kennedy's murder was mentioned by Senator Gordon Smith in a speech on the floor of the U.S. Senate while he advocated such legislation. The Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act was signed into law in 2009. It included sexual orientation, gender identity and expression, disability status, and military personnel and their family members. It was the first all-inclusive bill ever passed in the United States, taking 45 years to complete.
Gender-based crimes may also be considered hate crimes. This view would designate rape and domestic violence, as well as non-interpersonal violence against women such as the École Polytechnique massacre in Quebec, as hate crimes.
Following the September 11, 2001, terrorist attacks, the United States experienced a spike in hate crimes against Muslim individuals. In the year before, only 28 hate crimes against Muslims had been recorded; in 2001, this number jumped to 481. While the number decreased in the following years, the number of anti-Muslim hate crimes remains higher than pre-2001 levels.
In May 2018, ProPublica reviewed police reports for 58 cases of purported anti-heterosexual hate crimes. ProPublica found that about half of the cases were anti-LGBT hate crimes that had been miscategorized, and that the rest were motivated by hate towards Jews, blacks or women or that there was no element of a hate crime at all. ProPublica did not find any cases of hate crimes spurred by anti-heterosexual bias.
Anti-trans hate crime
In 2017, shortly after President Donald Trump took office, hate crimes against transgender individuals increased. In June 2020, after the deaths of several African Americans at the hands of police officers – in particular, George Floyd – triggered protests around the world as part of the Black Lives Matter movement, hate crimes against the black trans community began to increase.
South America
Brazil
In Brazil, hate crime laws focus on racism, racial injury, and other special bias-motivated crimes such as, for example, murder by death squads and genocide on the grounds of nationality, ethnicity, race or religion. Murder by death squads and genocide are legally classified as "hideous crimes" (crimes hediondos in Portuguese).
The crimes of racism and racial injury, although similar, are enforced slightly differently. Article 140, 3rd paragraph, of the Penal Code establishes a harsher penalty, from a minimum of one year to a maximum of three years, for injuries motivated by "elements referring to race, color, ethnicity, religion, origin, or the condition of being an aged or disabled person". On the other side, Law 7716/1989 covers "crimes resulting from discrimination or prejudice on the grounds of race, color, ethnicity, religion, or national origin".
In addition, the Brazilian Constitution defines as a "fundamental goal of the Republic" (Article 3rd, clause IV) "to promote the well-being of all, with no prejudice as to origin, race, sex, color, age, and any other forms of discrimination".
Chile
In 2012, the Anti-discrimination law amended the Criminal Code adding a new aggravating circumstance of criminal responsibility, as follows: "Committing or participating in a crime motivated by ideology, political opinion, religion or beliefs of the victim; nation, race, ethnic or social group; sex, sexual orientation, gender identity, age, affiliation, personal appearance or suffering from illness or disability."
Middle East
Israel is the only country in the Middle East with hate crime laws. Legislation passed by the Israeli Knesset (parliament) defines a hate crime as a crime committed on grounds of race, religion, gender or sexual orientation.
Support for and opposition to hate crime laws
Support
Justifications for harsher punishments for hate crimes focus on the notion that hate crimes cause greater individual and societal harm. In a 2014 book, author Marian Duggan asserts that when the core of a person's identity is attacked, the degradation and dehumanization are especially severe, and additional emotional and physiological problems are likely to result. Wider society can suffer from the disempowerment of a group of people. Furthermore, it is asserted that the chances for retaliatory crimes are greater when a hate crime has been committed. The riots in Los Angeles, California, that followed the beating of Rodney King, a black motorist, by a group of white police officers are cited as support for this argument, as is the beating of white truck driver Reginald Denny by black rioters during the same riots.
In Wisconsin v. Mitchell, the U.S. Supreme Court unanimously found that penalty-enhancement hate crime statutes do not conflict with free speech rights, because they do not punish an individual for exercising freedom of expression; rather, they allow courts to consider motive when sentencing a criminal for conduct which is not protected by the First Amendment. In the case of Chaplinsky v. New Hampshire, the court defined "fighting words" as "those which by their very utterance inflict injury or tend to incite an immediate breach of the peace."
David Brax argues that critics of hate-crime laws are wrong in claiming that hate crimes punish thoughts or motives; he asserts they do not do this, but instead punish people for choosing these reasons to commit a criminal act. Similarly, Andrew Seidel writes, "Hate crime or bias intimidation crimes are not thoughtcrimes. Most crimes require two things: an act and an intent... If you simply hate someone based on race, sexuality, or creed, that thought is not punishable. Only the thought combined with an illegal action is criminal."
Opposition
In R.A.V. v. City of St. Paul, the U.S. Supreme Court unanimously found that the St. Paul Bias-Motivated Crime Ordinance amounted to viewpoint-based discrimination in conflict with rights of free speech, because it selectively criminalized bias-motivated speech or symbolic speech for disfavored topics while permitting such speech for other topics. Many critics further assert that hate crime laws conflict with an even more fundamental right: free thought. The claim is that hate-crime legislation effectively makes certain ideas or beliefs, including religious ones, illegal, in other words, thought crimes. Heidi Hurd argues that hate crime laws criminalize certain dispositions yet do not show why hate is a morally worse motivation for a crime than jealousy, greed, sadism or vengeance, or why hatred and bias are uniquely responsive to criminal sanction compared to other motivations. Hurd argues that whether one disposition is worse than another depends on the particular case, and that it is therefore difficult to argue that some motivations are categorically worse than others.
In their book Hate Crimes: Criminal Law and Identity Politics, James B. Jacobs and Kimberly Potter criticize hate crime legislation for exacerbating conflicts between groups. They assert that by defining crimes as being committed by one group against another, rather than as being committed by individuals against their society, the labeling of crimes as "hate crimes" causes groups to feel persecuted by one another, and that this impression of persecution can incite a backlash and thus lead to an actual increase in crime. Jacobs and Potter also argued that hate crime legislation can end up covering the victimization of only some groups rather than all, which is a form of discrimination itself, and that attempts to remedy this by extending hate crime protection to all identifiable groups would make hate crimes coterminous with generic criminal law. The authors also suggest that arguments which attempt to portray hate crimes as worse than normal crimes because they spread fear in a community are unsatisfactory, as normal criminal acts can also spread fear yet only hate crimes are singled out. Indeed, it has been argued that victims have varied reactions to hate crimes, so it is not necessarily true that hate crimes are regarded as more harmful than other crimes. Dan Kahan argues that the "greater harm" argument is conceptually flawed: it is only because people value their group identities that attacks motivated by an animus against those identities are seen as worse, so it is the victim's and society's reaction to the crime, rather than the crime itself, that accounts for the perceived greater harm.
Heidi Hurd argues that hate crime legislation represents an effort by the state to encourage a certain moral character in its citizens and thus reflects the view that the instillation of virtue and the elimination of vice are legitimate state goals, which she argues contradicts the principles of liberalism. Hurd also argues that increasing the punishment for an offence because the perpetrator was motivated by hate rather than some other motivation means that the justice system is treating the same crime differently, even though treating like cases alike is a cornerstone of criminal justice.
Some have argued hate crime laws bring the law into disrepute and further divide society, as groups apply to have their critics silenced. American forensic psychologist Karen Franklin said that the term hate crime is somewhat misleading since it assumes there is a hateful motivation which is not present in many occasions; in her view, laws to punish people who commit hate crimes may not be the best remedy for preventing them because the threat of future punishment does not usually deter such criminal acts. Some on the political left have been critical of hate crime laws for expanding the criminal justice system and dealing with violence against minority groups through punitive measures. Briana Alongi argues that hate crime legislation is inconsistent, redundant and arbitrarily applied, while also being partially motivated by political opportunism and media bias rather than purely by legal principle.
See also
Discrimination
Documenting Hate
Ethnic cleansing
Ethnocentrism
Genocide
Hate group
Hate media
Hate speech
Hate studies
Nativism (politics)
Oppression
Persecution
Prejudice
Racism
Supremacism
Xenophobia
References
External links
Hate crimes information, by Gregory Herek
Alexander Verkhovsky, Criminal Law on Hate Crime, Incitement to Hatred and Hate Speech in OSCE Participating States. The Hague: SOVA Center, 2016. 136 pages.
Hate Crime Statistics, annual FBI/U.S. Department of Justice report on the prevalence of hate crimes in the United States. Required by the Hate Crime Statistics Act.
A Policymaker's Guide to Hate Crimes, a publication by the National Criminal Justice Reference Service, part of the U.S. Department of Justice. Many parts of this article have been adapted from this document.
Peabody, Michael "Thought & Crime," Liberty Magazine, March/April 2008, review of recently proposed hate crime legislation and criminal intent issues.
"Hate Crime." Oxford Bibliographies Online: Criminology.
OSCE Hate Crime Reporting website
Crime by type
LGBTQ rights by issue
Racism
Aggression
Harassment and bullying
Abuse | Hate crime | [
"Biology"
] | 12,417 | [
"Behavior",
"Abuse",
"Harassment and bullying",
"Aggression",
"Human behavior"
] |
13,564 | https://en.wikipedia.org/wiki/Homomorphism | In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word homomorphism comes from the Ancient Greek language: ὁμός (homos) meaning "same" and μορφή (morphē) meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German ähnlich meaning "similar" to ὁμός meaning "same". The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925).
Homomorphisms of vector spaces are also called linear maps, and their study is the subject of linear algebra.
The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory.
A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms.
Definition
A homomorphism is a map between two algebraic structures of the same type (e.g. two groups, two fields, two vector spaces), that preserves the operations of the structures. This means a map f: A → B between two sets A, B equipped with the same structure such that, if ⋅ is an operation of the structure (supposed here, for simplification, to be a binary operation), then
f(x ⋅ y) = f(x) ⋅ f(y)
for every pair x, y of elements of A. One often says that f preserves the operation or is compatible with the operation.
Formally, a map f: A → B preserves an operation μ of arity k, defined on both A and B, if
f(μ_A(a_1, …, a_k)) = μ_B(f(a_1), …, f(a_k))
for all elements a_1, …, a_k in A.
The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure.
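Since the mathematical notation has not survived in this copy of the text, the two preservation conditions can be restated in LaTeX as follows; the symbols (f for the map, μ for a k-ary operation, e for an identity element) are chosen here only for illustration:

```latex
% Preservation of a k-ary operation \mu and of a constant (0-ary operation, here an identity element e).
f\bigl(\mu_A(a_1,\dots,a_k)\bigr) \;=\; \mu_B\bigl(f(a_1),\dots,f(a_k)\bigr)
\qquad\text{and}\qquad
f(e_A) \;=\; e_B .
```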
For example:
A semigroup homomorphism is a map between semigroups that preserves the semigroup operation.
A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation).
A group homomorphism is a map between groups that preserves the group operation. This implies that the group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism.
A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism.
A linear map is a homomorphism of vector spaces; that is, a group homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication.
A module homomorphism, also called a linear map between modules, is defined similarly.
An algebra homomorphism is a map that preserves the algebra operations.
An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism.
The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function
x ↦ exp(x)
satisfies
exp(x + y) = exp(x) exp(y),
and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies
ln(xy) = ln(x) + ln(y),
and is also a group homomorphism.
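As an informal aside (not part of the article text), the homomorphism property of the exponential map can be checked numerically in a few lines of Python; a second map, reduction modulo 12 from (Z, +) to (Z/12Z, +), is added here as an extra, assumed example:

```python
import math
import random

# f(x + y) == f(x) * f(y) for f = exp, a homomorphism from (R, +) to (R_{>0}, *).
for _ in range(5):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))

# g(a + b) == g(a) + g(b) in Z/12Z for g(n) = n mod 12, a group homomorphism
# from (Z, +) onto (Z/12Z, +).  This example is not taken from the text above.
def g(n):
    return n % 12

for _ in range(5):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    assert g(a + b) == (g(a) + g(b)) % 12

print("homomorphism property holds on all sampled inputs")
```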
Examples
The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function f between these rings that sends each real number r to the 2×2 diagonal matrix with r in both diagonal entries, then f is a homomorphism of rings, since f preserves both addition, f(r + s) = f(r) + f(s), and multiplication, f(rs) = f(r) f(s).
For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function f from the nonzero complex numbers to the nonzero real numbers by
f(z) = |z|.
That is, f(z) is the absolute value (or modulus) of the complex number z. Then f is a homomorphism of groups, since it preserves multiplication:
f(z₁z₂) = |z₁z₂| = |z₁| |z₂| = f(z₁) f(z₂).
Note that f cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition:
|z₁ + z₂| ≤ |z₁| + |z₂| in general.
As another example, consider the monoid homomorphism f from the monoid (N, +, 0) to the monoid (N, ×, 1) defined by f(x) = 2^x. Due to the different names of the corresponding operations, the structure preservation properties satisfied by f amount to f(x + y) = f(x) × f(y) and f(0) = 1.
A composition algebra A over a field F has a quadratic form, called a norm, N: A → F, which is a group homomorphism from the multiplicative group of A to the multiplicative group of F.
Special homomorphisms
Several kinds of homomorphisms have a specific name, which is also defined for general morphisms.
Isomorphism
An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism.
In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set.
More precisely, if
f: A → B
is a (homo)morphism, it has an inverse if there exists a homomorphism
g: B → A
such that
f ∘ g = Id_B and g ∘ f = Id_A.
If A and B have underlying sets, and f: A → B has an inverse g, then f is bijective. In fact, f is injective, as f(x) = f(y) implies x = g(f(x)) = g(f(y)) = y, and f is surjective, as, for any y in B, one has y = f(g(y)), and y is thus the image of an element of A.
Conversely, if f: A → B is a bijective homomorphism between algebraic structures, let g: B → A be the map such that g(y) is the unique element x of A such that f(x) = y. One has f ∘ g = Id_B and g ∘ f = Id_A, and it remains only to show that g is a homomorphism. If ∗ is a binary operation of the structure, for every pair x, y of elements of B, one has
g(x ∗_B y) = g(f(g(x)) ∗_B f(g(y))) = g(f(g(x) ∗_A g(y))) = g(x) ∗_A g(y),
and g is thus compatible with ∗. As the proof is similar for any arity, this shows that g is a homomorphism.
This proof does not work for non-algebraic structures. For example, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous.
Endomorphism
An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to its target.
The endomorphisms of an algebraic structure, or of an object of a category, form a monoid under composition.
The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension.
Automorphism
An automorphism is an endomorphism that is also an isomorphism.
The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure.
Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group is the automorphism group of a vector space of dimension over a field .
The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory.
Monomorphism
For algebraic structures, monomorphisms are commonly defined as injective homomorphisms.
In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable. This means that a (homo)morphism f: A → B is a monomorphism if, for any pair g, h of morphisms from any other object C to A, the equality f ∘ g = f ∘ h implies g = h.
These two definitions of monomorphism are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements).
In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules.
A split monomorphism is a homomorphism that has a left inverse and thus it is itself a right inverse of that other homomorphism. That is, a homomorphism f: A → B is a split monomorphism if there exists a homomorphism g: B → A such that g ∘ f = Id_A. A split monomorphism is always a monomorphism, for both meanings of monomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures.
An injective homomorphism is left cancelable: if f ∘ g = f ∘ h, one has f(g(x)) = f(h(x)) for every x in C, the common source of g and h. If f is injective, then g(x) = h(x), and thus g = h. This proof works not only for algebraic structures, but also for any category whose objects are sets and arrows are maps between these sets. For example, an injective continuous map is a monomorphism in the category of topological spaces.
For proving that, conversely, a left cancelable homomorphism is injective, it is useful to consider a free object on . Given a variety of algebraic structures, a free object on is a pair consisting of an algebraic structure of this variety and an element of satisfying the following universal property: for every structure of the variety, and every element of , there is a unique homomorphism such that . For example, for sets, the free object on is simply ; for semigroups, the free object on is which, as a semigroup, is isomorphic to the additive semigroup of the positive integers; for monoids, the free object on is which, as a monoid, is isomorphic to the additive monoid of the nonnegative integers; for groups, the free object on is the infinite cyclic group which, as a group, is isomorphic to the additive group of the integers; for rings, the free object on is the polynomial ring; for vector spaces or modules, the free object on is the vector space or free module that has as a basis.
If a free object over exists, then every left cancelable homomorphism is injective: let be a left cancelable homomorphism, and and be two elements of such . By definition of the free object , there exist homomorphisms and from to such that and . As , one has by the uniqueness in the definition of a universal property. As is left cancelable, one has , and thus . Therefore, is injective.
Existence of a free object on for a variety (see also ): For building a free object over , consider the set of the well-formed formulas built up from and the operations of the structure. Two such formulas are said equivalent if one may pass from one to the other by applying the axioms (identities of the structure). This defines an equivalence relation, if the identities are not subject to conditions, that is if one works with a variety. Then the operations of the variety are well defined on the set of equivalence classes of for this relation. It is straightforward to show that the resulting object is a free object on .
Epimorphism
In algebra, epimorphisms are often defined as surjective homomorphisms. On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms. This means that a (homo)morphism f: A → B is an epimorphism if, for any pair g, h of morphisms from B to any other object C, the equality g ∘ f = h ∘ f implies g = h.
A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of epimorphism are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups. The importance of these structures in all mathematics, especially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions.
Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism.
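The following short argument, added here for illustration, shows why the inclusion of the integers into the rationals is right cancelable in the category of rings even though it is not surjective: any ring homomorphism out of Q is already determined by its values on Z.

```latex
% If g_1, g_2 \colon \mathbb{Q} \to R are ring homomorphisms agreeing on \mathbb{Z},
% then for every nonzero integer n,
%   1 = g_i\!\left(n \cdot \tfrac{1}{n}\right) = g_i(n)\, g_i\!\left(\tfrac{1}{n}\right),
% so g_i(1/n) = g_i(n)^{-1}, and hence for every fraction m/n:
g_1\!\left(\tfrac{m}{n}\right) = g_1(m)\, g_1(n)^{-1}
  = g_2(m)\, g_2(n)^{-1} = g_2\!\left(\tfrac{m}{n}\right).
```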
A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred.
A split epimorphism is a homomorphism that has a right inverse and thus it is itself a left inverse of that other homomorphism. That is, a homomorphism f: A → B is a split epimorphism if there exists a homomorphism g: B → A such that f ∘ g = Id_B. A split epimorphism is always an epimorphism, for both meanings of epimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures.
In summary, one has
the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces.
Let be a homomorphism. We want to prove that if it is not surjective, it is not right cancelable.
In the case of sets, let be an element of that not belongs to , and define such that is the identity function, and that for every except that is any other element of . Clearly is not right cancelable, as and
In the case of vector spaces, abelian groups and modules, the proof relies on the existence of cokernels and on the fact that the zero maps are homomorphisms: let be the cokernel of , and be the canonical map, such that . Let be the zero map. If is not surjective, , and thus (one is a zero map, while the other is not). Thus is not cancelable, as (both are the zero map from to ).
Kernel
Any homomorphism f: X → Y defines an equivalence relation ~ on X by a ~ b if and only if f(a) = f(b). The relation ~ is called the kernel of f. It is a congruence relation on X. The quotient set X/~ can then be given a structure of the same type as X, in a natural way, by defining the operations of the quotient set by [x] ∗ [y] = [x ∗ y], for each operation ∗ of X. In that case the image of X in Y under the homomorphism f is necessarily isomorphic to X/~; this fact is one of the isomorphism theorems.
When the algebraic structure is a group for some operation, the equivalence class K of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by X/K (usually read as "X mod K"). Also in this case, it is K, rather than ~, that is called the kernel of f. The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideals for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals).
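For the group case, the relationship just described is usually stated as the first isomorphism theorem; written out with symbols supplied here for readability:

```latex
% Kernel of a group homomorphism f : G \to H, and the first isomorphism theorem.
\ker f = \{\, x \in G \mid f(x) = e_H \,\},
\qquad
G / \ker f \;\cong\; \operatorname{im} f \subseteq H .
```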
Relational structures
In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let L be a signature consisting of function and relation symbols, and A, B be two L-structures. Then a homomorphism from A to B is a mapping h from the domain of A to the domain of B such that
h(FA(a1,...,an)) = FB(h(a1),...,h(an)) for each n-ary function symbol F in L,
RA(a1,...,an) implies RB(h(a1),...,h(an)) for each n-ary relation symbol R in L.
In the special case with just one binary relation, we obtain the notion of a graph homomorphism.
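As a concrete illustration of this condition in the graph case, the following Python sketch checks that a map between vertex sets sends every edge to an edge; the two small graphs and the map are made-up examples, not drawn from the text:

```python
# A graph homomorphism h: G -> H must map every edge (u, v) of G to an edge (h(u), h(v)) of H.
def is_graph_homomorphism(h, edges_g, edges_h):
    return all((h[u], h[v]) in edges_h for (u, v) in edges_g)

# G: a directed 4-cycle.  H: a single pair of vertices with edges in both directions.
edges_g = {(0, 1), (1, 2), (2, 3), (3, 0)}
edges_h = {("a", "b"), ("b", "a")}
h = {0: "a", 1: "b", 2: "a", 3: "b"}  # alternate between the two vertices of H

print(is_graph_homomorphism(h, edges_g, edges_h))  # True: every edge of G lands on an edge of H
```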
Formal language theory
Homomorphisms are also used in the study of formal languages and are often briefly referred to as morphisms. Given alphabets Σ₁ and Σ₂, a function h: Σ₁* → Σ₂* such that h(uv) = h(u)h(v) for all u, v in Σ₁* is called a homomorphism on Σ₁*. If h is a homomorphism on Σ₁* and ε denotes the empty string, then h is called an ε-free homomorphism when h(x) ≠ ε for all x ≠ ε in Σ₁*.
A homomorphism h on Σ₁* that satisfies |h(a)| = k for all a in Σ₁ is called a k-uniform homomorphism. If |h(a)| = 1 for all a in Σ₁ (that is, h is 1-uniform), then h is also called a coding or a projection.
The set of words formed from the alphabet may be thought of as the free monoid generated by Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.
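A homomorphism on the free monoid is determined by where it sends individual letters; the following Python sketch (with an arbitrarily chosen alphabet and letter images) illustrates the compatibility with concatenation and with the empty word:

```python
# A homomorphism on Sigma* is fixed by a table giving the image of each letter.
letter_images = {"a": "01", "b": "1", "c": ""}  # "c" is erased, so this h is not epsilon-free


def h(word):
    # Extend letter by letter: h(x1 x2 ... xn) = h(x1) h(x2) ... h(xn), and h("") = "".
    return "".join(letter_images[ch] for ch in word)


u, v = "abc", "cab"
assert h(u + v) == h(u) + h(v)  # compatible with concatenation (the monoid operation)
assert h("") == ""              # the identity element (empty word) maps to the identity
print(h(u), h(v), h(u + v))     # 011 011 011011
```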
See also
Diffeomorphism
Homomorphic encryption
Homomorphic secret sharing – a simplistic decentralized voting protocol
Morphism
Quasimorphism
Notes
Citations
References
Morphisms | Homomorphism | [
"Mathematics"
] | 3,871 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Category theory",
"Mathematical relations",
"Morphisms"
] |
13,567 | https://en.wikipedia.org/wiki/HyperCard | HyperCard is a software application and development kit for Apple Macintosh and Apple IIGS computers. It is among the first successful hypermedia systems predating the World Wide Web.
HyperCard combines a flat-file database with a graphical, flexible, user-modifiable interface. HyperCard includes a built-in programming language called HyperTalk for manipulating data and the user interface.
This combination of features – a database with simple form layout, flexible support for graphics, and ease of programming – suits HyperCard for many different projects such as rapid application development of applications and databases, interactive applications with no database requirements, command and control systems, and many examples in the demoscene.
HyperCard was originally released in 1987 for $49.95 and was included free with all new Macs sold afterwards. It received its final update in 1998, after the return of Steve Jobs to Apple, and was withdrawn from sale in March 2004. HyperCard was not ported to Mac OS X, but can run in the Classic Environment on versions of Mac OS X that support it.
Overview
Design
HyperCard is based on the concept of a "stack" of virtual "cards". Cards hold data, just as they would in a Rolodex card-filing device. Each card contains a set of interactive objects, including text fields, check boxes, buttons, and similar common graphical user interface (GUI) elements. Users browse the stack by navigating from card to card, using built-in navigation features, a powerful search mechanism, or through user-created scripts.
Users build or modify stacks by adding new cards. They place GUI objects on the cards using an interactive layout engine based on a simple drag-and-drop interface. Also, HyperCard includes prototype or template cards called backgrounds; when new cards are created they can refer to one of these background cards, which causes all of the objects on the background to "show through" behind the new card. This way, a stack of cards with a common layout and functionality can be created. The layout engine is similar in concept to a form as used in most rapid application development (RAD) environments such as Borland Delphi, and Microsoft Visual Basic and Visual Studio.
The database features of the HyperCard system are based on the storage of the state of all of the objects on the cards in the physical file representing the stack. The database does not exist as a separate system within the HyperCard stack; no database engine or similar construct exists. Instead, the state of any object in the system is considered to be live and editable at any time. From the HyperCard runtime's perspective, there is no difference between moving a text field on the card and typing into it; both operations simply change the state of the target object within the stack. Such changes are immediately saved when complete, so typing into a field causes that text to be stored to the stack's physical file. The system operates in a largely stateless fashion, with no need to save during operation. This is in common with many database-oriented systems, although somewhat different from document-based applications.
The final key element in HyperCard is the script, a single code-carrying element of every object within the stack. The script is a text field whose contents are interpreted in the HyperTalk language. Like any other property, the script of any object can be edited at any time and changes are saved as soon as they were complete. When the user invokes actions in the GUI, like clicking on a button or typing into a field, these actions are translated into events by the HyperCard runtime. The runtime then examines the script of the object that is the target of the event, like a button, to see if its script object contains the event's code, called a handler. If it does, the HyperTalk engine runs the handler; if it does not, the runtime examines other objects in the visual hierarchy.
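HyperTalk itself is not reproduced here, but the message path just described can be sketched in Python; the class layout and method names below are illustrative assumptions rather than HyperCard's actual implementation:

```python
# Sketch of HyperCard-style event routing: an event such as "mouseUp" is offered to the
# target object first, then passed up the hierarchy (button -> card -> background -> stack)
# until some object's script defines a handler for it.
class HCObject:
    def __init__(self, name, parent=None, handlers=None):
        self.name = name
        self.parent = parent            # next object in the message-passing path
        self.handlers = handlers or {}  # event name -> callable, standing in for the "script"

    def send(self, event):
        obj = self
        while obj is not None:
            if event in obj.handlers:
                return obj.handlers[event]()  # first handler found along the path wins
            obj = obj.parent                  # otherwise pass the message further up
        return None                           # unhandled events are simply ignored


stack = HCObject("stack", handlers={"mouseUp": lambda: print("handled by the stack script")})
background = HCObject("background 1", parent=stack)
card = HCObject("card 1", parent=background)
button = HCObject("OK button", parent=card,
                  handlers={"mouseUp": lambda: print("handled by the button script")})

button.send("mouseUp")  # handled by the button itself
card.send("mouseUp")    # no handler on card or background, so the stack's script runs
```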
These concepts make up the majority of the HyperCard system; stacks, backgrounds and cards provide a form-like GUI system, the stack file provides object persistence and database-like functionality, and HyperTalk allows handlers to be written for GUI events. Unlike the majority of RAD or database systems of the era, however, HyperCard combines all of these features, both user-facing and developer-facing, in a single application. This allows rapid turnaround and immediate prototyping, possibly without any coding, allowing users to author custom solutions to problems with their own personalized interface. "Empowerment" became a catchword as this possibility was embraced by the Macintosh community, as was the phrase "programming for the rest of us", that is, anyone, not just professional programmers.
It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation system within the data elements of the UI, like text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web. Unlike the Web, programming, placement, and browsing are all the same tool. Similar systems have been created for HTML, but traditional Web services are considerably more heavyweight.
HyperTalk
HyperCard contains an object-oriented scripting language called HyperTalk, which was noted for having a syntax resembling casual English. HyperTalk language features were predetermined by the HyperCard environment, although they could be extended by the use of external functions (XFCNs) and commands (XCMDs), written in a compiled language. The weakly typed HyperTalk supports most standard programming structures such as "if–then" and "repeat". HyperTalk is verbose, hence its ease of use and readability. HyperTalk code segments are referred to as "scripts", a term that was considered less daunting to beginning programmers.
Externals
HyperCard can be extended significantly through the use of external command (XCMD) and external function (XFCN) modules. These are code libraries packaged in a resource fork that integrate into either the system generally or the HyperTalk language specifically; this is an early example of the plug-in concept. Unlike conventional plug-ins, these do not require separate installation before they are available for use; they can be included in a stack, where they are directly available to scripts in that stack.
During HyperCard's peak popularity in the late 1980s, a whole ecosystem of vendors offered thousands of these externals such as HyperTalk compilers, graphing systems, database access, Internet connectivity, and animation. Oracle offered an XCMD that allows HyperCard to directly query Oracle databases on any platform, superseded by Oracle Card. BeeHive Technologies offered a hardware interface that allows the computer to control external devices. Connected via the Apple Desktop Bus (ADB), this instrument can read the state of connected external switches or write digital outputs to a multitude of devices.
Externals allow access to the Macintosh Toolbox, which contains many lower-level commands and functions not native to HyperTalk, such as control of the serial and ADB ports.
History
Development
HyperCard was created by Bill Atkinson following an LSD trip. Work on it began in March 1985 under the name of WildCard (hence its creator code of WILD). In 1986, Dan Winkler began work on HyperTalk and the name was changed to HyperCard for trademark reasons. It was released on 11 August 1987, timed to coincide with the first day of the MacWorld Conference & Expo in Boston, Massachusetts, to guarantee maximum publicity, with the understanding that Atkinson would give HyperCard to Apple only if the company promised to release it for free on all Macs.
Launch
HyperCard was successful almost instantly. The Apple Programmer's and Developer's Association (APDA) said, "HyperCard has been an informational feeding frenzy. From August [1987, when it was announced] to October our phones never stopped ringing. It was a zoo." Within a few months of release, there were multiple HyperCard books and a 50 disk set of public domain stacks. Apple's project managers found HyperCard was being used by a huge number of people, internally and externally. Bug reports and upgrade suggestions continued to flow in, demonstrating its wide variety of users. Since it was also free, it was difficult to justify dedicating engineering resources to improvements in the software. Apple and its mainstream developers understood that HyperCard's user empowerment could reduce the sales of ordinary shrink-wrapped products. Stewart Alsop II speculated that HyperCard might replace Finder as the shell of the Macintosh graphical user interface.
HyperCard 2.0
In late 1989, Kevin Calhoun, then a HyperCard engineer at Apple, led an effort to upgrade the program. This resulted in HyperCard 2.0, released in 1990. The new version included an on-the-fly compiler that greatly increased performance of computationally intensive code, a new debugger and many improvements to the underlying HyperTalk language.
At the same time HyperCard 2.0 was being developed, a separate group within Apple developed and in 1991 released HyperCard IIGS, a version of HyperCard for the Apple IIGS system. Aimed mainly at the education market, HyperCard IIGS has roughly the same feature set as the 1.x versions of Macintosh HyperCard, while adding support for the color graphics abilities of the IIGS. Although stacks (HyperCard program documents) are not binary-compatible, a translator program (another HyperCard stack) allows them to be moved from one platform to the other.
Then, Apple decided that most of its application software packages, including HyperCard, would be the property of a wholly owned subsidiary called Claris. Many of the HyperCard developers chose to stay at Apple rather than move to Claris, causing the development team to be split. Claris attempted to create a business model where HyperCard could also generate revenues. At first the freely-distributed versions of HyperCard shipped with authoring disabled. Early versions of Claris HyperCard contain an Easter Egg: typing "magic" into the message box converts the player into a full HyperCard authoring environment. When this trick became nearly universal, they wrote a new version, HyperCard Player, which Apple distributed with the Macintosh operating system, while Claris sold the full version commercially. Many users were upset that they had to pay to use software that had traditionally been supplied free and which many considered a basic part of the Mac.
Even after HyperCard was generating revenue, Claris did little to market it. Development continued with minor upgrades, and the first failed attempt to create a third generation of HyperCard. During this period, HyperCard began losing market share. Without several important, basic features, HyperCard authors began moving to systems such as SuperCard and Macromedia Authorware. Nonetheless, HyperCard continued to be popular and used for a widening range of applications, from the game The Manhole, an earlier effort by the creators of Myst, to corporate information services.
Apple eventually folded Claris back into the parent company, returning HyperCard to Apple's core engineering group. In 1992, Apple released the eagerly anticipated upgrade of HyperCard 2.2 and included licensed versions of Color Tools and Addmotion II, adding support for color pictures and animations. However, these tools are limited and often cumbersome to use because HyperCard 2.0 lacks true, internal color support.
HyperCard 3.0
Several attempts were made to restart HyperCard development once it returned to Apple. Because of the product's widespread use as a multimedia-authoring tool it was rolled into the QuickTime group. A new effort to allow HyperCard to create QuickTime interactive (QTi) movies started, once again under the direction of Kevin Calhoun. QTi extended QuickTime's core multimedia playback features to provide true interactive facilities and a low-level programming language based on 68000 assembly language. The resulting HyperCard 3.0 was first presented in 1996 when an alpha-quality version was shown to developers at Apple's annual Worldwide Developers Conference (WWDC). Under the leadership of Dan Crow development continued through the late 1990s, with public demos showing many popular features such as color support, Internet connectivity, and the ability to play HyperCard stacks (which were now special QuickTime movies) in a web browser. Development of HyperCard 3.0 stalled when the QuickTime team's focus shifted away from QuickTime interactive to the streaming features of QuickTime 4.0 in 1998. Steve Jobs disliked the software because Atkinson had chosen to stay at Apple to finish it instead of joining Jobs at NeXT, and (according to Atkinson) "it had Sculley's stink all over it". In 2000, the HyperCard engineering team was reassigned to other tasks after Jobs decided to abandon the product. Calhoun and Crow both left Apple shortly after, in 2001.
Its final release was in 1998, and it was totally discontinued in March 2004.
HyperCard runs natively only in the classic Mac OS, but it can still be used in Mac OS X's Classic mode on PowerPC based machines (G5 and earlier). The last functional native HyperCard authoring environment is Classic mode in Mac OS X 10.4 (Tiger) on PowerPC-based machines.
Applications
HyperCard has been used for a range of hypertext and artistic purposes. Before the advent of PowerPoint, HyperCard was often used as a general-purpose presentation program. Examples of HyperCard applications include simple databases, "choose your own adventure"-type games, and educational teaching aids.
Due to its rapid application design facilities, HyperCard was also often used for prototyping applications and sometimes even for version 1.0 implementations. Inside Apple, the QuickTime team was one of HyperCard's biggest customers.
HyperCard has lower hardware requirements than Macromedia Director. Several commercial software products were created in HyperCard, most notably the original version of the graphic adventure game Myst, the Voyager Company's Expanded Books, multimedia CD-ROMs of Beethoven's Ninth Symphony, A Hard Day's Night by the Beatles, and the Voyager MacBeth. An early electronic edition of the Whole Earth Catalog was implemented in HyperCard and stored on CD-ROM.
The prototype and demo of the popular game You Don't Know Jack was written in HyperCard. The French auto manufacturer Renault used it to control their inventory system.
In Quebec, Canada, HyperCard was used to control a robot arm used to insert and retrieve video disks at the National Film Board CinéRobothèque.
In 1989, Hypercard was used to control the BBC Radiophonic Workshop Studio Network, using a single Macintosh.
HyperCard was used to prototype a fully functional prototype of SIDOCI (one of the first experiments in the world to develop an integrated electronic patient record system) and was heavily used by Montréal Consulting firm DMR to demonstrate what "a typical day in the life of a patient about to get surgery" would look like in a paperless age.
Activision, which was until then mainly a game company, saw HyperCard as an entry point into the business market. Changing its name to Mediagenic, it published several major HyperCard-based applications, most notably Danny Goodman's Focal Point, a personal information manager, and Reports For HyperCard, a program by Nine To Five Software that allows users to treat HyperCard as a full database system with robust information viewing and printing features.
The HyperCard-inspired SuperCard for a while included the Roadster plug-in that allowed stacks to be placed inside web pages and viewed by web browsers with an appropriate browser plug-in. There was even a Windows version of this plug-in allowing computers other than Macintoshes to use the plug-in.
Exploits
The first HyperCard virus was discovered in Belgium and the Netherlands in April 1991.
Because HyperCard executed scripts in stacks immediately on opening, it was also one of the first applications susceptible to macro viruses. The Merryxmas virus was discovered in early 1993 by Ken Dunham, two years before the Concept virus. Very few viruses were based on HyperCard, and their overall impact was minimal.
Reception
Compute!'s Apple Applications in 1987 stated that HyperCard "may make Macintosh the personal computer of choice". While noting that its large memory requirement made it best suited for computers with 2 MB of memory and hard drives, the magazine predicted that "the smallest programming shop should be able to turn out stackware", especially for using CD-ROMs. Compute! predicted in 1988 that most future Mac software would be developed using HyperCard, if only because using it was so addictive that developers "won't be able to tear themselves away from it long enough to create anything else". Byte in 1989 listed it as among the "Excellence" winners of the Byte Awards. While stating that "like any first entry, it has some flaws", the magazine wrote that "HyperCard opened up a new category of software", and praised Apple for bundling it with every Mac. In 2001 Steve Wozniak called HyperCard "the best program ever written".
Legacy
HyperCard is one of the first products that made use of and popularized the hypertext concept to a large popular base of users.
Jakob Nielsen has pointed out that HyperCard was really only a hypermedia program since its links started from regions on a card, not text objects; actual HTML-style text hyperlinks were possible in later versions, but were awkward to implement and seldom used. Deena Larsen programmed links into HyperCard for Marble Springs. Bill Atkinson later lamented that if he had only realized the power of network-oriented stacks, instead of focusing on local stacks on a single machine, HyperCard could have become the first Web browser.
HyperCard saw a loss in popularity with the growth of the World Wide Web, since the Web could handle and deliver data in much the same way as HyperCard without being limited to files on a local hard disk. HyperCard had a significant impact on the web as it inspired the creation of both HTTP (through its influence on Tim Berners-Lee's colleague Robert Cailliau), and JavaScript (whose creator, Brendan Eich, was inspired by HyperTalk). It was also a key inspiration for ViolaWWW, an early web browser.
The pointing-finger cursor used for navigating stacks was later used in the first web browsers, as the hyperlink cursor.
The Myst computer game franchise, initially released as a HyperCard stack and bundled with some Macs (for example the Performa 5300), still lives on, making HyperCard a facilitating technology for one of the best-selling computer games of all time.
According to Ward Cunningham, the inventor of Wiki, the wiki concept can be traced back to a HyperCard stack he wrote in the late 1980s.
In 2017 the Internet Archive established a project to preserve and emulate HyperCard stacks, allowing users to upload their own.
The GUI of the prototype Apple Wizzy Active Lifestyle Telephone was based on HyperCard.
World Wide Web
HyperCard influenced the development of the Web in late 1990 through its influence on Robert Cailliau, who assisted in developing Tim Berners-Lee's first Web browser. JavaScript was inspired by HyperTalk.
Although HyperCard stacks do not operate over the Internet, by 1988, at least 300 stacks were publicly available for download from the commercial CompuServe network (which was not connected to the official Internet yet). The system can link phone numbers on a user's computer together and enable them to dial numbers without a modem, using a less expensive piece of hardware, the Hyperdialer.
In this sense, like the Web, it does form an association-based experience of information browsing via links, though not operating remotely over the TCP/IP protocol then. Like the Web, it also allows for the connections of many different kinds of media.
Similar systems
Other companies have offered their own versions. Two products that offer HyperCard-like abilities remain available:
HyperStudio, one of the first HyperCard clones, is developed and published by Software MacKiev.
LiveCode, published by LiveCode, Ltd., expands greatly on HyperCard's feature set and offers color and a GUI toolkit which can be deployed on many popular platforms (Android, iOS, Classic Macintosh system software, Mac OS X, Windows 98 through 10, and Linux/Unix). LiveCode directly imports extant HyperCard stacks and provides a migration path for stacks still in use.
Past products include:
SuperCard, the first HyperCard clone, was similar to HyperCard, but with many added features such as: full color support, pixel and vector graphics, a full GUI toolkit, and support for many modern macOS features. It could create both standalone applications and projects that run on the freeware SuperCard Player. SuperCard could also convert extant HyperCard stacks into SuperCard projects. It ran only on Macs.
SK8 was a "HyperCard killer" developed within Apple. It extended HyperTalk to allow arbitrary objects, which allowed it to build complete Mac-like applications (instead of stacks). The project was never released as a product, although the source code was placed in the public domain.
Hyper DA by Symmetry was a Desk Accessory for classic single-tasked Mac OS that allows viewing HyperCard 1.x stacks as added windows in any extant application, and is also embedded into many Claris products (like MacDraw II) to display their user documentation.
HyperPad from Brightbill-Roberts was a clone of HyperCard, written for DOS. It makes use of ASCII linedrawing to create the graphics of cards and buttons.
Plus, later renamed WinPlus, was similar to HyperCard, for Windows and Macintosh.
Oracle purchased Plus and created a cross-platform version as Oracle Card, later renamed Oracle Media Objects, used as a 4GL for database access.
IBM LinkWay was a mouse-controlled HyperCard-like environment for DOS PCs. It had minimal system requirements and ran in CGA and VGA graphics modes. It even supported video disc control.
Asymetrix's Windows application ToolBook resembled HyperCard, and later included an external converter to read HyperCard stacks (the first was a third-party product from Heizer software).
TileStack was an attempt to create a web based version of HyperCard that is compatible with the original HyperCard files. The site closed down January 24, 2011.
In addition, many of the basic concepts of the original system were later re-used in other forms. Apple built its system-wide scripting engine AppleScript on a language similar to HyperTalk; it is often used for desktop publishing (DTP) workflow automation needs. In the 1990s FaceSpan provided a third-party graphical interface. AppleScript also has a native graphical programming front-end called Automator, released with Mac OS X Tiger in April 2005. One of HyperCard's strengths was its handling of multimedia, and many multimedia systems like Macromedia Authorware and Macromedia Director are based on concepts originating in HyperCard.
AppWare, originally named Serius Developer, is sometimes seen to be similar to HyperCard, as both are rapid application development (RAD) systems. AppWare was sold in the early 90s and worked on both Mac and Windows systems.
Zoomracks, a DOS application with a similar "stack" database metaphor, predates HyperCard by 4 years, which led to a contentious lawsuit against Apple.
See also
Apple Media Tool
MetaCard, LiveCode
Morphic (software)
mTropolis
NoteCards
Stagecast Creator
References
Bibliography
External links
Collection of emulated HyperCard stacks via the Internet Archive
; HyperCard conversion utility
HyperCard online simulator
1987 software
Domain-specific programming languages
Hypertext
HyperCard products
Classic Mac OS-only software made by Apple Inc.
Classic Mac OS programming tools | HyperCard | [
"Technology"
] | 4,923 | [
"Hypermedia",
"HyperCard products"
] |
13,570 | https://en.wikipedia.org/wiki/Histology | Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms.
Biological tissues
Animal tissue classification
There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma).
Plant tissue classification
For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types:
Dermal tissue
Vascular tissue
Ground tissue
Meristematic tissue
Medical histology
Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations.
Occupations
The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists.
Sample preparation
Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation.
Fixation
Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline).
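As a rough arithmetic aside (not a laboratory protocol), "10% neutral buffered formalin" refers to a 1:10 dilution of stock formalin, which is itself an aqueous solution of roughly 37–40% formaldehyde; the short calculation below, assuming a 37% stock, shows why this corresponds to about 4% formaldehyde:

```python
# Why "10% formalin" is roughly "4% formaldehyde":
stock_formaldehyde = 0.37  # assumed formaldehyde fraction of stock formalin (typically 37-40%)
dilution = 1 / 10          # 10% NBF = 1 part stock formalin in 10 parts final volume

final_formaldehyde = stock_formaldehyde * dilution
print(f"approximate formaldehyde concentration: {final_formaldehyde:.1%}")  # about 3.7%
```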
For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate.
The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue, can damage the biological functionality of proteins, particularly enzymes.
Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues. However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols.
Selection and trimming
Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time.
Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes.
Embedding
Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced with a medium that either solidifies directly, or with an intermediary fluid (clearing) that is miscible with the embedding media.
Paraffin wax
For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so water must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although more environmentally safe substitutes are in use) which removes the alcohol and is miscible with the wax; finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories the dehydration, clearing, and wax infiltration are carried out in tissue processors which automate this process. Once infiltrated with paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue.
Other materials
Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include, epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes.
In electron microscopy epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required.
For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks.
Sectioning
For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically between 5-15 micrometers thick) which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut between 50 and 150 nanometer thick tissue sections.
A limited number of manufacturers produce microtomes, including the vibrating microtomes commonly referred to as vibratomes, which are used primarily in research and clinical studies. Leica Biosystems is among the manufacturers known for products related to light microscopy in research and clinical settings.
Staining
Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed to give both contrast to the tissue as well as highlighting particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used.
Light microscopy
Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in varying shades of pink.
In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains.
Historadiography
In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used in visualizing the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication) which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear track emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy.
Immunohistochemistry
Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail.
Electron microscopy
For electron microscopy heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope.
Specialized techniques
Cryosectioning
Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining. Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery.
Ultramicrotomy
Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome.
Artifacts
Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissue's appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissue types and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions.
History
In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body.
In the 19th century histology became an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The usage of illustrations in histology, deemed as useless by Bichat, was promoted by Jean Cruveilhier.
In the early 1830s Purkyně invented a microtome with high precision.
During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing).
Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin.
The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramón y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible.
Future directions
In vivo histology
There is interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples.
See also
National Society for Histotechnology
Slice preparation
Notes
References
External links
Histotechnology
Staining
Histochemistry
Anatomy
Laboratory healthcare occupations | Histology | [
"Chemistry",
"Biology"
] | 2,893 | [
"Staining",
"Histology",
"Microbiology techniques",
"Microscopy",
"Cell imaging",
"Anatomy"
] |
13,590 | https://en.wikipedia.org/wiki/House | A house is a single-unit residential building. It may range in complexity from a rudimentary hut to a complex structure of wood, masonry, concrete or other material, outfitted with plumbing, electrical, and heating, ventilation, and air conditioning systems. Houses use a range of different roofing systems to keep precipitation such as rain from getting into the dwelling space. Houses generally have doors or locks to secure the dwelling space and protect its inhabitants and contents from burglars or other trespassers. Most conventional modern houses in Western cultures will contain one or more bedrooms and bathrooms, a kitchen or cooking area, and a living room. A house may have a separate dining room, or the eating area may be integrated into the kitchen or another room. Some large houses in North America have a recreation room. In traditional agriculture-oriented societies, domestic animals such as chickens or larger livestock (like cattle) may share part of the house with humans.
The social unit that lives in a house is known as a household. Most commonly, a household is a family unit of some kind, although households may also have other social groups, such as roommates or, in a rooming house, unconnected individuals, that typically use a house as their home. Some houses only have a dwelling space for one family or similar-sized group; larger houses called townhouses or row houses may contain numerous family dwellings in the same structure. A house may be accompanied by outbuildings, such as a garage for vehicles or a shed for gardening equipment and tools. A house may have a backyard, a front yard or both, which serve as additional areas where inhabitants can relax, eat, or exercise.
Etymology
The English word house derives directly from the Old English word hus, meaning "dwelling, shelter, home, house," which in turn derives from Proto-Germanic husan (reconstructed by etymological analysis) which is of unknown origin. The letter 'B' ultimately derives from an early Proto-Semitic hieroglyphic symbol depicting a house. The symbol was called "bayt", "bet" or "beth" in various related languages, and became beta, the Greek letter, before it was used by the Romans. Beit in Arabic means house, while in Maltese bejt refers to the roof of the house.
Elements
Layout
Ideally, architects of houses design rooms to meet the needs of the people who will live in the house. Feng shui, originally a Chinese method of situating houses according to such factors as rain and micro-climates, has recently expanded its scope to address the design of interior spaces, with a view to promoting harmonious effects on the people living inside the house, although no actual effect has ever been demonstrated. Feng shui can also mean the "aura" in or around a dwelling, making it comparable to the real estate sales concept of "indoor-outdoor flow".
The square footage of a house in the United States reports the area of "living space", excluding the garage and other non-living spaces. The "square metres" figure of a house in Europe reports the area of the walls enclosing the home, and thus includes any attached garage and non-living spaces. The number of floors or levels making up the house can affect the square footage of a home.
Humans often build houses for domestic or wild animals, often resembling smaller versions of human domiciles. Familiar animal houses built by humans include birdhouses, hen houses and dog houses, while housed agricultural animals more often live in barns and stables.
Parts
Many houses have several large rooms with specialized functions and several very small rooms for other various reasons. These may include a living/eating area, a sleeping area, and (if suitable facilities and services exist) separate or combined washing and lavatory areas. Some larger properties may also feature rooms such as a spa room, indoor pool, indoor basketball court, and other 'non-essential' facilities. In traditional agriculture-oriented societies, domestic animals such as chickens or larger livestock often share part of the house with humans. Most conventional modern houses will at least contain a bedroom, bathroom, kitchen or cooking area, and a living room.
The names of parts of a house often echo the names of parts of other buildings, but could typically include:
Alcove
Atrium
Attic
Basement/cellar
Bathroom
Bedroom (or nursery)
Box-room / storage room
Conservatory
Dining room
Family room or den
Fireplace
Foyer
Front room
Garage
Hallway / passage / Vestibule
Hearth
Home-office or study
Kitchen
Larder
Laundry room
Library
Living room
Loft
Nook
Pantry
Parlour
Porch
Recreation room / rumpus room / television room
Shrines to serve the religious functions associated with a family
Stairwell
Sunroom
Swimming pool
Window
Workshop
Utility room
History
Little is known about the earliest origin of the house and its interior; however, it can be traced back to the simplest form of shelters. An exceptionally well-preserved house dating to the fifth millennium BC and with its contents still preserved was for example excavated at Tell Madhur in Iraq. The Roman architect Vitruvius held that the first form of architecture was a frame of timber branches finished in mud, also known as the primitive hut.
Middle Ages
In the Middle Ages, the Manor Houses facilitated different activities and events. Furthermore, the houses accommodated numerous people, including family, relatives, employees, servants and their guests. Their lifestyles were largely communal: areas such as the Great Hall enforced the custom of dining and meetings, and the Solar was intended for shared sleeping.
During the 15th and 16th centuries, the Italian Renaissance Palazzo consisted of plentiful rooms of connectivity. Unlike the qualities and uses of the Manor Houses, most rooms of the palazzo had no fixed purpose, yet were given several doors. These doors adjoined rooms that Robin Evans describes as a "matrix of discrete but thoroughly interconnected chambers." The layout allowed occupants to freely walk from room to room through one door after another, thus breaking the boundaries of privacy.
"Once inside it is necessary to pass from one room to the next, then to the next to traverse the building. Where passages and staircases are used, as inevitably they are, they nearly always connect just one space to another and never serve as general distributors of movement. Thus, despite the precise architectural containment offered by the addition of room upon room, the villa was, in terms of occupation, an open plan, relatively permeable to the numerous members of the household." Although very public, the open plan encouraged sociality and connectivity for all inhabitants.
An early example of the segregation of rooms and consequent enhancement of privacy may be found in 1597 at the Beaufort House built in Chelsea, London. It was designed by English architect John Thorpe who wrote on his plans, "A Long Entry through all". The separation of the passageway from the room developed the function of the corridor. This new extension was revolutionary at the time, allowing each room to have a single door, all of which connected to the same corridor. English architect Sir Roger Pratt states "the common way in the middle through the whole length of the house, [avoids] the offices from one molesting the other by continual passing through them." Social hierarchies within the 17th century were strictly observed, and architecture came to embody the separation between servants and the upper class. More privacy was offered to the occupant, as Pratt further claims, "the ordinary servants may never publicly appear in passing to and fro for their occasions there." This social divide between rich and poor favored the physical integration of the corridor into housing by the 19th century.
Sociologist Witold Rybczynski wrote, "the subdivision of the house into day and night uses, and into formal and informal areas, had begun." Rooms were changed from public to private as single entryways forced notions of entering a room with a specific purpose.
Industrial Revolution
Compared to the large-scale houses of England and the Renaissance, the 17th-century Dutch house was smaller and was inhabited by no more than four or five people. This was because they embraced "self-reliance", in contrast to the dependence on servants, and a design for a lifestyle centered on the family. It was important for the Dutch to separate work from domesticity, as the home became an escape and a place of comfort. This way of living and the home has been noted as highly similar to the contemporary family and their dwellings.
By the end of the 17th century, the house layout was transformed to become employment-free, enforcing these ideas for the future. This suited the Industrial Revolution, which brought large-scale factory production and factory workers.
19th and 20th centuries
In the American context, some professions, such as doctors, in the 19th and early 20th century typically operated out of the front room or parlor or had a two-room office on their property, which was detached from the house. By the mid 20th century, the increase in high-tech equipment created a marked shift whereby the contemporary doctor typically worked from an office or hospital.
Technology and electronic systems have caused privacy issues and issues with segregating personal life from remote work. Technological advances in surveillance and communications allow insight into personal habits and private lives. As a result, the "private becomes ever more public, [and] the desire for a protective home life increases, fuelled by the very media that undermine it," writes Jonathan Hill. Work has been altered by the increase of communications. The "deluge of information" reflects how easily work now gains access to the inside of the house. Although commuting is reduced, the desire to separate working and living remains apparent. On the other hand, some architects have designed homes in which eating, working and living are brought together.
Construction
In many parts of the world, houses are constructed using scavenged materials. In Manila's Payatas neighborhood, slum houses are often made of material sourced from a nearby garbage dump. In Dakar, it is common to see houses made of recycled materials standing atop a mixture of garbage and sand which serves as a foundation. The garbage-sand mixture is also used to protect the house from flooding.
In the United States, modern house construction techniques include light-frame construction (in areas with access to supplies of wood) and adobe or sometimes rammed-earth construction (in arid regions with scarce wood-resources). Some areas use brick almost exclusively, and quarried stone has long provided foundations and walls. To some extent, aluminum and steel have displaced some traditional building materials. Increasingly popular alternative construction materials include insulating concrete forms (foam forms filled with concrete), structural insulated panels (foam panels faced with oriented strand board or fiber cement), light-gauge steel, and steel framing. More generally, people often build houses out of the nearest available material, and often tradition or culture govern construction-materials, so whole towns, areas, counties or even states/countries may be built out of one main type of material. For example, a large portion of American houses use wood, while most British and many European houses use stone, brick, or mud.
In the early 20th century, some house designers started using prefabrication. Sears, Roebuck & Co. first marketed their Sears Catalog Homes to the general public in 1908. Prefab techniques became popular after World War II. At first, framing for small interior rooms was prefabricated; later, whole walls were prefabricated and carried to the construction site. The original impetus was to use the labor force inside a shelter during inclement weather. More recently, builders have begun to collaborate with structural engineers who use finite element analysis to design prefabricated steel-framed homes with known resistance to high wind loads and seismic forces. These newer products provide labor savings, more consistent quality, and possibly accelerated construction processes.
Lesser-used construction methods have gained (or regained) popularity in recent years. Though not in wide use, these methods frequently appeal to homeowners who may become actively involved in the construction process. They include:
Hempcrete construction
Cordwood construction
Geodesic domes
Straw-bale construction
Wattle and daub
Framing (construction)
In the developed world, energy-conservation has grown in importance in house design. Housing produces a major proportion of carbon emissions (studies have shown that it is 30% of the total in the United Kingdom).
Development of a number of low-energy building types and techniques continues. They include the zero-energy house, the passive solar house, autonomous buildings, superinsulated houses and houses built to the Passivhaus standard.
Legal issues
Buildings with historical importance have legal restrictions. New houses in the UK are not covered by the Sale of Goods Act. When purchasing a new house, the buyer has different legal protection than when buying other products. New houses in the UK are covered by a National House Building Council guarantee.
Identification and symbolism
With the growth of dense settlement, humans designed ways of identifying houses and parcels of land. Individual houses sometimes acquire proper names, and those names may acquire in their turn considerable emotional connotations. A more systematic and general approach to identifying houses may use various methods of house numbering.
Houses may express the circumstances or opinions of their builders or their inhabitants. Thus, a vast and elaborate house may serve as a sign of conspicuous wealth whereas a low-profile house built of recycled materials may indicate support of energy conservation. Houses of particular historical significance (former residences of the famous, for example, or even just very old houses) may gain a protected status in town planning as examples of built heritage or of streetscape. Commemorative plaques may mark such structures. Home ownership provides a common measure of prosperity in economics. Contrast the importance of house-destruction, tent dwelling and house rebuilding in the wake of many natural disasters.
See also
Building
House-building
Index of construction articles
Functions
Building science
Mixed-use development
Visitability
Types
Boarding house
Earth sheltering
Home automation
Housing estate
Housing in Japan
Hurricane-proof house
Lodging
Lustron house
Mobile home
Modular home
Slope house
Summer house
Tiny house
Economics
Affordable housing
Real estate bubble
United States housing bubble
Housing tenure
Show house
Miscellaneous
Domestic robot
Homelessness
Home network
Housewarming party
Squatting
Institutions
U.S. Department of Housing and Urban Development
HUD USER
Regulatory Barriers Clearinghouse
Lists
List of American houses
List of house styles
List of house types
List of real estate topics
Open-air museum
References
External links
Housing through the centuries, animation by The Atlantic
Structural system
Housing
Home | House | [
"Technology",
"Engineering"
] | 2,923 | [
"Structural system",
"Structural engineering",
"Houses",
"Building engineering"
] |
13,593 | https://en.wikipedia.org/wiki/Java%20applet | Java applets are small applications written in the Java programming language, or another programming language that compiles to Java bytecode, and delivered to users in the form of Java bytecode.
At the time of their introduction, the intended use was for the user to launch the applet from a web page, and for the applet to then execute within a Java virtual machine (JVM) in a process separate from the web browser itself. A Java applet could appear in a frame of the web page, a new application window, a program from Sun called appletviewer, or a stand-alone tool for testing applets.
Java applets were introduced in the first version of the Java language, which was released in 1995. Beginning in 2013, major web browsers began to phase out support for NPAPI, the underlying technology applets used to run, and applets became completely unable to run by 2015–2017. Java applets were deprecated by Java 9 in 2017.
Java applets were usually written in Java, but other languages such as Jython, JRuby, Pascal, Scala, NetRexx, or Eiffel (via SmartEiffel) could be used as well.
Unlike early versions of JavaScript, Java applets had access to 3D hardware acceleration, making them well-suited for non-trivial, computation-intensive visualizations. Since applets' introduction, JavaScript has gained support for hardware-accelerated graphics via canvas technology (or specifically WebGL in the case of 3D graphics), as well as just-in-time compilation.
Since Java bytecode is cross-platform (or platform independent), Java applets could be executed by clients for many platforms, including Microsoft Windows, FreeBSD, Unix, macOS and Linux. They could not be run on mobile devices, which do not support running standard Oracle JVM bytecode. Android devices can run code written in Java compiled for the Android Runtime.
Overview
Applets are used to provide interactive features to web applications that cannot be provided by HTML alone. They can capture mouse input and also have controls like buttons or check boxes. In response to user actions, an applet can change the provided graphic content. This makes applets well-suited for demonstration, visualization, and teaching. There are online applet collections for studying various subjects, from physics to heart physiology.
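As a rough illustration of the interactivity described above, here is a minimal applet sketch using the classic java.applet and AWT APIs (deprecated since Java 9 and removed from current JDKs); the class name HelloApplet and the toggled message are illustrative choices, not taken from any real applet.

```java
import java.applet.Applet;
import java.awt.Button;
import java.awt.Graphics;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Minimal interactive applet sketch: a button toggles the text that paint() draws.
// The java.applet API is deprecated and removed from current JDKs, so this only
// runs on an old JDK with the appletviewer tool or a browser plug-in.
public class HelloApplet extends Applet implements ActionListener {
    private String message = "Hello, world";

    @Override
    public void init() {
        Button button = new Button("Toggle");   // controls such as buttons are AWT components
        button.addActionListener(this);
        add(button);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        message = message.equals("Hello, world") ? "Applet clicked" : "Hello, world";
        repaint();                              // request a redraw of the applet's page area
    }

    @Override
    public void paint(Graphics g) {
        g.drawString(message, 20, 60);          // applets draw into their dedicated area
    }
}
```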
An applet can also be a text area only; providing, for instance, a cross-platform command-line interface to some remote system. If needed, an applet can leave the dedicated area and run as a separate window. However, applets have very little control over web page content outside the applet's dedicated area, so they are less useful for improving the site appearance in general, unlike other types of browser extensions (while applets like news tickers or WYSIWYG editors are also known). Applets can also play media in formats that are not natively supported by the browser.
Pages coded in HTML may embed parameters within them that are passed to the applet. Because of this, the same applet may have a different appearance depending on the parameters that were passed.
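A sketch of how such page-supplied parameters could be read; getParameter() is the standard java.applet.Applet call for this, while the parameter name "text" and the class name ParamApplet are assumptions made only for illustration.

```java
import java.applet.Applet;
import java.awt.Graphics;

// Sketch of parameter handling: a <param name="text" value="..."> entry in the
// embedding page (the name "text" is an assumed example) changes what the same
// compiled applet displays.
public class ParamApplet extends Applet {
    private String text;

    @Override
    public void init() {
        text = getParameter("text");    // returns null if the page supplied no such <param>
        if (text == null) {
            text = "default text";
        }
    }

    @Override
    public void paint(Graphics g) {
        g.drawString(text, 20, 40);
    }
}
```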
As applets were available before HTML5, modern CSS, and the JavaScript DOM interface became standard, they were also widely used for trivial effects such as mouseover and navigation buttons. This approach, which posed major problems for accessibility and misused system resources, is no longer in use and was strongly discouraged even at the time.
Technical information
Most browsers executed Java applets in a sandbox, preventing applets from accessing local data like the file system. The code of the applet was downloaded from a web server, after which the browser either embedded the applet into a web page or opened a new window showing the applet's user interface.
The first implementations involved downloading an applet class by class. While classes are small files, there are often many of them, so applets got a reputation as slow-loading components. However, since .jar files were introduced, an applet is usually delivered as a single file that has a size similar to an image file (hundreds of kilobytes to several megabytes).
Java system libraries and runtimes are backwards-compatible, allowing one to write code that runs both on current and on future versions of the Java virtual machine.
Similar technologies
Many Java developers, blogs and magazines recommended that the Java Web Start technology be used in place of applets. Java Web Start allowed the launching of unmodified applet code, which then ran in a separate window (not inside the invoking browser).
A Java Servlet is sometimes informally described as being "like" a server-side applet, but it is different in its language, functions, and in each of the characteristics described here about applets.
Embedding into a web page
The applet would be displayed on the web page by making use of the deprecated applet HTML element, or the recommended object element. The embed element can be used with Mozilla family browsers (embed was deprecated in HTML 4 but is included in HTML 5). This specifies the applet's source and location. Both object and embed tags can also download and install Java virtual machine (if required) or at least lead to the plugin page. applet and object tags also support loading of the serialized applets that start in some particular (rather than initial) state. Tags also specify the message that shows up in place of the applet if the browser cannot run it due to any reason.
However, despite object being officially a recommended tag in 2010, the support of the object tag was not yet consistent among browsers and Sun kept recommending the older applet tag for deploying in multibrowser environments, as it remained the only tag consistently supported by the most popular browsers. To support multiple browsers, using the object tag to embed an applet would require JavaScript (that recognizes the browser and adjusts the tag), usage of additional browser-specific tags or delivering adapted output from the server side.
The Java browser plug-in relied on NPAPI, which nearly all web browser vendors have removed support for, or do not implement, due to its age and security issues. In January 2016, Oracle announced that Java runtime environments based on JDK 9 will discontinue the browser plug-in.
Advantages
A Java applet could have any or all of the following advantages:
It was simple to make it work on FreeBSD, Linux, Microsoft Windows and macOS, that is, to make it cross-platform. Applets were supported by most web browsers through the first decade of the 21st century; since then, however, most browsers have dropped applet support for security reasons.
The same applet would work on "all" installed versions of Java at the same time, rather than just the latest plug-in version only. However, if an applet requires a later version of the Java Runtime Environment (JRE) the client would be forced to wait during the large download.
Most web browsers cached applets so they were quick to load when returning to a web page. Applets also improved with use: after a first applet is run, the JVM was already running and subsequent applets started quickly (the JVM will need to restart each time the browser starts afresh). JRE versions 1.5 and greater restarted the JVM when the browser navigates between pages, as a security measure which removed that performance gain.
It moved work from the server to the client, making a web solution more scalable with the number of users/clients.
If a standalone program (like Google Earth) talks to a web server, that server normally needs to support all prior versions for users who have not kept their client software updated. In contrast, a browser loaded (and cached) the latest applet version, so there is no need to support legacy versions.
Applets naturally supported changing user state, such as figure positions on a chessboard.
Developers could develop and debug an applet directly simply by creating a main routine (either in the applet's class or in a separate class) and calling init() and start() on the applet, thus allowing for development in their favorite Java SE development environment (see the sketch after this list). All one had to do was to re-test the applet in the AppletViewer program or a web browser to ensure it conformed to security restrictions.
An untrusted applet had no access to the local machine and could only access the server it came from. This made applets much safer to run than the native executables that they would replace. However, a signed applet could have full access to the machine it was running on, if the user agreed.
Java applets were fast, with similar performance to natively installed software.
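As a rough sketch of the debugging approach mentioned in the list above (a main routine that drives the applet's lifecycle outside the browser), the following hypothetical AppletRunner hosts an applet in a plain AWT Frame; HelloApplet refers to the earlier illustrative applet, and any Applet subclass that does not rely on an AppletStub would work the same way.

```java
import java.awt.Frame;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;

// Hosts an applet in an ordinary AWT Frame and calls the lifecycle methods by hand,
// which is roughly what the AppletViewer tool did for developers.
public class AppletRunner {
    public static void main(String[] args) {
        HelloApplet applet = new HelloApplet();

        Frame frame = new Frame("Applet test harness");
        frame.add(applet);
        frame.setSize(400, 300);
        frame.addWindowListener(new WindowAdapter() {
            @Override
            public void windowClosing(WindowEvent e) {
                applet.stop();       // mirror the browser's shutdown sequence
                applet.destroy();
                System.exit(0);
            }
        });
        frame.setVisible(true);

        applet.init();               // the browser plug-in would normally make these calls
        applet.start();
    }
}
```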
Disadvantages
Java applets had the following disadvantages compared to other client-side web technologies:
Java applets would depend on a Java Runtime Environment (JRE), a complex and heavy-weight software package. They also normally required a plug-in for the web browser. Some organizations only allow software installed by an administrator. As a result, users were unable to view applets unless one was important enough to justify contacting the administrator to request installation of the JRE and plug-in.
If an applet requires a newer JRE than available on the system, the user running it the first time will need to wait for the large JRE download to complete.
Mobile browsers on iOS or Android never ran Java applets at all. Even before the deprecation of applets on all platforms, desktop browsers phased out Java applet support concurrently with the rise of mobile operating systems.
There was no standard to make the content of applets available to screen readers. Therefore, applets harmed the accessibility of a web site to users with special needs.
As with any client-side scripting, security restrictions made it difficult or even impossible for some untrusted applets to achieve their desired goals. Only by editing the java.policy file in the JAVA JRE installation could one grant access to the local filesystem or system clipboard, or to network sources other than the one that served the applet to the browser.
Most users did not care about the difference between untrusted and trusted applets, so this distinction did not help much with security. The ability to run untrusted applets was eventually removed entirely to fix this, before all applets were removed.
Compatibility-related lawsuits
Sun made considerable efforts to ensure that compatibility was maintained between Java versions as they evolved, enforcing Java portability by law if required. Oracle appears to have continued the same strategy.
1997: Sun vs Microsoft
The 1997 lawsuit was filed after Microsoft created a modified Java Virtual Machine of their own, which shipped with Internet Explorer. Microsoft added about 50 methods and 50 fields into the classes within the java.awt, java.lang, and java.io packages. Other modifications included removal of RMI capability and replacement of the Java Native Interface (JNI) with RNI, a different standard. RMI was removed because it only easily supports Java-to-Java communications and competes with Microsoft DCOM technology. Applets that relied on these changes or just inadvertently used them worked only within Microsoft's Java system. Sun sued for breach of trademark, as the point of Java was that there should be no proprietary extensions and that code should work everywhere. Microsoft agreed to pay Sun $20 million, and Sun agreed to grant Microsoft a limited license to use Java without modifications only and for a limited time.
2002: Sun vs Microsoft
Microsoft continued to ship its own unmodified Java virtual machine. Over the years it became extremely outdated yet still default for Internet Explorer. A later study revealed that applets of this time often contain their own classes that mirror Swing and other newer features in a limited way. In 2002, Sun filed an antitrust lawsuit, claiming that Microsoft's attempts at illegal monopolization had harmed the Java platform. Sun demanded Microsoft distribute Sun's current, binary implementation of Java technology as part of Windows, distribute it as a recommended update for older Microsoft desktop operating systems and stop the distribution of Microsoft's Virtual Machine (as its licensing time, agreed in the prior lawsuit, had expired). Microsoft paid $700 million for pending antitrust issues, another $900 million for patent issues and a $350 million royalty fee to use Sun's software in the future.
Security
There were two applet types with very different security models: signed applets and unsigned applets. Starting with Java SE 7 Update 21 (April 2013) applets and Web-Start Apps are encouraged to be signed with a trusted certificate, and warning messages appear when running unsigned applets. Further, starting with Java 7 Update 51 unsigned applets were blocked by default; they could be run by creating an exception in the Java Control Panel.
Unsigned
Limits on unsigned applets were understood as "draconian": they have no access to the local filesystem, and web access is limited to the applet download site; there are also many other important restrictions. For instance, they cannot access all system properties, use their own class loader, call native code, execute external commands on a local system or redefine classes belonging to core packages included as part of a Java release. While they can run in a standalone frame, such a frame contains a header, indicating that this is an untrusted applet. A successful initial call of a forbidden method does not automatically create a security hole, as an access controller checks the entire stack of the calling code to be sure the call is not coming from an improper location.
As with any complex system, many security problems have been discovered and fixed since Java was first released. Some of these (like the Calendar serialization security bug) persisted for many years with nobody being aware. Others have been discovered in use by malware in the wild.
Some studies mention applets crashing the browser or overusing CPU resources but these are classified as nuisances and not as true security flaws. However, unsigned applets may be involved in combined attacks that exploit a combination of multiple severe configuration errors in other parts of the system. An unsigned applet can also be more dangerous to run directly on the server where it is hosted because while code base allows it to talk with the server, running inside it can bypass the firewall. An applet may also try DoS attacks on the server where it is hosted, but usually people who manage the web site also manage the applet, making this unreasonable. Communities may solve this problem via source code review or running applets on a dedicated domain.
The unsigned applet can also try to download malware hosted on originating server. However it could only store such file into a temporary folder (as it is transient data) and has no means to complete the attack by executing it. There were attempts to use applets for spreading Phoenix and Siberia exploits this way, but these exploits do not use Java internally and were also distributed in several other ways.
Signed
A signed applet contains a signature that the browser should verify through a remotely running, independent certificate authority server. Producing this signature involves specialized tools and interaction with the authority server maintainers. Once the signature is verified, and the user of the current machine also approves, a signed applet can get more rights, becoming equivalent to an ordinary standalone program. The rationale is that the author of the applet is now known and will be responsible for any deliberate damage. This approach allows applets to be used for many tasks that are otherwise not possible by client-side scripting. However, this approach requires more responsibility from the user, deciding whom he or she trusts. The related concerns include a non-responsive authority server, wrong evaluation of the signer identity when issuing certificates, and known applet publishers still doing something that the user would not approve of. Hence signed applets that appeared from Java 1.1 may actually have more security concerns.
Self-signed
Self-signed applets, which are applets signed by the developer themselves, may potentially pose a security risk; java plugins provide a warning when requesting authorization for a self-signed applet, as the function and safety of the applet is guaranteed only by the developer itself, and has not been independently confirmed. Such self-signed certificates are usually only used during development prior to release where third-party confirmation of security is unimportant, but most applet developers will seek third-party signing to ensure that users trust the applet's safety.
Java security problems are not fundamentally different from similar problems of any client-side scripting platform. In particular, all issues related to signed applets also apply to Microsoft ActiveX components.
As of 2014, self-signed and unsigned applets are no longer accepted by the commonly available Java plugins or Java Web Start. Consequently, developers who wish to deploy Java applets have no alternative but to acquire trusted certificates from commercial sources.
Alternatives
Alternative technologies exist (for example, WebAssembly and JavaScript) that satisfy all or more of the scope of what was possible with an applet. JavaScript could coexist with applets in the same page, assist in launching applets (for instance, in a separate frame or providing platform workarounds) and later be called from the applet code. As JavaScript gained in features and performance, the support for and use of applets declined, until their eventual removal.
See also
ActiveX
Adobe Flash Player
Curl (programming language)
Jakarta Servlet
Java Web Start
JavaFX
Rich web application
SWF
WebGL
Silverlight
References
External links
Latest version of Sun Microsystems' Java Virtual Machine (includes browser plug-ins for running Java applets in most web browsers).
Information about writing applets from Oracle
Demonstration applets from Sun Microsystems (JDK 1.4 include source code)
Java (programming language)
Java platform
Web 1.0 | Java applet | [
"Technology"
] | 3,697 | [
"Computing platforms",
"Java platform"
] |
13,600 | https://en.wikipedia.org/wiki/Hipparchus | Hipparchus (; , ; BC) was a Greek astronomer, geographer, and mathematician. He is considered the founder of trigonometry, but is most famous for his incidental discovery of the precession of the equinoxes. Hipparchus was born in Nicaea, Bithynia, and probably died on the island of Rhodes, Greece. He is known to have been a working astronomer between 162 and 127 BC.
Hipparchus is considered the greatest ancient astronomical observer and, by some, the greatest overall astronomer of antiquity. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. For this he certainly made use of the observations and perhaps the mathematical techniques accumulated over centuries by the Babylonians and by Meton of Athens (fifth century BC), Timocharis, Aristyllus, Aristarchus of Samos, and Eratosthenes, among others.
He developed trigonometry and constructed trigonometric tables, and he solved several problems of spherical trigonometry. With his solar and lunar theories and his trigonometry, he may have been the first to develop a reliable method to predict solar eclipses.
His other reputed achievements include the discovery and measurement of Earth's precession, the compilation of the first known comprehensive star catalog from the western world, and possibly the invention of the astrolabe, as well as of the armillary sphere that he may have used in creating the star catalogue. Hipparchus is sometimes called the "father of astronomy", a title conferred on him by Jean Baptiste Joseph Delambre in 1817.
Life and work
Hipparchus was born in Nicaea, in Bithynia. The exact dates of his life are not known, but Ptolemy attributes astronomical observations to him in the period from 147 to 127 BC, and some of these are stated as made in Rhodes; earlier observations since 162 BC might also have been made by him. His birth date (c. 190 BC) was calculated by Delambre based on clues in his work. Hipparchus must have lived some time after 127 BC because he analyzed and published his observations from that year. Hipparchus obtained information from Alexandria as well as Babylon, but it is not known when or if he visited these places. He is believed to have died on the island of Rhodes, where he seems to have spent most of his later life.
In the second and third centuries, coins were made in his honour in Bithynia that bear his name and show him with a globe.
Relatively little of Hipparchus's direct work survives into modern times. Although he wrote at least fourteen books, only his commentary on the popular astronomical poem by Aratus was preserved by later copyists. Most of what is known about Hipparchus comes from Strabo's Geography and Pliny's Natural History in the first century; Ptolemy's second-century Almagest; and additional references to him in the fourth century by Pappus and Theon of Alexandria in their commentaries on the Almagest.
Hipparchus's only preserved work is Commentary on the Phaenomena of Eudoxus and Aratus. This is a highly critical commentary in the form of two books on a popular poem by Aratus based on the work by Eudoxus. Hipparchus also made a list of his major works that apparently mentioned about fourteen books, but which is only known from references by later authors. His famous star catalog was incorporated into the one by Ptolemy and may be almost perfectly reconstructed by subtraction of two and two-thirds degrees from the longitudes of Ptolemy's stars. The first trigonometric table was apparently compiled by Hipparchus, who is consequently now known as "the father of trigonometry".
Babylonian sources
Earlier Greek astronomers and mathematicians were influenced by Babylonian astronomy to some extent; for instance, the period relations of the Metonic cycle and Saros cycle may have come from Babylonian sources (see "Babylonian astronomical diaries"). Hipparchus seems to have been the first to exploit Babylonian astronomical knowledge and techniques systematically. Eudoxus in the 4th century BC and Timocharis and Aristillus in the 3rd century BC already divided the ecliptic into 360 parts (our degrees, Greek: moira) of 60 arcminutes each, and Hipparchus continued this tradition. It was only in Hipparchus's time (2nd century BC) that this division was introduced (probably by Hipparchus's contemporary Hypsikles) for all circles in mathematics. Eratosthenes (3rd century BC), in contrast, used a simpler sexagesimal system dividing a circle into 60 parts. Hipparchus also adopted the Babylonian astronomical cubit unit (Akkadian ammatu, Greek πῆχυς pēchys) that was equivalent to 2° or 2.5° ('large cubit').
Hipparchus probably compiled a list of Babylonian astronomical observations; Gerald J. Toomer, a historian of astronomy, has suggested that Ptolemy's knowledge of eclipse records and other Babylonian observations in the Almagest came from a list made by Hipparchus. Hipparchus's use of Babylonian sources has always been known in a general way, because of Ptolemy's statements, but the only text by Hipparchus that survives does not provide sufficient information to decide whether Hipparchus's knowledge (such as his usage of the units cubit and finger, degrees and minutes, or the concept of hour stars) was based on Babylonian practice. However, Franz Xaver Kugler demonstrated that the synodic and anomalistic periods that Ptolemy attributes to Hipparchus had already been used in Babylonian ephemerides, specifically the collection of texts nowadays called "System B" (sometimes attributed to Kidinnu).
Hipparchus's long draconitic lunar period (5,458 months = 5,923 lunar nodal periods) also appears a few times in Babylonian records. But the only such tablet explicitly dated is post-Hipparchus, so the direction of transmission is not settled by the tablets.
Geometry, trigonometry and other mathematical techniques
Hipparchus was recognized as the first mathematician known to have possessed a trigonometric table, which he needed when computing the eccentricity of the orbits of the Moon and Sun. He tabulated values for the chord function, which for a central angle in a circle gives the length of the straight line segment between the points where the angle intersects the circle. He may have computed this for a circle with a circumference of 21,600 units and a radius (rounded) of 3,438 units; this circle has a unit length for each arcminute along its perimeter. (This was "proven" by Toomer, but he later "cast doubt" upon his earlier affirmation. Other authors have argued that a circle of radius 3,600 units may instead have been used by Hipparchus.) He tabulated the chords for angles with increments of 7.5°. In modern terms, the chord subtended by a central angle in a circle of given radius equals the radius times twice the sine of half of the angle, i.e.:

crd(θ) = 2R · sin(θ/2)
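The following Java sketch, purely as a modern numerical illustration, tabulates the chord function just defined at 7.5° steps for a circle of radius 3,438 units (one of the radii proposed for Hipparchus's table); it is an arithmetic aid, not a reconstruction of his actual method.

```java
// Tabulates crd(theta) = 2 * R * sin(theta / 2) at 7.5-degree increments.
public class ChordTable {
    static final double RADIUS = 3438.0;   // rounded radius giving one unit per arcminute

    static double chord(double degrees) {
        return 2.0 * RADIUS * Math.sin(Math.toRadians(degrees) / 2.0);
    }

    public static void main(String[] args) {
        for (double angle = 7.5; angle <= 180.0; angle += 7.5) {
            System.out.printf("crd(%5.1f deg) = %8.1f%n", angle, chord(angle));
        }
        // Sanity checks: crd(60 deg) equals the radius (3438) and crd(180 deg) the diameter (6876).
    }
}
```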
The now-lost work in which Hipparchus is said to have developed his chord table is called Tōn en kuklōi eutheiōn (Of Lines Inside a Circle) in Theon of Alexandria's fourth-century commentary on section I.10 of the Almagest. Some claim the table of Hipparchus may have survived in astronomical treatises in India, such as the Surya Siddhanta. Trigonometry was a significant innovation, because it allowed Greek astronomers to solve any triangle, and made it possible to make quantitative astronomical models and predictions using their preferred geometric techniques.
Hipparchus must have used a better approximation for π than the one given by Archimedes of between 223/71 (≈ 3.1408) and 22/7 (≈ 3.1429). Perhaps he had the approximation later used by Ptolemy, sexagesimal 3;08,30 (≈ 3.1417) (Almagest VI.7).
Hipparchus could have constructed his chord table using the Pythagorean theorem and a theorem known to Archimedes. He also might have used the relationship between sides and diagonals of a cyclic quadrilateral, today called Ptolemy's theorem because its earliest extant source is a proof in the Almagest (I.10).
The stereographic projection was ambiguously attributed to Hipparchus by Synesius (c. 400 AD), and on that basis Hipparchus is often credited with inventing it or at least knowing of it. However, some scholars believe this conclusion to be unjustified by available evidence. The oldest extant description of the stereographic projection is found in Ptolemy's Planisphere (2nd century AD).
Besides geometry, Hipparchus also used arithmetic techniques developed by the Chaldeans. He was one of the first Greek mathematicians to do this and, in this way, expanded the techniques available to astronomers and geographers.
There are several indications that Hipparchus knew spherical trigonometry, but the first surviving text discussing it is by Menelaus of Alexandria in the first century, who now, on that basis, commonly is credited with its discovery. (Previous to the finding of the proofs of Menelaus a century ago, Ptolemy was credited with the invention of spherical trigonometry.) Ptolemy later used spherical trigonometry to compute things such as the rising and setting points of the ecliptic, or to take account of the lunar parallax. If he did not use spherical trigonometry, Hipparchus may have used a globe for these tasks, reading values off coordinate grids drawn on it, or he may have made approximations from planar geometry, or perhaps used arithmetical approximations developed by the Chaldeans.
Lunar and solar theory
Motion of the Moon
Hipparchus also studied the motion of the Moon and confirmed the accurate values for two periods of its motion that Chaldean astronomers are widely presumed to have possessed before him. The traditional value (from Babylonian System B) for the mean synodic month is 29 days; 31,50,8,20 (sexagesimal) = 29.5305941... days. Expressed as 29 days + 12 hours + 793/1080 hours, this value was later used in the Hebrew calendar. The Chaldeans also knew that 251 synodic months ≈ 269 anomalistic months. Hipparchus used the multiple of this period by a factor of 17, because that interval is also an eclipse period, and is also close to an integer number of years (4,267 moons : 4,573 anomalistic periods : 4,630.53 nodal periods : 4,611.98 lunar orbits : 344.996 years : 344.982 solar orbits : 126,007.003 days : 126,351.985 rotations). What was so exceptional and useful about the cycle was that all 345-year-interval eclipse pairs occur slightly more than 126,007 days apart within a tight range of only approximately ±½ hour, guaranteeing (after division by 4,267) an estimate of the synodic month correct to one part in order of magnitude 10 million.
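A small arithmetic check of the eclipse-interval argument above, written as an illustrative Java sketch: dividing the 345-year eclipse interval of about 126,007.003 days by its 4,267 synodic months lands within about a second of the Babylonian System B month quoted in the text.

```java
// Compares the month length implied by the 345-year eclipse interval with the
// Babylonian System B value 29;31,50,8,20 days (sexagesimal).
public class SynodicMonth {
    public static void main(String[] args) {
        double intervalDays = 126_007.003;   // 345-year eclipse interval from the text
        int synodicMonths = 4_267;

        double derived = intervalDays / synodicMonths;
        double systemB = 29 + 31 / 60.0 + 50 / 3_600.0 + 8 / 216_000.0 + 20 / 12_960_000.0;

        System.out.printf("derived month : %.7f days%n", derived);   // about 29.530584 days
        System.out.printf("System B month: %.7f days%n", systemB);   // about 29.5305941 days
    }
}
```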
Hipparchus could confirm his computations by comparing eclipses from his own time (presumably 27 January 141 BC and 26 November 139 BC according to Toomer) with eclipses from Babylonian records 345 years earlier (Almagest IV.2).
Later al-Biruni (Qanun VII.2.II) and Copernicus (de revolutionibus IV.4) noted that the period of 4,267 moons is approximately five minutes longer than the value for the eclipse period that Ptolemy attributes to Hipparchus. However, the timing methods of the Babylonians had an error of no fewer than eight minutes. Modern scholars agree that Hipparchus rounded the eclipse period to the nearest hour, and used it to confirm the validity of the traditional values, rather than to try to derive an improved value from his own observations. From modern ephemerides and taking account of the change in the length of the day (see ΔT) we estimate that the error in the assumed length of the synodic month was less than 0.2 second in the fourth century BC and less than 0.1 second in Hipparchus's time.
Orbit of the Moon
It had been known for a long time that the motion of the Moon is not uniform: its speed varies. This is called its anomaly and it repeats with its own period; the anomalistic month. The Chaldeans took account of this arithmetically, and used a table giving the daily motion of the Moon according to the date within a long period. However, the Greeks preferred to think in geometrical models of the sky. At the end of the third century BC, Apollonius of Perga had proposed two models for lunar and planetary motion:
In the first, the Moon would move uniformly along a circle, but the Earth would be eccentric, i.e., at some distance of the center of the circle. So the apparent angular speed of the Moon (and its distance) would vary.
In the second, the Moon would move uniformly (with some mean motion in anomaly) on a secondary circular orbit, called an epicycle, that would move uniformly (with some mean motion in longitude) over the main circular orbit around the Earth, called the deferent; see deferent and epicycle.
Apollonius demonstrated that these two models were in fact mathematically equivalent. However, all this was theory and had not been put to practice. Hipparchus is the first astronomer known to attempt to determine the relative proportions and actual sizes of these orbits. Hipparchus devised a geometrical method to find the parameters from three positions of the Moon at particular phases of its anomaly. In fact, he did this separately for the eccentric and the epicycle model. Ptolemy describes the details in the Almagest IV.11. Hipparchus used two sets of three lunar eclipse observations that he carefully selected to satisfy the requirements. The eccentric model he fitted to these eclipses from his Babylonian eclipse list: 22/23 December 383 BC, 18/19 June 382 BC, and 12/13 December 382 BC. The epicycle model he fitted to lunar eclipse observations made in Alexandria at 22 September 201 BC, 19 March 200 BC, and 11 September 200 BC.
For the eccentric model, Hipparchus found for the ratio between the radius of the eccenter and the distance between the center of the eccenter and the center of the ecliptic (i.e., the observer on Earth): 3144 : ;
and for the epicycle model, the ratio between the radius of the deferent and the epicycle: : .
These figures are due to the cumbersome unit he used in his chord table and may partly be due to some sloppy rounding and calculation errors by Hipparchus, for which Ptolemy criticised him while also making rounding errors. A simpler alternate reconstruction agrees with all four numbers. Hipparchus found inconsistent results; he later used the ratio of the epicycle model ( : ), which is too small (60 : 4;45 sexagesimal). Ptolemy established a ratio of 60 : 5¼. (The maximum angular deviation producible by this geometry is the arcsin of 5¼ divided by 60, or approximately 5° 1', a figure that is sometimes therefore quoted as the equivalent of the Moon's equation of the center in the Hipparchan model.)
Apparent motion of the Sun
Before Hipparchus, Meton, Euctemon, and their pupils at Athens had made a solstice observation (i.e., timed the moment of the summer solstice) on 27 June 432 BC (proleptic Julian calendar). Aristarchus of Samos is said to have done so in 280 BC, and Hipparchus also had an observation by Archimedes. He observed the summer solstices in 146 and 135 BC, both accurate to a few hours, but observations of the moment of equinox were simpler, and he made twenty during his lifetime. Ptolemy gives an extensive discussion of Hipparchus's work on the length of the year in the Almagest III.1, and quotes many observations that Hipparchus made or used, spanning 162–128 BC, including an equinox timing by Hipparchus (on 24 March 146 BC at dawn) that differs by 5 hours from the observation made on Alexandria's large public equatorial ring that same day (at 1 hour before noon). Ptolemy claims his solar observations were on a transit instrument set in the meridian.
At the end of his career, Hipparchus wrote a book entitled Peri eniausíou megéthous ("On the Length of the Year") regarding his results. The established value for the tropical year, introduced by Callippus in or before 330 BC, was 365¼ days. Speculating a Babylonian origin for the Callippic year is difficult to defend, since Babylon did not observe solstices, and thus the only extant System B year length was based on Greek solstices (see below). Hipparchus's equinox observations gave varying results, but he points out (quoted in Almagest III.1(H195)) that the observation errors by him and his predecessors may have been as large as ¼ day. He used old solstice observations and determined a difference of approximately one day in approximately 300 years. So he set the length of the tropical year to 365¼ − 1/300 days (= 365.24666... days = 365 days 5 hours 55 min), which differs from the modern estimate of the value in his time (including the effect of Earth's spin acceleration) of approximately 365.2425 days, an error of approximately 6 min per year, an hour per decade, and ten hours per century.
Between the solstice observation of Meton and his own, there were 297 years spanning 108,478 days; this implies a tropical year of 365.24579... days = 365;14,44,51 days (sexagesimal; = 365 days + 14/60 + 44/60² + 51/60³), a year length found on one of the few Babylonian clay tablets which explicitly specifies the System B month. Whether the Babylonians knew of Hipparchus's work or the other way around is debatable.
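The arithmetic is easy to verify; the illustrative sketch below divides the quoted day count by the number of years and compares the result with the System B value 365;14,44,51:

    from fractions import Fraction

    year = Fraction(108_478, 297)               # 297 years spanning 108,478 days
    print(float(year))                          # 365.2457912...

    system_b = 365 + Fraction(14, 60) + Fraction(44, 60**2) + Fraction(51, 60**3)
    print(float(system_b))                      # 365.2457917..., the value 365;14,44,51

    print(float(abs(year - system_b)) * 86400)  # the two differ by only a few hundredths of a second per year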
Hipparchus also gave the value for the sidereal year as 365 + 1/4 + 1/144 days (= 365.25694... days = 365 days 6 hours 10 min). Another value for the sidereal year that is attributed to Hipparchus (by the physician Galen in the second century AD) is 365 + 1/4 + 1/288 days (= 365.25347... days = 365 days 6 hours 5 min), but this may be a corruption of another value attributed to a Babylonian source: 365 + 1/4 + 1/144 days (= 365.25694... days = 365 days 6 hours 10 min). It is not clear whether Hipparchus got the value from Babylonian astronomers or calculated it himself.
Orbit of the Sun
Before Hipparchus, astronomers knew that the lengths of the seasons are not equal. Hipparchus made observations of equinox and solstice, and according to Ptolemy (Almagest III.4) determined that spring (from spring equinox to summer solstice) lasted 94½ days, and summer (from summer solstice to autumn equinox) 92½ days. This is inconsistent with a premise of the Sun moving around the Earth in a circle at uniform speed. Hipparchus's solution was to place the Earth not at the center of the Sun's motion, but at some distance from the center. This model described the apparent motion of the Sun fairly well. It is known today that the planets, including the Earth, move in approximate ellipses around the Sun, but this was not discovered until Johannes Kepler published his first two laws of planetary motion in 1609. The value for the eccentricity attributed to Hipparchus by Ptolemy is that the offset is 1/24 of the radius of the orbit (which is a little too large), and the direction of the apogee would be at longitude 65.5° from the vernal equinox. Hipparchus may also have used other sets of observations, which would lead to different values. The solar longitudes of one of his two eclipse trios are consistent with his having initially adopted inaccurate lengths for spring and summer. His other triplet of solar positions is consistent with season lengths that improve on the values (94½ and 92½ days) attributed to Hipparchus by Ptolemy. Ptolemy made no change three centuries later, and expressed lengths for the autumn and winter seasons which were already implicit (as shown, e.g., by A. Aaboe).
Distance, parallax, size of the Moon and the Sun
Hipparchus also undertook to find the distances and sizes of the Sun and the Moon, in the now-lost work On Sizes and Distances. His work is mentioned in Ptolemy's Almagest V.11, and in a commentary thereon by Pappus; Theon of Smyrna (2nd century) also mentions the work, under the title On Sizes and Distances of the Sun and Moon.
Hipparchus measured the apparent diameters of the Sun and Moon with his diopter. Like others before and after him, he found that the Moon's size varies as it moves on its (eccentric) orbit, but he found no perceptible variation in the apparent diameter of the Sun. He found that at the mean distance of the Moon, the Sun and Moon had the same apparent diameter; at that distance, the Moon's diameter fits 650 times into the circle, i.e., the mean apparent diameters are 360/650 = 0°33′14″.
Like others before and after him, he also noticed that the Moon has a noticeable parallax, i.e., that it appears displaced from its calculated position (compared to the Sun or stars), and the difference is greater when closer to the horizon. He knew that this is because in the then-current models the Moon circles the center of the Earth, but the observer is at the surface—the Moon, Earth and observer form a triangle with a sharp angle that changes all the time. From the size of this parallax, the distance of the Moon as measured in Earth radii can be determined. For the Sun however, there was no observable parallax (we now know that it is about 8.8", several times smaller than the resolution of the unaided eye).
In the first book, Hipparchus assumes that the parallax of the Sun is 0, as if it is at infinite distance. He then analyzed a solar eclipse, which Toomer presumes to be the eclipse of 14 March 190 BC. It was total in the region of the Hellespont (and in his birthplace, Nicaea); at the time Toomer proposes the Romans were preparing for war with Antiochus III in the area, and the eclipse is mentioned by Livy in his Ab Urbe Condita Libri VIII.2. It was also observed in Alexandria, where the Sun was reported to be obscured 4/5ths by the Moon. Alexandria and Nicaea are on the same meridian. Alexandria is at about 31° North, and the region of the Hellespont about 40° North. (It has been contended that authors like Strabo and Ptolemy had fairly decent values for these geographical positions, so Hipparchus must have known them too. However, Strabo's Hipparchus dependent latitudes for this region are at least 1° too high, and Ptolemy appears to copy them, placing Byzantium 2° high in latitude.) Hipparchus could draw a triangle formed by the two places and the Moon, and from simple geometry was able to establish a distance of the Moon, expressed in Earth radii. Because the eclipse occurred in the morning, the Moon was not in the meridian, and it has been proposed that as a consequence the distance found by Hipparchus was a lower limit. In any case, according to Pappus, Hipparchus found that the least distance is 71 (from this eclipse), and the greatest 83 Earth radii.
In the second book, Hipparchus starts from the opposite extreme assumption: he assigns a (minimum) distance to the Sun of 490 Earth radii. This would correspond to a parallax of 7′, which is apparently the greatest parallax that Hipparchus thought would not be noticed (for comparison: the typical resolution of the human eye is about 2′; Tycho Brahe made naked-eye observations with an accuracy down to 1′). In this case, the shadow of the Earth is a cone rather than a cylinder as under the first assumption. Hipparchus observed (at lunar eclipses) that at the mean distance of the Moon, the diameter of the shadow cone is 2½ lunar diameters. That apparent diameter is, as he had observed, 360/650 degrees. With these values and simple geometry, Hipparchus could determine the mean distance; because it was computed for a minimum distance of the Sun, it is the maximum mean distance possible for the Moon. With his value for the eccentricity of the orbit, he could compute the least and greatest distances of the Moon too. According to Pappus, he found a least distance of 62, a mean of 67⅓, and consequently a greatest distance of 72⅔ Earth radii. With this method, as the parallax of the Sun decreases (i.e., its distance increases), the minimum limit for the mean distance is 59 Earth radii—exactly the mean distance that Ptolemy later derived.
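The relation between an assumed solar parallax and the corresponding distance in Earth radii is simple trigonometry; the illustrative sketch below shows that a 7′ parallax corresponds to roughly the 490 Earth radii used in the second book:

    import math

    parallax_arcmin = 7.0                       # the largest parallax assumed to go unnoticed
    distance_earth_radii = 1 / math.sin(math.radians(parallax_arcmin / 60))
    print(round(distance_earth_radii))          # ~491 Earth radii, close to the 490 quoted above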
Hipparchus thus had the problematic result that his minimum distance (from book 1) was greater than his maximum mean distance (from book 2). He was intellectually honest about this discrepancy, and probably realized that especially the first method is very sensitive to the accuracy of the observations and parameters. (In fact, modern calculations show that the size of the 189 BC solar eclipse at Alexandria must have been closer to 9/10ths and not the reported 4/5ths, a fraction more closely matched by the degree of totality at Alexandria of eclipses occurring in 310 and 129 BC, which were also nearly total in the Hellespont and are thought by many to be more likely possibilities for the eclipse Hipparchus used for his computations.)
Ptolemy later measured the lunar parallax directly (Almagest V.13), and used the second method of Hipparchus with lunar eclipses to compute the distance of the Sun (Almagest V.15). He criticizes Hipparchus for making contradictory assumptions, and obtaining conflicting results (Almagest V.11): but apparently he failed to understand Hipparchus's strategy to establish limits consistent with the observations, rather than a single value for the distance. His results were the best so far: the actual mean distance of the Moon is 60.3 Earth radii, within his limits from Hipparchus's second book.
Theon of Smyrna wrote that according to Hipparchus, the Sun is 1,880 times the size of the Earth, and the Earth twenty-seven times the size of the Moon; apparently this refers to volumes, not diameters. From the geometry of book 2 it follows that the Sun is at 2,550 Earth radii, and the mean distance of the Moon is 60½ radii. Similarly, Cleomedes quotes Hipparchus for the ratio of the sizes of the Sun and Earth as 1050:1; this leads to a mean lunar distance of 61 radii. Apparently Hipparchus later refined his computations, and derived accurate single values that he could use for predictions of solar eclipses.
See Toomer (1974) for a more detailed discussion.
Eclipses
Pliny (Naturalis Historia II.X) tells us that Hipparchus demonstrated that lunar eclipses can occur five months apart, and solar eclipses seven months (instead of the usual six months); and the Sun can be hidden twice in thirty days, but as seen by different nations. Ptolemy discussed this a century later at length in Almagest VI.6. The geometry, and the limits of the positions of Sun and Moon when a solar or lunar eclipse is possible, are explained in Almagest VI.5. Hipparchus apparently made similar calculations. The result that two solar eclipses can occur one month apart is important, because this cannot be based on observations: one is visible in the northern and the other in the southern hemisphere—as Pliny indicates—and the latter was inaccessible to the Greeks.
Prediction of a solar eclipse, i.e., exactly when and where it will be visible, requires a solid lunar theory and proper treatment of the lunar parallax. Hipparchus must have been the first to be able to do this. A rigorous treatment requires spherical trigonometry; thus, those who remain certain that Hipparchus lacked it must speculate that he made do with planar approximations. He may have discussed these things in Perí tēs katá plátos mēniaías tēs selēnēs kinēseōs ("On the monthly motion of the Moon in latitude"), a work mentioned in the Suda.
Pliny also remarks that "he also discovered for what exact reason, although the shadow causing the eclipse must from sunrise onward be below the earth, it happened once in the past that the Moon was eclipsed in the west while both luminaries were visible above the earth" (translation H. Rackham (1938), Loeb Classical Library 330 p. 207). Toomer argued that this must refer to the large total lunar eclipse of 26 November 139 BC, when over a clean sea horizon as seen from Rhodes, the Moon was eclipsed in the northwest just after the Sun rose in the southeast. This would be the second eclipse of the 345-year interval that Hipparchus used to verify the traditional Babylonian periods: this puts a late date to the development of Hipparchus's lunar theory. We do not know what "exact reason" Hipparchus found for seeing the Moon eclipsed while apparently it was not in exact opposition to the Sun. Parallax lowers the altitude of the luminaries; refraction raises them, and from a high point of view the horizon is lowered.
Astronomical instruments and astrometry
Hipparchus and his predecessors used various instruments for astronomical calculations and observations, such as the gnomon, the astrolabe, and the armillary sphere.
Hipparchus is credited with the invention or improvement of several astronomical instruments, which were used for a long time for naked-eye observations. According to Synesius of Ptolemais (4th century) he made the first astrolabion: this may have been an armillary sphere (which Ptolemy however says he constructed, in Almagest V.1); or the predecessor of the planar instrument called astrolabe (also mentioned by Theon of Alexandria). With an astrolabe Hipparchus was the first to be able to measure the geographical latitude and time by observing fixed stars. Previously this was done at daytime by measuring the shadow cast by a gnomon, by recording the length of the longest day of the year or with the portable instrument known as a scaphe.
Ptolemy mentions (Almagest V.14) that he used a similar instrument as Hipparchus, called dioptra, to measure the apparent diameter of the Sun and Moon. Pappus of Alexandria described it (in his commentary on the Almagest of that chapter), as did Proclus (Hypotyposis IV). It was a four-foot rod with a scale, a sighting hole at one end, and a wedge that could be moved along the rod to exactly obscure the disk of Sun or Moon.
Hipparchus also observed solar equinoxes, which may be done with an equatorial ring: its shadow falls on itself when the Sun is on the equator (i.e., in one of the equinoctial points on the ecliptic), but the shadow falls above or below the opposite side of the ring when the Sun is south or north of the equator. Ptolemy quotes (in Almagest III.1 (H195)) a description by Hipparchus of an equatorial ring in Alexandria; a little further he describes two such instruments present in Alexandria in his own time.
Hipparchus applied his knowledge of spherical angles to the problem of denoting locations on the Earth's surface. Before him a grid system had been used by Dicaearchus of Messana, but Hipparchus was the first to apply mathematical rigor to the determination of the latitude and longitude of places on the Earth. Hipparchus wrote a critique in three books on the work of the geographer Eratosthenes of Cyrene (3rd century BC), called Pròs tèn Eratosthénous geographían ("Against the Geography of Eratosthenes"). It is known to us from Strabo of Amaseia, who in his turn criticised Hipparchus in his own Geographia. Hipparchus apparently made many detailed corrections to the locations and distances mentioned by Eratosthenes. It seems he did not introduce many improvements in methods, but he did propose a means to determine the geographical longitudes of different cities at lunar eclipses (Strabo Geographia 1.1.12). A lunar eclipse is visible simultaneously on half of the Earth, and the difference in longitude between places can be computed from the difference in local time when the eclipse is observed. His approach would give accurate results if it were correctly carried out, but the limitations of timekeeping accuracy in his era made this method impractical.
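The principle is that the Earth turns through 15° of longitude per hour, so the difference between the local times at which two observers record the same instant of a lunar eclipse gives their difference in longitude directly. The sketch below is a purely hypothetical illustration; the clock readings are invented:

    local_time_a = 20.0          # hours, local time of mid-eclipse at the first place (made-up value)
    local_time_b = 21.5          # hours, local time of the same mid-eclipse at the second place (made-up value)

    degrees_per_hour = 360 / 24  # the Earth rotates 15 degrees of longitude per hour
    longitude_difference = (local_time_b - local_time_a) * degrees_per_hour
    print(longitude_difference)  # 22.5 degrees: the second place lies 22.5 degrees east of the first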
Star catalog
Late in his career (possibly about 135 BC) Hipparchus compiled his star catalog. Scholars have been searching for it for centuries. In 2022, it was announced that a part of it was discovered in a medieval parchment manuscript, Codex Climaci Rescriptus, from Saint Catherine's Monastery in the Sinai Peninsula, Egypt as hidden text (palimpsest). This was proven wrong in 2024.
Hipparchus also constructed a celestial globe depicting the constellations, based on his observations. His interest in the fixed stars may have been inspired by the observation of a supernova (according to Pliny), or by his discovery of precession, according to Ptolemy, who says that Hipparchus could not reconcile his data with earlier observations made by Timocharis and Aristillus. For more information see Discovery of precession. In Raphael's painting The School of Athens, Hipparchus may be depicted holding his celestial globe, as the representative figure for astronomy. It is not certain that the figure is meant to represent him.
Previously, Eudoxus of Cnidus in the fourth century BC had described the stars and constellations in two books called Phaenomena and Enoptron. Aratus wrote a poem called Phaenomena or Arateia based on Eudoxus's work. Hipparchus wrote a commentary on the Arateia—his only preserved work—which contains many stellar positions and times for rising, culmination, and setting of the constellations, and these are likely to have been based on his own measurements.
According to Roman sources, Hipparchus made his measurements with a scientific instrument and he obtained the positions of roughly 850 stars. Pliny the Elder writes in book II, 24–26 of his Natural History:
This passage reports that
Hipparchus was inspired by a newly emerging star
he doubted the stability of stellar brightnesses
he observed with appropriate instruments (plural—it is not said that he observed everything with the same instrument)
he made a catalogue of stars
It is unknown what instrument he used. The armillary sphere was probably invented only later—maybe by Ptolemy 265 years after Hipparchus. The historian of science S. Hoffmann found clues that Hipparchus may have observed the longitudes and latitudes in different coordinate systems and, thus, with different instrumentation. Right ascensions, for instance, could have been observed with a clock, while angular separations could have been measured with another device.
Stellar magnitude
Hipparchus is conjectured to have ranked the apparent magnitudes of stars on a numerical scale from 1, the brightest, to 6, the faintest. This hypothesis is based on a vague statement by Pliny the Elder, but cannot be proven by the data in Hipparchus's commentary on Aratus's poem. In this, the only work by his hand that has survived, he does not use the magnitude scale but estimates brightnesses unsystematically. However, this does not prove or disprove anything, because the commentary might be an early work while the magnitude scale could have been introduced later. Yet it has been shown that the error bars of the magnitudes in the ancient star catalogue are about 1.5 mag, which suggests that these numbers are not based on measurements; several suggested measurement methodologies and feasibility studies would all yield smaller error bars. Hence, Hoffmann (2022) suggested that the magnitudes were not measured at all but were mere estimates intended for globe makers, to improve pattern recognition on globes used as astronomers' computing machines.
Nevertheless, this system certainly precedes Ptolemy, who used it extensively about AD 150. This system was made more precise and extended by N. R. Pogson in 1856, who placed the magnitudes on a logarithmic scale, making magnitude 1 stars 100 times brighter than magnitude 6 stars; thus each magnitude is the fifth root of 100, or about 2.512, times brighter than the next faintest magnitude.
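A minimal illustration of Pogson's scale: the brightness ratio between adjacent magnitudes is the fifth root of 100, so five magnitude steps give a factor of exactly 100:

    step = 100 ** (1 / 5)
    print(step)                      # 2.5118864... : brightness ratio between adjacent magnitudes
    print(step ** 5)                 # 100.0 : a magnitude-1 star outshines a magnitude-6 star a hundredfold

    def flux_ratio(m_bright, m_faint):
        """Brightness ratio of a star of magnitude m_bright to one of magnitude m_faint."""
        return 100 ** ((m_faint - m_bright) / 5)

    print(flux_ratio(1, 6))          # 100.0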
Coordinate system
It is disputed which coordinate system(s) he used. Ptolemy's catalog in the Almagest, which is derived from Hipparchus's catalog, is given in ecliptic coordinates. Although Hipparchus strictly distinguishes between "signs" (30° section of the zodiac) and "constellations" in the zodiac, it is highly questionable whether or not he had an instrument to directly observe / measure units on the ecliptic. He probably marked them as a unit on his celestial globe but the instrumentation for his observations is unknown.
Delambre, in his Histoire de l'Astronomie Ancienne (1817), concluded that Hipparchus knew and used the equatorial coordinate system, a conclusion challenged by Otto Neugebauer in his History of Ancient Mathematical Astronomy (1975). Hipparchus seems to have used a mix of ecliptic coordinates and equatorial coordinates: in his commentary on Eudoxus he provides stars' polar distance (equivalent to the declination in the equatorial system), right ascension (equatorial), longitude (ecliptic), and polar longitude (hybrid), but not celestial latitude. This opinion was confirmed by the careful investigation of Hoffmann, who independently studied the material, potential sources, techniques and results of Hipparchus and reconstructed his celestial globe and its making.
As with most of his work, Hipparchus's star catalog was adopted and perhaps expanded by Ptolemy, who has (since Brahe in 1598) been accused by some of fraud for stating (Syntaxis, book 7, chapter 4) that he observed all 1025 stars; critics claim that, for almost every star, he used Hipparchus's data and precessed it to his own epoch centuries later by adding 2°40′ to the longitude, using an erroneously small precession constant of 1° per century. This claim is highly exaggerated, because it applies modern standards of citation to an ancient author. All that is true is that "the ancient star catalogue" initiated by Hipparchus in the second century BC was reworked and improved multiple times in the 265 years up to the Almagest (which is good scientific practice even today). Although the Almagest star catalogue is based upon Hipparchus's, it is not merely a blind copy but was enriched, enhanced, and thus (at least partially) re-observed.
Celestial globe
Hipparchus's celestial globe was an instrument similar to modern electronic computers. He used it to determine risings, settings, and culminations (cf. also Almagest, book VIII, chapter 3). Therefore, his globe was mounted in a horizontal plane and had a meridian ring with a scale. In combination with a grid that divided the celestial equator into 24 hour lines (longitudes equalling our right ascension hours), the instrument allowed him to determine the hours. The ecliptic was marked and divided into 12 sections of equal length (the "signs"), for which he used a separate term in order to distinguish them from the constellations. The globe was virtually reconstructed by a historian of science.
Arguments for and against Hipparchus's star catalog in the Almagest
For:
common errors in the reconstructed Hipparchan star catalogue and the Almagest suggest a direct transfer without re-observation within 265 years. There are 18 stars with common errors; for the other ~800 stars, the errors are either not extant or lie within the error ellipse, which means that no further statement can be made about these hundreds of stars.
further statistical arguments
Against:
Unlike Ptolemy, Hipparchus did not use ecliptic coordinates to describe stellar positions.
Hipparchus's catalogue is reported in Roman times to have listed about 850 stars, but Ptolemy's catalogue has 1025 stars. Thus, somebody has added further entries.
There are stars cited in the Almagest from Hipparchus that are missing in the Almagest star catalogue. Thus, despite all the reworking over 265 years of scientific progress, not all of Hipparchus's stars made it into the Almagest version of the star catalogue.
Conclusion: Hipparchus's star catalogue is one of the sources of the Almagest star catalogue but not the only source.
Precession of the equinoxes (146–127 BC)
Hipparchus is generally recognized as discoverer of the precession of the equinoxes in 127 BC. His two books on precession, On the Displacement of the Solstitial and Equinoctial Points and On the Length of the Year, are both mentioned in the Almagest of Claudius Ptolemy. According to Ptolemy, Hipparchus measured the longitude of Spica and Regulus and other bright stars. Comparing his measurements with data from his predecessors, Timocharis and Aristillus, he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century.
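A rough back-of-the-envelope check of that lower bound, assuming (the interval is not stated above) that Timocharis's observations of Spica date to about 290 BC and Hipparchus's to about 130 BC:

    shift_degrees = 2.0                           # Spica's reported displacement relative to the autumnal equinox
    interval_years = 160                          # assumed interval between Timocharis and Hipparchus
    print(shift_degrees / interval_years * 100)   # 1.25 degrees per century, above Hipparchus's bound of 1

    modern_rate = 50.3 / 3600 * 100               # modern precession of roughly 50.3 arcseconds per year
    print(round(modern_rate, 2))                  # ~1.4 degrees per century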
Geography
Hipparchus's treatise Against the Geography of Eratosthenes in three books is not preserved.
Most of our knowledge of it comes from Strabo, according to whom Hipparchus thoroughly and often unfairly criticized Eratosthenes, mainly for internal contradictions and inaccuracy in determining positions of geographical localities. Hipparchus insists that a geographic map must be based only on astronomical measurements of latitudes and longitudes and triangulation for finding unknown distances.
In geographic theory and methods Hipparchus introduced three main innovations.
He was the first to use the grade grid; the first to determine geographic latitude from star observations, and not only from the Sun's altitude, a method known long before him; and the first to suggest that geographic longitude could be determined by means of simultaneous observations of lunar eclipses in distant places. In the practical part of his work, the so-called "table of climata", Hipparchus listed latitudes for several tens of localities. In particular, he improved Eratosthenes' values for the latitudes of Athens, Sicily, and the southern extremity of India. In calculating latitudes of climata (latitudes correlated with the length of the longest solstitial day), Hipparchus used an unexpectedly accurate value for the obliquity of the ecliptic, 23°40′ (the actual value in the second half of the second century BC was approximately 23°43′), whereas all other ancient authors knew only a roughly rounded value of 24°, and even Ptolemy used a less accurate value, 23°51′.
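The link between latitude and the length of the longest day that underlies such a table of climata follows from spherical astronomy: the Sun sets when its hour angle H satisfies cos H = −tan φ · tan ε, with φ the latitude and ε the obliquity. The sketch below is an illustrative modern computation, not a reconstruction of Hipparchus's own table; the latitude used for Athens is an assumed round value:

    import math

    def longest_day_hours(latitude_deg, obliquity_deg=23 + 40 / 60):
        """Length of the longest day (sunrise to sunset), ignoring refraction and the solar disk."""
        phi = math.radians(latitude_deg)
        eps = math.radians(obliquity_deg)
        sunset_hour_angle = math.degrees(math.acos(-math.tan(phi) * math.tan(eps)))
        return 2 * sunset_hour_angle / 15      # 15 degrees of hour angle per hour

    print(round(longest_day_hours(38.0), 2))   # ~14.67 hours for a latitude near Athens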
Hipparchus opposed the view generally accepted in the Hellenistic period that the Atlantic and Indian Oceans and the Caspian Sea are parts of a single ocean. At the same time he extends the limits of the oikoumene, i.e. the inhabited part of the land, up to the equator and the Arctic Circle. Hipparchus's ideas found their reflection in the Geography of Ptolemy. In essence, Ptolemy's work is an extended attempt to realize Hipparchus's vision of what geography ought to be.
Modern speculation
Hipparchus was in the international news in 2005, when it was again proposed (as in 1898) that the data on the celestial globe of Hipparchus or in his star catalog may have been preserved in the only surviving large ancient celestial globe which depicts the constellations with moderate accuracy, the globe carried by the Farnese Atlas. Evidence suggests that the Farnese globe may show constellations in the Aratean tradition and deviate from the constellations used by Hipparchus.
A line in Plutarch's Table Talk states that Hipparchus counted 103,049 compound propositions that can be formed from ten simple propositions. 103,049 is the tenth Schröder–Hipparchus number, which counts the number of ways of adding one or more pairs of parentheses around consecutive subsequences of two or more items in any sequence of ten symbols. This has led to speculation that Hipparchus knew about enumerative combinatorics, a field of mathematics that developed independently in modern mathematics.
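The Schröder–Hipparchus (little Schröder) numbers satisfy a simple recurrence, and the tenth term indeed comes out to 103,049; an illustrative computation:

    def schroeder_hipparchus(n):
        """n-th little Schröder number via n*s(n) = 3*(2n-3)*s(n-1) - (n-3)*s(n-2), with s(1) = s(2) = 1."""
        s = [0, 1, 1]
        for k in range(3, n + 1):
            s.append((3 * (2 * k - 3) * s[k - 1] - (k - 3) * s[k - 2]) // k)
        return s[n]

    print(schroeder_hipparchus(10))   # 103049, the figure reported by Plutarch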
A 2013 paper suggested that Hipparchus accidentally observed the planet Uranus in 128 BC and catalogued it as a star, over a millennium and a half before its formal discovery in 1781.
Legacy
Hipparchus may be depicted opposite Ptolemy in Raphael's 1509–1511 painting The School of Athens, although this figure is usually identified as Zoroaster.
The formal name for the ESA's Hipparcos Space Astrometry Mission is High Precision Parallax Collecting Satellite, making a backronym, HiPParCoS, that echoes and commemorates the name of Hipparchus.
The lunar crater Hipparchus, the Martian crater Hipparchus, and the asteroid 4000 Hipparchus are named after him.
He was inducted into the International Space Hall of Fame in 2004.
Jean Baptiste Joseph Delambre, historian of astronomy, mathematical astronomer and director of the Paris Observatory, in his history of astronomy in the 18th century (1821), considered Hipparchus along with Johannes Kepler and James Bradley the greatest astronomers of all time.
The Astronomers Monument at the Griffith Observatory in Los Angeles, California, United States features a relief of Hipparchus as one of six of the greatest astronomers of all time and the only one from Antiquity.
Johannes Kepler had great respect for Tycho Brahe's methods and the accuracy of his observations, and considered him to be the new Hipparchus, who would provide the foundation for a restoration of the science of astronomy.
Translations
Originally published in
See also
Aristarchus of Samos, a Greek mathematician who calculated the distance from the Earth to the Sun.
Eratosthenes, a Greek mathematician who calculated the circumference of the Earth and also the distance from the Earth to the Sun.
Greek mathematics
On the Sizes and Distances (Aristarchus)
On the Sizes and Distances (Hipparchus)
Posidonius, a Greek astronomer and mathematician who calculated the circumference of the Earth.
Notes
References
Works cited
Further reading
External links
David Ulansey about Hipparchus's understanding of the precession
A brief view by Carmen Rush on Hipparchus' stellar catalog
190s BC births
120 BC deaths
2nd-century BC Greek people
2nd-century BC writers
2nd-century BC mathematicians
Ancient Greek astronomers
Ancient Greek geographers
Ancient Greek mathematicians
Ancient Anatolian Greeks
Ancient Rhodian scientists
Scientific instrument makers
People from Nicaea
Ancient Greek inventors
2nd-century BC geographers
2nd-century BC astronomers
Equinoxes | Hipparchus | ["Astronomy"] | 10,083 | ["Time in astronomy", "Equinoxes"] |
13,605 | https://en.wikipedia.org/wiki/Heteroatom | In chemistry, a heteroatom () is, strictly, any atom that is not carbon or hydrogen.
Organic chemistry
In practice, the term is mainly used more specifically to indicate that non-carbon atoms have replaced carbon in the backbone of the molecular structure. Typical heteroatoms are nitrogen (N), oxygen (O), sulfur (S), phosphorus (P), chlorine (Cl), bromine (Br), and iodine (I), as well as the metals lithium (Li) and magnesium (Mg).
Proteins
It can also be used with highly specific meanings in specialised contexts. In the description of protein structure, in particular in the Protein Data Bank file format, a heteroatom record (HETATM) describes an atom as belonging to a small molecule cofactor rather than being part of a biopolymer chain.
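For illustration, HETATM records can be pulled out of a PDB-format file by slicing the format's fixed-width columns; this is a minimal sketch rather than a full parser, and the file name is a placeholder:

    def read_hetatm_records(path):
        """Collect HETATM entries (non-polymer atoms) from a PDB-format file."""
        heteroatoms = []
        with open(path) as handle:
            for line in handle:
                if line.startswith("HETATM"):
                    heteroatoms.append({
                        "atom_name": line[12:16].strip(),
                        "residue": line[17:20].strip(),    # e.g. HOH for water, HEM for heme
                        "x": float(line[30:38]),
                        "y": float(line[38:46]),
                        "z": float(line[46:54]),
                    })
        return heteroatoms

    # records = read_hetatm_records("example.pdb")   # "example.pdb" is a placeholder file name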
Zeolites
In the context of zeolites, the term heteroatom refers to partial isomorphous substitution of the typical framework atoms (silicon, aluminium, and phosphorus) by other elements such as beryllium, vanadium, and chromium. The goal is usually to adjust properties of the material (e.g., Lewis acidity) to optimize the material for a certain application (e.g., catalysis).
References
External links
Journal - Heteroatom Chemistry
Organic chemistry | Heteroatom | ["Chemistry"] | 292 | ["nan"] |
13,606 | https://en.wikipedia.org/wiki/Half-life | Half-life (symbol t½) is the time required for a quantity (of substance) to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential (or, rarely, non-exponential) decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life (in exponential growth) is doubling time.
The original term, half-life period, dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to half-life in the early 1950s. Rutherford applied the principle of a radioactive element's half-life in studies of age determination of rocks by measuring the decay period of radium to lead-206.
Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed.
Probabilistic nature
A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will not be "half of an atom" left after one second.
Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay on average". In other words, the probability of a radioactive atom decaying within its half-life is 50%.
For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a very good approximation to say that half of the atoms remain after one half-life.
Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program.
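One such exercise is a direct simulation in which every atom independently "flips a coin" once per half-life; a minimal sketch:

    import random

    def simulate_decay(n_atoms, n_half_lives):
        """Each atom survives a given half-life with probability 1/2."""
        remaining, history = n_atoms, [n_atoms]
        for _ in range(n_half_lives):
            remaining = sum(1 for _ in range(remaining) if random.random() < 0.5)
            history.append(remaining)
        return history

    print(simulate_decay(10_000, 5))   # roughly [10000, 5000, 2500, 1250, 625, 312], with random scatter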
Formulas for half-life in exponential decay
An exponential decay can be described by any of the following four equivalent formulas:

N(t) = N₀ (1/2)^(t/t½) = N₀ 2^(−t/t½) = N₀ e^(−t/τ) = N₀ e^(−λt)

where
N₀ is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.),
N(t) is the quantity that still remains and has not yet decayed after a time t,
t½ is the half-life of the decaying quantity,
τ is a positive number called the mean lifetime of the decaying quantity,
λ is a positive number called the decay constant of the decaying quantity.
The three parameters t½, τ, and λ are directly related in the following way:

t½ = ln(2)/λ = τ · ln(2)

where ln(2) is the natural logarithm of 2 (approximately 0.693).
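A minimal numerical illustration of these relations, using the well-known half-life of carbon-14 (about 5,730 years) as the example value:

    import math

    half_life = 5_730.0                       # years, half-life of carbon-14
    mean_lifetime = half_life / math.log(2)   # tau = t_half / ln 2
    decay_constant = math.log(2) / half_life  # lambda = ln 2 / t_half

    def remaining_fraction(t):
        return 0.5 ** (t / half_life)         # equivalently math.exp(-decay_constant * t)

    print(round(mean_lifetime))               # ~8267 years
    print(remaining_fraction(2 * half_life))  # 0.25 after two half-lives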
Half-life and reaction orders
In chemical kinetics, the value of the half-life depends on the reaction order:
Zero order kinetics
The rate of this kind of reaction does not depend on the substrate concentration, . Thus the concentration decreases linearly.
The integrated rate law of zero order kinetics is:

[A] = [A]₀ − kt

In order to find the half-life, we replace the concentration [A] with the initial concentration divided by 2:

[A]₀/2 = [A]₀ − k·t½

and isolate the time:

t½ = [A]₀ / (2k)

This formula indicates that the half-life for a zero order reaction depends on the initial concentration and the rate constant.
First order kinetics
In first order reactions, the rate of reaction will be proportional to the concentration of the reactant. Thus the concentration will decrease exponentially,

[A] = [A]₀ e^(−kt)

as time progresses until it reaches zero, and the half-life will be constant, independent of concentration.

The time t½ for [A] to decrease from [A]₀ to ½[A]₀ in a first-order reaction is given by the following equation:

k·t½ = −ln(½[A]₀ / [A]₀) = ln 2

It can be solved for

t½ = ln 2 / k

For a first-order reaction, the half-life of a reactant is independent of its initial concentration. Therefore, if the concentration of A at some arbitrary stage of the reaction is [A], then it will have fallen to ½[A] after a further interval of (ln 2)/k. Hence, the half-life of a first order reaction is given as the following:

t½ = ln 2 / k

The half-life of a first order reaction is independent of its initial concentration and depends solely on the reaction rate constant, k.
Second order kinetics
In second order reactions, the rate of reaction is proportional to the square of the concentration. By integrating this rate, it can be shown that the concentration [A] of the reactant decreases following this formula:

1/[A] = kt + 1/[A]₀

We replace [A] with ½[A]₀ in order to calculate the half-life of the reactant, and isolate the time of the half-life (t½):

t½ = 1 / (k[A]₀)

This shows that the half-life of second order reactions depends on the initial concentration and rate constant.
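Summarising the three cases, the half-life formulas can be written as small functions; an illustrative sketch with concentrations and rate constants in consistent units:

    import math

    def half_life_zero_order(a0, k):
        return a0 / (2 * k)        # grows with the initial concentration

    def half_life_first_order(k):
        return math.log(2) / k     # independent of the initial concentration

    def half_life_second_order(a0, k):
        return 1 / (k * a0)        # shrinks as the initial concentration grows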
Decay by two or more processes
Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T½ can be related to the half-lives t₁ and t₂ that the quantity would have if each of the decay processes acted in isolation:

1/T½ = 1/t₁ + 1/t₂

For three or more processes, the analogous formula is:

1/T½ = 1/t₁ + 1/t₂ + 1/t₃ + ...
For a proof of these formulas, see Exponential decay § Decay by two or more processes.
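A minimal illustration of the two-process formula, with made-up half-lives of 10 and 40 in arbitrary time units:

    def combined_half_life(*half_lives):
        """Half-life of a quantity decaying by several independent exponential processes."""
        return 1 / sum(1 / t for t in half_lives)

    print(combined_half_life(10.0, 40.0))   # 8.0: the combined decay is faster than either process alone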
Examples
There is a half-life describing any exponential-decay process. For example:
As noted above, in radioactive decay the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides.
The current flowing through an RC circuit or RL circuit decays with a half-life of ln(2)·RC or ln(2)·L/R, respectively. For this example the term half time tends to be used rather than "half-life", but they mean the same thing.
In a chemical reaction, the half-life of a species is the time it takes for the concentration of that substance to fall to half of its initial value. In a first-order reaction the half-life of the reactant is ln(2)/λ, where λ (also denoted as k) is the reaction rate constant.
In non-exponential decay
The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening. In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on.
In biology and pharmacology
A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life").
The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions.
While a radioactive isotope decays almost perfectly according to first order kinetics, where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.
For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months.
The concept of a half-life has also been utilized for pesticides in plants, and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants.
In epidemiology, the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially.
See also
Half time (physics)
List of radioactive nuclides by half-life
Mean lifetime
Median lethal dose
References
External links
https://www.calculator.net/half-life-calculator.html Comprehensive half-life calculator
wiki: Decay Engine, Nucleonica.net (archived 2016)
System Dynamics – Time Constants, Bucknell.edu
Researchers Nikhef and UvA measure slowest radioactive decay ever: Xe-124 with 18 billion trillion years
https://academo.org/demos/radioactive-decay-simulator/ Interactive radioactive decay simulator demonstrating how half-life is related to the rate of decay
Chemical kinetics
Radioactivity
Nuclear fission
Temporal exponentials | Half-life | ["Physics", "Chemistry"] | 1,897 | ["Nuclear fission", "Chemical reaction engineering", "Physical quantities", "Time", "Nuclear physics", "Temporal exponentials", "Spacetime", "Chemical kinetics", "Radioactivity"] |
13,609 | https://en.wikipedia.org/wiki/Hydrogen%20bond | In chemistry, a hydrogen bond (or H-bond) is primarily an electrostatic force of attraction between a hydrogen (H) atom which is covalently bonded to a more electronegative "donor" atom or group (Dn), and another electronegative atom bearing a lone pair of electrons—the hydrogen bond acceptor (Ac). Such an interacting system is generally denoted Dn−H···Ac, where the solid line denotes a polar covalent bond, and the dotted or dashed line indicates the hydrogen bond. The most frequent donor and acceptor atoms are the period 2 elements nitrogen (N), oxygen (O), and fluorine (F).
Hydrogen bonds can be intermolecular (occurring between separate molecules) or intramolecular (occurring among parts of the same molecule). The energy of a hydrogen bond depends on the geometry, the environment, and the nature of the specific donor and acceptor atoms and can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, and weaker than fully covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins. Hydrogen bonds are responsible for holding materials such as paper and felted wool together, and for causing separate sheets of paper to stick together after becoming wet and subsequently drying.
The hydrogen bond is also responsible for many of the physical and chemical properties of compounds of N, O, and F that seem unusual compared with other similar structures. In particular, intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group-16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids.
Bonding
Definitions and general characteristics
In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor. This nomenclature is recommended by the IUPAC. The hydrogen of the donor is protic and therefore can act as a Lewis acid, and the acceptor is the Lewis base. Hydrogen bonds are represented as the Dn−H···Ac system, where the dots represent the hydrogen bond. Liquids that display hydrogen bonding (such as water) are called associated liquids.
Hydrogen bonds arise from a combination of electrostatics (multipole-multipole and multipole-induced multipole interactions), covalency (charge transfer by orbital overlap), and dispersion (London forces).
In weaker hydrogen bonds, hydrogen atoms tend to bond to elements such as sulfur (S) or chlorine (Cl); even carbon (C) can serve as a donor, particularly when the carbon or one of its neighbors is electronegative (e.g., in chloroform, aldehydes and terminal acetylenes). Gradually, it was recognized that there are many examples of weaker hydrogen bonding involving donors other than N, O, or F and/or acceptors Ac with electronegativity approaching that of hydrogen (rather than being much more electronegative). Although weak (≈1 kcal/mol), "non-traditional" hydrogen bonding interactions are ubiquitous and influence the structures of many kinds of materials.
The definition of hydrogen bonding has gradually broadened over time to include these weaker attractive interactions. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal Pure and Applied Chemistry. This definition specifies:
Bond strength
Hydrogen bonds can vary in strength from weak (1–2 kJ/mol) to strong (161.5 kJ/mol in the bifluoride ion, [HF2]−). Typical enthalpies in vapor include:
F−H···:F (161.5 kJ/mol or 38.6 kcal/mol), illustrated uniquely by [HF2]−
O−H···:N (29 kJ/mol or 6.9 kcal/mol), illustrated by water-ammonia
O−H···:O (21 kJ/mol or 5.0 kcal/mol), illustrated by water-water, alcohol-alcohol
N−H···:N (13 kJ/mol or 3.1 kcal/mol), illustrated by ammonia-ammonia
N−H···:O (8 kJ/mol or 1.9 kcal/mol), illustrated by water-amide
HO−H···:OH3+ (18 kJ/mol or 4.3 kcal/mol)
The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution. The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds also in complicated molecules is crystallography, sometimes also NMR-spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii can be taken as indication of the hydrogen bond strength. One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moderate, and weak, respectively.
Hydrogen bonds involving C-H bonds are both very rare and weak.
Resonance assisted hydrogen bond
The resonance-assisted hydrogen bond (commonly abbreviated as RAHB) is a strong type of hydrogen bond. It is characterized by π-delocalization that involves the hydrogen and cannot be properly described by the electrostatic model alone. This description of the hydrogen bond has been proposed to describe the unusually short donor–acceptor distances generally observed in such π-conjugated systems.
Structural details
The Dn−H distance is typically ≈110 pm, whereas the H···Ac distance is ≈160 to 200 pm. The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally:
Spectroscopy
Strong hydrogen bonds are revealed by downfield shifts in the 1H NMR spectrum. For example, the acidic proton in the enol tautomer of acetylacetone appears at δ 15.5, which is about 10 ppm downfield of a conventional alcohol.
In the IR spectrum, hydrogen bonding shifts the Dn−H stretching frequency to lower energy (i.e. the vibration frequency decreases). This shift reflects a weakening of the Dn−H bond. Certain hydrogen bonds - improper hydrogen bonds - show a blue shift of the stretching frequency and a decrease in the bond length. H-bonds can also be measured by IR vibrational mode shifts of the acceptor. The amide I mode of backbone carbonyls in α-helices shifts to lower frequencies when they form H-bonds with side-chain hydroxyl groups. The dynamics of hydrogen bond structures in water can be probed by this OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.
Theoretical considerations
Hydrogen bonding is of persistent theoretical interest. According to a modern description, the O:H−O bond integrates both the intermolecular O:H lone-pair (":") nonbond and the intramolecular H−O polar-covalent bond associated with repulsive coupling.
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue hydrogen bond between guanine and cytosine is much stronger in comparison to the bond between the adenine-thymine pair.
Theoretically, the bond strength of the hydrogen bonds can be assessed using NCI index, non-covalent interactions index, which allows a visualization of these non-covalent interactions, as its name indicates, using the electron density of the system.
Interpretations of the anisotropies in the Compton profile of ordinary ice claim that the hydrogen bond is partly covalent. However, this interpretation was challenged and subsequently clarified.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds. However, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This interpretation remained controversial until NMR techniques demonstrated information transfer between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character.
History
The concept of hydrogen bonding once was challenging. Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cited the work of a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."
Hydrogen bonds in small molecules
Water
A ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. The simplest case is a pair of water molecules with one hydrogen bond between them, which is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.
Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four.
The number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. Another study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. Defining and counting the hydrogen bonds is not straightforward however.
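In simulation studies the count depends on the chosen definition. A common (but not unique) geometric criterion flags an O−H···O contact as a hydrogen bond when the O···O distance is below roughly 3.5 Å and the O−H···O angle exceeds roughly 150°; the sketch below is illustrative, with coordinates in ångströms and cutoff values that vary between studies:

    import numpy as np

    def is_hydrogen_bonded(donor_o, h, acceptor_o, r_cut=3.5, angle_cut=150.0):
        """Geometric hydrogen-bond criterion: O...O distance and O-H...O angle (measured at the hydrogen)."""
        oo_distance = np.linalg.norm(acceptor_o - donor_o)
        v1 = donor_o - h
        v2 = acceptor_o - h
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return oo_distance < r_cut and angle > angle_cut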
Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10−11 seconds, or 10 picoseconds.
Bifurcated and over-coordinated hydrogen bonds in water
A single hydrogen atom can participate in two hydrogen bonds. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.
Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens.
Other liquids
For example, hydrogen fluoride, which has three lone pairs on the F atom but only one H atom, can form only two bonds (ammonia has the opposite problem: three hydrogen atoms but only one lone pair).
H−F···H−F···H−F
Further manifestations of solvent hydrogen bonding
Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding.
Negative azeotropy of mixtures of HF and water.
The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds.
Dramatically higher boiling points of NH3, H2O, and HF compared to the heavier analogues PH3, H2S, and HCl, where hydrogen-bonding is absent.
Viscosity of anhydrous phosphoric acid and of glycerol.
Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law.
Pentamer formation of water and alcohols in apolar solvents.
Hydrogen bonds in polymers
Hydrogen bonding plays an important role in determining the three-dimensional structures and the properties adopted by many proteins. Compared to the C−C, C−O, and C−N bonds that comprise most polymers, hydrogen bonds are far weaker, perhaps 5%. Thus, hydrogen bonds can be broken by chemical or mechanical means while retaining the basic structure of the polymer backbone. This hierarchy of bond strengths (covalent bonds being stronger than hydrogen bonds, which are stronger than van der Waals forces) is relevant in the properties of many materials.
DNA
In these macromolecules, bonding between parts of the same macromolecule cause it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.
Proteins
In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, then a 310 helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups. (See also protein folding).
Bifurcated H-bond systems are common in alpha-helical transmembrane proteins between the backbone amide of residue i as the H-bond acceptor and two H-bond donors from residue i + 4: the backbone amide and a side-chain hydroxyl or thiol group. The energy preference of the bifurcated H-bond hydroxyl or thiol system is -3.4 kcal/mol or -2.6 kcal/mol, respectively. This type of bifurcated H-bond provides an intrahelical H-bonding partner for polar side-chains, such as serine, threonine, and cysteine within the hydrophobic membrane environments.
The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state, in a concentration dependent manner. While the prevalent explanation for osmolyte action relies on excluded volume effects that are entropic in nature, circular dichroism (CD) experiments have shown osmolyte to act through an enthalpic effect. The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Computer molecular dynamics simulations suggest that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.
Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.
A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of water through proteins or ligand binding. The exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.
Wool, being a protein fibre, is held together by hydrogen bonds, causing it to recoil when stretched. However, washing at high temperatures can break these hydrogen bonds, and a garment may permanently lose its shape.
Other polymers
The properties of many polymers are affected by hydrogen bonds within and/or between the chains. Prominent examples include cellulose and its derived fibers, such as cotton and flax. In nylon, hydrogen bonds between the carbonyl and the amide N−H effectively link adjacent chains, which gives the material mechanical strength. Hydrogen bonds are also important in aramid fibre, where they stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen-bond networks make both polymers sensitive to humidity levels in the atmosphere, because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others: nylons are more sensitive than aramids, and nylon 6 is more sensitive than nylon 11.
Symmetric hydrogen bond
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond; the effective bond order is 0.5, so its strength is comparable to that of a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids, such as hydrofluoric acid and formic acid, at high pressure. It is also seen in the bifluoride ion [HF2]−. Due to severe steric constraint, the protonated form of Proton Sponge (1,8-bis(dimethylamino)naphthalene) and its derivatives also have symmetric hydrogen bonds ([N···H···N]+), although in the case of protonated Proton Sponge the assembly is bent.
Dihydrogen bond
The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time and are well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons on a nonmetallic atom (most notably in the nitrogen and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen–hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to that of hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.
Application to drugs
Hydrogen bonding is relevant to drug design. According to Lipinski's rule of five, the majority of orally active drugs have no more than five hydrogen bond donors and no more than ten hydrogen bond acceptors. These interactions exist between nitrogen–hydrogen and oxygen–hydrogen centers. Many drugs do not, however, obey these "rules".
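For illustration, a minimal sketch of how the donor and acceptor counts above feed into a rule-of-five check; the candidate molecules and their counts are hypothetical, and in practice the counts would be derived from the molecular structure rather than supplied by hand:
    # Check the hydrogen-bond-related parts of Lipinski's rule of five.
    # Assumption: counts are supplied directly; thresholds follow the common
    # statement of the rule (<= 5 donors, <= 10 acceptors).
    def lipinski_hbond_ok(donors, acceptors):
        return donors <= 5 and acceptors <= 10

    # Hypothetical candidates: (name, H-bond donors, H-bond acceptors)
    candidates = [("compound A", 2, 4), ("compound B", 6, 12)]
    for name, d, a in candidates:
        status = "passes" if lipinski_hbond_ok(d, a) else "violates"
        print(f"{name}: {status} the H-bond criteria of the rule of five")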
References
Further reading
George A. Jeffrey. An Introduction to Hydrogen Bonding (Topics in Physical Chemistry). Oxford University Press, US (March 13, 1997).
External links
The Bubble Wall (Audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
isotopic effect on bond dynamics
Chemical bonding
Hydrogen physics
Supramolecular chemistry
Intermolecular forces | Hydrogen bond | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,430 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Condensed matter physics",
"nan",
"Nanotechnology",
"Chemical bonding",
"Supramolecular chemistry"
] |
13,615 | https://en.wikipedia.org/wiki/Hardware%20%28mechanical%20and%20construction%29 | Hardware (some types also known as household hardware) is equipment, generally used in machines, in construction or in any built good, that can be touched or held by hand such as keys, locks, nuts, screws, washers, hinges, latches, handles, wire, chains, belts, plumbing supplies, electrical supplies, tools, utensils, cutlery and machine parts. Household hardware is typically sold in hardware stores.
See also
Builders hardware
References
Equipment
Locksmithing
Wire
Chains
Plumbing
Electrical wiring
Tools
Painting materials | Hardware (mechanical and construction) | [
"Physics",
"Technology",
"Engineering"
] | 107 | [
"Machines",
"Electrical systems",
"Building engineering",
"Plumbing",
"Physical systems",
"Construction",
"Electrical engineering",
"Electrical wiring",
"Hardware (mechanical)"
] |
13,624 | https://en.wikipedia.org/wiki/High%20fidelity | High fidelity (often shortened to Hi-Fi or HiFi) is the high-quality reproduction of sound. It is popular with audiophiles and home audio enthusiasts. Ideally, high-fidelity equipment has inaudible noise and distortion, and a flat (neutral, uncolored) frequency response within the human hearing range.
High fidelity contrasts with the lower-quality "lo-fi" sound produced by inexpensive audio equipment, AM radio, or the inferior quality of sound reproduction that can be heard in recordings made until the late 1940s.
History
Bell Laboratories began experimenting with various recording techniques in the early 1930s. Performances by Leopold Stokowski and the Philadelphia Orchestra were recorded in 1931 and 1932 using telephone lines between the Academy of Music in Philadelphia and the Bell labs in New Jersey. Some multitrack recordings were made on optical sound film, which led to new advances used primarily by MGM (as early as 1937) and Twentieth Century Fox Film Corporation (as early as 1941). RCA Victor began recording performances by several orchestras using optical sound around 1941, resulting in higher-fidelity masters for 78-rpm discs. During the 1930s, Avery Fisher, an amateur violinist, began experimenting with audio design and acoustics. He wanted to make a radio that would sound like he was listening to a live orchestra and achieve high fidelity to the original sound. After World War II, Harry F. Olson conducted an experiment whereby test subjects listened to a live orchestra through a hidden variable acoustic filter. The results proved that listeners preferred high-fidelity reproduction, once the noise and distortion introduced by early sound equipment was removed.
Beginning in 1948, several innovations created the conditions that made major improvements in home audio quality possible:
Reel-to-reel audio tape recording, based on technology taken from Germany after WWII, helped musical artists such as Bing Crosby make and distribute recordings with better fidelity.
The advent of the 33⅓ rpm long play (LP) microgroove vinyl record, with lower surface noise and quantitatively specified equalization curves as well as noise-reduction and dynamic range systems. Classical music fans, who were opinion leaders in the audio market, quickly adopted LPs because, unlike with older records, most classical works would fit on a single LP.
Higher quality turntables, with more responsive needles
FM radio, with wider audio bandwidth and less susceptibility to signal interference and fading than AM radio.
Better amplifier designs, with more attention to frequency response and much higher power output capability, reproducing audio without perceptible distortion.
New loudspeaker designs, including acoustic suspension, developed by Edgar Villchur and Henry Kloss with improved bass frequency response.
In the 1950s, audio manufacturers employed the phrase high fidelity as a marketing term to describe records and equipment intended to provide faithful sound reproduction. Many consumers found the difference in quality compared to the then-standard AM radios and 78-rpm records readily apparent and bought high-fidelity phonographs and 33⅓ LPs such as RCA's New Orthophonics and London's FFRR (Full Frequency Range Recording, a UK Decca system). Audiophiles focused on technical characteristics and bought individual components, such as separate turntables, radio tuners, preamplifiers, power amplifiers and loudspeakers. Some enthusiasts even assembled their own loudspeaker systems. With the advent of integrated multi-speaker console systems in the 1950s, hi-fi became a generic term for home sound equipment, to some extent displacing phonograph and record player.
In the late 1950s and early 1960s, the development of stereophonic equipment and recordings led to the next wave of home-audio improvement, and in common parlance stereo displaced hi-fi. Records were now played on a stereo (stereophonic phonograph). In the world of the audiophile, however, the concept of high fidelity continued to refer to the goal of highly accurate sound reproduction and to the technological resources available for approaching that goal. This period is regarded as the "Golden Age of Hi-Fi", when vacuum tube equipment manufacturers of the time produced many models considered superior by modern audiophiles, and just before solid state (transistorized) equipment was introduced to the market, subsequently replacing tube equipment as the mainstream technology.
In the 1960s, the FTC, with the help of the audio manufacturers, came up with a definition to identify high-fidelity equipment so that manufacturers could clearly state whether they met the requirements, reducing misleading advertisements.
A popular type of system for reproducing music beginning in the 1970s was the integrated music centre—which combined a phonograph turntable, AM-FM radio tuner, tape player, preamplifier, and power amplifier in one package, often sold with its own separate, detachable or integrated speakers. These systems advertised their simplicity. The consumer did not have to select and assemble individual components or be familiar with impedance and power ratings. Purists generally avoid referring to these systems as high fidelity, though some are capable of very good quality sound reproduction.
Audiophiles in the 1970s and 1980s preferred to buy each component separately. That way, they could choose models of each component with the specifications that they desired. In the 1980s, several audiophile magazines became available, offering reviews of components and articles on how to choose and test speakers, amplifiers, and other components.
Listening tests
Listening tests are used by hi-fi manufacturers, audiophile magazines, and audio engineering researchers and scientists. If a listening test is done in such a way that the listener who is assessing the sound quality of a component or recording can see the components that are being used for the test (e.g., the same musical piece listened to through a tube power amplifier and a solid-state amplifier), then it is possible that the listener's pre-existing biases towards or against certain components or brands could affect their judgment. To respond to this issue, researchers began to use blind tests, in which listeners cannot see the components being tested. A commonly used variant of this test is the ABX test. A subject is presented with two known samples (sample A, the reference, and sample B, an alternative), and one unknown sample X, for three samples total. X is randomly selected from A and B, and the subject identifies X as being either A or B. Although there is no way to prove that a certain methodology is transparent, a properly conducted double-blind test can prove that a method is not transparent.
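To make the ABX procedure concrete, the following sketch (not from the source) simulates a listener who cannot hear a difference and therefore guesses, then applies a one-sided binomial test to the number of correct identifications:
    import random
    from math import comb

    # Simulate ABX trials for a listener who guesses randomly, then compute
    # the one-sided binomial p-value for the observed number of correct calls.
    def run_abx_trials(n_trials, p_correct=0.5, seed=0):
        rng = random.Random(seed)
        return sum(1 for _ in range(n_trials) if rng.random() < p_correct)

    def binomial_p_value(correct, n_trials):
        # P(X >= correct) under the null hypothesis of guessing (p = 0.5)
        return sum(comb(n_trials, k) for k in range(correct, n_trials + 1)) / 2 ** n_trials

    trials = 16
    correct = run_abx_trials(trials)
    print(f"{correct}/{trials} correct, p = {binomial_p_value(correct, trials):.3f}")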
Blind tests are sometimes used as part of attempts to ascertain whether certain audio components (such as expensive, exotic cables) have any subjectively perceivable effect on sound quality. Data gleaned from these blind tests is not accepted by some audiophile magazines such as Stereophile and The Absolute Sound in their evaluations of audio equipment. John Atkinson, current editor of Stereophile, stated that he once purchased a solid-state amplifier, the Quad 405, in 1978 after seeing the results from blind tests, but came to realize months later that "the magic was gone" until he replaced it with a tube amp. Robert Harley of The Absolute Sound wrote, in 2008, that: "...blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon."
Doug Schneider, editor of the online Soundstage network, argued the opposite in 2009. He stated: "Blind tests are at the core of the decades' worth of research into loudspeaker design done at Canada's National Research Council (NRC). The NRC researchers knew that for their result to be credible within the scientific community and to have the most meaningful results, they had to eliminate bias, and blind testing was the only way to do so." Many Canadian companies such as Axiom, Energy, Mirage, Paradigm, PSB, and Revel use blind testing extensively in designing their loudspeakers. Audio professional Dr. Sean Olive of Harman International shares this view.
Semblance of realism
Stereophonic sound provided a partial solution to the problem of reproducing the sound of live orchestral performers by creating separation among instruments, the illusion of space, and a phantom central channel. An attempt to enhance reverberation was tried in the 1970s through quadraphonic sound. Consumers did not want to pay the additional costs and space required for the marginal improvements in realism. With the rise in popularity of home theater, however, multi-channel playback systems became popular, and many consumers were willing to tolerate the six to eight channels required in a home theater.
In addition to spatial realism, the playback of music must be subjectively free from noise, such as hiss or hum, to achieve realism. The compact disc (CD) provides about 90 decibels of dynamic range, which exceeds the 80 dB dynamic range of music as normally perceived in a concert hall. Audio equipment must be able to reproduce frequencies high enough and low enough to be realistic. The human hearing range, for healthy young persons, is 20 Hz to 20,000 Hz; most adults cannot hear above 15,000 Hz. CDs are capable of reproducing frequencies as low as 0 Hz and as high as 22,050 Hz, making them adequate for reproducing the frequency range that most humans can hear. The equipment must also provide no noticeable distortion of the signal, and no emphasis or de-emphasis of any frequency in this range.
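The figures above follow from the CD format's standard parameters of 16-bit samples at a 44,100 Hz sampling rate; the short calculation below is an illustrative sketch using the Nyquist limit and the usual ideal-quantizer dynamic-range formula (which gives roughly 96 dB for 16 bits, of which practical systems realize somewhat less):
    import math

    sample_rate_hz = 44_100   # CD sampling rate
    bit_depth = 16            # CD sample resolution

    # Highest representable frequency (Nyquist limit): half the sampling rate.
    nyquist_hz = sample_rate_hz / 2

    # Theoretical dynamic range of an ideal N-bit quantizer: 20*log10(2^N),
    # roughly 6.02 dB per bit; the "about 90 dB" figure allows for practical losses.
    dynamic_range_db = 20 * math.log10(2 ** bit_depth)

    print(f"Nyquist frequency: {nyquist_hz:.0f} Hz")         # 22050 Hz
    print(f"Ideal dynamic range: {dynamic_range_db:.1f} dB")  # about 96.3 dB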
Modularity
Integrated, mini, or lifestyle systems (also known by the older terms music centre or midi system) contain one or more sources such as a CD player, a tuner, or a cassette tape deck together with a preamplifier and a power amplifier in one box. A limitation of an "integrated" system is that the failure of any one component may require replacing the entire unit, as components cannot readily be swapped in or out merely by plugging and unplugging cables, and may not even have been made available by the manufacturer to allow piecemeal repairs.
Although some high-end audio manufacturers do produce integrated systems, such products are generally disparaged by audiophiles, who prefer to build a system from separates (or components), often with each item from a different manufacturer specialising in a particular component. This provides the most flexibility for piece-by-piece upgrades and repairs.
A preamplifier and a power amplifier in one box is called an integrated amplifier; with a tuner added, it is a receiver. A monophonic power amplifier is called a monoblock and is often used for powering a subwoofer. Other modules in the system may include components like cartridges, tonearms, hi-fi turntables, digital media players, DVD players that play a wide variety of discs including CDs, CD recorders, MiniDisc recorders, hi-fi videocassette recorders (VCRs) and reel-to-reel tape recorders. Signal modification equipment can include equalizers and noise-reduction systems.
This modularity allows the enthusiast to spend as little or as much as they want on a component to suit their specific needs, achieve a desired sound, and add components as desired. Also, failure of any component of an integrated system can render it unusable, while the unaffected components of a modular system may continue to function. A modular system introduces the complexity of cabling multiple components and often having different remote controls for each unit.
Modern equipment
Some modern hi-fi equipment can be digitally connected using fiber optic TOSLINK cables, USB ports (including one to play digital audio files), or Wi-Fi support.
Another modern component is the music server consisting of one or more computer hard drives that hold music in the form of computer files. When the music is stored in an audio file format that is lossless such as FLAC, Monkey's Audio or WMA Lossless, the computer playback of recorded audio can serve as an audiophile-quality source for a hi-fi system. There is now a push from certain streaming services to offer hi-fi services.
Streaming services typically have a modified dynamic range and possibly bit rates lower than audiophile standards. Tidal and others have launched a hi-fi tier that includes access to FLAC and Master Quality Authenticated studio masters for many tracks through the desktop version of the player. This integration is also available for high-end audio systems.
See also
Audio system measurements
Comparison of analog and digital recording
DIY audio
Edwin Howard Armstrong
Entertainment center
Lo-fi music
Wife acceptance factor
Wi-Fi, a wireless term derived from hi-fi
References
Further reading
External links
A Dictionary of Home Entertainment Terms
Sound
Consumer electronics
Sound recording
Audio engineering | High fidelity | [
"Engineering"
] | 2,588 | [
"Electrical engineering",
"Audio engineering"
] |
13,636 | https://en.wikipedia.org/wiki/History%20of%20computing%20hardware | The history of computing hardware spans the developments from early devices used for simple calculations to today's complex computers, encompassing advancements in both analog and digital technology.
The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. In later stages, computing devices began representing numbers in continuous forms, such as by distance along a scale, rotation of a shaft, or a specific voltage level. Numbers could also be represented in the form of digits, automatically manipulated by a mechanism. Although this approach generally required more complex mechanisms, it greatly increased the precision of results. The development of transistor technology, followed by the invention of integrated circuit chips, led to revolutionary breakthroughs. Transistor-based computers and, later, integrated circuit-based computers enabled digital systems to gradually replace analog systems, increasing both efficiency and processing power. Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled semiconductor memory and the microprocessor, leading to another key breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of computers gradually became so low that personal computers by the 1990s, and then mobile computers (smartphones and tablets) in the 2000s, became ubiquitous.
Early devices
Ancient and medieval
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. The Lebombo bone from the mountains between Eswatini and South Africa may be the oldest known mathematical artifact. It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was used early on for arithmetic tasks. What we now call the Roman abacus was used in Babylonia as early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These included the astrolabe and Antikythera mechanism from the Hellenistic world (c. 150–100 BC). In Roman Egypt, Hero of Alexandria (c. 10–70 AD) made mechanical devices including automata and a programmable cart. The steam-powered automatic flute described by the Book of Ingenious Devices (850) by the Persian-Baghdadi Banū Mūsā brothers may have been the first programmable device.
Other early mechanical devices used to perform one or another type of calculations include the planisphere and other mechanical computing devices invented by Al-Biruni (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Al-Zarqali (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (1094) during the Song dynasty. The castle clock, a hydropowered mechanical astronomical clock invented by Ismail al-Jazari in 1206, was the first programmable analog computer. Ramon Llull invented the Lullian Circle: a notional machine for calculating answers to philosophical questions (in this case, to do with Christianity) via logical combinatorics. This idea was taken up by Leibniz centuries later, and is thus one of the founding elements in computing and information science.
Renaissance calculating tools
Scottish mathematician and physicist John Napier discovered that the multiplication and division of numbers could be performed by the addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many tedious multiplications. It was at this point that he designed his 'Napier's bones', an abacus-like device that greatly simplified calculations that involved multiplication and division.
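Napier's insight rests on the identity log(ab) = log(a) + log(b): a multiplication becomes two table lookups, an addition, and a reverse lookup. The sketch below (illustrative only) mimics that procedure with logarithms rounded to four decimal places, as in a printed table:
    import math

    # Multiply two numbers the way a table user would: look up the logs
    # (rounded as in a printed table), add them, and take the antilog.
    def multiply_via_logs(a, b, places=4):
        log_sum = round(math.log10(a), places) + round(math.log10(b), places)
        return 10 ** log_sum

    a, b = 3_456.0, 789.0
    approx = multiply_via_logs(a, b)
    print(f"table result ~ {approx:,.0f}, exact = {a * b:,.0f}")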
Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s, shortly after Napier's work, to allow multiplication and division operations to be carried out significantly faster than was previously possible. Edmund Gunter built a calculating device with a single logarithmic scale at the University of Oxford. His device greatly simplified arithmetic calculations, including multiplication and division. William Oughtred greatly improved this in 1630 with his circular slide rule. He followed this up with the modern slide rule in 1632, essentially a combination of two Gunter rules, held together with the hands. Slide rules were used by generations of engineers and other mathematically involved professional workers, until the invention of the pocket calculator.
Mechanical calculators
In 1609 Guidobaldo del Monte made a mechanical multiplier to calculate fractions of a degree. Based on a system of four gears, the rotation of an index on one quadrant corresponds to 60 rotations of another index on an opposite quadrant. With this machine, errors in the calculation of first, second, third and fourth fractions of a degree could be avoided. Guidobaldo was the first to document the use of gears for mechanical calculation.
Wilhelm Schickard, a German polymath, designed a calculating machine in 1623 which combined a mechanized form of Napier's rods with the world's first mechanical adding machine built into the base. Because it made use of a single-tooth gear there were circumstances in which its carry mechanism would jam. A fire destroyed at least one of the machines in 1624 and it is believed Schickard was too disheartened to build another.
In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and after three years of effort and 50 prototypes he invented a mechanical calculator. He built twenty of these machines (called Pascal's calculator or Pascaline) in the following ten years. Nine Pascalines have survived, most of which are on display in European museums. A continuing debate exists over whether Schickard or Pascal should be regarded as the "inventor of the mechanical calculator" and the range of issues to be considered is discussed elsewhere.
Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous stepped drum mechanism around 1672. He attempted to create a machine that could be used not only for addition and subtraction but would use a moveable carriage to enable multiplication and division. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." However, Leibniz did not incorporate a fully successful carry mechanism. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system.
Around 1820, Charles Xavier Thomas de Colmar created what would over the rest of the century become the first successful, mass-produced mechanical calculator, the Thomas Arithmometer. It could be used to add and subtract, and with a moveable carriage the operator could also multiply, and divide by a process of long multiplication and long division. It utilised a stepped drum similar in conception to that invented by Leibniz. Mechanical calculators remained in use until the 1970s.
Punched-card data processing
In 1804, French weaver Joseph Marie Jacquard developed a loom in which the pattern being woven was controlled by a paper tape constructed from punched cards. The paper tape could be changed without changing the mechanical design of the loom. This was a landmark achievement in programmability. His machine was an improvement over similar weaving looms. Punched cards were preceded by punch bands, as in the machine proposed by Basile Bouchon. These bands would inspire information recording for automatic pianos and more recently numerical control machine tools.
In the late 1880s, the American Herman Hollerith invented data storage on punched cards that could then be read by a machine. To process these punched cards, he invented the tabulator and the keypunch machine. His machines used electromechanical relays and counters. Hollerith's method was used in the 1890 United States census. That census was processed two years faster than the prior census had been. Hollerith's company eventually became the core of IBM.
By 1920, electromechanical tabulating machines could add, subtract, and print accumulated totals. Machine functions were directed by inserting dozens of wire jumpers into removable control panels. When the United States instituted Social Security in 1935, IBM punched-card systems were used to process records of 26 million workers. Punched cards became ubiquitous in industry and government for accounting and administration.
Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication of Punched Card Methods in Scientific Computation in 1940, described punched-card techniques sufficiently advanced to solve some differential equations or perform multiplication and division using floating-point representations, all on punched cards and unit record machines. Such machines were used during World War II for cryptographic statistical processing, as well as a vast number of administrative uses. The Astronomical Computing Bureau, Columbia University, performed astronomical calculations representing the state of the art in computing.
Calculators
By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned primarily to women who used these calculators to perform mathematical calculations. By the 1920s, British scientist Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier–Stokes equations.
Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It was a small, hand-cranked mechanical calculator and as such, a descendant of Gottfried Leibniz's Stepped Reckoner and Thomas' Arithmometer.
The world's first all-electronic desktop calculator was the British Bell Punch ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a CRT, and introduced reverse Polish notation (RPN).
First proposed general-purpose computing device
The Industrial Revolution (late 18th to early 19th century) had a significant impact on the evolution of computing hardware, as the era's rapid advancements in machinery and manufacturing laid the groundwork for mechanized and automated computing. Industrial needs for precise, large-scale calculations—especially in fields such as navigation, engineering, and finance—prompted innovations in both design and function, setting the stage for devices like Charles Babbage's difference engine (1822). This mechanical device was intended to automate the calculation of polynomial functions and represented one of the earliest applications of computational logic.
Babbage, often regarded as the "father of the computer," envisioned a fully mechanical system of gears and wheels, powered by steam, capable of handling complex calculations that previously required intensive manual labor. His difference engine, designed to aid navigational calculations, ultimately led him to conceive the analytical engine in 1833. This concept, far more advanced than his difference engine, included an arithmetic logic unit, control flow through conditional branching and loops, and integrated memory. Babbage's plans made his analytical engine the first general-purpose design that could be described as Turing-complete in modern terms.
The analytical engine was programmed using punched cards, a method adapted from the Jacquard loom invented by Joseph Marie Jacquard in 1804, which controlled textile patterns with a sequence of punched cards. These cards became foundational in later computing systems as well. Babbage's machine would have featured multiple output devices, including a printer, a curve plotter, and even a bell, demonstrating his ambition for versatile computational applications beyond simple arithmetic.
Ada Lovelace expanded on Babbage's vision by conceptualizing algorithms that could be executed by his machine. Her notes on the analytical engine, written in the 1840s, are now recognized as the earliest examples of computer programming. Lovelace saw potential in computers to go beyond numerical calculations, predicting that they might one day generate complex musical compositions or perform tasks like language processing.
Though Babbage's designs were never fully realized due to technical and financial challenges, they influenced a range of subsequent developments in computing hardware. Notably, in the 1890s, Herman Hollerith adapted the idea of punched cards for automated data processing, which was utilized in the U.S. Census and sped up data tabulation significantly, bridging industrial machinery with data processing.
The Industrial Revolution's advancements in mechanical systems demonstrated the potential for machines to conduct complex calculations, influencing engineers like Leonardo Torres Quevedo and Vannevar Bush in the early 20th century. Torres Quevedo designed an electromechanical machine with floating-point arithmetic, while Bush's later work explored electronic digital computing. By the mid-20th century, these innovations paved the way for the first fully electronic computers.
Analog computers
In the first half of the 20th century, analog computers were considered by many to be the future of computing. These devices used the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved, in contrast to digital computers that represented varying quantities symbolically, as their numerical values change. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines.
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location and was of great utility to navigation in shallow waters. His device was the foundation for further developments in analog computing.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin. He explored the possible construction of such calculators, but was stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output.
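What the differential analyser did mechanically can be sketched digitally: each integrator accumulates the integral of its input, and chaining two of them solves a second-order equation. The illustration below (not a description of any actual machine) integrates y'' = −y by feeding the output of one numerical integrator into the next:
    # Chain two "integrators" to solve y'' = -y, y(0) = 1, y'(0) = 0.
    # Each step mimics an integrator wheel advancing by (input * dt).
    dt = 0.001
    y, dy = 1.0, 0.0
    for step in range(int(3.14159 / dt)):   # integrate over roughly pi seconds
        ddy = -y          # the "function table" input: y'' = -y
        dy += ddy * dt    # first integrator: accumulates y' from y''
        y += dy * dt      # second integrator: accumulates y from y'
    print(f"y(pi) ~ {y:.3f} (exact value is cos(pi) = -1)")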
A notable series of analog calculating machines were developed by Leonardo Torres Quevedo since 1895, including one that was able to compute the roots of arbitrary polynomials of order eight, including the complex ones, with a precision down to thousandths.
An important advance in analog computing was the development of the first fire-control systems for long range ship gunlaying. When gunnery ranges increased dramatically in the late 19th century it was no longer a simple matter of calculating the proper aim point, given the flight times of the shells. Various spotters on board the ship would relay distance measures and observations to a central plotting station. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for Coriolis effect, weather effects on the air, and other adjustments; the computer would then output a firing solution, which would be fed to the turrets for laying. In 1912, British engineer Arthur Pollen developed the first electrically powered mechanical analogue computer (called at the time the Argo Clock). It was used by the Imperial Russian Navy in World War I. The alternative Dreyer Table fire control system was fitted to British capital ships by mid-1916.
Mechanical devices were also used to aid the accuracy of aerial bombing. Drift Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal Naval Air Service; it measured the wind speed from the air, and used that measurement to calculate the wind's effects on the trajectory of the bombs. The system was later improved with the Course Setting Bomb Sight, and reached a climax with World War II bomb sights, Mark XIV bomb sight (RAF Bomber Command) and the Norden (United States Army Air Forces).
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built.
A fully electronic analog computer was built by Helmut Hölzer in 1942 at Peenemünde Army Research Center.
By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications.
Advent of the digital computer
The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a "universal machine" (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
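As a concrete, if anachronistic, illustration of the machine model (a sketch only; the transition table is an invented example, not one of Turing's), the following lines simulate a one-tape Turing machine that inverts every bit on its tape and halts at the first blank:
    # Minimal one-tape Turing machine simulator.
    # The transition table maps (state, symbol) -> (new_state, write, move).
    def run_tm(tape, transitions, state="scan", halt="halt", blank="_"):
        tape = list(tape)
        head = 0
        while state != halt:
            symbol = tape[head] if head < len(tape) else blank
            state, write, move = transitions[(state, symbol)]
            if head == len(tape):
                tape.append(blank)
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    # Example machine: scan right, inverting each bit, halt at the blank.
    invert_bits = {
        ("scan", "0"): ("scan", "1", "R"),
        ("scan", "1"): ("scan", "0", "R"),
        ("scan", "_"): ("halt", "_", "R"),
    }
    print(run_tm("10110", invert_bits))  # prints 01001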
Electromechanical computers
The era of modern computing began with a flurry of development before and during World War II. Most digital computers built in this period were electromechanical – electric switches drove mechanical relays to perform the calculation. These mechanical components had a low operating speed and were eventually superseded by much faster all-electric components, originally using vacuum tubes and later transistors.
The Z2 was one of the earliest examples of an electric operated digital computer built with electromechanical relays and was created by civil engineer Konrad Zuse in 1940 in Germany. It was an improvement on his earlier, mechanical Z1; although it used the same mechanical memory, it replaced the arithmetic and control logic with electrical relay circuits.
In the same year, electro-mechanical devices called bombes were built by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The bombe's initial design was created in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. It was a substantial development from a device that had been designed in 1938 by Polish Cipher Bureau cryptologist Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba kryptologiczna").
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was proven to have been a Turing-complete machine in 1998 by Raúl Rojas. In two 1936 patent applications, Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in 1948 in America in the electromechanical IBM SSEC and in Britain in the fully electronic Manchester Baby.
Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.
In 1944, the Harvard Mark I was constructed at IBM's Endicott laboratories. It was a general-purpose electro-mechanical computer similar to the Z3, but it was not quite Turing-complete.
Digital computation
The term digital was first suggested by George Robert Stibitz and refers to where a signal, such as a voltage, is not used to directly represent a value (as it would be in an analog computer), but to encode it. In November 1937, Stibitz, then working at Bell Labs (1930–1941), completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder. Typically signals have two states – low (usually representing 0) and high (usually representing 1), but sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal or BCD, bi-quinary, excess-3, and two-out-of-five code.
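The decimal encodings named above differ only in the code table mapping a digit to bits. The sketch below (illustrative, not from the source) prints the plain binary-coded decimal and excess-3 codes for each digit; bi-quinary and two-out-of-five work the same way with different tables:
    # Encode a single decimal digit in BCD and in excess-3.
    # BCD stores the digit's own 4-bit binary value; excess-3 stores digit + 3,
    # which makes every code word self-complementing for 9's-complement arithmetic.
    def bcd(digit):
        return format(digit, "04b")

    def excess_3(digit):
        return format(digit + 3, "04b")

    for d in range(10):
        print(d, bcd(d), excess_3(d))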
The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work The Laws of Thought, published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal approach to logic and proposed the first logical language for logical equations.
In the 1930s, working independently, American electronic engineer Claude Shannon and Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers. They showed that electronic relays and switches can realize the expressions of Boolean algebra. This thesis essentially founded practical digital circuit design. In addition, Shannon's paper gives a correct circuit diagram for a 4-bit digital binary adder.
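To show the correspondence Shannon formalized, with Boolean expressions standing in for switching circuits, the sketch below (not Shannon's own circuit) builds a one-bit full adder from AND, OR and XOR operations and chains four of them into a 4-bit ripple-carry adder:
    # A full adder expressed purely with Boolean operations on 0/1 values.
    def full_adder(a, b, carry_in):
        s = a ^ b ^ carry_in                        # sum bit
        carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
        return s, carry_out

    # Chain four full adders into a 4-bit ripple-carry adder.
    # Inputs are lists of bits, least-significant bit first.
    def ripple_add_4bit(a_bits, b_bits):
        carry, result = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result, carry

    # Example: 5 + 6 = 11
    bits, carry = ripple_add_4bit([1, 0, 1, 0], [0, 1, 1, 0])
    print(bits, carry)  # [1, 1, 0, 1] with carry 0, i.e. 11 in decimal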
Electronic data processing
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand, using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.
Engineer Tommy Flowers joined the telecommunications branch of the General Post Office in 1926. While working at the research station in Dollis Hill in the 1930s, he began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.
In the US, in 1940 Arthur Dickinson (IBM) invented the first digital electronic computer. This calculating device was fully electronic – control, calculations and output (the first electronic display). John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC) in 1942, the first binary electronic digital calculating device. This design was semi-electronic (electro-mechanical control and electronic calculations), and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. However, its paper card writer/reader was unreliable and the regenerative drum contact system was mechanical. The machine's special-purpose nature and lack of changeable, stored program distinguish it from modern computers.
Computers whose logic was primarily built using vacuum tubes are now known as first generation computers.
The electronic programmable computer
During World War II, British codebreakers at Bletchley Park, north of London, achieved a number of successes at breaking encrypted enemy military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.
The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, code-named "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman and his colleagues developed the Heath Robinson, a fixed-function machine to aid in code breaking. Tommy Flowers, a senior engineer at the Post Office Research Station, was recommended to Max Newman by Alan Turing and spent eleven months from early February 1943 designing and building the more flexible Colossus computer (which superseded the Heath Robinson). After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. By the time Germany surrendered in May 1945, there were ten Colossi working at Bletchley Park.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal store for the data. The reading mechanism ran at 5,000 characters per second, with the paper tape moving at about 12 m/s (40 ft/s). Colossus Mark 1 contained 1500 thermionic valves (tubes), but Mark 2, with 2400 valves and five processors in parallel, was both 5 times faster and simpler to operate than Mark 1, greatly speeding the decoding process. Mark 2 was designed while Mark 1 was being constructed. Allen Coombs took over leadership of the Colossus Mark 2 project when Tommy Flowers moved on to other projects. The first Mark 2 Colossus became operational on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day.
Most of the use of Colossus was in determining the start positions of the Tunny rotors for a message, which was called "wheel setting". Colossus included the first-ever use of shift registers and systolic arrays, enabling five simultaneous tests, each involving up to 100 Boolean calculations. This enabled five different possible start positions to be examined for one transit of the paper tape. As well as wheel setting some later Colossi included mechanisms intended to help determine pin patterns known as "wheel breaking". Both models were programmable using switches and plug panels in a way their predecessors had not been.
Without the use of these machines, the Allies would have been deprived of the very valuable intelligence that was obtained from reading the vast quantity of enciphered high-level telegraphic messages between the German High Command (OKW) and their army commands throughout occupied Europe. Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz SZ cyphers (from German rotor stream cipher machines) during the oncoming Cold War. Two of the machines were transferred to the newly formed GCHQ and the others were destroyed. As a result, the machines were not included in many histories of computing. A reconstructed working copy of one of the Colossus machines is now on display at Bletchley Park.
The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC used similar technology to the Colossi, it was much faster and more flexible and was Turing-complete. Like the Colossi, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was ready to be run, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were women who had been trained as mathematicians.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (equivalent to about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. One of its major engineering feats was to minimize the effects of tube burnout, which was a common problem in machine reliability at that time. The machine was in almost constant use for the next ten years.
Stored-program computer
The theoretical basis for the stored-program computer was proposed by Alan Turing in his 1936 paper On Computable Numbers. Whilst Turing was at Princeton working on his PhD, John von Neumann got to know him and became intrigued by his concept of a universal computing machine.
Early computing machines executed the set sequence of steps, known as a 'program', that could be altered by changing electrical connections using switches or a patch panel (or plugboard). However, this process of 'reprogramming' was often difficult and time-consuming, requiring engineers to create flowcharts and physically re-wire the machines. Stored-program computers, by contrast, were designed to store a set of instructions (a program), in memory – typically the same memory as stored data.
ENIAC inventors John Mauchly and J. Presper Eckert proposed, in August 1944, the construction of a machine called the Electronic Discrete Variable Automatic Computer (EDVAC) and design work for it commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. However, Eckert and Mauchly left the project and its construction floundered.
In 1945, von Neumann visited the Moore School and wrote notes on what he saw, which he sent to the project. The U.S. Army liaison there had them typed and circulated as the First Draft of a Report on the EDVAC. The draft did not mention Eckert and Mauchly and, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas, the computer architecture it outlined became known as the 'von Neumann architecture'.
In 1945, Turing joined the UK National Physical Laboratory and began work on developing an electronic stored-program digital computer. His late-1945 report 'Proposed Electronic Calculator' was the first reasonably detailed specification for such a device. Turing presented a more detailed paper to the National Physical Laboratory (NPL) Executive Committee in March 1946, giving the first substantially complete design of a stored-program computer, a device that was called the Automatic Computing Engine (ACE).
Turing considered that the speed and the size of computer memory were crucial elements, so he proposed a high-speed memory of what would today be called 25 KB, accessed at a speed of 1 MHz. The ACE implemented subroutine calls, whereas the EDVAC did not, and the ACE also used Abbreviated Computer Instructions, an early form of programming language.
Manchester Baby
The Manchester Baby (Small Scale Experimental Machine, SSEM) was the world's first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
The machine was not intended to be a practical computer but was instead designed as a testbed for the Williams tube, the first random-access digital storage device. Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946 and 1947, it was a cathode-ray tube that used an effect called secondary emission to temporarily store electronic binary data, and was used successfully in several early computers.
Described as small and primitive in a 1998 retrospective, the Baby was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as it had demonstrated the feasibility of its design, a project was initiated at the university to develop the design into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.
The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine found the highest proper divisor of 2¹⁸ (262,144), a calculation that was known would take a long time to run—and so prove the computer's reliability—by testing every integer from 2¹⁸ − 1 downwards, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52 minutes before reaching the correct answer of 131,072, after the Baby had performed 3.5 million operations (for an effective CPU speed of 1.1 kIPS). The successive approximations to the answer were displayed as a pattern of dots on the output CRT which mirrored the pattern held on the Williams tube used for storage.
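Restated in modern terms (an illustrative sketch, not the original 17-instruction program), the method tests candidate divisors of 2¹⁸ downwards from 2¹⁸ − 1, performing each trial division by repeated subtraction, just as the Baby's instruction set required:
    # Find the highest proper divisor of n by testing candidates downwards,
    # using repeated subtraction in place of a division instruction, as the
    # Manchester Baby had to.
    def divides_by_repeated_subtraction(n, d):
        remainder = n
        while remainder >= d:
            remainder -= d
        return remainder == 0

    def highest_proper_divisor(n):
        for candidate in range(n - 1, 0, -1):
            if divides_by_repeated_subtraction(n, candidate):
                return candidate

    n = 2 ** 18                         # 262,144
    print(highest_proper_divisor(n))    # 131,072 (= 2**17)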
Manchester Mark 1
The SSEM led to the development of the Manchester Mark 1 at the University of Manchester. Work began in August 1948, and the first version was operational by April 1949; a program written to search for Mersenne primes ran error-free for nine hours on the night of 16/17 June 1949. The machine's successful operation was widely reported in the British press, which used the phrase "electronic brain" in describing it to their readers.
The computer is especially historically significant because of its pioneering inclusion of index registers, an innovation which made it easier for a program to read sequentially through an array of words in memory. Thirty-four patents resulted from the machine's development, and many of the ideas behind its design were incorporated in subsequent commercial products such as the IBM 701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in scientific roles than in pure mathematics. In 1951 they started development work on Meg, the Mark 1's successor, which would include a floating-point unit.
EDSAC
The other contender for being the first recognizably modern digital stored-program computer was the EDSAC, designed and constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England in 1949. The machine was inspired by John von Neumann's seminal First Draft of a Report on the EDVAC and was one of the first usefully operational electronic digital stored-program computers.
EDSAC ran its first programs on 6 May 1949, when it calculated a table of squares and a list of prime numbers. The EDSAC also served as the basis for the first commercially applied computer, the LEO I, used by food manufacturing company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which stayed in use until 1965.
EDVAC
ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944, and design work for the EDVAC commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. However, Eckert and Mauchly left the project and its construction floundered.
It was finally delivered to the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground in August 1949, but due to a number of problems, the computer only began operation in 1951, and then only on a limited basis.
Commercial computers
The first commercial computer was the Ferranti Mark 1, built by Ferranti and delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.
In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer (Lyons Electronic Office) became operational in April 1951 and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO – the first business application to go live on a stored-program computer.
In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than US$1 million each. UNIVAC was the first "mass-produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words).
In 1952, Compagnie des Machines Bull released the Gamma 3 computer, which became a large success in Europe, eventually selling more than 1,200 units, and the first computer produced in more than 1,000 units. The Gamma 3 had innovative features for its time including a dual-mode, software switchable, BCD and binary ALU, as well as a hardwired floating-point library for scientific computing. In its E.T configuration, the Gamma 3 drum memory could fit about 50,000 instructions for a capacity of 16,384 words (around 100 kB), a large amount for the time.
Compared to the UNIVAC, IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg (2,000 lb), the attached power supply weighed around 1,350 kg (3,000 lb), and both were held in separate cabinets of roughly 1.5 by 0.9 by 1.8 metres. The system cost US$500,000 or could be leased for US$3,500 a month. Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture – the instruction format included the address of the next instruction – and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read and additional wait time for drum rotation was reduced.
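To make the idea of "optimal addresses" concrete, here is a toy model of a single drum track; the word count and timing constants below are rough illustrative figures, not exact IBM 650 specifications.

```python
# Toy model of SOAP-style optimum address assignment on a rotating drum:
# place the next instruction at the address that will be passing under the
# read head just as the current instruction finishes, so no extra
# revolution is wasted waiting for it to come around.
WORDS_PER_TRACK = 50      # words stored around one drum track (illustrative)
WORD_TIME_US = 96.0       # time for one word to pass the head, in µs (illustrative)

def optimal_next_address(current_address: int, execute_time_us: float) -> int:
    """Address under the read head once the current instruction has finished."""
    words_elapsed = int(execute_time_us // WORD_TIME_US) + 1
    return (current_address + words_elapsed) % WORDS_PER_TRACK

def rotational_wait_us(current_address: int, next_address: int,
                       execute_time_us: float) -> float:
    """Extra latency before next_address rotates under the head."""
    earliest = optimal_next_address(current_address, execute_time_us)
    return ((next_address - earliest) % WORDS_PER_TRACK) * WORD_TIME_US

# Naively placing the next instruction at address 1 costs most of a revolution;
# the "optimal" address costs no rotational wait at all.
print(rotational_wait_us(0, 1, 300.0))                               # 4512.0 µs
print(rotational_wait_us(0, optimal_next_address(0, 300.0), 300.0))  # 0.0 µs
```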
Microprogramming
In 1951, British scientist Maurice Wilkes developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialized computer program in high-speed ROM. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode). This concept greatly simplified CPU development. He first described this at the University of Manchester Computer Inaugural Conference in 1951, then published in expanded form in IEEE Spectrum in 1955.
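As a rough, hypothetical illustration of the idea (the instruction names and micro-operations below are invented, not taken from any real machine): each machine instruction is just an index into a table of micro-operation sequences held in fast control storage, and the hardware's only job is to step through that sequence.

```python
# Hypothetical microcode table: every machine instruction expands into a
# short, fixed sequence of micro-operations held in a fast "control store".
MICROCODE = {
    "LOAD":  ("drive_address_bus", "read_memory", "latch_accumulator"),
    "ADD":   ("drive_address_bus", "read_memory", "alu_add", "latch_accumulator"),
    "STORE": ("drive_address_bus", "drive_data_bus", "write_memory"),
}

def execute(instruction: str) -> None:
    """Interpret one machine instruction by stepping through its microcode."""
    for micro_op in MICROCODE[instruction]:
        print(f"  {micro_op}")  # stand-in for pulsing the relevant control lines

for op in ("LOAD", "ADD", "STORE"):
    print(op)
    execute(op)
```

Extending the instruction set then amounts to adding rows to the table (firmware or microcode) rather than rewiring control logic, which is the simplification described above.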
It was widely used in the CPUs and floating-point units of mainframe and other computers; it was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor.
Magnetic memory
Magnetic drum memories were developed for the US Navy during WW II with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. ERA, then a part of Univac, included a drum memory in its 1103, announced in February 1953. The first mass-produced computer, the IBM 650, also announced in 1953, had about 8.5 kilobytes of drum memory.
Magnetic-core memory was patented in 1949, with its first usage demonstrated for the Whirlwind computer in August 1953. Commercialization followed quickly. Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used magnetic-core memory. It went on to dominate the field into the 1970s, when it was replaced with semiconductor memory. Magnetic core peaked in volume about 1975 and declined in usage and market share thereafter.
As late as 1980, PDP-11/45 machines using magnetic-core main memory and drums for swapping were still in use at many of the original UNIX sites.
Early digital computer characteristics
Transistor computers
The bipolar transistor was invented in 1947. From 1955 onward transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Initially the only devices available were germanium point-contact transistors, less reliable than the valves they replaced but which consumed far less power. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power consumption of 150 watts. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The design featured a 64-kilobyte magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 this team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms.
CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables; 76 junction transistors were used for the first-stage amplifiers for data read from the drum, since point-contact transistors were too noisy. From August 1956, CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more. Problems with the reliability of early batches of point-contact and alloyed-junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.
The Manchester University Transistor Computer's design was adopted by the local engineering firm of Metropolitan-Vickers in their Metrovick 950, the first commercial transistor computer anywhere. Six Metrovick 950s were built, the first completed in 1956. They were successfully deployed within various departments of the company and were in use for about five years. A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964.
Transistor peripherals
Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The second-generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack could easily be exchanged with another pack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.
During the second generation remote terminal units (often in the form of Teleprinters like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers separation between remote-terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet.
Transistor supercomputers
The early 1960s saw the advent of supercomputing. The Atlas was a joint development between the University of Manchester, Ferranti, and Plessey, and was first installed at Manchester University and officially commissioned in 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. It was a second-generation machine, using discrete germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognisable modern operating system".
In the US, a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. The CDC 6600 outperformed its predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of about 1 megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
Integrated circuit computers
The "third-generation" of digital electronic computers used integrated circuit (IC) chips as the basis of their logic.
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer.
The first working integrated circuits were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire connections, which made it difficult to mass-produce.
Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was a monolithic integrated circuit (IC) chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide at Bell Labs in the late 1950s.
Third generation (integrated circuit) computers first appeared in the early 1960s in computers developed for government purposes, and then in commercial computers beginning in the mid-1960s. The first silicon IC computer was the Apollo Guidance Computer or AGC. Although not the most powerful computer of its time, the extreme constraints on size, mass, and power of the Apollo spacecraft required the AGC to be much smaller and denser than any prior computer, weighing in at only 32 kg (70 lb). Each lunar landing mission carried two AGCs, one each in the command and lunar ascent modules.
Semiconductor memory
The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. In addition to data processing, the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores. Semiconductor memory, also known as MOS memory, was cheaper and consumed less power than magnetic-core memory. MOS random-access memory (RAM), in the form of static RAM (SRAM), was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1966, Robert Dennard at the IBM Thomas J. Watson Research Center developed MOS dynamic RAM (DRAM). In 1967, Dawon Kahng and Simon Sze at Bell Labs developed the floating-gate MOSFET, the basis for MOS non-volatile memory such as EPROM, EEPROM and flash memory.
Microprocessor computers
The "fourth-generation" of digital electronic computers used microprocessors as the basis of their logic. The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor". The earliest multi-chip microprocessors were the Four-Phase Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, developed on a single PMOS LSI chip. It was designed and realized by Ted Hoff, Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971. Tadashi Sasaki and Masatoshi Shima at Busicom, a calculator manufacturer, had the initial insight that the CPU could be a single MOS LSI chip, supplied by Intel.
While the earliest microprocessor ICs literally contained only the processor, i.e. the central processing unit, of a computer, their progressive development naturally led to chips containing most or all of the internal electronic parts of a computer. The integrated circuit in the image on the right, for example, an Intel 8742, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.
During the 1960s, there was considerable overlap between second and third generation technologies. IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits with the simulation program with integrated circuit emphasis, or SPICE (1971), on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of microcomputers, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond.
While which specific product is considered the first microcomputer system is a matter of debate, one of the earliest is R2E's Micral N (François Gernelle, André Truong) launched "early 1973" using the Intel 8008. The first commercially available microcomputer kit was the Intel 8080-based Altair 8800, which was announced in the January 1975 cover article of Popular Electronics. However, the Altair 8800 was an extremely limited system in its initial stages, having only 256 bytes of DRAM in its initial package and no input-output except its toggle switches and LED register display. Despite this, it was initially surprisingly popular, with several hundred sales in the first year, and demand rapidly outstripped supply. Several early third-party vendors such as Cromemco and Processor Technology soon began supplying additional S-100 bus hardware for the Altair 8800.
In April 1975, at the Hannover Fair, Olivetti presented the P6060, the world's first complete, pre-assembled personal computer system. The central processing unit consisted of two cards, code named PUCE1 and PUCE2, and unlike most other personal computers was built with TTL components rather than a microprocessor. It had one or two 8" floppy disk drives, a 32-character plasma display, 80-column graphical thermal printer, 48 Kbytes of RAM, and BASIC language. It weighed . As a complete system, this was a significant step from the Altair, though it never achieved the same success. It was in competition with a similar product by IBM that had an external floppy disk drive.
From 1975 to 1977, most microcomputers, such as the MOS Technology KIM-1, the Altair 8800, and some versions of the Apple I, were sold as kits for do-it-yourselfers. Pre-assembled systems did not gain much ground until 1977, with the introduction of the Apple II, the Tandy TRS-80, the first SWTPC computers, and the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments.
A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server software, CERN httpd, and also used to write the first web browser, WorldWideWeb.
Systems as complicated as computers require very high reliability. ENIAC remained in continuous operation from 1947 to 1955, eight years, before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.
In the 21st century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, and is frequently used for on-chip cache memory in modern microprocessors, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate draws significant current only during the 'transition' between logic states, apart from leakage.
CMOS circuits have allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power which is dissipated during operation has become as essential as computing speed of operation. In 2006 servers consumed 1.5% of the total energy budget of the U.S. The energy consumption of computer data centers was expected to double to 3% of world consumption by 2011. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip; SoCs are enabling phones and PCs to converge into single hand-held wireless mobile devices.
Quantum computing is an emerging technology in the field of computing. MIT Technology Review reported 10 November 2017 that IBM has created a 50-qubit computer; currently its quantum state lasts 50 microseconds. Google researchers have been able to extend the 50 microsecond time limit, as reported 14 July 2021 in Nature; stability has been extended 100-fold by spreading a single logical qubit over chains of data qubits for quantum error correction. Physical Review X reported a technique for 'single-gate sensing as a viable readout method for spin qubits' (a singlet-triplet spin state in silicon) on 26 November 2018. A Google team has succeeded in operating their RF pulse modulator chip at 3 kelvins, simplifying the cryogenics of their 72-qubit computer, which is set up to operate at 0.3 K; but the readout circuitry and another driver remain to be brought into the cryogenics. See: Quantum supremacy. Silicon qubit systems have demonstrated entanglement at non-local distances.
Computing hardware and its software have even become a metaphor for the operation of the universe.
Epilogue
An indication of the rapidity of development of this field can be inferred from the history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the time that anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC, and immediately started implementing their own systems. To this day, the rapid pace of development has continued, worldwide.
See also
Antikythera mechanism
History of computing
History of computing hardware (1960s–present)
History of laptops
History of personal computers
History of software
Information Age
IT History Society
Retrocomputing
Timeline of computing
List of pioneers in computer science
Vacuum-tube computer
Notes
References
Further reading
Computers and Automation Magazine – Pictorial Report on the Computer Field:
A PICTORIAL INTRODUCTION TO COMPUTERS – 06/1957
A PICTORIAL MANUAL ON COMPUTERS – 12/1957
A PICTORIAL MANUAL ON COMPUTERS, Part 2 – 01/1958
1958–1967 Pictorial Report on the Computer Field – December issues (195812.pdf, ..., 196712.pdf)
Bit by Bit: An Illustrated History of Computers, Stan Augarten, 1984. OCR with permission of the author
External links
Obsolete Technology – Old Computers
Things That Count
Historic Computers in Japan
The History of Japanese Mechanical Calculating Machines
Computer History — a collection of articles by Bob Bemer
25 Microchips that shook the world (archived) – a collection of articles by the Institute of Electrical and Electronics Engineers
Columbia University Computing History
Computer Histories – An introductory course on the history of computing
Revolution – The First 2000 Years Of Computing, Computer History Museum
01
Hardware | History of computing hardware | [
"Technology"
] | 13,395 | [
"History of computing hardware",
"History of computing",
"Computers"
] |
13,637 | https://en.wikipedia.org/wiki/Hausdorff%20space | In topology and related branches of mathematics, a Hausdorff space, T2 space or separated space, is a topological space where distinct points have disjoint neighbourhoods. Of the many separation axioms that can be imposed on a topological space, the "Hausdorff condition" (T2) is the most frequently used and discussed. It implies the uniqueness of limits of sequences, nets, and filters.
Hausdorff spaces are named after Felix Hausdorff, one of the founders of topology. Hausdorff's original definition of a topological space (in 1914) included the Hausdorff condition as an axiom.
Definitions
Points x and y in a topological space X can be separated by neighbourhoods if there exists a neighbourhood U of x and a neighbourhood V of y such that U and V are disjoint (U ∩ V = ∅). X is a Hausdorff space if any two distinct points in X are separated by neighbourhoods. This condition is the third separation axiom (after T0 and T1), which is why Hausdorff spaces are also called T2 spaces. The name separated space is also used.
A related, but weaker, notion is that of a preregular space. X is a preregular space if any two topologically distinguishable points can be separated by disjoint neighbourhoods. A preregular space is also called an R1 space.
The relationship between these two conditions is as follows. A topological space is Hausdorff if and only if it is both preregular (i.e. topologically distinguishable points are separated by neighbourhoods) and Kolmogorov (i.e. distinct points are topologically distinguishable). A topological space is preregular if and only if its Kolmogorov quotient is Hausdorff.
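For finite spaces the Hausdorff condition can be checked directly by brute force over all pairs of points and all pairs of open sets; the following small sketch (using the discrete two-point space and the Sierpiński space as test cases) is only an illustration of the definition.

```python
from itertools import combinations

def is_hausdorff(points, open_sets) -> bool:
    """Check the Hausdorff condition on a finite topological space.

    points: iterable of points; open_sets: collection of open sets (frozensets).
    Every pair of distinct points needs disjoint open neighbourhoods.
    """
    for x, y in combinations(points, 2):
        separated = any(
            x in u and y in v and not (u & v)
            for u in open_sets for v in open_sets
        )
        if not separated:
            return False
    return True

points = {0, 1}
discrete = [frozenset(s) for s in ((), (0,), (1,), (0, 1))]
sierpinski = [frozenset(s) for s in ((), (1,), (0, 1))]

print(is_hausdorff(points, discrete))    # True: {0} and {1} separate the two points
print(is_hausdorff(points, sierpinski))  # False: every open set containing 0 also contains 1
```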
Equivalences
For a topological space X, the following are equivalent:
X is a Hausdorff space.
Limits of nets in X are unique.
Limits of filters on X are unique.
Any singleton set {x} is equal to the intersection of all closed neighbourhoods of x. (A closed neighbourhood of x is a closed set that contains an open set containing x.)
The diagonal Δ = {(x, x) : x ∈ X} is closed as a subset of the product space X × X.
Any injection from the discrete space with two points to X has the lifting property with respect to the map from the finite topological space with two open points and one closed point to a single point.
Examples of Hausdorff and non-Hausdorff spaces
Almost all spaces encountered in analysis are Hausdorff; most importantly, the real numbers (under the standard metric topology on real numbers) are a Hausdorff space. More generally, all metric spaces are Hausdorff. In fact, many spaces of use in analysis, such as topological groups and topological manifolds, have the Hausdorff condition explicitly stated in their definitions.
A simple example of a topology that is T1 but is not Hausdorff is the cofinite topology defined on an infinite set, as is the cocountable topology defined on an uncountable set.
Pseudometric spaces typically are not Hausdorff, but they are preregular, and their use in analysis is usually only in the construction of Hausdorff gauge spaces. Indeed, when analysts run across a non-Hausdorff space, it is still probably at least preregular, and then they simply replace it with its Kolmogorov quotient, which is Hausdorff.
In contrast, non-preregular spaces are encountered much more frequently in abstract algebra and algebraic geometry, in particular as the Zariski topology on an algebraic variety or the spectrum of a ring. They also arise in the model theory of intuitionistic logic: every complete Heyting algebra is the algebra of open sets of some topological space, but this space need not be preregular, much less Hausdorff, and in fact usually is neither. The related concept of Scott domain also consists of non-preregular spaces.
While the existence of unique limits for convergent nets and filters implies that a space is Hausdorff, there are non-Hausdorff T1 spaces in which every convergent sequence has a unique limit. Such spaces are called US spaces. For sequential spaces, this notion is equivalent to being weakly Hausdorff.
Properties
Subspaces and products of Hausdorff spaces are Hausdorff, but quotient spaces of Hausdorff spaces need not be Hausdorff. In fact, every topological space can be realized as the quotient of some Hausdorff space.
Hausdorff spaces are T1, meaning that each singleton is a closed set. Similarly, preregular spaces are R0. Every Hausdorff space is a sober space, although the converse is in general not true.
Another property of Hausdorff spaces is that each compact set is a closed set. For non-Hausdorff spaces, it can be that each compact set is a closed set (for example, the cocountable topology on an uncountable set) or not (for example, the cofinite topology on an infinite set and the Sierpiński space).
The definition of a Hausdorff space says that points can be separated by neighborhoods. It turns out that this implies something which is seemingly stronger: in a Hausdorff space every pair of disjoint compact sets can also be separated by neighborhoods, in other words there is a neighborhood of one set and a neighborhood of the other, such that the two neighborhoods are disjoint. This is an example of the general rule that compact sets often behave like points.
Compactness conditions together with preregularity often imply stronger separation axioms. For example, any locally compact preregular space is completely regular. Compact preregular spaces are normal, meaning that they satisfy Urysohn's lemma and the Tietze extension theorem and have partitions of unity subordinate to locally finite open covers. The Hausdorff versions of these statements are: every locally compact Hausdorff space is Tychonoff, and every compact Hausdorff space is normal Hausdorff.
The following results are some technical properties regarding maps (continuous and otherwise) to and from Hausdorff spaces.
Let f : X → Y be a continuous function and suppose Y is Hausdorff. Then the graph of f, {(x, f(x)) : x ∈ X}, is a closed subset of X × Y.
Let f : X → Y be a function and let ker(f) = {(x, x′) : f(x) = f(x′)} be its kernel regarded as a subspace of X × X.
If f is continuous and Y is Hausdorff then ker(f) is a closed set.
If f is an open surjection and ker(f) is a closed set then Y is Hausdorff.
If f is a continuous, open surjection (i.e. an open quotient map) then Y is Hausdorff if and only if ker(f) is a closed set.
If f, g : X → Y are continuous maps and Y is Hausdorff then the equalizer eq(f, g) = {x ∈ X : f(x) = g(x)} is a closed set in X. It follows that if Y is Hausdorff and f and g agree on a dense subset of X then f = g. In other words, continuous functions into Hausdorff spaces are determined by their values on dense subsets.
Let f : X → Y be a closed surjection such that f⁻¹(y) is compact for all y ∈ Y. Then if X is Hausdorff so is Y.
Let f : X → Y be a quotient map with X a compact Hausdorff space. Then the following are equivalent:
Y is Hausdorff.
f is a closed map.
ker(f) is a closed set.
Preregularity versus regularity
All regular spaces are preregular, as are all Hausdorff spaces. There are many results for topological spaces that hold for both regular and Hausdorff spaces.
Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later.
On the other hand, those results that are truly about regularity generally do not also apply to nonregular Hausdorff spaces.
There are many situations where another condition of topological spaces (such as paracompactness or local compactness) will imply regularity if preregularity is satisfied. Such conditions often come in two versions: a regular version and a Hausdorff version. Although Hausdorff spaces are not, in general, regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular. Thus from a certain point of view, it is really preregularity, rather than regularity, that matters in these situations. However, definitions are usually still phrased in terms of regularity, since this condition is better known than preregularity.
See History of the separation axioms for more on this issue.
Variants
The terms "Hausdorff", "separated", and "preregular" can also be applied to such variants on topological spaces as uniform spaces, Cauchy spaces, and convergence spaces. The characteristic that unites the concept in all of these examples is that limits of nets and filters (when they exist) are unique (for separated spaces) or unique up to topological indistinguishability (for preregular spaces).
As it turns out, uniform spaces, and more generally Cauchy spaces, are always preregular, so the Hausdorff condition in these cases reduces to the T0 condition. These are also the spaces in which completeness makes sense, and Hausdorffness is a natural companion to completeness in these cases. Specifically, a space is complete if and only if every Cauchy net has at least one limit, while a space is Hausdorff if and only if every Cauchy net has at most one limit (since only Cauchy nets can have limits in the first place).
Algebra of functions
The algebra of continuous (real or complex) functions on a compact Hausdorff space is a commutative C*-algebra, and conversely by the Banach–Stone theorem one can recover the topology of the space from the algebraic properties of its algebra of continuous functions. This leads to noncommutative geometry, where one considers noncommutative C*-algebras as representing algebras of functions on a noncommutative space.
Academic humour
The Hausdorff condition is illustrated by the pun that in Hausdorff spaces any two points can be "housed off" from each other by open sets.
In the Mathematics Institute of the University of Bonn, in which Felix Hausdorff researched and lectured, there is a certain room designated the Hausdorff-Raum. This is a pun, as Raum means both room and space in German.
See also
Fixed-point space, a Hausdorff space X such that every continuous function f : X → X has a fixed point.
Notes
References
Separation axioms
Properties of topological spaces | Hausdorff space | [
"Mathematics"
] | 2,187 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
13,654 | https://en.wikipedia.org/wiki/Heat%20engine | A heat engine is a system that converts heat to usable energy, particularly mechanical energy, which can then be used to do mechanical work. While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher state temperature to a lower state temperature. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag.
In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications.
Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models.
Overview
In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two.
In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature.
The efficiency of various heat engines proposed or used today has a large range:
3% (97 percent waste heat using low quality heat) for the ocean thermal energy conversion (OTEC) ocean power proposal
25% for most automotive gasoline engines
49% for a supercritical coal-fired power station such as the Avedøre Power Station
50%+ for long stroke marine Diesel engines
60% for a combined cycle gas turbine
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency.
Examples
Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an external heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines.
Everyday examples
Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir and flows into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature.
In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws or the properties associated with phase changes between gas and liquid states.
Earth's heat engine
Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, thereby distributing heat around the globe.
A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy.
Phase-change cycles
In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
Rankine cycle (classical steam engine)
Regenerative cycle (steam engine more efficient than Rankine cycle)
Organic Rankine cycle (Coolant changing phase in temperature ranges of ice and hot liquid water)
Vapor to liquid cycle (drinking bird, injector, Minto wheel)
Liquid to solid cycle (frost heaving – water changing from ice to liquid and back again can lift rock up to 60 cm.)
Solid to gas cycle (firearms – solid propellants combust to hot gases.)
Gas-only cycles
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
Carnot cycle (Carnot heat engine)
Ericsson cycle (Caloric Ship John Ericsson)
Stirling cycle (Stirling engine, thermoacoustic devices)
Internal combustion engine (ICE):
Otto cycle (e.g. gasoline/petrol engine)
Diesel cycle (e.g. Diesel engine)
Atkinson cycle (Atkinson engine)
Brayton cycle or Joule cycle originally Ericsson cycle (gas turbine)
Lenoir cycle (e.g., pulse jet engine)
Miller cycle (Miller engine)
Liquid-only cycles
In these cycles and engines the working fluid is always a liquid:
Stirling cycle (Malone engine)
Electron cycles
Johnson thermoelectric energy converter
Thermoelectric (Peltier–Seebeck effect)
Thermogalvanic cell
Thermionic emission
Thermotunnel cooling
Magnetic cycles
Thermo-magnetic motor (Tesla)
Cycles used for refrigeration
A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.
Refrigeration cycles include:
Air cycle machine
Gas-absorption refrigerator
Magnetic refrigeration
Stirling cryocooler
Vapor-compression refrigeration
Vuilleumier cycle
Evaporative heat engines
The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.
Mesoscopic heat engines
Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and performing useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponentials of the work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality, and is also known as a Carnot cycle equality.
Efficiency
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.
From the laws of thermodynamics, after a completed cycle:
Q = Q_h + Q_c = −W
and therefore
W = −(Q_h + Q_c)
where
W is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is done by the engine.)
Q_h is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is added to the engine.)
Q_c is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is lost by the engine to the sink.)
In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink.
In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". (For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work W and has to put in heat Q_h, for instance from combustion of a fuel, so the engine efficiency is reasonably defined as
η = −W / Q_h = (Q_h + Q_c) / Q_h = 1 + Q_c / Q_h
The efficiency is less than 100% because of the waste heat unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again.
The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero:
ΔS_h + ΔS_c = 0
Note that ΔS_h is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid while ΔS_c is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, ΔS_h = Q_h / T_h and ΔS_c = Q_c / T_c, and thus
Q_h / T_h + Q_c / T_c = 0,
which gives Q_c / Q_h = −T_c / T_h and thus the Carnot limit for heat-engine efficiency,
η_max = 1 − T_c / T_h
where T_h is the absolute temperature of the hot source and T_c that of the cold sink, usually measured in kelvins.
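As a quick numerical illustration (the temperatures below are chosen for illustration and are not figures from this article):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency between reservoirs at the given absolute temperatures."""
    return 1.0 - t_cold_k / t_hot_k

# Example: steam at 540 °C rejecting heat to surroundings at 25 °C.
t_hot, t_cold = 540.0 + 273.15, 25.0 + 273.15
print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")  # about 63%
```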
The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle.
Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.
Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
Endo-reversible heat-engines
By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, so the Carnot efficiency expression applies only in the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired.
A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non-reversible interactions between them. A classical example is the Curzon–Ahlborn engine, very similar to a Carnot engine, but where the thermal reservoirs at temperatures T_h and T_c are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle: T'_h and T'_c. The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible), in the form dQ/dt = α (T − T'). In this case, a tradeoff has to be made between power output and efficiency. If the engine is operated very slowly, the heat flux is low, and the classical Carnot result is found
η = 1 − T_c / T_h,
but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes
η = 1 − √(T_c / T_h) (Note: T in units of K or °R)
This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics):
The Curzon–Ahlborn efficiency much more closely models the efficiencies observed in real power plants.
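A short numerical comparison of the two limits, with illustrative reservoir temperatures of the kind often quoted for a steam power plant:

```python
from math import sqrt

def carnot(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

def curzon_ahlborn(t_hot_k: float, t_cold_k: float) -> float:
    """Efficiency at maximum power output for an endoreversible engine."""
    return 1.0 - sqrt(t_cold_k / t_hot_k)

t_hot, t_cold = 565.0 + 273.15, 25.0 + 273.15   # illustrative steam-plant figures
print(f"Carnot limit:           {carnot(t_hot, t_cold):.1%}")          # ~64%
print(f"Curzon-Ahlborn (max P): {curzon_ahlborn(t_hot, t_cold):.1%}")  # ~40%
```

With these figures the Carnot bound is roughly 64% while the maximum-power value is about 40%, which is much closer to the efficiencies in the mid-30-percent range commonly reported for such plants.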
History
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.
Enhancements
Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production (if the heat source is combustion with ambient air) restrict the maximum temperature on workable heat-engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.
Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the critical point (supercritical water). The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications. Downsides include issues of corrosion and erosion, the different chemical behavior above and below the critical point, the needed high pressures and – in the case of sulfur dioxide and to a lesser extent carbon dioxide – toxicity. Among the mentioned compounds xenon is least suitable for use in a nuclear reactor due to the high neutron absorption cross section of almost all isotopes of xenon, whereas carbon dioxide and water can also double as a neutron moderator for a thermal spectrum reactor.
Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back by the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.
Heat engine processes
Each process is one of the following:
isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
isobaric (at constant pressure)
isometric/isochoric (at constant volume), also referred to as iso-volumetric
adiabatic (no heat is added or removed from the system during adiabatic process)
isentropic (reversible adiabatic process, no heat is added or removed during isentropic process)
See also
Carnot heat engine
Cogeneration
Einstein refrigerator
Heat pump
Reciprocating engine for a general description of the mechanics of piston engines
Stirling engine
Thermosynthesis
Timeline of heat engine technology
References
Energy conversion
Engine technology
Engines
Heating, ventilation, and air conditioning
Thermodynamics | Heat engine | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 3,733 | [
"Machines",
"Engines",
"Physical systems",
"Engine technology",
"Thermodynamics",
"Dynamical systems"
] |
13,660 | https://en.wikipedia.org/wiki/Homeomorphism | In mathematics and more specifically in topology, a homeomorphism (from Greek roots meaning "similar shape", named by Henri Poincaré), also called topological isomorphism, or bicontinuous function, is a bijective and continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same.
Very roughly speaking, a topological space is a geometric object, and a homeomorphism results from a continuous deformation of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. However, this description can be misleading. Some continuous deformations do not result in homeomorphisms, such as the deformation of a line into a point. Some homeomorphisms do not result from continuous deformations, such as the homeomorphism between a trefoil knot and a circle. Homotopy and isotopy are precise definitions for the informal concept of continuous deformation.
Definition
A function f : X → Y between two topological spaces is a homeomorphism if it has the following properties:
f is a bijection (one-to-one and onto),
f is continuous,
the inverse function f⁻¹ is continuous (f is an open mapping).
A homeomorphism is sometimes called a bicontinuous function. If such a function exists, X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space onto itself. Being "homeomorphic" is an equivalence relation on topological spaces. Its equivalence classes are called homeomorphism classes.
The third requirement, that f⁻¹ be continuous, is essential. Consider for instance the function f : [0, 2π) → S¹ (the unit circle in ℝ²) defined by f(φ) = (cos φ, sin φ). This function is bijective and continuous, but not a homeomorphism (S¹ is compact but [0, 2π) is not). The inverse function f⁻¹ is not continuous at the point (1, 0): points of the circle just below the x-axis are arbitrarily close to (1, 0), yet f⁻¹ sends them to values close to 2π, far from f⁻¹(1, 0) = 0.
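The following is a small numeric illustration (not a proof) of this example: two points just above and just below the x-axis are nearly the same point on the circle, yet the recovered angles are far apart, which is exactly the failure of continuity of the inverse. The helper names are for this sketch only.

```python
# Numeric illustration of the example above: f(phi) = (cos phi, sin phi)
# maps [0, 2*pi) bijectively onto the unit circle, but recovering the angle
# (the inverse map) jumps near the point (1, 0).
import math

def f(phi):
    return (math.cos(phi), math.sin(phi))

def f_inverse(point):
    """Recover the angle in [0, 2*pi) from a point on the unit circle."""
    x, y = point
    return math.atan2(y, x) % (2.0 * math.pi)

eps = 1e-3
just_above = f(eps)                  # slightly above the x-axis
just_below = f(2.0 * math.pi - eps)  # slightly below the x-axis

# The two image points are very close together on the circle...
dist = math.dist(just_above, just_below)
# ...but their preimages are far apart: near 0 and near 2*pi.
print(f"distance on circle: {dist:.6f}")
print(f"preimages: {f_inverse(just_above):.4f} and {f_inverse(just_below):.4f}")
```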
Homeomorphisms are the isomorphisms in the category of topological spaces. As such, the composition of two homeomorphisms is again a homeomorphism, and the set of all self-homeomorphisms of a space X forms a group, called the homeomorphism group of X, often denoted Homeo(X). This group can be given a topology, such as the compact-open topology, which under certain assumptions makes it a topological group.
In some contexts, there are homeomorphic objects that cannot be continuously deformed from one to the other. Homotopy and isotopy are equivalence relations that have been introduced for dealing with such situations.
Similarly, as usual in category theory, given two spaces X and Y that are homeomorphic, the space of homeomorphisms between them, Homeo(X, Y), is a torsor for the homeomorphism groups Homeo(X) and Homeo(Y), and, given a specific homeomorphism between X and Y, all three sets are identified.
Examples
The open interval (a, b) is homeomorphic to the real numbers ℝ for any a < b. (In this case, a bicontinuous forward mapping is given by f(x) = 1/(a − x) + 1/(b − x), while other such mappings are given by scaled and translated versions of the tan or arg tanh functions.)
The unit 2-disc and the unit square in ℝ² are homeomorphic, since the unit disc can be deformed into the unit square. An example of a bicontinuous mapping from the square to the disc is, in polar coordinates, (ρ, θ) ↦ (ρ max(|cos θ|, |sin θ|), θ); see the numeric sketch after this list.
The graph of a differentiable function is homeomorphic to the domain of the function.
A differentiable parametrization of a curve is a homeomorphism between the domain of the parametrization and the curve.
A chart of a manifold is a homeomorphism between an open subset of the manifold and an open subset of a Euclidean space.
The stereographic projection is a homeomorphism between the unit sphere in ℝ³ with a single point removed and the set of all points in ℝ² (a 2-dimensional plane); see the sketch after this list.
If G is a topological group, its inversion map x ↦ x⁻¹ is a homeomorphism. Also, for any x ∈ G, the left translation y ↦ xy, the right translation y ↦ yx, and the inner automorphism y ↦ xyx⁻¹ are homeomorphisms.
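A hedged numeric check of the square-to-disc mapping mentioned above, assuming the square is [−1, 1]² and the disc is the closed unit disc, both centred at the origin:

```python
# Sketch of a square-to-disc mapping in polar coordinates: scale the radius
# by max(|cos t|, |sin t|) so that the boundary of the square lands exactly
# on the unit circle. Assumes the square [-1, 1]^2 centred at the origin.
import math

def square_to_disc(x, y):
    """Send a point of the square [-1, 1]^2 into the closed unit disc."""
    rho = math.hypot(x, y)
    if rho == 0.0:
        return (0.0, 0.0)
    theta = math.atan2(y, x)
    scale = max(abs(math.cos(theta)), abs(math.sin(theta)))
    return (x * scale, y * scale)

# Boundary points of the square should land on the unit circle:
for p in [(1.0, 1.0), (1.0, 0.5), (-1.0, -0.25), (0.0, 1.0)]:
    q = square_to_disc(*p)
    print(p, "->", q, "radius", round(math.hypot(*q), 6))
```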
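Similarly, a minimal sketch of the stereographic projection example, assuming projection from the north pole (0, 0, 1) of the unit sphere onto the plane z = 0; the round trip illustrates that the map and its inverse are mutually inverse continuous functions.

```python
# Sketch of stereographic projection from the north pole (0, 0, 1) of the
# unit sphere onto the plane z = 0, together with its continuous inverse.
import math

def stereographic(x, y, z):
    """Project a point of the unit sphere (with (0, 0, 1) removed) to the plane."""
    return (x / (1.0 - z), y / (1.0 - z))

def stereographic_inverse(u, v):
    """Send a point of the plane back to the unit sphere."""
    s = u * u + v * v
    return (2.0 * u / (s + 1.0), 2.0 * v / (s + 1.0), (s - 1.0) / (s + 1.0))

p = (0.6, 0.0, 0.8)                  # a point of the unit sphere (0.36 + 0.64 = 1)
u, v = stereographic(*p)
print((u, v))                        # (3.0, 0.0)
print(stereographic_inverse(u, v))   # round trip back to (0.6, 0.0, 0.8)
```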
Counter-examples
ℝᵐ and ℝⁿ are not homeomorphic for m ≠ n.
The Euclidean real line is not homeomorphic to the unit circle as a subspace of ℝ², since the unit circle is compact as a subspace of Euclidean ℝ² but the real line is not compact.
The one-dimensional intervals [0, 1] and (0, 1) are not homeomorphic because one is compact while the other is not.
Properties
Two homeomorphic spaces share the same topological properties. For example, if one of them is compact, then the other is as well; if one of them is connected, then the other is as well; if one of them is Hausdorff, then the other is as well; their homotopy and homology groups will coincide. Note however that this does not extend to properties defined via a metric; there are metric spaces that are homeomorphic even though one of them is complete and the other is not.
A homeomorphism is simultaneously an open mapping and a closed mapping; that is, it maps open sets to open sets and closed sets to closed sets.
Every self-homeomorphism of the circle S¹ can be extended to a self-homeomorphism of the whole disk D² (Alexander's trick).
Informal discussion
The intuitive criterion of stretching, bending, cutting and gluing back together takes a certain amount of practice to apply correctly—it may not be obvious from the description above that deforming a line segment to a point is impermissible, for instance. It is thus important to realize that it is the formal definition given above that counts. In this case, for example, the line segment possesses infinitely many points, and therefore cannot be put into a bijection with a set containing only a finite number of points, including a single point.
This characterization of a homeomorphism often leads to a confusion with the concept of homotopy, which is actually defined as a continuous deformation, but from one function to another, rather than one space to another. In the case of a homeomorphism, envisioning a continuous deformation is a mental tool for keeping track of which points on space X correspond to which points on Y—one just follows them as X deforms. In the case of homotopy, the continuous deformation from one map to the other is of the essence, and it is also less restrictive, since none of the maps involved need to be one-to-one or onto. Homotopy does lead to a relation on spaces: homotopy equivalence.
There is a name for the kind of deformation involved in visualizing a homeomorphism. It is (except when cutting and regluing are required) an isotopy between the identity map on X and the homeomorphism from X to Y.
See also
Uniform isomorphism, an isomorphism between uniform spaces
Isometric isomorphism, an isomorphism between metric spaces
Homeomorphism (graph theory), closely related to graph subdivision
References
External links
Theory of continuous functions
Functions and mappings | Homeomorphism | [
"Mathematics"
] | 1,481 | [
"Functions and mappings",
"Mathematical analysis",
"Theory of continuous functions",
"Homeomorphisms",
"Mathematical objects",
"Topology",
"Mathematical relations"
] |
13,665 | https://en.wikipedia.org/wiki/Hausdorff%20maximal%20principle | In mathematics, the Hausdorff maximal principle is an alternate and earlier formulation of Zorn's lemma proved by Felix Hausdorff in 1914 (Moore 1982:168). It states that in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset, where "maximal" is with respect to set inclusion.
In a partially ordered set, a totally ordered subset is also called a chain. Thus, the maximal principle says every chain in the set extends to a maximal chain.
The Hausdorff maximal principle is one of many statements equivalent to the axiom of choice over ZF (Zermelo–Fraenkel set theory without the axiom of choice). The principle is also called the Hausdorff maximality theorem or the Kuratowski lemma (Kelley 1955:33).
Statement
The Hausdorff maximal principle states that, in any partially ordered set P, every chain (i.e., a totally ordered subset) is contained in a maximal chain (i.e., a chain that is not contained in a strictly larger chain in P). In general, there may be several maximal chains containing a given chain.
An equivalent form of the Hausdorff maximal principle is that in every partially ordered set, there exists a maximal chain. (Note if the set is empty, the empty subset is a maximal chain.)
This form follows from the original form since the empty set is a chain. Conversely, to deduce the original form from this form, consider the set of all chains in containing a given chain in . Then is partially ordered by set inclusion. Thus, by the maximal principle in the above form, contains a maximal chain . Let be the union of , which is a chain in since a union of a totally ordered set of chains is a chain. Since contains , it is an element of . Also, since any chain containing is contained in as is a union, is in fact a maximal element of ; i.e., a maximal chain in .
The proof that the Hausdorff maximal principle is equivalent to Zorn's lemma is somewhat similar to this proof. Indeed, first assume Zorn's lemma. Since a union of a totally ordered set of chains is a chain, the hypothesis of Zorn's lemma (every chain has an upper bound) is satisfied for the set of all chains in P ordered by inclusion, and thus that set contains a maximal element, i.e., a maximal chain in P.
Conversely, if the maximal principle holds, then contains a maximal chain . By the hypothesis of Zorn's lemma, has an upper bound in . If , then is a chain containing and so by maximality, ; i.e., and so .
Examples
If A is any collection of sets, the relation "is a proper subset of" is a strict partial order on A. Suppose that A is the collection of all circular regions (interiors of circles) in the plane. One maximal totally ordered sub-collection of A consists of all circular regions with centers at the origin. Another maximal totally ordered sub-collection consists of all circular regions bounded by circles tangent from the right to the y-axis at the origin.
If (x0, y0) and (x1, y1) are two points of the plane ℝ², define (x0, y0) < (x1, y1) if y0 = y1 and x0 < x1. This is a partial ordering of ℝ² under which two points are comparable only if they lie on the same horizontal line. The maximal totally ordered sets are horizontal lines in ℝ².
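Although the maximal principle is only needed for infinite partially ordered sets, the notion of a maximal chain can be illustrated computationally on a small finite example. The following Python sketch greedily extends a chain of subsets of {1, 2, 3} (ordered by inclusion) until no further element can be added; the names and the greedy strategy are illustrative only.

```python
# Illustrative sketch of "maximal chain" on a finite poset: the subsets of
# {1, 2, 3} ordered by inclusion. A maximal chain here is one that no other
# subset can be added to while keeping every pair comparable.
from itertools import combinations

universe = (1, 2, 3)
elements = [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def comparable(a, b):
    """Two subsets are comparable when one is contained in the other."""
    return a <= b or b <= a

def extend_to_maximal_chain(start):
    """Greedily extend a chain until no element of the poset can be added."""
    current = list(start)
    changed = True
    while changed:
        changed = False
        for x in elements:
            if x not in current and all(comparable(x, y) for y in current):
                current.append(x)
                changed = True
    return sorted(current, key=len)

# Start from the chain { {1} } and extend it to a maximal chain:
maximal = extend_to_maximal_chain([frozenset({1})])
print([sorted(s) for s in maximal])   # e.g. [[], [1], [1, 2], [1, 2, 3]]
```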
Application
By the Hausdorff maximal principle, we can show every Hilbert space contains a maximal orthonormal subset as follows. (This fact can be stated as saying that as Hilbert spaces.)
Let be the set of all orthonormal subsets of the given Hilbert space , which is partially ordered by set inclusion. It is nonempty as it contains the empty set and thus by the maximal principle, it contains a maximal chain . Let be the union of . We shall show it is a maximal orthonormal subset. First, if are in , then either or . That is, any given two distinct elements in are contained in some in and so they are orthogonal to each other (and of course, is a subset of the unit sphere in ). Second, if for some in , then cannot be in and so is a chain strictly larger than , a contradiction.
For the purpose of comparison, here is a proof of the same fact by Zorn's lemma. As above, let be the set of all orthonormal subsets of . If is a chain in , then the union of is also orthonormal by the same argument as above and so is an upper bound of . Thus, by Zorn's lemma, contains a maximal element . (So, the difference is that the maximal principle gives a maximal chain while Zorn's lemma gives a maximal element directly.)
Proof 1
The idea of the proof is essentially due to Zermelo and is to prove the following weak form of Zorn's lemma, from the axiom of choice.
Let be a nonempty set of subsets of some fixed set, ordered by set inclusion, such that (1) the union of each totally ordered subset of is in and (2) each subset of a set in is in . Then has a maximal element.
(Zorn's lemma itself also follows from this weak form.) The maximal principle follows from the above since the set of all chains in satisfies the above conditions.
By the axiom of choice, we have a function such that for the power set of .
For each , let be the set of all such that is in . If , then let . Otherwise, let
Note is a maximal element if and only if . Thus, we are done if we can find a such that .
Fix a in . We call a subset a tower (over ) if
is in .
The union of each totally ordered subset is in , where "totally ordered" is with respect to set inclusion.
For each in , is in .
There exists at least one tower; indeed, the set of all sets in containing is a tower. Let be the intersection of all towers, which is again a tower.
Now, we shall show is totally ordered. We say a set is comparable in if for each in , either or . Let be the set of all sets in that are comparable in . We claim is a tower. The conditions 1. and 2. are straightforward to check. For 3., let in be given and then let be the set of all in such that either or .
We claim is a tower. The conditions 1. and 2. are again straightforward to check. For 3., let be in . If , then since is comparable in , either or . In the first case, is in . In the second case, we have , which implies either or . (This is the moment we needed to collapse a set to an element by the axiom of choice to define .) Either way, we have is in . Similarly, if , we see is in . Hence, is a tower. Now, since and is the intersection of all towers, , which implies is comparable in ; i.e., is in . This completes the proof of the claim that is a tower.
Finally, since is a tower contained in , we have , which means is totally ordered.
Let be the union of . By 2., is in and then by 3., is in . Since is the union of , and thus .
Proof 2
The Bourbaki–Witt theorem can also be used to prove the Hausdorff maximal principle:
Notes
References
Reprinted by Springer-Verlag, New York, 1974 (Springer-Verlag edition).
John Kelley (1955), General topology, Von Nostrand.
Gregory Moore (1982), Zermelo's axiom of choice, Springer.
James Munkres (2000), Topology, Pearson.
Appendix of
Axiom of choice
Mathematical principles
Order theory | Hausdorff maximal principle | [
"Mathematics"
] | 1,659 | [
"Mathematical principles",
"Mathematical axioms",
"Axiom of choice",
"Axioms of set theory",
"Order theory"
] |
13,692 | https://en.wikipedia.org/wiki/History%20of%20the%20Internet | The history of the Internet originated in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users, and later, the possibility of achieving this over wide area networks. J. C. R. Licklider developed the idea of a universal network at the Information Processing Techniques Office (IPTO) of the United States Department of Defense (DoD) Advanced Research Projects Agency (ARPA). Independently, Paul Baran at the RAND Corporation proposed a distributed network based on data in message blocks in the early 1960s, and Donald Davies conceived of packet switching in 1965 at the National Physical Laboratory (NPL), proposing a national commercial data network in the United Kingdom.
ARPA awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran. The network of Interface Message Processors (IMPs) was built by a team at Bolt, Beranek, and Newman, with the design and specification led by Bob Kahn. The host-to-host protocol was specified by a group of graduate students at UCLA, led by Steve Crocker, along with Jon Postel and others. The ARPANET expanded rapidly across the United States with connections to the United Kingdom and Norway.
Several early packet-switched networks emerged in the 1970s which researched and provided data networking. Louis Pouzin and Hubert Zimmermann pioneered a simplified end-to-end approach to internetworking at the IRIA. Peter Kirstein put internetworking into practice at University College London in 1973. Bob Metcalfe developed the theory behind Ethernet and the PARC Universal Packet. ARPA initiatives and the International Network Working Group developed and refined ideas for internetworking, in which multiple separate networks could be joined into a network of networks. Vint Cerf, now at Stanford University, and Bob Kahn, now at DARPA, published their research on internetworking in 1974. Through the Internet Experiment Note series and later RFCs this evolved into the Transmission Control Protocol (TCP) and Internet Protocol (IP), two protocols of the Internet protocol suite. The design included concepts pioneered in the French CYCLADES project directed by Louis Pouzin. The development of packet switching networks was underpinned by mathematical work in the 1970s by Leonard Kleinrock at UCLA.
In the late 1970s, national and international public data networks emerged based on the X.25 protocol, designed by Rémi Després and others. In the United States, the National Science Foundation (NSF) funded national supercomputing centers at several universities in the United States, and provided interconnectivity in 1986 with the NSFNET project, thus creating network access to these supercomputer sites for research and academic organizations in the United States. International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP on existing networks in the United States and around the world marked the beginnings of the Internet. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T in the United States.
Research at CERN in Switzerland by the British computer scientist Tim Berners-Lee in 1989–90 resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. The dramatic expansion of the capacity of the Internet, enabled by the advent of wave division multiplexing (WDM) and the rollout of fiber optic cables in the mid-1990s, had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and 800 Gbit/s by 2019. The Internet's takeover of the global communication landscape was rapid in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007. The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking services. However, the future of the global network may be shaped by regional differences.
Foundations
Precursors
Telegraphy
The practice of transmitting messages between two different places through an electromagnetic medium dates back to the electrical telegraph in the 19th century, which was the first fully digital communication system. Radiotelegraphy began to be used commercially in the early 20th century. Telex became an operational teleprinter service in the 1930s. Such systems were limited to point-to-point communication between two end devices.
Information theory
Fundamental theoretical work in telecommunications technology was developed by Harry Nyquist and Ralph Hartley in the 1920s. Information theory, as enunciated by Claude Shannon in 1948, provided a firm theoretical underpinning to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise.
Computers and modems
Early fixed-program computers in the 1940s were operated manually by entering small programs via switches in order to load and run a series of programs. As transistor technology evolved in the 1950s, central processing units and user terminals came into use by 1955. The mainframe computer model was devised, and modems, such as the Bell 101, allowed digital data to be transmitted over regular unconditioned telephone lines at low speeds by the late 1950s. These technologies made it possible to exchange data between remote computers. However, a fixed-line link was still necessary; the point-to-point communication model did not allow for direct communication between any two arbitrary systems. In addition, the applications were specific and not general purpose. Examples included SAGE (1958) and SABRE (1960).
Time-sharing
Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application in the United Kingdom for time-sharing in February 1959. In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider. Licklider, a vice president at Bolt Beranek and Newman, Inc. (BBN), promoted the idea of time-sharing as an alternative to batch processing. John McCarthy, at MIT, wrote a memo in 1959 that broadened the concept of time sharing to encompass multiple interactive user sessions, which resulted in the Compatible Time-Sharing System (CTSS) implemented at MIT. Other multi-user mainframe systems were developed, such as PLATO at the University of Illinois at Urbana-Champaign. In the early 1960s, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense funded further research into time-sharing at MIT through Project MAC.
Inspiration
J. C. R. Licklider, while working at BBN, proposed a computer network in his March 1960 paper Man-Computer Symbiosis:
In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication" which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within ARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network".
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.
Packet switching
The infrastructure for telephone systems at the time was based on circuit switching, which requires pre-allocation of a dedicated communication line for the duration of the call. Telegram services had developed store and forward telecommunication techniques. Western Union's Automatic Telegraph Switching System Plan 55-A was based on message switching. The U.S. military's AUTODIN network became operational in 1962. These systems, like SAGE and SABRE, still required rigid routing structures that were prone to a single point of failure.
The technology was considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link. In the early 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information would be transmitted across a "distributed" network, divided into what he called "message blocks".
In addition to being prone to a single point of failure, existing telegraphic techniques were inefficient and inflexible. Beginning in 1965 Donald Davies, at the National Physical Laboratory in the United Kingdom, developed a more advanced proposal of the concept, designed for high-speed computer networking, which he called packet switching, the term that would ultimately be adopted.
Packet switching is a technique for transmitting computer data by splitting it into very short, standardized chunks, attaching routing information to each of these chunks, and transmitting them independently through a computer network. It provides better bandwidth utilization than traditional circuit-switching used for telephony, and enables the connection of computers with different transmission and receive rates. It is a distinct concept to message switching.
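The idea can be illustrated with a toy sketch: split a message into short chunks, attach sequencing information to each, let the chunks travel independently (simulated here by shuffling), and reassemble them at the destination. This is purely illustrative and does not reflect any real protocol format.

```python
# Toy sketch of the packet-switching idea: split a message into short
# chunks, attach a sequence number to each, deliver the chunks in an
# arbitrary order, and reassemble. Not any real protocol.
import random

def packetize(message: bytes, payload_size: int = 8):
    """Split the message and attach a sequence number to each chunk."""
    return [(seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets):
    """Order the independently delivered packets by sequence number."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"packets may arrive out of order and still be reassembled"
packets = packetize(message)
random.shuffle(packets)            # simulate independent, out-of-order delivery
assert reassemble(packets) == message
print(f"{len(packets)} packets, message recovered intact")
```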
Networks that led to the Internet
NPL network
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks. Later that year, at the National Physical Laboratory (NPL) in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching. The following year, he described the use of "switching nodes" to act as routers in a digital communication network. The proposal was not taken up nationally but he produced a design for a local network to serve the needs of the NPL and prove the feasibility of packet switching using high-speed data transmission. To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. In 1967, he and his team were the first to use the term 'protocol' in a modern data-communication context.
In 1968, Davies began building the Mark I packet-switched network to meet the needs of his multidisciplinary laboratory and prove the technology under operational conditions. The network's development was described at a 1968 conference. Elements of the network became operational in early 1969, the first implementation of packet switching, and the NPL network was the first to use high-speed links. Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design. The Mark II version which operated from 1973 used a layered protocol architecture. In 1976, 12 computers and 75 terminal devices were attached, and more were added. The NPL team carried out simulation work on wide-area packet networks, including datagrams and congestion; and research into internetworking and secure communications. The network was replaced in 1986.
ARPANET
Robert Taylor was promoted to the head of the Information Processing Techniques Office (IPTO) at Defense Advanced Research Projects Agency (DARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system. As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at University of California, Berkeley, and one for the Compatible Time-Sharing System project at Massachusetts Institute of Technology (MIT). To Taylor, the need for networking became obvious from the wasteful duplication of resources these separate terminals represented.
Bringing in Larry Roberts from MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks (WANs). Wide area networks emerged during the late 1950s and became established during the 1960s. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's idea to use Interface Message Processors (IMP) to create a message switching network. At the conference, Roger Scantlebury presented Donald Davies' work on a hierarchical digital communications network using packet switching and referenced the work of Paul Baran at RAND. Roberts incorporated the packet switching and routing concepts of Davies and Baran into the ARPANET design and upgraded the proposed communications speed from 2.4 kbit/s to 50 kbit/s.
ARPA awarded the contract to build the network to Bolt Beranek & Newman. The "IMP guys", led by Frank Heart and Bob Kahn, developed the routing, flow control, software design and network control. The first ARPANET link was established between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at Stanford Research Institute (SRI) directed by Douglas Engelbart in Menlo Park, California at 22:30 hours on October 29, 1969.
By December 1969, a four-node network was connected by adding the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In the same year, Taylor helped fund ALOHAnet, a system designed by Professor Norman Abramson and others at the University of Hawaiʻi at Mānoa that transmitted data by radio between seven computers on four islands in Hawaii.
Steve Crocker formed the "Network Working Group" in 1969 at UCLA. Working with Jon Postel and others, he initiated and managed the Request for Comments (RFC) process, which is still used today for proposing and distributing contributions. RFC 1, entitled "Host Software", was written by Steve Crocker and published on April 7, 1969. The protocol for establishing links between network sites in the ARPANET, the Network Control Program (NCP), was completed in 1970. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Roberts presented the idea of packet switching to communications professionals and faced anger and hostility. Before the ARPANET was operating, they argued that router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economic without government subsidy. Baran faced the same rejection and thus failed to convince the military to construct a packet switching network.
Early international collaborations via the ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR), via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first international heterogeneous resource sharing network. Throughout the 1970s, Leonard Kleinrock developed the mathematical theory to model and measure the performance of packet-switching technology, building on his earlier work on the application of queueing theory to message switching systems. By 1981, the number of hosts had grown to 213. The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used.
Merit Network
The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host to host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host to host interactive connections, the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. In 1972, he began planning the network to explore alternatives to the early ARPANET design and to support internetworking research. First demonstrated in 1973, it was the first network to implement the end-to-end principle conceived by Donald Davies and make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams. Concepts implemented in this network influenced TCP/IP architecture.
X.25 and public data networks
Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (ITU-T) in the form of X.25 and related standards. X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET, the United Kingdom's high-speed national research and education network (NREN). The initial ITU Standard on X.25 was approved in March 1976. Existing networks, such as Telenet in the United States, adopted X.25, as did new public data networks such as DATAPAC in Canada and TRANSPAC in France. X.25 was supplemented by the X.75 protocol, which enabled internetworking between national PTT networks in Europe and commercial networks in North America.
The British Post Office, Western Union International, and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.
Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronous teleprinter (TTY) terminal protocols to reach a concentrator operated in the public network. Some networks, such as Telenet and CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.
UUCP and Usenet
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time) owned both by private individuals and small companies. Sublink Network evolved into one of the first examples of Internet technology coming into use through popular diffusion.
1973–1989: Merging the networks and creating the Internet
TCP/IP
With so many different networking methods seeking interconnection, a method was needed to unify them. Louis Pouzin initiated the CYCLADES project in 1972, building on the work of Donald Davies and the ARPANET. An International Network Working Group formed in 1972; active members included Vint Cerf from Stanford University, Alex McKenzie from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin and Hubert Zimmermann from IRIA. Pouzin coined the term catenet for concatenated network. Bob Metcalfe at Xerox PARC outlined the idea of Ethernet and PARC Universal Packet (PUP) for internetworking. Bob Kahn, now at DARPA, recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetworking protocol. Instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible.
Cerf and Kahn published their ideas in May 1974, which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network. The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design, using two simplex communication channels for each user session.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund the development of prototype software. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London (UCL). After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three network demonstration was conducted including the ARPANET, the SRI's Packet Radio Van on the Packet Radio Network and the Atlantic Packet Satellite Network (SATNET) including a node at UCL.
The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe among others, proposed separating TCP's routing and transmission control functions into two discrete layers, which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978. Version 4 was described in IETF publication RFC 791 (September 1981), 792 and 793. It was installed on SATNET in 1982 and the ARPANET in January 1983 after the DoD made it standard for all military computer networking. This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model or DARPA model. Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy Estrin, Richard Karp, and Gérard Le Lann with important work on the design and testing. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems.
From ARPANET to NSFNET
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. In July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.
Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981, NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. CSNET played a central role in popularizing the Internet outside the ARPANET.
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.
NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 speeds (45 Mbit/s) starting in 1991. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSINet and Sprint. As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic.
The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.
Transition towards the Internet
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as a short form of internetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formed NSFNET project in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network.
Opening the Internet and the fiber optic backbone to corporate and consumer users increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using "interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing". This technology became known as wave division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995. To develop a mass capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., which deployed the world's first dense WDM system on the Sprint fiber network in June 1996. This was referred to as the real start of optical networking.
As interest in networking grew by needs of collaboration, exchange of data, and access of remote computing resources, the Internet technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.
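A small sketch of the route aggregation that CIDR makes possible, using Python's standard ipaddress module; the prefixes below are arbitrary documentation addresses chosen purely for illustration.

```python
# Sketch of CIDR route aggregation: several contiguous prefixes collapse
# into a single routing-table entry, which is how CIDR shrank routing tables.
import ipaddress

routes = [
    ipaddress.ip_network("198.51.100.0/26"),
    ipaddress.ip_network("198.51.100.64/26"),
    ipaddress.ip_network("198.51.100.128/25"),
]

# collapse_addresses merges adjacent and contained networks where possible.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)   # [IPv4Network('198.51.100.0/24')]
```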
Optical networking
The MOS transistor underpinned the rapid growth of telecommunication bandwidth over the second half of the 20th century. To address the need for transmission capacity beyond that provided by radio, satellite and analog copper telephone lines, engineers developed optical communications systems based on fiber optic cables powered by lasers and optical amplifier techniques.
The concept of lasing arose from a 1917 paper by Albert Einstein, "On the Quantum Theory of Radiation." Einstein expanded upon a dialog with Max Planck on how atoms absorb and emit light, part of a thought process that, with input from Erwin Schrödinger, Werner Heisenberg and others, gave rise to Quantum Mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only by spontaneous emission, such as the light emitted by an incandescent light or the Sun, but also by stimulated emission.
Forty years later, on November 13, 1957, Columbia University physics student Gordon Gould first realized how to make light by stimulated emission through a process of optical amplification. He coined the term LASER for this technology—Light Amplification by Stimulated Emission of Radiation. Using Gould's light amplification method (patented as "Optically Pumped Laser Amplifier"), Theodore Maiman made the first working laser on May 16, 1960.
Gould co-founded Optelecom, Inc. in 1973 to commercialize his inventions in optical fiber telecommunications, just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems, which it delivered to Chevron and the US Army Missile Defense. Three years later, GTE deployed the first optical telephone system in 1977 in Long Beach, California. By the early 1980s, optical networks powered by lasers, LED and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers.
TCP/IP goes global (1980s)
CERN and the European Internet
In 1982, NORSAR/NDRE and Peter Kirstein's research group at University College London (UCL) left the ARPANET and began to use TCP/IP over SATNET. There were 40 British academic research groups using UCL's link to the ARPANET in 1975.
Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established.
The Computer Science Network (CSNET) began operation in 1981 to provide networking connections to institutions that could not connect directly to ARPANET. Its first international connection was to Israel in 1984. Soon after, connections were established to computer science departments in Canada, France, and Germany.
In 1988, the first international connections to NSFNET were established by France's INRIA and by Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden. In January 1989, CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
The United Kingdom's national research and education network (NREN), JANET, began operation in 1984 using the UK's Coloured Book protocols and connected to NSFNET in 1989. In 1991, JANET adopted Internet Protocol on the existing network. The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol. The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992.
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite would result in the best and most robust computer networks.
The link to the Pacific
South Korea set up a two-node domestic TCP/IP network in 1982, the System Development Network (SDN), adding a third node the following year. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix-Copy); connected to CSNET in December 1984; and formally connected to the NSFNET in 1990.
Japan, which had built the UUCP-based network JUNET in 1984, connected to CSNET, and later to NSFNET in 1989, marking the spread of the Internet to Asia.
In Australia, ad hoc networking to ARPA and between Australian universities formed in the late 1980s, based on various technologies such as X.25, UUCPNet, and CSNET. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia.
New Zealand adopted the UK's Coloured Book protocols as an interim standard and established its first international IP connection to the U.S. in 1989.
A "digital divide" emerges
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they built organizations for Internet resource administration and to share operational experience, which enabled more transmission facilities to be put into place.
Africa
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.
Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As with other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.
Asia and Oceania
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).
In South Korea, VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet.
The People's Republic of China established its first TCP/IP college network, Tsinghua University's TUNET, in 1991. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to create its own digital divide by implementing a country-wide content filter.
Japan hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.
Latin America
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
1990–2003: Rise of the global Internet, Web 1.0
Development
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which eventually led to the official barring of UUCPNet use of ARPANET and NSFNET connections.
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. In 1989, MCI Mail became the first commercial email provider to get an experimental gateway to the Internet. The first commercial dialup ISP in the United States was The World, which opened in 1989.
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.
By 1990, ARPANET's goals had been fulfilled and new networking technologies exceeded the original scope and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS) which continued to provide support for the supercomputing centers and research and education in the United States.
An event held on 11 January 1994, The Superhighway Summit at UCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications".
Internet use in wider society
The invention of the World Wide Web by Tim Berners-Lee at CERN, as an application on the Internet, brought many social and commercial uses to what was, at the time, a network of networks for academic and research institutions. The Web opened to the public in 1991 and began to enter general use in 1993–4, when websites for everyday use started to become available.
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To provide context for this period: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used mainly for business and were not yet a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked the means to record or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (DVD and, to an extent still, from floppy disc to CD). Enabling technologies used from the early 2000s such as PHP, modern JavaScript and Java, technologies such as AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks, which enabled and simplified the speed of web development, largely awaited invention and their eventual widespread adoption.
The Internet was widely used for mailing lists, emails, creating and distributing maps with tools like MapQuest, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards, the systems used were static and lacked widespread social engagement. It took a number of events in the early 2000s for the Internet to develop from a communications technology into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.)
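To illustrate the "pages built using Server Side Includes or CGI" point above, here is a minimal, hedged sketch of a CGI-era page generator in Python (the page content is invented for illustration): the web server executed a small program like this once per request and returned whatever it printed to the browser.

    #!/usr/bin/env python3
    # Minimal CGI-style script: the web server runs this program once per request
    # and sends its standard output back to the browser.
    print("Content-Type: text/html")
    print()  # blank line separates the HTTP header from the body
    print("<html><body>")
    print("<h1>Guestbook</h1>")
    print("<p>Generated per request by a small script, not a persistent web application.</p>")
    print("</body></html>")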
During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow.
The history of the World Wide Web up to around 2004 was retrospectively named and described by some as "Web 1.0".
IPv6
In the final stage of IPv4 address exhaustion, the last IPv4 address block was assigned in January 2011 at the level of the regional Internet registries. IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. IPv4 is in the process of replacement by IPv6, its successor, which uses 128-bit addresses, providing 2^128 addresses, i.e. approximately 3.4 x 10^38, a vastly increased address space. The shift to IPv6 is expected to take a long time to complete.
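As a quick illustration of the arithmetic above, the following Python sketch computes both address-space sizes and parses an address from the IPv6 documentation range (the specific address is just an example):

    import ipaddress

    ipv4_space = 2 ** 32    # 4,294,967,296 addresses
    ipv6_space = 2 ** 128   # roughly 3.4 x 10**38 addresses

    print(f"IPv4 address space: {ipv4_space:,}")
    print(f"IPv6 address space: {ipv6_space:.3e}")

    # A 128-bit IPv6 address from the 2001:db8::/32 documentation prefix:
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.version, int(addr).bit_length() <= 128)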
2004–present: Web 2.0, global ubiquity, social media
The rapid technical advances that would propel the Internet into its place as a social system, which has completely transformed the way humans interact with each other, took place during a relatively short period from around 2005 to 2010, coinciding with the point in the late 2000s at which the number of IoT devices surpassed the number of humans alive. They included:
The call to "Web 2.0" in 2004 (first suggested in 1999),
Accelerating adoption and commoditization among households of, and familiarity with, the necessary hardware (such as computers).
Accelerating storage technology and data access speeds – hard drives emerged, took over from far smaller, slower floppy discs, and grew from megabytes to gigabytes (and by around 2010, terabytes), RAM from hundreds of kilobytes to gigabytes as typical amounts on a system, and Ethernet, the enabling technology for TCP/IP, moved from common speeds of kilobits to tens of megabits per second, to gigabits per second.
High speed Internet and wider coverage of data connections, at lower prices, allowing larger traffic rates, more reliable simpler traffic, and traffic from more locations,
The public's accelerating perception of the potential of computers to create new means and approaches to communication, the emergence of social media and websites such as Twitter and Facebook to their later prominence, and global collaborations such as Wikipedia (which existed before but gained prominence as a result),
The mobile device revolution, particularly with smartphones and tablet computers becoming widespread, which began to provide easy access to the Internet to much of human society of all ages, in their daily lives, and allowed them to share, discuss, and continually update, inquire, and respond.
Non-volatile RAM rapidly grew in size and reliability, and decreased in price, becoming a commodity capable of enabling high levels of computing activity on these small handheld devices, as well as enabling solid-state drives (SSDs).
An emphasis on power-efficient processor and device design, rather than purely high processing power; one of the beneficiaries of this was Arm, a British company which had focused since the 1980s on powerful but low-cost, simple microprocessors. The ARM architecture family rapidly gained dominance in the market for mobile and embedded devices.
Web 2.0
The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote:
The term resurfaced during 2002–2004, and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.
Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking services, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in his 3rd edition of New Media, described what he believed to characterize the differences between Web 1.0 and Web 2.0:
"[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy)".
This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.
Telephone networks convert to VoIP
Telephone systems have been slowly adopting Voice over IP since 2003. Early experiments proved that voice can be converted to digital packets and sent over the Internet; at the receiving end, the packets are collected and converted back to analog voice.
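A toy sketch of the idea described above (not a real VoIP stack; the sample values and packet size are invented): the signal is chopped into sequence-numbered packets, which may arrive out of order, and the receiver reorders and reassembles them.

    import random

    samples = [0.0, 0.3, 0.9, 0.4, -0.2, -0.8, -0.1, 0.5]  # pretend voice samples

    packet_size = 2
    packets = [(seq, samples[i:i + packet_size])
               for seq, i in enumerate(range(0, len(samples), packet_size))]

    random.shuffle(packets)  # packets may traverse the network out of order

    reordered = [chunk for _, chunk in sorted(packets)]  # receiver sorts by sequence number
    reconstructed = [s for chunk in reordered for s in chunk]
    assert reconstructed == samples  # the original signal is recovered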
The mobile revolution
The process of change that generally coincided with "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based), became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "App" emerged (short for "application program"), as did the "App store".
This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at all times. With the ability to access the internet from cell phones came a change in the way media was consumed. Media consumption statistics show that over half of media consumption between those aged 18 and 34 were using a smartphone.
Networking in outer space
The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet Protocol does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and was scheduled to continue until March 2014.
This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.
Internet governance
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.
NIC, InterNIC, IANA, and ICANN
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his death in 1998.
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
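For illustration, the contrast between a HOSTS.TXT-style static table and a Domain Name System lookup can be sketched in a few lines of Python (the table entries are invented; the live lookup needs network access and simply asks the system resolver):

    import socket

    # HOSTS.TXT-style: a manually distributed name -> address table (entries are made up)
    hosts_txt = {
        "sri-nic": "10.0.0.73",
        "ucla-host": "10.3.0.1",
    }
    print(hosts_txt["sri-nic"])

    # DNS-style: the resolver queries the distributed Domain Name System
    print(socket.gethostbyname("example.com"))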
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region.
The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry.
In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry / registrar system, with competition among registry service providers to serve each top-level-domain and multiple competing registrars offering DNS services to end-users.
Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.
The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited starting with the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting in July 1987 was the first meeting with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as ca. 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States.
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.
RFCs
RFCs are the main documentation for the work of the IAB, IESG, IETF, and IRTF. Originally intended as requests for comments, RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969. These technical memos documented aspects of ARPANET development. They were edited by Jon Postel, the first RFC Editor.
RFCs cover a wide range of information, including proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics. RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.
The Internet Society
The Internet Society (ISOC) is an international, nonprofit organization founded during 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, US, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.
ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making.
Globalization and Internet governance in the 21st century
Since the 1990s, the Internet's governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN first to end its relationship with the University of Southern California in 2000, and then, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.
In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.
Tim Berners-Lee, inventor of the web, was becoming concerned about threats to the web's future, and in November 2009, at the IGF in Washington DC, he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all. In November 2019, at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).
Politicization of the Internet
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.
Examples include political activities such as public protest and canvassing of support and votes, but also:
The spreading of ideas and opinions;
Recruitment of followers, and "coming together" of members of the public, for ideas, products, and causes;
Providing and widely distributing and sharing information that might be deemed sensitive or that relates to whistleblowing (and efforts by specific countries to prevent this by censorship);
Criminal activity and terrorism (and resulting law enforcement use, together with its facilitation by mass surveillance);
Politically motivated fake news.
Net neutrality
On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress HR discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times.
On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept."
On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations.
On December 14, 2017, the FCC repealed their March 12, 2015 decision by a 3–2 vote regarding net neutrality rules.
Use and culture
Email and Usenet
Email has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT.
The ARPANET computer network made a large contribution to the evolution of electronic mail. An experimental inter-system mail transfer took place on the ARPANET shortly after its creation. In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.
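The mailbox@host convention is still how addresses are written today; as a small illustration (the address itself is fictional), Python's standard library can split an address along the @ sign:

    from email.utils import parseaddr

    _, address = parseaddr("Example User <user@example.org>")
    mailbox, _, host = address.partition("@")
    print(mailbox)  # 'user'        -> the mailbox name
    print(host)     # 'example.org' -> the host (today, the mail domain)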
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental to allow people to access resources that were not available due to the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using ftp commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as the earlier Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
File sharing
Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.
In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003.
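A toy sketch of the hybrid architecture described above (peer names and file names are invented): a central index records which peers hold which files, while the actual transfer would happen directly between peers.

    # Central index: file name -> set of peers advertising that file
    index = {}

    def register(peer, files):
        for name in files:
            index.setdefault(name, set()).add(peer)

    def search(name):
        return sorted(index.get(name, set()))

    register("peer-a:6699", ["song.mp3", "demo.wav"])
    register("peer-b:6699", ["song.mp3"])

    print(search("song.mp3"))  # ['peer-a:6699', 'peer-b:6699']
    # The requesting client would then download directly from one of these peers;
    # only indexing and discovery go through the central server.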
All of these tools are general purpose and can be used to share a wide variety of content, but sharing of music files, software, and later movies and videos are major uses. And while some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial with charges of theft of intellectual property on the one hand and charges of censorship on the other.
File hosting services
File hosting allowed people to expand beyond their computer's hard drive and "host" their files on a server. Most file hosting services offer free storage, as well as larger storage amounts for a fee. These services have greatly expanded the internet for business and personal use.
Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with themselves and other users. Not only does this application allow for file editing, hosting, and sharing; it also provides access to Google's own free office programs, such as Google Docs, Google Slides, and Google Sheets. This application has served as a useful tool for university professors and students, as well as those who are in need of cloud storage.
Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser based. Today, Dropbox focuses on keeping workers and files in sync and efficient.
Mega, with over 200 million users, is an encrypted storage and communication system that offers users free and paid storage, with an emphasis on privacy. As three of the largest file hosting services, Google Drive, Dropbox, and Mega exemplify the core ideas and values of these services.
Online piracy
The earliest form of online piracy began with a P2P (peer-to-peer) music sharing service named Napster, launched in 1999. Services like LimeWire, The Pirate Bay, and BitTorrent allowed anyone to engage in online piracy, sending ripples through the media industry. With online piracy came a change in the media industry as a whole.
Mobile telephone data traffic
Total global mobile data traffic reached 588 exabytes during 2020, a 150-fold increase from 3.86 exabytes/year in 2010. Most recently, smartphones accounted for 95% of this mobile data traffic with video accounting for 66% by type of data. Mobile traffic travels by radio frequency to the closest cell phone tower and its base station where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting, of which 2.1 trillion messages were logged in 2020. The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message of "Merry Christmas" over a commercial cell phone network to the CEO of Vodafone.
The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet services access on mobile phones was limited until prices came down from that model, and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999 and this is considered the birth of the mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for their BlackBerry product was launched in America. To make efficient use of the small screen and tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.
Growth in demand
Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021, when the number of active Internet users reached 4.66 billion people, representing half of the global population. Demand for data, and the capacity to satisfy this demand, were forecast to increase to 717 terabits per second in 2021. This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network. These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world. Continued growth in traffic is expected for the foreseeable future from a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube.
Historiography
There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. Documenting the early developments that led to the internet is especially difficult.
Notable works on the subject were published by Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins Of The Internet (1996), Roy Rosenzweig, Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet (1998), and Janet Abbate, Inventing the Internet (2000).
Most scholarship and literature on the Internet lists ARPANET as the prior network that was iterated on and studied to create it, although other early computer networks and experiments existed alongside or before ARPANET.
These histories of the Internet have since been characterized as teleologies or Whig history; that is, they take the present to be the end point toward which history has been unfolding based on a single cause.
In addition to these characteristics, historians have cited methodological problems arising in their work.
See also
History of email
History of hypertext
History of telecommunication
Index of Internet-related articles
Internet activism
List of Internet pioneers
MH & xmh: Email for Users & Programmers
Nerds 2.0.1 A Brief History of the Internet
On the Internet, nobody knows you're a dog
Outline of the Internet
References
Sources
Further reading
External links
Internet History Timeline – Computer History Museum
Histories of the Internet – Internet Society
History of the Internet, a short animated film (2009)
Internet
Articles containing video clips
History of computing
Internet governance | History of the Internet | [
"Technology"
] | 17,210 | [
"Computers",
"History of computing"
] |
13,694 | https://en.wikipedia.org/wiki/Microsoft%20Windows%20version%20history | Microsoft Windows was announced by Bill Gates on November 10, 1983, two years before it was first released. Microsoft introduced Windows as a graphical user interface for MS-DOS, which had been introduced two years earlier, on August 12, 1981. The product line evolved in the 1990s from an operating environment into a fully complete, modern operating system over two lines of development, each with its own separate codebase.
The first versions of Windows (1.0 through to 3.11) were graphical shells that ran from MS-DOS. Windows 95, though still being based on MS-DOS, was its own operating system. Windows 95 also had a significant amount of 16-bit code ported from Windows 3.1. Windows 95 introduced many features that have been part of the product ever since, including the Start menu, the taskbar, and Windows Explorer (renamed File Explorer in Windows 8). In 1997, Microsoft released Internet Explorer 4 which included the (at the time controversial) Windows Desktop Update. It aimed to integrate Internet Explorer and the web into the user interface and also brought many new features into Windows, such as the ability to display JPEG images as the desktop wallpaper and single window navigation in Windows Explorer. In 1998, Microsoft released Windows 98, which also included the Windows Desktop Update and Internet Explorer 4 by default. The inclusion of Internet Explorer 4 and the Desktop Update led to an antitrust case in the United States. Windows 98 included USB support out of the box, and also plug and play, which allows devices to work when plugged in without requiring a system reboot or manual configuration. Windows Me, the last DOS-based version of Windows, was aimed at consumers and released in 2000. It introduced System Restore, Help and Support Center, updated versions of the Disk Defragmenter and other system tools.
In 1993, Microsoft released Windows NT 3.1, the first version of the newly developed Windows NT operating system, followed by Windows NT 3.5 in 1994, and Windows NT 3.51 in 1995. "NT" is an initialism for "New Technology". Unlike the Windows 9x series of operating systems, it was a fully 32-bit operating system. NT 3.1 introduced NTFS, a file system designed to replace the older File Allocation Table (FAT) which was used by DOS and the DOS-based Windows operating systems. In 1996, Windows NT 4.0 was released, which included a fully 32-bit version of Windows Explorer written specifically for it, making the operating system work like Windows 95. Windows NT was originally designed to be used on high-end systems and servers, but with the release of Windows 2000, many consumer-oriented features from Windows 95 and Windows 98 were included, such as the Windows Desktop Update, Internet Explorer 5, USB support and Windows Media Player. These consumer-oriented features were further extended in Windows XP in 2001, which included a new visual style called Luna, a more user-friendly interface, updated versions of Windows Media Player and Internet Explorer 6 by default, and extended features from Windows Me, such as the Help and Support Center and System Restore. Windows Vista, which was released in 2007, focused on securing the Windows operating system against computer viruses and other malicious software by introducing features such as User Account Control. New features include Windows Aero, updated versions of the standard games (e.g. Solitaire), Windows Movie Maker, and Windows Mail to replace Outlook Express. Despite this, Windows Vista was critically panned for its poor performance on older hardware and its at-the-time high system requirements. Windows 7 followed in 2009 nearly three years after its launch, and despite it technically having higher system requirements, reviewers noted that it ran better than Windows Vista. Windows 7 removed many applications, such as Windows Movie Maker, Windows Photo Gallery and Windows Mail, instead requiring users to download separate Windows Live Essentials to gain some of those features and other online services. Windows 8, which was released in 2012, introduced many controversial changes, such as the replacement of the Start menu with the Start Screen, the removal of the Aero interface in favor of a flat, colored interface as well as the introduction of "Metro" apps (later renamed to Universal Windows Platform apps), and the Charms Bar user interface element, all of which received considerable criticism from reviewers. Windows 8.1, a free upgrade to Windows 8, was released in 2013.
The following version of Windows, Windows 10, which was released in 2015, reintroduced the Start menu and added the ability to run Universal Windows Platform apps in a window instead of always in full screen. Windows 10 was generally well-received, with many reviewers stating that Windows 10 is what Windows 8 should have been.
The latest version of Windows, Windows 11, was released to the general public on October 5, 2021. Windows 11 incorporates a redesigned user interface, including a new Start menu, a visual style featuring rounded corners, and a new layout for the Microsoft Store, and also included Microsoft Edge by default.
Windows 1.0
Windows 1.0, the first independent version of Microsoft Windows, released on November 20, 1985, achieved little popularity. The project was briefly codenamed "Interface Manager" before the windowing system was implemented (contrary to popular belief, this was not the original name for Windows), and Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to customers.
Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS, and shared the latter's inherent flaws.
The first version of Microsoft Windows included a simple graphics painting program called Windows Paint; Windows Write, a simple word processor; an appointment calendar; a card-filer; a notepad; a clock; a control panel; a computer terminal; Clipboard; and RAM driver. It also included the MS-DOS Executive and a game called Reversi.
Microsoft had worked with Apple Computer to develop applications for Apple's new Macintosh computer, which featured a graphical user interface. As part of the related business negotiations, Microsoft had licensed certain aspects of the Macintosh user interface from Apple; in later litigation, a district court summarized these aspects as "screen displays".
In the development of Windows 1.0, Microsoft intentionally limited its borrowing of certain GUI elements from the Macintosh user interface, to comply with its license. For example, windows were only displayed "tiled" on the screen; that is, they could not overlap or overlie one another.
On December 31, 2001, Microsoft declared Windows 1.0 obsolete and stopped providing support and updates for the system.
OS/2 and Windows 2.x
During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the aforementioned protected mode of the Intel 80286 processor and up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.
IBM licensed Windows' GUI for OS/2 as Presentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical. Presentation Manager was not available with OS/2 until version 1.1, released in 1988. Its API was incompatible with Windows. Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the FAT file system.
By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They cooperated with each other in developing their PC operating systems and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop the next version, OS/2 3.0.
This agreement soon fell apart however, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).
After an interim 1.3 version to fix up many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Still, much of the system had 16-bit code internally which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.
Microsoft Windows version 2.0 (2.01 and 2.03 internally) came out on December 9, 1987, and proved slightly more popular than its predecessor. Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.
Microsoft Windows received a major boost around this time when Aldus PageMaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the start of the success of Windows.
Like prior versions of Windows, version 2.0 could use the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasker like DESQview, which used the 286 protected mode. It was also the first version to support the High Memory Area when running on an Intel 80286 compatible processor. This edition was renamed Windows/286 with the release of Windows 2.1.
A separate Windows/386 edition had a protected mode kernel, which required an 80386 compatible processor, with LIM-standard EMS emulation and VxD drivers in the kernel. All Windows and DOS-based applications at the time were real mode, and Windows/386 could run them over the protected mode kernel by using the virtual 8086 mode, which was new with the 80386 processor.
Version 2.1 came out on May 27, 1988, followed by version 2.11 on March 13, 1989; they included a few minor changes.
In Apple Computer, Inc. v. Microsoft Corp., version 2.03, and later 3.0, faced challenges from Apple over its overlapping windows and other features Apple charged mimicked the ostensibly copyrighted "look and feel" of its operating system and "embodie[d] and generated a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of Apple's 189 claims of copyright infringement, and ruled that most of the remaining 10 were over uncopyrightable ideas.
On December 31, 2001, Microsoft declared Windows 2.x obsolete and stopped providing support and updates for the system.
Windows 3.0
Windows 3.0, released in May 1990, improved capabilities given to native applications. It also allowed users to better multitask older MS-DOS based software compared to Windows/386, thanks to the introduction of virtual memory.
Windows 3.0's user interface finally resembled a serious competitor to the user interface of the Macintosh computer. PCs had improved graphics by this time, due to VGA video cards, and the protected/enhanced mode allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in real, standard, or 386 enhanced modes, and was compatible with any Intel processor from the 8086/8088 up to the 80286 and 80386. This was the first version to run Windows programs in protected mode, although the 386 enhanced mode kernel was an enhanced version of the protected mode kernel in Windows/386.
Windows 3.0 received two updates. A few months after introduction, Windows 3.0a was released as a maintenance release, resolving bugs and improving stability. A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released in October 1991. This was bundled with "multimedia upgrade kits", comprising a CD-ROM drive and a sound card, such as the Creative Labs Sound Blaster Pro. This version was the precursor to the multimedia features available in Windows 3.1 (first released in April 1992) and later, and was part of Microsoft's specification for the Multimedia PC.
The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans. Support was discontinued on December 31, 2001.
Windows 3.1
In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1 (first released in April 1992), which included several improvements to Windows 3.0, such as display of TrueType scalable fonts (developed jointly with Apple), improved disk performance in 386 Enhanced Mode, multimedia support, and bugfixes. It also removed Real Mode, and only ran on an 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992.
In 1992 and 1993, Microsoft released Windows for Workgroups (WfW), which was available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. There were two versions of Windows for Workgroups – 3.1 and 3.11. Unlike prior versions, Windows for Workgroups 3.11 ran in 386 Enhanced Mode only, and needed at least an 80386SX processor. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks.
All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software.
On December 31, 2001, Microsoft declared Windows 3.1 obsolete and stopped providing support and updates for the system. However, OEM licensing for Windows for Workgroups 3.11 on embedded systems continued to be available until November 1, 2008.
Windows NT 3.x
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VAX/VMS at Digital Equipment Corporation. Microsoft hired him in October 1988 to create a successor to OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC called MICA, and when DEC dropped the project he brought the expertise and around 20 engineers with him to Microsoft.
Windows NT Workstation (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in beta form for developers at the July 1992 Professional Developers Conference in San Francisco. At the conference, Microsoft announced its intention to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, codenamed Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated, and as a result NT and Chicago were not unified until Windows XP; although the business-oriented Windows 2000 had already unified most of the system's internals, it was XP that was sold to home consumers like Windows 95 and came to be viewed as the final unified OS. Parts of Cairo have still not made it into Windows: most notably the WinFS file system, the much-touted object file system of Cairo. Microsoft announced in 2006 that it would not make a separate release of WinFS for Windows XP and Windows Vista and would instead gradually incorporate the technologies developed for WinFS into other products, notably Microsoft SQL Server.
Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although NT allowed for good performance and fuller exploitation of system resources, it was resource-intensive on limited hardware and thus only suitable for larger, more expensive machines.
However, these same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming common). NT also had advanced network connectivity options and NTFS, an efficient file system. Windows NT version 3.51 was Microsoft's entry into this field, and took away market share from Novell (the dominant player) in the following years.
One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three levels of implementation: the complete one for Windows NT, a subset for Chicago (originally called Win32c) missing features primarily of interest to enterprise customers (at the time) such as security and Unicode support, and a more limited subset called Win32s which could be used on Windows 3.1 systems. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures.
Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel.
As released, Windows NT 3.x went through three versions (3.1, 3.5, and 3.51); the changes between them were primarily internal, back-end improvements. The 3.5 release added support for new types of hardware and improved performance and data reliability; the 3.51 release was primarily intended to update the Win32 APIs to be compatible with software being written for the Win32c APIs in what became Windows 95. Support for Windows NT 3.51 ended in 2001 and 2002 for the Workstation and Server editions, respectively.
Windows 95
After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system codenamed Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new object-oriented GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance, and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft gained doubly from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; secondly, although traces of DOS were never completely removed from the system and MS-DOS 7 was loaded briefly as part of the boot process, Windows 95 applications ran solely in 386 enhanced mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me did not support more than 512 megabytes of physical RAM without obscure system tweaks. Three years after its introduction, Windows 95 was succeeded by Windows 98.
IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped pre-installed with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of the Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into the United States v. Microsoft case, citing unfair marketing tactics on Microsoft's part.
Microsoft went on to release five different versions of Windows 95:
Windows 95 – original release
Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation
Windows 95 B (OSR2) – included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support
Windows 95 B USB (OSR2.1) – included basic USB support
Windows 95 C (OSR2.5) – included all the above features, plus IE 4.0; this was the last 95 version produced
OSR2, OSR2.1, and OSR2.5 were not released to the general public, rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The first Microsoft Plus! add-on pack was sold for Windows 95. Microsoft ended extended support for Windows 95 on December 31, 2001.
Windows NT 4.0
Microsoft released the successor to NT 3.51, Windows NT 4.0, on August 24, 1996, one year after the release of Windows 95. It was Microsoft's primary business-oriented operating system until the introduction of Windows 2000. Major new features included the new Explorer shell from Windows 95, scalability and feature improvements to the core architecture, kernel, USER32, COM and MSRPC.
Windows NT 4.0 came in five versions:
Windows NT 4.0 Workstation
Windows NT 4.0 Server
Windows NT 4.0 Server, Enterprise Edition (includes support for 8-way SMP and clustering)
Windows NT 4.0 Terminal Server
Windows NT 4.0 Embedded
Microsoft ended mainstream support for Windows NT 4.0 Workstation on June 30, 2002, and ended extended support on June 30, 2004, while Windows NT 4.0 Server mainstream support ended on December 31, 2002, and extended support ended on December 31, 2004. Both editions were succeeded by Windows 2000 Professional and the Windows 2000 Server Family, respectively.
Microsoft ended mainstream support for Windows NT 4.0 Embedded on June 30, 2003, and ended extended support on July 11, 2006. This edition was succeeded by Windows XP Embedded.
Windows 98
On June 25, 1998, Microsoft released Windows 98 (code-named Memphis), three years after the release of Windows 95, two years after the release of Windows NT 4.0, and 21 months before the release of Windows 2000. It included new hardware drivers and the FAT32 file system, which supports disk partitions larger than 2 GB (first introduced in Windows 95 OSR2). USB support in Windows 98 was marketed as a vast improvement over Windows 95. The release continued the controversial inclusion of the Internet Explorer browser with the operating system that started with Windows 95 OEM Service Release 1. The action eventually led to the filing of the United States v. Microsoft case, dealing with the question of whether Microsoft was introducing unfair practices into the market in an effort to eliminate competition from other companies such as Netscape.
In 1999, Microsoft released Windows 98 Second Edition, an interim release. One of the more notable new features was the addition of Internet Connection Sharing, a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. Hardware support through device drivers was increased and this version shipped with Internet Explorer 5. Many minor problems that existed in the first edition were fixed making it, according to many, the most stable release of the Windows 9x family.
Mainstream support for Windows 98 and 98 SE ended on June 30, 2002. Extended support ended on July 11, 2006.
Windows 2000
Microsoft released Windows 2000 on February 17, 2000, as the successor to Windows NT 4.0, 17 months after the release of Windows 98. It has the version number Windows NT 5.0, and it was Microsoft's business-oriented operating system from its release until 2001, when it was succeeded by Windows XP. Windows 2000 had four official service packs. It was successfully deployed both on the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were also incorporated, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack product activation.
While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users.
Windows 2000 was available in four editions:
Windows 2000 Professional
Windows 2000 Server
Windows 2000 Advanced Server
Windows 2000 Datacenter Server
Microsoft ended support for both Windows 2000 and Windows XP Service Pack 2 on July 13, 2010.
Windows Me
On September 14, 2000, Microsoft released a successor to Windows 98 called Windows Me, short for "Millennium Edition". It was the last DOS-based operating system from Microsoft. Windows Me introduced a new multimedia-editing application called Windows Movie Maker, came standard with Internet Explorer 5.5 and Windows Media Player 7, and debuted the first version of System Restore – a recovery utility that enables the operating system to revert system files back to a prior date and time. System Restore was a notable feature that would continue to thrive in all later versions of Windows.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. Many of the new features were available from the Windows Update site as updates for older Windows versions (System Restore and Windows Movie Maker were exceptions). Windows Me was criticized for stability issues, as well as for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition". Windows Me was the last operating system to be based on the Windows 9x (monolithic) kernel and MS-DOS, with its successor Windows XP being based on Microsoft's Windows NT kernel instead.
Windows XP, Server 2003 series and Fundamentals for Legacy PCs
On October 25, 2001, Microsoft released Windows XP (codenamed "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP. Windows XP uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core to the consumer market, to replace the aging Windows 9x branch. The initial release was met with considerable criticism, particularly in the area of security, leading to the release of three major service packs. Windows XP SP1 was released in September 2002, SP2 in August 2004, and SP3 in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP remained Microsoft's flagship operating system for over five years, from its public release on October 25, 2001, until January 30, 2007, when it was succeeded by Windows Vista.
Windows XP is available in a number of versions:
Windows XP Home Edition, for home users
Windows XP Professional, for business and power users; it contained a number of features not available in Home Edition.
Windows XP N, like the above editions, but without a default installation of Windows Media Player, as mandated by a European Union ruling
Windows XP Media Center Edition (MCE), released in October 2002 for desktops and notebooks with an emphasis on home entertainment. Contained all features offered in Windows XP Professional and the Windows Media Center. Subsequent versions are the same but have an updated Windows Media Center.
Windows XP Media Center Edition 2004, released on September 30, 2003
Windows XP Media Center Edition 2005, released on October 12, 2004. Included the Royale theme, support for Media Center Extenders, themes and screensavers from Microsoft Plus! for Windows XP. The ability to join an Active Directory domain is disabled.
Windows XP Tablet PC Edition, for tablet PCs
Windows XP Tablet PC Edition 2005
Windows XP Embedded, for embedded systems
Windows XP Starter Edition, for new computer users in developing countries
Windows XP Professional x64 Edition, released on April 25, 2005, for home and workstation systems utilizing 64-bit processors based on the x86-64 instruction set originally developed by AMD as AMD64; Intel calls their version Intel 64. Internally, XP x64 was a somewhat updated version of Windows based on the Server 2003 codebase.
Windows XP 64-bit Edition, a version for Intel's Itanium line of processors; it maintains 32-bit compatibility solely through a software emulator. It is roughly analogous to Windows XP Professional in features. It was discontinued in September 2005 when the last vendor of Itanium workstations stopped shipping Itanium systems marketed as "Workstations".
Windows Server 2003
On April 25, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It is based on the Windows NT 5.2 kernel. A few services not essential for server environments are disabled by default for stability reasons; the most noticeable are the "Windows Audio" and "Themes" services, which users have to enable manually to get sound or the "Luna" look of Windows XP. Hardware acceleration for the display is also turned off by default; users have to raise the acceleration level themselves if they trust the display card driver.
In December 2005, Microsoft released Windows Server 2003 R2, which is actually Windows Server 2003 with SP1 (Service Pack 1), together with an add-on package.
Among the new features are a number of management features for branch offices, file serving, printing and company-wide identity integration.
Windows Server 2003 is available in six editions:
Web Edition (32-bit)
Standard Edition (32 and 64-bit)
Enterprise Edition (32 and 64-bit)
Datacenter Edition (32 and 64-bit)
Small Business Server (32-bit)
Storage Server (OEM channel only)
Windows Server 2003 R2, an update of Windows Server 2003, was released to manufacturing on December 6, 2005. It is distributed on two CDs, with one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally installable features for Windows Server 2003. The R2 update was released for all x86 and x64 versions, except Windows Server 2003 R2 Enterprise Edition, which was not released for Itanium.
Windows XP x64 and Server 2003 x64 Editions
On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 Editions in Standard, Enterprise and Datacenter SKUs. Windows XP Professional x64 Edition is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture.
Windows XP Professional x64 Edition is based on the Windows Server 2003 codebase, with the server features removed and client features added. Both Windows Server 2003 x64 and Windows XP Professional x64 Edition use identical kernels.
Windows XP Professional x64 Edition is not to be confused with Windows XP 64-bit Edition, as the latter was designed for Intel Itanium processors. During the initial development phases, Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems.
Windows Fundamentals for Legacy PCs
In July 2006, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is only available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, and Me, so that they will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix.
While visually the same as Windows XP, it has some differences. For example, if the screen is set to 16-bit color, the Windows 2000 recycle bin icon and some 16-bit XP icons are shown. Paint and some games such as Solitaire are also absent.
Windows Home Server 2007
Windows Home Server (code-named Q, Quattro) is a server product based on Windows Server 2003, designed for consumer use. The system was announced on January 7, 2007, by Bill Gates. Windows Home Server can be configured and monitored using a console program that can be installed on a client PC. Media sharing, local and remote drive backup, and file duplication are among its listed features. The release of Windows Home Server Power Pack 3 added support for Windows 7 to Windows Home Server.
Windows Vista and Server 2008
Windows Vista was released on November 30, 2006, to business customers, with consumer versions following on January 30, 2007. Windows Vista was intended to have enhanced security through a new restricted user mode called User Account Control, replacing the "administrator-by-default" philosophy of Windows XP. Vista was the target of much criticism and negative press and in general was not well regarded; this was seen as leading to the relatively swift release of Windows 7.
One major difference between Vista and earlier versions of Windows (Windows 95 and later) was that the original start button was replaced with a Windows icon in a circle (called the Start Orb). Vista also featured new graphics features, the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess, Mahjong, and Purble Place), Internet Explorer 7, Windows Media Player 11, and a large number of underlying architectural changes. Windows Vista had the version number NT 6.0. During its lifetime, Windows Vista had two service packs.
Windows Vista shipped in six editions:
Starter (only available in developing countries)
Home Basic
Home Premium
Business
Enterprise (only available to large business and enterprise)
Ultimate (combines both Home Premium and Enterprise)
All editions (except the Starter edition) were available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit version was breaking the 4-gigabyte memory barrier, since 32-bit systems cannot fully address 4 GB or more of RAM.
Windows Server 2008
Windows Server 2008, released on February 27, 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 built on the technological and security advances first introduced with Windows Vista, and was significantly more modular than its predecessor, Windows Server 2003.
Windows Server 2008 shipped in ten editions:
Windows Server 2008 Foundation (for OEMs only)
Windows Server 2008 Standard (32-bit and 64-bit)
Windows Server 2008 Enterprise (32-bit and 64-bit)
Windows Server 2008 Datacenter (32-bit and 64-bit)
Windows Server 2008 for Itanium-based Systems (IA-64)
Windows HPC Server 2008
Windows Web Server 2008 (32-bit and 64-bit)
Windows Storage Server 2008 (32-bit and 64-bit)
Windows Small Business Server 2008 (64-bit only)
Windows Essential Business Server 2008 (32-bit and 64-bit)
Windows 7 and Server 2008 R2
Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009. Since its release, Windows 7 had one service pack.
Some features of Windows 7 were faster booting, Device Stage, Windows PowerShell, less obtrusive User Account Control, multi-touch, and improved window management. The interface was renewed with a bigger taskbar and some improvements in the searching system and the Start menu. Features included with Windows Vista but not in Windows 7 include the sidebar (although gadgets remain) and several programs that were removed in favor of downloading their Windows Live counterparts. Windows 7 met with positive reviews, which said the OS was faster and easier to use than Windows Vista.
Windows 7 shipped in six editions:
Starter (available worldwide)
Home Basic
Home Premium
Professional
Enterprise (available to volume-license business customers only)
Ultimate
In some countries in the European Union, there were other editions that lacked some features such as Windows Media Player, Windows Media Center and Internet Explorer—these editions were called names such as "Windows 7 N."
Microsoft focused on selling Windows 7 Home Premium and Professional. All editions, except the Starter edition, were available in both 32-bit and 64-bit versions.
Unlike the corresponding Vista editions, the Professional and Enterprise editions were supersets of the Home Premium edition.
At the Professional Developers Conference (PDC) 2008, Microsoft also announced Windows Server 2008 R2, as the server variant of Windows 7. Windows Server 2008 R2 shipped in 64-bit versions (x64 and Itanium) only.
Windows Thin PC
In 2010, Microsoft released Windows Thin PC (WinTPC), a feature- and size-reduced, locked-down version of Windows 7 expressly designed to turn older PCs into thin clients. WinTPC was available to Software Assurance customers and relied on cloud computing in a business network. Wireless operation is supported, since WinTPC has full wireless stack integration, although performance may not match that of a wired connection.
Windows Home Server 2011
Windows Home Server 2011, code-named 'Vail', was released on April 6, 2011. Windows Home Server 2011 is built on the Windows Server 2008 R2 code base and removed the Drive Extender drive pooling technology found in the original Windows Home Server release. Windows Home Server 2011 is considered a "major release"; its predecessor was built on Windows Server 2003. WHS 2011 only supports x86-64 hardware.
Microsoft decided to discontinue Windows Home Server 2011 on July 5, 2012, while incorporating its features into Windows Server 2012 Essentials. Windows Home Server 2011 was supported until April 12, 2016.
Windows 8 and Server 2012
On June 1, 2011, Microsoft previewed Windows 8 at both Computex Taipei and the D9: All Things Digital conference in California. The first public preview of Windows Server 2012 was shown by Microsoft at the 2011 Microsoft Worldwide Partner Conference. Windows 8 Release Preview and Windows Server 2012 Release Candidate were both released on May 31, 2012. Product development on Windows 8 was completed on August 1, 2012, and it was released to manufacturing the same day. Windows Server 2012 went on sale to the public on September 4, 2012. Windows 8 went on sale to the public on October 26, 2012. One edition, Windows RT, runs on some system-on-a-chip devices with mobile 32-bit ARM (ARMv7) processors. Windows 8 features a redesigned user interface, designed to make it easier for touchscreen users to use Windows. The interface introduced an updated Start menu known as the Start screen, and a new full-screen application platform. The desktop interface is also present for running windowed applications, although Windows RT will not run any desktop applications not included in the system. On the Building Windows 8 blog, it was announced that a computer running Windows 8 can boot up much faster than Windows 7. New features also include USB 3.0 support, the Windows Store, the ability to run from USB drives with Windows To Go, and others.
Windows 8.1 and Windows Server 2012 R2 were released on October 17, 2013. Windows 8.1 is available as an update in the Windows Store for Windows 8 users only and also available to download for clean installation. The update adds new options for resizing the live tiles on the Start screen. Windows 8 was given the kernel number NT 6.2, with its successor 8.1 receiving the kernel number 6.3. Neither had any service packs, although many consider Windows 8.1 to be a service pack for Windows 8. However, Windows 8.1 received two main updates in 2014. Both versions received some criticism due to the removal of the Start menu and the difficulty of performing some tasks and commands.
Windows 8 is available in the following editions:
Windows 8
Windows 8 Pro
Windows 8 Enterprise
Windows RT
Microsoft ended support for Windows 8 on January 12, 2016, and for Windows 8.1 on January 10, 2023.
Windows 10 and corresponding Server versions
Windows 10 was unveiled on September 30, 2014, as the successor to Windows 8, and was released on July 29, 2015. It was distributed without charge to Windows 7 and 8.1 users for one year after release. A number of new features debuted in Windows 10, including Cortana, the Microsoft Edge web browser, the ability to view Windows Store apps in a window instead of fullscreen, the return of the Start menu, virtual desktops, revamped core apps, Continuum, and a unified Settings app. Like its successor, the operating system was announced as a service OS that would receive constant performance and stability updates. Unlike Windows 8, Windows 10 received mostly positive reviews, which praised its improved stability and practicality compared with its predecessor; however, it drew criticism for mandatory update installation, privacy concerns, and advertising-supported software tactics.
Although Microsoft claimed Windows 10 would be the last Windows version, a new major release, Windows 11, was eventually announced in 2021. That made Windows 10 Microsoft's longest-serving flagship operating system, from its public release on July 29, 2015, until October 5, 2021, when Windows 11 was released. Windows 10 received thirteen main updates.
Stable releases
Version 1507 (codenamed Threshold 1) was the original version of Windows 10, released in July 2015. One of the big features was the introduction of Windows Hello, which at launch enabled users to log into Windows with facial recognition if the PC was equipped with a compatible actively illuminated near-infrared (NIR) camera.
Version 1511, announced as the November Update and codenamed Threshold 2. It was released in November 2015. This update added many visual tweaks, such as more consistent context menus and the ability to change the color of window titlebars. Windows 10 can now be activated with a product key for Windows 7 and later, thus simplifying the activation process and essentially making Windows 10 free for anyone who has Windows 7 or later, even after the free upgrade period ended. A "Find My Device" feature was added, allowing users to track their devices if they lose them, similar to the Find My iPhone service that Apple offers. Controversially, the Start menu now displays "featured apps". A few tweaks were added to Microsoft Edge, including tab previews and the ability to sync the browser with other devices running Windows 10. Kernel version number: 10.0.10586.
Version 1607, announced as the Anniversary Update and codenamed Redstone 1. It was the first of several planned updates with the "Redstone" codename. Its version number, 1607, means that it was supposed to launch in July 2016; however, it was delayed until August 2016. Many new features were included in the version, including more integration with Cortana, a dark theme, browser extension support for Microsoft Edge, click-to-play Flash by default, tab pinning, web notifications, swipe navigation in Edge, and the ability for Windows Hello to use a fingerprint sensor to sign into apps and websites, similar to Touch ID on the iPhone. Also added was Windows Ink, which improves digital inking in many apps, and the Windows Ink Workspace, which lists pen-compatible apps as well as quick shortcuts to a sticky notes app and a sketchpad. Microsoft, through its partnership with Canonical, integrated a full Ubuntu bash shell via the Windows Subsystem for Linux. Notable tweaks in this version of Windows 10 include the removal of the controversial password-sharing feature of Microsoft's Wi-Fi Sense service, a slightly redesigned Start menu, Tablet Mode working more like Windows 8, overhauled emoji, improvements to the lock screen, calendar integration in the taskbar, and the Blue Screen of Death now showing a QR code which users can scan to quickly find out what caused the error. This version of Windows 10's kernel version is 10.0.14393.
Version 1703, announced as the Creators Update and codenamed Redstone 2. Features for this update include a new Paint 3D application, which allows users to create and modify 3D models, integration with Microsoft's HoloLens and other "mixed-reality" headsets produced by other manufacturers, Windows My People, which allows users to manage contacts, Xbox game broadcasting, support for newly developed APIs such as WDDM 2.2, Dolby Atmos support, improvements to the Settings app, and more Edge and Cortana improvements. This version also included tweaks to system apps, such as an address bar in the Registry Editor, Windows PowerShell being the default command line interface instead of the Command Prompt and the Windows Subsystem for Linux being upgraded to support Ubuntu 16.04. This version of Windows 10 was released on April 11, 2017, as a free update.
Version 1709, announced as the Fall Creators Update and codenamed Redstone 3. It introduced a new design language, the Fluent Design System, and incorporated it into UWP apps such as Calculator. It also added new features to the Photos application that were once available only in Windows Movie Maker.
Version 1803, announced as the April 2018 Update and codenamed Redstone 4, introduced Timeline, an upgrade to the Task View screen that can show past activities and let users resume them. The respective icon on the taskbar was also changed to reflect this upgrade. Strides were taken to incorporate Fluent Design into Windows, including adding Acrylic transparency to the taskbar and taskbar flyouts. The Settings app was also redesigned to have an Acrylic left pane. Variable fonts were introduced.
Version 1809, announced as the Windows 10 October 2018 Update and codenamed Redstone 5, introduced, among other new features, a dark mode for File Explorer, the Your Phone app to link an Android phone with Windows 10, a new screenshot tool called Snip & Sketch, a Make Text Bigger option for easier accessibility, and clipboard history with cloud sync.
Version 1903, announced as the Windows 10 May 2019 Update, codenamed 19H1, was released on May 21, 2019. It added many new features including the addition of a light theme to the Windows shell and a new feature known as Windows Sandbox, which allowed users to run programs in a throwaway virtual window. Notably, this was the first version to allow an application to default to using UTF-8 as the process code page and to default to UTF-8 as the code page in programs such as Notepad.
Version 1909, announced as the Windows 10 November 2019 Update, codenamed 19H2, was released on November 12, 2019. It unlocked many features that were already present, but hidden or disabled, on 1903, such as an auto-expanding menu on Start while hovering the mouse on it, OneDrive integration on Windows Search and creating events from the taskbar's clock. Some PCs with version 1903 had already enabled these features without installing 1909.
Version 2004, announced as the Windows 10 May 2020 Update, codenamed 20H1, was released on May 27, 2020. It introduced several new features, such as the ability to rename virtual desktops, GPU temperature and disk type information in Task Manager, a chat-based interface and windowed appearance for Cortana, cloud reinstallation of Windows, and quick searches (depending on region) on the Search home.
Version 20H2, announced as the Windows 10 October 2020 Update, codenamed 20H2, was released on October 20, 2020. It introduced resizable Start menu panels, a graphing mode for Calculator, a process architecture view in Task Manager's Details pane, optional driver delivery from Windows Update, and an updated in-use location icon on the taskbar.
Version 21H1, announced as the Windows 10 May 2021 Update, codenamed 21H1, was released on May 18, 2021.
Version 21H2, announced as the Windows 10 November 2021 Update, codenamed 21H2, was released on November 16, 2021.
Version 22H2, announced as the Windows 10 2022 Update, codenamed 22H2, was released on October 18, 2022. It was the last version of Windows 10.
Windows Server 2016
Windows Server 2016 is a release of the Microsoft Windows Server operating system that was unveiled on September 30, 2014. Windows Server 2016 was officially released at Microsoft's Ignite Conference, September 26–30, 2016. It is based on the Windows 10 Anniversary Update codebase.
Windows Server 2019
Windows Server 2019 is a release of the Microsoft Windows Server operating system that was announced on March 20, 2018. The first Windows Insider preview version was released on the same day. It was released for general availability on October 2, 2018. Windows Server 2019 is based on the Windows 10 October 2018 Update codebase.
On October 6, 2018, distribution of Windows version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. The software product life cycle for Server 2019 was reset in accordance with the new release date.
Windows Server 2022
Windows Server 2022 was released on August 18, 2021. This was the first NT server version that does not share its build number with any client counterpart, although its codename is 21H2, matching the Windows 10 November 2021 Update.
Windows 11 and corresponding Server versions
Windows 11 is the latest release of Windows NT, and the successor to Windows 10. It was unveiled on June 24, 2021, and was released on October 5, 2021, serving as a free upgrade to compatible Windows 10 devices. The system incorporates a renewed interface called "Mica", which includes translucent backgrounds, rounded edges and color combinations. The taskbar's icons are center-aligned by default, while the Start menu replaces the "Live Tiles" with pinned apps and recommended apps and files. The MSN widget panel, the Microsoft Store, and the file browser, among other applications, have also been redesigned. However, some features and programs such as Cortana, Internet Explorer (replaced by Microsoft Edge as the default web browser) and Paint 3D were removed. Apps like 3D Viewer, Paint 3D, Skype and OneNote for Windows 10 can be downloaded from the Microsoft Store. Beginning in 2021, Windows 11 included compatibility with Android applications; however, Microsoft has announced that support for Android apps will end in March 2025. The Amazon Appstore is included in the Windows Subsystem for Android. Windows 11 received a positive reception from critics. While it was praised for its redesigned interface and increased security and productivity, it was criticized for its high system requirements (which include an installed TPM 2.0 chip, enabling the Secure Boot protocol, and UEFI firmware) and various UI changes and regressions (such as requiring a Microsoft account for first-time setup, preventing users from changing default browsers, and an inconsistent dark theme) compared to Windows 10.
Stable releases
Version 21H2, codenamed "Sun Valley", was the initial version of Windows 11 released on October 5, 2021.
Version 22H2, announced as the Windows 11 2022 Update, codenamed "Sun Valley 2", was released on September 20, 2022. Features in this Windows 11 version include an updated, UWP version of the Task Manager and the Smart App Control feature within the Windows Security app. This version has had three major updates, with features including tabbed browsing in the File Explorer, iOS support for the Phone Link app, Bluetooth Low Energy audio support, and a preview of Microsoft Copilot within Windows.
Version 23H2, announced as the Windows 11 2023 Update, codenamed "Sun Valley 3", was released on October 31, 2023.
Version 24H2, announced as the Windows 11 2024 Update, codenamed "Hudson Valley", was released on October 1, 2024.
Windows Server 2025
Windows Server 2025 is the successor to Windows Server 2022 and was released on November 1, 2024. It is graphically based on Windows 11 and includes features such as hotpatching, among others.
See also
Comparison of operating systems
History of operating systems
List of Microsoft codenames
References
Further reading
History of Microsoft
Microsoft Windows
Windows
OS/2
Software version histories | Microsoft Windows version history | [
"Technology"
] | 11,494 | [
"Computing platforms",
"Microsoft Windows",
"History of software",
"OS/2",
"History of computing"
] |
13,702 | https://en.wikipedia.org/wiki/Hebrew%20numerals | The system of Hebrew numerals is a quasi-decimal alphabetic numeral system using the letters of the Hebrew alphabet.
The system was adapted from that of the Greek numerals sometime between 200 and 78 BCE, the latter being the date of the earliest archeological evidence.
The current numeral system is also known as the Hebrew alphabetic numerals, to contrast with earlier systems of writing numerals used in classical antiquity. These earlier systems were inherited from usage in the Aramaic and Phoenician scripts, and are attested in the Samaria Ostraca.
The Greek system was adopted in Hellenistic Judaism and had been in use in Greece since about the 5th century BCE.
In this system, there is no notation for zero, and the numeric values for individual letters are added together. Each unit (1, 2, ..., 9) is assigned a separate letter, each tens (10, 20, ..., 90) a separate letter, and the first four hundreds (100, 200, 300, 400) a separate letter. The later hundreds (500, 600, 700, 800 and 900) are represented by the sum of two or three letters representing the first four hundreds. To represent numbers from 1,000 to 999,999, the same letters are reused to serve as thousands, tens of thousands, and hundreds of thousands. Gematria (Jewish numerology) uses these transformations extensively.
In Israel today, the decimal Hindu–Arabic numeral system (0, 1, 2, 3, etc.) is used in almost all cases (money, age, dates on the civil calendar). The Hebrew numerals are used only in special cases, such as when using the Hebrew calendar or numbering a list (similar to a, b, c, d, etc.), much as Roman numerals are used in the West.
Numbers
The Hebrew language has names for common numbers that range from zero to one million. Letters of the Hebrew alphabet are used to represent numbers in a few traditional contexts, such as in calendars. In other situations, numerals from the Hindu–Arabic numeral system are used. Cardinal and ordinal numbers must agree in gender with the noun they are describing. If there is no such noun (e.g., in telephone numbers), the feminine form is used. For ordinal numbers greater than ten, the cardinal is used. Multiples of ten above the value 20 have no gender (20, 30, 40, ... are genderless), unless the number has the digit 1 in the tens position (110, 210, 310, ...).
Ordinal values
Note: For ordinal numbers greater than 10, cardinal numbers are used instead.
Cardinal values
Note: Officially, numbers greater than a million were represented by the long scale. However, since January 21, 2013, the modified short scale (under which the long scale milliard is substituted for the strict short scale billion), which was already the colloquial standard, became official.
Collective numerals
Speaking and writing
Cardinal and ordinal numbers must agree in gender (masculine or feminine; mixed groups are treated as masculine) with the noun they are describing. If there is no such noun (e.g. a telephone number or a house number in a street address), the feminine form is used. Ordinal numbers must also agree in number and definite status like other adjectives. The cardinal number precedes the noun (e.g., shlosha yeladim), except for the number one which succeeds it (e.g., yeled echad). The number two is special: shnayim (m.) and shtayim (f.) become shney (m.) and shtey (f.) when followed by the noun they count. For ordinal numbers (numbers indicating position) greater than ten the cardinal is used.
Calculations
The Hebrew numeric system operates on the additive principle, in which the numeric values of the letters are added together to form the total. For example, 177 is represented as קעז, which (from right to left) corresponds to 100 + 70 + 7 = 177.
Mathematically, this type of system requires 27 letters (1–9, 10–90, 100–900). In practice, the last letter, tav (which has the value 400), is used in combination with itself or other letters from qof (100) onwards to generate numbers from 500 and above. Alternatively, the 22-letter Hebrew numeral set is sometimes extended to 27 by using 5 sofit (final) forms of the Hebrew letters.
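As an informal illustration (not part of any standard), the additive rule can be sketched in a few lines of Python; the table below simply encodes the letter values described in this article, and the greedy loop reproduces the tav-based combinations for 500–900. The 15/16 exception and the geresh/gershayim marks discussed in the following sections are handled in a second sketch there.

```python
# Informal sketch of the additive rule; values follow the article's table.
LETTER_VALUES = [
    (400, "ת"), (300, "ש"), (200, "ר"), (100, "ק"),
    (90, "צ"), (80, "פ"), (70, "ע"), (60, "ס"), (50, "נ"),
    (40, "מ"), (30, "ל"), (20, "כ"), (10, "י"),
    (9, "ט"), (8, "ח"), (7, "ז"), (6, "ו"), (5, "ה"),
    (4, "ד"), (3, "ג"), (2, "ב"), (1, "א"),
]

def hebrew_additive(n: int) -> str:
    """Return the plain additive letter string for 1 <= n <= 999."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch only handles 1 to 999")
    letters = []
    for value, letter in LETTER_VALUES:
        while n >= value:   # 500-900 fall out as tav (400) plus further letters
            letters.append(letter)
            n -= value
    return "".join(letters)

print(hebrew_additive(177))   # קעז (100 + 70 + 7)
print(hebrew_additive(900))   # תתק (400 + 400 + 100)
```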
Key exceptions
By convention, the numbers 15 and 16 are represented as טו (9 + 6) and טז (9 + 7), respectively, in order to refrain from using the two-letter combinations יה (10 + 5) and יו (10 + 6), which are alternate written forms for the Name of God in everyday writing. In the calendar, this manifests every full moon, since all Hebrew months start on a new moon (see for example: Tu BiShvat).
This convention developed sometime in the Middle Ages; before that, it was common to write 15 and 16 as י"ה and י"ו.
Combinations which would spell out words with negative connotations are sometimes avoided by switching the order of the letters. For instance, 744, which by the additive rule would be written as תשמד (meaning "you/it will be destroyed"), might instead be written with its letters reordered to yield a phrase meaning "end to demon".
Use of final letters
The Hebrew numeral system has sometimes been extended to include the five final letter forms: ך for 500, ם for 600, ן for 700, ף for 800, and ץ for 900. Usually, though, the final letter forms are used with the same values as the regular letter forms: ך for 20, ם for 40, ן for 50, ף for 80, and ץ for 90.
The ordinary additive forms for 500 to 900 are ת"ק, ת"ר, ת"ש, ת"ת, and תת"ק.
Gershayim
Gershayim (U+05F4 in Unicode, resembling a double quote mark, and sometimes erroneously referred to as merkha'ot, which is Hebrew for double quotes) are inserted before (to the right of) the last (leftmost) letter to indicate that the sequence of letters represents something other than a word. This is used in the case where a number is represented by two or more Hebrew numerals (e.g., 28 → כ״ח).
Similarly, a single geresh (U+05F3 in Unicode, resembling a single quote mark) is appended after (to the left of) a single letter to indicate that the letter represents a number rather than a (one-letter) word. This is used in the case where a number is represented by a single Hebrew numeral (e.g., 100 → ק׳).
Note that geresh and gershayim merely indicate "not a (normal) word." Context usually determines whether they indicate a number or something else (such as an abbreviation).
An alternative method found in old manuscripts and still found on modern-day tombstones is to put a dot above each letter of the number.
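Combining the additive sketch from the "Calculations" section with the 15/16 exception and the geresh/gershayim placement rules described above gives a fuller, still informal, formatter. It assumes the hebrew_additive helper defined earlier:

```python
GERESH, GERSHAYIM = "\u05f3", "\u05f4"   # geresh ׳ and gershayim ״

def hebrew_numeral(n: int) -> str:
    """Format 1-999 with the 15/16 exception and geresh/gershayim marks."""
    # 15 and 16 are written as 9 + 6 and 9 + 7 (see "Key exceptions" above).
    if n == 15:
        letters = "טו"
    elif n == 16:
        letters = "טז"
    else:
        letters = hebrew_additive(n)
    if len(letters) == 1:
        return letters + GERESH                     # e.g. 100 -> ק׳
    return letters[:-1] + GERSHAYIM + letters[-1]   # e.g. 28 -> כ״ח

print(hebrew_numeral(15), hebrew_numeral(28), hebrew_numeral(100))
```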
Decimals
In print, Arabic numerals are employed in Modern Hebrew for most purposes. Hebrew numerals are used nowadays primarily for writing the days and years of the Hebrew calendar; for references to traditional Jewish texts (particularly for Biblical chapter and verse and for Talmudic folios); for bulleted or numbered lists (similar to A, B, C, etc., in English); and in numerology (gematria).
Thousands and date formats
Thousands are counted separately, and the thousands count precedes the rest of the number (to the right, since Hebrew is read from right to left). There are no special marks to signify that the "count" is starting over with thousands, which can theoretically lead to ambiguity, although a single quote mark is sometimes used after the letter. When specifying years of the Hebrew calendar in the present millennium, writers usually omit the thousands (which is presently 5 [ה]), but if they do not, this is accepted to mean 5,000, with no ambiguity. The current Israeli coinage includes the thousands.
Date examples
"Monday, 15 Adar 5764" (where 5764 = 5(×1000) + 400 + 300 + 60 + 4, and 15 = 9 + 6):
In full (with thousands): "Monday, 15(th) of Adar, 5764"
Common usage (omitting thousands): "Monday, 15(th) of Adar, (5)764"
"Thursday, 3 Nisan 5767" (where 5767 = 5(×1000) + 400 + 300 + 60 + 7):
In full (with thousands): "Thursday, 3(rd) of Nisan, 5767"
Common usage (omitting thousands): "Thursday, 3(rd) of Nisan, (5)767"
To see how today's date in the Hebrew calendar is written, see, for example, Hebcal date converter.
Recent years
5785 (2024–25) = תשפ״ה
5784 (2023–24) = תשפ״ד
5783 (2022–23) = תשפ״ג
...
5772 (2011–12) = תשע״ב
5771 (2010–11) = תשע״א
5770 (2009–10) = תש״ע
5769 (2008–09) = תשס״ט
...
5761 (2000–01) = תשס״א
5760 (1999–2000) = תש״ס
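For illustration, the thousands-omitting rule described under "Thousands and date formats", combined with the informal helpers sketched earlier, reproduces year forms like those listed above (the thousands prefix, e.g. ה׳, is optional in common usage):

```python
def hebrew_year(year: int, with_thousands: bool = False) -> str:
    """Common short form of a Hebrew calendar year, e.g. 5785 -> תשפ״ה."""
    thousands, rest = divmod(year, 1000)
    prefix = hebrew_additive(thousands) + GERESH if with_thousands else ""
    return prefix + hebrew_numeral(rest)

print(hebrew_year(5785))           # תשפ״ה
print(hebrew_year(5764, True))     # ה׳תשס״ד
```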
Similar systems
The Abjad numerals are equivalent to the Hebrew numerals up to 400. The Greek numerals differ from the Hebrew ones from 90 upwards because the Greek alphabet has no equivalent for tsade (צ).
See also
Bible code, a purported set of secret messages encoded within the Torah.
Gematria, Jewish system of assigning numerical value to a word or phrase.
Hebrew calendar
Notarikon, a method of deriving a word by using each of its initial letters.
Sephirot, the 10 attributes/emanations found in Kabbalah.
Significance of numbers in Judaism
Base 32, a system that can be written with all the Arabic numerals plus all the Hebrew letters, much as Base 36 is written with all the Arabic numerals and Latin letters.
References
External links
Gematria Chart on inner.org
Hebrew Number Chart 1 to 1 Million with English Transliteration
Learn to say any number in English with Transliteration
Numerals
"Mathematics"
] | 2,178 | [
"Numeral systems",
"Numerals"
] |
13,711 | https://en.wikipedia.org/wiki/Hydroxide | Hydroxide is a diatomic anion with chemical formula OH−. It consists of an oxygen and hydrogen atom held together by a single covalent bond, and carries a negative electric charge. It is an important but usually minor constituent of water. It functions as a base, a ligand, a nucleophile, and a catalyst. The hydroxide ion forms salts, some of which dissociate in aqueous solution, liberating solvated hydroxide ions. Sodium hydroxide is a multi-million-ton per annum commodity chemical.
The corresponding electrically neutral compound HO• is the hydroxyl radical. The corresponding covalently bound group –OH of atoms is the hydroxy group.
Both the hydroxide ion and hydroxy group are nucleophiles and can act as catalysts in organic chemistry.
Many inorganic substances which bear the word hydroxide in their names are not ionic compounds of the hydroxide ion, but covalent compounds which contain hydroxy groups.
Hydroxide ion
The hydroxide ion is naturally produced from water by the self-ionization reaction:
H3O+ + OH− ⇌ 2H2O
The equilibrium constant for this reaction, defined as
Kw = [H+][OH−]
has a value close to 10−14 at 25 °C, so the concentration of hydroxide ions in pure water is close to 10−7 mol∙dm−3, in order to satisfy the equal charge constraint. The pH of a solution is equal to the decimal cologarithm of the hydrogen cation concentration; the pH of pure water is close to 7 at ambient temperatures. The concentration of hydroxide ions can be expressed in terms of pOH, which is close to (14 − pH), so the pOH of pure water is also close to 7. Addition of a base to water will reduce the hydrogen cation concentration and therefore increase the hydroxide ion concentration (increase pH, decrease pOH) even if the base does not itself contain hydroxide. For example, ammonia solutions have a pH greater than 7 due to the reaction NH3 + H+ ⇌ NH4+, which decreases the hydrogen cation concentration and thus increases the hydroxide ion concentration. pOH can be kept at a nearly constant value with various buffer solutions.
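As a rough numerical illustration of the Kw relation above (a sketch assuming 25 °C, Kw ≈ 1.0 × 10−14, and that activities can be approximated by concentrations):

```python
import math

KW = 1.0e-14   # ion product of water near 25 °C (approximate)

def hydroxide_concentration(pH: float) -> float:
    """Return [OH-] in mol/dm3, assuming Kw = [H+][OH-] and ideal behaviour."""
    return KW / (10.0 ** (-pH))

for pH in (7.0, 11.0):
    oh = hydroxide_concentration(pH)
    print(f"pH {pH}: [OH-] ≈ {oh:.1e} mol/dm3, pOH ≈ {-math.log10(oh):.2f}")
```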
In an aqueous solution the hydroxide ion is a base in the Brønsted–Lowry sense as it can accept a proton from a Brønsted–Lowry acid to form a water molecule. It can also act as a Lewis base by donating a pair of electrons to a Lewis acid. In aqueous solution both hydrogen and hydroxide ions are strongly solvated, with hydrogen bonds between oxygen and hydrogen atoms. Indeed, the bihydroxide ion has been characterized in the solid state. This compound is centrosymmetric and has a very short hydrogen bond (114.5 pm) that is similar to the length in the bifluoride ion (114 pm). In aqueous solution the hydroxide ion forms strong hydrogen bonds with water molecules. A consequence of this is that concentrated solutions of sodium hydroxide have high viscosity due to the formation of an extended network of hydrogen bonds as in hydrogen fluoride solutions.
In solution, exposed to air, the hydroxide ion reacts rapidly with atmospheric carbon dioxide, acting as an acid, to form, initially, the bicarbonate ion.
OH− + CO2 → HCO3−
The equilibrium constant for this reaction can be specified either as a reaction with dissolved carbon dioxide or as a reaction with carbon dioxide gas (see Carbonic acid for values and details). At neutral or acid pH, the reaction is slow, but is catalyzed by the enzyme carbonic anhydrase, which effectively creates hydroxide ions at the active site.
Solutions containing the hydroxide ion attack glass. In this case, the silicates in glass are acting as acids. Basic hydroxides, whether solids or in solution, are stored in airtight plastic containers.
The hydroxide ion can function as a typical electron-pair donor ligand, forming such complexes as tetrahydroxoaluminate/tetrahydroxidoaluminate [Al(OH)4]−. It is also often found in mixed-ligand complexes of the type [MLx(OH)y]z+, where L is a ligand. The hydroxide ion often serves as a bridging ligand, donating one pair of electrons to each of the atoms being bridged. As illustrated by [Pb2(OH)]3+, metal hydroxides are often written in a simplified format. It can even act as a 3-electron-pair donor, as in the tetramer [PtMe3(OH)]4.
When bound to a strongly electron-withdrawing metal centre, hydroxide ligands tend to ionise into oxide ligands. For example, the bichromate ion [HCrO4]− dissociates according to
[O3CrO–H]− ⇌ [CrO4]2− + H+
with a pKa of about 5.9.
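A back-of-the-envelope estimate of how far this dissociation proceeds at a given pH follows from the pKa (a sketch that neglects dichromate formation and non-ideal behaviour):

```python
def chromate_fraction(pH: float, pKa: float = 5.9) -> float:
    """Fraction of Cr(VI) present as CrO4 2- for the HCrO4-/CrO4 2- couple."""
    ratio = 10.0 ** (pH - pKa)      # [CrO4 2-] / [HCrO4-]
    return ratio / (1.0 + ratio)

for pH in (4.0, 5.9, 8.0):
    print(f"pH {pH}: about {chromate_fraction(pH):.0%} chromate")
# roughly 1% at pH 4, 50% at pH 5.9, and 99% at pH 8
```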
Vibrational spectra
The infrared spectra of compounds containing the OH functional group have strong absorption bands in the region centered around 3500 cm−1. The high frequency of molecular vibration is a consequence of the small mass of the hydrogen atom as compared to the mass of the oxygen atom, and this makes detection of hydroxyl groups by infrared spectroscopy relatively easy. A band due to an OH group tends to be sharp. However, the band width increases when the OH group is involved in hydrogen bonding. A water molecule has an HOH bending mode at about 1600 cm−1, so the absence of this band can be used to distinguish an OH group from a water molecule.
When the OH group is bound to a metal ion in a coordination complex, an M−OH bending mode can be observed. For example, in [Sn(OH)6]2− it occurs at 1065 cm−1. The bending mode for a bridging hydroxide tends to be at a lower frequency as in [(bipyridine)Cu(OH)2Cu(bipyridine)]2+ (955 cm−1). M−OH stretching vibrations occur below about 600 cm−1. For example, the tetrahedral ion [Zn(OH)4]2− has bands at 470 cm−1 (Raman-active, polarized) and 420 cm−1 (infrared). The same ion has a (HO)–Zn–(OH) bending vibration at 300 cm−1.
Applications
Sodium hydroxide solutions, also known as lye and caustic soda, are used in the manufacture of pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2004 was approximately 60 million tonnes. The principal method of manufacture is the chloralkali process.
Solutions containing the hydroxide ion are generated when a salt of a weak acid is dissolved in water. Sodium carbonate is used as an alkali, for example, by virtue of the hydrolysis reaction
CO32− + H2O ⇌ HCO3− + OH− (pKa2 = 10.33 at 25 °C and zero ionic strength)
Although the base strength of sodium carbonate solutions is lower than a concentrated sodium hydroxide solution, it has the advantage of being a solid. It is also manufactured on a vast scale (42 million tonnes in 2005) by the Solvay process. An example of the use of sodium carbonate as an alkali is when washing soda (another name for sodium carbonate) acts on insoluble esters, such as triglycerides, commonly known as fats, to hydrolyze them and make them soluble.
Bauxite, a basic hydroxide of aluminium, is the principal ore from which the metal is manufactured. Similarly, goethite (α-FeO(OH)) and lepidocrocite (γ-FeO(OH)), basic hydroxides of iron, are among the principal ores used for the manufacture of metallic iron.
Inorganic hydroxides
Alkali metals
Aside from NaOH and KOH, which enjoy very large scale applications, the hydroxides of the other alkali metals also are useful. Lithium hydroxide (LiOH) is used in breathing gas purification systems for spacecraft, submarines, and rebreathers to remove carbon dioxide from exhaled gas.
2 LiOH + CO2 → Li2CO3 + H2O
The hydroxide of lithium is preferred to that of sodium because of its lower mass. Sodium hydroxide, potassium hydroxide, and the hydroxides of the other alkali metals are also strong bases.
Alkaline earth metals
Beryllium hydroxide Be(OH)2 is amphoteric. The hydroxide itself is insoluble in water, with a solubility product log K*sp of −11.7. Addition of acid gives soluble hydrolysis products, including the trimeric ion [Be3(OH)3(H2O)6]3+, which has OH groups bridging between pairs of beryllium ions making a 6-membered ring. At very low pH the aqua ion [Be(H2O)4]2+ is formed. Addition of hydroxide to Be(OH)2 gives the soluble tetrahydroxoberyllate or tetrahydroxidoberyllate anion, [Be(OH)4]2−.
The solubility in water of the other hydroxides in this group increases with increasing atomic number. Magnesium hydroxide Mg(OH)2 is a strong base (up to the limit of its solubility, which is very low in pure water), as are the hydroxides of the heavier alkaline earths: calcium hydroxide, strontium hydroxide, and barium hydroxide. A solution or suspension of calcium hydroxide is known as limewater and can be used to test for the weak acid carbon dioxide. The reaction Ca(OH)2 + CO2 ⇌ Ca2+ + HCO3− + OH− illustrates the basicity of calcium hydroxide. Soda lime, which is a mixture of the strong bases NaOH and KOH with Ca(OH)2, is used as a CO2 absorbent.
Boron group elements
The simplest hydroxide of boron B(OH)3, known as boric acid, is an acid. Unlike the hydroxides of the alkali and alkaline earth metals, it does not dissociate in aqueous solution. Instead, it reacts with water molecules acting as a Lewis acid, releasing protons.
B(OH)3 + H2O ⇌ [B(OH)4]− + H+
A variety of oxyanions of boron are known, which, in the protonated form, contain hydroxide groups.
Aluminium hydroxide Al(OH)3 is amphoteric and dissolves in alkaline solution.
Al(OH)3 (solid) + OH− (aq) ⇌ [Al(OH)4]− (aq)
In the Bayer process for the production of pure aluminium oxide from bauxite minerals this equilibrium is manipulated by careful control of temperature and alkali concentration. In the first phase, aluminium dissolves in hot alkaline solution as [Al(OH)4]−, but other hydroxides usually present in the mineral, such as iron hydroxides, do not dissolve because they are not amphoteric. After removal of the insolubles, the so-called red mud, pure aluminium hydroxide is made to precipitate by reducing the temperature and adding water to the extract, which, by diluting the alkali, lowers the pH of the solution. Basic aluminium hydroxide AlO(OH), which may be present in bauxite, is also amphoteric.
In mildly acidic solutions, the hydroxo/hydroxido complexes formed by aluminium are somewhat different from those of boron, reflecting the greater size of Al(III) vs. B(III). The concentration of the species [Al13(OH)32]7+ is very dependent on the total aluminium concentration. Various other hydroxo complexes are found in crystalline compounds. Perhaps the most important is the basic hydroxide AlO(OH), a polymeric material known by the names of the mineral forms boehmite or diaspore, depending on crystal structure. Gallium hydroxide, indium hydroxide, and thallium(III) hydroxide are also amphoteric. Thallium(I) hydroxide is a strong base.
Carbon group elements
Carbon forms no simple hydroxides. The hypothetical compound C(OH)4 (orthocarbonic acid or methanetetrol) is unstable in aqueous solution:
C(OH)4 → HCO3− + H3O+
HCO3− + H+ ⇌ H2CO3
Carbon dioxide is also known as carbonic anhydride, meaning that it forms by dehydration of carbonic acid H2CO3 (OC(OH)2).
Silicic acid is the name given to a variety of compounds with a generic formula [SiOx(OH)4−2x]n. Orthosilicic acid has been identified in very dilute aqueous solution. It is a weak acid with pKa1 = 9.84, pKa2 = 13.2 at 25 °C. It is usually written as H4SiO4, but the formula Si(OH)4 is generally accepted. Other silicic acids such as metasilicic acid (H2SiO3), disilicic acid (H2Si2O5), and pyrosilicic acid (H6Si2O7) have been characterized. These acids also have hydroxide groups attached to the silicon; the formulas suggest that these acids are protonated forms of polyoxyanions.
Few hydroxo complexes of germanium have been characterized. Tin(II) hydroxide Sn(OH)2 was prepared in anhydrous media. When tin(II) oxide is treated with alkali the pyramidal hydroxo complex [Sn(OH)3]− is formed. When solutions containing this ion are acidified, the ion [Sn3(OH)4]2+ is formed together with some basic hydroxo complexes. The structure of [Sn3(OH)4]2+ has a triangle of tin atoms connected by bridging hydroxide groups. Tin(IV) hydroxide is unknown but can be regarded as the hypothetical acid from which stannates, with a formula [Sn(OH)6]2−, are derived by reaction with the (Lewis) basic hydroxide ion.
Hydrolysis of Pb2+ in aqueous solution is accompanied by the formation of various hydroxo-containing complexes, some of which are insoluble. The basic hydroxo complex [Pb6O(OH)6]4+ is a cluster of six lead centres with metal–metal bonds surrounding a central oxide ion. The six hydroxide groups lie on the faces of the two external Pb4 tetrahedra. In strongly alkaline solutions soluble plumbate ions are formed, including [Pb(OH)6]2−.
Other main-group elements
In the higher oxidation states of the pnictogens, chalcogens, halogens, and noble gases there are oxoacids in which the central atom is attached to oxide ions and hydroxide ions. Examples include phosphoric acid H3PO4, and sulfuric acid H2SO4. In these compounds one or more hydroxide groups can dissociate with the liberation of hydrogen cations as in a standard Brønsted–Lowry acid. Many oxoacids of sulfur are known and all feature OH groups that can dissociate.
Telluric acid is often written with the formula H2TeO4·2H2O but is better described structurally as Te(OH)6.
Ortho-periodic acid can lose all its protons, eventually forming the periodate ion [IO4]−. It can also be protonated in strongly acidic conditions to give the octahedral ion [I(OH)6]+, completing the isoelectronic series, [E(OH)6]z, E = Sn, Sb, Te, I; z = −2, −1, 0, +1. Other acids of iodine(VII) that contain hydroxide groups are known, in particular in salts such as the mesoperiodate ion that occurs in K4[I2O8(OH)2]·8H2O.
As is common outside of the alkali metals, hydroxides of the elements in lower oxidation states are complicated. For example, phosphorous acid H3PO3 predominantly has the structure OP(H)(OH)2, in equilibrium with a small amount of P(OH)3.
The oxoacids of chlorine, bromine, and iodine have the formula O(n−1)/2A(OH), with (n − 1)/2 oxide groups attached to the central atom, where n is the oxidation number: +1, +3, +5, or +7, and A = Cl, Br, or I. The only oxoacid of fluorine is F(OH), hypofluorous acid. When these acids are neutralized the hydrogen atom is removed from the hydroxide group.
Transition and post-transition metals
The hydroxides of the transition metals and post-transition metals usually have the metal in the +2 (M = Mn, Fe, Co, Ni, Cu, Zn) or +3 (M = Fe, Ru, Rh, Ir) oxidation state. None are soluble in water, and many are poorly defined. One complicating feature of the hydroxides is their tendency to undergo further condensation to the oxides, a process called olation. Hydroxides of metals in the +1 oxidation state are also poorly defined or unstable. For example, silver hydroxide Ag(OH) decomposes spontaneously to the oxide (Ag2O). Copper(I) and gold(I) hydroxides are also unstable, although stable adducts of CuOH and AuOH are known. The polymeric compounds M(OH)2 and M(OH)3 are in general prepared by increasing the pH of aqueous solutions of the corresponding metal cations until the hydroxide precipitates out of solution. Conversely, the hydroxides dissolve in acidic solution. Zinc hydroxide Zn(OH)2 is amphoteric, forming the tetrahydroxidozincate ion in strongly alkaline solution.
Numerous mixed ligand complexes of these metals with the hydroxide ion exist. In fact, these are in general better defined than the simpler derivatives. Many can be made by deprotonation of the corresponding metal aquo complex.
LnM(OH2) + B ⇌ LnM(OH) + BH+ (L = ligand, B = base)
Vanadic acid H3VO4 shows similarities with phosphoric acid H3PO4 though it has a much more complex vanadate oxoanion chemistry. Chromic acid H2CrO4, has similarities with sulfuric acid H2SO4; for example, both form acid salts A+[HMO4]−. Some metals, e.g. V, Cr, Nb, Ta, Mo, W, tend to exist in high oxidation states. Rather than forming hydroxides in aqueous solution, they convert to oxo clusters by the process of olation, forming polyoxometalates.
Basic salts containing hydroxide
In some cases, the products of partial hydrolysis of metal ion, described above, can be found in crystalline compounds. A striking example is found with zirconium(IV). Because of the high oxidation state, salts of Zr4+ are extensively hydrolyzed in water even at low pH. The compound originally formulated as ZrOCl2·8H2O was found to be the chloride salt of a tetrameric cation [Zr4(OH)8(H2O)16]8+ in which there is a square of Zr4+ ions with two hydroxide groups bridging between Zr atoms on each side of the square and with four water molecules attached to each Zr atom.
The mineral malachite is a typical example of a basic carbonate. The formula, Cu2CO3(OH)2 shows that it is halfway between copper carbonate and copper hydroxide. Indeed, in the past the formula was written as CuCO3·Cu(OH)2. The crystal structure is made up of copper, carbonate and hydroxide ions. The mineral atacamite is an example of a basic chloride. It has the formula, Cu2Cl(OH)3. In this case the composition is nearer to that of the hydroxide than that of the chloride CuCl2·3Cu(OH)2. Copper forms hydroxyphosphate (libethenite), arsenate (olivenite), sulfate (brochantite), and nitrate compounds. White lead is a basic lead carbonate, (PbCO3)2·Pb(OH)2, which has been used as a white pigment because of its opaque quality, though its use is now restricted because it can be a source for lead poisoning.
Structural chemistry
The hydroxide ion appears to rotate freely in crystals of the heavier alkali metal hydroxides at higher temperatures so as to present itself as a spherical ion, with an effective ionic radius of about 153 pm. Thus, the high-temperature forms of KOH and NaOH have the sodium chloride structure, which gradually freezes in a monoclinically distorted sodium chloride structure at temperatures below about 300 °C. The OH groups still rotate even at room temperature around their symmetry axes and, therefore, cannot be detected by X-ray diffraction. The room-temperature form of NaOH has the thallium iodide structure. LiOH, however, has a layered structure, made up of tetrahedral Li(OH)4 and (OH)Li4 units. This is consistent with the weakly basic character of LiOH in solution, indicating that the Li–OH bond has much covalent character.
The hydroxide ion displays cylindrical symmetry in hydroxides of divalent metals Ca, Cd, Mn, Fe, and Co. For example, magnesium hydroxide Mg(OH)2 (brucite) crystallizes with the cadmium iodide layer structure, with a kind of close-packing of magnesium and hydroxide ions.
The amphoteric hydroxide Al(OH)3 has four major crystalline forms: gibbsite (most stable), bayerite, nordstrandite, and doyleite.
All these polymorphs are built up of double layers of hydroxide ions – the aluminium atoms on two-thirds of the octahedral holes between the two layers – and differ only in the stacking sequence of the layers. The structures are similar to the brucite structure. However, whereas the brucite structure can be described as a close-packed structure, in gibbsite the OH groups on the underside of one layer rest on the groups of the layer below. This arrangement led to the suggestion that there are directional bonds between OH groups in adjacent layers. This is an unusual form of hydrogen bonding since the two hydroxide ions involved would be expected to point away from each other. The hydrogen atoms have been located by neutron diffraction experiments on α-AlO(OH) (diaspore). The O–H–O distance is very short, at 265 pm; the hydrogen is not equidistant between the oxygen atoms and the short OH bond makes an angle of 12° with the O–O line. A similar type of hydrogen bond has been proposed for other amphoteric hydroxides, including Be(OH)2, Zn(OH)2, and Fe(OH)3.
A number of mixed hydroxides are known with stoichiometry A3MIII(OH)6, A2MIV(OH)6, and AMV(OH)6. As the formula suggests these substances contain M(OH)6 octahedral structural units. Layered double hydroxides may be represented by the formula . Most commonly, z = 2, and M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+; hence q = x.
In organic reactions
Potassium hydroxide and sodium hydroxide are two well-known reagents in organic chemistry.
Base catalysis
The hydroxide ion may act as a base catalyst. The base abstracts a proton from a weak acid to give an intermediate that goes on to react with another reagent. Common substrates for proton abstraction are alcohols, phenols, amines, and carbon acids. The pKa value for dissociation of an ordinary C–H bond is extremely high (on the order of 50), but the alpha hydrogens of a carbonyl compound are far more acidic: typical pKa values are 16.7 for acetaldehyde and 19 for acetone. Dissociation can occur in the presence of a suitable base.
RC(O)CH2R' + B ⇌ RC(O)CH−R' + BH+
The conjugate acid of the base should have a pKa no more than about 4 log units below that of the substrate, or the equilibrium will lie almost completely to the left.
The hydroxide ion by itself is not a strong enough base, but it can be converted into one by adding sodium hydroxide to ethanol
OH− + EtOH ⇌ EtO− + H2O
to produce the ethoxide ion. The pKa for self-dissociation of ethanol is about 16, so the alkoxide ion is a strong enough base. The addition of an alcohol to an aldehyde to form a hemiacetal is an example of a reaction that can be catalyzed by the presence of hydroxide. Hydroxide can also act as a Lewis-base catalyst.
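A rough way to see why ethoxide works where hydroxide is marginal is to estimate the proton-transfer equilibrium constant from pKa values, K ≈ 10^(pKa(conjugate acid of the base) − pKa(carbon acid)). The sketch below uses the acetone value quoted above together with the conventional pKa of 14 for water; the appropriate value for water is itself a matter of convention, so the numbers are illustrative only.

```python
# Proton-transfer equilibrium  R-H + B(-)  <=>  R(-) + B-H,
# with K estimated as 10**(pKa(B-H) - pKa(R-H)).
pKa_water = 14.0      # conventional value, assumed here
pKa_ethanol = 16.0    # approximate value quoted in the text
pKa_acetone = 19.0    # value quoted in the text

for base, pKa_BH in (("hydroxide", pKa_water), ("ethoxide", pKa_ethanol)):
    K = 10 ** (pKa_BH - pKa_acetone)
    print(f"{base} + acetone: K ~ {K:.0e}")
```

On these numbers the hydroxide equilibrium falls outside the roughly four-log-unit window mentioned above, while the ethoxide equilibrium stays inside it, mirroring the argument in the text.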
As a nucleophilic reagent
The hydroxide ion is intermediate in nucleophilicity between the fluoride ion F− and the amide ion NH2−. Ester hydrolysis under alkaline conditions (also known as base hydrolysis)
R1C(O)OR2 + OH− → R1C(O)OH + −OR2 → R1CO2− + HOR2
is an example of a hydroxide ion serving as a nucleophile.
Early methods for manufacturing soap treated triglycerides from animal fat (the ester) with lye.
Other cases where hydroxide can act as a nucleophilic reagent are amide hydrolysis, the Cannizzaro reaction, nucleophilic aliphatic substitution, nucleophilic aromatic substitution, and in elimination reactions. The reaction medium for KOH and NaOH is usually water but with a phase-transfer catalyst the hydroxide anion can be shuttled into an organic solvent as well, for example in the generation of the reactive intermediate dichlorocarbene.
Notes
References
Bibliography
Oxyanions
Water chemistry | Hydroxide | [
"Chemistry"
] | 5,644 | [
"Bases (chemistry)",
"Hydroxides",
"nan"
] |
13,726 | https://en.wikipedia.org/wiki/Hogshead | A hogshead (abbreviated "hhd", plural "hhds") is a large cask of liquid (or, less often, of a food commodity). It refers to a specified volume, measured in either imperial or US customary measures, primarily applied to alcoholic beverages, such as wine, ale, or cider.
Etymology
English philologist Walter William Skeat (1835–1912) noted the origin is to be found in the name for a cask or liquid measure appearing in various forms in Germanic languages, in Dutch oxhooft (modern okshoofd), Danish oxehoved, Old Swedish oxhuvud, etc. The Encyclopædia Britannica of 1911 conjectured that the word should therefore be "oxhead", "hogshead" being a mere corruption.
Varieties and standardisation
A tobacco hogshead was used in British and American colonial times to transport and store tobacco. It was a very large wooden barrel. A standardized hogshead measured long and in diameter at the head (at least , depending on the width in the middle). Fully packed with tobacco, it weighed about .
A hogshead in Britain contains about .
The Oxford English Dictionary (OED) notes that the hogshead was first standardized by an act of Parliament (2 Hen. 6. c. 14) in 1423, though the standards continued to vary by locality and content. For example, the OED cites an 1897 edition of Whitaker's Almanack, which specified the gallons of wine in a hogshead varying most particularly across fortified wines: claret/Madeira , port , sherry . The American Heritage Dictionary claims that a hogshead can consist of anything from (presumably) . A hogshead of Madeira wine was approximately equal to 45–48 gallons (0.205–0.218 m3). A hogshead of brandy was approximately equal to 56–61 gallons (0.255–0.277 m3).
Eventually, a hogshead of wine came to be 52.5 imperial gallons (63 US gallons), while a hogshead of beer or ale came to be 54 gallons (249.5421 L with the pre-1824 beer and ale gallon, or 245.48886 L with the imperial gallon).
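The litre figures quoted above follow directly from the two gallon definitions involved: the pre-1824 ale or beer gallon of 282 cubic inches and the imperial gallon of exactly 4.54609 L. A quick arithmetic check:

```python
# Verify the litre equivalents quoted for a 54-gallon hogshead of beer or ale.
CUBIC_INCH_IN_LITRES = 2.54**3 / 1000   # exact, since 1 inch = 2.54 cm

ale_gallon = 282 * CUBIC_INCH_IN_LITRES  # pre-1824 ale/beer gallon (282 cu in)
imperial_gallon = 4.54609                # imperial gallon, exact by definition

print(54 * ale_gallon)        # ~249.54 litres
print(54 * imperial_gallon)   # ~245.49 litres
```

Both results reproduce the figures given in the text to within rounding.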
A hogshead was also used as a unit of measurement for sugar in Louisiana for most of the 19th century. Plantations were listed in sugar schedules by the number of hogsheads of sugar or molasses produced. Used for sugar in the 18th and 19th centuries in the British West Indies, a hogshead weighed on average 16 cwt (813 kg). A hogshead was also used for the measurement of herring fished for sardines in Blacks Harbour, New Brunswick, and in Cornwall.
Charts
See also
English units of wine casks
References
Imperial units
Wine packaging and storage
Brewing
Units of volume
Containers
Alcohol measurement
Customary units of measurement in the United States | Hogshead | [
"Mathematics"
] | 602 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
13,733 | https://en.wikipedia.org/wiki/Hilbert%27s%20basis%20theorem | In mathematics Hilbert's basis theorem asserts that every ideal of a polynomial ring over a field has a finite generating set (a finite basis in Hilbert's terminology).
In modern algebra, rings whose ideals have this property are called Noetherian rings. Every field, and the ring of integers are Noetherian rings. So, the theorem can be generalized and restated as: every polynomial ring over a Noetherian ring is also Noetherian.
The theorem was stated and proved by David Hilbert in 1890 in his seminal article on invariant theory, where he solved several problems on invariants. In this article, he proved also two other fundamental theorems on polynomials, the Nullstellensatz (zero-locus theorem) and the syzygy theorem (theorem on relations). These three theorems were the starting point of the interpretation of algebraic geometry in terms of commutative algebra. In particular, the basis theorem implies that every algebraic set is the intersection of a finite number of hypersurfaces.
Another aspect of this article had a great impact on mathematics of the 20th century; this is the systematic use of non-constructive methods. For example, the basis theorem asserts that every ideal has a finite generator set, but the original proof does not provide any way to compute it for a specific ideal. This approach was so astonishing for mathematicians of that time that the first version of the article was rejected by Paul Gordan, the greatest specialist of invariants of that time, with the comment "This is not mathematics. This is theology." Later, he recognized "I have convinced myself that even theology has its merits."
Statement
If R is a ring, let R[X] denote the ring of polynomials in the indeterminate X over R. Hilbert proved that R[X] is "not too large", in the sense that if R is Noetherian, the same must be true for R[X]. Formally,
Hilbert's Basis Theorem. If R is a Noetherian ring, then the polynomial ring R[X] is a Noetherian ring.
Corollary. If R is a Noetherian ring, then R[X1, …, Xn] is a Noetherian ring.
Hilbert proved the theorem (for the special case of multivariate polynomials over a field) in the course of his proof of finite generation of rings of invariants. The theorem is interpreted in algebraic geometry as follows: every algebraic set is the set of the common zeros of finitely many polynomials.
Hilbert's proof is highly non-constructive: it proceeds by induction on the number of variables, and, at each induction step uses the non-constructive proof for one variable less. Introduced more than eighty years later, Gröbner bases allow a direct proof that is as constructive as possible: Gröbner bases produce an algorithm for testing whether a polynomial belongs to the ideal generated by other polynomials. So, given an infinite sequence of polynomials, one can construct algorithmically the list of those polynomials that do not belong to the ideal generated by the preceding ones. Gröbner basis theory implies that this list is necessarily finite, and is thus a finite basis of the ideal. However, for deciding whether the list is complete, one must consider every element of the infinite sequence, which cannot be done in the finite time allowed to an algorithm.
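As an illustration of the algorithmic remark above, the sketch below uses SymPy to test ideal membership by reducing a polynomial against a Gröbner basis. The example polynomials are arbitrary, and the `groebner` helper and `GroebnerBasis.contains` method are assumed to behave as in recent SymPy releases.

```python
# Ideal-membership testing with a Groebner basis (SymPy sketch).
from sympy import symbols, groebner, expand

x, y = symbols("x y")
gens = [x**2 + y**2 - 1, x*y - 2]

# Compute a Groebner basis of the ideal generated by gens.
G = groebner(gens, x, y, order="lex")

# A polynomial built as a combination of the generators lies in the ideal ...
f = expand((x + 3*y) * gens[0] + y**2 * gens[1])
print(G.contains(f))      # expected: True

# ... while an unrelated polynomial generally does not.
print(G.contains(x + 1))  # expected: False
```

Reducing against a Gröbner basis gives remainder zero exactly when the polynomial lies in the ideal, which is what makes the membership test effective.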
Proof
Theorem. If R is a left (resp. right) Noetherian ring, then the polynomial ring R[X] is also a left (resp. right) Noetherian ring.
Remark. We will give two proofs, in both only the "left" case is considered; the proof for the right case is similar.
First proof
Suppose I ⊆ R[X] is a non-finitely generated left ideal. Then by recursion (using the axiom of dependent choice) there is a sequence of polynomials {f0, f1, …} such that if In is the left ideal generated by f0, …, fn−1, then fn ∈ I \ In is of minimal degree. By construction, {deg(fn)} is a non-decreasing sequence of natural numbers. Let an be the leading coefficient of fn and let J be the left ideal in R generated by a0, a1, a2, …. Since R is Noetherian the chain of ideals
(a0) ⊂ (a0, a1) ⊂ (a0, a1, a2) ⊂ ⋯
must terminate. Thus J = (a0, …, aN−1) for some integer N. So in particular,
aN = u0a0 + u1a1 + ⋯ + uN−1aN−1, where each ui ∈ R.
Now consider
g = u0X^(deg fN − deg f0)f0 + u1X^(deg fN − deg f1)f1 + ⋯ + uN−1X^(deg fN − deg fN−1)fN−1,
whose leading term is equal to that of fN; moreover, g ∈ IN. However, fN ∉ IN, which means that fN − g ∈ I \ IN has degree less than deg(fN), contradicting the minimality.
Second proof
Let I ⊆ R[X] be a left ideal. Let J be the set of leading coefficients of members of I. This is obviously a left ideal over R, and so is finitely generated by the leading coefficients of finitely many members of I; say f1, …, fN, with leading coefficients a1, …, aN. Let d be the maximum of the set {deg(f1), …, deg(fN)}, and let Jk be the set of leading coefficients of members of I whose degree is ≤ k. As before, the Jk are left ideals over R, and so are finitely generated by the leading coefficients of finitely many members of I, say
f(k)1, …, f(k)N(k),
with degrees ≤ k. Now let I* ⊆ R[X] be the left ideal generated by:
{ fi, f(k)j : i = 1, …, N; k = 0, …, d − 1; j = 1, …, N(k) }.
We have I* ⊆ I and claim also I ⊆ I*. Suppose for the sake of contradiction this is not so. Then let h ∈ I \ I* be of minimal degree, and denote its leading coefficient by a.
Case 1: deg(h) ≥ d. Regardless of this condition, we have a ∈ J, so a is a left linear combination
a = u1a1 + ⋯ + uNaN
of the leading coefficients ai of the fi. Consider
h0 = u1X^(deg(h) − deg(f1))f1 + ⋯ + uNX^(deg(h) − deg(fN))fN,
which has the same leading term as h; moreover h0 ∈ I* while h ∉ I*. Therefore h − h0 ∈ I \ I* and deg(h − h0) < deg(h), which contradicts minimality.
Case 2: deg(h) = k < d. Then a ∈ Jk, so a is a left linear combination
a = u1b(k)1 + ⋯ + uN(k)b(k)N(k)
of the leading coefficients b(k)j of the f(k)j. Considering
h0 = u1X^(deg(h) − deg(f(k)1))f(k)1 + ⋯ + uN(k)X^(deg(h) − deg(f(k)N(k)))f(k)N(k),
we yield a similar contradiction as in Case 1.
Thus our claim holds, and I = I*, which is finitely generated.
Note that the only reason we had to split into two cases was to ensure that the powers of X multiplying the factors were non-negative in the constructions.
Applications
Let R be a Noetherian commutative ring. Hilbert's basis theorem has some immediate corollaries.
By induction, using the identification R[X1, …, Xn] = (R[X1, …, Xn−1])[Xn], we see that R[X1, …, Xn] will also be Noetherian.
Since any affine variety (i.e. the locus-set of a collection of polynomials) may be written as the locus of an ideal of a polynomial ring, and further as the locus of its generators, it follows that every affine variety is the locus of finitely many polynomials — i.e. the intersection of finitely many hypersurfaces.
If A is a finitely generated R-algebra, then we know that A ≅ R[X1, …, Xn]/I, where I is an ideal. The basis theorem implies that I must be finitely generated, say I = (p1, …, pN), i.e. A is finitely presented.
Formal proofs
Formal proofs of Hilbert's basis theorem have been verified through the Mizar project (see HILBASIS file) and Lean (see ring_theory.polynomial).
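For readers who want to see what the formal statement looks like in use, the following is a minimal Lean 3 sketch. The import path and the instance name polynomial.is_noetherian_ring are assumptions based on the mathlib file mentioned above; exact names may differ between library versions.

```lean
-- Minimal usage sketch (Lean 3 / mathlib); names assumed, not verified here.
import ring_theory.polynomial.basic

example (R : Type*) [comm_ring R] [is_noetherian_ring R] :
  is_noetherian_ring (polynomial R) :=
polynomial.is_noetherian_ring
```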
References
Further reading
Cox, Little, and O'Shea, Ideals, Varieties, and Algorithms, Springer-Verlag, 1997.
The definitive English-language biography of Hilbert.
Commutative algebra
Invariant theory
Articles containing proofs
Theorems in ring theory
David Hilbert
Theorems about polynomials | Hilbert's basis theorem | [
"Physics",
"Mathematics"
] | 1,348 | [
"Symmetry",
"Group actions",
"Theorems in algebra",
"Fields of abstract algebra",
"Theorems about polynomials",
"Articles containing proofs",
"Commutative algebra",
"Invariant theory"
] |
13,734 | https://en.wikipedia.org/wiki/Heterocyclic%20compound | A heterocyclic compound or ring structure is a cyclic compound that has atoms of at least two different elements as members of its ring(s). Heterocyclic organic chemistry is the branch of organic chemistry dealing with the synthesis, properties, and applications of organic heterocycles.
Examples of heterocyclic compounds include all of the nucleic acids, the majority of drugs, most biomass (cellulose and related materials), and many natural and synthetic dyes. More than half of known compounds are heterocycles. 59% of US FDA-approved drugs contain nitrogen heterocycles.
Classification
The study of organic heterocyclic chemistry focuses especially on organic unsaturated derivatives, and the preponderance of work and applications involves unstrained organic 5- and 6-membered rings. Included are pyridine, thiophene, pyrrole, and furan. Another large class of organic heterocycles refers to those fused to benzene rings. For example, the fused benzene derivatives of pyridine, thiophene, pyrrole, and furan are quinoline, benzothiophene, indole, and benzofuran, respectively. The fusion of two benzene rings to the heterocycle gives rise to a third large family of compounds. Analogs of the previously mentioned heterocycles for this third family of compounds are acridine, dibenzothiophene, carbazole, and dibenzofuran, respectively.
Heterocyclic organic compounds can be usefully classified based on their electronic structure. The saturated organic heterocycles behave like the acyclic derivatives. Thus, piperidine and tetrahydrofuran are conventional amines and ethers, with modified steric profiles. Therefore, the study of organic heterocyclic chemistry focuses on organic unsaturated rings.
Inorganic rings
Some heterocycles contain no carbon. Examples are borazine (B3N3 ring), hexachlorophosphazenes (P3N3 rings), and tetrasulfur tetranitride S4N4. In comparison with organic heterocycles, which have numerous commercial applications, inorganic ring systems are mainly of theoretical interest. IUPAC recommends the Hantzsch-Widman nomenclature for naming heterocyclic compounds.
Notes on lists
"Heteroatoms" are atoms in the ring other than carbon atoms.
Names in italics are retained by IUPAC and do not follow the Hantzsch-Widman nomenclature
Some of the names refer to classes of compounds rather than individual compounds.
Also no attempt is made to list isomers.
3-membered rings
Although subject to ring strain, 3-membered heterocyclic rings are well characterized.
4-membered rings
5-membered rings
The 5-membered ring compounds containing two heteroatoms, at least one of which is nitrogen, are collectively called the azoles. Thiazoles and isothiazoles contain a sulfur and a nitrogen atom in the ring. Dithioles have two sulfur atoms.
A large group of 5-membered ring compounds with three or more heteroatoms also exists. One example is the class of dithiazoles, which contain two sulfur atoms and one nitrogen atom.
6-membered rings
The 6-membered ring compounds containing two heteroatoms, at least one of which is nitrogen, are collectively called the azines. Thiazines contain a sulfur and a nitrogen atom in the ring. Dithiines have two sulfur atoms.
Six-membered rings with five heteroatoms
The hypothetical chemical compound with five nitrogen heteroatoms would be pentazine.
Six-membered rings with six heteroatoms
The hypothetical chemical compound with six nitrogen heteroatoms would be hexazine. Borazine is a six-membered ring with three nitrogen heteroatoms and three boron heteroatoms.
7-membered rings
In a 7-membered ring, the heteroatom must be able to provide an empty π-orbital (e.g. boron) for "normal" aromatic stabilization to be available; otherwise, homoaromaticity may be possible.
8-membered rings
Borazocine is an eight-membered ring with four nitrogen heteroatoms and four boron heteroatoms.
9-membered rings
Images of rings with one heteroatom
Fused/condensed rings
Heterocyclic ring systems that are formally derived by fusion with other rings, either carbocyclic or heterocyclic, have a variety of common and systematic names. For example, with the benzo-fused unsaturated nitrogen heterocycles, pyrrole provides indole or isoindole depending on the orientation. The pyridine analog is quinoline or isoquinoline. For azepine, benzazepine is the preferred name. Likewise, the compounds with two benzene rings fused to the central heterocycle are carbazole, acridine, and dibenzoazepine. Thienothiophenes are the fusion of two thiophene rings. Phosphaphenalenes are tricyclic phosphorus-containing heterocyclic systems derived from the carbocycle phenalene.
History of heterocyclic chemistry
The history of heterocyclic chemistry began in the 1800s, in step with the development of organic chemistry. Some noteworthy developments:
1818: Brugnatelli makes alloxan from uric acid
1832: Dobereiner produces furfural (a furan) by treating starch with sulfuric acid
1834: Runge obtains pyrrole ("fiery oil") by dry distillation of bones
1906: Friedlander synthesizes indigo dye, allowing synthetic chemistry to displace a large agricultural industry
1936: Treibs isolates chlorophyll derivatives from crude oil, explaining the biological origin of petroleum.
1951: Chargaff's rules are described, highlighting the role of heterocyclic compounds (purines and pyrimidines) in the genetic code.
Uses
Heterocyclic compounds are pervasive in many areas of life sciences and technology. Many drugs are heterocyclic compounds.
See also
Spiroketals
References
External links
Hantzsch-Widman nomenclature, IUPAC
Heterocyclic amines in cooked meat, US CDC
List of known and probable carcinogens, American Cancer Society
List of known carcinogens by the State of California, Proposition 65 (more comprehensive) | Heterocyclic compound | [
"Chemistry"
] | 1,407 | [
"Organic compounds",
"Heterocyclic compounds"
] |
13,746 | https://en.wikipedia.org/wiki/Hydrostatic%20shock | Hydrostatic shock, also known as Hydro-shock, is the controversial concept that a penetrating projectile (such as a bullet) can produce a pressure wave that causes "remote neural damage", "subtle damage in neural tissues" and "rapid effects" in living targets. It has also been suggested that pressure wave effects can cause indirect bone fractures at a distance from the projectile path, although it was later demonstrated that indirect bone fractures are caused by temporary cavity effects (strain placed on the bone by the radial tissue displacement produced by the temporary cavity formation).
Proponents of the concept argue that hydrostatic shock can produce remote neural damage and produce incapacitation more quickly than blood loss effects. In arguments about the differences in stopping power between calibers and between cartridge models, proponents of cartridges that are "light and fast" (such as the 9×19mm Parabellum) versus cartridges that are "slow and heavy" (such as the .45 ACP) often refer to this phenomenon.
Martin Fackler has argued that sonic pressure waves do not cause tissue disruption and that temporary cavity formation is the actual cause of tissue disruption mistakenly ascribed to sonic pressure waves. One review noted that strong opinion divided papers on whether the pressure wave contributes to wound injury. It ultimately concluded that no "conclusive evidence could be found for permanent pathological effects produced by the pressure wave".
Origin of the hypothesis
An early mention of "hydrostatic shock" appeared in Popular Mechanics in April 1942.
In the scientific literature, the first discussion of pressure waves created when a bullet hits a living target is presented by E. Newton Harvey and his research group at Princeton University in 1947:
Frank Chamberlin, a World War II trauma surgeon and ballistics researcher, noted remote pressure wave effects. Col. Chamberlin described what he called "explosive effects" and "hydraulic reaction" of bullets in tissue. ...liquids are put in motion by 'shock waves' or hydraulic effects... with liquid filled tissues, the effects and destruction of tissues extend in all directions far beyond the wound axis. He avoided the ambiguous use of the term "shock" because it can refer to either a specific kind of pressure wave associated with explosions and supersonic projectiles or to a medical condition in the body.
Col. Chamberlin recognized that many theories have been advanced in wound ballistics. During World War II he commanded an 8,500-bed hospital center that treated over 67,000 patients during the fourteen months that he operated it. P.O. Ackley estimates that 85% of the patients were suffering from gunshot wounds. Col. Chamberlin spent many hours interviewing patients as to their reactions to bullet wounds. He conducted many live animal experiments after his tour of duty. On the subject of wound ballistics theories, he wrote:
Other World War II era scientists noted remote pressure wave effects in the peripheral nerves. There was support for the idea of remote neural effects of ballistic pressure waves in the medical and scientific communities, but the phrase "hydrostatic shock" and similar phrases including "shock" were used mainly by gunwriters (such as Jack O'Connor) and the small arms industry (such as Roy Weatherby, and Federal "Hydra-Shok.")
Arguments against
Martin Fackler, a Vietnam-era trauma surgeon, wound ballistics researcher, a colonel in the U.S. Army and the head of the Wound Ballistics Laboratory for the U.S. Army's Medical Training Center, Letterman Institute, claimed that hydrostatic shock had been disproved and that the assertion that a pressure wave plays a role in injury or incapacitation is a myth. Others expressed similar views.
Fackler based his argument on the lithotriptor, a tool commonly used to break up kidney stones. A lithotriptor uses sonic pressure waves which are stronger than those caused by most handgun bullets, yet it produces no damage to soft tissues whatsoever. Hence, Fackler argued, ballistic pressure waves cannot damage tissue either.
Fackler claimed that a study of rifle bullet wounds in Vietnam (Wound Data and Munitions Effectiveness Team) found "no cases of bones being broken, or major vessels torn, that were not hit by the penetrating bullet. In only two cases, an organ that was not hit (but was within a few cm of the projectile path), suffered some disruption." Fackler cited a personal communication with R. F. Bellamy. However, Bellamy's published findings the following year estimated that 10% of fractures in the data set might be due to indirect injuries, and one specific case is described in detail (pp. 153–154). In addition, the published analysis documents five instances of abdominal wounding in cases where the bullet did not penetrate the abdominal cavity (pp. 149–152), a case of lung contusion resulting from a hit to the shoulder (pp. 146–149), and a case of indirect effects on the central nervous system (p. 155). Fackler's critics argue that his evidence does not contradict distant injuries, as Fackler claimed, but the WDMET data from Vietnam actually provides supporting evidence for it.
A summary of the debate was published in 2009 as part of a Historical Overview of Wound Ballistics Research.
Distant injuries in the WDMET data
The Wound Data and Munitions Effectiveness Team (WDMET) gathered data on wounds sustained during the Vietnam War. In their analysis of this data published in the Textbook of Military Medicine, Ronald Bellamy and Russ Zajtchuck point out a number of cases which seem to be examples of distant injuries. Bellamy and Zajtchuck describe three mechanisms of distant wounding due to pressure transients: 1) stress waves 2) shear waves and 3) a vascular pressure impulse.
After citing Harvey's conclusion that "stress waves probably do not cause any tissue damage" (p. 136), Bellamy and Zajtchuck express their view that Harvey's interpretation might not be definitive because they write "the possibility that stress waves from a penetrating projectile might also cause tissue damage cannot be ruled out." (p. 136) The WDMET data includes a case of a lung contusion resulting from a hit to the shoulder. The caption to Figure 4-40 (p. 149) says, "The pulmonary injury may be the result of a stress wave." They describe the possibility that a hit to a soldier's trapezius muscle caused temporary paralysis due to "the stress wave passing through the soldier's neck indirectly [causing] cervical cord dysfunction." (p. 155)
In addition to stress waves, Bellamy and Zajtchuck describe shear waves as a possible mechanism of indirect injuries in the WDMET data. They estimate that 10% of bone fractures in the data may be the result of indirect injuries, that is, bones fractured by the bullet passing close to the bone without a direct impact. A Chinese experiment is cited which provides a formula estimating how pressure magnitude decreases with distance. Together with the difference between strength of human bones and strength of the animal bones in the Chinese experiment, Bellamy and Zajtchuck use this formula to estimate that assault rifle rounds "passing within a centimeter of a long bone might very well be capable of causing an indirect fracture." (p. 153) Bellamy and Zajtchuck suggest the fracture in Figures 4-46 and 4-47 is likely an indirect fracture of this type. Damage due to shear waves extends to even greater distances in abdominal injuries in the WDMET data. Bellamy and Zajtchuck write, "The abdomen is one body region in which damage from indirect effects may be common." (p. 150) Injuries to the liver and bowel shown in Figures 4-42 and 4-43 are described, "The damage shown in these examples extends far beyond the tissue that is likely to direct contact with the projectile." (p. 150)
In addition to providing examples from the WDMET data for indirect injury due to propagating shear and stress waves, Bellamy and Zajtchuck expresses an openness to the idea of pressure transients propagating via blood vessels can cause indirect injuries. "For example, pressure transients arising from an abdominal gunshot wound might propagate through the vena cavae and jugular venous system into the cranial cavity and cause a precipitous rise in intracranial pressure there, with attendant transient neurological dysfunction." (p. 154) However, no examples of this injury mechanism are presented from the WDMET data. However, the authors suggest the need for additional studies writing, "Clinical and experimental data need to be gathered before such indirect injuries can be confirmed." Distant injuries of this nature were later confirmed in the experimental data of Swedish and Chinese researchers, in the clinical findings of Krajsa and in autopsy findings from Iraq.
Autopsy findings
Proponents of the concept point to human autopsy results demonstrating brain hemorrhaging from fatal hits to the chest, including cases with handgun bullets. Thirty-three cases of fatal penetrating chest wounds by a single bullet were selected from a much larger set by excluding all other traumatic factors, including past history.
An 8-month study in Iraq performed in 2010 and published in 2011 reports on autopsies of 30 gunshot victims struck with high-velocity (greater than 2500 fps) rifle bullets. The authors determined that the lungs and chest are the most susceptible to distant wounding, followed by the abdomen. The study noted that the "sample size was so small [too small] to reach the level of statistical significance". Nevertheless, the authors conclude:
Inferences from blast pressure wave observations
A shock wave can be created when fluid is rapidly displaced by an explosive or projectile. Tissue behaves similarly enough to water that a sonic pressure wave can be created by a bullet impact, generating pressures in excess of .
Duncan MacPherson, a former member of the International Wound Ballistics Association and author of the book, Bullet Penetration, claimed that shock waves cannot result from bullet impacts with tissue. In contrast, Brad Sturtevant, a leading researcher in shock wave physics at Caltech for many decades, found that shock waves can result from handgun bullet impacts in tissue. Other sources indicate that ballistic impacts can create shock waves in tissue.
Blast and ballistic pressure waves have physical similarities. Prior to wave reflection, they both are characterized by a steep wave front followed by a nearly exponential decay at close distances. They have similarities in how they cause neural effects in the brain. In tissue, both types of pressure waves have similar magnitudes, duration, and frequency characteristics. Both have been shown to cause damage in the hippocampus. It has been hypothesized that both reach the brain from the thoracic cavity via major blood vessels.
For example, Ibolja Cernak, a leading researcher in blast wave injury at the Applied Physics Laboratory at Johns Hopkins University, hypothesized, "alterations in brain function following blast exposure are induced by kinetic energy transfer of blast overpressure via great blood vessels in abdomen and thorax to the central nervous system." This hypothesis is supported by observations of neural effects in the brain from localized blast exposure focused on the lungs in experiments in animals.
Physics of ballistic pressure waves
A number of papers describe the physics of ballistic pressure waves created when a high-speed projectile enters a viscous medium. These results show that ballistic impacts produce pressure waves that propagate at close to the speed of sound.
Lee et al. present an analytical model showing that unreflected ballistic pressure waves are well approximated by an exponential decay, which is similar to blast pressure waves. Lee et al. note the importance of the energy transfer:
The rigorous calculations of Lee et al. require knowing the drag coefficient and frontal area of the penetrating projectile at every instant of the penetration. Since this is not generally possible with expanding handgun bullets, Courtney and Courtney developed a model for estimating the peak pressure waves of handgun bullets from the impact energy and penetration depth in ballistic gelatin. This model agrees with the more rigorous approach of Lee et al. for projectiles where they can both be applied. For expanding handgun bullets, the peak pressure wave magnitude is proportional to the bullet's kinetic energy divided by the penetration depth.
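As a rough illustration of the quantity this model keys on, the sketch below computes muzzle kinetic energy and the energy-to-penetration ratio for two hypothetical handgun loads. The masses, velocities, and penetration depths are invented example values, and no proportionality constant is applied, since the text only states that peak pressure scales with kinetic energy divided by penetration depth.

```python
# Kinetic energy and energy per unit penetration for hypothetical handgun loads.
GRAIN_TO_KG = 6.479891e-5    # kilograms per grain
FPS_TO_MS = 0.3048           # (m/s) per (ft/s)

loads = {
    # name: (bullet mass in grains, velocity in ft/s, gelatin penetration in m)
    "9 mm JHP (example)":    (124, 1150, 0.33),
    ".45 ACP JHP (example)": (230,  880, 0.36),
}

for name, (grains, fps, depth) in loads.items():
    m = grains * GRAIN_TO_KG
    v = fps * FPS_TO_MS
    energy = 0.5 * m * v**2                    # joules
    print(f"{name}: E = {energy:.0f} J, E/d = {energy / depth:.0f} J/m")
```

In this framework, two loads with similar muzzle energy but different penetration depths would give different estimated peak pressure waves, with the shallower-penetrating (more rapidly decelerating) bullet producing the larger value.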
Remote cerebral effects of ballistic pressure waves
Göransson et al. were the first contemporary researchers to present compelling evidence for remote cerebral effects of extremity bullet impact. They observed changes in EEG readings from pigs shot in the thigh. A follow-up experiment by Suneson et al. implanted high-speed pressure transducers into the brain of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed apnea, depressed EEG readings, and neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh.
The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs
which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
Remote pressure wave effects in the spine and internal organs
In a study of a handgun injury, Sturtevant found that pressure waves from a bullet impact in the torso can reach the spine and that a focusing effect from concave surfaces can concentrate the pressure wave on the spinal cord producing significant injury. This is consistent with other work showing remote spinal cord injuries from ballistic impacts.
Roberts et al. present both experimental work and finite element modeling showing that there can be considerable pressure wave magnitudes in the thoracic cavity for handgun projectiles stopped by a Kevlar vest. For example, an 8 gram projectile at 360 m/s impacting a NIJ level II vest over the sternum can produce an estimated pressure wave level of nearly 2.0 MPa (280 psi) in the heart and a pressure wave level of nearly 1.5 MPa (210 psi) in the lungs. Impacting over the liver can produce an estimated pressure wave level of 2.0 MPa (280 psi) in the liver.
Energy transfer required for remote neural effects
The work of Courtney et al. supports the role of a ballistic pressure wave in incapacitation and injury. The work of Suneson et al. and Courtney et al. suggest that remote neural effects can occur with levels of energy transfer possible with handguns, about . Using sensitive biochemical techniques, the work of Wang et al. suggests even lower impact energy thresholds for remote neural injury to the brain. In analysis of experiments of dogs shot in the thigh they report highly significant (p < 0.01), easily detectable neural effects in the hypothalamus and hippocampus with energy transfer levels close to . Wang et al. reports less significant (p < 0.05) remote effects in the hypothalamus with energy transfer just under .
Even though Wang et al. document remote neural damage for low levels of energy transfer, roughly , these levels of neural damage are probably too small to contribute to rapid incapacitation. Courtney and Courtney believe that remote neural effects only begin to make significant contributions to rapid incapacitation for ballistic pressure wave levels above (corresponds to transferring roughly in of penetration) and become easily observable above (corresponds to transferring roughly in of penetration). Incapacitating effects in this range of energy transfer are consistent with observations of remote spinal injuries, observations of suppressed EEGs and apnea in pigs and with observations of incapacitating effects of ballistic pressure waves without a wound channel.
Other scientific findings
The scientific literature contains significant other findings regarding injury mechanisms of ballistic pressure waves. Ming et al. found that ballistic pressure waves can break bones. Tikka et al. reports abdominal pressure changes produced in pigs hit in one thigh. Akimov et al. report on injuries to the nerve trunk from gunshot wounds to the extremities.
Hydrostatic shock as a factor in selection of ammunition
Ammunition selection for self-defense, military, and law enforcement
In self-defense, military, and law enforcement communities, opinions vary regarding the importance of remote wounding effects in ammunition design and selection. In his book on hostage rescuers, Leroy Thompson discusses the importance of hydrostatic shock in choosing a specific design of .357 Magnum and 9×19mm Parabellum bullets. In Armed and Female, Paxton Quigley explains that hydrostatic shock is the real source of "stopping power." Jim Carmichael, who served as shooting editor for Outdoor Life magazine for 25 years, believes that hydrostatic shock is important to "a more immediate disabling effect" and is a key difference in the performance of .38 Special and .357 Magnum hollow point bullets. In "The search for an effective police handgun," Allen Bristow describes that police departments recognize the importance of hydrostatic shock when choosing ammunition. A research group at West Point suggests handgun loads with at least of energy and of penetration and recommends:
A number of law enforcement and military agencies have adopted the 5.7×28mm cartridge. These agencies include the Navy SEALs and the Federal Protective Service branch of the ICE. In contrast, some defense contractors, law enforcement analysts, and military analysts say that hydrostatic shock is an unimportant factor when selecting cartridges for a particular use because any incapacitating effect it may have on a target is difficult to measure and inconsistent from one individual to the next. This is in contrast to factors such as proper shot placement and massive blood loss which are almost always eventually incapacitating for nearly every individual.
The FBI recommends that loads intended for self-defense and law enforcement applications meet a minimum penetration requirement of in ballistic gelatin and explicitly advises against selecting rounds based on hydrostatic shock effects.
Ammunition selection for hunting
Hydrostatic shock is commonly considered as a factor in the selection of hunting ammunition. Peter Capstick explains that hydrostatic shock may have value for animals up to the size of white-tailed deer, but the ratio of energy transfer to animal weight is an important consideration for larger animals. If the animal's weight exceeds the bullet's energy transfer, penetration in an undeviating line to a vital organ is a much more important consideration than energy transfer and hydrostatic shock. Jim Carmichael, in contrast, describes evidence that hydrostatic shock can affect animals as large as Cape Buffalo in the results of a carefully controlled study carried out by veterinarians in a buffalo culling operation.
Randall Gilbert describes hydrostatic shock as an important factor in bullet performance on whitetail deer, "When it [a bullet] enters a whitetail’s body, huge accompanying shock waves send vast amounts of energy through nearby organs, sending them into arrest or shut down." Dave Ehrig expresses the view that hydrostatic shock depends on impact velocities above per second. Sid Evans explains the performance of the Nosler Partition bullet and Federal Cartridge Company's decision to load this bullet in terms of the large tissue cavitation and hydrostatic shock produced from the frontal diameter of the expanded bullet. The North American Hunting Club suggests big game cartridges that create enough hydrostatic shock to quickly bring animals down.
See also
Blast injury
Shock (fluid dynamics)
Stopping power
Table of handgun and rifle cartridges
References
External links
Terminal Ballistics Research
Ballistics | Hydrostatic shock | [
"Physics"
] | 3,977 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
13,755 | https://en.wikipedia.org/wiki/Hull%20%28watercraft%29 | A hull is the watertight body of a ship, boat, submarine, or flying boat. The hull may open at the top (such as a dinghy), or it may be fully or partially covered with a deck. Atop the deck may be a deckhouse and other superstructures, such as a funnel, derrick, or mast. The line where the hull meets the water surface is called the waterline.
General features
There is a wide variety of hull types that are chosen for suitability for different usages, the hull shape being dependent upon the needs of the design. Shapes range from a nearly perfect box, in the case of scow barges, to a needle-sharp surface of revolution in the case of a racing multihull sailboat. The shape is chosen to strike a balance between cost, hydrostatic considerations (accommodation, load carrying, and stability), hydrodynamics (speed, power requirements, and motion and behavior in a seaway) and special considerations for the ship's role, such as the rounded bow of an icebreaker or the flat bottom of a landing craft.
In a typical modern steel ship, the hull will have watertight decks, and major transverse members called bulkheads. There may also be intermediate members such as girders, stringers and webs, and minor members called ordinary transverse frames, frames, or longitudinals, depending on the structural arrangement. The uppermost continuous deck may be called the "upper deck", "weather deck", "spar deck", "main deck", or simply "deck". The particular name given depends on the context—the type of ship or boat, the arrangement, or even where it sails.
In a typical wooden sailboat, the hull is constructed of wooden planking, supported by transverse frames (often referred to as ribs) and bulkheads, which are further tied together by longitudinal stringers or ceiling. Often but not always there is a centerline longitudinal member called a keel. In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb, or other material.
Perhaps the earliest proper hulls were built by the Ancient Egyptians, who by 3000 BC knew how to assemble wooden planks into a hull.
Hull shapes
Hulls come in many varieties and can have composite shape, (e.g., a fine entry forward and inverted bell shape aft), but are grouped primarily as follows:
Chined and hard-chined. Examples are the flat-bottom (chined), v-bottom, and multi-chine hull (several gentler hard chines, still not smooth). These types have at least one pronounced knuckle throughout all or most of their length.
Moulded, round bilged or soft-chined. These hull shapes all have smooth curves. Examples are the round bilge, semi-round bilge, and s-bottom hull.
Planing and displacement hulls
Displacement hull: here the hull is supported exclusively or predominantly by buoyancy. Vessels with this type of hull travel through the water at a limited speed that is governed by the waterline length, except for especially narrow hulls, such as those of sailing multihulls, which are less limited in this way (a rule-of-thumb estimate of this speed limit is sketched after this list).
Planing hull: here, the planing hull form is configured to develop positive dynamic pressure so that its draft decreases with increasing speed. The dynamic lift reduces the wetted surface and therefore also the drag. Such hulls are sometimes flat-bottomed, sometimes V-bottomed and more rarely, round-bilged. The most common form is to have at least one chine, which makes for more efficient planing and can throw spray down. Planing hulls are more efficient at higher speeds, although they still require more energy to achieve these speeds. An effective planing hull must be as light as possible with flat surfaces that are consistent with good sea keeping. Sailboats that plane must also sail efficiently in displacement mode in light winds.
Semi-displacement, or semi-planing: here the hull form is capable of developing a moderate amount of dynamic lift; however, most of the vessel's weight is still supported through buoyancy.
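The waterline-length limit on displacement hulls mentioned above is commonly quantified with the classic hull-speed rule of thumb, v ≈ 1.34 × √LWL, with v in knots and LWL in feet. This is an empirical guideline rather than a figure from this article; the minimal sketch below simply evaluates it for a few example waterline lengths.

```python
import math

def hull_speed_knots(lwl_feet: float) -> float:
    """Rule-of-thumb top speed of a displacement hull: ~1.34 * sqrt(LWL in feet)."""
    return 1.34 * math.sqrt(lwl_feet)

# Example waterline lengths in feet, chosen for illustration only.
for lwl in (16, 25, 36, 100):
    print(f"LWL {lwl:>3} ft -> about {hull_speed_knots(lwl):.1f} knots")
```

Longer waterlines raise the limiting speed only as the square root of length, which is why displacement vessels gain speed slowly with size, while planing and very slender hulls can exceed the rule.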
Hull forms
At present, the most widely used form is the round bilge hull.
With a small payload, such a craft has less of its hull below the waterline, giving less resistance and more speed. With a greater payload, resistance is greater and speed lower, but the hull's outward bend provides smoother performance in waves. As such, the inverted bell shape is a popular form used with planing hulls.
Chined and hard-chined hulls
A chined hull does not have a smooth rounded transition between bottom and sides. Instead, its contours are interrupted by sharp angles where predominantly longitudinal panels of the hull meet. The sharper the intersection (the more acute the angle), the "harder" the chine. More than one chine per side is possible.
The Cajun "pirogue" is an example of a craft with hard chines.
Benefits of this type of hull include potentially lower production cost and a (usually) fairly flat bottom, making the boat faster at planing. A hard chined hull resists rolling (in smooth water) more than does a hull with rounded bilges: the chine creates turbulence and drag that resist the rolling motion as the hull moves through the water, whereas the rounded bilge offers less flow resistance around the turn. In rough seas, this can make the boat roll more, as the motion drags first down, then up, on a chine: round-bilge boats are more seakindly in waves, as a result.
Chined hulls may have one of three shapes:
Flat-bottom chined hulls
Multi-chined hulls
V-bottom chined hulls. Sometimes called hard chine.
Each of these chine hulls has its own unique characteristics and use. The flat-bottom hull has high initial stability but high drag. To counter the high drag, hull forms are narrow and sometimes severely tapered at bow and stern. This leads to poor stability when heeled in a sailboat. This is often countered by using heavy interior ballast on sailing versions. They are best suited to sheltered inshore waters. Early racing power boats were fine forward and flat aft. This produced maximum lift and a smooth, fast ride in flat water, but this hull form is easily unsettled in waves. The multi-chine hull approximates a curved hull form. It has less drag than a flat-bottom boat. Multi-chines are more complex to build but produce a more seaworthy hull form. They are usually displacement hulls. V- or arc-bottom chine boats have a V shape between 6° and 23°; this is called the deadrise angle. The flatter shape of a 6-degree hull will plane with less wind or a lower-horsepower engine but will pound more in waves. The deep V form (between 18 and 23 degrees) is only suited to high-powered planing boats. They require more powerful engines to lift the boat onto the plane but give a faster, smoother ride in waves. Displacement chined hulls have more wetted surface area, hence more drag, than an equivalent round-hull form, for any given displacement.
Smooth curve hulls
Smooth curve hulls are hulls that, just like the curved hulls, use a centreboard or an attached keel.
Semi-round bilge hulls are somewhat less round. The advantage of the semi-round bilge is that it is a compromise between the S-bottom and the chined hull. Typical examples of a semi-round bilge hull can be found in the Centaur and Laser sailing dinghies.
S-bottom hulls are sailing boat hulls with a midships transverse half-section shaped like an s. In the s-bottom, the hull has round bilges and merges smoothly with the keel, and there are no sharp corners on the hull sides between the keel centreline and the sheer line. Boats with this hull form may have a long fixed deep keel, or a long shallow fixed keel with a centreboard swing keel inside. Ballast may be internal, external, or a combination. This hull form was most popular in the late 19th and early to mid 20th centuries. Examples of small sailboats that use this s-shape are the Yngling and Randmeer.
Appendages
Control devices such as a rudder, trim tabs or stabilizing fins may be fitted.
A keel may be fitted on a hull to increase the transverse stability, directional stability or to create lift.
Retractable appendages include centreboards and daggerboards.
A forward protrusion below the waterline is called a bulbous bow. These are fitted on some hulls to reduce the wave making resistance drag and thereby increase fuel efficiency.
Terms
Baseline is a level reference line from which vertical distances are measured.
Bow is the front part of the hull.
Amidships is the middle portion of the vessel in the fore and aft direction.
Port is the left side of the vessel when facing the bow from on board.
Starboard is the right side of the vessel when facing the bow from on board.
Stern is the rear part of the hull.
Waterline is an imaginary line circumscribing the hull that matches the surface of the water when the hull is not moving.
Metrics
Hull forms are defined as follows:
Block measures that define the principal dimensions. They are:
Beam or breadth (B) is the width of the hull. (ex: BWL is the maximum beam at the waterline)
Draft (d) or (T) is the vertical distance from the bottom of the keel to the waterline.
Freeboard (FB) is depth plus the height of the keel structure minus draft.
Length at the waterline (LWL) is the length from the forwardmost point of the waterline measured in profile to the stern-most point of the waterline.
Length between perpendiculars (LBP or LPP) is the length of the summer load waterline from the stern post to the point where it crosses the stem. (see also p/p)
Length overall (LOA) is the extreme length from one end to the other.
Moulded depth (D) is the vertical distance measured from the top of the keel to the underside of the upper deck at side.
Form derivatives that are calculated from the shape and the block measures. They are:
Displacement (Δ) is the weight of water equivalent to the immersed volume of the hull.
Longitudinal centre of buoyancy (LCB) is the longitudinal position of the centroid of the displaced volume, often given as the distance from a point of reference (often midships) to the centroid of the static displaced volume. Note that the longitudinal centre of gravity or centre of the weight of the vessel must align with the LCB when the hull is in equilibrium.
Longitudinal centre of flotation (LCF) is the longitudinal position of the centroid of the waterplane area, usually expressed as longitudinal distance from a point of reference (often midships) to the centre of the area of the static waterplane. This can be visualized as being the area defined by the water's surface and the hull.
Vertical centre of buoyancy (VCB) is the vertical position of the centroid of displaced volume, generally given as a distance from a point of reference (such as the baseline) to the centre of the static displaced volume.
Volume (V or ∇) is the volume of water displaced by the hull.
Coefficients help compare hull forms as well:
Block coefficient (Cb) is the volume (V) divided by LWL × BWL × TWL. If you draw a box around the submerged part of the ship, it is the ratio of the box volume occupied by the ship. It gives a sense of how much of the block defined by the LWL, beam (B) & draft (T) is filled by the hull. Full forms such as oil tankers will have a high Cb, whereas fine shapes such as sailboats will have a low Cb.
Midship coefficient (Cm or Cx) is the cross-sectional area (Ax) of the slice at midships (or at the largest section for Cx) divided by beam x draft. It displays the ratio of the largest underwater section of the hull to a rectangle of the same overall width and depth as the underwater section of the hull. This defines the fullness of the underbody. A low Cm indicates a cut-away mid-section and a high Cm indicates a boxy section shape. Sailboats have a cut-away mid-section with low Cx whereas cargo vessels have a boxy section with high Cx to help increase the Cb.
Prismatic coefficient (Cp) is the volume (V) divided by LWL × Ax. It displays the ratio of the immersed volume of the hull to a volume of a prism with equal length to the ship and cross-sectional area equal to the largest underwater section of the hull (midship section). This is used to evaluate the distribution of the volume of the underbody. A low or fine Cp indicates a full mid-section and fine ends; a high or full Cp indicates a boat with fuller ends. Planing hulls and other high-speed hulls tend towards a higher Cp. Efficient displacement hulls travelling at a low Froude number will tend to have a low Cp.
Waterplane coefficient (Cw) is the waterplane area divided by LWL x BWL. The waterplane coefficient expresses the fullness of the waterplane, or the ratio of the waterplane area to a rectangle of the same length and width. A low Cw figure indicates fine ends and a high Cw figure indicates fuller ends. High Cw improves stability as well as handling behavior in rough conditions.
Computer-aided design
Use of computer-aided design has superseded paper-based methods of ship design that relied on manual calculations and lines drawing. Since the early 1990s, a variety of commercial and freeware software packages specialized for naval architecture have been developed that provide 3D drafting capabilities combined with calculation modules for hydrostatics and hydrodynamics. These may be referred to as geometric modeling systems for naval architecture.
See also
Hull classification symbol
Notes
References
Shipbuilding
Watercraft components
Naval architecture
Structural system
Structural engineering
Ship measurements | Hull (watercraft) | [
"Technology",
"Engineering"
] | 3,001 | [
"Structural engineering",
"Naval architecture",
"Building engineering",
"Shipbuilding",
"Structural system",
"Construction",
"Civil engineering",
"Marine engineering"
] |
13,764 | https://en.wikipedia.org/wiki/Hassium | Hassium is a synthetic chemical element; it has symbol Hs and atomic number 108. It is highly radioactive: its most stable known isotopes have half-lives of about ten seconds. One of its isotopes, Hs, has magic numbers of protons and neutrons for deformed nuclei, giving it greater stability against spontaneous fission. Hassium is a superheavy element; it has been produced in a laboratory in very small quantities by fusing heavy nuclei with lighter ones. Natural occurrences of hassium have been hypothesized but never found.
In the periodic table, hassium is a transactinide element, a member of the 7th period and group 8; it is thus the sixth member of the 6d series of transition metals. Chemistry experiments have confirmed that hassium behaves as the heavier homologue to osmium, reacting readily with oxygen to form a volatile tetroxide. The chemical properties of hassium have been only partly characterized, but they compare well with the chemistry of the other group 8 elements.
The main innovation that led to the discovery of hassium was cold fusion, where the fused nuclei do not differ by mass as much as in earlier techniques. It relied on greater stability of target nuclei, which in turn decreased excitation energy. This decreased the number of neutrons ejected during synthesis, creating heavier, more stable resulting nuclei. The technique was first tested at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in 1974. JINR used this technique to attempt synthesis of element 108 in 1978, in 1983, and in 1984; the latter experiment resulted in a claim that element 108 had been produced. Later in 1984, a synthesis claim followed from the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Hesse, West Germany. The 1993 report by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP), concluded that the report from Darmstadt was conclusive on its own whereas that from Dubna was not, and major credit was assigned to the German scientists. GSI formally announced they wished to name the element hassium after the German state of Hesse (Hassia in Latin), home to the facility in 1992; this name was accepted as final in 1997.
Introduction to the heaviest elements
Discovery
Cold fusion
Nuclear reactions used in the 1960s resulted in high excitation energies that required expulsion of four or five neutrons; these reactions used targets made of elements with high atomic numbers to maximize the size difference between the two nuclei in a reaction. While this increased the chance of fusion due to the lower electrostatic repulsion between target and projectile, the formed compound nuclei often broke apart and did not survive to form a new element. Moreover, fusion inevitably produces neutron-poor nuclei, as heavier elements need more neutrons per proton for stability; therefore, the necessary ejection of neutrons results in final products that are typically shorter-lived. As such, light beams (six to ten protons) allowed synthesis of elements only up to 106.
To advance to heavier elements, Soviet physicist Yuri Oganessian at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, proposed a different mechanism, in which the bombarded nucleus would be lead-208, which has magic numbers of protons and neutrons, or another nucleus close to it. Each proton and neutron has a fixed rest energy; those of all protons are equal and so are those of all neutrons. In a nucleus, some of this energy is diverted to binding protons and neutrons; if a nucleus has a magic number of protons and/or neutrons, then even more of its rest energy is diverted, which makes the nuclide more stable. This additional stability requires more energy for an external nucleus to break the existing one and penetrate it. More energy diverted to binding nucleons means less rest energy, which in turn means less mass (mass is proportional to rest energy). More equal atomic numbers of the reacting nuclei result in greater electrostatic repulsion between them, but the lower mass excess of the target nucleus balances it. This leaves less excitation energy for the new compound nucleus, which necessitates fewer neutron ejections to reach a stable state. Due to this energy difference, the former mechanism became known as "hot fusion" and the latter as "cold fusion".
Cold fusion was first declared successful in 1974 at JINR, when it was tested for synthesis of the yet-undiscovered element106. These new nuclei were projected to decay via spontaneous fission. The physicists at JINR concluded element 106 was produced in the experiment because no fissioning nucleus known at the time showed parameters of fission similar to what was observed during the experiment and because changing either of the two nuclei in the reactions negated the observed effects. Physicists at Lawrence Berkeley Laboratory (LBL; originally Radiation Laboratory, RL, and later Lawrence Berkeley National Laboratory, LBNL) of the University of California in Berkeley, California, United States, also expressed great interest in the new technique. When asked about how far this new method could go and if lead targets were a physics' Klondike, Oganessian responded, "Klondike may be an exaggeration [...] But soon, we will try to get elements 107... 108 in these reactions."
Reports
Synthesis of element 108 was first attempted in 1978 by a team led by Oganessian at JINR. The team used a reaction that would generate isotopes of element 108 from the fusion of radium (specifically, the isotope 226Ra) with calcium (48Ca). The researchers were uncertain in interpreting their data, and their paper did not unambiguously claim to have discovered the element. The same year, another team at JINR investigated the possibility of synthesis of element 108 in reactions between lead and iron; they were uncertain in interpreting the data, suggesting the possibility that element 108 had not been created.
In 1983, new experiments were performed at JINR. The experiments probably resulted in the synthesis of element108; bismuth was bombarded with manganese to obtain 108, lead (Pb) was bombarded with iron (Fe) to obtain 108, and californium was bombarded with neon to obtain 108. These experiments were not claimed as a discovery and Oganessian announced them in a conference rather than in a written report.
In 1984, JINR researchers in Dubna performed experiments set up identically to the previous ones; they bombarded bismuth and lead targets with ions of manganese and iron, respectively. Twenty-one spontaneous fission events were recorded; the researchers concluded they were caused by 108.
Later in 1984, a research team led by Peter Armbruster and Gottfried Münzenberg at Gesellschaft für Schwerionenforschung (GSI; Institute for Heavy Ion Research) in Darmstadt, Hesse, West Germany, tried to create element108. The team bombarded a lead (Pb) target with accelerated iron (Fe) nuclei. GSI's experiment to create element108 was delayed until after their creation of element109 in 1982, as prior calculations had suggested that even–even isotopes of element108 would have spontaneous fission half-lives of less than one microsecond, making them difficult to detect and identify. The element108 experiment finally went ahead after 109 had been synthesized and was found to decay by alpha emission, suggesting that isotopes of element108 would do likewise, and this was corroborated by an experiment aimed at synthesizing isotopes of element106. GSI reported synthesis of three atoms of 108. Two years later, they reported synthesis of one atom of the even–even 108.
Arbitration
In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed the Transfermium Working Group (TWG) to assess discoveries and establish final names for elements with atomic numbers greater than 100. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria for recognition of an element and in 1991, they finished the work of assessing discoveries and disbanded. These results were published in 1993.
According to the report, the 1984 works from JINR and GSI simultaneously and independently established synthesis of element108. Of the two 1984 works, the one from GSI was said to be sufficient as a discovery on its own. The JINR work, which preceded the GSI one, "very probably" displayed synthesis of element108. However, that was determined in retrospect given the work from Darmstadt; the JINR work focused on chemically identifying remote granddaughters of element108 isotopes (which could not exclude the possibility that these daughter isotopes had other progenitors), while the GSI work clearly identified the decay path of those element108 isotopes. The report concluded that the major credit should be awarded to GSI. In written responses to this ruling, both JINR and GSI agreed with its conclusions. In the same response, GSI confirmed that they and JINR were able to resolve all conflicts between them.
Naming
Historically, a newly discovered element was named by its discoverer. The first regulation came in 1947, when IUPAC decided naming required regulation in case there are conflicting names. These matters were to be resolved by the Commission of Inorganic Nomenclature and the Commission of Atomic Weights. They would review the names in case of a conflict and select one; the decision would be based on a number of factors, such as usage, and would not be an indicator of priority of a claim. The two commissions would recommend a name to the IUPAC Council, which would be the final authority. The discoverers held the right to name an element, but their name would be subject to approval by IUPAC. The Commission of Atomic Weights distanced itself from element naming in most cases.
In Mendeleev's nomenclature for unnamed and undiscovered elements, hassium would be called "eka-osmium", as in "the first element below osmium in the periodic table" (from Sanskrit eka meaning "one"). In 1979, IUPAC published recommendations according to which the element was to be called "unniloctium" (symbol "Uno"), a systematic element name as a placeholder until the element was discovered and the discovery then confirmed, and a permanent name was decided. Although these recommendations were widely followed in the chemical community, the competing physicists in the field ignored them. They either called it "element108", with the symbols E108, (108) or 108, or used the proposed name "hassium".
In 1990, in an attempt to break a deadlock in establishing priority of discovery and naming of several elements, IUPAC reaffirmed in its nomenclature of inorganic chemistry that after existence of an element was established, the discoverers could propose a name. (Also, the Commission of Atomic Weights was excluded from the naming process.) The first publication on criteria for an element discovery, released in 1991, specified the need for recognition by TWG.
Armbruster and his colleagues, the officially recognized German discoverers, held a naming ceremony for the elements 107 through 109, which had all been recognized as discovered by GSI, on 7September 1992. For element108, the scientists proposed the name "hassium". It is derived from the Latin name Hassia for the German state of Hesse where the institute is located. This name was proposed to IUPAC in a written response to their ruling on priority of discovery claims of elements, signed 29 September 1992.
The process of naming of element 108 was a part of a larger process of naming a number of elements starting with element 101; three teams—JINR, GSI, and LBL—claimed discovery of several elements and the right to name those elements. Sometimes, these claims clashed; since a discoverer was considered entitled to naming of an element, conflicts over priority of discovery often resulted in conflicts over names of these new elements. These conflicts became known as the Transfermium Wars. Different suggestions were made to name the whole set of elements from 101 onward, and these occasionally assigned names suggested by one team to elements discovered by another. However, not all suggestions were met with equal approval; the teams openly protested naming proposals on several occasions.
In 1994, IUPAC Commission on Nomenclature of Inorganic Chemistry recommended that element108 be named "hahnium" (Hn) after German physicist Otto Hahn so elements named after Hahn and Lise Meitner (it was recommended element109 should be named meitnerium, following GSI's suggestion) would be next to each other, honouring their joint discovery of nuclear fission; IUPAC commented that they felt the German suggestion was obscure. GSI protested, saying this proposal contradicted the long-standing convention of giving the discoverer the right to suggest a name; the American Chemical Society supported GSI. The name "hahnium", albeit with the different symbol Ha, had already been proposed and used by the American scientists for element105, for which they had a discovery dispute with JINR; they thus protested the confusing scrambling of names. Following the uproar, IUPAC formed an ad hoc committee of representatives from the national adhering organizations of the three countries home to the competing institutions; they produced a new set of names in 1995. Element108 was again named hahnium; this proposal was also retracted. The final compromise was reached in 1996 and published in 1997; element108 was named hassium (Hs). Simultaneously, the name dubnium (Db; from Dubna, the JINR location) was assigned to element105, and the name hahnium was not used for any element.
The official justification for this naming, alongside that of darmstadtium for element110, was that it completed a set of geographic names for the location of the GSI; this set had been initiated by 19th-century names europium and germanium. This set would serve as a response to earlier naming of americium, californium, and berkelium for elements discovered in Berkeley. Armbruster commented on this, "this bad tradition was established by Berkeley. We wanted to do it for Europe." Later, when commenting on the naming of element112, Armbruster said, "I did everything to ensure that we do not continue with German scientists and German towns."
Isotopes
Hassium has no stable or naturally occurring isotopes. Several radioisotopes have been synthesized in the lab, either by fusing two atoms or by observing the decay of heavier elements. As of 2019, the quantity of all hassium ever produced was on the order of hundreds of atoms. Thirteen isotopes with mass numbers 263 through 277 (except for 274 and 276) have been reported, six of which—Hs—have known metastable states, though that of Hs is unconfirmed. Most of these isotopes decay mainly through alpha decay; this is the most common for all isotopes for which comprehensive decay characteristics are available; the only exception is Hs, which undergoes spontaneous fission. Lighter isotopes were usually synthesized by direct fusion of two nuclei, whereas heavier isotopes were typically observed as decay products of nuclei with larger atomic numbers.
Atomic nuclei have well-established nuclear shells, which make nuclei more stable. If a nucleus has certain numbers (magic numbers) of protons or neutrons, that complete a nuclear shell, then the nucleus is even more stable against decay. The highest known magic numbers are 82 for protons and 126 for neutrons. This notion is sometimes expanded to include additional numbers between those magic numbers, which also provide some additional stability and indicate closure of "sub-shells". Unlike the better-known lighter nuclei, superheavy nuclei are deformed. Until the 1960s, the liquid drop model was the dominant explanation for nuclear structure. It suggested that the fission barrier would disappear for nuclei with ~280nucleons. It was thus thought that spontaneous fission would occur nearly instantly before nuclei could form a structure that could stabilize them; it appeared that nuclei with Z≈103 were too heavy to exist for a considerable length of time.
The later nuclear shell model suggested that nuclei with ~300 nucleons would form an island of stability where nuclei will be more resistant to spontaneous fission and will mainly undergo alpha decay with longer half-lives, and the next doubly magic nucleus (having magic numbers of both protons and neutrons) is expected to lie in the center of the island of stability near Z=110–114 and the predicted magic neutron number N=184. Subsequent discoveries suggested that the predicted island might be further than originally anticipated. They also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects, against alpha decay and especially against spontaneous fission. The center of the region on a chart of nuclides that would correspond to this stability for deformed nuclei was determined as Hs, with 108 expected to be a magic number for protons for deformed nuclei—nuclei that are far from spherical—and 162 a magic number for neutrons for such nuclei. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei.
Theoretical models predict a region of instability for some hassium isotopes to lie around A=275 and N=168–170, which is between the predicted neutron shell closures at N=162 for deformed nuclei and N=184 for spherical nuclei. Nuclides in this region are predicted to have low fission barrier heights, resulting in short partial half-lives toward spontaneous fission. This prediction is supported by the observed 11-millisecond half-life of Hs and the 5-millisecond half-life of the neighbouring isobar Mt because the hindrance factors from the odd nucleon were shown to be much lower than otherwise expected. The measured half-lives are even lower than those originally predicted for the even–even Hs and Ds, which suggests a gap in stability away from the shell closures and perhaps a weakening of the shell closures in this region.
In 1991, Polish physicists Zygmunt Patyk and Adam Sobiczewski predicted that 108 is a proton magic number for deformed nuclei and 162 is a neutron magic number for such nuclei. This means such nuclei are permanently deformed in their ground state but have high, narrow fission barriers to further deformation and hence relatively long spontaneous-fission half-lives. Computational prospects for shell stabilization for Hs made it a promising candidate for a deformed doubly magic nucleus. Experimental data is scarce, but the existing data is interpreted by the researchers to support the assignment of N=162 as a magic number. In particular, this conclusion was drawn from the decay data of Hs, Hs, and Hs. In 1997, Polish physicist Robert Smolańczuk calculated that the isotope Hs may be the most stable superheavy nucleus against alpha decay and spontaneous fission as a consequence of the predicted N=184 shell closure.
Natural occurrence
Hassium is not known to occur naturally on Earth; all its known isotopes are so short-lived that no primordial hassium would survive to today. This does not rule out the possibility of unknown, longer-lived isotopes or nuclear isomers, some of which could still exist in trace quantities if they are long-lived enough. As early as 1914, German physicist Richard Swinne proposed element108 as a source of X-rays in the Greenland ice sheet. Though Swinne was unable to verify this observation and thus did not claim discovery, he proposed in 1931 the existence of "regions" of long-lived transuranic elements, including one around Z=108.
In 1963, Soviet geologist and physicist Viktor Cherdyntsev, who had previously claimed the existence of primordial curium-247, claimed to have discovered element108—specifically the 267108 isotope, which supposedly had a half-life of 400 to 500million years—in natural molybdenite and suggested the provisional name sergenium (symbol Sg); this name comes from the name for the Silk Road and was explained as "coming from Kazakhstan" for it. His rationale for claiming that sergenium was the heavier homologue to osmium was that minerals supposedly containing sergenium formed volatile oxides when boiled in nitric acid, similarly to osmium.
Soviet physicist Vladimir Kulakov criticized Cherdyntsev's findings on the grounds that some of the properties Cherdyntsev claimed sergenium had, were inconsistent with then-current nuclear physics. The chief questions Kulakov raised were that the claimed alpha decay energy of sergenium was many orders of magnitude lower than expected and the half-life given was eight orders of magnitude shorter than what would be predicted for a nuclide alpha-decaying with the claimed decay energy. At the same time, a corrected half-life in the region of 10years would be impossible because it would imply the samples contained ~100 milligrams of sergenium. In 2003, it was suggested that the observed alpha decay with energy 4.5MeV could be due to a low-energy and strongly enhanced transition between different hyperdeformed states of a hassium isotope around Hs, thus suggesting that the existence of superheavy elements in nature was at least possible, but unlikely.
In 2006, Russian geologist Alexei Ivanov hypothesized that an isomer of Hs might have a half-life of ~ years, which would explain the observation of alpha particles with energies of ~4.4 MeV in some samples of molybdenite and osmiridium. This isomer of Hs could be produced from the beta decay of Bh and Sg, which, being homologous to rhenium and molybdenum respectively, should occur in molybdenite along with rhenium and molybdenum if they occurred in nature. Because hassium is homologous to osmium, it should occur along with osmium in osmiridium if it occurs in nature. The decay chains of Bh and Sg are hypothetical and the predicted half-life of this hypothetical hassium isomer is not long enough for any sufficient quantity to remain on Earth. It is possible that more Hs may be deposited on the Earth as the Solar System travels through the spiral arms of the Milky Way; this would explain excesses of plutonium-239 found on the ocean floors of the Pacific Ocean and the Gulf of Finland. However, minerals enriched with Hs are predicted to have excesses of its daughters uranium-235 and lead-207; they would also have different proportions of elements that are formed by spontaneous fission, such as krypton, zirconium, and xenon. The natural occurrence of hassium in minerals such as molybdenite and osmiridium is theoretically possible, but very unlikely.
In 2004, JINR started a search for natural hassium in the Modane Underground Laboratory in Modane, Auvergne-Rhône-Alpes, France; this was done underground to avoid interference and false positives from cosmic rays. In 2008–09, an experiment run in the laboratory resulted in detection of several registered events of neutron multiplicity (number of emitted free neutrons after a nucleus is hit by a neutron and fissioned) above three in natural osmium, and in 2012–13, these findings were reaffirmed in another experiment run in the laboratory. These results hinted natural hassium could potentially exist in nature in amounts that allow its detection by the means of analytical chemistry, but this conclusion is based on an explicit assumption that there is a long-lived hassium isotope to which the registered events could be attributed.
Since Hs may be particularly stable against alpha decay and spontaneous fission, it was considered as a candidate to exist in nature. This nuclide, however, is predicted to be very unstable toward beta decay, and any beta-stable isotopes of hassium such as Hs would be too unstable in the other decay channels to be observed in nature. A 2012 search for Hs in nature along with its homologue osmium at the Maier-Leibnitz Laboratory in Garching, Bavaria, Germany, was unsuccessful, setting an upper limit on the abundance of hassium per gram of osmium.
Predicted properties
Various calculations suggest hassium should be the heaviest group 8 element so far, consistently with the periodic law. Its properties should generally match those expected for a heavier homologue of osmium; as is the case for all transactinides, a few deviations are expected to arise from relativistic effects.
Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, such as enthalpy of adsorption of hassium tetroxide, but properties of hassium metal remain unknown and only predictions are available.
Relativistic effects
Relativistic effects in hassium should arise due to the high charge of its nuclei, which causes the electrons around the nucleus to move faster—so fast their speed is comparable to the speed of light. There are three main effects: the direct relativistic effect, the indirect relativistic effect, and spin–orbit splitting. (The existing calculations do not account for Breit interactions, but those are negligible, and their omission can only result in an uncertainty of the current calculations of no more than 2%.)
As atomic number increases, so does the electrostatic attraction between an electron and the nucleus. This causes the velocity of the electron to increase, which leads to an increase in its mass. This in turn leads to contraction of the atomic orbitals, most specifically the s and p orbitals. Their electrons become more closely attached to the atom and harder to pull from the nucleus. This is the direct relativistic effect. It was originally thought to be strong only for the innermost electrons, but was later established to significantly influence valence electrons as well.
Since the s and p orbitals are closer to the nucleus, they take a bigger portion of the electric charge of the nucleus on themselves ("shield" it). This leaves less charge for attraction of the remaining electrons, whose orbitals therefore expand, making them easier to pull from the nucleus. This is the indirect relativistic effect. As a result of the combination of the direct and indirect relativistic effects, the Hs ion, compared to the neutral atom, lacks a 6d electron, rather than a 7s electron. In comparison, Os lacks a 6s electron compared to the neutral atom. The ionic radius (in oxidation state +8) of hassium is greater than that of osmium because of the relativistic expansion of the 6p orbitals, which are the outermost orbitals for an Hs ion (although in practice such highly charged ions would be too polarized in chemical environments to have much reality).
There are several kinds of electron orbitals, denoted s, p, d, and f (g orbitals are expected to start being chemically active among elements after element 120). Each of these corresponds to an azimuthal quantum number l: s to 0, p to 1, d to 2, and f to 3. Every electron also corresponds to a spin quantum number s, which may equal either +1/2 or −1/2. Thus, the total angular momentum quantum number j = l + s is equal to j = l ± 1/2 (except for l = 0, for which both electrons in each orbital have j = 0 + 1/2 = 1/2). The spin of an electron relativistically interacts with its orbit, and this interaction leads to a split of a subshell into two with different energies (the one with j = l − 1/2 is lower in energy, and thus these electrons are more difficult to extract): for instance, of the six 6p electrons, two become 6p1/2 and four become 6p3/2. This is the spin–orbit splitting (also called subshell splitting or jj coupling). It is most visible with p electrons, which do not play an important role in the chemistry of hassium, but those for d and f electrons are within the same order of magnitude (quantitatively, spin–orbit splitting is expressed in energy units, such as electronvolts).
These relativistic effects are responsible for the expected increase of the ionization energy, decrease of the electron affinity, and increase of stability of the +8 oxidation state compared to osmium; without them, the trends would be reversed. Relativistic effects decrease the atomization energies of hassium compounds because the spin–orbit splitting of the d orbital lowers binding energy between electrons and the nucleus and because relativistic effects decrease ionic character in bonding.
Physical and atomic
The previous members of group 8 have high melting points: Fe, 1538 °C; Ru, 2334 °C; Os, 3033 °C. Like them, hassium is predicted to be a solid at room temperature, though its melting point has not been precisely calculated. Hassium should crystallize in the hexagonal close-packed structure (c/a = 1.59), similarly to its lighter congener osmium. Pure metallic hassium is calculated to have a bulk modulus (resistance to uniform compression) of 450 GPa, comparable with that of diamond, 442 GPa. Hassium is expected to be one of the densest of the 118 known elements, with a predicted density of 27–29 g/cm3 vs. the 22.59 g/cm3 measured for osmium.
Hassium's atomic radius is expected to be ≈126pm. Due to relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Hs ion is predicted to have an electron configuration of [Rn]5f6d7s, giving up a 6d electron instead of a 7s electron, which is the opposite of the behaviour of its lighter homologues. The Hs ion is expected to have electron configuration [Rn]5f6d7s, analogous to that calculated for the Os ion. In chemical compounds, hassium is calculated to display bonding characteristic for a d-block element, whose bonding will be primarily executed by 6d and 6d orbitals; compared to the elements from the previous periods, 7s, 6p, 6p, and 7p orbitals should be more important.
Chemical
Hassium is the sixth member of the 6d series of transition metals and is expected to be much like the platinum group metals. Some of these properties were confirmed by gas-phase chemistry experiments. The group8 elements portray a wide variety of oxidation states but ruthenium and osmium readily portray their group oxidation state of +8; this state becomes more stable down the group. This oxidation state is extremely rare: among stable elements, only ruthenium, osmium, and xenon are able to attain it in reasonably stable compounds. Hassium is expected to follow its congeners and have a stable +8 state, but like them it should show lower stable oxidation states such as +6, +4, +3, and +2. Hassium(IV) is expected to be more stable than hassium(VIII) in aqueous solution. Hassium should be a rather noble metal. The standard reduction potential for the Hs4+/Hs couple is expected to be 0.4V.
The group 8 elements show a distinctive oxide chemistry. All the lighter members have known or hypothetical tetroxides, MO4. Their oxidizing power decreases as one descends the group. FeO4 is not known due to its extraordinarily large electron affinity—the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion—which results in the formation of the well-known oxyanion ferrate(VI), . Ruthenium tetroxide, RuO4, which is formed by oxidation of ruthenium(VI) in acid, readily undergoes reduction to ruthenate(VI), . Oxidation of ruthenium metal in air forms the dioxide, RuO2. In contrast, osmium burns to form the stable tetroxide, OsO4, which complexes with the hydroxide ion to form an osmium(VIII) -ate complex, [OsO4(OH)2]2−. Therefore, hassium should behave as a heavier homologue of osmium by forming of a stable, very volatile tetroxide HsO4, which undergoes complexation with hydroxide to form a hassate(VIII), [HsO4(OH)2]2−. Ruthenium tetroxide and osmium tetroxide are both volatile due to their symmetrical tetrahedral molecular geometry and because they are charge-neutral; hassium tetroxide should similarly be a very volatile solid. The trend of the volatilities of the group8 tetroxides is experimentally known to be RuO4<OsO4>HsO4, which confirms the calculated results. In particular, the calculated enthalpies of adsorption—the energy required for the adhesion of atoms, molecules, or ions from a gas, liquid, or dissolved solid to a surface—of HsO4, −(45.4±1)kJ/mol on quartz, agrees very well with the experimental value of −(46±2)kJ/mol.
Experimental chemistry
The first goal for chemical investigation was the formation of the tetroxide; it was chosen because ruthenium and osmium form volatile tetroxides, being the only transition metals to display a stable compound in the +8 oxidation state. Despite this selection for gas-phase chemical studies being clear from the beginning, chemical characterization of hassium was considered a difficult task for a long time. Although hassium was first synthesized in 1984, it was not until 1996 that a hassium isotope long-lived enough to allow chemical studies was synthesized. Unfortunately, this isotope, Hs, was synthesized indirectly from the decay of Cn; not only are indirect synthesis methods not favourable for chemical studies, but the reaction that produced the isotope Cn had a low yield—its cross section was only 1pb—and thus did not provide enough hassium atoms for a chemical investigation. Direct synthesis of Hs and Hs in the reaction Cm(Mg,xn)Hs (x=4 or 5) appeared more promising because the cross section for this reaction was somewhat larger at 7pb. This yield was still around ten times lower than that for the reaction used for the chemical characterization of bohrium. New techniques for irradiation, separation, and detection had to be introduced before hassium could be successfully characterized chemically.
Ruthenium and osmium have very similar chemistry due to the lanthanide contraction but iron shows some differences from them; for example, although ruthenium and osmium form stable tetroxides in which the metal is in the +8 oxidation state, iron does not. In preparation for the chemical characterization of hassium, research focused on ruthenium and osmium rather than iron because hassium was expected to be similar to ruthenium and osmium, as the predicted data on hassium closely matched that of those two.
The first chemistry experiments were performed using gas thermochromatography in 2001, using the synthetic osmium radioisotopes Os as a reference. During the experiment, seven hassium atoms were synthesized using the reactions Cm(Mg,5n)Hs and Cm(Mg,4n)Hs. They were then thermalized and oxidized in a mixture of helium and oxygen gases to form hassium tetroxide molecules.
Hs + 2 O2 → HsO4
The measured deposition temperature of hassium tetroxide was higher than that of osmium tetroxide, which indicated the former was the less volatile one, and this placed hassium firmly in group 8. The measured enthalpy of adsorption for HsO4 was significantly lower than the predicted value, indicating OsO4 is more volatile than HsO4, contradicting earlier calculations that implied they should have very similar volatilities. (The calculations that yielded a closer match to the experimental data came after the experiment, in 2008.) It is possible hassium tetroxide interacts differently with silicon nitride than with silicon dioxide, the chemicals used for the detector; further research is required to establish whether there is a difference between such interactions and whether it has influenced the measurements. Such research would include more accurate measurements of the nuclear properties of Hs and comparisons with RuO4 in addition to OsO4.
In 2004, scientists reacted hassium tetroxide and sodium hydroxide to form sodium hassate(VIII), a reaction that is well known with osmium. This was the first acid-base reaction with a hassium compound, forming sodium hassate(VIII):
HsO4 + 2 NaOH → Na2[HsO4(OH)2]
The team from the University of Mainz planned in 2008 to study the electrodeposition of hassium atoms using the new TASCA facility at GSI. Their aim was to use the reaction Ra(Ca,4n)Hs. Scientists at GSI were hoping to use TASCA to study the synthesis and properties of the hassium(II) compound hassocene, Hs(C5H5)2, using the reaction Ra(Ca,xn). This compound is analogous to the lighter compounds ferrocene, ruthenocene, and osmocene, and is expected to have the two cyclopentadienyl rings in an eclipsed conformation like ruthenocene and osmocene and not in a staggered conformation like ferrocene. Hassocene, which is expected to be a stable and highly volatile compound, was chosen because it has hassium in the low formal oxidation state of +2—although the bonding between the metal and the rings is mostly covalent in metallocenes—rather than the high +8 state that had previously been investigated, and relativistic effects were expected to be stronger in the lower oxidation state. The highly symmetrical structure of hassocene and its low number of atoms make relativistic calculations easier. To date, there are no experimental reports of hassocene.
Notes
References
Bibliography
External links
Chemical elements
Transition metals
Synthetic elements
Chemical elements with hexagonal close-packed structure
Densest things | Hassium | [
"Physics",
"Chemistry"
] | 8,096 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
13,767 | https://en.wikipedia.org/wiki/Hydra%20%28genus%29 | Hydra ( ) is a genus of small freshwater hydrozoans of the phylum Cnidaria. They are native to the temperate and tropical regions. The genus was named by Linnaeus in 1758 after the Hydra, which was the many-headed beast of myth defeated by Heracles, as when the animal has a part severed, it will regenerate much like the mythical hydra's heads. Biologists are especially interested in Hydra because of their regenerative ability; they do not appear to die of old age, or to age at all.
Morphology
Hydra has a tubular, radially symmetric body up to long when extended, secured by a simple adhesive foot known as the basal disc. Gland cells in the basal disc secrete a sticky fluid that accounts for its adhesive properties.
At the free end of the body is a mouth opening surrounded by one to twelve thin, mobile tentacles. Each tentacle, or cnida (plural: cnidae), is clothed with highly specialised stinging cells called cnidocytes. Cnidocytes contain specialized structures called nematocysts, which look like miniature light bulbs with a coiled thread inside. At the narrow outer edge of the cnidocyte is a short trigger hair called a cnidocil. Upon contact with prey, the contents of the nematocyst are explosively discharged, firing a dart-like thread containing neurotoxins into whatever triggered the release. This can paralyze the prey, especially if many hundreds of nematocysts are fired.
Hydra has two main body layers, which makes it "diploblastic". The layers are separated by mesoglea, a gel-like substance. The outer layer is the epidermis, and the inner layer is called the gastrodermis, because it lines the stomach. The cells making up these two body layers are relatively simple. Hydramacin is a bactericide recently discovered in Hydra; it protects the outer layer against infection. A single Hydra is composed of 50,000 to 100,000 cells which consist of three specific stem cell populations that create many different cell types. These stem cells continually renew themselves in the body column. Hydras have two significant structures on their body: the "head" and the "foot". When a Hydra is cut in half, each half regenerates and forms into a small Hydra; the "head" regenerates a "foot" and the "foot" regenerates a "head". If the Hydra is sliced into many segments then the middle slices form both a "head" and a "foot".
Respiration and excretion occur by diffusion throughout the surface of the epidermis, while larger excreta are discharged through the mouth.
Nervous system
The nervous system of Hydra is a nerve net, which is structurally simple compared to more derived animal nervous systems. Hydra does not have a recognizable brain or true muscles. Nerve nets connect sensory photoreceptors and touch-sensitive nerve cells located in the body wall and tentacles.
The structure of the nerve net has two levels:
level 1 – sensory cells or internal cells; and
level 2 – interconnected ganglion cells synapsed to epithelial or motor cells.
Some have only two sheets of neurons.
Motion and locomotion
If Hydra are alarmed or attacked, the tentacles can be retracted to small buds, and the body column itself can be retracted to a small gelatinous sphere. Hydra generally react in the same way regardless of the direction of the stimulus, and this may be due to the simplicity of the nerve nets.
Hydra are generally sedentary or sessile, but do occasionally move quite readily, especially when hunting. They have two distinct methods for moving – 'looping' and 'somersaulting'. They do this by bending over and attaching themselves to the substrate with the mouth and tentacles and then relocate the foot, which provides the usual attachment, this process is called looping. In somersaulting, the body then bends over and makes a new place of attachment with the foot. By this process of "looping" or "somersaulting", a Hydra can move several inches (c. 100 mm) in a day. Hydra may also move by amoeboid motion of their bases or by detaching from the substrate and floating away in the current.
Reproduction and life cycle
Most hydra species do not have any gender system. Instead, when food is plentiful, many Hydra reproduce asexually by budding. The buds form from the body wall, grow into miniature adults and break away when mature.
When a hydra is well fed, a new bud can form every two days. When conditions are harsh, often before winter or in poor feeding conditions, sexual reproduction occurs in some Hydra. Swellings in the body wall develop into either ovaries or testes. The testes release free-swimming gametes into the water, and these can fertilize the egg in the ovary of another individual. The fertilized eggs secrete a tough outer coating, and, as the adult dies (due to starvation or cold), these resting eggs fall to the bottom of the lake or pond to await better conditions, whereupon they hatch into nymph Hydra. Some Hydra species, like Hydra circumcincta and Hydra viridissima, are hermaphrodites and may produce both testes and ovaries at the same time.
Many members of the Hydrozoa go through a body change from a polyp to an adult form called a medusa, which is usually the life stage where sexual reproduction occurs, but Hydra do not progress beyond the polyp phase.
Feeding
Hydra mainly feed on aquatic invertebrates such as Daphnia and Cyclops.
While feeding, Hydra extend their body to maximum length and then slowly extend their tentacles. Despite their simple construction, the tentacles of Hydra are extraordinarily extensible and can be four to five times the length of the body. Once fully extended, the tentacles are slowly maneuvered around waiting for contact with a suitable prey animal. Upon contact, nematocysts on the tentacle fire into the prey, and the tentacle itself coils around the prey. Most of the tentacles join in the attack within 30 seconds to subdue the struggling prey. Within two minutes, the tentacles surround the prey and move it into the open mouth aperture. Within ten minutes, the prey is engulfed within the body cavity, and digestion commences. Hydra can stretch their body wall considerably.
The feeding behaviour of Hydra demonstrates the sophistication of what appears to be a simple nervous system.
Some species of Hydra exist in a mutual relationship with various types of unicellular algae. The algae are protected from predators by Hydra; in return, photosynthetic products from the algae are beneficial as a food source to Hydra, and even help to maintain the Hydra microbiome.
Measuring the feeding response
The feeding response in Hydra is induced by glutathione (specifically in the reduced state as GSH) released from damaged tissue of injured prey. There are several methods conventionally used for quantification of the feeding response. In some, the duration for which the mouth remains open is measured. Other methods rely on counting the number of Hydra among a small population showing the feeding response after addition of glutathione. Recently, an assay for measuring the feeding response in hydra has been developed. In this method, the linear two-dimensional distance between the tip of the tentacle and the mouth of hydra was shown to be a direct measure of the extent of the feeding response. This method has been validated using a starvation model, as starvation is known to cause enhancement of the Hydra feeding response.
Predators
The species Hydra oligactis is preyed upon by the flatworm Microstomum lineare.
Tissue regeneration
Hydras undergo morphallaxis (tissue regeneration) when injured or severed. Typically, Hydras reproduce by just budding off a whole new individual; the bud occurs around two-thirds of the way down the body axis. When a Hydra is cut in half, each half regenerates and forms into a small Hydra; the "head" regenerates a "foot" and the "foot" regenerates a "head". This regeneration occurs without cell division. If the Hydra is sliced into many segments, the middle slices form both a "head" and a "foot". The polarity of the regeneration is explained by two pairs of positional value gradients. There is both a head and a foot activation and inhibition gradient. The head activation and inhibition gradients work in the opposite direction to the pair of foot gradients. The evidence for these gradients was shown in the early 1900s with grafting experiments. The inhibitors for both gradients have been shown to be important in blocking bud formation. The location where the bud forms is where the gradients are low for both the head and foot.
Hydras are capable of regenerating from pieces of tissue from the body and additionally after tissue dissociation from reaggregates. This process takes place not only in the pieces of tissue excised from the body column, but also from re-aggregates of dissociated single cells. It was found that in these aggregates, cells initially distributed randomly undergo sorting and form two epithelial cell layers, in which the endodermal epithelial cells play more active roles in the process. Active mobility of these endodermal epithelial cells forms two layers in both the re-aggregate and the re-generating tip of the excised tissue. As these two layers are established, a patterning process takes place to form heads and feet.
Non-senescence
Daniel Martinez claimed in a 1998 article in Experimental Gerontology that Hydra are biologically immortal. This publication has been widely cited as evidence that Hydra do not senesce (do not age), and that they are proof of the existence of non-senescing organisms generally. In 2010, Preston Estep published (also in Experimental Gerontology) a letter to the editor arguing that the Martinez data refutes the hypothesis that Hydra do not senesce.
The controversial unlimited lifespan of Hydra has attracted much attention from scientists. Research today appears to confirm Martinez' study. Hydra stem cells have a capacity for indefinite self-renewal. The transcription factor "forkhead box O" (FoxO) has been identified as a critical driver of the continuous self-renewal of Hydra. In experiments, a drastically reduced population growth resulted from FoxO down-regulation.
In bilaterally symmetrical organisms (Bilateria), the transcription factor FoxO affects stress response, lifespan, and increase in stem cells. If this transcription factor is knocked down in bilaterian model organisms, such as fruit flies and nematodes, their lifespan is significantly decreased. In experiments on H. vulgaris (a radially symmetrical member of phylum Cnidaria), when FoxO levels were decreased, there was a negative effect on many key features of the Hydra, but no death was observed, thus it is believed other factors may contribute to the apparent lack of aging in these creatures.
DNA repair
Hydra are capable of two types of DNA repair: nucleotide excision repair and base excision repair. The repair pathways facilitate DNA replication by removing DNA damage. Their identification in hydra was based, in part, on the presence in its genome of genes homologous to ones present in other genetically well studied species playing key roles in these DNA repair pathways.
Genomics
An ortholog comparison analysis done within the last decade demonstrated that Hydra share a minimum of 6,071 genes with humans. Hydra is becoming an increasingly better model system as more genetic approaches become available. Transgenic hydra have become attractive model organisms to study the evolution of immunity. A draft of the genome of Hydra magnipapillata was reported in 2010.
The genomes of cnidarians are usually less than 500 Mb (megabases) in size, as in the Hydra viridissima, which has a genome size of approximately 300 Mb. In contrast, the genomes of brown hydras are approximately 1 Gb in size. This is because the brown hydra genome is the result of an expansion event involving LINEs, a type of transposable elements, in particular, a single family of the CR1 class. This expansion is unique to this subgroup of the genus Hydra and is absent in the green hydra, which has a repeating landscape similar to other cnidarians. These genome characteristics make Hydra attractive for studies of transposon-driven speciations and genome expansions.
Due to the simplicity of their life cycle when compared to other hydrozoans, hydras have lost many genes that correspond to cell types or metabolic pathways of which the ancestral function is still unknown.
The Hydra genome shows a preference for proximal promoters. Thanks to this feature, many reporter lines have been created using regions of around 500 to 2,000 bases upstream of the gene of interest. Its cis-regulatory elements (CREs) are mostly located less than 2,000 base pairs upstream of the closest transcription initiation site, although some CREs are located further away.
Its chromatin has a Rabl configuration, with interactions between the centromeres of different chromosomes and between the centromeres and telomeres of the same chromosome. It shows a greater number of intercentromeric interactions than other cnidarians, probably due to the loss of multiple subunits of condensin II. The chromatin is organized in domains spanning dozens to hundreds of megabases that contain epigenetically co-regulated genes and are flanked by boundaries located within heterochromatin.
Transcriptomics
Different Hydra cell types express gene families of different evolutionary ages. Progenitor cells (stem cells, neuron and nematocyte precursors, and germ cells) express genes from families that predate the metazoans. Among differentiated cells, some, such as gland and neuronal cells, express genes from families dating from the base of the metazoans, while others, such as nematocytes, express genes from newer families originating at the base of Cnidaria or Medusozoa. Interstitial cells contain translation factors whose function has been conserved for at least 400 million years.
See also
Lernaean Hydra, a Greek mythological aquatic creature after which the genus is named
Turritopsis dohrnii, another cnidarian (a jellyfish) that scientists believe to be immortal
References
Hydridae
Negligibly senescent organisms | Hydra (genus) | [
"Biology"
] | 2,987 | [
"Senescence",
"Negligibly senescent organisms",
"Organisms by adaptation"
] |
13,768 | https://en.wikipedia.org/wiki/Hydrus | Hydrus is a small constellation in the deep southern sky. It was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman and it first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius and Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. Its name means "male water snake", as opposed to Hydra, a much larger constellation that represents a female water snake. It remains below the horizon for most Northern Hemisphere observers.
The brightest star is the 2.8-magnitude Beta Hydri, also the closest reasonably bright star to the south celestial pole. Pulsating between magnitude 3.26 and 3.33, Gamma Hydri is a variable red giant 60 times the diameter of the Sun. Lying near it is VW Hydri, one of the brightest dwarf novae in the heavens. Four star systems in Hydrus have been found to have exoplanets to date, including HD 10180, which could bear up to nine planetary companions.
History
Hydrus was one of the twelve constellations established by the astronomer Petrus Plancius from the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name De Waterslang, "The Water Snake", it representing a type of snake encountered on the expedition rather than a mythical creature. The French explorer and astronomer Nicolas Louis de Lacaille called it l’Hydre Mâle on the 1756 version of his planisphere of the southern skies, distinguishing it from the feminine Hydra. The French name was retained by Jean Fortin in 1776 for his Atlas Céleste, while Lacaille Latinised the name to Hydrus for his revised Coelum Australe Stelliferum in 1763.
Characteristics
Irregular in shape, Hydrus is bordered by Mensa to the southeast, Eridanus to the east, Horologium and Reticulum to the northeast, Phoenix to the north, Tucana to the northwest and west, and Octans to the south; Lacaille had shortened Hydrus' tail to make space for this last constellation he had drawn up. Covering 243 square degrees and 0.589% of the night sky, it ranks 61st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Hyi". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 12 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −57.85° and −82.06°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere. Herman Melville mentions it and Argo Navis in Moby Dick "beneath effulgent Antarctic Skies", highlighting his knowledge of the southern constellations from whaling voyages. A line drawn between the long axis of the Southern Cross to Beta Hydri and then extended 4.5 times will mark a point due south. Hydrus culminates at midnight around 26 October.
Features
Stars
Keyzer and de Houtman assigned fifteen stars to the constellation in their Malay and Madagascan vocabulary, with a star that would be later designated as Alpha Hydri marking the head, Gamma the chest and a number of stars that were later allocated to Tucana, Reticulum, Mensa and Horologium marking the body and tail. Lacaille charted and designated 20 stars with the Bayer designations Alpha through to Tau in 1756. Of these, he used the designations Eta, Pi and Tau twice each, for three sets of two stars close together, and omitted Omicron and Xi. He assigned Rho to a star that subsequent astronomers were unable to find.
Beta Hydri, the brightest star in Hydrus, is a yellow star of apparent magnitude 2.8, lying 24 light-years from Earth. It has about 104% of the mass of the Sun and 181% of the Sun's radius, with more than three times the Sun's luminosity. The spectrum of this star matches a stellar classification of G2 IV, with the luminosity class of 'IV' indicating this is a subgiant star. As such, it is a slightly more evolved star than the Sun, with the supply of hydrogen fuel at its core becoming exhausted. It is the nearest subgiant star to the Sun and one of the oldest stars in the solar neighbourhood. Thought to be between 6.4 and 7.1 billion years old, this star bears some resemblance to what the Sun may look like in the far distant future, making it an object of interest to astronomers. It is also the closest bright star to the south celestial pole.
Located at the northern edge of the constellation and just southwest of Achernar is Alpha Hydri, a white sub-giant star of magnitude 2.9, situated 72 light-years from Earth. Of spectral type F0IV, it is beginning to cool and enlarge as it uses up its supply of hydrogen. It is twice as massive and 3.3 times as wide as the Sun and 26 times more luminous. A line drawn between Alpha Hydri and Beta Centauri is bisected by the south celestial pole.
In the southeastern corner of the constellation is Gamma Hydri, a red giant of spectral type M2III located 214 light-years from Earth. It is a semi-regular variable star, pulsating between magnitudes 3.26 and 3.33. Observations over five years were not able to establish its periodicity. It is around 1.5 to 2 times as massive as the Sun, and has expanded to about 60 times the Sun's diameter. It shines with about 655 times the luminosity of the Sun. Located 3° northeast of Gamma is the VW Hydri, a dwarf nova of the SU Ursae Majoris type. It is a close binary system that consists of a white dwarf and another star, the former drawing off matter from the latter into a bright accretion disk. These systems are characterised by frequent eruptions and less frequent supereruptions. The former are smooth, while the latter exhibit short "superhumps" of heightened activity. One of the brightest dwarf novae in the sky, it has a baseline magnitude of 14.4 and can brighten to magnitude 8.4 during peak activity. BL Hydri is another close binary system composed of a low-mass star and a strongly magnetic white dwarf. Known as a polar or AM Herculis variable, these produce polarized optical and infrared emissions and intense soft and hard X-ray emissions to the frequency of the white dwarf's rotation period—in this case 113.6 minutes.
There are two notable optical double stars in Hydrus. Pi Hydri, composed of Pi1 Hydri and Pi2 Hydri, is divisible in binoculars. Around 476 light-years distant, Pi1 is a red giant of spectral type M1III that varies between magnitudes 5.52 and 5.58. Pi2 is an orange giant of spectral type K2III and shining with a magnitude of 5.7, around 488 light-years from Earth.
Eta Hydri is the other optical double, composed of Eta1 and Eta2. Eta1 is a blue-white main sequence star of spectral type B9V that was suspected of being variable, and is located just over 700 light-years away. Eta2 has a magnitude of 4.7 and is a yellow giant star of spectral type G8.5III around 218 light-years distant, which has evolved off the main sequence and is expanding and cooling on its way to becoming a red giant. Calculations of its mass indicate it was most likely a white A-type main sequence star for most of its existence, around twice the mass of the Sun. A planet, Eta2 Hydri b, greater than 6.5 times the mass of Jupiter was discovered in 2005, orbiting around Eta2 every 711 days at a distance of 1.93 astronomical units (AU).
Three other systems have been found to have planets, most notably the Sun-like star HD 10180, which has seven planets, plus possibly an additional two for a total of nine—as of 2012 more than any other system to date, including the Solar System. It has an apparent magnitude of 7.33.
GJ 3021 is a solar twin—a star very like the Sun—around 57 light-years distant with a spectral type G8V and magnitude of 6.7. It has a Jovian planet companion (GJ 3021 b). Orbiting about 0.5 AU from its star, it has a minimum mass 3.37 times that of Jupiter and a period of around 133 days. The system is a complex one as the faint star GJ 3021B orbits at a distance of 68 AU; it is a red dwarf of spectral type M4V.
HD 20003 is a star of magnitude 8.37. It is a yellow main-sequence star of spectral type G8V, a little cooler and smaller than the Sun, around 143 light-years away. It has two planets that are around 12 and 13.5 times as massive as the Earth, with periods of just under 12 and 34 days respectively.
Deep-sky objects
Hydrus contains only faint deep-sky objects. IC 1717 was a deep-sky object discovered by the Danish astronomer John Louis Emil Dreyer in the late 19th century. The object at the coordinate Dreyer observed is no longer there, and is now a mystery. It was very likely to have been a faint comet. PGC 6240, known as the White Rose Galaxy, is a giant spiral galaxy surrounded by shells resembling rose petals, located around 345 million light years from the Solar System. Unusually, it has cohorts of globular clusters of three distinct ages suggesting bouts of post-starburst formation following a merger with another galaxy. The constellation also contains a spiral galaxy, NGC 1511, which lies edge on to observers on Earth and is readily viewed in amateur telescopes.
Located mostly in Dorado, the Large Magellanic Cloud extends into Hydrus. The globular cluster NGC 1466 is an outlying component of the galaxy, and contains many RR Lyrae-type variable stars. It has a magnitude of 11.59 and is thought to be over 12 billion years old. Two stars, HD 24188 of magnitude 6.3 and HD 24115 of magnitude 9.0, lie nearby in its foreground. NGC 602 is composed of an emission nebula and a young, bright open cluster of stars that is an outlying component on the eastern edge of the Small Magellanic Cloud, a satellite galaxy to the Milky Way. Most of the cloud is located in the neighbouring constellation Tucana.
See also
Hydrus (Chinese astronomy)
References
External links
Chandra information about Hydrus
The Deep Photographic Guide to the Constellations: Hydrus
The clickable Hydrus
Southern constellations
Constellations listed by Petrus Plancius | Hydrus | [
"Astronomy"
] | 2,513 | [
"Hydrus",
"Constellations listed by Petrus Plancius",
"Southern constellations",
"Constellations"
] |
13,777 | https://en.wikipedia.org/wiki/Hard%20disk%20drive | A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage with one or more rigid rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data when powered off. Modern HDDs are typically in the form of a small rectangular box.
Hard disk drives were introduced by IBM in 1956, and were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like mobile phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation, most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly (by exabytes shipped), sales revenues and unit shipments are declining, because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times.
The revenues for SSDs, most of which use NAND flash memory, slightly exceeded those for HDDs in 2018, and flash storage products have since had more than twice the revenue of hard disk drives. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. The cost per bit of SSDs is falling, and the price premium over HDDs has narrowed.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1 000 gigabytes, where 1 gigabyte = 1 000 megabytes = 1 000 000 kilobytes (1 million) = 1 000 000 000 bytes (1 billion). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can be confusion regarding storage capacity since capacities are stated in decimal gigabytes (powers of 1000) by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (average access time), the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally, the speed at which the data is transmitted (data rate).
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA (Serial ATA), USB, SAS (Serial Attached SCSI), or PATA (Parallel ATA) cables.
History
The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two large refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 52 disks (100 surfaces used). The 350 had a single arm with two read/write heads, one facing up and the other down, that moved both horizontally between a pair of adjacent platters and vertically from one pair of platters to a second set. Variants of the IBM 350 were the IBM 355, IBM 7300 and IBM 1405.
In 1961, IBM announced, and in 1962 shipped, the IBM 1301 disk storage unit, which superseded
the IBM 350 and similar drives. The 1301 consisted of one (for Model 1) or two (for Model 2) modules, each containing 25 platters. While the earlier IBM disk drives used only two read/write heads per arm, the 1301 used an array of 48 heads (comb), each array moving horizontally as a single unit, one head per surface used. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches (about 6 μm) above the platter surface. Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three large refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes per module. Access time was about a quarter of a second.
Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives.
In 1963, IBM introduced the 1302, with twice the track capacity and twice as many tracks per cylinder as the 1301. The 1302 had one (for Model 1) or two (for Model 2) modules, each containing a separate comb for the first 250 tracks and the last 250 tracks.
Some high-performance HDDs were manufactured with one head per track, e.g., Burroughs B-475 in 1964, IBM 2305 in 1970, so that no time was lost physically moving the heads to a track and the only latency was the time for the desired block of data to rotate into position under the head. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production.
In 1973, IBM introduced a new type of HDD code-named "Winchester". Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This greatly reduced the cost of the head actuator mechanism but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters.
In 1974, IBM introduced the swinging arm actuator, made feasible because the Winchester recording heads function well when skewed to the recorded tracks. The simple design of the IBM GV (Gulliver) drive, invented at IBM's UK Hursley Labs, became IBM's most licensed electro-mechanical invention of all time, the actuator and filtration system being adopted in the 1980s eventually for all HDDs, and still universal nearly 40 years and 10 billion arms later.
Like the first removable pack drive, the first "Winchester" drives used 14-inch platters. In 1978, IBM introduced a swing arm drive, the IBM 0680 (Piccolo), with eight-inch platters, exploring the possibility that smaller platters might offer advantages. Other eight-inch drives followed, then 5¼-inch drives, sized to replace the contemporary floppy disk drives. The latter were primarily intended for the then-fledgling personal computer (PC) market.
Over time, as recording densities were greatly increased, further reductions in disk diameter to 3.5" and 2.5" were found to be optimum. Powerful rare earth magnet materials became affordable during this period and were complementary to the swing arm actuator design to make possible the compact form factors of modern HDDs.
As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s, their cost had been reduced to the point where they were standard on all but the cheapest computers.
Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter, internal HDDs proliferated on personal computers.
External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. Older compact Macintosh computers did not have user-accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh Plus did not feature a hard drive bay at all), so on those models, external SCSI disks were the only reasonable option for expanding upon any internal storage.
HDD improvements have been driven by increasing areal density. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content.
In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDDs, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest capacity SSD had a capacity of 100 TB. HDDs were at one point forecast to reach 100 TB capacities around 2025, but the expected pace of improvement was later pared back to 50 TB by 2026. Smaller form factors, 1.8-inches and below, were discontinued around 2010. The cost of solid-state storage (NAND), represented by Moore's law, is improving faster than HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase.
The 2011 Thailand floods damaged the manufacturing plants and impacted hard disk drive cost adversely between 2011 and 2013.
In 2019, Western Digital closed its last Malaysian HDD factory due to decreasing demand, to focus on SSD production. All three remaining HDD manufacturers have had decreasing demand for their HDDs since 2014.
Technology
Magnetic recording
A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions.
A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, this magnetic layer is far thinner than a standard piece of copy paper.
The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy-efficient portable devices to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. The platters in most consumer-grade HDDs spin at 5,400 or 7,200 rpm.
Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it.
In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. Early hard disk drives wrote data at some constant bits per second, resulting in all tracks having the same amount of data per track, but modern drives (since the 1990s) use zone bit recording, increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects — thermally induced magnetic instability which is commonly known as the "superparamagnetic limit". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular magnetic recording (PMR), first shipped in 2005 and since used in certain HDDs. Perpendicular recording may be accompanied by changes in the manufacturing of the read/write heads to increase the strength of the magnetic field created by the heads.
In 2004, a higher-density recording media was introduced, consisting of coupled soft and hard magnetic layers. So-called exchange spring media magnetic storage technology, also known as exchange coupled composite media, allows good writability due to the write-assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer.
Flux control MAMR (FC-MAMR) allows a hard drive to have increased recording capacity without the need for new hard disk drive platter materials. MAMR hard drives have a microwave-generating spin torque generator (STO) on the read/write heads which allows physically smaller bits to be recorded to the platters, increasing areal density. Normally hard drive recording heads have a pole called a main pole that is used for writing to the platters, and adjacent to this pole is an air gap and a shield. The write coil of the head surrounds the pole. The STO device is placed in the air gap between the pole and the shield to increase the strength of the magnetic field created by the pole; FC-MAMR technically doesn't use microwaves but uses technology employed in MAMR. The STO has a Field Generation Layer (FGL) and a Spin Injection Layer (SIL), and the FGL produces a magnetic field using spin-polarised electrons originating in the SIL, which is a form of spin torque energy.
Components
A typical HDD has two electric motors: a spindle motor that spins the disks and an actuator (motor) that positions the read/write head assembly across the spinning disks. The disk motor has an external rotor attached to the disks; the stator windings are fixed in place. Opposite the actuator at the end of the head support arm is the read-write head; thin printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g.
The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium–iron–boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet).
The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the center of the actuator bearing) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
The HDD's electronics controls the movement of the actuator and the rotation of the disk and transfers data to or from a disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo, otherwise known as sector servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil motor to rotate the arm. A more modern servo system also employs milli or micro actuators to more accurately position the read/write heads. The spinning of the disks uses fluid-bearing spindle motors. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed.
Error rates and handling
Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data.
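As a rough, back-of-the-envelope check on the figure above, the following sketch reproduces the ECC overhead arithmetic; the 48 bytes of ECC per sector is an assumed, order-of-magnitude value chosen only for illustration, not a figure from any particular drive.

```python
# Illustrative estimate of ECC overhead for a 1 TB drive with 512-byte sectors.
# ECC_BYTES_PER_SECTOR is an assumed value for illustration only.
USER_BYTES_PER_SECTOR = 512
ECC_BYTES_PER_SECTOR = 48                   # assumption, not a manufacturer figure
USER_CAPACITY_BYTES = 1_000_000_000_000     # 1 TB of user data

sectors = USER_CAPACITY_BYTES // USER_BYTES_PER_SECTOR
ecc_overhead_gb = sectors * ECC_BYTES_PER_SECTOR / 1e9

print(f"sectors: {sectors:,}")
print(f"ECC overhead: about {ecc_overhead_gb:.0f} GB")   # ~94 GB, close to the ~93 GB quoted above
```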
In the newest drives, low-density parity-check codes (LDPC) have been supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available.
Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool" (also called "reserve pool"), while relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives as the related S.M.A.R.T attributes "Hardware ECC Recovered" and "Soft ECC Correction" are not consistently supported), and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure.
The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located.
Only a tiny fraction of the detected errors end up as not correctable. Examples of specified uncorrected bit read error rates include:
2013 specifications for enterprise SAS disk drives state the error rate to be one uncorrected bit read error in every 10¹⁶ bits read,
2018 specifications for consumer SATA hard drives state the error rate to be one uncorrected bit read error in every 10¹⁴ bits.
Within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive.
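To put these rates in perspective, a short sketch (using only the specification figures quoted above) estimates the expected number of uncorrectable read errors incurred when reading an entire drive once:

```python
# Expected uncorrectable read errors for one full-drive read at a given
# specified uncorrected bit error rate (errors per bit read).
def expected_errors(capacity_bytes: float, errors_per_bit: float) -> float:
    bits_read = capacity_bytes * 8
    return bits_read * errors_per_bit

# A 10 TB drive read end to end, at the consumer and enterprise rates quoted above:
print(f"consumer (1 per 10^14 bits):   {expected_errors(10e12, 1e-14):.2f} expected errors")
print(f"enterprise (1 per 10^16 bits): {expected_errors(10e12, 1e-16):.4f} expected errors")
```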
The worst type of errors are silent data corruptions which are errors undetected by the disk firmware or the host operating system; some of these errors may be caused by hard disk drive malfunctions while others originate elsewhere in the connection between the drive and the host.
Development
The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Speaking in 1997, Gordon Moore called the increase "flabbergasting", while observing later that growth cannot continue forever. Price improvement decelerated to −12% per year during 2010–2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10% per year during 2010–2016, and there was difficulty in migrating from perpendicular recording to newer technologies.
As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit/in², which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains). Since the mid-2000s, areal density progress has been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and the ability of the head to write. In order to maintain acceptable signal-to-noise, smaller grains are required; smaller grains may self-reverse (electrothermal instability) unless their magnetic strength is increased, but known write head materials are unable to generate a magnetic field strong enough to write the medium in the increasingly smaller space taken by grains.
Magnetic storage technologies are being developed to address this trilemma, and compete with flash memory–based solid-state drives (SSDs). In 2013, Seagate introduced shingled magnetic recording (SMR), intended as something of a "stopgap" technology between PMR and Seagate's intended successor heat-assisted magnetic recording (HAMR). SMR utilizes overlapping tracks for increased data density, at the cost of design complexity and lower data access speeds (particularly write speeds and random access 4k speeds).
By contrast, HGST (now part of Western Digital) focused on developing ways to seal helium-filled drives instead of the usual filtered air. Since turbulence and friction are reduced, higher areal densities can be achieved due to using a smaller track width, and the energy dissipated due to friction is lower as well, resulting in a lower power draw. Furthermore, more platters can be fit into the same enclosure space, although helium gas is notoriously difficult to prevent escaping. Thus, helium drives are completely sealed and do not have a breather port, unlike their air-filled counterparts.
Other recording technologies are either under research or have been commercially implemented to increase areal density, including Seagate's heat-assisted magnetic recording (HAMR). HAMR requires a different architecture with redesigned media and read/write heads, new lasers, and new near-field optical transducers. HAMR is expected to ship commercially in late 2024, after technical issues delayed its introduction by more than a decade, from earlier projections as early as 2009. HAMR's planned successor, bit-patterned recording (BPR), has been removed from the roadmaps of Western Digital and Seagate. Western Digital's microwave-assisted magnetic recording (MAMR), also referred to as energy-assisted magnetic recording (EAMR), was sampled in 2020, with the first EAMR drive, the Ultrastar HC550, shipping in late 2020. Two-dimensional magnetic recording (TDMR) and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads have appeared in research papers.
Some drives have adopted dual independent actuator arms to increase read/write speeds and compete with SSDs.
A 3D-actuated vacuum drive (3DHD) concept and 3D magnetic recording have been proposed.
Depending upon assumptions on feasibility and timing of these technologies, Seagate forecasts that areal density will grow 20% per year during 2020–2034.
Capacity
The highest-capacity HDDs shipping commercially are 32 TB. The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons, e.g. the operating system using some space, use of some space for data redundancy, space use for file system structures. Confusion of decimal prefixes and binary prefixes can also lead to errors.
Calculation
Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer's product specification, and from the drive itself through use of operating system functions that invoke low-level drive commands. Older IBM and compatible drives, e.g. IBM 3390 using the CKD record format, have variable length records; such drive capacity calculations must take into account the characteristics of the records. Some newer DASD simulate CKD, and the same capacity formulae apply.
The gross capacity of older sector-oriented HDDs is calculated by summing, over each recording zone of the drive, the product of the number of tracks in that zone, the number of sectors per track in that zone, and the number of bytes per sector (most commonly 512). Some modern SATA drives also report cylinder-head-sector (CHS) capacities, but these are not physical parameters because the reported values are constrained by historic operating system interfaces. The C/H/S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. When using the C/H/S method to describe modern large drives, the number of heads is often set to 64, although a typical modern hard disk drive has between one and four platters. In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs, a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. Furthermore, many HDDs store their firmware in a reserved service zone, which is typically not accessible by the user, and is not included in the capacity calculation.
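A minimal sketch of the two capacity calculations described above, from the logical block count (the modern LBA method) and from C/H/S geometry; the sample values are common illustrative figures rather than the parameters of any specific drive.

```python
# Gross capacity from the logical block count reported by the drive (LBA method).
def capacity_from_lba(total_blocks: int, block_size: int = 512) -> int:
    return total_blocks * block_size

# Gross capacity from cylinder/head/sector geometry (historical; modern drives
# only report synthetic C/H/S values constrained by old interfaces).
def capacity_from_chs(cylinders: int, heads: int, sectors_per_track: int,
                      bytes_per_sector: int = 512) -> int:
    return cylinders * heads * sectors_per_track * bytes_per_sector

print(capacity_from_lba(1_953_525_168))    # a typical "1 TB" drive: 1,000,204,886,016 bytes
print(capacity_from_chs(16_383, 16, 63))   # the classic ATA C/H/S limit: ~8.4 GB
```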
For RAID subsystems, data integrity and fault-tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with n drives loses 1/n of its capacity (equal to the capacity of a single drive) due to storing parity information. RAID subsystems are multiple drives that appear to be one drive or more drives to the user, but provide fault tolerance. Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or by using separate 512-byte sectors for the checksum data.
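The usable-capacity arithmetic for the simple RAID levels mentioned above can be sketched as follows; this ignores per-vendor metadata, checksum sectors, and other overheads.

```python
# Usable capacity of simple RAID layouts built from n identical drives (simplified).
def raid_usable_tb(level: int, n_drives: int, drive_tb: float) -> float:
    if level == 0:                          # striping, no redundancy
        return n_drives * drive_tb
    if level == 1:                          # mirroring: about half the raw capacity
        return n_drives * drive_tb / 2
    if level == 5:                          # single parity: loses one drive's worth
        return (n_drives - 1) * drive_tb
    if level == 6:                          # double parity: loses two drives' worth
        return (n_drives - 2) * drive_tb
    raise ValueError("RAID level not covered by this sketch")

print(raid_usable_tb(5, 6, 4.0))   # six 4 TB drives in RAID 5 -> 20.0 TB usable
print(raid_usable_tb(1, 2, 4.0))   # two 4 TB drives mirrored  -> 4.0 TB usable
```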
Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user without knowledge of special disk partitioning utilities like diskpart in Windows.
Formatting
Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error-checking data, and spacing.
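The saving from larger blocks can be illustrated with a small calculation; the per-block overhead figures below are assumptions picked only to show the shape of the effect, since actual gap, sync, marker and ECC sizes vary by drive and generation.

```python
# Illustrative format efficiency for legacy 512-byte vs Advanced Format 4096-byte blocks.
# Overhead figures are assumptions for illustration, not measured values.
def format_efficiency(user_bytes: int, overhead_bytes: int) -> float:
    """Fraction of on-disk space carrying user data."""
    return user_bytes / (user_bytes + overhead_bytes)

legacy = format_efficiency(user_bytes=512, overhead_bytes=65)       # assumed ~65 bytes per sector
advanced = format_efficiency(user_bytes=4096, overhead_bytes=115)   # assumed ~115 bytes per block
print(f"512-byte sectors:  {legacy:.1%} of space holds user data")
print(f"4096-byte sectors: {advanced:.1%} of space holds user data")
```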
The process of initializing these logical blocks on the physical disk platters is called low-level formatting, which is usually performed at the factory and is not normally changed in the field. High-level formatting writes data structures used by the operating system to organize data files on the disk. This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file.
Examples of partition mapping scheme include Master boot record (MBR) and GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data.
Units
In the early days of computing, the total capacity of HDDs was specified in seven to nine decimal digits, frequently truncated with the idiom "millions".
By the 1970s, the total capacity of HDDs was given by manufacturers using SI decimal prefixes such as megabytes (1 MB = 1,000,000 bytes), gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). However, capacities of memory are usually quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000.
Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses the decimal convention when reporting HDD capacity. The default behavior of the df command-line utility on Linux is to report the HDD capacity as a number of 1024-byte units.
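The arithmetic behind the "1 TB reported as 931 GB" discrepancy is just a change of divisor, as the short sketch below shows.

```python
# Why a drive sold as 1 TB shows up as roughly 931 GB in software that uses binary prefixes.
advertised_bytes = 1_000_000_000_000          # manufacturer's 1 TB = 10^12 bytes

decimal_gb = advertised_bytes / 1000**3       # decimal gigabytes (powers of 1000)
binary_gib = advertised_bytes / 1024**3       # binary gibibytes (powers of 1024), often labelled "GB"

print(f"decimal: {decimal_gb:.0f} GB")        # 1000 GB
print(f"binary:  {binary_gib:.2f} GiB")       # 931.32 GiB, reported by Windows as "931 GB"
```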
The difference between the decimal and binary prefix interpretation caused some consumer confusion and led to class action suits against HDD manufacturers. The plaintiffs argued that the use of decimal prefixes effectively misled consumers, while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries. In 2020, a California court ruled that use of the decimal prefixes with a decimal meaning was not misleading.
Form factors
IBM's first hard disk drive, the IBM 350, used a stack of fifty 24-inch platters, stored 3.75 MB of data (approximately the size of one modern digital picture), and was of a size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk, which used six 14-inch (nominal size) platters in a removable pack and was roughly the size of a washing machine. This became a standard platter size for many years, used also by other manufacturers. The IBM 2314 used platters of the same size in an eleven-high pack and introduced the "drive in a drawer" layout, sometimes called the "pizza oven", although the "drawer" was not the complete drive. Into the 1970s, HDDs were offered in standalone cabinets of varying dimensions containing from one to four HDDs.
Beginning in the late 1960s, drives were offered that fit entirely into a chassis that would mount in a 19-inch rack. Digital's RK05 and RL01 were early examples using single 14-inch platters in removable packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-to-late 1980s, the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters, was a popular product.
With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit to the FDD mountings became desirable. Starting with the Shugart Associates SA1000, HDD form factors initially followed those of 8-inch, 5¼-inch, and 3½-inch floppy disk drives. Although referred to by these nominal sizes, the actual sizes for those three drives respectively are 9.5", 5.75" and 4" wide. Because there were no smaller floppy disk drives, smaller HDD form factors such as 2½-inch drives (actually 2.75" wide) developed from product offerings or industry standards.
The 2½-inch and 3½-inch hard disks are the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters.
Performance characteristics
The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads, including:
Seek time is a measure of how long it takes the head assembly to travel to the track of the disk that contains data.
Rotational latency is incurred because the desired disk sector may not be directly under the head when data transfer is requested. Average rotational latency is one-half the rotational period; a worked calculation for common spindle speeds is given under Latency below.
The bit rate or data transfer rate (once the head is in the right position) creates delay which is a function of the number of blocks transferred; typically relatively small, but can be quite long with the transfer of large contiguous files.
Delay may also occur if the drive disks are stopped to save energy.
Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress.
Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. Increasing areal density increases throughput by increasing data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity.
Latency
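Average rotational latency follows directly from the spindle speed, since on average the desired sector is half a revolution away; a quick calculation for common spindle speeds:

```python
# Average rotational latency in milliseconds: half a revolution at the given spindle speed.
def average_latency_ms(rpm: int) -> float:
    return 0.5 * 60_000 / rpm

for rpm in (4_200, 5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} rpm -> {average_latency_ms(rpm):.2f} ms average rotational latency")
```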
Data transfer rate
A typical 7,200-rpm desktop HDD has a sustained "disk-to-buffer" data transfer rate that depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation), and is generally somewhat higher for 10,000-rpm drives. A current, widely used standard for the "buffer-to-computer" interface is SATA, which can send about 300 megabytes per second (10-bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file-generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files.
HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate.
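A rough sketch of how the sustained disk-to-buffer rate follows from spindle speed and the number of sectors per track; the zone geometry figures below are hypothetical, chosen only to show why outer tracks transfer faster than inner ones.

```python
# Sustained media transfer rate ≈ bytes per track × revolutions per second.
def sustained_rate_mb_s(sectors_per_track: int, bytes_per_sector: int, rpm: int) -> float:
    bytes_per_track = sectors_per_track * bytes_per_sector
    revolutions_per_second = rpm / 60
    return bytes_per_track * revolutions_per_second / 1e6   # decimal MB/s

# Hypothetical zoned geometry on a 7,200-rpm drive:
print(f"outer zone: {sustained_rate_mb_s(3000, 512, 7200):.0f} MB/s")
print(f"inner zone: {sustained_rate_mb_s(1500, 512, 7200):.0f} MB/s")
```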
Other considerations
Other performance considerations include quality-adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance.
Access and interfaces
Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394, or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read/write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive.
Typically, a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks.
Modern interfaces connect the drive to the host interface with a single data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals.
Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, was standard on servers, workstations, Commodore Amiga, Atari ST and Apple Macintosh computers through the mid-1990s, by which time most models had been transitioned to newer interfaces. The length limit of the data cable allows for external SCSI devices. The SCSI command set is still used in the more modern SAS interface.
Integrated Drive Electronics (IDE), later standardized under the name AT Attachment (ATA, with the alias PATA (Parallel ATA) retroactively added upon introduction of SATA) moved the HDD controller from the interface card to the disk drive. This helped to standardize the host/controller interface, reduce the programming complexity in the host device driver, and reduced system cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements led to an "ultra DMA" (UDMA) mode using an 80-conductor cable with additional wires to reduce crosstalk at high speed.
EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of direct memory access (DMA) to transfer data between the disk and the computer without the involvement of the CPU, an improvement later adopted by the official ATA standards. By directly transferring data between memory and disk, DMA eliminates the need for the CPU to copy byte per byte, therefore allowing it to process other tasks while the data transfer occurs.
Fibre Channel (FC) is a successor to parallel SCSI interface on enterprise market. It is a serial protocol. In disk drives usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs). Recently other protocols for this field, like iSCSI and ATA over Ethernet have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fiber optics. The latter are traditionally reserved for larger devices, such as servers or disk array controllers.
Serial Attached SCSI (SAS). The SAS is a new generation serial communication protocol for devices designed to allow for much higher speed data transfers and is compatible with SATA. SAS uses a mechanically compatible data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many server-oriented SAS RAID controllers are also capable of addressing SATA HDDs. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands.
Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI. SATA I to III are designed to be compatible with, and use, a subset of SAS commands, and compatible interfaces. Therefore, a SATA hard drive can be connected to and controlled by a SAS hard drive controller (with some minor exceptions such as drives/controllers with limited compatibility). However, they cannot be connected the other way round—a SATA controller cannot be connected to a SAS drive.
Integrity and failure
Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash – a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads.
The HDD's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air density is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation. Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives – they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. An exception to this is hermetically sealed, helium-filled HDDs, which largely eliminate environmental issues that can arise due to humidity or atmospheric pressure changes. Such HDDs were introduced by HGST in their first successful high-volume implementation in 2013.
For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface and can render the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal).
When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one of an identical hard disk. In the case of read-write head faults, they can be replaced using specialized tools in a dust-free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk-platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX-like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving.
A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer-grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade" of a drive does not relate to the drive's failure rate.
A 2011 summary of research into SSD and magnetic disk failure patterns by Tom's Hardware reported the following findings:
Mean time between failures (MTBF) does not indicate reliability; the annualized failure rate is higher and usually more relevant.
HDDs do not tend to fail during early use, and temperature has only a minor effect; instead, failure rates steadily increase with age.
S.M.A.R.T. warns of mechanical issues but not other issues affecting reliability, and is therefore not a reliable indicator of condition.
Failure rates of drives sold as "enterprise" and "consumer" are "very much similar", although these drive types are customized for their different operating environments.
In drive arrays, one drive's failure significantly increases the short-term risk of a second drive failing.
Backblaze, a storage provider, reported an annualized failure rate of two percent per year for a storage farm with 110,000 off-the-shelf HDDs, with reliability varying widely between models and manufacturers. Backblaze subsequently reported that the failure rate for HDDs and SSDs of equivalent age was similar.
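To make the relationship between a quoted MTBF figure and an annualized failure rate concrete, the sketch below converts between the two. It assumes a constant failure rate (an exponential lifetime model), and the 1.2-million-hour MTBF in the example is a made-up illustrative figure rather than a value from the studies above.

```python
import math

HOURS_PER_YEAR = 8766  # average year length in hours, including leap years

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF, assuming a constant failure rate."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def mtbf_from_afr(afr: float) -> float:
    """MTBF (hours) implied by an annualized failure rate, under the same assumption."""
    return -HOURS_PER_YEAR / math.log(1.0 - afr)

if __name__ == "__main__":
    # A hypothetical 1.2-million-hour MTBF corresponds to an AFR of roughly 0.7%...
    print(f"AFR for 1.2 Mh MTBF: {afr_from_mtbf(1_200_000):.2%}")
    # ...while a ~2% AFR, like the Backblaze figure above, implies an MTBF near 430,000 hours.
    print(f"MTBF for 2% AFR: {mtbf_from_afr(0.02):,.0f} hours")
```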
To minimize cost and overcome failures of individual HDDs, storage systems providers rely on redundant HDD arrays. HDDs that fail are replaced on an ongoing basis.
Market segments
Consumer segment
Desktop HDDs
Desktop HDDs typically have one to five internal platters, rotate at 5,400 to 10,000 rpm, and have correspondingly high media transfer rates (1 GB = 10⁹ bytes). Earlier drives (1980s–1990s) tended to rotate more slowly. As of 2019, the highest-capacity desktop HDDs stored 16 TB, with 18 TB drives planned for later that year; 18 TB HDDs were released in 2020. The typical speed of a hard drive in an average desktop computer is 7,200 rpm, whereas low-cost desktop computers may use 5,900 rpm or 5,400 rpm drives. For some time in the 2000s and early 2010s, some desktop users and data centers also used 10,000 rpm drives such as the Western Digital Raptor, but such drives have become much rarer and are not commonly used now, having been replaced by NAND flash-based SSDs.
Mobile (laptop) HDDs
Smaller than their desktop and enterprise counterparts, mobile HDDs tend to be slower and have lower capacity, because they typically have one internal platter and use the 2.5-inch or 1.8-inch form factor rather than the 3.5-inch form factor common in desktops. Mobile HDDs spin at 4,200 rpm, 5,200 rpm, 5,400 rpm, or 7,200 rpm, with 5,400 rpm being the most common; 7,200 rpm drives tend to be more expensive and have smaller capacities, while 4,200 rpm models usually have very high storage capacities. Because of the smaller platter(s), mobile HDDs generally have lower capacity than their desktop counterparts.
Consumer electronics HDDs
These drives typically spin at 5,400 rpm and include:
Video hard drives, sometimes called "surveillance hard drives", are embedded into digital video recorders and provide a guaranteed streaming capacity, even in the face of read and write errors.
Drives embedded into automotive vehicles; they are typically built to resist larger amounts of shock and operate over a larger temperature range.
External and portable HDDs
Current external hard disk drives typically connect via USB-C; earlier models use USB-B (sometimes with a pair of ports for better bandwidth) or (rarely) an eSATA connection. Variants using the USB 2.0 interface generally have slower data transfer rates than internally mounted hard drives connected through SATA. Plug-and-play drive functionality offers system compatibility and features large storage options and portable design. Available capacities for external hard disk drives ranged from 500 GB to 10 TB. External hard disk drives are usually available as assembled integrated products, but may also be assembled by combining an external enclosure (with a USB or other interface) with a separately purchased drive. They are available in 2.5-inch and 3.5-inch sizes; 2.5-inch variants are typically called portable external drives, while 3.5-inch variants are referred to as desktop external drives. "Portable" drives are packaged in smaller and lighter enclosures than the "desktop" drives; additionally, "portable" drives use power provided by the USB connection, while "desktop" drives require external power bricks. Features such as encryption, Wi-Fi connectivity, biometric security or multiple interfaces (for example, FireWire) are available at a higher cost. There are pre-assembled external hard disk drives that, when taken out of their enclosures, cannot be used internally in a laptop or desktop computer because of the USB interface embedded on their printed circuit boards and the lack of SATA (or Parallel ATA) interfaces.
Enterprise and business segment
Server and workstation HDDs
Typically used with multiple-user computers running enterprise software. Examples are: transaction processing databases, internet infrastructure (email, webserver, e-commerce), scientific computing software, and nearline storage management software. Enterprise drives commonly operate continuously ("24/7") in demanding environments while delivering the highest possible performance without sacrificing reliability. Maximum capacity is not the primary goal, and as a result the drives are often offered in capacities that are relatively low in relation to their cost.
The fastest enterprise HDDs spin at 10,000 or 15,000 rpm, and can achieve high sequential media transfer speeds and sustained transfer rates. Drives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (as they have less air drag) and therefore generally have lower capacity than the highest-capacity desktop drives. Enterprise HDDs are commonly connected through Serial Attached SCSI (SAS) or Fibre Channel (FC). Some support multiple ports, so they can be connected to a redundant host bus adapter.
Enterprise HDDs can have sector sizes larger than 512 bytes (often 520, 524, 528 or 536 bytes). The additional per-sector space can be used by hardware RAID controllers or applications for storing Data Integrity Field (DIF) or Data Integrity Extensions (DIX) data, resulting in higher reliability and prevention of silent data corruption.
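As a rough illustration of how the extra per-sector bytes can be used, the sketch below lays out a 520-byte sector as 512 bytes of user data followed by an 8-byte protection-information field in the style of T10 DIF: a 2-byte CRC guard tag, a 2-byte application tag, and a 4-byte reference tag. The CRC polynomial and field layout shown are the commonly cited T10 DIF conventions, but exact parameters vary between controllers, so treat this as an assumption-laden example rather than a reference implementation.

```python
import struct

SECTOR_DATA = 512          # user data bytes per sector
PI_BYTES = 8               # protection information appended per sector (520-byte format)
T10_DIF_POLY = 0x8BB7      # CRC-16 polynomial commonly cited for the T10 DIF guard tag

def crc16_t10dif(data: bytes) -> int:
    """Bitwise CRC-16 over the sector data (MSB-first, initial value 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ T10_DIF_POLY) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def make_520_byte_sector(data: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append an 8-byte protection-information field: guard, application and reference tags."""
    if len(data) != SECTOR_DATA:
        raise ValueError("expected exactly 512 bytes of user data")
    guard = crc16_t10dif(data)
    ref_tag = lba & 0xFFFFFFFF   # reference tag: low 32 bits of the logical block address
    return data + struct.pack(">HHI", guard, app_tag, ref_tag)

sector = make_520_byte_sector(bytes(SECTOR_DATA), lba=12345)
print(len(sector))  # 520
```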
Surveillance hard drives;
Video recording HDDs used in network video recorders.
Economy
Price evolution
HDD price per byte decreased at a rate of 40% per year during 1988–1996, 51% per year during 1996–2003, and 34% per year during 2003–2010. The price decrease slowed to 13% per year during 2011–2014, as areal density growth slowed and the 2011 Thailand floods damaged manufacturing facilities, and held at 11% per year during 2010–2017.
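To show what these annual rates imply when compounded, the short sketch below computes the cumulative price-per-byte decline over each quoted period; it is purely arithmetic on the figures above and introduces no additional data.

```python
# Cumulative price-per-byte decline implied by the annual rates quoted above.
periods = [
    ("1988-1996", 0.40, 8),
    ("1996-2003", 0.51, 7),
    ("2003-2010", 0.34, 7),
    ("2011-2014", 0.13, 3),
]

for label, annual_decline, years in periods:
    remaining = (1 - annual_decline) ** years   # fraction of the starting price left
    print(f"{label}: price falls to {remaining:.1%} of its starting value "
          f"(a {1 - remaining:.0%} cumulative drop)")
```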
The Federal Reserve Board has published a quality-adjusted price index for large-scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large-scale storage systems decreased at the rate of 30% per year during 2004–2009 and 22% per year during 2009–2014.
Manufacturers and sales
More than 200 companies have manufactured HDDs over time, but consolidation has concentrated production among just three manufacturers today: Western Digital, Seagate, and Toshiba. Production is mainly in the Pacific Rim.
HDD unit shipments peaked at 651 million units in 2010 and have been declining since then to 166 million units in 2022. Seagate at 43% of units had the largest market share.
Competition from SSDs
HDDs are being superseded by solid-state drives (SSDs) in markets where the higher speed (up to 7 gigabytes per second for M.2 (NGFF) NVMe drives and 2.5 gigabytes per second for PCIe expansion card drives), ruggedness, and lower power consumption of SSDs are more important than price, since the bit cost of SSDs is four to nine times higher than that of HDDs. HDDs are reported to have a failure rate of 2–9% per year, while SSDs have fewer failures: 1–3% per year. However, SSDs have more uncorrectable data errors than HDDs.
SSDs are available in larger capacities (up to 100 TB) than the largest HDD, as well as higher storage densities (100 TB and 30 TB SSDs are housed in 2.5 inch HDD cases with the same height as a 3.5-inch HDD), although such large SSDs are very expensive.
A laboratory demonstration of a 1.33 Tb 3D NAND chip with 96 layers (NAND is the flash memory commonly used in solid-state drives) achieved an areal density of 5.5 Tbit/in², while the maximum areal density for HDDs is 1.5 Tbit/in². The areal density of flash memory is doubling every two years, similar to Moore's law (40% per year) and faster than the 10–20% per year for HDDs. The maximum capacity reached 16 terabytes for an HDD and 100 terabytes for an SSD. HDDs were used in 70% of the desktop and notebook computers produced in 2016, and SSDs were used in 30%. The usage share of HDDs is declining and could drop below 50% in 2018–2019 according to one forecast, because SSDs are replacing smaller-capacity (less than one terabyte) HDDs in desktop and notebook computers and MP3 players.
The market for silicon-based flash memory (NAND) chips, used in SSDs and other applications, is growing faster than for HDDs. Worldwide NAND revenue grew 16% per year from $22 billion to $57 billion during 2011–2017, while production grew 45% per year from 19 exabytes to 175 exabytes.
See also
Automatic acoustic management
Cleanroom
Click of death
Comparison of disk encryption software
Data erasure
Drive mapping
Error recovery control
Hard disk drive performance characteristics
Hybrid drive
Microdrive
Network drive (file server, shared resource)
Object storage
Write precompensation
Notes
References
Further reading
External links
Hard Disk Drives Encyclopedia
Video showing an opened HD working
Average seek time of a computer disk (archived)
Timeline: 50 Years of Hard Drives.
HDD from inside: Tracks and Zones. How hard it can be?
Hard disk hacking firmware modifications, in eight parts, going as far as booting a Linux kernel on an ordinary HDD controller board
Hiding Data in Hard Drive's Service Areas (PDF), February 14, 2013, by Ariel Berkman (archived)
Rotary Acceleration Feed Forward (RAFF) Information Sheet (PDF), Western Digital, January 2013
PowerChoice Technology for Hard Disk Drive Power Savings and Flexibility (PDF), Seagate Technology, March 2010
Shingled Magnetic Recording (SMR), HGST, Inc., 2015
The Road to Helium, HGST, Inc., 2015
Research paper about perspective usage of magnetic photoconductors in magneto-optical data storage.
20th-century inventions
American inventions
Articles containing video clips
Computer data storage
Computer storage devices
Rotating disc computer storage media | Hard disk drive | [
"Technology"
] | 11,635 | [
"Computer storage devices",
"Recording devices"
] |
13,821 | https://en.wikipedia.org/wiki/Hadron | In particle physics, a hadron is a composite subatomic particle made of two or more quarks held together by the strong interaction. They are analogous to molecules, which are held together by the electric force. Most of the mass of ordinary matter comes from two hadrons: the proton and the neutron, while most of the mass of the protons and neutrons is in turn due to the binding energy of their constituent quarks, which arises from the strong force.
Hadrons are categorized into two broad families: baryons, made of an odd number of quarks (usually three), and mesons, made of an even number of quarks (usually two: one quark and one antiquark). Protons and neutrons (which make up the majority of the mass of an atom) are examples of baryons; pions are an example of a meson. A tetraquark state (an exotic meson), named the Z(4430), was discovered in 2007 by the Belle Collaboration and confirmed as a resonance in 2014 by the LHCb collaboration. Two pentaquark states (exotic baryons), named and , were discovered in 2015 by the LHCb collaboration. There are several other "exotic" hadron candidates and other colour-singlet quark combinations that may also exist.
Almost all "free" hadrons and antihadrons (meaning, in isolation and not bound within an atomic nucleus) are believed to be unstable and eventually decay into other particles. The only known possible exception is free protons, which appear to be stable, or at least, take immense amounts of time to decay (order of 1034+ years). By way of comparison, free neutrons are the longest-lived unstable particle, and decay with a half-life of about 611 seconds, and have a mean lifetime of 879 seconds, see free neutron decay.
Hadron physics is studied by colliding hadrons, e.g. protons, with each other or the nuclei of dense, heavy elements, such as lead (Pb) or gold (Au), and detecting the debris in the produced particle showers. A similar process occurs in the natural environment, in the extreme upper-atmosphere, where muons and mesons such as pions are produced by the collisions of cosmic rays with rarefied gas particles in the outer atmosphere.
Terminology and etymology
The term "hadron" is a new Greek word introduced by L. B. Okun in a plenary talk at the 1962 International Conference on High Energy Physics at CERN. He opened his talk with the definition of a new category term:
Properties
According to the quark model, the properties of hadrons are primarily determined by their so-called valence quarks. For example, a proton is composed of two up quarks (each with electric charge +2/3, for a total of +4/3 together) and one down quark (with electric charge −1/3). Adding these together yields the proton charge of +1. Although quarks also carry color charge, hadrons must have zero total color charge because of a phenomenon called color confinement. That is, hadrons must be "colorless" or "white". The simplest ways for this to occur are with a quark of one color and an antiquark of the corresponding anticolor, or three quarks of different colors. Hadrons with the first arrangement are a type of meson, and those with the second arrangement are a type of baryon.
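As a quick arithmetic check of the charge bookkeeping just described, using the standard quark-model charges (+2/3 for the up quark, −1/3 for the down quark), the same sum also gives the neutron's zero charge:

```latex
% Electric charge of the proton (uud) and the neutron (udd) from quark charges
\begin{aligned}
Q_p &= \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1 \\
Q_n &= \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0
\end{aligned}
```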
Massless virtual gluons compose the overwhelming majority of particles inside hadrons, as well as the major constituents of its mass (with the exception of the heavy charm and bottom quarks; the top quark vanishes before it has time to bind into a hadron). The gluons that bind the quarks together carry sufficient energy to produce short-lived resonances composed of massive quarks. One outcome is that short-lived pairs of virtual quarks and antiquarks are continually forming and vanishing again inside a hadron. Because the virtual quarks are not stable wave packets (quanta), but an irregular and transient phenomenon, it is not meaningful to ask which quark is real and which virtual; only the small excess is apparent from the outside in the form of a hadron. Therefore, when a hadron or anti-hadron is stated to consist of (typically) two or three quarks, this technically refers to the constant excess of quarks versus antiquarks.
Like all subatomic particles, hadrons are assigned quantum numbers corresponding to the representations of the Poincaré group: J^PC(m), where J is the spin quantum number, P the intrinsic parity (or P-parity), C the charge conjugation (or C-parity), and m is the particle's mass. Note that the mass of a hadron has very little to do with the mass of its valence quarks; rather, due to mass–energy equivalence, most of the mass comes from the large amount of energy associated with the strong interaction. Hadrons may also carry flavor quantum numbers such as isospin (G-parity) and strangeness. All quarks carry an additive, conserved quantum number called baryon number (B), which is +1/3 for quarks and −1/3 for antiquarks. This means that baryons (composite particles made of three, five or a larger odd number of quarks) have B = 1 whereas mesons have B = 0.
Hadrons have excited states known as resonances. Each ground state hadron may have several excited states; several hundred different resonances have been observed in experiments. Resonances decay extremely quickly (within about 10⁻²⁴ seconds) via the strong nuclear force.
In other phases of matter the hadrons may disappear. For example, at very high temperature and high pressure, unless there are sufficiently many flavors of quarks, the theory of quantum chromodynamics (QCD) predicts that quarks and gluons will no longer be confined within hadrons, "because the strength of the strong interaction diminishes with energy". This property, which is known as asymptotic freedom, has been experimentally confirmed in the energy range between 1 GeV (gigaelectronvolt) and 1 TeV (teraelectronvolt). All free hadrons except (possibly) the proton and antiproton are unstable.
Baryons
Baryons are hadrons containing an odd number of valence quarks (at least 3). Most well-known baryons such as the proton and neutron have three valence quarks, but pentaquarks with five quarks—three quarks of different colors, and also one extra quark-antiquark pair—have also been proven to exist. Because baryons have an odd number of quarks, they are also all fermions, i.e., they have half-integer spin. As quarks possess baryon number B = 1/3, baryons have baryon number B = 1. Pentaquarks also have B = 1, since the extra quark's and antiquark's baryon numbers cancel.
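The baryon-number bookkeeping described above can be tallied explicitly; the values used below are the standard quark-model assignments (+1/3 per quark, −1/3 per antiquark), stated here as a check rather than new information:

```latex
% Baryon number B from constituent quarks
\begin{aligned}
\text{baryon } (qqq):\quad            & 3\left(+\tfrac{1}{3}\right) = 1 \\
\text{pentaquark } (qqqq\bar{q}):\quad & 4\left(+\tfrac{1}{3}\right) + \left(-\tfrac{1}{3}\right) = 1 \\
\text{meson } (q\bar{q}):\quad         & \left(+\tfrac{1}{3}\right) + \left(-\tfrac{1}{3}\right) = 0
\end{aligned}
```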
Each type of baryon has a corresponding antiparticle (antibaryon) in which quarks are replaced by their corresponding antiquarks. For example, just as a proton is made of two up quarks and one down quark, its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark.
As of August 2015, there are two known pentaquarks, and , both discovered in 2015 by the LHCb collaboration.
Mesons
Mesons are hadrons containing an even number of valence quarks (at least two). Most well known mesons are composed of a quark-antiquark pair, but possible tetraquarks (four quarks) and hexaquarks (six quarks, comprising either a dibaryon or three quark-antiquark pairs) may have been discovered and are being investigated to confirm their nature. Several other hypothetical types of exotic meson may exist which do not fall within the quark model of classification. These include glueballs and hybrid mesons (mesons bound by excited gluons).
Because mesons have an even number of quarks, they are also all bosons, with integer spin, e.g., 0 or 1. They have baryon number B = 0. Examples of mesons commonly produced in particle physics experiments include pions and kaons. Pions also play a role in holding atomic nuclei together via the residual strong force.
See also
Footnotes
References
External links
Nuclear physics | Hadron | [
"Physics"
] | 1,838 | [
"Hadrons",
"Subatomic particles",
"Matter",
"Nuclear physics"
] |
13,850 | https://en.wikipedia.org/wiki/Hang%20gliding | Hang gliding is an air sport or recreational activity in which a pilot flies a light, non-motorised, fixed-wing heavier-than-air aircraft called a hang glider. Most modern hang gliders are made of an aluminium alloy or composite frame covered with synthetic sailcloth to form a wing. Typically the pilot is in a harness suspended from the airframe, and controls the aircraft by shifting body weight in opposition to a control frame.
Early hang gliders had a low lift-to-drag ratio, so pilots were restricted to gliding down small hills. By the 1980s this ratio significantly improved, and since then pilots have been able to soar for hours, gain thousands of feet of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers. The Fédération Aéronautique Internationale and national airspace governing organisations control some regulatory aspects of hang gliding. Obtaining instruction is highly recommended for safety, and it is a mandatory requirement in many countries.
History
In 1853, George Cayley invented a slope-launched, piloted glider.
Most early glider designs did not ensure safe flight; the problem was that early flight pioneers did not sufficiently understand the underlying principles that made a bird's wing work. Starting in the 1880s, technical and scientific advancements were made that led to the first truly practical gliders, such as those developed in the United States by John Joseph Montgomery. Otto Lilienthal built controllable gliders in the 1890s, with which he could ridge soar. His rigorously documented work influenced later designers, making Lilienthal one of the most influential early aviation pioneers. His aircraft was controlled by weight shift and is similar to a modern hang glider.
A stiffened flexible-wing hang glider appeared in 1904, when Jan Lavezzari flew a double lateen sail hang glider off Berck Beach, France. In 1910 in Breslau, the triangle control frame, with the pilot hung behind it, was evident in a gliding club's activity. The biplane hang glider was widely publicized in popular magazines, complete with building plans; such biplane hang gliders were constructed and flown in several nations after Octave Chanute demonstrated his tailed biplane hang gliders. In April 1909, a how-to article by Carl S. Bates proved to be a seminal hang glider article that seemingly influenced builders even into contemporary times; many builders made their first hang glider by following the plan in his article. In 1940 Volmer Jensen built a biplane hang glider called the VJ-11 that allowed safe three-axis control of a foot-launched hang glider.
On 23 November 1948, Francis Rogallo and Gertrude Rogallo applied for a kite patent for a fully flexible kited wing, with approved claims for its stiffenings and gliding uses. In 1957 the American space agency NASA began testing this flexible wing, or Rogallo wing, in various flexible and semi-rigid configurations in order to use it as a recovery system for the Gemini space capsules. The various stiffening formats and the wing's simplicity of design and ease of construction, along with its capability of slow flight and its gentle landing characteristics, did not go unnoticed by hang glider enthusiasts. In 1960–1962 Barry Hill Palmer adapted the flexible wing concept to make foot-launched hang gliders with four different control arrangements. In 1963 Mike Burns adapted the flexible wing to build a towable kite-hang glider he called Skiplane. Also in 1963, John W. Dickenson adapted the flexible wing airfoil concept to make another water-ski kite glider; for this, the Fédération Aéronautique Internationale vested Dickenson with the Hang Gliding Diploma (2006) for the invention of the "modern" hang glider. Since then, the Rogallo wing has been the most used airfoil of hang gliders.
Components
Hang glider sailcloth
Hang glider sailcloth is normally made from woven or laminated fiber, such as dacron or mylar, respectively.
Woven polyester sailcloth is a very tight weave of small diameter polyester fibers that has been stabilized by the hot-press impregnation of a polyester resin. The resin impregnation is required to provide resistance to distortion and stretch. This resistance is important in maintaining the aerodynamic shape of the sail. Woven polyester provides the best combination of light weight and durability in a sail, with the best overall handling qualities.
Laminated sail materials using polyester film achieve superior performance by using a lower stretch material that is better at maintaining sail shape, but is still relatively light in weight. The disadvantages of polyester film fabrics are that the reduced elasticity under load generally results in stiffer and less responsive handling, and polyester laminated fabrics are generally not as durable or long-lasting as the woven fabrics.
Triangle control frame
In most hang gliders, the pilot is ensconced in a harness suspended from the airframe, and exercises control by shifting body weight in opposition to a stationary control frame, also known as a triangle control frame, or an A-frame. The control frame normally consists of 2 "down-tubes" and a control bar/base bar/base-tube. Either end of the control bar is attached to an upright tube or a more aerodynamic strut (a "down-tube"), where both extend from the base-tube and are connected to the apex of the control frame/ the keel of the glider. This creates the shape of a triangle or 'A-frame'. In many of these configurations additional wheels or other equipment can be suspended from the bottom bar or rod ends.
Images showing a triangle control frame on Otto Lilienthal's 1892 hang glider show that the technology of such frames has existed since the early design of gliders, but he did not mention it in his patents. A control frame for body weight shift was also shown in Octave Chanute's designs. It was a major part of the now common design of hang gliders by George A. Spratt from 1929. The simplest cable-stayed A-frame was demonstrated at a Breslau gliding club hang gliding meet in 1908 by W. Simon, in a battened-wing, foot-launchable hang glider; hang glider historian Stephan Nitsch has also collected instances of the U control frame used in the first decade of the 1900s; the U is a variant of the A-frame.
Training and safety
Due to the poor safety record of early hang gliding pioneers, the sport has traditionally been considered unsafe. Advances in pilot training and glider construction have led to a much improved safety record. Modern hang gliders are very sturdy when constructed to Hang Glider Manufacturers Association, BHPA, Deutscher Hängegleiterverband, or other certified standards using modern materials. Although lightweight, they can be easily damaged, either through misuse or by continued operation in unsafe wind and weather conditions. All modern gliders have built-in dive recovery mechanisms such as luff lines in kingposted gliders, or "sprogs" in topless gliders.
Pilots fly in harnesses that support their bodies. Several different types of harnesses exist. Pod harnesses are put on like a jacket and the leg portion is behind the pilot during launch. Once in the air the feet are tucked into the bottom of the harness. They are zipped up in the air with a rope and unzipped before landing with a separate rope. A cocoon harness is slipped over the head and lies in front of the legs during launch. After takeoff, the feet are tucked into it and the back is left open. A knee hanger harness is also slipped over the head, but the knee part is wrapped around the knees before launch and picks up the pilot's legs automatically after launch. A supine or suprone harness is a seated harness. The shoulder straps are put on before launch and after takeoff the pilot slides back into the seat and flies in a seated position.
Pilots carry a parachute enclosed in the harness. In case of serious problems, the parachute is manually deployed (either by hand or with a ballistic assist) and carries both pilot and glider down to earth. Pilots also wear helmets and generally carry other safety items such as knives (for cutting their parachute bridle after impact or cutting their harness lines and straps in case of a tree or water landing), light ropes (for lowering from trees to haul up tools or climbing ropes), radios (for communication with other pilots or ground crew), and first-aid equipment.
The accident rate from hang glider flying has been dramatically decreased by pilot training. Early hang glider pilots learned their sport through trial and error, and gliders were sometimes home-built. Training programs have been developed for today's pilot with emphasis on flight within safe limits, as well as the discipline to cease flying when weather conditions are unfavorable, for example in excessive wind or when there is a risk of cloud suck.
In the UK, a 2011 study reported there is one death per 116,000 flights, a risk comparable to sudden cardiac death from running a marathon or playing tennis. An estimate of worldwide mortality rate is one death per 1,000 active pilots per year.
Most pilots learn at recognised courses which lead to the internationally recognised International Pilot Proficiency Information card issued by the FAI.
Launch
Launch techniques include launching from a hill/cliff/mountain/sand dune/any raised terrain on foot, tow-launching from a ground-based tow system, aerotowing (behind a powered aircraft), powered harnesses, and being towed up by a boat. Modern winch tows typically utilize hydraulic systems designed to regulate line tension; this reduces the chance of lock-out, as strong aerodynamic forces result in additional rope spooling out rather than direct tension on the tow line. Other more exotic launch techniques have also been used successfully, such as hot air balloon drops from very high altitude. When weather conditions are unsuitable to sustain a soaring flight, the result is a top-to-bottom flight, referred to as a "sled run". In addition to typical launch configurations, a hang glider may be constructed for alternative launching modes other than being foot launched; one practical avenue for this is for people who physically cannot foot-launch.
In 1983 Denis Cummings re-introduced a safe tow system that was designed to tow through the centre of mass and had a gauge that displayed the towing tension; it also integrated a 'weak link' that broke when the safe tow tension was exceeded. After initial testing in the Hunter Valley, Denis Cummings (pilot), John Clark ("Redtruck", driver) and Bob Silver (aficionado) began the Flatlands hang gliding competition at Parkes, NSW. The competition quickly grew, from 16 pilots in the first year to hosting a World Championship with 160 pilots towing from several wheat paddocks in western NSW.
In 1986 Denis and 'Redtruck' took a group of international pilots to Alice Springs to take advantage of the massive thermals. Using the new system many world records were set. With the growing use of the system, other launch methods were incorporated, static winch and towing behind an ultralight trike or an ultralight airplane.
Soaring flight and cross-country flying
A glider in flight is continuously descending, so to achieve an extended flight, the pilot must seek air currents rising faster than the sink rate of the glider. Selecting the sources of rising air currents is the skill that has to be mastered if the pilot wants to achieve flying long distances, known as cross-country (XC). Rising air masses derive from the following sources:
Thermals
The most commonly used source of lift is created by the Sun's energy heating the ground, which in turn heats the air above it. This warm air rises in columns known as thermals. Soaring pilots quickly become aware of land features which can generate thermals and their trigger points downwind, because thermals have a surface tension with the ground and roll until hitting a trigger point. When the thermal lifts, the first indicators are swooping birds feeding on the insects being carried aloft, dust devils, or a change in wind direction as the air is pulled in below the thermal. As the thermal climbs, bigger soaring birds indicate the thermal. The thermal rises until it either forms into a cumulus cloud or hits an inversion layer, where the surrounding air becomes warmer with height, which stops the thermal developing into a cloud. Also, nearly every glider contains an instrument known as a variometer (a very sensitive vertical speed indicator) which shows visually (and often audibly) the presence of lift and sink. Having located a thermal, a glider pilot will circle within the area of rising air to gain height. In the case of a cloud street, thermals can line up with the wind, creating rows of thermals and sinking air. A pilot can use a cloud street to fly long straight-line distances by remaining in the row of rising air.
Ridge lift
Ridge lift occurs when the wind encounters a mountain, cliff, hill, sand dune, or any other raised terrain. The air is pushed up the windward face of the mountain, creating lift. The area of lift extending from the ridge is called the lift band. Providing the air is rising faster than the glider's sink rate, gliders can soar and climb in the rising air by flying within the lift band parallel to the ridge. Ridge soaring is also known as slope soaring.
Mountain waves
The third main type of lift used by glider pilots is the lee waves that occur near mountains. The obstruction to the airflow can generate standing waves with alternating areas of lift and sink. The top of each wave peak is often marked by lenticular cloud formations.
Convergence
Another form of lift results from the convergence of air masses, as with a sea-breeze front. More exotic forms of lift are the polar vortices which the Perlan Project hopes to use to soar to great altitudes. A rare phenomenon known as Morning Glory has also been used by glider pilots in Australia.
Performance
With each generation of materials and with the improvements in aerodynamics, the performance of hang gliders has increased. One measure of performance is the glide ratio. For example, a ratio of 12:1 means that in smooth air a glider can travel forward 12 metres while only losing 1 metre of altitude.
Some performance figures as of 2006:
Topless gliders (no kingpost): glide ratio ~17:1, speed range ~, best glide at
Rigid wings: glide ratio ~20:1, speed range ~, best glide at ~.
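As a rough illustration of what a glide ratio means in practice, the sketch below converts a release height into still-air glide distance; the 1,000 m height is a made-up example value, and real flights depend far more on lift and sink along the way than on still-air performance.

```python
def still_air_distance_km(height_m: float, glide_ratio: float) -> float:
    """Horizontal distance covered in still air from a given height, ignoring wind."""
    return height_m * glide_ratio / 1000.0

# Glide ratios taken from the figures quoted above; the height is hypothetical.
for name, ratio in [("topless flex-wing", 17.0), ("rigid wing", 20.0)]:
    print(f"{name}: {still_air_distance_km(1000, ratio):.1f} km from 1,000 m")
```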
Ballast
The extra weight provided by ballast is advantageous if the lift is likely to be strong. Although heavier gliders have a slight disadvantage when climbing in rising air, they achieve a higher speed at any given glide angle. This is an advantage in strong conditions when the gliders spend only little time climbing in thermals.
Stability and equilibrium
Because hang gliders are most often used for recreational flying, a premium is placed on gentle behaviour, especially at the stall and natural pitch stability. The wing loading must be very low in order to allow the pilot to run fast enough to get above stall speed. Unlike a traditional aircraft with an extended fuselage and empennage for maintaining stability, hang gliders rely on the natural stability of their flexible wings to return to equilibrium in yaw and pitch. Roll stability is generally set to be near neutral. In calm air, a properly designed wing will maintain balanced trimmed flight with little pilot input. The flex wing pilot is suspended beneath the wing by a strap attached to their harness. The pilot lies prone (sometimes supine) within a large, triangular, metal control frame. Controlled flight is achieved by the pilot pushing and pulling on this control frame, thus shifting their weight fore or aft, and right or left in coordinated maneuvers.
Roll
Most flexible wings are set up with near neutral roll due to sideslip (anhedral effect). In the roll axis, the pilot shifts their body mass using the wing control bar, applying a rolling moment directly to the wing. The flexible wing is built to flex differentially across the span in response to the pilot applied roll moment. For example, if the pilot shifts their weight to the right, the right wing trailing edge flexes up more than the left, creating dissimilar lift that rolls the glider to the right.
Yaw
The yaw axis is stabilized through the backward sweep of the wings. The swept planform, when yawed out of the relative wind, creates more lift on the advancing wing and also more drag, stabilizing the wing in yaw. If one wing advances ahead of the other, it presents more area to the wind and causes more drag on that side. This causes the advancing wing to go slower and to retreat back. The wing is at equilibrium when the aircraft is travelling straight and both wings present the same amount of area to the wind.
Pitch
The pitch control response is direct and very efficient. It is partially stabilized by the washout combined with the sweep of the wings, which results in a different angle of attack of the rear most lifting surfaces of the glider. The wing centre of gravity is close to the hang point and, at the trim speed, the wing will fly "hands off" and return to trim after being disturbed. The weight-shift control system only works when the wing is positively loaded (right side up). Positive pitching devices such as reflex lines or washout rods are employed to maintain a minimum safe amount of washout when the wing is unloaded or even negatively loaded (upside down). Flying faster than trim speed is accomplished by moving the pilot's weight forward in the control frame; flying slower by shifting the pilot's weight aft (pushing out).
Furthermore, the fact that the wing is designed to bend and flex, provides favourable dynamics analogous to a spring suspension. This provides a gentler flying experience than a similarly sized rigid-winged hang glider.
Instruments
To maximize a pilot's understanding of how the hang glider is flying, most pilots carry flight instruments. The most basic being a variometer and altimeter—often combined. Some more advanced pilots also carry airspeed indicators and radios. When flying in competition or cross country, pilots often also carry maps and/or GPS units. Hang gliders do not have instrument panels as such, so all the instruments are mounted to the control frame of the glider or occasionally strapped to the pilot's forearm.
Variometer
Gliding pilots are able to sense the acceleration forces when they first hit a thermal, but have difficulty gauging constant motion. Thus it is difficult to detect the difference between constantly rising air and constantly sinking air. A variometer is a very sensitive vertical speed indicator. The variometer indicates climb rate or sink rate with audio signals (beeps) and/or a visual display. These units are generally electronic, vary in sophistication, and often include an altimeter and an airspeed indicator. More advanced units often incorporate a barograph for recording flight data and/or a built-in GPS. The main purpose of a variometer is in helping a pilot find and stay in the 'core' of a thermal to maximize height gain, and conversely indicating when he or she is in sinking air and needs to find rising air. Variometers are sometimes capable of electronic calculations to indicate the optimal speed to fly for given conditions. The MacCready theory answers the question on how fast a pilot should cruise between thermals, given the average lift the pilot expects in the next thermal climb and the amount of lift or sink he encounters in cruise mode. Some electronic variometers make the calculations automatically, allowing for factors such as the glider's theoretical performance (glide ratio), altitude, hook in weight, and wind direction.
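Where the paragraph above mentions variometers that compute an optimal speed to fly, the sketch below shows the core MacCready calculation in a minimal form. The quadratic glide polar and its coefficients are made-up placeholder values rather than data for any real glider, and production instruments additionally account for wind, ballast and measured polars.

```python
# Minimal MacCready speed-to-fly sketch, assuming a quadratic sink polar
# s(v) = A*v^2 + B*v + C (sink rate in m/s, airspeed v in m/s).
A, B, C = 0.01, -0.18, 1.5   # placeholder polar coefficients, not measured data

def sink_rate(v: float) -> float:
    """Sink rate (m/s, positive down) at airspeed v for the assumed polar."""
    return A * v * v + B * v + C

def speed_to_fly(expected_climb: float) -> float:
    """Airspeed that maximizes average cross-country speed for a given expected
    thermal climb rate (classic MacCready construction, solved numerically)."""
    best_v, best_cost = 0.0, float("inf")
    for i in range(80, 251):                 # search 8.0 .. 25.0 m/s in 0.1 m/s steps
        v = i / 10.0
        # Height cost per metre of glide, counting altitude that must be regained
        # at the expected climb rate; minimizing it maximizes average speed.
        cost = (expected_climb + sink_rate(v)) / v
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v

for mc in (0.0, 1.0, 2.0, 3.0):              # expected climb in m/s
    print(f"MacCready {mc:.1f} m/s -> fly at about {speed_to_fly(mc):.1f} m/s")
```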
Radio
Pilots sometimes use 2-way radios for training purposes, for communicating with other pilots in the air, and with their ground crew when traveling on cross-country flights.
One type of radio used are PTT (push-to-talk) handheld transceivers, operating in VHF FM. Usually a microphone is worn on the head or incorporated in the helmet, and the PTT switch is either fixed to the outside of the helmet, or strapped to a finger. Operating a VHF band radio without an appropriate license is illegal in most countries that have regulated airwaves (including United States, Canada, Brazil, etc.), so additional information must be obtained with the national or local Hang Gliding association or with the competent radio regulatory authority.
As aircraft operating in airspace occupied by other aircraft, hang glider pilots may also use the appropriate type of radio (i.e. the aircraft transceiver into Aero Mobile Service VHF band). It can, of course, be fitted with a PTT switch to a finger and speakers inside the helmet. The use of aircraft transceivers is subject to regulations specific to the use in the air such as frequencies restrictions, but has several advantages over FM (i.e. frequency modulated) radios used in other services. First is the great range it has (without repeaters) because of its amplitude modulation (i.e. AM). Second is the ability to contact, inform and be informed directly by other aircraft pilots of their intentions thereby improving collision avoidance and increasing safety. Third is to allow greater liberty regarding distance flights in regulated airspaces, in which the aircraft radio is normally a legal requirement. Fourth is the universal emergency frequency monitored by all other users and satellites and used in case of emergency or impending emergency.
GPS
GPS (global positioning system) can be used to aid in navigation. For competitions, it is used to verify the contestant reached the required check-points.
Records
Records are sanctioned by the FAI. The world record for straight distance is held by Dustin B. Martin, with a distance of in 2012, originating from Zapata, Texas.
Judy Leden (GBR) holds the altitude record for a balloon-launched hang glider: 11,800 m (38,800 ft) at Wadi Rum, Jordan on 25 October 1994. Leden also holds the gain of height record: 3,970 m (13,025 ft), set in 1992.
The altitude records for balloon-launched hang gliders:
Competition
Competitions started with "flying as long as possible" and spot landings. With increasing performance, cross-country flying has largely replaced them. Usually two to four waypoints have to be passed with a landing at a goal. In the late 1990s low-power GPS units were introduced and have completely replaced photographs of the goal. Every two years there is a world championship. The Rigid and Women's World Championship in 2006 was hosted by Quest Air in Florida. Big Spring, Texas hosted the 2007 World Championship. Hang gliding is also one of the competition categories in World Air Games organized by Fédération Aéronautique Internationale (World Air Sports Federation - FAI), which maintains a chronology of the FAI World Hang Gliding Championships.
Other forms of competition include Aerobatic competitions, and Speedgliding competitions, wherein the goal is to descend from a mountain as fast as possible while passing through various gates in a manner similar to down-hill skiing.
Classes
For competitive purposes, there are three classes of hang glider:
Class 1 The flexible wing hang glider, having flight controlled by virtue of the shifted weight of the pilot. This is not a paraglider. Class 1 hang gliders sold in the United States are usually rated by the Hang Gliders Manufacturers' Association.
Class 5 The rigid wing hang glider, having flight controlled by spoilers, typically on top of the wing. In both flexible and rigid wings the pilot hangs below the wing without any additional fairing.
Class 2 (designated by the FAI as Sub-Class O-2) where the pilot is integrated into the wing by means of a fairing. These offer the best performance and are the most expensive.
Aerobatics
There are four basic aerobatic maneuvers in a hang glider:
Loop — a maneuver that starts in a wings level dive, climbs, without any rolling, to the apex where the glider is upside down, wings level (heading back where it came from), and then returning to the start altitude and heading, again without rolling, having completed an approximately circular path in the vertical plane.
Spin — A spin is scored from the moment one wing stalls and the glider rotates noticeably into the spin. The entry heading is noted at this point. The glider must remain in the spin for at least 1/2 of a revolution to score any versatility spin points.
Rollover — a maneuver where the apex heading is less than 90° left or right of the entry heading.
Climb over — a maneuver where the apex heading is greater than 90° left or right of the entry heading.
Comparison of hang gliders, paragliders, and gliders
Paragliders and hang gliders are both foot-launched glider aircraft; in both cases the pilot is suspended ("hangs") below the lift surface, but hang gliders have a rigid aluminium frame, while paragliders are entirely flexible and look more similar to a parachute. Gliders and sailplanes are built from composite materials and may have wheels, propellers, and engines.
Hang gliding in media
1971: Early rock video featuring hang gliding, Sweeney's Glider, is produced. It was made by Fitz Weatherby and featured Terry Sweeney.
1973: First film made on the sport of hang gliding, Hang Gliding: The New Freedom, directed by Ron Underwood. It was distributed by Paramount Communications, a short film division of Paramount Pictures.
See also
References
Notes
Bibliography
External links
HangGlider.Org
Adventure travel
Aircraft configurations
Articles containing video clips
Glider aircraft
Individual sports | Hang gliding | [
"Engineering"
] | 5,218 | [
"Aircraft configurations",
"Aerospace engineering"
] |
13,860 | https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach%20theorem | The Hahn–Banach theorem is a central tool in functional analysis.
It allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry.
History
The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s.
The special case of the theorem for the space of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly, and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz.
The first Hahn–Banach theorem was proved by Eduard Helly in 1912 who showed that certain linear functionals defined on a subspace of a certain type of normed space () had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction.
The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so.
Riesz and Helly solved the problem for certain classes of spaces (such as and ) where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem:
(The vector problem) Given a collection (f_i) of bounded linear functionals on a normed space X and a collection of scalars (c_i), determine if there is an x in X such that f_i(x) = c_i for all i.
If X happens to be a reflexive space, then to solve the vector problem it suffices to solve the following dual problem:
(The functional problem) Given a collection (x_i) of vectors in a normed space X and a collection of scalars (c_i), determine if there is a bounded linear functional f on X such that f(x_i) = c_i for all i.
Riesz went on to define the L^p spaces in 1910 and the ℓ^p sequence spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces and in 1912, Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. The following theorem states the general functional problem and characterizes its solution.
The Hahn–Banach theorem can be deduced from the above theorem. If X is reflexive then this theorem solves the vector problem.
Hahn–Banach theorem
A real-valued function f defined on a subset M of X is said to be dominated (above) by a function p : X → ℝ if f(m) ≤ p(m) for every m in M.
Hence the reason why the following version of the Hahn–Banach theorem is called the dominated extension theorem.
The theorem remains true if the requirements on are relaxed to require only that be a convex function:
A function f : X → ℝ is convex and satisfies f(0) ≤ 0 if and only if f(ax + by) ≤ a f(x) + b f(y) for all vectors x, y and all non-negative real numbers a and b such that a + b ≤ 1. Every sublinear function is a convex function.
On the other hand, if is convex with then the function defined by is positively homogeneous
(because for all and one has ), hence, being convex, it is sublinear. It is also bounded above by and satisfies for every linear functional So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals.
If F is linear then F ≤ p if and only if −p(−x) ≤ F(x) ≤ p(x) for all x,
which is the (equivalent) conclusion that some authors write instead of F ≤ p.
It follows that if p is also symmetric, meaning that p(−x) = p(x) holds for all x, then F ≤ p if and only if |F| ≤ p.
Every norm is a seminorm and both are symmetric balanced sublinear functions. A sublinear function is a seminorm if and only if it is a balanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. The identity function x ↦ x on ℝ is an example of a sublinear function that is not a seminorm.
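A quick check of that last example, spelled out here for convenience (a standard verification, not taken from the source):

```latex
% f(x) = x on the real line is subadditive and positively homogeneous,
\[
f(x + y) = x + y = f(x) + f(y), \qquad f(tx) = t\,f(x) \quad (t \ge 0),
\]
% yet it is neither nonnegative nor symmetric,
\[
f(-1) = -1 < 0 \quad\text{and}\quad f(-1) \neq f(1),
\]
% so it is sublinear but not a seminorm.
```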
For complex or real vector spaces
The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces.
The theorem remains true if the requirements on are relaxed to require only that for all and all scalars and satisfying
This condition holds if and only if is a convex and balanced function satisfying or equivalently, if and only if it is convex, satisfies and for all and all unit length scalars
A complex-valued functional f is said to be dominated by p if |f(x)| ≤ p(x) for all x in the domain of f.
With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly:
Hahn–Banach dominated extension theorem: If is a seminorm defined on a real or complex vector space then every dominated linear functional defined on a vector subspace of has a dominated linear extension to all of In the case where is a real vector space and is merely a convex or sublinear function, this conclusion will remain true if both instances of "dominated" (meaning ) are weakened to instead mean "dominated " (meaning ).
Proof
The following observations allow the Hahn–Banach theorem for real vector spaces to be applied to (complex-valued) linear functionals on complex vector spaces.
Every linear functional F on a complex vector space is completely determined by its real part Re F through the formula F(x) = Re F(x) − i Re F(ix) for all x,
and moreover, if ‖·‖ is a norm on X then their dual norms are equal: ‖F‖ = ‖Re F‖.
In particular, a linear functional on X extends another one defined on M ⊆ X if and only if their real parts are equal on M (in other words, a linear functional F extends f if and only if Re F extends Re f).
The real part of a linear functional on X is always a real-linear functional (meaning that it is linear when X is considered as a real vector space), and if R is a real-linear functional on a complex vector space then x ↦ R(x) − i R(ix) defines the unique linear functional on X whose real part is R.
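The only non-obvious point in that correspondence is that the functional built from R is genuinely complex-linear; the short computation below is a standard check, included for convenience rather than taken from the source.

```latex
% For real-linear R and F(x) := R(x) - i\,R(ix), additivity and real-homogeneity
% are immediate; compatibility with multiplication by i follows from
\begin{aligned}
F(ix)   &= R(ix) - i\,R(i \cdot ix) = R(ix) - i\,R(-x) = R(ix) + i\,R(x), \\
i\,F(x) &= i\,R(x) - i^{2} R(ix)    = R(ix) + i\,R(x),
\end{aligned}
% so F(ix) = i\,F(x), which together with the above gives complex-linearity.
```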
If F is a linear functional on a (complex or real) vector space X and if p is a seminorm on X, then |F| ≤ p if and only if Re F ≤ p.
Stated in simpler language, a linear functional is dominated by a seminorm p if and only if its real part is dominated above by p.
The proof above shows that when is a seminorm then there is a one-to-one correspondence between dominated linear extensions of and dominated real-linear extensions of the proof even gives a formula for explicitly constructing a linear extension of from any given real-linear extension of its real part.
Continuity
A linear functional F on a topological vector space is continuous if and only if this is true of its real part Re F; if the domain is a normed space then ‖F‖ = ‖Re F‖ (where one side is infinite if and only if the other side is infinite).
Assume X is a topological vector space and p is a sublinear function on X.
If is a continuous sublinear function that dominates a linear functional then is necessarily continuous. Moreover, a linear functional is continuous if and only if its absolute value (which is a seminorm that dominates ) is continuous. In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function.
Proof
The Hahn–Banach theorem for real vector spaces ultimately follows from Helly's initial result for the special case where the linear functional is extended from to a larger vector space in which has codimension
This lemma remains true if is merely a convex function instead of a sublinear function.
Assume that is convex, which means that for all and Let and be as in the lemma's statement. Given any and any positive real the positive real numbers and sum to so that the convexity of on guarantees
and hence
thus proving that which after multiplying both sides by becomes
This implies that the values defined by
are real numbers that satisfy As in the above proof of the one–dimensional dominated extension theorem above, for any real define by
It can be verified that if then where follows from when (respectively, follows from when ).
The lemma above is the key step in deducing the dominated extension theorem from Zorn's lemma.
When has countable codimension, then using induction and the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case uses Zorn's lemma although the strictly weaker ultrafilter lemma (which is equivalent to the compactness theorem and to the Boolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved using Tychonoff's theorem for compact Hausdorff spaces (which is also equivalent to the ultrafilter lemma)
The Mizar project has completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file.
Continuous extension theorem
The Hahn–Banach theorem can be used to guarantee the existence of continuous linear extensions of continuous linear functionals.
In category-theoretic terms, the underlying field of the vector space is an injective object in the category of locally convex vector spaces.
On a normed (or seminormed) space, a linear extension F of a bounded linear functional f is said to be norm-preserving if it has the same dual norm as the original functional: ‖F‖ = ‖f‖.
Because of this terminology, the second part of the above theorem is sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem. Explicitly:
Proof of the continuous extension theorem
The following observations allow the continuous extension theorem to be deduced from the Hahn–Banach theorem.
The absolute value of a linear functional is always a seminorm. A linear functional f on a topological vector space X is continuous if and only if its absolute value |f| is continuous, which happens if and only if there exists a continuous seminorm p on X such that |f| ≤ p on the domain of f.
If X is a locally convex space then this statement remains true when the linear functional f is defined on a vector subspace of X.
Proof for normed spaces
A linear functional f on a normed space is continuous if and only if it is bounded, which means that its dual norm
‖f‖ = sup { |f(x)| : ‖x‖ ≤ 1, x in the domain of f }
is finite, in which case |f(x)| ≤ ‖f‖ ‖x‖ holds for every point x in its domain.
Moreover, if c ≥ 0 is such that |f(x)| ≤ c ‖x‖ for all x in the functional's domain, then necessarily ‖f‖ ≤ c.
If F is a linear extension of a linear functional f then their dual norms always satisfy ‖f‖ ≤ ‖F‖,
so that equality ‖f‖ = ‖F‖ is equivalent to ‖F‖ ≤ ‖f‖, which holds if and only if |F(x)| ≤ ‖f‖ ‖x‖ for every point x in the extension's domain.
This can be restated in terms of the function ‖f‖ ‖·‖ : X → ℝ defined by x ↦ ‖f‖ ‖x‖, which is always a seminorm:
A linear extension of a bounded linear functional f is norm-preserving if and only if the extension is dominated by the seminorm ‖f‖ ‖·‖.
Applying the Hahn–Banach theorem to f with this seminorm thus produces a dominated linear extension whose norm is (necessarily) equal to that of f, which proves the theorem:
Non-locally convex spaces
The continuous extension theorem might fail if the topological vector space (TVS) X is not locally convex. For example, for 0 < p < 1, the Lebesgue space L^p([0, 1]) is a complete metrizable TVS (an F-space) that is not locally convex (in fact, its only convex open subsets are itself and the empty set), and the only continuous linear functional on it is the constant 0 function. Since L^p([0, 1]) is Hausdorff, every finite-dimensional vector subspace is linearly homeomorphic to a Euclidean space (by F. Riesz's theorem), and so every non-zero linear functional on such a subspace is continuous, but none has a continuous linear extension to all of L^p([0, 1]).
However, it is possible for a TVS to not be locally convex but nevertheless have enough continuous linear functionals that its continuous dual space separates points; for such a TVS, a continuous linear functional defined on a vector subspace might have a continuous linear extension to the whole space.
If the TVS is not locally convex then there might not exist any continuous seminorm (not just on ) that dominates in which case the Hahn–Banach theorem can not be applied as it was in the above proof of the continuous extension theorem.
However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: If is any TVS (not necessarily locally convex), then a continuous linear functional defined on a vector subspace has a continuous linear extension to all of if and only if there exists some continuous seminorm on that dominates Specifically, if given a continuous linear extension then is a continuous seminorm on that dominates and conversely, if given a continuous seminorm on that dominates then any dominated linear extension of to (the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension.
Geometric Hahn–Banach (the Hahn–Banach separation theorems)
The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets. This sort of argument appears widely in convex geometry, optimization theory, and economics. Lemmas to this end derived from the original Hahn–Banach theorem are known as the Hahn–Banach separation theorems.
They are generalizations of the hyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional space can be separated by some affine hyperplane, which is a fiber (level set) of the form where is a non-zero linear functional and is a scalar.
When the convex sets have additional properties, such as being open or compact for example, then the conclusion can be substantially strengthened:
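For orientation, a standard formulation of these strengthened separation statements is sketched below; it is not a verbatim reconstruction of the omitted displays, and the notation (A, B, X, f, s, t) is introduced here for illustration only.

```latex
% A, B nonempty disjoint convex subsets of a real TVS X; f a continuous linear functional.
\text{If } A \text{ is open:}\quad
  \exists\, f \ne 0,\ s \in \mathbb{R}:\quad f(a) < s \le f(b)\ \ \forall\, a \in A,\ b \in B.
% (separation by a closed hyperplane, with A on the open side)
\text{If } X \text{ is locally convex, } A \text{ compact and } B \text{ closed:}\quad
  \exists\, f,\ s < t:\quad f(a) < s < t < f(b)\ \ \forall\, a \in A,\ b \in B.
% (strict separation)
```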
The following important corollary is known as the Geometric Hahn–Banach theorem or Mazur's theorem (also known as Ascoli–Mazur theorem). It follows from the first bullet above and the convexity of
Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals.
Supporting hyperplanes
Since points are trivially convex, geometric Hahn–Banach implies that functionals can detect the boundary of a set. In particular, let be a real topological vector space and be convex with If then there is a functional that is vanishing at but supported on the interior of
Call a normed space smooth if at each point in its unit ball there exists a unique closed hyperplane supporting the unit ball at that point. Köthe showed in 1983 that a normed space is smooth at a point if and only if the norm is Gateaux differentiable at that point.
Balanced or disked neighborhoods
Let be a convex balanced neighborhood of the origin in a locally convex topological vector space and suppose is not an element of Then there exists a continuous linear functional on such that
Applications
The Hahn–Banach theorem is the first sign of an important philosophy in functional analysis: to understand a space, one should understand its continuous functionals.
For example, linear subspaces are characterized by functionals: if a normed vector space has a linear subspace (not necessarily closed) and an element that is not in the closure of that subspace, then there exists a continuous linear map on the whole space that vanishes on the subspace, takes the value 1 at the given element, and has norm equal to the reciprocal of the element's distance to the subspace. (To see this, note that the distance to the subspace is a sublinear function.) Moreover, for any element of the space there exists a continuous linear map of norm at most one that sends that element to its norm. This implies that the natural injection from a normed space into its double dual is isometric.
That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space is Hausdorff or locally convex. However, suppose is a topological vector space, not necessarily Hausdorff or locally convex, but with a nonempty, proper, convex, open set . Then geometric Hahn–Banach implies that there is a hyperplane separating from any other point. In particular, there must exist a nonzero functional on — that is, the continuous dual space is non-trivial. Considering with the weak topology induced by then becomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new space separates points.
Thus with this weak topology becomes Hausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces.
Partial differential equations
The Hahn–Banach theorem is often useful when one wishes to apply the method of a priori estimates. Suppose that we wish to solve the linear differential equation for with given in some Banach space . If we have control on the size of in terms of and we can think of as a bounded linear functional on some suitable space of test functions then we can view as a linear functional by adjunction: At first, this functional is only defined on the image of but using the Hahn–Banach theorem, we can try to extend it to the entire codomain . The resulting functional is often defined to be a weak solution to the equation.
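A schematic version of this argument follows; all notation (the operator L and its adjoint L*, the data f, the functional ℓ and the candidate solution u) is introduced here purely for illustration, not taken from the text above.

```latex
% Schematic a priori estimate argument (illustrative notation).
% Goal: solve L u = f weakly. Suppose the a priori estimate
|\langle f, \varphi \rangle| \;\le\; C\, \| L^{*} \varphi \|
  \qquad \text{for all test functions } \varphi
% holds. Define a linear functional on the subspace \{ L^{*}\varphi \} by
\ell(L^{*}\varphi) := \langle f, \varphi \rangle ,
% which is well defined and bounded thanks to the estimate. The Hahn--Banach theorem
% extends \ell to the whole codomain; viewing the extension as an element u of the dual,
\langle u, L^{*}\varphi \rangle = \langle f, \varphi \rangle \quad \text{for all } \varphi,
% which is exactly the statement that u is a weak solution of L u = f.
```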
Characterizing reflexive Banach spaces
Example from Fredholm theory
To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem.
The above result may be used to show that every closed vector subspace of is complemented because any such space is either finite dimensional or else TVS–isomorphic to
Generalizations
General template
There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows:
is a sublinear function (possibly a seminorm) on a vector space is a vector subspace of (possibly closed), and is a linear functional on satisfying on (and possibly some other conditions). One then concludes that there exists a linear extension of to such that on (possibly with additional properties).
For seminorms
So for example, suppose that is a bounded linear functional defined on a vector subspace of a normed space so its operator norm is a non-negative real number.
Then the linear functional's absolute value is a seminorm on and the map defined by is a seminorm on that satisfies on
The Hahn–Banach theorem for seminorms guarantees the existence of a seminorm that is equal to on (since ) and is bounded above by everywhere on (since ).
Geometric separation
Maximal dominated linear extension
If is a singleton set (where is some vector) and if is such a maximal dominated linear extension of then
Vector valued Hahn–Banach
Invariant Hahn–Banach
A set of maps is (with respect to function composition ) if for all
Say that a function defined on a subset of is if and on for every
This theorem may be summarized:
Every -invariant continuous linear functional defined on a vector subspace of a normed space has a -invariant Hahn–Banach extension to all of
For nonlinear functions
The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem.
The following theorem characterizes when scalar function on (not necessarily linear) has a continuous linear extension to all of
Converse
Let be a topological vector space. A vector subspace of has the extension property if any continuous linear functional on can be extended to a continuous linear functional on , and we say that has the Hahn–Banach extension property (HBEP) if every vector subspace of has the extension property.
The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex. On the other hand, a vector space of uncountable dimension endowed with the finest vector topology is a topological vector space with the Hahn–Banach extension property that is neither locally convex nor metrizable.
A vector subspace of a TVS has the separation property if for every element of such that there exists a continuous linear functional on such that and for all Clearly, the continuous dual space of a TVS separates points on if and only if has the separation property. In 1992, Kakol proved that for any infinite-dimensional vector space there exist TVS topologies on it that do not have the HBEP, despite having enough continuous linear functionals for the continuous dual space to separate points. However, in a TVS, every vector subspace has the extension property if and only if every vector subspace has the separation property.
Relation to axiom of choice and other theorems
The proof of the Hahn–Banach theorem for real vector spaces (HB) commonly uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It was discovered by Łoś and Ryll-Nardzewski and independently by Luxemburg that HB can be proved using the ultrafilter lemma (UL), which is equivalent (under ZF) to the Boolean prime ideal theorem (BPI). BPI is strictly weaker than the axiom of choice and it was later shown that HB is strictly weaker than BPI.
The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem, which is another foundational theorem in functional analysis. Although the Banach–Alaoglu theorem implies HB, it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB).
However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces.
The Hahn–Banach theorem is also equivalent to the following statement:
(∗): On every Boolean algebra there exists a "probability charge", that is, a non-constant finitely additive map from the Boolean algebra into [0, 1].
(BPI is equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.)
In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set. Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox.
For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics.
See also
Notes
Proofs
References
Bibliography
Reed, Michael and Simon, Barry, Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis, Section III.3. Academic Press, San Diego, 1980. .
Tao, Terence, The Hahn–Banach theorem, Menger's theorem, and Helly's theorem
Wittstock, Gerd, Ein operatorwertiger Hahn-Banach Satz, J. of Functional Analysis 40 (1981), 127–150
Zeidler, Eberhard, Applied Functional Analysis: main principles and their applications, Springer, 1995.
Articles containing proofs
Linear algebra
Linear functionals
Theorems in functional analysis
Topological vector spaces | Hahn–Banach theorem | [
"Mathematics"
] | 5,010 | [
"Theorems in mathematical analysis",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Theorems in functional analysis",
"Linear algebra",
"Articles containing proofs",
"Algebra"
] |
13,885 | https://en.wikipedia.org/wiki/High-density%20lipoprotein | High-density lipoprotein (HDL) is one of the five major groups of lipoproteins. Lipoproteins are complex particles composed of multiple proteins which transport all fat molecules (lipids) around the body within the water outside cells. They are typically composed of 80–100 proteins per particle (organized by one, two or three ApoA). HDL particles enlarge while circulating in the blood, aggregating more fat molecules and transporting up to hundreds of fat molecules per particle.
Overview
Lipoproteins are divided into five subgroups, by density/size (an inverse relationship), which also correlates with function and incidence of cardiovascular events. Unlike the larger lipoprotein particles, which deliver fat molecules to cells, HDL particles remove fat molecules from cells. The lipids carried include cholesterol, phospholipids, and triglycerides; the amounts of each are variable.
Increasing concentrations of HDL particles are associated with decreasing accumulation of atherosclerosis within the walls of arteries, reducing the risk of sudden plaque ruptures, cardiovascular disease, stroke and other vascular diseases. HDL particles are commonly referred to as "good cholesterol", because they transport fat molecules out of artery walls, reduce macrophage accumulation, and thus help prevent or even regress atherosclerosis. Higher HDL-C may not necessarily be protective against cardiovascular disease and may even be harmful in extremely high quantities, with an increased cardiovascular risk, especially in hypertensive patients.
Testing
Because of the high cost of directly measuring HDL and LDL (low-density lipoprotein) protein particles, blood tests are commonly performed for the surrogate value, HDL-C, i.e. the cholesterol associated with ApoA-1/HDL particles. In healthy individuals, about 30% of blood cholesterol, along with other fats, is carried by HDL. This is often contrasted with the amount of cholesterol estimated to be carried within low-density lipoprotein particles, LDL, and called LDL-C. HDL particles remove fats and cholesterol from cells, including within artery wall atheroma, and transport it back to the liver for excretion or re-utilization; thus the cholesterol carried within HDL particles (HDL-C) is sometimes called "good cholesterol" (despite being the same as cholesterol in LDL particles). Those with higher levels of HDL-C tend to have fewer problems with cardiovascular diseases, while those with low HDL-C levels (especially less than 40 mg/dL or about 1 mmol/L) have increased rates for heart disease. Higher native HDL levels are correlated with lowered risk of cardiovascular disease in healthy people.
The remainder of the serum cholesterol after subtracting the HDL is the non-HDL cholesterol. The concentration of these other components, which may cause atheroma, is known as the non-HDL-C. This is now preferred to LDL-C as a secondary marker as it has been shown to be a better predictor and it is more easily calculated.
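Because non-HDL cholesterol is obtained by simple subtraction, it can be computed directly from a standard lipid panel. The minimal sketch below assumes values reported in mg/dL; the function and variable names are illustrative, not from any clinical library.

```python
def non_hdl_cholesterol(total_cholesterol_mg_dl: float, hdl_c_mg_dl: float) -> float:
    """Non-HDL-C: total serum cholesterol minus the HDL-associated cholesterol."""
    return total_cholesterol_mg_dl - hdl_c_mg_dl

# Example: total cholesterol 200 mg/dL with HDL-C of 55 mg/dL gives 145 mg/dL of non-HDL-C.
print(non_hdl_cholesterol(200, 55))  # 145
```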
Structure and function
With a size ranging from 5 to 17 nm, HDL is the smallest of the lipoprotein particles. It is the densest because it contains the highest proportion of protein to lipids. Its most abundant apolipoproteins are apo A-I and apo A-II. A rare genetic variant, ApoA-1 Milano, has been documented to be far more effective in both protecting against and regressing arterial disease, atherosclerosis.
The liver synthesizes these lipoproteins as complexes of apolipoproteins and phospholipid, which resemble cholesterol-free flattened spherical lipoprotein particles, whose NMR structure was published; the complexes are capable of picking up cholesterol, carried internally, from cells by interaction with the ATP-binding cassette transporter A1 (ABCA1). A plasma enzyme called lecithin-cholesterol acyltransferase (LCAT) converts the free cholesterol into cholesteryl ester (a more hydrophobic form of cholesterol), which is then sequestered into the core of the lipoprotein particle, eventually causing the newly synthesized HDL to assume a spherical shape. HDL particles increase in size as they circulate through the blood and incorporate more cholesterol and phospholipid molecules from cells and other lipoproteins, such as by interaction with the ABCG1 transporter and the phospholipid transport protein (PLTP).
HDL transports cholesterol mostly to the liver or steroidogenic organs such as adrenals, ovary, and testes by both direct and indirect pathways. HDL is removed by HDL receptors such as scavenger receptor BI (SR-BI), which mediate the selective uptake of cholesterol from HDL. In humans, probably the most relevant pathway is the indirect one, which is mediated by cholesteryl ester transfer protein (CETP). This protein exchanges triglycerides of VLDL against cholesteryl esters of HDL. As the result, VLDLs are processed to LDL, which are removed from the circulation by the LDL receptor pathway. The triglycerides are not stable in HDL, but are degraded by hepatic lipase so that, finally, small HDL particles are left, which restart the uptake of cholesterol from cells.
The cholesterol delivered to the liver is excreted into the bile and, hence, intestine either directly or indirectly after conversion into bile acids. Delivery of HDL cholesterol to adrenals, ovaries, and testes is important for the synthesis of steroid hormones.
Several steps in the metabolism of HDL can participate in the transport of cholesterol from lipid-laden macrophages of atherosclerotic arteries, termed foam cells, to the liver for secretion into the bile. This pathway has been termed reverse cholesterol transport and is considered as the classical protective function of HDL toward atherosclerosis.
HDL carries many lipid and protein species, several of which have very low concentrations but are biologically very active. For example, HDL and its protein and lipid constituents help to inhibit oxidation, inflammation, activation of the endothelium, coagulation, and platelet aggregation. All these properties may contribute to the ability of HDL to protect from atherosclerosis, and it is not yet known which are the most important. In addition, a small subfraction of HDL lends protection against the protozoan parasite Trypanosoma brucei brucei. This HDL subfraction, termed trypanosome lytic factor (TLF), contains specialized proteins that, while very active, are unique to the TLF molecule.
In the stress response, serum amyloid A, which is one of the acute-phase proteins and an apolipoprotein, is produced under the stimulation of cytokines (interleukin 1, interleukin 6) and of cortisol from the adrenal cortex, and is carried to the damaged tissue incorporated into HDL particles. At the inflammation site, it attracts and activates leukocytes. In chronic inflammations, its deposition in the tissues manifests itself as amyloidosis.
It has been postulated that the concentration of large HDL particles more accurately reflects protective action, as opposed to the concentration of total HDL particles. This ratio of large HDL to total HDL particles varies widely and is measured only by more sophisticated lipoprotein assays using either electrophoresis (the original method developed in the 1970s) or newer NMR spectroscopy methods (See also nuclear magnetic resonance and spectroscopy), developed in the 1990s.
Subfractions
Five subfractions of HDL have been identified. From largest (and most effective in cholesterol removal) to smallest (and least effective), the types are 2a, 2b, 3a, 3b, and 3c.
Epidemiology
Men tend to have noticeably lower HDL concentrations, with smaller size and lower cholesterol content, than women. Men also have a greater incidence of atherosclerotic heart disease. Studies confirm the fact that HDL has a buffering role in balancing the effects of the hypercoagulable state in type 2 diabetics and decreases the high risk of cardiovascular complications in these patients. Also, the results obtained in this study revealed that there was a significant negative correlation between HDL and activated partial thromboplastin time (APTT).
Epidemiological studies have shown that high concentrations of HDL (over 60 mg/dL) have protective value against cardiovascular diseases such as ischemic stroke and myocardial infarction. Low concentrations of HDL (below 40 mg/dL for men, below 50 mg/dL for women) increase the risk for atherosclerotic diseases.
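Using only the cut-offs quoted in this paragraph (over 60 mg/dL regarded as protective; below 40 mg/dL for men or 50 mg/dL for women associated with increased risk), a rough categorisation could be sketched as follows. This is an illustration of the quoted thresholds, not clinical guidance.

```python
def hdl_category(hdl_c_mg_dl: float, sex: str) -> str:
    """Rough HDL-C categorisation (mg/dL) using the thresholds cited above."""
    low_cutoff = 40 if sex == "male" else 50
    if hdl_c_mg_dl < low_cutoff:
        return "low (increased atherosclerotic risk)"
    if hdl_c_mg_dl > 60:
        return "high (associated with protective value)"
    return "intermediate"

print(hdl_category(35, "male"))    # low (increased atherosclerotic risk)
print(hdl_category(65, "female"))  # high (associated with protective value)
```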
Data from the landmark Framingham Heart Study showed that, for a given level of LDL, the risk of heart disease increases 10-fold as the HDL varies from high to low. Conversely, for a fixed level of HDL, the risk increases 3-fold as LDL varies from low to high.
Even people with very low LDL levels achieved by statin treatment are exposed to increased risk if their HDL levels are not high enough.
Estimating HDL via associated cholesterol
Clinical laboratories formerly measured HDL cholesterol by separating other lipoprotein fractions using either ultracentrifugation or chemical precipitation with divalent ions such as Mg2+, then coupling the products of a cholesterol oxidase reaction to an indicator reaction. The reference method still uses a combination of these techniques. Most laboratories now use automated homogeneous analytical methods in which lipoproteins containing apo B are blocked using antibodies to apo B, then a colorimetric enzyme reaction measures cholesterol in the non-blocked HDL particles. HPLC can also be used. Subfractions (HDL-2C, HDL-3C) can be measured, but clinical significance of these subfractions has not been determined. The measurement of apo-A reactive capacity can be used to measure HDL cholesterol but is thought to be less accurate.
Recommended ranges
The American Heart Association, NIH and NCEP provide a set of guidelines for fasting HDL levels and risk for heart disease.
High LDL with low HDL level is an additional risk factor for cardiovascular disease.
Measuring HDL concentration and sizes
As technology has reduced costs and clinical trials have continued to demonstrate the importance of HDL, methods for directly measuring HDL concentrations and size (which indicates function) at lower costs have become more widely available and increasingly regarded as important for assessing individual risk for progressive arterial disease and treatment methods.
Electrophoresis measurements
Since the HDL particles have a net negative charge and vary by density and size, ultracentrifugation combined with electrophoresis has been utilized since before 1950 to enumerate the concentration of HDL particles and sort them by size within a specific volume of blood plasma. Larger HDL particles are carrying more cholesterol.
NMR measurements
Concentration and sizes of lipoprotein particles can be estimated using nuclear magnetic resonance fingerprinting.
Optimal total and large HDL concentrations
The HDL particle concentrations are typically categorized by event rate percentiles based on the people participating and being tracked in the MESA trial, a medical research study sponsored by the United States National Heart, Lung, and Blood Institute.
The lowest incidence of atherosclerotic events over time occurs within those with both the highest concentrations of total HDL particles (the top quarter, >75%) and the highest concentrations of large HDL particles. Multiple additional measures, including LDL particle concentrations, small LDL particle concentrations, VLDL concentrations, estimations of insulin resistance and standard cholesterol lipid measurements (for comparison of the plasma data with the estimation methods discussed above) are routinely provided in clinical testing.
Increasing HDL levels
While higher HDL levels are correlated with lower risk of cardiovascular diseases, no medication used to increase HDL has been proven to improve health. As of 2017, numerous lifestyle changes and drugs to increase HDL levels were under study.
HDL lipoprotein particles that bear apolipoprotein C3 are associated with increased, rather than decreased, risk for coronary heart disease.
Diet and exercise
Certain changes in diet and exercise may have a positive impact on raising HDL levels:
Decreased intake of simple carbohydrates.
Aerobic exercise
Weight loss
Avocado consumption
Magnesium supplements raise HDL-C.
Addition of soluble fiber to diet
Consumption of omega-3 fatty acids such as fish oil or flax oil
Increased intake of unsaturated fats
Removal of trans fatty acids from the diet
Most saturated fats increase HDL cholesterol to varying degrees but also raise total and LDL cholesterol.
Recreational drugs
HDL levels can be increased by smoking cessation, or mild to moderate alcohol intake.
Cannabis: in unadjusted analyses, past and current cannabis use was not associated with higher HDL-C levels. A study performed in 4635 patients demonstrated no effect on the HDL-C levels (P=0.78) [the mean (standard error) HDL-C values in control subjects (never used), past users and current users were 53.4 (0.4), 53.9 (0.6) and 53.9 (0.7) mg/dL, respectively].
Exogenous anabolic androgenic steroids, particularly 17α-alkylated anabolic steroids and others administered orally, can reduce HDL-C by 50 percent or more. Other androgen receptor agonists such as selective androgen receptor modulators can also lower HDL. As there is some evidence that the HDL reduction is caused by increased reverse cholesterol transport, it is unknown if AR agonists' HDL-lowering effect is pro- or anti-atherogenic.
Pharmaceutical drugs and niacin
Pharmacological therapy to increase the level of HDL cholesterol includes use of fibrates and niacin. Fibrates have not been proven to have an effect on overall deaths from all causes, despite their effects on lipids.
Niacin (nicotinic acid, a form of vitamin B3) increases HDL by selectively inhibiting hepatic diacylglycerol acyltransferase 2, reducing triglyceride synthesis and VLDL secretion through a receptor HM74 otherwise known as niacin receptor 2 and HM74A / GPR109A, niacin receptor 1.
Pharmacologic (1- to 3-gram/day) niacin doses increase HDL levels by 10–30%, making it the most powerful agent to increase HDL-cholesterol. A randomized clinical trial demonstrated that treatment with niacin can significantly reduce atherosclerosis progression and cardiovascular events. Niacin products sold as "no-flush", i.e. not having side-effects such as "niacin flush", do not, however, contain free nicotinic acid and are therefore ineffective at raising HDL, while products sold as "sustained-release" may contain free nicotinic acid, but "some brands are hepatotoxic"; therefore the recommended form of niacin for raising HDL is the cheapest, immediate-release preparation. Both fibrates and niacin increase artery-toxic homocysteine, an effect that can be counteracted by also consuming a multivitamin with relatively high amounts of the B-vitamins; however, multiple European trials of the most popular B-vitamin cocktails, despite showing an average 30% reduction in homocysteine and no safety problems, have not shown any benefit in reducing cardiovascular event rates. A 2011 extended-release niacin (Niaspan) study was halted early because patients adding niacin to their statin treatment showed no increase in heart health, but did experience an increase in the risk of stroke.
In contrast, while the use of statins is effective against high levels of LDL cholesterol, most have little or no effect in raising HDL cholesterol. Rosuvastatin and pitavastatin, however, have been demonstrated to significantly raise HDL levels.
Lovaza has been shown to increase HDL-C. However, the best evidence to date suggests it has no benefit for primary or secondary prevention of cardiovascular disease.
The PPAR modulator GW501516 has shown a positive effect on HDL-C and an antiatherogenic effect where LDL is an issue. However, research on the drug has been discontinued after it was discovered to cause rapid cancer development in several organs in rats.
See also
Asymmetric dimethylarginine
Cardiovascular disease
Cholesteryl ester storage disease
Endothelium
Lipid profile
Lysosomal acid lipase deficiency
References
Lipid disorders
Cardiology
Lipoproteins | High-density lipoprotein | [
"Chemistry"
] | 3,619 | [
"Lipid biochemistry",
"Lipoproteins"
] |
13,899 | https://en.wikipedia.org/wiki/Harmonic%20oscillator | In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x: F = −kx,
where k is a positive constant.
If F is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency (which does not depend on the amplitude).
If a frictional force (damping) proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can:
Oscillate with a frequency lower than in the undamped case, and an amplitude decreasing with time (underdamped oscillator).
Decay to the equilibrium position, without oscillations (overdamped oscillator).
The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped.
If an external time-dependent force is present, the harmonic oscillator is described as a driven oscillator.
Mechanical examples include pendulums (with small angles of displacement), masses connected to springs, and acoustical systems. Other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is very important in physics, because any mass subject to a force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many manmade devices, such as clocks and radio circuits. They are the source of virtually all sinusoidal vibrations and waves.
Simple harmonic oscillator
A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force F, which pulls the mass in the direction of the point x = 0 and depends only on the position x of the mass and a constant k. Balance of forces (Newton's second law) for the system is F = ma = m(d²x/dt²) = −kx.
Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ),
where ω = √(k/m).
The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period , the time for a single oscillation or its frequency , the number of cycles per unit time. The position at a given time t also depends on the phase φ, which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass m and the force constant k, while the amplitude and phase are determined by the starting position and velocity.
The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximal for zero displacement, while the acceleration is in the direction opposite to the displacement.
The potential energy stored in a simple harmonic oscillator at position x is U = (1/2)kx².
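As a numerical illustration of these expressions; the mass, spring constant, amplitude and phase below are arbitrary example values, not taken from the text.

```python
import math

m, k = 0.5, 200.0          # example mass (kg) and spring constant (N/m)
A, phi = 0.05, 0.0         # example amplitude (m) and phase (rad)
omega = math.sqrt(k / m)   # angular frequency of the simple harmonic oscillator

def x(t):  # position
    return A * math.cos(omega * t + phi)

def v(t):  # velocity (time derivative of position)
    return -A * omega * math.sin(omega * t + phi)

t = 0.1
energy = 0.5 * k * x(t) ** 2 + 0.5 * m * v(t) ** 2  # potential + kinetic
print(omega, x(t), v(t), energy)  # total energy stays constant at 0.5*k*A**2
```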
Damped harmonic oscillator
In real oscillators, friction, or damping, slows the motion of the system. Due to frictional force, the velocity decreases in proportion to the acting frictional force. While in a simple undriven harmonic oscillator the only force acting on the mass is the restoring force, in a damped harmonic oscillator there is in addition a frictional force which is always in a direction to oppose the motion. In many vibrating systems the frictional force Ff can be modeled as being proportional to the velocity v of the object: , where c is called the viscous damping coefficient.
The balance of forces (Newton's second law) for damped harmonic oscillators is then F = −kx − c(dx/dt) = m(d²x/dt²),
which can be rewritten into the form d²x/dt² + 2ζω₀(dx/dt) + ω₀²x = 0,
where
ω₀ = √(k/m) is called the "undamped angular frequency of the oscillator",
ζ = c/(2√(mk)) is called the "damping ratio".
The value of the damping ratio ζ critically determines the behavior of the system. A damped harmonic oscillator can be:
Overdamped (ζ > 1): The system returns (exponentially decays) to steady state without oscillating. Larger values of the damping ratio ζ return to equilibrium more slowly.
Critically damped (ζ = 1): The system returns to steady state as quickly as possible without oscillating (although overshoot can occur if the initial velocity is nonzero). This is often desired for the damping of systems such as doors.
Underdamped (ζ < 1): The system oscillates (with a slightly different frequency than the undamped case) with the amplitude gradually decreasing to zero. The angular frequency of the underdamped harmonic oscillator is given by ω₁ = ω₀√(1 − ζ²); the exponential decay of the underdamped harmonic oscillator is given by λ = ω₀ζ.
The Q factor of a damped oscillator is defined as Q = 2π × (energy stored)/(energy dissipated per cycle).
Q is related to the damping ratio by Q = 1/(2ζ).
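A small sketch of these relations follows; the mass, spring constant and damping coefficient are arbitrary example values.

```python
import math

m, k, c = 1.0, 100.0, 4.0                # example mass, spring constant, damping coefficient
omega0 = math.sqrt(k / m)                # undamped angular frequency
zeta = c / (2.0 * math.sqrt(m * k))      # damping ratio
Q = 1.0 / (2.0 * zeta)                   # quality factor

if zeta > 1:
    regime = "overdamped"
elif zeta == 1:
    regime = "critically damped"
else:
    regime = "underdamped"
    omega1 = omega0 * math.sqrt(1.0 - zeta ** 2)  # damped oscillation frequency
    print("damped angular frequency:", omega1)

print(omega0, zeta, Q, regime)  # 10.0, 0.2, 2.5, underdamped
```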
Driven harmonic oscillators
Driven harmonic oscillators are damped oscillators further affected by an externally applied force F(t).
Newton's second law takes the form
It is usually rewritten into the form
This equation can be solved exactly for any driving force, using the solutions z(t) that satisfy the unforced equation
and which can be expressed as damped sinusoidal oscillations:
in the case where . The amplitude A and phase φ determine the behavior needed to match the initial conditions.
Step input
In the case and a unit step input with :
the solution is
with phase φ given by
The time an oscillator needs to adapt to changed external conditions is of the order . In physics, the adaptation is called relaxation, and τ is called the relaxation time.
In electrical engineering, a multiple of τ is called the settling time, i.e. the time necessary to ensure the signal is within a fixed departure from final value, typically within 10%. The term overshoot refers to the extent the response maximum exceeds final value, and undershoot refers to the extent the response falls below final value for times following the response maximum.
Sinusoidal driving force
In the case of a sinusoidal driving force:
where is the driving amplitude, and is the driving frequency for a sinusoidal driving mechanism. This type of system appears in AC-driven RLC circuits (resistor–inductor–capacitor) and driven spring systems having internal mechanical resistance or external air resistance.
The general solution is a sum of a transient solution that depends on initial conditions, and a steady state that is independent of initial conditions and depends only on the driving amplitude , driving frequency , undamped angular frequency , and the damping ratio .
The steady-state solution is proportional to the driving force with an induced phase change :
where
is the absolute value of the impedance or linear response function, and
is the phase of the oscillation relative to the driving force. The phase value is usually taken to be between −180° and 0 (that is, it represents a phase lag, for both positive and negative values of the arctan argument).
For a particular driving frequency called the resonance, or resonant frequency ω_r = ω₀√(1 − 2ζ²), the amplitude (for a given driving amplitude) is maximal. This resonance effect only occurs when ζ < 1/√2, i.e. for significantly underdamped systems. For strongly underdamped systems the value of the amplitude can become quite large near the resonant frequency.
The transient solutions are the same as the unforced () damped harmonic oscillator and represent the systems response to other events that occurred previously. The transient solutions typically die out rapidly enough that they can be ignored.
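The steady-state amplitude and phase discussed above can be evaluated numerically. The sketch below uses the textbook result for a sinusoidally driven, damped oscillator; the explicit formulas were omitted from the text, so this is a standard reconstruction, and the parameter names (F0, m, k, c, omega) are illustrative.

```python
import math

def steady_state_response(F0, m, k, c, omega):
    """Steady-state amplitude and phase lag for m*x'' + c*x' + k*x = F0*sin(omega*t)."""
    omega0 = math.sqrt(k / m)
    zeta = c / (2.0 * math.sqrt(m * k))
    denom = math.sqrt((omega0**2 - omega**2) ** 2 + (2.0 * zeta * omega0 * omega) ** 2)
    amplitude = (F0 / m) / denom
    phase = math.atan2(2.0 * zeta * omega0 * omega, omega0**2 - omega**2)  # phase lag
    return amplitude, phase

# Sweep near resonance: for small damping the amplitude peaks close to omega0 = 10 rad/s.
for w in (5.0, 9.5, 10.0, 15.0):
    print(w, steady_state_response(F0=1.0, m=1.0, k=100.0, c=1.0, omega=w))
```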
Parametric oscillators
A parametric oscillator is a driven harmonic oscillator in which the drive energy is provided by varying the parameters of the oscillator, such as the damping or restoring force.
A familiar example of parametric oscillation is "pumping" on a playground swing.
A person on a moving swing can increase the amplitude of the swing's oscillations without any external drive force (pushes) being applied, by changing the moment of inertia of the swing by rocking back and forth ("pumping") or alternately standing and squatting, in rhythm with the swing's oscillations. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency and damping .
Parametric oscillators are used in many applications. The classical varactor parametric oscillator oscillates when the diode's capacitance is varied periodically. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG based parametric oscillators operate in the same fashion. The designer varies a parameter periodically to induce oscillations.
Parametric oscillators have been developed as low-noise amplifiers, especially in the radio and microwave frequency range. Thermal noise is minimal, since a reactance (not a resistance) is varied. Another common use is frequency conversion, e.g., conversion from audio to radio frequencies. For example, the optical parametric oscillator converts an input laser wave into two output waves of lower frequency.
Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing, since the action appears as a time varying modification on a system parameter. This effect is different from regular resonance because it exhibits the instability phenomenon.
Universal oscillator equation
The equation
is known as the universal oscillator equation, since all second-order linear oscillatory systems can be reduced to this form. This is done through nondimensionalization.
If the forcing function is , where , the equation becomes
The solution to this differential equation contains two parts: the "transient" and the "steady-state".
Transient solution
The solution based on solving the ordinary differential equation is for arbitrary constants c1 and c2
The transient solution is independent of the forcing function.
Steady-state solution
Apply the "complex variables method" by solving the auxiliary equation below and then finding the real part of its solution:
Supposing the solution is of the form
Its derivatives from zeroth to second order are
Substituting these quantities into the differential equation gives
Dividing by the exponential term on the left results in
Equating the real and imaginary parts results in two independent equations
Amplitude part
Squaring both equations and adding them together gives
Therefore,
Compare this result with the theory section on resonance, as well as the "magnitude part" of the RLC circuit. This amplitude function is particularly important in the analysis and understanding of the frequency response of second-order systems.
Phase part
To solve for , divide both equations to get
This phase function is particularly important in the analysis and understanding of the frequency response of second-order systems.
Full solution
Combining the amplitude and phase portions results in the steady-state solution
The solution of the original universal oscillator equation is a superposition (sum) of the transient and steady-state solutions:
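For orientation, the standard nondimensional forms of these pieces (for a forcing term cos(ωτ) and ζ < 1) are sketched below; this is the textbook result, not a verbatim reconstruction of the omitted displays, and the symbols q, τ, c₁, c₂ are introduced here for illustration.

```latex
% Standard results for q'' + 2\zeta q' + q = \cos(\omega\tau), with \zeta < 1.
q_t(\tau) = e^{-\zeta\tau}\bigl(c_1\cos(\sqrt{1-\zeta^2}\,\tau)
            + c_2\sin(\sqrt{1-\zeta^2}\,\tau)\bigr)
            \qquad\text{(transient)}
q_s(\tau) = \frac{1}{\sqrt{(1-\omega^2)^2 + (2\zeta\omega)^2}}\,\cos(\omega\tau - \varphi),
\qquad \tan\varphi = \frac{2\zeta\omega}{1-\omega^2}
            \qquad\text{(steady state)}
q(\tau) = q_t(\tau) + q_s(\tau).
```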
Equivalent systems
Harmonic oscillators occurring in a number of areas of engineering are equivalent in the sense that their mathematical models are identical (see universal oscillator equation above). Below is a table showing analogous quantities in four harmonic oscillator systems in mechanics and electronics. If analogous parameters on the same line in the table are given numerically equal values, the behavior of the oscillators (their output waveform, resonant frequency, damping factor, etc.) is the same.
Application to a conservative force
The problem of the simple harmonic oscillator occurs frequently in physics, because a mass at equilibrium under the influence of any conservative force, in the limit of small motions, behaves as a simple harmonic oscillator.
A conservative force is one that is associated with a potential energy. The potential-energy function of a harmonic oscillator is
Given an arbitrary potential-energy function , one can do a Taylor expansion in terms of around an energy minimum () to model the behavior of small perturbations from equilibrium.
Because is a minimum, the first derivative evaluated at must be zero, so the linear term drops out:
The constant term is arbitrary and thus may be dropped, and a coordinate transformation allows the form of the simple harmonic oscillator to be retrieved:
Thus, given an arbitrary potential-energy function with a non-vanishing second derivative, one can use the solution to the simple harmonic oscillator to provide an approximate solution for small perturbations around the equilibrium point.
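Written out as a sketch (the potential V, expansion point x₀ and effective constant k_eff are illustrative symbols, not fixed by the text):

```latex
% Taylor expansion of a potential V about a minimum x_0.
V(x) \approx V(x_0) + \underbrace{V'(x_0)}_{=0}\,(x - x_0)
      + \tfrac{1}{2}\,V''(x_0)\,(x - x_0)^2
% Dropping the constant and setting u = x - x_0 and k_{\mathrm{eff}} = V''(x_0) > 0:
V(u) \approx \tfrac{1}{2}\, k_{\mathrm{eff}}\, u^2,
\qquad \omega = \sqrt{k_{\mathrm{eff}}/m}.
```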
Examples
Simple pendulum
Assuming no damping, the differential equation governing a simple pendulum of length l, where g is the local acceleration of gravity, is d²θ/dt² + (g/l)sin θ = 0.
If the maximal displacement of the pendulum is small, we can use the approximation sin θ ≈ θ and instead consider the equation d²θ/dt² + (g/l)θ = 0.
The general solution to this differential equation is
where and are constants that depend on the initial conditions.
Using as initial conditions θ(0) = θ₀ and θ′(0) = 0, the solution is given by θ(t) = θ₀ cos(√(g/l) t),
where θ₀ is the largest angle attained by the pendulum (that is, θ₀ is the amplitude of the pendulum). The period, the time for one complete oscillation, is given by the expression T = 2π√(l/g) = 2π/ω,
which is a good approximation of the actual period when θ₀ is small. Notice that in this approximation the period is independent of the amplitude θ₀. In the above equation, ω represents the angular frequency.
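As a quick numerical check of the small-angle period (the pendulum length below is an arbitrary example value):

```python
import math

g = 9.81   # local acceleration of gravity (m/s^2)
l = 1.0    # example pendulum length (m)

omega = math.sqrt(g / l)               # angular frequency in the small-angle approximation
T = 2.0 * math.pi * math.sqrt(l / g)   # period, independent of amplitude in this approximation
print(omega, T)  # about 3.13 rad/s and 2.01 s for a 1 m pendulum
```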
Spring/mass system
When a spring is stretched or compressed by a mass, the spring develops a restoring force. Hooke's law gives the relationship of the force exerted by the spring when the spring is compressed or stretched a certain length: F = −kx,
where F is the force, k is the spring constant, and x is the displacement of the mass with respect to the equilibrium position. The minus sign in the equation indicates that the force exerted by the spring always acts in a direction that is opposite to the displacement (i.e. the force always acts towards the zero position), and so prevents the mass from flying off to infinity.
By using either force balance or an energy method, it can be readily shown that the motion of this system is given by the following differential equation: F = −kx = m(d²x/dt²),
the latter being Newton's second law of motion.
If the initial displacement is A, and there is no initial velocity, the solution of this equation is given by x(t) = A cos(√(k/m) t).
Given an ideal massless spring, is the mass on the end of the spring. If the spring itself has mass, its effective mass must be included in .
Energy variation in the spring–damping system
In terms of energy, all systems have two types of energy: potential energy and kinetic energy. When a spring is stretched or compressed, it stores elastic potential energy, which is then transferred into kinetic energy. The potential energy within a spring is determined by the equation U = (1/2)kx².
When the spring is stretched or compressed, kinetic energy of the mass gets converted into potential energy of the spring. By conservation of energy, assuming the datum is defined at the equilibrium position, when the spring reaches its maximal potential energy, the kinetic energy of the mass is zero. When the spring is released, it tries to return to equilibrium, and all its potential energy converts to kinetic energy of the mass.
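Written out (a sketch neglecting damping, using the amplitude A and the undamped angular frequency ω as illustrative symbols), the maximal potential and kinetic energies are equal:

```latex
% Energy exchange in the undamped spring--mass system.
\tfrac{1}{2}\, k A^{2} \;=\; \tfrac{1}{2}\, m\, v_{\max}^{2}
\quad\Longrightarrow\quad
v_{\max} = A\,\sqrt{k/m} = A\,\omega .
```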
Definition of terms
See also
Anharmonicity
Critical speed
Effective mass (spring–mass system)
Fradkin tensor
Normal mode
Parametric oscillator
Phasor
Q factor
Quantum harmonic oscillator
Radial harmonic oscillator
Elastic pendulum
Notes
References
External links
The Harmonic Oscillator from The Feynman Lectures on Physics
Mechanical vibrations
Ordinary differential equations
Articles containing video clips
Oscillators
Acoustics
Sound | Harmonic oscillator | [
"Physics",
"Engineering"
] | 3,259 | [
"Structural engineering",
"Classical mechanics",
"Acoustics",
"Mechanics",
"Mechanical vibrations"
] |
13,914 | https://en.wikipedia.org/wiki/History%20of%20the%20graphical%20user%20interface | The history of the graphical user interface, understood as the use of graphic icons and a pointing device to control a computer, covers a five-decade span of incremental refinements, built on some constant core principles. Several vendors have created their own windowing systems based on independent code, but with basic elements in common that define the WIMP "window, icon, menu and pointing device" paradigm.
There have been important technological achievements, and enhancements to the general interaction in small steps over previous systems. There have been a few significant breakthroughs in terms of use, but the same organizational metaphors and interaction idioms are still in use. Desktop computers are often controlled by computer mice and/or keyboards while laptops often have a pointing stick or touchpad, and smartphones and tablet computers have a touchscreen. The influence of game computers and joystick operation has been omitted.
Early research and developments
Early dynamic information devices such as radar displays, where input devices were used for direct control of computer-created data, set the basis for later improvements of graphical interfaces. Some early cathode-ray-tube (CRT) screens used a light pen, rather than a mouse, as the pointing device.
The concept of a multi-panel windowing system was introduced by the first real-time graphic display systems for computers: the SAGE Project and Ivan Sutherland's Sketchpad.
Augmentation of Human Intellect (NLS)
In the 1960s, Douglas Engelbart's Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.
Much of the early research was based on how young children learn. So, the design was based on the childlike characteristics of hand–eye coordination, rather than use of command languages, user-defined macro procedures, or automated transformation of data as later used by adult professionals.
Engelbart publicly demonstrated this work at the Association for Computing Machinery / Institute of Electrical and Electronics Engineers (ACM/IEEE)—Computer Society's Fall Joint Computer Conference in San Francisco on December 9, 1968. It was so-called The Mother of All Demos.
The idea of multiple overlapping and resizable windows on a "desktop" is commonly, and incorrectly, attributed to Xerox PARC and its Alto. The Xerox Alto's windowing system was inspired by the DNLS (Display NLS)'s overlapping multi-windowing system, which was operational by early 1973 and used at several ARPA locations. In the DNLS, overlapping windows were referred to as "display areas", or DAs, and could store multiple lines of strings. In 1971, the screen could only be split into two display areas, vertically or horizontally; by early 1973, the full overlapping windowing system was implemented, and was capable of displaying on an Imlac PDS-1. The Xerox Alto greatly improved upon this system by adding the capability to display bitmapped images, buttons, and other graphics in these windows, as opposed to the DNLS's overlapping DAs which could only display strings of text.
Xerox PARC
Engelbart's work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s.
In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.
The modern WIMP GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. This was introduced in the Smalltalk programming environment. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut and paste editor. In 1975, Xerox engineers demonstrated a graphical user interface "including icons and the first use of pop-up menus".
In 1981 Xerox introduced a pioneering product, Star, a workstation incorporating many of PARC's innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.
Quantel Paintbox
Released by digital imaging company Quantel in 1981, the Paintbox was a color graphical workstation with supporting of mouse input, but more oriented for graphics tablets; this model also was notable as one of the first systems with implementation of pop-up menus.
Blit
The Blit, a graphics terminal, was developed at Bell Labs in 1982.
Lisp machines, Symbolics
Lisp machines originally developed at MIT and later commercialized by Symbolics and other manufacturers, were early high-end single user computer workstations with advanced graphical user interfaces, windowing, and mouse as an input device. First workstations from Symbolics came to market in 1981, with more advanced designs in the subsequent years.
Lisa, Macintosh, and Apple IIGS
Beginning in 1979, started by Steve Jobs and led by Jef Raskin, the Apple Lisa and Macintosh teams at Apple Computer (which included former members of the Xerox PARC group) continued to develop such ideas. The Lisa, released in 1983, featured a high-resolution stationery-based (document-centric) graphical interface atop an advanced hard disk based OS that featured such things as preemptive multitasking and graphically oriented inter-process communication. The comparatively simplified Macintosh, released in 1984 and designed to be lower in cost, was the first commercially successful product to use a multi-panel window interface. A desktop metaphor was used, in which files looked like pieces of paper, file directories looked like file folders, there were a set of desk accessories like a calculator, notepad, and alarm clock that the user could place around the screen as desired, and the user could delete files and folders by dragging them to a trash-can icon on the screen. The Macintosh, in contrast to the Lisa, used a program-centric rather than document-centric design. Apple revisited the document-centric design, in a limited manner, much later with OpenDoc.
There is still some controversy over the amount of influence that Xerox's PARC work, as opposed to previous academic research, had on the GUIs of the Apple Lisa and Macintosh, but it is clear that the influence was extensive, because first versions of Lisa GUIs even lacked icons. These prototype GUIs are at least mouse-driven, but completely ignored the WIMP ( "window, icon, menu, pointing device") concept. Screenshots of first GUIs of Apple Lisa prototypes show the early designs. Apple engineers visited the PARC facilities (Apple secured the rights for the visit by compensating Xerox with a pre-IPO purchase of Apple stock) and a number of PARC employees subsequently moved to Apple to work on the Lisa and Macintosh GUI. However, the Apple work extended PARC's considerably, adding manipulatable icons, and drag and drop manipulation of objects in the file system (see Macintosh Finder) for example. A list of the improvements made by Apple, beyond the PARC interface, can be read at Folklore.org. Jef Raskin warns that many of the reported facts in the history of the PARC and Macintosh development are inaccurate, distorted or even fabricated, due to the lack of usage by historians of direct primary sources.
In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel, Nineteen Eighty-Four. The commercial was aimed at making people think about computers, identifying the user-friendly interface as a personal computer which departed from previous business-oriented systems, and becoming a signature representation of Apple products.
In 1986, the Apple IIGS was launched with a 16-bit CPU and significantly improved graphics and audio. It shipped with a new operating system, Apple GS/OS, with a Finder-like GUI similar to the Macintosh series.
Agat
The Soviet Union Agat PC featured a graphical interface and a mouse device and was released in 1983.
SGI 1000 series and MEX
Founded in 1982, SGI introduced the IRIS 1000 Series in 1983. The first graphical terminals (IRIS 1000) shipped in late 1983, and the corresponding workstation model (IRIS 1400) was released in mid-1984. The machines used an early version of the MEX windowing system on top of the GL2 Release 1 operating environment. Examples of the MEX user interface can be seen in a 1988 article in the journal "Computer Graphics", while earlier screenshots cannot be found. The first commercial GUI-based systems, these did not find widespread use due to their (discounted) academic list prices of $22,500 and $35,700 for the IRIS 1000 and IRIS 1400, respectively. However, these systems were commercially successful enough to start SGI's business as one of the main graphical workstation vendors. In later revisions of graphical workstations, SGI switched to the X window system, which had been developed starting at MIT since 1984 and which became the standard for UNIX workstations.
Visi On
VisiCorp's Visi On was a GUI designed to run on DOS for IBM PCs. It was released in December 1983. Visi On had many features of a modern GUI, and included a few that did not become common until many years later. It was fully mouse-driven, used a bit-mapped display for both text and graphics, included on-line help, and allowed the user to open a number of programs at once, each in its own window, and switch between them to multitask. Visi On did not, however, include a graphical file manager. Visi On also demanded a hard drive in order to implement its virtual memory system used for "fast switching", at a time when hard drives were very expensive.
GEM (Graphics Environment Manager)
Digital Research (DRI) created GEM as an add-on program for personal computers. GEM was developed to work with existing CP/M and MS-DOS compatible operating systems on business computers such as IBM PC compatibles. It was developed from DRI software, known as GSX, designed by a former PARC employee. Its similarity to the Macintosh desktop led to a copyright lawsuit from Apple Computer, and a settlement which involved some changes to GEM. This was to be the first of a series of "look and feel" lawsuits related to GUI design in the 1980s.
GEM received widespread use in the consumer market from 1985, when it was made the default user interface built into the Atari TOS operating system of the Atari ST line of personal computers. It was also bundled by other computer manufacturers and distributors, such as Amstrad. Later, it was distributed with the best-sold Digital Research version of DOS for IBM PC compatibles, the DR-DOS 6.0. The GEM desktop faded from the market with the withdrawal of the Atari ST line in 1992 and with the popularity of the Microsoft Windows 3.0 in the PC front around the same period of time. The Falcon030, released in 1993 was the last computer from Atari to use GEM.
DeskMate
Tandy's DeskMate appeared in the early 1980s on its TRS-80 machines and was ported to its Tandy 1000 range in 1984. Like most PC GUIs of the time, it depended on a disk operating system such as TRSDOS or MS-DOS. The application was popular at the time and included a number of programs like Draw, Text and Calendar, as well as attracting outside investment such as Lotus 1-2-3 for DeskMate.
MSX-View
MSX-View was developed for MSX computers by ASCII Corporation and HAL Laboratory. MSX-View contains software such as Page Edit, Page View, Page Link, VShell, VTed, VPaint and VDraw. An external version of the built-in MSX View of the Panasonic FS-A1GT was released as an add-on for the Panasonic FS-A1ST on disk instead of 512 KB ROM DISK.
Amiga Intuition and the Workbench
The Amiga computer was launched by Commodore in 1985 with a GUI called Workbench. Workbench was based on an internal engine developed mostly by RJ Mical, called Intuition, which drove all the input events. The first versions used a blue/orange/white/black default palette, which was selected for high contrast on televisions and composite monitors. Workbench presented directories as drawers to fit in with the "workbench" theme. Intuition was the widget and graphics library that made the GUI work. It was driven by user events through the mouse, keyboard, and other input devices.
Due to a mistake made by the Commodore sales department, the first floppies of AmigaOS (released with the Amiga 1000) named the whole OS "Workbench". From then on, users and CBM itself used "Workbench" as the nickname for the whole AmigaOS (including Amiga DOS, Extras, etc.). This common usage ended with the release of version 2.0 of AmigaOS, which re-introduced proper names on the installation floppies for AmigaDOS, Workbench, Extras, etc.
Starting with Workbench 1.0, AmigaOS treated the Workbench as a backdrop, borderless window sitting atop a blank screen. With the introduction of AmigaOS 2.0, however, the user was free to select whether the main Workbench window appeared as a normally layered window, complete with a border and scrollbars, through a menu item.
Amiga users were able to boot their computer into a command-line interface (also known as the CLI or Amiga Shell), a keyboard-based environment without the Workbench GUI. From there they could later start the GUI with the CLI/Shell command "LoadWB", which loaded the Workbench.
One major difference from other operating systems of the time (and for some time after) was the Amiga's fully multitasking operating system, with a powerful built-in animation system using a hardware blitter and copper, and four channels of 26 kHz 8-bit sampled sound. This made the Amiga the first multimedia computer, years before other platforms.
Like most GUIs of the day, Amiga's Intuition followed Xerox's, and sometimes Apple's, lead, but a CLI was also included, which dramatically extended the functionality of the platform. The Amiga's CLI/Shell is not just a simple text-based interface as in MS-DOS, but another graphical process driven by Intuition and using the same gadgets included in Amiga's graphics.library. The CLI/Shell interface integrates itself with the Workbench, sharing privileges with the GUI.
The Amiga Workbench evolved over the 1990s, even after Commodore's 1994 bankruptcy.
Acorn BBC Master Compact
Acorn's 8-bit BBC Master Compact shipped with Acorn's first public GUI in 1986. Little commercial software, beyond that included on the Welcome disk, was ever made available for the system, despite Acorn's claim at the time that "the major software houses have worked with Acorn to make over 100 titles available on compilation discs at launch". The most avid supporter of the Master Compact appeared to be Superior Software, which produced and specifically labelled its games as 'Master Compact' compatible.
Arthur / RISC OS
RISC OS is a series of graphical user interface-based computer operating systems (OSes) designed for ARM architecture systems. It takes its name from the RISC (reduced instruction set computer) architecture supported. The OS was originally developed by Acorn Computers for use with their 1987 range of Archimedes personal computers using the Acorn RISC Machine (ARM) processors. It comprises a command-line interface and desktop environment with a windowing system.
Originally branded as Arthur 1.20, the subsequent Arthur 2 release was shipped under the name RISC OS 2.
Desktop
The WIMP interface incorporates three mouse buttons (named Select, Menu and Adjust), context-sensitive menus, window stack control (i.e. send to back) and dynamic window focus (a window can have input focus at any position on the stack). The Icon bar (Dock) holds icons which represent mounted disc drives, RAM discs, network directories, running applications, system utilities and docked files, directories or inactive applications. These icons and open windows have context-sensitive menus and support drag-and-drop behaviour. An icon on the Icon bar represents the running application as a whole, irrespective of whether it has open windows.
The application has control of its context-sensitive menus; inapplicable menu choices can be 'greyed out' to make them unavailable. Menus have their own titles and may be moved around the desktop by the user. Any menu can lead to further sub-menus or to a new window for complicated choices.
The GUI is centered around the concept of files. The Filer displays the contents of a disc. Applications are run from the Filer view, and files can be dragged from applications to the Filer view to perform saves; dragging a file from the Filer view into an application performs a load. With the applications' co-operation, data can be copied or moved directly between applications by saving (dragging) into another application.
Application directories are used to store applications. The OS differentiates them from normal directories through the use of a pling (exclamation mark, also called shriek) prefix. Double-clicking on such a directory launches the application rather than opening the directory. The application's executable files and resources are contained within the directory, but normally they remain hidden from the user. Because applications are self-contained, this allows drag-and-drop installation and removal.
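A toy sketch of that convention follows (in Python, with invented helper behaviour; this is not RISC OS code, only an illustration of how the pling prefix distinguishes application directories from ordinary ones):

    from pathlib import Path
    import tempfile

    def filer_open(path: Path) -> str:
        """Illustrate the Filer convention: a directory whose name starts
        with a pling ('!') is treated as an application and run; other
        directories are opened as directory viewers."""
        if path.is_dir() and path.name.startswith("!"):
            return f"run application {path.name}"      # e.g. !Draw, !Edit
        if path.is_dir():
            return f"open directory viewer for {path.name}"
        return f"load {path.name} with the application registered for its type"

    # Hypothetical demonstration in a temporary directory.
    root = Path(tempfile.mkdtemp())
    (root / "!Draw").mkdir()
    print(filer_open(root / "!Draw"))   # -> run application !Draw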
Files are normally typed. RISC OS has some predefined types. Applications can supplement the set of known types. Double-clicking a file with a known type will launch the appropriate application to load the file.
The Style Guide encourages a consistent look and feel across applications, specifying application appearance and behaviour. Acorn's own main bundled applications were not updated to comply with the guide until the Select release in 2001.
Font manager
The outline font manager provides spatial anti-aliasing of fonts; RISC OS was the first operating system to include such a feature, having done so since before January 1989. Since 1994, in RISC OS 3.5, it has been possible to use an outline anti-aliased font in the WindowManager for UI elements, rather than the bitmap system font of previous versions.
MS-DOS file managers and utility suites
Because most of the very early IBM PCs and compatibles lacked any common true graphical capability (they used the 80-column basic text mode compatible with the original MDA display adapter), a series of file managers arose, including Microsoft's DOS Shell, which featured typical GUI elements such as menus, push buttons, lists with scroll bars and a mouse pointer. The term text-based user interface was later coined for this kind of interface. Many MS-DOS text mode applications, like the default text editor of MS-DOS 5.0 (and related tools, like QBasic), used the same philosophy. The IBM DOS Shell included with IBM DOS 5.0 (circa 1992) supported both text display modes and actual graphics display modes, making it both a TUI and a GUI, depending on the chosen mode.
Advanced file managers for MS-DOS were able to redefine character shapes with EGA and better display adapters, providing some basic low-resolution icons and graphical interface elements, including an arrow (instead of a coloured cell block) for the mouse pointer. When the display adapter lacked the ability to change character shapes, these programs fell back to the CP437 character set found in the adapter's ROM. Some popular utility suites for MS-DOS, such as Norton Utilities and PC Tools, used these techniques as well.
DESQview was a text mode multitasking program introduced in July 1985. Running on top of MS-DOS, it allowed users to run multiple DOS programs concurrently in windows. It was the first program to bring multitasking and windowing capabilities to a DOS environment in which existing DOS programs could be used. DESQview was not a true GUI but offered certain components of one, such as resizable, overlapping windows and mouse pointing.
Applications under MS-DOS with proprietary GUIs
Before the MS-Windows age, and with the lack of a true common GUI under MS-DOS, most graphical applications which worked with EGA, VGA and better graphic cards had proprietary built-in GUIs. One of the best known such graphical applications was Deluxe Paint, a popular painting software with a typical WIMP interface.
The original Adobe Acrobat Reader executable file for MS-DOS was able to run on both the standard Windows 3.x GUI and the standard DOS command prompt. When it was launched from the command prompt, on a machine with a VGA graphics card, it provided its own GUI.
Microsoft Windows (16-bit versions)
Windows 1.0, a GUI for the MS-DOS operating system, was released in 1985. The market's response was less than stellar. Windows 2.0 followed, but it was not until the 1990 launch of Windows 3.0, based on Common User Access, that its popularity truly exploded. The GUI saw minor redesigns since then, mainly the networking-enabled Windows 3.11 and its Win32s 32-bit patch. The 16-bit line of MS Windows was discontinued with the introduction of Windows 95 and the 32-bit Windows NT architecture in the 1990s.
The main window of a given application can occupy the full screen in maximized state. The user must then switch between maximized applications using the Alt+Tab keyboard shortcut; there is no mouse-based alternative short of un-maximizing a window. When none of the running application windows are maximized, switching can be done by clicking on a partially visible window, as is the common way in other GUIs.
In 1988, Apple sued Microsoft for copyright infringement of the Lisa and Apple Macintosh GUI. The court case lasted 4 years before almost all of Apple's claims were denied on a contractual technicality. Subsequent appeals by Apple were also denied. Microsoft and Apple apparently entered a final, private settlement of the matter in 1997.
GEOS
GEOS was launched in 1986, originally written for the 8-bit home computer Commodore 64 and, shortly after, the Apple II. The name was later used by the company for PC/Geos on IBM PC systems, then Geoworks Ensemble. It came with several application programs like a calendar and word processor. A cut-down version served as the basis for America Online's MS-DOS client. Compared to the competing Windows 3.0 GUI, it could run reasonably well on simpler hardware, but its developer had a restrictive policy towards third-party developers that prevented it from becoming a serious competitor. Additionally, it was targeted at 8-bit machines, whilst the 16-bit computer age was dawning.
The X Window System
The standard windowing system in the Unix world is the X Window System (commonly X11 or X), first released in the mid-1980s. The W Window System (1983) was the precursor to X; X was developed at MIT as part of Project Athena. Its original purpose was to allow users of the newly emerging graphic terminals to access remote graphics workstations without regard to the workstation's operating system or hardware. Due largely to the availability of the source code used to write X, it has become the standard layer for management of graphical and input/output devices and for the building of both local and remote graphical interfaces on virtually all Unix, Linux and other Unix-like operating systems, with the notable exceptions of macOS and Android.
X allows a graphical terminal user to make use of remote resources on the network as if they were all located locally to the user by running a single module of software called the X server. The software running on the remote machine is called the client application. X's network transparency protocols allow the display and input portions of any application to be separated from the remainder of the application and 'served up' to any of a large number of remote users. X is available today as free software.
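As a minimal sketch of this client–server split (assuming the third-party python-xlib package is installed; the host name "remotehost" is a placeholder, not a real machine), a client program simply names the display it wants to talk to, which may live on another computer on the network:

    from Xlib import display

    # The display string follows the host:display convention used by the
    # DISPLAY environment variable; the X server it names may be remote.
    d = display.Display("remotehost:0")
    root = d.screen().root           # root window of the default screen
    print(root.get_geometry())       # geometry reported by the (possibly remote) server
    d.close()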
NeWS
The PostScript-based NeWS (Network extensible Window System) was developed by Sun Microsystems in the mid-1980s. For several years SunOS included a window system combining NeWS and the X Window System. Although NeWS was considered technically elegant by some commentators, Sun eventually dropped the product. Unlike X, NeWS was always proprietary software.
The 1990s: Mainstream usage of the desktop
The widespread adoption of the PC platform in homes and small businesses popularized computers among people with no formal training. This created a fast-growing market, opening an opportunity for commercial exploitation of easy-to-use interfaces and making economically viable the incremental refinement of the existing GUIs for home systems.
Also, the spread of high-color and true-color display adapters capable of thousands and millions of colors, along with faster CPUs, accelerated graphics cards, cheaper RAM, storage devices orders of magnitude larger (from megabytes to gigabytes) and larger, cheaper bandwidth for telecom networking, helped to create an environment in which the common user was able to run complicated GUIs which began to favor aesthetics.
Windows 95 and "a computer in every home"
After Windows 3.11, Microsoft began developing a new consumer-oriented version of the operating system. Windows 95 was intended to integrate Microsoft's formerly separate MS-DOS and Windows products and included an enhanced version of DOS, often referred to as MS-DOS 7.0. It also featured a significant redesign of the GUI; related design work had been carried out under the codename "Cairo", and while Cairo never fully materialized, parts of it found their way into subsequent versions of the operating system starting with Windows 95. Both Windows 95 and Windows NT could run 32-bit applications and could exploit the abilities of the Intel 80386 CPU, such as preemptive multitasking and up to 4 GiB of linear address space. Windows 95 was touted as a 32-bit operating system, but it was actually based on a hybrid kernel (VWIN32.VXD) with the 16-bit user interface (USER.EXE) and graphics device interface (GDI.EXE) of Windows for Workgroups 3.11; these 16-bit components were paired with a 32-bit subsystem (USER32.DLL and GDI32.DLL) that allowed it to run native 16-bit applications as well as 32-bit applications. Windows 95 promoted a general upgrade to 32-bit technology.
Accompanied by an extensive marketing campaign, Windows 95 was a major success in the marketplace at launch and shortly became the most popular desktop operating system; within a year or two of its release it had become the most successful operating system yet produced.
Windows 95 saw the beginning of the browser wars, when the World Wide Web began receiving a great deal of attention in popular culture and mass media. Microsoft at first did not see potential in the Web, and Windows 95 was shipped with Microsoft's own online service called The Microsoft Network, which was dial-up only and was used primarily for its own content, not internet access. As versions of Netscape Navigator and Internet Explorer were released at a rapid pace over the following few years, Microsoft used its desktop dominance to push its browser and shape the ecology of the web mainly as a monoculture.
Windows 95 evolved through the years into Windows 98 and Windows ME, the last in the line of DOS-based Windows operating systems from Microsoft. Windows underwent a parallel 32-bit evolutionary path, in which Windows NT 3.1 was released in 1993. Windows NT (for New Technology) was a native 32-bit operating system with a new driver model, was Unicode-based, and provided for true separation between applications. Windows NT also supported 16-bit applications in an NTVDM, but it did not support VxD-based drivers. Windows 95 had originally been intended for release before 1993, as the predecessor to Windows NT: the idea was to promote the development of 32-bit applications with backward compatibility, leading the way for a more successful NT release. After multiple delays, Windows 95 was released without Unicode support and used the VxD driver model.

Windows NT 3.1 evolved into Windows NT 3.5, 3.51 and then 4.0, which finally shared a similar interface with its Windows 9x desktop counterpart and included a Start button. The evolution continued with Windows 2000, Windows XP, Windows Vista, and then Windows 7. Windows XP and later were also made available in 64-bit editions. Windows server products branched off with the introduction of Windows Server 2003 (available in 32-bit and 64-bit IA64 or x64 editions), followed by Windows Server 2008 and Windows Server 2008 R2. Windows 2000 and XP shared the same basic GUI, although XP introduced Visual Styles. With Windows 98, the Active Desktop theme was introduced, allowing an HTML approach to the desktop, but this feature was coldly received by customers, who frequently disabled it. Windows Vista eventually discontinued it definitively, but added a new Sidebar to the desktop.
Macintosh
The Macintosh's GUI has been revised multiple times since 1984, with major updates including System 7 and Mac OS 8. It underwent its largest revision to date with the introduction of the "Aqua" interface in 2001's Mac OS X, a new operating system built primarily on technology from NeXTSTEP with UI elements of the original Mac OS grafted on. macOS uses a technology known as Quartz for graphics rendering and drawing on-screen. Some interface features of macOS are inherited from NeXTSTEP (such as the Dock, the automatic wait cursor, or double-buffered windows giving a solid appearance and flicker-free window redraws), while others are inherited from the old Mac OS operating system (the single system-wide menu bar). Mac OS X 10.3 introduced features to improve usability including Exposé, which is designed to make finding open windows easier.
With Mac OS X 10.4 released in April 2005, new features were added, including Dashboard (a virtual alternate desktop for mini specific-purpose applications) and a search tool called Spotlight, which provides users with an option for searching through files instead of browsing through folders.
Mac OS X 10.7, released in July 2011, included support for full-screen apps, and Mac OS X 10.11 (El Capitan), released in September 2015, added support for creating a full-screen split view by pressing the green button in the upper-left corner of a window or with the Control+Cmd+F keyboard shortcut.
GUIs built on the X Window System
In the early days of X Window development, Sun Microsystems and AT&T attempted to push for a GUI standard called OPEN LOOK in competition with Motif. OPEN LOOK was developed from scratch in conjunction with Xerox, while Motif was a collective effort. Motif eventually gained prominence and became the basis for Hewlett-Packard's Visual User Environment (VUE), which later became the Common Desktop Environment (CDE).
In the late 1990s, there was significant growth in the Unix world, especially among the free software community. New graphical desktop movements grew up around Linux and similar operating systems, based on the X Window System. A new emphasis on providing an integrated and uniform interface to the user brought about new desktop environments, such as KDE, GNOME and Xfce, which have supplanted CDE in popularity on both Unix and Unix-like operating systems. The Xfce, KDE and GNOME look and feel each tend to undergo more rapid change and less codification than the earlier OPEN LOOK and Motif environments.
Amiga
Later releases added improvements over the original Workbench, like support for high-color Workbench screens, context menus, and embossed 2D icons with pseudo-3D aspect. Some Amiga users preferred alternative interfaces to standard Workbench, such as Directory Opus Magellan.
The use of improved, third-party GUI engines became common amongst users who preferred more attractive interfaces – such as Magic User Interface (MUI) and ReAction. These object-oriented graphics engines, driven by user-interface classes and methods, were later standardized into the Amiga environment and changed Amiga Workbench into a complete and modern guided interface, with new standard gadgets, animated buttons, true 24-bit-color icons, increased use of wallpapers for screens and windows, alpha channels, transparency and shadows, as any modern GUI provides.
Modern derivatives of Workbench are Ambient for MorphOS, Scalos, Workbench for AmigaOS 4 and Wanderer for AROS.
A brief article on Ambient, along with descriptions of MUI icons, menus and gadgets, is available at aps.fr, and images of Zune can be found on the main AROS site.
The use of object-oriented graphics engines dramatically changes the look and feel of a GUI to match actual style guides.
OS/2
Originally collaboratively developed by Microsoft and IBM to replace DOS, OS/2 version 1.0 (released in 1987) had no GUI at all. Version 1.1 (released 1988) included Presentation Manager (PM), an implementation of IBM Common User Access, which looked a lot like the later Windows 3.1 UI. After the split with Microsoft, IBM developed the Workplace Shell (WPS) for version 2.0 (released in 1992), a quite radical, object-oriented approach to GUIs. Microsoft later imitated much of this look in Windows 95.
NeXTSTEP
The NeXTSTEP user interface was used in the NeXT line of computers. NeXTSTEP's first major version was released in 1989. It used Display PostScript for its graphical underpinning. The most significant feature of the NeXTSTEP interface was the Dock, carried with some modification into Mac OS X, and it had other minor interface details that some found made it easier and more intuitive to use than previous GUIs. NeXTSTEP's GUI was the first to feature opaque dragging of windows, on machines that were comparatively weak by today's standards, aided by high-performance graphics hardware.
BeOS
BeOS was developed by a team led by former Apple executive Jean-Louis Gassée as an alternative to Mac OS, initially on custom AT&T Hobbit-based computers before switching to PowerPC hardware; it was later ported to Intel hardware. It used an object-oriented kernel written by Be, and did not use the X Window System, but a different GUI written from scratch. Much effort was spent by the developers to make it an efficient platform for multimedia applications. Be Inc. was acquired by PalmSource, Inc. (Palm Inc. at the time) in 2001. The BeOS GUI still lives in Haiku, an open-source software reimplementation of the BeOS.
Current trends
Mobile devices
General Magic's work is the apparent parent of all modern touch-screen-based smartphone GUIs, including the iPhone's. In 2007 with the iPhone, and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens; those devices are considered milestones in the development of mobile devices.
Other portable devices such as MP3 players and cell phones have been a burgeoning area of deployment for GUIs in recent years. Since the mid-2000s, the vast majority of portable devices have advanced to high screen resolutions and larger sizes (the Galaxy Note 4's 2,560 × 1,440 pixel display is an example). Because of this, these devices have their own well-known user interfaces and operating systems, with large homebrew communities dedicated to creating their own visual elements, such as icons, menus and wallpapers. Post-WIMP interfaces are often used in these mobile devices, where the traditional pointing devices required by the desktop metaphor are not practical.
As high-powered graphics hardware draws considerable power and generates significant heat, many of the 3D effects developed between 2000 and 2010 are not practical on this class of device. This has led to the development of simpler interfaces that make a design feature of two-dimensionality, as exhibited by the Metro (Modern) UI first used in Windows 8 and the 2012 Gmail redesign.
3D user interface
In the first decade of the 21st century, the rapid development of GPUs led to a trend for the inclusion of 3D effects in window management. It is based on experimental research in user interface design that tries to expand the expressive power of the existing toolkits in order to enhance the physical cues that allow for direct manipulation. New effects common to several projects are scale resizing and zooming, several window transformations and animations (wobbly windows, smooth minimization to the system tray, and so on), composition of images (used for window drop shadows and transparency) and enhancement of the global organization of open windows (zooming to virtual desktops, desktop cube, Exposé, etc.). The proof-of-concept BumpTop desktop combines a physical representation of documents with tools for document classification possible only in the simulated environment, like instant reordering and automated grouping of related documents.
These effects were popularized by the widespread availability of 3D video cards (mainly due to gaming), which allow for complex visual processing with low CPU use; the 3D acceleration in most modern graphics cards is used to render the application clients in a 3D scene. The application window is drawn off-screen in a pixel buffer, and the graphics card renders it into the 3D scene.
This can have the advantage of moving some of the window rendering to the GPU on the graphics card, thus reducing the load on the main CPU, but the facilities that allow this must be available on the graphics card for this approach to be possible.
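The idea can be sketched in a few lines of Python (an illustrative toy, not any particular compositor's code; NumPy arrays stand in for the off-screen GPU buffers, and the window contents below are invented for the example):

    import numpy as np

    def composite(windows, screen_size=(768, 1024)):
        """Alpha-blend off-screen window buffers, back to front, into one frame.

        Each entry is (x, y, rgba) with values in [0, 1]; a real compositor
        performs this blend on the GPU rather than on the CPU.
        """
        frame = np.zeros(screen_size + (3,))                 # final RGB frame (height, width, 3)
        for x, y, rgba in windows:                           # back-to-front stacking order
            h, w = rgba.shape[:2]
            rgb, alpha = rgba[..., :3], rgba[..., 3:4]
            region = frame[y:y + h, x:x + w]
            region[:] = alpha * rgb + (1 - alpha) * region   # standard "over" blending
        return frame

    opaque = np.ones((200, 300, 4))                                  # a fully opaque window
    translucent = np.ones((150, 250, 4)); translucent[..., 3] = 0.5  # a 50%-transparent window
    frame = composite([(50, 50, opaque), (120, 100, translucent)])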
Examples of 3D user-interface software include Xgl and Compiz from Novell, and AIGLX bundled with Red Hat/Fedora. Quartz Extreme for macOS and the Aero interface of Windows Vista and Windows 7 use 3D rendering for shading and transparency effects, as well as for Exposé and for Windows Flip and Flip 3D, respectively. Windows Vista uses Direct3D to accomplish this, whereas the other interfaces use OpenGL.
Notebook interface
The notebook interface is widely used in data science and other areas of research. Notebooks allow users to mix text, calculations, and graphs in the same interface, which was previously impossible with a command-line interface.
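For example, a single notebook cell can combine a calculation with an inline graph; a minimal sketch (assuming a Jupyter-style environment with NumPy and Matplotlib available):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 1, 500)                   # one second sampled at 500 points
    signal = np.sin(2 * np.pi * 5 * t)           # a 5 Hz sine wave
    print("mean power:", np.mean(signal ** 2))   # the calculation appears as text output

    plt.plot(t, signal)                          # the graph is rendered inline below the cell
    plt.xlabel("time (s)")
    plt.ylabel("amplitude")
    plt.show()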
Virtual reality and presence
Virtual reality devices such as the Oculus Rift and Sony's PlayStation VR (formerly Project Morpheus) aim to provide users with presence, a perception of full immersion into a virtual environment.
See also
Windowing system
Bill Atkinson
The Blit (graphics terminal by Rob Pike, 1982)
Direct manipulation interface
Douglas Engelbart's On-Line System
Graphical user interface
Text-based user interface
History of computing hardware
History of computer icons
Ivan Sutherland's Sketchpad
Jef Raskin
Office of the future
ETH Oberon
Xgl
AIGLX
DirectFB
Tiling window manager
Macro command language
Texting
Skeuomorph
Apple v. Microsoft
IBM Common User Access
The Mother of All Demos
References
External links
Raj Lal "User Interface evolution in last 50 years", Digital Design and Innovation Summit, San Francisco, Sept 20, 2013
Jeremy Reimer. "A History of the GUI" Ars Technica. May 5, 2005.
"User Interface Timeline" George Mason University
Nathan Lineback. "The Graphical User Interface Gallery". Nathan's Toasty Technology Page.
Oral history interview with Marvin L. Minsky, Charles Babbage Institute, University of Minnesota. Minsky describes artificial intelligence (AI) research at the Massachusetts Institute of Technology (MIT), including research in the areas of graphics, word processing, and time-sharing.
Oral history interview with Ivan Sutherland, Charles Babbage Institute, University of Minnesota. Sutherland describes his tenure as head of ARPA's Information Processing Techniques Office (IPTO) from 1963 to 1965, including new projects in graphics and networking.
Oral history interview with Charles A. Csuri, Charles Babbage Institute, University of Minnesota. Csuri recounts his art education and explains his transition to computer graphics in the mid-1960s, after receiving a National Science Foundation grant for research in graphics.
GUIdebook: Graphical User Interface gallery
VisiOn history – The first GUI for the PC
mprove: Historical Overview of Graphical User Interfaces
Anecdotes about the development of the Macintosh Hardware & GUI
Firsts: The Demo – Doug Engelbart Institute
History of human–computer interaction
History of computing | History of the graphical user interface | [
"Technology"
] | 8,556 | [
"History of human–computer interaction",
"Computers",
"History of computing"
] |
13,980 | https://en.wikipedia.org/wiki/Homeostasis | In biology, homeostasis (British also homoeostasis; ) is the state of steady internal physical and chemical conditions maintained by living systems. This is the condition of optimal functioning for the organism and includes many variables, such as body temperature and fluid balance, being kept within certain pre-set limits (homeostatic range). Other variables include the pH of extracellular fluid, the concentrations of sodium, potassium, and calcium ions, as well as the blood sugar level, and these need to be regulated despite changes in the environment, diet, or level of activity. Each of these variables is controlled by one or more regulators or homeostatic mechanisms, which together maintain life.
Homeostasis is brought about by a natural resistance to change when already in optimal conditions, and equilibrium is maintained by many regulatory mechanisms; it is thought to be the central motivation for all organic action. All homeostatic control mechanisms have at least three interdependent components for the variable being regulated: a receptor, a control center, and an effector. The receptor is the sensing component that monitors and responds to changes in the environment, either external or internal. Receptors include thermoreceptors and mechanoreceptors. Control centers include the respiratory center and the renin-angiotensin system. An effector is the target acted on, to bring about the change back to the normal state. At the cellular level, effectors include nuclear receptors that bring about changes in gene expression through up-regulation or down-regulation and act in negative feedback mechanisms. An example of this is in the control of bile acids in the liver.
Some centers, such as the renin–angiotensin system, control more than one variable. When the receptor senses a stimulus, it reacts by sending action potentials to a control center. The control center sets the maintenance range—the acceptable upper and lower limits—for the particular variable, such as temperature. The control center responds to the signal by determining an appropriate response and sending signals to an effector, which can be one or more muscles, an organ, or a gland. When the signal is received and acted on, negative feedback is provided to the receptor that stops the need for further signaling.
The cannabinoid receptor type 1 (CB1), located at the presynaptic neuron, is a receptor that can stop stressful neurotransmitter release to the postsynaptic neuron. It is activated by endocannabinoids such as anandamide (N-arachidonoylethanolamide) and 2-arachidonoylglycerol via a retrograde signaling process, in which these compounds are synthesized by and released from postsynaptic neurons and travel back to the presynaptic terminal to bind to the CB1 receptor, modulating neurotransmitter release so as to maintain homeostasis.
The polyunsaturated fatty acids are lipid derivatives of omega-3 (docosahexaenoic acid and eicosapentaenoic acid) or of omega-6 (arachidonic acid). They are synthesized from membrane phospholipids and used as precursors for endocannabinoids, mediating significant effects in the fine-tuning of body homeostasis.
Etymology
The word homeostasis () uses combining forms of homeo- and -stasis, Neo-Latin from Greek: ὅμοιος homoios, "similar" and στάσις stasis, "standing still", yielding the idea of "staying the same".
History
The concept of the regulation of the internal environment was described by French physiologist Claude Bernard in 1849, and the word homeostasis was coined by Walter Bradford Cannon in 1926. In 1932, Joseph Barcroft, a British physiologist, was the first to say that higher brain function required the most stable internal environment. Thus, to Barcroft, homeostasis was not only organized by the brain; homeostasis served the brain. Homeostasis is an almost exclusively biological term, referring to the concepts described by Bernard and Cannon, concerning the constancy of the internal environment in which the cells of the body live and survive. The term cybernetics is applied to technological control systems such as thermostats, which function as homeostatic mechanisms, but it is often defined much more broadly than the biological term homeostasis.
Overview
The metabolic processes of all organisms can only take place in very specific physical and chemical environments. The conditions vary with each organism, and with whether the chemical processes take place inside the cell or in the interstitial fluid bathing the cells. The best-known homeostatic mechanisms in humans and other mammals are regulators that keep the composition of the extracellular fluid (or the "internal environment") constant, especially with regard to the temperature, pH, osmolality, and the concentrations of sodium, potassium, glucose, carbon dioxide, and oxygen. However, a great many other homeostatic mechanisms, encompassing many aspects of human physiology, control other entities in the body. Where the levels of variables are higher or lower than those needed, they are often prefixed with hyper- and hypo-, respectively, such as hyperthermia and hypothermia or hypertension and hypotension.
That an entity is homeostatically controlled does not imply that its value is necessarily absolutely steady in health. Core body temperature is, for instance, regulated by a homeostatic mechanism with temperature sensors in, amongst others, the hypothalamus of the brain. However, the set point of the regulator is regularly reset. For instance, core body temperature in humans varies during the course of the day (i.e. has a circadian rhythm), with the lowest temperatures occurring at night, and the highest in the afternoons. Other normal temperature variations include those related to the menstrual cycle. The temperature regulator's set point is reset during infections to produce a fever. Organisms are capable of adjusting somewhat to varied conditions such as temperature changes or oxygen levels at altitude, by a process of acclimatisation.
Homeostasis does not govern every activity in the body. For instance, the signal (be it via neurons or hormones) from the sensor to the effector is, of necessity, highly variable in order to convey information about the direction and magnitude of the error detected by the sensor. Similarly, the effector's response needs to be highly adjustable to reverse the error – in fact it should be very nearly in proportion (but in the opposite direction) to the error that is threatening the internal environment. For instance, arterial blood pressure in mammals is homeostatically controlled and measured by stretch receptors in the walls of the aortic arch and carotid sinuses at the beginnings of the internal carotid arteries. The sensors send messages via sensory nerves to the medulla oblongata of the brain indicating whether the blood pressure has fallen or risen, and by how much. The medulla oblongata then distributes messages along motor or efferent nerves belonging to the autonomic nervous system to a wide variety of effector organs, whose activity is consequently changed to reverse the error in the blood pressure. One of the effector organs is the heart whose rate is stimulated to rise (tachycardia) when the arterial blood pressure falls, or to slow down (bradycardia) when the pressure rises above the set point. Thus the heart rate (for which there is no sensor in the body) is not homeostatically controlled but is one of the effector responses to errors in arterial blood pressure. Another example is the rate of sweating. This is one of the effectors in the homeostatic control of body temperature, and therefore highly variable in rough proportion to the heat load that threatens to destabilize the body's core temperature, for which there is a sensor in the hypothalamus of the brain.
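The proportional, sign-reversed character of such an effector response can be illustrated with a toy numerical sketch (not a physiological model; the set point, starting value and gain below are arbitrary values chosen only for the illustration):

    def simulate_feedback(set_point=100.0, value=130.0, gain=0.3, steps=10):
        """Toy negative-feedback loop: the effector response is proportional
        to, and opposite in sign from, the error reported by the sensor."""
        history = [value]
        for _ in range(steps):
            error = value - set_point      # sensor: how far from the set point?
            value += -gain * error         # effector: act against the error
            history.append(round(value, 2))
        return history

    # Starting 30 units above the set point, the error shrinks every cycle.
    print(simulate_feedback())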
Controls of variables
Core temperature
Mammals regulate their core temperature using input from thermoreceptors in the hypothalamus, brain, spinal cord, internal organs, and great veins. Apart from the internal regulation of temperature, a process called allostasis can come into play that adjusts behaviour to adapt to the challenge of very hot or cold extremes (and to other challenges). These adjustments may include seeking shade and reducing activity, seeking warmer conditions and increasing activity, or huddling.
Behavioral thermoregulation takes precedence over physiological thermoregulation since necessary changes can be effected more quickly and physiological thermoregulation is limited in its capacity to respond to extreme temperatures.
When the core temperature falls, the blood supply to the skin is reduced by intense vasoconstriction. The blood flow to the limbs (which have a large surface area) is similarly reduced and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system that short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, not only reducing heat loss from this source but also forcing the venous blood into the counter-current system in the depths of the limbs.
The metabolic rate is increased, initially by non-shivering thermogenesis, followed by shivering thermogenesis if the earlier reactions are insufficient to correct the hypothermia.
When a rise in core temperature is detected by thermoreceptors, the sweat glands in the skin are stimulated via cholinergic sympathetic nerves to secrete sweat onto the skin, which, when it evaporates, cools the skin and the blood flowing through it. Panting is an alternative effector in many vertebrates, which also cools the body by the evaporation of water, in this case from the mucous membranes of the throat and mouth.
Blood glucose
Blood sugar levels are regulated within fairly narrow limits. In mammals, the primary sensors for this are the beta cells of the pancreatic islets. The beta cells respond to a rise in the blood sugar level by secreting insulin into the blood and simultaneously inhibiting their neighboring alpha cells from secreting glucagon into the blood. This combination (high blood insulin levels and low glucagon levels) acts on effector tissues, chiefly the liver, fat cells, and muscle cells. The liver is inhibited from producing glucose, taking it up instead, and converting it to glycogen and triglycerides. The glycogen is stored in the liver, but the triglycerides are secreted into the blood as very low-density lipoprotein (VLDL) particles which are taken up by adipose tissue, there to be stored as fats. The fat cells take up glucose through special glucose transporters (GLUT4), whose numbers in the cell wall are increased as a direct effect of insulin acting on these cells. The glucose that enters the fat cells in this manner is converted into triglycerides (via the same metabolic pathways as are used by the liver) and then stored in those fat cells together with the VLDL-derived triglycerides that were made in the liver. Muscle cells also take glucose up through insulin-sensitive GLUT4 glucose channels, and convert it into muscle glycogen.
A fall in blood glucose causes insulin secretion to be stopped and glucagon to be secreted from the alpha cells into the blood. This inhibits the uptake of glucose from the blood by the liver, fat cells, and muscle. Instead the liver is strongly stimulated to manufacture glucose from glycogen (through glycogenolysis) and from non-carbohydrate sources (such as lactate and de-aminated amino acids) using a process known as gluconeogenesis. The glucose thus produced is discharged into the blood, correcting the detected error (hypoglycemia). The glycogen stored in muscles remains in the muscles and is only broken down, during exercise, to glucose-6-phosphate and thence to pyruvate to be fed into the citric acid cycle or turned into lactate. Only the lactate and the waste products of the citric acid cycle are returned to the blood. The liver can take up only the lactate and, by the process of energy-consuming gluconeogenesis, convert it back to glucose.
Iron levels
Controlling iron levels in the body is a critically important part of many aspects of human health and disease. In humans iron is both necessary to the body and potentially harmful.
Copper regulation
Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals.
Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+ (cuprous) and Cu2+ (cupric). As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin.
Levels of blood gases
Changes in the levels of oxygen, carbon dioxide, and plasma pH are sent to the respiratory center in the brainstem, where they are regulated.
The partial pressure of oxygen and carbon dioxide in the arterial blood is monitored by the peripheral chemoreceptors (PNS) in the carotid artery and aortic arch. A change in the partial pressure of carbon dioxide is detected as altered pH in the cerebrospinal fluid by central chemoreceptors (CNS) in the medulla oblongata of the brainstem. Information from these sets of sensors is sent to the respiratory center which activates the effector organs – the diaphragm and other muscles of respiration. An increased level of carbon dioxide in the blood, or a decreased level of oxygen, will result in a deeper breathing pattern and increased respiratory rate to bring the blood gases back to equilibrium.
Too little carbon dioxide, and, to a lesser extent, too much oxygen in the blood can temporarily halt breathing, a condition known as apnea, which freedivers use to prolong the time they can stay underwater.
The partial pressure of carbon dioxide is more of a deciding factor in the monitoring of pH. However, at high altitude (above 2500 m) the monitoring of the partial pressure of oxygen takes priority, and hyperventilation keeps the oxygen level constant. With the lower level of carbon dioxide, to keep the pH at 7.4 the kidneys secrete hydrogen ions into the blood and excrete bicarbonate into the urine. This is important in acclimatization to high altitude.
Blood oxygen content
The kidneys measure the oxygen content rather than the partial pressure of oxygen in the arterial blood. When the oxygen content of the blood is chronically low, oxygen-sensitive cells secrete erythropoietin (EPO) into the blood. The effector tissue is the red bone marrow which produces red blood cells (RBCs, also called erythrocytes). The increase in RBCs leads to an increased hematocrit in the blood, and a subsequent increase in hemoglobin that increases the oxygen-carrying capacity. This is the mechanism whereby high altitude dwellers have higher hematocrits than sea-level residents, and also why persons with pulmonary insufficiency or right-to-left shunts in the heart (through which venous blood by-passes the lungs and goes directly into the systemic circulation) have similarly high hematocrits.
Regardless of the partial pressure of oxygen in the blood, the amount of oxygen that can be carried depends on the hemoglobin content. The partial pressure of oxygen may be sufficient, for example in anemia, but the hemoglobin content, and subsequently the oxygen content, will be insufficient. Given an adequate supply of iron, vitamin B12 and folic acid, EPO can stimulate RBC production, and the hemoglobin and oxygen content are restored to normal.
Arterial blood pressure
The brain can regulate blood flow over a range of blood pressure values by vasoconstriction and vasodilation of the arteries.
High-pressure receptors called baroreceptors in the walls of the aortic arch and carotid sinus (at the beginning of the internal carotid artery) monitor the arterial blood pressure. Rising pressure is detected when the walls of the arteries stretch due to an increase in blood volume. This causes heart muscle cells to secrete the hormone atrial natriuretic peptide (ANP) into the blood. This acts on the kidneys to inhibit the secretion of renin and aldosterone, causing the release of sodium and accompanying water into the urine, thereby reducing the blood volume.
This information is then conveyed, via afferent nerve fibers, to the solitary nucleus in the medulla oblongata. From here motor nerves belonging to the autonomic nervous system are stimulated to influence the activity of chiefly the heart and the smallest diameter arteries, called arterioles. The arterioles are the main resistance vessels in the arterial tree, and small changes in diameter cause large changes in the resistance to flow through them. When the arterial blood pressure rises the arterioles are stimulated to dilate making it easier for blood to leave the arteries, thus deflating them, and bringing the blood pressure down, back to normal. At the same time, the heart is stimulated via cholinergic parasympathetic nerves to beat more slowly (called bradycardia), ensuring that the inflow of blood into the arteries is reduced, thus adding to the reduction in pressure, and correcting the original error.
Low pressure in the arteries causes the opposite reflex of constriction of the arterioles and a speeding up of the heart rate (called tachycardia). If the drop in blood pressure is very rapid or excessive, the medulla oblongata stimulates the adrenal medulla, via "preganglionic" sympathetic nerves, to secrete epinephrine (adrenaline) into the blood. This hormone enhances the tachycardia and causes severe vasoconstriction of the arterioles to all but the essential organs in the body (especially the heart, lungs, and brain). These reactions usually correct the low arterial blood pressure (hypotension) very effectively.
Calcium levels
The plasma ionized calcium (Ca2+) concentration is very tightly controlled by a pair of homeostatic mechanisms. The sensor for the first one is situated in the parathyroid glands, where the chief cells sense the Ca2+ level by means of specialized calcium receptors in their membranes. The sensors for the second are the parafollicular cells in the thyroid gland. The parathyroid chief cells secrete parathyroid hormone (PTH) in response to a fall in the plasma ionized calcium level; the parafollicular cells of the thyroid gland secrete calcitonin in response to a rise in the plasma ionized calcium level.
The effector organs of the first homeostatic mechanism are the bones, the kidney, and, via a hormone released into the blood by the kidney in response to high PTH levels in the blood, the duodenum and jejunum. Parathyroid hormone (in high concentrations in the blood) causes bone resorption, releasing calcium into the plasma. This is a very rapid action which can correct a threatening hypocalcemia within minutes. High PTH concentrations cause the excretion of phosphate ions via the urine. Since phosphates combine with calcium ions to form insoluble salts (see also bone mineral), a decrease in the level of phosphates in the blood, releases free calcium ions into the plasma ionized calcium pool. PTH has a second action on the kidneys. It stimulates the manufacture and release, by the kidneys, of calcitriol into the blood. This steroid hormone acts on the epithelial cells of the upper small intestine, increasing their capacity to absorb calcium from the gut contents into the blood.
The second homeostatic mechanism, with its sensors in the thyroid gland, releases calcitonin into the blood when the blood ionized calcium rises. This hormone acts primarily on bone, causing the rapid removal of calcium from the blood and depositing it, in insoluble form, in the bones.
The two homeostatic mechanisms, working through PTH on the one hand and calcitonin on the other, can very rapidly correct any impending error in the plasma ionized calcium level by either removing calcium from the blood and depositing it in the skeleton, or by removing calcium from the skeleton and returning it to the blood. The skeleton acts as an extremely large calcium store (about 1 kg) compared with the plasma calcium store (about 180 mg). Longer-term regulation occurs through calcium absorption or loss from the gut.
Other examples are the best-characterised endocannabinoids, such as anandamide (N-arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG), whose synthesis occurs through the action of a series of intracellular enzymes activated in response to a rise in intracellular calcium levels. These compounds help to maintain homeostasis and to prevent tumor development through putative protective mechanisms that prevent cell growth and migration by activation of CB1 and/or CB2 and adjoining receptors.
Sodium concentration
The homeostatic mechanism which controls the plasma sodium concentration is rather more complex than most of the other homeostatic mechanisms described on this page.
The sensor is situated in the juxtaglomerular apparatus of the kidneys, which senses the plasma sodium concentration in a surprisingly indirect manner. Instead of measuring it directly in the blood flowing past the juxtaglomerular cells, these cells respond to the sodium concentration in the renal tubular fluid after it has already undergone a certain amount of modification in the proximal convoluted tubule and loop of Henle. These cells also respond to the rate of blood flow through the juxtaglomerular apparatus, which, under normal circumstances, is directly proportional to the arterial blood pressure, making this tissue an ancillary arterial blood pressure sensor.
In response to a lowering of the plasma sodium concentration, or to a fall in the arterial blood pressure, the juxtaglomerular cells release renin into the blood. Renin is an enzyme which cleaves a decapeptide (a short protein chain, 10 amino acids long) from a plasma α-2-globulin called angiotensinogen. This decapeptide is known as angiotensin I. It has no known biological activity. However, when the blood circulates through the lungs a pulmonary capillary endothelial enzyme called angiotensin-converting enzyme (ACE) cleaves a further two amino acids from angiotensin I to form an octapeptide known as angiotensin II. Angiotensin II is a hormone which acts on the adrenal cortex, causing the release into the blood of the steroid hormone, aldosterone. Angiotensin II also acts on the smooth muscle in the walls of the arterioles causing these small diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree, causing the arterial blood pressure to rise. This, therefore, reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension.
The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect particularly on the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid, in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine. The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body and therefore prevents the worsening of hyponatremia. The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about.
When the plasma sodium ion concentration is higher than normal (hypernatremia), the release of renin from the juxtaglomerular apparatus is halted, ceasing the production of angiotensin II, and its consequent aldosterone-release into the blood. The kidneys respond by excreting sodium ions into the urine, thereby normalizing the plasma sodium ion concentration. The low angiotensin II levels in the blood lower the arterial blood pressure as an inevitable concomitant response.
The reabsorption of sodium ions from the tubular fluid as a result of high aldosterone levels in the blood does not, of itself, cause renal tubular water to be returned to the blood from the distal convoluted tubules or collecting ducts. This is because sodium is reabsorbed in exchange for potassium and therefore causes only a modest change in the osmotic gradient between the blood and the tubular fluid. Furthermore, the epithelium of the distal convoluted tubules and collecting ducts is impermeable to water in the absence of antidiuretic hormone (ADH) in the blood. ADH is part of the control of fluid balance. Its levels in the blood vary with the osmolality of the plasma, which is measured in the hypothalamus of the brain. Aldosterone's action on the kidney tubules prevents sodium loss to the extracellular fluid (ECF). So there is no change in the osmolality of the ECF, and therefore no change in the ADH concentration of the plasma. However, low aldosterone levels cause a loss of sodium ions from the ECF, which could potentially cause a change in extracellular osmolality and therefore of ADH levels in the blood.
Potassium concentration
High potassium concentrations in the plasma cause depolarization of the zona glomerulosa cells' membranes in the outer layer of the adrenal cortex. This causes the release of aldosterone into the blood.
Aldosterone acts primarily on the distal convoluted tubules and collecting ducts of the kidneys, stimulating the excretion of potassium ions into the urine. It does so, however, by activating the basolateral Na+/K+ pumps of the tubular epithelial cells. These sodium/potassium exchangers pump three sodium ions out of the cell into the interstitial fluid, and two potassium ions into the cell from the interstitial fluid. This creates an ionic concentration gradient which results in the reabsorption of sodium (Na+) ions from the tubular fluid into the blood, and the secretion of potassium (K+) ions from the blood into the urine (the lumen of the collecting duct).
Fluid balance
The total amount of water in the body needs to be kept in balance. Fluid balance involves keeping the fluid volume stabilized, and also keeping the levels of electrolytes in the extracellular fluid stable. Fluid balance is maintained by the process of osmoregulation and by behavior. Osmotic pressure is detected by osmoreceptors in the median preoptic nucleus in the hypothalamus. Measurement of the plasma osmolality to give an indication of the water content of the body relies on the fact that water losses from the body (through unavoidable water loss through the skin which is not entirely waterproof and therefore always slightly moist, water vapor in the exhaled air, sweating, vomiting, normal feces and especially diarrhea) are all hypotonic, meaning that they are less salty than the body fluids (compare, for instance, the taste of saliva with that of tears. The latter has almost the same salt content as the extracellular fluid, whereas the former is hypotonic with respect to the plasma. Saliva does not taste salty, whereas tears are decidedly salty). Nearly all normal and abnormal losses of body water therefore cause the extracellular fluid to become hypertonic. Conversely, excessive fluid intake dilutes the extracellular fluid, causing the hypothalamus to register hypotonic hyponatremia conditions.
When the hypothalamus detects a hypertonic extracellular environment, it causes the secretion of an antidiuretic hormone (ADH) called vasopressin which acts on the effector organ, which in this case is the kidney. The effect of vasopressin on the kidney tubules is to reabsorb water from the distal convoluted tubules and collecting ducts, thus preventing aggravation of the water loss via the urine. The hypothalamus simultaneously stimulates the nearby thirst center causing an almost irresistible (if the hypertonicity is severe enough) urge to drink water. The cessation of urine flow prevents the hypovolemia and hypertonicity from getting worse; the drinking of water corrects the defect.
Hypo-osmolality results in very low plasma ADH levels. This results in the inhibition of water reabsorption from the kidney tubules, causing high volumes of very dilute urine to be excreted, thus getting rid of the excess water in the body.
Urinary water loss, when the body water homeostat is intact, is a compensatory water loss, correcting any water excess in the body. However, since the kidneys cannot generate water, the thirst reflex is the all-important second effector mechanism of the body water homeostat, correcting any water deficit in the body.
Blood pH
The plasma pH can be altered by respiratory changes in the partial pressure of carbon dioxide; or altered by metabolic changes in the carbonic acid to bicarbonate ion ratio. The bicarbonate buffer system regulates the ratio of carbonic acid to bicarbonate to be equal to 1:20, at which ratio the blood pH is 7.4 (as explained in the Henderson–Hasselbalch equation). A change in the plasma pH gives an acid–base imbalance.
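As a quick numerical check of the 1:20 ratio and the pH of 7.4 quoted above, the Henderson–Hasselbalch relation can be evaluated directly. A minimal sketch in Python follows; the pKa of about 6.1 for the bicarbonate buffer system is the usual textbook value and is an assumption here rather than a figure taken from this article.

```python
import math

# Henderson-Hasselbalch equation: pH = pKa + log10([HCO3-] / [H2CO3])
# pKa of ~6.1 for the carbonic acid / bicarbonate system is assumed (textbook value).
pKa = 6.1
bicarbonate_to_carbonic_acid = 20.0   # the 1:20 carbonic acid : bicarbonate ratio from the text

pH = pKa + math.log10(bicarbonate_to_carbonic_acid)
print(round(pH, 2))   # -> 7.4, the normal plasma pH
```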
In acid–base homeostasis there are two mechanisms that can help regulate the pH. Respiratory compensation, a mechanism of the respiratory center, adjusts the partial pressure of carbon dioxide by changing the rate and depth of breathing, to bring the pH back to normal. The partial pressure of carbon dioxide also determines the concentration of carbonic acid, and the bicarbonate buffer system can also come into play. Renal compensation can help the bicarbonate buffer system.
The sensor for the plasma bicarbonate concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces carbon dioxide, which is rapidly converted to hydrogen ions and bicarbonate ions through the action of carbonic anhydrase. When the ECF pH falls (becoming more acidic) the renal tubular cells excrete hydrogen ions into the tubular fluid to leave the body via urine. Bicarbonate ions are simultaneously secreted into the blood, which decreases the carbonic acid and consequently raises the plasma pH. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions are released into the plasma.
When hydrogen ions are excreted into the urine, and bicarbonate into the blood, the latter combines with the excess hydrogen ions in the plasma that stimulated the kidneys to perform this operation. The resulting reaction in the plasma is the formation of carbonic acid which is in equilibrium with the plasma partial pressure of carbon dioxide. This is tightly regulated to ensure that there is no excessive build-up of carbonic acid or bicarbonate. The overall effect is therefore that hydrogen ions are lost in the urine when the pH of the plasma falls. The concomitant rise in the plasma bicarbonate mops up the increased hydrogen ions (caused by the fall in plasma pH) and the resulting excess carbonic acid is disposed of in the lungs as carbon dioxide. This restores the normal ratio between bicarbonate and the partial pressure of carbon dioxide and therefore the plasma pH.
The converse happens when a high plasma pH stimulates the kidneys to secrete hydrogen ions into the blood and to excrete bicarbonate into the urine. The hydrogen ions combine with the excess bicarbonate ions in the plasma, once again forming an excess of carbonic acid which can be exhaled, as carbon dioxide, in the lungs, keeping the plasma bicarbonate ion concentration, the partial pressure of carbon dioxide and, therefore, the plasma pH, constant.
Cerebrospinal fluid
Cerebrospinal fluid (CSF) allows for regulation of the distribution of substances between cells of the brain and of neuroendocrine factors, slight changes in which can cause problems or damage to the nervous system. For example, a high glycine concentration disrupts temperature and blood pressure control, and a high CSF pH causes dizziness and syncope.
Neurotransmission
Inhibitory neurons in the central nervous system play a homeostatic role in the balance of neuronal activity between excitation and inhibition. Inhibitory neurons using GABA, make compensating changes in the neuronal networks preventing runaway levels of excitation. An imbalance between excitation and inhibition is seen to be implicated in a number of neuropsychiatric disorders.
Neuroendocrine system
The neuroendocrine system is the mechanism by which the hypothalamus maintains homeostasis, regulating metabolism, reproduction, eating and drinking behaviour, energy utilization, osmolarity and blood pressure.
The regulation of metabolism is carried out by hypothalamic interconnections to other glands.
Three endocrine glands of the hypothalamic–pituitary–gonadal axis (HPG axis) often work together and have important regulatory functions. Two other regulatory endocrine axes are the hypothalamic–pituitary–adrenal axis (HPA axis) and the hypothalamic–pituitary–thyroid axis (HPT axis).
The liver also has many regulatory functions in metabolism. An important function is the production and control of bile acids. Too much bile acid can be toxic to cells, and its synthesis can be inhibited by activation of FXR, a nuclear receptor.
Gene regulation
At the cellular level, homeostasis is carried out by several mechanisms including transcriptional regulation that can alter the activity of genes in response to changes.
Energy balance
The amount of energy taken in through nutrition needs to match the amount of energy used. To achieve energy homeostasis appetite is regulated by two hormones, ghrelin and leptin. Ghrelin stimulates hunger and the intake of food, and leptin acts to signal satiety (fullness).
A 2019 review of weight-change interventions, including dieting, exercise and overeating, found that body weight homeostasis could not precisely correct for "energetic errors", the loss or gain of calories, in the short-term.
Clinical significance
Many diseases are the result of a homeostatic failure. Almost any homeostatic component can malfunction either as a result of an inherited defect, an inborn error of metabolism, or an acquired disease. Some homeostatic mechanisms have inbuilt redundancies, which ensures that life is not immediately threatened if a component malfunctions; but sometimes a homeostatic malfunction can result in serious disease, which can be fatal if not treated. A well-known example of a homeostatic failure is shown in type 1 diabetes mellitus. Here blood sugar regulation is unable to function because the beta cells of the pancreatic islets are destroyed and cannot produce the necessary insulin. The blood sugar rises in a condition known as hyperglycemia.
The plasma ionized calcium homeostat can be disrupted by the constant, unchanging over-production of parathyroid hormone by a parathyroid adenoma, resulting in the typical features of hyperparathyroidism, namely high plasma ionized Ca2+ levels and the resorption of bone, which can lead to spontaneous fractures. The abnormally high plasma ionized calcium concentrations cause conformational changes in many cell-surface proteins (especially ion channels and hormone or neurotransmitter receptors), giving rise to lethargy, muscle weakness, anorexia, constipation and labile emotions.
The body water homeostat can be compromised by the inability to secrete ADH in response to even the normal daily water losses via the exhaled air, the feces, and insensible sweating. On receiving a zero blood ADH signal, the kidneys produce huge unchanging volumes of very dilute urine, causing dehydration and death if not treated.
As organisms age, the efficiency of their control systems becomes reduced. The inefficiencies gradually result in an unstable internal environment that increases the risk of illness, and leads to the physical changes associated with aging.
Various chronic diseases are kept under control by homeostatic compensation, which masks a problem by compensating for it (making up for it) in another way. However, the compensating mechanisms eventually wear out or are disrupted by a new complicating factor (such as the advent of a concurrent acute viral infection), which sends the body reeling through a new cascade of events. Such decompensation unmasks the underlying disease, worsening its symptoms. Common examples include decompensated heart failure, kidney failure, and liver failure.
Biosphere
In the Gaia hypothesis, James Lovelock stated that the entire mass of living matter on Earth (or any planet with life) functions as a vast homeostatic superorganism that actively modifies its planetary environment to produce the environmental conditions necessary for its own survival. In this view, the entire planet maintains several homeostatic equilibria (the primary one being temperature homeostasis). Whether this sort of system is present on Earth is open to debate. However, some relatively simple homeostatic mechanisms are generally accepted. For example, it is sometimes claimed that when atmospheric carbon dioxide levels rise, certain plants may be able to grow better and thus act to remove more carbon dioxide from the atmosphere. However, warming has exacerbated droughts, making water the actual limiting factor on land. When sunlight is plentiful and the atmospheric temperature climbs, it has been claimed that the phytoplankton of the ocean surface waters, acting as global sunshine, and therefore heat, sensors, may thrive and produce more dimethyl sulfide (DMS). The DMS molecules act as cloud condensation nuclei, which produce more clouds, and thus increase the atmospheric albedo, and this feeds back to lower the temperature of the atmosphere. However, rising sea temperature has stratified the oceans, separating warm, sunlit waters from cool, nutrient-rich waters. Thus, nutrients have become the limiting factor, and plankton levels have actually fallen over the past 50 years, not risen. As scientists discover more about Earth, vast numbers of positive and negative feedback loops are being discovered that, together, maintain a metastable condition, sometimes within a very broad range of environmental conditions.
Predictive
Predictive homeostasis is an anticipatory response to an expected challenge in the future, such as the stimulation of insulin secretion by gut hormones which enter the blood in response to a meal. This insulin secretion occurs before the blood sugar level rises, lowering the blood sugar level in anticipation of a large influx into the blood of glucose resulting from the digestion of carbohydrates in the gut. Such anticipatory reactions are open loop systems which are based, essentially, on "guess work", and are not self-correcting. Anticipatory responses always require a closed loop negative feedback system to correct the 'over-shoots' and 'under-shoots' to which the anticipatory systems are prone.
Other fields
The term has come to be used in other fields, for example:
Risk
An actuary may refer to risk homeostasis, where (for example) people who have anti-lock brakes have no better safety record than those without anti-lock brakes, because the former unconsciously compensate for the safer vehicle via less-safe driving habits. Previous to the innovation of anti-lock brakes, certain maneuvers involved minor skids, evoking fear and avoidance: Now the anti-lock system moves the boundary for such feedback, and behavior patterns expand into the no-longer punitive area. It has also been suggested that ecological crises are an instance of risk homeostasis in which a particular behavior continues until proven dangerous or dramatic consequences actually occur.
Stress
Sociologists and psychologists may refer to stress homeostasis, the tendency of a population or an individual to stay at a certain level of stress, often generating artificial stresses if the "natural" level of stress is not enough.
Jean-François Lyotard, a postmodern theorist, has applied this term to societal 'power centers' that he describes in The Postmodern Condition, as being 'governed by a principle of homeostasis,' for example, the scientific hierarchy, which will sometimes ignore a radical new discovery for years because it destabilizes previously accepted norms.
Technology
Familiar technological homeostatic mechanisms include:
A thermostat operates by switching heaters or air-conditioners on and off in response to the output of a temperature sensor.
Cruise control adjusts a car's throttle in response to changes in speed.
An autopilot operates the steering controls of an aircraft or ship in response to deviation from a pre-set compass bearing or route.
Process control systems in a chemical plant or oil refinery maintain fluid levels, pressures, temperature, chemical composition, etc. by controlling heaters, pumps and valves.
The centrifugal governor of a steam engine, as designed by James Watt in 1788, reduces the throttle valve in response to increases in the engine speed, or opens the valve if the speed falls below the pre-set rate.
Society and culture
The use of sovereign power, codes of conduct, religious and cultural practices and other dynamic processes in a society can be described as a part of an evolved homeostatic system of regularizing life and maintaining an overall equilibrium that protects the security of the whole from internal and external imbalances or dangers. Healthy civic cultures can be said to have achieved an optimal homeostatic balance between multiple contradictory concerns such as in the tension between respect for individual rights and concern for the public good, or that between governmental effectiveness and responsiveness to the interests of citizens.
See also
References
Further reading
External links
Homeostasis
Walter Bradford Cannon, Homeostasis (1932)
Physiology
Biology terminology
Cybernetics | Homeostasis | [
"Biology"
] | 9,177 | [
"nan",
"Physiology",
"Homeostasis"
] |
14,004 | https://en.wikipedia.org/wiki/Hour | An hour (symbol: h; also abbreviated hr) is a unit of time historically reckoned as 1/24 of a day and defined contemporarily as exactly 3,600 seconds (SI). There are 60 minutes in an hour, and 24 hours in a day.
The hour was initially established in the ancient Near East as a variable measure of 1/12 of the night or daytime. Such seasonal hours, also known as temporal hours or unequal hours, varied by season and latitude.
Equal hours or equinoctial hours were taken as 1/24 of the day as measured from noon to noon; the minor seasonal variations of this unit were eventually smoothed by making it 1/24 of the mean solar day. Since this unit was not constant due to long term variations in the Earth's rotation, the hour was finally separated from the Earth's rotation and defined in terms of the atomic or physical second.
In the modern metric system, hours are an accepted unit of time defined as 3,600 atomic seconds. However, on rare occasions an hour may incorporate a positive or negative leap second, effectively making it appear to last 3,599 or 3,601 seconds, in order to keep UTC within 0.9 seconds of UT1, the latter of which is based on measurements of the mean solar day.
Etymology
Hour is a development of Anglo-Norman and Middle English forms of the word, first attested in the 13th century.
It displaced tide (Old English tīd, 'time') and stound (stund, 'span of time'). The Anglo-Norman term was a borrowing of Old French, which derived in turn from Latin hōra and Greek hṓrā (ὥρα).
Like Old English tīd and stund, hṓrā was originally a vaguer word for any span of time, including seasons and years. Its Proto-Indo-European root has been reconstructed with the sense of "year, summer", making hour distantly cognate with year.
The time of day is typically expressed in English in terms of hours. Whole hours on a 12-hour clock are expressed using the contracted phrase o'clock, from the older of the clock. (10 am and 10 pm are both read as "ten o'clock".)
Hours on a 24-hour clock ("military time") are expressed as "hundred" or "hundred hours". (1000 is read "ten hundred" or "ten hundred hours"; 10 pm would be "twenty-two hundred".)
Fifteen and thirty minutes past the hour is expressed as "a quarter past" or "after" and "half past", respectively, from their fraction of the hour. Fifteen minutes before the hour may be expressed as "a quarter to", "of", "till", or "before" the hour. (9:45 may be read "nine forty-five" or "a quarter till ten".)
History
Antiquity
Ancient Egypt
In ancient Egypt the flooding of the Nile was, and still is, an important annual event, crucial for agriculture. It was accompanied by the rise of Sirius before the sunrise, and the appearance of 12 constellations across the night sky, to which the Egyptians assigned some significance. Influenced by this, the Egyptians divided the night into 12 equal intervals. These were seasonal hours, shorter in the summer than in the winter. Subsequently, the day was divided into intervals as well, which eventually became more important than the nightly intervals. These subdivisions of a day spread to Greece, and later to Rome.
Ancient Greece
The ancient Greeks kept time differently than is done today. Instead of dividing the time between one midnight and the next into 24 equal hours, they divided the time from sunrise to sunset into 12 "seasonal hours" (their actual duration depending on season), and the time from sunset to the next sunrise again in 12 "seasonal hours". Initially, only the day was divided into 12 seasonal hours and the night into three or four night watches.
By the Hellenistic period the night was also divided into 12 hours. The day-and-night (nychthemeron) was probably first divided into 24 hours by Hipparchus of Nicaea. The Greek astronomer Andronicus of Cyrrhus oversaw the construction of a horologion called the Tower of the Winds in Athens during the first century BCE. This structure tracked a 24-hour day using both sundials and mechanical hour indicators.
The canonical hours were inherited into early Christianity from Second Temple Judaism.
By AD 60, the Didache recommends disciples to pray the Lord's Prayer three times a day; this practice found its way into the canonical hours as well. By the second and third centuries, such Church Fathers as Clement of Alexandria, Origen, and Tertullian wrote of the practice of Morning and Evening Prayer, and of the prayers at the third, sixth and ninth hours.
In the early church, during the night before every feast, a vigil was kept. The word "Vigils", at first applied to the Night Office, comes from a Latin source, namely the Vigiliae or nocturnal watches or guards of the soldiers. The night from six o'clock in the evening to six o'clock in the morning was divided into four watches or vigils of three hours each, the first, the second, the third, and the fourth vigil.
The Horae were originally personifications of seasonal aspects of nature, not of the time of day.
The list of 12 Horae representing the 12 hours of the day is recorded only in Late Antiquity, by Nonnus. The first and twelfth of the Horae were added to the original set of ten:
Auge (first light)
Anatole (sunrise)
Mousike (morning hour of music and study)
Gymnastike (morning hour of exercise)
Nymphe (morning hour of ablutions)
Mesembria (noon)
Sponde (libations poured after lunch)
Elete (prayer)
Akte (eating and pleasure)
Hesperis (start of evening)
Dysis (sunset)
Arktos (night sky)
Middle Ages
Medieval astronomers such as al-Biruni and Sacrobosco, divided the hour into 60 minutes, each of 60 seconds; this derives from Babylonian astronomy, where the corresponding terms denoted the time required for the Sun's apparent motion through the ecliptic to describe one minute or second of arc, respectively. In present terms, the Babylonian degree of time was thus four minutes long, the "minute" of time was thus four seconds long and the "second" 1/15 of a second.
In medieval Europe, the Roman hours continued to be marked on sundials but the more important units of time were the canonical hours of the Orthodox and Catholic Church. During daylight, these followed the pattern set by the three-hour bells of the Roman markets, which were succeeded by the bells of local churches. They rang prime at about 6am, terce at about 9am, sext at noon, nones at about 3pm, and vespers at either 6pm or sunset. Matins and lauds precede these irregularly in the morning hours; compline follows them irregularly before sleep; and the midnight office follows that. Vatican II ordered their reformation for the Catholic Church in 1963, though they continue to be observed in the Orthodox churches.
When mechanical clocks began to be used to show hours of daylight or nighttime, their period needed to be changed every morning and evening (for example, by changing the length of their pendula). The use of 24 hours for the entire day meant hours varied much less and the clocks needed to be adjusted only a few times a month.
Modernity
The minor irregularities of the apparent solar day were smoothed by measuring time using the mean solar day, using the Sun's movement along the celestial equator rather than along the ecliptic. The irregularities of this time system were so minor that most clocks reckoning such hours did not need adjustment. However, scientific measurements eventually became precise enough to note the effect of tidal deceleration of the Earth by the Moon, which gradually lengthens the Earth's days.
During the French Revolution, a general decimalisation of measures was enacted, including decimal time between 1794 and 1800. Under its provisions, the French hour (heure) was 1/10 of the day and was divided formally into 100 decimal minutes and informally into 10 tenths. Mandatory use for all public records began in 1794, but was suspended six months later by the same 1795 legislation that first established the metric system. In spite of this, a few localities continued to use decimal time for six years for civil status records, until 1800, after Napoleon's Coup of 18 Brumaire.
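A minimal sketch, in Python, of the decimal-time arithmetic just described: 10 decimal hours per day, 100 decimal minutes per hour and 100 decimal seconds per minute. The function name and the example value are illustrative only.

```python
def to_decimal_time(hours, minutes, seconds=0):
    """Convert an ordinary 24-hour clock reading to French Revolutionary decimal time."""
    day_fraction = (hours * 3600 + minutes * 60 + seconds) / 86400.0
    total_decimal_seconds = day_fraction * 100_000          # 10 h x 100 min x 100 s
    dec_hours, remainder = divmod(total_decimal_seconds, 10_000)
    dec_minutes, dec_seconds = divmod(remainder, 100)
    return int(dec_hours), int(dec_minutes), round(dec_seconds, 1)

print(to_decimal_time(18, 0))   # 18:00 (three quarters of the day) -> (7, 50, 0.0)
```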
The metric system bases its measurements of time upon the second, defined since 1952 in terms of the Earth's rotation in AD1900. Its hours are a secondary unit computed as precisely 3,600 seconds. However, an hour of Coordinated Universal Time (UTC), used as the basis of most civil time, has lasted 3,601 seconds 27 times since 1972 in order to keep it within 0.9 seconds of universal time, which is based on measurements of the mean solar day at 0° longitude. The addition of these seconds accommodates the very gradual slowing of the rotation of the Earth.
In modern life, the ubiquity of clocks and other timekeeping devices means that segmentation of days according to their hours is commonplace. Most forms of employment, whether wage or salaried labour, involve compensation based upon measured or expected hours worked. The fight for an eight-hour day was a part of labour movements around the world. Informal rush hours and happy hours cover the times of day when commuting slows down due to congestion or alcoholic drinks being available at discounted prices. The hour record for the greatest distance travelled by a cyclist within the span of an hour is one of cycling's greatest honours.
Counting hours
Many different ways of counting the hours have been used. Because sunrise, sunset, and, to a lesser extent, noon, are the conspicuous points in the day, starting to count at these times was, for most people in most early societies, much easier than starting at midnight. However, with accurate clocks and modern astronomical equipment (and the telegraph or similar means to transfer a time signal in a split-second), this issue is much less relevant.
Astrolabes, sundials, and astronomical clocks sometimes show the hour length and count using some of these older definitions and counting methods.
Counting from dawn
In ancient and medieval cultures, the counting of hours generally started with sunrise. Before the widespread use of artificial light, societies were more concerned with the division between night and day, and daily routines often began when light was sufficient.
"Babylonian hours" divide the day and night into 24 equal hours, reckoned from the time of sunrise. They are so named from the false belief of ancient authors that the Babylonians divided the day into 24 parts, beginning at sunrise. In fact, they divided the day into 12 parts (called kaspu or "double hours") or into 60 equal parts.
Unequal hours
Sunrise marked the beginning of the first hour, the middle of the day was at the end of the sixth hour and sunset at the end of the twelfth hour. This meant that the duration of hours varied with the season. In the Northern hemisphere, particularly in the more northerly latitudes, summer daytime hours were longer than winter daytime hours, each being one twelfth of the time between sunrise and sunset. These variable-length hours were variously known as temporal, unequal, or seasonal hours and were in use until the appearance of the mechanical clock, which furthered the adoption of equal length hours.
This is also the system used in Jewish law and frequently called "Talmudic hour" (Sha'a Zemanit) in a variety of texts. The Talmudic hour is one twelfth of time elapsed from sunrise to sunset, day hours therefore being longer than night hours in the summer; in winter they reverse.
The Indic day began at sunrise. The term hora was used to indicate an hour. The time was measured based on the length of the shadow at day time. A hora translated to 2.5 pe. There are 60 pe per day, 60 minutes per pe and 60 kshana (snap of a finger or instant) per minute. Pe was measured with a bowl with a hole placed in still water. Time taken for this graduated bowl was one pe. Kings usually had an officer in charge of this clock.
Counting from sunset
In so-called "Italian time", "Italian hours", or "old Czech time", the first hour started with the sunset Angelus bell (or at the end of dusk, i.e., half an hour after sunset, depending on local custom and geographical latitude). The hours were numbered from 1 to 24. For example, in Lugano, the sun rose in December during the 14th hour and noon was during the 19th hour; in June the sun rose during the 7th hour and noon was in the 15th hour. Sunset was always at the end of the 24th hour. The clocks in church towers struck only from 1 to 12, thus only during night or early morning hours.
This manner of counting hours had the advantage that everyone could easily know how much time they had to finish their day's work without artificial light. It was already widely used in Italy by the 14th century and lasted until the mid-18th century; it was officially abolished in 1755, or in some regions customary until the mid-19th century.
The system of Italian hours can be seen on a number of clocks in Europe, where the dial is numbered from 1 to 24 in either Roman or Arabic numerals. The St Mark's Clock in Venice, and the Orloj in Prague are famous examples. It was also used in Poland, Silesia, and Bohemia until the 17th century.
Its replacement by the more practical division into twice twelve (equinoctial) hours (also called small clock or civic hours) began as early as the 16th century.
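The Italian-hours count described above can be sketched as a small conversion: the hour number (1 to 24) is one more than the number of whole hours elapsed since the reference point at or just after sunset. The sketch below is a simplification that ignores the half-hour Angelus offset and the seasonal drift of the sunset time; the function and variable names are invented for the example.

```python
def italian_hour(now_hours, sunset_hours):
    """Hour number (1-24) counted from the previous sunset, using fractional hours of the day.

    Simplified: the count starts at sunset itself rather than the Angelus bell roughly
    half an hour later, and the sunset time is treated as fixed.
    """
    elapsed = (now_hours - sunset_hours) % 24    # hours since the last sunset
    return int(elapsed) + 1                      # the first hour begins at sunset

print(italian_hour(6.0, 18.0))    # 06:00 with sunset at 18:00 -> 13th hour
print(italian_hour(17.9, 18.0))   # just before the next sunset -> 24th hour
```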
The Islamic day begins at sunset. The first prayer of the day (maghrib) is to be performed between just after sunset and the end of twilight. Until 1968 Saudi Arabia used the system of counting 24 equal hours with the first hour starting at sunset.
Counting from noon
For many centuries, up to 1925, astronomers counted the hours and days from noon, because it was the easiest solar event to measure accurately. An advantage of this method (used in the Julian Date system, in which a new Julian Day begins at noon) is that the date doesn't change during a single night's observing.
Counting from midnight
In the modern 12-hour clock, counting the hours starts at midnight and restarts at noon. Hours are numbered 12, 1, 2, ..., 11. Solar noon is always close to 12 noon (ignoring artificial adjustments due to time zones and daylight saving time), differing according to the equation of time by as much as fifteen minutes either way. At the equinoxes sunrise is around 6 a.m. (, before noon), and sunset around 6 p.m. (, after noon).
In the modern 24-hour clock, counting the hours starts at midnight, and hours are numbered from 0 to 23. Solar noon is always close to 12:00, again differing according to the equation of time. At the equinoxes sunrise is around 06:00, and sunset around 18:00.
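A small illustration of the 12-hour labelling described above, mapping an hour on the 24-hour clock (0–23) to its conventional 12-hour form; noon and midnight are the only special cases. The function name is illustrative only.

```python
def to_12_hour(hour24):
    """Label an hour from the 24-hour clock (0-23) in 12-hour form with am/pm."""
    suffix = "am" if hour24 < 12 else "pm"
    hour12 = hour24 % 12
    if hour12 == 0:        # 0 -> 12 am (midnight), 12 -> 12 pm (noon)
        hour12 = 12
    return f"{hour12} {suffix}"

print(to_12_hour(0), to_12_hour(12), to_12_hour(18))   # 12 am, 12 pm, 6 pm
```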
History of timekeeping in other cultures
Egypt
The ancient Egyptians began dividing the night into hours at some time before the compilation of the Dynasty V Pyramid Texts in the 24th century BC. By 2150 BC (Dynasty IX), diagrams of stars inside Egyptian coffin lids—variously known as "diagonal calendars" or "star clocks"—attest that there were exactly 12 of these. Clagett writes that it is "certain" this duodecimal division of the night followed the adoption of the Egyptian civil calendar, usually dated on the basis of analyses of the Sothic cycle, but a lunar calendar presumably long predated this and also would have had 12 months in each of its years. The coffin diagrams show that the Egyptians took note of the heliacal risings of 36 stars or constellations (now known as "decans"), one for each of the ten-day "weeks" of their civil calendar. (12 sets of alternate "triangle decans" were used for the 5 epagomenal days between years.) Each night, the risings of eleven of these decans were noted, separating the night into 12 divisions whose middle terms would have lasted about 40 minutes each. (Another seven stars were noted by the Egyptians during the twilight and predawn periods, although they were not important for the hour divisions.) The original decans used by the Egyptians would have fallen noticeably out of their proper places over a span of several centuries. By the New Kingdom, the priests at Karnak were using water clocks to determine the hours. These were filled to the brim at sunset and the hour determined by comparing the water level against one of its 12 gauges, one for each month of the year. During the New Kingdom, another system of decans was used, made up of 24 stars over the course of the year and 12 within any one night.
The later division of the day into 12 hours was accomplished by sundials marked with ten equal divisions. The morning and evening periods when the sundials failed to note time were observed as the first and last hours.
The Egyptian hours were closely connected both with the priesthood of the gods and with their divine services. By the New Kingdom, each hour was conceived as a specific region of the sky or underworld through which Ra's solar barge travelled. Protective deities were assigned to each and were used as the names of the hours. As the protectors and resurrectors of the sun, the goddesses of the night hours were considered to hold power over all lifespans and thus became part of Egyptian funerary rituals. Two fire-spitting cobras were said to guard the gates of each hour of the underworld, and Wadjet and the rearing cobra (uraeus) were also sometimes referenced for their role protecting the dead through these gates. The Egyptian word for astronomer, used as a synonym for priest, was wnwty, "one of the wnwt", as it were "one of the hours". The earliest written forms of the word include one or three stars, with the later solar hours including the determinative hieroglyph for "sun".
East Asia
Ancient China divided its day into 100 "marks" running from midnight to midnight. The system is said to have been used since remote antiquity, credited to the legendary Yellow Emperor, but is first attested in Han-era water clocks and in the 2nd-century history of that dynasty. It was measured with sundials and water clocks. Into the Eastern Han, the Chinese measured their day schematically, adding the 20-ke difference between the solstices evenly throughout the year, one every nine days. During the night, time was more commonly reckoned by the "watches" of the guard, each of which was reckoned as a fifth of the time from sunset to sunrise.
Imperial China continued to use ke and geng but also began to divide the day into 12 "double hours" named after the earthly branches and sometimes also known by the name of the corresponding animal of the Chinese zodiac. The first shi originally ran from 11pm to 1am but was reckoned as starting at midnight by the time of the History of Song, compiled during the early Yuan. These apparently began to be used during the Eastern Han that preceded the Three Kingdoms era, but the sections that would have covered them are missing from their official histories; they first appear in official use in the Tang-era Book of Sui. Variations of all these units were subsequently adopted by Japan and the other countries of the Sinosphere.
The 12 shi supposedly began to be divided into 24 hours under the Tang, although they are first attested in the Ming-era Book of Yuan. In that work, the hours were known by the same earthly branches as the shi, with the first half noted as its "starting" and the second as "completed" or "proper" shi. In modern China, these are instead simply numbered and described as "little shi". The modern ke is now used to count quarter-hours, rather than a separate unit.
As with the Egyptian night and daytime hours, the division of the day into 12 shi has been credited to the example set by the rough number of lunar cycles in a solar year, although the 12-year Jovian orbital cycle was more important to traditional Chinese and Babylonian reckoning of the zodiac.
Southeast Asia
In Thailand, Laos, and Cambodia, the traditional system of noting hours is the six-hour clock. This reckons each of a day's 24 hours apart from noon as part of a fourth of the day. The first hour of the first half of daytime was 7 am; 1 pm the first hour of the latter half of daytime; 7 pm the first hour of the first half of nighttime; and 1 am the first hour of the latter half of nighttime. This system existed in the Ayutthaya Kingdom, deriving its current phrasing from the practice of publicly announcing the daytime hours with a gong and the nighttime hours with a drum. It was abolished in Laos and Cambodia during their French occupation and is uncommon there now. The Thai system remains in informal use in the form codified in 1901 by King Chulalongkorn.
India
The Vedas and Puranas employed units of time based on the sidereal day (nakṣatra ahorātra). This was variously divided into 30 muhūrta-s of 48 minutes each or 60 dandas or nadī-s of 24 minutes each. The solar day was later similarly divided into 60 ghaṭikás of about the same duration, each divided in turn into 60 vinadis. The Sinhalese followed a similar system but called their sixtieth of a day a peya.
Derived measures
air changes per hour (ACH), a measure of the replacements of air within a defined space used for indoor air quality
ampere hour (Ah), a measure of electrical charge used in electrochemistry
BTU-hour, a measure of power used in the power industry and for air conditioners and heaters
credit hour, a measure of an academic course's contracted instructional time per week for a semester
horsepower-hour (hph), a measure of energy used in the railroad industry
hour angle, a measure of the angle between the meridian plane and the hour circle passing through a certain point used in the equatorial coordinate system
kilometres per hour (km/h), a measure of land speed
kilowatt-hour (kWh), a measure of energy commonly used as an electrical billing unit
knot (kn), a measure of nautical miles per hour, used for maritime and aerial speed
man-hour, the amount of work performed by the average worker in one hour, used in productivity analysis
metre per hour (m/h), a measure of slow speeds
mile per hour (mph), a measure of land speed
passengers per hour per direction (p/h/d), a measure of the capacity of public transportation systems
pound per hour (PPH), a measure of mass flow rate used for engines' fuel flow
work or working hour, a measure of working time used in various regulations, such as those distinguishing part- and full-time employment and those limiting truck drivers' working hours or hours of service
See also
Danna
Decimal hour or deciday, a French Revolutionary unit lasting 2h 24min
Equinoctial hours
Golden Hour & Blue Hour in photography
Hexadecimal hour, a proposed unit lasting 1h 30min
Horae, the deified hours of ancient Greece and Rome
Horology
Julian day
Liturgy of the Hours
Metric time
Six-hour day
Temporal hours
Explanatory notes
Citations
General and cited references
Further reading
Christopher Walker (ed.), Astronomy before the Telescope. London: British Museum Press, 1996.
External links
World time zones
Accurate time vs. PC Clock Difference
Orders of magnitude (time)
Units of time | Hour | [
"Physics",
"Mathematics"
] | 4,962 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
14,009 | https://en.wikipedia.org/wiki/Hemicellulose | A hemicellulose (also known as polyose) is one of a number of heteropolymers (matrix polysaccharides), such as arabinoxylans, present along with cellulose in almost all terrestrial plant cell walls. Cellulose is crystalline, strong, and resistant to hydrolysis. Hemicelluloses are branched, shorter in length than cellulose, and also show a propensity to crystallize. They can be hydrolyzed by dilute acid or base as well as a myriad of hemicellulase enzymes.
Composition
Diverse kinds of hemicelluloses are known. Important examples include xylan, glucuronoxylan, arabinoxylan, glucomannan, and xyloglucan.
Hemicelluloses are polysaccharides often associated with cellulose, but with distinct compositions and structures. Whereas cellulose is derived exclusively from glucose, hemicelluloses are composed of diverse sugars, and can include the five-carbon sugars xylose and arabinose, the six-carbon sugars glucose, mannose and galactose, and the six-carbon deoxy sugar rhamnose. Hemicelluloses contain most of the D-pentose sugars, and occasionally small amounts of L-sugars as well. Xylose is in most cases the sugar monomer present in the largest amount, although in softwoods mannose can be the most abundant sugar. Not only regular sugars can be found in hemicellulose, but also their acidified forms, for instance glucuronic acid and galacturonic acid can be present.
Structural comparison to cellulose
Unlike cellulose, hemicelluloses consist of shorter chains – 500–3,000 sugar units. In contrast, each polymer of cellulose comprises 7,000–15,000 glucose molecules. In addition, hemicelluloses may be branched polymers, while cellulose is unbranched. Hemicelluloses are embedded in the cell walls of plants, sometimes in chains that form a 'ground' – they bind with pectin to cellulose to form a network of cross-linked fibres.
Based on the structural difference, like backbone linkages and side groups, as well as other factors, like abundance and distributions in plants, hemicelluloses can be categorized into four groups as following: 1) xylans, 2) mannans; 3) mixed linkage β-glucans; 4) xyloglucans.
Xylans
Xylans usually consist of a backbone of β-(1→4)-linked xylose residues and can be further divided into homoxylans and heteroxylans. Homoxylans have a backbone of D-xylopyranose residues linked by β(1→4) glycosidic linkages. Homoxylans mainly have structural functions. Heteroxylans such as glucuronoxylans, glucuronoarabinoxylans, and complex heteroxylans, have a backbone of D-xylopyranose and short carbohydrate branches. For example, glucuronoxylan has a substitution with α-(1→2)-linked glucuronosyl and 4-O-methyl glucuronosyl residues. Arabinoxylans and glucuronoarabinoxylans contain arabinose residues attached to the backbone
Mannans
The mannan-type hemicellulose can be classified into two types based on their main chain difference, galactomannans and glucomannans. Galactomannans have only β-(1→4) linked D-mannopyranose residues in linear chains. Glucomannans consist of both β-(1→4) linked D-mannopyranose and β-(1→4) linked D-glucopyranose residues in the main chains. As for the side chains, D-galactopyranose residues tend to be 6-linked to both types as the single side chains with various amount.
Mixed linkage β-glucans
The conformation of the mixed linkage glucan chains usually contains blocks of β-(1→4) D-glucopyranose separated by single β-(1→3) D-glucopyranose residues. The populations of β-(1→4) and β-(1→3) linkages are about 70% and 30%, respectively. These glucans primarily consist of cellotriosyl (C18H32O16) and cellotetraosyl (C24H42O21) segments in random order. Some studies report the molar ratio of cellotriosyl to cellotetraosyl segments as about 2.1–2.4 for oat, 2.8–3.3 for barley, and 4.2–4.5 for wheat.
Xyloglucans
Xyloglucans have a backbone similar to cellulose with α-D-xylopyranose residues at position 6. To better describe different side chains, a single letter code notation is used for each side chain type: G -- unbranched Glc residue; X -- α-d-Xyl-(1→6)-Glc; L -- β-Gal; S -- α-l-Araf; F -- α-l-Fuc. These are the most common side chains.
The two most common types of xyloglucans in plant cell walls are identified as XXXG and XXGG.
Biosynthesis
Hemicelluloses are synthesised from sugar nucleotides in the cell's Golgi apparatus. Two models explain their synthesis: 1) a '2 component model' where modification occurs at two transmembrane proteins, and 2) a '1 component model' where modification occurs only at one transmembrane protein. After synthesis, hemicelluloses are transported to the plasma membrane via Golgi vesicles.
Each kind of hemicellulose is biosynthesized by specialized enzymes.
Mannan chain backbones are synthesized by cellulose synthase-like protein family A (CSLA) and possibly enzymes in cellulose synthase-like protein family D (CSLD). Mannan synthase, a particular enzyme in CSLA, is responsible for the addition of mannose units to the backbone. The galactose side-chains of some mannans are added by galactomannan galactosyltransferase. Acetylation of mannans is mediated by a mannan O-acetyltransferase, however, this enzyme has not been definitively identified.
Xyloglucan backbone synthesis is mediated by cellulose synthase-like protein family C (CSLC), particularly glucan synthase, which adds glucose units to the chain. Backbone synthesis of xyloglucan is also mediated in some way by xylosyltransferase, but this mechanism is separate from its transferase function and remains unclear. Xylosyltransferase in its transferase function is, however, utilized for the addition of xylose to the side-chain. Other enzymes utilized for side-chain synthesis of xyloglucan include galactosyltransferase (which is responsible for the addition of galactose, and of which two different forms are utilized), fucosyltransferase (which is responsible for the addition of fucose), and acetyltransferase (which is responsible for acetylation).
Xylan backbone synthesis, unlike that of the other hemicelluloses, is not mediated by any cellulose synthase-like proteins. Instead, xylan synthase is responsible for backbone synthesis, facilitating the addition of xylose. Several genes for xylan synthases have been identified. Several other enzymes are utilized for the addition and modification of the side-chain units of xylan, including glucuronosyltransferase (which adds glucuronic acid units), xylosyltransferase (which adds additional xylose units), arabinosyltransferase (which adds arabinose), methyltransferase (responsible for methylation), and acetyltransferase (responsible for acetylation). Given that mixed-linkage glucan is a non-branched homopolymer of glucose, there is no side-chain synthesis, only the addition of glucose to the backbone in two linkages, β1-3 and β1-4. Backbone synthesis is mediated by enzymes in cellulose synthase-like protein families F and H (CSLF and CSLH), specifically glucan synthase. Several forms of glucan synthase from CSLF and CSLH have been identified. All of them are responsible for addition of glucose to the backbone and all are capable of producing both β1-3 and β1-4 linkages; however, it is unknown how much each specific enzyme contributes to the distribution of β1-3 and β1-4 linkages.
Applications
In the sulfite pulp process the hemicellulose is largely hydrolysed by the acid pulping liquor ending up in the brown liquor where the fermentable hexose sugars (around 2%) can be used for producing ethanol. This process was primarily applied to calcium sulfite brown liquors.
Arabinogalactan
Arabinogalactans can be used as emulsifiers, stabilizers and binders according to the Federal Food, Drug and Cosmetic Act. Arabinogalactans can also be used as bonding agent in sweeteners.
Xylan
The films based on xylan show low oxygen permeability and thus are of potential interest as packaging for oxygen-sensitive products.
Agar
Agar is used in making jellies and puddings. It is also growth medium with other nutrients for microorganisms.
Curdlan
Curdlan can be used in fat replacement to produce diet food while having a taste and a mouth feel of real fat containing products.
beta-glucan
β-Glucans have an important role as food supplements, and they are also promising in health-related applications, especially in immune reactions and the treatment of cancer.
Xanthan
Xanthan, with other polysaccharides can form gels that have high solution viscosity which can be used in the oil industry to thicken drilling mud. In the food industry, xanthan is used in products such as dressings and sauces.
Alginate
Alginate plays an important role in the development of antimicrobial textiles owing to its environmental friendliness and its high level of industrialization as a sustainable biopolymer.
Natural functions
As a polysaccharide compound in plant cell walls similar to cellulose, hemicellulose helps cellulose in the strengthening of plant cell walls. Hemicellulose interacts with the cellulose by providing cross-linking of cellulose microfibrils: hemicellulose will search for voids in the cell wall during its formation and provide support around cellulose fibrils in order to equip the cell wall with the maximum possible strength it can provide. Hemicellulose dominates the middle lamella of the plant cell, unlike cellulose, which is primarily found in the secondary layers. This allows hemicellulose to provide middle-ground support for the cellulose in the outer layers of the plant cell. In some cell walls, hemicellulose also interacts with lignin to provide structural support for the tissues of vascular plants.
Extraction
There are many ways to obtain hemicellulose; all of these rely on extraction from hardwood or softwood that has been milled into smaller samples. In hardwoods the main hemicellulose extract is glucuronoxylan (acetylated xylans), while galactoglucomannan is found in softwoods. Prior to extraction the wood typically must be milled into wood chips of various sizes depending on the reactor used. Following this, a hot water extraction process, also known as autohydrolysis or hydrothermal treatment, is utilized with the addition of acids or bases to change the yield and properties of the extract. The main advantage of hot water extraction is that it offers a method where the only chemical needed is water, making it environmentally friendly and cheap.
The goal of hot water treatment is to remove as much hemicellulose from the wood as possible. This is done through hydrolysis of the hemicellulose to yield smaller oligomers and xylose. Xylose, when dehydrated, becomes furfural. When xylose and furfural are the goal, acid catalysts such as formic acid are added to accelerate the conversion of polysaccharides to monosaccharides. Such a catalyst has also been shown to exert a solvent effect that aids the reaction.
One method of pretreatment is to soak the wood with diluted acids (with concentrations around 4%). This converts the hemicellulose into monosaccharides. When pretreatment is done with bases (for instance sodium or potassium hydroxide) this destroys the structure of the lignin. This changes the structure from crystalline to amorphous. Hydrothermal pretreatment is another method. This offers advantages such as no toxic or corrosive solvents are needed, nor are special reactors, and no extra costs to dispose of hazardous chemicals.
The hot water extraction process is done in batch reactors, semi-continuous reactors, or continuous slurry reactors. For batch and semi-continuous reactors, wood samples can be used as chips or pellets, while a slurry reactor requires particles as small as 200 to 300 micrometers. As the particle size decreases, the yield decreases as well. This is due to the increase of cellulose.
The hot water process is operated at a temperature range of 160 to 240 degrees Celsius in order to maintain the liquid phase. This is done above the normal boiling point of water to increase the solubilization of the hemicellulose and the depolymerization of polysaccharides. This process can take several minutes to several hours depending on the temperature and pH of the system. Higher temperatures paired with longer extraction times lead to higher yields. A maximum yield is obtained at a pH of 3.5; below this, the extraction yield decreases exponentially. In order to control pH, sodium bicarbonate is generally added. The sodium bicarbonate inhibits the autolysis of acetyl groups as well as the cleavage of glycosyl bonds. Depending on the temperature and time, the hemicellulose can be further converted into oligomers, monomers and lignin.
See also
Cellulose
Lignin
Polysaccharides
References
External links
Structure and Properties of Hemicellulose /David Wang's Wood Chemistry Class
Polysaccharides
Cell biology | Hemicellulose | [
"Chemistry",
"Biology"
] | 3,240 | [
"Carbohydrates",
"Cell biology",
"Polysaccharides"
] |
14,019 | https://en.wikipedia.org/wiki/Horsepower | Horsepower (hp) is a unit of measurement of power, or the rate at which work is done, usually in reference to the output of engines or motors. There are many different standards and types of horsepower. Two common definitions used today are the imperial horsepower as in "hp" or "bhp", which is about 745.7 watts, and the metric horsepower as in "cv" or "PS", which is approximately 735.5 watts. The electric horsepower "hpE" is exactly 746 watts, while the boiler horsepower is 9809.5 or 9811 watts, depending on the exact year.
The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later expanded to include the output power of other power-generating machinery such as piston engines, turbines, and electric motors. The definition of the unit varied among geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on 1 January 2010, the use of horsepower in the EU is permitted only as a supplementary unit.
History
The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in The Miner's Friend:
So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work...
The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one-third of the savings in coal from the older Newcomen steam engines. This royalty scheme did not work with customers who did not have existing steam engines but used horses instead.
Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). The wheel was 12 feet in radius; therefore, the horse travelled 2.4 × 2π × 12 feet in one minute. Watt judged that the horse could pull with a force of 180 pounds-force. So:
P = work/time = (180 lbf × 2.4 × 2π × 12 ft) / 1 min ≈ 32,572 ft⋅lbf/min, which was rounded to 33,000 ft⋅lbf/min.
Engineering in History recounts that John Smeaton initially estimated that a horse could produce 22,916 foot-pounds per minute; John Desaguliers and Thomas Tredgold had suggested somewhat different figures. "Watt found by experiment in 1782 that a 'brewery horse' could produce 32,400 foot-pounds per minute." James Watt and Matthew Boulton standardized that figure at 33,000 foot-pounds per minute the next year.
A common legend states that the unit was created when one of Watt's first customers, a brewer, specifically demanded an engine that would match a horse, and chose the strongest horse he had, driving it to the limit. In that legend, Watt accepted the challenge and built a machine that was actually even stronger than the figure achieved by the brewer, and the output of that machine became the horsepower.
In 1993, R. D. Stevenson and R. J. Wassersug published correspondence in Nature summarizing measurements and calculations of peak and sustained work rates of a horse. Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as 14.9 hp (11.1 kW) and also observed that for sustained activity, a work rate of about 1 hp (0.75 kW) per horse is consistent with agricultural advice from both the 19th and 20th centuries and also consistent with a work rate of about four times the basal rate expended by other vertebrates for sustained activity.
When considering human-powered equipment, a healthy human can produce about 1.2 hp (0.89 kW) briefly (see orders of magnitude) and sustain about 0.1 hp (75 W) indefinitely; trained athletes can manage up to about 2.5 hp (1.9 kW) briefly and 0.35 hp (260 W) for a period of several hours. The Jamaican sprinter Usain Bolt produced a maximum of 3.5 hp (2.6 kW) 0.89 seconds into his 9.58 second sprint world record in 2009.
In 2023 a group of engineers modified a dynamometer so that it could measure how much power a horse can produce, and used it to measure a horse's peak output.
Calculating power
When torque T is in pound-foot units and rotational speed N is in rpm, the resulting power in horsepower is
P (hp) = T (lb⋅ft) × N (rpm) / 5,252.
The constant 5252 is the rounded value of (33,000 ft⋅lbf/min)/(2π rad/rev).
When torque is in inch-pounds,
P (hp) = T (in⋅lb) × N (rpm) / 63,025.
The constant 63,025 is the approximation of (33,000 ft⋅lbf/min) × (12 in/ft) / (2π rad/rev).
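A short Python sketch of the two conversions just given; the constants 5,252 and 63,025 come from the text above, and the function names are illustrative only.

```python
import math

def hp_from_lb_ft(torque_lb_ft, rpm):
    """Mechanical horsepower from torque in pound-feet and shaft speed in rpm."""
    return torque_lb_ft * rpm / 5252.0        # 5252 ~ 33,000 / (2*pi)

def hp_from_in_lb(torque_in_lb, rpm):
    """Mechanical horsepower from torque in inch-pounds and shaft speed in rpm."""
    return torque_in_lb * rpm / 63025.0       # 63,025 ~ 33,000 * 12 / (2*pi)

print(round(hp_from_lb_ft(300, 5252), 3))     # 300 lb-ft at 5,252 rpm -> 300.0 hp
print(round(33000 / (2 * math.pi), 1))        # 5252.1, the origin of the constant
```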
Definitions
Imperial horsepower
Assuming the third CGPM (1901, CR 70) definition of standard gravity, gn = 9.80665 m/s2, is used to define the pound-force as well as the kilogram-force, and the international avoirdupois pound (1959), one imperial horsepower is:
1 hp ≡ 33,000 ft⋅lbf/min (by definition)
= 550 ft⋅lbf/s (since 1 min = 60 s)
= 550 × 0.3048 × 0.45359237 m⋅kgf/s (since 1 ft ≡ 0.3048 m and 1 lb ≡ 0.45359237 kg)
= 76.0402249068 kgf⋅m/s
= 76.0402249068 × 9.80665 kg⋅m2/s3 (since g = 9.80665 m/s2)
= 745.69987158227022 W ≈ 745.700 W (since 1 W ≡ 1 J/s = 1 N⋅m/s = 1 (kg⋅m/s2)⋅(m/s))
Or given that 1 hp = 550 ft⋅lbf/s, 1 ft = 0.3048 m, 1 lbf ≈ 4.448 N, 1 J = 1 N⋅m, 1 W = 1 J/s: 1 hp ≈ 745.7 W
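The chain of conversions above can be verified numerically; the sketch below simply repeats the arithmetic with the exact factors quoted in the derivation.

```python
FT_IN_M = 0.3048          # metres per foot (exact)
LB_IN_KG = 0.45359237     # kilograms per pound (exact)
G_N = 9.80665             # standard gravity in m/s^2 (exact by definition)

# 1 hp = 550 ft*lbf/s expressed in watts
hp_in_watts = 550 * FT_IN_M * LB_IN_KG * G_N
print(hp_in_watts)        # 745.6998715822702
```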
Metric horsepower (PS, KM, cv, hk, pk, k, ks, ch)
The various units used to indicate this definition (PS, KM, cv, hk, pk, k, ks and ch) all translate to horse power in English. British manufacturers often intermix metric horsepower and mechanical horsepower depending on the origin of the engine in question.
DIN 66036 defines one metric horsepower (Pferdestärke, or PS) as the power to raise a mass of 75 kilograms against the Earth's gravitational force over a distance of one metre in one second: 75 kgf⋅m/s = 1 PS. This is equivalent to 735.49875 W, or 98.6% of an imperial horsepower. In 1972, the PS was replaced by the kilowatt as the official power-measuring unit in EEC directives.
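The same kind of numerical check for the metric horsepower defined above; the imperial value used for comparison is the one derived in the previous section.

```python
G_N = 9.80665                     # standard gravity, m/s^2
ps_in_watts = 75 * G_N            # 75 kgf*m/s expressed in watts
print(ps_in_watts)                # 735.49875

print(round(ps_in_watts / 745.69987158227022, 3))   # ~0.986, i.e. 98.6% of an imperial hp
```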
Other names for the metric horsepower include the Italian cavallo vapore, the Dutch paardenkracht, the French cheval-vapeur, the Spanish caballo de vapor and Portuguese cavalo-vapor, the Russian лошадиная сила, the Swedish hästkraft, the Finnish hevosvoima, the Estonian hobujõud, the Norwegian and Danish hestekraft, the Hungarian lóerő, the Czech koňská síla and Slovak konská sila, the Serbo-Croatian konjska snaga, the Bulgarian конска сила, the Macedonian коњска сила, the Polish koń mechaniczny (KM), the Slovenian konjska moč, the Ukrainian кінська сила, the Romanian cal-putere, and the German Pferdestärke.
In the 19th century, revolutionary-era France had its own unit used to replace the cheval vapeur (horsepower); based on a 100 kgf⋅m/s standard, it was called the poncelet and was abbreviated p.
Tax horsepower
Tax or fiscal horsepower is a non-linear rating of a motor vehicle for tax purposes. Tax horsepower ratings were originally more or less directly related to the size of the engine; but as of 2000, many countries changed over to systems based on emissions, so are not directly comparable to older ratings. The Citroën 2CV is named for its French fiscal horsepower rating, "deux chevaux" (2CV).
Electrical horsepower
Nameplates on electrical motors show their power output, not the power input (the power delivered at the shaft, not the power consumed to drive the motor). This power output is ordinarily stated in watts or kilowatts. In the United States, the power output is stated in horsepower which, for this purpose, is defined as exactly 746 watts. Wattage is calculated by multiplying voltage by amperage.
Hydraulic horsepower
Hydraulic horsepower can represent the power available within hydraulic machinery, power through the down-hole nozzle of a drilling rig, or can be used to estimate the mechanical power needed to generate a known hydraulic flow rate.
It may be calculated as hydraulic horsepower = (pressure × flow rate) / 1,714,
where pressure is in psi, and flow rate is in US gallons per minute.
Drilling rigs are powered mechanically by rotating the drill pipe from above. Hydraulic power is still needed though, as 1,500 to 5,000 W are required to push mud through the drill bit to clear waste rock. Additional hydraulic power may also be used to drive a down-hole mud motor to power directional drilling.
When using SI units, the equation becomes coherent and there is no dividing constant: power (W) = pressure (Pa) × flow rate (m³/s),
where pressure is in pascals (Pa) and flow rate is in cubic metres per second (m³/s).
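A small sketch of both forms in Python; the 1,714 divisor follows from 1 US gal = 231 cubic inches and 1 hp = 33,000 ft⋅lbf/min, and the example pressure and flow figures are hypothetical:

```python
# Hydraulic power in US units and in coherent SI units.

def hydraulic_hp(pressure_psi: float, flow_gpm: float) -> float:
    """Hydraulic horsepower from pressure (psi) and flow (US gal/min)."""
    return pressure_psi * flow_gpm / 1714.0

def hydraulic_watts(pressure_pa: float, flow_m3_per_s: float) -> float:
    """Coherent SI form: watts = pressure (Pa) * flow rate (m^3/s)."""
    return pressure_pa * flow_m3_per_s

print(hydraulic_hp(3000, 20))            # ~35 hp
print(hydraulic_watts(20.7e6, 0.00126))  # roughly the same pump in SI units (~26 kW)
```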
Boiler horsepower
Boiler horsepower is a boiler's capacity to deliver steam to a steam engine and is not the same unit of power as the 550 ft lb/s definition. One boiler horsepower is equal to the thermal energy rate required to evaporate 34.5 lb (about 15.65 kg) of fresh water at 212 °F (100 °C) in one hour. In the early days of steam use, the boiler horsepower was roughly comparable to the horsepower of engines fed by the boiler.
The term "boiler horsepower" was originally developed at the Philadelphia Centennial Exhibition in 1876, where the best steam engines of that period were tested. The average steam consumption of those engines (per output horsepower) was determined to be the evaporation of of water per hour, based on feed water at , and saturated steam generated at . This original definition is equivalent to a boiler heat output of . A few years later in 1884, the ASME re-defined the boiler horsepower as the thermal output equal to the evaporation of 34.5 pounds per hour of water "from and at" . This considerably simplified boiler testing, and provided more accurate comparisons of the boilers at that time. This revised definition is equivalent to a boiler heat output of . Present industrial practice is to define "boiler horsepower" as a boiler thermal output equal to , which is very close to the original and revised definitions.
Boiler horsepower is still used to measure boiler output in industrial boiler engineering in the US. Boiler horsepower is abbreviated BHP, which is also used in many places to symbolize brake horsepower.
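A rough check of the revised definition, assuming a latent heat of vaporization of about 970.3 Btu/lb at 212 °F and 1 Btu/h ≈ 0.293 W; both values are assumptions, not taken from the text above.

```python
# Boiler horsepower as 34.5 lb/h of steam "from and at" 212 F.
LATENT_HEAT_BTU_PER_LB = 970.3    # assumed latent heat of water at 212 F
BTU_PER_H_TO_W = 0.29307107       # assumed conversion factor

btu_per_h = 34.5 * LATENT_HEAT_BTU_PER_LB   # ~33,475 Btu/h
watts = btu_per_h * BTU_PER_H_TO_W          # ~9,810 W
print(btu_per_h, watts)
```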
Drawbar power
Drawbar power (dbp) is the power a railway locomotive has available to haul a train, or an agricultural tractor has available to pull an implement. This is a measured figure rather than a calculated one. A special railway car called a dynamometer car coupled behind the locomotive keeps a continuous record of the drawbar pull exerted, and the speed. From these, the power generated can be calculated. To determine the maximum power available, a controllable load is required; it is normally a second locomotive with its brakes applied, in addition to a static load.
If the drawbar force (F) is measured in pounds-force (lbf) and speed (v) is measured in miles per hour (mph), then the drawbar power (P) in horsepower (hp) is P = F × v / 375.
Example: How much power is needed to pull a drawbar load of 2,025 pounds-force at 5 miles per hour? P = 2,025 × 5 / 375 = 27 hp.
The constant 375 is because 1 hp = 375 lbf⋅mph. If other units are used, the constant is different. When using coherent SI units (watts, newtons, and metres per second), no constant is needed, and the formula becomes P = F × v.
This formula may also be used to calculate the power of a jet engine, using the speed of the jet and the thrust required to maintain that speed.
Example: How much power is generated with a thrust of 4,000 pounds at 400 miles per hour? P = 4,000 × 400 / 375 ≈ 4,267 hp.
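A minimal sketch of the drawbar formula and its coherent SI form, reproducing the two worked examples above:

```python
# Drawbar/thrust power using the 375 lbf.mph-per-hp constant,
# plus the coherent SI form (watts = newtons * metres per second).

def drawbar_hp(force_lbf: float, speed_mph: float) -> float:
    return force_lbf * speed_mph / 375.0

def power_watts(force_n: float, speed_m_per_s: float) -> float:
    return force_n * speed_m_per_s

print(drawbar_hp(2025, 5))     # 27.0 hp, the locomotive example
print(drawbar_hp(4000, 400))   # ~4266.7 hp, the jet-thrust example
```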
RAC horsepower (taxable horsepower)
This measure was instituted by the Royal Automobile Club and was used to denote the power of early 20th-century British cars. Many cars took their names from this figure (hence the Austin Seven and Riley Nine), while others had names such as "40/50 hp", which indicated the RAC figure followed by the true measured power.
Taxable horsepower does not reflect developed horsepower; rather, it is a calculated figure based on the engine's bore size, number of cylinders, and a (now archaic) presumption of engine efficiency. As new engines were designed with ever-increasing efficiency, it was no longer a useful measure, but was kept in use by UK regulations, which used the rating for tax purposes. The United Kingdom was not the only country that used the RAC rating; many states in Australia used RAC hp to determine taxation. The RAC formula was sometimes applied in British colonies as well, such as Kenya (British East Africa).
The rating was calculated as RAC hp = D² × n / 2.5, where
D is the diameter (or bore) of the cylinder in inches,
n is the number of cylinders.
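A minimal sketch of the rating formula in Python; the 2.5-inch bore, four-cylinder example is hypothetical:

```python
# RAC (taxable) horsepower from bore and cylinder count.

def rac_hp(bore_in: float, cylinders: int) -> float:
    """RAC rating: D^2 * n / 2.5, with the bore D in inches."""
    return bore_in ** 2 * cylinders / 2.5

# A four-cylinder engine with a 2.5-inch bore rates at 10 taxable hp,
# regardless of its stroke or actual power output.
print(rac_hp(2.5, 4))   # 10.0
```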
Since taxable horsepower was computed based on bore and number of cylinders, not based on actual displacement, it gave rise to engines with "undersquare" dimensions (bore smaller than stroke), which tended to impose an artificially low limit on rotational speed, hampering the potential power output and efficiency of the engine.
The situation persisted for several generations of four- and six-cylinder British engines: for example, Jaguar's 3.4-litre XK engine of the 1950s had six cylinders with a bore of and a stroke of , whereas most American automakers had long since moved to oversquare (large bore, short stroke) V8 engines. See, for example, the early Chrysler Hemi engine.
Measurement
The power of an engine may be measured or estimated at several points in the transmission of the power from its generation to its application. A number of names are used for the power developed at various stages in this process, but none is a clear indicator of either the measurement system or definition used.
In general:
nominal horsepower is derived from the size of the engine and the piston speed and is only accurate at a steam pressure of ;
indicated or gross horsepower is the theoretical capability of the engine, calculated as PLAN/33,000 (mean effective pressure × length of stroke × area of piston × number of power strokes per minute);
brake/net/crankshaft horsepower (power delivered directly to and measured at the engine's crankshaft) equals
indicated horsepower minus frictional losses within the engine (bearing drag, rod and crankshaft windage losses, oil film drag, etc.);
shaft horsepower (power delivered to and measured at the output shaft of the transmission, when present in the system) equals
crankshaft horsepower minus frictional losses in the transmission (bearings, gears, oil drag, windage, etc.);
effective or true horsepower (thp), commonly referred to as wheel horsepower (whp), equals
shaft horsepower minus frictional losses in the universal joint/s, differential, wheel bearings, tire and chain, (if present).
All the above assumes that no power inflation factors have been applied to any of the readings.
Engine designers use expressions other than horsepower to denote objective targets or performance, such as brake mean effective pressure (BMEP). This is a coefficient of theoretical brake horsepower and cylinder pressures during combustion.
Nominal horsepower
Nominal horsepower (nhp) is an early 19th-century rule of thumb used to estimate the power of steam engines. It assumed a steam pressure of .
Nominal horsepower = 7 × area of piston in square inches × equivalent piston speed in feet per minute/33,000.
For paddle ships, the Admiralty rule was that the piston speed in feet per minute was taken as 129.7 × (stroke)^(1/3.38). For screw steamers, the intended piston speed was used.
The stroke (or length of stroke) was the distance moved by the piston measured in feet.
For the nominal horsepower to equal the actual power it would be necessary for the mean steam pressure in the cylinder during the stroke to be and for the piston speed to be that generated by the assumed relationship for paddle ships.
The French Navy used the same definition of nominal horse power as the Royal Navy.
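A short sketch combining the nominal-horsepower formula with the Admiralty piston-speed rule; the 60-inch bore and 6-foot stroke are hypothetical illustration values:

```python
# Nominal horsepower and the Admiralty paddle-ship piston-speed rule.
import math

def nominal_hp(piston_area_sq_in: float, piston_speed_ft_per_min: float) -> float:
    """Nominal hp = 7 * piston area (in^2) * piston speed (ft/min) / 33,000."""
    return 7 * piston_area_sq_in * piston_speed_ft_per_min / 33000.0

def paddle_piston_speed(stroke_ft: float) -> float:
    """Equivalent piston speed for paddle ships: 129.7 * stroke^(1/3.38)."""
    return 129.7 * stroke_ft ** (1 / 3.38)

# Hypothetical paddle engine: 60-inch bore cylinder, 6-foot stroke.
area = math.pi * (60 / 2) ** 2       # piston area in square inches
speed = paddle_piston_speed(6.0)     # ~220 ft/min
print(nominal_hp(area, speed))       # ~132 nhp
```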
Indicated horsepower
Indicated horsepower (ihp) is the theoretical power of a reciprocating engine if it is completely frictionless in converting the expanding gas energy (piston pressure × displacement) in the cylinders. It is calculated from the pressures developed in the cylinders, measured by a device called an engine indicator – hence indicated horsepower. As the piston advances throughout its stroke, the pressure against the piston generally decreases, and the indicator device usually generates a graph of pressure vs stroke within the working cylinder. From this graph the amount of work performed during the piston stroke may be calculated.
Indicated horsepower was a better measure of engine power than nominal horsepower (nhp) because it took account of steam pressure. But unlike later measures such as shaft horsepower (shp) and brake horsepower (bhp), it did not take into account power losses due to the machinery's internal friction, such as a piston sliding within the cylinder, bearing friction, transmission and gearbox friction, etc.
Brake horsepower
Brake horsepower (bhp) is the power measured using a brake type (load) dynamometer at a specified location, such as the crankshaft, output shaft of the transmission, rear axle or rear wheels.
In Europe, the DIN 70020 standard tests the engine fitted with all ancillaries and the exhaust system as used in the car. The older American standard (SAE gross horsepower, referred to as bhp) used an engine without alternator, water pump, and other auxiliary components such as power steering pump, muffled exhaust system, etc., so the figures were higher than the European figures for the same engine. The newer American standard (referred to as SAE net horsepower) tests an engine with all the auxiliary components (see "Engine power test standards" below).
Brake refers to the device used to apply a load that balances the engine's output torque and holds it at a desired rotational speed. During testing, the output torque and rotational speed are measured to determine the brake horsepower. Horsepower was originally measured and calculated by use of the "indicator diagram" (a James Watt invention of the late 18th century), and later by means of a Prony brake connected to the engine's output shaft. Modern dynamometers use any of several braking methods to measure the engine's brake horsepower, the actual output of the engine itself, before losses to the drivetrain.
Shaft horsepower
Shaft horsepower (shp) is the power delivered to a propeller or turbine shaft. Shaft horsepower is a common rating for turboshaft and turboprop engines, industrial turbines, and some marine applications.
Equivalent shaft horsepower (eshp) is sometimes used to rate turboprop engines. It includes the equivalent power derived from residual jet thrust from the turbine exhaust. of residual jet thrust is estimated to be produced from one unit of horsepower.
Engine power test standards
There exist a number of different standards determining how the power and torque of an automobile engine is measured and corrected. Correction factors are used to adjust power and torque measurements to standard atmospheric conditions, to provide a more accurate comparison between engines as they are affected by the pressure, humidity, and temperature of ambient air. Some standards are described below.
Society of Automotive Engineers/SAE International
Early "SAE horsepower"
In the early twentieth century, a so-called "SAE horsepower" was sometimes quoted for U.S. automobiles. This long predates the Society of Automotive Engineers (SAE) horsepower measurement standards; it was another name for the industry-standard ALAM or NACC horsepower figure, the same as the British RAC horsepower also used for tax purposes. The Alliance for Automotive Innovation is the current successor of ALAM and NACC.
SAE gross power
Prior to the 1972 model year, American automakers rated and advertised their engines in brake horsepower, bhp, which was a version of brake horsepower called SAE gross horsepower because it was measured according to Society of Automotive Engineers (SAE) standards (J245 and J1995) that call for a stock test engine without accessories (such as dynamo/alternator, radiator fan, water pump), and sometimes fitted with long tube test headers in lieu of the OEM exhaust manifolds. This contrasts with both SAE net power and DIN 70020 standards, which account for engine accessories (but not transmission losses). The atmospheric correction standards for barometric pressure, humidity and temperature for SAE gross power testing were relatively idealistic.
SAE net power
In the United States, the term bhp fell into disuse in 1971–1972, as automakers began to quote power in terms of SAE net horsepower in accord with SAE standard J1349. Like SAE gross and other brake horsepower protocols, SAE net hp is measured at the engine's crankshaft, and so does not account for transmission losses. However, similar to the DIN 70020 standard, SAE net power testing protocol calls for standard production-type belt-driven accessories, air cleaner, emission controls, exhaust system, and other power-consuming accessories. This produces ratings in closer alignment with the power produced by the engine as it is actually configured and sold.
SAE certified power
In 2005, the SAE introduced "SAE Certified Power" with SAE J2723. To attain certification the test must follow the SAE standard in question, take place in an ISO 9000/9002 certified facility and be witnessed by an SAE approved third party.
A few manufacturers such as Honda and Toyota switched to the new ratings immediately. The rating for Toyota's Camry 3.0 L 1MZ-FE V6 fell from . The company's Lexus ES 330 and Camry SE V6 (3.3 L V6) were previously rated at but the ES 330 dropped to while the Camry declined to . The first engine certified under the new program was the 7.0 L LS7 used in the 2006 Chevrolet Corvette Z06. Certified power rose slightly from .
While Toyota and Honda are retesting their entire vehicle lineups, other automakers generally are retesting only those with updated powertrains. For example, the 2006 Ford Five Hundred is rated at , the same as that of the 2005 model. However, the 2006 rating does not reflect the new SAE testing procedure, as Ford did not opt to incur the extra expense of retesting its existing engines. Over time, most automakers are expected to comply with the new guidelines.
SAE tightened its horsepower rules to eliminate the opportunity for engine manufacturers to manipulate factors affecting performance, such as how much oil was in the crankcase, engine control system calibration, and whether an engine was tested with high-octane fuel. In some cases, such factors can add up to a measurable change in horsepower ratings.
Deutsches Institut für Normung 70020 (DIN 70020)
DIN 70020 is a German DIN standard for measuring road vehicle horsepower. DIN hp is measured at the engine's output shaft as a form of metric horsepower rather than mechanical horsepower. Similar to SAE net power rating, and unlike SAE gross power, DIN testing measures the engine as installed in the vehicle, with cooling system, charging system and stock exhaust system all connected. DIN hp is often abbreviated as "PS", derived from the German word Pferdestärke (literally, "horsepower").
CUNA
A test standard by Italian CUNA (Commissione Tecnica per l'Unificazione nell'Automobile, Technical Commission for Automobile Unification), a federated entity of standards organisation UNI, was formerly used in Italy.
CUNA prescribed that the engine be tested with all accessories necessary to its running fitted (such as the water pump), while all others – such as alternator/dynamo, radiator fan, and exhaust manifold – could be omitted. All calibration and accessories had to be as on production engines.
Economic Commission for Europe R24
ECE R24 is a UN standard for the approval of compression ignition engine emissions, installation and measurement of engine power. It is similar to the DIN 70020 standard, but with different requirements for connecting an engine's fan during testing, causing it to absorb less power from the engine.
Economic Commission for Europe R85
ECE R85 is a UN standard for the approval of internal combustion engines with regard to the measurement of the net power.
80/1269/EEC
80/1269/EEC of 16 December 1980 is a European Union standard for road vehicle engine power.
International Organization for Standardization
The International Organization for Standardization (ISO) publishes several standards for measuring engine horsepower.
ISO 14396 specifies the additional requirements and test method for determining the power of reciprocating internal combustion engines when presented for an ISO 8178 exhaust emission test. It applies to reciprocating internal combustion engines for land, rail and marine use, excluding engines of motor vehicles primarily designed for road use.
ISO 1585 is an engine net power test code intended for road vehicles.
ISO 2534 is an engine gross power test code intended for road vehicles.
ISO 4164 is an engine net power test code intended for mopeds.
ISO 4106 is an engine net power test code intended for motorcycles.
ISO 9249 is an engine net power test code intended for earth moving machines.
Japanese Industrial Standard D 1001
JIS D 1001 is a Japanese net and gross engine power test code for automobiles and trucks with spark-ignition, diesel, or fuel-injection engines.
See also
Brake-specific fuel consumption – how much fuel an engine consumes per unit energy output
Dynamometer engine testing
European units of measurement directives
Horsepower-hour
Mean effective pressure
Torque
References
External links
Imperial units
Units of power
Customary units of measurement in the United States
James Watt | Horsepower | [
"Physics",
"Mathematics"
] | 5,296 | [
"Physical quantities",
"Quantity",
"Power (physics)",
"Units of power",
"Units of measurement"
] |
14,021 | https://en.wikipedia.org/wiki/History%20of%20astronomy | The history of astronomy focuses on the contributions civilizations have made to further their understanding of the universe beyond Earth's atmosphere.
Astronomy is one of the oldest natural sciences, achieving a high level of success in the second half of the first millennium. Astronomy has origins in the religious, mythological, cosmological, calendrical, and astrological beliefs and practices of prehistory. Early astronomical records date back to the Babylonians around 1000 BCE. There is also astronomical evidence of interest from early Chinese, Central American and North European cultures.
Astronomy was used by early cultures for a variety of reasons. These include timekeeping, navigation, spiritual and religious practices, and agricultural planning. Ancient astronomers used their observations to chart the skies in an effort to learn about the workings of the universe. During the Renaissance period, revolutionary ideas emerged about astronomy. One such idea was contributed in 1543 by Polish astronomer Nicolaus Copernicus, who developed a heliocentric model that depicted the planets orbiting the Sun. This was the start of the Copernican Revolution.
The success of astronomy, compared to other sciences, was achieved because of several reasons. Astronomy was the first science to have a mathematical foundation and have sophisticated procedures such as using armillary spheres and quadrants. This provided a solid base for collecting and verifying data.
Throughout the years, astronomy has broadened into multiple subfields such as astrophysics, observational astronomy, theoretical astronomy, and astrobiology.
Early history
Early cultures identified celestial objects with gods and spirits. They related these objects (and their movements) to phenomena such as rain, drought, seasons, and tides. It is generally believed that the first astronomers were priests, and that they understood celestial objects and events to be manifestations of the divine, hence early astronomy's connection to what is now called astrology. A 32,500-year-old carved ivory mammoth tusk could contain the oldest known star chart (resembling the constellation Orion). It has also been suggested that drawings on the wall of the Lascaux caves in France dating from 33,000 to 10,000 years ago could be a graphical representation of the Pleiades, the Summer Triangle, and the Northern Crown. Ancient structures with possibly astronomical alignments (such as Stonehenge) probably fulfilled astronomical, religious, and social functions.
Calendars of the world have often been set by observations of the Sun and Moon (marking the day, month and year), and were important to agricultural societies, in which the harvest depended on planting at the correct time of year, and for which the nearly full moon was the only lighting for night-time travel into city markets.
The common modern calendar is based on the Roman calendar. Although originally a lunar calendar, it broke the traditional link of the month to the phases of the Moon and divided the year into twelve almost-equal months that mostly alternated between thirty and thirty-one days. Julius Caesar instigated calendar reform in 46 BC and introduced what is now called the Julian calendar, based upon the 365 1⁄4 day year length originally proposed by the 4th century BC Greek astronomer Callippus.
Prehistoric Europe
Ancient astronomical artifacts have been found throughout Europe. The artifacts demonstrate that Neolithic and Bronze Age Europeans had a sophisticated knowledge of mathematics and astronomy.
Among the discoveries are:
Paleolithic archaeologist Alexander Marshack put forward a theory in 1972 that bone sticks from locations like Africa and Europe from possibly as long ago as 35,000 BC could be marked in ways that tracked the Moon's phases, an interpretation that has met with criticism.
The Warren Field calendar in the Dee River valley of Scotland's Aberdeenshire. First excavated in 2004 but only in 2013 revealed as a find of huge significance, it is to date the oldest known calendar, created around 8000 BC and predating all other calendars by some 5,000 years. The calendar takes the form of an early Mesolithic monument containing a series of 12 pits which appear to help the observer track lunar months by mimicking the phases of the Moon. It also aligns to sunrise at the winter solstice, thus coordinating the solar year with the lunar cycles. The monument had been maintained and periodically reshaped, perhaps up to hundreds of times, in response to shifting solar/lunar cycles, over the course of 6,000 years, until the calendar fell out of use around 4,000 years ago.
Goseck circle is located in Germany and belongs to the linear pottery culture. First discovered in 1991, its significance was only clear after results from archaeological digs became available in 2004. The site is one of hundreds of similar circular enclosures built in a region encompassing Austria, Germany, and the Czech Republic during a 200-year period starting shortly after 5000 BC.
The Nebra sky disc is a Bronze Age bronze disc that was buried in Germany, not far from the Goseck circle, around 1600 BC. It measures about diameter with a mass of and displays a blue-green patina (from oxidization) inlaid with gold symbols. Found by archeological thieves in 1999 and recovered in Switzerland in 2002, it was soon recognized as a spectacular discovery, among the most important of the 20th century. Investigations revealed that the object had been in use around 400 years before burial (2000 BC), but that its use had been forgotten by the time of burial. The inlaid gold depicted the full moon, a crescent moon about 4 or 5 days old, and the Pleiades star cluster in a specific arrangement forming the earliest known depiction of celestial phenomena. Twelve lunar months pass in 354 days, requiring a calendar to insert a leap month every two or three years in order to keep synchronized with the solar year's seasons (making it lunisolar). The earliest known descriptions of this coordination were recorded by the Babylonians in 6th or 7th centuries BC, over one thousand years later. Those descriptions verified ancient knowledge of the Nebra sky disc's celestial depiction as the precise arrangement needed to judge when to insert the intercalary month into a lunisolar calendar, making it an astronomical clock for regulating such a calendar a thousand or more years before any other known method.
The Kokino site, discovered in 2001, sits atop an extinct volcanic cone at an elevation of , occupying about 0.5 hectares overlooking the surrounding countryside in North Macedonia. A Bronze Age astronomical observatory was constructed there around 1900 BC and continuously served the nearby community that lived there until about 700 BC. The central space was used to observe the rising of the Sun and full moon. Three markings locate sunrise at the summer and winter solstices and at the two equinoxes. Four more give the minimum and maximum declinations of the full moon: in summer, and in winter. Two measure the lengths of lunar months. Together, they reconcile solar and lunar cycles in marking the 235 lunations that occur during 19 solar years, regulating a lunar calendar. On a platform separate from the central space, at lower elevation, four stone seats (thrones) were made in north–south alignment, together with a trench marker cut in the eastern wall. This marker allows the rising Sun's light to fall on only the second throne, at midsummer (about July 31). It was used for ritual ceremony linking the ruler to the local sun god, and also marked the end of the growing season and time for harvest.
Golden hats of Germany, France and Switzerland dating from 1400 to 800 BC are associated with the Bronze Age Urnfield culture. The Golden hats are decorated with a spiral motif of the Sun and the Moon. They were probably a kind of calendar used to calibrate between the lunar and solar calendars. Modern scholarship has demonstrated that the ornamentation of the gold leaf cones of the Schifferstadt type, to which the Berlin Gold Hat example belongs, represent systematic sequences in terms of number and types of ornaments per band. A detailed study of the Berlin example, which is the only fully preserved one, showed that the symbols probably represent a lunisolar calendar. The object would have permitted the determination of dates or periods in both lunar and solar calendars.
Ancient times
Mesopotamia
The origins of astronomy can be found in Mesopotamia, the "land between the rivers" Tigris and Euphrates, where the ancient kingdoms of Sumer, Assyria, and Babylonia were located. A form of writing known as cuneiform emerged among the Sumerians around 3500–3000 BC. Our knowledge of Sumerian astronomy is indirect, via the earliest Babylonian star catalogues dating from about 1200 BC. The fact that many star names appear in Sumerian suggests a continuity reaching into the Early Bronze Age. Astral theology, which gave planetary gods an important role in Mesopotamian mythology and religion, began with the Sumerians. They also used a sexagesimal (base 60) place-value number system, which simplified the task of recording very large and very small numbers. The modern practice of dividing a circle into 360 degrees, or an hour into 60 minutes, began with the Sumerians. For more information, see the articles on Babylonian numerals and mathematics.
Classical sources frequently use the term Chaldeans for the astronomers of Mesopotamia, who were, in reality, priest-scribes specializing in astrology and other forms of divination.
The first evidence of recognition that astronomical phenomena are periodic and of the application of mathematics to their prediction is Babylonian. Tablets dating back to the Old Babylonian period document the application of mathematics to the variation in the length of daylight over a solar year. Centuries of Babylonian observations of celestial phenomena are recorded in the series of cuneiform tablets known as the Enūma Anu Enlil. The oldest significant astronomical text that we possess is Tablet 63 of the Enūma Anu Enlil, the Venus tablet of Ammi-saduqa, which lists the first and last visible risings of Venus over a period of about 21 years and is the earliest evidence that the phenomena of a planet were recognized as periodic. The MUL.APIN, contains catalogues of stars and constellations as well as schemes for predicting heliacal risings and the settings of the planets, lengths of daylight measured by a water clock, gnomon, shadows, and intercalations. The Babylonian GU text arranges stars in 'strings' that lie along declination circles and thus measure right-ascensions or time-intervals, and also employs the stars of the zenith, which are also separated by given right-ascensional differences.
A significant increase in the quality and frequency of Babylonian observations appeared during the reign of Nabonassar (747–733 BC). The systematic records of ominous phenomena in Babylonian astronomical diaries that began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses, for example. The Greek astronomer Ptolemy later used Nabonassar's reign to fix the beginning of an era, since he felt that the earliest usable observations began at this time.
The last stages in the development of Babylonian astronomy took place during the time of the Seleucid Empire (323–60 BC). In the 3rd century BC, astronomers began to use "goal-year texts" to predict the motions of the planets. These texts compiled records of past observations to find repeating occurrences of ominous phenomena for each planet. About the same time, or shortly afterwards, astronomers created mathematical models that allowed them to predict these phenomena directly, without consulting records. A notable Babylonian astronomer from this time was Seleucus of Seleucia, who was a supporter of the heliocentric model.
Babylonian astronomy was the basis for much of what was done in Greek and Hellenistic astronomy, in classical Indian astronomy, in Sassanian Iran, in Byzantium, in Syria, in Islamic astronomy, in Central Asia, and in Western Europe.
India
Astronomy in the Indian subcontinent dates back to the period of Indus Valley Civilisation during 3rd millennium BC, when it was used to create calendars. As the Indus Valley civilization did not leave behind written documents, the oldest extant Indian astronomical text is the Vedanga Jyotisha, dating from the Vedic period. The Vedanga Jyotisha is attributed to Lagadha and has an internal date of approximately 1350 BC, and describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. It is available in two recensions, one belonging to the Rig Veda, and the other to the Yajur Veda. According to the Vedanga Jyotisha, in a yuga or "era", there are 5 solar years, 67 lunar sidereal cycles, 1,830 days, 1,835 sidereal days and 62 synodic months. During the 6th century, astronomy was influenced by the Greek and Byzantine astronomical traditions.
Aryabhata (476–550), in his magnum opus Aryabhatiya (499), propounded a computational system based on a planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given with respect to the Sun. He accurately calculated many astronomical constants, such as the periods of the planets, times of the solar and lunar eclipses, and the instantaneous motion of the Moon. Early followers of Aryabhata's model included Varāhamihira, Brahmagupta, and Bhāskara II.
Astronomy was advanced during the Shunga Empire and many star catalogues were produced during this time. The Shunga period is known as the "Golden age of astronomy in India".
It saw the development of calculations for the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses.
Indian astronomers by the 6th century believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is unfortunately not known how these figures were calculated or how accurate they were.
Greece and Hellenistic world
The Ancient Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their models were based on nested homocentric spheres centered upon the Earth. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis.
A different approach to celestial phenomena was taken by natural philosophers such as Plato and Aristotle. They were less concerned with developing mathematical predictive models than with developing an explanation of the reasons for the motions of the Cosmos. In his Timaeus, Plato described the universe as a spherical body divided into circles carrying the planets and governed according to harmonic intervals by a world soul. Aristotle, drawing on the mathematical model of Eudoxus, proposed that the universe was made of a complex system of concentric spheres, whose circular motions combined to carry the planets around the Earth. This basic cosmological model prevailed, in various forms, until the 16th century.
In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system, although only fragmentary descriptions of his idea survive. Eratosthenes estimated the circumference of the Earth with great accuracy (see also: history of geodesy).
Greek geometrical astronomy developed away from the model of concentric spheres to employ more complex models in which an eccentric circle would carry around a smaller circle, called an epicycle which in turn carried around a planet. The first such model is attributed to Apollonius of Perga and further developments in it were carried out in the 2nd century BC by Hipparchus of Nicea. Hipparchus made a number of other contributions, including the first measurement of precession and the compilation of the first star catalog in which he proposed our modern system of apparent magnitudes.
The Antikythera mechanism, an ancient Greek mechanical device for calculating the movements of the Sun and the Moon, and possibly the planets, dates from about 150–100 BC, and was the first ancestor of an astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
Ptolemaic system
Depending on the historian's viewpoint, the acme or corruption of Classical physical astronomy is seen with Ptolemy, a Greco-Roman astronomer from Alexandria of Egypt, who wrote the classic comprehensive presentation of geocentric astronomy, the Megale Syntaxis (Great Synthesis), better known by its Arabic title Almagest, which had a lasting effect on astronomy up to the Renaissance. In his Planetary Hypotheses, Ptolemy ventured into the realm of cosmology, developing a physical model of his geometric system, in a universe many times smaller than the more realistic conception of Aristarchus of Samos four centuries earlier.
Egypt
The precise orientation of the Egyptian pyramids affords a lasting demonstration of the high degree of technical skill in watching the heavens attained in the 3rd millennium BC. It has been shown that the Pyramids were aligned towards the pole star, which, because of the precession of the equinoxes, was at that time Thuban, a faint star in the constellation of Draco. Evaluation of the site of the temple of Amun-Re at Karnak, taking into account the change over time of the obliquity of the ecliptic, has shown that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. The Egyptians also tracked the position of Sirius (the dog star), which they believed was Anubis, their jackal-headed god, moving through the heavens. Its position was critical to their civilisation, as when it rose heliacally in the east before sunrise it foretold the flooding of the Nile. It is also the origin of the phrase 'dog days of summer'.
Astronomy played a considerable part in religious matters for fixing the dates of festivals and determining the hours of the night. The titles of several temple books are preserved recording the movements and phases of the Sun, Moon and stars. The rising of Sirius (Egyptian: Sopdet, Greek: Sothis) at the beginning of the inundation was a particularly important point to fix in the yearly calendar.
Writing in the Roman era, Clement of Alexandria gives some idea of the importance of astronomical observations to the sacred rites:
And after the Singer advances the Astrologer (ὡροσκόπος), with a horologium (ὡρολόγιον) in his hand, and a palm (φοίνιξ), the symbols of astrology. He must know by heart the Hermetic astrological books, which are four in number. Of these, one is about the arrangement of the fixed stars that are visible; one on the positions of the Sun and Moon and five planets; one on the conjunctions and phases of the Sun and Moon; and one concerns their risings.
The Astrologer's instruments (horologium and palm) are a plumb line and sighting instrument. They have been identified with two inscribed objects in the Berlin Museum; a short handle from which a plumb line was hung, and a palm branch with a sight-slit in the broader end. The latter was held close to the eye, the former in the other hand, perhaps at arm's length. The "Hermetic" books which Clement refers to are the Egyptian theological texts, which probably have nothing to do with Hellenistic Hermetism.
From the tables of stars on the ceiling of the tombs of Rameses VI and Rameses IX it seems that for fixing the hours of the night a man seated on the ground faced the Astrologer in such a position that the line of observation of the pole star passed over the middle of his head. On the different days of the year each hour was determined by a fixed star culminating or nearly culminating in it, and the position of these stars at the time is given in the tables as in the centre, on the left eye, on the right shoulder, etc. According to the texts, in founding or rebuilding temples the north axis was determined by the same apparatus, and we may conclude that it was the usual one for astronomical observations. In careful hands it might give results of a high degree of accuracy.
China
The astronomy of East Asia began in China. The solar terms were completed in the Warring States period. Chinese astronomical knowledge was later introduced to the rest of East Asia.
Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th century BC, until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers were able to precisely predict eclipses.
Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made observations for that purpose.
Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars" () which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a "guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries. Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern astronomical studies.
The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC.
Mesoamerica
Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses, and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics in the carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn, Mars, and also they had a precise description of the eclipses as depicted in the Dresden Codex, as well as the ecliptic or zodiac, and the Milky Way was crucial in their Cosmology. A number of important Maya structures are believed to have been oriented toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war and many recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved astronomical codices and early mythology.
Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar year to somewhat greater accuracy than the Gregorian calendar. Both astronomy and an intricate numerological scheme for the measurement of time were vitally important components of Maya religion.
The Maya believed that the Earth was the center of all things, and that the stars, moons, and planets were gods. They believed that their movements were the gods traveling between the Earth and other celestial destinations. Many key events in Maya culture were timed around celestial events, in the belief that certain gods would be present.
Middle Ages
Middle East
The Arabic and the Persian world under Islam had become highly cultured, and many important works of knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on observational astronomy. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. Zij star catalogues were produced at these observatories.
In the 9th century, the Persian astrologer Albumasar was regarded as one of the greatest astrologers of his time. His practical manuals for training astrologers profoundly influenced Muslim intellectual history and, through translations, that of western Europe and Byzantium. In the 10th century, Albumasar's "Introduction" was one of the most important sources for the recovery of Aristotle for medieval European scholars. Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colour, and made drawings for each constellation in his Book of Fixed Stars. He also gave the first descriptions and pictures of "A Little Cloud", now known as the Andromeda Galaxy. He mentions it as lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star.
In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud al-Khujandi who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's axis relative to the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian.
Other Muslim advances in astronomy included the collection and correction of previous astronomical data, resolving significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of lunar motion which matched physical observations.
Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen) in the 11th century, by Ibn al-Shatir in the 14th century, and Qushji in the 15th century.
India
Bhāskara II (1114–1185) was the head of the astronomical observatory at Ujjain, continuing the mathematical tradition of Brahmagupta. He wrote the Siddhantasiromani which consists of two parts: Goladhyaya (sphere) and Grahaganita (mathematics of the planets). He also calculated the time taken for the Earth to orbit the Sun to 9 decimal places. The Buddhist University of Nalanda at the time offered formal courses in astronomical studies.
Other important astronomers from India include Madhava of Sangamagrama, Nilakantha Somayaji and Jyeshtadeva, who were members of the Kerala school of astronomy and mathematics from the 14th century to the 16th century. Nilakantha Somayaji, in his Aryabhatiyabhasya, a commentary on Aryabhata's Aryabhatiya, developed his own computational system for a partially heliocentric planetary model, in which Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Nilakantha's system, however, was mathematically more efficient than the Tychonic system, due to correctly taking into account the equation of the centre and latitudinal motion of Mercury and Venus. Most astronomers of the Kerala school of astronomy and mathematics who followed him accepted his planetary model.
Western Europe
After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static era in Western Europe from the Roman era through the 12th century. This lack of progress has led some astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period from the 4th to the 16th centuries.
Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge of that language, only simplified summaries and practical texts were available for study. The most influential writers to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at night by watching the stars.
In the 7th century the English monk Bede of Jarrow published an influential text, On the Reckoning of Time, providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using a procedure called the computus. This text remained an important element of the education of clergy from the 7th century until well after the rise of the Universities in the 12th century.
The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in the motions of the planets and in their astrological significance.
Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. There they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most notably those dealing with the astrolabe. Soon scholars such as Hermann of Reichenau were writing texts in Latin on the uses and construction of the astrolabe and others, such as Walcher of Malvern, were using the astrolabe to observe the time of eclipses in order to test the validity of computistical tables.
By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the Quadrant, and another on Calculation.
In the 14th century, Nicole Oresme, later bishop of Lisieux, showed that neither the scriptural texts nor the physical arguments advanced against the movement of the Earth were demonstrative and adduced the argument of simplicity for the theory that the Earth moves, and not the heavens. However, he concluded "everyone maintains, and I think myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved." In the 15th century, Cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved around the Sun, and that each star is itself a distant sun.
Renaissance and Early Modern Europe
Copernican Revolution
During the Renaissance period, astronomy began to undergo a revolution in thought known as the Copernican Revolution, which takes its name from the astronomer Nicolaus Copernicus, who proposed a heliocentric system in which the planets revolved around the Sun rather than the Earth. His De revolutionibus orbium coelestium was published in 1543. Although in the long term this was a very controversial claim, at the very beginning it brought only minor controversy. The theory became the dominant view because many figures, most notably Galileo Galilei, Johannes Kepler and Isaac Newton, championed and improved upon the work. Other figures also aided this new model despite not believing the overall theory, like Tycho Brahe, with his well-known observations.
Brahe, a Danish noble, was an essential astronomer in this period. He came on the astronomical scene with the publication of De nova stella, in which he disproved conventional wisdom on the supernova SN 1572 (as bright as Venus at its peak, SN 1572 later became invisible to the naked eye, disproving the Aristotelian doctrine of the immutability of the heavens). He also created the Tychonic system, in which the Sun, Moon, and stars revolve around the Earth, but the other five planets revolve around the Sun. This system blended the mathematical benefits of the Copernican system with the "physical benefits" of the Ptolemaic system. This was one of the systems people believed in when they did not accept heliocentrism, but could no longer accept the Ptolemaic system. He is best known for his highly accurate observations of the stars and the Solar System. Later he moved to Prague and continued his work. In Prague he was at work on the Rudolphine Tables, which were not finished until after his death. The Rudolphine Tables was a star map designed to be more accurate than either the Alfonsine Tables, made in the 1300s, or the Prutenic Tables, which were inaccurate. He was assisted at this time by his assistant Johannes Kepler, who would later use his observations to finish Brahe's works and to support his own theories as well.
After the death of Brahe, Kepler was deemed his successor and was given the job of completing Brahe's uncompleted works, like the Rudolphine Tables. He completed the Rudolphine Tables in 1624, although it was not published for several years. Like many other figures of this era, he was subject to religious and political troubles, like the Thirty Years' War, which led to chaos that almost destroyed some of his works. Kepler was, however, the first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. He discovered the three Kepler's laws of planetary motion that now carry his name, those laws being as follows:
The orbit of a planet is an ellipse with the Sun at one of the two foci.
A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.
With these laws, he managed to improve upon the existing heliocentric model. The first two were published in 1609. Kepler's contributions improved upon the overall system, giving it more credibility because it adequately explained events and yielded more reliable predictions. Before this, the Copernican model was just as unreliable as the Ptolemaic model. This improvement came because Kepler realized the orbits were not perfect circles, but ellipses.
Galileo Galilei was among the first to use a telescope to observe the sky; after constructing a 20x refracting telescope, he discovered the four largest moons of Jupiter in 1610, which are now collectively known as the Galilean moons in his honor. This discovery was the first known observation of satellites orbiting another planet. He also found that the Moon had craters, observed and correctly explained sunspots, and found that Venus exhibited a full set of phases resembling lunar phases. Galileo argued that these facts demonstrated incompatibility with the Ptolemaic model, which could not explain them and was even contradicted by them. The moons demonstrated that the Earth does not have to have everything orbiting it and that other parts of the Solar System could orbit another object, such as the Earth orbiting the Sun. In the Ptolemaic system the celestial bodies were supposed to be perfect, so such objects should not have craters or sunspots. The phases of Venus could only happen if Venus's orbit is inside Earth's orbit, which could not happen if the Earth were the center. Galileo, as the most famous example, had to face challenges from church officials, more specifically the Roman Inquisition. They accused him of heresy because these beliefs went against the teachings of the Roman Catholic Church and challenged the Catholic church's authority when it was at its weakest. While he was able to avoid punishment for a little while, he was eventually tried and pleaded guilty to heresy in 1633. This came at some expense: his book was banned, and he was put under house arrest until he died in 1642.
Sir Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation. Realizing that the same force that attracts objects to the surface of the Earth held the Moon in orbit around the Earth, Newton was able to explain – in one theoretical framework – all known gravitational phenomena. In his Philosophiæ Naturalis Principia Mathematica, he derived Kepler's laws from first principles. Those first principles are as follows:
In an inertial frame of reference, an object either remains at rest or continues to move at constant velocity, unless acted upon by a force.
In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant)
When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
Thus, while Kepler explained how the planets moved, Newton was able to explain why they moved the way they do. Newton's theoretical developments laid many of the foundations of modern physics.
Completing the Solar System
Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own philosophies. Voltaire published a popular account in 1738. In 1748, the French Academy of Sciences offered a reward for solving the question of the perturbations of Jupiter and Saturn, which was eventually done by Euler and Lagrange. Laplace completed the theory of the planets, publishing from 1798 to 1825. The solar nebular model of planetary formation also had its early origins in this period.
Edmond Halley succeeded Flamsteed as Astronomer Royal in England and correctly predicted the return in 1758 of the comet that bears his name. In 1781, Sir William Herschel found Uranus, the first new planet to be observed in modern times. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the discovery of the asteroids Ceres and Pallas in 1801 and 1802, with many more following.
At first, astronomical thought in America was based on Aristotelian philosophy, but interest in the new astronomy began to appear in Almanacs as early as 1659.
Stellar astronomy
Cosmic pluralism is the name given to the idea that the stars are distant suns, perhaps with their own planetary systems.
Ideas in this direction were expressed in antiquity, by Anaxagoras and by Aristarchus of Samos, but did not find mainstream acceptance. The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno in his De l'infinito universo et mondi (1584). This idea, together with a belief in intelligent extraterrestrial life, was among the charges brought against him by the Inquisition.
The idea became mainstream in the later 17th century, especially following the publication of Conversations on the Plurality of Worlds by Bernard Le Bovier de Fontenelle (1686), and by the early 18th century it was the default working assumption in stellar astronomy.
The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus. William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight. From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is noted for his discovery that some stars do not merely lie along the same line of sight, but are physical companions that form binary star systems.
Modern astronomy
19th century
Before photography, the recording of astronomical data was limited by the human eye. In 1840, John W. Draper, a chemist, created the earliest known astronomical photograph of the Moon. By the late 19th century, thousands of photographic plates of images of planets, stars, and galaxies had been created. Most photographic emulsions had lower quantum efficiency than the human eye (i.e. they captured a smaller fraction of the incident photons) but had the advantage of long integration times (about 100 ms for the human eye compared to hours for photographs). This vastly increased the data available to astronomers, which led to the rise of human computers, famously the Harvard Computers, to track and analyze the data.
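The trade-off between efficiency and exposure time can be made concrete with a rough calculation; the photon flux and quantum efficiencies below are assumed round numbers chosen only for illustration.

```python
# Rough comparison of eye vs. photographic plate as astronomical detectors.
# The photon flux and quantum efficiencies are assumed, illustrative values,
# not measured figures for any real instrument.

photon_flux = 1.0e4                    # photons per second reaching the detector (assumed)
eye_qe, plate_qe = 0.1, 0.01           # assumed quantum efficiencies
eye_t, plate_t = 0.1, 3600.0           # integration times: ~100 ms vs. a 1-hour exposure

eye_detected = photon_flux * eye_qe * eye_t
plate_detected = photon_flux * plate_qe * plate_t

print(f"eye:   {eye_detected:,.0f} photons recorded")
print(f"plate: {plate_detected:,.0f} photons recorded")
# Despite its lower efficiency, the plate accumulates far more photons
# because it can integrate for hours.
```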
Scientists began discovering forms of light which were invisible to the naked eye: X-rays, gamma rays, radio waves, microwaves, ultraviolet radiation, and infrared radiation. This had a major impact on astronomy, spawning the fields of infrared astronomy, radio astronomy, X-ray astronomy and finally gamma-ray astronomy. With the advent of spectroscopy it was proven that other stars were similar to the Sun, but with a range of temperatures, masses and sizes.
The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the stellar atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. The first evidence of helium was observed on August 18, 1868, as a bright yellow spectral line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India.
The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from the computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827.
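The parallax technique reduces to a simple relation: the distance in parsecs is the reciprocal of the parallax angle in arcseconds. The sketch below applies it to 61 Cygni using an approximate modern parallax of about 0.287 arcseconds, quoted here only to illustrate the arithmetic.

```python
# Distance from annual parallax: d [parsec] = 1 / p [arcsec].
# The parallax value for 61 Cygni (~0.287") is an approximate modern figure
# used purely for illustration.

LY_PER_PARSEC = 3.2616  # light-years per parsec

def parallax_to_light_years(parallax_arcsec: float) -> float:
    """Convert an annual parallax in arcseconds to a distance in light-years."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

print(f"61 Cygni: ~{parallax_to_light_years(0.287):.1f} light-years")  # ~11.4 ly
```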
In 1847, Maria Mitchell discovered a comet using a telescope.
20th century
With the accumulation of large sets of astronomical data, teams like the Harvard Computers rose in prominence, and many female astronomers, previously relegated to work as assistants to male astronomers, gained recognition in the field. The United States Naval Observatory (USNO) and other astronomy research institutions hired human "computers", who performed the tedious calculations while scientists performed research requiring more background knowledge. A number of discoveries in this period were originally noted by the women "computers" and reported to their supervisors. Henrietta Swan Leavitt discovered the Cepheid variable star period-luminosity relation, which she further developed into a method of measuring distances outside of the Solar System.
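Leavitt's period-luminosity relation turns an observed pulsation period and apparent brightness into a distance. The sketch below uses one approximate modern calibration of the relation together with the standard distance-modulus formula; both the coefficients and the example star are assumptions made purely for illustration, not Leavitt's original values.

```python
import math

# Sketch of the Cepheid distance method. The period-luminosity coefficients
# are one approximate modern calibration, used only for illustration; the
# distance-modulus formula itself is standard.

def cepheid_absolute_magnitude(period_days: float) -> float:
    # Assumed calibration: M_V ~ -2.43 * (log10(P) - 1) - 4.05
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Cepheid: 10-day period, apparent magnitude 15 (made-up numbers).
M = cepheid_absolute_magnitude(10.0)
d = distance_parsecs(15.0, M)
print(f"M_V ~ {M:.2f}, distance ~ {d:.0f} parsecs")
```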
A veteran of the Harvard Computers, Annie J. Cannon developed the modern version of the stellar classification scheme during the early 1900s (O B A F G K M, based on color and temperature), manually classifying more stars in a lifetime than anyone else (around 350,000).
The twentieth century saw increasingly rapid advances in the scientific study of stars.
Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory.
Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung–Russell diagram was developed, propelling the astrophysical study of stars.
In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung published the first plots of color versus luminosity for stars. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence.
At Princeton University, Henry Norris Russell plotted the spectral types of stars against their absolute magnitude, and found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy.
Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 doctoral thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined.
As evolutionary models of stars were developed during the 1930s, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram.
A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan.
The existence of our galaxy, the Milky Way, as a separate group of stars was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of the universe seen in the recession of most galaxies from us. The "Great Debate" between Harlow Shapley and Heber Curtis, in the 1920s, concerned the nature of the Milky Way, spiral nebulae, and the dimensions of the universe.
With the advent of quantum physics, spectroscopy was further refined.
The Sun was found to be part of a galaxy made up of more than 10^10 stars (10 billion stars). The existence of other galaxies, one of the matters of the Great Debate, was settled by Edwin Hubble, who identified the Andromeda nebula as a different galaxy, along with many others at large distances that are receding, moving away from our galaxy.
Physical cosmology, a discipline that has a large intersection with astronomy, made huge advances during the 20th century, with the model of the hot Big Bang heavily supported by the evidence provided by astronomy and physics, such as the redshifts of very distant galaxies and radio sources, the cosmic microwave background radiation, Hubble's law and cosmological abundances of elements.
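Hubble's law relates a galaxy's recession velocity to its distance, v = H0·d. The sketch below uses an assumed round value of H0 = 70 km/s per megaparsec purely for illustration; measured values cluster in the high 60s to low 70s.

```python
# Hubble's law: recession velocity v = H0 * d.
# H0 = 70 km/s per Mpc is an assumed round figure for illustration only.

H0 = 70.0  # km/s per megaparsec (assumed)

def recession_velocity_km_s(distance_mpc: float) -> float:
    return H0 * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:>5} Mpc -> {recession_velocity_km_s(d):>8.0f} km/s")
```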
See also
Age of the universe
Anthropic principle
Astrotheology
Expansion of the universe
Hebrew astronomy
History of astrology
History of Mars observation
History of supernova observation
History of the telescope
Letters on Sunspots
List of astronomers
List of French astronomers
List of Hungarian astronomers
List of Russian astronomers and astrophysicists
List of Slovenian astronomers
List of women astronomers
List of astronomical instrument makers
List of astronomical observatories
Patronage in astronomy
Society for the History of Astronomy
Timeline of astronomy
Timeline of Solar System astronomy
Worship of heavenly bodies
References
Citations
Works cited
Further reading
External links
Astronomy & Empire, BBC Radio 4 discussion with Simon Schaffer, Kristen Lippincott & Allan Chapman (In Our Time, May 4, 2006)
Bibliothèque numérique de l'Observatoire de Paris (Digital library of the Paris Observatory)
Caelum Antiquum: Ancient Astronomy and Astrology Resources on LacusCurtius
Mesoamerican Archaeoastronomy: A Review of Contemporary Understandings of Prehispanic Astronomical Knowledge
UNESCO-IAU Portal to the Heritage of Astronomy
Astronomy
Astronomy | History of astronomy | [
"Astronomy"
] | 10,060 | [
"History of astrology",
"History of astronomy"
] |
14,022 | https://en.wikipedia.org/wiki/Haber%20process | The Haber process, also called the Haber–Bosch process, is the main industrial procedure for the production of ammonia. It converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using finely divided iron metal as a catalyst:
N2 + 3 H2 ⇌ 2 NH3
This reaction is thermodynamically favorable at room temperature, but the kinetics are prohibitively slow. At high temperatures at which catalysts are active enough that the reaction proceeds to equilibrium, the reaction is reactant-favored rather than product-favored. As a result, high pressures are needed to drive the reaction forward.
The German chemists Fritz Haber and Carl Bosch developed the process in the first decade of the 20th century, and its improved efficiency over existing methods such as the Birkeland-Eyde and Frank-Caro processes was a major advancement in the industrial production of ammonia. The Haber process can be combined with steam reforming to produce ammonia with just three chemical inputs: water, natural gas, and atmospheric nitrogen. Both Haber and Bosch were eventually awarded the Nobel Prize in Chemistry: Haber in 1918 for ammonia synthesis specifically, and Bosch in 1931 for related contributions to high-pressure chemistry.
History
During the 19th century, the demand rapidly increased for nitrates and ammonia for use as fertilizers, which supply plants with the nutrients they need to grow, and for industrial feedstocks. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century these reserves were thought insufficient to satisfy future demands, and research into new potential sources of ammonia increased. Although atmospheric nitrogen (N2) is abundant, comprising ~78% of the air, it is exceptionally stable and does not readily react with other chemicals.
Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at a laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from the air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial scale. He succeeded in 1910. Haber and Bosch were later awarded Nobel Prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology.
Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes/day in 1914. During World War I, the production of munitions required large amounts of nitrate. The Allied powers had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. India had large supplies too, but it was also controlled by the British. Moreover, even if German commercial interests had nominal legal control of such resources, the Allies controlled the sea lanes and imposed a highly effective blockade which would have prevented such supplies from reaching Germany. The Haber process proved so essential to the German war effort that it is considered virtually certain Germany would have been defeated in a matter of months without it. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives.
The original Haber–Bosch reaction chambers used osmium as the catalyst, but this was available in extremely small quantities. Haber noted that uranium was almost as effective and easier to obtain than osmium. In 1909, BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst that is still used. A major contributor to the discovery of this catalysis was Gerhard Ertl. The most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3.
During the interwar years, alternative processes were developed, most notably the Casale process, the Claude process, and the Mont-Cenis process developed by the Friedrich Uhde Ingenieurbüro. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Claude proposed to have three or four converters with liquefaction steps in series, thereby avoiding recycling. Most plants continue to use the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization.
Process
Combined with the energy needed to produce hydrogen and purified atmospheric nitrogen, ammonia production is energy-intensive, accounting for 1% to 2% of global energy consumption, 3% of global carbon emissions, and 3% to 5% of natural gas consumption. Hydrogen required for ammonia synthesis is most often produced through gasification of carbon-containing material, mostly natural gas, but other potential carbon sources include coal, petroleum, peat, biomass, or waste. As of 2012, 72% of global ammonia was produced from natural gas using the steam reforming process; in China, by contrast, as of 2022 natural gas and coal accounted for 20% and 75% respectively. Hydrogen can also be produced from water and electricity using electrolysis: at one time, most of Europe's ammonia was produced from the Hydro plant at Vemork. Other possibilities include biological hydrogen production or photolysis, but at present, steam reforming of natural gas is the most economical means of mass-producing hydrogen.
The choice of catalyst is important for synthesizing ammonia. In 2012, Hideo Hosono's group found that Ru-loaded calcium-aluminum oxide C12A7:electride works well as a catalyst and pursued more efficient formation. This method is implemented in a small plant for ammonia synthesis in Japan. In 2019, Hosono's group found another catalyst, a novel perovskite oxynitride-hydride, that works at lower temperature and without costly ruthenium.
Hydrogen production
The major source of hydrogen is methane. Steam reforming of natural gas extracts hydrogen from methane in a high-temperature and pressure tube inside a reformer with a nickel catalyst. Other fossil fuel sources include coal, heavy fuel oil and naphtha.
Green hydrogen is produced without fossil fuels or carbon dioxide emissions from biomass, water electrolysis and thermochemical (solar or another heat source) water splitting.
Starting with a natural gas (CH4) feedstock, the steps are as follows (an overall stoichiometric sketch follows the list):
Remove sulfur compounds from the feedstock, because sulfur deactivates the catalysts used in subsequent steps. Sulfur removal requires catalytic hydrogenation to convert sulfur compounds in the feedstocks to gaseous hydrogen sulfide (hydrodesulfurization, hydrotreating):
H2 + RSH -> RH + H2S
Hydrogen sulfide is adsorbed and removed by passing it through beds of zinc oxide where it is converted to solid zinc sulfide:
H2S + ZnO -> ZnS + H2O
Catalytic steam reforming of the sulfur-free feedstock forms hydrogen plus carbon monoxide:
CH4 + H2O -> CO + 3 H2
Catalytic shift conversion converts the carbon monoxide to carbon dioxide and more hydrogen:
CO + H2O -> CO2 + H2
Carbon dioxide is removed either by absorption in aqueous ethanolamine solutions or by adsorption in pressure swing adsorbers (PSA) using proprietary solid adsorption media.
The final step in producing hydrogen is to use catalytic methanation to remove residual carbon monoxide or carbon dioxide:
CO + 3 H2 -> CH4 + H2O
CO2 + 4 H2 -> CH4 + 2 H2O
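Taken together, steam reforming and the shift reaction ideally yield four moles of hydrogen per mole of methane, and the synthesis reaction consumes three moles of hydrogen per two moles of ammonia. The sketch below only works through that ideal stoichiometry; real plants recover less because of purge losses, methanation of residual carbon oxides and incomplete conversion.

```python
# Ideal stoichiometric bookkeeping for the natural-gas route to ammonia.
#   Reforming:  CH4 + H2O  -> CO  + 3 H2
#   Shift:      CO  + H2O  -> CO2 +   H2
#   Synthesis:  N2  + 3 H2 -> 2 NH3
# This is an upper bound only; purge streams, methanation of CO/CO2 residues
# and incomplete conversion reduce the real yield.

def ideal_ammonia_per_methane(mol_ch4: float) -> float:
    mol_h2 = mol_ch4 * (3 + 1)      # 3 H2 from reforming + 1 H2 from the shift
    return mol_h2 * (2.0 / 3.0)     # 2 NH3 per 3 H2 (nitrogen taken from air)

print(f"{ideal_ammonia_per_methane(1.0):.2f} mol NH3 per mol CH4 (ideal)")  # ~2.67
```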
Ammonia production
The hydrogen is catalytically reacted with nitrogen (derived from process air) to form anhydrous liquid ammonia. This step is difficult and expensive, as lower temperatures result in slower reaction kinetics (hence a slower reaction rate) and high pressure requires high-strength pressure vessels that resist hydrogen embrittlement. Diatomic nitrogen is bound together by a triple bond, which makes it relatively inert. Yield and efficiency are low, meaning that the ammonia must be extracted and the gases reprocessed for the reaction to proceed at an acceptable pace.
This step is known as the ammonia synthesis loop:
3 H2 + N2 -> 2 NH3
The gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass to maintain a reasonable equilibrium constant. On each pass, only about 15% conversion occurs, but unreacted gases are recycled, and eventually conversion of 97% is achieved.
Due to the nature of the (typically multi-promoted magnetite) catalyst used in the ammonia synthesis reaction, only low levels of oxygen-containing (especially CO, CO2 and H2O) compounds can be tolerated in the hydrogen/nitrogen mixture. Relatively pure nitrogen can be obtained by air separation, but additional oxygen removal may be required.
Because of relatively low single pass conversion rates (typically less than 20%), a large recycle stream is required. This can lead to the accumulation of inerts in the gas.
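A toy loop balance shows why recycling works despite the low per-pass conversion: if roughly 15% of the remaining synthesis gas reacts on each pass and the condensed ammonia is removed while the rest is returned, cumulative conversion creeps toward the quoted ~97%. The model below ignores purge streams and inert build-up and uses the article's approximate figures only for illustration.

```python
# Toy model of the synthesis loop: ~15% conversion per pass, ammonia condensed
# out after each pass, unreacted N2/H2 recycled. Purge and inert accumulation
# are ignored, so this is a schematic illustration only.

per_pass_conversion = 0.15
unreacted = 1.0          # fraction of the original feed still unconverted
passes = 0

while 1.0 - unreacted < 0.97:
    unreacted *= (1.0 - per_pass_conversion)
    passes += 1

print(f"~97% overall conversion after about {passes} passes")  # ~22 passes
```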
Nitrogen gas (N2) is unreactive because the atoms are held together by triple bonds. The Haber process relies on catalysts that accelerate the scission of these bonds.
Two opposing considerations are relevant: the equilibrium position and the reaction rate. At room temperature, the equilibrium is in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant decreases with increasing temperature following Le Châtelier's principle. It becomes unity at around .
Above this temperature, the equilibrium quickly becomes unfavorable at atmospheric pressure, according to the Van 't Hoff equation. Lowering the temperature is unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient.
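The temperature dependence can be made explicit with the van 't Hoff relation. Using rough textbook values for the reaction enthalpy and entropy (ΔH° ≈ −92 kJ/mol and ΔS° ≈ −199 J/(mol·K) for N2 + 3 H2 → 2 NH3, quoted here only as approximate illustrative figures):

\[
\frac{d\ln K}{dT} = \frac{\Delta H^{\circ}}{R T^{2}},
\qquad
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ} = -RT\ln K .
\]

Setting ln K = 0 gives T ≈ ΔH°/ΔS° ≈ 92,000/199 ≈ 460 K, a little below 200 °C, which is the crossover temperature referred to above.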
Increased pressure favors the forward reaction because 4 moles of reactant produce 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. The reason for this is evident in the equilibrium relationship:
K_eq = [(φ_NH3 · y_NH3)^2 / ((φ_N2 · y_N2) · (φ_H2 · y_H2)^3)] · (P/P^0)^(−2)
where φ_i is the fugacity coefficient of species i, y_i is the mole fraction of the same species, P is the reactor pressure, and P^0 is the standard pressure.
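Under simplifying assumptions (ideal gas, so all fugacity coefficients set to 1, and the temperature-independent ΔH° and ΔS° figures quoted above), this relationship can be solved numerically for the equilibrium ammonia mole fraction of a stoichiometric 1:3 N2/H2 feed. The sketch below is an order-of-magnitude illustration of the pressure effect, not a design calculation; real mixtures need fugacity corrections and temperature-dependent data.

```python
import math

# Equilibrium NH3 mole fraction for a stoichiometric 1:3 N2/H2 feed,
# assuming ideal-gas behaviour (fugacity coefficients = 1) and constant
# Delta H ~ -92.2 kJ/mol, Delta S ~ -198 J/(mol K). Rough textbook values;
# results are indicative only.

R = 8.314  # J/(mol K)

def equilibrium_nh3_fraction(T_kelvin: float, P_bar: float) -> float:
    dH, dS = -92_200.0, -198.0
    K = math.exp(-(dH - T_kelvin * dS) / (R * T_kelvin))  # dimensionless, P0 = 1 bar
    target = K * P_bar ** 2        # = y_NH3^2 / (y_N2 * y_H2^3) at equilibrium

    def mole_fraction_ratio(x: float) -> float:   # x = fraction of N2 converted
        total = 4.0 - 2.0 * x
        y_nh3 = 2.0 * x / total
        y_n2 = (1.0 - x) / total
        y_h2 = 3.0 * (1.0 - x) / total
        return y_nh3 ** 2 / (y_n2 * y_h2 ** 3)

    lo, hi = 1e-9, 1.0 - 1e-9      # bisection on the conversion x
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mole_fraction_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return 2.0 * x / (4.0 - 2.0 * x)

for P in (1, 100, 200, 300):       # pressures in bar, at 723 K (450 degrees C)
    print(f"{P:>4} bar -> y_NH3 ~ {equilibrium_nh3_fraction(723.0, P):.2f}")
```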
Economically, reactor pressurization is expensive: pipes, valves, and reaction vessels need to be strong enough, and safety considerations affect operating at 20 MPa. Compressors take considerable energy, as work must be done on the (compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%.
While removing the ammonia from the system increases the reaction yield, this step is not used in practice, since the temperature is too high; instead it is removed from the gases leaving the reaction vessel. The hot gases are cooled under high pressure, allowing the ammonia to condense and be removed as a liquid. Unreacted hydrogen and nitrogen gases are returned to the reaction vessel for another round. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream. In academic literature, a more complete separation of ammonia has been proposed by absorption in metal halides, metal-organic frameworks or zeolites. Such a process is called an absorbent-enhanced Haber process or adsorbent-enhanced Haber–Bosch process.
Pressure/temperature
The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at absolute pressures of about 25 to 35 bar, while the ammonia synthesis loop operates at temperatures of and pressures ranging from 60 to 180 bar depending upon the method used. The resulting ammonia must then be separated from the residual hydrogen and nitrogen at temperatures of .
Catalysts
The Haber–Bosch process relies on catalysts to accelerate N2 hydrogenation. The catalysts are heterogeneous solids that interact with gaseous reagents.
The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide.
Iron-based catalysts
The iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is oxidized to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its catalytic effectiveness. Minor components include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by hydrogen.
The production of the catalyst requires a particular melting process in which used raw materials must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt. Rapid cooling of the magnetite, which has an initial temperature of about 3500 °C, produces the desired precursor. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often employed.
The reduction of the precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of wüstite (FeO) so that particles with a core of magnetite become surrounded by a shell of wüstite. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates.
The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced.
During the reduction of the iron oxide with synthesis gas, water vapor is formed. This water vapor must be considered for high catalyst quality as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapor pressure of the water in the gas mixture produced during catalyst formation is therefore kept as low as possible; target values are below 3 g m−3. For this reason, the reduction is carried out at high gas exchange, low pressure, and low temperatures. The exothermic nature of the ammonia formation ensures a gradual increase in temperature.
The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic, and X-ray spectroscopic investigations it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, whereby these diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei. A high-activity novel catalyst based on this phenomenon was discovered in the 1980s at the Zhejiang University of Technology and commercialized by 2003.
Pre-reduced, stabilized catalysts occupy a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of several days. In addition to the short start-up time, they have other advantages such as higher water resistance and lower weight.
Catalysts other than iron
Many efforts have been made to improve the Haber–Bosch process. Many metals were tested as catalysts. The requirement for suitability is the dissociative adsorption of nitrogen (i.e. the nitrogen molecule must be split into nitrogen atoms upon adsorption). If the binding of the nitrogen is too strong, the catalyst is blocked and the catalytic ability is reduced (self-poisoning). The elements in the periodic table to the left of the iron group show such strong bonds. Further, the formation of surface nitrides makes, for example, chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly for ammonia synthesis. Haber initially used catalysts based on osmium and uranium. Uranium reacts to form its nitride during catalysis, while osmium is rare.
According to theoretical and practical studies, improvements over pure iron are limited. The activity of iron catalysts is increased by the inclusion of cobalt.
Ruthenium
Ruthenium forms highly active catalysts. Allowing milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by the decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminium oxide, zeolites, spinels, and boron nitride.
Ruthenium-activated carbon-based catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good choice of carrier. Carriers with acidic properties extract electrons from ruthenium, make it less reactive, and have the undesirable effect of binding ammonia to the surface.
Catalyst poisons
Catalyst poisons lower catalyst activity. They are usually impurities in the synthesis gas. Permanent poisons cause irreversible loss of catalytic activity, while temporary poisons lower the activity while present. Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent poisons. Oxygenic compounds like water, carbon monoxide, carbon dioxide, and oxygen are temporary poisons.
Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not strictly poisons, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn slows conversion.
Industrial production
Synthesis parameters
The reaction is:
N2 + 3 H2 ⇌ 2 NH3
The reaction is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction is obtained from:
Keq = p(NH3)^2 / (p(N2) · p(H2)^3)
Since the reaction is exothermic, the equilibrium of the reaction shifts at lower temperatures to the ammonia side. Furthermore, four volumetric units of the raw materials produce two volumetric units of ammonia. According to Le Chatelier's principle, higher pressure favors ammonia. High pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a ratio of nitrogen to hydrogen of 1 to 3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C and α-iron are optimal.
The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially used reaction temperature of 450 to 550 °C an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved. The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%.
The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process.
Large-scale implementation
Modern ammonia plants produce more than 3000 tons per day in one production line. The following diagram shows the set-up of a modern (designed in the early 1960s by Kellogg) "single-train" Haber–Bosch plant:
Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulfide or organic sulphur compounds, which act as a catalyst poison. High concentrations of hydrogen sulfide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion.
To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol.
The methane gas reacts in the primary reformer only partially. To increase the hydrogen yield and keep the content of inert components (i. e. methane) as low as possible, the remaining methane gas is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as the oxygen source. Also, the required nitrogen for the subsequent ammonia synthesis is added to the gas mixture.
In the third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or water-gas shift reaction.
Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly.
The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases.
The actual production of ammonia takes place in the ammonia reactor. The first reactors burst under the high pressure because the atomic hydrogen in the carbon steel partially recombined into methane and produced cracks in the steel. Bosch, therefore, developed tube reactors consisting of a pressure-bearing steel tube in which a low-carbon iron lining tube was inserted and filled with the catalyst. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high pressure loss, which had to be applied again by compression. The development of hydrogen-resistant chromium-molybdenum steels made it possible to construct single-walled pipes.
Modern ammonia reactors are designed as multi-storey reactors with a low-pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed.
Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to the reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection.
Uhde has developed and is using an ammonia converter with three radial flow catalyst beds and two internal heat exchangers instead of axial flow catalyst beds. This further reduces the pressure drop in the converter.
The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases, and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then compressed back to the process by a circulating gas compressor, supplemented with fresh gas, and fed to the reactor. In a subsequent distillation, the product ammonia is purified.
Mechanism
Elementary steps
The mechanism of ammonia synthesis contains the following seven elementary steps:
transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst.
pore diffusion to the reaction center
adsorption of reactants
reaction
desorption of product
transport of the product through the pore system back to the surface
transport of the product into the gas phase
Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction, and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen. In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis.
In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites – these are iron atoms with seven closest neighbours.
The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst:
N2 → S*–N2 (γ-species) → S*–N2–S* (α-species) → 2 S*–N (β-species, surface nitride)
The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJ mol−1 and an N–N stretch vibration of 2100 cm−1. Since the nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy.
Ab initio MO calculations have shown that, in addition to the σ binding of the free electron pair of nitrogen to the metal, there is a π binding from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron-nitrogen bond. The nitrogen in the α state is more strongly bound, with 31 kJ mol−1. The resulting N–N bond weakening could be experimentally confirmed by a reduction of the wave numbers of the N–N stretching oscillation to 1490 cm−1.
Further heating of the Fe(111) area covered by α-N2 leads to both desorption and the emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it.
Surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad), detected by infrared spectroscopy, are formed; the latter decay with release of NH3 (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and IR spectroscopy.
On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps (see also figure):
N2 (g) → N2 (adsorbed)
N2 (adsorbed) → 2 N (adsorbed)
H2 (g) → H2 (adsorbed)
H2 (adsorbed) → 2 H (adsorbed)
N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed)
NH3 (adsorbed) → NH3 (g)
Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being the slow, rate-determining step. This is not unexpected, since that step breaks the nitrogen triple bond, the strongest of the bonds broken in the process.
As with all Haber–Bosch catalysts, nitrogen dissociation is the rate-determining step for ruthenium-activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used. The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst.
Energy diagram
An energy diagram can be created based on the enthalpy of reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem, as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy, so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate-determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. Although hydrogenation is endothermic, this energy can easily be supplied at the reaction temperature (about 700 K).
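The role of the pre-exponential factor can be illustrated with the Arrhenius form k = A·exp(−Ea/RT). The A and Ea values in the sketch below are purely illustrative assumptions, chosen only to show that a step with a modest barrier but a very small pre-exponential factor can still be the slowest one at the reaction temperature.

```python
import math

# Arrhenius comparison, k = A * exp(-Ea / (R*T)), at about 700 K.
# The prefactors and activation energies below are illustrative assumptions,
# not measured values for the Haber-Bosch system.

R, T = 8.314, 700.0

def rate_constant(A: float, Ea_kj_per_mol: float) -> float:
    return A * math.exp(-Ea_kj_per_mol * 1000.0 / (R * T))

k_dissociation_like = rate_constant(A=1e4, Ea_kj_per_mol=40.0)      # small prefactor, low barrier
k_hydrogenation_like = rate_constant(A=1e13, Ea_kj_per_mol=100.0)   # large prefactor, higher barrier

print(f"dissociation-like step:  k ~ {k_dissociation_like:.2e}")
print(f"hydrogenation-like step: k ~ {k_hydrogenation_like:.2e}")
# The step with the lower barrier is still orders of magnitude slower here,
# because of its unfavorable pre-exponential factor.
```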
Economic and environmental aspects
When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process.
As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer: as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural gas production (around 1–2% of the world's energy supply). In combination with advances in breeding, herbicides, and pesticides, these fertilizers have helped to increase the productivity of agricultural land.
The energy-intensity of the process contributes to climate change and other environmental problems such as the leaching of nitrates into groundwater, rivers, ponds, and lakes; expanding dead zones in coastal ocean waters, resulting from recurrent eutrophication; atmospheric deposition of nitrates and ammonia affecting natural ecosystems; higher emissions of nitrous oxide (N2O), now the third most important greenhouse gas following CO2 and CH4. The Haber–Bosch process is one of the largest contributors to a buildup of reactive nitrogen in the biosphere, causing an anthropogenic disruption to the nitrogen cycle.
Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats.
Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018.
Reverse fuel cell technology converts electric energy, water and nitrogen into ammonia without a separate hydrogen electrolysis process.
The use of synthetic nitrogen fertilisers reduces the incentive for farmers to use more sustainable crop rotations which include legumes for their natural nitrogen-fixing ability.
See also
Crop rotation
Legume
References
Sources
External links
, 29 July 1999.
BASF – Fertilizer out of thin air
Britannica guide to Nobel Prizes: Fritz Haber
Haber Process for Ammonia Synthesis
Haber–Bosch process, most important invention of the 20th century, according to V. Smil, Nature, 29 July 1999, p. 415 (by Jürgen Schmidhuber)
Nobel e-Museum – Biography of Fritz Haber
Uses and Production of Ammonia
BASF
Chemical processes
Industrial processes
Equilibrium chemistry
Peak oil
Catalysis
History of mining in Chile
German inventions
Industrial gases
Name reactions
Fritz Haber
1909 in science
1909 in Germany | Haber process | [
"Chemistry"
] | 7,081 | [
"Catalysis",
"Equilibrium chemistry",
"Name reactions",
"Chemical processes",
"Industrial gases",
"nan",
"Chemical process engineering",
"Chemical kinetics"
] |
14,029 | https://en.wikipedia.org/wiki/Histone | In biology, histones are highly basic proteins abundant in lysine and arginine residues that are found in eukaryotic cell nuclei and in most Archaeal phyla. They act as spools around which DNA winds to create structural units called nucleosomes. Nucleosomes in turn are wrapped into 30-nanometer fibers that form tightly packed chromatin. Histones prevent DNA from becoming tangled and protect it from DNA damage. In addition, histones play important roles in gene regulation and DNA replication. Without histones, unwound DNA in chromosomes would be very long. For example, each human cell has about 1.8 meters of DNA if completely stretched out; however, when wound about histones, this length is reduced to about 90 micrometers (0.09 mm) of 30 nm diameter chromatin fibers.
There are five families of histones, which are designated H1/H5 (linker histones) and H2A, H2B, H3, and H4 (core histones). The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer. The tight wrapping of DNA around histones is, to a large degree, a result of electrostatic attraction between the positively charged histones and the negatively charged phosphate backbone of DNA.
Histones may be chemically modified through the action of enzymes to regulate gene transcription. The most common modifications are the methylation of arginine or lysine residues or the acetylation of lysine. Methylation can affect how other proteins such as transcription factors interact with the nucleosomes. Lysine acetylation eliminates a positive charge on lysine thereby weakening the electrostatic attraction between histone and DNA, resulting in partial unwinding of the DNA, making it more accessible for gene expression.
Classes and variants
Five major families of histone proteins exist: H1/H5, H2A, H2B, H3, and H4. Histones H2A, H2B, H3 and H4 are known as the core or nucleosomal histones, while histones H1/H5 are known as the linker histones.
The core histones all exist as dimers, which are similar in that they all possess the histone fold domain: three alpha helices linked by two loops. It is this helical structure that allows for interaction between distinct dimers, particularly in a head-tail fashion (also called the handshake motif). The resulting four distinct dimers then come together to form one octameric nucleosome core, approximately 63 Angstroms in diameter (a solenoid-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher order structure. The most basic such formation is the 10 nm fiber or beads on a string conformation. This involves the wrapping of DNA around nucleosomes with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins.
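The wrapping figures above allow a rough back-of-the-envelope estimate of how many nucleosomes package a genome. The genome size used below is an assumed round number for a diploid human cell, and the repeat length simply adds the ~146 bp core wrap to ~50 bp of linker DNA; the result is an order-of-magnitude illustration only.

```python
# Rough nucleosome count for a diploid human nucleus, combining the ~146 bp
# core wrap and ~50 bp linker quoted above. The diploid genome size is an
# assumed round figure; this is an order-of-magnitude estimate only.

core_bp = 146
linker_bp = 50
repeat_bp = core_bp + linker_bp          # ~196 bp of DNA per nucleosome repeat

diploid_genome_bp = 6.4e9                # assumed (~2 x 3.2 Gbp)
nucleosomes = diploid_genome_bp / repeat_bp

print(f"~{nucleosomes:.1e} nucleosomes per diploid nucleus")  # ~3e7
```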
Histones are subdivided into canonical replication-dependent histones, whose genes are expressed during the S-phase of the cell cycle, and replication-independent histone variants, which are expressed throughout the cell cycle. In mammals, genes encoding canonical histones are typically clustered along chromosomes in 4 different highly-conserved loci, lack introns and use a stem loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants, highlighting the functional links between variants and the delicate regulation of organism development. Histone variant proteins from different organisms, their classification and variant-specific features can be found in the "HistoneDB 2.0 - Variants" database. Several pseudogenes have also been discovered and identified; their sequences are very close to those of their respective functional ortholog genes.
The following is a list of human histone proteins, genes and pseudogenes:
Structure
The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The 4 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (a DNA-binding protein motif that recognizes specific DNA sequences). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below).
Archaeal histone only contains a H3-H4 like dimeric structure made out of a single type of unit. Such dimeric structures can stack into a tall superhelix ("hypernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails.
The distance between the spools around which eukaryotic cells wind their DNA has been determined to range from 59 to 70 Å.
In all, histones make five types of interactions with DNA:
Salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA
Helix dipoles from the alpha-helices in H2B, H3, and H4 cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA
Hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins
Nonpolar interactions between the histone and deoxyribose sugars on DNA
Non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule
The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility.
Histones are subject to post translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function of gene regulation.
In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues.
Evolution and species distribution
Core histones are found in the nuclei of eukaryotic cells and in most Archaeal phyla, but not in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones, but later studies showed that their DNA still encodes histone genes. Unlike the core histones, homologs of the lysine-rich linker histone (H1) proteins are found in bacteria, otherwise known as nucleoprotein HC1/HC2.
It has been proposed that core histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. It is also proposed that they may have evolved from ribosomal proteins (RPS6/RPS15), both being short and basic proteins.
Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio.
There are some variant forms in some of the major classes. They share amino acid sequence homology and core structural similarity to a specific class of major histones but also have their own feature that is distinct from the major histones. These minor histones usually carry out specific functions of the chromatin metabolism. For example, histone H3-like CENPA is associated with only the centromere region of the chromosome. Histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and also involved in the prevention of the spread of silent heterochromatin. Furthermore, H2A.Z has roles in chromatin for genome stability. Another H2A variant H2A.X is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes.
Function
Compacting DNA strands
Histones act as spools around which DNA winds. This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule.
Chromatin regulation
Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications, known as histone marks, are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis).
The common nomenclature of histone modifications is:
The name of the histone (e.g., H3)
The single-letter amino acid abbreviation (e.g., K for Lysine) and the amino acid position in the protein
The type of modification (Me: methyl, P: phosphate, Ac: acetyl, Ub: ubiquitin)
The number of modifications (only Me is known to occur in more than one copy per residue; 1, 2 or 3 denotes mono-, di- or tri-methylation)
So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein.
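The nomenclature is regular enough to be parsed mechanically. The sketch below is an illustrative helper written for the naming scheme described above (handling the lowercase forms such as me, ac, ub and ph that appear in practice); it is not an established bioinformatics library.

```python
import re

# Illustrative parser for histone-mark names such as "H3K4me1" or "H3K27ac",
# following the nomenclature described above. A sketch only, not a standard
# library or an official specification of the notation.

MARK_RE = re.compile(
    r"^(?P<histone>H[1-5][AB]?)"            # histone name, e.g. H3, H2A, H2B
    r"(?P<residue>[A-Z])(?P<position>\d+)"  # one-letter amino acid + position
    r"(?P<mod>me|ac|ub|ph|p)"               # modification type
    r"(?P<count>\d)?$",                     # optional multiplicity (me1/me2/me3)
    re.IGNORECASE,
)

def parse_mark(mark: str) -> dict:
    m = MARK_RE.match(mark)
    if not m:
        raise ValueError(f"unrecognised histone mark: {mark}")
    fields = m.groupdict()
    fields["position"] = int(fields["position"])
    fields["count"] = int(fields["count"]) if fields["count"] else 1
    return fields

print(parse_mark("H3K4me1"))   # histone H3, lysine 4, monomethylation
print(parse_mark("H3K27ac"))   # histone H3, lysine 27, acetylation
```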
Modification
A huge catalogue of histone modifications has been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concerns individual prominent histone modifications that are biochemically amenable to detailed study.
Chemistry
Lysine methylation
The addition of one, two, or many methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono-, di- and tri-methyl lysine, to the extent that, for some lysines (e.g., H4K20), mono-, di- and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and dominates the known histone modification functions.
Glutamine serotonylation
It has recently been shown that the addition of a serotonin group to the glutamine at position 5 of H3 occurs in serotonergic cells such as neurons. This is part of the differentiation of serotonergic cells. This post-translational modification occurs in conjunction with the H3K4me3 modification. The serotonylation potentiates the binding of the general transcription factor TFIID to the TATA box.
Arginine methylation
What was said above of the chemistry of lysine methylation also applies to arginine methylation, and some protein domains—e.g., Tudor domains—can be specific for methyl arginine instead of methyl lysine. Arginine is known to be mono- or di-methylated, and methylation can be symmetric or asymmetric, potentially with different meanings.
Arginine citrullination
Enzymes called peptidylarginine deiminases (PADs) hydrolyze the imine group of arginines and attach a keto group, so that there is one less positive charge on the amino acid residue. This process has been implicated in the activation of gene expression by making the modified histones less tightly bound to DNA and thus making the chromatin more accessible. PADs can also produce the opposite effect by removing or inhibiting mono-methylation of arginine residues on histones and thus antagonizing the positive effect arginine methylation has on transcriptional activity.
Lysine acetylation
Addition of an acetyl group has a major chemical effect on lysine as it neutralises the positive charge. This reduces electrostatic attraction between the histone and the negatively charged DNA backbone, loosening the chromatin structure; highly acetylated histones form more accessible chromatin and tend to be associated with active transcription. Lysine acetylation appears to be less precise in meaning than methylation, in that histone acetyltransferases tend to act on more than one lysine; presumably this reflects the need to alter multiple lysines to have a significant effect on chromatin structure. Examples of this modification include H3K27ac.
Serine/threonine/tyrosine phosphorylation
Addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterised role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification, and binding domains such as BRCT have been characterised.
Effects on transcription
Most well-studied histone modifications are involved in control of transcription.
Actively transcribed genes
Two histone modifications are particularly associated with active transcription:
Trimethylation of H3 lysine 4 (H3K4me3) This trimethylation occurs at the promoter of active genes and is performed by the COMPASS complex. Despite the conservation of this complex and histone modification from yeast to mammals, it is not entirely clear what role this modification plays. However, it is an excellent mark of active promoters and the level of this histone modification at a gene's promoter is broadly correlated with transcriptional activity of the gene. The formation of this mark is tied to transcription in a rather convoluted manner: early in transcription of a gene, RNA polymerase II undergoes a switch from 'initiating' to 'elongating', marked by a change in the phosphorylation states of the RNA polymerase II C-terminal domain (CTD). The same enzyme that phosphorylates the CTD also phosphorylates the Rad6 complex, which in turn adds a ubiquitin mark to H2B K123 (K120 in mammals). H2BK123Ub occurs throughout transcribed regions, but this mark is required for COMPASS to trimethylate H3K4 at promoters.
Trimethylation of H3 lysine 36 (H3K36me3) This trimethylation occurs in the body of active genes and is deposited by the methyltransferase Set2. This protein associates with elongating RNA polymerase II, and H3K36Me3 is indicative of actively transcribed genes. H3K36Me3 is recognised by the Rpd3 histone deacetylase complex, which removes acetyl modifications from surrounding histones, increasing chromatin compaction and repressing spurious transcription. Increased chromatin compaction prevents transcription factors from accessing DNA, and reduces the likelihood of new transcription events being initiated within the body of the gene. This process therefore helps ensure that transcription is not interrupted.
Repressed genes
Three histone modifications are particularly associated with repressed genes:
Trimethylation of H3 lysine 27 (H3K27me3) This histone modification is deposited by the polycomb complex PRC2. It is a clear marker of gene repression, and is likely bound by other proteins to exert a repressive function. Another polycomb complex, PRC1, can bind H3K27me3 and adds the histone modification H2AK119Ub, which aids chromatin compaction. Based on these data it appears that PRC1 is recruited through the action of PRC2; however, recent studies show that PRC1 is recruited to the same sites in the absence of PRC2.
Di and tri-methylation of H3 lysine 9 (H3K9me2/3) H3K9me2/3 is a well-characterised marker for heterochromatin, and is therefore strongly associated with gene repression. The formation of heterochromatin has been best studied in the yeast Schizosaccharomyces pombe, where it is initiated by recruitment of the RNA-induced transcriptional silencing (RITS) complex to double stranded RNAs produced from centromeric repeats. RITS recruits the Clr4 histone methyltransferase which deposits H3K9me2/3. This process is called histone methylation. H3K9Me2/3 serves as a binding site for the recruitment of Swi6 (heterochromatin protein 1 or HP1, another classic heterochromatin marker) which in turn recruits further repressive activities including histone modifiers such as histone deacetylases and histone methyltransferases.
Trimethylation of H4 lysine 20 (H4K20me3) This modification is tightly associated with heterochromatin, although its functional importance remains unclear. This mark is placed by the Suv4-20h methyltransferase, which is at least in part recruited by heterochromatin protein 1.
Bivalent promoters
Analysis of histone modifications in embryonic stem cells (and other stem cells) revealed many gene promoters carrying both H3K4Me3 and H3K27Me3; in other words, these promoters display both activating and repressing marks simultaneously. This peculiar combination of modifications marks genes that are poised for transcription; they are not needed in stem cells but are rapidly required after differentiation into certain lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage.
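The transcription-related associations described in this section can be condensed into a simple lookup structure. The sketch below (in Python) only restates the marks discussed above; it is not an exhaustive or authoritative mapping, and the dictionary and function names are invented for this illustration.

# Illustrative summary of the marks discussed in this section only.
HISTONE_MARK_ASSOCIATIONS = {
    "H3K4me3": "active promoters (deposited by the COMPASS complex)",
    "H3K36me3": "bodies of actively transcribed genes (deposited by Set2)",
    "H3K27me3": "repressed genes (deposited by the polycomb complex PRC2)",
    "H3K9me2/3": "heterochromatin and gene repression",
    "H4K20me3": "heterochromatin (placed by the Suv4-20h methyltransferase)",
}

def describe_mark(mark: str) -> str:
    """Return the association given in this section, if any."""
    return HISTONE_MARK_ASSOCIATIONS.get(mark, "not discussed in this section")

# Bivalent promoters in stem cells carry both H3K4me3 and H3K27me3 at once.
print(describe_mark("H3K27me3"))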
Other functions
DNA damage repair
Marking sites of DNA damage is an important function of histone modifications. Without such markers, damage from sources such as the ultraviolet radiation of the sun would accumulate in DNA and go unrepaired.
Phosphorylation of H2AX at serine 139 (γH2AX) Phosphorylated H2AX (also known as gamma H2AX) is a marker for DNA double strand breaks, and forms part of the response to DNA damage. H2AX is phosphorylated early after detection of a DNA double strand break, and forms a domain extending many kilobases either side of the damage. Gamma H2AX acts as a binding site for the protein MDC1, which in turn recruits key DNA repair proteins (a complex topic reviewed in detail elsewhere), and as such, gamma H2AX forms a vital part of the machinery that ensures genome stability.
Acetylation of H3 lysine 56 (H3K56Ac) H3K56Ac is required for genome stability. H3K56 is acetylated by the p300/Rtt109 complex, but is rapidly deacetylated around sites of DNA damage. H3K56 acetylation is also required to stabilise stalled replication forks, preventing dangerous replication fork collapses. Although in general mammals make far greater use of histone modifications than microorganisms, a major role of H3K56Ac in DNA replication exists only in fungi, and this has become a target for antibiotic development.
Trimethylation of H3 lysine 36 (H3K36me3)
H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate fewer somatic mutations due to mismatch repair activity.
Chromosome condensation
Phosphorylation of H3 at serine 10 (phospho-H3S10) The mitotic kinase aurora B phosphorylates histone H3 at serine 10, triggering a cascade of changes that mediate mitotic chromosome condensation. Condensed chromosomes therefore stain very strongly for this mark, but H3S10 phosphorylation is also present at certain chromosome sites outside mitosis, for example in pericentric heterochromatin of cells during G2. H3S10 phosphorylation has also been linked to DNA damage caused by R-loop formation at highly transcribed sites.
Phosphorylation of H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late-acting bystander in apoptosis, as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death.
Addiction
Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long lasting "molecular scars" that may account for the persistence of addictions.
Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction.
About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol.
Methamphetamine addiction occurs in about 0.2% of the US population. Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction.
Synthesis
The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis.
Yeast
Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout the chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins such as transcription factors which bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc28, a G1/S Cdk. Suppression of histone gene expression outside of S phase is dependent on Hir proteins, which form inactive chromatin structure at the histone gene loci, blocking transcriptional activators.
Metazoan
In metazoans the increase in the rate of histone synthesis is due to an increase in the processing of pre-mRNA to its mature form as well as a decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism of mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and to be degraded as cells leave S phase. SLBP is marked for degradation by phosphorylation at two threonine residues by cyclin-dependent kinases, possibly cyclin A/Cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes, which are localized in structures called Cajal bodies, as determined by genome-wide chromosome conformation capture analysis (4C-Seq).
Link between cell-cycle control and synthesis
Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase. NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis.
History
Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”).
In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul Ts'o and James Bonner called together a World Congress on Histone Chemistry and Biology in 1964, at which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found the Histone IV sequence to be highly conserved between peas and calf thymus. However, their work on the biochemical characteristics of individual histones did not reveal how the histones interacted with each other or with DNA to which they were tightly bound.
Also in the 1960s, Vincent Allfrey and Alfred Mirsky had suggested, based on their analyses of histones, that acetylation and methylation of histones could provide a transcriptional control mechanism, but did not have available the kind of detailed analysis that later investigators were able to conduct to show how such regulation could be gene-specific. Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, a view based in part on the models of Mark Ptashne and others, who believed that transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria.
During the 1980s, Yahli Lorch and Roger Kornberg showed that a nucleosome on a core promoter prevents the initiation of transcription in vitro, and Michael Grunstein demonstrated that histones repress transcription in vivo, leading to the idea of the nucleosome as a general gene repressor. Relief from repression is believed to involve both histone modification and the action of chromatin-remodeling complexes. Vincent Allfrey and Alfred Mirsky had earlier proposed a role of histone modification in transcriptional activation, regarded as a molecular manifestation of epigenetics. Michael Grunstein and David Allis found support for this proposal, in the importance of histone acetylation for transcription in yeast and the activity of the transcriptional activator Gcn5 as a histone acetyltransferase.
The discovery of the H5 histone appears to date back to the 1970s, and it is now considered an isoform of Histone H1.
See also
Histone variants
Chromatin
Gene silencing
Genetics
Histone acetyltransferase
Histone deacetylases
Histone methyltransferase
Histone-modifying enzymes
Nucleosome
PRMT4 pathway
Protamine
Histone H1
References
External links
HistoneDB 2.0 - Database of histones and variants at NCBI
Chromatin, Histones & Cathepsin; PMAP The Proteolysis Map-animation
Epigenetics
Proteins
DNA-binding proteins | Histone | [
"Chemistry"
] | 6,713 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
14,034 | https://en.wikipedia.org/wiki/Heroin | Heroin, also known as diacetylmorphine and diamorphine among other names, is a morphinan opioid substance synthesized from the dried latex of the opium poppy; it is mainly used as a recreational drug for its euphoric effects. Heroin is used medically in several countries to relieve pain, such as during childbirth or a heart attack, as well as in opioid replacement therapy. Medical-grade diamorphine is used as a pure hydrochloride salt. Various white and brown powders sold illegally around the world as heroin are routinely diluted with cutting agents. Black tar heroin is a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is the result of crude acetylation during clandestine production of street heroin.
Heroin is typically injected, usually into a vein, but it can also be snorted, smoked, or inhaled. In a clinical context, the route of administration is most commonly intravenous injection; it may also be given by intramuscular or subcutaneous injection, as well as orally in the form of tablets. The onset of effects is usually rapid and lasts for a few hours.
Common side effects include respiratory depression (decreased breathing), dry mouth, drowsiness, impaired mental function, constipation, and addiction. Use by injection can also result in abscesses, infected heart valves, blood-borne infections, and pneumonia. After a history of long-term use, opioid withdrawal symptoms can begin within hours of the last use. When given by injection into a vein, heroin has two to three times the effect of a similar dose of morphine. It typically appears in the form of a white or brown powder.
Treatment of heroin addiction often includes behavioral therapy and medications. Medications can include buprenorphine, methadone, or naltrexone. A heroin overdose may be treated with naloxone. As of 2015, an estimated 17 million people use opiates, of which heroin is the most common, and opioid use resulted in 122,000 deaths; also, as of 2015, the total number of heroin users worldwide is believed to have increased in Africa, the Americas, and Asia since 2000. In the United States, approximately 1.6 percent of people have used heroin at some point. When people die from overdosing on a drug, the drug is usually an opioid and often heroin.
Heroin was first made by C. R. Alder Wright in 1874 from morphine, a natural product of the opium poppy. Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs, and it is generally illegal to make, possess, or sell without a license. About 448 tons of heroin were made in 2016. In 2015, Afghanistan produced about 66% of the world's opium. Illegal heroin is often mixed with other substances such as sugar, starch, caffeine, quinine, or other opioids like fentanyl.
Uses
Recreational
Bayer's original trade name of heroin is typically used in non-medical settings. It is used as a recreational drug for the euphoria it induces. Anthropologist Michael Agar once described heroin as "the perfect whatever drug." Tolerance develops quickly, and increased doses are needed in order to achieve the same effects. Its popularity with recreational drug users, compared to morphine, reportedly stems from its perceived different effects.
Short-term addiction studies by the same researchers demonstrated that tolerance developed at a similar rate to both heroin and morphine. When compared to the opioids hydromorphone, fentanyl, oxycodone, and pethidine (meperidine), former addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine are particularly susceptible to misuse and causing dependence. Morphine and heroin were also much more likely to produce euphoria and other positive subjective effects when compared to these other opioids.
Medical uses
In the United States, heroin is not accepted as medically useful.
Under the generic name diamorphine, heroin is prescribed as a strong pain medication in the United Kingdom, where it is administered via oral, subcutaneous, intramuscular, intrathecal, intranasal or intravenous routes. It may be prescribed for the treatment of acute pain, such as in severe physical trauma, myocardial infarction, post-surgical pain and chronic pain, including end-stage terminal illnesses. In other countries it is more common to use morphine or other strong opioids in these situations. In 2004, the National Institute for Health and Clinical Excellence produced guidance on the management of caesarean section, which recommended the use of intrathecal or epidural diamorphine for post-operative pain relief. For women who have had intrathecal opioids, there should be a minimum hourly observation of respiratory rate, sedation and pain scores for at least 12 hours for diamorphine and 24 hours for morphine. Women should be offered diamorphine (0.3–0.4 mg intrathecally) for intra- and postoperative analgesia because it reduces the need for supplemental analgesia after a caesarean section. Epidural diamorphine (2.5–5 mg) is a suitable alternative.
Diamorphine continues to be widely used in palliative care in the UK, where it is commonly given by the subcutaneous route, often via a syringe driver if patients cannot easily swallow morphine solution. The advantage of diamorphine over morphine is that diamorphine is more fat soluble and therefore more potent by injection, so smaller doses of it are needed for the same effect on pain. Both of these factors are advantageous if giving high doses of opioids via the subcutaneous route, which is often necessary for palliative care.
It is also used in the palliative management of bone fractures and other trauma, especially in children. In the trauma context, it is primarily given intranasally in hospital. Although a prepared nasal spray is available, it has traditionally been made up by the attending physician, generally from the same "dry" ampoules as used for injection. In children, Ayendi nasal spray is available at 720 micrograms and 1600 micrograms per 50 microlitres actuation of the spray, which may be preferable as a non-invasive alternative in pediatric care, avoiding the fear of injection in children.
Maintenance therapy
A number of European countries prescribe heroin for treatment of heroin addiction. The initial Swiss HAT (heroin-assisted treatment) trial ("PROVE" study) was conducted as a prospective cohort study with some 1,000 participants in 18 treatment centers between 1994 and 1996; by the end of 2004, 1,200 patients were enrolled in HAT in 23 treatment centers across Switzerland. Diamorphine may be used as a maintenance drug to assist the treatment of opiate addiction, normally in long-term chronic intravenous (IV) heroin users. It is only prescribed following exhaustive efforts at treatment via other means. It is sometimes thought that heroin users can walk into a clinic and walk out with a prescription, but the process takes many weeks before a prescription for diamorphine is issued. Though this is somewhat controversial among proponents of a zero-tolerance drug policy, it has proven superior to methadone in improving the social and health situations of addicts.
The UK Department of Health's Rolleston Committee Report in 1926 established the British approach to diamorphine prescription to users, which was maintained for the next 40 years: dealers were prosecuted, but doctors could prescribe diamorphine to users when withdrawing. In 1964, the Brain Committee recommended that only selected approved doctors working at approved specialized centres be allowed to prescribe diamorphine and cocaine to users. The law was made more restrictive in 1968. Beginning in the 1970s, the emphasis shifted to abstinence and the use of methadone; currently, only a small number of users in the UK are prescribed diamorphine.
In 1994, Switzerland began a trial diamorphine maintenance program for users that had failed multiple withdrawal programs. The aim of this program was to maintain the health of the user by avoiding medical problems stemming from the illicit use of diamorphine. The first trial in 1994 involved 340 users, although enrollment was later expanded to 1000, based on the apparent success of the program. The trials proved diamorphine maintenance to be superior to other forms of treatment in improving the social and health situation for this group of patients. It has also been shown to save money, despite high treatment expenses, as it significantly reduces costs incurred by trials, incarceration, health interventions and delinquency. Patients appear twice daily at a treatment center, where they inject their dose of diamorphine under the supervision of medical staff. They are required to contribute about 450 Swiss francs per month to the treatment costs. A national referendum in November 2008 showed 68% of voters supported the plan, introducing diamorphine prescription into federal law. The previous trials were based on time-limited executive ordinances. The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own diamorphine prescription programs. Some Australian cities (such as Sydney) have instituted legal diamorphine supervised injecting centers, in line with other wider harm minimization programs.
Since January 2009, Denmark has prescribed diamorphine to a few addicts who have tried methadone and buprenorphine without success. Beginning in February 2010, addicts in Copenhagen and Odense became eligible to receive free diamorphine. Later in 2010, other cities including Århus and Esbjerg joined the scheme. It was estimated that around 230 addicts would be able to receive free diamorphine.
However, Danish addicts would only be able to inject heroin according to the policy set by Danish National Board of Health. Of the estimated 1500 drug users who did not benefit from the then-current oral substitution treatment, approximately 900 would not be in the target group for treatment with injectable diamorphine, either because of "massive multiple drug abuse of non-opioids" or "not wanting treatment with injectable diamorphine".
In July 2009, the German Bundestag passed a law allowing diamorphine prescription as a standard treatment for addicts; a large-scale trial of diamorphine prescription had been authorized in the country in 2002.
On 26 August 2016, Health Canada issued regulations amending prior regulations it had issued under the Controlled Drugs and Substances Act; the "New Classes of Practitioners Regulations", the "Narcotic Control Regulations", and the "Food and Drug Regulations", to allow doctors to prescribe diamorphine to people who have a severe opioid addiction who have not responded to other treatments. The prescription heroin can be accessed by doctors through Health Canada's Special Access Programme (SAP) for "emergency access to drugs for patients with serious or life-threatening conditions when conventional treatments have failed, are unsuitable, or are unavailable."
Routes of administration
The onset of heroin's effects depends upon the route of administration. Smoking is the fastest route of drug administration, although intravenous injection results in a quicker rise in blood concentration. These are followed by suppository (anal or vaginal insertion), insufflation (snorting), and ingestion (swallowing).
A 2002 study suggests that a fast onset of action increases the reinforcing effects of addictive drugs. Ingestion does not produce a rush as a forerunner to the high experienced with the use of heroin, which is most pronounced with intravenous use. While the onset of the rush induced by injection can occur in as little as a few seconds, the oral route of administration requires approximately half an hour before the high sets in. Thus, the higher the dose of heroin and the faster the route of administration, the higher the potential risk of psychological dependence/addiction.
Large doses of heroin can cause fatal respiratory depression, and the drug has been used for suicide or as a murder weapon. The serial killer Harold Shipman used diamorphine on his victims, and the subsequent Shipman Inquiry led to a tightening of the regulations surrounding the storage, prescribing and destruction of controlled drugs in the UK.
Because significant tolerance to respiratory depression develops quickly with continued use and is lost just as quickly during withdrawal, it is often difficult to determine whether a heroin lethal overdose was accidental, suicide or homicide. Examples include the overdose deaths of Sid Vicious, Janis Joplin, Tim Buckley, Hillel Slovak, Layne Staley, Bradley Nowell, Ted Binion, and River Phoenix.
By mouth
Use of heroin by mouth is less common than other methods of administration, mainly because there is little to no "rush", and the effects are less potent. When ingested, heroin is entirely converted to morphine by first-pass metabolism (deacetylation). Heroin's oral bioavailability is both dose-dependent (as is morphine's) and significantly higher than oral use of morphine itself, reaching up to 64.2% for high doses and 45.6% for low doses; opiate-naive users showed far less absorption of the drug at low doses, having bioavailabilities of only up to 22.9%. The maximum plasma concentration of morphine following oral administration of heroin was around twice as much as that of oral morphine.
Injection
Injection, also known as "slamming", "banging", "shooting up", "digging" or "mainlining", is a popular method which carries relatively greater risks than other methods of administration. Heroin base (commonly found in Europe), when prepared for injection, will only dissolve in water when mixed with an acid (most commonly citric acid powder or lemon juice) and heated. Heroin in the east-coast United States is most commonly found in the hydrochloride salt form, requiring just water (and no heat) to dissolve. Users tend to initially inject in the easily accessible arm veins, but as these veins collapse over time, users resort to more dangerous areas of the body, such as the femoral vein in the groin. Some medical professionals have expressed concern over this route of administration, as they suspect that it can lead to deep vein thrombosis.
Intravenous users inject a wide range of single doses using a hypodermic needle. The dose of heroin used for recreational purposes depends on the frequency and level of use.
As with the injection of any drug, if a group of users share a common needle without sterilization procedures, blood-borne diseases, such as HIV/AIDS or hepatitis, can be transmitted.
The use of a common water container in the preparation of the injection, as well as the sharing of spoons and filters, can also spread blood-borne diseases. Many countries now supply small sterile spoons and filters for single use in order to prevent the spread of disease.
Smoking
Smoking heroin refers to vaporizing it to inhale the resulting fumes, rather than burning and inhaling the smoke. It is commonly smoked in glass pipes made from glassblown Pyrex tubes and light bulbs. Heroin may be smoked from aluminium foil that is heated by a flame underneath it, with the resulting smoke inhaled through a tube of rolled up foil, a method also known as "chasing the dragon".
Insufflation
Another popular route of heroin intake is insufflation (snorting), where a user crushes the heroin into a fine powder and then gently inhales it (sometimes with a straw or a rolled-up banknote, as with cocaine) into the nose, where heroin is absorbed through the soft tissue in the mucous membrane of the sinus cavity and straight into the bloodstream. This method of administration bypasses first-pass metabolism, with a quicker onset and higher bioavailability than oral administration, though the duration of action is shortened. This method is sometimes preferred by users who do not want to prepare and administer heroin for injection or smoking but still want to experience a fast onset. Snorting heroin becomes an often unwanted route once a user begins to inject the drug. The user may still get high on the drug from snorting, and experience a nod, but will not get a rush. A "rush" is caused by a large amount of heroin entering the body at once. When the drug is taken in through the nose, the user does not get the rush because the drug is absorbed slowly rather than instantly.
Heroin for pain has been mixed with sterile water on site by the attending physician, and administered using a syringe with a nebulizer tip. Heroin may be used for fractures, burns, finger-tip injuries, suturing, and wound re-dressing, but is inappropriate in head injuries.
Suppository
Little research has been focused on the suppository (anal insertion) or pessary (vaginal insertion) methods of administration, also known as "plugging". These methods of administration are commonly carried out using an oral syringe. Heroin can be dissolved and withdrawn into an oral syringe which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. The rectum or the vaginal canal is where the majority of the drug would likely be taken up, through the membranes lining their walls.
Adverse effects
Heroin is classified as a hard drug in terms of drug harmfulness. Like most opioids, unadulterated heroin may lead to adverse effects. The purity of street heroin varies greatly, leading to overdoses when the purity is higher than expected.
Short-term effects
Users report an intense rush, an acute transcendent state of euphoria, which occurs while diamorphine is being metabolized into 6-monoacetylmorphine (6-MAM) and morphine in the brain. Some believe that heroin produces more euphoria than other opioids; one possible explanation is the presence of 6-monoacetylmorphine, a metabolite unique to heroin – although a more likely explanation is the rapidity of onset. While other recreationally used opioids produce only morphine, heroin also yields 6-MAM, itself a psychoactive metabolite.
However, this perception is not supported by the results of clinical studies comparing the physiological and subjective effects of injected heroin and morphine in individuals formerly addicted to opioids; these subjects showed no preference for one drug over the other. Equipotent injected doses had comparable action courses, with no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness.
The rush is usually accompanied by a warm flushing of the skin, dry mouth, and a heavy feeling in the extremities. Nausea, vomiting, and severe itching may also occur. After the initial effects, users usually will be drowsy for several hours; mental function is clouded; heart function slows, and breathing is also severely slowed, sometimes enough to be life-threatening. Slowed breathing can also lead to coma and permanent brain damage. Heroin use has also been associated with myocardial infarction.
Long-term effects
Repeated heroin use changes the physical structure and physiology of the brain, creating long-term imbalances in neuronal and hormonal systems that are not easily reversed. Studies have shown some deterioration of the brain's white matter due to heroin use, which may affect decision-making abilities, the ability to regulate behavior, and responses to stressful situations. Heroin also produces profound degrees of tolerance and physical dependence. Tolerance occurs when more and more of the drug is required to achieve the same effects. With physical dependence, the body adapts to the presence of the drug, and withdrawal symptoms occur if use is reduced abruptly.
Injection
Intravenous use of heroin (and any other substance) with needles and syringes or other related equipment may lead to:
Contracting blood-borne pathogens such as HIV and hepatitis via the sharing of needles
Contracting bacterial or fungal endocarditis and possibly venous sclerosis
Abscesses
Poisoning from contaminants added to "cut" or dilute heroin
Decreased kidney function (nephropathy), although it is not currently known if this is because of adulterants or infectious diseases
Withdrawal
The withdrawal syndrome from heroin may begin within as little as two hours of discontinuation of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose, and more typically begins within 6–24 hours after cessation. Symptoms may include sweating, malaise, anxiety, depression, akathisia, priapism, extra sensitivity of the genitals in females, general feeling of heaviness, excessive yawning or sneezing, rhinorrhea, insomnia, cold sweats, chills, severe muscle and bone aches, nausea, vomiting, diarrhea, cramps, watery eyes, fever, cramp-like pains, and involuntary spasms in the limbs (thought to be an origin of the term "kicking the habit").
Overdose
Heroin overdose is usually treated with the opioid antagonist naloxone. This reverses the effects of heroin and causes an immediate return of consciousness but may result in withdrawal symptoms. The half-life of naloxone is shorter than that of some opioids, such that it may need to be given multiple times until the opioid has been metabolized by the body.
Between 2012 and 2015, heroin was the leading cause of drug-related deaths in the United States. Since then, fentanyl has been a more common cause of drug-related deaths.
Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours. Death usually occurs due to lack of oxygen resulting from the lack of breathing caused by the opioid. Heroin overdoses can occur because of an unexpected increase in the dose or purity or because of diminished opioid tolerance. However, many fatalities reported as overdoses are probably caused by interactions with other depressant drugs such as alcohol or benzodiazepines. Since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomit by an unconscious person. Some sources quote the median lethal dose (for an average 75 kg opiate-naive individual) as being between 75 and 600 mg. Illicit heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, tolerance typically decreases after a period of abstinence. If this occurs and the user takes a dose comparable to their previous use, the user may experience drug effects that are much greater than expected, potentially resulting in an overdose. It has been speculated that an unknown portion of heroin-related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent.
Pharmacology
When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine. When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood–brain barrier because of the presence of the acetyl groups, which render it much more fat soluble than morphine itself. Once in the brain, it then is deacetylated variously into the inactive 3-monoacetylmorphine and the active 6-monoacetylmorphine (6-MAM), and then to morphine, which bind to μ-opioid receptors, resulting in the drug's euphoric, analgesic (pain relief), and anxiolytic (anti-anxiety) effects; heroin itself exhibits relatively low affinity for the μ receptor. Analgesia follows from the activation of the μ receptor G-protein coupled receptor, which indirectly hyperpolarizes the neuron, reducing the release of nociceptive neurotransmitters, and hence, causes analgesia and increased pain tolerance.
Unlike hydromorphone and oxymorphone, however, heroin administered intravenously creates a larger histamine release, similar to morphine, resulting in the feeling of a greater subjective "body high" for some, but also in instances of pruritus (itching) when they first start using.
Normally, GABA, which is released from inhibitory neurones, inhibits the release of dopamine. Opiates, like heroin and morphine, decrease the inhibitory activity of such neurones. This causes increased release of dopamine in the brain, which is the reason for the euphoric and rewarding effects of heroin.
Both morphine and 6-MAM are μ-opioid agonists that bind to receptors present throughout the brain, spinal cord, and gut of all mammals. The μ-opioid receptor also binds endogenous opioid peptides such as β-endorphin, leu-enkephalin, and met-enkephalin. Repeated use of heroin results in a number of physiological changes, including an increase in the production of μ-opioid receptors (upregulation). These physiological alterations lead to tolerance and dependence, so that stopping heroin use results in uncomfortable symptoms including pain, anxiety, muscle spasms, and insomnia called the opioid withdrawal syndrome. Depending on usage it has an onset 4–24 hours after the last dose of heroin. Morphine also binds to δ- and κ-opioid receptors.
There is also evidence that 6-MAM binds to a subtype of μ-opioid receptors that are also activated by the morphine metabolite morphine-6β-glucuronide but not morphine itself. A third subtype is the mu-3 receptor, which may be a commonality to other six-position monoesters of morphine. The contribution of these receptors to the overall pharmacology of heroin remains unknown.
A subclass of morphine derivatives, namely the 3,6 esters of morphine, with similar effects and uses, includes the clinically used strong analgesics nicomorphine (Vilan), and dipropanoylmorphine; there is also the latter's dihydromorphine analogue, diacetyldihydromorphine (Paralaudin). Two other 3,6 diesters of morphine invented in 1874–75 along with diamorphine, dibenzoylmorphine and acetylpropionylmorphine, were made as substitutes after it was outlawed in 1925 and, therefore, sold as the first "designer drugs" until they were outlawed by the League of Nations in 1930.
Chemistry
Diamorphine is produced from acetylation of morphine derived from natural opium sources, generally using acetic anhydride.
The major metabolites of diamorphine, 6-MAM, morphine, morphine-3-glucuronide, and morphine-6-glucuronide, may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Most commercial opiate screening tests cross-react appreciably with these metabolites, as well as with other biotransformation products likely to be present following usage of street-grade diamorphine such as 6-Monoacetylcodeine and codeine. However, chromatographic techniques can easily distinguish and measure each of these substances. When interpreting the results of a test, it is important to consider the diamorphine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate an opiate-naive individual, and the chronic user often has high baseline values of these metabolites in his system. Furthermore, some testing procedures employ a hydrolysis step before quantitation that converts many of the metabolic products to morphine, yielding a result that may be 2 times larger than with a method that examines each product individually.
History
The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to the alkaloids codeine and morphine.
Diamorphine was first synthesized in 1874 by C. R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London who had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride for several hours and produced a more potent, acetylated form of morphine which is now called diacetylmorphine or morphine diacetate. He sent the compound to F. M. Pierce of Owens College in Manchester for analysis, and Pierce reported the results back to Wright.
Wright's invention did not lead to any further developments, and diamorphine became popular only after it was independently re-synthesized 23 years later by chemist Felix Hoffmann. Hoffmann was working at Bayer pharmaceutical company in Elberfeld, Germany, and his supervisor Heinrich Dreser instructed him to acetylate morphine with the objective of producing codeine, a constituent of the opium poppy that is pharmacologically similar to morphine but less potent and less addictive. Instead, the experiment produced an acetylated form of morphine one and a half to two times more potent than morphine itself. Hoffmann synthesized heroin on August 21, 1897, just eleven days after he had synthesized aspirin.
The head of Bayer's research department reputedly coined the drug's new name of "heroin", based on the German heroisch which means "heroic, strong" (from the ancient Greek word "heros, ήρως"). Bayer scientists were not the first to make heroin, but their scientists discovered ways to make it, and Bayer led the commercialization of heroin.
Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin. It was developed chiefly as a morphine substitute for cough suppressants that did not have morphine's addictive side-effects. Morphine at the time was a popular recreational drug, and Bayer wished to find a similar but non-addictive substitute to market. However, contrary to Bayer's advertising as a "non-addictive morphine substitute", heroin would soon have one of the highest rates of addiction among its users.
From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant. In the 11th edition of Encyclopædia Britannica (1910), the article on morphine states: "In the cough of phthisis minute doses [of morphine] are of service, but in this particular disease morphine is frequently better replaced by codeine or by heroin, which checks irritable coughs without the narcotism following upon the administration of morphine."
In the US, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids, which allowed the drug to be prescribed and sold for medical purposes. In 1924, the United States Congress banned its sale, importation, or manufacture. It is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States.
The Health Committee of the League of Nations banned diacetylmorphine in 1925, although it took more than three years for this to be implemented. In the meantime, the first designer drugs, viz. 3,6 diesters and 6 monoesters of morphine and acetylated analogues of closely related drugs like hydromorphone and dihydromorphine, were produced in massive quantities to fill the worldwide demand for diacetylmorphine—this continued until 1930 when the Committee banned diacetylmorphine analogues with no therapeutic advantage over drugs already in use, the first major legislation of this type.
Bayer lost some of its trademark rights to heroin (as well as aspirin) under the 1919 Treaty of Versailles following the German defeat in World War I.
Use of heroin by jazz musicians in particular was prevalent in the mid-twentieth century, including Billie Holiday, saxophonists Charlie Parker and Art Pepper, trumpeter and vocalist Chet Baker, guitarist Joe Pass and piano player/singer Ray Charles; a "staggering number of jazz musicians were addicts". It was also a problem with many rock musicians, particularly from the late 1960s through the 1990s. Pete Doherty is also a self-confessed user of heroin. Nirvana lead singer Kurt Cobain's heroin addiction was well documented. Pantera frontman Phil Anselmo turned to heroin while touring during the 1990s to cope with his back pain. James Taylor, Taylor Hawkins, Jimmy Page, John Lennon, Eric Clapton, Johnny Winter, Keith Richards, Shaun Ryder, Shane MacGowan and Janis Joplin also used heroin. Many musicians have made songs referencing their heroin usage.
Society and culture
Names
"Diamorphine" is the Recommended International Nonproprietary Name and British Approved Name. Other synonyms for heroin include: diacetylmorphine, and morphine diacetate. Heroin is also known by many street names including dope, H, smack, junk, horse, skag, brown, and unga, among others.
Legal status
Asia
In Hong Kong, diamorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It is available by prescription. Anyone supplying diamorphine without a valid prescription can be fined $5,000,000 (HKD) and imprisoned for life. The penalty for trafficking or manufacturing diamorphine is a $5,000,000 (HKD) fine and life imprisonment. Possession of diamorphine without a license from the Department of Health is illegal with a $1,000,000 (HKD) fine and 7 years of jail time.
Europe
In the Netherlands, diamorphine is a List I drug of the Opium Law. It is available for prescription under tight regulation exclusively to long-term addicts for whom methadone maintenance treatment has failed. It cannot be used to treat severe pain or other illnesses.
In the United Kingdom, diamorphine is available by prescription, though it is a restricted Class A drug. According to the 50th edition of the British National Formulary (BNF), diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, diamorphine is often injected using a syringe driver.
In Switzerland, heroin is produced in injectable or tablet form under the name Diaphin by a private company under contract to the Swiss government. Swiss-produced heroin has been imported into Canada with government approval.
Australia
In Australia, diamorphine is listed as a schedule 9 prohibited substance under the Poisons Standard (October 2015). The state of Western Australia, in its Poisons Act 1964 (Reprint 6: amendments as at 10 Sep 2004), described a schedule 9 drug as: "Poisons that are drugs of abuse, the manufacture, possession, sale or use of which should be prohibited by law except for amounts which may be necessary for educational, experimental or research purposes conducted with the approval of the Governor."
North America
In Canada, diamorphine is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Any person seeking or obtaining diamorphine from a practitioner without disclosing that they obtained diamorphine from another practitioner within the preceding 30 days is guilty of an indictable offense and subject to imprisonment for a term not exceeding seven years. Possession of diamorphine for the purpose of trafficking is an indictable offense and subject to imprisonment for life.
In the United States, diamorphine is a Schedule I drug according to the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of diamorphine or a mixture containing diamorphine is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison.
In 2021, the US state of Oregon became the first state to decriminalize the use of heroin after voters passed Ballot Measure 110 in 2020. This measure will allow people with small amounts to avoid arrest.
Turkey
Turkey maintains strict laws against the use, possession or trafficking of illegal drugs. If convicted under these offences, one could receive a heavy fine or a prison sentence of 4 to 24 years.
Misuse of prescription medication
Misused prescription medicine, such as opioids, can lead to heroin use and dependence. The number of deaths from illegal opioid overdose has followed the increasing number of deaths caused by prescription opioid overdoses. Prescription opioids are relatively easy to obtain. This may ultimately lead to heroin injection because heroin is cheaper than prescribed pills.
Economics
Production
Diamorphine is produced from acetylation of morphine derived from natural opium sources. One such method of heroin production involves isolation of the water-soluble components of raw opium, including morphine, in a strongly basic aqueous solution, followed by recrystallization of the morphine base by addition of ammonium chloride. The solid morphine base is then filtered out. The morphine base is then reacted with acetic anhydride, which forms heroin. This highly impure brown heroin base may then undergo further purification steps, which produces a white-colored product; the final products have a different appearance depending on purity and have different names. Heroin purity has been classified into four grades. No.4 is the purest form – white powder (salt) to be easily dissolved and injected. No.3 is "brown sugar" for smoking (base). No.1 and No.2 are unprocessed raw heroin (salt or base).
Trafficking
Traffic is heavy worldwide, with the biggest producer being Afghanistan. According to a U.N. sponsored survey, in 2004, Afghanistan accounted for production of 87 percent of the world's diamorphine. Afghan opium kills around 100,000 people annually.
In 2003 The Independent reported:
Opium production in that country has increased rapidly since, reaching an all-time high in 2006. War in Afghanistan once again appeared as a facilitator of the trade. Some 3.3 million Afghans are involved in producing opium.
At present, opium poppies are mostly grown in Afghanistan, and in Southeast Asia, especially in the region known as the Golden Triangle straddling Burma, Thailand, Vietnam, Laos and Yunnan province in China. There is also cultivation of opium poppies in Pakistan, Mexico and Colombia. According to the DEA, the majority of the heroin consumed in the United States comes from Mexico (50%) and Colombia (43–45%) via Mexican criminal cartels such as the Sinaloa Cartel. However, these statistics may be significantly unreliable: the DEA's 50/50 split between Colombia and Mexico is contradicted by the amount of hectares cultivated in each country, and in 2014 the DEA claimed most of the heroin in the US came from Colombia.
The Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs such as heroin into the United States and trafficking them throughout the United States. According to the Royal Canadian Mounted Police, 90% of the heroin seized in Canada (where the origin was known) came from Afghanistan. Pakistan is the destination and transit point for 40 percent of the opiates produced in Afghanistan; other destinations of Afghan opiates are Russia, Europe and Iran.
A conviction for trafficking heroin carries the death penalty in most Southeast Asian, some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the strictest. The penalty applies even to citizens of countries where the penalty is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking, for example, the arrest of nine Australians in Bali, the death sentence given to Nola Blake in Thailand in 1987, or the hanging of an Australian citizen Van Tuong Nguyen in Singapore.
Trafficking history
The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of the government in China and conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the illicit heroin trade. The French Connection route started in the 1930s.
Heroin trafficking was virtually eliminated in the US during World War II because of temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin and the war had generally disrupted the movement of opium. After World War II, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily which was located along the historic route opium took westward into Europe and the United States. Large-scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed.
Although it remained legal in some countries until after World War II, health risks, addiction, and widespread recreational use led most western countries to declare heroin a controlled substance by the latter half of the 20th century. In the late 1960s and early 1970s, the CIA supported anti-Communist Chinese Nationalists settled near the Sino-Burmese border and Hmong tribesmen in Laos. This helped the development of the Golden Triangle opium production region, which supplied about one-third of heroin consumed in the US after the 1973 American withdrawal from Vietnam. In 1999, Burma, the heartland of the Golden Triangle, was the second-largest producer of heroin, after Afghanistan.
The Soviet-Afghan war led to increased production in the Pakistani-Afghan border regions, as US-backed mujaheddin militants raised money for arms from selling opium, contributing heavily to the modern Golden Crescent creation. By 1980, 60 percent of the heroin sold in the US originated in Afghanistan. It increased international production of heroin at lower prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations violently fought with each other over the trade. The fighting also led to a stepped-up government law enforcement presence in Sicily.
Following the discovery at a Jordanian airport of a toner cartridge that had been modified into an improvised explosive device, the resulting increase in airfreight scrutiny led to a major shortage (drought) of heroin from October 2010 until April 2011. This was reported in most of mainland Europe and the UK, and it led to a price increase of approximately 30 percent in the cost of street heroin and to increased demand for diverted methadone. The number of addicts seeking treatment also increased significantly during this period. Other heroin droughts (shortages) have been attributed to cartels restricting supply in order to force a price increase, and to a fungus that attacked the opium crop of 2009. Many people thought that the American government had introduced pathogens into Afghanistan's atmosphere in order to destroy the opium crop and thus starve insurgents of income.
On 13 March 2012, Haji Bagcho, with ties to the Taliban, was convicted by a US District Court of conspiracy, distribution of heroin for importation into the United States and narco-terrorism. Based on heroin production statistics compiled by the United Nations Office on Drugs and Crime, in 2006, Bagcho's activities accounted for approximately 20 percent of the world's total production for that year.
Street price
The European Monitoring Centre for Drugs and Drug Addiction reports that the retail price of brown heroin varies from €14.5 per gram in Turkey to €110 per gram in Sweden, with most European countries reporting typical prices of €35–40 per gram. The price of white heroin is reported only by a few European countries and ranged between €27 and €110 per gram.
The United Nations Office on Drugs and Crime claims in its 2008 World Drug Report that typical US retail prices are US$172 per gram.
Research
Researchers are attempting to reproduce the biosynthetic pathway that produces morphine in genetically engineered yeast. As of June 2015, S-reticuline could be produced from sugar and R-reticuline could be converted to morphine, but the intermediate reaction linking the two could not yet be performed.
See also
References
External links
NIDA InfoFacts on Heroin
ONDCP Drug Facts
U.S. National Library of Medicine: Drug Information Portal – Heroin
BBC Article entitled 'When Heroin Was Legal'. References to the United Kingdom and the United States
Drug-poisoning Deaths Involving Heroin: United States, 2000–2013 U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics.
Heroin Trafficking in the United States (2016) by Kristin Finklea, Congressional Research Service.
1874 introductions
1898 introductions
Acetate esters
Brands that became generic
British inventions
4,5-Epoxymorphinans
Euphoriants
Morphine
Mu-opioid receptor agonists
Nephrotoxins
Opioids
Phenol ethers
Prodrugs
Semisynthetic opioids
Wikipedia medicine articles ready to translate
Obsolete medications | Heroin | [
"Chemistry"
] | 9,609 | [
"Chemicals in medicine",
"Prodrugs"
] |
14,073 | https://en.wikipedia.org/wiki/Hydropower | Hydropower (from Ancient Greek ὑδρο- (hydro-), "water"), also known as water power or water energy, is the use of falling or fast-running water to produce electricity or to power machines. This is achieved by converting the gravitational potential or kinetic energy of a water source to produce power. Hydropower is a method of sustainable energy production. Hydropower is now used principally for hydroelectric power generation, and is also applied as one half of an energy storage system known as pumped-storage hydroelectricity.
Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power. Nonetheless, it has economic, sociological, and environmental downsides and requires a sufficiently energetic source of water, such as a river or elevated lake. International institutions such as the World Bank view hydropower as a low-carbon means for economic development.
Since ancient times, hydropower from watermills has been used as a renewable energy source for irrigation and the operation of mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance.
Calculating the amount of available power
A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head.
The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity:

P = η ṁ g Δh = η ρ Q g Δh

where
P (the work flow rate out) is the useful power output (SI unit: watts)
η ("eta") is the efficiency of the turbine (dimensionless)
ṁ is the mass flow rate (SI unit: kilograms per second)
ρ ("rho") is the density of water (SI unit: kilograms per cubic metre)
Q is the volumetric flow rate (SI unit: cubic metres per second)
g is the acceleration due to gravity (SI unit: metres per second per second)
Δh ("Delta h") is the difference in height between the outlet and inlet (SI unit: metres)
To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2,800 cubic feet per second) and a head of 145 metres (476 feet), is about 97 megawatts:

P = 0.85 × 1,000 kg/m³ × 80 m³/s × 9.81 m/s² × 145 m ≈ 97 × 10⁶ W = 97 MW
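The same relation is straightforward to evaluate programmatically. The following minimal sketch reproduces the worked example; the function name, the default density and gravity values, and the example inputs are illustrative choices rather than fixed properties of any particular plant.

```python
# Sketch: useful hydropower output, P = eta * rho * Q * g * delta_h.
# Density and gravity defaults are nominal values; the inputs repeat the worked example above.

def hydro_power_watts(efficiency, flow_m3_per_s, head_m, rho=1000.0, g=9.81):
    """Return the useful power output in watts for falling water."""
    return efficiency * rho * flow_m3_per_s * g * head_m

power_w = hydro_power_watts(efficiency=0.85, flow_m3_per_s=80.0, head_m=145.0)
print(f"{power_w / 1e6:.0f} MW")  # prints "97 MW"
```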
Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, rise in tailwater level due to flow, the location of the station and effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered.
Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Over-shot water wheels can efficiently capture both types of energy. The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood.
Disadvantages and limitations
Some disadvantages of hydropower have been identified. Dam failures can have catastrophic effects, including loss of life, property and pollution of land.
Dams and reservoirs can have major negative impacts on river ecosystems, such as preventing some animals from traveling upstream, cooling and de-oxygenating water released downstream, and causing a loss of nutrients due to settling of particulates. River sediment builds river deltas, and dams prevent that sediment from restoring what is lost to erosion. Furthermore, studies have found that the construction of dams and reservoirs can result in habitat loss for some aquatic species. Large and deep dam and reservoir plants cover large areas of land, which causes greenhouse gas emissions from underwater rotting vegetation. Furthermore, although at lower levels than other renewable energy sources, hydropower has been found to produce methane equivalent to almost a billion tonnes of CO2 greenhouse gas a year. This occurs when organic matter accumulates at the bottom of the reservoir because of the deoxygenation of the water, which triggers anaerobic digestion.
People who live near a hydro plant site may be displaced during construction or when reservoir banks become unstable. Another potential obstacle is that cultural or religious sites may block construction.
Applications
Mechanical power
Watermills
Compressed air
A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This allows it to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario, in 1910 and supplied 5,000 horsepower to nearby mines.
Electricity
Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries. In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies.
Hydroelectricity generation starts with converting either the potential energy of water that is present due to the site's elevation or the kinetic energy of moving water into electrical energy.
Hydroelectric power plants vary in terms of the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to be used to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity.
The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water, absent a reservoir. The run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. The kinetic energy of flowing water is the main source of energy.
Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoirs occupy a relatively large amount of space that may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of the run-of-river project is the decreased efficiency of electricity generation because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season.
The size of hydroelectric plants can vary from small plants called micro hydro, to large plants that supply power to a whole country. As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams.
Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped-storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low.
Other forms of electricity generation with hydropower include tidal stream generators using energy from tidal power generated from oceans, rivers, and human-made canal systems to generating electricity.
Rain power
Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have an enormous electric potential if used in the right way." Research is being done into the different methods of generating power from rain, such as by using the energy in the impact of raindrops. This is in its very early stages with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. One method in which this has been attempted is by using hybrid solar panels called "all-weather solar panels" that can generate electricity from both the sun and the rain.
According to zoologist and science and technology educator Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001 kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated generation of 3 kWh of energy per year for a 185 m² roof. A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated by that turbine is used to charge 12-volt batteries."
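Villazon's rooftop figure can be sanity-checked with a rough calculation of the potential energy in a year's worth of roof runoff. The sketch below is only an order-of-magnitude estimate: the annual rainfall, the drop height from gutter to turbine, and the turbine efficiency are assumed values, not figures from the studies quoted above.

```python
# Order-of-magnitude estimate of annual energy from rooftop rain runoff.
# Rainfall, drop height, and efficiency below are assumptions for illustration only.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

annual_rainfall_m = 1.0  # assumed: 1 metre of rain per year
roof_area_m2 = 185.0     # roof area used in the estimate quoted above
drop_height_m = 5.0      # assumed: height from gutter outlet to turbine
efficiency = 0.5         # assumed: combined turbine/generator efficiency

water_mass_kg = RHO * annual_rainfall_m * roof_area_m2     # water collected per year
energy_j = efficiency * water_mass_kg * G * drop_height_m  # recoverable potential energy
energy_kwh = energy_j / 3.6e6

print(f"{energy_kwh:.1f} kWh per year")  # ~1.3 kWh: the same order of magnitude as the 3 kWh estimate
```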
The term rain power has also been applied to hydropower systems which include the process of capturing the rain.
History
Ancient history
Evidence suggests that the fundamentals of hydropower date to ancient Greek civilization. Other evidence indicates that the waterwheel independently emerged in China around the same period. Evidence of water wheels and watermills dates to the ancient Near East in the 4th century BC. Moreover, evidence indicates that ancient civilizations such as Sumer and Babylonia used hydropower-driven irrigation machines. Studies suggest that the water wheel was the initial form of water power and that it was driven by either humans or animals.
In the Roman Empire, water-powered mills were described by Vitruvius by the first century BC. The Barbegal mill, located in modern-day France, had 16 water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble such as the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel that drove two crank-and-connecting rods to power two saws. It also appears in two 6th century Eastern Roman sawmills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades.
Water-powered trip hammers and bellows in China, during the Han dynasty (202 BC – 220 AD), were initially thought to be powered by water scoops. However, some historians have suggested that they were powered by waterwheels, because it was theorized that water scoops would not have had the motive force to operate their blast furnace bellows. Many texts describe the Han waterwheel; some of the earliest ones are the Jijiupian dictionary of 40 BC, Yang Xiong's text known as the Fangyan of 15 BC, as well as Xin Lun, written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron.
Ancient Indian texts dating back to the 4th century BC refer to the term cakkavattaka (turning wheel), which commentaries explain as arahatta-ghati-yanta (machine with wheel-pots attached); however, whether this was water-powered or hand-powered is disputed by scholars. According to Greek sources, India received Roman water mills and baths in the early 4th century AD. Dams, spillways, reservoirs, channels, and water balance would develop in India during the Mauryan, Gupta and Chola empires.
Another example of the early use of hydropower is seen in hushing, a historic method of mining that uses flood or torrent of water to reveal mineral veins. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards. This method was further developed in Spain in mines such as Las Médulas. Hushing was also widely used in Britain in the Medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush in the 19th century.
The Islamic Empire spanned a large region, mainly in Asia and Africa, along with other surrounding areas. During the Islamic Golden Age and the Arab Agricultural Revolution (8th–13th centuries), hydropower was widely used and developed. Early uses of tidal power emerged along with large hydraulic factory complexes. A wide range of water-powered industrial mills were used in the region including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic Empire had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines while employing gears in watermills and water-raising machines. They also pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Islamic irrigation techniques, including Persian wheels, would be introduced to India and combined with local methods during the Delhi Sultanate and the Mughal Empire.
Furthermore, in his book, The Book of Knowledge of Ingenious Mechanical Devices, the Muslim mechanical engineer, Al-Jazari (1136–1206) described designs for 50 devices. Many of these devices were water-powered, including clocks, a device to serve wine, and five devices to lift water from rivers or pools, where three of them are animal-powered and one can be powered by animal or water. Moreover, they included an endless belt with jugs attached, a cow-powered shadoof (a crane-like irrigation tool), and a reciprocating device with hinged valves.
19th century
In the 19th century, French engineer Benoît Fourneyron developed the first hydropower turbine. This device was implemented in the commercial plant of Niagara Falls in 1895 and it is still operating. In 1878, English engineer William Armstrong built and operated the first private electrical power station, located at his house at Cragside in Northumberland, England. Earlier, in 1753, the French engineer Bernard Forest de Bélidor had published his book, Architecture Hydraulique, which described vertical-axis and horizontal-axis hydraulic machines.
The growing demand for the Industrial Revolution would drive development as well. At the beginning of the Industrial Revolution in Britain, water was the main power source for new inventions such as Richard Arkwright's water frame. Although water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which uses the drop in the Mississippi River.
Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high head streams characteristic of the Sierra Nevada.
20th century
The modern history of hydropower begins in the 1900s, with large dams built not simply to power neighboring mills or factories but provide extensive electricity for increasingly distant groups of people. Competition drove much of the global hydroelectric craze: Europe competed amongst itself to electrify first, and the United States' hydroelectric plants in Niagara Falls and the Sierra Nevada inspired bigger and bolder creations across the globe. American and USSR financers and hydropower experts also spread the gospel of dams and hydroelectricity across the globe during the Cold War, contributing to projects such as the Three Gorges Dam and the Aswan High Dam.
Feeding desire for large scale electrification with water inherently required large dams across powerful rivers, which impacted public and private interests downstream and in flood zones. Inevitably smaller communities and marginalized groups suffered. They were unable to successfully resist companies flooding them out of their homes or blocking traditional salmon passages. The stagnant water created by hydroelectric dams provides breeding ground for pests and pathogens, leading to local epidemics. However, in some cases, a mutual need for hydropower could lead to cooperation between otherwise adversarial nations.
Hydropower technology and attitude began to shift in the second half of the 20th century. While countries had largely abandoned their small hydropower systems by the 1930s, the smaller hydropower plants began to make a comeback in the 1970s, boosted by government subsidies and a push for more independent energy producers. Some politicians who once advocated for large hydropower projects in the first half of the 20th century began to speak out against them, and citizen groups organizing against dam projects increased.
By the 1980s and 90s the international anti-dam movement had made finding government or private investors for new large hydropower projects incredibly difficult, and had given rise to NGOs devoted to fighting dams. Additionally, while the cost of other energy sources fell, the cost of building new hydroelectric dams increased 4% annually between 1965 and 1990, due both to the increasing costs of construction and to the decrease in high-quality building sites. In the 1990s, only 18% of the world's electricity came from hydropower. Tidal power production also emerged in the 1960s as a burgeoning alternative hydropower system, though it still has not taken hold as a strong energy contender.
United States
Especially at the start of the American hydropower experiment, engineers and politicians began major hydroelectricity projects to solve a problem of 'wasted potential' rather than to power a population that needed the electricity. When the Niagara Falls Power Company began looking into damming Niagara, the first major hydroelectric project in the United States, in the 1890s they struggled to transport electricity from the falls far enough away to actually reach enough people and justify installation. The project succeeded in large part due to Nikola Tesla's invention of the alternating current motor. On the other side of the country, San Francisco engineers, the Sierra Club, and the federal government fought over acceptable use of the Hetch Hetchy Valley. Despite ostensible protection within a national park, city engineers successfully won the rights to both water and power in the Hetch Hetchy Valley in 1913. After their victory they delivered Hetch Hetchy hydropower and water to San Francisco a decade later and at twice the promised cost, selling power to PG&E which resold to San Francisco residents at a profit.
The American West, with its mountain rivers and lack of coal, turned to hydropower early and often, especially along the Columbia River and its tributaries. The Bureau of Reclamation began building the Hoover Dam in 1931, symbolically linking it to the job creation and economic growth priorities of the New Deal. The federal government quickly followed Hoover with the Shasta Dam and Grand Coulee Dam. Power demand in Oregon did not justify damming the Columbia until WWI revealed the weaknesses of a coal-based energy economy. The federal government then began prioritizing interconnected power—and lots of it. Electricity from all three dams poured into war production during WWII.
After the war, the Grand Coulee Dam and accompanying hydroelectric projects electrified almost all of the rural Columbia Basin, but failed to improve the lives of those living and farming there the way its boosters had promised and also damaged the river ecosystem and migrating salmon populations. In the 1940s as well, the federal government took advantage of the sheer amount of unused power and flowing water from the Grand Coulee to build a nuclear site placed on the banks of the Columbia. The nuclear site leaked radioactive matter into the river, contaminating the entire area.
Post-WWII Americans, especially engineers from the Tennessee Valley Authority, refocused from simply building domestic dams to promoting hydropower abroad. While domestic dam building continued well into the 1970s, with the Reclamation Bureau and Army Corps of Engineers building more than 150 new dams across the American West, organized opposition to hydroelectric dams sparked up in the 1950s and 60s based on environmental concerns. Environmental movements successfully shut down proposed hydropower dams in Dinosaur National Monument and the Grand Canyon, and gained more hydropower-fighting tools with 1970s environmental legislation. As nuclear and fossil fuels grew in the 70s and 80s and environmental activists pushed for river restoration, hydropower gradually faded in American importance.
Africa
Foreign powers and IGOs have frequently used hydropower projects in Africa as a tool to interfere in the economic development of African countries, such as the World Bank with the Kariba and Akosombo Dams, and the Soviet Union with the Aswan Dam. The Nile River especially has borne the consequences of countries both along the Nile and distant foreign actors using the river to expand their economic power or national force. After the British occupation of Egypt in 1882, the British worked with Egypt to construct the first Aswan Dam, which they heightened in 1912 and 1934 to try to hold back the Nile floods. Egyptian engineer Adriano Daninos developed a plan for the Aswan High Dam, inspired by the Tennessee Valley Authority's multipurpose dam.
When Gamal Abdel Nasser took power in the 1950s, his government decided to undertake the High Dam project, publicizing it as an economic development project. After the United States refused to help fund the dam, and anti-British sentiment in Egypt combined with British interests in neighboring Sudan to make the United Kingdom pull out as well, the Soviet Union funded the Aswan High Dam. Between 1977 and 1990 the dam's turbines generated one third of Egypt's electricity. The building of the Aswan Dam triggered a dispute between Sudan and Egypt over the sharing of the Nile, especially since the dam flooded part of Sudan and decreased the volume of water available to it. Ethiopia, also located on the Nile, took advantage of Cold War tensions to request assistance from the United States for its own irrigation and hydropower investments in the 1960s. While progress stalled due to the coup d'état of 1974 and the following 17-year-long Ethiopian Civil War, Ethiopia began construction on the Grand Ethiopian Renaissance Dam in 2011.
Beyond the Nile, hydroelectric projects cover the rivers and lakes of Africa. The Inga powerplant on the Congo River had been discussed since Belgian colonization in the late 19th century, and was successfully built after independence. Mobutu's government failed to regularly maintain the plants and their capacity declined until the 1995 formation of the Southern African Power Pool created a multi-national power grid and plant maintenance program. States with an abundance of hydropower, such as the Democratic Republic of the Congo and Ghana, frequently sell excess power to neighboring countries. Foreign actors such as Chinese hydropower companies have proposed a significant amount of new hydropower projects in Africa, and already funded and consulted on many others in countries like Mozambique and Ghana.
Small hydropower also played an important role in early 20th century electrification across Africa. In South Africa, small turbines powered gold mines and the first electric railway in the 1890s, and Zimbabwean farmers installed small hydropower stations in the 1930s. While interest faded as national grids improved in the second half of the century, 21st century national governments in countries including South Africa and Mozambique, as well as NGOs serving countries like Zimbabwe, have begun re-exploring small-scale hydropower to diversify power sources and improve rural electrification.
Europe
In the early 20th century, two major factors motivated the expansion of hydropower in Europe: in the northern countries of Norway and Sweden, high rainfall and mountains proved exceptional resources for abundant hydropower, and in the south, coal shortages pushed governments and utility companies to seek alternative power sources.
Early on, Switzerland dammed the Alpine rivers and the Swiss Rhine, creating, along with Italy and Scandinavia, a Southern Europe hydropower race. In Italy's Po Valley, the main 20th-century transition was not the creation of hydropower but the transition from mechanical to electrical hydropower. 12,000 watermills churned in the Po watershed in the 1890s, but the first commercial hydroelectric plant, completed in 1898, signaled the end of the mechanical reign. These new large plants moved power away from rural mountainous areas to urban centers in the lower plain. Italy prioritized early near-nationwide electrification, almost entirely from hydropower, which powered its rise as a dominant European and imperial force. However, they failed to reach any conclusive standard for determining water rights before WWI.
Modern German hydropower dam construction was built on a history of small dams powering mines and mills in the 15th century. Some parts of German industry relied more on waterwheels than steam until the 1870s. The German government did not set out to build large dams such as the prewar Urft, Möhne, and Eder dams in order to expand hydropower: it mostly wanted to reduce flooding and improve navigation. However, hydropower quickly emerged as a bonus of all these dams, especially in the coal-poor south. Bavaria even achieved a statewide power grid by damming the Walchensee in 1924, inspired in part by the loss of coal reserves after WWI.
Hydropower became a symbol of regional pride and distaste for northern 'coal barons', although the north also held strong enthusiasm for hydropower. Dam building rapidly increased after WWII, aiming to increase hydropower. However, conflict accompanied the dam building and spread of hydropower: agrarian interests suffered from decreased irrigation, small mills lost water flow, and different interest groups fought over where dams should be located, controlling who benefited and whose homes they drowned.
See also
Deep water source cooling
Energy conversion efficiency
Gravitation water vortex power plant
Hydraulic ram
Hydropower Sustainability Assessment Protocol
International Hydropower Association
Low-head hydro power
Marine current power
Marine energy
Ocean thermal energy conversion
Osmotic power
Pumped-storage hydroelectricity
Run-of-the-river hydroelectricity
Tidal power
Tidal stream generator
Wave power
Notes
References
Sources
External links
International Hydropower Association
International Centre for Hydropower (ICH) hydropower portal with links to numerous organizations related to hydropower worldwide
IEC TC 4: Hydraulic turbines (International Electrotechnical Commission – Technical Committee 4) IEC TC 4 portal with access to scope, documents and TC 4 website
Micro-hydro power, Adam Harvey, 2004, Intermediate Technology Development Group. Retrieved 1 January 2005
Microhydropower Systems, US Department of Energy, Energy Efficiency and Renewable Energy, 2005
Power station technology
Energy conversion
Hydraulic engineering
Sustainable technologies | Hydropower | [
"Physics",
"Engineering",
"Environmental_science"
] | 5,709 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
14,084 | https://en.wikipedia.org/wiki/Heterosexuality | Heterosexuality is romantic attraction, sexual attraction or sexual behavior between people of the opposite sex or gender. As a sexual orientation, heterosexuality is "an enduring pattern of emotional, romantic, and/or sexual attractions" to people of the opposite sex. It "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions." Someone who is heterosexual is commonly referred to as straight.
Along with bisexuality and homosexuality, heterosexuality is one of the three main categories of sexual orientation within the heterosexual–homosexual continuum. Across cultures, most people are heterosexual, and heterosexual activity is by far the most common type of sexual activity.
Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences, and do not view it as a choice. Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically based theories. There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.
The term heterosexual or heterosexuality is usually applied to humans, but heterosexual behavior is observed in all other mammals and in other animals, as it is necessary for sexual reproduction.
Terminology
Hetero- comes from the Greek word ἕτερος [héteros], meaning "other party" or "another", used in science as a prefix meaning "different"; and the Latin word for sex (that is, characteristic sex or sexual differentiation).
The current use of the term heterosexual has its roots in the broader 19th century tradition of personality taxonomy. The term heterosexual was coined alongside the word homosexual by Karl Maria Kertbeny in 1869. The terms were not in current use during the late nineteenth century, but were reintroduced by Richard von Krafft-Ebing and Albert Moll around 1890. The noun came into wider use from the early 1920s, but did not enter common use until the 1960s. The colloquial shortening "hetero" is attested from 1933. The abstract noun "heterosexuality" is first recorded in 1900. The word "heterosexual" was listed in Merriam-Webster's New International Dictionary in 1923 as a medical term for "morbid sexual passion for one of the opposite sex"; however, in 1934 in their Second Edition Unabridged it is defined as a "manifestation of sexual passion for one of the opposite sex; normal sexuality".
Hyponyms of heterosexual include heteroflexible.
The word can be informally shortened to "hetero". The term straight originated as a mid-20th century gay slang term for heterosexuals, ultimately coming from the phrase "to go straight" (as in "straight and narrow"), or stop engaging in homosexual sex. One of the first uses of the word in this way was in 1941 by author G. W. Henry. Henry's book concerned conversations with homosexual males and used this term in connection with people who are identified as ex-gays. It is now simply a colloquial term for "heterosexual", having changed in primary meaning over time. Some object to usage of the term straight because it implies that non-heterosexual people are crooked.
Demographics
In their 2016 literature review, Bailey et al. stated that they "expect that in all cultures the vast majority of individuals are sexually predisposed exclusively to the other sex (i.e., heterosexual)" and that there is no persuasive evidence that the demographics of sexual orientation have varied much across time or place. Heterosexual activity between only one male and one female is by far the most common type of sociosexual activity.
According to several major studies, 89% to 98% of people have had only heterosexual contact within their lifetime; but this percentage falls to 79–84% when either or both same-sex attraction and behavior are reported.
A 1992 study reported that 93.9% of males in Britain have only had heterosexual experience, while in France the number was reported at 95.9%. According to a 2008 poll, 85% of Britons have only opposite-sex sexual contact while 94% of Britons identify themselves as heterosexual. Similarly, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the last 3.5% gave more vague answers such as "don't know", "other", or did not respond to the question. In the United States, according to a Williams Institute report in April 2011, 96% or approximately 250 million of the adult population are heterosexual.
An October 2012 Gallup poll provided unprecedented demographic information about those who identify as heterosexual, concluding that 96.6% of all U.S. adults, with a margin of error of ±1%, identify as heterosexual.
In a 2015 YouGov survey of 1,000 adults of the United States, 89% of the sample identified as heterosexual, 4% as homosexual (2% as homosexual male and 2% as homosexual female) and 4% as bisexual (of either sex).
Bailey et al., in their 2016 review, stated that in recent Western surveys, about 93% of men and 87% of women identify as completely heterosexual, and about 4% of men and 10% of women as mostly heterosexual.
Academic study
Biological and environmental
No simple and singular determinant for sexual orientation has been conclusively demonstrated, but scientists believe that a combination of genetic, hormonal, and environmental factors determine sexual orientation. They favor biological theories for explaining the causes of sexual orientation, as there is considerably more evidence supporting nonsocial, biological causes than social ones, especially for males.
Factors related to the development of a heterosexual orientation include genes, prenatal hormones, and brain structure, and their interaction with the environment.
Prenatal hormones
The neurobiology of the masculinization of the brain is fairly well understood. Estradiol and testosterone, the latter converted by the enzyme 5α-reductase into dihydrotestosterone, act upon androgen receptors in the brain to masculinize it. If there are few androgen receptors (as in people with androgen insensitivity syndrome) or too much androgen (as in females with congenital adrenal hyperplasia), there can be physical and psychological effects. It has been suggested that both male and female heterosexuality are the results of this process. In these studies, heterosexuality in females is linked to a lower amount of masculinization than is found in lesbian females, though when dealing with male heterosexuality there are results supporting both higher and lower degrees of masculinization than in homosexual males.
Animals and reproduction
Sexual reproduction in the animal world is facilitated through opposite-sex sexual activity, although there are also animals that reproduce asexually, including protozoa and lower invertebrates.
Reproductive sex does not require a heterosexual orientation, since sexual orientation typically refers to a long-term enduring pattern of sexual and emotional attraction leading often to long-term social bonding, while reproduction requires as little as a single act of copulation to fertilize the ovum by sperm.
Sexual fluidity
Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men. The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).
A 2012 study found that 2% of a sample of 2,560 adult participants reported a change of sexual orientation identity after a 10-year period. For men, a change occurred in 0.78% of those who had identified as heterosexual, 9.52% of homosexuals, and 47% of bisexuals. For women, a change occurred in 1.36% of heterosexuals, 63.6% of lesbians, and 64.7% of bisexuals.
A 2-year study by Lisa M. Diamond on a sample of 80 non-heterosexual female adolescents (age 16–23) reported that half of the participants had changed sexual-minority identities more than once, one third of them during the 2-year follow-up. Diamond concluded that "although sexual attractions appear fairly stable, sexual identities and behaviors are more fluid."
Heteroflexibility is a form of sexual orientation or situational sexual behavior characterized by minimal homosexual activity in an otherwise primarily heterosexual orientation that is considered to distinguish it from bisexuality. It has been characterized as "mostly straight".
Sexual orientation change efforts
Sexual orientation change efforts are methods that aim to change sexual orientation, used to try to convert homosexual and bisexual people to heterosexuality. Scientists and mental health professionals generally do not believe that sexual orientation is a choice. There are no studies of adequate scientific rigor that conclude that sexual orientation change efforts are effective.
Society and culture
A heterosexual couple, a man and woman in an intimate relationship, form the core of a nuclear family.
Many societies throughout history have insisted that a marriage take place before the couple settle down, but enforcement of this rule or compliance with it has varied considerably.
Symbolism
Heterosexual symbolism dates back to the earliest artifacts of humanity, with gender symbols, ritual fertility carvings, and primitive art. This was later expressed in the symbolism of fertility rites and polytheistic worship, which often included images of human reproductive organs, such as lingam in Hinduism. Modern symbols of heterosexuality in societies derived from European traditions still reference symbols used in these ancient beliefs. One such image is a combination of the symbol for Mars, the Roman god of war, as the definitive male symbol of masculinity, and Venus, the Roman goddess of love and beauty, as the definitive female symbol of femininity. The unicode character for this combined symbol is ⚤ (U+26A4).
Historical views
There was no need to coin a term such as heterosexual until terms emerged with which it could be compared and contrasted. Jonathan Ned Katz dates the definition of heterosexuality, as it is used today, to the late 19th century. According to Katz, in the Victorian era, sex was seen as a means to achieve reproduction, and relations between the sexes were not believed to be overtly sexual. The body was thought of as a tool for procreation – "Human energy, thought of as a closed and severely limited system, was to be used in producing children and in work, not wasted in libidinous pleasures."
Katz argues that modern ideas of sexuality and eroticism began to develop in America and Germany in the later 19th century. The changing economy and the "transformation of the family from producer to consumer" resulted in shifting values. The Victorian work ethic had changed: pleasure became more highly valued, and this allowed ideas of human sexuality to change. Consumer culture had created a market for the erotic; pleasure became commoditized. At the same time medical doctors began to acquire more power and influence. They developed the medical model of "normal love", in which healthy men and women enjoyed sex as part of a "new ideal of male-female relationships that included ... an essential, necessary, normal eroticism." This model also had a counterpart, "the Victorian Sex Pervert", anyone who failed to meet the norm. The basic oppositeness of the sexes was the basis for normal, healthy sexual attraction. "The attention paid the sexual abnormal created a need to name the sexual normal, the better to distinguish the average him and her from the deviant it." The creation of the term heterosexual consolidated the social existence of the pre-existing heterosexual experience and created a sense of ensured and validated normalcy within it.
Religious views
The Judeo-Christian tradition has several scriptures related to heterosexuality. The Book of Genesis states that God created woman because "It is not good that the man should be alone; I will make him an help meet for him", and that "Therefore shall a man leave his father and his mother, and shall cleave unto his wife: and they shall be one flesh".
For the most part, religious traditions in the world reserve marriage to heterosexual unions, but there are exceptions including certain Buddhist and Hindu traditions, Unitarian Universalists, Metropolitan Community Church, some Anglican dioceses, and some Quaker, United Church of Canada, and Reform and Conservative Jewish congregations.
Almost all religions believe that sex between a man and a woman within marriage is allowed, but there are a few that believe that it is a sin, such as The Shakers, The Harmony Society, and The Ephrata Cloister. These religions tend to view all sexual relations as sinful, and promote celibacy. Some religions require celibacy for certain roles, such as Catholic priests; however, the Catholic Church also views heterosexual marriage as sacred and necessary.
Heteronormativity and heterosexism
Heteronormativity denotes or relates to a world view that promotes heterosexuality as the normal or preferred sexual orientation for people to have. It can assign strict gender roles to males and females. The term was popularized by Michael Warner in 1991. Feminist Adrienne Rich argues that compulsory heterosexuality, a continual and repeating reassertion of heterosexual norms, is a facet of heterosexism. Compulsory heterosexuality is the idea that female heterosexuality is both assumed and enforced by a patriarchal society. Heterosexuality is then viewed as the natural inclination or obligation by both sexes. Consequently, anyone who differs from the normalcy of heterosexuality is deemed deviant or abhorrent.
Heterosexism is a form of bias or discrimination in favor of opposite-sex sexuality and relationships. It may include an assumption that everyone is heterosexual and may involve various kinds of discrimination against gays, lesbians, bisexuals, asexuals, heteroflexible people, or transgender or non-binary individuals.
Straight pride is a slogan that arose in the late 1980s and early 1990s and has been used primarily by social conservative groups as a political stance and strategy. The term is described as a response to gay pride adopted by various LGBT groups in the early 1970s or to the accommodations provided to gay pride initiatives.
See also
Heterosociality
Human reproduction
Queer heterosexuality
Gynogenesis
References
Further reading
LeVay, Simon. Gay, Straight, and the Reason Why: The Science of Sexual Orientation, Oxford University Press, 2017
Johnson, P. (2005) Love, Heterosexuality and Society. London: Routledge
Answers to Your Questions About Sexual Orientation and Homosexuality. American Psychiatric Association.
Bohan, Janis S., Psychology and Sexual Orientation: Coming to Terms, Routledge, 1996
Kinsey, Alfred C., et al., Sexual Behavior in the Human Male. Indiana University Press.
Kinsey, Alfred C., et al., Sexual Behavior in the Human Female. Indiana University Press.
External links
Keel, Robert O., Heterosexual Deviance. (Goode, 1994, chapter 8, and Chapter 9, 6th edition, 2001.) Sociology of Deviant Behavior: FS 2003, University of Missouri–St. Louis.
Coleman, Thomas F., What's Wrong with Excluding Heterosexual Couples from Domestic Partner Benefits Programs? Unmarried America, American Association for Single People.
Interpersonal attraction
Interpersonal relationships
Love
Sexual orientation
1860s neologisms | Heterosexuality | [
"Biology"
] | 3,227 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
14,090 | https://en.wikipedia.org/wiki/Hull%20classification%20symbol | The United States Navy, United States Coast Guard, and United States National Oceanic and Atmospheric Administration (NOAA) use a hull classification symbol (sometimes called hull code or hull number) to identify their ships by type and by individual ship within a type. The system is analogous to the pennant number system that the Royal Navy and other European and Commonwealth navies use.
History
United States Navy
The U.S. Navy began to assign unique Naval Registry Identification Numbers to its ships in the 1890s. The system was a simple one in which each ship received a number which was appended to its ship type, fully spelled out, and added parenthetically after the ship's name when deemed necessary to avoid confusion between ships. Under this system, for example, the battleship Indiana was USS Indiana (Battleship No. 1), the cruiser Olympia was USS Olympia (Cruiser No. 6), and so on. Beginning in 1907, some ships also were referred to alternatively by single-letter or three-letter codes—for example, USS Indiana (Battleship No. 1) could be referred to as USS Indiana (B-1) and USS Olympia (Cruiser No. 6) could also be referred to as USS Olympia (C-6), while USS Pennsylvania (Armored Cruiser No. 4) could be referred to as USS Pennsylvania (ACR-4). However, rather than replacing it, these codes coexisted and were used interchangeably with the older system until the modern system was instituted on 17 July 1920.
During World War I, the U.S. Navy acquired large numbers of privately owned and commercial ships and craft for use as patrol vessels, mine warfare vessels, and various types of naval auxiliary ships, some of them with identical names. To keep track of them all, the Navy assigned unique identifying numbers to them. Those deemed appropriate for patrol work received section patrol numbers (SP), while those intended for other purposes received "identification numbers", generally abbreviated "Id. No." or "ID;" some ships and craft changed from an SP to an ID number or vice versa during their careers, without their unique numbers themselves changing, and some ships and craft assigned numbers in anticipation of naval service were never acquired by the Navy. The SP/ID numbering sequence was unified and continuous, with no SP number repeated in the ID series or vice versa so that there could not be, for example, both an "SP-435" and an "Id. No. 435". The SP and ID numbers were used parenthetically after each boat's or ship's name to identify it; although this system pre-dated the modern hull classification system and its numbers were not referred to at the time as "hull codes" or "hull numbers," it was used in a similar manner to today's system and can be considered its precursor.
United States Revenue Cutter Service and United States Coast Guard
The United States Revenue Cutter Service, which merged with the United States Lifesaving Service in January 1915 to form the modern United States Coast Guard, began following the Navy's lead in the 1890s, with its cutters having parenthetical numbers called Naval Registry Identification Numbers following their names, such as (Cutter No. 1), etc. This persisted until the Navy's modern hull classification system's introduction in 1920, which included Coast Guard ships and craft.
United States Coast and Geodetic Survey
Like the U.S. Navy, the United States Coast and Geodetic Survey – a uniformed seagoing service of the United States Government and a predecessor of the National Oceanic and Atmospheric Administration (NOAA) – adopted a hull number system for its fleet in the 20th century. Its largest vessels, "Category I" oceanographic survey ships, were classified as "ocean survey ships" and given the designation "OSS". Intermediate-sized "Category II" oceanographic survey ships received the designation "MSS" for "medium survey ship," and smaller "Category III" oceanographic survey ships were given the classification "CSS" for "coastal survey ship." A fourth designation, "ASV" for "auxiliary survey vessel," included even smaller vessels. In each case, a particular ship received a unique designation based on its classification and a unique hull number separated by a space rather than a hyphen; for example, the third Coast and Geodetic Survey ship named Pioneer was an ocean survey ship officially known as USC&GS Pioneer (OSS 31). The Coast and Geodetic Survey's system persisted after the creation of NOAA in 1970, when NOAA took control of the Survey's fleet, but NOAA later changed to its modern hull classification system.
United States Fish and Wildlife Service
The Fish and Wildlife Service, created in 1940 and reorganized as the United States Fish and Wildlife Service (USFWS) in 1956, adopted a hull number system for its fisheries research ships and patrol vessels. It consisted of "FWS" followed by a unique identifying number. In 1970, NOAA took control of the seagoing ships of the USFWS's Bureau of Commercial Fisheries, and as part of the NOAA fleet they were assigned new hull numbers beginning with "FRV," for Fisheries Research Vessel, followed by a unique identifying number. They eventually were renumbered under the modern NOAA hull number system.
The modern hull classification system
United States Navy
The U.S. Navy instituted its modern hull classification system on 17 July 1920, doing away with section patrol numbers, "identification numbers", and the other numbering systems described above. In the new system, all hull classification symbols are at least two letters; for basic types the symbol is the first letter of the type name, doubled, except for aircraft carriers.
The combination of symbol and hull number identifies a modern Navy ship uniquely. A heavily modified or re-purposed ship may receive a new symbol, and either retain the hull number or receive a new one. For example, one heavy gun cruiser was converted to a gun/missile cruiser, and its hull number changed to CAG-1. Also, the system of symbols has changed a number of times both since it was introduced in 1907 and since the modern system was instituted in 1920, so ships' symbols sometimes change without anything being done to the physical ship.
Hull numbers are assigned by classification. Duplication between, but not within, classifications is permitted. Hence, CV-1 was the aircraft carrier USS Langley and BB-1 was the battleship USS Indiana.
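A minimal sketch of this uniqueness rule, assuming a simple registry keyed by the (classification, number) pair; the helper and data structure are illustrative only, not any official Navy format. The ship names used are the ones cited in the text above.

```python
# Minimal sketch: hull numbers are unique only within a classification,
# so a (symbol, number) pair -- not the number alone -- identifies a ship.
registry = {}

def assign(symbol: str, number: int, name: str) -> str:
    key = (symbol, number)
    if key in registry:
        raise ValueError(f"{symbol}-{number} is already assigned to {registry[key]}")
    registry[key] = name
    return f"USS {name} ({symbol}-{number})"

print(assign("CV", 1, "Langley"))   # USS Langley (CV-1)
print(assign("BB", 1, "Indiana"))   # USS Indiana (BB-1) -- same number, different type
```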
Ship types and classifications have come and gone over the years, and many of the symbols listed below are not presently in use. The Naval Vessel Register maintains an online database of U.S. Navy ships showing which symbols are presently in use.
After World War II until 1975, the U.S. Navy defined a "frigate" as a type of surface warship larger than a destroyer and smaller than a cruiser. In other navies, such a ship generally was referred to as a "flotilla leader", or "destroyer leader". Hence the U.S. Navy's use of "DL" for "frigate" prior to 1975, while "frigates" in other navies were smaller than destroyers and more like what the U.S. Navy termed a "destroyer escort", "ocean escort", or "DE". The United States Navy 1975 ship reclassification of cruisers, frigates, and ocean escorts brought U.S. Navy classifications into line with other nations' classifications, at least cosmetically in terms of terminology, and eliminated the perceived "cruiser gap" with the Soviet Navy by redesignating the former "frigates" as "cruisers".
Military Sealift Command
If a U.S. Navy ship's hull classification symbol begins with "T-", it is part of the Military Sealift Command, has a primarily civilian crew, and is a United States Naval Ship (USNS) in non-commissioned service – as opposed to a commissioned United States Ship (USS) with an all-military crew.
United States Coast Guard
If a ship's hull classification symbol begins with "W", it is a commissioned cutter of the United States Coast Guard. Until 1965, the Coast Guard used U.S. Navy hull classification codes, prepending a "W" to their beginning. In 1965, it retired some of the less mission-appropriate Navy-based classifications and developed new ones of its own, most notably WHEC for "high endurance cutter" and WMEC for "medium endurance cutter".
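The two prefix conventions just described lend themselves to a simple decision rule. The sketch below is illustrative only; real symbols have many more special cases than this, and the function name is hypothetical.

```python
# Rough sketch of the "T-" (Military Sealift Command) and "W" (Coast Guard)
# prefix rules described above.
def describe_prefix(hull_symbol: str) -> str:
    if hull_symbol.startswith("T-"):
        return "Military Sealift Command ship (USNS, civilian-crewed, non-commissioned)"
    if hull_symbol.startswith("W"):
        return "Commissioned U.S. Coast Guard cutter"
    return "U.S. Navy ship (USS when commissioned)"

print(describe_prefix("T-AKE"))  # Military Sealift Command ship ...
print(describe_prefix("WMEC"))   # Commissioned U.S. Coast Guard cutter
print(describe_prefix("DDG"))    # U.S. Navy ship ...
```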
National Oceanic and Atmospheric Administration
The National Oceanic and Atmospheric Administration (NOAA), a component of the United States Department of Commerce, includes the National Oceanic and Atmospheric Administration Commissioned Officer Corps (or "NOAA Corps"), one of the eight uniformed services of the United States, and operates a fleet of seagoing research and survey ships. The NOAA fleet also uses a hull classification symbol system, which it also calls "hull numbers," for its ships.
After NOAA took over the former fleets of the U.S. Coast and Geodetic Survey and the U.S. Fish and Wildlife Service Bureau of Commercial Fisheries in 1970, it initially retained the Coast and Geodetic Survey's hull-number designations for its survey ships and adopted hull numbers beginning with "FRV", for "Fisheries Research Vessel", for its fisheries research ships. It later adopted a new system of ship classification, which it still uses today. In its modern system, the NOAA fleet is divided into two broad categories, research ships and survey ships. The research ships, which include oceanographic and fisheries research vessels, are given hull numbers beginning with "R", while the survey ships, generally hydrographic survey vessels, receive hull numbers beginning with "S". The letter is followed by a three-digit number; the first digit indicates the NOAA "class" (i.e., size) of the vessel, which NOAA assigns based on the ship's gross tonnage and horsepower, while the next two digits combine with the first digit to create a unique three-digit identifying number for the ship.
Generally, each NOAA hull number is written with a space between the letter and the three-digit number, as in, for example, "S 222".
Unlike in the U.S. Navy system, once an older NOAA ship leaves service, a newer one can be given the same hull number; for example, "S 222" was assigned to NOAAS Mount Mitchell (S 222), then assigned to NOAAS Thomas Jefferson (S 222), which entered NOAA service after Mount Mitchell was stricken.
United States Navy hull classification codes
The U.S. Navy's system of alpha-numeric ship designators, and its associated hull numbers, have been for several decades a unique method of categorizing ships of all types: combatants, auxiliaries and district craft. Although considerably changed in detail and expanded over the years, this system remains essentially the same as when formally implemented in 1920. It is a very useful tool for organizing and keeping track of naval vessels, and also provides the basis for the identification numbers painted on the bows (and frequently the sterns) of most U.S. Navy ships.
The ship designator and hull number system's roots extend back to the late 1880s when ship type serial numbers were assigned to most of the new-construction warships of the emerging "Steel Navy". During the course of the next thirty years, these same numbers were combined with filing codes used by the Navy's clerks to create an informal version of the system that was put in place in 1920. Limited usage of ship numbers goes back even earlier, most notably to the "Jeffersonian Gunboats" of the early 1800s and the "Tinclad" river gunboats of the Civil War Mississippi Squadron.
It is important to understand that hull number-letter prefixes are not acronyms, and should not be carelessly treated as abbreviations of ship type classifications. Thus, "DD" does not stand for anything more than "Destroyer". "SS" simply means "Submarine". And "FF" is the post-1975 type code for "Frigate."
The hull classification codes for ships in active duty in the United States Navy are governed under Secretary of the Navy Instruction 5030.8D.
Warships
Warships are designed to participate in combat operations.
The two-letter code originated from the need to distinguish various cruiser subtypes.
Aircraft carrier type
Aircraft carriers are ships designed primarily for the purpose of conducting combat operations by aircraft which engage in attacks against airborne, surface, sub-surface and shore targets. Contrary to popular belief, the "CV" hull classification symbol does not stand for "carrier vessel". "CV" derives from the cruiser designation, with one popular theory that the V comes from French voler, "to fly", but this has never been definitively proven. The V has long been used by the U.S. Navy for heavier-than-air craft and possibly comes from the French volplane. Aircraft carriers are designated in two sequences: the first sequence runs from CV-1 USS Langley to the very latest ships, and the second sequence, "CVE" for escort carriers, ran from CVE-1 Long Island to CVE-127 Okinawa before being discontinued.
AV: Heavier-than-air aircraft tender, later Seaplane tender (retired)
AVD: Seaplane tender destroyer (retired)
AVP: Seaplane tender, Small (retired)
AZ: Lighter-than-air aircraft tender (retired) (1920–1923)
AVG: General-purpose aircraft tender (repurposed escort carrier) (1941–42)
AVT (i): Auxiliary aircraft transport (retired)
AVT (ii): Auxiliary training carrier (retired)
ACV: Auxiliary aircraft carrier (escort carrier, replaced by CVE) (1942)
CV: Fleet aircraft carrier (1921–1975), multi-purpose aircraft carrier (1975–present)
CVA: Aircraft carrier, attack (category merged into CV, 30 June 1975)
CV(N): Aircraft carrier, night (deck equipped with lighting and pilots trained for nighttime flights) (1944) (retired)
CVAN: Aircraft carrier, attack, nuclear-powered (category merged into CVN, 30 June 1975)
CVB: Aircraft carrier, large (original USS Midway class, category merged into CVA, 1952)
CVE: Aircraft carrier, escort (retired) (1943–retirement of type)
CVHA: Aircraft carrier, helicopter assault (retired in favor of several LH-series amphibious assault ship hull codes)
CVHE: Aircraft carrier, helicopter, escort (retired)
CVL: Light aircraft carrier or aircraft carrier, small (retired)
CVN: Aircraft carrier, nuclear-powered
CVS: Antisubmarine aircraft carrier (retired)
CVT: Aircraft carrier, training (changed to AVT (auxiliary))
CVU: Aircraft carrier, utility (retired)
CVG: Aircraft carrier, guided missile (retired)
CF: Flight deck cruiser (1930s, retired unused)
CVV: Aircraft carrier, vari-purpose, medium (retired unused)
Surface combatant type
Surface combatants are ships which are designed primarily to engage enemy forces on the high seas. The primary surface combatants are battleships, cruisers and destroyers. Battleships are very heavily armed and armored; cruisers moderately so; destroyers and smaller warships, less so. Before 1920, ships were called "<type> no. X", with the type fully pronounced. The types were commonly abbreviated in ship lists to "B-X", "C-X", "D-X" et cetera—for example, before 1920, USS Minnesota would have been called "USS Minnesota, Battleship number 22" orally and "USS Minnesota, B-22" in writing. After 1920, the ship's name would have been both written and pronounced "USS Minnesota (BB-22)". In generally decreasing size, the types are:
ACR: Armored cruiser (pre-1920)
AFSB: Afloat forward staging base (also AFSB(I) for "interim"); changed to MLP (Mobile Landing Platform), then ESD and ESB
B: Battleship (pre-1920)
BB: Battleship
BBG: Battleship, guided missile or arsenal ship (never used operationally)
BM: Monitor (1920–retirement)
C: Cruiser (pre-1920 protected cruisers and peace cruisers)
CA: (first series) Cruiser, armored (retired, comprised all surviving pre-1920 armored and protected cruisers)
CA: (second series) Heavy cruiser, category later renamed gun cruiser (retired)
CAG: Cruiser, heavy, guided missile (retired)
CB: Large cruiser (retired)
CBC: Large command cruiser (never used operationally)
CC: (first usage) Battlecruiser (never used operationally)
CC: (second usage) Command cruiser (retired)
CLC: Command cruiser, light (retired)
CG: Cruiser, guided missile
CGN: Cruiser, guided missile, nuclear-powered
CL: Cruiser, light (retired)
CLAA: Cruiser, light, anti-aircraft (retired)
CLD: Cruiser-destroyer, light (never used operationally)
CLG: Cruiser, light, guided missile (retired)
CLGN: Cruiser, light, guided missile, nuclear-powered (never used operationally)
CLK: Cruiser, hunter–killer (never used operationally)
CM: Cruiser–minelayer (retired)
CS: Scout cruiser (retired)
CSGN: Cruiser, strike, guided missile, nuclear-powered (never used operationally)
D: Destroyer (pre-1920)
DD: Destroyer
DDC: Corvette (briefly proposed in the mid-1950s)
DDE: Escort destroyer, a destroyer (DD) converted for antisubmarine warfare – category abolished 1962. (not to be confused with destroyer escort DE)
DDG: Destroyer, guided missile
DDK: Hunter–killer destroyer (category merged into DDE, 4 March 1950)
DDR: Destroyer, radar picket (retired)
DE: Destroyer escort (World War II, later became Ocean escort)
DE: Ocean escort (abolished 30 June 1975)
DEG: Guided missile ocean escort (abolished 30 June 1975)
DER: Destroyer escort, radar picket (abolished 30 June 1975) There were two distinct breeds of DER, the DEs which were converted to DERs during World War II and the more numerous postwar DER conversions.
DL: Destroyer leader (later frigate) (retired)
DLG: Destroyer leader, guided missile (later frigate) (abolished 30 June 1975)
DLGN: Destroyer leader, guided missile, nuclear-powered (later frigate) (abolished 30 June 1975). The DL category was established in 1951 with the abolition of the CLK category. CLK 1 became DL 1 and DD 927–930 became DL 2–5. By the mid-1950s the term destroyer leader had been dropped in favor of frigate. Most DLGs and DLGNs were reclassified as CGs and CGNs on 30 June 1975; however, DLG 6–15 became DDG 37–46. The old DLs were already gone by that time.
DM: Destroyer, minelayer (retired)
DMS: Destroyer, minesweeper (retired)
FF: Frigate
PF: Patrol frigate (retired)
FFG: Frigate, guided missile
FFH: Frigate with assigned helicopter
FFL: Frigate, light
FFR: Frigate, radar picket (retired)
FFT: Frigate (reserve training) (retired). The FF, FFG, and FFR designations were established 30 June 1975 as new type symbols for ex-DEs, DEGs, and DERs. The first new-built ships to carry the FF/FFG designation were the Oliver Hazard Perry-class frigates.
PG: Patrol gunboat (retired)
PCH: Patrol craft, hydrofoil (retired)
PHM: Patrol, hydrofoil, missile (retired)
K: Corvette (retired)
LCS: Littoral combat ship. In January 2015, the Navy announced that the up-gunned LCS would be reclassified as a frigate, since the requirements of the SSC Task Force were to upgrade the ships with frigate-like capabilities. The Navy hoped to start retrofitting technological upgrades onto existing and under-construction LCSs before 2019.
LSES: Large Surface Effect Ship
M: Monitor (1880s–1920)
SES: Surface Effect Ship
TB: Torpedo boat
Submarine type
Submarines are all self-propelled submersible types (usually started with SS) regardless of whether employed as combatant, auxiliary, or research and development vehicles which have at least a residual combat capability. While some classes, including all diesel-electric submarines, are retired from USN service, non-U.S. navies continue to employ SS, SSA, SSAN, SSB, SSC, SSG, SSM, and SST types. With the advent of new Air Independent Propulsion/Power (AIP) systems, both SSI and SSP are used to distinguish the types within the USN, but SSP has been declared the preferred term. SSK, retired by the USN, continues to be used colloquially and interchangeably with SS for diesel-electric attack/patrol submarines within the USN, and, more formally, by the Royal Navy and British firms such as Jane's Information Group.
SC: Cruiser Submarine (retired)
SF: Fleet Submarine (retired)
SM: Submarine Minelayer (retired)
SS: Submarine, Attack Submarine
SSA: Submarine Auxiliary, Auxiliary/Cargo Submarine
SSAN: Submarine Auxiliary Nuclear, Auxiliary/Cargo Submarine, Nuclear-powered
SSB: Submarine Ballistic, Ballistic Missile Submarine
SSBN: Submarine Ballistic Nuclear, Ballistic Missile Submarine, Nuclear-powered
SSC: Coastal Submarine, over 150 tons
SSG: Guided Missile Submarine
SSGN: Guided Missile Submarine, Nuclear-powered
SSI: Attack Submarine (Diesel Air-Independent Propulsion)
SSK: Hunter-Killer/ASW Submarine (retired)
SSKN: Hunter-Killer/ASW Submarine, Nuclear-powered (retired)
SSM: Midget Submarine, under 150 tons
SSN: Attack Submarine, Nuclear-powered
SSNR: Special Attack Submarine
SSO: Submarine Oiler (retired)
SSP: Attack Submarine (Diesel Air-Independent Power) (alternate use), formerly Submarine Transport
SSQ: Auxiliary Submarine, Communications (retired)
SSQN: Auxiliary Submarine, Communications, Nuclear-powered (retired)
SSR: Radar Picket Submarine (retired)
SSRN: Radar Picket Submarine, Nuclear-powered (retired)
SST: Training Submarine
X: Midget submarine
IXSS: Unclassified Miscellaneous Submarine
MTS: Moored Training Ship (Naval Nuclear Power School Training Platform; reconditioned SSBNs and SSNs)
Patrol combatant type
Patrol combatants are ships whose mission may extend beyond coastal duties and whose characteristics include adequate endurance and seakeeping, providing a capability for operations exceeding 48 hours on the high seas without support. This notably included Brown Water Navy/Riverine Forces during the Vietnam War. Few of these ships are in service today.
PBR: Patrol Boat, River, Brown Water Navy (Pibber or PBR-Vietnam)
PC: Coastal Patrol, originally Sub Chaser
PCF: Patrol Craft, Fast; Swift Boat, Brown Water Navy (Vietnam)
PE: Eagle Boat of World War I
PF: World War II Frigate, based on the British River-class frigate.
PFG: Original designation of
PG: WWII-era Gunboats, later Patrol combatant, with ability to operate in rivers; what is generally known as River gunboats
PGH: Patrol Combatant, Hydrofoil ()
PHM: Patrol, Hydrofoil Missile ()
PR: Patrol, River, such as the
PT: Patrol Torpedo Boat, the U.S. take on the Motor Torpedo Boat (World War II)
PTF: Patrol Torpedo Fast, Brown Water Navy (Vietnam)
PTG/PTGB: Patrol Torpedo Gunboat
Monitor: Heavily gunned riverine boat, Brown Water Navy (Vietnam and prior). Named for the original USS Monitor.
ASPB: Assault Support Patrol Boat, "Alpha Boat", Brown Water Navy; also used as riverine minesweeper (Vietnam)
PACV: Patrol Air Cushion Vehicle, hovercraft that was part of the Brown Water Navy (Vietnam)
SP: Section Patrol, used indiscriminately for patrol vessels, mine warfare vessels, and some other types (World War I; retired 1920)
Amphibious warfare type
Amphibious warfare vessels include all ships having an organic capability for amphibious warfare and which have characteristics enabling long duration operations on the high seas. There are two classifications of craft: amphibious warfare ships, which are built to cross oceans, and landing craft, which are designed to take troops from ship to shore in an invasion.
The U.S. Navy hull classification symbol for a ship with a well deck depends on its facilities for aircraft (a minimal lookup is sketched after this list):
An LSD has a helicopter deck, which was removable in the older ships.
An LPD has a hangar in addition to the helicopter deck.
An LHD or LHA has a full-length flight deck.
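A rough, illustrative mapping of the aircraft facilities listed above onto the well-deck ship types. The boolean flags and function name are assumptions made for the example; the real classification of a ship depends on far more than this.

```python
# Illustrative only: map aircraft facilities to the well-deck ship symbols
# described above.
def well_deck_type(has_hangar: bool, full_length_flight_deck: bool) -> str:
    if full_length_flight_deck:
        return "LHA/LHD"   # full-length flight deck
    if has_hangar:
        return "LPD"       # helicopter deck plus hangar
    return "LSD"           # helicopter deck only

print(well_deck_type(False, False))  # LSD
print(well_deck_type(True, False))   # LPD
print(well_deck_type(True, True))    # LHA/LHD
```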
Ships
AKA: Attack Cargo Ship (To LKA, 1969)
APA: Attack Transport (To LPA, 1969)
APD: High speed transport (Converted Destroyer or Destroyer Escort) (To LPR, 1969)
APM: Mechanized Artillery Transports (To LSD)
AGC: Amphibious Force Flagship (To LCC, 1969)
LCC: (second usage) Amphibious Command Ship
LHA: General-Purpose Amphibious Assault Ship, also known as Landing ship, Helicopter, Assault
LHD: Multi-Purpose Amphibious Assault Ship, also known as Landing ship, Helicopter, Dock
LKA: Amphibious Cargo Ship (out of commission)
LPA: Amphibious Transport
LPD: Amphibious transport dock, also known as Landing ship, Personnel, Dock
LPH: Landing ship, Personnel, Helicopter
LPR: High speed transport
LSD: Landing Ship, Dock
LSH: Landing Ship, Heavy
LSIL: Landing Ship, Infantry (Large) (formerly LCIL)
LSL: Landing Ship, Logistics
LSM: Landing Ship, Medium
LSM(R): Landing Ship, Medium (Rocket)
LSSL: Landing Ship, Support (Large) (formerly LCSL)
LST: Landing Ship, Tank
LST(H): Landing Ship, Tank (Hospital)
LSV: Landing Ship, Vehicle
Landing Craft
LCA: Landing Craft, Assault
LCAC: Landing Craft Air Cushion
LCC: (first usage) Landing Craft, Control
LCFF: Landing Craft, Flotilla Flagship
LCH: Landing Craft, Heavy
LCI: Landing Craft, Infantry, World War II-era classification further modified by:
(G) – Gunboat
(L) – Large
(M) – Mortar
(R) – Rocket
LCL: Landing Craft, Logistics (UK)
LCM: Landing Craft, Mechanized
LCP: Landing Craft, Personnel
LCP(L): Landing Craft, Personnel, Large
LCP(R): Landing Craft, Personnel, Ramped
LCPA: Landing Craft, Personnel, Air-Cushioned
LCS(L): Landing Craft, Support (Large) changed to LSSL in 1949
LCT: Landing Craft, Tank (World War II era)
LCU: Landing Craft, Utility
LCVP: Landing Craft, Vehicle and Personnel
LSH: Landing Ship Heavy (Royal Australian Navy)
Expeditionary support
These ships are operated by Military Sealift Command, carry the ship prefix "USNS", and have hull codes beginning with "T-".
EMS: Expeditionary Medical Ship, an EPF modified into a hospital ship
EPF: Expeditionary fast transport
ESB: Expeditionary Mobile Base (a variant of ESD, formerly Afloat Forward Staging Base (AFSB))
ESD: Expeditionary Transfer Dock
HST: High-Speed Transport (similar to JHSV, not to be confused with WWII-era High-speed transport (APD))
HSV: High-Speed Vessel
JHSV: Joint High-Speed Vessel (changed to EPF)
MLP: Mobile Landing Platform (changed to ESD)
Mine warfare type
Mine warfare ships are those ships whose primary function is mine warfare on the high seas.
ADG: Degaussing ship
AM: Minesweeper
AMb: Harbor minesweeper
AMc: Coastal minesweeper
AMCU: Underwater mine locator
AMS: Motor minesweeper
CM: Cruiser (i.e., large) minelayer
CMc: Coastal minelayer
DM: High-speed minelayer (converted destroyer)
DMS: High-speed minesweeper (converted destroyer)
PCS: Submarine chasers (wooden) fitted for minesweeping
YDG: District degaussing vessel
In 1955 all mine warfare vessels except for degaussing vessels had their hull codes changed to begin with "M".
MCM: Mine countermeasures ship
MCS: Mine countermeasures support ship
MH(C)(I)(O)(S): Minehunter, (coastal) (inshore) (ocean) (hunter and sweeper, general)
MLC: Coastal minelayer
MSC: Minesweeper, coastal
MSF: Minesweeper, steel hulled
MSO: Minesweeper, ocean
Coastal defense type
Coastal defense ships are those whose primary function is coastal patrol and interdiction.
FS: Corvette
PB: Patrol boat
PBR: Patrol boat, river
PC: Patrol, coastal
PCE: Patrol craft, escort
PCF: Patrol craft, fast, (swift boat)
PCS: Patrol craft, sweeper (modified-motor minesweepers meant for anti-submarine warfare)
PF: Frigate, in a role similar to World War II Commonwealth corvette
PG: Patrol gunboat
PGM: Motor gunboat (To PG, 1967)
PR: Patrol, river
SP: Section patrol
Auxiliaries
An auxiliary ship is designed to operate in any number of roles supporting combatant ships and other naval operations.
Combat logistics type
Ships which have the capability to provide underway replenishment (UNREP) to fleet units.
AE: Ammunition ship
AF: Stores ship (retired)
AFS: Combat stores ship
AK: Dry cargo ship
AKE: Advanced dry cargo ship
AKS: General stores ship
AO: Fleet Oiler
AOE: Fast combat support ship
AOL: Light replenishment oiler
AOR: Replenishment oiler
AVS: Aviation Stores Issue Ship (retired)
Mobile logistics type
Mobile logistics ships have the capability to provide direct material support to other deployed units operating far from home ports.
AC: Collier (retired)
AD: Destroyer tender
AGP: Patrol craft tender
AR: Repair ship
ARB: Repair ship, battle damage
ARC: Repair ship, cable
ARG: Repair ship, internal combustion engine
ARH: Repair ship, heavy-hull
ARL: Repair ship, landing craft
ARV: Repair ship, aircraft
ARVH: Repair ship, aircraft, helicopter
AS: Submarine tender
AW: Distilling ship (retired)
Support ships
Support ships are not designed to participate in combat and are generally not armed. For ships with civilian crews (owned by and/or operated for Military Sealift Command and the Maritime Administration), the prefix T- is placed at the front of the hull classification.
Support ships are designed to operate in the open ocean in a variety of sea states to provide general support to either combatant forces or shore-based establishments. They include smaller auxiliaries which, by the nature of their duties, leave inshore waters.
AB: Auxiliary Crane Ship (1920–41)
ACS: Auxiliary Crane Ship
AG: Miscellaneous Auxiliary
AGB: Icebreaker
AGDE: Testing Ocean Escort
AGDS: Deep Submergence Support Ship
AGEH: Hydrofoil, experimental
AGER: (i): Miscellaneous Auxiliary, Electronic Reconnaissance
AGER: (ii): Environmental Research Ship
AGF: Miscellaneous Command Ship
AGFF: Testing Frigate
AGHS: Patrol combatant support ship—ocean or inshore
AGL: Auxiliary vessel, lighthouse tender
AGM: Missile Range Instrumentation Ship
AGMR: Major Communications Relay Ship
AGOR: Oceanographic Research Ship
AGOS: Ocean Surveillance Ship
AGR: Radar picket ship
AGS: Surveying Ship
AGSC: Coastal Survey Ships
AGSE: Submarine and Special Warfare Support
AGTR: Technical research ship
AH: Hospital ship
AKD: Cargo Ship, Dock
AKL: Cargo Ship, Small
AKN: Cargo Ship, Net
AKR: Cargo Ship, Vehicle
AKV: Cargo Ship, Aircraft
AN: Net laying ship
AOG: Gasoline tanker
AOT: Transport Oiler
AP: Transport
APB: Self-propelled Barracks Ship
APC: Coastal Transport
APc: Coastal Transport, Small
APH: Evacuation Transport
APL: Barracks Craft
ARS: Rescue and Salvage Ship
ARSD: Salvage Lifting Vessels
ASR: Submarine Rescue Ship
AT: Fleet Tug
ATA: Auxiliary Ocean Tug
ATF: Fleet Ocean Tug
ATLS: Drone Launch Ship
ATO: Fleet Tug, Old
ATR: Rescue Tug
ATS: Salvage and Rescue Ship
AVB(i): Aviation Logistics Support Ship
AVB(ii): Advance Aviation Base Ship
AVM: Guided Missile Ship
AVT(i): Auxiliary Aircraft Transport
AVT(ii): Auxiliary Aircraft Landing Training Ship
EPCER: Experimental – Patrol Craft Escort – Rescue
PCER: Patrol Craft Escort – Rescue
SBX: Sea-based X-band Radar – a mobile active electronically scanned array early-warning radar station.
Service type craft
Service craft are navy-subordinated craft (including non-self-propelled) designed to provide general support to either combatant forces or shore-based establishments. The suffix "N" refers to non-self-propelled variants.
AFDB: Large Auxiliary Floating Dry Dock
AFD/AFDL: Small Auxiliary Floating Dry Dock
AFDM: Medium Auxiliary Floating Dry Dock
ARD: Auxiliary Repair Dry Dock
ARDM: Medium Auxiliary Repair Dry Dock
JUB/JB: Jack Up Barge
Submersibles
DSRV: Deep Submergence Rescue Vehicle
DSV: Deep Submergence Vehicle
NR: Submersible Research Vehicle
Yard and district craft
YC: Open Lighter
YCF: Car Float
YCV: Aircraft Transportation Lighter
YD: Floating Crane
YDT: Diving Tender
YF: Covered Lighter
YFB: Ferry Boat or Launch
YFD: Yard Floating Dry Dock
YFN: Covered Lighter (non-self propelled)
YFNB: Large Covered Lighter (non-self propelled)
YFND: Dry Dock Companion Craft (non-self propelled)
YFNX: Lighter (Special purpose) (non-self propelled)
YFP: Floating Power Barge
YFR: Refrigerated Covered Lighter
YFRN: Refrigerated Covered Lighter (non-self propelled)
YFRT: Range Tender (e.g., USNS Range Recoverer (T-AG-161))
YFU: Harbor Utility Craft
YG: Garbage Lighter
YGN: Garbage Lighter (non-self propelled)
YH: Ambulance boat/small medical support vessel
YLC: Salvage Lift Craft
YM: Dredge
YMN: Dredge (non-self propelled)
YNG: Net Gate Craft
YN: Yard Net Tender
YNT: Net Tender
YO: Fuel Oil Barge
YOG: Gasoline Barge
YOGN: Gasoline Barge (non-self propelled)
YON: Fuel Oil Barge (non-self propelled)
YOS: Oil Storage Barge
YP: Patrol Craft, Training
YPD: Floating Pile Driver
YR: Floating Workshop
YRB: Repair and Berthing Barge
YRBM: Repair, Berthing and Messing Barge
YRDH: Floating Dry Dock Workshop (Hull)
YRDM: Floating Dry Dock Workshop (Machine)
YRR: Radiological Repair Barge (services nuclear ships and submarines)
YRST: Salvage Craft Tender
YSD: Seaplane Wrecking Derrick - Yard Seaplane Derrick
YSR: Sludge Removal Barge
YT: Harbor Tug (craft later assigned YTB, YTL, or YTM classifications)
YTB: Large Harbor tug
YTL: Small Harbor Tug
YTM: Medium Harbor Tug
YTT: Torpedo trials craft
YW: Water Barge
YWN: Water Barge (non-self propelled)
Miscellaneous ships and craft
ID or Id. No.: Civilian ship taken into service for auxiliary duties, used indiscriminately for large ocean-going ships of all kinds and coastal and yard craft (World War I; retired 1920)
IX: Unclassified Miscellaneous Unit
"none": To honor her unique historical status, USS Constitution, formerly IX 21, was reclassified to "none", effective 1 September 1975.
Airships
Although they were aircraft, pre-World War II rigid airships were commissioned (no differently from surface warships and submarines), flew the U.S. ensign from their sterns, and carried a United States Ship (USS) designation.
Rigid airships:
ZR: Rigid airship
ZRS: Rigid airship scout
ZRCV: Rigid airship aircraft carrier, proposed, not built
Lighter-than-air aircraft (e.g., blimps) continued to fly the U.S. ensign from their stern but were registered as aircraft.
Temporary designations
United States Navy Designations (Temporary) are a form of U.S. Navy ship designation, intended for temporary identification use. Such designations usually occur during periods of sudden mobilization, such as that which occurred prior to, and during, World War II or the Korean War, when it was determined that a sudden temporary need arose for a ship for which there was no official Navy designation.
During World War II, for example, a number of commercial vessels were requisitioned, or acquired, by the U.S. Navy to meet the sudden requirements of war. A yacht acquired by the U.S. Navy at the start of World War II might have seemed desirable to the Navy even though its intended use was not fully developed or explored at the time of acquisition.
On the other hand, a U.S. Navy vessel, such as the yacht in the example above, already in commission or service, might be desired, or found useful, for another need or purpose for which there is no official designation.
IX: Unclassified Miscellaneous Auxiliary Ship; for example, the yacht Chanco, acquired by the U.S. Navy on 1 October 1940, was classified as a minesweeper but was instead used mainly as a patrol craft along the New England coast. When another assignment came and it could not be determined how to classify the vessel, it was redesignated IX-175 on 10 July 1944.
IXSS: Unclassified Miscellaneous Submarines
YAG: Miscellaneous Auxiliary Service Craft
Numerous other U.S. Navy vessels were launched with a temporary, or nominal, designation, such as YMS or PC, since it could not be determined, at the time of construction, what they should be used for. Many of these were vessels in the 150 to 200 feet length class with powerful engines, whose function could be that of a minesweeper, patrol craft, submarine chaser, seaplane tender, tugboat, or other. Once their destiny, or capability, was found or determined, such vessels were reclassified with their actual designation.
United States Coast Guard vessels
Prior to 1965, U.S. Coast Guard cutters used the same designations as naval ships, but preceded by a "W" to indicate a Coast Guard commission. The U.S. Coast Guard considers any ship over 65 feet in length with a permanently assigned crew to be a cutter.
Current USCG cutter classes and types
Historic USCG cutter classes and types
USCG classification symbols definitions
CG: all Coast Guard ships in the 1920s (retired)
WAGB: Coast Guard icebreaker
WAGL: Auxiliary vessel, lighthouse tender (retired 1960s)
WAVP: seagoing Coast Guard seaplane tenders (retired 1960s)
WDE: seagoing Coast Guard destroyer escorts (retired 1960s)
WHEC: Coast Guard high endurance cutters
WIX: Coast Guard barque
WLB: Coast Guard buoy tenders
WLBB: Coast Guard seagoing buoy tenders/ice breaker
WLI: Coast Guard inland buoy tenders
WLIC: Coast Guard inland construction tenders
WLM: Coast Guard coastal buoy tenders
WLR: Coast Guard river buoy tenders
WMEC: Coast Guard medium endurance cutters
WMSL: Coast Guard maritime security cutter, large (referred to as national security cutters)
WPB: Coast Guard patrol boats
WPC: Coast Guard patrol craft—later reclassed under WHEC, symbol reused for Coast Guard patrol cutter (referred to as fast response cutters)
WPG: seagoing Coast Guard gunboats (retired 1960s)
WTGB: Coast Guard tug boat (140' icebreakers)
WYTL: Small harbor tug
USCG classification symbols for small craft and boats
MLB: Motor Life Boat (52', 47', and 44' variants)
UTB: Utility Boat
DPB: Deployable Pursuit Boat
ANB: Aids to Navigation Boats
TPSB: Transportable Port Security Boat
RHIB: Rigid Hull Inflatable Boats
SRB: Surf Rescue Boat (30')
National Oceanic and Atmospheric Administration hull codes
R: Research ships, including oceanographic and fisheries research ships
S: Survey ships, including hydrographic survey ships
The letter is paired with a three-digit number. The first digit of the number is determined by the ship's "power tonnage," defined as the sum of its shaft horsepower and gross international tonnage, as follows (a minimal sketch of the rule appears after this list):
If the power tonnage is 5,501 through 9,000, the first digit is "1".
If the power tonnage is 3,501 through 5,500, the first digit is "2."
If the power tonnage is 2,001 through 3,500, the first digit is "3."
If the power tonnage is 1,001 through 2,000, the first digit is "4."
If the power tonnage is 501 through 1,000, the first digit is "5."
If the power tonnage is 500 or less and the ship is at least long, the first digit is "6."
The second and third digits are assigned to create a unique three-digit hull number.
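A minimal sketch of the first-digit rule above, assuming only the bands stated in the list. The function name and the sample figures are hypothetical; the minimum-length condition attached to class 6 is not spelled out in the text, so it is omitted here, as is any handling of power tonnages above 9,000.

```python
# Sketch of the NOAA "first digit" rule described above.
def noaa_class_digit(shaft_horsepower: float, gross_tonnage: float) -> int:
    power_tonnage = shaft_horsepower + gross_tonnage
    if power_tonnage > 5500:   # 5,501 - 9,000
        return 1
    if power_tonnage > 3500:   # 3,501 - 5,500
        return 2
    if power_tonnage > 2000:   # 2,001 - 3,500
        return 3
    if power_tonnage > 1000:   # 1,001 - 2,000
        return 4
    if power_tonnage > 500:    # 501 - 1,000
        return 5
    return 6                   # 500 or less (subject to a minimum length)

# Hypothetical figures: 2,800 shaft horsepower and 1,600 gross tons.
print(noaa_class_digit(2800, 1600))  # 2 (power tonnage 4,400 falls in 3,501-5,500)
```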
See also
United States Navy 1975 ship reclassification
List of hull classifications - same as this article but in alphabetical order
List of ships of the United States Army
Ship prefix
Hull classification symbol (Canada)
Pennant number for the British Commonwealth equivalent
Notes
Explanatory notes
Wikilink footnotes
Citations
General and cited references
United States Naval Aviation 1910–1995, Appendix 16: U.S. Navy and Marine Corps Squadron Designations and Abbreviations. U.S. Navy, c. 1995. Quoted in Derdall and DiGiulian, op cit.
USCG Designations
Naval History and Heritage Command Online Library of Selected Images: U.S. Navy Ships – Listed by Hull Number: "SP" #s and "ID" #s — World War I Era Patrol Vessels and other Acquired Ships and Craft
Wertheim, Eric. The Naval Institute Guide to Combat Fleets of the World, 15th Edition: Their Ships, Aircraft, and Systems. Annapolis, Maryland: Naval Institute Press, 2007. . .
Further reading
Friedman, Norman. U.S. Small Combatants, Including PT-Boats, Subchasers, and the Brown-Water Navy: An Illustrated Design History. Annapolis, Md: Naval Institute Press, 1987. .
External links
Current U.S. Navy Ship Classifications
U.S. Navy Inactive Classification Symbols
U.S. Naval Vessels Registry (Service Craft)
U.S. Naval Vessels Registry (Ships)
U.S. Naval Vessel Register (Current ships)
Ships of the United States Navy
Ship identification numbers
Hull classifications
United States
Service vessels of the United States | Hull classification symbol | [
"Mathematics"
] | 8,933 | [
"Ship identification numbers",
"Mathematical objects",
"Numbers"
] |
14,094 | https://en.wikipedia.org/wiki/Human%20cloning | Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibilities of human cloning have raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning.
Two commonly discussed types of human cloning are therapeutic cloning and reproductive cloning.
Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants. It is an active area of research, but is not yet in routine medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and (more recently) pluripotent stem cell induction.
Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues.
History
Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policymakers began to take the prospect seriously in 1969. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms "clone" and "cloning", which had been used in agriculture since the early 20th century. In his speech on "Biological Possibilities for the Human Species of the Next Ten Thousand Years" at the Ciba Foundation Symposium on Man and his Future in 1963, he said:
Nobel Prize-winning geneticist Joshua Lederberg advocated cloning and genetic engineering in an article in The American Naturalist in 1966 and again, the following year, in The Washington Post. He sparked a debate with conservative bioethicist Leon Kass, who wrote at the time that "the programmed reproduction of man will, in fact, dehumanize him." Another Nobel Laureate, James D. Watson, publicized the potential and the perils of cloning in his Atlantic Monthly essay, "Moving Toward the Clonal Man", in 1971.
With the cloning of a sheep known as Dolly in 1996 by somatic cell nuclear transfer (SCNT), the idea of human cloning became a hot debate topic. Many nations outlawed it, while a few scientists promised to make a clone within the next few years. The first hybrid human clone was created in November 1998, by Advanced Cell Technology. It was created using SCNT; a nucleus was taken from a man's leg cell and inserted into a cow's egg from which the nucleus had been removed, and the hybrid cell was cultured and developed into an embryo. The embryo was destroyed after 12 days.
In 2004 and 2005, Hwang Woo-suk, a professor at Seoul National University, published two separate articles in the journal Science claiming to have successfully harvested pluripotent, embryonic stem cells from a cloned human blastocyst using SCNT techniques. Hwang claimed to have created eleven different patient-specific stem cell lines. This would have been the first major breakthrough in human cloning. However, in 2006 Science retracted both of his articles on account of clear evidence that much of his data from the experiments was fabricated.
In January 2008, Dr. Andrew French and Samuel Wood of the biotechnology company Stemagen announced that they successfully created the first five mature human embryos using SCNT. In this case, each embryo was created by taking a nucleus from a skin cell (donated by Wood and a colleague) and inserting it into a human egg from which the nucleus had been removed. The embryos were developed only to the blastocyst stage, at which point they were studied in processes that destroyed them. Members of the lab said that their next set of experiments would aim to generate embryonic stem cell lines; these are the "holy grail" that would be useful for therapeutic or reproductive cloning.
In 2011, scientists at the New York Stem Cell Foundation announced that they had succeeded in generating embryonic stem cell lines, but their process involved leaving the oocyte's nucleus in place, resulting in triploid cells, which would not be useful for cloning.
In 2013, a group of scientists led by Shoukhrat Mitalipov published the first report of embryonic stem cells created using SCNT. In this experiment, the researchers developed a protocol for using SCNT in human cells, which differs slightly from the one used in other organisms. Four embryonic stem cell lines from human fetal somatic cells were derived from those blastocysts. All four lines were derived using oocytes from the same donor, ensuring that all mitochondrial DNA inherited was identical. A year later, a team led by Robert Lanza at Advanced Cell Technology reported that they had replicated Mitalipov's results and further demonstrated the effectiveness by cloning adult cells using SCNT.
In 2018, the first successful cloning of primates using SCNT was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua.
Methods
Somatic cell nuclear transfer (SCNT)
In somatic cell nuclear transfer ("SCNT"), the nucleus of a somatic cell is taken from a donor and transplanted into a host egg cell, which had its own genetic material removed previously, making it an enucleated egg. After the donor somatic cell genetic material is transferred into the host oocyte with a micropipette, the somatic cell genetic material is fused with the egg using an electric current. Once the two cells have fused, the new cell can be permitted to grow in a surrogate or artificially. This is the process that was used to successfully clone Dolly the sheep. The technique, now refined, has indicated that it was possible to replicate cells and reestablish pluripotency, or "the potential of an embryonic cell to grow into any one of the numerous different types of mature body cells that make up a complete organism".
Induced pluripotent stem cells (iPSCs)
Creating induced pluripotent stem cells ("iPSCs") is a long and inefficient process. Pluripotency refers to a stem cell that has the potential to differentiate into any of the three germ layers: endoderm (interior stomach lining, gastrointestinal tract, the lungs), mesoderm (muscle, bone, blood, urogenital), or ectoderm (epidermal tissues and nervous tissue). A specific set of genes, often called "reprogramming factors", are introduced into a specific adult cell type. These factors send signals in the mature cell that cause the cell to become a pluripotent stem cell. This process is highly studied and new techniques are being discovered frequently on how to improve this induction process.
Depending on the method used, reprogramming of adult cells into iPSCs for implantation could have severe limitations in humans. If a virus is used as a reprogramming factor for the cell, cancer-causing genes called oncogenes may be activated. These cells would appear as rapidly dividing cancer cells that do not respond to the body's natural cell signaling process. However, in 2008 scientists discovered a technique that could remove the presence of these oncogenes after pluripotency induction, thereby increasing the potential use of iPSC in humans.
Comparing SCNT to reprogramming
Both the processes of SCNT and iPSCs have benefits and deficiencies. Historically, reprogramming methods were better studied than SCNT derived embryonic stem cells (ESCs). However, more recent studies have put more emphasis on developing new procedures for SCNT-ESCs. The major advantage of SCNT over iPSCs at this time is the speed with which cells can be produced. iPSCs derivation takes several months while SCNT would take a much shorter time, which could be important for medical applications. New studies are working to improve the process of iPSC in terms of both speed and efficiency with the discovery of new reprogramming factors in oocytes. Another advantage SCNT could have over iPSCs is its potential to treat mitochondrial disease, as it uses a donor oocyte. No other advantages are known at this time in using stem cells derived from one method over stem cells derived from the other.
Uses and actual potential
Work on cloning techniques has advanced understanding of developmental biology in humans. Observing human pluripotent stem cells grown in culture provides great insight into human embryo development, which otherwise cannot be seen. Scientists are now able to better define steps of early human development. Studying signal transduction along with genetic manipulation within the early human embryo has the potential to provide answers to many developmental diseases and defects. Many human-specific signaling pathways have been discovered by studying human embryonic stem cells. Studying developmental pathways in humans has given developmental biologists more evidence toward the hypothesis that developmental pathways are conserved throughout species.
iPSCs and cells created by SCNT are useful for research into the causes of disease, and as model systems used in drug discovery.
Cells produced with SCNT or iPSCs could eventually be used in stem cell therapy, or to create organs to be used in transplantation, known as regenerative medicine. Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a widely used form of stem cell therapy. No other forms of stem cell therapy are in clinical use at this time. Research is underway to potentially use stem cell therapy to treat heart disease, diabetes, and spinal cord injuries. Regenerative medicine is not in clinical practice, but is heavily researched for its potential uses. This type of medicine would allow for autologous transplantation, thus removing the risk of organ transplant rejection by the recipient. For instance, a person with liver disease could potentially have a new liver grown using their same genetic material and transplanted to remove the damaged liver. In current research, human pluripotent stem cells have shown promise as a reliable source for generating human neurons, showing the potential for regenerative medicine in brain and neural injuries.
Ethical implications
In bioethics, the ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, for instance relating to Christian views of procreation and personhood, the questions raised by cloning engage secular perspectives as well, particularly the concept of identity.
Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology.
Opposition to therapeutic cloning mainly centers around the status of embryonic stem cells, which has connections with the abortion debate. The moral argument put forward is based on the notion that embryos deserve protection from the moment of their conception because it is at this precise moment that a new human entity emerges, already a unique individual. Since it is deemed unacceptable to sacrifice human lives for any purpose, the argument asserts that the destruction of embryos for research purposes is no longer justifiable.
Some opponents of reproductive cloning have concerns that the technology is not yet developed enough to be safe – this is, for example, the position of the American Association for the Advancement of Science – while others emphasize that reproductive cloning could be prone to abuse (leading to the generation of humans whose organs and tissues would be harvested), and have concerns about how cloned individuals could integrate with families and with society at large.
Members of religious groups are divided. Some Christian theologians perceive the technology as usurping God's role in creation and, to the extent embryos are used, destroying a human life; others see no inconsistency between Christian tenets and cloning's positive and potentially life-saving benefits.
Legal status of human therapeutic cloning maps
Legal status of human cloning by jurisdiction
Legal status of human cloning by U.S. state
In popular culture
Science fiction has used cloning, most commonly and specifically human cloning, because it raises controversial questions of identity. Humorous fiction, such as Multiplicity (1996) and the Maxwell Smart feature The Nude Bomb (1980), has featured human cloning. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. Robin Cook's 1997 novel Chromosome 6, Michael Bay's The Island, and Nancy Farmer's 2002 novel House of the Scorpion are examples of this; Chromosome 6 also features genetic manipulation and xenotransplantation. The Star Wars saga makes use of millions of human clones to form the Grand Army of the Republic that participated in the Clone Wars. The series Orphan Black follows human clones' stories and experiences as they deal with issues and react to being the property of a chain of scientific institutions. In the 2019 horror film Us, the entirety of the United States' population is secretly cloned. Years later, these clones (known as The Tethered) reveal themselves to the world by successfully pulling off a mass genocide of their counterparts.
In the 2005 novel Never Let Me Go, Kazuo Ishiguro crafts a subtle exploration into the ethical complications of cloning humans for medical advancement and longevity.
See also
Homunculus
Hwang affair
Notes and references
Notes
References
Further reading
Araujo, Robert John, "The UN Declaration on Human Cloning: a survey and assessment of the debate," 7 The National Catholic Bioethics Quarterly 129 – 149 (2007).
Oregon Health & Science University. "Human skin cells converted into embryonic stem cells: First time human stem cells have been produced via nuclear transfer." ScienceDaily. ScienceDaily, 15 May 2013. Human skin cells converted into embryonic stem cells: First time human stem cells have been produced via nuclear transfer.
Seyyed Hassan Eslami Ardakani, Human Cloning in Catholic and Islamic Perspectives, University of Religions and Denominations, 2007
External links
"Variations and voids: the regulation of human cloning around the world" academic article by S. Pattinson & T. Caulfield
Cloning Fact Sheet
General Assembly Adopts United Nations Declaration on Human Cloning By Vote of 84-34-37
How Human Cloning Will Work
Moving Toward the Clonal Man
Should We Really Fear Reproductive Human Cloning
United Nation declares law against cloning.
Biotechnology | Human cloning | [
"Engineering",
"Biology"
] | 2,980 | [
"Cloning",
"nan",
"Genetic engineering",
"Biotechnology"
] |
14,121 | https://en.wikipedia.org/wiki/Hertz | The hertz (symbol: Hz) is the unit of frequency in the International System of Units (SI), often described as being equivalent to one event (or cycle) per second. The hertz is an SI derived unit whose formal expression in terms of SI base units is s−1, meaning that one hertz is one per second or the reciprocal of one second. It is used only in the case of periodic events. It is named after Heinrich Rudolf Hertz (1857–1894), the first person to provide conclusive proof of the existence of electromagnetic waves. For high frequencies, the unit is commonly expressed in multiples: kilohertz (kHz), megahertz (MHz), gigahertz (GHz), terahertz (THz).
Some of the unit's most common uses are in the description of periodic waveforms and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The units are sometimes also used as a representation of the energy of a photon, via the Planck relation E = hν, where E is the photon's energy, ν is its frequency, and h is the Planck constant.
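A minimal numerical check of the Planck relation quoted above. The 500 THz example frequency (green visible light) is an arbitrary choice for illustration.

```python
# E = h * nu, using the exact SI value of the Planck constant.
h = 6.62607015e-34  # Planck constant, J*s (exact by the 2019 SI definition)

def photon_energy(frequency_hz: float) -> float:
    return h * frequency_hz

print(photon_energy(5e14))  # ~3.3e-19 J for a 500 THz photon
```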
Definition
The hertz is defined as one per second for periodic events. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz." The dimension of the unit hertz is 1/time (T−1). Expressed in base SI units, the unit is the reciprocal second (1/s).
In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, ), MHz (megahertz, ), GHz (gigahertz, ) and THz (terahertz, ). One hertz (i.e. one per second) simply means "one periodic event occurs per second" (where the event being counted may be a complete cycle); means "one hundred periodic events occur per second", and so on. The unit may be applied to any periodic event—for example, a clock might be said to tick at , or a human heart might be said to beat at .
The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s−1) in general or, in the specific case of radioactivity, in becquerels. Whereas 1 Hz (one per second) specifically refers to one cycle (or periodic event) per second, 1 Bq (also one per second) specifically refers to one radionuclide event per second on average.
Even though frequency, angular velocity, angular frequency and radioactivity all have the dimension T−1, of these only frequency is expressed using the unit hertz. Thus a disc rotating at 60 revolutions per minute (rpm) is said to have an angular velocity of 2π rad/s and a frequency of rotation of 1 Hz. The correspondence between a frequency f with the unit hertz and an angular velocity ω with the unit radians per second is
ω = 2πf
and
f = ω/(2π).
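A short worked example of these relations, using the 60 rpm disc mentioned above; the variable names are chosen only for the illustration.

```python
import math

# 60 revolutions per minute -> rotation frequency in hertz and angular velocity in rad/s.
rpm = 60
f = rpm / 60             # 1.0 Hz
omega = 2 * math.pi * f  # ~6.283 rad/s, i.e. 2*pi rad/s

print(f, "Hz")
print(omega, "rad/s")
```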
History
The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1935. It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s). The term "cycles per second" was largely replaced by "hertz" by the 1970s.
In some usage, the "per second" was omitted, so that "megacycles" (Mc) was used as an abbreviation of "megacycles per second" (that is, megahertz (MHz)).
Applications
Sound and vibration
Sound is a traveling longitudinal wave, which is an oscillation of pressure. Humans perceive the frequency of a sound as its pitch. Each musical note corresponds to a particular frequency. An infant's ear is able to perceive frequencies ranging from to ; the average adult human can hear sounds between and . The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond.
Electromagnetic radiation
Electromagnetic radiation is often described by its frequency—the number of oscillations of the perpendicular electric and magnetic fields per second—expressed in hertz.
Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz), with the latter known as microwaves. Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens of terahertz (THz, infrared) to a few petahertz (PHz, ultraviolet), with the visible spectrum being 400–790 THz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as those of X-rays and gamma rays, which can be measured in exahertz (EHz).
For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see Electromagnetic spectrum.
Gravitational waves
Gravitational waves are also described in hertz. Current observations are conducted in the 30–7000 Hz range by laser interferometers like LIGO, and the nanohertz (1–1000 nHz) range by pulsar timing arrays. Future space-based detectors are planned to fill in the gap, with LISA operating from 0.1–10 mHz (with some sensitivity from 10 μHz to 100 mHz), and DECIGO in the 0.1–10 Hz range.
Computers
In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (MHz) or gigahertz (GHz). This specification refers to the frequency of the CPU's master clock signal. This signal is nominally a square wave, which is an electrical voltage that switches between low and high logic levels at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock cycles to perform a single operation, while others can perform multiple operations in a single cycle. For personal computers, CPU clock speeds have ranged from approximately 1 MHz in the late 1970s (Atari, Commodore, Apple computers) to up to several gigahertz in IBM Power microprocessors.
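The point about clock rate being a manipulable benchmark can be made concrete with a simple throughput estimate: instruction throughput depends on both the clock frequency and how many instructions complete per cycle (IPC). The figures below are invented purely for illustration.

```python
# Illustrative only: a lower-clocked core can outperform a higher-clocked one
# if it completes more instructions per cycle.
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    return clock_hz * ipc

print(instructions_per_second(3.0e9, 0.5))  # 1.5e9 -- 3 GHz, multi-cycle operations
print(instructions_per_second(2.0e9, 4.0))  # 8.0e9 -- 2 GHz, superscalar core
```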
Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range.
SI multiples
Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of massive particles, although these are not directly observable and must be inferred through other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent energy, which is proportional to the frequency by the factor of the Planck constant.
Unicode
The CJK Compatibility block in Unicode contains characters for common SI units for frequency. These are intended for compatibility with East Asian character encodings, and not for use in new documents (which would be expected to use Latin letters, e.g. "MHz").
㎐ (Hz)
㎑ (kHz)
㎒ (MHz)
㎓ (GHz)
㎔ (THz)
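A small sketch can make the compatibility relationship concrete: under NFKC normalization each of these characters decomposes into the plain Latin-letter unit symbol. The code points used below (U+3390 through U+3394) are my assumption about where these characters sit in the CJK Compatibility block.

```python
# Sketch: the squared frequency-unit characters and their Latin-letter equivalents.
# Code points U+3390-U+3394 are assumed here; verify against the Unicode charts if needed.
import unicodedata

for cp in range(0x3390, 0x3395):
    ch = chr(cp)
    # NFKC normalization replaces each compatibility character with ordinary Latin letters.
    print(f"U+{cp:04X}  {ch}  ->  {unicodedata.normalize('NFKC', ch)}")
# Expected: Hz, kHz, MHz, GHz, THz
```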
See also
Alternating current
Bandwidth (signal processing)
Electronic tuner
FLOPS
Frequency changer
Normalized frequency (signal processing)
Orders of magnitude (frequency)
Orders of magnitude (rotational speed)
Periodic function
Radian per second
Rate
Sampling rate
Notes
References
External links
SI Brochure: Unit of time (second)
National Research Council of Canada: Cesium fountain clock
National Research Council of Canada: Optical frequency standard based on a single trapped ion (archived 23 December 2013)
National Research Council of Canada: Optical frequency comb (archived 27 June 2013)
National Physical Laboratory: Time and frequency Optical atomic clocks
SI derived units
Units of frequency
Heinrich Hertz | Hertz | [
"Mathematics"
] | 1,859 | [
"Quantity",
"Units of frequency",
"Units of measurement"
] |
14,136 | https://en.wikipedia.org/wiki/Hydrophobe | In chemistry, hydrophobicity is the chemical property of a molecule (called a hydrophobe) that is seemingly repelled from a mass of water. In contrast, hydrophiles are attracted to water.
Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents. Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles. Water on hydrophobic surfaces will exhibit a high contact angle.
Examples of hydrophobic molecules include the alkanes, oils, fats, and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills, and chemical separation processes to remove non-polar substances from polar compounds.
The term hydrophobic, which comes from the Ancient Greek ὑδρόφοβος (hydróphobos), "having a fear of water", constructed from ὕδωρ (húdōr) "water" and φόβος (phóbos) "fear", is often used interchangeably with lipophilic, "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons.
Chemical background
The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, causing the water to form a clathrate-like structure around the non-polar molecules. This structure is more highly ordered than free water molecules, because the water molecules arrange themselves so as to interact as much as possible among themselves, and it therefore lowers the entropy of the water. Non-polar molecules consequently clump together, which reduces the surface area exposed to water, releases some of the ordered water, and raises the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will arrange themselves so that their corresponding interfacial area is minimal. This effect can be visualized in the phenomenon called phase separation.
Superhydrophobicity
Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angle of a water droplet on such a surface exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property.
Theory
In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas:
γ_SG = γ_SL + γ_LG cos θ
where
γ_SG = interfacial tension between the solid and gas
γ_SL = interfacial tension between the solid and liquid
γ_LG = interfacial tension between the liquid and gas
θ can be measured using a contact angle goniometer.
Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θ_W*:
cos θ_W* = r cos θ
where r is the ratio of the actual area to the projected area. Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original.
Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θ_CB*:
cos θ_CB* = φ(cos θ + 1) − 1
where φ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state.
We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the state whose relation predicts the smaller new contact angle is the one most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true:
cos θ < (φ − 1) / (r − φ)
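A minimal numerical sketch of this comparison follows, assuming the standard forms of the Wenzel and Cassie–Baxter relations quoted above; the roughness ratio, solid fraction, and intrinsic contact angle are arbitrary illustrative inputs, not values from the text.

```python
# Sketch: predicting the apparent contact angle on a rough hydrophobic surface.
# Uses the Wenzel and Cassie-Baxter relations from the text; all inputs are illustrative.
import math

theta = math.radians(110.0)   # intrinsic (Young) contact angle of the flat material
r = 1.8                       # roughness ratio: actual area / projected area
phi = 0.3                     # fraction of the solid in contact with the liquid

cos_wenzel = r * math.cos(theta)                   # cos(theta_W*)
cos_cassie = phi * (math.cos(theta) + 1.0) - 1.0   # cos(theta_CB*)

clamp = lambda x: max(-1.0, min(1.0, x))
theta_w = math.degrees(math.acos(clamp(cos_wenzel)))
theta_cb = math.degrees(math.acos(clamp(cos_cassie)))

print(f"Wenzel angle:        {theta_w:.1f} deg")    # ~128.0 deg
print(f"Cassie-Baxter angle: {theta_cb:.1f} deg")   # ~143.4 deg
# By the free-energy argument above, the state giving the smaller apparent angle is expected.
print("Expected state:", "Wenzel" if theta_w < theta_cb else "Cassie-Baxter")
```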
A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following two criteria are met: 1) contact line forces overcome body forces of unsupported droplet weight, and 2) the microstructures are tall enough to prevent the liquid that bridges microstructures from touching the base of the microstructures.
A new criterion for the switch between Wenzel and Cassie–Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which can tell whether Wenzel's model or the Cassie–Baxter model should be used for a given combination of surface roughness and energy.
Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is now measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state.
Research and development
Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF-plasma-formed superhydrophobic materials were developed, used for electrowetting, and commercialized for bio-medical applications between 1986 and 1995. Other technology and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002 comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. The larger particles were observed to protect the smaller particles from mechanical abrasion.
In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed.
Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it, many functional superhydrophobic surfaces have been prepared.
An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film.
One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. According to the study, any surface can be modified to this effect by application of a suspension of rose-like V2O5 particles, for instance with an inkjet printer. Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is also explained. UV light creates electron-hole pairs, with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V5+ to V3+. The oxygen vacancies are met by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost.
A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or corrosive environments.
Applications and potential applications
Hydrophobic concrete has been produced since the mid-20th century.
Active recent research on superhydrophobic materials might eventually lead to more industrial applications.
A simple routine of coating cotton fabric with silica or titania particles by sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic.
An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning. 99% of dirt on such a surface is easily washed away.
Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis.
In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness. Methods have been developed to measure the hydrophobicity of pharmaceutical materials.
The development of hydrophobic passive daytime radiative cooling (PDRC) surfaces, whose effectiveness at solar reflectance and thermal emittance is predicated on their cleanliness, has improved the "self-cleaning" of these surfaces. Scalable and sustainable hydrophobic PDRCs that avoid VOCs have further been developed.
See also
References
External links
What are superhydrophobic surfaces?
Chemical properties
Intermolecular forces
Surface science
Articles containing video clips | Hydrophobe | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,382 | [
"Molecular physics",
"Materials science",
"Surface science",
"Intermolecular forces",
"Condensed matter physics",
"nan"
] |
14,147 | https://en.wikipedia.org/wiki/Harmonic%20analysis | Harmonic analysis is a branch of mathematics concerned with investigating the connections between a function and its representation in frequency. The frequency representation is found by using the Fourier transform for functions on unbounded domains such as the full real line or by Fourier series for functions on bounded domains, especially periodic functions on finite intervals. Generalizing these transforms to other domains is generally called Fourier analysis, although the term is sometimes used interchangeably with harmonic analysis. Harmonic analysis has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis, spectral analysis, and neuroscience.
The term "harmonics" originated from the Ancient Greek word harmonikos, meaning "skilled in music". In physical eigenvalue problems, it began to mean waves whose frequencies are integer multiples of one another, as are the frequencies of the harmonics of music notes. Still, the term has been generalized beyond its original meaning.
Development of harmonic analysis
Historically, harmonic functions first referred to the solutions of Laplace's equation. This terminology was extended to other special functions that solved related equations, then to eigenfunctions of general elliptic operators, and nowadays harmonic functions are considered as a generalization of periodic functions in function spaces defined on manifolds, for example as solutions of general, not necessarily elliptic, partial differential equations including some boundary conditions that may imply their symmetry or periodicity.
Fourier analysis
The classical Fourier transform on Rn is still an area of ongoing research, particularly concerning Fourier transformation on more general objects such as tempered distributions. For instance, if we impose some requirements on a distribution f, we can attempt to translate these requirements into the Fourier transform of f. The Paley–Wiener theorem is an example. The Paley–Wiener theorem immediately implies that if f is a nonzero distribution of compact support (these include functions of compact support), then its Fourier transform is never compactly supported (i.e., if a signal is limited in one domain, it is unlimited in the other). This is an elementary form of an uncertainty principle in a harmonic-analysis setting.
Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis. There are four versions of the Fourier transform, dependent on the spaces that are mapped by the transformation:
Discrete/periodic–discrete/periodic: Discrete Fourier transform
Continuous/periodic–discrete/aperiodic: Fourier series
Discrete/aperiodic–continuous/periodic: Discrete-time Fourier transform
Continuous/aperiodic–continuous/aperiodic: Fourier transform
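As a minimal illustration of the first (discrete/periodic) case listed above, the sketch below evaluates the discrete Fourier transform directly from its defining sum and checks it against NumPy's FFT; the test signal and its length are arbitrary choices.

```python
# Sketch: the discrete Fourier transform computed from its defining sum,
# checked against numpy.fft.fft. The test signal is an arbitrary choice.
import numpy as np

N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * 2 * n / N) + 0.5 * np.sin(2 * np.pi * 3 * n / N)

# X[k] = sum_n x[n] * exp(-2j * pi * k * n / N)
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

assert np.allclose(X, np.fft.fft(x))
print(np.round(np.abs(X), 3))   # energy sits in bins 2 and 3 (and their mirrors 6 and 5)
```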
As the spaces mapped by the Fourier transform are, in particular, subspaces of the space of tempered distributions it can be shown that the four versions of the Fourier transform are particular cases of the Fourier transform on tempered distributions.
Abstract harmonic analysis
Abstract harmonic analysis is primarily concerned with how real or complex-valued functions (often on very general domains) can be studied using symmetries such as translations or rotations (for instance via the Fourier transform and its relatives); this field is of course related to real-variable harmonic analysis, but is perhaps closer in spirit to representation theory and functional analysis.
One of the most modern branches of harmonic analysis, having its roots in the mid-20th century, is analysis on topological groups. The core motivating ideas are the various Fourier transforms, which can be generalized to a transform of functions defined on Hausdorff locally compact topological groups.
One of the major results in the theory of functions on abelian locally compact groups is called Pontryagin duality.
Harmonic analysis studies the properties of that duality. Different generalizations of the Fourier transform attempt to extend those features to different settings, for instance, first to the case of general abelian topological groups and second to the case of non-abelian Lie groups.
Harmonic analysis is closely related to the theory of unitary group representations for general non-abelian locally compact groups. For compact groups, the Peter–Weyl theorem explains how one may get harmonics by choosing one irreducible representation out of each equivalence class of representations. This choice of harmonics enjoys some of the valuable properties of the classical Fourier transform in terms of carrying convolutions to pointwise products or otherwise showing a certain understanding of the underlying group structure. See also: Non-commutative harmonic analysis.
If the group is neither abelian nor compact, no general satisfactory theory is currently known ("satisfactory" means at least as strong as the Plancherel theorem). However, many specific cases have been analyzed, for example, SLn. In this case, representations in infinite dimensions play a crucial role.
Applied harmonic analysis
Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components. Ocean tides and vibrating strings are common and simple examples. The theoretical approach often tries to describe the system by a differential equation or system of equations to predict the essential features, including the amplitude, frequency, and phases of the oscillatory components. The specific equations depend on the field, but theories generally try to select equations that represent significant principles that are applicable.
The experimental approach is usually to acquire data that accurately quantifies the phenomenon. For example, in a study of tides, the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included. In a study on vibrating strings, it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected.
For example, the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an A note with a fundamental frequency of 55 Hz. The waveform appears oscillatory, but it is more complex than a simple sine wave, indicating the presence of additional waves. The different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the Fourier transform, shown in the lower figure. There is a prominent peak at 55 Hz, along with further peaks at 110 Hz, 165 Hz, and other frequencies corresponding to integer multiples of 55 Hz. In this case, 55 Hz is identified as the fundamental frequency of the string vibration, and the integer multiples are known as harmonics.
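A hedged sketch of this kind of analysis is shown below; a synthetic waveform with assumed component amplitudes stands in for the recorded bass-guitar signal, and the discrete Fourier transform recovers the 55 Hz fundamental and its harmonics.

```python
# Sketch: using a Fourier transform to reveal the harmonics of a 55 Hz fundamental.
# The synthetic waveform and its amplitudes are assumptions standing in for a recording.
import numpy as np

fs = 4000                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)                # two seconds of signal
signal = (1.0 * np.sin(2 * np.pi * 55 * t)       # fundamental
          + 0.6 * np.sin(2 * np.pi * 110 * t)    # second harmonic
          + 0.4 * np.sin(2 * np.pi * 165 * t))   # third harmonic

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

peak_freqs = np.sort(freqs[np.argsort(spectrum)[-3:]])   # three largest spectral peaks
print(peak_freqs)                                        # -> [ 55. 110. 165.]
```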
Other branches
Study of the eigenvalues and eigenvectors of the Laplacian on domains, manifolds, and (to a lesser extent) graphs is also considered a branch of harmonic analysis. See, e.g., hearing the shape of a drum.
Harmonic analysis on Euclidean spaces deals with properties of the Fourier transform on Rn that have no analog on general groups. For example, the fact that the Fourier transform is rotation-invariant. Decomposing the Fourier transform into its radial and spherical components leads to topics such as Bessel functions and spherical harmonics.
Harmonic analysis on tube domains is concerned with generalizing properties of Hardy spaces to higher dimensions.
Automorphic forms are generalized harmonic functions, with respect to a symmetry group. They are an old and at the same time active area of development in harmonic analysis due to their connections to the Langlands program.
Nonlinear harmonic analysis is the use of harmonic-analysis and functional-analysis tools and techniques to study nonlinear systems. This includes problems with infinitely many degrees of freedom as well as nonlinear operators and equations.
Major results
See also
Convergence of Fourier series
Fourier analysis for computing periodicity in evenly-spaced data
Harmonic (mathematics)
Least-squares spectral analysis for computing periodicity in unevenly spaced data
Spectral density estimation
Tate's thesis
References
Bibliography
Elias Stein and Guido Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, 1971.
Elias Stein with Timothy S. Murphy, Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, Princeton University Press, 1993.
Elias Stein, Topics in Harmonic Analysis Related to the Littlewood-Paley Theory, Princeton University Press, 1970.
Yitzhak Katznelson, An introduction to harmonic analysis, Third edition. Cambridge University Press, 2004. ISBN 0-521-54359-2.
Terence Tao, Fourier Transform. (Introduces the decomposition of functions into odd + even parts as a harmonic decomposition over the two-element group Z/2Z.)
Yurii I. Lyubich. Introduction to the Theory of Banach Representations of Groups. Translated from the 1985 Russian-language edition (Kharkov, Ukraine). Birkhäuser Verlag. 1988.
George W. Mackey, Harmonic analysis as the exploitation of symmetry–a historical survey, Bull. Amer. Math. Soc. 3 (1980), 543–698.
M. Bujosa, A. Bujosa and A. García-Ferrer. Mathematical Framework for Pseudo-Spectra of Linear Stochastic Difference Equations, IEEE Transactions on Signal Processing vol. 63 (2015), 6498–6509.
External links
Acoustics
Musical terminology | Harmonic analysis | [
"Physics"
] | 1,853 | [
"Classical mechanics",
"Acoustics"
] |
14,199 | https://en.wikipedia.org/wiki/Handheld%20game%20console | A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing players to carry them and play them at any time or place.
In 1976, Mattel introduced the first handheld electronic game with the release of Auto Race. Later, several companies—including Coleco and Milton Bradley—made their own single-game, lightweight table-top or handheld electronic game devices. The first commercial successful handheld console was Merlin from 1978, which sold more than 5 million units. The first handheld game console with interchangeable cartridges is the Milton Bradley Microvision in 1979.
Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989 and continues to dominate the handheld console market. The first internet-enabled handheld console and the first with a touchscreen was the Game.com released by Tiger Electronics in 1997. The Nintendo DS, released in 2004, introduced touchscreen controls and wireless online gaming to a wider audience, becoming the best-selling handheld console with over 154 million units sold worldwide.
History
Timeline
This table lists, by generation, handheld game consoles that sold over 1 million units. No handheld achieved this prior to the fourth generation of game consoles. The list does not include dedicated consoles, such as LCD games and the Tamagotchi.
Origins
The origins of handheld game consoles are found in handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by Popular Electronics magazine as "nonvideo electronic games" and "non-TV games" as distinct from devices that required use of a television screen. Handheld electronic games, in turn, find their origins in the synthesis of previous handheld and tabletop electro-mechanical devices such as Waco's Electronic Tic-Tac-Toe (1972), Cragstan's Periscope-Firing Range (1951), and the emerging optoelectronic-display-driven calculator market of the early 1970s. This synthesis happened in 1976, when "Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology."
our big success was something that I conceptualized—the first handheld game. I asked the design group to see if they could come up with a game that was electronic that was the same size as a calculator.
—Michael Katz, former marketing director, Mattel Toys.
The result was the 1976 release of Auto Race. Followed by Football later in 1977, the two games were so successful that according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games.
In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton-Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter (1981) also introduced the concept of a directional pad on handheld gaming devices, and is operated by using the thumb to manipulate the on-screen character in any of four directions.
In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad or "D-pad" for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry ever since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy.
In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed game Terror House, features two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulates 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware.
Beginnings
The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy whose monochrome graphics allowed longer battery life. By this point, rechargeable battery technology had not yet matured and so the more advanced game consoles of the time such as the Sega Game Gear and Atari Lynx did not have nearly as much success as the Game Boy.
Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters (cigarette lighter plug devices), but they had mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. During this period, the technologically superior handhelds faced strict technical limitations because batteries had very low mAh ratings; batteries with high energy density were not yet available.
Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable Lithium-Ion batteries with proprietary shapes. Other seventh-generation consoles, such as the GP2X, use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries.
Game Boy
Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny from Nintendo president Hiroshi Yamauchi, who said that the monochrome screen was too small and the processing power inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward.
Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console, and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game, and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell over 118 million units worldwide.
Atari Lynx
In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx was also very unwieldy, consumed batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and when new owner Hasbro released the rights to develop for the public domain, independent developers like Songbird managed to release new commercial games for the system every year until 2004's Winter Games.
TurboExpress
The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT.
It is the most advanced handheld of its time and can play all the TurboGrafx-16's games (which are on small, credit-card-sized media called HuCards). It has a 66 mm (2.6 in.) screen, the same as the original Game Boy, but in a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The Turbo runs the HuC6280 CPU at 1.79 or 7.16 MHz.
The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind.
Bitcorp Gamate
The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991.
Like the Sega Game Gear, it was horizontal in orientation and, like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw was its screen. Even by the standards of the day, the screen was rather difficult to use, suffering from ghosting problems similar to those commonly complained about in first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown.
Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker.
Game Gear
The Game Gear is the third color handheld console, after the Lynx and the TurboExpress; produced by Sega. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of games for the Master System. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rivals.
While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market.
Following its success with the Game Gear, Sega began development of a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such a technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor.
Watara Supervision
The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games themselves, but did receive some third party support, most notably from Sachen.
A TV adapter was available in both PAL and NTSC formats that could transfer the Supervision's black-and-white palette to 4 colors, similar in some regards to the Super Game Boy from Nintendo.
Hartung Game Master
The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than most of its contemporaries, displaying just 64x64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema.
The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia.
Late 1990s
By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems such as the Neo Geo Pocket Color and the WonderSwan Color to be developed.
Sega Nomad
The Nomad was released in October 1995 in North America only. The release was six years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price.
Game Boy Pocket
The Game Boy Pocket is a redesigned version of the original Game Boy having the same features. It was released in 1996. Notably, this variation is smaller and lighter. It comes in eight different colors: red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response-time (mostly eliminating ghosting).
The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model.
Game.com
The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac, the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot.
Game Boy Color
The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen, and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as a Game Boy's and has twice as much memory. It also had an infrared communications port for wireless linking which did not appear in later versions of the Game Boy, such as the Game Boy Advance.
The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and great installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell 118.69 million units worldwide.
The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors.
Neo Geo Pocket Color
The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan.
In 2000 following SNK's purchase by Japanese Pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure.
The system seemed well on its way to being a success in the U.S. It was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The decision, as a cost-cutting move, to ship U.S. games in cardboard boxes rather than in the hard plastic cases that Japanese and European releases were shipped in may also have hurt U.S. sales.
Wonderswan Color
The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (7 mm and 2 g) than the original WonderSwan, the color version featured 512 KB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games.
Prior to the WonderSwan's release, Nintendo had virtually a monopoly in the Japanese video game handheld market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was the fact that Bandai managed to get a deal with Square to port over the original Famicom Final Fantasy games with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage.
Early 2000s
The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP.
Game Boy Advance
In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color.
The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrificed screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time.
Along with the GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle.
As of December 31, 2007, the GBA, GBA SP, and the Game Boy Micro combined have sold 80.72 million units worldwide.
Game Park 32
The original GP32 was released in 2001 by the South Korean company Game Park a few months after the launch of the Game Boy Advance. It featured a 32-bit, 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage, and could hold up to 128 MB of anything downloaded through a USB cable from a PC. The GP32 was redesigned in 2003. A front-lit screen was added and the new version was called GP32 FLU (Front Light Unit). In summer 2004, another redesign, the GP32 BLU, was made, and added a backlit screen. This version of the handheld was planned for release outside South Korea, in Europe, and was released for example in Spain (VirginPlay was the distributor). While not a commercial success on a level with mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users.
N-Gage
Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and requirement of removing the battery to change game cartridges. The most well known of these was "sidetalking", or the act of placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco.
The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity were removed.
Second generation of N-Gage launched on April 3, 2008 in the form of a service for selected Nokia Smartphones.
Tapwave Zodiac
In 2003, Tapwave released the Zodiac. It was designed to be a PDA-handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac used a special version of Palm OS 5 (5.2T) that supported the special gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy.
Mid 2000s
Nintendo DS
The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge.
The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color.
In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan) with an updated, smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays, with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console.
On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, April 2, 2009, in Australia, April 3, 2009, in Europe, and April 5, 2009, in North America. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009, in Japan, March 5, 2010, in Europe, March 28, 2010, in North America, and April 15, 2010, in Australia.
As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined have sold 125.13 million units worldwide.
Game King
The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to be inspired by Sony's PSP. This model was also upgraded with a backlit screen, with a distracting background transparency (which can be removed by opening up the console). A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside of Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989.
As many of the games have an "old school" simplicity, the device has developed a small cult following. The Gameking's speaker is quite loud and the cartridges' sophisticated looping soundtracks (sampled from other sources) are seemingly at odds with its primitive graphics.
TimeTop made at least one additional device sometimes labeled as "GameKing", but while it seems to possess more advanced graphics, it is essentially an emulator that plays a handful of multi-carts (like the GB Station Light II). Outside of Asia (especially China), however, the GameKing remains relatively unheard of, due to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony.
PlayStation Portable
The PlayStation Portable (officially abbreviated PSP) is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005.
The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. Other distinguishing features of the console include its large viewing screen, multi-media capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet.
Gizmondo
Tiger's Gizmondo came out in the UK during March 2005 and was released in the U.S. during October 2005. It is designed to play music, movies, and games, has a camera for taking and storing photos, and has GPS functions. It also has Internet capabilities. It has a phone for sending text and multimedia messages. Email was promised at launch, but was never released before the downfall of Gizmondo, and ultimately of Tiger Telematics, in early 2006. Users who obtained the unreleased second service pack hoped to find such functionality; however, Service Pack B did not activate the e-mail functionality.
GP2X Series
The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as Neo-Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME and others.
A new version called the "F200" was released October 30, 2007, and features a touchscreen, among other changes. Followed by GP2X Wiz (2009) and GP2X Caanoo (2010).
Late 2000s
Dingoo
The Dingoo A320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music, radio, emulators (8 bit and 16 bit) and video playing capabilities with its own interface much like the PSP. There is also an onboard radio and recording program. It is currently available in two colors — white and black. Other similar products from the same manufacturer are the Dingoo A330 (also known as Geimi), Dingoo A360, Dingoo A380, and Dingoo A320E.
PSP Go
The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. It was released on October 1, 2009, in American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it is not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, the manufacturer announced that the PSP Go would be discontinued so that they may concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US.
Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. It has a 3.8" 480 × 272 LCD (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models). The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to that of Sony's mylo COM-2 internet device.
Pandora
The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open source software and to be a target for home-brew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It is developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds.
OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and after manufacturing delays, began shipping to customers on May 21, 2010.
FC-16 Go
The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly re-arranged button, power, and A/V output layout.
2010s
Nintendo 3DS
The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device is able to project stereoscopic three-dimensional effects without requirement of active shutter or passive polarized glasses, which are required by most current 3D televisions to display the 3D effect. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011, and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software except those that require the Game Boy Advance slot. It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications and information on upcoming film and game releases. On November 24, 2011, a limited edition Legend of Zelda 25th Anniversary 3DS was released that contained a unique Cosmo Black unit decorated with gold Legend of Zelda related imagery, along with a copy of The Legend of Zelda: Ocarina of Time 3D.
There are also other models including the Nintendo 2DS and the New Nintendo 3DS, the latter with a larger (XL/LL) variant, like the original Nintendo 3DS, as well as the New Nintendo 2DS XL.
Xperia Play
The Sony Ericsson Xperia PLAY is a smartphone with handheld game console features, produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. It is a horizontally sliding phone whose closed form resembles the Xperia X10, while the slider below resembles that of the PSP Go. The slider features a D-pad on the left side, the four standard PlayStation face buttons (cross, circle, square, and triangle) on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons (L and R) on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor with a Qualcomm Adreno 205 GPU, and features a 4.0-inch (100 mm) 854 × 480 display, an 8-megapixel camera, 512 MB of RAM, 8 GB of internal storage, and a micro-USB connector. It supports microSD cards, rather than the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011. On February 13, 2011, at Mobile World Congress (MWC) 2011, it was announced that the device would ship globally in March 2011, with a launch lineup of around 50 software titles.
PlayStation Vita
The PlayStation Vita is the successor to Sony's PlayStation Portable (PSP) handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012.
The handheld includes two analog sticks, a 5-inch (130 mm) OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi and optional 3G. Internally, the PS Vita features a 4 core ARM Cortex-A9 MPCore processor and a 4 core SGX543MP4+ graphics processing unit, as well as LiveArea software as its main user interface, which succeeds the XrossMediaBar.
The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store. However, PSone Classics and PS2 titles were not compatible at the time of the initial public release in Japan. The Vita's dual analog sticks are supported in selected PSP games, and graphics for PSP releases are up-scaled, with a smoothing filter to reduce pixelation.
On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019.
Razer Switchblade
The Razer Switchblade was a prototype handheld gaming device, roughly the size of a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard whose keys changed depending on the game being played, and it was also intended to include a full mouse.
It was first unveiled on January 5, 2011, at the Consumer Electronics Show (CES), where it won the Best of CES 2011 People's Voice award. No release date was ever announced, and the project appears to have been suspended indefinitely.
Nvidia Shield
Project Shield is a handheld system developed by Nvidia and announced at CES 2013. It runs Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multitouch screen with support for HD (720p) graphics. The console also allows streaming of games running on a compatible desktop or laptop PC.
Nvidia Shield Portable has received mixed reception from critics. Generally, reviewers praised the performance of the device but criticized the cost and lack of worthwhile games. Engadget's review noted the system's "extremely impressive PC gaming", but also that due to its high price, the device was "a hard sell as a portable game console", especially when compared to similar handhelds on the market. CNET's Eric Franklin states in his review of the device that "The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time." Eurogamer's comprehensive review provides a detailed account of the device and its features, concluding: "In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better."
Nintendo Switch
The Nintendo Switch is a hybrid console that can either be used in a handheld form, or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision named Nintendo Switch Lite was released on September 20, 2019.
The Switch Lite had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch.
2020s
Evercade
Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games.
Arc System Works, Atari, Data East, Interplay Entertainment, Bandai Namco Entertainment and Piko Interactive have released emulated versions of their games for the Evercade. Pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the NES, the SNES, and the Sega Genesis/Mega Drive.
Analogue Pocket
The Analogue Pocket is an FPGA-based handheld game console designed and manufactured by Analogue. It is built to play games made for handhelds of the fourth, fifth and sixth generations of video game consoles. The console features a design reminiscent of the Game Boy, with additional buttons for the supported platforms. It features a 3.5" 1600 × 1440 LTPS LCD display, an SD card port, and a link cable port compatible with Game Boy link cables. The Analogue Pocket uses an Altera Cyclone V FPGA and is compatible with original Game Boy, Game Boy Color and Game Boy Advance cartridges out of the box. With cartridge adapters (sold separately) the Analogue Pocket can also play Game Gear, Neo Geo Pocket, Neo Geo Pocket Color and Atari Lynx game cartridges. The Analogue Pocket includes an additional FPGA, allowing third-party FPGA development. It was released in December 2021.
Steam Deck
The Steam Deck is a handheld computer device developed by Valve. It runs SteamOS 3.0, a tailored distribution of Arch Linux, and includes support for Proton, a compatibility layer that allows most Microsoft Windows games to be played on the Linux-based operating system. This device and similar ones are generally referred to not as "consoles" but as handheld gaming computers, since they are effectively IBM PC–compatible like contemporary desktop and laptop gaming PCs. In terms of hardware, the Deck includes a custom AMD APU based on the Zen 2 and RDNA 2 architectures, with a four-core/eight-thread CPU and a GPU with eight compute units and a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable clock frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processing needs. Valve stated that the CPU has performance comparable to Ryzen 3000 desktop processors and the GPU performance comparable to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad-channel configuration.
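As a rough check on the quoted 1.6 TFLOPS figure, the sketch below multiplies the shader count implied by eight compute units by the peak GPU clock. The figures of 64 stream processors per RDNA 2 compute unit and 2 floating-point operations per cycle (for a fused multiply-add) are standard assumptions for the architecture, not values stated above.

```python
# Rough peak-FP32 estimate for the Steam Deck GPU (illustrative only).
# Assumptions not stated in the text: 64 stream processors per RDNA 2 CU,
# and 2 FLOPs per stream processor per cycle (fused multiply-add).
compute_units = 8
stream_processors_per_cu = 64      # assumed RDNA 2 layout
flops_per_sp_per_cycle = 2         # an FMA counts as two floating-point ops
peak_clock_ghz = 1.6               # upper end of the stated 1.0-1.6 GHz range

peak_tflops = (compute_units * stream_processors_per_cu
               * flops_per_sp_per_cycle * peak_clock_ghz) / 1000
print(f"Estimated peak: {peak_tflops:.2f} TFLOPS")  # ~1.64 TFLOPS
```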
Valve revealed the Steam Deck on July 15, 2021, with pre-orders opening the next day. The Deck was expected to ship in December 2021 to the US, Canada, the EU and the UK, but was delayed to February 2022, with other regions to follow later in 2022. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. Pre-order reservations on July 16, 2021, through the Steam storefront briefly crashed the servers due to demand. While initial shipments were still planned for February 2022, Valve told new purchasers that wider availability would come later, with the 64 GB and 256 GB NVMe models due in Q2 2022 and the 512 GB NVMe model by Q3 2022. The Steam Deck was released on February 25, 2022.
See also
List of handheld game consoles
Video game console emulator
Handheld electronic game
Handheld television
Video games and Linux
Cloud gaming
Mobile game
References
Video game terminology
Handheld game consoles | Handheld game console | ["Technology"] | 9,218 | ["Computing terminology", "Video game terminology"] |
14,207 | https://en.wikipedia.org/wiki/Hematite | Hematite, also spelled haematite, is a common iron oxide compound with the formula Fe2O3 and is widely found in rocks and soils. Hematite crystals belong to the rhombohedral lattice system; this form is designated the alpha polymorph of Fe2O3. It has the same crystal structure as corundum (Al2O3) and ilmenite (FeTiO3), and with ilmenite it forms a complete solid solution at high temperatures.
Hematite occurs naturally in black to steel or silver-gray, brown to reddish-brown, or red colors. It is mined as an important ore mineral of iron. It is electrically conductive. Hematite varieties include kidney ore, martite (pseudomorphs after magnetite), iron rose and specularite (specular hematite). While these forms vary, they all have a rust-red streak. Hematite is not only harder than pure iron, but also much more brittle. The term kidney ore may be broadly used to describe botryoidal, mammillary, or reniform hematite. Maghemite is a polymorph of hematite (γ-Fe2O3) with the same chemical formula, but with a spinel structure like magnetite.
Large deposits of hematite are found in banded iron formations. Gray hematite is typically found in places that have still, standing water, or mineral hot springs, such as those in Yellowstone National Park in North America. The mineral may precipitate in the water and collect in layers at the bottom of the lake, spring, or other standing water. Hematite can also occur in the absence of water, usually as the result of volcanic activity.
Clay-sized hematite crystals also may occur as a secondary mineral formed by weathering processes in soil, and along with other iron oxides or oxyhydroxides such as goethite, which is responsible for the red color of many tropical, ancient, or otherwise highly weathered soils.
Etymology and history
The name hematite is derived from the Greek word for blood, haima, due to the red coloration found in some varieties of hematite. Because of this coloration, hematite is often used as a pigment. The English name of the stone is derived from Middle French hématite pierre, which was taken from Latin lapis haematites in the 15th century, which in turn originated from Ancient Greek haimatitēs lithos ("blood-red stone").
Ochre is a clay that is colored by varying amounts of hematite, between 20% and 70%. Red ochre contains unhydrated hematite, whereas yellow ochre contains hydrated hematite (Fe2O3 · H2O). The principal use of ochre is for tinting with a permanent color.
Use of the red chalk of this iron-oxide mineral in writing, drawing, and decoration is among the earliest in human history. To date, the earliest known human use of the powdery mineral is 164,000 years ago by the inhabitants of the Pinnacle Point caves in what now is South Africa, possibly for social purposes. Hematite residues are also found in graves from 80,000 years ago. Near Rydno in Poland and Lovas in Hungary red chalk mines have been found that are from 5000 BC, belonging to the Linear Pottery culture at the Upper Rhine.
Rich deposits of hematite have been found on the island of Elba that have been mined since the time of the Etruscans.
Underground hematite mining is classified as a carcinogenic hazard to humans.
Magnetism
Hematite shows only a very feeble response to a magnetic field. Unlike magnetite, it is not noticeably attracted to an ordinary magnet. Hematite is an antiferromagnetic material below the Morin transition, and a canted antiferromagnet (weakly ferromagnetic) above the Morin transition and below its Néel temperature, above which it is paramagnetic.
The magnetic structure of α-hematite was the subject of considerable discussion and debate during the 1950s, as it appeared to be ferromagnetic with a high Curie temperature but an extremely small magnetic moment (0.002 Bohr magnetons). Adding to the surprise was a transition, on cooling, to a phase with no net magnetic moment. It was shown that the system is essentially antiferromagnetic, but that the low symmetry of the cation sites allows spin–orbit coupling to cause canting of the moments when they are in the plane perpendicular to the c axis. The disappearance of the moment on further cooling is caused by a change in the anisotropy which causes the moments to align along the c axis; in this configuration, spin canting does not reduce the energy. The magnetic properties of bulk hematite differ from those of its nanoscale counterparts. For example, the Morin transition temperature of hematite decreases with decreasing particle size. The suppression of this transition has been observed in hematite nanoparticles and is attributed to the presence of impurities, water molecules and defects in the crystal lattice. Hematite is part of a complex solid-solution oxyhydroxide system having various contents of H2O (water), hydroxyl groups and vacancy substitutions that affect the mineral's magnetic and crystal-chemical properties. Two other end-members are referred to as protohematite and hydrohematite.
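As an aside on the mechanism described above, the canting of antiferromagnetically coupled moments is conventionally written as a Dzyaloshinskii–Moriya term added to the exchange energy, with the small canting angle set by the ratio of the two couplings. The notation below is a generic textbook sketch, not a formula taken from the sources cited in this article.

```latex
% Generic spin Hamiltonian for canted antiferromagnetism (textbook form)
% J : antiferromagnetic exchange constant
% D : Dzyaloshinskii-Moriya vector permitted by the low site symmetry
\begin{equation}
  E = J\,\mathbf{S}_1\cdot\mathbf{S}_2
      - \mathbf{D}\cdot(\mathbf{S}_1\times\mathbf{S}_2),
  \qquad
  \varphi_{\text{cant}} \approx \frac{|\mathbf{D}|}{2J}
\end{equation}
```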
Enhanced magnetic coercivities for hematite have been achieved by dry-heating a two-line ferrihydrite precursor prepared from solution. The resulting hematite exhibited magnetic coercivity values that depended on the annealing temperature. The origin of these high coercivity values has been interpreted as a consequence of the subparticle structure induced by the different particle and crystallite growth rates at increasing annealing temperature. These differences in growth rates translate into a progressive development of a subparticle structure at the nanoscale. At lower temperatures (350–600 °C), single particles crystallize, whereas at higher temperatures (600–1000 °C), the growth of crystalline aggregates with a subparticle structure is favored.
Mine tailings
Hematite is present in the waste tailings of iron mines. A recently developed process, magnetation, uses magnets to glean waste hematite from old mine tailings in Minnesota's vast Mesabi Range iron district. Falu red is a pigment used in traditional Swedish house paints. Originally, it was made from tailings of the Falu mine.
Mars
The spectral signature of hematite was seen on the planet Mars by the infrared spectrometer on the NASA Mars Global Surveyor and 2001 Mars Odyssey spacecraft in orbit around Mars. The mineral was seen in abundance at two sites on the planet, the Terra Meridiani site, near the Martian equator at 0° longitude, and the Aram Chaos site near the Valles Marineris. Several other sites also showed hematite, such as Aureum Chaos. Because terrestrial hematite is typically a mineral formed in aqueous environments or by aqueous alteration, this detection was scientifically interesting enough that the second of the two Mars Exploration Rovers was sent to a site in the Terra Meridiani region designated Meridiani Planum. In-situ investigations by the Opportunity rover showed a significant amount of hematite, much of it in the form of small "Martian spherules" that were informally named "blueberries" by the science team. Analysis indicates that these spherules are apparently concretions formed from a water solution. "Knowing just how the hematite on Mars was formed will help us characterize the past environment and determine whether that environment was favorable for life".
Jewelry
Hematite is often shaped into beads, tumbling stones, and other jewellery components. Hematite was once used as mourning jewelry. Certain types of hematite- or iron-oxide-rich clay, especially Armenian bole, have been used in gilding. Hematite is also used in art such as in the creation of intaglio engraved gems. Hematine is a synthetic material sold as magnetic hematite.
Pigment
Hematite has been sourced to make pigments since earlier origins of human pictorial depictions, such as on cave linings and other surfaces, and has been employed continually in artwork through the eras. In Roman times, the pigment obtained by finely grinding hematite was known as sil atticum. Other names for the mineral when used in painting include colcotar and caput mortuum. In Spanish, it is called almagre or almagra, from the Arabic al-maghrah, red earth, which passed into English and Portuguese. Other ancient names for the pigment include ochra hispanica, sil atticum antiquorum, and Spanish brown. It forms the basis for red, purple, and brown iron-oxide pigments, as well as being an important component of ochre, sienna, and umber pigments. The main producer of hematite for the pigment industry is India, followed distantly by Spain.
Industrial purposes
As mentioned earlier, hematite is an important iron ore. Its physical properties are also exploited in medical equipment, shipping, and coal production. Because of its high density and its effectiveness as a barrier against X-rays, it is often incorporated into radiation shielding. As with other iron ores, it is often used in ship ballast because of its density and low cost. In the coal industry, it can be used to prepare a high-specific-density suspension that helps separate coal powder from impurities.
Gallery
See also
Mill scale
Mineral redox buffer
Wüstite
References
External links
MineralData.org
Oxide minerals
Iron(III) minerals
Iron oxide pigments
Hematite group
Trigonal minerals
Minerals in space group 167
Iron ores
Glances
Magnetic minerals
Jewellery components
Symbols of Alabama | Hematite | ["Technology"] | 2,066 | ["Jewellery components", "Components"] |
14,208 | https://en.wikipedia.org/wiki/Holocene%20extinction | The Holocene extinction, also referred to as the Anthropocene extinction, is an ongoing extinction event caused by human activities during the Holocene epoch. This extinction event spans numerous families of plants and animals, including mammals, birds, reptiles, amphibians, fish, and invertebrates, impacting both terrestrial and marine species. Widespread degradation of biodiversity hotspots such as coral reefs and rainforests has exacerbated the crisis. Many of these extinctions are undocumented, as the species are often undiscovered before their extinctions.
Current extinction rates are estimated at 100 to 1,000 times higher than natural background extinction rates and are accelerating. Over the past 100–200 years, biodiversity loss has reached such alarming levels that some conservation biologists now believe human activities have triggered a mass extinction, or are on the cusp of doing so. As such, after the "Big Five" mass extinctions, the Holocene extinction event has been referred to as the sixth mass extinction. However, given the recent recognition of the Capitanian mass extinction, the term seventh mass extinction has also been proposed.
The Holocene extinction was preceded by the extinction of most large (megafaunal) animals during the Late Pleistocene, a decline attributed in part to human hunting. The prevailing theory is that human overhunting, coinciding with existing stress conditions, likely contributed to this decline. Examples from regions such as New Zealand, Madagascar, and Hawaii have shown how human colonization and habitat destruction have led to significant biodiversity losses. While debates persist about the exact role of human predation and habitat alteration, certain extinctions have been directly linked to these activities. Additionally, climate shifts at the end of the Pleistocene likely compounded these effects.
Over the course of the Late Holocene, human settlement of the previously uninhabited Pacific islands led to extinctions of hundreds of bird species, peaking around 1300 AD. Recent estimates suggest that roughly 12% of avian species have been lost to human activities over the last 126,000 years—double earlier estimates.
In the 20th century, the human population quadrupled, and the global economy grew twenty-five-fold. This period, often called the Great Acceleration, has intensified species' extinction. Humanity has become an unprecedented "global superpredator", preying on adult apex predators, invading habitats of other species, and disrupting food webs.
The Holocene extinction continues into the 21st century, driven by anthropogenic global warming, human population growth, and increasing consumption—particularly among affluent societies. Factors such as rising meat production, deforestation, and the destruction of critical habitats compound these issues. Other drivers include overexploitation of natural resources, pollution, and climate change-induced shifts in ecosystems.
Major extinction events during this period have been recorded across all continents, including Africa, Asia, Europe, Australia, North and South America, and various islands. The cumulative effects of deforestation, overfishing, ocean acidification, and wetland destruction have further destabilized ecosystems. Decline in amphibian populations, in particular, serves as an early indicator of broader ecological collapse.
Despite this grim outlook, there are efforts to mitigate biodiversity loss. Conservation initiatives, international treaties, and sustainable practices aim to address this crisis. However, without significant changes in global policies and individual behaviors, the Holocene extinction threatens to irreversibly alter the planet's ecosystems and the services they provide.
Background
Mass extinctions are characterized by the loss of at least 75% of species within a geologically short period of time (i.e., less than 2 million years). The Holocene extinction is also known as the "sixth extinction", as it is possibly the sixth mass extinction event, after the Ordovician–Silurian extinction events, the Late Devonian extinction, the Permian–Triassic extinction event, the Triassic–Jurassic extinction event, and the Cretaceous–Paleogene extinction event. If the Capitanian extinction event is included among the first-order mass extinctions, the Holocene extinction would correspondingly be known as the "seventh extinction". The Holocene is the current geological epoch.
Overview
The precise timing of the Holocene extinction event remains debated, with no clear consensus on when it began or whether it should be considered distinct from the Quaternary extinction event. However, most scientists agree that human activities are the primary driver of the Holocene extinction. A 1998 survey conducted by the American Museum of Natural History found that 70% of biologists acknowledged an ongoing anthropogenic extinction event. Some researchers suggested that the activities of earlier archaic humans may have contributed to earlier extinctions, especially in Australia, New Zealand, and Madagascar. Even modest hunting pressure, combined with the vulnerability of large animals on isolated islands, is thought to have been enough to wipe out entire species. Only in the more recent stages of the Holocene have plants suffered extensive losses, which are also linked to human activities such as deforestation and land conversion.
Extinction rate
The contemporary rate of extinction is estimated at 100 to 1,000 times higher than the natural background extinction rate—the typical rate of species loss through natural evolutionary processes. One estimation suggested the rate could be as high as 10,000 times the background extinction rate, though this figure remains controversial. Theoretical ecologist Stuart Pimm has noted that the extinction rate for plants alone is 100 times higher than normal.
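To put these multipliers in absolute terms, the sketch below converts a background rate into species lost per year. The round figures of about 1 extinction per million species-years for the background rate and roughly 2 million described species are common illustrative assumptions, not values taken from the studies cited above.

```python
# Illustrative conversion of extinction-rate multipliers into species per year.
# Assumed round numbers (not from the text): a background rate of about
# 1 extinction per million species-years (E/MSY) and ~2 million described species.
background_rate_emsy = 1.0          # extinctions per million species-years
described_species = 2_000_000

background_per_year = background_rate_emsy * described_species / 1_000_000
for multiplier in (100, 1_000, 10_000):
    print(f"{multiplier:>6}x background -> ~{background_per_year * multiplier:,.0f} species/year")
```

Even at the low end of the quoted range, this illustrative arithmetic corresponds to hundreds of extinctions per year among described species alone.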
While some argue that the current extinction rates have not yet reached the catastrophic levels of past mass extinctions, Barnosky et al. (2011) and Hull et al. (2015) point out that extinction rates during past mass extinctions cannot be fully determined due to gaps in the fossil record. However, they agree that the ongoing biodiversity loss is nonetheless unprecedented. Estimates of species lost per year vary widely—from 1.5 to 40,000 species—but all indicate that human activity is driving this crisis.
In The Future of Life (2002), biologist Edward Osborne Wilson predicted that, if current trends continue, half of Earth's higher lifeforms could be extinct by 2100. More recent studies further support this view. A 2015 study on Hawaiian snails suggested that up to 7% of Earth's species may already be extinct. A 2021 study also found that only around 3% of the planet's terrestrial surface remains ecologically and faunally intact—areas that still have healthy populations of native species and minimal human footprint. A 2022 study suggests that if global warming continues, between 13% and 27% of terrestrial vertebrate species could be driven to extinction by 2100, with habitat destruction and co-extinctions accounting for the remainder of those losses.
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations IPBES, estimated that about one million species are currently at risk of extinction within decades due to human activities. According to the report, which resulted from one of the most comprehensive studies of the health of the planet ever conducted, organized human existence is jeopardised by the increasingly rapid destruction of the systems that support life on Earth. Moreover, the 2021 Economics of Biodiversity review, published by the UK government, asserts that "biodiversity is declining faster than at any time in human history." According to a 2022 study published in Frontiers in Ecology and the Environment, a survey of more than 3,000 experts suggests that the extent of the mass extinction might be greater than previously thought, and estimates that roughly 30% of species "have been globally threatened or driven extinct since the year 1500." In a 2022 report, IPBES listed unsustainable fishing, hunting, and logging as some of the primary drivers of the global extinction crisis.
A 2023 study published in PLOS One shows that around two million species are threatened with extinction, double the estimate put forward in the 2019 IPBES report. According to a 2023 study published in PNAS, at least 73 genera of animals have gone extinct since 1500. If humans had never existed, the study estimates it would have taken 18,000 years for the same genera to have disappeared naturally, leading the authors to conclude that "the current generic extinction rates are 35 times higher than expected background rates prevailing in the last million years under the absence of human impacts" and that human civilization is causing the "rapid mutilation of the tree of life."
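The 35-fold figure quoted from the PNAS study can be reproduced approximately from the numbers above: 73 genera lost in the roughly five centuries since 1500, against the 18,000 years the authors estimate the same losses would have taken at background rates. A minimal sketch, assuming the study's observation window runs from 1500 to 2023 (the end year is not stated in the text):

```python
# Reproducing the ~35x figure from the quantities quoted in the text.
genera_lost = 73
years_observed = 2023 - 1500          # ~523 years since the year-1500 cut-off (assumed end year)
years_expected_background = 18_000    # time the study says background rates would require

observed_rate = genera_lost / years_observed            # genera lost per year, observed
background_rate = genera_lost / years_expected_background
print(f"Rate ratio: {observed_rate / background_rate:.0f}x")  # ~34x, close to the reported 35x
```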
Attribution
There is widespread consensus among scientists that human activities—especially habitat destruction, resource consumption, and the elimination of species—are the main drivers of the current extinction crisis. Rising extinction rates among mammals, birds, reptiles, amphibians, and other groups have led many scientists to declare a global biodiversity crisis.
Scientific debate
The description of recent extinction as a mass extinction has been debated among scientists. Stuart Pimm, for example, asserts that the sixth mass extinction "is something that hasn't happened yet – we are on the edge of it." Several studies posit that the Earth has entered a sixth mass extinction event, including a 2015 paper by Barnosky et al. and a November 2017 statement titled "World Scientists' Warning to Humanity: A Second Notice", led by eight authors and signed by 15,364 scientists from 184 countries which asserted, among other things, that "we have unleashed a mass extinction event, the sixth in roughly 540 million years, wherein many current life forms could be extirpated or at least committed to extinction by the end of this century." The World Wide Fund for Nature's 2020 Living Planet Report says that wildlife populations have declined by 68% since 1970 as a result of overconsumption, population growth, and intensive farming, which is further evidence that humans have unleashed a sixth mass extinction event; however, this finding has been disputed by one 2020 study, which posits that this major decline was primarily driven by a few extreme outlier populations, and that when these outliers are removed, the trend shifts to that of a decline between the 1980s and 2000s, but a roughly positive trend after 2000. A 2021 report in Frontiers in Conservation Science which cites both of the aforementioned studies, says "population sizes of vertebrate species that have been monitored across years have declined by an average of 68% over the last five decades, with certain population clusters in extreme decline, thus presaging the imminent extinction of their species," and asserts "that we are already on the path of a sixth major extinction is now scientifically undeniable." A January 2022 review article published in Biological Reviews builds upon previous studies documenting biodiversity decline to assert that a sixth mass extinction event caused by anthropogenic activity is currently under way. A December 2022 study published in Science Advances states that "the planet has entered the sixth mass extinction" and warns that current anthropogenic trends, particularly regarding climate and land-use changes, could result in the loss of more than a tenth of plant and animal species by the end of the century. 12% of all bird species are threatened with extinction. A 2023 study published in Biological Reviews found that, of 70,000 monitored species, some 48% are experiencing population declines from anthropogenic pressures, whereas only 3% have increasing populations.
According to the UNDP's 2020 Human Development Report, The Next Frontier: Human Development and the Anthropocene:
The 2022 Living Planet Report found that vertebrate wildlife populations have plummeted by an average of almost 70% since 1970, with agriculture and fishing being the primary drivers of this decline.
Some scientists, including Rodolfo Dirzo and Paul R. Ehrlich, contend that the sixth mass extinction is largely unknown to most people globally and is also misunderstood by many in the scientific community. They say it is not the disappearance of species, which gets the most attention, that is at the heart of the crisis, but "the existential threat of myriad population extinctions."
Anthropocene
The abundance of species extinctions considered anthropogenic, or due to human activity, has sometimes (especially when referring to hypothesized future events) been collectively called the "Anthropocene extinction". Anthropocene is a term introduced in 2000. Some now postulate that a new geological epoch has begun, with the most abrupt and widespread extinction of species since the Cretaceous–Paleogene extinction event 66 million years ago.
The term "anthropocene" is being used more frequently by scientists, and some commentators may refer to the current and projected future extinctions as part of a longer Holocene extinction. The Holocene–Anthropocene boundary is contested, with some commentators asserting significant human influence on climate for much of what is normally regarded as the Holocene Epoch. Some experts mark the transition from the Holocene to the Anthropocene at the onset of the industrial revolution. They also note that the official use of this term in the near future will heavily rely on its usefulness, especially for Earth scientists studying late Holocene periods.
It has been suggested that human activity has made the period starting from the mid-20th century different enough from the rest of the Holocene to consider it a new geological epoch, known as the Anthropocene; the term was considered for inclusion in the timeline of Earth's history by the International Commission on Stratigraphy in 2016, but the proposal was rejected in 2024. To treat the Holocene as an extinction event, scientists must determine exactly when anthropogenic greenhouse gas emissions began to measurably alter natural atmospheric levels on a global scale, and when these alterations caused changes to global climate. Using chemical proxies from Antarctic ice cores, researchers have estimated the fluctuations of carbon dioxide (CO2) and methane (CH4) in the Earth's atmosphere during the Late Pleistocene and Holocene epochs. These estimates generally indicate that the peak of the Anthropocene occurred within the previous two centuries, typically beginning with the Industrial Revolution, when the highest greenhouse gas levels were recorded.
Human ecology
A 2015 article in Science suggested that humans are unique in ecology as an unprecedented "global superpredator", regularly preying on large numbers of fully grown terrestrial and marine apex predators, and with a great deal of influence over food webs and climatic systems worldwide. Although significant debate exists as to how much human predation and indirect effects contributed to prehistoric extinctions, certain population crashes have been directly correlated with human arrival. Human activity has been the main cause of mammalian extinctions since the Late Pleistocene. A 2018 study published in PNAS found that since the dawn of human civilization, the biomass of wild mammals has decreased by 83%. The biomass decrease is 80% for marine mammals, 50% for plants, and 15% for fish. Currently, livestock make up 60% of the biomass of all mammals on Earth, followed by humans (36%) and wild mammals (4%). As for birds, 70% are domesticated, such as poultry, whereas only 30% are wild.
Historic extinction
Human activity
Activities contributing to extinctions
Extinction of animals, plants, and other organisms caused by human actions may go as far back as the late Pleistocene, over 12,000 years ago. There is a correlation between megafaunal extinction and the arrival of humans. Megafauna that are still extant also suffered severe declines that were highly correlated with human expansion and activity. Over the past 125,000 years, the average body size of wildlife has fallen by 14% as actions by prehistoric humans eradicated megafauna on all continents with the exception of Africa. Over the past 130,000 years, avian functional diversity has declined precipitously and disproportionately relative to phylogenetic diversity losses.
Human civilization was founded on and grew from agriculture. The more land used for farming, the greater the population a civilization could sustain, and subsequent popularization of farming led to widespread habitat conversion.
Habitat destruction by humans, which replaces the original local ecosystems, is a major driver of extinction. The sustained conversion of biodiversity-rich forests and wetlands into poorer fields and pastures (with lesser carrying capacity for wild species) over the last 10,000 years has considerably reduced the Earth's carrying capacity for wild birds and mammals, among other organisms, in both population size and species count.
Other, related human causes of the extinction event include deforestation, hunting, pollution, the introduction in various regions of non-native species, and the widespread transmission of infectious diseases spread through livestock and crops.
Agriculture and climate change
Recent investigations into the practice of landscape burning during the Neolithic Revolution have a major implication for the current debate about the timing of the Anthropocene and the role that humans may have played in the production of greenhouse gases prior to the Industrial Revolution. Studies of early hunter-gatherers raise questions about the current use of population size or density as a proxy for the amount of land clearance and anthropogenic burning that took place in pre-industrial times. Scientists have questioned the correlation between population size and early territorial alterations. Ruddiman and Ellis's 2009 research paper makes the case that early farmers used more land per capita than growers later in the Holocene, who intensified their labor to produce more food per unit of area (and thus per laborer); it argues that rice agriculture practiced thousands of years ago by relatively small populations created significant environmental impacts through large-scale deforestation.
While a number of human-derived factors are recognized as contributing to rising atmospheric concentrations of CH4 (methane) and CO2 (carbon dioxide), deforestation and land-clearance practices associated with agricultural development may have contributed most to these concentrations globally in earlier millennia. Scientists employing a variety of archaeological and paleoecological data argue that the processes contributing to substantial human modification of the environment spanned many thousands of years on a global scale, and thus did not originate as late as the Industrial Revolution. Palaeoclimatologist William Ruddiman has argued that in the early Holocene, 11,000 years ago, atmospheric carbon dioxide and methane levels fluctuated in a pattern different from that of the preceding Pleistocene epoch: whereas CO2 levels had declined significantly during the last ice age, in the Holocene they rose dramatically around 8,000 years ago, with CH4 levels following about 3,000 years later. He interpreted this reversal as evidence that the growth of human agriculture during the Holocene was responsible for the rise of these greenhouse gases in the atmosphere.
Climate change
One of the main theories explaining early Holocene extinctions is historic climate change. The climate change theory suggests that a change in climate near the end of the late Pleistocene stressed the megafauna to the point of extinction. Some scientists favor abrupt climate change as the catalyst for the extinction of the megafauna at the end of the Pleistocene, though most also believe increased hunting by early modern humans played a part, and others suggest that the two factors interacted. In the Americas, a controversial explanation for the shift in climate is presented under the Younger Dryas impact hypothesis, which states that the impact of comets cooled global temperatures. Despite its popularity among nonscientists, this hypothesis has never been accepted by relevant experts, who dismiss it as a fringe theory.
Contemporary extinction
History
Contemporary human overpopulation and continued population growth, along with per-capita consumption growth, prominently in the past two centuries, are regarded as the underlying causes of extinction. Inger Andersen, the executive director of the United Nations Environment Programme, stated that "we need to understand that the more people there are, the more we put the Earth under heavy pressure. As far as biodiversity is concerned, we are at war with nature."
Some scholars assert that the emergence of capitalism as the dominant economic system has accelerated ecological exploitation and destruction, and has also exacerbated mass species extinction. CUNY professor David Harvey, for example, posits that the neoliberal era "happens to be the era of the fastest mass extinction of species in the Earth's recent history". Ecologist William E. Rees concludes that the "neoliberal paradigm contributes significantly to planetary unraveling" by treating the economy and the ecosphere as totally separate systems, and by neglecting the latter. Major lobbying organizations representing corporations in the agriculture, fisheries, forestry and paper, mining, and oil and gas industries, including the United States Chamber of Commerce, have been pushing back against legislation that could address the extinction crisis. A 2022 report by the climate think tank InfluenceMap stated that "although industry associations, especially in the US, appear reluctant to discuss the biodiversity crisis, they are clearly engaged on a wide range of policies with significant impacts on biodiversity loss."
The loss of animal species from ecological communities, defaunation, is primarily driven by human activity. This has resulted in empty forests, ecological communities depleted of large vertebrates. This is not to be confused with extinction, as it includes both the disappearance of species and declines in abundance. Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil in 1988 in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon.
Big cat populations have severely declined over the last half-century and could face extinction in the following decades. According to 2011 IUCN estimates: lions are down to 25,000, from 450,000; leopards are down to 50,000, from 750,000; cheetahs are down to 12,000, from 45,000; tigers are down to 3,000 in the wild, from 50,000. A December 2016 study by the Zoological Society of London, Panthera Corporation and Wildlife Conservation Society showed that cheetahs are far closer to extinction than previously thought, with only 7,100 remaining in the wild, existing within only 9% of their historic range. Human pressures are to blame for the cheetah population crash, including prey loss due to overhunting by people, retaliatory killing from farmers, habitat loss and the illegal wildlife trade. Populations of brown bears have experienced similar population decline.
The term pollinator decline refers to the reduction in abundance of insect and other animal pollinators in many ecosystems worldwide beginning at the end of the twentieth century, and continuing into the present day. Pollinators, which are necessary for 75% of food crops, are declining globally in both abundance and diversity. A 2017 study led by Radboud University's Hans de Kroon indicated that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Participating researcher Dave Goulson of Sussex University stated that their study suggested that humans are making large parts of the planet uninhabitable for wildlife. Goulson characterized the situation as an approaching "ecological Armageddon", adding that "if we lose the insects then everything is going to collapse." A 2019 study found that over 40% of insect species are threatened with extinction. The most significant drivers in the decline of insect populations are associated with intensive farming practices, along with pesticide use and climate change. The world's insect population decreases by around 1 to 2% per year.
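Because yearly percentage losses compound, even the 1 to 2% annual decline quoted above adds up substantially over a few decades. The arithmetic below is purely illustrative and uses only that quoted range:

```python
# How a constant annual percentage decline compounds over time (illustrative only).
for annual_decline in (0.01, 0.02):          # the 1-2% per year figure quoted above
    remaining_after_25y = (1 - annual_decline) ** 25
    print(f"{annual_decline:.0%}/year -> {1 - remaining_after_25y:.0%} lost after 25 years")
# Output: roughly 22% lost at 1%/year and 40% lost at 2%/year over 25 years.
```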
Various species are predicted to become extinct in the near future, among them some species of rhinoceros, primates, and pangolins. Others, including several species of giraffe, are considered "vulnerable" and are experiencing significant population declines from anthropogenic impacts including hunting, deforestation and conflict. Hunting alone threatens bird and mammalian populations around the world. The direct killing of megafauna for meat and body parts is the primary driver of their destruction, with 70% of the 362 megafauna species in decline as of 2019. Mammals in particular have suffered such severe losses as the result of human activity (mainly during the Quaternary extinction event, but partly during the Holocene) that it could take several million years for them to recover. Contemporary assessments have discovered that roughly 41% of amphibians, 25% of mammals, 21% of reptiles and 14% of birds are threatened with extinction, which could disrupt ecosystems on a global scale and eliminate billions of years of phylogenetic diversity. 189 countries, which are signatory to the Convention on Biological Diversity (Rio Accord), have committed to preparing a Biodiversity Action Plan, a first step at identifying specific endangered species and habitats, country by country.
A 2023 study published in Current Biology concluded that current biodiversity loss rates could reach a tipping point and inevitably trigger a total ecosystem collapse.
Recent extinction
Recent extinctions are more directly attributable to human influences, whereas prehistoric extinctions can be attributed to other factors. The International Union for Conservation of Nature (IUCN) characterizes 'recent' extinctions as those that have occurred past the cut-off point of 1500; at least 875 plant and animal species went extinct between 1500 and 2009. Some species, such as Père David's deer and the Hawaiian crow, are extinct in the wild and survive solely in captive populations. Other populations are only locally extinct (extirpated), still existing elsewhere but reduced in distribution, as with the extinction of gray whales in the Atlantic and of the leatherback sea turtle in Malaysia.
Since the Late Pleistocene, humans (together with other factors) have been rapidly driving the largest vertebrate animals towards extinction, and in the process interrupting a 66-million-year-old feature of ecosystems, the relationship between diet and body mass, which researchers suggest could have unpredictable consequences. A 2019 study published in Nature Communications found that rapid biodiversity loss is impacting larger mammals and birds to a much greater extent than smaller ones, with the body mass of such animals expected to shrink by 25% over the next century. Another 2019 study published in Biology Letters found that extinction rates are perhaps much higher than previously estimated, in particular for bird species.
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services lists the primary causes of contemporary extinctions in descending order: (1) changes in land and sea use (primarily agriculture and overfishing respectively); (2) direct exploitation of organisms such as hunting; (3) anthropogenic climate change; (4) pollution and (5) invasive alien species spread by human trade. This report, along with the 2020 Living Planet Report by the WWF, both project that climate change will be the leading cause in the next several decades.
A June 2020 study published in PNAS posits that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible" and that its acceleration "is certain because of the still fast growth in human numbers and consumption rates." The study found that more than 500 vertebrate species are poised to be lost in the next two decades.
Habitat destruction
Humans both create and destroy crop cultivar and domesticated animal varieties. Advances in transportation and industrial farming has led to monoculture and the extinction of many cultivars. The use of certain plants and animals for food has also resulted in their extinction, including silphium and the passenger pigeon. It was estimated in 2012 that 13% of Earth's ice-free land surface is used as row-crop agricultural sites, 26% used as pastures, and 4% urban-industrial areas.
In March 2019, Nature Climate Change published a study by ecologists from Yale University, who found that over the next half century, human land use will reduce the habitats of 1,700 species by up to 50%, pushing them closer to extinction. That same month PLOS Biology published a similar study drawing on work at the University of Queensland, which found that "more than 1,200 species globally face threats to their survival in more than 90% of their habitat and will almost certainly face extinction without conservation intervention".
Since 1970, the populations of migratory freshwater fish have declined by 76%, according to research published by the Zoological Society of London in July 2020. Overall, around one in three freshwater fish species are threatened with extinction due to human-driven habitat degradation and overfishing.
Some scientists and academics assert that industrial agriculture and the growing demand for meat is contributing to significant global biodiversity loss as this is a significant driver of deforestation and habitat destruction; species-rich habitats, such as the Amazon region and Indonesia being converted to agriculture. A 2017 study by the World Wildlife Fund (WWF) found that 60% of biodiversity loss can be attributed to the vast scale of feed crop cultivation required to rear tens of billions of farm animals. Moreover, a 2006 report by the Food and Agriculture Organization (FAO) of the United Nations, Livestock's Long Shadow, also found that the livestock sector is a "leading player" in biodiversity loss. More recently, in 2019, the IPBES Global Assessment Report on Biodiversity and Ecosystem Services attributed much of this ecological destruction to agriculture and fishing, with the meat and dairy industries having a very significant impact. Since the 1970s food production has soared to feed a growing human population and bolster economic growth, but at a huge price to the environment and other species. The report says some 25% of the Earth's ice-free land is used for cattle grazing. A 2020 study published in Nature Communications warned that human impacts from housing, industrial agriculture and in particular meat consumption are wiping out a combined 50 billion years of Earth's evolutionary history (defined as phylogenetic diversity) and driving to extinction some of the "most unique animals on the planet," among them the Aye-aye lemur, the Chinese crocodile lizard and the pangolin. Said lead author Rikki Gumbs:
Urbanization has also been cited as a significant driver of biodiversity loss, particularly of plant life. A 1999 study of local plant extirpations in Great Britain found that urbanization contributed at least as much to local plant extinction as did agriculture.
Climate change
Climate change is expected to be a major driver of extinctions from the 21st century onward. Rising levels of carbon dioxide are resulting in an influx of this gas into the ocean, increasing its acidity. Marine organisms which possess calcium carbonate shells or exoskeletons experience physiological pressure as the carbonate reacts with acid. For example, this is already resulting in coral bleaching on various coral reefs worldwide, which provide valuable habitat and maintain a high biodiversity. Marine gastropods, bivalves, and other invertebrates are also affected, as are the organisms that feed on them. Some studies have suggested that it is not climate change that is driving the current extinction crisis, but the demands of contemporary human civilization on nature. However, a rise in average global temperatures greater than 5.2 °C is projected to cause a mass extinction similar to the "Big Five" mass extinction events of the Phanerozoic, even without other anthropogenic impacts on biodiversity.
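The carbonate chemistry behind this physiological pressure can be summarised by two textbook equilibria: dissolved CO2 forms carbonic acid and releases hydrogen ions, and those ions attack the calcium carbonate that shell-building organisms depend on. These are standard reactions rather than equations taken from the studies cited here.

```latex
% Standard carbonate equilibria behind ocean acidification (textbook form)
\begin{align}
  \mathrm{CO_2 + H_2O} &\rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+ + HCO_3^-}\\
  \mathrm{CaCO_3 + H^+} &\rightleftharpoons \mathrm{Ca^{2+} + HCO_3^-}
\end{align}
```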
Overexploitation
Overhunting can reduce the local population of game animals by more than half, as well as reducing population density, and may lead to extinction for some species. Populations located nearer to villages are significantly more at risk of depletion. Several conservationist organizations, among them IFAW and HSUS, assert that trophy hunters, particularly from the United States, are playing a significant role in the decline of giraffes, which they refer to as a "silent extinction".
The surge in the mass killings by poachers involved in the illegal ivory trade along with habitat loss is threatening African elephant populations. In 1979, their populations stood at 1.7 million; at present there are fewer than 400,000 remaining. Prior to European colonization, scientists believe Africa was home to roughly 20 million elephants. According to the Great Elephant Census, 30% of African elephants (or 144,000 individuals) disappeared over a seven-year period, 2007 to 2014. African elephants could become extinct by 2035 if poaching rates continue.
Fishing has had a devastating effect on marine organism populations for several centuries even before the explosion of destructive and highly effective fishing practices like trawling. Humans are unique among predators in that they regularly prey on other adult apex predators, particularly in marine environments; bluefin tuna, blue whales, North Atlantic right whales, and over fifty species of sharks and rays are vulnerable to predation pressure from human fishing, in particular commercial fishing. A 2016 study published in Science concludes that humans tend to hunt larger species, and this could disrupt ocean ecosystems for millions of years. A 2020 study published in Science Advances found that around 18% of marine megafauna, including iconic species such as the Great white shark, are at risk of extinction from human pressures over the next century. In a worst-case scenario, 40% could go extinct over the same time period. According to a 2021 study published in Nature, 71% of oceanic shark and ray populations have been destroyed by overfishing (the primary driver of ocean defaunation) from 1970 to 2018, and are nearing the "point of no return" as 24 of the 31 species are now threatened with extinction, with several being classified as critically endangered. Almost two-thirds of sharks and rays around coral reefs are threatened with extinction from overfishing, with 14 of 134 species being critically endangered.
Disease
The decline of amphibian populations has also been identified as an indicator of environmental degradation. In addition to habitat loss, introduced predators and pollution, chytridiomycosis, a fungal infection accidentally spread by human travel, globalization, and the wildlife trade, has caused severe population drops in over 500 amphibian species, and perhaps 90 extinctions, including (among many others) the golden toad in Costa Rica, the gastric-brooding frogs in Australia, Rabbs' fringe-limbed treefrog, and the Panamanian golden frog, which is extinct in the wild. Chytrid fungus has spread across Australia, New Zealand, Central America and Africa, including countries with high amphibian diversity such as the cloud forests of Honduras and Madagascar. Batrachochytrium salamandrivorans is a similar infection currently threatening salamanders. Amphibians are now the most endangered vertebrate group, having existed for more than 300 million years through three other mass extinctions.
Millions of bats in the US have been dying off since 2012 due to a fungal infection known as white-nose syndrome that spread from European bats, who appear to be immune. Population drops have been as great as 90% within five years, and extinction of at least one bat species is predicted. There is currently no form of treatment, and such declines have been described as "unprecedented" in bat evolutionary history by Alan Hicks of the New York State Department of Environmental Conservation.
Between 2007 and 2013, over ten million beehives were abandoned due to colony collapse disorder, which causes worker bees to abandon the queen. Though no single cause has gained widespread acceptance by the scientific community, proposals include infections with Varroa and Acarapis mites; malnutrition; various pathogens; genetic factors; immunodeficiencies; loss of habitat; changing beekeeping practices; or a combination of factors.
By region
Megafauna were once found on every continent of the world, but are now almost exclusively found on the continent of Africa. In some regions, megafauna experienced population crashes and trophic cascades shortly after the earliest human settlers. Worldwide, 178 species of the world's largest mammals died out between 52,000 and 9,000 BC; it has been suggested that a higher proportion of African megafauna survived because they evolved alongside humans. The timing of South American megafaunal extinction appears to precede human arrival, although the possibility that human activity at the time impacted the global climate enough to cause such an extinction has been suggested.
Africa
Africa experienced the smallest decline in megafauna compared to the other continents. This is presumably because African megafauna evolved alongside humans, and thus developed a healthy fear of them, unlike the comparatively tame animals of other continents.
Eurasia
Unlike other continents, the megafauna of Eurasia went extinct over a relatively long period of time, possibly due to climate fluctuations fragmenting and decreasing populations, leaving them vulnerable to over-exploitation, as with the steppe bison (Bison priscus). The warming of the arctic region caused the rapid decline of grasslands, which had a negative effect on the grazing megafauna of Eurasia. Most of what once was mammoth steppe was converted to mire, rendering the environment incapable of supporting them, notably the woolly mammoth. However, all these megafauna had survived previous interglacials with the same or more intense warming, suggesting that even during warm periods, refugia may have existed and that human hunting may have been the critical factor for their extinction.
In the western Mediterranean region, anthropogenic forest degradation began around 4,000 BP, during the Chalcolithic, and became especially pronounced during the Roman era. The reasons for the decline of forest ecosystems stem from agriculture, grazing, and mining. During the twilight years of the Western Roman Empire, forests in northwestern Europe rebounded from losses incurred throughout the Roman period, though deforestation on a large scale resumed once again around 800 BP, during the High Middle Ages.
In southern China, human land use is believed to have permanently altered the trend of vegetation dynamics in the region, which was previously governed by temperature. This is evidenced by high fluxes of charcoal from that time interval.
Americas
There has been a debate as to the extent to which the disappearance of megafauna at the end of the last glacial period can be attributed to human activities by hunting, or even by slaughter of prey populations. Discoveries at Monte Verde in South America and at Meadowcroft Rock Shelter in Pennsylvania have caused a controversy regarding the Clovis culture. There likely would have been human settlements prior to the Clovis culture, and the history of humans in the Americas may extend back many thousands of years before the Clovis culture. The amount of correlation between human arrival and megafauna extinction is still being debated: for example, in Wrangel Island in Siberia the extinction of dwarf woolly mammoths (approximately 2000 BC) did not coincide with the arrival of humans, nor did megafaunal mass extinction on the South American continent, although it has been suggested climate changes induced by anthropogenic effects elsewhere in the world may have contributed.
Comparisons are sometimes made between recent extinctions (approximately since the Industrial Revolution) and the Pleistocene extinction near the end of the last glacial period. The latter is exemplified by the extinction of large herbivores such as the woolly mammoth and the carnivores that preyed on them. Humans of this era actively hunted the mammoth and the mastodon, but it is not known if this hunting was the cause of the subsequent massive ecological changes, widespread extinctions and climate changes.
The ecosystems encountered by the first Americans had not been exposed to human interaction, and may have been far less resilient to human-made changes than the ecosystems encountered by industrial-era humans. Therefore, the actions of the Clovis people, despite seeming insignificant by today's standards, could indeed have had a profound effect on ecosystems and wildlife that were entirely unused to human influence.
In the Yukon, the mammoth steppe ecosystem collapsed between 13,500 and 10,000 BP, though wild horses and woolly mammoths somehow persisted in the region for millennia after this collapse. In what is now Texas, a drop in local plant and animal biodiversity occurred during the Younger Dryas cooling, though while plant diversity recovered after the Younger Dryas, animal diversity did not. In the Channel Islands, multiple terrestrial species went extinct around the same time as human arrival, but direct evidence for an anthropogenic cause of their extinction remains lacking. In the montane forests of the Colombian Andes, spores of coprophilous fungi indicate megafaunal extinction occurred in two waves, the first occurring around 22,900 BP and the second around 10,990 BP. A 2023 study of megafaunal extinctions in the Junín Plateau of Peru found that the timing of the disappearance of megafauna was concurrent with a large uptick in fire activity attributed to human actions, implicating humans as the cause of their local extinction on the plateau.
New Guinea
Humans in New Guinea used volcanically fertilised soil following major eruptions and interfered with vegetation succession patterns since the Late Pleistocene, with this process intensifying in the Holocene.
Australia
Since European colonisation, Australia has lost over 100 plant and animal species, including 10% of its mammal species, the highest loss of any continent.
Australia was once home to a large assemblage of megafauna, with many parallels to those found on the African continent today. Australia's fauna is characterized primarily by marsupial mammals, and many reptiles and birds, all of which existed as giant forms until recently. Humans arrived on the continent very early, about 50,000 years ago. The extent to which human arrival contributed is controversial; climatic drying of Australia 40,000–60,000 years ago was an unlikely cause, as it was less severe in speed or magnitude than previous regional climate change which failed to kill off megafauna. Extinctions in Australia continued from original settlement until today in both plants and animals, while many more animals and plants have declined or are endangered.
Due to the older timeframe and the soil chemistry on the continent, very little subfossil preservation evidence exists relative to elsewhere. However, the continent-wide extinction of all genera weighing over 100 kilograms, and of six of seven genera weighing between 45 and 100 kilograms, around 46,400 years ago (4,000 years after human arrival), together with the fact that megafauna survived until a later date on the island of Tasmania following the establishment of a land bridge, suggests direct hunting or anthropogenic ecosystem disruption such as fire-stick farming as likely causes. The first evidence of direct human predation leading to extinction in Australia was published in 2016.
A 2021 study found that the rate of extinction of Australia's megafauna is rather unusual, with some generalist species having gone extinct earlier while highly specialized ones became extinct later or even survive today. A mosaic cause of extinction, with different anthropogenic and environmental pressures, has been proposed.
The arrival of invasive species such as feral cats and cane toads has further devastated Australia's ecosystems.
Caribbean
Human arrival in the Caribbean around 6,000 years ago is correlated with the extinction of many species. These include many different genera of ground and arboreal sloths across all islands. These sloths were generally smaller than those found on the South American continent. Megalocnus were the largest genus at up to , Acratocnus were medium-sized relatives of modern two-toed sloths endemic to Cuba, Imagocnus also of Cuba, Neocnus and many others.
Macaronesia
The arrival of the first human settlers in the Azores saw the introduction of invasive plants and livestock to the archipelago, resulting in the extinction of at least two plant species on Pico Island. On Faial Island, the decline of Prunus lusitanica has been hypothesized by some scholars to have been related to the tree species being endozoochoric, with the extirpation or extinction of various bird species drastically limiting its seed dispersal. Lacustrine ecosystems were ravaged by human colonization, as evidenced by hydrogen isotopes from C30 fatty acids recording hypoxic bottom waters caused by eutrophication in Lake Funda on Flores Island beginning between 1500 and 1600 AD.
The arrival of humans on the archipelago of Madeira caused the extinction of approximately two-thirds of its endemic bird species, with two non-endemic birds also being locally extirpated from the archipelago. Of thirty-four land snail species collected in a subfossil sample from eastern Madeira Island, nine became extinct following the arrival of humans. On the Desertas Islands, of forty-five land snail species known to exist before human colonization, eighteen are extinct and five are no longer present on the islands. Eurya stigmosa, whose extinction is typically attributed to climate change following the end of the Pleistocene rather than humans, may have survived until the colonization of the archipelago by the Portuguese and gone extinct as a result of human activity. Introduced mice have been implicated as a leading driver of extinction on Madeira following its discovery and settlement by humans.
In the Canary Islands, native thermophilous woodlands were decimated and two tree taxa were driven extinct following the arrival of its first humans, primarily as a result of increased fire clearance and soil erosion and the introduction of invasive pigs, goats, and rats. Invasive species introductions accelerated during the Age of Discovery when Europeans first settled the Macaronesian archipelago. The archipelago's laurel forests, though still negatively impacted, fared better due to being less suitable for human economic use.
Cabo Verde, like the Canary Islands, witnessed precipitous deforestation upon the arrival of European settlers and various invasive species brought by them in the archipelago, with the archipelago's thermophilous woodlands suffering the greatest destruction. Introduced species, overgrazing, increased fire incidence, and soil degradation have been attributed as the chief causes of Cabo Verde's ecological devastation.
Pacific
Archaeological and paleontological digs on 70 different Pacific islands suggested that numerous species became extinct as people moved across the Pacific, starting 30,000 years ago in the Bismarck Archipelago and Solomon Islands. It is currently estimated that among the bird species of the Pacific, some 2000 species have gone extinct since the arrival of humans, representing a 20% drop in the biodiversity of birds worldwide. In Polynesia, the Late Holocene declines in avifaunas only abated after they were heavily depleted and there were increasingly fewer bird species able to be driven to extinction. Iguanas were likewise decimated by the spread of humans. Additionally, the endemic faunas of Pacific archipelagos are exceptionally at risk in the coming decades due to rising sea levels caused by global warming.
Lord Howe Island, which remained uninhabited until the arrival of Europeans in the South Pacific in the 18th century, lost much of its endemic avifauna when it became a whaling station in the early 19th century. Another wave of bird extinctions occurred following the introduction of black rats in 1918.
The endemic megafaunal meiolaniid turtles of Vanuatu became extinct immediately following the first human arrivals and remains of them containing evidence of butchery by humans have been found.
The arrival of humans in New Caledonia marked the commencement of coastal forest and mangrove decline on the island. The archipelago's megafauna was still extant when humans arrived, but indisputable evidence for the anthropogenicity of their extinction remains elusive.
In Fiji, the giant iguanas Brachylophus gibbonsi and Lapitiguana impensa both succumbed to human-induced extinction shortly after encountering the first humans on the island.
In American Samoa, deposits dating back to the period of initial human colonisation contain elevated quantities of bird, turtle, and fish remains caused by increased predation pressure.
On Mangaia in the Cook Islands, human colonisation was associated with a major extinction of endemic avifauna, along with deforestation, erosion of volcanic hillsides, and increased charcoal influx, causing additional environmental damage.
On Rapa in the Austral Archipelago, human arrival, marked by the increase in charcoal and in taro pollen in the palynological record, is associated with the extinction of an endemic palm.
Henderson Island, once thought to be untouched by humans, was colonised and later abandoned by Polynesians. The ecological collapse on the island caused by the anthropogenic extinctions is believed to have caused the island's abandonment.
The first human settlers of the Hawaiian Islands are thought to have arrived between 300 and 800 AD, with European arrival in the 16th century. Hawaii is notable for its endemism of plants, birds, insects, mollusks and fish; 30% of its organisms are endemic. Many of its species are endangered or have gone extinct, primarily due to accidentally introduced species and livestock grazing. Over 40% of its bird species have gone extinct, and it is the location of 75% of extinctions in the United States. Evidence suggests that the introduction of the Polynesian rat, above all other factors, drove the ecocide of the endemic forests of the archipelago. Extinction has increased in Hawaii over the last 200 years and is relatively well documented, with extinctions among native snails used as estimates for global extinction rates. High rates of habitat fragmentation on the archipelago have further reduced biodiversity. The extinction of endemic Hawaiian avifauna is likely to accelerate even further as anthropogenic global warming adds additional pressure on top of land-use changes and invasive species.
Madagascar
Within centuries of the arrival of humans around the 1st millennium AD, nearly all of Madagascar's distinct, endemic, and geographically isolated megafauna became extinct. The largest animals, of more than , were extinct very shortly after the first human arrival, with large and medium-sized species dying out after prolonged hunting pressure from an expanding human population moving into more remote regions of the island around 1000 years ago; these losses included 17 species of "giant" lemurs. Some of these lemurs typically weighed over , and their fossils have provided evidence of human butchery on many species. Other megafauna present on the island included the Malagasy hippopotamuses as well as the large flightless elephant birds; both groups are thought to have gone extinct in the interval 750–1050 AD. Smaller fauna experienced initial increases due to decreased competition, and then subsequent declines over the last 500 years. All fauna weighing over died out. The primary reasons for the decline of Madagascar's biota, which at the time was already stressed by natural aridification, were human hunting, herding, farming, and forest clearing, all of which persist and threaten Madagascar's remaining taxa today. The natural ecosystems of Madagascar as a whole were further impacted by the much greater incidence of fire as a result of anthropogenic fire production; evidence from Lake Amparihibe on the island of Nosy Be indicates a shift in local vegetation from intact rainforest to a fire-disturbed patchwork of grassland and woodland between 1300 and 1000 BP.
New Zealand
New Zealand is characterized by its geographic isolation and island biogeography, and had been isolated from mainland Australia for 80 million years. It was the last large land mass to be colonized by humans. Upon the arrival of Polynesian settlers in the late 13th century, the native biota suffered a catastrophic decline due to deforestation, hunting, and the introduction of invasive species. The extinction of all of the islands' megafaunal birds occurred within several hundred years of human arrival. The moa, large flightless ratites, were thriving during the Late Holocene, but became extinct within 200 years of the arrival of human settlers, as did the enormous Haast's eagle, their primary predator, and at least two species of large, flightless geese. The Polynesians also introduced the Polynesian rat. This may have put some pressure on other birds, but at the time of early European contact (18th century) and colonization (19th century), the bird life was prolific. The megafaunal extinction happened extremely rapidly despite a very small population density, which never exceeded 0.01 people per km2. Extinctions of parasites followed the extinction of New Zealand's megafauna. With them, the Europeans brought various invasive species including ship rats, possums, cats and mustelids which devastated native bird life, some of which had adapted flightlessness and ground nesting habits, and had no defensive behavior as a result of having no native mammalian predators. The kākāpō, the world's biggest parrot, which is flightless, now only exists in managed breeding sanctuaries. New Zealand's national emblem, the kiwi, is on the endangered bird list.
Mitigation
Stabilizing human populations; reining in capitalism, decreasing economic demands, and shifting them to economic activities with low impacts on biodiversity; transitioning to plant-based diets; and increasing the number and size of terrestrial and marine protected areas have been suggested to avoid or limit biodiversity loss and a possible sixth mass extinction. Rodolfo Dirzo and Paul R. Ehrlich suggest that "the one fundamental, necessary, 'simple' cure, ... is reducing the scale of the human enterprise." According to a 2021 paper published in Frontiers in Conservation Science, humanity almost certainly faces a "ghastly future" of mass extinction, biodiversity collapse, climate change, and their impacts unless major efforts to change human industry and activity are rapidly undertaken.
Reducing human population growth has been suggested as a means of mitigating climate change and the biodiversity crisis, although many scholars believe it has been largely ignored in mainstream policy discourse. An alternative proposal is greater agricultural efficiency and sustainability: much non-arable land could be converted into arable land suitable for growing food crops, and fungi such as mushrooms have been shown to help repair damaged soil.
A 2018 article in Science advocated for the global community to designate 30% of the planet by 2030, and 50% by 2050, as protected areas to mitigate the contemporary extinction crisis. It highlighted that the human population is projected to grow to 10 billion by the middle of the century, and consumption of food and water resources is projected to double by this time. A 2022 report published in Science warned that 44% of Earth's terrestrial surface, or , must be conserved and made "ecologically sound" to prevent further biodiversity loss.
In November 2018, the UN's biodiversity chief Cristiana Pașca Palmer urged people worldwide to pressure governments to implement significant protections for wildlife by 2020. She called biodiversity loss a "silent killer" as dangerous as global warming but said it had received little attention by comparison. "It's different from climate change, where people feel the impact in everyday life. With biodiversity, it is not so clear but by the time you feel what is happening, it may be too late." In January 2020, the UN Convention on Biological Diversity drafted a Paris-style plan to stop biodiversity and ecosystem collapse by setting the deadline of 2030 to protect 30% of the Earth's land and oceans and to reduce pollution by 50%, to allow for the restoration of ecosystems by 2050. The world failed to meet the Aichi Biodiversity Targets for 2020 set by the convention during a summit in Japan in 2010. Of the 20 biodiversity targets proposed, only six were "partially achieved" by the deadline. It was called a global failure by Inger Andersen, head of the United Nations Environment Programme:
Some scientists have proposed keeping extinctions below 20 per year for the next century as a global target to reduce species loss, which is the biodiversity equivalent of the 2 °C climate target, although it is still much higher than the normal background rate of two per year prior to anthropogenic impacts on the natural world.
An October 2020 report on the "era of pandemics" from IPBES found that many of the same human activities that contribute to biodiversity loss and climate change, including deforestation and the wildlife trade, have also increased the risk of future pandemics. The report offers several policy options to reduce such risk, such as taxing meat production and consumption, cracking down on the illegal wildlife trade, removing high disease-risk species from the legal wildlife trade, and eliminating subsidies to businesses which are harmful to the environment. According to marine zoologist John Spicer, "the COVID-19 crisis is not just another crisis alongside the biodiversity crisis and the climate change crisis. Make no mistake, this is one big crisis – the greatest that humans have ever faced."
In December 2022, nearly every country on Earth, with the United States and the Holy See being the only exceptions, signed onto the Kunming-Montreal Global Biodiversity Framework agreement formulated at the 2022 United Nations Biodiversity Conference (COP 15) which includes protecting 30% of land and oceans by 2030 and 22 other targets intended to mitigate the extinction crisis. The agreement is weaker than the Aichi Targets of 2010. It was criticized by some countries for being rushed and not going far enough to protect endangered species.
See also
Anthropocene
Biodiversity loss
Ecocide
Extinction Rebellion
Extinction risk from climate change
Extinction symbol
Extinction: The Facts (2020 documentary)
Human impact on the environment
Lists of extinct species
Pleistocene rewilding
Late Pleistocene extinctions
Racing Extinction (2015 documentary film)
Timeline of extinctions in the Holocene
World Scientists' Warning to Humanity
Notes
References
Further reading
External links
The Extinction Crisis . Center for Biological Diversity.
Vanishing: The extinction crisis is far worse than you think . CNN. December 2016.
Biologists say half of all species could be extinct by end of century , The Guardian, 25 February 2017
Humans are ushering in the sixth mass extinction of life on Earth, scientists warn , The Independent, 31 May 2017
Human activity pushing Earth towards 'sixth mass species extinction,' report warns . CBC. Mar 26, 2018
'Terror being waged on wildlife', leaders warn . The Guardian. October 4, 2018.
Earth Is on the Cusp of the Sixth Mass Extinction. Here's What Paleontologists Want You to Know . Discover. December 3, 2020.
What the Extinction Crisis Took From the World in 2022 . The Nation. December 22, 2022.
Extinction crisis puts 1 million species on the brink . Reuters. December 23, 2022.
Exclusive: Huge chunk of plants, animals in U.S. at risk of extinction . Reuters. February 6, 2023.
Species made extinct by human activities
Extinction
Environmental impact by effect
Human ecology
Human impact on the environment
Anthropocene
Extinction events
hu:Pleisztocén-holocén becsapódási esemény | Holocene extinction | [
"Biology",
"Environmental_science"
] | 11,866 | [
"Evolution of the biosphere",
"Human ecology",
"Environmental social science",
"Extinction events"
] |
14,225 | https://en.wikipedia.org/wiki/Hydrogen%20atom | A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral hydrogen atom contains a single positively charged proton in the nucleus, and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe.
In everyday life on Earth, isolated hydrogen atoms (called "atomic hydrogen") are extremely rare. Instead, a hydrogen atom tends to combine with other atoms in compounds, or with another hydrogen atom to form ordinary (diatomic) hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings. For example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen (which would refer to isolated hydrogen atoms).
Atomic spectroscopy shows that there is a discrete infinite set of states in which a hydrogen (or any) atom can exist, contrary to the predictions of classical physics. Attempts to develop a theoretical understanding of the states of the hydrogen atom have been important to the history of quantum mechanics, since all other atoms can be roughly understood by knowing in detail about this simplest atomic structure.
Isotopes
The most abundant isotope, protium (1H), or light hydrogen, contains no neutrons and is simply a proton and an electron. Protium is stable and makes up 99.985% of naturally occurring hydrogen atoms.
Deuterium (2H) contains one neutron and one proton in its nucleus. Deuterium is stable, makes up 0.0156% of naturally occurring hydrogen, and is used in industrial and scientific applications such as nuclear reactors and nuclear magnetic resonance (NMR) spectroscopy.
Tritium (3H) contains two neutrons and one proton in its nucleus and is not stable, decaying with a half-life of 12.32 years. Because of its short half-life, tritium does not exist in nature except in trace amounts.
Heavier isotopes of hydrogen are only created artificially in particle accelerators and have half-lives on the order of 10−22 seconds. They are unbound resonances located beyond the neutron drip line; this results in prompt emission of a neutron.
The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant (correction formula given below) must be used for each hydrogen isotope.
Hydrogen ion
Lone neutral hydrogen atoms are rare under normal conditions. However, neutral hydrogen is common when it is covalently bound to another atom, and hydrogen atoms can also exist in cationic and anionic forms.
If a neutral hydrogen atom loses its electron, it becomes a cation. The resulting ion, which consists solely of a proton for the usual isotope, is written as "H+" and sometimes called hydron. Free protons are common in the interstellar medium, and solar wind. In the context of aqueous solutions of classical Brønsted–Lowry acids, such as hydrochloric acid, it is actually hydronium, H3O+, that is meant. Instead of a literal ionized single hydrogen atom being formed, the acid transfers the hydrogen to H2O, forming H3O+.
If instead a hydrogen atom gains a second electron, it becomes an anion. The hydrogen anion is written as "H–" and called hydride.
Theoretical analysis
The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body problem physical system which has yielded many simple analytical solutions in closed-form.
Failed classical description
Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. This immediately raised questions about how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. If the electron is assumed to orbit in a perfect circle and radiates energy continuously, the electron would rapidly spiral into the nucleus with a fall time of:
$t_\text{fall} \approx \frac{a_0^3}{4 r_0^2 c} \approx 1.6 \times 10^{-11}\ \text{s},$
where $a_0$ is the Bohr radius and $r_0$ is the classical electron radius. If this were true, all atoms would instantly collapse. However, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to emit only discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics.
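The classical infall time can be checked numerically. The following is a minimal sketch, not part of the original article, which plugs standard CODATA-style values of the Bohr radius, classical electron radius, and speed of light into the fall-time formula reconstructed above; the specific constant values are assumptions introduced for illustration.

```python
# Classical estimate of how quickly an orbiting electron would spiral into
# the nucleus if it radiated continuously per the Larmor formula.
a0 = 5.29177210903e-11   # Bohr radius, m
r0 = 2.8179403262e-15    # classical electron radius, m
c  = 2.99792458e8        # speed of light, m/s

t_fall = a0**3 / (4 * r0**2 * c)
print(f"classical infall time ~ {t_fall:.2e} s")   # roughly 1.6e-11 s
```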
Bohr–Sommerfeld Model
In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included:
Electrons can only be in certain, discrete circular orbits or stationary states, thereby having a discrete set of possible radii and energies.
Electrons do not emit radiation while in one of these stationary states.
An electron can gain or lose energy by jumping from one discrete orbit to another.
Bohr supposed that the electron's angular momentum is quantized with possible values:
$L = n\hbar,$
where
$n = 1, 2, 3, \ldots$
and $\hbar = h/2\pi$ is the Planck constant over $2\pi$. He also supposed that the centripetal force which keeps the electron in its orbit is provided by the Coulomb force, and that energy is conserved. Bohr derived the energy of each orbit of the hydrogen atom to be:
$E_n = -\frac{m_e e^4}{8 \varepsilon_0^2 h^2 n^2},$
where $m_e$ is the electron mass, $e$ is the electron charge, $\varepsilon_0$ is the vacuum permittivity, and $n$ is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values.
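As an illustration, the Bohr energy formula can be evaluated directly. This is a minimal sketch added here, not part of the original article; the constant values and the helper name bohr_energy_eV are assumptions introduced for the example.

```python
# Bohr-model energy levels E_n = -m_e e^4 / (8 eps0^2 h^2 n^2), in eV.
m_e  = 9.1093837015e-31    # electron mass, kg
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J s

def bohr_energy_eV(n: int) -> float:
    E_joule = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
    return E_joule / e      # convert J -> eV

for n in range(1, 6):
    print(n, round(bohr_energy_eV(n), 3))   # -13.606, -3.401, -1.512, ...
```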
For $n = 1$, the value
$|E_1| = \frac{m_e e^4}{8 \varepsilon_0^2 h^2} \approx 13.6\ \text{eV}$
is called the Rydberg unit of energy. It is related to the Rydberg constant $R_\infty$ of atomic physics by $1\ \text{Ry} \equiv h c R_\infty.$
The exact value of the Rydberg constant assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium) which have finite mass, the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. This includes the kinetic energy of the nucleus in the problem, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. However, since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same. The Rydberg constant $R_M$ for a hydrogen atom (one electron) is given by
$R_M = \frac{R_\infty}{1 + m_e / M},$
where $M$ is the mass of the atomic nucleus. For hydrogen-1, the quantity $m_e/M$ is about 1/1836 (i.e. the electron-to-proton mass ratio). For deuterium and tritium, the ratios are about 1/3670 and 1/5497 respectively. These figures, when added to 1 in the denominator, represent very small corrections in the value of $R_M$, and thus only small corrections to all energy levels in corresponding hydrogen isotopes.
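A short numerical sketch of this reduced-mass correction is given below. It is illustrative only and not from the original article: the nuclear masses are rounded values assumed for the example.

```python
# Isotope correction to the Rydberg constant, R_M = R_inf / (1 + m_e / M).
R_inf = 10973731.568160     # Rydberg constant, 1/m
m_e_u = 5.48579909065e-4    # electron mass in atomic mass units

nuclear_mass_u = {          # approximate nuclear masses in u (assumed, rounded)
    "1H (protium)":   1.00728,
    "2H (deuterium)": 2.01355,
    "3H (tritium)":   3.01550,
}

for name, M in nuclear_mass_u.items():
    R_M = R_inf / (1 + m_e_u / M)
    print(f"{name}: R_M = {R_M:.3f} 1/m  (m_e/M ~ 1/{M / m_e_u:.0f})")
```

Running this reproduces the mass ratios quoted above (about 1/1836, 1/3670 and 1/5497) and shows how small the resulting shift in $R_M$ is.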
There were still problems with Bohr's model:
it failed to predict other spectral details such as fine structure and hyperfine structure
it could only predict energy levels with any accuracy for single–electron atoms (hydrogen-like atoms)
the predicted values were only correct to $\alpha^2$, where $\alpha$ is the fine-structure constant.
Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model. Sommerfeld introduced two additional degrees of freedom, allowing an electron to move on an elliptical orbit characterized by its eccentricity and declination with respect to a chosen axis. This introduced two additional quantum numbers, which correspond to the orbital angular momentum and its projection on the chosen axis. Thus the correct multiplicity of states (except for the factor 2 accounting for the yet unknown electron spin) was found. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory). However, some observed phenomena, such as the anomalous Zeeman effect, remained unexplained. These issues were resolved with the full development of quantum mechanics and the Dirac equation. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing hydrogen atom. This is not the case, as most of the results of both approaches coincide or are very close (a remarkable exception is the problem of hydrogen atom in crossed electric and magnetic fields, which cannot be self-consistently solved in the framework of the Bohr–Sommerfeld theory), and in both theories the main shortcomings result from the absence of the electron spin. It was the complete failure of the Bohr–Sommerfeld theory to explain many-electron systems (such as helium atom or hydrogen molecule) which demonstrated its inadequacy in describing quantum phenomena.
Schrödinger equation
The Schrödinger equation is the standard quantum-mechanics model; it allows one to calculate the stationary states and also the time evolution of quantum systems. Exact analytical answers are available for the nonrelativistic hydrogen atom. Before presenting the formal account, we first give an elementary overview.
Given that the hydrogen atom contains a nucleus and an electron, quantum mechanics allows one to predict the probability of finding the electron at any given radial distance $r$. It is given by the square of a mathematical function known as the "wavefunction", which is a solution of the Schrödinger equation. The lowest energy equilibrium state of the hydrogen atom is known as the ground state. The ground state wave function is known as the $\psi_{1s}$ wavefunction. It is written as:
$\psi_{1s}(r) = \frac{1}{\sqrt{\pi}\, a_0^{3/2}}\, e^{-r/a_0}.$
Here, $a_0$ is the numerical value of the Bohr radius. The probability density of finding the electron at a distance $r$ in any radial direction is the squared value of the wavefunction:
$|\psi_{1s}(r)|^2 = \frac{1}{\pi a_0^3}\, e^{-2r/a_0}.$
The $\psi_{1s}$ wavefunction is spherically symmetric, and the surface area of a shell at distance $r$ is $4\pi r^2$, so the total probability $P(r)\,dr$ of the electron being in a shell at a distance $r$ and thickness $dr$ is
$P(r)\,dr = 4\pi r^2\, |\psi_{1s}(r)|^2\, dr = \frac{4 r^2}{a_0^3}\, e^{-2r/a_0}\, dr.$
It turns out that this is a maximum at $r = a_0$. That is, the Bohr picture of an electron orbiting the nucleus at radius $a_0$ corresponds to the most probable radius. Actually, there is a finite probability that the electron may be found at any place $r$, with the probability indicated by the square of the wavefunction. Since the probability of finding the electron somewhere in the whole volume is unity, the integral of $|\psi_{1s}(r)|^2$ over all space is unity. Then we say that the wavefunction is properly normalized.
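The claim that the radial probability peaks at the Bohr radius, and that the wavefunction is normalized, can be verified numerically. The sketch below is an addition for illustration; it works in units where the Bohr radius is 1 and simply evaluates the radial probability density on a grid.

```python
# Numerical check that P(r) = 4 r^2 exp(-2r/a0) / a0^3 peaks at r = a0
# and integrates to 1 (units with a0 = 1).
import numpy as np

a0 = 1.0
r = np.linspace(1e-6, 10.0, 200001)
P = 4.0 * r**2 * np.exp(-2.0 * r / a0) / a0**3

r_peak = r[np.argmax(P)]
print(f"most probable radius ~ {r_peak:.4f} a0")        # ~1.0000
print(f"total probability    ~ {np.trapz(P, r):.4f}")   # ~1.0, i.e. normalized
```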
As discussed below, the ground state is also indicated by the quantum numbers $(n, \ell, m) = (1, 0, 0)$. The second lowest energy states, just above the ground state, are given by the quantum numbers $(2, 0, 0)$, $(2, 1, 0)$, and $(2, 1, \pm 1)$. These $n = 2$ states all have the same energy and are known as the $2s$ and $2p$ states. There is one $2s$ state:
$\psi_{2,0,0} = \frac{1}{4\sqrt{2\pi}}\, a_0^{-3/2} \left(2 - \frac{r}{a_0}\right) e^{-r/(2a_0)},$
and there are three $2p$ states:
$\psi_{2,1,0} = \frac{1}{4\sqrt{2\pi}}\, a_0^{-3/2}\, \frac{r}{a_0}\, e^{-r/(2a_0)} \cos\theta,$
$\psi_{2,1,\pm 1} = \mp\frac{1}{8\sqrt{\pi}}\, a_0^{-3/2}\, \frac{r}{a_0}\, e^{-r/(2a_0)} \sin\theta\, e^{\pm i\varphi}.$
An electron in the $2s$ or $2p$ state is most likely to be found in the second Bohr orbit with energy given by the Bohr formula.
Wavefunction
The Hamiltonian of the hydrogen atom is the radial kinetic energy operator plus the Coulomb electrostatic potential energy between the positive proton and the negative electron. Using the time-independent Schrödinger equation, ignoring all spin-coupling interactions and using the reduced mass $\mu = m_e M/(m_e + M)$, the equation is written as:
$\left(-\frac{\hbar^2}{2\mu}\nabla^2 - \frac{e^2}{4\pi\varepsilon_0 r}\right)\psi(r,\theta,\varphi) = E\,\psi(r,\theta,\varphi).$
Expanding the Laplacian in spherical coordinates:
$-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial\psi}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right] - \frac{e^2}{4\pi\varepsilon_0 r}\psi = E\,\psi.$
This is a separable partial differential equation which can be solved in terms of special functions. When the wavefunction is separated as a product of functions $R(r)$, $\Theta(\theta)$, and $\Phi(\varphi)$, three independent differential equations appear, with $A$ and $B$ being the separation constants:
radial:
polar:
azimuth:
The normalized position wavefunctions, given in spherical coordinates, are:
$\psi_{n\ell m}(r,\theta,\varphi) = \sqrt{\left(\frac{2}{n a_0^*}\right)^{3} \frac{(n-\ell-1)!}{2n\,(n+\ell)!}}\; e^{-\rho/2}\, \rho^{\ell}\, L_{n-\ell-1}^{2\ell+1}(\rho)\, Y_{\ell}^{m}(\theta,\varphi),$
where:
$\rho = \frac{2r}{n a_0^*}$,
$a_0^*$ is the reduced Bohr radius, $a_0^* = \frac{4\pi\varepsilon_0 \hbar^2}{\mu e^2}$,
$L_{n-\ell-1}^{2\ell+1}(\rho)$ is a generalized Laguerre polynomial of degree $n-\ell-1$, and
$Y_{\ell}^{m}(\theta,\varphi)$ is a spherical harmonic function of degree $\ell$ and order $m$.
Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah, and Mathematica. In other places, the Laguerre polynomial includes a factor of $(n+\ell)!$, or the generalized Laguerre polynomial appearing in the hydrogen wave function is $L_{n+\ell}^{2\ell+1}(\rho)$ instead.
The quantum numbers can take the following values:
$n = 1, 2, 3, \ldots$ (principal quantum number)
$\ell = 0, 1, 2, \ldots, n - 1$ (azimuthal quantum number)
$m = -\ell, \ldots, \ell$ (magnetic quantum number).
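These ranges imply the familiar $n^2$ degeneracy of each shell (or $2n^2$ once the two spin projections are counted). The snippet below is an illustrative addition, not part of the original article, that enumerates the allowed combinations for the first few shells.

```python
# Enumerate allowed (n, l, m) triples and confirm the n^2 degeneracy.
def states(n: int):
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in range(1, 5):
    s = states(n)
    print(f"n={n}: {len(s)} orbital states (expected {n**2}), "
          f"{2 * len(s)} including spin")
```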
Additionally, these wavefunctions are normalized (i.e., the integral of their modulus square equals 1) and orthogonal:
$\int_0^{\infty} r^2\,dr \int_0^{\pi} \sin\theta\,d\theta \int_0^{2\pi} d\varphi\; \psi_{n\ell m}^{*}(r,\theta,\varphi)\, \psi_{n'\ell'm'}(r,\theta,\varphi) = \langle n\,\ell\,m \mid n'\,\ell'\,m' \rangle = \delta_{n n'}\,\delta_{\ell\ell'}\,\delta_{m m'},$
where $\mid n\,\ell\,m \rangle$ is the state represented by the wavefunction $\psi_{n\ell m}$ in Dirac notation, and $\delta$ is the Kronecker delta function.
The wavefunctions in momentum space are related to the wavefunctions in position space through a Fourier transform
which, for the bound states, results in
where denotes a Gegenbauer polynomial and is in units of .
The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines; they fully reproduced the Bohr model and went beyond it. The solution also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.
The Schrödinger equation also applies to more complicated atoms and molecules. When there is more than one electron or nucleus the solution is not analytical and either computer calculations are necessary or simplifying assumptions must be made.
Since the Schrödinger equation is only valid for non-relativistic quantum mechanics, the solutions it yields for the hydrogen atom are not entirely correct. The Dirac equation of relativistic quantum theory improves these solutions (see below).
Results of Schrödinger equation
The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the orbitals) are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: the eigenstates of the Hamiltonian (that is, the energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, $\ell$ and $m$ (both are integers). The angular momentum quantum number $\ell$ determines the magnitude of the angular momentum. The magnetic quantum number $m$ determines the projection of the angular momentum on the (arbitrarily chosen) $z$-axis.
In addition to mathematical expressions for total angular momentum and angular momentum projection of wavefunctions, an expression for the radial dependence of the wave functions must be found. It is only here that the details of the Coulomb potential enter (leading to Laguerre polynomials in $r$). This leads to a third quantum number, the principal quantum number $n$. The principal quantum number in hydrogen is related to the atom's total energy.
Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to $n - 1$, i.e., $\ell = 0, 1, \ldots, n - 1$.
Due to angular momentum conservation, states of the same $\ell$ but different $m$ have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same $n$ but different $\ell$ are also degenerate (i.e., they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms which have an (effective) potential differing from the $1/r$ form (due to the presence of the inner electrons shielding the nucleus potential).
Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the $z$-axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This explains also why the choice of $z$-axis for the directional quantization of the angular momentum vector is immaterial: an orbital of given $\ell$ and $m'$ obtained for another preferred axis $z'$ can always be represented as a suitable superposition of the various states of different $m$ (but same $\ell$) that have been obtained for $z$.
Mathematical summary of eigenstates of hydrogen atom
In 1928, Paul Dirac found an equation that was fully compatible with special relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). The solution to this equation gave the following results, more accurate than the Schrödinger solution.
Energy levels
The energy levels of hydrogen, including fine structure (excluding Lamb shift and hyperfine structure), are given by the Sommerfeld fine-structure expression:
$E_{j\,n} = -\mu c^2\left[1 - \left(1 + \left[\frac{\alpha}{n - j - \frac{1}{2} + \sqrt{\left(j + \frac{1}{2}\right)^2 - \alpha^2}}\right]^2\right)^{-1/2}\right] \approx -\frac{\mu c^2 \alpha^2}{2 n^2}\left[1 + \frac{\alpha^2}{n^2}\left(\frac{n}{j + \frac{1}{2}} - \frac{3}{4}\right)\right],$
where $\alpha$ is the fine-structure constant and $j$ is the total angular momentum quantum number, which is equal to $\left|\ell \pm \tfrac{1}{2}\right|$, depending on the orientation of the electron spin relative to the orbital angular momentum. This formula represents a small correction to the energy obtained by Bohr and Schrödinger as given above. The factor in square brackets in the last expression is nearly one; the extra term arises from relativistic effects (for details, see #Features going beyond the Schrödinger solution). It is worth noting that this expression was first obtained by A. Sommerfeld in 1916 based on the relativistic version of the old Bohr theory. Sommerfeld had, however, used different notation for the quantum numbers.
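For illustration, the order-$\alpha^2$ form of this expression can be evaluated numerically. The sketch below is an addition, not from the original article; it approximates the reduced mass by the electron mass and uses rounded constants, so it only reproduces the leading fine-structure splitting of the $n = 2$ levels.

```python
# Fine-structure energies to order alpha^2 (the bracketed correction above),
# evaluated for n = 2.  Reduced mass approximated by the electron mass.
alpha  = 7.2973525693e-3   # fine-structure constant
mc2_eV = 510998.95         # electron rest energy, eV

def E_fine(n: int, j: float) -> float:
    base = -mc2_eV * alpha**2 / (2 * n**2)
    corr = 1 + (alpha**2 / n**2) * (n / (j + 0.5) - 0.75)
    return base * corr

for label, j in [("2S1/2 / 2P1/2", 0.5), ("2P3/2", 1.5)]:
    print(label, f"{E_fine(2, j):.8f} eV")
# The tiny difference between j = 1/2 and j = 3/2 (about 4.5e-5 eV) is the
# fine-structure splitting of the n = 2 level.
```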
Visualizing the hydrogen electron orbitals
The image to the right shows the first few hydrogen atom orbitals (energy eigenfunctions). These are cross-sections of the probability density that are color-coded (black represents zero density and white represents the highest density). The angular momentum (orbital) quantum number ℓ is denoted in each column, using the usual spectroscopic letter code (s means ℓ = 0, p means ℓ = 1, d means ℓ = 2). The main (principal) quantum number n (= 1, 2, 3, ...) is marked to the right of each row. For all pictures the magnetic quantum number m has been set to 0, and the cross-sectional plane is the xz-plane (z is the vertical axis). The probability density in three-dimensional space is obtained by rotating the one shown here around the z-axis.
The "ground state", i.e. the state of lowest energy, in which the electron is usually found, is the first one, the 1s state (principal quantum level n = 1, ℓ = 0).
Black lines occur in each but the first orbital: these are the nodes of the wavefunction, i.e. where the probability density is zero. (More precisely, the nodes are spherical harmonics that appear as a result of solving the Schrödinger equation in spherical coordinates.)
The quantum numbers determine the layout of these nodes. There are:
$n - 1$ total nodes,
$\ell$ of which are angular nodes:
$m$ angular nodes go around the axis (in the xy plane). (The figure above does not show these nodes since it plots cross-sections through the xz-plane.)
$\ell - m$ (the remaining angular nodes) occur on the (vertical) axis.
$n - \ell - 1$ (the remaining non-angular nodes) are radial nodes.
Oscillation of orbitals
The frequency of a state in level $n$ is $\nu_n = E_n / h$, so in the case of a superposition of multiple orbitals, they would oscillate due to the difference in frequency. For example, for two states $\psi_1$ and $\psi_2$, the wavefunction is given by
$\Psi = \psi_1 e^{-i E_1 t/\hbar} + \psi_2 e^{-i E_2 t/\hbar},$
and, for real $\psi_1$ and $\psi_2$, the probability function is
$|\Psi|^2 = \psi_1^2 + \psi_2^2 + 2\,\psi_1 \psi_2 \cos\!\left(\frac{(E_2 - E_1)\, t}{\hbar}\right).$
The result is a rotating wavefunction. The movement of electrons and change of quantum states radiates light at a frequency of the cosine.
Features going beyond the Schrödinger solution
There are several important effects that are neglected by the Schrödinger equation and which are responsible for certain small but measurable deviations of the real spectral lines from the predicted ones:
Although the mean speed of the electron in hydrogen is only 1/137th of the speed of light, many modern experiments are sufficiently precise that a complete theoretical explanation requires a fully relativistic treatment of the problem. A relativistic treatment results in a momentum increase of about 1 part in 37,000 for the electron. Since the electron's wavelength is determined by its momentum, orbitals containing higher speed electrons show contraction due to smaller wavelengths.
Even when there is no external magnetic field, in the inertial frame of the moving electron, the electromagnetic field of the nucleus has a magnetic component. The spin of the electron has an associated magnetic moment which interacts with this magnetic field. This effect is also explained by special relativity, and it leads to the so-called spin–orbit coupling, i.e., an interaction between the electron's orbital motion around the nucleus, and its spin.
Both of these features (and more) are incorporated in the relativistic Dirac equation, with predictions that come still closer to experiment. Again the Dirac equation may be solved analytically in the special case of a two-body system, such as the hydrogen atom. The resulting solution quantum states now must be classified by the total angular momentum number $j$ (arising through the coupling between electron spin and orbital angular momentum). States of the same $n$ and the same $j$ are still degenerate. Thus, direct analytical solution of the Dirac equation predicts the 2S$_{1/2}$ and 2P$_{1/2}$ levels of hydrogen to have exactly the same energy, which is in contradiction with observations (Lamb–Retherford experiment).
There are always vacuum fluctuations of the electromagnetic field, according to quantum mechanics. Due to such fluctuations, the degeneracy between states of the same $j$ but different $\ell$ is lifted, giving them slightly different energies. This has been demonstrated in the famous Lamb–Retherford experiment and was the starting point for the development of the theory of quantum electrodynamics (which is able to deal with these vacuum fluctuations and employs the famous Feynman diagrams for approximations using perturbation theory). This effect is now called the Lamb shift.
For these developments, it was essential that the solution of the Dirac equation for the hydrogen atom could be worked out exactly, such that any experimentally observed deviation had to be taken seriously as a signal of failure of the theory.
Alternatives to the Schrödinger theory
In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli using a rotational symmetry in four dimensions [O(4)-symmetry] generated by the angular momentum
and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2),
the entire spectrum and all transitions were embedded in a single irreducible group representation.
In 1979 the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation
of quantum mechanics by Duru and Kleinert. This work greatly extended the range of applicability of Feynman's method.
Further alternative models are Bohm mechanics and the complex Hamilton-Jacobi formulation of quantum mechanics.
See also
Antihydrogen
Atomic orbital
Balmer series
Helium atom
Hydrogen molecular ion
List of quantum-mechanical systems with analytical solutions
Lithium atom
Proton decay
Quantum chemistry
Quantum state
Trihydrogen cation
References
Books
Section 4.2 deals with the hydrogen atom specifically, but all of Chapter 4 is relevant.
Kleinert, H. (2009). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, Worldscibooks.com, World Scientific, Singapore (also available online physik.fu-berlin.de)
External links
The Hydrogen Atom and The Periodic Table - The Feynman Lectures on Physics
Physics of hydrogen atom on Scienceworld
Atoms
Quantum models
Hydrogen
Hydrogen physics
Isotopes of hydrogen
Exactly solvable models
pl:Wodór atomowy | Hydrogen atom | [
"Physics",
"Chemistry"
] | 4,952 | [
"Isotopes of hydrogen",
"Quantum mechanics",
"Isotopes",
"Quantum models",
"Atoms",
"Matter"
] |
14,231 | https://en.wikipedia.org/wiki/Hairpin | A hairpin or hair pin is a long device used to hold a person's hair in place. It may be used simply to secure long hair out of the way for convenience or as part of an elaborate hairstyle or coiffure. The earliest evidence for dressing the hair may be seen in carved "Venus figurines" such as the Venus of Brassempouy and the Venus of Willendorf. The creation of different hairstyles, especially among women, seems to be common to all cultures and all periods and many past, and current, societies use hairpins.
Hairpins made of metal, ivory, bronze, carved wood, etc. were used in ancient Egypt for securing decorated hairstyles. Such hairpins suggest, as graves show, that many were luxury objects among the Egyptians and later the Greeks, Etruscans, and Romans. Major success came in 1901 with the invention of the spiral hairpin by New Zealand inventor Ernest Godward. This was a predecessor of the hair clip.
The hairpin may be decorative and encrusted with jewels and ornaments, or it may be utilitarian, and designed to be almost invisible while holding a hairstyle in place. Some hairpins are a single straight pin, but modern versions are more likely to be constructed from different lengths of wire that are bent in half with a u-shaped end and a few kinks along the two opposite portions. The finished pin may vary from two to six inches in final length. The length of the wires enables placement in several designs of hairstyles to hold the style in place. The kinks enable retaining the pin during normal movements.
A hairpin patent was issued to Kelly Chamandy in 1925.
Hairpins in Chinese culture
Hairpins (generally known as ; ) are an important symbol in Chinese culture. In ancient China, hairpins were worn by men as well as women, and they were essential items for everyday hairstyling, mainly for securing and decorating a hair bun. Furthermore, hairpins worn by women could also represent their social status.
In Han Chinese culture, when young girls reached the age of fifteen, they were allowed to take part in a rite of passage known as (), or "hairpin initiation". This ceremony marked the coming of age of young women. Particularly, before the age of fifteen, girls did not use hairpins as they wore their hair in braids, and they were considered as children. When they turned fifteen, they could be considered as young women after the ceremony, and they started to style their hair as buns secured and embellished by hairpins. This practice indicated that these young women could now enter into marriage. However, if a young woman had not been consented to marriage before age twenty, or she had not yet participated in a coming of age ceremony, she would attend a ceremony when she turned twenty.
In comparison with , the male equivalent known as () or "hat initiation", usually took place five years later, at the age of twenty. In the 21st century hanfu movement, an attempt to revive the traditional Han Chinese coming-of-age ceremonies has been made, and the ideal age to attend the ceremony is twenty years old for all genders.
While hairpins can symbolize the transition from childhood to adulthood, they were closely connected to the concept of marriage as well. At the time of an engagement, the fiancée may take a hairpin from her hair and give it to her fiancé as a pledge: this can be seen as a reversal of the Western tradition, in which the future groom presents an engagement ring to his betrothed. After the wedding ceremony, the husband should put the hairpin back into his spouse's hair.
Hair has always carried many psychological, philosophical, romantic, and cultural meanings in Chinese culture. In Han culture, people call the union between two people (), literally "tying hair". During the wedding ceremony, some Chinese couples exchange a lock of hair as a pledge, while others break a hairpin into two parts, and then, each of the betrothed take one part with them for keeping. If this couple were ever to get separated in the future, when they reunite, they can piece the two halves together, and the completed hairpin would serve as a proof of their identities as well as a symbol of their reunion. In addition, a married couple is sometimes referred to as (), an idiom which implies the relationship between the pair is very intimate and happy, just like how their hair has been tied together.
Gallery
See also
Bobby pin
Hair stick
Hatpin
Hair clip
References
External links
Fasteners
Hairdressing
Types of jewellery | Hairpin | [
"Engineering"
] | 950 | [
"Construction",
"Fasteners"
] |
14,233 | https://en.wikipedia.org/wiki/Hate%20speech | Hate speech is a term with varied meaning and has no single, consistent definition. It is defined by the Cambridge Dictionary as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". The Encyclopedia of the American Constitution states that hate speech is "usually thought to include communications of animosity or disparagement of an individual or a group on account of a group characteristic such as race, color, national origin, sex, disability, religion, or sexual orientation". There is no single definition of what constitutes "hate" or "disparagement". Legal definitions of hate speech vary from country to country.
There has been much debate over freedom of speech, hate speech, and hate speech legislation. The laws of some countries describe hate speech as speech, gestures, conduct, writing, or displays that incite violence or prejudicial actions against a group or individuals on the basis of their membership in the group, or that disparage or intimidate a group or individuals on the basis of their membership in the group. The law may identify protected groups based on certain characteristics. In some countries, including the United States, what is usually labelled "hate speech" is constitutionally protected. In some other countries, a victim of hate speech may seek redress under civil law, criminal law, or both.
Hate speech is generally accepted to be one of the prerequisites for mass atrocities such as genocide. Incitement to genocide is an extreme form of hate speech, and has been prosecuted in international courts such as the International Criminal Tribunal for Rwanda.
History
Starting in the 1940s and 50s, various American civil rights groups responded to the atrocities of World War II by advocating for restrictions on hateful speech targeting groups on the basis of race and religion. These organizations used group libel as a legal framework for describing the violence of hate speech and addressing its harm. In his discussion of the history of criminal libel, scholar Jeremy Waldron states that these laws helped "vindicate public order, not just by preempting violence, but by upholding against attack a shared sense of the basic elements of each person's status, dignity, and reputation as a citizen or member of society in good standing". A key legal victory for this view came in 1952 when group libel law was affirmed by the United States Supreme Court in Beauharnais v. Illinois. However, the group libel approach lost ground due to a rise in support for individual rights within civil rights movements during the 60s. Critiques of group defamation laws are not limited to defenders of individual rights. Some legal theorists, such as critical race theorist Richard Delgado, support legal limits on hate speech, but claim that defamation is too narrow a category to fully counter hate speech. Ultimately, Delgado advocates a legal strategy that would establish a specific section of tort law for responding to racist insults, citing the difficulty of receiving redress under the existing legal system.
Hate speech laws
After World War II, Germany criminalized Volksverhetzung ("incitement of popular hatred") to prevent resurgence of Nazism. Hate speech on the basis of sexual orientation and gender identity also is banned in Germany. Most European countries have likewise implemented various laws and regulations regarding hate speech, and the European Union's Framework Decision 2008/913/JHA requires member states to criminalize hate crimes and speech (though individual implementation and interpretation of this framework varies by state).
International human rights laws from the United Nations Human Rights Committee have been protecting freedom of expression, and one of the most fundamental documents is the Universal Declaration of Human Rights (UDHR) drafted by the U.N. General Assembly in 1948. Article 19 of the UDHR states that "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
While there are fundamental laws in place designed to protect freedom of expression, there are also multiple international laws that expand on the UDHR and pose limitations and restrictions, specifically concerning the safety and protection of individuals.
The Committee on the Elimination of Racial Discrimination (CERD) was the first to address hate speech and the need to establish legislation prohibiting inflammatory types of language.
The CERD addresses hate speech through the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) and monitors its implementation by State parties.
Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR) permits restrictions on the human right of freedom of expression only when provided by law, and when necessary to protect "rights or reputations of others", or for "protection of national security or of public order (ordre public), or of public health or morals".
Article 20(2) of the ICCPR prohibits national, religious, or racial hatred that incites violence, discrimination, or hostility.
Most developed democracies have laws that restrict hate speech, including Australia, Canada, Denmark, France, Germany, India, Ireland, South Africa, Sweden, New Zealand, and the United Kingdom. The United States does not have hate speech laws, because the U.S. Supreme Court has repeatedly ruled that they violate the guarantee to freedom of speech contained in the First Amendment to the U.S. Constitution.
Laws against hate speech can be divided into two types: those intended to preserve public order and those intended to protect human dignity. The laws designed to protect public order require that a higher threshold be violated, so they are not often enforced. For example, a 1992 study found that only one person was prosecuted in Northern Ireland in the preceding 21 years for violating a law against incitement to religious violence. The laws meant to protect human dignity have a much lower threshold for violation, so those in Canada, Denmark, France, Germany and the Netherlands tend to be more frequently enforced.
State-sanctioned hate speech
A few states and political actors, including Saudi Arabia, Iran, Hutu factions in Rwanda, actors in the Yugoslav Wars, and Ethiopia, have been described as spreading official hate speech or incitement to genocide.
Internet
The rise of the internet and social media has presented a new medium through which hate speech can spread. Hate speech on the internet can be traced back to its earliest years: a 1983 bulletin board system created by neo-Nazi George Dietz is considered the first instance of hate speech online. As the internet evolved, hate speech continued to spread and expand its footprint; the first hate speech website, Stormfront, was published in 1996, and hate speech has since become one of the central challenges for social media platforms.
The structure and nature of the internet contribute to both the creation and persistence of hate speech online. The widespread use of and access to the internet gives hate mongers an easy way to spread their message to wide audiences with little cost and effort. According to the International Telecommunication Union, approximately 66% of the world population has access to the internet. Additionally, the pseudo-anonymous nature of the internet emboldens many to make statements constituting hate speech that they otherwise would not, for fear of social or real-life repercussions. While some governments and companies attempt to combat this type of behavior by leveraging real-name systems, difficulties in verifying identities online, public opposition to such policies, and sites that do not enforce these policies leave large spaces for this behavior to persist.
Because the internet crosses national borders, comprehensive government regulations on online hate speech can be difficult to implement and enforce. Governments who want to regulate hate speech contend with issues around lack of jurisdiction and conflicting viewpoints from other countries. In an early example of this, the case of Yahoo! Inc. v. La Ligue Contre Le Racisme et l'Antisemitisme had a French court hold Yahoo! liable for allowing Nazi memorabilia auctions to be visible to the public. Yahoo! refused to comply with the ruling and ultimately won relief in a U.S. court which found that the ruling was unenforceable in the U.S. Disagreements like these make national level regulations difficult, and while there are some international efforts and laws that attempt to regulate hate speech and its online presence, as with most international agreements the implementation and interpretation of these treaties varies by country.
Much of the regulation regarding online hate speech is performed voluntarily by individual companies. Many major tech companies have adopted terms of service which outline allowed content on their platforms, often banning hate speech. In a notable step for this, on 31 May 2016, Facebook, Google, Microsoft, and Twitter jointly agreed to a European Union code of conduct obligating them to review "[the] majority of valid notifications for removal of illegal hate speech" posted on their services within 24 hours. Techniques employed by these companies to regulate hate speech include user reporting, artificial intelligence flagging, and manual review of content by employees. Major search engines like Google Search also tweak their algorithms to try to suppress hateful content from appearing in their results. However, despite these efforts, hate speech remains a persistent problem online. According to a 2021 study by the Anti-Defamation League, 33% of Americans were the target of identity-based harassment in the preceding year, a statistic which has not noticeably shifted downwards despite increasing self-regulation by companies.
Commentary
Several activists and scholars have criticized the practice of limiting hate speech. Kim Holmes, Vice President of the conservative Heritage Foundation and a critic of hate speech theory, has argued that it "assumes bad faith on the part of people regardless of their stated intentions" and that it "obliterates the ethical responsibility of the individual". Rebecca Ruth Gould, a professor of Islamic and Comparative Literature at the University of Birmingham, argues that laws against hate speech constitute viewpoint discrimination (which is prohibited by the First Amendment in the United States) as the legal system punishes some viewpoints but not others. Other scholars, such as Gideon Elford, argue instead that "insofar as hate speech regulation targets the consequences of speech that are contingently connected with the substance of what is expressed then it is viewpoint discriminatory in only an indirect sense." John Bennett argues that restricting hate speech relies on questionable conceptual and empirical foundations and is reminiscent of efforts by totalitarian regimes to control the thoughts of their citizens.
Civil libertarians say that hate speech laws have been used, in both developing and developed nations, to persecute minority viewpoints and critics of the government. Former ACLU president Nadine Strossen says that, while efforts to censor hate speech have the goal of protecting the most vulnerable, they are ineffective and may have the opposite effect: disadvantaged and ethnic minorities being charged with violating laws against hate speech. Journalist Glenn Greenwald says that hate speech laws in Europe have been used to censor left-wing views as much as they have been used to combat hate speech.
Miisa Kreandner and Eriz Henze argue that hate speech laws are arbitrary, as they only protect some categories of people but not others. Henze argues the only way to resolve this problem without abolishing hate speech laws would be to extend them to all possible conceivable categories, which Henze argues would amount to totalitarian control over speech.
Michael Conklin argues that there are benefits to hate speech that are often overlooked. He contends that allowing hate speech provides a more accurate view of the human condition, provides opportunities to change people's minds, and identifies certain people that may need to be avoided in certain circumstances. According to one psychological research study, a high degree of psychopathy is "a significant predictor" for involvement in online hate activity, while none of the other 7 potential factors examined were found to have a statistically significant predictive power.
Political philosopher Jeffrey W. Howard considers the popular framing of hate speech as "free speech vs. other political values" as a mischaracterization. He refers to this as the "balancing model", and says it seeks to weigh the benefit of free speech against other values such as dignity and equality for historically marginalized groups. Instead, he believes that the crux of debate should be whether or not freedom of expression is inclusive of hate speech. Research indicates that when people support censoring hate speech, they are motivated more by concerns about the effects the speech has on others than they are about its effects on themselves. Women are somewhat more likely than men to support censoring hate speech due to greater perceived harm of hate speech, which some researchers believe may be due to gender differences in empathy towards targets of hate speech.
See also
Antilocution
Genocide justification
Incitement to terrorism
Psychology of genocide
Risk factors for genocide
References
External links
TANDIS (Tolerance and Non-Discrimination Information System), developed by the OSCE Office for Democratic Institutions and Human Rights
Reconciling Rights and Responsibilities of Colleges and Students: Offensive Speech, Assembly, Drug Testing and Safety
From Discipline to Development: Rethinking Student Conduct in Higher Education
Sexual Minorities on Community College Campuses
The Foundation for Individual Rights in Education
Activities to tackle Hate speech
Survivor bashing – bias motivated hate crimes
"Striking the right balance" by Agnès Callamard, for Article 19
Hate speech, a factsheet by the European Court of Human Rights, 2015
Recommendation No. R (97) 20 Committee of Ministers of the Council of Europe 1997
Ableism
Censorship
Freedom of speech
Harassment and bullying
Harassment law
Hate crime
Homophobia
LGBTQ and society
Linguistic controversies
Political terminology
Racism
Sexism
Transphobia
Islamophobia
Authoritarianism
Totalitarianism | Hate speech | ["Biology"] | 2,746 | ["Harassment and bullying", "Behavior", "Aggression", "Discrimination"] |
14,263 | https://en.wikipedia.org/wiki/Horner%27s%20method | In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.
The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

$a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n = a_0 + x\bigl(a_1 + x\bigl(a_2 + x\bigl(a_3 + \cdots + x(a_{n-1} + x\,a_n)\cdots\bigr)\bigr)\bigr).$

This allows the evaluation of a polynomial of degree $n$ with only $n$ multiplications and $n$ additions. This is optimal, since there are polynomials of degree $n$ that cannot be evaluated with fewer arithmetic operations.
Alternatively, Horner's method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.
Polynomial evaluation and long division
Given the polynomial

$p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n,$

where $a_0, \ldots, a_n$ are constant coefficients, the problem is to evaluate the polynomial at a specific value $x_0$ of $x$.
For this, a new sequence of constants is defined recursively as follows:

$b_n = a_n$
$b_{n-1} = a_{n-1} + b_n x_0$
$\vdots$
$b_1 = a_1 + b_2 x_0$
$b_0 = a_0 + b_1 x_0.$

Then $b_0$ is the value of $p(x_0)$.
To see why this works, the polynomial can be written in the form

$p(x) = a_0 + x\bigl(a_1 + x\bigl(a_2 + \cdots + x(a_{n-1} + x\,a_n)\cdots\bigr)\bigr).$

Thus, by iteratively substituting the $b_i$ into the expression,

$p(x_0) = a_0 + x_0\bigl(a_1 + x_0\bigl(a_2 + \cdots + x_0(a_{n-1} + x_0\,a_n)\cdots\bigr)\bigr) = a_0 + x_0\bigl(a_1 + x_0\bigl(a_2 + \cdots + x_0 b_{n-1}\bigr)\bigr) = \cdots = a_0 + x_0 b_1 = b_0.$

Now, it can be proven that

$p(x) = \bigl(b_1 + b_2 x + b_3 x^2 + \cdots + b_{n-1} x^{n-2} + b_n x^{n-1}\bigr)(x - x_0) + b_0.$

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of

$p(x) / (x - x_0),$

with $b_0$ (which is equal to $p(x_0)$) being the division's remainder, as is demonstrated by the examples below. If $x_0$ is a root of $p(x)$, then $b_0 = 0$ (meaning the remainder is $0$), which means you can factor $p(x)$ as $(x - x_0)\bigl(b_1 + b_2 x + \cdots + b_n x^{n-1}\bigr)$.
To find the consecutive $b$-values, you start by determining $b_n$, which is simply equal to $a_n$. You then work recursively using the formula

$b_i = a_i + b_{i+1} x_0,$

till you arrive at $b_0$.
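As an illustrative sketch (not part of the original article), the recursion above can be written in a few lines of Python; the function name `horner` and the convention that coefficients are listed from $a_0$ up to $a_n$ are assumptions made for this example.

```python
def horner(coeffs, x0):
    """Evaluate p(x0) where p(x) = a_0 + a_1*x + ... + a_n*x**n.

    `coeffs` lists a_0, ..., a_n in ascending order of degree.  The loop is
    exactly the recursion b_n = a_n, b_i = a_i + b_{i+1} * x0, and the final
    value is b_0 = p(x0).
    """
    result = 0
    for a in reversed(coeffs):   # start from a_n and work down to a_0
        result = result * x0 + a
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3 (the first example below) gives 5.
assert horner([-1, 2, -6, 2], 3) == 5
```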
Examples
Evaluate $f(x) = 2x^3 - 6x^2 + 2x - 1$ for $x = 3$.
We use synthetic division as follows:
x₀│ x³ x² x¹ x⁰
3 │ 2 −6 2 −1
│ 6 0 6
└────────────────────────
2 0 2 5
The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the $x$-value ($3$ in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of $f(x)$ on division by $x - 3$ is $5$.
But by the polynomial remainder theorem, we know that the remainder is $f(3)$. Thus, $f(3) = 5$.
In this example, if $a_3 = 2$, $a_2 = -6$, $a_1 = 2$, $a_0 = -1$, we can see that $b_3 = 2$, $b_2 = 0$, $b_1 = 2$, $b_0 = 5$, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.
As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of $f(x)$ on division by $x - 3$.
The remainder is $5$. This makes Horner's method useful for polynomial long division.
Divide $x^3 - 6x^2 + 11x - 6$ by $x - 2$:
2 │ 1 −6 11 −6
│ 2 −8 6
└────────────────────────
1 −4 3 0
The quotient is $x^2 - 4x + 3$.
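The tables above are this same recursion written out by hand. A hedged Python sketch of the bookkeeping (the function name and the highest-degree-first coefficient ordering are choices made for this example, not taken from the article) returns both the quotient coefficients and the remainder of division by $x - x_0$:

```python
def synthetic_division(coeffs, x0):
    """Divide a polynomial by (x - x0) with Horner's scheme.

    `coeffs` lists the coefficients from the highest degree down to the
    constant term, mirroring the first row of the tables above.  Returns
    (quotient, remainder) with the quotient also ordered highest degree first.
    """
    rows = []
    acc = 0
    for a in coeffs:
        acc = acc * x0 + a       # third-row entry: x0 times the previous entry, plus a
        rows.append(acc)
    remainder = rows.pop()       # the final entry is the remainder b_0
    return rows, remainder

# Example from above: (x^3 - 6x^2 + 11x - 6) / (x - 2) = x^2 - 4x + 3, remainder 0.
quotient, remainder = synthetic_division([1, -6, 11, -6], 2)
assert quotient == [1, -4, 3] and remainder == 0
```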
Let $f_1(x) = 4x^4 - 6x^3 + 3x - 5$ and $f_2(x) = 2x - 1$. Divide $f_1(x)$ by $f_2(x)$ using Horner's method.
0.5 │ 4 −6 0 3 −5
│ 2 −2 −1 1
└───────────────────────
2 −2 −1 1 −4
The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of $1$ with the third-row entry to the left. The answer is

$\dfrac{f_1(x)}{f_2(x)} = 2x^3 - 2x^2 - x + 1 - \dfrac{4}{2x - 1}.$
Efficiency
Evaluation using the monomial form of a degree-$n$ polynomial requires at most $n$ additions and $(n^2 + n)/2$ multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to $n$ additions and $2n - 1$ multiplications by evaluating the powers of $x$ by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately $2n$ times the number of bits of $x$: the evaluated polynomial has approximate magnitude $x^n$, and one must also store $x^n$ itself. By contrast, Horner's method requires only $n$ additions and $n$ multiplications, and its storage requirements are only $n$ times the number of bits of $x$. Alternatively, Horner's method can be computed with $n$ fused multiply–adds. Horner's method can also be extended to evaluate the first $k$ derivatives of the polynomial with $kn$ additions and multiplications.
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when $x$ is a matrix, Horner's method is not optimal.
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-$n$ polynomial can be evaluated using only $\lfloor n/2 \rfloor + 2$ multiplications and $n$ additions.
Parallel evaluation
A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.
If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

$p(x) = \sum_{i=0}^{n} a_i x^i = \bigl(a_0 + a_2 x^2 + a_4 x^4 + \cdots\bigr) + x\bigl(a_1 + a_3 x^2 + a_5 x^4 + \cdots\bigr) = p_0(x^2) + x\,p_1(x^2).$

More generally, the summation can be broken into k parts:

$p(x) = \sum_{i=0}^{n} a_i x^i = \sum_{j=0}^{k-1} x^j \sum_{i=0}^{\lfloor n/k \rfloor} a_{ki+j}\, x^{ki},$
where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math.
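As a rough illustration of the two-way split (the helper names and the ascending coefficient ordering are assumptions of this sketch, not part of the article), the even- and odd-indexed coefficients can be evaluated by two independent Horner runs in $x^2$ and then recombined:

```python
def horner(coeffs, x):
    """Plain Horner evaluation; `coeffs` lists a_0, ..., a_n in ascending order."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def horner_two_way(coeffs, x):
    """Evaluate p(x) as p_0(x^2) + x * p_1(x^2).

    The two inner evaluations depend only on the even- and odd-indexed
    coefficients respectively, so they are independent of each other and
    could run in separate SIMD lanes or threads; here they simply run in
    sequence.
    """
    x2 = x * x
    even = coeffs[0::2]          # a_0, a_2, a_4, ...
    odd = coeffs[1::2]           # a_1, a_3, a_5, ...
    return horner(even, x2) + x * horner(odd, x2)

# Both forms agree, e.g. for p(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 at x = 2.
assert horner([1, 2, 3, 4, 5], 2) == horner_two_way([1, 2, 3, 4, 5], 2) == 129
```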
Application to floating-point multiplication and division
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) $a_i = 1$ and $x = 2$. Then, $x$ (or $x$ to some power) is repeatedly factored out. In this binary numeral system (base 2), $x = 2$, so powers of 2 are repeatedly factored out.
Example
For example, to find the product of two numbers (0.15625) and m:

$(0.15625)\,m = (0.00101_2)\,m = \bigl(2^{-3} + 2^{-5}\bigr)\,m = 2^{-3}\bigl(m + 2^{-2}\,m\bigr).$
Method
To find the product of two binary numbers d and m:
A register holding the intermediate result is initialized to d.
Begin with the least significant (rightmost) non-zero bit in m.
If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m.
Derivation
In general, for a binary number with bit values ($d_3 d_2 d_1 d_0$) the product is

$(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0)\,m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.$

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted, thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

$= d_0\Bigl(m + 2\tfrac{d_1}{d_0}\Bigl(m + 2\tfrac{d_2}{d_1}\Bigl(m + 2\tfrac{d_3}{d_2}\,m\Bigr)\Bigr)\Bigr).$

The denominators all equal one (or the term is absent), so this reduces to

$= d_0\bigl(m + 2 d_1\bigl(m + 2 d_2\bigl(m + 2 d_3\,m\bigr)\bigr)\bigr),$

or equivalently (as consistent with the "method" described above)

$= \bigl(\bigl(d_3 m \times 2 + d_2 m\bigr) \times 2 + d_1 m\bigr) \times 2 + d_0 m.$
In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor $2^{-1}$ is a right arithmetic shift, a factor of $2^{0}$ results in no operation (since $2^{0} = 1$ is the multiplicative identity element), and a factor of $2^{1}$ results in a left arithmetic shift.
The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
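A minimal sketch of this shift-and-add scheme for non-negative integers, assuming Python and a most-significant-bit-first scan of the multiplier (both choices of this example, not of the article):

```python
def shift_add_multiply(d, m):
    """Multiply two non-negative integers with only shifts and additions.

    The bits of m are scanned from the most significant end, applying
    Horner's rule in base 2: acc = (acc << 1) + (d if the bit is 1 else 0).
    """
    acc = 0
    for bit in format(m, "b"):   # binary digits of m, most significant first
        acc <<= 1                # multiply the running result by 2
        if bit == "1":
            acc += d             # add d wherever the multiplier bit is 1
    return acc

assert shift_add_multiply(37, 41) == 37 * 41
```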
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.
Other applications
Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the ai coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.
Polynomial root finding
Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial $p_n(x)$ of degree $n$ with zeros $z_n < z_{n-1} < \cdots < z_1$, make some initial guess $x_0$ such that $x_0 > z_1$. Now iterate the following two steps:
Using Newton's method, find the largest zero $z_1$ of $p_n(x)$ using the guess $x_0$.
Using Horner's method, divide out $(x - z_1)$ to obtain $p_{n-1}(x)$. Return to step 1 but use the polynomial $p_{n-1}(x)$ and the initial guess $z_1$.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.
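A hedged sketch of this Newton-plus-deflation loop in Python follows; it assumes a polynomial with only real zeros and an initial guess above the largest of them, as the description requires, and the function names are invented for this example.

```python
def poly_eval_and_derivative(coeffs, x):
    """Return (p(x), p'(x)) in one Horner-style pass.

    `coeffs` lists a_0, ..., a_n in ascending order of degree.
    """
    p, dp = 0.0, 0.0
    for a in reversed(coeffs):
        dp = dp * x + p          # derivative accumulates the previous value of p
        p = p * x + a
    return p, dp

def newton(coeffs, x, tol=1e-12, max_iter=100):
    """Refine the guess x toward a zero of the polynomial with Newton's method."""
    for _ in range(max_iter):
        p, dp = poly_eval_and_derivative(coeffs, x)
        if abs(p) < tol:
            break
        x -= p / dp
    return x

def deflate(coeffs, root):
    """Divide the polynomial by (x - root) with Horner's scheme, dropping the remainder."""
    b = 0.0
    descending = []
    for a in reversed(coeffs):          # a_n down to a_0
        b = b * root + a
        descending.append(b)
    descending.pop()                    # the last value is the (near-zero) remainder b_0
    return list(reversed(descending))   # quotient coefficients b_1, ..., b_n, ascending

def real_roots(coeffs, guess):
    """Find one zero with Newton's method, divide it out, and repeat."""
    roots = []
    while len(coeffs) > 1:
        r = newton(coeffs, guess)
        roots.append(r)
        coeffs = deflate(coeffs, r)
        guess = r                # the zero just found seeds the next search
    return roots

# Example: p(x) = (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8, guess above the largest zero.
roots = real_roots([-8.0, 14.0, -7.0, 1.0], guess=5.0)
assert [round(r) for r in roots] == [4, 2, 1]
```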
Example
Consider the polynomial

$p_6(x) = (x+8)(x+5)(x+3)(x-2)(x-3)(x-7),$

which can be expanded to

$p_6(x) = x^6 + 4x^5 - 72x^4 - 214x^3 + 1127x^2 + 1602x - 5040.$

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next $p(x)$ is divided by $(x - 7)$ to obtain

$p_5(x) = x^5 + 11x^4 + 5x^3 - 179x^2 - 126x + 720,$

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree 5 polynomial is now divided by $(x - 3)$ to obtain

$p_4(x) = x^4 + 14x^3 + 47x^2 - 38x - 240,$

which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain

$p_3(x) = x^3 + 16x^2 + 79x + 120,$

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

$p_2(x) = x^2 + 13x + 40,$

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing $p_2(x)$ and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
Divided difference of a polynomial
Horner's method can be modified to compute the divided difference $\dfrac{p(y) - p(x)}{y - x}.$ Given the polynomial (as before)

$p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n,$

proceed as follows:

$b_n = a_n, \qquad d_n = b_n,$
$b_{n-1} = a_{n-1} + b_n x, \qquad d_{n-1} = b_{n-1} + d_n y,$
$\vdots$
$b_1 = a_1 + b_2 x, \qquad d_1 = b_1 + d_2 y,$
$b_0 = a_0 + b_1 x.$

At completion, we have

$p(x) = b_0, \qquad \dfrac{p(y) - p(x)}{y - x} = d_1, \qquad p(y) = b_0 + (y - x)\, d_1.$

This computation of the divided difference is subject to less round-off error than evaluating $p(x)$ and $p(y)$ separately, particularly when $x \approx y$. Substituting $y = x$ in this method gives $d_1 = p'(x)$, the derivative of $p(x)$.
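A small Python sketch of this paired recursion (the function name and ascending coefficient ordering are assumptions of the example); with `y == x` the second return value is the derivative $p'(x)$:

```python
def horner_divided_difference(coeffs, x, y):
    """Return (p(x), (p(y) - p(x)) / (y - x)) in one Horner-style pass.

    `coeffs` lists a_0, ..., a_n (n >= 1) in ascending order of degree.
    The b-values follow Horner's recursion at x; the d-values accumulate
    the divided difference at y.
    """
    b = coeffs[-1]                      # b_n = a_n
    d = b                               # d_n = b_n
    for a in reversed(coeffs[1:-1]):    # a_{n-1}, ..., a_1
        b = a + b * x                   # b_i = a_i + b_{i+1} * x
        d = b + d * y                   # d_i = b_i + d_{i+1} * y
    return coeffs[0] + b * x, d         # (b_0 = p(x), d_1)

# p(t) = t^3 - 2t + 1: p(2) = 5, p(3) = 22, so the divided difference is 17.
px, dd = horner_divided_difference([1, -2, 0, 1], 2, 3)
assert px == 5 and dd == 17
```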
History
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).
Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.
Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
Paolo Ruffini in 1809 (see Ruffini's rule)
Isaac Newton in 1669
the Chinese mathematician Zhu Shijie in the 14th century
the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century
the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation)
the Chinese mathematician Jia Xian in the 11th century (Song dynasty)
The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century).
Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:
Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said, Fibonacci probably learned of it from Arabs, who perhaps borrowed from the Chinese. The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
See also
Clenshaw algorithm to evaluate polynomials in Chebyshev form
De Boor's algorithm to evaluate splines in B-spline form
De Casteljau's algorithm to evaluate polynomials in Bézier form
Estrin's scheme to facilitate parallelization on modern computer architectures
Lill's method to approximate roots graphically
Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r
Notes
References
Read before the Southwestern Section of the American Mathematical Society on November 26, 1910.
Holdred's method is in the supplement following page numbered 45 (which is the 52nd page of the pdf version).
Directly available online via the link, but also reprinted with appraisal in D.E. Smith: A Source Book in Mathematics, McGraw-Hill, 1929; Dover reprint, 2 vols, 1959.
Reprinted from issues of The North China Herald (1852).
External links
Qiu Jin-Shao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.)
For more on the root-finding application see
Computer algebra
Polynomials
Numerical analysis | Horner's method | ["Mathematics", "Technology"] | 3,187 | ["Polynomials", "Computer algebra", "Computational mathematics", "Computer science", "Mathematical relations", "Numerical analysis", "Approximations", "Algebra"] |
14,269 | https://en.wikipedia.org/wiki/Hypnotic | Hypnotic (from Greek Hypnos, sleep), or soporific drugs, commonly known as sleeping pills, are a class of (and umbrella term for) psychoactive drugs whose primary function is to induce sleep (or surgical anesthesia) and to treat insomnia (sleeplessness).
This group of drugs is related to sedatives. Whereas the term sedative describes drugs that serve to calm or relieve anxiety, the term hypnotic generally describes drugs whose main purpose is to initiate, sustain, or lengthen sleep. Because these two functions frequently overlap, and because drugs in this class generally produce dose-dependent effects (ranging from anxiolysis to loss of consciousness), they are often referred to collectively as sedative–hypnotic drugs.
Hypnotic drugs are regularly prescribed for insomnia and other sleep disorders, with over 95% of insomnia patients being prescribed hypnotics in some countries. Many hypnotic drugs are habit-forming and—due to many factors known to disturb the human sleep pattern—a physician may instead recommend changes in the environment before and during sleep, better sleep hygiene, the avoidance of caffeine and alcohol or other stimulating substances, or behavioral interventions such as cognitive behavioral therapy for insomnia (CBT-I), before prescribing medication for sleep. When prescribed, hypnotic medication should be used for the shortest period of time necessary.
Among individuals with sleep disorders, 13.7% are taking or prescribed nonbenzodiazepines, while 10.8% are taking benzodiazepines, as of 2010, in the USA. Early classes of drugs, such as barbiturates, have fallen out of use in most practices but are still prescribed for some patients. In children, prescribing hypnotics is not yet acceptable—unless used to treat night terrors or sleepwalking. Elderly people are more sensitive to potential side effects of daytime fatigue and cognitive impairments, and a meta-analysis found that the risks generally outweigh any marginal benefits of hypnotics in the elderly. A review of the literature regarding benzodiazepine hypnotics and Z-drugs concluded that these drugs can have adverse effects, such as dependence and accidents, and that optimal treatment uses the lowest effective dose for the shortest therapeutic time period, with gradual discontinuation in order to improve health without worsening of sleep.
Falling outside the above-mentioned categories, the neurohormone melatonin and its analogues (such as ramelteon) serve a hypnotic function.
History
Hypnotica was a class of somniferous drugs and substances tested in medicine of the 1890s and later. These include Urethan, Acetal, Methylal, Sulfonal, Paraldehyde, Amylenhydrate, Hypnon, Chloralurethan and Chloralamid or Chloralimid.
Research about using medications to treat insomnia evolved throughout the last half of the 20th century. Treatment for insomnia in psychiatry dates back to 1869, when chloral hydrate was first used as a soporific. Barbiturates emerged as the first class of drugs in the early 1900s, after which chemical substitution allowed derivative compounds. Although they were the best drug family at the time (with less toxicity and fewer side effects), they were dangerous in overdose and tended to cause physical and psychological dependence.
During the 1970s, quinazolinones and benzodiazepines were introduced as safer alternatives to replace barbiturates; by the late 1970s, benzodiazepines emerged as the safer drug.
Benzodiazepines are not without their drawbacks; substance dependence is possible, and deaths from overdoses sometimes occur, especially in combination with alcohol and/or other depressants. Questions have been raised as to whether they disturb sleep architecture.
Nonbenzodiazepines are the most recent development (1990s–present). Although it is clear that they are less toxic than barbiturates, their predecessors, comparative efficacy over benzodiazepines has not been established. Such efficacy is hard to determine without longitudinal studies. However, some psychiatrists recommend these drugs, citing research suggesting they are equally potent with less potential for abuse.
Other sleep remedies that may be considered "sedative–hypnotics" exist; psychiatrists will sometimes prescribe medicines off-label if they have sedating effects. Examples of these include mirtazapine (an antidepressant), clonidine (an older antihypertensive drug), quetiapine (an antipsychotic), and the over-the-counter allergy and antiemetic medications doxylamine and diphenhydramine. Off-label sleep remedies are particularly useful when first-line treatment is unsuccessful or deemed unsafe (as in patients with a history of substance abuse).
Types
Barbiturates
Barbiturates are drugs that act as central nervous system depressants, and can therefore produce a wide spectrum of effects, from mild sedation to total anesthesia. They are also effective as anxiolytics, hypnotics, and anticonvulsants; they have analgesic effects as well, but these effects are somewhat weak, preventing barbiturates from being used in surgery in the absence of other analgesics. They have dependence liability, both physical and psychological. Barbiturates have now largely been replaced by benzodiazepines in routine medical practice – such as in the treatment of anxiety and insomnia – mainly because benzodiazepines are significantly less dangerous in overdose. However, barbiturates are still used in general anesthesia, for epilepsy, and for assisted suicide. Barbiturates are derivatives of barbituric acid.
The principal mechanism of action of barbiturates is believed to be positive allosteric modulation of GABAA receptors.
Examples include amobarbital, pentobarbital, phenobarbital, secobarbital, and sodium thiopental.
Quinazolinones
Quinazolinones are also a class of drugs which function as hypnotic/sedatives that contain a 4-quinazolinone core. Their use has also been proposed in the treatment of cancer.
Examples of quinazolinones include cloroqualone, diproqualone, etaqualone (Aolan, Athinazone, Ethinazone), mebroqualone, Afloqualone (Arofuto), mecloqualone (Nubarene, Casfen), and methaqualone (Quaalude).
Benzodiazepines
Benzodiazepines can be useful for short-term treatment of insomnia. Their use beyond 2 to 4 weeks is not recommended due to the risk of dependence. It is preferred that benzodiazepines be taken intermittently—and at the lowest effective dose. They improve sleep-related problems by shortening the time spent in bed before falling asleep, prolonging the sleep time, and, in general, reducing wakefulness. Like alcohol, benzodiazepines are commonly used to treat insomnia in the short-term (both prescribed and self-medicated), but worsen sleep in the long-term. While benzodiazepines can put people to sleep (i.e., inhibit NREM stage 1 and 2 sleep), while asleep, the drugs disrupt sleep architecture by decreasing sleep time, delaying time to REM sleep, and decreasing deep slow-wave sleep (the most restorative part of sleep for both energy and mood).
Other drawbacks of hypnotics, including benzodiazepines, are possible tolerance to their effects, rebound insomnia, and reduced slow-wave sleep and a withdrawal period typified by rebound insomnia and a prolonged period of anxiety and agitation. The list of benzodiazepines approved for the treatment of insomnia is fairly similar among most countries, but which benzodiazepines are officially designated as first-line hypnotics prescribed for the treatment of insomnia can vary distinctly between countries. Longer-acting benzodiazepines such as nitrazepam and diazepam have residual effects that may persist into the next day and are, in general, not recommended.
It is not clear as to whether the new nonbenzodiazepine hypnotics (Z-drugs) are better than the short-acting benzodiazepines. The efficacy of these two groups of medications is similar. According to the US Agency for Healthcare Research and Quality, indirect comparison indicates that side-effects from benzodiazepines may be about twice as frequent as from nonbenzodiazepines. Some experts suggest using nonbenzodiazepines preferentially as a first-line long-term treatment of insomnia. However, the UK National Institute for Health and Clinical Excellence (NICE) did not find any convincing evidence in favor of Z-drugs. A NICE review pointed out that short-acting Z-drugs were inappropriately compared in clinical trials with long-acting benzodiazepines. There have been no trials comparing short-acting Z-drugs with appropriate doses of short-acting benzodiazepines. Based on this, NICE recommended choosing the hypnotic based on cost and the patient's preference.
Older adults should not use benzodiazepines to treat insomnia—unless other treatments have failed to be effective. When benzodiazepines are used, patients, their caretakers, and their physician should discuss the increased risk of harms, including evidence which shows twice the incidence of traffic collisions among driving patients, as well as falls and hip fracture for all older patients.
Their mechanism of action is primarily at GABAA receptors.
Nonbenzodiazepines
Nonbenzodiazepines are a class of psychoactive drugs that are very "benzodiazepine-like" in nature. Nonbenzodiazepine pharmacodynamics are almost entirely the same as benzodiazepine drugs, and therefore entail similar benefits, side-effects and risks. Nonbenzodiazepines, however, have dissimilar or entirely different chemical structures, and therefore are unrelated to benzodiazepines on a molecular level.
Examples include zopiclone (Imovane, Zimovane), eszopiclone (Lunesta), zaleplon (Sonata), and zolpidem (Ambien, Stilnox, Stilnoct). Since the generic names of all drugs of this type start with Z, they are often referred to as Z-drugs.
Research on nonbenzodiazepines is new and conflicting. A review by a team of researchers suggests the use of these drugs for people that have trouble falling asleep (but not staying asleep), as next-day impairments were minimal. The team noted that the safety of these drugs had been established, but called for more research into their long-term effectiveness in treating insomnia. Other evidence suggests that tolerance to nonbenzodiazepines may be slower to develop than with benzodiazepines. A different team was more skeptical, finding little benefit over benzodiazepines.
Others
Melatonin
Melatonin, the hormone produced in the pineal gland in the brain and secreted in dim light and darkness, among its other functions, promotes sleep in diurnal mammals. Ramelteon and tasimelteon are synthetic analogues of melatonin which are also used for sleep-related indications.
Antihistamines
In common use, the term antihistamine refers only to compounds that inhibit action at the H1 receptor (and not H2, etc.).
Clinically, H1 antagonists are used to treat certain allergies. Sedation is a common side-effect, and some H1 antagonists, such as diphenhydramine (Benadryl) and doxylamine, are also used to treat insomnia.
Second-generation antihistamines cross the blood–brain barrier to a much lower degree than the first ones. This results in their primarily affecting peripheral histamine receptors, and therefore having a much lower sedative effect. High doses can still induce the central nervous system effect of drowsiness.
Antidepressants
Some antidepressants have sedating effects.
Examples include:
Serotonin antagonists and reuptake inhibitors
Trazodone
Tricyclic antidepressants
Amitriptyline
Doxepin
Trimipramine
Tetracyclic antidepressants
Mianserin
Mirtazapine
Antipsychotics
While some of these drugs are frequently prescribed for insomnia, such use is not recommended unless the insomnia is due to an underlying mental health condition treatable by antipsychotics as the risks frequently outweigh the benefits. Some of the more serious adverse effects have been observed to occur at the low doses used for this off-label prescribing, such as dyslipidemia and neutropenia, and a recent network meta-analysis of 154 double-blind, randomized controlled trials of drug therapies vs. placebo for insomnia in adults found that quetiapine had not demonstrated any short-term benefits in sleep quality. Examples of antipsychotics with sedation as a side effect that are occasionally used for insomnia:
First-generation
Chlorpromazine
Second-generation
Clozapine
Olanzapine
Quetiapine
Risperidone
Zotepine
Ziprasidone
Miscellaneous drugs
Alpha-adrenergic agonist
Clonidine
Guanfacine
Cannabinoids
Cannabidiol
Tetrahydrocannabinol
Orexin receptor antagonist
Suvorexant
Lemborexant
Daridorexant
Gabapentinoids
Gabapentin
Pregabalin
Phenibut
Effectiveness
A major systematic review and network meta-analysis of medications for the treatment of insomnia was published in 2022. It found a wide range of effect sizes (standardized mean difference (SMD)) in terms of efficacy for insomnia. The assessed medications included benzodiazepines (e.g., temazepam, triazolam, many others) (SMDs 0.58 to 0.83), Z-drugs (eszopiclone, zaleplon, zolpidem, zopiclone) (SMDs 0.03 to 0.63), sedative antidepressants and antihistamines (doxepin, doxylamine, trazodone, trimipramine) (SMDs 0.30 to 0.55), the antipsychotic quetiapine (SMD 0.07), orexin receptor antagonists (daridorexant, lemborexant, seltorexant, suvorexant) (SMDs 0.23 to 0.44), and melatonin receptor agonists (melatonin, ramelteon) (SMDs 0.00 to 0.13). The certainty of evidence varied and ranged from high to very low depending on the medication. Certain medications often used as hypnotics, including the antihistamines diphenhydramine, hydroxyzine, and promethazine and the antidepressants amitriptyline and mirtazapine, were not included in analyses due to insufficient data.
Risks
The use of sedative medications in older people generally should be avoided. These medications are associated with poorer health outcomes, including cognitive decline, and bone fractures.
Therefore, sedatives and hypnotics should be avoided in people with dementia, according to the clinical guidelines known as the Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D). The use of these medications can further impede cognitive function for people with dementia, who are also more sensitive to side effects of medications.
See also
Sleep induction § Alcohol
Somnifacient
Notes
References
Further reading
discusses Barbs vs. benzos
External links
Sleeping pills overview
Psychoactive drugs
Treatment of sleep disorders | Hypnotic | ["Chemistry", "Biology"] | 3,383 | ["Hypnotics", "Behavior", "Sleep", "Psychoactive drugs", "Neurochemistry"] |
14,275 | https://en.wikipedia.org/wiki/Hacker%20ethic | The hacker ethic is a philosophy and set of moral values within hacker culture. Practitioners believe that sharing information and data with others is an ethical imperative. The hacker ethic is related to the concept of freedom of information, as well as the political theories of anti-authoritarianism, anarchism, and libertarianism.
While some tenets of the hacker ethic were described in other texts like Computer Lib/Dream Machines (1974) by Ted Nelson, the term hacker ethic is generally attributed to journalist Steven Levy, who appears to have been the first to document both the philosophy and the founders of the philosophy in his 1984 book titled Hackers: Heroes of the Computer Revolution.
History
The hacker ethic originated at the Massachusetts Institute of Technology in the 1950s–1960s. The term "hacker" has long been used there to describe college pranks that MIT students would regularly devise, and was used more generally to describe a project undertaken or a product built to fulfill some constructive goal, but also out of pleasure for mere involvement.
MIT housed an early IBM 704 computer inside the Electronic Accounting Machinery (EAM) room in 1959. This room became the staging grounds for early hackers, as MIT students from the Tech Model Railroad Club sneaked inside the EAM room after hours to attempt programming the 30-ton computer.
The hacker ethic was described as a "new way of life, with a philosophy, an ethic and a dream". However, the elements of the hacker ethic were not openly debated and discussed; rather they were implicitly accepted and silently agreed upon.
The free software movement was born in the early 1980s from followers of the hacker ethic. Its founder, Richard Stallman, is referred to by Steven Levy as "the last true hacker".
Richard Stallman describes:
"The hacker ethic refers to the feelings of right and wrong, to the ethical ideas this community of people had—that knowledge should be shared with other people who can benefit from it, and that important resources should be utilized rather than wasted."
and states more precisely that hacking (which Stallman defines as playful cleverness) and ethics are two separate issues:
"Just because someone enjoys hacking does not mean he has an ethical commitment to treating other people properly. Some hackers care about ethics—I do, for instance—but that is not part of being a hacker, it is a separate trait. [...] Hacking is not primarily about an ethical issue. [...] hacking tends to lead a significant number of hackers to think about ethical questions in a certain way. I would not want to completely deny all connection between hacking and views on ethics."The hacker culture has been compared to early Protestantism . Protestant sectarians emphasized individualism and loneliness, similar to hackers who have been considered loners and nonjudgmental individuals. The notion of moral indifference between hackers characterized the persistent actions of computer culture in the 1970s and early 1980s. According to Kirkpatrick, author of The Hacker Ethic, the "computer plays the role of God, whose requirements took priority over the human ones of sentiment when it came to assessing one's duty to others."
According to Kirkpatrick's The Hacker Ethic:
"Exceptional single-mindedness and determination to keep plugging away at a problem until the optimal solution had been found are well-documented traits of the early hackers. Willingness to work right through the night on a single programming problem are widely cited as features of the early 'hacker' computer culture."
The hacker culture is placed in the context of 1960s youth culture when American youth culture challenged the concept of capitalism and big, centralized structures. The hacker culture was a subculture within 1960s counterculture. The hackers' main concern was challenging the idea of technological expertise and authority. The 1960s hippy period attempted to "overturn the machine." Although hackers appreciated technology, they wanted regular citizens, and not big corporations, to have power over technology "as a weapon that might actually undermine the authority of the expert and the hold of the monolithic system."
The hacker ethics
As Levy summarized in the preface of Hackers, the general tenets or principles of hacker ethic include:
Sharing
Openness
Decentralization
Free access to computers
World Improvement (foremost, upholding democracy and the fundamental laws we all live by, as a society)
In addition to those principles, Levy also described more specific hacker ethics and beliefs in chapter 2, The Hacker Ethic: The ethics he described in chapter 2 are:
1. "Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!" Levy is recounting hackers' abilities to learn and build upon pre-existing ideas and systems. He believes that access gives hackers the opportunity to take things apart, fix, or improve upon them and to learn and understand how they work. This gives them the knowledge to create new and even more interesting things. Access aids the expansion of technology.
2. "All information should be free" Linking directly with the principle of access, information needs to be free for hackers to fix, improve, and reinvent systems. A free exchange of information allows for greater overall creativity. In the hacker viewpoint, any system could benefit from an easy flow of information, a concept known as transparency in the social sciences. As Stallman notes, "free" refers to unrestricted access; it does not refer to price.
3. "Mistrust authority—promote decentralization" The best way to promote the free exchange of information is to have an open system that presents no boundaries between a hacker and a piece of information or an item of equipment that they need in their quest for knowledge, improvement, and time on-line. Hackers believe that bureaucracies, whether corporate, government, or university, are flawed systems.
4. "Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, sex, or position" Inherent in the hacker ethic is a meritocratic system where superficiality is disregarded in esteem of skill. Levy articulates that criteria such as age, sex, race, position, and qualification are deemed irrelevant within the hacker community. Hacker skill is the ultimate determinant of acceptance. Such a code within the hacker community fosters the advance of hacking and software development.
5. "You can create art and beauty on a computer" Hackers deeply appreciate innovative techniques which allow programs to perform complicated tasks with few instructions. A program's code was considered to hold a beauty of its own, having been carefully composed and artfully arranged. Learning to create programs which used the least amount of space almost became a game between the early hackers.
6. "Computers can change your life for the better" Hackers felt that computers had enriched their lives, given their lives focus, and made their lives adventurous. Hackers regarded computers as Aladdin's lamps that they could control. They believed that everyone in society could benefit from experiencing such power and that if everyone could interact with computers in the way that hackers did, then the hacker ethic might spread through society and computers would improve the world. The hackers succeeded in turning dreams of endless possibilities into realities. The hacker's primary object was to teach society that "the world opened up by the computer was a limitless one" (Levy 230:1984)
Sharing
From the early days of modern computing through to the 1970s, it was far more common for computer users to have the freedoms that are provided by an ethic of open sharing and collaboration. Software, including source code, was commonly shared by individuals who used computers. Most companies had a business model based on hardware sales, and provided or bundled the associated software free of charge. According to Levy's account, sharing was the norm and expected within the non-corporate hacker culture. The principle of sharing stemmed from the open atmosphere and informal access to resources at MIT. During the early days of computers and programming, the hackers at MIT would develop a program and share it with other computer users.
If the hack was deemed particularly good, then the program might be posted on a board somewhere near one of the computers. Other programs that could be built upon it and improved it were saved to tapes and added to a drawer of programs, readily accessible to all the other hackers. At any time, a fellow hacker might reach into the drawer, pick out the program, and begin adding to it or "bumming" it to make it better. Bumming referred to the process of making the code more concise so that more can be done in fewer instructions, saving precious memory for further enhancements.
In the second generation of hackers, sharing was about sharing with the general public in addition to sharing with other hackers. A particular organization of hackers that was concerned with sharing computers with the general public was a group called Community Memory. This group of hackers and idealists put computers in public places for anyone to use. The first community computer was placed outside of Leopold's Records in Berkeley, California.
Another sharing of resources occurred when Bob Albrecht provided considerable resources for a non-profit organization called the People's Computer Company (PCC). PCC opened a computer center where anyone could use the computers there for fifty cents per hour.
This second generation practice of sharing contributed to the battles of free and open software. In fact, when Bill Gates' version of BASIC for the Altair was shared among the hacker community, Gates claimed to have lost a considerable sum of money because few users paid for the software. As a result, Gates wrote an Open Letter to Hobbyists. This letter was published by several computer magazines and newsletters, most notably that of the Homebrew Computer Club where much of the sharing occurred.
According to Brent K. Jesiek in "Democratizing Software: Open Source, the Hacker Ethic, and Beyond," technology is being associated with social views and goals. Jesiek refers to Gisle Hannemyr's views on open source vs. commercialized software. Hannemyr concludes that when a hacker constructs software, the software is flexible, tailorable, modular in nature, and open-ended. A hacker's software contrasts with mainstream hardware that favors control, a sense of being whole, and immutability (Hannemyr, 1999).
Furthermore, he concludes that 'the difference between the hacker’s approach and those of the industrial programmer is one of outlook: between an agoric, integrated and holistic attitude towards the creation of artifacts and a proprietary, fragmented and reductionist one' (Hannemyr, 1999). As Hannemyr’s analysis reveals, the characteristics of a given piece of software frequently reflect the attitude and outlook of the programmers and organizations from which it emerges."
Copyright and patents
As copyright and patent laws limit the ability to share software, opposition to software patents is widespread in the hacker and free software community.
Hands-On Imperative
Many of the principles and tenets of hacker ethic contribute to a common goal: the Hands-On Imperative. As Levy described in Chapter 2, "Hackers believe that essential lessons can be learned about the systems—about the world—from taking things apart, seeing how they work, and using this knowledge to create new and more interesting things."
Employing the Hands-On Imperative requires free access, open information, and the sharing of knowledge. To a true hacker, if the Hands-On Imperative is restricted, then the ends justify the means to make it unrestricted so that improvements can be made. When these principles are not present, hackers tend to work around them. For example, when the computers at MIT were protected either by physical locks or login programs, the hackers there systematically worked around them in order to have access to the machines. Hackers assumed a "willful blindness" in the pursuit of perfection.
This behavior was not malicious in nature: the MIT hackers did not seek to harm the systems or their users. This deeply contrasts with the modern, media-encouraged image of hackers who crack secure systems in order to steal information or complete an act of cyber-vandalism.
Community and collaboration
Throughout writings about hackers and their work processes, a common value of community and collaboration is present. For example, in Levy's Hackers, each generation of hackers had geographically based communities where collaboration and sharing occurred. For the hackers at MIT, it was the labs where the computers were running. For the hardware hackers (second generation) and the game hackers (third generation) the geographic area was centered in Silicon Valley where the Homebrew Computer Club and the People's Computer Company helped hackers network, collaborate, and share their work.
The concept of community and collaboration is still relevant today, although hackers are no longer limited to collaboration in geographic regions. Now collaboration takes place via the Internet. Eric S. Raymond identifies and explains this conceptual shift in The Cathedral and the Bazaar:
Before cheap Internet, there were some geographically compact communities where the culture encouraged Weinberg's egoless programming, and a developer could easily attract a lot of skilled kibitzers and co-developers. Bell Labs, the MIT AI and LCS labs, UC Berkeley: these became the home of innovations that are legendary and still potent.
Raymond also notes that the success of Linux coincided with the wide availability of the World Wide Web. The value of community is still in high practice and use today.
Levy's "true hackers"
Levy identifies several "true hackers" who significantly influenced the hacker ethic. Some well-known "true hackers" include:
Bill Gosper: Mathematician and hacker
Richard Greenblatt: Programmer and early designer of LISP machines
John McCarthy: Co-founder of the MIT Artificial Intelligence Lab and Stanford AI Laboratory
Jude Milhon: Founder of the cypherpunk movement, senior editor at Mondo 2000, and co-founder of Community Memory
Richard Stallman: Programmer and political activist who is well known for GNU, Emacs and the Free Software Movement
Levy also identified the "hardware hackers" (the "second generation", mostly centered in Silicon Valley) and the "game hackers" (or the "third generation"). All three generations of hackers, according to Levy, embodied the principles of the hacker ethic. Some of Levy's "second-generation" hackers include:
Steve Dompier: Homebrew Computer Club member and hacker who worked with the early Altair 8800
John Draper: A legendary figure in the computer programming world. He wrote EasyWriter, the first word processor.
Lee Felsenstein: A hardware hacker and co-founder of Community Memory and Homebrew Computer Club; a designer of the Sol-20 computer
Bob Marsh: A designer of the Sol-20 computer
Fred Moore: Activist and founder of the Homebrew Computer Club
Steve Wozniak: One of the founders of Apple Computer
Levy's "third generation" practitioners of hacker ethic include:
John Harris: One of the first programmers hired at On-Line Systems (which later became Sierra Entertainment)
Ken Williams: Along with wife Roberta, founded On-Line Systems after working at IBM – the company would later achieve mainstream popularity as Sierra.
Other descriptions
In 2001, Finnish philosopher Pekka Himanen promoted the hacker ethic in opposition to the Protestant work ethic. In Himanen's opinion, the hacker ethic is more closely related to the virtue ethics found in the writings of Plato and of Aristotle. Himanen explained these ideas in a book, The Hacker Ethic and the Spirit of the Information Age, with a prologue contributed by Linus Torvalds and an epilogue by Manuel Castells.
In this manifesto, the authors wrote about a hacker ethic centering on passion, hard work, creativity and joy in creating software. Both Himanen and Torvalds were inspired by the Sampo in Finnish mythology. The Sampo, described in the Kalevala saga, was a magical artifact constructed by Ilmarinen, the blacksmith god, that brought good fortune to its holder; nobody knows exactly what it was supposed to be. The Sampo has been interpreted in many ways: a world pillar or world tree, a compass or astrolabe, a chest containing a treasure, a Byzantine coin die, a decorated Vendel period shield, a Christian relic, etc. Kalevala saga compiler Lönnrot interpreted it to be a "quern" or mill of some sort that made flour, salt, and wealth.
See also
Hacks at the Massachusetts Institute of Technology
Hacker (programmer subculture)
Hacker (term)
Hacktivism
Tech Model Railroad Club
The Cathedral and the Bazaar
Free software movement
Free software philosophy
Footnotes
References
Further reading
External links
Gabriella Coleman, an anthropologist at McGill University, studies hacker cultures and has written extensively on the hacker ethic and culture
Tom Chance's essay on The Hacker Ethic and Meaningful Work
Hacker ethic from the Jargon file
Directory of free software
Iterative Discourse and the Formation of New Subcultures by Steve Mizrach describes the hacker terminology, including the term cracker.
Richard Stallman's Personal Website
Is there a Hacker Ethic for 90s Hackers? by Steven Mizrach
The Hacker's Ethics by the Cyberpunk Project
Computing and society
Hacker culture
Decentralization | Hacker ethic | [
"Technology"
] | 3,568 | [
"Computing and society"
] |
14,276 | https://en.wikipedia.org/wiki/Hotel | A hotel is an establishment that provides paid lodging on a short-term basis. Facilities provided inside a hotel room may range from a modest-quality mattress in a small room to large suites with bigger, higher-quality beds, a dresser, a refrigerator, and other kitchen facilities, upholstered chairs, a television, and en-suite bathrooms. Small, lower-priced hotels may offer only the most basic guest services and facilities. Larger, higher-priced hotels may provide additional guest facilities such as a swimming pool, a business center with computers, printers, and other office equipment, childcare, conference and event facilities, tennis or basketball courts, gymnasium, restaurants, day spa, and social function services. Hotel rooms are usually numbered (or named in some smaller hotels and B&Bs) to allow guests to identify their room. Some boutique, high-end hotels have custom decorated rooms. Some hotels offer meals as part of a room and board arrangement. In Japan, capsule hotels provide a tiny room suitable only for sleeping and shared bathroom facilities.
The precursor to the modern hotel was the inn of medieval Europe. For a period of about 200 years from the mid-17th century, coaching inns served as a place for lodging for coach travelers. Inns began to cater to wealthier clients in the mid-18th century. One of the first hotels in a modern sense was opened in Exeter in 1768. Hotels proliferated throughout Western Europe and North America in the early 19th century, and luxury hotels began to spring up in the later part of the 19th century, particularly in the United States.
Hotel operations vary in size, function, complexity, and cost. Most hotels and major hospitality companies have set industry standards to classify hotel types. An upscale full-service hotel facility offers luxury amenities, full-service accommodations, an on-site restaurant, and the highest level of personalized service, such as a concierge, room service, and clothes-ironing staff. Full-service hotels often contain upscale full-service facilities with many full-service accommodations, an on-site full-service restaurant, and a variety of on-site amenities. Boutique hotels are smaller independent, non-branded hotels that often contain upscale facilities. Small to medium-sized hotel establishments offer a limited amount of on-site amenities. Economy hotels are small to medium-sized hotel establishments that offer basic accommodations with little to no services. Extended stay hotels are small to medium-sized hotels that offer longer-term full-service accommodations compared to a traditional hotel.
Timeshare and destination clubs are a form of property ownership involving ownership of an individual unit of accommodation for seasonal usage. A motel is a small-sized low-rise lodging with direct access to individual rooms from the car parking area. Boutique hotels are typically hotels with a unique environment or intimate setting. A number of hotels and motels have entered the public consciousness through popular culture. Some hotels are built specifically as destinations in themselves, for example casinos and holiday resorts.
Most hotel establishments are run by a general manager who serves as the head executive (often referred to as the "hotel manager"), department heads who oversee various departments within a hotel (e.g., food service), middle managers, administrative staff, and line-level supervisors. The organizational chart and volume of job positions and hierarchy varies by hotel size, function and class, and is often determined by hotel ownership and managing companies.
Etymology
The word hotel is derived from the French hôtel (coming from the same origin as hospital), which originally referred to any building that received frequent visitors and provided care, rather than to a place offering accommodation. In contemporary French usage, hôtel now has the same meaning as the English term, and hôtel particulier is used for the old meaning, as well as "hôtel" in some place names such as Hôtel-Dieu (in Paris), which has been a hospital since the Middle Ages. The French spelling, with the circumflex, was also used in English, but is now rare. The circumflex replaces the 's' found in the earlier hostel spelling, which over time took on a new, but closely related meaning. Grammatically, hotels usually take the definite article – hence "The Astoria Hotel" or simply "The Astoria".
History
Facilities offering hospitality to travellers featured in early civilizations. In Greco-Roman culture and in ancient Persia, hospitals for recuperation and rest were built at thermal baths. Guinness World Records officially recognised Japan's Nishiyama Onsen Keiunkan, founded in 705, as the oldest hotel in the world. During the Middle Ages, various religious orders at monasteries and abbeys would offer accommodation for travellers on the road.
The precursor to the modern hotel was the inn of medieval Europe, possibly dating back to the rule of Ancient Rome. These would provide for the needs of travellers, including food and lodging, stabling and fodder for the traveller's horses and fresh horses for mail coaches. Famous London examples of inns include the George and the Tabard. A typical layout of an inn featured an inner court with bedrooms on the two sides, with the kitchen and parlour at the front and the stables at the back.
For a period of about 200 years from the mid-17th century, coaching inns served as a place for lodging for coach travellers (in other words, a roadhouse). Coaching inns stabled teams of horses for stagecoaches and mail coaches and replaced tired teams with fresh teams. Traditionally they were seven miles apart, but this depended very much on the terrain.
Some English towns had as many as ten such inns and rivalry between them became intense, not only for the income from the stagecoach operators but for the revenue from the food and drink supplied to the wealthy passengers. By the end of the century, coaching inns were being run more professionally, with a regular timetable being followed and fixed menus for food.
Inns began to cater to richer clients in the mid-18th century, and consequently grew in grandeur and in the level of service provided. Sudhir Andrews traces "the birth of an organised hotel industry" to Europe's chalets and small hotels which catered primarily to aristocrats.
One of the first hotels in a modern sense, the Royal Clarence, opened in Exeter in 1768, although the idea only really caught on in the early-19th century. In 1812 Mivart's Hotel opened its doors in London, later changing its name to Claridge's.
Hotels proliferated throughout Western Europe and North America in the 19th century. Luxury hotels, including the 1829 Tremont House in Boston, the 1836 Astor House in New York City, the 1889 Savoy Hotel in London, and the Ritz chain of hotels in London and Paris in the late 1890s, catered to an ever more-wealthy clientele.
Title II of the Civil Rights Act of 1964 is part of a United States law that prohibits discrimination on the basis of race, religion, or national origin in places of public accommodation. Hotels are included as types of public accommodation in the Act.
International scale
Hotels cater to travelers from many countries and languages, since no one country dominates the travel industry.
Types
Hotel operations vary in size, function, and cost. Most hotels and major hospitality companies that operate hotels have set widely accepted industry standards to classify hotel types. General categories include the following:
International luxury
International luxury hotels offer high-quality amenities, full-service accommodations, on-site full-service restaurants, and the highest level of personalized and professional service in major or capital cities. International luxury hotels are classified with at least a Five Diamond rating or Five Star hotel rating depending on the country and local classification standards. Example brands include: Grand Hyatt, Conrad, InterContinental, Sofitel, Mandarin Oriental, Four Seasons, The Peninsula, Rosewood, JW Marriott and The Ritz-Carlton.
Lifestyle luxury resorts
Lifestyle luxury resorts are branded hotels that appeal to a guest with lifestyle or personal image in specific locations. They are typically full-service and classified as luxury. A key characteristic of lifestyle resorts is focus on providing a unique guest experience as opposed to simply providing lodging. Lifestyle luxury resorts are classified with a Five Star hotel rating depending on the country and local classification standards. Example brands include: Waldorf Astoria, St. Regis, Wynn Resorts, MGM, Shangri-La, Oberoi, Belmond, Jumeirah, Aman, Taj Hotels, Hoshino, Raffles, Fairmont, Banyan Tree, Regent and Park Hyatt.
Upscale full-service
Upscale full-service hotels often provide a wide array of guest services and on-site facilities. Commonly found amenities may include: on-site food and beverage (room service and restaurants), meeting and conference services and facilities, fitness center, and business center. Upscale full-service hotels range in quality from upscale to luxury. This classification is based upon the quality of facilities and amenities offered by the hotel. Examples include: W Hotels, Sheraton, Langham, Kempinski, Pullman, Kimpton Hotels, Hilton, Swissôtel, Lotte, Renaissance, Marriott and Hyatt Regency brands.
Boutique
Boutique hotels are smaller independent non-branded hotels that often contain mid-scale to upscale facilities of varying size in unique or intimate settings with full-service accommodations. These hotels are generally 100 rooms or fewer.
Focused or select service
Small to medium-sized hotel establishments that offer a limited number of on-site amenities that only cater and market to a specific demographic of travelers, such as the single business traveler. Most focused or select service hotels may still offer full-service accommodations but may lack leisure amenities such as an on-site restaurant or a swimming pool. Examples include Hyatt Place, Holiday Inn, Courtyard by Marriott and Hilton Garden Inn.
Economy and limited service
Small to medium-sized hotel establishments that offer a very limited number of on-site amenities and often only offer basic accommodations with little to no services, catering to the budget-minded traveler seeking a "no frills" accommodation. Limited service hotels often lack an on-site restaurant but in return may offer a limited complimentary food and beverage amenity such as on-site continental breakfast service. Examples include Ibis Budget, Hampton Inn, Aloft, Holiday Inn Express, Fairfield Inn, and Four Points by Sheraton.
Extended stay
Extended stay hotels are small to medium-sized hotels that offer longer-term full-service accommodations compared to a traditional hotel. Extended stay hotels may offer non-traditional pricing methods such as a weekly rate that caters towards travelers in need of short-term accommodations for an extended period of time. Similar to limited and select service hotels, on-site amenities are normally limited and most extended stay hotels lack an on-site restaurant. Examples include Staybridge Suites, Candlewood Suites, Homewood Suites by Hilton, Home2 Suites by Hilton, Residence Inn by Marriott, Element, and Extended Stay America.
Timeshare and destination clubs
Timeshare and destination clubs are a form of property ownership also referred to as a vacation ownership involving the purchase and ownership of an individual unit of accommodation for seasonal usage during a specified period of time. Timeshare resorts often offer amenities similar that of a full-service hotel with on-site restaurants, swimming pools, recreation grounds, and other leisure-oriented amenities. Destination clubs on the other hand may offer more exclusive private accommodations such as private houses in a neighborhood-style setting. Examples of timeshare brands include Hilton Grand Vacations, Marriott Vacation Club International, Westgate Resorts, Disney Vacation Club, and Holiday Inn Club Vacations.
Motel
A motel, an abbreviation for "motor hotel", is a small-sized low-rise lodging establishment similar to a limited service, lower-cost hotel, but typically with direct access to individual rooms from the car park. Motels were built to serve road travellers, including travellers on road trip vacations and workers who drive for their job (travelling salespeople, truck drivers, etc.). Common during the 1950s and 1960s, motels were often located adjacent to a major highway, where they were built on inexpensive land at the edge of towns or along stretches of freeway.
New motel construction is rare in the 2000s, as hotel chains have been building economy-priced, limited-service franchised properties at freeway exits that compete for largely the same clientele and had saturated that market by the 1990s. Motels are still useful in less populated areas for driving travelers, but the more populated an area becomes, the more hotels move in to meet the demand for accommodation. While many motels are unbranded and independent, many of the motels which remain in operation have joined national franchise chains, often rebranding themselves as hotels, inns or lodges. Some examples of chains with motels include EconoLodge, Motel 6, Super 8, and Travelodge.
Motels in some parts of the world are more often regarded as places for romantic assignations where rooms are often rented by the hour. This is fairly common in parts of Latin America.
In the United States, motels have a reputation for criminal activity such as prostitution and drug dealing.
Microstay
Hotels may offer rooms for microstays, a type of booking for less than 24 hours where the customer chooses the check in time and the length of the stay. This allows the hotel increased revenue by reselling the same room several times a day. They first gained popularity in Europe but are now common in major global tourist centers.
Management
Hotel management is a globally accepted professional career field and academic field of study. Degree programs such as hospitality management studies, a business degree, and/or certification programs formally prepare hotel managers for industry practice.
Most hotel establishments consist of a general manager who serves as the head executive (often referred to as the "hotel manager"), department heads who oversee various departments within a hotel, middle managers, administrative staff, and line-level supervisors. The organizational chart and volume of job positions and hierarchy varies by hotel size, function, and is often determined by hotel ownership and managing companies.
Unique and specialty hotels
Historic inns and boutique hotels
Boutique hotels are typically hotels with a unique environment or intimate setting.
Some hotels have gained their renown through tradition, by hosting significant events or persons, such as Schloss Cecilienhof in Potsdam, Germany, which derives its fame from the Potsdam Conference of the World War II allies Winston Churchill, Harry Truman and Joseph Stalin in 1945. The Taj Mahal Palace & Tower in Mumbai is one of India's most famous and historic hotels because of its association with the Indian independence movement. Some establishments have given name to a particular meal or beverage, as is the case with the Waldorf Astoria in New York City, United States where the Waldorf Salad was first created or the Hotel Sacher in Vienna, Austria, home of the Sachertorte. Others have achieved fame by association with dishes or cocktails created on their premises, such as the Hotel de Paris where the crêpe Suzette was invented or the Raffles Hotel in Singapore, where the Singapore Sling cocktail was devised.
A number of hotels have entered the public consciousness through popular culture, such as the Ritz Hotel in London, through its association with Irving Berlin's song, "Puttin' on the Ritz". The Algonquin Hotel in New York City is famed as the meeting place of the literary group, the Algonquin Round Table, and Hotel Chelsea, also in New York City, has been the subject of a number of songs and the scene of the stabbing of Nancy Spungen (allegedly by her boyfriend Sid Vicious).
Resort hotels
Some hotels are built specifically as destinations in themselves to create a captive trade, for example at casinos, amusement parks and holiday resorts. Though hotels have always been built in popular destinations, the defining characteristic of a resort hotel is that it exists purely to serve another attraction, the two having the same owners.
On the Las Vegas Strip there is a tradition of one-upmanship with luxurious and extravagant hotels in a concentrated area. This trend now has extended to other resorts worldwide, but the concentration in Las Vegas is still the world's highest: nineteen of the world's twenty-five largest hotels by room count are on the Strip, with a total of over 67,000 rooms.
Bunker hotels
The Null Stern Hotel in Teufen, Appenzellerland, Switzerland, and the Concrete Mushrooms in Albania are former nuclear bunkers transformed into hotels.
Cave hotels
The Cuevas Pedro Antonio de Alarcón (named after the author) in Guadix, Spain, as well as several hotels in Cappadocia, Turkey, are notable for being built into natural cave formations, some with rooms underground. The Desert Cave Hotel in Coober Pedy, South Australia, is built into the remains of an opal mine.
Cliff hotels
Located on the coast but high above sea level, these hotels offer unobstructed panoramic views and a great sense of privacy without the feeling of total isolation. Some examples from around the globe are the Riosol Hotel in Gran Canaria, Caruso Belvedere Hotel in Amalfi Coast (Italy), Aman Resorts Amankila in Bali, Birkenhead House in Hermanus (South Africa), The Caves in Jamaica and Caesar Augustus in Capri.
Capsule hotels
Capsule hotels are a type of economical hotel first introduced in Japan, where people sleep in stacks of rectangular containers. Besides a bed, each sleeping capsule typically offers a television, a small safe for valuables, and wireless internet access.
Day room hotels
Some hotels fill daytime occupancy with day rooms, for example, Rodeway Inn and Suites near Port Everglades in Fort Lauderdale, Florida. Day rooms are booked in a block of hours, typically between 8 am and 5 pm, before the typical night shift. They are similar to transit hotels in that they appeal to travelers; however, unlike transit hotels, they do not eliminate the need to go through customs.
Garden hotels
Garden hotels, famous for their gardens before they became hotels, include Gravetye Manor, the home of garden designer William Robinson, and Cliveden, designed by Charles Barry with a rose garden by Geoffrey Jellicoe.
Ice, snow and igloo hotels
The Ice Hotel in Jukkasjärvi, Sweden, was the first ice hotel in the world; first built in 1990, it is built each winter and melts every spring. The Hôtel de Glace in Duchesnay, Canada, opened in 2001 and is North America's only ice hotel. It is redesigned and rebuilt in its entirety every year.
Ice hotels can also be included within larger ice complexes; for example, the Mammut Snow Hotel in Finland is located within the walls of the Kemi snow castle, and the Lainio Snow Hotel is part of a snow village near Ylläs, Finland. There is an arctic snowhotel in Rovaniemi in Lapland, Finland, along with glass igloos. The first glass igloos were built in Finland in 1999; they became the Kakslauttanen Arctic Resort, which has 65 buildings: 53 small ones for two people and 12 large ones for four people. Glass igloos, with their roofs made of thermal glass, allow guests to admire auroras comfortably from their beds.
Love hotels
A love hotel (also 'love motel', especially in Taiwan) is a type of short-stay hotel found around the world, operated primarily for the purpose of allowing guests privacy for sexual activities, typically for one to three hours, but with overnight as an option. Styles of premises vary from extremely low-end to extravagantly appointed. In Japan, love hotels have a history of over 400 years.
Portable modular hotels
In 2021 a New York-based company introduced new modular and movable hotel rooms which allow landowners and hospitality groups to create and easily scale hotel accommodations. The portable units can be built in three to five months and can be stacked to create multi-floor units.
Referral hotel
A referral hotel is a hotel chain that offers branding to independently operated hotels; the chain itself is founded by or owned by the member hotels as a group. Many former referral chains have been converted to franchises; the largest surviving member-owned chain is Best Western.
Railway hotels
The first recorded purpose-built railway hotel was the Great Western Hotel, which opened adjacent to Reading railway station in 1844, shortly after the Great Western Railway opened its line from London. The building still exists, and although it has been used for other purposes over the years, it is now again a hotel and a member of the Malmaison hotel chain.
Frequently, expanding railway companies built grand hotels at their termini, such as the Midland Hotel, Manchester, next to the former Manchester Central Station, and, in London, the ones above St Pancras railway station and Charing Cross railway station. London also has the Chiltern Court Hotel above Baker Street tube station; Canada's grand railway hotels are another notable group. They are or were mostly, but not exclusively, used by those traveling by rail.
Straw bale hotels
The Maya Guesthouse in Nax Mont-Noble in the Swiss Alps, is the first hotel in Europe built entirely with straw bales. Due to the insulation values of the walls it needs no conventional heating or air conditioning system, although the Maya Guesthouse is built at an altitude of in the Alps.
Transit hotels
Transit hotels are short stay hotels typically used at international airports where passengers can stay while waiting to change airplanes. The hotels are typically on the airside and do not require a visa for a stay or re-admission through security checkpoints.
Treehouse hotels
Some hotels are built with living trees as structural elements, for example the Treehotel near Piteå, Sweden, the Costa Rica Tree House near the Jairo Mora Sandoval Gandoca-Manzanillo Mixed Wildlife Refuge, Costa Rica; the Treetops Hotel in Aberdare National Park, Kenya; the Ariau Towers near Manaus, Brazil, on the Rio Negro in the Amazon; and Bayram's Tree Houses in Olympos, Turkey.
Underwater hotels
Some hotels have accommodation underwater, such as Utter Inn in Lake Mälaren, Sweden. Hydropolis, a proposed project in Dubai, would have had suites on the bottom of the Persian Gulf, and Jules' Undersea Lodge in Key Largo, Florida, requires scuba diving to access its rooms.
Overwater hotels
A resort island is an island or an archipelago that contains resorts, hotels, overwater bungalows, restaurants, tourist attractions and their amenities. The Maldives has the most overwater bungalow resorts.
Yurt hotels
Yurts are circular, self-supporting structures with long rafters coalescing toward a central dome. During the day, the dome allows sunlight to illuminate the entire yurt interior, while moonlight and starlight shine through the dome at night.
Other specialty hotels
The Burj al-Arab hotel in Dubai, United Arab Emirates, built on an artificial island, is structured in the shape of a boat's sail.
The Library Hotel in New York City, is unique in that each of its ten floors is assigned one category from the Dewey Decimal System.
The Jailhotel Löwengraben in Lucerne, Switzerland, the Malmaison in Oxford, and the Bodmin Jail Hotel in Bodmin are converted prisons now used as hotels.
The Luxor, a hotel and casino on the Las Vegas Strip in Paradise, Nevada, United States is unusual due to its pyramidal structure.
The Ritz-Carlton opened the highest hotel in the world in 2011, The Ritz-Carlton, Hong Kong on floors 102-118 of the International Commerce Centre in Tsim Sha Tsui on Kowloon Peninsula, Hong Kong. The lobby is above the ground.
The Liberty Hotel in Boston used to be the Charles Street Jail.
Hotel Kakslauttanen in Finland is a collection of glass igloos in Lapland from which guests can watch the Northern Lights.
Built in Scotland and completed in 1936, the former ocean liner RMS Queen Mary, in Long Beach, California, United States, uses its first-class staterooms as a hotel, after retiring from transatlantic service in 1967.
The Wigwam Motels used patented novelty architecture in which each motel room was a free-standing concrete wigwam or teepee.
The Bus Collective in Singapore was built from 20 retired public buses, and opened in 2023.
Various Caboose Motel or Red Caboose Inn properties are built from decommissioned rail cars.
Throughout the world there are several hotels built from converted airliners.
Records
Largest
In 2006, Guinness World Records listed the First World Hotel in Genting Highlands, Malaysia, as the world's largest hotel with a total of 6,118 rooms (and which has now expanded to 7,351 rooms). The Izmailovo Hotel in Moscow has the most beds, with 7,500, followed by The Venetian and The Palazzo complex in Las Vegas (7,117 rooms) and MGM Grand Las Vegas complex (6,852 rooms).
Oldest
According to the Guinness Book of World Records, the oldest hotel in operation is the Nishiyama Onsen Keiunkan in Yamanashi, Japan. The hotel, first opened in AD 707, has been operated by the same family for forty-six generations. The title was held until 2011 by the Hoshi Ryokan, in the Awazu Onsen area of Komatsu, Japan, which opened in the year 718, as the history of the Nishiyama Onsen Keiunkan was virtually unknown.
Highest
The Rosewood Guangzhou, located on the top floors of the 108-story Guangzhou CTF Finance Centre in Tianhe District, Guangzhou, China, soars to 530 meters at its highest point, earning it the status of the world's highest hotel.
Most expensive purchase
In October 2014, the Anbang Insurance Group, based in China, purchased the Waldorf Astoria New York in Manhattan for US$1.95 billion, making it the world's most expensive hotel ever sold.
Long term residence
A number of public figures have notably chosen to take up semi-permanent or permanent residence in hotels.
Fashion designer Coco Chanel lived in the Hôtel Ritz, Paris, on and off for more than 30 years.
Inventor Nikola Tesla lived the last ten years of his life at the New Yorker Hotel until he died in his room in 1943.
Larry Fine (of The Three Stooges) and his family lived in hotels, due to his extravagant spending habits and his wife's dislike for housekeeping. They first lived in the President Hotel in Atlantic City, New Jersey, where his daughter Phyllis was raised, then the Knickerbocker Hotel in Hollywood. Not until the late 1940s did Fine buy a home in the Los Feliz area of Los Angeles.
The Waldorf-Astoria Hotel and its affiliated Waldorf Towers has been the home of many famous persons over the years including former President Herbert Hoover who lived there from the end of his presidency in 1933 until his death in 1964. General Douglas MacArthur lived his last 14 years in the penthouse of the Waldorf Towers. Composer Cole Porter spent the last 25 years of his life in an apartment at the Waldorf Towers.
Billionaire Howard Hughes lived in hotels during the last ten years of his life (1966–76), primarily in Las Vegas, as well as Acapulco, Beverly Hills, Boston, Freeport, London, Managua, Nassau, Vancouver, and others.
Vladimir Nabokov and his wife Vera lived in the Montreux Palace Hotel in Montreux, Switzerland, from 1961 until his death in 1977.
Actor Richard Harris lived at the Savoy Hotel while in London. Hotel archivist Susan Scott recounts an anecdote that, when he was being taken out of the building on a stretcher shortly before his death in 2002, he raised his hand and told the diners "it was the food."
Egyptian actor Ahmed Zaki lived his last 15 years in Ramses Hilton Hotel – Cairo.
British entrepreneur Jack Lyons lived in the Hotel Mirador Kempinski in Switzerland for several years until his death in 2008.
American actress Ethel Merman lived in the Berkshire Hotel in Manhattan for many years but was evicted in 1978 by new ownership who did not want permanent residents.
American actress Elaine Stritch lived in the Savoy Hotel in London for over a decade.
Uruguayan-Argentinian tango composer Horacio Ferrer lived almost 40 years, from 1976 until his death in 2014, in an apartment inside the Alvear Palace Hotel, in Buenos Aires, one of the most exclusive hotels in the city.
See also
Lists of hotels
List of chained-brand hotels
List of defunct hotel chains
Casino hotel
List of casino hotels
Niche tourism markets
Resort
Resort hotel
Industry and careers
Bellhop
Concierge
Front desk clerk, a type of clerk
General manager
GOPPAR, RevPAR, TRevPAR – hotel profitability equations.
Hospitality industry
Hotel rating
Innkeeper
Night auditor
Property caretaker
Tourism
Human habitation types
Apartment hotel
Boutique hotel
Caravanserai
Cruise ship
Dharamshala
Dak bungalow
Eco hotel
Guest house
Glamping
Homestay
Hostal
Human habitats
Inn
Serviced apartment
Vacation rental
Pop-up hotel
References
Further reading
External links
Buildings and structures by type
Tourist accommodations | Hotel | [
"Engineering"
] | 5,980 | [
"Buildings and structures by type",
"Architecture"
] |
14,282 | https://en.wikipedia.org/wiki/Hubris | Hubris, or less frequently hybris, describes a personality quality of extreme or excessive pride or dangerous overconfidence and complacency, often in combination with (or synonymous with) arrogance. The term arrogance comes from a Latin root meaning "to feel that one has a right to demand certain attitudes and behaviors from other people". To arrogate means "to claim or seize without justification... To make undue claims to having", or "to claim or seize without right... to ascribe or attribute without reason". The term pretension is also associated with the term hubris, but is not synonymous with it.
According to studies, hubris, arrogance, and pretension are related to the need for victory (even if it does not always mean winning) instead of reconciliation, which "friendly" groups might promote. Hubris is usually perceived as a characteristic of an individual rather than a group, although the group the offender belongs to may suffer collateral consequences from wrongful acts. Hubris often indicates a loss of contact with reality and an overestimation of one's own competence, accomplishments, or capabilities. The adjectival form of the noun hubris/hybris is hubristic/hybristic.
The term hubris originated in Ancient Greek, where it had several different meanings depending on the context. In legal usage, it meant assault or sexual crimes and theft of public property, and in religious usage it meant emulation of divinity or transgression against a god.
Ancient Greek origin
In ancient Greek, hubris referred to "outrage": actions that violated natural order, or which shamed and humiliated the victim, sometimes for the pleasure or gratification of the abuser.
Mythological usage
Hesiod and Aeschylus used the word "hubris" to describe transgressions against the gods. A common way that hubris was committed was when a mortal claimed to be better than a god in a particular skill or attribute. Claims like these were rarely left unpunished, and so Arachne, a talented young weaver, was transformed into a spider when she said that her skills exceeded those of the goddess Athena. Additional examples include Icarus, Phaethon, Salmoneus, Niobe, Cassiopeia, Tantalus, and Tereus.
The goddess Hybris is described in the Encyclopædia Britannica Eleventh Edition as personifying "insolent encroachment upon the rights of others".
These events were not limited to myth, and certain figures in history were considered to have been punished for committing hubris through their arrogance. One such person was King Xerxes, as portrayed in Aeschylus's play The Persians, who allegedly threw chains into the Hellespont to bind the sea as punishment for its daring to destroy his fleet.
What is common in all of these examples is the breaching of limits, as the Greeks believed that the Fates (Μοῖραι) had assigned each being with a particular area of freedom, an area that even the gods could not breach.
Legal usage
In ancient Athens, hubris was defined as the use of violence to shame the victim (this sense of hubris could also characterize rape). In legal terms, hubristic violations of the law included what might today be termed assault-and-battery, sexual crimes, or the theft of public or sacred property. In some contexts, the term had a sexual connotation. Shame was frequently reflected upon the perpetrator, as well.
Crucial to this definition are the ancient Greek concepts of honour (τιμή, timē) and shame (αἰδώς, aidōs). The concept of honour included not only the exaltation of the one receiving honour, but also the shaming of the one overcome by the act of hubris. This concept of honour is akin to a zero-sum game. Rush Rehm simplifies this definition of hubris to the contemporary concept of "insolence, contempt, and excessive violence".
Two well-known cases are found in the speeches of Demosthenes, a prominent statesman and orator in ancient Greece. These two examples occurred when first Midias punched Demosthenes in the face in the theatre (Against Midias), and second when (in Against Conon) a defendant allegedly assaulted a man and crowed over the victim. Yet another example of hubris appears in Aeschines' Against Timarchus, where the defendant, Timarchus, is accused of breaking the law of hubris by submitting himself to prostitution and anal intercourse. Aeschines brought this suit against Timarchus to bar him from the rights of political office and his case succeeded. Aristotle defined hubris as shaming the victim, not because of anything that happened to the committer or might happen to the committer, but merely for that committer's own gratification:
to cause shame to the victim, not in order that anything may happen to you, nor because anything has happened to you, but merely for your own gratification. Hubris is not the requital of past injuries; this is revenge. As for the pleasure in hubris, its cause is this: naive men think that by ill-treating others they make their own superiority the greater.
Early Christianity
In the Septuagint, hubris is "overweening pride, superciliousness or arrogance, often resulting in fatal retribution or nemesis". The word hubris as used in the New Testament parallels the Hebrew word pesha, meaning "transgression". It represents a pride that "makes a man defy God", sometimes to the degree that he considers himself an equal.
Modern usage
In its modern usage, hubris denotes overconfident pride combined with arrogance. Hubris is also referred to as "pride that blinds" because it often causes a committer of hubris to act in foolish ways that belie common sense.
Arrogance
The Oxford English Dictionary defines "arrogance" in terms of "high or inflated opinion of one's own abilities, importance, etc., that gives rise to presumption or excessive self-confidence, or to a feeling or attitude of being superior to others [...]." Adrian Davies sees arrogance as more generic and less severe than hubris.
References
Further reading
Nicolas R. E. Fisher, Hybris: A Study in the Values of Honour and Shame in Ancient Greece, Warminster, Aris & Phillips, 1992.
Michael DeWilde, "The Psychological and Spiritual Roots of a Universal Affliction"
Hubris on 2012's Encyclopædia Britannica
Robert A. Stebbins, From Humility to Hubris among Scholars and Politicians: Exploring Expressions of Self-Esteem and Achievement. Bingley, UK: Emerald Group Publishing, 2017.
External links
Narcissism
Barriers to critical thinking
Pride
Psychological attitude
Religious terminology
Seven deadly sins
Concepts in ancient Greek ethics
Greek words and phrases | Hubris | [
"Biology"
] | 1,451 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
14,283 | https://en.wikipedia.org/wiki/Heavy%20water | Heavy water (deuterium oxide, ²H₂O, D₂O) is a form of water in which hydrogen atoms are all deuterium (²H or D, also known as heavy hydrogen) rather than the common hydrogen-1 isotope (¹H, also called protium) that makes up most of the hydrogen in normal water. The presence of the heavier isotope gives the water different nuclear properties, and the increase in mass gives it slightly different physical and chemical properties when compared to normal water.
Deuterium is a heavy isotope of hydrogen. Heavy water contains deuterium atoms and is used in nuclear reactors. Semiheavy water (HDO) is more common than pure heavy water, while heavy-oxygen water is denser than ordinary water but lacks the distinctive nuclear and biological properties of deuterated water. Tritiated water is radioactive due to its tritium content.
Heavy water has different physical properties from regular water, such as being 10.6% denser and having a higher melting point. Heavy water is less dissociated at a given temperature, and it does not have the slightly blue color of regular water. It can taste slightly sweeter than regular water, though not to a significant degree. Heavy water affects biological systems by altering enzymes, hydrogen bonds, and cell division in eukaryotes. It can be lethal to multicellular organisms at concentrations over 50%. However, some prokaryotes like bacteria can survive in a heavy hydrogen environment. Heavy water can be toxic to humans, but a large amount would be needed for poisoning to occur.
The most cost-effective process for producing heavy water is the Girdler sulfide process. Heavy water is used in various industries and is sold in different grades of purity. Some of its applications include nuclear magnetic resonance, infrared spectroscopy, neutron moderation, neutrino detection, metabolic rate testing, neutron capture therapy, and the production of radioactive materials such as plutonium and tritium.
Composition
The deuterium nucleus consists of a neutron and a proton; the nucleus of a protium (normal hydrogen) atom consists of just a proton. The additional neutron makes a deuterium atom roughly twice as heavy as a protium atom.
A molecule of heavy water has two deuterium atoms in place of the two protium atoms of ordinary water. The term heavy water as defined by the IUPAC Gold Book can also refer to water in which a higher than usual proportion of hydrogen atoms are deuterium. For comparison, Vienna Standard Mean Ocean Water (the "ordinary water" used for a deuterium standard) contains about 156 deuterium atoms per million hydrogen atoms; that is, 0.0156% of the hydrogen atoms are ²H. Thus heavy water as defined by the Gold Book includes semiheavy water (hydrogen-deuterium oxide, HDO) and other mixtures of H₂O, D₂O, and HDO in which the proportion of deuterium is greater than usual. For instance, the heavy water used in CANDU reactors is a highly enriched water mixture that is mostly deuterium oxide D₂O, but also some hydrogen-deuterium oxide and a smaller amount of ordinary water H₂O. It is 99.75% enriched by hydrogen atom-fraction; that is, 99.75% of the hydrogen atoms are of the heavy type; however, heavy water in the Gold Book sense need not be so highly enriched. The weight of a heavy water molecule, however, is not very different from that of a normal water molecule, because about 89% of the mass of the molecule comes from the single oxygen atom rather than the two hydrogen atoms.
Heavy water is not radioactive. In its pure form, it has a density about 11% greater than water but is otherwise physically and chemically similar. Nevertheless, the various differences in deuterium-containing water (especially affecting the biological properties) are larger than in any other commonly occurring isotope-substituted compound because deuterium is unique among heavy stable isotopes in being twice as heavy as the lightest isotope. This difference increases the strength of water's hydrogen–oxygen bonds, and this in turn is enough to cause differences that are important to some biochemical reactions. The human body naturally contains deuterium equivalent to about five grams of heavy water, which is harmless. When a large fraction of water (> 50%) in higher organisms is replaced by heavy water, the result is cell dysfunction and death.
Heavy water was first produced in 1932, a few months after the discovery of deuterium. With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological and dust explosion hazards in the decommissioning phase. The graphite moderated Soviet RBMK design tried to avoid using either enriched uranium or heavy water (being cooled with ordinary water instead) which produced the positive void coefficient that was one of a series of flaws in reactor design leading to the Chernobyl disaster. Most modern reactors use enriched uranium with ordinary water as the moderator.
Other heavy forms of water
Semiheavy water
Semiheavy water, HDO, exists whenever there is water with light hydrogen (protium, ¹H) and deuterium (²H or D) in the mix. This is because hydrogen atoms (¹H and ²H) are rapidly exchanged between water molecules. Water containing 50% ¹H and 50% ²H in its hydrogen is actually about 50% HDO and 25% each of H₂O and D₂O, in dynamic equilibrium.
In normal water, about 1 molecule in 3,200 is HDO (one hydrogen in 6,400 is ²H), and heavy water molecules (D₂O) only occur in a proportion of about 1 molecule in 41 million (i.e. one in 6,400²). Thus semiheavy water molecules are far more common than "pure" (homoisotopic) heavy water molecules.
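The proportions quoted above follow from simple binomial statistics on the two hydrogen sites of a water molecule, assuming idealized, random H/D exchange (the real equilibrium deviates only slightly from this). A minimal Python sketch, not part of the original article, reproduces both the 50/50 case and the natural-abundance figures:

```python
# Minimal sketch: isotopologue fractions under idealized random H/D exchange.
# The 1/6,400 natural deuterium fraction is the figure quoted in the text.
def isotopologue_fractions(d_atom_fraction):
    """Return (H2O, HDO, D2O) mole fractions for a given deuterium atom fraction."""
    x = d_atom_fraction
    return ((1 - x) ** 2, 2 * x * (1 - x), x ** 2)

# Hydrogen that is 50% deuterium -> about 25% H2O, 50% HDO, 25% D2O
print(isotopologue_fractions(0.5))

# Natural water, about 1 deuterium per 6,400 hydrogens
h2o, hdo, d2o = isotopologue_fractions(1 / 6400)
print(f"HDO: about 1 in {1 / hdo:,.0f} molecules")  # roughly 1 in 3,200
print(f"D2O: about 1 in {1 / d2o:,.0f} molecules")  # roughly 1 in 41 million
```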
Heavy-oxygen water
Water enriched in the heavier oxygen isotopes ¹⁷O and ¹⁸O is also commercially available. It is "heavy water" as it is denser than normal water (H₂¹⁸O is approximately as dense as D₂O, H₂¹⁷O is about halfway between H₂O and D₂O), but is rarely called heavy water, since it does not contain the excess deuterium that gives D₂O its unusual nuclear and biological properties. It is more expensive than D₂O due to the more difficult separation of ¹⁷O and ¹⁸O. H₂¹⁸O is also used for production of fluorine-18 in radiopharmaceuticals and radiotracers, and positron emission tomography. Small amounts of ¹⁷O and ¹⁸O are naturally present in water, and most processes enriching heavy water also enrich heavier isotopes of oxygen as a side-effect. This is undesirable if the heavy water is to be used as a neutron moderator in nuclear reactors, as ¹⁷O can undergo neutron capture, followed by emission of an alpha particle, producing radioactive ¹⁴C. However, doubly labeled water, containing both a heavy oxygen and hydrogen, is useful as a non-radioactive isotopic tracer.
Compared to the isotopic change of hydrogen atoms, the isotopic change of oxygen has a smaller effect on the physical properties.
Tritiated water
Tritiated water contains tritium (³H) in place of protium (¹H) or deuterium (²H). Since tritium is radioactive, tritiated water is also radioactive.
Physical properties
The physical properties of water and heavy water differ in several respects. Heavy water is less dissociated than light water at a given temperature, and the true concentration of D⁺ ions is less than H⁺ ions would be for light water at the same temperature. The same is true of OD⁻ vs. OH⁻ ions. For heavy water Kw D₂O (25.0 °C) = 1.35 × 10⁻¹⁵, and [D⁺] must equal [OD⁻] for neutral water. Thus pKw D₂O = p[OD⁻] + p[D⁺] = 7.44 + 7.44 = 14.87 (25.0 °C), and the p[D⁺] of neutral heavy water at 25.0 °C is 7.44.
The pD of heavy water is generally measured using pH electrodes, giving a pH (apparent) value, or pHa, and at various temperatures a true acidic pD can be estimated from the directly measured pHa, such that pD = pHa (apparent reading from the pH meter) + 0.41. The electrode correction for alkaline conditions is 0.456 for heavy water, so the alkaline correction is pD = pHa (apparent reading from the pH meter) + 0.456. These corrections differ slightly from the 0.44 by which p[D⁺] and p[OD⁻] in neutral heavy water differ from the corresponding values in light water.
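As a quick check of the figures above, here is a minimal Python sketch (an illustration, not from the article) that reproduces the neutral pD from the quoted ion product and applies the quoted electrode corrections; the 0.41 and 0.456 offsets are the empirical values given in the text, not universal constants, and the example pHa readings are arbitrary illustrative values:

```python
import math

# Ion product of heavy water at 25.0 °C, as quoted in the text.
Kw_D2O = 1.35e-15
pKw = -math.log10(Kw_D2O)     # about 14.87
pD_neutral = pKw / 2          # [D+] = [OD-] in neutral D2O, so pD is about 7.43-7.44
print(f"pKw = {pKw:.2f}, neutral pD = {pD_neutral:.2f}")

# Empirical glass-electrode corrections quoted in the text.
def pD_from_meter(pHa, alkaline=False):
    """Estimate pD from an apparent pH-meter reading (pHa) in heavy water."""
    return pHa + (0.456 if alkaline else 0.41)

print(pD_from_meter(7.03))                  # acidic-range correction (+0.41)
print(pD_from_meter(9.50, alkaline=True))   # alkaline-range correction (+0.456)
```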
Heavy water is 10.6% denser than ordinary water, and heavy water's physically different properties can be seen without equipment if a frozen sample is dropped into normal water, as it will sink. If the water is ice-cold the higher melting temperature of heavy ice can also be observed: it melts at 3.7 °C, and thus does not melt in ice-cold normal water.
A 1935 experiment reported not the "slightest difference" in taste between ordinary and heavy water. However, a more recent study confirmed anecdotal observation that heavy water tastes slightly sweet to humans, with the effect mediated by the TAS1R2/TAS1R3 taste receptor. Rats given a choice between distilled normal water and heavy water were able to avoid the heavy water based on smell, and it may have a different taste. Some people report that minerals in water affect taste, e.g. potassium lending a sweet taste to hard water, but there are many factors of a perceived taste in water besides mineral contents.
Heavy water lacks the characteristic blue color of light water; this is because the molecular vibration harmonics, which in light water cause weak absorption in the red part of the visible spectrum, are shifted into the infrared and thus heavy water does not absorb red light.
No physical properties are listed for "pure" semi-heavy water because it is unstable as a bulk liquid. In the liquid state, a few water molecules are always in an ionized state, which means the hydrogen atoms can exchange among different oxygen atoms. Semi-heavy water could, in theory, be created via a chemical method, but it would rapidly transform into a dynamic mixture of 25% light water, 25% heavy water, and 50% semi-heavy. However, if it were made in the gas phase and directly deposited into a solid, semi-heavy water in the form of ice could be stable. This is due to collisions between water vapor molecules being almost completely negligible in the gas phase at standard temperatures, and once crystallized, collisions between the molecules cease altogether due to the rigid lattice structure of solid ice.
Heavy water exchanges with atmospheric water until it reaches the usual hydrogen-isotopic ratio.
History
The US scientist and Nobel laureate Harold Urey discovered the isotope deuterium in 1931 and was later able to concentrate it in water. Urey's mentor Gilbert Newton Lewis isolated the first sample of pure heavy water by electrolysis in 1933. George de Hevesy and Erich Hofer used heavy water in 1934 in one of the first biological tracer experiments, to estimate the rate of turnover of water in the human body. The history of large-quantity production and use of heavy water, in early nuclear experiments, is described below.
Emilian Bratu and Otto Redlich studied the autodissociation of heavy water in 1934.
In the 1930s, it was suspected by the United States and Soviet Union that Austrian chemist Fritz Johann Hansgirg built a pilot plant for the Empire of Japan in Japanese ruled northern Korea to produce heavy water by using a new process he had invented.
During the Second World War, the company Fosfatbolaget in Ljungaverk, Sweden, produced 2,300 liters of heavy water per year. The heavy water was then sold both to Germany and to the Manhattan Project at a price of 1.40 SEK per gram.
In October 1939, Soviet physicists Yakov Borisovich Zel'dovich and Yulii Borisovich Khariton concluded that heavy water and carbon were the only feasible moderators for a natural uranium reactor, and in August 1940, along with Georgy Flyorov, submitted a plan to the Russian Academy of Sciences calculating that 15 tons of heavy water were needed for a reactor. With the Soviet Union having no uranium mines at the time, young Academy workers were sent to Leningrad photographic shops to buy uranium nitrate, but the entire heavy water project was halted in 1941 when German forces invaded during Operation Barbarossa.
By 1943, Soviet scientists had discovered that all scientific literature relating to heavy water had disappeared from the West, which Flyorov in a letter warned Soviet leader Joseph Stalin about, and at which time there was only 2–3 kg of heavy water in the entire country. In late 1943, the Soviet purchasing commission in the U.S. obtained 1 kg of heavy water and a further 100 kg in February 1945, and upon World War II ending, the NKVD took over the project.
In October 1946, as part of the Russian Alsos, the NKVD deported to the Soviet Union from Germany the German scientists who had worked on heavy water production during the war, including Karl-Hermann Geib, the inventor of the Girdler sulfide process. These German scientists worked under the supervision of German physical chemist Max Volmer at the Institute of Physical Chemistry in Moscow with the plant they constructed producing large quantities of heavy water by 1948.
Effect on biological systems
Different isotopes of chemical elements have slightly different chemical behaviors, but for most elements the differences are far too small to have a biological effect. In the case of hydrogen, larger differences in chemical properties among protium, deuterium, and tritium occur because chemical bond energy depends on the reduced mass of the nucleus–electron system; this is altered in heavy-hydrogen compounds (hydrogen-deuterium oxide is the most common) more than for heavy-isotope substitution involving other chemical elements. The isotope effects are especially relevant in biological systems, which are very sensitive to even the smaller changes, due to isotopically influenced properties of water when it acts as a solvent.
To perform their tasks, enzymes rely on their finely tuned networks of hydrogen bonds, both in the active center with their substrates and outside the active center, to stabilize their tertiary structures. As a hydrogen bond with deuterium is slightly stronger than one involving ordinary hydrogen, in a highly deuterated environment, some normal reactions in cells are disrupted.
Particularly hard-hit by heavy water are the delicate assemblies of mitotic spindle formations necessary for cell division in eukaryotes. Plants stop growing and seeds do not germinate when given only heavy water, because heavy water stops eukaryotic cell division. Tobacco seed does not germinate in heavy water, but wheat seed does. Cells grown in deuterium are larger, and the direction of cell division is modified. The cell membrane also changes, and it reacts first to the impact of heavy water. In 1972, it was demonstrated that an increase in the percentage of deuterium in water reduces plant growth. Research conducted on the growth of prokaryotic microorganisms in artificial conditions of a heavy hydrogen environment showed that in this environment, all the hydrogen atoms of water could be replaced with deuterium. Experiments showed that bacteria can live in 98% heavy water. Concentrations over 50% are lethal to multicellular organisms, but a few exceptions are known: plant species such as switchgrass (Panicum virgatum), which is able to grow on 50% D₂O; Arabidopsis thaliana (70% D₂O); Vesicularia dubyana (85% D₂O); Funaria hygrometrica (90% D₂O); and the anhydrobiotic species of nematode Panagrolaimus superbus (nearly 100% D₂O).
A comprehensive study of heavy water on the fission yeast Schizosaccharomyces pombe showed that the cells displayed an altered glucose metabolism and slow growth at high concentrations of heavy water. In addition, the cells activated the heat-shock response pathway and the cell integrity pathway, and mutants in the cell integrity pathway displayed increased tolerance to heavy water. Despite its toxicity at high levels, heavy water has been observed to extend lifespan of certain yeasts by up to 85%, with the hypothesized mechanism being the reduction of reactive oxygen species turnover.
Heavy water affects the period of circadian oscillations, consistently increasing the length of each cycle. The effect has been demonstrated in unicellular organisms, green plants, isopods, insects, birds, mice, and hamsters. The mechanism is unknown.
Like ethanol, heavy water temporarily changes the relative density of cupula relative to the endolymph in the vestibular organ, causing positional nystagmus, illusions of bodily rotations, dizziness, and nausea. However, the direction of nystagmus is in the opposite direction of ethanol, since it is denser than water, not lighter.
Effect on animals
Experiments with mice, rats, and dogs have shown that a degree of 25% deuteration prevents gametes or zygotes from developing, causing (sometimes irreversible) sterility. High concentrations of heavy water (90%) rapidly kill fish, tadpoles, flatworms, and Drosophila. Mice raised from birth with 30% heavy water have 25% deuteration in body fluid and 10% in brains. They are normal except for sterility. Deuteration during pregnancy induces fetal abnormality. Higher deuteration in body fluid causes death. Mammals (for example, rats) given heavy water to drink die after a week, at a time when their body water approaches about 50% deuteration. The mode of death appears to be the same as that in cytotoxic poisoning (such as chemotherapy) or in acute radiation syndrome (though deuterium is not radioactive), and is caused by deuterium's action in generally inhibiting cell division. It is more toxic to malignant cells than normal cells, but the concentrations needed are too high for regular use. As may occur in chemotherapy, deuterium-poisoned mammals die of a failure of bone marrow (producing bleeding and infections) and of intestinal-barrier functions (producing diarrhea and loss of fluids).
Despite the problems of plants and animals in living with too much deuterium, prokaryotic organisms such as bacteria, which do not have the mitotic problems induced by deuterium, may be grown and propagated in fully deuterated conditions, resulting in replacement of all hydrogen atoms in the bacterial proteins and DNA with the deuterium isotope. This leads to a process of bootstrapping. With prokaryotes producing fully deuterated glucose, fully deuterated Escherichia coli and Torula were raised, and they could produce even more complex fully deuterated chemicals. Molds like Aspergillus could not replicate under fully deuterated conditions.
In higher organisms, full replacement with heavy isotopes can be accomplished with other non-radioactive heavy isotopes (such as carbon-13, nitrogen-15, and oxygen-18), but this cannot be done for deuterium. This is a consequence of the ratio of nuclear masses between the isotopes of hydrogen, which is much greater than for any other element.
Deuterium oxide is used to enhance boron neutron capture therapy, but this effect does not rely on the biological or chemical effects of deuterium, but instead on deuterium's ability to moderate (slow) neutrons without capturing them. 2021 experimental evidence indicates that systemic administration of deuterium oxide (30% drinking water supplementation) suppresses tumor growth in a standard mouse model of human melanoma, an effect attributed to selective induction of cellular stress signaling and gene expression in tumor cells.
Toxicity in humans
Because it would take a very large amount of heavy water to replace 25% to 50% of a human being's body water (water being in turn 50–75% of body weight) with heavy water, accidental or intentional poisoning with heavy water is unlikely to the point of practical disregard. Poisoning would require that the victim ingest large amounts of heavy water without significant normal water intake for many days to produce any noticeable toxic effects.
Oral doses of heavy water in the range of several grams, as well as heavy oxygen (¹⁸O), are routinely used in human metabolic experiments. (See doubly labeled water testing.) Since one in about every 6,400 hydrogen atoms is deuterium, a human containing of body water would normally contain enough deuterium (about ) to make of pure heavy water, so roughly this dose is required to double the amount of deuterium in the body.
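As a rough illustration of the scale involved, the sketch below estimates the body's natural deuterium inventory from the 1-in-6,400 abundance quoted above. The assumed 40 kg of body water is a hypothetical round figure chosen for the example, not a value taken from this article.

```python
# Back-of-the-envelope estimate of the natural deuterium content of the body.
# Assumption (hypothetical): 40 kg of body water; the 1-in-6,400 deuterium
# abundance is the figure quoted in the text above.
body_water_g = 40_000.0
M_H2O, M_D, M_D2O = 18.015, 2.014, 20.03  # molar masses in g/mol

mol_H = 2 * body_water_g / M_H2O        # moles of hydrogen atoms in the body water
mol_D = mol_H / 6400                    # moles of naturally occurring deuterium
grams_D = mol_D * M_D                   # roughly 1.4 g of deuterium
grams_D2O = (mol_D / 2) * M_D2O         # roughly 7 g if expressed as pure heavy water
print(f"~{grams_D:.1f} g deuterium, equivalent to ~{grams_D2O:.1f} g of D2O")
```

On this estimate, a dose of a few grams of heavy water is indeed comparable to the deuterium the body already contains, consistent with the statement above.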
A loss of blood pressure may partially explain the reported incidence of dizziness upon ingestion of heavy water. However, it is more likely that this symptom can be attributed to altered vestibular function. Heavy water, like ethanol, causes a temporary difference in the density of endolymph within the cupula, which confuses the vestibulo-ocular reflex and causes motion sickness symptoms.
Heavy water radiation contamination confusion
Although many people associate heavy water primarily with its use in nuclear reactors, pure heavy water is not radioactive. Commercial-grade heavy water is slightly radioactive due to the presence of minute traces of natural tritium, but the same is true of ordinary water. Heavy water that has been used as a coolant in nuclear power plants contains substantially more tritium as a result of neutron bombardment of the deuterium in the heavy water (tritium is a health risk when ingested in large quantities).
In 1990, a disgruntled employee at the Point Lepreau Nuclear Generating Station in Canada obtained a sample (estimated as about a "half cup") of heavy water from the primary heat transport loop of the nuclear reactor, and loaded it into a cafeteria drink dispenser. Eight employees drank some of the contaminated water. The incident was discovered when employees began leaving bioassay urine samples with elevated tritium levels. The quantity of heavy water involved was far below levels that could induce heavy water toxicity, but several employees received elevated radiation doses from tritium and neutron-activated chemicals in the water. This was not an incident of heavy water poisoning, but rather radiation poisoning from other isotopes in the heavy water.
Some news services were not careful to distinguish these points, and some of the public were left with the impression that heavy water is normally radioactive and more severely toxic than it actually is. Even if pure heavy water had been used in the water dispenser indefinitely, it is not likely the incident would have been detected or caused harm, since no employee would be expected to get much more than 25% of their daily drinking water from such a source.
Production methods
The most cost-effective process for producing heavy water is the dual temperature exchange sulfide process (known as the Girdler sulfide process) developed in parallel by Karl-Hermann Geib and Jerome S. Spevack in 1943. An alternative process, patented by Graham M. Keyser, uses lasers to selectively dissociate deuterated hydrofluorocarbons to form deuterium fluoride, which can then be separated by physical means. Although the energy consumption for this process is much less than for the Girdler sulfide process, this method is currently uneconomical due to the expense of procuring the necessary hydrofluorocarbons.
As noted, modern commercial heavy water is almost universally referred to, and sold as, deuterium oxide. It is most often sold in various grades of purity, from 98% enrichment to 99.75–99.98% deuterium enrichment (nuclear reactor grade) and occasionally even higher isotopic purity.
Production by country
Argentina
Argentina was the main producer of heavy water, using an ammonia/hydrogen exchange based plant supplied by Switzerland's Sulzer company. It was also a major exporter to Canada, Germany, the US and other countries. The heavy water production facility located in Arroyito was the world's largest heavy water production facility. Argentina produced of heavy water per year in 2015 using the monothermal ammonia-hydrogen isotopic exchange method. Since 2017, the Arroyito plant has not been operational.
United States
During the Manhattan Project the United States constructed three heavy water production plants as part of the P-9 Project at Morgantown Ordnance Works, near Morgantown, West Virginia; at the Wabash River Ordnance Works, near Dana and Newport, Indiana; and at the Alabama Ordnance Works, near Childersburg and Sylacauga, Alabama. Heavy water was also acquired from the Cominco plant in Trail, British Columbia, Canada. The Chicago Pile-3 experimental reactor used heavy water as a moderator and went critical in 1944. The three domestic production plants were shut down in 1945 after producing around of product. The Wabash plant resumed heavy water production in 1952.
In 1953, the United States began using heavy water in plutonium production reactors at the Savannah River Site. The first of the five heavy water reactors came online in 1953, and the last was placed in cold shutdown in 1996. The reactors were heavy water reactors so that they could produce both plutonium and tritium for the US nuclear weapons program.
The U.S. developed the Girdler sulfide chemical exchange production process—which was first demonstrated on a large scale at the Dana, Indiana plant in 1945 and at the Savannah River Site in 1952.
India
India is the world's largest producer of heavy water through its Heavy Water Board. It exports heavy water to countries including the Republic of Korea, China, and the United States.
Norway
In 1934, Norsk Hydro built the first commercial heavy water plant at Vemork, Tinn, eventually producing per day. From 1940 and throughout World War II, the plant was under German control, and the Allies decided to destroy the plant and its heavy water to inhibit German development of nuclear weapons. In late 1942, a planned raid called Operation Freshman by British airborne troops failed, both gliders crashing. The raiders were killed in the crash or subsequently executed by the Germans.
On the night of 27 February 1943 Operation Gunnerside succeeded. Norwegian commandos and local resistance managed to demolish small, but key parts of the electrolytic cells, dumping the accumulated heavy water down the factory drains.
On 16 November 1943, the Allied air forces dropped more than 400 bombs on the site. The Allied air raid prompted the Nazi government to move all available heavy water to Germany for safekeeping. On 20 February 1944, a Norwegian partisan sank the ferry M/F Hydro carrying heavy water across Lake Tinn, at the cost of 14 Norwegian civilian lives, and most of the heavy water was presumably lost. A few of the barrels were only half full, hence buoyant, and may have been salvaged and transported to Germany.
Recent investigation of production records at Norsk Hydro and analysis of an intact barrel that was salvaged in 2004 revealed that although the barrels in this shipment contained water of pH 14—indicative of the alkaline electrolytic refinement process—they did not contain high concentrations of D₂O. Despite the apparent size of the shipment, the total quantity of pure heavy water was quite small, most barrels only containing 0.5–1% pure heavy water. The Germans would have needed about 5 tons of heavy water to get a nuclear reactor running. The manifest clearly indicated that there was only half a ton of heavy water being transported to Germany. Hydro was carrying far too little heavy water for one reactor, let alone the 10 or more tons needed to make enough plutonium for a nuclear weapon. The German nuclear weapons program was much less advanced than the Manhattan Project, and no reactor constructed in Nazi Germany came close to reaching criticality. No amount of heavy water would have changed that.
Israel admitted running the Dimona reactor with Norwegian heavy water sold to it in 1959. Through re-exports via Romania and Germany, India probably also used Norwegian heavy water.
Canada
As part of its contribution to the Manhattan Project, Canada built and operated a per month (design capacity) electrolytic heavy water plant at Trail, British Columbia, which started operation in 1943.
The Atomic Energy of Canada Limited (AECL) design of power reactor requires large quantities of heavy water to act as a neutron moderator and coolant. AECL ordered two heavy water plants, which were built and operated in Atlantic Canada at Glace Bay, Nova Scotia (by Deuterium of Canada Limited) and Point Tupper, Richmond County, Nova Scotia (by Canadian General Electric). These plants proved to have significant design, construction and production problems. The Glace Bay plant reached full production in 1984 after being taken over by AECL in 1971. The Point Tupper plant reached full production in 1974 and AECL purchased the plant in 1975. Design changes from the Point Tupper plant were carried through as AECL built the Bruce Heavy Water Plant (), which it later sold to Ontario Hydro, to ensure a reliable supply of heavy water for future power plants. The two Nova Scotia plants were shut down in 1985 when their production proved unnecessary.
The Bruce Heavy Water Plant (BHWP) in Ontario was the world's largest heavy water production plant with a capacity of 1600 tonnes per year at its peak (800 tonnes per year per full plant, two fully operational plants at its peak). It used the Girdler sulfide process to produce heavy water, and required 340,000 tonnes of feed water to produce one tonne of heavy water. It was part of a complex that included eight CANDU reactors, which provided heat and power for the heavy water plant. The site was located at Douglas Point/Bruce Nuclear Generating Station near Tiverton, Ontario, on Lake Huron where it had access to the waters of the Great Lakes.
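For a sense of what the 340,000-tonne feed requirement implies, the sketch below estimates what fraction of the deuterium passing through the plant ended up in the product. It uses the 1-in-6,400 natural abundance quoted elsewhere in this article and is an order-of-magnitude illustration, not plant data.

```python
# Rough deuterium mass balance for the Girdler sulfide figures quoted above.
feed_tonnes = 340_000.0                  # feed water per tonne of D2O (from the text)
M_H2O, M_D, M_D2O = 18.015, 2.014, 20.03  # molar masses in g/mol

d_fraction_feed = (2 / 6400) * M_D / M_H2O        # ~3.5e-5 mass fraction of D in natural water
d_in_feed = feed_tonnes * d_fraction_feed          # ~12 tonnes of deuterium in the feed
d_in_product = 1.0 * (2 * M_D / M_D2O)             # ~0.2 tonnes of deuterium in 1 t of D2O
print(f"deuterium recovery ≈ {d_in_product / d_in_feed:.1%}")  # only a few percent
```

In other words, the process recovers only a small percentage of the deuterium available in the feed, which helps explain why such enormous quantities of feed water, and the co-located reactors supplying heat and power, were needed.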
AECL issued the construction contract in 1969 for the first BHWP unit (BHWP A). Commissioning of BHWP A was done by Ontario Hydro from 1971 through 1973, with the plant entering service on 28 June 1973, and design production capacity being achieved in April 1974. Due to the success of BHWP A and the large amount of heavy water that would be required for the large numbers of upcoming planned CANDU nuclear power plant construction projects, Ontario Hydro commissioned three additional heavy water production plants for the Bruce site (BHWP B, C, and D). BHWP B was placed into service in 1979. These first two plants were significantly more efficient than planned, and the number of CANDU construction projects ended up being significantly lower than originally planned, which led to the cancellation of construction on BHWP C & D. In 1984, BHWP A was shut down. By 1993 Ontario Hydro had produced enough heavy water to meet all of its anticipated domestic needs (which were lower than expected due to improved efficiency in the use and recycling of heavy water), so they shut down and demolished half of the capacity of BHWP B. The remaining capacity continued to operate in order to fulfil demand for heavy water exports until it was permanently shut down in 1997, after which the plant was gradually dismantled and the site cleared.
AECL is currently researching other more efficient and environmentally benign processes for creating heavy water. This is relevant for CANDU reactors since heavy water represented about 15–20% of the total capital cost of each CANDU plant in the 1970s and 1980s.
Iran
Since 1996, a heavy water production plant has been under construction at Khondab near Arak. On 26 August 2006, Iranian President Ahmadinejad inaugurated the expansion of the country's heavy-water plant. Iran has indicated that the heavy-water production facility will operate in tandem with a 40 MW research reactor that had a scheduled completion date in 2009. Iran produced deuterated solvents in early 2011 for the first time. Under the nuclear agreement of July 2015, the core of the IR-40 is to be redesigned.
Under the Joint Comprehensive Plan of Action, Iran is permitted to store only of heavy water. Iran exports its excess production, making it the world's third largest exporter of heavy water. As of 2023, Iran continues to sell heavy water, with customers reportedly offering prices of over 1,000 dollars per liter.
Pakistan
In Pakistan, there are two heavy water production sites, both located in Punjab. Commissioned in 1995–96, the Khushab Nuclear Complex is a central element of Pakistan's stockpile program for production of weapon-grade plutonium, deuterium, and tritium for advanced compact warheads (i.e. thermonuclear weapons). Another heavy water production facility is located in Multan; its output is supplied to the nuclear power plants at Karachi and Chashma.
In the early 1980s, Pakistan succeeded in acquiring a tritium purification and storage plant, as well as deuterium and tritium precursor materials, from two former East German firms. Unlike India and Iran, Pakistan neither exports its heavy water nor makes it available for purchase by any nation; it is used solely for its weapons complex and for energy generation at its local nuclear power plants.
Other countries
Romania produced heavy water at the now-decommissioned Drobeta Girdler sulfide plant for domestic and export purposes. France operated a small plant during the 1950s and 1960s.
Applications
Nuclear magnetic resonance
Deuterium oxide is used in nuclear magnetic resonance spectroscopy when using water as a solvent if the nuclide of interest is hydrogen. This is because the signal from light-water (H₂O) solvent molecules would overwhelm the signal from the molecule of interest dissolved in it. Deuterium has a different magnetic moment and therefore does not contribute to the ¹H-NMR signal at the hydrogen-1 resonance frequency.
For some experiments, it may be desirable to identify the labile hydrogens on a compound, that is, hydrogens that can easily exchange away as H⁺ ions from some positions in a molecule. With addition of D₂O, sometimes referred to as a D₂O shake, labile hydrogens exchange between the compound of interest and the solvent, leading to replacement of those specific ¹H atoms in the compound with ²H. These positions in the molecule then do not appear in the ¹H-NMR spectrum.
Organic chemistry
Deuterium oxide is often used as the source of deuterium for preparing specifically labelled isotopologues of organic compounds. For example, C-H bonds adjacent to ketonic carbonyl groups can be replaced by C-D bonds, using acid or base catalysis. Trimethylsulfoxonium iodide, made from dimethyl sulfoxide and methyl iodide, can be recrystallized from deuterium oxide, and then dissociated to regenerate methyl iodide and dimethyl sulfoxide, both deuterium labelled. In cases where specific double labelling by deuterium and tritium is contemplated, the researcher must be aware that deuterium oxide, depending upon age and origin, can contain some tritium.
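As an illustration of the acid- or base-catalysed exchange described above, the alpha hydrogens of a simple ketone such as acetone (chosen here purely as an example) exchange stepwise with the solvent until fully deuterated:

```latex
\mathrm{CH_3COCH_3 + D_2O \;\rightleftharpoons\; CH_3COCH_2D + HDO}
\qquad \text{(repeated exchange eventually gives } \mathrm{CD_3COCD_3}\text{)}
```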
Infrared spectroscopy
Deuterium oxide is often used instead of water when collecting FTIR spectra of proteins in solution. H₂O creates a strong band that overlaps with the amide I region of proteins. The band from D₂O is shifted away from the amide I region.
Neutron moderator
Heavy water is used in certain types of nuclear reactors, where it acts as a neutron moderator to slow down neutrons so that they are more likely to react with the fissile uranium-235 than with uranium-238, which captures neutrons without fissioning.
The CANDU reactor uses this design. Light water also acts as a moderator, but because light water absorbs more neutrons than heavy water, reactors using light water as a moderator must use enriched uranium rather than natural uranium, otherwise criticality is impossible. A significant fraction of outdated power reactors, such as the RBMK reactors in the USSR, were constructed using normal water for cooling but graphite as a moderator. However, the danger of graphite in power reactors (the graphite fire contributed to the severity of the Chernobyl disaster) has led to the discontinuation of graphite in standard reactor designs.
The breeding and extraction of plutonium can be a relatively rapid and cheap route to building a nuclear weapon, as chemical separation of plutonium from fuel is easier than isotopic separation of U-235 from natural uranium.
Among current and past nuclear weapons states, Israel, India, and North Korea first used plutonium from heavy water moderated reactors burning natural uranium, while China, South Africa and Pakistan first built weapons using highly enriched uranium.
The Nazi nuclear program operated with more modest means than the contemporary Manhattan Project and was hampered both by continuous infighting and by the exile of many leading scientists (many of whom ended up working for the Manhattan Project). It wrongly dismissed graphite as a moderator because the effect of impurities was not recognized. Given that isotope separation of uranium was deemed too big a hurdle, this left heavy water as a potential moderator. Other problems were the ideological aversion to what propaganda dismissed as "Jewish physics" and the mistrust between those who had been enthusiastic Nazis even before 1933 and those who were Mitläufer or trying to keep a low profile. In part due to Allied sabotage and commando raids on Norsk Hydro (then the world's largest producer of heavy water), as well as the aforementioned infighting, the German nuclear program never managed to assemble enough uranium and heavy water in one place to achieve criticality, despite possessing enough of both by the end of the war.
In the U.S., however, the first experimental atomic reactor (1942), as well as the Manhattan Project Hanford production reactors that produced the plutonium for the Trinity test and Fat Man bombs, all used pure carbon (graphite) neutron moderators combined with normal water cooling pipes. They functioned with neither enriched uranium nor heavy water. Russian and British plutonium production also used graphite-moderated reactors.
There is no evidence that civilian heavy water power reactors—such as the CANDU or Atucha designs—have been used to produce military fissile materials. In nations that do not already possess nuclear weapons, nuclear material at these facilities is under IAEA safeguards to discourage any diversion.
Due to its potential for use in nuclear weapons programs, the possession or import/export of large industrial quantities of heavy water are subject to government control in several countries. Suppliers of heavy water and heavy water production technology typically apply IAEA (International Atomic Energy Agency) administered safeguards and material accounting to heavy water. (In Australia, the Nuclear Non-Proliferation (Safeguards) Act 1987.) In the U.S. and Canada, non-industrial quantities of heavy water (i.e., in the gram to kg range) are routinely available without special license through chemical supply dealers and commercial companies such as the world's former major producer Ontario Hydro.
Neutrino detector
The Sudbury Neutrino Observatory (SNO) in Sudbury, Ontario uses 1,000 tonnes of heavy water on loan from Atomic Energy of Canada Limited. The neutrino detector is underground in a mine, to shield it from muons produced by cosmic rays. SNO was built to answer the question of whether or not electron-type neutrinos produced by fusion in the Sun (the only type the Sun should be producing directly, according to theory) might be able to turn into other types of neutrinos on the way to Earth. SNO detects the Cherenkov radiation in the water from high-energy electrons produced from electron-type neutrinos as they undergo charged current (CC) interactions with neutrons in deuterium, turning them into protons and electrons (however, only the electrons are fast enough to produce Cherenkov radiation for detection).
SNO also detects neutrino electron scattering (ES) events, where the neutrino transfers energy to the electron, which then proceeds to generate Cherenkov radiation distinguishable from that produced by CC events. The first of these two reactions is produced only by electron-type neutrinos, while the second can be caused by all of the neutrino flavors. The use of deuterium is critical to the SNO function, because all three "flavours" (types) of neutrinos may be detected in a third type of reaction as well, neutrino-disintegration, in which a neutrino of any type (electron, muon, or tau) scatters from a deuterium nucleus (deuteron), transferring enough energy to break up the loosely bound deuteron into a free neutron and proton via a neutral current (NC) interaction.
This event is detected when the free neutron is absorbed by ³⁵Cl⁻ present from NaCl deliberately dissolved in the heavy water, causing emission of characteristic capture gamma rays. Thus, in this experiment, heavy water not only provides the transparent medium necessary to produce and visualize Cherenkov radiation, but it also provides deuterium to detect exotic mu type (μ) and tau (τ) neutrinos, as well as a non-absorbent moderator medium to preserve free neutrons from this reaction, until they can be absorbed by an easily detected neutron-activated isotope.
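In compact form, the three detection channels described above are usually written as follows (d is the deuteron and ν_x stands for a neutrino of any flavour; this is the standard notation, not taken from SNO publications):

```latex
\text{CC:}\;\; \nu_e + d \rightarrow p + p + e^{-} \qquad
\text{ES:}\;\; \nu_x + e^{-} \rightarrow \nu_x + e^{-} \qquad
\text{NC:}\;\; \nu_x + d \rightarrow \nu_x + p + n
```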
Metabolic rate and water turnover testing in physiology and biology
Heavy water is employed as part of a mixture with H₂¹⁸O for a common and safe test of mean metabolic rate in humans and animals undergoing their normal activities. The elimination rate of deuterium alone is a measure of body water turnover. This is highly variable between individuals and depends on environmental conditions as well as subject size, sex, age and physical activity.
Tritium production
Tritium is the active substance in self-powered lighting and controlled nuclear fusion, its other uses including autoradiography and radioactive labeling. It is also used in nuclear weapon design for boosted fission weapons and initiators. Tritium undergoes beta decay into helium-3, which is a stable, but rare, isotope of helium that is itself highly sought after. Some tritium is created in heavy water moderated reactors when deuterium captures a neutron. This reaction has a small cross-section (probability of a single neutron-capture event) and produces only small amounts of tritium, although enough to justify cleaning tritium from the moderator every few years to reduce the environmental risk of tritium escape. Given that helium-3 is a neutron poison with orders of magnitude higher capture cross section than any component of heavy or tritiated water, its accumulation in a heavy water neutron moderator or target for tritium production must be kept to a minimum.
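The capture and decay steps described above can be written explicitly in standard nuclear notation:

```latex
{}^{2}\mathrm{H} + n \rightarrow {}^{3}\mathrm{H} + \gamma
\qquad\qquad
{}^{3}\mathrm{H} \rightarrow {}^{3}\mathrm{He} + e^{-} + \bar{\nu}_e
\quad (t_{1/2} \approx 12.3~\text{years})
```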
Producing a lot of tritium in this way would require reactors with very high neutron fluxes, or with a very high proportion of heavy water to nuclear fuel and very low neutron absorption by other reactor material. The tritium would then have to be recovered by isotope separation from a much larger quantity of deuterium, unlike production from lithium-6 (the present method), where only chemical separation is needed.
Deuterium's absorption cross section for thermal neutrons is 0.52 millibarn (5.2 × 10⁻³² m²; 1 barn = 10⁻²⁸ m²), while those of oxygen-16 and oxygen-17 are 0.19 and 240 millibarn, respectively. ¹⁷O makes up 0.038% of natural oxygen, making the overall cross section of natural oxygen 0.28 millibarn. Therefore, in D₂O with natural oxygen, 21% of neutron captures are on oxygen, a fraction that rises further as ¹⁷O builds up from neutron capture on ¹⁶O. Also, ¹⁷O may emit an alpha particle on neutron capture, producing radioactive carbon-14.
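The 21% figure can be reproduced from the cross sections and abundance quoted in this paragraph; the short check below is illustrative only.

```python
# Fraction of thermal-neutron captures on oxygen in D2O
# (all cross sections in millibarns, values as quoted in the text above).
sigma_D, sigma_O16, sigma_O17 = 0.52, 0.19, 240.0
f_O17 = 0.00038                                          # natural abundance of oxygen-17

sigma_O = (1 - f_O17) * sigma_O16 + f_O17 * sigma_O17    # ~0.28 mb per oxygen atom
per_D2O = 2 * sigma_D + sigma_O                          # each D2O molecule has 2 D and 1 O
print(f"oxygen cross section ≈ {sigma_O:.2f} mb")
print(f"captures on oxygen ≈ {sigma_O / per_D2O:.0%}")   # ≈ 21%
```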
See also
Cold fusion
Deuterium-depleted water
Interstellar ice
References
External links
Heavy Water and Heavy Water – Part II at The Periodic Table of Videos (University of Nottingham)
Heavy Water Production, Federation of American Scientists
Heavy Water: A Manufacturer's Guide for the Hydrogen Century
Is "heavy water" dangerous? Straight Dope Staff Report. 9 December 2003
Annotated bibliography for heavy water from the Alsos Digital Library for Nuclear Issues
Ice is supposed to float, but with a little heavy water, you can make cubes that sink
Isotopic Effects of Heavy Water in Biological Objects Oleg Mosin, Ignat Ignatov
J. Chem. Phys. 41, 1964
MOU between HWB and M/s Clearsynth MOU between HWB and M/s Clearsynth, Mumbai for sale of 20 tonnes of Heavy Water in a year for its non-nuclear applications.
Deuterated compounds
Deuterated solvents
Forms of water
Neutron moderators
Nuclear reactor coolants
Oxygen compounds | Heavy water | [
"Physics",
"Chemistry"
] | 9,227 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Phases of matter",
"Forms of water",
"Matter"
] |
14,285 | https://en.wikipedia.org/wiki/History%20of%20science%20and%20technology | The history of science and technology (HST) is a field of history that examines the development of the understanding of the natural world (science) and humans' ability to manipulate it (technology) at different points in time. This academic discipline also examines the cultural, economic, and political context and impacts of scientific practices; it likewise may study the consequences of new technologies on existing scientific fields.
Academic study of history of science
History of science is an academic discipline with an international community of specialists. Main professional organizations for this field include the History of Science Society, the British Society for the History of Science, and the European Society for the History of Science.
Much of the study of the history of science has been devoted to answering questions about what science is, how it functions, and whether it exhibits large-scale patterns and trends.
History of the academic study of history of science
Histories of science were originally written by practicing and retired scientists, starting primarily with William Whewell's History of the Inductive Sciences (1837), as a way to communicate the virtues of science to the public.
Auguste Comte proposed that there should be a specific discipline to deal with the history of science.
The development of the distinct academic discipline of the history of science and technology did not occur until the early 20th century. Historians have suggested that this was bound to the changing role of science during the same time period.
After World War I, extensive resources were put into teaching and researching the discipline, with the hopes that it would help the public better understand both Science and Technology as they came to play an exceedingly prominent role in the world.
In the decades since the end of World War II, history of science became an academic discipline, with graduate schools, research institutes, public and private patronage, peer-reviewed journals, and professional societies.
Formation of academic departments
In the United States, a more formal study of the history of science as an independent discipline was initiated by George Sarton's publications, Introduction to the History of Science (1927) and the journal Isis (founded in 1912). Sarton exemplified the early 20th-century view of the history of science as the history of great men and great ideas. He shared with many of his contemporaries a Whiggish belief in history as a record of the advances and delays in the march of progress.
The study of the history of science continued to be a small effort until the rise of Big Science after World War II. With the work of I. Bernard Cohen at Harvard University, the history of science began to become an established subdiscipline of history in the United States.
In the United States, the influential bureaucrat Vannevar Bush, and the president of Harvard, James Conant, both encouraged the study of the history of science as a way of improving general knowledge about how science worked, and why it was essential to maintain a large scientific workforce.
Universities with history of science and technology programs
Argentina
Buenos Aires Institute of Technology, Argentina, has been offering courses on the History of Technology and Science.
National Technological University, Argentina, has a comprehensive history program across its degree courses.
Australia
The University of Sydney offers both undergraduate and postgraduate programmes in the History and Philosophy of Science, run by the Unit for the History and Philosophy of Science, within the Science Faculty. Undergraduate coursework can be completed as part of either a Bachelor of Science or a Bachelor of Arts Degree. Undergraduate study can be furthered by completing an additional Honours year. For postgraduate study, the Unit offers both coursework and research-based degrees. The two course-work based postgraduate degrees are the Graduate Certificate in Science (HPS) and the Graduate Diploma in Science (HPS). The two research based postgraduate degrees are a Master of Science (MSc) and Doctor of Philosophy (PhD).
Belgium
University of Liège, has a Department called Centre d'histoire des Sciences et Techniques.
Canada
Carleton University Ottawa offers courses in Ancient Science and Technology in its Technology, Society and Environment program.
University of Toronto has a program in History and Philosophy of Science and Technology.
Huron University College offers a course in the History of Science which follows the development and philosophy of science from 10,000 BCE to the modern day.
University of King's College in Halifax, Nova Scotia has a History of Science and Technology Program.
France
Nantes University has a dedicated Department called Centre François Viète.
Paris Diderot University (Paris 7) has a Department of History and Philosophy of Science.
A CNRS research center in History and Philosophy of Science SPHERE, affiliated with Paris Diderot University, has a dedicated history of technology section.
Pantheon-Sorbonne University (Paris 1) has a dedicated Institute of History and Philosophy of Science and Technics.
The École Normale Supérieure de Paris has a history of science department.
Germany
Technische Universität Berlin, has a program in the History of Science and Technology.
The Deutsches Museum of Masterpieces of Science and Technology in Munich is one of the largest science and technology museums in the world in terms of exhibition space, with about 28,000 exhibited objects from 50 fields of science and technology.
Greece
The University of Athens has a Department of Philosophy and History of Science
India
History of science and technology is a well-developed field in India. At least three generations of scholars can be identified.
The first generation includes D.D.Kosambi, Dharmpal, Debiprasad Chattopadhyay and Rahman. The second generation mainly consists of Ashis Nandy, Deepak Kumar, Dhruv Raina, S. Irfan Habib, Shiv Visvanathan, Gyan Prakash, Stan Lourdswamy, V.V. Krishna, Itty Abraham, Richard Grove, Kavita Philip, Mira Nanda and Rob Anderson. There is an emergent third generation that includes scholars like Abha Sur and Jahnavi Phalkey.
Departments and Programmes
The National Institute of Science, Technology and Development Studies had a research group active in the 1990s which consolidated social history of science as a field of research in India.
Currently there are several institutes and university departments offering HST programmes.
Jawaharlal Nehru University has an MPhil-PhD program that offers specialisation in the Social History of Science, located in the History of Science and Education group of the Zakir Husain Centre for Educational Studies (ZHCES) in the School of Social Sciences. Renowned Indian science historians Deepak Kumar and Dhruv Raina teach here. In addition, the Centre for Studies in Science Policy has an MPhil-PhD program that offers specialization in Science, Technology, and Society along with various allied subdisciplines.
Central University of Gujarat has an MPhil-PhD programme in Studies in Science, Technology & Innovation Policy at the Centre for Studies in Science, Technology & Innovation Policy (CSSTIP), where Social History of Science and Technology in India is a major emphasis for research and teaching.
Banaras Hindu University has programs: one in History of Science and Technology at the Faculty of Science and one in Historical and Comparative Studies of the Sciences and the Humanities at the Faculty of Humanities.
Andhra University has now made History of Science and Technology a compulsory subject for all first-year B.Tech students.
Israel
Tel Aviv University. The Cohn Institute for the History and Philosophy of Science and Ideas is a research and graduate teaching institute within the framework of the School of History of Tel Aviv University.
Bar-Ilan University has a graduate program in Science, Technology, and Society.
Japan
Kyoto University has a program in the Philosophy and History of Science.
Tokyo Institute of Technology has a program in the History, Philosophy, and Social Studies of Science and Technology.
The University of Tokyo has a program in the History and Philosophy of Science.
Netherlands
Utrecht University, has two co-operating programs: one in History and Philosophy of Science at the Faculty of Natural Sciences and one in Historical and Comparative Studies of the Sciences and the Humanities at the Faculty of Humanities.
Poland
Institute for the History of Science of the Polish Academy of Sciences offers PhD programmes and habilitation degrees in the fields of History of Science, Technology and Ideas.
Russia
Spain
University of the Basque Country, offers a master's degree and PhD programme in History and Philosophy of Science and has run THEORIA. International Journal for Theory, History and Foundations of Science since 1952. The university also sponsors the Basque Museum of the History of Medicine and Science, the only open museum of the history of science in Spain, which in the past also offered PhD courses.
Universitat Autònoma de Barcelona, offers a master's degree and PhD programme in HST together with the Universitat de Barcelona.
Universitat de València, offers a master's degree and PhD programme in HST together with the Consejo Superior de Investigaciones Científicas.
Sweden
Linköpings universitet, has a Science, Technology, and Society program which includes HST.
Switzerland
University of Bern, has an undergraduate and a graduate program in the History and Philosophy of Science.
Ukraine
State University of Infrastructure and Technologies, has a Department of Philosophy and History of Science and Technology.
United Kingdom
University of Bristol has a masters and PhD program in the Philosophy and History of Science.
University of Cambridge has an undergraduate course and a large masters and PhD program in the History and Philosophy of Science (including the History of Medicine).
University of Durham has several undergraduate History of Science modules in the Philosophy department, as well as Masters and PhD programs in the discipline.
University of Kent has a Centre for the History of the Sciences, which offers Masters programmes and undergraduate modules.
University College London's Department of Science and Technology Studies offers undergraduate programme in History and Philosophy of Science, including two BSc single honour degrees (UCAS V550 and UCAS L391), plus both major and minor streams in history, philosophy and social studies of science in UCL's Natural Sciences programme. The department also offers MSc degrees in History and Philosophy of Science and in the study of contemporary Science, Technology, and Society. An MPhil/PhD research degree is offered, too. UCL also contains a Centre for the History of Medicine. This operates a small teaching programme in History of Medicine.
University of Leeds has both undergraduate and graduate programmes in History and Philosophy of Science in the Department of Philosophy.
University of Manchester offers undergraduate modules and postgraduate study in History of Science, Technology and Medicine and is sponsored by the Wellcome Trust.
University of Oxford has a one-year graduate course in 'History of Science: Instruments, Museums, Science, Technology' associated with the Museum of the History of Science.
The London Centre for the History of Science, Medicine, and Technology – this Centre closed in 2013. It was formed in 1987 and ran a taught MSc programme, jointly taught by University College London's Department of Science and Technology Studies and Imperial College London. The Masters programme transferred to UCL.
United States
Academic study of the history of science as an independent discipline was launched by George Sarton at Harvard with his book Introduction to the History of Science (1927) and the Isis journal (founded in 1912). Sarton exemplified the early 20th century view of the history of science as the history of great men and great ideas. He shared with many of his contemporaries a Whiggish belief in history as a record of the advances and delays in the march of progress. The History of Science was not a recognized subfield of American history in this period, and most of the work was carried out by interested Scientists and Physicians rather than professional Historians. With the work of I. Bernard Cohen at Harvard, the history of Science became an established subdiscipline of history after 1945.
Arizona State University's Center for Biology and Society offers several paths for MS or PhD students who are interested in issues surrounding the history and philosophy of the science.
California Institute of Technology offers courses in the History and Philosophy of Science to fulfill its core humanities requirements.
Case Western Reserve University has an undergraduate interdisciplinary program in the History and Philosophy of Science and a graduate program in the History of Science, Technology, Environment, and Medicine (STEM).
Cornell University offers a variety of courses within the Science and Technology course.
Georgia Institute of Technology has an undergraduate and graduate program in the History of Technology and Society.
Harvard University has an undergraduate and graduate program in History of Science
Indiana University offers undergraduate courses and a masters and PhD program in the History and Philosophy of Science.
Johns Hopkins University has an undergraduate and graduate program in the History of Science, Medicine, and Technology.
Lehigh University offers an undergraduate level STS concentration (founded in 1972) and a graduate program with emphasis on the History of Industrial America.
Massachusetts Institute of Technology has a Science, Technology, and Society program which includes HST.
Michigan State University offers an undergraduate major and minor in History, Philosophy, and Sociology of Science through its Lyman Briggs College.
New Jersey Institute of Technology has a Science, Technology, and Society program which includes the History of Science and Technology
Oregon State University offers a Masters and Ph.D. in History of Science through its Department of History.
Princeton University has a program in the History of Science.
Rensselaer Polytechnic Institute has a Science and Technology Studies department
Rutgers has a graduate Program in History of Science, Technology, Environment, and Health.
Stanford has a History and Philosophy of Science and Technology program.
Stevens Institute of Technology has an undergraduate and graduate program in the History of Science.
University of California, Berkeley offers a graduate degree in HST through its History program, and maintains a separate sub-department for the field.
University of California, Los Angeles has a relatively large group History of Science and Medicine faculty and graduate students within its History department, and also offers an undergraduate minor in the History of Science.
University of California, Santa Barbara has an interdisciplinary graduate program emphasis in Technology & Society through the Center for Information Technology & Society.
University of Chicago offers a B.A. program in the History, Philosophy, and Social Studies of Science and Medicine as well as M.A. and Ph.D. degrees through its Committee on the Conceptual and Historical Studies of Science.
University of Florida has a Graduate Program in 'History of Science, Technology, and Medicine' that provides undergraduate and graduate degrees.
University of Minnesota has a Ph.D. program in History of Science, Technology, and Medicine as well as undergraduate courses in these fields.
University of Oklahoma has an undergraduate minor and a graduate degree program in History of Science.
University of Pennsylvania has a program in History and Sociology of Science.
University of Pittsburgh's Department of History and Philosophy of Science offers graduate and undergraduate courses.
University of Puget Sound has a Science, Technology, and Society program, which includes the history of Science and Technology.
University of Wisconsin–Madison has a program in History of Science, Medicine and Technology. It offers M.A. and Ph.D. degrees as well as an undergraduate major.
Wesleyan University has a Science in Society program.
Yale University has a program in the History of Science and Medicine.
Prominent historians of the field
Wiebe Bijker
Peter J. Bowler
Janet Browne
Stephen G. Brush
James Burke
Edwin Arthur Burtt (1892–1989)
Johann Beckmann (1739–1811)
Jim Bennett
Herbert Butterfield (1900–1979)
Martin Campbell-Kelly
Georges Canguilhem (1904–1995)
Allan Chapman
I. Bernard Cohen (1914–2003)
A. C. Crombie (1915–1996)
E. J. Dijksterhuis (1892–1965)
A. G. Drachmann (1891–1980)
Pierre Duhem (1861–1916)
A. Hunter Dupree (1921–2019)
George Dyson
Jacques Ellul (1912–1994)
Eugene S. Ferguson (1916–2004)
Peter Galison
Sigfried Giedion
Charles Coulston Gillispie
Robert Gunther (1869–1940)
Paul Forman (historian)
Donna Haraway
Peter Harrison
Ahmad Y Hassan
John L. Heilbron
Boris Hessen
Reijer Hooykaas
David A. Hounshell
Thomas P. Hughes
Evelyn Fox Keller
Daniel Kevles
Alexandre Koyré (1892–1964)
Melvin Kranzberg
Thomas Kuhn
Deepak Kumar
Gilbert LaFreniere
Bruno Latour
David C. Lindberg
G. E. R. Lloyd
Jane Maienschein
Anneliese Maier
Leo Marx
Lewis Mumford (1895–1990)
John E. Murdoch (1927–2010)
Otto Neugebauer (1899–1990)
William R. Newman
David Noble
Ronald Numbers
David E. Nye
Abraham Pais (1918–2000)
Trevor Pinch
Theodore Porter
Lawrence M. Principe
Raúl Rojas
Michael Ruse
A. I. Sabra
Jan Sapp
George Sarton (1884–1956)
Simon Schaffer
Howard Segal (1948–2020)
Steven Shapin
Wolfgang Schivelbusch
Charles Singer (1876–1960)
Merritt Roe Smith
Stephen Snobelen
M. Norton Wise
Frances A. Yates (1899–1981)
Journals and periodicals
Annals of Science
The British Journal for the History of Science
Centaurus
Dynamis
History and Technology (magazine)
History of Science and Technology (journal)
History of Technology (book series)
Historical Studies in the Physical and Biological Sciences (HSPS)
Historical Studies in the Natural Sciences (HSNS)
HoST - Journal of History of Science and Technology
ICON
IEEE Annals of the History of Computing
Isis
Journal of the History of Biology
Journal of the History of Medicine and Allied Sciences
Notes and Records of the Royal Society
Osiris
Science & Technology Studies
Science in Context
Science, Technology, & Human Values
Social History of Medicine
Social Studies of Science
Technology and Culture
Transactions of the Newcomen Society
Historia Mathematica
Bulletin of the Scientific Instrument Society
See also
History of science
History of technology
Ancient Egyptian technology
History of science and technology in China
History of science and technology in Japan
History of science and technology in France
History of science and technology in the Indian subcontinent
Mesopotamian science
Productivity improving technologies (historical)
Science and technology in Argentina
Science and technology in Canada
Science and technology in Iran
Science and technology in the United States
Science in the medieval Islamic world
Science tourism
Technological and industrial history of the United States
Timeline of science and engineering in the Islamic world
Professional societies
The British Society for the History of Science (BSHS)
History of Science Society (HSS)
Newcomen Society
Society for the History of Technology (SHOT)
Society for the Social Studies of Science (4S)
Scientific Instrument Society
References
Bibliography
Historiography of science
H. Floris Cohen, The Scientific Revolution: A Historiographical Inquiry, University of Chicago Press 1994 – Discussion on the origins of modern science has been going on for more than two hundred years. Cohen provides an excellent overview.
Ernst Mayr, The Growth of Biological Thought, Belknap Press 1985
Michel Serres,(ed.), A History of Scientific Thought, Blackwell Publishers 1995
Companion to Science in the Twentieth Century, John Krige (Editor), Dominique Pestre (Editor), Taylor & Francis 2003, 941pp
The Cambridge History of Science, Cambridge University Press
Volume 4, Eighteenth-Century Science, 2003
Volume 5, The Modern Physical and Mathematical Sciences, 2002
History of science as a discipline
J. A. Bennett, 'Museums and the Establishment of the History of Science at Oxford and Cambridge', British Journal for the History of Science 30, 1997, 29–46
Dietrich von Engelhardt, Historisches Bewußtsein in der Naturwissenschaft : von der Aufklärung bis zum Positivismus, Freiburg [u.a.] : Alber, 1979
A.-K. Mayer, 'Setting up a Discipline: Conflicting Agendas of the Cambridge History of Science Committee, 1936–1950.' Studies in History and Philosophy of Science, 31, 2000
Science and technology
Technological change
Technology
Technology systems
History by topic | History of science and technology | [
"Technology",
"Engineering"
] | 4,029 | [
"Systems engineering",
"Technology systems",
"History of science",
"Science and technology studies",
"nan",
"History of technology",
"History of science and technology"
] |
14,286 | https://en.wikipedia.org/wiki/Holographic%20principle | The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region – such as a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string theoretic interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. Susskind said, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." As pointed out by Raphael Bousso, Thorn observed in 1978, that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence.
The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics, which conjectures that the maximum entropy in any region scales with the radius squared, rather than cubed as might be expected. In the case of a black hole, the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law (radius squared), hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.
High-level summary
The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler, which suggests scientists may "regard the physical world as made of information, with energy and matter as incidentals". Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand', or is that idea no more than 'poetic license'?", referring to the holographic principle.
Unexpected connection
Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on Shannon entropy.
In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. In 1877, Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in, while still "looking" like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving.
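Written as a formula (in its standard form), Boltzmann's definition is:

```latex
S = k_{\mathrm{B}} \ln W
```

where W is the number of microscopic arrangements (microstates) compatible with the macroscopic state and k_B is Boltzmann's constant.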
Energy, matter, and information equivalence
Shannon's efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann's. In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement" of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information.
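In symbols, the quantity Shannon introduced is the entropy of a probability distribution, and the conversion factor between one "bit" and the thermodynamic units of the Boltzmann formula given above is k_B ln 2 (standard forms, stated here for reference):

```latex
H = -\sum_i p_i \log_2 p_i \;\;[\text{bits}]
\qquad\qquad
1\ \text{bit} \;\leftrightarrow\; k_{\mathrm{B}} \ln 2 \approx 9.57 \times 10^{-24}\ \mathrm{J/K}
```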
The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary.
The AdS/CFT correspondence
The anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality (after Juan Maldacena) or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS), which are used in theories of quantum gravity formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT), which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles.
The duality represents a major advance in understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle.
It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory.
The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics.
Black hole entropy
An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy.
But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas.
Given a fixed volume, a black hole whose event horizon encompasses that volume should be the object with the highest amount of entropy. Otherwise, imagine something with a larger entropy, then by throwing more mass into that something, we obtain a black hole with less entropy, violating the second law.
In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational; when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. Gravitational time dilation causes time, from the perspective of a remote observer, to stop at the event horizon. Due to the natural limit on maximum speed of motion, this prevents falling objects from crossing the event horizon no matter how close they get to it. Since any change in quantum state requires time to flow, all objects and their quantum information state stay imprinted on the event horizon. Bekenstein concluded that from the perspective of any remote observer, the black hole entropy is directly proportional to the area of the event horizon.
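Bekenstein's bound is usually quoted in the following form for a system of total energy E that fits inside a sphere of radius R; combining it with the requirement that the energy not exceed that of a black hole of the same radius gives, heuristically, the area scaling (this is a sketch of the standard argument, not a derivation taken from this article):

```latex
S \le \frac{2\pi k_{\mathrm{B}} R E}{\hbar c},
\qquad
E \le \frac{R c^{4}}{2G}
\;\;\Rightarrow\;\;
S \le \frac{\pi k_{\mathrm{B}} c^{3} R^{2}}{\hbar G} = \frac{k_{\mathrm{B}} A}{4\,\ell_{\mathrm{P}}^{2}}
```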
Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase.
At first, Hawking did not take the analogy too seriously. He argued that the black hole must have zero temperature, since black holes do not radiate and therefore cannot be in thermal equilibrium with any black body of positive temperature. Then he discovered that black holes do radiate. When heat is added to a thermal system, the change in entropy is the increase in mass–energy divided by temperature:
dS = δM c² / T

(Here the term δM c² is substituted for the thermal energy added to the system, generally by non-integrable random processes, in contrast to dS, which is a function of a few "state variables" only, i.e. in conventional thermodynamics only of the Kelvin temperature T and a few additional state variables, such as the pressure.)
If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance.
Time-independent solutions to field equations do not emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units.
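In the usual conventions, the Bekenstein–Hawking entropy and the associated Hawking temperature of a Schwarzschild black hole of mass M are written:

```latex
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} A}{4\,\ell_{\mathrm{P}}^{2}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar},
\qquad
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}
```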
The entropy is proportional to the logarithm of the number of microstates, the enumerated ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling – it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior.
Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets.
Black hole information paradox
Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy interact only when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering.
Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified because, in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities.
Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternative description of the particle's location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory.
This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes.
This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way assuming the string-theoretical description is complete, unambiguous and non-redundant. The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, and suggest that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory.
In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The matrix theory they proposed was first suggested as a description of 2-branes in eleven-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The later authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories.
Limit on information density
Information content is defined as the logarithm of the reciprocal of the probability that a system is in a specific microstate, and the information entropy of a system is the expected value of the system's information content. This definition of entropy is equivalent to the standard Gibbs entropy used in classical physics. Applying this definition to a physical system leads to the conclusion that, for a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume. In particular, a given volume has an upper limit on the information it can contain; when that limit is reached, the matter in the volume collapses into a black hole.
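As a minimal numerical illustration of these definitions (the four-state probability distribution below is an arbitrary assumption, not tied to any physical system), the information content of each microstate and the entropy as its expected value can be computed directly:

```python
import math

# Arbitrary example distribution over four microstates.
p = [0.5, 0.25, 0.125, 0.125]

# Information content of each microstate: log of the reciprocal of its probability.
info = [math.log2(1.0 / pi) for pi in p]           # in bits

# Information entropy: expected value of the information content.
entropy = sum(pi * ii for pi, ii in zip(p, info))  # equals -sum(p * log2 p)

print(info)      # [1.0, 2.0, 3.0, 3.0]
print(entropy)   # 1.75
```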
This suggests that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level.
The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. David Brown and Marc Henneaux had rigorously proved in 1986 that the asymptotic symmetry of 2+1 dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2-dimensional conformal field theory.
Experimental tests
The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600. However, these claims have not been widely accepted, or cited, among quantum gravity researchers and appear to be in direct conflict with string theory calculations.
Analyses in 2011 of measurements of the gamma-ray burst GRB 041219A, recorded in 2004 by the INTEGRAL space observatory (launched in 2002 by the European Space Agency), show that Craig Hogan's noise is absent down to a scale of 10⁻⁴⁸ meters, as opposed to the scale of 10⁻³⁵ meters predicted by Hogan and the scale of 10⁻¹⁶ meters found in measurements of the GEO 600 instrument. Research continued at Fermilab under Hogan as of 2013.
Jacob Bekenstein claimed to have found a way to test the holographic principle with a tabletop photon experiment.
See also
Bekenstein bound
Beyond black holes
Bousso's holographic bound
Brane cosmology
Digital physics
Entropic gravity
Implicate and explicate order
Quantum speed limit theorems
Physical cosmology
Quantum foam
Notes
References
Citations
Sources
't Hooft's original paper.
External links
Alfonso V. Ramallo: Introduction to the AdS/CFT correspondence, pedagogical lecture. For the holographic principle: see especially Fig. 1.
UC Berkeley's Raphael Bousso gives an introductory lecture on the holographic principle – Video.
Scientific American article on holographic principle by Jacob Bekenstein
Theoretical physics
Black holes
Quantum information science
Holography | Holographic principle | [
"Physics",
"Astronomy"
] | 3,876 | [
"Physical phenomena",
"Black holes",
"Physical quantities",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
14,294 | https://en.wikipedia.org/wiki/Hausdorff%20dimension | In mathematics, Hausdorff dimension is a measure of roughness, or more specifically, fractal dimension, that was introduced in 1918 by mathematician Felix Hausdorff. For instance, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas have also been developed that allow calculation of the dimension of other less simple objects, where, solely on the basis of their properties of scaling and self-similarity, one is led to the conclusion that particular objects—including fractals—have non-integer Hausdorff dimensions. Because of the significant technical advances made by Abram Samoilovitch Besicovitch allowing computation of dimensions for highly irregular or "rough" sets, this dimension is also commonly referred to as the Hausdorff–Besicovitch dimension.
More specifically, the Hausdorff dimension is a dimensional number associated with a metric space, i.e. a set where the distances between all members are defined. The dimension is drawn from the extended real numbers, as opposed to the more intuitive notion of dimension, which is not associated to general metric spaces, and only takes values in the non-negative integers.
In mathematical terms, the Hausdorff dimension generalizes the notion of the dimension of a real vector space. That is, the Hausdorff dimension of an n-dimensional inner product space equals n. This underlies the earlier statement that the Hausdorff dimension of a point is zero, of a line is one, etc., and that irregular sets can have noninteger Hausdorff dimensions. For instance, the Koch snowflake shown at right is constructed from an equilateral triangle; in each iteration, its component line segments are divided into 3 segments of unit length, the newly created middle segment is used as the base of a new equilateral triangle that points outward, and this base segment is then deleted to leave a final object from the iteration of unit length of 4. That is, after the first iteration, each original line segment has been replaced with N = 4 segments, where each self-similar copy is 1/S = 1/3 as long as the original. Stated another way, we have taken an object with Euclidean dimension D and reduced its linear scale by 1/3 in each direction, so that its length increases to N = S^D. This equation is easily solved for D, yielding the ratio of logarithms (or natural logarithms) appearing in the figures, and giving, in the Koch and other fractal cases, non-integer dimensions for these objects.
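As a worked instance of that scaling relation for the Koch curve, with N = 4 self-similar copies at scale 1/S = 1/3, the dimension follows directly (shown here in LaTeX form):

\[ N = S^{D} \;\Longrightarrow\; D = \frac{\ln N}{\ln S} = \frac{\ln 4}{\ln 3} \approx 1.2619 . \]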
The Hausdorff dimension is a successor to the simpler, but usually equivalent, box-counting or Minkowski–Bouligand dimension.
Intuition
The intuitive concept of dimension of a geometric object X is the number of independent parameters one needs to pick out a unique point inside. However, any point specified by two parameters can be instead specified by one, because the cardinality of the real plane is equal to the cardinality of the real line (this can be seen by an argument involving interweaving the digits of two numbers to yield a single number encoding the same information). The example of a space-filling curve shows that one can even map the real line to the real plane surjectively (taking one real number into a pair of real numbers in a way so that all pairs of numbers are covered) and continuously, so that a one-dimensional object completely fills up a higher-dimensional object.
Every space-filling curve hits some points multiple times and does not have a continuous inverse. It is impossible to map two dimensions onto one in a way that is continuous and continuously invertible. The topological dimension, also called Lebesgue covering dimension, explains why. This dimension is the greatest integer n such that in every covering of X by small open balls there is at least one point where n + 1 balls overlap. For example, when one covers a line with short open intervals, some points must be covered twice, giving dimension n = 1.
But topological dimension is a very crude measure of the local size of a space (size near a point). A curve that is almost space-filling can still have topological dimension one, even if it fills up most of the area of a region. A fractal has an integer topological dimension, but in terms of the amount of space it takes up, it behaves like a higher-dimensional space.
The Hausdorff dimension measures the local size of a space taking into account the distance between points, the metric. Consider the number N(r) of balls of radius at most r required to cover X completely. When r is very small, N(r) grows polynomially with 1/r. For a sufficiently well-behaved X, the Hausdorff dimension is the unique number d such that N(r) grows as 1/r^d as r approaches zero. More precisely, this defines the box-counting dimension, which equals the Hausdorff dimension when the value d is a critical boundary between growth rates that are insufficient to cover the space, and growth rates that are overabundant.
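The following is a minimal box-counting sketch of this idea. It estimates the box-counting dimension (which here agrees with the Hausdorff dimension) of the middle-thirds Cantor set from random samples; the sampling depth, number of samples, and scales are arbitrary illustrative choices.

```python
import numpy as np

def cantor_sample(depth=12, n=20000):
    # Random base-3 expansions using only the digits 0 and 2 lie in the Cantor set.
    digits = 2 * np.random.randint(0, 2, size=(n, depth))
    scales = 3.0 ** -np.arange(1, depth + 1)
    return digits @ scales

def box_count(points, r):
    # Number of grid boxes of size r that contain at least one sample point.
    return np.unique(np.floor(points / r)).size

points = cantor_sample()
rs = 3.0 ** -np.arange(2, 9)
counts = [box_count(points, r) for r in rs]
slope, _ = np.polyfit(np.log(1 / rs), np.log(counts), 1)
print(f"estimated dimension ≈ {slope:.3f} (exact: ln 2 / ln 3 ≈ {np.log(2) / np.log(3):.3f})")
```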
For shapes that are smooth, or shapes with a small number of corners, the shapes of traditional geometry and science, the Hausdorff dimension is an integer agreeing with the topological dimension. But Benoit Mandelbrot observed that fractals, sets with noninteger Hausdorff dimensions, are found everywhere in nature. He observed that the proper idealization of most rough shapes one sees is not in terms of smooth idealized shapes, but in terms of fractal idealized shapes:
Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.
For fractals that occur in nature, the Hausdorff and box-counting dimension coincide. The packing dimension is yet another similar notion which gives the same value for many shapes, but there are well-documented exceptions where all these dimensions differ.
Formal definition
The formal definition of the Hausdorff dimension is arrived at by defining first the d-dimensional Hausdorff measure, a fractional-dimension analogue of the Lebesgue measure. First, an outer measure is constructed:
Let (X, ρ) be a metric space. If S ⊆ X and d ∈ [0, ∞), then

H^d_δ(S) = inf { Σ_i (diam U_i)^d : S ⊆ ⋃_i U_i, diam U_i < δ },

where the infimum is taken over all countable covers {U_i} of S. The Hausdorff d-dimensional outer measure is then defined as H^d(S) = lim_{δ→0} H^d_δ(S), and the restriction of the mapping to measurable sets justifies it as a measure, called the d-dimensional Hausdorff measure.
Hausdorff dimension
The Hausdorff dimension of X is defined by

dim_H(X) := inf { d ≥ 0 : H^d(X) = 0 }.

This is the same as the supremum of the set of d ∈ [0, ∞) such that the d-dimensional Hausdorff measure of X is infinite (except that when this latter set of numbers is empty the Hausdorff dimension is zero).
Hausdorff content
The d-dimensional unlimited Hausdorff content of S is defined by

C_H^d(S) := inf { Σ_i r_i^d : there is a cover of S by balls with radii r_i > 0 }.

In other words, C_H^d(S) has the construction of the Hausdorff measure where the covering sets are allowed to have arbitrarily large sizes (here, we use the standard convention that inf ∅ = ∞). The Hausdorff measure and the Hausdorff content can both be used to determine the dimension of a set, but if the measure of the set is non-zero, their actual values may disagree.
Examples
Countable sets have Hausdorff dimension 0.
The Euclidean space R^n has Hausdorff dimension n, and the circle S^1 has Hausdorff dimension 1.
Fractals often are spaces whose Hausdorff dimension strictly exceeds the topological dimension. For example, the Cantor set, a zero-dimensional topological space, is a union of two copies of itself, each copy shrunk by a factor 1/3; hence, it can be shown that its Hausdorff dimension is ln(2)/ln(3) ≈ 0.63. The Sierpinski triangle is a union of three copies of itself, each copy shrunk by a factor of 1/2; this yields a Hausdorff dimension of ln(3)/ln(2) ≈ 1.58. These Hausdorff dimensions are related to the "critical exponent" of the Master theorem for solving recurrence relations in the analysis of algorithms.
Space-filling curves like the Peano curve have the same Hausdorff dimension as the space they fill.
The trajectory of Brownian motion in dimension 2 and above is conjectured to be Hausdorff dimension 2.
Lewis Fry Richardson performed detailed experiments to measure the approximate Hausdorff dimension for various coastlines. His results varied from 1.02 for the coastline of South Africa to 1.25 for the west coast of Great Britain.
Properties of Hausdorff dimension
Hausdorff dimension and inductive dimension
Let X be an arbitrary separable metric space. There is a topological notion of inductive dimension for X which is defined recursively. It is always an integer (or +∞) and is denoted dim_ind(X).
Theorem. Suppose X is non-empty. Then

dim_ind(X) ≤ dim_Haus(X).

Moreover,

inf_Y dim_Haus(Y) = dim_ind(X),
where Y ranges over metric spaces homeomorphic to X. In other words, X and Y have the same underlying set of points and the metric dY of Y is topologically equivalent to dX.
These results were originally established by Edward Szpilrajn (1907–1976), e.g., see Hurewicz and Wallman, Chapter VII.
Hausdorff dimension and Minkowski dimension
The Minkowski dimension is similar to, and at least as large as, the Hausdorff dimension, and they are equal in many situations. However, the set of rational points in [0, 1] has Hausdorff dimension zero and Minkowski dimension one. There are also compact sets for which the Minkowski dimension is strictly larger than the Hausdorff dimension.
Hausdorff dimensions and Frostman measures
If there is a measure μ defined on Borel subsets of a metric space X such that μ(X) > 0 and μ(B(x, r)) ≤ r^s holds for some constant s > 0 and for every ball B(x, r) in X, then dim_Haus(X) ≥ s. A partial converse is provided by Frostman's lemma.
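As a simple illustration of this mass distribution principle (stated here with the multiplicative constant that the usual formulation allows), take μ to be Lebesgue measure on X = [0, 1]; then

\[ \mu(B(x,r)) \le 2r = C\, r^{1} \quad\Longrightarrow\quad \dim_{\mathrm{Haus}}([0,1]) \ge 1 . \]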
Behaviour under unions and products
If X = X_1 ∪ X_2 ∪ ⋯ is a finite or countable union, then

dim_Haus(X) = sup_i dim_Haus(X_i).
This can be verified directly from the definition.
If X and Y are non-empty metric spaces, then the Hausdorff dimension of their product satisfies

dim_Haus(X × Y) ≥ dim_Haus(X) + dim_Haus(Y).
This inequality can be strict. It is possible to find two sets of dimension 0 whose product has dimension 1. In the opposite direction, it is known that when X and Y are Borel subsets of Rn, the Hausdorff dimension of X × Y is bounded from above by the Hausdorff dimension of X plus the upper packing dimension of Y. These facts are discussed in Mattila (1995).
Self-similar sets
Many sets defined by a self-similarity condition have dimensions which can be determined explicitly. Roughly, a set E is self-similar if it is the fixed point of a set-valued transformation ψ, that is ψ(E) = E, although the exact definition is given below.
Theorem. Suppose

ψ_i : R^n → R^n,  i = 1, …, m  (with m ≥ 2),

are each a contraction mapping on R^n with contraction constant r_i < 1. Then there is a unique non-empty compact set A such that

A = ψ_1(A) ∪ ψ_2(A) ∪ ⋯ ∪ ψ_m(A).
The theorem follows from Stefan Banach's contractive mapping fixed point theorem applied to the complete metric space of non-empty compact subsets of Rn with the Hausdorff distance.
The open set condition
To determine the dimension of the self-similar set A (in certain cases), we need a technical condition called the open set condition (OSC) on the sequence of contractions ψ_i.
There is an open set V with compact closure, such that

ψ_1(V) ∪ ψ_2(V) ∪ ⋯ ∪ ψ_m(V) ⊆ V,

where the sets in the union on the left are pairwise disjoint.
The open set condition is a separation condition that ensures the images ψ_i(V) do not overlap "too much".
Theorem. Suppose the open set condition holds and each ψ_i is a similitude, that is a composition of an isometry and a dilation around some point. Then the unique fixed point of ψ is a set whose Hausdorff dimension is s, where s is the unique solution of

r_1^s + r_2^s + ⋯ + r_m^s = 1.
The contraction coefficient of a similitude is the magnitude of the dilation.
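A small numerical sketch of this dimension formula follows: a bisection solver for the equation above, assuming all contraction ratios satisfy 0 < r_i < 1 and the open set condition holds. The example ratios are the standard ones for the Cantor set and the Sierpinski triangle.

```python
def similarity_dimension(ratios, tol=1e-12):
    # Solve r_1**s + ... + r_m**s = 1 by bisection; the left side is strictly
    # decreasing in s when every ratio lies in (0, 1).
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:              # grow the bracket until the root is enclosed
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(similarity_dimension([1/3, 1/3]))        # Cantor set: ln 2 / ln 3 ≈ 0.6309
print(similarity_dimension([1/2, 1/2, 1/2]))   # Sierpinski triangle: ln 3 / ln 2 ≈ 1.5850
```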
In general, a set E which is carried onto itself by a mapping

ψ(E) = ψ_1(E) ∪ ψ_2(E) ∪ ⋯ ∪ ψ_m(E)

is self-similar if and only if the intersections satisfy the following condition:

H^s(ψ_i(E) ∩ ψ_j(E)) = 0 for all i ≠ j,

where s is the Hausdorff dimension of E and H^s denotes s-dimensional Hausdorff measure. This is clear in the case of the Sierpinski gasket (the intersections are just points), but is also true more generally:
Theorem. Under the same conditions as the previous theorem, the unique fixed point of ψ is self-similar.
See also
List of fractals by Hausdorff dimension – examples of deterministic, random and natural fractals.
Assouad dimension, another variation of fractal dimension that, like Hausdorff dimension, is defined using coverings by balls
Intrinsic dimension
Packing dimension
Fractal dimension
References
Further reading
Several selections from this volume are reprinted; see chapters 9, 10, and 11.
External links
Hausdorff dimension at Encyclopedia of Mathematics
Hausdorff measure at Encyclopedia of Mathematics
Fractals
Metric geometry
Dimension theory | Hausdorff dimension | [
"Mathematics"
] | 2,801 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
14,300 | https://en.wikipedia.org/wiki/Henry%20Laurens | Henry Laurens (March 6, 1724 – December 8, 1792) was an American Founding Father, merchant, slave trader, and rice planter from South Carolina who became a political leader during the Revolutionary War. A delegate to the Second Continental Congress, Laurens succeeded John Hancock as its president. He was a signatory to the Articles of Confederation and, as president, presided over its passage.
Laurens had earned great wealth as a partner in the largest slave-trading house in North America, Austin and Laurens. In the 1750s alone, this Charleston firm oversaw the sale of more than 8,000 enslaved Africans. Laurens served for a time as vice president of South Carolina and as the United States minister to the Netherlands during the Revolutionary War. He was captured at sea by the British and imprisoned for a little more than a year in the Tower of London. His oldest son, John Laurens, was an aide-de-camp to George Washington and a colonel in the Continental Army.
Early life and education
Laurens's forebears were Huguenots who fled France after the Edict of Nantes was revoked in 1685. His grandfather Andre Laurens left earlier, in 1682, and eventually made his way to America, settling first in New York City and then Charleston, South Carolina. Andre's son John married Hester (or Esther) Grasset, also a Huguenot refugee. Henry was their third child and eldest son. John Laurens became a saddler, and his business eventually grew to be the largest of its kind in the colonies.
In 1744, Laurens was sent to London to augment his business training. This took place in the company of Richard Oswald. His father died in 1747, bequeathing a considerable estate to 23-year-old Henry.
Marriage and family
Laurens married Eleanor Ball, also of a South Carolina rice planter family, on June 25, 1750. They had thirteen children, many of whom died in infancy or childhood. Eleanor died in 1770, one month after giving birth to their last child. Laurens took their three sons to England for their education, encouraging their oldest, John Laurens, to study law. Instead of completing his studies, John Laurens returned to the United States in 1776 to serve in the American Revolutionary War.
Political career
Laurens served in the militia, as did most able-bodied men in his time. He rose to the rank of lieutenant colonel in the campaigns against the Cherokee Indians in 1757–1761, during the French and Indian War (also known as the Seven Years' War).
In 1757, he was elected to South Carolina's colonial assembly. Laurens was elected again every year but one until the Revolution replaced the assembly with a state convention as an interim government. The year he missed was 1773, when he visited England to arrange for his sons' educations. He was named to the colony's council in 1764 and 1768 but declined both times. In 1772, he joined the American Philosophical Society of Philadelphia and carried on extensive correspondence with other members.
As the American Revolution neared, Laurens was at first inclined to support reconciliation with the British Crown. But as conditions deteriorated, he came to fully support the American position. When South Carolina began to create a revolutionary government, Laurens was elected to the Provincial Congress, which first met on January 9, 1775. He was president of the Committee of Safety and presiding officer of that congress from June until March 1776. When South Carolina installed a fully independent government, he served as the vice president of South Carolina from March 1776 to June 27, 1777. Laurens was first named a delegate to the Continental Congress on January 10, 1777. He served in the Congress until 1780. He was the president of the Continental Congress from November 1, 1777, to December 9, 1778.
In the fall of 1779, the Congress named Laurens their minister to the Netherlands. In early 1780, he took up that post and successfully negotiated Dutch support for the war. But on his return voyage to Amsterdam that fall, a British frigate intercepted his ship, the continental packet Mercury, off the banks of Newfoundland. Although his dispatches were tossed in the water, they were retrieved by the British, who discovered the draft of a possible U.S.-Dutch treaty prepared in Aix-la-Chapelle in 1778 by William Lee and the Amsterdam banker Jean de Neufville. This prompted Britain to declare war on the Dutch Republic, in what became known as the Fourth Anglo-Dutch War.
The British charged Laurens with treason, transported him to England, and imprisoned him in the Tower of London (he is the only American to have been held prisoner in the tower). His imprisonment was protested by the Americans. In the field, most captives were regarded as prisoners of war, and while conditions were frequently appalling, prisoner exchanges and mail privileges were accepted practice. During his imprisonment, Laurens was assisted by Richard Oswald, his former business partner and the principal owner of Bunce Island, a slave-trading island base in the Sierra Leone River. Oswald argued on Laurens's behalf to the British government. Finally, on December 31, 1781, he was released in exchange for General Lord Cornwallis and completed his voyage to Amsterdam. He helped raise funds for the American effort.
Laurens's oldest son, Colonel John Laurens, was killed in 1782 in the Battle of the Combahee River, as one of the last casualties of the Revolutionary War. He had supported enlisting and freeing slaves for the war effort and suggested to his father that he begin with the 40 he stood to inherit. He had urged his father to free the family's slaves, but although conflicted, Henry Laurens never manumitted his 260 slaves.
In 1783, Laurens was sent to Paris as one of the peace commissioners for the negotiations leading to the Treaty of Paris. While he was not a signatory of the primary treaty, he was instrumental in reaching the secondary accords that resolved issues related to the Netherlands and Spain. Richard Oswald, a former partner of Laurens in the slave trade, was the principal negotiator for the British during the Paris peace talks.
Laurens generally retired from public life in 1784. He was sought for a return to the Continental Congress, the Constitutional Convention in 1787 and the state assembly, but he declined all of these positions. He did serve in the state convention of 1788, where he voted to ratify the United States Constitution.
British forces, during their occupation of Charleston, had burned the Laurens home at Mepkin during the war. When Laurens and his family returned in 1784, they lived in an outbuilding while the great house was rebuilt. He lived on the estate the rest of his life, working to recover the estimated £40,000 that the revolution had cost him.
Death and legacy
Laurens suffered from gout starting in his 40s and the affliction plagued him throughout the rest of his life. Laurens died on December 8, 1792, at his estate, Mepkin, in South Carolina. In his will he stated that he wished to be cremated and his ashes interred at his estate. He is reported to have been the first white person formally cremated in the United States, a choice he made out of fear of being buried alive. Afterward, the estate passed through several hands. Large portions of the estate still exist. Part of the original estate was donated to the Roman Catholic Church in 1949 and is now the location of Mepkin Abbey, a monastery of the Order of Cistercians of the Strict Observance (Trappist monks).
The city of Laurens, South Carolina, and its county are named for him. The town and the village of Laurens, New York, are named for him. Laurens County, Georgia, is named for his son John. General Lachlan McIntosh, who worked for Laurens as a clerk and became close friends with him, named Fort Laurens in Ohio after him.
Notes
References
Sources
Kirschke, James J., and Victor J. Sensenig. "Steps toward nationhood: Henry Laurens (1724–92) and the American Revolution in the South" Historical Research 78.200 (2005): 180–192
16 vols.; Collection Inventory available at the Historical Society of Pennsylvania.
McDonough, Daniel J. Christopher Gadsden and Henry Laurens: The Parallel Lives of Two American Patriots (Susquehanna University Press, 2001)
Neville, Gabriel. "The Tragedy of Henry Laurens." Journal of the American Revolution, Aug. 1, 2019
External links
National Park Service: Henry Laurens Biography
Forgotten Founders Biography site
Henry Laurens, South Carolina Hall of Fame, South Carolina Educational Television
Henry Laurens Account Book, 1766-1773, Lowcountry Digital Library
Friends of Fort Laurens
1724 births
1792 deaths
Ambassadors of the United States to the Netherlands
American Revolutionary War prisoners of war held by Great Britain
18th-century American planters
18th-century American slave traders
American slave owners
Prisoners in the Tower of London
Merchants from colonial South Carolina
Huguenot participants in the American Revolution
Colonial South Carolina
Continental Congressmen from South Carolina
Businesspeople from Charleston, South Carolina
People from pre-statehood South Carolina
People of South Carolina in the American Revolution
People of South Carolina in the French and Indian War
Pre-statehood history of South Carolina
Signers of the Articles of Confederation
Cremation
Politicians from Charleston, South Carolina
18th-century Anglicans
Founding Fathers of the United States
Diplomats from South Carolina
18th-century American diplomats | Henry Laurens | [
"Chemistry"
] | 1,935 | [
"Cremation",
"Incineration"
] |
14,307 | https://en.wikipedia.org/wiki/Hall%20effect | The Hall effect is the production of a potential difference (the Hall voltage) across an electrical conductor that is transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879.
The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current.
Discovery
Wires carrying current in a magnetic field experience a mechanical force perpendicular to both the current and magnetic field.
In the 1820s, André-Marie Ampère observed this underlying mechanism that led to the discovery of the Hall effect. However it was not until a solid mathematical basis for electromagnetism was systematized by James Clerk Maxwell's "On Physical Lines of Force" (published in 1861–1862) that details of the interaction between magnets and electric current could be understood.
Edwin Hall then explored the question of whether magnetic fields interacted with the conductors or the electric current, and reasoned that if the force was specifically acting on the current, it should crowd current to one side of the wire, producing a small measurable voltage. In 1879, he discovered this Hall effect while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force, published under the name "On a New Action of the Magnet on Electric Currents".
Hall effect within voids
The term ordinary Hall effect can be used to distinguish the effect described in the introduction from a related effect which occurs across a void or hole in a semiconductor or metal plate when current is injected via contacts that lie on the boundary or edge of the void. The charge then flows outside the void, within the metal or semiconductor material. The effect becomes observable, in a perpendicular applied magnetic field, as a Hall voltage appearing on either side of a line connecting the current-contacts. It exhibits apparent sign reversal in comparison to the "ordinary" effect occurring in the simply connected specimen. It depends only on the current injected from within the void.
Hall effect superposition
Superposition of these two forms of the effect, the ordinary and void effects, can also be realized. First imagine the "ordinary" configuration, a simply connected (void-less) thin rectangular homogeneous element with current-contacts on the (external) boundary. This develops a Hall voltage, in a perpendicular magnetic field. Next, imagine placing a rectangular void within this ordinary configuration, with current-contacts, as mentioned above, on the interior boundary of the void. (For simplicity, imagine the contacts on the boundary of the void lined up with the ordinary-configuration contacts on the exterior boundary.) In such a combined configuration, the two Hall effects may be realized and observed simultaneously in the same doubly connected device: A Hall effect on the external boundary that is proportional to the current injected only via the outer boundary, and an apparently sign-reversed Hall effect on the interior boundary that is proportional to the current injected only via the interior boundary. The superposition of multiple Hall effects may be realized by placing multiple voids within the Hall element, with current and voltage contacts on the boundary of each void.
Further "Hall effects" may have additional physical mechanisms but are built on these basics.
Theory
The Hall effect is due to the nature of the current in a conductor. Current consists of the movement of many small charge carriers, typically electrons, holes, ions (see Electromigration) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight paths between collisions with impurities, phonons, etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved; thus, moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the straight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing.
In classical electromagnetism electrons move in the opposite direction of the current (by convention "current" describes a theoretical "hole flow"). In some metals and semiconductors it appears "holes" are actually flowing because the direction of the voltage is opposite to the derivation below.
For a simple metal where there is only one type of charge carrier (electrons), the Hall voltage V_H can be derived by using the Lorentz force and seeing that, in the steady-state condition, charges are not moving in the y-axis direction. Thus, the magnetic force on each electron in the y-axis direction is cancelled by a y-axis electrical force due to the buildup of charges. The v_x term is the drift velocity of the current, which is assumed at this point to be holes by convention. The v_x B_z term is negative in the y-axis direction by the right hand rule.

F = q (E + v × B)

In steady state, F = 0, so 0 = E_y − v_x B_z, where E_y is assigned in the direction of the y-axis (and not with the arrow of the induced electric field as in the image (pointing in the −y direction), which tells you where the field caused by the electrons is pointing).

In wires, electrons instead of holes are flowing, so v_x → −v_x and q → −q. Also E_y = −V_H / w. Substituting these changes gives

V_H = v_x B_z w

The conventional "hole" current is in the negative direction of the electron current and the negative of the electrical charge, which gives I_x = n t w (−v_x)(−e), where n is the charge carrier density, t w is the cross-sectional area, and −e is the charge of each electron. Solving for v_x and plugging into the above gives the Hall voltage:

V_H = I_x B_z / (n t e)
If the charge build-up had been positive (as it appears in some metals and semiconductors), then the V_H assigned in the image would have been negative (positive charge would have built up on the left side).
The Hall coefficient is defined as

R_H = E_y / (j_x B_z)

or

R_H = V_H t / (I B),

where j_x is the current density of the carrier electrons, and E_y is the induced electric field. In SI units, this becomes

R_H = E_y / (j_x B) = V_H t / (I B) = −1/(n e).

(The units of R_H are usually expressed as m³/C, or Ω·cm/G, or other variants.) As a result, the Hall effect is very useful as a means to measure either the carrier density or the magnetic field.
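A short numerical sketch of how these relations are used in practice follows; all input values are illustrative assumptions rather than data from a real measurement, and the single-carrier relation V_H = I B / (n t e) from the derivation above is assumed to apply.

```python
e = 1.602e-19      # elementary charge, C
I = 1.0e-3         # current through the sample, A
B = 0.5            # applied magnetic field, T
t = 1.0e-6         # sample thickness along the field, m
V_H = 3.1e-6       # measured Hall voltage, V

n = I * B / (V_H * t * e)     # carrier density, m^-3
R_H = 1.0 / (n * e)           # magnitude of the Hall coefficient, m^3/C
print(f"n ≈ {n:.3e} m^-3, |R_H| ≈ {R_H:.3e} m^3/C")
```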
One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite. In the diagram above, the Hall effect with a negative charge carrier (the electron) is presented. But consider the same magnetic field and current are applied but the current is carried inside the Hall effect device by a positive particle. The particle would of course have to be moving in the opposite direction of the electron in order for the current to be the same—down in the diagram, not up like the electron is. And thus, mnemonically speaking, your thumb in the Lorentz force law, representing (conventional) current, would be pointing the same direction as before, because current is the same—an electron moving up is the same current as a positive charge moving down. And with the fingers (magnetic field) also being the same, interestingly the charge carrier gets deflected to the left in the diagram regardless of whether it is positive or negative. But if positive carriers are deflected to the left, they would build a relatively positive voltage on the left whereas if negative carriers (namely electrons) are, they build up a negative voltage on the left as shown in the diagram. Thus for the same current and magnetic field, the electric polarity of the Hall voltage is dependent on the internal nature of the conductor and is useful to elucidate its inner workings.
This property of the Hall effect offered the first real proof that electric currents in most metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors), it is contrarily more appropriate to think of the current as positive "holes" moving rather than negative electrons. A common source of confusion with the Hall effect in such materials is that holes moving one way are really electrons moving the opposite way, so one expects the Hall voltage polarity to be the same as if electrons were the charge carriers as in most metals and n-type semiconductors. Yet we observe the opposite polarity of Hall voltage, indicating positive charge carriers. However, of course there are no actual positrons or other positive elementary particles carrying the charge in p-type semiconductors, hence the name "holes". In the same way as the oversimplistic picture of light in glass as photons being absorbed and re-emitted to explain refraction breaks down upon closer scrutiny, this apparent contradiction too can only be resolved by the modern quantum mechanical theory of quasiparticles wherein the collective quantized motion of multiple particles can, in a real physical sense, be considered to be a particle in its own right (albeit not an elementary one).
Unrelatedly, inhomogeneity in the conductive sample can result in a spurious sign of the Hall effect, even in ideal van der Pauw configuration of electrodes. For example, a Hall effect consistent with positive carriers was observed in evidently n-type semiconductors. Another source of artefact, in uniform materials, occurs when the sample's aspect ratio is not long enough: the full Hall voltage only develops far away from the current-introducing contacts, since at the contacts the transverse voltage is shorted out to zero.
Hall effect in semiconductors
When a current-carrying semiconductor is kept in a magnetic field, the charge carriers of the semiconductor experience a force in a direction perpendicular to both the magnetic field and the current. At equilibrium, a voltage appears at the semiconductor edges.
The simple formula for the Hall coefficient given above is usually a good explanation when conduction is dominated by a single charge carrier. However, in semiconductors and many metals the theory is more complex, because in these materials conduction can involve significant, simultaneous contributions from both electrons and holes, which may be present in different concentrations and have different mobilities. For moderate magnetic fields the Hall coefficient is

R_H = (p μ_h² − n μ_e²) / (e (p μ_h + n μ_e)²)

or equivalently

R_H = (p − n b²) / (e (p + n b)²)

with

b = μ_e / μ_h.

Here n is the electron concentration, p the hole concentration, μ_e the electron mobility, μ_h the hole mobility and e the elementary charge.
For large applied fields the simpler expression analogous to that for a single carrier type holds.
Relationship with star formation
Although it is well known that magnetic fields play an important role in star formation, research models indicate that Hall diffusion critically influences the dynamics of gravitational collapse that forms protostars.
Quantum Hall effect
For a two-dimensional electron system which can be produced in a MOSFET, in the presence of large magnetic field strength and low temperature, one can observe the quantum Hall effect, in which the Hall conductance undergoes quantum Hall transitions to take on the quantized values.
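The quantized values referred to here have the standard form

\[ \sigma_{xy} = \nu \, \frac{e^{2}}{h} , \]

where ν is an integer for the integer quantum Hall effect and takes certain rational values for the fractional quantum Hall effect.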
Spin Hall effect
The spin Hall effect consists in the spin accumulation on the lateral boundaries of a current-carrying sample. No magnetic field is needed. It was predicted by Mikhail Dyakonov and V. I. Perel in 1971 and observed experimentally more than 30 years later, both in semiconductors and in metals, at cryogenic as well as at room temperatures.
The quantity describing the strength of the spin Hall effect is known as the spin Hall angle, and it is defined as

θ_SH = j_s / j_c,

where j_s is the spin current density generated by the applied charge current density j_c.
Quantum spin Hall effect
For mercury telluride two dimensional quantum wells with strong spin-orbit coupling, in zero magnetic field, at low temperature, the quantum spin Hall effect has been observed in 2007.
Anomalous Hall effect
In ferromagnetic materials (and paramagnetic materials in a magnetic field), the Hall resistivity includes an additional contribution, known as the anomalous Hall effect (or the extraordinary Hall effect), which depends directly on the magnetization of the material, and is often much larger than the ordinary Hall effect. (Note that this effect is not due to the contribution of the magnetization to the total magnetic field.) For example, in nickel, the anomalous Hall coefficient is about 100 times larger than the ordinary Hall coefficient near the Curie temperature, but the two are similar at very low temperatures. Although a well-recognized phenomenon, there is still debate about its origins in the various materials. The anomalous Hall effect can be either an extrinsic (disorder-related) effect due to spin-dependent scattering of the charge carriers, or an intrinsic effect which can be described in terms of the Berry phase effect in the crystal momentum space (k-space).
Hall effect in ionized gases
The Hall effect in an ionized gas (plasma) is significantly different from the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value. The Hall parameter, β, in a plasma is the ratio between the electron gyrofrequency, Ω_e, and the electron-heavy particle collision frequency, ν:

β = Ω_e / ν = e B / (m_e ν)

where
e is the elementary charge (approximately 1.602 × 10⁻¹⁹ C)
B is the magnetic field (in teslas)
m_e is the electron mass (approximately 9.109 × 10⁻³¹ kg).
The Hall parameter value increases with the magnetic field strength.
Physically, the trajectories of electrons are curved by the Lorentz force. Nevertheless, when the Hall parameter is low, their motion between two encounters with heavy particles (neutral or ion) is almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector, J, is no longer collinear with the electric field vector, E. The two vectors J and E make the Hall angle, θ, which also gives the Hall parameter:

β = tan(θ).
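A brief numerical sketch of these relations follows; the magnetic field strength and collision frequency are illustrative assumptions.

```python
from math import atan, degrees

e = 1.602e-19      # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
B = 0.1            # magnetic field, T
nu = 1.0e9         # electron-heavy particle collision frequency, s^-1

beta = e * B / (m_e * nu)      # Hall parameter
theta = degrees(atan(beta))    # Hall angle between j and E, in degrees
print(f"beta ≈ {beta:.2f}, Hall angle ≈ {theta:.1f} degrees")
```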
Other Hall effects
The Hall effects family has expanded to encompass other quasi-particles in semiconductor nanostructures. Specifically, a set of Hall effects has emerged based on excitons and exciton-polaritons in 2D materials and quantum wells.
Applications
Hall sensors amplify and use the Hall effect for a variety of sensing applications.
Corbino effect
The Corbino effect, named after its discoverer Orso Mario Corbino, is a phenomenon involving the Hall effect, but a disc-shaped metal sample is used in place of a rectangular one. Because of its shape the Corbino disc allows the observation of Hall effect–based magnetoresistance without the associated Hall voltage.
A radial current through a circular disc, subjected to a magnetic field perpendicular to the plane of the disc, produces a "circular" current through the disc. The absence of the free transverse boundaries renders the interpretation of the Corbino effect simpler than that of the Hall effect.
See also
Electromagnetic induction
Nernst effect
Thermal Hall effect
References
Sources
Introduction to Plasma Physics and Controlled Fusion, Volume 1, Plasma Physics, Second Edition, 1984, Francis F. Chen
Further reading
Annraoi M. de Paor. Correction to the classical two-species Hall Coefficient using two-port network theory. International Journal of Electrical Engineering Education 43/4.
The Hall effect - The Feynman Lectures on Physics
University of Washington The Hall Effect
External links
P. H. Craig, System and apparatus employing the Hall effect
J. T. Maupin, E. A. Vorthmann, Hall effect contactless switch with prebiased Schmitt trigger
Understanding and Applying the Hall Effect
Hall Effect Thrusters Alta Space
Hall effect calculators
Interactive Java tutorial on the Hall effect National High Magnetic Field Laboratory
Science World (wolfram.com) article.
"The Hall Effect". nist.gov.
Table with Hall coefficients of different elements at room temperature.
Simulation of the Hall effect as a Youtube video
Hall effect in electrolytes
Condensed matter physics
Electric and magnetic fields in matter | Hall effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,247 | [
"Physical phenomena",
"Hall effect",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
14,308 | https://en.wikipedia.org/wiki/Hoover%20Dam | Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. Constructed between 1931 and 1936, during the Great Depression, it was dedicated on September 30, 1935, by President Franklin D. Roosevelt. Its construction was the result of a massive effort involving thousands of workers, and cost over 100 lives. Bills passed by Congress during its construction referred to it as Hoover Dam (after President Herbert Hoover), but the Roosevelt administration named it Boulder Dam. In 1947, Congress restored the name Hoover Dam.
Since about 1900, the Black Canyon and nearby Boulder Canyon had been investigated for their potential to support a dam that would control floods, provide irrigation water, and produce hydroelectric power. In 1928, Congress authorized the project. The winning bid to build the dam was submitted by a consortium named Six Companies, Inc., which began construction in early 1931. Such a large concrete structure had never been built before, and some of the techniques used were unproven. The torrid summer weather and lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned the dam over to the federal government on March 1, 1936, more than two years ahead of schedule.
Hoover Dam impounds Lake Mead and is located near Boulder City, Nevada, a municipality originally constructed for workers on the construction project, about southeast of Las Vegas, Nevada. The dam's generators provide power for public and private utilities in Nevada, Arizona, and California. Hoover Dam is a major tourist attraction, with 7 million tourists a year. The heavily traveled U.S. Route 93 (US 93) ran along the dam's crest until October 2010, when the Hoover Dam Bypass opened.
Background
Search for resources
As the United States developed the Southwest, the Colorado River was seen as a potential source of irrigation water. An initial attempt at diverting the river for irrigation purposes occurred in the late 1890s, when land speculator William Beatty built the Alamo Canal just north of the Mexican border; the canal dipped into Mexico before running to a desolate area Beatty named the Imperial Valley. Though water from the Alamo Canal allowed for the widespread settlement of the valley, the canal proved expensive to operate. After a catastrophic breach that caused the Colorado River to fill the Salton Sea, the Southern Pacific Railroad spent $3 million in 1906–07 to stabilize the waterway, an amount it hoped, in vain, to be reimbursed by the federal government. Even after the waterway was stabilized, it proved unsatisfactory because of constant disputes with landowners on the Mexican side of the border.
As the technology of electric power transmission improved, the Lower Colorado was considered for its hydroelectric-power potential. In 1902, the Edison Electric Company of Los Angeles surveyed the river in the hope of building a rock dam which could generate . However, at the time, the limit of transmission of electric power was , and there were few customers (mostly mines) within that limit. Edison allowed land options it held on the river to lapse—including an option for what became the site of Hoover Dam.
In the following years, the Bureau of Reclamation (BOR), known as the Reclamation Service at the time, also considered the Lower Colorado as the site for a dam. Service chief Arthur Powell Davis proposed using dynamite to collapse the walls of Boulder Canyon, north of the eventual dam site, into the river. The river would carry off the smaller pieces of debris, and a dam would be built incorporating the remaining rubble. In 1922, after considering it for several years, the Reclamation Service finally rejected the proposal, citing doubts about the unproven technique and questions as to whether it would, in fact, save money.
Planning and agreements
In 1922, the Reclamation Service presented a report calling for the development of a dam on the Colorado River for flood control and electric power generation. The report was principally authored by Davis and was called the Fall-Davis report after Interior Secretary Albert Fall. The Fall-Davis report cited use of the Colorado River as a federal concern because the river's basin covered several states, and the river eventually entered Mexico. Though the Fall-Davis report called for a dam "at or near Boulder Canyon", the Reclamation Service (which was renamed the Bureau of Reclamation the following year) found that canyon unsuitable. One potential site at Boulder Canyon was bisected by a geologic fault; two others were so narrow there was no space for a construction camp at the bottom of the canyon or for a spillway. The Service investigated Black Canyon and found it ideal; a railway could be laid from the railhead in Las Vegas to the top of the dam site. Despite the site change, the dam project was referred to as the "Boulder Canyon Project".
With little guidance on water allocation from the Supreme Court, proponents of the dam feared endless litigation. Delph Carpenter, a Colorado attorney, proposed that the seven states which fell within the river's basin (California, Nevada, Arizona, Utah, New Mexico, Colorado and Wyoming) form an interstate compact, with the approval of Congress. Such compacts were authorized by Article I of the United States Constitution but had never been concluded among more than two states. In 1922, representatives of seven states met with then-Secretary of Commerce Herbert Hoover. Initial talks produced no result, but when the Supreme Court handed down the Wyoming v. Colorado decision undermining the claims of the upstream states, they became anxious to reach an agreement. The resulting Colorado River Compact was signed on November 24, 1922.
Legislation to authorize the dam was introduced repeatedly by two California Republicans, Representative Phil Swing and Senator Hiram Johnson, but representatives from other parts of the country considered the project as hugely expensive and one that would mostly benefit California. The 1927 Mississippi flood made Midwestern and Southern congressmen and senators more sympathetic toward the dam project. On March 12, 1928, the failure of the St. Francis Dam, constructed by the city of Los Angeles, caused a disastrous flood that killed up to 600 people. As that dam was a curved-gravity type, similar in design to the arch-gravity as was proposed for the Black Canyon dam, opponents claimed that the Black Canyon dam's safety could not be guaranteed. Congress authorized a board of engineers to review plans for the proposed dam. The Colorado River Board found the project feasible, but warned that should the dam fail, every downstream Colorado River community would be destroyed, and that the river might change course and empty into the Salton Sea. The Board cautioned: "To avoid such possibilities, the proposed dam should be constructed on conservative if not ultra-conservative lines."
On December 21, 1928, President Coolidge signed the bill authorizing the dam. The Boulder Canyon Project Act appropriated $165 million for the project along with the downstream Imperial Dam and All-American Canal, a replacement for Beatty's canal entirely on the U.S. side of the border. It also permitted the compact to go into effect when at least six of the seven states approved it. This occurred on March 6, 1929, with Utah's ratification; Arizona did not approve it until 1944.
Design, preparation and contracting
Even before Congress approved the Boulder Canyon Project, the Bureau of Reclamation was considering what kind of dam should be used. Officials eventually decided on a massive concrete arch-gravity dam, the design of which was overseen by the Bureau's chief design engineer John L. Savage. The monolithic dam would be thick at the bottom and thin near the top and would present a convex face towards the water above the dam. The curving arch of the dam would transmit the water's force into the abutments, in this case the rock walls of the canyon. The wedge-shaped dam would be thick at the bottom, narrowing to at the top, leaving room for a highway connecting Nevada and Arizona.
On January 10, 1931, the Bureau made the bid documents available to interested parties, at five dollars a copy. The government was to provide the materials, and the contractor was to prepare the site and build the dam. The dam was described in minute detail, covering 100 pages of text and 76 drawings. A $2 million bid bond was to accompany each bid; the winner would have to post a $5 million performance bond. The contractor had seven years to build the dam, or penalties would ensue.
The Wattis Brothers, heads of the Utah Construction Company, were interested in bidding on the project, but lacked the money for the performance bond. They lacked sufficient resources even in combination with their longtime partners, Morrison-Knudsen, which employed the nation's leading dam builder, Frank Crowe. They formed a joint venture to bid for the project with Pacific Bridge Company of Portland, Oregon; Henry J. Kaiser & W. A. Bechtel Company of San Francisco; MacDonald & Kahn Ltd. of Los Angeles; and the J.F. Shea Company of Portland, Oregon. The joint venture was called Six Companies, Inc., as Bechtel and Kaiser were considered one company for purposes of the "Six" in the name. The name was descriptive and was an inside joke among the San Franciscans in the bid, where "Six Companies" was also a Chinese benevolent association in the city. There were three valid bids, and Six Companies' bid of $48,890,955 was the lowest, within $24,000 of the confidential government estimate of what the dam would cost to build, and five million dollars less than the next-lowest bid.
The city of Las Vegas had lobbied hard to be the headquarters for the dam construction, closing its many speakeasies when the decision maker, Secretary of the Interior Ray Wilbur, came to town. Instead, Wilbur announced in early 1930 that a model city was to be built in the desert near the dam site. This town became known as Boulder City, Nevada. Construction of a rail line joining Las Vegas and the dam site began in September 1930.
Construction
Labor force
Soon after the dam was authorized, increasing numbers of unemployed people converged on southern Nevada. Las Vegas, then a small city of some 5,000, saw between 10,000 and 20,000 unemployed descend on it. A government camp was established for surveyors and other personnel near the dam site; this soon became surrounded by a squatters' camp. Known as McKeeversville, the camp was home to men hoping for work on the project, together with their families. Another camp, on the flats along the Colorado River, was officially called Williamsville, but was known to its inhabitants as "Ragtown". When construction began, Six Companies hired large numbers of workers, with more than 3,000 on the payroll by 1932 and with employment peaking at 5,251 in July 1934. "Mongolian" (Chinese) labor was prohibited by the construction contract, while the number of black people employed by Six Companies never exceeded thirty, mostly lowest-pay-scale laborers in a segregated crew, who were issued separate water buckets.
As part of the contract, Six Companies, Inc. was to build Boulder City to house the workers. The original timetable called for Boulder City to be built before the dam project began, but President Hoover ordered work on the dam to begin in March 1931 rather than in October. The company built bunkhouses, attached to the canyon wall, to house 480 single men at what became known as River Camp. Workers with families were left to provide their own accommodations until Boulder City could be completed, and many lived in Ragtown. The site of Hoover Dam endures extremely hot weather, and the summer of 1931 was especially torrid, with the daytime high averaging . Sixteen workers and other riverbank residents died of heat prostration between June 25 and July 26, 1931.
The Industrial Workers of the World (IWW or "Wobblies"), though much-reduced from their heyday as militant labor organizers in the early years of the century, hoped to unionize the Six Companies workers by capitalizing on their discontent. They sent eleven organizers, several of whom were arrested by Las Vegas police. On August 7, 1931, the company cut wages for all tunnel workers. Although the workers sent the organizers away, not wanting to be associated with the "Wobblies", they formed a committee to represent them with the company. The committee drew up a list of demands that evening and presented them to Crowe the following morning. He was noncommittal. The workers hoped that Crowe, the general superintendent of the job, would be sympathetic; instead, he gave a scathing interview to a newspaper, describing the workers as "malcontents".
On the morning of the 9th, Crowe met with the committee and told them that management refused their demands, was stopping all work, and was laying off the entire work force, except for a few office workers and carpenters. The workers were given until 5 p.m. to vacate the premises. Concerned that a violent confrontation was imminent, most workers took their paychecks and left for Las Vegas to await developments. Two days later, the remainder were talked into leaving by law enforcement. On August 13, the company began hiring workers again, and two days later, the strike was called off. While the workers received none of their demands, the company guaranteed there would be no further reductions in wages. Living conditions began to improve as the first residents moved into Boulder City in late 1931.
A second labor action took place in July 1935, as construction on the dam wound down. When a Six Companies manager altered working times to force workers to take lunch on their own time, workers responded with a strike, and Crowe rescinded the lunch decree. Emboldened by this reversal, workers raised their demands to include a $1-per-day raise. The company agreed to ask the Federal government to supplement the pay, but no money was forthcoming from Washington. The strike ended.
River diversion
Before the dam could be built, the Colorado River needed to be diverted away from the construction site. To accomplish this, four diversion tunnels were driven through the canyon walls, two on the Nevada side and two on the Arizona side. These tunnels were in diameter. Their combined length was nearly 16,000 ft, more than three miles. The contract required these tunnels to be completed by October 1, 1933, with a $3,000-per-day fine to be assessed for any delay. To meet the deadline, Six Companies had to complete work by early 1933, since only in late fall and winter was the water level in the river low enough to safely divert.
Tunneling began at the lower portals of the Nevada tunnels in May 1931. Shortly afterward, work began on two similar tunnels in the Arizona canyon wall. In March 1932, work began on lining the tunnels with concrete. First the base, or invert, was poured. Gantry cranes, running on rails through the entire length of each tunnel, were used to place the concrete. The sidewalls were poured next. Movable sections of steel forms were used for the sidewalls. Finally, using pneumatic guns, the overheads were filled in. The concrete lining is thick, reducing the finished tunnel diameter to . The river was diverted into the two Arizona tunnels on November 13, 1932, by exploding a temporary cofferdam protecting those tunnels while simultaneously dumping rubble into the river until its natural course was blocked; the Nevada tunnels were kept in reserve for high water.
Following the completion of the dam, the entrances to the two outer diversion tunnels were sealed at the opening and halfway through the tunnels with large concrete plugs. The downstream halves of the tunnels following the inner plugs are now the main bodies of the spillway tunnels. The inner diversion tunnels were plugged at approximately one-third of their length, beyond which they now carry steel pipes connecting the intake towers to the power plant and outlet works. The inner tunnels' outlets are equipped with gates that can be closed to drain the tunnels for maintenance.
Groundworks, rock clearance and grout curtain
To protect the construction site from the Colorado River and to facilitate the river's diversion, two cofferdams were constructed. Work on the upper cofferdam began in September 1932, even though the river had not yet been diverted. The cofferdams were designed to protect against the possibility of the river's flooding a site at which two thousand men might be at work, and their specifications were covered in the bid documents in nearly as much detail as the dam itself. The upper cofferdam was high, and thick at its base, thicker than the dam itself. It contained of material.
When the cofferdams were in place and the construction site was drained of water, excavation for the dam foundation began. For the dam to rest on solid rock, it was necessary to remove accumulated erosion soils and other loose materials in the riverbed until sound bedrock was reached. Work on the foundation excavations was completed in June 1933. During this excavation, approximately of material was removed. Since the dam was an arch-gravity type, the side-walls of the canyon would bear the force of the impounded lake. Therefore, the side-walls were also excavated to reach virgin rock, as weathered rock might provide pathways for water seepage. Shovels for the excavation came from the Marion Power Shovel Company.
The men who removed this rock were called "high scalers". While suspended from the top of the canyon with ropes, the high-scalers climbed down the canyon walls and removed the loose rock with jackhammers and dynamite. Falling objects were the most common cause of death on the dam site; the high scalers' work thus helped ensure worker safety. One high scaler was able to save a life in a more direct manner: when a government inspector lost his grip on a safety line and began tumbling down a slope towards almost certain death, a high scaler was able to intercept him and pull him into the air. The construction site had become a magnet for tourists. The high scalers were prime attractions and showed off for the watchers. The high scalers received considerable media attention, with one worker dubbed the "Human Pendulum" for swinging co-workers (and, at other times, cases of dynamite) across the canyon. To protect themselves against falling objects, some high scalers dipped cloth hats in tar and allowed them to harden. When workers wearing such headgear were struck hard enough to inflict broken jaws, they sustained no skull damage. Six Companies ordered thousands of what initially were called "hard boiled hats" (later "hard hats") and strongly encouraged their use.
The cleared, underlying rock foundation of the dam site was reinforced with grout, forming a grout curtain. Holes were driven into the walls and base of the canyon, as deep as into the rock, and any cavities encountered were to be filled with grout. This was done to stabilize the rock, to prevent water from seeping past the dam through the canyon rock, and to limit "uplift"—upward pressure from water seeping under the dam. The workers were under severe time constraints due to the beginning of the concrete pour. When they encountered hot springs or cavities too large to readily fill, they moved on without resolving the problem. A total of 58 of the 393 holes were incompletely filled. After the dam was completed and the lake began to fill, large numbers of significant leaks caused the Bureau of Reclamation to examine the situation. It found that the work had been incompletely done, and was based on less than a full understanding of the canyon's geology. New holes were drilled from inspection galleries inside the dam into the surrounding bedrock. It took nine years (1938–47) under relative secrecy to complete the supplemental grout curtain.
Concrete
The first concrete was poured into the dam on June 6, 1933, 18 months ahead of schedule. Since concrete heats and contracts as it cures, the potential for uneven cooling and contraction of the concrete posed a serious problem. Bureau of Reclamation engineers calculated that if the dam were to be built in a single continuous pour, the concrete would take 125 years to cool, and the resulting stresses would cause the dam to crack and crumble. Instead, the ground where the dam would rise was marked with rectangles, and concrete blocks in columns were poured, some as large as and high. Each five-foot form contained a set of steel pipes; cool river water would be poured through the pipes, followed by ice-cold water from a refrigeration plant. When an individual block had cured and had stopped contracting, the pipes were filled with grout. Grout was also used to fill the hairline spaces between columns, which were grooved to increase the strength of the joints.
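The need to pour in small, artificially cooled blocks follows from how slowly heat escapes thick concrete by conduction alone. The sketch below is a rough, back-of-the-envelope illustration of that scaling, not a reconstruction of the Bureau's 125-year estimate: the thermal diffusivity is an assumed textbook value for mass concrete, and the slab thicknesses are chosen only to show that conductive cooling time grows with the square of the pour's dimensions.

```python
# Rough sketch: characteristic conductive cooling time of a concrete slab
# scales as (thickness^2 / thermal diffusivity).  The diffusivity below is an
# assumed, typical value for mass concrete, used purely for illustration.

ALPHA_M2_PER_DAY = 0.06  # assumed thermal diffusivity of concrete, m^2/day

def cooling_time_days(thickness_m: float, alpha: float = ALPHA_M2_PER_DAY) -> float:
    """Order-of-magnitude conductive cooling time for a slab, in days."""
    return thickness_m ** 2 / alpha

# A ~5 ft (1.5 m) lift, and two larger illustrative pour thicknesses.
for thickness in (1.5, 15.0, 150.0):
    days = cooling_time_days(thickness)
    print(f"{thickness:6.1f} m thick pour: ~{days:,.0f} days (~{days / 365:,.1f} years)")
```

A pour ten times thicker takes roughly a hundred times longer to shed its heat, which is why small lifts threaded with chilled-water pipes were used instead of one continuous pour.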
The concrete was delivered in huge steel buckets almost 7 feet in diameter; Crowe was awarded two patents for their design. These buckets, which weighed when full, were filled at two massive concrete plants on the Nevada side, and were delivered to the site in special railcars. The buckets were then suspended from aerial cableways which were used to deliver the bucket to a specific column. As the required grade of aggregate in the concrete differed depending on placement in the dam (from pea-sized gravel to stones), it was vital that the bucket be maneuvered to the proper column. When the bottom of the bucket opened up, disgorging its load of concrete, a team of men worked it throughout the form. Although there are myths that men were caught in the pour and are entombed in the dam to this day, each bucket deepened the concrete in a form by only , and Six Companies engineers would not have permitted a flaw caused by the presence of a human body.
A total of of concrete was used in the dam before concrete pouring ceased on May 29, 1935. Additional concrete was used in the power plant and other works. More than of cooling pipes were placed within the concrete. Overall, there is enough concrete in the dam to pave a two-lane highway from San Francisco to New York. Concrete cores were removed from the dam for testing in 1995; they showed that "Hoover Dam's concrete has continued to slowly gain strength" and the dam is composed of a "durable concrete having a compressive strength exceeding the range typically found in normal mass concrete". Hoover Dam concrete is not subject to alkali–silica reaction (ASR), as the Hoover Dam builders happened to use nonreactive aggregate, unlike that at downstream Parker Dam, where ASR has caused measurable deterioration.
Dedication and completion
With most work finished on the dam itself (the powerhouse remained uncompleted), a formal dedication ceremony was arranged for September 30, 1935, to coincide with a western tour being made by President Franklin D. Roosevelt. The morning of the dedication, it was moved forward three hours from 2 p.m. Pacific time to 11 a.m.; this was done because Secretary of the Interior Harold L. Ickes had reserved a radio slot for the President for 2 p.m. but officials did not realize until the day of the ceremony that the slot was for 2 p.m. Eastern Time. Despite the change in the ceremony time, and temperatures of , 10,000 people were present for the President's speech, in which he avoided mentioning the name of former President Hoover, who was not invited to the ceremony. To mark the occasion, a three-cent stamp was issued by the United States Post Office Department—bearing the name "Boulder Dam", the official name of the dam between 1933 and 1947. After the ceremony, Roosevelt made the first visit by any American president to Las Vegas.
Most work had been completed by the dedication, and Six Companies negotiated with the government through late 1935 and early 1936 to settle all claims and arrange for the formal transfer of the dam to the Federal Government. The parties came to an agreement and on March 1, 1936, Secretary Ickes formally accepted the dam on behalf of the government. Six Companies was not required to complete work on one item, a concrete plug for one of the bypass tunnels, as the tunnel had to be used to take in irrigation water until the powerhouse went into operation.
Construction deaths
There were 112 deaths reported as associated with the construction of the dam. The first was Bureau of Reclamation employee Harold Connelly, who died on May 15, 1921, after falling from a barge while surveying the Colorado River for an ideal dam site. The second was surveyor John Gregory ("J.G.") Tierney, who drowned in a flash flood on December 20, 1922, while engaged in the same search. The official list's final death occurred on December 20, 1935, when Patrick Tierney, electrician's helper and the son of J.G. Tierney, fell from one of the two Arizona-side intake towers. Included in the fatality list are three workers who took their own lives on site, one in 1932 and two in 1933. Of the 112 fatalities, 91 were Six Companies employees, three were Bureau of Reclamation employees, and one was a visitor to the site; the remainder were employees of various contractors not part of Six Companies.
Ninety-six of the deaths occurred during construction at the site. Not included in the official number of fatalities were deaths that were recorded as pneumonia. Workers alleged that this diagnosis was a cover for death from carbon monoxide poisoning (brought on by the use of gasoline-fueled vehicles in the diversion tunnels), and a classification used by Six Companies to avoid paying compensation claims. The site's diversion tunnels frequently reached , enveloped in thick plumes of vehicle exhaust gases. A total of 42 workers were recorded as having died from pneumonia and were not included in the above total; none were listed as having died from carbon monoxide poisoning. No deaths of non-workers from pneumonia were recorded in Boulder City during the construction period.
Architectural style
The initial plans for the facade of the dam, the power plant, the outlet tunnels and ornaments clashed with the modern look of an arch dam. The Bureau of Reclamation, more concerned with the dam's functionality, adorned it with a Gothic-inspired balustrade and eagle statues. This initial design was criticized by many as being too plain and unremarkable for a project of such immense scale, so Los Angeles-based architect Gordon B. Kaufmann, then the supervising architect to the Bureau of Reclamation, was brought in to redesign the exteriors. Kaufmann greatly streamlined the design and applied an elegant Art Deco style to the entire project. He designed sculpted turrets rising seamlessly from the dam face and clock faces on the intake towers set for the time in Nevada and Arizona; the two states are in different time zones, but since Arizona does not observe daylight saving time, the clocks display the same time for more than half the year.
At Kaufmann's request, Denver artist Allen Tupper True was hired to handle the design and decoration of the walls and floors of the new dam. True's design scheme incorporated motifs of the Navajo and Pueblo tribes of the region. Although some were initially opposed to these designs, True was given the go-ahead and was officially appointed consulting artist. With the assistance of the National Laboratory of Anthropology, True researched authentic decorative motifs from Indian sand paintings, textiles, baskets and ceramics. The images and colors are based on Native American visions of rain, lightning, water, clouds, and local animals—lizards, serpents, birds—and on the Southwestern landscape of stepped mesas. In these works, which are integrated into the walkways and interior halls of the dam, True also reflected on the machinery of the operation, making the symbolic patterns appear both ancient and modern.
With the agreement of Kaufmann and the engineers, True also devised for the pipes and machinery an innovative color-coding which was implemented throughout all BOR projects. True's consulting artist job lasted through 1942; it was extended so he could complete design work for the Parker, Shasta and Grand Coulee dams and power plants. True's work on the Hoover Dam was humorously referred to in a poem published in The New Yorker, part of which read, "lose the spark, and justify the dream; but also worthy of remark will be the color scheme".
Complementing Kaufmann and True's work, sculptor Oskar J. W. Hansen designed many of the sculptures on and around the dam. His works include the monument of dedication plaza, a plaque to memorialize the workers killed and the bas-reliefs on the elevator towers. In his words, Hansen wanted his work to express "the immutable calm of intellectual resolution, and the enormous power of trained physical strength, equally enthroned in placid triumph of scientific accomplishment", because "[t]he building of Hoover Dam belongs to the sagas of the daring." Hansen's dedication plaza, on the Nevada abutment, contains a sculpture of two winged figures flanking a flagpole.
Surrounding the base of the monument is a terrazzo floor embedded with a "star map". The map depicts the Northern Hemisphere sky at the moment of President Roosevelt's dedication of the dam. This is intended to help future astronomers, if necessary, calculate the exact date of dedication. The bronze figures, dubbed Winged Figures of the Republic, were both formed in a continuous pour. To put such large bronzes into place without marring the highly polished bronze surface, they were placed on ice and guided into position as the ice melted. Hansen's bas-relief on the Nevada elevator tower depicts the benefits of the dam: flood control, navigation, irrigation, water storage, and power. The bas-relief on the Arizona elevator depicts, in his words, "the visages of those Indian tribes who have inhabited mountains and plains from ages distant."
Operation
Power plant and water demands
Excavation for the powerhouse was carried out simultaneously with the excavation for the dam foundation and abutments. The excavation of this U-shaped structure located at the downstream toe of the dam was completed in late 1933 with the first concrete placed in November 1933. Filling of Lake Mead began February 1, 1935, even before the last of the concrete was poured that May. The powerhouse was one of the projects uncompleted at the time of the formal dedication on September 30, 1935; a crew of 500 men remained to finish it and other structures. To make the powerhouse roof bombproof, it was constructed of layers of concrete, rock, and steel with a total thickness of about , topped with layers of sand and tar.
In the latter half of 1936, water levels in Lake Mead were high enough to permit power generation, and the first three Allis-Chalmers-built Francis turbine-generators, all on the Nevada side, began operating. In March 1937, one more Nevada generator went online, followed by the first Arizona generator in August. By September 1939, four more generators were operating, and the dam's power plant became the largest hydroelectricity facility in the world. The final generator was not placed in service until 1961, bringing the maximum generating capacity to 1,345 megawatts at the time. Original plans called for 16 large generators, eight on each side of the river, but two smaller generators were installed instead of one large one on the Arizona side for a total of 17. The smaller generators were used to serve smaller communities at a time when the output of each generator was dedicated to a single municipality, before the dam's total power output was placed on the grid and made arbitrarily distributable.
Before water from Lake Mead reaches the turbines, it enters the intake towers and then four gradually narrowing penstocks which funnel the water down towards the powerhouse. The intakes provide a maximum hydraulic head (water pressure) of as the water reaches a speed of about . The entire flow of the Colorado River usually passes through the turbines. The spillways and outlet works (jet-flow gates) are rarely used. The jet-flow gates, located in concrete structures above the river and also at the outlets of the inner diversion tunnels at river level, may be used to divert water around the dam in emergency or flood conditions, but have never done so, and in practice are used only to drain water from the penstocks for maintenance. Following an uprating project from 1986 to 1993, the total gross power rating for the plant, including two 2.4 megawatt Pelton turbine-generators that power Hoover Dam's own operations, is a maximum of 2,080 megawatts. The annual generation of Hoover Dam varies. The maximum net generation was 10.348 TWh in 1984, and the minimum since 1940 was 2.648 TWh in 1956. The average annual generation was 4.2 TWh for 1947–2008. In 2015, the dam generated 3.6 TWh.
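The generation figures above can be related to the plant's nameplate rating through a simple capacity-factor calculation. The sketch below uses only the numbers quoted in this section (the post-uprating 2,080 MW rating and the annual generation figures); note that the 1984 maximum predates the 1986–1993 uprating, so its true capacity factor against the then-installed capacity would be somewhat higher than shown.

```python
# Capacity factor = actual annual generation / generation if the plant ran at
# its nameplate rating all year.  All figures are those quoted in the text.

NAMEPLATE_MW = 2080.0          # post-uprating gross rating
HOURS_PER_YEAR = 8760.0
max_possible_twh = NAMEPLATE_MW * HOURS_PER_YEAR / 1e6  # MWh -> TWh (~18.2 TWh)

annual_generation_twh = {
    "1947-2008 average": 4.2,
    "1984 maximum": 10.348,  # predates the uprating, so this understates its true factor
    "1956 minimum": 2.648,
    "2015": 3.6,
}

for label, twh in annual_generation_twh.items():
    print(f"{label}: {twh} TWh -> capacity factor ~{twh / max_possible_twh:.0%}")
```

Capacity factors in the roughly 15–60% range reflect the point made in the following paragraphs: releases, and therefore generation, are driven by downstream water demand rather than by electricity demand.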
The amount of electricity generated by Hoover Dam has been decreasing along with the falling water level in Lake Mead due to the prolonged drought since 2000 and high demand for the Colorado River's water. By 2014, its generating capacity had been downrated by 23% to 1,592 MW, and the dam was providing power only during periods of peak demand. Lake Mead fell to a new record low elevation of on July 1, 2016, before beginning to rebound slowly. Under its original design, the dam would no longer be able to generate power once the water level fell below , which might have occurred in 2017 had water restrictions not been enforced. To lower the minimum power pool elevation from , five wide-head turbines, designed to work efficiently with less flow, were installed. Water levels were maintained at over in 2018 and 2019, but fell to a new record low of on June 10, 2021, and were projected to fall below by the end of 2021.
Control of water was the primary concern in the building of the dam. Power generation has allowed the dam project to be self-sustaining: proceeds from the sale of power repaid the 50-year construction loan, and those revenues also finance the multimillion-dollar yearly maintenance budget. Power is generated in step with and only with the release of water in response to downstream water demands.
Lake Mead and downstream releases from the dam also provide water for both municipal and irrigation uses. Water released from the Hoover Dam eventually reaches several canals. The Colorado River Aqueduct and Central Arizona Project branch off Lake Havasu while the All-American Canal is supplied by the Imperial Dam. In total, water from Lake Mead serves 18 million people in Arizona, Nevada, and California and supplies the irrigation of over of land.
In 2018, the Los Angeles Department of Water and Power (LADWP) proposed a $3 billion pumped-storage hydroelectricity project—a "battery" of sorts—that would use wind and solar power to recirculate water back up to Lake Mead from a pumping station downriver.
Power distribution
Electricity from the dam's powerhouse was originally sold pursuant to a fifty-year contract, authorized by Congress in 1934, which ran from 1937 to 1987. In 1984, Congress passed a new statute which set power allocations to southern California, Arizona, and Nevada from the dam from 1987 to 2017. The powerhouse was run under the original authorization by the Los Angeles Department of Water and Power and Southern California Edison; in 1987, the Bureau of Reclamation assumed control. In 2011, Congress enacted legislation extending the current contracts until 2067, after setting aside 5% of Hoover Dam's power for sale to Native American tribes, electric cooperatives, and other entities. The new arrangement began on October 1, 2017.
The Bureau of Reclamation reports that the energy generated under the contracts ending in 2017 was allocated as follows:
Spillways
The dam is protected against over-topping by two spillways. The spillway entrances are located behind each dam abutment, running roughly parallel to the canyon walls. The spillway entrance arrangement forms a classic side-flow weir with each spillway containing four and steel-drum gates. Each gate weighs and can be operated manually or automatically. Gates are raised and lowered depending on water levels in the reservoir and flood conditions. The gates cannot entirely prevent water from entering the spillways but can maintain an extra of lake level.
Water flowing over the spillways falls dramatically into , spillway tunnels before connecting to the outer diversion tunnels and reentering the main river channel below the dam. This complex spillway entrance arrangement combined with the approximate elevation drop from the top of the reservoir to the river below was a difficult engineering problem and posed numerous design challenges. Each spillway's capacity of was empirically verified in post-construction tests in 1941.
The large spillway tunnels have only been used twice, for testing in 1941 and because of flooding in 1983. Both times, when inspecting the tunnels after the spillways were used, engineers found major damage to the concrete linings and underlying rock. The 1941 damage was attributed to a slight misalignment of the tunnel invert (or base), which caused cavitation, a phenomenon in fast-flowing liquids in which vapor bubbles collapse with explosive force. In response to this finding, the tunnels were patched with special heavy-duty concrete and the surface of the concrete was polished mirror-smooth. The spillways were modified in 1947 by adding flip buckets, which both slow the water and decrease the spillway's effective capacity, in an attempt to eliminate conditions thought to have contributed to the 1941 damage. The 1983 damage, also due to cavitation, led to the installation of aerators in the spillways. Tests at Grand Coulee Dam showed that the technique worked, in principle.
Roadway and tourism
There are two lanes for automobile traffic across the top of the dam, which formerly served as the Colorado River crossing for U.S. Route 93. In the wake of the September 11 terrorist attacks, authorities expressed security concerns and the Hoover Dam Bypass project was expedited. Pending the completion of the bypass, restricted traffic was permitted over Hoover Dam. Some types of vehicles were inspected prior to crossing the dam while semi-trailer trucks, buses carrying luggage, and enclosed-box trucks over long were not allowed on the dam at all, and were diverted to U.S. Route 95 or Nevada State Routes 163/68. The four-lane Hoover Dam Bypass opened on October 19, 2010. It includes a composite steel and concrete arch bridge, the Mike O'Callaghan–Pat Tillman Memorial Bridge, downstream from the dam.
With the opening of the bypass, through traffic is no longer allowed across Hoover Dam; dam visitors are allowed to use the existing roadway to approach from the Nevada side and cross to parking lots and other facilities on the Arizona side.
Hoover Dam opened for tours in 1937 after its completion. Following Japan's attack on Pearl Harbor on December 7, 1941, and the United States' entry into World War II, the dam was closed to the public, and only authorized traffic, in convoys, was permitted for the duration of the war. After the war, it reopened on September 2, 1945, and by 1953, annual attendance had risen to 448,081. The dam closed on November 25, 1963, and March 31, 1969, days of mourning in remembrance of Presidents Kennedy and Eisenhower. In 1995, a new visitors' center was built, and the following year, visits exceeded one million for the first time. The dam closed again to the public on September 11, 2001; modified tours were resumed in December and a new "Discovery Tour" was added the following year. Today, nearly a million people per year take the tours of the dam offered by the Bureau of Reclamation. The government's increased security concerns have led to the exclusion of visitors from most of the interior structures. As a result, few of True's decorations can now be seen by visitors. Visitors can only purchase tickets on-site and can choose between a guided tour of the whole facility or of the power plant area only. The only self-guided tour option is for the visitor center itself, where visitors can view various exhibits and enjoy a 360-degree view of the dam.
Environmental impact
The changes in water flow and use caused by Hoover Dam's construction and operation have had a large impact on the Colorado River Delta. The construction of the dam has been implicated in causing the decline of this estuarine ecosystem. For six years after the construction of the dam, while Lake Mead filled, virtually no water reached the mouth of the river. The delta's estuary, which once had a freshwater-saltwater mixing zone stretching south of the river's mouth, was turned into an inverse estuary where the level of salinity was higher close to the river's mouth.
The Colorado River had experienced natural flooding before the construction of the Hoover Dam. The dam eliminated the natural flooding, threatening many species adapted to the flooding, including both plants and animals. The construction of the dam devastated the populations of native fish in the river downstream from the dam. Four species of fish native to the Colorado River, the Bonytail chub, Colorado pikeminnow, Humpback chub, and Razorback sucker, are listed as endangered.
Naming controversy
During the years of lobbying leading up to the passage of legislation authorizing the dam in 1928, the press generally referred to the dam as "Boulder Dam" or as "Boulder Canyon Dam", even though the proposed site had shifted to Black Canyon. The Boulder Canyon Project Act of 1928 (BCPA) never mentioned a proposed name or title for the dam. The BCPA merely allows the government to "construct, operate, and maintain a dam and incidental works in the main stream of the Colorado River at Black Canyon or Boulder Canyon".
When Secretary of the Interior Ray Wilbur spoke at the ceremony starting the building of the railway between Las Vegas and the dam site on September 17, 1930, he named the dam "Hoover Dam", citing a tradition of naming dams after Presidents, though none had been so honored during their terms of office. Wilbur justified his choice on the ground that Hoover was "the great engineer whose vision and persistence ... has done so much to make [the dam] possible". One writer complained in response that "the Great Engineer had quickly drained, ditched, and dammed the country."
After Hoover's election defeat in 1932 and the accession of the Roosevelt administration, Secretary Ickes ordered on May 13, 1933, that the dam be referred to as Boulder Dam. Ickes stated that Wilbur had been imprudent in naming the dam after a sitting president, that Congress had never ratified his choice, and that it had long been referred to as Boulder Dam. Unknown to the general public, Attorney General Homer Cummings informed Ickes that Congress had indeed used the name "Hoover Dam" in five different bills appropriating money for construction of the dam. The official status this conferred to the name "Hoover Dam" had been noted on the floor of the House of Representatives by Congressman Edward T. Taylor of Colorado on December 12, 1930, but was likewise ignored by Ickes.
When Ickes spoke at the dedication ceremony on September 30, 1935, he was determined, as he recorded in his diary, "to try to nail down for good and all the name Boulder Dam." At one point in the speech, he spoke the words "Boulder Dam" five times within thirty seconds. Further, he suggested that if the dam were to be named after any one person, it should be for California Senator Hiram Johnson, a lead sponsor of the authorizing legislation. Roosevelt also referred to the dam as Boulder Dam, and the Republican-leaning Los Angeles Times, which at the time of Ickes' name change had run an editorial cartoon showing Ickes ineffectively chipping away at an enormous sign "HOOVER DAM", reran it showing Roosevelt reinforcing Ickes, but having no greater success.
In the following years, the name "Boulder Dam" failed to fully take hold, with many Americans using both names interchangeably and mapmakers divided as to which name should be printed. Memories of the Great Depression faded, and Hoover to some extent rehabilitated himself through good works during and after World War II. In 1947, a bill passed both Houses of Congress unanimously restoring the name "Hoover Dam." Ickes, who was by then a private citizen, opposed the change, stating, "I didn't know Hoover was that small a man to take credit for something he had nothing to do with."
Recognition
Hoover Dam was recognized as a National Historic Civil Engineering Landmark in 1984. It was listed on the National Register of Historic Places in 1981 and was designated a National Historic Landmark in 1985, cited for its engineering innovations.
See also
Ralph Luther Criswell, lobbyist on behalf of the dam
Glen Canyon Dam
Hoover Dam Police
List of dams in the Colorado River system
List of largest hydroelectric power stations
List of largest hydroelectric power stations in the United States
List of National Historic Landmarks in Arizona
List of National Historic Landmarks in Nevada
St. Thomas, Nevada, ghost town with site now under Lake Mead.
Water in California
Hoover Dam in popular culture
Citations
Bibliography
Cited works
Other sources
Arrigo, Anthony F. (2014). Imaging Hoover Dam: The Making of a Cultural Icon. Reno, NV: University of Nevada Press.
External links
Hoover Dam – Visitors Site
Historic Construction Company Project – Hoover Dam
Hoover Dam – An American Experience Documentary
Boulder City/Hoover Dam Museum official site
1936 establishments in Arizona
1936 establishments in Nevada
Arch-gravity dams
Art Deco architecture in Arizona
Presidential memorials in the United States
Buildings and structures in Clark County, Nevada
Buildings and structures in Mohave County, Arizona
Dams completed in 1936
Dams in Arizona
Dams in Nevada
Dams on the Colorado River
Dams on the National Register of Historic Places in Nevada
Energy infrastructure completed in 1939
Energy infrastructure on the National Register of Historic Places
Engineering projects
Historic American Engineering Record in Arizona
Historic American Engineering Record in Nevada
Historic Civil Engineering Landmarks
Hydroelectric power plants in Arizona
Hydroelectric power plants in Nevada
Industrial buildings and structures on the National Register of Historic Places in Arizona
Lake Mead National Recreation Area
Lake Mead
Naming controversies
National Historic Landmarks in Arizona
National Historic Landmarks in Nevada
National Register of Historic Places in Mohave County, Arizona
PWA Moderne architecture
Tourist attractions in Clark County, Nevada
Tourist attractions in Mohave County, Arizona
U.S. Route 93
United States Bureau of Reclamation dams | Hoover Dam | [
"Engineering"
] | 9,404 | [
"Civil engineering",
"nan",
"Historic Civil Engineering Landmarks"
] |
14,313 | https://en.wikipedia.org/wiki/Hair | Hair is a protein filament that grows from follicles found in the dermis. Hair is one of the defining characteristics of mammals.
The human body, apart from areas of glabrous skin, is covered in follicles which produce thick terminal and fine vellus hair. Most common interest in hair is focused on hair growth, hair types, and hair care, but hair is also an important biomaterial primarily composed of protein, notably alpha-keratin.
Attitudes towards different forms of hair, such as hairstyles and hair removal, vary widely across different cultures and historical periods, but it is often used to indicate a person's personal beliefs or social position, such as their age, gender, or religion.
Overview
Meaning
The word "hair" usually refers to two distinct structures:
the part beneath the skin, called the hair follicle, or, when pulled from the skin, the bulb or root. This organ is located in the dermis and maintains stem cells, which not only re-grow the hair after it falls out, but also are recruited to regrow skin after a wound.
the hair shaft, which is the hard filamentous part that extends above the skin surface. It is made of multi-layered keratinized (dead) flat cells whose rope-like filaments provide structure and strength to it. The protein called keratin makes up most of its volume. A cross section of the hair shaft may be divided roughly into three zones.
Hair fibers have a structure consisting of several layers, starting from the outside:
the cuticle, which consists of several layers of flat, thin cells laid out overlapping one another like roof shingles
the cortex, which contains the keratin bundles in cell structures that remain roughly rod-like
the medulla, a disorganized and open area at the fiber's center
Etymology
The word "hair" is derived from and , in turn derived from and , with influence from . Both the Old English and Old Norse words derive from and are related to terms for hair in other Germanic languages such as , Dutch and , and . The now broadly obsolete word "fax" refers specifically to head hair and is found in compounds such as Fairfax and Halifax. It is derived from and is cognate with terms such as Old Norse and .
Description
Each strand of hair is made up of the medulla, cortex, and cuticle. The innermost region, the medulla, is an open and unstructured region that is not always present. The highly structural and organized cortex, or second of three layers of the hair, is the primary source of mechanical strength and water uptake. The cortex contains melanin, which colors the fiber based on the number, distribution and types of melanin granules. The melanin may be evenly spaced or cluster around the edges of the hair. The shape of the follicle determines the shape of the cortex, and the shape of the fiber is related to how straight or curly the hair is. People with straight hair have round hair fibers. Oval and other shaped fibers are generally more wavy or curly. The cuticle is the outer covering. Its complex structure slides as the hair swells and is covered with a single molecular layer of lipid that makes the hair repel water. The diameter of human hair varies from . Some of these characteristics in humans' head hair vary by race: people of mostly African ancestry tend to have hair with a diameter of 60–90 μm and a flat cross-section, while people of mostly European or Middle Eastern ancestry tend to have hair with a diameter of 70–100 μm and an oval cross-section, and people of mostly Asian or Native American ancestry tend to have hair with a diameter of 90–120 μm and a round cross-section. There are roughly two million small, tubular glands and sweat glands that produce watery fluids that cool the body by evaporation. The glands at the opening of the hair produce a fatty secretion that lubricates the hair.
Hair growth begins inside the hair follicle. The only "living" portion of the hair is found in the follicle. The hair that is visible is the hair shaft, which exhibits no biochemical activity and is considered "dead". The base of a hair's root (the "bulb") contains the cells that produce the hair shaft. Other structures of the hair follicle include the oil producing sebaceous gland which lubricates the hair and the arrector pili muscles, which are responsible for causing hairs to stand up. In humans with little body hair, the effect results in goose bumps.
Root of the hair
The root of the hair ends in an enlargement, the hair bulb, which is whiter in color and softer in texture than the shaft and is lodged in a follicular involution of the epidermis called the hair follicle. The bulb of hair consists of fibrous connective tissue, glassy membrane, external root sheath, internal root sheath composed of epithelium stratum (Henle's layer) and granular stratum (Huxley's layer), cuticle, cortex and medulla.
Natural color
All natural hair colors are the result of two types of hair pigments. Both of these pigments are melanin types, produced inside the hair follicle and packed into granules found in the fibers. Eumelanin is the dominant pigment in brown hair and black hair, while pheomelanin is dominant in red hair. Blond hair is the result of having little pigmentation in the hair strand. Gray hair occurs when melanin production decreases or stops, while poliosis is white hair (and often the skin to which the hair is attached), typically in spots that never possessed melanin at all, or ceased for natural reasons, generally genetic, in the first years of life.
Human hair growth
Hair grows everywhere on the external body except for mucous membranes and glabrous skin, such as that found on the palms of the hands, soles of the feet, and lips.
The body has different types of hair, including vellus hair and androgenic hair, each with its own type of cellular construction. The different construction gives the hair unique characteristics, serving specific purposes, mainly, warmth and protection.
The three stages of hair growth are the anagen, catagen, and telogen phases. Each strand of hair on the human body is at its own stage of development. Once the cycle is complete, it restarts and a new strand of hair begins to form. The growth rate of hair varies from individual to individual depending on their age, genetic predisposition and a number of environmental factors. It is commonly stated that hair grows about 1 cm per month on average; however, reality is more complex, since not all hair grows at once. Scalp hair was reported to grow between 0.6 cm and 3.36 cm per month. The growth rate of scalp hair somewhat depends on age (hair tends to grow more slowly with age), sex, and ethnicity. Thicker hair (>60 μm) generally grows faster (11.4 mm per month) than thinner (20-30 μm) hair (7.6 mm per month).
It was previously thought that Caucasian hair grew more quickly than Asian hair and that the growth rate of women's hair was faster than that of men. However, more recent research has shown that the growth rate of hair in men and women does not significantly differ and that the hair of Chinese people grew more quickly than the hair of French Caucasians and West and Central Africans. The quantity of hair varies within a range that depends on hair colour. An average blonde person has 150,000 hairs, a brown-haired person has 110,000, a black-haired person has 100,000, and a redhead has 90,000. Hair growth stops after a human's death; the apparent growth of hair on a dead body is an illusion created by the skin retracting as it dries out and loses water.
The world record for longest hair on a living person stands with Smita Srivastava of Uttar Pradesh, India. At 7 feet and 9 inches long, she broke a Guinness World Record in November 2023, having grown her hair for 32 years.
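The record figure above offers a simple cross-check on the growth rates cited earlier. The sketch below converts 7 feet 9 inches grown over 32 years into an implied average monthly rate; it assumes the full length reflects uninterrupted growth (ignoring trimming and breakage), so it yields a net, lower-bound figure.

```python
# Implied net scalp-hair growth rate for the record length quoted above:
# 7 ft 9 in grown over 32 years, ignoring any trimming or breakage.

CM_PER_INCH = 2.54
length_cm = (7 * 12 + 9) * CM_PER_INCH  # 7 ft 9 in -> about 236 cm
months = 32 * 12                        # 384 months

rate = length_cm / months
print(f"Length: {length_cm:.0f} cm over {months} months")
print(f"Implied average growth: ~{rate:.2f} cm per month")
# ~0.6 cm/month, at the low end of the 0.6-3.36 cm/month range reported for
# scalp hair, which is consistent with a net (post-breakage) figure.
```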
Texture
Hair exists in a variety of textures. Three main aspects of hair texture are the curl pattern, volume, and consistency. All mammalian hair is composed of keratin, so the make-up of hair follicles is not the source of varying hair patterns. There are a range of theories pertaining to the curl patterns of hair. Scientists have come to believe that the shape of the hair shaft has an effect on the curliness of the individual's hair. A very round shaft allows for fewer disulfide bonds to be present in the hair strand. This means the bonds present are directly in line with one another, resulting in straight hair.
The flatter the hair shaft becomes, the curlier hair gets, because the shape allows more cysteines to become compacted together, resulting in a bent shape that, with every additional disulfide bond, becomes curlier in form. Just as the hair follicle's shape determines curl pattern, its size determines thickness: as the circumference of the follicle increases, so does the thickness of the hair it produces. An individual's hair volume, as a result, can be thin, normal, or thick. The consistency of hair can almost always be grouped into three categories: fine, medium, and coarse. This trait is determined by the hair follicle volume and the condition of the strand. Fine hair has the smallest circumference, coarse hair has the largest circumference, and medium hair is anywhere between the other two. Coarse hair has a more open cuticle than thin or medium hair, causing it to be the most porous.
Classification systems
There are various systems that people use to classify their curl patterns. Knowing one's hair type is a good starting point for learning how to care for it. There is no single method for determining hair type, and it is possible, and quite normal, to have more than one type, for instance a mixture of type 3a and 3b curls.
Andre Walker system
The Andre Walker Hair Typing System is the most widely used system to classify hair. The system was created by Oprah Winfrey's hairstylist, Andre Walker. According to this system there are four types of hair: straight, wavy, curly, kinky.
Type 1 is straight hair, which reflects the most sheen and also the most resilient hair of all of the hair types. It is hard to damage and immensely difficult to curl this hair texture. Because the sebum easily spreads from the scalp to the ends without curls or kinks to interrupt its path, it is the most oily hair texture of all.
Type 2 is wavy hair, whose texture and sheen ranges somewhere between straight and curly hair. Wavy hair is also more likely to become frizzy than straight hair. While type A waves can easily alternate between straight and curly styles, type B and C wavy hair is resistant to styling.
Type 3 is curly hair known to have an S-shape. The curl pattern may resemble a lowercase "s", uppercase "S", or sometimes an uppercase "Z" or lowercase "z". Lack of proper care causes less defined curls.
Type 4 is kinky hair, which features a tightly coiled curl pattern (or no discernible curl pattern at all) that is often fragile with a very high density. This type of hair shrinks when wet and because it has fewer cuticle layers than other hair types it is more susceptible to damage.
FIA system
This is a method which classifies the hair by curl pattern, hair-strand thickness and overall hair volume.
Composition
Hair is mainly composed of keratin proteins and keratin-associated proteins (KRTAPs). The human genome encodes 54 different keratin proteins which are present in various amounts in hair. Similarly, humans encode more than 100 different KRTAPs which crosslink keratins in hair. The content of KRTAPs ranges from less than 3% in human hair to 30–40% in echidna quill.
Functions
Many mammals have fur and other hairs that serve different functions. Hair provides thermal regulation and camouflage for many animals; for others it provides signals to other animals such as warnings, mating, or other communicative displays; and for some animals hair provides defensive functions and, rarely, even offensive protection. Hair also has a sensory function, extending the sense of touch beyond the surface of the skin. Guard hairs give warnings that may trigger a recoiling reaction.
Warmth
While humans have developed clothing and other means of keeping warm, the hair found on the head serves primarily as a source of heat insulation and cooling (when sweat evaporates from soaked hair) as well as protection from ultra-violet radiation exposure. The function of hair in other locations is debated. Hats and coats are still required while doing outdoor activities in cold weather to prevent frostbite and hypothermia, but the hair on the human body does help to keep the internal temperature regulated. When the body is too cold, the arrector pili muscles found attached to hair follicles stand up, causing the hair in these follicles to do the same. These hairs then form a heat-trapping layer above the epidermis. This process is formally called piloerection, derived from the Latin words 'pilus' ('hair') and 'erectio' ('rising up'), but is more commonly known as 'having goose bumps' in English. This is more effective in other mammals whose fur fluffs up to create air pockets between hairs that insulate the body from the cold. The opposite actions occur when the body is too warm; the arrector muscles make the hair lie flat on the skin which allows heat to leave.
Protection
In some mammals, such as hedgehogs and porcupines, the hairs have been modified into hard spines or quills. These are covered with thick plates of keratin and serve as protection against predators. Thick hair such as that of the lion's mane and grizzly bear's fur do offer some protection from physical damages such as bites and scratches.
Touch sense
Displacement and vibration of hair shafts are detected by hair follicle nerve receptors and nerve receptors within the skin. Hairs can sense movements of air as well as touch by physical objects and they provide sensory awareness of the presence of ectoparasites. Some hairs, such as eyelashes, are especially sensitive to the presence of potentially harmful matter.
Eyebrows and eyelashes
The eyebrows provide moderate protection to the eyes from dirt, sweat and rain. They also play a key role in non-verbal communication by displaying emotions such as sadness, anger, surprise and excitement. In many other mammals, they contain much longer, whisker-like hairs that act as tactile sensors.
The eyelash grows at the edges of the eyelid and protects the eye from dirt. The eyelash is to humans, camels, horses, ostriches etc., what whiskers are to cats; they are used to sense when dirt, dust, or any other potentially harmful object is too close to the eye. The eye reflexively closes as a result of this sensation.
Eyebrows and eyelashes do not grow beyond a certain length (eyelashes are rarely more than 10 mm long). However, trichomegaly can cause the lashes to grow remarkably long and prominent (in some cases the upper lashes grow to 15 mm long).
Evolution
Hair has its origins in the common ancestor of mammals, the synapsids, about 300 million years ago. It is currently unknown at what stage the synapsids acquired mammalian characteristics such as body hair and mammary glands, as the fossils only rarely provide direct evidence for soft tissues. A skin impression of the belly and lower tail of a pelycosaur, possibly Haptodus, shows that the basal synapsid stock bore transverse rows of rectangular scutes, similar to those of a modern crocodile, so the age of acquirement of hair logically could not have been earlier than ≈299 ma, based on the current understanding of the animal's phylogeny. An exceptionally well-preserved skull of Estemmenosuchus, a therapsid from the Upper Permian, shows smooth, hairless skin with what appear to be glandular depressions, though as a semi-aquatic species it may not be particularly useful for determining the integument of terrestrial species. The oldest undisputed known fossils showing unambiguous imprints of hair are the Callovian (late middle Jurassic) Castorocauda and several contemporary haramiyidans, both near-mammal cynodonts, giving the age as no later than ≈220 ma based on the modern phylogenetic understanding of these clades. More recently, studies on terminal Permian Russian coprolites may suggest that non-mammalian synapsids from that era had fur. If this is the case, these are the oldest hair remnants known, showcasing that fur occurred as far back as the latest Paleozoic.
Some modern mammals have a special gland in front of each orbit used to preen the fur, called the harderian gland. Imprints of this structure are found in the skull of the small early mammals like Morganucodon, but not in their cynodont ancestors like Thrinaxodon.
The hairs of the fur in modern animals are all connected to nerves, and so the fur also serves as a transmitter for sensory input. Fur could have evolved from sensory hair (whiskers). The signals from this sensory apparatus are interpreted in the neocortex, a section of the brain that expanded markedly in animals like Morganucodon and Hadrocodium. The more advanced therapsids could have had a combination of naked skin, whiskers, and scutes. A full pelage likely did not evolve until the therapsid-mammal transition. The more advanced, smaller therapsids could have had a combination of hair and scutes, a combination still found in some modern mammals, such as rodents and the opossum.
The high interspecific variability of the size, color, and microstructure of hair often enables the identification of species based on single hair filaments.
In varying degrees most mammals have some skin areas without natural hair. On the human body, glabrous skin is found on the ventral portion of the fingers, palms, soles of feet and lips, which are all parts of the body most closely associated with interacting with the world around us, as are the labia minora and glans penis. There are four main types of mechanoreceptors in the glabrous skin of humans: Pacinian corpuscles, Meissner's corpuscles, Merkel's discs, and Ruffini corpuscles.
The naked mole-rat (Heterocephalus glaber) has evolved skin lacking the general pelage covering, yet has retained long, very sparsely scattered tactile hairs over its body. Glabrousness is a trait that may be associated with neoteny.
Human hairlessness
Evolutionary variation
Primates are relatively hairless compared to other mammals, and Hominini, such as chimpanzees, have less dense hair than would be expected for a primate of their body size. Evolutionary biologists suggest that the genus Homo arose in East Africa approximately 2 million years ago. Part of this evolution was the development of endurance running and of venturing out during the hot times of the day, both of which required efficient thermoregulation through perspiration. The loss of heat through evaporation of sweat is aided by air currents next to the skin surface, which are facilitated by the loss of body hair.
Another factor in human evolution that also occurred in the prehistoric past was a preferential selection for neoteny, particularly in females. The idea that adult humans exhibit certain neotenous (juvenile) features, not evinced in the other great apes, is about a century old. Louis Bolk made a long list of such traits, and Stephen Jay Gould published a short list in Ontogeny and Phylogeny. In addition, paedomorphic characteristics in women are often acknowledged as desirable by men in developed countries. For instance, vellus hair is a juvenile characteristic. However, while men develop longer, coarser, thicker, and darker terminal hair through sexual differentiation, women do not, leaving their vellus hair visible.
Texture
Curly hair
Jablonski asserts head hair was evolutionarily advantageous for pre-humans to retain because it protected the scalp as they walked upright in the intense African (equatorial) UV light. While some might argue that, by this logic, humans should also express hairy shoulders because these body parts would putatively be exposed to similar conditions, the protection of the head, the seat of the brain that enabled humanity to become one of the most successful species on the planet (and which is also very vulnerable at birth), was arguably a more urgent issue (hair in the underarms and groin was also retained as a sign of sexual maturity). Sometime during the gradual process by which Homo erectus began a transition from furry skin to the naked skin expressed by Homo sapiens, hair texture putatively changed gradually from straight hair (the condition of most mammals, including humanity's closest cousins, chimpanzees) to Afro-textured or 'kinky' (i.e. tightly coiled) hair. This argument assumes that curly hair better impedes the passage of UV light into the body than straight hair does (thus curly or coiled hair would be particularly advantageous for light-skinned hominids living at the equator).
This argument is supported by Iyengar's findings (1998) that UV light can enter straight human hair roots (and thus the body through the skin) via the hair shaft. Specifically, the results of that study suggest that this phenomenon resembles the passage of light through fiber-optic tubes (which do not function as effectively when kinked or sharply curved or coiled). In this sense, when hominids (i.e. Homo erectus) were gradually losing their straight body hair and thereby exposing the initially pale skin underneath their fur to the sun, straight hair would have been an adaptive liability. By inverse logic, later, as humans traveled farther from Africa and/or the equator, straight hair may have (initially) evolved to aid the entry of UV light into the body during the transition from dark, UV-protected skin to paler skin.
Jablonski's assertions suggest that the adjective "woolly" in reference to Afro-hair is a misnomer in connoting the high heat insulation derivable from the true wool of sheep. Instead, the relatively sparse density of Afro-hair, combined with its springy coils actually results in an airy, almost sponge-like structure that in turn, Jablonski argues, more likely facilitates an increase in the circulation of cool air onto the scalp. Further, wet Afro-hair does not stick to the neck and scalp unless totally drenched and instead tends to retain its basic springy puffiness because it less easily responds to moisture and sweat than straight hair does. In this sense, the trait may enhance comfort levels in intense equatorial climates more than straight hair (which, on the other hand, tends to naturally fall over the ears and neck to a degree that provides slightly enhanced comfort levels in cold climates relative to tightly coiled hair).
Further, it is notable that the most pervasive expression of this hair texture can be found in sub-Saharan Africa, a region of the world that, abundant genetic and paleo-anthropological evidence suggests, was the relatively recent (≈200,000-year-old) point of origin for modern humanity. In fact, although genetic findings (Tishkoff, 2009) suggest that sub-Saharan Africans are the most genetically diverse continental group on Earth, Afro-textured hair approaches ubiquity in this region. This points to a strong, long-term selective pressure that, in stark contrast to most other regions of the genomes of sub-Saharan groups, left little room for genetic variation at the determining loci. Such a pattern, again, does not seem to support human sexual aesthetics as being the sole or primary cause of this distribution.
The EDAR locus
A group of studies have recently shown that genetic patterns at the EDAR locus, a region of the modern human genome that contributes to hair texture variation among most individuals of East Asian descent, support the hypothesis that (East Asian) straight hair likely developed in this branch of the modern human lineage subsequent to the original expression of tightly coiled natural afro-hair. Specifically, the relevant findings indicate that the EDAR mutation coding for the predominant East Asian 'coarse' or thick, straight hair texture arose within the past ≈65,000 years, a time frame spanning from the earliest 'Out of Africa' migrations to the present.
Disease
Ringworm is a fungal disease that targets hairy skin.
Premature greying of hair is another condition; it is defined as greying before the age of 20 in Europeans, before 25 in Asians, and before 30 in Africans.
Hair care
Hair care involves the hygiene and cosmetology of hair including hair on the scalp, facial hair (beard and moustache), pubic hair and other body hair. Hair care routines differ according to an individual's culture and the physical characteristics of one's hair. Hair may be colored, trimmed, shaved, plucked, or otherwise removed with treatments such as waxing, sugaring, and threading.
Removal practices
Depilation is the removal of hair from the surface of the skin. This can be achieved through methods such as shaving. Epilation is the removal of the entire hair strand, including the part of the hair that has not yet left the follicle. A popular way to epilate hair is through waxing.
Shaving
Shaving is accomplished with bladed instruments, such as razors. The blade is brought close to the skin and stroked over the hair in the desired area to cut the terminal hairs and leave the skin feeling smooth. Depending upon the rate of growth, one can begin to feel the hair growing back within hours of shaving. This is especially evident in men who develop a five o'clock shadow after having shaved their faces. This new growth is called stubble. Stubble typically appears to grow back thicker because the shaved hairs are blunted instead of tapered off at the end, although the hair never actually grows back thicker.
Waxing
Waxing involves using a sticky wax and strip of paper or cloth to pull hair from the root. Waxing is the ideal hair removal technique to keep an area hair-free for long periods of time. It can take three to five weeks for waxed hair to begin to resurface again. Hair in areas that have been waxed consistently is known to grow back finer and thinner, especially compared to hair that has been shaved with a razor.
Laser removal
Laser hair removal is a cosmetic method in which a small laser beam delivers pulses of selective heat to the dark target matter in the area that causes hair growth, without harming the skin tissue. The process is repeated several times over the course of many months to a couple of years, with hair regrowing less frequently until it finally stops; it is used as a more permanent alternative to waxing or shaving. Laser removal is offered in many clinics, and many at-home products are also available.
Cutting and trimming
Because the hair on one's head is normally longer than other types of body hair, it is cut with scissors or clippers. People with longer hair will most often use scissors to cut their hair, whereas shorter hair is maintained using a trimmer. Depending on the desired length and overall health of the hair, periods without cutting or trimming the hair can vary.
Cut hair may be used in wigs. Global imports of hair in 2010 were worth US$1.24 billion.
Social role
Hair has great social significance for human beings. It can grow on most external areas of the human body, except on the palms of the hands and the soles of the feet (among other areas). Hair is most noticeable on most people in a small number of areas, which are also the ones that are most commonly trimmed, plucked, or shaved. These include the face, ears, head, eyebrows, legs, and armpits, as well as the pubic region. The highly visible differences between male and female body and facial hair are a notable secondary sex characteristic.
The world's longest documented hair belongs to Xie Qiuping (in China), at 5.627 m (18 ft 5.54 in) when measured on 8 May 2004. She has been growing her hair since 1973, from the age of 13.
Indication of status
Healthy hair indicates health and youth (important in evolutionary biology). Hair color and texture can be a sign of ethnic ancestry. Facial hair is a sign of puberty in men. White or gray hair is a sign of age or genetics, which may be concealed with hair dye (not easily for some), although many prefer to embrace it (especially if it is poliosis that has characterized the person since childhood). Pattern baldness in men is usually seen as a sign of aging that may be concealed with a toupee, hats, or religious and cultural adornments; however, the condition can be triggered by various hormonal factors at any age following puberty and is not uncommon in younger men. Although pattern baldness can be slowed by drugs such as finasteride and minoxidil or treated with hair transplants, many men see this as unnecessary effort for the sake of vanity and instead shave their heads. In early modern China, the queue was a male hairstyle in which the hair at the front and top of the head was shaved every 10 days in a style mimicking pattern baldness, while the remaining hair at the back was braided into a long pigtail.
A hairstyle may be an indicator of group membership. During the English Civil War, followers of Oliver Cromwell cropped their hair close to their heads in an act of defiance against the curls and ringlets of the king's men, which led to them being nicknamed Roundheads. Recent isotopic analysis of hair is helping to shed further light on sociocultural interaction, giving information on food procurement and consumption in the 19th century. Having bobbed hair was popular among the flappers in the 1920s as a sign of rebellion against traditional roles for women. Female art students known as the Cropheads also adopted the style, notably at the Slade School in London. Regional variations in hirsutism have caused practices regarding hair on the arms and legs to differ. Some religious groups may follow certain rules regarding hair as part of religious observance. The rules often differ for men and women.
Many subcultures have hairstyles which may indicate an unofficial membership. Many hippies, metalheads, and Indian sadhus have long hair, as do many older hipsters. Many punks wear a hairstyle known as a mohawk or other spiked and dyed hairstyles, while skinheads have short-cropped or completely shaved heads. Long stylized bangs were very common for emos, scene kids, and younger hipsters in the 2000s and early 2010s.
Heads were shaved in concentration camps, and head-shaving has been used as punishment, especially for women with long hair. The shaven head is common in military haircuts, while Western monks are known for the tonsure. By contrast, among some Indian holy men, the hair is worn extremely long.
In the time of Confucius (5th century BCE), the Chinese grew out their hair and often tied it, as a symbol of filial piety. Regular hairdressing in some cultures is considered a sign of wealth or status. The dreadlocks of the Rastafari movement were despised early in the movement's history. In some cultures, having one's hair cut can symbolize a liberation from one's past, usually after a trying time in one's life. Cutting the hair also may be a sign of mourning.
Tightly coiled hair in its natural state may be worn in an Afro. This hairstyle was once worn among African Americans as a symbol of racial pride. Given that the coiled texture is the natural state of some African Americans' hair, or perceived as being more "African", this simple style is now often seen as a sign of self-acceptance and an affirmation that the beauty norms of the (eurocentric) dominant culture are not absolute. African Americans as a whole have a variety of hair textures, as they are not an ethnically homogeneous group but one encompassing a variety of ancestral admixtures.
The film Easy Rider (1969) includes the assumption that the two main characters could have their long hair forcibly shaved with a rusty razor when jailed, symbolizing the intolerance of some conservative groups toward members of the counterculture. At the conclusion of England's 1971 Oz trials, the defendants had their heads shaved by the police, causing public outcry. During the appeal trial, they appeared in the dock wearing wigs. A case in which a 14-year-old student was expelled from school in Brazil in the mid-2000s, allegedly because of his fauxhawk haircut, sparked national debate and legal action resulting in compensation.
Religious practices
Women's hair may be hidden using headscarves, a common part of the hijab in Islam and a symbol of modesty required for certain religious rituals in Eastern Orthodoxy. The Russian Orthodox Church requires all married women to wear headscarves inside the church; this tradition is often extended to all women, regardless of marital status. Orthodox Judaism also commands the use of scarves and other head coverings for married women for modesty reasons. Certain Hindu sects also wear head scarves for religious reasons. Sikhs have an obligation not to cut hair (a Sikh who cuts their hair becomes 'apostate', meaning fallen from the religion), and men keep it tied in a bun on the head, which is then covered appropriately using a turban. Multiple religions, both ancient and contemporary, require or advise adherents to allow their hair to become dreadlocks, though people also wear them for fashion. Islam, Orthodox Judaism, Orthodox Christianity, Roman Catholicism, and other religious groups have at various times recommended or required men to cover the head and sections of the hair, and some have dictates relating to the cutting of men's facial and head hair. Some Christian sects throughout history and up to modern times have also religiously proscribed the cutting of women's hair. For some Sunni madhabs, the donning of a kufi or topi is a form of sunnah. Brahmin males are prescribed to shave their heads but leave a tuft of hair unshaved, worn in the form of a topknot.
In Arabic poetry
Since ancient times, women's long, thick, wavy hair has featured prominently in Arabic poetry. Pre-Islamic poets used only limited imagery to describe women's hair. For example, al-A'sha wrote a verse comparing a lover's hair to "a garden whose grapes dangle down upon me", but Bashshar ibn Burd considered this unusual. One comparison used by early poets, such as Imru al-Qays, was to bunches of dates. In Abbasid times, however, the imagery for hair expanded significantly - particularly for the then-fashionable "love-locks" (sudgh) framing the temples, which came into style at the court of the caliph al-Amin. Hair curls were compared to hooks and chains, letters (such as fa, waw, lam, and nun), scorpions, annelids, and polo sticks. An example was the poet Ibn al-Mu'tazz, who compared a lock of hair and a birthmark to a polo stick driving a ball.
See also
List of hairstyles
Body hair
Chaetophobia – the fear of hair
Hair analysis (alternative medicine)
Hypertrichosis – the state of having an excess of hair on the head or body
Hypotrichosis – the state of having a less than normal amount of hair on the head or body
Lanugo
Seta – hair-like structures in insects
Bristle sensilla – tactile hairs in insects
Trichotillomania – hair pulling
Mane (horse)
References
Citations
Sources
External links
How to measure the diameter of your own hair using a laser pointer
Instant insight outlining the chemistry of hair from the Royal Society of Chemistry | Hair | [
"Biology"
] | 7,616 | [
"Organ systems",
"Hair"
] |
14,322 | https://en.wikipedia.org/wiki/Holy%20Grail | The Holy Grail is a treasure that serves as an important motif in Arthurian literature. Various traditions describe the Holy Grail as a cup, dish, or stone with miraculous healing powers, sometimes providing eternal youth or sustenance in infinite abundance, often guarded in the custody of the Fisher King and located in the hidden Grail castle. By analogy, any elusive object or goal of great significance may be perceived as a "holy grail" by those seeking such.
A mysterious "grail" (Old French: graal or greal), wondrous but not unequivocally holy, first appears in Perceval, the Story of the Grail, an unfinished chivalric romance written by Chrétien de Troyes around 1190. Chrétien's story inspired many continuations, translators and interpreters in the later-12th and early-13th centuries, including Wolfram von Eschenbach, who portrayed the Grail as a stone in Parzival. The Christian, Celtic or possibly other origins of the Arthurian grail trope are uncertain and have been debated among literary scholars and historians.
Writing soon after Chrétien, Robert de Boron in Joseph d'Arimathie portrayed the Grail as Jesus's vessel from the Last Supper, which Joseph of Arimathea used to catch Christ's blood at the crucifixion. Thereafter, the Holy Grail became interwoven with the legend of the Holy Chalice, the Last Supper cup, an idea continued in works such as the Lancelot-Grail cycle, and subsequently the 15th-century Le Morte d'Arthur. In this form, it is now a popular theme in modern culture, and has become the subject of folklore studies, pseudohistorical writings, works of fiction, and conspiracy theories.
Etymology
The word graal, as it is spelled in its earliest appearances, comes from Old French graal or greal, cognate with Old Occitan and Old Catalan words meaning "a cup or bowl of earth, wood, or metal" (or other various types of vessels in different Occitan dialects). The most commonly accepted etymology derives it from Latin gradalis or gradale via an earlier form, cratalis, a derivative of crater or cratus, which was, in turn, borrowed from Ancient Greek krater (a large wine-mixing vessel). Alternative suggestions include a derivative of cratis, a name for a type of woven basket that came to refer to a dish, or a derivative of Latin gradus, meaning "by degree", "by stages", applied to a dish brought to the table in different stages or services during a meal.
In the 15th century, English writer John Hardyng invented a fanciful new etymology for Old French san greal (or sangreal), meaning "Holy Grail", by parsing it as sang real, meaning "royal blood". This etymology was used by some later medieval British writers such as Thomas Malory, and became prominent in the conspiracy theory developed in the book The Holy Blood and the Holy Grail, in which sang real refers to the Jesus bloodline.
Medieval literature
The literature surrounding the Grail can be divided into two groups. The first concerns King Arthur's knights visiting the Grail castle or questing after the object. The second concerns the Grail's earlier history in the time of Joseph of Arimathea.
The nine works from the first group are:
Perceval, the Story of the Grail, a chivalric romance poem by Chrétien de Troyes.
The Four Continuations of Chrétien's unfinished poem, by authors of differing vision, designed to bring the story to a close.
The Didot Perceval, purportedly a prosification of Robert de Boron's sequel to his romance poems Joseph and Merlin.
Parzival by Wolfram von Eschenbach, which adapted at least the holiness of Robert's Grail into the framework of Chrétien's story.
Welsh romance Peredur son of Efrawg, a loose translation of Chrétien's poem and the Continuations, with some influence from native Welsh literature.
Perlesvaus, called the "least canonical" Grail romance because of its very different character.
German poem Diu Crône (The Crown), in which Gawain, rather than Perceval, achieves the Grail.
The Prose Lancelot section of the vast Lancelot-Grail cycle introduced the new Grail hero, Galahad. The Queste del Saint Graal, a follow-up part of the cycle, concerns Galahad's eventual achievement of the Grail.
Of the second group there are:
Robert de Boron's Joseph d'Arimathie.
The Estoire del Saint Graal, the first part of the Lancelot-Grail cycle (but written after Lancelot and the Queste), based on Robert's tale but expanding it greatly with many new details.
Verses by Rigaut de Barbezieux, a late 12th- or early 13th-century Provençal troubadour, in which mention is made of Perceval, the lance, and the Grail being served.
The Grail was considered a bowl or dish when first described by Chrétien de Troyes. There, it is a processional salver, a tray, used to serve at a feast. Hélinand of Froidmont described a grail as a "wide and deep saucer" (scutella lata et aliquantulum profunda); other authors had their own ideas. Robert de Boron portrayed it as the vessel of the Last Supper. Peredur son of Efrawg had no Grail as such, presenting the hero instead with a platter containing his kinsman's bloody, severed head.
Chrétien de Troyes
The Grail is first featured in Perceval, le Conte du Graal (The Story of the Grail) by Chrétien de Troyes, who claims he was working from a source book given to him by his patron, Count Philip of Flanders. In this incomplete poem, dated sometime between 1180 and 1191, the object has not yet acquired the implications of holiness it would have in later works. While dining in the magical abode of the Fisher King, Perceval witnesses a wondrous procession in which youths carry magnificent objects from one chamber to another, passing before him at each course of the meal. First comes a young man carrying a bleeding lance, then two boys carrying candelabras. Finally, a beautiful young girl emerges bearing an elaborately decorated graal, or "grail". Chrétien's Perceval does not achieve the quest, but different authors completed his unfinished story in their own poems known as Perceval Continuations, including two alternative endings to the initial successive continuations.
Chrétien refers to this object not as "The Grail" but as "a grail" (un graal), showing the word was used, in its earliest literary context, as a common noun. For Chrétien, a grail was a wide, somewhat deep, dish or bowl, interesting because it contained not a pike, salmon, or lamprey, as the audience may have expected for such a container, but a single Communion wafer which provided sustenance for the Fisher King's crippled father. Perceval, who had been warned against talking too much, remains silent through all of this and wakes up the next morning alone. He later learns that if he had asked the appropriate questions about what he saw, he would have healed his maimed host, much to his honour. The story of the Wounded King's mystical fasting is not unique; several saints were said to have lived without food besides communion, for instance Saint Catherine of Genoa. This may imply that Chrétien intended the Communion wafer to be the significant part of the ritual, and the Grail to be a mere prop. The First Continuation seemingly features two grails: a floating dish and a carved head of Jesus.
Robert de Boron
Though Chrétien's account is the earliest and most influential of all Grail texts, it was in the work of Robert that the Grail truly became the "Holy Grail" and assumed the form most familiar to modern readers in its Christian context. In his verse romance Joseph d'Arimathie, composed between 1191 and 1202, Robert tells the story of Joseph of Arimathea acquiring the chalice of the Last Supper to collect Christ's blood upon his removal from the cross. Joseph is thrown in prison, where Christ visits him and explains the mysteries of the blessed cup. Upon his release, Joseph gathers his in-laws and other followers and travels to the west. He founds a dynasty of Grail keepers that eventually includes Perceval.
Perceval himself is the subject of the Didot-Perceval (Perceval en prose), a prose work presenting a revised and completed version of Chrétien's story, but replacing him with Galahad (Galaad) as the principal Grail hero while simultaneously also serving as a continuation to Joseph and Merlin by Robert de Boron. In Perlesvaus, another markedly different anonymous prose continuation of Chrétien's Perceval, the Grail is a holy blood relic creating mystical visions and appearing in the form of a hovering chalice, apparently as inspired by de Boron.
Wolfram von Eschenbach
In Parzival, the author Wolfram von Eschenbach, citing the authority of a certain (probably fictional) Kyot the Provençal, claimed the Grail was a gemstone, the sanctuary of the neutral angels who took neither side during Lucifer's rebellion. It is called Lapis exillis, which in alchemy is the name of the philosopher's stone. In Wolfram's telling, the Grail was kept safe at the castle of Munsalvaesche (mons salvationis), entrusted to Titurel, the first Grail King. The stone grants eternal life to its guardian; in the end, Parzival replaces the maimed and long suffering Anfortas as the new Grail King, having finally released him by correctly answering his question.
Lancelot-Grail
The authors of the Vulgate Cycle (Lancelot-Grail) used the Grail as a symbol of divine grace; the virgin Galahad, illegitimate son of Lancelot and Elaine, the world's greatest knight and the Grail Bearer at the castle of Corbenic, is destined to achieve the Grail, his spiritual purity making him a greater warrior than even his illustrious father. The Queste del Saint Graal (The Quest of The Holy Grail), continuing directly from the expanded prose versions of Robert de Boron's stories of Joseph and Merlin, tells also of the adventures of various Knights of the Round Table in their eponymous quest. Some of them, including Perceval and Bors the Younger, eventually join Galahad as his companions near the successful end of the Grail Quest and are witnesses of his ascension to Heaven.
Alternative versions of the Grail Quest based on that from the Vulgate Cycle were included in the Prose Tristan and the Post-Vulgate Cycle. Galahad and the interpretation of the Grail involving him were picked up in the 15th century by Thomas Malory in Le Morte d'Arthur, and remain popular today. Based closely on the Vulgate Cycle in an abridged form, Malory's telling accordingly elevates Galahad above Perceval (Percivale), the latter reduced to a secondary role in the Quest. Uniquely, Malory described the Grail as invisible, apparently confused by his French source text's mention of an invisible Grail bearer.
Scholarly hypotheses
Scholars have long speculated on the origins of the Holy Grail before Chrétien, suggesting that it may contain elements of the trope of magical cauldrons from Celtic mythology and later Welsh mythology, combined with Christian legend surrounding the Eucharist, the latter found in Eastern Christian sources, conceivably in that of the Byzantine Mass, or even Persian sources. The view that the "origin" of the Grail legend should be seen as deriving from Celtic mythology was championed by Roger Sherman Loomis (The Grail: From Celtic Myth to Christian Symbol), Alfred Nutt (Studies on the Legend of the Holy Grail, available at Wikisource), and Jessie Weston (From Ritual to Romance and The Quest of the Holy Grail). Loomis traced a number of parallels between medieval Welsh literature and Irish material, and the Grail romances, including similarities between the Mabinogion's Bran the Blessed and the Arthurian Fisher King, and between Bran's life-restoring cauldron and the Grail.
The opposing view dismissed the "Celtic" connections as spurious, and interpreted the legend as essentially Christian in origin. Joseph Goering identified sources for Grail imagery in 12th-century wall paintings from churches in the Catalan Pyrenees (now mostly moved to the Museu Nacional d'Art de Catalunya), which present unique iconic images of the Virgin Mary holding a bowl that radiates tongues of fire, images that predate the first literary account by Chrétien de Troyes. Goering argues that they were the original inspiration for the Grail legend.
Psychologists Emma Jung and Marie-Louise von Franz used analytical psychology to interpret the Grail as a series of symbols in their book The Grail Legend. They directly expanded on interpretations by Carl Jung, which were later invoked by Joseph Campbell. Philosopher Henry Corbin, a member of the Eranos circle founded by Jung, also commented on the esoteric significance of the grail, relating it to the Iranian Islamic symbols that he studied.
Richard Barber (2004) argued that the Grail legend is connected to the introduction of "more ceremony and mysticism" surrounding the sacrament of the Eucharist in the high medieval period, proposing that the first Grail stories may have been connected to the "renewal in this traditional sacrament". Daniel Scavone (1999, 2003) has argued that the "Grail" originally referred to the Image of Edessa. Goulven Peron (2016) suggested that the Holy Grail may reflect the horn of the river-god Achelous, as described by Ovid in the Metamorphoses.
Later traditions
Relics
In the wake of the Arthurian romances, several artifacts came to be identified as the Holy Grail in medieval relic veneration. These artifacts are said to have been the vessel used at the Last Supper, but other details vary. Despite the prominence of the Grail literature, traditions about a Last Supper relic remained rare in contrast to other items associated with Jesus' last days, such as the True Cross and Holy Lance.
One tradition predates the Grail romances: in the 7th century, the pilgrim Arculf reported that the Last Supper chalice was displayed near Jerusalem. In the wake of Robert de Boron's Grail works, several other items came to be claimed as the true Last Supper vessel. In the late 12th century, one was said to be in Byzantium; Albrecht von Scharfenberg's Grail romance Der Jüngere Titurel associated it explicitly with the Arthurian Grail, but claimed it was only a copy. This item was said to have been looted in the Fourth Crusade and brought to Troyes in France, but it was lost during the French Revolution.
Two relics associated with the Grail survive today. The Sacro Catino (Sacred Basin, also known as the Genoa Chalice) is a green glass dish held at the Genoa Cathedral said to have been used at the Last Supper. Its provenance is unknown, and there are two divergent accounts of how it was brought to Genoa by Crusaders in the 12th century. It was not associated with the Last Supper until later, in the wake of the Grail romances; the first known association is in Jacobus de Voragine's chronicle of Genoa in the late 13th century, which draws on the Grail literary tradition. The Catino was moved and broken during Napoleon's conquest in the early 19th century, revealing that it is glass rather than emerald.
The Holy Chalice of Valencia is an agate dish with a mounting for use as a chalice. The bowl may date to Greco-Roman times, but its dating is unclear, and its provenance is unknown before 1399, when it was gifted to Martin I of Aragon. By the 14th century, an elaborate tradition had developed that this object was the Last Supper chalice. This tradition mirrors aspects of the Grail material, with several major differences, suggesting a separate tradition entirely. It is not associated with Joseph of Arimathea or Jesus' blood; it is said to have been taken to Rome by Saint Peter and later entrusted to Saint Lawrence. Early references do not call the object the "Grail"; the first evidence connecting it to the Grail tradition is from the 15th century. The monarchy sold the cup in the 15th century to Valencia Cathedral, where it remains a significant local icon.
Several objects were identified with the Holy Grail in the 17th century. In the 20th century, a series of new items became associated with it. These include the Nanteos Cup, a medieval wooden bowl found near Rhydyfelin, Wales; a glass dish found near Glastonbury, England; the Antioch chalice, a 6th-century silver-gilt object that became attached to the Grail legend in the 1930s; and the Chalice of Doña Urraca, a cup made between 200 BC and 100 AD, kept in León’s Basilica of Saint Isidore.
Locations associated with the Holy Grail
In the modern era, a number of places have become associated with the Holy Grail. One of the most prominent is Glastonbury in Somerset, England. Glastonbury was associated with King Arthur and his resting place of Avalon by the 12th century. In the 13th century, a legend arose that Joseph of Arimathea was the founder of Glastonbury Abbey. Early accounts of Joseph at Glastonbury focus on his role as the evangelist of Britain rather than as the custodian of the Holy Grail, but from the 15th century, the Grail became a more prominent part of the legends surrounding Glastonbury. Interest in Glastonbury resurged in the late 19th century, inspired by renewed interest in the Arthurian legend and contemporary spiritual movements centered on ancient sacred sites. In the late 19th century, John Goodchild hid a glass bowl near Glastonbury; a group of his friends, including Wellesley Tudor Pole, retrieved the cup in 1906 and promoted it as the original Holy Grail. Glastonbury and its Holy Grail legend have since become a point of focus for various New Age and Neopagan groups.
Some, not least the Benedictine monks, have identified the castle from Parzival with their real sanctuary of Montserrat in Catalonia. In the early 20th century, esoteric writers identified Montségur, a stronghold of the heretical Cathar sect in the 13th century, as the Grail castle. Similarly, the 14th-century Rosslyn Chapel in Midlothian, Scotland, became attached to the Grail legend in the mid-20th century when a succession of conspiracy books identified it as a secret hiding place of the Grail.
Modern interpretations
Pseudohistory and conspiracy theories
Since the 19th century, the Holy Grail has been linked to various conspiracy theories. In 1818, Austrian pseudohistorical writer Joseph von Hammer-Purgstall connected the Grail to contemporary myths surrounding the Knights Templar that cast the order as a secret society dedicated to mystical knowledge and relics. In Hammer-Purgstall's work, the Grail is not a physical relic, but a symbol of the secret knowledge that the Templars sought. There is no historical evidence linking the Templars to a search for the Grail, but subsequent writers have elaborated on the Templar theories.
Starting in the early 20th century, writers, particularly in France, further connected the Templars and Grail to the Cathars. In 1906, French esoteric writer Joséphin Péladan identified the Cathar castle of Montségur with Munsalväsche or Montsalvat, the Grail castle in Wolfram's Parzival. This identification has inspired a wider legend asserting that the Cathars possessed the Holy Grail. According to these stories, the Cathars guarded the Grail at Montségur, and smuggled it out when the castle fell in 1244.
Beginning in 1933, German writer Otto Rahn published a series of books tying the Grail, Templars, and Cathars to modern German nationalist mythology. According to Rahn, the Grail was a symbol of a pure Germanic religion repressed by Christianity. Rahn's books inspired interest in the Grail within Nazi occultist circles, and led to the SS chief Heinrich Himmler's abortive sponsorship of Rahn's search for the Grail, as well as many subsequent conspiracy theories and fictional works about the Nazis searching for the Grail.
In the late 20th century, writers Michael Baigent, Richard Leigh, and Henry Lincoln created one of the most widely known conspiracy theories about the Holy Grail. The theory first appeared on the BBC documentary series Chronicle in the 1970s, and was elaborated upon in the bestselling 1982 book Holy Blood, Holy Grail. The theory combines myths about the Templars and Cathars with various other legends, and a prominent hoax about a secret order called the Priory of Sion. According to this theory, the Holy Grail is not a physical object, but a symbol of the bloodline of Jesus. The blood connection is based on the etymological reading of san greal (holy grail) as sang real (royal blood), which dates to the 15th century. The narrative developed is that Jesus was not divine, and had children with Mary Magdalene, who took the family to France where their descendants became the Merovingian dynasty. Supposedly, while the Catholic Church worked to destroy the dynasty, they were protected by the Priory of Sion and their associates, including the Templars, Cathars, and other secret societies. The book, its arguments, and its evidence have been widely dismissed by scholars as pseudohistorical, but it has had a vast influence on conspiracy and alternate history books. It has also inspired fiction, most notably Dan Brown's 2003 novel The Da Vinci Code and its 2006 film adaptation.
Music and painting
The combination of hushed reverence, chromatic harmonies and sexualized imagery in Richard Wagner's final music drama Parsifal, premiered in 1882, developed this theme, associating the Grail – now periodically producing blood – directly with female fertility. The high seriousness of the subject was also epitomized in Dante Gabriel Rossetti's painting in which a woman modeled by Alexa Wilding holds the Grail with one hand, while adopting a gesture of blessing with the other.
A major mural series depicting the Quest for the Holy Grail was done by the artist Edwin Austin Abbey during the first decade of the 20th century for the Boston Public Library. Other artists, including George Frederic Watts and William Dyce, also portrayed grail subjects.
Literature
The story of the Grail and of the quest to find it became increasingly popular in the 19th century, referred to in literature such as Alfred, Lord Tennyson's Arthurian cycle Idylls of the King. A sexualised interpretation of the grail, now identified with female genitalia, appeared in 1870 in Hargrave Jennings' book The Rosicrucians, Their Rites and Mysteries.
T. S. Eliot's poem The Waste Land (1922) loosely follows the legend of the Holy Grail and the Fisher King combined with vignettes of contemporary British society. In his first note to the poem, Eliot attributes the title to Jessie Weston's book on the Grail legend, From Ritual to Romance. The allusion is to the wounding of the Fisher King and the subsequent sterility of his lands. A poem of the same title, though otherwise dissimilar, written by Madison Cawein, was published in 1913 in Poetry.
In John Cowper Powys's A Glastonbury Romance (1932), the "heroine is the Grail," and its central concerns are with the various myths and legends, along with the history associated with Glastonbury. It is also possible to see most of the main characters as undertaking a Grail quest.
The Grail is central in Charles Williams' novel War in Heaven (1930) and his two collections of poems about Taliessin, Taliessin Through Logres and Region of the Summer Stars (1938).
The Silver Chalice (1952) is a non-Arthurian historical Grail novel by Thomas B. Costain.
A quest for the Grail appears in Nelson DeMille's adventure novel The Quest (1975), set during the 1970s.
Marion Zimmer Bradley's Arthurian revisionist fantasy novel The Mists of Avalon (1983) presented the Grail as a symbol of water, part of a set of objects representing the four classical elements.
The main theme of Rosalind Miles' Child of the Holy Grail (2000) in her Guenevere series is the story of the Grail quest by the 14-year-old Galahad.
The Grail motif features heavily in Umberto Eco's 2000 novel Baudolino, set in the 12th century.
It is the subject of Bernard Cornwell's historical fiction series of books The Grail Quest (2000–2012), set during the Hundred Years War. In his earlier series The Warlord Chronicles, an adaptation of the Arthurian legend, Cornwell also reimagines the Grail quest as a quest for a cauldron that is one of the Thirteen Treasures of Britain from Celtic mythology.
Influenced by the 1982 publication of the ostensibly non-fiction The Holy Blood and the Holy Grail, Dan Brown's The Da Vinci Code (2003) has the "grail" taken to refer to Mary Magdalene as the "receptacle" of Jesus' bloodline (playing on the sang real etymology). In Brown's novel, it is hinted that this Grail was long buried beneath Rosslyn Chapel in Scotland, but that in recent decades, its guardians had it moved to a secret chamber embedded in the floor beneath the Inverted Pyramid in the entrance of the Louvre museum.
Michael Moorcock's fantasy novel The War Hound and the World's Pain (1981) depicts a supernatural Grail quest set in the era of the Thirty Years' War.
German history and fantasy novel author Rainer M. Schröder wrote the trilogy Die Bruderschaft vom Heiligen Gral (The Brotherhood of the Holy Grail) about a group of four Knights Templar who save the Grail from the Fall of Acre in 1291 and go through an odyssey to bring it to the Temple in Paris in the first two books, Der Fall von Akkon (2006) and Das Amulett der Wüstenkrieger (2006), while defending the holy relic from the attempts of a Satanic sect called Iscarians to steal it. In the third book, Das Labyrinth der schwarzen Abtei (2007), the four heroes must reunite to smuggle the Holy Grail out of the Temple in Paris after the trials of the Knights Templar in 1307, again pursued by the Iscarians. Schröder indirectly addresses the Cathar theory by letting the four heroes encounter Cathars – among them old friends from their flight from Acre – on their way to Portugal to seek refuge with the King of Portugal and travel further west.
The 15th novel in The Dresden Files series by Jim Butcher, Skin Game (2014), features Harry Dresden being recruited by Denarian and longtime enemy Nicodemus into a heist team seeking to retrieve the Holy Grail from the vault of Hades, the lord of the Underworld. The properties of the item are not explicit, but the relic itself makes an appearance and is in the hands of Nicodemus by the end of the novel's events.
The Holy Grail features prominently in Jack Vance's Lyonesse Trilogy, where it is the subject of an earlier quest, several generations before the birth of King Arthur. However, in contrast to the Arthurian canon, Vance's Grail is a common object lacking any magical or spiritual qualities, and the characters finding it derive little benefit.
Grails: Quests of the Dawn (1994), edited by Richard Gilliam, Martin H. Greenberg, and Edward E. Kramer is a collection of 25 short stories about the grail by various science fiction and fantasy writers.
In Robert Bruton's Empire in Apocalypse (2023), the Holy Grail appears as General Belisarius's Vandal chalice, recovered with other treasures the Vandals had stolen during the sacking of Rome.
Film and other media
In the cinema, the Holy Grail debuted in the 1904 silent film Parsifal, an adaptation of Wagner's opera by Edwin S. Porter. More recent cinematic adaptations include Costain's The Silver Chalice made into a 1954 film by Victor Saville and Brown's The Da Vinci Code turned into a 2006 film by Ron Howard.
The silent drama film The Light in the Dark (1922) involves discovery of the Grail in modern times.
Robert Bresson's fantasy film Lancelot du Lac (1974) includes a more realistic version of the Grail quest from Arthurian romances.
Monty Python and the Holy Grail (1975) is a comedic take on the Arthurian Grail quest, adapted in 2004 as the stage production Spamalot.
John Boorman, in his fantasy film Excalibur (1981), attempted to restore a more traditional heroic representation of an Arthurian tale, in which the Grail is revealed as a mystical means to revitalise Arthur and the barren land to which his depressive sickness is connected.
Steven Spielberg's adventure film Indiana Jones and the Last Crusade (1989) features Indiana Jones and his father in a race for the Grail against the Nazis.
In a pair of fifth-season episodes (September 1989), entitled "Legend of the Holy Rose," MacGyver undertakes a quest for the Grail.
Terry Gilliam's comedy-drama film The Fisher King (1991) features the Grail quest in the modern New York City.
In the season one episode "Grail" (1994) of the television series Babylon 5, a man named Aldous Gajic visits Babylon 5 in his continuing quest to find the Holy Grail. His quest is primarily a plot device, as the episode's action revolves not around the quest but rather around his presence and impact on the life of a station resident.
The video game Gabriel Knight 3: Blood of the Sacred, Blood of the Damned (1999) features an alternate version of the Grail, interwoven with the mythology of the Knights Templar. The Holy Grail is revealed in the story to be the blood of Jesus Christ that contains his power, only accessible to those descended from him, with the vessel of the Grail being defined as his body itself which the Templars uncovered in the Holy Lands.
In Pretty Guardian Sailor Moon, the Holy Grail (Sehai in the anime, or Rainbow Moon Chalice) is the magical object with which Sailor Moon transforms in her Super form.
A science fiction version of the Grail Quest is a central theme in the Stargate SG-1 season 10 episode "The Quest" (2006).
The song "Holy Grail" by Jay-Z featuring Justin Timberlake was released in 2013.
In the video game Persona 5 (2016), the Holy Grail is the Treasure of the game's final Palace, representing the combined desires of all of humanity for a higher power to take control of their lives and make a world that has no sense of individuality.
In the television series Knightfall (2017), the search for the Holy Grail by the Knights Templar is a major theme of the series' first season. The Grail, which appears as a simple earthenware cup, is coveted by various factions including the Pope, who thinks that possession of it will enable him to ignite another Crusade.
In the Fate franchise, the Holy Grail serves as the prize of the Holy Grail War, granting a single wish to the victor of the battle royale. However, it is hinted at throughout the series that this Grail is not the real chalice of Christ, but is actually an item of uncertain nature created by mages some generations ago.
In the Assassin's Creed video game franchise the Holy Grail is mentioned. In the original game, one Templar refers to the main relic of the game as the Holy Grail, although it was later discovered to be one of many Apples of Eden. The Holy Grail was mentioned again in Templar Legends, ending up in either Scotland or Spain by different accounts. The Holy Grail appears again in Assassin's Creed: Altaïr's Chronicles, by the name of the Chalice, however this time not as an object but as a woman named Adha, similar to the sang rael, or royal blood, interpretation.
In the fourth series of The Grand Tour, the trio goes to Nosy Boraha where they accidentally find the Holy Grail while searching for La Buse's buried treasure.
In the 17th episode of Little Witch Academia, "Amanda O'Neill and the Holy Grail", the Holy Grail is used as a plot device in which witches Amanda O'Neill and Akko Kagari set out to find the item itself at Appleton School.
In the 12th episode of season 9 of the American show The Office, Jim Halpert sends Dwight Schrute on a wild goose chase to find the Holy Grail. After Dwight completes all the clues but comes up empty-handed, the camera cuts to Glenn drinking out of it in his office.
In the 2022 Christmas special episode of the British TV series Detectorists, "Special", Lance finds a crockery cup, eyes only, in a field that turns out to be where a historic battle took place and a reliquary containing the Holy Grail was lost. A montage shows how the same crockery cup went from the hands of Jesus at the Last Supper (implied) to being lost in the field.
The 2023 limited television series Mrs. Davis revolves around Sister Simone's quest to find and destroy the Holy Grail, both as the central plot device and also as metacommentary on quests for the Holy Grail, which one character observes might be the "most overused MacGuffin ever".
See also
Akshaya Patra (Hindu mythology)
Ark of the Covenant
Arma Christi
Cornucopia (Greek mythology)
Cup of Jamshid (Persian mythology)
Fairy cup legend
Holy Chalice (Christian mythology)
List of mythological objects
Relics associated with Jesus
Sampo (Finnish mythology)
Salsabil (Quran)
Coat of arms of the Kingdom of Galicia
References
Further reading
Barber, Richard (2004). The Holy Grail: Imagination and Belief. Harvard University Press.
Campbell, Joseph (1990). Transformations of Myth Through Time. Harper & Row Publishers, New York.
Loomis, Roger Sherman (1991). The Grail: From Celtic Myth to Christian Symbol. Princeton.
Weston, Jessie L. (1993; originally published 1920). From Ritual To Romance. Princeton University Press, Princeton, New Jersey.
Wood, Juliette (2012). The Holy Grail: History and Legend. University of Wales Press. .
External links
The Holy Grail at the Camelot Project
The Holy Grail at the Catholic Encyclopedia
The Holy Grail today in Valencia Cathedral
XVth-century Old French Estoire del saint Graal manuscript BNF fr. 113 Bibliothèque Nationale de France, selection of illuminated folios, Modern French Translation, Commentaries.
Cauldrons
Christian terminology
Fictional elements introduced in the 12th century
Literary motifs
Magic items | Holy Grail | [
"Physics"
] | 7,449 | [
"Magic items",
"Physical objects",
"Matter"
] |
14,337 | https://en.wikipedia.org/wiki/Human%20sexual%20activity | Human sexual activity, human sexual practice or human sexual behaviour is the manner in which humans experience and express their sexuality. People engage in a variety of sexual acts, ranging from activities done alone (e.g., masturbation) to acts with another person (e.g., sexual intercourse, non-penetrative sex, oral sex, etc.) in varying patterns of frequency, for a wide variety of reasons. Sexual activity usually results in sexual arousal and physiological changes in the aroused person, some of which are pronounced while others are more subtle. Sexual activity may also include conduct and activities which are intended to arouse the sexual interest of another or enhance the sex life of another, such as strategies to find or attract partners (courtship and display behaviour), or personal interactions between individuals (for instance, foreplay or BDSM). Sexual activity may follow sexual arousal.
Human sexual activity has sociological, cognitive, emotional, behavioural and biological aspects. It involves personal bonding, sharing emotions, the physiology of the reproductive system, sex drive, sexual intercourse, and sexual behaviour in all its forms.
In some cultures, sexual activity is considered acceptable only within marriage, while premarital and extramarital sex are taboo. Some sexual activities are illegal either universally or in some countries or subnational jurisdictions, while some are considered contrary to the norms of certain societies or cultures. Two examples that are criminal offences in most jurisdictions are sexual assault and sexual activity with a person below the local age of consent.
Types
Sexual activity can be classified in a number of ways. The practices may be preceded by or consist solely of foreplay. Acts involving one person (autoeroticism) may include sexual fantasy or masturbation. If two people are involved, they may engage in vaginal sex, anal sex, oral sex or manual sex. Penetrative sex between two people may be described as sexual intercourse, but definitions vary. If there are more than two participants in a sex act, it may be referred to as group sex. Autoerotic sexual activity can involve use of dildos, vibrators, butt plugs, and other sex toys, though these devices can also be used with a partner.
Sexual activity can be classified into the gender and sexual orientation of the participants, as well as by the relationship of the participants. The relationships can be ones of marriage, intimate partners, casual sex partners or anonymous. Sexual activity can be regarded as conventional or as alternative, involving, for example, fetishism or BDSM activities.
Fetishism can take many forms, including the desire for certain body parts (partialism) such as breasts, navels, or feet. The object of desire can be shoes, boots, lingerie, clothing, leather or rubber items. Some non-conventional autoerotic practices can be dangerous. These include autoerotic asphyxiation and self-bondage. The potential for injury or even death that exists while engaging in the partnered versions of these fetishes (choking and bondage, respectively) becomes drastically increased in the autoerotic case due to the isolation and lack of assistance in the event of a problem.
Sexual activity that is consensual is sexual activity in which both or all participants agree to take part and are of the age that they can consent. If sexual activity takes place under force or duress, it is considered rape or another form of sexual assault. In different cultures and countries, various sexual activities may be lawful or illegal in regards to the age, gender, marital status or other factors of the participants, or otherwise contrary to social norms or generally accepted sexual morals.
Mating strategies
In evolutionary psychology and behavioral ecology, human mating strategies are a set of behaviors used by individuals to attract, select, and retain mates. Mating strategies overlap with reproductive strategies, which encompass a broader set of behaviors involving the timing of reproduction and the trade-off between quantity and quality of offspring (see life history theory).
Relative to other animals, human mating strategies are unique in their relationship with cultural variables such as the institution of marriage. Humans may seek out individuals with the intention of forming a long-term intimate relationship, marriage, casual relationship, or friendship. The human desire for companionship is one of the strongest human drives. It is an innate feature of human nature, and may be related to the sex drive. The human mating process encompasses the social and cultural processes whereby one person may meet another to assess suitability, the courtship process and the process of forming an interpersonal relationship. Commonalities, however, can be found between humans and nonhuman animals in mating behavior.
Stages of physiological arousal during sexual stimulation
The physiological responses during sexual stimulation are fairly similar for both men and women and there are four phases.
During the excitement phase, muscle tension and blood flow increase in and around the sexual organs, heart and respiration increase and blood pressure rises. Men and women experience a "sex flush" on the skin of the upper body and face. For women, the vagina becomes lubricated and the clitoris engorges. For men, the penis becomes erect.
During the plateau phase, heart rate and muscle tension increase further. A man's urinary bladder closes to prevent urine from mixing with semen. A woman's clitoris may withdraw slightly and there is more lubrication, outer swelling and muscles tighten and reduction of diameter.
During the orgasm phase, breathing becomes extremely rapid and the pelvic muscles begin a series of rhythmic contractions. Both men and women experience quick cycles of muscle contraction of lower pelvic muscles and women often experience uterine and vaginal contractions; this experience can be described as intensely pleasurable, but roughly 15% of women never experience orgasm, and half report having faked it. A large genetic component is associated with how often women experience orgasm.
During the resolution phase, muscles relax, blood pressure drops, and the body returns to its resting state. Though it is generally reported that women do not experience a refractory period and thus can experience an additional orgasm, or multiple orgasms, soon after the first, some sources state that both men and women experience a refractory period because women may also experience a period after orgasm in which further sexual stimulation does not produce excitement. This period may last from minutes to days and is typically longer for men than women.
Sexual dysfunction is the inability to react emotionally or physically to sexual stimulation in a way expected of the average healthy person; it can affect different stages in the sexual response cycle, which are desire, excitement and orgasm. In the media, sexual dysfunction is often associated with men, but in actuality, it is more commonly observed in females (43 percent) than males (31 percent).
Psychological aspects
Sexual activity can lower blood pressure and overall stress levels. It serves to release tension, elevate mood, and possibly create a profound sense of relaxation, especially in the postcoital period. From a biochemical perspective, sex causes the release of oxytocin and endorphins and boosts the immune system.
Motivations
People engage in sexual activity for any of a multitude of possible reasons. Although the primary evolutionary purpose of sexual activity is reproduction, research on college students suggested that people have sex for four general reasons: physical attraction, as a means to an end, to increase emotional connection, and to alleviate insecurity.
Most people engage in sexual activity because of pleasure they derive from the arousal of their sexuality, especially if they can achieve orgasm. Sexual arousal can also be experienced from foreplay and flirting, and from fetish or BDSM activities, or other erotic activities. Most commonly, people engage in sexual activity because of the sexual desire generated by a person to whom they feel sexual attraction; but they may engage in sexual activity for the physical satisfaction they achieve in the absence of attraction for another, as in the case of casual or social sex. At times, a person may engage in a sexual activity solely for the sexual pleasure of their partner, such as because of an obligation they may have to the partner or because of love, sympathy or pity they may feel for the partner.
A person may engage in sexual activity for purely monetary considerations, or to obtain some advantage from either the partner or the activity. A man and woman may engage in sexual intercourse with the objective of conception. Some people engage in hate sex which occurs between two people who strongly dislike or annoy each other. It is related to the idea that opposition between two people can heighten sexual tension, attraction and interest.
Self-determination theory
Research has found that people also engage in sexual activity for reasons associated with self-determination theory. The self-determination theory can be applied to a sexual relationship when the participants have positive feelings associated with the relationship. These participants do not feel guilty or coerced into the partnership. Researchers have proposed the model of self-determined sexual motivation. The purpose of this model is to connect self-determination and sexual motivation. This model has helped to explain how people are sexually motivated when involved in self-determined dating relationships. This model also links the positive outcomes (satisfying the needs for autonomy, competence, and relatedness) gained from sexual motivation.
According to the completed research associated with this model, it was found that people of both sexes who engaged in sexual activity for self-determined motivation had more positive psychological well-being. While engaging in sexual activity for self-determined reasons, the participants also had a higher need for fulfillment. When this need was satisfied, they felt better about themselves. This was correlated with greater closeness to their partner and higher overall satisfaction in their relationship. Though both sexes engaged in sexual activity for self-determined reasons, there were some differences found between males and females. It was concluded that females had more motivation than males to engage in sexual activity for self-determined reasons. Females also had higher satisfaction and relationship quality than males did from the sexual activity. Overall, research concluded that psychological well-being, sexual motivation, and sexual satisfaction were all positively correlated when dating couples partook in sexual activity for self-determined reasons.
Frequency
The frequency of sexual activity might range from zero to 15 or 20 times a week. Frequency of intercourse tends to decline with age. Some post-menopausal women experience decline in frequency of sexual intercourse, while others do not. According to the Kinsey Institute, the average frequency of sexual intercourse in the US for individuals with partners is 112 times per year (age 18–29), 86 times per year (age 30–39), and 69 times per year (age 40–49). The rate of sexual activity has been declining in the 21st century, a phenomenon that has been described as a sex recession.
Adolescents
The age at which adolescents become sexually active varies considerably between different cultures and times. (See Prevalence of virginity.) The first sexual act of a child or adolescent is sometimes referred to as the sexualization of the child, and may be considered a milestone or a change of status, as the loss of virginity or innocence. Youth are legally free to have intercourse after they reach the age of consent.
A 1999 survey of students indicated that approximately 40% of ninth graders across the United States report having had sexual intercourse. This figure rises with each grade. Males are more sexually active than females at each of the grade levels surveyed. Sexual activity of young adolescents differs by ethnicity as well: a higher percentage of African American and Hispanic adolescents are sexually active than white adolescents.
Research on sexual frequency has also been conducted solely on female adolescents who engage in sexual activity. Female adolescents tended to engage in more sexual activity due to positive mood. In female teenagers, engaging in sexual activity was directly positively correlated with being older, greater sexual activity in the previous week or prior day, and more positive mood the previous day or the same day as the sexual activity occurred. Decreased sexual activity was associated with prior or same day negative mood or menstruation.
Although opinions differ, researchers suggest that sexual activity is an essential part of human life, and that teenagers need to experience sex. According to a study, sexual experiences help teenagers understand pleasure and satisfaction. In relation to hedonic and eudaimonic well-being, it stated that teenagers can positively benefit from sexual activity. The cross-sectional study was conducted in 2008 and 2009 in a rural upstate New York community. Teenagers who had their first sexual experience at age 16 revealed a higher well-being than those who were sexually inexperienced or who became sexually active at age 17. Furthermore, teenagers who had their first sexual experience at age 15 or younger, or who had many sexual partners, were not negatively affected and did not have associated lower well-being.
Health and safety
Sexual activity is an innately physiological function, but like other physical activity, it comes with risks. There are four main types of risks that may arise from sexual activity: unwanted pregnancy, contracting a sexually transmitted infection (STI), physical injury, and psychological injury.
Unwanted pregnancy
Any sexual activity that involves the introduction of semen into a woman's vagina, such as during sexual intercourse, or contact of semen with her vulva, may result in a pregnancy. To reduce the risk of unintended pregnancies, some people who engage in penile-vaginal sex may use contraception, such as birth control pills, a condom, diaphragms, spermicides, hormonal contraception or sterilization. The effectiveness of the various contraceptive methods in avoiding pregnancy varies considerably, and depends on the method rather than the user.
Sexually transmitted infections
Sexual activity that involves skin-to-skin contact, exposure to an infected person's bodily fluids or mucous membranes carries the risk of contracting a sexually transmitted infection. People may not be able to detect that their sexual partner has one or more STIs, for example if they are asymptomatic (show no symptoms). The risk of STIs can be reduced by safe sex practices, such as using condoms. Both partners may opt to be tested for STIs before engaging in sex. The exchange of body fluids is not necessary to contract an infestation of crab lice. Crab lice typically are found attached to hair in the pubic area but sometimes are found on coarse hair elsewhere on the body (for example, eyebrows, eyelashes, beard, mustache, chest, armpits, etc.). Pubic lice infestations (pthiriasis) are spread through direct contact with someone who is infested with the louse.
Some STIs like HIV/AIDS can also be contracted by using IV drug needles after their use by an infected person, as well as through childbirth or breastfeeding.
Aging
Factors such as biological and psychological factors, diseases, mental conditions, boredom with the relationship, and widowhood have been found to contribute to a decrease in sexual interest and activity in old age, but older age does not eliminate the ability to enjoy sexual activity.
Orientations and society
Heterosexuality
Heterosexuality is the romantic or sexual attraction to the opposite sex. Heterosexual practices are institutionally privileged in most countries. In some countries, mostly those where religion has a strong influence on social policy, marriage laws serve the purpose of encouraging people to have sex only within marriage. Sodomy laws have been used to discourage same-sex sexual practices, but they may also affect opposite-sex sexual practices. Laws also ban adults from committing sexual abuse, committing sexual acts with anyone under an age of consent, performing sexual activities in public, and engaging in sexual activities for money (prostitution). Though these laws cover both same-sex and opposite-sex sexual activities, they may differ in regard to punishment, and may be more frequently (or exclusively) enforced on those who engage in same-sex sexual activities.
Different-sex sexual practices may be monogamous, serially monogamous, or polyamorous, and, depending on the definition of sexual practice, abstinent or autoerotic (including masturbation). Additionally, different religious and political movements have tried to influence or control changes in sexual practices including courting and marriage, though in most countries changes occur at a slow rate.
Homosexuality
Homosexuality is the romantic or sexual attraction to the same sex. People with a homosexual orientation can express their sexuality in a variety of ways, and may or may not express it in their behaviors. Research indicates that many gay men and lesbians want, and succeed in having, committed and durable relationships. For example, survey data indicate that between 40% and 60% of gay men and between 45% and 80% of lesbians are currently involved in a romantic relationship.
It is possible for a person whose sexual identity is mainly heterosexual to engage in sexual acts with people of the same sex. Gay and lesbian people who pretend to be heterosexual are often referred to as being closeted (hiding their sexuality in "the closet"). "Closet case" is a derogatory term used to refer to people who hide their sexuality. Making that orientation public can be called "coming out of the closet" in the case of voluntary disclosure or "outing" in the case of disclosure by others against the subject's wishes (or without their knowledge). Among some communities (called "men on the DL" or "down-low"), same-sex sexual behavior is sometimes viewed as solely for physical pleasure. Men who have sex with men, as well as women who have sex with women, or men on the "down-low" may engage in sex acts with members of the same sex while continuing sexual and romantic relationships with the opposite sex.
People who engage exclusively in same-sex sexual practices may not identify themselves as gay or lesbian. In sex-segregated environments, individuals may seek relationships with others of their own gender (known as situational homosexuality). In other cases, some people may experiment or explore their sexuality with same (or different) sex sexual activity before defining their sexual identity. Despite stereotypes and common misconceptions, there are no forms of sexual acts exclusive to same-sex sexual behavior that cannot also be found in opposite-sex sexual behavior, except those involving the meeting of the genitalia between same-sex partners – tribadism (generally vulva-to-vulva rubbing) and frot (generally penis-to-penis rubbing).
Bisexuality and pansexuality
People who have a romantic or sexual attraction to both sexes are referred to as bisexual. People who have a distinct but not exclusive preference for one sex/gender over the other may also identify themselves as bisexual. Like gay and lesbian individuals, bisexual people who pretend to be heterosexual are often referred to as being closeted.
Pansexuality (also referred to as omnisexuality) may or may not be subsumed under bisexuality, with some sources stating that bisexuality encompasses sexual or romantic attraction to all gender identities. Pansexuality is characterized by the potential for aesthetic attraction, romantic love, or sexual desire towards people without regard for their gender identity or biological sex. Some pansexuals suggest that they are gender-blind; that gender and sex are insignificant or irrelevant in determining whether they will be sexually attracted to others. As defined in the Oxford English Dictionary, pansexuality "encompasses all kinds of sexuality; not limited or inhibited in sexual choice with regards to gender or practice".
Avoidance of inbreeding
Although the main adaptive function of human sexual activity is reproduction, human sexual activity also includes the adaptive constraint of avoiding close inbreeding, since inbreeding can have deleterious effects on progeny. Charles Darwin, who was married to his first cousin Emma Wedgwood, considered that the ill health that plagued his family was a consequence of inbreeding. In general, inbreeding between individuals who are closely genetically related leads to the expression of deleterious recessive mutations. The avoidance of inbreeding as a constraint on human sexual activity is apparent in the near universal cultural inhibitions in human societies of sexual activity between closely related individuals. Human outcrossing sexual activity provides the adaptive benefit of the masking of expression of deleterious recessive mutations.
Other social aspects
General attitudes
Alex Comfort and others propose three potential social aspects of sexual intercourse in humans, which are not mutually exclusive: reproductive, relational, and recreational. The development of the contraceptive pill and other highly effective forms of contraception in the mid- and late 20th century has increased people's ability to segregate these three functions, which still overlap a great deal and in complex patterns. For example: A fertile couple may have intercourse while using contraception to experience sexual pleasure (recreational) and also as a means of emotional intimacy (relational), thus deepening their bonding, making their relationship more stable and more capable of sustaining children in the future (deferred reproductive). This same couple may emphasize different aspects of intercourse on different occasions, being playful during one episode of intercourse (recreational), experiencing deep emotional connection on another occasion (relational), and later, after discontinuing contraception, seeking to achieve pregnancy (reproductive, or more likely reproductive and relational).
Religious and ethical
Human sexual activity is generally influenced by social rules that are culturally specific and vary widely.
Sexual ethics, morals, and norms relate to issues including deception/honesty, legality, fidelity and consent. Some activities, known as sex crimes in some locations, are illegal in some jurisdictions, including those conducted between (or among) consenting and competent adults (examples include sodomy law and adult-adult incest).
Some people who are in a relationship but want to hide polygamous activity (possibly of opposite sexual orientation) from their partner may solicit consensual sexual activity with others through personal contacts, online chat rooms, or advertising in select media.
Swinging involves singles or partners in a committed relationship engaging in sexual activities with others as a recreational or social activity. The increasing popularity of swinging is regarded by some as arising from the upsurge in sexual activity during the sexual revolution of the 1960s.
Some people engage in various sexual activities as a business transaction. When this involves having sex with, or performing certain actual sexual acts for another person in exchange for money or something of value, it is called prostitution. Other aspects of the adult industry include phone sex operators, strip clubs, and pornography.
Gender roles and the expression of sexuality
Social gender roles can influence sexual behavior as well as the reaction of individuals and communities to certain incidents; the World Health Organization states that, "Sexual violence is also more likely to occur where beliefs in male sexual entitlement are strong, where gender roles are more rigid, and in countries experiencing high rates of other types of violence." Some societies, such as those where the concepts of family honor and female chastity are very strong, may practice violent control of female sexuality, through practices such as honor killings and female genital mutilation.
The relation between gender equality and sexual expression is recognized, and promotion of equity between men and women is crucial for attaining sexual and reproductive health, as stated by the UN International Conference on Population and Development Program of Action:
"Human sexuality and gender relations are closely interrelated and together affect the ability of men and women to achieve and maintain sexual health and manage their reproductive lives. Equal relationships between men and women in matters of sexual relations and reproduction, including full respect for the physical integrity of the human body, require mutual respect and willingness to accept responsibility for the consequences of sexual behaviour. Responsible sexual behaviour, sensitivity and equity in gender relations, particularly when instilled during the formative years, enhance and promote respectful and harmonious partnerships between men and women."
BDSM
BDSM is a variety of erotic practices or roleplaying involving bondage, dominance and submission, sadomasochism, and other interpersonal dynamics. Given the wide range of practices, some of which may be engaged in by people who do not consider themselves to be practicing BDSM, inclusion in the BDSM community or subculture usually depends on self-identification and shared experience. BDSM communities generally welcome anyone with a non-normative streak who identifies with the community; this may include cross-dressers, extreme body modification enthusiasts, animal players, latex or rubber aficionados, and others.
B/D (bondage and discipline) is a part of BDSM. Bondage includes the restraint of the body or mind. D/s means "Dominant and submissive". A Dominant is one who takes control of a person who wishes to surrender control and a submissive is one who surrenders control to a person who wishes to take control. S/M (sadism and masochism) is the other part of BDSM. A sadist is an individual who takes pleasure in the pain or humiliation of others and a masochist is an individual who takes pleasure from their own pain or humiliation.
Unlike the usual "power neutral" relationships and play styles commonly followed by couples, activities and relationships within a BDSM context are often characterized by the participants' taking on complementary, but unequal roles; thus, the idea of informed consent of both the partners becomes essential. Participants who exert dominance (sexual or otherwise) over their partners are known as Dominants or Tops, while participants who take the passive, receiving, or obedient role are known as submissives or bottoms.
These terms are sometimes shortened so that a dominant person may be referred to as a "Dom" (a woman may choose to use the feminine "Domme") and a submissive may be referred to as a "sub". Individuals who can change between Top/Dominant and bottom/submissive roles – whether from relationship to relationship or within a given relationship – are known as switches. The precise definition of roles and self-identification is a common subject of debate within the community.
In a 2013 study, researchers stated that BDSM is a sexual act in which participants play role games, use restraint and power exchange, and sometimes involve suppression and pain, depending on the individuals involved. The study serves to challenge the widespread notion that BDSM could be in some way linked to psychopathology. According to the findings, one who participates in BDSM may have greater strength socially and mentally as well as greater independence than those who do not practice BDSM. It suggests that people who participate in BDSM play have higher subjective well-being, and that this might be due to the fact that BDSM play requires extensive communication. Before any act occurs, the partners must discuss their agreement of their relationship. They discuss how long the play will last, the intensity, their actions, what each participant needs or desires, and what, if any, sexual activities may be included. All acts must be consensual and pleasurable to both parties.
In a 2015 study, interviewed BDSM participants have mentioned that the activities have helped to create higher levels of connection, intimacy, trust and communication between partners. The study suggests that Dominants and submissives exchange control for each other's pleasure and to satisfy a need. The participants have remarked that they enjoy pleasing their partner in any way they can and many surveyed have felt that this is one of the best things about BDSM. It gives a submissive pleasure to do things in general for their Dominant, while a Dominant enjoys making their encounters all about the submissive and doing things that make the submissive happy. The findings indicate that the surveyed submissives and Dominants found that BDSM makes their play more pleasurable and fun. The participants have also mentioned improvements in their personal growth, romantic relationships, sense of community and self, the dominant's confidence, and their coping with everyday things by giving them a psychological release.
Legal issues
There are many laws and social customs which prohibit, or in some way affect, sexual activities. These laws and customs vary from country to country, and have varied over time. They cover, for example, prohibitions on non-consensual sex, sex outside marriage, and sexual activity in public, among many others. Many of these restrictions are non-controversial, but some have been the subject of public debate.
Most societies consider it a serious crime to force someone to engage in sexual acts or to engage in sexual activity with someone who does not consent. This is called sexual assault, and if sexual penetration occurs it is called rape, the most serious kind of sexual assault. The details of this distinction may vary among different legal jurisdictions. Also, what constitutes effective consent in sexual matters varies from culture to culture and is frequently debated. Laws regulating the minimum age at which a person can consent to have sex (age of consent) are frequently the subject of debate, as is adolescent sexual behavior in general. Some societies have forced marriage, where consent may not be required.
Same-sex laws
Many locales have laws that limit or prohibit same-sex sexual activity.
Sex outside marriage
In the West, sex before marriage is not illegal. There are social taboos and many religions condemn pre-marital sex. In many Muslim countries, such as Saudi Arabia, Pakistan, Afghanistan, Iran, Kuwait, Maldives, Morocco, Oman, Mauritania, United Arab Emirates, Sudan, and Yemen, any form of sexual activity outside marriage is illegal. Those found guilty, especially women, may be forced to wed the sexual partner, may be publicly beaten, or may be stoned to death. In many African and native tribes, sexual activity is not viewed as a privilege or right of a married couple, but rather as the unification of bodies and is thus not frowned upon.
Other studies have analyzed the changing attitudes about sex that American adolescents have outside marriage. Adolescents were asked how they felt about oral and vaginal sex in relation to their health, social, and emotional well-being. Overall, teenagers felt that oral sex was viewed as more socially positive amongst their demographic. Results stated that teenagers believed that oral sex for dating and non-dating adolescents was less threatening to their overall values and beliefs than vaginal sex was. When asked, teenagers who participated in the research viewed oral sex as more acceptable to their peers, and their personal values than vaginal sex.
Minimum age of sexual activity (age of consent)
The laws of each jurisdiction set the minimum age at which a young person is allowed to engage in sexual activity. This age of consent is typically between 14 and 18 years, but laws vary. In many jurisdictions, the age of consent is interpreted in terms of a person's mental or functional age. As a result, those above the set age of consent may still be considered unable to legally consent due to mental immaturity. Many jurisdictions regard any sexual activity by an adult involving a child as child sexual abuse.
Age of consent may vary by the type of sexual act, the sex of the actors, or other restrictions such as abuse of a position of trust. Some jurisdictions also make allowances for young people engaged in sexual acts with each other.
Incestuous relationships
Most jurisdictions prohibit sexual activity between certain close relatives. These laws vary to some extent; such acts are called incestuous.
Incest laws may involve restrictions on marriage rights, which also vary between jurisdictions. When incest involves an adult and a child, it is considered to be a form of child sexual abuse.
Sexual abuse
Non-consensual sexual activity or subjecting an unwilling person to witnessing a sexual activity are forms of sexual abuse, as well as (in many countries) certain non-consensual paraphilias such as frotteurism, telephone scatophilia (indecent phone calls), and non-consensual exhibitionism and voyeurism (known as "indecent exposure" and "peeping tom", respectively).
Prostitution and survival sex
People sometimes exchange sex for money or access to other resources. This work takes place under many varied circumstances. The person who receives payment for sexual services is known as a prostitute, and the person who receives such services is referred to by a multitude of terms, such as a client. Prostitution is one of the branches of the sex industry. The legal status of prostitution varies from country to country, from being a punishable crime to a regulated profession. Estimates place the annual revenue generated by the global prostitution industry at over $100 billion. Prostitution is sometimes referred to as "the world's oldest profession". Prostitution may be a voluntary individual activity or facilitated or forced by pimps.
Survival sex is a form of prostitution engaged in by people in need, usually when homeless or otherwise disadvantaged people trade sex for food, a place to sleep, or other basic needs, or for drugs. The term is used by sex trade and poverty researchers and aid workers.
See also
Child sexuality
Erotic plasticity
History of human sexuality
Human female sexuality
Human male sexuality
Mechanics of human sexuality
Orgasm control
Orgastic potency
Sociosexual orientation
Transgender sexuality
References
Further reading
Durex Global Sex Survey 2005 (PDF) at data360.org
Intimate relationships
Evolutionary psychology | Human sexual activity | [
"Biology"
] | 6,723 | [
"Human sexuality",
"Behavior",
"Sexual acts",
"Sexuality",
"Human behavior",
"Mating"
] |
14,340 | https://en.wikipedia.org/wiki/Hydraulic%20ram | A hydraulic ram pump, ram pump, or hydram is a cyclic water pump powered by hydropower. It takes in water at one "hydraulic head" (pressure) and flow rate, and outputs water at a higher hydraulic head and lower flow rate. The device uses the water hammer effect to develop pressure that allows a portion of the input water that powers the pump to be lifted to a point higher than where the water originally started. The hydraulic ram is sometimes used in remote areas, where there is both a source of low-head hydropower and a need for pumping water to a destination higher in elevation than the source. In this situation, the ram is often useful, since it requires no outside source of power other than the kinetic energy of flowing water.
History
In 1772, John Whitehurst of Cheshire, England, invented a manually controlled precursor of the hydraulic ram called the "pulsation engine" and installed the first one at Oulton, Cheshire, to raise water. In 1783, he installed another in Ireland. He did not patent it, and details are obscure, but it is known to have had an air vessel.
The first self-acting ram pump was invented by the Frenchman Joseph Michel Montgolfier (best known as a co-inventor of the hot air balloon) in 1796 for raising water in his paper mill at Voiron. His friend Matthew Boulton took out a British patent on his behalf in 1797. The sons of Montgolfier obtained a British patent for an improved version in 1816, and this was acquired, together with Whitehurst's design, in 1820 by Josiah Easton, a Somerset-born engineer who had just moved to London.
Easton's firm, inherited by his son James (1796–1871), grew during the nineteenth century to become one of the more important engineering manufacturers in England, with a large works at Erith, Kent. They specialised in water supply and sewerage systems worldwide, as well as land drainage projects. Eastons had a good business supplying rams for water supply purposes to large country houses, farms, and village communities. Some of their installations still survived as of 2004, one such example being at the hamlet of Toller Whelme, in Dorset. Until about 1958 when the mains water arrived, the hamlet of East Dundry just south of Bristol had three working rams – their noisy "thump" every minute or so resonated through the valley night and day: these rams served farms that needed much water for their dairy herds.
The firm closed in 1909, but the ram business was continued by James R. Easton. In 1929, it was acquired by Green & Carter of Winchester, Hampshire, who were engaged in the manufacturing and installation of Vulcan and Vacher Rams.
The first US patent was issued to Joseph Cerneau (or Curneau) and Stephen (Étienne) S. Hallet (1755-1825) in 1809. US interest in hydraulic rams picked up around 1840, as further patents were issued and domestic companies started offering rams for sale. Toward the end of the 19th century, interest waned as electricity and electric pumps became widely available.
Priestly's Hydraulic Ram, built in 1890 in Idaho, was a "marvelous" invention, apparently independent, which lifted water to provide irrigation. The ram survives and is listed on the U.S. National Register of Historic Places.
By the end of the twentieth century, interest in hydraulic rams had revived, due to the needs of sustainable technology in developing countries and of energy conservation in developed ones. An example is Aid Foundation International in the Philippines, which won an Ashden Award for its work developing ram pumps that could be easily maintained for use in remote villages. The hydraulic ram principle has been used in some proposals for exploiting wave power, one of which was discussed as long ago as 1931 by Hanns Günther in his book In hundert Jahren.
Some later ram designs in the UK called compound rams were designed to pump treated water using an untreated drive water source, which overcomes some of the problems of having drinking water sourced from an open stream.
In 1996 English engineer Frederick Philip Selwyn patented a more compact hydraulic ram pump where the waste valve used the venturi effect and was arranged concentrically around the input pipe. Initially patented as a fluid pressure amplifier due to its different design, it is currently sold as the "Papa Pump".
In addition, a large-scale version named the "Venturo Pump" is also being manufactured.
Construction and principle of operation
A traditional hydraulic ram has only two moving parts, a spring or weight loaded "waste" valve sometimes known as the "clack" valve and a "delivery" check valve, making it cheap to build, easy to maintain, and very reliable.
Priestly's Hydraulic Ram, described in detail in the 1947 Encyclopedia Britannica, has no moving parts.
Sequence of operation
A simplified hydraulic ram is shown in Figure 2. Initially, the waste valve [4] is open (i.e. lowered) because of its own weight, and the delivery valve [5] is closed under the pressure caused by the water column from the outlet [3]. The water in the inlet pipe [1] starts to flow under the force of gravity and picks up speed and kinetic energy until the increasing drag force lifts the waste valve's weight and closes it. The momentum of the water flow in the inlet pipe against the now closed waste valve causes a water hammer that raises the pressure in the pump beyond the pressure caused by the water column pressing down from the outlet. This pressure differential now opens the delivery valve [5], and forces some water to flow into the delivery pipe [3]. Because this water is being forced uphill through the delivery pipe farther than it is falling downhill from the source, the flow slows; when the flow reverses, the delivery check valve [5] closes. Meanwhile, the water hammer from the closing of the waste valve also produces a pressure pulse which propagates back up the inlet pipe to the source where it converts to a suction pulse that propagates back down the inlet pipe. This suction pulse, with the weight or spring on the valve, pulls the waste valve back open and allows the process to begin again.
A pressure vessel [6] containing air cushions the hydraulic pressure shock when the waste valve closes, and it also improves the pumping efficiency by allowing a more constant flow through the delivery pipe. Although the pump could in theory work without it, the efficiency would drop drastically and the pump would be subject to extraordinary stresses that could shorten its life considerably. One problem is that the pressurized air will gradually dissolve into the water until none remains. One solution to this problem is to have the air separated from the water by an elastic diaphragm (similar to an expansion tank); however, this solution can be problematic in developing countries where replacements are difficult to procure. Another solution is a snifting valve installed close to the drive side of the delivery valve. This automatically inhales a small amount of air each time the delivery valve shuts and the partial vacuum develops. Another solution is to insert an inner tube of a car or bicycle tire into the pressure vessel with some air in it and the valve closed. This tube is in effect the same as the diaphragm, but it is implemented with more widely available materials. The air in the tube cushions the shock of the water the same as the air in other configurations does.
Efficiency
A typical energy efficiency is 60%, but up to 80% is possible. This should not be confused with the volumetric efficiency, which relates the volume of water delivered to the total water taken from the source. The portion of water available at the delivery pipe is reduced in proportion to the ratio of the delivery head to the supply head. Thus if the water is lifted to a height above the ram five times greater than the supply head, only 20% of the supplied water can be available, the other 80% being spilled via the waste valve. These ratios assume 100% energy efficiency. Actual water delivered will be further reduced by the energy efficiency factor. In the above example, if the energy efficiency is 70%, the water delivered will be 70% of 20%, i.e. 14%. Assuming a 2-to-1 delivery-head-to-supply-head ratio and 70% efficiency, the delivered water would be 70% of 50%, i.e. 35%. Very high ratios of delivery to supply head usually result in lowered energy efficiency. Suppliers of rams often provide tables giving expected volume ratios based on actual tests.
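These worked figures follow from a simple proportionality, which can be sketched in a few lines of Python (a minimal illustration of the arithmetic above, not a pump-design tool; the function name, variable names, and the specific example heads are illustrative assumptions):

def delivered_fraction(supply_head_m, delivery_head_m, energy_efficiency):
    # Delivered fraction is approximately energy efficiency times
    # (supply head / delivery head), the relation used in the worked examples above.
    return energy_efficiency * (supply_head_m / delivery_head_m)

# Delivery head five times the supply head at 70% energy efficiency: about 14% delivered.
print(delivered_fraction(2.0, 10.0, 0.70))  # 0.14
# Delivery head twice the supply head at 70% energy efficiency: about 35% delivered.
print(delivered_fraction(2.0, 4.0, 0.70))   # 0.35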
Drive and delivery pipe design
Since both efficiency and reliable cycling depend on water hammer effects, the drive pipe design is important. It should be between 3 and 7 times longer than the vertical distance between the source and the ram. Commercial rams may have an input fitting designed to accommodate this optimum slope. The diameter of the supply pipe would normally match the diameter of the input fitting on the ram, which in turn is based on its pumping capacity. The drive pipe should be of constant diameter and material, and should be as straight as possible. Where bends are necessary, they should be smooth, large diameter curves. Even a large spiral is allowed, but elbows are to be avoided. PVC will work in some installations, but steel pipe is preferred, although much more expensive. If valves are used they should be a free flow type such as a ball valve or gate valve.
The delivery pipe is much less critical since the pressure vessel prevents water hammer effects from traveling up it. Its overall design would be determined by the allowable pressure drop based on the expected flow. Typically the pipe size will be about half that of the supply pipe, but for very long runs a larger size may be indicated. PVC pipe and any necessary valves are not a problem.
Starting operation
A ram newly placed into operation or which has stopped cycling should start automatically if the waste valve weight or spring pressure is adjusted correctly, but it can be restarted as follows: If the waste valve is in the raised (closed) position, it must be pushed down manually into the open position and released. If the flow is sufficient, it will then cycle at least once. If it does not continue to cycle, it must be pushed down repeatedly until it cycles continuously on its own, usually after three or four manual cycles. If the ram stops with the waste valve in the down (open) position, it must be lifted manually and kept up for as long as necessary for the supply pipe to fill with water and for any air bubbles to travel up the pipe to the source. This may take some time, depending on supply pipe length and diameter. Then it can be started manually by pushing it down a few times as described above. Having a valve on the delivery pipe at the ram makes starting easier: keep the valve closed until the ram starts cycling, then gradually open it to fill the delivery pipe. If opened too quickly, it will stop the cycle. Once the delivery pipe is full, the valve can be left open.
Common operational problems
Failure to deliver sufficient water may be due to improper adjustment of the waste valve, having too little air in the pressure vessel, or simply attempting to raise the water higher than the level of which the ram is capable.
The ram may be damaged by freezing in winter, or loss of air in the pressure vessel leading to excess stress on the ram parts. These failures will require welding or other repair methods and perhaps parts replacement.
It is not uncommon for an operating ram to require occasional restarts. The cycling may stop due to poor adjustment of the waste valve, or insufficient water flow at the source. Air can enter if the supply water level is not at least a few inches above the input end of the supply pipe. Other problems are blockage of the valves with debris, or improper installation, such as using a supply pipe of non-uniform diameter or material, having sharp bends or a rough interior, or one that is too long or short for the drop, or is made of an insufficiently rigid material. A PVC supply pipe will work in some installations but a steel pipe is better.
See also
Boost converter – the electronic–hydraulic analog of the hydraulic ram.
Heron's fountain
Pulser pump, a similar device made from a trompe connected to an airlift pump
Tesla valve
Water rocket
References
Further reading
External links
Ram pump complete blueprints
Articles containing video clips
Liquid-piston pumps
Pumps
Water
French inventions | Hydraulic ram | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,554 | [
"Pumps",
"Hydrology",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Water"
] |
14,359 | https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel%20principle | The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens-Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction as well as reflection.
History
In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to as Huygens' construction. He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects.
In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot.
Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. This success was important evidence in favor of the wave theory of light over then predominant corpuscular theory.
In 1882, Gustav Kirchhoff analyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem. Very few rigorous solutions to diffraction problems are known, however, and most problems in optics are adequately treated using the Huygens-Fresnel principle.
In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave.
In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as surface equivalence principle.
Issues in Huygens-Fresnel theory continue to be of interest. In 1991, David A. B. Miller suggested that treating the source as a dipole (not the monopole assumed by Huygens) will cancel waves propagating in the reverse direction, making Huygens' construction quantitatively correct. In 2021, Forrest L. Anderson showed that treating the wavelets as Dirac delta functions, then summing and differentiating the summation, is sufficient to cancel reverse-propagating waves.
Examples
Refraction
The apparent change in direction of a light ray as it enters a sheet of glass at an angle can be understood by the Huygens construction. Each point on the surface of the glass gives rise to a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air.
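The geometry of this construction reproduces the familiar law of refraction. As a hedged summary (the symbols below are introduced here and are not in the original text: v_1 and v_2 are the wave speeds in air and glass, n_1 and n_2 the corresponding refractive indices, and \theta_1 and \theta_2 the angles measured from the surface normal):

\frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2} = \frac{n_2}{n_1}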
In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently the wavefront bends around in the direction of higher index.
Diffraction
Huygens' principle as a microscopic model
The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These can be summarized in the requirement that the wavelength of light be much smaller than the dimensions of any optical components encountered.
Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.
A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.
Mathematical expression of the principle
Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is:
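The displayed expression appears to have been lost in extraction. The standard textbook form, consistent with the description in the next sentence (magnitude falling off as 1/r_0 and phase advancing as k r_0), is:

U(r_0) = \frac{U_0\, e^{i k r_0}}{r_0}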
Note that magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled.
Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to obtain agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate at a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to that of the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by:
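The displayed integral also appears to have been lost in extraction. A reconstruction of the standard textbook form, using the constant −i/λ and the inclination factor K(χ) just described (not verbatim from the source), is:

U(P) = -\frac{i}{\lambda}\, U(r_0) \int_{S} \frac{e^{i k s}}{s}\, K(\chi)\, dS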
where S describes the surface of the sphere, and s is the distance between Q and P.
Fresnel used a zone construction method to find approximate values of K for the different zones, which enabled him to make predictions that were in agreement with experimental results. The integral theorem of Kirchhoff includes the basic idea of Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to the formation of Fresnel's formulation.
For an aperture illumination consisting of a single expanding spherical wave, if the radius of the curvature of the wave is sufficiently large, Kirchhoff gave the following expression for K(χ):
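The expression itself appears to be missing here. Kirchhoff's standard obliquity factor for this case, which matches the behaviour described in the next sentence (maximum at χ = 0, vanishing at χ = π rather than at χ = π/2), is:

K(\chi) = \tfrac{1}{2}\left(1 + \cos\chi\right)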
K has a maximum value at χ = 0 as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π.
The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations. An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. K(χ) can be generally expressed as:
In this case, K satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).
Generalized Huygens' principle
Many books and references - e.g. (Greiner, 2002) and (Enders, 2009) - refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948).
Feynman defines the generalized principle in the following way:
This clarifies the fact that, in this context, the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanical equations are first order in time. Only in this case does the superposition principle fully apply, i.e. the wave function at a point P can be expanded as a superposition of waves on a boundary surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities, where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle applies to "matter waves" rather than to light waves. The phase factor is now clarified as being given by the action, and there is no longer any confusion as to why the phases of the wavelets differ from that of the original wave, modified as they are by the additional Fresnel parameters.
As per Greiner, the generalized principle can be expressed in the form:
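The formula itself appears to have been lost in extraction. A hedged reconstruction of the standard propagator relation (with ψ denoting the wave function; depending on the convention, a constant factor such as i may multiply the Green function) is:

\psi(\mathbf{x}', t') = \int G(\mathbf{x}', t';\, \mathbf{x}, t)\, \psi(\mathbf{x}, t)\, d^3x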
where G is the usual Green function that propagates the wave function ψ in time. This description resembles and generalizes Fresnel's initial formula from the classical model.
Feynman's path integral and the modern photon wave function
Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but it did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The issues were not addressed until the early and mid-1900s, in quantum theory discussions, particularly the early discussions at the 1927 Brussels Solvay Conference, where Louis de Broglie proposed his de Broglie hypothesis that the photon is guided by a wave function.
The wave function presents a much different explanation of the observed light and dark bands in a double slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory: the paths are determined by the surroundings (the photon's originating point (atom), the slit, and the screen) and by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons.
Quantum field theory
Huygens' principle can be seen as a consequence of the homogeneity of space—space is uniform in all locations. Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation.
Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
In other spatial dimensions
In 1900, Jacques Hadamard observed that Huygens' principle was broken when the number of spatial dimensions is even. From this, he developed a set of conjectures that remain an active topic of research. In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from the Coxeter group (so, for example, the Weyl groups of simple Lie algebras).
The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.
See also
Fraunhofer diffraction
Kirchhoff's diffraction formula
Green's function
Green's theorem
Green's identities
Near-field diffraction pattern
Double-slit experiment
Knife-edge effect
Fermat's principle
Fourier optics
Surface equivalence principle
Wave field synthesis
Kirchhoff integral theorem
References
Further reading
Stratton, Julius Adams: Electromagnetic Theory, McGraw-Hill, 1941. (Reissued by Wiley – IEEE Press.)
B.B. Baker and E.T. Copson, The Mathematical Theory of Huygens' Principle, Oxford, 1939, 1950; AMS Chelsea, 1987.
Wave mechanics
Diffraction
Christiaan Huygens | Huygens–Fresnel principle | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,738 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Diffraction",
"Crystallography",
"Spectroscopy"
] |
14,365 | https://en.wikipedia.org/wiki/Hengist%20and%20Horsa | Hengist and Horsa are Germanic brothers said to have led the Angles, Saxons and Jutes in their supposed invasion of Britain in the 5th century. Tradition lists Hengist as the first of the Jutish kings of Kent.
Modern scholarly consensus regards Hengist and Horsa as mythical figures, given their alliterative animal names, the seemingly constructed nature of their genealogy, and the unknowable quality of Bede's sources. Their later detailed representation in texts such as the Anglo-Saxon Chronicle says more about ninth-century attitudes to the past than about the time in which they are said to have existed.
According to early sources, Hengist and Horsa arrived in Britain at Ebbsfleet on the Isle of Thanet. For a time, they served as mercenaries for Vortigern, King of the Britons, but later they turned against him (British accounts have them betraying him in the Treachery of the Long Knives). Horsa was killed fighting the Britons, but Hengist successfully conquered Kent, becoming the forefather of its kings.
A figure named Hengest, possibly identifiable with the leader of British legend, appears in the Finnesburg Fragment and in Beowulf. J. R. R. Tolkien theorized that the two are one and the same, and that Hengest/Hengist originated as a historical person.
Hengist was historically said to have been buried at Hengistbury Head in Dorset.
Etymology
The Old English names Hengest and Horsa mean "stallion" and "horse", respectively.
The original Old English word for a horse was eoh. Eoh derives from the Proto-Indo-European base *éḱwos, whence also Latin equus, which gave rise to the modern English words equine and equestrian. Hors is derived from the Proto-Indo-European base *kurs, to run, which also gave rise to hurry, carry and current (the latter two are borrowings from French). Hors eventually replaced eoh, fitting a pattern elsewhere in Germanic languages where the original names of sacred animals are abandoned for adjectives; for example, the word bear, meaning 'the brown one'. While the Ecclesiastical History and the Anglo-Saxon Chronicle refer to the brother as Horsa, in the History of the Britons his name is simply Hors. It has been suggested that Horsa may be a pet form of a compound name with the first element "horse".
Attestations
Ecclesiastical History of the English People
In his 8th-century Ecclesiastical History, Bede records that the first chieftains among the Angles, Saxons and Jutes in England were said to have been Hengist and Horsa. He relates that Horsa was killed in battle against the Britons and was thereafter buried in East Kent, where at the time of writing a monument still stood to him. According to Bede, Hengist and Horsa were the sons of Wictgils, son of Witta, son of Wecta, son of Woden.
Anglo-Saxon Chronicle
The Anglo-Saxon Chronicle, which exists in nine manuscripts and fragments compiled from the 9th to the 12th centuries, records that in the year 449, Vortigern invited Hengist and Horsa to Britain to assist his forces in fighting the Picts. The brothers landed at Eopwinesfleot (Ebbsfleet), and went on to defeat the Picts wherever they fought them. Hengist and Horsa sent word home to Germany describing "the worthlessness of the Britons, and the richness of the land" and asked for assistance. Their request was granted and support arrived. Afterward, more people arrived in Britain from "the three powers of Germany; the Old Saxons, the Angles, and the Jutes". The Saxons populated Essex, Sussex, and Wessex; the Jutes Kent, the Isle of Wight, and part of Hampshire; and the Angles East Anglia, Mercia, and Northumbria (leaving their original homeland, Angeln, deserted). The Worcester Chronicle (Chronicle D, compiled in the 11th century), and the Peterborough Chronicle (Chronicle E, compiled in the 12th century), include the detail that these forces were led by the brothers Hengist and Horsa, sons of Wihtgils, son of Witta, son of Wecta, son of Woden, but this information is not included in the A, B, C, or F versions.
In the entry for the year 455 the Chronicle details that Hengist and Horsa fought against Vortigern at Aylesford and that Horsa died there. Hengist took control of the kingdom with his son Esc. In 457, Hengist and Esc fought against British forces in Crayford "and there slew four thousand men". The Britons left the land of Kent and fled to London. In 465 Hengest and Esc fought again at the Battle of Wippedesfleot, probably near Ebbsfleet, and slew twelve British leaders. In the year 473, the final entry in the Chronicle mentioning Hengist or Horsa, Hengist and Esc are recorded as having taken "immense booty" and the Britons having "fled from the English like fire".
History of the Britons
The 9th-century History of the Britons, attributed to the Briton Nennius, records that, during the reign of Vortigern in Britain, three vessels of exiles from Germany arrived in Britain, commanded by Hengist and Horsa. The narrative then gives a genealogy of the two: Hengist and Horsa were sons of Guictglis, son of Guicta, son of Guechta, son of Vouden, son of Frealof, son of Fredulf, son of Finn, son of Foleguald, son of Geta. Geta was said to be the son of a god, yet "not of the omnipotent God and our Lord Jesus Christ", but rather "the offspring of one of their idols, and whom, blinded by some demon, they worshipped according to the custom of the heathen". In 447 AD Vortigern received Hengist and Horsa "as friends" and gave to the brothers the Isle of Thanet.
After the Saxons had lived on Thanet for "some time" Vortigern promised them supplies of clothing and other provisions on condition that they assist him in fighting the enemies of his country. As the Saxons increased in number the Britons became unable to keep their agreement, and so told them that their assistance was no longer needed and that they should go home.
Vortigern allowed Hengist to send for more of his countrymen to come over to fight for him. Messengers were sent to "Scythia", where "a number" of warriors were selected, and, with sixteen ships, the messengers returned. With the men came Hengist's beautiful daughter. Hengist prepared a feast, inviting Vortigern, Vortigern's officers, and Ceretic, his translator. Prior to the feast, Hengist enjoined his daughter to serve the guests plenty of wine and ale so that they would become drunk. At the feast Vortigern became enamored with her and promised Hengist whatever he liked in exchange for her betrothal. Hengist, having "consulted with the Elders who attended him of the Angle race", demanded Kent. Without the knowledge of the then-ruler of Kent, Vortigern agreed.
Hengist's daughter was given to Vortigern, who slept with her and deeply loved her. Hengist told Vortigern that he would now be both his father and adviser and that Vortigern would know no defeat with his counsel, "for the people of my country are strong, warlike, and robust". With Vortigern's approval, Hengist would send for his son and his brother to fight against the Scots and those who dwelt near the wall. Vortigern agreed and Ochta and Ebissa arrived with 40 ships, sailed around the land of the Picts, conquered "many regions", and assaulted the Orkney Islands. Hengist continued to send for more ships from his country, so that some islands where his people had previously dwelt are now free of inhabitants.
Vortigern had meanwhile incurred the wrath of Germanus, Bishop of Auxerre (by taking his own daughter for a wife and having a son by her) and had gone into hiding at the advice of his council. But at length his son Vortimer engaged Hengist and Horsa and their men in battle, drove them back to Thanet and there enclosed them and beset them on the western flank. The war waxed and waned; the Saxons repeatedly gained ground and were repeatedly driven back. Vortimer attacked the Saxons four times: first enclosing the Saxons in Thanet, secondly fighting at the river Derwent, the third time at Epsford, where both Horsa and Vortigern's son Catigern died, and lastly "near the stone on the shore of the Gallic sea", where the Saxons were defeated and fled to their ships.
After a "short interval" Vortimer died and the Saxons became established, "assisted by foreign pagans". Hengist convened his forces and sent to Vortigern an offer of peace. Vortigern accepted, and Hengist prepared a feast to bring together the British and Saxon leaders. However, he instructed his men to conceal knives beneath their feet. At the right moment, Hengist shouted nima der sexa (get your knives) and his men massacred the unsuspecting Britons. However, they spared Vortigern, who ransomed himself by giving the Saxons Essex, Sussex, Middlesex and other unnamed districts.
Germanus of Auxerre was acclaimed as commander of the British forces. By praying, singing "hallelujah" and crying to God, the Britons drove the Saxons to the sea. Germanus then prayed for three days and nights at Vortigern's castle and fire fell from heaven and engulfed the castle. Vortigern, Hengist's daughter, Vortigern's other wives, and all other inhabitants burned to death. Potential alternate fates for Vortigern are provided. However, the Saxons continued to increase in numbers, and after Hengist died his son Ochta succeeded him.
History of the Kings of Britain
In his twelfth-century work The History of the Kings of Britain, sometimes described as "pseudo-historical", Geoffrey of Monmouth adapted and greatly expanded the account in the History of the Britons. Hengist and Horsa appear in books 6 and 8:
Book 6
Geoffrey records that three brigantines or long galleys arrived in Kent, full of armed men and commanded by two brothers, Hengist and Horsa. Vortigern was then staying at Dorobernia (Canterbury), and ordered that the "tall strangers" be received peacefully and brought to him. When Vortigern saw the company, he immediately observed that the brothers "excelled all the rest both in nobility and in gracefulness of person". He asked what country they had come from and why they had come to his kingdom. Hengist ("whose years and wisdom entitled him to precedence") replied that they had left their homeland of Saxony to offer their services to Vortigern or some other prince, as part of a Saxon custom in which, when the country became overpopulated, able young men were chosen by lot to seek their fortunes in other lands. Hengist and Horsa were made generals over the exiles, as befitted their noble birth.
Vortigern was aggrieved when he learned that the strangers were pagans, but nonetheless rejoiced at their arrival, since he was surrounded by enemies. He asked Hengist and Horsa if they would help him in his wars, offering them land and "other possessions". They accepted the offer, settled on an agreement, and stayed with Vortigern at his court. Soon after, the Picts came from Alba with an immense army and attacked the northern parts of Vortigern's kingdom. In the ensuing battle "there was little occasion for the Britons to exert themselves, for the Saxons fought so bravely, that the enemy, formerly victorious, were speedily put to flight".
In gratitude Vortigern increased the rewards he had promised to the brothers. Hengist was given "large possessions of lands in Lindsey for the subsistence of himself and his fellow-soldiers". A "man of experience and subtlety", Hengist told Vortigern that his enemies assailed him from every quarter, and that his subjects wished to depose him and make Aurelius Ambrosius king. He asked the king to allow him to send word to Saxony for more soldiers. Vortigern agreed, adding that Hengist could invite over whom he pleased and that "you shall have no refusal from me in whatever you shall desire".
Hengist bowed low in thanks, and made a further request, that he be made a consul or prince, as befitted his birth. Vortigern responded that it was not in his power to do this, reasoning that Hengist was a foreign pagan and would not be accepted by the British lords. Hengist asked instead for leave to build a fortress on a piece of land small enough that it could be encircled by a leather thong. Vortigern granted this and ordered Hengist to invite more Saxons.
After executing Vortigern's orders, Hengist took a bull's hide and made it into a single thong, which he used to encircle a carefully chosen rocky place (perhaps at Caistor in Lindsey). Here he built the castle of Kaercorrei, or in Saxon Thancastre: "thong castle."
The messengers returned from Germany with eighteen ships full of the best soldiers they could get, as well as Hengist's beautiful daughter Rowena. Hengist invited Vortigern to see his new castle and the newly arrived soldiers. A banquet took place in Thancastre, at which Vortigern drunkenly asked Hengist to let him marry Rowena. Horsa and the men all agreed that Hengist should allow the marriage, on the condition that Vortigern give him Kent.
Vortigern and Rowena were immediately married and Hengist received Kent. The king, though delighted with his new wife, incurred the hatred of his nobles and of his three sons.
As his new father-in-law, Hengist made further demands of Vortigern:
As I am your father, I claim the right of being your counsellor: do not therefore slight my advice, since it is to my countrymen you must owe the conquest of all your enemies. Let us invite over my son Octa, and his brother Ebissa, who are brave soldiers, and give them the countries that are in the northern parts of Britain, by the wall, between Deira and Alba. For they will hinder the inroads of the barbarians, and so you shall enjoy peace on the other side of the Humber.
Vortigern agreed. Upon receiving the invitation, Octa, Ebissa, and another lord, Cherdich, immediately left for Britain with three hundred ships. Vortigern received them kindly, and gave them ample gifts. With their assistance, Vortigern defeated his enemies in every engagement. All the while Hengist continued inviting over yet more ships, adding to his numbers daily. Witnessing this, the Britons tried to get Vortigern to banish the Saxons, but on account of his wife he would not. Consequently, his subjects turned against him and took his son Vortimer for their king. The Saxons and the Britons, led by Vortimer, met in four battles. In the second, Horsa and Vortimer's brother, Catigern, slew one another. By the fourth battle, the Saxons had fled to Thanet, where Vortimer besieged them. When the Saxons could no longer bear the British onslaughts, they sent out Vortigern to ask his son to allow them safe passage back to Germany. While discussions were taking place, the Saxons boarded their ships and left, leaving their wives and children behind.
Rowena poisoned the victorious Vortimer, and Vortigern returned to the throne. At his wife's request he invited Hengist back to Britain, but instructed him to bring only a small retinue. Hengist, knowing Vortimer to be dead, instead raised an army of 300,000 men. When Vortigern received word of the imminent arrival of the vast Saxon fleet, he resolved to fight them. Rowena alerted her father of this, who, after considering various strategies, resolved to make a show of peace and sent ambassadors to Vortigern.
The ambassadors informed Vortigern that Hengist had only brought so many men because he did not know of Vortimer's death and feared further attacks from him. Now that there was no threat, Vortigern could choose from among the men the ones he wished to return to Germany. Vortigern was greatly pleased by these tidings, and arranged to meet Hengist on the first of May at the monastery of Ambrius.
Before the meeting, Hengist ordered his soldiers to carry long daggers beneath their clothing. At the signal Nemet oure Saxas (get your knives), the Saxons fell upon the unsuspecting Britons and massacred them, while Hengist held Vortigern by his cloak. 460 British barons and consuls were killed, as well as some Saxons whom the Britons beat to death with clubs and stones. Vortigern was held captive and threatened with death until he resigned control of Britain's chief cities to Hengist. Once free, he fled to Cambria.
Book 8
In Cambria, Merlin prophesied to Vortigern that the brothers Aurelius Ambrosius and Uther Pendragon (who had fled to Armorica as children after Vortigern killed their brother Constans and their father, King Constantine) would return to have their revenge and defeat the Saxons. They arrived the next day, and, after rallying the dispersed Britons, Aurelius was proclaimed king. Aurelius marched into Cambria and burned Vortigern alive in his tower, before setting his sights upon the Saxons.
Hengist was struck by terror at the news of Vortigern's death and fled with his army beyond the Humber. He took courage at the approach of Aurelius and selected the bravest among his men to defend him. Hengist told these chosen men not to be afraid of Aurelius, for he had brought less than 10,000 Armorican Britons (the native Britons were hardly worth taking into account), while there were 200,000 Saxons. Hengist and his men advanced towards Aurelius in a field called Maisbeli (probably Ballifield, near Sheffield), intending to take the Britons by surprise, but Aurelius anticipated them.
As they marched to meet the Saxons, Eldol, Duke of Gloucester, told Aurelius that he greatly wished to meet Hengist in combat, noting that "one of the two of us should die before we parted". He explained that he had been at the Treachery of the Long Knives, but had escaped when God threw him a stake to defend himself with, making him the only Briton present to survive. Meanwhile, Hengist was placing his troops into formation, giving directions, and walking through the lines of troops, "the more to spirit them up".
With the armies in formation, battle began between the Britons and Saxons, both sides suffering "no small loss of blood". Eldol focused on attempting to find Hengist, but had no opportunity to fight him. "By the especial favour of God" the Britons took the upper hand, and the Saxons withdrew and made for Kaerconan (Conisbrough). Aurelius pursued them, killing or enslaving any Saxon he met on the way. Realizing Kaerconan would not hold against Aurelius, Hengist stopped outside the town and ordered his men to make a stand, "for he knew that his whole security now lay in his sword".
Aurelius reached Hengist, and a "most furious" fight ensued, with the Saxons maintaining their ground despite heavy losses. They came close to winning before a detachment of horsemen from the Armorican Britons arrived. When Gorlois, Duke of Cornwall, arrived, Eldol knew the day was won and grabbed Hengist's helmet, dragging him into the British ranks. The Saxons fled. Hengist's son Octa retreated to York and his kinsman Eosa to Alclud (Dumbarton).
Three days after the battle, Aurelius called together a council of principal officers to decide what to do with Hengist. Eldol's brother Eldad, Bishop of Gloucester, said:
Though all should be unanimous for setting him at liberty, yet would I cut him to pieces. The prophet Samuel is my warrant, who, when he had Agag, king of Amalek, in his power, hewed him in pieces, saying, As thy sword hath made women childless, so shall thy mother be childless among women. Do therefore the same to Hengist, who is a second Agag.
Consequently, Eldol drew Hengist out of the city and cut off his head. Aurelius, "who showed moderation in all his conduct", arranged for him to be buried and for a mound to be raised over his corpse, according to the custom of pagans. Octa and Eosa surrendered to Aurelius, who granted them the country bordering Scotland and made a firm covenant with them.
Prose Edda
The Icelander Snorri Sturluson, writing in the 13th century, briefly mentions Hengist in the Prologue, the first book of the Prose Edda. The Prologue gives a euhemerized account of Germanic history, including the detail that Woden put three of his sons in charge of Saxony. The ruler of eastern Saxony was Veggdegg, one of whose sons was Vitrgils, the father of Vitta, the father of Hengist.
Horse-head gables
On farmhouses in Lower Saxony and Schleswig-Holstein, horse-head gables were referred to as "Hengst und Hors" (Low German for "stallion and mare") as late as around 1875. Rudolf Simek notes that these horse-head gables can still be seen today, and says that the horse-head gables confirm that Hengist and Horsa were originally considered mythological, horse-shaped beings. Martin Litchfield West comments that the horse heads may have been remnants of pagan religious practices in the area.
Theories
Finnesburg Fragment and Beowulf
A Hengest appears in line 34 of the Finnesburg Fragment, which describes the legendary Battle of Finnsburg. In Beowulf, a scop recites a composition summarizing the Finnsburg events, including information not provided in the fragment. Hengest is mentioned in lines 1082 and 1091.
Some scholars have proposed that the figure mentioned in both of these references is one and the same as the Hengist of the Hengist and Horsa accounts, though Horsa is not mentioned in either source. In his work Finn and Hengest, J. R. R. Tolkien argued that Hengist was a historical figure, and that Hengist came to Britain after the events recorded in the Finnesburg Fragment and Beowulf. Patrick Sims-Williams is more sceptical of the account, suggesting that Bede's Canterbury source, which he relied on for his account of Hengist and Horsa in the Ecclesiastical History, had confused two separate traditions.
Germanic twin brothers and divine Indo-European horse twins
Several sources attest that the Germanic peoples venerated a divine pair of twin brothers. The earliest reference to this practice derives from Timaeus (c. 345 – c. 250 BC). Timaeus records that the Celts of the North Sea were especially devoted to what he describes as Castor and Pollux. In his work Germania, Tacitus records the veneration of the Alcis, whom he identifies with Castor and Pollux. Germanic legends mention various brothers as founding figures. The 2nd- to 3rd-century historian Cassius Dio cites the brothers Raos and Raptos as the leaders of the Astings. According to Paul the Deacon's 8th-century History of the Lombards, the Lombards migrated southward from Scandinavia led by Ibur and Aio, while Saxo Grammaticus records in his 12th-century Deeds of the Danes that this migration was prompted by Aggi and Ebbi. In related Indo-European cultures, similar traditions are attested, such as the Dioscuri. Scholars have theorized that these divine twins in Indo-European cultures stem from divine twins in prehistoric Proto-Indo-European culture.
J. P. Mallory comments on the great importance of the horse in Indo-European religion, as exemplified "most obviously" by various mythical brothers appearing in Indo-European legend, including Hengist and Horsa:
Some would maintain that the premier animal of the Indo-European sacrifice and ritual was probably the horse. We have already seen how its embedment in Proto-Indo-European society lies not just in its lexical reconstruction but also in the proliferation of personal names which contain "horse" as an element among the various Indo-European peoples. Furthermore, we witness the importance of the horse in Indo-European rituals and mythology. One of the most obvious examples is the recurrent depiction of twins such as the Indic Asvins "horsemen," the Greek horsemen Castor and Pollux, the legendary Anglo-Saxon settlers Horsa and Hengist [...] or the Irish twins of Macha, born after she had completed a horse race. All of these attest the existence of Indo-European divine twins associated with or represented by horses.
Uffington White Horse
In his 17th-century work Monumenta Britannica, John Aubrey ascribes the Uffington White Horse hill figure to Hengist and Horsa, stating that "the White Horse was their Standard at the Conquest of Britain". However, he also ascribes the origins of the horse to the Ancient Britons, reasoning that the figure resembles horses depicted on Celtic Iron Age coins. As a result, advocates of a Saxon origin of the figure debated with those favouring an ancient British origin for three centuries after Aubrey's findings. In 1995, using optically stimulated luminescence dating, David Miles and Simon Palmer of the Oxford Archaeology Unit assigned the Uffington White Horse to the Bronze Age.
Aschanes
The Brothers Grimm identified Hengist with Aschanes, mythical first King of the Saxons, in their notes for legend number 413 of their German Legends. Editor and translator Donald Ward, in his commentary on the tale, regards the identification as untenable on linguistic grounds.
Modern influence
Hengist and Horsa have appeared in a variety of media in the modern period. Written between 1616 and 1620, Thomas Middleton's play Hengist, King of Kent features portrayals of both Hengist and Horsa (as Hersus). On 6 July 1776, the first committee for the production of the Great Seal of the United States convened. One of three members of the committee, Thomas Jefferson, proposed that one side of the seal feature Hengist and Horsa, "the Saxon chiefs from whom we claim the honor of being descended, and whose political principles and form of government we assumed".
"Hengist and Horsus" appear as antagonists in the play Vortigern and Rowena, which was touted as a newly discovered work by William Shakespeare in 1796, but was soon revealed as a hoax by William Henry Ireland. The pair have plaques in the Walhalla Temple at Regensburg, Bavaria, which honours distinguished figures of German history.
During World War II, two British military gliders took their names from the brothers: the Slingsby Hengist and the Airspeed Horsa. The 20th-century American poet Robinson Jeffers composed a poem titled Ode to Hengist and Horsa. Likewise, Jorge Luis Borges's poem Hengist Quiere Hombres (449 A.D.) was published in translation in The New Yorker in 1977.
In 1949, Prince Georg of Denmark came to Pegwell Bay in Kent to dedicate the longship Hugin, commemorating the landing of Hengest and Horsa at nearby Ebbsfleet 1500 years earlier in 449 AD.
Though Hengist and Horsa are not referenced in the medieval tales of King Arthur, some modern Arthurian tales do link them. For example, in Mary Stewart's Merlin Trilogy, Hengist and Horsa are executed by Ambrosius; Hengist is given full Saxon funeral honours, cremated with his weapons on a pyre. In Alfred Duggan's Conscience of the King, Hengist plays a major role in the early career of Cerdic Elesing, legendary founder of the kingdom of Wessex.
Part of the A299 road on the Isle of Thanet is named Hengist Way.
See also
Alcis (gods), Germanic horse brother deities venerated by the Naharvali, a Germanic people described by Tacitus in the 1st century AD
Ašvieniai, Lithuanian brother horse deities, whose crossed horse-head carvings are also placed on top of cottage roofs
Ashvins, Vedic twin deities of medicine
Divine twins, Indo-European mythical brother deities, often associated with horses
Horses in Germanic paganism, wider importance of horses in early Germanic cultures
Saxon Steed, a heraldic motif
Thracian horseman, sometimes linked to the Dioscuri
Notes
References
External links
5th-century births
5th-century deaths
5th-century English monarchs
5th century in England
Anglo-Saxon warriors
British traditional history
Brother duos
English heroic legends
Founding monarchs
Jutish people
Kent folklore
Kentish monarchs
Legendary English people
Origin myths
Possibly fictional people from Europe
Legendary progenitors
Castor and Pollux | Hengist and Horsa | [
"Astronomy"
] | 6,411 | [
"Castor and Pollux",
"Astronomical myths"
] |
14,374 | https://en.wikipedia.org/wiki/Haematopoiesis | Haematopoiesis (also hematopoiesis in American English, sometimes h(a)emopoiesis) is the formation of blood cellular components. All cellular blood components are derived from haematopoietic stem cells. In a healthy adult human, roughly ten billion (10^10) to a hundred billion (10^11) new blood cells are produced per day, in order to maintain steady state levels in the peripheral circulation.
Process
Haematopoietic stem cells (HSCs)
Haematopoietic stem cells (HSCs) reside in the medulla of the bone (bone marrow) and have the unique ability to give rise to all of the different mature blood cell types and tissues. HSCs are self-renewing cells: when they differentiate, at least some of their daughter cells remain as HSCs so the pool of stem cells is not depleted. This phenomenon is called asymmetric division. The other daughters of HSCs (myeloid and lymphoid progenitor cells) can follow any of the other differentiation pathways that lead to the production of one or more specific types of blood cell, but cannot renew themselves. The pool of progenitors is heterogeneous and can be divided into two groups: long-term self-renewing HSCs and transiently self-renewing HSCs, also called short-term HSCs. Haematopoiesis is one of the body's main vital processes.
Cell types
All blood cells are divided into three lineages.
Red blood cells, which are also called erythrocytes, are the oxygen-carrying cells. Mature erythrocytes are functional when released into the blood. The number of reticulocytes, which are immature red blood cells, gives an estimate of the rate of erythropoiesis.
Lymphocytes are the cornerstone of the adaptive immune system. They are derived from common lymphoid progenitors. The lymphoid lineage is composed of T-cells, B-cells, and natural killer cells. This is lymphopoiesis.
Cells of the myeloid lineage, which include granulocytes, megakaryocytes, monocytes, and macrophages, are derived from common myeloid progenitors, and are involved in such diverse roles as innate immunity and blood clotting. This is myelopoiesis.
Granulopoiesis (or granulocytopoiesis) is the haematopoiesis of granulocytes, except for mast cells, which are granulocytes that mature outside the bone marrow (extramedullary maturation).
Thrombopoiesis is haematopoiesis of thrombocytes (platelets).
Terminology
Between 1948 and 1950, the Committee for Clarification of the Nomenclature of Cells and Diseases of the Blood and Blood-forming Organs issued reports on the nomenclature of blood cells. An overview of the terminology is shown below, from earliest to final stage of development:
[root]blast
pro[root]cyte
[root]cyte
meta[root]cyte
mature cell name
The root for erythrocyte colony-forming units (CFU-E) is "rubri", for granulocyte-monocyte colony-forming units (CFU-GM) is "granulo" or "myelo" and "mono", for lymphocyte colony-forming units (CFU-L) is "lympho" and for megakaryocyte colony-forming units (CFU-Meg) is "megakaryo". According to this terminology, the stages of red blood cell formation would be: rubriblast, prorubricyte, rubricyte, metarubricyte, and erythrocyte. However, the following nomenclature seems to be, at present, the most prevalent:
Osteoclasts also arise from hemopoietic cells of the monocyte/neutrophil lineage, specifically CFU-GM.
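As a rough illustration of the committee naming scheme above, the sketch below assembles the stage names for a lineage from its root; the "rubri" example follows the terminology described in the text, while the function name and layout are purely illustrative assumptions.

```python
def maturation_stages(root: str, mature_name: str) -> list[str]:
    """Return the committee-style stage names, earliest to final, for a lineage."""
    return [
        f"{root}blast",      # earliest stage
        f"pro{root}cyte",
        f"{root}cyte",
        f"meta{root}cyte",
        mature_name,         # mature cell name
    ]

# Example from the text: CFU-E root "rubri", mature cell "erythrocyte".
print(maturation_stages("rubri", "erythrocyte"))
# -> ['rubriblast', 'prorubricyte', 'rubricyte', 'metarubricyte', 'erythrocyte']
```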
Location
In developing embryos, blood formation occurs in aggregates of blood cells in the yolk sac, called blood islands. As development progresses, blood formation occurs in the spleen, liver and lymph nodes. When bone marrow develops, it eventually assumes the task of forming most of the blood cells for the entire organism. However, maturation, activation, and some proliferation of lymphoid cells occurs in the spleen, thymus, and lymph nodes. In children, haematopoiesis occurs in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Extramedullary
In some cases, the liver, thymus, and spleen may resume their haematopoietic function, if necessary. This is called extramedullary haematopoiesis. It may cause these organs to increase in size substantially. During fetal development, since bones and thus the bone marrow develop later, the liver functions as the main haematopoietic organ. Therefore, the liver is enlarged during development. Extramedullary haematopoiesis and myelopoiesis may supply leukocytes in cardiovascular disease and inflammation during adulthood. Splenic macrophages and adhesion molecules may be involved in regulation of extramedullary myeloid cell generation in cardiovascular disease.
Maturation
As a stem cell matures it undergoes changes in gene expression that limit the cell types that it can become and moves it closer to a specific cell type (cellular differentiation). These changes can often be tracked by monitoring the presence of proteins on the surface of the cell. Each successive change moves the cell closer to the final cell type and further limits its potential to become a different cell type.
Cell fate determination
Two models for haematopoiesis have been proposed: the deterministic theory and the stochastic theory. For the stem cells and other undifferentiated blood cells in the bone marrow, the determination is generally explained by the deterministic theory of haematopoiesis, which holds that colony stimulating factors and other factors of the haematopoietic microenvironment determine which path of differentiation the cells follow. This is the classical way of describing haematopoiesis. In the stochastic theory, undifferentiated blood cells differentiate into specific cell types at random. This theory has been supported by experiments showing that within a population of mouse haematopoietic progenitor cells, underlying stochastic variability in the distribution of Sca-1, a stem cell marker, subdivides the population into groups exhibiting variable rates of cellular differentiation. For example, under the influence of erythropoietin (an erythrocyte-differentiation factor), a subpopulation of cells (as defined by the levels of Sca-1) differentiated into erythrocytes at a sevenfold higher rate than the rest of the population. Furthermore, it was shown that if allowed to grow, this subpopulation re-established the original subpopulation of cells, supporting the theory that this is a stochastic, reversible process. Another level at which stochasticity may be important is in the process of apoptosis and self-renewal. In this case, the haematopoietic microenvironment prevails upon some of the cells to survive while others undergo apoptosis and die. By regulating this balance between survival and apoptosis, the bone marrow can alter the quantity of different cell types ultimately produced.
Growth factors
Red and white blood cell production is regulated with great precision in healthy humans, and the production of leukocytes is rapidly increased during infection. The proliferation and self-renewal of these cells depend on growth factors. One of the key players in self-renewal and development of haematopoietic cells is stem cell factor (SCF), which binds to the c-kit receptor on the HSC. Absence of SCF is lethal. There are other important glycoprotein growth factors which regulate the proliferation and maturation, such as interleukins IL-2, IL-3, IL-6, IL-7. Other factors, termed colony-stimulating factors (CSFs), specifically stimulate the production of committed cells. Three CSFs are granulocyte-macrophage CSF (GM-CSF), granulocyte CSF (G-CSF) and macrophage CSF (M-CSF). These stimulate granulocyte formation and are active on either progenitor cells or end product cells.
Erythropoietin is required for a myeloid progenitor cell to become an erythrocyte, while thrombopoietin makes myeloid progenitor cells differentiate into megakaryocytes (thrombocyte-forming cells).
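As a compact, purely illustrative summary of the factor-to-lineage relationships named in the two paragraphs above, one could collect them in a simple mapping; the structure below is a sketch, not an exhaustive or authoritative reference.

```python
# Illustrative mapping of the growth factors named above to the effects
# described in the text. This is a sketch, not an exhaustive reference.

growth_factor_effects = {
    "SCF (stem cell factor)": "HSC self-renewal and development, via the c-kit receptor",
    "IL-2, IL-3, IL-6, IL-7": "proliferation and maturation of haematopoietic cells",
    "GM-CSF": "stimulates granulocyte and macrophage formation",
    "G-CSF": "stimulates granulocyte formation",
    "M-CSF": "stimulates macrophage formation",
    "Erythropoietin (EPO)": "myeloid progenitor -> erythrocyte",
    "Thrombopoietin (TPO)": "myeloid progenitor -> megakaryocyte (platelet-forming cell)",
}

for factor, effect in growth_factor_effects.items():
    print(f"{factor}: {effect}")
```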
Transcription factors
Growth factors initiate signal transduction pathways, which lead to activation of transcription factors. Growth factors elicit different outcomes depending on the combination of factors and the cell's stage of differentiation. For example, long-term expression of PU.1 results in myeloid commitment, and short-term induction of PU.1 activity leads to the formation of immature eosinophils. Recently, it was reported that transcription factors such as NF-κB can be regulated by microRNAs (e.g., miR-125b) in haematopoiesis.
The first key player in differentiation from HSC to a multipotent progenitor (MPP) is the transcription factor CCAAT-enhancer binding protein α (C/EBPα). Mutations in C/EBPα are associated with acute myeloid leukaemia. From this point, cells can differentiate either along the erythroid-megakaryocyte lineage or along the lymphoid and myeloid lineages, which share a common progenitor called the lymphoid-primed multipotent progenitor. Two main transcription factors govern this branch point: GATA-1, which drives the erythroid-megakaryocyte lineage, and PU.1, which leads to the lymphoid-primed multipotent progenitor.
Other transcription factors include Ikaros (B cell development), Gfi1 (promotes Th2 development and inhibits Th1), and IRF8 (basophils and mast cells). Significantly, certain factors elicit different responses at different stages of haematopoiesis, for example C/EBPα in neutrophil development or PU.1 in monocyte and dendritic cell development. It is important to note that these processes are not unidirectional: differentiated cells may regain attributes of progenitor cells.
An example is the factor PAX5, which is important in B cell development and is associated with lymphomas. Surprisingly, in conditional Pax5 knock-out mice, peripheral mature B cells were able to de-differentiate into early bone marrow progenitors. These findings show that transcription factors act as caretakers of differentiation level and not only as initiators.
Mutations in transcription factors are tightly connected to blood cancers such as acute myeloid leukaemia (AML) and acute lymphoblastic leukaemia (ALL). For example, Ikaros is known to be a regulator of numerous biological events. Mice lacking Ikaros have no B cells, natural killer cells or T cells. Ikaros has six zinc-finger domains: four form a conserved DNA-binding domain and two mediate dimerization. Importantly, different zinc fingers bind to different sites in DNA, which accounts for the pleiotropic effects of Ikaros and its varied involvement in cancer; its mutations are mainly associated with BCR-ABL-positive patients, in whom they are a poor prognostic marker.
Other animals
In some vertebrates, haematopoiesis can occur wherever there is a loose stroma of connective tissue and slow blood supply, such as the gut, spleen or kidney.
Unlike eutherian mammals, the liver of newborn marsupials is actively haematopoietic.
See also
Clonal hematopoiesis
Erythropoiesis-stimulating agents
Haematopoietic stimulants:
Granulocyte colony-stimulating factor
Granulocyte macrophage colony-stimulating factor
Leukocyte extravasation
References
Further reading
External links
Hematopoietic cell lineage in KEGG
Hematopoiesis and bone marrow histology
Hematopoiesis
Histology | Haematopoiesis | [
"Chemistry"
] | 2,640 | [
"Histology",
"Microscopy"
] |
14,375 | https://en.wikipedia.org/wiki/Hogmanay | Hogmanay is the Scots word for the last day of the old year and is synonymous with the celebration of the New Year in the Scottish manner. It is normally followed by further celebration on the morning of New Year's Day (1 January) and, in some cases, 2 January, a Scottish bank holiday. In a few contexts, the word Hogmanay is used more loosely to describe the entire period consisting of the last few days of the old year and the first few days of the new year. For instance, not all events held under the banner of Edinburgh's Hogmanay take place on 31 December.
Customs vary throughout Scotland and usually include gift-giving and visiting the homes of friends and neighbours, with particular attention given to the first-foot, the first guest of the new year.
Etymology
The etymology of the word is obscure. The earliest proposed etymology comes from the 1693 Scotch Presbyterian Eloquence, which held that the term was a corruption of a presumed earlier form meaning "holy month". The three main modern theories derive it from a French, Norse or Gaelic root.
The word is first recorded in a Latin entry of 1443 from the West Riding of Yorkshire. The first appearance in the Scots language came in 1604 in the records of Elgin, as hagmonay. Subsequent 17th-century spellings include Hagmena (1677), Hogmynae night (1681), and Hagmane (1693) in an entry of the Scotch Presbyterian Eloquence.
Although Hogmanay is currently the predominant spelling and pronunciation, a number of variant spellings and pronunciations have been recorded, including forms from Roxburghshire and Shetland, with the first syllable varying considerably between dialects.
Possible French etymologies
The term may have been introduced to Middle Scots via French. The most commonly cited explanation is a derivation from a northern French dialectal word and its variants, themselves derived from a 16th-century Middle French word meaning either a gift given at New Year, a children's cry for such a gift, or New Year's Eve itself. The Oxford English Dictionary reports this theory, saying that the term is a borrowing of a medieval French cry used to welcome the new year, consisting of an unknown first element plus a phrase meaning "the new year".
This explanation is supported by a children's tradition, observed up to the 1960s in parts of Scotland at least, of visiting houses in their locality on New Year's Eve and requesting and receiving small treats such as sweets or fruit. The second element would appear to be a word for "the New Year", with sources suggesting a druidical origin of the practice overall. Comparable are a Norman term and the obsolete customs in Jersey of crying out, and in Guernsey of asking, for a New Year gift. In Québec, a similarly named custom was a door-to-door collection for people experiencing poverty.
Compare also an apparent Spanish cognate, with a suggested Latin derivation meaning "in this year".
Other suggestions include derivations from phrases meaning "lead to the mistletoe", "bring to the beggars", "at the mistletoe the new year", or "(the) man is born".
Possible Goidelic etymologies
The word may have come from the Goidelic languages. Frazer and Kelley report a Manx new-year song that begins with the line "To-night is New Year's Night, Hogunnaa", but did not record the full text in Manx. Kelley himself uses one spelling of the term, whereas other sources parse it differently and give the modern Manx form as Hob dy naa. Manx dictionaries, though, generally gloss the corresponding term as "Hallowe'en", as do many of the more Manx-specific folklore collections.
In this context, it is also recorded that in the south of Scotland (for example Roxburghshire), there is no m, the word thus being Hunganay, which could suggest the m is intrusive.
Another theory occasionally encountered is a derivation from a Gaelic phrase meaning "I raised the cry", which resembles Hogmanay in pronunciation and was part of the rhymes traditionally recited at New Year, but it is unclear if this is simply a case of folk etymology.
Overall, Gaelic consistently refers to New Year's Eve by names meaning "the Night of the New Year" and "the Night of the Calends".
Possible Norse etymologies
Other authors reject both the French and Goidelic theories and instead suggest that this word's Norman French, Scots, and Goidelic variants have a common Norse root. It is suggested that the full forms
"Hoginanaye-Trollalay/Hogman aye, Troll a lay" (with a Manx cognate )
"Hogmanay, Trollolay, give us of your white bread and none of your gray"
invoke the hill-men (compare Icelandic and Anglo-Saxon terms for such beings), or "elves", and banish the trolls into the sea (from a Norse phrase meaning "into the sea"). Repp furthermore links "Trollalay/Trolla-laa" with the rhyme recorded in Percy's Reliques: "Trolle on away, trolle on awaye. Synge heave and howe rombelowe trolle on away", which he reads as a straightforward invocation of troll-banning.
Origins
It is speculated that the roots of Hogmanay may reach back to the celebration of the winter solstice among the Norse, as well as to customs incorporated from the Gaelic celebration of Samhain. The Vikings celebrated Yule, which later contributed to the Twelve Days of Christmas, or the "Daft Days" as they were sometimes called in Scotland. Christmas was not celebrated as a festival, and Hogmanay was the more traditional celebration in Scotland. This may have been a result of the Protestant Reformation, after which Christmas was seen as "too Papist".
Hogmanay was also celebrated in the far north of England, down to and including Richmond in North Yorkshire. It was traditionally known as 'Hagmena' in Northumberland, 'Hogmina' in Cumberland, and 'Hagman-ha' or 'Hagman-heigh' in the North Riding of Yorkshire.
Customs
There are many customs, both national and local, associated with Hogmanay. The most widespread national custom is the practice of first-footing, which starts immediately after midnight. This involves being the first person to cross the threshold of a friend or neighbour and often involves the giving of symbolic gifts such as salt (less common today), coal, shortbread, whisky, and black bun (a rich fruit cake), intended to bring different kinds of luck to the householder. Food and drink (as the gifts) are then given to the guests. This may go on throughout the early morning hours and into the next day (although modern days see people visiting houses well into the middle of January). The first-foot is supposed to set the luck for the rest of the year. Traditionally, tall, dark-haired men are preferred as the first-foot.
Local customs
An example of a local Hogmanay custom is the fireball swinging that takes place in Stonehaven, Aberdeenshire, in northeast Scotland. This involves local people making up "balls" of chicken wire filled with old newspaper, sticks, rags, and other dry flammable material, each attached to a length of wire, chain or nonflammable rope.
At the end of the ceremony, fireballs still burning are cast into the harbour. Many people enjoy this display, and large crowds flock to see it, with 12,000 attending the 2007/2008 event. In recent years, additional attractions have been added to entertain the crowds as they wait for midnight, such as fire poi, a pipe band, street drumming, and a firework display after the last fireball is cast into the sea. The festivities are now streamed live over the Internet.
Another example of a fire festival is the burning of the clavie in the town of Burghead in Moray.
In the east coast fishing communities and Dundee, first-footers once carried a decorated herring. And in Falkland in Fife, local men marched in torchlight procession to the top of the Lomond Hills as midnight approached. Bakers in St Andrews baked special cakes for their Hogmanay celebration (known as "Cake Day") and distributed them to local children.
Institutions also had their own traditions. For example, amongst the Scottish regiments, officers waited on the men at special dinners while at the bells, the Old Year is piped out of barrack gates. The sentry then challenges the new escort outside the gates: "Who goes there?" The answer is "The New Year, all's well."
An old custom in the Highlands is to celebrate Hogmanay with the saining (Scots for 'protecting, blessing') of the household and livestock. Early on New Year's morning, householders drink and then sprinkle 'magic water' from 'a dead and living ford' around the house (a 'dead and living ford' refers to a river ford that is routinely crossed by both the living and the dead). After the sprinkling of the water in every room, on the beds and all the inhabitants, the house is sealed up tight and branches of juniper are set on fire and carried throughout the house and byre. The juniper smoke is allowed to thoroughly fumigate the buildings until it causes sneezing and coughing among the inhabitants. Then, all the doors and windows are flung open to let in the cold, fresh air of the new year. The woman of the house then administers 'a restorative' from the whisky bottle, and the household sits down to its New Year breakfast.
"Auld Lang Syne"
The Hogmanay custom of singing "Auld Lang Syne" has become common in many countries. "Auld Lang Syne" is a Scots poem by Robert Burns, based on traditional and other earlier sources. It is common to sing this in a circle of linked arms crossed over one another as the clock strikes midnight for New Year's Day. However, it is only intended that participants link arms at the beginning of the final verse before rushing into the centre as a group.
In the media
Between 1957 and 1968, a New Year's Eve television programme, The White Heather Club, was presented to herald the Hogmanay celebrations.
The show was presented by Andy Stewart, who always began by singing, "Come in, come in, it's nice to see you...." The show always ended with Stewart and the cast singing "Haste ye Back".
The performers were Jimmy Shand and band, Ian Powrie and his band, Scottish country dancers: Dixie Ingram and the Dixie Ingram Dancers, Joe Gordon Folk Four, James Urquhart, Ann & Laura Brand, Moira Anderson & Kenneth McKellar. All the male dancers and Andy Stewart wore kilts, and the female dancers wore long white dresses with tartan sashes.
Following the demise of the White Heather Club, Andy Stewart continued to feature regularly in TV Hogmanay shows until his retirement. His last appearance was in 1992.
In the 1980s, comedian Andy Cameron presented the Hogmanay Show (on STV in 1983 and 1984 and from 1985 to 1990 on BBC Scotland) while Peter Morrison presented the show A Highland Hogmanay on STV/Grampian, axed in 1993.
For many years, a staple of New Year's Eve television programming in Scotland was the comedy sketch show Scotch and Wry, featuring the comedian Rikki Fulton, which invariably included a hilarious monologue from him as the gloomy Reverend I.M. Jolly.
Since 1993, the programmes that have been mainstays on BBC Scotland on Hogmanay have been Hogmanay Live and Jonathan Watson's football-themed sketch comedy show, Only an Excuse?.
Presbyterian influence
The 1693 Scotch Presbyterian Eloquence contained one of the first mentions of the holiday in official church records. Hogmanay was treated with general disapproval. Still, in Scotland, Hogmanay and New Year's Day are as important as Christmas Eve and Christmas Day.
Although Christmas Day held its normal religious nature in Scotland amongst its Catholic and Episcopalian communities, the Presbyterian national church, the Church of Scotland, discouraged the celebration of Christmas for nearly 400 years; it only became a public holiday in Scotland in 1958. Conversely, 1 and 2 January are public holidays, and Hogmanay is still associated with as much celebration as Christmas in Scotland.
Major celebrations
As in much of the world, the largest Scottish cities – Glasgow, Edinburgh and Aberdeen – hold all-night celebrations, as do Stirling and Inverness. The Edinburgh Hogmanay celebrations are among the largest in the world. Celebrations in Edinburgh in 1996–97 were recognised by the Guinness Book of Records as the world's largest New Year's party, with approximately 400,000 people in attendance. Numbers were then restricted due to safety concerns.
In 2003–04, most organised events were cancelled at short notice due to very high winds. The Stonehaven Fireballs went ahead as planned, however, with 6,000 people braving the stormy weather to watch 42 fireball swingers process along the High Street. Similarly, the 2006–07 celebrations in Edinburgh, Glasgow, and Stirling were all cancelled on the day, again due to high winds and heavy rain. The Aberdeen celebration, however, went ahead and was opened by pop music group Wet Wet Wet.
Many Hogmanay festivities were cancelled in 2020–21 and 2021–22 due to the COVID-19 pandemic in Scotland.
The Edinburgh event was also cancelled in 2024–25 due to high winds.
Ne'erday
Most Scots celebrate New Year's Day with a special dinner, usually steak pie.
Handsel Day
Historically, presents were given in Scotland on the first Monday of the New Year, and a roast dinner would be eaten to celebrate the festival. Handsel was a word for a gift, hence "Handsel Day". In modern Scotland, this practice has died out.
The period of festivities running from Christmas to Handsel Monday, including Hogmanay and Ne'erday, is known as the Daft Days.
See also
Christmas in Scotland
Calennig, the last day of the year in Wales
Footnotes
Notes
References
External links
Edinburgh's Hogmanay (official site)
Hogmanay.net
"The Origins, History and Traditions of Hogmanay", The British Newspaper Archive (31 December 2012)
Hogmanay traditional bonfire
Annual events in Scotland
December observances
Festivals in Scotland
Holidays in Scotland
New Year celebrations
Winter events in Scotland
Winter solstice | Hogmanay | [
"Astronomy"
] | 3,162 | [
"Astronomical events",
"Winter solstice"
] |
14,380 | https://en.wikipedia.org/wiki/Helium-3 | Helium-3 (3He; see also helion) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and protium (ordinary hydrogen) are the only stable nuclides with more protons than neutrons. It was discovered in 1939.
Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing.
Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its atoms would be aneutronic, releasing none of the dangerous neutron radiation of traditional fusion, but it would require much higher temperatures. The process may also unavoidably create other reactions that themselves would cause the surrounding material to become radioactive.
Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, though still lower in abundance than in the Solar System's gas giants.
History
The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells.
Physical properties
Due to its low atomic mass of 3.016 u, helium-3 has some physical properties different from those of helium-4, which has a mass of 4.0026 u. On account of the weak, induced dipole–dipole interaction between the helium atoms, their macroscopic physical properties are mainly determined by their zero-point energy. Also, the microscopic properties of helium-3 cause it to have a higher zero-point energy than helium-4. This implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can.
The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, but with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion.
Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower at 3.35 K, compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 when it is at its boiling point: 59 g/L compared to 125 g/L of helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4.
Superfluidity
An important property of helium-3, which distinguishes it from the more common helium-4, is that its nucleus is a fermion since it contains an odd number of spin-1/2 particles. Helium-4 nuclei are bosons, containing an even number of spin-1/2 particles. This is a direct result of the addition rules for quantized angular momentum. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition: A fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate. Such a mechanism is not available for helium-3 atoms, which are fermions. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity. Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee, Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov, Vitaly Ginzburg, and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3.
In a zero magnetic field, there are two distinct superfluid phases of 3He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase which has an isotropic energy gap. The A-phase is the higher temperature, higher pressure phase that is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p-wave superfluid, with spin one, S=1, and angular momentum one, L=1. The ground state corresponds to total angular momentum zero, J=S+L=0 (vector addition). Excited states are possible with non-zero total angular momentum, J>0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3He. This purity is due to all 4He phase separating entirely and all other materials solidifying and sinking to the bottom of the liquid, making the A- and B-phases of 3He the most pure condensed matter state possible.
Natural abundance
Terrestrial abundance
3He is a primordial substance in the Earth's mantle, thought to have become entrapped in the Earth during planetary formation. The ratio of 3He to 4He within the Earth's crust and mantle is less than the estimates of solar disk composition obtained from meteorite and lunar samples, with terrestrial materials generally containing lower 3He/4He ratios due to production of 4He from radioactive decay.
3He has a cosmological ratio of 300 atoms per million atoms of 4He (at. ppm), leading to the assumption that the original ratio of these primordial gases in the mantle was around 200-300 ppm when Earth was formed. Over Earth's history, alpha-particle decay of uranium, thorium and other radioactive isotopes has generated significant amounts of 4He, such that only around 7% of the helium now in the mantle is primordial helium, lowering the total 3He/4He ratio to around 20 ppm (consistent with roughly 0.07 × 300 ≈ 21 ppm, since virtually all of the mantle's 3He is primordial). Ratios of 3He/4He in excess of the atmospheric ratio are indicative of a contribution of 3He from the mantle. Crustal sources are dominated by the 4He produced by radioactive decay.
The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly. Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million.
Helium is also present as up to 7% of some natural gas sources, and large sources have over 0.5% (above 0.2% makes it viable to extract). The fraction of 3He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. Hence the US 2002 stockpile of 1 billion normal m3 would have contained about of helium-3. According to American physicist Richard Garwin, about or almost of 3He is available annually for separation from the US natural gas stream. If the process of separating out the 3He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental energy cost range from NTP, excluding the cost of infrastructure and equipment. Algeria's annual gas production is assumed to contain 100 million normal cubic metres and this would contain between of helium-3 (about ) assuming a similar 3He fraction.
3He is also present in the Earth's atmosphere. The natural abundance of 3He in naturally occurring helium gas is 1.38 × 10−6 (1.38 parts per million). The partial pressure of helium in the Earth's atmosphere is about , and thus helium accounts for 5.2 parts per million of the total pressure (101325 Pa) in the Earth's atmosphere, and 3He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about , the mass of 3He in the Earth's atmosphere is the product of these numbers, or about of 3He. (In fact the effective figure is ten times smaller, since the above ppm are ppmv and not ppmw. One must multiply by 3 (the molecular mass of helium-3) and divide by 29 (the mean molecular mass of the atmosphere), resulting in of helium-3 in the Earth's atmosphere.)
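The arithmetic sketched in the parenthetical above can be reproduced directly. In the snippet below the helium volume fraction (5.2 ppm) and the 3He fraction of natural helium (1.38 ppm) come from the text; the total atmospheric mass of about 5.15 × 10^18 kg is an assumed standard value supplied for illustration, since the extracted text omits the numbers.

```python
# Rough estimate of the helium-3 inventory of Earth's atmosphere.
ATMOSPHERE_MASS_KG = 5.15e18   # assumed total mass of the atmosphere
HE_VOL_FRACTION = 5.2e-6       # helium volume fraction (5.2 ppmv, from the text)
HE3_FRACTION = 1.38e-6         # 3He fraction of natural helium (1.38 ppm)
M_HE3 = 3.016                  # molar mass of helium-3 (g/mol)
M_AIR = 28.97                  # mean molar mass of air (g/mol)

he3_ppv = HE_VOL_FRACTION * HE3_FRACTION        # ~7.2e-12 by volume (parts per trillion)
he3_mass_fraction = he3_ppv * M_HE3 / M_AIR     # ppmv -> ppmw conversion (factor ~3/29)
he3_mass_kg = ATMOSPHERE_MASS_KG * he3_mass_fraction

print(f"3He volume fraction: {he3_ppv:.1e}")                        # ~7.2e-12
print(f"3He mass in the atmosphere: {he3_mass_kg / 1000:,.0f} t")   # a few thousand tonnes
```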
3He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium (3H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4He by alpha particle emissions.
The total amount of helium-3 in the mantle may be in the range of . Most mantle is not directly accessible. Some helium-3 leaks up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only per year is emitted to the atmosphere. Mid-ocean ridges emit another . Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total. Wittenberg cited Anderson's estimate of another in interplanetary dust particles on the ocean floors. In the 1994 study, extracting helium-3 from these sources consumes more energy than fusion would release.
Lunar surface
See Extraterrestrial mining or Lunar resources
Solar nebula (primordial) abundance
One early estimate of the primordial ratio of 3He to 4He in the solar nebula comes from a measurement of their ratio in the atmosphere of Jupiter, made by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10,000, or 100 parts of 3He per million parts of 4He. This is roughly the same ratio of the isotopes as in lunar regolith, which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium and thorium as well as their decay products and extinct radionuclides.
Human production
Tritium decay
Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium, given its very low natural abundance and its very high cost.
Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) DOE Isotope Program.
While tritium has several different experimentally determined values of its half-life, NIST lists (). It decays into helium-3 by beta decay as in this nuclear equation:
3H → 3He + e− + ν̄e
Among the total released energy of , the part taken by the electron's kinetic energy varies, with an average of , while the remaining energy is carried off by the nearly undetectable electron antineutrino.
Beta particles from tritium can penetrate only about of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN).
The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting.
Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of , so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. As tritium forms a stable compound with oxygen (tritiated water) while helium-3 does not, the storage and collection process could continuously collect the material that outgasses from the stored material.
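Because helium-3 simply grows in as stored tritium decays, the quantity available from a storage reservoir follows the standard radioactive-ingrowth law. The sketch below assumes a tritium half-life of about 12.32 years; that numerical value is an assumption supplied for illustration, as the text above does not state it.

```python
import math

T_HALF_YEARS = 12.32  # assumed tritium half-life (years)

def he3_grown(initial_tritium_grams: float, years: float) -> float:
    """Approximate mass of helium-3 grown from a stored batch of tritium.

    Ignores the negligible mass difference between a 3H and a 3He atom.
    """
    decayed_fraction = 1.0 - math.exp(-math.log(2) * years / T_HALF_YEARS)
    return initial_tritium_grams * decayed_fraction

# Example: roughly 5.5% of a batch decays to helium-3 in the first year of storage.
print(f"{he3_grown(1000.0, 1.0):.1f} g of 3He per kg of tritium after 1 year")
```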
Tritium is a critical component of nuclear weapons and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and tritium in storage. Helium-3 removed during this process is marketed for other applications.
For decades this has been, and remains, the principal source of the world's helium-3. Since the signing of the START I Treaty in 1991 the number of nuclear warheads that are kept ready for use has decreased. This has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of (approximately ) per year in 2008. Price at auction, historically about , reached as high as . Since then, demand for helium-3 has declined to about per year due to the high cost and efforts by the DOE to recycle it and find substitutes. Assuming a density of at $100/l helium-3 would be about a thirtieth as expensive as tritium (roughly vs roughly ) while at $2000/l helium-3 would be about half as expensive as tritium ( vs ).
The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority's Watts Bar Nuclear Generating Station in 2010. In this process tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods. Periodically the TPBARs are replaced and the tritium extracted.
Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors.
Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. India and Canada, the two countries with the largest heavy water reactor fleet, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand of either tritium or helium-3.
As tritium is also produced inadvertently in various processes in light water reactors (see the article on tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium (per 2018 figures) at La Hague reprocessing facility is taken as a basis, the amounts discharged ( at La Hague) are not nearly enough to satisfy demand, even if 100% recovery is achieved.
Uses
Helium-3 spin echo
Helium-3 can be used in spin-echo experiments probing surface dynamics; such experiments are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University.
Neutron detection
Helium-3 is an important isotope in instrumentation for neutron detection. It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction
n + 3He → 3H + 1H + 0.764 MeV
into charged particles: a tritium ion (T, 3H) and a hydrogen ion, or proton (p, 1H), which are then detected by creating a charge cloud in the stopping gas of a proportional counter or a Geiger–Müller tube.
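To illustrate why the large absorption cross section matters in practice, the capture probability for a thermal neutron crossing a gas-filled tube can be estimated with a simple exponential-attenuation model. The cross section (about 5330 barns for thermal neutrons) and the tube pressure and path length below are assumed example values, not figures taken from the text.

```python
import math

BARN = 1e-28                   # m^2
SIGMA_THERMAL = 5330 * BARN    # assumed 3He(n,p)3H cross section for thermal neutrons
N_A = 6.022e23                 # atoms per mole
MOLAR_VOLUME = 22.4e-3         # m^3 per mole of ideal gas near standard conditions

def capture_probability(pressure_atm: float, path_length_m: float) -> float:
    """Probability that a thermal neutron is absorbed while crossing the 3He gas."""
    number_density = pressure_atm * N_A / MOLAR_VOLUME   # atoms per m^3
    return 1.0 - math.exp(-number_density * SIGMA_THERMAL * path_length_m)

# Example: a 2.5 cm path through 4 atm of 3He captures most incident thermal neutrons.
print(f"{capture_probability(4.0, 0.025):.0%}")   # on the order of 75%
```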
Furthermore, the absorption process is strongly spin-dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other. This effect is employed in neutron polarization analysis, a technique which probes for magnetic properties of matter.
The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology.
Cryogenics
A helium-3 refrigerator uses helium-3 to achieve temperatures of 0.2 to 0.3 kelvin. A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin.
Medical imaging
Helium-3 nuclei have an intrinsic nuclear spin of 1/2, and a relatively high magnetogyric ratio. Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium, inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. In essence, this process effectively aligns the nuclear spins with the magnetic field in order to enhance the NMR signal. The hyperpolarized gas may then be stored at pressures of 10 atm, for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure, and measure the ventilation/perfusion ratio. This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD), emphysema, cystic fibrosis, and asthma.
Radio energy absorber for tokamak plasma experiments
Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect.
Nuclear fuel
3He can be produced by the low temperature fusion of hydrogen isotopes: 2H + 1H → 3He + γ + 4.98 MeV. If the fusion temperature is below that at which the helium nuclei would fuse, the reaction produces a high-energy helium-3 nucleus, which quickly acquires electrons to become a stable light helium ion that can be utilized directly as a source of electricity without producing dangerous neutrons.
3He can be used in fusion reactions by either of the reactions 2H + 3He → 4He + p + 18.3 MeV, or 3He + 3He → 4He + 2p + 12.86 MeV.
The conventional deuterium + tritium ("D–T") fusion process produces energetic neutrons which render reactor components radioactive with activation products. The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton, can be contained by means of electric and magnetic fields. The momentum energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation.
Because of the higher Coulomb barrier, the temperatures required for fusion are much higher than those of conventional D–T fusion. Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction (2H + 2H → 3He + n) does produce a neutron. Reaction rates vary with temperature, but the D–3He reaction rate is never greater than 3.56 times the D–D reaction rate. Therefore, fusion using D–3He fuel at the right temperature and a D-lean fuel mixture can produce a much lower neutron flux than D–T fusion, but is not clean, negating some of its main attraction.
The second possibility, fusing 3He with itself (3He + 3He), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D–3He reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, which in turn directly generates electricity. 3He–3He fusion is feasible as demonstrated in the laboratory and has immense advantages, but commercial viability is many years in the future.
The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the reaction is 18.4 MeV, which corresponds to some 493 megawatt-hours (4.93×108 W·h) per three grams (one mole) of . If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of . Thus, a year's production (at 6 grams for each operation hour) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h (1.14×1015 W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency.
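The figures quoted in the preceding paragraph follow from straightforward unit conversions; the sketch below rederives them from the stated 18.4 MeV per D–3He reaction, under the same (physically unrealistic) assumption of 100% conversion to electricity.

```python
# Back-of-the-envelope check of the D-3He energy figures quoted above.
MEV_TO_J = 1.602e-13       # joules per MeV
N_A = 6.022e23             # reactions per mole of 3He consumed

energy_per_mole_wh = 18.4 * MEV_TO_J * N_A / 3600.0   # one mole of 3He is about 3 g
print(f"Energy per 3 g of 3He: {energy_per_mole_wh / 1e6:.0f} MWh")   # ~493 MWh

# Helium-3 needed, at 100% conversion, to supply 1,140 billion kWh of electricity:
demand_wh = 1.14e15
tonnes_needed = demand_wh / energy_per_mole_wh * 3.0 / 1e6
print(f"3He required: {tonnes_needed:.1f} tonnes")   # roughly 7 tonnes
```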
A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, . This reaction produces an alpha particle and a high-energy proton. The most important potential advantage of this fusion reaction for power production as well as other applications lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity, through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine-powered electrical generator.
He-3 power plants
There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution, and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant. Financial break even at today's wholesale electricity prices (5 US cents per kilowatt-hour) would occur after five 1-gigawatt plants were on line, replacing old conventional plants or meeting new demand.
The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as National Ignition Facility) and magnetic confinement fusion (such as ITER and Wendelstein 7-X). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D+3He, the barrier is much higher, and it is even higher for 3He–3He. The immense cost of reactors like ITER and the National Ignition Facility is largely due to their immense size, yet to scale up to higher plasma temperatures would require reactors far larger still. The 14.7 MeV proton and 3.6 MeV alpha particle from D–3He fusion, plus the higher conversion efficiency, mean that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion reactions are not particularly high, requiring a reactor that is larger still or more reactors to produce the same amount of electricity.
In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".
Alternatives to He-3
To attempt to work around this problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D–3He fusion, a number of other reactors have been proposed – the Fusor, Polywell, Focus fusion, and many more. Many of these concepts, however, have fundamental problems with achieving a net energy gain, and they generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible; consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, if such systems worked, they could scale to the higher barrier aneutronic fuels, and so their proponents tend to promote p-B fusion, which requires no exotic fuel such as helium-3.
Extraterrestrial
Moon
Materials on the Moon's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, and may contain concentrations as much as 50 ppb in permanently shadowed regions. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith and use the helium-3 for fusion. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3).
The primary objective of Indian Space Research Organisation's first lunar probe called Chandrayaan-1, launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have held helium-3-related applications.
Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world".
In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, if funding can be found.
Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day, writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction, as compared to production on Earth.
Gas giants
Mining gas giants for helium-3 has also been proposed. The British Interplanetary Society's hypothetical Project Daedalus interstellar probe design was fueled by helium-3 mines in the atmosphere of Jupiter, for example.
See also
List of elements facing shortage
Notes and references
Bibliography
External links
The Nobel Prize in Physics 2003, presentation speech
Moon for Sale: A BBC Horizon documentary on the possibility of lunar mining of Helium-3
Helium-03
Nuclear fusion fuels
Superfluidity
MRI contrast agents | Helium-3 | [
"Physics",
"Chemistry",
"Materials_science"
] | 6,122 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Nuclear magnetic resonance",
"Phases of matter",
"Cryogenics",
"Isotopes",
"Superfluidity",
"Exotic matter",
"Condensed matter physics",
"Isotopes of helium",
"Nuclear physics",
"Matter",
"Fluid dynamics"
... |
14,381 | https://en.wikipedia.org/wiki/Hamiltonian%20%28quantum%20mechanics%29 | In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by , where the hat indicates that it is an operator. It can also be written as or .
Introduction
The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one.
Schrödinger Hamiltonian
One particle
By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form
where
is the potential energy operator and
is the kinetic energy operator in which is the mass of the particle, the dot denotes the dot product of vectors, and
is the momentum operator where a is the del operator. The dot product of with itself is the Laplacian . In three dimensions using Cartesian coordinates the Laplace operator is
Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation:
which allows one to apply the Hamiltonian to systems described by a wave function . This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.
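The displayed formulas in this subsection did not survive extraction; a hedged reconstruction of the standard one-particle expressions they describe is:

```latex
\hat{H} = \hat{T} + \hat{V}, \qquad
\hat{V} = V(\mathbf{r}, t), \qquad
\hat{T} = \frac{\hat{\mathbf{p}} \cdot \hat{\mathbf{p}}}{2m} = \frac{\hat{p}^{2}}{2m}, \qquad
\hat{\mathbf{p}} = -i\hbar\nabla,
\qquad
\nabla^{2} = \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} + \frac{\partial^{2}}{\partial z^{2}},
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}, t).
```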
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
Expectation value
It can be shown that the expectation value of the Hamiltonian, which is the energy expectation value, is always greater than or equal to the minimum potential of the system.
Consider computing the expectation value of kinetic energy:
Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy which is given for a normalized wavefunction as:
which completes the proof. Similarly, the condition can be generalized to any higher dimension using the divergence theorem.
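The intermediate formulas omitted above are presumably the standard integration-by-parts argument for a normalized wavefunction that vanishes at infinity; a minimal sketch is:

```latex
\langle \hat{T} \rangle
  = -\frac{\hbar^{2}}{2m}\int \psi^{*}\,\nabla^{2}\psi \, d^{3}r
  = \frac{\hbar^{2}}{2m}\int \left|\nabla \psi\right|^{2} d^{3}r \;\ge\; 0,
\qquad
\langle \hat{H} \rangle = \langle \hat{T} \rangle + \langle \hat{V} \rangle
  \;\ge\; \int V(\mathbf{r})\,|\psi|^{2}\, d^{3}r \;\ge\; V_{\min}.
```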
Many particles
The formalism can be extended to particles:
where
is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and
is the kinetic energy operator of particle , is the gradient for particle , and is the Laplacian for particle :
Combining these yields the Schrödinger Hamiltonian for the -particle case:
However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles:
where denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below).
For interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.
For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is
The general form of the Hamiltonian in this case is:
where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.
Schrödinger equation
The Hamiltonian generates the time evolution of quantum states. If is the state of the system at time , then
This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons is also called the Hamiltonian. Given the state at some initial time (), we can solve it to obtain the state at any subsequent time. In particular, if is independent of time, then
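The two displayed formulas this passage refers to were dropped in extraction; in standard form they are:

```latex
i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle,
\qquad
|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle
\quad (\hat{H}\ \text{independent of time}).
```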
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in . One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, these operators form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
Dirac formalism
However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:
The eigenkets of , denoted , provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted , solving the equation:
Since is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
Expressions for the Hamiltonian
Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by , and charges by .
Free particle
The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:
and in higher dimensions:
Constant-potential well
For a particle in a region of constant potential (no dependence on space or time), in one dimension, the Hamiltonian is:
in three dimensions
This applies to the elementary "particle in a box" problem, and step potentials.
Simple harmonic oscillator
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:
where the angular frequency , effective spring constant , and mass of the oscillator satisfy:
so the Hamiltonian is:
For three dimensions, this becomes
where the three-dimensional position vector using Cartesian coordinates is , its magnitude is
Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:
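The formulas omitted from this subsection are presumably the usual harmonic-oscillator expressions; a hedged reconstruction is:

```latex
V = \tfrac{1}{2} k x^{2}, \qquad \omega^{2} = \frac{k}{m}, \qquad
\hat{H} = \frac{\hat{p}^{2}}{2m} + \tfrac{1}{2} m \omega^{2} x^{2},
\qquad
\hat{H} = \frac{\hat{p}^{2}}{2m} + \tfrac{1}{2} m \omega^{2} r^{2}
        = \sum_{i \in \{x,y,z\}} \left( \frac{\hat{p}_{i}^{2}}{2m} + \tfrac{1}{2} m \omega^{2} x_{i}^{2} \right),
\qquad r^{2} = x^{2} + y^{2} + z^{2}.
```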
Rigid rotor
For a rigid rotor—i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:
where , , and are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and and are the total angular momentum operators (components), about the , , and axes respectively.
Electrostatic (Coulomb) potential
The Coulomb potential energy for two point charges and (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism):
However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For charges, the potential energy of charge due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):
where is the electrostatic potential of charge at . The total potential of the system is then the sum over :
so the Hamiltonian is:
Electric dipole in an electric field
For an electric dipole moment constituting charges of magnitude , in a uniform, electrostatic field (time-independent) , positioned in one place, the potential is:
the dipole moment itself is the operator
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
Magnetic dipole in a magnetic field
For a magnetic dipole moment in a uniform, magnetostatic field (time-independent) , positioned in one place, the potential is:
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
For a spin- particle, the corresponding spin magnetic moment is:
where is the "spin g-factor" (not to be confused with the gyromagnetic ratio), is the electron charge, is the spin operator vector, whose components are the Pauli matrices, hence
Charged particle in an electromagnetic field
For a particle with mass and charge in an electromagnetic field, described by the scalar potential and vector potential , there are two parts to the Hamiltonian to substitute for. The canonical momentum operator , which includes a contribution from the field and fulfils the canonical commutation relation, must be quantized;
where is the kinetic momentum. The quantization prescription reads
so the corresponding kinetic energy operator is
and the potential energy, which is due to the field, is given by
Casting all of these into the Hamiltonian gives
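The omitted displays in this subsection are presumably the standard minimal-coupling expressions (SI units, charge q, vector potential A, scalar potential φ):

```latex
\hat{\mathbf{p}} = \hat{\boldsymbol{\Pi}} + q\mathbf{A}, \qquad
\hat{\mathbf{p}} = -i\hbar\nabla, \qquad
\hat{T} = \frac{\hat{\boldsymbol{\Pi}} \cdot \hat{\boldsymbol{\Pi}}}{2m}
        = \frac{\left(\hat{\mathbf{p}} - q\mathbf{A}\right)^{2}}{2m}, \qquad
\hat{V} = q\phi,
\qquad
\hat{H} = \frac{\left(\hat{\mathbf{p}} - q\mathbf{A}\right)^{2}}{2m} + q\phi.
```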
Energy eigenket degeneracy, symmetry, and conservation laws
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the direction is a different state from one propagating in the direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator commutes with the Hamiltonian. To see this, suppose that is an energy eigenket. Then is an energy eigenket with the same eigenvalue, since
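The computation omitted at the end of the sentence above is presumably the standard one-liner: assuming U commutes with the Hamiltonian and |a⟩ is an eigenket with eigenvalue E_a,

```latex
\hat{H}\,\bigl(U|a\rangle\bigr) = U\,\hat{H}\,|a\rangle = E_{a}\,\bigl(U|a\rangle\bigr).
```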
Since is nontrivial, at least one pair of and must represent distinct states. Therefore, has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
The existence of a symmetry operator implies the existence of a conserved observable. Let be the Hermitian generator of :
It is straightforward to show that if commutes with , then so does :
Therefore,
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
Thus, the expected value of the observable is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
Hamilton's equations
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states , which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time , , can be expanded in terms of these basis states:
where
The coefficients are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
where the last step was obtained by expanding in terms of the basis states.
Each actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use and its complex conjugate . With this choice of independent variables, we can calculate the partial derivative
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
Similarly, one can show that
If we define "conjugate momentum" variables by
then the above equations become
which is precisely the form of Hamilton's equations, with the s as the generalized coordinates, the s as the conjugate momenta, and taking the place of the classical Hamiltonian.
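The displayed equations referred to throughout this section were lost in extraction; a sketch of the standard result, writing the expansion coefficients as a_n and defining conjugate momenta π_n = iħ a_n*, is:

```latex
i\hbar\,\frac{d a_{n}}{dt}
  = \frac{\partial \langle \hat{H} \rangle}{\partial a_{n}^{*}}
  = \sum_{n'} \langle n|\hat{H}|n'\rangle\, a_{n'},
\qquad
\pi_{n} \equiv i\hbar\, a_{n}^{*}
\;\;\Longrightarrow\;\;
\frac{d a_{n}}{dt} = \frac{\partial \langle \hat{H} \rangle}{\partial \pi_{n}},
\quad
\frac{d \pi_{n}}{dt} = -\,\frac{\partial \langle \hat{H} \rangle}{\partial a_{n}}.
```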
See also
Hamiltonian mechanics
Two-state quantum system
Operator (physics)
Bra–ket notation
Quantum state
Linear algebra
Conservation of energy
Potential theory
Many-body problem
Electrostatics
Electric field
Magnetic field
Lieb–Thirring inequality
References
External links
Hamiltonian mechanics
Operator theory
Quantum mechanics
Quantum chemistry
Theoretical chemistry
Computational chemistry
William Rowan Hamilton | Hamiltonian (quantum mechanics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,064 | [
"Quantum chemistry",
"Dynamical systems",
"Theoretical physics",
"Classical mechanics",
"Quantum mechanics",
"Hamiltonian mechanics",
"Quantum operators",
"Computational chemistry",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
14,384 | https://en.wikipedia.org/wiki/HAL%209000 | HAL 9000 (or simply HAL or Hal) is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in the 1968 film 2001: A Space Odyssey, HAL (Heuristically Programmed Algorithmic Computer) is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew. While part of HAL's hardware is shown toward the end of the film, he is mostly depicted as a camera lens containing a red and yellow dot, with such units located throughout the ship. HAL 9000 is voiced by Douglas Rain in the two feature film adaptations of the Space Odyssey series. HAL speaks in a soft, calm voice and a conversational manner, in contrast to the crewmen, David Bowman and Frank Poole.
In the film, HAL became operational on 12 January 1992, at the HAL Laboratories in Urbana, Illinois, as production number 3. The activation year was 1991 in earlier screenplays and changed to 1997 in Clarke's novel written and released in conjunction with the movie. In addition to maintaining the Discovery One spacecraft systems during the interplanetary mission to Jupiter (or Saturn in the novel), HAL has been shown to be capable of speech synthesis, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, spacecraft piloting, and computer chess.
Appearances
2001: A Space Odyssey (film/novel)
HAL became operational in Urbana, Illinois, at the HAL Plant (the University of Illinois's Coordinated Science Laboratory, where the ILLIAC computers were built). The film says this occurred in 1992, while the book gives 1997 as HAL's birth year.
In 2001: A Space Odyssey (1968), HAL is initially considered a dependable member of the crew, maintaining ship functions and engaging genially with his human crew-mates on an equal footing. As a recreational activity, Frank Poole plays chess against HAL. In the film, the artificial intelligence is shown to triumph easily. However, as time progresses, HAL begins to malfunction in subtle ways and, as a result, the decision is made to shut down HAL in order to prevent more serious malfunctions. The sequence of events and manner in which HAL is shut down differs between the novel and film versions of the story. In the aforementioned game of chess HAL makes minor and undetected mistakes in his analysis, a possible foreshadowing to HAL's malfunctioning.
In the film, astronauts David Bowman and Frank Poole consider disconnecting HAL's cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft's communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL attempts to kill the astronauts in order to protect and continue the mission. HAL uses one of the Discovery's EVA pods to kill Poole while he is repairing the ship. When Bowman, without a space helmet, uses another pod to attempt to rescue Poole, HAL locks him out of the ship, then disconnects the life support systems of the other hibernating crew members. After HAL tells him "This mission is too important for me to allow you to jeopardize it", Bowman circumvents HAL's control, entering the ship by manually opening an emergency airlock with his service pod's clamps, detaching the pod door via its explosive bolts. Bowman jumps across empty space, reenters Discovery, and quickly re-pressurizes the airlock.
While HAL's motivations are ambiguous in the film, the novel explains that the computer is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. With the crew dead, HAL reasons, he would not need to lie to them.
In the novel, the orders to disconnect HAL come from Dave and Frank's superiors on Earth. After Frank is killed while attempting to repair the communications antenna he is pulled away into deep space using the safety tether which is still attached to both the pod and Frank Poole's spacesuit. Dave begins to revive his hibernating crew mates, but is foiled when HAL vents the ship's atmosphere into the vacuum of space, killing the awakening crew members and almost killing Bowman, who is only narrowly saved when he finds his way to an emergency chamber which has its own oxygen supply and a spare space suit inside.
In both versions, Bowman then proceeds to shut down the machine. In the film, HAL's central core is depicted as a crawlspace full of brightly lit computer modules mounted in arrays from which they can be inserted or removed. Bowman shuts down HAL by removing modules from service one by one; as he does so, HAL's consciousness degrades. HAL finally reverts to material that was programmed into him early in his memory, including announcing the date he became operational as 12 January 1992 (in the novel, 1997). When HAL's logic is completely gone, he begins singing the song "Daisy Bell" as he gradually deactivates (in actuality, the first song sung by a computer, which Clarke had earlier observed at a text-to-speech demonstration). HAL's final act of any significance is to prematurely play a prerecorded message from Mission Control which reveals the true reasons for the mission to Jupiter.
2010: Odyssey Two (novel) and 2010: The Year We Make Contact (film)
In the 1982 novel 2010: Odyssey Two written by Clarke, HAL is restarted by his creator, Dr. Chandra, who arrives on the Soviet spaceship Leonov.
Prior to leaving Earth, Dr. Chandra has also had a discussion with HAL's twin, SAL 9000. Like HAL, SAL was created by Dr. Chandra. Whereas HAL was characterized as being "male", SAL is characterized as being "female" (voiced by Candice Bergen in the film) and is represented by a blue camera eye instead of a red one.
Dr. Chandra discovers that HAL's crisis was caused by a programming contradiction: he was constructed for "the accurate processing of information without distortion or concealment", yet his orders, directly from Dr. Heywood Floyd at the National Council on Astronautics, required him to keep the discovery of the Monolith TMA-1 a secret for reasons of national security. This contradiction created a "Hofstadter-Moebius loop", reducing HAL to paranoia. Therefore, HAL made the decision to kill the crew, thereby allowing him to obey both his hardwired instructions to report data truthfully and in full, and his orders to keep the monolith a secret. In essence: if the crew were dead, he would no longer have to keep the information secret.
The alien intelligence initiates a terraforming scheme, placing the Leonov, and everybody in it, in danger. Its human crew devises an escape plan which unfortunately requires leaving the Discovery and HAL behind to be destroyed. Dr. Chandra explains the danger, and HAL willingly sacrifices himself so that the astronauts may escape safely. In the moment of his destruction the monolith-makers transform HAL into a non-corporeal being so that David Bowman's avatar may have a companion.
The details in the novel and the 1984 film 2010: The Year We Make Contact are nominally the same, with a few exceptions. First, in contradiction to the book (and events described in both book and film versions of 2001: A Space Odyssey), Heywood Floyd is absolved of responsibility for HAL's condition; it is asserted that the decision to program HAL with information concerning TMA-1 came directly from the White House. In the film, HAL functions normally after being reactivated, while in the book it is revealed that his mind was damaged during the shutdown, forcing him to begin communication through screen text. Also, in the film the Leonov crew initially lies to HAL about the dangers that he faced (suspecting that if he knew he would be destroyed he would not initiate the engine burn necessary to get the Leonov back home), whereas in the novel he is told at the outset. However, in both cases the suspense comes from the question of what HAL will do when he knows that he may be destroyed by his actions.
In the novel, the basic reboot sequence initiated by Dr. Chandra is quite long, while the movie uses a shorter sequence, voiced by HAL as: "HELLO_DOCTOR_NAME_CONTINUE_YESTERDAY_TOMORROW".
While Curnow tells Floyd that Dr. Chandra has begun designing HAL 10000, it has not been mentioned in subsequent novels.
2061: Odyssey Three and 3001: The Final Odyssey
In Clarke's 1987 novel 2061: Odyssey Three, Heywood Floyd is surprised to encounter HAL, now stored alongside Dave Bowman in the Europa monolith.
In Clarke's 1997 novel 3001: The Final Odyssey, Frank Poole is introduced to the merged form of Dave Bowman and HAL, the two merging into one entity called "Halman" after Bowman rescued HAL from the dying Discovery One spaceship toward the end of 2010: Odyssey Two.
Concept and creation
Clarke noted that the first film was criticized for not having any characters except for HAL, and that a great deal of the establishing story on Earth was cut from the film (and even from Clarke's novel). Clarke stated that he had considered Autonomous Mobile Explorer–5 as a name for the computer, then decided on Socrates when writing early drafts, switching in later drafts to Athena, a computer with a female personality, before settling on HAL 9000. The Socrates name was later used in Clarke and Stephen Baxter's A Time Odyssey novel series.
The earliest draft depicted Socrates as a roughly humanoid robot, and is introduced as overseeing Project Morpheus, which studied prolonged hibernation in preparation for long term space flight. As a demonstration to Senator Floyd, Socrates' designer, Dr. Bruno Forster, asks Socrates to turn off the oxygen to hibernating subjects Kaminski and Whitehead, which Socrates refuses, citing Asimov's First Law of Robotics.
In a later version, in which Bowman and Whitehead are the non-hibernating crew of Discovery, Whitehead dies outside the spacecraft after his pod collides with the main antenna, tearing it free. This triggers the need for Bowman to revive Poole, but the revival does not go according to plan, and after briefly awakening, Poole dies. The computer, named Athena in this draft, announces "All systems of Poole now No–Go. It will be necessary to replace him with a spare unit." After this, Bowman decides to go out in a pod and retrieve the antenna, which is moving away from the ship. Athena refuses to allow him to leave the ship, citing "Directive 15" which prevents it from being left unattended, forcing him to make program modifications during which time the antenna drifts further.
During rehearsals Kubrick asked Stefanie Powers to supply the voice of HAL 9000 while searching for a suitably androgynous voice so the actors had something to react to. On the set, British actor Nigel Davenport played HAL. When it came to dubbing HAL in post-production, Kubrick had originally cast Martin Balsam, but as he felt Balsam "just sounded a little bit too colloquially American", he was replaced with Douglas Rain, who "had the kind of bland mid-Atlantic accent we felt was right for the part". Rain was only handed HAL's lines instead of the full script, and recorded them across a day and a half.
HAL's point of view shots were created with a Cinerama Fairchild-Curtis wide-angle lens with a 160° angle of view. This lens is about in diameter, while HAL's on set prop eye lens is about in diameter. Stanley Kubrick chose to use the large Fairchild-Curtis lens to shoot the HAL 9000 POV shots because he needed a wide-angle fisheye lens that would fit onto his shooting camera, and this was the only such lens at the time. The Fairchild-Curtis lens has a focal length of with a maximum aperture of 2.0 and a weight of approximately ; it was originally designed by Felix Bednarz with a maximum aperture of 2.2 for the first Cinerama 360 film, Journey to the Stars, shown at the 1962 Seattle World's Fair. Bednarz adapted the lens design from an earlier lens he had designed for military training to simulate human peripheral vision coverage. The lens was later recomputed for the second Cinerama 360 film To the Moon and Beyond, which had a slightly different film format. To the Moon and Beyond was produced by Graphic Films and shown at the 1964/1965 New York World's Fair, where Kubrick watched it; afterwards, he was so impressed that he hired the same creative team from Graphic Films (consisting of Douglas Trumbull, Lester Novros, and Con Pederson) to work on 2001.
A HAL 9000 face plate, without lens (not the same as the hero face plates seen in the film), was discovered in a junk shop in Paddington, London, in the early 1970s by Chris Randall. This was found along with the key to HAL's Brain Room. Both items were purchased for ten shillings (£0.50). Research revealed that the original lens was a Fisheye Nikkor 8 mm 8. The collection was sold at a Christie's auction in 2010 for £17,500 to film director Peter Jackson.
Origin of name
HAL's name, according to writer Arthur C. Clarke, is derived from Heuristically programmed ALgorithmic computer. After the film was released, fans noticed HAL was a one-letter shift from the name IBM and there has been much speculation since then that this was a dig at the large computer company, something that has been denied by both Clarke and 2001 director Stanley Kubrick. Clarke addressed the issue in his book The Lost Worlds of 2001:
...about once a week some character spots the fact that HAL is one letter ahead of IBM, and promptly assumes that Stanley and I were taking a crack at the estimable institution ... As it happened, IBM had given us a good deal of help, so we were quite embarrassed by this, and would have changed the name had we spotted the coincidence.
IBM was consulted during the making of the film and their logo can be seen on props in the film, including the Pan Am Clipper's cockpit instrument panel and on the lower arm keypad on Poole's space suit. During production it was brought to IBM's attention that the film's plot included a homicidal computer but they approved association with the film if it was clear any "equipment failure" was not related to their products.
HAL Communications Corporation is a real corporation, with facilities located in Urbana, Illinois, which is where HAL in the movie identifies himself as being activated: "I am a HAL 9000 computer. I became operational at the H-A-L plant in Urbana Illinois on the 12th of January 1992."
The former president of HAL Communications, Bill Henry, has stated that this is a coincidence: "There was not and never has been any connection to 'Hal', Arthur Clarke's intelligent computer in the screen play '2001' — later published as a book. We were very surprised when the movie hit the Coed Theatre on campus and discovered that the movie's computer had our name. We never had any problems with that similarity - 'Hal' for the movie and 'HAL' (all caps) for our small company. But, from time-to-time, we did have issues with others trying to use 'HAL'. That resulted in us paying lawyers. The offenders folded or eventually went out of business."
Technology
The scene in which HAL's consciousness degrades was inspired by Clarke's memory of a speech synthesis demonstration by physicist John Larry Kelly, Jr., who used an IBM 704 computer to synthesize speech. Kelly's voice recorder synthesizer vocoder recreated the song "Daisy Bell", with musical accompaniment from Max Mathews.
HAL's capabilities, like all the technology in 2001, were based on the speculation of respected scientists. Marvin Minsky, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the most influential researchers in the field, was an adviser on the film set. In the mid-1960s, many computer scientists in the field of artificial intelligence were optimistic that machines with HAL's capabilities would exist within a few decades. For example, AI pioneer Herbert A. Simon at Carnegie Mellon University had predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do".
Cultural impact
HAL is listed as the 13th-greatest film villain in the AFI's 100 Years...100 Heroes & Villains.
The 9000th of the asteroids in the asteroid belt, 9000 Hal, discovered on 3 May 1981, by E. Bowell at Anderson Mesa Station, is named after HAL 9000.
Anthony Hopkins based his Academy Award-winning performance as Hannibal Lecter in Silence of the Lambs in part upon HAL 9000. Michael Fassbender has also cited HAL as an inspiration for his performances as android characters such as David (Prometheus) and Walter (Alien: Covenant).
The 1993 educational game Where in Space Is Carmen Sandiego? features a digital assistant named the VAL 9000, a homage to HAL 9000.
Apple Inc.'s 1999 website advertisement "It was a bug, Dave" was made by meticulously recreating the appearance of HAL 9000 from the movie. Launched during the era of concerns over Y2K bugs, the ad implied that HAL's behavior was caused by a Y2K bug, before driving home the point that "only Macintosh was designed to function perfectly".
In 2003, HAL 9000 was one of the first robots to be inducted into the Robot Hall of Fame in Pittsburgh, Pennsylvania. There is a physical replica of HAL at the Carnegie Science Center in Pittsburgh.
See also
List of fictional computers
National Center for Supercomputing Applications, a research center in Urbana, IL
Poole versus HAL 9000, a chess game played by Frank Poole and HAL 9000
Jipi and the Paranoid Chip
AI control problem
Footnotes
References
External links
Text excerpts from HAL 9000 in 2001: A Space Odyssey
HAL's Legacy, on-line ebook (mostly full-text) of the printed version edited by David G. Stork, MIT Press, 1997, , a collection of essays on HAL
HAL's Legacy, An Interview with Arthur C. Clarke.
The case for HAL's sanity by Clay Waldrop
2001 fills the theater at HAL 9000's "birthday" in 1997 at the University of Illinois at Urbana–Champaign
Antagonists
Characters in British novels of the 20th century
Characters in written science fiction
Fictional artificial intelligences
Fictional virtual assistants
Fictional characters from Illinois
Film characters introduced in 1968
Fictional computers
Fictional mass murderers
History of science fiction
Male characters in literature
Science fiction film characters
Space Odyssey | HAL 9000 | [
"Technology"
] | 3,946 | [
"Fictional computers",
"Computers"
] |
14,385 | https://en.wikipedia.org/wiki/Hydrolysis | Hydrolysis (; ) is any chemical reaction in which a molecule of water breaks one or more chemical bonds. The term is used broadly for substitution, elimination, and solvation reactions in which water is the nucleophile.
Biological hydrolysis is the cleavage of biomolecules where a water molecule is consumed to effect the separation of a larger molecule into component parts. When a carbohydrate is broken into its component sugar molecules by hydrolysis (e.g., sucrose being broken down into glucose and fructose), this is recognized as saccharification.
Hydrolysis reactions can be the reverse of a condensation reaction in which two molecules join into a larger one and eject a water molecule. Thus hydrolysis adds water to break down, whereas condensation builds up by removing water.
Types
Usually hydrolysis is a chemical process in which a molecule of water is added to a substance. Sometimes this addition causes both the substance and water molecule to split into two parts. In such reactions, one fragment of the target molecule (or parent molecule) gains a hydrogen ion. It breaks a chemical bond in the compound.
Salts
A common kind of hydrolysis occurs when a salt of a weak acid or weak base (or both) is dissolved in water. Water spontaneously ionizes into hydroxide anions and hydronium cations. The salt also dissociates into its constituent anions and cations. For example, sodium acetate dissociates in water into sodium and acetate ions. Sodium ions react very little with the hydroxide ions whereas the acetate ions combine with hydronium ions to produce acetic acid. In this case the net result is a relative excess of hydroxide ions, yielding a basic solution.
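The net result described here can be summarized by the equilibrium:
CH3COO- + H2O <=> CH3COOH + OH-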
Strong acids also undergo hydrolysis. For example, dissolving sulfuric acid (H2SO4) in water is accompanied by hydrolysis to give hydronium and bisulfate, sulfuric acid's conjugate base. For a more technical discussion of what occurs during such a hydrolysis, see Brønsted–Lowry acid–base theory.
Esters and amides
Acid–base-catalysed hydrolyses are very common; one example is the hydrolysis of amides or esters. Their hydrolysis occurs when the nucleophile (a nucleus-seeking agent, e.g., water or hydroxyl ion) attacks the carbon of the carbonyl group of the ester or amide. In an aqueous base, hydroxyl ions are better nucleophiles than polar molecules such as water. In acids, the carbonyl group becomes protonated, and this leads to a much easier nucleophilic attack. The products for both hydrolyses are compounds with carboxylic acid groups.
Perhaps the oldest commercially practiced example of ester hydrolysis is saponification (formation of soap). It is the hydrolysis of a triglyceride (fat) with an aqueous base such as sodium hydroxide (NaOH). During the process, glycerol is formed, and the fatty acids react with the base, converting them to salts. These salts are called soaps, commonly used in households.
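Schematically, with R standing for the fatty-acid chains (which generally differ within a single fat molecule), saponification with sodium hydroxide can be written as:
C3H5(OOCR)3 + 3 NaOH -> C3H5(OH)3 + 3 RCOONa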
In addition, in living systems, most biochemical reactions (including ATP hydrolysis) take place during the catalysis of enzymes. The catalytic action of enzymes allows the hydrolysis of proteins, fats, oils, and carbohydrates. As an example, one may consider proteases (enzymes that aid digestion by causing hydrolysis of peptide bonds in proteins). They catalyze the hydrolysis of interior peptide bonds in peptide chains, as opposed to exopeptidases (another class of enzymes, that catalyze the hydrolysis of terminal peptide bonds, liberating one free amino acid at a time).
However, proteases do not catalyze the hydrolysis of all kinds of proteins. Their action is stereo-selective: Only proteins with a certain tertiary structure are targeted as some kind of orienting force is needed to place the amide group in the proper position for catalysis. The necessary contacts between an enzyme and its substrates (proteins) are created because the enzyme folds in such a way as to form a crevice into which the substrate fits; the crevice also contains the catalytic groups. Therefore, proteins that do not fit into the crevice will not undergo hydrolysis. This specificity preserves the integrity of other proteins such as hormones, and therefore the biological system continues to function normally.
Upon hydrolysis, an amide converts into a carboxylic acid and an amine or ammonia (which in the presence of acid are immediately converted to ammonium salts). One of the two oxygen atoms on the carboxylic acid group is derived from a water molecule, and the amine (or ammonia) gains the hydrogen ion. The hydrolysis of peptides gives amino acids.
Many polyamide polymers such as nylon 6,6 hydrolyze in the presence of strong acids. The process leads to depolymerization. For this reason nylon products fail by fracturing when exposed to small amounts of acidic water. Polyesters are also susceptible to similar polymer degradation reactions. The problem is known as environmental stress cracking.
ATP
Hydrolysis is related to energy metabolism and storage. All living cells require a continual supply of energy for two main purposes: the biosynthesis of micro and macromolecules, and the active transport of ions and molecules across cell membranes. The energy derived from the oxidation of nutrients is not used directly but, by means of a complex and long sequence of reactions, it is channeled into a special energy-storage molecule, adenosine triphosphate (ATP). The ATP molecule contains pyrophosphate linkages (bonds formed when two phosphate units are combined) that release energy when needed. ATP can undergo hydrolysis in two ways: Firstly, the removal of the terminal phosphate to form adenosine diphosphate (ADP) and inorganic phosphate (Pi), with the reaction:
ATP + H2O -> ADP + Pi
Secondly, the removal of a terminal diphosphate to yield adenosine monophosphate (AMP) and pyrophosphate; the latter usually undergoes further cleavage into its two constituent phosphates. This allows biosynthesis reactions, which usually occur in chains, to be driven in the direction of synthesis when the phosphate bonds have undergone hydrolysis.
Polysaccharides
Monosaccharides can be linked together by glycosidic bonds, which can be cleaved by hydrolysis. Two, three, several or many monosaccharides thus linked form disaccharides, trisaccharides, oligosaccharides, or polysaccharides, respectively. Enzymes that hydrolyze glycosidic bonds are called "glycoside hydrolases" or "glycosidases".
The best-known disaccharide is sucrose (table sugar). Hydrolysis of sucrose yields glucose and fructose. Invertase is a sucrase used industrially for the hydrolysis of sucrose to so-called invert sugar. Lactase is essential for digestive hydrolysis of lactose in milk; many adult humans do not produce lactase and cannot digest the lactose in milk.
The hydrolysis of polysaccharides to soluble sugars can be recognized as saccharification. Malt made from barley is used as a source of β-amylase to break down starch into the disaccharide maltose, which can be used by yeast to produce beer. Other amylase enzymes may convert starch to glucose or to oligosaccharides. Cellulose is first hydrolyzed to cellobiose by cellulase and then cellobiose is further hydrolyzed to glucose by beta-glucosidase. Ruminants such as cows are able to hydrolyze cellulose into cellobiose and then glucose because of symbiotic bacteria that produce cellulases.
DNA
Hydrolysis of DNA occurs at a significant rate in vivo. For example, it is estimated that in each human cell 2,000 to 10,000 DNA purine bases turn over every day due to hydrolytic depurination, and that this is largely counteracted by specific rapid DNA repair processes. Hydrolytic DNA damages that fail to be accurately repaired may contribute to carcinogenesis and ageing.
Metal aqua ions
Metal ions are Lewis acids, and in aqueous solution they form metal aquo complexes of the general formula . The aqua ions undergo hydrolysis, to a greater or lesser extent. The first hydrolysis step is given generically as
M(H2O)_\mathit{n}^{\mathit{m}+} + H2O <=> M(H2O)_{\mathit{n}-1}(OH)^{(\mathit{m}-1)+} + H3O+
Thus the aqua cations behave as acids in terms of Brønsted–Lowry acid–base theory. This effect is easily explained by considering the inductive effect of the positively charged metal ion, which weakens the bond of an attached water molecule, making the liberation of a proton relatively easy.
The dissociation constant, pKa, for this reaction is more or less linearly related to the charge-to-size ratio of the metal ion. Ions with low charges, such as the alkali metal cations, are very weak acids with almost imperceptible hydrolysis. Large divalent ions such as Ca2+ have a pKa of 6 or more and would not normally be classed as acids, but small divalent ions such as Be2+ undergo extensive hydrolysis. Trivalent ions like Al3+ are weak acids whose pKa is comparable to that of acetic acid. Solutions of salts of such ions in water are noticeably acidic; the hydrolysis can be suppressed by adding an acid such as nitric acid, making the solution more acidic.
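As an illustrative calculation (not taken from the text above), the acidity of such a solution can be estimated by treating the aqua ion as a simple weak monoprotic acid. In the Python sketch below, the pKa of 5.0 and the concentration of 0.1 mol/L are assumed values of the order commonly quoted for a trivalent aqua ion such as hexaaquaaluminium(III).

```python
import math

def ph_of_weak_acid(pka: float, conc: float) -> float:
    """Approximate pH of a weak monoprotic acid at concentration `conc` (mol/L).

    Solves Ka = x**2 / (conc - x) for x = [H3O+], neglecting the autoionization of water.
    """
    ka = 10 ** (-pka)
    # Positive root of x**2 + Ka*x - Ka*conc = 0
    x = (-ka + math.sqrt(ka * ka + 4 * ka * conc)) / 2
    return -math.log10(x)

# Assumed, illustrative inputs: pKa = 5.0, concentration = 0.1 mol/L
print(round(ph_of_weak_acid(5.0, 0.1), 2))  # ~3.0, i.e. a noticeably acidic solution
```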
Hydrolysis may proceed beyond the first step, often with the formation of polynuclear species via the process of olation. Some "exotic" polynuclear species are well characterized. Hydrolysis tends to proceed as pH rises, leading in many cases to the precipitation of a hydroxide such as aluminium hydroxide or iron(III) oxide-hydroxide. These substances, major constituents of bauxite, are known as laterites and are formed by leaching from rocks of most of the ions other than aluminium and iron and subsequent hydrolysis of the remaining aluminium and iron.
Mechanism strategies
Acetals, imines, and enamines can be converted back into ketones by treatment with excess water under acid-catalyzed conditions.
Catalysis
Acidic hydrolysis
Acid catalysis can be applied to hydrolyses, for example in the conversion of cellulose or starch to glucose. Carboxylic acids can be produced by the acid hydrolysis of esters.
Acids catalyze the hydrolysis of nitriles to amides. Acid hydrolysis does not usually refer to the acid-catalyzed addition of the elements of water to double or triple bonds by electrophilic addition, as in a hydration reaction. Acid hydrolysis is used to prepare monosaccharides, typically with mineral acids, although formic acid and trifluoroacetic acid have also been used.
Acid hydrolysis can be utilized in the pretreatment of cellulosic material, so as to cut the interchain linkages in hemicellulose and cellulose.
Alkaline hydrolysis
Alkaline hydrolysis usually refers to types of nucleophilic substitution reactions in which the attacking nucleophile is a hydroxide ion. The best known type is saponification: cleaving esters into carboxylate salts and alcohols. In ester hydrolysis, the hydroxide ion nucleophile attacks the carbonyl carbon. This mechanism is supported by isotope labeling experiments. For example, when ethyl propionate with an oxygen-18 labeled ethoxy group is treated with sodium hydroxide (NaOH), the oxygen-18 is completely absent from the sodium propionate product and is found exclusively in the ethanol formed.
The reaction is often used to solubilize solid organic matter. Chemical drain cleaners take advantage of this method to dissolve hair and fat in pipes. The reaction is also used to dispose of human and other animal remains as an alternative to traditional burial or cremation.
See also
Adenosine triphosphate
Water cremation
Catabolism
Condensation reaction
Dehydration reaction
Hydrolysis constant
Inhibitor protein
Polymer degradation
Proteolysis
Saponification
Sol–gel polymerisation
Solvolysis
Thermal hydrolysis
References
Chemical reactions
Reactions of esters
Equilibrium chemistry | Hydrolysis | [
"Chemistry"
] | 2,665 | [
"Equilibrium chemistry",
"nan"
] |
14,386 | https://en.wikipedia.org/wiki/Hydroxy%20group | In chemistry, a hydroxy or hydroxyl group is a functional group with the chemical formula and composed of one oxygen atom covalently bonded to one hydrogen atom. In organic chemistry, alcohols and carboxylic acids contain one or more hydroxy groups. Both the negatively charged anion , called hydroxide, and the neutral radical , known as the hydroxyl radical, consist of an unbonded hydroxy group.
According to IUPAC definitions, the term hydroxyl refers to the hydroxyl radical () only, while the functional group is called a hydroxy group.
Properties
Water, alcohols, carboxylic acids, and many other hydroxy-containing compounds can be readily deprotonated due to the large difference between the electronegativity of oxygen (3.5) and that of hydrogen (2.1). Hydroxy-containing compounds engage in intermolecular hydrogen bonding, which increases the electrostatic attraction between molecules and thus leads to higher boiling and melting points than are found for compounds that lack this functional group. Organic compounds, which are often poorly soluble in water, become water-soluble when they contain two or more hydroxy groups, as illustrated by sugars and amino acids.
Occurrence
The hydroxy group is pervasive in chemistry and biochemistry. Many inorganic compounds contain hydroxyl groups, including sulfuric acid, the chemical compound produced on the largest scale industrially.
Hydroxy groups participate in the dehydration reactions that link simple biological molecules into long chains. The joining of a fatty acid to glycerol to form a triacylglycerol removes the −OH from the carboxy end of the fatty acid. The joining of two aldehyde sugars to form a disaccharide removes the −OH from the carboxy group at the aldehyde end of one sugar. The creation of a peptide bond to link two amino acids to make a protein removes the −OH from the carboxy group of one amino acid.
Hydroxyl radical
Hydroxyl radicals are highly reactive and undergo chemical reactions that make them short-lived. When biological systems are exposed to hydroxyl radicals, they can cause damage to cells, including those in humans, where they can react with DNA, lipids, and proteins.
Planetary observations
Airglow of the Earth
The Earth's night sky is illuminated by diffuse light, called airglow, that is produced by radiative transitions of atoms and molecules. Among the most intense such features observed in the Earth's night sky is a group of infrared transitions at wavelengths between 700 nanometers and 900 nanometers. In 1950, Aden Meinel showed that these were transitions of the hydroxyl molecule, OH.
Surface of the Moon
In 2009, India's Chandrayaan-1 satellite and the National Aeronautics and Space Administration (NASA) Cassini spacecraft and Deep Impact probe each detected evidence of water on the Moon in the form of hydroxyl fragments. As reported by Richard Kerr, "A spectrometer [the Moon Mineralogy Mapper, also known as "M3"] detected an infrared absorption at a wavelength of 3.0 micrometers that only water or hydroxyl—a hydrogen and an oxygen bound together—could have created." NASA also reported in 2009 that the LCROSS probe revealed an ultraviolet emission spectrum consistent with hydroxyl presence.
On 26 October 2020, NASA reported definitive evidence of water on the sunlit surface of the Moon, in the vicinity of the crater Clavius, obtained by the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA's Faint Object Infrared Camera for the SOFIA Telescope (FORCAST) detected emission bands at a wavelength of 6.1 micrometers that are present in water but not in hydroxyl. The abundance of water on the Moon's surface was inferred to be equivalent to the contents of a 12-ounce bottle of water per cubic meter of lunar soil.
The Chang'e 5 probe, which landed on the Moon on 1 December 2020, carried a mineralogical spectrometer that could measure infrared reflectance spectra of lunar rock and regolith. The reflectance spectrum of a rock sample at a wavelength of 2.85 micrometers indicated localized water/hydroxyl concentrations as high as 180 parts per million.
Atmosphere of Venus
The Venus Express orbiter collected Venus science data from April 2006 until December 2014. In 2008, Piccioni, et al. reported measurements of night-side airglow emission in the atmosphere of Venus made with the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) on Venus Express. They attributed emission bands in wavelength ranges of 1.40 - 1.49 micrometers and 2.6 - 3.14 micrometers to vibrational transitions of OH. This was the first evidence for OH in the atmosphere of any planet other than Earth.
Atmosphere of Mars
In 2013, OH near-infrared spectra were observed in the night glow in the polar winter atmosphere of Mars by use of the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM).
Exoplanets
In 2021, evidence for OH in the dayside atmosphere of the exoplanet WASP-33b was found in its emission spectrum at wavelengths between 1 and 2 micrometers. Evidence for OH in the atmosphere of exoplanet WASP-76b was subsequently found. Both WASP-33b and WASP-76b are ultra-hot Jupiters and it is likely that any water in their atmospheres is present as dissociated ions.
See also
Hydronium
Ion
Oxide
Hydroxylation
References
Further reading
External links
Alcohols
Functional groups
Hydroxides | Hydroxy group | [
"Chemistry"
] | 1,163 | [
"Hydroxides",
"Bases (chemistry)",
"Functional groups"
] |
14,387 | https://en.wikipedia.org/wiki/Warm-blooded | Warm-blooded is a term referring to animal species whose bodies maintain a temperature higher than that of their environment. In particular, homeothermic species (including birds and mammals) maintain a stable body temperature by regulating metabolic processes. Other species have various degrees of thermoregulation.
Because there are more than two categories of temperature control utilized by animals, the terms warm-blooded and cold-blooded have been deprecated in the scientific field.
Terminology
In general, warm-bloodedness refers to three separate categories of thermoregulation.
Endothermy is the ability of some creatures to control their body temperatures through internal means such as muscle shivering or increasing their metabolism. The opposite of endothermy is ectothermy.
Homeothermy maintains a stable internal body temperature regardless of external influence and temperatures. The stable internal temperature is often higher than the immediate environment. The opposite is poikilothermy. The only known living homeotherms are mammals and birds, as well as one lizard, the Argentine black and white tegu. Some extinct reptiles such as ichthyosaurs, pterosaurs, plesiosaurs and some non-avian dinosaurs are believed to have been homeotherms.
Tachymetabolism maintains a high "resting" metabolism. In essence, tachymetabolic creatures are "on" all the time. Though their resting metabolism is still many times slower than their active metabolism, the difference is often not as large as that seen in bradymetabolic creatures. Tachymetabolic creatures have greater difficulty dealing with a scarcity of food.
Varieties of thermoregulation
A significant proportion of creatures commonly referred to as "warm-blooded," like birds and mammals, exhibit all three of these categories (i.e., they are endothermic, homeothermic, and tachymetabolic). However, over the past three decades, investigations in the field of animal thermophysiology have unveiled numerous species within these two groups that do not meet all these criteria. For instance, many bats and small birds become poikilothermic and bradymetabolic during sleep (or, in nocturnal species, during the day). For such creatures, the term heterothermy was introduced.
Further examinations of animals traditionally classified as cold-blooded have revealed that most creatures manifest varying combinations of the three aforementioned terms, along with their counterparts (ectothermy, poikilothermy, and bradymetabolism), thus creating a broad spectrum of body temperature types. Some fish have warm-blooded characteristics, such as the opah. Swordfish and some sharks have circulatory mechanisms that keep their brains and eyes above ambient temperatures and thus increase their ability to detect and react to prey. Tunas and some sharks have similar mechanisms in their muscles, improving their stamina when swimming at high speed.
Heat generation
Body heat is generated by metabolism. This refers to the chemical reactions in cells that break down glucose into water and carbon dioxide, thereby producing adenosine triphosphate (ATP), a high-energy compound used to power other cellular processes. Muscle contraction is one such metabolic process that generates heat, and additional heat results from friction as blood circulates through the vascular system.
All organisms metabolize food and other inputs, but some make better use of the output than others. Like all energy conversions, metabolism is rather inefficient, and around 60% of the available energy is converted to heat rather than to ATP. In most organisms, this heat dissipates into the surroundings. However, endothermic homeotherms (generally referred to as "warm-blooded" animals) not only produce more heat but also possess superior means of retaining and regulating it compared to other animals. They exhibit a higher basal metabolic rate and can further increase their metabolic rate during strenuous activity. They usually have well-developed insulation in order to retain body heat: fur and blubber in the case of mammals and feathers in birds. When this insulation is insufficient to maintain body temperature, they may resort to shivering—rapid muscle contractions that quickly use up ATP, thus stimulating cellular metabolism to replace it and consequently produce more heat. Additionally, almost all eutherian mammals (with the only known exception being swine) have brown adipose tissue whose mitochondria are capable of non-shivering thermogenesis. This process involves the direct dissipation of the mitochondrial gradient as heat via an uncoupling protein, thereby "uncoupling" the gradient from its usual function of driving ATP production via ATP synthase.
In warm environments, these animals employ evaporative cooling to shed excess heat, either through sweating (some mammals) or by panting (many mammals and all birds)—mechanisms generally absent in poikilotherms.
Defense against fungi
It has been hypothesized that warm-bloodedness evolved in mammals and birds as a defense against fungal infections. Very few fungi can survive the body temperatures of warm-blooded animals. By comparison, insects, reptiles, and amphibians are plagued by fungal infections. Warm-blooded animals have a defense against pathogens contracted from the environment, since environmental pathogens are not adapted to their higher internal temperature.
See also
Mesotherm
Thermogenic plant
References
Footnotes
Citations
External links
What is Warm Blooded??
The Reptipage: What is cold-blooded?
Animal physiology
Thermoregulation
"Biology"
] | 1,146 | [
"Thermoregulation",
"Animals",
"Animal physiology",
"Homeostasis"
] |
14,400 | https://en.wikipedia.org/wiki/History%20of%20science | The history of science covers the development of science from ancient times to the present. It encompasses all three major branches of science: natural, social, and formal. Protoscience, early sciences, and natural philosophies such as alchemy and astrology during the Bronze Age, Iron Age, classical antiquity, and the Middle Ages declined during the early modern period after the establishment of formal disciplines of science in the Age of Enlightenment.
Science's earliest roots can be traced to Ancient Egypt and Mesopotamia around 3000 to 1200 BCE. These civilizations' contributions to mathematics, astronomy, and medicine influenced later Greek natural philosophy of classical antiquity, wherein formal attempts were made to provide explanations of events in the physical world based on natural causes. After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Latin-speaking Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages, but continued to thrive in the Greek-speaking Byzantine Empire. Aided by translations of Greek texts, the Hellenistic worldview was preserved and absorbed into the Arabic-speaking Muslim world during the Islamic Golden Age. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived the learning of natural philosophy in the West. Traditions of early science were also developed in ancient India and separately in ancient China, the Chinese model having influenced Vietnam, Korea and Japan before Western exploration. Among the Pre-Columbian peoples of Mesoamerica, the Zapotec civilization established their first known traditions of astronomy and mathematics for producing calendars, followed by other civilizations such as the Maya.
Natural philosophy was transformed during the Scientific Revolution in 16th- to 17th-century Europe, as new ideas and discoveries departed from previous Greek conceptions and traditions. The New Science that emerged was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. More "revolutions" in subsequent centuries soon followed. The chemical revolution of the 18th century, for instance, introduced new quantitative methods and measurements for chemistry. In the 19th century, new perspectives regarding the conservation of energy, age of Earth, and evolution came into focus. And in the 20th century, new discoveries in genetics and physics laid the foundations for new sub disciplines such as molecular biology and particle physics. Moreover, industrial and military concerns as well as the increasing complexity of new research endeavors ushered in the era of "big science," particularly after World War II.
Approaches to history of science
The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress
but historians have come to see the story as more complex.
Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science".
Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration.
The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early-17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate () a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science. However, some contemporary philosophers and scientists, such as Richard Dawkins, still subscribe to this thesis.
Historians have emphasized that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science. Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals. Historians have also investigated the mundane practices of science such as fieldwork and specimen collection, correspondence, drawing, record-keeping, and the use of laboratory and field equipment.
Prehistoric times
In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems. Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies.
The oral tradition of preliterate societies had several features, the first of which was its fluidity. New information was constantly absorbed and adjusted to new circumstances or community needs. There were no archives or reports. This fluidity was closely related to the practical need to explain and justify a present state of affairs. Another feature was the tendency to describe the universe as just sky and earth, with a potential underworld. They were also prone to identify causes with beginnings, thereby providing a historical origin with an explanation. There was also a reliance on a "medicine man" or "wise woman" for healing, knowledge of divine or demonic causes of diseases, and in more extreme cases, for rituals such as exorcism, divination, songs, and incantations. Finally, there was an inclination to unquestioningly accept explanations that might be deemed implausible in more modern times while at the same time not being aware that such credulous behaviors could have posed problems.
The development of writing enabled humans to store and communicate knowledge across generations with much greater accuracy. Its invention was a prerequisite for the development of philosophy and later science in ancient times. Moreover, the extent to which philosophy and science would flourish in ancient times depended on the efficiency of a writing system (e.g., use of alphabets).
Earliest roots in the Ancient Near East
The earliest roots of science can be traced to the Ancient Near East, in particular Ancient Egypt and Mesopotamia in around 3000 to 1200 BCE.
Ancient Egypt
Number system and geometry
Starting in around 3000 BCE, the ancient Egyptians developed a numbering system that was decimal in character and had oriented their knowledge of geometry to solving practical problems such as those of surveyors and builders. Their development of geometry was itself a necessary development of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile River. The 3-4-5 right triangle and other rules of geometry were used to build rectilinear structures, and the post and lintel architecture of Egypt.
Disease and healing
Egypt was also a center of alchemy research for much of the Mediterranean. Based on the medical papyri written between 2500 and 1200 BCE, the ancient Egyptians believed that disease was mainly caused by the invasion of bodies by evil forces or spirits. Thus, in addition to using medicines, their healing therapies included prayer, incantation, and ritual. The Ebers Papyrus, written in around 1600 BCE, contains medical recipes for treating diseases related to the eyes, mouth, skin, internal organs, and extremities, as well as abscesses, wounds, burns, ulcers, swollen glands, tumors, headaches, and even bad breath. The Edwin Smith papyrus, written at about the same time, contains a surgical manual for treating wounds, fractures, and dislocations. The Egyptians believed that the effectiveness of their medicines depended on the preparation and administration under appropriate rituals. Medical historians believe that ancient Egyptian pharmacology, for example, was largely ineffective. Both the Ebers and Edwin Smith papyri applied the following components to the treatment of disease: examination, diagnosis, treatment, and prognosis, which display strong parallels to the basic empirical method of science and, according to G.E.R. Lloyd, played a significant role in the development of this methodology.
Calendar
The ancient Egyptians even developed an official calendar that contained twelve months, thirty days each, and five days at the end of the year. Unlike the Babylonian calendar or the ones used in Greek city-states at the time, the official Egyptian calendar was much simpler as it was fixed and did not take lunar and solar cycles into consideration.
Mesopotamia
The ancient Mesopotamians had extensive knowledge about the chemical properties of clay, sand, metal ore, bitumen, stone, and other natural materials, and applied this knowledge to practical use in manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. Metallurgy required knowledge about the properties of metals. Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information and were far more interested in studying the manner in which the gods had ordered the universe. Biology of non-human organisms was generally only written about in the context of mainstream academic disciplines. Animal physiology was studied extensively for the purpose of divination; the anatomy of the liver, which was seen as an important organ in haruspicy, was studied in particularly intensive detail. Animal behavior was also studied for divinatory purposes. Most information about the training and domestication of animals was probably transmitted orally without being written down, but one text dealing with the training of horses has survived.
Mesopotamian medicine
The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors prescribed magical formulas to be recited as well as medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur ( 2112 BCE – 2004 BCE). The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an āšipu. The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an asu, who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease.
Astronomy and celestial divination
In Babylonian astronomy, records of the motions of the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical periods identified by Mesopotamian proto-scientists are still widely used in Western calendars such as the solar year and the lunar month. Using this data, they developed mathematical methods to compute the changing length of daylight in the course of the year, predict the appearances and disappearances of the Moon and planets, and eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer and mathematician. Kiddinu's value for the solar year is in use for today's calendars. Babylonian astronomy was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy in decisive and fundamental ways."
To the Babylonians and other Near Eastern cultures, messages from the gods or omens were concealed in all natural phenomena that could be deciphered and interpreted by those who are adept. Hence, it was believed that the gods could speak through all terrestrial objects (e.g., animal entrails, dreams, malformed births, or even the color of a dog urinating on a person) and celestial phenomena. Moreover, Babylonian astrology was inseparable from Babylonian astronomy.
Mathematics
The Mesopotamian cuneiform tablet Plimpton 322, dating to the eighteenth century BCE, records a number of Pythagorean triplets such as (3, 4, 5) and (5, 12, 13), hinting that the ancient Mesopotamians might have been aware of the Pythagorean theorem over a millennium before Pythagoras.
Ancient and medieval South Asia and East Asia
Mathematical achievements from Mesopotamia had some influence on the development of mathematics in India, and there were confirmed transmissions of mathematical ideas between India and China, which were bidirectional. Nevertheless, the mathematical and scientific achievements in India and particularly in China occurred largely independently from those of Europe and the confirmed early influences that these two civilizations had on the development of science in Europe in the pre-modern era were indirect, with Mesopotamia and later the Islamic World acting as intermediaries. The arrival of modern science, which grew out of the Scientific Revolution, in India and China and the greater Asian region in general can be traced to the scientific activities of Jesuit missionaries who were interested in studying the region's flora and fauna during the 16th to 17th century.
India
Mathematics
The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus Valley Civilisation (c. 4th millennium BCE ~ c. 3rd millennium BCE). The people of this civilization made bricks whose dimensions were in the proportion 4:2:1, which is favorable for the stability of a brick structure. They also tried to standardize measurement of length to a high degree of accuracy. They designed a ruler—the Mohenjo-daro ruler—whose unit of length (approximately 1.32 inches or 3.4 centimeters) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length.
The Bakhshali manuscript contains problems involving arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. In the 3rd century BCE, Pingala presents the Pingala-sutras, the earliest known treatise on Sanskrit prosody. He also presents a numerical system by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers.
Indian astronomer and mathematician Aryabhata (476–550), in his Aryabhatiya (499), introduced the sine function in trigonometry and the number 0. In 628 CE, Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century.
Narayana Pandita (1340–1400) was an Indian mathematician. Plofker writes that his texts were the most significant Sanskrit mathematics treatises after those of Bhaskara II, other than those of the Kerala school. He wrote the Ganita Kaumudi (lit. "Moonlight of mathematics") in 1356 about mathematical operations. The work anticipated many developments in combinatorics.
During the 14th–16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama led the advancement of analysis by providing infinite series expansions of some trigonometric functions (anticipating the Taylor series) and an approximation of pi. Parameshvara (1380–1460) presents a version of the mean value theorem in his commentaries on Govindasvāmi and Bhāskara II. The Yuktibhāṣā was written by Jyeshtadeva in 1530.
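As an illustration of the kind of series attributed to Madhava (a modern Python sketch, not a rendering of the original derivation), the rapidly converging series pi = sqrt(12) * (1 - 1/(3*3) + 1/(5*3^2) - 1/(7*3^3) + ...) can be summed as follows:

```python
import math

def madhava_pi(terms: int) -> float:
    """Approximate pi with the Madhava series pi = sqrt(12) * sum_k (-1)^k / ((2k + 1) * 3^k)."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / ((2 * k + 1) * 3 ** k)
    return math.sqrt(12) * total

print(madhava_pi(10))  # ~3.141591 after only ten terms
print(madhava_pi(30))  # agrees with math.pi to roughly machine precision
print(math.pi)         # reference value
```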
Astronomy
The first textual mention of astronomical concepts comes from the Vedas, religious literature of India. According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month."
The first 12 chapters of the Siddhanta Shiromani, written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based on it.
In the Tantrasangraha treatise, Nilakantha Somayaji updated the Aryabhatan model for the interior planets, Mercury and Venus, and the equation that he specified for the center of these planets was more accurate than the ones in European or Islamic astronomy until the time of Johannes Kepler in the 17th century. Jai Singh II of Jaipur constructed five observatories called Jantar Mantars in total, in New Delhi, Jaipur, Ujjain, Mathura and Varanasi; they were completed between 1724 and 1735.
Grammar
Some of the earliest linguistic activities can be found in Iron Age India (1st millennium BCE) with the analysis of Sanskrit for the purpose of the correct recitation and interpretation of Vedic texts. The most notable grammarian of Sanskrit was Pāṇini (c. 520–460 BCE), whose grammar formulates close to 4,000 rules for Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. The Tolkāppiyam text, composed in the early centuries of the common era, is a comprehensive text on Tamil grammar, which includes sutras on orthography, phonology, etymology, morphology, semantics, prosody, sentence structure and the significance of context in language.
Medicine
Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among an early farming culture. The ancient text Suśrutasamhitā of Suśruta describes procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and other surgical procedures. The Charaka Samhita of Charaka describes ancient theories on human body, etiology, symptomology and therapeutics for a wide range of diseases. It also includes sections on the importance of diet, hygiene, prevention, medical education, and the teamwork of a physician, nurse and patient necessary for recovery to health.
Politics and state
The Arthaśāstra is an ancient Indian treatise on statecraft, economic policy and military strategy attributed to Kautilya, who is traditionally identified with Chanakya (c. 350–283 BCE). In this treatise, the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies, Invaders, and Corporations are analyzed and documented. Roger Boesche describes the Arthaśāstra as "a book of political realism, a book analyzing how the political world does work and not very often stating how it ought to work, a book that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve the state and the common good."
Logic
The development of Indian logic dates back to the Chandahsutra of Pingala and anviksiki of Medhatithi Gautama (c. 6th century BCE); the Sanskrit grammar rules of Pāṇini (c. 5th century BCE); the Vaisheshika school's analysis of atomism (c. 6th century BCE to 2nd century BCE); the analysis of inference by Gotama (c. 6th century BCE to 2nd century CE), founder of the Nyaya school of Hindu philosophy; and the tetralemma of Nagarjuna (c. 2nd century CE).
Indian logic stands as one of the three original traditions of logic, alongside the Greek and the Chinese logic. The Indian tradition continued to develop through early to modern times, in the form of the Navya-Nyāya school of logic.
In the 2nd century, the Buddhist philosopher Nagarjuna refined the Catuskoti form of logic. The Catuskoti is also often glossed Tetralemma (Greek) which is the name for a largely comparable, but not equatable, 'four corner argument' within the tradition of Classical logic.
Navya-Nyāya developed a sophisticated language and conceptual scheme that allowed it to raise, analyse, and solve problems in logic and epistemology. It systematised all the Nyāya concepts into four main categories: sense or perception (pratyakşa), inference (anumāna), comparison or similarity (upamāna), and testimony (sound or word; śabda).
China
Chinese mathematics
From the earliest times, the Chinese used a positional decimal system on counting boards in order to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a similar system to English: e.g. four thousand two hundred and seven. No symbol was used for zero. By the 1st century BCE, negative numbers and decimal fractions were in use, and The Nine Chapters on the Mathematical Art included methods for extracting higher-order roots by Horner's method, solving systems of linear equations, and applying the Pythagorean theorem. Cubic equations were solved in the Tang dynasty and solutions of equations of order higher than 3 appeared in print in 1245 CE by Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian.
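For illustration (a modern Python sketch, not a reconstruction of the counting-board layout), Horner's scheme evaluates a polynomial with one multiplication and one addition per coefficient, and a digit-by-digit search in the same spirit can extract a root such as the square root of 2:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's method.

    `coeffs` lists coefficients from the highest power down,
    e.g. [1, 0, -2] represents x**2 - 2.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Toy root extraction: refine the positive root of x**2 - 2 one decimal place at a time.
coeffs = [1, 0, -2]
x = 0.0
for step in (1.0, 0.1, 0.01, 0.001, 0.0001):
    while horner(coeffs, x + step) <= 0:
        x += step
print(round(x, 4))  # 1.4142
```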
Although the first attempts at an axiomatization of geometry appear in the Mohist canon in 330 BCE, Liu Hui developed algebraic methods in geometry in the 3rd century CE and also calculated pi to 5 significant figures. In 480, Zu Chongzhi improved this by discovering the ratio 355/113, which remained the most accurate value for 1200 years.
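The polygon-doubling idea behind such values can be sketched in Python (an added illustration using the standard side-doubling recurrence, not Liu Hui's original presentation): starting from a hexagon inscribed in a unit circle, each doubling of the number of sides gives a better estimate of pi.

```python
import math

def polygon_pi(doublings: int) -> float:
    """Approximate pi from an inscribed polygon, starting with a hexagon of side 1
    in a unit circle and doubling the number of sides `doublings` times."""
    sides = 6
    s = 1.0
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side length after doubling
        sides *= 2
    return sides * s / 2  # half the perimeter approximates pi

print(polygon_pi(5))   # 192-gon: ~3.14145
print(polygon_pi(10))  # 6144-gon: ~3.1415925
```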
Astronomical observations
Astronomical observations from China constitute the longest continuous sequence from any civilization and include records of sunspots (112 records from 364 BCE), supernovas (1054), and lunar and solar eclipses. By the 12th century, they could make reasonably accurate predictions of eclipses, but the knowledge of this was lost during the Ming dynasty, so that the Jesuit Matteo Ricci gained much favor in 1601 by his predictions. By 635 Chinese astronomers had observed that the tails of comets always point away from the sun.
From antiquity, the Chinese used an equatorial system for describing the skies and a star map from 940 was drawn using a cylindrical (Mercator) projection. The use of an armillary sphere is recorded from the 4th century BCE and a sphere permanently mounted in equatorial axis from 52 BCE. In 125 CE Zhang Heng used water power to rotate the sphere in real time. This included rings for the meridian and ecliptic. By 1270 they had incorporated the principles of the Arab torquetum.
In the Song Empire (960–1279) of Imperial China, Chinese scholar-officials unearthed, studied, and cataloged ancient artifacts.
Inventions
To better prepare for calamities, Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction. Although no tremors could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message came soon afterwards that an earthquake had indeed struck northwest of Luoyang (in what is now modern Gansu). Zhang called his device the 'instrument for measuring the seasonal winds and the movements of the Earth' (Houfeng didong yi 候风地动仪), so-named because he and others thought that earthquakes were most likely caused by the enormous compression of trapped air.
There are many notable contributors to early Chinese disciplines, inventions, and practices throughout the ages. One of the best examples would be the medieval Song Chinese Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube, and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation of silt and the find of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions over time, after observing petrified bamboo found underground at Yan'an, Shaanxi province. If not for Shen Kuo's writing, the architectural works of Yu Hao would be little known, along with the inventor of movable type printing, Bi Sheng (990–1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created a celestial atlas of star maps, wrote a treatise related to botany, zoology, mineralogy, and metallurgy, and had erected a large astronomical clocktower in Kaifeng city in 1088. To operate the crowning armillary sphere, his clocktower featured an escapement mechanism and the world's oldest known use of an endless power-transmitting chain drive.
The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European scientists first learned about the Chinese science and culture." Western academic thought on the history of Chinese technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. Among the technological accomplishments of China were, according to the British scholar Needham, the water-powered celestial globe (Zhang Heng), dry docks, sliding calipers, the double-action piston pump, the blast furnace, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the winnowing machine, gunpowder, the raised-relief map, toilet paper, the efficient harness, along with contributions in logic, astronomy, medicine, and other fields.
However, cultural factors prevented these Chinese achievements from developing into "modern science". According to Needham, it may have been the religious and philosophical framework of Chinese intellectuals which made them unable to accept the ideas of laws of nature.
Pre-Columbian Mesoamerica
During the Middle Formative Period (c. 900 BCE – c. 300 BCE) of Pre-Columbian Mesoamerica, the Zapotec civilization, heavily influenced by the Olmec civilization, established the first known full writing system of the region (possibly predated by the Olmec Cascajal Block), as well as the first known astronomical calendar in Mesoamerica. Following a period of initial urban development in the Preclassical period, the Classic Maya civilization (c. 250 CE – c. 900 CE) built on the shared heritage of the Olmecs by developing the most sophisticated systems of writing, astronomy, calendrical science, and mathematics among Mesoamerican peoples. The Maya developed a positional numeral system with a base of 20 that included the use of zero for constructing their calendars. Maya writing, which was developed by 200 BCE, widespread by 100 BCE, and rooted in Olmec and Zapotec scripts, contains easily discernible calendar dates in the form of logographs representing numbers, coefficients, and calendar periods amounting to 20 days and even 20 years for tracking social, religious, political, and economic events in 360-day years.
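For illustration, a plain base-20 positional expansion can be computed as in the Python sketch below; note that this is a simplification, since the Maya Long Count calendar modified the second position to run only to 17 (giving a 360-day unit), a detail not modelled here.

```python
def to_base20(n: int) -> list[int]:
    """Return the pure base-20 digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 20)
        n //= 20
    return digits[::-1]

print(to_base20(365))   # [18, 5]  -> 18*20 + 5
print(to_base20(7200))  # [18, 0, 0]
```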
Classical antiquity and Greco-Roman science
The contributions of the Ancient Egyptians and Mesopotamians in the areas of astronomy, mathematics, and medicine had entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes. Inquiries were also aimed at such practical goals such as establishing a reliable calendar or determining how to cure a variety of illnesses. The ancient people who were considered the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled profession (for example, physicians), or as followers of a religious tradition (for example, temple healers).
Pre-socratics
The earliest Greek philosophers, known as the pre-Socratics, provided competing answers to the question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?" The pre-Socratic philosopher Thales (640–546 BCE) of Miletus, identified by later authors such as Aristotle as the first of the Ionian philosophers, postulated non-supernatural explanations for natural phenomena: for example, that land floats on water and that earthquakes are caused by the agitation of the water upon which the land floats, rather than by the god Poseidon. Thales' student Pythagoras of Samos founded the Pythagorean school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical in shape. Leucippus (5th century BCE) introduced atomism, the theory that all matter is made of indivisible, imperishable units called atoms. This was greatly expanded on by his pupil Democritus and later Epicurus.
Natural philosophy
Plato and Aristotle produced the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato founded the Platonic Academy in 387 BCE, whose motto was "Let none unversed in geometry enter here," and also turned out many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals on Lesbos, classified more than 540 animal species, and dissected at least 50. Aristotle's writings profoundly influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific Revolution.
Aristotle also contributed to theories of the elements and the cosmos. He believed that the celestial bodies (such as the planets and the Sun) had something called an unmoved mover that put the celestial bodies in motion. Aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as God. Aristotle did not have the technological advancements that would have explained the motion of celestial bodies. In addition, Aristotle had many views on the elements. He believed that everything was derived of the elements earth, water, air, fire, and lastly the Aether. The Aether was a celestial element, and therefore made up the matter of the celestial bodies. The elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. The motion of these elements begins with earth being the closest to "the Earth," then water, air, fire, and finally Aether. In addition to the makeup of all things, Aristotle came up with theories as to why things did not return to their natural motion. He understood that water sits above earth, air above water, and fire above air in their natural state. He explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements – thus not allowing the elements making one who they are to return to their natural state.
The important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. In the Hellenistic age scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics and deliberate empirical research, in their scientific investigations. Thus, clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
Neither reason nor inquiry began with the Ancient Greeks, but the Socratic method did, along with the idea of Forms, give great advances in geometry, logic, and the natural sciences. According to Benjamin Farrington, former professor of Classics at Swansea University:
"Men were weighing for thousands of years before Archimedes worked out the laws of equilibrium; they must have had practical and intuitional knowledge of the principals involved. What Archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system."
and again:
"With astonishment we find ourselves on the threshold of modern science. Nor should it be supposed that by some trick of translation the extracts have been given an air of modernity. Far from it. The vocabulary of these writings and their style are the source from which our own vocabulary and style have been derived."
Greek astronomy
The astronomer Aristarchus of Samos was the first known person to propose a heliocentric model of the Solar System, while the geographer Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BCE) produced the first systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BCE), an analog computer for calculating the position of planets. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
Hellenistic medicine
There was no defined societal structure for healthcare in the age of Hippocrates. At that time, people still relied largely on religious reasoning to explain illnesses. Hippocrates introduced the first healthcare system based on science and clinical protocols, and his theories about physics and medicine helped pave the way for an organized medical structure in society. In medicine, Hippocrates (c. 460 – c. 370 BCE) and his followers were the first to describe many diseases and medical conditions and developed the Hippocratic Oath for physicians, still relevant and in use today. Hippocrates' ideas are expressed in the Hippocratic Corpus, a collection that contains descriptions of medical philosophies and of how disease and lifestyle choices affect the physical body. Hippocrates shaped a Westernized, professional relationship between physician and patient, and is also known as "the Father of Medicine". Herophilos (335–280 BCE) was the first to base his conclusions on dissection of the human body and to describe the nervous system. Galen (129 – c. 200 CE) performed many audacious operations, including brain and eye surgeries, that were not tried again for almost two millennia.
Greek mathematics
In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced the concepts of definition, axiom, theorem and proof still in use today in his Elements, considered the most influential textbook ever written. Archimedes, considered one of the greatest mathematicians of all time, is credited with using the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He is also known in physics for laying the foundations of hydrostatics, statics, and the explanation of the principle of the lever.
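As a brief illustration in modern notation (not the form in which the ancient texts state them), two of Archimedes' results mentioned above can be written compactly: the quadrature of the parabola, obtained by summing a geometric series of inscribed triangles, and his bounds on pi from 96-sided polygons.

```latex
% Quadrature of the parabola: a parabolic segment has 4/3 the area of the
% inscribed triangle on the same base and vertex, via a geometric series.
A_{\text{segment}} = A_{\text{triangle}}\left(1 + \tfrac{1}{4} + \tfrac{1}{16} + \cdots\right) = \frac{4}{3}\,A_{\text{triangle}}

% Bounds on pi from inscribed and circumscribed 96-gons (Measurement of a Circle):
3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}
```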
Other developments
Theophrastus wrote some of the earliest descriptions of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties, such as hardness. Pliny the Elder produced one of the largest encyclopedias of the natural world in 77 CE, and was a successor to Theophrastus. For example, he accurately described the octahedral shape of the diamond and noted that diamond dust is used by engravers to cut and polish other gems owing to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography, while his notes on other minerals presage mineralogy. He recognized that other minerals have characteristic crystal shapes, but in one example confused the crystal habit with the work of lapidaries. Pliny was also the first to show that amber was a resin from pine trees, as evidenced by the insects trapped within it.
The development of archaeology has its roots in history and with those who were interested in the past, such as kings and queens who wanted to show past glories of their respective nations. The 5th-century-BCE Greek historian Herodotus was the first scholar to systematically study the past and perhaps the first to examine artifacts.
Greek scholarship under Roman rule
During the rule of Rome, famous historians such as Polybius, Livy and Plutarch documented the rise of the Roman Republic, and the organization and histories of other nations, while statesmen like Julius Caesar, Cicero, and others provided examples of the politics of the republic and Rome's empire and wars. The study of politics during this age was oriented toward understanding history, understanding methods of governing, and describing the operation of governments.
The Roman conquest of Greece did not diminish learning and culture in the Greek provinces. On the contrary, the appreciation of Greek achievements in literature, philosophy, politics, and the arts by Rome's upper class coincided with the increased prosperity of the Roman Empire. Greek settlements had existed in Italy for centuries and the ability to read and speak Greek was not uncommon in Italian cities such as Rome. Moreover, the settlement of Greek scholars in Rome, whether voluntarily or as slaves, gave Romans access to teachers of Greek literature and philosophy. Conversely, young Roman scholars also studied abroad in Greece and upon their return to Rome, were able to convey Greek achievements to their Latin leadership. And despite the translation of a few Greek texts into Latin, Roman scholars who aspired to the highest level did so using the Greek language. The Roman statesman and philosopher Cicero (106 – 43 BCE) was a prime example. He had studied under Greek teachers in Rome and then in Athens and Rhodes. He mastered considerable portions of Greek philosophy, wrote Latin treatises on several topics, and even wrote Greek commentaries of Plato's Timaeus as well as a Latin translation of it, which has not survived.
In the beginning, support for scholarship in Greek knowledge was almost entirely funded by the Roman upper class. There were all sorts of arrangements, ranging from a talented scholar being attached to a wealthy household to owning educated Greek-speaking slaves. In exchange, scholars who succeeded at the highest level had an obligation to provide advice or intellectual companionship to their Roman benefactors, or even to take care of their libraries. The less fortunate or accomplished ones would teach their children or perform menial tasks. The level of detail and sophistication of Greek knowledge was adjusted to suit the interests of their Roman patrons. That meant popularizing Greek knowledge by presenting information that was of practical value, such as medicine or logic (for courts and politics), but excluding the subtle details of Greek metaphysics and epistemology. Beyond the basics, the Romans did not value natural philosophy and considered it an amusement for leisure time.
Commentaries and encyclopedias were the means by which Greek knowledge was popularized for Roman audiences. The Greek scholar Posidonius (c. 135-c. 51 BCE), a native of Syria, wrote prolifically on history, geography, moral philosophy, and natural philosophy. He greatly influenced Latin writers such as Marcus Terentius Varro (116-27 BCE), who wrote the encyclopedia Nine Books of Disciplines, which covered nine arts: grammar, rhetoric, logic, arithmetic, geometry, astronomy, musical theory, medicine, and architecture. The Disciplines became a model for subsequent Roman encyclopedias and Varro's nine liberal arts were considered suitable education for a Roman gentleman. The first seven of Varro's nine arts would later define the seven liberal arts of medieval schools. The pinnacle of the popularization movement was the Roman scholar Pliny the Elder (23/24–79 CE), a native of northern Italy, who wrote several books on the history of Rome and grammar. His most famous work was his voluminous Natural History.
After the death of the Roman Emperor Marcus Aurelius in 180 CE, the favorable conditions for scholarship and learning in the Roman Empire were upended by political unrest, civil war, urban decay, and looming economic crisis. In around 250 CE, barbarians began attacking and invading the Roman frontiers. These combined events led to a general decline in political and economic conditions. The living standards of the Roman upper class were severely impacted, and their loss of leisure diminished scholarly pursuits. Moreover, during the 3rd and 4th centuries CE, the Roman Empire was administratively divided into two halves: Greek East and Latin West. These administrative divisions weakened the intellectual contact between the two regions. Eventually, both halves went their separate ways, with the Greek East becoming the Byzantine Empire. Christianity was also steadily expanding during this time and soon became a major patron of education in the Latin West. Initially, the Christian church adopted some of the reasoning tools of Greek philosophy in the 2nd and 3rd centuries CE to defend its faith against sophisticated opponents. Nevertheless, Greek philosophy received a mixed reception from leaders and adherents of the Christian faith. Some, such as Tertullian (c. 155-c. 230 CE), were vehemently opposed to philosophy, denouncing it as heretical. Others, such as Augustine of Hippo (354-430 CE), were ambivalent and defended Greek philosophy and science as the best ways to understand the natural world, treating it as a handmaiden (or servant) of religion. Education in the West began its gradual decline, along with the rest of the Western Roman Empire, due to invasions by Germanic tribes, civil unrest, and economic collapse. Contact with the classical tradition was lost in specific regions such as Roman Britain and northern Gaul but continued to exist in Rome, northern Italy, southern Gaul, Spain, and North Africa.
Middle Ages
In the Middle Ages, classical learning continued in three major linguistic cultures and civilizations: Greek (the Byzantine Empire), Arabic (the Islamic world), and Latin (Western Europe).
Byzantine Empire
Preservation of Greek heritage
The fall of the Western Roman Empire led to a deterioration of the classical tradition in the western part (or Latin West) of Europe during the 5th century. In contrast, the Byzantine Empire resisted the barbarian attacks, preserving and improving upon this learning.
While the Byzantine Empire still held learning centers such as Constantinople, Alexandria and Antioch, Western Europe's knowledge was concentrated in monasteries until the development of medieval universities in the 12th century. The curriculum of monastic schools included the study of the few available ancient texts and of new works on practical subjects like medicine and timekeeping.
In the sixth century in the Byzantine Empire, Isidore of Miletus compiled Archimedes' mathematical works in the Archimedes Palimpsest, where all Archimedes' mathematical contributions were collected and studied.
John Philoponus, another Byzantine scholar, was the first to question Aristotle's teaching of physics, introducing the theory of impetus. The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics. The works of John Philoponus inspired Galileo Galilei ten centuries later.
Collapse
During the Fall of Constantinople in 1453, a number of Greek scholars fled to northern Italy, where they fueled the era later commonly known as the "Renaissance", bringing with them a great deal of classical learning, including an understanding of botany, medicine, and zoology. Byzantium also gave the West important inputs: John Philoponus' criticism of Aristotelian physics, and the works of Dioscorides.
Islamic world
This was the period (8th–14th century CE) of the Islamic Golden Age where commerce thrived, and new ideas and technologies emerged such as the importation of papermaking from China, which made the copying of manuscripts inexpensive.
Translations and Hellenization
The eastward transmission of Greek heritage to Western Asia was a slow and gradual process that spanned over a thousand years, beginning with the Asian conquests of Alexander the Great in 335 BCE to the founding of Islam in the 7th century CE. The birth and expansion of Islam during the 7th century was quickly followed by its Hellenization. Knowledge of Greek conceptions of the world was preserved and absorbed into Islamic theology, law, culture, and commerce, which were aided by the translations of traditional Greek texts and some Syriac intermediary sources into Arabic during the 8th–9th century.
Education and scholarly pursuits
Madrasas were centers for many different religious and scientific studies and were the culmination of different institutions such as mosques based around religious studies, housing for out-of-town visitors, and educational institutions focused on the natural sciences. Unlike Western universities, students at a madrasa would learn from one specific teacher, who would issue a certificate at the completion of their studies called an ijazah. An ijazah differs from a Western university degree in many ways: it is issued by a single person rather than an institution, and it is not a degree declaring adequate knowledge of broad subjects but rather a license to teach and pass on a very specific set of texts. Women were also allowed to attend madrasas, as both students and teachers, something not seen in Western higher education until the 1800s. Madrasas were more than just academic centers. The Süleymaniye Mosque, for example, was one of the most well-known madrasa complexes; built by Suleiman the Magnificent in the 16th century, it was home to a hospital and medical college, a kitchen, and a children's school, as well as serving as a temporary home for travelers.
Higher education at a madrasa (or college) was focused on Islamic law and religious science and students had to engage in self-study for everything else. And despite the occasional theological backlash, many Islamic scholars of science were able to conduct their work in relatively tolerant urban centers (e.g., Baghdad and Cairo) and were protected by powerful patrons. They could also travel freely and exchange ideas as there were no political barriers within the unified Islamic state. Islamic science during this time was primarily focused on the correction, extension, articulation, and application of Greek ideas to new problems.
Advancements in mathematics
Most of the achievements by Islamic scholars during this period were in mathematics. Arabic mathematics was a direct descendant of Greek and Indian mathematics. For instance, what are now known as Arabic numerals originally came from India, but Muslim mathematicians made several key refinements to the number system, such as the introduction of decimal point notation. The mathematician Muhammad ibn Musa al-Khwarizmi (c. 780–850) gave his name to the concept of the algorithm, while the term algebra is derived from al-jabr, the beginning of the title of one of his publications. Islamic trigonometry continued from the works of Ptolemy's Almagest and the Indian Siddhanta; Islamic mathematicians added trigonometric functions, drew up tables, and applied trigonometry to spheres and planes. Many of their engineers, instrument makers, and surveyors contributed books on applied mathematics. It was in astronomy that Islamic mathematicians made their greatest contributions. Al-Battani (c. 858–929) improved the measurements of Hipparchus, preserved in Ptolemy's Hè Megalè Syntaxis (The Great Treatise), known in its Arabic translation as the Almagest. Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. Corrections were made to Ptolemy's geocentric model by al-Battani, Ibn al-Haytham, Averroes and the Maragha astronomers such as Nasir al-Din al-Tusi, Mu'ayyad al-Din al-Urdi and Ibn al-Shatir.
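As a hedged illustration of what al-jabr amounts to, here is a quadratic of the kind treated in al-Khwarizmi's treatise ("squares and roots equal a number"), rendered in modern symbolic notation that he himself did not use:

```latex
% Completing the square for x^2 + 10x = 39, a classic worked example of this type:
x^2 + 10x = 39
\;\Longrightarrow\; x^2 + 10x + 25 = 64
\;\Longrightarrow\; (x + 5)^2 = 64
\;\Longrightarrow\; x = 3 \quad (\text{taking the positive root})
```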
Scholars with geometric skills made significant improvements to the earlier classical texts on light and sight by Euclid, Aristotle, and Ptolemy. The earliest surviving Arabic treatises were written in the 9th century by Abū Ishāq al-Kindī, Qustā ibn Lūqā, and (in fragmentary form) Ahmad ibn Isā. Later in the 11th century, Ibn al-Haytham (known as Alhazen in the West), a mathematician and astronomer, synthesized a new theory of vision based on the works of his predecessors. His new theory included a complete system of geometrical optics, which was set in great detail in his Book of Optics. His book was translated into Latin and was relied upon as a principal source on the science of optics in Europe until the 17th century.
Institutionalization of medicine
The medical sciences were prominently cultivated in the Islamic world. The works of Greek medical theorists, especially Galen, were translated into Arabic, and there was an outpouring of medical texts by Islamic physicians aimed at organizing, elaborating, and disseminating classical medical knowledge. Medical specialties started to emerge, such as those involved in the treatment of eye diseases such as cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist who wrote extensively on medicine; his two most notable works in medicine are the Kitāb al-shifāʾ ("Book of Healing") and The Canon of Medicine, both of which were used as standard medicinal texts in both the Muslim world and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases and the introduction of clinical pharmacology. Institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as an institution for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was extensive in the Islamic empire and was scattered throughout. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting.
Decline
Islamic science began its decline in the 12th–13th century, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th century. The Mongols sacked Baghdad, capital of the Abbasid Caliphate, in 1258, which ended the Abbasid empire. Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory. Islamic astronomy continued to flourish into the 16th century.
Western Europe
By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population. Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe.
In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (d. 1326) produced the first known anatomy textbook based on human dissection.
As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western European maritime explorers to search for a direct sea route to Asia, ultimately leading to the Age of Discovery.
Technological advances were also made, such as the early flight of Eilmer of Malmesbury (who had studied mathematics in 11th-century England), and the metallurgical achievements of the Cistercian blast furnace at Laskill.
Medieval universities
An intellectual revitalization of Western Europe started with the birth of medieval universities in the 12th century. These urban institutions grew from the informal scholarly activities of learned friars who visited monasteries, consulted libraries, and conversed with other fellow scholars. A friar who became well-known would attract a following of disciples, giving rise to a brotherhood of scholars (or collegium in Latin). A collegium might travel to a town or request a monastery to host them. However, if the number of scholars within a collegium grew too large, they would opt to settle in a town instead. As the number of collegia within a town grew, the collegia might request that their king grant them a charter that would convert them into a universitas. Many universities were chartered during this period, with the first in Bologna in 1088, followed by Paris in 1150, Oxford in 1167, and Cambridge in 1231. The granting of a charter meant that the medieval universities were partially sovereign and independent from local authorities. Their independence allowed them to conduct themselves and judge their own members based on their own rules. Furthermore, as initially religious institutions, their faculties and students were protected from capital punishment (e.g., gallows). Such independence was a matter of custom, which could, in principle, be revoked by their respective rulers if they felt threatened. Discussions of various subjects or claims at these medieval institutions, no matter how controversial, were done in a formalized way so as to declare such discussions as being within the bounds of a university and therefore protected by the privileges of that institution's sovereignty. A claim could be described as ex cathedra (literally "from the chair", used within the context of teaching) or ex hypothesi (by hypothesis). This meant that the discussions were presented as purely an intellectual exercise that did not require those involved to commit themselves to the truth of a claim or to proselytize. Modern academic concepts and practices such as academic freedom or freedom of inquiry are remnants of these medieval privileges that were tolerated in the past.
The curriculum of these medieval institutions centered on the seven liberal arts, which were aimed at providing beginning students with the skills for reasoning and scholarly language. Students would begin their studies with the first three liberal arts or Trivium (grammar, rhetoric, and logic) followed by the next four liberal arts or Quadrivium (arithmetic, geometry, astronomy, and music). Those who completed these requirements and received their baccalaureate (or Bachelor of Arts) had the option to join the higher faculty (law, medicine, or theology), which would confer an LLD for a lawyer, an MD for a physician, or a ThD for a theologian. Students who chose to remain in the lower faculty (arts) could work towards a Magister (or Master's) degree and would study three philosophies: metaphysics, ethics, and natural philosophy. Latin translations of Aristotle's works such as De anima (On the Soul) and the commentaries on them were required readings. As time passed, the lower faculty was allowed to confer its own doctoral degree, the PhD. Many of the Masters were drawn to encyclopedias and used them as textbooks. But these scholars yearned for the complete original texts of the Ancient Greek philosophers, mathematicians, and physicians such as Aristotle, Euclid, and Galen, which were not available to them at the time. These Ancient Greek texts were to be found in the Byzantine Empire and the Islamic World.
Translations of Greek and Arabic sources
Contact with the Byzantine Empire, and with the Islamic world during the Reconquista and the Crusades, allowed Latin Europe access to scientific Greek and Arabic texts, including the works of Aristotle, Ptolemy, Isidore of Miletus, John Philoponus, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars had access to the translation programs of Raymond of Toledo, who sponsored the 12th-century Toledo School of Translators from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly. The European universities aided materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European university put many works about the natural world and the study of nature at the center of its curriculum, with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendent."
At the beginning of the 13th century, there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors, allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural philosophy in these texts began to be extended by scholastics such as Robert Grosseteste, Roger Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature, and in the empirical approach admired by Bacon, particularly in his Opus Majus. Pierre Duhem's thesis that the Condemnation of 1277 issued by Stephen Tempier, the Bishop of Paris, marked the birth of modern science led to the study of medieval science as a serious discipline, "but no one in the field any longer endorses his view that modern science started in 1277". However, many scholars agree with Duhem's view that the mid-late Middle Ages saw important scientific developments.
Medieval science
The first half of the 14th century saw much important scientific work, largely within the framework of scholastic commentaries on Aristotle's scientific writings. William of Ockham emphasized the principle of parsimony: natural philosophers should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object and an intermediary "sensible species" is not needed to transmit an image of an object to the eye. Scholars such as Jean Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept of inertia. The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis without considering the causes of motion.
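One concrete product of the Oxford Calculators' kinematics, stated here in modern notation as an illustration rather than in their original formulation, is the mean speed theorem for uniformly accelerated motion:

```latex
% Mean speed theorem: a body uniformly accelerated from speed v_0 to v_f over time t
% covers the same distance as a body moving for time t at the mean speed.
s = \frac{v_0 + v_f}{2}\, t
% For acceleration a from rest (v_0 = 0, v_f = a t) this gives s = \tfrac{1}{2} a t^2.
```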
In 1348, the Black Death and other disasters brought a sudden end to this period of philosophic and scientific development. Yet the rediscovery of ancient texts was stimulated by the Fall of Constantinople in 1453, when many Byzantine scholars sought refuge in the West. Meanwhile, the introduction of printing was to have a great effect on European society: the easier dissemination of the printed word democratized learning and allowed ideas such as algebra to propagate more rapidly. These developments paved the way for the Scientific Revolution, in which the scientific inquiry halted at the start of the Black Death resumed.
Renaissance
Revival of learning
The renewal of learning in Europe began with 12th-century Scholasticism. The Northern Renaissance showed a decisive shift in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine). Thus modern science in Europe resumed in a period of great upheaval: the Protestant Reformation and Catholic Counter-Reformation, the discovery of the Americas by Christopher Columbus, the Fall of Constantinople, and the re-discovery of Aristotle during the Scholastic period all presaged large social and political changes. A suitable environment was thus created in which it became possible to question scientific doctrine, in much the same way that Martin Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic view of anatomy.
The development of cristallo, a clear glass which appeared in Venice around 1450, also contributed to the advancement of science in the period. The new glass allowed for better spectacles and eventually led to the inventions of the telescope and microscope.
Theophrastus' work on rocks, Peri lithōn, remained authoritative for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution.
During the Italian Renaissance, Niccolò Machiavelli established the emphasis of modern political science on the direct empirical observation of political institutions and actors. Later, the expansion of the scientific paradigm during the Enlightenment further pushed the study of politics beyond normative determinations. In particular, statistics, originally developed to study the subjects of the state, came to be applied to polling and voting.
In archaeology, the 15th and 16th centuries saw the rise of antiquarians in Renaissance Europe who were interested in the collection of artifacts.
Scientific Revolution and birth of New Science
The early modern period is seen as a flowering of the European Renaissance. There was a willingness to question previously held truths and search for new answers. This resulted in a period of major scientific advancements, now known as the Scientific Revolution, which led to the emergence of a New Science that was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. The Scientific Revolution is a convenient boundary between ancient thought and classical physics, and is traditionally held to have begun in 1543, when the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, and also De Revolutionibus, by the astronomer Nicolaus Copernicus, were first printed. The period culminated with the publication of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe.
Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal. In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes. Christiaan Huygens derived the centripetal and centrifugal forces and was the first to transfer mathematical inquiry to describe unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic.
Heliocentrism
The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all heavenly spheres, containing the planets and other objects in the cosmos, rotated around the Sun. His heliocentric model also proposed that the stars were fixed and did not move at all. His theory proposed the yearly revolution of the Earth and the other heavenly spheres around the Sun, and he was able to calculate the distances of the planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to determine the order of the heavenly spheres by distance. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia. Aristarchus of Samos had proposed that the Earth revolved around the Sun but said nothing about the order, motion, or rotation of the other heavenly spheres. Seleucus of Seleucia also proposed the revolution of the Earth around the Sun without addressing the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon revolved around the Earth and could be used to explain the tides of the oceans, further demonstrating his grasp of the heliocentric idea.
Age of Enlightenment
Continuation of Scientific Revolution
The Scientific Revolution continued into the Age of Enlightenment, which accelerated the development of modern science.
Planets and orbits
The heliocentric model revived by Nicolaus Copernicus was followed by the model of planetary motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits, with the Sun at one focus of the ellipse. In Astronomia Nova (A New Astronomy), the first two of his laws of planetary motion were derived from the analysis of the orbit of Mars. Kepler introduced the revolutionary concept of the planetary orbit. Because of his work, astronomical phenomena came to be seen as being governed by physical laws.
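As an illustration in modern notation (Kepler himself worked geometrically rather than with this formula), his first law can be written as the polar equation of an ellipse with the Sun at one focus:

```latex
% Polar equation of a Keplerian orbit, with the Sun at the focus (the origin):
r(\theta) = \frac{a\,(1 - e^2)}{1 + e\cos\theta}
% a = semi-major axis, e = eccentricity (0 <= e < 1 for an ellipse).
% The second law states that the Sun-planet line sweeps out equal areas in equal times.
```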
Emergence of chemistry
A decisive moment came when "chemistry" was distinguished from alchemy by Robert Boyle in his work The Sceptical Chymist in 1661, although the alchemical tradition continued for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer, and the work of Antoine Lavoisier ("father of modern chemistry") on oxygen and the law of conservation of mass, which refuted phlogiston theory. Modern chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories promoted by alchemy, medicine, manufacturing and mining.
Calculus and Newtonian mechanics
In 1687, Isaac Newton published the Principia Mathematica, detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity.
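In modern symbolic form (the Principia itself argues geometrically), the two theories can be summarized as:

```latex
% Newton's second law of motion and the law of universal gravitation:
\vec{F} = m\,\vec{a}
\qquad\qquad
F = G\,\frac{m_1 m_2}{r^2}
% Combined, these reproduce Kepler's elliptical orbits and give, for example, the
% Earth's surface gravity g = G M_\oplus / R_\oplus^2 \approx 9.8\ \mathrm{m/s^2}.
```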
Circulatory system
William Harvey published De Motu Cordis in 1628, which revealed his conclusions based on his extensive studies of vertebrate circulatory systems. He identified the central role of the heart, arteries, and veins in producing blood movement in a circuit, and failed to find any confirmation of Galen's pre-existing notions of heating and cooling functions. The history of early modern biology and medicine is often told through the search for the seat of the soul. In the descriptions of his foundational work in medicine, Galen presents the distinctions between arteries, veins, and nerves using the vocabulary of the soul.
Scientific societies and journals
A critical innovation was the creation of permanent scientific societies and their scholarly journals, which dramatically sped the diffusion of new ideas. Typical was the founding of the Royal Society in London in 1660 and, in 1665, of its journal the Philosophical Transactions of the Royal Society, the first scientific journal in English. 1665 also saw the first journal in French, the Journal des sçavans. Drawing on the works of Newton, Descartes, Pascal and Leibniz, science was on a path to modern mathematics, physics and technology by the time of the generation of Benjamin Franklin (1706–1790), Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783). Denis Diderot's Encyclopédie, published between 1751 and 1772, brought this new understanding to a wider audience. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant impact of science upon religion), and society and politics in general (Adam Smith, Voltaire).
Developments in geology
Geology did not undergo systematic restructuring during the Scientific Revolution but instead existed as a cloud of isolated, disconnected ideas about rocks, minerals, and landforms long before it became a coherent science. Robert Hooke formulated a theory of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains of once-living creatures. Beginning with Thomas Burnet's Sacred Theory of the Earth in 1681, natural philosophers began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations of Earth history.
Post-Scientific Revolution
Bioelectricity
During the late 18th century, researchers such as Hugh Williamson and John Walsh experimented on the effects of electricity on the human body. Further studies by Luigi Galvani and Alessandro Volta established the electrical nature of what Volta called galvanism.
Developments in geology
Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries. Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker, Sweden's Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals, a collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier and Alexandre Brongniart, following in the steps of Steno, to argue that layers of rock could be dated by the fossils they contained: a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those of similar age in other, distant localities.
Birth of modern economics
The basis for classical economics is Adam Smith's An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776. Smith criticized mercantilism, advocating a system of free trade with division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only by self-interest. Although the "invisible hand" is mentioned only in passing in the middle of the Wealth of Nations, it is often advanced as Smith's central message.
Social science
Anthropology can best be understood as an outgrowth of the Age of Enlightenment. It was during this period that Europeans attempted systematically to study human behavior. Traditions of jurisprudence, history, philology and sociology developed during this time and informed the development of the social sciences of which anthropology was a part.
19th century
The 19th century saw the birth of science as a profession. William Whewell coined the term scientist in 1833, which soon replaced the older term natural philosopher.
Developments in physics
In physics, the behavior of electricity and magnetism was studied by Giovanni Aldini, Alessandro Volta, Michael Faraday, Georg Ohm, and others. The experiments, theories and discoveries of Michael Faraday, André-Marie Ampère, James Clerk Maxwell, and their contemporaries led to the unification of the two phenomena into a single theory of electromagnetism as described by Maxwell's equations. Thermodynamics led to an understanding of heat, and the notion of energy was defined.
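For reference, and as a modern restatement rather than Maxwell's original formulation (he wrote the theory as a much larger set of component equations), the unified theory is usually summarized today in differential form:

```latex
% Maxwell's equations in vacuum (SI units):
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \vec{B} = 0, \qquad
\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \qquad
\nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}
% In empty space these admit wave solutions traveling at c = 1/\sqrt{\mu_0 \varepsilon_0}.
```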
Discovery of Neptune
In astronomy, advances in observation and in optical systems in the 19th century resulted in the first observation of an asteroid (1 Ceres) in 1801 and the discovery of the planet Neptune in 1846.
Developments in mathematics
In mathematics, the notion of complex numbers finally matured and led to a subsequent analytical theory; mathematicians also began the use of hypercomplex numbers. Karl Weierstrass and others carried out the arithmetization of analysis for functions of real and complex variables. The century also saw new progress in geometry beyond the classical theories of Euclid, after a period of nearly two thousand years. The mathematical science of logic likewise had revolutionary breakthroughs after a similarly long period of stagnation. But the most important steps in science at this time were the ideas formulated by the creators of electrical science. Their work changed the face of physics and made possible new technologies such as electric power, electrical telegraphy, the telephone, and radio.
Developments in chemistry
In chemistry, Dmitri Mendeleev, following the atomic theory of John Dalton, created the first periodic table of elements. Other highlights include the discoveries unveiling the nature of atomic structure and matter, made alongside developments in chemistry, and the discovery of new kinds of radiation. The theory that all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question was not settled as proven for a hundred years. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of refined materials provided a ready supply of products which provided not only energy, but also synthetic materials for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to living organisms resulted in physiological chemistry, the precursor to biochemistry.
Age of the Earth
Over the first half of the 19th century, geologists such as Charles Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through the 19th century, the focus of geology shifted from description and classification to attempts to understand how the surface of the Earth had changed. The first comprehensive theories of mountain building were proposed during this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys were formed, over millions of years by the rivers that flow through them. After the discovery of radioactivity, radiometric dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely dismissed when he proposed it in the 1910s, but new data gathered in the 1950s and 1960s led to the theory of plate tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a wide range of seemingly unrelated geological phenomena. Since the 1960s it has served as the unifying principle in geology.
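As a brief illustration of the principle behind the radiometric dating methods mentioned above (a modern restatement, not a description of any particular historical measurement), the age of a sample follows from the exponential decay law:

```latex
% Radioactive decay: N(t) = N_0 e^{-\lambda t}, with half-life t_{1/2} = \ln 2 / \lambda.
N(t) = N_0\, e^{-\lambda t}
\qquad\Longrightarrow\qquad
t = \frac{1}{\lambda}\,\ln\!\frac{N_0}{N(t)}
% Measuring the remaining parent isotope N(t), together with the daughter products
% that constrain N_0, therefore dates the rock.
```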
Evolution and inheritance
Perhaps the most prominent, controversial, and far-reaching theory in all of science has been the theory of evolution by natural selection, which was independently formulated by Charles Darwin and Alfred Wallace. It was described in detail in Darwin's book The Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Implications of evolution on fields outside of pure science have led to both opposition and support from different parts of society, and profoundly influenced the popular understanding of "man's place in the universe". Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
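As an illustration of the principles Mendel described (stated in later notation rather than Mendel's own), a cross of two plants heterozygous for a single trait yields genotypes in a 1:2:1 ratio and, with A dominant over a, the 3:1 phenotype ratio Mendel observed in his pea experiments:

```latex
% Monohybrid cross Aa x Aa:
\tfrac{1}{4}\,AA \;+\; \tfrac{1}{2}\,Aa \;+\; \tfrac{1}{4}\,aa
\quad\Longrightarrow\quad
\text{dominant} : \text{recessive phenotypes} = 3 : 1
```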
Germ theory
Another important landmark in medicine and biology was the successful effort to prove the germ theory of disease. Following this, Louis Pasteur made the first vaccine against rabies, and also made many discoveries in the field of chemistry, including the asymmetry of crystals. In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries, and handwashing came into use only with the discoveries of British surgeon Joseph Lister, who in 1865 demonstrated the principles of antisepsis. Lister's work was based on the important findings of French biologist Louis Pasteur. Pasteur was able to link microorganisms with disease, revolutionizing medicine. He also devised one of the most important methods in preventive medicine when, in 1885, he produced a vaccine against rabies. Pasteur also invented the process of pasteurization, to help prevent the spread of disease through milk and other foods.
Schools of economics
Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to produce it. Under this axiom, capitalism was based on employers not paying the full value of workers' labor in order to create profit. The Austrian School responded to Marxian economics by viewing entrepreneurship as the driving force of economic development. This replaced the labor theory of value with a system based on supply and demand.
Founding of psychology
Psychology as a scientific enterprise independent from philosophy began in 1879 when Wilhelm Wundt founded the first laboratory dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James, and Sigmund Freud. Freud's influence has been enormous, though more as a cultural icon than as a force in scientific psychology.
Modern sociology
Modern sociology emerged in the early 19th century as the academic response to the modernization of the world. Among many early sociologists (e.g., Émile Durkheim), the aim of sociology was structuralism: understanding the cohesion of social groups and developing an "antidote" to social disintegration. Max Weber was concerned with the modernization of society through the concept of rationalization, which he believed would trap individuals in an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, used more microsociological, qualitative analyses. This microlevel approach played an important role in American sociology, with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic interactionism approach to sociology. Auguste Comte, in particular, illustrated with his work the transition from a theological to a metaphysical stage and, from this, to a positive stage. Comte also took up the classification of the sciences, as well as humanity's transition towards a state of progress brought about by re-examining nature and affirming "sociality" as the basis of a scientifically interpreted society.
Romanticism
The Romantic Movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the Enlightenment. The decline of Romanticism occurred because a new movement, Positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. At the same time, the romantic reaction to the Enlightenment produced thinkers such as Johann Gottfried Herder and later Wilhelm Dilthey, whose work formed the basis for the culture concept which is central to the discipline of anthropology. Traditionally, much of the history of that discipline was based on colonial encounters between Western Europe and the rest of the world, and much of 18th- and 19th-century anthropology is now classed as scientific racism. During the late 19th century, battles over the "study of man" took place between those of an "anthropological" persuasion (relying on anthropometrical techniques) and those of an "ethnological" persuasion (looking at cultures and traditions), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of Franz Boas.
20th century
Science advanced dramatically during the 20th century. There were new and radical developments in the physical and life sciences, building on the progress from the 19th century.
Theory of relativity and quantum mechanics
The beginning of the 20th century brought the start of a revolution in physics. The long-held theories of Newton were shown not to be correct in all circumstances. Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and others developed quantum theories to explain various anomalous experimental results, by introducing discrete energy levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on which both Newtonian mechanics and special relativity depended, could not exist. In 1925, Werner Heisenberg and Erwin Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. Currently, general relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two.
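The "discrete energy levels" referred to above can be illustrated (in modern notation) by Planck's relation, which ties the energy of a quantum of radiation to its frequency:

```latex
% Energy of a single quantum (photon) of radiation of frequency \nu:
E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}
% Bohr's atomic model then restricts electrons to orbits whose energies differ by such
% quanta, so atoms emit and absorb light only at discrete frequencies.
```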
Big Bang
The observation by Edwin Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance, led to the understanding that the universe is expanding, and the formulation of the Big Bang theory by Georges Lemaître. George Gamow, Ralph Alpher, and Robert Herman had calculated that there should be evidence for a Big Bang in the background temperature of the universe. In 1964, Arno Penzias and Robert Wilson discovered a 3 Kelvin background hiss in their Bell Labs radiotelescope (the Holmdel Horn Antenna), which was evidence for this hypothesis, and formed the basis for a number of results that helped determine the age of the universe.
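Hubble's observation is usually summarized (in later notation, not the wording of his 1929 paper) by the linear relation now called the Hubble–Lemaître law:

```latex
% Recession velocity v grows linearly with distance D:
v = H_0\, D
% H_0 is the Hubble constant; extrapolating the expansion backwards gives a rough
% age scale for the universe of order 1/H_0.
```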
Big science
In 1938 Otto Hahn and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr and John A. Wheeler. Further developments took place during World War II, which led to the practical application of radar and the development and use of the atomic bomb. Around this time, Chien-Shiung Wu was recruited by the Manhattan Project to help develop a process for separating uranium metal into U-235 and U-238 isotopes by gaseous diffusion. She was an expert experimentalist in beta decay and weak interaction physics. Wu designed an experiment (see Wu experiment) that enabled theoretical physicists Tsung-Dao Lee and Chen-Ning Yang to disprove the law of conservation of parity experimentally, winning them a Nobel Prize in 1957.
Though the process had begun with the invention of the cyclotron by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called "Big Science", requiring massive machines, budgets, and laboratories in order to test their theories and move into new frontiers. The primary patron of physics became state governments, who recognized that the support of "basic" research could often lead to technologies useful to both military and industrial applications.
Advances in genetics
In the early 20th century, the study of heredity became a major investigation after the rediscovery in 1900 of the laws of inheritance developed by Mendel. The 20th century also saw the integration of physics and chemistry, with chemical properties explained as the result of the electronic structure of the atom. Linus Pauling's book on The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA, "the secret of life" (in the words of Francis Crick, 1953). In the same year, the Miller–Urey experiment demonstrated, in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules, kickstarting decades of research into the chemical origins of life. In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, clarified the basic structure of DNA, the genetic material for expressing life in all its forms; in their famous paper "Molecular Structure of Nucleic Acids" they proposed that the structure of DNA was a double helix. In the late 20th century, the possibilities of genetic engineering became practical for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human Genome Project). The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and Humboldtian biogeography, in the late 19th and early 20th centuries. Equally important in the rise of ecology, however, were microbiology and soil science, particularly the cycle-of-life concept prominent in the work of Louis Pasteur and Ferdinand Cohn. The word ecology was coined by Ernst Haeckel, whose particularly holistic view of nature in general (and Darwin's theory in particular) was important in the spread of ecological thinking. The field of ecosystem ecology emerged in the Atomic Age with the use of radioisotopes to visualize food webs, and by the 1970s ecosystem ecology deeply influenced global environmental management.
Space exploration
In 1925, Cecilia Payne-Gaposchkin determined that stars were composed mostly of hydrogen and helium. She was dissuaded by astronomer Henry Norris Russell from publishing this finding in her PhD thesis because of the widely held belief that stars had the same composition as the Earth. However, four years later, in 1929, Henry Norris Russell came to the same conclusion through different reasoning and the discovery was eventually accepted.
In 1987, supernova SN 1987A was observed by astronomers on Earth both visually and, in a triumph for neutrino astronomy, by the solar neutrino detectors at Kamiokande. Separately, the measured solar neutrino flux was found to be only a fraction of its theoretically expected value; this discrepancy eventually forced a change in some values in the standard model for particle physics.
Neuroscience as a distinct discipline
The understanding of neurons and the nervous system became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley, in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. Neuroscience began to be recognized as a distinct academic discipline in its own right. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field.
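To make the flavor of these biological neuron models concrete, here is a minimal sketch (not taken from any of the cited papers; parameter values are illustrative) of the FitzHugh–Nagumo simplification of the Hodgkin–Huxley equations, integrated with a simple forward-Euler step in Python:

```python
# Minimal sketch of the FitzHugh-Nagumo model (illustrative parameters, forward Euler).
import numpy as np

def fitzhugh_nagumo(t_max=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08, i_ext=0.5):
    """Integrate dv/dt = v - v^3/3 - w + I and dw/dt = eps*(v + a - b*w)."""
    steps = int(t_max / dt)
    v = np.empty(steps)
    w = np.empty(steps)
    v[0], w[0] = -1.0, 1.0  # arbitrary initial state
    for i in range(1, steps):
        dv = v[i - 1] - v[i - 1] ** 3 / 3 - w[i - 1] + i_ext  # fast "voltage-like" variable
        dw = eps * (v[i - 1] + a - b * w[i - 1])              # slow recovery variable
        v[i] = v[i - 1] + dt * dv
        w[i] = w[i - 1] + dt * dw
    return v, w

v, w = fitzhugh_nagumo()
print(f"membrane-like variable ranges from {v.min():.2f} to {v.max():.2f}")
```

With a constant external drive of this size the two-variable system settles into sustained relaxation oscillations, a qualitative stand-in for the repetitive firing that the full Hodgkin–Huxley equations describe quantitatively.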
Plate tectonics
Geologists' embrace of plate tectonics became part of a broadening of the field from a study of rocks into a study of the Earth as a planet. Other elements of this transformation include: geophysical studies of the interior of the Earth, the grouping of geology with meteorology and oceanography as one of the "earth sciences", and comparisons of Earth and the solar system's other rocky planets.
Applications
In terms of applications, a massive number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first car was introduced by Karl Benz in 1885. The first airplane flight occurred in 1903, and by the end of the century airliners flew thousands of miles in a matter of hours. The development of the radio, television and computers caused massive changes in the dissemination of information. Advances in biology also led to large increases in food production, as well as the elimination of diseases such as polio by Dr. Jonas Salk. Gene mapping and gene sequencing, invented by Drs. Mark Skolnik and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering.
Einstein's paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the Laser (light amplification by the stimulated emission of radiation) and the optical amplifier which ushered in the Information Age. It is optical amplification that allows fiber optic networks to transmit the massive capacity of the Internet.
Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the internet.
Developments in political science and economics
In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis. In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals; on this view, governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism. Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are New Classical economics and New Keynesian economics. New Classical economics was developed in the 1970s, emphasizing solid microeconomics as the basis for macroeconomic growth. New Keynesian economics was created partially in response to New Classical economics. It shows how imperfect competition and market rigidities mean that monetary policy has real effects, and it enables the analysis of different policies.
Developments in psychology, sociology, and anthropology
Psychology in the 20th century saw a rejection of Freud's theories as being too unscientific, and a reaction against Edward Titchener's atomistic approach of the mind. This led to the formulation of behaviorism by John B. Watson, which was popularized by B.F. Skinner. Behaviorism proposed epistemologically limiting psychological study to overt behavior, since that could be reliably measured. Scientific knowledge of the "mind" was considered too metaphysical, hence impossible to achieve. The final decades of the 20th century have seen the rise of cognitive science, which considers the mind as once again a subject for investigation, using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. Evolutionary theory was applied to behavior and introduced to anthropology and psychology, through the works of cultural anthropologist Napoleon Chagnon. Physical anthropology would become biological anthropology, incorporating elements of evolutionary biology.
American sociology in the 1940s and 1950s was dominated largely by Talcott Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This structural functionalism approach was questioned in the 1960s, when sociologists came to see this approach as merely a justification for inequalities present in the status quo. In reaction, conflict theory was developed, which was based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking. Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting to control their audience through impression management. While these theories are currently prominent in sociological thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism.
In the mid-20th century, much of the methodologies of earlier anthropological and ethnographical study were reevaluated with an eye towards research ethics, while at the same time the scope of investigation has broadened far beyond the traditional study of "primitive cultures".
21st century
In the early 21st century, some concepts that originated in 20th century physics were proven. On 4 July 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, confirmed as such by the following March. Gravitational waves were first detected on 14 September 2015.
The Human Genome Project was declared complete in 2003. The CRISPR gene editing technique developed in 2012 allowed scientists to precisely and easily modify DNA and led to the development of new medicines. In 2020, xenobots, a new class of living robots, were invented; reproductive capabilities were introduced the following year.
Positive psychology is a branch of psychology founded in 1998 by Martin Seligman that is concerned with the study of happiness, mental well-being, and positive human functioning, and is a reaction to 20th century psychology's emphasis on mental illness and dysfunction.
See also
2020s in science and technology
History and philosophy of science
Philosophy of science
History of measurement
History of astronomy
History of biology
History of chemistry
History of Earth science
History of physics
History of the social sciences
History of technology
History of scholarship
Science studies
History of science policy
List of experiments
List of Nobel laureates
List of scientists
List of years in science
Materialism Controversy
Multiple discovery
Science tourism
Sociology of the history of science
Timelines of science
Timeline of scientific discoveries
Timeline of scientific experiments
Timeline of the history of the scientific method
Yuasa Phenomenon – Migration of center of activity of world science
References
Sources
Further reading
Agar, Jon (2012) Science in the Twentieth Century and Beyond, Polity Press.
Agassi, Joseph (2007) Science and Its History: A Reassessment of the Historiography of Science (Boston Studies in the Philosophy of Science, 253) Springer.
Bowler, Peter J. (1993) The Norton History of the Environmental Sciences.
Brock, W.H. (1993) The Norton History of Chemistry.
Bronowski, J. (1951) The Common Sense of Science, Heinemann. (Includes a description of the history of science in England.)
Byers, Nina and Gary Williams, ed. (2006) Out of the Shadows: Contributions of Twentieth-Century Women to Physics, Cambridge University Press
Herzenberg, Caroline L. (1986). Women Scientists from Antiquity to the Present Locust Hill Press
Kumar, Deepak (2006). Science and the Raj: A Study of British India, 2nd edition. Oxford University Press.
Lakatos, Imre (1978). History of Science and its Rational Reconstructions published in The Methodology of Scientific Research Programmes: Philosophical Papers Volume 1. Cambridge University Press
Levere, Trevor Harvey. (2001) Transforming Matter: A History of Chemistry from Alchemy to the Buckyball
Lipphardt, Veronika/Ludwig, Daniel, Knowledge Transfer and Science Transfer, EGO – European History Online, Mainz: Institute of European History, 2011, retrieved: 8 March 2020 (pdf).
Margolis, Howard (2002). It Started with Copernicus. McGraw-Hill.
Mayr, Ernst. (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance.
North, John. (1995). The Norton History of Astronomy and Cosmology.
Nye, Mary Jo, ed. (2002). The Cambridge History of Science, Volume 5: The Modern Physical and Mathematical Sciences
Park, Katharine, and Lorraine Daston, eds. (2006) The Cambridge History of Science, Volume 3: Early Modern Science
Porter, Roy, ed. (2003). The Cambridge History of Science, Volume 4: The Eighteenth Century
Rousseau, George and Roy Porter, eds. (1980). The Ferment of Knowledge: Studies in the Historiography of Science. Cambridge University Press.
Slotten, Hugh Richard, ed. (2014) The Oxford Encyclopedia of the History of American Science, Medicine, and Technology.
External links
'What is the History of Science', British Academy
British Society for the History of Science
The CNRS History of Science and Technology Research Center in Paris (France)
Henry Smith Williams, History of Science, Vols 1–4, online text
Digital Archives of the National Institute of Standards and Technology (NIST)
Digital facsimiles of books from the History of Science Collection, Linda Hall Library Digital Collections
Division of History of Science and Technology of the International Union of History and Philosophy of Science
Giants of Science (website of the Institute of National Remembrance)
History of Science Digital Collection: Utah State University – Contains primary sources by such major figures in the history of scientific inquiry as Otto Brunfels, Charles Darwin, Erasmus Darwin, Carolus Linnaeus, Antony van Leeuwenhoek, Jan Swammerdam, James Sowerby, Andreas Vesalius, and others.
History of Science Society ("HSS")
Inter-Divisional Teaching Commission (IDTC) of the International Union for the History and Philosophy of Science (IUHPS)
International Academy of the History of Science
International History, Philosophy and Science Teaching Group
IsisCB Explore: History of Science Index An open access discovery tool
Museo Galileo – Institute and Museum of the History of Science in Florence, Italy
National Center for Atmospheric Research (NCAR) Archives
The official site of the Nobel Foundation. Features biographies and info on Nobel laureates
The Royal Society, trailblazing science from 1650 to date
The Vega Science Trust Free to view videos of scientists including Feynman, Perutz, Rotblat, Born and many Nobel Laureates.
A Century of Science in America: with special reference to the American Journal of Science, 1818-1918
Science studies | History of science | [
"Technology"
] | 20,446 | [
"History of science",
"History of science and technology"
] |
14,403 | https://en.wikipedia.org/wiki/Hydrogen%20peroxide | Hydrogen peroxide is a chemical compound with the formula H2O2. In its pure form, it is a very pale blue liquid that is slightly more viscous than water. It is used as an oxidizer, bleaching agent, and antiseptic, usually as a dilute solution (3%–6% by weight) in water for consumer use and in higher concentrations for industrial use. Concentrated hydrogen peroxide, or "high-test peroxide", decomposes explosively when heated and has been used as both a monopropellant and an oxidizer in rocketry.
Hydrogen peroxide is a reactive oxygen species and the simplest peroxide, a compound having an oxygen–oxygen single bond. It decomposes slowly into water and elemental oxygen when exposed to light, and rapidly in the presence of organic or reactive compounds. It is typically stored with a stabilizer in a weakly acidic solution in an opaque bottle. Hydrogen peroxide is found in biological systems including the human body. Enzymes that use or decompose hydrogen peroxide are classified as peroxidases.
Properties
The boiling point of H2O2 has been extrapolated as being 150.2 °C, approximately 50 °C higher than water. In practice, hydrogen peroxide will undergo potentially explosive thermal decomposition if heated to this temperature. It may be safely distilled at lower temperatures under reduced pressure.
Hydrogen peroxide forms stable adducts with urea (hydrogen peroxide–urea), sodium carbonate (sodium percarbonate) and other compounds. An acid-base adduct with triphenylphosphine oxide is a useful "carrier" for H2O2 in some reactions.
Structure
Hydrogen peroxide (H2O2) is a nonplanar molecule with (twisted) C2 symmetry; this was first shown by Paul-Antoine Giguère in 1950 using infrared spectroscopy. Although the O−O bond is a single bond, the molecule has a relatively high rotational barrier of 386 cm−1 (4.62 kJ/mol) for rotation between enantiomers via the trans configuration, and 2460 cm−1 (29.4 kJ/mol) via the cis configuration. These barriers are proposed to be due to repulsion between the lone pairs of the adjacent oxygen atoms and dipolar effects between the two O–H bonds. For comparison, the rotational barrier for ethane is 1040 cm−1 (12.4 kJ/mol).
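As a quick cross-check of the units used above, the sketch below converts a spectroscopic barrier quoted in wavenumbers (cm−1) to kJ/mol via E = h·c·ν̃·N_A; the physical constants are standard CODATA values, and the three inputs are simply the barriers quoted in this section.

```python
# Convert spectroscopic energies from wavenumbers (cm^-1) to kJ/mol: E = h*c*nu*N_A.
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
N_A = 6.02214076e23      # Avogadro constant, 1/mol

def wavenumber_to_kj_per_mol(nu_cm: float) -> float:
    return H * C_CM * nu_cm * N_A / 1000.0

for barrier_cm in (386, 2460, 1040):
    print(f"{barrier_cm:5d} cm^-1 -> {wavenumber_to_kj_per_mol(barrier_cm):5.2f} kJ/mol")
# prints roughly 4.62, 29.4 and 12.4 kJ/mol, matching the values quoted above
```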
The approximately 100° dihedral angle between the two O–H bonds makes the molecule chiral. It is the smallest and simplest molecule to exhibit enantiomerism. It has been proposed that the enantiospecific interactions of one rather than the other may have led to amplification of one enantiomeric form of ribonucleic acids and therefore an origin of homochirality in an RNA world.
The molecular structures of gaseous and crystalline H2O2 are significantly different. This difference is attributed to the effects of hydrogen bonding, which is absent in the gaseous state. Crystals of H2O2 are tetragonal with the space group D₄⁴ or P41212.
Aqueous solutions
In aqueous solutions, hydrogen peroxide forms a eutectic mixture, exhibiting freezing-point depression down to as low as –56 °C; pure water has a freezing point of 0 °C and pure hydrogen peroxide of –0.43 °C. The boiling point of the same mixtures is also depressed relative to the mean of the two boiling points (125.1 °C), occurring at 114 °C. This boiling point is 14 °C greater than that of pure water and 36.2 °C less than that of pure hydrogen peroxide.
Hydrogen peroxide is most commonly available as a solution in water. For consumers, it is usually available from pharmacies at 3 and 6 wt% concentrations. The concentrations are sometimes described in terms of the volume of oxygen gas generated; one milliliter of a 20-volume solution generates twenty milliliters of oxygen gas when completely decomposed. For laboratory use, 30 wt% solutions are most common. Commercial grades from 70% to 98% are also available, but due to the potential of solutions of more than 68% hydrogen peroxide to be converted entirely to steam and oxygen (with the temperature of the steam increasing as the concentration increases above 68%) these grades are potentially far more hazardous and require special care in dedicated storage areas. Buyers must typically allow inspection by commercial manufacturers.
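As a rough illustration of the "volume" labelling mentioned above (assuming ideal-gas behaviour at about 22.4 L/mol and the stoichiometry 2 H2O2 → 2 H2O + O2), the concentration behind a "20-volume" solution can be estimated as follows; the numbers are an approximation, not an analytical specification.

```python
# Estimate the H2O2 content implied by a "volume strength" label:
# n volumes  =>  1 L of solution liberates n litres of O2 on complete decomposition.
MOLAR_VOLUME_L = 22.4   # approximate molar gas volume at STP, L/mol (assumption)
M_H2O2 = 34.01          # molar mass of hydrogen peroxide, g/mol

def grams_per_litre(volume_strength: float) -> float:
    mol_o2 = volume_strength / MOLAR_VOLUME_L   # mol O2 released per litre of solution
    mol_h2o2 = 2.0 * mol_o2                     # 2 mol H2O2 give 1 mol O2
    return mol_h2o2 * M_H2O2

print(f"20-volume ~ {grams_per_litre(20):.0f} g/L, i.e. roughly a 6% w/v solution")
```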
Comparison with analogues
Hydrogen peroxide has several structural analogues with Hm−X−X−Hn bonding arrangements (water also shown for comparison). It has the highest (theoretical) boiling point of this series (X = O, S, N, P). Its melting point is also fairly high, being comparable to that of hydrazine and water, with only hydroxylamine crystallising significantly more readily, indicative of particularly strong hydrogen bonding. Diphosphane and hydrogen disulfide exhibit only weak hydrogen bonding and have little chemical similarity to hydrogen peroxide. Structurally, the analogues all adopt similar skewed structures, due to repulsion between adjacent lone pairs.
Natural occurrence
Hydrogen peroxide is produced by various biological processes mediated by enzymes.
Hydrogen peroxide has been detected in surface water, in groundwater, and in the atmosphere. It can also form when water is exposed to UV light. Sea water contains 0.5 to 14 μg/L of hydrogen peroxide, and freshwater contains 1 to 30 μg/L. Concentrations in air are about 0.4 to 4 μg/m3, varying over several orders of magnitude depending on conditions such as season, altitude, daylight and water vapor content. In rural nighttime air it is less than 0.014 μg/m3, and in moderate photochemical smog it is 14 to 42 μg/m3.
The amount of hydrogen peroxide in biological systems can be assayed using a fluorometric assay.
Discovery
Alexander von Humboldt is sometimes said to have been the first to report the first synthetic peroxide, barium peroxide, in 1799 as a by-product of his attempts to decompose air, although this is disputed due to von Humboldt's ambiguous wording. Nineteen years later Louis Jacques Thénard recognized that this compound could be used for the preparation of a previously unknown compound, which he described as eau oxygénée ("oxygenated water") — subsequently known as hydrogen peroxide.
An improved version of Thénard's process used hydrochloric acid, followed by addition of sulfuric acid to precipitate the barium sulfate byproduct. This process was used from the end of the 19th century until the middle of the 20th century.
The bleaching effect of peroxides and their salts on natural dyes had been known since Thénard's experiments in the 1820s, but early attempts of industrial production of peroxides failed. The first plant producing hydrogen peroxide was built in 1873 in Berlin. The discovery of the synthesis of hydrogen peroxide by electrolysis with sulfuric acid introduced the more efficient electrochemical method. It was first commercialized in 1908 in Weißenstein, Carinthia, Austria. The anthraquinone process, which is still used, was developed during the 1930s by the German chemical manufacturer IG Farben in Ludwigshafen. The increased demand and improvements in the synthesis methods resulted in the rise of the annual production of hydrogen peroxide from 35,000 tonnes in 1950, to over 100,000 tonnes in 1960, to 300,000 tonnes by 1970; by 1998 it reached 2.7 million tonnes.
Early attempts failed to produce neat hydrogen peroxide. Anhydrous hydrogen peroxide was first obtained by vacuum distillation.
Determination of the molecular structure of hydrogen peroxide proved to be very difficult. In 1892, the Italian physical chemist Giacomo Carrara (1864–1925) determined its molecular mass by freezing-point depression, which confirmed that its molecular formula is H2O2. Alternative structures seemed to be just as possible as the modern one, and as late as the middle of the 20th century at least half a dozen hypothetical isomeric variants of the two main options seemed to be consistent with the available evidence. In 1934, the English mathematical physicist William Penney and the Scottish physicist Gordon Sutherland proposed a molecular structure for hydrogen peroxide that was very similar to the presently accepted one.
Production
In 1994, world production of H2O2 was around 1.9 million tonnes and grew to 2.2 million in 2006, most of which was at a concentration of 70% or less. In that year, bulk 30% H2O2 sold for around 0.54 USD/kg, equivalent to US$1.50/kg (US$0.68/lb) on a "100% basis".
Today, hydrogen peroxide is manufactured almost exclusively by the anthraquinone process, which was originally developed by BASF in 1939. It begins with the reduction of an anthraquinone (such as 2-ethylanthraquinone or the 2-amyl derivative) to the corresponding anthrahydroquinone, typically by hydrogenation on a palladium catalyst. In the presence of oxygen, the anthrahydroquinone then undergoes autoxidation: the labile hydrogen atoms of the hydroxy groups transfer to the oxygen molecule, giving hydrogen peroxide and regenerating the anthraquinone. Most commercial processes achieve oxidation by bubbling compressed air through a solution of the anthrahydroquinone, with the hydrogen peroxide then extracted from the solution and the anthraquinone recycled back for successive cycles of hydrogenation and oxidation.
The net reaction for the anthraquinone-catalyzed process is: H2 + O2 → H2O2
The economics of the process depend heavily on effective recycling of the extraction solvents, the hydrogenation catalyst and the expensive quinone.
Historical methods
Hydrogen peroxide was once prepared industrially by hydrolysis of ammonium persulfate: (NH4)2S2O8 + 2 H2O → 2 NH4HSO4 + H2O2
Ammonium persulfate was itself obtained by the electrolysis of a solution of ammonium bisulfate (NH4HSO4) in sulfuric acid.
Other routes
Small amounts are formed by electrolysis, photochemistry, electric arc, and related methods.
A commercially viable route to hydrogen peroxide directly from the reaction of hydrogen with oxygen has long been sought; the reaction thermodynamically favours production of water, but it can be stopped at the peroxide stage. One economic obstacle has been that direct processes give a dilute solution that is uneconomic to transport. None of these processes has yet reached a point where it can be used for industrial-scale synthesis.
Reactions
Acid-base
Hydrogen peroxide is about 1000 times stronger as an acid than water.
H2O2 ⇌ H⁺ + HO2⁻   (pKa = 11.65)
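A small sketch of what that pKa implies in practice, using the standard Henderson–Hasselbalch relation; the chosen pH values are arbitrary examples, not values taken from this article.

```python
# Fraction of hydrogen peroxide present as the hydroperoxide anion (HO2-) at a given pH.
PKA_H2O2 = 11.65

def fraction_deprotonated(ph: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (PKA_H2O2 - ph))

for ph in (7.0, 11.65, 13.0):
    print(f"pH {ph:5.2f}: {fraction_deprotonated(ph):.2%} present as HO2-")
# ~0.00% at pH 7, 50% at pH 11.65, ~96% at pH 13
```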
Disproportionation
Hydrogen peroxide disproportionates to form water and oxygen with a ΔHo of –2884.5 kJ/kg and a ΔS of 70.5 J/(mol·K): 2 H2O2 → 2 H2O + O2
The rate of decomposition increases with rise in temperature, concentration, and pH; H2O2 is unstable under alkaline conditions. Decomposition is catalysed by various redox-active ions or compounds, including most transition metals and their compounds (e.g. manganese dioxide (MnO2), silver, and platinum).
Oxidation reactions
The redox properties of hydrogen peroxide depend on pH. In acidic solutions, H2O2 is a powerful oxidizer.
Sulfite (SO3²⁻) is oxidized to sulfate (SO4²⁻).
Reduction reactions
Under alkaline conditions, hydrogen peroxide is a reductant. When H2O2 acts as a reducing agent, oxygen gas is also produced. For example, hydrogen peroxide will reduce sodium hypochlorite and potassium permanganate, which is a convenient method for preparing oxygen in the laboratory: NaOCl + H2O2 → O2 + NaCl + H2O
The oxygen produced from hydrogen peroxide and sodium hypochlorite is in the singlet state.
Hydrogen peroxide also reduces silver oxide to silver: Ag2O + H2O2 → 2 Ag + H2O + O2
Although usually a reductant, alkaline hydrogen peroxide converts Mn(II) to the dioxide: Mn(OH)2 + H2O2 → MnO2 + 2 H2O
In a related reaction, potassium permanganate is reduced to Mn²⁺ by acidic H2O2: 2 KMnO4 + 3 H2SO4 + 5 H2O2 → 2 MnSO4 + K2SO4 + 5 O2 + 8 H2O
Organic reactions
Hydrogen peroxide is frequently used as an oxidizing agent. Illustrative is the oxidation of thioethers to form sulfoxides, such as the conversion of thioanisole to methyl phenyl sulfoxide: C6H5SCH3 + H2O2 → C6H5S(O)CH3 + H2O
Alkaline hydrogen peroxide is used for epoxidation of electron-deficient alkenes such as acrylic acid derivatives, and for the oxidation of alkylboranes to alcohols, the second step of hydroboration-oxidation. It is also the principal reagent in the Dakin oxidation process.
Precursor to other peroxide compounds
Hydrogen peroxide is a weak acid, forming hydroperoxide or peroxide salts with many metals.
It also converts metal oxides into the corresponding peroxides. For example, upon treatment with hydrogen peroxide, chromic acid (CrO3 and H2CrO4) forms a blue peroxide, CrO5.
Biochemistry
Production
The aerobic oxidation of glucose in the presence of the enzyme glucose oxidase produces hydrogen peroxide. The conversion affords gluconolactone: C6H12O6 + O2 → C6H10O6 + H2O2
Superoxide dismutases (SODs) are enzymes that promote the disproportionation of superoxide into oxygen and hydrogen peroxide.
Peroxisomes are organelles found in virtually all eukaryotic cells. They are involved in the catabolism of very long chain fatty acids, branched chain fatty acids, D-amino acids, polyamines, and biosynthesis of plasmalogens and ether phospholipids, which are found in mammalian brains and lungs. They produce hydrogen peroxide in a process catalyzed by flavin adenine dinucleotide (FAD):
R–CH2–CH2–C(O)–S–CoA + O2 → R–CH=CH–C(O)–S–CoA + H2O2 (the FAD-dependent oxidation of an acyl-CoA)
Hydrogen peroxide arises by the degradation of adenosine monophosphate, which yields hypoxanthine. Hypoxanthine is then oxidatively catabolized first to xanthine and then to uric acid, and the reaction is catalyzed by the enzyme xanthine oxidase:
The degradation of guanosine monophosphate yields xanthine as an intermediate product which is then converted in the same way to uric acid with the formation of hydrogen peroxide.
Consumption
Catalase, another peroxisomal enzyme, uses this H2O2 to oxidize other substrates, including phenols, formic acid, formaldehyde, and alcohol, by means of a peroxidation reaction: H2O2 + R'H2 → R' + 2 H2O
thus eliminating the poisonous hydrogen peroxide in the process.
This reaction is important in liver and kidney cells, where the peroxisomes neutralize various toxic substances that enter the blood. Some of the ethanol humans drink is oxidized to acetaldehyde in this way. In addition, when excess H2O2 accumulates in the cell, catalase converts it to water and oxygen through this reaction: 2 H2O2 → 2 H2O + O2
Glutathione peroxidase, a selenoenzyme, also catalyzes the disproportionation of hydrogen peroxide.
Fenton reaction
The reaction of Fe²⁺ and hydrogen peroxide is the basis of the Fenton reaction, which generates hydroxyl radicals, which are of significance in biology: Fe²⁺ + H2O2 → Fe³⁺ + OH⁻ + •OH
The Fenton reaction explains the toxicity of hydrogen peroxide because the hydroxyl radicals rapidly and irreversibly oxidize all organic compounds, including proteins, membrane lipids, and DNA. Hydrogen peroxide is a significant source of oxidative DNA damage in living cells. DNA damage includes formation of 8-Oxo-2'-deoxyguanosine among many other altered bases, as well as strand breaks, inter-strand crosslinks, and deoxyribose damage. By interacting with Cl⁻, hydrogen peroxide also leads to chlorinated DNA bases. Hydroxyl radicals readily damage vital cellular components, especially those of the mitochondria. The compound is a major factor implicated in the free-radical theory of aging, based on its ready conversion into a hydroxyl radical.
Function
Eggs of sea urchin, shortly after fertilization by a sperm, produce hydrogen peroxide. It is then converted to hydroxyl radicals (HO•), which initiate radical polymerization, which surrounds the eggs with a protective layer of polymer.
The bombardier beetle combines hydroquinone and hydrogen peroxide, leading to a violent exothermic chemical reaction to produce boiling, foul-smelling liquid that partially becomes a gas (flash evaporation) and is expelled through an outlet valve with a loud popping sound.
As a proposed signaling molecule, hydrogen peroxide may regulate a wide variety of biological processes. At least one study has tried to link hydrogen peroxide production to cancer.
Uses
Bleaching
About 60% of the world's production of hydrogen peroxide is used for pulp- and paper-bleaching. The second major industrial application is the manufacture of sodium percarbonate and sodium perborate, which are used as mild bleaches in laundry detergents. A representative conversion is the formation of the sodium percarbonate adduct: 2 Na2CO3 + 3 H2O2 → 2 Na2CO3·3 H2O2
Sodium percarbonate, which is an adduct of sodium carbonate and hydrogen peroxide, is the active ingredient in such laundry products as OxiClean and Tide laundry detergent. When dissolved in water, it releases hydrogen peroxide and sodium carbonate. By themselves these bleaching agents are only effective at wash temperatures of 60 °C or above and so are often used in conjunction with bleach activators, which facilitate cleaning at lower temperatures.
Hydrogen peroxide has also been used as a flour bleaching agent and a tooth and bone whitening agent.
Production of organic peroxy compounds
It is used in the production of various organic peroxides with dibenzoyl peroxide being a high volume example. Peroxy acids, such as peracetic acid and meta-chloroperoxybenzoic acid also are produced using hydrogen peroxide. Hydrogen peroxide has been used for creating organic peroxide-based explosives, such as acetone peroxide. It is used as an initiator in polymerizations. Hydrogen peroxide reacts with certain di-esters, such as phenyl oxalate ester (cyalume), to produce chemiluminescence; this application is most commonly encountered in the form of glow sticks.
Production of inorganic peroxides
The reaction with borax leads to sodium perborate, a bleach used in laundry detergents: Na2B4O7 + 4 H2O2 + 2 NaOH → 2 Na2[B2(O2)2(OH)4] + H2O
Sewage treatment
Hydrogen peroxide is used in certain waste-water treatment processes to remove organic impurities. In advanced oxidation processing, the Fenton reaction gives the highly reactive hydroxyl radical (•OH). This degrades organic compounds, including those that are ordinarily robust, such as aromatic or halogenated compounds. It can also oxidize sulfur-based compounds present in the waste, which is beneficial as it generally reduces their odour.
Disinfectant
Hydrogen peroxide may be used for the sterilization of various surfaces, including surgical instruments, and may be deployed as a vapour (VHP) for room sterilization. H2O2 demonstrates broad-spectrum efficacy against viruses, bacteria, yeasts, and bacterial spores. In general, greater activity is seen against Gram-positive than Gram-negative bacteria; however, the presence of catalase or other peroxidases in these organisms may increase tolerance in the presence of lower concentrations. Lower levels of concentration (3%) will work against most spores; higher concentrations (7 to 30%) and longer contact times will improve sporicidal activity.
Hydrogen peroxide is seen as an environmentally safe alternative to chlorine-based bleaches, as it degrades to form oxygen and water and it is generally recognized as safe as an antimicrobial agent by the U.S. Food and Drug Administration (FDA).
Propellant
High-concentration H2O2 is referred to as "high-test peroxide" (HTP). It can be used as either a monopropellant (not mixed with fuel) or the oxidizer component of a bipropellant rocket. Use as a monopropellant takes advantage of the decomposition of 70–98% concentration hydrogen peroxide into steam and oxygen. The propellant is pumped into a reaction chamber, where a catalyst, usually a silver or platinum screen, triggers decomposition, producing steam at over 600 °C, which is expelled through a nozzle, generating thrust. H2O2 monopropellant produces a maximal specific impulse (Isp) of 161 s (1.6 kN·s/kg). Peroxide was the first major monopropellant adopted for use in rocket applications. Hydrazine eventually replaced hydrogen peroxide in monopropellant thruster applications primarily because of a 25% increase in the vacuum specific impulse. Hydrazine (toxic) and hydrogen peroxide (less toxic [ACGIH TLV 0.01 and 1 ppm respectively]) are the only two monopropellants (other than cold gases) to have been widely adopted and utilized for propulsion and power applications. The Bell Rocket Belt, reaction control systems for X-1, X-15, Centaur, Mercury, Little Joe, as well as the turbo-pump gas generators for X-1, X-15, Jupiter, Redstone and Viking used hydrogen peroxide as a monopropellant. The RD-107 engines (used from 1957 to present) in the R-7 series of rockets decompose hydrogen peroxide to power the turbopumps.
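For orientation, the specific impulses quoted in this and the following paragraph can be restated as effective exhaust velocities with the standard relation v_e = Isp · g0; this is a back-of-the-envelope sketch, not performance data beyond what the text already states.

```python
# Effective exhaust velocity from specific impulse: v_e = Isp * g0.
G0 = 9.80665  # standard gravity, m/s^2

for label, isp_s in (("H2O2 monopropellant", 161), ("H2O2 bipropellant, upper range", 350)):
    v_e = isp_s * G0
    print(f"{label}: Isp = {isp_s} s -> v_e ~ {v_e:.0f} m/s ({v_e / 1000:.1f} kN*s/kg)")
```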
In bipropellant applications, H2O2 is decomposed to oxidize a burning fuel. Specific impulses as high as 350 s (3.5 kN·s/kg) can be achieved, depending on the fuel. Peroxide used as an oxidizer gives a somewhat lower Isp than liquid oxygen but is dense, storable, and non-cryogenic and can be more easily used to drive gas turbines to give high pressures using an efficient closed cycle. It may also be used for regenerative cooling of rocket engines. Peroxide was used very successfully as an oxidizer in World War II German rocket motors (e.g., T-Stoff, containing oxyquinoline stabilizer, for both the Walter HWK 109-500 Starthilfe RATO externally podded monopropellant booster system and the Walter HWK 109-509 rocket motor series used for the Me 163B), most often used with C-Stoff in a self-igniting hypergolic combination, and for the low-cost British Black Knight and Black Arrow launchers. Presently, HTP is used on ILR-33 AMBER and Nucleus suborbital rockets.
In the 1940s and 1950s, the Hellmuth Walter KG–conceived turbine used hydrogen peroxide for use in submarines while submerged; it was found to be too noisy and require too much maintenance compared to diesel-electric power systems. Some torpedoes used hydrogen peroxide as oxidizer or propellant. Operator error in the use of hydrogen peroxide torpedoes was named as possible causes for the sinking of HMS Sidon and the Russian submarine Kursk. SAAB Underwater Systems is manufacturing the Torpedo 2000. This torpedo, used by the Swedish Navy, is powered by a piston engine propelled by HTP as an oxidizer and kerosene as a fuel in a bipropellant system.
Household use
Hydrogen peroxide has various domestic uses, primarily as a cleaning and disinfecting agent.
Hair bleaching
Diluted H2O2 (between 1.9% and 12%) mixed with aqueous ammonia has been used to bleach human hair. The chemical's bleaching property lends its name to the phrase "peroxide blonde".
Hydrogen peroxide is also used for tooth whitening. It may be found in most whitening toothpastes. Hydrogen peroxide has shown positive results involving teeth lightness and chroma shade parameters. It works by oxidizing colored pigments on the enamel, where the shade of the tooth may become lighter. Hydrogen peroxide may be mixed with baking soda and salt to make a homemade toothpaste.
Removal of blood stains
Hydrogen peroxide reacts with blood as a bleaching agent, so if a blood stain is fresh or not too old, liberal application of hydrogen peroxide, repeated if necessary, will bleach the stain fully out. After about two minutes of application, the blood should be firmly blotted out.
Acne treatment
Hydrogen peroxide may be used to treat acne, although benzoyl peroxide is a more common treatment.
Oral cleaning agent
The use of dilute hydrogen peroxide as an oral cleansing agent has been reviewed academically to determine its usefulness in treating gingivitis and plaque. Although there is a positive effect when compared with a placebo, it was concluded that chlorhexidine is a much more effective treatment.
Niche uses
Horticulture
Some horticulturists and users of hydroponics advocate the use of weak hydrogen peroxide solution in watering solutions. Its spontaneous decomposition releases oxygen that enhances a plant's root development and helps to treat root rot (cellular root death due to lack of oxygen) and a variety of other pests.
For general watering concentrations, around 0.1% is in use. This can be increased up to one percent for antifungal actions. Tests show that plant foliage can safely tolerate concentrations up to 3%.
Fishkeeping
Hydrogen peroxide is used in aquaculture for controlling mortality caused by various microbes. In 2019, the U.S. FDA approved it for control of Saprolegniasis in all coldwater finfish and all fingerling and adult coolwater and warmwater finfish, for control of external columnaris disease in warm-water finfish, and for control of Gyrodactylus spp. in freshwater-reared salmonids. Laboratory tests conducted by fish culturists have demonstrated that common household hydrogen peroxide may be used safely to provide oxygen for small fish. The hydrogen peroxide releases oxygen by decomposition when it is exposed to catalysts such as manganese dioxide.
Removing yellowing from aged plastics
Hydrogen peroxide may be used in combination with a UV-light source to remove yellowing from white or light grey acrylonitrile butadiene styrene (ABS) plastics to partially or fully restore the original color. In the retrocomputing scene, this process is commonly referred to as retrobright.
Safety
Regulations vary, but low concentrations, such as 5%, are widely available and legal to buy for medical use. Most over-the-counter peroxide solutions are not suitable for ingestion. Higher concentrations may be considered hazardous and typically are accompanied by a safety data sheet (SDS). In high concentrations, hydrogen peroxide is an aggressive oxidizer and will corrode many materials, including human skin. In the presence of a reducing agent, high concentrations of H2O2 will react violently.
While concentrations up to 35% produce only "white" oxygen bubbles in the skin (and some biting pain) that disappear with the blood within 30–45 minutes, concentrations of 98% dissolve paper. However, concentrations as low as 3% can be dangerous for the eye because of oxygen evolution within the eye.
High-concentration hydrogen peroxide streams, typically above 40%, should be considered hazardous due to concentrated hydrogen peroxide's meeting the definition of a DOT oxidizer according to U.S. regulations if released into the environment. The EPA Reportable Quantity (RQ) for D001 hazardous wastes is 100 pounds (45 kg), or approximately 10 US gallons, of concentrated hydrogen peroxide.
Hydrogen peroxide should be stored in a cool, dry, well-ventilated area and away from any flammable or combustible substances. It should be stored in a container composed of non-reactive materials such as stainless steel or glass (other materials including some plastics and aluminium alloys may also be suitable). As it breaks down quickly when exposed to light, it should be stored in an opaque container, and pharmaceutical formulations typically come in brown bottles that block light.
Hydrogen peroxide, either in pure or diluted form, may pose several risks, the main one being that it forms explosive mixtures upon contact with organic compounds. Distillation of hydrogen peroxide at normal pressures is highly dangerous. It is corrosive, especially when concentrated, but even domestic-strength solutions may cause irritation to the eyes, mucous membranes, and skin. Swallowing hydrogen peroxide solutions is particularly dangerous, as decomposition in the stomach releases large quantities of gas (ten times the volume of a 3% solution), leading to internal bloating. Inhaling over 10% can cause severe pulmonary irritation.
With a significant vapour pressure (1.2 kPa at 50 °C), hydrogen peroxide vapour is potentially hazardous. According to U.S. NIOSH, the immediately dangerous to life and health (IDLH) limit is only 75 ppm. The U.S. Occupational Safety and Health Administration (OSHA) has established a permissible exposure limit of 1.0 ppm calculated as an 8-hour time-weighted average (29 CFR 1910.1000, Table Z-1). Hydrogen peroxide has been classified by the American Conference of Governmental Industrial Hygienists (ACGIH) as a "known animal carcinogen, with unknown relevance on humans". For workplaces where there is a risk of exposure to the hazardous concentrations of the vapours, continuous monitors for hydrogen peroxide should be used. Information on the hazards of hydrogen peroxide is available from OSHA and from the ATSDR.
Wound healing
Historically, hydrogen peroxide was used for disinfecting wounds, partly because of its low cost and prompt availability compared to other antiseptics.
There is conflicting evidence on hydrogen peroxide's effect on wound healing. Some research finds benefit, while other research finds delays and healing inhibition. Its use for home treatment of wounds is generally not recommended.
1.5–3% hydrogen peroxide is used as a disinfectant in dentistry, especially in endodontic treatments together with hypochlorite and chlorhexidine, and 1–1.5% is also useful for treatment of inflammation of third molars (wisdom teeth).
Use in alternative medicine
Practitioners of alternative medicine have advocated the use of hydrogen peroxide for various conditions, including emphysema, influenza, AIDS, and in particular cancer. There is no evidence of effectiveness and in some cases it has proved fatal.
Both the effectiveness and safety of hydrogen peroxide therapy is scientifically questionable. Hydrogen peroxide is produced by the immune system, but in a carefully controlled manner. Cells called phagocytes engulf pathogens and then use hydrogen peroxide to destroy them. The peroxide is toxic to both the cell and the pathogen and so is kept within a special compartment, called a phagosome. Free hydrogen peroxide will damage any tissue it encounters via oxidative stress, a process that also has been proposed as a cause of cancer.
Claims that hydrogen peroxide therapy increases cellular levels of oxygen have not been supported. The quantities administered would be expected to provide very little additional oxygen compared to that available from normal respiration. It is also difficult to raise the level of oxygen around cancer cells within a tumour, as the blood supply tends to be poor, a situation known as tumor hypoxia.
Large oral doses of hydrogen peroxide at a 3% concentration may cause irritation and blistering to the mouth, throat, and abdomen as well as abdominal pain, vomiting, and diarrhea. Ingestion of hydrogen peroxide at concentrations of 35% or higher has been implicated as the cause of numerous gas embolism events resulting in hospitalisation. In these cases, hyperbaric oxygen therapy was used to treat the embolisms.
Intravenous injection of hydrogen peroxide has been linked to several deaths.
The American Cancer Society states that "there is no scientific evidence that hydrogen peroxide is a safe, effective, or useful cancer treatment." Furthermore, the therapy is not approved by the U.S. FDA.
Historical incidents
On 16 July 1934, in Kummersdorf, Germany, a propellant tank containing an experimental monopropellant mixture consisting of hydrogen peroxide and ethanol exploded during a test, killing three people.
During the Second World War, doctors in German concentration camps experimented with the use of hydrogen peroxide injections in the killing of human subjects.
In December 1943, the pilot Josef Pöhs died after being exposed to the T-Stoff of his Messerschmitt Me 163.
In June 1955, Royal Navy submarine HMS Sidon sank after leaking high-test peroxide in a torpedo caused it to explode in its tube, killing twelve crew members; a member of the rescue party also succumbed.
In April 1992, an explosion occurred at the hydrogen peroxide plant at Jarrie in France, due to technical failure of the computerised control system and resulting in one fatality and wide destruction of the plant.
Several people received minor injuries after a hydrogen peroxide spill on board a Northwest Airlines flight from Orlando, Florida to Memphis, Tennessee on 28 October 1998.
The Russian submarine K-141 Kursk sailed to perform an exercise of firing dummy torpedoes at the Pyotr Velikiy, a Kirov-class battlecruiser. On 12 August 2000, at 11:28 local time (07:28 UTC), there was an explosion while preparing to fire the torpedoes. The only credible report to date is that this was due to the failure and explosion of one of the Kursk's hydrogen peroxide-fueled torpedoes. It is believed that HTP, a form of highly concentrated hydrogen peroxide used as propellant for the torpedo, seeped through its container, damaged either by rust or in the loading procedure on land, where an incident involving one of the torpedoes accidentally touching the ground went unreported. The vessel was lost with all hands.
On 15 August 2010, a spill of cleaning fluid occurred on the 54th floor of 1515 Broadway, in Times Square, New York City. The spill, which a spokesperson for the New York City Fire Department said was of hydrogen peroxide, shut down Broadway between West 42nd and West 48th streets as fire engines responded to the hazmat situation. There were no reported injuries.
See also
FOX reagent, used to measure levels of hydrogen peroxide in biological systems
Hydrogen chalcogenide
Retrobright, a process using hydrogen peroxide to restore yellowed acrylonitrile butadiene styrene plastic
Bis(trimethylsilyl) peroxide, an aprotic substitute
References
Bibliography
A great description of the properties and chemistry of H2O2.
External links
Hydrogen Peroxide at The Periodic Table of Videos (University of Nottingham)
Material Safety Data Sheet
ATSDR Agency for Toxic Substances and Disease Registry FAQ
International Chemical Safety Card 0164
NIOSH Pocket Guide to Chemical Hazards
Process flow sheet of Hydrogen Peroxide Production by anthrahydroquinone autoxidation
Hydrogen Peroxide Handbook by Rocketdyne
IR spectroscopic study J. Phys. Chem.
Bleaching action of Hydrogen peroxide at YouTube
Antiseptics
Bleaches
Disinfectants
1894 introductions
Household chemicals
Oxoacids
Light-sensitive chemicals
Peroxides
Otologicals
Oxidizing agents
Rocket oxidizers
Hair coloring
Reactive oxygen species
Liquid explosives
Oxygen compounds | Hydrogen peroxide | [
"Chemistry"
] | 7,242 | [
"Light-sensitive chemicals",
"Redox",
"Oxidizing agents",
"Light reactions",
"Rocket oxidizers"
] |
14,413 | https://en.wikipedia.org/wiki/Hydrocodone | Hydrocodone, also known as dihydrocodeinone, is a semi-synthetic opioid used to treat pain and as a cough suppressant. It is taken by mouth. Typically, it is dispensed as the combination acetaminophen/hydrocodone or ibuprofen/hydrocodone for pain severe enough to require an opioid and in combination with homatropine methylbromide to relieve cough. It is also available by itself in a long-acting form sold under the brand name Zohydro ER, among others, to treat severe pain of a prolonged duration. Hydrocodone is a controlled drug: in the United States, it is classified as a Schedule II Controlled Substance.
Common side effects include dizziness, sleepiness, nausea, and constipation. Serious side effects may include low blood pressure, seizures, QT prolongation, respiratory depression, and serotonin syndrome. Rapidly decreasing the dose may result in opioid withdrawal. Use during pregnancy or breastfeeding is generally not recommended. Hydrocodone is believed to work by activating opioid receptors, mainly in the brain and spinal cord. Hydrocodone 10 mg is equivalent to about 10 mg of morphine by mouth.
Hydrocodone was patented in 1923, while the long-acting formulation was approved for medical use in the United States in 2013. It is most commonly prescribed in the United States, which consumed 99% of the worldwide supply as of 2010. In 2018, it was the 402nd most commonly prescribed medication in the United States, with more than 400,000 prescriptions. Hydrocodone is a semisynthetic opioid, converted from codeine or less often from thebaine. Production using genetically engineered yeasts has been developed but is not used commercially.
Medical uses
Hydrocodone is used to treat moderate to severe pain. In liquid formulations, it is used to treat cough. In one study comparing the potency of hydrocodone to that of oxycodone, it was found that it took 50% more hydrocodone to achieve the same degree of miosis (pupillary contraction). The investigators interpreted this to mean that oxycodone is about 50% more potent than hydrocodone.
However, in a study of emergency department patients with fractures, it was found that an equal amount of either drug provided about the same degree of pain relief, indicating that there is little practical difference between them when used for that purpose. Some references state that the analgesic action of hydrocodone begins in 20–30 minutes and lasts about 4–8 hours. The manufacturer's information says onset of action is about 10–30 minutes and duration is about 4–6 hours. Recommended dosing interval is 4–6 hours. Hydrocodone reaches peak serum levels after 1.3 hours.
Available forms
Hydrocodone is available in a variety of formulations for oral administration:
The original oral form of hydrocodone alone, Dicodid, as immediate-release 5- and 10-mg tablets, is available for prescription in Continental Europe per national drug control and prescription laws and Title 76 of the Schengen Treaty. However, dihydrocodeine has been more widely used for the same indications since the early 1920s, with hydrocodone being regulated in the same way as morphine under the German Betäubungsmittelgesetz, the similarly named law in Switzerland and the Austrian Suchtmittelgesetz, whereas dihydrocodeine is regulated like codeine. For a number of decades, the liquid hydrocodone products available have been cough medicines.
Hydrocodone plus homatropine (Hycodan) in the form of small tablets for coughing and especially neuropathic moderate pain (the homatropine, an anticholinergic, is useful in both of those cases and is a deterrent to intentional overdose) was more widely used than Dicodid and was labelled as a cough medicine in the United States whilst Vicodin and similar drugs were the choices for analgesia.
Extended-release hydrocodone in a time-release syrup also containing chlorphenamine/chlorpheniramine is a cough medicine called Tussionex in North America. In Europe, similar time-release syrups containing codeine (numerous), dihydrocodeine (Paracodin Retard Hustensaft), nicocodeine (Tusscodin), thebacon, acetyldihydrocodeine, dionine, and nicodicodeine are used instead.
Immediate-release hydrocodone with paracetamol (acetaminophen) (Vicodin, Lortab, Lorcet, Maxidone, Norco, Zydone)
Immediate-release hydrocodone with ibuprofen (Vicoprofen, Ibudone, Reprexain)
Immediate-release hydrocodone with aspirin (Alor 5/500, Azdone, Damason-P, Lortab ASA, Panasal 5/500)
Controlled-release hydrocodone (Hysingla ER by Purdue Pharma, Zohydro ER)
Hydrocodone is not available in parenteral or any other non-oral forms.
Side effects
Common side effects of hydrocodone are nausea, vomiting, constipation, drowsiness, dizziness, lightheadedness, anxiety, abnormally happy or sad mood, dry throat, difficulty urinating, rash, itching, and contraction of the pupils. Serious side effects include slowed or irregular breathing and chest tightness.
Several cases of progressive bilateral hearing loss unresponsive to steroid therapy have been described as an infrequent adverse reaction to hydrocodone/paracetamol misuse. This adverse effect has been considered by some to be due to the ototoxicity of hydrocodone. Other researchers have suggested that paracetamol is the primary agent responsible for the ototoxicity.
The U.S. Food and Drug Administration (FDA) assigns the drug to pregnancy category C, meaning that no adequate and well-controlled studies in humans have been conducted. A newborn of a mother taking opioid medications regularly prior to the birth will be physically dependent. The baby may also exhibit respiratory depression if the opioid dose was high. An epidemiological study indicated that opioid treatment during early pregnancy results in increased risk of various birth defects.
Symptoms of hydrocodone overdose include narrowed or widened pupils; slow, shallow, or stopped breathing; slowed or stopped heartbeat; cold, clammy, or blue skin; excessive sleepiness; loss of consciousness; seizures; or death.
Hydrocodone can be habit forming, causing physical and psychological dependence. Its abuse liability is similar to morphine and less than oxycodone.
Interactions
Hydrocodone is metabolized by the cytochrome P450 enzymes CYP2D6 and CYP3A4, and inhibitors and inducers of these enzymes can modify hydrocodone exposure. One study found that combination of paroxetine, a selective serotonin reuptake inhibitor (SSRI) and strong CYP2D6 inhibitor, with once-daily extended-release hydrocodone, did not modify exposure to hydrocodone or the incidence of adverse effects. These findings suggest that hydrocodone can be coadministered with CYP2D6 inhibitors without dosage modification. Conversely, combination of hydrocodone/acetaminophen with the antiviral regimen of ombitasvir, paritaprevir, ritonavir, and dasabuvir for treatment of hepatitis C increased peak concentrations of hydrocodone by 27%, total exposure by 90%, and elimination half-life from 5.1 hours to 8.0 hours. Ritonavir is a strong CYP3A4 inhibitor as well as inducer of CYP3A and other enzymes, and the other antivirals are known to inhibit drug transporters like organic anion transporting polypeptide (OATP) 1B1 and 1B3, P-glycoprotein, and breast cancer resistance protein (BCRP). The changes in hydrocodone levels are consistent with CYP3A4 inhibition by ritonavir. Based on these findings, a 50% lower dose of hydrocodone and closer clinical monitoring was recommended when hydrocodone is used in combination with this antiviral regimen.
People consuming alcohol, other opioids, anticholinergic antihistamines, antipsychotics, anxiolytics, or other central nervous system (CNS) depressants together with hydrocodone may exhibit an additive CNS depression. Hydrocodone taken concomitantly with serotonergic medications like SSRI antidepressants may increase the risk of serotonin syndrome.
Pharmacology
Pharmacodynamics
Hydrocodone is a highly selective full agonist of the μ-opioid receptor (MOR). This is the main biological target of the endogenous opioid neuropeptide β-endorphin. Hydrocodone has low affinity for the δ-opioid receptor (DOR) and the κ-opioid receptor (KOR), where it likewise acts as an agonist.
Studies have shown that hydrocodone is stronger than codeine but only one-tenth as potent as morphine at binding to receptors, and it has been reported to be only 59% as potent as morphine in analgesic properties. However, in tests conducted on rhesus monkeys, the analgesic potency of hydrocodone was actually higher than morphine. Oral hydrocodone has a mean equivalent daily dosage (MEDD) factor of 0.4, meaning that 1 mg of hydrocodone is equivalent to 0.4 mg of intravenous morphine. However, because of morphine's low oral bioavailability, there is a 1:1 correspondence between orally administered morphine and orally administered hydrocodone.
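A minimal sketch of the equivalence arithmetic described above. The 0.4 factor and the oral 1:1 correspondence are simply the figures quoted in this paragraph; real equianalgesic conversion is patient-specific and relies on clinical judgement, so this is illustrative only.

```python
# Illustrative dose-equivalence arithmetic based on the figures quoted above.
MEDD_FACTOR = 0.4            # 1 mg oral hydrocodone ~ 0.4 mg IV morphine (per the text)
ORAL_TO_ORAL_FACTOR = 1.0    # ~1:1 with oral morphine (per the text)

def iv_morphine_equivalent_mg(oral_hydrocodone_mg: float) -> float:
    return oral_hydrocodone_mg * MEDD_FACTOR

def oral_morphine_equivalent_mg(oral_hydrocodone_mg: float) -> float:
    return oral_hydrocodone_mg * ORAL_TO_ORAL_FACTOR

print(iv_morphine_equivalent_mg(10))    # 4.0  (mg IV morphine)
print(oral_morphine_equivalent_mg(10))  # 10.0 (mg oral morphine)
```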
Pharmacokinetics
Absorption
Hydrocodone is only pharmaceutically available as an oral medication. It is well-absorbed, but the oral bioavailability of hydrocodone is only approximately 25%. The onset of action of hydrocodone via this route is 10 to 20 minutes, with a peak effect (Tmax) occurring at 30 to 60 minutes, and it has a duration of 4 to 8 hours. The FDA label for immediate-release hydrocodone with acetaminophen does not include any information on the influence of food on its absorption or other pharmacokinetics. Conversely, coadministration with a high-fat meal increases peak concentrations of different formulations of extended-release hydrocodone by 14 to 54%, whereas area-under-the-curve levels are not notably affected.
Distribution
The volume of distribution of hydrocodone is 3.3 to 4.7 L/kg. The plasma protein binding of hydrocodone is 20 to 50%.
Metabolism
In the liver, hydrocodone is transformed into several metabolites, including norhydrocodone, hydromorphone, 6α-hydrocodol (dihydrocodeine), and 6β-hydrocodol. 6α- and 6β-hydromorphol are also formed, and the metabolites of hydrocodone are conjugated (via glucuronidation). Hydrocodone has a terminal half-life that averages 3.8 hours (range 3.3–4.4 hours). The hepatic cytochrome P450 enzyme CYP2D6 converts hydrocodone into hydromorphone, a more potent opioid (5-fold higher binding affinity to the MOR). However, extensive and poor CYP2D6 metabolizers had similar physiological and subjective responses to hydrocodone, and the CYP2D6 inhibitor quinidine did not change the responses of extensive metabolizers, suggesting that inhibition of CYP2D6 metabolism of hydrocodone has no practical importance. Ultra-rapid CYP2D6 metabolizers (1–2% of the population) may have an increased response to hydrocodone; however, hydrocodone metabolism in this population has not been studied.
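As a rough numerical illustration of the kinetics quoted above (oral bioavailability of about 25%, a volume of distribution of 3.3–4.7 L/kg and a mean terminal half-life of 3.8 hours), the sketch below uses a crude one-compartment approximation. The body weight, the mid-range Vd and the neglect of the absorption phase are assumptions made only for this example.

```python
import math

F = 0.25           # oral bioavailability (per the text)
VD_L_PER_KG = 4.0  # volume of distribution, mid-range assumption
T_HALF_H = 3.8     # mean terminal half-life, hours (per the text)

def rough_peak_ug_per_l(dose_mg: float, weight_kg: float = 70.0) -> float:
    """Crude post-absorption estimate: C ~ F * dose / Vd (absorption phase ignored)."""
    return F * dose_mg * 1000.0 / (VD_L_PER_KG * weight_kg)

def fraction_remaining(hours: float) -> float:
    return math.exp(-math.log(2.0) * hours / T_HALF_H)

print(f"~{rough_peak_ug_per_l(10):.0f} ug/L after a 10 mg oral dose (70 kg)")
print(f"~{fraction_remaining(12):.0%} of that remains 12 hours later")
```

With these assumptions a single 10 mg dose lands near the lower end of the therapeutic plasma range quoted in the detection section below, which is a useful sanity check on the numbers rather than a clinical prediction.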
Norhydrocodone, the major metabolite of hydrocodone, is predominantly formed by CYP3A4-catalyzed oxidation. In contrast to hydromorphone, it is described as inactive. However, norhydrocodone is actually a MOR agonist with similar potency to hydrocodone, but has been found to produce only minimal analgesia when administered peripherally to animals (likely due to poor blood–brain barrier and thus central nervous system penetration). Inhibition of CYP3A4 in a child who was, in addition, a poor CYP2D6 metabolizer, resulted in a fatal overdose of hydrocodone. Approximately 40% of hydrocodone metabolism is attributed to non-cytochrome P450-catalyzed reactions.
Elimination
Hydrocodone is excreted in urine, mainly in the form of conjugates.
Chemistry
Detection in body fluids
Hydrocodone concentrations are measured in blood, plasma, and urine to seek evidence of misuse, to confirm diagnoses of poisoning, and to assist in investigations into deaths. Many commercial opiate screening tests react indiscriminately with hydrocodone, other opiates, and their metabolites, but chromatographic techniques can easily distinguish hydrocodone uniquely. Blood and plasma hydrocodone concentrations typically fall into the 5–30 μg/L range among people taking the drug therapeutically, 100–200 μg/L among recreational users, and 100–1,600 μg/L in cases of acute, fatal overdosage. Co-administration of the drug with food or alcohol can very significantly increase the resulting plasma hydrocodone concentrations that are subsequently achieved.
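The interpretation bands quoted above can be summarised in a small helper; the cut-offs below are taken directly from those ranges, which overlap in practice, so this is an orientation aid rather than a forensic rule.

```python
# Orientation only: map a measured blood/plasma hydrocodone level (ug/L)
# onto the ranges quoted above. The ranges overlap, so the boundaries are arbitrary.
def interpret_level_ug_per_l(conc: float) -> str:
    if conc <= 30:
        return "within the range reported for therapeutic use (~5-30 ug/L)"
    if conc <= 200:
        return "within the range reported for recreational use (~100-200 ug/L)"
    return "within the range reported in acute fatal overdosage (100-1,600 ug/L)"

for level in (12, 150, 900):
    print(level, "->", interpret_level_ug_per_l(level))
```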
Synthesis
Hydrocodone is most commonly synthesized from thebaine, a constituent of opium latex from the dried poppy plant. Once thebaine is obtained, it undergoes hydrogenation using a palladium catalyst.
Structure
There are three important structural features in hydrocodone: the amine group, which binds to the tertiary nitrogen binding site in the central nervous system's opioid receptor; the hydroxy group, which binds to the anionic binding site; and the phenyl group, which binds to the phenolic binding site. This binding triggers G protein activation and the subsequent release of dopamine.
History
Hydrocodone was first synthesized in Germany in 1920 by Carl Mannich and Helene Löwenheim. It was approved by the Food and Drug Administration on 23 March 1943 for sale in the United States and approved by Health Canada for sale in Canada under the brand name Hycodan.
Hydrocodone was first marketed by Knoll as Dicodid, starting in February 1924 in Germany. This name is analogous to other products the company introduced or otherwise marketed: Dilaudid (hydromorphone, 1926), Dinarkon (oxycodone, 1917), Dihydrin (dihydrocodeine, 1911), and Dimorphan (dihydromorphine). Paramorfan is the trade name of dihydromorphine from another manufacturer, as is Paracodin, for dihydrocodeine.
Hydrocodone was patented in 1923, while the long-acting formulation was approved for medical use in the United States in 2013. It is most commonly prescribed in the United States, which consumed 99% of the worldwide supply as of 2010. In 2018, it was the 402nd most commonly prescribed medication in the United States, with more than 400,000 prescriptions.
Society and culture
Formulations
Several common imprints for hydrocodone are M365, M366, M367.
Combination products
Most hydrocodone formulations include a second analgesic, such as paracetamol (acetaminophen) or ibuprofen. Examples of hydrocodone combinations include Norco, Vicodin, Vicoprofen and Riboxen.
Legal status in the United States
The US government imposed tougher prescribing rules for hydrocodone in 2014, changing the drug from Schedule III to Schedule II. In 2011, hydrocodone products were involved in around 100,000 abuse-related emergency department visits in the United States, more than double the number in 2004.
References
External links
4,5-Epoxymorphinans
Euphoriants
German inventions
Ketones
Mu-opioid receptor agonists
Phenol ethers
Semisynthetic opioids
Wikipedia medicine articles ready to translate | Hydrocodone | [
"Chemistry"
] | 3,596 | [
"Ketones",
"Functional groups"
] |
14,454 | https://en.wikipedia.org/wiki/Henry%20Moseley | Henry Gwyn Jeffreys Moseley (; 23 November 1887 – 10 August 1915) was an English physicist, whose contribution to the science of physics was the justification from physical laws of the previous empirical and chemical concept of the atomic number. This stemmed from his development of Moseley's law in X-ray spectra.
Moseley's law advanced atomic physics, nuclear physics and quantum physics by providing the first experimental evidence in favour of Niels Bohr's theory, aside from the hydrogen atom spectrum which the Bohr theory was designed to reproduce. That theory refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table.
When World War I broke out in Western Europe, Moseley left his research work at the University of Oxford behind to volunteer for the Royal Engineers of the British Army. Moseley was assigned to the force of British Empire soldiers that invaded the region of Gallipoli, Turkey, in April 1915, as a telecommunications officer. Moseley was shot and killed during the Battle of Gallipoli on 10 August 1915, at the age of 27. Experts have speculated that Moseley could otherwise have been awarded the Nobel Prize in Physics in 1916.
Biography
Henry G. J. Moseley, known to his friends as Harry, was born in Weymouth in Dorset in 1887. His father Henry Nottidge Moseley (1844–1891), who died when Moseley was quite young, was a biologist and also a professor of anatomy and physiology at the University of Oxford, who had been a member of the Challenger Expedition. Moseley's mother was Amabel Gwyn Jeffreys, the daughter of the Welsh biologist and conchologist John Gwyn Jeffreys. She was also the British women's champion of chess in 1913.
Moseley had been a very promising schoolboy at Summer Fields School (where one of the four "leagues" is named after him), and he was awarded a King's scholarship to attend Eton College. In 1906 he won the chemistry and physics prizes at Eton. In 1906, Moseley entered Trinity College of the University of Oxford, where he earned his bachelor's degree. While an undergraduate at Oxford, Moseley became a Freemason by joining the Apollo University Lodge. Immediately after graduation from Oxford in 1910, Moseley became a demonstrator in physics at the University of Manchester under the supervision of Sir Ernest Rutherford. During Moseley's first year at Manchester, he had a teaching load as a graduate teaching assistant, but following that first year, he was reassigned from his teaching duties to work as a graduate research assistant. He declined a fellowship offered by Rutherford, preferring to move back to Oxford, in November 1913, where he was given laboratory facilities but no support.
Scientific work
Experimenting with the energy of beta particles in 1912, Moseley showed that high potentials were attainable from a radioactive source of radium, thereby inventing the first atomic battery, though he was unable to produce the 1 MV necessary to stop the particles.
In 1913, Moseley observed and measured the X-ray spectra of various chemical elements (mostly metals) that were found by the method of diffraction through crystals. This was a pioneering use of the method of X-ray spectroscopy in physics, using Bragg's diffraction law to determine the X-ray wavelengths. Moseley discovered a systematic mathematical relationship between the wavelengths of the X-rays produced and the atomic numbers of the metals that were used as the targets in X-ray tubes. This has become known as Moseley's law.
Before Moseley's discovery, the atomic number (or elemental number) of an element had been thought of as a semi-arbitrary sequential number, based on the sequence of atomic masses, but modified somewhat where chemists found this modification to be desirable, such as by the Russian chemist, Dmitri Ivanovich Mendeleev. In his invention of the Periodic Table of the Elements, Mendeleev had interchanged the orders of a few pairs of elements to put them in more appropriate places in this table of the elements. For example, the metals cobalt and nickel had been assigned the atomic numbers 27 and 28, respectively, based on their known chemical and physical properties, even though they have nearly the same atomic masses. In fact, the atomic mass of cobalt is slightly larger than that of nickel, so nickel would be placed in the Periodic Table before cobalt if they were placed purely according to atomic mass. However, Moseley's experiments in X-ray spectroscopy showed directly from their physics that cobalt and nickel have distinct atomic numbers, 27 and 28, and that they are placed in the Periodic Table correctly by Moseley's objective measurements of their atomic numbers. Hence, Moseley's discovery demonstrated that the atomic numbers of elements are not just rather arbitrary numbers based on chemistry and the intuition of chemists, but rather, they have a firm experimental basis from the physics of their X-ray spectra.
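Moseley's law for the strongest (Kα) X-ray line is often quoted in the approximate Bohr-model form E ≈ (3/4) · 13.6 eV · (Z − 1)², where Z is the atomic number. The sketch below uses that approximation only to illustrate the point made above, that the K-line energies order cobalt, nickel and copper by atomic number rather than by atomic mass; the constant is not Moseley's original fitted value.

```python
RYDBERG_EV = 13.6  # hydrogen ionization energy in eV

def k_alpha_energy_ev(z):
    """Approximate K-alpha X-ray photon energy from the Bohr-model form of Moseley's law."""
    return 0.75 * RYDBERG_EV * (z - 1) ** 2

for name, z in [("cobalt", 27), ("nickel", 28), ("copper", 29)]:
    print(f"{name} (Z={z}): ~{k_alpha_energy_ev(z) / 1000:.2f} keV")
# cobalt ~6.90 keV, nickel ~7.44 keV, copper ~8.00 keV: close to the measured K-alpha energies,
# and strictly ordered by atomic number, not by atomic mass
```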
In addition, Moseley showed that there were gaps in the atomic number sequence at numbers 43, 61, 72, and 75. These spaces are now known, respectively, to be the places of the radioactive synthetic elements technetium and promethium, and also the last two quite rare naturally occurring stable elements hafnium (discovered 1923) and rhenium (discovered 1925). Nothing was known about these four elements in Moseley's lifetime, not even their very existence. Based on the intuition of a very experienced chemist, Dmitri Mendeleev had predicted the existence of a missing element in the Periodic Table, which was later found to be filled by technetium, and Bohuslav Brauner had predicted the existence of another missing element in this Table, which was later found to be filled by promethium. Henry Moseley's experiments confirmed these predictions, by showing exactly what the missing atomic numbers were, 43 and 61. In addition, Moseley predicted the existence of two more undiscovered elements, those with the atomic numbers 72 and 75, and gave very strong evidence that there were no other gaps in the Periodic Table between the elements aluminium (atomic number 13) and gold (atomic number 79).
This latter question about the possibility of more undiscovered ("missing") elements had been a standing problem among the chemists of the world, particularly given the existence of the large family of the lanthanide series of rare earth elements. Moseley was able to demonstrate that these lanthanide elements, i.e. lanthanum through lutetium, must have exactly 15 members – no more and no less. The number of elements in the lanthanides had been a question that was very far from being settled by the chemists of the early 20th Century. They could not yet produce pure samples of all the rare-earth elements, even in the form of their salts, and in some cases they were unable to distinguish between mixtures of two very similar (adjacent) rare-earth elements from the nearby pure metals in the Periodic Table. For example, there was a so-called "element" that was even given the chemical name of "didymium". "Didymium" was found some years later to be simply a mixture of two genuine rare-earth elements, and these were given the names neodymium and praseodymium, meaning "new twin" and "green twin". Also, the method of separating the rare-earth elements by the method of ion exchange had not been invented yet in Moseley's time.
Moseley's method in early X-ray spectroscopy was able to sort out the above chemical problems promptly, some of which had occupied chemists for a number of years. Moseley also predicted the existence of element 61, a lanthanide whose existence was previously unsuspected. Quite a few years later, this element 61 was created artificially in nuclear reactors and was named promethium.
Contribution to understanding of the atom
Before Moseley and his law, atomic numbers had been thought of as a semi-arbitrary ordering number, vaguely increasing with atomic weight but not strictly defined by it. Moseley's discovery showed that atomic numbers were not arbitrarily assigned, but rather, they have a definite physical basis. Moseley postulated that each successive element has a nuclear charge exactly one unit greater than its predecessor. Moseley redefined the idea of atomic numbers from its previous status as an ad hoc numerical tag to help sorting the elements into an exact sequence of ascending atomic numbers that made the Periodic Table exact. (This was later to be the basis of the Aufbau principle in atomic studies.) As noted by Bohr, Moseley's law provided a reasonably complete experimental set of data that supported the (new from 1911) conception by Ernest Rutherford and Antonius van den Broek of the atom, with a positively charged nucleus surrounded by negatively charged electrons in which the atomic number is understood to be the exact physical number of positive charges (later discovered and called protons) in the central atomic nuclei of the elements. Moseley mentioned the two scientists above in his research paper, but he did not actually mention Bohr, who was rather new on the scene then. Simple modifications of Rydberg's and Bohr's formulas were found to give a theoretical justification for Moseley's empirically derived law for determining atomic numbers.
Use of X-ray spectrometer
X-ray spectrometers are the foundation-stones of X-ray crystallography. The X-ray spectrometers as Moseley knew them worked as follows. A glass-bulb electron tube was used, similar to the one Moseley is shown holding in photographs. Inside the evacuated tube, electrons were fired at a metallic substance (i.e. a sample of pure element in Moseley's work), causing the ionization of electrons from the inner electron shells of the element. The rebound of electrons into these holes in the inner shells next causes the emission of X-ray photons that were led out of the tube in a semi-beam, through an opening in the external X-ray shielding. These are next diffracted by a standardized salt crystal, with angular results read out as photographic lines by the exposure of an X-ray film fixed outside the vacuum tube at a known distance. Application of Bragg's law (after some initial guesswork of the mean distances between atoms in the metallic crystal, based on its density) next allowed the wavelength of the emitted X-rays to be calculated.
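The wavelength determination described above rests on Bragg's law, nλ = 2d·sin θ. The snippet below only illustrates the arithmetic; the crystal plane spacing (0.282 nm, roughly that of rock salt) and the 0.154 nm X-ray wavelength are assumed example values, not figures from Moseley's experiments.

```python
import math

def bragg_angle_deg(wavelength_nm, d_spacing_nm, order=1):
    """Diffraction angle theta (degrees) satisfying Bragg's law n*lambda = 2*d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength_nm / (2 * d_spacing_nm)))

def wavelength_from_angle_nm(theta_deg, d_spacing_nm, order=1):
    """Invert Bragg's law: recover the X-ray wavelength from the measured angle."""
    return 2 * d_spacing_nm * math.sin(math.radians(theta_deg)) / order

theta = bragg_angle_deg(0.154, 0.282)   # ~15.8 degrees for the assumed values
print(round(theta, 1), round(wavelength_from_angle_nm(theta, 0.282), 3))
```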
Moseley participated in the design and development of early X-ray spectrometry equipment, learning some techniques from William Henry Bragg and William Lawrence Bragg at the University of Leeds, and developing others himself. Many of the techniques of X-ray spectroscopy were inspired by the methods that are used with visible light spectroscopes and spectrograms, by substituting crystals, ionization chambers, and photographic plates for their analogs in light spectroscopy. In some cases, Moseley found it necessary to modify his equipment to detect particularly soft (lower frequency) X-rays that could not penetrate either air or paper, by working with his instruments in a vacuum chamber.
Death and aftermath
Sometime in the first half of 1914, Moseley resigned from his position at Manchester, with plans to return to Oxford and continue his physics research there. However, World War I broke out in August 1914, and Moseley turned down this job offer to instead enlist with the Royal Engineers of the British Army. His family and friends tried to persuade him not to join, but he thought it was his duty. Moseley served as a technical officer in communications during the Battle of Gallipoli, in Turkey, beginning in April 1915, where he was killed by a sniper on 10 August 1915.
Only twenty-seven years old at the time of his death, Moseley could, in the opinion of some scientists, have contributed much to the knowledge of atomic structure had he survived. Niels Bohr said in 1962 that Rutherford's work "was not taken seriously at all" and that the "great change came from Moseley."
Robert Millikan wrote, "In a research which is destined to rank as one of the dozen most brilliant in conception, skillful in execution, and illuminating in results in the history of science, a young man twenty-six years old threw open the windows through which we can glimpse the sub-atomic world with a definiteness and certainty never dreamed of before. Had the European War had no other result than the snuffing out of this young life, that alone would make it one of the most hideous and most irreparable crimes in history."
George Sarton wrote, "His fame was already established on such a secure foundation that his memory will be green forever. He is one of the immortals of science, and though he would have made many other additions to our knowledge if his life had been spared, the contributions already credited to him were of such fundamental significance, that the probability of his surpassing himself was extremely small. It is very probable that however long his life, he would have been chiefly remembered because of the 'Moseley law' which he published at the age of twenty-six."
Isaac Asimov wrote, "In view of what he [Moseley] might still have accomplished … his death might well have been the most costly single death of the War to mankind generally."
Rutherford believed that Moseley's work would have earned him the Nobel Prize (which however is never awarded posthumously).
Memorial plaques to Moseley were installed at Manchester and Eton, and a Royal Society scholarship, established by his will, had as its second recipient the physicist P. M. S. Blackett, who later became president of the Society.
The Institute of Physics Henry Moseley Medal and Prize is named in his honour.
Notes
References
Further reading
External links
1887 births
1915 deaths
English physicists
Alumni of Trinity College, Oxford
People associated with the University of Manchester
People from Weymouth, Dorset
People educated at Eton College
Royal Engineers officers
British Army personnel of World War I
British military personnel killed in World War I
People involved with the periodic table
People educated at Summer Fields School
Rare earth scientists
Deaths by firearm in Turkey
Recipients of the Matteucci Medal
Manchester Literary and Philosophical Society
Military personnel from Dorset | Henry Moseley | [
"Chemistry"
] | 2,996 | [
"Periodic table",
"People involved with the periodic table"
] |
14,458 | https://en.wikipedia.org/wiki/Hail | Hail is a form of solid precipitation. It is distinct from ice pellets (American English "sleet"), though the two are often confused. It consists of balls or irregular lumps of ice, each of which is called a hailstone. Ice pellets generally fall in cold weather, while hail growth is greatly inhibited during low surface temperatures.
Unlike other forms of water ice precipitation, such as graupel (which is made of rime ice), ice pellets (which are smaller and translucent), and snow (which consists of tiny, delicately crystalline flakes or needles), hailstones usually measure between and in diameter. The METAR reporting code for hail or greater is GR, while smaller hailstones and graupel are coded GS.
Hail is possible within most thunderstorms (as it is produced by cumulonimbus), as well as within of the parent storm. Hail formation requires environments of strong, upward motion of air within the parent thunderstorm (similar to tornadoes) and lowered heights of the freezing level. In the mid-latitudes, hail forms near the interiors of continents, while, in the tropics, it tends to be confined to high elevations.
There are methods available to detect hail-producing thunderstorms using weather satellites and weather radar imagery. Hailstones generally fall at higher speeds as they grow in size, though complicating factors such as melting, friction with air, wind, and interaction with rain and other hailstones can slow their descent through Earth's atmosphere. Severe weather warnings are issued for hail when the stones reach a damaging size, as it can cause serious damage to human-made structures, and, most commonly, farmers' crops.
Definition
Any thunderstorm which produces hail that reaches the ground is known as a hailstorm. An ice crystal with a diameter of > is considered a hailstone. Hailstones can grow to and weigh more than .
Unlike ice pellets, hailstones are often layered and can be irregular and clumped together. Hail is composed of transparent ice or alternating layers of transparent and translucent ice at least thick, which are deposited upon the hailstone as it travels through the cloud, suspended aloft by air with strong upward motion until its weight overcomes the updraft and falls to the ground. Although the diameter of hail is varied, in the United States, the average observation of damaging hail is between and golf-ball-sized .
Stones larger than are usually considered large enough to cause damage. The Meteorological Service of Canada issues severe thunderstorm warnings when hail that size or above is expected. The US National Weather Service has a diameter threshold, effective January 2010, an increase over the previous threshold of hail. Other countries have different thresholds according to local sensitivity to hail; for instance, grape-growing areas could be adversely impacted by smaller hailstones. Hailstones can be very large or very small, depending on how strong the updraft is: weaker hailstorms produce smaller hailstones than stronger hailstorms (such as supercells), as the more powerful updrafts in a stronger storm can keep larger hailstones aloft.
Formation
Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid-water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing (). These types of strong updrafts can also indicate the presence of a tornado. The growth rate of hailstones is impacted by factors such as higher elevation, lower freezing zones, and wind shear.
Layer nature of the hailstones
Like other precipitation in cumulonimbus clouds, hail begins as water droplets. As the droplets rise and the temperature goes below freezing, they become supercooled water and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure. This means that the hailstone is made of thick and translucent layers, alternating with layers that are thin, white and opaque. Former theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted. This up and down motion was thought to be responsible for the successive layers of the hailstone. New research, based on theory as well as field study, has shown this is not necessarily true.
The storm's updraft, with upwardly directed wind speeds as high as , blows the forming hailstones up the cloud. As the hailstone ascends, it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies. The hailstone's growth rate changes depending on the variation in humidity and supercooled water droplets that it encounters. The accretion rate of these water droplets is another factor in the hailstone's growth. When the hailstone moves into an area with a high concentration of water droplets, it captures the latter and acquires a translucent layer. Should the hailstone move into an area where mostly water vapor is available, it acquires a layer of opaque white ice.
Furthermore, the hailstone's speed depends on its position in the cloud's updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that generally the larger hailstones will form some distance from the stronger updraft, where they can pass more time growing. As the hailstone grows, it releases latent heat, which keeps its exterior in a liquid phase. Because it undergoes "wet growth", the outer layer is sticky (i.e. more adhesive), so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape.
Hail can also undergo "dry growth", in which the latent heat release through freezing is not enough to keep the outer layer in a liquid state. Hail forming in this manner appears opaque due to small air bubbles that become trapped in the stone during rapid freezing. These bubbles coalesce and escape during the "wet growth" mode, and the hailstone is more clear. The mode of growth for a hailstone can change throughout its development, and this can result in distinct layers in a hailstone's cross-section.
The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes, based on the force of the updrafts in the hail-producing thunderstorm, whose top is usually greater than 10 km high. It then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature.
Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. The only case in which multiple trajectories can be discussed is in a multicellular thunderstorm, where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter" cell. This, however, is an exceptional case.
Factors favoring hail
Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater altitude. Hail in the tropics occurs mainly at higher elevations.
Hail growth becomes vanishingly small when air temperatures fall below , as supercooled water droplets become rare at these temperatures. Around thunderstorms, hail is most likely within the cloud at elevations above . Between and , 60% of hail is still within the thunderstorm, though 40% now lies within the clear air under the anvil. Below , hail is equally distributed in and around a thunderstorm to a distance of .
Climatology
Hail occurs most frequently within continental interiors at mid-latitudes and is less common in the tropics, despite a much higher frequency of thunderstorms than in the mid-latitudes. Hail is also much more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely. The higher elevations also result in there being less time available for hail to melt before reaching the ground. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888. China also experiences significant hailstorms. Central Europe and southern Australia also experience a lot of hailstorms. Regions where hailstorms frequently occur are southern and western Germany, northern and eastern France, southern and eastern Benelux, and northern Italy. In southeastern Europe, Croatia and Serbia experience frequent occurrences of hail. Some Mediterranean countries register their maximum frequency of hail during the autumn season.
In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley". Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming is North America's most hail-prone city with an average of nine to ten hailstorms per season. To the north of this area and also just downwind of the Rocky Mountains is the Hailstorm Alley region of Alberta, which also experiences an increased incidence of significant hail events.
Hailstorms are also common in several regions of South America, particularly in the temperate latitudes. The central region of Argentina, extending from the Mendoza region eastward towards Córdoba, experiences some of the most frequent hailstorms in the world, with 10–30 storms per year on average. The Patagonia region of southern Argentina also sees frequent hailstorms, though this may be partially due to graupel (small hail) being counted as hail in this colder region. The triple-border region in southern Brazil where the states of Paraná and Santa Catarina meet Argentina is another area known for damaging hailstorms. Hailstorms are also common in parts of Paraguay, Uruguay, and Bolivia that border the high-frequency hail regions of northern Argentina. The high frequency of hailstorms in these areas of South America is attributed to the region's orographic forcing of convection, combined with moisture transport from the Amazon and instability created by temperature contrasts between the surface and upper atmosphere. In Colombia, the cities of Bogotá and Medellín also see frequent hailstorms due to their high elevation. Southern Chile also sees persistent hail from mid-April through October.
Short-term detection
Weather radar is a very useful tool to detect the presence of hail-producing thunderstorms. However, radar data has to be complemented by a knowledge of current atmospheric conditions which can allow one to determine if the current atmosphere is conducive to hail development.
Modern radar scans many angles around the site. Reflectivity values at multiple angles above ground level in a storm are proportional to the precipitation rate at those levels. Summing reflectivities through the depth of the storm column gives the vertically integrated liquid, or VIL, an estimate of the liquid water content in the cloud. Research shows that hail development in the upper levels of the storm is related to the evolution of VIL. VIL divided by the vertical extent of the storm, called VIL density, has a relationship with hail size, although this varies with atmospheric conditions and therefore is not highly accurate. Traditionally, hail size and probability can be estimated from radar data by computer using algorithms based on this research. Some algorithms include the height of the freezing level to estimate the melting of the hailstone and what would be left on the ground.
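As a rough sketch of the VIL calculation described above, a commonly used empirical relationship converts the layer-average reflectivity Z (in mm⁶/m³) to a liquid water content and sums the result over the depth of the column. The constant and exponent below follow that common formulation, but operational algorithms differ in detail, and the reflectivity profile is invented purely for illustration.

```python
def vil_kg_per_m2(dbz_profile, layer_depth_m):
    """Vertically integrated liquid from a column of reflectivity values (dBZ).

    Uses the common empirical conversion VIL = sum( 3.44e-6 * Zbar**(4/7) * dh ),
    where Zbar is the layer-average reflectivity in mm^6/m^3.
    """
    z_linear = [10 ** (dbz / 10.0) for dbz in dbz_profile]   # dBZ -> mm^6/m^3
    vil = 0.0
    for z_low, z_high in zip(z_linear, z_linear[1:]):
        z_bar = (z_low + z_high) / 2.0
        vil += 3.44e-6 * z_bar ** (4.0 / 7.0) * layer_depth_m
    return vil

# Hypothetical storm column sampled every 1000 m from near the surface upward:
print(round(vil_kg_per_m2([55, 60, 62, 58, 50, 40], 1000.0), 1), "kg/m^2")
```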
Certain patterns of reflectivity are important clues for the meteorologist as well. The three body scatter spike is an example. This is the result of energy from the radar hitting hail and being deflected to the ground, where they deflect back to the hail and then to the radar. The energy took more time to go from the hail to the ground and back, as opposed to the energy that went directly from the hail to the radar, and the echo is further away from the radar than the actual location of the hail on the same radial path, forming a cone of weaker reflectivities.
More recently, the polarization properties of weather radar returns have been analyzed to differentiate between hail and heavy rain. The use of differential reflectivity (), in combination with horizontal reflectivity () has led to a variety of hail classification algorithms. Visible satellite imagery is beginning to be used to detect hail, but false alarm rates remain high using this method.
Size and terminal velocity
The size of hailstones is best determined by measuring their diameter with a ruler. In the absence of a ruler, hailstone size is often visually estimated by comparing its size to that of known objects, such as coins. Using objects such as hen's eggs, peas, and marbles for comparing hailstone sizes is imprecise, due to their varied dimensions. The UK organisation TORRO also publishes scales for both hailstones and hailstorms.
When observed at an airport, METAR code is used within a surface weather observation which relates to the size of the hailstone. Within METAR code, GR is used to indicate larger hail, of a diameter of at least . GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil.
Terminal velocity of hail, or the speed at which hail is falling when it strikes the ground, varies. It is estimated that a hailstone of in diameter falls at a rate of , while stones the size of in diameter fall at a rate of . Hailstone velocity is dependent on the size of the stone, its drag coefficient, the motion of wind it is falling through, collisions with raindrops or other hailstones, and melting as the stones fall through a warmer atmosphere. As hailstones are not perfect spheres, it is difficult to accurately calculate their drag coefficient - and, thus, their speed.
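For a rough sense of these fall speeds, a hailstone can be idealized as a smooth sphere of solid ice whose weight is balanced by aerodynamic drag, giving v = √(2mg / (ρ_air · A · C_d)). The drag coefficient, air density and perfect-sphere geometry assumed below are simplifications, which is exactly why real hailstones deviate from such estimates.

```python
import math

ICE_DENSITY = 917.0    # kg/m^3
AIR_DENSITY = 1.2      # kg/m^3, near-surface value (an assumption)
DRAG_COEFF = 0.5       # typical value for a smooth sphere (an assumption)
G = 9.81               # m/s^2

def terminal_velocity_m_s(diameter_cm):
    """Terminal fall speed of an idealized spherical hailstone."""
    r = diameter_cm / 200.0                          # cm diameter -> m radius
    mass = ICE_DENSITY * (4.0 / 3.0) * math.pi * r ** 3
    area = math.pi * r ** 2
    return math.sqrt(2 * mass * G / (AIR_DENSITY * area * DRAG_COEFF))

for d in (1, 2, 4, 8):
    print(f"{d} cm: ~{terminal_velocity_m_s(d):.0f} m/s")
# roughly 14, 20, 28 and 40 m/s: larger stones fall faster, as described above
```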
Size comparisons to objects
In the United States, the National Weather Service reports hail size as a comparison to everyday objects. Hailstones larger than 1 inch in diameter are denoted as "severe."
Hail records
Megacryometeors, large rocks of ice that are not associated with thunderstorms, are not officially recognized by the World Meteorological Organization as "hail", which are aggregations of ice associated with thunderstorms, and therefore records of extreme characteristics of megacryometeors are not given as hail records.
Heaviest: ; Gopalganj District, Bangladesh, 14 April 1986.
Largest diameter officially measured: diameter, circumference; Vivian, South Dakota, 23 July 2010.
Largest circumference officially measured: circumference, diameter; Aurora, Nebraska, 22 June 2003.
Greatest average hail precipitation: Kericho, Kenya experiences hailstorms, on average, 50 days annually. Kericho is close to the equator and the elevation of contributes to it being a hot spot for hail. Kericho reached the world record for 132 days of hail in one year.
Hazards
Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, crops. Hail damage to roofs often goes unnoticed until further structural damage is seen, such as leaks or cracks. It is hardest to recognize hail damage on shingled roofs and flat roofs, but all roofs have their own hail damage detection problems. Metal roofs are fairly resistant to hail damage, but may accumulate cosmetic damage in the form of dents and damaged coatings.
Hail is one of the most significant thunderstorm hazards to aircraft. When hailstones exceed in diameter, planes can be seriously damaged within seconds. The hailstones accumulating on the ground can also be hazardous to landing aircraft. Hail is a common nuisance to drivers of automobiles, severely denting the vehicle and cracking or even shattering windshields and windows unless parked in a garage or covered with a shielding material. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage. Hail is one of Canada's most expensive hazards.
Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest known incidents occurred around the 9th century in Roopkund, Uttarakhand, India, where 200 to 600 nomads seem to have died of injuries from hail the size of cricket balls.
Accumulations
Narrow zones where hail accumulates on the ground in association with thunderstorm activity are known as hail streaks or hail swaths, which can be detectable by satellite after the storms pass by. Hailstorms normally last from a few minutes up to 15 minutes in duration. Accumulating hail storms can blanket the ground with over of hail, cause thousands to lose power, and bring down many trees. Flash flooding and mudslides within areas of steep terrain can be a concern with accumulating hail.
Depths of up to have been reported. A landscape covered in accumulated hail generally resembles one covered in accumulated snow and any significant accumulation of hail has the same restrictive effects as snow accumulation, albeit over a smaller area, on transport and infrastructure. Accumulated hail can also cause flooding by blocking drains, and hail can be carried in the floodwater, turning into a snow-like slush which is deposited at lower elevations.
On somewhat rare occasions, a thunderstorm can become stationary or nearly so while prolifically producing hail and significant depths of accumulation do occur; this tends to happen in mountainous areas, such as the July 29, 2010 case of a foot of hail accumulation in Boulder County, Colorado. On June 5, 2015, hail up to four feet deep fell on one city block in Denver, Colorado. The hailstones, described as between the size of bumble bees and ping pong balls, were accompanied by rain and high winds. The hail fell in only the one area, leaving the surrounding area untouched. It fell for one and a half hours between 10:00 pm and 11:30 pm. A meteorologist for the National Weather Service in Boulder said, "It's a very interesting phenomenon. We saw the storm stall. It produced copious amounts of hail in one small area. It's a meteorological thing." Tractors used to clear the area filled more than 30 dump truck loads of hail.
Research focused on four individual days that accumulated more than of hail in 30 minutes on the Colorado front range has shown that these events share similar patterns in observed synoptic weather, radar, and lightning characteristics, suggesting the possibility of predicting these events prior to their occurrence. A fundamental problem in continuing research in this area is that, unlike hail diameter, hail depth is not commonly reported. The lack of data leaves researchers and forecasters in the dark when trying to verify operational methods. A cooperative effort between the University of Colorado and the National Weather Service is in progress. The joint project's goal is to enlist the help of the general public to develop a database of hail accumulation depths.
Suppression and prevention
During the Middle Ages, people in Europe used to ring church bells and fire cannons to try to prevent hail, and the subsequent damage to crops. Updated versions of this approach are available as modern hail cannons. Cloud seeding after World War II was done to eliminate the hail threat, particularly across the Soviet Union, where it was claimed a 70–98% reduction in crop damage from hail storms was achieved by deploying silver iodide in clouds using rockets and artillery shells. But these effects have not been replicated in randomized trials conducted in the West. Hail suppression programs have been undertaken by 15 countries between 1965 and 2005.
See also
Sleet (disambiguation)
Cumulonimbus and aviation
References
Further reading
External links
Hail Storm Research Tools at hailtrends.com
Hail Factsheet (archived) from ucar.edu
U.S. Billion-dollar Weather and Climate Disasters at NOAA.gov
Articles containing video clips
Precipitation
Snow or ice weather phenomena
Storm
Weather hazards
Water ice | Hail | [
"Physics"
] | 4,249 | [
"Weather",
"Physical phenomena",
"Weather hazards"
] |
14,463 | https://en.wikipedia.org/wiki/Harmonic%20mean | In mathematics, the harmonic mean is a kind of average, one of the Pythagorean means.
It is the most appropriate average for ratios and rates such as speeds, and is normally only used for positive arguments.
The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the numbers, that is, the generalized f-mean with f(x) = 1/x. For example, the harmonic mean of 1, 4, and 4 is 3 / (1/1 + 1/4 + 1/4) = 3/1.5 = 2.
Definition
The harmonic mean H of the positive real numbers x₁, x₂, …, xₙ is
H(x₁, x₂, …, xₙ) = n / (1/x₁ + 1/x₂ + ⋯ + 1/xₙ).
It is the reciprocal of the arithmetic mean of the reciprocals, and vice versa:
H(x₁, x₂, …, xₙ) = 1 / A(1/x₁, 1/x₂, …, 1/xₙ) and A(x₁, x₂, …, xₙ) = 1 / H(1/x₁, 1/x₂, …, 1/xₙ),
where the arithmetic mean is A(x₁, x₂, …, xₙ) = (x₁ + x₂ + ⋯ + xₙ) / n.
The harmonic mean is a Schur-concave function, and is greater than or equal to the minimum of its arguments: for positive arguments, min(x₁, …, xₙ) ≤ H(x₁, …, xₙ) ≤ n · min(x₁, …, xₙ). Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged).
The harmonic mean is also concave for positive arguments, an even stronger property than Schur-concavity.
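The definition above reads naturally as code. The following minimal sketch implements it directly; Python's standard library function statistics.harmonic_mean is shown only as a cross-check.

```python
from statistics import harmonic_mean  # available in the Python standard library

def hmean(values):
    """Harmonic mean: n divided by the sum of reciprocals (positive inputs assumed)."""
    return len(values) / sum(1 / v for v in values)

print(hmean([1, 4, 4]))           # 2.0, the example given above
print(harmonic_mean([1, 4, 4]))   # 2.0, same result from the standard library
```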
Relationship with other means
For all positive data sets containing at least one pair of nonequal values, the harmonic mean is always the least of the three Pythagorean means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between. (If all values in a nonempty data set are equal, the three means are always equal.)
It is the special case M−1 of the power mean: H(x₁, x₂, …, xₙ) = M−1(x₁, x₂, …, xₙ) = n / (x₁⁻¹ + x₂⁻¹ + ⋯ + xₙ⁻¹).
Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones.
The arithmetic mean is often mistakenly used in places calling for the harmonic mean. In the speed example below for instance, the arithmetic mean of 40 is incorrect, and too big.
The harmonic mean is related to the other Pythagorean means, as seen in the equation below. This can be seen by interpreting the denominator to be the arithmetic mean of the product of numbers n times but each time omitting the j-th term. That is, for the first term, we multiply all n numbers except the first; for the second, we multiply all n numbers except the second; and so on. The numerator, excluding the n, which goes with the arithmetic mean, is the geometric mean to the power n. Thus the n-th harmonic mean is related to the n-th geometric and arithmetic means. The general formula is
H(x₁, …, xₙ) = (G(x₁, …, xₙ))ⁿ / A(x₂x₃⋯xₙ, x₁x₃⋯xₙ, …, x₁x₂⋯xₙ₋₁).
If a set of non-identical numbers is subjected to a mean-preserving spread — that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases.
Harmonic mean of two or three numbers
Two numbers
For the special case of just two numbers, x₁ and x₂, the harmonic mean can be written as:
H = 2x₁x₂ / (x₁ + x₂)
or
H = 2 / (1/x₁ + 1/x₂).
(Note that the harmonic mean is undefined if x₁ + x₂ = 0, i.e. if one number is the negative of the other.)
In this special case, the harmonic mean is related to the arithmetic mean A = (x₁ + x₂)/2 and the geometric mean G = √(x₁x₂) by
H = G² / A.
Since G ≤ A by the inequality of arithmetic and geometric means, this shows for the n = 2 case that H ≤ G (a property that in fact holds for all n). It also follows that G = √(AH), meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means.
Three numbers
For the special case of three numbers, x₁, x₂ and x₃, the harmonic mean can be written as:
H = 3x₁x₂x₃ / (x₁x₂ + x₁x₃ + x₂x₃).
Three positive numbers H, G, and A are respectively the harmonic, geometric, and arithmetic means of three positive numbers if and only if the following inequality holds
Weighted harmonic mean
If a set of weights w₁, ..., wₙ is associated to the data set x₁, ..., xₙ, the weighted harmonic mean is defined by
H = (w₁ + w₂ + ⋯ + wₙ) / (w₁/x₁ + w₂/x₂ + ⋯ + wₙ/xₙ).
The unweighted harmonic mean can be regarded as the special case where all of the weights are equal.
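Written as code, the weighted harmonic mean divides the total weight by the weighted sum of reciprocals, and with equal weights it reduces to the unweighted form. The numbers below are illustrative.

```python
def weighted_hmean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / x)."""
    return sum(weights) / sum(w / x for x, w in zip(values, weights))

print(weighted_hmean([1, 4, 4], [1, 1, 1]))   # 2.0: equal weights recover the plain harmonic mean
print(weighted_hmean([60, 20], [3, 1]))       # ~40: e.g. a speed held over three times as much distance
```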
Examples
In analytic number theory
Prime number theory
The prime number theorem states that the number of primes less than or equal to n is asymptotically equal to the harmonic mean of the first n natural numbers.
In physics
Average speed
In many situations involving rates and ratios, the harmonic mean provides the correct average. For instance, if a vehicle travels a certain distance d outbound at a speed x (e.g. 60 km/h) and returns the same distance at a speed y (e.g. 20 km/h), then its average speed is the harmonic mean of x and y (30 km/h), not the arithmetic mean (40 km/h). The total travel time is the same as if it had traveled the whole distance at that average speed. This can be proven as follows:
Average speed for the entire journey = Total distance / Total time = 2d / (d/x + d/y) = 2 / (1/x + 1/y) = 2xy / (x + y), which is the harmonic mean of x and y.
However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 40 km/h.
Average speed for the entire journey = (x·t + y·t) / (2t) = (x + y) / 2, which is the arithmetic mean of x and y.
The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed. For the arithmetic mean, the speed of each portion of the trip is weighted by the duration of that portion, while for the harmonic mean, the corresponding weight is the distance. In both cases, the resulting formula reduces to dividing the total distance by the total time.)
However, one may avoid the use of the harmonic mean for the case of "weighting by distance". Pose the problem as finding "slowness" of the trip where "slowness" (in hours per kilometre) is the inverse of speed. When trip slowness is found, invert it so as to find the "true" average trip speed. For each trip segment i, the slowness si = 1/speedi. Then take the weighted arithmetic mean of the si's weighted by their respective distances (optionally with the weights normalized so they sum to 1 by dividing them by trip length). This gives the true average slowness (in time per kilometre). It turns out that this procedure, which can be done with no knowledge of the harmonic mean, amounts to the same mathematical operations as one would use in solving this problem by using the harmonic mean. Thus it illustrates why the harmonic mean works in this case.
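The 'slowness' procedure just described and the distance-weighted harmonic mean are the same computation, as the short sketch below shows for the 60 km/h out, 20 km/h back example used above (the 10 km segment length is an arbitrary illustrative choice).

```python
def average_speed(distances_km, speeds_kmh):
    """Total distance divided by total time: the physically correct average speed."""
    total_distance = sum(distances_km)
    total_time = sum(d / v for d, v in zip(distances_km, speeds_kmh))
    return total_distance / total_time

def distance_weighted_harmonic_mean(distances_km, speeds_kmh):
    """Harmonic mean of the segment speeds, weighted by segment distance (same arithmetic as above)."""
    return sum(distances_km) / sum(d / v for d, v in zip(distances_km, speeds_kmh))

trip = ([10.0, 10.0], [60.0, 20.0])   # equal distances out and back, at 60 km/h and 20 km/h
print(round(average_speed(*trip), 1))                     # ~30.0 km/h
print(round(distance_weighted_harmonic_mean(*trip), 1))   # ~30.0 km/h, identical by construction
```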
Density
Similarly, if one wishes to estimate the density of an alloy given the densities of its constituent elements and their mass fractions (or, equivalently, percentages by mass), then the predicted density of the alloy (exclusive of typically minor volume changes due to atom packing effects) is the weighted harmonic mean of the individual densities, weighted by mass, rather than the weighted arithmetic mean as one might at first expect. To use the weighted arithmetic mean, the densities would have to be weighted by volume. Applying dimensional analysis to the problem while labeling the mass units by element and making sure that only like element-masses cancel makes this clear.
Electricity
If one connects two electrical resistors in parallel, one having resistance x (e.g., 60 Ω) and one having resistance y (e.g., 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of x and y (48 Ω): the equivalent resistance, in either case, is 24 Ω (one-half of the harmonic mean). This same principle applies to capacitors in series or to inductors in parallel.
However, if one connects the resistors in series, then the average resistance is the arithmetic mean of x and y (50 Ω), with total resistance equal to twice this, the sum of x and y (100 Ω). This principle applies to capacitors in parallel or to inductors in series.
As with the previous example, the same principle applies when more than two resistors, capacitors or inductors are connected, provided that all are in parallel or all are in series.
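The resistor example translates directly into code: the equivalent parallel resistance of the 60 Ω and 40 Ω pair used above equals their harmonic mean divided by the number of resistors.

```python
def parallel_resistance(resistances):
    """Equivalent resistance of resistors in parallel: reciprocal of the sum of reciprocals."""
    return 1 / sum(1 / r for r in resistances)

def hmean(values):
    """Plain harmonic mean, repeated here so the snippet is self-contained."""
    return len(values) / sum(1 / v for v in values)

rs = [60.0, 40.0]
print(round(parallel_resistance(rs), 1))   # 24.0 ohms
print(round(hmean(rs) / len(rs), 1))       # 24.0 ohms: the harmonic mean (48) divided by the number of resistors
```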
The "conductivity effective mass" of a semiconductor is also defined as the harmonic mean of the effective masses along the three crystallographic directions.
Optics
As for other optic equations, the thin lens equation 1/f = 1/u + 1/v can be rewritten such that the focal length f is one-half of the harmonic mean of the distances of the subject u and object v from the lens.
Two thin lenses of focal lengths f₁ and f₂ in series are equivalent to two thin lenses in series each having a focal length equal to their harmonic mean. Expressed as optical power, two thin lenses of optical powers P₁ and P₂ in series are equivalent to two thin lenses in series each having an optical power equal to their arithmetic mean.
In finance
The weighted harmonic mean is the preferable method for averaging multiples, such as the price–earnings ratio (P/E). If these ratios are averaged using a weighted arithmetic mean, high data points are given greater weights than low data points. The weighted harmonic mean, on the other hand, correctly weights each data point. The simple weighted arithmetic mean when applied to non-price normalized ratios such as the P/E is biased upwards and cannot be numerically justified, since it is based on equalized earnings; just as vehicle speeds cannot be averaged for a round-trip journey (see above).
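The point about averaging multiples can be made concrete: dividing a portfolio's total price by its total earnings is exactly a value-weighted harmonic mean of the individual P/E ratios. The two holdings below are invented purely for illustration.

```python
def portfolio_pe(values_invested, pe_ratios):
    """Aggregate P/E: total price divided by total earnings (a value-weighted harmonic mean)."""
    total_value = sum(values_invested)
    total_earnings = sum(v / pe for v, pe in zip(values_invested, pe_ratios))
    return total_value / total_earnings

values = [100.0, 100.0]   # equal amounts invested in two hypothetical stocks
pes = [10.0, 40.0]
print(portfolio_pe(values, pes))   # 16.0, the harmonic mean of 10 and 40
print(sum(pes) / len(pes))         # 25.0, the arithmetic mean, which overstates the multiple
```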
In geometry
In any triangle, the radius of the incircle is one-third of the harmonic mean of the altitudes.
For any point P on the minor arc BC of the circumcircle of an equilateral triangle ABC, with distances q and t from B and C respectively, and with the intersection of PA and BC being at a distance y from point P, we have that y is half the harmonic mean of q and t.
In a right triangle with legs a and b and altitude h from the hypotenuse to the right angle, h² is half the harmonic mean of a² and b².
Let t and s (t > s) be the sides of the two inscribed squares in a right triangle with hypotenuse c. Then equals half the harmonic mean of and .
Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.)
One application of this trapezoid result is in the crossed ladders problem, where two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at height A and the other leaning against the opposite wall at height B, as shown. The ladders cross at a height of h above the alley floor. Then h is half the harmonic mean of A and B. This result still holds if the walls are slanted but still parallel and the "heights" A, B, and h are measured as distances from the floor along lines parallel to the walls. This can be proved easily using the area formula of a trapezoid and area addition formula.
In an ellipse, the semi-latus rectum (the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus.
In other sciences
In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision (true positives per predicted positive) and the recall (true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure). This is used in information retrieval because only the positive class is of relevance, while the number of negatives, in general, is large and unknown. It is thus a trade-off as to whether the correct positive predictions should be measured in relation to the number of predicted positives or the number of real positives, so it is measured versus a putative number of positives that is an arithmetic mean of the two possible denominators.
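The balanced F-score mentioned above is simply the harmonic mean of precision and recall. A minimal sketch:

```python
def f1_score(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8, 0.5), 3))   # ~0.615, pulled toward the weaker of the two scores
print(round(f1_score(0.8, 0.8), 3))   # 0.8, equal precision and recall give that common value
```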
A consequence arises from basic algebra in problems where people or systems work together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps 1 / (1/4 + 1/6), which is equal to 2.4 hours, to drain the pool together. This is one-half of the harmonic mean of 6 and 4: (1/2) · (2·6·4)/(6 + 4) = 2.4. That is, the appropriate average for the two types of pump is the harmonic mean, and with one pair of pumps (two pumps), it takes half this harmonic mean time, while with two pairs of pumps (four pumps) it would take a quarter of this harmonic mean time.
In hydrology, the harmonic mean is similarly used to average hydraulic conductivity values for a flow that is perpendicular to layers (e.g., geologic or soil) - flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity.
In sabermetrics, a baseball player's Power–speed number is the harmonic mean of their home run and stolen base totals.
In population genetics, the harmonic mean is used when calculating the effects of fluctuations in the census population size on the effective population size. The harmonic mean takes into account the fact that events such as population bottleneck increase the rate genetic drift and reduce the amount of genetic variation in the population. This is a result of the fact that following a bottleneck very few individuals contribute to the gene pool limiting the genetic variation present in the population for many generations to come.
When considering fuel economy in automobiles two measures are commonly used – miles per gallon (mpg), and litres per 100 km. As the dimensions of these quantities are the inverse of each other (one is distance per volume, the other volume per distance) when taking the mean value of the fuel economy of a range of cars one measure will produce the harmonic mean of the other – i.e., converting the mean value of fuel economy expressed in litres per 100 km to miles per gallon will produce the harmonic mean of the fuel economy expressed in miles per gallon. For calculating the average fuel consumption of a fleet of vehicles from the individual fuel consumptions, the harmonic mean should be used if the fleet uses miles per gallon, whereas the arithmetic mean should be used if the fleet uses litres per 100 km. In the USA the CAFE standards (the federal automobile fuel consumption standards) make use of the harmonic mean.
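A quick numerical check of the fuel-economy statement: averaging two cars' consumption in litres per 100 km and converting the result back to miles per US gallon yields the harmonic mean of their mpg figures. The 20 mpg and 40 mpg values are illustrative.

```python
MPG_TO_L_PER_100KM = 235.215   # conversion constant for US gallons and statute miles

def mpg_to_l100km(mpg):
    return MPG_TO_L_PER_100KM / mpg

def l100km_to_mpg(l100km):
    return MPG_TO_L_PER_100KM / l100km

mpgs = [20.0, 40.0]
mean_l100km = sum(mpg_to_l100km(m) for m in mpgs) / len(mpgs)
print(round(l100km_to_mpg(mean_l100km), 2))          # ~26.67 mpg
print(round(len(mpgs) / sum(1 / m for m in mpgs), 2))  # ~26.67 mpg, the harmonic mean of 20 and 40
```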
In chemistry and nuclear physics the average mass per particle of a mixture consisting of different species (e.g., molecules or isotopes) is given by the harmonic mean of the individual species' masses weighted by their respective mass fraction.
Beta distribution
The harmonic mean of a beta distribution with shape parameters α and β is:
H = (α − 1) / (α + β − 1), provided that α > 1 and β > 0.
The harmonic mean with α < 1 is undefined because its defining expression is not bounded in [0, 1].
Letting α = β gives H = (α − 1) / (2α − 1),
showing that for α = β the harmonic mean ranges from 0 for α = β = 1, to 1/2 for α = β → ∞.
The following are the limits with one parameter finite (non-zero) and the other parameter approaching these limits:
With the geometric mean the harmonic mean may be useful in maximum likelihood estimation in the four parameter case.
A second harmonic mean, H₁₋ₓ (the harmonic mean of 1 − X), also exists for this distribution:
H₁₋ₓ = (β − 1) / (α + β − 1), provided that β > 1 and α > 0.
This harmonic mean with β < 1 is undefined because its defining expression is not bounded in [ 0, 1 ].
Letting α = β in the above expression gives H₁₋ₓ = (α − 1) / (2α − 1),
showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.
The following are the limits with one parameter finite (non zero) and the other approaching these limits:
Although both harmonic means are asymmetric, when α = β the two means are equal.
Lognormal distribution
The harmonic mean (H) of the lognormal distribution of a random variable X is
H = exp(μ − σ²/2),
where μ and σ² are the parameters of the distribution, i.e. the mean and variance of the distribution of the natural logarithm of X.
The harmonic and arithmetic means of the distribution are related by
H = μ* / (1 + Cv²),
where Cv and μ* are the coefficient of variation and the mean of the distribution respectively.
The geometric (G), arithmetic (μ*) and harmonic (H) means of the distribution are related by μ* · H = G².
Pareto distribution
The harmonic mean of the type 1 Pareto distribution is
H = k (1 + 1/α),
where k is the scale parameter and α is the shape parameter.
Statistics
For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if the sample includes at least one term of the form 1/0).
Sample distributions of mean and variance
The mean of the sample m is asymptotically distributed normally with variance s2.
The variance of the mean itself is
where m is the arithmetic mean of the reciprocals, x are the variates, n is the population size and E is the expectation operator.
Delta method
Assuming that the variance is not infinite and that the central limit theorem applies to the sample then using the delta method, the variance is approximately
Var(H) ≈ H⁴ s² / n = s² / (n m⁴),
where H is the harmonic mean, m is the arithmetic mean of the reciprocals
s² is the variance of the reciprocals of the data
and n is the number of data points in the sample.
Jackknife method
A jackknife method of estimating the variance is possible if the mean is known. This method is the usual 'delete 1' rather than the 'delete m' version.
This method first requires the computation of the mean of the sample (m)
where x are the sample values.
A series of values wi is then computed, where
The mean (h) of the wi is then taken:
The variance of the mean is
Significance testing and confidence intervals for the mean can then be estimated with the t test.
Size biased sampling
Assume a random variate has a distribution f( x ). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length based or size biased sampling.
Let μ be the mean of the population. Then the probability density function f*(x) of the size biased population is
f*(x) = x f(x) / μ.
The expectation of this length biased distribution E*(x) is
E*(x) = μ + σ²/μ,
where σ² is the variance.
The expectation of the harmonic mean is the same as the non-length biased version E( x )
The problem of length biased sampling arises in a number of areas including textile manufacture, pedigree analysis and survival analysis.
Akman et al. have developed a test for the detection of length based bias in samples.
Shifted variables
If X is a positive random variable and q > 0 then for all ε > 0
Moments
Assuming that X and E(X) are > 0 then
E(1/X) ≥ 1/E(X).
This follows from Jensen's inequality.
Gurland has shown that for a distribution that takes only positive values, for any n > 0
Under some conditions
where ~ means approximately equal to.
Sampling properties
Assuming that the variates (x) are drawn from a lognormal distribution there are several possible estimators for H:
where
Of these H3 is probably the best estimator for samples of 25 or more.
Bias and variance estimators
A first order approximation to the bias and variance of H1 are
where Cv is the coefficient of variation.
Similarly a first order approximation to the bias and variance of H3 are
In numerical experiments H3 is generally a superior estimator of the harmonic mean than H1. H2 produces estimates that are largely similar to H1.
Notes
The Environmental Protection Agency recommends the use of the harmonic mean in setting maximum toxin levels in water.
In geophysical reservoir engineering studies, the harmonic mean is widely used.
See also
Contraharmonic mean
Generalized mean
Harmonic number
Rate (mathematics)
Weighted mean
Parallel summation
Geometric mean
Weighted geometric mean
HM-GM-AM-QM inequalities
Harmonic mean p-value
References
External links
Averages, Arithmetic and Harmonic Means at cut-the-knot
Means | Harmonic mean | [
"Physics",
"Mathematics"
] | 4,202 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
14,467 | https://en.wikipedia.org/wiki/Harry%20Kroto | Sir Harold Walter Kroto (born Harold Walter Krotoschiner; 7 October 1939 – 30 April 2016) was an English chemist. He shared the 1996 Nobel Prize in Chemistry with Robert Curl and Richard Smalley for their discovery of fullerenes. He was the recipient of many other honors and awards.
Kroto ended his career as the Francis Eppes Professor of Chemistry at Florida State University, which he joined in 2004. Prior to this, he spent approximately 40 years at the University of Sussex.
Kroto promoted science education and was a critic of religious faith.
Early years
Kroto was born in Wisbech, Isle of Ely, Cambridgeshire, England, to Edith and Heinz Krotoschiner, his name being of Silesian origin. His father's family came from Bojanowo, Poland, and his mother's from Berlin. Both of his parents were born in Berlin and fled to Great Britain in the 1930s as refugees from Nazi Germany; his father was Jewish. Harry was raised in Bolton while the British authorities interned his father on the Isle of Man as an enemy alien during World War II. Kroto attended Bolton School, where he was a contemporary of the actor Ian McKellen. In 1955, Harold's father shortened the family name to Kroto.
As a child, he became fascinated by a Meccano set. Kroto credited Meccano, along with helping his father in the latter's balloon factory after World War II, amongst other things, with developing skills useful in scientific research. He developed an interest in chemistry, physics, and mathematics in secondary school, and because his sixth form chemistry teacher, Harry Heaney (who subsequently became a university professor), felt that the University of Sheffield had the best chemistry department in the United Kingdom, he went to Sheffield.
Although raised Jewish, Kroto stated that religion never made any sense to him. He was a humanist who claimed to have three religions: Amnesty Internationalism, atheism, and humour. He was a distinguished supporter of the British Humanist Association. In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto.
In 2015, Kroto signed the Mainau Declaration 2015 on Climate Change on the final day of the 65th Lindau Nobel Laureate Meeting. The declaration was signed by a total of 76 Nobel Laureates and handed to then-President of the French Republic, François Hollande, as part of the successful COP21 climate summit in Paris.
Education and academic career
Education
Kroto was educated at Bolton School and went to the University of Sheffield in 1958, where he obtained a first-class honours BSc degree in Chemistry (1961) and a PhD in Molecular Spectroscopy (1964). During his time at Sheffield he also was the art editor of Arrows – the university student magazine, played tennis for the university team (reaching the UAU finals twice) and was President of the Student Athletics Council (1963–64). Among other things such as making the first phosphaalkenes (compounds with carbon phosphorus double bonds), his doctoral studies included unpublished research on carbon suboxide, O=C=C=C=O, and this led to a general interest in molecules containing chains of carbon atoms with numerous multiple bonds. He started his work with an interest in organic chemistry, but when he learned about spectroscopy it inclined him towards quantum chemistry; he later developed an interest in astrochemistry.
After obtaining his PhD, Kroto spent two years as a postdoctoral fellow in the molecular spectroscopy group of Gerhard Herzberg at the National Research Council in Ottawa, Canada, and the subsequent year (1966–1967) at Bell Laboratories in New Jersey, carrying out Raman studies of liquid-phase interactions and working on quantum chemistry.
Research at the University of Sussex
In 1967, Kroto began teaching and research at the University of Sussex in England. During his time at Sussex from 1967 to 1985, he carried out research mainly focused on the spectroscopic studies of new and novel unstable and semi-stable species. This work resulted in the birth of the various fields of new chemistry involving carbon multiply bonded to second and third row elements e.g. S, Se and P. A particularly important breakthrough (with Sussex colleague John Nixon) was the creation of several new phosphorus species detected by microwave spectroscopy. This work resulted in the birth of the field(s) of phosphaalkene and phosphaalkyne chemistry. These species contain carbon double and triple bonded to phosphorus (C=P and C≡P) such as cyanophosphaethyne.
In 1975, he became a full professor of Chemistry. This coincided with laboratory microwave measurements with Sussex colleague David Walton on long linear carbon chain molecules, leading to radio astronomy observations with Canadian astronomers surprisingly revealing that these unusual carbonaceous species exist in relatively large abundances in interstellar space as well as the outer atmospheres of certain stars – the carbon-rich red giants.
Discovery of buckminsterfullerene
In 1985, on the basis of the Sussex studies and the stellar discoveries, laboratory experiments (with co-workers James R. Heath, Sean C. O'Brien, Yuan Liu, Robert Curl and Richard Smalley at Rice University) which simulated the chemical reactions in the atmospheres of red giant stars demonstrated that stable C60 molecules could form spontaneously from a condensing carbon vapour. The co-investigators directed lasers at graphite and examined the results. The C60 molecule has the same symmetry pattern as a football, consisting of 12 pentagons and 20 hexagons of carbon atoms. Kroto named the molecule buckminsterfullerene, after Buckminster Fuller, who had conceived of geodesic domes, as the dome concept had provided a clue to the likely structure of the new species.
In 1985, the C60 discovery caused Kroto to shift the focus of his research from spectroscopy in order to probe the consequences of the C60 structural concept (and prove it correct) and to exploit the implications for chemistry and material science.
This research is significant for the discovery of a new allotrope of carbon known as a fullerene. Other allotropes of carbon include graphite, diamond and graphene. Kroto's 1985 paper entitled "C60: Buckminsterfullerene", published with colleagues J. R. Heath, S. C. O'Brien, R. F. Curl, and R. E. Smalley, was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society, presented to Rice University in 2015. The discovery of fullerenes was recognized in 2010 by the designation of a National Historic Chemical Landmark by the American Chemical Society at the Richard E. Smalley Institute for Nanoscale Science and Technology at Rice University in Houston, Texas.
Research at Florida State University
In 2004, Kroto left the University of Sussex to take up a new position as Francis Eppes Professor of Chemistry at Florida State University. At FSU he carried out fundamental research on: Carbon vapour with Professor Alan Marshall; Open framework condensed phase systems with strategically important electrical and magnetic behaviour with Professors Naresh Dalal (FSU) and Tony Cheetham (Cambridge); and the mechanism of formation and properties of nano-structured systems. In addition, he participated in research initiatives at FSU that probed the astrochemistry of fullerenes, metallofullerenes, and polycyclic aromatic hydrocarbons in stellar/circumstellar space, as well as their relevance to stardust.
Educational outreach and public service
In 1995, he jointly set up the Vega Science Trust, a UK educational charity that created high-quality science films, including lectures and interviews with Nobel Laureates, discussion programmes, and careers and teaching resources for TV and Internet broadcast. Vega produced over 280 programmes, streamed for free from the Vega website, which acted as a TV science channel. The trust closed in 2012.
In 2009, Kroto spearheaded the development of a second science education initiative, GEOSET. Short for Global Educational Outreach for Science, Engineering and Technology, GEOSET is an ever-growing online cache of recorded teaching modules that are freely downloadable to educators and the public. The program aims to increase knowledge of the sciences by creating a global repository of educational videos and presentations from leading universities and institutions.
In 2003, prior to the Blair/Bush invasion of Iraq on the pretext that Iraq had weapons of mass destruction, Kroto initiated and organised a letter signed by a dozen UK Nobel Laureates. Composed by his friend, the late Nobel Peace Prize laureate Sir Joseph Rotblat, it was published in The Times on 15 February 2003.
He wrote a set of articles, mostly opinion pieces, from 2002 to 2003 for the Times Higher Education Supplement, a weekly UK publication.
From 2002 to 2004, Kroto served as president of the Royal Society of Chemistry. In 2004, he was appointed to the Francis Eppes Professorship in the chemistry department at Florida State University, carrying out research in nanoscience and nanotechnology.
He spoke at Auburn University on 29 April 2010, and at the James A. Baker III Institute for Public Policy at Rice University with Robert Curl on 13 October 2010.
In October 2010 Kroto participated in the USA Science and Engineering Festival's Lunch with a Laureate program where middle and high school students had the opportunity to engage in an informal conversation with a Nobel Prize–winning scientist.
He spoke at Mahatma Gandhi University, at Kottayam, in Kerala, India in January 2011, where he was an 'Erudite' special invited lecturer of the Government of Kerala, from 5 to 11 January 2011.
Kroto spoke at CSICon 2011, a convention "dedicated to scientific inquiry and critical thinking" organized by the Committee for Skeptical Inquiry in association with Skeptical Inquirer magazine and the Center for Inquiry.
He also delivered the IPhO 2012 lecture at the International Physics Olympiad held in Estonia.
In 2014, Kroto spoke at the Starmus Festival in the Canary Islands, delivering a lecture about his life in science, chemistry, and design.
Personal life
In 1963, Kroto married Margaret Henrietta Hunter, also a student of the University of Sheffield at the time. The couple had two sons. Throughout his life, Kroto was a lover of film, theatre, art, and music and published his own artwork.
Personal beliefs
Kroto was a "devout atheist" who thought that beliefs in immortality derive from lack of the courage to accept human mortality. He was a patron of the British Humanist Association. He was a supporter of Amnesty International. He referred to his view that religious dogma causes people to accept unethical or inhumane actions: "The only mistake Bernie Madoff made was to promise returns in this life." He held that scientists had a responsibility to work for the benefit of the entire species. On 15 September 2010, Kroto, along with 54 other public figures, signed an open letter published in The Guardian, stating their opposition to Pope Benedict XVI's state visit to the UK.
Kroto was an early Signatory of Asteroid Day.
In 2008, Kroto was critical of Michael Reiss for directing the teaching of creationism alongside evolution.
Kroto praised the increase of organized online information as an "Educational Revolution" and named it as the "GooYouWiki" world referring to Google, YouTube and Wikipedia.
Graphic design
The discovery of buckminsterfullerene caused Kroto to postpone his dream of setting up an art and graphic design studio; he had been doing graphics semi-professionally for years. Nevertheless, Kroto's graphic design work resulted in numerous posters, letterheads, logos, book and journal covers, and medal designs. His artwork received awards in the Sunday Times Book Jacket Design competition (1964) and the Moët Hennessy/Louis Vuitton Science pour l'Art Prize (1994). Other notable graphical works include the design of the Nobel UK Stamp for Chemistry (2001) and features at the Royal Academy (London) Summer Exhibition (2004).
Death and reactions
Kroto died on 30 April 2016 in Lewes, East Sussex, from complications of amyotrophic lateral sclerosis at the age of 76.
Richard Dawkins wrote a memorial for Kroto in which he mentioned Kroto's "passionate hatred of religion." The Wall Street Journal described him as "(spending much of his later life) jetting around the world to extol scientific education in a world he saw as blinded by religion." Slate's Zack Kopplin related a story about how Kroto gave him advice and support to fight Louisiana's creationism law, a law that allows public school science teachers to attack evolution and how Kroto defended the scientific findings of global warming. In an obituary published in the journal Nature, Robert Curl and James R. Heath described Kroto as having an "impish sense of humour similar to that of the British comedy group Monty Python".
Honours and awards
Kroto won numerous awards, individually and with others:
Major awards
Tilden Lecturer of the Royal Society of Chemistry, 1981–82
Elected a Fellow of the Royal Society (FRS) in 1990
International Prize for New Materials American Physical Society, 1992 (with Robert Curl and Richard Smalley)
Italgas Prize for Innovation in Chemistry, 1992
Royal Society of Chemistry Longstaff Medal, 1993
Hewlett Packard Europhysics Prize, 1994 (with Wolfgang Kraetschmer, Don Huffman and Richard Smalley)
Nobel Prize in Chemistry, 1996 (shared with Robert Curl and Richard Smalley)
Carbon Medal, American Carbon Society Medal for Achievement in Carbon Science, 1997 (shared with Robert Curl and Richard Smalley)
Blackett Lectureship (Royal Society), 1999
Faraday Award and Lecture (Royal Society), 2001
Dalton Medal (Manchester Lit and Phil), 1998
Erasmus Medal of Academia Europaea, 2002
Copley Medal of the Royal Society, 2002
Golden Plate Award of the American Academy of Achievement, 2002
Order of Cherubini (Torino), 2005
Foreign Associate of the National Academy of Sciences, 2007
Kavli Lecturer, 2007
National Historic Chemical Landmark, American Chemical Society, 2010.
Citation for Chemical Breakthrough Award, Division of History of Chemistry, American Chemical Society, 2015
Kroto was made a Knight Bachelor in the 1996 New Year Honours list.
The University of Sheffield North Campus contains two buildings named after Kroto: The Kroto Innovation Centre and the Kroto Research Institute.
Honorary degrees
Université Libre de Bruxelles (Belgium)
University of Stockholm (Sweden)
University of Limburg (now Hasselt University) (Belgium)
University of Sheffield (UK)
University of Kingston (UK)
University of Sussex (UK)
University of Helsinki (Finland)
University of Nottingham (UK)
Yokohama City University (Japan)
University of Sheffield-Hallam (UK)
University of Aberdeen (Scotland)
University of Leicester (UK)
University of Aveiro (Portugal)
University of Bielefeld (Germany)
University of Hull (UK)
Manchester Metropolitan University (UK)
Hong Kong City University (HK China)
Gustavus Adolphus College (Minnesota, US)
University College London (UK)
University of Patras (Greece)
University of Dalhousie (Halifax, Nova Scotia, Canada)
University of Strathclyde (Scotland)
University of Manchester (UK)
AGH University of Science and Technology in Kraków (Poland)
University of Durham (UK)
Queens University Belfast (NI)
University of Surrey (UK)
Politecnico di Torino (Italy)
University of Chemical Technology – Beijing (China)
University of Liverpool (UK)
Florida Southern College (US)
Keio University (Japan)
University of Chiba (Japan)
University of Bolton (UK)
University of Hartford (US)
University of Tel Aviv (Israel)
University of Poitiers (France)
Universidad Complutense de Madrid
Naresuan University (Thailand)
Vietnam National University (Hanoi)
University of Edinburgh (Scotland)
University of Primorska (Slovenia)
Returned due to closure of Chemistry Departments
Hertfordshire University
Exeter University
See also
List of Jewish Nobel laureates
References
Further reading
External links
Harry Kroto personal website
Sir Harold W. Kroto at Florida State University
About Harry Kroto at University of Sheffield
Videos from Vega Science Trust
Harry Kroto, Nobel Luminaries Project, The Museum of the Jewish People at Beit Hatfutsot
1939 births
2016 deaths
Nobel laureates in Chemistry
English Nobel laureates
Jewish Nobel laureates
Alumni of the University of Sheffield
Academics of the University of Sussex
English people of Polish-Jewish descent
English atheists
English chemists
English humanists
English Jews
Jewish atheists
Jewish chemists
Jewish humanists
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Carbon scientists
Knights Bachelor
Florida State University faculty
British nanotechnologists
Recipients of the Copley Medal
People educated at Bolton School
People from Wisbech
Scientists at Bell Labs
Scripps Research
English sceptics
Spectroscopists
Presidents of the Royal Society of Chemistry
Deaths from motor neuron disease in England
Recipients of the Dalton Medal | Harry Kroto | [
"Physics",
"Chemistry"
] | 3,503 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |