| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
12,541 | https://en.wikipedia.org/wiki/Gematria | In numerology, gematria is the practice of assigning a numerical value to a name, word or phrase by reading it as a number, or sometimes by using an alphanumerical cipher. The letters of the alphabets involved have standard numerical values, but a word can yield several values if a cipher is used.
According to Aristotle (384–322 BCE), isopsephy, based on the Milesian numbering of the Greek alphabet developed in the Greek city of Miletus, was part of the Pythagorean tradition, which originated in the 6th century BCE. The first evidence of use of Hebrew letters as numbers dates to 78 BCE; gematria is still used in Jewish culture. Similar systems have been used in other languages and cultures, derived from or inspired by either Greek isopsephy or Hebrew gematria, and include Arabic abjad numerals and English gematria.
The most common form of Hebrew gematria is used in the Talmud and Midrash, and elaborately by many post-Talmudic commentators. It involves reading words and sentences as numbers, assigning numerical instead of phonetic value to each letter of the Hebrew alphabet. When read as numbers, they can be compared and contrasted with other words or phrases – cf. the Hebrew proverb nikhnas yayin yatza sod, i.e. "wine entered, secret went out". The gematric value of yayin ('wine') is 70 (yud = 10; yud = 10; nun = 50) and this is also the gematric value of sod ('secret': samekh = 60; vav = 6; dalet = 4).
Although a type of gematria system ('Aru') was employed by the ancient Babylonian culture, their writing script was logographic, and the numerical assignments they made were to whole words. Aru was very different from the Milesian systems used by Greek and Hebrew cultures, which used alphabetic writing scripts. The value of words with Aru were assigned in an entirely arbitrary manner and correspondences were made through tables, and so cannot be considered a true form of gematria.
Gematria sums can involve single words, or a string of lengthy calculations. A short example of Hebrew numerology that uses gematria is the word chai ('living'), which is composed of two letters, chet (8) and yud (10), that (using the assignments in the table shown below) add up to 18. This has made 18 a "lucky number" among Jews.
In early Jewish sources, the term can also refer to other forms of calculation or letter manipulation, for example atbash.
Etymology
Classical scholars agree that the Hebrew word gematria was derived from the Greek word γεωμετρία geōmetriā, "geometry", though some scholars believe it to derive from Greek γραμματεια grammateia "knowledge of writing". It is likely that both Greek words had an influence on the formation of the Hebrew word. Some hold it to derive from the order of the Greek alphabet, gamma being the third letter of the Greek alphabet ("gamma tria").
The word has been extant in English since at least the 17th century from translations of works by Giovanni Pico della Mirandola. It is largely used in Jewish texts, notably in those associated with the Kabbalah. Neither the concept nor the term appears in the Hebrew Bible itself.
History
The first documented use of gematria is from an Assyrian inscription dating to the 8th century BCE, commissioned by Sargon II. In this inscription, Sargon II states: "the king built the wall of Khorsabad 16,283 cubits long to correspond with the numerical value of his name."
The practice of using alphabetic letters to represent numbers developed in the Greek city of Miletus, and is thus known as the Milesian system. Early examples include vase graffiti dating to the 6th century BCE. Aristotle wrote that the Pythagorean tradition, founded in the 6th century BCE by Pythagoras of Samos, practiced isopsephy, the Greek predecessor of gematria. Pythagoras was a contemporary of the philosophers Anaximander, Anaximenes, and the historian Hecataeus, all of whom lived in Miletus, across the sea from Samos. The Milesian system was in common use by the reign of Alexander the Great (336–323 BCE) and was adopted by other cultures during the subsequent Hellenistic period. It was officially adopted in Egypt during the reign of Ptolemy II Philadelphus (284–246 BCE).
In early biblical texts, numbers were written out in full using Hebrew number words. The first evidence of the use of Hebrew letters as numerals appears during the late Hellenistic period, in 78 BCE. Scholars have identified gematria in the Hebrew Bible, the canon of which was fixed during the Hasmonean dynasty (c. 140 BCE to 37 BCE), though some scholars argue it was not fixed until the second century CE or even later. The Hasmonean king of Judea, Alexander Jannaeus (died 76 BCE) had coins inscribed in Aramaic with the Phoenician alphabet, marking the 20th and 25th years of his reign using the letters K and KE.
Some old Mishnaic texts may preserve very early usage of this number system, but no surviving written documents exist, and some scholars believe these texts were passed down orally and, in the early stages before the Bar Kochba rebellion, were never written down. Gematria is not known to be found in the Dead Sea Scrolls, a vast body of texts from 100 BCE–100 CE, or in any of the documents found from the Bar Kochba revolt circa 150 CE.
According to Proclus in his commentary on the Timaeus of Plato written in the 5th century, the author Theodorus Asaeus from a century earlier interpreted the word "soul" (ψυχή) based on gematria and an inspection of the graphical aspects of the letters that make up the word. According to Proclus, Theodorus learned these methods from the writings of Numenius of Apamea and Amelius. Proclus rejects these methods by appealing to the arguments against them put forth by the Neoplatonic philosopher Iamblichus. The first argument was that some letters have the same numerical value but opposite meaning. His second argument was that the form of letters changes over the years, and so their graphical qualities cannot hold any deeper meaning. Finally, he puts forth the third argument that when one uses all sorts of methods such as addition, subtraction, division, multiplication, and even ratios, the infinite ways in which these can be combined allow virtually any number to be produced to suit any purpose.
Some scholars propose that at least two cases of gematria appear in the New Testament. According to one theory, the reference to the miraculous "catch of 153 fish" in John 21:11 is an application of gematria derived from the name of the spring called 'EGLaIM in Ezekiel 47:10. The appearance of this gematria in John 21:11 has been connected to one of the Dead Sea Scrolls, namely 4Q252, which also applies the same gematria of 153 derived from Ezekiel 47 to state that Noah arrived at Mount Ararat on the 153rd day after the beginning of the flood. Some historians see gematria behind the reference to the number of the name of the Beast in Revelation as 666, which corresponds to the numerical value of the Hebrew transliteration of the Greek name "Neron Kaisar", referring to the 1st century Roman emperor who persecuted the early Christians. Another possible influence on the use of 666 in Revelation goes back to reference to Solomon's intake of 666 talents of gold in 1 Kings 10:14.
Gematria makes several appearances in various Christian and Jewish texts written in the first centuries of the common era. One appearance of gematria in the early Christian period is in the Epistle of Barnabas 9:6–7, which dates to sometime between 70 and 132 CE. There, the 318 servants of Abraham in Genesis 14:14 are used to indicate that Abraham looked forward to the coming of Jesus, since the numerical value of some of the letters in the Greek name for Jesus, together with the 't' representing a symbol for the cross, also equaled 318. Another example is a Christian interpolation in the Sibylline Oracles, where the symbolic significance of the value of 888 (equal to the numerical value of Iesous, the Latinized rendering of the Greek version of Jesus' name) is asserted. Irenaeus also heavily criticized the interpretation of letters by the Gnostic Marcus. Because of their association with Gnosticism and the criticisms of Irenaeus as well as Hippolytus of Rome and Epiphanius of Salamis, this form of interpretation never became popular in Christianity, though it does appear in at least some texts. Another two examples can be found in 3 Baruch, a text that may have been composed by either a Jew or a Christian sometime between the 1st and 3rd centuries. In the first example, a snake is stated to consume a cubit of ocean every day, but is unable to ever finish consuming it, because the oceans are also refilled by 360 rivers. The number 360 is given because the numerical value of the Greek word for snake, δράκων, when transliterated to Hebrew is 360. In a second example, the number of giants stated to have died during the Deluge is 409,000. The Greek word for 'deluge', κατακλυσμός, has a numerical value of 409 when transliterated in Hebrew characters, thus leading the author of 3 Baruch to use it for the number of perished giants.
Gematria is often used in Rabbinic literature. One example is that the numerical value of "The Satan" in Hebrew is 364, and so it was said that the Satan had authority to prosecute Israel for 364 days before his reign ended on the Day of Atonement, an idea which appears in Yoma 20a and Pesikta 7a. Yoma 20a states: "Rami bar Ḥama said: The numerological value of the letters that constitute the word HaSatan is three hundred and sixty four: Heh has a value of five, sin has a value of three hundred, tet has a value of nine, and nun has a value of fifty. Three hundred and sixty-four days of the solar year, which is three hundred and sixty-five days long, Satan has license to prosecute." Genesis 14:14 states that Abraham took 318 of his servants to help him rescue some of his kinsmen, which was taken in Pesikta 70b to be a reference to Eleazar, whose name has a numerical value of 318.
The total value of the letters of the Islamic Basmala, i.e. the phrase Bismillah al-Rahman al-Rahim ("In the name of God, the Most Gracious, the Most Merciful"), according to the standard Abjadi system of numerology, is 786. This number has therefore acquired a significance in folk Islam and Near Eastern folk magic and also appears in many instances of pop-culture, such as its appearance in the 2006 song '786 All is War' by the band Fun-Da-Mental. A recommendation of reciting the basmala 786 times in sequence is recorded in Al-Buni. Sündermann (2006) reports that a contemporary "spiritual healer" from Syria recommends the recitation of the basmala 786 times over a cup of water, which is then to be ingested as medicine. The use of gematria is still pervasive in many parts of Asia and Africa.
Methods of Hebrew gematria
Standard encoding
In standard gematria (mispar hechrechi), each letter is given a numerical value between 1 and 400, as shown in the following table. In mispar gadol, the five final letters are given their own values, ranging from 500 to 900. It is possible that this well-known cipher was used to conceal other more hidden ciphers in Jewish texts. For instance, a scribe may discuss a sum using the 'standard gematria' cipher, but may intend the sum to be checked with a different secret cipher.
A mathematical formula for finding a letter's corresponding number in mispar gadol is:
value(x) = 10^⌊(x − 1)/9⌋ × (((x − 1) mod 9) + 1),
where x is the position of the letter in the language letters index (regular order of letters, with the five final forms counted as positions 23–27), and the floor and modulo functions are used.
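As a quick check of this formula, here is a minimal Python sketch; the 27-position ordering (the 22 letters followed by the five finals) is an assumption spelled out for the example and matches the values in the table above.

```python
def mispar_gadol(position: int) -> int:
    """Value of the letter at 1-based position 1..27:
    positions 1-9 -> 1-9, 10-18 -> 10-90, 19-27 -> 100-900."""
    if not 1 <= position <= 27:
        raise ValueError("position must be between 1 and 27")
    # 10**floor((x-1)/9) gives the magnitude (1, 10 or 100);
    # ((x-1) mod 9) + 1 gives the leading digit (1..9).
    return 10 ** ((position - 1) // 9) * (((position - 1) % 9) + 1)

print([mispar_gadol(x) for x in (1, 10, 19, 22, 23, 27)])  # [1, 10, 100, 400, 500, 900]
```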
Vowels
The value of the Hebrew vowels is not usually counted, but some lesser-known methods include the vowels as well. The most common vowel values are as follows (a less common alternative value, based on the digit sum, is given in parentheses):
Sometimes, the names of the vowels are spelled out and their gematria is calculated using standard methods.
Other methods
There are many different methods used to calculate the numerical value for the individual Hebrew/Aramaic words, phrases or whole sentences. Gematria is the 29th of 32 hermeneutical rules countenanced by the Rabbis of the Talmud for valid aggadic interpretation of the Torah. More advanced methods are usually used for the most significant Biblical verses, prayers, names of God, etc. These methods include:
Mispar hechrachi (absolute value) is the standard method. It assigns the values 1–9, 10–90, 100–400 to the 22 Hebrew letters in order. Sometimes it is also called mispar ha-panim (face number), as opposed to the more complicated mispar ha-akhor (back number).
Mispar gadol (large value) counts the final forms (sofit) of the Hebrew letters as a continuation of the numerical sequence for the alphabet, with the final letters assigned values from 500 to 900. The name mispar gadol is sometimes used for a different method, Otiyot beMilui.
The same name, mispar gadol, is also used for another method, which spells the name of each letter and adds the standard values of the resulting string. For example, the letter aleph is spelled aleph lamed peh, giving it a value of 1 + 30 + 80 = 111.
Mispar katan (small value) calculates the value of each letter, but truncates all of the zeros. It is also sometimes called mispar me'ugal.
Mispar siduri (ordinal value) with each of the 22 letters given a value from 1 to 22.
Mispar bone'eh (building value, also revu'a, square) is calculated by walking over each letter from the beginning to the end, adding the value of all previous letters and the value of the current letter to the running total. Therefore, the value of the word achad (one) is 1 + (1 + 8) + (1 + 8 + 4) = 23.
Mispar kidmi (preceding value) uses each letter as the sum of all the standard gematria letter values preceding it. Therefore, the value of aleph is 1, the value of bet is 1+2=3, the value of gimel is 1+2+3=6, etc. It is also known as mispar meshulash (triangular or tripled number).
Mispar p'rati calculates the value of each letter as the square of its standard gematria value. Therefore, the value of aleph is 1 × 1 = 1, the value of bet is 2 × 2 = 4, the value of gimel is 3 × 3 = 9, etc. It is also known as mispar ha-merubah ha-prati.
Mispar ha-merubah ha-klali is the square of the standard absolute value of each word.
Mispar meshulash calculates the value of each letter as the cube of their standard value. The same term is more often used for mispar kidmi.
Mispar ha-akhor – The value of each letter is its standard value multiplied by the position of the letter in a word or a phrase in either ascending or descending order. This method is particularly interesting, because the result is sensitive to the order of letters. It is also sometimes called mispar meshulash (triangular number).
Mispar mispari spells out the standard values of each letter by their Hebrew names ("achad" (one), spelled aleph chet dalet with the value 1 + 8 + 4 = 13, etc.), and then adds up the standard values of the resulting string.
Otiyot be-milui ("filled letters", also known as mispar gadol or mispar shemi), uses the value of each letter as equal to the value of its name. For example, the value of the letter aleph (spelled aleph lamed peh) is 1 + 30 + 80 = 111, bet (spelled bet yud tav) is 2 + 10 + 400 = 412, etc. Sometimes the same operation is applied two or more times recursively. In a variation known as otiyot pnimiyot (inner letters), the initial letter in the spelled-out name is omitted, thus the value of aleph becomes 30 + 80 = 110.
Mispar ne'elam (hidden number) spells out the name of each letter without the letter itself (e.g., "leph" for aleph) and adds up the value of the resulting string.
Mispar katan mispari (integral reduced value) is used where the total numerical value of a word is reduced to a single digit. If the sum of the value exceeds 9, the integer values of the total are repeatedly added to produce a single-digit number. The same value will be arrived at regardless of whether it is the absolute values, the ordinal values, or the reduced values that are being counted by methods above. For example, the value of the word emet (truth - אֶמֶת) is aleph + mem + tav = 1 + 40 + 400 = 441; adding the digits gives 4 + 4 + 1 = 9, so the reduced value of emet is 9.
Mispar musafi adds the number of the letters in the word or phrase to their gematria.
Kolel is the number of words, which is often added to the gematria. In case of one word, the standard value is incremented by one.
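To show how a few of the methods above differ in practice, here is a minimal Python sketch. The small transliterated letter-value table and the sample word are assumptions added for the demonstration; real calculations use the Hebrew spellings discussed above.

```python
# Standard (mispar hechrechi) letter values, keyed by transliterated names;
# this partial table and the sample word are assumptions made for the demo.
STANDARD = {"aleph": 1, "chet": 8, "dalet": 4, "yud": 10, "mem": 40, "nun": 50, "tav": 400}

def hechrechi(letters):   # absolute value: sum of the standard values
    return sum(STANDARD[l] for l in letters)

def katan(letters):       # small value: drop the trailing zeros of each letter value
    return sum(int(str(STANDARD[l]).rstrip("0")) for l in letters)

def boneeh(letters):      # building value: running total of the prefix sums
    total, prefix = 0, 0
    for l in letters:
        prefix += STANDARD[l]
        total += prefix
    return total

word = ["aleph", "chet", "dalet"]   # achad, "one"
print(hechrechi(word), katan(word), boneeh(word))   # 13 13 23
```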
Related transformations
Within the wider topic of gematria are included the various alphabet transformations, where one letter is substituted by another based on a logical scheme:
Atbash exchanges each letter in a word or a phrase by opposite letters. Opposite letters are determined by substituting the first letter of the Hebrew alphabet (aleph) with the last letter (tav), the second letter (bet) with the next to last (shin), etc. The result can be interpreted as a secret message or calculated by the standard gematria methods. A few instances of atbash are found already in the Hebrew Bible. For example, see Jeremiah 25:26, and 51:41, with Targum and Rashi, in which the name ששך ("Sheshek") is thought to represent בבל (Babylon).
Albam – the alphabet is divided in half, eleven letters in each section. The first letter of the first series is exchanged for the first letter of the second series, the second letter of the first series for the second letter of the second series, and so forth.
Achbi divides the alphabet into two equal groups of 11 letters. Within each group, the first letter is replaced by the last, the second by the 10th, etc.
Ayak bakar replaces each letter by another one that has a 10-times-greater value. The final letters usually signify the numbers from 500 to 900. Thousands are reduced to ones (1,000 becomes 1, 2,000 becomes 2, etc.)
Ofanim replaces each letter by the last letter of its name (e.g. peh for aleph).
Akhas beta divides the alphabet into three groups of 7, 7 and 8 letters. Each letter is replaced cyclically by the corresponding letter of the next group. The letter Tav remains the same.
Avgad replaces each letter by the next one. Tav becomes aleph. The opposite operation is also used.
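The substitution schemes above are easy to express as index arithmetic on the 22-letter alphabet. The Python sketch below is a schematic illustration using 0-based letter indices rather than Hebrew letters (an assumption made for readability): atbash mirrors the alphabet, albam swaps the two halves, and avgad shifts by one.

```python
ALPHABET_SIZE = 22  # the 22 letters of the Hebrew alphabet, indexed 0..21

def atbash(i: int) -> int:
    # first letter <-> last letter, second <-> next-to-last, and so on
    return ALPHABET_SIZE - 1 - i

def albam(i: int) -> int:
    # exchange the first eleven letters with the last eleven, position by position
    return (i + 11) % ALPHABET_SIZE

def avgad(i: int) -> int:
    # replace each letter by the next one; the last letter wraps to the first
    return (i + 1) % ALPHABET_SIZE

# Under atbash the first letter (aleph, index 0) maps to the last (tav, index 21).
print(atbash(0), albam(0), avgad(21))  # 21 11 0
```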
Most of the above-mentioned methods and ciphers are listed by Rabbi Moshe Cordovero.
Some authors provide lists of as many as 231 various replacement ciphers, related to the 231 mystical Gates of the Sefer Yetzirah.
Dozens of other far more advanced methods are used in Kabbalistic literature, without any particular names. In Ms. Oxford 1,822, one article lists 75 different forms of gematria. Some known methods are recursive in nature and are reminiscent of graph theory or make a lot of use of combinatorics. Rabbi Elazar Rokeach (born c. 1176 – died 1238) often used multiplication, instead of addition, for the above-mentioned methods. For example, spelling out the letters of a word and then multiplying the squares of each letter value in the resulting string produces very large numbers, on the order of trillions. The spelling process can be applied recursively, until a certain pattern (e.g., all the letters of the word "Talmud") is found; the gematria of the resulting string is then calculated. The same author also used the sums of all possible unique letter combinations, which add up to the value of a given letter. For example, the letter Hei, which has the standard value of 5, can be produced by combining 1 + 4, 2 + 3, 1 + 1 + 3, 1 + 2 + 2, 1 + 1 + 1 + 2, or 1 + 1 + 1 + 1 + 1, each of which adds up to 5. Sometimes combinations of repeating letters are not allowed (e.g., 1 + 4 is valid, but 1 + 1 + 3 is not). The original letter itself can also be viewed as a valid combination.
Variant spellings of some letters can be used to produce sets of different numbers, which can be added up or analyzed separately. Many various complex formal systems and recursive algorithms, based on graph-like structural analysis of the letter names and their relations to each other, modular arithmetic, pattern search and other highly advanced techniques, are found in the "Sefer ha-Malchut" by Rabbi David ha-Levi of the Draa Valley, a Spanish-Moroccan Kabbalist of the 15th–16th century. Rabbi David ha-Levi's methods also consider the numerical values and other properties of the vowels.
Kabbalistic astrology uses some specific methods to determine the astrological influences on a particular person. According to one method, the gematria of the person's name is added to the gematria of his or her mother's name; the result is then divided by 7 and 12. The remainders signify a particular planet and Zodiac sign.
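A minimal Python sketch of the arithmetic just described; the example name values are placeholders, and mapping the remainders to a particular planet or sign would require the traditional correspondence tables, which are not given here.

```python
def astrological_remainders(name_value: int, mother_name_value: int):
    """Add the two gematria values and return the remainders mod 7 and mod 12,
    which index a planet and a zodiac sign in the traditional tables."""
    total = name_value + mother_name_value
    return total % 7, total % 12

# Hypothetical gematria values, purely for illustration:
planet_index, zodiac_index = astrological_remainders(248, 341)
print(planet_index, zodiac_index)  # 1 1
```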
Transliterated Hebrew
Historically, hermetic and esoteric groups of the 19th and 20th centuries in the UK and in France used a transliterated Hebrew cipher with the Latin alphabet. In particular, the transliterated cipher was taught to members of the Hermetic Order of the Golden Dawn. In 1887, S.L. MacGregor Mathers, who was one of the order's founders, published the transliterated cipher in The Kabbalah Unveiled in the Mathers table.
As a former member of the Golden Dawn, Aleister Crowley used the transliterated cipher extensively in his writings for his two magical orders the A∴A∴ and Ordo Templi Orientis (O.T.O). Many other occult authors belonging to various esoteric groups have either mentioned the cipher or published it in their books, including Paul Foster Case of the Builders of the Adytum (B.O.T.A).
Use in non-Semitic languages
Greek
According to Aristotle (384–322 BCE), isopsephy, an early Milesian system using the Greek alphabet, was part of the Pythagorean tradition, which originated in the 6th century BCE.
Plato (c. 427–347 BCE) offers a discussion in the Cratylus, involving a view of words and names as referring (more or less accurately) to the "essential nature" of a person or object and that this view may have influenced—and is central to—isopsephy.
A sample of graffiti at Pompeii (destroyed under volcanic ash in 79 CE) reads "I love the girl whose name is phi mu epsilon (545)".
Other examples of use in Greek come primarily from the Christian literature. Davies and Allison state that, unlike rabbinic sources, isopsephy is always explicitly stated as being used.
Latin
During the Renaissance, systems of gematria were devised for the Classical Latin alphabet. There were a number of variations of these which were popular in Europe.
In 1525, Christoph Rudolff included a Classical Latin gematria in his work Nimble and beautiful calculation via the artful rules of algebra [which] are so commonly called "coss":
A=1 B=2 C=3 D=4 E=5 F=6 G=7 H=8 I=9 K=10 L=11 M=12
N=13 O=14 P=15 Q=16 R=17 S=18 T=19 U=20 W=21 X=22 Y=23 Z=24
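Since Rudolff's table is given in full above, it can be applied directly. Here is a minimal Python sketch; the example word is an arbitrary choice for illustration.

```python
# Christoph Rudolff's 1525 Latin gematria: A=1 ... Z=24 (the table has no J or V).
RUDOLFF = {c: i + 1 for i, c in enumerate("ABCDEFGHIKLMNOPQRSTUWXYZ")}

def latin_gematria(word: str) -> int:
    # letters absent from the table are simply skipped
    return sum(RUDOLFF[c] for c in word.upper() if c in RUDOLFF)

print(latin_gematria("ROMA"))  # R=17, O=14, M=12, A=1 -> 44
```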
At the beginning of the Apocalypsis in Apocalypsin (1532), the German monk Michael Stifel (also known as Steifel) describes the natural order and trigonal number alphabets, claiming to have invented the latter. He used the trigonal alphabet to interpret the prophecy in the Biblical Book of Revelation, and predicted the world would end at 8am on October 19, 1533. The official Lutheran reaction to Steifel's prophecy shows that this type of activity was not welcome. Belief in the power of numbers was unacceptable in reformed circles, and gematria was not part of the reformation agenda.
An analogue of the Greek system of isopsephy using the Latin alphabet appeared in 1583, in the works of the French poet Étienne Tabourot. This cipher and variations of it were published or referred to in the major work of the Italian Pietro Bongo, Numerorum Mysteria; in a 1651 work by Georg Philipp Harsdörffer; by Athanasius Kircher in 1665; and in a 1683 volume of Cabbalologia by Johann Henning, where it was simply referred to as the 1683 alphabet. It was mentioned in The European Helicon or Muse Mountain in 1704, and it was also called the Alphabetum Cabbalisticum Vulgare in Die verliebte und galante Welt by Christian Friedrich Hunold in 1707. It was used by Leo Tolstoy in his 1865 work War and Peace to identify Napoleon with the number of the Beast.
English
English Qabalah refers to several different systems of mysticism related to Hermetic Qabalah that interpret the letters of the English alphabet via an assigned set of numerological significances. The first system of English gematria was used by the poet John Skelton in 1523 in his poem "The Garland of Laurel".
The Agrippa code was used with English as well as Latin. It was defined by Heinrich Cornelius Agrippa in 1532, in his work De Occulta Philosophia. Agrippa based his system on the order of the Classical Latin alphabet using a ranked valuation as in isopsephy, appending the four additional letters in use at the time after Z, including J (600) and U (700), which were still considered letter variants. Agrippa was the mentor of Welsh magician John Dee, who makes reference to the Agrippa code in Theorem XVI of his 1564 book, Monas Hieroglyphica.
Although Aleister Crowley, as a former Adept of the Golden Dawn, used a transliterated approach to gematria in his works, since Crowley's death a number of people have proposed numerical correspondences for English gematria in order to achieve a deeper understanding of Crowley's The Book of the Law (1904). One such system, the English Qaballa, was discovered by English magician James Lees on November 26, 1976. The founding of Lees' magical order in 1974 and his discovery of EQ are chronicled in All This and a Book by Cath Thompson.
See also
About the Mystery of the Letters
Bible code
Chinese numerology
Chronogram
Goroawase
Hurufism
Isopsephy
'Ilm al-huruf
Katapayadi system
Notarikon
Numbers in Germanic paganism
Numerology
Roman numerals
Significance of numbers in Judaism
Temurah (Kabbalah)
Theomatics
Untranslatability
References
Further reading
Hebrew alphabet
Jewish mysticism
Kabbalah
Kabbalistic words and phrases
Numerology | Gematria | [
"Mathematics"
] | 5,938 | [
"Numerology",
"Mathematical objects",
"Numbers"
] |
12,543 | https://en.wikipedia.org/wiki/Groupoid | In mathematics, especially in category theory and homotopy theory, a groupoid (less often Brandt groupoid or virtual group) generalises the notion of group in several equivalent ways. A groupoid can be seen as a:
Group with a partial function replacing the binary operation;
Category in which every morphism is invertible. A category of this sort can be viewed as augmented with a unary operation on the morphisms, called inverse by analogy with group theory. A groupoid where there is only one object is a usual group.
In the presence of dependent typing, a category in general can be viewed as a typed monoid, and similarly, a groupoid can be viewed as simply a typed group. The morphisms take one from one object to another, and form a dependent family of types; thus morphisms might be typed g : A → B, h : B → C, say. Composition is then a total function ∘ : (B → C) → (A → B) → (A → C), so that h ∘ g : A → C.
Special cases include:
Setoids: sets that come with an equivalence relation,
G-sets: sets equipped with an action of a group G.
Groupoids are often used to reason about geometrical objects such as manifolds. Heinrich Brandt (1927) introduced groupoids implicitly via Brandt semigroups.
Definitions
Algebraic
A groupoid can be viewed as an algebraic structure consisting of a set G with a binary partial function ∗.
Precisely, it is a non-empty set G with a unary operation −1 : G → G, and a partial function ∗ : G × G ⇀ G. Here ∗ is not a binary operation because it is not necessarily defined for all pairs of elements of G. The precise conditions under which ∗ is defined are not articulated here and vary by situation.
The operations ∗ and −1 have the following axiomatic properties: For all a, b, and c in G,
Associativity: If a ∗ b and b ∗ c are defined, then (a ∗ b) ∗ c and a ∗ (b ∗ c) are defined and are equal. Conversely, if one of (a ∗ b) ∗ c or a ∗ (b ∗ c) is defined, then they are both defined (and they are equal to each other), and a ∗ b and b ∗ c are also defined.
Inverse: a−1 ∗ a and a ∗ a−1 are always defined.
Identity: If a ∗ b is defined, then a ∗ b ∗ b−1 = a, and a−1 ∗ a ∗ b = b. (The previous two axioms already show that these expressions are defined and unambiguous.)
Two easy and convenient properties follow from these axioms:
(a−1)−1 = a,
If a ∗ b is defined, then (a ∗ b)−1 = b−1 ∗ a−1.
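To make the algebraic definition concrete, here is a minimal Python sketch of the pair groupoid on a small set (one arrow per ordered pair, with composition defined only when the endpoints match); representing arrows as (source, target) pairs is an assumption chosen for the example.

```python
from itertools import product

X = [0, 1, 2]
G = list(product(X, X))   # the pair groupoid on X: one arrow (source, target) per ordered pair

def star(a, b):
    """Partial operation a * b: defined only when a's target equals b's source."""
    return (a[0], b[1]) if a[1] == b[0] else None

def inv(a):
    return (a[1], a[0])

# Inverse axiom: a^-1 * a and a * a^-1 are always defined.
assert all(star(inv(a), a) is not None and star(a, inv(a)) is not None for a in G)

# Identity axiom: whenever a * b is defined, a * b * b^-1 = a and a^-1 * a * b = b.
for a, b in product(G, repeat=2):
    if star(a, b) is not None:
        assert star(star(a, b), inv(b)) == a
        assert star(inv(a), star(a, b)) == b

print("groupoid axioms hold for the pair groupoid on", X)
```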
Category-theoretic
A groupoid is a small category in which every morphism is an isomorphism, i.e., invertible. More explicitly, a groupoid is a set of objects with
for each pair of objects x and y, a (possibly empty) set G(x,y) of morphisms (or arrows) from x to y; we write f : x → y to indicate that f is an element of G(x,y);
for every object x, a designated element id_x of G(x, x);
for each triple of objects x, y, and z, a function comp_{x,y,z} : G(y,z) × G(x,y) → G(x,z), written (g, f) ↦ gf;
for each pair of objects x, y, a function inv : G(x,y) → G(y,x), written f ↦ f−1, satisfying, for any f : x → y, g : y → z, and h : z → w:
f id_x = f and id_y f = f;
(hg)f = h(gf);
f f−1 = id_y and f−1 f = id_x.
If f is an element of G(x,y), then x is called the source of f, written s(f), and y is called the target of f, written t(f).
A groupoid G is sometimes denoted as G1 ⇉ G0, where G1 is the set of all morphisms, G0 the set of objects, and the two arrows represent the source and the target.
More generally, one can consider a groupoid object in an arbitrary category admitting finite fiber products.
Comparing the definitions
The algebraic and category-theoretic definitions are equivalent, as we now show. Given a groupoid in the category-theoretic sense, let G be the disjoint union of all of the sets G(x,y) (i.e. the sets of morphisms from x to y). Then comp and inv become partial operations on G, and inv will in fact be defined everywhere. We define ∗ to be comp and −1 to be inv, which gives a groupoid in the algebraic sense. Explicit reference to G0 (and hence to id) can be dropped.
Conversely, given a groupoid G in the algebraic sense, define an equivalence relation ~ on its elements by
a ~ b iff a ∗ a−1 = b ∗ b−1. Let G0 be the set of equivalence classes of ~, i.e. G0 := G/~. Denote a ∗ a−1 by 1_x if a ∈ x with x ∈ G0.
Now define G(x, y) as the set of all elements f such that 1_y ∗ f ∗ 1_x exists. Given f ∈ G(x, y) and g ∈ G(y, z), their composite is defined as g ∘ f := g ∗ f ∈ G(x, z). To see that this is well defined, observe that since (1_z ∗ g) ∗ 1_y and 1_y ∗ (f ∗ 1_x) exist, so does (1_z ∗ g ∗ 1_y) ∗ (f ∗ 1_x) = g ∗ f. The identity morphism on x is then 1_x, and the category-theoretic inverse of f is f−1.
Sets in the definitions above may be replaced with classes, as is generally the case in category theory.
Vertex groups and orbits
Given a groupoid G, the vertex groups or isotropy groups or object groups in G are the subsets of the form G(x,x), where x is any object of G. It follows easily from the axioms above that these are indeed groups, as every pair of elements is composable and inverses are in the same vertex group.
The orbit of a groupoid G at a point x is given by the set s(t−1(x)) containing every point that can be joined to x by a morphism in G. If two points x and y are in the same orbit, their vertex groups G(x) and G(y) are isomorphic: if f is any morphism from x to y, then the isomorphism is given by the mapping g ↦ f g f−1.
Orbits form a partition of the set X, and a groupoid is called transitive if it has only one orbit (equivalently, if it is connected as a category). In that case, all the vertex groups are isomorphic (on the other hand, this is not a sufficient condition for transitivity; see the section below for counterexamples).
Subgroupoids and morphisms
A subgroupoid of G ⇉ X is a subcategory H ⇉ Y that is itself a groupoid. It is called wide or full if it is wide or full as a subcategory, i.e., respectively, if Y = X or H(x, y) = G(x, y) for every x, y in Y.
A groupoid morphism is simply a functor between two (category-theoretic) groupoids.
Particular kinds of morphisms of groupoids are of interest. A morphism p : E → B of groupoids is called a fibration if for each object x of E and each morphism b of B starting at p(x) there is a morphism e of E starting at x such that p(e) = b. A fibration is called a covering morphism or covering of groupoids if further such an e is unique. The covering morphisms of groupoids are especially useful because they can be used to model covering maps of spaces.
It is also true that the category of covering morphisms of a given groupoid is equivalent to the category of actions of the groupoid on sets.
Examples
Topology
Given a topological space X, let G0 be the set X. The morphisms from the point p to the point q are equivalence classes of continuous paths from p to q, with two paths being equivalent if they are homotopic.
Two such morphisms are composed by first following the first path, then the second; the homotopy equivalence guarantees that this composition is associative. This groupoid is called the fundamental groupoid of X, denoted π1(X) (or sometimes, Π1(X)). The usual fundamental group π1(X, x) is then the vertex group for the point x.
The orbits of the fundamental groupoid π1(X) are the path-connected components of X. Accordingly, the fundamental groupoid of a path-connected space is transitive, and we recover the known fact that the fundamental groups at any base point are isomorphic. Moreover, in this case, the fundamental groupoid and the fundamental groups are equivalent as categories (see the section below for the general theory).
An important extension of this idea is to consider the fundamental groupoid π1(X, A), where A ⊆ X is a chosen set of "base points". Here π1(X, A) is a (full) subgroupoid of π1(X), where one considers only paths whose endpoints belong to A. The set A may be chosen according to the geometry of the situation at hand.
Equivalence relation
If X is a setoid, i.e. a set with an equivalence relation ~, then a groupoid "representing" this equivalence relation can be formed as follows:
The objects of the groupoid are the elements of X;
For any two elements x and y in X, there is a single morphism from x to y (denoted (y, x)) if and only if x ~ y;
The composition of (z, y) and (y, x) is (z, x).
The vertex groups of this groupoid are always trivial; moreover, this groupoid is in general not transitive and its orbits are precisely the equivalence classes. There are two extreme examples:
If every element of X is in relation with every other element of X, we obtain the pair groupoid of X, which has the entire X × X as set of arrows, and which is transitive.
If every element of X is only in relation with itself, one obtains the unit groupoid, which has X as set of arrows (s = t = id_X), and which is completely intransitive (every singleton {x} is an orbit).
Examples
If f : X0 → Y is a smooth surjective submersion of smooth manifolds, then X0 ×_Y X0 ⊆ X0 × X0 is an equivalence relation, since Y has a topology isomorphic to the quotient topology of X0 under the surjective map of topological spaces. If we write X1 = X0 ×_Y X0, then we get a groupoid X1 ⇉ X0, which is sometimes called the banal groupoid of a surjective submersion of smooth manifolds.
If we relax the reflexivity requirement and consider partial equivalence relations, then it becomes possible to consider semidecidable notions of equivalence on computable realisers for sets. This allows groupoids to be used as a computable approximation to set theory, called PER models. Considered as a category, PER models are a cartesian closed category with natural numbers object and subobject classifier, giving rise to the effective topos introduced by Martin Hyland.
Čech groupoid
A Čech groupoid is a special kind of groupoid associated to the equivalence relation given by an open cover U = {U_i} of some manifold X. Its objects are given by the disjoint union
G0 = ⊔_i U_i, and its arrows are the intersections G1 = ⊔_{ij} U_ij, where U_ij = U_i ∩ U_j.
The source and target maps are then given by the induced maps s : U_ij → U_j and t : U_ij → U_i, together with the inclusion map U_i → U_ii, giving the structure of a groupoid. In fact, this can be further extended by setting G_n = G_1 ×_{G_0} ⋯ ×_{G_0} G_1 as the n-iterated fiber product, where G_n represents n-tuples of composable arrows. The structure map of the fiber product is implicitly the target map, since the diagram of the U_ijk over the U_ij and U_jk is a cartesian diagram where the maps to U_j are the target maps. This construction can be seen as a model for some ∞-groupoids. Also, another artifact of this construction is that k-cocycles for some constant sheaf of abelian groups A can be represented as a function σ : ⊔ U_{i_0 ⋯ i_k} → A, giving an explicit representation of cohomology classes.
Group action
If the group G acts on the set X, then we can form the action groupoid (or transformation groupoid) representing this group action as follows:
The objects are the elements of X;
For any two elements x and y in X, the morphisms from x to y correspond to the elements g of G such that g·x = y;
Composition of morphisms interprets the binary operation of G.
More explicitly, the action groupoid is a small category with Ob(C) = X and Hom(C) = G × X, and with source and target maps s(g, x) = x and t(g, x) = g·x. It is often denoted G ⋉ X (or X ⋊ G for a right action). Multiplication (or composition) in the groupoid is then (h, y)(g, x) = (hg, x), which is defined provided y = g·x.
For x in X, the vertex group consists of those (g, x) with g·x = x, which is just the isotropy subgroup at x for the given action (which is why vertex groups are also called isotropy groups). Similarly, the orbits of the action groupoid are the orbits of the group action, and the groupoid is transitive if and only if the group action is transitive.
Another way to describe G-sets is the functor category [Gr, Set], where Gr is the groupoid (category) with one object and isomorphic to the group G. Indeed, every functor F of this category defines a set X = F(Gr) and, for every g in G (i.e. for every morphism in Gr), induces a bijection F_g : X → X. The categorical structure of the functor F assures us that F defines a G-action on the set X. The (unique) representable functor F : Gr → Set is the Cayley representation of G. In fact, this functor is isomorphic to Hom(Gr, –) and so sends the object of Gr to the set Hom(Gr, Gr), which is by definition the "set" G, and each morphism g of Gr (i.e. each element g of G) to the permutation F_g of the set G. We deduce from the Yoneda embedding that the group G is isomorphic to the group {F_g | g ∈ G}, a subgroup of the group of permutations of G.
Finite set
Consider the group action of Z/2 on the finite set X = {−2, −1, 0, 1, 2} that takes each number to its negative, so −2 ↦ 2 and 1 ↦ −1. The quotient groupoid [X/G] is the set of equivalence classes from this group action, {[0], [1], [2]}, and [0] has a group action of Z/2 on it.
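A small Python sketch of this example, computing the orbits and vertex (isotropy) groups of the Z/2 action on {−2, −1, 0, 1, 2} by negation; representing the two group elements as the multipliers +1 and −1 is an assumption made for the illustration.

```python
X = [-2, -1, 0, 1, 2]
GROUP = [1, -1]                      # Z/2 acting by multiplication: g.x = g * x

# Arrows of the action groupoid: pairs (g, x), with source x and target g * x.
arrows = [(g, x) for g in GROUP for x in X]
print(len(arrows))                   # 10 arrows = |G| * |X|

def orbit(x):
    return {g * x for g in GROUP}

def vertex_group(x):
    """Isotropy subgroup at x: the elements g with g * x = x."""
    return [g for g in GROUP if g * x == x]

print({frozenset(orbit(x)) for x in X})   # three orbits: {0}, {-1, 1}, {-2, 2}
print(vertex_group(0), vertex_group(1))   # [1, -1] (all of Z/2) and [1] (trivial)
```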
Quotient variety
Any finite group G that maps to GL(n) gives a group action on the affine space A^n (since this is the group of automorphisms). Then, a quotient groupoid can be of the form [A^n/G], which has one point with stabilizer G at the origin. Examples like these form the basis for the theory of orbifolds. Another commonly studied family of orbifolds are weighted projective spaces and subspaces of them, such as Calabi–Yau orbifolds.
Fiber product of groupoids
Given a diagram of groupoids X → Z ← Y with groupoid morphisms
f : X → Z and g : Y → Z, we can form the groupoid X ×_Z Y whose objects are triples (x, ϕ, y), where x ∈ Ob(X), y ∈ Ob(Y), and ϕ : f(x) → g(y) is a morphism in Z. Morphisms can be defined as a pair of morphisms (α, β), where α : x → x′ and β : y → y′,
such that for triples (x, ϕ, y) and (x′, ϕ′, y′), there is a commutative square in Z formed by f(α), g(β), and the morphisms ϕ, ϕ′ (i.e. ϕ′ ∘ f(α) = g(β) ∘ ϕ).
Homological algebra
A two-term complex d : C1 → C0
of objects in a concrete Abelian category can be used to form a groupoid. It has as objects the set C0 and as arrows the set C1 ⊕ C0; the source morphism is just the projection onto C0, while the target morphism is the addition of the projection onto C1 composed with d and the projection onto C0. That is, given c1 + c0, we have t(c1 + c0) = d(c1) + c0.
Of course, if the abelian category is the category of coherent sheaves on a scheme, then this construction can be used to form a presheaf of groupoids.
Puzzles
While puzzles such as the Rubik's Cube can be modeled using group theory (see Rubik's Cube group), certain puzzles are better modeled as groupoids.
The transformations of the fifteen puzzle form a groupoid (not a group, as not all moves can be composed). This groupoid acts on configurations.
Mathieu groupoid
The Mathieu groupoid is a groupoid introduced by John Horton Conway acting on 13 points such that the elements fixing a point form a copy of the Mathieu group M12.
Relation to groups
If a groupoid has only one object, then the set of its morphisms forms a group. Using the algebraic definition, such a groupoid is literally just a group. Many concepts of group theory generalize to groupoids, with the notion of functor replacing that of group homomorphism.
Every transitive/connected groupoid - that is, as explained above, one in which any two objects are connected by at least one morphism - is isomorphic to an action groupoid (as defined above). By transitivity, there will only be one orbit under the action.
Note that the isomorphism just mentioned is not unique, and there is no natural choice. Choosing such an isomorphism for a transitive groupoid essentially amounts to picking one object x0, a group isomorphism from G(x0) to G, and for each x other than x0, a morphism in G from x0 to x.
If a groupoid is not transitive, then it is isomorphic to a disjoint union of groupoids of the above type, also called its connected components (possibly with different groups and sets for each connected component).
In category-theoretic terms, each connected component of a groupoid is equivalent (but not isomorphic) to a groupoid with a single object, that is, a single group. Thus any groupoid is equivalent to a multiset of unrelated groups. In other words, for equivalence instead of isomorphism, one does not need to specify the sets X, but only the groups G. For example,
The fundamental groupoid of X is equivalent to the collection of the fundamental groups of each path-connected component of X, but an isomorphism requires specifying the set of points in each component;
The set X with the equivalence relation ~ is equivalent (as a groupoid) to one copy of the trivial group for each equivalence class, but an isomorphism requires specifying what each equivalence class is;
The set X equipped with an action of the group G is equivalent (as a groupoid) to one copy of G for each orbit of the action, but an isomorphism requires specifying what set each orbit is.
The collapse of a groupoid into a mere collection of groups loses some information, even from a category-theoretic point of view, because it is not natural. Thus when groupoids arise in terms of other structures, as in the above examples, it can be helpful to maintain the entire groupoid. Otherwise, one must choose a way to view each component in terms of a single group, and this choice can be arbitrary. In the example from topology, one would have to make a coherent choice of paths (or equivalence classes of paths) from each point to each other point in the same path-connected component.
As a more illuminating example, the classification of groupoids with one endomorphism does not reduce to purely group theoretic considerations. This is analogous to the fact that the classification of vector spaces with one endomorphism is nontrivial.
Morphisms of groupoids come in more kinds than those of groups: we have, for example, fibrations, covering morphisms, universal morphisms, and quotient morphisms. Thus a subgroup H of a group G yields an action of G on the set of cosets of H in G and hence a covering morphism p from, say, K to G, where K is a groupoid with vertex groups isomorphic to H. In this way, presentations of the group G can be "lifted" to presentations of the groupoid K, and this is a useful way of obtaining information about presentations of the subgroup H. For further information, see the books by Higgins and by Brown in the References.
Category of groupoids
The category whose objects are groupoids and whose morphisms are groupoid morphisms is called the groupoid category, or the category of groupoids, and is denoted by Grpd.
The category Grpd is, like the category of small categories, Cartesian closed: for any groupoids H, K we can construct a groupoid GPD(H, K) whose objects are the morphisms H → K and whose arrows are the natural equivalences of morphisms. Thus if H, K are just groups, then such arrows are the conjugacies of morphisms. The main result is that for any groupoids G, H, K there is a natural bijection Grpd(G × H, K) ≅ Grpd(G, GPD(H, K)).
This result is of interest even if all the groupoids are just groups.
Another important property of Grpd is that it is both complete and cocomplete.
Relation to Cat
The inclusion i : Grpd → Cat has both a left and a right adjoint: hom_Grpd(C[C−1], G) ≅ hom_Cat(C, i(G)), and hom_Cat(i(G), C) ≅ hom_Grpd(G, Core(C)).
Here, C[C−1] denotes the localization of a category that inverts every morphism, and Core(C) denotes the subcategory of all isomorphisms.
Relation to sSet
The nerve functor embeds Grpd as a full subcategory of the category of simplicial sets. The nerve of a groupoid is always a Kan complex.
The nerve has a left adjoint: hom_Grpd(π1(X), G) ≅ hom_sSet(X, N(G)).
Here, π1(X) denotes the fundamental groupoid of the simplicial set X.
Groupoids in Grpd
There is an additional structure which can be derived from groupoids internal to the category of groupoids: double groupoids. Because Grpd is a 2-category, these objects form a 2-category instead of a 1-category since there is extra structure. Essentially, these are groupoids G1, G0 with functors s, t : G1 → G0 and an embedding given by an identity functor i : G0 → G1. One way to think about these 2-groupoids is that they contain objects, morphisms, and squares which can compose together vertically and horizontally. For example, given two squares that share a common morphism, they can be vertically conjoined, giving a diagram which can be converted into another square by composing the vertical arrows. There is a similar composition law for horizontal attachments of squares.
Groupoids with geometric structures
When studying geometrical objects, the arising groupoids often carry a topology, turning them into topological groupoids, or even some differentiable structure, turning them into Lie groupoids. These last objects can be also studied in terms of their associated Lie algebroids, in analogy to the relation between Lie groups and Lie algebras.
Groupoids arising from geometry often possess further structures which interact with the groupoid multiplication. For instance, in Poisson geometry one has the notion of a symplectic groupoid, which is a Lie groupoid endowed with a compatible symplectic form. Similarly, one can have groupoids with a compatible Riemannian metric, or complex structure, etc.
See also
∞-groupoid
2-group
Homotopy type theory
Inverse category
Groupoid algebra (not to be confused with algebraic groupoid)
R-algebroid
Notes
References
Brown, Ronald, 1987, "From groups to groupoids: a brief survey", Bull. London Math. Soc. 19: 113–34. Reviews the history of groupoids up to 1987, starting with the work of Brandt on quadratic forms. The downloadable version updates the many references.
—, 2006. Topology and groupoids. Booksurge. Revised and extended edition of a book previously published in 1968 and 1988. Groupoids are introduced in the context of their topological application.
—, Higher dimensional group theory. Explains how the groupoid concept has led to higher-dimensional homotopy groupoids, having applications in homotopy theory and in group cohomology. Many references.
F. Borceux, G. Janelidze, 2001, Galois theories. Cambridge Univ. Press. Shows how generalisations of Galois theory lead to Galois groupoids.
Cannas da Silva, A., and A. Weinstein, Geometric Models for Noncommutative Algebras. Especially Part VI.
Golubitsky, M., Ian Stewart, 2006, "Nonlinear dynamics of networks: the groupoid formalism", Bull. Amer. Math. Soc. 43: 305–64
Higgins, P. J., "The fundamental groupoid of a graph of groups", J. London Math. Soc. (2) 13 (1976) 145–149.
Higgins, P. J. and Taylor, J., "The fundamental groupoid and the homotopy crossed complex of an orbit space", in Category theory (Gummersbach, 1981), Lecture Notes in Math., Volume 962. Springer, Berlin (1982), 115–122.
Higgins, P. J., 1971. Categories and groupoids. Van Nostrand Notes in Mathematics. Republished in Reprints in Theory and Applications of Categories, No. 7 (2005) pp. 1–195; freely downloadable. Substantial introduction to category theory with special emphasis on groupoids. Presents applications of groupoids in group theory, for example to a generalisation of Grushko's theorem, and in topology, e.g. fundamental groupoid.
Mackenzie, K. C. H., 2005. General theory of Lie groupoids and Lie algebroids. Cambridge Univ. Press.
Weinstein, Alan, "Groupoids: unifying internal and external symmetry – A tour through some examples". Also available in Postscript, Notices of the AMS, July 1996, pp. 744–752.
Weinstein, Alan, "The Geometry of Momentum" (2002)
R.T. Zivaljevic. "Groupoids in combinatorics—applications of a theory of local symmetries". In Algebraic and geometric combinatorics, volume 423 of Contemp. Math., 305–324. Amer. Math. Soc., Providence, RI (2006)
Algebraic structures
Category theory
Homotopy theory | Groupoid | [
"Mathematics"
] | 4,952 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Algebraic structures",
"Category theory"
] |
12,557 | https://en.wikipedia.org/wiki/Gilles%20Deleuze | Gilles Louis René Deleuze ( ; ; 18 January 1925 – 4 November 1995) was a French philosopher who, from the early 1950s until his death in 1995, wrote on philosophy, literature, film, and fine art. His most popular works were the two volumes of Capitalism and Schizophrenia: Anti-Oedipus (1972) and A Thousand Plateaus (1980), both co-written with psychoanalyst Félix Guattari. His metaphysical treatise Difference and Repetition (1968) is considered by many scholars to be his magnum opus.
An important part of Deleuze's oeuvre is devoted to the reading of other philosophers: the Stoics, Leibniz, Hume, Kant, Nietzsche, Spinoza, and Bergson. A. W. Moore, citing Bernard Williams's criteria for a great thinker, ranks Deleuze among the "greatest philosophers". Although he once characterized himself as a "pure metaphysician", his work has influenced a variety of disciplines across the humanities, including philosophy, art, and literary theory, as well as movements such as post-structuralism and postmodernism.
Life
Early life
Gilles Deleuze was born into a middle-class family in Paris and lived there for most of his life. His mother was Odette Camaüer and his father, Louis, was an engineer. His initial schooling was undertaken during World War II, during which time he attended the Lycée Carnot. He also spent a year in khâgne at the Lycée Henri IV. During the Nazi occupation of France, Deleuze's brother Georges, three years his senior, was arrested for his participation in the French Resistance and died while in transit to a concentration camp. In 1944, Deleuze went to study at the Sorbonne. His teachers there included several noted specialists in the history of philosophy, such as Georges Canguilhem, Jean Hyppolite, Ferdinand Alquié, and Maurice de Gandillac. Deleuze's lifelong interest in the canonical figures of modern philosophy owed much to these teachers.
Career
Deleuze passed the agrégation in philosophy in 1948, and taught at various lycées (Amiens, Orléans, Louis le Grand) until 1957, when he took up a position at the University of Paris. In 1953, he published his first monograph, Empiricism and Subjectivity, on David Hume. This monograph was based on his 1947 DES (diplôme d'études supérieures) thesis, roughly equivalent to an M.A. thesis, which was conducted under the direction of Jean Hyppolite and Georges Canguilhem. From 1960 to 1964, he held a position at the Centre National de Recherche Scientifique. During this time he published the seminal Nietzsche and Philosophy (1962) and befriended Michel Foucault. From 1964 to 1969, he was a professor at the University of Lyon. In 1968, Deleuze defended his two DrE dissertations amid the ongoing May 68 demonstrations; he later published his two dissertations under the titles Difference and Repetition (supervised by Gandillac) and Expressionism in Philosophy: Spinoza (supervised by Alquié).
In 1969, he was appointed to the University of Paris VIII at Vincennes/St. Denis, an experimental school organized to implement educational reform. This new university drew a number of well-known academics, including Foucault (who suggested Deleuze's hiring) and the psychoanalyst Félix Guattari. Deleuze taught at Paris VIII until his retirement in 1987.
Personal life
Deleuze's outlook on life was sympathetic to transcendental ideas, "nature as god" ethics, and the monist experience. Some of the important ideas he advocated for and found inspiration in include his personally coined expression pluralism = monism, as well as the concepts of Being and Univocity.
He married Denise Paul "Fanny" Grandjouan in 1956 and they had two children.
According to James Miller, Deleuze portrayed little visible interest in actually doing many of the risky things he so vividly conjured up in his lectures and writing. Married, with two children, he outwardly lived the life of a conventional French professor. He kept his fingernails untrimmed because, as he once explained, he lacked "normal protective fingerprints", and therefore could not "touch an object, particularly a piece of cloth, with the pads of my fingers without sharp pain".
When once asked to talk about his life, he replied: "Academics' lives are seldom interesting." Deleuze concludes his reply to this critic thus:
Death
Deleuze, who had suffered from respiratory ailments from a young age, developed tuberculosis in 1968 and underwent lung removal. He suffered increasingly severe respiratory symptoms for the rest of his life. In the last years of his life, simple tasks such as writing required laborious effort. Overwhelmed by his respiratory problems, he died by suicide on 4 November 1995, throwing himself from the window of his apartment.
Before his death, Deleuze had announced his intention to write a book entitled La Grandeur de Marx (The Greatness of Marx), and left behind two chapters of an unfinished project entitled Ensembles and Multiplicities (these chapters have been published as the essays "Immanence: A Life" and "The Actual and the Virtual"). He is buried in the cemetery of the village of Saint-Léonard-de-Noblat.
Philosophy
Deleuze's works fall into two groups: on the one hand, monographs interpreting the work of other philosophers (Baruch Spinoza, Gottfried Wilhelm Leibniz, David Hume, Immanuel Kant, Friedrich Nietzsche, Henri Bergson, Michel Foucault) and artists (Marcel Proust, Franz Kafka, Francis Bacon); on the other, eclectic philosophical tomes organized by concept (e.g., difference, sense, event, economy, cinema, desire, philosophy). However, both of these aspects are seen by his critics and analysts as often overlapping, in particular, due to his prose and the unique mapping of his books that allow for multifaceted readings.
Metaphysics
Deleuze's main philosophical project in the works he wrote prior to his collaborations with Guattari can be summarized as an inversion of the traditional metaphysical relationship between identity and difference. Traditionally, difference is seen as derivative from identity: e.g., to say that "X is different from Y" assumes some X and Y with at least relatively stable identities (as in Plato's forms). On the contrary, Deleuze claims that all identities are effects of difference. Identities are neither logically nor metaphysically prior to difference, Deleuze argues, "given that there exist differences of nature between things of the same genus." That is, not only are no two things ever the same, the categories used to identify individuals in the first place derive from differences. Apparent identities such as "X" are composed of endless series of differences, where "X" = "the difference between x and x", and "x" = "the difference between...", and so forth. Difference, in other words, goes all the way down. To confront reality honestly, Deleuze argues, beings must be grasped exactly as they are, and concepts of identity (forms, categories, resemblances, unities of apperception, predicates, etc.) fail to attain what he calls "difference in itself." "If philosophy has a positive and direct relation to things, it is only insofar as philosophy claims to grasp the thing itself, according to what it is, in its difference from everything it is not, in other words, in its internal difference."
Like Kant, Deleuze considers traditional notions of space and time as unifying forms imposed by the subject. He, therefore, concludes that pure difference is non-spatiotemporal; it is an idea, what Deleuze calls "the virtual". (The coinage refers to Proust's definition of what is constant in both the past and the present: "real without being actual, ideal without being abstract.") While Deleuze's virtual ideas superficially resemble Plato's forms and Kant's ideas of pure reason, they are not originals or models, nor do they transcend possible experience; instead they are the conditions of actual experience, the internal difference in itself. "The concept they [the conditions] form is identical to its object." A Deleuzean idea or concept of difference is therefore not a wraith-like abstraction of an experienced thing, it is a real system of differential relations that creates actual spaces, times, and sensations.
Thus, Deleuze at times refers to his philosophy as a transcendental empiricism (), alluding to Kant. In Kant's transcendental idealism, experience only makes sense when organized by intuitions (namely, space and time) and concepts (such as causality). Assuming the content of these intuitions and concepts to be qualities of the world as it exists independently of human perceptual access, according to Kant, spawns seductive but senseless metaphysical beliefs (for example, extending the concept of causality beyond possible experience results in unverifiable speculation about a first cause). Deleuze inverts the Kantian arrangement: experience exceeds human concepts by presenting novelty, and this raw experience of difference actualizes an idea, unfettered by prior categories, forcing the invention of new ways of thinking (see Epistemology).
Simultaneously, Deleuze claims that being is univocal, i.e., that all of its senses are affirmed in one voice. Deleuze borrows the doctrine of ontological univocity from the medieval philosopher John Duns Scotus. In medieval disputes over the nature of God, many eminent theologians and philosophers (such as Thomas Aquinas) held that when one says that "God is good", God's goodness is only analogous to human goodness. Scotus argued to the contrary that when one says that "God is good", the goodness in question is exactly the same sort of goodness that is meant when one says "Jane is good". That is, God only differs from humans in degree, and properties such as goodness, power, reason, and so forth are univocally applied, regardless of whether one is talking about God, a person, or a flea.
Deleuze adapts the doctrine of univocity to claim that being is, univocally, difference. "With univocity, however, it is not the differences which are and must be: it is being which is Difference, in the sense that it is said of difference. Moreover, it is not we who are univocal in a Being which is not; it is we and our individuality which remains equivocal in and for a univocal Being." Here Deleuze at once echoes and inverts Spinoza, who maintained that everything that exists is a modification of the one substance, God or Nature. For Deleuze, there is no one substance, only an always-differentiating process, an origami cosmos, always folding, unfolding, refolding. Deleuze summarizes this ontology in the paradoxical formula "pluralism = monism".
Difference and Repetition (1968) is Deleuze's most sustained and systematic attempt to work out the details of such a metaphysics, but his other works develop similar ideas. In Nietzsche and Philosophy (1962), for example, reality is a play of forces; in Anti-Oedipus (1972), a "body without organs"; in What is Philosophy? (1991), a "plane of immanence" or "chaosmos".
Epistemology
Deleuze's unusual metaphysics entails an equally atypical epistemology, or what he calls a transformation of "the image of thought". According to Deleuze, the traditional image of thought, found in philosophers such as Aristotle, René Descartes, and Edmund Husserl, misconceives thinking as a mostly unproblematic business. Truth may be hard to discover—it may require a life of pure theorizing, or rigorous computation, or systematic doubt—but thinking is able, at least in principle, to correctly grasp facts, forms, ideas, etc. It may be practically impossible to attain a God's-eye, neutral point of view, but that is the ideal to approximate: a disinterested pursuit that results in a determinate, fixed truth; an orderly extension of common sense. Deleuze rejects this view as papering over the metaphysical flux, instead claiming that genuine thinking is a violent confrontation with reality, an involuntary rupture of established categories. Truth changes thought; it alters what people think is possible. By setting aside the assumption that thinking has a natural ability to recognize the truth, Deleuze says, people attain a "thought without image", a thought always determined by problems rather than solving them. "All this, however, presupposes codes or axioms which do not result by chance, but which do not have an intrinsic rationality either. It's just like theology: everything about it is quite rational if you accept sin, the immaculate conception, and the incarnation. Reason is always a region carved out of the irrational—not sheltered from the irrational at all, but traversed by it and only defined by a particular kind of relationship among irrational factors. Underneath all reason lies delirium, and drift."
The Logic of Sense, published in 1969, is one of Deleuze's most peculiar works in the field of epistemology. Michel Foucault, in his essay "Theatrum Philosophicum" about the book, attributed this to how he begins with his metaphysics but approaches it through language and truth; the book is focused on "the simple condition that instead of denouncing metaphysics as the neglect of being, we force it to speak of extrabeing". In it, he refers to epistemological paradoxes: in the first series, as he analyzes Lewis Carroll's Alice in Wonderland, he remarks that "the personal self requires God and the world in general. But when substantives and adjectives begin to dissolve, when the names of pause and rest are carried away by the verbs of pure becoming and slide into the language of events, all identity disappears from the self, the world, and God."
Deleuze's peculiar readings of the history of philosophy stem from this unusual epistemological perspective. To read a philosopher is no longer to aim at finding a single, correct interpretation, but is instead to present a philosopher's attempt to grapple with the problematic nature of reality. "Philosophers introduce new concepts, they explain them, but they don't tell us, not completely anyway, the problems to which those concepts are a response. [...] The history of philosophy, rather than repeating what a philosopher says, has to say what he must have taken for granted, what he didn't say but is nonetheless present in what he did say."
Likewise, rather than seeing philosophy as a timeless pursuit of truth, reason, or universals, Deleuze defines philosophy as the creation of concepts. For Deleuze, concepts are not identity conditions or propositions, but metaphysical constructions that define a range of thinking, such as Plato's ideas, Descartes's cogito, or Kant's doctrine of the faculties. A philosophical concept "posits itself and its object at the same time as it is created." In Deleuze's view, then, philosophy more closely resembles practical or artistic production than it does an adjunct to a definitive scientific description of a pre-existing world (as in the tradition of John Locke or Willard Van Orman Quine).
In his later work (from roughly 1981 onward), Deleuze sharply distinguishes art, philosophy, and science as three distinct disciplines, each relating to reality in different ways. While philosophy creates concepts, the arts create novel qualitative combinations of sensation and feeling (what Deleuze calls "percepts" and "affects"), and the sciences create quantitative theories based on fixed points of reference such as the speed of light or absolute zero (which Deleuze calls "functives"). According to Deleuze, none of these disciplines enjoy primacy over the others: they are different ways of organizing the metaphysical flux, "separate melodic lines in constant interplay with one another." For example, Deleuze does not treat cinema as an art representing an external reality, but as an ontological practice that creates different ways of organizing movement and time. Philosophy, science, and art are equally, and essentially, creative and practical. Hence, instead of asking traditional questions of identity such as "is it true?" or "what is it?", Deleuze proposes that inquiries should be functional or practical: "what does it do?" or "how does it work?"
Values
In ethics and politics Deleuze again echoes Spinoza, albeit in a sharply Nietzschean key. Following his rejection of any metaphysics based on identity, Deleuze criticizes the notion of an individual as an arresting or halting of differentiation (as the etymology of the word "individual", from the Latin for "indivisible", suggests). Guided by the naturalistic ethics of Spinoza and Nietzsche, Deleuze instead seeks to understand individuals and their moralities as products of the organization of pre-individual desires and powers.
In the two volumes of Capitalism and Schizophrenia, Anti-Oedipus (1972) and A Thousand Plateaus (1980), Deleuze and Guattari describe history as a congealing and regimentation of "desiring-production" (a concept combining features of Freudian drives and Marxist labor) into the modern individual (typically neurotic and repressed), the nation-state (a society of continuous control), and capitalism (an anarchy domesticated into infantilizing commodification). Deleuze, following Karl Marx, welcomes capitalism's destruction of traditional social hierarchies as liberating but inveighs against its homogenization of all values to the aims of the market.
The first part of Capitalism and Schizophrenia undertakes a universal history and posits the existence of a separate socius (the social body that takes credit for production) for each mode of production: the earth for the tribe, the body of the despot for the empire, and capital for capitalism.
In his 1990 essay "Postscript on the Societies of Control" ("Post-scriptum sur les sociétés de contrôle"), Deleuze builds on Foucault's notion of the society of discipline to argue that society is undergoing a shift in structure and control. Where societies of discipline were characterized by discrete physical enclosures (such as schools, factories, prisons, office buildings, etc.), institutions and technologies introduced since World War II have dissolved the boundaries between these enclosures. As a result, social coercion and discipline have moved into the lives of individuals considered as "masses, samples, data, markets, or 'banks'." The mechanisms of modern societies of control are described as continuous, following and tracking individuals throughout their existence via transaction records, mobile location tracking, and other personally identifiable information.
But how does Deleuze square his pessimistic diagnoses with his ethical naturalism? Deleuze claims that standards of value are internal or immanent: to live well is to fully express one's power, to go to the limits of one's potential, rather than to judge what exists by non-empirical, transcendent standards. Modern society still suppresses difference and alienates people from what they can do. To affirm reality, which is a flux of change and difference, one must overturn established identities and so become all that one can become—though what that is cannot be known in advance. The pinnacle of Deleuzean practice, then, is creativity. "Herein, perhaps, lies the secret: to bring into existence and not to judge. If it is so disgusting to judge, it is not because everything is of equal value, but on the contrary, because what has value can be made or distinguished only by defying judgment. What expert judgment, in art, could ever bear on the work to come?"
Deleuze's interpretations
Deleuze's studies of individual philosophers and artists are purposely heterodox. Deleuze once famously described his method of interpreting philosophers as "buggery (enculage)", as sneaking behind an author and producing an offspring which is recognizably his, yet also monstrous and different.
The various monographs are thus not attempts to present what Nietzsche or Spinoza strictly intended, but re-stagings of their ideas in different and unexpected ways. Deleuze's peculiar readings aim to enact the creativity he believes is the acme of philosophical practice. A parallel in painting Deleuze points to is Francis Bacon's Study after Velázquez—it is quite beside the point to say that Bacon "gets Velázquez wrong". Similar considerations apply, in Deleuze's view, to his own uses of mathematical and scientific terms, pace critics such as Alan Sokal: "I'm not saying that Resnais and Prigogine, or Godard and Thom, are doing the same thing. I'm pointing out, rather, that there are remarkable similarities between scientific creators of functions and cinematic creators of images. And the same goes for philosophical concepts, since there are distinct concepts of these spaces."
Similarities with Heidegger
From the 1930s onward, the German philosopher Martin Heidegger wrote a series of manuscripts and books on the concepts of Difference, Identity, Representation, and Event, notably among them the Beiträge zur Philosophie (Vom Ereignis) (written 1936–38; published posthumously in 1989). None of the relevant texts had been translated into French by Deleuze's death in 1995, which excludes any strong possibility of direct appropriation. However, an indirect line of influence from Heidegger's early work runs through the mathematician Albert Lautman, who drew heavily on Heidegger's Sein und Zeit and Vom Wesen des Grundes (1928), which James Bahoh describes as having "...decisive influence on the twentieth century mathematician and philosopher [...] whose theory of dialectical Ideas Deleuze appropriated and modified for his own use."
The similarities between Heidegger's later, post-turn thought (1930–1976) and Deleuze's early works of the 1960s and 1970s are generally described by the Deleuze scholar Daniel W. Smith as follows: "Difference and Repetition could be read as a response to Being and Time (for Deleuze, Being is difference, and time is repetition)." Bahoh continues: "...then Beiträge could be read as Difference and Repetition's unknowing and anachronistic doppelgänger." Deleuze's and Heidegger's philosophies are considered to converge on the topics of Difference and the Event. For Heidegger, an evental being is constituted in part by difference as "...an essential dimension of the concept of event"; for Deleuze, being is difference, and difference "differentiates by way of events." In contrast to this, however, Jussi Backman argues that, for Heidegger, being is united only insofar as it consists of and is difference, or rather the movement of difference, a position not too dissimilar to Deleuze's later claims: "...the unity and univocity of being (in the sense of being), its 'selfsameness,' paradoxically consists exclusively in difference."
This mutual apprehension of a differential, evental ontology led both thinkers into an extended critique of the representation characteristic of Platonic, Aristotelian, and Cartesian thought; as Joe Hughes states: "Difference and Repetition is a detective novel. It tells the story of what some readers of Deleuze might consider a horrendous crime [...]: the birth of representation." Heidegger formed his critique most decisively in the concept of the fourfold [German: das Geviert], a non-metaphysical grounding for the thing (as opposed to the "object") as "ungrounded, mediated, meaningful, and shared", united in an "event of appropriation" [Ereignis]. This evental ontology continues in Identität und Differenz, where the fundamental move of Difference and Repetition, the dethroning of the primacy of identity, can be seen throughout the text. Even in earlier Heideggerian texts such as Sein und Zeit, however, the critique of representation is "...cast in terms of the being of truth, or the processes of uncovering and covering (grounded in Dasein's existence) whereby beings come into and withdraw from phenomenal presence." In parallel, Deleuze's extended critique of representation (which also details a "genealogy" of the antiquated beliefs in question) is given "...in terms of being or becoming as difference and repetition, together with genetic processes of individuation whereby beings come to exist and pass out of existence."
For both thinkers, time and space are also constituted in nearly identical ways. Time-space in the Beiträge and the three syntheses in Difference and Repetition both apprehend time as grounded in difference, while the distinction between the time-space of the world [Welt] and time-space as the evental production of such a time-space is mirrored by Deleuze's distinction between the temporality of the actual and the temporality of the virtual, in the first and the second/third syntheses respectively.
Another parallel can be found in their use of so-called "generative paradoxes": problems whose fundamentally problematic element constantly eludes the categorical grasp of the formal, natural, and human sciences. For Heidegger, this is the Earth in the fourfold, which has among its traits the behaviour of "resisting articulation", what he characterizes as a "strife"; for Deleuze, a similar example can be found in the paradox of regress, or of indefinite proliferation, in The Logic of Sense.
Reception
In the 1960s, Deleuze's portrayal of Nietzsche as a metaphysician of difference rather than a reactionary mystic contributed greatly to the plausibility and popularity of "left-wing Nietzscheanism" as an intellectual stance. His books Difference and Repetition (1968) and The Logic of Sense (1969) led Michel Foucault to declare that "one day, perhaps, this century will be called Deleuzian." (Deleuze, for his part, said Foucault's comment was "a joke meant to make people who like us laugh, and make everyone else livid.") In the 1970s, Anti-Oedipus, written in a style by turns vulgar and esoteric, offering a sweeping analysis of the family, language, capitalism, and history via eclectic borrowings primarily from Marx, Freud, Lacan, and Nietzsche, but also featuring insights from dozens of other writers, was received as a theoretical embodiment of the anarchic spirit of May 1968. In 1994 and 1995, L'Abécédaire de Gilles Deleuze, an eight-hour series of interviews between Deleuze and Claire Parnet, aired on France's Arte Channel.
In the 1980s and 1990s, almost all of Deleuze's books were translated into English. Deleuze's work is frequently cited in English-speaking academia (in 2007, e.g., he was the 11th most frequently cited author in English-speaking publications in the humanities, between Freud and Kant). In the English-speaking academy, Deleuze's work is typically classified as continental philosophy.
However, some French and Anglophone philosophers have criticized Deleuze's work.
According to Pascal Engel, Deleuze's metaphilosophical approach makes it impossible to reasonably disagree with a philosophical system, and so destroys meaning, truth, and philosophy itself. Engel summarizes Deleuze's metaphilosophy thus: "When faced with a beautiful philosophical concept you should just sit back and admire it. You should not question it."
American philosopher Stanley Rosen objects to Deleuze's interpretation of Nietzsche's eternal return.
Vincent Descombes argues that Deleuze's account of a difference that is not derived from identity (in Nietzsche and Philosophy) is incoherent.
Slavoj Žižek states that the Deleuze of Anti-Oedipus ("arguably Deleuze's worst book"), the "political" Deleuze under the "'bad' influence" of Guattari, ends up, despite protestations to the contrary, as "the ideologist of late capitalism".
Allegations of idealism and negligence of material conditions
Peter Hallward argues that Deleuze's insistence that being is necessarily creative and always-differentiating entails that his philosophy can offer no insight into, and is supremely indifferent to, the material conditions of existence. Thus Hallward claims that Deleuze's thought is literally other-worldly, aiming only at a passive contemplation of the dissolution of all identity into the theophanic self-creation of nature.
Descombes argues that Deleuze's analysis of history in Anti-Oedipus is 'utter idealism', criticizing reality for falling short of a non-existent ideal of schizophrenic becoming.
Žižek claims that Deleuze's ontology oscillates between materialism and idealism.
Relation with monism
Alain Badiou claims that Deleuze's metaphysics only apparently embraces plurality and diversity, remaining at bottom monist. Badiou further argues that, in practical matters, Deleuze's monism entails an ascetic, aristocratic fatalism akin to ancient Stoicism.
American philosopher Todd May argues that Deleuze's claim that difference is ontologically primary ultimately contradicts his embrace of immanence, i.e., his monism. However, May believes that Deleuze can discard the primacy-of-difference thesis, and accept a Wittgensteinian holism without significantly altering his practical philosophy.
It has more recently been argued by a Swedish philosopher that Deleuze's criticism of the history of philosophy, as giving metaphysical priority to identity over difference, relies on a false distinction, and that Deleuze inadvertently reaches conclusions akin to those of idealist philosophers of identity such as Schelling.
Subjectivity and individuality
Other European philosophers have criticized Deleuze's theory of subjectivity. For example, Manfred Frank claims that Deleuze's theory of individuation as a process of bottomless differentiation fails to explain the unity of consciousness.
Žižek also calls Deleuze to task for allegedly reducing the subject to "just another" substance and thereby failing to grasp the nothingness that, according to Lacan and Žižek, defines subjectivity. What remains worthwhile in Deleuze's oeuvre, Žižek finds, are precisely Deleuze's engagements with virtuality as the product of negativity.
Science wars
In Fashionable Nonsense (1997), physicists Alan Sokal and Jean Bricmont accuse Deleuze of abusing mathematical and scientific terms, particularly by sliding between their accepted technical meanings and his own idiosyncratic use of them in his works. Sokal and Bricmont state that they do not object to metaphorical reasoning, including with mathematical concepts, but that mathematical and scientific terms are useful only insofar as they are precise. They give examples of mathematical concepts being "abused" by being taken out of their intended meaning, such that rendering the idea into ordinary language reduces it to truism or nonsense. In their opinion, Deleuze used mathematical concepts about which the typical reader might not be knowledgeable, and this served to display erudition rather than to enlighten the reader. Sokal and Bricmont state that they deal only with the "abuse" of mathematical and scientific concepts and explicitly suspend judgment about Deleuze's wider contributions.
Influence
Other scholars in continental philosophy, feminist studies, and sexuality studies have received Deleuze's analysis of the sexual dynamics of sadism and masochism with a degree of uncritical celebration following the 1989 Zone Books translation of his 1967 booklet on Leopold von Sacher-Masoch, Le froid et le cruel (Coldness and Cruelty). As the sexuality historian Alison M. Moore notes, the value Deleuze himself placed on difference is poorly reflected in this booklet, which fails to differentiate between Masoch's own view of his desire and the view imposed upon him by the pathologizing forms of psychiatric thought prevailing in the late nineteenth century, which produced the concept of 'masochism' (a term Masoch himself emphatically rejected).
Smith, Protevi, and Voss note that "Sokal and Bricmont’s 1999 intimations" underestimated Deleuze's awareness of mathematics, point to several "positive views of Deleuze’s use of mathematics as provocations for [...] his philosophical concepts", and argue that Deleuze's epistemology and ontology can be "brought together" with dynamical systems theory, chaos theory, biology, and geography.
Bibliography
Single-authored
In collaboration with Félix Guattari
Capitalisme et Schizophrénie 1. L'Anti-Œdipe (1972). Trans. Anti-Oedipus (1977).
On the Line, New York: Semiotext(e), translated by John Johnson (1983).
Kafka: Pour une Littérature Mineure (1975). Trans. Kafka: Toward a Minor Literature (1986).
Rhizome (1976). Trans., in revised form, in A Thousand Plateaus (1987).
Nomadology: The War Machine (1986). Trans. in A Thousand Plateaus (1987).
Capitalisme et Schizophrénie 2. Mille Plateaux (1980). Trans. A Thousand Plateaus (1987).
Qu'est-ce que la philosophie? (1991). Trans. What is Philosophy? (1994).
"Part I: Deleuze and Guattari on Anti-Oedipus", in Chaosophy: Texts and Interviews 1972–77 (2009), edited by Sylvère Lotringer, pp. 35–118.
In collaboration with Michel Foucault
"Intellectuals and Power: A Discussion Between Gilles Deleuze and Michel Foucault". Telos 16 (Summer 1973). New York: Telos Press (reprinted in L'île déserte et autres textes / Desert Islands and Other Texts; see above)
Documentaries
L'Abécédaire de Gilles Deleuze, with Claire Parnet, produced by Pierre-André Boutang. Éditions Montparnasse.
See also
Notes
References
External links
Webdeleuze – Courses & audio, etc.
Stanford Encyclopedia of Philosophy: "Gilles Deleuze", by Daniel Smith & John Protevi.
Internet Encyclopedia of Philosophy: "Gilles Deleuze", by Jon Roffe.
Near complete bibliography including various translations
Alain Badiou, "The Event in Deleuze." (English translation).
Lectures and notes on work by Deleuze and Guattari.
Rhizomes. Online journal inspired by Deleuzian thought.
Web resources from Wayne State University.
Capitalism: A Very Special Delirium (1995).
Institute of Art and Ideas: "Deleuze and the Time for Non-Reason", by James R. Williams.
1925 births
1995 suicides
1995 deaths
20th-century atheists
20th-century French historians
20th-century French non-fiction writers
20th-century French philosophers
Anti-psychiatry
Atheist philosophers
Deaths by defenestration
Empiricists
French epistemologists
Film theorists
Foucault scholars
Franz Kafka scholars
French anti-capitalists
French anti-fascists
French atheists
French essayists
French ethicists
French historians of philosophy
French male non-fiction writers
French political philosophers
Hegel scholars
Hume scholars
Kant scholars
Literacy and society theorists
Literary theorists
Lycée Carnot alumni
Lycée Henri-IV alumni
Mass media theorists
Materialists
Metaphilosophers
Nietzsche scholars
Ontologists
French philosophers of art
French philosophers of culture
Philosophers of economics
French philosophers of education
French philosophers of history
Philosophers of literature
Philosophers of love
Philosophers of mind
Philosophers of psychology
French philosophers of science
Philosophers of sexuality
Philosophy writers
Poststructuralists
Scholars of Marxism
Spinoza scholars
Suicides by jumping in France
Suicides in Paris
University of Paris alumni
Academic staff of the University of Paris
Academic staff of Paris 8 University Vincennes-Saint-Denis
Writers from Paris | Gilles Deleuze | [ "Physics" ] | 7,767 | [ "Materialism", "Matter", "Materialists" ] |
12,558 | https://en.wikipedia.org/wiki/Galaxy | A galaxy is a system of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity. The word is derived from the Greek (), literally 'milky', a reference to the Milky Way galaxy that contains the Solar System. Galaxies, averaging an estimated 100 million stars, range in size from dwarfs with less than a thousand stars, to the largest galaxies known – supergiants with one hundred trillion stars, each orbiting its galaxy's center of mass. Most of the mass in a typical galaxy is in the form of dark matter, with only a few percent of that mass visible in the form of stars and nebulae. Supermassive black holes are a common feature at the centres of galaxies.
Galaxies are categorised according to their visual morphology as elliptical, spiral, or irregular. The Milky Way is an example of a spiral galaxy. It is estimated that there are between 200 billion (2×10¹¹) and 2 trillion (2×10¹²) galaxies in the observable universe. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light-years) and are separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 26,800 parsecs (87,400 ly) and is separated from the Andromeda Galaxy, its nearest large neighbour, by just over 750,000 parsecs (2.5 million ly).
The space between galaxies is filled with a tenuous gas (the intergalactic medium) with an average density of less than one atom per cubic metre. Most galaxies are gravitationally organised into groups, clusters and superclusters. The Milky Way is part of the Local Group, which it dominates along with the Andromeda Galaxy. The group is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. Both the Local Group and the Virgo Supercluster are contained in a much larger cosmic structure named Laniakea.
Etymology
The word galaxy was borrowed via French and Medieval Latin from the Greek term for the Milky Way, () 'milky (circle)', named after its appearance as a milky band of light in the sky. In Greek mythology, Zeus places his son, born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so the baby will drink her divine milk and thus become immortal. Hera wakes up while breastfeeding and then realises she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the band of light known as the Milky Way.
In the astronomical literature, the capitalised word "Galaxy" is often used to refer to the Milky Way galaxy, to distinguish it from the other galaxies in the observable universe. The English term Milky Way can be traced back to a story by Geoffrey Chaucer.
Galaxies were initially discovered telescopically and were known as spiral nebulae. Most 18th- to 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, and they were simply thought of as a part of the Milky Way, but their true composition and nature remained a mystery. Observations using larger telescopes of a few nearby bright galaxies, like the Andromeda Galaxy, began resolving them into huge conglomerations of stars, but based simply on the apparent faintness and sheer population of stars, the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called island universes, but this term quickly fell into disuse, as the word universe implied the entirety of existence. Instead, they became known simply as galaxies.
Nomenclature
Millions of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies), the UGC (Uppsala General Catalogue of Galaxies), and the PGC (Catalogue of Principal Galaxies, also known as LEDA). All the well-known galaxies appear in one or more of these catalogues but each time under a different number. For example, Messier 109 (or "M109") is a spiral galaxy having the number 109 in the catalogue of Messier. It also has the designations NGC 3992, UGC 6937, CGCG 269–023, MCG +09-20-044, and PGC 37617 (or LEDA 37617), among others. Millions of fainter galaxies are known by their identifiers in sky surveys such as the Sloan Digital Sky Survey.
Observation history
Milky Way
Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars.
Aristotle (384–322 BCE), however, believed the Milky Way was caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." Neoplatonist philosopher Olympiodorus the Younger (c. 495–570 CE) was critical of this view, arguing that if the Milky Way was sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it did not. In his view, the Milky Way was celestial.
According to Mohani Mohamed, Arabian astronomer Ibn al-Haytham (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." Persian astronomer al-Biruni (973–1048) proposed the Milky Way galaxy was "a collection of countless fragments of the nature of nebulous stars." Andalusian astronomer Avempace (d. 1138) proposed that it was composed of many stars that almost touched one another, and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects were near. In the 14th century, Syrian-born Ibn Qayyim al-Jawziyya proposed the Milky Way galaxy was "a myriad of tiny stars packed together in the sphere of the fixed stars."
Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study it and discovered it was composed of a huge number of faint stars. In 1750, English astronomer Thomas Wright, in his An Original Theory or New Hypothesis of the Universe, correctly speculated that it might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale, and that the resulting disk of stars could be seen as a band on the sky from a perspective inside it. In his 1755 treatise, Immanuel Kant elaborated on Wright's idea about the Milky Way's structure.
The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the centre. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane; but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of the Milky Way galaxy emerged.
Distinction from other nebulae
A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, Large Magellanic Cloud, Small Magellanic Cloud, and the Triangulum Galaxy. In the 10th century, Persian astronomer Abd al-Rahman al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, he probably mentioned the Large Magellanic Cloud in his Book of Fixed Stars, referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived. It was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612.
In 1734, philosopher Emanuel Swedenborg in his Principia speculated that there might be other galaxies outside our own, formed into galactic clusters that were minuscule parts of a universe extending far beyond what could be seen. These views "are remarkably close to the present-day views of the cosmos."
In 1745, Pierre Louis Maupertuis conjectured that some nebula-like objects were collections of stars with unique properties, including a glow exceeding the light its stars produced on their own, and repeated Johannes Hevelius's view that the bright spots were massive and flattened due to their rotation.
In 1750, Thomas Wright correctly speculated that the Milky Way was a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways.
Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse examined the nebulae catalogued by Herschel and observed the spiral structure of Messier object M51, now known as the Whirlpool Galaxy.
In 1912, Vesto M. Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us.
In 1917, Heber Doust Curtis observed nova S Andromedae within the "Great Andromeda Nebula", as the Andromeda Galaxy, Messier object M31, was then known. Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within this galaxy. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies.
In 1920 a debate took place between Harlow Shapley and Heber Curtis, the Great Debate, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift.
In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100-inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1926 Hubble produced a classification of galactic morphology that is used to this day.
Multi-wavelength observation
Advances in astronomy have always been driven by technology. After centuries of success in optical astronomy, recent decades have seen major progress in other regions of the electromagnetic spectrum.
The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy.
The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. The ionosphere blocks signals below this range. Large radio interferometers have been used to map the active jets emitted from active nuclei.
Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart by the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy.
Modern research
In 1944, Hendrik van de Hulst predicted that microwave radiation with a wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; in 1951 it was observed. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in this galaxy. These observations led to the hypothesis of a rotating bar structure in the center of this galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies.
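As an illustration of how the 21 cm line is used to map gas motions, the sketch below converts an observed line frequency into a line-of-sight velocity using the radio convention; the observed frequency in the example is illustrative, not a value from the text.

```python
C_KM_S = 299_792.458      # speed of light in km/s
F0_HI_MHZ = 1420.40575    # rest frequency of the 21 cm hydrogen line in MHz

def radial_velocity_kms(observed_mhz: float) -> float:
    """Line-of-sight velocity from the observed frequency of the 21 cm line,
    using the radio convention v = c * (f0 - f) / f0 (positive = receding)."""
    return C_KM_S * (F0_HI_MHZ - observed_mhz) / F0_HI_MHZ

# Gas whose 21 cm line is observed at 1419.46 MHz is receding at roughly 200 km/s.
print(f"{radial_velocity_kms(1419.46):.0f} km/s")
```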
In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter.
Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, its data helped establish that the missing dark matter in this galaxy could not consist solely of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10¹¹) galaxies in the observable universe. Improved technology in detecting the parts of the spectrum invisible to humans (radio telescopes, infrared cameras, and X-ray telescopes) allows detection of other galaxies that are not detected by Hubble. Particularly, surveys in the Zone of Avoidance (the region of sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies.
A 2016 study published in The Astrophysical Journal, led by Christopher Conselice of the University of Nottingham, used 20 years of Hubble images to estimate that the observable universe contained at least two trillion (2×10¹²) galaxies. However, later observations with the New Horizons space probe from outside the zodiacal light reduced this to roughly 200 billion (2×10¹¹).
Types and morphology
Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies.
Many galaxies are thought to contain a supermassive black hole at their center. This includes the Milky Way, whose core region is called the Galactic Center.
Ellipticals
The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead, they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters.
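For illustration, the E0–E7 class can be read off a galaxy's apparent axis ratio. The sketch below assumes the conventional relation n = 10(1 − b/a), capped at E7; the axis ratio in the example is illustrative.

```python
def hubble_elliptical_class(a: float, b: float) -> str:
    """Hubble class (E0-E7) of an elliptical galaxy from its apparent
    semi-major axis a and semi-minor axis b, using n = 10 * (1 - b/a)."""
    if a <= 0 or b < 0 or b > a:
        raise ValueError("require a > 0 and 0 <= b <= a")
    n = int(10 * (1 - b / a))
    return f"E{min(n, 7)}"

# An apparent axis ratio b/a of 0.6 corresponds to class E4.
print(hubble_elliptical_class(1.0, 0.6))
```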
Type-cD galaxies
The largest galaxies are the type-cD galaxies.
First described in a 1964 paper by Thomas A. Matthews and others, they are a subtype of the more general class of D galaxies, which are giant elliptical galaxies, except that they are much larger. They are popularly known as supergiant elliptical galaxies and constitute the largest and most luminous galaxies known. These galaxies feature a central elliptical nucleus with an extensive, faint halo of stars extending to megaparsec scales. The profile of their surface brightness as a function of radius (or distance from the core) falls off more slowly than that of their smaller counterparts.
The formation of these cD galaxies remains an active area of research, but the leading model is that they are the result of the mergers of smaller galaxies in the environments of dense clusters, or even those outside of clusters with random overdensities. These processes are the mechanisms that drive the formation of fossil groups or fossil clusters, where a large, relatively isolated supergiant elliptical resides in the middle of the cluster and is surrounded by an extensive cloud of X-rays as the residue of these galactic collisions. Another, older model posits the phenomenon of cooling flow, where the heated gases in clusters collapse towards their centers as they cool, forming stars in the process, a phenomenon observed in clusters such as Perseus, and more recently in the Phoenix Cluster.
Shell galaxy
A shell galaxy is a type of elliptical galaxy where the stars in its halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. These structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy—that as the two galaxy centers approach, they start to oscillate around a center point, and the oscillation creates gravitational ripples forming the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over 20 shells.
Spirals
Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter which extends beyond the visible component, as demonstrated by the universal rotation curve concept.
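The link between a galaxy's rotation and its total (largely dark) mass can be illustrated with a simple Newtonian estimate; the sketch below assumes circular orbits and a spherical mass distribution, and the rotation speed and radius are round, illustrative values.

```python
G_KPC = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 per solar mass

def enclosed_mass_msun(v_circ_kms: float, r_kpc: float) -> float:
    """Mass enclosed within radius r for circular speed v, assuming
    Newtonian dynamics and spherical symmetry: M(<r) = v^2 * r / G."""
    return v_circ_kms ** 2 * r_kpc / G_KPC

# A flat rotation curve of ~220 km/s out to 20 kpc implies roughly 2e11
# solar masses enclosed, far more than the visible stars and gas account for.
print(f"{enclosed_mass_msun(220.0, 20.0):.2e} M_sun")
```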
Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) which indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy, which has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc, as some spiral galaxies have thick bulges while others are thin and dense.
In spiral galaxies, the spiral arms do have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars.
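The logarithmic-spiral shape of the arms can be illustrated with a short sketch; the parametrization r = exp(b·θ) with b = tan(pitch angle), and the 15-degree pitch angle used here, are illustrative choices rather than values from the text.

```python
import math

def log_spiral_points(pitch_deg: float, n_turns: float = 2.0, steps: int = 200):
    """Sample (x, y) points on a logarithmic spiral r = exp(b * theta),
    where the pitch angle p satisfies b = tan(p)."""
    b = math.tan(math.radians(pitch_deg))
    points = []
    for i in range(steps):
        theta = 2 * math.pi * n_turns * i / (steps - 1)
        r = math.exp(b * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Spiral-arm pitch angles of real galaxies are typically of order 10-30 degrees.
arm = log_spiral_points(pitch_deg=15.0)
print(arm[:3])
```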
Barred spiral galaxy
A majority of spiral galaxies, including the Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) which indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms.
Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10¹¹) stars and has a total mass of about six hundred billion (6×10¹¹) times the mass of the Sun.
Super-luminous spiral
Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 87,400 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have a star formation rate around 30 times that of the Milky Way.
Other morphologies
Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies.
A ring galaxy has a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation.
A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars (barred lenticular galaxies receive Hubble classification SB0).
Irregular galaxies are galaxies that can not be readily classified into an elliptical or spiral morphology.
An Irr-I galaxy has some structure but does not align cleanly with the Hubble classification scheme.
Irr-II galaxies do not possess any structure that resembles a Hubble classification, and may have been disrupted. Nearby examples of (dwarf) irregular galaxies include the Magellanic Clouds.
A dark or "ultra diffuse" galaxy is an extremely-low-luminosity galaxy. It may be the same size as the Milky Way, but have a visible star count only one percent of the Milky Way's. Multiple mechanisms for producing this type of galaxy have been proposed, and it is possible that different dark galaxies formed by different means. One candidate explanation for the low luminosity is that the galaxy lost its star-forming gas at an early stage, resulting in old stellar populations.
Dwarfs
Despite the prominence of large elliptical and spiral galaxies, most galaxies are dwarf galaxies. They are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, with only a few billion stars. Blue compact dwarf galaxies contain large clusters of young, hot, massive stars. Ultra-compact dwarf galaxies have been discovered that are only 100 parsecs across.
Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered.
Most of the information we have about dwarf galaxies comes from observations of the Local Group, which contains two spiral galaxies, the Milky Way and Andromeda, and many dwarf galaxies. These dwarf galaxies are classified as either irregular or dwarf elliptical/dwarf spheroidal galaxies.
A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether it has thousands or millions of stars. This suggests that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale.
Variants
Interacting
Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust.
Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies usually do not collide, but the gas and dust within the two galaxies interact, sometimes triggering star formation. A collision can severely distort the galaxies' shapes, forming bars, rings or tail-like structures.
At the extreme of interactions are galactic mergers, where the galaxies' relative momenta are insufficient to allow them to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to the galaxies' original morphology. If one of the galaxies is much more massive than the other, the result is known as cannibalism, where the larger galaxy remains relatively undisturbed, and the smaller one is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy.
Starburst
Stars are created within galaxies from a reserve of cold gas that forms giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. If they continued to do so, they would consume their reserve of gas in a time span less than the galaxy's lifespan. Hence starburst activity usually lasts only about ten million years, a relatively brief period in a galaxy's history. Starburst galaxies were more common during the universe's early history, but still contribute an estimated 15% to total star production.
Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These stars produce supernova explosions, creating expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star-building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the activity end.
Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity.
Radio galaxy
A radio galaxy is a galaxy with giant regions of radio emission extending well beyond its visible structure. These energetic radio lobes are powered by jets from its active galactic nucleus. Radio galaxies are classified according to the Fanaroff–Riley classification. The FR I class has lower radio luminosity and exhibits structures which are more elongated; the FR II class has higher radio luminosity. The correlation of radio luminosity and structure suggests that the sources in these two types of galaxies may differ.
Radio galaxies can also be classified as giant radio galaxies (GRGs), whose radio emissions can extend to scales of megaparsecs (3.26 million light-years). Alcyoneus is an FR II class low-excitation radio galaxy which has the largest observed radio emission, with lobed structures spanning 5 megaparsecs (16×10⁶ ly). For comparison, another similarly sized giant radio galaxy is 3C 236, with lobes 15 million light-years across. Radio emissions, however, are not always considered part of the main galaxy itself.
A giant radio galaxy is a special class of objects characterized by the presence of radio lobes generated by relativistic jets powered by the central galaxy's supermassive black hole. Giant radio galaxies are different from ordinary radio galaxies in that they can extend to much larger scales, reaching up to several megaparsecs across, far larger than the diameters of their host galaxies.
A "normal" radio galaxy do not have a source that is a supermassive black hole or monster neutron star; instead the source is synchrotron radiation from relativistic electrons accelerated by supernova. These sources are comparatively short lived, making the radio spectrum from normal radio galaxies an especially good way to study star formation.
Active galaxy
Some observable galaxies are classified as "active" if they contain an active galactic nucleus (AGN). A significant portion of the galaxy's total energy output is emitted by the active nucleus instead of its stars, dust and interstellar medium. There are multiple classification and naming schemes for AGNs, but those in the lower ranges of luminosity are called Seyfert galaxies, while those with luminosities much greater than that of the host galaxy are known as quasi-stellar objects or quasars. Models of AGNs suggest that a significant fraction of their light is shifted to far-infrared frequencies because optical and UV emission in the nucleus is absorbed and re-emitted by dust and gas surrounding it.
The standard model for an active galactic nucleus is based on an accretion disc that forms around a supermassive black hole (SMBH) at the galaxy's core region. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. The AGN's luminosity depends on the SMBH's mass and the rate at which matter falls onto it.
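The dependence of luminosity on the accretion rate can be illustrated with the standard relation L = ηṀc²; the 10% radiative efficiency and the accretion rate of one solar mass per year used below are illustrative assumptions, not figures from the text.

```python
C_M_S = 2.998e8        # speed of light, m/s
M_SUN_KG = 1.989e30    # solar mass, kg
YEAR_S = 3.156e7       # one year, s
L_SUN_W = 3.828e26     # solar luminosity, W

def accretion_luminosity_lsun(mdot_msun_per_yr: float, efficiency: float = 0.1) -> float:
    """Radiated luminosity of an accretion flow, L = eta * Mdot * c^2,
    for an accretion rate given in solar masses per year."""
    mdot_kg_s = mdot_msun_per_yr * M_SUN_KG / YEAR_S
    return efficiency * mdot_kg_s * C_M_S ** 2 / L_SUN_W

# Accreting one solar mass per year at 10% efficiency yields ~1.5e12 L_sun,
# comparable to a luminous quasar.
print(f"{accretion_luminosity_lsun(1.0):.2e} L_sun")
```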
In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood.
Seyfert galaxy
Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses; but unlike quasars, their host galaxies are clearly detectable. Seen through a telescope, a Seyfert galaxy appears like an ordinary galaxy with a bright star superimposed atop the core. Seyfert galaxies are divided into two principal subtypes based on the frequencies observed in their spectra.
Quasar
Quasars are the most energetic and distant members of active galactic nuclei. Extremely luminous, they were first identified as high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared more similar to stars than to extended sources similar to galaxies. Their luminosity can be 100 times that of the Milky Way. The nearest known quasar, Markarian 231, is about 581 million light-years from Earth, while others have been discovered as far away as UHZ1, roughly 13.2 billion light-years distant. Quasars are noteworthy for providing the first demonstration of the phenomenon that gravity can act as a lens for light.
Other AGNs
Blazars are believed to be active galaxies with a relativistic jet pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the observer's position.
Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei.
Luminous infrared galaxy
Luminous infrared galaxies (LIRGs) are galaxies with luminosities—the measurement of electromagnetic power output—above 10¹¹ L☉ (solar luminosities). In most cases, most of their energy comes from large numbers of young stars which heat surrounding dust, which reradiates the energy in the infrared. Luminosity high enough to be a LIRG requires a star formation rate of at least 18 M☉ yr⁻¹. Ultra-luminous infrared galaxies (ULIRGs) are at least ten times more luminous still and form stars at rates >180 M☉ yr⁻¹. Many LIRGs also emit radiation from an AGN. Infrared galaxies emit more energy in the infrared than all other wavelengths combined, with peak emission typically at wavelengths of 60 to 100 microns. LIRGs are believed to be created from the strong interaction and merger of spiral galaxies. While uncommon in the local universe, LIRGs and ULIRGs were more prevalent when the universe was younger.
Physical diameters
Galaxies do not have a definite boundary by their nature; they are characterized by a gradually decreasing stellar density as a function of increasing distance from their center, making measurements of their true extents difficult. Nevertheless, over the past few decades astronomers have developed several criteria for defining the sizes of galaxies.
Angular diameter
As early as the time of Edwin Hubble in 1936, there were attempts to characterize the diameters of galaxies. The earliest efforts were based on the observed angle subtended by the galaxy and its estimated distance, leading to an angular diameter (also called the "metric diameter").
Isophotal diameter
The isophotal diameter was introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightness. Isophotes are curves in a diagram - such as a picture of a galaxy - that join points of equal brightness, and are useful in defining the extent of the galaxy. The apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec²; sometimes expressed as mag arcsec⁻²), which defines the brightness depth of the isophote. To illustrate how this unit works, a typical galaxy has a brightness flux of 18 mag/arcsec² at its central region. This brightness is equivalent to the light of a hypothetical 18th-magnitude point object (like a star) spread out evenly over one square arcsecond of sky. The isophotal diameter is typically defined as the region enclosing all the light down to 25 mag/arcsec² in the blue B-band, which is then referred to as the D25 standard.
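To make the magnitude arithmetic above concrete, the following minimal sketch (the helper name is hypothetical; the 18 and 25 mag/arcsec² values are the ones quoted in this section) converts a surface-brightness difference into a linear flux ratio using the standard relation ratio = 10^(0.4·Δm):

# Surface-brightness difference (mag/arcsec^2) -> linear flux ratio per unit area.
def flux_ratio(mu_bright, mu_faint):
    return 10 ** (0.4 * (mu_faint - mu_bright))

print(flux_ratio(18.0, 25.0))  # ~631: the D25 isophote is roughly 630 times fainter per square arcsecond than an 18 mag/arcsec^2 galaxy centre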
Effective radius (half-light) and its variations
The half-light radius (also known as the effective radius, Re) is a measure based on the galaxy's overall brightness flux: it is the radius within which half (50%) of the galaxy's total brightness flux is emitted. This was first proposed by Gérard de Vaucouleurs in 1948. The choice of 50% was arbitrary, but proved useful in further work: by R. A. Fish in 1963, who established a luminosity concentration law that relates the brightnesses of elliptical galaxies to their respective Re, and by José Luis Sérsic in 1968, who defined a mass–radius relation in galaxies.
In defining Re, the galaxy's overall brightness flux must be captured; a method employed by Bershady in 2000 suggests measuring twice the radius at which the local flux (the brightness flux at an arbitrarily chosen radius), divided by the overall average flux within that radius, equals 0.2. Using the half-light radius allows a rough estimate of a galaxy's size, but is not particularly helpful in determining its morphology.
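As an illustration of how Re follows from a brightness profile, the sketch below numerically integrates an exponential disc profile (an assumed profile chosen for illustration only; this section does not specify one) and finds the radius enclosing half of the total flux, which lands near 1.68 disc scale lengths:

import numpy as np

h = 1.0                                                   # assumed disc scale length (arbitrary units)
R = np.linspace(0.0, 20.0 * h, 200001)                    # radial grid
I = np.exp(-R / h)                                        # exponential surface-brightness profile (central value 1)
L_cum = np.cumsum(I * 2.0 * np.pi * R) * (R[1] - R[0])    # cumulative flux within radius R
R_e = R[np.searchsorted(L_cum, 0.5 * L_cum[-1])]          # radius enclosing half the total flux
print(R_e / h)                                            # ~1.68 scale lengths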
Variations of this method exist. In particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
Petrosian magnitude
First described by Vahe Petrosian in 1976, a modified version of this method has been used by the Sloan Digital Sky Survey (SDSS). It models a galaxy's radius using the azimuthally averaged profile of its brightness flux. In particular, the SDSS employed the Petrosian magnitude in the R-band (658 nm, in the red part of the visible spectrum) to ensure that as much of the galaxy's brightness flux as possible would be captured while counteracting the effects of background noise. For a galaxy whose brightness profile is exponential, the method is expected to capture all of its brightness flux, and about 80% for galaxies whose profiles follow de Vaucouleurs's law.
Petrosian magnitudes have the advantage of being redshift and distance independent, allowing the measurement of the galaxy's apparent size since the Petrosian radius is defined in terms of the galaxy's overall luminous flux.
A critique of an earlier version of this method was issued by the Infrared Processing and Analysis Center, noting that it produced errors of up to 10% in the derived values compared with using the isophotal diameter. The use of Petrosian magnitudes also has the disadvantage of missing most of the light outside the Petrosian aperture, which is defined relative to the galaxy's overall brightness profile; this is especially pronounced for elliptical galaxies and for objects at greater distances and redshifts. A correction for this method was issued by Graham et al. in 2005, based on the assumption that galaxies follow Sérsic's law.
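A minimal sketch of the underlying Petrosian radius (the radius at which the local surface brightness falls to η = 0.2 of the mean surface brightness inside that radius, the ratio used by the SDSS); the exponential profile is again an illustrative assumption rather than anything specified in this section:

import numpy as np

def petrosian_radius(R, I, eta=0.2):
    """Radius at which local surface brightness equals eta times the mean within that radius."""
    dR = R[1] - R[0]
    L_cum = np.cumsum(I * 2.0 * np.pi * R) * dR    # cumulative flux within radius R
    mean_sb = L_cum / (np.pi * R ** 2)             # mean surface brightness inside R
    return R[np.argmax(I / mean_sb < eta)]         # first radius where the ratio falls below eta

h = 1.0
R = np.linspace(1e-4, 20.0 * h, 200000)
print(petrosian_radius(R, np.exp(-R / h)) / h)     # ~3.7 scale lengths for an exponential disc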
Near-infrared method
This method has been used by 2MASS as an adaptation from the previously used methods of isophotal measurement. Since 2MASS operates in the near-infrared, which has the advantage of being able to detect dimmer, cooler, and older stars, it takes a different approach from other methods, which normally use the B-band filter. The details of the method used by 2MASS have been described thoroughly in a document by Jarrett et al., with the survey measuring several parameters.
The standard aperture ellipse (area of detection) is defined by the infrared isophote at the Ks band (roughly 2.2 μm wavelength) of 20 mag/arcsec². The overall luminous flux of the galaxy has been gathered by at least four methods: a circular aperture extending 7 arcseconds from the center, an isophote at 20 mag/arcsec², a "total" aperture defined by the radial light distribution that covers the supposed extent of the galaxy, and the Kron aperture (defined as 2.5 times the first-moment radius, an integration of the flux of the "total" aperture).
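The Kron aperture mentioned above is built from the first-moment (flux-weighted mean) radius; the short sketch below evaluates it for an assumed exponential profile (again an illustrative choice, not taken from the text), where the first-moment radius comes out at two scale lengths and the 2.5× aperture at five:

import numpy as np

h = 1.0
R = np.linspace(1e-4, 30.0 * h, 300000)
I = np.exp(-R / h)
w = I * 2.0 * np.pi * R                      # flux per unit radius (annular weighting)
r1 = np.trapz(w * R, R) / np.trapz(w, R)     # first-moment (Kron-style) radius
print(r1 / h, 2.5 * r1 / h)                  # ~2.0 and ~5.0 scale lengths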
Larger-scale structures
Deep-sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with other galaxies of comparable mass in the past few billion years are relatively scarce. Only about 5% of the galaxies surveyed are isolated in this sense. However, they may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller satellite galaxies.
On the largest scale, the universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This ongoing merging process, as well as an influx of infalling gas, heats the intergalactic gas in a cluster to very high temperatures of 30–100 megakelvins. About 70–80% of a cluster's mass is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent in the form of galaxies.
Most galaxies are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster; these formations contain the majority of galaxies (as well as most of the baryonic mass) in the universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers.
Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own.
Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the universe appears to be the same in all directions (isotropic and homogeneous), though this notion has been challenged in recent years by numerous findings of large-scale structures that appear to exceed this scale. The Hercules–Corona Borealis Great Wall, the largest structure in the universe found so far, is 10 billion light-years (three gigaparsecs) in length.
The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. In turn, the Virgo Supercluster is a portion of the Laniakea Supercluster.
Magnetic fields
Galaxies have magnetic fields of their own. A galaxy's magnetic field influences its dynamics in multiple ways, including affecting the formation of spiral arms and transporting angular momentum in gas clouds. The latter effect is particularly important, as it is a necessary factor for the gravitational collapse of those clouds, and thus for star formation.
The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). By comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss) or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, the Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms, the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies—for example, in M 82 and the Antennae; and in nuclear starburst regions, such as the centers of NGC 1097 and other barred galaxies.
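The unit equivalences quoted above follow directly from 1 gauss = 10⁻⁴ tesla; a trivial check using only the figures in this paragraph:

GAUSS_TO_TESLA = 1e-4
print(10e-6 * GAUSS_TO_TESLA)   # 10 microgauss = 1e-09 T = 1 nanotesla
print(0.3 * GAUSS_TO_TESLA)     # 0.3 gauss = 3e-05 T = 30 microtesla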
Formation and evolution
Formation
Current models of the formation of galaxies in the early universe are based on the ΛCDM model. About 300,000 years after the Big Bang, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures allowed gases to condense into protogalaxies, large-scale gas clouds that were the precursors to the first galaxies.
As gas falls into the gravitational wells of the dark matter halos, its pressure and temperature rise. To condense further, the gas must radiate energy. This process was slow in the early universe, which was dominated by hydrogen atoms and molecules, inefficient radiators compared to heavier elements. As clumps of gas aggregate into rotating disks, temperatures and pressures continue to increase. Some places within the disk reach high enough density to form stars.
Once protogalaxies began to form and contract, the first halo stars, called Population III stars, appeared within them. These were composed of primordial gas, almost entirely of hydrogen and helium.
Emission from the first stars heats the remaining gas helping to trigger additional star formation; the ultraviolet light emission from the first generation of stars re-ionized the surrounding neutral hydrogen in expanding spheres eventually reaching the entire universe, an event called reionization. The most massive stars collapse in violent supernova explosions releasing heavy elements ("metals") into the interstellar medium. This metal content is incorporated into population II stars.
Theoretical models for early galaxy formation have been verified and informed by a large number and variety of sophisticated astronomical observations. The photometric observations generally need spectroscopic confirmation due to the large number of mechanisms that can introduce systematic errors. For example, a high-redshift (z ~ 16) photometric observation by the James Webb Space Telescope (JWST) was later corrected to be closer to z ~ 5.
Nevertheless, confirmed observations from the JWST and other observatories are accumulating, allowing systematic comparison of early galaxies to predictions of theory.
Evidence for individual Population III stars in early galaxies is even more challenging. Even seemingly confirmed spectroscopic evidence may turn out to have other origins. For example, astronomers reported He II emission evidence for Population III stars in the Cosmos Redshift 7 galaxy, with a redshift value of 6.60. Subsequent observations found metal emission lines (O III) inconsistent with a population of primordial stars.
Evolution
Once stars begin to form, emit radiation, and in some cases explode, the process of galaxy formation becomes very complex, involving interactions between the forces of gravity, radiation, and thermal energy. Many details are still poorly understood.
Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation.
During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets.
Star formation rates in galaxies depend upon their local environment. Isolated 'void' galaxies have the highest rate per stellar mass, 'field' galaxies associated with spiral galaxies have lower rates, and galaxies in dense clusters have the lowest rates.
The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies.
The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, it has collided and merged with other galaxies in the past. Cosmological simulations indicate that, 11 billion years ago, it merged with a particularly large galaxy that has been labeled the Kraken.
Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked about ten billion years ago.
Future trends
Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end.
The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10¹³–10¹⁴ years), as the smallest, longest-lived stars in the visible universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.
Gallery
See also
Bright early galaxies
Dark galaxy
Galactic orientation
Galaxy formation and evolution
Illustris project
List of galaxies
List of the most distant astronomical objects
List of nearest galaxies
List of largest galaxies
Low surface brightness galaxy
Outline of galaxies
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
Notes
References
Bibliography
External links
NASA/IPAC Extragalactic Database (NED)
NED Redshift-Independent Distances
An Atlas of The Universe
Galaxies – Information and amateur observations
Galaxy Zoo – citizen science galaxy classification project
"A Flight Through the Universe, by the Sloan Digital Sky Survey" – animated video from Berkeley Lab
Concepts in astronomy
Articles containing video clips | Galaxy | [
"Physics",
"Astronomy"
] | 10,515 | [
"Concepts in astronomy",
"Galaxies",
"Astronomical objects"
] |
12,570 | https://en.wikipedia.org/wiki/Gigabyte | The gigabyte () is a multiple of the unit byte for digital information. The prefix giga means 109 in the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit symbol for the gigabyte is GB.
This definition is used in all contexts of science (especially data science), engineering, business, and many areas of computing, including storage capacities of hard drives, solid-state drives, and tapes, as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1024³ (2³⁰) bytes, particularly for sizes of RAM. Thus, some usage of gigabyte has been ambiguous. To resolve this difficulty, IEC 80000-13 clarifies that a gigabyte (GB) is 10⁹ bytes and specifies the term gibibyte (GiB) to denote 2³⁰ bytes. These differences are still readily seen, for example, when a 400 GB drive's capacity is displayed by Microsoft Windows as 372 GB instead of 372 GiB. Analogously, a memory module that is labeled as having the size "1 GB" has one gibibyte (1 GiB) of storage capacity.
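The 400 GB example above is plain arithmetic; a minimal sketch (the helper name is hypothetical) converting decimal gigabytes to binary gibibytes:

def gb_to_gib(gb):
    """Convert decimal gigabytes (10**9 bytes) to gibibytes (2**30 bytes)."""
    return gb * 10**9 / 2**30

print(gb_to_gib(400))   # ~372.5, displayed as "372 GB" by software that counts in binary units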
In response to litigation over whether the makers of electronic storage devices must conform to Microsoft Windows' use of a binary definition of "GB" instead of the metric/decimal definition, the United States District Court for the Northern District of California rejected that argument, ruling that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce'."
Definition
The term gigabyte has a standard definition of 1000³ bytes, as well as a discouraged meaning of 1024³ bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2, but lacked a convenient name. As 1024 (2¹⁰) is approximately 1000 (10³), roughly corresponding to SI multiples, it was used for binary multiples as well.
In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes, requiring that the gigabyte strictly denote 1000³ bytes and gibibyte denote 1024³ bytes. By the end of 2007, the IEC Standard had been adopted by the IEEE, EU, and NIST, and in 2009 it was incorporated in the International System of Quantities. Nevertheless, the term gigabyte continues to be widely used with the following two different meanings:
Base 10 (decimal)
1 GB = 1,000,000,000 bytes (= 1000³ B = 10⁹ B)
Based on powers of 10, this definition uses the prefix giga- as defined in the International System of Units (SI). This is the recommended definition by the International Electrotechnical Commission (IEC). This definition is used in networking contexts and most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The file manager of Mac OS X version 10.6 and later versions is a notable example of this usage in software, reporting file sizes in decimal units.
Base 2 (binary)
1 GiB = 1,073,741,824 bytes (= 1024³ B = 2³⁰ B).
The binary definition uses powers of the base 2, as does the architectural principle of binary computers.
This usage is widely promulgated by some operating systems, such as Microsoft Windows in reference to computer memory (e.g., RAM). This definition is synonymous with the unambiguous unit gibibyte.
Consumer confusion
Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers labelled many consumer hard drive, solid-state drive and USB flash drive capacities in certain size classes expressed in decimal gigabytes, such as "500 GB". The exact capacity of a given drive model is usually slightly larger than the class designation. Practically all manufacturers of hard disk drives and flash-memory disk devices continue to define one gigabyte as 1,000,000,000 bytes, which is displayed on the packaging. Some operating systems, such as Mac OS X, Ubuntu, and Debian, express hard drive capacity or file size using decimal multipliers, while others, such as Microsoft Windows, report size using binary multipliers. This discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB (meaning 400,000,000,000 bytes, equal to 372 GiB) might be reported by the operating system as "372 GB".
For RAM, the JEDEC memory standards use IEEE 100 nomenclature, which quotes the gigabyte as 1,073,741,824 bytes (2³⁰ bytes).
The difference between units based on decimal and binary prefixes increases as a semi-logarithmic (linear-log) function—for example, the decimal kilobyte value is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte value. This means that a 300 GB (279 GiB) hard disk might be indicated variously as "300 GB", "279 GB" or "279 GiB", depending on the operating system. As storage sizes increase and larger units are used, these differences become more pronounced.
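A short calculation reproducing the percentages and the 300 GB example given above:

# Decimal prefixes step by factors of 1000, binary prefixes by 1024, so the ratio shrinks at each level.
for name, power in [("kilobyte/kibibyte", 1), ("megabyte/mebibyte", 2), ("gigabyte/gibibyte", 3)]:
    print(name, round(100 * 1000**power / 1024**power, 2), "%")   # 97.66 %, 95.37 %, 93.13 %
print(300 * 10**9 / 2**30)   # a "300 GB" disk holds about 279.4 GiB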
US lawsuits
A lawsuit decided in 2019 that arose from alleged breach of contract and other claims over the binary and decimal definitions used for "gigabyte" ended in favour of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10⁹) bytes (the decimal definition). Specifically, the courts held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' .... The California Legislature has likewise adopted the decimal system for all 'transactions in this state'."
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity.
Seagate was sued on similar grounds and also settled.
Other contexts
Because of their physical design, the capacity of modern computer random-access memory devices, such as DIMM modules, is always a multiple of a power of 1024. It is thus convenient to use prefixes denoting powers of 1024, known as binary prefixes, in describing them. For example, a memory capacity of 1,073,741,824 bytes (1024³ B) is conveniently expressed as 1 GiB rather than as 1.074 GB. The former specification is, however, often quoted as "1 GB" when applied to random-access memory.
Software allocates memory in varying degrees of granularity as needed to fulfill data structure requirements and binary multiples are usually not required. Other computer capacities and rates, like storage hardware size, data transfer rates, clock speeds, operations per second, etc., do not depend on an inherent base, and are usually presented in decimal units. For example, the manufacturer of a "300 GB" hard drive is claiming a capacity of 300,000,000,000 bytes, not 300 × 1024³ bytes (which would be 322,122,547,200 bytes).
Examples of gigabyte-sized storage
One hour of SDTV video at 2.2 Mbit/s is approximately 1 GB.
Seven minutes of HDTV video at 19.39 Mbit/s is approximately 1 GB.
114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s is approximately 1 GB (a rough arithmetic check of these three bitrate-based figures is sketched after this list).
A single-layer DVD+R disc can hold about 4.7 GB.
A dual-layered DVD+R disc can hold about 8.5 GB.
A single-layer Blu-ray can hold about 25 GB.
The largest Nintendo Switch cartridge available on the market holds about 32 GB.
A dual-layered Blu-ray can hold about 50 GB.
A triple-layered Ultra HD Blu-ray can hold about 100 GB.
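A rough check of the three bitrate-based figures referenced in the list above (bits = bitrate × seconds; bytes = bits ÷ 8):

def approx_gb(mbit_per_s, minutes):
    """Approximate size in decimal gigabytes of a stream at the given bitrate and duration."""
    return mbit_per_s * 1e6 * minutes * 60 / 8 / 1e9

print(approx_gb(2.2, 60))     # ~0.99 GB for one hour of SDTV
print(approx_gb(19.39, 7))    # ~1.02 GB for seven minutes of HDTV
print(approx_gb(1.4, 114))    # ~1.20 GB for 114 minutes of CD-quality audio (the list's "approximately 1 GB" is a loose rounding)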
Unicode character
The "gigabyte" symbol is encoded by Unicode at code point .
See also
Orders of magnitude (data)
Binary prefix
References
External links
http://physics.nist.gov/cuu/Units/binary.html
http://www.quinion.com/words/turnsofphrase/tp-kib1.htm
https://www.nist.gov/public_affairs/techbeat/tb9903.htm
Units of information | Gigabyte | [
"Mathematics"
] | 1,768 | [
"Units of information",
"Quantity",
"Units of measurement"
] |
12,571 | https://en.wikipedia.org/wiki/Galaxy%20groups%20and%20clusters | Galaxy groups and clusters are the largest known gravitationally bound objects to have arisen thus far in the process of cosmic structure formation. They form the densest part of the large-scale structure of the Universe. In models for the gravitational formation of structure with cold dark matter, the smallest structures collapse first and eventually build the largest structures, clusters of galaxies. Clusters have therefore formed relatively recently, between 10 billion years ago and now. Groups and clusters may contain from ten to thousands of individual galaxies. The clusters themselves are often associated with larger, non-gravitationally bound groups called superclusters.
Groups of galaxies
Groups of galaxies are the smallest aggregates of galaxies. They typically contain no more than 50 galaxies in a diameter of 1 to 2 megaparsecs (Mpc) (see 10²² m for distance comparisons). Their mass is approximately 10¹³ solar masses. The spread of velocities for the individual galaxies is about 150 km/s. However, this definition should be used as a guide only, as larger and more massive galaxy systems are sometimes classified as galaxy groups. Groups are the most common structures of galaxies in the universe, comprising at least 50% of the galaxies in the local universe. Groups have a mass range between those of the very large elliptical galaxies and clusters of galaxies.
Our own galaxy, the Milky Way, is contained in the Local Group of more than 54 galaxies.
In July 2017 S. Paul, R. S. John et al. defined clear distinguishing parameters for classifying galaxy aggregations as ‘galaxy groups’ and ‘clusters’ on the basis of scaling laws that they followed. According to this paper, galaxy aggregations less massive than 8 × 10¹³ solar masses are classified as galaxy groups.
Clusters of galaxies
Clusters are larger than groups, although there is no sharp dividing line between the two. When observed visually, clusters appear to be collections of galaxies held together by mutual gravitational attraction. However, their velocities are too large for them to remain gravitationally bound by their mutual attractions, implying the presence of either an additional invisible mass component, or an additional attractive force besides gravity. X-ray studies have revealed the presence of large amounts of intergalactic gas known as the intracluster medium. This gas is very hot, between 10⁷ K and 10⁸ K, and hence emits X-rays in the form of bremsstrahlung and atomic line emission.
The total mass of the gas is greater than that of the galaxies by roughly a factor of two. However, this is still not enough mass to keep the galaxies in the cluster. Since this gas is in approximate hydrostatic equilibrium with the overall cluster gravitational field, the total mass distribution can be determined. It turns out the total mass deduced from this measurement is approximately six times larger than the mass of the galaxies or the hot gas. The missing component is known as dark matter and its nature is unknown. In a typical cluster perhaps only 5% of the total mass is in the form of galaxies, maybe 10% in the form of hot X-ray emitting gas and the remainder is dark matter. Brownstein and Moffat use a theory of modified gravity to explain X-ray cluster masses without dark matter. Observations of the Bullet Cluster are the strongest evidence for the existence of dark matter; however, Brownstein and Moffat have shown that their modified gravity theory can also account for the properties of the cluster.
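The hydrostatic mass determination described above is commonly written in the following form (a standard expression from X-ray cluster analysis, added here for clarity; the symbols are not defined elsewhere in this article):

$$ M(<r) = -\frac{k_{\mathrm{B}}\,T(r)\,r}{G\,\mu\,m_{\mathrm{p}}}\left(\frac{\mathrm{d}\ln\rho_{\mathrm{gas}}}{\mathrm{d}\ln r} + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r}\right) $$

where ρ_gas(r) and T(r) are the radial profiles of gas density and temperature, μ is the mean molecular weight of the gas, m_p the proton mass, k_B the Boltzmann constant and G the gravitational constant; measuring the density and temperature gradients from X-ray observations therefore yields the total mass enclosed within radius r.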
Observational methods
Clusters of galaxies have been found in surveys by a number of observational techniques and have been studied in detail using many methods:
Optical or infrared: The individual galaxies of clusters can be studied through optical or infrared imaging and spectroscopy. Galaxy clusters are found by optical or infrared telescopes by searching for overdensities, and then confirmed by finding several galaxies at a similar redshift. Infrared searches are more useful for finding more distant (higher redshift) clusters.
X-ray: The hot plasma emits X-rays that can be detected by X-ray telescopes. The cluster gas can be studied using both X-ray imaging and X-ray spectroscopy. Clusters are quite prominent in X-ray surveys and along with AGN are the brightest X-ray emitting extragalactic objects.
Radio: A number of diffuse structures emitting at radio frequencies have been found in clusters. Groups of radio sources (that may include diffuse structures or AGN) have been used as tracers of cluster location. At high redshift imaging around individual radio sources (in this case AGN) has been used to detect proto-clusters (clusters in the process of forming).
Sunyaev-Zel'dovich effect: The hot electrons in the intracluster medium scatter radiation from the cosmic microwave background through inverse Compton scattering. This produces a "shadow" in the observed cosmic microwave background at some radio frequencies.
Gravitational lensing: Clusters of galaxies contain enough matter to distort the observed orientations of galaxies behind them. The observed distortions can be used to model the distribution of dark matter in the cluster.
Temperature and density
Clusters of galaxies are the most recent and most massive objects to have arisen in the hierarchical structure formation of the Universe and the study of clusters tells one about the way galaxies form and evolve. Clusters have two important properties: their masses are large enough to retain any energetic gas ejected from member galaxies and the thermal energy of the gas within the cluster is observable within the X-ray bandpass. The observed state of gas within a cluster is determined by a combination of shock heating during accretion, radiative cooling, and thermal feedback triggered by that cooling. The density, temperature, and substructure of the intracluster X-ray gas therefore represents the entire thermal history of cluster formation. To better understand this thermal history one needs to study the entropy of the gas because entropy is the quantity most directly changed by increasing or decreasing the thermal energy of intracluster gas.
List of groups and clusters
See also
Entropy
Fossil galaxy group
Galactic orientation
Galaxy filament
Illustris project
Intracluster medium
Large-scale structure of the Cosmos
List of galaxy groups and clusters
Supercluster
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
References
Further reading
Large-scale structure of the cosmos | Galaxy groups and clusters | [
"Astronomy"
] | 1,281 | [
"Galaxy clusters",
"Astronomical objects"
] |
12,572 | https://en.wikipedia.org/wiki/Grus%20%28constellation%29 | Grus (, or colloquially ) is a constellation in the southern sky. Its name is Latin for the crane, a type of bird. It is one of twelve constellations conceived by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Grus first appeared on a celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas Uranometria of 1603. French explorer and astronomer Nicolas-Louis de Lacaille gave Bayer designations to its stars in 1756, some of which had been previously considered part of the neighbouring constellation Piscis Austrinus. The constellations Grus, Pavo, Phoenix and Tucana are collectively known as the "Southern Birds".
The constellation's brightest star, Alpha Gruis, is also known as Alnair and appears as a 1.7-magnitude blue-white star. Beta Gruis is a red giant variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. Six star systems have been found to have planets: the red dwarf Gliese 832 is one of the closest stars to Earth to have a planetary system. Another—WASP-95—has a planet that orbits every two days. Deep-sky objects found in Grus include the planetary nebula IC 5148, also known as the Spare Tyre Nebula, and a group of four interacting galaxies known as the Grus Quartet.
History
The stars that form Grus were originally considered part of the neighbouring constellation Piscis Austrinus (the southern fish), with Gamma Gruis seen as part of the fish's tail. The stars were first defined as a separate constellation by the astronomer Petrus Plancius, who created twelve new constellations based on the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. Grus first appeared on a 35-centimetre-diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. Its first depiction in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name Den Reygher, "The Heron", but Bayer followed Plancius and Hondius in using Grus.
An alternative name for the constellation, Phoenicopterus (Latin "flamingo"), was used briefly during the early 17th century, seen in the 1605 work Cosmographiae Generalis by Paul Merula of Leiden University and a c. 1625 globe by Dutch globe maker Pieter van den Keere. Astronomer Ian Ridpath has reported the symbolism likely came from Plancius originally, who had worked with both of these people. Grus and the nearby constellations Phoenix, Tucana and Pavo are collectively called the "Southern Birds".
The stars that correspond to Grus were generally too far south to be seen from China. In Chinese astronomy, Gamma and Lambda Gruis may have been included in the tub-shaped asterism Bàijiù, along with stars from Piscis Austrinus. In Central Australia, the Arrernte and Luritja people living on a mission in Hermannsburg viewed the sky as divided between them, east of the Milky Way representing Arrernte camps and west denoting Luritja camps. Alpha and Beta Gruis, along with Fomalhaut, Alpha Pavonis and the stars of Musca, were all claimed by the Arrernte.
Characteristics
Grus is bordered by Piscis Austrinus to the north, Sculptor to the northeast, Phoenix to the east, Tucana to the south, Indus to the southwest, and Microscopium to the west. Bayer straightened the tail of Piscis Austrinus to make way for Grus in his Uranometria. Covering 366 square degrees, it ranks 45th of the 88 modern constellations in size and covers 0.887% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Gru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 6 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.31° and −56.39°. Grus is located too far south to be seen by observers in the British Isles and the northern United States, though it can easily be seen from Florida or San Diego; the whole constellation is visible to observers south of latitude 33°N.
Features
Stars
Keyser and de Houtman assigned twelve stars to the constellation. Bayer depicted Grus on his chart, but did not assign its stars Bayer designations. French explorer and astronomer Nicolas-Louis de Lacaille labelled them Alpha to Phi in 1756 with some omissions. In 1879, American astronomer Benjamin Gould added Kappa, Nu, Omicron and Xi, which had all been catalogued by Lacaille but not given Bayer designations. Lacaille considered them too faint, while Gould thought otherwise. Xi Gruis had originally been placed in Microscopium. Conversely, Gould dropped Lacaille's Sigma as he thought it was too dim.
Grus has several bright stars. Marking the left wing is Alpha Gruis, a blue-white star of spectral type B6V and apparent magnitude 1.7, around 101 light-years from Earth. Its traditional name, Alnair, means "the bright one" and refers to its status as the brightest star in Grus (although the Arabians saw it as the brightest star in the Fish's tail, as Grus was then depicted). Alnair is around 380 times as luminous and has over 3 times the diameter of the Sun. Lying 5 degrees west of Alnair and denoting the Crane's heart is Beta Gruis (the proper name is Tiaki), a red giant of spectral type M5III. It has a diameter of 0.8 astronomical units (AU) (if placed in the Solar System it would extend to the orbit of Venus) and lies around 170 light-years from Earth. It is a variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. An imaginary line drawn from the Great Square of Pegasus through Fomalhaut will lead to Alnair and Beta Gruis.
Lying in the northwest corner of the constellation and marking the crane's eye is Gamma Gruis, a blue-white subgiant of spectral type B8III and magnitude 3.0 lying around 211 light-years from Earth. Also known as Al Dhanab, it has finished fusing its core hydrogen and has begun cooling and expanding, which will see it transform into a red giant.
There are several double stars visible to the naked eye in Grus. Forming a triangle with Alnair and Beta, Delta Gruis is an optical double whose components—Delta1 and Delta2—are separated by 45 arcseconds. Delta1 is a yellow giant of spectral type G7III and magnitude 4.0, 309 light-years from Earth, and may have its own magnitude 12 orange dwarf companion. Delta2 is a red giant of spectral type M4.5III and semiregular variable that ranges between magnitudes 3.99 and 4.2, located 325 light-years from Earth. It has around 3 times the mass and 135 times the diameter of the Sun. Mu Gruis, composed of Mu1 and Mu2, is also an optical double—both stars are yellow giants of spectral type G8III around 2.5 times as massive as the Sun with surface temperatures of around 4900 K. Mu1 is the brighter of the two at magnitude 4.8 located around 275 light-years from Earth, while Mu2 the dimmer at magnitude 5.11 lies 265 light-years distant from Earth. Pi Gruis, an optical double with a variable component, is composed of Pi1 Gruis and Pi2. Pi1 is a semi-regular red giant of spectral type S5, ranging from magnitude 5.31 to 7.01 over a period of 191 days, and is around 532 light-years from Earth. One of the brightest S-class stars to Earth viewers, it has a companion star of apparent magnitude 10.9 with sunlike properties, being a yellow main sequence star of spectral type G0V. The pair make up a likely binary system. Pi2 is a giant star of spectral type F3III-IV located around 130 light-years from Earth, and is often brighter than its companion at magnitude 5.6. Marking the right wing is Theta Gruis, yet another double star, lying 5 degrees east of Delta1 and Delta2.
RZ Gruis is a binary system of apparent magnitude 12.3 with occasional dimming to 13.4, whose components—a white dwarf and main sequence star—are thought to orbit each other roughly every 8.5 to 10 hours. It belongs to the UX Ursae Majoris subgroup of cataclysmic variable star systems, where material from the donor star is drawn to the white dwarf where it forms an accretion disc that remains bright and outshines the two component stars. The system is poorly understood, though the donor star has been calculated to be of spectral type F5V. These stars have spectra very similar to novae that have returned to quiescence after outbursts, yet they have not been observed to have erupted themselves. The American Association of Variable Star Observers recommends watching them for future events. CE Gruis (also known as Grus V-1) is a faint (magnitude 18–21) star system also composed of a white dwarf and donor star; in this case the two are so close they are tidally locked. Known as polars, material from the donor star does not form an accretion disc around the white dwarf, but rather streams directly onto it.
Six star systems are thought to have planetary systems. Tau1 Gruis is a yellow star of magnitude 6.0 located around 106 light-years away. It may be a main sequence star or be just beginning to depart from the sequence as it expands and cools. In 2002 the star was found to have a planetary companion. HD 215456, HD 213240 and WASP-95 are yellow sunlike stars discovered to have two planets, a planet and a remote red dwarf, and a hot Jupiter, respectively; this last—WASP-95b—completes an orbit round its sun in a mere two days. Gliese 832 is a red dwarf of spectral type M1.5V and apparent magnitude 8.66 located only 16.1 light-years distant; hence it is one of the nearest stars to the Solar System. A Jupiter-like planet—Gliese 832 b—orbiting the red dwarf over a period of 9.4±0.4 years was discovered in 2008. WISE 2220−3628 is a brown dwarf of spectral type Y, and hence one of the coolest star-like objects known. It has been calculated as being around 26 light-years distant from Earth.
In July 2019, astronomers reported finding a star, S5-HVS1, traveling faster than any other star detected so far. The star is in the Grus constellation in the southern sky, about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy.
Deep-sky objects
Nicknamed the spare-tyre nebula, IC 5148 is a planetary nebula located around 1 degree west of Lambda Gruis. Around 3000 light-years distant, it is expanding at 50 kilometres a second, one of the fastest rates of expansion of all planetary nebulae.
Northeast of Theta Gruis are four interacting galaxies known as the Grus Quartet. These galaxies are NGC 7552, NGC 7590, NGC 7599, and NGC 7582. The latter three galaxies occupy an area of sky only 10 arcminutes across and are sometimes referred to as the "Grus Triplet," although all four are part of a larger loose group of galaxies called the IC 1459 Grus Group. NGC 7552 and 7582 are exhibiting high starburst activity; this is thought to have arisen because of the tidal forces from interacting. Located on the border of Grus with Piscis Austrinus, IC 1459 is a peculiar E3 giant elliptical galaxy. It has a fast counterrotating stellar core, and shells and ripples in its outer region. The galaxy has an apparent magnitude of 11.9 and is around 80 million light-years distant.
NGC 7424 is a barred spiral galaxy with an apparent magnitude of 10.4, located around 4 degrees west of the Grus Triplet. Approximately 37.5 million light-years distant, it is about 100,000 light-years in diameter, has well defined spiral arms and is thought to resemble the Milky Way. Two ultraluminous X-ray sources and one supernova have been observed in NGC 7424. SN 2001ig was discovered in 2001 and classified as a Type IIb supernova, one that initially showed a weak hydrogen line in its spectrum, but this emission later became undetectable and was replaced by lines of oxygen, magnesium and calcium, as well as other features that resembled the spectrum of a Type Ib supernova. A massive star of spectral type F, A or B is thought to be the surviving binary companion to SN 2001ig, which was believed to have been a Wolf–Rayet star.
Located near Alnair is NGC 7213, a face-on type 1 Seyfert galaxy located approximately 71.7 million light-years from Earth. It has an apparent magnitude of 12.1. Appearing undisturbed in visible light, it shows signs of having undergone a collision or merger when viewed at longer wavelengths, with disturbed patterns of ionized hydrogen including a filament of gas around 64,000 light-years long. It is part of a group of ten galaxies.
NGC 7410 is a spiral galaxy discovered by British astronomer John Herschel during observations at the Cape of Good Hope in October 1834. The galaxy has a visual magnitude of 11.7 and is approximately 122 million light-years distant from Earth.
See also
Grus in Chinese astronomy
List of star names in Grus
Notes
References
Cited text
External links
The Deep Photographic Guide to the Constellations: Grus
The clickable Grus
Starry Night Photography – Grus Constellation
Southern constellations
Constellations listed by Petrus Plancius | Grus (constellation) | [
"Astronomy"
] | 3,084 | [
"Constellations listed by Petrus Plancius",
"Grus (constellation)",
"Constellations",
"Southern constellations"
] |
12,581 | https://en.wikipedia.org/wiki/Glass | Glass is an amorphous (non-crystalline) solid. Because it is often transparent and chemically inert, glass has found widespread practical, technological, and decorative use in window panes, tableware, and optics. Some common objects made of glass are named after the material, e.g., a "glass" for drinking, "glasses" for vision correction, and a "magnifying glass".
Glass is most often formed by rapid cooling (quenching) of the molten form. Some glasses such as volcanic glass are naturally occurring, and obsidian has been used to make arrowheads and knives since the Stone Age. Archaeological evidence suggests glassmaking dates back to at least 3600 BC in Mesopotamia, Egypt, or Syria. The earliest known glass objects were beads, perhaps created accidentally during metalworking or the production of faience, which is a form of pottery using lead glazes.
Due to its ease of formability into any shape, glass has been traditionally used for vessels, such as bowls, vases, bottles, jars and drinking glasses. Soda–lime glass, containing around 70% silica, accounts for around 90% of modern manufactured glass. Glass can be coloured by adding metal salts or painted and printed with vitreous enamels, leading to its use in stained glass windows and other glass art objects.
The refractive, reflective and transmission properties of glass make glass suitable for manufacturing optical lenses, prisms, and optoelectronics materials. Extruded glass fibres have applications as optical fibres in communications networks, thermal insulating material when matted as glass wool to trap air, or in glass-fibre reinforced plastic (fibreglass).
Microscopic structure
The standard definition of a glass (or vitreous solid) is a non-crystalline solid formed by rapid melt quenching. However, the term "glass" is often defined in a broader sense, to describe any non-crystalline (amorphous) solid that exhibits a glass transition when heated towards the liquid state.
Glass is an amorphous solid. Although the atomic-scale structure of glass shares characteristics of the structure of a supercooled liquid, glass exhibits all the mechanical properties of a solid. As in other amorphous solids, the atomic structure of a glass lacks the long-range periodicity observed in crystalline solids. Due to chemical bonding constraints, glasses do possess a high degree of short-range order with respect to local atomic polyhedra. The notion that glass flows to an appreciable extent over extended periods well below the glass transition temperature is not supported by empirical research or theoretical analysis (see viscosity in solids). Though atomic motion at glass surfaces can be observed, and viscosity on the order of 10¹⁷–10¹⁸ Pa·s can be measured in glass, such a high value reinforces the fact that glass would not change shape appreciably over even large periods of time.
Formation from a supercooled liquid
For melt quenching, if the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead, the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, a glass exists in a structurally metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase.
Glass is sometimes considered to be a liquid due to its lack of a first-order phase transition where certain thermodynamic variables such as volume, entropy and enthalpy are discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition where the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. However, the equilibrium theory of phase transformations does not hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids.
Occurrence in nature
Glass can form naturally from volcanic magma. Obsidian is a common volcanic glass with high silica (SiO2) content formed when felsic lava extruded from a volcano cools rapidly. Impactite is a form of glass formed by the impact of a meteorite, where Moldavite (found in central and eastern Europe), and Libyan desert glass (found in areas in the eastern Sahara, the deserts of eastern Libya and western Egypt) are notable examples. Vitrification of quartz can also occur when lightning strikes sand, forming hollow, branching rootlike structures called fulgurites. Trinitite is a glassy residue formed from the desert floor sand at the Trinity nuclear bomb test site. Edeowie glass, found in South Australia, is proposed to originate from Pleistocene grassland fires, lightning strikes, or hypervelocity impact by one or several asteroids or comets.
History
Naturally occurring obsidian glass was used by Stone Age societies as it fractures along very sharp edges, making it ideal for cutting tools and weapons.
Glassmaking dates back at least 6000 years, long before humans had discovered how to smelt iron. Archaeological evidence suggests that the first true synthetic glass was made in Lebanon and the coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid-third millennium BC, were beads, perhaps initially created as accidental by-products of metalworking (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing.
Early glass was rarely transparent and often contained impurities and imperfections, and is technically faience rather than true glass, which did not appear until the 15th century BC. However, red-orange glass beads excavated from the Indus Valley Civilization dated before 1700 BC (possibly as early as 1900 BC) predate sustained glass production, which appeared around 1600 BC in Mesopotamia and 1500 BC in Egypt.
During the Late Bronze Age, there was a rapid growth in glassmaking technology in Egypt and Western Asia. Archaeological finds from this period include coloured glass ingots, vessels, and beads.
Much early glass production relied on grinding techniques borrowed from stoneworking, such as grinding and carving glass in a cold state.
The term glass has its origins in the late Roman Empire, in the Roman glass making centre at Trier (located in current-day Germany) where the late-Latin term glesum originated, likely from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman Empire in domestic, funerary, and industrial contexts, as well as trade items in marketplaces in distant provinces. Examples of Roman glass have been found outside of the former Roman Empire in China, the Baltics, the Middle East, and India. The Romans perfected cameo glass, produced by etching and carving through fused layers of different colours to produce a design in relief on the glass object.
In post-classical West Africa, Benin was a manufacturer of glass and glass beads.
Glass was used extensively in Europe during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. From the 10th century onwards, glass was employed in stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint-Denis. By the 14th century, architects were designing buildings with walls of stained glass such as Sainte-Chapelle, Paris, (1203–1248) and the East end of Gloucester Cathedral. With the change in architectural style during the Renaissance period in Europe, the use of large stained glass windows became much less prevalent, although stained glass had a major revival with Gothic Revival architecture in the 19th century.
During the 13th century, the island of Murano, Venice, became a centre for glass making, building on medieval techniques to produce colourful ornamental pieces in large quantities. Murano glass makers developed the exceptionally clear colourless glass cristallo, so called for its resemblance to natural crystal, which was extensively used for windows, mirrors, ships' lanterns, and lenses. In the 13th, 14th, and 15th centuries, enamelling and gilding on glass vessels were perfected in Egypt and Syria. Towards the end of the 17th century, Bohemia became an important region for glass production, remaining so until the start of the 20th century. By the 17th century, glass in the Venetian tradition was also being produced in England. In about 1675, George Ravenscroft invented lead crystal glass, with cut glass becoming fashionable in the 18th century. Ornamental glass objects became an important art medium during the Art Nouveau period in the late 19th century.
Throughout the 20th century, new mass production techniques led to the widespread availability of glass in much larger amounts, making it practical as a building material and enabling new applications of glass. In the 1920s a mould-etch process was developed, in which art was etched directly into the mould so that each cast piece emerged from the mould with the image already on the surface of the glass. This reduced manufacturing costs and, combined with a wider use of coloured glass, led to cheap glassware in the 1930s, which later became known as Depression glass. In the 1950s, Pilkington Bros., England, developed the float glass process, producing high-quality distortion-free flat sheets of glass by floating on molten tin. Modern multi-story buildings are frequently constructed with curtain walls made almost entirely of glass. Laminated glass has been widely applied to vehicles for windscreens. Optical glass for spectacles has been used since the Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other applications in medicine and science. Glass is also employed as the aperture cover in many solar energy collectors.
In the 21st century, glass manufacturers have developed different brands of chemically strengthened glass for widespread application in touchscreens for smartphones, tablet computers, and many other types of information appliances. These include Gorilla Glass, developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation.
Physical properties
Optical
Glass is in widespread use in optical systems due to its ability to refract, reflect, and transmit light following geometrical optics. The most common and oldest applications of glass in optics are as lenses, windows, mirrors, and prisms. The key optical properties of glass (refractive index, dispersion, and transmission) are strongly dependent on its chemical composition and, to a lesser degree, on its thermal history. Optical glass typically has a refractive index of 1.4 to 2.4, and an Abbe number (which characterises dispersion) of 15 to 100. The refractive index may be modified by high-density (refractive index increases) or low-density (refractive index decreases) additives.
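As an illustration of how dispersion is quantified, the sketch below computes the Abbe number from refractive indices at three standard wavelengths using the conventional formula V_d = (n_d - 1)/(n_F - n_C); the index values are assumed, roughly those of a common borosilicate crown glass.

```python
# Rough sketch: computing the Abbe number (a measure of dispersion) from
# refractive indices at three standard wavelengths. The index values below
# are illustrative, approximately those of a common borosilicate crown glass.

def abbe_number(n_d, n_F, n_C):
    """V_d = (n_d - 1) / (n_F - n_C); higher values mean lower dispersion."""
    return (n_d - 1.0) / (n_F - n_C)

# Indices at the helium d line (587.6 nm), hydrogen F line (486.1 nm)
# and hydrogen C line (656.3 nm) -- assumed example values.
n_d, n_F, n_C = 1.5168, 1.5224, 1.5143
print(f"Abbe number V_d = {abbe_number(n_d, n_F, n_C):.1f}")   # ~63.8 for these values
```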
Glass transparency results from the absence of grain boundaries which diffusely scatter light in polycrystalline materials. Semi-opacity due to crystallization may be induced in many glasses by maintaining them for a long period at a temperature just insufficient to cause fusion. In this way, the crystalline, devitrified material, known as Réaumur's glass porcelain is produced. Although generally transparent to visible light, glasses may be opaque to other wavelengths of light. While silicate glasses are generally opaque to infrared wavelengths with a transmission cut-off at 4 μm, heavy-metal fluoride and chalcogenide glasses are transparent to infrared wavelengths of 7 to 18 μm. The addition of metallic oxides results in different coloured glasses as the metallic ions will absorb wavelengths of light corresponding to specific colours.
Other
In the manufacturing process, glasses can be poured, formed, extruded and moulded into forms ranging from flat sheets to highly intricate shapes. The finished product is brittle but can be laminated or tempered to enhance durability. Glass is typically inert, resistant to chemical attack, and can mostly withstand the action of water, making it an ideal material for the manufacture of containers for foodstuffs and most chemicals. Nevertheless, although usually highly resistant to chemical attack, glass will corrode or dissolve under some conditions. The materials that make up a particular glass composition affect how quickly the glass corrodes. Glasses containing a high proportion of alkali or alkaline earth elements are more susceptible to corrosion than other glass compositions.
The density of glass varies with chemical composition, ranging from about 2.2 g/cm3 for fused silica to over 7 g/cm3 for dense flint glass. Glass is stronger than most metals: the theoretical tensile strength of pure, flawless glass is estimated to be in the gigapascal range owing to its ability to undergo reversible compression without fracture. However, the presence of scratches, bubbles, and other microscopic flaws lowers the practical strength of most commercial glasses to a small fraction of this value. Several processes such as toughening can increase the strength of glass. Carefully drawn, flawless glass fibres can approach the theoretical strength.
Reputed flow
The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. The sags and ripples observed in old glass were already there the day it was made; manufacturing processes used in the past produced sheets with imperfect surfaces and non-uniform thickness (the near-perfect float glass used today only became widespread in the 1960s).
A 2017 study computed the rate of flow of the medieval glass used in Westminster Abbey from the year 1268. The study found that the room temperature viscosity of this glass was roughly 10^24 Pa·s, which is about 10^16 times less viscous than a previous estimate made in 1998, which focused on soda-lime silicate glass. Even with this lower viscosity, the study authors calculated that the maximum flow rate of medieval glass is 1 nm per billion years, making it impossible to observe on a human timescale.
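A back-of-the-envelope sketch of why such flow is imperceptible follows. It treats a vertical pane as a simple Newtonian fluid under its own weight, with an assumed pane height and glass density; this is far cruder than the study's model and overestimates the flow, yet even so the accumulated displacement over eight centuries comes out below a nanometre.

```python
# Order-of-magnitude sketch: how fast would a vertical pane of medieval glass
# flow under its own weight if treated as a simple Newtonian fluid?
# Assumed values: pane height 1 m, density 2500 kg/m^3; the viscosity is the
# ~1e24 Pa*s room-temperature figure quoted for the Westminster Abbey glass.

g = 9.81              # m/s^2
rho = 2500.0          # kg/m^3, assumed density of old soda-lime glass
h = 1.0               # m, assumed pane height
eta = 1e24            # Pa*s, room-temperature viscosity from the 2017 study

stress = rho * g * h          # shear stress scale at the base of the pane, Pa
strain_rate = stress / eta    # 1/s, Newtonian flow: strain rate = stress / viscosity

seconds_per_year = 3.156e7
centuries = 8                                  # roughly the age of the 1268 glass
strain = strain_rate * centuries * 100 * seconds_per_year
print(f"Accumulated strain over ~{centuries} centuries: {strain:.1e}")
print(f"Displacement over a 1 m pane: {strain * h * 1e9:.1e} nm")   # well under 1 nm
```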
Types
Silicate glasses
Silicon dioxide (SiO2) is a common fundamental constituent of glass. Fused quartz is a glass made from chemically pure silica. It has very low thermal expansion and excellent resistance to thermal shock, being able to survive immersion in water while red hot, resists high temperatures (1000–1500 °C) and chemical weathering, and is very hard. It is also transparent to a wider spectral range than ordinary glass, extending from the visible further into both the UV and IR ranges, and is sometimes used where transparency to these wavelengths is necessary. Fused quartz is used for high-temperature applications such as furnace tubes, lighting tubes, melting crucibles, etc. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Therefore, normally, other substances (fluxes) are added to lower the melting temperature and simplify glass processing.
Soda–lime glass
Sodium carbonate (Na2CO3, "soda") is a common additive and acts to lower the glass-transition temperature. However, sodium silicate is water-soluble, so lime (CaO, calcium oxide, generally obtained from limestone), along with magnesium oxide (MgO) and aluminium oxide (Al2O3), is commonly added to improve chemical durability. Soda–lime glasses, containing soda (Na2O), lime (CaO), magnesia (MgO), and alumina (Al2O3), account for over 75% of manufactured glass and contain about 70 to 74% silica by weight. Soda–lime–silicate glass is transparent, easily formed, and most suitable for window glass and tableware. However, it has a high thermal expansion and poor resistance to heat. Soda–lime glass is typically used for windows, bottles, light bulbs, and jars.
Borosilicate glass
Borosilicate glasses (e.g. Pyrex, Duran) typically contain 5–13% boron trioxide (B2O3). Borosilicate glasses have fairly low coefficients of thermal expansion (7740 Pyrex CTE is 3.25 × 10^-6/°C as compared to about 9 × 10^-6/°C for a typical soda–lime glass). They are, therefore, less subject to stress caused by thermal expansion and thus less vulnerable to cracking from thermal shock. They are commonly used for e.g. labware, household cookware, and sealed beam car head lamps.
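The practical consequence of the lower expansion coefficient can be illustrated with the standard expression for the tensile stress at a suddenly cooled, constrained glass surface, sigma = E·alpha·dT/(1 - nu). In the sketch below the two CTE values come from the figures above, while the Young's modulus, Poisson's ratio, and temperature step are assumed typical values.

```python
# Sketch: why a lower coefficient of thermal expansion (CTE) means better
# thermal-shock resistance. For a surface suddenly cooled by dT while the bulk
# stays hot, the tensile surface stress is roughly sigma = E * alpha * dT / (1 - nu).
# The CTE values come from the text; E and nu are assumed typical values for glass.

E = 70e9        # Pa, Young's modulus (assumed typical for silicate glass)
nu = 0.2        # Poisson's ratio (assumed)
dT = 100.0      # K, sudden temperature difference

for name, alpha in [("borosilicate (Pyrex 7740)", 3.25e-6),
                    ("typical soda-lime", 9e-6)]:
    sigma = E * alpha * dT / (1 - nu)
    print(f"{name}: surface stress ~ {sigma / 1e6:.0f} MPa")
# The soda-lime figure approaches the practical strength of ordinary glassware,
# while the borosilicate figure stays well below it.
```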
Lead glass
The addition of lead(II) oxide into silicate glass lowers the melting point and viscosity of the melt. The high density of lead glass (silica + lead oxide (PbO) + potassium oxide (K2O) + soda (Na2O) + zinc oxide (ZnO) + alumina) results in a high electron density, and hence high refractive index, making the look of glassware more brilliant and causing noticeably more specular reflection and increased optical dispersion. Lead glass has a high elasticity, making the glassware more workable and giving rise to a clear "ring" sound when struck. However, lead glass cannot withstand high temperatures well. Lead oxide also facilitates the solubility of other metal oxides and is used in coloured glass. The viscosity decrease of lead glass melt is very significant (roughly 100 times in comparison with soda glass); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The high ionic radius of the Pb2+ ion renders it highly immobile and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda–lime glass (10^8.5 vs 10^6.5 Ω⋅cm, DC at 250 °C).
Aluminosilicate glass
Aluminosilicate glass typically contains 5–10% alumina (Al2O3). Aluminosilicate glass tends to be more difficult to melt and shape compared to borosilicate compositions but has excellent thermal resistance and durability. Aluminosilicate glass is extensively used for fibreglass, used for making glass-reinforced plastics (boats, fishing rods, etc.), top-of-stove cookware, and halogen bulb glass.
Other oxide additives
The addition of barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses. Iron can be incorporated into glass to absorb infrared radiation, for example in heat-absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs ultraviolet wavelengths. Fluorine lowers the dielectric constant of glass. Fluorine is highly electronegative and lowers the polarizability of the material. Fluoride silicate glasses are used in the manufacture of integrated circuits as an insulator.
Glass-ceramics
Glass-ceramic materials contain both non-crystalline glass and crystalline ceramic phases. They are formed by controlled nucleation and partial crystallisation of a base glass by heat treatment. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. Glass-ceramics exhibit advantageous thermal, chemical, biological, and dielectric properties as compared to metals or organic polymers.
The most commercially important property of glass-ceramics is their imperviousness to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking and industrial processes. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
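The zero-expansion balance described above can be sketched with a simple rule of mixtures; the two phase CTE values below are assumed, chosen only so that the zero crossing falls near the ~70% crystalline fraction mentioned in the text.

```python
# Sketch: balancing the negative CTE of the crystalline phase against the
# positive CTE of the residual glassy phase with a simple rule of mixtures.
# The two CTE values are assumed, chosen only to illustrate how a ~70%
# crystalline fraction can give a near-zero net expansion coefficient.

alpha_glass = 3.5e-6      # 1/K, residual glassy phase (assumed)
alpha_crystal = -1.5e-6   # 1/K, crystalline phase, e.g. a beta-quartz solid solution (assumed)

def net_cte(x_crystal):
    """Volume-fraction-weighted average of the two phase CTEs."""
    return x_crystal * alpha_crystal + (1 - x_crystal) * alpha_glass

# Crystalline fraction at which the net CTE crosses zero:
x_zero = alpha_glass / (alpha_glass - alpha_crystal)
print(f"Net CTE is ~zero at a crystalline fraction of {x_zero:.0%}")
print(f"Check: net CTE there = {net_cte(x_zero):.2e} 1/K")
```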
Fibreglass
Fibreglass (also called glass fibre reinforced plastic, GRP) is a composite material made by reinforcing a plastic resin with glass fibres. It is made by melting glass and stretching the glass into fibres. These fibres are woven together into a cloth and left to set in a plastic resin.
Fibreglass has the properties of being lightweight and corrosion resistant and is a good insulator enabling its use as building insulation material and for electronic housing for consumer products. Fibreglass was originally used in the United Kingdom and United States during World War II to manufacture radomes. Uses of fibreglass include building and construction materials, boat hulls, car body parts, and aerospace composite materials.
Glass-fibre wool is an excellent thermal and sound insulation material, commonly used in buildings (e.g. attic and cavity wall insulation), and plumbing (e.g. pipe insulation), and soundproofing. It is produced by forcing molten glass through a fine mesh by centripetal force and breaking the extruded glass fibres into short lengths using a stream of high-velocity air. The fibres are bonded with an adhesive spray and the resulting wool mat is cut and packed in rolls or panels.
Non-silicate glasses
Besides common silica-based glasses many other inorganic and organic materials may also form glasses, including metals, aluminates, phosphates, borates, chalcogenides, fluorides, germanates (glasses based on GeO2), tellurites (glasses based on TeO2), antimonates (glasses based on Sb2O3), arsenates (glasses based on As2O3), titanates (glasses based on TiO2), tantalates (glasses based on Ta2O5), nitrates, carbonates, plastics, acrylic, and many other substances. Some of these glasses (e.g. Germanium dioxide (GeO2, Germania), in many respects a structural analogue of silica, fluoride, aluminate, phosphate, borate, and chalcogenide glasses) have physicochemical properties useful for their application in fibre-optic waveguides in communication networks and other specialised technological applications.
Silica-free glasses may often have poor glass-forming tendencies. Novel techniques, including containerless processing by aerodynamic levitation (cooling the melt whilst it floats on a gas stream) or splat quenching (pressing the melt between two metal anvils or rollers), may be used to increase the cooling rate or to reduce crystal nucleation triggers.
Amorphous metals
In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk.
Several alloys have been produced in layers with thicknesses exceeding 1 millimetre. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sells several zirconium-based BMGs.
Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys.
Experimental evidence indicates that the system Al-Fe-Si may undergo a first-order transition to an amorphous form (dubbed "q-glass") on rapid cooling from the melt. Transmission electron microscopy (TEM) images indicate that q-glass nucleates from the melt as discrete particles with uniform spherical growth in all directions. While x-ray diffraction reveals the isotropic nature of q-glass, a nucleation barrier exists implying an interfacial discontinuity (or internal surface) between the glass and melt phases.
Polymers
Important polymer glasses include amorphous and glassy pharmaceutical compounds. These are useful because the solubility of the compound is greatly increased when it is amorphous compared to the same crystalline composition. Many emerging pharmaceuticals are practically insoluble in their crystalline forms. Many polymer thermoplastics familiar to everyday use are glasses. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative to traditional glass.
Molecular liquids and molten salts
Molecular liquids, electrolytes, molten salts, and aqueous solutions are mixtures of different molecules or ions that do not form a covalent network but interact only through weak van der Waals forces or transient hydrogen bonds. In a mixture of three or more ionic species of dissimilar size and shape, crystallization can be so difficult that the liquid can easily be supercooled into a glass. Examples include LiCl:RH2O (a solution of lithium chloride salt and water molecules) in the composition range 4<R<8, sugar glass, or Ca0.4K0.6(NO3)1.4. Glass electrolytes in the form of Ba-doped Li-glass and Ba-doped Na-glass have been proposed as solutions to problems identified with organic liquid electrolytes used in modern lithium-ion battery cells.
Production
Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda–lime glass for mass production is melted in glass-melting furnaces. Smaller-scale furnaces for speciality glasses include electric melters, pot furnaces, and day tanks.
After melting, homogenization and refining (removal of bubbles), the glass is formed. This may be achieved manually by glassblowing, which involves gathering a mass of hot semi-molten glass, inflating it into a bubble using a hollow blowpipe, and forming it into the required shape by blowing, swinging, rolling, or moulding. While hot, the glass can be worked using hand tools, cut with shears, and additional parts such as handles or feet attached by welding.
Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance.
Once the desired form is obtained, glass is usually annealed for the removal of stresses and to increase the glass's hardness and durability. Surface treatments, coatings or lamination may follow to improve the chemical durability (glass container coatings, glass container internal treatment), strength (toughened glass, bulletproof glass, windshields), or optical properties (insulated glazing, anti-reflective coating).
New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating selenium dioxide (SeO2). Also, more readily reacting raw materials may be preferred over relatively inert ones, such as aluminium hydroxide (Al(OH)3) over alumina (Al2O3). Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), stirring the melt, and crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing.
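As an example of the batch calculations involved in preparing raw materials, the sketch below converts a simplified target oxide composition into masses of sand, soda ash, and limestone, accounting for the CO2 lost when the carbonates decompose in the melt; the three-oxide recipe and batch size are assumed illustrative values.

```python
# Sketch of a simple glass batch calculation: how much sand, soda ash and
# limestone are needed for 100 kg of a soda-lime glass, given that the
# carbonates lose CO2 in the melt. The target composition is an assumed,
# simplified three-oxide recipe (real batches contain more components).

target_oxides = {"SiO2": 0.74, "Na2O": 0.16, "CaO": 0.10}   # weight fractions, assumed
glass_mass = 100.0                                          # kg of finished glass

# Raw material per kg of oxide = molar mass of raw material / molar mass of oxide
factors = {
    "SiO2": ("sand (SiO2)",        1.0),                    # sand is already the oxide
    "Na2O": ("soda ash (Na2CO3)",  105.99 / 61.98),
    "CaO":  ("limestone (CaCO3)",  100.09 / 56.08),
}

batch_total = 0.0
for oxide, frac in target_oxides.items():
    raw_name, factor = factors[oxide]
    raw_mass = glass_mass * frac * factor
    batch_total += raw_mass
    print(f"{raw_name:22s}: {raw_mass:6.1f} kg")
print(f"{'total batch':22s}: {batch_total:6.1f} kg "
      f"({batch_total - glass_mass:.1f} kg lost mainly as CO2)")
```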
Colour
Colour in glass may be obtained by addition of homogeneously distributed electrically charged ions (or colour centres). While ordinary soda–lime glass appears colourless in thin section, iron(II) oxide (FeO) impurities produce a green tint in thick sections. Manganese dioxide (MnO2), which gives glass a purple colour, may be added to remove the green tint given by FeO. FeO and chromium(III) oxide (Cr2O3) additives are used in the production of green bottles. Iron(III) oxide, on the other hand, produces yellow or yellow-brown glass. Low concentrations (0.025 to 0.1%) of cobalt oxide (CoO) produce rich, deep blue cobalt glass. Chromium is a very powerful colouring agent, yielding dark green.
Sulphur combined with carbon and iron salts produces amber glass ranging from yellowish to almost black. A glass melt can also acquire an amber colour from a reducing combustion atmosphere. Cadmium sulfide produces imperial red, and combined with selenium can produce shades of yellow, orange, and red. Addition of copper(II) oxide (CuO) produces a turquoise colour in glass, in contrast to copper(I) oxide (Cu2O) which gives a dull red-brown colour.
Uses
Architecture and windows
Soda–lime sheet glass is typically used as a transparent glazing material, typically as windows in external walls of buildings. Float or rolled sheet glass products are cut to size either by scoring and snapping the material, laser cutting, water jets, or diamond-bladed saw. The glass may be thermally or chemically tempered (strengthened) for safety and bent or curved during heating. Surface coatings may be added for specific functions such as scratch resistance, blocking specific wavelengths of light (e.g. infrared or ultraviolet), dirt-repellence (e.g. self-cleaning glass), or switchable electrochromic coatings.
Structural glazing systems are one of the most significant architectural innovations of modern times; glass buildings now often dominate the skylines of many modern cities. These systems use stainless steel fittings countersunk into recesses in the corners of the glass panels, allowing strengthened panes to appear unsupported and creating a flush exterior. Structural glazing systems have their roots in the iron and glass conservatories of the nineteenth century.
Tableware
Glass is an essential component of tableware and is typically used for water, beer and wine drinking glasses. Wine glasses are typically stemware, i.e. goblets formed from a bowl, stem, and foot. Crystal or Lead crystal glass may be cut and polished to produce decorative drinking glasses with gleaming facets. Other uses of glass in tableware include decanters, jugs, plates, and bowls.
Packaging
The inert and impermeable nature of glass makes it a stable and widely used material for food and drink packaging as glass bottles and jars. Most container glass is soda–lime glass, produced by blowing and pressing techniques. Container glass has a lower magnesium oxide and sodium oxide content than flat glass, and a higher silica, calcium oxide, and aluminium oxide content. Its higher content of water-insoluble oxides imparts slightly higher chemical durability against water, which is advantageous for storing beverages and food. Glass packaging is sustainable, readily recycled, reusable and refillable.
For electronics applications, glass can be used as a substrate in the manufacture of integrated passive devices, thin-film bulk acoustic resonators, and as a hermetic sealing material in device packaging, including very thin solely glass based encapsulation of integrated circuits and other semiconductors in high manufacturing volumes.
Laboratories
Glass is an important material in scientific laboratories for the manufacture of experimental apparatus because it is relatively cheap, readily formed into required shapes for experiment, easy to keep clean, can withstand heat and cold treatment, is generally non-reactive with many reagents, and its transparency allows for the observation of chemical reactions and processes. Laboratory glassware applications include flasks, Petri dishes, test tubes, pipettes, graduated cylinders, glass-lined metallic containers for chemical processing, fractionation columns, glass pipes, Schlenk lines, gauges, and thermometers. Although most standard laboratory glassware has been mass-produced since the 1920s, scientists still employ skilled glassblowers to manufacture bespoke glass apparatus for their experimental requirements.
Optics
Glass is a ubiquitous material in optics because of its ability to refract, reflect, and transmit light. These and other optical properties can be controlled by varying chemical compositions, thermal treatment, and manufacturing techniques. The many applications of glass in optics include glasses for eyesight correction, imaging optics (e.g. lenses and mirrors in telescopes, microscopes, and cameras), fibre optics in telecommunications technology, and integrated optics. Microlenses and gradient-index optics (where the refractive index is non-uniform) find application in e.g. reading optical discs, laser printers, photocopiers, and laser diodes.
Modern Art
The 19th century saw a revival in ancient glassmaking techniques including cameo glass, achieved for the first time since the Roman Empire, initially mostly for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy in the first French wave of the movement, producing coloured vases and similar pieces, often in cameo glass or lustre glass techniques.
Louis Comfort Tiffany in America specialised in stained glass, both secular and religious, in panels and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. Small studios may hand-produce glass artworks. Techniques for producing glass art include blowing, kiln-casting, fusing, slumping, pâte de verre, flame-working, hot-sculpting and cold-working. Cold work includes traditional stained glass work and other methods of shaping glass at room temperature. Objects made out of glass include vessels, paperweights, marbles, beads, sculptures and installation art.
See also
Aluminium oxynitride transparent ceramic
Fire glass
Flexible glass
Glass in green buildings
Kimberley points
Prince Rupert's drop
Smart glass
References
External links
The Story of Glass Making in Canada from The Canadian Museum of Civilization.
"How Your Glass Ware Is Made" by George W. Waltz, February 1951, Popular Science.
All About Glass from the Corning Museum of Glass: a collection of articles, multimedia, and virtual books all about glass, including the Glass Dictionary.
Amorphous solids
Dielectrics
Materials
Packaging materials
Sculpture materials
Windows | Glass | [
"Physics",
"Chemistry"
] | 7,224 | [
"Glass",
"Unsolved problems in physics",
"Homogeneous chemical mixtures",
"Materials",
"Dielectrics",
"Amorphous solids",
"Matter"
] |
12,582 | https://en.wikipedia.org/wiki/Gel%20electrophoresis | Gel electrophoresis is an electrophoresis method for separation and analysis of biomacromolecules (DNA, RNA, proteins, etc.) and their fragments, based on their size and charge through a gel. It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments or to separate proteins by charge.
Nucleic acid molecules are separated by applying an electric field to move the negatively charged molecules through a gel matrix of agarose, polyacrylamide, or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. Proteins are separated by the charge in agarose because the pores of the gel are too large to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles.
Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis. Gels suppress the thermal convection caused by the application of the electric field and can also simply serve to maintain the finished separation so that a post electrophoresis stain can be applied. DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or Southern blotting for further characterization.
Physical basis
Electrophoresis is a process that enables the sorting of molecules based on charge, size, or shape. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agarose or polyacrylamide. The electric field consists of a negative charge at one end which pushes the molecules through the gel, and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric field is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel.
The term "gel" in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrates without cross-links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes.
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not all uniform the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode which is negatively charged (because this is an electrolytic rather than galvanic cell), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes.
If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule (alternatively, this can be stated as the distance traveled being inversely proportional to the log of the sample's molecular weight).
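The log-linear relationship can be used to estimate the size of an unknown band from a marker lane, for instance by fitting a straight line to log10(size) against migration distance. The sketch below does this with made-up ladder sizes and distances; it is illustrative only, not a description of any particular analysis software.

```python
# Sketch: estimating the size of an unknown DNA band from a ladder run in a
# parallel lane, using the approximately linear relationship between migration
# distance and log10(fragment size). Ladder sizes and measured distances below
# are made-up example numbers.

import math

ladder = [(10000, 12.0), (5000, 18.5), (2000, 27.0),
          (1000, 33.5), (500, 40.0)]          # (size in bp, distance in mm), assumed

# Least-squares fit of log10(size) = a * distance + b
xs = [d for _, d in ladder]
ys = [math.log10(s) for s, _ in ladder]
n = len(ladder)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

unknown_distance = 30.0                        # mm, measured for the unknown band
estimated_size = 10 ** (a * unknown_distance + b)
print(f"Estimated fragment size: ~{estimated_size:.0f} bp")
```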
There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
Types of gel
The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). Agarose gels, on the other hand, have lower resolving power for DNA but have a greater range of separation, and are therefore used for DNA fragments of usually 50–20,000 bp in size, but the resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction.
Agarose
Agarose gels are made from the natural polysaccharide polymers extracted from seaweed.
Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pair to several megabases (millions of bases), the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead, high percentage agarose gels should be run with pulsed field electrophoresis (PFE) or field inversion electrophoresis.
"Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications."
Polyacrylamide
Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder used in creating a gel. Care must be used when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms.
Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. It is currently most often used in the field of immunology and protein analysis, often used to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot.
Typically resolving gels are made in 6%, 8%, 10%, 12% or 15%. Stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein that one wishes to identify or probe in the sample. The smaller the known weight, the higher the percentage that should be used. Changes in the buffer system of the gel can help to further resolve proteins of very small sizes.
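A rough way to express the choice described above is a lookup from expected protein size to candidate gel percentages; the size ranges below are commonly quoted rules of thumb and should be read as assumed, illustrative values rather than fixed limits.

```python
# Sketch: picking a resolving-gel acrylamide percentage from the expected
# protein size. The size ranges below are rough, commonly quoted guidelines
# and are given here only as assumed illustrative values.

GUIDELINES = [          # (percentage, approximate separation range in kDa)
    (6,  (60, 200)),
    (8,  (40, 120)),
    (10, (20, 80)),
    (12, (15, 60)),
    (15, (10, 45)),
]

def suggest_percentage(protein_kda):
    """Return gel percentages whose nominal range covers the target size."""
    return [p for p, (lo, hi) in GUIDELINES if lo <= protein_kda <= hi]

print(suggest_percentage(25))   # e.g. [10, 12, 15] for a 25 kDa protein
```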
Starch
Partially hydrolysed potato starch makes for another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Naphthol Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%.
Gel conditions
Denaturing
Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. Thus, the secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed.
Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate, usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure, a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol. For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis.
Denaturing conditions are necessary for proper estimation of molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA, which may result in a change of its electrophoretic mobility. Urea, DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, but it may still be the method of choice for some samples.
Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE).
Native
Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell. Complexes remain—for the most part—associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. Addressing and solving this problem is a major aim of preparative native PAGE.
Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids) therefore differ not only in molecular mass and intrinsic charge, but also in cross-sectional area, and thus experience different electrophoretic forces dependent on the shape of the overall structure. For proteins, since they remain in the native state they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining.
A specific example of an application of native gel electrophoresis is checking for enzymatic activity to verify the presence of the enzyme in the sample during protein purification. For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is commercially sold as a kit for staining gels. If the protein is present, the mechanism of the reaction takes place in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4-chloro-2-methylbenzenediazonium (Fast Red TR Diazonium salt) displaces the alcohol group, forming the final product, a red azo dye. As its name implies, this is the final visible-red product of the reaction. In undergraduate academic experimentation of protein purification, the gel is usually run next to commercial purified samples to visualize the results and conclude whether or not purification was successful.
Native gel electrophoresis is typically used in proteomics and metallomics. However, native PAGE is also used to scan genes (DNA) for unknown mutations as in single-strand conformation polymorphism.
Buffers
Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value.
These buffers have plenty of ions in them, which is necessary for the passage of electricity through them. Something like distilled water or benzene contains few ions, which is not ideal for use in electrophoresis. There are a number of buffers used for electrophoresis. The most common, for nucleic acids, are Tris/Acetate/EDTA (TAE) and Tris/Borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate (LB), which is rarely used based on PubMed citations, isoelectric histidine, pK-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and matched ion mobilities, which leads to longer buffer life. Borate is problematic; it can polymerize or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. As low as one base pair size difference can be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate).
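Running buffer is usually prepared by diluting a concentrated stock, a simple C1·V1 = C2·V2 calculation; the sketch below assumes a 50× TAE stock and a 1 L target volume as example figures.

```python
# Sketch: preparing running buffer from a concentrated stock with C1*V1 = C2*V2.
# The stock concentration and final volume below are example values; TAE is
# commonly prepared or sold as a 50x stock.

def stock_volume(stock_x, final_x, final_volume_ml):
    """Volume of stock (mL) needed to make final_volume_ml of final_x buffer."""
    return final_x * final_volume_ml / stock_x

v_stock = stock_volume(stock_x=50, final_x=1, final_volume_ml=1000)
print(f"Use {v_stock:.0f} mL of 50x TAE plus {1000 - v_stock:.0f} mL water for 1 L of 1x TAE")
```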
Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus on a single sharp band in a process called isotachophoresis. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins.
Visualization
After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie brilliant blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity, for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a Gel Doc system.
Downstream processing
After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE. The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion. This can provide a great deal of information about the identities of the proteins in a complex.
Applications
Estimation of the size of DNA molecules following restriction enzyme digestion, e.g. in restriction mapping of cloned DNA.
Analysis of PCR products, e.g. in molecular genetic diagnosis or genetic fingerprinting
Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer.
Gel electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standard or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software.
Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications.
Nucleic acids
In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar-phosphate backbone.
Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration. Circular DNA such as plasmids, however, may show multiple bands; the speed of migration may depend on whether it is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds, such as sodium hydroxide or formamide, are used to denature the nucleic acids and cause them to behave as long rods again.
Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the "chain termination method" page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis.
Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28S and 18S rRNA, the 28S band being approximately twice as intense as the 18S band. Degraded RNA has less sharply defined bands, has a smeared appearance, and the intensity ratio is less than 2:1.
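This rule of thumb lends itself to a trivial automated check on densitometry readings from a gel image; the intensity values in the sketch below are arbitrary example numbers.

```python
# Sketch: a simple check of RNA integrity from the measured intensities of the
# 28S and 18S rRNA bands, following the rule of thumb described above. The
# intensity values are arbitrary example numbers from a gel-imaging system.

def looks_intact(intensity_28s, intensity_18s, threshold=2.0):
    """Intact eukaryotic total RNA typically shows a 28S:18S ratio near 2:1."""
    return (intensity_28s / intensity_18s) >= threshold

print(looks_intact(intensity_28s=18500, intensity_18s=9000))   # True  (ratio ~2.06)
print(looks_intact(intensity_28s=12000, intensity_18s=9500))   # False (ratio ~1.26)
```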
Proteins
Proteins, unlike nucleic acids, can have varying charges and complex shapes, therefore they may not migrate into the polyacrylamide gel at similar rates, or at all, when placing a negative to positive EMF on the sample. Proteins, therefore, are usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge. Generally, the amount of SDS bound is relative to the size of the protein (usually 1.4 g SDS per gram of protein), so that the resulting denatured proteins have an overall negative charge, and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the resulting SDS-coated proteins migrate in the gel is relative only to their size and not their charge or shape.
Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), by native gel electrophoresis, by preparative native gel electrophoresis (QPNC-PAGE), or by 2-D electrophoresis.
Characterization through ligand interaction may be performed by electroblotting or by affinity electrophoresis in agarose or by capillary electrophoresis as for estimation of binding constants and determination of structural features like glycan content through lectin binding.
Nanoparticles
A novel application for gel electrophoresis is the separation or characterization of metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) regarding the size, shape, or surface chemistry of the nanoparticles. The aim is to obtain a more homogeneous sample (e.g. narrower particle size distribution), which then can be used in further products/processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the key parameter is the ratio of the particle size to the mesh size, whereby two migration mechanisms were identified: the unrestricted mechanism, where the particle size << mesh size, and the restricted mechanism, where particle size is similar to mesh size.
History
1930s – first reports of the use of sucrose for gel electrophoresis; moving-boundary electrophoresis (Tiselius)
1950 – introduction of "zone electrophoresis" (Tiselius); paper electrophoresis
1955 – introduction of starch gels, mediocre separation (Smithies)
1959 – introduction of acrylamide gels (Raymond and Weintraub); discontinuous electrophoresis (Ornstein and Davis); accurate control of parameters such as pore size and stability
1965 – introduction of free-flow electrophoresis (Hannig)
1966 – first use of agar gels
1969 – introduction of denaturing agents especially SDS separation of protein subunit (Weber and Osborn)
1970 – Lämmli separated 28 components of T4 phage using a stacking gel and SDS
1972 – agarose gels with ethidium bromide stain
1975 – 2-dimensional gels (O’Farrell); isoelectric focusing, then SDS gel electrophoresis
1977 – sequencing gels (Sanger)
1981 – introduction of capillary electrophoresis (Jorgenson and Lukacs)
1984 – pulsed-field gel electrophoresis enables separation of large DNA molecules (Schwartz and Cantor)
2004 – introduction of a standardized polymerization time for acrylamide gel solutions to optimize gel properties, in particular gel stability (Kastenholz)
A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. However, Oliver Smithies made significant contributions. Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement.
See also
History of electrophoresis
Electrophoretic mobility shift assay
Gel extraction
Isoelectric focusing
Pulsed field gel electrophoresis
Nonlinear frictiophoresis
Two-dimensional gel electrophoresis
SDD-AGE
QPNC-PAGE
Zymography
Fast parallel proteolysis
Free-flow electrophoresis
References
External links
Biotechniques Laboratory electrophoresis demonstration, from the University of Utah's Genetic Science Learning Center
Discontinuous native protein gel electrophoresis
Drinking straw electrophoresis
How to run a DNA or RNA gel
Animation of gel analysis of DNA restriction
Step by step photos of running a gel and extracting DNA
A typical method from wikiversity
Protein methods
Molecular biology
Laboratory techniques
Electrophoresis
Polymerase chain reaction
electrophoresis | Gel electrophoresis | [
"Chemistry",
"Biology"
] | 5,552 | [
"Biochemistry methods",
"Genetics techniques",
"Biochemistry",
"Polymerase chain reaction",
"Instrumental analysis",
"Protein methods",
"Protein biochemistry",
"Colloids",
"Biochemical separation processes",
"Molecular biology techniques",
"Gels",
"Molecular biology",
"nan",
"Electrophores... |
12,600 | https://en.wikipedia.org/wiki/Grid%20network | A grid network is a computer network consisting of a number of computer systems connected in a grid topology.
In a regular grid topology, each node in the network is connected with two neighbors along one or more dimensions. If the network is one-dimensional, and the chain of nodes is connected to form a circular loop, the resulting topology is known as a ring. Network systems such as FDDI use two counter-rotating token-passing rings to achieve high reliability and performance. In general, when an n-dimensional grid network is connected circularly in more than one dimension, the resulting network topology is a torus, and the network is called "toroidal". When the number of nodes along each dimension of a toroidal network is 2, the resulting network is called a hypercube.
A parallel computing cluster or multi-core processor is often connected in a regular interconnection network such as a de Bruijn graph, a hypercube graph, a hypertree network, a fat tree network, a torus, or cube-connected cycles.
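A minimal sketch of the wrap-around adjacency in such networks is given below: it lists the neighbours of a node in an n-dimensional torus, using example dimension sizes; when every dimension has size 2, the same routine yields the neighbours of a hypercube node.

```python
# Sketch: computing the neighbours of a node in an n-dimensional toroidal grid,
# i.e. a grid connected circularly in every dimension. The dimension sizes and
# node coordinates below are example values.

def torus_neighbours(coords, dims):
    """Up to two neighbours per dimension, with wrap-around at the edges."""
    neighbours = set()
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coords)
            n[axis] = (n[axis] + step) % size
            neighbours.add(tuple(n))
    return sorted(neighbours)

# A 4x4 two-dimensional torus: node (0, 0) wraps around to (3, 0) and (0, 3).
print(torus_neighbours((0, 0), (4, 4)))
# When every dimension has size 2, the torus degenerates into a hypercube:
# the corner (0, 0, 0) of a 3-cube has exactly three neighbours.
print(torus_neighbours((0, 0, 0), (2, 2, 2)))
```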
A grid network is not the same as a grid computer or a computational grid, although the nodes in a grid network are usually computers, and grid computing requires some kind of computer network or "universal coding" to interconnect the computers.
See also
Grid plan - street network
Network topology
References
Telecommunications
Network topology | Grid network | [
"Mathematics",
"Technology"
] | 278 | [
"Information and communications technology",
"Computer network stubs",
"Network topology",
"Telecommunications",
"Topology",
"Computing stubs"
] |
12,608 | https://en.wikipedia.org/wiki/Geodesy | Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics.
Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor.
History
Geodesy began in pre-scientific antiquity, so the very word geodesy comes from the Ancient Greek word γεωδαισία or geodaisia (literally, "division of Earth").
Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed South.
Definition
In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying.
In German, geodesy can refer to either higher geodesy (Erdmessung or höhere Geodäsie, literally "geomensuration") — concerned with measuring Earth on the global scale, or engineering geodesy (Ingenieurgeodäsie) that includes surveying — measuring parts or regions of Earth.
For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also.
To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy.
Geoid and reference ellipsoid
The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called geoidal undulation, and it varies globally between ±110 m based on the GRS 80 ellipsoid.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid.
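From the two defining numbers quoted above, the other geometric parameters of the ellipsoid follow directly; the sketch below uses the commonly cited more precise flattening value 1/298.257222101, which the text rounds to 1:298.257.

```python
# Sketch: deriving other geometric parameters of the GRS 80 reference ellipsoid
# from the semi-major axis and flattening quoted above.

a = 6378137.0            # m, semi-major axis (GRS 80)
f = 1 / 298.257222101    # flattening (GRS 80; the text rounds it to 1:298.257)

b = a * (1 - f)                      # semi-minor (polar) axis
e2 = f * (2 - f)                     # first eccentricity squared
print(f"semi-minor axis b            = {b:.3f} m")
print(f"polar/equatorial difference  = {a - b:.1f} m")
print(f"first eccentricity squared   = {e2:.10f}")
```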
The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable.
Coordinate systems in space
The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X, Y, and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis.
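A common computation in such a geocentric system is the conversion of geodetic latitude, longitude, and ellipsoidal height into cartesian X, Y, Z. The following Python sketch assumes GRS 80 ellipsoid parameters and an illustrative function name; it is a standard textbook formula, not a description of any particular software:

```python
import math

A = 6_378_137.0        # GRS 80 semi-major axis [m] (assumed)
F = 1 / 298.257        # GRS 80 flattening (assumed, rounded)
E2 = F * (2 - F)       # squared first eccentricity

def geodetic_to_geocentric(lat_deg, lon_deg, h):
    """Return geocentric cartesian (X, Y, Z) in metres for a geodetic position."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime-vertical radius of curvature
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_geocentric(60.0, 25.0, 0.0))   # a point on the ellipsoid near 60°N, 25°E
```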
Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.
It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system.
Geocentric coordinate systems used in geodesy can be divided naturally into two classes:
The inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars or, equivalently, to the rotation axes of ideal gyroscopes. The X-axis points to the vernal equinox.
The co-rotating reference systems (also ECEF or "Earth Centred, Earth Fixed"), in which the axes are "attached" to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane.
The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists.
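To the first approximation that ignores polar motion, precession, and nutation, the two systems differ only by a rotation about their shared Z-axis through the sidereal angle. A Python sketch (function name and example values are illustrative):

```python
import math

def inertial_to_earth_fixed(x, y, z, sidereal_angle_rad):
    """Rotate an inertial (ECI) position vector into the co-rotating (ECEF) frame."""
    c, s = math.cos(sidereal_angle_rad), math.sin(sidereal_angle_rad)
    xe = c * x + s * y
    ye = -s * x + c * y
    return xe, ye, z          # the Z-axis (rotation axis) is common to both frames

# Example with an arbitrary position and an arbitrary sidereal angle of 100 degrees:
print(inertial_to_earth_fixed(7_000_000.0, 0.0, 0.0, math.radians(100.0)))
```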
Coordinate systems in the plane
In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use:
Plano-polar, with points in the plane defined by their distance, s, from a specified point along a ray having a direction α from a baseline or axis.
Rectangular, with points defined by distances from two mutually perpendicular axes, x and y. Contrary to the mathematical convention, in geodetic practice, the x-axis points North and the y-axis East.
One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares.
An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence.
It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have x = s cos α and y = s sin α.
The reverse transformation is given by s = √(x² + y²) and α = arctan(y/x), with the quadrant of α chosen according to the signs of x and y.
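A short Python sketch of both transformations, following the geodetic convention above (x points north, y points east, and α is counted clockwise from north; function names are illustrative):

```python
import math

def polar_to_rectangular(s, alpha_deg):
    """Direction alpha (degrees from north) and distance s -> (x north, y east)."""
    a = math.radians(alpha_deg)
    return s * math.cos(a), s * math.sin(a)

def rectangular_to_polar(x, y):
    """(x north, y east) -> distance s and direction alpha in degrees from north."""
    s = math.hypot(x, y)
    alpha = math.degrees(math.atan2(y, x)) % 360.0   # atan2 resolves the quadrant
    return s, alpha

x, y = polar_to_rectangular(100.0, 30.0)   # 100 m at azimuth 30°
print(x, y)                                # ~86.60 m north, ~50.00 m east
print(rectangular_to_polar(x, y))          # recovers (100.0, 30.0)
```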
Heights
In geodesy, point or terrain heights are "above sea level" as an irregular, physically defined surface.
Height systems in use are:
Orthometric heights
Dynamic heights
Geopotential heights
Normal heights
Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid, which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses.
One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid.
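With the usual sign convention the relation is simply h = H + N (ellipsoidal height = orthometric height + geoid undulation). A trivial sketch, with made-up numbers and an illustrative function name:

```python
def orthometric_height(h_ellipsoidal, geoid_undulation):
    """H = h - N, all in metres (undulation positive where the geoid lies above the ellipsoid)."""
    return h_ellipsoidal - geoid_undulation

# Illustrative numbers only: a GNSS (ellipsoidal) height of 75.4 m where the
# geoid undulation is 23.1 m corresponds to about 52.3 m above sea level.
print(orthometric_height(75.4, 23.1))
```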
Geodetic datums
Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others.
In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation".
Positioning
General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network.
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied.
Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached.
Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points.
One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements.
Geodetic problems
In geometrical geodesy, there are two main problems:
First geodetic problem (also known as direct or forward geodetic problem): given the coordinates of a point and the direction (azimuth) and distance from that point to a second point, determine the coordinates of that second point.
Second geodetic problem (also known as inverse or reverse geodetic problem): given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points.
The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle.
The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae.
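As an illustration only, both problems are easy to solve on a sphere, a much cruder model than the ellipsoid treated by Vincenty's formulae. The Python sketch below assumes a mean Earth radius and illustrative function names:

```python
import math

R = 6_371_000.0   # assumed mean Earth radius [m]; spherical approximation only

def direct_problem(lat1, lon1, azimuth, distance):
    """First (direct) problem on a sphere: coordinates of the second point (degrees)."""
    p1, l1, az = map(math.radians, (lat1, lon1, azimuth))
    d = distance / R                                   # angular distance
    p2 = math.asin(math.sin(p1) * math.cos(d) +
                   math.cos(p1) * math.sin(d) * math.cos(az))
    l2 = l1 + math.atan2(math.sin(az) * math.sin(d) * math.cos(p1),
                         math.cos(d) - math.sin(p1) * math.sin(p2))
    return math.degrees(p2), math.degrees(l2)

def inverse_problem(lat1, lon1, lat2, lon2):
    """Second (inverse) problem on a sphere: great-circle distance [m] and forward azimuth [deg]."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dl = l2 - l1
    d = math.acos(math.sin(p1) * math.sin(p2) +
                  math.cos(p1) * math.cos(p2) * math.cos(dl))
    az = math.atan2(math.sin(dl) * math.cos(p2),
                    math.cos(p1) * math.sin(p2) -
                    math.sin(p1) * math.cos(p2) * math.cos(dl))
    return d * R, math.degrees(az) % 360.0

print(direct_problem(60.0, 25.0, 45.0, 100_000.0))
print(inverse_problem(60.0, 25.0, 60.6, 26.1))
```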
Observational concepts
As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer):
Plumbline or vertical: (the line along) the direction of local gravity.
Zenith: the (direction to the) intersection of the upwards-extending gravity vector at a point and the celestial sphere.
Nadir: the (direction to the) antipodal point where the downward-extending gravity vector intersects the (obscured) celestial sphere.
Celestial horizon: a plane perpendicular to the gravity vector at a point.
Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France).
Elevation: the angular height of an object above the horizon; alternatively: zenith distance equal to 90 degrees minus elevation.
Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance.
North celestial pole: the point where Earth's (precessing and nutating) instantaneous spin axis, extended northward, intersects the celestial sphere. (Similarly for the south celestial pole.)
Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere.
Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles.
Local meridian: the plane which contains the direction to the zenith and the celestial pole.
Measurements
The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too.
The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position.
Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, there are also quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases.
Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys.
To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed.
Gravity is measured using gravimeters, of which there are two kinds. First are absolute gravimeters, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation.
In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks.
Units and measures on the ellipsoid
Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth.
One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the poles and shortest at the equator; the length of the nautical mile varies in the same way.
A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, and the current definition is off by about 200 ppm). This means that one kilometre roughly equals (1/40,000) * 360 * 60 meridional minutes of arc, or 0.54 nautical miles. (This is not exact, as the two units are defined on different bases; the international nautical mile is exactly 1,852 m, which corresponds to rounding the quotient 1,000 m / 0.54 to four digits.)
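The arithmetic behind these figures can be checked directly; the snippet below only reproduces the numbers quoted above:

```python
# One kilometre expressed in meridional minutes of arc, assuming the original
# "10,000 km from equator to pole" definition of the metre (40,000 km circumference):
km_in_arc_minutes = (1 / 40_000) * 360 * 60
print(km_in_arc_minutes)            # 0.54 minutes of arc, i.e. about 0.54 nautical miles

# The international nautical mile corresponds to rounding 1,000 m / 0.54 to four digits:
print(1000 / km_in_arc_minutes)     # 1851.85... m, rounded to 1,852 m
```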
Temporal changes
Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms:
Continental plate motion, plate tectonics
The episodic motion of tectonic origin, especially close to fault lines
Periodic effects due to tides and tidal loading
Postglacial land uplift due to isostatic adjustment
Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology, and oceans
Sub-daily polar motion
Length-of-day variability
Earth's center-of-mass (geocenter) variations
Anthropogenic movements such as reservoir construction or petroleum or water extraction
Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS).
Techniques for studying geodynamic phenomena on global scales include:
Satellite positioning by GPS, GLONASS, Galileo, and BeiDou
Very-long-baseline interferometry (VLBI)
Satellite laser ranging (SLR) and lunar laser ranging (LLR)
DORIS
Regionally and locally precise leveling
Precise tachymeters
Monitoring of gravity change using land, airborne, shipborne, and spaceborne gravimetry
Satellite altimetry based on microwave and laser observations for studying the ocean surface, sea level rise, and ice cover monitoring
Interferometric synthetic aperture radar (InSAR) using satellite images.
Notable geodesists
See also
Fundamentals
Geodesy (book)
Concepts and Techniques in Modern Geography
Geodesics on an ellipsoid
History of geodesy
Physical geodesy
Earth's circumference
Physics
Geosciences
Governmental agencies
National mapping agencies
U.S. National Geodetic Survey
National Geospatial-Intelligence Agency
Ordnance Survey
United States Coast and Geodetic Survey
United States Geological Survey
International organizations
International Union of Geodesy and Geophysics (IUGG)
International Association of Geodesy (IAG)
International Federation of Surveyors (FIG)
International Geodetic Student Organisation (IGSO)
Other
EPSG Geodetic Parameter Dataset
Meridian arc
Surveying
References
Further reading
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 1, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 1 (Teubner, Leipzig, 1880).
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 2, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 2 (Teubner, Leipzig, 1884).
B. Hofmann-Wellenhof and H. Moritz, Physical Geodesy, Springer-Verlag Wien, 2005. (This text is an updated edition of the 1967 classic by W.A. Heiskanen and H. Moritz).
W. Kaula, Theory of Satellite Geodesy : Applications of Satellites to Geodesy, Dover Publications, 2000. (This text is a reprint of the 1966 classic).
Vaníček P. and E.J. Krakiwsky, Geodesy: the Concepts, pp. 714, Elsevier, 1986.
Torge, W (2001), Geodesy (3rd edition), published by de Gruyter, .
Thomas H. Meyer, Daniel R. Roman, and David B. Zilkoski. "What does height really mean?" (This is a series of four articles published in Surveying and Land Information Science, SaLIS.)
"Part I: Introduction" SaLIS Vol. 64, No. 4, pages 223–233, December 2004.
"Part II: Physics and gravity" SaLIS Vol. 65, No. 1, pages 5–15, March 2005.
"Part III: Height systems" SaLIS Vol. 66, No. 2, pages 149–160, June 2006.
"Part IV: GPS heighting" SaLIS Vol. 66, No. 3, pages 165–183, September 2006.
External links
Geodetic awareness guidance note, Geodesy Subcommittee, Geomatics Committee, International Association of Oil & Gas Producers
Earth sciences
Cartography
Measurement
Navigation
Applied mathematics
Articles containing video clips | Geodesy | [
"Physics",
"Astronomy",
"Mathematics"
] | 5,099 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Measurement",
"Size",
"nan",
"Geophysics",
"Geodesy"
] |
12,610 | https://en.wikipedia.org/wiki/Grand%20Unified%20Theory | A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 10^16 GeV (just three orders of magnitude below the Planck scale of 10^19 GeV)—and so are well beyond the reach of any foreseeable particle collider experiments. Therefore, the particles predicted by GUT models cannot be observed directly; instead, the effects of grand unification might be detected through indirect observations of the following:
proton decay,
electric dipole moments of elementary particles,
or the properties of neutrinos.
Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles.
While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
History
Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model by Abdus Salam and Jogesh Pati, also in 1974, who pioneered the idea to unify gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use the acronym in a paper.
Motivation
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2), which allow only discrete charges, the remaining component, the weak hypercharge interaction, is described by an abelian symmetry U(1), which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations.
Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different.
Unification of matter particles
SU(5)
SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is SU(5).
Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature.
The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10. (These numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content.
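As a simple dimension count (an illustration only), the particle content listed above fills the two representations exactly:

```python
# Counting the Weyl-fermion components of one Standard Model generation as
# distributed above: 3 + 2 = 5 and 6 + 3 + 1 = 10, i.e. 15 states in total.
five = 3 + 2       # conjugate down-type colour triplet + lepton isospin doublet
ten = 6 + 3 + 1    # six up-type quark components + down-type colour triplet + right-handed electron
assert five + ten == 15
print(five, ten, five + ten)
```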
The hypothetical right-handed neutrino is a singlet of SU(5), which means its mass is not forbidden by any symmetry; it doesn't need spontaneous electroweak symmetry breaking, which explains why its mass would be heavy (see seesaw mechanism).
SO(10)
The next simple Lie group which contains the standard model is SO(10).
Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector).
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation).
The boson matrix for SO(10) is found by taking the matrix from the representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10).
E6
In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT.
Extended Grand Unified Theories
Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra which naturally appear in the higher SU(N) GUTs considerably modify the desert physics and lead to the realistic (string-scale) grand unification for conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT the simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and problem of unification of flavor can be argued.
GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into representations of SU(8). This can be divided into SU(5) × SU(3)_F × U(1), which is the SU(5) theory together with some heavy bosons which act on the generation number.
GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16).
Symplectic groups and quaternion representations
Symplectic gauge groups could also be considered. For example, (which is called in the article symplectic group) has a representation in terms of quaternion unitary matrices which has a dimensional real representation and so might be considered as a candidate for a gauge group. has 32 charged bosons and 4 neutral bosons. Its subgroups include so can at least contain the gluons and photon of . Although it's probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be:
A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed quaternion matrices is which does include the standard model bosons:
If is a quaternion valued spinor, is quaternion hermitian matrix coming from and is a pure vector quaternion (both of which are 4-vector bosons) then the interaction term is:
Octonion representations
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (, , , or ) depending on the details.
Because they are fermions, the anti-commutators of the Jordan algebra become commutators. It is known that this exceptional symmetry group contains subgroups large enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles' spin direction. Each of these possesses theoretical problems.
Beyond Lie groups
Other structures have been suggested, including Lie 3-algebras and Lie superalgebras. Neither of these fits with Yang–Mills theory. In particular, Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills.
Unification of forces and the role of supersymmetry
The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale.
The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale:
approximately 10^16 GeV.
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections.
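The qualitative picture can be reproduced with a one-loop toy calculation. The starting values and beta-function coefficients below are standard approximate textbook numbers (with GUT-normalized hypercharge), quoted here only for illustration; the sketch ignores threshold effects and is not a precision analysis:

```python
import math

MZ = 91.19                               # Z-boson mass [GeV]
ALPHA_INV_MZ = (59.0, 29.6, 8.5)         # approx. inverse couplings of U(1)_Y (GUT-normalized), SU(2)_L, SU(3)_c at MZ
B_SM = (41 / 10, -19 / 6, -7.0)          # one-loop coefficients, Standard Model
B_MSSM = (33 / 5, 1.0, -3.0)             # one-loop coefficients, MSSM

def inverse_couplings(mu_gev, b):
    """One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ)."""
    t = math.log(mu_gev / MZ)
    return [a0 - bi / (2 * math.pi) * t for a0, bi in zip(ALPHA_INV_MZ, b)]

# The MSSM values nearly coincide around 2e16 GeV, while the Standard Model
# values miss each other, illustrating the statement above.
for mu in (1e3, 1e10, 2e16):
    sm = ", ".join(f"{a:5.1f}" for a in inverse_couplings(mu, B_SM))
    mssm = ", ".join(f"{a:5.1f}" for a in inverse_couplings(mu, B_MSSM))
    print(f"mu = {mu:7.0e} GeV   SM: [{sm}]   MSSM: [{mssm}]")
```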
Neutrino masses
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios.
Proposed theories
Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are:
Pati–Salam model — SU(4) × SU(2) × SU(2)
Georgi–Glashow model — SU(5); and Flipped SU(5) — SU(5) × U(1)
SO(10) model; and Flipped SO(10) — SO(10) × U(1)
E6 model; and Trinification — SU(3) × SU(3) × SU(3)
minimal left-right model — SU(3)C × SU(2)L × SU(2)R × U(1)B−L
331 model — SU(3)C × SU(3)L × U(1)
chiral color
Not quite GUTs:
Technicolor models
Little Higgs
String theory
Causal fermion systems
M-theory
Preons
Loop quantum gravity
Causal dynamical triangulation theory
Note: These models refer to Lie algebras, not to Lie groups; several distinct Lie groups can share the same Lie algebra, so the algebra alone does not fix the group.
The most promising candidate is SO(10).
(Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from heterotic string theory.
GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The current experimental lower limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models.
Some GUT theories, like SU(5) and SO(10), suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale). In theory, unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations.
Ingredients
A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter.
Current evidence
The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest in certain GUTs such as SO(10).
One of the few possible experimental tests of certain GUT is proton decay and also fermion masses. There are a few more special tests for supersymmetric GUT. However, minimum proton lifetimes from research (at or exceeding the ~ year range) have ruled out simpler GUTs and most non-SUSY models.
The maximum upper limit on proton lifetime (if unstable), is calculated at 6× years for SUSY models and 1.4× years for minimal non-SUSY GUTs.
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale called the GUT scale, equal approximately to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This interesting numerical observation is called the gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) models break with an intermediate gauge scale, such as the one of the Pati–Salam group.
See also
B − L quantum number
Classical unified field theories
Paradigm shift
Physics beyond the Standard Model
Theory of everything
X and Y bosons
Notes
References
Further reading
Stephen Hawking, A Brief History of Time, includes a brief popular overview.
External links
The Algebra of Grand Unified Theories
Particle physics
Physical cosmology
Physics beyond the Standard Model | Grand Unified Theory | [
"Physics",
"Astronomy"
] | 3,962 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Particle physics",
"Grand Unified Theory",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
12,616 | https://en.wikipedia.org/wiki/Gossip | Gossip is idle talk or rumor, especially about the personal or private affairs of others; the act is also known as dishing or tattling.
Etymology
The word is from Old English godsibb, from god and sibb, the term for the godparents of one's child or the parents of one's godchild, generally very close friends. In the 16th century, the word assumed the meaning of a person, mostly a woman, one who delights in idle talk, a newsmonger, a tattler. In the early 19th century, the term was extended from the talker to the conversation of such persons. The verb to gossip, meaning "to be a gossip", first appears in Shakespeare.
The term originates from the bedroom at the time of childbirth. Giving birth used to be a social event exclusively attended by women. The pregnant woman's female relatives and neighbours would congregate and idly converse. Over time, gossip came to mean talk of others.
Functions
Gossip can:
reinforce, or punish the lack of, morality and accountability
reveal passive aggression, isolating and harming others
build and maintain a sense of community with shared interests, information, and values
begin a courtship that helps one find their desired mate, by counseling others
provide a peer-to-peer mechanism for disseminating information
Workplace gossip
Mary Gormandy White, a human resource expert, gives the following "signs" for identifying workplace gossip:
Animated people become silent ("Conversations stop when you enter the room")
People begin staring at someone
Workers indulge in inappropriate topics of conversation.
White suggests "five tips ... [to] handle the situation with aplomb":
Rise above the gossip
Understand what causes or fuels the gossip
Do not participate in workplace gossip.
Allow for the gossip to go away on its own
If it persists, "gather facts and seek help."
Peter Vajda identifies gossip as a form of workplace violence, noting that it is "essentially a form of attack." Gossip is thought by many to "empower one person while disempowering another" (Hafen). Accordingly, many companies have formal policies in their employee handbooks against gossip. Sometimes there is room for disagreement on exactly what constitutes unacceptable gossip, since workplace gossip may take the form of offhand remarks about someone's tendencies such as "He always takes a long lunch," or "Don't worry, that's just how she is."
TLK Healthcare cites as examples of gossip, "tattletaling to the boss without intention of furthering a solution or speaking to co-workers about something someone else has done to upset us." Corporate email can be a particularly dangerous method of gossip delivery, as the medium is semi-permanent and messages are easily forwarded to unintended recipients; accordingly, a Mass High Tech article advised employers to instruct employees against using company email networks for gossip. Low self-esteem and a desire to "fit in" are frequently cited as motivations for workplace gossip.
There are five essential functions that gossip has in the workplace (according to DiFonzo & Bordia):
Helps individuals learn social information about other individuals in the organization (often without even having to meet the other individual)
Builds social networks of individuals by bonding co-workers together and affiliating people with each other.
Breaks existing bonds by ostracizing individuals within an organization.
Enhances one's social status/power/prestige within the organization.
Informs individuals as to what is considered socially acceptable behavior within the organization.
According to Kurkland and Pelled, workplace gossip can be very serious depending upon the amount of power that the gossiper has over the recipient, which will in turn affect how the gossip is interpreted. There are four types of power that are influenced by gossip:
Coercive: when a gossiper tells negative information about a person, their recipient might believe that the gossiper will also spread negative information about them. This causes the gossiper's coercive power to increase.
Reward: when a gossiper tells positive information about a person, their recipient might believe that the gossiper will also spread positive information about them. This causes the gossiper's reward power to increase.
Expert: when a gossiper seems to have very detailed knowledge of either the organization's values or about others in the work environment, their expert power becomes enhanced.
Referent: this power can either be reduced OR enhanced to a point. When people view gossiping as a petty activity done to waste time, a gossiper's referent power can decrease along with their reputation. When a recipient is thought of as being invited into a social circle by being a recipient, the gossiper's referent power can increase, but only to a high point where then the recipient begins to resent the gossiper (Kurland & Pelled).
Negative consequences of the gossip
Some serious negative consequences of gossip may include:
Lost productivity and time wasting
Erosion of trust and morale between members of the working community
Increased anxiety among employees as rumors circulate without any clear information as to what is fact and what is not
Growing divisiveness among employees as people "take sides", risks of "infighting" that may further deteriorate unity
Hurt feelings and reputations
Jeopardized chances for the gossipers' advancement as they are perceived as unprofessional, and
Attrition: good employees tend to leave the company due to the unhealthy work atmosphere and lack of trust
Turner and Weed theorize that among the three main types of responders to workplace conflict are attackers who cannot keep their feelings to themselves and express their feelings by attacking whatever they can. Attackers are further divided into up-front attackers and behind-the-back attackers. Turner and Weed note that the latter "are difficult to handle because the target person is not sure of the source of any criticism, nor even always sure that there is criticism."
It is possible however, that there may be illegal, unethical, or disobedient behavior happening at the workplace and this may be a case where reporting the behavior may be viewed as gossip. It is then left up to the authority in charge to fully investigate the matter and not simply look past the report and assume it to be workplace gossip.
Informal networks through which communication occurs in an organization are sometimes called the grapevine. In a study done by Harcourt, Richerson, and Wattier, it was found that middle managers in several different organizations believed that gathering information from the grapevine was a much better way of learning information than through formal communication with their subordinates (Harcourt, Richerson & Wattier).
Various views
Some see gossip as trivial, hurtful, and socially, spiritually, and/or intellectually unproductive. Some people view gossip as a lighthearted way of spreading information. Authorities or would-be authorities may have a negative view of gossip as something undesirable or dangerous.
Philosophical analysis by Emrys Westacott points to the role of gossip in (for example) cementing friendships and combatting abuses of power.
A feminist definition of gossip presents it as "a way of talking between women, intimate in style, personal and domestic in scope and setting, a female cultural event which springs from and perpetuates the restrictions of the female role, but also gives the comfort of validation." (Jones, 1990:243)
In early modern England
In early modern England, the word "gossip" referred to companions in childbirth, not limited to the midwife. It also became a term for women-friends generally, with no necessary derogatory connotations. (OED n. definition 2. a. "A familiar acquaintance, friend, chum", supported by references from 1361 to 1873). It commonly referred to an informal local sorority or social group, who could enforce socially acceptable behavior through private censure or through public rituals, such as "rough music", the cucking stool and the skimmington ride.
In Thomas Harman's Caveat for Common Cursitors (1566), a 'walking mort' relates how she was forced to agree to meet a man in his barn, but informed his wife. The wife arrived with her "five furious, sturdy, muffled gossips", who caught the errant husband with "his hosen [trousers] about his legs" and gave him a sound beating. The story clearly functions as a morality tale in which the gossips uphold the social order.
Sir Herbert Maxwell, Bart., at the end of chapter three of The Chevalier of the Splendid Crest (1900), portrays the king as referring to his loyal knight "Sir Thomas de Roos" in kindly terms as "my old gossip". Although a historical novel, the reference implies a continued use of the term "gossip" for a childhood friend as late as 1900.
In Judaism
Judaism considers gossip spoken without a constructive purpose (known in Hebrew as "evil tongue", lashon hara) to be a sin. Speaking negatively about people, even if retelling true facts, counts as sinful, as it demeans the dignity of man — both the speaker and the subject of the gossip.
According to Proverbs 18:8: "The words of a gossip are like choice morsels: they go down to a man's innermost parts."
In Christianity
The Christian perspective on gossip typically aligns with modern cultural assumptions of the phenomenon, especially with the assumption that generally speaking, gossip is negative speech. However, due to the complexity of the phenomenon, biblical scholars have more precisely identified the form and function of gossip, even identifying a socially positive role for the social process as it is described in the New Testament. Of course, this does not mean that there are not numerous texts in the New Testament that see gossip as dangerous negative speech.
Thus, for example, the Epistle to the Romans associates gossips ("backbiters") with a list of sins including sexual immorality and with murder:
28: And even as they did not like to retain God in their knowledge, God gave them over to a reprobate mind, to do those things which are not convenient;
29: Being filled with all unrighteousness, fornication, wickedness, covetousness, maliciousness; full of envy, murder, debate, deceit, malignity; whisperers,
30: Backbiters, haters of God, despiteful, proud, boasters, inventors of evil things, disobedient to parents,
31: Without understanding, covenant breakers, without natural affection, implacable, unmerciful:
32: Who knowing the judgment of God, that they which commit such things are worthy of death, not only do the same, but have pleasure in them that do them. (Romans 1:28-32)
According to Matthew 18, Jesus also taught that conflict-resolution among church members ought to begin with the aggrieved party attempting to resolve their dispute with the offending party alone. Only if this did not work would the process escalate to the next step, in which another church member would become involved. After that if the person at fault still would not "hear", the matter was to be fully investigated by the church elders, and if not resolved to be then exposed publicly.
Based on texts like these portraying gossip negatively, many Christian authors generalize on the phenomenon. So, in order to gossip, writes Phil Fox Rose, we "must harden our heart towards the 'out' person. We draw a line between ourselves and them; define them as being outside the rules of Christian charity... We create a gap between ourselves and God's Love." As we harden our heart towards more people and groups, he continues, "this negativity and feeling of separateness will grow and permeate our world, and we'll find it more difficult to access God's love in any aspect of our lives."
The New Testament is also in favor of group accountability (Ephesians 5:11; 1st Tim 5:20; James 5:16; Gal 6:1-2; 1 Cor 12:26), which may be associated with gossip.
Gossip as a breach of secrecy has parallels with confession: the medieval Christian church sought to control both from its position as a powerful regulator.
In Islam
Islam regards backbiting as the equivalent of eating the flesh of one's dead brother. According to Muslims, backbiting harms its victims without offering them any chance of defense, just as dead people cannot defend against their flesh being eaten. Muslims are expected to treat others like brothers (regardless of their beliefs, skin-color, gender, or ethnic origin), deriving from Islam's concept of brotherhood amongst its believers.
In the Baháʼí Faith
The Baháʼí Faith labels backbiting as the "worst human quality and the most great sin..." In their faith, murder would be considered less negative than backbiting. Bahá'u'lláh, the Prophet-Founder of the Baháʼí Faith, stated: "Backbiting quencheth the light of the heart, and extinguisheth the life of the soul."
In psychology
Evolutionary view
From Robin Dunbar's evolutionary theories, gossip originated to help bond the groups that were constantly growing in size. To survive, individuals need alliances; but as these alliances grew larger, it was difficult if not impossible to physically connect with everyone. Conversation and language were able to bridge this gap. Gossip became a social interaction that helped the group gain information about other individuals without personally speaking to them.
It enabled people to keep up with what was going on in their social network. It also creates a bond between the teller and the hearer, as they share information of mutual interest and spend time together. It also helps the hearer learn about another individual's behavior and helps them have a more effective approach to their relationship. Dunbar (2004) found that 65% of conversations consist of social topics.
Dunbar (1994) argues that gossip is the equivalent of social grooming often observed in other primate species. Anthropological investigations indicate that gossip is a cross-cultural phenomenon, providing evidence for evolutionary accounts of gossip.
There is very little evidence to suggest meaningful sex differences in the proportion of conversational time spent gossiping, and when there is a difference, women are only very slightly more likely to gossip compared with men. Further support for the evolutionary significance of gossip comes from a recent study published in the peer-reviewed journal Science. Anderson and colleagues (2011) found that faces paired with negative social information dominate visual consciousness to a greater extent than positive and neutral social information during a binocular rivalry task.
Binocular rivalry occurs when two different stimuli are presented to each eye simultaneously and the two percepts compete for dominance in visual consciousness. While this occurs, an individual will consciously perceive one of the percepts while the other is suppressed. After a time, the other percept will become dominant and an individual will become aware of the second percept. Finally, the two percepts will alternate back and forth in terms of visual awareness.
The study by Anderson and colleagues (2011) indicates that higher order cognitive processes, like evaluative information processing, can influence early visual processing. That only negative social information differentially affected the dominance of the faces during the task alludes to the unique importance of knowing information about an individual that should be avoided. Since the positive social information did not produce greater perceptual dominance of the matched face indicates that negative information about an individual may be more salient to our behavior than positive.
Gossip also gives information about social norms and guidelines for behavior, usually commenting on how appropriate a behavior was, and the mere act of repeating it signifies its importance. In this sense, gossip is effective regardless of whether it is positive or negative. Some theorists have proposed that gossip is actually a pro-social behavior intended to allow an individual to correct their socially prohibitive behavior without direct confrontation of the individual. By gossiping about an individual's acts, other individuals can subtly indicate that said acts are inappropriate and allow the individual to correct their behavior (Schoeman 1994).
Perception of those who gossip
Individuals who are perceived to engage in gossiping regularly are seen as having less social power and being less liked than those who gossip less frequently. The type of gossip being exchanged also affects likeability, whereby those who engage in negative gossip are less liked than those who engage in positive gossip. In a study done by Turner and colleagues (2003), having a prior relationship with a gossiper was not found to protect the gossiper from less favorable personality-ratings after gossip was exchanged. In the study, pairs of individuals were brought into a research lab to participate. Either the two individuals were friends prior to the study or they were strangers scheduled to participate at the same time. One of the individuals was a confederate of the study, and they engaged in gossiping about the research assistant after she left the room. The gossip exchanged was either positive or negative. Regardless of gossip type (positive versus negative) or relationship type (friend versus stranger) the gossipers were rated as less trustworthy after sharing the gossip.
Walter Block has suggested that while gossip and blackmail both involve the disclosure of unflattering information, the blackmailer is arguably ethically superior to the gossip. Block writes: "In a sense, the gossip is much worse than the blackmailer, for the blackmailer has given the blackmailed a chance to silence him. The gossip exposes the secret without warning." The victim of a blackmailer is thus offered choices denied to the subject of gossip, such as deciding if the exposure of his or her secret is worth the cost the blackmailer demands. Moreover, in refusing a blackmailer's offer one is in no worse a position than with the gossip. Adds Block, "It is indeed difficult, then, to account for the vilification suffered by the blackmailer, at least compared to the gossip, who is usually dismissed with slight contempt and smugness."
Contemporary critiques of gossip may concentrate on or become subsumed in the discussion of social media such as Facebook.
See also
Altruism
Backbiting
Blind item
Bullying
Circle of friends
Communication in small groups
Curiosity
False dilemma
Gossip magazines
Impression management
Interpersonal relationship
Lashon hara
Libel
Misinformation
Personal network
Popularity
Respectability
Rumor
Scandal
Sexual selection in human evolution
Social perception
Social status
Word of mouth
Yenta
References
Further reading
Niko Besnier, 2009: Gossip and the Everyday Production of Politics. Honolulu: University of Hawai'i Press.
Niko Besnier, 1996: Gossip. In Encyclopedia of Cultural Anthropology. David Levinson and Melvin Ember, eds. Vol. 2, pp. 544–547. New York: Henry Holt.
DiFonzo, Nicholas & Prashant Bordia. "Rumor, Gossip, & Urban Legend." Diogenes Vol. 54 (Feb 2007), pp. 19–35.
Feeley, Kathleen A. and Frost, Jennifer (eds.) When Private Talk Goes Public: Gossip in American History. New York: Palgrave Macmillan, 2014.
Robert F. Goodman and Aaron Ben-Zeev, editors: Good Gossip. Lawrence, Kansas: University Press of Kansas, 1993.
Hafen, Susan. "Organizational Gossip: A Revolving Door of Regulation & Resistance." The Southern Communication Journal Vol. 69, No. 3 (Spring 2004), p. 223.
Harcourt, Jules, Virginia Richerson, and Mark J Wattier. "A National Study of Middle Managers' Assessment of Organizational Communication Quality." Journal of Business Communication Vol. 28, No. 4 (Fall 1991), pp. 348–365.
Jones, Deborah, 1990: 'Gossip: notes on women's oral culture'. In: Cameron, Deborah. (editor) The Feminist Critique of Language: A Reader. London/New York: Routledge, 1990, pp. 242–250. Cited online in Rash, 1996.
Kenny, Robert Wade, 2014: Gossip. In Encyclopedia of Lying and Deception. Timothy R. Levine, ed. Vol. 1, pp. 410–414. Los Angeles: Sage Press.
Kurland, Nancy B. & Lisa Hope Pelled. "Passing the Word: Toward a Model of Gossip & Power in the Workplace." The Academy of Management Review Vol. 25, No. 2 (April 2000) pg 428–438
External links
Ronald de Sousa (U Toronto) on Gossip
"Go Ahead. Gossip May Be Virtuous" New York Times article by Patricia Cohen 2002-08-10 (requires registration)
The Ethics of Gossiping, Emrys Westacott
Robin Dunbar, Coevolution of neocortical size, group size and language in humans (pre-publication version) "Analysis of a sample of human conversations shows that about 60% of time is spent gossiping about relationships and personal experiences."
Benjamin Brown, From Principles to Rules and from Musar to Halakhah - The Hafetz Hayim's Rulings on Libel and Gossip.
Human communication
Social status
Group processes
Workplace
Evolutionary psychology
Moral psychology
Defamation
Journalism ethics
Tabloid journalism | Gossip | [
"Biology"
] | 4,330 | [
"Human communication",
"Behavior",
"Human behavior"
] |
12,630 | https://en.wikipedia.org/wiki/Geometric%20series | In mathematics, a geometric series is a series summing the terms of an infinite geometric sequence, in which the ratio of consecutive terms is constant. For example, the series 1/2 + 1/4 + 1/8 + ⋯ is a geometric series with common ratio 1/2, which converges to the sum of 1. Each term in a geometric series is the geometric mean of the term before it and the term after it, in the same way that each term of an arithmetic series is the arithmetic mean of its neighbors.
While Greek philosopher Zeno's paradoxes about time and motion (5th century BCE) have been interpreted as involving geometric series, such series were formally studied and applied a century or two later by Greek mathematicians, for example used by Archimedes to calculate the area inside a parabola (3rd century BCE). Today, geometric series are used in mathematical finance, calculating areas of fractals, and various computer science topics.
Though geometric series most commonly involve real or complex numbers, there are also important results and applications for matrix-valued geometric series, function-valued geometric series, p-adic number geometric series, and most generally geometric series of elements of abstract algebraic fields, rings, and semirings.
Definition and examples
The geometric series is an infinite series derived from a special type of sequence called a geometric progression. This means that it is the sum of infinitely many terms of a geometric progression: starting from the initial term a, the next one being the initial term multiplied by a constant number known as the common ratio r. By multiplying each term by the common ratio repeatedly, the geometric series can be defined mathematically as:
a + ar + ar^2 + ar^3 + ⋯
The sum of a finite initial segment of an infinite geometric series is called a finite geometric series, that is:
a + ar + ar^2 + ⋯ + ar^n
When r > 1 it is often called a growth rate or rate of expansion. When 0 < r < 1 it is often called a decay rate or shrink rate, where the idea that it is a "rate" comes from interpreting the term index k as a sort of discrete time variable. When an application area has specialized vocabulary for specific types of growth, expansion, shrinkage, and decay, that vocabulary will also often be used to name parameters of geometric series. In economics, for instance, rates of increase and decrease of price levels are called inflation rates and deflation rates, while rates of increase in values of investments include rates of return and interest rates.
When summing infinitely many terms, the geometric series can either be convergent or divergent. Convergence means there is a value after summing infinitely many terms, whereas divergence means no value after summing. The convergence of a geometric series can be described depending on the value of the common ratio; see below. Grandi's series, 1 − 1 + 1 − 1 + ⋯, is an example of a divergent series that can be expressed as a geometric series where the initial term is 1 and the common ratio is −1; its partial sums alternate between 1 and 0 and never settle on a single value.
Decimal numbers that have repeated patterns that continue forever can be interpreted as geometric series and thereby converted to expressions of the ratio of two integers. For example, a repeating decimal such as 0.7777… can be written as the geometric series 7/10 + 7/100 + 7/1000 + ⋯, where the initial term is 7/10 and the common ratio is 1/10, so the sum is (7/10)/(1 − 1/10) = 7/9.
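As an informal illustration (not part of the original article), the following Python sketch sums the geometric series for a purely repeating decimal using the closed form a/(1 − r); the specific digit blocks chosen are illustrative assumptions.

```python
from fractions import Fraction

def repeating_decimal_as_fraction(digit_block: str) -> Fraction:
    """Sum the geometric series a + ar + ar^2 + ... for the purely repeating
    decimal 0.(digit_block), where a = block/10^n and r = 1/10^n."""
    n = len(digit_block)
    a = Fraction(int(digit_block), 10 ** n)   # initial term
    r = Fraction(1, 10 ** n)                  # common ratio
    return a / (1 - r)                        # closed form a / (1 - r)

print(repeating_decimal_as_fraction("7"))        # 7/9   (0.7777...)
print(repeating_decimal_as_fraction("142857"))   # 1/7   (0.142857142857...)
```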
Convergence of the series and its proof
The convergence of the infinite sequence of partial sums of the infinite geometric series depends on the magnitude of the common ratio alone:
If |r| < 1, the terms of the series approach zero (becoming smaller and smaller in magnitude) and the sequence of partial sums converges to the limit value a/(1 − r).
If |r| > 1, the terms of the series become larger and larger in magnitude and the partial sums of the terms also get larger and larger in magnitude, so the series diverges.
If |r| = 1, the terms of the series become no larger or smaller in magnitude and the sequence of partial sums of the series does not converge. When r = 1, all the terms of the series are the same and the partial sums grow to infinity. When r = −1, the terms take the two values a and −a alternately, and therefore the sequence of partial sums of the terms oscillates between the two values a and 0. One example can be found in Grandi's series. When r = i and a = 1, the partial sums circulate periodically among the values 1, 1 + i, i, 0, never converging to a limit. Generally, when r^n = 1 for some integer n > 1 (with r ≠ 1) and with any a, the partial sums of the series will circulate indefinitely with a period of n, never converging to a limit.
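A small Python sketch (an illustration added here, not part of the article) makes these cases concrete by printing the partial sums for a few assumed values of the common ratio.

```python
def partial_sums(a, r, n_terms):
    """Return the first n_terms partial sums of the geometric series a + ar + ar^2 + ..."""
    sums, term, total = [], a, 0
    for _ in range(n_terms):
        total += term
        sums.append(total)
        term *= r
    return sums

print(partial_sums(1, 0.5, 8))   # approaches 1/(1 - 0.5) = 2
print(partial_sums(1, 2, 8))     # grows without bound
print(partial_sums(1, -1, 8))    # oscillates between 1 and 0 (Grandi's series)
print(partial_sums(1, 1j, 8))    # cycles through 1, 1+i, i, 0
```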
The rate of convergence shows how quickly a sequence approaches its limit. In the case of the geometric series (where the relevant sequence is the sequence of partial sums S_n and its limit is S = a/(1 − r)), the rate and order are found via
lim (n → ∞) |S_(n+1) − S| / |S_n − S|^q,
where q represents the order of convergence. Using S_n − S = −a r^(n+1)/(1 − r) and choosing the order of convergence q = 1 gives:
|S_(n+1) − S| / |S_n − S| = |r|,
so the partial sums converge linearly with rate |r|.
When the series converges, the rate of convergence gets slower as |r| approaches 1. The pattern of convergence also depends on the sign or complex argument of the common ratio. If 0 < r < 1, the terms all share the same sign and the partial sums of the terms approach their eventual limit monotonically. If −1 < r < 0, adjacent terms in the geometric series alternate between positive and negative, and the partial sums of the terms oscillate above and below their eventual limit. For complex r with |r| < 1, the partial sums converge in a spiraling pattern.
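The linear, rate-|r| convergence described above can be checked numerically; the following sketch (added for illustration, with the assumed values a = 1 and r = 0.5) prints the ratio of successive errors, which settles at |r|.

```python
a, r = 1.0, 0.5
limit = a / (1 - r)
s, term = 0.0, a
prev_err = None
for n in range(10):
    s += term
    term *= r
    err = abs(s - limit)
    if prev_err:
        # each error is |r| times the previous one: order-1 (linear) convergence with rate |r|
        print(f"n={n}: error={err:.6g}, ratio to previous error={err / prev_err:.3f}")
    prev_err = err
```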
The convergence is proved as follows. The partial sum of the first n + 1 terms of a geometric series, up to and including the ar^n term,
S_n = a + ar + ar^2 + ⋯ + ar^n,
is given by the closed form
S_n = a(1 − r^(n+1)) / (1 − r),
where r is the common ratio. The case r = 1 is merely a simple addition, a case of an arithmetic series, giving S_n = a(n + 1). The formula for the partial sums with r ≠ 1 can be derived as follows:
S_n = a + ar + ar^2 + ⋯ + ar^n,
r·S_n = ar + ar^2 + ⋯ + ar^(n+1),
S_n − r·S_n = a − ar^(n+1),
S_n = a(1 − r^(n+1)) / (1 − r),
for r ≠ 1. As r approaches 1, polynomial division or L'Hôpital's rule recovers the case S_n = a(n + 1).
As n approaches infinity, the absolute value of r must be less than one for this sequence of partial sums to converge to a limit. When it does, the series converges absolutely. The infinite series then becomes
a + ar + ar^2 + ar^3 + ⋯ = a / (1 − r),
for |r| < 1.
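As a sanity check on the closed form (an added sketch, with arbitrarily chosen a, r, and n), the following Python compares direct summation with the formula and with the limiting value a/(1 − r).

```python
def geometric_partial_sum(a, r, n):
    """Closed form for S_n = a + ar + ... + ar**n."""
    if r == 1:
        return a * (n + 1)
    return a * (1 - r ** (n + 1)) / (1 - r)

a, r, n = 3.0, 0.8, 20
direct = sum(a * r ** k for k in range(n + 1))
print(direct, geometric_partial_sum(a, r, n))   # the two agree
print(a / (1 - r))                              # limit of S_n as n grows, since |r| < 1
```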
This convergence result is widely applied to prove the convergence of other series as well, whenever the terms of those series can be bounded from above by a suitable geometric series; that proof strategy is the basis for the ratio test and root test for the convergence of infinite series.
Connection to the power series
Like the geometric series, a power series has one parameter for a common variable raised to successive powers, corresponding to the geometric series's common ratio r, but it has additional parameters, one for each term in the series, for the distinct coefficients of each power of the variable, rather than just a single additional parameter for all terms, the common coefficient a of each term in a geometric series. The geometric series can therefore be considered a class of power series in which the sequence of coefficients satisfies c_k = a for all k and the variable is set to the common ratio r.
This special class of power series plays an important role in mathematics, for instance for the study of ordinary generating functions in combinatorics and the summation of divergent series in analysis. Many other power series can be written as transformations and combinations of geometric series, making the geometric series formula a convenient tool for calculating formulas for those power series as well.
As a power series, the geometric series has a radius of convergence of 1. This could be seen as a consequence of the Cauchy–Hadamard theorem and the fact that lim (n → ∞) |a|^(1/n) = 1 for any nonzero a, or as a consequence of the ratio test for the convergence of infinite series, with the ratio of successive terms |ar^(n+1) / (ar^n)| = |r| implying convergence only for |r| < 1. However, both the ratio test and the Cauchy–Hadamard theorem are proven using the geometric series formula as a logically prior result, so such reasoning would be subtly circular.
Background
2,500 years ago, Greek mathematicians believed that an infinitely long list of positive numbers must sum to infinity. Therefore, Zeno of Elea created a paradox, demonstrating as follows: in order to walk from one place to another, one must first walk half the distance there, and then half of the remaining distance, and half of that remaining distance, and so on, covering infinitely many intervals before arriving. In doing so, he partitioned a fixed distance into an infinitely long list of halved remaining distances, each with a length greater than zero. Zeno's paradox revealed to the Greeks that their assumption about an infinitely long list of positive numbers needing to add up to infinity was incorrect.
Euclid's Elements has the distinction of being the world's oldest continuously used mathematical textbook, and it includes a demonstration of the sum of finite geometric series in Book IX, Proposition 35, illustrated in an adjacent figure.
Archimedes in his The Quadrature of the Parabola used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. Archimedes' theorem states that the total area under the parabola is 4/3 of the area of the blue triangle. His method was to dissect the area into infinitely many triangles as shown in the adjacent figure. He determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth. Assuming that the blue triangle has area 1, then, the total area is the sum of the infinite series
1 + 2(1/8) + 4(1/8)^2 + 8(1/8)^3 + ⋯.
Here the first term represents the area of the blue triangle, the second term is the area of the two green triangles, the third term is the area of the four yellow triangles, and so on. Simplifying the fractions gives
1 + 1/4 + 1/16 + 1/64 + ⋯,
a geometric series with common ratio 1/4, and its sum is:
1 / (1 − 1/4) = 4/3.
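Numerically, the partial sums of this series approach 4/3, as the following short sketch (added here for illustration) shows.

```python
# Partial sums of 1 + 1/4 + 1/16 + ... approach 4/3, Archimedes' result for the parabolic segment.
total, term = 0.0, 1.0
for _ in range(20):
    total += term
    term /= 4
print(total, 4 / 3)   # both print 1.3333...
```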
In addition to his elegantly simple proof of the divergence of the harmonic series, Nicole Oresme proved that the arithmetico-geometric series known as Gabriel's Staircase converges to a finite value:
1/2 + 2/4 + 3/8 + 4/16 + ⋯ = 2.
The diagram for his geometric proof, similar to the adjacent diagram, shows a two-dimensional geometric series. The first dimension is horizontal, in the bottom row, representing the geometric series with initial value 1/2 and common ratio 1/2, which sums to 1.
The second dimension is vertical, where that bottom-row sum is a new initial term and each subsequent row above it shrinks according to the same common ratio 1/2, making another geometric series with sum 2:
1 + 1/2 + 1/4 + 1/8 + ⋯ = 2.
This approach generalizes usefully to higher dimensions, and that generalization is described below.
Applications
As mentioned above, the geometric series can be applied in the field of economics. In that setting, the common ratio of a geometric series may correspond to the rates of increase and decrease of price levels, called inflation rates and deflation rates, or to the rates of increase in values of investments, such as rates of return and interest rates. More specifically in mathematical finance, geometric series can also be applied in time value of money; that is, to represent the present values of perpetual annuities, sums of money to be paid each year indefinitely into the future. This sort of calculation is used to compute the annual percentage rate of a loan, such as a mortgage loan. It can also be used to estimate the present value of expected stock dividends, or the terminal value of a financial asset assuming a stable growth rate. However, the assumption that interest rates are constant is generally incorrect and payments are unlikely to continue forever, since the issuer of the perpetual annuity may lose its ability or end its commitment to make continued payments, so estimates like these are only heuristic guidelines for decision making rather than scientific predictions of actual current values.
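A minimal sketch of the perpetuity calculation follows; the payment C and discount rate i are illustrative assumptions, not figures from the article. It compares the closed form C/i with a truncated sum of discounted payments.

```python
# Present value of a perpetuity: a payment C received at the end of every year,
# discounted at a constant rate i, is the geometric series
#   PV = C/(1+i) + C/(1+i)^2 + ... = C/i.
C, i = 1000.0, 0.05                                          # assumed payment and rate
pv_closed_form = C / i
pv_truncated = sum(C / (1 + i) ** t for t in range(1, 501))  # first 500 payments only
print(pv_closed_form, pv_truncated)                          # 20000.0 vs. a value just below it
```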
In addition to finding the area enclosed by a parabola and a line in Archimedes' The Quadrature of the Parabola, the geometric series may also be applied in finding the Koch snowflake's area, described as the union of infinitely many equilateral triangles (see figure). Each side of a green triangle is exactly 1/3 the size of a side of the large blue triangle and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. All of these triangles can be represented in terms of a geometric series: the blue triangle's area is the first term, the three green triangles' combined area is the second term, the twelve yellow triangles' combined area is the third term, and so forth. Excluding the initial 1, this series has a common ratio 4/9, and by taking the blue triangle as a unit of area, the total area of the snowflake is:
1 + (1/3) / (1 − 4/9) = 1 + 3/5 = 8/5.
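The same arithmetic can be checked in a couple of lines (an added illustration, using the article's normalization of the blue triangle to unit area):

```python
# Koch snowflake area, taking the initial blue triangle as 1 unit of area.
# Each iteration adds triangles whose combined area is 4/9 of the previously added area,
# starting from 1/3, so the extra area is (1/3) * (1 + 4/9 + (4/9)**2 + ...) = (1/3)/(1 - 4/9).
extra = (1 / 3) / (1 - 4 / 9)
print(1 + extra)   # 1.6, i.e. 8/5 of the original triangle's area
```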
Various topics in computer science may include the application of geometric series in the following:
Algorithm analysis: analyzing the time complexity of recursive algorithms (like divide-and-conquer) and in amortized analysis for operations with varying costs, such as dynamic array resizing (see the sketch after this list).
Data structures: analyzing the space and time complexities of operations in data structures like balanced binary search trees and heaps.
Computer graphics: crucial in rendering algorithms for anti-aliasing, for mipmapping, and for generating fractals, where the scale of detail varies geometrically.
Networking and communication: modelling retransmission delays in exponential backoff algorithms and are used in data compression and error-correcting codes for efficient communication.
Probabilistic and randomized algorithms: analyzing random walks, Markov chains, and geometric distributions, which are essential in probabilistic and randomized algorithms.
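As an illustration of the amortized-analysis point in the first item above (a sketch added here, not part of the article), doubling a dynamic array's capacity produces total copying costs that form a finite geometric series bounded by 2n, so appends are amortized O(1).

```python
# Doubling a dynamic array's capacity costs 1, 2, 4, ..., copy operations in total,
# a finite geometric series that sums to less than 2n for n appends.
def total_copy_cost(n_appends: int) -> int:
    cost, capacity = 0, 1
    for size in range(1, n_appends + 1):
        if size > capacity:          # array is full: allocate double capacity and copy
            cost += capacity
            capacity *= 2
    return cost

n = 10 ** 6
print(total_copy_cost(n), 2 * n)     # total copying stays below 2n
```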
Beyond real and complex numbers
While geometric series with real and complex number parameters a and r are most common, geometric series of more general terms such as functions, matrices, and p-adic numbers also find application. The mathematical operations used to express a geometric series given its parameters are simply addition and repeated multiplication, and so it is natural, in the context of modern algebra, to define geometric series with parameters from any ring or field. Further generalization to geometric series with parameters from semirings is more unusual, but also has applications; for instance, in the study of fixed-point iteration of transformation functions, as in transformations of automata via rational series.
In order to analyze the convergence of these general geometric series, on top of addition and multiplication one must also have some metric of distance between partial sums of the series. This can introduce new subtleties into the questions of convergence, such as the distinctions between uniform convergence and pointwise convergence in series of functions, and can lead to strong contrasts with intuitions from the real numbers, such as in the convergence of the series 1 + 2 + 4 + 8 + ⋯, with a = 1 and r = 2, to −1 in the 2-adic numbers using the 2-adic absolute value as a convergence metric. In that case, the 2-adic absolute value of the common ratio is |2|₂ = 1/2, and while this is counterintuitive from the perspective of real number absolute value (where |2| = 2, naturally), it is nonetheless well-justified in the context of p-adic analysis.
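A short sketch (added for illustration) makes the 2-adic example concrete: the partial sums of 1 + 2 + 4 + ⋯ are 2^(n+1) − 1, and their 2-adic distance from −1 shrinks to zero.

```python
# In the 2-adic metric, |x|_2 = 2**(-v), where v is the exponent of 2 dividing x.
# The partial sums of 1 + 2 + 4 + ... are 2**(n+1) - 1, so their 2-adic distance
# to -1 is |2**(n+1)|_2 = 2**(-(n+1)), which shrinks to zero.
def two_adic_abs(x: int) -> float:
    if x == 0:
        return 0.0
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return 2.0 ** (-v)

for n in range(1, 8):
    partial = 2 ** (n + 1) - 1                # 1 + 2 + ... + 2**n
    print(n, two_adic_abs(partial - (-1)))    # distance to -1 halves at each step
```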
When the multiplication of the parameters is not commutative, as it often is not for matrices or general physical operators, particularly in quantum mechanics, then the standard way of writing the geometric series, a + ar + ar^2 + ⋯, multiplying from the right, may need to be distinguished from the alternative a + ra + r^2 a + ⋯, multiplying from the left, and also the symmetric a + r^(1/2) a r^(1/2) + r a r + ⋯, multiplying half on each side. These choices may correspond to important alternatives with different strengths and weaknesses in applications, as in the case of ordering the mutual interferences of drift and diffusion differently at infinitesimal temporal scales in Ito integration and Stratonovich integration in stochastic calculus.
References
Beyer, W. H. CRC Standard Mathematical Tables, 28th ed. Boca Raton, FL: CRC Press, p. 8, 1987.
Courant, R. and Robbins, H. "The Geometric Progression." §1.2.3 in What Is Mathematics?: An Elementary Approach to Ideas and Methods, 2nd ed. Oxford, England: Oxford University Press, pp. 13–14, 1996.
James Stewart (2002). Calculus, 5th ed., Brooks Cole.
Larson, Hostetler, and Edwards (2005). Calculus with Analytic Geometry, 8th ed., Houghton Mifflin Company.
Pappas, T. "Perimeter, Area & the Infinite Series." The Joy of Mathematics. San Carlos, CA: Wide World Publ./Tetra, pp. 134–135, 1989.
Roger B. Nelsen (1997). Proofs without Words: Exercises in Visual Thinking, The Mathematical Association of America.
History and philosophy
C. H. Edwards Jr. (1994). The Historical Development of the Calculus, 3rd ed., Springer. .
Eli Maor (1991). To Infinity and Beyond: A Cultural History of the Infinite, Princeton University Press.
Morr Lazerowitz (2000). The Structure of Metaphysics (International Library of Philosophy), Routledge.
Economics
Carl P. Simon and Lawrence Blume (1994). Mathematics for Economists, W. W. Norton & Company.
Mike Rosser (2003). Basic Mathematics for Economists, 2nd ed., Routledge.
Biology
Edward Batschelet (1992). Introduction to Mathematics for Life Scientists, 3rd ed., Springer.
Richard F. Burton (1998). Biology by Numbers: An Encouragement to Quantitative Thinking, Cambridge University Press.
External links
"Geometric Series" by Michael Schreiber, Wolfram Demonstrations Project, 2007.
Articles containing proofs
Ratios | Geometric series | [
"Mathematics"
] | 3,357 | [
"Articles containing proofs",
"Arithmetic",
"Ratios"
] |
12,644 | https://en.wikipedia.org/wiki/Glycolysis | Glycolysis is the metabolic pathway that converts glucose (C6H12O6) into pyruvate and, in most organisms, occurs in the liquid part of cells (the cytosol). The free energy released in this process is used to form the high-energy molecules adenosine triphosphate (ATP) and reduced nicotinamide adenine dinucleotide (NADH). Glycolysis is a sequence of ten reactions catalyzed by enzymes.
The wide occurrence of glycolysis in other species indicates that it is an ancient metabolic pathway. Indeed, the reactions that make up glycolysis and its parallel pathway, the pentose phosphate pathway, can occur in the oxygen-free conditions of the Archean oceans, also in the absence of enzymes, catalyzed by metal ions, meaning this is a plausible prebiotic pathway for abiogenesis.
The most common type of glycolysis is the Embden–Meyerhof–Parnas (EMP) pathway, which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the Entner–Doudoroff pathway and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway.
The glycolysis pathway can be separated into two phases:
Investment phase – wherein ATP is consumed
Yield phase – wherein more ATP is produced than originally consumed
Overview
The overall reaction of glycolysis is:
Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups:
Each Pi exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute 2 H+ overall
Each Pi liberates an oxygen atom when it binds to an adenosine diphosphate (ADP) molecule, contributing 2 O overall
Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced.
In high-oxygen (aerobic) conditions, eukaryotic cells can continue from glycolysis to metabolise the pyruvate through the citric acid cycle or the electron transport chain to produce significantly more ATP.
Importantly, under low-oxygen (anaerobic) conditions, glycolysis is the only biochemical pathway in eukaryotes that can generate ATP, and, for many anaerobic respiring organisms the most important producer of ATP. Therefore, many organisms have evolved fermentation pathways to recycle NAD+ to continue glycolysis to produce ATP for survival. These pathways include ethanol fermentation and lactic acid fermentation.
History
The modern understanding of the pathway of glycolysis took almost 100 years to fully learn. The combined results of many smaller experiments were required to understand the entire pathway.
The first steps in understanding glycolysis began in the 19th century. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. The French scientist Louis Pasteur researched this issue during the 1850s. His experiments showed that alcohol fermentation occurs by the action of living microorganisms, yeasts, and that glucose consumption decreased under aerobic conditions (the Pasteur effect).
The component steps of glycolysis were first analysed by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast, due to the action of enzymes in the extract. This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled laboratory setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate.
The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly, then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP).
Arthur Harden and William Young along with Nick Sheppard determined, in a second experiment, that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP and NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began by observing that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued with the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character.
In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid.
In one paper, Meyerhof and the scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work had proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz found that the equilibrium constants for the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. They further ruled out diphosphoglyceraldehyde as a possible intermediate in glycolysis.
With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of that pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were due to the very short lifetime and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways.
Sequence of reactions
Summary of reactions
Preparatory phase
The first five steps of glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P).
Once glucose enters the cell, the first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration inside the cell low, promoting continuous transport of blood glucose into the cell through the plasma membrane transporters. In addition, phosphorylation blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
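To illustrate the kinetic difference between hexokinase and glucokinase described above, here is a minimal Michaelis–Menten sketch; the Km values and glucose concentrations are rough, assumed figures for illustration only, not data from the article.

```python
# Michaelis-Menten rate v = Vmax * [S] / (Km + [S]) for the two glucose-phosphorylating enzymes.
# Assumed, illustrative Km values: hexokinase well below normal blood glucose,
# glucokinase in the vicinity of it.
def mm_rate(s_mM, vmax=1.0, km_mM=0.1):
    return vmax * s_mM / (km_mM + s_mM)

for glucose_mM in (1.0, 5.0, 10.0):                      # roughly fasting, normal, post-meal
    hexokinase = mm_rate(glucose_mM, km_mM=0.1)          # nearly saturated at all three levels
    glucokinase = mm_rate(glucose_mM, km_mM=8.0)         # rate keeps rising with blood glucose
    print(f"{glucose_mM:>5} mM glucose: hexokinase {hexokinase:.2f}, glucokinase {glucokinase:.2f}")
```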
G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point.
The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below).
The energy expenditure of another ATP in this step is justified in 2 ways: The glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by phosphofructokinase 1 (PFK-1) is coupled to the hydrolysis of ATP (an energetically favorable step) it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below).
Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell.
The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archaea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaeal species.
Cofactors: Mg2+
Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose), and glyceraldehyde 3-phosphate (an aldose). There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring.
Electrons delocalized in the carbon-carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself via resonance charge distribution and by the presence of a charged ion prosthetic group.
Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP) that proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation.
Pay-off phase
The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose.
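The bookkeeping in this paragraph can be written out as a trivial tally (an added illustration of the counts stated above):

```python
# Per-glucose bookkeeping of the two phases (a simple tally, not a simulation).
atp_invested = 2            # hexokinase and phosphofructokinase steps
atp_made     = 2 * 2        # two substrate-level phosphorylations, each occurring twice
nadh_made    = 1 * 2        # glyceraldehyde-3-phosphate dehydrogenase step, occurring twice
print("net ATP :", atp_made - atp_invested)   # 2
print("net NADH:", nadh_made)                 # 2
```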
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (), which dissociates to contribute the extra H+ ion and gives a net charge of -3 on both sides.
Here, arsenate (), an anion akin to inorganic phosphate may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1-3 bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis.
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway.
ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 both sides.
Cofactors: Mg2+
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
Cofactors: 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration.
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
Cofactors: Mg2+
Biochemical logic
The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis after the first control point.
In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis after the second control point.
Free energy changes
The change in free energy, ΔG, for each step in the glycolysis pathway can be calculated using ΔG = ΔG°′ + RT ln Q, where Q is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable.
Using the measured concentrations of each step, and the standard free energy changes, the actual free energy change can be calculated. (Neglecting this is very common—the delta G of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks).
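A minimal sketch of that calculation, using made-up illustrative numbers rather than the erythrocyte data referred to here, shows how a reaction with a positive standard free energy change can still be favorable when the reaction quotient Q is small:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 310.15     # K, roughly body temperature

def delta_g(delta_g0_prime_kj, q):
    """Actual free energy change: dG = dG°' + RT ln Q."""
    return delta_g0_prime_kj + R * T * math.log(q)

# Illustrative numbers only: a reaction with dG°' = +1.7 kJ/mol still runs forward
# if downstream consumption of the product keeps Q low.
print(delta_g(1.7, 0.1))    # about -4.2 kJ/mol: favorable under these conditions
```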
From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps—the ones with large negative free energy changes—are not in equilibrium and are referred to as irreversible; such steps are often subject to regulation.
Step 5 in the figure is shown behind the other steps, because that step is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that ΔG is not zero indicates that the actual concentrations in the erythrocyte are not accurately known.
Regulation
The enzymes that catalyse glycolysis are regulated via a range of biological mechanisms in order to control overall flux through the pathway. This is vital both for homeostasis in a static environment and for metabolic adaptation to a changing environment or need. The details of regulation for some enzymes are highly conserved between species, whereas others vary widely.
Gene Expression: Firstly, the cellular concentrations of glycolytic enzymes are modulated via regulation of gene expression via transcription factors, with several glycolysis enzymes themselves acting as regulatory protein kinases in the nucleus.
Allosteric inhibition and activation by metabolites: In particular end-product inhibition of regulated enzymes by metabolites such as ATP serves as negative feedback regulation of the pathway.
Allosteric inhibition and activation by Protein-protein interactions (PPI). Indeed, some proteins interact with and regulate multiple glycolytic enzymes.
Post-translational modification (PTM). In particular, phosphorylation and dephosphorylation is a key mechanism of regulation of pyruvate kinase in the liver.
Localization
Regulation by insulin in animals
In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls the pancreatic beta cells cease insulin production, but, instead, stimulate the neighboring pancreatic alpha cells to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver glucagon and epinephrine cause the phosphorylation of the key, regulated enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis.
Regulated Enzymes in Glycolysis
The three regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and exercise thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver.
In addition hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, or, in the case of glucokinase, to the blood sugar level in the blood to impart entirely intracellular controls of the glycolytic pathway in different tissues (see below).
When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it is alternatively converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. The regulated enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, during fasting, exercise or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, and then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen stores have been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis occurring at the same time as gluconeogenesis and glycogenolysis.
Hexokinase and glucokinase
All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles).
Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and will only phosphorylate the glucose entering the cell to form G6P, when the glucose in the blood is abundant. This being the first step in the glycolytic pathway in the liver, it therefore imparts an additional layer of control of the glycolytic pathway in this organ.
Phosphofructokinase
Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP).
F2,6BP is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver F2,6BP is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood.
ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP. Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell.
Citrate inhibits phosphofructokinase when tested in vitro by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect in vivo, because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis.
TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase) which cleaves the phosphate at carbon-2 producing F6P. It can also behave as a kinase (PFK2) adding a phosphate onto carbon-2 of F6P which produces F2,6BP. In humans, the TIGAR protein is encoded by the C12orf5 gene. The TIGAR enzyme will hinder the forward progression of glycolysis by creating a build-up of fructose-6-phosphate (F6P), which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P will shunt carbons into the pentose phosphate pathway.
Pyruvate kinase
The final step of glycolysis is catalysed by pyruvate kinase to form pyruvate and another ATP. It is regulated by a range of different transcriptional, covalent and non-covalent regulation mechanisms, which can vary widely in different tissues. For example, in the liver, pyruvate kinase is regulated based on glucose availability. During fasting (no glucose available), glucagon activates protein kinase A which phosphorylates pyruvate kinase to inhibit it. An increase in blood sugar leads to secretion of insulin, which activates protein phosphatase 1, leading to dephosphorylation and re-activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. Conversely, the isoform of pyruvate kinase found in muscle is not affected by protein kinase A (which is activated by adrenaline in that tissue), so that glycolysis remains active in muscles even during fasting.
Post-glycolysis processes
The overall process of glycolysis is:
Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
If glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available.
Anoxic regeneration of NAD+
One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation:
Pyruvate + NADH + H+ → Lactate + NAD+
This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time.
Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol.
Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source.
Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis.
The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift to glucose fermentation from glucose oxidation to carbon dioxide and water, when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells.
The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle.
Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis"; however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen.
In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds.
Aerobic regeneration of NAD+ and further catabolism of pyruvate
In aerobic eukaryotes, a complex mechanism has developed to use the oxygen in air as the final electron acceptor, in a process called oxidative phosphorylation. Aerobic prokaryotes, which lack mitochondria, use a variety of simpler mechanisms.
Firstly, the NADH + H+ generated by glycolysis has to be transferred to the mitochondrion to be oxidized, and thus to regenerate the NAD+ necessary for glycolysis to continue. However, the inner mitochondrial membrane is impermeable to NADH and NAD+. Use is therefore made of two "shuttles" to transport the electrons from NADH across the mitochondrial membrane. They are the malate-aspartate shuttle and the glycerol phosphate shuttle. In the former the electrons from NADH are transferred to cytosolic oxaloacetate to form malate. The malate then traverses the inner mitochondrial membrane into the mitochondrial matrix, where it is reoxidized by NAD+ forming intra-mitochondrial oxaloacetate and NADH. The oxaloacetate is then re-cycled to the cytosol via its conversion to aspartate which is readily transported out of the mitochondrion. In the glycerol phosphate shuttle electrons from cytosolic NADH are transferred to dihydroxyacetone phosphate to form glycerol-3-phosphate which readily traverses the outer mitochondrial membrane. Glycerol-3-phosphate is then reoxidized to dihydroxyacetone phosphate, donating its electrons to FAD instead of NAD+. This reaction takes place on the inner mitochondrial membrane, allowing FADH2 to donate its electrons directly to coenzyme Q (ubiquinone) which is part of the electron transport chain which ultimately transfers electrons to molecular oxygen (O2), with the formation of water, and the release of energy eventually captured in the form of ATP.
The glycolytic end-product, pyruvate (plus NAD+), is converted to acetyl-CoA, CO2, and NADH + H+ within the mitochondria in a process called pyruvate decarboxylation.
The resulting acetyl-CoA enters the citric acid cycle (or Krebs Cycle), where the acetyl group of the acetyl-CoA is converted into carbon dioxide by two decarboxylation reactions with the formation of yet more intra-mitochondrial NADH + H+.
The intra-mitochondrial NADH + H+ is oxidized to NAD+ by the electron transport chain, using oxygen as the final electron acceptor to form water. The energy released during this process is used to create a hydrogen ion (or proton) gradient across the inner membrane of the mitochondrion.
Finally, the proton gradient is used to produce about 2.5 ATP for every NADH + H+ oxidized in a process called oxidative phosphorylation.
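Putting the pieces of this section together, a rough tally of the maximum aerobic ATP yield per glucose can be sketched as follows; the conversion factors of about 2.5 ATP per NADH and 1.5 ATP per FADH2, and the choice of the malate-aspartate shuttle, are commonly quoted assumptions rather than figures from this article.

```python
# A rough per-glucose tally using commonly quoted modern conversion factors;
# exact totals vary by source and by which shuttle carries cytosolic NADH.
atp = 0.0
atp += 2                    # net substrate-level ATP from glycolysis
atp += 2                    # GTP/ATP from two turns of the citric acid cycle
atp += 2 * 2.5              # 2 NADH from glycolysis (assuming malate-aspartate shuttle)
atp += 2 * 2.5              # 2 NADH from pyruvate decarboxylation
atp += 6 * 2.5              # 6 NADH from two turns of the citric acid cycle
atp += 2 * 1.5              # 2 FADH2 from two turns of the citric acid cycle
print(atp)                  # about 32 ATP per glucose under these assumptions
```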
Conversion of carbohydrates into fatty acids and cholesterol
The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl CoA needs to be transported into cytosol where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to mitochondrion as malate (and then back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) which is the rate limiting step controlling the synthesis of cholesterol. Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D.
Conversion of pyruvate into oxaloacetate for the citric acid cycle
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity.
In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl CoA that combines with oxaloacetate in the cycle.
To cataplerotically remove oxaloacetate from the citric cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins.
Intermediates for other pathways
This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis.
The following metabolic pathways, among many more, are all strongly reliant on glycolysis as a source of metabolites:
Pentose phosphate pathway, which begins with the dehydrogenation of glucose-6-phosphate, the first intermediate to be produced by glycolysis, produces various pentose sugars, and NADPH for the synthesis of fatty acids and cholesterol.
Glycogen synthesis also starts with glucose-6-phosphate at the beginning of the glycolytic pathway.
Glycerol, for the formation of triglycerides and phospholipids, is produced from the glycolytic intermediate glyceraldehyde-3-phosphate.
Various post-glycolytic pathways:
Fatty acid synthesis
Cholesterol synthesis
The citric acid cycle which in turn leads to:
Amino acid synthesis
Nucleotide synthesis
Tetrapyrrole synthesis
Although gluconeogenesis and glycolysis share many intermediates, the one is not functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. Indeed, if both sets of reactions were highly active at the same time, the net result would be the hydrolysis of four high-energy phosphate bonds (two ATP and two GTP) per reaction cycle.
NAD+ is the oxidizing agent in glycolysis, as it is in most other energy-yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see Anoxic regeneration of NAD+ above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2, and NADPH are formed. NADPH is also formed by the pentose phosphate pathway which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate.
Glycolysis in disease
Diabetes
Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, insulin resistance or low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when glucose is overproduced by the liver without having a means of being broken down by the body, hyperglycemia results.
Genetic diseases
Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of occurring mutations result in an inability of the cell to respire, and therefore cause the death of the cell at an early stage. However, some mutations (glycogen storage diseases and other inborn errors of carbohydrate metabolism) are seen, with one notable example being pyruvate kinase deficiency, which leads to chronic hemolytic anemia.
In combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, glycolysis is reduced by about 50%, which is caused by reduced lipoylation of mitochondrial enzymes such as the pyruvate dehydrogenase complex and α-ketoglutarate dehydrogenase complex.
Cancer
Malignant tumor cells perform glycolysis at a rate that is ten times faster than that of their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes which result in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolysis enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than because of the uncontrolled growth of cells.
A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism.
This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET).
There is ongoing research to affect mitochondrial metabolism and treat cancer by reducing glycolysis and thus starving cancerous cells in various new ways, including a ketogenic diet.
Interactive pathway map
The diagram below shows human protein names. Names in other organisms may be different and the number of isozymes (such as HK1, HK2, ...) is likely to be different too.
Alternative nomenclature
Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle.
Structure of glycolysis components in Fischer projections and polygonal model
The intermediates of glycolysis depicted in Fischer projections show the chemical changes occurring step by step. Such an image can be compared to the polygonal model representation.
See also
Carbohydrate catabolism
Citric acid cycle
Cori cycle
Fermentation (biochemistry)
Gluconeogenesis
Glycolytic oscillation
Glycogenoses (glycogen storage diseases)
Inborn errors of carbohydrate metabolism
Pentose phosphate pathway
Pyruvate decarboxylation
Triose kinase
References
External links
A Detailed Glycolysis Animation provided by IUBMB (Adobe Flash Required)
The Glycolytic enzymes in Glycolysis at RCSB PDB
Glycolytic cycle with animations at wdv.com
Metabolism, Cellular Respiration and Photosynthesis - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
The chemical logic behind glycolysis at ufp.pt
Expasy biochemical pathways poster at ExPASy
metpath: Interactive representation of glycolysis
Biochemical reactions
Carbohydrates
Cellular respiration
Metabolic pathways | Glycolysis | [
"Chemistry",
"Biology"
] | 10,402 | [
"Biomolecules by chemical classification",
"Carbohydrate metabolism",
"Carbohydrates",
"Cellular respiration",
"Biochemistry",
"Glycolysis",
"Biochemical reactions",
"Organic compounds",
"Carbohydrate chemistry",
"Metabolic pathways",
"Metabolism"
] |
12,656 | https://en.wikipedia.org/wiki/Godwin%27s%20law | Godwin's law (or Godwin's rule), short for Godwin's law of Nazi analogies, is an Internet adage asserting: "As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1."
History
Promulgated by the American attorney and author Mike Godwin in 1990, Godwin's law originally referred specifically to Usenet newsgroup discussions. He stated that he introduced Godwin's law in 1990 as an experiment in memetics, specifically to address the ubiquity of such comparisons which he believes regrettably trivialize the Holocaust. Later, it was applied to any threaded online discussion, such as Internet forums, chat rooms, and social-media comment threads, as well as to speeches, articles, and other rhetoric where such comparisons occur.
In 2012, Godwin's law became an entry in the third edition of the Oxford English Dictionary.
Generalization, corollaries, and usage
Godwin's law can be applied mistakenly or abused as a distraction, a diversion, or even censorship, when miscasting an opponent's argument as hyperbole even when the comparison made by the argument is appropriate. Godwin has criticized the over-application of the adage, claiming that it does not articulate a fallacy, but rather is intended to reduce the frequency of inappropriate and hyperbolic comparisons:
In 2021, Harvard researchers published an article showing that the Nazi-comparison phenomenon does not occur with statistically meaningful frequency in Reddit discussions.
Godwin's law has many corollaries, some considered more canonical (by being adopted by Godwin himself) than others. For example, many newsgroups and other Internet discussion forums have a tradition that, when a Nazi or Hitler comparison is made, the thread is finished and whoever made the comparison loses whatever debate is in progress. This idea is itself sometimes mistakenly referred to as Godwin's law.
Godwin rejects the idea that whoever invokes Godwin's law has lost the argument, and suggests that, applied appropriately, the rule "should function less as a conversation ender and more as a conversation starter." In an interview with Time Magazine, Godwin said that making comparisons to Hitler would actually be appropriate under the right circumstances:
In August 2017, while commenting on the Unite the Right rally in Charlottesville, Virginia, Godwin himself endorsed and encouraged social-media users to compare its "alt-right" participants to Nazis.
Godwin has denied the need to update or amend the rule. In June 2018, he wrote, in an opinion piece for the Los Angeles Times: "It still serves us as a tool to recognize specious comparisons to Nazism – but also, by contrast, to recognize comparisons that aren't."
Additionally, when a potential subject of Godwin's law seems "intent on making the Hitler comparison", the comparison with fascism may be appropriate rather than devaluing the argument; a "MAGA" corollary to the Law recognizes the pernicious embrace of Nazi-inspired tropes and phrases by the "alt-right".
In 2023, Godwin published an opinion piece in The Washington Post stating "Yes, it's okay to compare Trump to Hitler. Don't let me stop you." In the article, Godwin says: "But when people draw parallels between Donald Trump’s 2024 candidacy and Hitler’s progression from fringe figure to Great Dictator, we aren’t joking. Those of us who hope to preserve our democratic institutions need to underscore the resemblance before we enter the twilight of American democracy."
See also
Association fallacy
Goebbels gap
Law of truly large numbers
List of eponymous laws
Nazi analogies
Poe's law
Straw man
Thought-terminating cliché
References
Further reading
External links
1990 neologisms
Adages
Adolf Hitler
Eponyms
Genetic fallacies
Internet terminology
Internet trolling
Nazi analogies
Political Internet memes
Principles | Godwin's law | [
"Technology"
] | 806 | [
"Computing terminology",
"Internet terminology"
] |
12,666 | https://en.wikipedia.org/wiki/Gluon | A gluon ( ) is a type of massless elementary particle that mediates the strong interaction between quarks, acting as the exchange particle for the interaction. Gluons are massless vector bosons, thereby having a spin of 1. Through the strong interaction, gluons bind quarks into groups according to quantum chromodynamics (QCD), forming hadrons such as protons and neutrons.
Gluons carry the color charge of the strong interaction, thereby participating in the strong interaction as well as mediating it. Because gluons carry the color charge, QCD is more difficult to analyze compared to quantum electrodynamics (QED) where the photon carries no electric charge.
The term was coined by Murray Gell-Mann in 1962 for being similar to an adhesive or glue that keeps the nucleus together. Together with the quarks, these particles were referred to as partons by Richard Feynman.
Properties
The gluon is a vector boson, which means it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the field polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass (if any) to less than a few MeV/c2. The gluon has negative intrinsic parity.
Counting gluons
There are eight independent types of gluons in QCD. This is unlike the photon of QED or the three W and Z bosons of the weak interaction.
Additionally, gluons are subject to the color charge phenomena. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons carry both color and anticolor. This gives nine possible combinations of color and anticolor in gluons. The following is a list of those combinations (and their schematic names):
red–antired red–antigreen red–antiblue
green–antired green–antigreen green–antiblue
blue–antired blue–antigreen blue–antiblue
These possible combinations are only effective states, not the actual observed color states of gluons. To understand how they are combined, it is necessary to consider the mathematics of color charge in more detail.
Color singlet states
The stable strongly interacting particles, including hadrons like the proton or the neutron, are observed to be "colorless". More precisely, they are in a "color singlet" state, mathematically analogous to a spin singlet state. Such states can interact with other color singlets, but not with colored states; because no long-range gluon-mediated interactions are observed, this indicates that gluons in the singlet state do not exist either.
The color singlet state is (rr̄ + gḡ + bb̄)/√3.
If one could measure the color of the state, there would be equal probabilities of it being red–antired, blue–antiblue, or green–antigreen.
Eight color states
There are eight remaining independent color states corresponding to the "eight types" or "eight colors" of gluons. Since the states can be mixed together, there are multiple ways of presenting these states. These are known as the "color octet", and a commonly used list for each is:
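One standard choice can be written as follows (several equivalent conventions exist; the list below is a conventional reconstruction in LaTeX rather than a unique choice):

```latex
\begin{aligned}
&(r\bar{b}+b\bar{r})/\sqrt{2}, \qquad -i(r\bar{b}-b\bar{r})/\sqrt{2},\\
&(r\bar{g}+g\bar{r})/\sqrt{2}, \qquad -i(r\bar{g}-g\bar{r})/\sqrt{2},\\
&(b\bar{g}+g\bar{b})/\sqrt{2}, \qquad -i(b\bar{g}-g\bar{b})/\sqrt{2},\\
&(r\bar{r}-b\bar{b})/\sqrt{2}, \qquad (r\bar{r}+b\bar{b}-2g\bar{g})/\sqrt{6}.
\end{aligned}
```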
These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 or 2³. There is no way to add any combination of these states to produce any others. It is also impossible to add them to make rr̄, gḡ, or bb̄, the forbidden singlet state. There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results.
Group theory details
Formally, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in Nf flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers, like photons or gluons, is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is N² − 1.
In group theory, there are no color singlet gluons because quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known a priori reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3). If the group were U(3), the ninth (colorless singlet) gluon would behave like a "second photon" and not like the other eight gluons.
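As an illustrative check of this counting (a sketch assuming nothing beyond the standard, widely tabulated Gell-Mann matrices), the 3² − 1 = 8 generators of SU(3) in the fundamental representation can be written down and verified to be traceless and Hermitian:

```python
import numpy as np

# The eight Gell-Mann matrices: a standard basis for the generators of SU(3)
# in the fundamental representation; each corresponds to one gluon state.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.diag([1, 1, -2]) / np.sqrt(3)

gell_mann = [l1, l2, l3, l4, l5, l6, l7, l8]

# N^2 - 1 = 8 generators for SU(3), all traceless and Hermitian.
assert len(gell_mann) == 3**2 - 1
for m in gell_mann:
    assert abs(np.trace(m)) < 1e-12        # traceless
    assert np.allclose(m, m.conj().T)      # Hermitian
print("8 traceless Hermitian generators, as expected for SU(3)")
```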
Confinement
Since gluons themselves carry color charge, they participate in strong interactions. These gluon–gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to about 10⁻¹⁵ meters, roughly the size of a nucleon. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark–antiquark pair out of the vacuum rather than increase the length of the flux tube.
One consequence of the hadron-confinement property of gluons is that they are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons.
Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons — called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles.
Experimental observations
Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As revealed in 1978 summer conferences, the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance Υ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin = 1 nature of the gluon (see also the recollection and PLUTO experiments).
In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), again three-jet topologies were observed, now clearly visible and interpreted as quark–antiquark gluon bremsstrahlung, by the TASSO, MARK-J and PLUTO experiments (later in 1980 also by JADE). The spin = 1 property of the gluon was confirmed in 1980 by the TASSO and PLUTO experiments (see also the review). In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result.
The gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS, in the years 1996–2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA. The gluon density in the photon (when behaving hadronically) also has been measured.
Color confinement is verified by the failure of free quark searches (searches of fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate the quantum color and flavor numbers; however at Fermilab single production of top quarks has been shown. No glueball has been demonstrated.
Deconfinement was claimed in 2000 at CERN SPS in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, less interactive than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments. A quark–gluon plasma state has been confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010.
Jefferson Lab's Continuous Electron Beam Accelerator Facility, in Newport News, Virginia, is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility – Brookhaven National Laboratory, on Long Island, New York – for funds to build a new electron-ion collider. In December 2019, the US Department of Energy selected the Brookhaven National Laboratory to host the electron-ion collider.
See also
Quark
Hadron
Meson
Gauge boson
Quark model
Quantum chromodynamics
Quark–gluon plasma
Color confinement
Glueball
Gluon field
Gluon field strength tensor
Exotic hadrons
Standard Model
Three-jet event
Deep inelastic scattering
Quantum chromodynamics binding energy
Special unitary group
Hadronization
Color charge
Coupling constant
Footnotes
References
Further reading
Cambridge Handout 8 : Quantum Chromodynamics – Particle Physics
External resources
Big Think website, clear explanation of the QCD Octet
Why are there eight gluons and not nine?
Bosons
Elementary particles
Gauge bosons
Gluons
Quantum chromodynamics
Force carriers
Subatomic particles with spin 1 | Gluon | [
"Physics"
] | 2,174 | [
"Matter",
"Elementary particles",
"Physical phenomena",
"Force carriers",
"Bosons",
"Fundamental interactions",
"Subatomic particles"
] |
12,667 | https://en.wikipedia.org/wiki/Book%20of%20Genesis | The Book of Genesis (from Greek ; ; ) is the first book of the Hebrew Bible and the Christian Old Testament. Its Hebrew name is the same as its first word, ('In the beginning'). Genesis purports to be an account of the creation of the world, the early history of humanity, and the origins of the Jewish people.
Genesis is part of the Torah or Pentateuch, the first five books of the Bible. Tradition credits Moses as the Torah's author. It was probably composed around the 5th century BC, although arguments have been made for as late as the 270s BC. Based on scientific interpretation of archaeological, genetic, and linguistic evidence, some mainstream Bible scholars consider Genesis to be primarily mythological rather than historical.
It is divisible into two parts, the primeval history (chapters 1–11) and the ancestral history (chapters 12–50). The primeval history sets out the author's concepts of the nature of the deity and of humankind's relationship with its maker: God creates a world which is good and fit for humans, but when man corrupts it with sin, God decides to destroy his creation, sparing only the righteous Noah and his family to re-establish the relationship between man and God. The ancestral history (chapters 12–50) tells of the prehistory of Israel, God's chosen people. At God's command, Noah's descendant Abraham journeys from his birthplace (described as Ur of the Chaldeans and whose identification with Sumerian Ur is tentative in modern scholarship) into the God-given land of Canaan, where he dwells as a sojourner, as does his son Isaac and his grandson Jacob. Jacob's name is changed to "Israel", and through the agency of his son Joseph, the children of Israel descend into Egypt, 70 people in all with their households, and God promises them a future of greatness. Genesis ends with Israel in Egypt, ready for the coming of Moses and the Exodus (departure). The narrative is punctuated by a series of covenants with God, successively narrowing in scope from all humankind (the covenant with Noah) to a special relationship with one people alone (Abraham and his descendants through Isaac and Jacob).
In Judaism, the theological importance of Genesis centres on the covenants linking God to his chosen people and the people to the Promised Land.
Title
The name Genesis is from the Latin Vulgate, in turn borrowed or transliterated from Greek Génesis, meaning 'origin'; the Hebrew title Bereshit means 'In [the] beginning'.
Composition
Genesis was written anonymously, but both Jewish and Christian religious tradition attributes the entire Pentateuch—Genesis, Exodus, Leviticus, Numbers and Deuteronomy—to Moses. During the Enlightenment, the philosophers Benedict Spinoza and Thomas Hobbes questioned Mosaic authorship. In the 17th century, Richard Simon proposed that the Pentateuch was written by multiple authors over a long period of time. The involvement of multiple authors is suggested by internal contradictions within the text. For example, Genesis includes two creation narratives.
At the end of the 19th century, most scholars adopted the documentary hypothesis. This theory held that the five books of the Pentateuch came from four sources: the Yahwist (abbreviated as J), the Elohist (E), the Deuteronomist (D) and the Priestly source (P). Each source was held to tell the same basic story, with the sources later combined by various editors. Scholars were able to distinguish sources based on the designations for God. For example, the Yahwist source uses Yahweh, while the Elohistic and Priestly sources use Elohim. Scholars also use repeated and duplicate stories to identify separate sources. In Genesis, these include the two creation stories, three different wife–sister narratives, and the two versions of Abraham sending Hagar and Ishmael into the desert.
According to the documentary hypothesis, J was produced during the 9th century BC in the southern Kingdom of Judah and was believed to be the earliest source. E was written in the northern Kingdom of Israel during the 8th century BC. D was written in Judah in the 7th century BC and associated with the religious reforms of King Josiah . The latest source was P, which was written during the 5th century in Babylon. Based on these dates, Genesis and the rest of the Pentateuch did not reach its final, present-day form until after the Babylonian Exile. Julius Wellhausen argued that the Pentateuch was finalized in the time of Ezra. Ezra 7:14 records that Ezra traveled from Babylon to Jerusalem in 458 BC with God's law in his hand. Wellhausen argued that this was the newly compiled Pentateuch. Nehemiah 8–10, according to Wellhausen, describes the publication and public acceptance of this new law code . There was now a large gap between the earliest sources of the Pentateuch and the period they claimed to describe, which ended .
Most scholars held to the documentary hypothesis until the 1980s. Since then, a number of variations and revisions of the documentary hypothesis have been proposed. The new supplementary hypothesis posits three main sources for the Pentateuch: J, D, and P. The E source is considered no more than a variation of J, and P is considered a body of revisions and expansions to the J (or "non-Priestly") material. The Deuteronomistic source does not appear in Genesis. More recent thinking is that J dates from either just before or during the Babylonian Exile, and the Priestly final edition was made late in the Exilic period or soon after. Russell Gmirkin argues that Genesis was composed in the late 270s BC, drawing on Greek sources like Berossus’ Babyloniaca and reflecting the political context of the Seleucid and Ptolemaic realms.
As for why the book was created, a theory which has gained considerable interest, although still controversial, is that of Persian imperial authorisation. This proposes that the Persians of the Achaemenid Empire, after their conquest of Babylon in 539 BC, agreed to grant Jerusalem a large measure of local autonomy within the empire, but required the local authorities to produce a single law code accepted by the entire community. The two powerful groups making up the community—the priestly families who controlled the Second Temple and who traced their origin to Moses and the wilderness wanderings, and the major landowning families who made up the "elders" and who traced their own origins to Abraham, who had "given" them the land—were in conflict over many issues, and each had its own "history of origins". However, the Persian promise of greatly increased local autonomy for all provided a powerful incentive to cooperate in producing a single text.
Genre
Genesis is an example of a work in the "antiquities" genre, as the Romans knew it, a popular genre telling of the appearance of humans and their ancestors and heroes, with elaborate genealogies and chronologies fleshed out with stories and anecdotes. Notable examples are found in the work of Greek historians of the 6th century BC: their intention was to connect notable families of their own day to a distant and heroic past, and in doing so they did not distinguish between myth, legend, and facts. Professor Jean-Louis Ska of the Pontifical Biblical Institute calls the basic rule of the antiquarian historian the "law of conservation": everything old is valuable, nothing is eliminated. This antiquity was needed to prove the worth of Israel's traditions to the nations (the neighbours of the Jews in the early Persian province of Judea), and to reconcile and unite the various factions within Israel itself.
Describing the work of the biblical authors, John Van Seters wrote that lacking many historical traditions and none from the distant past, "They had to use myths and legends for earlier periods. In order to make sense out of the variety of different and often conflicting versions of stories, and to relate the stories to each other, they fitted them into a genealogical chronology." Tremper Longman describes Genesis as theological history: "the fact that these events took place is assumed, and not argued. The concern of the text is not to prove the history but rather to impress the reader with the theological significance of these acts".
Textual variation
The original manuscripts are lost, and the text of surviving copies varies. There are four major groupings of surviving manuscripts: the Masoretic Text, the Samaritan Pentateuch (in Samaritan script), the Septuagint (a Greek translation), and fragments of Genesis found in the Dead Sea Scrolls. The Dead Sea Scrolls are oldest but cover only a small portion of the book.
Structure
Genesis appears to be structured around the recurring phrase elleh toledot, meaning "these are the generations", with the first use of the phrase referring to the "generations of heaven and earth" and the remainder marking individuals. The toledot formula, occurring eleven times in the book of Genesis, serves as a heading which marks a transition to a new subject. These occurrences divide the book into the following sections:
Genesis 1:1–2:3 In the beginning (prologue)
Genesis 2:4–4:26 of Heaven and Earth (narrative)
Genesis 5:1–6:8 of Adam (genealogy)
Genesis 6:9–9:29 of Noah (Genesis flood narrative)
Genesis 10:1–11:9 of Noah's sons Shem, Ham, and Japheth (genealogy)
Genesis 11:10–26 of Shem (genealogy)
Genesis 11:27–25:11 of Terah (Abraham narrative)
Genesis 25:12–18 of Ishmael (genealogy)
Genesis 25:19–35:29 of Isaac (Jacob narrative)
Genesis 36:1–36:8 of Esau (genealogy)
Genesis 36:9–37:1 of Esau "the father of the Edomites" (genealogy)
Genesis 37:2–50:26 of Jacob (Joseph narrative)
It is not clear, however, what this meant to the original authors, and most modern commentators divide it into two parts based on the subject matter, a primeval history (chapters 1–11) and a patriarchal history (chapters 12–50). While the first is far shorter than the second, it sets out the basic themes and provides an interpretive key for understanding the entire book. The primeval history has a symmetrical structure hinging on the flood story (chapters 6–9) with the events before the flood mirrored by the events after. The ancestral history is structured around the three patriarchs Abraham, Jacob and Joseph. The stories of Isaac arguably do not make up a coherent cycle of stories and function as a bridge between the cycles of Abraham and Jacob.
Summary
Primeval history (chapters 1–11)
The Genesis creation narrative comprises two different stories; the first two chapters roughly correspond to these. In the first, Elohim, the generic Hebrew word for God, creates the heavens and the earth including humankind, in six days, and rests on the seventh. In the second, God, now referred to as "Yahweh Elohim" (rendered as "the LORD God" in English translations), creates two individuals, Adam and Eve, as the first man and woman, and places them in the Garden of Eden.
In the second chapter, God commands the man that he is free to eat from any tree, including the tree of life, except from the tree of the knowledge of good and evil. Later, in chapter 3, a serpent, portrayed as a deceptive creature or trickster, convinces Eve to eat the fruit. She then convinces Adam to eat it, whereupon God throws them out and punishes them: Adam is condemned to obtain what he needs only by sweat and work, and Eve to give birth in pain. This is interpreted by Christians as the "fall of man" into sin. Eve bears two sons, Cain and Abel. Cain works the soil, and Abel tends flocks; they both offer offerings to God one day, and God does not accept Cain's offering but does accept Abel's. This causes Cain to resent Abel, and Cain ends up murdering him. God then curses Cain. Eve bears another son, Seth, to take Abel's place in accordance with the promises given at 3:15, 20.
After many generations of Adam have passed from the lines of Cain and Seth, the world becomes corrupted by human sin and Nephilim, and God wants to wipe out humanity for their wickedness. However, Noah is righteous and blameless, so God first instructs Noah to build an ark and put examples of all the animals on it, seven pairs of every clean animal and one pair of every unclean. Then God sends a great flood to wipe out the rest of the world. When the waters recede, God promises he will never destroy the world with water again, making a rainbow as a symbol of his promise. God sees humankind cooperating to build a great tower city, the Tower of Babel, and divides humanity with many languages and sets them apart with confusion. Then, a generation line from Shem to Abram is described.
Patriarchal age (chapters 12–50)
Abram, a man descended from Noah, is instructed by God to travel from his home in Mesopotamia to the land of Canaan. There, God makes a promise to Abram, promising that his descendants shall be as numerous as the stars, but that people will suffer oppression in a foreign land for four hundred years, after which they will inherit the land "from the river of Egypt to the great river, the river Euphrates". Abram's name is changed to 'Abraham' and that of his wife Sarai to Sarah (meaning 'princess'), and God says that all males should be circumcised as a sign of his promise to Abraham. Due to her old age, Sarah tells Abraham to take her Egyptian handmaiden, Hagar, as a second wife (to bear a child). Through Hagar, Abraham fathers Ishmael.
God then plans to destroy the cities of Sodom and Gomorrah for the sins of their people. Abraham protests but fails to get God to agree not to destroy the cities, God reasoning with Abraham that not even ten righteous persons could be found there (among the righteous was Abraham's nephew Lot). Angels save Abraham's nephew Lot (who was living there at the same time) and his family, but his wife looks back on the destruction (even though God commanded them not to) and turns into a pillar of salt for going against his word. Lot's daughters, concerned that they are fugitives who will never find husbands, get Lot drunk so they can become pregnant by him, and give birth to the ancestors of the Moabites and Ammonites.
Abraham and Sarah go to the Philistine town of Gerar, pretending to be brother and sister (they are half-siblings). The King of Gerar takes Sarah for his wife, but God warns him to return her (as she is really Abraham's wife) and he obeys. God sends Sarah a son and tells her she should name him Isaac; through him will be the establishment of the covenant (promise). Sarah then drives Ishmael and his mother Hagar out into the wilderness (because Ishmael is not her real son and Hagar is a slave), but God saves them and promises to make Ishmael a great nation.
Then, God tests Abraham by demanding that he sacrifice Isaac. As Abraham is about to lay the knife upon his son, "the Angel of the Lord" restrains him, promising him again innumerable descendants. On the death of Sarah, Abraham purchases Machpelah (believed to be modern Hebron) for a family tomb and sends his servant to Mesopotamia to find among his relations a wife for Isaac; after proving herself worthy, Rebekah becomes Isaac's betrothed. Keturah, Abraham's other wife, births more children, among whose descendants are the Midianites. Abraham dies at a prosperous old age and his family lays him to rest in Hebron (Machpelah).
Isaac's wife Rebekah gives birth to the twins Esau (meaning 'velvet'), father of the Edomites, and Jacob (meaning 'supplanter' or 'follower'). Esau was a couple of seconds older as he had come out of the womb first, and was going to become the heir; however, through carelessness, he sold his birthright to Jacob for a bowl of stew. His mother, Rebekah, ensures Jacob rightly gains his father's blessing as the firstborn son and inheritor. At 77 years of age, Jacob leaves his parents and later seeks a wife and meets Rachel at a well. He goes to her father, his uncle, where he works for a total of 14 years to earn his wives, Rachel and Leah. Jacob's name is changed to Israel after his wrestle with an angel, and by his wives and their handmaidens he has twelve sons, the ancestors of the twelve tribes of the children of Israel, and a daughter, Dinah.
Shechem, son of Hamor the Hivite, rapes Dinah and asks his father to get Dinah for him as his wife, according to Chapter 34. Jacob agrees to the marriage but requires that all the males of Hamor's tribe be circumcised, including Hamor and Shechem. After this was performed and all the men were still weak, Jacob's sons Simeon and Levi murdered all the males. Jacob complained that their act would mean retribution by others, namely the Canaanites and Perizzites. Jacob and his tribe took all the Hivite women and children as well as livestock and other property for themselves.
Joseph, Jacob's favourite son of the twelve, makes his brothers jealous (especially because of special gifts Jacob gave him) and because of that jealousy they sell Joseph into slavery in Egypt. Joseph endures many trials including being innocently sentenced to jail but he stays faithful to God. After several years, he prospers there after the pharaoh of Egypt asks him to interpret a dream he had about an upcoming famine, which Joseph does through God. He is then made second in command of Egypt by the grateful pharaoh, and later on, he is reunited with his father and brothers, who fail to recognize him and plead for food as the famine had reached Canaan as well. After much manipulation to see if they still hate him, Joseph reveals himself, forgives them for their actions, and lets them and their households into Egypt, where Pharaoh assigns to them the land of Goshen. Jacob calls his sons to his bedside and reveals their future before he dies. Joseph lives to old age and tells his brothers before his death that if God leads them out of the country, then they should take his bones with them.
Themes
Promises to the ancestors
In 1978, David Clines published The Theme of the Pentateuch. Considered influential as one of the first authors to take up the question of the overarching theme of the Pentateuch, Clines' conclusion was that the overall theme is "the partial fulfilment—which implies also the partial nonfulfillment—of the promise to or blessing of the Patriarchs". (By calling the fulfilment "partial", Clines was drawing attention to the fact that at the end of Deuteronomy the people of Israel are still outside Canaan.)
The patriarchs, or ancestors, are Abraham, Isaac and Jacob, with their wives (Joseph is normally excluded). Since the name YHWH had not been revealed to them, they worshipped El in his various manifestations. (It is, however, worth noting that in the Jahwist source, the patriarchs refer to deity by the name YHWH, for example in Genesis 15.) Through the patriarchs, God announces the election of Israel, that is, he chooses Israel to be his special people and commits himself to their future. God tells the patriarchs that he will be faithful to their descendants (i.e. to Israel), and Israel is expected to have faith in God and his promise. ("Faith" in the context of Genesis and the Hebrew Bible means an agreement to the promissory relationship, not a body of a belief.)
The promise itself has three parts: offspring, blessings, and land. The fulfilment of the promise to each patriarch depends on having a male heir, and the story is constantly complicated by the fact that each prospective mother—Sarah, Rebekah and Rachel—is barren. The ancestors, however, retain their faith in God and God in each case gives a son—in Jacob's case, twelve sons, the foundation of the chosen Israelites. Each succeeding generation of the three promises attains a richer fulfilment, until through Joseph "all the world" attains salvation from famine, and by bringing the children of Israel down to Egypt he becomes the means through which the promise can be fulfilled.
God's chosen people
Scholars generally agree that the theme of divine promise unites the patriarchal cycles, but many would dispute the efficacy of trying to examine Genesis' theology by pursuing a single overarching theme, instead citing as more productive the analysis of the Abraham cycle, the Jacob cycle, and the Joseph cycle, and the Yahwist and Priestly sources. The problem lies in finding a way to unite the patriarchal theme of the divine promise to the stories of Genesis 1–11 (the primeval history) with their theme of God's forgiveness in the face of man's evil nature. One solution is to see the patriarchal stories as resulting from God's decision not to remain alienated from humankind: God creates the world and humans, humans rebel, and God "elects" (chooses) Abraham.
To this basic plot (which comes from the Yahwist), the Priestly source has added a series of covenants dividing history into stages, each with its own distinctive "sign". The first covenant is between God and all living creatures, and is marked by the sign of the rainbow; the second is with the descendants of Abraham (Ishmaelites and others as well as Israelites), and its sign is circumcision; and the last, which does not appear until the Book of Exodus, is with Israel alone, and its sign is Sabbath. A great leader mediates each covenant (Noah, Abraham, Moses), and at each stage God progressively reveals himself by his name (Elohim with Noah, El Shaddai with Abraham, Yahweh with Moses).
Deception
Throughout Genesis, various figures engage in deception or trickery to survive or prosper. Biblical scholar David M. Carr notes that such stories reflect the vulnerability felt by ancient Israelites and that "such stories can be a major way of gaining hope and resisting domination". Examples include:
To avoid being killed, a patriarch (Abraham in 12:10–20 and 20:1–18 and Isaac in 26:6–11) tells a king that his wife is only his sister and not also his wife. (Genesis 12:11-13 and Genesis 20:11-12)
In chapter 25, Jacob tricks Esau into selling his birthright for a pot of lentil stew.
In chapter 27, Isaac is tricked by Rebekah into giving Jacob the superior blessing instead of Esau.
In chapter 29, Jacob believes he is marrying Rachel but is tricked into marrying her sister.
Cultural impact
By totaling the spans of time in the genealogies of Genesis, religious authorities have calculated what they consider to be the age of the world since creation. This Anno Mundi system of counting years is the basis of the Hebrew calendar and Byzantine calendar. Counts differ somewhat, but they generally place the age of the Earth at about six thousand years.
During the Protestant Reformation, rivalry between Catholic and Protestant Christians led to a closer study of the Bible and a competition to take its words more seriously. Thus, scholars in Europe from the 16th to the 19th century treated the book of Genesis as factual. As evidence in the fields of paleontology, geology and other sciences was uncovered, scholars tried to fit these discoveries into the Genesis creation account. For example, Johann Jakob Scheuchzer in the 18th century believed that fossils were the remains of creatures killed during the flood. This literal understanding of Genesis fell out of favor with scholars during the Victorian crisis of faith as evidence mounted that the Earth was far older than six thousand years.
Judaism's weekly Torah portions
It is a custom among religious Jewish communities for a weekly Torah portion, popularly referred to as a parashah, to be read during Jewish prayer services on Saturdays, Mondays and Thursdays. The full name, Parashat HaShavua ('portion of the week'), is popularly abbreviated to parashah (also parsha), and is also known as a sidra (or sedra).
The parashah is a section of the Torah (Five Books of Moses) used in Jewish liturgy during a particular week. There are 54 weekly parshas, or parashot in Hebrew, and the full cycle is read over the course of one Jewish year.
The first 12 of the 54 come from the Book of Genesis, and they are:
Chapters 1–6 (verses 1–8) Parashat Bereshit
Chapters 6 (v. 9 ff)–11 Parashat Noach
Chapters 12–17 Parashat Lekh Lekha
Chapters 18–22 Parashat Vayera
Chapters 23–25 (v. 1–18) Parashat Chayyei Sarah
Chapters 25 (v. 19 ff)–28 (v. 1–9) Parashat Toledot
Chapters 28 (v. 10 ff)–32 (v. 1–3) Parashat Vayetzei
Chapters 32 (v. 4 ff)–36 Parashat Vayishlach
Chapters 37–40 Parashat Vayeshev
Chapters 41–44 (v. 1–17) Parashat Miketz
Chapters 44 (v. 18 ff)–47 (v. 1–27) Parashat Vayigash
Chapters 47 (v. 28 ff)–50 Parashat Vayechi
See also
Apollo 8 Genesis reading while in lunar orbit
Biblical criticism
Criticism of the Bible
Dating the Bible
Historicity of the Bible
Interpretations of Genesis
Paradise Lost
Protevangelium
Notes
References
Bibliography
Further reading
Commentaries
Fretheim, Terence E. "The Book of Genesis." In The New Interpreter's Bible. Edited by Leander E. Keck, vol. 1, pp. 319–674. Nashville: Abingdon Press, 1994. .
Hirsch, Samson Raphael. The Pentateuch: Genesis. Translated by Isaac Levy. Judaica Press, 2nd edition 1999. . Originally published as Der Pentateuch uebersetzt und erklaert Frankfurt, 1867–1878.
Kass, Leon R. The Beginning of Wisdom: Reading Genesis. New York: Free Press, 2003. .
Plaut, Gunther. The Torah: A Modern Commentary (1981),
Sarna, Nahum M. The JPS Torah Commentary: Genesis: The Traditional Hebrew Text with the New JPS Translation. Philadelphia: Jewish Publication Society, 1989. .
Speiser, E.A. Genesis: Introduction, Translation, and Notes. New York: Anchor Bible, 1964. .
General
External links
Various versions
Genesis
Genesis
Mythology books
1
Creation myths | Book of Genesis | [
"Astronomy"
] | 5,663 | [
"Cosmogony",
"Creation myths"
] |
12,695 | https://en.wikipedia.org/wiki/Group%20representation | In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations of a vector space to itself (i.e. vector space automorphisms); in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication.
In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules.
Representations of groups allow many group-theoretic problems to be reduced to problems in linear algebra. In physics, they describe how the symmetry group of a physical system affects the solutions of equations describing that system.
The term representation of a group is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a linear representation. Some people use realization for the general notion and reserve the term representation for the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations.
Branches of group representation theory
The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are:
Finite groups — Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to crystallography and to geometry. If the field of scalars of the vector space has characteristic p, and if p divides the order of the group, then this is called modular representation theory; this special case has very different properties. See Representation theory of finite groups.
Compact groups or locally compact groups — Many of the results of finite group representation theory are proved by averaging over the group. These proofs can be carried over to infinite groups by replacement of the average with an integral, provided that an acceptable notion of integral can be defined. This can be done for locally compact groups, using the Haar measure. The resulting theory is a central part of harmonic analysis. The Pontryagin duality describes the theory for commutative groups, as a generalised Fourier transform. See also: Peter–Weyl theorem.
Lie groups — Many important Lie groups are compact, so the results of compact representation theory apply to them. Other techniques specific to Lie groups are used as well. Most of the groups important in physics and chemistry are Lie groups, and their representation theory is crucial to the application of group theory in those fields. See Representations of Lie groups and Representations of Lie algebras.
Linear algebraic groups (or more generally affine group schemes) — These are the analogues of Lie groups, but over more general fields than just R or C. Although linear algebraic groups have a classification that is very similar to that of Lie groups, and give rise to the same families of Lie algebras, their representations are rather different (and much less well understood). The analytic techniques used for studying Lie groups must be replaced by techniques from algebraic geometry, where the relatively weak Zariski topology causes many technical complications.
Non-compact topological groups — The class of non-compact groups is too broad to construct any general representation theory, but specific special cases have been studied, sometimes using ad hoc techniques. The semisimple Lie groups have a deep theory, building on the compact case. The complementary solvable Lie groups cannot be classified in the same way. The general theory for Lie groups deals with semidirect products of the two types, by means of general results called Mackey theory, which is a generalization of Wigner's classification methods.
Representation theory also depends heavily on the type of vector space on which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is a Hilbert space, Banach space, etc.).
One must also consider the type of field over which the vector space is defined. The most important case is the field of complex numbers. The other important cases are the field of real numbers, finite fields, and fields of p-adic numbers. In general, algebraically closed fields are easier to handle than non-algebraically closed ones. The characteristic of the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing the order of the group.
Definitions
A representation of a group G on a vector space V over a field K is a group homomorphism from G to GL(V), the general linear group on V. That is, a representation is a map
ρ : G → GL(V)
such that
ρ(g1g2) = ρ(g1)ρ(g2), for all g1, g2 in G.
Here V is called the representation space and the dimension of V is called the dimension or degree of the representation. It is common practice to refer to V itself as the representation when the homomorphism is clear from the context.
In the case where V is of finite dimension n it is common to choose a basis for V and identify GL(V) with GL(n, K), the group of n × n invertible matrices on the field K.
If G is a topological group and V is a topological vector space, a continuous representation of G on V is a representation ρ such that the application Φ : G × V → V defined by Φ(g, v) = ρ(g)(v) is continuous.
The kernel of a representation ρ of a group G is defined as the normal subgroup of G whose image under ρ is the identity transformation:
Ker ρ = {g ∈ G : ρ(g) = id}.
A faithful representation is one in which the homomorphism is injective; in other words, one whose kernel is the trivial subgroup {e} consisting only of the group's identity element.
Given two K vector spaces V and W, two representations ρ : G → GL(V) and π : G → GL(W) are said to be equivalent or isomorphic if there exists a vector space isomorphism α : V → W so that for all g in G,
α ∘ ρ(g) ∘ α⁻¹ = π(g).
Examples
Consider the complex number u = e^(2πi/3), which has the property u^3 = 1. The set C3 = {1, u, u^2} forms a cyclic group under multiplication. This group has a representation ρ on ℂ^2 given by:
This representation is faithful because ρ is a one-to-one map.
Another representation for C3 on ℂ^2, isomorphic to the previous one, is σ given by:
The group C3 may also be faithfully represented on ℝ^2 by τ given by:
where
A possible representation on ℂ^3 is given by the set of cyclic permutation matrices v:
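The explicit matrices for these examples are not shown above, so the sketch below gives one concrete set of choices consistent with the descriptions: a diagonal (reducible) representation of C3 on ℂ^2, a real rotation representation τ, and the cyclic permutation matrices v. The particular matrices here are illustrative assumptions, not necessarily those of the original examples.

```python
import numpy as np

u = np.exp(2j * np.pi / 3)          # u**3 == 1, a generator of C3

# A diagonal (hence reducible) representation of C3 on C^2: rho(u^k) = diag(1, u^k).
rho = {k: np.diag([1, u**k]) for k in range(3)}

# A real 2x2 representation: tau(u^k) is rotation by 120*k degrees.
def tau(k):
    a = 2 * np.pi * k / 3
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# Cyclic permutation matrices on C^3: v(u^k) cyclically shifts the basis vectors k places.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
v = {k: np.linalg.matrix_power(P, k) for k in range(3)}

# Homomorphism check: each map respects the group law u^j * u^k = u^((j+k) mod 3).
for j in range(3):
    for k in range(3):
        assert np.allclose(rho[j] @ rho[k], rho[(j + k) % 3])
        assert np.allclose(tau(j) @ tau(k), tau((j + k) % 3))
        assert np.allclose(v[j] @ v[k], v[(j + k) % 3])
print("all three maps are group homomorphisms from C3")
```

Note that the diagonal representation decomposes into the invariant lines span{(1,0)} and span{(0,1)}, matching the discussion of reducibility below.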
Another example:
Let V be the space of homogeneous degree-3 polynomials over the complex numbers in the variables x1, x2, x3.
Then the symmetric group S3 acts on V by permutation of the three variables.
For instance, the permutation that exchanges x1 and x2 sends the monomial x1^3 to x2^3.
Reducibility
A subspace W of V that is invariant under the group action is called a subrepresentation. If V has exactly two subrepresentations, namely the zero-dimensional subspace and V itself, then the representation is said to be irreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to be reducible. The representation of dimension zero is considered to be neither reducible nor irreducible, just as the number 1 is considered to be neither composite nor prime.
Under the assumption that the characteristic of the field K does not divide the size of the group, representations of finite groups can be decomposed into a direct sum of irreducible subrepresentations (see Maschke's theorem). This holds in particular for any representation of a finite group over the complex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group.
In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible.
Generalizations
Set-theoretical representations
A set-theoretic representation (also known as a group action or permutation representation) of a group G on a set X is given by a function ρ : G → X^X, the set of functions from X to X, such that for all g1, g2 in G and all x in X:
ρ(e)[x] = x and ρ(g1g2)[x] = ρ(g1)[ρ(g2)[x]],
where e is the identity element of G. This condition and the axioms for a group imply that ρ(g) is a bijection (or permutation) for all g in G. Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group SX of X.
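A minimal sketch of this definition, using an assumed toy example (the cyclic group of order 3, written additively, acting on a three-element set by rotation), checks the two conditions directly:

```python
# A permutation representation of C3 = Z/3Z acting on the set X = {0, 1, 2}
# by rotation: rho(g) maps x to (x + g) mod 3.
G = range(3)
X = range(3)

def rho(g):
    return lambda x: (x + g) % 3

# Condition 1: the identity element acts as the identity map on X.
assert all(rho(0)(x) == x for x in X)

# Condition 2: rho(g1 g2)[x] == rho(g1)[rho(g2)[x]] (the group operation here is addition mod 3).
assert all(rho((g1 + g2) % 3)(x) == rho(g1)(rho(g2)(x))
           for g1 in G for g2 in G for x in X)

print("rho is a permutation representation of C3 on a 3-element set")
```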
For more information on this topic see the article on group action.
Representations in other categories
Every group G can be viewed as a category with a single object; morphisms in this category are just the elements of G. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor selects an object X in C and a group homomorphism from G to Aut(X), the automorphism group of X.
In the case where C is VectK, the category of vector spaces over a field K, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of G in the category of sets.
When C is Ab, the category of abelian groups, the objects obtained are called G-modules.
For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from G to the homeomorphism group of a topological space X.
Two types of representations closely related to linear representations are:
projective representations: in the category of projective spaces. These can be described as "linear representations up to scalar transformations".
affine representations: in the category of affine spaces. For example, the Euclidean group acts affinely upon Euclidean space.
See also
Irreducible representations
Character table
Character theory
Molecular symmetry
List of harmonic analysis topics
List of representation theory topics
Representation theory of finite groups
Semisimple representation
Notes
References
Introduction to representation theory with emphasis on Lie groups.
Yurii I. Lyubich. Introduction to the Theory of Banach Representations of Groups. Translated from the 1985 Russian-language edition (Kharkov, Ukraine). Birkhäuser Verlag. 1988.
Group theory
Representation theory | Group representation | [
"Mathematics"
] | 2,110 | [
"Group theory",
"Representation theory",
"Fields of abstract algebra"
] |
12,696 | https://en.wikipedia.org/wiki/GRE%20Physics%20Test | The GRE physics test is an examination administered by the Educational Testing Service (ETS). The test attempts to determine the extent of the examinees' understanding of fundamental principles of physics and their ability to apply them to problem solving. Many graduate schools require applicants to take the exam and base admission decisions in part on the results.
The scope of the test is largely that of the first three years of a standard United States undergraduate physics curriculum, since many students who plan to continue to graduate school apply during the first half of the fourth year. It consists of 70 five-option multiple-choice questions drawn from that curriculum.
The International System of Units (SI Units) is used in the test. A table of information representing various physical constants and conversion factors is presented in the test book.
Major content topics
1. Classical mechanics (20%)
kinematics
Newton's laws
work and energy
oscillatory motion
rotational motion about a fixed axis
dynamics of systems of particles
central forces and celestial mechanics
three-dimensional particle dynamics
Lagrangian and Hamiltonian formalism
non-inertial reference frames
elementary topics in fluid dynamics
2. Electromagnetism (18%)
electrostatics
currents and DC circuits
magnetic fields in free space
Lorentz force
induction
Maxwell's equations and their applications
electromagnetic waves
AC circuits
magnetic and electric fields in matter
3. Optics and wave phenomena (8%)
wave properties
superposition
interference
diffraction
geometrical optics
polarization
Doppler effect
4. Thermodynamics and statistical mechanics (10%)
laws of thermodynamics
thermodynamic processes
equations of state
ideal gases
kinetic theory
ensembles
statistical concepts and calculation of thermodynamic quantities
thermal expansion and heat transfer
5. Quantum mechanics (13%)
fundamental concepts
solutions of the Schrödinger equation
square wells
harmonic oscillators
hydrogenic atoms
spin
angular momentum
wave function symmetry
elementary perturbation theory
6. Atomic physics (10%)
properties of electrons
Bohr model
energy quantization
atomic structure
atomic spectra
selection rules
black-body radiation
x-rays
atoms in electric and magnetic fields
7. Special relativity (6%)
introductory concepts
time dilation
length contraction
simultaneity
energy and momentum
four-vectors and Lorentz transformation
velocity addition
8. Laboratory methods (6%)
data and error analysis
electronics
instrumentation
radiation detection
counting statistics
interaction of charged particles with matter
lasers and optical interferometers
dimensional analysis
fundamental applications of probability and statistics
9. Specialized topics (9%)
Nuclear and particle physics
nuclear properties
radioactive decay
fission and fusion
reactions
fundamental properties of elementary particles
Condensed matter
crystal structure
x-ray diffraction
thermal properties
electron theory of metals
semiconductors
superconductors
miscellaneous
astrophysics
mathematical methods
single and multivariate calculus
coordinate systems (rectangular, cylindrical, spherical)
vector algebra and vector differential operators
Fourier series
partial differential equations
boundary value problems
matrices and determinants
functions of complex variables
computer applications
See also
Graduate Record Examination
GRE Biochemistry Test
GRE Biology Test
GRE Chemistry Test
GRE Literature in English Test
GRE Mathematics Test
GRE Psychology Test
Graduate Management Admission Test (GMAT)
Graduate Aptitude Test in Engineering (GATE)
References
External links
Official Description of the GRE Physics Test
Detailed Solutions to ETS released tests - The Missing Solutions Manual, free online, and User Comments and discussions on individual problems
More solutions to the released tests - Includes solutions to the recently released 2008 exam
GRE Prep Course at Ohio State University - Preparation course, with links to all 4 publicly released Physics GRE tests, as well as links to other Physics GRE resources
GR0877 Solutions - Solutions to 2008 exam
Physics GRE Review at Troy University
GRE standardized tests
Physics education
Standardized tests | GRE Physics Test | [
"Physics"
] | 753 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
12,713 | https://en.wikipedia.org/wiki/Giant%20panda | The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China. It is characterised by its white coat with black patches around the eyes, ears, legs and shoulders. Its body is rotund; adult individuals weigh and are typically long. It is sexually dimorphic, with males being typically 10 to 20% larger than females. A thumb is visible on its forepaw, which helps in holding bamboo in place for feeding. It has large molar teeth and expanded temporal fossa to meet its dietary requirements. It can digest starch and is mostly herbivorous with a diet consisting almost entirely of bamboo and bamboo shoots.
The giant panda lives exclusively in six montane regions in a few Chinese provinces at elevations of up to . It is solitary and gathers only in mating seasons. It relies on olfactory communication, using scent marks as chemical cues placed on landmarks such as rocks or trees. Females rear cubs for an average of 18 to 24 months. The oldest known giant panda was 38 years old.
As a result of farming, deforestation and infrastructural development, the giant panda has been driven out of the lowland areas where it once lived. The wild population has increased again to 1,864 individuals as of March 2015. Since 2016, it has been listed as Vulnerable on the IUCN Red List. In July 2021, Chinese authorities also classified the giant panda as vulnerable. It is a conservation-reliant species. By 2007, the captive population comprised 239 giant pandas in China and another 27 outside the country. It has often served as China's national symbol, has appeared on Chinese Gold Panda coins since 1982, and was one of the five Fuwa mascots of the 2008 Summer Olympics held in Beijing.
Etymology
The word panda was borrowed into English from French, but no conclusive explanation of the origin of the French word panda has been found. The closest candidate is the Nepali word ponya, possibly referring to the adapted wrist bone of the red panda, which is native to Nepal. In many older sources, the name "panda" or "common panda" refers to the red panda (Ailurus fulgens), which was described some 40 years earlier and over that period was the only animal known as a panda. The binomial name Ailuropoda melanoleuca means black and white (melanoleuca) cat-foot (ailuropoda).
Since the earliest collection of Chinese writings, the Chinese language has given the bear many different names, including mò (, ancient Chinese name for giant panda), huāxióng (; "spotted bear") and zhúxióng (; "bamboo bear"). The most popular names in China today are dàxióngmāo (; ), or simply xióngmāo (; ). As with the word panda in English, xióngmāo () was originally used to describe just the red panda, but dàxióngmāo () and xiǎoxióngmāo (; ) were coined to differentiate between the species.
In Taiwan, another popular name for panda is the inverted dàmāoxióng (; ), though many encyclopedias and dictionaries in Taiwan still use the "bear cat" form as the correct name. Some linguists argue that in this construction "bear", rather than "cat", is the base noun, making the name more grammatically and logically correct, which has led to its popular use despite official writings. This name did not gain its popularity until 1988, when a private zoo in Tainan painted a sun bear black and white and created the Tainan fake panda incident.
Taxonomy
For many decades, the precise taxonomic classification of the giant panda was under debate because it shares characteristics with both bears and raccoons. In 1985, molecular studies indicated that the giant panda is a true bear, part of the family Ursidae. These studies show it diverged about from the common ancestor of the Ursidae; it is the most basal member of this family and equidistant from all other extant bear species.
Subspecies
Two subspecies of giant panda have been recognized on the basis of distinct cranial measurements, colour patterns, and population genetics.
The nominate subspecies, A. m. melanoleuca, consists of most extant populations of the giant panda. These animals are principally found in Sichuan and display the typical stark black and white contrasting colours.
The Qinling panda, A. m. qinlingensis, is restricted to the Qinling Mountains in Shaanxi at elevations of . The typical black and white pattern of Sichuan giant pandas is replaced with a light brown and white pattern. The skull of A. m. qinlingensis is smaller than its relatives, and it has larger molars.
A detailed study of the giant panda's genetic history from 2012 confirms that the separation of the Qinling population occurred about 300,000 years ago, and reveals that the non-Qinling population further diverged into two groups, named the Minshan and the Qionglai-Daxiangling-Xiaoxiangling-Liangshan group respectively, about 2,800 years ago.
Phylogeny
Of the eight extant species in the bear family Ursidae, the giant panda's lineage branched off the earliest.
Distribution and habitat
The giant panda is endemic to China. It is found in small, fragmented populations in six mountainous regions in the country, mainly in Sichuan, and also in neighbouring Shaanxi and Gansu. Successful habitat preservation has seen a rise in panda numbers, though loss of habitat due to human activities remains its biggest threat. In areas with a high concentration of medium-to-large-sized mammals – such as domestic cattle, a species known to degrade the landscape – the giant panda population is generally low. This is mainly attributed to the panda's avoidance of interspecific competition.
The species has been located at elevations of above sea level. They frequent habitats with a healthy concentration of bamboos, typically old-growth forests, but may also venture into secondary forest habitats. The Daxiangling Mountain population inhabits both coniferous and broadleaf forests. Additionally, the Qinling population often selects evergreen broadleaf and conifer forests, while pandas in the Qionglai mountainous region exclusively select upland conifer forests. The remaining two populations, namely those occurring in the Liangshan and Xiaoxiangling mountains, predominantly occur in broadleaf evergreen and conifer forests.
Giant pandas once roamed across Southeast Asia from Myanmar to northern Vietnam. Their range in China spanned much of the southeast region. By the Pleistocene, climate change affected panda populations, and the subsequent domination of modern humans led to large-scale habitat loss. In 2001, it was estimated that the giant panda's range had declined by about 99% compared with its extent in earlier millennia.
Description
The giant panda has a body shape typical of bears. It has black fur on its ears, limbs, shoulders and around the eyes. The rest of the animal's coat is white. The bear's distinctive coloration appears to serve as camouflage in both winter and summer environments as they do not hibernate. The white areas serve as camouflage in snow, while the black shoulders and legs conceal them in shade. Studies in the wild have found that when viewed from a distance, the panda displays disruptive coloration, while up close, they rely more on blending in. The black ears may be used to display aggression, while the eye patches might facilitate them identifying one another. The giant panda's thick, woolly coat keeps it warm in the cool forests of its habitat.
The panda's skull shape is typical of durophagous carnivorans. It has evolved from previous ancestors to exhibit larger molars with increased complexity and expanded temporal fossa. A study revealed that a giant panda had a bite force of 1298.9 Newton (BFQ 151.4) at canine teeth and 1815.9 Newton (BFQ 141.8) at carnassial teeth.
Adults measure around long, including a tail of about , and tall at the shoulder. Males can weigh up to . Females are generally 10–20% smaller than males. They weigh between and . The average weight for adults is .
The giant panda's paw has a digit similar to a thumb and five fingers; the thumb-like digit – actually a modified sesamoid bone – helps it to hold bamboo while eating. The giant panda's tail, measuring , is the second-longest in the bear family, behind the sloth bear.
Ecology
Diet
Despite its taxonomic classification as a carnivoran, the giant panda's diet is primarily herbivorous, with approximately 99% of its diet consisting of bamboo. However, the giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes, and thus derives little energy and little protein from the consumption of bamboo. The ability to break down cellulose and lignin is very weak, and their main source of nutrients comes from starch and hemicelluloses. The most important part of their bamboo diet is the shoots, that are rich in starch and have up to 32% protein content. Accordingly, pandas have evolved a higher capability to digest starches than strict carnivores. Raw bamboo is toxic, containing cyanide compounds. Pandas' body tissues are less able than herbivores to detoxify cyanide, but their gut microbiomes are significantly enriched in putative genes coding for enzymes related to cyanide degradation, suggesting that they have cyanide-digesting gut microbes. It has been estimated that an adult panda absorbs of cyanide a day through its diet. To prevent poisoning, they have evolved anti-toxic mechanisms to protect themselves. About 80% of the cyanide is metabolized to less toxic thiocyanate and discharged in urine, while the remaining 20% is detoxified by other minor pathways.
During the shoot season (AprilAugust), pandas store a large amount of food in preparation for the months succeeding this seasonal period, in which pandas live off a diet of bamboo leaves. The giant panda is a highly specialised animal with unique adaptations, and has lived in bamboo forests for millions of years.
The average giant panda eats as much as of bamboo shoots a day to compensate for the limited energy content of its diet. Ingestion of such a large quantity of material is possible and necessary because of the rapid passage of large amounts of indigestible plant material through the short, straight digestive tract. It is also noted, however, that such rapid passage of digesta limits the potential of microbial digestion in the gastrointestinal tract, limiting alternative forms of digestion. Given this voluminous diet, the giant panda defecates up to 40 times a day. The limited energy input imposed on it by its diet has affected the panda's behavior. The giant panda tends to limit its social interactions and avoids steeply sloping terrain to limit its energy expenditures.
Two of the panda's most distinctive features, its large size and round face, are adaptations to its bamboo diet. Anthropologist Russell Ciochon observed: "[much] like the vegetarian gorilla, the low body surface area to body volume [of the giant panda] is indicative of a lower metabolic rate. This lower metabolic rate and a more sedentary lifestyle allows the giant panda to subsist on nutrient poor resources such as bamboo." The giant panda's round face is the result of powerful jaw muscles, which attach from the top of the head to the jaw. Large molars crush and grind fibrous plant material.
The morphological characteristics of extinct relatives of the giant panda suggest that while the ancient giant panda was omnivorous 7 million years ago (mya), it only became herbivorous some 2–2.4 mya with the emergence of A. microta. Genome sequencing of the giant panda suggests that the dietary switch could have initiated from the loss of the sole umami taste receptor, encoded by the genes TAS1R1 and TAS1R3 (also known as T1R1 and T1R3), resulting from two frameshift mutations within the T1R1 exons. Umami taste corresponds to high levels of glutamate as found in meat and may have thus altered the food choice of the giant panda. Although the pseudogenisation (conversion into a pseudogene) of the umami taste receptor in Ailuropoda coincides with the dietary switch to herbivory, it is likely a result of, and not the reason for, the dietary change. The mutation time for the T1R1 gene in the giant panda is estimated to 4.2 mya while fossil evidence indicates bamboo consumption in the giant panda species at least 7 mya, signifying that although complete herbivory occurred around 2 mya, the dietary switch was initiated prior to T1R1 loss-of-function.
Pandas eat any of 25 bamboo species in the wild, with the most common including Fargesia dracocephala and Fargesia rufa. Only a few bamboo species are widespread at the high altitudes pandas now inhabit. Bamboo leaves contain the highest protein levels; stems have less. Because of the synchronous flowering, death, and regeneration of all bamboo within a species, the giant panda must have at least two different species available in its range to avoid starvation. While primarily herbivorous, the giant panda still retains decidedly ursine teeth and will eat meat, fish, and eggs when available. In captivity, zoos typically maintain the giant panda's bamboo diet, though some will provide specially formulated biscuits or other dietary supplements.
Pandas will travel between different habitats if they need to, so they can get the nutrients that they need and to balance their diet for reproduction.
Interspecific interactions
Although adult giant pandas have few natural predators other than humans, young cubs are vulnerable to attacks by snow leopards, yellow-throated martens, eagles, feral dogs, and the Asian black bear. Sub-adults weighing up to may be vulnerable to predation by leopards.
Giant pandas are sympatric with other large mammals and bamboo feeders, such as the takin (Budorcas taxicolor). The takin and giant panda share a similar ecological niche, and they consume the same resources. When competition for food is fierce, pandas disperse to the outskirts of takin distribution. Other possible competitors include, but are not limited to, the Eurasian wild pig (Sus scrofa), Chinese goral (Naemorhedus griseus) and the Asian black bear (Ursus thibetanus). Giant pandas avoid areas with a mid-to-high density of livestock, as livestock depress the vegetation. The Tibetan Plateau is the only known area where both giant and red pandas can be found. Although the two species share near-identical ecological niches, competition between them has rarely been observed. Nearly 50% of their respective distributions overlap, and successful coexistence is achieved through distinct habitat selection.
Pathogens and parasites
A captive female died from toxoplasmosis, a disease caused by an obligate intracellular parasitic protozoan known as Toxoplasma gondii that infects most warm-blooded animals, including humans. They are likely susceptible to diseases from Baylisascaris schroederi, a parasitic nematode known to infect giant panda intestines. This nematode species is known to give pandas baylisascariasis, a deadly disease that kills more wild pandas than any other cause. Additionally, the population is threatened by canine distemper virus (CDV), canine parvovirus, rotavirus, canine adenovirus, and canine coronavirus. Bacteria, such as Clostridium welchii, Proteus mirabilis, Klebsiella pneumoniae, and Escherichia coli, may also be lethal.
Behavior
The giant panda is a terrestrial animal and primarily spends its life roaming and feeding in the bamboo forests of the Qinling Mountains and in the hilly province of Sichuan. Giant pandas are generally solitary. Each adult has a defined territory and a female is not tolerant of other females in her range. Social encounters occur primarily during the brief breeding season, in which pandas in proximity to one another will gather. After mating, the male leaves the female alone to raise the cub. Pandas were thought to fall into the crepuscular category, animals that are active twice a day, at dawn and dusk; however, pandas may belong to a category all of their own, with activity peaks in the morning, afternoon and midnight. The low nutritional quality of bamboo means pandas need to eat more frequently, and due to their lack of major predators they can be active at any time of the day. Activity is highest in June and decreases in late summer to autumn, with an increase from November through the following March. Activity is also directly related to the amount of sunlight: on colder days, solar radiation has a stronger positive effect on the pandas' activity levels.
Pandas communicate through vocalisation and scent marking such as clawing trees or spraying urine. They are able to climb and take shelter in hollow trees or rock crevices, but do not establish permanent dens. For this reason, pandas do not hibernate, which is similar to other subtropical mammals, and will instead move to elevations with warmer temperatures. Pandas rely primarily on spatial memory rather than visual memory. Though the panda is often assumed to be docile, it has been known to attack humans on rare occasions. Pandas have been known to cover themselves in horse manure to protect themselves against cold temperatures.
The species communicates foremost through a blatting sound; they achieve peaceful interactions through the emission of this sound. When in oestrus, a female emits a chirp. In hostile confrontations or during fights, the giant panda emits vocalizations such as a roar or growl. On the other hand, squeals typically indicate inferiority and submission in a dispute. Other vocalizations include honks and moans.
Olfactory communication
Giant pandas heavily rely on olfactory communication to communicate with one another. Scent marks are used to spread these chemical cues and are placed on landmarks like rocks or trees. Chemical communication in giant pandas plays many roles in their social situations. Scent marks and odors are used to spread information about sexual status, whether a female is in estrus or not, age, gender, individuality, dominance over territory, and choice of settlement. Giant pandas communicate by excreting volatile compounds, or scent marks, through the anogenital gland. Giant pandas have unique positions in which they will scent mark. Males deposit scent marks or urine by lifting their hind leg, rubbing their backside, or standing in order to rub the anogenital gland onto a landmark. Females, however, exercise squatting or simply rubbing their genitals onto a landmark.
The season plays a major role in mediating chemical communication. Depending on the season, mainly whether it is breeding season or not, may influence which odors are prioritized. Chemical signals can have different functions in different seasons. During the non-breeding season, females prefer the odors of other females because reproduction is not their primary motivation. However, during breeding season, odors from the opposite sex will be more attractive. Because they are solitary mammals and their breeding season is so brief, female pandas secrete chemical cues in order to let males know their sexual status. The chemical cues female pandas secrete can be considered to be pheromones for sexual reproduction. Females deposit scent marks through their urine which induces an increase in androgen levels in males. Androgen is a sex hormone found in both males and females; testosterone is the major androgen produced by males. Civetone and decanoic acid are chemicals found in female urine which promote behavioral responses in males; both chemicals are considered giant panda pheromones. Male pandas also secrete chemical signals that include information about their sexual reproductivity and age, which is beneficial for a female when choosing a mate. For example, age can be useful for a female to determine sexual maturity and sperm quality. Pandas are also able to determine when the signal was placed, further aiding in the quest to find a potential mate. However, chemical cues are not just used for communication between males and females, pandas can determine individuality from chemical signals. This allows them to be able to differentiate between a potential partner or someone of the same sex, which could be a potential competitor.
Chemical cues, or odors, play an important role in how a panda chooses their habitat. Pandas look for odors that tell them not only the identity of another panda, but if they should avoid them or not. Pandas tend to avoid their species for most of the year, breeding season being the brief time of major interaction. Chemical signaling allows for avoidance and competition. Pandas whose habitats are in similar locations will collectively leave scent marks in a unique location which is termed "scent stations". When pandas come across these scent stations, they are able to identify a specific panda and the scope of their habitat. This allows pandas to be able to pursue a potential mate or avoid a potential competitor.
Pandas can assess an individual's dominance status, including their age and size, via odor cues and may choose to avoid a scent mark if the signaler's competitive ability outweighs their own. A panda's size can be conveyed through the height of the scent mark. Since larger animals can place higher scent marks, an elevated scent mark advertises a higher competitive ability. Age must also be taken into consideration when assessing a competitor's fighting ability. For example, a mature panda will be larger than a younger, immature panda and possess an advantage during a fight.
Reproduction
Giant pandas reach sexual maturity between the ages of four and eight, and may be reproductive until age 20. The mating season is between March and May, when a female goes into estrus, which lasts for two or three days and only occurs once a year. When mating, the female is in a crouching, head-down position as the male mounts her from behind. Copulation time ranges from 30 seconds to five minutes, but the male may mount her repeatedly to ensure successful fertilisation. The gestation period is somewhere between 95 and 160 days - the variability is due to the fact that the fertilized egg may linger in the reproductive system for a while before implanting on the uterine wall. Giant pandas give birth to twins in about half of pregnancies. If twins are born, usually only one survives in the wild. The mother will select the stronger of the cubs, and the weaker cub will die due to starvation. The mother is thought to be unable to produce enough milk for two cubs since she does not store fat. The father has no part in helping raise the cub.
When the cub is first born, it is pink, blind, and toothless, weighing only , or about of the mother's weight, proportionally the smallest baby of any placental mammal. It nurses from its mother's breast six to 14 times a day for up to 30 minutes at a time. For three to four hours, the mother may leave the den to feed, which leaves the cub defenseless. One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. Slight pink colour may appear on the cub's fur, as a result of a chemical reaction between the fur and its mother's saliva. A month after birth, the colour pattern of the cub's fur is fully developed. Its fur is very soft and coarsens with age. The cub begins to crawl at 75 to 80 days; mothers play with their cubs by rolling and wrestling with them. The cubs can eat small quantities of bamboo after six months, though mother's milk remains the primary food source for most of the first year. Giant panda cubs weigh at one year and live with their mothers until they are 18 months to two years old. The interval between births in the wild is generally two years.
Initially, the primary method of breeding giant pandas in captivity was by artificial insemination, as they seemed to lose their interest in mating once they were captured. This led some scientists to try methods such as showing them videos of giant pandas mating and giving the males sildenafil (commonly known as Viagra). In the 2000s, researchers started having success with captive breeding programs, and they have now determined that giant pandas breed at rates comparable to those of some populations of the American black bear, a thriving bear species.
In July 2009, Chinese scientists confirmed the birth of the first cub to be successfully conceived through artificial insemination using frozen sperm. The technique for freezing the sperm in liquid nitrogen was first developed in 1980 and the first birth was hailed as a solution to the dwindling availability of giant panda semen, which had led to inbreeding. Panda semen, which can be frozen for decades, could be shared between different zoos to save the species. As of 2009, it is expected that zoos in destinations such as San Diego in the United States and Mexico City will be able to provide their own semen to inseminate more giant pandas.
Attempts have also been made to reproduce giant pandas by interspecific pregnancy where cloned panda embryos were implanted into the uterus of an animal of another species. This has resulted in panda fetuses, but no live births.
Human interaction
Early references
In Ancient China, people thought pandas to be rare and noble creatures – the Empress Dowager Bo was buried with a panda skull in her vault. The grandson of Emperor Taizong of Tang is said to have given Japan two pandas and a sheet of panda skin as a sign of goodwill. Unlike many other animals in Ancient China, pandas were rarely thought to have medical uses. The few known uses include the Sichuan tribal peoples' use of panda urine to melt accidentally swallowed needles, and the use of panda pelts to control menstruation as described in the Qin dynasty encyclopedia Erya.
The creature named mo (貘) mentioned in some ancient books has been interpreted as giant panda. The dictionary Shuowen Jiezi (Eastern Han Dynasty) says that the mo, from Shu (Sichuan), is bear-like, but yellow-and-black, although the older Erya describes mo simply as a "white leopard". The interpretation of the legendary fierce creature pixiu (貔貅) as referring to the giant panda is also common.
During the reign of the Yongle Emperor (early 15th century), his relative from Kaifeng sent him a captured zouyu (騶虞), and another zouyu was sighted in Shandong. Zouyu is a legendary "righteous" animal, which, similarly to a qilin, only appears during the rule of a benevolent and sincere monarch.
In captivity
Pandas have been kept in zoos as early as the Western Han Dynasty in China, where the writer Sima Xiangru noted that the panda was the most treasured animal in the emperor's garden of exotic animals in the capital Chang'an (present Xi'an). Not until the 1950s were pandas again recorded to have been exhibited in China's zoos. Chi Chi at the London Zoo became very popular. This influenced the World Wildlife Fund to use a panda as its symbol. A 2006 New York Times article outlined the economics of keeping pandas, which costs five times more than keeping the next most expensive animal, an elephant. American zoos generally pay the Chinese government $1 million a year in fees, as part of a typical ten-year contract. San Diego's contract with China was to expire in 2008, but got a five-year extension at about half of the previous yearly cost. The last contract, with the Memphis Zoo in Memphis, Tennessee, ended in 2013.
In the 1970s, gifts of giant pandas to American and Japanese zoos formed an important part of the diplomacy of the People's Republic of China (PRC), as it marked some of the first cultural exchanges between China and the West. This practice has been termed "panda diplomacy". By 1984, however, pandas were no longer given as gifts. Instead, China began to offer pandas to other nations only on 10-year loans for a fee of up to US$1,000,000 per year and with the provision that any cubs born during the loan are the property of China. As a result of this change in policy, nearly all the pandas in the world are owned by China, and pandas leased to foreign zoos and all cubs are eventually returned to China. As of 2022, Xin Xin at the Chapultepec Zoo in Mexico City, was the last living descendant of the gifted pandas.
Since 1998, because of a WWF lawsuit, the United States Fish and Wildlife Service only allows US zoos to import a panda if the zoo can ensure China channels more than half of its loan fee into conservation efforts for giant pandas and their habitat. In May 2005, China offered a breeding pair to Taiwan. The issue became embroiled in cross-Strait relations – due to both the underlying symbolism and technical issues such as whether the transfer would be considered "domestic" or "international" or whether any true conservation purpose would be served by the exchange. A contest in 2006 to name the pandas was held in the mainland, resulting in the politically charged names Tuan Tuan and Yuan Yuan (from , implying reunification). China's offer was initially rejected by Chen Shui-bian, then President of Taiwan. However, when Ma Ying-jeou assumed the presidency in 2008, the offer was accepted and the pandas arrived in December of that year.
In the 2020s, certain "celebrity pandas" have gained a cult following amongst internet users, with dedicated fan accounts existing to keep tabs on the animals. Known as "giant panda fever" or "panda-monium", individual pandas are known to get billions of views and engagements on social media, as well as product lines specifically emulating them. At Chengdu Research Base of Giant Panda Breeding, certain of these "celebrity pandas" are known to garner hours-long lines specifically to see them.
Conservation
The giant panda is a vulnerable species, threatened by continued habitat loss and fragmentation, and by a very low birthrate, both in the wild and in captivity. Its range is confined to a small portion on the western edge of its historical range, which stretched through southern and eastern China, northern Myanmar, and northern Vietnam. The species is scattered into more than 30 subpopulations of relatively few animals. The building of roads and human settlements near panda habitat results in population declines. Diseases from domesticated pets and livestock are another threat. By 2100, it is estimated that the distribution of giant pandas will shrink by up to 100%, mainly due to the effects of climate change. The giant panda is listed on CITES Appendix I, meaning trade of their parts is prohibited and that they require this protection to avoid extinction. They have been protected and placed in category 1 by the 1988 Wildlife Protection Act.
The giant panda has been a target of poaching by locals since ancient times and by foreigners since it was introduced to the West. Starting in the 1930s, foreigners were unable to poach giant pandas in China because of the Second Sino-Japanese War and the Chinese Civil War, but pandas remained a source of soft furs for the locals. The population boom in China after 1949 created stress on the pandas' habitat and the subsequent famines led to the increased hunting of wildlife, including pandas. After the Chinese economic reform, demand for panda skins from Hong Kong and Japan led to illegal poaching for the black market, acts generally ignored by the local officials at the time. In 1963, the PRC government set up Wolong National Nature Reserve to save the declining panda population.
The giant panda is among the world's most adored and protected rare animals, and is one of the few in the world whose natural inhabitant status was able to gain a UNESCO World Heritage Site designation. The Sichuan Giant Panda Sanctuaries, located in the southwest province of Sichuan and covering seven natural reserves, were inscribed onto the World Heritage List in 2006. A 2015 paper found that the giant panda can serve as an umbrella species as the preservation of their habitat also helps other endemic species in China, including 70% of the country's forest birds, 70% of mammals and 31% of amphibians.
In 2012, Earthwatch Institute, a global nonprofit that teams volunteers with scientists to conduct important environmental research, launched a program called "On the Trail of Giant Panda". This program, based in the Wolong National Nature Reserve, allows volunteers to work up close with pandas cared for in captivity, and help them adapt to life in the wild, so that they may breed, and live longer and healthier lives. Efforts to preserve the panda bear populations in China have come at the expense of other animals in the region, including snow leopards, wolves, and dholes. In order to improve living and mating conditions for the fragmented populations of pandas, nearly 70 natural reserves were combined in 2020 to form the Giant Panda National Park. With a size of 10,500 square miles, the park is roughly three times as large as Yellowstone National Park and incorporates the Wolong National Nature Reserve. Small, isolated populations run the risk of inbreeding, and smaller genetic variety makes the individuals more vulnerable to various defects and genetic mutations.
Population
In 2006, scientists reported that the number of pandas living in the wild may have been underestimated at about 1,000. Previous population surveys had used conventional methods to estimate the size of the wild panda population, but using a new method that analyzes DNA from panda droppings, scientists believed the wild population was as large as 3,000. In 2006, there were 40 panda reserves in China, compared to just 13 reserves in 1998. As the species has been reclassified from "endangered" to "vulnerable" since 2016, the conservation efforts are thought to be working. Furthermore, in response to this reclassification, the State Forestry Administration of China announced that it would not accordingly lower the conservation level for the panda, and would instead reinforce the conservation efforts.
In 2020, the panda population of the new national park was already above 1,800 individuals, which is roughly 80 percent of the entire panda population in China. Establishing the new protected area in the Sichuan Province also gives various other endangered or threatened species, like the Siberian tiger, the possibility to improve their living conditions by offering them a habitat. Other species who benefit from the protection of their habitat include the snow leopard, the golden snub-nosed monkey, the red panda and the complex-toothed flying squirrel.
In July 2021, Chinese conservation authorities announced that giant pandas are no longer endangered in the wild following years of conservation efforts, with a population in the wild exceeding 1,800. China has received international praise for its conservation of the species, which has also helped the country establish itself as a leader in endangered species conservation.
See also
Giant pandas around the world
List of giant pandas
Panda tea
Pygmy giant panda
Wildlife of China
List of endangered and protected species of China
References
Notes
Bibliography
AFP (via Discovery Channel) (20 June 2006). Panda Numbers Exceed Expectations.
Associated Press (via CNN) (2006). Article link.
Catton, Chris (1990). Pandas. Christopher Helm.
Friends of the National Zoo (2006). Panda Cam: A Nation Watches Tai Shan the Panda Cub Grow. New York: Fireside Books.
Goodman, Brenda (12 February 2006). Pandas Eat Up Much of Zoos' Budgets. The New York Times.
(An earlier edition is available as The Smithsonian Book of Giant Pandas, Smithsonian Institution Press, 2002.)
Panda Facts At a Glance (N.d.). www.wwfchina.org. WWF China.
Ryder, Joanne (2001). Little panda: The World Welcomes Hua Mei at the San Diego Zoo. New York: Simon & Schuster.
(There are also several later reprints)
Warren, Lynne (July 2006). "Panda, Inc." National Geographic. (About Mei Xiang, Tai Shan and the Wolong Panda Research Facility in Chengdu China).
Journal of Mammalogy, Volume 96, Issue 6, 24 November 2015, Pages 1116–1127, https://doi.org/10.1093/jmammal/gyv118
External links
BBC Nature: Giant panda news, and video clips from BBC programmes past and present.
Panda Pioneer: the release of the first captive-bred panda 'Xiang Xiang' in 2006
WWF – environmental conservation organization
Pandas International – panda conservation group
National Zoo Live Panda Cams – Baby Panda Tai Shan and mother Mei Xiang
Information from Animal Diversity
NPR News 2007/08/20 – Panda Romance Stems From Bamboo
View the panda genome on Ensembl.
Texts and pictures of the Panda exhibition at the Museum für Naturkunde Berlin
iPanda-50: annotated image dataset for fine-grained panda identification on Github
Mammals of China
Endemic fauna of China
Clawed herbivores
Herbivorous mammals
EDGE species
Vulnerable animals
Vulnerable fauna of Asia
Articles containing video clips
Species that are or were threatened by agricultural development
Species that are or were threatened by logging
Mammals described in 1869
Taxa named by Armand David
Ailuropodinae
National symbols of China | Giant panda | [
"Biology"
] | 7,779 | [
"EDGE species",
"Biodiversity"
] |
12,718 | https://en.wikipedia.org/wiki/Griffith%27s%20experiment | Griffith's experiment, performed by Frederick Griffith and reported in 1928, was the first experiment suggesting that bacteria are capable of transferring genetic information through a process known as transformation. Griffith's findings were followed by research in the late 1930s and early 40s that isolated DNA as the material that communicated this genetic information.
Pneumonia was a serious cause of death in the wake of the post-WWI Spanish influenza pandemic, and Griffith was studying the possibility of creating a vaccine. Griffith used two strains of pneumococcus (Diplococcus pneumoniae) bacteria which infect mice – a type III-S (smooth) which was virulent, and a type II-R (rough) strain which was nonvirulent. The III-S strain synthesized a polysaccharide capsule that protected itself from the host's immune system, resulting in the death of the host, while the II-R strain did not have that protective capsule and was defeated by the host's immune system. A German bacteriologist, Fred Neufeld, had discovered the three pneumococcal types (Types I, II, and III) and discovered the quellung reaction to identify them in vitro. Until Griffith's experiment, bacteriologists believed that the types were fixed and unchangeable, from one generation to another.
In this experiment, bacteria from the III-S strain were killed by heat, and their remains were added to II-R strain bacteria. While neither alone harmed the mice, the combination was able to kill its host. Griffith was also able to isolate both live II-R and live III-S strains of pneumococcus from the blood of these dead mice. Griffith concluded that the type II-R had been "transformed" into the lethal III-S strain by a "transforming principle" that was somehow part of the dead III-S strain bacteria.
Scientific advances since then have revealed that the "transforming principle" Griffith observed was the DNA of the III-S strain bacteria. While the bacteria had been killed, the DNA had survived the heating process and was taken up by the II-R strain bacteria. The III-S strain DNA contains the genes that form the smooth protective polysaccharide capsule. Equipped with this gene, the former II-R strain bacteria were now protected from the host's immune system and could kill the host. The exact nature of the transforming principle (DNA) was verified in the experiments done by Avery, MacLeod and McCarty and by Hershey and Chase.
Notes
References
(References the original experiment by Griffith. Original article and 35th anniversary reprint available.)
Further reading
Genetics experiments
Genetics in the United Kingdom
History of genetics
Microbiology
1928 in biology | Griffith's experiment | [
"Chemistry",
"Biology"
] | 566 | [
"Microbiology",
"Microscopy"
] |
12,727 | https://en.wikipedia.org/wiki/Original%20proof%20of%20G%C3%B6del%27s%20completeness%20theorem | The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 (and a shorter version of the proof, published as an article in 1930, titled "The completeness of the axioms of the functional calculus of logic" (in German)) is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem.
Assumptions
We work with first-order predicate calculus. Our languages allow constant, function and relation symbols. Structures consist of (non-empty) domains and interpretations of the relevant symbols as constant members, functions or relations over that domain.
We assume classical logic (as opposed to intuitionistic logic for example).
We fix some axiomatization (i.e. a syntax-based, machine-manageable proof system) of the predicate calculus: logical axioms and rules of inference. Any of the several well-known equivalent axiomatizations will do. Gödel's original proof assumed the Hilbert-Ackermann proof system.
We assume without proof all the basic well-known results about our formalism that we need, such as the normal form theorem or the soundness theorem.
We axiomatize predicate calculus without equality (sometimes confusingly called without identity), i.e. there are no special axioms expressing the properties of (object) equality as a special relation symbol. After the basic form of the theorem has been proved, it will be easy to extend it to the case of predicate calculus with equality.
Statement of the theorem and its proof
In the following, we state two equivalent forms of the theorem, and show their equivalence.
Later, we prove the theorem. This is done in the following steps:
Reducing the theorem to sentences (formulas with no free variables) in prenex form, i.e. with all quantifiers (∀ and ∃) at the beginning. Furthermore, we reduce it to formulas whose first quantifier is ∀. This is possible because for every sentence, there is an equivalent one in prenex form whose first quantifier is ∀ (a small prenex example is sketched after this outline).
Reducing the theorem to sentences of the form (∀x1)…(∀xk)(∃y1)…(∃ym) Φ(x1, …, xk, y1, …, ym), with Φ quantifier-free. While we cannot do this by simply rearranging the quantifiers, we show that it is still enough to prove the theorem for sentences of that form.
Finally we prove the theorem for sentences of that form.
This is done by first noting that a sentence of the form (∃z1)…(∃zr) B(z1, …, zr), with B quantifier-free, is either refutable (its negation is always true) or satisfiable, i.e. there is some model in which it holds (it might even be always true, i.e. a tautology); such a model is obtained simply by assigning truth values to the subpropositions from which B is built. The reason for that is the completeness of propositional logic, with the existential quantifiers playing no role.
We extend this result to more and more complex and lengthy sentences, Dn (n = 1,2...), built out from B, so that either any of them is refutable and therefore so is φ, or all of them are not refutable and therefore each holds in some model.
We finally use the models in which the Dn hold (in case all are not refutable) in order to build a model in which φ holds.
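As a small illustration of the prenex step (a standard textbook-style example, not one taken from Gödel's paper), a sentence can first be brought to prenex form and then given a leading universal quantifier over a fresh variable w:

```latex
(\exists x)\,P(x) \;\vee\; (\forall x)\,Q(x)
\;\equiv\; (\exists x)(\forall y)\,\bigl(P(x) \vee Q(y)\bigr)
\;\equiv\; (\forall w)(\exists x)(\forall y)\,\bigl(P(x) \vee Q(y)\bigr).
```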
Theorem 1. Every valid formula (true in all structures) is provable.
This is the most basic form of the completeness theorem. We immediately restate it in a form more convenient for our purposes:
When we say "all structures", it is important to specify that the structures involved are classical (Tarskian) interpretations I, where I = <U,F> (U is a non-empty (possibly infinite) set of objects, whereas F is a set of functions from expressions of the interpreted symbolism into U). [By contrast, so-called "free logics" allow possibly empty sets for U. For more regarding free logics, see the work of Karel Lambert.]
Theorem 2. Every formula φ is either refutable or satisfiable in some structure.
"φ is refutable" means by definition "¬φ is provable".
Equivalence of both theorems
If Theorem 1 holds, and φ is not satisfiable in any structure, then ¬φ is valid in all structures and therefore provable, thus φ is refutable and Theorem 2 holds. If on the other hand Theorem 2 holds and φ is valid in all structures, then ¬φ is not satisfiable in any structure and therefore refutable; then ¬¬φ is provable and then so is φ, thus Theorem 1 holds.
Proof of theorem 2: first step
We approach the proof of Theorem 2 by successively restricting the class of all formulas φ for which we need to prove "φ is either refutable or satisfiable". At the beginning we need to prove this for all possible formulas φ in our language. However, suppose that for every formula φ there is some formula ψ taken from a more restricted class of formulas C, such that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable". Then, once this claim (expressed in the previous sentence) is proved, it will suffice to prove "φ is either refutable or satisfiable" only for φ's belonging to the class C. If φ is provably equivalent to ψ (i.e., (φ ≡ ψ) is provable), then it is indeed the case that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable" (the soundness theorem is needed to show this).
There are standard techniques for rewriting an arbitrary formula into one that does not use function or constant symbols, at the cost of introducing additional quantifiers; we will therefore assume that all formulas are free of such symbols. Gödel's paper uses a version of first-order predicate calculus that has no function or constant symbols to begin with.
Next we consider a generic formula φ (which no longer uses function or constant symbols) and apply the prenex form theorem to find a formula ψ in normal form such that φ ≡ ψ (ψ being in normal form means that all the quantifiers in ψ, if there are any, are found at the very beginning of ψ). It follows now that we need only prove Theorem 2 for formulas φ in normal form.
Next, we eliminate all free variables from φ by quantifying them existentially: if, say, x1...xn are free in φ, we form ψ = (∃x1)…(∃xn)φ. If ψ is satisfiable in a structure M, then certainly so is φ, and if ψ is refutable, then ¬ψ, which is equivalent to (∀x1)…(∀xn)¬φ, is provable, and then so is ¬φ; thus φ is refutable. We see that we can restrict φ to be a sentence, that is, a formula with no free variables.
Finally, we would like, for reasons of technical convenience, that the prefix of φ (that is, the string of quantifiers at the beginning of φ, which is in normal form) begin with a universal quantifier and end with an existential quantifier. To achieve this for a generic φ (subject to the restrictions we have already proved), we take some one-place relation symbol F unused in φ, and two new variables y and z. If φ = (P)Φ, where (P) stands for the prefix of φ and Φ for the matrix (the remaining, quantifier-free part of φ), we form ψ = (∀y)(P)(∃z)[Φ ∧ (F(y) ∨ ¬F(z))]. Since (∀y)(∃z)(F(y) ∨ ¬F(z)) is clearly provable (take z to be y), it is easy to see that φ ≡ ψ is provable.
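For instance, following the construction just sketched with a hypothetical sentence φ = (∃x)P(x), whose prefix does not begin with a universal quantifier, one obtains

```latex
\psi \;=\; (\forall y)(\exists x)(\exists z)\,
  \bigl[\, P(x) \,\wedge\, \bigl(F(y) \vee \neg F(z)\bigr) \,\bigr],
```

whose prefix begins with a universal and ends with an existential quantifier; the equivalence φ ≡ ψ is provable because (∀y)(∃z)(F(y) ∨ ¬F(z)) is provable (take z to be y).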
Reducing the theorem to formulas of degree 1
Our generic formula φ now is a sentence, in normal form, and its prefix starts with a universal quantifier and ends with an existential quantifier. Let us call the class of all such formulas R. We are faced with proving that every formula in R is either refutable or satisfiable. Given our formula φ, we group strings of quantifiers of one kind together in blocks:
φ = (∀x1)(∃y1)(∀x2)(∃y2)⋯(∀xr)(∃yr) Φ
where each xi and yi stands for a tuple of variables, each (∀xi) and (∃yi) is a block of quantifiers of a single kind, and Φ is quantifier-free.
We define the degree of φ to be the number of universal quantifier blocks, separated by existential quantifier blocks as shown above, in the prefix of φ. The following lemma, which Gödel adapted from Skolem's proof of the Löwenheim–Skolem theorem, lets us sharply reduce the complexity of the generic formula we need to prove the theorem for:
Lemma. Let k ≥ 1. If every formula in R of degree k is either refutable or satisfiable, then so is every formula in R of degree k + 1.
Comment: Take a formula φ of degree k + 1, written (as in the proof below) in the form φ = (∀x)(∃y)(∀u)(∃v)(P)ψ(x, y, u, v), where (P) is the remainder of the prefix of φ (it is thus of degree k − 1) and ψ is the quantifier-free matrix. φ states that for every x there is a y such that... (something). It would have been nice to have a predicate Q' so that for every x, Q′(x,y) would be true if and only if y is the required one to make (something) true. Then we could have written a formula of degree k, which is equivalent to φ, namely (∀x)(∀y)[Q'(x, y) → (∀u)(∃v)(P)ψ(x, y, u, v)] (which in prenex form has degree k). This formula is indeed equivalent to φ because it states that for every x, if there is a y that satisfies Q'(x,y), then (something) holds, and furthermore, we know that there is such a y, because for every x', there is a y' that satisfies Q'(x',y'). Therefore φ follows from this formula. It is also easy to show that if the formula is false, then so is φ. Unfortunately, in general there is no such predicate Q'. However, this idea can be understood as a basis for the following proof of the Lemma.
Proof. Let φ be a formula of degree k + 1; then we can write it as
φ = (∀x)(∃y)(∀u)(∃v)(P) ψ(x, y, u, v)
where (P) is the remainder of the prefix of φ (it is thus of degree k − 1) and ψ is the quantifier-free matrix of φ. x, y, u and v denote here tuples of variables rather than single variables; e.g. (∀x) really stands for ∀x1∀x2…∀xn where x1, …, xn are some distinct variables.
Let now x' and y' be tuples of previously unused variables of the same length as x and y respectively, and let Q be a previously unused relation symbol that takes as many arguments as the sum of lengths of x and y; we consider the formula
Φ = (∀x')(∃y') Q(x', y') ∧ (∀x)(∀y)[ Q(x, y) → (∀u)(∃v)(P) ψ(x, y, u, v) ]
Clearly, Φ → φ is provable.
Now since the string of quantifiers (∀u)(∃v)(P) does not contain variables from x or y, the following equivalence is easily provable with the help of whatever formalism we're using:
[ Q(x, y) → (∀u)(∃v)(P) ψ(x, y, u, v) ] ≡ (∀u)(∃v)(P)[ Q(x, y) → ψ(x, y, u, v) ]
And since these two formulas are equivalent, if we replace the first with the second inside Φ, we obtain the formula Φ' such that Φ ≡ Φ':
Φ' = (∀x')(∃y') Q(x', y') ∧ (∀x)(∀y)(∀u)(∃v)(P)[ Q(x, y) → ψ(x, y, u, v) ]
Now Φ' has the form (S)ρ ∧ (S')ρ', where (S) and (S') are some quantifier strings, ρ and ρ' are quantifier-free, and, furthermore, no variable of (S) occurs in ρ' and no variable of (S') occurs in ρ. Under such conditions every formula of the form (T)(ρ ∧ ρ'), where (T) is a string of quantifiers containing all quantifiers in (S) and (S') interleaved among themselves in any fashion, but maintaining the relative order inside (S) and (S'), will be equivalent to the original formula Φ' (this is yet another basic result in first-order predicate calculus that we rely on). To wit, we form Ψ as follows:
Ψ = (∀x')(∀x)(∀y)(∀u)(∃y')(∃v)(P)[ Q(x', y') ∧ ( Q(x, y) → ψ(x, y, u, v) ) ]
and we have Ψ ≡ Φ'.
Now Ψ is a formula of degree k and therefore by assumption either refutable or satisfiable.
If Ψ is satisfiable in a structure M, then, considering Ψ ≡ Φ' ≡ Φ and the fact that Φ → φ is provable, we see that φ is satisfiable as well.
If Ψ is refutable, then so is Φ, which is equivalent to it; thus ¬Φ is provable.
Now we can replace all occurrences of Q inside the provable formula by some other formula dependent on the same variables, and we will still get a provable formula.
(This is yet another basic result of first-order predicate calculus. Depending on the particular formalism adopted for the calculus, it may be seen as a simple application of a "functional substitution" rule of inference, as in Gödel's paper, or it may be proved by considering the formal proof of ¬Φ, replacing in it all occurrences of Q by some other formula with the same free variables, and noting that all logical axioms in the formal proof remain logical axioms after the substitution, and all rules of inference still apply in the same way.)
In this particular case, we replace Q(x',y') in ¬Φ with the formula [(∀u)(∃v)(P) ψ(x, y, u, v)](x,y | x',y'). Here (x,y | x',y') means that instead of ψ we are writing a different formula, in which x and y are replaced with x' and y'. Q(x,y) is simply replaced by (∀u)(∃v)(P) ψ(x, y, u, v).
¬Φ then becomes
¬[ (∀x')(∃y')(∀u)(∃v)(P) ψ(x', y', u, v) ∧ (∀x)(∀y)( (∀u)(∃v)(P) ψ(x, y, u, v) → (∀u)(∃v)(P) ψ(x, y, u, v) ) ]
and this formula is provable; since the part under negation and after the ∧ sign is obviously provable, and the part under negation and before the ∧ sign is obviously φ, just with x and y replaced by x' and y', we see that ¬φ is provable, and φ is refutable. We have proved that φ is either satisfiable or refutable, and this concludes the proof of the Lemma.
Notice that we could not have used (∀u)(∃v)(P) ψ(x', y', u, v) instead of Q(x',y') from the beginning, because Ψ would not have been a well-formed formula in that case. This is why we cannot naively use the argument appearing in the comment that precedes the proof.
Proving the theorem for formulas of degree 1
As shown by the Lemma above, we only need to prove our theorem for formulas φ in R of degree 1. φ cannot be of degree 0, since formulas in R have no free variables and don't use constant symbols. So the formula φ has the general form:
φ = (∀x1)…(∀xk)(∃y1)…(∃ym) Φ(x1, …, xk, y1, …, ym)
with Φ quantifier-free.
Now we define an ordering of the k-tuples of natural numbers as follows: (x₁, ..., xₖ) < (y₁, ..., yₖ) should hold if either Σ(x₁, ..., xₖ) < Σ(y₁, ..., yₖ), or Σ(x₁, ..., xₖ) = Σ(y₁, ..., yₖ) and (x₁, ..., xₖ) precedes (y₁, ..., yₖ) in lexicographic order. [Here Σ denotes the sum of the terms of the tuple.] Denote the nth tuple in this order by (a₁ⁿ, ..., aₖⁿ).
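To make the ordering concrete, here is a small Python sketch (not part of the original proof) that enumerates the k-tuples by sum and then lexicographically. Starting from 1-based positive integers, and the function name itself, are illustrative assumptions.

```python
from itertools import count, islice, product

def tuples_in_order(k):
    """Yield k-tuples of positive integers ordered by sum, then lexicographically."""
    for total in count(k):  # the smallest possible sum of k positive integers is k
        # all k-tuples with the given sum, visited in lexicographic order
        for t in product(range(1, total - k + 2), repeat=k):
            if sum(t) == total:
                yield t

# Example: the first six 2-tuples in this order
# (1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)
print(list(islice(tuples_in_order(2), 6)))
```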
Set the formula Bₙ as ψ(z_{a₁ⁿ}, ..., z_{aₖⁿ}, z_{(n−1)m+2}, ..., z_{nm+1}), where (a₁ⁿ, ..., aₖⁿ) is the nth tuple in the order just defined; thus the "first arguments" of Bₙ are among z₁, ..., z_{(n−1)m+1}, while its "last arguments" are m fresh variables. Then put Dₙ as
(∃z₁)...(∃z_{nm+1})(B₁ ∧ B₂ ∧ ... ∧ Bₙ)
Lemma: For every n, φ ⊢ Dₙ.
Proof: By induction on n; we have φ ⊢ Dₙ₋₁ by the induction hypothesis, and Dₙ₋₁ ∧ (∀u₁)...(∀uₖ)(∃v₁)...(∃vₘ)ψ(u₁, ..., uₖ, v₁, ..., vₘ) → Dₙ, where the latter implication holds by variable substitution, since the ordering of the tuples is such that aᵢⁿ ≤ (n − 1)m + 1 for every i (so all the first arguments of Bₙ are already bound in Dₙ₋₁). But the last formula is equivalent to φ.
For the base case, D₁ is obviously a corollary of φ as well. So the Lemma is proven.
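To see what the Lemma is doing, the following Python sketch (an illustration under the indexing reconstructed above, not part of the original text) checks it in the simplest case k = m = 1, where Dₙ is (∃z₁ ... zₙ₊₁)(ψ(z₁,z₂) ∧ ... ∧ ψ(zₙ,zₙ₊₁)): in any model of (∀x)(∃y)ψ(x,y) the required witnesses can be built by chaining the existential quantifier. The function and model names are hypothetical.

```python
def witnesses_for_D_n(domain, psi, n):
    """Given a model of (∀x)(∃y)ψ(x, y) over `domain`, return elements
    z_1, ..., z_{n+1} witnessing D_n = (∃z_1...z_{n+1}) ψ(z_1,z_2) ∧ ... ∧ ψ(z_n,z_{n+1})."""
    z = [next(iter(domain))]          # z_1: any element of the (non-empty) domain
    for _ in range(n):
        x = z[-1]
        # (∀x)(∃y)ψ(x, y) guarantees some witness y exists for this x
        y = next(b for b in domain if psi(x, b))
        z.append(y)
    return z

# Toy model: domain {0, 1, 2} with ψ(x, y) meaning "y = x + 1 (mod 3)"
domain = {0, 1, 2}
psi = lambda x, y: y == (x + 1) % 3
print(witnesses_for_D_n(domain, psi, 5))   # e.g. [0, 1, 2, 0, 1, 2]
```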
Now if Dₙ is refutable for some n, it follows that φ is refutable. On the other hand, suppose that Dₙ is not refutable for any n. Then for each n there is some way of assigning truth values to the distinct subpropositions Eₕ (ordered by their first appearance in Dₙ; "distinct" here means either distinct predicates, or distinct bound variables) in Dₙ, such that Dₙ will be true when each proposition is evaluated in this fashion. This follows from the completeness of the underlying propositional logic.
We will now show that there is such an assignment of truth values to the Eₕ, so that all Dₙ will be true: The Eₕ appear in the same order in every Dₙ; we will inductively define a general assignment to them by a sort of "majority vote": Since there are infinitely many assignments (one for each Dₙ) affecting E₁, either infinitely many make E₁ true, or infinitely many make it false and only finitely many make it true. In the former case, we choose E₁ to be true in general; in the latter we take it to be false in general. Then from the infinitely many n for which E₁ through Eₕ are assigned the same truth value as in the general assignment, we pick a general assignment to Eₕ₊₁ in the same fashion.
This general assignment must lead to every one of the Bₖ and Dₖ being true, since if one of the Bₖ were false under the general assignment, Dₙ would also be false for every n > k. But this contradicts the fact that for the finite collection of general Eₕ assignments appearing in Dₖ, there are infinitely many n where the assignment making Dₙ true matches the general assignment.
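The "majority vote" step is essentially an infinite pigeonhole argument. The Python sketch below is a finite simulation of it (illustrative only, with hypothetical names): fix each proposition to the value taken by "infinitely many" of the per-Dₙ assignments, discard the assignments that disagree, and repeat; here "infinitely many" is approximated by the more frequent value among the surviving assignments.

```python
def general_assignment(assignments, num_props):
    """assignments: list of dicts mapping proposition index -> bool, one per D_n.
    Returns a single 'general' assignment chosen proposition by proposition,
    keeping at each step only the assignments that agree with the choices so far."""
    surviving = list(assignments)
    general = {}
    for h in range(num_props):
        votes = [a[h] for a in surviving if h in a]
        if not votes:            # E_h never appears in any surviving D_n
            continue
        choice = votes.count(True) >= votes.count(False)  # stand-in for "infinitely many"
        general[h] = choice
        surviving = [a for a in surviving if a.get(h, choice) == choice]
    return general

# Tiny illustration with three propositions and four per-D_n assignments
per_Dn = [{0: True}, {0: True, 1: False}, {0: True, 1: False, 2: True},
          {0: False, 1: True, 2: True}]
print(general_assignment(per_Dn, 3))   # {0: True, 1: False, 2: True}
```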
From this general assignment, which makes all of the Dₙ true, we construct an interpretation of the language's predicates that makes φ true. The universe of the model will be the natural numbers. Each i-ary predicate should be true of the naturals (u₁, ..., uᵢ) precisely when the proposition obtained by applying it to the variables z_{u₁}, ..., z_{uᵢ} is either true in the general assignment, or not assigned by it (because it never appears in any of the Dₙ).
In this model, each of the formulas Dₙ is true by construction. But this implies that φ itself is true in the model, since the tuples (a₁ⁿ, ..., aₖⁿ) range over all possible k-tuples of natural numbers. So φ is satisfiable, and we are done.
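Once the general assignment is fixed, the interpretation of each predicate over the natural numbers can be read off directly. The short sketch below uses hypothetical names and an ad hoc encoding of atomic propositions; it is an illustration of the rule just stated, not the article's own notation.

```python
def predicate_from_assignment(general, name):
    """general maps atomic propositions, encoded as (predicate name, argument tuple),
    to truth values taken from the general assignment. An atom that never appears
    in any D_n is made true, following the rule stated in the text."""
    def holds(*args):
        return general.get((name, args), True)
    return holds

# Example: a binary predicate R over the natural numbers
general = {("R", (1, 2)): True, ("R", (2, 1)): False}
R = predicate_from_assignment(general, "R")
print(R(1, 2), R(2, 1), R(5, 7))   # True False True
```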
Intuitive explanation
We may write each Bi as Φ(x1...xk,y1...ym) for some xs, which we may call "first arguments" and ys that we may call "last arguments".
Take B1 for example. Its "last arguments" are z2, z3, ..., zm+1, and for every possible combination of k of these variables there is some j so that they appear as "first arguments" in Bj. Thus for large enough n1, Dn1 has the property that the "last arguments" of B1 appear, in every possible combination of k of them, as "first arguments" in other Bjs within Dn1. For every Bi there is a Dni with the corresponding property.
Therefore, in a model that satisfies all the Dns, there are objects corresponding to z1, z2... and each combination of k of these appear as "first arguments" in some Bj, meaning that for every k of these objects zp1...zpk there are zq1...zqm, which makes Φ(zp1...zpk,zq1...zqm) satisfied. By taking a submodel with only these z1, z2... objects, we have a model satisfying φ.
Extensions
Extension to first-order predicate calculus with equality
Gödel reduced a formula containing instances of the equality predicate to ones without it in an extended language. His method involves replacing a formula φ containing some instances of equality with the formula
(∀x)Eq(x,x) ∧ (∀x)(∀y)(∀z)[Eq(x,y) ∧ Eq(x,z) → Eq(y,z)] ∧ (∀u₁)...(∀uₕ)(∀v₁)...(∀vₕ)[Eq(u₁,v₁) ∧ ... ∧ Eq(uₕ,vₕ) → (A(u₁, ..., uₕ) ≡ A(v₁, ..., vₕ))] ∧ ... ∧ φ'
Here A ranges over the predicates appearing in φ (with their respective arities h, one such congruence conjunct per predicate), and φ' is the formula φ with all occurrences of equality replaced with the new predicate Eq. If this new formula is refutable, the original φ was as well; the same is true of satisfiability, since we may take a quotient of the satisfying model of the new formula by the equivalence relation representing Eq. This quotient is well-defined with respect to the other predicates, and therefore will satisfy the original formula φ.
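As a concrete illustration of the quotient step (a sketch with hypothetical names, not Gödel's notation), the code below partitions a finite domain into Eq-classes and reinterprets a predicate on class representatives; the congruence property supplied by the added axioms is what makes the reinterpretation independent of which representatives are chosen.

```python
def quotient(domain, eq, pred):
    """domain: finite iterable; eq: interpretation of Eq (an equivalence relation
    by the added axioms); pred: an n-ary predicate on domain.
    Returns the equivalence classes (as frozensets) and the induced predicate
    on classes, evaluated on representatives."""
    classes = []
    for a in domain:
        for c in classes:
            if eq(a, next(iter(c))):
                c.add(a)
                break
        else:
            classes.append({a})
    classes = [frozenset(c) for c in classes]

    def induced_pred(*cls):
        reps = [next(iter(c)) for c in cls]   # well-defined because Eq is a congruence
        return pred(*reps)

    return classes, induced_pred

# Toy example: Eq identifies numbers with the same parity; P(x, y) = "x + y is even"
domain = [0, 1, 2, 3]
eq = lambda a, b: a % 2 == b % 2
P = lambda x, y: (x + y) % 2 == 0
classes, P_bar = quotient(domain, eq, P)
print(classes)                         # [frozenset({0, 2}), frozenset({1, 3})]
print(P_bar(classes[0], classes[0]))   # True
print(P_bar(classes[0], classes[1]))   # False
```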
Extension to countable sets of formulas
Gödel also considered the case where there are a countably infinite collection of formulas. Using the same reductions as above, he was able to consider only those cases where each formula is of degree 1 and contains no uses of equality. For a countable collection of formulas of degree 1, we may define as above; then define to be the closure of . The remainder of the proof then went through as before.
Extension to arbitrary sets of formulas
When there is an uncountably infinite collection of formulas, the Axiom of Choice (or at least some weak form of it) is needed. Using the full AC, one can well-order the formulas, and prove the uncountable case with the same argument as the countable one, except with transfinite induction. Other approaches can be used to prove that the completeness theorem in this case is equivalent to the Boolean prime ideal theorem, a weak form of AC.
References
The first proof of the completeness theorem.
The same material as the dissertation, except with briefer proofs, more succinct explanations, and omitting the lengthy introduction.
External links
Stanford Encyclopedia of Philosophy: "Kurt Gödel"—by Juliette Kennedy.
MacTutor biography: Kurt Gödel.
Logic
Gödel's completeness theorem
Mathematical proofs
Works by Kurt Gödel
Article proofs | Original proof of Gödel's completeness theorem | [
"Mathematics"
] | 4,290 | [
"Proof theory",
"Mathematical logic",
"Article proofs",
"Model theory",
"nan"
] |
12,730 | https://en.wikipedia.org/wiki/General%20Electric | General Electric Company (GE) was an American multinational conglomerate founded in 1892, incorporated in the state of New York and headquartered in Boston. The company had several divisions, including aerospace, energy, healthcare, and finance. In 2020, GE ranked among the Fortune 500 as the 33rd largest firm in the United States by gross revenue. In 2023, the company was ranked 64th in the Forbes Global 2000. In 2011, GE ranked among the Fortune 20 as the 14th most profitable company, but later very severely underperformed the market (by about 75%) as its profitability collapsed. Two employees of GE—Irving Langmuir (1932) and Ivar Giaever (1973)—have been awarded the Nobel Prize.
Following the Great Recession of the late 2000s decade, General Electric began selling off various divisions and assets, including its appliances and financial capital divisions, under Jeff Immelt's leadership as CEO. John Flannery, Immelt's replacement in 2017, further divested General Electric's assets in locomotives and lighting in order to focus the company more on aviation. Restrictions on air travel during the COVID-19 pandemic caused General Electric's revenue to fall significantly in 2020. Ultimately, GE's final CEO Larry Culp announced in November 2021 that General Electric was to be broken up into three separate, public companies by 2024. GE Aerospace, the aerospace company, is GE's legal successor. GE HealthCare, the healthcare company, was spun off from GE in 2023. GE Vernova, the energy company, was founded when GE finalized the split. NBCUniversal, the entertainment company, was sold to Comcast in 2011, but was not one of the companies spun off from GE in 2024. Following these transactions, GE Aerospace took the General Electric name and ticker symbols, while the old General Electric ceased to exist as a conglomerate.
History
Formation
During 1889, Thomas Edison (1847–1931) had business interests in many electricity-related companies, including Edison Lamp Company, a lamp manufacturer in East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady, New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting devices; and Edison Electric Light Company, the patent-holding company and financial arm for Edison's lighting experiments, backed by J. P. Morgan (1837–1913) and the Vanderbilt family.
Henry Villard, a long-time Edison supporter and investor, proposed to consolidate all of these business interests. The proposal was supported by Samuel Insull - who served as his secretary and, later, financier - as well as other investors. In 1889, Drexel, Morgan & Co.—a company founded by J.P. Morgan and Anthony J. Drexel—financed Edison's research and helped merge several of Edison's separate companies under one corporation, forming Edison General Electric Company, which was incorporated in New York on April 24, 1889. The new company acquired Sprague Electric Railway & Motor Company in the same year. The consolidation did not involve all of the companies established by Edison; notably, the Edison Illuminating Company, which would later become Consolidated Edison, was not part of the merger.
In 1880, Gerald Waldo Hart formed the American Electric Company of New Britain, Connecticut, which merged a few years later with Thomson-Houston Electric Company, led by Charles Coffin. In 1887, Hart left to become superintendent of the Edison Electric Company. General Electric was formed through the 1892 merger of Edison General Electric Company and Thomson-Houston Electric Company with the support of Drexel, Morgan & Co. The original plants of both companies continue to operate under the GE banner to this day.
The General Electric business was incorporated in New York, with the Schenectady plant used as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed.
In 1893, General Electric bought the business of Rudolf Eickemeyer in Yonkers, New York, along with all of its patents and designs. Eickemeyer's firm had developed transformers for use in the transmission of electrical power.
Public company
In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average, where it remained a part of the index for 122 years, though not continuously.
In 1911, General Electric absorbed the National Electric Lamp Association (NELA) into its lighting business. GE established its lighting division headquarters at Nela Park in East Cleveland, Ohio. The lighting division has since remained in the same location.
RCA and NBC
Owen D. Young, who was then GE's general counsel and vice president, through GE, founded the Radio Corporation of America (RCA) in 1919. This came after Young, while working with senior naval officers, purchased the Marconi Wireless Telegraph Company of America, which was a subsidiary of the British company Marconi Wireless and Signal Company. He aimed to expand international radio communications. GE used RCA as its retail arm for radio sales. In 1926, RCA co-founded the National Broadcasting Company (NBC), which built two radio broadcasting networks. In 1930, General Electric was charged with antitrust violations and was ordered to divest itself of RCA.
Television
In 1927, Ernst Alexanderson of GE made the first demonstration of television broadcast reception at his General Electric Realty Plot home at 1132 Adams Road in Schenectady, New York. On January 13, 1928, he made what was said to be the first broadcast to the public in the United States on GE's W2XAD: the pictures were picked up on 1.5 square inches (9.7 square centimeters) screens in the homes of four GE executives. The sound was broadcast on GE's WGY (AM).
Experimental television station W2XAD evolved into the station WRGB, which, along with WGY and WGFM (now WRVE), was owned and operated by General Electric until 1983. In 1965, the company expanded into cable television when it was awarded a non-exclusive franchise in Schenectady, operated through its subsidiary General Electric Cablevision Corporation. On February 15, 1965, General Electric also moved to expand its holdings with additional television stations, up to the maximum number then permitted by the FCC, and with further cable systems through its subsidiaries General Electric Broadcasting Company and General Electric Cablevision Corporation.
The company also owned television stations such as KOA-TV (now KCNC-TV) in Denver and WSIX-TV (later WNGE-TV, now WKRN) in Nashville. As with WRGB, General Electric sold off most of its broadcasting holdings, but it held on to the Denver station until 1986, when General Electric bought RCA and the station became an NBC owned-and-operated station. It remained one until 1995, when it was transferred, along with KUTV in Salt Lake City, to a joint venture between CBS and Group W in a swap deal for the longtime CBS owned-and-operated station in Philadelphia, WCAU-TV.
Former General Electric-owned stations
Stations are arranged in alphabetical order by state and city of license.
(**) Indicates a station that was built and signed on by General Electric.
Radio stations
Power generation
Led by Sanford Alexander Moss, GE moved into the new field of aircraft turbo superchargers. This technology also led to the development of industrial gas turbine engines used for power production. GE introduced the first set of superchargers during World War I and continued to develop them during the interwar period. Superchargers became indispensable in the years immediately before World War II. GE supplied 300,000 turbo superchargers for use in fighter and bomber engines. This work led the U.S. Army Air Corps to select GE to develop the nation's first jet engine during the war. This experience, in turn, made GE a natural choice to develop the Whittle W.1 jet engine that was demonstrated in the United States in 1941. GE was ranked ninth among United States corporations in the value of wartime production contracts. However, its early work with Whittle's designs was later handed to Allison Engine Company. GE Aviation then emerged as one of the world's largest engine manufacturers, surpassing the British company Rolls-Royce plc.
Some consumers boycotted GE light bulbs, refrigerators, and other products during the 1980s and 1990s. The purpose of the boycott was to protest against GE's role in nuclear weapons production.
In 2002, GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time, and GE increased engineering and supplies for the Wind Division and doubled the annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009.
In 2018, GE Power garnered press attention when a model 7HA gas turbine in Texas was shut down for two months due to the failure of a turbine blade. This model uses similar blade technology to GE's newest and most efficient model, the 9HA. After the failure, GE developed new protective coatings and heat treatment methods. Gas turbines represent a significant portion of GE Power's revenue, and also represent a significant portion of the power generation fleet of several utility companies in the United States. Chubu Electric of Japan and Électricité de France also had units that were affected. Initially, GE did not realize the turbine blade issue of the 9FB unit would impact the new HA units.
Computing
GE was one of the eight major computer companies of the 1960s along with IBM, Burroughs, NCR, Control Data Corporation, Honeywell, RCA, and UNIVAC. GE had a line of general purpose and special purpose computers, including the GE 200, GE 400, and GE 600 series general-purpose computers, the GE/PAC 4000 series real-time process control computers, and the DATANET-30 and Datanet 355 message switching computers (DATANET-30 and 355 were also used as front end processors for GE mainframe computers). A Datanet 500 computer was designed but never sold.
In 1956 Homer Oldfield had been promoted to General Manager of GE's Computer Department. He facilitated the invention and construction of the Bank of America ERMA system, the first computerized system designed to read magnetized numbers on checks. But he was fired from GE in 1958 by Ralph J. Cordiner for overstepping his bounds and successfully gaining the ERMA contract. Cordiner was strongly against GE entering the computer business because he did not see the potential in it.
In 1962, GE started developing its GECOS (later renamed GCOS) operating system, originally for batch processing, but later extended to time-sharing and transaction processing. Versions of GCOS are still in use today. From 1964 to 1969, GE and Bell Laboratories (which soon dropped out) joined with MIT to develop the Multics operating system on the GE 645 mainframe computer. The project took longer than expected and was not a major commercial success, but it demonstrated concepts such as single-level storage, dynamic linking, hierarchical file system, and ring-oriented security. Active development of Multics continued until 1985.
GE got into computer manufacturing because, in the 1950s, they were the largest user of computers outside the United States federal government, aside from being the first business in the world to own a computer. Its major appliance manufacturing plant "Appliance Park" was the first non-governmental site to host one. However, in 1970, GE sold its computer division to Honeywell, exiting the computer manufacturing industry, though it retained its timesharing operations for some years afterward. GE was a major provider of computer time-sharing services through General Electric Information Services (GEIS, now GXS), offering online computing services that included GEnie.
In 2000, when United Technologies Corp. planned to buy Honeywell, GE made a counter-offer that was approved by Honeywell. On July 3, 2001, the European Union issued a statement that "prohibit the proposed acquisition by General Electric Co. of Honeywell Inc.". The reasons given were it "would create or strengthen dominant positions on several markets and that the remedies proposed by GE were insufficient to resolve the competition concerns resulting from the proposed acquisition of Honeywell".
On June 27, 2014, GE partnered with collaborative design company Quirky to announce its connected LED bulb called Link. The Link bulb is designed to communicate with smartphones and tablets using a mobile app called Wink.
Acquisitions and divestments
In December 1985, GE reacquired the RCA Corporation, primarily to gain ownership of the NBC television network for $6.28 billion; this merger surpassed the Capital Cities/ABC merger from earlier that year as the largest non-oil company merger in world business history. The remainder of RCA's divisions and assets were sold to various companies, including Bertelsmann Music Group which acquired RCA Records. Thomson SA, which licensed the manufacture of RCA and GE branded electronics, traced its roots to Thomson-Houston, one of the original components of GE. Also in 1986, Kidder, Peabody & Co., a U.S.-based securities firm, was sold to GE and following heavy losses was sold to PaineWebber in 1994.
In 1997, Genpact was founded as a unit of General Electric in Gurgaon. The company was founded as GE Capital International Services (GECIS). In the beginning, GECIS created processes for outsourcing back-office activities for GE Capital such as processing car loans and credit card transactions. It was an experimental concept at the time and the beginning of the business process outsourcing (BPO) industry. GE sold 60% stake in Genpact to General Atlantic and Oak Hill Capital Partners in 2005 and hived off Genpact into an independent business. GE is still a major client to Genpact today for services in customer service, finance, information technology, and analytics.
In 2001, GE acquired Spanish-language broadcaster Telemundo and incorporated it into NBC.
In 2002, Francisco Partners and Norwest Venture Partners acquired a division of GE called GE Information Systems (GEIS). The new company, named GXS, is based in Gaithersburg, Maryland. GXS is a provider of business-to-business e-commerce solutions. GE maintains a minority stake in GXS. Also in 2002, GE Wind Energy was formed when GE bought the wind turbine manufacturing assets of Enron Wind after the Enron scandals.
In 2004, GE bought 80% of Vivendi Universal Entertainment, the parent of Universal Pictures from Vivendi. Vivendi bought 20% of NBC, forming the company NBCUniversal. GE then owned 80% of NBCUniversal and Vivendi owned 20%. In 2004, GE completed the spin-off of most of its mortgage and life insurance assets into an independent company, Genworth Financial, based in Richmond, Virginia.
In May 2007, GE acquired Smiths Aerospace for $4.8 billion. Also in 2007, GE Oil & Gas acquired Vetco Gray for $1.9 billion, followed by the acquisition of Hydril Pressure & Control in 2008 for $1.1 billion.
GE Plastics was sold in 2008 to SABIC (Saudi Arabia Basic Industries Corporation). In May 2008, GE announced it was exploring options for divesting the bulk of its consumer and industrial business.
On December 3, 2009, it was announced that NBCUniversal would become a joint venture between GE and cable television operator Comcast. Comcast would hold a controlling interest in the company, while GE would retain a 49% stake and would buy out shares owned by Vivendi.
Vivendi would sell its 20% stake in NBCUniversal to GE for US$5.8 billion. Vivendi would sell 7.66% of NBCUniversal to GE for US$2 billion if the GE/Comcast deal was not completed by September 2010 and then sell the remaining 12.34% stake of NBCUniversal to GE for US$3.8 billion when the deal was completed or to the public via an IPO if the deal was not completed.
On March 1, 2010, GE announced plans to sell its 20.85% stake in Turkey-based Garanti Bank. In August 2010, GE Healthcare signed a strategic partnership to bring cardiovascular Computed Tomography (CT) technology from start-up Arineta Ltd. of Israel to the hospital market. In October 2010, GE acquired gas engines manufacturer Dresser Industries in a $3 billion deal and also bought a $1.6 billion portfolio of retail credit cards from Citigroup Inc. On October 14, 2010, GE announced the acquisition of data migration & SCADA simulation specialists Opal Software. In December 2010, for the second time that year (after the Dresser acquisition), GE bought the oil sector company Wellstream, an oil pipe maker, for 800 million pounds ($1.3 billion).
In March 2011, GE announced that it had completed the acquisition of privately held Lineage Power Holdings from The Gores Group. In April 2011, GE announced it had completed its purchase of John Wood plc's Well Support Division for $2.8 billion.
In 2011, GE Capital sold its $2 billion Mexican assets to Santander for $162 million and exited the business in Mexico. Santander additionally assumed the portfolio debts of GE Capital in the country. Following this, GE Capital focused on its core business and shed its non-core assets.
In June 2012, CEO and President of GE Jeff Immelt said that the company would invest ₹3 billion to accelerate its businesses in Karnataka. In October 2012, GE acquired $7 billion worth of bank deposits from MetLife Inc.
On March 19, 2013, Comcast bought GE's shares in NBCU for $16.7 billion, ending the company's longtime stake in television and film media.
In April 2013, GE acquired oilfield pump maker Lufkin Industries for $2.98 billion.
In April 2014, it was announced that GE was in talks to acquire the global power division of French engineering group Alstom for a figure of around $13 billion. A rival joint bid was submitted in June 2014 by Siemens and Mitsubishi Heavy Industries (MHI) with Siemens seeking to acquire Alstom's gas turbine business for €3.9 billion, and MHI proposing a joint venture in steam turbines, plus a €3.1 billion cash investment. In June 2014, a formal offer from GE worth $17 billion was agreed by the Alstom board. Part of the transaction involved the French government taking a 20% stake in Alstom to help secure France's energy and transport interests and French jobs. A rival offer from Siemens Mitsubishi Heavy Industries was rejected. The acquisition was expected to be completed in 2015. In October 2014, GE announced it was considering the sale of its Polish banking business Bank BPH.
Later in 2014, General Electric announced plans to open its global operations center in Cincinnati, Ohio. The Global Operations Center opened in October 2016 as home to GE's multifunctional shared services organization. It supports the company's finance/accounting, human resources, information technology, supply chain, legal and commercial operations, and is one of GE's four multifunctional shared services centers worldwide, alongside those in Pudong, China; Budapest, Hungary; and Monterrey, Mexico.
In April 2015, GE announced its intention to sell off its property portfolio, worth $26.5 billion, to Wells Fargo and The Blackstone Group. It was announced in April 2015 that GE would sell most of its finance unit and return around $90 billion to shareholders as the firm looked to trim down on its holdings and rid itself of its image of a "hybrid" company, working in both banking and manufacturing. In August 2015, GE Capital agreed to sell its Healthcare Financial Services business to Capital One for US$9 billion. The transaction involved US$8.5 billion of loans made to a wide array of sectors, including senior housing, hospitals, medical offices, outpatient services, pharmaceuticals, and medical devices. Also in August 2015, GE Capital agreed to sell GE Capital Bank's on-line deposit platform to Goldman Sachs. Terms of the transaction were not disclosed, but the sale included US$8 billion of on-line deposits and another US$8 billion of brokered certificates of deposit. The sale was part of GE's strategic plan to exit the U.S. banking sector and to free itself from tightening banking regulations. GE also aimed to shed its status as a "systemically important financial institution".
In September 2015, GE Capital agreed to sell its transportation finance unit to Canada's Bank of Montreal. The unit sold had US$8.7 billion (CA$11.5 billion) of assets, 600 employees, and 15 offices in the U.S. and Canada. The exact terms of the sale were not disclosed, but the final price would be based on the value of the assets at closing, plus a premium according to the parties. In October 2015, activist investor Nelson Peltz's fund Trian bought a $2.5 billion stake in the company.
In January 2016, Haier acquired GE's appliance division for $5.4 billion. In October 2016, GE Renewable Energy agreed to pay €1.5 billion to Doughty Hanson & Co for LM Wind Power during 2017.
At the end of October 2016, it was announced that GE was under negotiations for a deal valued at about $30 billion to combine GE Oil & Gas with Baker Hughes. The transaction would create a publicly traded entity controlled by GE. It was announced that GE Oil & Gas would sell off its water treatment business, GE Water & Process Technologies, as part of its divestment agreement with Baker Hughes. The deal was cleared by the EU in May 2017, and by the United States Department of Justice in June 2017. The merger agreement was approved by shareholders at the end of June 2017. On July 3, 2017, the transaction was completed, and Baker Hughes became a GE company and was renamed Baker Hughes, a GE Company (BHGE). In November 2018, GE reduced its stake in Baker Hughes to 50.4%. On October 18, 2019, GE reduced its stake to 36.8% and the company was renamed back to Baker Hughes.
In May 2017, GE had signed $15 billion of business deals with Saudi Arabia. Saudi Arabia is one of GE's largest customers. In September 2017, GE announced the sale of its Industrial Solutions Business to ABB. The deal closed on June 30, 2018.
Fraud allegations and notice of possible SEC civil action
On August 15, 2019, Harry Markopolos, a financial fraud investigator known for his discovery of a Ponzi scheme run by Bernard Madoff, accused General Electric of being a "bigger fraud than Enron," alleging $38 billion in accounting fraud. GE denied wrongdoing.
On October 6, 2020, General Electric reported it received a Wells notice from the Securities and Exchange Commission stating the SEC may take civil action for possible violations of securities laws.
Insufficient reserves for long-term care policies
It is alleged that GE is "hiding" (i.e., under-reserved) $29 billion in losses related to its long-term care business.
According to an August 2019 Fitch Ratings report, there are concerns that GE has not set aside enough money to cover its long-term care liabilities.
In 2018, a lawsuit (the Bezio case) was filed in New York state court on behalf of participants in GE's 401(k) plan and shareowners alleging violations of Section 11 of the Securities Act of 1933 based on alleged misstatements and omissions related to insurance reserves and performance of GE's business segments.
The Kansas Insurance Department (KID) is requiring General Electric to make $14.5 billion of capital contributions for its insurance contracts during the 7-year period ending in 2024.
GE reported the total liability related to its insurance contracts increased significantly from 2016 to 2019:
December 31, 2016 $26.1 billion
December 31, 2017 $38.6 billion
December 31, 2018 $35.6 billion
December 31, 2019 $39.6 billion
In 2018, GE announced that the issuance of the new standard by the Financial Accounting Standards Board (FASB) regarding Financial Services – Insurance (Topic 944) would materially affect its financial statements. Mr. Markopolos estimated there would be a $US 10.5 billion charge when the new accounting standard is adopted in the first quarter of 2021.
Anticipated $8 billion loss upon disposition of Baker Hughes
In 2017, GE acquired a 62.5% interest in Baker Hughes (BHGE) when it combined its oil & gas business with Baker Hughes Incorporated.
In 2018, GE reduced its interest to 50.4%, resulting in the realization of a $2.1 billion loss. GE is planning to divest its remaining interest and has warned that the divestment will result in an additional loss of $8.4 billion (assuming a BHGE share price of $23.57 per share). In response to the fraud allegations, GE noted the amount of the loss would be $7.4 billion if the divestment occurred on July 26, 2019. Mr. Markopolos noted that BHGE is an asset available for sale and therefore mark-to-market accounting is required.
Markopolos noted GE's current ratio was only 0.67. He expressed concerns that GE may file for bankruptcy if there is a recession.
Final years and three-way split (2018–2024)
In 2018, the GE Pension Plan reported losses of US$3.3 billion on plan assets.
In 2018, General Electric changed the discount rate used to calculate the actuarial liabilities of its pension plans. The rate was increased from 3.64% to 4.34%. Consequently, the reported liability for the underfunded pension plans decreased by $7 billion year-over-year, from $34.2 billion in 2017 to $27.2 billion in 2018.
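The direction of the change follows from basic present-value arithmetic: a higher discount rate shrinks the present value of a fixed stream of future benefit payments. A minimal Python sketch follows; the cash-flow figures are invented for illustration and are not GE's actuarial data.

```python
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (paid at the end of years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

# Hypothetical stream: $5 billion of benefit payments per year for 30 years
flows = [5.0] * 30   # in $ billions

pv_low  = present_value(flows, 0.0364)   # discounted at 3.64%
pv_high = present_value(flows, 0.0434)   # discounted at 4.34%
# the higher discount rate yields the smaller reported liability
print(round(pv_low, 1), round(pv_high, 1), round(pv_low - pv_high, 1))
```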
In October 2018, General Electric announced it would "freeze pensions" for about 20,000 salaried U.S. employees. The employees will be moved to a defined contribution retirement plan in 2021.
On March 30, 2020, General Electric factory workers protested to convert jet engine factories to make ventilators during the COVID-19 crisis.
In June 2020, GE made an agreement to sell its Lighting business to Savant Systems, Inc. Financial details of the transaction were not disclosed.
In November 2020, General Electric warned it would be cutting jobs waiting for a recovery due to the COVID-19 pandemic.
On November 9, 2021, the company announced it would divide itself into three public companies. On July 18, 2022, GE unveiled the brand names of the companies it had devised through its planned separation: GE Aerospace, GE HealthCare, and GE Vernova. The new companies are respectively focused on aerospace, healthcare, and energy (renewable energy, power, and digital). The first spin-off of GE HealthCare was finalized on January 4, 2023; GE continues to hold 10.24% of shares and intends to sell the remaining over time. This was followed by the spin-off of GE's portfolio of energy businesses, which became GE Vernova on April 2, 2024. Following these transactions, GE became an aviation-focused company; GE Aerospace is the legal successor of the original GE. The company's legal name is still General Electric Company.
Financial performance
Dividends
General Electric was a longtime "dividend aristocrat" (a company with a long history of maintaining dividend payments to shareholders). The company had not cut its dividend in 119 years until 2017, when it halved the quarterly payout from 24 cents per share to 12 cents per share. In 2018, GE further reduced its quarterly dividend from 12 cents to 1 cent per share.
Stock
As a publicly traded company on the New York Stock Exchange, GE stock was one of the 30 components of the Dow Jones Industrial Average from 1907 to 2018, the longest continuous presence of any company on the index, and during this time the only company that was part of the original Dow Jones Industrial Index created in 1896. In August 2000, the company had a market capitalization of $601 billion, and was the most valuable company in the world. On June 26, 2018, the stock was removed from the index and replaced with Walgreens Boots Alliance. In the years leading to its removal, GE was the worst performing stock in the Dow, falling more than 55 percent year on year and more than 25 percent year to date. The company continued to lose value after being removed from the index.
On July 30, 2021, General Electric Co. announced the completion of a reverse stock split of GE common stock at a ratio of 1-for-8; the shares began trading on a split-adjusted basis, with a new ISIN number (US3696043013), on August 2, 2021.
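For readers unfamiliar with the mechanics, a 1-for-8 reverse split consolidates every eight old shares into one new share, leaving the total market value of a holding unchanged apart from any cash-out of fractional shares. The short Python illustration below uses invented example numbers, not GE's actual share counts or prices.

```python
def reverse_split(shares, price, ratio=8):
    """Consolidate `ratio` old shares into one new share; whole shares only,
    with any fractional remainder typically paid out in cash."""
    new_shares, remainder = divmod(shares, ratio)
    new_price = price * ratio
    cash_out = remainder * price
    return new_shares, new_price, cash_out

# Example: 1,000 old shares at a hypothetical $13 pre-split price
print(reverse_split(1000, 13.0))   # (125, 104.0, 0.0)
```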
Corporate affairs
In 1959, General Electric was accused of promoting the largest illegal cartel in the United States since the adoption of the Sherman Antitrust Act of 1890 in order to maintain artificially high prices. In total, 29 companies and 45 executives would be convicted. Subsequent congressional inquiries revealed that "white-collar crime" was by far the most costly form of crime for the United States' finances.
GE is a multinational conglomerate headquartered in Boston, Massachusetts. However, its main offices are located at 30 Rockefeller Plaza at Rockefeller Center in New York City, known now as the Comcast Building. It was formerly known as the GE Building for the prominent GE logo on the roof; NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, it has been associated with the center since its construction in the 1930s. GE moved its corporate headquarters from the GE Building on Lexington Avenue to Fairfield, Connecticut in 1974. In 2016, GE announced a move to the South Boston Waterfront neighborhood of Boston, Massachusetts, partly as a result of an incentive package provided by state and city governments. The first group of workers arrived in the summer of 2016, and the full move was to be completed by 2018. Due to poor financial performance and corporate downsizing, GE sold the land it planned to build its new headquarters building on, instead choosing to occupy neighboring leased buildings.
GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. As of 2011, the company spent more on U.S. lobbying than any other company.
In 2005, GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company.
GE is one of the biggest players in the wind power industry and is developing environment-friendly products such as hybrid locomotives, desalination and water reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S." and has set goals for its subsidiaries to lower their greenhouse gas emissions.
On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO.
In July 2010, GE agreed to pay $23.4 million to settle an SEC complaint without admitting or denying the allegations that two of its subsidiaries bribed Iraqi government officials to win contracts under the U.N. oil-for-food program between 2002 and 2003.
In February 2017, GE announced that the company intends to close the gender gap by promising to hire and place 20,000 women in technical roles by 2020. The company is also seeking to have a 50:50 male-to-female gender representation in all entry-level technical programs.
In October 2017, GE announced they would be closing research and development centers in Shanghai, Munich and Rio de Janeiro. The company spent $5 billion on R&D in the last year.
On February 25, 2019, GE sold its diesel locomotive business to Wabtec.
CEO
, John L. Flannery was replaced by H. Lawrence "Larry" Culp Jr. as chairman and CEO, in a unanimous vote of the GE Board of Directors.
Charles A. Coffin (1913–1922)
Owen D. Young (1922–1939, 1942–1945)
Philip D. Reed (1940–1942, 1945–1958)
Ralph J. Cordiner (1958–1963)
Gerald L. Phillippe (1963–1972)
Fred J. Borch (1967–1972)
Reginald H. Jones (1972–1981)
Jack Welch (1981–2001)
Jeff Immelt (2001–2017)
John L. Flannery (2017–2018)
H. Lawrence Culp Jr. (2018–2024)
Corporate recognition and rankings
In 2011, Fortune ranked GE the sixth-largest firm in the U.S., and the 14th-most profitable. Other rankings for 2011–2012 include the following:
#18 company for leaders (Fortune)
#82 green company (Newsweek)
#91 most admired company (Fortune)
#19 most innovative company (Fast Company).
In 2012, GE's brand was valued at $28.8 billion. CEO Jeff Immelt had a set of changes in the presentation of the brand commissioned in 2004, after he took the reins as chairman, to unify the diversified businesses of GE.
Tom Geismar later stated that, looking back at the logos of the 1910s, 1920s, and 1930s, one can clearly judge them to be old-fashioned; in his view they served their time well but now look out of date. Chermayeff & Geismar, along with colleagues Bill Brown and Ivan Chermayeff, created the modern 1980 logo. The changes included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira) and a new slogan, "Imagination at work", composed by David Lucas, to replace the slogan "We Bring Good Things to Life" used since 1979. The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising. The changes were designed by Wolff Olins and are used on GE's marketing, literature, and website. In 2014, a second typeface family was introduced: GE Sans and Serif by Bold Monday, created under art direction by Wolff Olins.
, GE had appeared on the Fortune 500 list for 22 years and held the 11th rank. GE was removed from the Dow Jones Industrial Average on June 28, 2018, after the value had dropped below 1% of the index's weight.
Businesses
GE's primary business divisions are:
GE Additive
GE Aerospace
GE Capital
GE Digital
GE Healthcare
GE Power
GE Renewable Energy
GE Research
Through these businesses, GE participates in markets that include the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), industrial automation, medical imaging equipment, motors, aircraft jet engines, and aviation services. Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance, it offers a range of financial services. It has a presence in over 100 countries.
General Imaging manufactures GE digital cameras.
Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave (consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success.
GE is planning to set up a silicon carbide chip packaging R&D center in collaboration with SUNY Polytechnic Institute in Utica, New York. The project will create 470 jobs with the potential to grow to 820 jobs within 10 years.
On September 14, 2015, GE announced the creation of a new unit: GE Digital, which will bring together its software and IT capabilities. The new business unit will be headed by Bill Ruh, who joined GE in 2011 from Cisco Systems and has since worked on GE's software efforts.
Morgan Stanley sold a stake in GE HealthCare Technologies for $1.1 billion as part of a deal to swap General Electric Co. debt for GE HealthCare stock.
Former divisions
GE Industrial was a division providing appliances, lighting, and industrial products; factory automation systems; plastics, silicones, and quartz products; security and sensors technology; and equipment financing, management, and operating services. As of 2007, it had 70,000 employees, generating $17.7 billion in revenue. After some major realignments in late 2007, GE Industrial was organized in two main sub businesses:
GE Consumer & Industrial
Appliances
Electrical Distribution
Lighting
GE Enterprise Solutions
Digital Energy
GE Fanuc Intelligent Platforms
Security
Sensing & Inspection Technologies
The former GE Plastics division was sold in August 2007 and is now SABIC Innovative Plastics.
On May 4, 2008, it was announced that GE would auction off its appliances business for an expected sale of $5–8 billion. However, this plan fell through as a result of the recession.
The former GE Appliances and Lighting segment was dissolved in 2014, when GE attempted to sell its appliance division to Electrolux; after an antitrust suit was filed against Electrolux, the division was instead sold to Haier for $5.4 billion in June 2016. GE Lighting (consumer lighting) and the newly created Current, powered by GE, which deals in commercial LED, solar, EV, and energy storage, became stand-alone businesses within the company, until the sale of the latter to American Industrial Partners in April 2019.
The former GE Transportation division merged with Wabtec on February 25, 2019, leaving GE with a 24.9% holding in Wabtec.
On July 1, 2020, GE Lighting was acquired by Savant Systems and remains headquartered at Nela Park in East Cleveland, Ohio.
Environmental record
Carbon footprint
General Electric Company reported total CO2e emissions (direct and indirect) of 2,080 kilotonnes for the twelve months ending December 31, 2020, a decrease of 310 kilotonnes (13%) year over year. There has been a consistent declining trend in reported emissions since 2016.
Pollution
Some of GE's activities have given rise to large-scale air and water pollution. Based on data from 2000, Researchers at the Political Economy Research Institute listed the corporation as the fourth-largest corporate producer of air pollution in the United States (behind only E. I. Du Pont de Nemours & Co., United States Steel Corp., and ConocoPhillips), with more than 4.4 million pounds per year (2,000 tons) of toxic chemicals released into the air. GE has also been implicated in the creation of toxic waste. According to United States Environmental Protection Agency (EPA) documents, only the United States Government, Honeywell, and Chevron Corporation are responsible for producing more Superfund toxic waste sites.
In 1983, New York State Attorney General Robert Abrams filed suit in the United States District Court for the Northern District of New York to compel GE to pay for the clean-up of what was claimed to be more than 100,000 tons of chemicals dumped from their plant in Waterford, New York, which polluted nearby groundwater and the Hudson River. In 1999, the company agreed to pay a $250 million settlement in connection with claims it polluted the Housatonic River (at Pittsfield, Massachusetts) and other sites with polychlorinated biphenyls (PCBs) and other hazardous substances.
In 2003, acting on concerns that the plan proposed by GE did not "provide for adequate protection of public health and the environment," EPA issued an administrative order for the company to "address cleanup at the GE site" in Rome, Georgia, also contaminated with PCBs.
The nuclear reactors involved in the 2011 crisis at Fukushima I in Japan were GE designs, and the architectural designs were done by Ebasco, formerly owned by GE. Concerns over the design and safety of these reactors were raised as early as 1972, but tsunami danger was not discussed at that time. Nuclear reactors of the same GE-designed model are still operating in the US; however, as of May 31, 2019, the controversial Pilgrim Nuclear Generating Station, in Plymouth, Massachusetts, has been shut down and is in the process of decommissioning.
Pollution of the Hudson River
GE heavily contaminated the Hudson River with PCBs between 1947 and 1977. This pollution caused a range of harmful effects to wildlife and people who eat fish from the river. In 1983 EPA declared a 200-mile (320 km) stretch of the river, from Hudson Falls to New York City, to be a Superfund site requiring cleanup. This Superfund site is considered to be one of the largest in the nation. In addition to receiving extensive fines, GE is continuing its sediment removal operations, pursuant to the Superfund orders, in the 21st century.
Pollution of the Housatonic River
Until 1977, GE polluted the Housatonic River with PCB discharges from its plant at Pittsfield, Massachusetts. EPA designated the Pittsfield plant and several miles of the Housatonic to be a Superfund site in 1997, and ordered GE to remediate the site. Aroclor 1254 and Aroclor 1260, products manufactured by Monsanto, were the principal contaminants discharged into the river. The highest concentrations of PCBs in the Housatonic River are found in Woods Pond in Lenox, Massachusetts, just south of Pittsfield, where they have been measured at up to 110 mg/kg in the sediment. About 50% of all the PCBs currently in the river are estimated to be retained in the sediment behind Woods Pond dam. This is estimated to be about of PCBs. Formerly filled oxbows are also polluted. Waterfowl and fish that live in and around the river contain significant levels of PCBs and can present health risks if consumed. In 2020, GE completed remediation and restoration of its 10 manufacturing plant areas within the city of Pittsfield. Plans for cleanup of the river south of the city are not finalized.
Social responsibility
Environmental initiatives
GE's environmental work and research date back at least to 1968, with the experimental Delta electric car built by the GE Research and Development Center under Bruce Laumeister. The electric car was followed shortly afterward by the Elec-Trak, the first commercially produced all-electric garden tractor, which was manufactured from around 1969 until 1975.
On June 6, 2011, GE announced that it had licensed solar thermal technology from California-based eSolar for use in power plants that use both solar and natural gas.
On May 26, 2011, GE unveiled its EV Solar Carport, a carport that incorporates solar panels on its roof, with electric vehicle charging stations under its cover.
In May 2005, GE announced the launch of a program called "Ecomagination", intended, in the words of CEO Jeff Immelt, "to develop tomorrow's solutions such as solar energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger durable materials, efficient lighting, and water purification technology". The announcement prompted an op-ed piece in The New York Times to observe that, "while General Electric's increased emphasis on clean technology will probably result in improved products and benefit its bottom line, Mr. Immelt's credibility as a spokesman on national environmental policy is fatally flawed because of his company's intransigence in cleaning up its own toxic legacy."
GE has said that it will invest $1.4 billion in clean technology research and development in 2008 as part of its Ecomagination initiative. As of October 2008, the scheme had resulted in 70 green products being brought to market, ranging from halogen lamps to biogas engines. In 2007, GE raised the annual revenue target for its Ecomagination initiative from $20 billion in 2010 to $25 billion following positive market response to its new product lines. In 2010, GE continued to raise its investment by adding $10 billion into Ecomagination over the next five years.
GE Energy's renewable energy business has expanded greatly to keep up with growing U.S. and global demand for clean energy. Since entering the renewable energy industry in 2002, GE has invested more than $850 million in renewable energy commercialization. In August 2008, it acquired Kelman Ltd, a Northern Ireland-based company specializing in advanced monitoring and diagnostics technologies for transformers used in renewable energy generation, and announced an expansion of its business in Northern Ireland in May 2010. In 2009, GE's renewable energy initiatives, which included solar power, wind power and GE Jenbacher gas engines using renewable and non-renewable methane-based gases, employed more than 4,900 people globally and had created more than 10,000 supporting jobs.
GE Energy and Orion New Zealand (Orion) have announced the implementation of the first phase of a GE network management system to help improve power reliability for customers. GE's ENMAC Distribution Management System is the foundation of Orion's initiative. The system of smart grid technologies will significantly improve the network company's ability to manage big network emergencies and help it restore power faster when outages occur.
In June 2018, GE Volunteers, an internal group of GE employees, along with the Malaysian Nature Society, transplanted more than 270 plants from the Taman Tugu forest reserve so that they may be replanted in a forest trail that is under construction.
Educational initiatives
GE Healthcare is collaborating with the Wayne State University School of Medicine and the Medical University of South Carolina to offer an integrated radiology curriculum during their respective MD Programs led by investigators of the Advanced Diagnostic Ultrasound in Microgravity study. GE has donated over one million dollars of Logiq E Ultrasound equipment to these two institutions.
Marketing initiatives
Between September 2011 and April 2013, GE ran a content marketing campaign dedicated to telling the stories of "innovators—people who are reshaping the world through act or invention." The initiative included 30 3-minute films from leading documentary film directors (Albert Maysles, Jessica Yu, Leslie Iwerks, Steve James, Alex Gibney, Lixin Fan, Gary Hustwit and others), and a user-generated competition that received over 600 submissions, out of which 20 finalists were chosen.
Short Films, Big Ideas was launched at the 2011 Toronto International Film Festival in partnership with cinelan. Stories included breakthroughs in Slingshot (water vapor distillation system), cancer research, energy production, pain management, and food access. Each of the 30 films received world premiere screenings at a major international film festival, including the Sundance Film Festival and the Tribeca Film Festival. The winning amateur director film, The Cyborg Foundation, was awarded a prize at the 2013 Sundance Film Festival. According to GE, the campaign garnered more than 1.5 billion total media impressions, 14 million online views, and was seen in 156 countries.
In January 2017, GE signed an estimated $7 million deal with the Boston Celtics to have its corporate logo put on the NBA team's jersey.
Charity
On March 3, 2022, GE published an international memo pledging to donate $4.5 million to Ukraine amid Russian invasion. According to the memo, $4 million will be used for medical equipment, $400,000 for emergency cash for refugees, and $100,000 will go to Airlink, an NGO that helps communities in crisis.
Political affiliation
In the 1950s, GE sponsored Ronald Reagan's TV career and launched him on the lecture circuit. GE has also designed social programs, supported civil rights organizations, and funded minority education programs.
Notable appearances in media
In the early 1950s, Kurt Vonnegut was a writer for GE. A number of his novels and stories (notably Cat's Cradle and Player Piano) refer to the fictional city of Ilium, which appears to be loosely based on Schenectady, New York. The Ilium Works is the setting for the short story "Deer in the Works".
In 1981, GE won a Clio award for its 30 Soft White Light Bulbs commercial, We Bring Good Things to Life. The slogan "We Bring Good Things to Life" was created by Phil Dusenberry at the ad agency BBDO.
GE was the primary focus of a 1991 short subject Academy Award-winning documentary, Deadly Deception: General Electric, Nuclear Weapons, and Our Environment, that juxtaposed GE's "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons.
GE was frequently mentioned and parodied in the NBC comedy sitcom 30 Rock from 2006 to 2013. Former General Electric CEO Jack Welch even cameoed as himself, appearing in the season four episode "Future Husband". The episode is a satirical reference to the real-world acquisition of NBC Universal from General Electric by Comcast in November 2009.
In 2013, GE received a National Jefferson Award for Outstanding Service by a Major Corporation.
Branding
The General Electric logo has a blue circle with a white outline. It has four white lines which "suggest the blades of a midcentury tabletop fan." In the center of the circle are the letters "GE." Its design has changed little throughout the company's history. The logo is officially known as the Monogram but is also known by some as "the meatball."
See also
GE Technology Infrastructure
Knolls Atomic Power Laboratory
List of assets owned by General Electric
Phoebus cartel
Top 100 US Federal Contractors
Notes
References
Further reading
Woodbury, David O. Elihu Thomson, Beloved Scientist (Boston: Museum of Science, 1944)
Haney, John L. The Elihu Thomson Collection American Philosophical Society Yearbook 1944.
Hammond, John W. Men and Volts: The Story of General Electric, published 1941, 436 pages.
Mill, John M. Men and Volts at War: The Story of General Electric in World War II, published 1947.
Irmer, Thomas. Gerard Swope. In Immigrant Entrepreneurship: German-American Business Biographies, 1720 to the Present, vol. 4, edited by Jeffrey Fear. German Historical Institute.
External links
1892 establishments in New York (state)
2024 disestablishments in New York (state)
American companies established in 1892
American companies disestablished in 2024
Companies in the Dow Jones Global Titans 50
Companies listed on the New York Stock Exchange
Conglomerate companies established in 1892
Conglomerate companies disestablished in 2024
Conglomerate companies of the United States
Defunct aircraft engine manufacturers of the United States
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct computer systems companies
Defunct electric power companies of the United States
Electrical engineering companies of the United States
Electrical wiring and construction supplies manufacturers
Electric motor manufacturers
Electric transformer manufacturers
Electronics companies established in 1892
Electronics companies disestablished in 2024
Former components of the Dow Jones Industrial Average
GIS companies
Guitar amplification tubes
Lighting brands
Manufacturing companies based in Boston
Manufacturing companies established in 1892
Manufacturing companies disestablished in 2024
Marine engine manufacturers
Military equipment of the United States
Multinational companies headquartered in the United States
Photography equipment manufacturers of the United States
Pump manufacturers
Radio manufacturers
RCA
Schenectady, New York
Superfund sites in Washington (state)
Thomas Edison
Time-sharing companies
Transportation companies based in New York (state)
Transportation companies of the United States | General Electric | [
"Engineering"
] | 10,575 | [
"Radio electronics",
"Radio manufacturers"
] |
12,733 | https://en.wikipedia.org/wiki/Giant%20planet | A giant planet, sometimes referred to as a jovian planet (Jove being another name for the Roman god Jupiter), is a diverse type of planet much larger than Earth. Giant planets are usually primarily composed of low-boiling point materials (volatiles), rather than rock or other solid matter, but massive solid planets can also exist. There are four such planets in the Solar System: Jupiter, Saturn, Uranus, and Neptune. Many extrasolar giant planets have been identified.
Giant planets are sometimes known as gas giants, but many astronomers now apply the term only to Jupiter and Saturn, classifying Uranus and Neptune, which have different compositions, as ice giants. Both names are potentially misleading; the Solar System's giant planets all consist primarily of fluids above their critical points, where distinct gas and liquid phases do not exist. Jupiter and Saturn are principally made of hydrogen and helium, whilst Uranus and Neptune consist mainly of water, ammonia, and methane.
The defining differences between a very low-mass brown dwarf and a massive gas giant () are debated. One school of thought is based on planetary formation; the other, on the physics of the interior of planets. Part of the debate concerns whether brown dwarfs must, by definition, have experienced nuclear fusion at some point in their history.
Terminology
The term gas giant was coined in 1952 by science fiction writer James Blish and was originally used to refer to all giant planets. Arguably it is something of a misnomer, because throughout most of the volume of these planets the pressure is so high that matter is not in gaseous form. Other than the upper layers of the atmosphere, all matter is likely beyond the critical point, where there is no distinction between liquids and gases. Fluid planet would be a more accurate term. Jupiter also has metallic hydrogen near its center, but much of its volume is hydrogen, helium, and traces of other gases above their critical points. The observable atmospheres of all these planets (at less than a unit optical depth) are quite thin compared to their radii, only extending perhaps one percent of the way to the center. Thus, the observable parts are gaseous (in contrast to Mars and Earth, which have gaseous atmospheres through which the crust can be seen).
The rather misleading term has caught on because planetary scientists typically use rock, gas, and ice as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of the matter's phase. In the outer Solar System, hydrogen and helium are referred to as gas; water, methane, and ammonia as ice; and silicates and metals as rock. When deep planetary interiors are considered, it may not be far off to say that, by ice astronomers mean oxygen and carbon, by rock they mean silicon, and by gas they mean hydrogen and helium. The many ways in which Uranus and Neptune differ from Jupiter and Saturn have led some to use the term only for planets similar to the latter two. With this terminology in mind, some astronomers have started referring to Uranus and Neptune as ice giants to indicate the predominance of the ices (in fluid form) in their interior composition.
The alternative term jovian planet refers to the Roman god Jupiter—the genitive form of which is Jovis, hence Jovian—and was intended to indicate that all of these planets were similar to Jupiter.
Objects large enough to start deuterium fusion (above 13 Jupiter masses for solar composition) are called brown dwarfs, and these occupy the mass range between that of large giant planets and the lowest-mass stars. The 13-Jupiter-mass cutoff is a rule of thumb rather than something of precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13-Jupiter-mass value lies somewhere in between. The amount of deuterium burnt depends not only on the mass but also on the composition of the planet, especially on the amount of helium and deuterium present. The Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses.
Description
Giant planets are massive planets with thick atmospheres of hydrogen and helium. They may have a condensed "core" of heavier elements, delivered during the formation process. This core may be partially or completely dissolved and dispersed throughout the hydrogen/helium envelope. In "traditional" giant planets such as Jupiter and Saturn (the gas giants) hydrogen and helium make up most of the mass of the planet, whereas they only make up an outer envelope on Uranus and Neptune, which are instead mostly composed of water, ammonia, and methane and therefore increasingly referred to as "ice giants".
Extrasolar giant planets that orbit very close to their stars are the exoplanets that are easiest to detect. These are called hot Jupiters and hot Neptunes because they have very high surface temperatures. Hot Jupiters were, until the advent of space-borne telescopes, the most common form of exoplanet known, due to the relative ease of detecting them with ground-based instruments.
Giant planets are commonly said to lack solid surfaces, but it is more accurate to say that they lack surfaces altogether since the gases that form them simply become thinner and thinner with increasing distance from the planets' centers, eventually becoming indistinguishable from the interplanetary medium. Therefore, landing on a giant planet may or may not be possible, depending on the size and composition of its core.
Subtypes
Gas giants
Gas giants consist mostly of hydrogen and helium. The Solar System's gas giants, Jupiter and Saturn, have heavier elements making up between 3 and 13 percent of their mass. Gas giants are thought to consist of an outer layer of molecular hydrogen, surrounding a layer of liquid metallic hydrogen, with a probable molten core with a rocky composition.
Jupiter and Saturn's outermost portion of the hydrogen atmosphere has many layers of visible clouds that are mostly composed of water and ammonia. The layer of metallic hydrogen makes up the bulk of each planet, and is referred to as "metallic" because the very high pressure turns hydrogen into an electrical conductor. The core is thought to consist of heavier elements at such high temperatures (20,000 K) and pressures that their properties are poorly understood.
Ice giants
Ice giants have distinctly different interior compositions from gas giants. The Solar System's ice giants, Uranus and Neptune, have a hydrogen-rich atmosphere that extends from the cloud tops down to about 80% (Uranus) or 85% (Neptune) of their radius. Below this, they are predominantly "icy", i.e. consisting mostly of water, methane, and ammonia. There is also some rock and gas, but various proportions of ice–rock–gas could mimic pure ice, so that the exact proportions are unknown.
Uranus and Neptune have very hazy atmospheric layers with small amounts of methane, giving them light aquamarine colors. Both have magnetic fields that are sharply inclined to their axes of rotation.
Unlike the other giant planets, Uranus has an extreme tilt that causes its seasons to be severely pronounced. The two planets also have other subtle but important differences. Uranus has more hydrogen and helium than Neptune despite being less massive overall. Neptune is therefore denser and has much more internal heat and a more active atmosphere. The Nice model, in fact, suggests that Neptune formed closer to the Sun than Uranus did, and should therefore have more heavy elements.
Massive solid planets
Massive solid planets seemingly can also exist, though their formation mechanisms and occurrence remain subjects of ongoing research and debate.
The possibility of solid planets up to thousands of Earth masses forming around massive stars (B-type and O-type stars; 5–120 solar masses) has been suggested in some earlier studies. The hypothesis proposed that the protoplanetary disk around such stars would contain enough heavy elements, and that high UV radiation and strong winds could photoevaporate the gas in the disk, leaving just the heavy elements. For comparison, Neptune's mass equals 17 Earth masses, Jupiter has 318 Earth masses, and the 13 Jupiter-mass limit used in the IAU's working definition of an exoplanet equals approximately 4000 Earth masses.
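As a rough arithmetic check of the figure just quoted, using Jupiter's mass of about 318 Earth masses as stated above:

```latex
13\,M_{\mathrm{Jup}} \approx 13 \times 318\,M_{\oplus} \approx 4.1 \times 10^{3}\,M_{\oplus}
```

which rounds to the approximately 4000 Earth masses given in the text.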
More recent research, however, has called into question the likelihood of massive solid planets forming around very massive stars (https://arxiv.org/pdf/1103.0556). Studies have shown that the ratio of protoplanetary disk mass to stellar mass decreases rapidly for stars above 10 solar masses, falling to less than 10^-4. Furthermore, no protoplanetary disks have been observed around O-type stars to date.
The original suggestion of massive solid planets forming around 5-120 solar mass stars, presented in earlier literature, lacks substantial supporting evidence or citations to planetary formation theories. The study in question primarily focused on simulating mass-radius relationships for rocky planets, including hypothetical super-massive solid planets, but did not investigate whether planetary formation theories actually support the existence of such objects. The authors of that study acknowledged that "Such massive exoplanets are not yet known to exist."
Given these considerations, the formation and existence of massive solid planets around very massive stars remain speculative and require further research and observational evidence.
Super-Puffs
A super-puff is a type of exoplanet with a mass only a few times larger than Earth's but a radius larger than Neptune's, giving it a very low mean density. They are cooler and less massive than the inflated low-density hot Jupiters. The most extreme examples known are the three planets around Kepler-51, which are all Jupiter-sized but with densities below 0.1 g/cm3.
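To illustrate how such low mean densities follow from a small mass spread over a large volume, the following is a minimal sketch using illustrative assumed values (roughly 3 Earth masses and a Neptune-like radius), not measurements of any particular Kepler-51 planet.

```python
# A minimal sketch (not a measurement): mean density of a hypothetical super-puff
# with ~3 Earth masses and a Neptune-like radius. Both values are assumptions
# chosen only to illustrate the scale of the numbers involved.
import math

EARTH_MASS_KG = 5.972e24        # mass of Earth, kg
NEPTUNE_RADIUS_M = 2.4622e7     # volumetric mean radius of Neptune, m

mass = 3 * EARTH_MASS_KG                                   # assumed planet mass, kg
volume = (4.0 / 3.0) * math.pi * NEPTUNE_RADIUS_M ** 3     # sphere volume, m^3
density_g_cm3 = (mass / volume) / 1000.0                   # kg/m^3 -> g/cm^3

print(f"mean density ~ {density_g_cm3:.2f} g/cm^3")        # roughly 0.29 g/cm^3
```

Even a few Earth masses spread over a Neptune-sized volume gives a mean density well below that of water, in line with the sub-0.1 g/cm3 values reported for the most extreme super-puffs.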
Extrasolar giant planets
Because of the limited techniques currently available to detect exoplanets, many of those found to date have been of a size associated, in the Solar System, with giant planets. Because these large planets are inferred to share more in common with Jupiter than with the other giant planets, some have claimed that "jovian planet" is a more accurate term for them. Many of the exoplanets are much closer to their parent stars and hence much hotter than the giant planets in the Solar System, making it possible that some of those planets are a type not observed in the Solar System. Considering the relative abundances of the elements in the universe (approximately 98% hydrogen and helium) it would be surprising to find a predominantly rocky planet more massive than Jupiter. On the other hand, models of planetary-system formation have suggested that giant planets would be inhibited from forming as close to their stars as many of the extrasolar giant planets have been observed to orbit.
Atmospheres
The bands seen in the atmosphere of Jupiter are due to counter-circulating streams of material called zones and belts, encircling the planet parallel to its equator. The zones are the lighter bands, and are at higher altitudes in the atmosphere. They have an internal updraft and are high-pressure regions. The belts are the darker bands, are lower in the atmosphere, and have an internal downdraft. They are low-pressure regions. These structures are somewhat analogous to the high and low-pressure cells in Earth's atmosphere, but they have a very different structure—latitudinal bands that circle the entire planet, as opposed to small confined cells of pressure. This appears to be a result of the rapid rotation and underlying symmetry of the planet. There are no oceans or landmasses to cause local heating and the rotation speed is much higher than that of Earth.
There are smaller structures as well: spots of different sizes and colors. On Jupiter, the most noticeable of these features is the Great Red Spot, which has been present for at least 300 years. These structures are huge storms. Some such spots are thunderheads as well.
See also
References
Bibliography
SPACE.com: Q&A: The IAU's Proposed Planet Definition, 16 August 2006, 2:00 AM ET
BBC News: Q&A New planets proposal Wednesday, 16 August 2006, 13:36 GMT 14:36 UK
External links
Gas Giants in Science Fiction:
Episode "Giants" on The Science Channel TV show Planets
Types of planet
Solar System | Giant planet | [
"Astronomy"
] | 2,555 | [
"Outer space",
"Solar System"
] |
12,737 | https://en.wikipedia.org/wiki/Gunpowder | Gunpowder, also commonly known as black powder to distinguish it from modern smokeless powder, is the earliest known chemical explosive. It consists of a mixture of sulfur, charcoal (which is mostly carbon), and potassium nitrate (saltpeter). The sulfur and charcoal act as fuels while the saltpeter is an oxidizer. Gunpowder has been widely used as a propellant in firearms, artillery, rocketry, and pyrotechnics, including use as a blasting agent for explosives in quarrying, mining, building pipelines, tunnels, and roads.
Gunpowder is classified as a low explosive because of its relatively slow decomposition rate, low ignition temperature and consequently low brisance (breaking/shattering). Low explosives deflagrate (i.e., burn at subsonic speeds), whereas high explosives detonate, producing a supersonic shockwave. Ignition of gunpowder packed behind a projectile generates enough pressure to force the shot from the muzzle at high speed, but usually not enough force to rupture the gun barrel. It thus makes a good propellant but is less suitable for shattering rock or fortifications with its low-yield explosive power. Nonetheless, it was widely used to fill fused artillery shells (and used in mining and civil engineering projects) until the second half of the 19th century, when the first high explosives were put into use.
Gunpowder is one of the Four Great Inventions of China. Originally developed by Taoists for medicinal purposes, it was first used for warfare around AD 904. Smokeless powder has since supplanted it in weapons, while its relative inefficiency has led to its replacement in industrial applications by newer alternatives such as dynamite and ammonium nitrate/fuel oil.
Effect
Gunpowder is a low explosive: it does not detonate, but rather deflagrates (burns quickly). This is an advantage in a propellant device, where one does not desire a shock that would shatter the gun and potentially harm the operator; however, it is a drawback when an explosion is desired. In that case, the propellant (and most importantly, gases produced by its burning) must be confined. Since it contains its own oxidizer and additionally burns faster under pressure, its combustion is capable of bursting containers such as a shell, grenade, or improvised "pipe bomb" or "pressure cooker" casings to form shrapnel.
In quarrying, high explosives are generally preferred for shattering rock. However, because of its low brisance, gunpowder causes fewer fractures and results in more usable stone compared to other explosives, making it useful for blasting slate, which is fragile, or monumental stone such as granite and marble. Gunpowder is well suited for blank rounds, signal flares, burst charges, and rescue-line launches. It is also used in fireworks for lifting shells, in rockets as fuel, and in certain special effects.
Combustion converts less than half the mass of gunpowder to gas; most of it turns into particulate matter. Some of it is ejected, wasting propelling power, fouling the air, and generally being a nuisance (giving away a soldier's position, generating fog that hinders vision, etc.). Some of it ends up as a thick layer of soot inside the barrel, where it also is a nuisance for subsequent shots, and a cause of jamming an automatic weapon. Moreover, this residue is hygroscopic, and with the addition of moisture absorbed from the air forms a corrosive substance. The soot contains potassium oxide or sodium oxide that turns into potassium hydroxide, or sodium hydroxide, which corrodes wrought iron or steel gun barrels. Gunpowder arms therefore require thorough and regular cleaning to remove the residue.
Gunpowder loads can be used in modern firearms as long as they are not gas-operated. The most compatible modern guns are smoothbore-barreled shotguns that are long-recoil operated with chrome-plated essential parts such as barrels and bores. Such guns have minimal fouling and corrosion and are easier to clean.
History
China
The first confirmed reference to what can be considered gunpowder in China occurred in the 9th century AD during the Tang dynasty, first in a formula contained in the Taishang Shengzu Jindan Mijue (太上聖祖金丹秘訣) in 808, and then about 50 years later in a Taoist text known as the Zhenyuan miaodao yaolüe (真元妙道要略). The Taishang Shengzu Jindan Mijue mentions a formula composed of six parts sulfur to six parts saltpeter to one part birthwort herb. According to the Zhenyuan miaodao yaolüe, "Some have heated together sulfur, realgar and saltpeter with honey; smoke and flames result, so that their hands and faces have been burnt, and even the whole house where they were working burned down." Based on these Taoist texts, the invention of gunpowder by Chinese alchemists was likely an accidental byproduct from experiments seeking to create the elixir of life. This experimental medicine origin is reflected in its Chinese name huoyao (), which means "fire medicine". Saltpeter was known to the Chinese by the mid-1st century AD and was primarily produced in the provinces of Sichuan, Shanxi, and Shandong. There is strong evidence of the use of saltpeter and sulfur in various medicinal combinations. A Chinese alchemical text dated 492 noted saltpeter burnt with a purple flame, providing a practical and reliable means of distinguishing it from other inorganic salts, thus enabling alchemists to evaluate and compare purification techniques; the earliest Latin accounts of saltpeter purification are dated after 1200.
The earliest chemical formula for gunpowder appeared in the 11th century Song dynasty text, Wujing Zongyao (Complete Essentials from the Military Classics), written by Zeng Gongliang between 1040 and 1044. The Wujing Zongyao provides encyclopedic references to a variety of mixtures that included petrochemicals, as well as garlic and honey. A slow match for flame-throwing mechanisms using the siphon principle and for fireworks and rockets is mentioned. The mixture formulas in this book contain at most 50% saltpeter, which is not enough to create an explosion; they produce an incendiary instead. The Essentials was written by a Song dynasty court bureaucrat and there is little evidence that it had any immediate impact on warfare; there is no mention of its use in the chronicles of the wars against the Tanguts in the 11th century, and China was otherwise mostly at peace during this century. However, it had already been used for fire arrows since at least the 10th century. Its first recorded military application dates to 904, in the form of incendiary projectiles. In the following centuries various gunpowder weapons such as bombs, fire lances, and the gun appeared in China. Explosive weapons such as bombs have been discovered in a shipwreck off the shore of Japan dated from 1281, during the Mongol invasions of Japan.
By 1083 the Song court was producing hundreds of thousands of fire arrows for their garrisons. Bombs and the first proto-guns, known as "fire lances", became prominent during the 12th century and were used by the Song during the Jin-Song Wars. Fire lances were first recorded to have been used at the Siege of De'an in 1132 by Song forces against the Jin. In the early 13th century the Jin used iron-casing bombs. Projectiles were added to fire lances, and re-usable fire lance barrels were developed, first out of hardened paper, and then metal. By 1257 some fire lances were firing wads of bullets. In the late 13th century metal fire lances became 'eruptors', proto-cannons firing co-viative projectiles (mixed with the propellant, rather than seated over it with a wad), and by 1287 at the latest, had become true guns, the hand cannon.
Middle East
According to Iqtidar Alam Khan, it was invading Mongols who introduced gunpowder to the Islamic world. The Muslims acquired knowledge of gunpowder sometime between 1240 and 1280, by which point the Syrian Hasan al-Rammah had written recipes, instructions for the purification of saltpeter, and descriptions of gunpowder incendiaries. It is implied by al-Rammah's usage of "terms that suggested he derived his knowledge from Chinese sources" and his references to saltpeter as "Chinese snow", fireworks as "Chinese flowers", and rockets as "Chinese arrows" that knowledge of gunpowder arrived from China. However, because al-Rammah attributes his material to "his father and forefathers", al-Hassan argues that gunpowder became prevalent in Syria and Egypt by "the end of the twelfth century or the beginning of the thirteenth". In Persia saltpeter was known as "Chinese salt" (namak-i chīnī) or "salt from Chinese salt marshes".
Hasan al-Rammah included 107 gunpowder recipes in his text al-Furusiyyah wa al-Manasib al-Harbiyya (The Book of Military Horsemanship and Ingenious War Devices), 22 of which are for rockets. If one takes the median of 17 of these 22 compositions for rockets (75% nitrates, 9.06% sulfur, and 15.94% charcoal), it is nearly identical to the modern reported ideal recipe of 75% potassium nitrate, 10% sulfur, and 15% charcoal. The text also mentions fuses, incendiary bombs, naphtha pots, fire lances, and an illustration and description of the earliest torpedo. The torpedo was called the "egg which moves itself and burns". Two iron sheets were fastened together and tightened using felt. The flattened pear-shaped vessel was filled with gunpowder, metal filings, "good mixtures", two rods, and a large rocket for propulsion. Judging by the illustration, it was evidently supposed to glide across the water. Fire lances were used in battles between the Muslims and Mongols in 1299 and 1303.
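A small sketch of the comparison described above, using only the median and ideal percentages quoted in the text (the 17 individual medieval recipes are not reproduced here):

```python
# Compare the reported median of al-Rammah's rocket recipes with the commonly
# cited ideal gunpowder ratio. Values are the summary percentages from the text.
median_rammah = {"saltpeter": 75.00, "sulfur": 9.06, "charcoal": 15.94}
modern_ideal  = {"saltpeter": 75.00, "sulfur": 10.00, "charcoal": 15.00}

for component, medieval in median_rammah.items():
    ideal = modern_ideal[component]
    print(f"{component:>9}: {medieval:6.2f}% medieval vs {ideal:6.2f}% ideal "
          f"({medieval - ideal:+.2f} percentage points)")
```

The largest deviation is about one percentage point, which is what the text means by "nearly identical".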
Al-Hassan claims that in the Battle of Ain Jalut of 1260, the Mamluks used "the first cannon in history" against the Mongols, utilizing a formula with near-identical ideal composition ratios for explosive gunpowder. Other historians urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word for gunpowder, naft, that they used for an earlier incendiary, naphtha.
The earliest surviving documentary evidence for cannons in the Islamic world is from an Arabic manuscript dated to the early 14th century. The author's name is uncertain but may have been Shams al-Din Muhammad, who died in 1350. Dating from around 1320–1350, the illustrations show gunpowder weapons such as gunpowder arrows, bombs, fire tubes, and fire lances or proto-guns. The manuscript describes a type of gunpowder weapon called a midfa which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some consider this to be a cannon while others do not. The problem with identifying cannons in early 14th century Arabic texts is the term midfa, which appears from 1342 to 1352 but cannot be proven to be true hand-guns or bombards. Contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Needham believes that in its original form the term midfa refers to the tube or cylinder of a naphtha projector (flamethrower), then after the invention of gunpowder it meant the tube of fire lances, and eventually it applied to the cylinder of hand-guns and cannons.
According to Paul E. J. Hammer, the Mamluks certainly used cannons by 1342. According to J. Lavin, cannons were used by Moors at the siege of Algeciras in 1343. A metal cannon firing an iron ball was described by Shihab al-Din Abu al-Abbas al-Qalqashandi between 1365 and 1376.
The musket appeared in the Ottoman Empire by 1465. In 1598, Chinese writer Zhao Shizhen described Turkish muskets as being superior to European muskets. The Chinese military book Wu Pei Chih (1621) later described Turkish muskets that used a rack-and-pinion mechanism, which was not known to have been used in European or Chinese firearms at the time.
The state-controlled manufacture of gunpowder by the Ottoman Empire through early supply chains to obtain nitre, sulfur and high-quality charcoal from oaks in Anatolia contributed significantly to its expansion between the 15th and 18th century. It was not until later in the 19th century when the syndicalist production of Turkish gunpowder was greatly reduced, which coincided with the decline of its military might.
Europe
The earliest Western accounts of gunpowder appear in texts written by English philosopher Roger Bacon in 1267 called and Opus Tertium. The oldest written recipes in continental Europe were recorded under the name Marcus Graecus or Mark the Greek between 1280 and 1300 in the Liber Ignium, or Book of Fires.
Some sources mention possible gunpowder weapons being deployed by the Mongols against European forces at the Battle of Mohi in 1241. Professor Kenneth Warren Chase credits the Mongols for introducing into Europe gunpowder and its associated weaponry. However, there is no clear route of transmission, and while the Mongols are often pointed to as the likeliest vector, Timothy May points out that "there is no concrete evidence that the Mongols used gunpowder weapons on a regular basis outside of China." May also states, "however [, ...] the Mongols used the gunpowder weapon in their wars against the Jin, the Song and in their invasions of Japan."
Records show that, in England, gunpowder was being made in 1346 at the Tower of London; a powder house existed at the Tower in 1461, and in 1515 three King's gunpowder makers worked there. Gunpowder was also being made or stored at other royal castles, such as Portchester. The English Civil War (1642–1645) led to an expansion of the gunpowder industry, with the repeal of the Royal Patent in August 1641.
In late 14th century Europe, gunpowder was improved by corning, the practice of drying it into small clumps to improve combustion and consistency. During this time, European manufacturers also began regularly purifying saltpeter, using wood ashes containing potassium carbonate to precipitate calcium from their dung liquor, and using ox blood, alum, and slices of turnip to clarify the solution.
During the Renaissance, two European schools of pyrotechnic thought emerged, one in Italy and the other at Nuremberg, Germany. In Italy, Vannoccio Biringuccio, born in 1480, was a member of the guild Fraternita di Santa Barbara but broke with the tradition of secrecy by setting down everything he knew in a book titled De la pirotechnia, written in vernacular. It was published posthumously in 1540, with 9 editions over 138 years, and also reprinted by MIT Press in 1966.
By the mid-17th century fireworks were used for entertainment on an unprecedented scale in Europe, being popular even at resorts and public gardens. With the publication of Deutliche Anweisung zur Feuerwerkerey (1748), methods for creating fireworks were sufficiently well-known and well-described that "Firework making has become an exact science." In 1774 Louis XVI ascended to the throne of France at age 20. After he discovered that France was not self-sufficient in gunpowder, a Gunpowder Administration was established; to head it, the lawyer Antoine Lavoisier was appointed. Although from a bourgeois family, after his degree in law Lavoisier became wealthy from a company set up to collect taxes for the Crown; this allowed him to pursue experimental natural science as a hobby.
Without access to cheap saltpeter (controlled by the British), for hundreds of years France had relied on saltpetremen with royal warrants, the droit de fouille or "right to dig", to seize nitrous-containing soil and demolish walls of barnyards, without compensation to the owners. This caused farmers, the wealthy, or entire villages to bribe the petermen and the associated bureaucracy to leave their buildings alone and the saltpeter uncollected. Lavoisier instituted a crash program to increase saltpeter production, revised (and later eliminated) the droit de fouille, researched best refining and powder manufacturing methods, instituted management and record-keeping, and established pricing that encouraged private investment in works. Although saltpeter from new Prussian-style putrefaction works had not been produced yet (the process taking about 18 months), in only a year France had gunpowder to export. A chief beneficiary of this surplus was the American Revolution. By careful testing and adjusting the proportions and grinding time, powder from mills such as at Essonne outside Paris became the best in the world by 1788, and inexpensive.
Two British physicists, Andrew Noble and Frederick Abel, worked to improve the properties of gunpowder during the late 19th century. This formed the basis for the Noble-Abel gas equation for internal ballistics.
The introduction of smokeless powder in the late 19th century led to a contraction of the gunpowder industry. After the end of World War I, the majority of the British gunpowder manufacturers merged into a single company, "Explosives Trades limited", and a number of sites were closed down, including those in Ireland. This company became Nobel Industries Limited, and in 1926 became a founding member of Imperial Chemical Industries. The Home Office removed gunpowder from its list of Permitted Explosives. Shortly afterwards, on 31 December 1931, the former Curtis & Harvey's Glynneath gunpowder factory at Pontneddfechan in Wales closed down. The factory was demolished by fire in 1932. The last remaining gunpowder mill at the Royal Gunpowder Factory, Waltham Abbey was damaged by a German parachute mine in 1941 and it never reopened. This was followed by the closure and demolition of the gunpowder section at the Royal Ordnance Factory, ROF Chorley, at the end of World War II, and of ICI Nobel's Roslin gunpowder factory which closed in 1954. This left ICI Nobel's Ardeer site in Scotland, which included a gunpowder factory, as the only factory in Great Britain producing gunpowder. The gunpowder area of the Ardeer site closed in October 1976.
India
Gunpowder and gunpowder weapons were transmitted to India through the Mongol invasions of India. The Mongols were defeated by Alauddin Khalji of the Delhi Sultanate, and some of the Mongol soldiers remained in northern India after their conversion to Islam. It was written in the Tarikh-i Firishta (1606–1607) that Nasiruddin Mahmud the ruler of the Delhi Sultanate presented the envoy of the Mongol ruler Hulegu Khan with a dazzling pyrotechnics display upon his arrival in Delhi in 1258. Nasiruddin Mahmud tried to express his strength as a ruler and tried to ward off any Mongol attempt similar to the Siege of Baghdad (1258). Firearms known as top-o-tufak also existed in many Muslim kingdoms in India by as early as 1366. From then on the employment of gunpowder warfare in India was prevalent, with events such as the "Siege of Belgaum" in 1473 by Sultan Muhammad Shah Bahmani.
The shipwrecked Ottoman Admiral Seydi Ali Reis is known to have introduced the earliest type of matchlock weapons, which the Ottomans used against the Portuguese during the Siege of Diu (1531). After that, a diverse variety of firearms, large guns in particular, became visible in Tanjore, Dacca, Bijapur, and Murshidabad. Guns made of bronze were recovered from Calicut (1504), the former capital of the Zamorins.
The Mughal emperor Akbar mass-produced matchlocks for the Mughal Army. Akbar is personally known to have shot a leading Rajput commander during the Siege of Chittorgarh. The Mughals began to use bamboo rockets (mainly for signalling) and employ sappers: special units that undermined heavy stone fortifications to plant gunpowder charges.
The Mughal Emperor Shah Jahan is known to have introduced much more advanced matchlocks; their designs were a combination of Ottoman and Mughal designs. Shah Jahan also countered the British and other Europeans in his province of Gujarāt, which supplied Europe with saltpeter for use in gunpowder warfare during the 17th century. Bengal and Mālwa participated in saltpeter production. The Dutch, French, Portuguese, and English used Chhapra as a center of saltpeter refining.
Ever since the founding of the Sultanate of Mysore by Hyder Ali, French military officers were employed to train the Mysore Army. Hyder Ali and his son Tipu Sultan were the first to introduce modern cannons and muskets, their army was also the first in India to have official uniforms. During the Second Anglo-Mysore War Hyder Ali and his son Tipu Sultan unleashed the Mysorean rockets at their British opponents effectively defeating them on various occasions. The Mysorean rockets inspired the development of the Congreve rocket, which the British widely used during the Napoleonic Wars and the War of 1812.
Southeast Asia
Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. History of Yuan mentioned that the Mongols used cannons (Chinese: 炮, Pào) against Daha forces. Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty.
Even though the knowledge of making gunpowder-based weapons was known after the failed Mongol invasion of Java, and the predecessor of firearms, the pole gun (bedil tombak), is recorded as being used by Java in 1413, the knowledge of making "true" firearms came much later, after the middle of the 15th century. It was brought by the Islamic nations of West Asia, most probably the Arabs. The precise year of introduction is unknown, but it may be safely concluded to be no earlier than 1460. Before the arrival of the Portuguese in Southeast Asia, the natives already possessed primitive firearms, the Java arquebus. Portuguese influence to local weaponry after the capture of Malacca (1511) resulted in a new type of hybrid tradition matchlock firearm, the istinggar.
When the Portuguese came to the archipelago, they referred to the breech-loading swivel gun as berço, while the Spaniards called it verso. By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, with lengths between 3 and 6 m.
Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles's The History of Java (1817), the purest sulfur was supplied from a crater of a mountain near the straits of Bali.
Historiography
On the origins of gunpowder technology, historian Tonio Andrade remarked, "Scholars today overwhelmingly concur that the gun was invented in China." Gunpowder and the gun are widely believed by historians to have originated from China due to the large body of evidence that documents the evolution of gunpowder from a medicine to an incendiary and explosive, and the evolution of the gun from the fire lance to a metal gun, whereas similar records do not exist elsewhere. As Andrade explains, the large amount of variation in gunpowder recipes in China relative to Europe is "evidence of experimentation in China, where gunpowder was at first used as an incendiary and only later became an explosive and a propellant... in contrast, formulas in Europe diverged only very slightly from the ideal proportions for use as an explosive and a propellant, suggesting that gunpowder was introduced as a mature technology."
However, the history of gunpowder is not without controversy. A major problem confronting the study of early gunpowder history is ready access to sources close to the events described. Often the first records potentially describing use of gunpowder in warfare were written several centuries after the fact, and may well have been colored by the contemporary experiences of the chronicler. Translation difficulties have led to errors or loose interpretations bordering on artistic licence. Ambiguous language can make it difficult to distinguish gunpowder weapons from similar technologies that do not rely on gunpowder. A commonly cited example is a report of the Battle of Mohi in Eastern Europe that mentions a "long lance" sending forth "evil-smelling vapors and smoke", which has been variously interpreted by different historians as the "first-gas attack upon European soil" using gunpowder, "the first use of cannon in Europe", or merely a "toxic gas" with no evidence of gunpowder. It is difficult to accurately translate original Chinese alchemical texts, which tend to explain phenomena through metaphor, into modern scientific language with rigidly defined terminology in English. Early texts potentially mentioning gunpowder are sometimes marked by a linguistic process where semantic change occurred. For instance, the Arabic word naft transitioned from denoting naphtha to denoting gunpowder, and the Chinese word pào changed in meaning from trebuchet to a cannon. This has led to arguments on the exact origins of gunpowder based on etymological foundations. Science and technology historian Bert S. Hall makes the observation that, "It goes without saying, however, that historians bent on special pleading, or simply with axes of their own to grind, can find rich material in these terminological thickets."
Another major area of contention in modern studies of the history of gunpowder is regarding the transmission of gunpowder. While the literary and archaeological evidence supports a Chinese origin for gunpowder and guns, the manner in which gunpowder technology was transferred from China to the West is still under debate. It is unknown why the rapid spread of gunpowder technology across Eurasia took place over several decades whereas other technologies such as paper, the compass, and printing did not reach Europe until centuries after they were invented in China.
Components
Gunpowder is a granular mixture of:
a nitrate, typically potassium nitrate (KNO3), which supplies oxygen for the reaction;
charcoal, which provides carbon and other fuel for the reaction, simplified as carbon (C);
sulfur (S), which, while also serving as a fuel, lowers the temperature required to ignite the mixture, thereby increasing the rate of combustion.
Potassium nitrate is the most important ingredient in terms of both bulk and function because the combustion process releases oxygen from the potassium nitrate, promoting the rapid burning of the other ingredients. To reduce the likelihood of accidental ignition by static electricity, the granules of modern gunpowder are typically coated with graphite, which prevents the build-up of electrostatic charge.
Charcoal does not consist of pure carbon; rather, it consists of partially pyrolyzed cellulose, in which the wood is not completely decomposed. Carbon differs from ordinary charcoal. Whereas charcoal's autoignition temperature is relatively low, carbon's is much greater. Thus, a gunpowder composition containing pure carbon would burn similarly to a match head, at best.
The current standard composition for the gunpowder manufactured by pyrotechnicians was adopted as long ago as 1780. Proportions by weight are 75% potassium nitrate (known as saltpeter or saltpetre), 15% softwood charcoal, and 10% sulfur. These ratios have varied over the centuries and by country, and can be altered somewhat depending on the purpose of the powder. For instance, power grades of black powder, unsuitable for use in firearms but adequate for blasting rock in quarrying operations, are called blasting powder rather than gunpowder; they have standard proportions of 70% nitrate, 14% charcoal, and 16% sulfur. Blasting powder may be made with the cheaper sodium nitrate substituted for potassium nitrate, and proportions may be as low as 40% nitrate, 30% charcoal, and 30% sulfur. In 1857, Lammot du Pont solved the main problem of using cheaper sodium nitrate formulations when he patented DuPont "B" blasting powder. After manufacturing grains from press-cake in the usual way, his process tumbled the powder with graphite dust for 12 hours. This formed a graphite coating on each grain that reduced its ability to absorb moisture.
Neither the use of graphite nor sodium nitrate was new. Glossing gunpowder corns with graphite was already an accepted technique in 1839, and sodium nitrate-based blasting powder had been made in Peru for many years using the sodium nitrate mined at Tarapacá (now in Chile). Also, in 1846, two plants were built in south-west England to make blasting powder using this sodium nitrate. The idea may well have been brought from Peru by Cornish miners returning home after completing their contracts. Another suggestion is that it was William Lobb, the plant collector, who recognised the possibilities of sodium nitrate during his travels in South America. Lammot du Pont would have known about the use of graphite and probably also knew about the plants in south-west England. In his patent he was careful to state that his claim was for the combination of graphite with sodium nitrate-based powder, rather than for either of the two individual technologies.
French war powder in 1879 used the ratio 75% saltpeter, 12.5% charcoal, 12.5% sulfur. English war powder in 1879 used the ratio 75% saltpeter, 15% charcoal, 10% sulfur. The British Congreve rockets used 62.4% saltpeter, 23.2% charcoal and 14.4% sulfur, but the British Mark VII gunpowder was changed to 65% saltpeter, 20% charcoal and 15% sulfur. The explanation for the wide variety in formulation relates to usage. Powder used for rocketry can use a slower burn rate since it accelerates the projectile for a much longer time—whereas powders for weapons such as flintlocks, cap-locks, or matchlocks need a higher burn rate to accelerate the projectile in a much shorter distance. Cannons usually used lower burn-rate powders, because most would burst with higher burn-rate powders.
Other compositions
Besides black powder, there are other historically important types of gunpowder. "Brown gunpowder" is cited as composed of 79% nitre, 3% sulfur, and 18% charcoal per 100 parts of dry powder, with about 2% moisture. Prismatic Brown Powder is a large-grained product the Rottweil Company introduced in 1884 in Germany, which was adopted by the British Royal Navy shortly thereafter. The French navy adopted a fine, 3.1-millimeter, non-prismatic grained product called Slow Burning Cocoa (SBC) or "cocoa powder". These brown powders reduced burning rate even further by using as little as 2 percent sulfur and using charcoal made from rye straw that had not been completely charred, hence the brown color.
Lesmok powder was a product developed by DuPont in 1911, one of several semi-smokeless products in the industry containing a mixture of black and nitrocellulose powder. It was sold to Winchester and others primarily for .22 and .32 small calibers. Its advantage was that it was believed at the time to be less corrosive than smokeless powders then in use. It was not understood in the U.S. until the 1920s that the actual source of corrosion was the potassium chloride residue from potassium chlorate sensitized primers. The bulkier black powder fouling better disperses primer residue. Failure to mitigate primer corrosion by dispersion caused the false impression that nitrocellulose-based powder caused corrosion. Lesmok had some of the bulk of black powder for dispersing primer residue, but somewhat less total bulk than straight black powder, thus requiring less frequent bore cleaning. It was last sold by Winchester in 1947.
Sulfur-free powders
The development of smokeless powders, such as cordite, in the late 19th century created the need for a spark-sensitive priming charge, such as gunpowder. However, the sulfur content of traditional gunpowders caused corrosion problems with Cordite Mk I and this led to the introduction of a range of sulfur-free gunpowders, of varying grain sizes. They typically contain 70.5 parts of saltpeter and 29.5 parts of charcoal. Like black powder, they were produced in different grain sizes. In the United Kingdom, the finest grain was known as sulfur-free mealed powder (SMP). Coarser grains were numbered as sulfur-free gunpowder (SFG n): 'SFG 12', 'SFG 20', 'SFG 40' and 'SFG 90', for example, where the number represents the smallest BSS sieve mesh size that retained no grains.
Sulfur's main role in gunpowder is to decrease the ignition temperature. A sample reaction for sulfur-free gunpowder would be:
6 KNO3 + C7H4O → 3 K2CO3 + 4 CO2 + 2 H2O + 3 N2
Smokeless powders
The term black powder was coined in the late 19th century, primarily in the United States, to distinguish prior gunpowder formulations from the new smokeless powders and semi-smokeless powders. Semi-smokeless powders featured bulk volume properties that approximated black powder, but had significantly reduced amounts of smoke and combustion products. Smokeless powder has different burning properties (pressure vs. time) and can generate higher pressures and work per gram. This can rupture older weapons designed for black powder. Smokeless powders ranged in color from brownish tan to yellow to white. Most of the bulk semi-smokeless powders ceased to be manufactured in the 1920s.
Granularity
Serpentine
The original dry-compounded powder used in 15th-century Europe was known as "Serpentine", either a reference to Satan or to a common artillery piece that used it. The ingredients were ground together with a mortar and pestle, perhaps for 24 hours, resulting in a fine flour. Vibration during transportation could cause the components to separate again, requiring remixing in the field. Also if the quality of the saltpeter was low (for instance if it was contaminated with highly hygroscopic calcium nitrate), or if the powder was simply old (due to the mildly hygroscopic nature of potassium nitrate), in humid weather it would need to be re-dried. The dust from "repairing" powder in the field was a major hazard.
Loading cannons or bombards before the powder-making advances of the Renaissance was a skilled art. Fine powder loaded haphazardly or too tightly would burn incompletely or too slowly. Typically, the breech-loading powder chamber in the rear of the piece was filled only about half full, the serpentine powder neither too compressed nor too loose, a wooden bung pounded in to seal the chamber from the barrel when assembled, and the projectile placed on. A carefully determined empty space was necessary for the charge to burn effectively. When the cannon was fired through the touchhole, turbulence from the initial surface combustion caused the rest of the powder to be rapidly exposed to the flame.
The advent of much more powerful and easy to use corned powder changed this procedure, but serpentine was used with older guns into the 17th century.
Corning
For propellants to oxidize and burn rapidly and effectively, the combustible ingredients must be reduced to the smallest possible particle sizes, and be as thoroughly mixed as possible. Once mixed, however, for better results in a gun, makers discovered that the final product should be in the form of individual dense grains that spread the fire quickly from grain to grain, much as straw or twigs catch fire more quickly than a pile of sawdust.
In late 14th century Europe and China, gunpowder was improved by wet grinding; liquids such as distilled spirits were added during the grinding-together of the ingredients and the moist paste dried afterwards. The principle of wet mixing to prevent the separation of dry ingredients, invented for gunpowder, is used today in the pharmaceutical industry. It was discovered that if the paste was rolled into balls before drying the resulting gunpowder absorbed less water from the air during storage and traveled better. The balls were then crushed in a mortar by the gunner immediately before use, with the old problem of uneven particle size and packing causing unpredictable results. If the right size particles were chosen, however, the result was a great improvement in power. Forming the damp paste into corn-sized clumps by hand or with the use of a sieve instead of larger balls produced a product after drying that loaded much better, as each tiny piece provided its own surrounding air space that allowed much more rapid combustion than a fine powder. This "corned" gunpowder was from 30% to 300% more powerful. One cited example compares the weight of serpentine needed to shoot a given ball with the much smaller weight of corned powder required for the same shot.
Because the dry powdered ingredients must be mixed and bonded together for extrusion and cut into grains to maintain the blend, size reduction and mixing is done while the ingredients are damp, usually with water. After 1800, instead of forming grains by hand or with sieves, the damp mill-cake was pressed in molds to increase its density and extract the liquid, forming press-cake. The pressing took varying amounts of time, depending on conditions such as atmospheric humidity. The hard, dense product was broken again into tiny pieces, which were separated with sieves to produce a uniform product for each purpose: coarse powders for cannons, finer grained powders for muskets, and the finest for small hand guns and priming. Inappropriately fine-grained powder often caused cannons to burst before the projectile could move down the barrel, due to the high initial spike in pressure. Mammoth powder with large grains, made for Rodman's 15-inch cannon, reduced the pressure to only 20 percent as high as ordinary cannon powder would have produced.
In the mid-19th century, measurements were made determining that the burning rate within a grain of black powder (or a tightly packed mass) is about 6 cm/s (0.20 feet/s), while the rate of ignition propagation from grain to grain is around 9 m/s (30 feet/s), over two orders of magnitude faster.
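A quick check of the "over two orders of magnitude" comparison, using the two rates quoted above:

```latex
\frac{9\ \mathrm{m/s}}{0.06\ \mathrm{m/s}} = 150 > 10^{2}
```

so grain-to-grain propagation is roughly 150 times faster than combustion within a single grain.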
Modern types
Modern corning first compresses the fine black powder meal into blocks with a fixed density (1.7 g/cm3). In the United States, gunpowder grains were designated F (for fine) or C (for coarse); grain diameter decreased with a larger number of Fs and increased with a larger number of Cs. Even larger grains were produced for the largest artillery bores. The standard DuPont Mammoth powder developed by Thomas Rodman and Lammot du Pont for use during the American Civil War had large grains with edges rounded in a glazing barrel. Other versions had grains the size of golf and tennis balls for use in Rodman guns. In 1875 DuPont introduced Hexagonal powder for large artillery, which was pressed using shaped plates with a small center core, like a wagon wheel nut; the center hole widened as the grain burned. By 1882 German makers also produced hexagonal grained powders of a similar size for artillery.
By the late 19th century manufacturing focused on standard grades of black powder from Fg used in large bore rifles and shotguns, through FFg (medium and small-bore arms such as muskets and fusils), FFFg (small-bore rifles and pistols), and FFFFg (extreme small bore, short pistols and most commonly for priming flintlocks). A coarser grade for use in military artillery blanks was designated A-1. These grades were sorted on a system of screens with oversize retained on a mesh of 6 wires per inch, A-1 retained on 10 wires per inch, Fg retained on 14, FFg on 24, FFFg on 46, and FFFFg on 60. Fines designated FFFFFg were usually reprocessed to minimize explosive dust hazards. In the United Kingdom, the main service gunpowders were classified RFG (rifle grained fine) with diameter of one or two millimeters and RLG (rifle grained large) for grain diameters between two and six millimeters. Gunpowder grains can alternatively be categorized by mesh size, given as the smallest BSS sieve mesh size that retains no grains. Recognized grain sizes are Gunpowder G 7, G 20, G 40, and G 90.
Owing to the large market of antique and replica black-powder firearms in the US, modern black powder substitutes like Pyrodex, Triple Seven and Black Mag3 pellets have been developed since the 1970s. These products, which should not be confused with smokeless powders, aim to produce less fouling (solid residue), while maintaining the traditional volumetric measurement system for charges. Claims that these products are less corrosive have, however, been controversial. New cleaning products for black-powder guns have also been developed for this market.
Chemistry
A simple, commonly cited, chemical equation for the combustion of gunpowder is:
2 KNO3 + S + 3 C → K2S + N2 + 3 CO2.
A balanced, but still simplified, equation is:
10 KNO3 + 3 S + 8 C → 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2.
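Both equations above can be checked for atom balance mechanically. The following is a minimal sketch; the element counts are entered by hand from the formulas as written, and the script simply confirms that each element appears in equal amounts on both sides.

```python
# Check the two simplified combustion equations above for atom balance.
from collections import Counter

def total_atoms(side):
    """side: list of (coefficient, formula_dict) pairs -> total element counts."""
    totals = Counter()
    for coefficient, formula in side:
        for element, count in formula.items():
            totals[element] += coefficient * count
    return totals

KNO3  = {"K": 1, "N": 1, "O": 3}
S     = {"S": 1}
C     = {"C": 1}
K2S   = {"K": 2, "S": 1}
N2    = {"N": 2}
CO2   = {"C": 1, "O": 2}
K2CO3 = {"K": 2, "C": 1, "O": 3}
K2SO4 = {"K": 2, "S": 1, "O": 4}

# 2 KNO3 + S + 3 C -> K2S + N2 + 3 CO2
simple_lhs = total_atoms([(2, KNO3), (1, S), (3, C)])
simple_rhs = total_atoms([(1, K2S), (1, N2), (3, CO2)])

# 10 KNO3 + 3 S + 8 C -> 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2
full_lhs = total_atoms([(10, KNO3), (3, S), (8, C)])
full_rhs = total_atoms([(2, K2CO3), (3, K2SO4), (6, CO2), (5, N2)])

print("simple equation balanced:", simple_lhs == simple_rhs)   # True
print("full equation balanced:  ", full_lhs == full_rhs)       # True
```

Both checks print True, confirming that the coefficients given above conserve every element.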
The exact percentages of ingredients varied greatly through the medieval period as the recipes were developed by trial and error, and needed to be updated for changing military technology.
Gunpowder does not burn as a single reaction, so the byproducts are not easily predicted. One study showed that it produced (in order of descending quantities) 55.91% solid products (potassium carbonate, potassium sulfate, potassium sulfide, sulfur, potassium nitrate, potassium thiocyanate, carbon, and ammonium carbonate), 42.98% gaseous products (carbon dioxide, nitrogen, carbon monoxide, hydrogen sulfide, hydrogen, and methane), and 1.11% water.
Gunpowder made with less-expensive and more plentiful sodium nitrate instead of potassium nitrate (in appropriate proportions) works just as well. Gunpowder releases 3 megajoules per kilogram and contains its own oxidant. This is less than TNT (4.7 megajoules per kilogram), or gasoline (47.2 megajoules per kilogram in combustion, but gasoline requires an oxidant; for instance, an optimized gasoline and O2 mixture releases 10.4 megajoules per kilogram, taking into account the mass of the oxygen).
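The oxidizer-mass adjustment mentioned above can be reproduced approximately. The sketch below assumes octane (C8H18) as a stand-in for gasoline and uses the 47.2 MJ/kg figure from the text together with stoichiometric oxygen only; the exact result depends on the fuel blend assumed.

```python
# Rough estimate of gasoline's energy density once the oxidizer mass is included,
# assuming octane (C8H18) as a stand-in for gasoline.
M_C, M_H, M_O = 12.011, 1.008, 15.999          # atomic masses, g/mol

fuel_molar_mass = 8 * M_C + 18 * M_H           # C8H18, about 114 g/mol
o2_mass_needed = 12.5 * 2 * M_O                # C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O

o2_per_kg_fuel = o2_mass_needed / fuel_molar_mass    # ~3.5 kg O2 per kg fuel
energy_per_kg_fuel = 47.2                            # MJ/kg, fuel mass only
energy_per_kg_mixture = energy_per_kg_fuel / (1 + o2_per_kg_fuel)

print(f"O2 needed per kg fuel:        {o2_per_kg_fuel:.2f} kg")
print(f"energy per kg fuel + oxygen:  {energy_per_kg_mixture:.1f} MJ/kg")  # ~10.5
```

The result, roughly 10.5 MJ per kilogram of fuel plus oxygen, is close to the 10.4 MJ/kg quoted above, whereas gunpowder's 3 MJ/kg already includes its own oxidant.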
Gunpowder also has a low energy density compared to modern "smokeless" powders, and thus to achieve high energy loadings, large amounts are needed with heavy projectiles.
Production
For the most powerful black powder, meal powder, a wood charcoal is used. The best wood for the purpose is Pacific willow, but others such as alder or buckthorn can be used. In Great Britain between the 15th and 19th centuries charcoal from alder buckthorn was greatly prized for gunpowder manufacture; cottonwood was used by the American Confederate States. The ingredients are reduced in particle size and mixed as intimately as possible. Originally, this was with a mortar-and-pestle or a similarly operating stamping-mill, using copper, bronze or other non-sparking materials, until supplanted by the rotating ball mill principle with non-sparking bronze or lead. Historically, a marble or limestone edge runner mill, running on a limestone bed, was used in Great Britain; however, by the mid 19th century this had changed to either an iron-shod stone wheel or a cast iron wheel running on an iron bed. The mix was dampened with alcohol or water during grinding to prevent accidental ignition. This also helps the extremely soluble saltpeter to mix into the microscopic pores of the very high surface-area charcoal.
Around the late 14th century, European powdermakers first began adding liquid during grinding to improve mixing, reduce dust, and with it the risk of explosion. The powder-makers would then shape the resulting paste of dampened gunpowder, known as mill cake, into corns, or grains, to dry. Not only did corned powder keep better because of its reduced surface area, gunners also found that it was more powerful and easier to load into guns. Before long, powder-makers standardized the process by forcing mill cake through sieves instead of corning powder by hand.
The improvement was based on reducing the surface area of a higher density composition. At the beginning of the 19th century, makers increased density further by static pressing. They shoveled damp mill cake into a two-foot square box, placed this beneath a screw press and reduced it to half its volume. "Press cake" had the hardness of slate. They broke the dried slabs with hammers or rollers, and sorted the granules with sieves into different grades. In the United States, Eleuthere Irenee du Pont, who had learned the trade from Lavoisier, tumbled the dried grains in rotating barrels to round the edges and increase durability during shipping and handling. (Sharp grains rounded off in transport, producing fine "meal dust" that changed the burning properties.)
Another advance was the manufacture of kiln charcoal by distilling wood in heated iron retorts instead of burning it in earthen pits. Controlling the temperature influenced the power and consistency of the finished gunpowder. In 1863, in response to high prices for Indian saltpeter, DuPont chemists developed a process using potash or mined potassium chloride to convert plentiful Chilean sodium nitrate to potassium nitrate.
The following year (1864) the Gatebeck Low Gunpowder Works in Cumbria (Great Britain) started a plant to manufacture potassium nitrate by essentially the same chemical process. This is nowadays called the 'Wakefield Process', after the owners of the company. It would have used potassium chloride from the Staßfurt mines, near Magdeburg, Germany, which had recently become available in industrial quantities.
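The conversion named in both of these processes amounts to a simple double-displacement (metathesis) reaction; the equation below is a plausible summary inferred from the reagents mentioned above, not a quotation from this article:

    NaNO3 + KCl → KNO3 + NaCl

Because potassium nitrate is much less soluble in cold water than the other three salts, cooling the mixed brine allows the KNO3 to crystallize out while the sodium chloride remains in solution.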
During the 18th century, gunpowder factories became increasingly dependent on mechanical energy. Despite mechanization, production difficulties related to humidity control, especially during the pressing, were still present in the late 19th century. A paper from 1885 laments that "Gunpowder is such a nervous and sensitive spirit, that in almost every process of manufacture it changes under our hands as the weather changes." Pressing times to the desired density could vary by a factor of three depending on the atmospheric humidity.
Legal status
The United Nations Model Regulations on the Transportation of Dangerous Goods and national transportation authorities, such as United States Department of Transportation, have classified gunpowder (black powder) as a Group A: Primary explosive substance for shipment because it ignites so easily. Complete manufactured devices containing black powder are usually classified as Group D: Secondary detonating substance, or black powder, or article containing secondary detonating substance, such as firework, class D model rocket engine, etc., for shipment because they are harder to ignite than loose powder. As explosives, they all fall into the category of Class 1.
Other uses
Besides its use as a propellant in firearms and artillery, black powder's other main use has been as a blasting powder in quarrying, mining, and road construction (including railroad construction). During the 19th century, outside of war emergencies such as the Crimean War or the American Civil War, more black powder was consumed in these industrial applications than in firearms and artillery. Dynamite gradually replaced it for those uses. Today, industrial explosives for such uses are still a huge market, but most of the market is in newer explosives rather than black powder.
Beginning in the 1930s, gunpowder or smokeless powder was used in rivet guns, stun guns for animals, cable splicers and other industrial construction tools. The "stud gun", a powder-actuated tool, drove nails or screws into solid concrete, a function not possible with hydraulic tools, and today is still an important part of various industries, but the cartridges usually use smokeless powders. Industrial shotguns have been used to eliminate persistent material rings in operating rotary kilns (such as those for cement, lime, phosphate, etc.) and clinker in operating furnaces, and commercial tools make the method more reliable.
Gunpowder has occasionally been employed for other purposes besides weapons, mining, fireworks and construction:
After the Battle of Aspern-Essling (1809), Dominique-Jean Larrey, the surgeon of the Napoleonic Army, lacking salt, seasoned a horse meat bouillon for the wounded under his care with gunpowder. It was also used for sterilization in ships when there was no alcohol.
British sailors used gunpowder to create tattoos when ink wasn't available, by pricking the skin and rubbing the powder into the wound in a method known as traumatic tattooing.
Christiaan Huygens experimented with gunpowder in 1673 in an early attempt to build a gunpowder engine, but he did not succeed. Modern attempts to recreate his invention were similarly unsuccessful.
Near London in 1853, Captain Shrapnel demonstrated a mineral processing use of black powder in a method for crushing gold-bearing ores by firing them from a cannon into an iron chamber, and "much satisfaction was expressed by all present". He hoped it would be useful on the goldfields of California and Australia. Nothing came of the invention, as continuously operating crushing machines that achieved more reliable comminution were already coming into use.
Starting in 1967, Los Angeles-based artist Ed Ruscha began using gunpowder as an artistic medium for a series of works on paper.
Gunpowder had originally been produced for medicinal purposes. It was eaten, in hopes of curing digestive ailments; inhaled, for respiratory disorders; and, as mentioned, rubbed onto skin disorders such as rashes or burns.
See also
Ballistics
Berthold Schwarz
Black powder rocket motor
Black powder substitute
Bulk loaded liquid propellants
Faversham explosives industry
Gunpowder magazine
Gunpowder Plot
Gunpowder warfare
Technology of the Song dynasty
Footnotes
Notes
References
Hadden, R. Lee. 2005. "Confederate Boys and Peter Monkeys." Armchair General. January 2005. Adapted from a talk given to the Geological Society of America on 25 March 2004.
Schmidtchen, Volker (1977a), "Riesengeschütze des 15. Jahrhunderts. Technische Höchstleistungen ihrer Zeit", Technikgeschichte 44 (2): 153–73 (153–57)
Schmidtchen, Volker (1977b), "Riesengeschütze des 15. Jahrhunderts. Technische Höchstleistungen ihrer Zeit", Technikgeschichte 44 (3): 213–37 (226–28).
External links
Gun and Gunpowder
Cannons and Gunpowder
Oare Gunpowder Works, Kent, UK
Royal Gunpowder Mills
The DuPont Company on the Brandywine A digital exhibit produced by the Hagley Library that covers the founding and early history of the DuPont Company powder yards in Delaware
Video Demonstration of the Medieval Siege Society's Guns, Including showing ignition of gunpowder
Black Powder Recipes
Chinese inventions
Explosives
Firearm propellants
Pyrotechnic compositions
Rocket propellants
Solid fuels | Gunpowder | [
"Chemistry"
] | 10,711 | [
"Pyrotechnic compositions",
"Explosives",
"Explosions"
] |
12,772 | https://en.wikipedia.org/wiki/Gas%20mask | A gas mask is a piece of personal protective equipment used to protect the wearer from inhaling airborne pollutants and toxic gases. The mask forms a sealed cover over the nose and mouth, but may also cover the eyes and other vulnerable soft tissues of the face. Most gas masks are also respirators, though the word gas mask is often used to refer to military equipment (such as a field protective mask), the scope used in this article. Gas masks only protect the user from ingesting or inhaling chemical agents, as well as preventing contact with the user's eyes (many chemical agents affect through eye contact). Most combined gas mask filters will last around 8 hours in a biological or chemical situation. Filters against specific chemical agents can last up to 20 hours.
Airborne toxic materials may be gaseous (for example, chlorine or mustard gas), or particulates (such as biological agents). Many filters provide protection from both types.
The first gas masks mostly used circular lenses made of glass, mica or cellulose acetate to allow vision. Glass and mica were quite brittle and needed frequent replacement. The later Triplex lens style (a cellulose acetate lens sandwiched between glass ones) became more popular, and alongside plain cellulose acetate they became the standard into the 1930s. Panoramic lenses were not popular until the 1930s, but there are some examples of those being used even during the war (Austro-Hungarian 15M). Later, stronger polycarbonate came into use.
Some masks have one or two compact air filter containers screwed onto inlets, while others have a large air filtration container connected to the gas mask via a hose that is sometimes confused with an air-supplied respirator in which an alternate supply of fresh air (oxygen tanks) is delivered.
History and development
Early breathing devices
According to Popular Mechanics, "The common sponge was used in ancient Greece as a gas mask..." In 1785, Jean-François Pilâtre de Rozier invented a respirator.
Primitive respirator examples were used by miners and introduced by Alexander von Humboldt in 1799, when he worked as a mining engineer in Prussia. The forerunner to the modern gas mask was invented in 1847 by Lewis P. Haslett, a device that contained elements that allowed breathing through a nose and mouthpiece, inhalation of air through a bulb-shaped filter, and a vent to exhale air back into the atmosphere. First Facts states that a "gas mask resembling the modern type" was patented by Lewis Phectic Haslett of Louisville, Kentucky, who received a patent on June 12, 1849. U.S. patent #6,529 issued to Haslett, described the first "Inhaler or Lung Protector" that filtered dust from the air.
Early versions were constructed by the Scottish chemist John Stenhouse in 1854 and the physicist John Tyndall in the 1870s. Another early design was the "Safety Hood and Smoke Protector" invented by Garrett Morgan in 1912, and patented in 1914. It was a simple device consisting of a cotton hood with two hoses which hung down to the floor, allowing the wearer to breathe the safer air found there. In addition, moist sponges were inserted at the end of the hoses in order to better filter the air.
World War I
The First World War brought about the first need for mass-produced gas masks on both sides because of extensive use of chemical weapons. The German army successfully used poison gas for the first time against Allied troops at the Second Battle of Ypres, Belgium on April 22, 1915. An immediate response was cotton wool wrapped in muslin, issued to the troops by May 1. This was followed by the Black Veil Respirator, invented by John Scott Haldane, which was a cotton pad soaked in an absorbent solution which was secured over the mouth using black cotton veiling.
Seeking to improve on the Black Veil respirator, Cluny Macpherson created a mask made of chemical-absorbing fabric which fitted over the entire head: a canvas hood treated with chlorine-absorbing chemicals, and fitted with a transparent mica eyepiece. Macpherson presented his idea to the British War Office Anti-Gas Department on May 10, 1915; prototypes were developed soon after. The design was adopted by the British Army and introduced as the British Smoke Hood in June 1915; Macpherson was appointed to the War Office Committee for Protection against Poisonous Gases. More elaborate sorbent compounds were added later to further iterations of his helmet (PH helmet), to defeat other respiratory poison gases used such as phosgene, diphosgene and chloropicrin. In summer and autumn 1915, Edward Harrison, Bertram Lambert and John Sadd developed the Large Box Respirator. This canister gas mask had a tin can containing the absorbent materials, connected to the facepiece by a hose, and began to be issued in February 1916. A compact version, the Small Box Respirator, was made a universal issue from August 1916.
In the first gas masks of World War I, it was initially found that wood charcoal was a good absorbent of poison gases. Around 1918, it was found that charcoals made from the shells and seeds of various fruits and nuts such as coconuts, chestnuts, horse-chestnuts, and peach stones performed much better than wood charcoal. These waste materials were collected from the public in recycling programs to assist the war effort.
The first effective filtering activated charcoal gas mask in the world was invented in 1915 by Russian chemist Nikolay Zelinsky.
Also in World War I, since dogs were frequently used on the front lines, a special type of gas mask was developed that dogs were trained to wear. Other gas masks were developed during World War I and the time following for horses in the various mounted units that operated near the front lines. In America, thousands of gas masks were produced for American as well as Allied troops. Mine Safety Appliances was a chief producer. This mask was later used widely in industry.
World War II
The British Respirator, Anti-Gas (Light) was developed in 1943 by the British. It was made of plastic and rubber-like material that greatly reduced the weight and bulk compared to World War I gas masks, and fitted the user's face more snugly and comfortably. The main improvement was replacing the separate filter canister connected with a hose by an easily replaceable filter canister screwed on the side of the gas mask. Also, it had replaceable plastic lenses.
Modern mask
Gas mask development since has mirrored the development of chemical agents in warfare, filling the need to protect against ever more deadly threats, biological weapons, and radioactive dust in the nuclear era. However, for agents that cause harm through contact or penetration of the skin, such as blister agent or nerve agent, a gas mask alone is not sufficient protection, and full protective clothing must be worn in addition to protect from contact with the atmosphere. For reasons of civil defence and personal protection, individuals often buy gas masks since they believe that they protect against the harmful effects of an attack with nuclear, biological, or chemical (NBC) agents, which is only partially true, as gas masks protect only against respiratory absorption. Most military gas masks are designed to be capable of protecting against all NBC agents, but they can have filter canisters proof against those agents (heavier) or only against riot control agents and smoke (lighter and often used for training purposes). There are lightweight masks solely for protection against riot-control agents and not for NBC situations.
Although thorough training and the availability of gas masks and other protective equipment can nullify the casualty-causing effects of an attack by chemical agents, troops who are forced to operate in full protective gear are less efficient in completing tasks, tire easily, and may be affected psychologically by the threat of attack by those weapons. During the Cold War, it was seen as inevitable that there would be a constant NBC threat on the battlefield and so troops needed protection in which they could remain fully functional; thus, protective gear and especially gas masks have evolved to incorporate innovations in terms of increasing user comfort and compatibility with other equipment (from drinking devices to artificial respiration tubes, to communications systems etc.).
During the Iran–Iraq War (1980–88), Iraq developed its chemical weapons program with the help of European countries such as Germany and France and used them in a large scale against Iranians and Iraqi Kurds. Iran was unprepared for chemical warfare. In 1984, Iran received gas masks from the Republic of Korea and East Germany, but the Korean masks were not suited for the faces of non-East Asian people, the filter lasted for only 15 minutes, and the 5,000 masks bought from East Germany proved to be not gas masks but spray-painting goggles. As late as 1986, Iranian diplomats still travelled in Europe to buy active charcoal and models of filters to produce defensive gear domestically. In April 1988, Iran started domestic production of gas masks by the Iran Yasa factories.
Principles of construction
Absorption is the process of being drawn into a (usually larger) body or substrate, and adsorption is the process of deposition upon a surface. This can be used to remove both particulate and gaseous hazards. Although some form of reaction may take place, it is not necessary; the method may work by attractive charges. For example, if the target particles are positively charged, a negatively charged substrate may be used. Examples of substrates include activated carbon, and zeolites. This effect can be very simple and highly effective, for example using a damp cloth to cover the mouth and nose while escaping a fire. While this method can be effective at trapping particulates produced by combustion, it does not filter out harmful gases which may be toxic or which displace the oxygen required for survival.
Safety of old gas masks
Gas masks have a useful lifespan limited by the absorbent capacity of the filter. Filters cease to provide protection when saturated with hazardous chemicals, and degrade over time even if sealed. Most gas masks have sealing caps over the air intake and are stored in vacuum-sealed bags to prevent the filter from degrading due to exposure to humidity and pollutants in normal air. Unused gas mask filters from World War II may not protect the wearer at all, and could be harmful if worn due to long-term changes in the chemical composition of the filter.
Some World War II and Soviet Cold War gas mask filters contained chrysotile asbestos or crocidolite asbestos, which was not known to be harmful at the time. It is not reliably known for how long the materials were used in filters.
Typically, masks using 40 mm connections are a more recent design. Rubber degrades with time, so boxed unused "modern type" masks can be cracked and leak. The US C2 canister (black) contains hexavalent chromium; studies by the U.S. Army Chemical Corps found that the level in the filter was acceptable, but suggest caution when using, as it is a carcinogen.
Modern filter classification
The filter is selected according to the toxic compound. Each filter type protects against a particular hazard and is color-coded:
Particle filters are often included, because in many cases the hazardous materials are in the form of mist, which can be captured by the particle filter before entering the chemical adsorber. In Europe and jurisdictions with similar rules such as Russia and Australia, filter types are given suffix numbers to indicate their capacity. For non-particle hazards, the level "1" is assumed and a number "2" is used to indicate a better level. For particles (P), three levels are always given with the number. In the US, only the particle part is further classified by NIOSH air filtration ratings.
A filter type that can protect against multiple hazards is notated with the European symbols concatenated with each other. Examples include ABEK, ABEK-P3, and ABEK-HgP3. A2B2E2K2-P3 is the highest rating of filter available. An entirely different "multi/CBRN" filter class with an olive color is used in the US.
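As a rough illustration of how such designations are read, the sketch below splits a European-style filter code into its letter classes and capacity digits. The letter-to-hazard mapping is a simplified, assumed summary of the scheme described above and should not be treated as an authoritative table.

    import re

    # Assumed, simplified meanings of common European filter type letters.
    HAZARDS = {
        "A": "organic gases and vapours",
        "B": "inorganic gases",
        "E": "acid gases such as sulfur dioxide",
        "K": "ammonia and derivatives",
        "Hg": "mercury vapour",
        "P": "particles",
    }

    def parse_filter_code(code):
        """Split a designation such as 'A2B2E2K2-P3' into (letter, hazard, capacity) tuples."""
        parts = []
        for letter, digit in re.findall(r"(Hg|[ABEKP])(\d?)", code):
            capacity = int(digit) if digit else 1   # level "1" is assumed when no digit is given
            parts.append((letter, HAZARDS.get(letter, "unknown"), capacity))
        return parts

    for entry in parse_filter_code("A2B2E2K2-P3"):
        print(entry)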
Filtration may be aided with an air pump to improve wearer comfort. Filtration of air is only possible if there is sufficient oxygen in the first place. Thus, when handling asphyxiants, or when ventilation is poor or the hazards are unknown, filtration is not possible and air must be supplied (with a SCBA system) from a pressurized bottle as in scuba diving.
Use
A modern mask typically is constructed of an elastic polymer in various sizes. It is fitted with various adjustable straps which may be tightened to secure a good fit. Crucially, it is connected to a filter cartridge near the mouth either directly, or via a flexible hose. Some models contain drinking tubes which may be connected to a water bottle. Corrective lens inserts are also available for users who require them.
Masks are typically tested for fit before use. After a mask is fitted, it is often tested by various challenge agents. Isoamyl acetate, a synthetic banana flavourant, and camphor are often used as innocuous challenge agents. In the military, teargases such as CN, CS, and stannic chloride in a chamber may be used to give the users confidence in the efficacy of the mask.
Shortcomings
The protection of a gas mask comes with some disadvantages. The wearer of a typical gas mask must exert extra effort to breathe, and some of the exhaled air is re-inhaled due to the dead space between the facepiece and the user's face. The exposure to carbon dioxide may exceed its OELs (0.5% by volume, or 9 grammes per cubic metre, for an eight-hour shift; 1.4%, or 27 grammes per m3, for 15 minutes' exposure) several times over: for gas masks and elastomeric respirators it can reach up to 2.6%; and in case of long-term use, headache, dermatitis and acne may appear. The UK HSE textbook recommends limiting the use of respirators without air supply (that is, not PAPR) to one hour.
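The volume-percentage and mass-concentration limits quoted above are consistent with each other through the molar volume of an ideal gas; the short check below assumes roughly room-temperature conditions (about 24.5 litres per mole) and the molar mass of carbon dioxide (44 g/mol).

    # Convert a CO2 limit in volume percent to grams per cubic metre of air.
    MOLAR_MASS_CO2 = 44.0    # g/mol
    MOLAR_VOLUME = 24.5      # L/mol for an ideal gas near room temperature (assumed)

    def vol_percent_to_g_per_m3(percent):
        litres_of_co2 = percent / 100 * 1000      # litres of CO2 in one cubic metre
        return litres_of_co2 / MOLAR_VOLUME * MOLAR_MASS_CO2

    print(round(vol_percent_to_g_per_m3(0.5), 1))   # ~9 g/m3, the eight-hour figure
    print(round(vol_percent_to_g_per_m3(1.4), 1))   # ~25 g/m3, close to the quoted 27 g/m3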
Reaction and exchange
This principle relies on substances harmful to humans being usually more reactive than air. This method of separation will use some form of generally reactive substance (for example an acid) coating or supported by some solid material. An example is synthetic resins. These can be created with different groups of atoms (usually called functional groups) that have different properties. Thus a resin can be tailored to a particular toxic group. When the reactive substance comes in contact with the resin, it will bond to it, removing it from the air stream. It may also exchange with a less harmful substance at this site.
Though it was crude, the hypo helmet was a stopgap measure for British troops in the trenches that offered at least some protection during a gas attack. As the months passed and poison gas was used more often, more sophisticated gas masks were developed and introduced. There are two main difficulties with gas mask design:
The user may be exposed to many types of toxic material. Military personnel are especially prone to being exposed to a diverse range of toxic gases. However, if the mask is for a particular use (such as the protection from a specific toxic material in a factory), then the design can be much simpler and the cost lower.
The protection will wear off over time. Filters will clog up, substrates for absorption will fill up, and reactive filters will run out of reactive substances. Thus the user only has protection for a limited time, and then they must either replace the filter device in the mask, or use a new mask.
See also
Assigned Protection Factors
Cartridges and canisters of air-purifying respirators
GP-7 gas mask
GP-5 gas mask
Hopcalite
M2 Gas Mask
M40 Field Protective Mask
M50 joint service general purpose mask
C-4 Protective Mask
NBC suit
PH helmet
Plague doctor's outfit
Respirator
Respirator fit test
Respirators testing in the workplaces
Respirator assigned protection factors
Smoke hood
Notes
Bibliography
Further reading
NIOSH MultiVapor manual, External video
External links
How Stuff Works - Gas Masks Science.com
The History of Gas Masks inventors.about.com, About, Inc.
Respirator Fact Sheet
NIOSH NPPTL
NIOSH MultiVapor breakthrough concentration program
NIOSH GasRemove (beta)
OSHA math model tool for replacement of chemical cartridges
American inventions
British inventions
Russian inventions
Science and technology in the United Kingdom
Military personal equipment
1914 introductions
Riot control equipment
Main
Respirators | Gas mask | [
"Chemistry"
] | 3,428 | [
"Gas masks"
] |
12,778 | https://en.wikipedia.org/wiki/Group%20velocity | The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the modulation or envelope of the wave—propagates through space.
For example, if a stone is thrown into the middle of a very still pond, a circular pattern of waves with a quiescent center appears in the water, also known as a capillary wave. The expanding ring of waves is the wave group or wave packet, within which one can discern individual waves that travel faster than the group as a whole. The amplitudes of the individual waves grow as they emerge from the trailing edge of the group and diminish as they approach the leading edge of the group.
History
The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877.
Definition and interpretation
The group velocity v_g is defined by the equation:

    v_g ≡ ∂ω/∂k

where ω is the wave's angular frequency (usually expressed in radians per second), and k is the angular wavenumber (usually expressed in radians per meter). The phase velocity is v_p = ω/k.
The function ω(k), which gives ω as a function of k, is known as the dispersion relation.
If ω is directly proportional to k, then the group velocity is exactly equal to the phase velocity. A wave of any shape will travel undistorted at this velocity.
If ω is a linear function of k, but not directly proportional to it, then the group velocity and phase velocity are different. The envelope of a wave packet will travel at the group velocity, while the individual peaks and troughs within the envelope will move at the phase velocity.
If ω is not a linear function of k, the envelope of a wave packet will become distorted as it travels. Since a wave packet contains a range of different frequencies (and hence different values of k), the group velocity ∂ω/∂k will be different for different values of k. Therefore, the envelope does not move at a single velocity, but its wavenumber components (k) move at different velocities, distorting the envelope. If the wavepacket has a narrow range of frequencies, and ω(k) is approximately linear over that narrow range, the pulse distortion will be small, in relation to the small nonlinearity. See further discussion below. For example, for deep water gravity waves, ω = √(gk), and hence v_g = v_p/2. This underlies the Kelvin wake pattern for the bow wave of all ships and swimming objects. Regardless of how fast they are moving, as long as their velocity is constant, on each side the wake forms an angle of 19.47° = arcsin(1/3) with the line of travel.
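A quick numerical check of the deep-water case: with ω = √(gk), the group velocity is exactly half the phase velocity, and arcsin(1/3) reproduces the Kelvin wake half-angle quoted above. The wavenumber used below is an arbitrary example value.

    import math

    g = 9.81          # m/s^2
    k = 0.5           # rad/m, arbitrary example wavenumber

    omega = math.sqrt(g * k)                 # deep-water dispersion relation
    v_phase = omega / k
    dk = 1e-6                                # numerical derivative d(omega)/dk
    v_group = (math.sqrt(g * (k + dk)) - math.sqrt(g * k)) / dk

    print(v_group / v_phase)                 # ~0.5
    print(math.degrees(math.asin(1 / 3)))    # ~19.47 degrees, the Kelvin wake angle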
Derivation
One derivation of the formula for group velocity is as follows.
Consider a wave packet as a function of position x and time t: α(x, t).
Let A(k) be its Fourier transform at time t = 0,

    α(x, 0) = ∫ A(k) e^{ikx} dk.

By the superposition principle, the wavepacket at any time t is

    α(x, t) = ∫ A(k) e^{i(kx − ωt)} dk,

where ω is implicitly a function of k.
Assume that the wave packet α is almost monochromatic, so that A(k) is sharply peaked around a central wavenumber k0.
Then, linearization gives

    ω(k) ≈ ω0 + (k − k0)ω′0,

where

    ω0 = ω(k0)

and

    ω′0 = (∂ω/∂k)|_(k=k0)

(see next section for discussion of this step). Then, after some algebra,

    α(x, t) = e^{i(k0x − ω0t)} ∫ A(k) e^{i(k − k0)(x − ω′0 t)} dk.

There are two factors in this expression. The first factor, e^{i(k0x − ω0t)}, describes a perfect monochromatic wave with wavevector k0, with peaks and troughs moving at the phase velocity ω0/k0 within the envelope of the wavepacket.
The other factor,

    ∫ A(k) e^{i(k − k0)(x − ω′0 t)} dk,

gives the envelope of the wavepacket. This envelope function depends on position and time only through the combination (x − ω′0 t).
Therefore, the envelope of the wavepacket travels at velocity

    v_g = ω′0 = (∂ω/∂k)|_(k=k0),

which explains the group velocity formula.
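The derivation can also be checked numerically: build a wavepacket from a sharply peaked spectrum A(k), follow the peak of its envelope, and compare the measured speed with ∂ω/∂k at the central wavenumber. The dispersion relation ω = √k and every numerical parameter below are illustrative assumptions, not values from the text.

    import numpy as np

    def omega(k):
        return np.sqrt(k)                    # example dispersion relation (arbitrary units)

    k0, sigma = 10.0, 0.2                    # spectrum centred at k0 with a narrow spread
    ks = np.linspace(k0 - 1.0, k0 + 1.0, 401)
    A = np.exp(-((ks - k0) / sigma) ** 2)    # sharply peaked A(k)

    x = np.linspace(-20, 80, 4000)

    def packet(t):
        # superposition of plane waves, as in the derivation above
        phases = np.exp(1j * (np.outer(ks, x) - omega(ks)[:, None] * t))
        return (A[:, None] * phases).sum(axis=0)

    def peak_position(t):
        return x[np.argmax(np.abs(packet(t)))]

    t0, t1 = 0.0, 100.0
    measured = (peak_position(t1) - peak_position(t0)) / (t1 - t0)
    predicted = (omega(k0 + 1e-6) - omega(k0)) / 1e-6    # numerical d(omega)/dk at k0
    print(measured, predicted)               # both close to 1/(2*sqrt(10)) ~ 0.158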
Other expressions
For light, the refractive index n, vacuum wavelength λ0, and wavelength in the medium λ, are related by

    λ0 = λn,   n = c/v_p = ck/ω,

with v_p = ω/k the phase velocity.
The group velocity, therefore, can be calculated by any of the following formulas,

    v_g = c / (n + ω ∂n/∂ω) = c / (n − λ0 ∂n/∂λ0) = v_p (1 + (λ/n) ∂n/∂λ).
Dispersion
Part of the previous derivation is the Taylor series approximation that:

    ω(k) ≈ ω(k0) + (k − k0) ω′(k0)
If the wavepacket has a relatively large frequency spread, or if the dispersion has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid, and higher-order terms in the Taylor expansion become important.
As a result, the envelope of the wave packet not only moves, but also distorts, in a manner that can be described by the material's group velocity dispersion. Loosely speaking, different frequency-components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out. This is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers.
Relation to phase velocity, refractive index and transmission speed
In three dimensions
For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way:
One dimension: v_p = ω/k,   v_g = ∂ω/∂k
Three dimensions: v_p = (ω/|k|) k̂,   v_g = ∇_k ω(k)
where ∇_k ω means the gradient of the angular frequency ω as a function of the wave vector k, and k̂ is the unit vector in the direction of k.
If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions.
In lossy or gainful media
The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive or gainful medium, this does not always hold. In these cases the group velocity may not be a well-defined quantity, or may not be a meaningful quantity.
In his text "Wave Propagation in Periodic Structures", Brillouin argued that in a lossy medium the group velocity ceases to have a clear physical meaning. An example concerning the transmission of electromagnetic waves through an atomic gas is given by Loudon. Another example is mechanical waves in the solar photosphere: The waves are damped (by radiative heat flow from the peaks to the troughs), and related to that, the energy velocity is often substantially lower than the waves' group velocity.
Despite this ambiguity, a common way to extend the concept of group velocity to complex media is to consider spatially damped plane wave solutions inside the medium, which are characterized by a complex-valued wavevector. Then, the imaginary part of the wavevector is arbitrarily discarded and the usual formula for group velocity is applied to the real part of the wavevector, i.e.,

    v_g = (∂(Re k)/∂ω)^(−1).

Or, equivalently, in terms of the real part of the complex refractive index, n = Re(ñ), one has

    c/v_g = n + ω ∂n/∂ω.
It can be shown that this generalization of group velocity continues to be related to the apparent speed of the peak of a wavepacket. The above definition is not universal, however: alternatively one may consider the time damping of standing waves (real k, complex ω), or allow group velocity to be a complex-valued quantity. Different considerations yield distinct velocities, yet all definitions agree for the case of a lossless, gainless medium.
The above generalization of group velocity for complex media can behave strangely, and the example of anomalous dispersion serves as a good illustration.
At the edges of a region of anomalous dispersion, v_g becomes infinite (surpassing even the speed of light in vacuum), and v_g may easily become negative (its sign opposes Re k) inside the band of anomalous dispersion.
Superluminal group velocities
Since the 1980s, various experiments have verified that it is possible for the group velocity (as defined above) of laser light pulses sent through lossy materials, or gainful materials, to significantly exceed the speed of light in vacuum . The peaks of wavepackets were also seen to move faster than .
In all these cases, however, there is no possibility that signals could be carried faster than the speed of light in vacuum, since the high value of does not help to speed up the true motion of the sharp wavefront that would occur at the start of any real signal. Essentially the seemingly superluminal transmission is an artifact of the narrow band approximation used above to define group velocity and happens because of resonance phenomena in the intervening medium. In a wide band analysis it is seen that the apparently paradoxical speed of propagation of the signal envelope is actually the result of local interference of a wider band of frequencies over many cycles, all of which propagate perfectly causally and at phase velocity. The result is akin to the fact that shadows can travel faster than light, even if the light causing them always propagates at light speed; since the phenomenon being measured is only loosely connected with causality, it does not necessarily respect the rules of causal propagation, even if it under normal circumstances does so and leads to a common intuition.
See also
Wave propagation
Dispersion (water waves)
Dispersion (optics)
Wave propagation speed
Group delay
Group velocity dispersion
Group delay dispersion
Phase delay
Phase velocity
Signal velocity
Slow light
Front velocity
Matter wave#Group velocity
Soliton
References
Notes
Further reading
Crawford jr., Frank S. (1968). Waves (Berkeley Physics Course, Vol. 3), McGraw-Hill, Free online version
External links
Greg Egan has an excellent Java applet on his web site that illustrates the apparent difference in group velocity from phase velocity.
Maarten Ambaum has a webpage with movie demonstrating the importance of group velocity to downstream development of weather systems.
Phase vs. Group Velocity – Various Phase- and Group-velocity relations (animation)
Radio frequency propagation
Optical quantities
Wave mechanics
Physical quantities
Mathematical physics | Group velocity | [
"Physics",
"Mathematics"
] | 2,010 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Physical quantities",
"Radio frequency propagation",
"Applied mathematics",
"Quantity",
"Theoretical physics",
"Classical mechanics",
"Electromagnetic spectrum",
"Waves",
"Wave mechanics",
"Optical quantities",
"Mathematical physics",
... |
12,781 | https://en.wikipedia.org/wiki/Group%20action | In mathematics, a group action of a group on a set is a group homomorphism from to some group (under function composition) of functions from to itself. It is said that acts on .
Many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles.
If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron.
A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group GL(n, K), the group of the invertible matrices of dimension n over a field K.
The symmetric group S_n acts on any set with n elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality.
Definition
Left group action
If G is a group with identity element e, and X is a set, then a (left) group action α of G on X is a function

    α : G × X → X

that satisfies the following two axioms:
Identity: α(e, x) = x
Compatibility: α(g, α(h, x)) = α(gh, x)
for all g and h in G and all x in X.
The group G is then said to act on X (from the left). A set X together with an action of G is called a (left) G-set.
It can be notationally convenient to curry the action α, so that, instead, one has a collection of transformations α_g : X → X, with one transformation α_g for each group element g in G. The identity and compatibility relations then read

    α_e(x) = x

and

    α_g(α_h(x)) = (α_g ∘ α_h)(x) = α_(gh)(x)

with ∘ being function composition. The second axiom then states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as α_g ∘ α_h = α_(gh).
With the above understanding, it is very common to avoid writing α entirely, and to replace it with either a dot, or with nothing at all. Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then

    e⋅x = x
    g⋅(h⋅x) = (gh)⋅x

From these two axioms, it follows that for any fixed g in G, the function from X to itself which maps x to g⋅x is a bijection, with inverse bijection the corresponding map for g⁻¹. Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym(X) of all bijections from X to itself.
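As a minimal concrete illustration of the two axioms, and of the equivalent description as a homomorphism into the symmetric group, the sketch below checks them by brute force for the rotation action of the cyclic group Z/4Z on the four vertices of a square. The example itself is an assumption chosen for illustration, not taken from the text.

    from itertools import product

    # Z/4Z = {0, 1, 2, 3} under addition mod 4, acting on the vertices X = {0, 1, 2, 3}
    G = range(4)
    X = range(4)

    def act(g, x):
        return (x + g) % 4                  # alpha(g, x)

    # Identity axiom: alpha(e, x) = x, with e = 0
    assert all(act(0, x) == x for x in X)

    # Compatibility axiom: alpha(g, alpha(h, x)) = alpha(gh, x)
    assert all(act(g, act(h, x)) == act((g + h) % 4, x)
               for g, h, x in product(G, G, X))

    # The equivalent homomorphism into Sym(X): each g yields a bijection of X.
    perm = {g: tuple(act(g, x) for x in X) for g in G}
    print(perm)                             # {0: (0, 1, 2, 3), 1: (1, 2, 3, 0), ...}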
Right group action
Likewise, a right group action of G on X is a function

    α : X × G → X

that satisfies the analogous axioms:
Identity: α(x, e) = x
Compatibility: α(α(x, g), h) = α(x, gh)
(with α(x, g) often shortened to x⋅g or xg when the action being considered is clear from context)
Identity: x⋅e = x
Compatibility: (x⋅g)⋅h = x⋅(gh)
for all g and h in G and all x in X.
The difference between left and right actions is in the order in which a product gh acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula (gh)⁻¹ = h⁻¹g⁻¹, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group G^op on X.
Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively.
Notable properties of actions
Let G be a group acting on a set X. The action is called faithful or effective if g⋅x = x for all x in X implies that g = e_G. Equivalently, the homomorphism from G to the group of bijections of X corresponding to the action is injective.
The action is called free (or semiregular or fixed-point free) if the statement that g⋅x = x for some x in X already implies that g = e_G. In other words, no non-trivial element of G fixes a point of X. This is a much stronger property than faithfulness.
For example, the action of any group on itself by left multiplication is free. This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z/2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case, for example the cyclic group Z/2^nZ cannot act faithfully on a set of size less than 2^n.
In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S_5, the icosahedral group A_5 × Z/2Z and the cyclic group Z/120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively.
Transitivity properties
The action of on is called if for any two points there exists a so that .
The action is (or sharply transitive, or ) if it is both transitive and free. This means that given the element in the definition of transitivity is unique. If is acted upon simply transitively by a group then it is called a principal homogeneous space for or a -torsor.
For an integer , the action is if has at least elements, and for any pair of -tuples with pairwise distinct entries (that is , when ) there exists a such that for . In other words, the action on the subset of of tuples without repeated entries is transitive. For this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory.
An action is when the action on tuples without repeated entries in is sharply transitive.
Examples
The action of the symmetric group of is transitive, in fact -transitive for any up to the cardinality of . If has cardinality , the action of the alternating group is -transitive but not -transitive.
The action of the general linear group of a vector space on the set of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere.
Primitive actions
The action of on is called primitive if there is no partition of preserved by all elements of apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons).
Topological properties
Assume that is a topological space and the action of is by homeomorphisms.
The action is wandering if every has a neighbourhood such that there are only finitely many with .
More generally, a point is called a point of discontinuity for the action of if there is an open subset such that there are only finitely many with . The domain of discontinuity of the action is the set of all points of discontinuity. Equivalently it is the largest -stable open subset such that the action of on is wandering. In a dynamical context this is also called a wandering set.
The action is properly discontinuous if for every compact subset there are only finitely many such that . This is strictly stronger than wandering; for instance the action of on given by is wandering and free but not properly discontinuous.
The action by deck transformations of the fundamental group of a locally simply connected space on a universal cover is wandering and free. Such actions can be characterized by the following property: every has a neighbourhood such that for every . Actions with this property are sometimes called freely discontinuous, and the largest subset on which the action is freely discontinuous is then called the free regular set.
An action of a group on a locally compact space is called cocompact if there exists a compact subset such that . For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space .
Actions of topological groups
Now assume is a topological group and a topological space on which it acts by homeomorphisms. The action is said to be continuous if the map is continuous for the product topology.
The action is said to be if the map defined by is proper. This means that given compact sets the set of such that is compact. In particular, this is equivalent to proper discontinuity is a discrete group.
It is said to be locally free if there exists a neighbourhood of such that for all and .
The action is said to be strongly continuous if the orbital map is continuous for every . Contrary to what the name suggests, this is a weaker property than continuity of the action.
If is a Lie group and a differentiable manifold, then the subspace of smooth points for the action is the set of points such that the map is smooth. There is a well-developed theory of Lie group actions, i.e. action which are smooth on the whole space.
Linear actions
If acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero -invariant submodules. It is said to be semisimple if it decomposes as a direct sum of irreducible actions.
Orbits and stabilizers
Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x:

    G⋅x = { g⋅x : g ∈ G }.

The defining properties of a group guarantee that the set of orbits of (points x in) X under the action of G form a partition of X. The associated equivalence relation is defined by saying x ∼ y if and only if there exists a g in G with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements x and y are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y.
The group action is transitive if and only if it has exactly one orbit, that is, if there exists x in X with G⋅x = X. This is the case if and only if G⋅x = X for all x in X (given that X is non-empty).
The set of all orbits of X under the action of G is written as X/G (or, less frequently, as G\X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention.
Invariant subsets
If is a subset of , then denotes the set . The subset is said to be invariant under if (which is equivalent ). In that case, also operates on by restricting the action to . The subset is called fixed under if for all in and all in . Every subset that is fixed under is also invariant under , but not conversely.
Every orbit is an invariant subset of on which acts transitively. Conversely, any invariant subset of is a union of orbits. The action of on is transitive if and only if all elements are equivalent, meaning that there is only one orbit.
A -invariant element of is such that for all . The set of all such is denoted and called the -invariants of . When is a -module, is the zeroth cohomology group of with coefficients in , and the higher cohomology groups are the derived functors of the functor of -invariants.
Fixed points and stabilizer subgroups
Given g in G and x in X with g⋅x = x, it is said that "x is a fixed point of g" or that "g fixes x". For every x in X, the stabilizer subgroup of G with respect to x (also called the isotropy group or little group) is the set of all elements in G that fix x:

    G_x = { g ∈ G : g⋅x = x }.

This is a subgroup of G, though typically not a normal one. The action of G on X is free if and only if all stabilizers are trivial. The kernel N of the homomorphism with the symmetric group, G → Sym(X), is given by the intersection of the stabilizers G_x for all x in X. If N is trivial, the action is said to be faithful (or effective).
Let and be two elements in , and let be a group element such that . Then the two stabilizer groups and are related by . Proof: by definition, if and only if . Applying to both sides of this equality yields ; that is, . An opposite inclusion follows similarly by taking and .
The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of (that is, the set of all conjugates of the subgroup). Let denote the conjugacy class of . Then the orbit has type if the stabilizer of some/any in belongs to . A maximal orbit type is often called a principal orbit type.
Orbits and stabilizers are closely related. For a fixed in , consider the map given by . By definition the image of this map is the orbit . The condition for two elements to have the same image is
In other words, if and only if and lie in the same coset for the stabilizer subgroup . Thus, the fiber of over any in is contained in such a coset, and every such coset also occurs as a fiber. Therefore induces a between the set of cosets for the stabilizer subgroup and the orbit , which sends . This result is known as the orbit-stabilizer theorem.
If G is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives

    |G⋅x| = [G : G_x] = |G| / |G_x|,

in other words the length of the orbit of x times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order.
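A brute-force check of the identity |G⋅x| · |G_x| = |G| on a small example (the natural action of the symmetric group S_4 on four points; the choice of example is illustrative):

    from itertools import permutations

    G = list(permutations(range(4)))     # each permutation p sends x to p[x]
    x = 0

    orbit = {p[x] for p in G}
    stabilizer = [p for p in G if p[x] == x]

    print(len(G), len(orbit) * len(stabilizer))    # both are 24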
Example: Let be a group of prime order acting on a set with elements. Since each orbit has either or elements, there are at least orbits of length which are -invariant elements. More specifically, and the number of -invariant elements are congruent modulo .
This result is especially useful since it can be employed for counting arguments (typically in situations where is finite as well).
Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let denote its automorphism group. Then acts on the set of vertices , and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, . Applying the theorem now to the stabilizer , we can obtain . Any element of that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by , which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, . Applying the theorem a third time gives . Any element of that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus . One also sees that consists only of the identity automorphism, as any element of fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain .
Burnside's lemma
A result closely related to the orbit-stabilizer theorem is Burnside's lemma:

    |X/G| = (1/|G|) ∑_(g ∈ G) |X^g|,

where X^g is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element.
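A standard worked example of the lemma, checked by brute force below: two-colourings of the corners of a square counted up to rotation. The example and its encoding are illustrative assumptions.

    from itertools import product

    rotations = [
        (0, 1, 2, 3),   # identity
        (3, 0, 1, 2),   # 90 degrees
        (2, 3, 0, 1),   # 180 degrees
        (1, 2, 3, 0),   # 270 degrees
    ]

    colourings = list(product([0, 1], repeat=4))    # the set X, |X| = 16

    def fixed_count(rotation):
        # colourings left unchanged when the corners are permuted by this rotation
        return sum(1 for c in colourings
                   if all(c[i] == c[rotation[i]] for i in range(4)))

    orbits = sum(fixed_count(r) for r in rotations) / len(rotations)
    print(orbits)    # 6.0 distinct colourings up to rotation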
Fixing a group , the set of formal differences of finite -sets forms a ring called the Burnside ring of , where addition corresponds to disjoint union, and multiplication to Cartesian product.
Examples
The action of any group on any set is defined by for all in and all in ; that is, every group element induces the identity permutation on .
In every group , left multiplication is an action of on : for all , in . This action is free and transitive (regular), and forms the basis of a rapid proof of Cayley's theorem – that every group is isomorphic to a subgroup of the symmetric group of permutations of the set .
In every group with subgroup , left multiplication is an action of on the set of cosets : for all , in . In particular if contains no nontrivial normal subgroups of this induces an isomorphism from to a subgroup of the permutation group of degree .
In every group , conjugation is an action of on : . An exponential notation is commonly used for the right-action variant: ; it satisfies (.
In every group with subgroup , conjugation is an action of on conjugates of : for all in and conjugates of .
An action of on a set uniquely determines and is determined by an automorphism of , given by the action of 1. Similarly, an action of on is equivalent to the data of an involution of .
The symmetric group and its subgroups act on the set by permuting its elements
The symmetry group of a polyhedron acts on the set of vertices of that polyhedron. It also acts on the set of faces or the set of edges of the polyhedron.
The symmetry group of any geometrical object acts on the set of points of that object.
For a coordinate space over a field with group of units , the mapping given by is a group action called scalar multiplication.
The automorphism group of a vector space (or graph, or group, or ring ...) acts on the vector space (or set of vertices of the graph, or group, or ring ...).
The general linear group and its subgroups, particularly its Lie subgroups (including the special linear group , orthogonal group , special orthogonal group , and symplectic group ) are Lie groups that act on the vector space . The group operations are given by multiplying the matrices from the groups with the vectors from .
The general linear group acts on by natural matrix action. The orbits of its action are classified by the greatest common divisor of coordinates of the vector in .
The affine group acts transitively on the points of an affine space, and the subgroup V of the affine group (that is, a vector space) has transitive and free (that is, regular) action on these points; indeed this can be used to give a definition of an affine space.
The projective linear group and its subgroups, particularly its Lie subgroups, which are Lie groups that act on the projective space . This is a quotient of the action of the general linear group on projective space. Particularly notable is , the symmetries of the projective line, which is sharply 3-transitive, preserving the cross ratio; the Möbius group is of particular interest.
The isometries of the plane act on the set of 2D images and patterns, such as wallpaper patterns. The definition can be made more precise by specifying what is meant by image or pattern, for example, a function of position with values in a set of colors. Isometries are in fact one example of affine group (action).
The sets acted on by a group comprise the category of -sets in which the objects are -sets and the morphisms are -set homomorphisms: functions such that for every in .
The Galois group of a field extension acts on the field but has only a trivial action on elements of the subfield . Subgroups of correspond to subfields of that contain , that is, intermediate field extensions between and .
The additive group of the real numbers acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems) by time translation: if is in and is in the phase space, then describes a state of the system, and is defined to be the state of the system seconds later if is positive or seconds ago if is negative.
The additive group of the real numbers acts on the set of real functions of a real variable in various ways, with equal to, for example, , , , , , or , but not .
Given a group action of on , we can define an induced action of on the power set of , by setting for every subset of and every in . This is useful, for instance, in studying the action of the large Mathieu group on a 24-set and in studying symmetry in certain models of finite geometries.
The quaternions with norm 1 (the versors), as a multiplicative group, act on : for any such quaternion , the mapping is a counterclockwise rotation through an angle about an axis given by a unit vector ; is the same rotation; see quaternions and spatial rotation. This is not a faithful action because the quaternion leaves all points where they were, as does the quaternion .
Given left -sets , , there is a left -set whose elements are -equivariant maps , and with left -action given by (where "" indicates right multiplication by ). This -set has the property that its fixed points correspond to equivariant maps ; more generally, it is an exponential object in the category of -sets.
Group actions and groupoids
The notion of group action can be encoded by the action groupoid associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components.
Morphisms and isomorphisms between G-sets
If and are two -sets, a morphism from to is a function such that for all in and all in . Morphisms of -sets are also called equivariant maps or -maps.
The composition of two morphisms is again a morphism. If a morphism is bijective, then its inverse is also a morphism. In this case is called an isomorphism, and the two -sets and are called isomorphic; for all practical purposes, isomorphic -sets are indistinguishable.
Some example isomorphisms:
Every regular action is isomorphic to the action of on given by left multiplication.
Every free action is isomorphic to , where is some set and acts on by left multiplication on the first coordinate. ( can be taken to be the set of orbits .)
Every transitive action is isomorphic to left multiplication by on the set of left cosets of some subgroup of . ( can be taken to be the stabilizer group of any element of the original -set.)
With this notion of morphism, the collection of all -sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean).
Variants and generalizations
We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action.
Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object of some category, and then define an action on as a monoid homomorphism into the monoid of endomorphisms of . If has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion.
We can view a group as a category with a single object in which every morphism is invertible. A (left) group action is then nothing but a (covariant) functor from to the category of sets, and a group representation is a functor from to the category of vector spaces. A morphism between -sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category.
In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category.
Gallery
See also
Gain graph
Group with operators
Measurable group action
Monoid action
Young–Deruyts development
Notes
Citations
References
External links
Group theory
Representation theory of groups
Symmetry | Group action | [
"Physics",
"Mathematics"
] | 5,300 | [
"Group actions",
"Group theory",
"Fields of abstract algebra",
"Geometry",
"Symmetry"
] |
12,783 | https://en.wikipedia.org/wiki/Gzip | gzip is a file format and a software application used for file compression and decompression. The program was created by Jean-loup Gailly and Mark Adler as a free software replacement for the compress program used in early Unix systems, and intended for use by GNU (from which the "g" of gzip is derived). Version 0.1 was first publicly released on 31 October 1992, and version 1.0 followed in February 1993.
The decompression of the gzip format can be implemented as a streaming algorithm, an important feature for Web protocols, data interchange and ETL (in standard pipes) applications.
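As a minimal sketch of streaming decompression (assuming Python's standard zlib module; the chunk size and function name are arbitrary), the following reads gzip data from standard input and writes decompressed bytes to standard output without holding the whole stream in memory, which is what makes it usable in a pipeline. It handles a single gzip member; a complete tool would also loop over concatenated members.

```python
import sys
import zlib

def stream_gunzip(chunk_size=64 * 1024):
    # 16 + MAX_WBITS tells zlib to expect a gzip wrapper around the DEFLATE data.
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)
    while True:
        chunk = sys.stdin.buffer.read(chunk_size)
        if not chunk:
            break
        sys.stdout.buffer.write(d.decompress(chunk))
    sys.stdout.buffer.write(d.flush())

if __name__ == "__main__":
    stream_gunzip()  # e.g. python gunzip_stream.py < file.gz > file
```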
File format
gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. DEFLATE was intended as a replacement for LZW and other patent-encumbered data compression algorithms which, at the time, limited the usability of the compress utility and other popular archivers.
"gzip" is often also used to refer to the gzip file format, which is:
a 10-byte header, containing a magic number (1f 8b), the compression method (08 for DEFLATE), 1 byte of header flags, a 4-byte timestamp, compression flags and the operating system ID.
optional extra headers as allowed by the header flags, including the original filename, a comment field, an "extra" field, and the lower half of a CRC-32 checksum for the header section.
a body, containing a DEFLATE-compressed payload
an 8-byte trailer, containing a CRC-32 checksum and the length of the original uncompressed data, modulo 2^32.
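As a minimal sketch of the layout just listed (assuming a single-member file read entirely into memory; the function and field names are illustrative), the fixed 10-byte header and the 8-byte trailer can be unpacked as follows, leaving the optional headers and the compressed body untouched.

```python
import struct

def inspect_gzip(path):
    with open(path, "rb") as f:
        data = f.read()
    # Fixed header: magic (2), method (1), flags (1), mtime (4), extra flags (1), OS (1)
    magic, method, flags, mtime, extra_flags, os_id = struct.unpack("<HBBIBB", data[:10])
    if magic != 0x8B1F or method != 8:              # bytes 1f 8b, DEFLATE
        raise ValueError("not a DEFLATE-compressed gzip file")
    crc32, isize = struct.unpack("<II", data[-8:])  # CRC-32 and size modulo 2^32
    return {"flags": flags, "mtime": mtime, "os": os_id,
            "crc32": crc32, "uncompressed_size_mod_2^32": isize}
```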
Although its file format also allows for multiple such streams to be concatenated (concatenated gzip files are simply decompressed as if they were originally one file), gzip is normally used to compress just single files. Compressed archives are typically created by assembling collections of files into a single tar archive (also called tarball), and then compressing that archive with gzip. The final compressed file usually has the extension .tar.gz or .tgz.
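The concatenation behaviour can be demonstrated with Python's gzip module, which reads multi-member data (a small sketch; the sample strings are arbitrary):

```python
import gzip

part1 = gzip.compress(b"hello, ")   # one complete gzip member
part2 = gzip.compress(b"world\n")   # a second, independent member
combined = part1 + part2            # two members simply concatenated

# Decompressing the concatenation yields the data as if it had been one file.
assert gzip.decompress(combined) == b"hello, world\n"
```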
gzip is not to be confused with the ZIP archive format, which also uses DEFLATE. The ZIP format can hold collections of files without an external archiver, but is less compact than compressed tarballs holding the same data, because it compresses files individually and cannot take advantage of redundancy between files (solid compression).
The gzip file format is also not to be confused with that of the compress utility, based on LZW, with extension .Z; however, the gunzip utility is able to decompress .Z files.
Implementations
Various implementations of the program have been written. The most commonly known is the GNU Project's implementation using Lempel-Ziv coding (LZ77). OpenBSD's version of gzip is actually the compress program, to which support for the gzip format was added in OpenBSD 3.4. The 'g' in this specific version stands for gratis. FreeBSD, DragonFly BSD and NetBSD use a BSD-licensed implementation instead of the GNU version; it is actually a command-line interface for zlib intended to be compatible with the GNU implementations' options. These implementations originally come from NetBSD, and support decompression of bzip2 and the Unix pack format.
An alternative compression program achieving 3-8% better compression is Zopfli. It achieves gzip-compatible compression using more exhaustive algorithms, at the expense of compression time required. It does not affect decompression time.
pigz, written by Mark Adler, is compatible with gzip and speeds up compression by using all available CPU cores and threads.
Damage recovery
Data in blocks prior to the first damaged part of the archive is usually fully readable. Data from undamaged blocks located after the damaged part may be recoverable through difficult workarounds.
Derivatives and other uses
The tar utility included in most Linux distributions can extract .tar.gz files by passing the -z option, e.g., tar -xzf archive.tar.gz, where -z instructs decompression, -x means extraction, and -f specifies the name of the compressed archive file to extract from. Optionally, -v (verbose) lists files as they are being extracted.
zlib is an abstraction of the DEFLATE algorithm in library form which includes support both for the gzip file format and a lightweight data stream format in its API. The zlib stream format, DEFLATE, and the gzip file format were standardized respectively as RFC 1950, RFC 1951, and RFC 1952.
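The relationship between the three standardized formats can be illustrated with Python's zlib bindings, where the wbits parameter selects the container wrapped around the same DEFLATE payload (a sketch; parameter values follow the CPython zlib documentation):

```python
import zlib

payload = b"the same DEFLATE payload, three different containers" * 10

def compress_with(wbits):
    # Positional arguments: level, method, wbits.
    c = zlib.compressobj(9, zlib.DEFLATED, wbits)
    return c.compress(payload) + c.flush()

zlib_stream = compress_with(15)       # RFC 1950: 2-byte header + Adler-32 trailer
raw_deflate = compress_with(-15)      # RFC 1951: bare DEFLATE, no header or trailer
gzip_stream = compress_with(16 + 15)  # RFC 1952: 10-byte header + CRC-32/size trailer

assert gzip_stream[:2] == b"\x1f\x8b"  # gzip magic number
assert len(raw_deflate) < len(zlib_stream) < len(gzip_stream)
```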
The gzip format is used in HTTP compression, a technique used to speed up the sending of HTML and other content on the World Wide Web. It is one of the three standard formats for HTTP compression as specified in RFC 2616. This RFC also specifies a zlib format (called "DEFLATE"), which is equal to the gzip format except that gzip adds eleven bytes of overhead in the form of headers and trailers. Still, the gzip format is sometimes recommended over zlib because Internet Explorer does not implement the standard correctly and cannot handle the zlib format as specified in RFC 1950.
zlib DEFLATE is used internally by the Portable Network Graphics (PNG) format.
Since the late 1990s, bzip2, a file compression utility based on a block-sorting algorithm, has gained some popularity as a gzip replacement. It produces considerably smaller files (especially for source code and other structured text), but at the cost of memory and processing time (up to a factor of 4).
AdvanceCOMP, Zopfli, libdeflate and 7-Zip can produce gzip-compatible files, using an internal DEFLATE implementation with better compression ratios than gzip itself—at the cost of more processor time compared to the reference implementation.
Research published in 2023 showed that simple lossless compression techniques such as gzip could be combined with a k-nearest-neighbor classifier to create an attractive alternative to deep neural networks for text classification in natural language processing. This approach has been shown to equal and in some cases outperform conventional approaches such as BERT due to low resource requirements, e.g. no requirement for GPU hardware.
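A minimal sketch of the general idea (an illustration of the approach, not the published implementation; all names and data are assumptions) uses compressed length as a proxy for information content and classifies a new document by the labels of its nearest training documents under the normalized compression distance:

```python
import gzip

def clen(s):
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a, b):
    # Normalized compression distance: small when a and b share structure.
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query, training, k=3):
    # training is a list of (text, label) pairs; return the most common label
    # among the k training texts closest to the query.
    neighbours = sorted(training, key=lambda item: ncd(query, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)
```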
See also
Comparison of file archivers
Free file format
List of archive formats
List of Unix commands
Libarc
Brotli
zlib
Notes
References
RFC 1952 – GZIP file format specification version 4.3
External links
Archive formats
Cross-platform software
Free data compression software
Free software programmed in C
GNU Project software
IBM i Qshell commands
Inferno (operating system) commands
Lossless compression algorithms
Plan 9 commands
Unix archivers and compression-related utilities | Gzip | [
"Technology"
] | 1,427 | [
"IBM i Qshell commands",
"Computing commands",
"Plan 9 commands",
"Inferno (operating system) commands"
] |
12,796 | https://en.wikipedia.org/wiki/Genotype | The genotype of an organism is its complete set of genetic material. Genotype can also be used to refer to the alleles or variants an individual carries in a particular gene or genetic location. The number of alleles an individual can have in a specific gene depends on the number of copies of each chromosome found in that species, also referred to as ploidy. In diploid species like humans, two full sets of chromosomes are present, meaning each individual has two alleles for any given gene. If both alleles are the same, the genotype is referred to as homozygous. If the alleles are different, the genotype is referred to as heterozygous.
Genotype contributes to phenotype, the observable traits and characteristics in an individual or organism. The degree to which genotype affects phenotype depends on the trait. For example, the petal color in a pea plant is exclusively determined by genotype. The petals can be purple or white depending on the alleles present in the pea plant. However, other traits are only partially influenced by genotype. These traits are often called complex traits because they are influenced by additional factors, such as environmental and epigenetic factors. Not all individuals with the same genotype look or act the same way because appearance and behavior are modified by environmental and growing conditions. Likewise, not all organisms that look alike necessarily have the same genotype.
The term genotype was coined by the Danish botanist Wilhelm Johannsen in 1903.
Phenotype
Any given gene will usually cause an observable change in an organism, known as the phenotype. The terms genotype and phenotype are distinct for at least two reasons:
To distinguish the source of an observer's knowledge (one can know about genotype by observing DNA; one can know about phenotype by observing outward appearance of an organism).
Genotype and phenotype are not always directly correlated. Some genes only express a given phenotype in certain environmental conditions. Conversely, some phenotypes could be the result of multiple genotypes. The genotype is commonly mixed up with the phenotype which describes the result of both the genetic and the environmental factors giving the observed expression (e.g. blue eyes, hair color, or various hereditary diseases).
A simple example to illustrate genotype as distinct from phenotype is the flower colour in pea plants (see Gregor Mendel). There are three available genotypes, PP (homozygous dominant), Pp (heterozygous), and pp (homozygous recessive). All three have different genotypes but the first two have the same phenotype (purple) as distinct from the third (white).
A more technical example to illustrate genotype is the single-nucleotide polymorphism or SNP. A SNP occurs when corresponding sequences of DNA from different individuals differ at one DNA base, for example where the sequence AAGCCTA changes to AAGCTTA. This contains two alleles: C and T. SNPs typically have three genotypes, denoted generically AA, Aa and aa. In the example above, the three genotypes would be CC, CT and TT. Other types of genetic marker, such as microsatellites, can have more than two alleles, and thus many different genotypes.
Penetrance is the proportion of individuals showing a specified genotype in their phenotype under a given set of environmental conditions.
Mendelian inheritance
Traits that are determined exclusively by genotype are typically inherited in a Mendelian pattern. These laws of inheritance were described extensively by Gregor Mendel, who performed experiments with pea plants to determine how traits were passed on from generation to generation. He studied phenotypes that were easily observed, such as plant height, petal color, or seed shape. He was able to observe that if he crossed two true-breeding plants with distinct phenotypes, all the offspring would have the same phenotype. For example, when he crossed a tall plant with a short plant, all the resulting plants would be tall. However, when he self-fertilized the plants that resulted, about 1/4 of the second generation would be short. He concluded that some traits were dominant, such as tall height, and others were recessive, like short height. Though Mendel was not aware at the time, each phenotype he studied was controlled by a single gene with two alleles. In the case of plant height, one allele caused the plants to be tall, and the other caused plants to be short. When the tall allele was present, the plant would be tall, even if the plant was heterozygous. In order for the plant to be short, it had to be homozygous for the recessive allele.
One way this can be illustrated is using a Punnett square. In a Punnett square, the genotypes of the parents are placed on the outside. An uppercase letter is typically used to represent the dominant allele, and a lowercase letter is used to represent the recessive allele. The possible genotypes of the offspring can then be determined by combining the parent genotypes. In the example on the right, both parents are heterozygous, with a genotype of Bb. The offspring can inherit a dominant allele from each parent, making them homozygous with a genotype of BB. The offspring can inherit a dominant allele from one parent and a recessive allele from the other parent, making them heterozygous with a genotype of Bb. Finally, the offspring could inherit a recessive allele from each parent, making them homozygous with a genotype of bb. Plants with the BB and Bb genotypes will look the same, since the B allele is dominant. The plant with the bb genotype will have the recessive trait.
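A Punnett square for a single gene is straightforward to reproduce computationally. The sketch below (a toy illustration using the B/b alleles from the example above) pairs every allele from one parent with every allele from the other and tallies the resulting genotypes:

```python
from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    offspring = Counter()
    for allele1, allele2 in product(parent1, parent2):
        genotype = "".join(sorted(allele1 + allele2))  # "bB" and "Bb" are the same genotype
        offspring[genotype] += 1
    return offspring

print(punnett("Bb", "Bb"))  # Counter({'Bb': 2, 'BB': 1, 'bb': 1})
```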
These inheritance patterns can also be applied to hereditary diseases or conditions in humans or animals. Some conditions are inherited in an autosomal dominant pattern, meaning individuals with the condition typically have an affected parent as well. A classic pedigree for an autosomal dominant condition shows affected individuals in every generation. Other conditions are inherited in an autosomal recessive pattern, where affected individuals do not typically have an affected parent. Since each parent must have a copy of the recessive allele in order to have an affected offspring, the parents are referred to as carriers of the condition.
In autosomal conditions, the sex of the offspring does not play a role in their risk of being affected. In sex-linked conditions, the sex of the offspring affects their chances of having the condition. In humans, females inherit two X chromosomes, one from each parent, while males inherit an X chromosome from their mother and a Y chromosome from their father. X-linked dominant conditions can be distinguished from autosomal dominant conditions in pedigrees by the lack of transmission from fathers to sons, since affected fathers only pass their X chromosome to their daughters. In X-linked recessive conditions, males are typically affected more commonly because they are hemizygous, with only one X chromosome. In females, the presence of a second X chromosome will prevent the condition from appearing. Females are therefore carriers of the condition and can pass the trait on to their sons.
Mendelian patterns of inheritance can be complicated by additional factors. Some diseases show incomplete penetrance, meaning not all individuals with the disease-causing allele develop signs or symptoms of the disease. Penetrance can also be age-dependent, meaning signs or symptoms of disease are not visible until later in life. For example, Huntington disease is an autosomal dominant condition, but up to 25% of individuals with the affected genotype will not develop symptoms until after age 50. Another factor that can complicate Mendelian inheritance patterns is variable expressivity, in which individuals with the same genotype show different signs or symptoms of disease. For example, individuals with polydactyly can have a variable number of extra digits.
Non-Mendelian inheritance
Many traits are not inherited in a Mendelian fashion, but have more complex patterns of inheritance.
Incomplete dominance
For some traits, neither allele is completely dominant. Heterozygotes often have an appearance somewhere in between those of homozygotes. For example, a cross between true-breeding red and white Mirabilis jalapa results in pink flowers.
Codominance
Codominance refers to traits in which both alleles are expressed in the offspring in approximately equal amounts. A classic example is the ABO blood group system in humans, where both the A and B alleles are expressed when they are present. Individuals with the AB genotype have both A and B proteins expressed on their red blood cells.
Epistasis
Epistasis is when the phenotype of one gene is affected by one or more other genes. This is often through some sort of masking effect of one gene on the other. For example, the "A" gene codes for hair color, a dominant "A" allele codes for brown hair, and a recessive "a" allele codes for blonde hair, but a separate "B" gene controls hair growth, and a recessive "b" allele causes baldness. If the individual has the BB or Bb genotype, then they produce hair and the hair color phenotype can be observed, but if the individual has a bb genotype, then the person is bald which masks the A gene entirely.
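The masking effect in this hypothetical example can be written as a small rule (an illustration that mirrors the A/B genes described above, not a real genetic model):

```python
def hair_phenotype(a_genotype, b_genotype):
    if b_genotype == "bb":          # recessive bb causes baldness and masks the A gene
        return "bald"
    return "brown hair" if "A" in a_genotype else "blonde hair"

print(hair_phenotype("Aa", "Bb"))   # brown hair
print(hair_phenotype("AA", "bb"))   # bald (the A gene is masked)
```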
Polygenic traits
A polygenic trait is one whose phenotype is dependent on the additive effects of multiple genes. The contributions of each of these genes are typically small and add up to a final phenotype with a large amount of variation. A well-studied example of this is the number of sensory bristles on a fly. These types of additive effects are also the explanation for the amount of variation in human eye color.
Genotyping
Genotyping refers to the method used to determine an individual's genotype. There are a variety of techniques that can be used to assess genotype. The genotyping method typically depends on what information is being sought. Many techniques initially require amplification of the DNA sample, which is commonly done using PCR.
Some techniques are designed to investigate specific SNPs or alleles in a particular gene or set of genes, such as whether an individual is a carrier for a particular condition. This can be done via a variety of techniques, including allele specific oligonucleotide (ASO) probes or DNA sequencing. Tools such as multiplex ligation-dependent probe amplification can also be used to look for duplications or deletions of genes or gene sections. Other techniques are meant to assess a large number of SNPs across the genome, such as SNP arrays. This type of technology is commonly used for genome-wide association studies.
Large-scale techniques to assess the entire genome are also available. This includes karyotyping to determine the number of chromosomes an individual has and chromosomal microarrays to assess for large duplications or deletions in the chromosome. More detailed information can be determined using exome sequencing, which provides the specific sequence of all DNA in the coding region of the genome, or whole genome sequencing, which sequences the entire genome including non-coding regions.
Genotype encoding
In linear models, the genotypes can be encoded in different manners. Let us consider a biallelic locus with two possible alleles, encoded by A and a, where A corresponds to the dominant (effect) allele and a to the reference allele. Commonly used encodings are the additive encoding (aa = 0, Aa = 1, AA = 2, counting copies of A), the dominant encoding (aa = 0, Aa = 1, AA = 1) and the recessive encoding (aa = 0, Aa = 0, AA = 1).
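These encodings are straightforward to apply in code. The sketch below (assuming, as above, that A is the effect allele and a the reference allele) converts genotype strings into the numeric predictors used in a linear model:

```python
ENCODINGS = {
    "additive":  {"aa": 0, "Aa": 1, "AA": 2},  # counts copies of A
    "dominant":  {"aa": 0, "Aa": 1, "AA": 1},  # one copy of A is enough for the effect
    "recessive": {"aa": 0, "Aa": 0, "AA": 1},  # two copies of A are required
}

def encode(genotypes, model="additive"):
    table = ENCODINGS[model]
    return [table["".join(sorted(g))] for g in genotypes]  # "aA" normalizes to "Aa"

print(encode(["AA", "Aa", "aA", "aa"]))               # [2, 1, 1, 0]
print(encode(["AA", "Aa", "aA", "aa"], "recessive"))  # [1, 0, 0, 0]
```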
See also
Endophenotype
Nucleic acid sequence
Sequence (biology)
References
External links
Genetic nomenclature
Genetics
Polymorphism (biology)
DNA sequencing | Genotype | [
"Chemistry",
"Biology"
] | 2,452 | [
"Molecular biology techniques",
"DNA sequencing",
"Genetics"
] |
12,799 | https://en.wikipedia.org/wiki/Graphic%20design | Graphic design is a profession, academic discipline and applied art whose activity consists in projecting visual communications intended to transmit specific messages to social groups, with specific objectives. Graphic design is an interdisciplinary branch of design and of the fine arts. Its practice involves creativity, innovation and lateral thinking using manual or digital tools, where it is usual to use text and graphics to communicate visually.
The role of the graphic designer in the communication process is that of the encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages. Usually, graphic design uses the aesthetics of typography and the compositional arrangement of the text, ornamentation, and imagery to convey ideas, feelings, and attitudes beyond what language alone expresses. The design work can be based on a customer's demand, a demand that ends up being established linguistically, either orally or in writing, that is, that graphic design transforms a linguistic message into a graphic manifestation.
Graphic design has, as a field of application, different areas of knowledge focused on any visual communication system. For example, it can be applied in advertising strategies, or it can also be applied in the aviation world or space exploration. In this sense, in some countries graphic design is regarded as being associated only with the production of sketches and drawings; this is incorrect, since such work is only a small part of the huge range of types and classes of visual communication to which graphic design can be applied.
With origins in Antiquity and the Middle Ages, graphic design as applied art was initially linked to the boom of the rise of printing in Europe in the 15th century and the growth of consumer culture in the Industrial Revolution. From there it emerged as a distinct profession in the West, closely associated with advertising in the 19th century and its evolution allowed its consolidation in the 20th century. Given the rapid and massive growth in information exchange today, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them.
Terminology
The term "graphic design" makes an early appearance in a 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers:
An Enterprising Trades Union
… The admittedly high standard of intelligence which prevails among printers is an assurance that with the elemental principles of design at their finger ends many of them will grow in knowledge and develop into specialists in graphic design and decorating. …
A decade later, the 1917–1918 course catalog of the California School of Arts & Crafts advertised a course titled Graphic Design and Lettering, which replaced one called Advanced Design and Lettering. Both classes were taught by Frederick Meyer.
History
In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client. In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience."
The definition of the graphic designer profession is relatively recent concerning its preparation, activity, and objectives. Although there is no consensus on an exact date when graphic design emerged, some date it back to the Interwar period. Others understand that it began to be identified as such by the late 19th century.
It can be argued that graphic communications with specific purposes have their origins in Paleolithic cave paintings and the birth of written language in the third millennium BCE. However, the differences in working methods, auxiliary sciences, and required training are such that it is not possible to clearly identify the current graphic designer with prehistoric man, the 15th-century xylographer, or the lithographer of 1890.
The diversity of opinions stems from some considering any graphic manifestation as a product of graphic design, while others only recognize those that arise as a result of the application of an industrial production model—visual manifestations that have been "projected" to address various needs: productive, symbolic, ergonomic, contextual, among others.
Nevertheless, the evolution of graphic design as a practice and profession has been closely linked to technological innovations, social needs, and the visual imagination of professionals. Graphic design has been practiced in various forms throughout history; in fact, good examples of graphic design date back to manuscripts from ancient China, Egypt, and Greece. As printing and book production developed in the 15th century, advances in graphic design continued over the subsequent centuries, with composers or typographers often designing pages according to established type.
By the late 19th century, graphic design emerged as a distinct profession in the West, partly due to the process of labor specialization that occurred there and partly due to the new technologies and business possibilities brought about by the Industrial Revolution. New production methods led to the separation of the design of a communication medium (such as a poster) from its actual production. Increasingly, throughout the 19th and early 20th centuries, advertising agencies, book publishers, and magazines hired art directors who organized all visual elements of communication and integrated them into a harmonious whole, creating an expression appropriate to the content. In 1922, typographer William A. Dwiggins coined the term graphic design to identify the emerging field.
Throughout the 20th century, the technology available to designers continued to advance rapidly, as did the artistic and commercial possibilities of design. The profession expanded greatly, and graphic designers created, among other things, magazine pages, book covers, posters, CD covers, postage stamps, packaging, brands, signs, advertisements, kinetic titles for TV programs and movies, and websites. By the early 21st century, graphic design had become a global profession as advanced technology and industry spread worldwide.
Historical background
In China, during the Tang dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century in China, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279).
In the mid-15th century in Mainz, Germany, Johannes Gutenberg developed a way to reproduce printed pages at a faster pace using movable type made with a new metal alloy that created a revolution in the dissemination of information.
Nineteenth century
In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.
From 1891 to 1896, William Morris' Kelmscott Press was a leader in graphic design associated with the Arts and Crafts movement, creating hand-made books in medieval and Renaissance era style, in addition to wallpaper and textile designs. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau.
Will H. Bradley became one of the notable graphic designers of the late nineteenth century for creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book.
Twentieth century
In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering". Raffe's Graphic Design, published in 1927, was the first book to use "Graphic Design" in its title. In 1936, author and graphic designer Leon Friend published his book titled "Graphic Design" and it is known to be the first piece of literature to cover the topic extensively.
The signage in the London Underground is a classic design example of the modern era. Although he lacked artistic training, Frank Pick led the Underground Group design and publicity movement. The first Underground station signs were introduced in 1908 with a design of a solid red disk with a blue bar in the center and the name of the station. The station name was in white sans-serif letters. It was in 1916 when Pick used the expertise of Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle.
In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed buildings, film and theater sets, posters, fabrics, clothing, furniture, logos, menus, etc.
Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application.
The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto. First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre 51 stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."
Applications
Graphic design can have many applications, from road signs to technical schematics and reference manuals. It is often used in branding products and elements of company identity such as logos, colors, packaging, labelling and text.
From scientific journals to news reporting, the presentation of opinions and facts is often improved with graphics and thoughtful compositions of visual information – known as information design. With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include Data and information visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics.
Skills
A graphic design project may involve the creative presentation of existing text, ornament, and images.
The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent.
Typography
Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is the selection of typefaces, point size, tracking (the space between all characters used), kerning (the space between two specific characters) and leading (line spacing).
Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation. Certain fonts communicate or resemble stereotypical notions. For example, the 1942 Report is a font which types text akin to a typewriter or a vintage report.
Page layout
Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages. Elements typically consist of type (text), images (pictures), and (with print media) occasionally place-holder graphics such as a dieline for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.
Grids
A grid serves as a method of arranging both space and information, allowing the reader to easily comprehend the overall project. Furthermore, a grid functions as a container for information and a means of establishing and maintaining order. Despite grids being utilized for centuries, many graphic designers associate them with Swiss design. The desire for order in the 1940s resulted in a highly systematic approach to visualizing information. However, grids were later regarded as tedious and uninteresting, earning the label of "designersaur." Today, grids are once again considered crucial tools for professionals, whether they are novices or veterans.
Tools
In the mid-1980s desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media. Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally.
Computers and software
Designers disagree over whether computers enhance the creative process. Some designers argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up. Other designers find that the limitless choices offered by digital design can lead to paralysis or to endless iterations with no clear outcome.
Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer.
Graphic designers are expected to be proficient in software programs for image-making, typography and layout. Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Inc. Adobe Photoshop (a raster-based program for photo editing) and Adobe Illustrator (a vector-based program for drawing) are often used in the final stage. CorelDraw, a vector graphics editing software developed and marketed by Corel Corporation, is also used worldwide. Designers often use pre-designed raster images and vector graphics in their work from online design databases. Raster images may be edited in Adobe Photoshop, vector logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs, such as Adobe InDesign, Serif PagePlus and QuarkXPress.
Many free and open-source programs are also used by both professionals and casual graphic designers. Inkscape uses Scalable Vector Graphics (SVG) as its primary file format and allows importing and exporting other formats. Other open-source programs used include GIMP for photo-editing and image manipulation, Krita for digital painting, and Scribus for page layout.
Related design fields
Print design
A specialized branch of graphic design and historically its earliest form, print design involves creating visual content intended for reproduction on physical substrates such as silk, paper, and later, plastic, for mass communication and persuasion (e.g., marketing, governmental publishing, propaganda). Print design techniques have evolved over centuries, beginning with the invention of movable type by the Chinese alchemist Pi Sheng, later refined by the German inventor Johannes Gutenberg. Over time, methods such as lithography, screen printing, and offset printing have been developed, culminating in the contemporary use of digital presses that integrate traditional print techniques with modern digital technology.
Interface design
Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a Graphical user interface (GUI). This has included web design and software design when end user-interactivity is a design consideration of the layout or interface. Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design.
User experience design
User experience design (UX) is the study, analysis, and development of creating products that provide meaningful and relevant experiences to users. This involves the creation of the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function. UX design involves creating the interface and interactions for a website or application, and is considered both an act and an art. This profession requires a combination of skills, including visual design, social psychology, development, project management, and most importantly, empathy towards the end-users.
Experiential graphic design
Experiential graphic design is the application of communication skills to the built environment. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams.
Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are wayfinding, placemaking, branded environments, exhibitions and museum displays, public installations and digital environments.
Occupations
Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks, such as design services, publishing, advertising and public relations. As of 2023, median pay was $58,910 per year. The main job titles within the industry are often country specific. They can include graphic designer, art director, creative director, animator and entry level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation, visual effects or interactive design.
Employment in design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, was expected to decline by 22%. Graphic designers will be expected to constantly learn new techniques, programs, and methods.
Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies, others may work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design oriented organizations. Graphic designers may also work freelance, working on their own terms, prices, ideas, etc.
A graphic designer typically reports to the art director, creative director or senior media creative. As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs.
Crowdsourcing in graphic design
Jeff Howe of Wired Magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing." It spans such creative domains as graphic design, architecture, apparel design, writing, illustration, and others. Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent. An example of a divergent task is generating alternative designs for a poster. An example of a convergent task is selecting one poster design. Companies, startups, small businesses and entrepreneurs have all benefitted from design crowdsourcing, since it helps them source graphic designs at a fraction of the budget they used to spend before; obtaining a logo design through crowdsourcing is one of the most common uses. Major companies that operate in the design crowdsourcing space are generally referred to as design contest sites.
Role of graphic design
Graphic design is essential for advertising, branding, and marketing, influencing how people act. Good graphic design builds strong, recognizable brands, communicates messages clearly, and shapes how consumers see and react to things.
One way that graphic design influences consumer behavior is through the use of visual elements, such as color, typography, and imagery. Studies have shown that certain colors can evoke specific emotions and behaviors in consumers, and that typography can influence how information is perceived and remembered. For example, serif fonts are often associated with tradition and elegance, while sans-serif fonts are seen as modern and minimalistic. These factors can all impact the way consumers perceive a brand and its messaging.
Another way that graphic design impacts consumer behavior is through its ability to communicate complex information in a clear and accessible way. For example, infographics and data visualizations can help to distill complex information into a format that is easy to understand and engaging for consumers. This can help to build trust and credibility with consumers, and encourage them to take action.
Ethical consideration in graphic design
Ethics are an important consideration in graphic design, particularly when it comes to accurately representing information and avoiding harmful stereotypes. Graphic designers have a responsibility to ensure that their work is truthful, accurate, and free from any misleading or deceptive elements. This requires a commitment to honesty, integrity, and transparency in all aspects of the design process.
One of the key ethical considerations in graphic design is the responsibility to accurately represent information. This means ensuring that any claims or statements made in advertising or marketing materials are true and supported by evidence. For example, a company should not use misleading statistics to promote their product or service, or make false claims about its benefits. Graphic designers must take care to accurately represent information in all visual elements, such as graphs, charts, and images, and avoid distorting or misrepresenting data.
Another important ethical consideration in graphic design is the need to avoid harmful stereotypes. This means avoiding any images or messaging that perpetuate negative or harmful stereotypes based on race, gender, religion, or other characteristics. Graphic designers should strive to create designs that are inclusive and respectful of all individuals and communities, and avoid reinforcing negative attitudes or biases.
Future of graphic design
The future of graphic design is likely to be heavily influenced by emerging technologies and social trends. Advancements in areas such as artificial intelligence, virtual and augmented reality, and automation are likely to transform the way that graphic designers work and create designs. Social trends, such as a greater focus on sustainability and inclusivity, are also likely to impact the future of graphic design.
One area where emerging technologies are likely to have a significant impact on graphic design is in the automation of certain tasks. Machine learning algorithms, for example, can analyze large datasets and create designs based on patterns and trends, freeing up designers to focus on more complex and creative tasks. Virtual and augmented reality technologies may also allow designers to create immersive and interactive experiences for users, blurring the lines between the digital and physical worlds. Artificial intelligence has also led to many challenges within the world of graphic design. Some of those challenges include maintaining brand authenticity, ensuring quality, issues of bias, and preserving creative control.
Social trends are also likely to shape the future of graphic design. As consumers become more conscious of environmental issues, for example, there may be a greater demand for designs that prioritize sustainability and minimize waste. Similarly, there is likely to be a growing focus on inclusivity and diversity in design, with designers seeking to create designs that are accessible and representative of a wide range of individuals and communities.
See also
Related areas
Related topics
References
Bibliography
Fiell, Charlotte and Fiell, Peter (editors). Contemporary Graphic Design. Taschen Publishers, 2008.
Wiedemann, Julius and Taborda, Felipe (editors). Latin-American Graphic Design. Taschen Publishers, 2008.
External links
The Universal Arts of Graphic Design – Documentary produced by Off Book
Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor
Communication design | Graphic design | [
"Engineering"
] | 4,898 | [
"Design",
"Communication design"
] |
12,806 | https://en.wikipedia.org/wiki/Gemstone | A gemstone (also called a fine gem, jewel, precious stone, semiprecious stone, or simply gem) is a piece of mineral crystal which, when cut or polished, is used to make jewelry or other adornments. Certain rocks (such as lapis lazuli, opal, and obsidian) and occasionally organic materials that are not minerals (such as amber, jet, and pearl) may also be used for jewelry and are therefore often considered to be gemstones as well. Most gemstones are hard, but some softer minerals such as brazilianite may be used in jewelry because of their color or luster or other physical properties that have aesthetic value. However, generally speaking, soft minerals are not typically used as gemstones by virtue of their brittleness and lack of durability.
Found all over the world, the industry of coloured gemstones (i.e. anything other than diamonds) is currently estimated at US$1.55 billion and is projected to steadily increase to a value of $4.46 billion by 2033.
A gem expert is a gemologist, a gem maker is called a lapidarist or gemcutter; a diamond cutter is called a diamantaire.
Characteristics and classification
The traditional classification in the West, which goes back to the ancient Greeks, begins with a distinction between precious and semi-precious; similar distinctions are made in other cultures. In modern use, the precious stones are emerald, ruby, sapphire and diamond, with all other gemstones being semi-precious. This distinction reflects the rarity of the respective stones in ancient times, as well as their quality: all are translucent, with fine color in their purest forms (except for the colorless diamond), and very hard with a hardness score of 8 to 10 on the Mohs scale. Other stones are classified by their color, translucency, and hardness. The traditional distinction does not necessarily reflect modern values; for example, while garnets are relatively inexpensive, a green garnet called tsavorite can be far more valuable than a mid-quality emerald. Another traditional term for semi-precious gemstones used in art history and archaeology is hardstone. Use of the terms 'precious' and 'semi-precious' in a commercial context is, arguably, misleading in that it suggests certain stones are more valuable than others when this is not reflected in the actual market value, although it would generally be correct if referring to desirability.
In modern times gemstones are identified by gemologists, who describe gems and their characteristics using technical terminology specific to the field of gemology. The first characteristic a gemologist uses to identify a gemstone is its chemical composition. For example, diamonds are made of carbon (C) and rubies of aluminium oxide (Al2O3). Many gems are crystals which are classified by their crystal system such as cubic or trigonal or monoclinic. Another term used is habit, the form the gem is usually found in. For example, diamonds, which have a cubic crystal system, are often found as octahedrons.
Gemstones are classified into different groups, species, and varieties. For example, ruby is the red variety of the species corundum, while any other color of corundum is considered sapphire. Other examples are the emerald (green), aquamarine (blue), red beryl (red), goshenite (colorless), heliodor (yellow), and morganite (pink), which are all varieties of the mineral species beryl.
Gems are characterized in terms of their color (hue, tone and saturation), optical phenomena, luster, refractive index, birefringence, dispersion, specific gravity, hardness, cleavage, and fracture. They may exhibit pleochroism or double refraction. They may have luminescence and a distinctive absorption spectrum. Gemstones may also be classified in terms of their "water". This is a recognized grading of the gem's luster, transparency, or "brilliance". Very transparent gems are considered "first water", while "second" or "third water" gems are those of a lesser transparency. Additionally, material or flaws within a stone may be present as inclusions.
Value
Gemstones have no universally accepted grading system. Diamonds are graded using a system developed by the Gemological Institute of America (GIA) in the early 1950s. Historically, all gemstones were graded using the naked eye. The GIA system included a major innovation: the introduction of 10x magnification as the standard for grading clarity. Other gemstones are still graded using the naked eye (assuming 20/20 vision).
A mnemonic device, the "four Cs" (color, cut, clarity, and carats), has been introduced to help describe the factors used to grade a diamond. With modification, these categories can be useful in understanding the grading of all gemstones. The four criteria carry different weights depending upon whether they are applied to colored gemstones or to colorless diamonds. In diamonds, the cut is the primary determinant of value, followed by clarity and color. An ideally cut diamond will sparkle, breaking down light into its constituent rainbow colors (dispersion), chopping it up into bright little pieces (scintillation), and delivering it to the eye (brilliance). In its rough crystalline form, a diamond will do none of these things; it requires proper fashioning and this is called "cut". In gemstones that have color, including colored diamonds, the purity and beauty of that color is the primary determinant of quality.
Physical characteristics that make a colored stone valuable are color, clarity to a lesser extent (emeralds will always have a number of inclusions), cut, unusual optical phenomena within the stone such as color zoning (the uneven distribution of coloring within a gem) and asteria (star effects).
Apart from the more generic and commonly used gemstones such as from diamonds, rubies, sapphires, and emeralds, pearls and opal have also been defined as precious in the jewellery trade. Up to the discoveries of bulk amethyst in Brazil in the 19th century, amethyst was considered a "precious stone" as well, going back to ancient Greece. Even in the last century certain stones such as aquamarine, peridot and cat's eye (cymophane) have been popular and hence been regarded as precious, thus reinforcing the notion that a mineral's rarity may have been implicated in its classification as a precious stone and thus contribute to its value.
Today the gemstone trade no longer makes such a distinction. Many gemstones are used in even the most expensive jewelry, depending on the brand-name of the designer, fashion trends, market supply, treatments, etc. Nevertheless, diamonds, rubies, sapphires, and emeralds still have a reputation that exceeds those of other gemstones.
Rare or unusual gemstones, generally understood to include those gemstones which occur so infrequently in gem quality that they are scarcely known except to connoisseurs, include andalusite, axinite, cassiterite, clinohumite, painite and red beryl.
Gemstone pricing and value are governed by factors and characteristics in the quality of the stone. These characteristics include clarity, rarity, freedom from defects, the beauty of the stone, as well as the demand for such stones. There are different pricing influencers for both colored gemstones, and for diamonds. The pricing on colored stones is determined by market supply-and-demand, but diamonds are more intricate.
In addition to the aesthetic and adorning/ornamental purpose of gemstones, there are many proponents of energy medicine who also value gemstones on the basis of their alleged healing powers.
A gemstone that has been rising in popularity is Cuprian Elbaite Tourmaline which is also called "Paraiba Tourmaline". It was first discovered in the late 1980s in Paraíba, Brazil and later in Mozambique and Nigeria. It is famous for its glowing neon blue color. Paraiba Tourmaline has become one of the most popular gemstones in recent times thanks to its color and is considered to be one of the important gemstones after rubies, emeralds, and sapphires according to Gübelin Gemlab. Even though it is a tourmaline, Paraiba Tourmaline is one of the most expensive gemstones.
Grading
There are a number of laboratories which grade and provide reports on gemstones.
Gemological Institute of America (GIA), the main provider of education services and diamond grading reports
International Gemological Institute (IGI), independent laboratory for grading and evaluation of diamonds, jewelry, and colored stones
Hoge Raad Voor Diamant (HRD Antwerp), The Diamond High Council, Belgium is one of Europe's oldest laboratories; its main stakeholder is the Antwerp World Diamond Centre
American Gemological Society (AGS) is not as widely recognized nor as old as the GIA
American Gem Trade Laboratory which is part of the American Gem Trade Association (AGTA), a trade organization of jewelers and dealers of colored stones
American Gemological Laboratories (AGL), owned by Christopher P. Smith
European Gemological Laboratory (EGL), founded in 1974 by Guy Margel in Belgium
Gemmological Association of All Japan (GAAJ-ZENHOKYO), Zenhokyo, Japan, active in gemological research
The Gem and Jewelry Institute of Thailand (Public Organization) or GIT, Thailand's national institute for gemological research and gem testing, Bangkok
Gemmology Institute of Southern Africa, Africa's premium gem laboratory
Asian Institute of Gemological Sciences (AIGS), the oldest gemological institute in South East Asia, involved in gemological education and gem testing
Swiss Gemmological Institute (SSEF), founded by Henry Hänni, focusing on colored gemstones and the identification of natural pearls
Gübelin Gem Lab, the traditional Swiss lab founded by Eduard Gübelin
Each laboratory has its own methodology to evaluate gemstones. A stone can be called "pink" by one lab while another lab calls it "padparadscha". One lab can conclude a stone is untreated, while another lab might conclude that it is heat-treated. To minimize such differences, seven of the most respected labs, AGTA-GTL (New York), CISGEM (Milano), GAAJ-ZENHOKYO (Tokyo), GIA (Carlsbad), GIT (Bangkok), Gübelin (Lucerne) and SSEF (Basel), have established the Laboratory Manual Harmonisation Committee (LMHC), for the standardization of wording reports, promotion of certain analytical methods and interpretation of results. Country of origin has sometimes been difficult to determine, due to the constant discovery of new source locations. Determining a "country of origin" is thus much more difficult than determining other aspects of a gem (such as cut, clarity, etc.).
Gem dealers are aware of the differences between gem laboratories and will make use of the discrepancies to obtain the best possible certificate.
Cutting and polishing
A few gemstones are used as gems in the crystal or other forms in which they are found. Most, however, are cut and polished for usage as jewelry. The two main classifications are as follows:
Stones cut as smooth, dome-shaped stones called cabochons or simply cab. These have been a popular shape since ancient times and are more durable than faceted gems.
Stones which are cut with a faceting machine by polishing small flat windows called facets at regular intervals at exact angles.
Stones which are opaque or semi-opaque such as opal, turquoise, variscite, etc. are commonly cut as cabochons. These gems are designed to show the stone's color, luster and other surface properties as opposed to internal reflection properties like brilliance. Grinding wheels and polishing agents are used to grind, shape, and polish the smooth dome shape of the stones.
Gems that are transparent are normally faceted, a method that shows the optical properties of the stone's interior to its best advantage by maximizing reflected light which is perceived by the viewer as sparkle. There are many commonly used shapes for faceted stones. The facets must be cut at the proper angles, which vary depending on the optical properties of the gem. If the angles are too steep or too shallow, the light will pass through and not be reflected back toward the viewer. The faceting machine is used to hold the stone onto a flat lap for cutting and polishing the flat facets. Rarely, some cutters use special curved laps to cut and polish curved facets.
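As an illustrative calculation not taken from the article (the refractive index below is the commonly quoted value for diamond), the critical angle, the angle of incidence beyond which light inside the stone is totally internally reflected at the gem-air boundary, follows from Snell's law:

```latex
% Critical angle for total internal reflection at the gem-air boundary (Snell's law)
\sin\theta_c = \frac{n_\text{air}}{n_\text{gem}}
\qquad\Rightarrow\qquad
\theta_c = \arcsin\!\left(\frac{1}{2.42}\right) \approx 24.4^{\circ}
\quad \text{(diamond, } n \approx 2.42\text{)}
```

Light striking an internal facet at more than this angle from the surface normal is reflected back into the stone; gems with a lower refractive index have a larger critical angle, which is why the ideal facet angles differ from one gem species to another.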
Colors
The color of any material is due to the nature of light itself. Daylight, often called white light, is all of the colors of the spectrum combined. When light strikes a material, most of the light is absorbed while a smaller amount of a particular frequency or wavelength is reflected. The part that is reflected reaches the eye as the perceived color. A ruby appears red because it absorbs all other colors of white light while reflecting red.
A material which is mostly the same can exhibit different colors. For example, ruby and sapphire have the same primary chemical composition (both are corundum) but exhibit different colors because of impurities which absorb and reflect different wavelengths of light depending on their individual compositions. Even the same named gemstone can occur in many different colors: sapphires show different shades of blue and pink and "fancy sapphires" exhibit a whole range of other colors from yellow to orange-pink, the latter called "padparadscha sapphire".
This difference in color is based on the atomic structure of the stone. Although the different stones formally have the same chemical composition and structure, they are not exactly the same. Every now and then an atom is replaced by a completely different atom, sometimes as few as one in a million atoms. These so-called impurities are sufficient to absorb certain colors and leave the other colors unaffected. For example, beryl, which is colorless in its pure mineral form, becomes emerald with chromium impurities. If manganese is added instead of chromium, beryl becomes pink morganite. With iron, it becomes aquamarine. Some gemstone treatments make use of the fact that these impurities can be "manipulated", thus changing the color of the gem.
Treatment
Gemstones are often treated to enhance the color or clarity of the stone. In some cases, the treatment applied to the gemstone can also increase its durability. Even though natural gemstones can be transformed using the traditional methods of cutting and polishing, other treatments allow the stone's appearance to be enhanced further. Depending on the type and extent of treatment, these treatments can affect the value of the stone. Some treatments are used widely because the resulting gem is stable, while others are not accepted, most commonly because the gem color is unstable and may revert to the original tone.
Early history
Before the innovation of modern-day tools, thousands of years ago, people are recorded to have used a variety of techniques to treat and enhance gemstones. Some of the earliest methods of gemstone treatment date back to the Minoan Age; one example is foiling, in which metal foil is used to enhance a gemstone's colour. Other methods recorded 2,000 years ago in the book Natural History by Pliny the Elder include oiling and dyeing/staining.
Heat
Heat can either improve or spoil gemstone color or clarity. The heating process has been well known to gem miners and cutters for centuries, and in many stone types heating is a common practice. Most citrine is made by heating amethyst, and partial heating with a strong gradient results in "ametrine" – a stone partly amethyst and partly citrine. Aquamarine is often heated to remove yellow tones, or to change green colors into the more desirable blue, or enhance its existing blue color to a deeper blue.
Nearly all tanzanite is heated at low temperatures to remove brown undertones and give a more desirable blue / purple color. A considerable portion of all sapphire and ruby is treated with a variety of heat treatments to improve both color and clarity.
When jewelry containing diamonds is heated for repairs, the diamond should be protected with boric acid; otherwise, the diamond, which is pure carbon, could be burned on the surface or even burned completely up. When jewelry containing sapphires or rubies is heated, those stones should not be coated with boric acid (which can etch the surface) or any other substance. They do not have to be protected from burning, like a diamond (although the stones do need to be protected from heat stress fracture by immersing the part of the jewelry with stones in the water when metal parts are heated).
Radiation
The irradiation process is widely practiced in the jewelry industry and has enabled the creation of gemstone colors that do not exist or are extremely rare in nature. However, particularly when done in a nuclear reactor, the process can make gemstones radioactive. Health risks related to the residual radioactivity of treated gemstones have led to government regulations in many countries.
Virtually all blue topaz, both the lighter and the darker blue shades such as "London" blue, has been irradiated to change the color from white to blue. Most green quartz (Oro Verde) is also irradiated to achieve the yellow-green color. Diamonds are mainly irradiated to become blue-green or green, although other colors are possible. When light-to-medium-yellow diamonds are treated with gamma rays they may become green; with a high-energy electron beam, blue.
Waxing/oiling
Emeralds containing natural fissures are sometimes filled with wax or oil to disguise them. This wax or oil is also colored to make the emerald appear of better color as well as clarity. Turquoise is also commonly treated in a similar manner.
Fracture filling
Fracture filling has been in use with different gemstones such as diamonds, emeralds, and sapphires. In 2006 "glass-filled rubies" received publicity. Rubies over 10 carats (2 g) with large fractures were filled with lead glass, thus dramatically improving the appearance (of larger rubies in particular). Such treatments are fairly easy to detect.
Bleaching
Another treatment method that is commonly used is bleaching. This method uses a chemical to reduce the colour of the gem. After bleaching, a combination treatment can be done by dyeing the gemstone once the unwanted colours are removed. Hydrogen peroxide is the most commonly used bleaching agent and has notably been used to treat jade and pearls. Bleaching can also be followed by impregnation, which increases the gemstone's durability.
Socioeconomic issues in the gemstone industry
The socio-economic dynamics of the gemstone industry are shaped by market forces and consumer preferences and typically go undiscussed. Changes in demand and prices can significantly affect the livelihoods of those involved in gemstone mining and trade, particularly in developing countries where the industry serves as a crucial source of income.
A situation that arises as a result of this is the exploitation of natural resources and labor within gemstone mining operations. Many mines, particularly in developing countries, face challenges such as inadequate safety measures, low wages, and poor working conditions. Miners, often from disadvantaged backgrounds, endure hazardous working conditions and receive meager wages, contributing to cycles of poverty and exploitation. Gemstone mining operations are frequently conducted in remote or underdeveloped areas, lacking proper infrastructure and access to essential services such as healthcare and education. This further contributes to the pre-existing socio-economic disparities and obstructs community development such that the benefits of gemstone extraction may not adequately reach those directly involved in the process.
Another such issue revolves around environmental degradation resulting from mining activities. Environmental degradation can pose long-term threats to ecosystems and biodiversity, further worsening the socio-economic state in affected regions. Unregulated mining practices often result in deforestation, soil erosion, and water contamination thus threatening ecosystems and biodiversity. Unregulated mining activity can also cause depletion of natural resources, thus diminishing the prospects for sustainable development. The environmental impact of gemstone mining not only poses a threat to ecosystems but also undermines the long-term viability of the industry by diminishing the quality and quantity of available resources.
Furthermore, the gemstone industry is also susceptible to issues related to transparency and ethics, which impact both producers and consumers. The lack of standardized certification processes and the prevalence of illicit practices undermine market integrity and trust. The lack of transparency and accountability in the supply chain aggravates pre-existing inequalities, as middlemen and corporations often capture a disproportionate share of the profits. As a result, the unequal distribution of profits along the supply chain does little to improve socio-economic inequalities, particularly in regions where gemstones are mined.
Addressing these socio-economic challenges requires intensive effort from various stakeholders, including governments, industry executives, and society, to promote sustainable practices and ensure equitable outcomes for all involved parties. Implementing and enforcing regulations to ensure fair labor practices, environmental sustainability, and ethical sourcing is essential. Additionally, investing in community development projects, such as education and healthcare initiatives, can help alleviate poverty and empower marginalized communities dependent on the gemstone industry. Collaboration across sectors is crucial for fostering a more equitable and sustainable gemstone trade that benefits both producers and consumers while respecting human rights and environmental integrity.
Synthetic and artificial gemstones
Synthetic gemstones are distinct from imitation or simulated gems.
Synthetic gems are physically, optically, and chemically identical to the natural stone, but are created in a laboratory. Imitation or simulated stones are chemically different from the natural stone, but may appear quite similar to it; they may be more easily manufactured synthetic gemstones of a different mineral (such as spinel), glass, plastic, resins, or other compounds.
Examples of simulated or imitation stones include cubic zirconia (composed of zirconium oxide), synthetic moissanite, and uncolored synthetic corundum or spinels, all of which are diamond simulants. The simulants imitate the look and color of the real stone but possess neither its chemical nor its physical characteristics. In general, all are less hard than diamond. Moissanite actually has a higher refractive index than diamond, and when presented beside an equivalently sized and cut diamond will show more "fire".
Cultured, synthetic, or "lab-created" gemstones are not imitations: the bulk mineral and trace coloring elements are the same in both. For example, diamonds, rubies, sapphires, and emeralds manufactured in labs possess chemical and physical characteristics identical to those of the naturally occurring variety. Synthetic (lab-created) corundum, including ruby and sapphire, is very common and costs much less than the natural stones. Small synthetic diamonds have been manufactured in large quantities as industrial abrasives, although larger gem-quality synthetic diamonds are becoming available in multiple carats.
Whether a gemstone is a natural stone or synthetic, the chemical, physical, and optical characteristics are the same: They are composed of the same mineral and are colored by the same trace materials, have the same hardness and density and strength, and show the same color spectrum, refractive index, and birefringence (if any). Lab-created stones tend to have a more vivid color since impurities common in natural stones are not present in the synthetic stone. Synthetics are made free of common naturally occurring impurities that reduce gem clarity or color unless intentionally added in order to provide a more drab, natural appearance, or to deceive an assayer. On the other hand, synthetics often show flaws not seen in natural stones, such as minute particles of corroded metal from lab trays used during synthesis.
Types
Some gemstones are more difficult to synthesize than others and not all stones are commercially viable to attempt to synthesize. The following are currently the most common on the market.
Synthetic corundum
Synthetic corundum includes ruby (the red variation) and sapphire (the other color variations), both of which are highly desired and valued. Ruby was the first gemstone to be synthesized, achieved by Auguste Verneuil with his development of the flame-fusion process in 1902. Synthetic corundum continues to be made typically by flame fusion, as it is the most cost-effective method, but it can also be produced through flux growth and hydrothermal growth.
Synthetic beryls
The most commonly synthesized beryl is emerald (green). Yellow, red, and blue beryls are possible but much rarer. Synthetic emerald became possible with the development of the flux growth process and is produced in this way as well as by hydrothermal growth.
Synthetic quartz
Types of synthetic quartz include citrine, rose quartz, and amethyst. Naturally occurring quartz is not rare, but it is nevertheless synthetically produced because it has practical applications beyond aesthetic purposes. Quartz generates an electric current when under pressure and is used in watches, clocks, and oscillators.
Synthetic spinel
Synthetic spinel was first produced by accident. It can be created in any color making it popular to simulate various natural gemstones. It is created through flux growth and hydrothermal growth.
Creation process
There are two main categories for creation of these minerals: melt or solution processes.
Verneuil flame fusion process (melt process)
The flame fusion process was the first process used which successfully created large quantities of synthetic gemstones to be sold on the market. This remains the most cost effective and common method of creating corundums today.
The flame fusion process is completed in a Verneuil furnace. The furnace consists of an inverted blowpipe burner which produces an extremely hot oxyhydrogen flame, a powder dispenser, and a ceramic pedestal. A chemical powder which corresponds to the desired gemstone is passed through this flame. The flame melts the ingredients, which drop onto a plate and solidify into a crystal called a boule. For corundum, the flame must be around 2000 °C. This process takes hours and yields a crystal with the same properties as its natural counterpart.
To produce corundum, pure aluminium oxide (alumina) powder is used, with different additives to achieve different colors.
Chromic oxide for ruby
Iron and titanium oxide for blue sapphire
Nickel oxide for yellow sapphire
Nickel, chromium and iron for orange sapphire
Manganese for pink sapphire
Copper for blue-green sapphire
Cobalt for dark blue sapphire
Czochralski process (melt process)
In 1918 this process was developed by Jan Czochralski, and it is also referred to as the "crystal pulling" method. In this process, the required gemstone materials are added to a crucible and melted. A seed crystal is placed into the melt in the crucible. As the gem begins to crystallize on the seed, the seed is pulled away and the gem continues to grow. This method is used for corundum but is currently the least popular one.
Flux growth (solution process)
The flux growth process was the first process able to synthesize emerald. Flux growth begins with a crucible which can withstand high heat, made of either graphite or platinum, filled with a molten liquid referred to as flux. The specific gem ingredients are added and dissolved in this fluid and recrystallize to form the desired gemstone. This is a longer process compared to the flame fusion process and can take from two months up to a year depending on the desired final size.
Hydrothermal growth (solution process)
The hydrothermal growth process attempts to imitate the natural growth process of minerals. The required gem materials are sealed in a container of water and placed under extreme pressure. The water is heated beyond its boiling point, which allows normally insoluble materials to dissolve. Because more material cannot be added once the container is sealed, a larger gem is grown by starting with a "seed" stone from a previous batch, onto which the new material crystallizes. This process takes a few weeks to complete.
Characteristics
Synthetic gemstones share chemical and physical properties with natural gemstones, but there are some slight differences that can be used to discern synthetic from natural. These differences are subtle and often require microscopy to detect. Undetectable synthetics pose a threat to the market if they can be sold as rare natural gemstones. Because of this, there are certain characteristics gemologists look for. Each crystal is characteristic of the environment and growth process under which it was created.
Gemstones created from the flame-fusion process may have:
small air bubbles which were trapped inside the boule during the formation process
visible banding from formation of the boule
chatter marks on the surface, which appear crack-like and are caused by damage during polishing of the gemstone
Gemstones created from the flux melt process may have:
small cavities which are filled with flux solution
inclusions in the gemstone from the crucible used
Gemstones created from hydrothermal growth may have:
inclusions from the container used
History
Prior to the development of synthesising processes, the alternatives on the market to natural gemstones were imitations or fakes. In 1837, the first successful synthesis of ruby occurred: the French chemist Marc Gaudin managed to produce small crystals of ruby by melting together potassium aluminium sulphate and potassium chromate, through what would later be known as the flux melt process. Following this, another French chemist, Edmond Frémy, was able to grow large quantities of small ruby crystals using a lead flux.
A few years later an alternative to flux melt was developed, which led to the introduction of what was labeled "reconstructed ruby" to the market. Reconstructed ruby was marketed as being produced by melting together bits of natural ruby into larger rubies. Later attempts to recreate this process found it not to be possible, and it is believed reconstructed rubies were most likely created using a multi-step method of melting ruby powder.
Auguste Verneuil, a student of Fremy, went on to develop flame-fusion as an alternative to the flux-melt method. He developed large furnaces which were able to produce large quantities of corundums more efficiently and shifted the gemstone market dramatically. This process is still used today and the furnaces have not changed much from the original design. World production of corundum using this method reaches 1000 million carats a year.
List of rare gemstones
Painite was discovered in 1956 in Ohngaing in Myanmar. The mineral was named in honor of the British gemologist Arthur Charles Davy Pain. At one point it was considered the rarest mineral on Earth.
Tanzanite was discovered in 1967 in Northern Tanzania. With its supply possibly declining in the next 30 years, this gemstone is considered to be more rare than a diamond. This type of gemstone receives its vibrant blue from being heated.
Hibonite was discovered in 1956 in Madagascar. It was named after the discoverer, French geologist Paul Hibon. Gem quality hibonite has been found only in Myanmar.
Red beryl or bixbite was discovered in an area near Beaver, Utah in 1904 and named after the American mineralogist Maynard Bixby.
Jeremejevite was discovered in 1883 in Russia and named after its discoverer, Pawel Wladimirowich Jeremejew (1830–1899).
Chambersite was discovered in 1957 in Chambers County, Texas, US, and named after the deposit's location.
Taaffeite was discovered in 1945. It was named after the discoverer, the Irish gemologist Count Edward Charles Richard Taaffe.
Musgravite was discovered in 1967 in the Musgrave Mountains in South Australia and named for the location.
Black opal, the rarest type of opal, is mined mainly in New South Wales, Australia. Having a darker body tone, this gemstone can display a variety of colours.
Grandidierite was discovered by Antoine François Alfred Lacroix (1863–1948) in 1902 in Tuléar Province, Madagascar. It was named in honor of the French naturalist and explorer Alfred Grandidier (1836–1912).
Poudretteite was discovered in 1965 at the Poudrette Quarry in Canada and named after the quarry's owners and operators, the Poudrette family.
Serendibite was discovered in Sri Lanka by Sunil Palitha Gunasekera in 1902 and named after Serendib, the old Arabic name for Sri Lanka.
Zektzerite was discovered by Bart Cannon in 1968 on Kangaroo Ridge near Washington Pass in Okanogan County, Washington, USA. The mineral was named in honor of mathematician and geologist Jack Zektzer, who presented the material for study in 1976.
In popular culture
French singer-songwriter Nolwenn Leroy was inspired by the gemstones for her 2017 album Gemme (meaning gemstone in French) and the single of the same name.
Land of the Lustrous is a Japanese manga and anime series whose main characters are depicted as humanoid jewels.
Steven Universe is an American animated television series whose main characters are magical gemstones who project themselves as feminine humanoids.
See also
Assembled gem
Gemology
List of gemstones by species
List of individual gemstones
List of names derived from gemstones
List of emeralds by size
List of sapphires by size
List of diamonds
Luminous gemstones
References
External links
Jewellery components
Materials
Mineralogy
Stone objects | Gemstone | [
"Physics",
"Technology"
] | 6,900 | [
"Materials",
"Jewellery components",
"Gemstones",
"Components",
"Matter"
] |
12,821 | https://en.wikipedia.org/wiki/Gate | A gate or gateway is a point of entry to or from a space enclosed by walls. The word is derived from Old Norse "gat", meaning road or path; other terms include yett and port. The concept originally referred to the gap or hole in the wall or fence, rather than a barrier which closed it. Gates may prevent or control the entry or exit of individuals, or they may be merely decorative. The moving part or parts of a gateway may be considered "doors", as they are fixed at one side whilst opening and closing like one.
A gate may have a latch that can be raised and lowered, either to open the gate or to keep it from swinging open. Gate operation can be either automated or manual. Locks are also used on gates to increase security.
Larger gates can be used for a whole building, such as a castle or fortified town. Doors can also be considered gates when they are used to block entry, as is prevalent within a gatehouse.
Purpose-specific types of gate
Baby gate: a safety gate to protect babies and toddlers
Badger gate: gate to allow badgers to pass through rabbit-proof fencing
City gate of a walled city
Hampshire gate (a.k.a. New Zealand gate, wire gate, etc.)
Kissing gate on a footpath
Lychgate with a roof
Mon (Japanese for "gate"). The religious torii is comparable to the Chinese pailou (paifang), the Indian torana, the Indonesian paduraksa, and the Korean hongsalmun. Mon are widespread in Japanese gardens.
Portcullis of a castle
Race gate used for checkpoints on race tracks.
Slip gate on footpaths
Turnstile
Watergate of a castle by navigable water
Slalom skiing gates
Wicket gate
Image gallery
See also
City Gate
Barricade
Boom barrier (a.k.a. boom gate)
Border
Gate tower
Gopuram
Leave the gate as you found it
Portal (architecture)
Portcullis
Threshold (disambiguation)
Triumphal arch
List of scandals with "-gate" suffix
Watergate, as used in politics
References
External links
Doors
Fortification (architectural elements)
Garden features
Buildings and structures by type | Gate | [
"Engineering"
] | 430 | [
"Buildings and structures by type",
"Architecture"
] |
12,823 | https://en.wikipedia.org/wiki/Garbage%20in%2C%20garbage%20out | In computer science, garbage in, garbage out (GIGO) is the concept that flawed, biased or poor quality ("garbage") information or input produces a result or output of similar ("garbage") quality. The adage points to the need to improve data quality in, for example, programming. Rubbish in, rubbish out (RIRO) is an alternate wording.
The principle applies to all logical argumentation: soundness implies validity, but validity does not imply soundness.
History
The expression was popular in the early days of computing. The first known use is in a 1957 syndicated newspaper article about US Army mathematicians and their work with early computers, in which an Army Specialist named William D. Mellin explained that computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs. The underlying principle had already been noted by Charles Babbage, the inventor of the first programmable computing device design, who remarked that he could not apprehend the confusion of ideas that would lead someone to expect the right answers to come out of a machine into which the wrong figures had been put.
More recently, the Marine Accident Investigation Branch has come to a similar conclusion.
The term may have been derived from last-in, first-out (LIFO) or first-in, first-out (FIFO).
Uses
This phrase can be used as an explanation for the poor quality of a digitized audio or video file. Although digitizing can be the first step in cleaning up a signal, it does not, by itself, improve the quality. Defects in the original analog signal will be faithfully recorded, but might be identified and removed by a subsequent step by digital signal processing.
GIGO is also used to describe failures in human decision-making due to faulty, incomplete, or imprecise data.
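As a minimal illustrative sketch (not part of the original article; the numbers and the sentinel value are invented), even a perfectly correct program produces a meaningless result when its input is garbage:

```python
def average(readings):
    """A correct routine: the arithmetic mean of its inputs."""
    return sum(readings) / len(readings)

good_data = [20.1, 19.8, 20.3, 20.0]     # plausible temperature readings in degrees C
bad_data = [20.1, 19.8, -9999.0, 20.0]   # -9999.0 is a sensor error code, not a reading

print(average(good_data))  # about 20.05: sensible output from sensible input
print(average(bad_data))   # about -2484.8: garbage in, garbage out
```

The fault lies in the data rather than in the code; validating inputs, for example rejecting sentinel values before computing, is the usual remedy.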
In audiology, GIGO describes the process that occurs at the dorsal cochlear nucleus (DCN) when auditory neuropathy spectrum disorder is present. This occurs when the neural firing from the cochlea has become unsynchronized, resulting in a static-filled sound being input into the DCN and then passed up the chain to the auditory cortex. The term was applied by Dan Schwartz at the 2012 Worldwide ANSD Conference, St. Petersburg, Florida, on 16 March 2012; and adopted as industry jargon to describe the electrical signal received by the dorsal cochlear nucleus and passed up the auditory chain to the superior olivary complex on the way to the auditory cortex destination.
GIGO was the name of a Usenet gateway program to FidoNet, MAUSnet, and others.
See also
Algorithmic bias
Computer says no
FINO
Auditory neuropathy spectrum disorder
Standard error
Undefined behavior
Data processing inequality
No free lunch theorem
References
Computer errors
Computer humour
Computer jargon | Garbage in, garbage out | [
"Technology"
] | 536 | [
"Computer errors",
"Computing terminology",
"Computer jargon",
"Natural language and computing"
] |
12,832 | https://en.wikipedia.org/wiki/G%20protein-coupled%20receptor | G protein-coupled receptors (GPCRs), also known as seven-(pass)-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), form a large group of evolutionarily related proteins that are cell surface receptors that detect molecules outside the cell and activate cellular responses. They are coupled with G proteins. They pass through the cell membrane seven times, forming six loops of amino acid residues (three extracellular loops that interact with ligand molecules and three intracellular loops that interact with G proteins) together with an N-terminal extracellular region and a C-terminal intracellular region, which is why they are sometimes referred to as seven-transmembrane receptors. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within the transmembrane helices (rhodopsin-like family). They are all activated by agonists, although a spontaneous auto-activation of an empty receptor has also been observed.
G protein-coupled receptors are found only in eukaryotes, including yeast, and choanoflagellates. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases.
There are two principal signal transduction pathways involving the G protein-coupled receptors:
the cAMP signal pathway and
the phosphatidylinositol signal pathway.
When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
GPCRs are an important drug target and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated at 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases: mental disorders, metabolic (including endocrinological) disorders, immunological disorders (including viral infections), cardiovascular and inflammatory disorders, disorders of the senses, and cancer. The long-known association between GPCRs and many endogenous and exogenous substances, resulting in, for example, analgesia, is another dynamically developing field of pharmaceutical research.
History and significance
With the determination of the first structure of the complex between a G protein-coupled receptor (GPCR) and a G-protein trimer (Gαβγ) in 2011, a new chapter of GPCR research was opened for structural investigations of global switches with more than one protein being investigated. The previous breakthroughs involved the determination of the crystal structure of the first GPCR, rhodopsin, in 2000 and the crystal structure of the first GPCR with a diffusible ligand (β2AR) in 2007. The way in which the seven transmembrane helices of a GPCR are arranged into a bundle was suspected based on the low-resolution model of frog rhodopsin from cryogenic electron microscopy studies of two-dimensional crystals. The crystal structure of rhodopsin, which came three years later, was not a surprise apart from the presence of an additional cytoplasmic helix H8 and the precise location of a loop covering the retinal binding site. However, it provided a scaffold which was hoped to be a universal template for homology modeling and drug design for other GPCRs – a notion that proved to be too optimistic.
The results seven years later were surprising because the crystallization of the β2-adrenergic receptor (β2AR) with a diffusible ligand revealed a shape of the receptor's extracellular side quite different from that of rhodopsin. This area is important because it is responsible for ligand binding and is targeted by many drugs. Moreover, the ligand binding site was much more spacious than in the rhodopsin structure and was open to the exterior. In the other receptors crystallized shortly afterwards, the binding site was even more easily accessible to the ligand. The new structures, complemented with biochemical investigations, uncovered mechanisms of action of molecular switches which modulate the structure of the receptor, leading to activation states for agonists or to complete or partial inactivation states for inverse agonists.
The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for their work that was "crucial for understanding how G protein-coupled receptors function". There have been at least seven other Nobel Prizes awarded for some aspect of G protein–mediated signaling. As of 2012, two of the top ten global best-selling drugs (Advair Diskus and Abilify) act by targeting G protein-coupled receptors.
Classification
The exact size of the GPCR superfamily is unknown, but at least 831 different human genes (or about 4% of the entire protein-coding genome) have been predicted to code for them from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily was classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes.
The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of class A GPCRs, over half of these are predicted to encode olfactory receptors, while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs have a common structure and mechanism of signal transduction. The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19).
According to the classical A-F system, GPCRs can be grouped into six classes based on sequence homology and functional similarity:
Class A (or 1) (Rhodopsin-like)
Class B (or 2) (Secretin receptor family)
Class C (or 3) (Metabotropic glutamate/pheromone)
Class D (or 4) (Fungal mating pheromone receptors)
Class E (or 5) (Cyclic AMP receptors)
Class F (or 6) (Frizzled/Smoothened)
More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2, Secretin) has been proposed for vertebrate GPCRs. They correspond to classical classes C, A, B2, F, and B.
An early study based on available DNA sequence suggested that the human genome encodes roughly 750 G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions.
Some web-servers and bioinformatics prediction methods have been used for predicting the classification of GPCRs according to their amino acid sequence alone, by means of the pseudo amino acid composition approach.
Physiological roles
GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include:
The visual sense: The opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose.
The gustatory sense (taste): GPCRs in taste cells mediate release of gustducin in response to bitter-, umami- and sweet-tasting substances.
The sense of smell: Receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors)
Behavioral and mood regulation: Receptors in the mammalian brain bind several different neurotransmitters, including serotonin, dopamine, histamine, GABA, and glutamate
Regulation of immune system activity and inflammation: chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response. GPCRs are also involved in immune-modulation, e. g. regulating interleukin induction or suppressing TLR-induced immune responses from T cells.
Autonomic nervous system transmission: Both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways, responsible for control of many automatic functions of the body such as blood pressure, heart rate, and digestive processes
Cell density sensing: A novel GPCR role in regulating cell density sensing.
Homeostasis modulation (e.g., water balance).
Involved in growth and metastasis of some types of tumors.
Used in the endocrine system by peptide and amino-acid-derivative hormones that bind to GPCRs on the cell membrane of a target cell. This stimulates production of cAMP, which in turn activates several kinases, allowing for a cellular response such as transcription.
Receptor structure
GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins that resemble GPCRs, such as channelrhodopsin, may contain ion channels within their protein.
In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. In 2007, the first structure of a human GPCR was solved. This human β2-adrenergic receptor GPCR structure proved highly similar to the bovine rhodopsin. The structures of activated or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helix (TM5 and TM6). The structure of activated beta-2 adrenergic receptor in complex with Gs confirmed that the Gα binds to a cavity created by this movement.
GPCRs exhibit a similar structure to some other proteins with seven transmembrane domains, such as microbial rhodopsins and adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2). However, these 7TMH (7-transmembrane helices) receptors and channels do not associate with G proteins. In addition, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e. GPCRs usually have an extracellular N-terminus, cytoplasmic C-terminus, whereas ADIPORs are inverted).
Structure–function relationships
In terms of structure, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain that is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate-binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with the residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion) leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation.
The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand-binding. For example, the C-terminus of M3 muscarinic receptors is sufficient, and the six-amino-acid polybasic (KKKRRK) domain in the C-terminus is necessary, for its preassembly with Gq proteins. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins, leading to the creation of signaling complexes involved in extracellular-signal regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein-decoupling and internalization of GPCRs are important mechanisms of desensitization. In addition, internalized "mega-complexes" consisting of a single GPCR, β-arr (in the tail conformation), and heterotrimeric G protein exist and may account for protein signaling from endosomes.
A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops. Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling.
GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices.
Mechanism
The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. Further effect depends on the type of G protein. G proteins are subsequently inactivated by GTPase activating proteins, known as RGS proteins.
Ligand binding
GPCRs include one or more receptors for the following ligands:
sensory signal mediators (e.g., light and olfactory stimulatory molecules);
adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin;
biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, serotonin, and melatonin);
glutamate (metabotropic effect);
glucagon;
acetylcholine (muscarinic effect);
chemokines;
lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes);
peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone [FSH], gonadotropin-releasing hormone [GnRH], neurokinin, thyrotropin-releasing hormone [TRH], and oxytocin);
and endocannabinoids.
GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors.
However, in contrast to other types of receptors that have been studied, wherein ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. An exception is the protease-activated receptors, which are activated by cleavage of part of their extracellular domain.
Conformational change
The transduction of the signal through the membrane by the receptor is not completely understood. It is known that in the inactive state, the GPCR is bound to a heterotrimeric G protein complex. Binding of an agonist to the GPCR results in a conformational change in the receptor that is transmitted to the bound Gα subunit of the heterotrimeric G protein via protein domain dynamics. The activated Gα subunit exchanges GTP in place of GDP which in turn triggers the dissociation of Gα subunit from the Gβγ dimer and from the receptor. The dissociated Gα and Gβγ subunits interact with other intracellular proteins to continue the signal transduction cascade while the freed GPCR is able to rebind to another heterotrimeric G protein to form a new complex that is ready to initiate another round of signal transduction.
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: Agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other.
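One common textbook way to make this picture precise, offered here as an illustrative sketch rather than something stated in the article, is the simple two-state model. If inactive R and active R* interconvert with equilibrium constant L = [R*]/[R], and a ligand A binds the two states with dissociation constants K_A and K_A*, the ratio of active to inactive receptor species is:

```latex
\frac{[\mathrm{R}^{*}] + [\mathrm{AR}^{*}]}{[\mathrm{R}] + [\mathrm{AR}]}
  \;=\; L \,\frac{1 + [\mathrm{A}]/K_{\mathrm{A}^{*}}}{1 + [\mathrm{A}]/K_{\mathrm{A}}}
```

A ligand that binds the active state more tightly (K_A* < K_A) increases this ratio and acts as an agonist; one that prefers the inactive state decreases it and acts as an inverse agonist; one with equal affinities leaves the equilibrium unchanged, matching the description of a neutral antagonist above.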
G-protein activation/deactivation cycle
When the receptor is inactive, the GEF domain may be bound to an also inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are a trimer of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that is rendered inactive when reversibly bound to Guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a 10:1 ratio of cytosolic GTP:GDP so exchange for GTP is ensured. At this point, the subunits of the G-protein dissociate from the receptor, as well as each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited due to the palmitoylation of Gα and the presence of an isoprenoid moiety that has been covalently added to the C-termini of Gγ.
Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called regulators of G-protein signaling, or RGS proteins, which are a type of GTPase-activating protein, or GAP. In fact, many of the primary effector proteins (e.g., adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination.
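The cycle can also be summarized as a toy rate model. The sketch below is illustrative only, with invented rate constants and function names rather than measured values from the article; it captures the balance between receptor-catalyzed GDP/GTP exchange and GTP hydrolysis, which RGS (GAP) proteins accelerate.

```python
# Toy simulation of the G-protein activation/deactivation cycle described above.
# All names, rate constants, and initial values are illustrative assumptions, not measured data.

K_EXCHANGE = 0.5    # per second per unit of active receptor: receptor-catalyzed GDP -> GTP exchange
K_HYDROLYSIS = 0.1  # per second: intrinsic GTP hydrolysis by the Galpha subunit
RGS_BOOST = 10.0    # fold acceleration of hydrolysis by RGS (GAP) proteins

def active_galpha_fraction(active_receptor, rgs_present, t_end=60.0, dt=0.01):
    """Euler integration of the fraction of Galpha in the GTP-bound (active) state."""
    g_active = 0.0  # start with the whole G-protein pool inactive (GDP-bound)
    k_off = K_HYDROLYSIS * (RGS_BOOST if rgs_present else 1.0)
    for _ in range(int(t_end / dt)):
        g_inactive = 1.0 - g_active
        activation = K_EXCHANGE * active_receptor * g_inactive   # GEF activity of the active receptor
        deactivation = k_off * g_active                          # GTP hydrolysis, GAP-accelerated if RGS present
        g_active += (activation - deactivation) * dt
    return g_active

print(active_galpha_fraction(1.0, rgs_present=False))  # approaches about 0.83 (= 0.5 / 0.6)
print(active_galpha_fraction(1.0, rgs_present=True))   # approaches about 0.33 (= 0.5 / 1.5)
```

The only point of the sketch is the shape of the cycle: the signaling output tracks the balance between receptor-driven nucleotide exchange and GAP-accelerated hydrolysis, so boosting RGS activity lowers the steady-state active fraction.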
Crosstalk
GPCR downstream signals have been shown to possibly interact with integrin signals, such as FAK. Integrin signaling will phosphorylate FAK, which can then decrease GPCR Gαs activity.
Signaling
If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP.
Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state.
Adenylate cyclases (of which nine membrane-bound forms and one cytosolic form are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins.
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arr's have high affinity only to the phosphorylated form of most GPCRs (see above or below), the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility for interaction does allow for G-protein-independent signaling to occur.
G-protein-dependent signaling
There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal transducing properties of the various possible βγ combinations do not appear to radically differ from one another, these classes are defined according to the isoform of their α-subunit.
While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. In addition, a conformation that preferably activates one isoform of Gα may activate another if the preferred is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological or experimental conditions.
Gα signaling
The effector of both the Gαs and Gαi/o pathways is the cyclic-adenosine monophosphate (cAMP)-generating enzyme adenylate cyclase, or AC. While there are ten different AC gene products in mammals, each with subtle differences in tissue distribution or function, all catalyze the conversion of cytosolic adenosine triphosphate (ATP) to cAMP, and all are directly stimulated by G-proteins of the Gαs class. In contrast, however, interaction with Gα subunits of the Gαi/o type inhibits AC from generating cAMP. Thus, a GPCR coupled to Gαs counteracts the actions of a GPCR coupled to Gαi/o, and vice versa. The level of cytosolic cAMP may then determine the activity of various ion channels as well as members of the ser/thr-specific protein kinase A (PKA) family. Thus cAMP is considered a second messenger and PKA a secondary effector.
The effector of the Gαq/11 pathway is phospholipase C-β (PLCβ), which catalyzes the cleavage of membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2) into the second messengers inositol (1,4,5) trisphosphate (IP3) and diacylglycerol (DAG). IP3 acts on IP3 receptors found in the membrane of the endoplasmic reticulum (ER) to elicit Ca2+ release from the ER, while DAG diffuses along the plasma membrane where it may activate any membrane localized forms of a second ser/thr kinase called protein kinase C (PKC). Since many isoforms of PKC are also activated by increases in intracellular Ca2+, both these pathways can also converge on each other to signal through the same secondary effector. Elevated intracellular Ca2+ also binds and allosterically activates proteins called calmodulins, which in turn regulate further targets such as the Ca2+/calmodulin-dependent kinases (CaMKs). The effectors of the Gα12/13 pathway are Rho guanine-nucleotide exchange factors (RhoGEFs), which, when bound to Gα12/13, allosterically activate the cytosolic small GTPase, Rho. Once bound to GTP, Rho can then go on to activate various proteins responsible for cytoskeleton regulation such as Rho-kinase (ROCK). Most GPCRs that couple to Gα12/13 also couple to other sub-classes, often Gαq/11.
Gβγ signaling
The above descriptions ignore the effects of Gβγ–signalling, which can also be important, in particular in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs), P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide-3-kinase (PI3K) isoforms.
G-protein-independent signaling
Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling such as β-arrs, GRKs, and Srcs. Such signaling has been shown to be physiologically relevant; for example, β-arrestin signaling mediated by the chemokine receptor CXCR3 was necessary for full-efficacy chemotaxis of activated T cells. In addition, further scaffolding proteins involved in the subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family.
Examples
In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits.
In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore, it seems likely that some mechanisms previously believed related purely to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off.
In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin.
GPCR-independent signaling by heterotrimeric G-proteins
Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling. There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, guanine-nucleotide dissociation inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed activators of G-protein signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear.
Details of cAMP and PIP2 pathways
There are two principal signal transduction pathways involving the G protein-linked receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway.
cAMP signal pathway
The cAMP signal transduction pathway involves five main components: stimulative hormone receptor (Rs) or inhibitory hormone receptor (Ri); stimulative regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase.
Stimulative hormone receptor (Rs) is a receptor that can bind with stimulative signal molecules, while inhibitory hormone receptor (Ri) is a receptor that can bind with inhibitory signal molecules.
Stimulative regulative G-protein is a G-protein linked to stimulative hormone receptor (Rs), and its α subunit upon activation could stimulate the activity of an enzyme or other intracellular metabolism. On the contrary, inhibitory regulative G-protein is linked to an inhibitory hormone receptor, and its α subunit upon activation could inhibit the activity of an enzyme or other intracellular metabolism.
Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes the conversion of ATP to cAMP with the help of cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and is an allosteric activator of protein kinase A.
Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in the metabolic pathway. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The enzyme contains two catalytic subunits and two regulatory subunits. When there is no cAMP, the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects.
These signals then can be terminated by cAMP phosphodiesterase, which is an enzyme that degrades cAMP to 5'-AMP and inactivates protein kinase A.
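The relationships described above can be made concrete with a toy simulation. The following Python sketch is purely illustrative: the rate constants and the simple first-order forms are assumptions chosen for readability, not measured values, and the real pathway involves many more regulated steps.

```python
# Minimal, illustrative simulation of the cAMP pathway described above:
# active Gs stimulates adenylyl cyclase, which raises cAMP; cAMP activates
# PKA; phosphodiesterase degrades cAMP back to 5'-AMP.
# All rate constants are invented placeholders, not measured values.

def simulate_camp(stimulus=1.0, k_synth=2.0, k_pde=0.5,
                  k_pka_on=1.0, k_pka_off=0.2, dt=0.01, t_end=20.0):
    camp, pka_active = 0.0, 0.0
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        # adenylyl cyclase activity is proportional to the Gs stimulus;
        # phosphodiesterase removes cAMP at a first-order rate
        d_camp = k_synth * stimulus - k_pde * camp
        # PKA activation rises with cAMP (regulatory subunits release the
        # catalytic subunits) and relaxes when cAMP falls
        d_pka = k_pka_on * camp * (1.0 - pka_active) - k_pka_off * pka_active
        camp += d_camp * dt
        pka_active += d_pka * dt
        trace.append((t, camp, pka_active))
    return trace

if __name__ == "__main__":
    for t, camp, pka in simulate_camp()[::200]:
        print(f"t={t:5.1f}  cAMP={camp:5.2f}  PKA active fraction={pka:4.2f}")
```

Running the sketch shows cAMP rising toward a steady state set by the balance between adenylyl cyclase and phosphodiesterase activity, with PKA activation following behind.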
Phosphatidylinositol signal pathway
In the phosphatidylinositol signal pathway, the extracellular signal molecule binds with the G-protein receptor (Gq) on the cell surface and activates phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds with the IP3 receptor in the membrane of the smooth endoplasmic reticulum and mitochondria to open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities, leading to cellular responses.
The effects of Ca2+ are also remarkable: it cooperates with DAG in activating PKC and can activate the CaM kinase pathway, in which the calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a change in conformation, and activates CaM kinase II, which has the unique ability to increase its binding affinity to CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway.
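For readers who find a dependency map easier to follow than prose, the sketch below encodes the branching of this pathway as a small Python graph. Only the connections named in the text are included, and the node names are informal labels rather than standard identifiers.

```python
# A toy dependency map of the phosphatidylinositol branch described above.
# It only records which messenger activates which downstream component;
# the structure (not any kinetics) comes from the text.

PIP2_PATHWAY = {
    "GPCR (Gq) + ligand": ["phospholipase C"],
    "phospholipase C": ["IP3", "DAG"],          # PIP2 is cleaved into both
    "IP3": ["Ca2+ release (ER)"],
    "DAG": ["PKC"],                             # DAG helps activate PKC
    "Ca2+ release (ER)": ["PKC", "calmodulin"], # Ca2+ cooperates with DAG
    "calmodulin": ["CaM kinase II"],
    "PKC": [],
    "CaM kinase II": [],
}

def downstream(node, graph, seen=None):
    """Return every component reachable from `node` in the pathway map."""
    seen = set() if seen is None else seen
    for target in graph.get(node, []):
        if target not in seen:
            seen.add(target)
            downstream(target, graph, seen)
    return seen

if __name__ == "__main__":
    print(sorted(downstream("GPCR (Gq) + ligand", PIP2_PATHWAY)))
```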
Receptor regulation
GPCRs become desensitized when exposed to their ligand for a long period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases.
Phosphorylation by cAMP-dependent protein kinases
Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active, the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. cAMP-dependent, PKA-mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated.
Phosphorylation by GRKs
The G protein-coupled receptor kinases (GRKs) are protein kinases that phosphorylate only active GPCRs and are key modulators of GPCR signaling. They constitute a family of seven mammalian serine/threonine protein kinases that phosphorylate agonist-bound receptors. GRK-mediated receptor phosphorylation rapidly initiates profound impairment of receptor signaling and desensitization. The activity and subcellular targeting of GRKs are tightly regulated by interaction with receptor domains, G protein subunits, lipids, anchoring proteins and calcium-sensitive proteins.
Phosphorylation of the receptor can have two consequences:
Translocation: The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated within the acidic vesicular environment and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone, by allowing resensitisation to follow desensitisation. Alternatively, the receptor may undergo lysosomal degradation, or remain internalised, where it is thought to participate in the initiation of signalling events, the nature of which depends on the internalised vesicle's subcellular localisation.
Arrestin linking: The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, in effect switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retina cells to compensate for exposure to bright light. In many cases, arrestin's binding to the receptor is a prerequisite for translocation. For example, beta-arrestin bound to β2-adrenoreceptors acts as an adaptor for binding with clathrin, and with the beta-subunit of AP2 (clathrin adaptor molecules); thus, the arrestin here acts as a scaffold assembling the components needed for clathrin-mediated endocytosis of β2-adrenoreceptors.
Mechanisms of GPCR signal termination
As mentioned above, G-proteins may terminate their own activation due to their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec) and, thus, it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as spatial resolution due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, the RGS proteins are promiscuous in their ability to deactivate G-proteins, while which RGS is involved in a given signaling pathway seems more determined by the tissue and GPCR involved than anything else. RGS proteins also increase the rate of GTP–GDP exchange at GPCRs (i.e., act as a sort of co-GEF), further contributing to the time resolution of GPCR signaling.
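The figures quoted above can be checked with a few lines of arithmetic, treating the quoted rates as first-order rate constants so that the mean lifetime of the active state is 1/k:

```python
# Back-of-the-envelope check of the numbers quoted above for Galpha
# deactivation. Each rate is treated as a first-order rate constant,
# so the mean lifetime of the GTP-bound (active) state is 1/k.

k_intrinsic = 0.02   # GTP hydrolysis events per second, no RGS bound
k_with_rgs = 30.0    # per second when an RGS protein (GAP) is bound

print(f"mean active lifetime, intrinsic: {1 / k_intrinsic:.0f} s")         # ~50 s
print(f"mean active lifetime, with RGS:  {1 / k_with_rgs * 1000:.0f} ms")  # ~33 ms
print(f"fold acceleration: {k_with_rgs / k_intrinsic:.0f}x")               # 1500x
```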
In addition, the GPCR may be desensitized itself. This can occur as:
a direct result of ligand occupation, wherein the change in conformation allows recruitment of G protein-coupled receptor kinases (GRKs), which go on to phosphorylate various serine/threonine residues of intracellular loop 3 (IL-3) and the C-terminal tail. Upon GRK phosphorylation, the GPCR's affinity for β-arrestin (β-arrestin-1/2 in most tissues) is increased, at which point β-arrestin may bind and act both to sterically hinder G-protein coupling and to initiate the process of receptor internalization through clathrin-mediated endocytosis. Because only the liganded receptor is desensitized by this mechanism, it is called homologous desensitization.
the affinity for β-arrestin may be increased in a ligand occupation and GRK-independent manner through phosphorylation of different ser/thr sites (but also of IL-3 and the C-terminal tail) by PKC and PKA. These phosphorylations are often sufficient to impair G-protein coupling on their own as well.
PKC/PKA may, instead, phosphorylate GRKs, which can also lead to GPCR phosphorylation and β-arrestin binding in an occupation-independent manner. These latter two mechanisms allow for desensitization of one GPCR due to the activities of others, or heterologous desensitization. GRKs may also have GAP domains and so may contribute to inactivation through non-kinase mechanisms as well. A combination of these mechanisms may also occur.
Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inwardly as a result of interactions between the molecules of clathrin, forming a clathrin-coated pit. Once the pit has been pinched off the plasma membrane due to the actions of two other proteins called amphiphysin and dynamin, it is now an endocytic vesicle. At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation.
At any point in this process, the β-arrestins may also recruit other proteins—such as the non-receptor tyrosine kinase (nRTK) c-SRC—which may activate ERK1/2 or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase Ras, or recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2), at which point signaling is initiated due to their close proximity to one another. Other targets of c-SRC are the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of the budding vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane.
GPCR cellular regulation
Receptor desensitization is mediated through a combination of phosphorylation, β-arr binding, and endocytosis as described above. Downregulation occurs when an endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have a low pH (≈4.8, vs. ≈7.2 in the cytosol), which acts to denature the GPCRs. In addition, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, and so the peptide bonds joining the residues of the GPCR together may be cleaved. Whether or not a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and magnitude of the signal.
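The quoted pH values translate into a large difference in free proton concentration, since pH is the negative base-10 logarithm of [H+]; the short calculation below makes the ratio explicit.

```python
# The pH values quoted above imply a large difference in free proton
# concentration between the lysosome interior and the cytosol,
# since [H+] = 10**(-pH) mol/L.

ph_lysosome, ph_cytosol = 4.8, 7.2

h_lysosome = 10 ** -ph_lysosome
h_cytosol = 10 ** -ph_cytosol

print(f"[H+] lysosome: {h_lysosome:.2e} M")
print(f"[H+] cytosol:  {h_cytosol:.2e} M")
print(f"ratio: ~{h_lysosome / h_cytosol:.0f}-fold more acidic")  # ~250-fold
```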
GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane.
Receptor oligomerization
G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor. This so-called constitutive receptor is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of the GABABR1 without the GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of functional receptor. It has been shown that GABABR2 binding to GABABR1 causes masking of a retention signal of functional receptors.
Origin and diversification of the superfamily
Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi, and have been classified according to the GRAFS classification system based on GPCR fingerprints. Identification of the superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs has a common origin. Characteristic motifs indicate that three of the five GRAFS families, Rhodopsin, Adhesion, and Frizzled, evolved from the Dictyostelium discoideum cAMP receptors before the split of opisthokonts. Later, the Secretin family evolved from the Adhesion GPCR receptor family before the split of nematodes. Insect GPCRs appear to be in their own group, and Taste2 is identified as descending from Rhodopsin. Note that the Secretin/Adhesion split is based on presumed function rather than signature, as the classical Class B (7tm_2) is used to identify both in the studies.
See also
G protein-coupled receptors database
List of MeSH codes (D12.776)
Metabotropic receptor
Orphan receptor
Pepducins, a class of drug candidates targeted at GPCRs
Receptor activated solely by a synthetic ligand, a technique for control of cell signaling through synthetic GPCRs
TOG superfamily
References
Further reading
External links
GPCR Cell Line
GPCR-HGmod, a database of 3D structural models of all human G-protein coupled receptors, built by the GPCR-I-TASSER pipeline
Biochemistry
Integral membrane proteins
Molecular biology
Protein families
Signal transduction
Protein superfamilies | G protein-coupled receptor | [
"Chemistry",
"Biology"
] | 9,970 | [
"Protein classification",
"Signal transduction",
"G protein-coupled receptors",
"nan",
"Molecular biology",
"Biochemistry",
"Protein families",
"Neurochemistry",
"Protein superfamilies"
] |
12,833 | https://en.wikipedia.org/wiki/GTPase | GTPases are a large family of hydrolase enzymes that bind to the nucleotide guanosine triphosphate (GTP) and hydrolyze it to guanosine diphosphate (GDP). The GTP binding and hydrolysis takes place in the highly conserved P-loop "G domain", a protein domain common to many GTPases.
Functions
GTPases function as molecular switches or timers in many fundamental cellular processes.
Examples of these roles include:
Signal transduction in response to activation of cell surface receptors, including transmembrane receptors such as those mediating taste, smell and vision.
Protein biosynthesis (a.k.a. translation) at the ribosome.
Regulation of cell differentiation, proliferation, division and movement.
Translocation of proteins through membranes.
Transport of vesicles within the cell, and vesicle-mediated secretion and uptake, through GTPase control of vesicle coat assembly.
GTPases are active when bound to GTP and inactive when bound to GDP. In the generalized receptor-transducer-effector signaling model of Martin Rodbell, signaling GTPases act as transducers to regulate the activity of effector proteins. This inactive-active switch is due to conformational changes in the protein distinguishing these two forms, particularly of the "switch" regions that in the active state are able to make protein-protein contacts with partner proteins that alter the function of these effectors.
Mechanism
Hydrolysis of GTP bound to an (active) G domain-GTPase leads to deactivation of the signaling/timer function of the enzyme. The hydrolysis of the third (γ) phosphate of GTP to create guanosine diphosphate (GDP) and Pi, inorganic phosphate, occurs by the SN2 mechanism (see nucleophilic substitution) via a pentacoordinate transition state and is dependent on the presence of a magnesium ion Mg2+.
GTPase activity serves as the shutoff mechanism for the signaling roles of GTPases by returning the active, GTP-bound protein to the inactive, GDP-bound state. Most "GTPases" have functional GTPase activity, allowing them to remain active (that is, bound to GTP) only for a short time before deactivating themselves by converting bound GTP to bound GDP. However, many GTPases also use accessory proteins named GTPase-activating proteins or GAPs to accelerate their GTPase activity. This further limits the active lifetime of signaling GTPases. Some GTPases have little to no intrinsic GTPase activity, and are entirely dependent on GAP proteins for deactivation (such as the ADP-ribosylation factor or ARF family of small GTP-binding proteins that are involved in vesicle-mediated transport within cells).
To become activated, GTPases must bind to GTP. Since mechanisms to convert bound GDP directly into GTP are unknown, the inactive GTPases are induced to release bound GDP by the action of distinct regulatory proteins called guanine nucleotide exchange factors or GEFs. The nucleotide-free GTPase protein quickly rebinds GTP, which is in far excess in healthy cells over GDP, allowing the GTPase to enter the active conformation state and promote its effects on the cell. For many GTPases, activation of GEFs is the primary control mechanism in the stimulation of the GTPase signaling functions, although GAPs also play an important role. For heterotrimeric G proteins and many small GTP-binding proteins, GEF activity is stimulated by cell surface receptors in response to signals outside the cell (for heterotrimeric G proteins, the G protein-coupled receptors are themselves GEFs, while for receptor-activated small GTPases their GEFs are distinct from cell surface receptors).
Some GTPases also bind to accessory proteins called guanine nucleotide dissociation inhibitors or GDIs that stabilize the inactive, GDP-bound state.
The amount of active GTPase can be changed in several ways, listed below and illustrated by the kinetic sketch after the list:
Acceleration of GDP dissociation by GEFs speeds up the accumulation of active GTPase.
Inhibition of GDP dissociation by guanine nucleotide dissociation inhibitors (GDIs) slows down accumulation of active GTPase.
Acceleration of GTP hydrolysis by GAPs reduces the amount of active GTPase.
Artificial GTP analogues like GTP-γ-S, β,γ-methylene-GTP, and β,γ-imino-GTP that cannot be hydrolyzed can lock the GTPase in its active state.
Mutations (such as those that reduce the intrinsic GTP hydrolysis rate) can lock the GTPase in the active state, and such mutations in the small GTPase Ras are particularly common in some forms of cancer.
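A minimal way to see how these factors trade off is a two-state model in which GEFs (and GDIs, acting in the opposite direction) set the effective activation rate, while GAPs plus intrinsic hydrolysis set the deactivation rate. The sketch below uses arbitrary illustrative rate constants; it is not a quantitative model of any particular GTPase.

```python
# Two-state model of the GTPase switch summarized in the list above:
# GEFs raise the effective GDP->GTP activation rate, GDIs can be thought
# of as lowering it, and GAPs (plus intrinsic hydrolysis) set the
# GTP->GDP deactivation rate. Rate constants are illustrative only.

def active_fraction(k_act, k_deact):
    """Steady-state fraction of GTPase in the GTP-bound (active) state
    for a simple two-state cycle: inactive <-> active."""
    return k_act / (k_act + k_deact)

# baseline: weak spontaneous nucleotide exchange, slow intrinsic hydrolysis
print(active_fraction(k_act=0.01, k_deact=0.02))   # ~0.33
# GEF stimulation raises the activation rate
print(active_fraction(k_act=1.0, k_deact=0.02))    # ~0.98
# adding a GAP raises the deactivation rate and switches the pool back off
print(active_fraction(k_act=1.0, k_deact=30.0))    # ~0.03
```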
G domain GTPases
In most GTPases, the specificity for the base guanine versus other nucleotides is imparted by the base-recognition motif, which has the consensus sequence [N/T]KXD. The following classification is based on shared features; some examples have mutations in the base-recognition motif that shift their substrate specificity, most commonly to ATP.
TRAFAC class
The TRAFAC class of G domain proteins is named after its prototypical members, the translation factor G proteins. They play roles in translation, signal transduction, and cell motility.
Translation factor superfamily
Multiple classical translation factor family GTPases play important roles in initiation, elongation and termination of protein biosynthesis. Sharing a similar mode of ribosome binding due to the β-EI domain following the GTPase, the most well-known members of the family are EF-1A/EF-Tu, EF-2/EF-G, and class 2 release factors. Other members include EF-4 (LepA), BipA (TypA), SelB (bacterial selenocysteinyl-tRNA EF-Tu paralog), Tet (tetracycline resistance by ribosomal protection), and HBS1L (eukaryotic ribosome rescue protein similar to release factors).
The superfamily also includes the Bms1 family from yeast.
Ras-like superfamily
Heterotrimeric G proteins
Heterotrimeric G protein complexes are composed of three distinct protein subunits named alpha (α), beta (β) and gamma (γ) subunits. The alpha subunits contain the GTP binding/GTPase domain flanked by long regulatory regions, while the beta and gamma subunits form a stable dimeric complex referred to as the beta-gamma complex. When activated, a heterotrimeric G protein dissociates into activated, GTP-bound alpha subunit and separate beta-gamma subunit, each of which can perform distinct signaling roles. The α and γ subunit are modified by lipid anchors to increase their association with the inner leaflet of the plasma membrane.
Heterotrimeric G proteins act as the transducers of G protein-coupled receptors, coupling receptor activation to downstream signaling effectors and second messengers. In unstimulated cells, heterotrimeric G proteins are assembled as the GDP bound, inactive trimer (Gα-GDP-Gβγ complex). Upon receptor activation, the activated receptor intracellular domain acts as GEF to release GDP from the G protein complex and to promote binding of GTP in its place. The GTP-bound complex undergoes an activating conformation shift that dissociates it from the receptor and also breaks the complex into its component G protein alpha and beta-gamma subunit components. While these activated G protein subunits are now free to activate their effectors, the active receptor is likewise free to activate additional G proteins – this allows catalytic activation and amplification where one receptor may activate many G proteins.
G protein signaling is terminated by hydrolysis of bound GTP to bound GDP. This can occur through the intrinsic GTPase activity of the α subunit, or be accelerated by separate regulatory proteins that act as GTPase-activating proteins (GAPs), such as members of the Regulator of G protein signaling (RGS) family. The speed of the hydrolysis reaction works as an internal clock limiting the length of the signal. Once Gα is returned to being GDP bound, the two parts of the heterotrimer re-associate to the original, inactive state.
The heterotrimeric G proteins can be classified by sequence homology of the α unit and by their functional targets into four families: Gs family, Gi family, Gq family and G12 family. Each of these Gα protein families contains multiple members, such that mammals have 16 distinct α-subunit genes. The Gβ and Gγ subunits are likewise composed of many members, increasing heterotrimer structural and functional diversity. Among the target molecules of the specific G proteins are the second messenger-generating enzymes adenylyl cyclase and phospholipase C, as well as various ion channels.
Small GTPases
Small GTPases function as monomers and have a molecular weight of about 21 kilodaltons, consisting primarily of the GTPase domain. They are also called small or monomeric guanine nucleotide-binding regulatory proteins, small or monomeric GTP-binding proteins, or small or monomeric G-proteins, and because they have significant homology with the first-identified such protein, named Ras, they are also referred to as Ras superfamily GTPases. Small GTPases generally serve as molecular switches and signal transducers for a wide variety of cellular signaling events, often involving membranes, vesicles or the cytoskeleton. According to their primary amino acid sequences and biochemical properties, the many Ras superfamily small GTPases are further divided into five subfamilies with distinct functions: Ras, Rho ("Ras-homology"), Rab, Arf and Ran. While many small GTPases are activated by their GEFs in response to intracellular signals emanating from cell surface receptors (particularly growth factor receptors), regulatory GEFs for many other small GTPases are activated in response to intrinsic cell signals, not cell surface (external) signals.
Myosin-kinesin superfamily
This class is defined by loss of two beta-strands and additional N-terminal strands. Both namesakes of this superfamily, myosin and kinesin, have shifted to use ATP.
Large GTPases
See dynamin as a prototype for large monomeric GTPases.
SIMIBI class
Much of the SIMIBI class of GTPases is activated by dimerization. Named after the signal recognition particle (SRP), MinD, and BioD, the class is involved in protein localization, chromosome partitioning, and membrane transport. Several members of this class, including MinD and Get3, have shifted in substrate specificity to become ATPases.
Translocation factors
For a discussion of Translocation factors and the role of GTP, see signal recognition particle (SRP).
Other GTPases
While tubulin and related structural proteins also bind and hydrolyze GTP as part of their function to form intracellular tubules, these proteins utilize a distinct tubulin domain that is unrelated to the G domain used by signaling GTPases.
There are also GTP-hydrolyzing proteins that use a P-loop from a superclass other than the G-domain-containing one. Examples include the NACHT proteins of their own superclass and the McrB protein of the AAA+ superclass.
See also
G protein-coupled receptors
Growth factor receptor
Septins
References
External links
MBInfo - RhoGTPases
Signal transduction | GTPase | [
"Chemistry",
"Biology"
] | 2,450 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
12,841 | https://en.wikipedia.org/wiki/G%20protein | G proteins, also known as guanine nucleotide-binding proteins, are a family of proteins that act as molecular switches inside cells, and are involved in transmitting signals from a variety of stimuli outside a cell to its interior. Their activity is regulated by factors that control their ability to bind to and hydrolyze guanosine triphosphate (GTP) to guanosine diphosphate (GDP). When they are bound to GTP, they are 'on', and, when they are bound to GDP, they are 'off'. G proteins belong to the larger group of enzymes called GTPases.
There are two classes of G proteins. The first function as monomeric small GTPases (small G-proteins), while the second function as heterotrimeric G protein complexes. The latter class of complexes is made up of alpha (Gα), beta (Gβ) and gamma (Gγ) subunits. In addition, the beta and gamma subunits can form a stable dimeric complex referred to as the beta-gamma complex.
Heterotrimeric G proteins located within the cell are activated by G protein-coupled receptors (GPCRs) that span the cell membrane. Signaling molecules bind to a domain of the GPCR located outside the cell, and an intracellular GPCR domain then in turn activates a particular G protein. Some active-state GPCRs have also been shown to be "pre-coupled" with G proteins, whereas in other cases a collision coupling mechanism is thought to occur. The G protein triggers a cascade of further signaling events that finally results in a change in cell function. G protein-coupled receptors and G proteins working together transmit signals from many hormones, neurotransmitters, and other signaling factors. G proteins regulate metabolic enzymes, ion channels, transporter proteins, and other parts of the cell machinery, controlling transcription, motility, contractility, and secretion, which in turn regulate diverse systemic functions such as embryonic development, learning and memory, and homeostasis.
History
G proteins were discovered in 1980 when Alfred G. Gilman and Martin Rodbell investigated stimulation of cells by adrenaline. They found that when adrenaline binds to a receptor, the receptor does not stimulate enzymes (inside the cell) directly. Instead, the receptor stimulates a G protein, which then stimulates an enzyme. An example is adenylate cyclase, which produces the second messenger cyclic AMP. For this discovery, they won the 1994 Nobel Prize in Physiology or Medicine.
Nobel prizes have been awarded for many aspects of signaling by G proteins and GPCRs. These include receptor antagonists, neurotransmitters, neurotransmitter reuptake, G protein-coupled receptors, G proteins, second messengers, the enzymes that trigger protein phosphorylation in response to cAMP, and consequent metabolic processes such as glycogenolysis.
Prominent examples include (in chronological order of awarding):
The 1947 Nobel Prize in Physiology or Medicine to Carl Cori, Gerty Cori and Bernardo Houssay, for their discovery of how glycogen is broken down to glucose and resynthesized in the body, for use as a store and source of energy. Glycogenolysis is stimulated by numerous hormones and neurotransmitters including adrenaline.
The 1970 Nobel Prize in Physiology or Medicine to Julius Axelrod, Bernard Katz and Ulf von Euler for their work on the release and reuptake of neurotransmitters.
The 1971 Nobel Prize in Physiology or Medicine to Earl Sutherland for discovering the key role of adenylate cyclase, which produces the second messenger cyclic AMP.
The 1988 Nobel Prize in Physiology or Medicine to George H. Hitchings, Sir James Black and Gertrude Elion "for their discoveries of important principles for drug treatment" targeting GPCRs.
The 1992 Nobel Prize in Physiology or Medicine to Edwin G. Krebs and Edmond H. Fischer for describing how reversible phosphorylation works as a switch to activate proteins, and to regulate various cellular processes including glycogenolysis.
The 1994 Nobel Prize in Physiology or Medicine to Alfred G. Gilman and Martin Rodbell for their discovery of "G-proteins and the role of these proteins in signal transduction in cells".
The 2000 Nobel Prize in Physiology or Medicine to Eric Kandel, Arvid Carlsson and Paul Greengard, for research on neurotransmitters such as dopamine, which act via GPCRs.
The 2004 Nobel Prize in Physiology or Medicine to Richard Axel and Linda B. Buck for their work on G protein-coupled olfactory receptors.
The 2012 Nobel Prize in Chemistry to Brian Kobilka and Robert Lefkowitz for their work on GPCR function.
Function
G proteins are important signal transducing molecules in cells. "Malfunction of GPCR [G Protein-Coupled Receptor] signaling pathways are involved in many diseases, such as diabetes, blindness, allergies, depression, cardiovascular defects, and certain forms of cancer. It is estimated that about 30% of the modern drugs' cellular targets are GPCRs." The human genome encodes roughly 800 G protein-coupled receptors, which detect photons of light, hormones, growth factors, drugs, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome still have unknown functions.
Whereas G proteins are activated by G protein-coupled receptors, they are inactivated by RGS proteins (for "Regulator of G protein signalling"). Receptors stimulate GTP binding (turning the G protein on). RGS proteins stimulate GTP hydrolysis (creating GDP, thus turning the G protein off).
Diversity
All eukaryotes use G proteins for signaling and have evolved a large diversity of G proteins. For instance, humans encode 18 different Gα proteins, 5 Gβ proteins, and 12 Gγ proteins.
Signaling
G protein can refer to two distinct families of proteins. Heterotrimeric G proteins, sometimes referred to as the "large" G proteins, are activated by G protein-coupled receptors and are made up of alpha (α), beta (β), and gamma (γ) subunits. "Small" G proteins (20-25kDa) belong to the Ras superfamily of small GTPases. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but are in fact monomeric, consisting of only a single unit. However, like their larger relatives, they also bind GTP and GDP and are involved in signal transduction.
Heterotrimeric
Different types of heterotrimeric G proteins share a common mechanism. They are activated in response to a conformational change in the GPCR, exchanging GDP for GTP, and dissociating in order to activate other proteins in a particular signal transduction pathway. The specific mechanisms, however, differ between protein types.
Mechanism
Receptor-activated G proteins are bound to the inner surface of the cell membrane. They consist of the Gα and the tightly associated Gβγ subunits.
There are four main families of Gα subunits: Gαs (G stimulatory), Gαi (G inhibitory), Gαq/11, and Gα12/13. They behave differently in the recognition of the effector molecule, but share a similar mechanism of activation.
Activation
When a ligand activates the G protein-coupled receptor, it induces a conformational change in the receptor that allows the receptor to function as a guanine nucleotide exchange factor (GEF) that exchanges GDP for GTP. The GTP (or GDP) is bound to the Gα subunit in the traditional view of heterotrimeric GPCR activation. This exchange triggers the dissociation of the Gα subunit (which is bound to GTP) from the Gβγ dimer and the receptor as a whole. However, models which suggest molecular rearrangement, reorganization, and pre-complexing of effector molecules are beginning to be accepted. Both Gα-GTP and Gβγ can then activate different signaling cascades (or second messenger pathways) and effector proteins, while the receptor is able to activate the next G protein.
Termination
The Gα subunit will eventually hydrolyze the attached GTP to GDP by its inherent enzymatic activity, allowing it to re-associate with Gβγ and starting a new cycle. A group of proteins called Regulators of G protein signalling (RGSs), which act as GTPase-activating proteins (GAPs), are specific for Gα subunits. These proteins accelerate the hydrolysis of GTP to GDP, thus terminating the transduced signal. In some cases, the effector itself may possess intrinsic GAP activity, which then can help deactivate the pathway. This is true in the case of phospholipase C-beta, which possesses GAP activity within its C-terminal region. This is an alternate form of regulation for the Gα subunit. Such Gα GAPs do not have catalytic residues (specific amino acid sequences) to activate the Gα protein. They work instead by lowering the required activation energy for the reaction to take place.
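The activation and termination steps described in the last two subsections can be summarized as a simple state machine. The class below is only a schematic aid: the state strings and method names are invented labels for the steps named in the text, not real data structures from any library.

```python
# Schematic state machine for the heterotrimeric G protein cycle described
# above: activation by a receptor acting as a GEF, signaling by the
# dissociated subunits, and termination by GTP hydrolysis (optionally
# accelerated by an RGS/GAP). It tracks states only; no kinetics involved.

class HeterotrimericGProtein:
    def __init__(self):
        self.state = "Galpha-GDP bound to Gbetagamma (inactive trimer)"

    def receptor_gef_exchange(self):
        # an agonist-bound GPCR promotes release of GDP and binding of GTP
        self.state = "Galpha-GTP + free Gbetagamma (both can signal)"

    def hydrolyze_gtp(self, rgs_bound=False):
        # intrinsic GTPase activity, optionally accelerated by an RGS protein
        speed = "fast (RGS-accelerated)" if rgs_bound else "slow (intrinsic)"
        self.state = f"Galpha-GDP re-associated with Gbetagamma ({speed} shutoff)"

if __name__ == "__main__":
    g = HeterotrimericGProtein()
    print(g.state)
    g.receptor_gef_exchange()
    print(g.state)
    g.hydrolyze_gtp(rgs_bound=True)
    print(g.state)
```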
Specific mechanisms
Gαs
Gαs activates the cAMP-dependent pathway by stimulating the production of cyclic AMP (cAMP) from ATP. This is accomplished by direct stimulation of the membrane-associated enzyme adenylate cyclase. cAMP can then act as a second messenger that goes on to interact with and activate protein kinase A (PKA). PKA can phosphorylate myriad downstream targets.
The cAMP-dependent pathway is used as a signal transduction pathway for many hormones including:
ADH – Promotes water retention by the kidneys (created by the magnocellular neurosecretory cells of the posterior pituitary)
GHRH – Stimulates the synthesis and release of GH (somatotropic cells of the anterior pituitary)
GHIH – Inhibits the synthesis and release of GH (somatotropic cells of anterior pituitary)
CRH – Stimulates the synthesis and release of ACTH (anterior pituitary)
ACTH – Stimulates the synthesis and release of cortisol (zona fasciculata of the adrenal cortex in the adrenal glands)
TSH – Stimulates the synthesis and release of a majority of T4 (thyroid gland)
LH – Stimulates follicular maturation and ovulation in women; or testosterone production and spermatogenesis in men
FSH – Stimulates follicular development in women; or spermatogenesis in men
PTH – Increases blood calcium levels. This is accomplished via the parathyroid hormone 1 receptor (PTH1) in the kidneys and bones, or via the parathyroid hormone 2 receptor (PTH2) in the central nervous system and brain, as well as the bones and kidneys.
Calcitonin – Decreases blood calcium levels (via the calcitonin receptor in the intestines, bones, kidneys, and brain)
Glucagon – Stimulates glycogen breakdown in the liver
hCG – Promotes cellular differentiation, and is potentially involved in apoptosis.
Epinephrine – Released by the adrenal medulla during the fasting state, when the body is under metabolic duress. It stimulates glycogenolysis, in addition to the actions of glucagon.
Gαi
Gαi inhibits the production of cAMP from ATP.
Examples of signals acting through Gαi include somatostatin and prostaglandins.
Gαq/11
Gαq/11 stimulates the membrane-bound phospholipase C beta, which then cleaves phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers, inositol trisphosphate (IP3) and diacylglycerol (DAG). IP3 induces calcium release from the endoplasmic reticulum. DAG activates protein kinase C.
The Inositol Phospholipid Dependent Pathway is used as a signal transduction pathway for many hormones including:
Epinephrine
ADH (Vasopressin/AVP) – Induces the synthesis and release of glucocorticoids (Zona fasciculata of adrenal cortex); Induces vasoconstriction (V1 Cells of Posterior pituitary)
TRH – Induces the synthesis and release of TSH (Anterior pituitary gland)
TSH – Induces the synthesis and release of a small amount of T4 (Thyroid Gland)
Angiotensin II – Induces Aldosterone synthesis and release (zona glomerulosa of adrenal cortex in kidney)
GnRH – Induces the synthesis and release of FSH and LH (Anterior Pituitary)
Gα12/13
Gα12/13 are involved in Rho family GTPase signaling (see Rho family of GTPases). This occurs through the RhoGEF superfamily, involving the RhoGEF domain of the proteins' structures. These are involved in control of cell cytoskeleton remodeling, and thus in regulating cell migration.
Gβ, Gγ
The Gβγ complexes sometimes also have active functions. Examples include coupling to and activating G protein-coupled inwardly-rectifying potassium channels.
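As a compact reference, the mapping below restates, in code, the pairing of Gα families with the effectors named in this section; it is a lookup aid with no information beyond the prose above.

```python
# Summary, as code, of the main Galpha families and the effectors named in
# the "Specific mechanisms" subsections above. Purely a reference table.

GALPHA_FAMILIES = {
    "Gs":     {"effector": "adenylate cyclase",   "effect": "stimulates cAMP production"},
    "Gi":     {"effector": "adenylate cyclase",   "effect": "inhibits cAMP production"},
    "Gq/11":  {"effector": "phospholipase C beta", "effect": "produces IP3 and DAG from PIP2"},
    "G12/13": {"effector": "RhoGEFs",              "effect": "activates Rho family GTPase signaling"},
}

for family, info in GALPHA_FAMILIES.items():
    print(f"{family:7s} -> {info['effector']}: {info['effect']}")
```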
Small GTPases
Small GTPases, also known as small G-proteins, likewise bind GTP and GDP and are involved in signal transduction. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but exist as monomers. They are small (20-kDa to 25-kDa) proteins that bind to guanosine triphosphate (GTP). This family of proteins is homologous to the Ras GTPases and is also called the Ras superfamily GTPases.
Lipidation
In order to associate with the inner leaflet of the plasma membrane, many G proteins and small GTPases are lipidated, that is, covalently modified with lipid extensions. They may be myristoylated, palmitoylated or prenylated.
References
External links
Peripheral membrane proteins
Cell signaling
Signal transduction
EC 3.6 | G protein | [
"Chemistry",
"Biology"
] | 2,990 | [
"Biochemistry",
"Neurochemistry",
"G proteins",
"Signal transduction"
] |
12,858 | https://en.wikipedia.org/wiki/Galvanization | Galvanization (also spelled galvanisation) is the process of applying a protective zinc coating to steel or iron, to prevent rusting. The most common method is hot-dip galvanizing, in which the parts are coated by submerging them in a bath of hot, molten zinc.
Protective action
The zinc coating, when intact, prevents corrosive substances from reaching the underlying iron. Additional electroplating such as a chromate conversion coating may be applied to provide further surface passivation to the substrate material.
History and etymology
The process is named after the Italian physician, physicist, biologist and philosopher Luigi Galvani (9 September 1737 – 4 December 1798). The earliest known example of galvanized iron was discovered on 17th-century Indian armour in the Royal Armouries Museum collection in the United Kingdom.
The term "galvanized" can also be used metaphorically of any stimulus which results in activity by a person or group of people.
In modern usage, the term "galvanizing" has largely come to be associated with zinc coatings, to the exclusion of other metals. Galvanic paint, a precursor to hot-dip galvanizing, was patented by Stanislas Sorel of Paris on June 10, 1837; the name adopted a term from a highly fashionable field of contemporary science, despite having no evident relation to that science.
Methods
Hot-dip galvanizing deposits a thick, robust layer of zinc iron alloys on the surface of a steel item. In the case of automobile bodies, where additional decorative coatings of paint will be applied, a thinner form of galvanizing is applied by electrogalvanizing. The hot-dip process generally does not reduce strength to a measurable degree, with the exception of high-strength steels where hydrogen embrittlement can become a problem.
Thermal diffusion galvanizing, or Sherardizing, provides a zinc diffusion coating on iron- or copper-based materials.
Eventual corrosion
Galvanized steel can last for many decades if other supplementary measures are maintained, such as paint coatings and additional sacrificial anodes. Corrosion in non-salty environments is caused mainly by levels of sulfur dioxide in the air.
Galvanized construction steel
This is the most common use for galvanized metal; hundreds of thousands of tons of steel products are galvanized annually worldwide. In developed countries, most larger cities have several galvanizing factories, and many items of steel manufacture are galvanized for protection. Typically these include street furniture, building frameworks, balconies, verandahs, staircases, ladders, walkways, and more. Hot dip galvanized steel is also used for making steel frames as a basic construction material for steel frame buildings.
Galvanized piping
In the early 20th century, galvanized piping swiftly took the place of previously used cast iron and lead in cold-water plumbing. In practice, galvanized piping rusts from the inside out, building up layers of plaque on the inside of the piping, causing both water pressure problems and eventual pipe failure. These plaques can flake off, leading to visible impurities in water and a slight metallic taste. The life expectancy of galvanized piping is about 40–50 years, but it may vary depending on how well the pipes were built and installed. Pipe longevity also depends on the thickness of zinc in the original galvanizing, which ranges on a scale from G01 to G360.
See also
Electroplating
Aluminized steel
Cathodic protection
Corrugated galvanized iron
Galvanic corrosion
Galvannealed – galvanization and annealing
Prepainted metal
Rust
Rustproofing
Sendzimir process
Sherardizing
Corrosion
Sacrificial metal
Corrosion engineering
References
External links
Chemical processes
Corrosion prevention
Metal plating
Zinc
Bimetal | Galvanization | [
"Chemistry",
"Materials_science"
] | 782 | [
"Corrosion prevention",
"Metallurgical processes",
"Metallurgy",
"Coatings",
"Corrosion",
"Bimetal",
"Chemical processes",
"nan",
"Chemical process engineering",
"Metal plating"
] |
12,859 | https://en.wikipedia.org/wiki/Golden%20Rule | The Golden Rule is the principle of treating others as one would want to be treated by them. It is sometimes called an ethics of reciprocity, meaning that you should reciprocate to others how you would like them to treat you (not necessarily how they actually treat you). Various expressions of this rule can be found in the tenets of most religions and creeds through the ages.
The maxim may appear as a positive or negative injunction governing conduct:
Treat others as you would like others to treat you (positive or directive form)
Do not treat others in ways that you would not like to be treated (negative or prohibitive form)
What you wish upon others, you wish upon yourself (empathetic or responsive form)
Etymology
The term "Golden Rule", or "Golden law", began to be used widely in the early 17th century in Britain by Anglican theologians and preachers; the earliest known usage is that of Anglicans Charles Gibbon and Thomas Jackson in 1604.
Ancient history
Ancient Egypt
Possibly the earliest affirmation of the maxim of reciprocity, reflecting the ancient Egyptian goddess Ma'at, appears in the story of "The Eloquent Peasant", which dates to the Middle Kingdom (): "Now this is the command: Do to the doer to make him do." This proverb embodies the do ut des principle. A Late Period () papyrus contains an early negative affirmation of the Golden Rule: "That which you hate to be done to you, do not do to another."
Ancient India
Sanskrit tradition
In Mahābhārata, the ancient epic of India, there is a discourse in which sage Brihaspati tells the king Yudhishthira the following about dharma, a philosophical understanding of values and actions that lend good order to life:
The Mahābhārata is usually dated to the period between 400 BCE and 400 CE.
Tamil tradition
In Chapter 32 in the Book of Virtue of the Tirukkuṛaḷ (), Valluvar says:
Furthermore, in verse 312, Valluvar says that it is the determination or code of the spotless (virtuous) not to do evil, even in return, to those who have cherished enmity and done them evil. According to him, the proper punishment to those who have done evil is to put them to shame by showing them kindness, in return and to forget both the evil and the good done on both sides (verse 314).
Ancient Greece
The Golden Rule in its prohibitive (negative) form was a common principle in ancient Greek philosophy. Examples of the general concept include:
"Avoid doing what you would blame others for doing." – Thales ( – )
"What you do not want to happen to you, do not do it yourself either." – Sextus the Pythagorean. The oldest extant reference to Sextus is by Origen in the third century of the common era.
"Ideally, no one should touch my property or tamper with it, unless I have given him some sort of permission, and, if I am sensible I shall treat the property of others with the same respect." – Plato ( – )
"Do not do to others that which angers you when they do it to you." – Isocrates (436–338 BCE)
"It is impossible to live a pleasant life without living wisely and well and justly, and it is impossible to live wisely and well and justly without living pleasantly." – Epicurus (341–270 BC) where "justly" refers to "an agreement made in reciprocal association ... against the infliction or suffering of harm."
Ancient Persia
The Pahlavi Texts of Zoroastrianism ( – 1000 CE) were an early source for the Golden Rule: "That nature alone is good which refrains from doing to another whatsoever is not good for itself." Dadisten-I-dinik, 94,5, and "Whatever is disagreeable to yourself do not do unto others." Shayast-na-Shayast 13:29
Ancient Rome
Seneca the Younger ( – 65 CE), a practitioner of Stoicism ( – 200 CE), expressed a hierarchical variation of the Golden Rule in his Letter 47, an essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."
Religious context
According to Simon Blackburn, the Golden Rule "can be found in some form in almost every ethical tradition". A multi-faith poster showing the Golden Rule in sacred writings from 13 faith traditions (designed by Paul McKenna of Scarboro Missions, 2000) has been on permanent display at the Headquarters of the United Nations since 4 January 2002. Creating the poster "took five years of research that included consultations with experts in each of the 13 faith groups." (See also the section on Global Ethic.)
Abrahamic religions
Judaism
A rule of reciprocal altruism was stated positively in a well-known Torah verse (Hebrew: ):
According to John J. Collins of Yale Divinity School, most modern scholars, with Richard Elliott Friedman as a prominent exception, view the command as applicable to fellow Israelites.
Rashi commented on what constitutes revenge and a grudge, using the example of two men: one asks to borrow the other's ax and is refused; the next day, the man who refused asks the first for his ax. If the first man says, 'I will not lend it to you, just as you did not lend to me,' it constitutes revenge; if he says, 'Here it is for you; I am not like you, who did not lend to me,' it constitutes a grudge. Rashi concludes his commentary by quoting Rabbi Akiva on love of neighbor: 'This is a fundamental [all-inclusive] principle of the Torah.'
Hillel the Elder ( – 10 CE) used this verse as a most important message of the Torah for his teachings. Once, he was challenged by a gentile who asked to be converted under the condition that the Torah be explained to him while he stood on one foot. Hillel accepted him as a candidate for conversion to Judaism but, drawing on Leviticus 19:18, briefed the man:
Hillel recognized brotherly love as the fundamental principle of Jewish ethics. Rabbi Akiva agreed, while Simeon ben Azzai suggested that the principle of love must have its foundation in Genesis chapter 1, which teaches that all men are the offspring of Adam, who was made in the image of God. According to Jewish rabbinic literature, the first man Adam represents the unity of mankind. This is echoed in the modern preamble of the Universal Declaration of Human Rights. It is also taught that Adam is last in order according to the evolutionary character of God's creation:
The Jewish Publication Society's edition of Leviticus states:
This Torah verse represents one of several versions of the Golden Rule, which itself appears in various forms, positive and negative. It is the earliest written version of that concept in a positive form.
At the turn of the era, the Jewish rabbis were discussing the scope of the meaning of Leviticus 19:18 and 19:34 extensively:
Commentators interpret that this applies to foreigners (e.g. Samaritans), proselytes ('strangers who reside with you') and Jews.
On the verse, "Love your fellow as yourself", the classic commentator Rashi quotes from Torat Kohanim, an early Midrashic text regarding the famous dictum of Rabbi Akiva: "Love your fellow as yourself – Rabbi Akiva says this is a great principle of the Torah."
In 1935, Rabbi Eliezer Berkovits explained in his work "What is the Talmud?" that Leviticus 19:34 disallowed xenophobia by Jews.
Israel's postal service quoted from the previous Leviticus verse when it commemorated the Universal Declaration of Human Rights on a 1958 postage stamp.
Christianity
New Testament
The Golden Rule was proclaimed by Jesus of Nazareth during his Sermon on the Mount and described by him as the second great commandment. The common English phrasing is "Do unto others as you would have them do unto you". Various applications of the Golden Rule are stated positively numerous times in the Old Testament: "You shall not take vengeance or bear a grudge against any of your people, but you shall love your neighbor as yourself: I am the LORD." Or, in Leviticus 19:34: "The alien who resides with you shall be to you as the native-born among you; you shall love the alien as yourself, for you were aliens in the land of Egypt: I am the LORD your God." These two examples are given in the Septuagint as follows: "And thy hand shall not avenge thee; and thou shalt not be angry with the children of thy people; and thou shalt love thy neighbour as thyself; I am the Lord." and "The stranger that comes to you shall be among you as the native, and thou shalt love him as thyself; for ye were strangers in the land of Egypt: I am the Lord your God."
According to John J. Collins of Yale Divinity School, neither Jewish sources nor the New Testament ever claim that the commandment to love one's neighbors is applicable to all mankind, though some expansion can also be seen beyond its original context in the Hebrew Bible. The law only applies to an in-group, whether it be Israelites, Jews, or early Christians.
Two passages in the New Testament quote Jesus of Nazareth espousing the positive form of the Golden rule:
A similar passage, a parallel to the Great Commandment, is to be found later in the Gospel of Luke.
The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, which John Wesley interprets as meaning that "your neighbor" is anyone in need.
Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another.
In one passage of the New Testament, Paul the Apostle refers to the golden rule, restating Jesus' second commandment:
St. Paul also comments on the golden rule in the Epistle to the Romans:
Deuterocanon
The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by the Catholic Church, Eastern Orthodoxy, and the non-Chalcedonian churches, express a negative form of the golden rule:
Church Fathers
As prolific commentators on the Bible, multiple Church Fathers, including the Apostolic Fathers, wrote on the Golden Rule found in both Old and New Testaments. The early Christian treatise the Didache included the Golden Rule in saying "in everything, do not do to another what you would not want done to you."
Clement of Alexandria, commenting on the Golden Rule in Luke 6:31, calls the concept "all embracing" for how one acts in life. Clement further pointed to the phrasing in the book of Tobit as part of the ethics between husbands and wives. Tertullian stated that the rule taught "love, respect, consolation, protection, and benefits".
While many Church Fathers framed the Golden Rule as part of Jewish and Christian Ethics, Theophilus of Antioch stated that it had universal application for all of humanity. Origen connected the Golden Rule with the law written on the hearts of Gentiles mentioned by Paul in his letter to the Romans, and had universal application to Christian and non-Christian alike.
Basil of Caesarea commented that the negative form of the Golden Rule was for avoiding evil while the positive form was for doing good.
Islam
The golden rule is said not to have been practiced in the Arabian Peninsula prior to the advent of Islam. According to Th. Emil Homerin: "Pre-Islamic Arabs regarded the survival of the tribe, as most essential and to be ensured by the ancient rite of blood vengeance." Homerin goes on to say:
From the hadith:
Ali ibn Abi Talib (4th Caliph in Sunni Islam, and first Imam in Shia Islam) says:
Muslim scholar Al-Qurtubi looked at the Golden Rule of loving your neighbor and treating them as you wish to be treated as having universal application to believers and unbelievers alike. Relying upon a Hadith, exegete Ibn Kathir listed those "who judge people the way they judge themselves" as people who will be among the first to be Resurrected.
Hussein bin Ali bin Awn al-Hashemi (102nd Caliph in Sunni Islam) repeated the Golden Rule in the context of the Armenian genocide; in 1917, he stated:
Mandaeism
In Mandaean scriptures, the Ginza Rabba and Mandaean Book of John contain a prohibitive form of the Golden Rule that is virtually identical to the one used by Hillel.
Baháʼí Faith
The writings of the Baháʼí Faith encourage everyone to treat others as they would treat themselves and even prefer others over oneself:
Indian religions
Hinduism
Also,
Buddhism
Buddha (Siddhartha Gautama, –543 BCE) made the negative formulation of the golden rule one of the cornerstones of his ethics in the 6th century BCE. It occurs in many places and in many forms throughout the Tripitaka.
Jainism
The Golden Rule is paramount in the Jainist philosophy and can be seen in the doctrines of ahimsa and karma. As part of the prohibition of causing any living beings to suffer, Jainism forbids inflicting upon others what is harmful to oneself.
The following line from the Acaranga Sutra sums up the philosophy of Jainism:
Sikhism
Chinese religions
Confucianism
The same idea is also presented in V.12 and VI.30 of the Analects (), which can be found in the online Chinese Text Project. The phraseology differs from the Christian version of the Golden Rule. It does not presume to do anything unto others, but merely to avoid doing what would be harmful. It does not preclude doing good deeds and taking moral positions.
In relation to the Golden Rule, Confucian philosopher Mencius said "If one acts with a vigorous effort at the law of reciprocity, when he seeks for the realization of perfect virtue, nothing can be closer than his approximation to it."
Taoism
Mohism
Mozi regarded the golden rule as a corollary to the cardinal virtue of impartiality, and encouraged egalitarianism and selflessness in relationships.
Iranian religions
Zoroastrianism
New religious movements
Wicca
Scientology
Traditional African religions
Yoruba
Odinani
Secular context
Global ethic
The "Declaration Toward a Global Ethic" from the Parliament of the World's Religions (1993) proclaimed the Golden Rule ("We must treat others as we wish others to treat us") as the common principle for many religions. The Initial Declaration was signed by 143 leaders from all of the world's major faiths, including Baháʼí Faith, Brahmanism, Brahma Kumaris, Buddhism, Christianity, Hinduism, Indigenous, Interfaith, Islam, Jainism, Judaism, Native American, Neo-Pagan, Sikhism, Taoism, Theosophist, Unitarian Universalist and Zoroastrian. In the folklore of several cultures the Golden Rule is depicted by the allegory of the long spoons.
Humanism
In the view of Greg M. Epstein, a Humanist chaplain at Harvard University, "'do unto others' ... is a concept that essentially no religion misses entirely. But not a single one of these versions of the golden rule requires a God." Various sources identify the Golden Rule as a humanist principle:
Existentialism
Classical Utilitarianism
John Stuart Mill in his book, Utilitarianism (originally published in 1861), wrote, "In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility. 'To do as you would be done by,' and 'to love your neighbour as yourself,' constitute the ideal perfection of utilitarian morality."
Other contexts
Human rights
According to Marc H. Bornstein, and William E. Paden, the Golden Rule is arguably the most essential basis for the modern concept of human rights, in which each individual has a right to just treatment, and a reciprocal responsibility to ensure justice for others.
However, Leo Damrosch argued that the notion that the Golden Rule pertains to "rights" per se is a contemporary interpretation and has nothing to do with its origin. The development of human "rights" is a modern political ideal that began as a philosophical concept promulgated through the philosophy of Jean-Jacques Rousseau in 18th century France, among others. His writings influenced Thomas Jefferson, who then incorporated Rousseau's reference to "inalienable rights" into the United States Declaration of Independence in 1776. Damrosch argued that to confuse the Golden Rule with human rights is to apply contemporary thinking to ancient concepts.
Variations
The Platinum Rule has been stated as, "Do to others as they would have you do to them." Taken in the spirit of the Golden Rule, this suggests one should be familiar with, or at least consider, the desires of the person one is interacting with. However, this is the flaw of the rule: it requires one to stereotype or make broad assumptions about a stranger's interests and personality before interacting with them. Such assumptions are often erroneous, and a prudent person would therefore avoid the interaction, knowing their assumptions are likely incorrect. The rule is thus prohibitive to communication and prefers no interaction over any interaction with strangers. On occasion, stereotypes may be applied and in rare cases are largely correct; in those situations the rule can be applied successfully.
On the other hand, the Platinum Rule is broadly successful when interacting with familiar people and directs that all interaction be conducted in the manner the person would like to be treated. This demonstrates respect and the desire to favorably regard the person one is interacting with. Unfortunately, it can also lead to a dependent relationship, fostering a psychological tendency to expect similar treatment in all relationships and to avoid forming new relationships where this treatment would not exist, simply from not knowing the individual's preferences.
Despite the unusual cases stifling interaction or individuals developing a demand for this behavior from others, the Platinum Rule requires due consideration, self-control, and receiver analysis. Taken altogether, the Platinum Rule represents a gesture of kindness, and is an established norm in various industries, such as marketing, medical care, motivational speaking, and many others. As a consequence, some argue the Golden Rule is outdated, self-absorbed, and grossly fails to consider the needs of others.
Science and economics
Some published research argues that some 'sense' of fair play and the Golden Rule may be stated and rooted in terms of neuroscientific and neuroethical principles.
The Golden Rule can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self". Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family. In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."
Study of other primates provides evidence that the Golden Rule exists in other non-human species.
Criticism
Philosophers such as Immanuel Kant and Friedrich Nietzsche have objected to the rule on a variety of grounds. One is the epistemic question of determining how others want to be treated. The obvious way is to ask them, but they might give duplicitous answers if they find this strategically useful, and they might also fail to understand the details of the choice situation as you understand it. We might also be biased toward perceiving harms and benefits to ourselves more than to others, which could lead to escalating conflict if we are suspicious of others. Hence Linus Pauling suggested that we introduce a bias towards others into the golden rule, "Do unto others 20 percent better than you would have them do unto you", to correct for subjective bias.
Differences in values or interests
George Bernard Shaw wrote, "Do not do unto others as you would that they should do unto you. Their tastes may not be the same." This suggests that if your values are not shared with others, the way you want to be treated will not be the way they want to be treated. Hence, the Golden Rule of "do unto others" is "dangerous in the wrong hands", according to philosopher Iain King, because "some fanatics have no aversion to death: the Golden Rule might inspire them to kill others in suicide missions."
Walter Terence Stace, in The Concept of Morals (1937), argued that Shaw's remark
Differences in situations
Immanuel Kant famously criticized the golden rule for not being sensitive to differences of situation, noting that a prisoner duly convicted of a crime could appeal to the golden rule while asking the judge to release him, pointing out that the judge would not want anyone else to send him to prison, so he should not do so to others. On the other hand, in a critique of the consistency of Kant's writings, several authors have noted the "similarity" between the Golden Rule and Kant's Categorical Imperative, introduced in Groundwork of the Metaphysic of Morals.
This was perhaps a well-known objection, as Leibniz actually responded to it long before Kant made it, suggesting that the judge should put himself in the place, not merely of the criminal, but of all affected persons and then judging each option (to inflict punishment, or release the criminal, etc.) by whether there was a “greater good in which this lesser evil was included.”
Other responses to criticisms
Marcus George Singer observed that there are two importantly different ways of looking at the golden rule: as requiring (1) that you perform specific actions that you want others to do to you or (2) that you guide your behavior in the same general ways that you want others to. Counter-examples to the golden rule typically are more forceful against the first than the second.
In his book on the golden rule, Jeffrey Wattles makes the similar observation that such objections typically arise while applying the golden rule in certain general ways (namely, ignoring differences in taste or situation, failing to compensate for subjective bias, etc.) But if we apply the golden rule to our own method of using it, asking in effect if we would want other people to apply the golden rule in such ways, the answer would typically be no, since others' ignoring of such factors will lead to behavior which we object to. It follows that we should not do so ourselves—according to the golden rule. In this way, the golden rule may be self-correcting. An article by Jouni Reinikainen develops this suggestion in greater detail.
It is possible, then, that the golden rule can itself guide us in identifying which differences of situation are morally relevant. We would often want other people to ignore any prejudice against our race or nationality when deciding how to act towards us, but would also want them to not ignore our differing preferences in food, desire for aggressiveness, and so on. This principle of "doing unto others, wherever possible, as they would be done by..." has sometimes been termed the platinum rule.
Popular references
Charles Kingsley's The Water Babies (1863) includes a character named Mrs Do-As-You-Would-Be-Done-By (and another, Mrs Be-Done-By-As-You-Did).
See also
Empathy
Eye for an eye
General welfare clause
Kali's morality, a literary example of a character not using the Golden Rule
Norm of reciprocity, social norm of in-kind responses to the behavior of others
Reciprocity (cultural anthropology), way of defining people's informal exchange of goods and labour
Reciprocity (evolution), mechanisms for the evolution of cooperation
Reciprocity (international relations), principle that favours, benefits, or penalties that are granted by one state to the citizens or legal entities of another, should be returned in kind
Reciprocity (social and political philosophy), concept of reciprocity as in-kind positive or negative responses for the actions of others; relation to justice; related ideas such as gratitude, mutuality, and the Golden Rule
Reciprocity (social psychology), in-kind positive or negative responses of individuals towards the actions of others
Serial reciprocity, where the benefactor of a gift or service will in turn provide benefits to a third party
Ubuntu (philosophy), an ethical philosophy originating from Southern Africa, which has been summarised as 'A person is a person through other people'
References
External links
The Golden Rule Movie A teaching resource.
Golden Rule Day An annual global event every April 5.
Golden Rule Project - learning tools, etc. (based in Salt Lake City, Utah, US)
Monmouth Center for World Religions and Ethical Thought. The Golden Rule
Scarboro Mission. The Golden Rule Educational, participatory, and interactive resources including videos, exercises, multi-disciplinary commentaries, The Golden Rule Poster, and interfaith dialogues on the Golden Rule.
St Columbans Mission Society – Interfaith Relations. The Golden Rule The Golden Rule Poster, etc.
Codes of conduct
Ethical principles
Interpersonal relationships
Life skills
Philosophy of law
Positive Mitzvoth
Religious practices
Sermon on the Mount | Golden Rule | [
"Biology"
] | 5,362 | [
"Behavior",
"Interpersonal relationships",
"Religious practices",
"Human behavior"
] |
12,866 | https://en.wikipedia.org/wiki/Globular%20cluster | A globular cluster is a spheroidal conglomeration of stars that is bound together by gravity, with a higher concentration of stars towards its center. It can contain anywhere from tens of thousands to many millions of member stars, all orbiting in a stable, compact formation. Globular clusters are similar in form to dwarf spheroidal galaxies, and though globular clusters were long held to be the more luminous of the two, discoveries of outliers had made the distinction between the two less clear by the early 21st century. Their name is derived from Latin (small sphere). Globular clusters are occasionally known simply as "globulars".
Although one globular cluster, Omega Centauri, was observed in antiquity and long thought to be a star, recognition of the clusters' true nature came with the advent of telescopes in the 17th century. In early telescopic observations, globular clusters appeared as fuzzy blobs, leading French astronomer Charles Messier to include many of them in his catalog of astronomical objects that he thought could be mistaken for comets. Using larger telescopes, 18th-century astronomers recognized that globular clusters are groups of many individual stars. Early in the 20th century the distribution of globular clusters in the sky was some of the first evidence that the Sun is far from the center of the Milky Way.
Globular clusters are found in nearly all galaxies. In spiral galaxies like the Milky Way, they are mostly found in the outer spheroidal part of the galaxy, the galactic halo. They are the largest and most massive type of star cluster, tending to be older, denser, and composed of lower abundances of heavy elements than open clusters, which are generally found in the disks of spiral galaxies. The Milky Way has more than 150 known globulars, and there may be many more.
Both the origin of globular clusters and their role in galactic evolution are unclear. Some are among the oldest objects in their galaxies and even the universe, constraining estimates of the universe's age. Star clusters were formerly thought to consist of stars that all formed at the same time from one star-forming nebula, but nearly all globular clusters contain stars that formed at different times, or that have differing compositions. Some clusters may have had multiple episodes of star formation, and some may be remnants of smaller galaxies captured by larger galaxies.
History of observations
The first known globular cluster, now called M 22, was discovered in 1665 by Abraham Ihle, a German amateur astronomer. The cluster Omega Centauri, easily visible in the southern sky with the naked eye, was known to ancient astronomers like Ptolemy as a star, but was reclassified as a nebula by Edmond Halley in 1677, then finally as a globular cluster in the early 19th century by John Herschel. The French astronomer Abbé Lacaille listed NGC 104, , M 55, M 69, and in his 1751–1752 catalogue. The low resolution of early telescopes prevented individual stars in a cluster from being visually separated until Charles Messier observed M 4 in 1764.
When William Herschel began his comprehensive survey of the sky using large telescopes in 1782, there were 34 known globular clusters. Herschel discovered another 36 and was the first to resolve virtually all of them into stars. He coined the term globular cluster in his Catalogue of a Second Thousand New Nebulae and Clusters of Stars (1789). In 1914, Harlow Shapley began a series of studies of globular clusters, published across about forty scientific papers. He examined the clusters' RR Lyrae variables (stars which he assumed were Cepheid variables) and used their luminosity and period of variability to estimate the distances to the clusters. RR Lyrae variables were later found to be fainter than Cepheid variables, causing Shapley to overestimate the distances.
A large majority of the Milky Way's globular clusters are found in the halo around the galactic core. In 1918, Shapley used this strongly asymmetrical distribution to determine the overall dimensions of the galaxy. Assuming a roughly spherical distribution of globular clusters around the galaxy's center, he used the positions of the clusters to estimate the position of the Sun relative to the Galactic Center. He correctly concluded that the Milky Way's center is in the Sagittarius constellation and not near the Earth. He overestimated the distance, finding typical globular cluster distances of ; the modern distance to the Galactic Center is roughly . Shapley's measurements indicated the Sun is relatively far from the center of the galaxy, contrary to what had been inferred from the observed uniform distribution of ordinary stars. In reality most ordinary stars lie within the galaxy's disk and are thus obscured by gas and dust in the disk, whereas globular clusters lie outside the disk and can be seen at much greater distances.
The count of known globular clusters in the Milky Way has continued to increase, reaching 83 in 1915, 93 in 1930, 97 by 1947, and 157 in 2010. Additional, undiscovered globular clusters are believed to be in the galactic bulge or hidden by the gas and dust of the Milky Way. For example, most of the Palomar Globular Clusters were only discovered in the 1950s, with some located relatively close by yet obscured by dust, while others reside in the very far reaches of the Milky Way halo. The Andromeda Galaxy, which is comparable in size to the Milky Way, may have as many as five hundred globulars. Every galaxy of sufficient mass in the Local Group has an associated system of globular clusters, as does almost every large galaxy surveyed. Some giant elliptical galaxies (particularly those at the centers of galaxy clusters), such as M 87, have as many as 13,000 globular clusters.
Classification
Shapley was later assisted in his studies of clusters by Henrietta Swope and Helen Sawyer Hogg. In 1927–1929, Shapley and Sawyer categorized clusters by the degree of concentration of stars toward each core. Their system, known as the Shapley–Sawyer Concentration Class, identifies the most concentrated clusters as Class I and ranges to the most diffuse Class XII. Astronomers from the Pontifical Catholic University of Chile proposed a new type of globular cluster on the basis of observational data in 2015: Dark globular clusters.
Formation
The formation of globular clusters is poorly understood. Globular clusters have traditionally been described as a simple star population formed from a single giant molecular cloud, and thus with roughly uniform age and metallicity (proportion of heavy elements in their composition). Modern observations show that nearly all globular clusters contain multiple populations; the globular clusters in the Large Magellanic Cloud (LMC) exhibit a bimodal population, for example. During their youth, these LMC clusters may have encountered giant molecular clouds that triggered a second round of star formation. This star-forming period is relatively brief, compared with the age of many globular clusters. It has been proposed that this multiplicity in stellar populations could have a dynamical origin. In the Antennae Galaxy, for example, the Hubble Space Telescope has observed clusters of clusters, regions in the galaxy that span hundreds of parsecs, in which many of the clusters will eventually collide and merge. Their overall range of ages and (possibly) metallicities could lead to clusters with a bimodal, or even multiple, distribution of populations.
Observations of globular clusters show that their stars primarily come from regions of more efficient star formation, and from where the interstellar medium is at a higher density, as compared to normal star-forming regions. Globular cluster formation is prevalent in starburst regions and in interacting galaxies. Some globular clusters likely formed in dwarf galaxies and were removed by tidal forces to join the Milky Way. In elliptical and lenticular galaxies there is a correlation between the mass of the supermassive black holes (SMBHs) at their centers and the extent of their globular cluster systems. The mass of the SMBH in such a galaxy is often close to the combined mass of the galaxy's globular clusters.
No known globular clusters display active star formation, consistent with the hypothesis that globular clusters are typically the oldest objects in their galaxy and were among the first collections of stars to form. Very large regions of star formation known as super star clusters, such as Westerlund 1 in the Milky Way, may be the precursors of globular clusters.
Many of the Milky Way's globular clusters have a retrograde orbit (meaning that they revolve around the galaxy in the reverse of the direction the galaxy is rotating), including the most massive, Omega Centauri. Its retrograde orbit suggests it may be a remnant of a dwarf galaxy captured by the Milky Way.
Composition
Globular clusters are generally composed of hundreds of thousands of low-metal, old stars. The stars found in a globular cluster are similar to those in the bulge of a spiral galaxy but confined to a spheroid in which half the light is emitted within a radius of only a few to a few tens of parsecs. They are free of gas and dust, and it is presumed that all the gas and dust was long ago either turned into stars or blown out of the cluster by the massive first-generation stars.
Globular clusters can contain a high density of stars; on average about 0.4 stars per cubic parsec, increasing to 100 or 1000 stars/pc³ in the core of the cluster. In comparison, the stellar density around the Sun is roughly 0.1 stars/pc³. The typical distance between stars in a globular cluster is about one light year, but at its core the separation between stars averages about a third of a light year, thirteen times closer than the Sun is to its nearest neighbor, Proxima Centauri.
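As a rough consistency check on the core figure above, the mean separation between stars for a uniform number density n scales as roughly n^(−1/3). The short sketch below assumes a uniform density (the cluster's strong density gradient is ignored) and uses the standard parsec-to-light-year conversion; it is an order-of-magnitude illustration rather than a measurement.

```python
# Crude uniform-density estimate: mean separation ~ n**(-1/3).
PC_TO_LY = 3.2616  # light years per parsec (standard conversion)

def mean_separation_ly(stars_per_pc3: float) -> float:
    """Approximate distance between neighbouring stars, in light years."""
    return stars_per_pc3 ** (-1.0 / 3.0) * PC_TO_LY

# Core density of ~1000 stars per cubic parsec, as quoted above:
print(f"{mean_separation_ly(1000.0):.2f} ly")  # ~0.33 ly, about a third of a light year
```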
Globular clusters are thought to be unfavorable locations for planetary systems. Planetary orbits are dynamically unstable within the cores of dense clusters because of the gravitational perturbations of passing stars. A planet orbiting at one astronomical unit around a star that is within the core of a dense cluster, such as 47 Tucanae, would survive only on the order of a hundred million years. There is a planetary system orbiting a pulsar (PSR B1620−26) that belongs to the globular cluster M4, but these planets likely formed after the event that created the pulsar.
Some globular clusters, like Omega Centauri in the Milky Way and Mayall II in the Andromeda Galaxy, are extraordinarily massive, measuring several million solar masses () and having multiple stellar populations. Both are evidence that supermassive globular clusters formed from the cores of dwarf galaxies that have been consumed by larger galaxies. About a quarter of the globular cluster population in the Milky Way may have been accreted this way, as with more than 60% of the globular clusters in the outer halo of Andromeda.
Heavy element content
Globular clusters normally consist of Population II stars which, compared with Population I stars such as the Sun, have a higher proportion of hydrogen and helium and a lower proportion of heavier elements. Astronomers refer to these heavier elements as metals (distinct from the material concept) and to the proportions of these elements as the metallicity. Produced by stellar nucleosynthesis, the metals are recycled into the interstellar medium and enter a new generation of stars. The proportion of metals can thus be an indication of the age of a star in simple models, with older stars typically having a lower metallicity.
The Dutch astronomer Pieter Oosterhoff observed two special populations of globular clusters, which became known as Oosterhoff groups. The second group has a slightly longer period of RR Lyrae variable stars. While both groups have a low proportion of metallic elements as measured by spectroscopy, the metal spectral lines in the stars of Oosterhoff type I (OoI) clusters are not quite as weak as those in type II (OoII), and so type I stars are referred to as metal-rich (e.g. Terzan 7), while type II stars are metal-poor (e.g. ESO 280-SC06). These two distinct populations have been observed in many galaxies, especially massive elliptical galaxies. Both groups are nearly as old as the universe itself and are of similar ages. Suggested scenarios to explain these subpopulations include violent gas-rich galaxy mergers, the accretion of dwarf galaxies, and multiple phases of star formation in a single galaxy. In the Milky Way, the metal-poor clusters are associated with the halo and the metal-rich clusters with the bulge.
A large majority of the metal-poor clusters in the Milky Way are aligned on a plane in the outer part of the galaxy's halo. This observation supports the view that type II clusters were captured from a satellite galaxy, rather than being the oldest members of the Milky Way's globular cluster system as was previously thought. The difference between the two cluster types would then be explained by a time delay between when the two galaxies formed their cluster systems.
Exotic components
Close interactions and near-collisions of stars occur relatively often in globular clusters because of their high star density. These chance encounters give rise to some exotic classes of stars, such as blue stragglers, millisecond pulsars, and low-mass X-ray binaries, which are much more common in globular clusters. How blue stragglers form remains unclear, but most models attribute them to interactions between stars, such as stellar mergers, the transfer of material from one star to another, or even an encounter between two binary systems. The resulting star has a higher temperature than other stars in the cluster with comparable luminosity and thus differs from the main-sequence stars formed early in the cluster's existence. Some clusters have two distinct sequences of blue stragglers, one bluer than the other.
Astronomers have searched for black holes within globular clusters since the 1970s. The required resolution for this task is exacting; it is only with the Hubble Space Telescope (HST) that the first claimed discoveries were made, in 2002 and 2003. Based on HST observations, other researchers suggested the existence of a (solar masses) intermediate-mass black hole in the globular cluster M15 and a black hole in the Mayall II cluster of the Andromeda Galaxy. Both X-ray and radio emissions from Mayall II appear consistent with an intermediate-mass black hole; however, these claimed detections are controversial.
The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. One research group pointed out that the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II. Observations from 2018 find no evidence for an intermediate-mass black hole in any globular cluster, including M15, but cannot definitively rule out one with a mass of . Finally, in 2023, an analysis of HST and the Gaia spacecraft data from the closest globular cluster, Messier 4, revealed an excess mass of roughly in the center of this cluster, which appears to not be extended. This could thus be considered as kinematic evidence for an intermediate-mass black hole (even if an unusually compact cluster of compact objects like white dwarfs, neutron stars or stellar-mass black holes cannot be completely discounted).
The confirmation of intermediate-mass black holes in globular clusters would have important ramifications for theories of galaxy development as being possible sources for the supermassive black holes at their centers. The mass of these supposed intermediate-mass black holes is proportional to the mass of their surrounding clusters, following a pattern previously discovered between supermassive black holes and their surrounding galaxies.
Hertzsprung–Russell diagrams
Hertzsprung–Russell diagrams (H–R diagrams) of globular clusters allow astronomers to determine many of the properties of their populations of stars. An H–R diagram is a graph of a large sample of stars plotting their absolute magnitude (their luminosity, or brightness measured from a standard distance), as a function of their color index. The color index, roughly speaking, measures the color of the star; positive color indices indicate a reddish star with a cool surface temperature, while negative values indicate a bluer star with a hotter surface. Stars on an H–R diagram mostly lie along a roughly diagonal line sloping from hot, luminous stars in the upper left to cool, faint stars in the lower right. This line is known as the main sequence and represents the primary stage of stellar evolution. The diagram also includes stars in later evolutionary stages such as the cool but luminous red giants.
Constructing an H–R diagram requires knowing the distance to the observed stars to convert apparent into absolute magnitude. Because all the stars in a globular cluster have about the same distance from Earth, a color–magnitude diagram using their observed magnitudes looks like a shifted H–R diagram (because of the roughly constant difference between their apparent and absolute magnitudes). This shift is called the distance modulus and can be used to calculate the distance to the cluster. The modulus is determined by comparing features (like the main sequence) of the cluster's color–magnitude diagram to corresponding features in an H–R diagram of another set of stars, a method known as spectroscopic parallax or main-sequence fitting.
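A minimal sketch of that step, assuming the standard relation m − M = 5 log10(d / 10 pc); the 14.8-magnitude offset used below is a hypothetical value chosen for illustration, not a measurement of any real cluster.

```python
def distance_pc(distance_modulus: float) -> float:
    """Distance in parsecs implied by the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** (distance_modulus / 5.0 + 1.0)

# Hypothetical offset of 14.8 magnitudes between a cluster's colour-magnitude
# diagram and a calibrated H-R diagram (illustrative value only):
print(f"{distance_pc(14.8):,.0f} pc")  # ~9,120 pc
```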
Properties
Since globular clusters form at once from a single giant molecular cloud, a cluster's stars have roughly the same age and composition. A star's evolution is primarily determined by its initial mass, so the positions of stars in a cluster's H–R or color–magnitude diagram mostly reflect their initial masses. A cluster's H–R diagram, therefore, appears quite different from H–R diagrams containing stars of a wide variety of ages. Almost all stars fall on a well-defined curve in globular cluster H–R diagrams, and that curve's shape indicates the age of the cluster. A more detailed H–R diagram often reveals multiple stellar populations as indicated by the presence of closely separated curves, each corresponding to a distinct population of stars with a slightly different age or composition. Observations with the Wide Field Camera 3, installed in 2009 on the Hubble Space Telescope, made it possible to distinguish these slightly different curves.
The most massive main-sequence stars have the highest luminosity and will be the first to evolve into the giant star stage. As the cluster ages, stars of successively lower masses will do the same. Therefore, the age of a single-population cluster can be measured by looking for those stars just beginning to enter the giant star stage, which form a "knee" in the H–R diagram called the main-sequence turnoff, bending to the upper right from the main-sequence line. The absolute magnitude at this bend is directly a function of the cluster's age; an age scale can be plotted on an axis parallel to the magnitude.
The morphology and luminosity of globular cluster stars in H–R diagrams are influenced by numerous parameters, many of which are still actively researched. Recent observations have overturned the historical paradigm that all globular clusters consist of stars born at exactly the same time, or sharing exactly the same chemical abundance. Some clusters feature multiple populations, slightly differing in composition and age; for example, high-precision imagery of cluster NGC 2808 discerned three close, but distinct, main sequences. Further, the placements of the cluster stars in an H–R diagram (including the brightnesses of distance indicators) can be influenced by observational biases. One such effect, called blending, arises when the cores of globular clusters are so dense that observations see multiple stars as a single target. The brightness measured for that seemingly single star is thus incorrect, too bright, given that multiple stars contributed. In turn, the computed distance is incorrect, so the blending effect can introduce a systematic uncertainty into the cosmic distance ladder and may bias the estimated age of the universe and the Hubble constant.
Consequences
The blue stragglers appear on the H–R diagram as a series diverging from the main sequence in the direction of brighter, bluer stars. White dwarfs (the final remnants of some Sun-like stars), which are much fainter and somewhat hotter than the main-sequence stars, lie on the bottom-left of an H–R diagram. Globular clusters can be dated by looking at the temperatures of the coolest white dwarfs, often giving results as old as 12.7 billion years. In comparison, open clusters are rarely older than about half a billion years. The ages of globular clusters place a lower bound on the age of the entire universe, presenting a significant constraint in cosmology. Astronomers were historically faced with age estimates of clusters older than their cosmological models would allow, but better measurements of cosmological parameters, through deep sky surveys and satellites, appear to have resolved this issue.
Studying globular clusters sheds light on how the composition of the formational gas and dust affects stellar evolution; the stars' evolutionary tracks vary depending on the abundance of heavy elements. Data obtained from these studies are then used to study the evolution of the Milky Way as a whole.
Morphology
In contrast to open clusters, most globular clusters remain gravitationally bound together for time periods comparable to the lifespans of most of their stars. Strong tidal interactions with other large masses result in the dispersal of some stars, leaving behind "tidal tails" of stars removed from the cluster.
After formation, the stars in the globular cluster begin to interact gravitationally with each other. The velocities of the stars steadily change, and the stars lose any history of their original velocity. The characteristic interval for this to occur is the relaxation time, related to the characteristic length of time a star needs to cross the cluster and the number of stellar masses. The relaxation time varies by cluster, but a typical value is on the order of one billion years.
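One common textbook approximation relates these quantities as t_relax ≈ N / (8 ln N) × t_cross; the sketch below uses that assumed relation with purely illustrative cluster parameters (the star count, radius, and velocity dispersion are not taken from the text).

```python
import math

PC_KM = 3.086e13   # kilometres per parsec
YEAR_S = 3.156e7   # seconds per year

def crossing_time_yr(radius_pc: float, velocity_km_s: float) -> float:
    """Time for a star to cross the cluster once, in years."""
    return radius_pc * PC_KM / velocity_km_s / YEAR_S

def relaxation_time_yr(n_stars: float, radius_pc: float, velocity_km_s: float) -> float:
    """Two-body relaxation time using the assumed t_relax ~ N / (8 ln N) * t_cross."""
    return n_stars / (8.0 * math.log(n_stars)) * crossing_time_yr(radius_pc, velocity_km_s)

# Illustrative cluster: 200,000 stars, 4 pc radius, 10 km/s internal velocities.
print(f"{relaxation_time_yr(2e5, 4.0, 10.0):.1e} years")  # ~8e8 years, of order a billion
```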
Although globular clusters are generally spherical in form, ellipticity can form via tidal interactions. Clusters within the Milky Way and the Andromeda Galaxy are typically oblate spheroids in shape, while those in the Large Magellanic Cloud are more elliptical.
Radii
Astronomers characterize the morphology (shape) of a globular cluster by means of standard radii: the core radius (rc), the half-light radius (rh), and the tidal or Jacobi radius (rt). The radius can be expressed as a physical distance or as a subtended angle in the sky. Considering a radius around the core, the surface luminosity of the cluster steadily decreases with distance, and the core radius is the distance at which the apparent surface luminosity has dropped by half. A comparable quantity is the half-light radius, or the distance from the core containing half the total luminosity of the cluster; the half-light radius is typically larger than the core radius.
Most globular clusters have a half-light radius of less than ten parsecs (pc), although some globular clusters have very large radii, like NGC 2419 (rh = 18 pc) and Palomar 14 (rh = 25 pc). The half-light radius includes stars in the outer part of the cluster that happen to lie along the line of sight, so theorists also use the half-mass radius (rm)the radius from the core that contains half the total mass of the cluster. A small half-mass radius, relative to the overall size, indicates a dense core. Messier 3 (M3), for example, has an overall visible dimension of about 18 arc minutes, but a half-mass radius of only 1.12 arc minutes.
The tidal radius, or Hill sphere, is the distance from the center of the globular cluster at which the external gravitation of the galaxy has more influence over the stars in the cluster than does the cluster itself. This is the distance at which the individual stars belonging to a cluster can be separated away by the galaxy. The tidal radius of M3, for example, is about forty arc minutes, or about 113 pc.
Mass segregation, luminosity and core collapse
In most Milky Way clusters, the surface brightness of a globular cluster as a function of decreasing distance to the core first increases, then levels off at a distance typically 1–2 parsecs from the core. About 20% of the globular clusters have undergone a process termed "core collapse". The luminosity in such a cluster increases steadily all the way to the core region.
Models of globular clusters predict that core collapse occurs when the more massive stars in a globular cluster encounter their less massive counterparts. Over time, dynamic processes cause individual stars to migrate from the center of the cluster to the outside, resulting in a net loss of kinetic energy from the core region and leading the region's remaining stars to occupy a more compact volume. When this gravothermal instability occurs, the central region of the cluster becomes densely crowded with stars, and the surface brightness of the cluster forms a power-law cusp. A massive black hole at the core could also result in a luminosity cusp. Over a long time, this leads to a concentration of massive stars near the core, a phenomenon called mass segregation.
The dynamical heating effect of binary star systems works to prevent an initial core collapse of the cluster. When a star passes near a binary system, the orbit of the latter pair tends to contract, releasing energy. Only after this primordial supply of energy is exhausted can a deeper core collapse proceed. In contrast, the effect of tidal shocks as a globular cluster repeatedly passes through the plane of a spiral galaxy tends to significantly accelerate core collapse.
Core collapse may be divided into three phases. During a cluster's adolescence, core collapse begins with stars nearest the core. Interactions between binary star systems prevent further collapse as the cluster approaches middle age. The central binaries are either disrupted or ejected, resulting in a tighter concentration at the core. The interaction of stars in the collapsed core region causes tight binary systems to form. As other stars interact with these tight binaries they increase the energy at the core, causing the cluster to re-expand. As the average time for a core collapse is typically less than the age of the galaxy, many of a galaxy's globular clusters may have passed through a core collapse stage, then re-expanded.
The HST has provided convincing observational evidence of this stellar mass-sorting process in globular clusters. Heavier stars slow down and crowd at the cluster's core, while lighter stars pick up speed and tend to spend more time at the cluster's periphery. The cluster 47 Tucanae, made up of about one million stars, is one of the densest globular clusters in the Southern Hemisphere. This cluster was subjected to an intensive photographic survey that obtained precise velocities for nearly fifteen thousand stars in this cluster.
The overall luminosities of the globular clusters within the Milky Way and the Andromeda Galaxy each have a roughly Gaussian distribution, with an average magnitude Mv and a variance σ². This distribution of globular cluster luminosities is called the Globular Cluster Luminosity Function (GCLF). For the Milky Way, Mv = , σ = . The GCLF has been used as a "standard candle" for measuring the distance to other galaxies, under the assumption that globular clusters in remote galaxies behave similarly to those in the Milky Way.
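The same distance-modulus arithmetic underlies the GCLF method: if the turnover (peak) of a remote galaxy's cluster luminosity function is observed at apparent magnitude m_peak, and the intrinsic peak absolute magnitude M_peak is assumed universal, the offset gives the distance. The numbers below are hypothetical, chosen only to show the shape of the calculation.

```python
def gclf_distance_pc(m_peak: float, M_peak: float) -> float:
    """Distance from the observed vs. assumed intrinsic GCLF turnover magnitudes."""
    return 10 ** ((m_peak - M_peak) / 5.0 + 1.0)

# Hypothetical turnover at apparent magnitude 23.5, assumed intrinsic peak of -7.5:
print(f"{gclf_distance_pc(23.5, -7.5):.2e} pc")  # ~1.6e7 pc, i.e. roughly 16 Mpc
```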
N-body simulations
Computing the gravitational interactions between stars within a globular cluster requires solving the N-body problem. The naive computational cost for a dynamic simulation increases in proportion to N² (where N is the number of objects), so the computing requirements to accurately simulate a cluster of thousands of stars can be enormous. A more efficient method of simulating the N-body dynamics of a globular cluster is done by subdivision into small volumes and velocity ranges, and using probabilities to describe the locations of the stars. Their motions are described by means of the Fokker–Planck equation, often using a model describing the mass density as a function of radius, such as a Plummer model. The simulation becomes more difficult when the effects of binaries and the interaction with external gravitation forces (such as from the Milky Way galaxy) must also be included. In 2010 a low-density globular cluster's lifetime evolution was able to be directly computed, star-by-star.
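A minimal sketch of the direct-summation step that drives the N² cost: each star's acceleration is accumulated over every other star. The unit system, softening length, and random star positions below are arbitrary illustrative choices, not part of any published cluster model.

```python
import numpy as np

def accelerations(positions, masses, G=1.0, softening=1e-3):
    """Direct-summation gravitational accelerations; cost grows as N**2."""
    n = len(positions)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = positions[j] - positions[i]          # vector from star i to star j
            r2 = dr @ dr + softening ** 2             # softened squared distance
            acc[i] += G * masses[j] * dr / r2 ** 1.5  # Newtonian acceleration on star i
    return acc

rng = np.random.default_rng(0)
pos = rng.normal(size=(100, 3))        # 100 stars -> 100 * 99 pair evaluations
mass = np.ones(100)
print(accelerations(pos, mass).shape)  # (100, 3)
```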
Completed N-body simulations have shown that stars can follow unusual paths through the cluster, often forming loops and falling more directly toward the core than would a single star orbiting a central mass. Additionally, some stars gain sufficient energy to escape the cluster due to gravitational interactions that result in a sufficient increase in velocity. Over long periods of time this process leads to the dissipation of the cluster, a process termed evaporation. The typical time scale for the evaporation of a globular cluster is 10¹⁰ years. The ultimate fate of a globular cluster must be either to accrete stars at its core, causing its steady contraction, or gradual shedding of stars from its outer layers.
Binary stars form a significant portion of stellar systems, with up to half of all field stars and open cluster stars occurring in binary systems. The present-day binary fraction in globular clusters is difficult to measure, and any information about their initial binary fraction is lost by subsequent dynamical evolution. Numerical simulations of globular clusters have demonstrated that binaries can hinder and even reverse the process of core collapse in globular clusters. When a star in a cluster has a gravitational encounter with a binary system, a possible result is that the binary becomes more tightly bound and kinetic energy is added to the solitary star. When the massive stars in the cluster are sped up by this process, it reduces the contraction at the core and limits core collapse.
Intermediate forms
Cluster classification is not always definitive; objects have been found that can be classified in more than one category. For example, BH 176 in the southern part of the Milky Way has properties of both an open and a globular cluster.
In 2005 astronomers discovered a new, "extended" type of star cluster in the Andromeda Galaxy's halo, similar to the globular cluster. The three new-found clusters have a similar star count to globular clusters and share other characteristics, such as stellar populations and metallicity, but are distinguished by their larger size, several hundred light years across, and some hundred times lower density. Their stars are separated by larger distances; parametrically, these clusters lie somewhere between a globular cluster and a dwarf spheroidal galaxy.
The formation of these extended clusters is likely related to accretion. It is unclear why the Milky Way lacks such clusters; Andromeda is unlikely to be the sole galaxy with them, but their presence in other galaxies remains unknown.
Tidal encounters
When a globular cluster comes close to a large mass, such as the core region of a galaxy, it undergoes a tidal interaction. The difference in gravitational strength between the nearer and further parts of the cluster results in an asymmetric, tidal force. A "tidal shock" occurs whenever the orbit of a cluster takes it through the plane of a galaxy.
Tidal shocks can pull stars away from the cluster halo, leaving only the core part of the cluster; these trails of stars can extend several degrees away from the cluster. These tails typically both precede and follow the cluster along its orbit and can accumulate significant portions of the original mass of the cluster, forming clump-like features. The globular cluster Palomar 5, for example, is near the apogalactic point of its orbit after passing through the Milky Way. Streams of stars extend outward toward the front and rear of the orbital path of this cluster, stretching to distances of 13,000 light years. Tidal interactions have stripped away much of Palomar 5's mass; further interactions with the galactic core are expected to transform it into a long stream of stars orbiting the Milky Way in its halo.
The Milky Way is in the process of tidally stripping the Sagittarius Dwarf Spheroidal Galaxy of stars and globular clusters through the Sagittarius Stream. As many as 20% of the globular clusters in the Milky Way's outer halo may have originated in that galaxy. Palomar 12, for example, most likely originated in the Sagittarius Dwarf Spheroidal but is now associated with the Milky Way. Tidal interactions like these add kinetic energy into a globular cluster, dramatically increasing the evaporation rate and shrinking the size of the cluster. The increased evaporation accelerates the process of core collapse.
Planets
Astronomers are searching for exoplanets of stars in globular star clusters. A search in 2000 for giant planets in the globular cluster came up negative, suggesting that the abundance of heavier elements – low in globular clusters – necessary to build these planets may need to be at least 40% of the Sun's abundance. Because terrestrial planets are built from heavier elements such as silicon, iron and magnesium, member stars have a far lower likelihood of hosting Earth-mass planets than stars in the solar neighborhood. Globular clusters are thus unlikely to host habitable terrestrial planets.
A giant planet was found in the globular cluster , orbiting a pulsar in the binary star system . The planet's eccentric and highly inclined orbit suggests it may have been formed around another star in the cluster, then "exchanged" into its current arrangement. The likelihood of close encounters between stars in a globular cluster can disrupt planetary systems; some planets break free to become rogue planets, orbiting the galaxy. Planets orbiting close to their star can become disrupted, potentially leading to orbital decay and an increase in orbital eccentricity and tidal effects. In 2024, a gas giant or brown dwarf was found to closely orbit the pulsar "M62H", where the name indicates that the planetary system belongs to the globular cluster Messier 62.
See also
Footnotes
References
Further reading
Books
Review articles
External links
Globular Clusters, Students for the Exploration and Development of Space Messier pages
Milky Way Globular Clusters
Catalogue of Milky Way Globular Cluster Parameters by William E. Harris, McMaster University, Ontario, Canada
A galactic globular cluster database by Marco Castellani, Rome Astronomical Observatory, Italy
Catalogue of structural and kinematic parameters and galactic orbits of globular clusters by Holger Baumgardt, University of Queensland, Australia
SCYON, a newsletter dedicated to star clusters.
MODEST, a loose collaboration of scientists working on star clusters.
Star clusters | Globular cluster | [
"Astronomy"
] | 7,235 | [
"Astronomical objects",
"Star clusters"
] |
12,882 | https://en.wikipedia.org/wiki/Gallon | The gallon is a unit of volume in British imperial units and United States customary units. Three different versions are in current use:
the imperial gallon (imp gal), defined as , which is or was used in the United Kingdom, Ireland, Canada, Australia, New Zealand, and some Caribbean countries;
the US liquid gallon (US gal), defined as , which is used in the United States and some Latin American and Caribbean countries; and
the US dry gallon, defined as US bushel (exactly ).
There are two pints in a quart and four quarts in a gallon. Different sizes of pints account for the different sizes of the imperial and US gallons.
The IEEE standard symbol for both US (liquid) and imperial gallon is gal, not to be confused with the gal (symbol: Gal), a CGS unit of acceleration.
Definitions
The gallon currently has one definition in the imperial system, and two definitions (liquid and dry) in the US customary system. Historically, there were many definitions and redefinitions.
English system gallons
There were a number of systems of liquid measurements in the United Kingdom prior to the 19th century.
Winchester or corn gallon was (1697 act 8 & 9 Will. 3. c. 22)
Henry VII (Winchester) corn gallon from 1497 onwards was
Elizabeth I corn gallon from 1601 onwards was
William III corn gallon from 1697 onwards was
Old English (Elizabethan) ale gallon was (Ale Measures Act 1698 (11 Will. 3. c. 15))
London 'Guildhall' gallon (before 1688) was then
Old English (Queen Anne) wine gallon was standardized as in the 1706 act 6 Ann. c. 27:
Jersey gallon (from 1562 onwards) was
Guernsey gallon (17th century origins until 1917) was
Irish gallon was (Poynings' Act 1495 (10 Hen. 7. c. 22 (I)) confirmed by 1736 act 9 Geo. 2. c. 9 (I))
Imperial gallon
The British imperial gallon (frequently called simply "gallon") is defined as exactly 4.54609 dm3 (4.54609 litres). It is used in some Commonwealth countries, and until 1976 was defined as the volume of water at whose mass is . There are four imperial quarts in a gallon, two imperial pints in a quart, and there are 20 imperial fluid ounces in an imperial pint, yielding 160 fluid ounces in an imperial gallon.
US liquid gallon
The US liquid gallon (frequently called simply "gallon") is legally defined as 231 cubic inches, which is exactly . A US liquid gallon can contain about of water at , and is about 16.7% less than the imperial gallon. There are four quarts in a gallon, two pints in a quart and 16 US fluid ounces in a US pint, which makes the US fluid ounce equal to of a US gallon.
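The litre figure follows mechanically from the 231-cubic-inch definition and the exact conversion 1 inch = 2.54 cm; the short check below also uses the imperial-gallon value of 4.54609 L given earlier in this article.

```python
CUBIC_INCH_ML = 2.54 ** 3                # 16.387064 mL, from the exact inch definition
US_GALLON_ML = 231 * CUBIC_INCH_ML       # 3785.411784 mL (exact, up to float rounding)
IMP_GALLON_ML = 4546.09                  # imperial gallon in mL, from the definition above

print(US_GALLON_ML / 1000)               # ~3.785411784 litres
print(1 - US_GALLON_ML / IMP_GALLON_ML)  # ~0.167, i.e. about 16.7% less than the imperial gallon
```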
In order to overcome the effects of expansion and contraction with temperature when using a gallon to specify a quantity of material for purposes of trade, it is common to define the temperature at which the material will occupy the specified volume. For example, the volumes of petroleum products and alcoholic beverages are both referenced to in government regulations.
US dry gallon
Since the dry measure is one-eighth of a US Winchester bushel of cubic inches, it is equal to exactly 268.8025 cubic inches, which is . The US dry gallon is not used in commerce, and is also not listed in the relevant statute, which jumps from the dry pint to the bushel.
Worldwide usage
Imperial gallon
As of 2021, the imperial gallon continues to be used as the standard petrol unit on 10 Caribbean island groups, consisting of:
four British Overseas Territories (Anguilla, the British Virgin Islands, the Cayman Islands, and Montserrat) and
six countries (Antigua and Barbuda, Dominica, Grenada, Saint Christopher and Nevis, Saint Lucia, and Saint Vincent and the Grenadines).
All 12 of the Caribbean islands use miles per hour for speed limits signage, and drive on the left side of the road.
The United Arab Emirates ceased selling petrol by the imperial gallon in 2010 and switched to the litre, with Guyana following suit in 2013. In 2014, Myanmar switched from the imperial gallon to the litre.
Antigua and Barbuda has proposed switching to selling petrol by litres since 2015.
In the European Union the gallon was removed from the list of legally defined primary units of measure catalogue in the EU directive 80/181/EEC for trading and official purposes, effective from 31 December 1994. Under the directive the gallon could still be used, but only as a supplementary or secondary unit.
As a result of the EU directive Ireland and the United Kingdom passed legislation to replace the gallon with the litre as a primary unit of measure in trade and in the conduct of public business, effective from 31 December 1993, and 30 September 1995 respectively. Though the gallon has ceased to be a primary unit of trade, it can still be legally used in both the UK and Ireland as a supplementary unit. However, barrels and large containers of beer, oil and other fluids are commonly measured in multiples of an imperial gallon.
Miles per imperial gallon is used as the primary fuel economy unit in the United Kingdom and as a supplementary unit in Canada on official documentation.
US liquid gallon
Other than the United States, petrol is sold by the US gallon in 12 other countries and four US territories:
the Caribbean countries of Dominican Republic and Haiti,
the Central American countries of Belize, Guatemala, and Nicaragua,
the South American countries of Colombia, Ecuador, and Peru,
the Pacific Ocean countries of Marshall Islands, Federated States of Micronesia, and Palau, which are associated countries of the United States,
the African country of Liberia, a former protectorate of the United States, and
the US territories of American Samoa, the Northern Mariana Islands, Guam, and the US Virgin Islands. Puerto Rico ceased selling petrol by the US gallon in 1980.
The latest country to cease using the gallon is El Salvador in June 2021.
The Imperial and US liquid gallon
Both the US gallon and imperial gallon are used in the Turks and Caicos Islands and the Bahamas. In the Turks and Caicos Islands this arose from an increase in tax duties, which was disguised by levying the same duty on the US gallon (3.79 L) as was previously levied on the imperial gallon (4.55 L).
Legacy
In some parts of the Middle East, such as the United Arab Emirates and Bahrain, 18.9-litre water cooler bottles are marketed as five-gallon bottles.
Relationship to other units
Both the US liquid and imperial gallon are divided into four quarts (quarter gallons), which in turn are divided into two pints, which in turn are divided into two cups (not in customary use outside the US), which in turn are further divided into two gills. Thus, both gallons are equal to four quarts, eight pints, sixteen cups, or thirty-two gills.
The imperial gill is further divided into five fluid ounces, whereas the US gill is divided into four fluid ounces, meaning an imperial fluid ounce is of an imperial pint, or of an imperial gallon, while a US fluid ounce is of a US pint, or of a US gallon. Thus, the imperial gallon, quart, pint, cup and gill are approximately 20% larger than their US counterparts, meaning these are not interchangeable, but the imperial fluid ounce is only approximately 4% smaller than the US fluid ounce, meaning these are often used interchangeably.
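A short sketch of that subdivision chain and the resulting fluid-ounce sizes; only the two gallon definitions and the ounce counts (160 imperial, 128 US per gallon) come from the text above, and the rest is arithmetic.

```python
IMP_GALLON_ML = 4546.09       # imperial gallon
US_GALLON_ML = 3785.411784    # US liquid gallon

for name, gallon_ml, fl_oz_per_gal in [("imperial", IMP_GALLON_ML, 160),
                                        ("US", US_GALLON_ML, 128)]:
    quart, pint, cup, gill = (gallon_ml / d for d in (4, 8, 16, 32))
    fl_oz = gallon_ml / fl_oz_per_gal
    print(f"{name:>8}: quart {quart:7.1f} mL, pint {pint:6.1f} mL, "
          f"cup {cup:5.1f} mL, gill {gill:5.1f} mL, fl oz {fl_oz:5.2f} mL")

# imperial: quart ~1136.5, pint ~568.3, cup ~284.1, gill ~142.1, fl oz ~28.41
#       US: quart  ~946.4, pint ~473.2, cup ~236.6, gill ~118.3, fl oz ~29.57
# The imperial units are ~20% larger except the fluid ounce, which is ~4% smaller.
```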
Historically, a common bottle size for liquor in the US was the "fifth", i.e. one-fifth of a US gallon (or one-sixth of an imperial gallon). While spirit sales in the US were switched to metric measures in 1976, a 750 mL bottle is still sometimes known as a "fifth".
History
The term derives most immediately from galun, galon in Old Norman French, but the usage was common in several languages, for example in Old French and (bowl) in Old English. This suggests a common origin in Romance Latin, but the ultimate source of the word is unknown.
The gallon originated as the base of systems for measuring wine and beer in England. The sizes of gallon used in these two systems were different from each other: the first was based on the wine gallon (equal in size to the US gallon), and the second one either the ale gallon or the larger imperial gallon.
By the end of the 18th century, there were three definitions of the gallon in common use:
The corn gallon, or Winchester gallon, of about ,
The wine gallon, or Queen Anne's gallon, which was , and
The ale gallon of .
The corn or dry gallon is used (along with the dry quart and pint) in the United States for grain and other dry commodities. It is one-eighth of the (Winchester) bushel, originally defined as a cylindrical measure of inches in diameter and 8 inches in depth, which made the bushel . The bushel was later defined to be 2150.42 cubic inches exactly, thus making its gallon exactly (); in previous centuries, there had been a corn gallon of between 271 and 272 cubic inches.
The wine, fluid, or liquid gallon has been the standard US gallon since the early 19th century. The wine gallon, which some sources relate to the volume occupied by eight medieval merchant pounds of wine, was at one time defined as the volume of a cylinder 6 inches deep and 7 inches in diameter, i.e. . It was redefined during the reign of Queen Anne in 1706 as 231 cubic inches exactly, the earlier definition with approximated to .
Although the wine gallon had been used for centuries for import duty purposes, there was no legal standard of it in the Exchequer, while a smaller gallon was actually in use, requiring this statute; the 231 cubic inch gallon remains the US definition today.
In 1824, Britain adopted a close approximation to the ale gallon known as the imperial gallon, and abolished all other gallons in favour of it. Inspired by the kilogram-litre relationship, the imperial gallon was based on the volume of 10 pounds of distilled water weighed in air with brass weights with the barometer standing at and at a temperature of .
In 1963, this definition was refined as the space occupied by 10 pounds of distilled water of density weighed in air of density against weights of density (the original "brass" was refined as the densities of brass alloys vary depending on metallurgical composition), which was calculated as to ten significant figures.
The precise definition of exactly cubic decimetres (also , ≈ ) came after the litre was redefined in 1964. This was adopted shortly afterwards in Canada, and adopted in 1976 in the United Kingdom.
Sizes of gallons
Historically, gallons of various sizes were used in many parts of Western Europe. In these localities, it has been replaced as the unit of capacity by the litre.
References
External links
Customary units of measurement in the United States
Imperial units
Systems of units
Units of volume
Alcohol measurement
Cooking weights and measures | Gallon | [
"Mathematics"
] | 2,257 | [
"Units of volume",
"Quantity",
"Systems of units",
"Units of measurement"
] |
12,891 | https://en.wikipedia.org/wiki/Gene%20therapy | Gene therapy is a medical technology that aims to produce a therapeutic effect through the manipulation of gene expression or through altering the biological properties of living cells.
The first attempt at modifying human DNA was performed in 1980, by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989. The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990. Between 1989 and December 2018, over 2,900 clinical trials were conducted, with more than half of them in phase I. In 2003, Gendicine became the first gene therapy to receive regulatory approval. Since that time, further gene therapy drugs were approved, such as alipogene tiparvovec (2012), Strimvelis (2016), tisagenlecleucel (2017), voretigene neparvovec (2017), patisiran (2018), onasemnogene abeparvovec (2019), idecabtagene vicleucel (2021), nadofaragene firadenovec, valoctocogene roxaparvovec and etranacogene dezaparvovec (all 2022). Most of these approaches utilize adeno-associated viruses (AAVs) and lentiviruses for performing gene insertions, in vivo and ex vivo, respectively. AAVs are characterized by stabilizing the viral capsid, lower immunogenicity, ability to transduce both dividing and nondividing cells, the potential to integrate site specifically and to achieve long-term expression in the in-vivo treatment. ASO / siRNA approaches such as those conducted by Alnylam and Ionis Pharmaceuticals require non-viral delivery systems, and utilize alternative mechanisms for trafficking to liver cells by way of GalNAc transporters.
Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.
Background
Gene therapy was first conceptualized in the 1960s, when the feasibility of adding new genetic functions to mammalian cells began to be researched. Several methods to do so were tested, including injecting genes with a micropipette directly into a living mammalian cell, and exposing cells to a precipitate of DNA that contained the desired genes. Scientists theorized that a virus could also be used as a vehicle, or vector, to deliver new genes into cells.
One of the first scientists to report the successful direct incorporation of functional DNA into a mammalian cell was biochemist Dr. Lorraine Marquardt Kraus (6 September 1922 – 1 July 2016) at the University of Tennessee Health Science Center in Memphis, Tennessee. In 1961, she managed to genetically alter the hemoglobin of cells from bone marrow taken from a patient with sickle cell anaemia. She did this by incubating the patient's cells in tissue culture with DNA extracted from a donor with normal hemoglobin. In 1968, researchers Theodore Friedmann, Jay Seegmiller, and John Subak-Sharpe at the National Institutes of Health (NIH), Bethesda, in the United States successfully corrected genetic defects associated with Lesch-Nyhan syndrome, a debilitating neurological disease, by adding foreign DNA to cultured cells collected from patients suffering from the disease.
The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by geneticist Martin Cline of the University of California, Los Angeles in California, United States on 10 July 1980. Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified.
After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashanthi DeSilva was treated for ADA-SCID.
The first somatic treatment that produced a permanent genetic change was initiated in 1993. The goal was to cure malignant brain tumors by using recombinant DNA to transfer a gene making the tumor cells sensitive to a drug that in turn would cause the tumor cells to die.
The nucleic acid polymers delivered in gene therapy are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations. The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a "vector", which carries the molecule inside cells.
Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although it remained largely an experimental technique. These include treatment of the retinal diseases Leber's congenital amaurosis and choroideremia, X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia, and Parkinson's disease. Between 2013 and April 2014, US companies invested over $600 million in the field.
The first commercial gene therapy, Gendicine, was approved in China in 2003, for the treatment of certain cancers. In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.
In 2012, alipogene tiparvovec, a treatment for a rare inherited disorder, lipoprotein lipase deficiency, became the first treatment to be approved for clinical use in either the European Union or the United States after its endorsement by the European Commission.
Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered – replacing or disrupting defective genes. Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Alipogene tiparvovec treats one such disease, caused by a defect in lipoprotein lipase.
DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein. Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome. Naked DNA approaches have also been explored, especially in the context of vaccine development.
Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. These approaches involve removing cells from patients, editing a chromosome, and returning the transformed cells to patients.
Gene editing is a potential approach to alter the human genome to treat genetic diseases, viral diseases, and cancer. These approaches are being studied in clinical trials.
Classification
Breadth of definition
In 1986, a meeting at the Institute of Medicine defined gene therapy as the addition or replacement of a gene in a targeted cell type. In the same year, the FDA announced that it had jurisdiction over approving "gene therapy" without defining the term. The FDA added a very broad definition in 1993 of any treatment that would 'modify or manipulate the expression of genetic material or to alter the biological properties of living cells'. In 2018 this was narrowed to 'products that mediate their effects by transcription or translation of transferred genetic material or by specifically altering host (human) genetic sequences'.
Writing in 2018, in the Journal of Law and the Biosciences, Sherkow et al. argued for a narrower definition of gene therapy than the FDA's in light of new technology that would consist of any treatment that intentionally and permanently modified a cell's genome, with the definition of genome including episomes outside the nucleus but excluding changes due to episomes that are lost over time. This definition would also exclude introducing cells that did not derive from a patient themselves, but include ex vivo approaches, and would not depend on the vector used.
During the COVID-19 pandemic, some academics insisted that the mRNA vaccines for COVID were not gene therapy, in order to prevent the spread of incorrect information that the vaccine could alter DNA, while other academics maintained that the vaccines were a gene therapy because they introduced genetic material into a cell. Fact-checkers, such as Full Fact, Reuters, PolitiFact, and FactCheck.org, said that calling the vaccines a gene therapy was incorrect. Podcast host Joe Rogan was criticized for calling mRNA vaccines gene therapy, as was British politician Andrew Bridgen, with fact-checker Full Fact calling for Bridgen to be removed from the Conservative Party for this and other statements.
Genes present or added
Gene therapy encapsulates many forms of adding different nucleic acids to a cell. Gene augmentation adds a new protein-coding gene to a cell. One form of gene augmentation is gene replacement therapy, a treatment for monogenic recessive disorders in which a single gene is not functional and an additional functional copy is added. For diseases caused by multiple genes or by a dominant gene, gene silencing or gene editing approaches are more appropriate, but gene addition, a form of gene augmentation where a new gene is added, may improve a cell's function without modifying the genes that cause a disorder.
Cell types
Gene therapy may be classified into two types by the type of cell it affects: somatic cell and germline gene therapy.
In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease. Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.
In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and higher risks versus SCGT. The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).
In vivo versus ex vivo therapies
In in vivo gene therapy, a vector (typically, a virus) is introduced to the patient, which then achieves the desired biological effect by passing the genetic material (e.g. for a missing protein) into the patient's cells. In ex vivo gene therapies, such as CAR-T therapeutics, the patient's own cells (autologous) or healthy donor cells (allogeneic) are modified outside the body (hence, ex vivo) using a vector to express a particular protein, such as a chimeric antigen receptor.
In vivo gene therapy is seen as simpler, since it does not require the harvesting of mitotic cells. However, ex vivo gene therapies are better tolerated and less associated with severe immune responses. The death of Jesse Gelsinger in a trial of an adenovirus-vectored treatment for ornithine transcarbamylase deficiency due to a systemic inflammatory reaction led to a temporary halt on gene therapy trials across the United States. Both in vivo and ex vivo therapeutics are now generally seen as safe.
Gene editing
The concept of gene therapy is to fix a genetic problem at its source. If, for instance, a mutation in a certain gene causes the production of a dysfunctional protein resulting (usually recessively) in an inherited disease, gene therapy could be used to deliver a copy of this gene that does not contain the deleterious mutation and thereby produces a functional protein. This strategy is referred to as gene replacement therapy and could be employed to treat inherited retinal diseases.
While the concept of gene replacement therapy is mostly suitable for recessive diseases, novel strategies have been suggested that are capable of also treating conditions with a dominant pattern of inheritance.
The introduction of CRISPR gene editing has opened new doors for its application and utilization in gene therapy, as instead of pure replacement of a gene, it enables correction of the particular genetic defect. Solutions to medical hurdles, such as the eradication of latent human immunodeficiency virus (HIV) reservoirs and correction of the mutation that causes sickle cell disease, may be available as a therapeutic option in the future.
Prosthetic gene therapy aims to enable cells of the body to take over functions they physiologically do not carry out. One example is so-called vision restoration gene therapy, which aims to restore vision in patients with end-stage retinal diseases. In end-stage retinal diseases, the photoreceptors, the primary light-sensitive cells of the retina, are irreversibly lost. By means of prosthetic gene therapy, light-sensitive proteins are delivered into the remaining cells of the retina to render them light sensitive and thereby enable them to signal visual information towards the brain.
In vivo, gene editing systems using CRISPR have been used in studies with mice to treat cancer and have been effective at reducing tumors. In vitro, the CRISPR system has been used to treat HPV cancer tumors. Adeno-associated virus and lentivirus-based vectors have been used to introduce the genetic material for the CRISPR system.
Vectors
The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).
Viruses
In order to replicate, viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the nuclear genome of the host cell. Scientists exploit this by substituting part of a virus's genetic material with therapeutic DNA or RNA. Like the genetic material (DNA or RNA) in viruses, therapeutic genetic material can be designed to simply serve as a temporary blueprint that degrades naturally, as in non-integrative vectors, or to enter the host's nucleus, becoming a permanent part of the host's nuclear DNA in infected cells.
A number of viruses have been used for human gene therapy, including viruses such as lentivirus, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus.
Adenovirus viral vectors (Ad) temporarily modify a cell's genetic expression with genetic material that is not integrated into the host cell's DNA. As of 2017, such vectors were used in 20% of trials for gene therapy. Adenovirus vectors are mostly used in cancer treatments and novel genetic vaccines such as the Ebola vaccine, vaccines used in clinical trials for HIV and SARS-CoV-2, or cancer vaccines.
Lentiviral vectors based on lentivirus, a retrovirus, can modify a cell's nuclear genome to permanently express a gene, although vectors can be modified to prevent integration. Retroviruses were used in 18% of trials before 2018. Libmeldy is an ex vivo stem cell treatment for metachromatic leukodystrophy which uses a lentiviral vector and was approved by the European Medicines Agency in 2020.
Adeno-associated virus (AAV) is a virus that is incapable of transmission between cells unless the cell is infected by another virus, a helper virus. Adenovirus and the herpes viruses act as helper viruses for AAV. AAV persists within the cell outside of the cell's nuclear genome for an extended period of time through the formation of concatemers mostly organized as episomes. Genetic material from AAV vectors is integrated into the host cell's nuclear genome at a low frequency, likely mediated by the DNA-modifying enzymes of the host cell. Animal models suggest that integration of AAV genetic material into the host cell's nuclear genome may cause hepatocellular carcinoma, a form of liver cancer. Several AAV investigational agents have been explored in the treatment of wet age-related macular degeneration by both intravitreal and subretinal approaches as a potential application of AAV gene therapy for human disease.
Non-viral
Non-viral vectors for gene therapy present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Newer technologies offer promise of solving these problems, with the advent of increased cell-specific targeting and subcellular trafficking control.
Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles. These therapeutics can be administered directly or through scaffold enrichment.
More recent approaches, such as those performed by companies such as Ligandal, offer the possibility of creating cell-specific targeting technologies for a variety of gene therapy modalities, including RNA, DNA and gene editing tools such as CRISPR. Other companies, such as Arbutus Biopharma and Arcturus Therapeutics, offer non-viral, non-cell-targeted approaches that mainly exhibit liver trophism. In more recent years, startups such as Sixfold Bio, GenEdit, and Spotlight Therapeutics have begun to address the non-viral gene delivery problem. Non-viral techniques offer the possibility of repeat dosing and greater tailorability of genetic payloads, and may in the future displace viral delivery systems.
Companies such as Editas Medicine, Intellia Therapeutics, CRISPR Therapeutics, Casebia, Cellectis, Precision Biosciences, bluebird bio, Excision BioTherapeutics, and Sangamo have developed non-viral gene editing techniques, although they frequently still use viruses for delivering gene insertion material following genomic cleavage by guided nucleases. These companies focus on gene editing, and still face major delivery hurdles.
BioNTech, Moderna Therapeutics and CureVac focus on delivery of mRNA payloads, which are necessarily non-viral delivery problems.
Alnylam, Dicerna Pharmaceuticals, and Ionis Pharmaceuticals focus on delivery of siRNA and antisense oligonucleotides for gene suppression, which also necessitate non-viral delivery systems.
In academic contexts, a number of laboratories are working on delivery of PEGylated particles, which form serum protein coronas and chiefly exhibit LDL receptor mediated uptake in cells in vivo.
Treatment
Cancer
There have been attempts to treat cancer using gene therapy. As of 2017, 65% of gene therapy trials were for cancer treatment.
Adenovirus vectors are useful for some cancer gene therapies because adenovirus can transiently insert genetic material into a cell without permanently altering the cell's nuclear genome. These vectors can be used to cause antigens to be added to cancers, provoking an immune response, or to hinder angiogenesis by expressing certain proteins. An adenovirus vector is used in the commercial products Gendicine and Oncorine. Another commercial product, Rexin-G, uses a retrovirus-based vector and selectively binds to receptors that are more highly expressed in tumors.
One approach, suicide gene therapy, works by introducing genes encoding enzymes that will cause a cancer cell to die. Another approach is the use of oncolytic viruses, such as Oncorine, which are viruses that selectively reproduce in cancerous cells, leaving other cells unaffected.
mRNA has been suggested as a non-viral vector for cancer gene therapy that would temporarily change a cancerous cell's function to create antigens or kill the cancerous cells and there have been several trials.
Afamitresgene autoleucel, sold under the brand name Tecelra, is an autologous T cell immunotherapy used for the treatment of synovial sarcoma. It is a T cell receptor (TCR) gene therapy. It is the first FDA-approved engineered cell therapy for a solid tumor. It uses a self-inactivating lentiviral vector to express a T-cell receptor specific for MAGE-A4, a melanoma-associated antigen.
Genetic diseases
Gene therapy approaches to replace a faulty gene with a healthy gene have been proposed and are being studied for treating some genetic diseases. As of 2017, 11.1% of gene therapy clinical trials targeted monogenic diseases.
Diseases such as sickle cell disease, which are caused by autosomal recessive mutations and in which normal phenotype or cell function may be restored by supplying a normal copy of the mutated gene, may be good candidates for gene therapy treatment. The risks and benefits related to gene therapy for sickle cell disease are not known.
Gene therapy has been used in the eye. The eye is especially suitable for adeno-associated virus vectors. Voretigene neparvovec is an approved gene therapy to treat Leber's congenital amaurosis. Alipogene tiparvovec, a treatment for pancreatitis caused by a genetic condition, and Zolgensma, for the treatment of spinal muscular atrophy, both use an adeno-associated virus vector.
Infectious diseases
As of 2017, 7% of genetic therapy trials targeted infectious diseases. 69.2% of trials targeted HIV, 11% hepatitis B or C, and 7.1% malaria.
List of gene therapies for treatment of disease
Some genetic therapies have been approved by the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and for use in Russia and China.
Adverse effects, contraindications and hurdles for use
Some of the unsolved problems include:
Off-target effects – The possibility of unwanted, likely harmful, changes to the genome present a large barrier to the widespread implementation of this technology. Improvements to the specificity of gRNAs and Cas enzymes present viable solutions to this issue as well as the refinement of the delivery method of CRISPR. It is likely that different diseases will benefit from different delivery methods.
Short-lived nature – Before gene therapy can become a permanent cure for a condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be stable. Problems with integrating therapeutic DNA into the nuclear genome and the rapidly dividing nature of many cells prevent it from achieving long-term benefits. Patients require multiple treatments.
Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. Stimulating the immune system in a way that reduces gene therapy effectiveness is possible. The immune system's enhanced response to viruses that it has seen before reduces the effectiveness to repeated treatments.
Problems with viral vectors – Viral vectors carry the risks of toxicity, inflammatory responses, and gene control and targeting issues.
Multigene disorders – Some commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer's disease, arthritis, and diabetes, are affected by variations in multiple genes, which complicate gene therapy.
Some therapies may breach the Weismann barrier (between soma and germ-line) protecting the testes, potentially modifying the germline, falling afoul of regulations in countries that prohibit the latter practice.
Insertional mutagenesis – If the DNA is integrated in a sensitive spot in the genome, for example in a tumor suppressor gene, the therapy could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution is to add a functional tumor suppressor gene to the DNA to be integrated. This may be problematic since the longer the DNA is, the harder it is to integrate into cell genomes. CRISPR technology allows researchers to make much more precise genome changes at exact locations.
Cost – alipogene tiparvovec (Glybera), for example, at a cost of $1.6 million per patient, was reported in 2013 to be the world's most expensive drug.
Deaths
Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999, because of immune rejection response. One X-SCID patient died of leukemia in 2003. In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.
Regulations
Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies.
The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001, provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.
United States
No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.
NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.
An NIH advisory committee published a set of guidelines on gene manipulation. The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient. The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.
As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.
Gene doping
Athletes may adopt gene therapy technologies to improve their performance. Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.
Genetic enhancement
Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering.
A 2020 issue of the journal Bioethics was devoted to moral issues surrounding germline genetic engineering in people.
Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics."
As early in the history of biotechnology as 1990, there have been scientists opposed to attempts to modify the human germline using these new tools, and such concerns have continued as technology progressed. With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight."
History
1970s and earlier
In 1972, Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?". Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those with genetic defects.
1980s
In 1984, a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.
1990s
The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson. Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with adenosine deaminase deficiency (ADA-SCID), a severe immune system deficiency. The defective gene of the patient's blood cells was replaced by the functional variant. Ashanti's immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but the new cells with functional genes were not generated. She was able to lead a normal life only with regular injections performed every two months. The effects were successful, but temporary.
Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993). The treatment of glioblastoma multiforme, the malignant brain tumor whose outcome is always fatal, was done using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no. 1602, 24 November 1993, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proves to be effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.
In 1992, Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002, this work led to the publication of the first successful gene therapy treatment for ADA-SCID. The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or "bubble boy" disease) from 2000 to 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany.
In 1993, Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.
In 1996, Luigi Naldini and Didier Trono developed a new class of gene therapy vectors based on HIV capable of infecting non-dividing cells, which have since been widely used in clinical and research settings, pioneering the use of lentiviral vectors in gene therapy.
Jesse Gelsinger's death in 1999 impeded gene therapy research in the US. As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.
2000s
The modified gene therapy strategy of antisense IGF-I RNA (NIH no. 1602), using an antisense/triple-helix anti-IGF-I approach, was registered in 2002 in the Wiley gene therapy clinical trial database (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple-helix therapy has proven to be efficient due to a mechanism that simultaneously stops IGF-I expression at the translational and transcriptional levels, strengthening anti-tumor immune and apoptotic phenomena.
2002
Researchers showed that sickle cell disease can be treated in mice. The mice – which have essentially the same defect that causes human cases – were treated with a viral vector to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.
A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.
Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.
2003
In 2003, a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which unlike viral vectors, are small enough to cross the blood–brain barrier.
Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.
Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.
2006
In March, researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.
In May, a team reported a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.
In August, scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.
In November, researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.
2007
In May 2007, researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.
2008
Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April. Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May, two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.
2009
In September researchers were able to give trichromatic vision to squirrel monkeys. In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.
2010s
2010
An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.
In September it was announced that an 18-year-old male patient in France with beta thalassemia major had been successfully treated. Beta thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.
Cancer immunogene therapy using a modified antigene (antisense/triple helix) approach was introduced in South America in 2010/11 at La Sabana University, Bogota (Ethical Committee 14 December 2010, no. P-004-10). Considering the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, IGF-I-expressing tumors, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).
2011
In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor homozygous for the delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011. It required complete ablation of existing bone marrow, which is very debilitating.
In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.
In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding for VEGF. Neovasculgen is a plasmid encoding the CMV promoter and the 165 amino acid form of VEGF.
2012
The FDA approved Phase I clinical trials on thalassemia major patients in the US for 10 participants in July. The study was expected to continue until 2015.
In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout began in late 2014. Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012, revised to $1 million in 2015, making it the most expensive medicine in the world at the time. Only the patients treated in clinical trials and a patient who paid the full price for treatment have received the drug.
In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.
2013
In March researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.
Following encouraging Phase I trials, in April, researchers announced they were starting Phase II clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function. The U.S. Food and Drug Administration (FDA) granted this a breakthrough therapy designation to accelerate the trial and approval process. In 2016, it was reported that no improvement was found from the CUPID 2 trial.
In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene and followed for 7–32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott–Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer. Follow-up trials with gene therapy on another six children with Wiskott–Aldrich syndrome were also reported as promising.
In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress. In 2014, a further 18 children with ADA-SCID were cured by gene therapy. ADA-SCID children have no functioning immune system and are sometimes known as "bubble children".
Also in October researchers reported that they had treated six people with haemophilia in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.
2014
In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight. By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting. Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.
In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with immune cells genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results.
Clinical trials of gene therapy for sickle cell disease were started in 2014.
In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia, gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.
In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to ebola, malaria, influenza, and hepatitis were underway.
In March, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations".
In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies but that basic research including embryo gene editing should continue.
2015
Researchers successfully treated a boy with epidermolysis bullosa using skin grafts grown from his own skin cells, genetically altered to repair the mutation that caused his disease.
In November, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment.
2016
In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis and the European Commission approved it in June. This treats children born with adenosine deaminase deficiency and who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe.
In October, Chinese scientists reported they had started a trial to genetically modify T cells from 10 adult patients with lung cancer and reinject the modified T cells back into their bodies to attack the cancer cells. The T cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9.
A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy.
2017
In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma.
In March, French scientists reported on clinical research of gene therapy to treat sickle cell disease.
In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia. Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to make a specific T-cell receptor (a chimeric T cell receptor, or "CAR-T") that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma.
In October, biophysicist and biohacker Josiah Zayner claimed to have performed the very first in-vivo human genome editing in the form of a self-administered therapy.
On 13 November, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever in-body human gene editing therapy. The treatment, designed to permanently insert a healthy version of the flawed gene that causes Hunter syndrome, was given to 44-year-old Brian Madeux and is part of the world's first study to permanently edit DNA inside the human body. The success of the gene insertion was later confirmed. Clinical trials by Sangamo involving gene editing using zinc finger nuclease (ZFN) are ongoing.
In December the results of using an adeno-associated virus with blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high dose regime increased their levels of blood clotting factor VIII to normal levels. The low and medium dose regimes had no effect on the patients' blood clotting levels.
In December, the FDA approved voretigene neparvovec, the first in vivo gene therapy, for the treatment of blindness due to Leber's congenital amaurosis. The price of this treatment is US$425,000 per eye (US$850,000 for both eyes).
2019
In May, the FDA approved onasemnogene abeparvovec (Zolgensma) for treating spinal muscular atrophy in children under two years of age. The list price of Zolgensma was set at US$2.125 million per dose, making it the most expensive drug ever.
In May, the EMA approved betibeglogene autotemcel (Zynteglo) for treating beta thalassemia for people twelve years of age and older.
In July, Allergan and Editas Medicine announced phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. This is one of the first studies of a CRISPR-based in vivo human gene editing therapy, where the editing takes place inside the human body. The first injection of the CRISPR-Cas System was confirmed in March 2020.
Exagamglogene autotemcel, a CRISPR-based human gene editing therapy, was used for sickle cell and thalassemia in clinical trials.
2020s
2020
In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene, irrespective of body weight or age.
In August, Audentes Therapeutics reported that three out of 17 children with X-linked myotubular myopathy participating in the clinical trial of an AAV8-based gene therapy treatment, AT132, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. The trial was put on clinical hold.
On 15 October, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorisation for the medicinal product Libmeldy (autologous CD34+ cell enriched population that contains hematopoietic stem and progenitor cells transduced ex vivo using a lentiviral vector encoding the human arylsulfatase A gene), a gene therapy for the treatment of children with the "late infantile" (LI) or "early juvenile" (EJ) forms of metachromatic leukodystrophy (MLD). The active substance of Libmeldy consists of the child's own stem cells which have been modified to contain working copies of the ARSA gene. When the modified cells are injected back into the patient as a one-time infusion, the cells are expected to start producing the ARSA enzyme that breaks down the build-up of sulfatides in the nerve cells and other cells of the patient's body. Libmeldy was approved for medical use in the EU in December 2020.
On 15 October, Lysogene, a French biotechnological company, reported the death of a patient who had received LYS-SAF302, an experimental gene therapy treatment for mucopolysaccharidosis type IIIA (Sanfilippo syndrome type A).
2021
In May, a new method using an altered version of HIV as a lentiviral vector was reported in the treatment of 50 children with ADA-SCID, obtaining positive results in 48 of them. This method is expected to be safer than the retroviral vectors commonly used in previous studies of SCID, in which the development of leukemia was sometimes observed; it had already been used in 2019, but in a smaller group with X-SCID.
In June, a clinical trial on six patients affected with transthyretin amyloidosis reported a reduction in the concentration of misfolded transthyretin (TTR) protein in serum through CRISPR-based inactivation of the TTR gene in liver cells, observing mean reductions of 52% and 87% in the lower and higher dose groups. This was done in vivo, without taking cells out of the patient to edit them and reinfuse them later.
In July, results of a small phase I gene therapy study were published, reporting restoration of dopamine production in seven patients between 4 and 9 years old affected by aromatic L-amino acid decarboxylase deficiency (AADC deficiency).
2022
In February, the first ever gene therapy for Tay–Sachs disease was announced. It uses an adeno-associated virus to deliver a correct copy of the HEXA gene, whose mutation causes the disease, to brain cells. Only two children were treated, as part of a compassionate-use trial, showing improvements over the natural course of the disease and no vector-related adverse events.
In May, eladocagene exuparvovec was recommended for approval by the European Commission.
In July, results of a gene therapy candidate for haemophilia B called FLT180 were announced. It works by using an adeno-associated virus (AAV) to restore production of the clotting factor IX (FIX) protein; normal levels of the protein were observed with low doses of the therapy, but immunosuppression was needed to decrease the risk of vector-related immune responses.
In December, a 13-year-old girl who had been diagnosed with T-cell acute lymphoblastic leukaemia was successfully treated at Great Ormond Street Hospital (GOSH) in the first documented use of therapeutic gene editing for this purpose, undergoing six months of an experimental treatment after all other treatments had failed. The procedure included reprogramming a healthy T-cell to destroy the cancerous T-cells to first rid her of leukaemia, and then rebuilding her immune system using healthy immune cells. The GOSH team used base editing and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs.
2023
In May 2023, the FDA approved beremagene geperpavec for the treatment of wounds in people with dystrophic epidermolysis bullosa (DEB). It is applied as a topical gel that delivers a herpes simplex virus type 1 (HSV-1) vector encoding the collagen type VII alpha 1 chain (COL7A1) gene, which is dysfunctional in those affected by DEB. One trial found that 65% of the Vyjuvek-treated wounds had completely closed at 24 weeks, compared with only 26% of the placebo-treated wounds. Its use as an eyedrop has also been reported, with good results, for a patient with DEB who had vision loss due to widespread blistering.
In June 2023, the FDA gave accelerated approval to Elevidys for Duchenne muscular dystrophy (DMD), only for boys 4 to 5 years old, as they are more likely to benefit from the therapy. The treatment consists of a one-time intravenous infusion of a virus (AAV rh74 vector) that delivers a functioning "microdystrophin" gene (138 kDa) into muscle cells to act in place of the normal dystrophin (427 kDa) that is mutated in this disease.
In July 2023, it was reported that a new method had been developed to affect gene expression through direct current.
In December 2023, two gene therapies were approved for sickle cell disease, exagamglogene autotemcel and lovotibeglogene autotemcel.
2024
In November 2024, FDA granted accelerated approval for eladocagene exuparvovec-tneq (Kebilidi, PTC Therapeutics), a direct-to-brain gene therapy for aromatic L-amino acid decarboxylase deficiency. It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen, increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure.
List of gene therapies
Gene therapy for color blindness
Gene therapy for epilepsy
Gene therapy for osteoarthritis
Gene therapy in Parkinson's disease
Gene therapy of the human retina
References
Further reading
External links
Applied genetics
Approved gene therapies
Bioethics
Biotechnology
Medical genetics
Molecular biology
Molecular genetics
Gene delivery
1989 introductions
1996 introductions
1989 in biotechnology
Genetic engineering | Gene therapy | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 12,502 | [
"Bioethics",
"Genetics techniques",
"Biological engineering",
"Biochemistry",
"Genetic engineering",
"Biotechnology",
"Molecular biology techniques",
"Molecular genetics",
"Gene therapy",
"nan",
"Molecular biology",
"Ethics of science and technology",
"Gene delivery"
] |
12,902 | https://en.wikipedia.org/wiki/Gomoku | Gomoku, also called Five in a Row, is an abstract strategy board game. It is traditionally played with Go pieces (black and white stones) on a 15×15 Go board while in the past a 19×19 board was standard. Because pieces are typically not moved or removed from the board, gomoku may also be played as a paper-and-pencil game. The game is known in several countries under different names.
Rules
Players alternate turns placing a stone of their color on an empty intersection. Black plays first. The winner is the first player to form an unbroken line of five stones of their color horizontally, vertically, or diagonally. In some rules, this line must be exactly five stones long; six or more stones in a row does not count as a win and is called an overline. If the board is completely filled and no one has made a line of 5 stones, then the game ends in a draw.
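The winning condition can be checked mechanically by scanning the four line directions through each stone. The following is a minimal illustrative sketch, not taken from the article; the board representation (a dict mapping (row, col) to "B" or "W") and the function names are assumptions.

```python
# Illustrative sketch: detect a win in gomoku.
# Assumes `board` maps (row, col) -> "B" or "W"; empty intersections are absent.

DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def run_length(board, row, col, dr, dc):
    """Count consecutive same-colored stones starting at (row, col) going (dr, dc)."""
    color = board.get((row, col))
    n = 0
    while board.get((row + n * dr, col + n * dc)) == color:
        n += 1
    return n

def is_win(board, row, col, exact_five=True):
    """True if the stone at (row, col) is part of a winning line.

    With exact_five=True, a line of six or more stones (an overline) does not
    count, matching the stricter rule above; set it to False for freestyle play.
    """
    if (row, col) not in board:
        return False
    for dr, dc in DIRECTIONS:
        back = run_length(board, row, col, -dr, -dc)    # stones behind, including this one
        forward = run_length(board, row, col, dr, dc)   # stones ahead, including this one
        total = back + forward - 1                      # this stone was counted twice
        if total == 5 or (not exact_five and total >= 5):
            return True
    return False

# Example: a horizontal black five through (7, 7)
board = {(7, c): "B" for c in range(5, 10)}
print(is_win(board, 7, 7))  # True
```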
Origin
Historical records indicate that the origins of gomoku can be traced back to the mid-1700s during the Edo period. It is said that the 10th generation of Kuwanaya Buemon, a merchant who frequented the Nijō family, was highly skilled in this game, which subsequently spread among the people. By the late Edo period, around 1850, books had been published on gomoku. The earliest published book on gomoku that can be verified is the in 1856.
The name "gomoku" is from the Japanese language, in which it is referred to as . Go means five, moku is a counter word for pieces and narabe means line-up. The game is popular in China, where it is called Wuziqi (五子棋). Wu (五 wǔ) means five, zi (子 zǐ) means piece, and qi (棋 qí) refers to a board game category in Chinese. The game is also popular in Korea, where it is called omok (오목 [五目]) which has the same structure and origin as the Japanese name.
In the nineteenth century, the game was introduced to Britain where it was known as Go Bang, said to be a corruption of the Japanese word goban, which was itself adapted from the Chinese k'i pan (qí pán) "go-board" (OED citations: 1886, Guillemard, Cruise of the Marchesa, I. 267: "Some of the games are purely Japanese..as go-ban. Note, This game is the one lately introduced into England under the misspelt name of Go Bang"; 1888, Pall Mall Gazette, 1 Nov. 3/1: "These young persons...played go-bang and cat's cradle").
The three types of winning line (horizontal, vertical and diagonal) can also be laid out on an 8x8 Petteia board, though the cramped conditions would result in a draw most of the time, depending on the rules. Play would be easier on a larger Latrunculi board of 12x8 or even 10x11.
First-player advantage
Gomoku has a strong advantage for the first player when unrestricted.
Championships in gomoku previously used the "Pro" opening rule, which mandated that the first player place the first stone in the center of the board. The second player's stone placement was unrestricted. The first player's second stone had to be placed at least three intersections away from the first player's first stone. This rule was used in the 1989 and 1991 world championships. When the win–loss ratio of these two championships was calculated, the first player (black) won 67 percent of games.
This was deemed too unbalanced for tournament play, so tournament gomoku adopted the Swap2 opening protocol in 2009. In Swap2, the first player places three stones, two black and one white, on the board. The second player then selects one of three options: play as black, play as white and place another white stone, or place two more stones, one white and one black, and let the first player choose the color.
The win ratio of the first player has been calculated to be around 52 percent using the Swap2 opening protocol, greatly balancing the game and largely solving the first-player advantage.
Variants
Freestyle gomoku
Freestyle gomoku has no restrictions on either player and allows a player to win by creating a line of five or more stones, with each player alternating turns placing one stone at a time.
Swap after 1st move
The rule of "swap after 1st move" is a variant of the freestyle gomoku rule, and is mostly played in China. The game can be played on a 19×19 or 15×15 board. As per the rule, once the first player places a black stone on the board, the second player has the right to swap colors. The rest of the game proceeds as freestyle gomoku. This rule is set to balance the advantage of black in a simple way.
Renju
Black (the player who makes the first move) has long been known to have an advantage, even before L. Victor Allis proved that black can force a win (see below). Renju attempts to mitigate this imbalance with extra rules that aim to reduce black's first player advantage.
It is played on a 15×15 board, with the rules of three and three, four and four, and overlines applied to Black only.
The rule of three and three bans a move that simultaneously forms two open rows of three stones (rows not blocked by an opponent's stone at either end).
The rule of four and four bans a move that simultaneously forms two rows of four stones (open or not).
Overlines prevent a player from winning if they form a line of 6 or more stones.
Renju also makes use of various tournament opening rules, such as Soosõrv-8, the current international standard.
Caro
In Caro, (also called gomoku+, popular among Vietnamese), the winner must have an overline or an unbroken row of five stones that is not blocked at either end (overlines are immune to this rule). This makes the game more balanced and provides more power for White to defend.
Omok
Omok is similar to Freestyle gomoku; however, it is played on a 19×19 board and includes the rule of three and three.Sungjin, Nam. "Omok." Encyclopedia of Korean Folk Culture, National Folk Museum of Korea, https://web.archive.org/web/20210722180119/https://folkency.nfm.go.kr/en/topic/detail/1587 . Accessed 22 July 2021.
Ninuki-renju
Also called Wu, Ninuki Renju is a variant which adds capturing to the game; a pair of stones of the same color may be captured by the opponent by means of custodial capture (sandwiching a line of two stones lengthwise). The winner is the first player either to make a perfect five in a row or to capture five pairs of the opponent's stones. It uses a 15x15 board and the rules of three and three and overlines. It also allows the game to continue after a player has formed a row of five stones if their opponent can capture a pair across the line.
Pente
Pente is related to Ninuki-Renju, and has the same custodial capture method, but is most often played on a 19x19 board and does not use the rules of three and three, four and four, or overlines.
Tournament Opening Rules
Tournament rules are used in professional play to balance the game and mitigate the first player advantage. The tournament rule used for the gomoku world championships since 2009 is the Swap2 opening rule. For all of the following professional rules, an overline (six or more stones in a row) does not count as a win.
Pro
The first player's first stone must be placed in the center of the board. The second player's first stone may be placed anywhere on the board. The first player's second stone must be placed at least three intersections away from the first stone (two empty intersections in between the two stones).
Long Pro
The first player's first stone must be placed in the center of the board. The second player's first stone may be placed anywhere on the board. The first player's second stone must be placed at least four intersections away from the first stone (three empty intersections in between the two stones).
Swap
The tentative first player places three stones (two black, and one white) anywhere on the board. The tentative second player then chooses which color to play as. Play proceeds from there as normal with white playing their second stone.
Swap2
The tentative first player places three stones on the board, two black and one white. The tentative second player then has three options:
They can choose to play as white and place a second white stone
They can swap their color and choose to play as black
Or they can place two more stones, one black and one white, and pass the choice of which color to play back to the tentative first player.
Because the tentative first player doesn't know where the tentative second player will place the additional stones if they take option 3, the swap2 opening protocol limits excessive studying of a line by only one of the players.
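A minimal sketch of how the Swap2 negotiation could be encoded follows; it is illustrative only and not from the article, and the function name, the way choices are passed in, and the return values are assumptions.

```python
# Illustrative sketch of the Swap2 opening negotiation (not an official implementation).
# `opening` is the three stones placed by the tentative first player: two black, one white.
# `choice` is the tentative second player's decision: "white", "black", or "place_two".

def swap2(opening, choice, extra_stones=None, final_choice=None):
    """Return (tentative_first_player_color, tentative_second_player_color, stones)."""
    assert len(opening) == 3  # (row, col, color) triples: two black, one white
    stones = list(opening)
    if choice == "white":
        # Second player keeps white and then places a second white stone.
        return "black", "white", stones
    if choice == "black":
        # Second player swaps and takes black; the first player continues as white.
        return "white", "black", stones
    if choice == "place_two":
        # Second player adds one black and one white stone, then the
        # tentative first player picks a color (final_choice).
        stones += list(extra_stones or [])
        if final_choice == "black":
            return "black", "white", stones
        return "white", "black", stones
    raise ValueError("unknown choice")

opening = [(7, 7, "B"), (8, 8, "W"), (7, 8, "B")]
print(swap2(opening, "black"))  # the tentative second player swaps to black
```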
Theoretical generalizations
m,n,k-games are a generalization of gomoku to a board with m×n intersections, and k in a row needed to win. Connect Four is (7,6,4) with piece placement restricted to the lowest unoccupied place in a column.
Connect(m,n,k,p,q) games are another generalization of gomoku to a board with m×n intersections, k in a row needed to win, p stones for each player to place, and q stones for the first player to place for the first move only. In particular, Connect(m,n,6,2,1) is called Connect6.
Example game
This game on the 15×15 board is adapted from the paper "Go-Moku and Threat-Space Search".
The opening moves show clearly black's advantage. An open row of three (one that is not blocked by an opponent's stone at either end) has to be blocked immediately, or countered with a threat elsewhere on the board. If not blocked or countered, the open row of three will be extended to an open row of four, which threatens to win in two ways.
White has to block open rows of three at moves 10, 14, 16 and 20, but black only has to do so at move 9.
Move 20 is a blunder for white (it should have been played next to black 19). Black can now force a win against any defense by white, starting with move 21.
There are two forcing sequences for black, depending on whether white 22 is played next to black 15 or black 21. The diagram on the right shows the first sequence. All the moves for white are forced. Such long forcing sequences are typical in gomoku, and expert players can read out forcing sequences of 20 to 40 moves rapidly and accurately.
The diagram on the right shows the second forcing sequence. This diagram shows why white 20 was a blunder; if it had been next to black 19 (at the position of move 32 in this diagram) then black 31 would not be a threat and so the forcing sequence would fail.
World championships
World Gomoku Championships have been held twice, in 1989 and 1991.
Since 2009 tournament play has resumed, with the opening rule changed to swap2.
A list of the tournaments held and the title holders follows.
Computers and gomoku
Researchers have been applying artificial intelligence techniques to playing gomoku for several decades. Joseph Weizenbaum published a short paper in Datamation in 1962 entitled "How to Make a Computer Appear Intelligent" that described the strategy used in a gomoku program that could beat novice players. In 1994, L. Victor Allis introduced the algorithms of proof-number search (pn-search) and dependency-based search (db-search), and proved that when starting from an empty 15×15 board, the first player has a winning strategy using these searching algorithms. This applies to both free-style gomoku and standard gomoku without any opening rules. It seems very likely that black wins on larger boards too. On any size of board, freestyle gomoku is an m,n,k-game, hence it is known that the first player can force a win or a draw. In 2001, Allis's winning strategy was also proved to hold for renju, a variation of gomoku, when there is no limitation on the opening stage.
However, neither the theoretical values of all legal positions, nor the opening rules such as Swap2 used by the professional gomoku players have been solved yet, so the topic of gomoku artificial intelligence is still a challenge for computer scientists, such as the problem on how to improve the gomoku algorithms to make them more strategic and competitive. Nowadays, most of the state-of-the-art gomoku algorithms are based on the alpha-beta pruning framework.
Reisch proved that Generalized gomoku is PSPACE-complete. He also observed that the reduction can be adapted to the rules of k-in-a-Row for fixed k. Although he did not specify exactly which values of k are allowed, the reduction would appear to generalize to any k ≥ 5.
There have been several well-known tournaments for gomoku programs since 1989. The Computer Olympiad started with the gomoku game in 1989, but gomoku has not been on the list since 1993. The Renju World Computer Championship was started in 1991 and was held four times, until 2004. The Gomocup tournament has been played every year since 2000 and is still active, with more than 30 participants from about 10 countries. The Hungarian Computer Go-Moku Tournament was also played twice, in 2005. There were also two Computer vs. Human tournaments played in the Czech Republic, in 2006 and 2011. Not until 2017 were computer programs shown to be able to outperform the world human champion in public competitions. In the Gomoku World Championship 2017, there was a match between the world champion program Yixin and the world champion human player Rudolf Dupszki. Yixin won the match with a score of 2–0.
In popular culture
Gomoku was featured in a 2018 Korean drama by Baek Seung-Hwa starring Park Se-wan. The film follows Baduk Lee (Park Se-wan), a former go prodigy who retired after a humiliating loss on time. Years later, Baduk Lee works part time at a go club, where she meets Ahn Kyung Kim, who introduces her to an Omok (Korean gomoku) tournament. Lee is initially uninterested and considers Omok a children's game, but after her roommate loses money on an impulse purchase, she enters the tournament for the prize money and loses badly, being humiliated once again. Afterwards, she begins training to redeem herself and becomes a serious omok player.
In the video game Vintage Story omok boards and pieces (made of gold and lead) can occasionally be found in ruins or as part of luxury traders' inventory. The board and pieces are functional, allowing players to have actual omok matches. In-universe, omok is so far the only game surviving from the times before the Rot.
See also
Renju
Pente
Pegity
Connect6
Connection game
Reversi
References
Further reading
Five-in-a-Row (Renju) For Beginners to Advanced Players
External links
Gomoku World
Renju International Federation website
Gomocup tournament
Abstract strategy games
Traditional board games
Japanese games
Japanese inventions
Paper-and-pencil games
PSPACE-complete problems
In-a-row games
Solved games
Games played on Go boards | Gomoku | [
"Mathematics"
] | 3,298 | [
"PSPACE-complete problems",
"Mathematical problems",
"Computational problems"
] |
12,903 | https://en.wikipedia.org/wiki/Gegenschein | Gegenschein (; ; ) or counterglow is a faintly bright spot in the night sky centered at the antisolar point. The backscatter of sunlight by interplanetary dust causes this optical phenomenon, being a zodiacal light and part of its zodiacal light band.
Explanation
Like zodiacal light, gegenschein is sunlight scattered by interplanetary dust. Most of this dust orbits the Sun near the ecliptic plane, with a possible concentration of particles centered at the point of the Earth–Sun system.
Gegenschein is distinguished from zodiacal light by its high angle of reflection of the incident sunlight on the dust particles. It forms a slightly brighter elliptical spot of 8–10° across directly opposite the Sun within the dimmer band of zodiacal light and zodiac constellation. The intensity of the gegenschein is relatively enhanced because each dust particle is seen at full phase, having a difficult to measure apparent magnitude of +5 to +6, with a very low surface brightness in the +10 to +12 magnitude range.
History
It is commonly stated that the gegenschein was first described by the French Jesuit astronomer and professor Esprit Pézenas (1692–1776) in 1730. Further observations were supposedly made by the German explorer Alexander von Humboldt during his South American journey from 1799 to 1803. It was Humboldt who first used the German term Gegenschein. However, research conducted in 2021 by Texas State University astronomer and professor Donald Olson discovered that the Danish astronomer Theodor Brorsen was actually the first person to observe and describe the gegenschein, in 1854, although Brorsen had thought that Pézenas had observed it first. Olson believes what Pézenas actually observed was an auroral event, as he described the phenomenon as having a red glow; Olson found many other reports of auroral activity from around Europe and Asia on the same date Pézenas made his observation. Humboldt's report instead described glowing triangular patches on both the western and eastern horizons shortly after sunset, while true gegenschein is most visible near local midnight when it is highest in the sky.
Brorsen published the first thorough investigations of the gegenschein in 1854. T. W. Backhouse discovered it independently in 1876, as did Edward Emerson Barnard in 1882. In modern times, the gegenschein is not visible in most inhabited regions of the world due to light pollution.
See also
Antisolar point
Earth's shadow
Heiligenschein
Interplanetary dust cloud
Kordylewski cloud
Opposition surge, the apparent brightening of a coarse surface or an aggregate of many particles when illuminated from directly behind the observer
Sylvanshine
References
External links
Gegenschein page on EarthSky.org
Photos of gegenschein on SwissEduc.ch taken from Stromboli volcano
"Zodiacal Light and the Gegenschein", an essay by J. E. Littleton
Observational astronomy
Optical phenomena
German words and phrases
"Physics",
"Astronomy"
] | 606 | [
"Optical phenomena",
"Physical phenomena",
"Observational astronomy",
"Astronomical sub-disciplines"
] |
12,908 | https://en.wikipedia.org/wiki/Global%20warming%20potential | Global warming potential (GWP) is a measure of how much heat a greenhouse gas traps in the atmosphere over a specific time period, relative to carbon dioxide (). It is expressed as a multiple of warming caused by the same mass of carbon dioxide (). Therefore, by definition has a GWP of 1. For other gases it depends on how strongly the gas absorbs thermal radiation, how quickly the gas leaves the atmosphere, and the time frame considered.
For example, methane has a GWP over 20 years (GWP-20) of 81.2, meaning that a leak of a tonne of methane is equivalent to emitting 81.2 tonnes of carbon dioxide measured over 20 years. As methane has a much shorter atmospheric lifetime than carbon dioxide, its GWP is much less over longer time periods, with a GWP-100 of 27.9 and a GWP-500 of 7.95.
The carbon dioxide equivalent (CO2e or CO2eq or CO2-e or CO2-eq) can be calculated from the GWP. For any gas, it is the mass of CO2 that would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as the GWP times the mass of the other gas.
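As a minimal illustration of that multiplication (not part of the original text; the GWP values are the 20- and 100-year methane figures quoted above):

```python
# CO2 equivalent = GWP of the gas x mass of the gas emitted.
def co2_equivalent(mass_tonnes, gwp):
    return mass_tonnes * gwp

# One tonne of methane over a 20-year horizon (GWP-20 of 81.2, as quoted above):
print(co2_equivalent(1.0, 81.2))   # 81.2 tonnes CO2e
# The same tonne over a 100-year horizon (GWP-100 of 27.9):
print(co2_equivalent(1.0, 27.9))   # 27.9 tonnes CO2e
```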
Definition
The global warming potential (GWP) is defined as an "index measuring the radiative forcing following an emission of a unit mass of a given substance, accumulated over a chosen time horizon, relative to that of the reference substance, carbon dioxide (CO2). The GWP thus represents the combined effect of the differing times these substances remain in the atmosphere and their effectiveness in causing radiative forcing."
In turn, radiative forcing is a scientific concept used to quantify and compare the external drivers of change to Earth's energy balance. Radiative forcing is the change in energy flux in the atmosphere caused by natural or anthropogenic factors of climate change as measured in watts per meter squared.
GWP in policymaking
As governments develop policies to combat emissions from high-GWP sources, policymakers have chosen to use the 100-year GWP scale as the standard in international agreements. The Kigali Amendment to the Montreal Protocol sets the global phase-down of hydrofluorocarbons (HFCs), a group of high-GWP compounds. It requires countries to use a set of GWP100 values equal to those published in the IPCC's Fourth Assessment Report (AR4). This allows policymakers to have one standard for comparison instead of changing GWP values in new assessment reports. One exception to the GWP100 standard exists: New York state’s Climate Leadership and Community Protection Act requires the use of GWP20, despite being a different standard from all other countries participating in phase downs of HFCs.
Calculated values
Current values (IPCC Sixth Assessment Report from 2021)
The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale. Conversely, if a molecule has a longer atmospheric lifetime than CO2, its GWP will increase as longer timescales are considered. Carbon dioxide is defined to have a GWP of 1 over all time periods.
Methane has an atmospheric lifetime of 12 ± 2 years. The 2021 IPCC report lists the GWP as 83 over a time scale of 20 years, 30 over 100 years and 10 over 500 years. The decrease in GWP at longer times is because methane decomposes to water and CO2 through chemical reactions in the atmosphere. Similarly, the third most important GHG, nitrous oxide (N2O), is a common gas emitted through the denitrification part of the nitrogen cycle. It has a lifetime of 109 years and an even higher GWP, at 273 over both 20 and 100 years.
Examples of the atmospheric lifetime and GWP relative to CO2 for several greenhouse gases are given in the following table:
Estimates of GWP values over 20, 100 and 500 years are periodically compiled and revised in reports from the Intergovernmental Panel on Climate Change. The most recent report is the IPCC Sixth Assessment Report (Working Group I) from 2023.
The IPCC lists many other substances not shown here. Some have high GWP but only a low concentration in the atmosphere.
The values given in the table assume the same mass of compound is analyzed; different ratios will result from the conversion of one substance to another. For instance, burning methane to carbon dioxide would reduce the global warming impact, but by a smaller factor than 25:1 because the mass of methane burned is less than the mass of carbon dioxide released (ratio 1:2.74). For a starting amount of 1 tonne of methane, which has a GWP of 25, after combustion there would be 2.74 tonnes of CO2, each tonne of which has a GWP of 1. This is a net reduction of 22.26 tonnes of CO2e, reducing the global warming effect by a ratio of 25:2.74 (approximately 9 times).
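The arithmetic in that example can be checked directly; the sketch below simply restates the figures given in the paragraph (a GWP of 25 for methane and a 1:2.74 mass ratio of methane burned to CO2 released):

```python
# Check of the worked example above: burning 1 tonne of methane.
gwp_ch4 = 25          # GWP value used in the paragraph above
mass_ch4 = 1.0        # tonnes of methane before combustion
mass_co2 = 2.74       # tonnes of CO2 released by burning it (ratio 1:2.74)

co2e_before = mass_ch4 * gwp_ch4      # 25.0 tonnes CO2e if the methane leaks
co2e_after = mass_co2 * 1.0           # 2.74 tonnes CO2e once burned to CO2
print(co2e_before - co2e_after)       # 22.26 tonnes CO2e net reduction
print(co2e_before / co2e_after)       # ~9.1, i.e. roughly 9 times less warming
```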
Earlier values from 2007
The values provided in the table below are from 2007 when they were published in the IPCC Fourth Assessment Report. These values are still used (as of 2020) for some comparisons.
Importance of time horizon
A substance's GWP depends on the number of years (denoted by a subscript) over which the potential is calculated. A gas which is quickly removed from the atmosphere may initially have a large effect, but for longer time periods, as it has been removed, it becomes less important. Thus methane has a potential of 25 over 100 years (GWP100 = 25) but 86 over 20 years (GWP20 = 86); conversely sulfur hexafluoride has a GWP of 22,800 over 100 years but 16,300 over 20 years (IPCC Third Assessment Report). The GWP value depends on how the gas concentration decays over time in the atmosphere. This is often not precisely known and hence the values should not be considered exact. For this reason when quoting a GWP it is important to give a reference to the calculation.
The GWP for a mixture of gases can be obtained from the mass-fraction-weighted average of the GWPs of the individual gases.
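For instance, a mass-fraction-weighted average for a two-component blend might look like the sketch below; it is illustrative only, and the component GWPs and fractions are placeholder values rather than figures from the text.

```python
# GWP of a gas mixture as the mass-fraction-weighted average of the component GWPs.
def mixture_gwp(components):
    """components: list of (mass_fraction, gwp) pairs; fractions should sum to 1."""
    total_fraction = sum(f for f, _ in components)
    return sum(f * g for f, g in components) / total_fraction

# Hypothetical 60/40 blend of two gases with GWPs of 1430 and 675:
print(mixture_gwp([(0.6, 1430), (0.4, 675)]))  # 1128.0
```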
Commonly, a time horizon of 100 years is used by regulators.
Water vapour
Water vapour does contribute to anthropogenic global warming, but as the GWP is defined, it is negligible for H2O: an estimate gives a 100-year GWP between -0.001 and 0.0005.
H2O can function as a greenhouse gas because it has a profound infrared absorption spectrum with more and broader absorption bands than CO2. Its concentration in the atmosphere is limited by air temperature, so that radiative forcing by water vapour increases with global warming (positive feedback). But the GWP definition excludes indirect effects. The GWP definition is also based on emissions, and anthropogenic emissions of water vapour (cooling towers, irrigation) are removed via precipitation within weeks, so its GWP is negligible.
Calculation methods
When calculating the GWP of a greenhouse gas, the value depends on the following factors:
the absorption of infrared radiation by the given gas
the time horizon of interest (integration period)
the atmospheric lifetime of the gas
A high GWP correlates with a large infrared absorption and a long atmospheric lifetime. The dependence of GWP on the wavelength of absorption is more complicated. Even if a gas absorbs radiation efficiently at a certain wavelength, this may not affect its GWP much, if the atmosphere already absorbs most radiation at that wavelength. A gas has the most effect if it absorbs in a "window" of wavelengths where the atmosphere is fairly transparent. The dependence of GWP as a function of wavelength has been found empirically and published as a graph.
Because the GWP of a greenhouse gas depends directly on its infrared spectrum, the use of infrared spectroscopy to study greenhouse gases is centrally important in the effort to understand the impact of human activities on global climate change.
Just as radiative forcing provides a simplified means of comparing the various factors that are believed to influence the climate system to one another, global warming potentials (GWPs) are one type of simplified index based upon radiative properties that can be used to estimate the potential future impacts of emissions of different gases upon the climate system in a relative sense. GWP is based on a number of factors, including the radiative efficiency (infrared-absorbing ability) of each gas relative to that of carbon dioxide, as well as the decay rate of each gas (the amount removed from the atmosphere over a given number of years) relative to that of carbon dioxide.
The radiative forcing capacity (RF) is the amount of energy per unit area, per unit time, absorbed by the greenhouse gas, that would otherwise be lost to space. It can be expressed by the formula:
RF = Σi Absi · Fi
where the subscript i represents a wavenumber interval of 10 inverse centimeters, Absi represents the integrated infrared absorbance of the sample in that interval, and Fi represents the RF for that interval.
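As a rough numerical illustration of that sum (not from the original article; the absorbance and per-interval forcing values below are arbitrary placeholders):

```python
# Radiative forcing capacity as a sum of per-interval contributions:
# each entry pairs the integrated absorbance Abs_i of a 10 cm^-1 wavenumber
# interval with the forcing F_i attributed to that interval.
abs_per_interval = [0.02, 0.15, 0.40, 0.08]   # placeholder absorbances
f_per_interval   = [0.1, 0.3, 0.5, 0.2]       # placeholder forcings (W m^-2)

rf = sum(a * f for a, f in zip(abs_per_interval, f_per_interval))
print(rf)  # 0.263
```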
The Intergovernmental Panel on Climate Change (IPCC) provides the generally accepted values for GWP, which changed slightly between 1996 and 2001, except for methane, which had its GWP almost doubled. An exact definition of how GWP is calculated is to be found in the IPCC's 2001 Third Assessment Report. The GWP is defined as the ratio of the time-integrated radiative forcing from the instantaneous release of 1 kg of a trace substance relative to that of 1 kg of a reference gas:
GWP(x) = ∫₀ᵀᴴ ax · [x](t) dt / ∫₀ᵀᴴ ar · [r](t) dt
where TH is the time horizon over which the calculation is considered; ax is the radiative efficiency due to a unit increase in atmospheric abundance of the substance (i.e., W·m−2·kg−1) and [x](t) is the time-dependent decay in abundance of the substance following an instantaneous release of it at time t = 0. The denominator contains the corresponding quantities for the reference gas (i.e., CO2). The radiative efficiencies ax and ar are not necessarily constant over time. While the absorption of infrared radiation by many greenhouse gases varies linearly with their abundance, a few important ones display non-linear behaviour for current and likely future abundances (e.g., CO2, CH4, and N2O). For those gases, the relative radiative forcing will depend upon abundance and hence upon the future scenario adopted.
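A hedged numerical sketch of that ratio follows; it is not from the original text, it assumes simple exponential (first-order) decay for the trace gas and a constant airborne fraction for the reference gas, and the numbers are placeholders rather than IPCC values.

```python
import math

def gwp(time_horizon_yr, a_x, lifetime_x_yr, a_r, decay_r, steps=100_000):
    """Ratio of time-integrated forcings for a pulse of gas x vs. the reference gas.

    a_x, a_r: radiative efficiencies (W m^-2 kg^-1), assumed constant here;
    gas x decays as exp(-t/lifetime), the reference via a supplied decay function.
    """
    dt = time_horizon_yr / steps
    num = sum(a_x * math.exp(-(i * dt) / lifetime_x_yr) for i in range(steps)) * dt
    den = sum(a_r * decay_r(i * dt) for i in range(steps)) * dt
    return num / den

# Placeholder inputs only, for illustration: a gas 120x as radiatively efficient
# as the reference, with a 12-year lifetime; the reference treated as non-decaying.
print(round(gwp(100, a_x=120.0, lifetime_x_yr=12.0,
                a_r=1.0, decay_r=lambda t: 1.0), 1))
```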
Since all GWP calculations are a comparison to CO2, which is non-linear, all GWP values are affected. Assuming otherwise, as is done above, will lead to lower GWPs for other gases than a more detailed approach would. Clarifying this: while increasing CO2 has less and less effect on radiative absorption as ppm concentrations rise, more powerful greenhouse gases like methane and nitrous oxide have thermal absorption frequencies different from those of CO2 that are not filled up (saturated) as much as CO2's, so rising ppms of these gases are far more significant.
Applications
Carbon dioxide equivalent
Carbon dioxide equivalent (CO2e or CO2eq or CO2-e) of a quantity of gas is calculated from its GWP. For any gas, it is the mass of CO2 which would warm the earth as much as the mass of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP multiplied by the mass of the other gas. For example, if a gas has a GWP of 100, two tonnes of the gas have a CO2e of 200 tonnes, and 9 tonnes of the gas have a CO2e of 900 tonnes.
On a global scale, the warming effects of one or more greenhouse gases in the atmosphere can also be expressed as an equivalent atmospheric concentration of CO2. CO2e can then be the atmospheric concentration of CO2 which would warm the earth as much as a particular concentration of some other gas or of all gases and aerosols in the atmosphere. For example, a CO2e of 500 parts per million would reflect a mix of atmospheric gases which warm the earth as much as 500 parts per million of CO2 would warm it. Calculation of the equivalent atmospheric concentration of CO2 of an atmospheric greenhouse gas or aerosol is more complex and involves the atmospheric concentrations of those gases, their GWPs, and the ratios of their molar masses to the molar mass of CO2.
CO2e calculations depend on the time-scale chosen, typically 100 years or 20 years, since gases decay in the atmosphere or are absorbed naturally, at different rates.
The following units are commonly used:
By the UN climate change panel (IPCC): billion metric tonnes = n×10⁹ tonnes of CO2 equivalent (GtCO2eq)
In industry: million metric tonnes of carbon dioxide equivalents (MMTCDE) and MMT CO2 eq.
For vehicles: grams of carbon dioxide equivalent per mile (gCO2e/mile) or per kilometer (gCO2e/km)
For example, the table above shows GWP for methane over 20 years at 86 and nitrous oxide at 289, so emissions of 1 million tonnes of methane or nitrous oxide are equivalent to emissions of 86 or 289 million tonnes of carbon dioxide, respectively.
Use in Kyoto Protocol and for reporting to UNFCCC
Under the Kyoto Protocol, in 1997 the Conference of the Parties standardized international reporting, by deciding (see decision number 2/CP.3) that the values of GWP calculated for the IPCC Second Assessment Report were to be used for converting the various greenhouse gas emissions into comparable equivalents.
After some intermediate updates, in 2013 this standard was updated by the Warsaw meeting of the UN Framework Convention on Climate Change (UNFCCC, decision number 24/CP.19) to require using a new set of 100-year GWP values. They published these values in Annex III, and they took them from the IPCC Fourth Assessment Report, which had been published in 2007. Those 2007 estimates are still used for international comparisons through 2020, although the latest research on warming effects has found other values, as shown in the tables above.
Though recent reports reflect more scientific accuracy, countries and companies continue to use the IPCC Second Assessment Report (SAR) and IPCC Fourth Assessment Report values for reasons of comparison in their emission reports. The IPCC Fifth Assessment Report has skipped the 500-year values but introduced GWP estimations including the climate-carbon feedback (f) with a large amount of uncertainty.
Other metrics to compare greenhouse gases
The Global Temperature change Potential (GTP) is another way to compare gases. While GWP estimates infrared thermal radiation absorbed, GTP estimates the resulting rise in average surface temperature of the world, over the next 20, 50 or 100 years, caused by a greenhouse gas, relative to the temperature rise which the same mass of CO2 would cause. Calculation of GTP requires modelling how the world, especially the oceans, will absorb heat. GTP is published in the same IPCC tables with GWP.
Another metric called GWP* (pronounced "GWP star") has been proposed to take better account of short-lived climate pollutants (SLCPs) such as methane. A permanent increase in the rate of emission of an SLCP has a similar effect to that of a one-time emission of an amount of carbon dioxide, because both raise the radiative forcing permanently or (in the case of carbon dioxide) practically permanently (since the CO2 stays in the air for a long time). GWP* therefore assigns an increase in emission rate of an SLCP a supposedly equivalent amount (tonnes) of CO2. However GWP* has been criticised both for its suitability as a metric and for inherent design features which can perpetuate injustices and inequity. Developing countries whose emissions of SLCPs are increasing are "penalized", while developed countries such as Australia or New Zealand which have steady emissions of SLCPs are not penalized in this way, though they may be penalized for their emissions of CO2.
See also
Carbon accounting
Carbon footprint
Emission intensity
References
Sources
External links
List of Global Warming Potentials and Atmospheric Lifetimes from the U.S. EPA
GWP and the different meanings of e explained
Greenhouse gas emissions
Climate forcing
Infrared spectroscopy
Carbon dioxide
Equivalent units | Global warming potential | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,439 | [
"Greenhouse gas emissions",
"Equivalent quantities",
"Spectrum (physical sciences)",
"Quantity",
"Infrared spectroscopy",
"Equivalent units",
"Greenhouse gases",
"Carbon dioxide",
"Spectroscopy",
"Units of measurement"
] |
12,910 | https://en.wikipedia.org/wiki/Grothendieck%20topology | In category theory, a branch of mathematics, a Grothendieck topology is a structure on a category C that makes the objects of C act like the open sets of a topological space. A category together with a choice of Grothendieck topology is called a site.
Grothendieck topologies axiomatize the notion of an open cover. Using the notion of covering provided by a Grothendieck topology, it becomes possible to define sheaves on a category and their cohomology. This was first done in algebraic geometry and algebraic number theory by Alexander Grothendieck to define the étale cohomology of a scheme. It has been used to define other cohomology theories since then, such as ℓ-adic cohomology, flat cohomology, and crystalline cohomology. While Grothendieck topologies are most often used to define cohomology theories, they have found other applications as well, such as to John Tate's theory of rigid analytic geometry.
There is a natural way to associate a site to an ordinary topological space, and Grothendieck's theory is loosely regarded as a generalization of classical topology. Under meager point-set hypotheses, namely sobriety, this is completely accurate—it is possible to recover a sober space from its associated site. However simple examples such as the indiscrete topological space show that not all topological spaces can be expressed using Grothendieck topologies. Conversely, there are Grothendieck topologies that do not come from topological spaces.
The term "Grothendieck topology" has changed in meaning. In it meant what is now called a Grothendieck pretopology, and some authors still use this old meaning. modified the definition to use sieves rather than covers. Much of the time this does not make much difference, as each Grothendieck pretopology determines a unique Grothendieck topology, though quite different pretopologies can give the same topology.
Overview
André Weil's famous Weil conjectures proposed that certain properties of equations with integral coefficients should be understood as geometric properties of the algebraic variety that they define. His conjectures postulated that there should be a cohomology theory of algebraic varieties that gives number-theoretic information about their defining equations. This cohomology theory was known as the "Weil cohomology", but using the tools he had available, Weil was unable to construct it.
In the early 1960s, Alexander Grothendieck introduced étale maps into algebraic geometry as algebraic analogues of local analytic isomorphisms in analytic geometry. He used étale coverings to define an algebraic analogue of the fundamental group of a topological space. Soon Jean-Pierre Serre noticed that some properties of étale coverings mimicked those of open immersions, and that consequently it was possible to make constructions that imitated the cohomology functor H1. Grothendieck saw that it would be possible to use Serre's idea to define a cohomology theory that he suspected would be the Weil cohomology. To define this cohomology theory, Grothendieck needed to replace the usual, topological notion of an open covering with one that would use étale coverings instead. Grothendieck also saw how to phrase the definition of covering abstractly; this is where the definition of a Grothendieck topology comes from.
Definition
Motivation
The classical definition of a sheaf begins with a topological space X. A sheaf associates information to the open sets of X. This information can be phrased abstractly by letting O(X) be the category whose objects are the open subsets U of X and whose morphisms are the inclusion maps V → U of open sets U and V of X. We will call such maps open immersions, just as in the context of schemes. Then a presheaf on X is a contravariant functor from O(X) to the category of sets, and a sheaf is a presheaf that satisfies the gluing axiom (here including the separation axiom). The gluing axiom is phrased in terms of pointwise covering, i.e., {Ui} covers U if and only if the union of the Ui equals U. In this definition, each Ui is an open subset of X. Grothendieck topologies replace each Ui with an entire family of open subsets; in this example, Ui is replaced by the family of all open immersions Vij → Ui. Such a collection is called a sieve. Pointwise covering is replaced by the notion of a covering family; in the above example, the set of all {Vij → Ui} as i varies is a covering family of U. Sieves and covering families can be axiomatized, and once this is done open sets and pointwise covering can be replaced by other notions that describe other properties of the space X.
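For comparison, the classical gluing condition for a cover {Ui} of U can be written as an equalizer diagram; this is a standard presentation added here for reference, not taken verbatim from the text above.

```latex
% Sheaf condition for a presheaf F on a topological space X:
% for every open set U and every open cover U = \bigcup_i U_i, the diagram
F(U) \;\longrightarrow\; \prod_i F(U_i) \;\rightrightarrows\; \prod_{i,j} F(U_i \cap U_j)
% is an equalizer: a section over U is the same as a family of sections s_i over
% the U_i that agree on all pairwise intersections U_i \cap U_j.
```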
Sieves
In a Grothendieck topology, the notion of a collection of open subsets of U stable under inclusion is replaced by the notion of a sieve. If c is any given object in C, a sieve on c is a subfunctor of the functor Hom(−, c); (this is the Yoneda embedding applied to c). In the case of O(X), a sieve S on an open set U selects a collection of open subsets of U that is stable under inclusion. More precisely, consider that for any open subset V of U, S(V) will be a subset of Hom(V, U), which has only one element, the open immersion V → U. Then V will be considered "selected" by S if and only if S(V) is nonempty. If W is a subset of V, then there is a morphism S(V) → S(W) given by composition with the inclusion W → V. If S(V) is non-empty, it follows that S(W) is also non-empty.
If S is a sieve on X, and f: Y → X is a morphism, then left composition by f gives a sieve on Y called the pullback of S along f, denoted by f*S. It is defined as the fibered product S ×Hom(−, X) Hom(−, Y) together with its natural embedding in Hom(−, Y). More concretely, for each object Z of C, f*S(Z) = { g: Z → Y | fg ∈ S(Z) }, and f*S inherits its action on morphisms by being a subfunctor of Hom(−, Y). In the classical example, the pullback of a collection {Vi} of subsets of U along an inclusion W → U is the collection {Vi∩W}.
Grothendieck topology
A Grothendieck topology J on a category C is a collection, for each object c of C, of distinguished sieves on c, denoted by J(c) and called covering sieves of c. This selection will be subject to certain axioms, stated below. Continuing the previous example, a sieve S on an open set U in O(X) will be a covering sieve if and only if the union of all the open sets V for which S(V) is nonempty equals U; in other words, if and only if S gives us a collection of open sets that cover U in the classical sense.
Axioms
The conditions we impose on a Grothendieck topology are:
(T 1) (Base change) If S is a covering sieve on X, and f: Y → X is a morphism, then the pullback f*S is a covering sieve on Y.
(T 2) (Local character) Let S be a covering sieve on X, and let T be any sieve on X. Suppose that for each object Y of C and each arrow f: Y → X in S(Y), the pullback sieve f*T is a covering sieve on Y. Then T is a covering sieve on X.
(T 3) (Identity) Hom(−, X) is a covering sieve on X for any object X in C.
The base change axiom corresponds to the idea that if {Ui} covers U, then {Ui ∩ V} should cover U ∩ V. The local character axiom corresponds to the idea that if {Ui} covers U and {Vij}j∈Ji covers Ui for each i, then the collection {Vij} for all i and j should cover U. Lastly, the identity axiom corresponds to the idea that any set is covered by itself via the identity map.
Grothendieck pretopologies
In fact, it is possible to put these axioms in another form where their geometric character is more apparent, assuming that the underlying category C contains certain fibered products. In this case, instead of specifying sieves, we can specify that certain collections of maps with a common codomain should cover their codomain. These collections are called covering families. If the collection of all covering families satisfies certain axioms, then we say that they form a Grothendieck pretopology. These axioms are:
(PT 0) (Existence of fibered products) For all objects X of C, and for all morphisms X0 → X that appear in some covering family of X, and for all morphisms Y → X, the fibered product X0 ×X Y exists.
(PT 1) (Stability under base change) For all objects X of C, all morphisms Y → X, and all covering families {Xα → X}, the family {Xα ×X Y → Y} is a covering family.
(PT 2) (Local character) If {Xα → X} is a covering family, and if for all α, {Xβα → Xα} is a covering family, then the family of composites {Xβα → Xα → X} is a covering family.
(PT 3) (Isomorphisms) If f: Y → X is an isomorphism, then {f} is a covering family.
For any pretopology, the collection of all sieves that contain a covering family from the pretopology is always a Grothendieck topology.
For categories with fibered products, there is a converse. Given a collection of arrows {Xα → X}, we construct a sieve S by letting S(Y) be the set of all morphisms Y → X that factor through some arrow Xα → X. This is called the sieve generated by {Xα → X}. Now choose a topology. Say that {Xα → X} is a covering family if and only if the sieve that it generates is a covering sieve for the given topology. It is easy to check that this defines a pretopology.
(PT 3) is sometimes replaced by a weaker axiom:
(PT 3') (Identity) If 1X : X → X is the identity arrow, then {1X} is a covering family.
(PT 3) implies (PT 3'), but not conversely. However, suppose that we have a collection of covering families that satisfies (PT 0) through (PT 2) and (PT 3'), but not (PT 3). These families generate a pretopology. The topology generated by the original collection of covering families is then the same as the topology generated by the pretopology, because the sieve generated by an isomorphism Y → X is Hom(−, X). Consequently, if we restrict our attention to topologies, (PT 3) and (PT 3') are equivalent.
Sites and sheaves
Let C be a category and let J be a Grothendieck topology on C. The pair (C, J) is called a site.
A presheaf on a category is a contravariant functor from C to the category of all sets. Note that for this definition C is not required to have a topology. A sheaf on a site, however, should allow gluing, just like sheaves in classical topology. Consequently, we define a sheaf on a site to be a presheaf F such that for all objects X and all covering sieves S on X, the natural map Hom(Hom(−, X), F) → Hom(S, F), induced by the inclusion of S into Hom(−, X), is a bijection. Halfway in between a presheaf and a sheaf is the notion of a separated presheaf, where the natural map above is required to be only an injection, not a bijection, for all sieves S. A morphism of presheaves or of sheaves is a natural transformation of functors. The category of all sheaves on C is the topos defined by the site (C, J).
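By the Yoneda lemma, the left-hand side of that natural map identifies with F(X), so the sheaf condition can be displayed as follows; this is a standard restatement added for reference, not a formula quoted from the text.

```latex
% F is a sheaf for the topology J precisely when, for every object X and
% every covering sieve S \in J(X), restriction along S \subseteq \mathrm{Hom}(-, X)
% induces a bijection
F(X) \;\cong\; \mathrm{Hom}(\mathrm{Hom}(-,X),\, F)
     \;\xrightarrow{\;\sim\;}\; \mathrm{Hom}(S,\, F).
% A separated presheaf only requires this map to be injective.
```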
Using the Yoneda lemma, it is possible to show that a presheaf on the category O(X) is a sheaf on the topology defined above if and only if it is a sheaf in the classical sense.
Sheaves on a pretopology have a particularly simple description: For each covering family {Xα → X}, the diagram
F(X) → ∏α F(Xα) ⇉ ∏α,β F(Xα ×X Xβ)
must be an equalizer. For a separated presheaf, the first arrow need only be injective.
Similarly, one can define presheaves and sheaves of abelian groups, rings, modules, and so on. One can require either that a presheaf F is a contravariant functor to the category of abelian groups (or rings, or modules, etc.), or that F be an abelian group (ring, module, etc.) object in the category of all contravariant functors from C to the category of sets. These two definitions are equivalent.
Examples of sites
The discrete and indiscrete topologies
Let C be any category. To define the discrete topology, we declare all sieves to be covering sieves. If C has all fibered products, this is equivalent to declaring all families to be covering families. To define the indiscrete topology, also known as the coarse or chaotic topology, we declare only the sieves of the form Hom(−, X) to be covering sieves. The indiscrete topology is generated by the pretopology that has only isomorphisms for covering families. A sheaf on the indiscrete site is the same thing as a presheaf.
The canonical topology
Let C be any category. The Yoneda embedding gives a functor Hom(−, X) for each object X of C. The canonical topology is the biggest (finest) topology such that every representable presheaf, i.e. presheaf of the form Hom(−, X), is a sheaf. A covering sieve or covering family for this site is said to be strictly universally epimorphic because it consists of the legs of a colimit cone (under the full diagram on the domains of its constituent morphisms) and these colimits are stable under pullbacks along morphisms in C. A topology that is less fine than the canonical topology, that is, for which every covering sieve is strictly universally epimorphic, is called subcanonical. Subcanonical sites are exactly the sites for which every presheaf of the form Hom(−, X) is a sheaf. Most sites encountered in practice are subcanonical.
Small site associated to a topological space
We repeat the example that we began with above. Let X be a topological space. We defined O(X) to be the category whose objects are the open sets of X and whose morphisms are inclusions of open sets. Note that for an open set U and a sieve S on U, the set S(V) contains either zero or one element for every open set V. The covering sieves on an object U of O(X) are those sieves S satisfying the following condition:
If W is the union of all the sets V such that S(V) is non-empty, then W = U.
This notion of cover matches the usual notion in point-set topology.
This topology can also naturally be expressed as a pretopology. We say that a family of inclusions {Vα ⊆ U} is a covering family if and only if the union ⋃Vα equals U. This site is called the small site associated to a topological space X.
Big site associated to a topological space
Let Spc be the category of all topological spaces. Given any family of functions {uα : Vα → X}, we say that it is a surjective family or that the morphisms uα are jointly surjective if ⋃uα(Vα) equals X. We define a pretopology on Spc by taking the covering families to be surjective families all of whose members are open immersions. Let S be a sieve on Spc. S is a covering sieve for this topology if and only if:
For all Y and every morphism f : Y → X in S(Y), there exists a V and a g : V → X such that g is an open immersion, g is in S(V), and f factors through g.
If W is the union of all the sets f(Y), where f : Y → X is in S(Y), then W = X.
Fix a topological space X. Consider the comma category Spc/X of topological spaces with a fixed continuous map to X. The topology on Spc induces a topology on Spc/X. The covering sieves and covering families are almost exactly the same; the only difference is that now all the maps involved commute with the fixed maps to X. This is the big site associated to a topological space X. Notice that Spc is the big site associated to the one point space. This site was first considered by Jean Giraud.
The big and small sites of a manifold
Let M be a manifold. M has a category of open sets O(M) because it is a topological space, and it gets a topology as in the above example. For two open sets U and V of M, the fiber product U ×M V is the open set U ∩ V, which is still in O(M). This means that the topology on O(M) is defined by a pretopology, the same pretopology as before.
Let Mfd be the category of all manifolds and continuous maps. (Or smooth manifolds and smooth maps, or real analytic manifolds and analytic maps, etc.) Mfd is a subcategory of Spc, and open immersions are continuous (or smooth, or analytic, etc.), so Mfd inherits a topology from Spc. This lets us construct the big site of the manifold M as the site Mfd/M. We can also define this topology using the same pretopology we used above. Notice that to satisfy (PT 0), we need to check that for any continuous map of manifolds X → Y and any open subset U of Y, the fibered product U ×Y X is in Mfd/M. This is just the statement that the preimage of an open set is open. Notice, however, that not all fibered products exist in Mfd because the preimage of a smooth map at a critical value need not be a manifold.
Topologies on the category of schemes
The category of schemes, denoted Sch, has a tremendous number of useful topologies. A complete understanding of some questions may require examining a scheme using several different topologies. All of these topologies have associated small and big sites. The big site is formed by taking the entire category of schemes and their morphisms, together with the covering sieves specified by the topology. The small site over a given scheme is formed by only taking the objects and morphisms that are part of a cover of the given scheme.
The most elementary of these is the Zariski topology. Let X be a scheme. X has an underlying topological space, and this topological space determines a Grothendieck topology. The Zariski topology on Sch is generated by the pretopology whose covering families are jointly surjective families of scheme-theoretic open immersions. The covering sieves S for Zar are characterized by the following two properties:
For all Y and every morphism f : Y → X in S(Y), there exists a V and a g : V → X such that g is an open immersion, g is in S(V), and f factors through g.
If W is the union of all the sets f(Y), where f : Y → X is in S(Y), then W = X.
Despite their outward similarities, the topology on Zar is not the restriction of the topology on Spc! This is because there are morphisms of schemes that are topologically open immersions but that are not scheme-theoretic open immersions. For example, let A be a non-reduced ring and let N be its ideal of nilpotents. The quotient map A → A/N induces a map Spec A/N → Spec A, which is the identity on underlying topological spaces. To be a scheme-theoretic open immersion it must also induce an isomorphism on structure sheaves, which this map does not do. In fact, this map is a closed immersion.
The étale topology is finer than the Zariski topology. It was the first Grothendieck topology to be closely studied. Its covering families are jointly surjective families of étale morphisms. It is finer than the Nisnevich topology, but neither finer nor coarser than the cdh and l′ topologies.
There are two flat topologies, the fppf topology and the fpqc topology. fppf stands for , and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat, of finite presentation, and is quasi-finite. fpqc stands for , and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family that is a cover on Zariski open subsets. In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The fpqc topology is finer than all the topologies mentioned above, and it is very close to the canonical topology.
Grothendieck introduced crystalline cohomology to study the p-torsion part of the cohomology of characteristic p varieties. In the crystalline topology, which is the basis of this theory, the underlying category has objects given by infinitesimal thickenings together with divided power structures. Crystalline sites are examples of sites with no final object.
Continuous and cocontinuous functors
There are two natural types of functors between sites. They are given by functors that are compatible with the topology in a certain sense.
Continuous functors
If (C, J) and (D, K) are sites and u : C → D is a functor, then u is continuous if for every sheaf F on D with respect to the topology K, the presheaf Fu is a sheaf with respect to the topology J. Continuous functors induce functors between the corresponding topoi by sending a sheaf F to Fu. These functors are called pushforwards. If C̃ and D̃ denote the topoi associated to C and D, then the pushforward functor is u_s : D̃ → C̃.
u_s admits a left adjoint u^s called the pullback. u^s need not preserve limits, even finite limits.
In the same way, u sends a sieve on an object X of C to a sieve on the object uX of D. A continuous functor sends covering sieves to covering sieves. If J is the topology defined by a pretopology, and if u commutes with fibered products, then u is continuous if and only if it sends covering sieves to covering sieves and if and only if it sends covering families to covering families. In general, it is not sufficient for u to send covering sieves to covering sieves (see SGA IV 3, 1.9.3).
Cocontinuous functors
Again, let (C, J) and (D, K) be sites and v : C → D be a functor. If X is an object of C and R is a sieve on vX, then R can be pulled back to a sieve S as follows: A morphism f : Z → X is in S if and only if v(f) : vZ → vX is in R. This defines a sieve. v is cocontinuous if and only if for every object X of C and every covering sieve R of vX, the pullback S of R is a covering sieve on X.
Composition with v sends a presheaf F on D to a presheaf Fv on C, but if v is cocontinuous, this need not send sheaves to sheaves. However, this functor on presheaf categories admits a right adjoint. Then v is cocontinuous if and only if this right adjoint sends sheaves to sheaves, that is, if and only if it restricts to a functor v_* : C̃ → D̃. In this case, the composite of the composition functor above with the associated sheaf functor is a left adjoint of v_*, denoted v^*. Furthermore, v^* preserves finite limits, so the adjoint functors v^* and v_* determine a geometric morphism of topoi C̃ → D̃.
Morphisms of sites
A continuous functor u : C → D is a morphism of sites D → C (not C → D) if u^s preserves finite limits. In this case, u^s and u_s determine a geometric morphism of topoi Sh(D) → Sh(C). The reasoning behind the convention that a continuous functor C → D is said to determine a morphism of sites in the opposite direction is that this agrees with the intuition coming from the case of topological spaces. A continuous map of topological spaces X → Y determines a continuous functor O(Y) → O(X). Since the original map on topological spaces is said to send X to Y, the morphism of sites is said to as well.
A particular case of this happens when a continuous functor admits a left adjoint. Suppose that u : C → D and v : D → C are functors with u right adjoint to v. Then u is continuous if and only if v is cocontinuous, and when this happens, u^s is naturally isomorphic to v^* and u_s is naturally isomorphic to v_*. In particular, u is a morphism of sites.
See also
Fibered category
Lawvere–Tierney topology
Notes
References
External links
The birthday of Grothendieck topologies
The birthday of Grothendieck topologies (non-archived version)
Topos theory
Sheaf theory | Grothendieck topology | [
"Mathematics"
] | 5,737 | [
"Mathematical structures",
"Sheaf theory",
"Topology",
"Category theory",
"Topos theory"
] |
12,914 | https://en.wikipedia.org/wiki/Ghost%20in%20the%20Shell | Ghost in the Shell is a Japanese cyberpunk media franchise based on the manga series of the same name written and illustrated by Masamune Shirow. The manga, first serialized between 1989 and 1991, is set in mid-21st century Japan and tells the story of the fictional counter-cyberterrorist organization Public Security Section 9, led by protagonist Major Motoko Kusanagi.
Animation studio Production I.G has produced several anime adaptations of the series. These include the 1995 film of the same name and its 2004 sequel, Ghost in the Shell 2: Innocence; the 2002 television series Ghost in the Shell: Stand Alone Complex and its 2020 follow-up, Ghost in the Shell: SAC_2045; and the Ghost in the Shell: Arise original video animation series. In addition, an American-produced live-action film was released in March 2017.
Overview
Title
According to the original editor Koichi Yuri, the title Ghost in the Shell came from Shirow himself; when Yuri asked for "something more flashy", Shirow came up with "攻殻機動隊 Koukaku Kidou Tai (Shell Squad)" for him. Shirow nonetheless remained attached to including "Ghost in the Shell" as well, even if in smaller type.
Setting
Primarily set in the mid-twenty-first century in the fictional Japanese city of Niihama, otherwise known as New Port City, the manga and the many anime adaptations follow the members of Public Security Section 9, a task-force consisting of various professionals skilled at solving and preventing crime, mostly with some sort of police background. Political intrigue and counter-terrorism operations are standard fare for Section 9, but the various actions of corrupt officials, companies, and cyber-criminals in each scenario are unique and require the diverse skills of Section 9's staff to prevent a series of incidents from escalating.
In this post-cyberpunk iteration of a possible future, computer technology has advanced to the point that many members of the public possess cyberbrains, technology that allows them to interface their biological brain with various networks. The level of cyberization varies from simple minimal interfaces to almost complete replacement of the brain with cybernetic parts, in cases of severe trauma. This can also be combined with various levels of prostheses, with a fully prosthetic body enabling a person to become a cyborg. The main character of Ghost in the Shell, Major Motoko Kusanagi, is such a cyborg, having had a terrible accident befall her as a child that ultimately required her to use a full-body prosthesis to house her cyberbrain. This high level of cyberization, however, opens the brain up to attacks from highly skilled hackers, with the most dangerous being those who will hack a person to bend to their whims.
Media
Literature
Original manga
The original Ghost in the Shell manga ran in Japan from April 1989 to November 1990 in Kodansha's manga anthology Young Magazine, and was released in a tankōbon volume on October 2, 1991. Ghost in the Shell 2: Man-Machine Interface followed in 1997 for 9 issues in Young Magazine, and was collected in the Ghost in the Shell: Solid Box on December 1, 2000. Then a standard version with modifications and new pages was published on June 26, 2001. Four stories from Man-Machine Interface that were not released in tankobon format from previous releases were later collected in Ghost in the Shell 1.5: Human-Error Processor, and published by Kodansha on July 17, 2003. Several art books have also been published for the manga.
Films
Animated films
Two animated films based on the original manga have been released, both directed by Mamoru Oshii and animated by Production I.G. Ghost in the Shell was released in 1995 and follows the "Puppet Master" storyline from the manga. It was re-released in 2008 as Ghost in the Shell 2.0 with new audio and updated 3D computer graphics in certain scenes. Innocence, otherwise known as Ghost in the Shell 2: Innocence, was released in 2004, with its story based on a chapter from the first manga.
Live-action film
In 2008, DreamWorks and producer Steven Spielberg acquired the rights to a live-action film adaptation of the original Ghost in the Shell manga. On January 24, 2014, Rupert Sanders was announced as director, with a screenplay by William Wheeler. In April 2016, the full cast was announced, which included Juliette Binoche, Chin Han, Lasarus Ratuere and Kaori Momoi, and Scarlett Johansson in the lead role; the casting of Johansson drew accusations of whitewashing. Principal photography on the film began on location in Wellington, New Zealand, on February 1, 2016. Filming wrapped in June 2016. Ghost in the Shell premiered in Tokyo on March 16, 2017, and was released in the United States on March 31, 2017, in 2D, 3D and IMAX 3D. It received mixed reviews, with praise for its visuals and Johansson's performance but criticism for its script.
Television
Stand Alone Complex TV series, film and ONA
In 2002, Ghost in the Shell: Stand Alone Complex premiered on Animax, presenting a new telling of Ghost in the Shell independent from the original manga, focusing on Section 9's investigation of the Laughing Man hacker. It was followed in 2004 by a second season titled Ghost in the Shell: S.A.C. 2nd GIG, which focused on the Individual Eleven terrorist group. The primary storylines of both seasons were compressed into OVAs broadcast as Ghost in the Shell: Stand Alone Complex The Laughing Man in 2005 and Ghost in the Shell: Stand Alone Complex Individual Eleven in 2006. Also in 2006, Ghost in the Shell: Stand Alone Complex - Solid State Society, featuring Section 9's confrontation with a hacker known as the Puppeteer, was broadcast, serving as a finale to the anime series. The extensive score for the series and its films was composed by Yoko Kanno.
On April 7, 2017, Kodansha and Production I.G announced that Kenji Kamiyama and Shinji Aramaki would be co-directing a new Kōkaku Kidōtai anime production. On December 7, 2018, it was reported by Netflix that they had acquired the worldwide streaming rights to the original net animation (ONA) anime series, titled Ghost in the Shell: SAC_2045, and that it would premiere on April 23, 2020. The series is in 3DCG and Sola Digital Arts collaborated with Production I.G on the project. Ilya Kuvshinov handled character designs. The series had two seasons of 12 episodes each.
In addition to the anime, a series of published books, two separate manga adaptations, and several video games for consoles and mobile phones have been released for Stand Alone Complex.
Arise OVA, TV series and film
In 2013, a new iteration of the series titled Ghost in the Shell: Arise premiered, taking an original look at the Ghost in the Shell world, set before the original manga. It was released as a series of four original video animation (OVA) episodes (with limited theatrical releases) from 2013 to 2014, then recompiled as a 10-episode television series under the title of Kōkaku Kidōtai: Arise - Alternative Architecture. An additional fifth OVA titled Pyrophoric Cult, originally premiering in the Alternative Architecture broadcast as two original episodes, was released on August 26, 2015. Kazuchika Kise served as the chief director of the series, with Tow Ubukata as head writer. Cornelius was brought onto the project to compose the score for the series, with the Major's new voice actress Maaya Sakamoto also providing vocals for certain tracks.
Ghost in the Shell: The New Movie, also known as Ghost in the Shell: Arise − The Movie or New Ghost in the Shell, is a 2015 film directed by Kazuya Nomura that serves as a finale to the Ghost in the Shell: Arise story arc. The film is a continuation to the plot of the Pyrophoric Cult episode of Arise, and ties up loose ends from that arc.
A manga adaptation was serialized in Kodansha's Young Magazine, which started on March 13 and ended on August 26, 2013.
2026 anime
On May 25, 2024, it was announced that a new anime television series adaptation will be produced by Science Saru for a 2026 premiere. Saru will be in a production committee with Bandai Namco Filmworks, Kodansha and Production I.G.
Video games
Ghost in the Shell was developed by Exact and released for the PlayStation on July 17, 1997, in Japan by Sony Computer Entertainment. It is a third-person shooter featuring an original storyline where the character plays a rookie member of Section 9. The video game's soundtrack Megatech Body features various techno artists, such as Takkyu Ishino, Scan X and Mijk Van Dijk.
Several video games were also developed to tie into the Stand Alone Complex television series, in addition to a first-person shooter by Nexon and Neople titled Ghost in the Shell: Stand Alone Complex - First Assault Online, released in 2016.
A virtual reality game entitled Ghost in the Shell Arise: Stealth Hounds, was made available at Bandai Namco's arcade VR Zone Shinjuku in 2017.
Legacy
Ghost in the Shell influenced some prominent filmmakers. The Wachowskis, creators of The Matrix and its sequels, showed it to producer Joel Silver, saying, "We wanna do that for real." The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell, and the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence and Jonathan Mostow's Surrogates. Cameron himself named Ghost in the Shell as a source of inspiration and an influence on Avatar.
Bungie's 2001 third-person action game Oni draws substantial inspiration from the setting and characters of Ghost in the Shell. Ghost in the Shell also influenced video games such as the Metal Gear Solid series, Deus Ex, and Cyberpunk 2077.
Explanatory notes
References
External links
Madman Entertainment's Australian distribution release site
Fiction about artificial intelligence
Bandai Namco franchises
Fiction about brain–computer interface
Cyberpunk
Cyberpunk anime and manga
Fiction about cyborgs
Existentialist anime and manga
Fiction about consciousness transfer
Fiction about memory erasure and alteration
Fiction about robots
IG Port franchises
Kodansha franchises
Philosophical anime and manga
Post-apocalyptic fiction
Postcyberpunk
Fiction about prosthetics
Transhumanism in fiction
Military science fiction | Ghost in the Shell | [
"Biology"
] | 2,195 | [
"Fiction about cyborgs",
"Cyborgs"
] |
12,916 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre%20algorithm | The Gauss–Legendre algorithm is an algorithm to compute the digits of . It is notable for being rapidly convergent, with only 25 iterations producing 45 million correct digits of . However, it has some drawbacks (for example, it is computer memory-intensive) and therefore all record-breaking calculations for many years have used other methods, almost always the Chudnovsky algorithm. For details, see Chronology of computation of .
The method is based on the individual work of Carl Friedrich Gauss (1777–1855) and Adrien-Marie Legendre (1752–1833) combined with modern algorithms for multiplication and square roots. It repeatedly replaces two numbers by their arithmetic and geometric mean, in order to approximate their arithmetic-geometric mean.
The version presented below is also known as the Gauss–Euler, Brent–Salamin (or Salamin–Brent) algorithm; it was independently discovered in 1975 by Richard Brent and Eugene Salamin. It was used to compute the first 206,158,430,000 decimal digits of on September 18 to 20, 1999, and the results were checked with Borwein's algorithm.
Algorithm
Initial value setting: a_0 = 1, b_0 = 1/√2, t_0 = 1/4, p_0 = 1.
Repeat the following instructions until the difference between a_n and b_n is within the desired accuracy: a_{n+1} = (a_n + b_n)/2, b_{n+1} = √(a_n b_n), t_{n+1} = t_n − p_n (a_n − a_{n+1})², p_{n+1} = 2 p_n.
π is then approximated as: π ≈ (a_{n+1} + b_{n+1})² / (4 t_{n+1}).
The first three iterations give (approximations given up to and including the first incorrect digit):
The algorithm has quadratic convergence, which essentially means that the number of correct digits doubles with each iteration of the algorithm.
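A minimal Python sketch of the iteration described above, using the standard-library decimal module for high-precision arithmetic; the function name, the number of guard digits and the fixed iteration count are illustrative choices rather than part of the algorithm:

```python
# Gauss–Legendre iteration for pi (sketch). Variables a, b, t, p follow the
# presentation above; precision and iteration count are illustrative choices.
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations, digits=60):
    getcontext().prec = digits + 10                 # extra guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()              # 1/sqrt(2)
    t = Decimal(1) / Decimal(4)
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2                        # arithmetic mean
        b = (a * b).sqrt()                          # geometric mean
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)                   # approximation of pi

# The number of correct digits roughly doubles with every iteration.
for n in range(1, 5):
    print(n, gauss_legendre_pi(n))
```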
Mathematical background
Limits of the arithmetic–geometric mean
The arithmetic–geometric mean of two numbers, a_0 and b_0, is found by calculating the limit of the sequences a_{n+1} = (a_n + b_n)/2, b_{n+1} = √(a_n b_n),
which both converge to the same limit.
If a_0 = 1 and b_0 = cos φ, then the limit is π / (2 K(sin φ)), where K(k) is the complete elliptic integral of the first kind K(k) = ∫₀^{π/2} dθ / √(1 − k² sin² θ).
If c_0 = sin φ and c_{i+1} = a_i − a_{i+1}, then ∑_{i=0}^{∞} 2^{i−1} c_i² = 1 − E(sin φ)/K(sin φ),
where E(k) is the complete elliptic integral of the second kind: E(k) = ∫₀^{π/2} √(1 − k² sin² θ) dθ.
Gauss knew of these two results.
Legendre’s identity
Legendre proved the following identity: K(cos θ) E(sin θ) + K(sin θ) E(cos θ) − K(cos θ) K(sin θ) = π/2,
for all θ.
Elementary proof with integral calculus
The Gauss–Legendre algorithm can be proven to give results converging to π using only integral calculus.
See also
Numerical approximations of
References
Pi algorithms | Gauss–Legendre algorithm | [
"Mathematics"
] | 445 | [
"Pi",
"Pi algorithms"
] |
17,519,029 | https://en.wikipedia.org/wiki/Link%20concordance | In mathematics, two links and are concordant if there exists an embedding such that and .
By its nature, link concordance is an equivalence relation. It is weaker than isotopy, and stronger than homotopy: isotopy implies concordance implies homotopy. A link is a slice link if it is concordant to the unlink.
Concordance invariants
A function of a link that is invariant under concordance is called a concordance invariant.
The linking number of any two components of a link is one of the most elementary concordance invariants. The signature of a knot is also a concordance invariant. Subtler concordance invariants are the Milnor invariants, and in fact all rational finite type concordance invariants are Milnor invariants and their products, though non-finite type concordance invariants exist.
Higher dimensions
One can analogously define concordance for any two submanifolds L_0, L_1 ⊂ N. In this case one considers two submanifolds concordant if there is a cobordism between them in N × [0, 1], i.e., if there is a manifold with boundary W ⊂ N × [0, 1] whose boundary consists of L_0 × {0} and L_1 × {1}.
This higher-dimensional concordance is a relative form of cobordism – it requires two submanifolds to be not just abstractly cobordant, but "cobordant in N".
See also
Slice knot
References
Further reading
J. Hillman, Algebraic invariants of links. Series on Knots and everything. Vol 32. World Scientific.
Livingston, Charles, A survey of classical knot concordance, in: Handbook of knot theory, pp 319–347, Elsevier, Amsterdam, 2005.
Knot invariants
Manifolds | Link concordance | [
"Mathematics"
] | 341 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
17,519,063 | https://en.wikipedia.org/wiki/John%20von%20Neumann%20Computer%20Society | The John von Neumann Computer Society () is the central association for Hungarian researchers of Information communication technology and official partner of the International Federation for Information Processing founded in 1968.
References
External links
Official website
Professional associations based in Hungary
Information technology organizations based in Europe
Computer science organizations
Organizations established in 1968 | John von Neumann Computer Society | [
"Technology"
] | 58 | [
"Computing stubs",
"Computer science",
"Computer science organizations"
] |
17,519,721 | https://en.wikipedia.org/wiki/Entropic%20vector | The entropic vector or entropic function is a concept arising in information theory. It represents the possible values of Shannon's information entropy that subsets of one set of random variables may take. Understanding which vectors are entropic is a way to represent all possible inequalities between entropies of various subsets. For example, for any two random variables , their joint entropy (the entropy of the random variable representing the pair ) is at most the sum of the entropies of and of :
Other information-theoretic measures such as conditional information, mutual information, or total correlation can be expressed in terms of joint entropy and are thus related by the corresponding inequalities.
Many inequalities satisfied by entropic vectors can be derived as linear combinations of a few basic ones, called Shannon-type inequalities.
However, it has been proven that already for four variables, no finite set of linear inequalities is sufficient to characterize all entropic vectors.
Definition
Shannon's information entropy of a random variable X is denoted H(X).
For a tuple of random variables (X_1, …, X_n), we denote the joint entropy of a subset {X_i : i ∈ S} as H(X_S), or more concisely as h(S), where S ⊆ {1, …, n}.
Here X_S can be understood as the random variable representing the tuple (X_i)_{i ∈ S}.
For the empty subset ∅, X_∅ denotes a deterministic variable with entropy 0.
A vector h in R^(2^n) indexed by subsets of {1, …, n} is called an entropic vector of order n if there exists a tuple of random variables (X_1, …, X_n) such that h(S) = H(X_S) for each subset S ⊆ {1, …, n}.
The set of all entropic vectors of order n is denoted by Γn*.
Zhang and Yeung proved that it is not closed (for n ≥ 3), but its closure is a convex cone and hence characterized by the (infinitely many) linear inequalities it satisfies.
Describing this closed region is thus equivalent to characterizing all possible inequalities on joint entropies.
Example
Let X, Y be two independent random variables with discrete uniform distribution over the set {0, 1}. Then
H(X) = H(Y) = 1 (since each is uniformly distributed over a two-element set), and
H(X, Y) = 2 (since the two variables are independent, which means the pair (X, Y) is uniformly distributed over the four-element set {0, 1} × {0, 1}).
The corresponding entropic vector is thus: (h({1}), h({2}), h({1, 2})) = (1, 1, 2), together with h(∅) = 0.
On the other hand, the vector (0, 0, 1) in the same coordinates is not entropic (that is, it does not lie in Γ2*), because any pair of random variables (independent or not) should satisfy H(X, Y) ≤ H(X) + H(Y).
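The following Python sketch recomputes this entropic vector directly from the joint distribution; the helper functions are ad hoc illustrations rather than part of any standard library:

```python
# Entropic vector of two independent fair bits (entropies in bits, i.e. log base 2).
from itertools import product
from math import log2

def entropy(pmf):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, coords):
    """Marginal distribution on the given coordinate positions."""
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in coords)
        out[key] = out.get(key, 0.0) + p
    return out

# Joint distribution of (X, Y): independent and uniform over {0, 1}.
joint = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

h_X = entropy(marginal(joint, [0]))    # 1.0
h_Y = entropy(marginal(joint, [1]))    # 1.0
h_XY = entropy(joint)                  # 2.0
print((h_X, h_Y, h_XY))                # the entropic vector (1, 1, 2)

# By contrast, (0, 0, 1) cannot arise from any pair of random variables,
# since it would violate H(X, Y) <= H(X) + H(Y).
```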
Characterizing entropic vectors: the region Γn*
Shannon-type inequalities and Γn
For a tuple of random variables X_1, …, X_n, their entropies satisfy:
H(X_S) ≤ H(X_T), for any two subsets S ⊆ T of {1, …, n}.
In particular, 0 = H(X_∅) ≤ H(X_S), for any S ⊆ {1, …, n}.
The Shannon inequality says that an entropic vector is submodular:
H(X_S) + H(X_T) ≥ H(X_{S∪T}) + H(X_{S∩T}), for any S, T ⊆ {1, …, n}.
It is equivalent to the inequality stating that the conditional mutual information is non-negative:
I(X; Y | Z) = H(X, Z) + H(Y, Z) − H(X, Y, Z) − H(Z) ≥ 0.
(For one direction, observe that this last form expresses Shannon's inequality for the sub-tuples (X, Z) and (Y, Z) of the tuple (X, Y, Z); for the other direction, substitute X = X_{S∖T}, Y = X_{T∖S}, Z = X_{S∩T}.)
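As a numerical illustration (a spot check on a sampled distribution, not the linear-programming approach used by the inequality provers mentioned below), the following Python sketch verifies the submodularity inequality on a random joint distribution of three binary variables; all names are ad hoc:

```python
# Spot-check of submodularity: H(S) + H(T) >= H(S union T) + H(S intersect T).
import random
from itertools import combinations, product
from math import log2

random.seed(0)
outcomes = list(product([0, 1], repeat=3))
weights = [random.random() for _ in outcomes]
total = sum(weights)
joint = {o: w / total for o, w in zip(outcomes, weights)}   # random joint pmf

def H(subset):
    """Joint entropy (in bits) of the variables indexed by `subset`."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in sorted(subset))
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

subsets = [set(c) for r in range(4) for c in combinations(range(3), r)]
for S in subsets:
    for T in subsets:
        assert H(S) + H(T) >= H(S | T) + H(S & T) - 1e-12
print("submodularity holds for this sampled distribution")
```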
Many inequalities can be derived as linear combinations of Shannon inequalities; they are called Shannon-type inequalities or basic information inequalities of Shannon's information measures. The set of vectors that satisfies them is called Γn; it contains Γn*.
Software has been developed to automate the task of proving Shannon-type inequalities.
Given an inequality, such software is able to determine whether the given inequality is a valid Shannon-type inequality (i.e., whether the half-space it defines contains the cone Γn).
Non-Shannon-type inequalities
The question of whether Shannon-type inequalities are the only ones, that is, whether they completely characterize the region Γn*, was first asked by Te Sun Han in 1981 and more precisely by Nicholas Pippenger in 1986.
It is not hard to show that this is true for two variables, that is, Γ2* = Γ2.
For three variables, Zhang and Yeung proved that Γ3* ≠ Γ3; however, it is still asymptotically true, meaning that the closure is equal: the closure of Γ3* is Γ3.
In 1998, Zhang and Yeung showed that Γn* ≠ Γn for all n ≥ 4, by proving that the following inequality on four random variables (in terms of conditional mutual information) is true for any entropic vector, but is not Shannon-type:
2I(C; D) ≤ I(A; B) + I(A; C, D) + 3I(C; D | A) + I(C; D | B).
Further inequalities and infinite families of inequalities have been found.
These inequalities provide outer bounds for the closure of Γn* better than the Shannon-type bound Γn.
In 2007, Matus proved that no finite set of linear inequalities is sufficient (to deduce all valid inequalities as linear combinations), for n ≥ 4 variables. In other words, the closure of Γn* is not polyhedral.
Whether they can be characterized in some other way (allowing to effectively decide whether a vector is entropic or not) remains an open problem.
Analogous questions for von Neumann entropy in quantum information theory have been considered.
Inner bounds
Some inner bounds of the closure of Γn* are also known.
One example is that the closure of Γ4* contains all vectors in Γ4 which additionally satisfy the following inequality (and those obtained by permuting variables), known as Ingleton's inequality for entropy:
I(A; B) ≤ I(A; B | C) + I(A; B | D) + I(C; D).
Entropy and groups
Group-characterizable vectors and quasi-uniform distributions
Consider a group G and subgroups G_1, G_2, …, G_n of G.
Let G_S denote the intersection of the G_i for i in S, where S ⊆ {1, …, n}; this is also a subgroup of G.
It is possible to construct a probability distribution for n random variables X_1, …, X_n such that
H(X_S) = log(|G| / |G_S|).
(The construction essentially takes an element a of G uniformly at random and lets X_i be the corresponding coset a·G_i.) Thus any information-theoretic inequality implies a group-theoretic one. For example, the basic inequality H(X_1) + H(X_2) ≥ H(X_1, X_2) implies that |G_1| · |G_2| ≤ |G| · |G_1 ∩ G_2|.
It turns out the converse is essentially true.
More precisely, a vector h is said to be group-characterizable if it can be obtained from a tuple of subgroups as above.
As said above, every group-characterizable vector is entropic, that is, lies in Γn*.
On the other hand, Γn* (and thus its closure) is contained in the topological closure of the convex closure of the set of group-characterizable vectors.
In other words, a linear inequality holds for all entropic vectors if and only if it holds for all vectors of the form h(S) = log(|G| / |G_S|), where S goes over subsets of some tuple of subgroups G_1, …, G_n in a group G.
Group-characterizable vectors that come from an abelian group satisfy Ingleton's inequality.
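As an illustration of the construction, the following Python sketch uses the Klein four-group G = Z2 × Z2 with two of its subgroups (a toy example assumed here, not taken from the references) and checks that the entropies of the coset random variables equal log2(|G| / |G_S|), recovering the vector (1, 1, 2) from the earlier example:

```python
# Group-characterizable entropic vector from subgroups of G = Z2 x Z2.
from itertools import product
from math import log2

G = list(product([0, 1], repeat=2))        # the Klein four-group, written additively
G1 = [(0, 0), (1, 0)]                      # subgroup of order 2
G2 = [(0, 0), (0, 1)]                      # another subgroup of order 2

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def coset(a, subgroup):
    return frozenset(add(a, h) for h in subgroup)

def entropy_from_subgroups(subgroups):
    """log2(|G| / |G_S|), where G_S is the intersection of the chosen subgroups."""
    intersection = set(G)
    for sub in subgroups:
        intersection &= set(sub)
    return log2(len(G) / len(intersection))

def empirical_entropy(subgroups):
    """Entropy of the joint coset variable (X_i) for a uniformly random a in G."""
    counts = {}
    for a in G:
        key = tuple(coset(a, sub) for sub in subgroups)
        counts[key] = counts.get(key, 0) + 1
    return -sum((c / len(G)) * log2(c / len(G)) for c in counts.values())

for S in ([G1], [G2], [G1, G2]):
    print(entropy_from_subgroups(S), empirical_entropy(S))
# Prints 1.0 1.0, then 1.0 1.0, then 2.0 2.0: the vector (1, 1, 2).
```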
Kolmogorov complexity
Kolmogorov complexity satisfies essentially the same inequalities as entropy.
Namely, denote the Kolmogorov complexity of a finite string x as K(x) (that is, the length of the shortest program that outputs x).
The joint complexity of two strings x and y, defined as the complexity of an encoding of the pair (x, y), can be denoted K(x, y).
Similarly, the conditional complexity can be denoted K(x | y) (the length of the shortest program that outputs x given y).
Andrey Kolmogorov noticed these notions behave similarly to Shannon entropy, for example: K(x, y) ≤ K(x) + K(y) + O(log(K(x) + K(y))).
In 2000, Hammer et al. proved that indeed an inequality holds for entropic vectors if and only if the corresponding inequality in terms of Kolmogorov complexity holds up to logarithmic terms for all tuples of strings.
See also
Inequalities in information theory
References
Thomas M. Cover, Joy A. Thomas. Elements of Information Theory. New York: Wiley, 1991.
Raymond Yeung. A First Course in Information Theory. Chapter 12: Information Inequalities. 2002.
Information theory | Entropic vector | [
"Mathematics",
"Technology",
"Engineering"
] | 1,466 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
17,519,991 | https://en.wikipedia.org/wiki/Pierre-%C3%89mile%20Martin | Pierre-Émile Martin (; 18 August 1824, Bourges, Cher – 23 May 1915, Fourchambault) was a French industrial engineer. He applied the principle of recovery of the hot gas in an open hearth furnace, a process invented by Carl Wilhelm Siemens.
In 1865, based on the Siemens process, he implemented the process which bears his name for producing steel in a hearth by remelting scrap steel with the addition of cast iron for the dilution of impurities.
His work earned him the Bessemer Gold Medal of the Iron and Steel Institute in 1915 as well as recognition from the French state (he was made a Knight of the Legion of Honour in 1878 and an Officer in 1910).
Martin steel
The metal obtained using Martin's process has been called Martin steel. This steel contains far fewer impurities than that produced in the Bessemer converter, and its composition is much better controlled. The development of the process made it possible to use scrap steel and cast iron and to produce steel with a reputation for being of better quality than Bessemer steel. On the other hand, the process takes longer and the production costs are consequently higher. The invention was tested and implemented at the Sireuil foundry in Charente. The product was awarded a Gold Medal at the Paris Exhibition of 1867.
Martin-Siemens Process
The process of refining steel in a hearth, as developed by Pierre-Émile Martin, consists of smelting a mixture of cast iron and scrap or ore, then refining it by decarburization, desulfurization and dephosphorization. This method makes it possible to produce fine and alloy steels by adding noble elements.
The process employs a gas-heated reverberatory furnace with recovery of the heat from the flue gases as in the Siemens system.
References
Article includes content from the equivalent article in French Wikipedia
1824 births
1915 deaths
People from Bourges
French engineers
People of the Industrial Revolution
Bessemer Gold Medal
19th-century French businesspeople | Pierre-Émile Martin | [
"Chemistry"
] | 402 | [
"Bessemer Gold Medal",
"Chemical engineering awards"
] |
17,521,962 | https://en.wikipedia.org/wiki/Certified%20wireless%20network%20expert | The Certified Wireless Network Expert (CWNE) is the highest level certification in the CWNP program started in 2001 by Planet3 Wireless. It certifies the ability to design, install, secure, optimize and troubleshoot IEEE 802.11 wireless networks.
Certification track
The CWNE credential is the final step in a four-level certification process. It validates the applicant's real-world application of the principles covered by the other CWNP certification exams, including wireless protocol analysis, security, advanced design, spectrum analysis, wired network administration, and troubleshooting.
CWNE Requirements
The requirements for earning the CWNE certification changed on October 1, 2010, when the CWNE exam (PW0-300) was retired. The new requirements for the CWNE certification are:
Valid and current CWSP, CWAP, CWISA, and CWDP certifications (requires CWNA).
Three (3) years of documented enterprise Wi-Fi implementation experience.
Three (3) professional endorsements.
One (1) other current, valid professional networking certification.
Documentation of three (3) enterprise Wi-Fi projects in which you participated or which you led, in the form of 500-word essays.
Recertification
Like most other CWNP certifications, the CWNE certification is valid for three (3) years. The certification may be renewed by reporting at least sixty (60) hours of approved Continuing Education (CE).
Passing the most current version of either the CWSP, CWAP, or CWDP exam, which was the only recertification requirement prior to the change, is now worth twenty (20) CE hours.
See also
Professional certification (Computer technology)
References
External links
Official CWNP Site
Wireless networking
Information technology qualifications | Certified wireless network expert | [
"Technology",
"Engineering"
] | 354 | [
"Wireless networking",
"Computer occupations",
"Computer networks engineering",
"Information technology qualifications"
] |
17,523,133 | https://en.wikipedia.org/wiki/Lithium-ion%20capacitor | A lithium-ion capacitor (LIC or LiC) is a hybrid type of capacitor classified as a type of supercapacitor. It is called a hybrid because the anode is the same as those used in lithium-ion batteries and the cathode is the same as those used in supercapacitors. Activated carbon is typically used as the cathode. The anode of the LIC consists of carbon material which is often pre-doped with lithium ions. This pre-doping process lowers the potential of the anode and allows a relatively high output voltage compared to other supercapacitors.
History
In 1981, Dr. Yamabe of Kyoto University, in collaboration with Dr. Yata of Kanebo Co., created a material known as PAS (polyacenic semiconductive) by pyrolyzing phenolic resin at 400–700 °C. This amorphous carbonaceous material performs well as the electrode in high-energy-density rechargeable devices. Patents were filed in the early 1980s by Kanebo Co., and efforts to commercialize PAS capacitors and lithium-ion capacitors (LICs) began. The PAS capacitor was first used in 1986, and the LIC capacitor in 1991.
It was not until 2001 that a research group was able to bring the idea of a hybrid ion capacitor into existence. Much research was done to improve electrode and electrolyte performance and cycle life, but it was not until 2010 that Naoi et al. made a real breakthrough by developing a nano-structured composite of LTO (lithium titanium oxide) with carbon nanofibers. Nowadays, another field of interest is the sodium-ion capacitor (NIC), because sodium is much cheaper than lithium. Nevertheless, the LIC still outperforms the NIC, so the NIC is not economically viable at the moment.
Concept
A lithium-ion capacitor is a hybrid electrochemical energy storage device which combines the intercalation mechanism of a lithium-ion battery anode with the double-layer mechanism of the cathode of an electric double-layer capacitor (EDLC). The combination of a negative battery-type LTO electrode and a positive capacitor-type activated carbon (AC) electrode results in an energy density of ca. 20 W⋅h/kg, which is about 4–5 times that of a standard EDLC. The power density, however, has been shown to match that of EDLCs, as the device is able to completely discharge in seconds.
At the positive electrode (cathode), for which activated carbon is often used, charges are stored in an electric double layer that develops at the interface between the electrode and the electrolyte. Like those of EDLCs, LIC voltages vary linearly with state of charge, which complicates integrating them into systems whose power electronics expect the more stable voltage of batteries. As a consequence, LICs have a high energy density, which varies with the square of the voltage. The capacitance of the anode is several orders of magnitude larger than that of the cathode. As a result, the change of the anode potential during charge and discharge is much smaller than the change in the cathode potential.
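For reference, the dependence on voltage follows from the textbook energy relation for an ideal capacitor (a standard formula, not specific to this source); the example voltages below simply span the operating window quoted later in this article:

```latex
% Energy stored in an ideal capacitance C charged to voltage V:
\[
  E = \tfrac{1}{2}\,C\,V^{2},
\]
% so sweeping the cell between roughly 2.2 V and 3.8 V changes the stored energy
% by a factor of about (3.8/2.2)^2 \approx 3, which is why a higher output voltage
% translates directly into a higher energy density.
```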
Anode
The negative electrode or anode of the LIC is the battery type or high energy density electrode. The anode can be charged to contain large amounts of energy by reversible intercalation of lithium ions. This process is an electrochemical reaction. This is the reason that degradation is more of a problem for the anode than for the cathode since the cathode is involved in an electrostatic process and not in an electrochemical one.
There are two groups of anodes. The first group are the hybrids of electrochemically active species and carbonaceous materials. The second group are the nanostructured anode materials. The anode of LICs is basically an intercalation-type battery material which has sluggish kinetics. However, in order to employ an anode in LICs, one needs to slightly incline its properties towards those of a capacitor by designing hybrid anode materials. The hybrid materials can be prepared using capacitor- and battery-type storage mechanisms. Currently, the best electrochemically active species is lithium titanium oxide (LTO), Li4Ti5O12, because of its extraordinary properties like high coulombic efficiency, stable operating voltage plateau and insignificant volume change during lithium insertion/extraction. Bare LTO has poor electrical conductivity and lithium-ion diffusivity, so a hybrid is needed. The advantages of LTO combined with the great electrical conductivity and ionic diffusivity of carbonaceous materials like carbon coatings lead to economically viable LICs.
The electrode potential of LTO is fairly stable at around 1.55 V versus Li/Li+ (roughly −1.5 V versus the standard hydrogen electrode). Since carbonaceous material is used, the graphitic electrode potential, which is initially at about −0.1 V versus SHE, is lowered further to about −2.8 V by intercalating lithium ions. This step is referred to as "doping" and often takes place in the device between the anode and a sacrificial lithium electrode. Doping the anode lowers the anode potential and leads to a higher output voltage of the capacitor. Typically, output voltages for LICs are in the range of 3.8–4.0 V but are limited to minimum allowed voltages of 1.8–2.2 V.
The nanostructured materials are metal oxides with a high specific surface area. Their main advantage is that it's a way to increase the rate capability of the anode by reducing the diffusion pathways of the electrolytic species. Different forms of nanostructures have been developed including nanotubes (single- and multi-walled), nanoparticles, nanowires, and nanobeads to enhance power density.
Other candidates for anode materials are being investigated as alternatives to graphitic carbons, such as hard carbon, soft carbon and graphene-based carbons. The expected benefit, compared to graphitic carbons, is to increase the doped electrode potential, which leads to improved power capability as well as a reduced risk of metal (lithium) plating on the anode.
Cathode
The cathode of LICs uses an electric double layer to store energy. To maximise the effectiveness of the cathode, it should have a high specific surface area and good conductivity. Initially, activated carbon was used to make cathodes, but in order to improve performance, different cathodes have been used in LICs. These can be sorted into four groups: heteroatom-doped carbon, graphene-based, porous carbon, and bifunctional cathodes.
Heteroatom-doped carbon has so far only been doped with nitrogen. Doping activated carbon with nitrogen improves both the capacitance and the conductivity of the cathode.
Graphene based cathodes have been used because graphene has excellent electrical conductivity, its thin layers have a high specific surface area, and it can be produced cheaply. It has been shown to be effective and stable compared to other cathode materials.
Porous carbon cathodes are made similar to activated carbon cathodes. By using different methods to produce the carbon, it can be made with a higher porosity. This is useful because for the double layer effect to work the ions have to move between the double layer and the separator. Having a hierarchical pore structure makes this quicker and easier.
Bifunctional cathodes use a combination of materials chosen for their EDLC properties and materials chosen for their good Li+ intercalation properties to increase the energy density of the LIC. A similar idea was applied to the anode materials, whose properties were slightly inclined towards those of a capacitor.
Pre-lithiation (pre-doping)
The anode of LICs is often pre-lithiated in order to prevent the anode from experiencing a large potential drop during charge and discharge cycles. When a LIC comes near its maximum or minimum voltage, the electrolyte and electrodes start to degrade. This will irreversibly damage the device, and the degradation products will catalyse further degradation.
Another reason for pre-lithiation is that high-capacity electrodes irreversibly lose capacity after the initial charge and discharge cycles. This is mainly attributed to the formation of a solid electrolyte interphase (SEI) film. By pre-lithiation of the electrodes, the loss of lithium ions to SEI formation can be largely compensated. In general, the anode of LICs is pre-lithiated, since the cathode is Li-free and will not take part in lithium insertion/extraction processes.
Electrolyte
The third part of nearly any energy storage device is the electrolyte. The electrolyte must be able to transport ions from one electrode to the other, but it must not limit the reaction rate of the electrochemical species. For LICs, the electrolyte ideally has a high ionic conductivity such that lithium ions can easily reach the anode. Normally, one would use an aqueous electrolyte to achieve this, but water reacts with the lithium ions, so non-aqueous electrolytes are often used. The electrolyte used in a LIC is a lithium-ion salt solution that can be combined with other organic components and is generally identical to that used in lithium-ion batteries.
In general, organic electrolytes are used, which have a lower ionic conductivity (10 to 60 mS/cm) than aqueous electrolytes (100 to 1000 mS/cm) but are much more stable. Often cyclic (ethylene carbonate) and linear (dimethyl carbonate) carbonates are added to increase conductivity, and these even enhance the stability of SEI formation, meaning that there is a smaller chance that much SEI is formed after the initial cycles. Another category of electrolytes are the inorganic glass and ceramic electrolytes. These are not mentioned very often, but they do have their applications and have their own advantages and disadvantages compared to organic electrolytes, which mainly come from their porous structure.
A separator prevents direct electrical contact between the anode and the cathode. It must be chemically inert in order to prevent it from reacting with the electrolyte, which would lower the capabilities of the LIC. However, the separator should let ions through but not electrons, since electronic conduction between the electrodes would create a short circuit.
Properties
Typical properties of an LIC are
high capacitance compared to a capacitor, because of the large anode, though low capacity compared to a Li-ion cell
high energy density compared to a capacitor (14 W⋅h/kg reported), though low energy density compared to a Li-ion cell
high power density
high reliability
operating temperatures ranging from −20 °C to 70 °C
low self-discharge (<5% voltage drop at 25 °C over three months)
Comparison to other technologies
Batteries, EDLC and LICs each have different strengths and weaknesses, making them useful for different categories of applications.
Energy storage devices are characterized by three main criteria: power density (in W/kg), energy density (in W⋅h/kg) and cycle life (no. of charge cycles).
LICs have higher power densities than batteries, and are safer than lithium-ion batteries, in which thermal runaway reactions may occur.
Compared to the electric double-layer capacitor (EDLC), the LIC has a higher output voltage. Although they have similar power densities, the LIC has a much higher energy density than other supercapacitors. A Ragone plot comparison shows that LICs combine the high energy of LIBs with the high power density of EDLCs.
The cycle life performance of LICs is much better than that of batteries but does not come near that of EDLCs. Some LICs have a longer cycle life, but this is often at the cost of a lower energy density.
In conclusion, the LIC will probably never reach the energy density of a lithium-ion battery and never reach the combined cycle life and power density of a supercapacitor. Therefore, it should be seen as a separate technology with its own uses and applications.
LIC and LIB temperature performance
Lithium-ion capacitors offer superior performance in cold environments compared to traditional lithium-ion batteries. As demonstrated in recent studies, LICs can maintain approximately 50% of their capacity at temperatures as low as −10 °C under high discharge rates (7.5C). In contrast, lithium-ion batteries experience a significant reduction in capacity, dropping to around 50% capacity at just 5 °C under the same conditions. This makes LICs particularly suitable for applications in cold climates or where the temperature fluctuates widely.
Applications
Lithium-ion capacitors are fairly suitable for applications which require a high energy density, high power densities and excellent durability. Since they combine high energy density with high power density, there is no need for additional electrical storage devices in various kinds of applications, resulting in reduced costs.
Potential applications for lithium-ion capacitors are, for example, in the fields of wind power generation systems, uninterruptible power source systems (UPS), voltage sag compensation, photovoltaic power generation, energy recovery systems in industrial machinery, electric and hybrid vehicles and transportation systems.
One important potential end-use of HIC (hybrid ion capacitor) devices is in regenerative braking. Regenerative braking energy harvesting from trains, heavy automotive vehicles, and ultimately light vehicles represents a huge potential market that remains not fully exploited due to the limitations of existing secondary battery and supercapacitor (electrochemical capacitor and ultracapacitor) technologies.
References
External links
Introducing JM Energy Lithium-Ion Capacitor, JM Energy
Lithium-Ion Capacitor, JSR Micro
Capacitors | Lithium-ion capacitor | [
"Physics"
] | 2,927 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
17,523,721 | https://en.wikipedia.org/wiki/Looman%E2%80%93Menchoff%20theorem | In the mathematical field of complex analysis, the Looman–Menchoff theorem states that a continuous complex-valued function defined in an open set of the complex plane is holomorphic if and only if it satisfies the Cauchy–Riemann equations. It is thus a generalization of a theorem by Édouard Goursat, which instead of assuming the continuity of f, assumes its Fréchet differentiability when regarded as a function from a subset of R2 to R2. Theorem bears the name of Dutch mathematician Herman Looman and Soviet mathematician Dmitrii Menshov.
Statement
A complete statement of the theorem is as follows:
Let Ω be an open set in C and f : Ω → C be a continuous function. Suppose that the partial derivatives ∂f/∂x and ∂f/∂y exist everywhere but a countable set in Ω. Then f is holomorphic if and only if it satisfies the Cauchy–Riemann equation: ∂f/∂x + i ∂f/∂y = 0.
Examples
Looman pointed out that the function given by f(z) = exp(−z^−4) for z ≠ 0, f(0) = 0 satisfies the Cauchy–Riemann equations everywhere but is not analytic (or even continuous) at z = 0. This shows that the function f must be assumed continuous in the theorem.
The function given by f(z) = z^5/|z|^4 for z ≠ 0, f(0) = 0 is continuous everywhere and satisfies the Cauchy–Riemann equations at z = 0, but is not analytic at z = 0 (or anywhere else). This shows that a naive generalization of the Looman–Menchoff theorem to a single point is false:
Let f be continuous in a neighborhood of a point z, and such that ∂f/∂x and ∂f/∂y exist at z. Then f is holomorphic at z if and only if it satisfies the Cauchy–Riemann equation at z.
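A short worked check of the second example (a sketch of the standard computation, not drawn from the cited references):

```latex
% f(z) = z^5/|z|^4 with f(0) = 0; write f = u + iv.
% On the real axis f(x) = x, and on the imaginary axis f(iy) = iy, so at the origin
\[
  u_x(0,0) = 1, \quad v_x(0,0) = 0, \qquad u_y(0,0) = 0, \quad v_y(0,0) = 1,
\]
% hence u_x = v_y and u_y = -v_x: the Cauchy--Riemann equations hold at z = 0.
% Yet f is not complex-differentiable there: along the ray z = t(1 + i), t > 0,
% f(z)/z = (z/|z|)^4 = e^{i\pi} = -1, while along the real axis f(z)/z = 1.
```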
References
Theorems in complex analysis | Looman–Menchoff theorem | [
"Mathematics"
] | 410 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in complex analysis",
"Mathematical analysis stubs"
] |
17,524,713 | https://en.wikipedia.org/wiki/A%2ASTAR%20Talent%20Search | The A*STAR Talent Search (ATS) is a research-based science competition in Singapore for high school students between 15–21 years of age. It was formerly known as National Science Talent Search. The ATS is an annual competition which acknowledges and rewards students who have a strong aptitude for science & technology. This competition provides students the opportunity to showcase their stellar projects and encourage them to further explore science and technology.
The ATS is administered by the Agency for Science, Technology and Research (A*STAR) and Science Centre Singapore (SCS) from 2006. Participants are required to compete in the Singapore Science and Engineering Fair (SSEF) and winners from the fair will then proceed to the short-listing round of ATS. The panel of judges consists of distinguished scientists from local and international universities, as well as A*STAR research institutes and a Nobel Laureate as the Chief Judge. ATS winners need to display resourcefulness, mastery of scientific concepts, as well as passion for scientific research.
The First Prize winner will be given S$5000, inclusive of a sponsored overseas conference.
Winners and finalists (top 8 students) of the ATS have gone on to top universities worldwide, such as National University of Singapore, Harvard University, Princeton University, Yale University, Stanford University, Massachusetts Institute of Technology and California Institute of Technology in the United States, and University of Cambridge, University of Oxford and Imperial College London in the United Kingdom.
References
External links
ATS (A*STAR Graduate Academy) website
ATS (Science Centre Singapore) website
Science competitions | A*STAR Talent Search | [
"Technology"
] | 315 | [
"Science and technology awards",
"Science competitions"
] |
17,524,976 | https://en.wikipedia.org/wiki/Operations%20and%20technology%20management | Operations and Technology Management (OTM) is an interdisciplinary major which prepares students to gain knowledge and skills in the areas of operations management, IT management, and data analytics. This major is typically offered as part of business school and the curriculum is designed to develop the skills needed to manage and improve business operations through the integrated use of theories and methods from both operations management and information technology management (IT). Because of its inter-disciplinary nature, students graduating with OTM degrees tend to have more career options across a wide-range of industries. For instances, students with OTM degrees can pursue many roles across Operations, IT, and Analytics fields.
Many universities offer this major. For instance, the University of Portland offers BBA in OTM and MS in OTM (MSOTM) programs. Harvard University offers MBA and DBA programs in Technology and Operations Management (TOM). The University of Wisconsin-Madison offers BBA and MBA programs in OTM. Cal Poly Pomona offers programs in Technology and Operations Management (TOM). The UCLA Anderson School of Management offers programs in Decisions, Operations and Technology Management (DOTM). Boston University offers programs in Operations & Technology Management (OTM). NYU's Stern offers a specialization in management of technology and operations.
References
Further reading
book series: Operations and Technology Management, ed. by Prof. Dr. Thorsten Blecker; Prof. Dr. George Q. Huang; Prof. Dr. Fabrizio Salvador, Buchreihe Operations and Technology Management | Operations and technology management | [
"Engineering"
] | 310 | [
"Industrial engineering"
] |
17,525,025 | https://en.wikipedia.org/wiki/EV%20Lacertae | EV Lacertae (EV Lac, Gliese 873, HIP 112460) is a faint red dwarf star 16.48 light-years away in the constellation Lacerta. It is the nearest star to the Sun in that region of the sky, although with an apparent magnitude of 10, it is only barely visible with binoculars. EV Lacertae is a spectral type M3.5 flare star that emits X-rays.
On 25 April 2008, NASA's Swift satellite picked up a record-setting flare from EV Lacertae. This flare was thousands of times more powerful than the largest observed solar flare. Because EV Lacertae is much farther from Earth than the Sun, the flare did not appear as bright as a solar flare. The flare would have been visible to the naked eye if the star had been in an observable part of the night sky at the time. It was the brightest flare ever seen from a star other than the Sun.
EV Lacertae is much younger than the Sun. Its age is estimated at 300 million years, and it is still spinning rapidly. The fast spin, together with its convective interior, produces a magnetic field much more powerful than that of the Sun. This strong magnetic field is believed to play a role in the star's ability to produce such bright flares. After the flare, the star was blue.
In October 2022, another stellar flare was observed in EV Lacertae by a group of scientists led by Shun Inoue of Kyoto University, after observing the star in near-ultraviolet and white-light curves. The finding was announced and detailed on December 31, 2023, in the pre-print server arXiv.
References
Lacerta
M-type main-sequence stars
Lacertae, EV
Flare stars
Ursa Major moving group
112460
0873 | EV Lacertae | [
"Astronomy"
] | 380 | [
"Lacerta",
"Constellations"
] |
17,525,141 | https://en.wikipedia.org/wiki/Preferred%20IUPAC%20name | In chemical nomenclature, a preferred IUPAC name (PIN) is a unique name, assigned to a chemical substance and preferred among all possible names generated by IUPAC nomenclature. The "preferred IUPAC nomenclature" provides a set of rules for choosing between multiple possibilities in situations where it is important to decide on a unique name. It is intended for use in legal and regulatory situations.
Preferred IUPAC names are applicable only to organic compounds, which IUPAC (the International Union of Pure and Applied Chemistry) defines for this purpose as compounds that contain at least one carbon atom but no alkali, alkaline earth or transition metals, and that can be named by the nomenclature of organic compounds (see below). Rules for the remaining organic and inorganic compounds are still under development.
The concept of PINs is defined in the introductory chapter and chapter 5 of the "Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013" (freely accessible), which replace two former publications: the "Nomenclature of Organic Chemistry", 1979 (the Blue Book) and "A Guide to IUPAC Nomenclature of Organic Compounds, Recommendations 1993". The full draft version of the PIN recommendations ("Preferred names in the nomenclature of organic compounds", Draft of 7 October 2004) is also available.
Definitions
A preferred IUPAC name or PIN is a name that is preferred among two or more IUPAC names. An IUPAC name is a systematic name that meets the recommended IUPAC rules. IUPAC names include retained names. A general IUPAC name is any IUPAC name that is not a "preferred IUPAC name". A retained name is a traditional or otherwise often used name, usually a trivial name, that may be used in IUPAC nomenclature.
Since systematic names often are not human-readable a PIN may be a retained name. Both "PINs" and "retained names" have to be chosen (and established by IUPAC) explicitly, unlike other IUPAC names, which automatically arise from IUPAC nomenclatural rules. Thus, the PIN is sometimes the retained name (e.g., phenol and acetic acid, instead of benzenol and ethanoic acid), while in other cases, the systematic name was chosen over a very common retained name (e.g., propan-2-one, instead of acetone).
A preselected name is a preferred name chosen among two or more names for parent hydrides or other parent structures that do not contain carbon (inorganic parents). "Preselected names" are used in the nomenclature of organic compounds as the basis for PINs for organic derivatives. They are needed for derivatives of organic compounds that do not contain carbon themselves.
A preselected name is not necessarily a PIN in inorganic chemical nomenclature.
Basic principles
The systems of chemical nomenclature developed by the International Union of Pure and Applied Chemistry (IUPAC) have traditionally concentrated on ensuring that chemical names are unambiguous, that is that a name can only refer to one substance. However, a single substance can have more than one acceptable name, like toluene, which may also be correctly named as "methylbenzene" or "phenylmethane". Some alternative names remain available as "retained names" for more general contexts. For example, tetrahydrofuran remains an unambiguous and acceptable name for the common organic solvent, even if the preferred IUPAC name is "oxolane".
The nomenclature goes:
Preselected names are to be used.
Substitutive nomenclature (replacement of hydrogen atoms in the parent structure) is used most extensively, for example "ethoxyethane" instead of diethyl ether and "tetrachloromethane" instead of carbon tetrachloride.
Functional class naming (also known as radicofunctional nomenclature) is preferred next. In the case of acid anhydrides, esters, acyl halides and pseudohalides and salts, this method takes precedence over substitution.
Skeletal replacement ('a', named for the suffix), if it both applies and a heteroatom is found in a chain, is preferred over substitution. If it applies but no heteroatom is found in a chain, it is preferred over multiplicative nomenclature. Example: 3-phospha-2,5,7-trisilaoctane refers to CH3-SiH2-PH-CH2-SiH2-CH2-SiH2-CH3.
Skeletal replacement mainly replaces carbon with other atoms, or in the case of phane nomenclature, whole "superatom" rings. It also includes more complex replacements, such as the lambda convention.
Multiplicative nomenclature is to be preferred over simple substitutive if it can be used. This is the nomenclature with an apostrophe after numbers such as 4'; it allows multiple occurrences of the principal characteristic group or compound class to be treated together. Example: 4,4′-sulfanediyldibenzoic acid refers to (COOH-C6H4)2S.
The following are available, but not given special preference:
Conjunctive nomenclature is available, but substitutive, multiplicative, or skeletal should be preferred. Example: benzene-1,3,5-triacetic acid should instead be named 2,2′,2′′-(benzene-1,3,5-triyl)triacetic acid.
Additive and subtractive operations remain available as with the general case. For example, one keeps changing "ane" into "ene" for double bonds. Additive operations generally have better names derived from the above rules, such as phenyloxirane for styrene oxide.
Retained names
The number of retained non-systematic, trivial names of simple organic compounds (for example formic acid and acetic acid) has been reduced considerably for preferred IUPAC names, although a larger set of retained names is available for general nomenclature. The traditional names of simple monosaccharides, α-amino acids and many natural products have been retained as preferred IUPAC names; in these cases the systematic names may be very complicated and virtually never used. The name for water itself is a retained IUPAC name.
Scope of the nomenclature for organic compounds
In IUPAC nomenclature, all compounds containing carbon atoms are considered organic compounds. Organic nomenclature only applies to organic compounds containing elements from the Groups 13 through 17. Organometallic compounds of the Groups 1 through 12 are not covered by organic nomenclature.
Notes and references
Further reading
– background on why a preferred name is needed (multiple dialects of "systematic" such as CAS and Beilstein)
Chemical nomenclature | Preferred IUPAC name | [
"Chemistry"
] | 1,407 | [
"nan"
] |
17,525,666 | https://en.wikipedia.org/wiki/XTE%20J1650%E2%88%92500 | XTE J1650−500 is a binary system containing a stellar-mass black hole candidate and 2000–2001 transient binary X-ray source located in the constellation Ara.
In 2008, it was claimed that this black hole had a mass of 3.8±0.5 solar masses, which would have been the smallest found for any black hole; smaller than GRO 1655−40, then the smallest known at 6.3 solar masses. However, this claim was subsequently retracted; the more likely mass is 5–10 solar masses.
The binary period of the black hole and its companion is 0.32 days.
See also
Stellar black hole
References
Stellar black holes
Ara (constellation)
Binary stars
K-type main-sequence stars | XTE J1650−500 | [
"Physics",
"Astronomy"
] | 147 | [
"Black holes",
"Stellar black holes",
"Ara (constellation)",
"Unsolved problems in physics",
"Constellations"
] |
17,525,974 | https://en.wikipedia.org/wiki/Nanocell | A nanocell is a drug delivery platform consisting of a polymer-bound chemotherapeutic drug combined with a lipid-bound anti-angiogenesis drug. Nanocells are currently being developed in the lab of Shiladitya Sengupta of MIT.
Theory
Angiogenesis, or the formation of new blood vessels, plays a major role in the development of a tumor. After a tumor has grown to about the size of a cubic millimeter, its core becomes hypoxic, and it begins to release growth factors to recruit new blood vessels that will supply it with oxygen. Inhibiting angiogenesis has been investigated as a means of preventing tumor growth but has not proven to be fully successful, for tumor cells cut off from the blood supply can eventually develop “reactive resistance” to hypoxia. These resistant cancer cells could be killed by chemotherapeutic drugs, but once the vasculature to the tumor has been cut off, there is no way for chemotherapy to be delivered. Nanotechnology offers a way to deliver chemotherapeutic drugs and anti-angiogenic drugs in the same vehicle so that as the blood supply is shut off, chemotherapy is present to prevent any hypoxia-resistant cells from proliferating.
Technology
Labs at MIT are in the process of developing nanocells capable of delivering both types of drugs. Each nanocell is between 120 and 200 nm in diameter and can be thought of as “a balloon within a balloon.” Inside each nanocell is a chemotherapeutic drug covalently bound to a polymer, and on the surface of each cell is a lipid coat containing an anti-angiogenic drug. The technology makes use of the fact that a tumor's blood vessels have pores 600 nm in diameter and are much leakier than normal blood vessels, which have pores only around 50 nm in diameter. The nanocells circulate in the blood, and because of their size, they leak out of blood vessels only in tumors. Once there, the nanocells are degraded by enzymes produced by the tumor. Work remains to be done to win clinical approval for the technology, but results from Sengupta's lab indicate that the nanocells are more effective and less toxic than traditional chemotherapy.
References
Nanocell targets cancer
Nanocell's double hit on cancer
MIT engineers an anti-cancer smart bomb
Sengupta S, Eavarone D, Capila I, Zhao G, Watson N, Kiziltepe T, Sasisekharan R. Temporal targeting of tumour cells and neovasculature with a nanoscale delivery system. Nature. 2005 Jul 28;436(7050):568-72.
External links
Center for Integration of Medicine and Innovative Technology
Drug delivery devices
Dosage forms | Nanocell | [
"Chemistry"
] | 591 | [
"Pharmacology",
"Drug delivery devices"
] |
17,526,233 | https://en.wikipedia.org/wiki/Ballistic%20electron%20emission%20microscopy | Ballistic electron emission microscopy or BEEM is a technique for studying ballistic electron transport through a variety of materials and material interfaces. BEEM is a three terminal scanning tunneling microscopy (STM) technique that was invented in 1988 at the Jet Propulsion Laboratory in Pasadena, California by L. Douglas Bell, Michael H. Hecht, and William J. Kaiser. The most popular interfaces to study are metal–semiconductor Schottky diodes, but metal–insulator–semiconductor systems can be studied as well.
When performing BEEM, electrons are injected from an STM tip into the grounded metal base of a Schottky diode. A small fraction of these electrons travel ballistically through the metal to the metal–semiconductor interface, where they encounter a Schottky barrier. Those electrons with sufficient energy to surmount the Schottky barrier are detected as the BEEM current. The atomic-scale positioning capability of the STM tip gives BEEM nanometer spatial resolution. In addition, the narrow energy distribution of electrons tunneling from the STM tip gives BEEM a high energy resolution (about 0.02 eV).
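The barrier height is usually extracted from how the collector current turns on with tip bias. As an illustration only (not taken from the article), the sketch below fits a hypothetical BEEM spectrum with the commonly used Bell–Kaiser near-threshold approximation, in which the collector current rises roughly quadratically once the tip bias exceeds the Schottky barrier; the amplitude, noise level and barrier value are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def bell_kaiser(v, amplitude, phi_b):
    """Near-threshold Bell-Kaiser law: collector current grows roughly
    quadratically once the tip bias v exceeds the barrier height phi_b."""
    return amplitude * np.clip(v - phi_b, 0.0, None) ** 2

rng = np.random.default_rng(1)
bias = np.linspace(0.5, 1.5, 60)                   # tip bias in volts
true_barrier = 0.80                                # assumed barrier height, eV
current = bell_kaiser(bias, 120.0, true_barrier)   # hypothetical spectrum, pA
current += rng.normal(0.0, 0.5, bias.size)         # measurement noise

(amp, barrier), _ = curve_fit(bell_kaiser, bias, current, p0=[100.0, 0.7])
print(f"fitted Schottky barrier height ~ {barrier:.2f} eV")
```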
References
Scanning probe microscopy
American inventions
Jet Propulsion Laboratory | Ballistic electron emission microscopy | [
"Chemistry",
"Materials_science"
] | 242 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
17,526,284 | https://en.wikipedia.org/wiki/Self-conscious%20emotions | Self-conscious emotions, such as guilt, shame, embarrassment, and pride, are a variety of social emotions that relate to our sense of self and our consciousness of others' reactions to us.
Description
During the second year of life, new emotions begin to emerge when children gain the understanding that they themselves are entities distinct from other people and begin to develop a sense of self. These emotions include:
Shame
Pride
Guilt
Envy
Embarrassment
Self-conscious emotions have been shown to have social benefits, including reinforcing social behaviours and repairing social errors. There is also research suggesting that a lack of self-conscious emotions is a contributing cause of poor behaviour.
They have five distinct features that differentiate them from other emotions:
Require self-awareness and self representation
Emerge later than basic emotions
Facilitate attainment of complex social goals
Do not have distinct universally recognized facial expressions
Cognitively complex
Development
Self-conscious emotions are among the last emotions to develop. Two reasons account for this:
Body language
Emotions such as joy, fear and sadness can all be read reliably from a person’s face alone. However, self-conscious emotions heavily involve the body in addition to the face (Darwin, 1965). This means that when humans are learning self-conscious emotions, they have more cues to attend to, making these emotions harder to grasp.
Self-awareness
Due to the nature of these emotions, they can only begin to form once an individual has the capacity to self-evaluate their own actions. If the individual decides that they have caused a situation to occur, they then must decide if the situation was a success or a failure based on the social norms they have accrued, then attach the appropriate self-conscious feeling (Weiner, 1986). This is a complex cognitive skill, one that takes time to master.
Biological complexity
As stated, self-conscious emotions are complex and harder to learn than basic emotions such as happiness or fear. This premise also has biological backing.
Frontotemporal lobar degeneration
Frontotemporal lobar degeneration (FTLD) is a neurodegenerative disease that selectively attacks the frontal lobe, temporal lobe and amygdala. Patients suffering from FTLD offer information on the biological complexity involved in generating self-conscious emotions. With the use of a startle experiment (where patients and control participants are exposed to an unexpected, loud sound), it has been shown that sufferers of FTLD show and experience the basic negative emotions expected to accompany the startling sounds. However, they show significantly fewer signs of experiencing self-conscious emotions than control groups. This is attributed to an inhibition of embarrassment caused by the damaged brain (Sturm & Rosen, 2006).
The ability to show basic emotions while lacking the ability to perform the more complex self-conscious emotions demonstrates that self-conscious emotions are biologically harder to perform than average emotions. FTLD patients tend to struggle in social situations (Sturm & Rosen, 2006). This is again linked with their inability to perform self-conscious emotions adequately.
Social benefits
Acquiring the ability to perform self-conscious emotions may be relatively difficult, but it does bring benefits, chiefly social harmony and social healing.
Social harmony
Self-conscious emotions are seen to promote social harmony in different ways. The first is their ability to reinforce social norms, in a way very similar to operant conditioning. Performing well in situations while keeping to social norms can elicit pride; this feels good and therefore encourages the behaviour to be repeated. Equally, performing in a situation while not keeping to social norms can leave individuals feeling embarrassed; this feels bad and is generally avoided in the future. An example of this is a study (Brown, 1970) in which participants were shown to choose avoiding feelings of embarrassment over financial gains.
Social healing
Self-conscious emotions enable social healing. When an individual makes a social error, feelings of guilt or embarrassment change not just the person’s mood but also their body language. In this situation the individual gives out non-verbal signs of submission, which are generally more likely to be greeted with forgiveness. This has been shown in a study where actors knocked over a supermarket shelf (Semin & Manstead, 1982): those who acted embarrassed were received more favorably than those who reacted in a neutral fashion.
Levels of embarrassment have been found to be easier to see in female and African-American targets than in male and Caucasian targets (Keltner, 1995). This has been attributed to social learning from previous generations.
Poor behaviour
Initially, self-conscious emotions were looked upon as troublesome and as part of an internal struggle. However, views on this have now changed. There is a strong link between an individual's ability to regulate their behaviour appropriately and problems with their self-conscious emotions. In one study, a school identified a set of boys classified as ‘prone to aggression and delinquent behaviour’. When these boys sat an interactive IQ test, they scored higher on measures of anger than the other boys at the school. They also scored lower on feelings of embarrassment (Keltner, 1995).
Caution should be taken with regard to these studies. While the findings are becoming more robust, the number of different variables involved makes it hard to reach a firm conclusion on whether poor behaviour is caused by these deficiencies. The main difficulty is creating a laboratory environment in which self-conscious emotions would not only occur but could also be adequately measured.
See also
Attention
Developmental psychology
Moral emotions
Psychological repression
Self-consciousness
Self-esteem
Self-pity
Thought suppression
References
Brown, B. R. (1970). Face-saving following experimentally induced embarrassment. Journal of Experimental Social Psychology, 4, 107–122.
Darwin, C. R. (1965). The expression of the emotions in man and animals. Chicago: University of Chicago Press. (Original work published 1872).
Keltner, D. (1995). Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame. Journal of Personality and Social Psychology, 68, 441–454.
Semin, G. R., & Manstead, A. S. R. (1982). The social implications of embarrassment displays and restitution behavior. European Journal of Social Psychology, 12, 367–377.
Sturm, V. E., & Rosen, H. J. (2006). Self-conscious emotion deficits in frontotemporal lobar degeneration. Retrieved from the web, 11 January 2010. http://brain.oxfordjournals.org/cgi/content/full/129/9/2508
Weiner, B. (1986). An attributional theory of motivation and emotion. New York: Springer-Verlag.
Tracy, J. L., & Robins, R. W. (2004). Keeping the self in self-conscious emotions: Further arguments for a theoretical model. Psychological Inquiry, 15(2), 171-177.
Emotion | Self-conscious emotions | [
"Biology"
] | 1,458 | [
"Emotion",
"Behavior",
"Human behavior"
] |
17,526,371 | https://en.wikipedia.org/wiki/Programmable-gain%20amplifier | A programmable-gain amplifier (PGA) is an electronic amplifier (typically based on an operational amplifier) whose gain can be controlled by external digital or analog signals.
The gain can typically be set from less than 1 V/V to over 100 V/V. The external digital control signals are commonly carried over interfaces such as SPI or I²C, and the latest PGAs can also be programmed for offset-voltage trimming and active output filtering. Popular applications for these products include motor control and signal and sensor conditioning.
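As a rough illustration of digital gain control (not any specific device's register map), the sketch below shows how a host might program a PGA gain over SPI; the gain codes and the command byte are hypothetical assumptions, and spi_write stands in for whatever SPI driver the host actually provides.

```python
# Hypothetical register map: nominal gain (V/V) -> 3-bit gain code.
GAIN_CODES = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011,
              16: 0b100, 32: 0b101, 64: 0b110, 128: 0b111}

def set_gain(spi_write, gain):
    """Program a PGA gain over SPI.

    spi_write -- callable that clocks a list of bytes out to the device
                 (a thin wrapper around the host's SPI driver).
    gain      -- desired gain in V/V; must be one of the supported steps.
    """
    if gain not in GAIN_CODES:
        raise ValueError(f"unsupported gain {gain} V/V")
    command = 0x40 | GAIN_CODES[gain]   # assumed 'write gain register' opcode
    spi_write([command])

# Example: record the bytes that would be clocked out for a gain of 16 V/V.
sent = []
set_gain(sent.extend, 16)
print([hex(b) for b in sent])   # ['0x44'] under the assumed register map
```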
References
Electronic amplifiers | Programmable-gain amplifier | [
"Technology"
] | 107 | [
"Electronic amplifiers",
"Amplifiers"
] |
17,527,026 | https://en.wikipedia.org/wiki/Diboron%20tetrafluoride | Diboron tetrafluoride is the inorganic compound with the formula (BF2)2. A colorless gas, the compound has a half-life of days at room temperature. It is the most stable of the diboron tetrahalides, and does not appreciably decompose under standard conditions.
Structure and bonding
Diboron tetrafluoride is a planar molecule with a B-B bond distance of 172 pm. Although it is electron-deficient, the unsaturated boron centers are stabilized by pi-bonding with the terminal fluoride ligands. The compound is isoelectronic with oxalate.
Synthesis and reactions
Diboron tetrafluoride can be formed by treating boron monofluoride with boron trifluoride at low temperatures, taking care not to form higher polymers. Alternatively, diboron tetrachloride can be fluorinated with antimony trifluoride.
Addition of diboron tetrafluoride to Vaska's complex was employed to produce an early example of a transition metal boryl complex:
2B2F4 + IrCl(CO)(PPh3)2 → Ir(BF2)3(CO)(PPh3)2 + ClBF2
Historical literature
References
External links
Diboron tetrafluoride at webelements
Fluorides
Boron compounds
Nonmetal halides
Boron halides | Diboron tetrafluoride | [
"Chemistry"
] | 304 | [
"Fluorides",
"Salts"
] |
17,527,468 | https://en.wikipedia.org/wiki/Transmission-line%20pulse | In electrical engineering, transmission-line pulse (TLP) is a way to study integrated circuit technologies and circuit behavior in the current and time domain of electrostatic discharge (ESD) events. The concept was described shortly after WWII in pp. 175–189 of Pulse Generators, Vol. 5 of the MIT Radiation Lab Series. Also, D. Bradley, J. Higgins, M. Key, and S. Majumdar realized a TLP-based laser-triggered spark gap for kilovolt pulses of accurately variable timing in 1969. For investigation of ESD and electrical-overstress (EOS) effects a measurement system using a TLP generator has been introduced first by T. Maloney and N. Khurana in 1985. Since then, the technique has become indispensable for integrated circuit ESD protection development.
The TLP technique is based on charging a long, floating cable to a pre-determined voltage and discharging it into a Device-Under-Test (DUT). The cable discharge emulates an electrostatic discharge event, and by employing time-domain reflectometry (TDR), the change in DUT impedance can be monitored as a function of time.
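In a typical TDR-based analysis, the quasi-static DUT voltage and current for each pulse are reconstructed from the incident and reflected waveforms using the standard transmission-line relations V_DUT = V_inc + V_refl and I_DUT = (V_inc − V_refl)/Z0. The sketch below illustrates this; the 50 Ω impedance is the usual cable value, and the pulse amplitudes are invented for illustration.

```python
import numpy as np

Z0 = 50.0   # characteristic impedance of the charge line, in ohms

def dut_point(v_incident, v_reflected, z0=Z0):
    """Reconstruct one quasi-static DUT (V, I) point from the averaged
    incident and reflected pulse amplitudes (standard TDR relations)."""
    v_dut = v_incident + v_reflected
    i_dut = (v_incident - v_reflected) / z0
    return v_dut, i_dut

# Hypothetical sweep: incident pulse amplitudes and measured reflections (V).
incident = np.array([5.0, 10.0, 15.0, 20.0])
reflected = np.array([-2.0, -3.5, -4.5, -5.0])   # negative: DUT below 50 ohms

for v_inc, v_ref in zip(incident, reflected):
    v, i = dut_point(v_inc, v_ref)
    print(f"V_DUT = {v:5.1f} V, I_DUT = {i:5.3f} A")
```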
The first commercial TLP system was developed by Barth Electronics in 1990s. Since then, other commercial systems have been developed (e.g., by Thermo Fisher Scientific, Grundtech, ESDEMC Technology, High Power Pulse Instruments, Hanwa, TLPsol).
A subset of TLP, VF-TLP (Very-Fast Transmission-Line Pulsing), has lately gained popularity with its improved resolution and bandwidth for analysis of ephemeral ESD events such as CDM (Charged Device Model) events. Pioneered by academia (University of Illinois) and commercialized by Barth Electronics, VF-TLP has become an important ESD analysis tool for analyzing modern high-speed semiconductor circuits.
TLP Standards
ANSI/ESD STM5.5.1-2016 Electrostatic Discharge Sensitivity Testing – Transmission Line Pulse (TLP) – Component Level
ANSI/ESD SP5.5.2-2007 Electrostatic Discharge Sensitivity Testing - Very Fast Transmission Line Pulse (VF-TLP) - Component Level
IEC 62615:2010 Electrostatic discharge sensitivity testing - Transmission line pulse (TLP) - Component level
See also
Human-body model
References
G. N. Glasoe and J. V. Lebacqz. Pulse Generators, volume 5 of MIT Radiation Laboratory Series. McGraw-Hill, New York, 1948, pp. 175–189.
D. Bradley, J. Higgins, M. Key, and S. Majumdar, "A simple laser-triggered spark gap for kilovolt pulses of accurately variable timing," Opto-Electronics Letters, vol. 1, pp. 62–64, 1969.
External links
Introduction of Transmission Line Pulse (TLP) Testing for ESD Analysis -Device Level
Cable Discharge Event (CDE) Automated Evaluation System Based on TLP Method
Characterizing Touch Panel Sensor ESD Failure with IV-Curve TLP
TVS Failure Level Tests Comparison Between ESD Gun, TLP & HMM
Advanced Frequency Compensation Method for VF-TLP Measurement (up to 10 GHz)
ESD Failure Analysis of PV Module Diodes and TLP Test Methods
Integrated circuits
Electrical breakdown | Transmission-line pulse | [
"Physics",
"Technology",
"Engineering"
] | 691 | [
"Physical phenomena",
"Computer engineering",
"Electrical phenomena",
"Electrical breakdown",
"Integrated circuits"
] |
17,527,733 | https://en.wikipedia.org/wiki/Henri%20Lecoq | Henri Lecoq (18 April 1802 – 4 August 1871) was a French botanist. Charles Darwin mentioned Lecoq in 1859 in the preface of his famous book On the Origin of Species, listing him among believers in the modification of species. Darwin wrote:
The work referenced by Darwin is Lecoq's "Étude de la Géographie Botanique de l’Europe", published in 1854.
A number of plants carry the name of Lecoq in their descriptive names (see IPNI search). In 1829, the botanist de Candolle (author abbreviation DC.) published Lecokia, a monotypic genus of flowering plants belonging to the family Apiaceae, with its name honouring Lecoq.
In addition, a museum in his home town of Clermont-Ferrand, France, is named after him.
References
1871 deaths
1802 births
19th-century French botanists
Proto-evolutionary biologists | Henri Lecoq | [
"Biology"
] | 174 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
17,527,870 | https://en.wikipedia.org/wiki/Columbitech | Columbitech, founded in 2000, provides wireless security to secure mobile devices, with support for WLAN and public networks, including 3G, 4G and WiMAX. The company is headquartered in Stockholm, Sweden with offices in New York City.
Columbitech Mobile VPN
The Columbitech mobile VPN provides remote network access to field mobility users, corporate WLAN users and remote workers – mobilizing the enterprise. The solution is encrypted on standards-based Wireless Transport Layer Security (WTLS) and holds a FIPS 140-2 certification.
The technology is utilized in the retail industry to meet PCI DSS requirements, and in other industries where mobile devices are used over wireless networks.
References
Computer security companies
Information technology companies of Sweden
Mobile technology companies
Companies based in Stockholm
Computer companies established in 2000
Electronics companies established in 2000 | Columbitech | [
"Technology"
] | 176 | [
"Mobile technology companies"
] |
17,527,994 | https://en.wikipedia.org/wiki/William%20Horrocks | Brigadier General Sir William Heaton Horrocks (25 August 1859 – 26 January 1941) was an officer of the British Army remembered chiefly for confirming Sir David Bruce's theory that Malta fever was spread through goat's milk. He also contributed to making water supplies safe, developing a simple method of testing and purifying water in the field. Because of his work, he became the first Director of Hygiene at the War Office in 1919.
Early life and career
William Heaton Horrocks was the son of William Holden Horrocks of Bolton. Horrocks studied for his M.B. at Owen's College and passed his first M.B. examination in 1881. He received a Third Class Honours pass in Anatomy, and a Second Class in Physiology and Histology.
Previously a Surgeon on probation, Horrocks was promoted to Surgeon (the equivalent of Captain) on 5 February 1887. While serving in India, Horrocks married Minna Moore (died 1921), the daughter of the Reverend J.C. Moore of Connor, County Antrim on 27 September 1894 at Christ Church, Mussoorie. Together they had one son and one daughter. His son Brian also joined the British Army, and became a leading corps commander during the Second World War.
Horrocks was promoted from Captain to Major on 5 February 1899.
Malta fever
In 1904 Horrocks was appointed as a member of the Royal Society's Mediterranean Fever Commission, to investigate the highly contagious disease Malta fever, which was prevalent in the British colony of Malta. Identified by Sir David Bruce in 1887, Malta fever was characterised by a low mortality rate but was of indefinite duration. It was accompanied by profuse perspiration, pain and occasional swelling of the joints. In 1905 Sir Themistocles Zammit infected a goat with the bacterium Micrococcus melitensis, which then caught Malta fever. Horrocks was the first person to find the bacterium in goat's milk, thus identifying the method of transmission.
In attempting to settle the matter of who was responsible for the discovery, Bruce (who had served as chairman of the Commission) wrote to The Times newspaper:
Horrocks afterwards served as sanitary officer at the British colony of Gibraltar, where he noted that the incidence of Malta fever practically disappeared with the removal of Maltese goats from that place.
Later career
Horrocks was promoted to Lieutenant-Colonel on 19 May 1911, then in July was promoted to Brevet Colonel, dated 20 May, in recognition of his services. In 1915, Horrocks was honoured by becoming an Honorary Surgeon to King George V, commencing 6 November 1914 and holding the appointment until 26 December 1917.
Horrocks also developed the "Horrocks Box", following his research into contamination of water. This device used sand filtration and chlorine sterilisation plants to provide a portable means of decontaminating water supplies. It proved of particular use during the First World War, when it kept the Allied forces largely free of water-borne disease. In addition to this he also developed means of removing poisons from water and assisted in the design of the first gas mask.
For his services in the war, Horrocks was honoured with appointments to a number of orders. On 24 January 1917 he was appointed a Companion of the Bath. On 3 June 1918 (in the King's Birthday Honours) Horrocks was appointed a Knight Commander of the Order of Saint Michael and Saint George. He became the first Director of Hygiene at the War Office on 1 June 1919 in recognition of his expertise in military hygiene; this last period of active duty came to an end on 1 November 1919, and he relinquished his temporary rank of Brigadier-General.
Horrocks died on 26 January 1941 at the age of eighty-one, at Hersham in Surrey. His funeral took place at St. Peter's Church, Hersham on 31 January with his son and daughter, among others, present.
Notes
References
Published works
(Report II by Major W. H. Horrocks) pdf at militaryhealth.bmj.com
Selected articles
1859 births
1941 deaths
Hygienists
Royal Army Medical Corps officers
Knights Commander of the Order of St Michael and St George
Companions of the Order of the Bath
People from Bolton
British Army personnel of World War I
Water filters
People from Hersham
British Army generals
Military personnel from the Metropolitan Borough of Bolton
British Army brigadiers
19th-century British Army personnel | William Horrocks | [
"Chemistry"
] | 903 | [
"Water treatment",
"Water filters",
"Filters"
] |
17,528,027 | https://en.wikipedia.org/wiki/Glossary%20of%20backup%20terms | The subject of computer backups is rife with jargon and highly specialized terminology. This page is a glossary of backup terms that aims to clarify the meaning of such jargon and terminology.
Terms and definitions
3-2-1 Rule (or 3-2-1 Backup Strategy)
The idea that a minimal backup solution should include three copies of the data, including two local copies and one remote copy.
Backup policy
an organization's procedures and rules for ensuring that adequate numbers and types of backups are made, including suitably frequent testing of the process for restoring the original production system from the backup copies.
Backup rotation scheme
a method for effectively backing up data where multiple media are systematically moved from storage to usage in the backup process and back to storage. There are several different schemes. Each takes a different approach to balance the need for a long retention period with frequently backing up changes. Some schemes are more complicated than others.
Backup site
a place where business can continue after a data loss event. Such a site may have ready access to the backups or possibly even a continuously updated mirror.
Backup software
computer software applications that are used for performing the backing up of data, i.e., the systematic generation of backup copies. See also: List of backup software.
Backup window
the period of time that a system is available to perform a backup procedure. Backup procedures can have detrimental effects to system and network performance, sometimes requiring the primary use of the system to be suspended. These effects can be mitigated by arranging a backup window with the users or owners of the system(s).
Copy backup
backs up the selected files, but does not mark the files as backed up (i.e., does not reset the archive bit). This option is found in the backup utility of Windows Server 2003.
Daily backup
incremental backup of files that have changed today
Data salvaging/recovery
the process of recovering data from storage devices when the normal operational methods are impossible. This process is typically performed by specialists in controlled environments with special tools. For example, a crashed hard disk may still have data on it even though it doesn't work properly. A data salvage specialist might be able to recover much of the original data by opening it up in a clean room and tinkering with the internal parts.
Differential backup
a cumulative backup of all changes made since the last full backup. The advantage to this is the quicker recovery time, requiring only a full backup and the latest differential backup to restore the system. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a majority of the data has been changed.
Disaster recovery
the process of recovering after a business disaster and restoring or recreating data. One of the main purposes of creating backups is to facilitate a successful disaster recovery. For maximum effectiveness, this process should be planned in advance and audited.
Disk cloning
the process of copying the contents of one computer hard disk to another disk or to an image file (see disk image below) for later recovery.
Disk image
single file or storage device containing the complete contents and structure representing a data storage medium or device, such as a hard drive, tape drive, floppy disk, CD/DVD/BD, or USB flash drive.
Full backup
a backup of all (selected) files on the system. In contrast to a drive image, this does not include the file allocation tables, partition structure and boot sectors.
Hot backup
a backup of a database that is still running, and so changes may be made to the data while it is being backed up. Some database engines keep a record of all entries changed, including the complete new value. This can be used to resolve changes made during the backup.
Incremental backup
a backup that only contains the files that have changed since the most recent backup (either full or incremental). The advantage of this is quicker backup times, as only changed files need to be saved. The disadvantage is longer recovery times, as the latest full backup and all incremental backups up to the date of data loss need to be restored (see the restore-chain sketch following this glossary).
Media spanning
sometimes a backup job is larger than a single destination storage medium. In this case, the job must be broken up into fragments that can be distributed across multiple storage media.
Multiplexing
the practice of combining multiple backup data streams into a single stream that can be written to a single storage device. For example, backing up 4 PCs to a single tape drive at once.
Multistreaming
the practice of creating multiple backup data streams from a single system to multiple storage devices. For example, backing up a single database to 4 tape drives at once.
Normal backup
full backup used by Windows Server 2003.
Near store
provisionally backing up data to a local staging backup device, possibly for later archival backup to a remote store device.
Open file backup
the ability to back up a file while it is in use by another application. See File locking.
Remote store
backing up data to an offsite permanent backup facility, either directly from the live data source or else from an intermediate near store device.
Restore time
the amount of time required to bring a desired data set back from the backup media.
Retention time
the amount of time in which a given set of data will remain available for restore. Some backup products rely on daily copies of data and measure retention in terms of days. Others retain a number of copies of data changes regardless of the amount of time.
Site-to-site backup
backup, over the internet, to an offsite location under the user's control. Similar to remote backup except that the owner of the data maintains control of the storage location.
Synthetic backup
a restorable backup image that is synthesized on the backup server from a previous full backup and all the incremental backups since then. It is equivalent to what a full backup would be if it were taken at the time of the last incremental backup.
Tape library
a storage device which contains tape drives, slots to hold tape cartridges, a barcode reader to identify tape cartridges and an automated method for physically moving tapes within the device. These devices can store immense amounts of data.
Trusted paper key
a machine-readable print of a cryptographic key.
Virtual Tape Library (VTL)
a storage device that appears to be a tape library to backup software, but actually stores data by some other means. A VTL can be configured as a temporary storage location before data is actually sent to real tapes or it can be the final storage location itself.
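To make the difference between the incremental and differential restore chains described above concrete, here is a minimal sketch (an illustration only, not the behaviour of any particular backup product); the backup labels are invented.

```python
def restore_chain(backups, strategy):
    """Return which backups must be restored, oldest first.

    backups  -- labels ordered oldest to newest; index 0 is the full backup,
                the rest are the incrementals or differentials taken since.
    strategy -- "incremental" or "differential".
    """
    full, later = backups[0], backups[1:]
    if strategy == "incremental":
        return [full] + later          # the full plus every increment since
    if strategy == "differential":
        return [full] + later[-1:]     # the full plus only the latest differential
    raise ValueError(f"unknown strategy: {strategy}")

week = ["full_sunday", "monday", "tuesday", "wednesday", "thursday"]
print(restore_chain(week, "incremental"))   # full_sunday through thursday (5 sets)
print(restore_chain(week, "differential"))  # ['full_sunday', 'thursday']
```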
See also
Computer data storage
Data proliferation
File synchronization
Information repository
Disaster recovery and business continuity auditing
Digital preservation
Reversible computing
References
External links
Online Backup Glossary
Backup
Data security
Glossaries of computers
Tape-based computer storage
Wikipedia glossaries using description lists | Glossary of backup terms | [
"Technology",
"Engineering"
] | 1,367 | [
"Cybersecurity engineering",
"Computing terminology",
"Reliability engineering",
"Backup",
"Glossaries of computers",
"Data security"
] |
17,530,209 | https://en.wikipedia.org/wiki/Soviet%20parallel%20cinema | Soviet parallel cinema is a genre of film and underground cinematic movement that occurred in the Soviet Union from the 1970s onwards. The term parallel cinema (known as parallel’noe kino) was first associated with samizdat films made outside the official Soviet state system. Films from the parallel movement are considered avant-garde, non-conventionalist and cinematographically subversive.
The two main groups and founders of the parallel cinema movement are Evgenii Iufit and the Necrorealists in Leningrad (now known as Saint Petersburg), and the circle of Aleinikov brothers in Moscow. These two groups achieved phenomenal fame in Russia in the 1980s – and during the dissolution of the Soviet Union – for their involvement in the parallel cinema movement and ‘late socialism’.
Overview
Origin
Soviet parallel cinema is an offshoot of the film movement that ran through the 1960s and 1970s in India called New Indian Cinema – alternatively known as the Indian New Wave or parallel cinema. Like its Soviet counterpart, it maintained a focus on offbeat productions that dealt with real-world representations of society, including socio-cultural and political contexts, and it strongly espoused the themes, methods and workings of Neorealism.
Parallel cinema grew out of the amateur film studios and workshops that were prevalent in Soviet culture at the time. In 1957, the Soviet state established funding to support professional filmmakers and subsequently amateur film workshops, which reinforced the state's regulation and ideological control of the industry. The Soviet state thus ran two systems for filmmaking: professional cinema, controlled by the State Committee for Cinematography (Goskino), and the amateur film studios. Films produced outside these official systems are classified as ‘parallel cinema’. The films produced in this way were outside the state's control and, as such, prohibited. Parallel cinematographers, such as Evgenii Iufit and the Aleinikov brothers, used the studio facilities of amateur film clubs and workshops to produce their films. The Leningrad amateur film club – used by Evgenii Iufit and others – still exists under the name the St. Petersburg Club of Film and Video Amateurs. As political controls diminished in the 1980s with the shifts of power, the amateur film workshops were overrun with parallel cinema production. The parallel movement marks the first large influx of creative expression in Soviet culture since the period following the Russian Revolution in the 1920s.
Background
Films of the Soviet parallel cinema movement are classified under the umbrella of renegade self-published art and literature of the Soviet state called samizdat. Samizdat was forbidden and unavailable for distribution due to its rebellious ideals and ideologies that ran counter to the Soviet Union. All forms of samizdat displayed defiance of government-regulated content, embodying political opposition and an open discourse that aimed to deconstruct the Soviet empire. In the term's broad sense, parallel cinema, or parallel’noe kino, is also described as cinematic samizdat.
The parallel cinema movement in the Soviet states was accompanied by the introduction of a cinematic journal, or samizdat, called Cine-Fantom. Cine-Fantom was founded by Gleb Aleinikov and Igor Aleinikov (known as the Aleinikov brothers) in the 1980s and has since existed as a film festival, a meeting house and, currently, a theatre. Cine-Fantom was a hand-made art journal devoted to publishing on the issues and developments of cinema – particularly in relation to parallel cinema. The journal was originally created and published only for friends of the two brothers; however, it gained popularity in the latter part of the 1980s, circulating widely across Russia, and became well known for its trademark blue cover. The journal was the first to label the changes in Soviet cinema with the term "parallel cinema", which appeared in one of its issues after being taken from an encyclopedia. During the Soviet parallel cinema movement, filmmakers began producing films independently of the official Soviet production system, and Cine-Fantom was a centre-point for these filmmakers and the avant-garde film industry.
While the state-defined production of art was grounded in the doctrine of socialist realism, the parallel cinema movement challenged that construction with subversive potency. The two most infamous groups and founders of the parallel cinema movement are Evgenii Iufit and the necrorealists in Leningrad, and the circle of the Aleinikov brothers in Moscow. Evgenii Iufit founded the cult sub-genre called necrorealism, which was adopted by followers of the movement throughout Leningrad. Necrorealism is understood as a parody of socialist realism: through wordplay and pun, the term implies that the reality of the regime was death. The term also signalled anticipation of the end of the radical Russian Social Democratic Labor Party, or the Bolsheviks. Films adopting this ideology primarily focus on dark themes, black humour, monochromatic images and notions of the absurd. The necrorealists focused on challenging the ways death was represented throughout Soviet culture, specifically through taboo aspects such as extreme violence, suicide or body decomposition. At the time, Leningrad hosted a multitude of underground groups, with the necrorealists being the most infamous and prevalent. The necrorealists were established outside the amateur film movement; however, they were not completely detached from it, as individual filmmakers created their films at trade union studios and the Leningrad amateur film club.
Historical significance
In the early 1980s and before, official filmmaking was under the control of the Soviet regime while the parallel cinema world remained underground. Goskino – the state committee for cinematography – monopolised the control, regulation and censorship of the industry and of film production. This committee determined which images were deemed appropriate and acceptable for the people in terms of political and social ideologies. As such, at this stage Soviet parallel cinema remained an unofficial and underground renegade film club that challenged the constraints of the official system.
Soviet parallel cinema emerged from the underground to the surface naturally during the shift in political power and historical changes – specifically during perestroika. Perestroika was a program established by the Soviet leader Mikhail Gorbachev in the mid-1980s. After the death of his predecessor, Konstantin Chernenko, in 1985, Gorbachev rose from second in the Soviet hierarchy to head the Communist Party's ruling body. Perestroika is the program he instituted to restructure the Soviet economic and political landscape; the term means ‘restructuring’ in Russian. The reforms of this program allowed the suppression of artistic expression to lessen and the underground to surface. Furthermore, the political shifts left the state bureaucracy unable to control the mass of institutions and enforce policies. The leniency in the state's control fostered the movement of the underground world of parallel cinema into the open. In this time, the control of enforcers such as the KGB weakened and, as such, new trends in culture and art emerged. Parallel cinema quickly moved to the forefront of cinematic production. The Gorbachev era also marks a shift in the nature of the movement: prior to the reform, artists were suppressed, fighting and dissenting in the form of Soviet parallel film; the lessened control allowed the movement to shift from rebellion to independent filmmaking.
Key figures
Evgenii Iufit
Evgenii Iufit (also known as Yevgeny Yufit) was a Russian artist and filmmaker known for founding necrorealism – a driving ideology of parallel cinema. He was born in 1961 in Saint Petersburg and started his career in the film industry by the early 1980s. He died on 13 December 2016. Iufit developed his interest in art and cinema as a student at a Leningrad technical institute. Due to the overarching control by the state's cinema organisation, Goskino, he founded the alternative style of necrorealism within the underground movement. Iufit's scope of art included paintings, photography and films.
Iufit's avant-garde parallel films are thematically characterised by homoeroticism and the hybridisation of black humour and slapstick comedy. In early 1986 – the beginning of Perestroika – he founded the first independent art-house film studios in Soviet Russia called ‘The Mzhalala Film’. The production company focussed on the production of experimental shorts from artists, writers and directors that practiced the radical aesthetics of parallel cinema. His films include Sawyer (1984), Spring (1987), Courage (1988), and Boar Suicide (1988).
Gleb Aleinikov and Igor Aleinikov
Gleb Aleinikov and younger brother Igor Aleinikov (known as Aleinikov brothers) are renowned Russian screenwriters, directors and film theorists. Alongside Evgenii Iufit, the brothers are founders and prolific figures of the parallel cinema movement in the Soviet Union. Prior to their shift into the film industry, Gleb Aleinikov completed his studies at the Moscow Institute of Construction Engineering in 1988 and Igor at the Moscow Institute of Engineering and Physics in 1984. During this period of study, the brothers founded, promoted and developed the underground Parallel cinema movement.
In the mid-1980s, alongside Boris Yukhananov, the brothers formed the cinematic samizdat Cine-Fantom and acted as its primary editors from 1985 to 1990. The Aleinikov brothers’ films are subversive and provocative – in keeping with the nature of the parallel movement – as a means of attacking the constraints of the mainstream state-run Soviet system. Igor Aleinikov died in a plane crash in March 1994. After his death, Gleb Aleinikov published his late brother’s diaries in 1999, which consisted of film ideas, plans, and anecdotes of the underground movement. Gleb Aleinikov currently runs the second-largest TV station in Russia.
Some of their films include The Cruel Illness of Males (1987), Awaiting de Bil (1990), Tractor Drivers II (1992) – which was their only feature-length film - M.E (1986), Tractors (1987) and Boris and Gleb (1988).
Boris Yukhananov
Boris Yukhananov is a prominent Russian director, educator and theorist, best known as a founding figure of and strong contributor to the Soviet parallel cinema movement. He has contributed a broad range of art, including cinema, photography, theatre and writing. Alongside the Aleinikov brothers, Yukhananov stood outside the constraints of Soviet forces, contributing greatly to the samizdat Cine-Fantom. He joined the editorial board of Cine-Fantom and wrote theoretical articles about video and art.
In 1986 he created the very popular video novel ‘The Mad Prince’. ‘The Mad Prince’ was a cycle of films shot with the only known VHS camera in Moscow and Leningrad. From the material, five one-hour-long films have been edited. This is a key collection of films that embodied the notions of the parallel cinema movement. Yukhananov's films are immersed in the context of the renegade ‘youth culture’ that developed throughout the underground at the time. Through ‘The Mad Prince’, Yukhananov pioneered fatal editing. This concept of fatal editing (known as fatal’nyi montazh) was widely used in the production of parallel films throughout the era. It is a montage technique in which old shots are irreversibly replaced with new ones, a consequence of how video cameras record.
List of notable films
The Mad Prince (1986)
Sawyer (1984)
Metastases (1984)
M.E (1986)
A Revolutionary Sketch (1987)
Tractors (1987)
I'm Cold, So What?/ I'm Frigid but it Doesn't Matter (1987)
The Severe Illness of Men (1987)
Supporter of Olf (1987)
Spring (1987)
Boris and Gleb (1988)
Courage (1988)
Boar Suicide (1988)
Dreams (1988)
Postpolitical Cinema (1988)
Someone Has Been Here (1989)
War and Peace (1989)
Awaiting de Bil (1990)
Tractor Drivers II (1992)
The Wooden Room (1995)
Identifying characteristics
Cinematography
The parallel movement directed filmmakers into a regressive and rebellious mode of cinematography that rejected traditional conventions. After a long run of conservatism resulting from socialist realism, parallel films marked a shift towards more experimental and stylistically free filmmaking – much like the films made after the Russian Revolution in the 1920s. Due to the lack of money and the underground nature of production, filmmakers improvised, using crude equipment and the expression of the physical human body to create meaning in their films. They produced cheap 8mm and 16mm shorts that violated the technical standards and rules of cinematic narration permitted by the government regime. The unity in their means and motivations formed this new genre and age in Soviet film.
Films of this era exhibit experimental and obscure forms of montage that do not abide by the state-approved regulations placed on all aspects of filmmaking. In official Goskino productions, montage was replaced by a representational style that adhered to the official Soviet policy that a work's artistic quality derives from its content. While bureaucratic pressure drove out the experimental montage style, parallel filmmakers re-adopted its autonomy and rebelliousness. The films of this movement had a primitive, low-budget style that acted as documentation of the times rather than as narrative film. They feature sporadic cuts, non-linearity, lack of narrative and Boris Yukhananov's fatal editing.
Themes and motifs
There is no single overarching theme to parallel cinema other than that the films embody anti-establishment notions. The genre was defined by its stark subversion of the state-approved socialist realist imagery of Russian films. As the Soviet Union collapsed and the threat from authorities such as the KGB diminished, the filmmakers unleashed their repressed creativity, resulting in films depicting alcohol, profanity, violence and surrealism. In their rebellion, filmmakers of the time aimed to depict the harsh truth of the Soviet lifestyle, social landscape and political context. The directors and films of the times were united by the sense of aesthetic revenge that overpowered the movement's productions.
Soviet parallel films explore notions of sexuality, parody, death and anti-utopia in the perestroika era. As filmmakers worked underground and skirted existing Soviet regulation and censorship, their films included social satire, cynical commentary on Soviet life and fantasy elements. The depiction of such things was an implicit affront to state-approved imagery and Soviet conventions. Films of the era are categorised as dark, profane and confronting – commonly compared to film noir. The films derived from the parallel cinema era embody the Russian concept of chernukha (roughly "black stuff"), a perestroika phenomenon of the late 1980s that describes the tendency toward unrelenting pessimism in the arts and mass media. Through vivid depictions of prostitutes and gangsters, this film ideology represented graphic violence and misery in Soviet life. While the motif of death and destruction carried throughout the movement, the necrorealist parallel filmmakers – such as Evgenii Iufit – particularly focussed on exploring the liminal state between life and death. Their films featured apocalyptic landscapes with zombies committing excessive cruelty, murder, sex and homosexual violence. Homoeroticism played a large role in the films produced. As well as this, many films featured a dystopian world of inhuman creatures committing heinous acts as a means of showing a Soviet reality.
See also
Cinema of the Soviet Union
Parallel cinema
Film genre
Dissolution of Soviet Union
Yevgenii Yufit
Boris Yukhananov
Social realism
Perestroika
Samizdat
References
Bibliography
ACLA. 2021. New Perspectives on the Indian New Wave: Fifty Years On. https://www.acla.org/new-perspectives-indian-new-wave-fifty-years-i.
Art Forum. 2016. Eugene Yufit (1961–2016) . https://www.artforum.com/news/eugene-yufit-1961-2016-65640.
Berry, Ellen E., and Anesa Miller-Pogacar. 1996. "A Shock Therapy of the Social Consciousness: The Nature and Cultural Function of Russian Necrorealism." Cultural Critique (University of Minnesota Press) 34: 185–203.
Beumers, Brigit, and Eugénie Zvonkine. 2017. "Introduction: Re-construction, or perestroika Re-visioning, re-making, re-framing." In Ruptures and Continuities in Soviet/Russian Cinema, by Brigit Beumers and Eugénie Zvonkine, 1–11. London: Routledge.
Bordwell, David. 1972. "The Idea of Montage in Soviet Art and Film." Cinema Journal 11 (2): 9–17.
Boym, Svetlana. 1995. "Post-Soviet Cinematic Nostalgia: From "Elite Cinema" to Soap Opera." Discourse 17 (3): 75-84.
Brashinski, Michael, and Andrew Horton. 1996. "Russian Critics on the Cinema of Glasnost." Film Quarterly 49 (4): 58–59.
Britannica. 2020. Perestroika. https://www.britannica.com/topic/perestroika-Soviet-government-policy.
EEFB. 2020. Editorial. https://eefb.org/volumes/vol-109-november-2020/editorial-109/.
Gooding, John. 1992. "Perestroika as Revolution from within: An Interpretation." The Russian Review 51 (1): 36–57.
Hansgen, Sabine. 2019. "The Repetition of History." In Performance Cinema Sound Perspectives and Retrospectives in Central and Eastern Europe, by Alfrun Kliems, Tomas Glanc and Zornitza Kazalarska, 67–78.
Iles, Andrew. 2003. "Necrorealism: a Russian death experience." BMJ. 327.
Ioffe, Dennis, and Eugenie Zvonkine. 2014. Russian pathographical cinema: the films of Evgenii Iufit. https://lib.ugent.be/en/catalog/pug01:5782871.
Isakava, Volha. 2017. "Reality excess Chernukha cinema in the late 1980s." In Ruptures and Continuities in Soviet/Russian Cinema, by Brigit Beumers and Eugénie Zvonkine, 147–165. London: Routledge. https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/chernukha.
Kepley, Vance. 1996. "The First "Perestroika": Soviet Cinema under the First Five-Year Plan." Cinema Journal (University of Texas Press) 35 (4): 31-53 .
Komaromi, Ann. 2004. "The Material Existence of Soviet Samizdat." Slavic Review (Cambridge University Press) 63 (3): 597–618.
Law, Alma (1990). "Red Fish in America". Soviet and East European Performance. 10 (2): 41–42.
Moller, Olaf. 2007. "Return of the Living Dead." Film Comment 43 (1): 13–14.
Petrik, Gordei. 2020. Your Head in Your Hands. https://eefb.org/retrospectives/boris-yukhananovs-video-novel-the-mad-prince-sumashedshiy-prints-1986/.
Rojavin, Marina, and Tim Harte. 2021. "Introduction." In Soviet Films of the 1970s and Early 1980s Conformity and Non-Conformity Amidst Stagnation Decay, by Marina Rojavin and Tim Harte. London and New York: Routledge.
Rollberg, Peter. 2016. "The Dictionary." In Historical Dictionary of Russian and Soviet Cinema, by Peter Rollberg, 36–37. Rowman and Littlefield.
Shterianova, Magdalena. 2020. Interview with Boris Yukhananov. https://eefb.org/country/russia/interview-with-boris-yukhananov/.
VBS Staff. 2010. 'Bizarre' Russian film genre born of decades of creative suppression. http://edition.cnn.com/2010/WORLD/europe/04/13/vbs.russian.parallel.cinema/index.html.
VICE Staff. 2010. Russian Parallel Cinema. https://www.vice.com/en/article/kwz5yz/russian-parallel-cinema-part-1-of-3.
Vinogradova, Maria. 2012. "Between the state and the kino: Amateur film workshops in the Soviet Union." Studies in European Cinema (Routledge) 8 (3): 211–225.
Vinogradova, Maria. 2016. "Scientists, punks, engineers and gurus: Soviet experimental film culture in the 1960s–1980s." Studies in Eastern European Cinema 7 (1): 39–52.
Westwell, Guy, and Annette Kuhn. 2020. A Dictionary of Film Studies. https://www-oxfordreference-com.ezproxy.library.sydney.edu.au/view/10.1093/acref/9780198832096.001.0001/acref-9780198832096-e-0476.
Yukhananov, Boris. 1989. Fatal Montage. February . https://borisyukhananov.com/archive/item.htm?id=12532.
Yukhananov, Boris. 2017. "Perestroika and Parallel Cinema." In Ruptures and Continuities in Soviet/Russian Cinema, by Boris Yukhananov and Brigit Beumers. Routledge.
Yurchak, Alexei. 2008. "Suspending the Political: Late Soviet Artistic Experiments on the Margins of the State." Poetics Today 29 (4): 713–733.
Movements in cinema
Film genres
Cinema of the Soviet Union
European films
Soviet historical films
Soviet historical musical films
Soviet historical drama films
Soviet historical comedy films
Soviet historical action films
Soviet historical adventure films
Russian film-related lists
Film noir
Violence
1970s in film
1980s in film
1990s in film | Soviet parallel cinema | [
"Biology"
] | 4,645 | [
"Behavior",
"Aggression",
"Human behavior",
"Violence"
] |
17,530,866 | https://en.wikipedia.org/wiki/Ecological%20study | In epidemiology, ecological studies are used to understand the relationship between outcome and exposure at a population level, where 'population' represents a group of individuals with a shared characteristic such as geography, ethnicity, or socio-economic status or employment. What differentiates ecological studies from other studies is that the unit of analysis is the group; therefore, inferences cannot be made about individual study participants. On the other hand, details of outcome and exposure can be generalized to the population being studied. Examples of such studies include investigating associations between units of grouped data, such as electoral wards, regions, or even whole countries.
Study Design
Generally, three different designs can be used to conduct ecological studies depending on the situation. Such studies may compare populations or groups using a multiple-group design, periods of time using a time-trend design, or groups and time using a mixed design.
Notable examples
Cholera study
The study by John Snow regarding a cholera outbreak in London is considered the first ecological study to solve a health issue. He used a map of deaths from cholera to determine that the source of the cholera was a pump on Broad Street. He had the pump handle removed in 1854 and people stopped dying there. It was only when Robert Koch discovered bacteria years later that the mechanism of cholera transmission was understood.
Diet and cancer
Dietary risk factors for cancer have also been studied using both geographical and temporal ecological studies. Multi-country ecological studies of cancer incidence and mortality rates with respect to national diets have shown that some dietary factors such as animal products (meat, milk, fish and eggs), added sweeteners/sugar, and some fats appear to be risk factors for many types of cancer, while cereals/grains and vegetable products as a whole appear to be risk reduction factors for many types of cancer. Temporal changes in Japan in the types of cancer common in Western developed countries have been linked to the nutrition transition to the Western diet.
UV radiation and cancer
An important advancement in the understanding of risk-modifying factors for cancer was made by examining maps of cancer mortality rates. The map of colon cancer mortality rates in the United States was used by the brothers Cedric and Frank C. Garland to propose the hypothesis that solar ultraviolet B (UVB) radiation, through vitamin D production, reduced the risk of cancer (the UVB-vitamin D-cancer hypothesis). Since then many ecological studies have been performed relating the reduction of incidence or mortality rates of over 20 types of cancer to higher solar UVB doses.
Diet and Alzheimer's
Links between diet and Alzheimer's disease have been studied using both geographical and temporal ecological studies. The first paper linking diet to risk of Alzheimer's disease was a multi-country ecological study published in 1997. It used prevalence of Alzheimer's disease in 11 countries along with dietary supply factors, finding that total fat and total energy (caloric) supply were strongly correlated with prevalence, while fish and cereals/grains were inversely correlated (i.e., protective). Diet is now considered an important risk-modifying factor for Alzheimer's disease. Recently it was reported that the rapid rise of Alzheimer's disease in Japan between 1985 and 2007 was likely due to the nutrition transition from the traditional Japanese diet to the Western diet.
UV radiation and influenza
Another example of the use of temporal ecological studies relates to influenza. John Cannell and associates hypothesized that the seasonality of influenza was largely driven by seasonal variations in solar UVB doses and calcidiol levels. A randomized controlled trial involving Japanese school children found that taking 1000 IU per day vitamin D3 reduced the risk of type A influenza by two-thirds.
Advantages and drawbacks
Ecological studies are particularly useful for generating hypotheses since they can use existing data sets and rapidly test the hypothesis. The advantages of the ecological studies include the large number of people that can be included in the study and the large number of risk-modifying factors that can be examined.
The term "ecological fallacy" means that risk-associations apparent between different groups of people may not accurately reflect the true association between individuals within those groups. Ecological studies should include as many known risk-modifying factors for any outcome as possible, adding others if warranted. Then the results should be evaluated by other methods, using, for example, Hill's criteria for causality in a biological system.
References
Epidemiology
Design of experiments | Ecological study | [
"Environmental_science"
] | 889 | [
"Epidemiology",
"Environmental social science"
] |
17,530,988 | https://en.wikipedia.org/wiki/History%20of%20the%20web%20browser | A web browser is a software application for retrieving, presenting and traversing information resources on the World Wide Web. It further provides for the capture or input of information which may be returned to the presenting system, then stored or processed as necessary. The method of accessing a particular page or content is achieved by entering its address, known as a Uniform Resource Identifier or URI. This may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users easily to navigate their browsers to related resources.
A web browser can also be defined as an application software or program designed to enable users to access, retrieve and view documents and other resources on the Internet.
Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s, and following these, Tim Berners-Lee is credited with developing, in 1990, both the first web server and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon developed, with Marc Andreessen's 1993 Mosaic (later Netscape) being particularly easy to use and install, and often credited with sparking the internet boom of the 1990s. Today, the major web browsers are Chrome, Safari, Firefox, Opera, and Edge.
The explosion in popularity of the Web was triggered in September 1993 by NCSA Mosaic, a graphical browser which eventually ran on several popular office and home computers. This was the first web browser aiming to bring multimedia content to non-technical users, and it therefore included images and text on the same page, unlike previous browser designs. Its co-creator, Marc Andreessen, went on to establish the company that, in 1994, released Netscape Navigator, which led to one of the early browser wars when it ended up in a competition for dominance (which it lost) with Microsoft's Internet Explorer for Windows.
Precursors
In 1984, expanding on ideas from futurist Ted Nelson, Neil Larson's commercial DOS MaxThink outline program added angle-bracket hypertext jumps (adopted by later web browsers) to and from ASCII, batch, and other MaxThink files up to 32 levels deep. In 1986, he released his DOS Houdini knowledge network program, which supported 2,500 topics cross-connected with 7,500 links in each file, along with hypertext links among unlimited numbers of external ASCII, batch, and other Houdini files. These capabilities were included in his then-popular shareware DOS file browser programs HyperRez (memory resident) and PC Hypertext (which also added jumps to programs, editors, graphic files containing hot-spot jumps, and cross-linked thesaurus/glossary files). These programs introduced many people to the browser concept, and 20 years later Google still lists 3,000,000 references to PC Hypertext. In 1989, Larson created HyperBBS and HyperLan, which both allow multiple users to create and edit topics and jumps for information and knowledge annealing, a concept that columnist John C. Dvorak says pre-dated wikis by many years.
From 1987 on, Neil Larson also created TransText (a hypertext word processor) and many utilities for rapidly building large-scale knowledge systems. In 1989, his software helped produce, for one of the big eight accounting firms, a comprehensive knowledge system (an integrated litigation knowledge system) that integrated all accounting laws and regulations into a CD-ROM containing 50,000 files with 200,000 hypertext jumps. Additionally, the development history of Lynx (a very early web browser) notes that the project's origin was based on the browser concepts from Neil Larson and MaxThink. In 1989, he declined to join the Mosaic browser team, preferring knowledge/wisdom creation over distributing information ... a problem he says is still not solved by today's internet.
Another early browser, Silversmith, was created by John Bottoms in 1986. The browser, based on SGML tags, used a tag set from the Electronic Document Project of the AAP with minor modifications and was sold to a number of early adopters. At the time, SGML was used exclusively for the formatting of printed documents. The use of SGML for electronically displayed documents signaled a shift in electronic publishing and was met with considerable resistance. Silversmith included an integrated indexer, full-text search, hypertext links between images, text and sound using SGML tags, and a return stack for use with hypertext links. It included features that are still not available in today's browsers, such as the ability to restrict searches within document structures, searches on indexed documents using wildcards, and the ability to search on tag attribute values and attribute names.
Peter Scott and Earle Fogel expanded the earlier HyperRez (1988) concept in creating HyTelnet in 1990 which added jumps to telnet sites ... and which offered users instant logon and access to the online catalogs of over 5000 libraries around the world. The strength of Hytelnet was speed and simplicity in link creation/execution at the expense of a centralized worldwide source for adding, indexing, and modifying telnet links. This problem was solved by the invention of the web server.
In April 1990, a draft patent application for "PageLink", a mass-market consumer device for browsing pages via links, was proposed by Craig Cockburn at Digital Equipment Corporation (DEC) while he was working in its Networking and Communications division in Reading, England. This application for a keyboard-less touch-screen browser for consumers also makes reference to "navigating and searching text" and "bookmarks", and was aimed at (quotes paraphrased) "replacing books", "storing a shopping list", "an updated personalised newspaper updated round the clock", and "dynamically updated maps for use in a car", and suggests such a device could have a "profound effect on the advertising industry". The patent application was shelved by Digital as too futuristic and, being largely hardware based, faced obstacles to market that purely software-driven approaches lacked.
Early 1990s: world wide web
The first web browser, WorldWideWeb, was developed in 1990 by Tim Berners-Lee for the NeXT Computer (at the same time as the first web server for the same machine) and introduced to his colleagues at CERN in March 1991. Berners-Lee recruited Nicola Pellow, a math student intern working at CERN, to write the Line Mode Browser, a cross-platform web browser that displayed web-pages on old terminals and was released in May 1991.
In 1992, Tony Johnson released the MidasWWW browser. Based on Motif/X, MidasWWW allowed viewing of PostScript files on the Web from Unix and VMS, and even handled compressed PostScript. Another early popular Web browser was ViolaWWW, which was modeled after HyperCard. In the same year the Lynx browser was announced – the only one of these early projects still being maintained and supported today. Erwise was the first browser with a graphical user interface, developed as a student project at Helsinki University of Technology and released in April 1992, but discontinued in 1994.
Thomas R. Bruce of the Legal Information Institute at Cornell Law School started to develop Cello in 1992. When released on 8 June 1993 it was one of the first graphical web browsers, and the first to run on Microsoft Windows (Windows 3.1, NT 3.5) and OS/2 platforms.
However, the explosion in popularity of the Web was triggered by NCSA Mosaic which was a graphical browser running originally on Unix and soon ported to the Amiga and VMS platforms, and later the Apple Macintosh and Microsoft Windows platforms. Version 1.0 was released in September 1993, and was dubbed the killer application of the Internet. It was the first web browser to display images inline with the document's text. Prior browsers would display an icon that, when clicked, would download and open the graphic file in a helper application. This was an intentional design decision on both parts, as the graphics support in early browsers was intended for displaying charts and graphs associated with technical papers while the user scrolled to read the text, while Mosaic was trying to bring multimedia content to non-technical users. Mosaic and browsers derived from it had a user option to automatically display images inline or to show an icon for opening in external programs. Marc Andreessen, who was the leader of the Mosaic team at NCSA, quit to form a company that would later be known as Netscape Communications Corporation. Netscape released its flagship Navigator product in October 1994, and it took off the next year.
IBM presented its own WebExplorer with OS/2 Warp in 1994 and version 1.0 was released 6 January 1995.
UdiWWW was the first web browser that was able to handle all HTML 3 features with the math tags released 1995. Following the release of version 1.2 in April 1996, Bernd Richter ceased development, stating "let Microsoft with the ActiveX Development Kit do the rest."
Microsoft, which had thus far not marketed a browser, finally entered the fray with its Internet Explorer product (version 1.0 was released 16 August 1995), purchased from Spyglass, Inc. This began what is known as the "browser wars" in which Microsoft and Netscape competed for the Web browser market.
Early web users were free to choose among the handful of web browsers available, just as they would choose any other application—web standards would ensure their experience remained largely the same. The browser wars put the Web in the hands of millions of ordinary PC users, but showed how commercialization of the Web could stymie standards efforts. Both Microsoft and Netscape liberally incorporated proprietary extensions to HTML in their products, and tried to gain an edge by product differentiation, leading to a web by the late 1990s where only Microsoft or Netscape browsers were viable contenders. In a victory for a standardized web, Cascading Style Sheets, proposed by Håkon Wium Lie, were accepted over Netscape's JavaScript Style Sheets (JSSS) by W3C.
Late 1990s: Microsoft vs Netscape
In 1996, Netscape's share of the browser market reached 86% (with Internet Explorer approaching 10%), but Microsoft then began integrating its browser with its operating system and striking bundling deals with OEMs. Within four years of its release, IE had 75% of the browser market, and by 1999 it had 99% of the market. Although Microsoft has since faced antitrust litigation on these charges, the browser wars effectively ended once it was clear that Netscape's declining market-share trend was irreversible. Prior to the release of Mac OS X, Internet Explorer for Mac and Netscape were also the primary browsers in use on the Macintosh platform.
Unable to continue commercially funding their product's development, Netscape responded by open sourcing its product, creating Mozilla. This helped the browser maintain its technical edge over Internet Explorer, but did not slow Netscape's declining market share. Netscape was purchased by America Online in late 1998.
2000s
At first, the Mozilla project struggled to attract developers, but by 2002, it had evolved into a relatively stable and powerful internet suite. Mozilla 1.0 was released to mark this milestone. Also in 2002, a spinoff project that would eventually become the popular Firefox was released.
Firefox has been downloadable free of charge from the start, as was its predecessor, the Mozilla browser. Firefox's business model, unlike that of 1990s Netscape, consists primarily of deals with search engines such as Google to direct users towards them – see Web browser#Business models.
In 2003, Microsoft announced that Internet Explorer would no longer be made available as a separate product but would be part of the evolution of its Windows platform, and that no more releases for the Macintosh would be made.
AOL announced that it would retire support and development of the Netscape web browser in February 2008.
In the second half of 2004, Internet Explorer reached a peak market share of more than 92%. Since then, its market share has been slowly but steadily declining and is around 11.8% as of July 2013. In early 2005, Microsoft reversed its decision to release Internet Explorer as part of Windows, announcing that a standalone version of Internet Explorer was under development. Internet Explorer 7 was released for Windows XP, Windows Server 2003, and Windows Vista in October 2006. Internet Explorer 8 was released on 19 March 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, and Windows 7. Internet Explorer 9, 10 and 11 were later released, and version 11 is included in Windows 10, but Microsoft Edge became the default browser there.
Apple's Safari, the default browser on Mac OS X from version 10.3 onwards, has grown to dominate browsing on Mac OS X. Browsers such as Firefox, Camino, Google Chrome, and OmniWeb are alternative browsers for Mac systems. OmniWeb and Google Chrome, like Safari, use the WebKit rendering engine (forked from KHTML), which is packaged by Apple as a framework for use by third-party applications. In August 2007, Apple also ported Safari for use on the Windows XP and Vista operating systems.
Opera was first released in 1996. It was a popular choice in handheld devices, particularly mobile phones, but remains a niche player in the PC Web browser market. It was also available on Nintendo's DS, DS Lite and Wii consoles. The Opera Mini browser uses the Presto layout engine like all versions of Opera, but runs on most phones supporting Java MIDlets.
The Lynx browser remains popular for Unix shell users and with vision impaired users due to its entirely text-based nature. There are also several text-mode browsers with advanced features, such as w3m, Links (which can operate both in text and graphical mode), and the Links forks such as ELinks.
Relationships of browsers
A number of web browsers have been derived and branched from source code of earlier versions and products.
Web browsers by year
Historical web browsers
This table focuses on operating systems (OS) and browsers from 1990 to 2000. The year listed for a version is usually the year of the first official release, with an end year being the end of development, a project change, or relevant termination. Releases of operating systems and browsers from the early 1990s up to the 2001–02 time frame are the current focus.
Many early browsers can be made to run on later OS (and later browsers on early OS in some cases); however, most of these situations are avoided in the table. Terms are defined below.
See also
Comparison of web browsers
History of the Internet
History of the World Wide Web
List of web browsers
Usage share of web browsers
References
External links
evolt.org – Browser Archive
Web browser
History of computing | History of the web browser | [
"Technology"
] | 3,055 | [
"History of software",
"Computers",
"History of computing"
] |
2,249,050 | https://en.wikipedia.org/wiki/4P/Faye | Comet 4P/Faye (also known as Faye's Comet or Comet Faye) is a periodic Jupiter-family comet discovered in November 1843 by Hervé Faye at the Royal Observatory in Paris. Its most recent perihelia (closest approaches to the Sun) were on November 15, 2006; May 29, 2014; and September 8, 2021.
The comet was first observed by Faye on November 23, but bad weather prevented its confirmation until the 25th. It was so faint that it had already passed perihelion about a month before its discovery, and only a close pass by the Earth had made it bright enough for discovery. Otto Wilhelm von Struve reported that the comet was visible to the naked eye at the end of November. It remained visible in smaller telescopes until January 10, 1844, and was finally lost to larger telescopes on April 10, 1844.
In 1844, Friedrich Wilhelm Argelander and Thomas James Henderson independently computed that the comet was a short-period comet; by May, its period had been calculated to be 7.43 years. Urbain Le Verrier computed the positions for the 1851 apparition, predicting perihelion in April 1851. The comet was found close to his predicted position on November 29, 1850, by James Challis.
The comet was missed during its apparitions in 1903 and 1918 due to unfavorable observing circumstances. It reached a brightness of about 9th magnitude in 2006.
4P/Faye has a close approach to Jupiter every 59.3 years, which is gradually reducing its perihelion and increasing its orbital eccentricity. In the most recent close approach to Jupiter (March 2018), Faye's perihelion changed from about 1.7 AU to about 1.5 AU.
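The link between a shrinking perihelion and a growing eccentricity is elementary orbital geometry. Taking the semi-major axis as roughly constant across the encounter (an approximation) and estimating it from the roughly 7.4-year period via Kepler's third law gives a ≈ 7.4^(2/3) ≈ 3.8 AU, so

```latex
q = a(1 - e) \;\Longrightarrow\; e = 1 - \frac{q}{a},
\qquad
e \approx 1 - \frac{1.7}{3.8} \approx 0.55
\;\longrightarrow\;
e \approx 1 - \frac{1.5}{3.8} \approx 0.61 .
```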
The comet nucleus is estimated to be about 3.5 km in diameter.
References
External links
4P/Faye at CometBase database
4P/Faye – Seiichi Yoshida @ aerith.net
4P/Faye history from Gary W. Kronk's Cometography
4P/Faye at the Minor Planet Center's Database
Periodic comets
0004
004P
004P
18431123 | 4P/Faye | [
"Astronomy"
] | 440 | [
"Astronomy stubs",
"Comet stubs"
] |
2,249,054 | https://en.wikipedia.org/wiki/Farid%20F.%20Abraham | Farid F. Abraham (born May 5, 1937) is an American scientist.
He has pioneered new methods of using computer modeling in the fields of fracture mechanics, membrane dynamics and phase transformation behavior of matter. He has written two textbooks and over 200 papers published in international journals. He won the Aneesur Rahman Prize in Computational Physics, which is the highest prize given by the American Physical Society.
Biography
Abraham is a native of Phoenix, Arizona and received both his B.S. (1959) and Ph.D. (1962) degrees in physics from the University of Arizona. He spent two postdoctoral years (1962–63) at the Enrico Fermi Institute at the University of Chicago and two years as a research scientist at the Lawrence Livermore National Laboratory in California. He joined IBM in 1966 as a staff member at its Palo Alto Scientific Center. In 1971, he was named the first Consulting Professor at Stanford University and developed a graduate course in computational applied science in its Materials Science Department. In 1972, he moved to the IBM Research Division's San Jose Research Laboratory, known since 1985 as the Almaden Research Center. During 1994, he held the Sandoval Vallarta Chair at the Universidad Autonoma Metropolitana in Mexico City.
From 1995 to 2003, he was awarded several computer grants at the National Science Foundation Computational Centers and Department of Defense Grand Challenge Grants at the Maui High Performance Computing Center (MHPCC). He has been awarded several IBM Outstanding Technical Achievement Awards. He is a Fellow of the American Physical Society and, in 1998/99, was an American Physical Society Centennial Speaker. He was the Chair of the American Physical Society's Division of Computational Physics in 2000–2001. He received the Alexander von Humboldt Research Award for Senior Scientists. In March 2004, he received the Aneesur Rahman Prize for Computational Physics from the American Physical Society. Retiring from IBM in 2004, he joined Lawrence Livermore National Laboratory as a Senior Scientist and was named the Graham-Perdue Visiting Professor at The University of Georgia. In 2010, he retired from LLNL. For over four decades he has pursued a wide range of computational physics applications, mainly in condensed matter physics and chemical physics.
Bibliography
Abraham, Farid F., and Tiller, William A. (1972) An Introduction to Computer Simulation in Applied Science. New York: Plenum Press.
Abraham, Farid F. (1974) Homogeneous Nucleation Theory, New York: Academic Press
Living people
University of Arizona alumni
1937 births
Place of birth missing (living people)
American scientists
Computational physicists
IBM employees
Humboldt Research Award recipients
Fellows of the American Physical Society | Farid F. Abraham | [
"Physics"
] | 540 | [
"Computational physicists",
"Computational physics"
] |
2,249,310 | https://en.wikipedia.org/wiki/Aether%20theories | In the history of physics, aether theories (or ether theories) proposed the existence of a medium, a space-filling substance or field as a transmission medium for the propagation of electromagnetic or gravitational forces. Since the development of special relativity, theories using a substantial aether fell out of use in modern physics, and are now replaced by more abstract models.
This early modern aether has little in common with the aether of classical elements from which the name was borrowed. The assorted theories embody the various conceptions of this medium and substance.
Historical models
Luminiferous aether
Isaac Newton suggests the existence of an aether in the Third Book of Opticks (1st ed. 1704; 2nd ed. 1718): "Doth not this aethereal medium in passing out of water, glass, crystal, and other compact and dense bodies in empty spaces, grow denser and denser by degrees, and by that means refract the rays of light not in a point, but by bending them gradually in curve lines? ...Is not this medium much rarer within the dense bodies of the Sun, stars, planets and comets, than in the empty celestial space between them? And in passing from them to great distances, doth it not grow denser and denser perpetually, and thereby cause the gravity of those great bodies towards one another, and of their parts towards the bodies; every body endeavouring to go from the denser parts of the medium towards the rarer?"
In the 19th century, luminiferous aether (or ether), meaning light-bearing aether, was a theorized medium for the propagation of light. James Clerk Maxwell developed a model to explain electric and magnetic phenomena using the aether, a model that led to what are now called Maxwell's equations and the understanding that light is an electromagnetic wave. Later, a series of increasingly careful experiments were carried out in the late 1800s, including the Michelson–Morley experiment, to try to detect the motion of Earth through the aether, but no drag was detected. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Joseph Larmor discussed the aether in terms of a moving magnetic field caused by the acceleration of electrons.
Hendrik Lorentz and George Francis FitzGerald offered, within the framework of Lorentz ether theory, an explanation of how the Michelson–Morley experiment could have failed to detect motion through the aether. However, the initial Lorentz theory predicted that motion through the aether would create a birefringence effect, which Rayleigh and Brace tested and failed to find (Experiments of Rayleigh and Brace). All of those results required the full application of the Lorentz transformation by Lorentz and Joseph Larmor in 1904. Summarizing the results of Michelson, Rayleigh and others, Hermann Weyl would later write that the aether had "betaken itself to the land of the shades in a final effort to elude the inquisitive search of the physicist". In addition to possessing more conceptual clarity, Albert Einstein's 1905 special theory of relativity could explain all of the experimental results without referring to an aether at all. This eventually led most physicists to conclude that the earlier notion of a luminiferous aether was not a useful concept.
Mechanical gravitational aether
From the 16th until the late 19th century, gravitational effects had also been modeled using an aether. In a note at the end of his work "A Dynamical Theory of the Electromagnetic Field", Maxwell discussed a model for gravity based on a medium similar to the one he used for the electromagnetic field. He concluded that the medium would have "an enormous intrinsic energy" and would necessarily have to be diminished in areas of mass. He could not "understand in what way a medium can possess such properties" so he did not pursue it further. The most well-known formulation is Le Sage's theory of gravitation, although variations on the idea were entertained by Isaac Newton, Bernhard Riemann, and Lord Kelvin. For example, Kelvin published a note on Le Sage's model in 1873, in which he found Le Sage's proposal thermodynamically flawed and suggested a possible way to salvage it using the then popular vortex theory of the atom. Kelvin later concluded
None of those concepts are considered to be viable by the scientific community today.
Non-standard interpretations in modern physics
General relativity
Albert Einstein sometimes used the word aether for the gravitational field within general relativity, but the only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics.
Quantum vacuum
Quantum mechanics can be used to describe spacetime as being non-empty at extremely small scales, fluctuating and generating particle pairs that appear and disappear incredibly quickly. It has been suggested by some such as Paul Dirac that this quantum vacuum may be the equivalent in modern physics of a particulate aether. However, Dirac's aether hypothesis was motivated by his dissatisfaction with quantum electrodynamics, and it never gained support from the mainstream scientific community.
Physicist Robert B. Laughlin commented that the quantum vacuum could be called a "relativistic ether". Paul Davies writes that while the quantum vacuum resembles in some ways the old concept of the aether, the two differ in a key respect: the quantum vacuum "has no privileged reference frame, no state of rest relative to which a material body could be said to move."
Pilot waves
Louis de Broglie stated, "Any particle, even isolated, has to be imagined as in continuous "energetic contact" with a hidden medium." However, as de Broglie pointed out, this medium "could not serve as a universal reference medium, as this would be contrary to relativity theory."
See also
Absolute space and time
Apeiron (cosmology)
Astral light
Cosmology
Frame-dragging
Tests of general relativity
Tests of special relativity
References
Further reading
Epple, M. (1998). "Topology, Matter, and Space, I: Topological Notions in 19th-Century Natural Philosophy", Archive for History of Exact Sciences 52: 297–392.
Oliver Lodge, "Ether", Encyclopædia Britannica, Thirteenth Edition (1926).
"A Ridiculously Brief History of Electricity and Magnetism; Mostly from E. T. Whittaker's A History of the Theories of Aether and Electricity]. (PDF format.)
Vacuum | Aether theories | [
"Physics"
] | 1,447 | [
"Vacuum",
"Matter"
] |
2,249,324 | https://en.wikipedia.org/wiki/Sodium%20propionate | Sodium propanoate or sodium propionate is the sodium salt of propionic acid and has the chemical formula Na(C2H5COO). This white crystalline solid is deliquescent in moist air.
Reactions
It is produced by the reaction of propionic acid and sodium carbonate or sodium hydroxide.
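A minimal sketch of the implied stoichiometry, assuming simple acid–base neutralization (reaction conditions are not specified above):

```latex
\mathrm{C_2H_5COOH + NaOH \;\longrightarrow\; C_2H_5COONa + H_2O}
\qquad
\mathrm{2\,C_2H_5COOH + Na_2CO_3 \;\longrightarrow\; 2\,C_2H_5COONa + H_2O + CO_2}
```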
Uses
It is used as a food preservative and is represented by the food labeling E number E281 in Europe; it is used primarily as a mold inhibitor in bakery products. It is approved for use as a food additive in the EU, USA and Australia and New Zealand (where it is listed by its INS number 281).
Structure
Anhydrous sodium propionate is a polymeric structure, featuring trigonal prismatic Na+ centers bonded to six oxygen ligands provided by the carboxylates. A layered structure is observed, with the hydrophobic ethyl groups projecting into the layered galleries. With hydrated sodium propionate, some of these Na-carboxylate linkages are displaced by water.
See also
Propionic acid, E 280
Calcium propionate, E 282
Potassium propionate, E 283
References
External links
Sodium propanoate at Sci-toys.com
Propionates
Organic sodium salts
E-number additives | Sodium propionate | [
"Chemistry"
] | 262 | [
"Organic sodium salts",
"Salts"
] |
2,249,332 | https://en.wikipedia.org/wiki/Potassium%20propanoate | Potassium propanoate or potassium propionate has formula K(C2H5COO). Its melting point is 410 °C. It is the potassium salt of propanoic acid.
Use
It is used as a food preservative and is represented by the food labeling E number E283 in Europe and by the INS number 283 in Australia and New Zealand.
References
Propionates
Potassium compounds
E-number additives | Potassium propanoate | [
"Chemistry"
] | 88 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
2,249,718 | https://en.wikipedia.org/wiki/Representation%20theory%20of%20the%20Lorentz%20group | The Lorentz group is a Lie group of symmetries of the spacetime of special relativity. This group can be realized as a collection of matrices, linear transformations, or unitary operators on some Hilbert space; it has a variety of representations. This group is significant because special relativity and quantum mechanics are the two physical theories that are most thoroughly established, and the conjunction of these two theories is the study of the infinite-dimensional unitary representations of the Lorentz group. These have both historical importance in mainstream physics and connections to more speculative present-day theories.
Development
The full theory of the finite-dimensional representations of the Lie algebra of the Lorentz group is deduced using the general framework of the representation theory of semisimple Lie algebras. The finite-dimensional representations of the connected component of the full Lorentz group are obtained by employing the Lie correspondence and the matrix exponential. The full finite-dimensional representation theory of the universal covering group SL(2, C) (and also the spin group, a double cover) of the Lorentz group is obtained, and explicitly given in terms of action on a function space in representations of SL(2, C) and sl(2, C). The representatives of time reversal and space inversion are given in space inversion and time reversal, completing the finite-dimensional theory for the full Lorentz group. The general properties of the (m, n) representations are outlined. Action on function spaces is considered, with the action on spherical harmonics and the Riemann P-functions appearing as examples. The infinite-dimensional case of irreducible unitary representations is realized for the principal series and the complementary series. Finally, the Plancherel formula for SL(2, C) is given, and representations of SO(3; 1) are classified and realized for Lie algebras.
The development of the representation theory has historically followed the development of the more general theory of representation theory of semisimple groups, largely due to Élie Cartan and Hermann Weyl, but the Lorentz group has also received special attention due to its importance in physics. Notable contributors are physicist E. P. Wigner and mathematician Valentine Bargmann with their Bargmann–Wigner program, one conclusion of which is, roughly, that a classification of all unitary representations of the inhomogeneous Lorentz group amounts to a classification of all possible relativistic wave equations. The classification of the irreducible infinite-dimensional representations of the Lorentz group was established in 1947 by Harish-Chandra, then Paul Dirac's doctoral student in theoretical physics and later a mathematician. The corresponding classification for SL(2, C) was published independently by Bargmann and by Israel Gelfand together with Mark Naimark in the same year.
Applications
Many of the representations, both finite-dimensional and infinite-dimensional, are important in theoretical physics. Representations appear in the description of fields in classical field theory, most importantly the electromagnetic field, and of particles in relativistic quantum mechanics, as well as of both particles and quantum fields in quantum field theory and of various objects in string theory and beyond. The representation theory also provides the theoretical ground for the concept of spin. The theory enters into general relativity in the sense that in small enough regions of spacetime, physics is that of special relativity.
The finite-dimensional irreducible non-unitary representations together with the irreducible infinite-dimensional unitary representations of the inhomogeneous Lorentz group, the Poincaré group, are the representations that have direct physical relevance.
Infinite-dimensional unitary representations of the Lorentz group appear by restriction of the irreducible infinite-dimensional unitary representations of the Poincaré group acting on the Hilbert spaces of relativistic quantum mechanics and quantum field theory. But these are also of mathematical interest and of potential direct physical relevance in other roles than that of a mere restriction. There were speculative theories (tensors and spinors have infinite counterparts in the expansors of Dirac and the expinors of Harish-Chandra) consistent with relativity and quantum mechanics, but they have found no proven physical application. Modern speculative theories potentially have similar ingredients per below.
Classical field theory
While the electromagnetic field together with the gravitational field are the only classical fields providing accurate descriptions of nature, other types of classical fields are important too. In the approach to quantum field theory (QFT) referred to as second quantization, the starting point is one or more classical fields, where e.g. the wave functions solving the Dirac equation are considered as classical fields prior to (second) quantization. While second quantization and the Lagrangian formalism associated with it is not a fundamental aspect of QFT, it is the case that so far all quantum field theories can be approached this way, including the standard model. In these cases, there are classical versions of the field equations following from the Euler–Lagrange equations derived from the Lagrangian using the principle of least action. These field equations must be relativistically invariant, and their solutions (which will qualify as relativistic wave functions according to the definition below) must transform under some representation of the Lorentz group.
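As a reminder of the machinery being invoked, the field equations follow from stationarity of the action; a sketch for a single real scalar field (multi-component fields simply acquire an index):

```latex
S[\varphi] = \int \mathrm{d}^4x \; \mathcal{L}(\varphi, \partial_\mu\varphi),
\qquad
\delta S = 0 \;\Longrightarrow\;
\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu\varphi)} - \frac{\partial \mathcal{L}}{\partial \varphi} = 0 .
```

For example, the Lagrangian density L = ½ ∂_μφ ∂^μφ − ½ m²φ² yields the Klein–Gordon equation (∂_μ∂^μ + m²)φ = 0, whose solutions transform as Lorentz scalars.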
The action of the Lorentz group on the space of field configurations (a field configuration is the spacetime history of a particular solution, e.g. the electromagnetic field in all of space over all time is one field configuration) resembles the action on the Hilbert spaces of quantum mechanics, except that the commutator brackets are replaced by field theoretical Poisson brackets.
Relativistic quantum mechanics
For the present purposes the following definition is made: A relativistic wave function is a set of n functions ψ^α on spacetime which transforms under an arbitrary proper Lorentz transformation Λ as ψ′^α(x) = D[Λ]^α_β ψ^β(Λ^(−1)x), where D[Λ] is an n-dimensional matrix representative of Λ belonging to some direct sum of the (m, n) representations to be introduced below.
The most useful relativistic quantum mechanics one-particle theories (there are no fully consistent such theories) are the Klein–Gordon equation and the Dirac equation in their original setting. They are relativistically invariant and their solutions transform under the Lorentz group as Lorentz scalars (the (0, 0) representation) and bispinors (the (1/2, 0) ⊕ (0, 1/2) representation) respectively. The electromagnetic field is a relativistic wave function according to this definition, transforming under (1, 0) ⊕ (0, 1).
The infinite-dimensional representations may be used in the analysis of scattering.
Quantum field theory
In quantum field theory, the demand for relativistic invariance enters, among other ways in that the S-matrix necessarily must be Poincaré invariant. This has the implication that there is one or more infinite-dimensional representation of the Lorentz group acting on Fock space. One way to guarantee the existence of such representations is the existence of a Lagrangian description (with modest requirements imposed, see the reference) of the system using the canonical formalism, from which a realization of the generators of the Lorentz group may be deduced.
The transformations of field operators illustrate the complementary role played by the finite-dimensional representations of the Lorentz group and the infinite-dimensional unitary representations of the Poincaré group, witnessing the deep unity between mathematics and physics. For illustration, consider the definition of an n-component field operator: A relativistic field operator is a set of n operator-valued functions on spacetime which transforms under proper Poincaré transformations (Λ, a) according to a rule of the form written out below.
Here U(Λ, a) is the unitary operator representing (Λ, a) on the Hilbert space on which Ψ is defined, and D is an n-dimensional representation of the Lorentz group. The transformation rule is the second Wightman axiom of quantum field theory.
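A common explicit form of this transformation rule, written here in Weinberg-style notation as a sketch (index placement and the use of the inverse transformation vary between texts):

```latex
U(\Lambda, a)\, \Psi^{\alpha}(x)\, U(\Lambda, a)^{-1}
  \;=\; D\big[\Lambda^{-1}\big]^{\alpha}{}_{\beta}\, \Psi^{\beta}(\Lambda x + a).
```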
By considerations of differential constraints that the field operator must be subjected to in order to describe a single particle with definite mass and spin (or helicity), it is deduced that
where are interpreted as creation and annihilation operators respectively. The creation operator transforms according to
and similarly for the annihilation operator. The point to be made is that the field operator transforms according to a finite-dimensional non-unitary representation of the Lorentz group, while the creation operator transforms under the infinite-dimensional unitary representation of the Poincaré group characterized by the mass and spin of the particle. The connection between the two is provided by the wave functions, also called coefficient functions
that carry both the indices operated on by Lorentz transformations and the indices operated on by Poincaré transformations. This may be called the Lorentz–Poincaré connection. To exhibit the connection, subject both sides of equation to a Lorentz transformation resulting in for e.g. ,
where is the non-unitary Lorentz group representative of and is a unitary representative of the so-called Wigner rotation associated to and that derives from the representation of the Poincaré group, and is the spin of the particle.
All of the above formulas, including the definition of the field operator in terms of creation and annihilation operators, as well as the differential equations satisfied by the field operator for a particle with specified mass, spin and the representation under which it is supposed to transform, and also that of the wave function, can be derived from group theoretical considerations alone once the frameworks of quantum mechanics and special relativity is given.
Speculative theories
In theories in which spacetime can have more than dimensions, the generalized Lorentz groups of the appropriate dimension take the place of .
The requirement of Lorentz invariance takes on perhaps its most dramatic effect in string theory. Classical relativistic strings can be handled in the Lagrangian framework by using the Nambu–Goto action. This results in a relativistically invariant theory in any spacetime dimension. But as it turns out, the theory of open and closed bosonic strings (the simplest string theory) is impossible to quantize in such a way that the Lorentz group is represented on the space of states (a Hilbert space) unless the dimension of spacetime is 26. The corresponding result for superstring theory is again deduced demanding Lorentz invariance, but now with supersymmetry. In these theories the Poincaré algebra is replaced by a supersymmetry algebra which is a -graded Lie algebra extending the Poincaré algebra. The structure of such an algebra is to a large degree fixed by the demands of Lorentz invariance. In particular, the fermionic operators (grade ) belong to a or representation space of the (ordinary) Lorentz Lie algebra. The only possible dimension of spacetime in such theories is 10.
Finite-dimensional representations
Representation theory of groups in general, and Lie groups in particular, is a very rich subject. The Lorentz group has some properties that makes it "agreeable" and others that make it "not very agreeable" within the context of representation theory; the group is simple and thus semisimple, but is not connected, and none of its components are simply connected. Furthermore, the Lorentz group is not compact.
For finite-dimensional representations, the presence of semisimplicity means that the Lorentz group can be dealt with the same way as other semisimple groups using a well-developed theory. In addition, all representations are built from the irreducible ones, since the Lie algebra possesses the complete reducibility property. But, the non-compactness of the Lorentz group, in combination with lack of simple connectedness, cannot be dealt with in all the aspects as in the simple framework that applies to simply connected, compact groups. Non-compactness implies, for a connected simple Lie group, that no nontrivial finite-dimensional unitary representations exist. Lack of simple connectedness gives rise to spin representations of the group. The non-connectedness means that, for representations of the full Lorentz group, time reversal and reversal of spatial orientation have to be dealt with separately.
History
The development of the finite-dimensional representation theory of the Lorentz group mostly follows that of representation theory in general. Lie theory originated with Sophus Lie in 1873. By 1888 the classification of simple Lie algebras was essentially completed by Wilhelm Killing. In 1913 the theorem of highest weight for representations of simple Lie algebras, the path that will be followed here, was completed by Élie Cartan. Richard Brauer was during the period of 1935–38 largely responsible for the development of the Weyl-Brauer matrices describing how spin representations of the Lorentz Lie algebra can be embedded in Clifford algebras. The Lorentz group has also historically received special attention in representation theory, see History of infinite-dimensional unitary representations below, due to its exceptional importance in physics. Mathematicians Hermann Weyl and Harish-Chandra and physicists Eugene Wigner and Valentine Bargmann made substantial contributions both to general representation theory and in particular to the Lorentz group. Physicist Paul Dirac was perhaps the first to manifestly knit everything together in a practical application of major lasting importance with the Dirac equation in 1928.
The Lie algebra
This section addresses the irreducible complex linear representations of the complexification of the Lie algebra of the Lorentz group. A convenient basis for so(3; 1) is given by the three generators Ji of rotations and the three generators Ki of boosts. They are explicitly given in conventions and Lie algebra bases.
The Lie algebra is complexified, and the basis is changed to the components of its two ideals,
A = (J + iK)/2 and B = (J − iK)/2.
The components of A = (A1, A2, A3) and B = (B1, B2, B3) separately satisfy the commutation relations of the Lie algebra sl(2, C) and, moreover, they commute with each other,
[Ai, Aj] = iεijk Ak,  [Bi, Bj] = iεijk Bk,  [Ai, Bj] = 0,
where i, j, k are indices which each take values 1, 2, 3, and εijk is the three-dimensional Levi-Civita symbol. Let A and B denote the complex linear span of A1, A2, A3 and B1, B2, B3 respectively.
One has the isomorphisms
where is the complexification of
The utility of these isomorphisms comes from the fact that all irreducible representations of , and hence all irreducible complex linear representations of are known. The irreducible complex linear representation of is isomorphic to one of the highest weight representations. These are explicitly given in complex linear representations of
The unitarian trick
The Lie algebra is the Lie algebra of It contains the compact subgroup with Lie algebra The latter is a compact real form of Thus from the first statement of the unitarian trick, representations of are in one-to-one correspondence with holomorphic representations of
By compactness, the Peter–Weyl theorem applies to , and hence orthonormality of irreducible characters may be appealed to. The irreducible unitary representations of are precisely the tensor products of irreducible unitary representations of .
By appeal to simple connectedness, the second statement of the unitarian trick is applied. The objects in the following list are in one-to-one correspondence:
Holomorphic representations of
Smooth representations of
Real linear representations of
Complex linear representations of
Tensor products of representations appear at the Lie algebra level as either of
where is the identity operator. Here, the latter interpretation, which follows from , is intended. The highest weight representations of are indexed by for . (The highest weights are actually , but the notation here is adapted to that of ) The tensor products of two such complex linear factors then form the irreducible complex linear representations of
Finally, the -linear representations of the real forms of the far left, , and the far right, in are obtained from the -linear representations of characterized in the previous paragraph.
The (μ, ν)-representations of sl(2, C)
The complex linear representations of the complexification of obtained via isomorphisms in , stand in one-to-one correspondence with the real linear representations of The set of all real linear irreducible representations of are thus indexed by a pair . The complex linear ones, corresponding precisely to the complexification of the real linear representations, are of the form , while the conjugate linear ones are the . All others are real linear only. The linearity properties follow from the canonical injection, the far right in , of into its complexification. Representations on the form or are given by real matrices (the latter are not irreducible). Explicitly, the real linear -representations of are
where are the complex linear irreducible representations of and their complex conjugate representations. (The labeling is usually in the mathematics literature , but half-integers are chosen here to conform with the labeling for the Lie algebra.) Here the tensor product is interpreted in the former sense of . These representations are concretely realized below.
The (m, n)-representations of so(3; 1)
Via the displayed isomorphisms in and knowledge of the complex linear irreducible representations of upon solving for and , all irreducible representations of and, by restriction, those of are obtained. The representations of obtained this way are real linear (and not complex or conjugate linear) because the algebra is not closed upon conjugation, but they are still irreducible. Since is semisimple, all its representations can be built up as direct sums of the irreducible ones.
Thus the finite dimensional irreducible representations of the Lorentz algebra are classified by an ordered pair of half-integers and , conventionally written as one of
where is a finite-dimensional vector space. These are, up to a similarity transformation, uniquely given by
where is the -dimensional unit matrix and
are the -dimensional irreducible representations of also termed spin matrices or angular momentum matrices. These are explicitly given as
where δ denotes the Kronecker delta. In components, with −m ≤ a, a′ ≤ m and −n ≤ b, b′ ≤ n, the representations are given by the matrices sketched below.
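In a common physics convention, the explicit matrices described in this passage can be written as follows (a sketch; signs and factors of i differ between sources). With A acting as J^(m) ⊗ 1 and B acting as 1 ⊗ J^(n),

```latex
\pi_{(m,n)}(J_i) = J_i^{(m)} \otimes 1_{2n+1} + 1_{2m+1} \otimes J_i^{(n)},
\qquad
\pi_{(m,n)}(K_i) = -i\!\left( J_i^{(m)} \otimes 1_{2n+1} - 1_{2m+1} \otimes J_i^{(n)} \right),
```

where the spin (angular momentum) matrices have the standard nonzero entries

```latex
\big(J_3^{(j)}\big)_{a'a} = a\,\delta_{a'a},
\qquad
\big(J_1^{(j)} \pm i J_2^{(j)}\big)_{a'a} = \sqrt{(j \mp a)(j \pm a + 1)}\;\delta_{a',\,a\pm 1},
\qquad
a, a' \in \{-j, \dots, j\}.
```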
Common representations
The (0, 0) representation is the one-dimensional trivial representation and is carried by relativistic scalar field theories.
Fermionic supersymmetry generators transform under one of the (1/2, 0) or (0, 1/2) representations (Weyl spinors).
The four-momentum of a particle (either massless or massive) transforms under the (1/2, 1/2) representation, a four-vector.
A physical example of a (1,1) traceless symmetric tensor field is the traceless part of the energy–momentum tensor .
Off-diagonal direct sums
Since for any irreducible representation for which it is essential to operate over the field of complex numbers, the direct sum of representations and have particular relevance to physics, since it permits to use linear operators over real numbers.
(1/2, 0) ⊕ (0, 1/2) is the bispinor representation. See also Dirac spinor and Weyl spinors and bispinors below.
(1, 1/2) ⊕ (1/2, 1) is the Rarita–Schwinger field representation.
would be the symmetry of the hypothesized gravitino. It can be obtained from the representation.
(1, 0) ⊕ (0, 1) is the representation of a parity-invariant 2-form field (a.k.a. curvature form). The electromagnetic field tensor transforms under this representation.
The group
The approach in this section is based on theorems that, in turn, are based on the fundamental Lie correspondence. The Lie correspondence is in essence a dictionary between connected Lie groups and Lie algebras. The link between them is the exponential mapping from the Lie algebra to the Lie group, denoted
If for some vector space is a representation, a representation of the connected component of is defined by
This definition applies whether the resulting representation is projective or not.
Surjectiveness of exponential map for SO(3, 1)
From a practical point of view, it is important whether the first formula in can be used for all elements of the group. It holds for all , however, in the general case, e.g. for , not all are in the image of .
But is surjective. One way to show this is to make use of the isomorphism the latter being the Möbius group. It is a quotient of (see the linked article). The quotient map is denoted with The map is onto. Apply with being the differential of at the identity. Then
Since the left hand side is surjective (both and are), the right hand side is surjective and hence is surjective. Finally, recycle the argument once more, but now with the known isomorphism between and to find that is onto for the connected component of the Lorentz group.
Fundamental group
The Lorentz group is doubly connected, i.e. its fundamental group is a group with two equivalence classes of loops as its elements.
Projective representations
Since has two elements, some representations of the Lie algebra will yield projective representations. Once it is known whether a representation is projective, formula applies to all group elements and all representations, including the projective ones — with the understanding that the representative of a group element will depend on which element in the Lie algebra (the in ) is used to represent the group element in the standard representation.
For the Lorentz group, the (m, n)-representation is projective when m + n is a half-integer. See the section on spinors below.
For a projective representation of , it holds that
since any loop in traversed twice, due to the double connectedness, is contractible to a point, so that its homotopy class is that of a constant map. It follows that is a double-valued function. It is not possible to consistently choose a sign to obtain a continuous representation of all of , but this is possible locally around any point.
The covering group SL(2, C)
Consider as a real Lie algebra with basis
where the sigmas are the Pauli matrices. From the relations
is obtained
which are exactly on the form of the -dimensional version of the commutation relations for (see conventions and Lie algebra bases below). Thus, the map , , extended by linearity is an isomorphism. Since is simply connected, it is the universal covering group of .
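A small numerical check of this identification. The sketch below assumes the common physics-convention assignment J_i ↦ σ_i/2, K_i ↦ −iσ_i/2 for the (1/2, 0) Weyl representation; the specific factors are an assumption of the sketch, not necessarily the convention used in this article.

```python
# Verify that J_i = sigma_i/2 and K_i = -i*sigma_i/2 satisfy the Lorentz algebra
# relations [J_i, J_j] = i eps_ijk J_k, [J_i, K_j] = i eps_ijk K_k,
# [K_i, K_j] = -i eps_ijk J_k (common physics convention).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

J = [m / 2 for m in sigma]
K = [-1j * m / 2 for m in sigma]

def comm(a, b):
    return a @ b - b @ a

# Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

for i in range(3):
    for j in range(3):
        rhs_J = sum(1j * eps[i, j, k] * J[k] for k in range(3))
        rhs_K = sum(1j * eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), rhs_J)
        assert np.allclose(comm(J[i], K[j]), rhs_K)
        assert np.allclose(comm(K[i], K[j]), -rhs_J)

# In this representation A = (J + iK)/2 = J and B = (J - iK)/2 = 0.
A = [(J[i] + 1j * K[i]) / 2 for i in range(3)]
B = [(J[i] - 1j * K[i]) / 2 for i in range(3)]
assert all(np.allclose(A[i], J[i]) for i in range(3))
assert all(np.allclose(B[i], 0) for i in range(3))
print("Lorentz algebra relations verified for the (1/2, 0) Weyl representation")
```

The final two assertions show that, in this representation, A = (J + iK)/2 carries the entire su(2) content while B = (J − iK)/2 vanishes, as expected for (1/2, 0).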
A geometric view
Let be a path from to , denote its homotopy class by and let be the set of all such homotopy classes. Define the set
and endow it with the multiplication operation
where is the path multiplication of and :
With this multiplication, becomes a group isomorphic to the universal covering group of . Since each has two elements, by the above construction, there is a 2:1 covering map . According to covering group theory, the Lie algebras and of are all isomorphic. The covering map is simply given by .
An algebraic view
For an algebraic view of the universal covering group, let act on the set of all Hermitian matrices by the operation
The action on is linear. An element of may be written in the form
The map is a group homomorphism into Thus is a 4-dimensional representation of . Its kernel must in particular take the identity matrix to itself, and therefore . Thus for in the kernel so, by Schur's lemma, is a multiple of the identity, which must be since . The space is mapped to Minkowski space , via
The action of on preserves determinants. The induced representation of on via the above isomorphism, given by
preserves the Lorentz inner product since
This means that belongs to the full Lorentz group . By the main theorem of connectedness, since is connected, its image under in is connected, and hence is contained in .
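A numerical sketch of this construction. The specific SL(2, C) element below (a boost along z with rapidity 0.7) and the test four-vector are arbitrary illustrative choices.

```python
# Map a four-vector x to the Hermitian matrix X = x0*I + x1*s1 + x2*s2 + x3*s3,
# act by X -> A X A^dagger with A in SL(2, C), and map back. Since det X equals
# the Minkowski norm x0^2 - x1^2 - x2^2 - x3^2 and det A = 1, the norm is preserved.
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [I2, s1, s2, s3]

def to_matrix(x):
    return sum(x[mu] * sigma[mu] for mu in range(4))

def to_vector(X):
    # Inverse of to_matrix, using tr(sigma_mu sigma_nu) = 2 delta_mu_nu.
    return np.real([np.trace(X @ sigma[mu]) / 2 for mu in range(4)])

def minkowski_norm(x):
    return x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2

# An SL(2, C) element: a boost along z with rapidity eta (det A = 1).
eta = 0.7
A = np.array([[np.exp(eta / 2), 0], [0, np.exp(-eta / 2)]], dtype=complex)
assert np.isclose(np.linalg.det(A), 1)

x = np.array([2.0, 0.3, -0.5, 1.1])
x_prime = to_vector(A @ to_matrix(x) @ A.conj().T)

assert np.isclose(minkowski_norm(x), minkowski_norm(x_prime))
print("x  =", x)
print("x' =", x_prime)
```

Because det A = 1 and det X equals the Minkowski norm of x, the map x ↦ x′ preserves the Lorentz inner product, which is the point being made above.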
It can be shown that the Lie map of is a Lie algebra isomorphism: The map is also onto.
Thus , since it is simply connected, is the universal covering group of , isomorphic to the group of above.
Non-surjectiveness of exponential mapping for SL(2, C)
The exponential mapping is not onto. The matrix
q = ( −1  1
       0  −1 )
is in SL(2, C), but there is no X in sl(2, C) such that q = exp(X).
In general, if g is an element of a connected Lie group G with Lie algebra, then g can be written as a finite product of exponentials of Lie algebra elements.
The matrix q above can be written as a product of two exponentials, as shown in the sketch below.
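A sketch of the standard argument for the matrix q above. If q = exp(X) for some traceless X, then X has eigenvalues λ and −λ; for λ ≠ 0 the matrix exp(X) would be diagonalizable, and for λ = 0 the matrix X would be nilpotent, making exp(X) unipotent with eigenvalue 1 rather than −1. Either way q, a non-diagonalizable matrix with eigenvalue −1, cannot be a single exponential. It is, however, a product of two exponentials:

```latex
\begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}
= \exp\!\begin{pmatrix} i\pi & 0 \\ 0 & -i\pi \end{pmatrix}
\exp\!\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}
= \left(-1_2\right)\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.
```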
Realization of representations of and and their Lie algebras
The complex linear representations of and are more straightforward to obtain than the representations. They can be (and usually are) written down from scratch. The holomorphic group representations (meaning the corresponding Lie algebra representation is complex linear) are related to the complex linear Lie algebra representations by exponentiation. The real linear representations of are exactly the -representations. They can be exponentiated too. The -representations are complex linear and are (isomorphic to) the highest weight-representations. These are usually indexed with only one integer (but half-integers are used here).
The mathematics convention is used in this section for convenience. Lie algebra elements differ by a factor of and there is no factor of in the exponential mapping compared to the physics convention used elsewhere. Let the basis of be
This choice of basis, and the notation, is standard in the mathematical literature.
Complex linear representations
The irreducible holomorphic -dimensional representations can be realized on the space of homogeneous polynomial of degree in 2 variables the elements of which are
The action of is given by
The associated -action is, using and the definition above, for the basis elements of
With a choice of basis for , these representations become matrix Lie algebras.
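A sketch of this standard realization, with notation chosen here for concreteness (the article's own symbols are not reproduced): the degree-n homogeneous polynomials in two complex variables form an (n + 1)-dimensional space on which SL(2, C) acts by precomposition,

```latex
P(z_1, z_2) = \sum_{k=0}^{n} c_k\, z_1^{k} z_2^{\,n-k},
\qquad
\big(\pi_n(g) P\big)(z) = P\!\left(g^{-1} z\right), \quad g \in \mathrm{SL}(2, \mathbb{C}),
```

and the Lie algebra acts by differentiation along one-parameter subgroups,

```latex
\big(\pi_n(X) P\big)(z) = \frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t=0} P\!\left(e^{-tX} z\right),
\qquad X \in \mathfrak{sl}(2, \mathbb{C}).
```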
Real linear representations
The -representations are realized on a space of polynomials in homogeneous of degree in and homogeneous of degree in The representations are given by
By employing again it is found that
In particular for the basis elements,
Properties of the (m, n) representations
The representations, defined above via (as restrictions to the real form ) of tensor products of irreducible complex linear representations and of are irreducible, and they are the only irreducible representations.
Irreducibility follows from the unitarian trick and that a representation of is irreducible if and only if , where are irreducible representations of .
Uniqueness follows from that the are the only irreducible representations of , which is one of the conclusions of the theorem of the highest weight.
Dimension
The (m, n) representations are (2m + 1)(2n + 1)-dimensional. This follows easiest from counting the dimensions in any concrete realization, such as the one given in representations of SL(2, C) and sl(2, C). For a general Lie algebra the Weyl dimension formula,
applies, where is the set of positive roots, is the highest weight, and is half the sum of the positive roots. The inner product is that of the Lie algebra invariant under the action of the Weyl group on the Cartan subalgebra. The roots (really elements of ) are via this inner product identified with elements of For the formula reduces to , where the present notation must be taken into account. The highest weight is . By taking tensor products, the result follows.
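A worked instance of this count (a sketch using the standard sl(2, C) normalizations): for a single sl(2, C) factor with highest weight corresponding to spin j the product collapses to a single factor, and for the tensor product the dimensions multiply,

```latex
\dim \pi_{\mu} = \prod_{\alpha \in R^{+}} \frac{\langle \mu + \rho, \alpha \rangle}{\langle \rho, \alpha \rangle}
\;\;\stackrel{\mathfrak{sl}(2,\mathbb{C})}{=}\;\; 2j + 1,
\qquad
\dim \pi_{(m,n)} = (2m + 1)(2n + 1),
```

so, for example, the four-vector representation (1/2, 1/2) has dimension 4 and the field-strength representation (1, 0) ⊕ (0, 1) has dimension 3 + 3 = 6.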
Faithfulness
If a representation of a Lie group is not faithful, then is a nontrivial normal subgroup. There are three relevant cases.
is non-discrete and abelian.
is non-discrete and non-abelian.
is discrete. In this case , where is the center of .
In the case of , the first case is excluded since is semi-simple. The second case (and the first case) is excluded because is simple. For the third case, is isomorphic to the quotient But is the center of It follows that the center of is trivial, and this excludes the third case. The conclusion is that every representation and every projective representation for finite-dimensional vector spaces are faithful.
By using the fundamental Lie correspondence, the statements and the reasoning above translate directly to Lie algebras with (abelian) nontrivial non-discrete normal subgroups replaced by (one-dimensional) nontrivial ideals in the Lie algebra, and the center of replaced by the center of The center of any semisimple Lie algebra is trivial and is semi-simple and simple, and hence has no non-trivial ideals.
A related fact is that if the corresponding representation of is faithful, then the representation is projective. Conversely, if the representation is non-projective, then the corresponding representation is not faithful, but is .
Non-unitarity
The Lie algebra representation is not Hermitian. Accordingly, the corresponding (projective) representation of the group is never unitary. This is due to the non-compactness of the Lorentz group. In fact, a connected simple non-compact Lie group cannot have any nontrivial unitary finite-dimensional representations. There is a topological proof of this. Let , where is finite-dimensional, be a continuous unitary representation of the non-compact connected simple Lie group . Then where is the compact subgroup of consisting of unitary transformations of . The kernel of is a normal subgroup of . Since is simple, is either all of , in which case is trivial, or is trivial, in which case is faithful. In the latter case is a diffeomorphism onto its image, and is a Lie group. This would mean that is an embedded non-compact Lie subgroup of the compact group . This is impossible with the subspace topology on since all embedded Lie subgroups of a Lie group are closed If were closed, it would be compact, and then would be compact, contrary to assumption.
In the case of the Lorentz group, this can also be seen directly from the definitions. The representations of and used in the construction are Hermitian. This means that is Hermitian, but is anti-Hermitian. The non-unitarity is not a problem in quantum field theory, since the objects of concern are not required to have a Lorentz-invariant positive definite norm.
Restriction to SO(3)
The representation is, however, unitary when restricted to the rotation subgroup , but these representations are not irreducible as representations of SO(3). A Clebsch–Gordan decomposition can be applied showing that an representation has -invariant subspaces of highest weight (spin) , where each possible highest weight (spin) occurs exactly once. A weight subspace of highest weight (spin) is -dimensional. So for example, the (, ) representation has spin 1 and spin 0 subspaces of dimension 3 and 1 respectively.
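Written out, with (m, n) denoting the irreducible Lorentz representation labelled by its pair of highest weights and D_j the (2j + 1)-dimensional spin-j representation of the rotation algebra (labels chosen here), the decomposition reads
(m, n)\big|_{\mathfrak{so}(3)} \;\cong\; \bigoplus_{j = |m - n|}^{m + n} D_j ,
each j occurring exactly once; the example in the text corresponds to (1/2, 1/2) ≅ D_1 ⊕ D_0, of dimensions 3 + 1 = 4.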
Since the angular momentum operator is given by , the highest spin in quantum mechanics of the rotation sub-representation will be and the "usual" rules of addition of angular momenta and the formalism of 3-j symbols, 6-j symbols, etc. applies.
Spinors
It is the -invariant subspaces of the irreducible representations that determine whether a representation has spin. From the above paragraph, it is seen that the representation has spin if is half-integer. The simplest are and , the Weyl-spinors of dimension . Then, for example, and are spin representations of dimensions and respectively. According to the above paragraph, there are subspaces with spin both and in the last two cases, so these representations likely cannot represent a single physical particle, which must be well-behaved under . It cannot be ruled out in general, however, that representations with multiple subrepresentations with different spin can represent physical particles with well-defined spin. It may be that there is a suitable relativistic wave equation that projects out unphysical components, leaving only a single spin.
Construction of pure spin representations for any (under ) from the irreducible representations involves taking tensor products of the Dirac-representation with a non-spin representation, extraction of a suitable subspace, and finally imposing differential constraints.
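As one illustration of this procedure (the particular field is chosen here as an example and is not taken from the preceding text), the spin-3/2 Rarita–Schwinger field is built from
\psi_\mu \;\in\; \bigl[(\tfrac{1}{2}, 0) \oplus (0, \tfrac{1}{2})\bigr] \otimes (\tfrac{1}{2}, \tfrac{1}{2}),
a Dirac spinor carrying an extra vector index; the differential constraints \gamma^\mu \psi_\mu = 0 and \partial^\mu \psi_\mu = 0 then project out the unwanted spin-1/2 components, leaving pure spin 3/2.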
Dual representations
The following theorems are applied to examine whether the dual representation of an irreducible representation is isomorphic to the original representation:
The set of weights of the dual representation of an irreducible representation of a semisimple Lie algebra is, including multiplicities, the negative of the set of weights for the original representation.
Two irreducible representations are isomorphic if and only if they have the same highest weight.
For each semisimple Lie algebra there exists a unique element of the Weyl group such that if is a dominant integral weight, then is again a dominant integral weight.
If is an irreducible representation with highest weight , then has highest weight .
Here, the elements of the Weyl group are considered as orthogonal transformations, acting by matrix multiplication, on the real vector space of roots. If is an element of the Weyl group of a semisimple Lie algebra, then . In the case of the Weyl group is . It follows that each is isomorphic to its dual The root system of is shown in the figure to the right. The Weyl group is generated by where is reflection in the plane orthogonal to as ranges over all roots. Inspection shows that so . Using the fact that if are Lie algebra representations and , then , the conclusion for is
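In the (m, n) labelling used above, the conclusion is
(m, n)^{*} \;\cong\; (m, n),
i.e. every finite-dimensional irreducible representation of the Lorentz Lie algebra is isomorphic to its dual.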
Complex conjugate representations
If is a representation of a Lie algebra, then is a representation, where the bar denotes entry-wise complex conjugation in the representative matrices. This follows from the fact that complex conjugation commutes with addition and multiplication. In general, every irreducible representation of can be written uniquely as , where
with holomorphic (complex linear) and anti-holomorphic (conjugate linear). For since is holomorphic, is anti-holomorphic. Direct examination of the explicit expressions for and in equation below shows that they are holomorphic and anti-holomorphic respectively. Closer examination of the expression also allows for identification of and for as
Using the above identities (interpreted as pointwise addition of functions), for yields
where the statement for the group representations follows from . It follows that the irreducible representations have real matrix representatives if and only if . Reducible representations of the form have real matrices too.
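In the same (m, n) labelling, the standard statement is
\overline{(m, n)} \;\cong\; (n, m),
so a single irreducible representation can be realized by real matrices only when m = n, while reducible sums of the form (m, n) ⊕ (n, m) always can.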
The adjoint representation, the Clifford algebra, and the Dirac spinor representation
In general representation theory, if is a representation of a Lie algebra then there is an associated representation of on , also denoted , given by
Likewise, a representation of a group yields a representation on of , still denoted , given by
If and are the standard representations on and if the action is restricted to then the two above representations are the adjoint representation of the Lie algebra and the adjoint representation of the group respectively. The corresponding representations (some or ) always exist for any matrix Lie group, and are paramount for investigation of the representation theory in general, and for any given Lie group in particular.
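The two induced actions referred to above are usually written as follows (with π a Lie algebra representation and Π a group representation on a vector space V, and A an endomorphism of V; the letters are the customary ones):
\pi_{\mathrm{ad}}(X)\,A \;=\; [\pi(X), A] \;=\; \pi(X)A - A\,\pi(X),
\qquad
\Pi_{\mathrm{Ad}}(g)\,A \;=\; \Pi(g)\, A\, \Pi(g)^{-1}.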
Applying this to the Lorentz group, if is a projective representation, then direct calculation using shows that the induced representation on is a proper representation, i.e. a representation without phase factors.
In quantum mechanics this means that if or is a representation acting on some Hilbert space , then the corresponding induced representation acts on the set of linear operators on . As an example, the induced representation of the projective spin representation on is the non-projective 4-vector (, ) representation.
For simplicity, consider only the "discrete part" of , that is, given a basis for , the set of constant matrices of various dimension, including possibly infinite dimensions. The induced 4-vector representation of above on this simplified has an invariant 4-dimensional subspace that is spanned by the four gamma matrices. (The metric convention is different in the linked article.) In a corresponding way, the complete Clifford algebra of spacetime, whose complexification is generated by the gamma matrices, decomposes as a direct sum of representation spaces of a scalar irreducible representation (irrep), the , a pseudoscalar irrep, also the , but with parity inversion eigenvalue , see the next section below, the already mentioned vector irrep, , a pseudovector irrep, with parity inversion eigenvalue +1 (not −1), and a tensor irrep, . The dimensions add up to . In other words,
where, as is customary, a representation is confused with its representation space.
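The dimension count implicit in this decomposition works out as
1 + 1 + 4 + 4 + 6 \;=\; 16 \;=\; \dim_{\mathbb{C}} \mathrm{M}_4(\mathbb{C}),
the scalar, pseudoscalar, vector, pseudovector and antisymmetric-tensor pieces together filling out the full complexified Clifford algebra of spacetime, i.e. the algebra of complex 4 × 4 matrices.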
The spin representation
The six-dimensional representation space of the tensor -representation inside has two roles. The
where are the gamma matrices, the sigmas, only of which are non-zero due to antisymmetry of the bracket, span the tensor representation space. Moreover, they have the commutation relations of the Lorentz Lie algebra,
and hence constitute a representation (in addition to spanning a representation space) sitting inside the spin representation. For details, see bispinor and Dirac algebra.
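A commonly used convention for these matrices (one of several that differ by signs and overall factors) is
\sigma^{\mu\nu} \;=\; \frac{i}{4}\,[\gamma^\mu, \gamma^\nu],
antisymmetric in μ and ν, so that exactly 6 of the 16 components are independent, matching the dimension of the tensor representation space.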
The conclusion is that every element of the complexified in (i.e. every complex matrix) has well-defined Lorentz transformation properties. In addition, it has a spin-representation of the Lorentz Lie algebra, which upon exponentiation becomes a spin representation of the group, acting on , making it a space of bispinors.
Reducible representations
There is a multitude of other representations that can be deduced from the irreducible ones, such as those obtained by taking direct sums, tensor products, and quotients of the irreducible representations. Other methods of obtaining representations include the restriction of a representation of a larger group containing the Lorentz group, e.g. and the Poincaré group. These representations are in general not irreducible.
The Lorentz group and its Lie algebra have the complete reducibility property. This means that every representation reduces to a direct sum of irreducible representations. The reducible representations will therefore not be discussed.
Space inversion and time reversal
The (possibly projective) representation is irreducible as a representation , the identity component of the Lorentz group, in physics terminology the proper orthochronous Lorentz group. If it can be extended to a representation of all of , the full Lorentz group, including space parity inversion and time reversal. The representations can be extended likewise.
Space parity inversion
For space parity inversion, the adjoint action of on is considered, where is the standard representative of space parity inversion, , given by
It is these properties of and under that motivate the terms vector for and pseudovector or axial vector for . In a similar way, if is any representation of and is its associated group representation, then acts on the representation of by the adjoint action, for . If is to be included in , then consistency with requires that
holds, where and are defined as in the first section. This can hold only if and have the same dimensions, i.e. only if . When then can be extended to an irreducible representation of , the orthochronous Lorentz group. The parity reversal representative does not come automatically with the general construction of the representations. It must be specified separately. The matrix (or a multiple of modulus −1 times it) may be used in the representation.
If parity is included with a minus sign (the matrix ) in the representation, it is called a pseudoscalar representation.
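For concreteness, with the usual letters P for the parity representative and J_i, K_i for the rotation and boost generators, the standard choice is P = diag(1, −1, −1, −1), whose adjoint action is
\mathrm{Ad}_P(J_i) \;=\; P J_i P^{-1} \;=\; J_i,
\qquad
\mathrm{Ad}_P(K_i) \;=\; P K_i P^{-1} \;=\; -K_i,
so the boost generators flip sign under parity while the rotation generators do not.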
Time reversal
Time reversal , acts similarly on by
By explicitly including a representative for , as well as one for , a representation of the full Lorentz group is obtained. A subtle problem appears however in application to physics, in particular quantum mechanics. When considering the full Poincaré group, four more generators, the , in addition to the and generate the group. These are interpreted as generators of translations. The time-component is the Hamiltonian . The operator satisfies the relation
in analogy to the relations above with replaced by the full Poincaré algebra. By just cancelling the 's, the result would imply that for every state with positive energy in a Hilbert space of quantum states with time-reversal invariance, there would be a state with negative energy . Such states do not exist. The operator is therefore chosen antilinear and antiunitary, so that it anticommutes with , resulting in , and its action on Hilbert space likewise becomes antilinear and antiunitary. It may be expressed as the composition of complex conjugation with multiplication by a unitary matrix. This is mathematically sound, see Wigner's theorem, but with very strict requirements on terminology, is not a representation.
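Schematically, with Θ the operator implementing time reversal and H the Hamiltonian (the letters are the conventional ones), the relation in question is Θ iH Θ⁻¹ = −iH. If Θ were linear, the factors of i would cancel and
\Theta H \Theta^{-1} = -H,
forcing negative-energy states; if Θ is antilinear, then Θ i = −i Θ, and the same relation instead gives Θ H Θ⁻¹ = +H, leaving the energy spectrum untouched.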
When constructing theories such as QED, which is invariant under space parity and time reversal, Dirac spinors may be used, while theories that are not, such as the electroweak theory, must be formulated in terms of Weyl spinors. The Dirac representation, , is usually taken to include both space parity inversion and time reversal. Without space parity inversion, it is not an irreducible representation.
The third discrete symmetry entering in the CPT theorem along with and , charge conjugation symmetry , has nothing directly to do with Lorentz invariance.
Action on function spaces
If is a vector space of functions of a finite number of variables , then the action on a scalar function given by
produces another function . Here is an -dimensional representation, and is a possibly infinite-dimensional representation. A special case of this construction is when is a space of functions defined on the linear group itself, viewed as a -dimensional manifold embedded in (with the dimension of the matrices). This is the setting in which the Peter–Weyl theorem and the Borel–Weil theorem are formulated. The former demonstrates the existence of a Fourier decomposition of functions on a compact group into characters of finite-dimensional representations. The latter theorem, providing more explicit representations, makes use of the unitarian trick to yield representations of complex non-compact groups, e.g.
The following exemplifies action of the Lorentz group and the rotation subgroup on some function spaces.
Euclidean rotations
The subgroup of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space
where are the spherical harmonics. An arbitrary square integrable function on the unit sphere can be expressed as
where the are generalized Fourier coefficients.
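In the usual notation for spherical harmonics, these statements amount to
L^2(S^2) \;=\; \bigoplus_{\ell = 0}^{\infty} \operatorname{span}\{\, Y_\ell^m : -\ell \le m \le \ell \,\},
\qquad
f(\theta, \varphi) \;=\; \sum_{\ell = 0}^{\infty} \sum_{m = -\ell}^{\ell} f_{\ell m}\, Y_\ell^m(\theta, \varphi),
\qquad
f_{\ell m} \;=\; \int_{S^2} \overline{Y_\ell^m}\, f \; d\Omega,
each (2\ell + 1)-dimensional summand carrying the spin-\ell representation of the rotation group.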
The Lorentz group action restricts to that of and is expressed as
where the are obtained from the representatives of odd dimension of the generators of rotation.
The Möbius group
The identity component of the Lorentz group is isomorphic to the Möbius group . This group can be thought of as conformal mappings of either the complex plane or, via stereographic projection, the Riemann sphere. In this way, the Lorentz group itself can be thought of as acting conformally on the complex plane or on the Riemann sphere.
In the plane, a Möbius transformation characterized by the complex numbers acts on the plane according to
and can be represented by complex matrices
since multiplication by a nonzero complex scalar does not change . These are elements of and are unique up to a sign (since give the same ), hence
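Written out, with α, β, γ, δ denoting the four complex parameters (letters chosen here), the standard form is
f(z) \;=\; \frac{\alpha z + \beta}{\gamma z + \delta}, \qquad \alpha\delta - \beta\gamma \neq 0,
and after rescaling so that αδ − βγ = 1 the associated matrix \bigl(\begin{smallmatrix} \alpha & \beta \\ \gamma & \delta \end{smallmatrix}\bigr) lies in SL(2, C) and is determined up to sign, giving the identification of the Möbius group with SL(2, C)/{±I} = PSL(2, C).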
The Riemann P-functions
The Riemann P-functions, solutions of Riemann's differential equation, are an example of a set of functions that transform among themselves under the action of the Lorentz group. The Riemann P-functions are expressed as
where the are complex constants. The P-function on the right hand side can be expressed using standard hypergeometric functions. The connection is
The set of constants in the upper row on the left hand side consists of the regular singular points of the Gauss hypergeometric equation. Its exponents, i.e. solutions of the indicial equation, for expansion around the singular point are and , corresponding to the two linearly independent solutions, and for expansion around the singular point they are and . Similarly, the exponents for are and for the two solutions.
One has thus
where the condition (sometimes called Riemann's identity)
on the exponents of the solutions of Riemann's differential equation has been used to define .
The first set of constants on the left hand side in , denotes the regular singular points of Riemann's differential equation. The second set, , are the corresponding exponents at for one of the two linearly independent solutions, and, accordingly, are exponents at for the second solution.
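With the three regular singular points written a, b, c and their exponent pairs (α, α′), (β, β′), (γ, γ′), as in the labelling just described, the condition called Riemann's identity above is
\alpha + \alpha' + \beta + \beta' + \gamma + \gamma' \;=\; 1.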
Define an action of the Lorentz group on the set of all Riemann P-functions by first setting
where are the entries in
for a Lorentz transformation.
Define
where is a Riemann P-function. The resulting function is again a Riemann P-function. The effect of the Möbius transformation of the argument is that of shifting the poles to new locations, hence changing the critical points, but there is no change in the exponents of the differential equation the new function satisfies. The new function is expressed as
where
Infinite-dimensional unitary representations
History
The Lorentz group and its double cover also have infinite-dimensional unitary representations, studied independently by , and at the instigation of Paul Dirac. This trail of development began with where he devised matrices and necessary for description of higher spin (compare Dirac matrices), elaborated upon by , see also , and proposed precursors of the Bargmann–Wigner equations. In he proposed a concrete infinite-dimensional representation space whose elements were called expansors as a generalization of tensors. These ideas were incorporated by Harish-Chandra and expanded with expinors as an infinite-dimensional generalization of spinors in his 1947 paper.
The Plancherel formula for these groups was first obtained by Gelfand and Naimark through involved calculations. The treatment was subsequently considerably simplified by and , based on an analogue for of the integration formula of Hermann Weyl for compact Lie groups. Elementary accounts of this approach can be found in and .
The theory of spherical functions for the Lorentz group, required for harmonic analysis on the hyperboloid model of 3-dimensional hyperbolic space sitting in Minkowski space, is considerably easier than the general theory. It only involves representations from the spherical principal series and can be treated directly, because in radial coordinates the Laplacian on the hyperboloid is equivalent to the Laplacian on . This theory is discussed in , , and the posthumous text of .
Principal series for SL(2, C)
The principal series, or unitary principal series, are the unitary representations induced from the one-dimensional representations of the lower triangular subgroup of Since the one-dimensional representations of correspond to the representations of the diagonal matrices, with non-zero complex entries and , they thus have the form
for an integer, real and with . The representations are irreducible; the only repetitions, i.e. isomorphisms of representations, occur when is replaced by . By definition the representations are realized on sections of line bundles on which is isomorphic to the Riemann sphere. When , these representations constitute the so-called spherical principal series.
The restriction of a principal series to the maximal compact subgroup of can also be realized as an induced representation of using the identification , where is the maximal torus in consisting of diagonal matrices with . It is the representation induced from the 1-dimensional representation , and is independent of . By Frobenius reciprocity, on they decompose as a direct sum of the irreducible representations of with dimensions with a non-negative integer.
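Concretely, writing D_j for the (2j + 1)-dimensional irreducible representation of SU(2) and k for the integer parameter of the series (notation introduced here), the restriction contains each representation of dimension |k| + 1 + 2m, m = 0, 1, 2, …, exactly once:
\pi_{k,\nu}\big|_{\mathrm{SU}(2)} \;\cong\; \bigoplus_{m = 0}^{\infty} D_{\,|k|/2 + m}.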
Using the identification between the Riemann sphere minus a point and the principal series can be defined directly on by the formula
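One common way of writing this formula (normalizations and conventions for composing the action differ between sources) is, for g = \bigl(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\bigr) in SL(2, C) and f square integrable on C,
\bigl(\pi_{k,\nu}(g)^{-1} f\bigr)(z) \;=\; |cz + d|^{-2 - i\nu} \left( \frac{cz + d}{|cz + d|} \right)^{-k} f\!\left( \frac{a z + b}{c z + d} \right);
the modulus of the prefactor squares to the Jacobian |cz + d|^{-4} of the Möbius change of variable, which is what makes the representation unitary on L²(C).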
Irreducibility can be checked in a variety of ways:
The representation is already irreducible on . This can be seen directly, but is also a special case of general results on irreducibility of induced representations due to François Bruhat and George Mackey, relying on the Bruhat decomposition where is the Weyl group element .
The action of the Lie algebra of on the algebraic direct sum of the irreducible subspaces of can be computed explicitly, and it can be verified directly that the lowest-dimensional subspace generates this direct sum as a -module.
Complementary series for
For , the complementary series is defined on for the inner product
with the action given by
The representations in the complementary series are irreducible and pairwise non-isomorphic. As a representation of , each is isomorphic to the Hilbert space direct sum of all the odd dimensional irreducible representations of . Irreducibility can be proved by analyzing the action of on the algebraic sum of these subspaces or directly without using the Lie algebra.
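A schematic form of this construction (the exact exponents depend on how the parameter, written t here, is normalized) is: for 0 < t < 2,
\langle f, g \rangle_t \;=\; \iint_{\mathbb{C} \times \mathbb{C}} \frac{f(z)\, \overline{g(w)}}{|z - w|^{2 - t}}\; dz\, dw,
\qquad
\bigl(\pi_t(g)^{-1} f\bigr)(z) \;=\; |cz + d|^{-2 - t}\, f\!\left( \frac{a z + b}{c z + d} \right),
the point being that the kernel |z − w|^{t−2} is invariant under this action and positive definite precisely in this parameter range.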
Plancherel theorem for SL(2, C)
The only irreducible unitary representations of are the principal series, the complementary series and the trivial representation.
Since acts as on the principal series and trivially on the remainder, these will give all the irreducible unitary representations of the Lorentz group, provided is taken to be even.
To decompose the left regular representation of on only the principal series are required. This immediately yields the decomposition into the subrepresentations: the left regular representation of the Lorentz group, and the regular representation on 3-dimensional hyperbolic space. (The former only involves principal series representations with k even and the latter only those with .)
The left and right regular representation and are defined on by
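With the standard conventions (λ and ρ are the customary letters, and integration is with respect to Haar measure), these are
(\lambda(g) f)(x) \;=\; f(g^{-1} x), \qquad (\rho(g) f)(x) \;=\; f(x g),
both unitary on the L² space of the group.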
Now if is an element of , the operator defined by
is Hilbert–Schmidt. Define a Hilbert space by
where
and denotes the Hilbert space of Hilbert–Schmidt operators on . Then the map defined on by
extends to a unitary of onto .
The map satisfies the intertwining property
If are in then by unitarity
Thus if denotes the convolution of and and then
The last two displayed formulas are usually referred to as the Plancherel formula and the Fourier inversion formula respectively.
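Up to an overall constant that depends on the normalization of the Haar measure (and on whether the redundant pairs (k, ν) and (−k, −ν) are both counted), the Plancherel formula has the schematic shape
\int_G |f(g)|^2\, dg \;=\; \frac{1}{C} \sum_{k} \int_{-\infty}^{\infty} \bigl\| \pi_{k,\nu}(f) \bigr\|_{\mathrm{HS}}^{2}\, (k^2 + \nu^2)\, d\nu,
with Plancherel density proportional to k² + ν² on the principal series and no contribution from the complementary series or the trivial representation.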
The Plancherel formula extends to all . By a theorem of Jacques Dixmier and Paul Malliavin, every smooth compactly supported function on is a finite sum of convolutions of similar functions, so the inversion formula holds for such . It can be extended to much wider classes of functions satisfying mild differentiability conditions.
Classification of representations of
The strategy followed in the classification of the irreducible infinite-dimensional representations is, in analogy to the finite-dimensional case, to assume they exist, and to investigate their properties. Thus first assume that an irreducible strongly continuous infinite-dimensional representation on a Hilbert space of is at hand. Since is a subgroup, is a representation of it as well. Each irreducible subrepresentation of is finite-dimensional, and the representation is reducible into a direct sum of irreducible finite-dimensional unitary representations of if is unitary.
The steps are the following:
Choose a suitable basis of common eigenvectors of and .
Compute matrix elements of and .
Enforce Lie algebra commutation relations.
Require unitarity together with orthonormality of the basis.
Step 1
One suitable choice of basis and labeling is given by
If this were a finite-dimensional representation, then would correspond to the lowest occurring eigenvalue of in the representation, equal to , and would correspond to the highest occurring eigenvalue, equal to . In the infinite-dimensional case, retains this meaning, but does not. For simplicity, it is assumed that a given occurs at most once in a given representation (this is the case for finite-dimensional representations), and it can be shown that the assumption can be avoided (with a slightly more complicated calculation) with the same results.
Step 2
The next step is to compute the matrix elements of the operators and forming the basis of the Lie algebra of . The matrix elements of and (the complexified Lie algebra is understood) are known from the representation theory of the rotation group, and are given by
where the labels and have been dropped since they are the same for all basis vectors in the representation.
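In the standard angular-momentum notation, with basis vectors written |j, m⟩ and J_± = J_1 ± iJ_2, these matrix elements come from
J_3\, |j, m\rangle \;=\; m\, |j, m\rangle,
\qquad
J_\pm\, |j, m\rangle \;=\; \sqrt{(j \mp m)(j \pm m + 1)}\; |j, m \pm 1\rangle.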
Due to the commutation relations the triple is a vector operator and the Wigner–Eckart theorem applies for computation of matrix elements between the states represented by the chosen basis. The matrix elements of
where the superscript signifies that the defined quantities are the components of a spherical tensor operator of rank (which explains the factor as well) and the subscripts are referred to as in formulas below, are given by
Here the first factors on the right hand sides are Clebsch–Gordan coefficients for coupling with to get . The second factors are the reduced matrix elements. They do not depend on or , but depend on and, of course, . For a complete list of non-vanishing equations, see .
Step 3
The next step is to demand that the Lie algebra relations hold, i.e. that
This results in a set of equations for which the solutions are
where
Step 4
The imposition of the requirement of unitarity of the corresponding representation of the group restricts the possible values for the arbitrary complex numbers and . Unitarity of the group representation translates to the requirement of the Lie algebra representatives being Hermitian, meaning
This translates to
leading to
where is the angle of in polar form. For follows and is chosen by convention. There are two possible cases:
In this case , real. This is the principal series. Its elements are denoted
It follows: Since , is real and positive for , leading to . This is the complementary series. Its elements are denoted
This shows that the representations of above are all infinite-dimensional irreducible unitary representations.
Explicit formulas
Conventions and Lie algebra bases
The metric of choice is given by , and the physics convention for Lie algebras and the exponential mapping is used. These choices are arbitrary, but once made, they are fixed. One possible choice of basis for the Lie algebra is, in the 4-vector representation, given by:
The commutation relations of the Lie algebra are:
In three-dimensional notation, these are
The choice of basis above satisfies the relations, but other choices are possible. The multiple use of the symbol above and in the sequel should be observed.
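In the physics convention with Hermitian rotation generators J_i and boost generators K_i (the customary letters), the three-dimensional form of the commutation relations is
[J_i, J_j] \;=\; i\,\varepsilon_{ijk} J_k,
\qquad
[J_i, K_j] \;=\; i\,\varepsilon_{ijk} K_k,
\qquad
[K_i, K_j] \;=\; -i\,\varepsilon_{ijk} J_k,
the minus sign in the last relation distinguishing the Lorentz algebra from the Lie algebra of four-dimensional Euclidean rotations.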
For example, a typical boost and a typical rotation exponentiate as,
symmetric and orthogonal, respectively.
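In one common convention for the generators (sign conventions vary; here ξ denotes the rapidity and θ the rotation angle), a boost along the x-axis and a rotation about the z-axis take the forms
e^{-i\xi K_1} = \begin{pmatrix} \cosh\xi & \sinh\xi & 0 & 0 \\ \sinh\xi & \cosh\xi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
\qquad
e^{-i\theta J_3} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
the first symmetric and the second orthogonal, as stated.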
Weyl spinors and bispinors
By taking, in turn, and and by setting
in the general expression , and by using the trivial relations and , it follows
These are the left-handed and right-handed Weyl spinor representations. They act by matrix multiplication on 2-dimensional complex vector spaces (with a choice of basis) and , whose elements and are called left- and right-handed Weyl spinors respectively. Given
their direct sum as representations is formed,
This is, up to a similarity transformation, the Dirac spinor representation of . It acts on the 4-component elements of , called bispinors, by matrix multiplication. The representation may be obtained in a more general and basis-independent way using Clifford algebras. These expressions for bispinors and Weyl spinors all extend by linearity of Lie algebras and representations to all of . Expressions for the group representations are obtained by exponentiation.
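With one common convention, in which the two commuting copies of su(2) are spanned by A = (J + iK)/2 and B = (J − iK)/2 (the letters A and B are chosen here, and which chirality is called left-handed is itself a convention), the two Weyl representations come out as
(\tfrac{1}{2}, 0): \quad J_i \mapsto \tfrac{1}{2}\sigma_i, \quad K_i \mapsto -\tfrac{i}{2}\sigma_i,
\qquad\qquad
(0, \tfrac{1}{2}): \quad J_i \mapsto \tfrac{1}{2}\sigma_i, \quad K_i \mapsto +\tfrac{i}{2}\sigma_i,
with σ_i the Pauli matrices; the bispinor (Dirac) representation is then the block direct sum J_i ↦ diag(σ_i/2, σ_i/2), K_i ↦ diag(−iσ_i/2, +iσ_i/2), which upon exponentiation gives the group representation.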
Open problems
The classification and characterization of the representation theory of the Lorentz group was completed in 1947. But in association with the Bargmann–Wigner programme, there are yet unresolved purely mathematical problems, linked to the infinite-dimensional unitary representations.
The irreducible infinite-dimensional unitary representations may have indirect relevance to physical reality in speculative modern theories since the (generalized) Lorentz group appears as the little group of the Poincaré group of spacelike vectors in higher spacetime dimension. The corresponding infinite-dimensional unitary representations of the (generalized) Poincaré group are the so-called tachyonic representations. Tachyons appear in the spectrum of bosonic strings and are associated with instability of the vacuum. Even though tachyons may not be realized in nature, these representations must be mathematically understood in order to understand string theory. This is so since tachyon states turn out to appear in superstring theories too in attempts to create realistic models.
One open problem is the completion of the Bargmann–Wigner programme for the isometry group of the de Sitter spacetime . Ideally, the physical components of wave functions would be realized on the hyperboloid of radius embedded in , and the corresponding covariant wave equations of the infinite-dimensional unitary representation would be known.
See also
Bargmann–Wigner equations
Dirac algebra
Gamma matrices
Lorentz group
Möbius transformation
Poincaré group
Representation theory of the Poincaré group
Symmetry in quantum mechanics
Wigner's classification
Remarks
Notes
Freely available online references
Expanded version of the lectures presented at the second Modave summer school in mathematical physics (Belgium, August 2006).
Group elements of SU(2) are expressed in closed form as finite polynomials of the Lie algebra generators, for all definite spin representations of the rotation group.
References
(the representation theory of SO(2,1) and SL(2, R); the second part on SO(3; 1) and SL(2, C), described in the introduction, was never published).
(free access)
(a general introduction for physicists)
(elementary treatment for SL(2,C))
(a detailed account for physicists)
(James K. Whittemore Lectures in Mathematics given at Yale University, 1967)
, Chapter 9, SL(2, C) and more general Lorentz groups
.
Representation theory of Lie groups
Special relativity
Quantum mechanics | Representation theory of the Lorentz group | [
"Physics"
] | 11,399 | [
"Special relativity",
"Theoretical physics",
"Quantum mechanics",
"Theory of relativity"
] |
2,249,951 | https://en.wikipedia.org/wiki/Deadly%20Friend | Deadly Friend is a 1986 American science fiction horror film directed by Wes Craven, and starring Matthew Laborteaux, Kristy Swanson, Michael Sharrett, Anne Twomey, Richard Marcus, and Anne Ramsey. Its plot follows a teenage computer prodigy who implants a robot's processor into the brain of his teenage neighbor after she is pronounced brain dead; the experiment proves successful, but she swiftly begins a killing spree in their neighborhood. It is based on the 1985 novel Friend by Diana Henstell, which was adapted for the screen by Bruce Joel Rubin.
Originally, the film was a sci-fi thriller without any graphic scenes, with a bigger focus on plot and character development and a dark love story centering on the two main characters, which were not typical aspects of Craven's previous films. After Craven's original cut was shown to a test audience by Warner Bros., the audience criticized the lack of graphic, bloody violence and gore that Craven's other films included. Warner Bros. executive vice president Mark Canton and the film's producers then demanded script re-writes and re-shoots, which included filming gorier death scenes and nightmare sequences, similar to the ones from Craven's previous film, A Nightmare on Elm Street. Due to studio imposed re-shoots and re-editing, the film was drastically altered in post-production, losing much of the original plot and more scenes between characters, while other scenes, including more grisly deaths and a new ending, were added. According to the screenwriter, this version was criticized by the studio for containing too much graphic, bloody violence and was cut back for release.
In April 2014, an online petition for the release of the original cut was made.
Source material
Friend is a 1985 science fiction horror novel by Diana Henstell. It tells of a 13-year-old boy, Paul "Piggy" Conway, who moves to a small town after his parents get divorced. There he befriends a girl named Samantha, but their friendship is cut short when her abusive father throws her down the stairs, mortally injuring her. Piggy tries to save her by implanting a microchip in her, but the reanimated Samantha is much more dangerous than she appears.
Plot
Teenage prodigy Paul Conway and his mother Jeannie move into their new house in the town of Welling. He soon becomes friends with paperboy Tom Toomey. Living next door to Paul is Samantha Pringle and her abusive, alcoholic father Harry. Paul built a robot named BB, which occasionally displays autonomous behavior, such as being protective of Paul. Paul, Jeannie, and BB meet Paul's professor, Dr. Johanson, at Polytech, a prestigious university where Paul has a scholarship.
One day, Tom, Paul and BB stop at the house of reclusive harridan Elvira Parker, who threatens them with a shotgun. The trio then encounters a motorcycle gang led by bully Carl. When Carl intimidates Paul, BB assaults him. Another day, while playing basketball, BB accidentally tosses the ball onto Elvira's porch. She takes the ball away from them and refuses to give it back. On Halloween night, Tom decides to pull a prank on Elvira with the help of Paul, Samantha and BB. BB unlocks her gate and Samantha rings her doorbell. When alarms go off, they hide in a shrubbery nearby. When Elvira sees BB standing near her porch, she destroys him with her shotgun, devastating Paul.
On Thanksgiving, Samantha has dinner with Paul and his mother, and Samantha and Paul share their first kiss. Samantha returns home late at night, outraging her father, who pushes her down the stairs. At the hospital, Paul learns that Samantha is brain dead and will be on life support for 24 hours before the plug is pulled. As BB's microchip can interface with the human brain, Paul decides to use it to revive Samantha with Tom's help. The boys enter the hospital using a key taken from Tom's father, who works there as a security guard. After Tom deactivates the power from the basement, Paul takes Samantha to his lab. He inserts the microchip into Samantha's brain and takes her back to his house, hiding her in the shed. After he activates the microchip, Samantha "wakes up", but her mannerisms are completely mechanical, suggesting BB is in control of her body.
In the middle of the night, Paul finds Samantha staring at the window, looking at her father, and he deactivates her. The next morning, Paul finds Samantha gone. When Harry finds the cellar door open and goes downstairs, Samantha attacks him, breaks his wrist and snaps his neck. Paul finds Samantha, and Harry's corpse, in the cellar. Horrified, he hides the body, takes Samantha back to his home and locks her in his bedroom. At night, Samantha breaks into Elvira's house and corners her by throwing her to the wall of her living room. As Elvira screams in horror, Samantha kills her by smashing her head with the basketball stolen from Tom.
When Tom learns of Samantha's rampage, he gets into a fight with Paul and threatens to call the police. Still being protective of Paul, Samantha jumps out the attic window and attacks Tom, with Paul and Jeannie intervening. Trying to get her under control, Paul slaps Samantha, resulting in her strangling him. Samantha, quickly coming to her senses, lets him go and runs away. As Paul goes after her, he again encounters Carl, who gets into a fight with him. Samantha goes back for Paul, grabs Carl and kills him by throwing him at an incoming police car. She runs back to Paul's shed, where Paul comforts her and realizes she's regaining some of her humanity. However, the police arrive with their guns aimed at Samantha, who yells out Paul's name in her human voice. She runs towards him, trying to protect him, but Sergeant Volchek (Lee Paul), thinking she's trying to attack him, shoots her. She says Paul's name one more time before dying in his arms.
Later at the morgue, Paul tries to steal Samantha's body once more. Suddenly, Samantha grabs Paul's neck and her face rips apart, revealing a terrifying variant of BB's head. Her skin strips away, revealing half-robotic bones underneath. With a robotic voice, Samantha tells him to come with her. When a horrified Paul screams, she snaps his neck, killing him.
Cast
Production
Development
Wes Craven and Bruce Joel Rubin's original intent for the film was for it to be a science fiction thriller with the primary focus being on the dark love story between Paul and Samantha.
Casting
Kristy Swanson, 16 years old at the time of filming, was cast as Samantha. She admitted that Craven was unsure of her capability to play the role, but ultimately cast her, and was "always encouraging... always prodding me in subtle ways." She elaborated in a 1996 interview: "I committed myself completely to it. I just went full out with it. I wanted to do the best job I could possibly do. I was having the time of my life. As for the movie itself, some people love it, some people hate it. It is what it is. I really enjoyed making Deadly Friend. At that point in my life, it was spectacular."
Filming
Professional mime artist Richmond Shepard taught Swanson all of the robotic movements that her character has in the film. In an interview, Swanson said this about learning to walk in that specific way: "Getting those moves down was difficult at first. You don't think walking that way is hard until you actually try doing it. But Richmond was a good teacher and I picked up on most of the moves pretty quickly."
During filming of one of the studio-demanded scenes where Sam has a nightmare where her father attacks her in her room and she stabs him with a glass vase, there were difficulties on set with the special effects. Swanson mentioned, "The scene was set up so that I would hit a protective device inside his shirt. But during one take, I missed the device and glass actually shattered on his chest. I freaked out because I thought I had really stuck this glass into his chest. Everybody else just laughed." In another incident, the great amount of fake blood turned out to be a problem. "We had been working on that scene a long time. Finally, it was time for blood to spray out, but something leaked and we had blood spraying all over the set and myself. I was so tired that I started yelling, "More blood!" and the effects people really pumped it out."
In an interview with Maxim magazine in May 2000, Swanson said that the fake head of Elvira that was decimated by the basketball was stuffed with actual cow brains that the production crew picked up from a butcher shop. In a 2006 interview for The Hills Have Eyes, Craven mentioned problems that the basketball scene had with the MPAA: "On Deadly Friend, we had a scene where a nasty old lady gets her head knocked off with a basketball. The actual scene as it was originally cut was fabulous. She was running around the room like a chicken with its head cut off for ten, fifteen seconds. It was bizarre and wonderful and they cut the shit out of it. So I compiled what we called our "Decapitation Compilation," all the films that I knew of that had decapitations in them that had an R, and sent it to them. They immediately sent it back saying they just base it on what they feel in the room at the time. And we had like eight or ten films in there, like The Omen where the guy gets his head cut off by the sheet of glass, and it didn't matter to them."
Craven had a hand in selecting Bruce Joel Rubin to write the screenplay for Deadly Friend. Rubin agreed with Craven that the film should have a gentler tone than his other features. Craven couldn't write the script himself because he was directing episodes of The Twilight Zone at the time. Craven and producer Robert M. Sherman hired Rubin as the screenwriter because they read his script for Jacob's Ladder, which was unproduced at the time.
For the scene chronicling the transplant of BB's microchip into Samantha's brain, Craven called on the advice of retired neurosurgeon William H. Faeth, who has a cameo in the film as a coroner in Sam's hospital room. Craven said that he was very helpful on all the anatomical details.
The robot, BB, cost over $20,000 to build. Craven used a company called Robotics 21. His eyes were constructed from two 1950s camera lenses, a garage remote control unit, and a radio antenna taken from a Corvette. BB could actually lift 7,500 pounds in weight. The voice of BB was provided by Charles Fleischer, who appeared in Wes Craven's previous film A Nightmare on Elm Street as a doctor.
Earlier in production when the film was originally going to be a PG-rated sci-fi thriller, Craven wanted to make something that was similar to John Carpenter's 1984 sci-fi film Starman. Also, according to Swanson in a 1987 interview with Fangoria writer Mark Shapiro, "Craven suggested that I take a look at the movie Starman because what he wanted to do with Deadly Friend was similar in tone to that film." John Carpenter directed Starman because he wanted to get away from his reputation as a director of violent films, just like Wes Craven wanted to make Deadly Friend with a PG rating in mind so he could prove that he could make a film that was not simply "blood and guts" horror.
Post-production
According to the book Wes Craven: The Art of Horror by John Kenneth Muir, Craven's original cut of the film was "a teenage film filled with charm, wit, and solid performances by likeable teens Swanson and Laborteaux. It was definitely a mainstream, PG film all the way, similar in tone to Real Genius or Short Circuit, but the point was made that Craven could direct something other than double-barreled horror." After principal photography was completed, Craven's original version of the film was screened to a test audience mostly consisting of his fanbase. The response from them was negative, criticizing the lack of violence and gore seen in Craven's other films. Finding that Craven had a large fanbase within the horror genre, Warner Bros.' marketing team insisted that additional scenes of gore and horror be incorporated into the finished film.
The executive vice president of Warner Bros. at the time, Mark Canton, had Rubin write six additional gore scenes into his script, each bloodier than the last. Following the negative reactions from test audiences that saw Craven's first cut of the film and wanted a much more grisly product, it was re-edited in post-production and the more graphic deaths and other re-shot scenes were included, making the final film appear tonally jumbled. Furthermore, with the additional gore introduced, the film struggled to be granted an R rating by the Motion Picture Association of America (MPAA) instead of an X due to the overt violence. According to Craven, the film was submitted a total of thirteen times before it was passed.
Editor Michael Eliot was brought in by Warner Bros. to re-edit the original cut of Deadly Friend. Eliot went on to do the same for two other Warner Bros. films, Out for Justice and Showdown in Little Tokyo. While new scenes were added, others such as more scenes between Paul and Samantha that would have made the film more of a love story as originally intended were deleted for length and pacing reasons. Since re-writes, re-shoots, and post production re-editing heavily changed the original story, Craven and Rubin expressed strong anger and heartbreak at the studio and then virtually disowned the film.
Craven was no longer attracted to the story because of Samantha going on a killing spree when she is revived. He was much more interested in exploring the adults around her, all of whom seem to be monsters in human skin. In his own words: "The scares don't come from her, but from the ordinary people, who are actually much more frightening. A father who beats a child is a terrifying figure. That's the one person you're afraid of in the movie. The idea is along the lines that adults can be horrible, without being outside what society says is acceptable."
Swanson commented that she found herself and the other actors caught up in the studio's attempts to strong-arm Craven into making the film more visceral than what was originally intended. During both production and re-shoots, changes to the script were being made, title changes were being discussed, and there were many discussions about how violent and bloody the final film would be. All of these issues caused problems for the actors. Regarding the title changes, when Craven started the project, it was titled Friend, much like the Diana Henstell novel it was based on. The title was later changed to Artificial Intelligence and then to A.I. before the studio and producers finally settled on Deadly Friend.
In a 1990 interview with Fangoria journalist Daniel Schweiger, screenwriter Bruce Joel Rubin said this about the ending and why it stayed in the film: "That robot coming out of the girl's head belongs solely to Mark Canton, and you don't tell the president of Warner Bros. that his idea stinks!" Rubin also said how at the time, people were still blaming him for the ending where Samantha turns into a robot, even though Canton was the one who conceived it. He also mentioned that despite the fact that the studio destroyed the love story of the movie that he and Craven enjoyed, he still enjoyed working with Craven, confirming that he was not the one who wanted to change the film and that he should not be blamed for what happened to it. Rubin even said that production was one of the happiest experiences he ever had.
In another interview, Rubin told the story about how the $36,000 that he got paid for writing the script for Deadly Friend saved him from going nearly broke due to the four-month-long Writers Guild strike and also helped him with a bar mitzvah for his son and to buy a house. In the same interview, Rubin said that at first he did not want to write the script, but after changing his mind, he called Robert M. Sherman and got the job. He also said that working on the film was one of the most extraordinary experiences of his life: "It was a horror film with a lot of elements that are not things I wanted on my resume. And it didn't do very good business, but it was total fun. My kids were on the set every night. My five-year-old Ari was totally in love with Kristy Swanson, who was the lead. She later became Buffy the Vampire Slayer in the movie. She was really sweet to him and even took him on a date."
Release
Censorship
Due to all of the gore scenes that were added into the film—as well as Craven's contentious history with the Motion Picture Association of America (MPAA)—it was initially given an X rating. The film was trimmed and resubmitted to the MPAA thirteen times before it was granted an R-rating. Most of the cuts were made to the death scenes of Harry and Elvira.
Marketing
The theatrical trailer for the film released by Warner Bros. represented it as a straightforward horror film, omitting any reference to its science fiction elements, with BB not appearing in a single frame. The mixture of teenagers and terror as seen in the trailer implied that Deadly Friend would be like Craven's A Nightmare on Elm Street. In an interview with Fangoria, Craven said that the deadline for delivering the first cut of Deadly Friend with all of the studio-demanded sequences included, and delivering his original script for A Nightmare on Elm Street 3: Dream Warriors, which he was writing with Bruce Wagner, was virtually the same, making it very difficult for him to do both things at once.
Box office
Hoping to score a financial success with the Halloween trade, Warner Bros. released Deadly Friend in theaters on October 10, 1986, but the film was a box office bomb, grossing $8,988,731 in the United States against an $11 million budget.
Critical response
AllMovie gave the film a generally negative review, writing, "It's an intriguing combination of elements, but the end result is a schizoid mess", calling Craven's direction "awkward" and opining that it "lacks the intense, sustained atmosphere of his previous horror hits." On Rotten Tomatoes the film has a 20% approval rating based on 35 reviews, with an average rating of 3.7/10, with the consensus reading, "An uninspired departure for Wes Craven, mired by an uneven premise; beware, this is one Deadly Friend." On Metacritic it has a score of 44% based on reviews from 11 critics, indicating "mixed or average reviews".
Home media
In 2007, Warner Bros. released a DVD edition featuring all of the death scenes in their fully uncut form. In 2021, numerous Twitter users called for Craven's original cut of the film to be released, sharing the hashtag #ReleaseTheCravenCut for both Deadly Friend and Cursed. In October 2021, Scream Factory released the film for the first time on Blu-ray. The Blu-ray features the same cut of the film as issued on the previous Warner Bros. DVD. In a press announcement regarding the Blu-ray release, Scream Factory wrote: "We anticipate being asked if we found any alternate footage from the film (as seen in the original theatrical trailer) or Craven's more milder original feature-length cut. Unfortunately, we could not locate any lost footage after investigating. Sorry, we tried. As fans of the film ourselves we wanted to see that too!"
References
Sources
External links
1986 films
1986 horror films
1980s science fiction horror films
1980s teen horror films
American robot films
American science fiction horror films
American teen horror films
Films about androids
Films about child abuse
Films about computing
Films based on American horror novels
Films directed by Wes Craven
Films scored by Charles Bernstein
Films shot in Los Angeles
Films with screenplays by Bruce Joel Rubin
Mad scientist films
Techno-horror films
Warner Bros. films
1980s English-language films
1980s American films
American novels adapted into films
1986 science fiction films
English-language science fiction horror films | Deadly Friend | [
"Technology"
] | 4,238 | [
"Works about computing",
"Films about computing"
] |
2,250,414 | https://en.wikipedia.org/wiki/Railroad%20Commission%20of%20Texas | The Railroad Commission of Texas (RRC; also sometimes called the Texas Railroad Commission, TRC) is the state agency that regulates the oil and gas industry, gas utilities, pipeline safety, safety in the liquefied petroleum gas industry, and surface coal and uranium mining. Despite its name, it ceased regulating railroads in 2005, when the last of the rail functions were transferred to the Texas Department of Transportation.
Established by the Texas Legislature in 1891, it is the state's oldest regulatory agency, and began as part of the Efficiency Movement of the Progressive Era. From the 1930s to the 1960s, it largely set world oil prices, but was displaced by OPEC (Organization of Petroleum Exporting Countries) after 1973. In 1984, the federal government took over transportation regulation for railroads, trucking, and buses, but the Railroad Commission kept its name. With an annual budget of $79 million, it now focuses entirely on oil, gas, mining, propane, and pipelines, setting allocations for production each month.
The three-member commission was initially appointed by the governor, but an amendment to the state's constitution in 1894 established the commissioners as statewide elected officials who serve overlapping six-year terms, staggered like those in the U.S. Senate. No specific seat is designated as chairman; the commissioners choose the chairman from among themselves. Normally, the commissioner who faces reelection is the chairman for the preceding two years. The current commissioners are: Jim Wright since January 4, 2021; Wayne Christian since January 9, 2017; and Christi Craddick since December 17, 2012.
Origins
Attempts to establish a railroad commission in Texas began in 1876. After five legislative failures, an amendment to the state constitution that provided for a railroad commission was submitted to voters in 1890. The amendment's ratification and the 1890 election of Governor James S. Hogg, a Democrat, permitted the legislature in 1891 to pass legislation that constitutionally created the Railroad Commission of Texas, and gave it jurisdiction over the operations of railroads, terminals, wharves, and express companies. It could set rates, issue rules on how to classify freight, require adequate railroad reports, and prohibit and punish discrimination and extortion by corporations. George Clark, running as an independent “Jeffersonian Democratic” candidate for governor in 1892, denounced the TRC as being “Wrong in principle, undemocratic, and unrepublican.” Clark opined that the TRC and similar “Commissions do no good. They do harm. Their only function is to harass. I regard it as essentially foolish and essentially vicious.” Clark lost the 1892 election to Hogg, but federal judge Andrew Phelps McCormick granted an injunction preventing the TRC from enforcing compliance and seeking to prosecute or recover penalties from railroad companies the same year; the decision was overruled by the United States Supreme Court in 1894. The governor appointed the first members; the first elections to the commission were held in 1893, with three commissioners serving six-year, overlapping terms. The TRC did not have jurisdiction over interstate rates, but Texas was so large that the in-state traffic it regulated was of dominant importance.
The agency did not have the legal authority to set rates, nor did it have the resources to spend much of its time in court battles. The carrot was far more important than the stick. Freight rates continued to decline dramatically. In 1891, a typical rate was 1.403 cents per ton mile. By 1907, the rate was 1.039 cents—a decline of 25%. However, the railroads did not have rates high enough for them to upgrade their equipment and lower costs in the face of competition from pipelines, cars, and trucks, and the Texas railway system began a slow decline.
Members of the First Railroad Commission of Texas
John H. Reagan (1818–1903), the first chairman of the TRC (1891–1903), had been the most outspoken advocate in Congress of bills to regulate railroads in the 1880s. He feared the corruption caused by railroad monopolies, and considered their control a moral challenge. As chairman of the TRC, Reagan changed his views when he became acquainted with the realities of the complex forces affecting railroad management. Reagan turned to the Efficiency Movement for ideas, and established a pattern of regulatory practice that the TRC used for decades. He believed that the agency should pursue two main goals: to protect consumers from unfair railway practices and excessive rates, and to support the state's overall economic growth. To find the optimal rates that met these goals, he focused the TRC on the collection of data, direct negotiation with railway executives, and compromises with the parties involved.
Lafayette L. Foster (1851–1901) was a commissioner of the first TRC (1891–1895) appointed by Governor Hogg. He resigned in 1895, and became the vice president and general manager of the Velasco Terminal Railway. He was succeeded as commissioner by Nathan Alexander Stedman.
William P. McLean (1836–1925) was a commissioner of the first TRC (1891–1894) appointed by Governor Hogg. He was a judge before his appointment to the commission. He was re-elected in 1893, but resigned his position in 1894 to practice law in Fort Worth. He was succeeded as commissioner by Leonidas Jefferson Storey, who later became chairman of the TRC in 1903, following Reagan's death.
Segregation
From the 1890s through the 1960s, the Texas Railroad Commission found it difficult to fully enforce Jim Crow segregation legislation. Because of the expense involved, Texas railroads often allowed wealthier blacks to mix with whites, rather than provide separate cars, dining facilities, and even depots. In addition, West Texas authorities often refused to enforce Jim Crow laws because few African Americans resided there. In the 1940s, the railroad commission's enforcement of segregation laws began collapsing further, in part because of the great number of African American soldiers that were transported during World War II. The trains were integrated in the early 1960s.
Expansion to oil
The agency's reach expanded as it took over responsibility for regulating oil pipelines (in 1917), oil and gas production (1919), natural gas delivery systems (1920), bus lines (1927), and trucking (1929). It grew from 12 employees in 1916 to 69 in 1930 and 566 in 1939. It does not have jurisdiction over investor-owned electric utility companies; that falls under the jurisdiction of the Public Utility Commission of Texas.
A crisis for the petroleum industry was created by the East Texas oil boom of the 1930s, as prices plunged to 25¢ a barrel. The traditional TRC policy of negotiating compromises failed; the governor was forced to call in the state militia to enforce order. Texas oilmen decided they preferred state to federal regulation, and wanted the TRC to give out quotas so that every producer would get higher prices and profits. Pure Oil Company opposed the first statewide oil prorationing order, which was issued by the TRC in August 1930. The order, which was intended to conserve oil resources by limiting the number of barrels drilled per day, was seen by small producers, like Pure Oil, as a conspiracy between government and major companies to drive them out of business, and ultimately foster monopoly in the oil industry.
Ernest O. Thompson (1892–1966), head of the TRC from 1932 to 1965, took charge of the agency, and indeed the oil industry, by appealing to an ideal of Texas's role in the global oil order—the civil religion of Texas oil. He cajoled, harangued, and browbeat recalcitrant producers into compliance with the TRC's prorationing orders. The New Deal allowed the TRC to set national oil policy. As late as the 1950s, the TRC controlled over 40% of United States’ crude production, and approximately half of estimated national proved reserves. It served as a model in the creation of OPEC. Gordon M. Griffin, chief engineer of the TRC during World War II, developed the formula for prorationing to keep production flowing for the military.
Because the TRC needed access to the Texas headquarters of the various oil companies, it became a long term tenant at the Milam Building.
Operations
Regulation was a practical rather than ideological affair. The TRC typically worked with the regulated industries to improve operations, share best practices, and address consumer complaints. Radical activities—like heated court battles or rate-setting to favor shippers, producers, or consumers—were the exception rather than the rule.
Within the oil and gas industry, the TRC took into account production in other states, in effect bringing total available supply (including imports, which were then small) under the principle of prorationing to market demand. Allowable oilfield production was calculated as follows: estimated market demand, minus uncontrolled additions to supply, gave the Texas total; this total was then prorated among fields and wells in a manner calculated to preserve equity among producers and to prevent any well from producing beyond its maximum efficient rate (MER). Scheduled allowables were expressed as the number of calendar days of permitted production per month at MER. In the spring of 2013, the Railroad Commission adopted new water-recycling rules for hydraulic fracturing in Texas, intended to encourage operators to conserve the water used in the hydraulic fracturing of oil and gas wells.
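The arithmetic just described can be sketched in a few lines of Python; the field names, MER figures, and the simple MER-capped proportional split below are illustrative assumptions, not the commission's actual allocation formula.

# Illustrative sketch of the prorationing arithmetic described above.
# All numbers and field names are hypothetical; real TRC orders involved
# many more adjustments than this simplified allocation.

def texas_allowable(market_demand, uncontrolled_supply):
    """Statewide total: estimated market demand minus uncontrolled additions to supply."""
    return market_demand - uncontrolled_supply

def prorate(total, fields):
    """Split the statewide total among fields in proportion to their MER,
    never letting any field exceed its maximum efficient rate."""
    total_mer = sum(fields.values())
    return {name: min(mer, total * mer / total_mer) for name, mer in fields.items()}

# Hypothetical example: 2.5M bbl/day demand, 0.3M bbl/day uncontrolled supply.
fields = {"East Texas": 1.2e6, "Panhandle": 0.6e6, "Gulf Coast": 0.9e6}  # MERs in bbl/day
statewide = texas_allowable(2.5e6, 0.3e6)
print(prorate(statewide, fields))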
Recent history
As of March 2022, the commission members are Wayne Christian (chairman), Christi Craddick, and Jim Wright. All three members are Republicans. Christian was elected in 2016 as a commissioner, and was selected as chairman in 2019. Craddick was elected in 2012, and reelected in 2018. Wright was elected in 2020.
Effective October 1, 2005, as a result of House Bill 2702, the rail oversight functions of the Railroad Commission were transferred to the Texas Department of Transportation. The traditional name of the commission was not changed despite the loss of its titular regulatory duties.
Court cases involving the commission
The Shreveport Rate Case, also known as Houston E. & W. Ry. Co. v. United States, 234 U.S. 342 (1914) arose from the Railroad Commission's setting railroad freight rates unequally. Because of the low intrastate rates, shippers in eastern Texas tended to ship their wares to Dallas (in Texas), rather than to Shreveport, Louisiana, although Shreveport was considerably closer to much of eastern Texas. The Railroad Commission's (and the railroad's) position was that only the state could regulate commerce within a state, and that the federal government had no power so to do. The Supreme Court ruled that the federal government's ability to regulate interstate commerce necessarily included the ability to regulate intrastate “operations in all matters having a close and substantial relation to interstate traffic,” and to ensure that “interstate commerce may be conducted upon fair terms.”
The Railroad Commission has also figured prominently in two major U.S. Supreme Court cases on the doctrine of abstention:
Railroad Commission v. Pullman Co., a 1941 case in which the U.S. Supreme Court ruled that it was appropriate for federal courts to abstain from hearing a case to allow state courts to decide substantial constitutional issues that touch upon sensitive areas of state social policy, specifically the race of railroad employees.
Burford v. Sun Oil Co., a 1943 case in which the U.S. Supreme Court ruled that a federal court sitting in diversity jurisdiction may abstain from hearing the case where the state courts likely have greater expertise in a particularly complex and unclear area of state law which is of special significance to the state, where there is comprehensive state administrative/regulatory procedure, and where the federal issues cannot be decided without delving into state law.
Commissioners
The commissioners are elected in statewide partisan elections for six-year terms, with one commission seat up for election every two years. The commission selects a chairperson from among its members every year.
Offices and districts
The agency is headquartered in the William B. Travis State Office Building at 1701 North Congress Avenue in Austin. In addition, the Texas Railroad Commission has twelve oil and gas district offices located throughout the state. The district offices facilitate communication between industry representatives and the Commission.
See also
Oil and gas law in the United States
History of Texas
Bibliography
Childs, William R. The Texas Railroad Commission: Understanding Regulation in America to the Mid-Twentieth Century. (2005). 323 pp. the standard history; online review
Childs, William R. "Origins of the Texas Railroad Commission's Power to Control Production of Petroleum: Regulatory Strategies in the 1920s." Journal of Policy History 1990 2(4): 353–387.
De Chazeau, Melvin G., and Alfred E. Kahn. Integration and Competition in the Petroleum Industry (1959) online edition
Green, George N. "Thompson, Ernest Othmer," The Handbook of Texas Online (2008)
Norvell, James R. "The Railroad Commission of Texas: its Origin and History." Southwestern Historical Quarterly 1965 68(4): 465–480. online edition
Prindle, David F. Petroleum Politics and the Texas Railroad Commission. (1981). 230 pp., focuses on relations with independent oilmen
Prindle, David F. "Railroad Commission," Handbook of Texas Online (2008)
Procter, Ben H. Not Without Honor: The Life of John H. Reagan (1962).
Procter, Ben H. "Reagan, John Henninger," Handbook of Texas Online (2008)
Splawn, W. M. W. "Valuation and Rate Regulation by the Railroad Commission of Texas," Journal of Political Economy Vol. 31, No. 5 (Oct., 1923), pp. 675–707 in JSTOR
References
External links
"Hazardous Business: Industry, Regulation, and the Texas Railroad Commission" from Texas State Library and Archives Commission
Conversion of EBCDIC files
State agencies of Texas
Petroleum in Texas
History of the petroleum industry in the United States
Oil and gas law
Government agencies established in 1891
1891 establishments in Texas
Petroleum politics
United States railroad regulation
Rail transportation in Texas | Railroad Commission of Texas | ["Chemistry"] | 2,880 | ["Petroleum", "Petroleum politics"] |
2,250,488 | https://en.wikipedia.org/wiki/Positional%20good | Positional goods are goods valued only by how they are distributed among the population, not by how many of them there are available in total (as would be the case with other consumer goods). The source of greater worth of positional goods is their desirability as a status symbol, which usually results in them greatly exceeding the value of comparable goods.
Various goods have been described as positional in a given capitalist society, such as gold, real estate, diamonds, and luxury goods. Generally, any coveted goods, even abundant ones, that are considered valuable or desirable as a way to display or change one's social status, and that are possessed by relatively few in a given community, may be described as positional goods. What counts as a positional good can vary widely depending on cultural or subcultural norms.
More formally in economics, positional goods are a subset of economic goods whose consumption (and subsequent utility), also conditioned by Veblen-like pricing, depends negatively on consumption of those same goods by others. In particular, for these goods the value is at least in part (if not exclusively) a function of its ranking in desirability by others, in comparison to substitutes. The extent to which a good's value depends on such a ranking is referred to as its positionality. The term was coined by Austrian-British financial journalist Fred Hirsch, and the concept has been refined by American economics professor Robert H. Frank and Italian economist Ugo Pagano.
The term is sometimes extended to include services and non-material possessions that may alter one's social status and that are deemed highly desirable when enjoyed by relatively few in a community, such as college degrees, achievements, awards, etc.
Concept
Although Thorstein Veblen emphasized the importance of one's relative position in society with reference to the concept of conspicuous leisure and consumption, it was Fred Hirsch who coined the concept of the "positional good", in Social Limits to Growth. He explained that the positional economy is composed of "all aspects of goods, services, work positions and other social relationships that are either (1) scarce in some absolute or socially imposed sense or (2) subject to congestion and crowding through more extensive use" (Hirsch, 1977: 27).
Hence, Hirsch distinguished categories of positional goods. Some depend, essentially, on their relative positions (pride of superiority, status, and power); others, such as land for leisure activities or land for suburban housing, are positional merely because their total amount is fixed. However, land is valued at least in part for its absolute contribution to productivity, which does not derive from its relative ranking. Thus, some economists (such as Robert H. Frank and Ugo Pagano) include only goods (like status and power) which are valued specifically because of their relative quality. In addition, jural positions can also be considered as positional goods (cf. Pagano and Vatiero 2017, and Vatiero 2021).
Hirsch's main contribution is his assertion that positional goods are inextricably linked to social scarcity – social scarcity relates to the relative standings of different individuals and arises not from physical or natural limitations, but from social factors; for instance, the land in Inter-Provincial Montioni Park is physically scarce, while political leadership positions are socially scarce.
The broad theme of Hirsch's book was, he told The New York Times, that material growth can "no longer deliver what has long been promised for it – to make everyone middle-class". The concept of positional good explains why, as economic growth improves overall quality of life at any particular level, doing "better" than how an individual's grandparents lived does not translate automatically into doing "well", if there are as many or more people ahead of them in the economic hierarchy. For example, if someone is the first in their family to get a college degree, they are doing better. But if they were at the bottom of their class at a weak school, they may find themselves less eligible for a job than their grandfather, who was only a high school graduate. That is, competition for positional goods is a zero-sum game: Attempts to acquire them can only benefit one player at the expense of others.
People benefiting from a positional good do not take into account the externalities imposed on those who suffer from it. That is, in the case of "public ... goods, the consequences of this failure implies that an agent consuming the public good does not get paid for other people's consumption; in the case of a positional ... good, the equivalent failure implies that an agent consuming positive amounts is not charged for the negative consumption of other agent's consumption" (Pagano 1999:71). While in the case of public goods this produces the standard underinvestment problem in their supply, because excluding individuals from externalities that have the "same sign" may turn out to be impossible, in the case of positional goods it produces a problem of over-provision, because all agents may try to consume positive amounts of these goods while neglecting the externality imposed on others. Public goods are thus undersupplied, positional goods oversupplied. In other words, in positional competitions, people work harder to compete and consume more than they would under optimal conditions.
Some economists, such as Robert Frank, argue that positional goods create externalities and that "positional arms races" can result for goods that might boost one's social status relative to others. This phenomenon, Frank argues, is clearly bad for society, and thus government can improve social welfare by imposing a high luxury tax on certain luxury goods to correct for the externality and mitigate the posited social waste.
However, in some cases it may be less clear that such government intervention is warranted in response to these externalities. For example, in certain cases, such government actions can potentially impede improvements in living standards and innovation. Technological advance itself is possible in part because wealthy individuals are willing to be first purchasers of new and untested goods (e.g., early cellphone models in the early 1990s). There is a certain experimentation and risk that accompany luxury goods, and if they are found to be useful they may eventually be mass-produced and made affordable to the common person: one era's luxuries are another's commonplace goods. In short, the negative positional externality can be compensated by the public goods of infant industry effects and research and development.
In his response to the cited article by Kashdan and Klein, Robert Frank wrote the following: In the short run, the tax would not change the total level of spending. Rather, it would shift the composition of spending in favor of investment. Innovation is hardly confined to the consumption sector. Producers of capital goods also have strong incentives to come up with useful innovations. And with the greater aggregate investment spending caused by a consumption tax, more resources than before would be available for research and development. There is thus no reason to expect innovation to slow down, even in the short run. In the long run, which is what really counts for the point Kashdan and Klein are attempting to make, their argument collapses completely. Higher rates of investment mean a higher rate of income growth, which means that consumption along the high-savings trajectory will eventually exceed what it would have been had we remained on the low-savings trajectory. From that point forward, there would be more expenditure on innovation in both the consumption and capital goods sectors...
Definitions
One early instance of positional economy comes from San Gimignano, a Tuscan medieval town that has been described as the Manhattan of the Middle Ages because of its towers (in the past there were about eighty). The towers were not built by aristocratic families to live in, but to demonstrate to the community the power, affluence, and status of each family. In this case, the owner of a tower consumed a positive level of a positional good, such as power, while a family that did not own a tower, or owned a lower building, consumed a negative level of the positional good; that is, it consumed the exposure stemming from the power of the owner. For this reason, there is a zero-sum game in the families' consumption: one party consumes a positive amount of the positional good while a counterparty consumes a negative amount of the same good. The aristocratic family that owned a tower enjoyed the positive consumption of the positional good, deriving a positive utility from it. The family without a tower, on the contrary, suffered the negative consumption of the positional good (exposure to the power of others) and thus had a negative utility. For this reason, there is also a zero-sum game in the families' utilities.
The case of San Gimignano's towers illustrates three meanings of positional good, each resting on the idea of social scarcity: 1) a zero-sum game in consumption, 2) a zero-sum game in payoffs (utilities), and 3) a higher-pricing mechanism that denies consumption to others.
The definition centered on a zero-sum game in consumption originates from the contributions of Ugo Pagano: when one party's level of consumption is positive, then at least one other party's level of consumption must be negative. However, while the sign of consumption is binary, the net (utility) impact of a positional good may be positive, zero, or negative; individual utility derives from individual preferences over the level of consumption. If reasonable conditions hold (positive consumption implies positive utility, and negative consumption implies negative utility), then a second kind of definition of positional goods may be formulated as a zero-sum game in the payoffs: positional goods are goods whose utility from consumption depends negatively on the consumption of others.
A last definition of positional good derives from the so-called "Veblen effect", which is witnessed whenever individuals are willing to pay higher prices for functionally equivalent goods (a significant example is the luxury goods market). The Veblen effect also implies that a sufficient decrease in price leads not to an increase in demand, but to a decrease, because the social status derived from acquiring the goods in question may fall. In this respect, positional goods are goods for which the satisfaction derives (at least in part) from higher pricing.
This brings us to an intriguing parallel between positional goods, such as "luxury goods", and what are known as "Giffen goods". Rae observed that in the case of "mere luxuries", a halving of the price would require a doubling in the number of units purchased in order to satisfy vanity to the same extent, while a reduction of the price to a small fraction of its previous level would reduce demand to zero. Cournot also admitted that some goods "of whim and luxury ... are only desirable on account of their rarity and of the high price which is the consequence thereof ... [i]n this case a great fall in price would almost annihilate the demand" (cf. Schneider).
Triad of economic goods
People constantly compare themselves to their environments and care greatly about their relative positions, which influence their choices. Therefore, it could be argued that the paradigm of homo economicus should be extended, so that positional goods are included in theories of individual consumption and social concerns are considered among the basic motivations for individual economic behaviour. The triad of economic goods – private, public, and positional goods – can be defined in terms of individual and total consumption. Private goods are characterized by the fact that they are consumed only by single individuals. The exclusion of others from positive amounts of consumption is impossible in the case of public goods. Instead, when some individuals consume positional goods, other individuals must be included in the consumption of related negative quantities.
A pure positional good can be defined as a good of which a certain amount of positive consumption by one agent is matched by an equally negative amount of consumption by another agent. That is, in the case of positional goods, individuals' consumption levels have opposite signs. However, in the case of certain positional goods, such as Olympic medals, one may talk of new positional products created out of nothing; such products do not result in negative externalities, especially if even last place at the Olympics is viewed as prestigious enough to bring positive utility to the competitor.
The distinction among private, public and positional goods leads to different rules for deriving total demand. In a diagrammatic view, total demand of a private good is the horizontal sum of individual demands. For a public good, instead, total demand is the Samuelsonian vertical summation of individual demands. Finally, for positional goods, the optimal level of consumption does not coincide, as it does in the case of private goods, with the intersection of any individual marginal rate of substitution curve with the marginal cost curve, since an externality emerges from the consumption of others.
Thus, it is necessary to first calculate the total marginal rate of substitution and, consequently, find the intersection with the marginal cost curve. As in the case of public goods, the total marginal rate of substitution is calculated by the summation of individual marginal rates of substitution. But in the case of positional goods, one marginal rate of substitution is subtracted since there is negative consumption. Therefore, the total marginal rate of substitution is the difference between the two individual marginal rates of substitution.
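The three aggregation rules described in these two paragraphs can be summarised in symbols; the notation below (q for individual demands, MRS for marginal rates of substitution, MC for marginal cost, agents A and B) is generic shorthand rather than Pagano's own:

% Private good: total demand is the horizontal sum of individual demands
Q(p) = \sum_i q_i(p)
% Public good (Samuelson condition): vertical summation of marginal rates of substitution
\sum_i \mathrm{MRS}_i = \mathrm{MC}
% Pure positional good with two agents A and B: one consumption is negative,
% so one MRS enters with a negative sign and the total MRS is a difference
\mathrm{MRS}_A - \mathrm{MRS}_B = \mathrm{MC}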
See also
Conspicuous consumption
Expenditure cascades
Status symbol
Veblen good
References
Goods (economics)
Welfare economics | Positional good | ["Physics"] | 2,851 | ["Materials", "Goods (economics)", "Matter"] |
2,250,527 | https://en.wikipedia.org/wiki/Radisson%2C%20Quebec | Radisson is a small unconstituted locality situated near the Robert-Bourassa hydroelectric power station on the La Grande River in the municipality of Eeyou Istchee Baie-James of Quebec, Canada. Geographically, Radisson is located halfway between the southern and northernmost points in Quebec and is, besides Schefferville, the only non-native town north of the 53rd parallel in this province.
Despite its remoteness, Radisson has plenty of services for its residents and travellers: two fuel stations, hotel, motel, campground (summer only), a general store, restaurants, gift shops, a school and a hospital. It is also home to a huge Hydro-Québec employee facility, from where guided tours to the Robert-Bourassa power station start. It also houses employees of Air Inuit who are stationed at La Grande Rivière Airport.
The Cree village of Chisasibi is about to the west, near the mouth of the La Grande River. To the east is the Trans-Taiga Road (French: Route Transtaïga), which leads to the Caniapiscau Reservoir and the former construction camp of Caniapiscau (now used by a wilderness outfitter).
History
Radisson was founded in 1974 to accommodate workers for the James Bay hydroelectric project and named by the Société de développement de la Baie James after Pierre-Esprit Radisson, a 17th-century French explorer and founder of the Hudson's Bay Company. During the peak construction period in 1977, its population reached about 2,500 and has fluctuated since that time. Currently it is a community of about 300 people. The main employer is Hydro-Québec and its main subsidiary, the Société de l'énergie de la Baie James. Many locals are also employed in the tourism/hospitality industry that caters especially to the outdoor sports, such as hunting, fishing, and camping.
Radisson, also referred to on some unofficial maps as "La Grande", is part of the Municipality of Baie-James which covers most of the territory of James Bay region, with the exception of the Cree villages as well as towns of Chapais, Chibougamau, Matagami and Lebel-sur-Quévillon, all of which are enclaves.
The town is accessible by road from Matagami, to the south. The road is known as the James Bay Road (French: Route de la Baie James) and was built during the construction of the James Bay Project in the mid-1970s. No services whatsoever are available along this road with the exception of a 24-hour service station, complete with cafeteria and lodging, at kilometre 381. The road is fully paved, well maintained and ploughed during the winter, making Radisson accessible year-round. It is also accessible via La Grande Rivière Airport.
Climate
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Radisson had a population of 203 living in 105 of its 235 total private dwellings, a change of from its 2016 population of 468. With a land area of , it had a population density of in 2021.
References
External links
Virtual tour of Radisson, photos (English)
Communities in Nord-du-Québec
Designated places in Quebec
James Bay Project
Unconstituted localities in Quebec | Radisson, Quebec | ["Engineering"] | 684 | ["James Bay Project", "Macro-engineering"] |
2,250,604 | https://en.wikipedia.org/wiki/L%28R%29 | In set theory, L(R) (pronounced L of R) is the smallest transitive inner model of ZF containing all the ordinals and all the reals.
Construction
It can be constructed in a manner analogous to the construction of L (that is, Gödel's constructible universe), by adding in all the reals at the start, and then iterating the definable powerset operation through all the ordinals.
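Written out level by level, the construction sketched above takes the following standard form, where Def denotes the definable power set; taking V_{ω+1} as the starting level is one common convention for "adding in all the reals at the start":

L_0(\mathbb{R}) = V_{\omega+1}
L_{\alpha+1}(\mathbb{R}) = \operatorname{Def}\bigl(L_\alpha(\mathbb{R})\bigr)
L_\lambda(\mathbb{R}) = \bigcup_{\alpha<\lambda} L_\alpha(\mathbb{R}) \quad (\lambda \text{ a limit ordinal})
L(\mathbb{R}) = \bigcup_{\alpha\in\mathrm{Ord}} L_\alpha(\mathbb{R})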
Assumptions
In general, the study of L(R) assumes a wide array of large cardinal axioms, since without these axioms one cannot show even that L(R) is distinct from L. But given that sufficient large cardinals exist, L(R) does not satisfy the axiom of choice, but rather the axiom of determinacy. However, L(R) will still satisfy the axiom of dependent choice, given only that the von Neumann universe, V, also satisfies that axiom.
Results
Given the assumptions above, some additional results of the theory are:
Every projective set of reals – and therefore every analytic set and every Borel set of reals – is an element of L(R).
Every set of reals in L(R) is Lebesgue measurable (in fact, universally measurable) and has the property of Baire and the perfect set property.
L(R) does not satisfy the axiom of uniformization or the axiom of real determinacy.
R#, the sharp of the set of all reals, has the smallest Wadge degree of any set of reals not contained in L(R).
While not every relation on the reals in L(R) has a uniformization in L(R), every such relation does have a uniformization in L(R#).
Given any (set-size) generic extension V[G] of V, L(R) is an elementary submodel of L(R) as calculated in V[G]. Thus the theory of L(R) cannot be changed by forcing.
L(R) satisfies AD+.
References
Inner model theory
Determinacy
Descriptive set theory | L(R) | ["Mathematics"] | 448 | ["Game theory", "Determinacy"] |
2,250,747 | https://en.wikipedia.org/wiki/Iodomethane | Iodomethane, also called methyl iodide, and commonly abbreviated "MeI", is the chemical compound with the formula CH3I. It is a dense, colorless, volatile liquid. In terms of chemical structure, it is related to methane by replacement of one hydrogen atom by an atom of iodine. It is naturally emitted in small amounts by rice plantations. It is also produced in vast quantities estimated to be greater than 214,000 tons annually by algae and kelp in the world's temperate oceans, and in lesser amounts on land by terrestrial fungi and bacteria. It is used in organic synthesis as a source of methyl groups.
Preparation and handling
Iodomethane is formed via the exothermic reaction that occurs when iodine is added to a mixture of methanol with red phosphorus. The iodinating reagent is phosphorus triiodide that is formed in situ:
3 CH3OH + PI3 → 3 CH3I + H3PO3
Alternatively, it is prepared from the reaction of dimethyl sulfate with potassium iodide in the presence of calcium carbonate:
(CH3O)2SO2 + KI → CH3I + CH3OSO2OK
Iodomethane can also be prepared by the reaction of methanol with aqueous hydrogen iodide:
CH3OH + HI → CH3I + H2O
The generated iodomethane can be distilled from the reaction mixture.
Iodomethane may also be prepared by treating iodoform with potassium hydroxide and dimethyl sulfate under 95% ethanol.
In the Tennessee Eastman acetic anhydride process iodomethane is formed as an intermediate product by a catalytic reaction between methyl acetate and lithium iodide.
Storage and purification
Like many organoiodine compounds, iodomethane is typically stored in dark bottles to inhibit light-induced degradation to iodine, which gives degraded samples a purplish tinge. Commercial samples may be stabilized by copper or silver wire. It can be purified by washing with Na2S2O3 solution to remove iodine, followed by distillation.
Biogenic iodomethane
Most iodomethane is produced by microbial methylation of iodide. Oceans are the major source, but rice paddies are also significant.
Reactions
Methylation reagent
Iodomethane is an excellent substrate for SN2 substitution reactions. It is sterically open for attack by nucleophiles, and iodide is a good leaving group. It is used for alkylating carbon, oxygen, sulfur, nitrogen, and phosphorus nucleophiles. Unfortunately, it has a high equivalent weight: one mole of iodomethane weighs almost three times as much as one mole of chloromethane and nearly 1.5 times as much as one mole of bromomethane. On the other hand, chloromethane and bromomethane are gaseous, thus harder to handle, and are also weaker alkylating agents. Iodide can act as a catalyst when reacting chloromethane or bromomethane with a nucleophile while iodomethane is formed in situ.
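The weight comparison can be checked with rounded molar masses; the atomic masses below are approximate reference values used only for this back-of-the-envelope calculation.

# Rough molar masses (g/mol) of the three halomethanes, using rounded atomic masses.
C, H, Cl, Br, I = 12.01, 1.008, 35.45, 79.90, 126.90
ch3i  = C + 3 * H + I    # ~141.9 g/mol
ch3cl = C + 3 * H + Cl   # ~50.5 g/mol
ch3br = C + 3 * H + Br   # ~94.9 g/mol
print(ch3i / ch3cl)  # ~2.8, i.e. "almost three times" chloromethane
print(ch3i / ch3br)  # ~1.5 times bromomethane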
Iodides are generally expensive relative to the more common chlorides and bromides, though iodomethane is reasonably affordable; on a commercial scale, the more toxic dimethyl sulfate is preferred, since it is cheap and has a higher boiling point. The iodide leaving group in iodomethane may cause unwanted side reactions. Finally, being highly reactive, iodomethane is more dangerous for laboratory workers than related chlorides and bromides.
For example, it can be used for the methylation of carboxylic acids or phenols:
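The original worked examples are not reproduced here; as an illustration of the kind of reaction meant, methylation of a generic carboxylic acid RCO2H and of phenol with potassium carbonate as the base would run roughly as follows (schematic equations, not taken from the source):

RCO2H + K2CO3 → RCO2K + KHCO3, then RCO2K + CH3I → RCO2CH3 + KI

C6H5OH + K2CO3 → C6H5OK + KHCO3, then C6H5OK + CH3I → C6H5OCH3 (anisole) + KI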
In these examples, the base (K2CO3 or Li2CO3) removes the acidic proton to form the carboxylate or phenoxide anion, which serves as the nucleophile in the SN2 substitution.
Iodide is a "soft" anion which means that methylation with MeI tends to occur at the "softer" end of an ambidentate nucleophile. For example, reaction with thiocyanate ion favours attack at sulfur rather than "hard" nitrogen, leading mainly to methyl thiocyanate (CH3SCN) rather than methyl isothiocyanate CH3NCS. This behavior is relevant to the methylation of stabilized enolates such as those derived from 1,3-dicarbonyl compounds. Methylation of these and related enolates can occur on the harder oxygen atom or the (usually desired) carbon atom. With iodomethane, C-alkylation nearly always predominates.
Other reactions
In the Monsanto process and the Cativa process, MeI forms in situ from the reaction of methanol and hydrogen iodide. The CH3I then reacts with carbon monoxide in the presence of a rhodium or iridium complex to form acetyl iodide, the precursor to acetic acid after hydrolysis. The Cativa process is usually preferred because it requires less water and produces fewer byproducts.
MeI is used to prepare the Grignard reagent, methylmagnesium iodide ("MeMgI"), a common source of "Me−". The use of MeMgI has been somewhat superseded by the commercially available methyllithium. MeI can also be used to prepare dimethylmercury, by reacting 2 moles of MeI with a 2/1-molar sodium amalgam (2 moles of sodium, 1 mol of mercury).
Iodomethane and other organic iodine compounds form under the conditions of a serious nuclear accident; after both Chernobyl and Fukushima, iodine-131 was detected in organic iodine compounds in Europe and Japan, respectively.
Use as a pesticide
Iodomethane had also been proposed for use as a fungicide, herbicide, insecticide, nematicide, and as a soil disinfectant, replacing methyl bromide (also known as bromomethane) (banned under the Montreal Protocol). Manufactured by Arysta LifeScience and sold under the brand name MIDAS, iodomethane is registered as a pesticide in the U.S., Mexico, Morocco, Japan, Turkey, and New Zealand and registration is pending in Australia, Guatemala, Costa Rica, Chile, Egypt, Israel, South Africa and other countries. The first commercial applications of iodomethane soil fumigant in California began in Fresno County in May 2011.
Iodomethane had been approved for use as a pesticide by the United States Environmental Protection Agency in 2007 as a pre-plant biocide used to control insects, plant parasitic nematodes, soil borne pathogens, and weed seeds. The compound was registered for use as a preplant soil treatment for field grown strawberries, peppers, tomatoes, grape vines, ornamentals and turf and nursery grown strawberries, stone fruits, tree nuts, and conifer trees. After the discovery phase in a consumer lawsuit, the manufacturer withdrew the fumigant citing its lack of market viability.
The use of iodomethane as a fumigant has drawn concern. For example, 54 chemists and physicians contacted the U.S. EPA in a letter, saying "We are skeptical of U.S. EPA's conclusion that the high levels of exposure to iodomethane that are likely to result from broadcast applications are 'acceptable' risks. U.S. EPA has made many assumptions about toxicology and exposure in the risk assessment that have not been examined by independent scientific peer reviewers for adequacy or accuracy. Additionally, none of U.S. EPA's calculations account for the extra vulnerability of the unborn fetus and children to toxic insults." EPA Assistant Administrator Jim Gulliford replied saying, "We are confident that by conducting such a rigorous analysis and developing highly restrictive provisions governing its use, there will be no risks of concern," and in October the EPA approved the use of iodomethane as a soil fumigant in the United States.
The California Department of Pesticide Regulation (DPR) concluded that iodomethane is "highly toxic," that "any anticipated scenario for the agricultural or structural fumigation use of this agent would result in exposures to a large number of the public and thus would have a significant adverse impact on the public health", and that adequate control of the chemical in these circumstances would be "difficult, if not impossible." Iodomethane was approved as a pesticide in California that December. A lawsuit was filed on January 5, 2011, challenging California's approval of iodomethane. Subsequently, the manufacturer withdrew the fumigant and requested that California Department of Pesticide Regulation cancel its California registration, citing its lack of market viability.
Safety
Toxicity and biological effects
According to the United States Department of Agriculture iodomethane exhibits moderate to high acute toxicity for inhalation and ingestion. The Centers for Disease Control and Prevention (CDC) lists inhalation, skin absorption, ingestion, and eye contact as possible exposure routes with target organs of the eyes, skin, respiratory system, and the central nervous system. Symptoms may include eye irritation, nausea, vomiting, dizziness, ataxia, slurred speech, and dermatitis. In high dose acute toxicity, as may occur in industrial accidents, toxicity includes metabolic disturbance, renal failure, venous and arterial thrombosis and encephalopathy with seizures and coma, with a characteristic pattern of brain injury.
Iodomethane has an LD50 for oral administration to rats of 76 mg/kg, and is rapidly converted in the liver to S-methylglutathione.
In its risk assessment of iodomethane, the U.S. EPA conducted an exhaustive scientific and medical literature search over the past 100 years for reported cases of human poisonings attributable to the compound. Citing the EPA as its source, the California Department of Pesticide Regulation said: "Over the past century, only 11 incidents of iodomethane poisoning have been reported in the published literature." "An updated literature search on May 30, 2007 for iodomethane poisoning produced only one additional case report." All but one were industrial—not agricultural—accidents, and the remaining case of poisoning was an apparent suicide. Iodomethane is routinely and regularly used in industrial processes as well as in most university and college chemistry departments for study and learning related to a variety of organic chemical reactions.
In 2024, a case of a person being injected with iodomethane emerged. The subject, who was the victim of attempted murder by a GP disguised as a community nurse, went on to develop necrotizing fasciitis but survived.
Carcinogenicity in mammals
The U.S. National Institute for Occupational Safety and Health (NIOSH), the U.S. Occupational Safety and Health Administration and the U.S. Centers for Disease Control and Prevention consider iodomethane a potential occupational carcinogen.
Based on studies performed after methyl iodide was put on the Proposition 65 list, the International Agency for Research on Cancer concluded: "Methyl iodide is not classifiable as to its carcinogenicity to humans (Group 3)." The Environmental Protection Agency classifies it as "not likely to be carcinogenic to humans in the absence of altered thyroid hormone homeostasis," i.e., potentially carcinogenic only at doses large enough to disrupt thyroid function (via excess iodide). However, this finding is disputed by the Pesticide Action Network, which states that the EPA's assessment "appears to be based solely on a single rat inhalation study in which 66% of the control group and 54-62% of the rats in the other groups died before the end of the study". They go on to state: "The EPA appears to be dismissing early peer-reviewed studies in favor of two nonpeer-reviewed studies conducted by the registrant that are flawed in design and execution." Despite requests by the U.S. EPA to the Pesticide Action Network to bring forth scientific evidence of these claims, they have not done so.
See also
Angelita C. et al. v. California Department of Pesticide Regulation
References
Additional sources
Sulikowski, G. A.; Sulikowski, M. M. (1999). in Coates, R.M.; Denmark, S. E. (Eds.) Handbook of Reagents for Organic Synthesis, Volume 1: Reagents, Auxiliaries and Catalysts for C-C Bond Formation New York: Wiley, pp. 423–26.
External links
IARC Summaries & Evaluations: Vol. 15 (1977), Vol. 41 (1986), Vol. 71 (1999)
Metabolism of iodomethane in the rat
Iodomethane NMR spectra
Iodoalkanes
Halomethanes
Methylating agents
IARC Group 3 carcinogens
Environmental controversies
Environmental effects of pesticides
Pesticides in the United States
Halogen-containing natural products
Iodine-containing natural products | Iodomethane | ["Chemistry"] | 2,764 | ["Methylation", "Methylating agents"] |
2,251,116 | https://en.wikipedia.org/wiki/Tin%28II%29%20oxide | Tin(II) oxide (stannous oxide) is a compound with the formula SnO. It is composed of tin and oxygen where tin has the oxidation state of +2. There are two forms, a stable blue-black form and a metastable red form.
Preparation and reactions
Blue-black SnO can be produced by heating the tin(II) oxide hydrate, SnO·xH2O (x<1) precipitated when a tin(II) salt is reacted with an alkali hydroxide such as NaOH.
Metastable, red SnO can be prepared by gentle heating of the precipitate produced by the action of aqueous ammonia on a tin(II) salt.
SnO may be prepared as a pure substance in the laboratory, by controlled heating of tin(II) oxalate (stannous oxalate) in the absence of air or under a CO2 atmosphere. This method is also applied to the production of ferrous oxide and manganous oxide.
SnC2O4·2H2O → SnO + CO2 + CO + 2 H2O
Tin(II) oxide burns in air with a dim green flame to form SnO2.
2 SnO + O2 → 2 SnO2
When heated in an inert atmosphere initially disproportionation occurs giving Sn metal and Sn3O4 which further reacts to give SnO2 and Sn metal.
4SnO → Sn3O4 + Sn
Sn3O4 → 2SnO2 + Sn
SnO is amphoteric, dissolving in strong acid to give tin(II) salts and in strong base to give stannites containing [Sn(OH)3]−. It can be dissolved in strong acid solutions to give the ionic complexes [Sn(OH2)3]2+ and [Sn(OH)(OH2)2]+, and in less acid solutions to give [Sn3(OH)4]2+. Note that anhydrous stannites, e.g. K2Sn2O3 and K2SnO2, are also known.
SnO is a reducing agent and is thought to reduce copper(I) to metallic clusters in the manufacture of so-called "copper ruby glass".
Structure
Black, α-SnO adopts the tetragonal PbO layer structure containing four coordinate square pyramidal tin atoms. This form is found in nature as the rare mineral romarchite. The asymmetry is usually simply ascribed to a sterically active lone pair; however, electron density calculations show that the asymmetry is caused by an antibonding interaction of the Sn(5s) and the O(2p) orbitals. The electronic structure and chemistry of the lone pair determines most of the properties of the material.
Non-stoichiometry has been observed in SnO.
The electronic band gap has been measured between 2.5 eV and 3 eV.
Uses
The dominant use of stannous oxide is as a precursor in manufacturing of other, typically divalent, tin compounds or salts. Stannous oxide may also be employed as a reducing agent and in the creation of ruby glass. It has a minor use as an esterification catalyst.
Cerium(III) oxide in ceramic form, together with tin(II) oxide (SnO), is used for illumination with UV light.
References
Amphoteric compounds
Oxides
Reducing agents
Tin(II) compounds | Tin(II) oxide | ["Chemistry"] | 720 | ["Amphoteric compounds", "Acids", "Redox", "Oxides", "Salts", "Reducing agents", "Bases (chemistry)"] |
2,251,120 | https://en.wikipedia.org/wiki/Law%20of%20comparative%20judgment | The law of comparative judgment was conceived by L. L. Thurstone. In modern-day terminology, it is more aptly described as a model that is used to obtain measurements from any process of pairwise comparison. Examples of such processes are the comparisons of perceived intensity of physical stimuli, such as the weights of objects, and comparisons of the extremity of an attitude expressed within statements, such as statements about capital punishment. The measurements represent how we perceive entities, rather than measurements of actual physical properties. This kind of measurement is the focus of psychometrics and psychophysics.
In somewhat more technical terms, the law of comparative judgment is a mathematical representation of a discriminal process, which is any process in which a comparison is made between pairs of a collection of entities with respect to magnitudes of an attribute, trait, attitude, and so on. The theoretical basis for the model is closely related to item response theory and the theory underlying the Rasch model, which are used in psychology and education to analyse data from questionnaires and tests.
Background
Thurstone published a paper on the law of comparative judgment in 1927. In this paper he introduced the underlying concept of a psychological continuum for a particular 'project in measurement' involving the comparison between a series of stimuli, such as weights and handwriting specimens, in pairs. He soon extended the domain of application of the law of comparative judgment to things that have no obvious physical counterpart, such as attitudes and values (Thurstone, 1929). For example, in one experiment, people compared statements about capital punishment to judge which of each pair expressed a stronger positive (or negative) attitude.
The essential idea behind Thurstone's process and model is that it can be used to scale a collection of stimuli based on simple comparisons between stimuli two at a time: that is, based on a series of pairwise comparisons. For example, suppose that someone wishes to measure the perceived weights of a series of five objects of varying masses. By having people compare the weights of the objects in pairs, data can be obtained and the law of comparative judgment applied to estimate scale values of the perceived weights. This is the perceptual counterpart to the physical weight of the objects. That is, the scale represents how heavy people perceive the objects to be based on the comparisons.
Although Thurstone referred to it as a law, as stated above, in terms of modern psychometric theory the 'law' of comparative judgment is more aptly described as a measurement model. It represents a general theoretical model which, applied in a particular empirical context, constitutes a scientific hypothesis regarding the outcomes of comparisons between some collection of objects. If data agree with the model, it is possible to produce a scale from the data.
Relationships to pre-existing psychophysical theory
Thurstone showed that in terms of his conceptual framework, Weber's law and the so-called Weber-Fechner law, which are sometimes (and misleadingly) regarded as one and the same, are independent, in the sense that one may be applicable but not the other to a given collection of experimental data. In particular, Thurstone showed that if Fechner's law applies and the discriminal dispersions associated with stimuli are constant (as in Case 5 of the LCJ outlined below), then Weber's law will also be verified. He considered that the Weber-Fechner law and the LCJ both involve a linear measurement on a psychological continuum whereas Weber's law does not.
Weber's law essentially states that how much people perceive physical stimulus intensity to change depends on that intensity. For example, if someone compares a light object of 1 kg with one slightly heavier, one notices a relatively small difference, perhaps when the second object is 1.2 kg. On the other hand, if someone compares a heavy object of 30 kg with a second, the second must be quite a bit larger for a person to notice the difference, perhaps when the second object is 36 kg. People tend to perceive differences that are proportional to the size rather than noticing a specific difference irrespective of the size. The same applies to brightness, pressure, warmth, loudness, and so on.
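In the conventional symbols (not Thurstone's own notation), Weber's law states that the just noticeable increment ΔI is a constant fraction k of the stimulus magnitude I:

\frac{\Delta I}{I} = k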
Thurstone stated Weber's law as follows: "The stimulus increase which is correctly discriminated in any specified proportion of attempts (except 0 and 100 per cent) is a constant fraction of the stimulus magnitude" (Thurstone, 1959, p. 61). He considered that Weber's law said nothing directly about sensation intensities at all. In terms of Thurstone's conceptual framework, the association posited between perceived stimulus intensity and the physical magnitude of the stimulus in the Weber-Fechner law will only hold when Weber's law holds and the just noticeable difference (JND) is treated as a unit of measurement. Importantly, this is not simply given a priori (Michell, 1997, p. 355), as is implied by purely mathematical derivations of the one law from the other. It is, rather, an empirical question whether measurements have been obtained; one which requires justification through the process of stating and testing a well-defined hypothesis in order to ascertain whether specific theoretical criteria for measurement have been satisfied. Some of the relevant criteria were articulated by Thurstone, in a preliminary fashion, including what he termed the additivity criterion. Accordingly, from the point of view of Thurstone's approach, treating the JND as a unit is justifiable provided only that the discriminal dispersions are uniform for all stimuli considered in a given experimental context. Similar issues are associated with Stevens' power law.
In addition, Thurstone employed the approach to clarify other similarities and differences between Weber's law, the Weber-Fechner law, and the LCJ. An important clarification is that the LCJ does not necessarily involve a physical stimulus, whereas the other 'laws' do. Another key difference is that Weber's law and the LCJ involve proportions of comparisons in which one stimulus is judged greater than another whereas the so-called Weber-Fechner law does not.
The general form
The most general form of the LCJ is

$S_i - S_j = z_{ij}\sqrt{\sigma_i^2 + \sigma_j^2 - 2 r_{ij}\,\sigma_i \sigma_j}$

in which:
$S_i$ is the psychological scale value of stimulus i
$z_{ij}$ is the sigma corresponding with the proportion of occasions on which the magnitude of stimulus i is judged to exceed the magnitude of stimulus j
$\sigma_i$ is the discriminal dispersion of a stimulus i
$r_{ij}$ is the correlation between the discriminal deviations of stimuli i and j
The discriminal dispersion of a stimulus i is the dispersion of fluctuations of the discriminal process for a uniform repeated stimulus, denoted $\sigma_i$, where $S_i$ represents the mode of such values. Thurstone (1959, p. 20) used the term discriminal process to refer to the "psychological values of psychophysics"; that is, the values on a psychological continuum associated with a given stimulus.
Case 5
Thurstone specified five particular cases of the 'law', or measurement model. An important case of the model is Case 5, in which the discriminal dispersions are specified to be uniform and uncorrelated. This form of the model can be represented as follows:

$S_i - S_j = z_{ij}\,\sigma\sqrt{2}$

where $\sigma$ is the common discriminal dispersion (that is, $\sigma_i = \sigma_j = \sigma$ and $r_{ij} = 0$).
In this case of the model, the difference $S_i - S_j$ can be inferred directly from the proportion of instances in which j is judged greater than i, if it is hypothesised that the difference between the discriminal processes is distributed according to some density function, such as the normal distribution or the logistic function. In order to do so, it is necessary to let $\sigma\sqrt{2} = 1$, which is in effect an arbitrary choice of the unit of measurement. Letting $p_{ij}$ be the proportion of occasions on which i is judged greater than j, if, for example, $p_{ij} = 0.90$ and it is hypothesised that the difference is normally distributed, then it would be inferred that $S_i - S_j \approx 1.28$.
When a simple logistic function is employed instead of the normal density function, then the model has the structure of the Bradley-Terry-Luce model (BTL model) (Bradley & Terry, 1952; Luce, 1959). In turn, the Rasch model for dichotomous data (Rasch, 1960/1980) is identical to the BTL model after the person parameter of the Rasch model has been eliminated, as is achieved through statistical conditioning during the process of Conditional Maximum Likelihood estimation. With this in mind, the specification of uniform discriminal dispersions is equivalent to the requirement of parallel Item Characteristic Curves (ICCs) in the Rasch model. Accordingly, as shown by Andrich (1978), the Rasch model should, in principle, yield essentially the same results as those obtained from a Thurstone scale. Like the Rasch model, when applied in a given empirical context, Case 5 of the LCJ constitutes a mathematized hypothesis which embodies theoretical criteria for measurement.
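A minimal sketch of Case 5 scaling from a matrix of pairwise proportions, assuming a normal density: each proportion is converted to a normal deviate and the deviates are averaged by column. The three-stimulus data matrix and the use of SciPy are illustrative assumptions, not anything prescribed by Thurstone.

import numpy as np
from scipy.stats import norm

# p[i, j] = proportion of occasions on which stimulus j was judged greater than i.
# Hypothetical data for three stimuli; diagonal entries are conventionally 0.5.
p = np.array([
    [0.50, 0.70, 0.90],
    [0.30, 0.50, 0.75],
    [0.10, 0.25, 0.50],
])

z = norm.ppf(p)             # normal deviates corresponding to the proportions
scale = z.mean(axis=0)      # Case 5 scale value of each stimulus (column means)
scale -= scale[0]           # fix the origin at the first stimulus (arbitrary choice)
print(scale)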
Applications
One important application involving the law of comparative judgment is the widely used Analytic Hierarchy Process, a structured technique for helping people deal with complex decisions. It uses pairwise comparisons of tangible and intangible factors to construct ratio scales that are useful in making important decisions.
References
Andrich, D. (1978b). Relationships between the Thurstone and Rasch approaches to item scaling. Applied Psychological Measurement, 2, 449-460.
Bradley, R.A. and Terry, M.E. (1952). Rank analysis of incomplete block designs, I. the method of paired comparisons. Biometrika, 39, 324-345.
Krus, D.J., & Kennedy, P.H. (1977) Normal scaling of dominance matrices: The domain-referenced model. Educational and Psychological Measurement, 37, 189-193
Luce, R.D. (1959). Individual Choice Behaviours: A Theoretical Analysis. New York: J. Wiley.
Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 355-383.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. (Copenhagen, Danish Institute for Educational Research), expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.
Thurstone, L.L. (1927). A law of comparative judgement. Psychological Review, 34, 273-286.
Thurstone, L.L. (1929). The Measurement of Psychological Value. In T.V. Smith and W.K. Wright (Eds.), Essays in Philosophy by Seventeen Doctors of Philosophy of the University of Chicago. Chicago: Open Court.
Thurstone, L.L. (1959). The Measurement of Values. Chicago: The University of Chicago Press.
External links
"The Measurement of Psychological Value"
How to Analyze Paired Comparisons (tutorial on using Thurstone's Law of Comparative Judgement)
L.L. Thurstone psychometric laboratory
Psychometrics
Psychophysics | Law of comparative judgment | ["Physics"] | 2,232 | ["Psychophysics", "Applied and interdisciplinary physics"] |
2,251,570 | https://en.wikipedia.org/wiki/Alginic%20acid | Alginic acid, also called algin, is a naturally occurring, edible polysaccharide found in brown algae. It is hydrophilic and forms a viscous gum when hydrated. When the alginic acid binds with sodium and calcium ions, the resulting salts are known as alginates. Its colour ranges from white to yellowish-brown. It is sold in filamentous, granular, or powdered forms.
It is a significant component of the biofilms produced by the bacterium Pseudomonas aeruginosa, a major pathogen found in the lungs of some people who have cystic fibrosis. The biofilm and P. aeruginosa have a high resistance to antibiotics, but are susceptible to inhibition by macrophages.
Alginate was discovered by British chemical scientist E. C. C. Stanford in 1881, and he patented an extraction process for it in the same year. The alginate was extracted, in the original patent, by first soaking the algae in water or diluted acid, then extracting the alginate by soaking it in sodium carbonate, and finally precipitating the alginate from solution.
Structure
Alginic acid is a linear copolymer with homopolymeric blocks of (1→4)-linked β-D-mannuronate (M) and α-L-guluronate (G) residues, respectively, covalently linked together in different sequences or blocks. The monomers may appear in homopolymeric blocks of consecutive G-residues (G-blocks), consecutive M-residues (M-blocks) or alternating M and G-residues (MG-blocks). α-L-guluronate is the C-5 epimer of β-D-mannuronate.
Forms
Alginates are refined from brown seaweeds. Throughout the world, many of the Phaeophyceae class brown seaweeds are harvested to be processed and converted into sodium alginate. Sodium alginate is used in many industries including food, animal food, fertilisers, textile printing, and pharmaceuticals. Dental impression material uses alginate as its means of gelling. Food grade alginate is an approved ingredient in processed and manufactured foods.
Brown seaweeds range in size from the giant kelp Macrocystis pyrifera which can be 20–40 meters long, to thick, leather-like seaweeds from 2–4 m long, to smaller species 30–60 cm long. Most brown seaweed used for alginates are gathered from the wild, with the exception of Laminaria japonica, which is cultivated in China for food and its surplus material is diverted to the alginate industry in China.
Alginates from different species of brown seaweed vary in their chemical structure, resulting in different physical properties of alginates. Some species yield an alginate that gives a strong gel, another a weaker gel, some may produce a cream or white alginate, while others are difficult to gel and are best used for technical applications where color does not matter.
Commercial grade alginate is extracted from giant kelp Macrocystis pyrifera, Ascophyllum nodosum, and types of Laminaria. Alginates are also produced by two bacterial genera Pseudomonas and Azotobacter, which played a major role in the unravelling of its biosynthesis pathway. Bacterial alginates are useful for the production of micro- or nanostructures suitable for medical applications.
Sodium alginate (NaC6H7O6) is the sodium salt of alginic acid. Sodium alginate is a gum.
Potassium alginate (KC6H7O6) is the potassium salt of alginic acid.
Calcium alginate (CaC12H14O12) is the calcium salt of alginic acid. It is made by replacing the sodium ion in sodium alginate with a calcium ion (ion exchange).
Production
The manufacturing processes used to extract sodium alginates from brown seaweed fall into two categories: 1) the calcium alginate method and 2) the alginic acid method.
Chemically the process is simple, but difficulties arise from the physical separations required between the slimy residues from viscous solutions and the separation of gelatinous precipitates that hold large amounts of liquid within their structure, so they resist filtration and centrifugation. The conventional process involves large amounts of reagents and solvents, as well as time-consuming steps. Simpler and newer techniques, such as microwave-assisted extraction, ultrasound, high pressure, pressurized fluid extraction, and enzyme-assisted extraction, are the subject of research.
The most common, conventional extraction process involves six steps: pre-treatment of the algal biomass, acid treatment, alkaline extraction, precipitation, bleaching, and drying. Pre-treatments mainly aim at either breaking the cell wall to help extract the alginate, or removing other compounds and contaminants from the algae. Drying is of the first kind, also helping to prevent bacterial growth; algae which is dried is also usually powdered to expose more surface area. Common treatments to remove contaminants include treatments with ethanol and formaldehyde, the latter of which is very common; ethanol solutions help remove compounds bonded to the alginate, and formaldehyde solutions help prevent enzymatic or microbial reactions.
The algae is then treated with an acidic solution to help disrupt cell walls, which converts the alginate salts into insoluble alginic acid; a subsequently applied alkaline solution (pH 9-10), usually sodium carbonate, converts it back into water-soluble sodium alginate, which is then precipitated. It is also possible to extract the alginate directly with an alkaline treatment, but this is less common.
Alginic acid is usually precipitated, through different techniques, with either an alcohol (usually ethanol), calcium chloride, or hydrochloric acid. After the alginate is precipitated into a fine paste, it is dried, ground to the desired grain size, and finally purified through a variety of techniques. Commercial alginate for biomedical and pharmaceutical use is extracted and purified through more rigorous techniques, but these are trade secrets.
Derivatives
Various alginate-based materials can be produced, including porous scaffold material, alginate hydrogel, nonwoven fabric, and alginate membranes. Techniques used to produce these include ion cross-linking, microfluidic spinning, freeze drying, wet spinning, and immersive centrifugal jet spinning.
Calcium salts are added to a sodium alginate solution to induce ionic cross-linking, which produces the hydrogel. Freeze-drying the hydrogel to eliminate water produces the porous scaffold material.
Wet spinning consists of extruding an alginate solution from a spinneret into a calcium salt solution to induce ionic cross-linking (forming the gel), and then drawing the fibers out of the bath with draft rollers. Microfluidic spinning, a simpler and more eco-friendly implementation of the process, involves introducing calcium salt flows that run alongside and touch a central "core" flow of alginate, forming a "sheath" around it; the fiber then emerges from the core flow. This technique can be used to produce shaped and grooved fibers.
Alginate fiber, which is used in fabric, is usually produced through either microfluidic spinning or wet spinning, or electrospinning to obtain thinner fibers. The fabric, which can be used in wound dressing and other applications, is produced by carding and then needle punching the fibers.
Uses
As of 2022, alginate had become one of the most preferred biomaterials, being an abundant natural biopolymer. It is particularly useful as a biomaterial because of its nontoxicity, hygroscopicity, and biocompatibility, and because it can imitate local bioenvironments; its degradation products can be easily cleared by the kidneys.
Alginate absorbs water quickly, which makes it useful as an additive in dehydrated products such as slimming aids, and in the manufacture of paper and textiles.
Alginate is also used for waterproofing and fireproofing fabrics; in the food industry as a thickening agent for drinks and ice cream, in cosmetics, and as a gelling agent for jellies (known by the E number E401); and in sausage casings. Sodium alginate is mixed with soybean protein to make meat analogues.
Alginate is used as an ingredient in various pharmaceutical preparations, such as Gaviscon, in which it combines with bicarbonate to inhibit gastroesophageal reflux.
Sodium alginate is used as an impression-making material in dentistry, prosthetics, lifecasting, and for creating positives for small-scale casting.
Sodium alginate is used in reactive dye printing and as a thickener for reactive dyes in textile screen-printing. Alginates do not react with these dyes and wash out easily, unlike starch-based thickeners. It also serves as a material for micro-encapsulation.
Calcium alginate is used in different types of medical products, including skin wound dressings that promote healing and may be removed with less pain than conventional dressings.
Alginate hydrogels
In research on bone reconstruction, alginate composites have favorable properties encouraging regeneration, such as improved porosity, cell proliferation, and mechanical strength. Alginate hydrogel is a common biomaterial for bio-fabrication of scaffolds and tissue regeneration.
Covalent bonding of thiol groups to alginate improves in-situ gelling and mucoadhesive properties; the thiolated polymer (thiomer) forms disulfide bonds within its polymeric network and with cysteine-rich subdomains of the mucus layer. Thiolated alginates are used as in situ gelling hydrogels, and are under preliminary research as possible mucoadhesive drug delivery systems. Alginate hydrogels may be used for drug delivery, exhibiting responses to pH changes, temperature changes, redox, and the presence of enzymes.
See also
Hyaluronic acid: a polysaccharide in animals.
Agar
References
External links
Alginate seaweed sources
Alginate properties
Polysaccharides
Natural gums
Edible thickening agents
Copolymers
Dental materials
Excipients
Algal food ingredients
Brown algae
Food stabilizers
E-number additives | Alginic acid | [
"Physics",
"Chemistry",
"Biology"
] | 2,226 | [
"Carbohydrates",
"Dental materials",
"Algae",
"Materials",
"Brown algae",
"Matter",
"Polysaccharides"
] |
2,251,815 | https://en.wikipedia.org/wiki/Chris%20Phoenix%20%28nanotechnologist%29 | Chris Phoenix (born December 25, 1970) is the co-founder (with Mike Treder) and Director of Research of the Center for Responsible Nanotechnology (CRN), and has worked in the field of advanced nanotechnology for over 15 years. He obtained his BS in Symbolic Systems and MS in Computer Science from Stanford University in 1991. Since 2000, he has studied and written about molecular manufacturing. Phoenix, who lives in northern California, is a published author in nanotechnology and nanomedical research best known for his peer-reviewed paper, "Design of a Primitive Nanofactory", as well as his comprehensive outline of Thirty Essential Nanotechnology Studies. Phoenix has authored or co-authored many papers and essays published by the Society in connection with his work for CRN.
Phoenix also serves on the Scientific Advisory Board for Nanorex, Inc.
References
External links
"Revolution in a Box: the Center for Responsible Nanotechnology" - interview transcript
Interview with Nanotech.biz
Journal of Evolution and Technology, "Design of a Primitive Nanofactory" by Chris Phoenix
1970 births
Living people
Nanotechnologists | Chris Phoenix (nanotechnologist) | [
"Materials_science"
] | 229 | [
"Nanotechnology",
"Nanotechnologists"
] |
2,251,893 | https://en.wikipedia.org/wiki/Informavore | The term informavore (also spelled informivore) characterizes an organism that consumes information. It is meant to be a description of human behavior in modern information society, in comparison to omnivore, as a description of humans consuming food. George A. Miller
coined the term in 1983 as an analogy to how organisms survive by consuming negative entropy (as suggested by Erwin Schrödinger). Miller states, "Just as the body survives by ingesting negative entropy, so the mind survives by ingesting information. In a very general sense, all higher organisms are informavores."
An early use of the term was in a newspaper article by Jonathan Chevreau
where he quotes a speech made by Zenon Pylyshyn. Soon after, the term appeared in the introduction of Pylyshyn's seminal book on Cognitive Science, Computation and Cognition.
More recently the term has been popularized by philosopher Daniel Dennett in his book Kinds of Minds and by cognitive scientist Steven Pinker.
References
External links
"informavore" at Word Spy
Information Age
1980s neologisms | Informavore | [
"Technology"
] | 227 | [
"Information Age",
"Computing and society"
] |
2,251,939 | https://en.wikipedia.org/wiki/Sex%20hormone%20receptor | The sex hormone receptors, or sex steroid receptors, are a group of steroid hormone receptors that interact with the sex hormones, the androgens, estrogens, and progestogens, as well as with sex-hormonal agents such as anabolic steroids, progestins, and antiestrogens. They include the:
Androgen receptor (AR) (A, B) - binds and is activated by androgens such as testosterone and dihydrotestosterone (DHT)
Estrogen receptor (ER) (α, β) - binds and is activated by estrogens such as estradiol, estrone, and estriol
Progesterone receptor (PR) (A, B) - binds and is activated by progestogens such as progesterone
In addition, sex steroids such as estradiol have been found to bind and activate membrane steroid receptors, such as GPER.
See also
Gonadotropin-releasing hormone receptor
Gonadotropin receptor
Steroid hormone receptor
References
Intracellular receptors
G protein-coupled receptors
Transcription factors | Sex hormone receptor | [
"Chemistry",
"Biology"
] | 231 | [
"Transcription factors",
"Gene expression",
"Signal transduction",
"G protein-coupled receptors",
"Induced stem cells"
] |
2,251,965 | https://en.wikipedia.org/wiki/Earthing%20system | An earthing system (UK and IEC) or grounding system (US) connects specific parts of an electric power system with the ground, typically the equipment's conductive surface, for safety and functional purposes. The choice of earthing system can affect the safety and electromagnetic compatibility of the installation. Regulations for earthing systems vary among countries, though most follow the recommendations of the International Electrotechnical Commission (IEC). Regulations may identify special cases for earthing in mines, in patient care areas, or in hazardous areas of industrial plants.
In addition to electric power systems, other systems may require grounding for safety or function. Tall structures may have lightning rods as part of a system to protect them from lightning strikes. Telegraph lines may use the Earth as one conductor of a circuit, saving the cost of installation of a return wire over a long circuit. Radio antennas may require particular grounding for operation, as well as to control static electricity and provide lightning protection.
Purposes
There are three main purposes for earthing:
System earthing
System earthing serves a purpose of electrical safety throughout the system that is not caused by a short circuit or other electrical fault. It prevents static buildup and helps protect (as part of a surge protection system) against power surges caused by nearby lightning strikes or switching. Static buildup, as induced by friction for example, such as when wind blows onto a radio mast, is dissipated to the Earth. In the event of a surge, a lightning arrester, a surge arrester or a surge protection device (SPD) will divert the excess current to the Earth before it reaches an appliance.
System earthing allows for equipotential bonding to all metal works to prevent potential differences between them.
Having Earth as a common reference point keeps the electrical system's potential difference limited to the supply voltage.
Equipment earthing
Equipment earthing provides electrical safety during an electrical fault. It prevents equipment damage and electric shock. This type of earthing is not an earth connection, technically speaking. When current flows from a line conductor to an earth wire, as is the case when a line conductor makes contact with an earthed surface in a Class I appliance, an automatic disconnection of supply (ADS) device such as a circuit breaker or a residual-current device (RCD) will automatically open the circuit to clear the fault.
Functional earthing
Functional earthing serves a purpose other than electrical safety. Example purposes include electromagnetic interference (EMI) filtering in an EMI filter, and the use of the Earth as a return path in a single-wire earth return distribution system.
Low-voltage systems
In low-voltage networks, which distribute the electric power to the widest class of end users, the main concern for the design of earthing systems is the safety of consumers who use the electric appliances and their protection against electric shocks. The earthing system, in combination with protective devices such as fuses and residual current devices, must ultimately ensure that a person does not come into contact with a metallic object whose potential relative to the person's potential exceeds a safe threshold, typically set at about 50 V.
While there was considerable national variation, most developed countries introduced 220 V, 230 V, or 240 V sockets with earthed contacts either just before or soon after World War II. However, in the United States and Canada, where the supply voltage is only 120 V, power outlets installed before the mid-1960s generally did not include a ground (earth) pin. In the developing world, local wiring practices may or may not provide a connection to an earth conductor.
On low-voltage electricity networks with a phase-to-neutral voltage above 240 V and up to 690 V, which are mostly used in industry, mining equipment, and machines rather than publicly accessible networks, the earthing system design is just as important from a safety point of view as it is for domestic users.
The US National Electrical Code permitted the use of the supply neutral wire as the equipment enclosure connection to ground from 1947 to 1996 for ranges (including separate cooktops and ovens) and from 1953 to 1996 for clothes dryers, whether plug-in or permanently fixed, provided that the circuit originated in the main service panel. Normal imbalances in the circuit would create small equipment voltages with respect to Earth; a failure of the neutral conductor or connections would allow the equipment to go to full 120 volts, an easily lethal situation. The 1996 and newer editions of the NEC no longer permit this practice. For similar reasons, most countries now mandate dedicated protective earth connections in consumer wiring, a practice that has become nearly universal. In distribution networks however, where connections are fewer and less vulnerable, many countries do permit earth and neutral functions to share a conductor (see PEN conductor).
If the fault path between exposed conductive parts and the supply has sufficiently low impedance, then should such a part accidentally become energized, the fault current will cause the circuit overcurrent protection device (fuse or circuit breaker) to open, clearing the fault. However, if the impedance of the fault path is too high, then fault currents may not trip the overcurrent protection device quickly enough to meet the requirements of local electrical regulations. This is often the case with a TT-type earthing system. In such cases use of a residual-current device (RCD) may allow the required disconnection times to be met.
IEC terminology
International standard IEC 60364 distinguishes three families of earthing arrangements, using the two-letter codes — TN, TT, and IT.
The first letter indicates the relationship between the power-supply (generator or transformer) and Earth:
"T" — Direct connection of a point to Earth (Latin: terra)
"I" — All live parts isolated from Earth (Latin: īnsulātum), except perhaps via a high impedance.
The second letter indicates the relationship between the exposed-conductive-parts of the installation, and Earth:
"T" — Direct connection to Earth (Latin: terra), independent of a power-supply connection to Earth.
"N" — Direct connection to the point on the power-supply where the power-supply connects to Earth. This point is typically the neutral point of a star-connected transformer, from where a neutral connection might also be provided.
Any subsequent letter(s) indicate:
"S" — Neutral and protective functions provided by separate conductors.
"C" — Neutral and protective functions provided by the same single conductor (PEN conductor).
Types of TN system
In a TN (terra–neutral) earthing system, one of the points in the supply transformer is directly connected with Earth, usually the neutral-star-point in a star-connected supply transformer, the same point from which a neutral (N) connection would be provided. Exposed-conductive-parts within a consumer installation are connected with Earth via this connection at the transformer, and thus via the supply cable(s). The conductor that connects an exposed-conductive-part of the consumer's electrical installation to Earth is called the protective earth (PE; see also: Ground) conductor.
This arrangement is a current standard for residential and industrial electric systems particularly in Europe.
Three variants of TN systems are distinguished:
TN−S: PE and N are entirely separate conductors. If a neutral conductor is provided, and if the point from which the transformer connects to Earth is the neutral-star-point, then the PE and N conductors will be connected at this one and only point within the system. Note that the armoring of the supply cable is commonly used as the PE conductor between the transformer and installation rather than a dedicated conductor within the supply cable.
TN−C: A single combined PEN conductor (PE+N) fulfils the functions of both a PE and an N conductor. This applies not only to the supply cable but also within the consumer installation (no separation of neutral and earthing). (In 230/400 V consumer systems this is normally only used with distribution circuits.)
TN−C−S: Part of the system uses a combined PEN conductor, which is at some point split up into separate PE and N conductors. The combined PEN conductor typically spans between the transformer and the consumer installation, with separate earth and neutral conductors used within the installation.
In the UK, a common practice with TN-C-S is to connect the combined PEN supply conductor to Earth at multiple points along its length between the source transformer and the consumer installation. This is known as protective multiple earthing (PME). This is so common that consequently PME is often incorrectly used as a synonym. Similar systems in Australia and New Zealand are designated as multiple earthed neutral (MEN) and, in North America, as multi-grounded neutral (MGN).
It is possible to have both TN-S and TN-C-S supplies taken from the same transformer. For example, the sheaths on some underground cables corrode and stop providing good earth connections, and so homes where high resistance "bad earths" are found may be converted to TN-C-S. This is only possible on a network when the neutral is suitably robust against failure. Conversion is not always possible. The PEN must be suitably reinforced against failure, as an open circuit PEN can impress full phase voltage on any exposed metal connected to the system earth downstream of the break. The alternative is to convert the installation to TT.
The main attraction of a TN system is that the low impedance earth path means that overcurrent protection devices can usually cut off the supply suitably quickly in the event of a (line-to-) earth fault. This is not typically the case for TT systems. The invention of residual current devices (RCDs) provided another means of protection from earth faults, which can be critical for a TT system as an RCD is often the only means of achieving suitable quick disconnection times, but is simply used as a secondary layer of protection in a TN system.
A danger of TN-C-S systems, especially for installations in rural locations where supplies are more likely to be provided with overhead cables exposed to the elements, or certain kinds of installations such as supplies to caravans or boats, is the risk of an open or broken PEN fault whereby the supply PEN conductor is severed or significantly corroded. In such a scenario current will take any alternate path available, and since extraneous-conductive-parts like water and gas pipes should be bonded to an installation's earthing, and the earthing is tied to the neutral, neutral current can still flow via the Earth, potentially passing through neighbouring properties (if their neutral is still intact), and voltage-to-Earth can rise significantly, especially should the break occur upstream of properties on different supply phases, in which case the floating neutral could cause voltage to rise as high as three-phase line-to-line voltage (400 V nominal in the UK). Hypothetically if no complete path existed for current to flow, then exposed-conductive-parts would rise to line voltage. PME helps mitigate risk somewhat. The danger is serious enough that the UK Electricity Safety, Quality and Continuity Regulations 2002 forbids use of PEN conductors to supply caravans and boats where simultaneous contact with Earth is especially high.
TN-C systems are not permitted in some countries. The UK for instance forbids it in the Electricity Safety, Quality and Continuity Regulations 2002. Note that an RCD cannot work on a TN-C system.
TT system
In a TT (terra–terra) earthing system, just as with a TN system, there is a direct connection to Earth at the supply transformer. But, unlike TN, the exposed-conductive-parts at the consumer installation are independent from it, instead having an entirely separate connection to Earth via a local earth electrode (sometimes referred to as the terra firma connection). I.e. there is no 'earth wire' between supply and consumer, only a connection through the mass of the Earth.
The big advantage of the TT earthing system is the reduced conducted interference from other users' connected equipment. TT has always been preferable for special applications like telecommunication sites that benefit from the interference-free earthing. Also, TT systems do not pose any serious risks in the case of a broken neutral conductor. In addition, in locations where power is distributed overhead, earth conductors are not at risk of becoming live should any overhead distribution conductor be fractured by, say, a fallen tree or branch.
A big disadvantage of TT systems is that the impedance of the earth path is often so high that it can prevent overcurrent protection devices from breaking the supply sufficiently quickly to meet safety regulation. This issue though can be addressed by instead relying upon RCD protection, which does not require a large fault current to activate. In the pre-RCD era the TT earthing system was unattractive for general use because of this difficulty of achieving reliable automatic disconnection of supply (ADS).
In some countries (such as the UK) TT is recommended for situations where a low impedance equipotential zone is impractical to maintain by bonding, where there is significant outdoor wiring, such as supplies to mobile homes and some agricultural settings, or where a high fault current could pose other dangers, such as at fuel depots or marinas. The TT earthing system is used throughout Japan, with RCD units in most industrial settings or even at home. This can impose added requirements on variable frequency drives and switched-mode power supplies which often have substantial filters passing high frequency noise to the ground conductor.
IT system
In an IT (īnsulātum–terra) earthing system, the electrical distribution system has no connection to Earth at all, or it has only a very high-impedance connection.
In IT systems, a single insulation fault is unlikely to cause dangerous currents to flow through a human body in contact with Earth, because no low-impedance circuit exists for such a current to flow. However, a first insulation fault can effectively turn an IT system into a TN system, and then a second insulation fault can lead to dangerous body currents. Worse, in a multi-phase system, if one of the line conductors made contact with earth, it would cause the other phase cores to rise to the phase-phase voltage relative to earth rather than the phase-neutral voltage. IT systems also experience larger transient overvoltages than other systems.
Comparison
Other terminologies
While the national wiring regulations for buildings of many countries follow the IEC 60364 terminology, in North America (United States and Canada), the term "equipment grounding conductor" refers to equipment grounds and ground wires on branch circuits, and "grounding electrode conductor" is used for conductors bonding an earth/ground rod, electrode or similar to a service panel. The "local" earth/ground electrode provides "system grounding" at each building where it is installed.
The "Grounded" current carrying conductor is the system "neutral".
Australian and New Zealand standards use a modified protective multiple earthing (PME) system called multiple earthed neutral (MEN). The neutral is grounded (earthed) at each consumer service point, thereby effectively bringing the neutral potential difference towards zero along the whole length of LV lines. In North America, the term multigrounded neutral (MGN) is used.
In the UK and some Commonwealth countries, the term "PNE", meaning phase-neutral-earth is used to indicate that three (or more for non-single-phase connections) conductors are used, i.e., PN-S.
Resistance-earthed neutral (India)
A resistance-earthed system is used for mining in India, as per the Central Electricity Authority Regulations. Instead of a solid connection of the neutral to earth, a neutral grounding resistor (NGR) is used to limit the current to ground to less than 750 mA. Due to the fault-current restriction, it is safer for gassy mines. Since the earth leakage is restricted, leakage protection devices can be set to less than 750 mA. By comparison, in a solidly earthed system, the earth fault current can be as much as the available short-circuit current.
The neutral earthing resistor is monitored to detect an interrupted ground connection and to shut off power if a fault is detected.
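The resistor value for such a scheme follows from Ohm's law applied to the phase-to-earth voltage. The short Python sketch below only illustrates that arithmetic; the 3.3 kV supply voltage is an assumed example figure (not taken from the regulations cited above) and ngr_resistance is a hypothetical helper name.

import math

def ngr_resistance(line_to_line_voltage, max_earth_fault_current):
    # A bolted single-phase-to-earth fault places the phase-to-neutral
    # voltage (V_LL / sqrt(3)) across the resistor, so R = V_phase / I_limit.
    v_phase = line_to_line_voltage / math.sqrt(3)
    return v_phase / max_earth_fault_current

# Assumed example: a 3.3 kV mining supply limited to the 750 mA figure above.
r = ngr_resistance(3300.0, 0.750)
print(f"An NGR of roughly {r:.0f} ohms limits a bolted earth fault to 750 mA")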
Earth leakage protection
To avoid accidental shock, current sensing devices are used at installations to isolate the power when leakage current exceeds a certain limit. RCDs are used for this purpose. Previously, for a short period before the invention of the RCD, voltage operated earth leakage circuit breaker (VO-ELCB) devices were used. In industrial applications, earth leakage relays are used with separate core balanced current transformers. This protection works in the range of milli-Amps and can be set from 30 mA to 3000 mA.
Earth connectivity check
A separate pilot wire is run from the distribution/equipment supply system, in addition to the earth wire, to supervise the continuity of the earth conductor. This is used in the trailing cables of mining machinery. If the earth wire is broken, the pilot wire allows a sensing device at the source end to interrupt power to the machine. This type of circuit is a must for portable heavy electric equipment (such as an LHD (load, haul, dump machine)) used in underground mines.
Electromagnetic compatibility
In TN-S and TT systems, the consumer has a low-noise connection to Earth, which does not suffer from the voltage that appears on the N conductor as a result of the return currents and the impedance of that conductor. This is of particular importance with some types of telecommunication and measurement equipment.
In TT systems, each consumer has its own connection to Earth, and will not notice any currents that may be caused by other consumers on a shared PE line.
Regulations
In the UK the Electricity Safety, Quality and Continuity Regulations 2002 governs electrical supplies. Highlights include: TN-C supplies are forbidden; Cable armouring ("outer conductor of a line with concentric conductors") must be connected to Earth; Every supply neutral must be connected to Earth; Consumers are forbidden from combining earth and neutral within their installations; and PEN conductor based supplies (TN-C-S) are forbidden to caravans and boats.
In the United States National Electrical Code and Canadian Electrical Code, the supply from the distribution transformer must be TN-C-S. The neutral must be connected to earth only on the supply side of the customer's disconnecting switch.
In Argentina, Australia (TN-C-S), France (TT), Israel (TN-C-S), and New Zealand (TN-C-S), the customers must provide their own ground connections.
In Japan building wiring uses TT earthing in most installations.
In Australia, the multiple earthed neutral (MEN) earthing system is used, described in Section 5 of AS/NZS 3000. For a low-voltage (e.g. domestic) customer, this means TN-C-S, with the neutral connected to Earth multiple times between the supply transformer and the consumer installation, and with the neutral-earth split occurring at the Main Switchboard.
In Denmark, the high-voltage regulations (Stærkstrømsbekendtgørelsen), and in Malaysia, the Electricity Ordinance 1994, state that all consumers must use TT earthing, though in rare cases TN-C-S may be allowed (used in the same manner as in the United States). Rules are different when it comes to larger companies.
In India, as per the Central Electricity Authority Regulations (CEAR, 2010), rule 41, there is provision for earthing the neutral wire of a 3-phase, 4-wire system and the additional third wire of a 2-phase, 3-wire system. Earthing is to be done with two separate connections. The grounding system must also have at least two earth pits (electrodes) to better ensure proper grounding. According to rule 42, an installation with a connected load above 5 kW at more than 250 V shall have a suitable earth leakage protective device to isolate the load in case of an earth fault or leakage.
Application examples
In the areas of UK where underground power cabling is prevalent, the TN-S system is common. Older urban and suburban homes in the UK tend to have TN-S supplies where the earth connection is delivered through a lead sheath of an underground lead-and-paper cable.
In India LT supply is generally through TN-S system. Neutral is double grounded at each distribution transformer. Neutral and earth conductors run separately on overhead distribution lines. Separate conductors for overhead lines and armoring of cables are used for earth connection. Additional earth electrodes/pits are installed at each user end to provide redundant path to Earth.
Most modern homes in Europe have a TN-C-S earthing system. The combined neutral and earth occurs between the nearest transformer substation and the service cut out (the fuse before the meter). After this, separate earth and neutral cores are used in all the internal wiring.
In Norway the IT system with 230 V between the phases is quite extensively used. It is estimated that 70% of all households are connected to the grid via the IT system. Newer residential areas are, however, mostly built with TN-C-S, largely driven by the fact that three-phase products for the consumer market - such as electric vehicle charging stations - are developed for the European market, where TN systems with 400 V between the phases dominate.
Laboratory rooms, medical facilities, construction sites, repair workshops, mobile electrical installations, and other environments that are supplied via engine-generators where there is an increased risk of insulation faults, often use an IT earthing arrangement supplied from isolation transformers. To mitigate the two-fault issues with IT systems, the isolation transformers should supply only a small number of loads each and should be protected with an insulation monitoring device (generally used only by medical, railway or military IT systems, because of cost).
In remote areas, where the cost of an additional PE conductor outweighs the cost of a local Earth connection, TT systems are commonly used in some countries, especially in older properties or in rural areas, where safety might otherwise be threatened by the fracture of an overhead PE conductor by, say, a fallen tree branch.
In Australia the TN-C-S system is in use; however, the wiring rules state that, in addition, each customer must provide a separate connection to Earth, via a dedicated earth electrode. (Any metallic water pipes entering the consumer's premises must also be "bonded" to the earthing point at the distribution Switchboard/Panel.) In Australia and New Zealand the connection between the protective earth bar and the neutral bar at the main Switchboard/Panel is called the multiple earthed neutral Link or MEN Link. This MEN link is removable for installation testing purposes, but is connected during normal service by either a locking system (locknuts for instance) or two or more screws. In the MEN system, the integrity of the neutral is paramount. In Australia, new installations must also bond the foundation concrete re-enforcing under wet areas to the protective earth conductor (AS3000), typically increasing the size of the earthing (i.e. reducing resistance), and providing an equipotential plane in areas such as bathrooms. In older installations, it is not uncommon to find only the water pipe bond, and it is allowed to remain as such, but the additional earth electrode must be installed if any upgrade work is done. The incoming protective earth/neutral conductor is connected to a neutral bar (located on the customer's side of the electricity meter's neutral connection) which is then connected via the customer's MEN link to the earth bar – beyond this point, the protective earth and neutral conductors are separate.
High-voltage systems
In high-voltage networks (above 1 kV), which are far less accessible to the general public, the focus of earthing system design is less on safety and more on reliability of supply, reliability of protection, and impact on the equipment in presence of a short circuit. Only the magnitude of phase-to-ground short circuits, which are the most common, is significantly affected with the choice of earthing system, as the current path is mostly closed through the earth. Three-phase HV/MV power transformers, located in distribution substations, are the most common source of supply for distribution networks, and type of grounding of their neutral determines the earthing system.
There are five types of neutral earthing:
Solid-earthed neutral
Unearthed neutral
Resistance-earthed neutral, subdivided into:
Low-resistance earthing
High-resistance earthing
Reactance-earthed neutral
Using earthing transformers (such as the Zigzag transformer)
Solid-earthed neutral
In a solidly or directly earthed neutral, the transformer's star point is directly connected to the ground. In this solution, a low-impedance path is provided for the ground fault current to close and, as a result, its magnitude is comparable with that of three-phase fault currents. Since the neutral remains at a potential close to the ground, voltages in unaffected phases remain at levels similar to the pre-fault ones; for that reason, this system is regularly used in high-voltage transmission networks, where insulation costs are high.
Resistance-earthed neutral
To limit the earth-fault current during a short circuit, an additional neutral earthing resistor (NER) is added between the neutral of the transformer's star point and earth.
Low-resistance earthing
With low-resistance earthing, the fault-current limit is relatively high. In India it is restricted to 50 A for open-cast mines, according to the Central Electricity Authority Regulations (CEAR, 2010), rule 100.
High-resistance earthing
A high-resistance grounding system grounds the neutral through a resistance that limits the ground-fault current to a value equal to or slightly greater than the capacitive charging current of that system.
Unearthed neutral
In an unearthed, isolated or floating neutral system, as in the IT system, there is no direct connection of the star point (or any other point in the network) to Earth. As a result, ground fault currents have no path to be closed and thus have negligible magnitudes. However, in practice, the fault current will not be equal to zero: conductors in the circuit — particularly underground cables — have an inherent capacitance towards the Earth, which provides a path of relatively high impedance.
Systems with isolated neutral may continue operation and provide uninterrupted supply even in the presence of a ground fault. However, while the fault is present, the potential of the other two phases relative to the ground reaches √3 times the normal operating voltage, creating additional stress for the insulation; insulation failures may inflict additional ground faults in the system, now with much higher currents.
Presence of an uninterrupted ground fault may pose a significant safety risk: if the current exceeds 4–5 A, an electric arc develops, which may be sustained even after the fault is cleared. For that reason, unearthed systems are chiefly limited to underground and submarine networks, and to industrial applications, where the reliability need is high and the probability of human contact is relatively low. In urban distribution networks with multiple underground feeders, the capacitive current may reach several tens of amperes, posing a significant risk for the equipment.
The benefit of low fault current and continued system operation thereafter is offset by the inherent drawback that the fault location is hard to detect.
Grounding rods
According to the IEEE standards, grounding rods are made from materials such as copper and steel. Several criteria guide the choice of a grounding rod: corrosion resistance, diameter (depending on the fault current), conductivity, and others. There are several types derived from copper and steel: copper-bonded, stainless-steel, solid copper, and galvanized steel rods. In recent decades, chemical grounding rods for low-impedance grounds, containing natural electrolytic salts, and nano-carbon-fiber grounding rods have been developed.
Grounding connectors
Connectors for earthing installations provide the connections between the various components of the earthing and lightning protection installations (earthing rods, earthing conductors, current leads, busbars, etc.).
For high voltage installations, exothermic welding is used for underground connections.
Soil resistance
Soil resistance is a major aspect in the design and calculation of an earthing system/grounding installation. Its resistance determines the efficiency of the diversion of unwanted currents to zero potential (ground). The resistance of a geological material depends on several components: the presence of metal ores, the temperature of the geological layer, the presence of archeological or structural features, the presence of dissolved salts, and contaminants, porosity and permeability. There are several basic methods for measuring soil resistance. The measurement is performed with two, three or four electrodes. The measurement methods are: pole-pole, dipole-dipole, pole-dipole, Wenner method, and the Schlumberger method.
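As an illustration of one of these methods, the sketch below applies the standard simplified Wenner four-electrode formula, resistivity = 2·π·a·R, which assumes the electrode depth is small compared with the spacing a; the numeric values are assumed example readings, not measurements from any cited source.

import math

def wenner_resistivity(spacing_m, measured_resistance_ohm):
    # Simplified Wenner formula: rho = 2 * pi * a * R, valid when the
    # electrodes are driven to a depth much smaller than the spacing a.
    return 2.0 * math.pi * spacing_m * measured_resistance_ohm

# Assumed example reading: electrodes 3 m apart, instrument shows 25 ohms.
rho = wenner_resistivity(3.0, 25.0)
print(f"Apparent soil resistivity: {rho:.0f} ohm-metres")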
See also
Electrical wiring
Ground and neutral
Soil resistivity
References
General
IEC 60364-1: Electrical installations of buildings — Part 1: Fundamental principles, assessment of general characteristics, definitions. International Electrotechnical Commission, Geneva.
John Whitfield: The Electricians Guide to the 16th Edition IEE Regulations, Section 5.2: Earthing systems, 5th edition.
Geoff Cronshaw: Earthing: Your questions answered. IEE Wiring Matters, Autumn 2005.
EU Leonardo ENERGY earthing systems education center: Earthing systems resources
Dmitry Makarov: What Is a TN-C-S Earthing System? Definition, Meaning, Diagrams.
Electric power distribution
Electrical wiring
Electrical safety
IEC 60364 | Earthing system | [
"Physics",
"Engineering"
] | 6,048 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
2,251,988 | https://en.wikipedia.org/wiki/Orphan%20receptor | In biochemistry, an orphan receptor is a protein that has a similar structure to other identified receptors but whose endogenous ligand has not yet been identified. If a ligand for an orphan receptor is later discovered, the receptor is referred to as an "adopted orphan". Conversely, the term orphan ligand refers to a biological ligand whose cognate receptor has not yet been identified.
Examples
Examples of orphan receptors are found in the G protein-coupled receptor (GPCR) and nuclear receptor families.
If an endogenous ligand is found, the orphan receptor is "adopted" or "de-orphanized". An example is the nuclear receptor farnesoid X receptor (FXR) and the GPCR TGR5/GPCR19/G protein-coupled bile acid receptor, both of which are activated by bile acids. Adopted orphan receptors in the nuclear receptor group include FXR, liver X receptor (LXR), and peroxisome proliferator-activated receptor (PPAR). Another example of an orphan receptor site is the PCP binding site in the NMDA receptor, a type of ligand-gated ion channel. This site is where the recreational drug PCP works, but no endogenous ligand is known to bind to this site.
GPCR orphan receptors are usually given the name "GPR" followed by a number, for example GPR1. In the GPCR family, nearly 100 receptor-like genes remain orphans.
Discovery
Historically, receptors were discovered by using ligands to "fish" for their receptors. Hence, by definition, these receptors were not orphans. However, with modern molecular biology techniques such as reverse pharmacology, screening of cDNA libraries, and whole genome sequencing, receptors have been identified based on sequence similarity to known receptors, without knowing what their ligands are.
References
External links
Receptors | Orphan receptor | [
"Chemistry"
] | 377 | [
"Receptors",
"Signal transduction"
] |
2,252,026 | https://en.wikipedia.org/wiki/Rudolf%20Grimm | Rudolf Grimm (born 10 November 1961) is an experimental physicist from Austria. His work centres on ultracold atoms and quantum gases. He was the first scientist worldwide who, with his team, succeeded in realizing a Bose–Einstein condensation of non-polar molecules.
Career
Grimm graduated in physics from the University of Hannover in 1986. From 1986 to 1989 he was a post-graduate researcher at the ETH Zurich (Swiss Federal Institute of Technology), then went on to the Institute of Spectroscopy of the USSR Academy of Sciences in Troitsk near Moscow for half a year. He spent the next ten years in Heidelberg as a researcher at the Max Planck Institute for Nuclear Physics. In 1994, Grimm applied to the University of Heidelberg to qualify as a professor by receiving the "venia docendi" in experimental physics. In the year 2000, he was appointed to a chair in experimental physics at the University of Innsbruck, where he has been Dean of the Faculty for Mathematics, Computer Science and Physics since 2005 and Director of the Research Center for Quantum Physics from 2006. Since 2003, Grimm has also held the position of Scientific Director at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences (ÖAW). Grimm is married, with three children.
Research
The work of the experimental physicist concentrates on Bose–Einstein condensation of atoms and molecules and on fermionic quantum gases. In 2002 his working group succeeded for the first time ever in producing a Bose–Einstein condensate from caesium atoms. In the following year, the team produced the first Bose–Einstein condensate of molecules (simultaneously with Deborah S. Jin's group at JILA, Boulder, Colorado). In 2004, the Innsbruck scientists achieved a fermionic condensate. In his work on collective oscillations and pairing energies, Grimm found first evidence of the flow of particles without any loss of energy (superfluidity) in Fermi condensates. Meanwhile, Grimm and his team have succeeded in producing more complex molecules in ultracold quantum gases. Currently Grimm is concentrating his efforts on producing mixed condensates from atoms of different elements. In 2006, his working group also managed to lift the veil on an old mystery of physics: they succeeded in the first experimental observation of Efimov states, mysterious quantum states that the Russian scientist Vitali Efimov had theoretically predicted in the early 1970s.
Awards
Grimm has received numerous awards for his achievements. In 2005 he was presented with the Wittgenstein Award, Austria's highest scientific accolade. In the same year, the Austrian daily paper Die Presse made him „Austrian (Researcher) of the Year 2005". Years before, he had won the Gerhard Hess Prize, an early-career researcher grant of the German Research Foundation (DFG) (1996), and the Silver Medal of the ETH Zurich (1989). Recently he received the Beller Lectureship Award of the American Physical Society (APS) (2007), the Science Award of the Region of Tyrol (2008) and was named „Austrian Scientist of the Year 2009" by the Austrian Club of Education and Science Journalists. In 2018, he was honored jointly with Vitaly Efimov with the inaugural Faddeev Medal. In 2021, he received an ERC Advanced Grant.
In 2006, Grimm became a full member of the Austrian Academy of Sciences.
References
External links
CV Rudolf Grimm
Working group Ultracold Atoms and Quantum gases, University of Innsbruck
Institute of Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences
Wittgenstein Award Laureate Rudolf Grimm
Quantum physicists
21st-century Austrian physicists
21st-century German physicists
Scientists from Mannheim
University of Hanover alumni
Academic staff of Heidelberg University
Academic staff of the University of Innsbruck
1961 births
Living people
Fellows of the American Physical Society | Rudolf Grimm | [
"Physics"
] | 792 | [
"Quantum physicists",
"Quantum mechanics"
] |
2,252,258 | https://en.wikipedia.org/wiki/Royal%20stars | The Royal Stars, also known as the Royal Stars of Persia, are Aldebaran, Regulus, Antares, and Fomalhaut, four prominent stars that played a significant role in ancient astronomy and astrology. These stars were regarded as the celestial guardians of the sky during the time of the Persian Empire (550 BCE–330 BCE) and were considered markers of the four cardinal directions.
The idea of these stars as "guardians" can be traced back to Babylonian astronomy, which significantly influenced Persian cosmology. The Persians further incorporated these stars into their Zoroastrian worldview, assigning them roles as watchers of the sky and associating them with seasonal transitions and divine entities.
Babylonian and Assyrian Origins
The concept of the Four Royal Stars predates the Persian Empire and originates in ancient Babylonian and Assyrian astronomy. By 747 BCE, the Babylonian King Nabonassar implemented a calendar system based on the motions of the moon relative to these four stars.
The Babylonians used two primary cycles for this system: an eight-year cycle and a nineteen-year cycle, the latter becoming the standard lunisolar calendar. By 700 BCE, the Assyrians had mapped the ecliptic cycle and identified these stars as key markers of the zodiacal constellations. This knowledge allowed them to distinguish fixed stars from wandering planets and further refine the study of celestial phenomena.
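A quick calculation shows why the nineteen-year cycle displaced the eight-year one as the standard lunisolar scheme. The Python sketch below uses modern mean values for the synodic month and tropical year, and assumes the traditional month counts of 99 months for the eight-year cycle and 235 months for the nineteen-year cycle, figures not stated in the text above.

SYNODIC_MONTH = 29.530589   # mean synodic month in days (modern value)
TROPICAL_YEAR = 365.24219   # mean tropical year in days (modern value)

def cycle_mismatch(years, months):
    # Days by which the given number of lunar months overshoots the solar years.
    return months * SYNODIC_MONTH - years * TROPICAL_YEAR

print(f"8-year cycle, 99 months:   {cycle_mismatch(8, 99):+.2f} days")
print(f"19-year cycle, 235 months: {cycle_mismatch(19, 235):+.2f} days")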
The four stars were tied to specific constellations:
Aldebaran in Taurus
Regulus in Leo
Antares in Scorpius
Fomalhaut in Piscis Austrinus
Persian Cosmology and Zoroastrian Integration
With the rise of the Persian Empire, these stars became deeply embedded in Zoroastrian cosmology. In Persian tradition, the stars were associated with seasonal transitions and were considered "watchers" of the cardinal directions:
Aldebaran (Tascheter): Watcher of the East, associated with the vernal equinox.
Regulus (Venant): Watcher of the North, associated with the summer solstice.
Antares (Satevis): Watcher of the West, associated with the autumnal equinox.
Fomalhaut (Haftorang): Watcher of the South, associated with the winter solstice.
The Bundahishn, a Zoroastrian text of cosmogony and cosmology, describes these stars in connection with divine entities and cosmic order. Each star was linked to a specific Zoroastrian deity or spirit:
Tishtrya (Aldebaran): The deity of rain and fertility, celebrated in the Tir Yasht, where Tishtrya battles drought-bringing demons.
Vanant (Regulus): The guardian of the west and a warrior spirit against evil forces, as detailed in Zoroastrian texts.
Satevis (Antares): A celestial entity associated with balance and the autumnal harvest, mentioned in the Bundahishn.
Haftorang (Fomalhaut): Symbolizing cosmic order and often linked to Ursa Major as "Seven Thrones", representing stability and guidance.
Names of the Four Royal Stars
The following table illustrates the evolution of the names of the Four Royal Stars in different stages of Persian history:
Uses
The Royal Stars were used for:
1. Navigation: As fixed markers for the four cardinal directions.
2. Calendar Systems: Tracking lunar and solar cycles for agricultural and religious purposes.
3. Astrology: Interpreting celestial alignments to predict major events. Regulus, in particular, was associated with kingship and power, symbolizing strength and divine favor.
Criticism
Modern scholars, such as George A. Davis Jr., have questioned the connection between the "Four Royal Stars" and ancient Persian cosmology. While the stars were significant in Persian astronomy, Davis noted discrepancies between the Zoroastrian naming scheme and the modern identification of the Royal Stars.
References
Further reading
Calendars
History of astrology
Persian mythology
Stellar groupings
Technical factors of astrology | Royal stars | [
"Physics",
"Astronomy"
] | 872 | [
"Calendars",
"Physical quantities",
"Time",
"History of astronomy",
"Spacetime",
"History of astrology"
] |
2,252,392 | https://en.wikipedia.org/wiki/Haloperidol%20decanoate | Haloperidol decanoate, sold under the brand name Haldol Decanoate among others, is a typical antipsychotic which is used in the treatment of schizophrenia. It is administered by injection into muscle at a dose of 100 to 200 mg once every 4 weeks or monthly. The dorsogluteal site is recommended. A 3.75-cm (1.5-inch), 21-gauge needle is generally used, but obese individuals may require a 6.5-cm (2.5-inch) needle to ensure that the drug is indeed injected intramuscularly and not subcutaneously. Haloperidol decanoate is provided in the form of 50 or 100 mg/mL oil solution of sesame oil and benzyl alcohol in ampoules or pre-filled syringes. Its elimination half-life after multiple doses is 21 days. The medication is marketed in many countries throughout the world.
See also
List of antipsychotics § Antipsychotic esters
References
4-Phenylpiperidines
Antipsychotic esters
Butyrophenone antipsychotics
4-Chlorophenyl compounds
Decanoate esters
4-Fluorophenyl compounds
Drugs developed by Johnson & Johnson
Janssen Pharmaceutica
NMDA receptor antagonists
Prodrugs
Prolactin releasers
Suspected embryotoxicants
Suspected fetotoxicants
Tertiary alcohols
Typical antipsychotics | Haloperidol decanoate | [
"Chemistry"
] | 305 | [
"Chemicals in medicine",
"Prodrugs"
] |
2,252,531 | https://en.wikipedia.org/wiki/Extended-release%20morphine | Extended-release (or slow-release) formulations of morphine are those whose effect last substantially longer than bare morphine, availing for, e.g., one administration per day. Conversion between extended-release and immediate-release (or "regular") morphine is easier than conversion to or from an equianalgesic dose of another opioid with different half-life, with less risk of altered pharmacodynamics.
Brand names
Brand names for this formulation of morphine include Avinza, Kadian, MS Contin, MST Continus, Morphagesic, Zomorph, Filnarine, MXL, Malfin, Contalgin, Dolcontin, and DepoDur. MS Contin is a trademark of Purdue Pharma, and is available in the United States and Australia. In the UK, MS Contin is marketed by NAPP Pharmaceuticals as MST Continus. MS Contin is a DEA Schedule II substance in the United States, a Schedule 8 (controlled) drug in Australia and a Schedule 2 CD (Controlled Drug) in the UK.
Avinza is made by King Pharmaceuticals and Kadian is made by Actavis Pharmaceuticals. Unlike the MS Contin brand and its generic versions, Kadian and Avinza are designed to be 12- to 24-hour release, not 8- to 12-hour. So instead of 2–3 times a day dosing, it can be 1–2 times.
MST Continus and MXL are registered copyright and trademark of Napp Pharmaceuticals and are available in the UK. MXL is a 24-hour release formula designed to be taken once daily. It is available in doses between 30 mg and 200 mg in 30 mg intervals (equating to between 1.25 mg/hour and 8.33 mg/hour). MST Continus is a 12-hour release formula, therefore it is given 2 times per day. It is available in the following doses: 5 mg, 10 mg, 15 mg, 30 mg, 60 mg, 100 mg and 200 mg tablets (equating to between 0.416 mg/hour and 16.67 mg/hour).
Dosage comparison
For constant pain, the relieving effect of extended-release morphine given once (for Kadian) or twice (for MS Contin) every 24 hours is roughly the same as that of multiple administrations of immediate-release (or "regular") morphine. Morphine sulfate pentahydrate (trade names including Dolcontin) has a higher molecular mass than morphine base, and therefore 10 mg of morphine sulfate pentahydrate contains approximately 7.5 mg of morphine free base. Extended-release morphine can be administered together with "rescue doses" of immediate-release morphine pro re nata in case of breakthrough pain, each generally consisting of 5% to 15% of the 24-hour extended-release dosage.
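The salt-to-free-base figure quoted above is simple molecular-weight arithmetic. The sketch below uses standard approximate molecular weights; free_base_mg is a hypothetical helper name introduced only for illustration.

MW_MORPHINE_BASE = 285.34                                  # C17H19NO3, g/mol
MW_SULFATE_PENTAHYDRATE = 2 * 285.34 + 98.08 + 5 * 18.02   # (morphine)2 . H2SO4 . 5 H2O

def free_base_mg(salt_mg):
    # Two morphine molecules per formula unit of the sulfate pentahydrate salt.
    return salt_mg * (2 * MW_MORPHINE_BASE) / MW_SULFATE_PENTAHYDRATE

print(f"10 mg of the salt contains about {free_base_mg(10):.1f} mg of morphine base")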
Structure
Some brands have a pellet (spheroid) formulation, made by extrusion and spheronization, which can be used for controlled release of the drug in the body, whereas powder-filled pellets generally cannot. The plastic spheres containing powder have micropores that open at varying pH levels, to maintain a mostly constant release during transit through the digestive tract. The spheres themselves, the outer shells, pass undigested in most patients. Other brands are thought to use ethylcellulose coatings to control drug release from pellets. Another use these medications have is that they can be given via NG tube, the pellets being very small. This makes them one of the few extended-release oral medications that can be given by feeding tube.
Opioid replacement therapy
According to a Cochrane review in 2013, extended-release morphine as an opioid replacement therapy for people with heroin addiction or dependence confers a possible reduction in opioid use and fewer depressive symptoms, but overall more adverse effects when compared to other forms of long-acting opioids. The length of time in treatment was not found to be significantly different.
References
External links
Advanced consumer information: morphine sulfate
Consumer Medicine Information (Australia)
Analgesics
Opiates
Sulfates
Drugs developed by Pfizer | Extended-release morphine | [
"Chemistry"
] | 889 | [
"Sulfates",
"Salts"
] |
2,252,579 | https://en.wikipedia.org/wiki/No-three-in-line%20problem | The no-three-in-line problem in discrete geometry asks how many points can be placed in the grid so that no three points lie on the same line. The problem concerns lines of all slopes, not only those aligned with the grid. It was introduced by Henry Dudeney in 1900. Brass, Moser, and Pach call it "one of the oldest and most extensively studied geometric questions concerning lattice points".
At most 2n points can be placed, because 2n + 1 points in a grid would include a row of three or more points, by the pigeonhole principle. Although the problem can be solved with 2n points for every n up to 46, it is conjectured that fewer than 2n points can be placed in grids of large size. Known methods can place linearly many points in grids of arbitrary size, but the best of these methods place slightly fewer than 1.5n points.
Several related problems of finding points with no three in line, among other sets of points than grids, have also been studied. Although originating in recreational mathematics, the no-three-in-line problem has applications in graph drawing and to the Heilbronn triangle problem.
Small instances
The problem was first posed by Henry Dudeney in 1900, as a puzzle in recreational mathematics, phrased in terms of placing the 16 pawns of a chessboard onto the board so that no three are in a line. This is exactly the no-three-in-line problem for the case n = 8. In a later version of the puzzle, Dudeney modified the problem, making its solution unique, by asking for a solution in which two of the pawns are on squares d4 and e5, attacking each other in the center of the board.
Many authors have published solutions to this problem for small values of n, and by 1998 it was known that 2n points could be placed on an n × n grid with no three in a line for all n up to 46, and for some larger values. The numbers of solutions with 2n points, and the numbers of equivalence classes of such solutions under reflections and rotations, have been tabulated for small values of n.
Upper and lower bounds
The exact number of points that can be placed, as a function of n, is not known. However, both proven and conjectured bounds limit this number to within a range proportional to n.
General placement methods
A solution of Paul Erdős, published by Roth (1951), is based on the observation that when n is a prime number, the set of n grid points (i, i² mod n), for 0 ≤ i < n, contains no three collinear points. When n is not prime, one can perform this construction for a p × p grid contained in the n × n grid, where p is the largest prime that is at most n. Because the gap between consecutive primes is much smaller than the primes themselves, p will always be close to n, so this method can be used to place n − o(n) points in the n × n grid with no three points collinear.
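The construction is easy to check by brute force for a small prime. The following Python sketch builds the point set (i, i² mod p) and verifies, via the cross-product collinearity test, that no three of the points are collinear; it is an illustration of the argument above, not code from any cited source.

from itertools import combinations

def erdos_points(p):
    # Points (i, i^2 mod p) for a prime p.
    return [(i, (i * i) % p) for i in range(p)]

def collinear(a, b, c):
    # Zero cross product exactly when the three points lie on one line.
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

p = 11  # a small prime keeps the brute-force check fast
points = erdos_points(p)
assert not any(collinear(a, b, c) for a, b, c in combinations(points, 3))
print(f"Placed {len(points)} points in the {p} x {p} grid with no three in line")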
Erdős' bound has been improved subsequently: Hall, Jackson, Sudbery, and Wild (1975) show that, when n/2 is prime, one can obtain a solution with 3(n − 2)/2 points by placing points in multiple copies of the hyperbola xy ≡ k (mod n/2), where k may be chosen arbitrarily as long as it is nonzero. Again, for arbitrary n one can perform this construction for a prime near n/2 to obtain a solution with (3/2 − o(1))n points.
Upper bound
At most 2n points may be placed in a grid of any size n. For, if more points are placed, then by the pigeonhole principle some three of them would all lie on the same horizontal line of the grid. For n ≤ 46, this trivial bound is known to be tight.
Conjectured bounds
Although exactly 2n points can be placed on small grids, Guy and Kelly (1968) conjectured that for large grids, there is a significantly smaller upper bound on the number of points that can be placed. More precisely, they conjectured that the number of points that can be placed is at most a sublinear amount larger than cn, with c = (2π²/3)^(1/3) ≈ 1.874.
After an error in the heuristic reasoning leading to this conjecture was uncovered, Guy corrected the error and made the stronger conjecture that one cannot do more than sublinearly better than cn, with c = π/√3 ≈ 1.814.
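For reference, the two conjectured constants as reconstructed above can be evaluated numerically; this is only a sanity check of the arithmetic, not part of either conjecture's derivation.

```python
import math

original = (2 * math.pi ** 2 / 3) ** (1 / 3)   # original conjectured constant, about 1.874
corrected = math.pi / math.sqrt(3)             # corrected, stronger constant, about 1.814
print(round(original, 3), round(corrected, 3))
```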
Applications
Solutions to the no-three-in-line problem can be used to avoid certain kinds of degeneracies in graph drawing. The problem they apply to involves placing the vertices of a given graph at integer coordinates in the plane, and drawing the edges of the graph as straight line segments. For certain graphs, such as the utility graph, crossings between pairs of edges are unavoidable, but one should still avoid placements that cause a vertex to lie on an edge through two other vertices. When the vertices are placed with no three in line, this kind of problematic placement cannot occur, because the entire line through any two vertices, and not just the line segment, is free of other vertices. The fact that the no-three-in-line problem has a solution with linearly many points can be translated into graph drawing terms as meaning that every graph, even a complete graph, can be drawn without unwanted vertex-edge incidences using a grid whose area is quadratic in the number of vertices, and that for complete graphs no such drawing with less than quadratic area is possible. Complete graphs also require a linear number of colors in any graph coloring, but other graphs that can be colored with fewer colors can also be drawn on smaller grids: if a graph has n vertices and a graph coloring with k colors, it can be drawn in a grid with area proportional to nk. The no-three-in-line drawing of a complete graph is a special case of this result, with k = n.
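As an illustration of the drawing application, the sketch below (assumed helper names, not a published algorithm) places the vertices of a complete graph at Erdős-style no-three-in-line positions and verifies that no vertex falls in the interior of an edge between two other vertices.

```python
from itertools import combinations

def strictly_between(p, a, b):
    # True if p lies in the interior of segment ab.
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if cross != 0:
        return False
    dot = (p[0] - a[0]) * (b[0] - a[0]) + (p[1] - a[1]) * (b[1] - a[1])
    return 0 < dot < (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

n = 11  # prime, so the parabola placement below has no three points in line
vertices = [(i, (i * i) % n) for i in range(n)]
bad = any(strictly_between(p, a, b)
          for a, b in combinations(vertices, 2)
          for p in vertices if p not in (a, b))
print("some vertex lies on a non-incident edge of K_11:", bad)   # expected: False
```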
The no-three-in-line problem also has applications to another problem in discrete geometry, the Heilbronn triangle problem. In this problem, one must place n points, anywhere in a unit square, not restricted to a grid. The goal of the placement is to avoid small-area triangles, and more specifically to maximize the area of the smallest triangle formed by three of the points. For instance, a placement with three points in line would be very bad by this criterion, because these three points would form a degenerate triangle with area zero. On the other hand, if the points can be placed on a grid of cell side length 1/n within the unit square, with no three in a line, then by Pick's theorem every triangle would have area at least 1/(2n²), half of a grid square. Therefore, solving an instance of the no-three-in-line problem and then scaling down the integer grid to fit within a unit square produces solutions to the Heilbronn triangle problem where the smallest triangle has area proportional to 1/n². This application was the motivation for Paul Erdős to find his solution for the no-three-in-line problem. It remained the best area lower bound known for the Heilbronn triangle problem from 1951 until 1982, when it was improved by a logarithmic factor using a construction that was not based on the no-three-in-line problem.
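The scaling argument can be checked numerically. The sketch below uses the Erdős parabola placement (a choice made here only for illustration), shrinks it into the unit square, and compares the smallest triangle area against the Pick's-theorem bound of half a grid cell.

```python
from itertools import combinations

def area(a, b, c):
    # Triangle area via the shoelace formula.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

n = 11
grid_points = [(i, (i * i) % n) for i in range(n)]          # no three in line
unit_square = [(x / n, y / n) for x, y in grid_points]      # scale into the unit square
smallest = min(area(a, b, c) for a, b, c in combinations(unit_square, 3))
print(smallest, smallest >= 1 / (2 * n * n))                # at least half a grid cell
```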
Generalizations and variations
General-position subsets
In computational geometry, finite sets of points with no three in line are said to be in general position. In this terminology, the no-three-in-line problem seeks the largest subset of a grid that is in general position, but researchers have also considered the problem of finding the largest general-position subset of other non-grid sets of points. It is NP-hard to find this subset, for certain input sets, and hard to approximate its size to within a constant factor; this hardness of approximation result is summarized by saying that the problem is APX-hard. If the largest subset has size k, a solution with the non-constant approximation ratio O(√k) can be obtained by a greedy algorithm that simply chooses points one at a time until all remaining points lie on lines through pairs of chosen points.
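A minimal sketch of this greedy procedure, assuming the input is given simply as a list of integer points (illustrative code, not a tuned implementation):

```python
from itertools import combinations

def collinear(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) == (c[0] - a[0]) * (b[1] - a[1])

def greedy_general_position(points):
    # Take points one at a time, skipping any point that would create a collinear
    # triple with two already-chosen points; every skipped point therefore lies on
    # a line through a pair of chosen points, so the result cannot be extended.
    chosen = []
    for p in points:
        if all(not collinear(p, a, b) for a, b in combinations(chosen, 2)):
            chosen.append(p)
    return chosen

grid = [(x, y) for x in range(6) for y in range(6)]
subset = greedy_general_position(grid)
print(len(subset), subset)
```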
One can get a finer-grained understanding of the running time of algorithms for finding the exact optimal solution by using parameterized complexity, in which algorithms are analyzed not only in terms of the input size, but in terms of other parameters of the input. In this case, for inputs whose largest general-position subset has size k, it can be found in an amount of time that is an exponential function of k multiplied by a polynomial in the input size, with the exponent of the polynomial not depending on k. Problems with this kind of time bound are called fixed-parameter tractable.
For sets of n points having at most O(√n) points per line, there exist general-position subsets of size nearly proportional to √n. The example of the grid shows that this bound cannot be significantly improved. The proof of existence of these large general-position subsets can be converted into a polynomial-time algorithm for finding a general-position subset of size matching the existence bound, using an algorithmic technique known as entropy compression.
Greedy placement
Repeating an earlier suggestion, Martin Gardner asked for the smallest subset of an n × n grid that cannot be extended: it has no three points in a line, but every proper superset has three in a line. Equivalently, this is the smallest set that could be produced by a greedy algorithm that tries to solve the no-three-in-line problem by placing points one at a time until it gets stuck. If only axis-parallel and diagonal lines are considered, then every such set has at least a linear number of points. However, less is known about the version of the problem where all lines are considered: every greedy placement includes at least Ω(n^(2/3)) points before getting stuck, but nothing better than the trivial upper bound is known.
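One way to explore this question empirically is to run the greedy placement in many random orders and record the smallest "stuck" set it produces; this only gives an upper-bound estimate for a particular n, not a proof of anything. A hedged sketch:

```python
import random
from itertools import combinations

def collinear(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) == (c[0] - a[0]) * (b[1] - a[1])

def random_greedy(n, rng):
    # Add grid points in a random order, skipping any that would make three in a
    # line; the final set is "stuck": no further grid point can be added.
    grid = [(x, y) for x in range(n) for y in range(n)]
    rng.shuffle(grid)
    chosen = []
    for p in grid:
        if all(not collinear(p, a, b) for a, b in combinations(chosen, 2)):
            chosen.append(p)
    return chosen

rng = random.Random(0)
sizes = [len(random_greedy(10, rng)) for _ in range(200)]
print("smallest stuck set found:", min(sizes), "largest:", max(sizes))
```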
Higher dimensions
Non-collinear sets of points in the three-dimensional n × n × n grid were considered by Pór and Wood. They proved that the maximum number of points in the grid with no three points collinear is Θ(n²).
Similarly to Erdős's 2D construction, this can be accomplished by using the points (x, y, (x² + y²) mod p), where p is a prime congruent to 3 mod 4.
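Assuming the modular form given above, the following sketch generates the p² points for a small prime p ≡ 3 (mod 4) and brute-force checks that no three are collinear in three dimensions; the variable and function names are illustrative.

```python
from itertools import combinations, product

def collinear3d(a, b, c):
    # Collinear iff the cross product of (b - a) and (c - a) is the zero vector.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1] == 0 and
            u[2] * v[0] - u[0] * v[2] == 0 and
            u[0] * v[1] - u[1] * v[0] == 0)

p = 7   # prime congruent to 3 mod 4; gives p*p points in the p x p x p grid
pts = [(x, y, (x * x + y * y) % p) for x, y in product(range(p), repeat=2)]
ok = not any(collinear3d(a, b, c) for a, b, c in combinations(pts, 3))
print(len(pts), "points, no three collinear:", ok)
```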
Just as the original no-three-in-line problem can be used for two-dimensional graph drawing, one can use this three-dimensional solution to draw graphs in the three-dimensional grid. Here the non-collinearity condition means that a vertex should not lie on a non-adjacent edge, but it is normal to work with the stronger requirement that no two edges cross.
In much higher dimensions, sets of grid points with no three in line, obtained by choosing points near a hypersphere, have been used for finding large Salem–Spencer sets, sets of integers with no three forming an arithmetic progression. However, it does not work well to use this same idea of choosing points near a circle in two dimensions: this method finds points forming convex polygons, which satisfy the requirement of having no three in line, but are too small. The largest convex polygons with vertices in an n × n grid have only O(n^(2/3)) vertices. The cap set problem concerns a similar problem to the no-three-in-line problem in spaces that are both high-dimensional, and based on vector spaces over finite fields rather than over the integers.
Another generalization to higher dimensions is to find as many points as possible in a three-dimensional n × n × n grid such that no four of them are in the same plane. This sequence begins 5, 8, 10, 13, 16, ... for n = 2, 3, 4, 5, 6, etc.
Torus
Another variation on the problem involves converting the grid into a discrete torus by using periodic boundary conditions in which the left side of the torus is connected to the right side, and the top side is connected to the bottom side. This has the effect, on slanted lines through the grid, of connecting them up into longer lines through more points, and therefore making it more difficult to select points with at most two from each line. These extended lines can also be interpreted as normal lines through an infinite grid in the Euclidean plane, taken modulo the dimensions of the torus. For a torus based on an m × n grid, the maximum number of points that can be chosen with no three in line is at most 2 gcd(m, n). When both dimensions are equal, and prime, it is not possible to place exactly one point in each row and column without forming a linear number of collinear triples. Higher-dimensional torus versions of the problem have also been studied.
See also
Eight queens puzzle, on placing points on a grid with no two on the same row, column, or diagonal
Notes
References
Solution, p. 222. Originally published in the London Tribune, November 7, 1906.
External links
Combinatorics
Lattice points
Conjectures
Mathematical problems | No-three-in-line problem | [
"Mathematics"
] | 2,430 | [
"Unsolved problems in mathematics",
"Discrete mathematics",
"Lattice points",
"Combinatorics",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
2,252,727 | https://en.wikipedia.org/wiki/Solar%20Designer | Alexander Peslyak () (born 1977), better known as Solar Designer, is a security specialist from Russia. He is best known for his publications on exploitation techniques, including the return-to-libc attack and the first generic heap-based buffer overflow exploitation technique, as well as computer security protection techniques such as privilege separation for daemon processes.
Peslyak is the author of the widely used password cracking tool John the Ripper. His code has also been used in various third-party operating systems, such as OpenBSD and Debian.
Work
Peslyak has been the founder and leader of the Openwall Project since 1999. He is the founder of Openwall, Inc. and has been the CTO since 2003. He served as an advisory board member at the Open Source Computer Emergency Response Team (oCERT) from 2008 until oCERT's conclusion in August 2017. He also co-founded oss-security.
He has spoken at many international conferences, including FOSDEM and CanSecWest. He wrote the foreword to Michał Zalewski's 2005 book Silence on the Wire.
Peslyak received the 2009 "Lifetime Achievement Award" at the annual Pwnie Awards during the Black Hat Security Conference. In 2015, Qualys acknowledged his help with the disclosure of a buffer overflow in the GNU C Library's gethostbyname function (CVE-2015-0235, known as GHOST).
See also
Security-focused operating system
References
External links
Openwall Project home page
Solar Designer's pseudo homepage
http://phrack.org/issues/69/2.html#article
1977 births
Living people
Hackers | Solar Designer | [
"Technology"
] | 334 | [
"Lists of people in STEM fields",
"Hackers"
] |