id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
567,364 | https://en.wikipedia.org/wiki/Lucilio%20Vanini | Lucilio Vanini (1585 – 9 February 1619), who, in his works, styled himself Giulio Cesare Vanini, was an Italian philosopher, physician and free-thinker, who was one of the first significant representatives of intellectual libertinism. He was among the first modern thinkers who viewed the universe as an entity governed by natural laws (nomological determinism). He was also an early literate proponent of biological evolution, maintaining that humans and other apes have common ancestors. He was executed in Toulouse.
Vanini was born at Taurisano near Lecce, and studied philosophy and theology at Naples. Afterwards, he applied himself to the physical studies, chiefly medicine and astronomy, which had come into vogue with the Renaissance. Like Giordano Bruno, he attacked scholasticism.
From Naples he went to Padua, where he came under the influence of the Alexandrist Pietro Pomponazzi, whom he styled his divine master. Subsequently, he led a roving life in France, Switzerland and the Low Countries, supporting himself by giving lessons and disseminating radical ideas. He was obliged to flee to England in 1612 but was imprisoned in London for 49 days.
Returning to Italy, he made an attempt to teach in Genoa but was driven again to France, where he tried to clear himself of suspicion by publishing a book against atheism: Amphitheatrum Aeternae Providentiae Divino-Magicum (1615). Though the definitions of God are somewhat pantheistic, the book served its immediate purpose. Although Vanini did not expound his true views in his first book, he did in his second: De Admirandis Naturae Reginae Deaeque Mortalium Arcanis (Paris, 1616). This was originally certified by two doctors of the Sorbonne, but was later re-examined and condemned.
Vanini then left Paris, where he had been staying as chaplain to the Marechal de Bassompierre, and began to teach in Toulouse. In November 1618, he was arrested and, after a prolonged trial, was condemned to have his tongue cut out, to be strangled at the stake and to have his body burned to ashes. The sentence was carried out by the local authorities on 9 February 1619.
Life
Early life (1585–1612)
Lucilio Vanini was born in 1585 in Taurisano, Terra d'Otranto, Italy. His father was Giovan Battista Vanini, a businessman from Tresana in Tuscany, while his mother was the daughter of a man named Lopez de Noguera, a customs contractor of the Spanish royal family's lands in Bari, Terra d'Otranto, Capitanata, and Basilicata. A document dated August 1612, discovered in the Vatican Secret Archives, describes Vanini as of Apulia, which is consistent with the native land he mentions in his own works.
The government census of the population of the hamlet of Taurisano, in 1596, includes the names of Giovan Battista Vanini, his lawful son Alexander, born in 1582, and his natural son Giovan Francesco, while there is no mention of Vanini's wife or of another lawful son called Lucilio (or Giulio Cesare). In 1603 Giovan Battista Vanini is reported for the last time in Taurisano.
Lucilio Vanini entered the University of Naples in 1599. In 1603 he entered the Carmelite order, taking the name of Fra Gabriele. He earned a doctorate in canon and civil law from the University of Naples on 6 June 1606.
Afterwards, he remained in the Naples area for two years, apparently living as a friar, or alternatively he returned to Lecce and studied the new Renaissance sciences, chiefly medicine and astronomy. By now, he had assimilated much knowledge and "speaks very good Latin and with great ease, is tall and a bit thin, has brown hair, an aquiline nose, lively eyes and a pleasant and ingenious physiognomy".
In (probably) 1606, Vanini's father died in Naples. Vanini, now come of age, was recognised by a court in the capital as heir of Giovan Battista and guardian of his brother Alexander. With a series of deeds and powers of attorney drawn up in Naples, Vanini began to settle the financial consequences of his father's death: selling a house he owned in Ugento, a few miles from his home town; in 1607 mandating a maternal uncle to fulfil assignments of the same type; and in 1608 instructing his friend Scarciglia to recover a sum and sell some goods remaining in Taurisano and held in custody by the two brothers.
In 1608, Vanini moved to Padua, a town under the rule of Venice, to study theology at that university (although there is no record of him subsequently obtaining a degree). While there he came into contact with the group led by Paolo Sarpi that, with the support of the English embassy in Venice, fueled anti-papal polemics. In 1611 he participated in the Lenten sermons, attracting the suspicions of the religious authorities. During that period, the controversy over the 1606 interdict placed on the Republic of Venice by Pope Paul V was still raging, and Vanini showed himself unambiguously in favour of the Republic. Consequently, the Prior General of his order, Enrico Silvio, commanded him to return to Naples, where he would have been disciplined, probably severely, but instead Vanini sought refuge with the English ambassador to Venice in 1612.
In England (1612–1614)
Vanini then fled to England, along with his Genoese companion Bonaventure Genocchi. They passed through Bologna, Milan, the Swiss canton of Graubünden, and descended via the Rhine, through Germany and the Netherlands, to the North Sea coast and the English Channel, finally reaching London and the Lambeth residence of the Archbishop of Canterbury. Here the two remained for nearly two years, hiding their true identities from their English hosts. In July 1612, they both renounced their Catholic faith and embraced Anglicanism.
By 1613, however, Vanini was having doubts, so he appealed to the Pope to be allowed back into the Catholic fold, but as a secular priest rather than as a friar; the request was granted by the Pope himself. Around the start of 1614, Vanini visited the Universities of Cambridge and Oxford and confided to some acquaintances his imminent flight from England, so in January, he and Genocchi were arrested on the orders of the Archbishop of Canterbury, George Abbot. They managed to escape, however: Genocchi in February 1614 and Vanini in March. The Spanish ambassador in London and the chaplain of the embassy of the Venetian Republic were thought to have engineered their escapes. The two passed through the hands of the papal nuncio in Flanders, Guido Bentivoglio, to the papal nuncio in Paris, Roberto Ubaldini.
In France (1614–1618)
In Paris, in the summer of 1614, Vanini subscribed to the principles of the Council of Trent, to prove the sincerity of his return to the Catholic faith. He then journeyed to Italy, going first to Rome, where he had to face the difficult final stages of the process in the court of the Inquisition, then to Genoa for a few months, where he found his friend Genocchi and taught philosophy to children of Scipio Doria for a time.
Despite assurances, the return of Vanini and Genocchi was not entirely peaceful; in January 1615 Genocchi was arrested by the Inquisitor of Genoa. Vanini therefore, fearing the same fate, ran away again to France and headed to Lyon. There, in June 1615, he published Amphitheatrum, a book against atheism, which he hoped would clear his name with the Roman authorities.
A short time later Vanini returned to Paris, where he asked Nuncio Ubaldini to intervene on his behalf with the authorities in Rome. Insufficiently assured, Vanini decided not to return to Italy, and instead cultivated connections with prestigious elements of the French nobility.
In 1616, Vanini completed the second of his two works, De Admirandis, and got it approved by two theologians at the Sorbonne. The work was published in September in Paris. It was dedicated to François de Bassompierre, a powerful man at the court of Marie de' Medici, and was printed by Adrien Périer, a Protestant. The work was immediately successful among those aristocratic circles populated by young spirits who looked with interest to the cultural and scientific innovations that came from Italy. The De Admirandis was a summa, lively and brilliant, of the new knowledge, and became a kind of "manifesto" for these cultural free spirits, giving Vanini a chance to stay safe in circles close to the French court. However, a few days after the publication of the work, the two theologians at the Sorbonne who had expressed their approval were presented to the Faculty of Theology in formal session and the outcome was a de facto ban on the circulation of the text.
Now, unwelcome in England, unable to return to Italy and threatened by some circles of French Catholics, Vanini saw his room for manoeuvre shrinking and his chances of finding a stable place in French society failing. Fearing that a court case would be started against him in Paris, he fled and went into hiding at Redon Abbey in Brittany, where Abbot Arthur d'Épinay de Saint-Luc acted as his protector. But other factors gave cause for concern: in April 1617 Concino Concini, favorite of Marie de' Medici, was killed in Paris, giving rise to a wave of hostility to Italian residents at court.
Final year (1618–1619)
In the following months, a mysterious Italian, with a strange name (Pompeo Uciglio) and in possession of great knowledge but an uncertain past, appeared in some cities of Guyenne, then the Languedoc and finally Toulouse. Duke Henri II de Montmorency, protector of esprits forts of the time, was the governor of this region and seemed to grant protection to the fugitive, who continued to keep himself carefully hidden.
The presence of this mysterious character in Toulouse did not however pass unnoticed and attracted the suspicions of the authorities. In August 1618 he was apprehended and interrogated. In February 1619, the Parlement of Toulouse found him guilty of atheism and blasphemy and, in accordance with the regulations of the time, his tongue was cut out, he was strangled and his body was burned. After the execution it emerged that the stranger was in fact Vanini.
Works
Amphitheatrum
Amphitheatrum Aeternae Providentiae divino-magicum, christiano-physicum, necnon astrologo-catholicum adversus veteres philosophos, atheos, epicureos, peripateticos et stoicos (possible translation: "Amphitheatre of Eternal Providence – Religio-magical, Christian-physical and Astrologico-Catholic – against the Ancient Philosophers, Atheists, Epicureans, Peripatetics and Stoics"), published in Lyon in 1615, consists of 50 exercises, which aim to demonstrate the existence of God, to define His essence, to describe His providence and to examine or refute the opinions of Pythagoras, Protagoras, Cicero, Boethius, Thomas Aquinas, the Epicureans, Aristotle, Averroes, Gerolamo Cardano, the Peripatetics, the Stoics, etc. on this subject.
De Admirandis
De Admirandis Naturae Reginae Deaeque Mortalium Arcanis (possible translation: "On the Marvelous Secrets of Nature, the Queen and Goddess of Mortals"), printed in Paris in 1616 by publisher Adrien Périer, is divided into four books:
On Sky and Air
On Water and Land
On Animals and Passions
On Non-Christian Religions
These contain a total of 60 dialogues (but really only 59, as dialogue XXXV is absent), which take place between the author, in the role of disseminator of knowledge, and an imaginary Alessandro, who urges his interlocutor to list and explain the mysteries of nature found around and within man.
In a mixture of reinterpretation of ancient knowledge and the dissemination of new scientific and religious theories, the protagonist discusses: the material, figure, colour, form, energy and eternity of heaven; the motion and the central pole of the heavens; the sun, the moon, the stars; fire; comets and rainbows; lightning, snow and rain; the motion and rest of projectiles in the air; the impulsion of mortars and crossbows; winds and breezes; corrupt airs; the element of water; the birth of the rivers; the rising of the Nile; the extent and saltiness of the sea; the roar and the motion of the water; the motion of projectiles; the creation of islands and mountains, as well as the cause of earthquakes; the genesis, root and colour of the gems, as well as spots of stones; life, food, and the death of the stones; the strength of the magnet to attract iron and its direction toward the Earth's poles; plants; the explanation to be given to certain phenomena of everyday life; semen; the reproduction, nature, respiration and nutrition of fish; the reproduction of birds; the reproduction of bees; the first generation of man; stains contracted by children in the womb; the generation of male and female; parts of monsters; the faces of children covered by larvae; the growth of man; the length of human life; sight; hearing; smell; taste; touch and tickle; the affections of man; God; appearances in the air; oracles; the Sibyls; the possessed; sacred images of the pagans; augurs; the miraculous healing of diseases reported in pagan times; the resurrection of the dead; witchcraft; dreams.
Thought
The naturalistic interpretation of supernatural phenomena that Pietro Pomponazzi – called by Vanini magister meus, divinus praeceptor meus, nostri seculi Philosophorum princeps – had given in the early 16th century in his treatise De Incantationibus was summarised in De Admirandis Naturae, where, in simple and elegant prose, Vanini also referred to Gerolamo Cardano, Julius Caesar Scaliger and other 16th century thinkers.
"God acts on sublunary beings [humans] using the sky as a tool": hence the natural and rational explanation of the allegedly supernatural phenomena, since even astrology was considered a science. God may use such phenomena to warn the people, and especially rulers, of danger. But the real origin of supernatural phenomena is, for Vanini, the human imagination, which can sometimes change the appearance of external reality. For the ecclesiastical "impostors" that promulgate false beliefs to gain wealth and power, and rulers interested in dominating the people, according to Vanini, "all religious things are false and fake principles to teach the naive populace that, when reason cannot be reached, at least practice religion".
Following Pietro Pomponazzi and Simone Porzio in their interpretation of the Aristotelian texts and the commentary thereon by Alexander of Aphrodisias, Vanini denied the immortality of the soul and attacked the Aristotelian cosmos-view. Like Bruno, he denied the difference between the everyday world and the celestial world, saying that both are composed of the same corruptible material. He disputed, in the physical and biological world, finality and the hylomorphic Aristotelian doctrine, and, reconnecting Epicureanism with Lucretius, prepared a new mechanistic-materialistic description of the universe where bodies are likened to a watch, and conceived a first form of universal transformation of living species. He agreed with the Aristotelian eternity of the world, especially considering the temporal aspect, but affirmed the rotation of the earth and appeared to reject the Ptolemaic system in favour of the heliocentric/Copernican system.
While the first editor of his works, Luigi Corvaglia, and the historian Guido De Ruggiero unjustly dismissed his writings as simply "a centone devoid of originality and scientific seriousness", the Jesuit priest François Garasse, far more worried about the consequences of their spread, judged them "a work of such most pernicious atheism as was never released in the last hundred years". The works of Vanini have been extensively reviewed and revalued by contemporary critics, revealing originality and insights (metaphysical, physical, biological) sometimes well ahead of their time.
Since Vanini in his works obscured his ideas, a typical ploy at the time to avoid serious conflicts with the religious and political authorities, the interpretation of his thought is difficult. However, in the history of philosophy, he has the image of an unbeliever or even an atheist. Considered as one of the fathers of libertinism, he was regarded as a lost soul by conventional Christians, despite having written a defense of the Council of Trent.
To understand the origins of Vanini's thought one has to look to his cultural background, which was fairly typical of the Renaissance, with a prevalence of elements of Averroistic Aristotelianism but with strong elements of mysticism and Neo-Platonism. On the other hand, he drew from Nicholas of Cusa typical pantheistic elements, similar to those which are also found in Giordano Bruno, but more materialistic. His world view was based on the eternity of matter, and of a God in nature as a "force" that shapes, orders and directs. All forms of life, he thought, had originated spontaneously from the earth itself as their creator.
Vanini was considered an atheist, but his first work, published in Lyon in 1615, Amphitheatrum, indicates otherwise. As a precursor of libertinism there are many elements that make his teaching close to the thought of the unknown author of the Treatise of the Three Impostors, also a pantheist. Vanini thought in fact that the creators of the three monotheistic religions, Moses, Jesus and Muhammad, were nothing but impostors.
In De Admirandis are found themes from Amphitheatrum, with refinements and developments that make it his masterpiece and the summary of his philosophy. Denying creation from nothing and the immortality of the soul, he saw God in Nature as its driving force and vital force, both eternal. The stars of heaven he considered a kind of intermediary between God and Nature. The true religion is therefore a "religion of Nature" that does not deny God but considers Him a spirit-force.
The thought of Vanini is quite fragmented and also reflects the complexity of its origins, as he was a religious figure, a naturalist, but also a doctor and in part a magician. What characterizes the prose is the vehemently anti-clerical sentiment. Among the original aspects of his thinking there is a kind of anticipation of Darwinism, because, after a first half in which he argues that the animal species arise by spontaneous generation from the earth, in the second part he seems convinced that they can be transformed into each other and that man comes from "animals related to man, such as the Barbary apes, the monkeys and apes in general".
Reputation
In 1623 two works appeared that started the myth of Vanini the atheist: La doctrine curieuse des beaux esprits de ce temps ... of Jesuit François Garasse, and Quaestiones celeberrimae in Genesim cum accurata explicatione ..., of Father Marin Mersenne. The two works, however, instead of silencing the philosopher's voice, amplified it in an environment that was clearly ready to receive, discuss and recognise the validity of his claims.
In that same year the name of Vanini was again brought to the attention of French culture during the sensational trial of the poet Théophile de Viau, whose outlook had striking similarities with Vaninian thought.
In 1624, the monk Marin Mersenne returned to attacking the philosophy of Vanini, analyzing some statements in chapter X of his L'Impiétè des Déistes, Athées et Libertins de ce temps, combatuë, et renversee de point en point par raisons tirées de la Philosophie, et de la Theologie, in which the theologian expresses his judgment of the works of Girolamo Cardano and Giordano Bruno.
Even Leibniz, another opponent of libertinism, was strongly opposed to Vanini, considering him evil, a fool and a charlatan.
English intellectuals showed interest in the ideas of Vanini, and it was especially with the work of Charles Blount that Vanini's ideas entered English culture, becoming a cornerstone of libertinism and deism in seventeenth century England.
An unpublished manuscript in the municipal library of Avignon preserves Observations sur Lucilio Vanini written by Joseph Louis Dominique de Cambis, Marquis de Velleron, but provides only uncertain information on the philosopher, largely rectified by recent studies. In the same period a manuscript copy of the Amphitheatrum was made or commissioned by Joseph Uriot, which later came to the library of the Duke of Württemberg; it is currently in the Württembergische Landesbibliothek in Stuttgart. Another manuscript copy of the same work is in the Staats- und Universitätsbibliothek in Hamburg, reflecting the continued interest in the thought of Vanini in German culture.
Pierre Bayle, in his Various Thoughts on the Occasion of a Comet, cited Vanini as an example of a learned atheist, alongside the ancient figures of Diagoras, Theodorus, and Euhemerus.
In 1730, a biography of Vanini was published in London under the title The life of Lucilio (alias Julius Caesar) Vanini, burnt for atheism at Toulouse. With an abstract of his writings. The work debates Vanini's ideas, recognising much merit in them.
Arthur Schopenhauer's 1839 essay 'On the Freedom of the Will' includes Vanini among his account of predecessors who also came to the same conclusion as that of his essay, which Schopenhauer expressed as follows: "Everything that happens, from the greatest to the smallest, happens necessarily."
Notes
References
La Vie et l'Œuvre de J. C. Vanini, Prince des Libertins, mort à Toulouse sur le bûcher en 1619, Émile Namer, 1980.
Further reading
(2011) Eight Philosophical Dialogues of Giulio Cesare Vanini (translated), The Philosophical Forum, 42: 370–418. doi:10.1111/j.1467-9191.2011.00397.x
Francesco De Paola, Vanini e il primo '600 anglo-veneto, Cutrofiano, Lecce (1980).
Francesco De Paola, Giulio Cesare Vanini da Taurisano filosofo Europeo, Schena Editore, Fasano, Brindisi (1998).
Giovanni Papuli, Studi Vaniniani, Galatina, Congedo (2006).
Giovanni Papuli, Francesco Paolo Raimondi (ed.), Giulio Cesare Vanini - Opere, Galatina, Congedo (1990).
Francesco Paolo Raimondi, Giulio Cesare Vanini nell'Europa del Seicento, Roma-Pisa, Istituti Editoriali e Poligrafici Internazionali, Roma (2005).
C. Teofilato, Giulio Cesare Vanini, in The Connecticut Magazine, articles in English and Italian, New Britain, Connecticut, May 1923, p. 13 (I, 7).
1585 births
1619 deaths
People from the Province of Lecce
16th-century Italian philosophers
16th-century Italian male writers
17th-century Italian philosophers
Italian atheists
Atheist philosophers
People executed for heresy
Italian people executed abroad
People executed by strangulation
17th-century executions by France
Executed philosophers
Pantheists
Proto-evolutionary biologists
Persecution of atheists
17th-century atheists | Lucilio Vanini | [
"Biology"
] | 5,072 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
567,391 | https://en.wikipedia.org/wiki/Trachtenberg%20system | The Trachtenberg system is a system of rapid mental calculation. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. It was developed by the Russian engineer Jakow Trachtenberg in order to keep his mind occupied while he was held in a Nazi concentration camp.
The rest of this article presents some methods devised by Trachtenberg. Some of the algorithms Trachtenberg developed are ones for general multiplication, division and addition. Also, the Trachtenberg system includes some specialised methods for multiplying small numbers between 5 and 13.
The section on addition demonstrates an effective method of checking calculations that can also be applied to multiplication.
General multiplication
The method for general multiplication is a method to achieve multiplications with low space complexity, i.e. as few temporary results as possible to be kept in memory. This is achieved by noting that the final digit is completely determined by multiplying the last digits of the two factors. This is held as a temporary result. To find the next-to-last digit, we need everything that influences this digit: the temporary result, the last digit of the first factor times the next-to-last digit of the second, and the next-to-last digit of the first factor times the last digit of the second. This calculation is performed, and we have a temporary result that is correct in the final two digits.
In general, for each position n in the final result, we sum a_i × b_(n−i) for all i, where a_i and b_j denote the digits of the two factors counted from the right (the units digits being a_0 and b_0).
People can learn this algorithm and thus multiply four-digit numbers in their head – writing down only the final result. They would write it out starting with the rightmost digit and finishing with the leftmost.
Trachtenberg defined this algorithm with a kind of pairwise multiplication where two digits are multiplied by one digit, essentially only keeping the middle digit of the result. By performing the above algorithm with this pairwise multiplication, even fewer temporary results need to be held.
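As an illustration of the bookkeeping (not of Trachtenberg's finger notation itself), the following minimal Python sketch builds the product one digit at a time using the convolution described above, keeping only the current digit sum and a carry as temporary state; the function name is ours, not from the book.

```python
def trachtenberg_multiply(a: int, b: int) -> int:
    """Multiply a and b one result digit at a time, right to left,
    keeping only the running total for one position and a carry."""
    A = [int(d) for d in str(a)][::-1]   # A[0] is the units digit of a
    B = [int(d) for d in str(b)][::-1]
    digits, carry = [], 0
    for n in range(len(A) + len(B) - 1):
        total = carry + sum(A[i] * B[n - i]
                            for i in range(len(A))
                            if 0 <= n - i < len(B))
        digits.append(total % 10)        # write one digit of the answer
        carry = total // 10              # remember only the carry
    while carry:                          # leftmost digits come from the final carry
        digits.append(carry % 10)
        carry //= 10
    return int("".join(str(d) for d in reversed(digits)))

assert trachtenberg_multiply(5132, 9461) == 5132 * 9461
```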
Worked through one digit at a time, from right to left:
To find the first (rightmost) digit of the answer, start at the rightmost digit of the multiplicand: the units digit of its product with the rightmost digit of the multiplier is the first digit of the answer; the tens digit of that product is ignored for the moment.
To find the second digit of the answer, start at the second digit of the multiplicand: add the units digit of its product with the rightmost digit of the multiplier, the tens digit of the product used for the first answer digit, and the units digit of the product of the rightmost digit of the multiplicand with the second digit of the multiplier. The units digit of this sum is the second digit of the answer, and the rest is carried to the third digit.
Each subsequent digit of the answer is found the same way, starting one digit further to the left in the multiplicand and summing the units and tens digits of all the products that now line up, plus the carry from the previous step; write the units digit of the total and carry the rest.
Continue with the same method, moving left past the prefixed zeros of the multiplicand, to obtain the remaining digits.
Trachtenberg called this the 2 Finger Method. In his diagrams of the calculation, the arrow from the units digit of the multiplier always points to the digit of the multiplicand directly above the digit of the answer you wish to find, with the other arrows each pointing one digit to the right. Each arrow head points to a UT Pair, or Product Pair: the vertical arrow points to the product from which we take the units digit, and the sloping arrow points to the product from which we take the tens digit. If an arrow points to a space with no digit, there is no calculation for that arrow. As you solve for each digit, you move each of the arrows over the multiplicand one digit to the left, until all of the arrows point to prefixed zeros.
Division in the Trachtenberg System is done much the same as in multiplication but with subtraction instead of addition. Splitting the dividend into smaller Partial Dividends, then dividing this Partial Dividend by only the left-most digit of the divisor will provide the answer one digit at a time. As you solve each digit of the answer you then subtract Product Pairs (UT pairs) and also NT pairs (Number-Tens) from the Partial Dividend to find the next Partial Dividend. The Product Pairs are found between the digits of the answer so far and the divisor. If a subtraction results in a negative number you have to back up one digit and reduce that digit of the answer by one. With enough practice this method can be done in your head.
General addition
The Trachtenberg system includes a method of adding columns of numbers and accurately checking the result without repeating the first addition. An intermediate sum, in the form of two rows of digits, is produced. The answer is obtained by taking the sum of the intermediate results with an L-shaped algorithm. As a final step, the advocated checking method both removes the risk of repeating any original errors and identifies the precise column in which an error occurs. It is based on check (or digit) sums, such as the nines-remainder method.
For the procedure to be effective, the different operations used in each stage must be kept distinct, otherwise there is a risk of interference.
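The full Trachtenberg addition produces the two-row intermediate sum described above and can locate the column containing an error; the sketch below shows only the underlying nines-remainder (digit-sum) idea, with function names of our own choosing.

```python
def nines_check(n: int) -> int:
    """Check value of n: its remainder modulo 9 (the 'nines remainder')."""
    return n % 9

def addition_checks_out(addends, claimed_total) -> bool:
    """The check values of the addends must agree (mod 9) with the
    check value of the claimed total; if not, an error was made."""
    return sum(nines_check(x) for x in addends) % 9 == nines_check(claimed_total)

assert addition_checks_out([123, 456, 789], 1368)      # 123 + 456 + 789 = 1368
assert not addition_checks_out([123, 456, 789], 1367)  # a wrong total is flagged
```

Note that, like any digit-sum check, this catches most slips but not errors that change the total by a multiple of 9.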
Other multiplication algorithms
When performing any of these multiplication algorithms the following "steps" should be applied.
The answer must be found one digit at a time starting at the least significant digit and moving left. The last calculation is on the leading zero of the multiplicand.
Each digit has a neighbor, i.e., the digit on its right. The rightmost digit's neighbor is the trailing zero.
The 'halve' operation has a particular meaning to the Trachtenberg system. It is intended to mean "half the digit, rounded down" but for speed reasons people following the Trachtenberg system are encouraged to make this halving process instantaneous. So instead of thinking "half of seven is three and a half, so three" it's suggested that one thinks "seven, three". This speeds up calculation considerably. In this same way the tables for subtracting digits from 10 or 9 are to be memorized.
And whenever the rule calls for adding half of the neighbor, always add 5 if the current digit is odd. This makes up for dropping 0.5 in the next digit's calculation.
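These conventions (the prefixed zero, the neighbor, instantaneous halving, and the extra 5 for odd digits) can be summarised in a few illustrative Python helpers; the names are ours, not Trachtenberg's.

```python
def halve(digit):
    """'Half' in the Trachtenberg sense: half the digit, rounded down."""
    return digit // 2

def with_leading_zero(n):
    """Digits of n with the prefixed zero; the 'neighbor' of a digit is
    simply the next entry to its right in this list (or a trailing 0)."""
    return [0] + [int(c) for c in str(n)]

def odd_bonus(digit):
    """The extra 5 added whenever a rule takes half of the neighbor
    and the current digit is odd."""
    return 5 if digit % 2 else 0

assert halve(7) == 3 and with_leading_zero(42) == [0, 4, 2] and odd_bonus(7) == 5
```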
Numbers and digits (base 10)
Digits and numbers are two different notions. The number T consists of n digits cn ... c1.
Multiplying by 2
Proof
Rule:
Multiply each digit by 2 (with carrying).
Example: 8624 × 2
Working from left to right:
8+8=16,
6+6=12 (carry the 1),
2+2=4
4+4=8;
8624 × 2 = 17248
Example: 76892 × 2
Working from left to right:
7+7=14
6+6=12
8+8=16
9+9=18
2+2=4;
76892 × 2 =153784
Multiplying by 3
Proof
Rule:
Subtract the rightmost digit from 10.
Subtract the remaining digits from 9.
Double the result.
Add half of the neighbor to the right, plus 5 if the digit is odd.
For the leading zero, subtract 2 from half of the neighbor.
Example: 492 × 3 = 1476
Working from right to left:
(10 − 2) × 2 + Half of 0 (0) = 16. Write 6, carry 1.
(9 − 9) × 2 + Half of 2 (1) + 5 (since 9 is odd) + 1 (carried) = 7. Write 7.
(9 − 4) × 2 + Half of 9 (4) = 14. Write 4, carry 1.
Half of 4 (2) − 2 + 1 (carried) = 1. Write 1.
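A minimal Python sketch of this rule, including the special treatment of the rightmost digit and the leading zero, checked against the 492 × 3 example above; the function name and structure are ours, not Trachtenberg's.

```python
def multiply_by_3(n: int) -> int:
    d = [0] + [int(c) for c in str(n)]          # prefixed zero
    out, carry = [], 0
    last = len(d) - 1
    for i in range(last, 0, -1):                # real digits, right to left
        neighbor = d[i + 1] if i + 1 <= last else 0
        base = (10 - d[i]) if i == last else (9 - d[i])
        v = 2 * base + neighbor // 2 + (5 if d[i] % 2 else 0) + carry
        out.append(v % 10)
        carry = v // 10
    out.append(d[1] // 2 - 2 + carry)           # prefixed zero: half of neighbor minus 2
    return int("".join(map(str, reversed(out))))

assert multiply_by_3(492) == 1476
```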
Multiplying by 4
Proof
Rule:
Subtract the right-most digit from 10.
Subtract the remaining digits from 9.
Add half of the neighbor, plus 5 if the digit is odd.
For the leading 0, subtract 1 from half of the neighbor.
Example: 346 × 4 = 1384
Working from right to left:
(10 − 6) + Half of 0 (0) = 4. Write 4.
(9 − 4) + Half of 6 (3) = 8. Write 8.
(9 − 3) + Half of 4 (2) + 5 (since 3 is odd) = 13. Write 3, carry 1.
Half of 3 (1) − 1 + 1 (carried) = 1. Write 1.
Multiplying by 5
Proof
Rule:
Take half of the neighbor, then, if the current digit is odd, add 5.
Example: 42×5=210
Half of 2's neighbor, the trailing zero, is 0.
Half of 4's neighbor is 1.
Half of the leading zero's neighbor is 2.
43×5 = 215
Half of 3's neighbor is 0, plus 5 because 3 is odd, is 5.
Half of 4's neighbor is 1.
Half of the leading zero's neighbor is 2.
93×5=465
Half of 3's neighbor is 0, plus 5 because 3 is odd, is 5.
Half of 9's neighbor is 1, plus 5 because 9 is odd, is 6.
Half of the leading zero's neighbor is 4.
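Because half of a neighbor is at most 4, and the bonus for an odd digit is 5, no position ever exceeds 9, so the rule for 5 needs no carries at all. A minimal Python sketch (with an illustrative function name of our own) is:

```python
def multiply_by_5(n: int) -> int:
    """Each answer digit is half of the neighbor (the digit to the right),
    plus 5 if the current digit is odd; no carries are ever needed."""
    d = [0] + [int(c) for c in str(n)]                 # prefixed zero
    out = []
    for i in range(len(d) - 1, -1, -1):
        neighbor = d[i + 1] if i + 1 < len(d) else 0   # trailing zero for the last digit
        out.append(neighbor // 2 + (5 if d[i] % 2 else 0))
    return int("".join(map(str, reversed(out))))

assert multiply_by_5(42) == 210 and multiply_by_5(93) == 465
```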
Multiplying by 6
Proof
Rule:
Add half of the neighbor to each digit. If the current digit is odd, add 5.
Example: 357 × 6 = 2142
Working right to left:
7 has no neighbor, add 5 (since 7 is odd) = 12. Write 2, carry the 1.
5 + half of 7 (3) + 5 (since the starting digit 5 is odd) + 1 (carried) = 14. Write 4, carry the 1.
3 + half of 5 (2) + 5 (since 3 is odd) + 1 (carried) = 11. Write 1, carry 1.
0 + half of 3 (1) + 1 (carried) = 2. Write 2.
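A minimal Python sketch of this rule, checked against the 357 × 6 example above; the naming is ours.

```python
def multiply_by_6(n: int) -> int:
    """Each digit gets half of its neighbor added to it, plus 5 if the
    digit itself is odd; carries propagate leftward as usual."""
    d = [0] + [int(c) for c in str(n)]
    out, carry = [], 0
    for i in range(len(d) - 1, -1, -1):
        neighbor = d[i + 1] if i + 1 < len(d) else 0
        v = d[i] + neighbor // 2 + (5 if d[i] % 2 else 0) + carry
        out.append(v % 10)
        carry = v // 10
    return int("".join(map(str, reversed(out))))

assert multiply_by_6(357) == 2142
```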
Multiplying by 7
Proof
Rule:
Double each digit.
Add half of its neighbor to the right (dropping decimals, if any). The neighbor of the units position is 0.
If the base digit is even, add 0; otherwise add 5.
Add in any carryover from the previous step.
Example: 693 × 7 = 4,851
Working from right to left:
(3×2) + 0 + 5 + 0 = 11 = carryover 1, result 1.
(9×2) + 1 + 5 + 1 = 25 = carryover 2, result 5.
(6×2) + 4 + 0 + 2 = 18 = carryover 1, result 8.
(0×2) + 3 + 0 + 1 = 4 = result 4.
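A minimal Python sketch of the doubling rule, checked against the 693 × 7 example above; the helper name is ours.

```python
def multiply_by_7(n: int) -> int:
    """Double each digit, add half of its neighbor, add 5 if the digit
    is odd, and propagate carries; the prefixed zero takes the remainder."""
    d = [0] + [int(c) for c in str(n)]
    out, carry = [], 0
    for i in range(len(d) - 1, -1, -1):
        neighbor = d[i + 1] if i + 1 < len(d) else 0
        v = 2 * d[i] + neighbor // 2 + (5 if d[i] % 2 else 0) + carry
        out.append(v % 10)
        carry = v // 10
    return int("".join(map(str, reversed(out))))

assert multiply_by_7(693) == 4851
```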
Multiplying by 8
Proof
Rule:
Subtract right-most digit from 10.
Subtract the remaining digits from 9.
Double the result.
Add the neighbor.
For the leading zero, subtract 2 from the neighbor.
Example: 456 × 8 = 3648
Working from right to left:
(10 − 6) × 2 + 0 = 8. Write 8.
(9 − 5) × 2 + 6 = 14, Write 4, carry 1.
(9 − 4) × 2 + 5 + 1 (carried) = 16. Write 6, carry 1.
4 − 2 + 1 (carried) = 3. Write 3.
Multiplying by 9
Proof
Rule:
Subtract the right-most digit from 10.
Subtract the remaining digits from 9.
Add the neighbor to the sum
For the leading zero, subtract 1 from the neighbor.
For rules 9, 8, 4, and 3 only the first digit is subtracted from 10. After that each digit is subtracted from nine instead.
Example: 2,130 × 9 = 19,170
Working from right to left:
(10 − 0) + 0 = 10. Write 0, carry 1.
(9 − 3) + 0 + 1 (carried) = 7. Write 7.
(9 − 1) + 3 = 11. Write 1, carry 1.
(9 − 2) + 1 + 1 (carried) = 9. Write 9.
2 − 1 = 1. Write 1.
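A minimal Python sketch of the rule, checked against the 2,130 × 9 example above; the function name and structure are ours, not Trachtenberg's.

```python
def multiply_by_9(n: int) -> int:
    """Subtract the rightmost digit from 10, each other digit from 9,
    add the neighbor, and finish with (neighbor - 1) on the prefixed zero."""
    d = [0] + [int(c) for c in str(n)]
    out, carry = [], 0
    last = len(d) - 1
    for i in range(last, 0, -1):
        neighbor = d[i + 1] if i + 1 <= last else 0
        base = (10 - d[i]) if i == last else (9 - d[i])
        v = base + neighbor + carry
        out.append(v % 10)
        carry = v // 10
    out.append(d[1] - 1 + carry)                 # leading position
    return int("".join(map(str, reversed(out))))

assert multiply_by_9(2130) == 19170
```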
Multiplying by 10
Add 0 (zero) as the rightmost digit.
Proof
Multiplying by 11
Proof
Rule:
Add the digit to its neighbor. (By "neighbor" we mean the digit on the right.)
Example: 3425 × 11 = 37675
(0 + 3) (3 + 4) (4 + 2) (2 + 5) (5 + 0)
3 7 6 7 5
To illustrate: since 11 = 10 + 1, multiplying by 11 amounts to adding the number to ten times itself. Thus, 3425 × 11 = 34250 + 3425 = 37675, which is exactly what the digit-plus-neighbor rule computes column by column.
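The same digit-plus-neighbor bookkeeping can be written as a minimal Python sketch; the function name is ours, and the prefixed zero plays the same role as above.

```python
def multiply_by_11(n: int) -> int:
    """Add each digit to its neighbor (the digit on its right),
    working right to left and carrying as needed."""
    d = [0] + [int(c) for c in str(n)]           # prefixed zero absorbs most final carries
    out, carry = [], 0
    for i in range(len(d) - 1, -1, -1):
        neighbor = d[i + 1] if i + 1 < len(d) else 0
        v = d[i] + neighbor + carry
        out.append(v % 10)
        carry = v // 10
    while carry:                                  # e.g. 99 × 11 needs one more digit
        out.append(carry % 10)
        carry //= 10
    return int("".join(map(str, reversed(out))))

assert multiply_by_11(3425) == 37675
```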
Multiplying by 12
Proof
Rule: to multiply by 12, starting from the rightmost digit, double each digit and add the neighbor. (The "neighbor" is the digit on the right; a short sketch follows the example below.)
If the answer is greater than a single digit, simply carry over the extra digit (which will be a 1 or 2) to the next operation.
The remaining digit is one digit of the final result.
Example: 316 × 12 = 3792 (written with the prefixed zero, the multiplicand is 0316).
Determine neighbors in the multiplicand 0316:
digit 6 has no right neighbor
digit 1 has neighbor 6
digit 3 has neighbor 1
digit 0 (the prefixed zero) has neighbor 3
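A minimal Python sketch of this rule, applied here to the multiplicand 0316 listed above; the function name and the test are ours.

```python
def multiply_by_12(n: int) -> int:
    """Double each digit and add its neighbor, right to left, carrying
    the extra 1 or 2 into the next step; the prefixed zero ends the run."""
    d = [0] + [int(c) for c in str(n)]
    out, carry = [], 0
    for i in range(len(d) - 1, -1, -1):
        neighbor = d[i + 1] if i + 1 < len(d) else 0
        v = 2 * d[i] + neighbor + carry
        out.append(v % 10)
        carry = v // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return int("".join(map(str, reversed(out))))

assert multiply_by_12(316) == 3792
```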
Multiplying by 13
Proof
Publications
Rushan Ziatdinov, Sajid Musa. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development. European Researcher 25(7): 1105–1110, 2012.
The Trachtenberg Speed System of Basic Mathematics by Jakow Trachtenberg, A. Cutler (Translator), R. McShane (Translator), was published by Doubleday and Company, Inc. Garden City, New York in 1960.
The book contains specific algebraic explanations for each of the above operations.
Most of the information in this article is from the original book.
The algorithms/operations for multiplication, etc., can be expressed in other more compact ways that the book does not specify, despite the chapter on algebraic description.
In popular culture
The 2017 American film Gifted revolves around a child prodigy who at the age of 7 impresses her teacher by doing calculations in her head using the Trachtenberg system.
Other systems
There are many other methods of calculation in mental mathematics. The list below shows a few other methods of calculating, though they may not be entirely mental.
Bharati Krishna Tirtha's book "Vedic Mathematics"
Mental abacus – As students become used to manipulating the abacus with their fingers, they are typically asked to do calculation by visualizing abacus in their head. Almost all proficient abacus users are adept at doing arithmetic mentally.
Chisanbop
Notes
References
Further reading
Trachtenberg, J. (1960). The Trachtenberg Speed System of Basic Mathematics. Doubleday and Company, Inc., Garden City, NY, USA.
Катлер Э., Мак-Шейн Р.Система быстрого счёта по Трахтенбергу, 1967 .
Rushan Ziatdinov, Sajid Musa. "Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development", European Researcher 25(7): 1105–1110, 2012.
External links
Chandrashekhar, Kiran.
Gifted (2017 film), in which the Trachtenberg system features prominently, with Mckenna Grace playing the lead role of a child who has learned the technique.
Vedic Mathematics Academy
Arithmetic
Mental calculation | Trachtenberg system | [
"Mathematics"
] | 3,267 | [
"Mental calculation",
"Arithmetic",
"Number theory"
] |
567,405 | https://en.wikipedia.org/wiki/Riverfront%20Stadium | Riverfront Stadium, also known as Cinergy Field from 1996 to 2002, was a multi-purpose stadium in Cincinnati, Ohio. It was the home of the Cincinnati Reds of Major League Baseball (MLB) from 1970 through 2002 and the Cincinnati Bengals of the National Football League (NFL) from 1970 to 1999. Located on the Ohio River in downtown Cincinnati, the stadium was best known as the home of "The Big Red Machine", as the Reds were often called in the 1970s.
Construction began on February 1, 1968, and was completed at a cost of less than $50 million. Riverfront's grand opening was held on June 30, 1970, an 8–2 Reds loss to the Atlanta Braves. Braves right fielder Hank Aaron hit the first home run in Riverfront's history, a two-run shot in the first inning which also served as the stadium's first runs batted in. Two weeks later on July 14, 1970, Riverfront hosted the 1970 Major League Baseball All-Star Game. This game is best remembered for the often-replayed collision at home plate between Reds star Pete Rose and catcher Ray Fosse of the Cleveland Indians.
In September 1996, Riverfront Stadium was renamed "Cinergy Field" in a sponsorship deal with Greater Cincinnati energy company Cinergy. In 2001, to make room for Great American Ball Park, the seating capacity at Cinergy Field was reduced to 39,000. There was a huge in-play wall in center field visible after the renovations, to serve as the batter's eye. The stadium was demolished by implosion on December 29, 2002.
History
Riverfront was a multi-purpose, circular "cookie-cutter" stadium, one of many built in the United States in the late 1960s and early 1970s as communities sought to save money by having their football and baseball teams share the same facility. Riverfront, Veterans Stadium in Philadelphia, Busch Memorial Stadium in St. Louis, Atlanta–Fulton County Stadium in Atlanta, Three Rivers Stadium in Pittsburgh, Shea Stadium in New York and Robert F. Kennedy Memorial Stadium in Washington, D.C., all opened within a few years of each other and were largely indistinguishable from one another; in particular, sportscasters often confused Riverfront with its fellow Ohio River cookie-cutter, Three Rivers Stadium, because of the two stadiums' similar names and designs.
One feature of Riverfront that distinguished it from other cookie-cutters was that the field level seats for baseball were divided in half directly behind home plate, with the third-base side stands wheeled to left field and the ones on the first-base side remaining stationary for conversion to a football seating configuration. The AstroTurf panels covering the tracks could be seen in left field during Reds games.
The site Riverfront Stadium sat on originally included the 2nd Street tenement, birthplace and boyhood home of cowboy singer and actor Roy Rogers, who joked that he was born "somewhere between second base and center field."
Riverfront Stadium's scoreboard was designed by American Sign and Indicator, but in its last years was maintained by Trans-Lux. That scoreboard would be upgraded in the 1980s with the addition of an adjacent Sony JumboTron. The playing field was originally illuminated by 1,648 thousand-watt GTE Sylvania Metalarc lamps.
Big Red Machine
The Reds moved to Riverfront Stadium midway through the 1970 season, after spending over 86 years at the intersection of Findlay Street and Western Avenue – the last 57½ of those years at Crosley Field. Riverfront quickly earned a place in Cincinnati's century-long baseball tradition as the home of one of the best teams in baseball history. The Reds had only won three pennants in their final 39 years at Crosley Field (1939, 1940, 1961) but made the World Series in Riverfront's first year (1970) and a total of four times in the stadium's first seven years, with the Reds winning back-to-back championships in 1975 and 1976. The World Series would return in 1990, with Cincinnati winning the first two of a four-game sweep of the Oakland Athletics at Riverfront.
Baseball purists disliked Riverfront's artificial turf, but Reds' Manager Sparky Anderson and General Manager Bob Howsam took advantage of it by encouraging speed and line drive hitting that could produce doubles, triples and high-bouncing infield hits. Players who combined power and speed like Joe Morgan, Pete Rose and Ken Griffey, Sr. thrived there. On defense, the fast surface and virtually dirtless infield (see photo) rewarded range and quickness by both outfielders and infielders, like shortstop Dave Concepción who used the turf to bounce many of his long throws to first. Catcher Johnny Bench and first baseman Tony Pérez played here. The artificial turf covered not only the normal grass area of the ballpark but also most of the normally dirt-covered portion of the infield; the infield area boundary where dirt would normally be was denoted with a white lined arc. Only the pitcher's mound, the home plate area (in two circled areas), and cutouts around first, second and third bases had dirt surfaces (which were covered in five-sided diamond shaped areas). This was the first stadium in the majors with this "sliding pit" configuration. Most of the new stadiums with artificial turf that would follow (Veterans Stadium, Royals Stadium, Louisiana Superdome, Olympic Stadium (Montreal), Exhibition Stadium, Kingdome, Hubert H. Humphrey Metrodome, B.C. Place, SkyDome) installed sliding pits as the original layout, and the existing artificial turf fields in San Francisco, Houston, Pittsburgh, and St. Louis would change to the cut-out configuration within the next few years after Riverfront's opening.
Riverfront hosted the MLB All-Star Game twice: first on July 14, 1970, with President Richard Nixon in attendance (51,838 total attendance), and again on July 12, 1988 (55,837 attendance).
Professional football
Despite Cincinnati's love of baseball, it was the prospect of a professional football team that finally moved the city to end 20 years of discussion and build a new stadium on the downtown riverfront. After playing for two seasons at Nippert Stadium on the University of Cincinnati campus, the Bengals built on the Reds' success in the stadium's first year when they recorded their first winning season and playoff appearance in 1970, just their third year of existence.
Perhaps the most memorable football game at Riverfront Stadium was the 1981 AFC Championship Game on January 10, 1982. The game became known as the Freezer Bowl and was won by the Bengals over the San Diego Chargers, 27–7. The air temperature during the game was −9 °F (−23 °C) and the wind chill was −59 °F (−51 °C), the coldest in NFL history. The win earned the Bengals their first of two trips to the Super Bowl (XVI) while playing at Riverfront Stadium, and the first of three in team history overall.
Riverfront Stadium hosted the 1988 AFC Championship Game, as the Bengals beat the Buffalo Bills 21–10 to advance to their second Super Bowl appearance.
During the Bengals' tenure, they defeated every visiting franchise at least once, enjoying perfect records against the Arizona Cardinals (4-0), New York Giants (4-0), and Philadelphia Eagles (3-0). They posted a 5–1 record in playoff games played in Riverfront Stadium, with victories over the Buffalo Bills (twice), San Diego Chargers, Seattle Seahawks, and Houston Oilers. Their only home playoff loss came to the New York Jets.
For most of the Bengals' tenure at the stadium, the field contained only the basic markings required for play. Until the late 1990s, there wasn't a logo at midfield or any writing in the end zone, which had long become standard in NFL stadiums.
During the 1988 season as the Bengals were making another Super Bowl run, Riverfront Stadium was nicknamed the Jungle as the Bengals went a perfect 10-0 at home during the regular season and in the playoffs. With the new stadium nickname, the fans and team adopted the Guns N' Roses song "Welcome to the Jungle" as the unofficial theme song for the Bengals. When Paul Brown Stadium (now Paycor Stadium) opened in 2000, the Jungle theme was incorporated into the stadium design.
College football
Between 1970 and 1990 Riverfront Stadium hosted 25 University of Cincinnati football games to accommodate higher-caliber visiting teams and local rivals which would overwhelm demand in their usual on-campus home, Nippert Stadium (which then could only hold 28,000). Among the Bearcats' opponents were the University of Maryland, University of Kentucky, University of Louisville, Boston College, West Virginia University, Penn State University, whose 1985 game took place with the Nittany Lions number one in the coaches' poll, and the University of Miami three times, twice while the Hurricanes were the defending national champions. It would be a temporary full-time home for the Bearcats during the 1990 season, when Nippert Stadium was undergoing renovations.
The Bearcats finished with a 12–13 all-time record at Riverfront.
Final years as a baseball-only stadium
When the Bengals moved to Paul Brown Stadium in 2000, the Reds were left as Riverfront Stadium's only tenant. Prior to the 2001 baseball season, the stadium was remodeled into a baseball-only configuration, and the artificial turf surface was replaced with natural grass.
To allow room for the construction of Great American Ball Park (which was being built largely over the grounds the stadium already sat on), a large section of the left and center field stands was removed and the distance to the fences was shortened. A wall was built in deep center field to prevent easy home runs. The new Great American Ball Park and old Riverfront Stadium were 26 inches apart at their closest point during this time. In the Reds' final two seasons in the stadium, ongoing construction on Great American was plainly visible just beyond the outfield walls while the team played their games. The stadium's final game was played on September 22, 2002, as the Reds lost 4–3 to the Philadelphia Phillies before a crowd of 40,964. Reds third baseman Aaron Boone hit the final home run in Riverfront's history in the loss, an eighth-inning solo home run off Phillies reliever Dan Plesac.
The stadium was demolished by implosion on December 29, 2002. Part of the former Riverfront Stadium site is now occupied by Great American Ball Park (which would open the following April) and the National Underground Railroad Freedom Center, along with several mixed-use developments and parking facilities. A small portion of the site is now occupied by the Reds' Hall of Fame and Museum and Main Street, which was extended when the new park was built and when the old park was demolished.
Seating capacity
Attendance records
Bold indicates the winner of each game.
Baseball
The 1988 All-Star Game had an attendance of 55,837
Football
Milestones
Baseball
First stadium to have its entire field covered by AstroTurf, except for the cutouts around the bases and pitcher's mound.
First hit: Félix Millán, June 30, 1970.
First home run: Hank Aaron, June 30, 1970.
First Presidential Visit: Richard Nixon, July 14, 1970.
First upper deck home run: Tony Pérez, August 11, 1970.
First World Series game ever played on artificial turf: October 10, 1970 (Reds vs. Baltimore Orioles).
First no-hitter: Ken Holtzman, June 3, 1971.
First pitcher ever to pitch a no-hitter and hit two home runs in the same game: Rick Wise, June 23, 1971.
Hank Aaron ties the all-time home run record with number 714: April 4, 1974.
First stadium to display metric distances on the outfield walls (100.58 meters down the lines, 114.30 to the alleys, 123.13 to center): 1976.
Highest season attendance, 2,629,708: 1976.
First rain checks issued: August 30, 1978.
First player to hit for the cycle: Mike Easler, June 12, 1980.
Pete Rose breaks the all-time hit record with number 4,192: September 11, 1985.
First player ever to be caught stealing four times in one game: Robby Thompson, June 27, 1986.
Perfect Game: Tom Browning, September 16, 1988.
Umpire John McSherry collapsed and died on April 1, 1996.
Ray Lankford hits two upper-deck home runs on July 15, 1997, becoming the only player to do so in the stadium's history to that point.
Longest home run, 473': Mark McGwire, May 5, 2000.
Football
First touchdown: Sam Wyche, September 20, 1970
First field goal: Horst Muhlmann, September 20, 1970
Freezer Bowl: lowest wind-chill (2nd lowest temperature) in NFL history, January 10, 1982
Steve Largent becomes the first player in NFL history to catch 100 career touchdown passes, December 10, 1989.
Corey Dillon breaks the single-game rookie rushing record with 246 yards on December 4, 1997.
Concerts
The Kool Jazz Festival (now the Macy's Music Festival) was an annual fixture.
Religious gatherings
The Jehovah's Witnesses hosted three conventions in the stadium, in 1971, 1974 and 1978.
Promise Keepers held a meeting there in 1997.
Gallery
References
Sources
Dittmar, Joseph J. (1997). Baseball Records Registry: The Best and Worst Single-Day Performances and the Stories Behind Them. McFarland & Company.
Munsey & Suppes (1996–2004). Riverfront Stadium. Ballparks.
Smith, Ron (2000). Riverfront Stadium. The Ballpark Book. The Sporting News.
Riverfront Stadium Opens. BaseballLibrary.com.
External links
A Farewell to Cinergy Field. MLB.com.
Cinergy Field: Kiss it Goodbye. Cincinnati.com.
Riverfront Stadium/Cinergy Field. Ballparks of Baseball.
Riverfront Stadium/Cinergy Field. Stadiums of Pro Football
Cinergy Field. BaseballLibrary.com.
2002 disestablishments in Ohio
American football venues in Ohio
Baseball venues in Ohio
Buildings and structures demolished by controlled implosion
Cincinnati Bengals stadiums
Cincinnati Reds stadiums
Defunct Major League Baseball venues
Defunct National Football League venues
Defunct American football venues in the United States
Defunct baseball venues in the United States
Defunct multi-purpose stadiums in the United States
Demolished sports venues in Ohio
Multi-purpose stadiums in the United States
Sports venues completed in 1970
Sports venues demolished in 2002
Sports venues in Cincinnati
1970 establishments in Ohio | Riverfront Stadium | [
"Engineering"
] | 3,003 | [
"Buildings and structures demolished by controlled implosion",
"Architecture"
] |
567,445 | https://en.wikipedia.org/wiki/Jakob%20Steiner | Jakob Steiner (18 March 1796 – 1 April 1863) was a Swiss mathematician who worked primarily in geometry.
Life
Steiner was born in the village of Utzenstorf, Canton of Bern. At 18, he became a pupil of Heinrich Pestalozzi and afterwards studied at Heidelberg. Then, he went to Berlin, earning a livelihood there, as in Heidelberg, by tutoring. Here he became acquainted with A. L. Crelle, who, encouraged by his ability and by that of Niels Henrik Abel, then also staying at Berlin, founded his famous Journal (1826).
After Steiner's publication (1832) of his Systematische Entwickelungen he received, through Carl Gustav Jacob Jacobi, who was then professor at Königsberg University, an honorary degree from that university; and through the influence of Jacobi and of the brothers Alexander and Wilhelm von Humboldt a new chair of geometry was founded for him at Berlin (1834). This he occupied until his death in Bern on 1 April 1863.
He was described by Thomas Hirst as follows:
"He is a middle-aged man, of pretty stout proportions, has a long intellectual face, with beard and moustache and a fine prominent forehead, hair dark rather inclining to turn grey. The first thing that strikes you on his face is a dash of care and anxiety, almost pain, as if arising from physical suffering—he has rheumatism. He never prepares his lectures beforehand. He thus often stumbles or fails to prove what he wishes at the moment, and at every such failure he is sure to make some characteristic remark."
Mathematical contributions
Steiner's mathematical work was mainly confined to geometry. This he treated synthetically, to the total exclusion of analysis, which he hated, and he is said to have considered it a disgrace to synthetic geometry if equal or higher results were obtained by analytical geometry methods. In his own field he surpassed all his contemporaries. His investigations are distinguished by their great generality, by the fertility of his resources, and by the rigour in his proofs. He has been considered the greatest pure geometer since Apollonius of Perga.
In his Systematische Entwickelung der Abhängigkeit geometrischer Gestalten von einander he laid the foundation of modern synthetic geometry. In projective geometry even parallel lines have a point in common: a point at infinity. Thus two points determine a line and two lines determine a point. The symmetry of point and line is expressed as projective duality. Starting with perspectivities, the transformations of projective geometry are formed by composition, producing projectivities. Steiner identified sets preserved by projectivities such as a projective range and pencils. He is particularly remembered for his approach to a conic section by way of projectivity called the Steiner conic.
In a second little volume, Die geometrischen Constructionen ausgeführt mittels der geraden Linie und eines festen Kreises (1833), republished in 1895 by Ottingen, he shows, what had been already suggested by J. V. Poncelet, how all problems of the second order can be solved by aid of the straight edge alone without the use of compasses, as soon as one circle is given on the drawing-paper. He also wrote "Vorlesungen über synthetische Geometrie", published posthumously at Leipzig by C. F. Geiser and H. Schroeter in 1867; a third edition by R. Sturm was published in 1887–1898.
Other geometric results by Steiner include the development of a formula for the partitioning of space by planes (the maximal number of parts created by n planes), several theorems about the famous Steiner chain of tangential circles, and a proof of the isoperimetric theorem (a flaw was later found in the proof, which was corrected by Weierstrass).
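For reference, the maximal number of parts into which n planes can divide three-dimensional space, the quantity Steiner's partition formula concerns, is usually stated today as follows (this is the standard modern form, not a quotation from Steiner):

```latex
R(n) = \binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \binom{n}{3} = \frac{n^{3} + 5n + 6}{6}
```

For example, one plane gives 2 parts, two planes give 4, three give 8, and four give at most 15.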
The rest of Steiner's writings are found in numerous papers mostly published in Crelle's Journal, the first volume of which contains his first four papers. The most important are those relating to algebraic curves and surfaces, especially the short paper Allgemeine Eigenschaften algebraischer Curven. This contains only results, and there is no indication of the method by which they were obtained, so that, according to O. Hesse, they are, like Fermat's theorems, riddles to the present and future generations. Eminent analysts succeeded in proving some of the theorems, but it was reserved to Luigi Cremona to prove them all, and that by a uniform synthetic method, in his book on algebraic curves.
Other important investigations relate to maxima and minima. Starting from simple elementary propositions, Steiner advances to the solution of problems which analytically require the calculus of variations, but which at the time altogether surpassed the powers of that calculus. Connected with this is the paper Vom Krümmungsschwerpuncte ebener Curven, which contains numerous properties of pedals and roulettes, especially of their areas.
Steiner also made a small but important contribution to combinatorics. In 1853, Steiner published a two-page article in Crelle's Journal on what nowadays is called Steiner systems, a basic kind of block design.
His oldest papers and manuscripts (1823–1826) were published by his admirer Fritz Bützberger at the request of the Bernese Society for Natural Scientists.
See also
Arrangement of lines
Malfatti circles
Miquel and Steiner's quadrilateral theorem
Minkowski–Steiner formula
Mixed volume
Power of a point theorem
Steiner curve
Steiner symmetrization
Steiner system
Steiner surface
Steiner conic
Steiner's conic problem
Steiner's problem
Steiner tree
Steiner chain
Poncelet–Steiner theorem
Parallel axes rule
Steiner–Lehmus theorem
Steiner inellipse
Steinerian
Steiner point (computational geometry)
Steiner point (triangle)
Notes
References
Viktor Blåsjö (2009) "Jakob Steiner's Systematische Entwickelung: The Culmination of Classical Geometry", Mathematical Intelligencer 31(1): 21–9.
External links
Steiner, J. (1796–1863)
Jacob Steiner's work on the Isoperimetric Problem at Convergence (by Jennifer Wiegert)
1796 births
1863 deaths
People from Emmental District
19th-century Swiss mathematicians
Geometers | Jakob Steiner | [
"Mathematics"
] | 1,306 | [
"Geometers",
"Geometry"
] |
567,472 | https://en.wikipedia.org/wiki/Cloud%20condensation%20nuclei | Cloud condensation nuclei (CCNs), also known as cloud seeds, are small particles, typically about 0.2 μm in size, or one-hundredth the size of a cloud droplet. CCNs are a unique subset of aerosols in the atmosphere on which water vapour condenses. This can affect the radiative properties of clouds and the overall atmosphere. Water vapour requires a non-gaseous surface to make the transition to a liquid; this process is called condensation.
In the atmosphere of Earth, this surface presents itself as tiny solid or liquid particles called CCNs. When no CCNs are present, water vapour can be supercooled at about for 5–6 hours before droplets spontaneously form. This is the basis of the cloud chamber for detecting subatomic particles.
The concept of CCN is used in cloud seeding, which tries to encourage rainfall by seeding the air with condensation nuclei. It has further been suggested that creating such nuclei could be used for marine cloud brightening, a climate engineering technique. Some natural environmental phenomena, such as the one proposed in the CLAW hypothesis also arise from the interaction between naturally produced CCNs and cloud formation.
Properties
Size
A typical raindrop is about 2 mm in diameter, a typical cloud droplet is on the order of 0.02 mm, and a typical cloud condensation nucleus (aerosol) is on the order of 0.0001 mm or 0.1 μm or greater in diameter. The number of cloud condensation nuclei in the air can be measured at ranges between around 100 to 1000 per cm3. The total mass of CCNs injected into the atmosphere has been estimated at over a year's time.
Composition
There are many different types of atmospheric particulates that can act as CCN. The particles may be composed of dust or clay, soot or black carbon from grassland or forest fires, sea salt from ocean wave spray, soot from factory smokestacks or internal combustion engines, sulfate from volcanic activity, phytoplankton or the oxidation of sulfur dioxide and secondary organic matter formed by the oxidation of volatile organic compounds. The ability of these different types of particles to form cloud droplets varies according to their size and also their exact composition, as the hygroscopic properties of these different constituents are very different. Sulfate and sea salt, for instance, readily absorb water whereas soot, organic carbon, and mineral particles do not. This is made even more complicated by the fact that many of the chemical species may be mixed within the particles (in particular the sulfate and organic carbon). Additionally, while some particles (such as soot and minerals) do not make very good CCN, they do act as ice nuclei in colder parts of the atmosphere.
Abundance
The number and type of CCNs can affect the precipitation amount, lifetime, and radiative properties of clouds. Ultimately, this has an influence on climate change. Modeling research led by Marcia Baker revealed that sources and sinks are balanced by coagulation and coalescence, which leads to stable levels of CCNs in the atmosphere. There is also speculation that solar variation may affect cloud properties via CCNs, and hence affect climate.
Airborne Measurements
Airborne measurements of individual mixed aerosols that can form CCN at the SGP site were performed using a research aircraft. A CCN study by Kulkarni et al. (2023) describes the complexity of modeling CCN concentrations.
Applications
Cloud seeding
Cloud seeding is a process by which small particulates are added to the atmosphere to induce cloud formation and precipitation. This has been done by dispersing salts using aerial or ground-based methods. Other methods have been researched, like using laser pulses to excite molecules in the atmosphere, and more recently, in 2021, electric charge emission using drones. The effectiveness of these methods is not consistent. Many studies did not notice a statistically significant difference in precipitation while others have. Cloud seeding may also occur from natural processes such as forest fires, which release small particles into the atmosphere that can act as nuclei.
Marine cloud brightening
Marine cloud brightening is a climate engineering technique which involves the injection of small particles into clouds to enhance their reflectivity, or albedo. The motive behind this technique is to control the amount of sunlight allowed to reach ocean surfaces in hopes of lowering surface temperatures through radiative forcing. Many methods involve the creation of small droplets of seawater to deliver sea salt particles into overlying clouds.
Complications may arise when reactive chlorine and bromine from sea salt react with existing molecules in the atmosphere. They have been shown to reduce ozone in the atmosphere; the same effect reduces hydroxide which correlates to the increased longevity of methane, a greenhouse gas.
Relation with phytoplankton and climate
A 1987 article in Nature proposed that global climate may be regulated by a feedback loop arising from the relationship between CCNs, the temperature-regulating behavior of clouds, and oceanic phytoplankton. This phenomenon has since been referred to as the CLAW hypothesis, after the authors of the original study. Common CCNs over oceans are sulphate aerosols. These aerosols are formed from the dimethyl sulfide (DMS) produced by algae found in seawater. Large algal blooms, observed to have increased in areas such as the South China Sea, can contribute a substantial amount of DMS to their surrounding atmospheres, leading to increased cloud formation. As the activity of phytoplankton is temperature dependent, this negative-feedback loop can act as a form of climate regulation.
The Revenge of Gaia, written by James Lovelock, an author of the 1987 study, proposes an alternative relationship between ocean temperatures and phytoplankton population size. This has been named the anti-CLAW hypothesis. In this scenario, the stratification of oceans causes nutrient-rich cold water to become trapped under warmer water, where sunlight for photosynthesis is most abundant. This inhibits the growth of phytoplankton, so that their population, and the sulfate CCNs they produce, decrease with increasing temperature. This interaction thus lowers cloud albedo through decreasing CCN-induced cloud formation and increases the solar radiation allowed to reach ocean surfaces, resulting in a positive-feedback loop.
From volcanoes
Volcanoes emit a significant amount of microscopic gas and ash particles into the atmosphere when they erupt, which become atmospheric aerosols. By increasing the number of aerosol particles through gas-to-particle conversion processes, the contents of these eruptions can then affect the concentrations of potential cloud condensation nuclei (CCN) and ice nucleating particles (INP), which in turn affects cloud properties and leads to changes in local or regional climate.
Of these gases, sulfur dioxide, carbon dioxide, and water vapour are most commonly found in volcanic eruptions. While water vapour and carbon dioxide CCNs are naturally abundant in the atmosphere, the increase of sulfur dioxide CCNs can impact the climate by causing global cooling. Almost 9.2 Tg of sulfur dioxide (SO2) is emitted from volcanoes annually. This sulfur dioxide undergoes a transformation into sulfuric acid, which quickly condenses in the stratosphere to produce fine sulfate aerosols. The Earth's lower atmosphere, or troposphere, cools as a result of the aerosols' increased capability to reflect solar radiation back into space.
Effect on air pollution
See also
Bergeron process
Contrail
Evapotranspiration
Global dimming
Nucleation
Seed crystal
Water cycle
References
Further reading
Fletcher, Neville H. (2011). The physics of rainclouds (Paperback ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-15479-6. OCLC 85709529
External links
www.grida.no
An easy experiment to do at home (in French)
Cloud and fog physics
Particulates | Cloud condensation nuclei | [
"Chemistry"
] | 1,625 | [
"Particulates",
"Particle technology"
] |
567,484 | https://en.wikipedia.org/wiki/Hormogonium | Hormogonia are motile filaments of cells formed by some cyanobacteria in the order Nostocales and Stigonematales. They are formed during vegetative reproduction in unicellular, filamentous cyanobacteria, and some may contain heterocysts and akinetes.
Cyanobacteria differentiate into hormogonia when exposed to an environmental stress or when placed in new media.
Hormogonium differentiation is crucial for the development of nitrogen-fixing plant cyanobacteria symbioses, in particular that between cyanobacteria of the genus Nostoc and their hosts. In response to a hormogonium-inducing factor (HIF) secreted by plant hosts, cyanobacterial symbionts differentiate into hormogonia and then dedifferentiate back into vegetative cells after about 96 hours. To establish the symbiosis, the hormogonia must reach the plant host within this time. The bacteria then differentiate specialized nitrogen-fixing cells called heterocysts and enter into a working symbiosis with the plant.
Depending on the species, hormogonia can be many hundreds of micrometers in length and can travel as fast as 11 μm/s. They move via gliding motility, which requires a wettable surface or a viscous substrate, such as agar.
References
Cell anatomy
Cyanobacterial cells
Cyanobacteria
Cyanobacteria stubs | Hormogonium | [
"Biology"
] | 312 | [
"Algae",
"Cyanobacteria"
] |
567,523 | https://en.wikipedia.org/wiki/Message%20authentication%20code | In cryptography, a message authentication code (MAC), sometimes known as an authentication tag, is a short piece of information used for authenticating and integrity-checking a message. In other words, it is used to confirm that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content.
Terminology
The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications to distinguish it from the use of the latter as media access control address (MAC address). However, some authors use MIC to refer to a message digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the term message integrity code (MIC), and instead using checksum, error detection code, hash, keyed hash, message authentication code, or protected checksum.
Definitions
Informally, a message authentication code system consists of three algorithms:
A key generation algorithm selects a key from the key space uniformly at random.
A MAC generation algorithm efficiently returns a tag given the key and the message.
A verifying algorithm efficiently verifies the authenticity of the message given the same key and the tag. That is, return accepted when the message and tag are not tampered with or forged, and otherwise return rejected.
A secure message authentication code must resist attempts by an adversary to forge tags, for arbitrary, select, or all messages, including under conditions of known- or chosen-message. It should be computationally infeasible to compute a valid tag of the given message without knowledge of the key, even if for the worst case, we assume the adversary knows the tag of any message but the one in question.
Formally, a message authentication code (MAC) system is a triple of efficient algorithms (G, S, V) satisfying:
G (key-generator) gives the key k on input 1^n, where n is the security parameter.
S (signing) outputs a tag t on the key k and the input string x.
V (verifying) outputs accepted or rejected on inputs: the key k, the string x and the tag t.
S and V must satisfy the following:
Pr[ k ← G(1^n) : V(k, x, S(k, x)) = accepted ] = 1.
A MAC is unforgeable if for every efficient adversary A
Pr[ k ← G(1^n), (x, t) ← A^{S(k, · )}(1^n), x ∉ Query(A^{S(k, · )}, 1^n) : V(k, x, t) = accepted ] < negl(n),
where A^{S(k, · )} denotes that A has access to the oracle S(k, · ), Query(A^{S(k, · )}, 1^n) denotes the set of the queries on S made by A, which knows n, and negl(n) denotes a negligible function of n. Clearly we require that any adversary cannot directly query the string x on S, since otherwise a valid tag can be easily obtained by that adversary.
Security
While MAC functions are similar to cryptographic hash functions, they possess different security requirements. To be considered secure, a MAC function must resist existential forgery under chosen-message attacks. This means that even if an attacker has access to an oracle which possesses the secret key and generates MACs for messages of the attacker's choosing, the attacker cannot guess the MAC for other messages (which were not used to query the oracle) without performing infeasible amounts of computation.
MACs differ from digital signatures as MAC values are both generated and verified using the same secret key. This implies that the sender and receiver of a message must agree on the same key before initiating communications, as is the case with symmetric encryption. For the same reason, MACs do not provide the property of non-repudiation offered by signatures specifically in the case of a network-wide shared secret key: any user who can verify a MAC is also capable of generating MACs for other messages. In contrast, a digital signature is generated using the private key of a key pair, which is public-key cryptography. Since this private key is only accessible to its holder, a digital signature proves that a document was signed by none other than that holder. Thus, digital signatures do offer non-repudiation. However, non-repudiation can be provided by systems that securely bind key usage information to the MAC key; the same key is in the possession of two people, but one has a copy of the key that can be used for MAC generation while the other has a copy of the key in a hardware security module that only permits MAC verification. This is commonly done in the finance industry.
While the primary goal of a MAC is to prevent forgery by adversaries without knowledge of the secret key, this is insufficient in certain scenarios. When an adversary is able to control the MAC key, stronger guarantees are needed, akin to collision resistance or preimage security in hash functions. For MACs, these concepts are known as commitment and context-discovery security.
Implementation
MAC algorithms can be constructed from other cryptographic primitives, like cryptographic hash functions (as in the case of HMAC) or from block cipher algorithms (OMAC, CCM, GCM, and PMAC). However many of the fastest MAC algorithms, like UMAC-VMAC and Poly1305-AES, are constructed based on universal hashing.
Intrinsically keyed hash algorithms such as SipHash are also by definition MACs; they can be even faster than universal-hashing based MACs.
Additionally, the MAC algorithm can deliberately combine two or more cryptographic primitives, so as to maintain protection even if one of them is later found to be vulnerable. For instance, in Transport Layer Security (TLS) versions before 1.2, the input data is split in halves that are each processed with a different hashing primitive (SHA-1 and SHA-2) then XORed together to output the MAC.
One-time MAC
Universal hashing and in particular pairwise independent hash functions provide a secure message authentication code as long as the key is used at most once. This can be seen as the one-time pad for authentication.
The simplest such pairwise independent hash function is defined by the random key, key = (a, b), and the MAC tag for a message m is computed as tag = (a·m + b) mod p, where p is prime.
More generally, k-independent hashing functions provide a secure message authentication code as long as the key is used less than k times for k-ways independent hashing functions.
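A minimal sketch of the one-time construction just described; the prime p and the integer encoding of the message are arbitrary illustrative choices.

```python
import secrets

p = 2**127 - 1                                      # a Mersenne prime, chosen for convenience
a, b = secrets.randbelow(p), secrets.randbelow(p)   # one-time key (a, b), uniform modulo p

def one_time_tag(m: int) -> int:
    """Tag for a message m (encoded as an integer < p): (a*m + b) mod p."""
    return (a * m + b) % p

message = 1234567890
tag = one_time_tag(message)
assert tag == one_time_tag(message)                 # verifier recomputes with the same key
```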
Message authentication codes and data origin authentication have been also discussed in the framework of quantum cryptography. By contrast to other cryptographic tasks, such as key distribution, for a rather broad class of quantum MACs it has been shown that quantum resources do not offer any advantage over unconditionally secure one-time classical MACs.
Standards
Various standards exist that define MAC algorithms. These include:
FIPS PUB 113 Computer Data Authentication, withdrawn in 2002, defines an algorithm based on DES.
FIPS PUB 198-1 The Keyed-Hash Message Authentication Code (HMAC)
NIST SP800-185 SHA-3 Derived Functions: cSHAKE, KMAC, TupleHash, and ParallelHash
ISO/IEC 9797-1 Mechanisms using a block cipher
ISO/IEC 9797-2 Mechanisms using a dedicated hash-function
ISO/IEC 9797-3 Mechanisms using a universal hash-function
ISO/IEC 29192-6 Lightweight cryptography - Message authentication codes
ISO/IEC 9797-1 and -2 define generic models and algorithms that can be used with any block cipher or hash function, and a variety of different parameters. These models and parameters allow more specific algorithms to be defined by nominating the parameters. For example, the FIPS PUB 113 algorithm is functionally equivalent to ISO/IEC 9797-1 MAC algorithm 1 with padding method 1 and a block cipher algorithm of DES.
An example of MAC use
In this example, the sender of a message runs it through a MAC algorithm to produce a MAC data tag. The message and the MAC tag are then sent to the receiver. The receiver in turn runs the message portion of the transmission through the same MAC algorithm using the same key, producing a second MAC data tag. The receiver then compares the first MAC tag received in the transmission to the second generated MAC tag. If they are identical, the receiver can safely assume that the message was not altered or tampered with during transmission (data integrity).
However, to allow the receiver to be able to detect replay attacks, the message itself must contain data that assures that this same message can only be sent once (e.g. time stamp, sequence number or use of a one-time MAC). Otherwise an attacker could – without even understanding its content – record this message and play it back at a later time, producing the same result as the original sender.
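A sketch of the exchange described above, using HMAC-SHA256 from Python's standard library; the key and message are placeholders, and sender and receiver are shown in one script for brevity.

```python
import hmac
import hashlib

key = b"shared-secret-key"                        # agreed on out of band
message = b"Transfer 100 units to account 42"

# Sender: compute the MAC tag and transmit (message, tag).
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver: recompute the tag over the received message and compare in constant time.
expected = hmac.new(key, message, hashlib.sha256).digest()
if hmac.compare_digest(tag, expected):
    print("message accepted (integrity and authenticity verified)")
else:
    print("message rejected")
```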
See also
Checksum
CMAC
HMAC (hash-based message authentication code)
MAA
MMH-Badger MAC
Poly1305
Authenticated encryption
UMAC
VMAC
SipHash
KMAC
Notes
References
External links
RSA Laboratories entry on MACs
Ron Rivest lecture on MACs
Message authentication codes
Error detection and correction | Message authentication code | [
"Engineering"
] | 1,842 | [
"Error detection and correction",
"Reliability engineering"
] |
567,527 | https://en.wikipedia.org/wiki/Provisional%20designation%20in%20astronomy | Provisional designation in astronomy is the naming convention applied to astronomical objects immediately following their discovery. The provisional designation is usually superseded by a permanent designation once a reliable orbit has been calculated. Approximately 47% of the more than 1,100,000 known minor planets remain provisionally designated, as hundreds of thousands have been discovered in the last two decades.
Minor planets
The current system of provisional designation of minor planets (asteroids, centaurs and trans-Neptunian objects) has been in place since 1925. It superseded several previous conventions, each of which was in turn rendered obsolete by the increasing numbers of minor planet discoveries. A modern or new-style provisional designation consists of the year of discovery, followed by two letters and, possibly, a suffixed number.
New-style provisional designation
For example, the provisional designation 1992 QB1 (15760 Albion) stands for the 27th body identified during 16-31 Aug 1992:
1992 – the first element indicates the year of discovery.
Q – the first letter indicates the half-month of the object's discovery within that year and ranges from A (first half of January) to Y (second half of December), while the letters I and Z are not used (see table below). The first half is always the 1st through to the 15th of the month, regardless of the numbers of days in the second "half". Thus, Q indicates the period from Aug 16 to 31.
B1 – the second letter and a numerical suffix indicate the order of discovery within that half-month. The first 25 discoveries of the half-month only receive a letter (A to Z) without a suffix, while the letter I is not used (to avoid potential confusions with the digit 1). Because modern techniques typically yield hundreds if not thousands of discoveries per half-month, the subscript number is appended to indicate the number of times that the letters from A to Z have cycled through. The suffix 1 indicates one completed cycle (1 cycle × 25 letters = 25), while B is the 2nd position in the current cycle. Thus, B1 stands for the 27th minor planet discovered in a half-month.
The packed form of 1992 QB1 is written as J92Q01B.
This scheme is now also used retrospectively for pre-1925 discoveries. For these, the first digit of the year is replaced by an A. For example, A801 AA indicates the first object discovered in the first half of January 1801 (1 Ceres).
Further explanations
During the first half-month of January 2014, the first minor planet identification was assigned the provisional designation 2014 AA. Then the assignment continued to the end of the cycle at 2014 AZ, which was in turn followed by the first identification of the second cycle, 2014 AA1. The assignment in this second cycle continued with 2014 AB1, 2014 AC1, ... until 2014 AZ1, and then was continued with 2014 AA2, the first item in the third cycle. With the beginning of a new half-month on 16 January 2014, the first letter changed to "B", and the series started with 2014 BA.
An idiosyncrasy of this system is that the second letter is listed before the number, even though the second letter is considered "least-significant". This is in contrast to most of the world's numbering systems. This idiosyncrasy is not seen, however, in the so-called packed form (packed designation).
A packed designation has no spaces. It may also use letters to codify the designation's year and subscript number. It is frequently used in online and electronic documents. For example, the provisional designation 2007 TA418 is written as K07Tf8A in the packed form, where "K07" stands for the year 2007, and "f8" for the subscript number 418.
90377 Sedna, a large trans-Neptunian object, had the provisional designation 2003 VB12, meaning it was identified in the first half of November 2003 (as indicated by the letter "V"), and that it was the 302nd object identified during that time, as 12 cycles of 25 letters give 300, and the letter "B" is the second position in the current cycle.
Survey designations do not follow the rules for new-style provisional designations.
For technical reasons, such as ASCII limitations, the numerical suffix is not always subscripted, but sometimes "flattened out", so that can also be written as .
A very busy half-month was the second half of January 2015 (letter "B"), which saw a total of 14,208 new minor planet identifications. One of the last assignments in this period was 2015 BH568, which corresponds to the 14,208th position in the sequence.
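The letter-plus-suffix rule described above can be sketched in a few lines; the function below maps the position of an identification within a half-month to its two-character code, and the commented examples are the ones used in the text.

```python
def order_code(n: int) -> str:
    """Code for the n-th minor planet identified in a half-month: letters A..Z
    skipping I, with a numeric suffix counting completed 25-letter cycles."""
    letters = "ABCDEFGHJKLMNOPQRSTUVWXYZ"          # 25 letters, 'I' omitted
    cycles, position = divmod(n - 1, 25)
    return letters[position] + (str(cycles) if cycles else "")

print(order_code(27))      # B1   (15760 Albion, 1992 QB1)
print(order_code(302))     # B12  (90377 Sedna, 2003 VB12)
print(order_code(14208))   # H568 (last example above)
```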
Survey designations
Minor planets discovered during the Palomar–Leiden survey including three subsequent Trojan-campaigns, which altogether discovered more than 4,000 asteroids and Jupiter trojans between 1960 and 1977, have custom designations that consist of a number (order in the survey) followed by a space and one of the following identifiers:
P-L Palomar–Leiden survey (1960–1970)
T-1 Palomar–Leiden Trojan survey (1971)
T-2 Palomar–Leiden Trojan survey (1973)
T-3 Palomar–Leiden Trojan survey (1977)
For example, the asteroid 6344 P-L is the 6344th minor planet in the original Palomar–Leiden survey, while the asteroid 4835 T-1 was discovered during the first Trojan-campaign. The majority of these bodies have since been assigned a number and many are already named.
Historical designations
The first four minor planets were discovered in the early 19th century, after which there was a lengthy gap before the discovery of the fifth. Astronomers initially had no reason to believe that there would be countless thousands of minor planets, and strove to assign a symbol to each new discovery, in the tradition of the symbols used for the major planets. For example, 1 Ceres was assigned a stylized sickle (⚳), 2 Pallas a stylized lance or spear (⚴), 3 Juno a scepter (⚵), and 4 Vesta an altar with a sacred fire (). All had various graphic forms, some of considerable complexity.
It soon became apparent, though, that continuing to assign symbols was impractical and provided no assistance when the number of known minor planets was in the dozens. Johann Franz Encke introduced a new system in the Berliner Astronomisches Jahrbuch (BAJ) for 1854, published in 1851, in which he used encircled numbers instead of symbols. Encke's system began the numbering with Astraea, which was given the number (1) and went through (11) Eunomia, while Ceres, Pallas, Juno and Vesta continued to be denoted by symbols, but in the following year's BAJ, the numbering was changed so that Astraea was number (5).
The new system found popularity among astronomers, and since then, the final designation of a minor planet is a number indicating its order of discovery followed by a name. Even after the adoption of this system, though, several more minor planets received symbols, including 28 Bellona the morning star and lance of Mars's martial sister, 35 Leukothea an ancient lighthouse and 37 Fides a Latin cross (). According to Webster's A Dictionary of the English Language, four more minor planets were also given symbols: 16 Psyche, 17 Thetis, 26 Proserpina, and 29 Amphitrite. However, there is no evidence that these symbols were ever used outside of their initial publication in the Astronomische Nachrichten.
134340 Pluto is an exception: it is a high-numbered minor planet that received a graphical symbol with significant astronomical use (♇), because it was considered a major planet on its discovery, and did not receive a minor planet number until 2006.
Graphical symbols continue to be used for some minor planets, and assigned for some recently discovered larger ones, mostly by astrologers (see astronomical symbol and astrological symbol). Three centaurs – 2060 Chiron, 5145 Pholus, and 7066 Nessus – and the largest trans-Neptunian objects – 50000 Quaoar, 90377 Sedna, 90482 Orcus, 136108 Haumea, 136199 Eris, 136472 Makemake, and 225088 Gonggong – have relatively standard symbols among astrologers: the symbols for Haumea, Makemake, and Eris have even been occasionally used in astronomy. However, such symbols are generally not in use among astronomers.
Genesis of the current system
Several different notation and symbolic schemes were used during the latter half of the nineteenth century, but the present form first appeared in the journal Astronomische Nachrichten (AN) in 1892. New numbers were assigned by the AN on receipt of a discovery announcement, and a permanent designation was then assigned once an orbit had been calculated for the new object.
At first, the provisional designation consisted of the year of discovery followed by a letter indicating the sequence of the discovery, but omitting the letter I (historically, sometimes J was omitted instead). Under this scheme, 333 Badenia was initially designated , 163 Erigone was , etc. In 1893, though, increasing numbers of discoveries forced the revision of the system to use double letters instead, in the sequence AA, AB... AZ, BA and so on. The sequence of double letters was not restarted each year, so that followed and so on. In 1916, the letters reached ZZ and, rather than starting a series of triple-letter designations, the double-letter series was restarted with .
Because a considerable amount of time could sometimes elapse between exposing the photographic plates of an astronomical survey and actually spotting a small Solar System object on them (witness the story of Phoebe's discovery), or even between the actual discovery and the delivery of the message (from some far-flung observatory) to the central authority, it became necessary to retrofit discoveries into the sequence — to this day, discoveries are still dated based on when the images were taken, and not on when a human realised they were looking at something new. In the double-letter scheme, this was not generally possible once designations had been assigned in a subsequent year. The scheme used to get round this problem was rather clumsy and used a designation consisting of the year and a lower-case letter in a manner similar to the old provisional-designation scheme for comets. For example, (note that there is a space between the year and the letter to distinguish this designation from the old-style comet designation 1915a, Mellish's first comet of 1915), 1917 b. In 1914 designations of the form year plus Greek letter were used in addition.
Temporary minor planet designations
Temporary designations are custom designations given by an observer or discovering observatory prior to the assignment of a provisional designation by the MPC. These intricate designations were used prior to the Digital Age, when communication was slow or even impossible (e.g. during WWI). The listed temporary designations by observatory/observer use uppercase and lowercase letters (LETTER, letter), digits, numbers and years, as well Roman numerals (ROM) and Greek letters (greek).
Comets
The system used for comets was complex previous to 1995. Originally, the year was followed by a space and then a Roman numeral (indicating the sequence of discovery) in most cases, but difficulties always arose when an object needed to be placed between previous discoveries. For example, after Comet 1881 III and Comet 1881 IV might be reported, an object discovered in between the discovery dates but reported much later couldn't be designated "Comet 1881 III½". More commonly comets were known by the discoverer's name and the year. An alternate scheme also listed comets in order of time of perihelion passage, using lower-case letters; thus "Comet Faye" (modern designation 4P/Faye) was both Comet 1881 I (first comet to pass perihelion in 1881) and Comet 1880c (third comet to be discovered in 1880).
The system since 1995 is similar to the provisional designation of minor planets. For comets, the provisional designation consists of the year of discovery, a space, one letter (unlike the minor planets with two) indicating the half-month of discovery within that year (A=first half of January, B=second half of January, etc. skipping I (to avoid confusion with the number 1 or the numeral I) and not reaching Z), and finally a number (not subscripted as with minor planets), indicating the sequence of discovery within the half-month. Thus, the eighth comet discovered in the second half of March 2006 would be given the provisional designation 2006 F8, whilst the tenth comet of late March would be 2006 F10.
If a comet splits, its segments are given the same provisional designation with a suffixed letter A, B, C, ..., Z, AA, AB, AC...
If an object is originally found asteroidal, and later develops a cometary tail, it retains its asteroidal designation. For example, minor planet 1954 PC turned out to be Comet Faye, and we thus have "4P/1954 PC" as one of the designations of said comet. Similarly, minor planet was reclassified as a comet, and because it was discovered by LINEAR, it is now known as 176P/LINEAR (LINEAR 52) and (118401) LINEAR.
Provisional designations for comets are given condensed or "packed form" in the same manner as minor planets. 2006 F8, if a periodic comet, would be listed in the IAU Minor Planet Database as PK06F080. The last character is purposely a zero, as that allows comet and minor planet designations not to overlap.
Periodic comets
Comets are assigned one of four possible prefixes as a rough classification. The prefix "P" (as in, for example, P/1997 C1, a.k.a. Comet Gehrels 4) designates a "periodic comet", one which has an orbital period of less than 200 years or which has been observed during more than a single perihelion passage (e.g. 153P/Ikeya-Zhang, whose period is 367 years). They receive a permanent number prefix after their second observed perihelion passage (see List of periodic comets).
Non-periodic comets
Comets which do not fulfill the "periodic" requirements receive the "C" prefix (e.g. C/2006 P1, the Great Comet of 2007). Comets initially labeled as "non-periodic" may, however, switch to "P" if they later fulfill the requirements.
Comets which have been lost or have disintegrated are prefixed "D" (e.g. D/1993 F2, Comet Shoemaker-Levy 9).
Finally, comets for which no reliable orbit could be calculated, but are known from historical records, are prefixed "X" as in, for example, X/1106 C1. (Also see List of non-periodic comets and List of hyperbolic comets.)
Satellites and rings of planets
When satellites or rings are first discovered, they are given provisional designations such as "S/2000 J 11" (the 11th new satellite of Jupiter discovered in 2000), "S/2005 P 1" (the first new satellite of Pluto discovered in 2005), or "R/2004 S 2" (the second new ring of Saturn discovered in 2004). The initial "S/" or "R/" stands for "satellite" or "ring", respectively, distinguishing the designation from the prefixes "C/", "D/", "P/", and "X/" used for comets. These designations are sometimes written as "S/2005 P1", dropping the second space.
The prefix "S/" indicates a natural satellite, and is followed by a year (using the year when the discovery image was acquired, not necessarily the date of discovery). A one-letter code written in upper case identifies the planet such as J and S for Jupiter and Saturn, respectively (see list of one-letter abbreviations), and then a number identifies sequentially the observation. For example, Naiad, the innermost moon of Neptune, was at first designated "". Later, once its existence and orbit were confirmed, it received its full designation, "".
The Roman numbering system arose with the very first discovery of natural satellites other than Earth's Moon: Galileo referred to the Galilean moons as I through IV (counting from Jupiter outward), in part to spite his rival Simon Marius, who had proposed the names now adopted. Similar numbering schemes naturally arose with the discovery of moons around Saturn and Uranus. Although the numbers initially designated the moons in orbital sequence, new discoveries soon failed to conform with this scheme (e.g. "" is Amalthea, which orbits closer to Jupiter than does Io). The unstated convention then became, at the close of the 19th century, that the numbers more or less reflected the order of discovery, except for prior historical exceptions (see the Timeline of discovery of Solar System planets and their natural satellites). The convention has been extended to natural satellites of minor planets, such as "".
Moons of minor planets
The provisional designation system for minor planet satellites, such as asteroid moons, follows that established for the satellites of the major planets. With minor planets, the planet letter code is replaced by the minor planet number in parentheses. Thus, the first observed moon of 87 Sylvia, discovered in 2001, was at first designated S/2001 (87) 1, later receiving its permanent designation of (87) Sylvia I Romulus. Where more than one moon has been discovered, Roman numerals specify the discovery sequence, so that Sylvia's second moon is designated (87) Sylvia II Remus.
Since Pluto was reclassified in 2006, discoveries of Plutonian moons since then follow the minor-planet system: thus Nix and Hydra, discovered in 2005, were S/2005 P 2 and S/2005 P 1, but Kerberos and Styx, discovered in 2011 and 2012 respectively, were S/2011 (134340) 1 and S/2012 (134340) 1. That said, there has been some unofficial use of the formats "S/2011 P 1" and "S/2012 P 1".
Packed designation
Packed designations are used in online and electronic documents as well as databases.
Packed minor planet designation
The Orbit Database (MPCORB) of the Minor Planet Center (MPC) uses the "packed form" to refer to all provisionally designated minor planets. The idiosyncrasy found in the new-style provisional designations no longer exists in this packed-notation system, as the second letter is now listed after the subscript number, or its equivalent 2-digit code. For an introduction on provisional minor planet designations in the "un-packed" form, see above.
Provisional packed designations
The system of packed provisional minor planet designations:
uses exactly 7 characters with no spaces for all designations
compacts 4 digit years to a 3-character code, e.g. 2014 is written as K14 (see tables below)
converts all subscript numbers to a 2-character code (00 is used when there is no following subscript, 99 is used for subscript 99, A0 is used for subscript 100, and A1 is used for 101)
the packed 2 character subscript code is placed between the half-month letter and the second (discovery order) letter (e.g. has discovery order K so the last three characters for its packed form are A2K)
Contrary to the new-style system, the letter "i" is used in the packed form both for the year and the numeric suffix. The compacting system provides upper and lowercase letters to encode up to 619 "cycles". This means that 15,500 designations () within a half-month can be packed, which is a few times more than the designations assigned monthly in recent years.
Examples
1995 XA is written as J95X00A
1995 XL1 is written as J95X01L
2016 EK156 is written as K16EF6K
2007 TA418 is written as K07Tf8A
Description
The year 1995 is compacted to J95. As it has no subscript number, 00 is used as placeholder instead, and directly placed after the half-month letter "X".
The year 1995 is compacted to J95. Subscript number "1" is padded to 01 to maintain the length of 7 characters, and placed after the first letter.
The year 2016 is compacted to K16. The subscript number "156" exceeds 2 digits and is converted to F6, (see table below)
The year 2007 is compacted to K07. The subscript number "418" exceeds 2 digits and is converted to f8, (see table below)
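Putting the rules and examples above together, a sketch of the packing step for new-style provisional designations; the input is assumed to be well-formed (year, space, half-month letter, order letter, optional cycle count written out, e.g. "2007 TA418"). The century codes J (19xx) and K (20xx) follow the examples above; I for 18xx is assumed from the same convention.

```python
def pack_provisional(designation: str) -> str:
    """Pack a new-style provisional designation, e.g. '2007 TA418' -> 'K07Tf8A'."""
    chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    year, rest = designation.split()
    century = {"18": "I", "19": "J", "20": "K"}[year[:2]]
    half_month, order_letter, cycle = rest[0], rest[1], rest[2:] or "0"
    n = int(cycle)                                 # completed 25-letter cycles
    packed_cycle = chars[n // 10] + str(n % 10)    # e.g. 418 -> 'f' + '8'
    return century + year[2:] + half_month + packed_cycle + order_letter

# The examples listed above:
assert pack_provisional("1995 XA") == "J95X00A"
assert pack_provisional("1995 XL1") == "J95X01L"
assert pack_provisional("2016 EK156") == "K16EF6K"
assert pack_provisional("2007 TA418") == "K07Tf8A"
```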
Conversion tables
Comets follow the minor-planet scheme for their first four characters. The fifth and sixth characters encode the number. The seventh character is usually 0, unless it is a component of a split comet, in which case it encodes in lowercase the letter of the fragment.
Examples
1995 A1 is written as J95A010
1995 P1-B is written as J95P01b (i.e. fragment B of comet 1995 P1)
2088 A103 is written as K88AA30 (as the subscript number exceeds two digits and is converted according to the above table).
There is also an extended form that adds five characters to the front. The fifth character is one of "C", "D", "P", or "X", according to the status of the comet. If the comet is periodic, then the first four characters are the periodic-comet number (padded to the left with zeroes); otherwise, they are blank.
Natural satellites use the format for comets, except that the last column is always 0.
Packed survey designations
Survey designations used during the Palomar–Leiden Survey (PLS) have a simpler packed form, as for example:
6344 P-L is written as PLS6344
4835 T-1 is written as T1S4835
1010 T-2 is written as T2S1010
4101 T-3 is written as T3S4101
Note that the survey designations are distinguished from provisional designations by having the letter S in the third character, which contains a decimal digit in provisional designations and permanent numbers.
Permanent packed designations
A packed form for permanent designations also exists (these are numbered minor planets, with or without a name). In this case, only the designation's number is used and converted to a 5-character string. The rest of the permanent designation is ignored. Minor planet numbers below 100,000 are simply zero-padded to 5 digits from the left side. For minor planets between 100,000 and 619,999 inclusive, a single letter (A–Z and a–z) is used, similar as for the provisional subscript number (also see table above):
A covers the number range 100,000–109,999
B covers the number range 110,000–119,999
a covers the number range 360,000–369,999
z covers the number range 610,000–619,999
Examples
00001 encodes 1 Ceres
99999 encodes
A0000 encodes 100000 Astronautica, ()
A9999 encodes ()
B0000 encodes ()
G3693 encodes 163693 Atira ()
Y2843 encodes 342843 Davidbowie ()
g0356 encodes 420356 Praamzius ()
z9999 encodes ()
For minor planets numbered 620,000 or higher, a tilde "~" is used as the first character. The subsequent 4 characters encoded in Base62 (using 0–9, then A–Z, and a–z, in this specific order) are used to store the difference of the object's number minus 620,000. This extended system allows for the encoding of more than 15 million minor planet numbers. For example:
(620000) is represented as ~0000
(620061) is represented as ~000z
(3140113) will be represented as ~AZaz
(15396335) will be represented as ~zzzz
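A sketch covering all three branches of the permanent packing scheme described above (plain zero-padded numbers, single-letter prefix, and the tilde-plus-base-62 extension); the examples mirror those listed earlier.

```python
def pack_permanent(number: int) -> str:
    """5-character packed form of a permanent minor-planet number."""
    b62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    if number < 100_000:                           # plain zero-padded number
        return f"{number:05d}"
    if number < 620_000:                           # one letter for each block of 10,000
        return b62[number // 10_000] + f"{number % 10_000:04d}"
    n, digits = number - 620_000, []               # '~' + four base-62 digits
    for _ in range(4):
        n, d = divmod(n, 62)
        digits.append(b62[d])
    return "~" + "".join(reversed(digits))

assert pack_permanent(1) == "00001"                # 1 Ceres
assert pack_permanent(100_000) == "A0000"          # 100000 Astronautica
assert pack_permanent(163_693) == "G3693"          # 163693 Atira
assert pack_permanent(620_000) == "~0000"
assert pack_permanent(3_140_113) == "~AZaz"
```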
For comets, permanent designations only apply to periodic comets that are seen to return. The first four characters are the number of the comet (left-padded with zeroes). The fifth character is "P", unless the periodic comet is lost or defunct, in which case it is "D".
For natural satellites, permanent packed designations take the form of the planet letter, then three digits containing the converted Roman numeral (left-padded with zeroes), and finally an "S". For example, Jupiter XIII Leda is J013S, and Neptune II Nereid is N002S.
See also
Minor-planet designation
Naming of moons
References
External links
New- And Old-Style Minor Planet Designations (Minor Planet Center)
Astronomical nomenclature
Comets
Minor planets
Moons | Provisional designation in astronomy | [
"Astronomy"
] | 5,148 | [
"Astronomical nomenclature",
"Astronomical objects"
] |
567,580 | https://en.wikipedia.org/wiki/Gaussian%20integral | The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}.$
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.
Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for $\int e^{-x^{2}}\,dx,$
but the definite integral $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx$
can be evaluated. The definite integral of an arbitrary Gaussian function is $\int_{-\infty}^{\infty} e^{-a(x+b)^{2}}\,dx = \sqrt{\frac{\pi}{a}}.$
Computation
By polar coordinates
A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that: $\left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)^{2} = \int_{-\infty}^{\infty} e^{-x^{2}}\,dx \int_{-\infty}^{\infty} e^{-y^{2}}\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^{2}+y^{2})}\,dx\,dy.$
Consider the function $e^{-(x^{2}+y^{2})} = e^{-r^{2}}$ on the plane $\mathbb{R}^{2}$, and compute its integral two ways:
on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: $\left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)^{2};$
on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be $\pi$.
Comparing these two computations yields the integral, though one should take care about the improper integrals involved. $\int_{\mathbb{R}^{2}} e^{-(x^{2}+y^{2})}\,dA = \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^{2}}\, r\,dr\,d\theta = 2\pi \int_{0}^{\infty} r e^{-r^{2}}\,dr = 2\pi \int_{-\infty}^{0} \tfrac{1}{2} e^{s}\,ds = \pi,$
where the factor of $r$ is the Jacobian determinant which appears because of the transform to polar coordinates ($r\,dr\,d\theta$ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking $s = -r^{2}$, so $ds = -2r\,dr$.
Combining these yields $\left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)^{2} = \pi,$
so $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}.$
Complete proof
To justify the improper double integrals and equating the two expressions, we begin with an approximating function:
If the integral
were absolutely convergent we would have that its Cauchy principal value, that is, the limit
would coincide with
To see that this is the case, consider that
So we can compute
by just taking the limit
Taking the square of yields
Using Fubini's theorem, the above double integral can be seen as an area integral
taken over a square with vertices on the xy-plane.
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than , and similarly the integral taken over the square's circumcircle must be greater than . The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:
(See to polar coordinates from Cartesian coordinates for help with polar transformation.)
Integrating,
By the squeeze theorem, this gives the Gaussian integral $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}.$
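As a quick numerical cross-check of this result (independent of the proof above), adaptive quadrature over the whole real line with NumPy and SciPy reproduces the value of the square root of pi:

```python
import numpy as np
from scipy.integrate import quad

value, abserr = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi))   # both approximately 1.7724538509055159
```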
By Cartesian coordinates
A different technique, which goes back to Laplace (1812), is the following. Let
Since the limits on as depend on the sign of , it simplifies the calculation to use the fact that is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,
Thus, over the range of integration, , and the variables and have the same limits. This yields:
Then, using Fubini's theorem to switch the order of integration:
Therefore, , as expected.
By Laplace's method
In Laplace approximation, we deal only with up to second-order terms in Taylor expansion, so we consider .
In fact, since for all , we have the exact bounds:Then we can do the bound at Laplace approximation limit:
That is,
By trigonometric substitution, we exactly compute those two bounds: and
By taking the square root of the Wallis formula, we have , the desired lower bound limit. Similarly we can get the desired upper bound limit.
Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
Relation to the gamma function
The integrand is an even function, $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = 2\int_{0}^{\infty} e^{-x^{2}}\,dx.$
Thus, after the change of variable $x=\sqrt{t}$, this turns into the Euler integral $2\int_{0}^{\infty} e^{-x^{2}}\,dx = 2\int_{0}^{\infty} \tfrac{1}{2}\, e^{-t}\, t^{-1/2}\,dt = \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi},$
where $\Gamma$ is the gamma function. This shows why the factorial of a half-integer is a rational multiple of $\sqrt{\pi}$. More generally, $\int_{0}^{\infty} e^{-ax^{b}}\,dx = \frac{1}{b}\, a^{-1/b}\, \Gamma\!\left(\tfrac{1}{b}\right),$
which can be obtained by substituting $t = ax^{b}$ in the integrand of the gamma function to get $\Gamma\!\left(\tfrac{1}{b}\right) = \int_{0}^{\infty} e^{-t}\, t^{\frac{1}{b}-1}\,dt$.
Generalizations
The integral of a Gaussian function
The integral of an arbitrary Gaussian function is
An alternative form is
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
Complex form
and more generally,for any positive-definite symmetric matrix .
n-dimensional and functional generalization
Suppose A is a symmetric positive-definite (hence invertible) n×n precision matrix, which is the matrix inverse of the covariance matrix. Then, $\int_{\mathbb{R}^{n}} \exp\!\left(-\tfrac{1}{2} x^{\mathsf{T}} A x\right) d^{n}x = \sqrt{\frac{(2\pi)^{n}}{\det A}}.$
By completing the square, this generalizes to $\int_{\mathbb{R}^{n}} \exp\!\left(-\tfrac{1}{2} x^{\mathsf{T}} A x + b^{\mathsf{T}} x\right) d^{n}x = \sqrt{\frac{(2\pi)^{n}}{\det A}}\; \exp\!\left(\tfrac{1}{2} b^{\mathsf{T}} A^{-1} b\right).$
This fact is applied in the study of the multivariate normal distribution.
Also,
where σ is a permutation of and the extra factor on the right-hand side is the sum over all combinatorial pairings of of N copies of A−1.
Alternatively,
for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that is infinite and also, the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:
In the DeWitt notation, the equation looks identical to the finite-dimensional case.
n-dimensional with linear term
If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)
Integrals of similar form
where is a positive integer
An easy way to derive these is by differentiating under the integral sign.
One could also integrate by parts and find a recurrence relation to solve this.
Higher-order polynomials
Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant,
zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.
Exponentials of other even polynomials can numerically be solved using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is
The mod 2 requirement is because the integral from −∞ to 0 contributes a factor of to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
See also
List of integrals of Gaussian functions
Common integrals in quantum field theory
Normal distribution
List of integrals of exponential functions
Error function
Berezin integral
References
Citations
Sources
Integrals
Articles containing proofs
Gaussian function
Theorems in analysis | Gaussian integral | [
"Mathematics"
] | 1,578 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Articles containing proofs",
"Mathematical problems"
] |
567,667 | https://en.wikipedia.org/wiki/Lexicographic%20order | In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.
There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements.
Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied.
A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.
Definition
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.
The formal notion starts with a finite set , often called the alphabet, which is totally ordered. That is, for any two symbols and in that are not the same symbol, either or .
The words of are the finite sequences of symbols from , including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:
Given two different words of the same length, say and , the order of the two words depends on the alphabetic order of the symbols in the first place where the two words differ (counting from the beginning of the words): if and only if in the underlying order of the alphabet .
If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of ) at the end until the words are the same length, and then the words are compared as in the previous case.
However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called the shortlex order.
In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.
An important property of the lexicographical order is that for each , the set of words of length is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element.
Numeral systems and dates
The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates.
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the variant shortlex of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger.
For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit.
When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers.
Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm.
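For instance, plain string sorting of ISO 8601 dates coincides with chronological order, with no date parsing needed; the dates below are arbitrary examples.

```python
dates = ["2021-09-30", "2021-10-01", "1999-12-31", "2021-02-07"]
print(sorted(dates))   # ['1999-12-31', '2021-02-07', '2021-09-30', '2021-10-01']
```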
Monoid of words
The over an alphabet is the free monoid over . That is, the elements of the monoid are the finite sequences (words) of elements of (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word is a prefix (or 'truncation') of another word if there exists a word such that . By this definition, the empty word () is a prefix of every word, and every word is a prefix of itself (with ); care must be taken if these cases are to be excluded.
With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set , and two words and over such that is non-empty, then one has under lexicographical order, if at least one of the following conditions is satisfied:
is a prefix of
there exists words , , (possibly empty) and elements and of such that
Notice that, due to the prefix condition in this definition, where is the empty word.
If is a total order on then so is the lexicographic order on the words of However, in general this is not a well-order, even if the alphabet is well-ordered. For instance, if , the language has no least element in the lexicographical order: .
Since many applications require well orders, a variant of the lexicographical order is often used. This well-order, sometimes called the shortlex order, consists in considering first the lengths of the words (a shorter word always precedes a longer one), and, if the lengths are equal, using the lexicographical order. If the order on the alphabet is a well-order, the same is true for the shortlex order.
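A minimal sketch of the shortlex comparison: order first by length, then lexicographically; Python's tuple comparison expresses this directly.

```python
words = ["b", "ab", "aab", "ba", "a"]
print(sorted(words, key=lambda w: (len(w), w)))   # ['a', 'b', 'ab', 'ba', 'aab']
```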
Cartesian products
The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product is a sequence whose th element belongs to for every As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets.
Specifically, given two partially ordered sets and the is defined as
The result is a partial order. If and are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order.
One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered.
Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from the natural numbers to {0, 1}, also known as the Cantor space) is not well-ordered; the subset of sequences that have precisely one 1 (that is, 100000…, 010000…, 001000…, …) does not have a least element under the lexicographical order induced by 0 < 1, because 100000… > 010000… > 001000… > ⋯ is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either, because 011111… < 101111… < 110111… < 111011… < ⋯ is an infinite ascending chain.
Functions over a well-ordered set
The functions from a well-ordered set X to a totally ordered set Y may be identified with sequences indexed by X of elements of Y. They can thus be ordered by the lexicographical order, and for two such functions f and g, the lexicographical order is thus determined by their values for the smallest x such that f(x) ≠ g(x).
If Y is also well-ordered and X is finite, then the resulting order is a well-order. As shown above, if X is infinite this is not the case.
Finite subsets
In combinatorics, one often has to enumerate, and therefore to order, the finite subsets of a given set S. For this, one usually chooses an order on S. Then, sorting a subset of S is equivalent to converting it into an increasing sequence. The lexicographic order on the resulting sequences thus induces an order on the subsets, which is also called the lexicographical order.
In this context, one generally prefers to sort the subsets first by cardinality, as in the shortlex order. Therefore, in the following, we consider only orders on subsets of fixed cardinality.
For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of S = {1, 2, 3, 4, 5, 6} is
123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 < 234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456.
For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example, {1, n} < {2, 3} for every n > 1.
Group orders of Zn
Let $\mathbb{Z}^n$ be the free Abelian group of rank $n$, whose elements are the sequences of $n$ integers, and whose operation is addition. A group order on $\mathbb{Z}^n$ is a total order which is compatible with addition, that is, if $a < b$, then $a + c < b + c$.
The lexicographical ordering is a group order on $\mathbb{Z}^n$.
The lexicographical ordering may also be used to characterize all group orders on $\mathbb{Z}^n$. In fact, $n$ linear forms with real coefficients define a map from $\mathbb{Z}^n$ into $\mathbb{R}^n$, which is injective if the forms are linearly independent (it may be also injective if the forms are dependent, see below). The lexicographic order on the image of this map induces a group order on $\mathbb{Z}^n$. Robbiano's theorem states that every group order may be obtained in this way.
More precisely, given a group order on $\mathbb{Z}^n$, there exist an integer $s \le n$ and $s$ linear forms with real coefficients such that the induced map $\varphi$ from $\mathbb{Z}^n$ into $\mathbb{R}^s$ has the following properties:
$\varphi$ is injective;
the resulting isomorphism from $\mathbb{Z}^n$ to the image of $\varphi$ is an order isomorphism when the image is equipped with the lexicographical order on $\mathbb{R}^s$.
Colexicographic order
The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. More precisely, whereas the lexicographical order between two sequences is defined by
a < b if aᵢ < bᵢ for the first i where aᵢ and bᵢ differ,
the colexicographical order is defined by
a < b if aᵢ < bᵢ for the last i where aᵢ and bᵢ differ.
In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly.
For example, for ordering the increasing sequences (or the sets) of two natural integers, the lexicographical order begins by
{1, 2} < {1, 3} < {1, 4} < {1, 5} < ⋯ < {2, 3} < {2, 4} < ⋯ ,
and the colexicographic order begins by
{1, 2} < {1, 3} < {2, 3} < {1, 4} < {2, 4} < {3, 4} < {1, 5} < ⋯ .
The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem.
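A minimal sketch of this enumeration (the bound and the helper name are illustrative): listing two-element subsets in colexicographic order groups them into finite blocks by their largest element, so the listing can be continued indefinitely.

```python
def pairs_in_colex(limit):
    """Yield 2-element subsets {i, j} of {1, ..., limit}, i < j, in colexicographic order."""
    for j in range(2, limit + 1):     # fix the largest element, then vary the smaller one
        for i in range(1, j):
            yield (i, j)

print(list(pairs_in_colex(4)))
# [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]

# For comparison, the lexicographic order of the same pairs begins
# (1, 2), (1, 3), (1, 4), (2, 3), ... and would never exhaust the pairs starting
# with 1 if the ground set were all of the natural numbers.
print(sorted(pairs_in_colex(4)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```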
Monomials
When considering polynomials, the order of the terms does not matter in general, as the addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related to Gröbner bases, a concept that requires the choice of a monomial order, that is, a total order which is compatible with the monoid structure of the monomials. Here "compatible" means that a < b implies ac < bc, if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However, this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone.
As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example $x_1 x_2^2 x_3^3$) with their exponent vectors (here $(1, 2, 3)$). If $n$ is the number of variables, every monomial order is thus the restriction to $\mathbb{N}^n$ of a group order on $\mathbb{Z}^n$ (see above for a classification).
One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called the pure lexicographical order, or lex, to distinguish it from other orders that are also related to a lexicographical order.
Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order generally has better properties.
The degree reverse lexicographical order also consists in comparing first the total degrees, and, in case of equality of the total degrees, using the reverse of the colexicographical order. That is, given two exponent vectors, one has
$[a_1, \ldots, a_n] < [b_1, \ldots, b_n]$ if either
$a_1 + \cdots + a_n < b_1 + \cdots + b_n$, or $a_1 + \cdots + a_n = b_1 + \cdots + b_n$ and $a_i > b_i$ for the largest $i$ such that $a_i \neq b_i$.
For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order:
$(2,0,0) > (1,1,0) > (0,2,0) > (1,0,1) > (0,1,1) > (0,0,2).$
For the lexicographical order, the same exponent vectors are ordered as
$(2,0,0) > (1,1,0) > (1,0,1) > (0,2,0) > (0,1,1) > (0,0,2).$
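A small sketch (the helper functions and the variable ordering x1 > x2 > x3 are assumptions for this example) that reproduces the two orderings above by sorting exponent vectors with appropriate keys:

```python
def lex_key(e):
    # Larger monomials should come first, so sort by the negated entries.
    return tuple(-v for v in e)

def degrevlex_key(e):
    # Higher total degree first; ties broken so that the vector whose *last* differing
    # entry is smaller is the larger monomial (reverse of the colexicographic order).
    return (-sum(e), tuple(reversed(e)))

deg2_in_3vars = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

print(sorted(deg2_in_3vars, key=lex_key))
# [(2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1), (0,0,2)]
print(sorted(deg2_in_3vars, key=degrevlex_key))
# [(2,0,0), (1,1,0), (0,2,0), (1,0,1), (0,1,1), (0,0,2)]
```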
A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greatest monomial) is a multiple of this least indeterminate.
See also
Collation
Kleene–Brouwer order
Lexicographic preferences - an application of lexicographic order in economics.
Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element.
Lexicographic order topology on the unit square
Lexicographic ordering in tensor abstract index notation
Lexicographically minimal string rotation
Leximin order
Long line (topology)
Lyndon word
Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal
Star product, a different way of combining partial orders
Shortlex order
Orders on the Cartesian product of totally ordered sets
References
External links
Order theory
Lexicography | Lexicographic order | [
"Mathematics"
] | 3,342 | [
"Order theory"
] |
567,723 | https://en.wikipedia.org/wiki/Zero%20crossing | A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing.
In electronics
In alternating current, the zero-crossing is the instantaneous point at which there is no voltage present. In a sine wave or other simple waveform, this normally occurs twice during each cycle. A zero-crossing detector is a device for detecting the point where the voltage crosses zero in either direction.
The zero-crossing is important for systems that send digital data over AC circuits, such as modems, X10 home automation control systems, and Digital Command Control type systems for Lionel and other AC model trains.
Counting zero-crossings is also a method used in speech processing to estimate the fundamental frequency of speech.
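A minimal sketch of the underlying count (not a full pitch estimator; the sampling rate and test tone are illustrative), measuring the zero-crossing rate as the fraction of adjacent samples whose signs differ:

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.sign(signal)
    signs[signs == 0] = 1            # treat exact zeros as positive to avoid double counting
    return np.count_nonzero(np.diff(signs)) / (len(signal) - 1)

# A 100 Hz sine sampled at 8 kHz crosses zero about 200 times per second,
# i.e. roughly 200 / 8000 = 0.025 crossings per sample.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)
print(zero_crossing_rate(tone))      # ~0.025
```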
In a system where an amplifier with digitally controlled gain is applied to an input signal, artifacts in the non-zero output signal occur when the gain of the amplifier is abruptly switched between its discrete gain settings. At audio frequencies, such as in modern consumer electronics like digital audio players, these effects are clearly audible, resulting in a 'zipping' sound when rapidly ramping the gain or a soft 'click' when a single gain change is made. Artifacts are disconcerting and clearly not desirable. If changes are made only at zero-crossings of the input signal, then no matter how the amplifier gain setting changes, the output also remains at zero, thereby minimizing the change. (The instantaneous change in gain will still produce distortion, but it will not produce a click.)
If electrical power is to be switched, no electrical interference is generated if switched at an instant when there is no current—a zero crossing. Early light dimmers and similar devices generated interference; later versions were designed to switch at the zero crossing.
In signal processing
In the field of digital image processing, great emphasis is placed on operators that seek out edges within an image. They are called edge detection or gradient filters. A gradient filter is a filter that seeks out areas of rapid change in pixel value. These points usually mark an edge or a boundary. A Laplace filter is a filter that fits in this family, though it sets about the task in a different way. It seeks out points in the signal stream where the digital signal of an image passes through a pre-set '0' value, and marks this out as a potential edge point. Because the signal has crossed through the point of zero, it is called a zero-crossing.
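A rough sketch of this idea (the filter scale and the synthetic test image are illustrative, and this is not a production edge detector): apply a Laplacian-of-Gaussian filter and mark the pixels where its response changes sign.

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Mark pixels where the Laplacian-of-Gaussian response changes sign."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    positive = response > 0
    edges = np.zeros(image.shape, dtype=bool)
    edges[:, :-1] |= positive[:, :-1] != positive[:, 1:]   # horizontal sign changes
    edges[:-1, :] |= positive[:-1, :] != positive[1:, :]   # vertical sign changes
    return edges

# Synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 255.0
print(log_zero_crossings(img).sum())   # number of potential edge pixels found
```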
In the field of industrial radiography, it is used as a simple method for the segmentation of potential defects.
In the field of NLP, the rate of zero crossings observed in a spectrogram can be used to distinguish between certain phonemes such as fricatives, voiceless stops, and vowels.
See also
Reconstruction from zero crossings
Zero crossing control
Zero-crossing rate
Zero of a function (a root)
Sign function
References
Signal processing | Zero crossing | [
"Technology",
"Engineering"
] | 635 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
567,743 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Liouville%20integral | In mathematics, the Riemann–Liouville integral associates with a real function another function of the same kind for each value of the parameter . The integral is a manner of generalization of the repeated antiderivative of in the sense that for positive integer values of , is an iterated antiderivative of of order . The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
Motivation
The Riemann-Liouville integral is motivated from the Cauchy formula for repeated integration. For a function f continuous on the interval [a, x], the Cauchy formula for n-fold repeated integration states that
$$ f^{(-n)}(x) = \frac{1}{(n-1)!} \int_a^x (x-t)^{\,n-1} f(t)\, dt . $$
Now, this formula can be generalized to any positive real number by replacing the positive integer n with α and the factorial with the gamma function. Therefore we obtain the definition of the Riemann-Liouville fractional integral:
$$ I^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x-t)^{\,\alpha-1} f(t)\, dt . $$
Definition
The Riemann–Liouville integral is defined by
$$ I^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x f(t)\,(x-t)^{\alpha-1}\, dt , $$
where $\Gamma$ is the gamma function and $a$ is an arbitrary but fixed base point. The integral is well-defined provided $f$ is a locally integrable function, and $\alpha$ is a complex number in the half-plane $\operatorname{Re}(\alpha) > 0$. The dependence on the base-point $a$ is often suppressed, and represents a freedom in the constant of integration. Clearly $I^1 f$ is an antiderivative of $f$ (of first order), and for positive integer values of $\alpha$, $I^\alpha f$ is an antiderivative of order $\alpha$ by the Cauchy formula for repeated integration. Another notation, which emphasizes the base point, is
$$ {}_a D_x^{-\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x f(t)\,(x-t)^{\alpha-1}\, dt . $$
This also makes sense if $a = -\infty$, with suitable restrictions on $f$.
The fundamental relations hold
$$ \frac{d}{dx} I^{\alpha+1} f(x) = I^\alpha f(x), \qquad I^\alpha\!\left( I^\beta f \right) = I^{\alpha+\beta} f , $$
the latter of which is a semigroup property. These properties make possible not only the definition of fractional integration, but also of fractional differentiation, by taking enough derivatives of $I^\alpha f$.
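A crude numerical sketch of the definition and of the semigroup property (the quadrature scheme, step count, and test function are illustrative assumptions; the kernel is singular at t = x, so the results are only approximate):

```python
import numpy as np
from scipy.special import gamma

def rl_integral(f, alpha, x, a=0.0, n=4000):
    """Riemann-Liouville fractional integral (I^alpha f)(x), crude midpoint rule."""
    t = a + (np.arange(n) + 0.5) * (x - a) / n   # midpoints avoid the singularity at t = x
    dt = (x - a) / n
    return np.sum((x - t) ** (alpha - 1) * f(t)) * dt / gamma(alpha)

f = lambda t: t ** 2
x = 1.5

# Ordinary antiderivative: (I^1 f)(x) = x**3 / 3 = 1.125
print(rl_integral(f, 1.0, x))

# Semigroup property: applying I^(1/2) twice agrees with I^1, up to quadrature error.
inner = np.vectorize(lambda s: rl_integral(f, 0.5, s))
print(rl_integral(inner, 0.5, x))   # also approximately 1.125
```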
Properties
Fix a bounded interval (a, b). The operator $I^\alpha$ associates to each integrable function $f$ on (a, b) the function $I^\alpha f$ on (a, b), which is also integrable by Fubini's theorem. Thus $I^\alpha$ defines a linear operator on $L^1(a, b)$:
Fubini's theorem also shows that this operator is continuous with respect to the Banach space structure on $L^1$, and that the following inequality holds:
Here denotes the norm on .
More generally, by Hölder's inequality, it follows that if , then as well, and the analogous inequality holds
where is the norm on the interval . Thus we have a bounded linear operator . Furthermore, in the sense as along the real axis. That is
for all . Moreover, by estimating the maximal function of , one can show that the limit holds pointwise almost everywhere.
The operator is well-defined on the set of locally integrable function on the whole real line . It defines a bounded transformation on any of the Banach spaces of functions of exponential type consisting of locally integrable functions for which the norm
is finite. For such a function $f$ of exponential type $\sigma$, the Laplace transform of $I^\alpha f$ takes the particularly simple form
$$ \left( \mathcal{L}\, I^\alpha f \right)(s) = s^{-\alpha} F(s) $$
for $\operatorname{Re}(s) > \sigma$. Here $F(s)$ denotes the Laplace transform of $f$, and this property expresses that $I^\alpha$ is a Fourier multiplier.
Fractional derivatives
One can define fractional-order derivatives of $f$ as well by
$$ D^\alpha f(x) = \frac{d^{\lceil \alpha \rceil}}{dx^{\lceil \alpha \rceil}}\, I^{\lceil \alpha \rceil - \alpha} f(x) , $$
where $\lceil \cdot \rceil$ denotes the ceiling function.
where denotes the ceiling function. One also obtains a differintegral interpolating between differentiation and integration by defining
An alternative fractional derivative was introduced by Caputo in 1967, and produces a derivative that has different properties: it produces zero from constant functions and, more importantly, the initial value terms of the Laplace Transform are expressed by means of the values of that function and of its derivative of integer order rather than the derivatives of fractional order as in the Riemann–Liouville derivative. The Caputo fractional derivative with base point , is then:
Another representation is:
Fractional derivative of a basic power function
Let us assume that $f(x)$ is a monomial of the form
$$ f(x) = x^k . $$
The first derivative is as usual
$$ f'(x) = \frac{d}{dx} f(x) = k\, x^{k-1} . $$
Repeating this gives the more general result that
$$ \frac{d^a}{dx^a} x^k = \frac{k!}{(k-a)!}\, x^{k-a} , $$
which, after replacing the factorials with the gamma function, leads to
$$ \frac{d^a}{dx^a} x^k = \frac{\Gamma(k+1)}{\Gamma(k-a+1)}\, x^{k-a}, \qquad k > 0 . $$
For $k = 1$ and $a = \tfrac{1}{2}$, we obtain the half-derivative of the function $x$ as
$$ \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}}\, x = \frac{2\, x^{\frac{1}{2}}}{\sqrt{\pi}} . $$
To demonstrate that this is, in fact, the "half derivative" (where $H$ denotes the half-derivative operator, so that $H^2 f(x) = D f(x)$), we repeat the process to get:
$$ \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}} \frac{2\, x^{\frac{1}{2}}}{\sqrt{\pi}} = \frac{2}{\sqrt{\pi}}\, \frac{\Gamma\!\left(\tfrac{3}{2}\right)}{\Gamma(1)}\, x^{0} = 1 $$
(because $\Gamma\!\left(\tfrac{3}{2}\right) = \tfrac{\sqrt{\pi}}{2}$ and $\Gamma(1) = 1$), which is indeed the expected result of
$$ \left( \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}} \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}} \right) x = \frac{d}{dx}\, x = 1 . $$
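A short numerical check of this computation (the helper name is illustrative), using the gamma-function formula for the fractional derivative of a monomial:

```python
from math import gamma, sqrt, pi

def frac_deriv_monomial(k, a):
    """Coefficient and exponent of d^a/dx^a x^k = Gamma(k+1)/Gamma(k-a+1) x^(k-a)."""
    return gamma(k + 1) / gamma(k - a + 1), k - a

# Half-derivative of x: coefficient 2/sqrt(pi), exponent 1/2.
c1, e1 = frac_deriv_monomial(1, 0.5)
print(c1, e1, 2 / sqrt(pi))          # 1.1283..., 0.5, 1.1283...

# Applying the half-derivative again recovers the ordinary derivative of x, namely 1.
c2, e2 = frac_deriv_monomial(e1, 0.5)
print(c1 * c2, e2)                   # 1.0, 0.0
```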
For negative integer powers $k$, $1/\Gamma$ is 0, so it is convenient to use the following relation:
This extension of the above differential operator need not be constrained only to real powers; it also applies for complex powers. For example, the $(1+i)$-th derivative of the $(1-i)$-th derivative yields the second derivative. Also setting negative values for $a$ yields integrals.
For a general function $f(x)$ and $0 < \alpha < 1$, the complete fractional derivative is
$$ D^\alpha f(x) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dx} \int_0^x \frac{f(t)}{(x-t)^\alpha}\, dt . $$
For arbitrary , since the gamma function is infinite for negative (real) integers, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,
Laplace transform
We can also come at the question via the Laplace transform. Knowing that
and
and so on, we assert
.
For example,
as expected. Indeed, given the convolution rule
and shorthanding for clarity, we find that
which is what Cauchy gave us above.
Laplace transforms "work" on relatively few functions, but they are often useful for solving fractional differential equations.
See also
Caputo fractional derivative
Notes
References
External links
Bernhard Riemann
Fractional calculus
Integral transforms | Riemann–Liouville integral | [
"Mathematics"
] | 1,093 | [
"Fractional calculus",
"Calculus"
] |
567,826 | https://en.wikipedia.org/wiki/Vedic%20Mathematics | Vedic Mathematics is a book written by Indian Shankaracharya Bharati Krishna Tirtha and first published in 1965. It contains a list of mathematical techniques which were falsely claimed to contain advanced mathematical knowledge. The book was posthumously published under its deceptive title by editor V. S. Agrawala, who noted in the foreword that the claim of Vedic origin, made by the original author and implied by the title, was unsupported.
Neither Krishna Tirtha nor Agrawala were able to produce sources, and scholars unanimously note it to be a compendium of methods for increasing the speed of elementary mathematical calculations sharing no overlap with historical mathematical developments during the Vedic period. Nonetheless, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education at the state level by right-wing Hindu nationalist governments.
S. G. Dani of the Indian Institute of Technology Bombay wrote that despite the dubious historiography, some of the calculation methods it describes are themselves interesting, a product of the author's academic training in mathematics and long recorded habit of experimentation with numbers.
Contents
The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. The range of their asserted applications spans topics as diverse as statics and pneumatics, astronomy, and financial domains. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics.
STS scholar S. G. Dani in 'Vedic Mathematics': Myth and Reality states that the book is primarily a compendium of "tricks" that can be applied in elementary, middle and high school arithmetic and algebra, to gain faster results. The sutras and sub-sutras are abstract literary expressions (for example, "as much less" or "one less than previous one") prone to creative interpretations; Krishna Tirtha exploited this to the extent of manipulating the same shloka to generate widely different mathematical equivalencies across a multitude of contexts.
Relationship with the Vedas
According to Krishna Tirtha, the sutras and other accessory content were found after years of solitary study of the Vedas—a set of sacred ancient Hindu scriptures—in a forest. They were supposedly contained in the pariśiṣṭa—a supplementary text/appendix—of the Atharvaveda. He does not provide any more bibliographic clarification on the sourcing. The book's editor, V. S. Agrawala, argues that since the Vedas are defined as the traditional repositories of all knowledge, any knowledge can be assumed to be somewhere in the Vedas, by definition; he even went to the extent of deeming Krishna Tirtha's work as a pariśiṣṭa in itself.
However, numerous mathematicians and STS scholars (Dani, Kim Plofker, K.S. Shukla, Jan Hogendijk et al.) note that the Vedas do not contain any of those sutras and sub-sutras. When Shukla, a mathematician and historiographer of ancient Indian mathematics, challenged Krishna Tirtha to locate the sutras in the Parishishta of a standard edition of the Atharvaveda, Krishna Tirtha stated that they were not included in the standard editions but only in a hitherto-undiscovered version, chanced upon by him; the foreword and introduction of the book also takes a similar stand. Sanskrit scholars have observed that the book's linguistic style is not that of the Vedic period but rather reflects modern Sanskrit.
Dani points out that the contents of the book have "practically nothing in common" with the mathematics of the Vedic period or even with subsequent developments in Indian mathematics. Shukla reiterates the observations, on a per-chapter basis. For example, multiple techniques in the book involve the use of decimals. These were unknown during the Vedic times and were introduced in India only in the sixteenth century; works of numerous ancient mathematicians such as Aryabhata, Brahmagupta and Bhaskara were based entirely on fractions. From a historiographic perspective, Vedic India had no knowledge of differentiation or integration. The book also claims that analytic geometry of conics occupied an important tier in Vedic mathematics, which runs contrary to all available evidence.
Publication history and reprints
First published in 1965, five years after Krishna Tirtha's death, the work consisted of forty chapters, originally on 367 pages, and covered techniques he had promulgated through his lectures. A foreword by Tirtha's disciple Manjula Trivedi stated that he had originally written 16 volumes—one on each sutra—but the manuscripts were lost before publication, and that this work was penned in 1957.
Reprints were published in 1975 and 1978 to accommodate typographical corrections. Several reprints have been published since the 1990s.
Reception
S. G. Dani of the Indian Institute of Technology Bombay (IIT Bombay) notes the book to be of dubious quality. He believes it did a disservice both to the pedagogy of mathematical education by presenting the subject as a collection of methods without any conceptual rigor, and to science and technology studies in India (STS) by adhering to dubious standards of historiography. He also points out that while Tirtha's system could be used as a teaching aid, there was a need to prevent the use of "public money and energy on its propagation" except in a limited way and that authentic Vedic studies were being neglected in India even as Tirtha's system received support from several government and private agencies. Jayant Narlikar has voiced similar concerns.
Hartosh Singh Bal notes that whilst Krishna Tirtha's attempts might be somewhat acceptable in light of his nationalistic inclinations during colonial rule — he had left his spiritual endeavors to be appointed as the principal of a college to counter Macaulayism —, it provided a fertile ground for further ethno-nationalistic abuse of historiography by Hindu Nationalist parties; Thomas Trautmann views the development of Vedic Mathematics in a similar manner. Meera Nanda has noted hagiographic descriptions of Indian knowledge systems by various right-wing cultural movements (including the BJP), which deemed Krishna Tirtha to be in the same league as Srinivasa Ramanujan.
Some have however praised the methods and commented on its potential to attract school-children to mathematics and increase popular engagement with the subject. Others have viewed the works as an attempt at harmonizing religion with science.
Originality of methods
Dani speculated that Krishna Tirtha's methods were a product of his academic training in mathematics and long recorded habit of experimentation with numbers. Similar systems include the Trachtenberg system or the techniques mentioned in Lester Meyers's 1947 book High-speed Mathematics. Alex Bellos points out that several of the calculation methods can also be found in certain European treatises on calculation from the early Modern period.
Computation algorithms
Some of the algorithms have been tested for efficiency, with positive results. However, most of the algorithms have higher time complexity than conventional ones, which explains the lack of adoption of Vedic mathematics in real life.
Integration into mainstream education
The book had been included in the school syllabus of Madhya Pradesh and Uttar Pradesh, soon after the Bharatiya Janata Party (BJP), a development oriented nationalist political party came to power and chose to improve the education-system.
Dinanath Batra had conducted a lengthy campaign for the inclusion of Vedic Maths into the National Council of Educational Research and Training (NCERT) curricula. Subsequently, there was a proposal from NCERT to induct Vedic Maths, along with a number of fringe pseudo-scientific subjects (Vedic Astrology et al.), into the standard academic curricula. This was only shelved after a number of academics and mathematicians, led by Dani and sometimes backed by political parties, opposed these attempts based on previously discussed rationales and criticized the move as a politically guided attempt at saffronisation. Concurrent official reports also advocated for its inclusion in the Madrassah education system to modernize it.
After the BJP's return to power in 2014, three universities began offering courses on the subject while a television channel, catering to the topic, was also launched; generous education and research grants have also been allotted to the subject. The topic was introduced into the elementary curriculum of Himachal Pradesh in 2022. The same year, the government of Karnataka allocated funds for teaching the subject. This move by the BJP provoked criticism from academics and from Dalit groups.
Editions
Notes
References
Works cited
Books about Hinduism
Books about the history of mathematics
Indian non-fiction books
Mental calculation
1965 non-fiction books
Pseudohistory
20th-century Indian books
Pseudomathematics | Vedic Mathematics | [
"Mathematics"
] | 1,837 | [
"Mental calculation",
"Arithmetic"
] |
567,840 | https://en.wikipedia.org/wiki/Solution%20set | In mathematics, the solution set of a system of equations or inequality is the set of all its solutions, that is the values that satisfy all equations and inequalities. Also, the solution set or the truth set of a statement or a predicate is the set of all values that satisfy it.
If there is no solution, the solution set is the empty set.
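As an illustration (the equations below are examples chosen for this sketch, not taken from the text, and the SymPy calls assume a recent version of the library), a computer algebra system can return solution sets directly, including the empty set for an inconsistent system:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

# Solution set of a single equation over the reals.
print(sp.solveset(sp.Eq(x**2, 4), x, domain=sp.S.Reals))    # {-2, 2}

# An inconsistent linear system has the empty set as its solution set.
print(sp.linsolve([x + y - 1, x + y - 2], (x, y)))           # EmptySet

# The solution set of an inequality is a region (here an interval) rather than isolated points.
print(sp.solveset(x**2 <= 4, x, domain=sp.S.Reals))          # Interval(-2, 2)
```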
Examples
The solution set of the single equation x = 0 is the set {0}.
Since there do not exist numbers x and y making the two equations simultaneously true, the solution set of this system is the empty set.
The solution set of a constrained optimization problem is its feasible region.
The truth set of the predicate is .
Remarks
In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, they are called semialgebraic sets.
Other meanings
More generally, the solution set to an arbitrary collection E of relations (Ei) (i varying in some index set I) for a collection of unknowns , supposed to take values in respective spaces , is the set S of all solutions to the relations E, where a solution is a family of values such that substituting by in the collection E makes all relations "true".
(Instead of relations depending on unknowns, one should speak more correctly of predicates, the collection E is their logical conjunction, and the solution set is the inverse image of the boolean value true by the associated boolean-valued function.)
The above meaning is a special case of this one, if the set of polynomials fi is interpreted as the set of equations fi(x) = 0.
Examples
The solution set for E = { x+y = 0 } with respect to is S = { (a,−a) : a ∈ R }.
The solution set for E = { x+y = 0 } with respect to is S = { −y }. (Here, y is not "declared" as an unknown, and thus to be seen as a parameter on which the equation, and therefore the solution set, depends.)
The solution set for with respect to is the interval S = [0,2] (since is undefined for negative values of x).
The solution set for with respect to is S = 2πZ (see Euler's identity).
See also
Equation solving
Extraneous and missing solutions
References
Equations | Solution set | [
"Mathematics"
] | 491 | [
"Mathematical objects",
"Equations"
] |
567,883 | https://en.wikipedia.org/wiki/Weakly%20harmonic%20function | In mathematics, a function $f$ is weakly harmonic in a domain $D$ if
$$ \int_D f\, \Delta g \; dx = 0 $$
for all $g$ with compact support in $D$ and continuous second derivatives, where Δ is the Laplacian. This is the same notion as a weak derivative; however, a function can have a weak derivative and not be differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic. Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition.
See also
Weak solution
Weyl's lemma
References
Harmonic functions | Weakly harmonic function | [
"Mathematics"
] | 108 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
567,927 | https://en.wikipedia.org/wiki/Ice%20hockey%20rink | An ice hockey rink is an ice rink that is specifically designed for ice hockey, a competitive team sport. Alternatively it is used for other sports such as broomball, ringette, rinkball, and rink bandy. It is a rectangle with rounded corners and surrounded by walls approximately high called the boards.
Name origins
Rink, a Scots word meaning 'course', was used as the name of a place where another game, curling, was played. Early in its history, ice hockey was played mostly on rinks constructed for curling. The name was retained after hockey-specific facilities were built.
Dimensions
There are two standard sizes for hockey rinks: one used primarily in North America, also known as NHL size, the other used in Europe and international competitions, also known as IIHF or Olympic size.
International
Hockey rinks in the rest of the world follow the International Ice Hockey Federation (IIHF) specifications, which are 60 by 30 metres (196.9 ft × 98.4 ft) with a corner radius of 8.5 metres (27.9 ft).
The two goal lines are from the end boards, and the blue lines are from the end boards.
North American
Most North American rinks follow the National Hockey League (NHL) specifications of 200 by 85 feet (60.96 m × 25.9 m) with a corner radius of 28 feet (8.5 m). Each goal line is 11 feet (3.4 m) from the end boards. NHL blue lines are 75 feet (22.9 m) from the end boards and 50 feet (15.2 m) apart. The difference in width from the international standard represents a significant difference in width-to-length ratio on the ice.
Origins
The rink specifications originate from the ice surface of the Victoria Skating Rink in Montreal, constructed in 1862, where the first indoor game was played in 1875. Its ice surface measured 204 by 80 feet (62 m × 24 m). The curved corners are said to originate from the design of the Montreal Arena, constructed in 1898.
Markings
Lines
The centre line divides the ice in half crosswise. It is used to judge icing. It is a thick line, and in the NHL must "contain regular interval markings of a uniform distinctive design, which will readily distinguish it from the two blue lines" (i.e. it must not be a solid single colour as the blue lines are). It may also be used to judge two-line pass violations in leagues that use such a rule.
There are two thick blue lines that divide the rink into three parts, called zones. The blue lines are used to judge if a player is offside. If an attacking player crosses the line into the other team's zone before the puck does, they are said to be offside.
Near each end of the rink, there is a thin red goal line spanning the width of the ice. It is used to judge goals and icing calls.
Faceoff spots and circles
There are 9 faceoff spots on a hockey rink. All faceoffs take place at these spots. There are two spots in each team's defensive zone, two at each end of the neutral zone, and one in the centre of the rink.
There are faceoff circles around the centre ice and end zone faceoff spots. There are hash marks painted on the ice near the end zone faceoff spots. The circles and hash marks show where players may legally position themselves during a faceoff or during in-game play.
Spot and circle dimensions
Both the centre faceoff spot and centre faceoff circle are blue. The circle is 30 feet (9m) in diameter, with an outline thick, and the faceoff spot is a solid blue circle in diameter.
All of the other faceoff spots and circles are colored red. Each spot consists of a circle in diameter (as measured from the outermost edges) with an outline thick. Within the spot, two red vertical lines are drawn from the left and right inner edges, and the area between these lines is painted red while the rest of the circle is painted white.
Goal posts and nets
At each end of the ice, there is a goal consisting of a metal goal frame and cloth net in which each team must place the puck to score. According to NHL and IIHF rules, the entire puck must cross the entire goal line in order to be counted as a goal. Under NHL rules, the opening of the goal is wide by tall, and the footprint of the goal is deep.
Crease
The crease is a special area of the ice in front of each goal that is designed to allow the goaltender to perform without interference.
In North American professional hockey, the goal crease consists of straight lines extending perpendicularly from the goal line outside each goal post, connected by an arc with a radius; red hashmarks are added just inside the straight lines, from the goal line and extending into the crease from either side. The entire area of the crease is typically coloured blue for easier visibility.
Goaltender trapezoid ("Martin Brodeur" Rule)
During the 2004–05 American Hockey League (AHL) season, an experimental rule was implemented for the first seven weeks of the season, instituting a goaltender trap zone, more commonly called the trapezoid in reference to its shape. Under the rule, it is prohibited for the goaltender to handle the puck anywhere behind the goal line that is not within the trapezoidal area. If they do so they are assessed a minor penalty for delay of game.
The motivation for the introduction of the trapezoid was to promote game flow and prolonged offensive attacks by making it more difficult for the goaltender to possess and clear the puck. The rule was aimed at reducing the effectiveness of goaltenders with good puck-handling abilities, such as New Jersey Devils goalie Martin Brodeur, for whom the rule is nicknamed.
The area consists of a centred, symmetrical trapezoid. The bases of the trapezoid are formed by the goal line and the end boards. The base on the goal line measures — widened from the original for the 2014-15 NHL season onwards — and the base along the end boards measures , with the depth behind the goal line-to-boards distance specified at .
The seven-week experiment proved so successful that the AHL moved to enforce the rule for the rest of the season, and then the rule was approved by the NHL when play resumed for the 2005–06 season. The ECHL, the only other developmental league in the Professional Hockey Players Association along with the AHL, also approved the rule for 2005–06.
The trapezoid was later adopted by the KHL for the 2019–20 season, and by the IIHF in 2021.
Referee's crease
The referee's crease is a semicircle in radius in front of the scorekeepers bench. Under USA Hockey rule 601(d)(5), any player entering or remaining in the referee's crease while the referee is reporting to or consulting with any game official may be assessed a misconduct penalty. The USA Hockey casebook specifically states that the imposition of such a penalty would be unusual, and the player would typically first be asked to leave the referee's crease before the imposition of the penalty. The NHL has a similar rule, also calling for a misconduct penalty. Traditionally, captains and alternate captains are the only players allowed to approach the referee's crease.
Zones
The blue lines divide the rink into three zones. The central zone is called the neutral zone or simply centre ice. The generic term for the outer zones is end zones, but they are more commonly referred to by terms relative to each team. The end zone in which a team is trying to score is called the attacking zone or offensive zone; the end zone in which the team's own goal net is located is called the defending zone or defensive zone.
The blue line is considered part of whichever zone the puck is in. Therefore, if the puck is in the neutral zone, the blue line is part of the neutral zone. It must completely cross the blue line to be considered in the end zone. Once the puck is in the end zone, the blue line becomes part of that end zone. The puck must now completely cross the blue line in the other direction to be considered in the neutral zone again.
Boards
In a hockey rink, the boards are the low wall that form the boundaries of the rink. They are between high. The "side boards" are the boards along the two long sides of the rink. The half boards are the boards halfway between the goal line and blue line. The sections of the rink located behind each goal are called the "end boards". The boards that are curved (near the ends of the rink) are called the "corner boards".
See also
National Hockey League rules
Ice rink
Figure skating rink
Speed skating rink
References
External links
Backyard Ice Hockey Rinks
NHL Official Rules: Rule 1 – Rink
Hockey Rinks Database of 5,500 Rinks in the U.S. and Canada
Hockey Arenas in Europe
Ice hockey rules and regulations
Ice rinks
Sports venues by type | Ice hockey rink | [
"Engineering"
] | 1,769 | [
"Structural engineering",
"Ice rinks"
] |
567,946 | https://en.wikipedia.org/wiki/Kelvin%E2%80%93Helmholtz%20instability | The Kelvin–Helmholtz instability (after Lord Kelvin and Hermann von Helmholtz) is a fluid instability that occurs when there is velocity shear in a single continuous fluid or a velocity difference across the interface between two fluids. Kelvin-Helmholtz instabilities are visible in the atmospheres of planets and moons, such as in cloud formations on Earth or the Red Spot on Jupiter, and the atmospheres of the Sun and other stars.
Theory overview and mathematical concepts
Fluid dynamics predicts the onset of instability and transition to turbulent flow within fluids of different densities moving at different speeds. If surface tension is ignored, two fluids in parallel motion with different velocities and densities yield an interface that is unstable to short-wavelength perturbations for all speeds. However, surface tension is able to stabilize the short wavelength instability up to a threshold velocity.
If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin-Helmholtz instability is described by the Taylor–Goldstein equation:
$$ (U - c)^2 \left( \frac{d^2 \tilde{\phi}}{dz^2} - k^2 \tilde{\phi} \right) + \left[ N^2 - (U - c)\, \frac{d^2 U}{dz^2} \right] \tilde{\phi} = 0 , $$
where $N$ denotes the Brunt–Väisälä frequency, $U$ is the horizontal parallel velocity, $k$ is the wave number, $c$ is the eigenvalue parameter of the problem, and $\tilde{\phi}$ is the complex amplitude of the stream function. Its onset is given by the Richardson number $\mathrm{Ri}$. Typically the layer is unstable for $\mathrm{Ri} < 0.25$. These effects are common in cloud layers. The study of this instability is applicable in plasma physics, for example in inertial confinement fusion and the plasma–beryllium interface. In situations where there is a state of static stability (where there is a continuous density gradient), the Rayleigh-Taylor instability is often insignificant compared to the magnitude of the Kelvin–Helmholtz instability.
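A small sketch of this criterion (the density and velocity profiles, grid, and constants below are illustrative assumptions, not observed data): compute the gradient Richardson number from the profiles and compare it with the 1/4 threshold.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def richardson_number(rho, u, z):
    """Gradient Richardson number Ri = N^2 / (dU/dz)^2 for profiles rho(z), u(z)."""
    drho_dz = np.gradient(rho, z)
    du_dz = np.gradient(u, z)
    n_squared = -G / rho * drho_dz          # Brunt-Vaisala frequency squared
    return n_squared / du_dz**2

# Illustrative profiles: weak stable stratification with a strong shear layer at z = 50 m.
z = np.linspace(0.0, 100.0, 201)
rho = 1025.0 - 0.001 * z                    # density decreasing slightly with height
u = 0.5 * np.tanh((z - 50.0) / 5.0)         # shear layer of half-width 5 m

ri = richardson_number(rho, u, z)
print(ri.min())                             # well below 0.25 near the shear layer: KH-unstable
```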
Numerically, the Kelvin–Helmholtz instability is simulated in a temporal or a spatial approach. In the temporal approach, the flow is considered in a periodic (cyclic) box "moving" at mean speed (absolute instability). In the spatial approach, simulations mimic a lab experiment with natural inlet and outlet conditions (convective instability).
Discovery and history
The existence of the Kelvin-Helmholtz instability was first discovered by German physiologist and physicist Hermann von Helmholtz in 1868. Helmholtz identified that "every perfect geometrically sharp edge by which a fluid flows must tear it asunder and establish a surface of separation". Following that work, in 1871, collaborator William Thomson (later Lord Kelvin), developed a mathematical solution of linear instability whilst attempting to model the formation of ocean wind waves.
Throughout the early 20th Century, the ideas of Kelvin-Helmholtz instabilities were applied to a range of stratified fluid applications. In the early 1920s, Lewis Fry Richardson developed the concept that such shear instability would only form where shear overcame static stability due to stratification, encapsulated in the Richardson Number.
Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s/early 1970s, for clouds, and later the ocean.
See also
Rayleigh–Taylor instability
Richtmyer–Meshkov instability
Mushroom cloud
Plateau–Rayleigh instability
Kármán vortex street
Taylor–Couette flow
Fluid mechanics
Fluid dynamics
Reynolds number
Turbulence
Notes
References
Article describing discovery of K-H waves in deep ocean:
External links
Giant Tsunami-Shaped Clouds Roll Across Alabama Sky - Natalie Wolchover, Livescience via Yahoo.com
Tsunami Cloud Hits Florida Coastline
Vortex formation in free jet - YouTube video showing Kelvin Helmholtz waves on the edge of a free jet visualised in a scientific experiment.
Wave clouds over Christchurch City
Kelvin-Helmholtz clouds, in Barmouth, Gwynedd, on 18 February 2017
1868 introductions
1868 in science
Fluid dynamics
Boundary layer meteorology
Clouds
Fluid dynamic instabilities
Articles containing video clips
Hermann von Helmholtz
William Thomson, 1st Baron Kelvin
Plasma instabilities | Kelvin–Helmholtz instability | [
"Physics",
"Chemistry",
"Engineering"
] | 791 | [
"Physical phenomena",
"Fluid dynamic instabilities",
"Chemical engineering",
"Plasma phenomena",
"Plasma instabilities",
"Piping",
"Fluid dynamics"
] |
568,048 | https://en.wikipedia.org/wiki/Fighting%20in%20ice%20hockey | Fighting is an established tradition in North American ice hockey, with a long history that involves many levels of amateur and professional play and includes some notable individual fights. Fights may be fought by enforcers, or "goons"—players whose role is to fight and intimidate—on a given team, and are governed by a system of unwritten rules that players, coaches, officials, and the media refer to as "the code". Some fights are spontaneous, while others are premeditated by the participants. While officials tolerate fighting during hockey games, they impose a variety of penalties on players who engage in fights.
Unique among North American professional team sports, the National Hockey League (NHL) and most minor professional leagues in North America do not eject players outright for fighting (although they may do so for more flagrant violations as part of a fight) but major European and collegiate hockey leagues do, and multi-game suspensions may be added on top of the ejection. Therefore, the vast majority of fights occur in the NHL and other North American professional leagues.
Physical play in hockey, consisting of allowed techniques such as checking and prohibited techniques such as elbowing, high-sticking, and cross-checking, is linked to fighting. Although often a target of criticism, it is a considerable draw for the sport, and some fans attend games primarily to see fights. Those who defend fighting in hockey say that it helps deter other types of rough play, allows teams to protect their star players, and creates a sense of solidarity among teammates. The debate over allowing fighting in ice hockey games is ongoing. Despite its potentially negative consequences, such as heavier enforcers (or "heavyweights") knocking each other out, administrators at the professional level have no plans to eliminate fighting from the game, as most players consider it essential. Most fans and players oppose eliminating fights from professional hockey games, but considerable opposition to fighting exists, and efforts to eliminate it continue.
History
Fighting has been a part of ice hockey since the sport's rise in popularity in 19th century Canada. There are a number of theories behind the integration of fighting into the game; the most common is that the relative lack of rules in the early history of hockey encouraged physical intimidation and control. Other theories include the poverty and high crime rates of local Canada in the 19th century. There was also an influence from working-class lacrosse players, who transitioned to ice hockey when lacrosse adopted an amateur-only policy in Canada, and who were accustomed to a violently aggressive form of play. The implementation of some features, such as the blue lines in 1918, actually encouraged fighting due to the increased level of physical play. Creation of the blue lines allowed forward passing, but only in the neutral zone. Therefore, puck handlers played at close quarters and were subject to a great deal of physical play. The emergence of enforcers, who protected the puck handlers and fought when necessary, followed shortly thereafter.
In 1922, the NHL introduced Rule 56, which formally regulated fighting, or "fisticuffs" as it was called in the official NHL rulebook. Rather than ejecting players from the game, as was the practice in amateur and collegiate hockey, players would be given a five-minute major penalty. Rule 56 and its language also filtered down to the minor professional and junior leagues in North America. Promoters such as Tex Rickard of Madison Square Garden, who also promoted boxing events, saw financial opportunities in hockey fights and devised marketing campaigns around the rivalries between various team enforcers.
In the current NHL rulebook, the archaic reference to "fisticuffs" has been removed; fighting is now governed under Rule 46 in the NHL rulebook. Referees are given considerable latitude in determining what exactly constitutes a fight and what penalties are applicable to the participants. Significant modifications from the original rule involve penalties which can be assessed to a fight participant deemed to have instigated the fight and additional penalties resulting from instigating a fight while wearing a face-shield.
Although fighting was rarer from the 1920s through the 1960s, it was often brutal in nature; author Ross Bernstein said of the game's early years that it "was probably more like rugby on skates than it was modern hockey." Star players were also known to fight for themselves during the Original Six era, when fewer teams existed than in later years. However, as the NHL's expansion in the late 1960s created more roster spots and spread star players more widely throughout the league, enforcers (who usually possess limited overall skill sets) became more common. Multiple fights during the era received significant media attention. In an NHL preseason game between the Boston Bruins and St. Louis Blues in 1969, Bruins defenceman Ted Green and Blues left wing Wayne Maki engaged in a bloody stick-swinging fight. The fight, initiated by Maki, resulted in Green sustaining a skull fracture. In 1978, World Hockey Association Birmingham Bulls enforcer Dave Hanson, known for his 11-year professional career, fought Hall of Famer Bobby Hull and in the process got Hull's wig caught in his knuckles. The incident landed Hanson in the news, and irate Winnipeg fans attempted to assault him on his way out of the arena. Hanson appeared in the 1977 movie Slap Shot, a comedy about hockey violence.
The rise of the "Broad Street Bullies" in the 1973–74 and 1974–75 Philadelphia Flyers served as an example for future NHL enforcers. The average number of fights per game rose above 1.0 during the 1980s, peaking at 1.17 in 1983–84. That season, a bench-clearing brawl broke out at the end of the second period of a second-round playoff matchup between the Quebec Nordiques and the Montreal Canadiens. A second bench-clearing brawl erupted before the third period began, provoked by the announcement of penalties; a total of 252 penalty minutes were incurred and 11 players were ejected. This game is commonly referred to as the Good Friday Massacre.
North American competitive amateur leagues serve as a training ground and emulate the practices and conduct of professional leagues. Around age 12 players begin to be chosen for size and toughness, play becomes rough, and less-violent players drop out in large numbers. 34% of Toronto amateur skaters aged 12–21 reported being in at least one fist-fight during the 1975–76 season, with the likelihood of fighting increasing with player age and competitive level. Coaches of the time trained players to fight in self-defence or against players who commit flagrant fouls. Players did not consider fist-fights to be violence, reserving this term for acts which were more likely to cause injury. Among professional players, those who refused to fight were seen as untrustworthy and a challenge to team morale, and such players could gain a reputation for being easily intimidated. Those who fought excessively were seen as displaying a lack of judgement and "game sense".
Many NHL teams signed enforcers to protect and fight for smaller offensive stars. Fights in the 1990s included the Brawl in Hockeytown in 1997, in which the Colorado Avalanche and Detroit Red Wings engaged in nine fights, including bouts between Darren McCarty and Claude Lemieux and goaltenders Patrick Roy and Mike Vernon. The following year, a game between the Avalanche and Red Wings involved a fight between goaltenders Chris Osgood and Roy after which they received minor, major, and game misconduct penalties. In 2004, a Philadelphia Flyers – Ottawa Senators game resulted in five consecutive brawls in the closing minutes of the game, including fights between many players who are not known as enforcers and a fight between Flyers goaltender Robert Esche and Senators goaltender Patrick Lalime. The game ended with an NHL record 419 penalty minutes, and an NHL record 20 players were ejected, leaving five players on the team benches. The officials took 90 minutes to sort out the penalties that each team had received.
Hockey fights per NHL season:
| Season | Number of fights |
| 2014–15 | 391 |
| 2013–14 | 469 |
| 2012–13 | 347* |
| 2011–12 | 546 |
| 2010–11 | 645 |
| 2009–10 | 714 |
| 2008–09 | 734 |
*Lockout-shortened season.
By 2009–10, the number of fights in the NHL declined to .58 per game. A further decrease in the frequency of fighting happened over the next five seasons. The 2014–15 season had 0.32 fights per game, as teams placed a greater emphasis on skating ability and fewer young players became enforcers.
Rules and penalties
Rules of the NHL, the North American junior leagues, and other North American professional minor leagues punish fighting with a five-minute major penalty. What separates these leagues from other major North American sports leagues is that they do not eject players simply for participating in a fight. However, fighting is frequently punishable by ejection in European leagues and in Olympic competition.
The rulebooks of the NHL and other professional leagues contain specific rules for fighting. These rules state that at the initiation of a fight, both players must definitely drop their sticks so as not to use them as a weapon. Players must also "drop" or shake off their protective gloves to fight bare-knuckled, as the hard leather and plastic of hockey gloves would increase the effect of landed blows. Players should not remove their own helmet before engaging in a fight due to risk of head injury or else both of the opposing players get an extra two penalty minutes. Players must also heed a referee warning to end a fight once the opponents have been separated. Failure to adhere to any of these rules results in an immediate game misconduct penalty and the possibility of fines and suspension from future games. In the NHL, when a player is fined, his lost pay goes towards the NHL emergency assistance fund. A fined coach's lost pay goes to the NHL Foundation.
North American professional leagues
In the NHL, American Hockey League (AHL), ECHL, Southern Professional Hockey League, and other notable minor leagues, officials punish combatants with five-minute major penalties for fighting (hence the phrase "five for fighting"). A player is automatically ejected and suspended if the player tries to leave the bench to join a fight, or for using weapons of any kind (such as using a skate to kick an opponent, using a stick to hit an opponent, wrapping tape around one's hands, or spitting), as they can cause serious injury. A player who receives two instigator penalties or participates in three fights in a single game is also ejected automatically. Furthermore, his coach can be suspended up to ten games for allowing players to leave the bench to join a fight.
A player who commits three major penalties (including fighting) during a game is automatically ejected, suspended, and fined. A player ejected for three major penalties in a game, or for use of weapons, cannot be replaced for five minutes. In 2003, the ECHL added an ejection, fine, and suspension of an additional game for any player charged as an instigator of a fight during the final five minutes of the third period or any overtime. The NHL and AHL adopted the rule in 2005–06, and the NHL includes a fine against the ejected player's head coach. In 2014, the AHL added a major penalty counter. A player who commits ten major penalties for fighting is suspended one game, and will be suspended one game on each such penalty for his 11th to 13th, and two games for his 14th and further penalties. If the opposing fighter is also charged with an instigator penalty, the fighting major will not count towards suspension.
In 2023, the ECHL toughened the game misconduct penalty leading to ejection. The ejection penalty will now be assessed for two fighting majors in the same game, unless another player in the fight was assessed an instigator penalty. In addition, an automatic game misconduct penalty is assessed to offending fighters if a fight occurs before, during, or shortly after a face-off.
Collegiate, European, and Olympic
In Division I and Division III National Collegiate Athletic Association (NCAA) hockey, the fighters are given a Game Disqualification, which is an ejection from the game and a suspension for as many games as the player has accrued Game Disqualifications during the course of a season. For example, if a player engages in a fight having already received a Game Disqualification earlier in the season, he is ejected from that game and suspended for his team's next two games. This automatic suspension has made fighting in college hockey relatively rare.
Fighting is strictly prohibited in European professional hockey leagues and in Olympic ice hockey. The international rules (by the International Ice Hockey Federation (IIHF)) specify in rule 141 – Fighting the following penalties (among others):
Match penalty (the player is ejected from the game and another player serves 5 minutes in addition to any other penalties imposed in the penalty box) for a player who starts fisticuffs.
Minor penalty (2 minutes) for a player who retaliates with a blow or attempted blow.
Game misconduct penalty (ejection from the game) in addition to any other penalties for any player who is the first to intervene in fisticuffs which are already in progress.
Double minor penalty (4 minutes), major penalty and game misconduct penalty (5 minutes and ejection from the game), or match penalty (at the discretion of the referee) for a player who continues fisticuffs after being told by officials to stop.
Misconduct penalty (10 minutes; second misconduct penalty in one game means automatic ejection) for a player who intentionally takes off his gloves in fisticuffs.
Despite the bans, there have been fights in European leagues. In 2001, a game between the Nottingham Panthers and the Sheffield Steelers in the British Superleague saw "some of the worst scenes of violence seen at a British ice hockey rink". When Sheffield enforcer Dennis Vial crosschecked Nottingham forward Greg Hadden, Panthers enforcer Barry Nieckar subsequently fought with Vial, which eventually escalated into a 36-man bench-clearing brawl. Referee Moray Hanson sent both teams to their locker rooms and delayed the game for 45 minutes while tempers cooled and the officials sorted out the penalties. Eight players and both coaches were ejected, and a British record total of 404 penalty minutes were incurred during the second period. The league handed out 30 games in suspensions to four players and Steelers' coach Mike Blaisdell and a total of £8,400 in fines. Russia's Kontinental Hockey League (KHL) had a bench-clearing brawl between Vityaz Chekhov and Avangard Omsk in 2010. Officials were forced to abandon the game as there were only four players left. Thirty-three players and both teams' coaches were ejected, and a world record total of 707 penalty minutes were incurred during the game. The KHL imposed fines totaling 5.7 million rubles ($191,000), suspended seven players, and counted the game as a 5–0 defeat for both teams, with no points being awarded.
The Punch-up in Piestany was a notable instance of fighting in international play. A 1987 World Junior Ice Hockey Championships game between Canada and the Soviet Union was the scene of a bench-clearing brawl that lasted 20 minutes and prompted officials to turn off the arena lights in an attempt to stop it, forcing the IIHF to declare the game null and void. The fighting was particularly dangerous because it came as a surprise to the Soviet players, to whom the custom was unfamiliar; some of them escalated the fighting beyond what was considered acceptable in North America. Both teams were ejected from the tournament, costing Canada an assured medal, and the Soviet team was barred from the end-of-tournament dinner.
Enforcers
The role of "enforcer" on a hockey team is unofficial. Enforcers occasionally play regular shifts like other players, but their primary role is deterring opposing players from rough play. Coaches often send enforcers out when opposing enforcers are on the ice or any time when it is necessary to check excessively physical play by the opposing team. Enforcers, particularly those with questionable playing skills, can be colloquially referred to as goons (a term also occasionally used for a related position, the pest, who may not fight but will agitate an opponent with rough play and goad the opponent into a fight).
Causes
There are many reasons for fights during a hockey game. Some reasons are related to game play, such as retaliation, momentum-building, intimidation, deterrence, attempting to draw "reaction penalties", and protecting star players. There are also some personal reasons such as retribution for past incidents, bad blood between players, and simple job security for enforcers. Fights often start in response to an opponent's rough play. A North American study of 1975–1983 (the period of peak fighting) found that players used fist-fights to either "stick up for oneself" and save face from attempts at intimidation, or to act in self-defence from actual or perceived dirty tricks.
Game-related reasons
Of the many reasons for fighting, the foremost is retaliation. When players engage in play that members of the opposing team consider unscrupulous, a fight can ensue. The fight may be between the assailant and the victim, between the assailant and an enforcer from the victim's team, or between opposing enforcers. Fights that occur for retaliation purposes can be in immediate response to an on-ice incident, to incidents from earlier in the game, or to actions from past games. Enforcers who intend to start a fight have to consider their timing due to the Instigator rule. For example, putting the opposing team on a power play due to penalties incurred from fighting is less advisable when the game is close.
Enforcers sometimes start fights to build game momentum and provide a psychological advantage over the opposing team. These fights usually involve two enforcers, but may involve any player who is agitating the opposition. This type of fight raises morale on the team of the player who wins, and often excites the home crowd. For that reason, it can also be a gamble to start a fight for momentum; if an enforcer loses the fight, the momentum can swing the wrong way.
Intimidation is an important element of a hockey game and some enforcers start fights just to intimidate opposing players in hopes that they will refrain from agitating skilled players. For example, in the late 1950s, Gordie Howe helped establish himself as an enforcer by defeating Lou Fontinato, a notable tough guy who tallied over 1,200 penalty minutes in his career. Fontinato suffered a broken nose from the fight. After that incident, Howe got a lot more space on the ice and was able to score many goals over the span of his career because he intimidated other players. Conversely, games in European professional leagues are known to be less violent than North American games because fighting is discouraged in Europe by ejection and heavy fines. Since the penalties for fighting are so severe, the enforcers are less able to intimidate opposing players with fighting and said players take more liberties on the ice.
For teams that face each other frequently, players may fight just to send the message to the opposing players that they will be the target of agitation or aggression in future games. Teams that are losing by a considerable margin often start these fights near the end of the game when they have nothing to lose. Enforcers may start fights with more skilled players to draw what is called a "reaction penalty", an undisciplined reaction to aggressive play on the part of the enforcer. This practice is also known to be difficult due to the Instigator rule.
Another reason is the protection of star skaters and defenceless goalies. Fighting within the game can also send a message to players and coaches from other teams that cheap shots, dirty plays, and targeting specific players will not be tolerated and there will be consequences involved. Fighting can provide retribution for a team's player getting targeted or injured. Overall, fighting is sometimes seen as a beneficial policing that the game needs to keep players in line. Over the history of hockey, many enforcers have been signed simply to protect players like Wayne Gretzky, who was protected by Dave Semenko, Marty McSorley, and others, and Brett Hull, who was protected by Kelly Chase and others. Many believe that without players protecting each other, referees would affect the gameplay by having to call more penalties, and the league would have to suspend players for longer periods.
Personal reasons
Many young enforcers need to establish their role early in their career to avoid losing their jobs. Due to the farm systems that most professional hockey leagues use, enforcers who get a chance to play at the level above their current one (for example, an AHL player getting a chance to play in an NHL game) need to show other players, coaches, and fans that they are worthy of the enforcer role on the team. Players and coaches enjoy being with enforcers who fight for their teams, not for themselves.
There are also times when players and even entire teams carry on personal rivalries that have little to do with individual games; fights frequently occur for no other reason. A rivalry that produced many fights was between the Detroit Red Wings and the Colorado Avalanche during the 1990s.
Effect on game
Statistical evidence indicates that fighting correlates negatively with a team's success, or at least has inconsequential benefits. Since the 1979–80 season, teams in the bottom three for fighting-related major penalties have finished at the top of the regular-season standings 10 times and have won the Stanley Cup 11 times, while teams in the top three have won the regular season and the Stanley Cup only twice each. One statistical analysis calculated that winning a fight benefited a team by about of a win in the standings. Two others showed that fights increase scoring, but do so evenly for both teams, so they do not significantly affect wins.
Efforts to ban fighting
The Canadian Academy of Sport Medicine stated in a 1988 position statement that "Fighting does cause injuries, which range from fractures of the hands and face to lacerations and eye injuries. At present, it is an endemic and ritualized blot on the reputation of the North American game."
Criticism often arises after single acts of violence committed during fights. For example, on March 21, 2007, Colton Orr of the New York Rangers fought with Todd Fedoruk of the Philadelphia Flyers and ended up knocking Fedoruk unconscious. Fedoruk already had titanium plates in his face from a fight earlier in the season with Derek Boogaard. The resulting media coverage of the incident renewed calls for a fighting ban. Some players acknowledge that there is no harm in discussing the issue; however, most players and administrators continue to insist that fighting stay as a permanent element of organized ice hockey. Some league administrators, such as former NHL senior vice-president and director of hockey operations Colin Campbell, have been circulating the idea of banning fighting in response to incidents such as the Fedoruk–Orr fight.
Sports journalists have argued with increasing frequency that fighting adds nothing to the sport and should be banned. Among the reasons they cite are that it is unsportsmanlike, that it is a "knee-jerk" reaction that detracts from the skillful aspects of the game, and that it is simply a waste of time. Writing in the Journal of Sport and Social Issues, Ryan T. Lewinson and Oscar E. Palma argue that fighting shows a lack of discipline on the part of participants, as well as a lack of fairness in certain cases, such as when fighters have a size disparity. However, supporters of fighting say it provides a means of security for players and that fighting is a tool players use to keep opposing players in check, essentially allowing players to police which hits and dirty plays are unacceptable.
Various politicians and hockey figures have expressed opposition to fighting. In 2012, David Johnston, the Governor General of Canada, said that fighting should not be part of the sport. IIHF president René Fasel has protested against fighting, deeming it "Neanderthal behavior". Wayne Gretzky, considered by many to be the greatest hockey player of all time, has often spoken out against fighting.
NHL Commissioner Gary Bettman, at a 2007 press conference broadcast on CBC Sports, said, "Fighting has always had a role in the game ... from a player safety standpoint, what happens in fighting is something we need to look at just as we need to look at hits to the head. But we're not looking to have a debate on whether fighting is good or bad or should be part of the game."
Community members often become involved in the debate over banning fighting. In December 2006, a school board trustee in London, Ontario, attended a London Knights game and was shocked by the fighting and by the crowd's positive reaction to it. This experience led him to organize an ongoing effort to ban fighting in the Ontario Hockey League, where the Knights compete, by attempting to gain the support of other school boards and by writing letters to OHL administrators. On the advice of its Medical Health Officer, the Middlesex-London Health board has supported recommendations to ban fighting across amateur hockey and to increase disciplinary measures to ensure deterrence.
The first known death directly related to a hockey fight occurred when Don Sanderson of the Whitby Dunlops, a top-tier senior amateur team in Ontario's Major League Hockey, died in January 2009, a month after sustaining a head injury during a fight: Sanderson's helmet came off during the fight, and when he fell to the ice, he hit his head. His death renewed calls to ban fighting among critics. In reaction, the league has stated that they are reviewing the players' use of helmets.
Fighters such as Bob Probert and Boogaard have been posthumously diagnosed with chronic traumatic encephalopathy, a degenerative disease of the brain caused by repeated brain trauma. While the NHL took steps to limit head trauma from blindside hits, it was criticized for doing nothing to reduce fighting, which consists of repeated deliberate blows to the head. It is unknown whether Boogaard's death was mainly attributable to his repeated head trauma from fighting and hits or to a possible addiction to painkillers combined with alcohol abuse. His brain was sent to Boston University for further testing.
Rules in-game to discourage fighting
Since the 1970s, three rules have curtailed the number and scope of fights in the NHL. In 1971, the league created the "Third Man In" rule which attempts to eliminate the bench-clearing brawl by providing for the ejection of the first player who joins a fight already in progress, unless a match penalty is being assessed to a player already engaged in that fight. Another rule automatically suspends the first player from each team that leaves the bench to join a fight when it is not their shift. In 1992, the "Instigator" rule, which adds an additional two-minute minor penalty to the player who starts a fight, was introduced.
Beginning in the 2016–17 season, the American Hockey League imposed a fighting major counter, similar to the National Basketball Association's unsportsmanlike technical foul counter and soccer's accumulated cards. A player who collects ten major penalties for fighting during the season will be suspended one game, and will be suspended one game for each fighting major for the next three penalties (the 11th, 12th, and 13th fighting majors). A player is suspended two games for his 14th and subsequent major penalty for fighting. If one player involved in the fight is charged with an instigator penalty, the opponent will not have the fighting major count towards suspension. The ECHL added the rule in 2019–20.
Beginning in the 2023–24 season, the ECHL reduced the number of fighting majors that can result in an ejection from three to two, with exceptions for opponents being docked as instigators, and added automatic game misconduct penalties for fights that occur just before or after the puck is dropped.
Etiquette
There are several informal rules governing fighting in ice hockey that players rarely discuss but take quite seriously. The most important aspect of this etiquette is that opposing enforcers must agree to a fight, usually via a verbal or physical exchange on the ice. This agreement helps both players avoid being given an instigator penalty, and helps keep unwilling participants out of fights.
Enforcers typically fight only each other, with the occasional spontaneous fight breaking out between opponents who do not usually fight. There is a high degree of respect among enforcers as well; they will respect a rival who declines a fight because he is playing with injuries (a frequent occurrence), since enforcers consider winning a fight against an injured opponent to be an empty victory. This is also known as granting a "free pass". Enforcer Darren McCarty described fighters as being divided into "heavyweights" and "light heavyweights", and said that players in the latter category "end up dancing with some guys who could end (their) career with a single punch."
Long-standing rivalries result in numerous rematches, especially if one of the enforcers has to decline an invitation to fight during a given game. This is one of the reasons that enforcers may fight at the beginning of a game, when nothing obvious has happened to agitate the opponents. On the other hand, it is bad etiquette to try to initiate a fight with an enforcer who is near the end of his shift, since the more rested player will have an obvious advantage.
Another important aspect of etiquette is simply fighting fairly and cleanly. Fairness is maintained by not wearing equipment that could injure the opposing fighter, such as face shields, gloves, or masks, and not assaulting referees or linesmen. Finally, whatever the outcome of the fight, etiquette dictates that players who choose to fight win and lose those fights gracefully. Otherwise, they risk losing the respect of their teammates and fans.
Sportsmanship is also an important aspect when it comes to fights. While an enforcer may start a fight in response to foul play, it is generally not acceptable to start a fight to retaliate against an opponent who scored fairly.
Tactics
Fighting tactics are governed by a number of official rules, and enforcers will also adopt informal tactics particular to their style and personality. One tactic adopted by players is known as "going for it", in which the player puts his head down and just throws as many punches as he can, as fast as he can. In the process, that player takes as many punches as he delivers, although some of them are to the hard forehead. Fighters usually must keep one hand on their opponent's jersey since the ice surface makes maintaining balance very difficult. For this reason, the majority of a hockey fight consists of the players holding on with one hand and punching with the other.
Other examples include Gordie Howe's tactic of holding the sweater of his opponent right around the armpit of his preferred punching arm so as to impede his movement. Probert, of the Detroit Red Wings and Chicago Blackhawks, was known to allow his opponents to punch until they showed signs of tiring, at which time he would take over and usually dominate the fight. Some consider long-time Buffalo Sabres enforcer Rob Ray to be the reason that hockey jerseys are now equipped with tie-down straps ("fight straps") that prevent their removal; he would always remove his jersey during fights so his opponents would have nothing to grab on to. This is commonly referred to as the "Rob Ray Rule".
Officials' role
Throughout a game, the referee and linesmen have a role in preventing fights through the way they manage the game—calling penalties, breaking up scuffles before they escalate, and so on. Despite an official's best efforts, though, fights do occur, and once they do, the referee and linesmen have a set of responsibilities to follow in order to break up the fight safely. None of these responsibilities is written in the NHL's rule book; according to officials, they are often guided by "common sense".
In a single fight situation, the linesmen will communicate with each other as to which player they will take during the fight, clear out any sticks, gloves, or other equipment that has been dropped and wait for a safe time to enter the fight, which they will do together. If both players are still standing while the linesmen enter, the linesmen will approach from each side (never from behind), bring their arms over the combatants' arms and wrap them around, pushing downwards and breaking the players apart. If the players have fallen, the linesmen will approach from the side (never over the skates), getting in between the two players. One linesman will use his body to shield the player on the bottom from the other player while his partner will remove the top player from the fight. Most linesmen will allow a fight to run its course for their own safety, but will enter a fight regardless if one player has gained a significant advantage over his opponent. Once the players have been broken up, the linesmen then escort the players off the ice. During this time the referee will keep other players from entering the fight by sending them to a neutral area on the ice and then watching the fight and assessing any other penalties that occur.
In a multiple-fight situation, the linesmen will normally break up fights together, one fight at a time using the same procedures for a single fight. The linesmen will communicate with each other which fight to break up. In a multiple-fight situation, the referee will stand in an area of the ice where he/she can have a full view of all the players and will write down—on a pad of paper commonly known as a "riot pad"—the numbers of the players that are involved in the fights, watching for situations that warrant additional penalties, such as players removing opponents' helmets, players participating in a second fight, players leaving a bench to participate in a fight, or third players into a fight. The referee will not normally break up a fight unless the linesmen need assistance, or a fight is occurring where a player has gained a significant advantage over the other player, leading to concerns of significant injury.
See also
Gordie Howe hat trick
Footnotes
References
Aggression
Ice hockey penalties
Violence in ice hockey
Banned sports tactics | Fighting in ice hockey | [
"Biology"
] | 7,278 | [
"Behavior",
"Aggression",
"Human behavior"
] |
568,248 | https://en.wikipedia.org/wiki/Paralanguage | Paralanguage, also known as vocalics, is a component of meta-communication that may modify meaning, give nuanced meaning, or convey emotion, by using techniques such as prosody, pitch, volume, intonation, etc. It is sometimes defined as relating to nonphonemic properties only. Paralanguage may be expressed consciously or unconsciously.
The study of paralanguage is known as paralinguistics; the field was pioneered by George L. Trager in the 1950s, while he was working at the Foreign Service Institute of the U.S. Department of State. His colleagues at the time included Henry Lee Smith, Charles F. Hockett (working with him on using descriptive linguistics as a model for paralanguage), Edward T. Hall developing proxemics, and Ray Birdwhistell developing kinesics. Trager published his conclusions in 1958, 1960 and 1961.
His work has served as a basis for all later research, especially those investigating the relationship between paralanguage and culture (since paralanguage is learned, it differs by language and culture). A good example is the work of John J. Gumperz on language and social identity, which specifically describes paralinguistic differences between participants in intercultural interactions. The film Gumperz made for BBC in 1982, Multiracial Britain: Cross talk, does a particularly good job of demonstrating cultural differences in paralanguage and their impact on relationships.
Paralinguistic information, because it is phenomenal, belongs to the external speech signal (Ferdinand de Saussure's parole) but not to the arbitrary conventional code of language (Saussure's langue). Even vocal language has some paralinguistic as well as linguistic properties that can be seen (lip reading, McGurk effect), and even felt, e.g. by the Tadoma method.
Aspects of the speech signal
Perspectival aspects
Speech signals arrive at a listener's ears with acoustic properties that may allow listeners to identify location of the speaker (sensing distance and direction, for example). Sound localization functions in a similar way also for non-speech sounds. The perspectival aspects of lip reading are more obvious and have more drastic effects when head turning is involved.
Organic aspects
The speech organs of different speakers differ in size. As children grow up, their organs of speech become larger, and there are differences between male and female adults. The differences concern not only size, but also proportions. They affect the pitch of the voice and to a substantial extent also the formant frequencies, which characterize the different speech sounds. The organic quality of speech has a communicative function in a restricted sense, since it is merely informative about the speaker. It will be expressed independently of the speaker's intention.
Expressive aspects
Paralinguistic cues such as loudness, rate, pitch, pitch contour, and to some extent formant frequencies of an utterance, contribute to the emotive or attitudinal quality of an utterance. Typically, attitudes are expressed intentionally and emotions without intention, but attempts to fake or to hide emotions are not unusual.
Consequently, paralinguistic cues relating to expression have a moderate effect on semantic marking. That is, a message may be made more or less coherent by adjusting its expressive presentation. For instance, an utterance such as "I drink a glass of wine every night before I go to sleep" is coherent when made by a speaker identified as an adult, but registers a small semantic anomaly when made by a speaker identified as a child. This anomaly is significant enough to be measured through electroencephalography, as an N400. Autistic individuals have a reduced sensitivity to this and similar effects.
Emotional tone of voice, itself paralinguistic information, has been shown to affect the resolution of lexical ambiguity. Some words have homophonous partners; some of these homophones appear to have an implicit emotive quality, for instance, the sad "die" contrasted with the neutral "dye"; uttering the sound /dai/ in a sad tone of voice can result in a listener writing the former word significantly more often than if the word is uttered in a neutral tone.
Linguistic aspects
Ordinary phonetic transcriptions of utterances reflect only the linguistically informative quality. The problem of how listeners factor out the linguistically informative quality from speech signals is a topic of current research.
Some of the linguistic features of speech, in particular of its prosody, are paralinguistic or pre-linguistic in origin. A most fundamental and widespread phenomenon of this kind is described by John Ohala as the "frequency code". This code works even in communication across species. It has its origin in the fact that the acoustic frequencies in the voice of small vocalizers are high, while they are low in the voice of large vocalizers. This gives rise to secondary meanings such as "harmless", "submissive", "unassertive", which are naturally associated with smallness, while meanings such as "dangerous", "dominant", and "assertive" are associated with largeness. In most languages, the frequency code also serves the purpose of distinguishing questions from statements. It is universally reflected in expressive variation, and it is reasonable to assume that it has phylogenetically given rise to the sexual dimorphism that lies behind the large difference in pitch between average female and male adults.
In text-only communication such as email, chatrooms and instant messaging, paralinguistic elements can be displayed by emoticons, font and color choices, capitalization and the use of non-alphabetic or abstract characters. Nonetheless, paralanguage in written communication is limited in comparison with face-to-face conversation, sometimes leading to misunderstandings.
Specific forms of paralinguistic respiration
Gasps
A gasp is a kind of paralinguistic respiration in the form of a sudden and sharp inhalation of air through the mouth. A gasp may indicate difficulty breathing and a panicked effort to draw air into the lungs. Gasps also occur from an emotion of surprise, shock or disgust. Like a sigh, a yawn, or a moan, a gasp is often an automatic and unintentional act. Gasping is closely related to sighing, and the inhalation characterizing a gasp induced by shock or surprise may be released as a sigh if the event causing the initial emotional reaction is determined to be less shocking or surprising than the observer first believed.
As a symptom of physiological problems, apneustic respirations (a.k.a. apneusis), are gasps related to the brain damage associated with a stroke or other trauma.
Sighs
A sigh is a kind of paralinguistic respiration in the form of a deep and especially audible, single exhalation of air out of the mouth or nose, that humans use to communicate emotion. It is a voiced pharyngeal fricative, sometimes associated with a guttural glottal breath exuded in a low tone. It often arises from a negative emotion, such as dismay, dissatisfaction, boredom, or futility. A sigh can also arise from positive emotions such as relief, particularly in response to some negative situation ending or being avoided. Like a gasp, a yawn, or a moan, a sigh is often an automatic and unintentional act.
Scientific studies show that babies sigh after 50 to 100 breaths. This serves to improve the mechanical properties of lung tissue, and it also helps babies to develop a regular breathing rhythm. Behaviors equivalent to sighing have also been observed in animals such as dogs, monkeys, and horses.
In text messages and internet chat rooms, or in comic books, a sigh is usually represented with the word itself, 'sigh', possibly within asterisks, *sigh*.
Sighing is also a reflex, governed by a few neurons.
Moans and groans
Moaning and groaning both refer to an extended sound emanating from the throat, which is typically made by engaging in sexual activity. Moans and groans are also noises traditionally associated with ghosts, and their supposed experience of suffering in the afterlife. They are sometimes used to indicate displeasure.
Throat clearing
Throat clearing is a metamessaging nonverbal form of communication used in announcing one's presence upon entering the room or approaching a group. It is done by individuals who perceive themselves to be of higher rank than the group they are approaching and utilize the throat-clear as a form of communicating this perception to others. It can convey nonverbalized disapproval.
In chimpanzee social hierarchy, this utterance is a sign of rank, directed by alpha males and higher-ranking chimps to lower-ranking ones and signals a mild warning or a slight annoyance.
As a form of metacommunication, the throat-clear is acceptable only to signal that a formal business meeting is about to start. It is not acceptable business etiquette to clear one's throat when approaching a group on an informal basis; the basis of one's authority has already been established and requires no further reiteration by this ancillary nonverbal communication.
Mhm
The utterance sits between literal language and movement: by making a noise such as "hmm" or "mhm", a speaker can create a pause in the conversation or take a moment to stop and think.
The "mhm" utterance is often used in narrative interviews, such as an interview with a disaster survivor or sexual violence victim. In this kind of interview, it is better for the interviewers or counselors not to intervene too much when an interviewee is talking. The "mhm" assures the interviewee that they are being heard and can continue their story. Observing emotional differences and taking care of an interviewee's mental status is an important way to find slight changes during conversation.
Huh?
"Huh?", meaning "what?" (that is, used when an utterance by another is not fully heard or requires clarification), is an essentially universal expression, but may be a normal word (learned like other words) and not paralanguage. If it is a word, it is a rare (or possibly even unique) one, being found with basically the same sound and meaning in almost all languages.
Physiology of paralinguistic comprehension
fMRI studies
Several studies have used the fMRI paradigm to observe brain states brought about by adjustments of paralinguistic information. One such study investigated the effect of interjections that differed along the criteria of lexical index (more or less "wordy") as well as neutral or emotional pronunciation; a higher hemodynamic response in auditory cortical gyri was found when more robust paralinguistic data was available. Some activation was found in lower brain structures such as the pons, perhaps indicating an emotional response.
See also
Business communication
Intercultural competence
Kinesics
Meta message
Meta-communication
Metacommunicative competence
Prosody (linguistics)
Proxemics
References
Further reading
Cook, Guy (2001) The Discourse of Advertising. (second edition) London: Routledge. (chapter 4 on paralanguage and semiotics).
Robbins, S. and Langton, N. (2001) Organizational Behaviour: Concepts, Controversies, Applications (2nd Canadian ed.). Upper Saddle River, NJ: Prentice-Hall.
Traunmüller, H. (2005) "Paralinguale Phänomene" (Paralinguistic phenomena), chapter 76 in: SOCIOLINGUISTICS An International Handbook of the Science of Language and Society, 2nd ed., U. Ammon, N. Dittmar, K. Mattheier, P. Trudgill (eds.), Vol. 1, pp. 653–665. Walter de Gruyter, Berlin/New York.
Matthew McKay, Martha Davis, Patrick Fanning [1983] (1995) Messages: The Communication Skills Book, Second Edition, New Harbinger Publications, pp. 63–67.
Human communication
Nonverbal communication
Sociological terminology
Social philosophy
Online chat | Paralanguage | [
"Biology"
] | 2,486 | [
"Human communication",
"Behavior",
"Human behavior"
] |
568,646 | https://en.wikipedia.org/wiki/Simplified%20Instructional%20Computer | The Simplified Instructional Computer (abbreviated SIC) is a hypothetical computer system introduced in System Software: An Introduction to Systems Programming, by Leland Beck. Due to the fact that most modern microprocessors include subtle, complex functions for the purposes of efficiency, it can be difficult to learn systems programming using a real-world system. The Simplified Instructional Computer solves this by abstracting away these complex behaviors in favor of an architecture that is clear and accessible for those wanting to learn systems programming.
SIC Architecture
The SIC machine has basic addressing, storing most memory addresses in hexadecimal integer format. Similar to most modern computing systems, the SIC architecture stores all data in binary and uses two's complement to represent negative values at the machine level. Memory storage in SIC consists of 8-bit bytes, and all memory addresses in SIC are byte addresses. Any three consecutive bytes form a 24-bit 'word' value, addressed by the location of the lowest-numbered byte in the word. Numeric values are stored as word values, and character values use the 8-bit ASCII system. The SIC machine does not support floating-point hardware and has at most 32,768 bytes of memory. There is also a more complicated machine built on top of SIC called the Simplified Instructional Computer with Extra Equipment (SIC/XE). The XE expansion of SIC adds a 48-bit floating-point data type, an additional memory addressing mode, and extra memory (1 megabyte instead of 32,768 bytes) to the original machine. All SIC assembly code is upwards compatible with SIC/XE.
SIC machines have several registers, each 24 bits long and having both a numeric and character representation:
A (0): Used for basic arithmetic operations; known as the accumulator register.
X (1): Stores and calculates addresses; known as the index register.
L (2): Used for jumping to specific memory addresses and storing return addresses; known as the linkage register.
PC (8): Contains the address of the next instruction to execute; known as the program counter register.
SW (9): Contains a variety of information, such as carry or overflow flags; known as the status word register.
In addition to the standard SIC registers, there are also four additional general-purpose registers specific to the SIC/XE machine:
B (3): Used for addressing; known as the base register.
S (4): No special use, general purpose register.
T (5): No special use, general purpose register.
F (6): Floating point accumulator register (This register is 48-bits instead of 24).
These five/nine registers allow the SIC or SIC/XE machine to perform most simple tasks in a customized assembly language. In the System Software book, this is used with a theoretical series of operation codes to aid in understanding the assemblers and linker-loaders required for the execution of assembly language code.
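Because a format 2 instruction (described in the next section) encodes register numbers in its two 4-bit operand fields, the numbering above is what an assembler actually emits. The following is a minimal illustrative sketch in Python rather than anything from Beck's text; the helper names are invented, and the CLEAR opcode value 0xB4 is the one commonly given in SIC/XE opcode tables.

# Register numbers as listed above (SIC/XE adds B, S, T and F to the basic SIC set).
REGISTERS = {"A": 0, "X": 1, "L": 2, "B": 3, "S": 4, "T": 5, "F": 6, "PC": 8, "SW": 9}

def encode_format2(opcode: int, r1: str, r2: str = "A") -> bytes:
    """Pack an 8-bit opcode and two 4-bit register numbers into a 16-bit format 2 instruction."""
    return bytes([opcode & 0xFF, (REGISTERS[r1] << 4) | REGISTERS[r2]])

# Example: CLEAR X (second register field unused, i.e. zero) assembles to the bytes B4 10.
assert encode_format2(0xB4, "X").hex().upper() == "B410"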
Addressing Modes for SIC and SIC/XE
The Simplified Instructional Computer has three instruction formats, and the Extra Equipment add-on includes a fourth. The instruction formats provide a model for memory and data management. Each format has a different representation in memory:
Format 1: Consists of 8 bits of allocated memory to store instructions.
Format 2: Consists of 16 bits of allocated memory to store 8 bits of instructions and two 4-bits segments to store operands.
Format 3: Consists of 6 bits to store an instruction, 6 bits of flag values, and 12 bits of displacement.
Format 4: Only valid on SIC/XE machines, consists of the same elements as format 3, but instead of a 12-bit displacement, stores a 20-bit address.
Both format 3 and format 4 have six-bit flag values in them, consisting of the following flag bits:
n: Indirect addressing flag
i: Immediate addressing flag
x: Indexed addressing flag
b: Base address-relative flag
p: Program counter-relative flag
e: Format 4 instruction flag
Addressing Modes for SIC/XE
Rule 1:
e = 0 : format 3
e = 1 : format 4
format 3:
b = 1, p = 0 (base relative)
b = 0, p = 1 (pc relative)
b = 0, p = 0 (direct addressing)
format 4:
b = 0, p = 0 (direct addressing)
x = 1 (index)
i = 1, n = 0 (immediate)
i = 0, n = 1 (indirect)
i = 0, n = 0 (SIC)
i = 1, n = 1 (SIC/XE for SIC compatible)
Rule 2:
i = 0, n = 0 (SIC): the b, p, and e bits are treated as part of the address (see the decoding sketch below).
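Below is a minimal decoding sketch in Python; it is not taken from Beck's text, and the function name, field names, and example instruction word are invented for illustration. It extracts the opcode, the n i x b p e flags, and the displacement or address from the raw bytes of a format 3 or format 4 instruction, then labels the addressing mode according to the rules above.

def decode(instr: bytes) -> dict:
    """Split a format 3/4 instruction into opcode, flags, and displacement/address."""
    opcode = instr[0] & 0xFC                 # top 6 bits of the first byte
    n = (instr[0] >> 1) & 1
    i = instr[0] & 1
    x = (instr[1] >> 7) & 1
    b = (instr[1] >> 6) & 1
    p = (instr[1] >> 5) & 1
    e = (instr[1] >> 4) & 1
    if n == 0 and i == 0:
        # Rule 2: a SIC-compatible word, so b, p and e become part of a 15-bit address.
        operand, relative = ((instr[1] & 0x7F) << 8) | instr[2], "SIC direct (15-bit address)"
    elif e:
        # e = 1 : format 4 with a 20-bit address (direct addressing).
        operand, relative = ((instr[1] & 0x0F) << 16) | (instr[2] << 8) | instr[3], "format 4 direct"
    elif b and not p:
        operand, relative = ((instr[1] & 0x0F) << 8) | instr[2], "base relative"
    elif p and not b:
        operand, relative = ((instr[1] & 0x0F) << 8) | instr[2], "PC relative"
    else:
        operand, relative = ((instr[1] & 0x0F) << 8) | instr[2], "format 3 direct"
    mode = "immediate" if (i and not n) else "indirect" if (n and not i) else "simple"
    return {"opcode": opcode, "nixbpe": (n, i, x, b, p, e), "operand": operand,
            "relative": relative, "indexed": bool(x), "mode": mode}

# Example with a made-up instruction word: opcode 0x00 (LDA), n = i = 1, p = 1, disp = 0x600.
print(decode(bytes([0x03, 0x26, 0x00])))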
SIC Assembly Syntax
SIC uses a special assembly language with its own operation codes that hold the hex values needed to assemble and execute programs. A sample program is provided below to get an idea of what a SIC program might look like. In the code below, there are three columns. The first column represents a forwarded symbol that will store its location in memory. The second column denotes either a SIC instruction (opcode) or a constant value (BYTE or WORD). The third column takes the symbol value obtained by going through the first column and uses it to run the operation specified in the second column. This process creates an object code, and all the object codes are put into an object file to be run by the SIC machine.
COPY START 1000
FIRST STL RETADR
CLOOP JSUB RDREC
LDA LENGTH
COMP ZERO
JEQ ENDFIL
JSUB WRREC
J CLOOP
ENDFIL LDA EOF
STA BUFFER
LDA THREE
STA LENGTH
JSUB WRREC
LDL RETADR
RSUB
EOF BYTE C'EOF'
THREE WORD 3
ZERO WORD 0
RETADR RESW 1
LENGTH RESW 1
BUFFER RESB 4096
.
. SUBROUTINE TO READ RECORD INTO BUFFER
.
RDREC LDX ZERO
LDA ZERO
RLOOP TD INPUT
JEQ RLOOP
RD INPUT
COMP ZERO
JEQ EXIT
STCH BUFFER,X
TIX MAXLEN
JLT RLOOP
EXIT STX LENGTH
RSUB
INPUT BYTE X'F1'
MAXLEN WORD 4096
.
. SUBROUTINE TO WRITE RECORD FROM BUFFER
.
WRREC LDX ZERO
WLOOP TD OUTPUT
JEQ WLOOP
LDCH BUFFER,X
WD OUTPUT
TIX LENGTH
JLT WLOOP
RSUB
OUTPUT BYTE X'06'
END FIRST
Assembling this program produces the object code shown below. The beginning of each line consists of a record type and hex values for memory locations. For example, the top line is an 'H' record: after the program name, the first 6 hex digits signify its relative starting location, and the last 6 hex digits represent the program's size. The remaining lines are similar, with each 'T' record consisting of 6 hex digits giving that line's starting location, 2 hex digits indicating the size (in bytes) of the line, and the object codes created during the assembly process (a small record-parsing sketch follows the listing).
HCOPY 00100000107A
T0010001E1410334820390010362810303010154820613C100300102A0C103900102D
T00101E150C10364820610810334C0000454F46000003000000
T0020391E041030001030E0205D30203FD8205D2810303020575490392C205E38203F
T0020571C1010364C0000F1001000041030E02079302064509039DC20792C1036
T002073073820644C000006
E001000
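As a rough illustration of the record layout just described, the sketch below splits a header ('H'), text ('T'), or end ('E') record into its fields. It is not part of Beck's text; the names are invented, and it assumes the standard fixed-column layout with a six-column program name in the header record (whitespace in the listing above may have been collapsed).

def parse_record(line: str) -> dict:
    """Split one object-program record into its fields."""
    kind = line[0]
    if kind == "H":                           # program name, relative start, program length
        return {"type": "H", "name": line[1:7].strip(),
                "start": int(line[7:13], 16), "length": int(line[13:19], 16)}
    if kind == "T":                           # starting address, byte count, object code
        return {"type": "T", "start": int(line[1:7], 16),
                "length": int(line[7:9], 16), "code": line[9:]}
    if kind == "E":                           # address of the first instruction to execute
        return {"type": "E", "first": int(line[1:], 16)}
    raise ValueError(f"unknown record type {kind!r}")

# Example, using the first 'T' record above: it starts at 0x001000 and holds 0x1E (30) bytes.
rec = parse_record("T0010001E1410334820390010362810303010154820613C100300102A0C103900102D")
assert rec["start"] == 0x001000 and rec["length"] == 0x1E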
Sample program
Given below is a program illustrating data movement in SIC.
LDA FIVE
STA ALPHA
LDCH CHARZ
STCH C1
ALPHA RESW 1
FIVE WORD 5
CHARZ BYTE C'Z'
C1 RESB 1
Emulating the SIC System
Since the SIC and SIC/XE machines are not real machines, the task of actually constructing a SIC emulator is often part of coursework in a systems programming class. The purpose of SIC is to teach introductory systems programmers and college students how to write and assemble code at a level below higher-level languages such as C and C++. That said, a few SIC emulators are available on the web, though they are relatively uncommon.
An assembler and a simulator written in Pascal by the author, Leland Beck, are available on his educational home page at ftp://rohan.sdsu.edu/faculty/beck
SIC/XE Simulator And Assembler downloadable at https://sites.google.com/site/sarimohsultan/Projects/sic-xe-simulator-and-assembler
SIC Emulator, Assembler and some example programs written for SIC downloadable at http://sicvm.sourceforge.net/home.php
SicTools - virtual machine, simulator, assembler and linker for the SIC/XE computer available at https://jurem.github.io/SicTools/
See also
Computer
System software
Assembly language
Processor register
Virtual machine
References
Information of SIC and SIC/XE systems: https://web.archive.org/web/20121114101742/http://www-rohan.sdsu.edu/~stremler/2003_CS530/SicArchitecture.html
List of SIC and SIC/XE instructions: http://teaching.yfolajimi.com/uploads/3/5/6/9/3569427/_sp04.ppt
Brief memory addressing information: http://www.unf.edu/~cwinton/html/cop3601/s10/class.notes/basic4-SICfmts.pdf
SIC/XE Mode Addressing: http://uhost.rmutp.ac.th/wanapun.w/--j--/ch2-2.pdf
External links
SICvm A Virtual Machine based on a Simplified Instructional Computer (SIC)
Educational abstract machines
Computer science education | Simplified Instructional Computer | [
"Technology"
] | 2,170 | [
"Computer science education",
"Computer science"
] |
568,674 | https://en.wikipedia.org/wiki/Valence%20electron | In chemistry and physics, valence electrons are electrons in the outermost shell of an atom, and that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied.
Overview
Electron configuration
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the electronic shell of highest principal quantum number n. Thus, the number of valence electrons that it may have depends on the electron configuration in a simple way. For example, the electronic configuration of phosphorus (P) is 1s2 2s2 2p6 3s2 3p3 so that there are 5 valence electrons (3s2 3p3), corresponding to a maximum valence for P of 5 as in the molecule PF5; this configuration is normally abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons whose configuration is identical to that of the noble gas neon.
However, transition elements have (n−1)d energy levels that are very close in energy to the n level. So as opposed to main-group elements, a valence electron for a transition metal is defined as an electron that resides outside a noble-gas core. Thus, generally, the d electrons in transition metals behave as valence electrons although they are not in the outermost shell. For example, manganese (Mn) has configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d5; this is abbreviated to [Ar] 4s2 3d5, where [Ar] denotes a core configuration identical to that of the noble gas argon. In this atom, a 3d electron has energy similar to that of a 4s electron, and much higher than that of a 3s or 3p electron. In effect, there are possibly seven valence electrons (4s2 3d5) outside the argon-like core; this is consistent with the chemical fact that manganese can have an oxidation state as high as +7 (in the permanganate ion: ). (But note that merely having that number of valence electrons does not imply that the corresponding oxidation state will exist. For example, fluorine is not known in oxidation state +7; and although the maximum known number of valence electrons is 16 in ytterbium and nobelium, no oxidation state higher than +9 is known for any element.)
The farther right in each transition metal series, the lower the energy of an electron in a d subshell and the less such an electron has valence properties. Thus, although a nickel atom has, in principle, ten valence electrons (4s2 3d8), its oxidation state never exceeds four. For zinc, the 3d subshell is complete in all known compounds, although it does contribute to the valence band in some compounds. Similar patterns hold for the (n−2)f energy levels of inner transition metals.
The d electron count is an alternative tool for understanding the chemistry of a transition metal.
The number of valence electrons
The number of valence electrons of an element can be determined by the periodic table group (vertical column) in which the element is categorized. In groups 1–12, the group number matches the number of valence electrons; in groups 13–18, the units digit of the group number matches the number of valence electrons. (Helium is the sole exception.)
Helium is an exception: despite having a 1s2 configuration with two valence electrons, and thus having some similarities with the alkaline earth metals with their ns2 valence configurations, its shell is completely full and hence it is chemically very inert and is usually placed in group 18 with the other noble gases.
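The group-number rule can be written as a small function. The sketch below is only illustrative (the function name is invented); it takes the IUPAC group number from 1 to 18 and treats helium as the stated exception.

def valence_electrons_from_group(group: int, element: str = "") -> int:
    """Count valence electrons from the periodic-table group, per the rule above."""
    if element.lower() in ("he", "helium"):
        return 2                              # full 1s shell, despite sitting in group 18
    if 1 <= group <= 12:
        return group                          # groups 1-12: the group number itself
    if 13 <= group <= 18:
        return group % 10                     # groups 13-18: the units digit
    raise ValueError("group must be between 1 and 18")

# Examples: phosphorus (group 15) has 5 valence electrons; helium has 2.
assert valence_electrons_from_group(15) == 5
assert valence_electrons_from_group(18, "He") == 2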
Valence shell
The valence shell is the set of orbitals which are energetically accessible for accepting electrons to form chemical bonds.
For main-group elements, the valence shell consists of the ns and np orbitals in the outermost electron shell. For transition metals the orbitals of the incomplete (n−1)d subshell are included, and for lanthanides and actinides incomplete (n−2)f and (n−1)d subshells. The orbitals involved can be in an inner electron shell and do not all correspond to the same electron shell or principal quantum number n in a given element, but they are all at similar energies.
As a general rule, a main-group element (except hydrogen or helium) tends to react to form a s2p6 electron configuration. This tendency is called the octet rule, because each bonded atom has 8 valence electrons including shared electrons. Similarly, a transition metal tends to react to form a d10s2p6 electron configuration. This tendency is called the 18-electron rule, because each bonded atom has 18 valence electrons including shared electrons.
The heavy group 2 elements calcium, strontium, and barium can use the (n−1)d subshell as well, giving them some similarities to transition metals.
Chemical reactions
The number of valence electrons in an atom governs its bonding behavior. Therefore, elements whose atoms have the same number of valence electrons are often grouped together in the periodic table of the elements, especially if they also have the same types of valence orbitals.
The most reactive kind of metallic element is an alkali metal of group 1 (e.g., sodium or potassium); this is because such an atom has only a single valence electron. During the formation of an ionic bond, which provides the necessary ionization energy, this one valence electron is easily lost to form a positive ion (cation) with a closed shell (e.g., Na+ or K+). An alkaline earth metal of group 2 (e.g., magnesium) is somewhat less reactive, because each atom must lose two valence electrons to form a positive ion with a closed shell (e.g., Mg2+).
Within each group (each periodic table column) of metals, reactivity increases with each lower row of the table (from a light element to a heavier element), because a heavier element has more electron shells than a lighter element; a heavier element's valence electrons exist at higher principal quantum numbers (they are farther away from the nucleus of the atom, and are thus at higher potential energies, which means they are less tightly bound).
A nonmetal atom tends to attract additional valence electrons to attain a full valence shell; this can be achieved in one of two ways: An atom can either share electrons with a neighboring atom (a covalent bond), or it can remove electrons from another atom (an ionic bond). The most reactive kind of nonmetal element is a halogen (e.g., fluorine (F) or chlorine (Cl)). Such an atom has the following electron configuration: s2p5; this requires only one additional valence electron to form a closed shell. To form an ionic bond, a halogen atom can remove an electron from another atom in order to form an anion (e.g., F−, Cl−, etc.). To form a covalent bond, one electron from the halogen and one electron from another atom form a shared pair (e.g., in the molecule H–F, the line represents a shared pair of valence electrons, one from H and one from F).
Within each group of nonmetals, reactivity decreases with each lower row of the table (from a light element to a heavy element) in the periodic table, because the valence electrons are at progressively higher energies and thus progressively less tightly bound. In fact, oxygen (the lightest element in group 16) is the most reactive nonmetal after fluorine, even though it is not a halogen, because the valence shells of the heavier halogens are at higher principal quantum numbers.
In these simple cases where the octet rule is obeyed, the valence of an atom equals the number of electrons gained, lost, or shared in order to form the stable octet. However, there are also many molecules that are exceptions, and for which the valence is less clearly defined.
Electrical conductivity
Valence electrons are also responsible for the bonding in the pure chemical elements, and whether their electrical conductivity is characteristic of metals, semiconductors, or insulators.
Metallic elements generally have high electrical conductivity when in the solid state. In each row of the periodic table, the metals occur to the left of the nonmetals, and thus a metal has fewer possible valence electrons than a nonmetal. However, a valence electron of a metal atom has a small ionization energy, and in the solid-state this valence electron is relatively free to leave one atom in order to associate with another nearby. This situation characterises metallic bonding. Such a "free" electron can be moved under the influence of an electric field, and its motion constitutes an electric current; it is responsible for the electrical conductivity of the metal. Copper, aluminium, silver, and gold are examples of good conductors.
A nonmetallic element has low electrical conductivity; it acts as an insulator. Such an element is found toward the right of the periodic table, and it has a valence shell that is at least half full (the exception is boron). Its ionization energy is large; an electron cannot leave an atom easily when an electric field is applied, and thus such an element can conduct only very small electric currents. Examples of solid elemental insulators are diamond (an allotrope of carbon) and sulfur. These form covalently bonded structures, either with covalent bonds extending across the whole structure (as in diamond) or with individual covalent molecules weakly attracted to each other by intermolecular forces (as in sulfur). (The noble gases remain as single atoms, but those also experience intermolecular forces of attraction, that become stronger as the group is descended: helium boils at −269 °C, while radon boils at −61.7 °C.)
A solid compound containing metals can also be an insulator if the valence electrons of the metal atoms are used to form ionic bonds. For example, although elemental sodium is a metal, solid sodium chloride is an insulator, because the valence electron of sodium is transferred to chlorine to form an ionic bond, and thus that electron cannot be moved easily.
A semiconductor has an electrical conductivity that is intermediate between that of a metal and that of a nonmetal; a semiconductor also differs from a metal in that a semiconductor's conductivity increases with temperature. The typical elemental semiconductors are silicon and germanium, each atom of which has four valence electrons. The properties of semiconductors are best explained using band theory, as a consequence of a small energy gap between a valence band (which contains the valence electrons at absolute zero) and a conduction band (to which valence electrons are excited by thermal energy).
References
External links
Francis, Eden. Valence Electrons.
Chemical bonding
Electron states | Valence electron | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,651 | [
"Electron",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Electron states"
] |
568,726 | https://en.wikipedia.org/wiki/Giant%20star | A giant star has a substantially larger radius and luminosity than a main-sequence (or dwarf) star of the same surface temperature. They lie above the main sequence (luminosity class V in the Yerkes spectral classification) on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III. The terms giant and dwarf were coined for stars of quite different luminosity despite similar temperature or spectral type (namely K and M) by Ejnar Hertzsprung in 1905 or 1906.
Giant stars have radii up to a few hundred times the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as supergiants and hypergiants.
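The comparison of radius, surface temperature, and luminosity follows from the Stefan–Boltzmann law for a spherical blackbody, a standard relation not quoted in the text above; here L is the luminosity, R the radius, T_eff the effective (surface) temperature, and σ the Stefan–Boltzmann constant:

L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4}

At a fixed surface temperature the luminosity scales as R², so a giant with ten times the Sun's radius and the Sun's effective temperature would be roughly one hundred times as luminous as the Sun.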
A hot, luminous main-sequence star may also be referred to as a giant, but any main-sequence star is properly called a dwarf, regardless of how large and luminous it is.
Formation
A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, leaves the main sequence. The behaviour of a post-main-sequence star depends largely on its mass.
Intermediate-mass stars
For a star with a mass above about 0.25 solar masses, once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core. The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, and the star becomes a subgiant. The inert helium core continues to grow and increase in temperature as it accretes helium from the shell, but in stars up to about it does not become hot enough to start helium burning (higher-mass stars are supergiants and evolve differently). Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit, rapidly collapses, and may become degenerate. This causes the outer layers to expand even further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection also increases the transport of energy to the surface, the luminosity increases dramatically, and the star moves onto the red-giant branch where it will stably burn hydrogen in a shell for a substantial fraction of its entire life (roughly 10% for a Sun-like star). The core continues to gain mass, contract, and increase in temperature, whereas there is some mass loss in the outer layers.
If the star's mass, when on the main sequence, was below approximately , it will never reach the central temperatures necessary to fuse helium. It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. According to stellar evolution theory, no star of such low mass can have evolved to that stage within the age of the Universe.
In stars above about the core temperature eventually reaches 10⁸ K and helium will begin to fuse to carbon and oxygen in the core by the triple-alpha process. When the core is degenerate, helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective. The energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, and the star moves from the red-giant branch to the horizontal branch.
When the core helium is exhausted, a star with up to about has a carbon–oxygen core that becomes degenerate and starts helium burning in a shell. As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, and causes a dramatic increase in size and luminosity. This is the asymptotic giant branch (AGB), analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming increasingly unstable until they exhaust their fuel, go through a planetary nebula phase, and then become a carbon–oxygen white dwarf.
High-mass stars
High-mass main-sequence stars are already very luminous and they move horizontally across the HR diagram when they leave the main sequence, briefly becoming blue giants before they expand further into blue supergiants. They start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have comparable luminosities to bright AGB stars, although they have much higher masses, but will further increase in luminosity as they burn heavier elements and eventually become a supernova.
Stars in a narrow mass range between the intermediate- and high-mass regimes have somewhat intermediate properties and have been called super-AGB stars. They largely follow the tracks of lighter stars through the RGB, HB, and AGB phases, but are massive enough to initiate core carbon burning and even some neon burning. They form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf.
O-class main-sequence stars are already highly luminous. The giant phase for such stars is a brief phase of slightly increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the Sun, brighter than many supergiants. Classification is complex and difficult, with small differences between luminosity classes and a continuous range of intermediate forms. The most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and a high luminosity which produces a powerful stellar wind and causes the star's atmosphere to expand.
Low-mass stars
A star of sufficiently low initial mass will not become a giant star at all. For most of their lifetimes, such stars have their interiors thoroughly mixed by convection and so they can continue fusing hydrogen for far longer than the current age of the Universe. They steadily become hotter and more luminous throughout this time. Eventually they do develop a radiative core, subsequently exhausting hydrogen in the core and burning hydrogen in a shell surrounding the core. (Stars near the upper end of this low-mass range may expand at this point, but will never become very large.) Shortly thereafter, the star's supply of hydrogen will be completely exhausted and it is expected to become a helium white dwarf, although the universe is too young for any such star to exist yet, so no star with that history has ever been observed.
Subclasses
There are a wide range of giant-class stars and several subdivisions are commonly used to identify smaller groups of stars.
Subgiants
Subgiants are an entirely separate spectroscopic luminosity class (IV) from giants, but share many features with them. Although some subgiants are simply over-luminous main-sequence stars due to chemical variation or age, others represent a distinct evolutionary stage on the way to becoming true giants.
Examples:
Gamma Geminorum (γ Gem), an A-type subgiant;
Eta Bootis (η Boo), a G-type subgiant.
Delta Scorpii (δ Sco), a B-type subgiant.
Bright giants
Bright giants are stars of luminosity class II in the Yerkes spectral classification. These are stars which straddle the boundary between ordinary giants and supergiants, based on the appearance of their spectra. The bright giant luminosity class was first defined in 1943.
Well known stars which are classified as bright giants include:
Canopus
Albireo
Epsilon Canis Majoris
Theta Scorpii
Beta Draconis
Alpha Herculis
Gamma Canis Majoris
Red giants
Within any giant luminosity class, the cooler stars of spectral classes K, M, S, and C (and sometimes some G-type stars) are called red giants. Red giants include stars in a number of distinct evolutionary phases of their lives: a main red-giant branch (RGB); a red horizontal branch or red clump; the asymptotic giant branch (AGB), although AGB stars are often large enough and luminous enough to be classified as supergiants; and sometimes other large cool stars such as immediate post-AGB stars. The RGB stars are by far the most common type of giant star due to their moderate mass, relatively long stable lives, and luminosity. They are the most obvious grouping of stars after the main sequence on most HR diagrams, although white dwarfs are more numerous but far less luminous.
Examples:
Pollux, a K-type giant.
Epsilon Ophiuchi, a G-type red giant.
Arcturus (α Boötis), a K-type giant.
R Doradus, an M-type giant.
Mira (ο Ceti), an M-type giant and prototype Mira variable.
Aldebaran, a K-type giant
Yellow giants
Giant stars with intermediate temperatures (spectral class G, F, and at least some A) are called yellow giants. They are far less numerous than red giants, partly because they only form from stars with somewhat higher masses, and partly because they spend less time in that phase of their lives. However, they include a number of important classes of variable stars. High-luminosity yellow stars are generally unstable, leading to the instability strip on the HR diagram where the majority of stars are pulsating variables. The instability strip reaches from the main sequence up to hypergiant luminosities, but at the luminosities of giants there are several classes of pulsating variable stars:
RR Lyrae variables, pulsating horizontal-branch class A (sometimes F) stars with periods of less than a day and amplitudes of a magnitude or less;
W Virginis variables, more-luminous pulsating variables also known as type II Cepheids, with periods of 10–20 days;
Type I Cepheid variables, more luminous still and mostly supergiants, with even longer periods;
Delta Scuti variables, a class that includes subgiant and main-sequence stars.
Yellow giants may be moderate-mass stars evolving for the first time towards the red-giant branch, or they may be more evolved stars on the horizontal branch. Evolution towards the red-giant branch for the first time is very rapid, whereas stars can spend much longer on the horizontal branch. Horizontal-branch stars, with more heavy elements and lower mass, are more unstable.
Examples:
Sigma Octantis (σ Octantis), an F-type giant and a Delta Scuti variable;
Capella Aa (α Aurigae Aa), a G-type giant.
Beta Corvi (β Corvi), a G-type bright giant.
Blue (and sometimes white) giants
The hottest giants, of spectral classes O, B, and sometimes early A, are called blue giants. Sometimes A- and late-B-type stars may be referred to as white giants.
The blue giants are a very heterogeneous grouping, ranging from high-mass, high-luminosity stars just leaving the main sequence to low-mass, horizontal-branch stars. Higher-mass stars leave the main sequence to become blue giants, then bright blue giants, and then blue supergiants, before expanding into red supergiants, although at the very highest masses the giant stage is so brief and narrow that it can hardly be distinguished from a blue supergiant.
Lower-mass, core-helium-burning stars evolve from red giants along the horizontal branch and then back again to the asymptotic giant branch, and depending on mass and metallicity they can become blue giants. It is thought that some post-AGB stars experiencing a late thermal pulse can become peculiar blue giants.
Examples:
Meissa (λ Orionis A), an O-type giant.
Alcyone (η Tauri), a B-type giant, the brightest star in the Pleiades;
Thuban (α Draconis), an A-type giant.
See also
List of nearest giant stars
References
External links
Interactive giant-star comparison.
Star types | Giant star | [
"Astronomy"
] | 2,583 | [
"Star types",
"Astronomical classification systems"
] |
568,755 | https://en.wikipedia.org/wiki/Functionalism%20%28architecture%29 | In architecture, functionalism is the principle that buildings should be designed based solely on their purpose and function. An international functionalist architecture movement emerged in the wake of World War I, as part of the wave of Modernism. Its ideas were largely inspired by a desire to build a new and better world for the people, as broadly and strongly expressed by the social and political movements of Europe after the extremely devastating world war. In this respect, functionalist architecture is often linked with the ideas of socialism and modern humanism.
A slight addition to this new wave of architecture was the idea that not only should buildings and houses be designed around the purpose of functionality, but architecture should also be used as a means to physically create a better world and a better life for people in the broadest sense. This new functionalist architecture had the strongest impact in Czechoslovakia, Germany, Poland, the USSR and the Netherlands, and from the 1930s also in Scandinavia and Finland.
This principle is a matter of confusion and controversy within the profession, particularly in regard to modern architecture, as it is less self-evident than it first appears.
History of functionalism
The theoretical articulation of functionalism in buildings can be traced back to the Vitruvian triad, where utilitas (variously translated as 'commodity', 'convenience', or 'utility') stands alongside firmitas (firmness) and venustas (beauty) as one of three classic goals of architecture. Functionalist views were typical of some Gothic Revival architects. In particular, Augustus Welby Pugin wrote that "there should be no features about a building which are not necessary for convenience, construction, or propriety" and "all ornament should consist of enrichment of the essential construction of the building".
In 1896, Chicago architect Louis Sullivan coined the phrase Form follows function. However, this aphorism does not relate to a contemporary understanding of the term 'function' as utility or the satisfaction of user needs; it was instead based in metaphysics, as the expression of organic essence and could be paraphrased as meaning 'destiny'.
In the mid-1930s, functionalism began to be discussed as an aesthetic approach rather than a matter of design integrity (use). The idea of functionalism was conflated with a lack of ornamentation, which is a different matter. It became a pejorative term associated with the baldest and most brutal ways to cover space, like cheap commercial buildings and sheds, then finally used, for example in academic criticism of Buckminster Fuller's geodesic domes, simply as a synonym for 'gauche'.
For 70 years the influential American architect Philip Johnson held that the profession has no functional responsibility whatsoever, and this is one of many views today. The position of postmodern architect Peter Eisenman rests on a user-hostile theoretical basis and is even more extreme: "I don't do function."
Modernism
Popular notions of modern architecture are heavily influenced by the work of the Franco-Swiss architect Le Corbusier and the German architect Mies van der Rohe. Both were functionalists at least to the extent that their buildings were radical simplifications of previous styles. In 1923, Mies van der Rohe was working in Weimar Germany, and had begun his career of producing radically simplified, lovingly detailed structures that achieved Sullivan's goal of inherent architectural beauty. Le Corbusier famously said "a house is a machine for living in"; his 1923 book Vers une architecture was, and still is, very influential, and his early built work such as the Villa Savoye in Poissy, France, is thought of as prototypically functionalist.
In Europe
Czechoslovakia
The former Czechoslovakia was an early adopter of the functionalist style, with notable examples such as Villa Tugendhat in Brno, designed by Mies van der Rohe in 1928, Villa Müller in Prague, designed by Adolf Loos in 1930, and the majority of the city of Zlín, developed by the Bata shoe company as a factory town in the 1920s and designed by Le Corbusier's student František Lydie Gahura.
Numerous villas, apartment buildings and interiors, factories, office blocks and department stores can be found in the functionalist style throughout the country, which industrialised rapidly in the early 20th century while embracing the Bauhaus-style architecture that was emerging concurrently in Germany. Large urban extensions to Brno in particular contain numerous apartment buildings in the functionalist style, while the domestic interiors of Adolf Loos in Plzeň are also notable for their application of functionalist principles.
Nordic "funkis"
In Scandinavia and Finland, the international movement and ideas of modernist architecture became widely known among architects at the 1930 Stockholm Exhibition, under the guidance of director and Swedish architect Gunnar Asplund. Enthusiastic architects collected their ideas and inspirations in the manifesto acceptera, and in the years thereafter a functionalist architecture emerged throughout Scandinavia. The genre involves some peculiar features unique to Scandinavia and is often referred to as "funkis", to distinguish it from functionalism in general. Some of the common features are flat roofing, stuccoed walls, architectural glazing and well-lit rooms, an industrial expression and nautical-inspired details, including round windows. The global stock market crash and economic meltdown of 1929 instigated the need to use affordable materials, such as brick and concrete, and to build quickly and efficiently. These needs became another signature of the Nordic version of functionalist architecture, in particular in buildings from the 1930s, and carried over into modernist architecture when industrial serial production became much more prevalent after World War II.
As with most architectural styles, Nordic funkis was international in its scope, and several architects designed Nordic funkis buildings throughout the region. Some of the most active architects working internationally in this style include Edvard Heiberg, Arne Jacobsen and Alvar Aalto. Nordic funkis features prominently in Scandinavian urban architecture, as the need for urban housing and new institutions for the growing welfare states exploded after World War II. Funkis had its heyday in the 1930s and 1940s, but functionalist architecture continued to be built long into the 1960s. These later structures, however, tend to be categorized as modernism in a Nordic context.
Denmark
Vilhelm Lauritzen, Arne Jacobsen and C.F. Møller were among the most active and influential Danish architects working with the new functionalist ideas, and Arne Jacobsen, Poul Kjærholm, Kaare Klint, and others extended the new approach to design in general, most notably furniture, which evolved to become Danish modern. Some Danish designers and artists who did not work as architects are sometimes also included in the Danish functionalist movement, such as Finn Juhl, Louis Poulsen and Poul Henningsen. In Denmark, bricks were largely preferred over reinforced concrete as a construction material, and this included funkis buildings. Apart from institutions and apartment blocks, more than 100,000 single-family funkis houses were built in the years 1925–1945. However, the truly dedicated funkis design was often approached with caution. Many residential buildings only included some signature funkis elements such as round windows, corner windows or architectural glazing to signal modernity while not provoking conservative traditionalists too much. This restrained approach to funkis design created the Danish version of the bungalow.
Fine examples of Danish functionalist architecture are the now-listed 1939 terminal at Kastrup Airport by Vilhelm Lauritzen, Aarhus University (by C. F. Møller et al.) and Aarhus City Hall (by Arne Jacobsen et al.), all including furniture and lamps specially designed for these buildings in the functionalist spirit. The largest functionalist complex in the Nordic countries is the 30,000 m² residential compound of Hostrups Have in Copenhagen.
Finland
Some of the most prolific and notable architects in Finland working in the funkis style include Alvar Aalto and Erik Bryggman, who were both engaged from the very start in the 1930s. The Turku region pioneered this new style, and the journal Arkkitehti mediated and discussed functionalism in a Finnish context. Many of the first buildings in the funkis style were industrial structures, institutions and offices, but the style spread to other kinds of structures such as residential buildings, individual housing and churches. The functionalist design also spread to interior design and furniture, as exemplified by the iconic Paimio Sanatorium, designed in 1929 and built in 1933.
Aalto introduced standardised, precast concrete elements as early as the late 1920s, when he designed residential buildings in Turku. This technique became a cornerstone of later developments in modernist architecture after World War II, especially in the 1950s and 1960s. He also introduced serially produced wooden housing.
Poland
Interbellum avant-garde Polish architects of the years 1918–1939 made a notable impact on the legacy of European modern architecture and functionalism. Many Polish architects were fascinated by Le Corbusier, such as his Polish students and coworkers Jerzy Sołtan and Aleksander Kujawski (both co-authors of the Unité d'habitation in Marseille) and his coworkers Helena Syrkus (Le Corbusier's companion on board the S.S. Patris, an ocean liner journeying from Marseille to Athens in 1933 during CIAM IV), Roman Piotrowski and Maciej Nowicki. Le Corbusier said of the Poles (When the Cathedrals Were White, Paris 1937): "Academism has sent down roots everywhere. Nevertheless, the Dutch are relatively free of bias. The Czechs believe in 'modern' and the Polish also." Other Polish architects, such as Stanisław Brukalski, met Gerrit Rietveld and were inspired by him and his neoplasticism. Only a few years after the construction of the Rietveld Schröder House, Brukalski built his own house in Warsaw in 1929, supposedly inspired by the Schröder House he had visited. This Polish example of the modern house was awarded a bronze medal at the Paris world expo in 1937. Just before the Second World War it was fashionable in Poland to build large districts of luxury houses in neighbourhoods full of greenery for wealthy Poles, for example the Saska Kępa district in Warsaw or the Kamienna Góra district in the seaport of Gdynia. The most characteristic features of Polish functionalist architecture of 1918–1939 were portholes, roof terraces and marble interiors.
Probably the most outstanding work of Polish functionalist architecture is the entire city of Gdynia, a modern Polish seaport established in 1926.
Russia
In Russia and the former Soviet Union, functionalism was known as Constructivist architecture, and it was the dominant style for major building projects between 1918 and 1932. The 1932 competition for the Palace of the Soviets and the winning entry by Boris Iofan marked the start of the eclectic historicism of Stalinist architecture and the end of constructivist domination in the Soviet Union.
Examples
Notable representations of functionalist architecture include:
Aarhus University, Denmark
ADGB Trade Union School, Germany
Administratívna budova spojov, Bratislava, Slovakia
Obchodný a obytný dom Luxor, Bratislava, Slovakia
Villa Tugendhat, Brno, Czech Republic
Kavárna Era, Brno, Czech Republic
Kolonie Nový dům, Brno, Czech Republic
Veletržní palác, Prague, Czech Republic
Villa Müller, Prague, Czech Republic
Zlín city, Czech Republic
Tomas Bata Memorial, Zlín, Czech Republic
Booth House, Bridge Street, Sydney, Australia
Bullfighting Arena, Póvoa de Varzim, Portugal
Glass Palace, Helsinki, Finland
Hotel Hollywood, Sydney, Australia
Knarraros lighthouse, Stokkseyri, Iceland
Pärnu Rannahotell, Estonia
Pärnu Rannakohvik, Estonia
Södra Ängby, Stockholm, Sweden
Stanislas Brukalski's villa, Warsaw, Poland
Modernist Center of Gdynia, Poland
Villa Savoye, Poissy, France
Södra Ängby, Sweden
The residential area of Södra Ängby in western Stockholm, Sweden, blended a functionalist or international style with garden city ideals. Encompassing more than 500 buildings, it remains the largest coherent functionalist villa area in Sweden and possibly the world, still well preserved more than a half-century after its construction in 1933–40 and protected as a national cultural heritage.
Zlín, Czech Republic
Zlín is a city in the Czech Republic which was completely reconstructed in the 1930s on the principles of functionalism. At that time the city was the headquarters of the Bata Shoes company, and Tomáš Baťa initiated a comprehensive reconstruction of the city inspired by functionalism and the Garden city movement.
Zlín's distinctive architecture was guided by principles that were strictly observed during its whole inter-war development. Its central theme was the derivation of all architectural elements from the factory buildings. The central position of the industrial production in the life of all Zlín inhabitants was to be highlighted. Hence the same building materials (red bricks, glass, reinforced concrete) were used for the construction of all public (and most private) edifices. The common structural element of Zlín architecture is a square bay of 20x20 feet (6.15x6.15 m). Although modified by several variations, this high modernist style leads to a high degree of uniformity of all buildings. It highlights the central and unique idea of an industrial garden city at the same time. Architectural and urban functionalism was to serve the demands of a modern city. The simplicity of its buildings which also translated into its functional adaptability was to prescribe (and also react to) the needs of everyday life.
The urban plan of Zlín was the creation of František Lydie Gahura, a student at Le Corbusier's atelier in Paris. Architectural highlights of the city are e.g. the Villa of Tomáš Baťa, Baťa's Hospital, Tomas Bata Memorial, The Grand Cinema or Baťa's Skyscraper.
Khrushchyovka
Khrushchyovka is an unofficial name for a type of low-cost, concrete-paneled or brick three- to five-storied apartment building developed in the Soviet Union during the early 1960s, when its namesake Nikita Khrushchev directed the Soviet government. The apartment buildings also went by the name of "Khruschoba" (Khrushchev-slum).
Functionalism in landscape architecture
The development of functionalism in landscape architecture paralleled its development in building architecture. At the residential scale, designers like Christopher Tunnard, James Rose, and Garrett Eckbo advocated a design philosophy based on the creation of spaces for outdoor living and the integration of house and garden. At a larger scale, the German landscape architect and planner Leberecht Migge advocated the use of edible gardens in social housing projects as a way to counteract hunger and increase self-sufficiency of families. At a still larger scale, the Congrès International d'Architecture Moderne advocated for urban design strategies based on human proportions and in support of four functions of human settlement: housing, work, play, and transport.
See also
Modernist architecture; streamline moderne
Enrique Yáñez
Literature
Vers une Architecture and Villa Savoye: A Comparison of Treatise and Building – A multipart essay explaining the basics of Le Corbusier's theory and contrasting them with his built work.
Behne, Adolf (1923). The Modern Functional Building. Michael Robinson, trans. Santa Monica: Getty Research Institute, 1996.
Forty, Adrian (2000). "Function". Words and Buildings, A Vocabulary of Modern Architecture. Thames & Hudson, p. 174–195.
Michl, Jan (1995). Form follows WHAT? The modernist notion of function as a carte blanche.
References
External links
Fostinum: Czech and Slovak Functionalist Architecture
20th-century architectural styles
Architectural theory
Functionalism
Functionalism
Functionalism | Functionalism (architecture) | [
"Engineering"
] | 3,303 | [
"Architectural theory",
"Architecture"
] |
568,824 | https://en.wikipedia.org/wiki/Desert%20Research%20Institute | Desert Research Institute (DRI) is a nonprofit research campus of the Nevada System of Higher Education (NSHE), the organization that oversees all publicly supported higher education in the U.S. state of Nevada, and a sister property of the University of Nevada, Reno (UNR). At DRI, approximately 500 research faculty and support staff engage in more than $50 million in environmental research each year. DRI's environmental research programs are divided into three core divisions (Atmospheric Sciences, Earth and Ecosystem Sciences, and Hydrologic Sciences) and two interdisciplinary centers (Center for Arid Lands Environmental Management and the Center for Watersheds and Environmental Sustainability). Established in 1988 and sponsored by AT&T, the institute's Nevada Medal awards "outstanding achievement in science and engineering".
Programs
Cloud Seeding Program
DRI weather modification research produced the Nevada State Cloud Seeding Program in the 1960s. This initiative, funded by the U.S. Bureau of Reclamation and the National Oceanic and Atmospheric Administration, seeks to augment snowfall in mountainous regions of Nevada to increase snowpack and water supply. DRI researchers use ground stations and aircraft to release microscopic silver iodide particles into winter clouds, stimulating the formation of ice crystals that develop into snow.
Research indicates that cloud seeding leads to precipitation rate increases of 0.1–1.5 millimeters per hour.
Atmospheric and Dispersion Modeling Program
For over a decade the Atmospheric and Dispersion Modeling Program team has been performing work focused on observations and modeling of atmospheric dispersion processes over complex terrain and coastal areas. In particular, the team is applying, developing, and evaluating mesoscale meteorological models as well as regulatory and advanced atmospheric dispersion models such as ISC3ST, AERMOD, WYNDVALLEY, ASPEN, and CALPUFF. They have developed a Lagrangian Random Particle Dispersion Model that has been applied to complex coastal and inland environments.
Several recent projects led to the development of a real-time mesoscale forecasting system using the MM5 model coupled with a Lagrangian random particle dispersion model, and to the implementation of data assimilation schemes.
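As a rough illustration of the Lagrangian random-particle idea mentioned above, each simulated particle is carried by the mean wind and given a random turbulent displacement at every time step; the particle density then approximates the pollutant concentration field. The sketch below is a generic textbook-style example, not DRI's model: the wind, turbulence strength, time step and particle count are all assumed illustrative values.

```python
import random

# Assumed, illustrative parameters (not taken from DRI's model).
U_WIND = (3.0, 1.0)      # mean horizontal wind components, m/s (east, north)
SIGMA_TURB = 0.5         # standard deviation of turbulent velocity, m/s
DT = 60.0                # time step, s
N_PARTICLES = 1000
N_STEPS = 120            # two hours of simulated transport

def simulate_plume(source=(0.0, 0.0)):
    """Advect particles with the mean wind plus a Gaussian random walk."""
    particles = [list(source) for _ in range(N_PARTICLES)]
    for _ in range(N_STEPS):
        for p in particles:
            p[0] += (U_WIND[0] + random.gauss(0.0, SIGMA_TURB)) * DT
            p[1] += (U_WIND[1] + random.gauss(0.0, SIGMA_TURB)) * DT
    return particles

if __name__ == "__main__":
    plume = simulate_plume()
    xs = [p[0] for p in plume]
    ys = [p[1] for p in plume]
    print(f"Mean downwind position: ({sum(xs)/len(xs):.0f} m, {sum(ys)/len(ys):.0f} m)")
    # Counting particles per grid cell would give an estimate of the
    # ground-level concentration pattern downwind of the source.
```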
History
A two-page bill signed into law by the Nevada Governor Grant Sawyer on March 23, 1959, authorized establishment of the Desert Research Institute at the University of Nevada, Reno.
UNR hired Dr. Wendell Mordy as the Founding Director (1960–1969) of the University's Desert Research Institute, which initially was an office at the top of the historic Morrill Hall building on UNR's campus. Early on Mordy also initiated the development of the UNR's Fleishmann Atmospherium Planetarium.
Microplastics were found for the first time in Lake Tahoe in 2019 by the Desert Research Institute. They plan on studying the pollution to determine if it is from local sources or if particles from discarded plastic products have been transported long distances through the atmosphere by wind, rain and falling snow.
Campuses
Main research campuses
Dandini Research Park – Reno, Nevada.
Southern Nevada Science Park – Paradise, Nevada.
Subsidiary campuses
Boulder City Research Facility – Boulder City, Nevada.
Storm Peak Laboratory – Steamboat Springs, Colorado.
Stead Research Facility - Reno, Nevada
See also
Atmospheric dispersion modeling
List of atmospheric dispersion models
Notes
References
1959 establishments in Nevada
1988 establishments in Nevada
Atmospheric dispersion modeling
Buildings and structures in Paradise, Nevada
Education in Reno, Nevada
Educational institutions established in 1959
Educational institutions established in 1988
Meteorological research institutes
Nevada System of Higher Education
Nuclear research institutes
Universities and colleges in Clark County, Nevada
Public universities and colleges in Nevada
Environmental research institutes | Desert Research Institute | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 734 | [
"Nuclear research institutes",
"Nuclear organizations",
"Environmental research institutes",
"Atmospheric dispersion modeling",
"Environmental engineering",
"Environmental modelling",
"Environmental research"
] |
568,901 | https://en.wikipedia.org/wiki/Suppository | A suppository is a dosage form used to deliver medications by insertion into a body orifice (any opening in the body), where it dissolves or melts to exert local or systemic effects. There are three types of suppositories, each inserted into a different orifice: rectal suppositories into the rectum, vaginal suppositories into the vagina, and urethral suppositories into the urethra of a male.
Suppositories are ideal for infants, elderly individuals and post-operative patients who are unable to swallow oral medications, and for individuals experiencing severe nausea and/or vomiting.
Composition
Several different ingredients can be used to form the base of a suppository: cocoa butter or a similar substitute, polyethylene glycol, hydrogels, and glycerinated gelatin. The type of material used depends on the type of suppository, the type of drug, and the conditions in which the suppository will be stored.
Rectal suppositories
In 1991, a study on suppository insertion in The Lancet found that the "torpedo" shape helps the device to travel internally, increasing its efficacy. The findings of this single study have been challenged as there is insufficient evidence on which to base clinical practice. Rectal suppositories are intended for localized or systemic action to relieve pain, constipation, irritation, inflammation, nausea and vomiting, fever, migraines, allergies, and sedation. If they cause inflammation, chronic use of suppositories may cause rectal stricture, but overall this is a safe method of drug delivery.
Urethral suppositories
Alprostadil pellets are urethral suppositories used for the treatment of severe erectile dysfunction (impotence). They are marketed under the name Muse in the United States. Its use has diminished since the development of oral impotence medications.
See also
Artesunate suppositories
Enema
Pessary
Notes
References
Doyle, D., "Per Rectum: A History of Enemata", Journal of the Royal College of Physicians of Edinburgh, Vol.35, No.4, (December 2005), pp. 367–370.
Payer, L., "Borderline Cases: How Medical Practice Reflects National Culture", The Sciences, Vol.30, No.4, (July–August 1990), pp. 38–42.
Anus
Constipation
Dosage forms
Drug delivery devices
Drugs acting on the gastrointestinal system and metabolism
Laxatives
Rectum
Routes of administration | Suppository | [
"Chemistry"
] | 564 | [
"Pharmacology",
"Drug delivery devices",
"Routes of administration"
] |
568,911 | https://en.wikipedia.org/wiki/Sexual%20penetration | Sexual penetration is the insertion of a body part or other object into a body orifice, such as the mouth, vagina or anus, as part of human sexual activity or sexual behavior in non-human animals.
The term is most commonly used in statute law in the context of proscribing certain sexual activities. Terms such as "sexual intercourse" or "carnal knowledge" are more commonly found in older statutes, while many modern criminal statutes use the term "sexual penetration" because it is a broad term encompassing (unless otherwise qualified) any form of penetrative sexual activity, including digital (i.e., the fingers) or with an object, and may involve only the most minimal penetration. Some jurisdictions refer to some forms of penetration as "acts of indecency", or other terminology.
Definitions
When a penis is inserted into a vagina, it is generally called vaginal sex, vaginal intercourse, or penis-in-vagina (PIV) sex. When a penis penetrates another person's anus, it is called anal sex or anal intercourse.
Penetrative oral sex may involve penetration of the mouth by a penis (fellatio) or the use of the tongue to penetrate a vagina or vulva (cunnilingus). The tongue may also penetrate the anus during anilingus. If one or more fingers are used to penetrate an orifice, it is called fingering or digital penetration. The insertion of an object, such as a dildo, vibrator or other sex toy, into a person's genital area or anus may also be considered sexual penetration. Penetrative sex is referred to as coitus or connotative sex.
Unlawful
Penetrative sex crimes are generally considered more serious than non-penetrative sex crimes, and sexual penetration of a child even more so. A child below the statutory age of consent cannot consent to acts involving sexual penetration. In laws, the term sexual penetration is commonly used in relation to sex with children. Unlawful sexual penetration is generally an offense irrespective of how deep the penetration was and irrespective of whether ejaculation of semen took place.
Laws may distinguish particular forms of sexual penetration as part of the offense. For example, the law in the U.S. state of Oregon provides:
In the United Kingdom, sexually penetrating a relative is an offense.
Various forms of penetration have at times been considered obscene and been prohibited. Works containing such penetrations may be considered pornography.
See also
Penile-vaginal intercourse
Sex and the law
References
Sexual acts
Sexual intercourse | Sexual penetration | [
"Biology"
] | 535 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
568,942 | https://en.wikipedia.org/wiki/International%20Regulations%20for%20Preventing%20Collisions%20at%20Sea | The International Regulations for Preventing Collisions at Sea 1972, also known as Collision Regulations (COLREGs), are published by the International Maritime Organization (IMO) and set out, among other things, the "rules of the road" or navigation rules to be followed by ships and other vessels at sea to prevent collisions between two or more vessels. COLREGs can also refer to the specific political line that divides inland waterways, which are subject to their own navigation rules, and coastal waterways which are subject to international navigation rules. They are derived from a multilateral treaty called the Convention on the International Regulations for Preventing Collisions at Sea, also known as Collision Regulations of 1960.
Although rules for navigating vessels inland may differ, the international rules specify that they should be as closely in line with the international rules as possible. In most of continental Europe, the Code Européen des Voies de la Navigation Intérieure (CEVNI, or the European Code for Navigation on Inland Waters) apply. In the United States, the rules for vessels navigating inland are published alongside the international rules.
Organization of the regulatory documents
As of 2022, there are 41 rules and four annexes in the COLREGs in force.
PART A - GENERAL
Rule 1 - Application. This rule states that the COLREGs should be complied with by all vessels on the "high seas".
Rule 2 – Responsibility. This rule allows Master mariners and other persons in charge of vessels to depart from the rules to "avoid immediate danger", provided there are special circumstances for doing so. The rule also effectively requires all navigators to exercise good seamanship in applying the rules.
Rule 3 – General Definitions. This rule sets out key definitions that apply to terms in the rest of the rules, including definitions for 'power-driven vessels', 'sailing vessels' and other terms such as 'not under command' and 'vessel restricted in her ability to manoeuvre'.
PART B - Section I Conduct of Vessels in any Condition of Visibility
Rule 4 – Application. This rule states that the rules in this section apply to all vessels in any condition of visibility.
Rule 5 – Look-out. This rule concerns the keeping of a proper look-out at sea. It involves keeping the look-out by all available means, including audible means, visual means and the use of marine radar.
Rule 6 – Safe Speed. This rule sets out a requirement for all vessels to proceed at a safe speed with reference to the prevailing circumstances and conditions. Relevant circumstances include, for example, the state of visibility, the presence of other ships (traffic), as well as the draught and manoeuvrability of the mariner's own ship.
Rule 7 – Risk of Collision. This rule requires all vessels to use all available means to determine if a risk of collision exists. These include the proper use of marine radar and the taking of bearings by ship's compass to determine whether there is a steady bearing and a risk of collision.
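A practical test commonly used when applying Rule 7 is the 'constant bearing, decreasing range' (CBDR) check: if successive compass bearings to an approaching vessel stay roughly steady while the range closes, a risk of collision is deemed to exist. The Python sketch below only illustrates that idea; the observation values and the bearing tolerance are assumed for the example and are not part of the regulations.

```python
def risk_of_collision(observations, bearing_tolerance_deg=2.0):
    """observations: list of (bearing_deg, range_nm) pairs taken at intervals.

    Returns True when the bearing is appreciably constant while the range is
    decreasing, the classic CBDR indication that a risk of collision exists.
    (Simplified: ignores bearing wrap-around at 360/000 degrees.)
    """
    bearings = [b for b, _ in observations]
    ranges = [r for _, r in observations]
    steady_bearing = max(bearings) - min(bearings) <= bearing_tolerance_deg
    closing_range = all(later < earlier for earlier, later in zip(ranges, ranges[1:]))
    return steady_bearing and closing_range

# Assumed example: bearings taken every three minutes stay near 045 degrees
# while the range closes from 6.0 to 4.5 nautical miles.
obs = [(45.0, 6.0), (45.5, 5.2), (44.8, 4.5)]
print(risk_of_collision(obs))  # True -> avoiding action required in good time
```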
Rule 8 – Action to Avoid Collision. This rule sets out requirements for vessels to alter course and/or speed so as to pass at a safe distance from other vessels. It requires alterations to be consistent with the concept of good seamanship, as well as sufficient to be readily observed by the other vessel, i.e. a large and bold alteration of course. The rule is designed to operate in conjunction with other rules, including Rules 16 and 17.
Rule 9 – Narrow Channels. This rule concerns those vessels keeping a course within narrow channels and fairways. It requires vessels less than 20 metres in length, fishing vessels and sailing vessels to not impede the passage of larger vessels in the narrow channel. It also gives reference to signals (sound and light) that can be given to allow vessels to overtake one another if following the narrow channel or fairway, as well as a separate signal when approaching a bend. Vessels are also not allowed to anchor unless there are legitimate circumstances for doing so.
Rule 10 – Traffic Separation Schemes. Typically abbreviated to TSS by mariners, these schemes aim to promote the safety of navigation by ensuring ships follow a general direction of travel within defined traffic lanes. The TSS lanes are shown on paper and electronic charts, and by monitoring its position a ship can keep to its lane within the scheme. Additionally, a TSS provides separation zones and inshore traffic zones, to which restrictions apply. Additional restrictions also apply to some vessel types, such as fishing vessels and vessels less than 20 m in length, which must not impede the safe passage of other, larger vessels.
PART B - Section II Conduct of Vessels in Sight of One Another
Rule 11 - Application
Rule 12 – Sailing Vessels. The rule details how two or more sailing vessels should give way to each other when meeting, based on the wind direction. When each sailing vessel has the wind on a different side, the vessel which has the wind on the port side should keep out of the way of the other. When both sailing vessels have the wind on the same side, the vessel which is to windward should keep out of the way of the vessel which is to leeward. Finally, if a vessel with the wind on the port side sees a vessel to windward and cannot determine with certainty whether the other vessel has the wind on her port or starboard side, she should keep out of the way of the other, i.e. take action to make the situation safe regardless of knowing for sure the wind situation of the other vessel.
Rule 13 – Overtaking. This rule governs overtaking situations between different vessels. The primary requirement is that for all overtaking vessels, they must keep clear of the vessels they are overtaking. For sailing vessels Rule 13 also takes precedence over rules 12 and 18 meaning the overtaking sailing vessel must keep clear.
Rule 14 – Head-on Situation. This rule requires power-driven vessels that meet head-on, i.e. bow directly facing bow, to both alter course to starboard so as to pass clear of each other. This is referred to as passing 'port to port', as the port sides separate away from each other as the vessels alter course. The rule effectively assigns equal responsibility to both vessels to prevent collision.
Rule 15 – Crossing Situation. This rule concerns actions for vessels in crossing situations and essentially requires a vessel that has another vessel on their starboard (right hand) side to stay out of the way of the other, becoming the give way vessel under rule 17. The other vessel is required to stand-on under rule 17. Also, if the circumstances of the case admit, the vessel that has the other on their starboard side should avoid crossing ahead of the other vessel.
Rule 16 – Action by Give-way Vessel. This rule requires the give-way vessel to take early and substantial action to keep well clear of the other vessel.
Rule 17 – Action by Stand-on Vessel. Rule 17 requires the stand-on vessel to maintain their course and speed. However, if it appears that the other vessel who is required to give way is not taking action, then they may take action to avoid collision according to certain requirements having been met.
Rule 18 – Responsibilities Between Vessels. Rule 18 effectively establishes an order of priority between all vessels and modes of operation of those vessels. Power-driven vessels operating normally are required to keep clear of all other vessels, including sailing and fishing vessels. However, where vessels are subject to restrictions such as being not under command, constrained by draught or restricted in their ability to manoeuvre, then other vessels, including other power-driven vessels, sailing vessels and fishing vessels, are either required to keep out of the way or not to impede their passage, depending on the requirements.
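The ordering in Rule 18 is often remembered as a rough pecking order of manoeuvrability. The Python sketch below encodes a simplified version of that hierarchy for illustration only; it omits seaplanes and WIG craft and glosses over the distinction between 'keep out of the way' and 'not impede' (for example for vessels constrained by their draught), so it is not a substitute for the rule text.

```python
# Simplified Rule 18 hierarchy: a lower index means other vessels generally
# keep out of that vessel's way. Illustrative only; see caveats above.
RULE_18_ORDER = [
    "vessel not under command",
    "vessel restricted in her ability to manoeuvre",
    "vessel constrained by her draught",
    "vessel engaged in fishing",
    "sailing vessel",
    "power-driven vessel",
]

def gives_way(own_status, other_status):
    """Return True if 'own_status' should keep clear of 'other_status' under
    this simplified ordering (overtaking, head-on and crossing rules aside)."""
    return RULE_18_ORDER.index(own_status) > RULE_18_ORDER.index(other_status)

print(gives_way("power-driven vessel", "sailing vessel"))                  # True
print(gives_way("vessel engaged in fishing", "vessel not under command"))  # True
```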
PART B - Section III Conduct of Vessels in Restricted Visibility
Rule 19 – Conduct of Vessels in Restricted Visibility. This rule governs collision avoidance for vessels not in sight of one another when navigating in or near an area of restricted visibility. Causes include fog, smoke and other phenomena such as heavy precipitation. The rule requires all vessels to proceed at a safe speed adapted to the conditions and to take action to avoid collision in ample time. As far as possible, vessels should avoid an alteration of course to port for a vessel forward of the beam (other than for a vessel being overtaken) and avoid altering course towards a vessel abeam or abaft the beam. For ships that have heard another vessel's sound signal but not observed her on radar, part (e) requires the ship to reduce speed to the minimum at which she can be kept on her course and, if necessary, to take all way off.
PART C - Lights and Shapes
Rule 20 – Application
Rule 21 – Definitions
Rule 22 – Visibility of Lights. The minimum visible distance requirements of navigational lights are detailed under this rule. These vary according to the length of the vessel. For example, for vessels greater than 50 metres in length, the visibility ranges of lights are 6 miles for masthead lights, 3 miles for sidelights, 3 miles for the sternlight, 3 miles for towing lights and 3 miles for an all round light.
Rule 23 – Power-driven Vessels Underway
Rule 24 – Towing and Pushing
Rule 25 – Sailing Vessels Underway and Vessels Under Oars
Rule 26 – Fishing Vessels
Rule 27 – Vessels Not Under Command or Restricted in their Ability to Manoeuvre
Rule 28 – Vessels Constrained by their Draught
Rule 29 – Pilot Vessels
Rule 30 – Anchored Vessels and Vessels Aground
Rule 31 – Seaplanes
PART D - Sound and Light Signals
Rule 32 – Definitions
Rule 33 – Equipment for Sound Signals
Rule 34 – Manoeuvring and Warning Signals
Rule 35 - Sound Signals in Restricted Visibility
Rule 36 - Signals to Attract Attention
Rule 37 - Distress Signals
PART E - Exemptions
Rule 38 - Exemptions
PART F - Verification of Compliance with the Provisions of the Convention
Rule 39 - Definitions
Rule 40 - Application
Rule 41 - Verification of Compliance
Annexes
ANNEX I - Positioning and Technical Details of Lights and Shapes
ANNEX II - Additional Signals for Fishing Vessels Fishing in Close Proximity
ANNEX III - Technical Details of Sound Signal Appliances
ANNEX IV - Distress Signals. The annex lists the official international distress signals. These include signals such as the spoken word "Mayday", the code flag signal "N.C." (November Charlie), flares showing red lights, "SOS" in Morse code, an orange smoke signal and others.
History
Prior to the development of a single set of international rules and practices, separate practices and various conventions and informal procedures existed in different parts of the world. As a result, there were inconsistencies and contradictions between practices that gave rise to collisions, not least because many sailing vessels did not display navigation lights at all. Vessel navigation lights for operating in darkness, as well as navigation marks, were also not standardised, giving rise to dangerous confusion and ambiguity between vessels and resulting in collisions and groundings.
With the advent of steam-powered ships in the mid-19th century, conventions for sailing vessel navigation had to be supplemented with conventions for power-driven vessel navigation. Sailing vessels are limited as to their manoeuvrability in that they cannot sail directly into the wind and are limited in the absence of wind. On the other hand, ships propelled by machinery can manoeuvre in all 360 degrees of direction and therefore can be manoeuvred irrespective of the wind conditions.
In 1840 in London, Trinity House drew up a set of regulations which were enacted by Parliament in 1846. The Trinity House rules were included in the Steam Navigation Act 1846, and the Admiralty regulations regarding lights for steam ships were included in this statute in 1848. In 1849 Congress extended the light requirements to sailing vessels on US waters. In the UK in 1858, coloured sidelights were recommended for sailing vessels, and fog signals were required to be given by steam vessels on the ship's whistle and by sailing vessels on the fog horn or bell; separate but similar action was taken in the United States, as from 1850 onwards English maritime law on collisions was gradually being adopted into United States law. Also in 1850, courts in England and the United States adopted common law pertaining to reasonable speed within the Assured Clear Distance Ahead.
In 1863 a new set of rules, drawn up by the British Board of Trade in consultation with the French government, came into force. By 1864 the regulations (or Articles) had been adopted by more than thirty maritime countries, including Germany and the United States (passed by the United States Congress as "Rules to prevent Collisions at Sea. An act fixing certain rules and regulations for preventing collisions on the water", 29 April 1864, ch. 69, and signed into law by President Abraham Lincoln). International regulations would continue to be developed over the next several decades as a result of legislative and government action by the UK, US and other maritime states. For example, in 1867, Thomas Gray, assistant secretary to the Maritime Department of the Board of Trade, wrote The Rule of the Road, a pamphlet that became famous for its well-known mnemonic verses. Furthermore, in 1878, the United States codified its common law rules for reducing the risk of collisions. By 1880, the 1863 Articles had been supplemented with whistle signals, and in 1884 a new set of international regulations was implemented.
In 1889, the United States convened the first international maritime conference in Washington, D.C. to further codify international collision regulations, including requirements for lights and shapes. The resulting rules were adopted in 1890 and took effect in 1897. Some minor changes were made during the 1910 Brussels Maritime Conference, and some rule changes were proposed, but never ratified, at the 1929 International Conference on Safety of Life at Sea (SOLAS). The recommendation that the direction of a turn be referenced to the rudder instead of the helm or tiller was informally agreed by all maritime nations in 1935.
The 1948 SOLAS International Conference made several recommendations, including the recognition of radar; these were eventually ratified in 1952 and became effective in 1954. Further recommendations were made by a SOLAS conference in London in 1960, which became effective in 1965.
The 1972 International Regulations for Preventing Collisions at Sea
The International Regulations for Preventing Collisions at Sea were adopted as a convention of the International Maritime Organization on 20 October 1972 and entered into force on 15 July 1977. They were designed to update and replace the Collision Regulations of 1960, particularly with regard to Traffic Separation Schemes (TSS) following the first of these, introduced in the Strait of Dover in 1967. As of June 2013, the convention has been ratified by 155 states representing 98.7% of the tonnage of the world's merchant fleets.
The international regulations have been amended several times since their first adoption. In 1981 Rule 10 was amended with regard to dredging or surveying in traffic separation schemes. In 1987 amendments were made to several rules, including rule 1(e) for vessels of special construction; rule 3(h), vessels constrained by her draught and Rule 10(c), crossing traffic lanes. In 1989 Rule 10 was altered to stop unnecessary use of the inshore traffic zones associated with Traffic Separation Schemes (TSS). In 1993 amendments were made concerning the positioning of lights on vessels. In 2001 new rules were added relating to wing-in-ground-effect (WIG) craft and in 2007 the text of Annex IV (Distress signals) was rewritten.
The 2013 amendments (resolution A.1085(28))
Adoption: 4 December 2013
Entry into force: 1 January 2016
After existing part E (Exemptions), a new part F (Verification of compliance with the provisions of the Convention) is added in order for the Organization to make necessary verifications under the IMO Member State Audit Scheme.
Jurisdictions
The International Maritime Organization (IMO) convention, including the almost four dozen "rules" contained in the international regulations, must be adopted by each member country that is signatory to the convention—COLREG laws must exist within each jurisdiction. Thereafter, each IMO member country must designate an "administration"—national authority or agency—for implementing the provisions of the COLREG convention, as it applies to vessels over which the national authority has jurisdiction. Individual governing bodies must pass legislation to establish or assign such authority, as well as to create national navigation laws (and subsequent specific regulations) which conform to the international convention; each national administration is thereafter responsible for the implementation and enforcement of the regulations as it applies to ships and vessels under its legal authority. As well, administrations are typically empowered to enact modifications that apply to vessels in waters under the national jurisdiction concerned, provided that any such modifications are not inconsistent with the COLREGs.
Canada
The Canadian version of the COLREGs is provided by Transport Canada, which regulates Canadian vessels.
Korea
South Korea ratified the COLREGs in 1977 and enacted enforcing legislation under the 1986 Korea Marine Traffic Safety Act.
Marshall Islands
For Marshall Islands waters and ships, Chapter 22.11.4 requires all vessels to comply with the 1972 COLREGs, as amended. Section 150 of the Maritime Act encompasses the fitting and provision of navigation lights, shapes and sound signalling equipment.
Singapore
The version of the COLREGs applicable to the territorial waters of Singapore is the Merchant Shipping (Prevention of Collisions at Sea) Regulations. These Rules were enacted by Singapore in 1983 and then revised and reissued 25 March 1992.
United Kingdom
The UK version of the COLREGs is provided by the Maritime and Coastguard Agency (MCA), in the Merchant Shipping (Distress Signals and Prevention of Collisions) Regulations of 1996. They are distributed and accessed in the form of a "Merchant Shipping Notice" (MSN), which is used to convey mandatory information that must be complied with under UK legislation. These MSNs relate to Statutory Instruments and contain the technical detail of such regulations. Material published by the MCA is subject to Crown copyright protection, but the MCA allows it to be reproduced free of charge in any format or medium for research or private study, provided it is reproduced accurately and not used in a misleading context.
United States
The US version of the COLREGs is provided by the United States Coast Guard of the US Department of Homeland Security.
No right-of-way
A commonly held misconception concerning the rules of marine navigation is that by following specific rules, a vessel can gain certain rights of way over other vessels. No vessel ever has "right of way" over other vessels. Rather, there can be a "give way" vessel and a "stand on" vessel, or there may be two give way vessels with no stand on vessel. A stand on vessel does not have any right of way over any give way vessel, and is not free to maneuver however it wishes, but is obliged to keep a constant course and speed (so as to help the give way vessel in determining a safe course). Standing on is therefore an obligation, not a right or a privilege. Furthermore, a stand on vessel may still be obliged (under Rule 2 and Rule 17) to give way itself, in particular when a situation has arisen where a collision can no longer be avoided by the actions of the give way vessel alone. For example, two power-driven vessels approaching each other head-to-head are both deemed to be "give way" and both are required to alter course so as to avoid colliding with the other. Neither vessel has "right of way".
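For two power-driven vessels in sight of one another, the give-way and stand-on roles described above follow from the geometry of the encounter under Rules 13 to 15. The Python sketch below classifies an encounter from the other vessel's relative bearing as an illustration only; the angular thresholds used are assumed simplifications, since the rules themselves speak of 'reciprocal or nearly reciprocal courses' and of which lights are visible rather than of fixed angles.

```python
def classify_encounter(relative_bearing_deg):
    """Rough encounter classification for two power-driven vessels in sight.

    relative_bearing_deg: bearing of the other vessel measured clockwise from
    own ship's heading (0 = dead ahead). Thresholds are illustrative only.
    """
    b = relative_bearing_deg % 360.0
    if b <= 6.0 or b >= 354.0:
        return "head-on: both alter course to starboard (Rule 14)"
    if 112.5 < b < 247.5:
        return "other vessel overtaking from astern: she keeps clear (Rule 13)"
    if b < 112.5:
        return "crossing with the other vessel to starboard: give way (Rules 15/16)"
    return "crossing with the other vessel to port: stand on (Rule 17)"

for bearing in (2.0, 45.0, 200.0, 300.0):
    print(f"{bearing:5.1f} deg -> {classify_encounter(bearing)}")
```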
Future
In recent years, the IMO, States and other interested parties have assessed the COLREGs with a view to potential future amendment to facilitate Maritime Autonomous Surface Ships. This work has included a regulatory scoping exercise to review the applicability of the COLREGs to autonomous ships.
Racing Rules
The Racing Rules of Sailing, which govern the conduct of yacht and dinghy racing under the sanction of national sailing authorities which are members of World Sailing, are based on the COLREGs, but differ in some important matters such as overtaking and right of way close to turning marks in competitive sailing.
See also
Assured Clear Distance Ahead
Brussels Collision Convention
Navigation
Navigational aid
Pilotage
Sea mark
References
Notes
Further reading
RN-approved self-study book. Includes the full text of the COLREGs.
External links
International Regulations for Preventing Collisions at Sea. Wikisource. Retrieved 18 July 2010
1972 in London
Navigational aids
Admiralty law treaties
International Maritime Organization treaties
International transport
Naval architecture
Traffic law
Treaties concluded in 1972
Treaties entered into force in 1977
Treaties extended to American Samoa
Treaties extended to Aruba
Treaties extended to Baker Island
Treaties extended to Bermuda
Treaties extended to British Honduras
Treaties extended to British Hong Kong
Treaties extended to the British Solomon Islands
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to the Gilbert and Ellice Islands
Treaties extended to Guam
Treaties extended to Guernsey
Treaties extended to Howland Island
Treaties extended to the Isle of Man
Treaties extended to Jarvis Island
Treaties extended to Jersey
Treaties extended to Johnston Atoll
Treaties extended to Midway Atoll
Treaties extended to Montserrat
Treaties extended to Navassa Island
Treaties extended to the Netherlands Antilles
Treaties extended to Palmyra Atoll
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to the Turks and Caicos Islands
Treaties extended to the Panama Canal Zone
Treaties extended to Puerto Rico
Treaties extended to Portuguese Macau
Treaties extended to the Trust Territory of the Pacific Islands
Treaties extended to the United States Virgin Islands
Treaties extended to Wake Island
Treaties of Albania
Treaties of Algeria
Treaties of the People's Republic of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of the People's Republic of Benin
Treaties of Bolivia
Treaties of the military dictatorship in Brazil
Treaties of Brunei
Treaties of the People's Republic of Bulgaria
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of the Cook Islands
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Denmark
Treaties of the Derg
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of East Germany
Treaties of West Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guyana
Treaties of Honduras
Treaties of the Hungarian People's Republic
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of North Korea
Treaties of South Korea
Treaties of Kuwait
Treaties of Latvia
Treaties of Lebanon
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Malaysia
Treaties of the Maldives
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Myanmar
Treaties of Namibia
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Nigeria
Treaties of Niue
Treaties of Norway
Treaties of Oman
Treaties of Palau
Treaties of Pakistan
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Peru
Treaties of the Philippines
Treaties of the Polish People's Republic
Treaties of Portugal
Treaties of Qatar
Treaties of the Socialist Republic of Romania
Treaties of the Soviet Union
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Samoa
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of South Africa
Treaties of Spain
Treaties of Sri Lanka
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tanzania
Treaties of Thailand
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Tuvalu
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of South Yemen
Treaties of Yugoslavia
Treaties of Zaire | International Regulations for Preventing Collisions at Sea | [
"Physics",
"Engineering"
] | 5,006 | [
"Naval architecture",
"Physical systems",
"Transport",
"International transport",
"Marine engineering"
] |
568,962 | https://en.wikipedia.org/wiki/Forward%20chaining | Forward chaining (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.
Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.
Inference engines will iterate through this process until a goal is reached.
Example
Suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules:
If X croaks and X eats flies - Then X is a frog
If X chirps and X sings - Then X is a canary
If X is a frog - Then X is green
If X is a canary - Then X is yellow
Let us illustrate forward chaining by following the pattern of a computer as it evaluates the rules.
Assume the following facts:
Fritz croaks
Fritz eats flies
With forward reasoning, the inference engine can derive that Fritz is green in a series of steps:
1. Since the base facts indicate that "Fritz croaks" and "Fritz eats flies", the antecedent of rule #1 is satisfied by substituting Fritz for X, and the inference engine concludes:
Fritz is a frog
2. The antecedent of rule #3 is then satisfied by substituting Fritz for X, and the inference engine concludes:
Fritz is green
The name "forward chaining" comes from the fact that the inference engine starts with the data and reasons its way to the answer,
as opposed to backward chaining, which works the other way around.
In this derivation, the rules are used in the reverse order compared with backward chaining.
In this example, rules #2 and #4 were not used in determining that Fritz is green.
Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward chaining inference. The forward chaining approach is often employed by expert systems, such as CLIPS.
One of the advantages of forward-chaining over backward-chaining is that the reception of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.
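A minimal sketch of this behaviour in Python may help; for brevity the rules are written already instantiated for Fritz rather than with the variable X, and all names here are illustrative rather than taken from any particular rule engine:

```python
# Minimal forward-chaining sketch for the Fritz example (illustrative only).
# Rules are written as (set of antecedent facts, consequent fact), already
# instantiated for Fritz instead of using the variable X as in the article.
RULES = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),  # rule 1
    ({"Fritz chirps", "Fritz sings"}, "Fritz is a canary"),     # rule 2
    ({"Fritz is a frog"}, "Fritz is green"),                    # rule 3
    ({"Fritz is a canary"}, "Fritz is yellow"),                 # rule 4
]

def forward_chain(facts, rules):
    """Fire every rule whose antecedents are all known, adding its
    consequent to the fact set, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = forward_chain({"Fritz croaks", "Fritz eats flies"}, RULES)
print("Fritz is green" in derived)  # True: rules 1 and 3 fire, 2 and 4 never do
```

Each pass over the rule list fires any rule whose antecedents are all known; the loop stops once a pass adds no new facts, at which point the goal, if derivable, is present in the fact set.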
Applications
Forward chaining is a powerful reasoning strategy with numerous applications in AI and related fields. Some of the prominent applications include:
Expert Systems: Expert systems are AI systems that mimic the decision-making abilities of human experts in a specific domain. They rely on forward chaining to apply expert knowledge to solve problems and make recommendations.
Diagnosis and Troubleshooting: Forward chaining is extensively used in medical diagnosis and troubleshooting systems, where the input symptoms and test results are used to determine potential causes and treatments.
Intelligent Tutoring Systems: Educational software often employs forward chaining to adapt to students’ progress and provide customized learning paths and feedback.
Decision Support Systems: Forward chaining is utilized in business and management decision support systems to analyze data and recommend actions or strategies.
Natural Language Processing: In natural language processing, forward chaining can be applied to resolve ambiguities in language and extract useful information from text.
See also
Backward chaining
Constraint Handling Rules
Opportunistic reasoning
Rete algorithm
References
External links
Forward vs. Backward Chaining Explained at SemanticWeb.com
Expert systems
Logic programming | Forward chaining | [
"Technology"
] | 783 | [
"Information systems",
"Expert systems"
] |
568,963 | https://en.wikipedia.org/wiki/Basement | A basement or cellar is one or more floors of a building that are completely or partly below the ground floor. Especially in residential buildings, it often is used as a utility space for a building, where such items as the furnace, water heater, breaker panel or fuse box, car park, and air-conditioning system are located; so also are amenities such as the electrical system and cable television distribution point. In cities with high property prices, such as London, basements are often fitted out to a high standard and used as living space.
In British English, the word basement is usually used for underground floors of, for example, department stores. The word is usually used with buildings when the space below the ground floor is habitable and with (usually) its own access. The word cellar applies to the whole underground level or to any large underground room. A subcellar or subbasement is a level that lies below the basement or cellar.
Purpose, geography, and history
A basement can be used in almost exactly the same manner as an additional above-ground floor of a house or other building. However, the use of basements depends largely on factors specific to a particular geographical area such as climate, soil, seismic activity, building technology, and real estate economics.
Basements in small buildings such as single-family detached houses are rare in wet climates such as Great Britain and Ireland where flooding can be a problem, though they may be used in larger structures. However, basements are considered standard on all but the smallest new buildings in many places with temperate continental climates such as the American Midwest and the Canadian Prairies where a concrete foundation below the frost line is needed in any case, to prevent a building from shifting during the freeze-thaw cycle. Basements are much easier to construct in areas with relatively soft soils and may be avoided in places where the soil is too compact for easy excavation. Their use may be restricted in earthquake zones, because of the possibility of the upper floors collapsing into the basement; on the other hand, they may be required in tornado-prone areas as a shelter against violent winds. Adding a basement can also reduce heating and cooling costs as it is a form of earth sheltering, and a way to reduce a building's surface area-to-volume ratio. The housing density of an area may also influence whether or not a basement is considered necessary.
Historically, basements have become much easier to build (in developed countries) since the industrialization of home building. Large powered excavation machines such as backhoes and front-end loaders have dramatically reduced the time and manpower needed to dig a basement as compared to digging by hand with a spade, although this method may still be used in the developing world.
For most of its early history, the basement took one of two forms. It could be little more than a cellar, or it could be a section of a building containing rooms and spaces similar to those of the rest of the structure, as in the case of basement flats and basement offices.
However, beginning with the development of large, mid-priced suburban homes in the 1950s, the basement, as a space in its own right, gradually took hold. Initially, it was typically a large, concrete-floored space, accessed by indoor stairs, with exposed columns and beams along the walls and ceilings, or sometimes, walls of poured concrete or concrete cinder block.
Types
English basement
An English basement, also known as a daylight basement or lower ground floor, is a basement in a house where at least part of the floor rises above ground level to allow reasonably sized windows. Generally, the floor's ceiling should be high enough above ground to allow nearly full-size windows. Some daylight basements are located on slopes, such that one portion of the floor is at grade with the land. A walk-out basement almost always results from this.
Most daylight basements naturally result from raised bungalows and at-grade walk-out basements. However, there are instances where the terrain dips enough from one side to another to allow for 3/4 to full-size windows, with the actual floor remaining below grade.
In most parts of North America, it is legal to set up apartments and bedrooms in daylight basements, whether or not the entire basement is above grade.
Daylight basements can be used for several purposes—as a garage, as maintenance rooms, or as living space. The buried portion is often used for storage, laundry room, hot water tanks, and HVAC.
Daylight basement homes typically appraise higher than standard-basement homes, since they include more viable living spaces. In some parts of the US, however, the appraisal for daylight basement space is half that of ground and above ground level square footage. Designs accommodated include split-foyer and split-level homes. Garages on both levels are sometimes possible. As with any multilevel home, there are savings on roofing and foundations.
Walk-out basement
A walk-out basement is any basement that is partially underground but nonetheless allows egress directly outdoors and has floating walls. This can either be through a stairwell leading above ground, or a door directly outside if a portion of the basement is completely at or above grade.
Many walk-out basements are also daylight basements. The only exceptions are when the entire basement is nearly entirely underground, and a stairwell leads up nearly a floor's worth of vertical height to lead to the outdoors.
Generally, basements with only an emergency exit well do not count as walk-out. Walk-out basements with at-grade doors on one side typically are more costly to construct since the foundation is still constructed to reach below the frost line. At-grade walk-out basements on the door-side are often used as livable space for the house, with the buried portion used for utilities and storage.
Subbasement
A subbasement is a floor below the basement floor. In homes with any of the basement types mentioned above, such as a look-out basement, the entire volume of a subbasement, from floor to ceiling, lies well below ground. Therefore, subbasements have no windows and no outside door. In homes that have subbasements, the basement itself can be used as part of the main home where people relax and pursue recreation, while the subbasement can be used for storage. Subbasements are much more common in larger structures, such as commercial buildings and larger apartment buildings, than they are in single-family homes. It is common for skyscrapers to have multiple subbasements.
Building a subbasement is more difficult, costly, and time-consuming than building a basement as the lowest floor. Subbasements are even more susceptible to flooding and water damage than basements and are therefore rare, except in dry climates and at higher elevations.
Some famous landmarks contain subbasements. The subbasement of the US Capitol Building is used as storage and that in the White House is used to store guest items.
Finished fully underground cellar
According to the international Oxford Dictionary of English, a finished fully underground cellar is a room below ground level in a house that is often used for the storage of wine or coal; it may also refer to the stock of wine itself. A cellar is intended to remain at a constant cool (not freezing) temperature all year round and usually has either a small window/opening or some form of air ventilation (air/draught bricks, etc.) in order to help eliminate damp or stale air. Cellars are more common in the United Kingdom in older houses, with most terraced housing built during late 19th and early 20th centuries having cellars. These were important shelters from air raids during World War II. In parts of North America that are prone to tornadoes (e.g. Tornado Alley), cellars still serve as shelter in the event of a direct hit on the house from a tornado or other storm damage caused by strong winds.
Except for Britain, Australia and New Zealand, cellars are popular in most western countries. In the United Kingdom, almost all new homes built since the 1960s have no cellar or basement due to the extra cost of digging down further into the sub-soil and a requirement for much deeper foundations and waterproof tanking. The reverse has recently become common, where the impact of smaller home-footprints has led to roof-space being utilised for further living space and now many new homes are built with third-floor living accommodation. For this reason, especially where lofts have been converted into living space, people tend to use garages for the storage of food freezers, tools, bicycles, garden and outdoor equipment. The majority of continental European houses have cellars, although a large proportion of people live in apartments or flats rather than houses. In North America, cellars usually are found in rural or older homes on the coasts and in the South. However, full basements are commonplace in new houses in the Canadian and American Midwest and other areas subject to tornado activity or requiring foundations below the frost line.
Underground crawl space
An underground crawl space (as the name implies) is a type of basement in which one cannot stand up—the height may be as little as one foot (30 cm), and the surface is often soil. Crawl spaces offer a convenient access to pipes, substructures and a variety of other areas that may be difficult or expensive to access otherwise. While a crawl space cannot be used as living space, it can be used as storage, often for infrequently used items. Care must be taken in doing so, however, as water from the damp ground, water vapour (entering from crawl space vents), and moisture seeping through porous concrete can create a perfect environment for mold/mildew to form on any surface in the crawl space, especially cardboard boxes, wood floors and surfaces, drywall and some types of insulation.
Health and safety issues must be considered when installing a crawl space. As air warms in a home, it rises and leaves through the upper regions of the house, much in the same way that air moves through a chimney. This phenomenon, called the "stack effect", causes the home to suck air up from the crawl space into the main area of the home. Mould spores, decomposition odours, and material from dust mites in the crawl space can come up with the air, aggravating asthma and other breathing problems, and creating a variety of health concerns.
It is usually desirable to finish a crawl space with a plastic vapour barrier that will not support mold growth or allow humidity from the earth into the crawl space. This helps insulate the crawl space and discourages the habitation of insects and vermin by breaking the ecological chain in which insects feed off the mould and vermin feed on the insects, as well as creating a physical inorganic barrier that deters entrance into the space. Vapour barriers can end at the wall or be run up the wall and fastened to provide even more protection against moisture infiltration. Some pest control agencies recommend against covering the walls, as it complicates their job of inspection and spraying. Almost unheard of as late as the 1990s, vapour barriers are becoming increasingly popular in recent years. In fact, the more general area of conditioned vs. unconditioned crawl spaces has seen much research over the last decade.
Dry rot and other conditions detrimental to buildings (particularly wood and timber structures) can develop in enclosed spaces. Providing adequate ventilation is thought to reduce the occurrence of these problems. Crawl space vents are openings in the wall which allow air movement. Such vents are usually fitted with metal grating, mesh, or louvers which can block the movement of rodents and vermin but generally not insects such as termites and carpenter ants. One common rule is to provide vents in cross sectional area equal to 1/150 of the floor area served.
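As a quick worked illustration of that 1/150 rule of thumb (the floor area below is a made-up figure, and local building codes may specify different requirements):

```python
# Worked example of the "1/150 of floor area" crawl space vent rule of thumb.
# The floor area is hypothetical; actual requirements depend on local codes.
floor_area_sq_ft = 1500                    # assumed crawl space floor area
vent_area_sq_ft = floor_area_sq_ft / 150   # rule of thumb from the text
print(vent_area_sq_ft)                     # 10.0 square feet of net vent opening
```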
Modern crawl space thinking has reconsidered the usage of crawl space vents in the home. While crawl space vents do allow outside air to ventilate into the home, the ability of that air to dry out the crawl space is debatable. In areas with humid summers, during the summer months, the air vented into a crawl space will be humid, and as it enters the crawl space, which has been cooled naturally by the earth, the relative humidity of the air will rise. In those cases, crawl space vents can even increase the humidity level of a crawl space and lead to condensation on cool surfaces within, such as metal and wood. In the winter, crawl space vents should be shut off entirely, to keep out the cold winter air which can cool hot water pipes, furnaces, and water heaters stored within. During rainy weather, crawl space vents bring wet air into the crawl space, which will not dry the space effectively.
Design and structural considerations
Structurally, for houses, the basement walls typically form the foundation. In warmer climates, some houses do not have basements because they are not necessary (although many still prefer them). In colder climates, the foundation must be below the frost line. Unless constructed in very cold climates, the frost line is not so deep as to justify an entire level below the ground, although it is usually deep enough that a basement is the assumed standard. In places with oddly stratified soil substrata or high water tables, such as most of Florida, Texas, Oklahoma, Arkansas, and areas near the Gulf of Mexico, basements are usually not financially feasible unless the building is a large apartment or commercial structure.
Excavation using a backhoe or excavator is commonly used to dig a basement. If shelf rock is discovered, the blasting required may be cost-prohibitive. Basement walls may need to have the surrounding earth backfilled around them to return the soil to grade. A water stop, some gravel and a French drain may need to be used to prevent water from entering the basement at the bottom of the wall. Walls below grade may need to be sealed with an impervious coating (such as tar) to prevent water seepage. A polyethylene sheet of about 6 mil (such as Visqueen) serves as a water barrier underneath the basement floor.
Some designs elect simply to leave a crawl space under the house, rather than a full basement, due to structural challenges. Most other designs justify further excavation to create a full-height basement, sufficient for another level of living space. Even so, basements in Canada and the northern United States were typically built lower than the full ceiling height of the main floors. Older homes may have even lower basement heights, as the basement walls were concrete block and thus could be customized to any height. Modern builders offer taller basements as an option, although the additional depth of excavation is usually quite expensive. Thus, houses almost never have multi-storey basements, though taller basement heights are a frequent choice among new home buyers. For large office or apartment buildings in prime locations, the cost of land may justify multi-storey basement parking garages.
The concrete floor in most basements is structurally not part of the foundation; only the basement walls are. If there are posts supporting a main floor beam to form a post and beam system, these posts typically go right through the basement floor to a footing underneath the basement floor. It is the footing that supports the post and the footing is part of the house foundation. Load-bearing wood-stud walls rest directly on the concrete floor. Under the concrete floor is typically gravel or crushed stone to facilitate draining. The floor is typically four inches (100 mm) thick and it rests on top of the foundation footings. The floor is typically sloped towards a drain point, in case of leaks.
Modern construction for basement walls typically falls into one of two categories: they will be made of poured-in-place concrete using concrete forms with a concrete pump, or they will use concrete masonry units (block walls). Rock may also be used, but is less common. In monolithic architecture, large parts of the building are made of concrete; in insulating concrete form construction, the concrete walls may be hidden with an exterior finish or siding. Inside the structure, a single Lally column, steel basement jack, wooden column or support post may hold up the floor above in a small basement. A series of these supports may be necessary for large basements; many basements have the support columns exposed.
Since warm air rises, basements are typically cooler than the rest of the house. In summer, this makes basements damp, due to the higher relative humidity. Dehumidifiers are recommended. In winter, additional heating, such as a fireplace or baseboard heaters may be required. A well-defined central heating system may minimize this requirement. Heating ducts typically run in the ceiling of the basement (since there is not an empty floor below to run the ducts). Ducts extending from the ceiling down to the floor help heat the cold floors of the basement. Older or cheaper systems may simply have the heating vent in the ceiling of the basement.
The finished floor is typically raised off the concrete basement floor. In countries such as Canada, laminate flooring is an exception: It is typically separated from the concrete by only a thin foam underlay. Radiant heating systems may be embedded within the concrete floor. Even if unfinished and unoccupied, basements are heated in order to ensure relative warmth of the floor above, and to prevent water supply pipes, drains, etc. from freezing and bursting in winter. It is recommended that the basement walls be insulated to the frost line. In Canada, the walls of a finished basement are typically insulated to the floor with vapor barriers to prevent moisture transmission. However, a finished basement should avoid wood or wood-laminate flooring, and metal framing and other moisture resistant products should be used. Finished basements can be costly to maintain due to deterioration of waterproofing materials or lateral earth movement etc. Below-ground structures will never be as dry as one above ground, and measures must be taken to circulate air and dehumidify the area.
Drainage considerations
Basement floor drains that connect to sanitary sewers need to have their traps topped up with water regularly to prevent the trap from drying out and sewer gas from escaping into the basement. The drain trap can be topped up automatically by the condensation from air conditioners or high-efficiency furnaces. A small tube from another downpipe is sometimes used to keep the trap from drying out. Health Canada advocates the use of special radon gas traps for floor drains that lead to soil or to a sealed sump pump. In areas where storm and sanitary sewers are combined and there is a risk of flooding and sewage backing up, backwater valves in all basement drains may be mandated by code and are recommended even if not mandated.
The main water cut-off valve is usually in the basement. Basements often have "clean outs" for the sanitary and storm sewers, where these pipes can be accessed. The storm sewer access is only needed where the weeping tiles drain into the storm sewers.
Other than with walk-out or look-out basements, windows in basements require a well and are below grade. A clear window well cover may be required to keep the window wells from accumulating rain water. There should be drains in the window well, connected to the foundation drains.
If the water table outside the basement is above the height of the basement floor, then the foundation drains or the weeping tiles outside the footings may be insufficient to keep the basement dry. A sump pump may be required. It can be located anywhere and is simply in a well that is deeper than the basement floor.
Even with functioning sump pumps or low water tables, basements may become wet after rainfall, due to improper drainage. The ground next to the basement must be graded such that water flows away from the basement wall. Downspouts from roof gutters should drain freely into the storm sewer or directed away from the house. Downspouts should not be connected to the foundation draintiles. If the draintiles become clogged by leaves or debris from the rain gutters, the roof water would cause basement flooding through the draintile. Damp-proofing or waterproofing materials are typically applied to outside of the basement wall. It is virtually impossible to make a concrete wall waterproof, over the long run, so drainage is the key. There are draining membranes that can be applied to the outside of the basement that create channels for water against the basement wall to flow to the foundation drains.
Where drainage is inadequate, waterproofing may be needed. There are numerous ways to waterproof a basement, but most systems fall into one of three categories:
Tanking – Systems that bond to the basement structure and physically hold back groundwater.
Cavity drainage – Dimpled plastic membranes are used to line the floors and walls of the basement, creating a "drained cavity." Any water entering this drained cavity is diverted to a sump pump and pumped away from the basement.
Exterior foundation drain – Installing an exterior foundation drain that will drain away by gravity is the most effective means to waterproof a basement. An exterior system allows water to flow away from the basement without using pumps or electricity. An exterior drain also allows for the installation of a waterproof membrane to the foundation walls.
The waterproofing system can be applied to the inside or the outside walls of a basement. When waterproofing existing basements it is much cheaper to waterproof the basement on the inside. Waterproofing on the outside requires the expense of excavation, but does offer a number of advantages for a homeowner over the long term. Among them are:
Gravity system
No pumps or electrical wiring required
Membrane applied to exterior walls to prevent dampness, mold, moisture, and soil gases from entering the home
Permanent solution
Basement culture and finishings
Unfinished basement
The unfinished design, found principally in spaces larger than the traditional cellar, is common in residences throughout the U.S. and Canada. One usually finds within it a water heater, various pipes running along the ceiling and downwards to the floor, and sometimes a workbench, a freezer or refrigerator, or a laundry set (usually found in older homes). Boxes of various materials, and objects unneeded in the rest of the house, are also often stored there; in this regard, the unfinished basement takes the place both of the cellar and of the attic. Home workshops are often located in the basement, since sawdust, metal chips, and other mess or noise are less of a nuisance there. Sometimes, if the laundry is found in the basement, a laundry chute collects dirty laundry from the upper floors of the house. The basement can contain all of these objects and still be considered to be "unfinished", as they are either mostly or entirely functional in purpose.
Finished basement
In this case the space has been designed, either during construction or at a later point by the owners, to function as a fully habitable addition to the house. Frequently most or all of the basement is used as a recreation room or living room, but it is not uncommon as well to find there (either instead of or alongside the living/recreation room) a guest bedroom or teenager's room, a bathroom, a home office, a home gym, a home theater, a basement bar, a sauna, craft room, play room, kitchenette, and one or more closets. Usually a part of the basement is unfurnished and is used for storage, a workshop, and/or a laundry room; when this is the case the water heater and furnace will also often be located there, although in some cases the entire basement is finished, and the water heater and furnace are boxed off into a closet.
Partially finished basement
The main point of distinction between this type of basement and the two others lies in its being either entirely unmodified (unlike the finished basement) beyond the addition of furniture, recreational objects and appliances, and/or exercise equipment on the bare floor, or slightly modified through the installation (besides any or all of the aforementioned items) of loose carpet and perhaps simple light fixtures. In both cases, the objects found there—many of which could be found in a finished basement as well—might include the following: weight sets and other exercise equipment; the boom boxes or entertainment systems used during exercise; musical instruments (which are not in storage, as they would technically be in an unfinished basement; an assembled drum set would be the most easily identified of these); football tables, chairs, couches and entertainment appliances of lesser quality than those in the rest of the house; refrigerators, stand-alone freezers, and microwaves (the first and the second being also sometimes used as supplementary storage units in an unfinished basement); and sports pennants and/or other types of posters which are attached to the walls.
As the description suggests, this type of basement, which also might be called "half-finished", is likely used by teenagers and children. The entire family might utilize a work-out area. It is also common to have a secondary (or primary) home office in a partially finished basement, as well as a workbench and/or a space for laundry appliances.
Toilets and showers sometimes exist in this variety of basement, as many North American basements are designed to allow for their installation.
Fully finished basement – retrofit
In London, the construction of finished retrofit basements is big business, with a large number of projects in the 100–200 square meter bracket. There is a smaller number of projects in the 200–500 square meter bracket under construction. It is also not unusual to see multi-level retrofit basements. These are considerable works of civil engineering and require some skill and intuitive understanding as well as good engineering. Some of the more grandiose of these basement projects have been widely reported in the national media, including the "Witanhurst" project in the Highgate area of London, and the huge iceberg-like homes which are beginning to be constructed in prime London areas such as Kensington and Chelsea.
Use in hospitals
Hospitals often place their nuclear chemistry and radiation therapy and diagnostic resources in basements to utilize the shielding from the earth.
Real estate floorspace measures
In Canada, the basement area was historically excluded from the advertised square footage of a house, as it was not part of the living space. For example, a "2,000-square-foot bungalow" would, in reality, have considerably more total floor space once the basement is counted. More recently, finished space has become increasingly acceptable as a measure which includes the developed basement areas of a home. Due to fire code requirements, most jurisdictions require an emergency egress (through either egress-style windows or, in the case of a walk-out basement, a door) in order to include the basement square footage as living space.
See also
Coal hole
Loft conversions in the United Kingdom
References
External links
National Research Council (Canada) Basement subject search
HealthLink's Article on Mold Allergies
Rooms
Building engineering
Structural engineering
Semi-subterranean structures
Food storage | Basement | [
"Engineering"
] | 5,528 | [
"Structural engineering",
"Building engineering",
"Rooms",
"Construction",
"Civil engineering",
"Architecture"
] |
568,967 | https://en.wikipedia.org/wiki/Backward%20chaining | Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.
In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate table bases for chess endgames for computer chess.
Backward chaining is implemented in logic programming by SLD resolution. Both methods are based on the modus ponens inference rule. It is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining systems usually employ a depth-first search strategy, e.g. Prolog.
Usage
Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents. An inference engine using backward chaining would search the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, then it is added to the list of goals (for one's goal to be confirmed one must also provide data that confirms this new rule).
For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz:
Fritz croaks
Fritz eats flies
The goal is to decide whether Fritz is green, based on a rule base containing the following four rules:
If X croaks and X eats flies – Then X is a frog
If X chirps and X sings – Then X is a canary
If X is a frog – Then X is green
If X is a canary – Then X is yellow
With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green".
1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes:
If Fritz is a frog – Then Fritz is green
Since the consequent matches the goal ("Fritz is green"), the rules engine now needs to see if the antecedent ("Fritz is a frog") can be proven. The antecedent, therefore, becomes the new goal:
Fritz is a frog
2. Again substituting Fritz for X, rule #1 becomes:
If Fritz croaks and Fritz eats flies – Then Fritz is a frog
Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and eats flies") can be proven. The antecedent, therefore, becomes the new goal:
Fritz croaks and Fritz eats flies
3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proven:
Fritz croaks
Fritz eats flies
4. To prove both of these sub-goals, the inference engine sees that both of these sub-goals were given as initial facts. Therefore, the conjunction is true:
Fritz croaks and Fritz eats flies
therefore the antecedent of rule #1 is true and the consequent must be true:
Fritz is a frog
therefore the antecedent of rule #3 is true and the consequent must be true:
Fritz is green
This derivation, therefore, allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used.
Note that the goals always match the affirmed versions of the consequents of implications (and not the negated versions as in modus tollens) and even then, their antecedents are then considered as the new goals (and not the conclusions as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used is modus ponens.
Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.
Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines.
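A minimal goal-driven counterpart in Python may help to illustrate the contrast with forward chaining; as before, the rules are written already instantiated for Fritz rather than with the variable X, and the names are illustrative only (a real engine such as Prolog would additionally perform unification and SLD resolution):

```python
# Minimal backward-chaining sketch for the Fritz example (illustrative only).
FACTS = {"Fritz croaks", "Fritz eats flies"}
RULES = [
    (["Fritz croaks", "Fritz eats flies"], "Fritz is a frog"),  # rule 1
    (["Fritz chirps", "Fritz sings"], "Fritz is a canary"),     # rule 2
    (["Fritz is a frog"], "Fritz is green"),                    # rule 3
    (["Fritz is a canary"], "Fritz is yellow"),                 # rule 4
]

def prove(goal):
    """A goal is proven if it is a known fact, or if some rule concludes
    it and every antecedent of that rule can itself be proven."""
    if goal in FACTS:
        return True
    for antecedents, consequent in RULES:
        if consequent == goal and all(prove(a) for a in antecedents):
            return True
    return False

print(prove("Fritz is green"))   # True: goals expand rule 3 -> rule 1 -> base facts
print(prove("Fritz is yellow"))  # False: no facts support the canary branch
```

Note that the search starts from the goal "Fritz is green" and matches rule consequents, mirroring the four steps described above.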
See also
Backtracking
Backward induction
Forward chaining
Opportunistic reasoning
References
Sources
External links
Backward chaining example
Expert systems
Logic in computer science
Reasoning
Automated reasoning | Backward chaining | [
"Mathematics",
"Technology"
] | 970 | [
"Information systems",
"Mathematical logic",
"Logic in computer science",
"Expert systems"
] |
568,978 | https://en.wikipedia.org/wiki/Coccoloba%20uvifera | Coccoloba uvifera is a species of tree and flowering plant in the buckwheat family, Polygonaceae, that is native to coastal beaches throughout tropical America and the Caribbean, including central & southern Florida, the Bahamas, the Greater and Lesser Antilles, and Bermuda. Common names include seagrape and baygrape.
Fruit
In late summer, it bears green fruit in large, grape-like clusters. The fruit gradually ripens to a purplish color. Each contains a large pit that constitutes most of the volume of the fruit.
Cultivation and propagation
Although it is capable of surviving down to about 2 °C (35.6 °F), the tree cannot survive frost. The leaves turn reddish before withering. The seeds of this plant, once gathered, must be planted immediately, for unlike most plants, the seeds cannot withstand being stored for future planting.
C. uvifera is wind-resistant, moderately tolerant of shade, and highly tolerant of salt, so it is often planted to stabilize beach edges; it is also planted as an ornamental shrub. The fruit is very tasty, and can be used for jam or eaten directly from the tree. The shrub is low-maintenance and largely free of diseases and pests. Sea grape does have one minor pest, the sea grape borer, which is a moth native to Florida that bores into small twigs and branches and kills them along with their leaves. The damage, however, is usually minor. The leaves of the sea grape are also slow to decompose, which may cause some annoyance for homeowners.
Sea grape is a dioecious species, that is, male and female flowers are borne on separate plants, and cross-pollination is necessary for fruit to develop. Honey bees and other insects help pollinate these plants; male and female plants can be distinguished by the appearance of their flowers, as males usually show dead flower stalks.
Hardiness: USDA zone 9B–11
Propagation: seeds and cuttings
Culture: partial shade/full sun, drought tolerance
Uses
Coccoloba uvifera is a popular ornamental plant in south Florida yards. It serves as a dune stabilizer and protective habitat for small animals. Tall sea grape plants behind beaches help prevent sea turtles from being distracted by lights from nearby buildings. The sap has been used for dyeing and tanning leather. The wood has occasionally been used in furniture, as firewood, or for making charcoal. The fruits of the sea grape may be eaten raw, cooked into jellies and jams, or fermented into sea grape wine. The leaves of the sea grape can be made into a tea and honey bees can make a certain type of honey with the nectar of the sea grape flowers.
In other places native to sea grapes, various parts of the plant are used for medicinal purposes. For example, in Puerto Rico and the Caribbean the roots and bark of the plants are used in traditional medicine, while in the Yucatán peninsula tea made from the bark of sea grape mixed with alcohol is used for ulcers. In French Guiana, a juice made from the whole plant called Jamaica kino, is used to treat diarrhea and dysentery.
Classification
The first botanical names of the plant were assigned in 1696 by Hans Sloane, who called it Prunus maritima racemosa, "maritime grape-cluster Prunus", and Leonard Plukenet, who named it Uvifera littorea, "grape-bearer of the shore", both of which names reflect the European concept of "sea-grape", expressed in a number of languages by the explorers of the times. The natives viewed it as a large mulberry.
The first edition of Linnaeus's Species Plantarum (1753), based on Plukenet, assigned the plant to Polygonum uvifera and noted flores non vidi, "I have not seen the flowers." Subsequently, Patrick Browne, The Civil and Natural History of Jamaica (1756) devised Coccoloba for it. Relying on Browne, Linnaeus' second edition (1762), changed the classification to Coccolobus uvifera, citing all the other names. Coccoloba comes from the Greek kokkolobis, a kind of grape, literally, "berry pod".
Gallery
References
uvifera
Constantly blooming plants
Halophytes
Flora of the Caribbean
Flora of Florida
Flora of Bermuda
Flora of the Bahamas
Trees of Îles des Saintes
Plants described in 1759
Garden plants of North America
Fruit trees
Taxa named by Carl Linnaeus
Dioecious plants | Coccoloba uvifera | [
"Chemistry"
] | 941 | [
"Halophytes",
"Salts"
] |
569,092 | https://en.wikipedia.org/wiki/Behavioral%20modernity | Behavioral modernity is a suite of behavioral and cognitive traits believed to distinguish current Homo sapiens from other anatomically modern humans, hominins, and primates. Most scholars agree that modern human behavior can be characterized by abstract thinking, planning depth, symbolic behavior (e.g., art, ornamentation), music and dance, exploitation of large game, and blade technology, among others.
Underlying these behaviors and technological innovations are cognitive and cultural foundations that have been documented experimentally and ethnographically by evolutionary and cultural anthropologists. These human universal patterns include cumulative cultural adaptation, social norms, language, and extensive help and cooperation beyond close kin.
Within the tradition of evolutionary anthropology and related disciplines, it has been argued that the development of these modern behavioral traits, in combination with the climatic conditions of the Last Glacial Period and Last Glacial Maximum causing population bottlenecks, contributed to the evolutionary success of Homo sapiens worldwide relative to Neanderthals, Denisovans, and other archaic humans.
Debate continues as to whether anatomically modern humans were behaviorally modern as well. There are many theories on the evolution of behavioral modernity. These approaches tend to fall into two camps: cognitive and gradualist. The Later Upper Paleolithic Model theorizes that modern human behavior arose through cognitive, genetic changes in Africa abruptly around 40,000–50,000 years ago around the time of the Out-of-Africa migration, prompting the movement of some modern humans out of Africa and across the world.
Other models focus on how modern human behavior may have arisen through gradual steps, with the archaeological signatures of such behavior appearing only through demographic or subsistence-based changes. Many cite evidence of behavioral modernity earlier (by at least about 150,000–75,000 years ago and possibly earlier) namely in the African Middle Stone Age. Anthropologists Sally McBrearty and Alison S. Brooks have been notable proponents of gradualism—challenging Europe-centered models by situating more change in the African Middle Stone Age—though this model is more difficult to substantiate due to the general thinning of the fossil record as one goes further back in time.
Definition
To classify what should be included in modern human behavior, it is necessary to define behaviors that are universal among living human groups. Some examples of these human universals are abstract thought, planning, trade, cooperative labor, body decoration, and the control and use of fire. Along with these traits, humans possess much reliance on social learning. This cumulative cultural change or cultural "ratchet" separates human culture from social learning in animals. In addition, a reliance on social learning may be responsible in part for humans' rapid adaptation to many environments outside of Africa. Since cultural universals are found in all cultures, including isolated indigenous groups, these traits must have evolved or have been invented in Africa prior to the exodus.
Archaeologically, a number of empirical traits have been used as indicators of modern human behavior. While these are often debated a few are generally agreed upon. Archaeological evidence of behavioral modernity includes:
Burial
Fishing
Figurative art (cave paintings, petroglyphs, dendroglyphs, figurines)
Use of pigments (such as ochre) and jewelry for decoration or self-ornamentation
Using bone material for tools
Transport of resources over long distances
Blade technology
Diversity, standardization, and regionally distinct artifacts
Hearths
Composite tools
Critiques
Several critiques have been placed against the traditional concept of behavioral modernity, both methodologically and philosophically. Anthropologist John Shea outlines a variety of problems with this concept, arguing instead for "behavioral variability", which, according to the author, better describes the archaeological record. The use of trait lists, according to Shea, runs the risk of taphonomic bias, where some sites may yield more artifacts than others despite similar populations; as well, trait lists can be ambiguous in how behaviors may be empirically recognized in the archaeological record. In particular, Shea cautions that population pressure, cultural change, or optimality models, like those in human behavioral ecology, might better predict changes in tool types or subsistence strategies than a change from "archaic" to "modern" behavior. Some researchers argue that a greater emphasis should be placed on identifying only those artifacts which are unquestionably, or purely, symbolic as a metric for modern human behavior.
Since 2018, recent dating methods utilized on various cave art sites in Spain and France have shown that Neanderthals performed symbolic artistic expression, consisting of red "lines, dots, and hand stencils" found in caves, prior to contact with anatomically modern humans. This is contrary to previous suggestions that Neanderthals lacked these capabilities.
Theories and models
Late Upper Paleolithic Model or "Upper Paleolithic Revolution"
The Late Upper Paleolithic Model, or Upper Paleolithic Revolution, refers to the idea that, though anatomically modern humans first appear around 150,000 years ago (as was once believed), they were not cognitively or behaviorally "modern" until around 50,000 years ago, leading to their expansion out of Africa and into Europe and Asia. Proponents of this model note that traits used as a metric for behavioral modernity do not appear as a package until around 40–50,000 years ago. Anthropologist Richard Klein specifically notes that evidence of fishing, tools made from bone, hearths, significant artifact diversity, and elaborate graves is absent before this point. According to both Shea and Klein, art only becomes common beyond this switching point, signifying a change from archaic to modern humans. Most researchers argue that a neurological or genetic change, perhaps one enabling complex language, such as FOXP2, caused this revolutionary change in humans. The role of FOXP2 as a driver of evolutionary selection has been called into question following recent research results.
Building on the FOXP2 gene hypothesis, cognitive scientist Philip Lieberman has argued that proto-language behaviour existed prior to 50,000 BP, albeit in a more primitive form. Lieberman has advanced fossil evidence, such as neck and throat dimensions, to demonstrate that so-called “anatomically modern” humans from 100,000 BP continued to evolve their SVT (supralaryngeal vocal tract), which already possessed a horizontal portion (SVTh) capable of producing many phonemes which were mostly consonants. According to his theory, Neanderthals and early Homo sapiens would have been able to communicate using sounds and gestures.
From 100,000 BP, Homo sapiens necks continued to lengthen to a point, by around 50,000 BP, where Homo sapiens necks were long enough to accommodate a vertical portion to their SVT (SVTv), which is now a universal trait among humans. This SVTv enabled the enunciation of quantal vowels: [i]; [u]; and [a]. These quantal vowels could then be immediately put to use by the already sophisticated neuro-motor-control features of the FOXP2 gene to generate more nuanced sounds and in effect increase by orders of magnitude the number of distinct sounds that can be produced, allowing for fully symbolic language.
Goody (1986) draws an analogy between the development of spoken language and that of writing: the shift from pictographic or ideographic symbols into a fully abstract logographic writing system (such as hieroglyphics), or from a logographic system into an abjad or alphabet, led to dramatic changes in human civilization.
Alternative models
Contrasted with this view of a spontaneous leap in cognition among ancient humans, some anthropologists like Alison S. Brooks, primarily working in African archaeology, point to the gradual accumulation of "modern" behaviors, starting well before the 50,000-year benchmark of the Upper Paleolithic Revolution models. Howiesons Poort, Blombos, and other South African archaeological sites, for example, show evidence of marine resource acquisition, trade, the making of bone tools, blade and microlithic technology, and abstract ornamentation at least by 80,000 years ago. Given evidence from Africa and the Middle East, a variety of hypotheses have been put forth to describe an earlier, gradual transition from simple to more complex human behavior. Some authors have pushed back the appearance of fully modern behavior to around 80,000 years ago or earlier in order to incorporate the South African data.
Others focus on the slow accumulation of different technologies and behaviors across time. These researchers describe how anatomically modern humans could have been cognitively the same, and what we define as behavioral modernity is just the result of thousands of years of cultural adaptation and learning. Archaeologist Francesco d'Errico, and others, have looked at Neanderthal culture, rather than early human behavior exclusively, for clues into behavioral modernity. Noting that Neanderthal assemblages often portray traits similar to those listed for modern human behavior, researchers stress that the foundations for behavioral modernity may in fact, lie deeper in our hominin ancestors. If both modern humans and Neanderthals express abstract art and complex tools then "modern human behavior" cannot be a derived trait for our species. They argue that the original "human revolution" theory reflects a profound Eurocentric bias. Recent archaeological evidence, they argue, proves that humans evolving in Africa some 300,000 or even 400,000 years ago were already becoming cognitively and behaviourally "modern". These features include blade and microlithic technology, bone tools, increased geographic range, specialized hunting, the use of aquatic resources, long-distance trade, systematic processing and use of pigment, and art and decoration. These items do not occur suddenly together as predicted by the "human revolution" model, but at sites that are widely separated in space and time. This suggests a gradual assembling of the package of modern human behaviours in Africa, and its later export to other regions of the Old World.
Between these extremes is the view—currently supported by archaeologists Chris Henshilwood, Curtis Marean, Ian Watts and others—that there was indeed some kind of "human revolution" but that it occurred in Africa and spanned tens of thousands of years. The term "revolution," in this context, would mean not a sudden mutation but a historical development along the lines of the industrial revolution or the Neolithic revolution. In other words, it was a relatively accelerated process, too rapid for ordinary Darwinian "descent with modification" yet too gradual to be attributed to a single genetic or other sudden event. These archaeologists point in particular to the relatively explosive emergence of ochre crayons and shell necklaces, apparently used for cosmetic purposes. These archaeologists see symbolic organisation of human social life as the key transition in modern human evolution. Recently discovered at sites such as Blombos Cave and Pinnacle Point, South Africa, pierced shells, pigments and other striking signs of personal ornamentation have been dated within a time-window of 70,000–160,000 years ago in the African Middle Stone Age, suggesting that the emergence of Homo sapiens coincided, after all, with the transition to modern cognition and behaviour. While viewing the emergence of language as a "revolutionary" development, this school of thought generally attributes it to cumulative social, cognitive and cultural evolutionary processes as opposed to a single genetic mutation.
A further view, taken by archaeologists such as Francesco d'Errico and João Zilhão, is a multi-species perspective arguing that evidence for symbolic culture, in the form of utilised pigments and pierced shells, are also found in Neanderthal sites, independently of any "modern" human influence.
Cultural evolutionary models may also shed light on why although evidence of behavioral modernity exists before 50,000 years ago, it is not expressed consistently until that point. With small population sizes, human groups would have been affected by demographic and cultural evolutionary forces that may not have allowed for complex cultural traits. According to some authors, until population density became significantly high, complex traits could not have been maintained effectively. Some genetic evidence supports a dramatic increase in population size before human migration out of Africa. High local extinction rates within a population also can significantly decrease the amount of diversity in neutral cultural traits, regardless of cognitive ability.
Archaeological evidence
Africa
Research from 2017 indicates that Homo sapiens originated in Africa between around 350,000 and 260,000 years ago. There is some evidence for the beginning of modern behavior among early African H. sapiens around that period.
Before the Out of Africa theory was generally accepted, there was no consensus on where the human species evolved and, consequently, where modern human behavior arose. Now, however, African archaeology has become extremely important in discovering the origins of humanity. The first Cro-Magnon expansion into Europe around 48,000 years ago is generally accepted as already "modern", and it is now generally believed that behavioral modernity appeared in Africa before 50,000 years ago, either significantly earlier or possibly as a late Upper Paleolithic "revolution" shortly before that date, which then prompted migration out of Africa.
A variety of evidence of abstract imagery, widened subsistence strategies, and other "modern" behaviors has been discovered in Africa, especially South, North, and East Africa. The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was dated to around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornamentation have been found in Morocco which might be as much as 130,000 years old; as well, the Cave of Hearths in South Africa has yielded a number of beads dating from significantly prior to 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa.
Specialized projectile weapons have also been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle, also found at Sibudu) dating to approximately 72,000–60,000 years ago, on some of which poisons may have been used, and bone harpoons at the Central African site of Katanda dating to about 90,000 years ago. Evidence also exists for the systematic heat treating of silcrete stone to increase its flake-ability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago.
In 2008, an ochre processing workshop likely for the production of paints was uncovered dating to c. 100,000 years ago at Blombos Cave, South Africa. Analysis shows that a liquefied pigment-rich mixture was produced and stored in two abalone shells, and that ochre, bone, charcoal, grindstones, and hammer-stones also formed a composite part of the toolkits. Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying the makers had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000–67,000 years ago. Evidence of early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, was discovered in 2013 at the Ethiopian site of Gademotta, and dates to around 279,000 years ago.
Expanding subsistence strategies beyond big-game hunting, and the consequent diversity in tool types, has been noted as a sign of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources, from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic change in fish skeletons from Blombos Cave has been interpreted as capture of live fish, clearly an intentional human behavior.
Humans in North Africa (Nazlet Sabaha, Egypt) are known to have dabbled in chert mining, as early as ≈100,000 years ago, for the construction of stone tools.
Evidence was found in 2018, dating to about 320,000 years ago, at the Kenyan site of Olorgesailie, of the early emergence of modern behaviors including: long-distance trade networks (involving goods such as obsidian), the use of pigments, and the possible making of projectile points. It is observed by the authors of three 2018 studies on the site that the evidence of these behaviors is approximately contemporary to the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors had already begun in Africa around the time of the emergence of anatomically modern Homo sapiens.
In 2019, further evidence of early complex projectile weapons in Africa was found at Aduma, Ethiopia, dated 100,000–80,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers.
Dental wear on Olduvai Hominid 1 has been interpreted as evidence that this individual wore facial piercings.
Europe
While traditionally described as evidence for the later Upper Paleolithic Model, European archaeology has shown that the issue is more complex. A variety of stone tool technologies were present at the time of human expansion into Europe and show evidence of modern behavior. Despite the problems of conflating specific tools with cultural groups, the Aurignacian tool complex, for example, is generally taken as a purely modern human signature. The discovery of "transitional" complexes, like the "proto-Aurignacian", has been taken as evidence of human groups progressing through "steps of innovation". If, as this might suggest, human groups were already migrating into eastern Europe around 40,000 years ago and only afterward showed evidence of behavioral modernity, then either the cognitive change must have diffused back into Africa or it was already present before migration.
In light of a growing body of evidence of Neanderthal culture and tool complexes, some researchers have put forth a "multiple species model" for behavioral modernity. Neanderthals were often cited as being an evolutionary dead-end, apish cousins who were less advanced than their human contemporaries. Personal ornaments were dismissed as trinkets or poor imitations compared to the cave art produced by H. sapiens. Despite this, European evidence has shown a variety of personal ornaments and artistic artifacts produced by Neanderthals; for example, the Neanderthal site of Grotte du Renne has produced grooved bear, wolf, and fox incisors, ochre and other symbolic artifacts. Although few and controversial, circumstantial evidence of Neanderthal ritual burials has been uncovered. There are two options to describe this symbolic behavior among Neanderthals: they copied cultural traits from arriving modern humans, or they had their own cultural traditions comparable with behavioral modernity. If they just copied cultural traditions, which is debated by several authors, they still possessed the capacity for complex culture described by behavioral modernity. As discussed above, if Neanderthals also were "behaviorally modern" then it cannot be a species-specific derived trait.
Asia
Most debates surrounding behavioral modernity have been focused on Africa or Europe but an increasing amount of focus has been placed on East Asia. This region offers a unique opportunity to test hypotheses of multi-regionalism, replacement, and demographic effects. Unlike Europe, where initial migration occurred around 50,000 years ago, human remains have been dated in China to around 100,000 years ago. This early evidence of human expansion calls into question behavioral modernity as an impetus for migration.
Stone tool technology is of particular interest in East Asia. Following Homo erectus migrations out of Africa, Acheulean technology never seems to appear beyond present-day India and into China. Analogously, Mode 3, or Levallois, technology is not apparent in China following later hominin dispersals. This lack of more advanced technology has been explained by serial founder effects and low population densities out of Africa. Although tool complexes comparable to those of Europe are missing or fragmentary, other archaeological evidence shows behavioral modernity. For example, the peopling of the Japanese archipelago offers an opportunity to investigate the early use of watercraft. Although one site, Kanedori in Honshu, does suggest the use of watercraft as early as 84,000 years ago, there is no other evidence of hominins in Japan until 50,000 years ago.
The Zhoukoudian cave system near Beijing has been excavated since the 1930s and has yielded precious data on early human behavior in East Asia. Although disputed, there is evidence of possible human burials and interred remains in the cave, dated to around 34,000–20,000 years ago. These remains have associated personal ornaments in the form of beads and worked shell, suggesting symbolic behavior. Along with possible burials, numerous other symbolic objects like punctured animal teeth and beads, some dyed in red ochre, have all been found at Zhoukoudian. Although fragmentary, the archaeological record of eastern Asia shows evidence of behavioral modernity before 50,000 years ago but, like the African record, it is not fully apparent until that time.
See also
Anatomically modern human
Archaic Homo sapiens
Blombos Cave
Cultural universal
Dawn of Humanity (film)
Evolution of human intelligence
Female cosmetic coalitions
FOXP2 and human evolution
Human evolution
List of Stone Age art
Origin of language
Origins of society
Prehistoric art
Prehistoric music
Paleolithic religion
Recent African origin
Sibudu Cave
Sociocultural evolution
Symbolism (disambiguation)
Symbolic culture
Timeline of evolution
References
External links
Steven Mithen (1999), The Prehistory of the Mind: The Cognitive Origins of Art, Religion and Science, Thames & Hudson.
Artifacts in Africa Suggest An Earlier Modern Human
Tools point to African origin for human behaviour
Key Human Traits Tied to Shellfish Remains, nytimes 2007/10/18
"Python Cave" Reveals Oldest Human Ritual, Scientists Suggest
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Anthropology
Anatomically modern humans
Modernity
Upper Paleolithic
Human evolution
Evolutionary biology
Evolutionary psychology | Behavioral modernity | [
"Biology"
] | 4,640 | [
"Evolutionary biology",
"Behavior",
"Anatomically modern humans",
"Recent African origin of modern humans",
"Biological hypotheses",
"Human behavior"
] |
569,111 | https://en.wikipedia.org/wiki/Basionym | In the scientific name of organisms, basionym or basyonym means the original name on which a new name is based; the author citation of the new name should include the authors of the basionym in parentheses. The term "basionym" is used in both botany and zoology. In zoology, alternate terms such as original combination or protonym are sometimes used instead. Bacteriology uses a similar term, basonym, spelled without an i.
Although "basionym" and "protonym" are often used interchangeably, they have slightly different technical definitions. A basionym is the correct spelling of the original name (according to the applicable nomenclature rules), while a protonym is the original spelling of the original name. These are typically the same, but in rare cases may differ.
When creating new taxonomic names, there are specific rules about how basionyms can be used. A new combination or name at new rank must be based directly on the original basionym rather than on any intermediate combinations. This means that if a species is transferred between multiple genera over time, each new combination must refer back to the original name rather than to more recent combinations. This helps maintain a clear chain of nomenclature and prevents confusion about the ultimate source of the name. For example, when transferring a species that has already been moved to a different genus, taxonomists must cite the original species name as the basionym, not the intermediate combination.
Use in botany
The term "basionym" is used in botany only for the circumstances where a previous name exists with a useful description, and the International Code of Nomenclature for algae, fungi, and plants (ICNafp) does not require a full description with the new name. A basionym must therefore be legitimate. Basionyms are regulated by the code's articles 6.10, 7.3, 41, and others.
When a current name has a basionym, the author or authors of the basionym are included in parentheses at the start of the author citation. If a basionym is later found to be illegitimate, it becomes a replaced synonym and the current name's author citation must be changed so that the basionym authors do not appear.
Historical rules for basionyms have evolved over time. Prior to 1 January 1953, the requirements for referencing basionyms were less stringent: an indirect reference to a basionym or replaced synonym was sufficient for valid publication of a new combination, name at new rank, or replacement name. After this date, more explicit references became required.
Replaced synonyms
In some cases, taxonomists may need to publish a replacement name even when working with a legitimate existing name. This situation arises when it is not possible to publish a legitimate new combination or name at new rank, such as when the new name would create an illegitimate homonym (duplicate name) or when the name cannot be validly published under the nomenclatural rules (for example, in the case of tautonyms).
Types and original material
When working with basionyms and new combinations, there are specific rules regarding type specimens. Because a new combination is typified by the type of its basionym, all original material must also come from the basionym. This means that taxonomists cannot designate a lectotype (a type specimen selected after the original description) from specimens that were only cited or used in the later new combination but were not part of the original material of the basionym. Such specimens could only be designated as neotypes if no original material exists.
Combinatio nova
The basionym of the name Picea abies (the Norway spruce) is Pinus abies. The species was originally named Pinus abies by Carl Linnaeus and so the author citation of the basionym is simply "L." Later on, botanist Gustav Karl Wilhelm Hermann Karsten decided this species should not be grouped in the same genus (Pinus) as the pines, so he transferred it to the genus Picea (the spruces). The new name Picea abies is a combinatio nova, a new combination (abbreviated comb. nov.). With author citation, the current name is "Picea abies (L.) Karst."
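The citation rule can be illustrated with a short sketch. The data structure and function below are hypothetical (not any real nomenclature library); they simply encode the convention that the basionym author appears in parentheses before the combining author, and that a new combination must point back to the original basionym rather than to an intermediate combination.

```python
from dataclasses import dataclass

@dataclass
class Name:
    genus: str
    epithet: str
    author: str                      # author(s) of this combination
    basionym: "Name | None" = None   # original name, if a new combination

def citation(name: Name) -> str:
    """Format an author citation following the ICNafp convention:
    basionym author(s) in parentheses, then the combining author(s)."""
    binomial = f"{name.genus} {name.epithet}"
    if name.basionym is None:
        return f"{binomial} {name.author}"
    original = name.basionym
    while original.basionym is not None:  # skip intermediate combinations,
        original = original.basionym      # always credit the true basionym
    return f"{binomial} ({original.author}) {name.author}"

pinus_abies = Name("Pinus", "abies", "L.")
picea_abies = Name("Picea", "abies", "Karst.", basionym=pinus_abies)
print(citation(picea_abies))   # Picea abies (L.) Karst.
```

The while loop reflects the rule described earlier: even after several transfers between genera, the citation always credits the author of the original name.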
Status novus
In 1964, the subfamily name Pomoideae, which had been in use for the group within the family Rosaceae that has pome fruit like apples, was no longer acceptable under the code of nomenclature because it is not based on a genus name. Claude Weber did not consider the family name Malaceae Small to be taxonomically appropriate, so he created the name Maloideae at the rank of subfamily, referring to the original description of the family, and using the same type. This change of rank from family to subfamily is an example of status novus (abbreviated stat. nov.), also called a "name at new rank".
See also
Glossary of scientific naming
Synonym (taxonomy)
References
Biological nomenclature
Botanical nomenclature
Zoological nomenclature | Basionym | [
"Biology"
] | 1,037 | [
"Botanical nomenclature",
"Zoological nomenclature",
"Botanical terminology",
"Biological nomenclature"
] |
569,123 | https://en.wikipedia.org/wiki/Johann%20Deisenhofer | Johann Deisenhofer (; born September 30, 1943) is a German biochemist who, along with Hartmut Michel and Robert Huber, received the Nobel Prize for Chemistry in 1988 for their determination of the first crystal structure of an integral membrane protein, a membrane-bound complex of proteins and co-factors that is essential to photosynthesis.
Early life and education
Born in Bavaria, Deisenhofer earned his doctorate from the Technical University of Munich for research work done at the Max Planck Institute of Biochemistry in Martinsried, West Germany, in 1974. He conducted research there until 1988, when he joined the scientific staff of the Howard Hughes Medical Institute and the faculty of the Department of Biochemistry at The University of Texas Southwestern Medical Center at Dallas.
Career
Together with Michel and Huber, Deisenhofer determined the three-dimensional structure of a protein complex found in certain photosynthetic bacteria. This membrane protein complex, called a photosynthetic reaction center, was known to play a crucial role in initiating a simple type of photosynthesis. Between 1982 and 1985, the three scientists used X-ray crystallography to determine the exact arrangement of the more than 10,000 atoms that make up the protein complex. Their research increased the general understanding of the mechanisms of photosynthesis and revealed similarities between the photosynthetic processes of plants and bacteria.
Deisenhofer currently serves on the board of advisors of Scientists and Engineers for America, an organization focused on promoting sound science in American government. In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto. He is a professor in the Department of Biophysics at the University of Texas Southwestern Medical Center.
References
External links
1943 births
German biochemists
German biophysicists
German Nobel laureates
Howard Hughes Medical Investigators
Living people
Max Planck Society people
Members of the United States National Academy of Sciences
Nobel laureates in Chemistry
Technical University of Munich alumni
University of Texas Southwestern Medical Center faculty
Knights Commander of the Order of Merit of the Federal Republic of Germany
Researchers of photosynthesis | Johann Deisenhofer | [
"Chemistry"
] | 415 | [
"Biochemists",
"Photochemists",
"Photosynthesis",
"Researchers of photosynthesis"
] |
569,154 | https://en.wikipedia.org/wiki/Perpetual%20calendar | A perpetual calendar is a calendar valid for many years, usually designed to look up the day of the week for a given date in the past or future.
For the Gregorian and Julian calendars, a perpetual calendar typically consists of one of three general variations:
Fourteen one-year calendars, plus a table to show which one-year calendar is to be used for any given year. These one-year calendars divide evenly into two sets of seven calendars: seven for each common year (the year that does not have a February 29) with each of the seven starting on a different day of the week, and seven for each leap year, again with each one starting on a different day of the week, totaling fourteen. (See Dominical letter for one common naming scheme for the 14 calendars.)
Seven (31-day) one-month calendars (or seven each of 28–31 day month lengths, for a total of 28) and one or more tables to show which calendar is used for any given month. Some perpetual calendars' tables slide against each other so that aligning two scales with one another reveals the specific month calendar via a pointer or window mechanism. The seven calendars may be combined into one, either with 13 columns of which only seven are revealed, or with movable day-of-week names (as shown in the pocket perpetual calendar picture).
A mixture of the above two variations - a one-year calendar in which the names of the months are fixed and the days of the week and dates are shown on movable pieces which can be swapped around as necessary.
Such a perpetual calendar fails to indicate the dates of moveable feasts such as Easter, which are calculated based on a combination of events in the tropical year and lunar cycles. These issues are dealt with in great detail in computus.
An early example of a perpetual calendar for practical use is found in the Nürnberger Handschrift GNM 3227a. The calendar covers the period of 1390–1495 (on which grounds the manuscript is dated to c. 1389). For each year of this period, it lists the number of weeks between Christmas and Quinquagesima. This is the first known instance of a tabular form of perpetual calendar allowing the calculation of the moveable feasts that became popular during the 15th century.
The chapel Cappella dei Mercanti, Turin contains a perpetual calendar machine made by Giovanni Plana using rotating drums.
Other uses of the term "perpetual calendar"
Offices and retail establishments often display devices containing a set of elements to form all possible numbers from 1 through 31, as well as the names/abbreviations for the months and the days of the week, to show the current date for the convenience of people who might be signing and dating documents such as checks. Establishments that serve alcoholic beverages may use a variant that shows the current month and day minus the legal age of alcohol consumption in years, indicating the latest legal birth date for alcohol purchases. A common device consists of two cubes in a holder. One cube carries the digits zero to five. The other bears the digits 0, 1, 2, 6 (or 9 if inverted), 7, and 8. This is sufficient because only the digits 1 and 2 may appear twice in a date, and both appear on each cube, while the 0 on both cubes allows every single-digit date to be shown in double-digit format. In addition to the two cubes, three blocks, each as wide as the two cubes combined and a third as tall and as deep, carry the names of the months printed on their long faces. The current month is turned forward on the front block, with the other two month blocks behind it.
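The claim that these two digit sets suffice is easy to verify by brute force; a minimal sketch, using the digit sets described above, with 6 doubling as an inverted 9:

```python
# Verify that two cubes suffice for all dates 01..31, with 6 read as 9.
CUBE_A = {0, 1, 2, 3, 4, 5}
CUBE_B = {0, 1, 2, 6, 7, 8} | {9}   # the 6 face reads as 9 when inverted

def displayable(day: int) -> bool:
    tens, units = divmod(day, 10)
    # Either cube may supply either digit.
    return ((tens in CUBE_A and units in CUBE_B) or
            (tens in CUBE_B and units in CUBE_A))

assert all(displayable(d) for d in range(1, 32))
print("all 31 dates displayable")
```

The check passes precisely because 0, 1, and 2 appear on both cubes, as the description above requires.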
Certain calendar reforms have been labeled perpetual calendars because their dates are fixed on the same weekdays every year. Examples are The World Calendar, the International Fixed Calendar and the Pax Calendar. Technically, these are not perpetual calendars but perennial calendars. Their purpose, in part, is to eliminate the need for perpetual calendar tables, algorithms, and computation devices.
In watchmaking, "perpetual calendar" describes a calendar mechanism that correctly displays the date on the watch "perpetually", taking into account the different lengths of the months as well as leap years. The internal mechanism will move the dial to the next day.
Algorithms
Perpetual calendars use algorithms to compute the day of the week for any given year, month, and day of the month. Even though the individual operations in the formulas can be very efficiently implemented in software, they are too complicated for most people to perform all of the arithmetic mentally. Perpetual calendar designers hide the complexity in tables to simplify their use.
A perpetual calendar employs a table for finding which of fourteen yearly calendars to use. A table for the Gregorian calendar expresses its 400-year grand cycle: 303 common years and 97 leap years total to 146,097 days, or exactly 20,871 weeks. This cycle breaks down into one 100-year period with 25 leap years, making 36,525 days, or one day less than 5,218 full weeks; and three 100-year periods with 24 leap years each, making 36,524 days, or two days less than 5,218 full weeks.
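These figures can be checked directly; a quick sketch of the arithmetic:

```python
# Check the Gregorian 400-year cycle arithmetic quoted above.
common, leap = 303, 97
days = common * 365 + leap * 366
assert days == 146_097 and days % 7 == 0
print(days // 7)                      # 20871 weeks exactly

# A century with 25 leap years vs. three centuries with 24 each:
print(75 * 365 + 25 * 366)            # 36525 = 5218 * 7 - 1
print(76 * 365 + 24 * 366)            # 36524 = 5218 * 7 - 2
```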
Within each 100-year block, the cyclic nature of the Gregorian calendar proceeds in the same fashion as its Julian predecessor: A common year begins and ends on the same day of the week, so the following year will begin on the next successive day of the week. A leap year has one more day, so the year following a leap year begins on the second day of the week after the leap year began. Every four years, the starting weekday advances five days, so over a 28-year period it advances 35 days, returning to the same place in both the leap year progression and the starting weekday. This cycle completes three times in 84 years, leaving 16 years in the fourth, incomplete cycle of the century.
A major complicating factor in constructing a perpetual calendar algorithm is the peculiar and variable length of February, which was at one time the last month of the year, leaving the first 11 months, March through January, with a five-month repeating pattern: 31, 30, 31, 30, 31, ..., so that the offset from March of the starting day of the week for any month could be easily determined. Zeller's congruence, a well-known algorithm for finding the day of the week for any date, explicitly defines January and February as the "13th" and "14th" months of the previous year to take advantage of this regularity, but the month-dependent calculation is still very complicated for mental arithmetic:

h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + ⌊J/4⌋ − 2J) mod 7

where h is the day of the week (0 = Saturday, 1 = Sunday, ..., 6 = Friday), q is the day of the month, m is the month (3 = March, ..., 14 = February), K is the year of the century (year mod 100), and J is the zero-based century (⌊year/100⌋), with January and February counted as part of the previous year.
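A minimal implementation of Zeller's congruence follows, using the "months 13 and 14" trick described above; it uses +5J in place of −2J, which is equivalent modulo 7 but keeps the sum non-negative:

```python
def zeller(year: int, month: int, day: int) -> int:
    """Day of the week by Zeller's congruence (Gregorian calendar).

    Returns 0=Saturday, 1=Sunday, ..., 6=Friday. January and February
    are counted as months 13 and 14 of the previous year.
    """
    if month < 3:
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    # +5J is congruent to -2J (mod 7) but avoids negative intermediate sums.
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

# 15 October 1582, the first day of the Gregorian calendar, was a Friday.
assert zeller(1582, 10, 15) == 6
```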
Instead, a table-based perpetual calendar provides a simple lookup mechanism to find the offset for the day of the week for the first day of each month. To simplify the table, in a leap year January and February must either be treated as a separate year or have extra entries in the month table.
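A compact sketch of the table-based approach follows. This particular offset table is the well-known Sakamoto variant, shown purely for illustration; it is not the specific table printed in any given perpetual calendar, and its day numbering (0 = Sunday) differs from Zeller's:

```python
# Month offsets for a table-driven day-of-week lookup (Sakamoto's method).
MONTH_OFFSET = [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4]

def day_of_week(year: int, month: int, day: int) -> int:
    """Returns 0=Sunday, ..., 6=Saturday (Gregorian calendar).

    Treating January and February as belonging to the previous year
    folds the leap-day correction into the year arithmetic, which is
    exactly the complication the extra table entries deal with.
    """
    if month < 3:
        year -= 1
    return (year + year // 4 - year // 100 + year // 400
            + MONTH_OFFSET[month - 1] + day) % 7

assert day_of_week(2000, 1, 1) == 6   # 1 January 2000 was a Saturday
```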
Perpetual Julian and Gregorian calendar tables
Table one (cyd)
The following calendar works for any date from 15 October 1582 onwards, but only for Gregorian calendar dates.
Table two (cymd)
Table three (dmyc)
See also
Determination of the day of the week
Doomsday rule
Long Now Foundation
Year 10,000 problem
References
External links
Sliding Perpetual Calendar on one sheet of paper (U.S. version, PDF)
Sliding Perpetual Calendar on one sheet of paper (non U.S. version, PDF)
Conical or Pyramidal Year Calendar (with "First of March table", PDF)
New Perpetual Calendar for any year
Perpetual Calendar in JavaScript
Calendars | Perpetual calendar | [
"Physics"
] | 1,563 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
569,180 | https://en.wikipedia.org/wiki/Plant%20propagation | Plant propagation is the process by which new plants grow from various sources, including seeds, cuttings, and other plant parts. Plant propagation can refer to both man-made and natural processes.
Propagation typically occurs as a step in the overall cycle of plant growth. For seeds, it happens after ripening and dispersal; for vegetative parts, it happens after detachment or pruning; for asexually-reproducing plants, such as strawberry, it happens as the new plant develops from existing parts.
Countless plants are propagated each day in horticulture and agriculture.
Plant propagation is vital to agriculture and horticulture, not just for human food production but also for forest and fibre crops, as well as traditional and herbal medicine. It is also important for plant breeding.
Sexual propagation
Seeds and spores can be used for reproduction (e.g. sowing). Seeds are typically produced from sexual reproduction within a species because genetic recombination has occurred. A plant grown from seeds may have different characteristics from its parents. Some species produce seeds that require special conditions to germinate, such as cold treatment. The seeds of many Australian plants and plants from southern Africa and the American west require smoke or fire to germinate. Some plant species, including many trees, do not produce seeds until they reach maturity, which may take many years. Seeds can be difficult to acquire, and some plants do not produce seed at all. Some plants (like certain plants modified using genetic use restriction technology) may produce seed, but not a fertile seed. In certain cases, this is done to prevent the accidental spreading of these plants, for example by birds and other animals.
Asexual propagation
Plant roots, stems, and leaves have a number of mechanisms for asexual or vegetative reproduction, which horticulturists employ to multiply or clone plants rapidly, such as in tissue culture and grafting. Plants are produced using material from a single parent and as such, there is no exchange of genetic material, therefore vegetative propagation methods almost always produce plants that are identical to the parent.
In some plants, seeds can be produced without fertilization and the seeds contain only the genetic material of the parent plant. Therefore, propagation via asexual seeds or apomixis is asexual reproduction but not vegetative propagation.
Techniques for vegetative propagation include:
Air or ground layering
Division
Grafting and bud grafting, widely used in fruit tree propagation
Micropropagation
Offsets
Stolons (runners)
Storage organs such as bulbs, corms, tubers, and rhizomes
Striking or cuttings
Twin-scaling
Heated propagator
A heated propagator is a horticultural device to maintain a warm and damp environment for seeds and cuttings to grow in. They generally provide bottom heat (maintained at a particular temperature) and high humidity, which is essential in successful seed germination and in helping cuttings to take root. In colder climates they are sometimes used for plants like peppers and sweet peas which need warmer environments (about 15°C, for the plants listed) in order to germinate. If excessive condensation forms on the inside of the lid, the gardener can open the ventilating holes to regulate the temperature a little.
Non-electric propagators (mainly a seed tray and a clear plastic lid) are a lot cheaper to purchase than a heated propagator, but without the constant regulated warmth and bottom heat provided by a heated propagator, growth of seedlings tends to be slower and less consistent (with increased risk of seeds failing to germinate).
Seed propagation mat
An electric seed-propagation mat is a heated rubber mat covered by a metal cage that is used in gardening. The mats are made so that planters containing seedlings can be placed on top of the metal cage without the risk of starting a fire. Another example is a seedling heat mat, multiple layers of durable, water resistant plastic material with insulated heating coils embedded inside (similar to underfloor heating systems, but with rubber mat instead of flooring). In extreme cold, gardeners place a loose plastic cover over the planters/mats which creates a sort of miniature greenhouse. The constant and predictable heat allows people to raise seedlings in the winter months when the weather is generally too cold for seedlings to survive naturally outside. When combined with a lighting system, many plants can be grown indoors using these mats. This can increase the variety of plants that a gardener can use.
See also
Adventitious
Clonal colony
Fruit tree propagation
Orthodox seed
Recalcitrant seed
Selection methods in plant breeding based on mode of reproduction
Propagation of grapevines
Weeping willow, an ornamental tree (Salix babylonica and related hybrids)
Propagation of Christmas trees
Hemerochory
Escaped plant
References
External links
Reference Guide to plant care handling and merchandising
Bibliography
Charles W. Heuser (1997). The Complete Book of Plant Propagation. Taunton Press.
Propagation
Horticultural techniques
Agronomy
Forest management | Plant propagation | [
"Biology"
] | 1,030 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
569,263 | https://en.wikipedia.org/wiki/Acute-phase%20protein | Acute-phase proteins (APPs) are a class of proteins whose concentrations in blood plasma either increase (positive acute-phase proteins) or decrease (negative acute-phase proteins) in response to inflammation. This response is called the acute-phase reaction (also called acute-phase response). The acute-phase reaction characteristically involves fever, acceleration of peripheral leukocytes, circulating neutrophils and their precursors. The terms acute-phase protein and acute-phase reactant (APR) are often used synonymously, although some APRs are (strictly speaking) polypeptides rather than proteins.
In response to injury, local inflammatory cells (neutrophil granulocytes and macrophages) secrete a number of cytokines into the bloodstream, most notably the interleukins IL-1 and IL-6, and TNF-α. The liver responds by producing many acute-phase reactants. At the same time, the production of a number of other proteins is reduced; these proteins are, therefore, referred to as "negative" acute-phase reactants. Increased acute-phase proteins from the liver may also contribute to the promotion of sepsis.
Regulation of synthesis
TNF-α, IL-1β and IFN-γ are important for the expression of inflammatory mediators such as prostaglandins and leukotrienes, and they also cause the production of platelet-activating factor and IL-6. After stimulation with proinflammatory cytokines, Kupffer cells produce IL-6 in the liver and present it to the hepatocytes. IL-6 is the major mediator for the hepatocytic secretion of APPs. Synthesis of APP can also be regulated indirectly by cortisol. Cortisol can enhance expression of IL-6 receptors in liver cells and induce IL-6-mediated production of APPs.
Positive
Positive acute-phase proteins serve (as part of the innate immune system) different physiological functions within the immune system. Some act to destroy or inhibit growth of microbes, e.g., C-reactive protein, mannose-binding protein, complement factors, ferritin, ceruloplasmin, serum amyloid A and haptoglobin. Others give negative feedback on the inflammatory response, e.g. serpins. Alpha 2-macroglobulin and coagulation factors affect coagulation, mainly stimulating it. This pro-coagulant effect may limit infection by trapping pathogens in local blood clots. Also, some products of the coagulation system can contribute to the innate immune system by their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells.
Negative
"Negative" acute-phase proteins decrease in inflammation. Examples include albumin, transferrin, transthyretin, retinol-binding protein, antithrombin, transcortin. The decrease of such proteins may be used as markers of inflammation. The physiological role of decreased synthesis of such proteins is generally to save amino acids for producing "positive" acute-phase proteins more efficiently. Theoretically, a decrease in transferrin could additionally be decreased by an upregulation of transferrin receptors, but the latter does not appear to change with inflammation.
While the production of C3 (a complement factor) increases in the liver, its plasma concentration often falls because of increased turnover; it is therefore often regarded as a negative acute-phase protein.
Clinical significance
Measurement of acute-phase proteins, especially C-reactive protein, is a useful marker of inflammation in both medical and veterinary clinical pathology. It correlates with the erythrocyte sedimentation rate (ESR), though not always directly. This is because the ESR is largely dependent on the elevation of fibrinogen, an acute-phase reactant with a half-life of approximately one week. Fibrinogen will therefore remain elevated for longer despite removal of the inflammatory stimuli. In contrast, C-reactive protein (with a half-life of 6–8 hours) rises rapidly and can quickly return to within the normal range if treatment is employed. For example, in active systemic lupus erythematosus, one may find a raised ESR but normal C-reactive protein. Acute-phase proteins may also indicate liver failure.
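The differing half-lives explain why the two markers diverge; a rough sketch, assuming simple first-order decay with no ongoing synthesis (a deliberate simplification) and the half-lives quoted above:

```python
# Rough illustration: how fast an acute-phase marker falls once the
# inflammatory stimulus is removed, assuming first-order decay and no
# continuing synthesis (both simplifying assumptions).
def remaining(half_life_h: float, t_h: float) -> float:
    return 0.5 ** (t_h / half_life_h)

for t in (24, 72, 168):               # hours after the stimulus is removed
    crp = remaining(7, t)             # CRP half-life ~6-8 h
    fib = remaining(7 * 24, t)        # fibrinogen half-life ~1 week
    print(f"{t:>3} h: CRP {crp:6.1%} of peak, fibrinogen {fib:6.1%}")
# After one day CRP is ~9% of peak while fibrinogen (and hence the ESR)
# is still ~90%, which is why the ESR lags behind clinical improvement.
```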
References
External links
http://eclinpath.com/chemistry/proteins/acute-phase-proteins/
Immune system | Acute-phase protein | [
"Biology"
] | 924 | [
"Immune system",
"Organ systems"
] |
569,315 | https://en.wikipedia.org/wiki/Dimethyl%20ether | Dimethyl ether (DME; also known as methoxymethane) is the organic compound with the formula CH3OCH3,
(sometimes ambiguously simplified to C2H6O as it is an isomer of ethanol). The simplest ether, it is a colorless gas that is a useful precursor to other organic compounds and an aerosol propellant that is currently being demonstrated for use in a variety of fuel applications.
Dimethyl ether was first synthesised by Jean-Baptiste Dumas and Eugene Péligot in 1835 by distillation of methanol and sulfuric acid.
Production
Approximately 50,000 tons were produced in 1985 in Western Europe by dehydration of methanol:

2 CH3OH → CH3OCH3 + H2O
The required methanol is obtained from synthesis gas (syngas). Other possible improvements call for a dual catalyst system that permits both methanol synthesis and dehydration in the same process unit, with no methanol isolation and purification.
Both the one-step and two-step processes above are commercially available. The two-step process is relatively simple and start-up costs are relatively low. A one-step liquid-phase process is in development.
From biomass
Dimethyl ether is a synthetic second generation biofuel (BioDME), which can be produced from lignocellulosic biomass. The EU is considering BioDME in its potential biofuel mix in 2030; it can also be made from biogas or methane from animal, food, and agricultural waste, or even from shale gas or natural gas.
The Volvo Group is the coordinator for the European Community Seventh Framework Programme project BioDME where Chemrec's BioDME pilot plant is based on black liquor gasification in Piteå, Sweden.
Applications
The largest use of dimethyl ether is as the feedstock for the production of the methylating agent dimethyl sulfate, which entails its reaction with sulfur trioxide:

CH3OCH3 + SO3 → (CH3O)2SO2
Dimethyl ether can also be converted into acetic acid using carbonylation technology related to the Monsanto acetic acid process:

CH3OCH3 + 2 CO + H2O → 2 CH3COOH
Laboratory reagent and solvent
Dimethyl ether is a low-temperature solvent and extraction agent, applicable to specialised laboratory procedures. Its usefulness is limited by its low boiling point (−24 °C), but the same property facilitates its removal from reaction mixtures. Dimethyl ether is the precursor to the useful alkylating agent trimethyloxonium tetrafluoroborate.
Niche applications
A mixture of dimethyl ether and propane is used in some over-the-counter "freeze spray" products to treat warts by freezing them. In this role, it has supplanted halocarbon compounds (Freon).
Dimethyl ether is also a component of certain high temperature "Map-Pro" blowtorch gas blends, supplanting the use of methyl acetylene and propadiene mixtures.
Dimethyl ether is also used as a propellant in aerosol products. Such products include hair spray, bug spray and some aerosol glue products.
Research
Fuel
A potentially major use of dimethyl ether is as a substitute for propane in LPG, used as fuel in households and industry. Dimethyl ether can also be used as a blendstock in propane autogas.
It is also a promising fuel in diesel engines, and gas turbines. For diesel engines, an advantage is the high cetane number of 55, compared to that of diesel fuel from petroleum, which is 40–53. Only moderate modifications are needed to convert a diesel engine to burn dimethyl ether. The simplicity of this short carbon chain compound leads to very low emissions of particulate matter during combustion. For these reasons as well as being sulfur-free, dimethyl ether meets even the most stringent emission regulations in Europe (EURO5), U.S. (U.S. 2010), and Japan (2009 Japan).
At the European Shell Eco Marathon, an unofficial world championship for mileage, a vehicle running on 100% dimethyl ether achieved 589 km/L (169.8 cm3/100 km) of gasoline-equivalent fuel, using a 50 cm3-displacement two-stroke engine. As well as winning, the team beat the previous record of 306 km/L (326.8 cm3/100 km), which it had set in 2007.
Studying the combustion of dimethyl ether requires a chemical kinetic mechanism that can be used in computational fluid dynamics calculations.
Refrigerant
Dimethyl ether is a refrigerant with ASHRAE refrigerant designation R-E170. It is also used in refrigerant blends with e.g. ammonia, carbon dioxide, butane and propene.
Dimethyl ether was the first refrigerant. In 1876, the French engineer Charles Tellier bought the ex-Elder Dempster 690-ton cargo ship Eboe and fitted it with a methyl-ether refrigerating plant of his own design. The ship was renamed Le Frigorifique and successfully imported a cargo of refrigerated meat from Argentina. However, the machinery could be improved, and in 1877 another refrigerated ship called Paraguay, with a refrigerating plant improved by Ferdinand Carré, was put into service on the South American run.
Safety
Unlike other alkyl ethers, dimethyl ether resists autoxidation. Dimethyl ether is also relatively non-toxic, although it is highly flammable. On July 28, 1948, a BASF factory in Ludwigshafen suffered an explosion after 30 tonnes of dimethyl ether leaked from a tank and ignited in the air. 200 people died, and a third of the industrial plant was destroyed.
Data sheet
Routes to produce dimethyl ether
Vapor pressure
See also
Methanol economy
References
External links
The International DME Association
NOAA site for NFPA 704
XTL & DME Institute
Aerosol propellants
Dialkyl ethers
Fuels
Synthetic fuels
Symmetrical ethers
Organic compounds with 2 carbon atoms
Substances discovered in the 19th century | Dimethyl ether | [
"Chemistry"
] | 1,254 | [
"Organic compounds",
"Organic compounds with 2 carbon atoms",
"Fuels",
"Chemical energy sources"
] |
569,399 | https://en.wikipedia.org/wiki/Stimulus%20%28physiology%29 | In physiology, a stimulus is a change in a living thing's internal or external environment. This change can be detected by an organism or organ using sensitivity, and leads to a physiological reaction. Sensory receptors can receive stimuli from outside the body, as in touch receptors found in the skin or light receptors in the eye, as well as from inside the body, as in chemoreceptors and mechanoreceptors. When a stimulus is detected by a sensory receptor, it can elicit a reflex via stimulus transduction. An internal stimulus is often the first component of a homeostatic control system. External stimuli are capable of producing systemic responses throughout the body, as in the fight-or-flight response. In order for a stimulus to be detected with high probability, its level of strength must exceed the absolute threshold; if a signal does reach threshold, the information is transmitted to the central nervous system (CNS), where it is integrated and a decision on how to react is made. Although stimuli commonly cause the body to respond, it is the CNS that finally determines whether a signal causes a reaction or not.
Types
Internal
Homeostatic imbalances
Homeostatic imbalances are the main driving force for changes of the body. These stimuli are monitored closely by receptors and sensors in different parts of the body. These sensors are mechanoreceptors, chemoreceptors and thermoreceptors that, respectively, respond to pressure or stretching, chemical changes, or temperature changes. Examples of mechanoreceptors include baroreceptors which detect changes in blood pressure, Merkel's discs which can detect sustained touch and pressure, and hair cells which detect sound stimuli. Homeostatic imbalances that can serve as internal stimuli include nutrient and ion levels in the blood, oxygen levels, and water levels. Deviations from the homeostatic ideal may generate a homeostatic emotion, such as pain, thirst or fatigue, that motivates behavior that will restore the body to stasis (such as withdrawal, drinking or resting).
Blood pressure
Blood pressure, heart rate, and cardiac output are measured by stretch receptors found in the carotid arteries. Nerves embed themselves within these receptors and, when they detect stretching, are stimulated and fire action potentials to the central nervous system. These impulses inhibit the constriction of blood vessels and lower the heart rate. If these nerves do not detect stretching, the body treats the resulting low blood pressure as a dangerous stimulus: no signals are sent, the inhibitory CNS action ceases, blood vessels constrict, and the heart rate increases, causing an increase in blood pressure in the body.
External
Touch and pain
Sensory feelings, especially pain, are stimuli that can elicit a large response and cause neurological changes in the body. Pain also causes a behavioral change in the body, which is proportional to the intensity of the pain. The feeling is recorded by sensory receptors on the skin and travels to the central nervous system, where it is integrated and a decision on how to respond is made; if it is decided that a response must be made, a signal is sent back down to a muscle, which behaves appropriately according to the stimulus. The postcentral gyrus is the location of the primary somatosensory area, the main sensory receptive area for the sense of touch.
Pain receptors are known as nociceptors. Two main types of nociceptors exist: A-fiber nociceptors and C-fiber nociceptors. A-fiber receptors are myelinated and conduct signals rapidly; they are mainly used to conduct fast, sharp types of pain. Conversely, C-fiber receptors are unmyelinated and transmit slowly; these receptors conduct slow, burning, diffuse pain.
The absolute threshold for touch is the minimum amount of sensation needed to elicit a response from touch receptors. This amount of sensation has a definable value and is often considered to be the force exerted by dropping the wing of a bee onto a person's cheek from a distance of one centimeter. This value will change based on the body part being touched.
Vision
Vision provides opportunity for the brain to perceive and respond to changes occurring around the body. Information, or stimuli, in the form of light enters the retina, where it excites a special type of neuron called a photoreceptor cell. A local graded potential begins in the photoreceptor, where it excites the cell enough for the impulse to be passed along through a track of neurons to the central nervous system. As the signal travels from photoreceptors to larger neurons, action potentials must be created for the signal to have enough strength to reach the CNS. If the stimulus does not warrant a strong enough response, it is said to not reach absolute threshold, and the body does not react. However, if the stimulus is strong enough to create an action potential in neurons away from the photoreceptor, the body will integrate the information and react appropriately. Visual information is processed in the occipital lobe of the CNS, specifically in the primary visual cortex.
The absolute threshold for vision is the minimum amount of sensation needed to elicit a response from photoreceptors in the eye. This amount of sensation has a definable value and is often considered to be the amount of light present from someone holding up a single candle 30 miles away, if one's eyes were adjusted to the dark.
Smell
Smell allows the body to recognize chemical molecules in the air through inhalation. Olfactory organs located on either side of the nasal septum consist of olfactory epithelium and lamina propria. The olfactory epithelium, which contains olfactory receptor cells, covers the inferior surface of the cribriform plate, the superior portion of the perpendicular plate, and the superior nasal concha. Only roughly two percent of airborne compounds inhaled are carried to olfactory organs as a small sample of the air being inhaled. Olfactory receptors extend past the epithelial surface, providing a base for many cilia that lie in the surrounding mucus. Odorant-binding proteins interact with these cilia, stimulating the receptors. Odorants are generally small organic molecules. Greater water and lipid solubility is related directly to stronger-smelling odorants. Odorant binding to G protein-coupled receptors activates adenylate cyclase, which converts ATP to cAMP. cAMP, in turn, promotes the opening of sodium channels, resulting in a localized potential.
The absolute threshold for smell is the minimum amount of sensation needed to elicit a response from receptors in the nose. This amount of sensation has a definable value and is often considered to be a single drop of perfume in a six-room house. This value will change depending on what substance is being smelled.
Taste
Taste records the flavoring of food and other materials that pass across the tongue and through the mouth. Gustatory cells are located on the surface of the tongue and adjacent portions of the pharynx and larynx. Gustatory cells form within taste buds, clusters of specialized epithelial cells, and are generally turned over every ten days. From each cell protrude microvilli, sometimes called taste hairs, through the taste pore and into the oral cavity. Dissolved chemicals interact with these receptor cells; different tastes bind to specific receptors. Salt and sour receptors are chemically gated ion channels, which depolarize the cell. Sweet, bitter, and umami tastes act through specialized G protein-coupled receptors that signal via gustducin. Both divisions of receptor cells release neurotransmitters to afferent fibers, causing action potential firing.
The absolute threshold for taste is the minimum amount of sensation needed to elicit a response from receptors in the mouth. This amount of sensation has a definable value and is often considered to be a single drop of quinine sulfate in 250 gallons of water.
Sound
Changes in pressure caused by sound reaching the external ear resonate in the tympanic membrane, which articulates with the auditory ossicles, or the bones of the middle ear. These tiny bones multiply these pressure fluctuations as they pass the disturbance into the cochlea, a spiral-shaped bony structure within the inner ear. Hair cells in the cochlear duct, specifically the organ of Corti, are deflected as waves of fluid and membrane motion travel through the chambers of the cochlea. Bipolar sensory neurons located in the center of the cochlea monitor the information from these receptor cells and pass it on to the brainstem via the cochlear branch of cranial nerve VIII. Sound information is processed in the temporal lobe of the CNS, specifically in the primary auditory cortex.
The absolute threshold for sound is the minimum amount of sensation needed to elicit a response from receptors in the ears. This amount of sensation has a definable value and is often considered to be a watch ticking in an otherwise soundless environment 20 feet away.
Equilibrium
Semicircular ducts, which are connected directly to the cochlea, can interpret and convey to the brain information about equilibrium by a method similar to the one used for hearing. Hair cells in these parts of the ear protrude kinocilia and stereocilia into a gelatinous material that lines the ducts of this canal. In parts of these semicircular canals, specifically the maculae, calcium carbonate crystals known as statoconia rest on the surface of this gelatinous material. When tilting the head or when the body undergoes linear acceleration, these crystals move, disturbing the cilia of the hair cells and, consequently, affecting the release of neurotransmitter to be taken up by surrounding sensory nerves. In other areas of the semicircular canal, specifically the ampulla, a structure known as the cupula (analogous to the gelatinous material in the maculae) distorts hair cells in a similar fashion when the fluid medium that surrounds it causes the cupula itself to move. The ampulla communicates to the brain information about the head's horizontal rotation. Neurons of the adjacent vestibular ganglia monitor the hair cells in these ducts. These sensory fibers form the vestibular branch of cranial nerve VIII.
Cellular response
In general, cellular response to stimuli is defined as a change in state or activity of a cell in terms of movement, secretion, enzyme production, or gene expression. Receptors on cell surfaces are sensing components that monitor stimuli and respond to changes in the environment by relaying the signal to a control center for further processing and response. Stimuli are always converted into electrical signals via transduction. This electrical signal, or receptor potential, takes a specific pathway through the nervous system to initiate a systematic response. Each type of receptor is specialized to respond preferentially to only one kind of stimulus energy, called the adequate stimulus. Sensory receptors have a well-defined range of stimuli to which they respond, and each is tuned to the particular needs of the organism. Stimuli are relayed throughout the body by mechanotransduction or chemotransduction, depending on the nature of the stimulus.
Mechanical
In response to a mechanical stimulus, cellular sensors of force are proposed to be extracellular matrix molecules, cytoskeleton, transmembrane proteins, proteins at the membrane-phospholipid interface, elements of the nuclear matrix, chromatin, and the lipid bilayer. Response can be twofold: the extracellular matrix, for example, is a conductor of mechanical forces but its structure and composition is also influenced by the cellular responses to those same applied or endogenously generated forces. Mechanosensitive ion channels are found in many cell types and it has been shown that the permeability of these channels to cations is affected by stretch receptors and mechanical stimuli. This permeability of ion channels is the basis for the conversion of the mechanical stimulus into an electrical signal.
Chemical
Chemical stimuli, such as odorants, are received by cellular receptors that are often coupled to ion channels responsible for chemotransduction. Such is the case in olfactory cells. Depolarization in these cells result from opening of non-selective cation channels upon binding of the odorant to the specific receptor. G protein-coupled receptors in the plasma membrane of these cells can initiate second messenger pathways that cause cation channels to open.
In response to stimuli, the sensory receptor initiates sensory transduction by creating graded potentials or action potentials in the same cell or in an adjacent one. Sensitivity to stimuli is obtained by chemical amplification through second messenger pathways in which enzymatic cascades produce large numbers of intermediate products, increasing the effect of one receptor molecule.
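The amplification argument is essentially multiplicative; the sketch below makes that explicit, with per-stage gains that are placeholder orders of magnitude chosen only for illustration, not measured values.

```python
# Illustrative second-messenger amplification; the per-stage gains are
# placeholder orders of magnitude, not measured values.
stage_gains = {
    "G proteins activated per bound receptor": 10,
    "second messengers produced per G protein": 1_000,
    "ions passing per opened channel": 100,
}
signal = 1  # one stimulus molecule binding one receptor
for stage, gain in stage_gains.items():
    signal *= gain
    print(f"{stage}: running total {signal:,}")
# A roughly million-fold gain of this kind is why a single receptor
# event can yield a measurable receptor potential.
```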
Systematic response
Nervous-system response
Though receptors and stimuli are varied, most extrinsic stimuli first generate localized graded potentials in the neurons associated with the specific sensory organ or tissue. In the nervous system, internal and external stimuli can elicit two different categories of responses: an excitatory response, normally in the form of an action potential, and an inhibitory response. When a neuron is stimulated by an excitatory impulse, neuronal dendrites are bound by neurotransmitters which cause the cell to become permeable to a specific type of ion; the type of neurotransmitter determines to which ion the membrane will become permeable. In excitatory postsynaptic potentials, an excitatory response is generated. This is caused by an excitatory neurotransmitter, normally glutamate, binding to a neuron's dendrites, causing an influx of sodium ions through channels located near the binding site.
This change in membrane permeability in the dendrites is known as a local graded potential and causes the membrane voltage to change from a negative resting potential to a more positive voltage, a process known as depolarization. The opening of sodium channels allows nearby sodium channels to open, allowing the change in permeability to spread from the dendrites to the cell body. If a graded potential is strong enough, or if several graded potentials occur in a fast enough frequency, the depolarization is able to spread across the cell body to the axon hillock. From the axon hillock, an action potential can be generated and propagated down the neuron's axon, causing sodium ion channels in the axon to open as the impulse travels. Once the signal begins to travel down the axon, the membrane potential has already passed threshold, which means that it cannot be stopped. This phenomenon is known as an all-or-nothing response. Groups of sodium channels opened by the change in membrane potential strengthen the signal as it travels away from the axon hillock, allowing it to move the length of the axon. As the depolarization reaches the end of the axon, or the axon terminal, the end of the neuron becomes permeable to calcium ions, which enters the cell via calcium ion channels. Calcium causes the release of neurotransmitters stored in synaptic vesicles, which enter the synapse between two neurons known as the presynaptic and postsynaptic neurons; if the signal from the presynaptic neuron is excitatory, it will cause the release of an excitatory neurotransmitter, causing a similar response in the postsynaptic neuron. These neurons may communicate with thousands of other receptors and target cells through extensive, complex dendritic networks. Communication between receptors in this fashion enables discrimination and the more explicit interpretation of external stimuli. Effectively, these localized graded potentials trigger action potentials that communicate, in their frequency, along nerve axons eventually arriving in specific cortexes of the brain. In these also highly specialized parts of the brain, these signals are coordinated with others to possibly trigger a new response.
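The threshold logic described here (graded potentials that decay away unless they sum past threshold, followed by an all-or-nothing spike) is often illustrated with a leaky integrate-and-fire model; a minimal sketch with generic textbook-style parameters, not tied to any particular neuron:

```python
# Minimal leaky integrate-and-fire neuron: graded inputs leak away unless
# they drive the membrane to threshold, at which point an all-or-nothing
# spike fires and the membrane repolarizes. Parameters are illustrative.
def simulate(inputs, dt=1.0, tau=10.0, v_rest=-70.0, v_thresh=-55.0,
             v_reset=-75.0):
    v, spikes = v_rest, []
    for t, i_in in enumerate(inputs):
        v += dt * (-(v - v_rest) + i_in) / tau   # leak + graded input
        if v >= v_thresh:                        # threshold crossed:
            spikes.append(t)                     # all-or-nothing spike,
            v = v_reset                          # then repolarize
    return spikes

weak = [5.0] * 100     # sub-threshold: the graded potential decays, no spikes
strong = [20.0] * 100  # supra-threshold: repetitive all-or-nothing firing
print(simulate(weak), simulate(strong)[:5])
```

The weak input never reaches threshold and produces no spikes, while the strong input fires repeatedly, mirroring the distinction between a local graded potential and a propagated action potential.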
If a signal from the presynaptic neuron is inhibitory, inhibitory neurotransmitters, normally GABA, will be released into the synapse. This neurotransmitter causes an inhibitory postsynaptic potential in the postsynaptic neuron. This response will cause the postsynaptic neuron to become permeable to chloride ions, making the membrane potential of the cell more negative; a more negative membrane potential makes it more difficult for the cell to fire an action potential and prevents any signal from being passed on through the neuron. Depending on the type of stimulus, a neuron can be either excitatory or inhibitory.
Muscular-system response
Nerves in the peripheral nervous system spread out to various parts of the body, including muscle fibers. The spot at which a motor neuron attaches to the muscle fiber it innervates is known as the neuromuscular junction. When muscles receive information from internal or external stimuli, muscle fibers are stimulated by their respective motor neuron. Impulses are passed from the central nervous system down neurons until they reach the motor neuron, which releases the neurotransmitter acetylcholine (ACh) into the neuromuscular junction. ACh binds to nicotinic acetylcholine receptors on the surface of the muscle cell and opens ion channels, allowing sodium ions to flow into the cell and potassium ions to flow out; this ion movement causes a depolarization, which allows for the release of calcium ions within the cell. Calcium ions bind to proteins within the muscle cell to allow for muscle contraction, the ultimate consequence of the stimulus.
Endocrine-system response
Vasopressin
The endocrine system is affected by many internal and external stimuli. One internal stimulus that causes hormone release is blood pressure. Hypotension, or low blood pressure, is a major driving force for the release of vasopressin, a hormone which causes the retention of water in the kidneys. This process also increases an individual's thirst. If an individual's blood pressure returns to normal through fluid retention or fluid consumption, vasopressin release slows and less fluid is retained by the kidneys. Hypovolemia, or low fluid levels in the body, can also act as a stimulus to cause this response.
Epinephrine
Epinephrine, also known as adrenaline, is also commonly released in response to both internal and external changes. One common trigger for the release of this hormone is the fight-or-flight response. When the body encounters an external stimulus that is potentially dangerous, epinephrine is released from the adrenal glands. Epinephrine causes physiological changes in the body, such as constriction of blood vessels, dilation of pupils, increased heart and respiratory rate, and the metabolism of glucose. All of these responses to a single stimulus aid in protecting the individual, whether the decision is made to stay and fight, or to run away and avoid danger.
Digestive-system response
Cephalic phase
The digestive system can respond to external stimuli, such as the sight or smell of food, and cause physiological changes before the food ever enters the body. This reflex is known as the cephalic phase of digestion. The sight and smell of food are strong enough stimuli to cause salivation, gastric and pancreatic enzyme secretion, and endocrine secretion in preparation for the incoming nutrients; by starting the digestive process before food reaches the stomach, the body is able to more effectively and efficiently metabolize food into necessary nutrients. Once food reaches the mouth, taste and information from receptors in the mouth add to the digestive response. Chemoreceptors and mechanoreceptors, activated by chewing and swallowing, further increase the release of enzymes in the stomach and intestine.
Enteric nervous system
The digestive system is also able to respond to internal stimuli. The enteric nervous system of the digestive tract alone contains millions of neurons. These neurons act as sensory receptors that can detect changes in the digestive tract, such as food entering the small intestine. Depending on what these sensory receptors detect, certain enzymes and digestive juices from the pancreas and liver can be secreted to aid in the metabolism and breakdown of food.
Research methods and techniques
Clamping techniques
Intracellular measurements of electrical potential across the membrane can be obtained by microelectrode recording. Patch clamp techniques allow for the manipulation of the intracellular or extracellular ionic or lipid concentration while still recording potential. In this way, the effect of various conditions on threshold and propagation can be assessed.
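One simple quantity that connects the manipulated ion concentrations to the recorded potentials is the Nernst (equilibrium) potential of an ion. The sketch below computes it for potassium using illustrative mammalian-like concentrations; the numbers are assumptions chosen for the example, not data from any particular preparation.
#include <stdio.h>
#include <math.h>
/* Nernst equilibrium potential: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in).
   The K+ concentrations below are illustrative, not measurements. */
int main(void) {
    const double R = 8.314;     /* J/(mol*K) */
    const double T = 310.0;     /* body temperature in kelvin */
    const double F = 96485.0;   /* C/mol */
    const double z = 1.0;       /* valence of K+ */
    const double k_out = 5.0;   /* mM, extracellular */
    const double k_in  = 140.0; /* mM, intracellular */
    double e_volts = (R * T) / (z * F) * log(k_out / k_in);
    printf("E_K is approximately %.1f mV\n", e_volts * 1000.0);
    return 0;
}
With these values the computed potassium equilibrium potential is roughly -89 mV, and re-running the calculation with an altered extracellular concentration shows directly how such a manipulation shifts the equilibrium potential.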
Noninvasive neuronal scanning
Positron emission tomography (PET) and magnetic resonance imaging (MRI) permit the noninvasive visualization of activated regions of the brain while the test subject is exposed to different stimuli. Activity is monitored in relation to blood flow to a particular region of the brain.
Other methods
Hindlimb withdrawal time is another method. Sorin Barac et al., in a paper published in the Journal of Reconstructive Microsurgery, monitored the response of test rats to pain stimuli by applying an acute, external heat stimulus and measuring hindlimb withdrawal times (HLWT).
See also
Reflex
Sensory stimulation therapy
Stimulation
Stimulus (psychology)
References
Neurophysiology
Plant intelligence | Stimulus (physiology) | [
"Biology"
] | 4,405 | [
"Plant intelligence",
"Plants"
] |
569,403 | https://en.wikipedia.org/wiki/Pic%20du%20Midi%20de%20Bigorre | The Pic du Midi de Bigorre or simply the Pic du Midi (elevation 2,877 m) is a mountain in the French Pyrenees. It is the site of the Pic du Midi Observatory.
Pic du Midi Observatory
The Pic du Midi Observatory is an astronomical observatory located at 2,877 meters on top of the Pic du Midi de Bigorre in the French Pyrenees. It is part of the Observatoire Midi-Pyrénées (OMP), which has additional research stations in the southwestern French towns of Tarbes, Lannemezan, and Auch, as well as many partnerships in South America, Africa, and Asia, due to the guardianship it receives from the French Research Institute for Development (IRD).
Construction of the observatory began in 1878 under the auspices of the Société Ramond, but by 1882 the society decided that the spiralling costs were beyond its relatively modest means, and yielded the observatory to the French state, which took it into its possession by a law of 7 August 1882. The 8 metre dome was completed in 1908, under the ambitious direction of Benjamin Baillaud. It housed a powerful mechanical equatorial reflector which was used in 1909 to formally discredit the Martian canal theory. In 1946 Mr. Gentilli funded a dome and a 0.60-meter telescope, and in 1958, a spectrograph was installed.
A 1.06-meter (42-inch) telescope was installed in 1963, funded by NASA, and was used to take detailed photographs of the surface of the Moon in preparation for the Apollo missions. In 1965 the astronomers Pierre and Janine Connes were able to formulate a detailed analysis of the composition of the atmospheres of Mars and Venus, based on the infrared spectra gathered from these planets. The results showed atmospheres in chemical equilibrium. This served as a basis for James Lovelock, a scientist working for the Jet Propulsion Laboratory in California, to predict that those planets had no life, a prediction that gained scientific acceptance years later.
The 2-metre Bernard Lyot Telescope was placed at the observatory in 1980 on top of a 28-meter column built off to the side to prevent wind turbulence from affecting the seeing of the other telescopes. It is the largest telescope in France. The observatory also has a coronagraph, which is used to study the solar corona. A 0.60-meter telescope (the Gentilli T60 telescope) is also located at the top of Pic du Midi. Since 1982 this T60 has been dedicated to amateur astronomy and managed by a group of amateurs, called association T60.
The observatory consists of:
The 0.55-meter telescope (Robley Dome);
The 0.60-meter telescope (T60 Dome, welcoming amateur astronomers via the Association T60);
The 1.06-meter telescope (Gentilli Dome) dedicated to observations of the solar system;
The 2-meter telescope or Bernard Lyot Telescope (used with a new generation stellar spectropolarimeter);
The coronagraph HACO-CLIMSO (studies of the solar corona);
The Jean Rösch refracting telescope (studies of the solar surface);
The Charvin dome, which sheltered a photoelectric coronometer (which studied the Sun);
The Baillaud dome, reassigned to the museum in 2000 and which houses a 1:1 scale model coronagraph.
The observatory is located very close to the Greenwich meridian.
Saturn's moon Helene (Saturn XII or Dione B), was discovered by French astronomers Pierre Laques and Jean Lecacheux in 1980 from ground-based observations at Pic du Midi, and named Helene in 1988. It is also a trojan moon of Dione.
The main-belt asteroid 20488 Pic-du-Midi, discovered at Pises Observatory in 1999, was named for the observatory and the mountain it is located on.
List of discovered minor planets
The Minor Planet Center credits the discovery of the following minor planets directly to the observatory (as of 2017, no discoveries have been assigned to individual astronomers):
International Dark Sky Reserve
Officially initiated in 2009, during the International Year of Astronomy, the Pic du Midi International Dark Sky Reserve (IDSR) was certified in 2013 by the International Dark-Sky Association. It was the sixth in the world, the first in Europe and remains, to date, the only one in France.
The IDSR aims to limit the spread of light pollution in order to preserve the quality of the night sky. Co-managed by the Syndicat mixte for the tourist promotion of the Pic du Midi, the Pyrénées National Park and the Departmental Energy Union 65, its priority actions are public education on the impacts and consequences of light pollution and the establishment of responsible lighting in the Hautes-Pyrénées territory.
It covers 3,000 km2, or 65% of the Hautes-Pyrénées. The IDSR includes 251 communes spread around the Pic du Midi de Bigorre and is divided into two zones:
A core zone, devoid of any permanent lighting, with exceptional night-sky quality;
A buffer zone, in which local stakeholders recognize the importance of the nocturnal environment and undertake to protect it.
The IDSR initiated the "Ciel Etoilé" (Starry Sky) program, which is converting the 40,000 lighting points in its territory; the "Gardiens des Etoiles" (Guardians of the Stars) program, which carries out metrological monitoring of changes in light pollution; and the "Adap'Ter" project, which will identify "trames sombres" (dark corridors preserved for the movement of nocturnal wildlife).
Climate
Pic du Midi de Bigorre has a Mediterranean alpine climate with a polar temperature regime due to its high elevation. Because the Gulf Stream moderates the climate of the surrounding lowlands, temperature swings are in general quite low. This results in temperatures rarely exceeding even during lowland heat waves, and temperatures beneath being extremely rare. The UV index is higher than in the surrounding lowlands due to the elevation. Snow cover is permanent during winter months, but melts for a few months each year. Seasonal lag is extreme during winter and spring, with February clearly being the coldest month, and May having mean temperatures below freezing. Among lowland climates, the station most closely resembles Nuuk in Greenland in its temperature regime.
See also
List of astronomical observatories
References
External links
Observatoire Midi-Pyrénées
Profile of climb from Col du Tourmalet on www.climbbybike.com
A night on the "Vaisseaux d'Etoiles" (Starship) du Pic du Midi - Photo gallery
Histoire de l'observatoire du Pic du Midi (Observatory history)
Video about the Pic du Midi, by Roger Servajean, on Paris Observatory digital library
Astronomical observatories in France
Pic du Midi Observatory
Mountains of Hautes-Pyrénées
Mountains of the Pyrenees
Pic du Midi Observatory
International Dark Sky Reserves | Pic du Midi de Bigorre | [
"Astronomy"
] | 1,411 | [
"International Dark Sky Reserves",
"Dark-sky preserves"
] |
569,444 | https://en.wikipedia.org/wiki/Transduction%20%28physiology%29 | In physiology, transduction is the translation of an arriving stimulus into an action potential by a sensory receptor. It begins when a stimulus changes the membrane potential of a sensory receptor.
A sensory receptor converts the energy in a stimulus into an electrical signal. Receptors are broadly split into two main categories: exteroceptors, which receive external sensory stimuli, and interoceptors, which receive internal sensory stimuli.
Sensory transduction
The visual system
In the visual system, sensory cells called rod and cone cells in the retina convert the physical energy of light signals into electrical impulses that travel to the brain. Light causes a conformational change in a protein called rhodopsin. This conformational change sets in motion a series of molecular events that result in a reduction of the electrochemical gradient of the photoreceptor. The decrease in the electrochemical gradient causes a reduction in the electrical signals going to the brain. Thus, in this example, more light hitting the photoreceptor results in the transduction of a signal into fewer electrical impulses, effectively communicating that stimulus to the brain. The change in neurotransmitter release by rods is mediated through a second messenger system; because of this, the response of the rods to a change in light intensity is much slower than expected for a process associated with the nervous system.
The auditory system
In the auditory system, sound vibrations (mechanical energy) are transduced into electrical energy by hair cells in the inner ear. Sound vibrations from an object cause vibrations in air molecules, which in turn, vibrate the ear drum. The movement of the eardrum causes the bones of the middle ear (the ossicles) to vibrate. These vibrations then pass into the cochlea, the organ of hearing. Within the cochlea, the hair cells on the sensory epithelium of the organ of Corti bend and cause movement of the basilar membrane. The membrane undulates in different sized waves according to the frequency of the sound. Hair cells are then able to convert this movement (mechanical energy) into electrical signals (graded receptor potentials) which travel along auditory nerves to hearing centres in the brain.
The olfactory system
In the olfactory system, odorant molecules in the mucus bind to G protein-coupled receptors on olfactory cells. The G protein activates a downstream signalling cascade that causes an increased level of cyclic AMP (cAMP), which triggers neurotransmitter release.
The gustatory system
In the gustatory system, perception of five primary taste qualities (sweet, salty, sour, bitter and umami [savoriness] ) depends on taste transduction pathways, through taste receptor cells, G proteins, ion channels, and effector enzymes.
The somatosensory system
In the somatosensory system, sensory transduction mainly involves the conversion of mechanical signals such as pressure, skin compression, stretch and vibration into electro-ionic impulses through the process of mechanotransduction. It also includes the sensory transduction related to thermoception and nociception.
References
Physiology | Transduction (physiology) | [
"Biology"
] | 653 | [
"Physiology"
] |
569,460 | https://en.wikipedia.org/wiki/Program%20slicing | In computer programming, program slicing is the computation of the set of program statements, the program slice, that may affect the values at some point of interest, referred to as a slicing criterion. Program slicing can be used in debugging to locate the source of errors more easily. Other applications of slicing include software maintenance, optimization, program analysis, and information flow control.
Slicing techniques have seen rapid development since the original definition by Mark Weiser. At first, slicing was only static, i.e., applied to the source code with no other information than the source code. Bogdan Korel and Janusz Laski introduced dynamic slicing, which works on a specific execution of the program (for a given execution trace). Other forms of slicing exist, for instance path slicing.
Static slicing
Based on the original definition of Weiser, informally, a static program slice S consists of all statements in program P that may affect the value of variable v in a statement x. The slice is defined for a slicing criterion C=(x,v) where x is a statement in program P and v is a variable in x. A static slice includes all the statements that can affect the value of variable v at statement x for any possible input. Static slices are computed by backtracking dependencies between statements. More specifically, to compute the static slice for (x,v), we first find all statements that can directly affect the value of v before statement x is encountered. Recursively, for each statement y which can affect the value of v in statement x, we compute the slices for all variables z in y that affect the value of v. The union of all those slices is the static slice for (x,v).
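The recursive union described above amounts to a reachability computation over a dependence relation. The sketch below assumes the program has already been analyzed into a small dependence table (deps[s][t] meaning statement s depends on statement t); the statement numbering and the table contents are hypothetical placeholders rather than the output of any particular slicer.
#include <stdio.h>

#define MAX_STMTS 16

/* deps[s][t] is 1 if statement s depends on statement t (data or control dependence).
   The table filled in main() is a hypothetical placeholder; a real slicer would build it
   from def-use and control-flow analysis of the program being sliced. */
static int deps[MAX_STMTS][MAX_STMTS];
static int in_slice[MAX_STMTS];

/* Backward slice: mark statement s, then recursively mark everything it depends on. */
static void slice(int s, int n_stmts) {
    if (in_slice[s]) return;
    in_slice[s] = 1;
    for (int t = 0; t < n_stmts; ++t)
        if (deps[s][t])
            slice(t, n_stmts);
}

int main(void) {
    int n_stmts = 5;
    /* hypothetical dependences: statement 4 depends on 1 and 3; 3 depends on 1 and 2 */
    deps[4][1] = deps[4][3] = 1;
    deps[3][1] = deps[3][2] = 1;
    slice(4, n_stmts); /* the slicing criterion is located at statement 4 */
    printf("statements in the slice:");
    for (int s = 0; s < n_stmts; ++s)
        if (in_slice[s])
            printf(" %d", s);
    printf("\n");
    return 0;
}
Applied to the example that follows, starting the marking at the criterion (write(sum), sum) would mark exactly the statements listed in the executable slice shown after the original code.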
Example
For example, consider the C program below. Let's compute the slice for ( write(sum), sum ). The value of sum is directly affected by the statements "sum = sum + i + w" if N>1 and "int sum = 0" if N <= 1. So, slice( write(sum), sum) is the union of three slices and the "int sum = 0" statement which has no dependencies:
slice( sum = sum + i + w, sum),
slice( sum = sum + i + w, i),
slice( sum = sum + i + w, w), and
{ int sum=0 }.
It is fairly easy to see that slice( sum = sum + i + w, sum) consists of "sum = sum + i + w" and "int sum = 0" because those are the only two prior statements that can affect the value of sum at "sum = sum + i + w". Similarly, slice( sum = sum + i + w, i) only contains "for(i = 1; i < N; ++i) {" and slice( sum = sum + i + w, w) only contains the statement "int w = 7".
When we union all of those statements, we do not have executable code, so to make the slice an executable slice we merely add the end brace for the for loop and the declaration of i. The original code is shown below, followed by the resulting static executable slice.
int i;
int sum = 0;
int product = 1;
int w = 7;
for(i = 1; i < N; ++i) {
sum = sum + i + w;
product = product * i;
}
write(sum);
write(product);
The static executable slice for criteria (write(sum), sum) is the new program shown below.
int i;
int sum = 0;
int w = 7;
for(i = 1; i < N; ++i) {
sum = sum + i + w;
}
write(sum);
In fact, most static slicing techniques, including Weiser's own technique, will also remove the write(sum) statement, since, at the statement write(sum), the value of sum does not depend on the statement itself. Often a slice for a particular statement x will include more than one variable. If V is a set of variables in a statement x, then the slice for (x, V) is the union of all slices with criteria (x, v) where v is a variable in the set V.
Lightweight forward static slicing approach
A very fast and scalable, though slightly less accurate, slicing approach is useful for a number of reasons. It gives developers a low-cost, practical means to estimate the impact of a change within minutes rather than days. This is important for planning the implementation of new features and for understanding how a change is related to other parts of the system. It also provides an inexpensive test to determine whether a full, more expensive analysis of the system is warranted. A fast slicing approach additionally opens up new avenues of research in metrics and in the mining of version histories based on slicing, because slicing can then be conducted on very large systems, and on entire version histories, in practical time frames. This opens the door to a number of experiments and empirical investigations previously too costly to undertake.
Dynamic slicing
Dynamic slicing makes use of information about a particular execution of a program. A dynamic slice contains all statements that actually affect the value of a variable at a program point for a particular execution of the program rather than all statements that may have affected the value of a variable at a program point for any arbitrary execution of the program.
An example clarifies the difference between static and dynamic slicing. Consider a small piece of a program unit containing an iteration block that in turn contains an if-else block, with a few statements in both the if and else branches that affect a variable. In static slicing, since the whole program unit is considered irrespective of any particular execution, the affected statements in both branches are included in the slice. In dynamic slicing, however, we consider a particular execution of the program, in which the if branch is executed and the statements in the else branch are not. For that particular execution, the dynamic slice therefore contains only the statements in the if branch. A sketch of such a fragment is shown below.
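The fragment below is a hypothetical illustration of that situation, written in the same loose style as the earlier example (N, flag and write are placeholders, not defined here). The comments note which assignments belong to the static slice for (write(x), x) and which belong to the dynamic slice for a traced execution in which flag is always true.
int i;
int x = 0;
/* assume the traced execution runs with flag != 0, so the else branch never executes */
for (i = 0; i < N; ++i) {
    if (flag) {
        x = x + i;   /* in the static slice for (write(x), x) and in the dynamic slice for this trace */
    } else {
        x = x - i;   /* in the static slice only: never executed in this trace */
    }
}
write(x);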
See also
Software maintenance
Dependence analysis
Reaching definition
Data dependency
Frama-C, a tool which implements slicing algorithms on C programs.
Partial dead code elimination
Notes
References
Mark Weiser. "Program slicing". Proceedings of the 5th International Conference on Software Engineering, pages 439–449, IEEE Computer Society Press, March 1981.
Mark Weiser. "Program slicing". IEEE Transactions on Software Engineering, Volume 10, Issue 4, pages 352–357, IEEE Computer Society Press, July 1984.
Susan Horwitz, Thomas Reps, and David Binkley, Interprocedural slicing using dependence graphs, ACM Transactions on Programming Languages and Systems, Volume 12, Issue 1, pages 26-60, January 1990.
Frank Tip. "A survey of program slicing techniques". Journal of Programming Languages, Volume 3, Issue 3, pages 121–189, September 1995.
David Binkley and Keith Brian Gallagher. "Program slicing". Advances in Computers, Volume 43, pages 1–50, Academic Press, 1996.
Andrea de Lucia. "Program slicing: Methods and applications", International Workshop on Source Code Analysis and Manipulation, pages 142-149, 2001, IEEE Computer Society Press.
Mark Harman and Robert Hierons. "An overview of program slicing", Software Focus, Volume 2, Issue 3, pages 85–92, January 2001.
David Binkley and Mark Harman. "A survey of empirical results on program slicing", Advances in Computers, Volume 62, pages 105-178, Academic Press, 2004.
Jens Krinke. "Program Slicing", In Handbook of Software Engineering and Knowledge Engineering, Volume 3: Recent Advances. World Scientific Publishing, 2005
Silva, Josep. "A vocabulary of program slicing-based techniques", ACM Computing Surveys, Volume 44, Issue 3, Association for Computing Machinery, June 2012
Alomari HW et al. "srcSlice: very efficient and scalable forward static slicing". Wiley Journal of Software: Evolution and Process (JSEP), DOI: 10.1002/smr.1651, Vol. 26, No. 11, pp. 931-961, 2014.
External links
VALSOFT/Joana Project
Indus Project (part of Bandera checker)
Wisconsin Program-Slicing Project
Debugging
Program analysis
Program transformation
Software maintenance | Program slicing | [
"Engineering"
] | 1,758 | [
"Software engineering",
"Software maintenance"
] |
569,480 | https://en.wikipedia.org/wiki/Receptor%20%28biochemistry%29 | In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter, inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway.
Receptor proteins can be classified by their location. Cell surface receptors, also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised.
Receptor proteins can be also classified by the property of the ligands. Such classifications include chemoreceptors, mechanoreceptors, gravitropic receptors, photoreceptors, magnetoreceptors and gasoreceptors.
Structure
The structures of receptors are very diverse and include the following major categories, among others:
Type 1: Ligand-gated ion channels (ionotropic receptors) – These receptors are typically the targets of fast neurotransmitters such as acetylcholine (nicotinic) and GABA; activation of these receptors results in changes in ion movement across a membrane. They have a heteromeric structure in that each subunit consists of the extracellular ligand-binding domain and a transmembrane domain which includes four transmembrane alpha helices. The ligand-binding cavities are located at the interface between the subunits.
Type 2: G protein-coupled receptors (metabotropic receptors) – This is the largest family of receptors and includes the receptors for several hormones and slow transmitters, e.g. dopamine and metabotropic glutamate. They are composed of seven transmembrane alpha helices. The loops connecting the alpha helices form extracellular and intracellular domains. The binding site for larger peptide ligands is usually located in the extracellular domain, whereas the binding site for smaller non-peptide ligands is often located between the seven alpha helices and one extracellular loop. The aforementioned receptors are coupled to different intracellular effector systems via G proteins. G proteins are heterotrimers made up of 3 subunits: α (alpha), β (beta), and γ (gamma). In the inactive state, the three subunits associate together and the α-subunit binds GDP. G protein activation causes a conformational change, which leads to the exchange of GDP for GTP. GTP-binding to the α-subunit causes dissociation of the β- and γ-subunits. Furthermore, based on the primary sequence of the α-subunit, G proteins are grouped into four main classes: Gs, Gi, Gq and G12. A toy sketch of this activation cycle is given after this list.
Type 3: Kinase-linked and related receptors (see "Receptor tyrosine kinase" and "Enzyme-linked receptor") – They are composed of an extracellular domain containing the ligand binding site and an intracellular domain, often with enzymatic-function, linked by a single transmembrane alpha helix. The insulin receptor is an example.
Type 4: Nuclear receptors – While they are called nuclear receptors, they are actually located in the cytoplasm and migrate to the nucleus after binding with their ligands. They are composed of a C-terminal ligand-binding region, a core DNA-binding domain (DBD) and an N-terminal domain that contains the AF1(activation function 1) region. The core region has two zinc fingers that are responsible for recognizing the DNA sequences specific to this receptor. The N terminus interacts with other cellular transcription factors in a ligand-independent manner; and, depending on these interactions, it can modify the binding/activity of the receptor. Steroid and thyroid-hormone receptors are examples of such receptors.
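A toy sketch of the activation cycle described for Type 2 receptors is shown below. It only walks through the sequence of states (inactive trimer with GDP, GDP/GTP exchange on the α-subunit, dissociation of α-GTP from βγ, and GTP hydrolysis restoring the resting trimer); the state names are invented for the example and nothing here models real binding kinetics.
#include <stdio.h>

/* Schematic states of the heterotrimeric G protein cycle described above.
   This is an illustrative state machine only; names are invented for the example. */
enum g_state {
    TRIMER_GDP_BOUND,     /* inactive: alpha, beta and gamma associated, GDP on alpha */
    GDP_GTP_EXCHANGED,    /* receptor activation: GDP exchanged for GTP on alpha */
    SUBUNITS_DISSOCIATED, /* alpha-GTP separates from beta-gamma */
    GTP_HYDROLYZED        /* alpha hydrolyzes GTP to GDP and the trimer reforms */
};

static const char *describe(enum g_state s) {
    switch (s) {
    case TRIMER_GDP_BOUND:     return "inactive trimer, GDP bound to alpha";
    case GDP_GTP_EXCHANGED:    return "agonist-bound receptor catalyzes GDP to GTP exchange";
    case SUBUNITS_DISSOCIATED: return "alpha-GTP and beta-gamma dissociate and act on effectors";
    case GTP_HYDROLYZED:       return "GTP hydrolysis resets alpha; the trimer re-associates";
    }
    return "";
}

int main(void) {
    enum g_state cycle[] = { TRIMER_GDP_BOUND, GDP_GTP_EXCHANGED,
                             SUBUNITS_DISSOCIATED, GTP_HYDROLYZED };
    for (unsigned i = 0; i < sizeof cycle / sizeof cycle[0]; ++i)
        printf("step %u: %s\n", i + 1, describe(cycle[i]));
    return 0;
}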
Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents, detergents, and/or affinity purification.
The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography, NMR, circular dichroism, and dual polarisation interferometry. Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action.
Binding and activation
Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action, which for a ligand L and receptor R can be written as the reversible reaction L + R ⇌ LR, where LR is the ligand-receptor complex. The brackets around chemical species denote their concentrations.
One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant Kd. A good fit corresponds with high affinity and low Kd. The final biological response (e.g. second messenger cascade, muscle contraction) is only achieved after a significant number of receptors are activated.
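As a worked form of this relationship (the standard mass-action result, stated here for illustration rather than taken from a specific cited source), the dissociation constant and the equilibrium fraction of receptors occupied are

K_d = \frac{[\mathrm{L}][\mathrm{R}]}{[\mathrm{LR}]}, \qquad \text{fractional occupancy} = \frac{[\mathrm{LR}]}{[\mathrm{R}] + [\mathrm{LR}]} = \frac{[\mathrm{L}]}{[\mathrm{L}] + K_d},

so half of the receptors are occupied when the free ligand concentration equals K_d, which is why a low K_d corresponds to high affinity.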
Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is a measure of the ability of the bound ligand to activate its receptor.
Agonists versus antagonists
Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist:
(Full) agonists are able to activate the receptor and result in a strong biological response. The natural endogenous ligand with the greatest efficacy for a given receptor is by definition a full agonist (100% efficacy).
Partial agonists do not activate receptors with maximal efficacy, even with maximal binding, causing partial responses compared to those of full agonists (efficacy between 0 and 100%).
Antagonists bind to receptors but do not activate them. This results in a receptor blockade, inhibiting the binding of agonists and inverse agonists. Receptor antagonists can be competitive (or reversible), and compete with the agonist for the receptor, or they can be irreversible antagonists that form covalent bonds (or extremely high affinity non-covalent bonds) with the receptor and completely block it. The proton pump inhibitor omeprazole is an example of an irreversible antagonist. The effects of irreversible antagonism can only be reversed by synthesis of new receptors.
Inverse agonists reduce the activity of receptors by inhibiting their constitutive activity (negative efficacy).
Allosteric modulators: They do not bind to the agonist-binding site of the receptor but instead on specific allosteric binding sites, through which they modify the effect of the agonist. For example, benzodiazepines (BZDs) bind to the BZD site on the GABAA receptor and potentiate the effect of endogenous GABA.
Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects.
Constitutive activity
A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". The constitutive activity of a receptor may be blocked by an inverse agonist. The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor.
The GABAA receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current below basal levels.
Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors).
Theories of drug-receptor interaction
Occupation
Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. Furthermore, a drug effect ceases as a drug-receptor complex dissociates.
Ariëns & Stephenson introduced the terms "affinity" & "efficacy" to describe the action of ligands bound to receptors.
Affinity: The ability of a drug to combine with a receptor to create a drug-receptor complex.
Efficacy: The ability of a drug to initiate a response after the formation of the drug-receptor complex.
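A minimal numerical sketch of how these two quantities combine under simple occupation-theory assumptions is given below; the response is taken to be proportional to occupancy scaled by an intrinsic efficacy between 0 and 1, and the Kd and concentration values are invented for illustration only.
#include <stdio.h>

/* Occupation-theory sketch: response proportional to receptor occupancy scaled by an
   intrinsic efficacy (1.0 = full agonist, <1.0 = partial agonist). Values are illustrative. */
static double response(double ligand, double kd, double efficacy) {
    double occupancy = ligand / (ligand + kd); /* fraction of receptors occupied */
    return efficacy * occupancy;               /* fraction of the maximal response */
}

int main(void) {
    const double kd = 10.0; /* arbitrary concentration units */
    const double concentrations[] = { 1.0, 10.0, 100.0, 1000.0 };
    for (unsigned i = 0; i < sizeof concentrations / sizeof concentrations[0]; ++i) {
        double c = concentrations[i];
        printf("[L]=%7.1f  full agonist: %.2f  partial agonist (efficacy 0.4): %.2f\n",
               c, response(c, kd, 1.0), response(c, kd, 0.4));
    }
    return 0;
}
With these assumptions the partial agonist plateaus at 40% of the maximal response no matter how high the concentration, while the full agonist approaches 100%.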
Rate
In contrast to the accepted Occupation Theory, Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied:
Agonist: A drug with a fast association and a fast dissociation.
Partial-agonist: A drug with an intermediate association and an intermediate dissociation.
Antagonist: A drug with a fast association and a slow dissociation.
Induced-fit
As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce the drug-receptor complex.
Spare Receptors
In some receptor systems (e.g. acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release.
Receptor regulation
Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism. Mechanisms that decrease receptor responsiveness include:
Change in the receptor conformation such that binding of the agonist does not activate the receptor. This is seen with ion channel receptors.
Uncoupling of the receptor effector molecules is seen with G protein-coupled receptors.
Receptor sequestration (internalization), e.g. in the case of hormone receptors.
Examples and ligands
The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include:
Ion channels and G protein coupled receptors
Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory. This list is by no means exhaustive.
Enzyme linked receptors
Enzyme linked receptors include Receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinase, as in bone morphogenetic protein and guanylate cyclase, as in atrial natriuretic factor receptor. Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below:
Intracellular Receptors
Receptors may be classed based on their mechanism or on their position in the cell. 4 examples of intracellular LGIC are shown below:
Role in health and disease
In genetic disorders
Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at a decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.
In the immune system
The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors.
See also
Ki Database
Ion channel linked receptors
Neuropsychopharmacology
Schild regression for ligand receptor inhibition
Signal transduction
Stem cell marker
List of MeSH codes (D12.776)
Receptor theory
Notes
References
External links
IUPHAR GPCR Database and Ion Channels Compendium
Human plasma membrane receptome
Cell biology
Cell signaling
Membrane biology | Receptor (biochemistry) | [
"Chemistry",
"Biology"
] | 2,857 | [
"Cell biology",
"Membrane biology",
"Signal transduction",
"Receptors",
"Molecular biology"
] |
2,151,080 | https://en.wikipedia.org/wiki/Hosta%20%27Undulata%27 | Hosta 'Undulata' is a cultivar of the genus Hosta, widely cultivated as ornamental plants in borders or as specimen plants. It was formerly regarded as a species under the name Hosta undulata (Otto & A.Dietr.) L.H.Bailey. It is not accepted as a species by the World Checklist of Selected Plant Families , and has been relegated to cultivar status by Schmid.
Tolerating temperatures as low as , H. 'Undulata' and its related cultivars are widely grown in temperate zones. Garden performance is best in partial to moderate shade, in well-drained moist soil.
Hostas in the 'Undulata' group include an all-green cultivar, 'Undulata Erromena'; a white-edged cultivar, 'Undulata Albomarginata'; and white-centered (medio-variegated) cultivars that may be grouped according to the amount of white in the leaf. The typical H. 'Undulata' has a wide white center, wider than the green of the margins. Over time (or as a cultivar selection), the white center can narrow to a form classified as H. 'Undulata Univittata'; the displayed picture is more of this type. These four are the only registered Undulata cultivar names. Other names for the white-centered forms include Undulata variegata, Undulata mediopicta, and registered forms such as H. 'Middle Ridge' and 'White Feather'. The expansion of the green margins (narrowing of the center) depends on garden culture. In time, the all-green 'Undulata Erromena' cultivar may appear. As this all-green form is significantly more vigorous than the variegated form, it can quickly overwhelm a planting of 'Undulata'. Division of the fast-growing clumps to remove undesired leaf forms is easily accomplished.
In areas with significant summer heat, or where planted in too much light, the white-centered forms are also prone to a greening of the leaf centers, shown as a misting that darkens with the season. This is typical behavior and not indicative of disease or mutation. The substance of the leaves is among the thinnest of hostas, making them particularly subject to slug damage. All hostas are attractive to deer.
The flower scapes of all H. 'Undulata' cultivars are tall and offer pale lavender blossoms which are very attractive to bees. The flowers for the various 'Undulata' cultivars are essentially similar. These cultivars are effectively sterile and do not generally set seed either by self-pollination or by attempted hybridization.
The variety listed as Hosta undulata var. undulata has gained the Royal Horticultural Society's Award of Garden Merit.
References
External links
Missouri Botanical Garden
Hosta Library
Undulata
Ornamental plant cultivars
Unplaced names | Hosta 'Undulata' | [
"Biology"
] | 606 | [
"Biological hypotheses",
"Controversial taxa",
"Unplaced names"
] |
2,151,162 | https://en.wikipedia.org/wiki/Rat%20Genome%20Database | The Rat Genome Database (RGD) is a database of rat genomics, genetics, physiology and functional data, as well as data for comparative genomics between rat, human and mouse. RGD is responsible for attaching biological information to the rat genome via structured vocabulary, or ontology, annotations assigned to genes and quantitative trait loci (QTL), and for consolidating rat strain data and making it available to the research community. They are also developing a suite of tools for mining and analyzing genomic, physiologic and functional data for the rat, and comparative data for rat, mouse, human, and five other species.
RGD began as a collaborative effort between research institutions involved in rat genetic and genomic research. Its goal, as stated in the National Institutes of Health’s Request for Grant Application: HL-99-013, is the establishment of a Rat Genome Database to collect, consolidate, and integrate data generated from ongoing rat genetic and genomic research efforts and make this data widely available to the scientific community. A secondary, but critical goal is to provide curation of mapped positions for quantitative trait loci, known mutations and other phenotypic data.
The rat continues to be extensively used by researchers as a model organism for investigating pharmacology, toxicology, general physiology and the biology and pathophysiology of disease. In recent years, there has been a rapid increase in rat genetic and genomic data. In addition to this, the Rat Genome Database has become a central point for information on the rat for research and now features information on not just genetics and genomics, but physiology and molecular biology as well. There are tools and data pages available for all of these fields that are curated by RGD staff.
Data
RGD’s data consists of manual annotations from RGD researchers as well as imported annotations from a variety of different sources. RGD also exports their own annotations to share with others.
RGD's Data page lists eight types of data stored in the database: Genes, QTLs, Markers, Maps, Strains, Ontologies, Sequences and References. Of these, six are actively used and regularly updated. The RGD Maps datatype refers to legacy genetic and radiation hybrid maps. This data has been largely supplanted by the rat whole genome sequence. The Sequences data type is not a full list of either genomic, transcript or protein sequences, but rather mostly contains PCR primer sequences which define simple sequence length polymorphism (SSLP) and expressed sequence tag (EST) Markers. Such sequences are useful primarily for researchers still using these markers for genotyping their animals and for distinguishing between markers of the same name. The major data types in RGD are as follows:
Genes: Initial gene records are imported and updated from the National Center for Biotechnology Information's (NCBI's) Gene database on a weekly basis. Data imported during this process includes the Gene ID, Genbank/RefSeq nucleotide and protein sequence identifiers, HomoloGene group IDs and Ensembl Gene, Transcript and Protein IDs. Additional protein-related data is imported from the UniProtKB database. RGD curators review the literature and manually curate Gene Ontology (GO), diseases, phenotypes and pathways for rat genes, diseases and pathways for mouse genes, and diseases, phenotypes and pathways for human genes. In addition, the site imports GO annotations for mouse and human genes from the GO Consortium, rat electronic annotations from UniProt and mouse phenotype annotations from the Mouse Genome Database/Mouse Genome Informatics (MGD/MGI).
QTLs: RGD's staff manually curates data for rat and human QTLs from the literature where such publications exist or from records directly submitted by researchers. Mouse QTL records, including Mammalian Phenotype (MP) ontology assignments, are imported directly from MGI. For rat and human QTLs, curation includes assigning MP, HP, and disease ontology annotations. QTL positions are automatically assigned based on the genomic positions of peak and/or flanking markers or single nucleotide polymorphisms (SNPs). QTL records link to information about related strains, candidate genes, associated markers and related QTLs.
Strains: Like QTL records, RGD strain records are either manually curated from the literature or submitted by researchers. Strain records include information about the official symbol of the strain, origin and availability of the strain, associated phenotypes, whether the strain is a model for a human disease, and any information that is available about breeding, behavior, husbandry, etc. Strain records link to information about related genes, alleles, and QTLs, associated strains (e.g. parental strains or substrains) and, where available, strain-specific damaging nucleotide variants. For congenic and mutant strains, genomic positions are assigned for the introgressed region (congenic strains) or the location of the mutated sequence (mutant strains).
Markers: Because genetic markers such as SSLPs and ESTs have been, and continue to be, used for QTLs and strains, RGD stores marker data for rat, human and mouse. Marker data includes the sequences of the associated forward and reverse PCR primers, genomic positions and links to NCBI's Probe database. Marker records link to associated QTL, strain and gene records.
Cell lines: RGD stores cell line records based on imports from Cellosaurus. Although the largest numbers of these are human and mouse cell lines, records are also available for rat, bonobo, dog, squirrel, pig, green monkey and naked mole-rat.
Ontologies: In order to make RGD's data both human readable and available for computational analysis and retrieval, RGD relies on the use of multiple ontologies. As of July 2021, RGD used 19 different ontologies to express the various types of data applicable to RGD's diverse datatypes. Ontology annotations are assigned manually by curators or are imported from external sources through the use of automated pipelines. Six of the ontologies in use at RGD were created or co-created at RGD and seven are under development by RGD staff members or collaborators, these being ontologies for Pathway (PW), Rat Strains (RS), Vertebrate Traits (VT), Disease (RDO), Clinical Measurements (CMO), Measurement Methods (MMO) and Experimental Conditions (XCO). Ontologies which are imported from outside sources are updated weekly.
References: RGD references are scientific publications and resources that have been used for curation of information into the database, and are sources for data objects such as QTLs and strains. For references accessed via NCBI's PubMed, imported data includes the title, authors, citation and PubMed ID, and an RGD ID is generated. In some cases, references are generated as internal records, such as bulk uploads from automated pipelines or personal communications with data sources. These additional references give RGD users an identification of the source of particular pieces and types of data for which PubMed records are not available. Both types of reference records provide links to all of the data curated from that article or source, including genes, QTLs, strains, disease and other ontology annotations. The resources curated for information can be retrieved from the database using the reference search page or links on an object page. Uncurated references are also available, which are known to contain relevant data but have not yet been manually reviewed. These are found as PubMed links listed in the ‘References – uncurated’ section of an object report (e.g. a gene report).
Genome tools
RGD's Genome tools include both software tools developed at RGD and tools from third party sources.
Genome tools developed at RGD
RGD develops web-based tools designed to use the data stored in the RGD database for analyses in rat and across species. These include:
OntoMate: OntoMate is an ontology-driven, concept-based literature search engine that has been developed by RGD as an alternative to the basic PubMed search engine in the gene curation workflow. Converting data from free text in the scientific literature to a structured searchable format is one of the main tasks of all model organism databases. OntoMate tags abstracts with gene names, gene mutations, organism names, disease, and other terms from the ontologies/vocabularies used at RGD. All terms/entities tagged to an abstract are listed with the abstract in the search results. OntoMate also provides user-activated filters for species, date and other parameters relevant to the literature search, which has streamlined the process compared to using PubMed. Besides its usefulness for RGD internal curation processes, the tool is available to all RGD users.
Gene Annotator: The Gene Annotator or GA tool takes as input a list of gene symbols, RGD IDs, GenBank accession numbers, Ensembl identifiers, or a chromosomal region and retrieves gene orthologs, external database identifiers and ontology annotations for the corresponding genes in RGD. The data can be downloaded into an Excel spreadsheet or analyzed in the tool. The Annotation Distribution function displays a list of terms in each of seven categories with the percentage of genes from the input list with annotations to each term. The Comparison Heat Map function allows comparisons of annotations for genes in the input list across two ontologies or across two branches of the same ontology.
Variant Visualizer: Variant Visualizer (VV) is a viewing and analysis tool for rat strain-specific sequence polymorphisms. VV takes as input a list of gene symbols or a genomic region as defined by chromosome, start and stop positions or by two gene or marker symbols. The user must also select their strains of interest from a list of strains for which whole genome sequences exist and can set parameters for the variants in the result set. Output is a heatmap-type display of variants. Additional information for individual variants can be viewed in a detail pane display.
Multi-Ontology Enrichment Tool (MOET): MOET is a web-based ontology analysis tool used to identify terms from any or all of the ontologies used by RGD for gene curation (Disease, Pathway, Phenotype, GO, ChEBI) that are over-represented in the annotations for a list of genes, or for their orthologs in other species. It outputs a downloadable graph and a list of statistically overrepresented terms in the user's list of genes using the hypergeometric distribution. MOET also displays the corresponding Bonferroni correction and odds ratio on the results page. The general form of such an enrichment test is sketched after this list.
Gene Ortholog Location Finder (GOLF): GOLF is used to compare genes or positions within regions of interest across RGD species or assemblies. Results are displayed with the corresponding genes/positions in both species or on both assemblies in a side by side tabular view. Inputs and outputs to GOLF can be exported to other RGD tools for analysis or downloaded using the links on the GOLF results page.
InterViewer: InterViewer is an interactive protein-protein interaction viewer that displays information about the types of interactions and links to the associated genes for the user's input.
PhenoMiner: PhenoMiner combines phenotypic data from different rat strains, so researchers can use filters to find the quantitative phenotypic data they are looking for.
OLGA - Object List Generator & Analyzer: OLGA is a search engine designed to allow users to run multiple queries, generate a list of objects from each query and flexibly combine the results using Boolean specifications. OLGA takes as input either a list of object symbols or search parameters based on ontology annotations or position. The final list of genes, QTLs or strains can be downloaded or submitted to the GA Tool, the Variant Visualizer, the Genome Viewer or other RGD tools.
Genome Viewer: The Genome Viewer (GViewer) tool provides users with complete genome views of genes, QTLs and mapped strains annotated to a function, biological process, cellular component, phenotype, disease, pathway, or chemical interaction. GViewer allows Boolean searches across multiple ontologies. Output is displayed against a karyotype of the rat genome.
Overgo Probe Designer: Overgo probes are pairs of partially overlapping 22mer oligonucleotides derived from repeat-masked genomic sequence and used as high specific activity probes for genome mapping. The Overgo Probe Designer tool takes as input a nucleotide sequence and outputs a list of optimized probe sequences containing the requisite 8 nucleotide overlap on their 3' ends.
ACP Haplotyper: The ACP Haplotyper creates a visual haplotype that can be used to identify conserved and non-conserved chromosomal regions between any of the 48 rat strains characterized as part of the ACP project. For the selected chromosome and between the selected strains, the tool compares the allele size data for microsatellite markers on the selected genetic or RH map.
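For orientation, the hypergeometric enrichment test that tools like MOET report generally takes the following standard form; this is the textbook formulation, given here only as an illustration and not quoted from RGD documentation. Suppose N genes are annotated in total, K of them to a given term, the user supplies n genes, and k of those are annotated to the term; the one-sided enrichment p-value and a Bonferroni adjustment over m tested terms are

p = \sum_{i=k}^{\min(n,K)} \frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}}, \qquad p_{\mathrm{adj}} = \min\left(1,\; m \cdot p\right).

A small adjusted p-value indicates that the term is annotated to the user's genes more often than would be expected by chance.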
Third party genome tools adapted for use with RGD data
RGD offers several third party software tools that have been adapted for use on the website utilizing data stored in the RGD database. These include:
JBrowse: JBrowse is a free, interactive, and database-specific data analysis tool. The software was created and is currently maintained by the Generic Model Organism Database project. Genetic and phenotypic data types, including fundamental datasets and gene-chemical interaction data, and their relationship to the genomic sequence can be accessed through JBrowse.
RatMine: RatMine is a rat-centric version of the InterMine software. It enables users to mine and analyze rat data from diverse databases including RGD, NCBI, UniProtKB and Ensembl in a single location using a consistent format. The InterMine platform has been adapted for multiple species in other databases and is designed to be interoperable between instances so that users can query across species from the RatMine interface.
Additional Data and Tools
Phenotypes and Models Portal
RGD's Phenotypes and Models portal focuses on strains, phenotypes and the rat as a model organism for physiology and disease.
Genetic Models: It is the place where all the genomically modified rats (mutant strains and transgenic strains) are listed in a table format for quick access by affected genes, background strains and other available information. This section also contains GERRC strains where genome modified rats were created through the Gene Editing Rat Resource Center.
Autism Models: Laboratory rats are the animal of choice in neurobiology. The Medical College of Wisconsin has been working with the Simons Foundation Autism Research Initiative (SFARI) to generate and distribute engineered rat models of autism.
PhenoMiner (Quantitative Models): PhenoMiner is a database and web application for finding and analyzing quantitative rat phenotype data. Data is annotated to ontologies for rat strain, clinical measurement, measurement method, and experimental condition. Experiments are categorized by the trait or disease assessed by the measurement. The use of standardized vocabularies and data formats allows comparison of values across experiments for the same measurement. The PhenoMiner results page includes a graph of the measurement values and a downloadable table of the values with their accompanying metadata. A link is provided to give users the opportunity to submit their own data to the database.
Expected Ranges (Quantitative Models): Expected Ranges is a statistical meta-analysis database where quantitative phenotype values from PhenoMiner are used to calculated the “expected range” of a measured phenotype for a strain group across different studies. These expected ranges can be stratified by sex, age and experimental conditions if there are enough data points.
Phenotypes: The Phenotypes section contains a large body of data from the PhysGen Program for Genomic Applications project, an NHLBI-funded project to "develop consomic and knockout rat strains, phenotypically characterize these strains, and provide these resources to the scientific community." Data categories include measurements of cardiovascular, renal and respiratory function, blood chemistry, body morphology and behavior. Links are also provided to protocols for phenotyping rats and to similar high-throughput phenotyping data at the National BioResource Project for the Rat in Japan (NBRP-Rat).
Phenotypic Models and Genomic Resources for Additional Species: In addition to rat, mouse, and human data, the RGD provides integrated access to additional mammalian species' genomic, and in some cases phenotypic, information. These other species, listed below, are important research models for disease, physiology and phenotypes.
Disease Portals
Disease Portals consolidate the data in RGD for a specific disease category and present it in a single group of pages. Genes, QTLs and strains annotated to any disease in the category are listed, with genome-wide views of their locations in rat, human and mouse (see Genome Viewer in Genome tools developed at RGD). Additional sections of the portal display data for phenotypes, biological processes and pathways related to the disease category. Pages are also supplied to give users access to information about rat strains used as models for one or more diseases in the category, tools that could be used to analyze the data and additional resources related to the disease category. Further, access to the RGD's Multi-Ontology Enrichment Tool (MOET) is available at the bottom of the individual disease portals.
As of May 2021, RGD has fifteen disease portals:
Aging & Age-Related Disease
Cancer
Cardiovascular Disease
COVID-19
Developmental Disease
Diabetes
Hematologic Disease
Immune and Inflammatory Disease
Infectious Disease
Liver Disease
Neurological Disease
Obesity and Metabolic Syndrome
Renal Disease
Respiratory Disease
Sensory Organ Disease
Pathways
RGD's Pathway resources include a Pathway Ontology of pathway terms (developed and maintained at RGD, encompassing not only metabolic pathways but also disease, drug, regulatory and signaling pathways), as well as interactive diagrams of the components and interactions of selected pathways. Included on the diagram pages are a description, lists of pathway gene members and additional elements, tables of disease, pathway and phenotype annotations made to pathway member genes, associated references and an ontology path diagram. Pathway Suites and Suite Networks, i.e. groupings of related pathways which all contribute to a larger process such as glucose homeostasis or gene expression regulation are presented, as well as Physiological Pathway diagrams which display networks of organs, tissues, cells and molecular pathways at the whole animal or systems level.
Knockouts
Until recently, direct, specific genomic manipulations in the rat were not possible. However, with the rise of technologies such as zinc finger nuclease– and CRISPR-based mutagenesis techniques, that is no longer the case. Groups producing rat gene knockouts and other types of genetically modified rats include the Human and Molecular Genetics Center at MCW. RGD links to information about the rat strains produced in these studies via pages about the PhysGen Knockout project and the MCW Gene Editing Rat Resource Center (GERRC), accessed from RGD page headers. Funding for both the PhysGenKO project and the GERRC came from the National Heart, Lung, and Blood Institute (NHLBI). The stated goal of both projects is to produce rats with alterations in one or more specific genes related to the mission of the NHLBI. Genes were nominated by rat researchers. Nominations were adjudicated by an External Advisory Board. In the case of the PhysGenKO project, many of the rats produced by the group were phenotyped using a standardized high-throughput phenotyping protocol and the data is available in RGD's PhenoMiner tool.
Community outreach and education
RGD reaches out to the rat research community in a variety of ways including an email forum, a news page, a Facebook page, a Twitter account, and regular attendance and presentations at scientific meetings and conferences. Additional educational activities include the production of tutorial videos, both outlining how to use RGD tools and data, and on more general topics such as biomedical ontologies and biological (i.e. gene, QTL and strain) nomenclature. These videos can be viewed on several online video hosting sites including YouTube.
Funding
RGD is funded by grant R01HL64541 from the National Heart, Lung, and Blood Institute (NHLBI) on behalf of the National Institutes of Health (NIH). The principal investigator of the grant is Anne E. Kwitek, who succeeded Mary E. Shimoyama in this leadership position in March 2020. Melinda R. Dwinell is Co-Investigator.
New Genome Assembly
The new genome rat assembly, mRatBN7.2, was generated by the Darwin Tree of Life Project at the Wellcome Sanger Institute and has been accepted into the Genome Reference Consortium. mRatBN7.2 was derived from a male BN/NHsdMcwi rat that is a direct descendant of the female BN rat previously sequenced. The new BN rat reference genome was created using a variety of technologies including PacBio long reads, 10X linked reads, Bionano maps and Arima Hi-C. Its contiguity is similar to the human or mouse reference assemblies. It is available at NCBI’s GenBank and at RefSeq, and it will be made the primary assembly at RGD in the near future.
References
External links
The National BioResource Project for the Rat in Japan
About RGD
About RGD Ontologies
RGD Genome Tools
RGD Phenotypes & Models
RGD Disease Portals
RGD PhysGen Knockouts
RGD Gene Editing Rat Resource Center
Project Info for Grant HL64541
NIH Rnor_6.0 Assembly
Model organism databases
Rats
Laboratory rats | Rat Genome Database | [
"Biology"
] | 4,776 | [
"Model organism databases",
"Model organisms"
] |
2,151,329 | https://en.wikipedia.org/wiki/Pascal%27s%20simplex | In mathematics, Pascal's simplex is a generalisation of Pascal's triangle into an arbitrary number of dimensions, based on the multinomial theorem.
Generic Pascal's m-simplex
Let m be the number of terms of a polynomial and n be the power the polynomial is raised to.
Let ⋀^m denote a Pascal's m-simplex. Each Pascal's m-simplex is a semi-infinite object, which consists of an infinite series of its components.
Let ⋀^m_n denote its nth component, itself a finite (m − 1)-simplex with the edge length n.
nth component
⋀^m_n consists of the coefficients of multinomial expansion of a polynomial with m terms raised to the power of n:
$(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m},$
where
$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}.$
Example for ⋀4
Pascal's 4-simplex ⋀^4, sliced along the k4 axis. All points of the same color belong to the same nth component, from red to blue.
Specific Pascal's simplices
Pascal's 1-simplex
⋀^1 is not known by any special name.
nth component
⋀^1_n (a point) is the coefficient of multinomial expansion of a polynomial with 1 term raised to the power of n:
$(x_1)^n = \binom{n}{n} x_1^{n}.$
Arrangement of ⋀^1_n: the single coefficient $\binom{n}{n} = \frac{n!}{n!},$
which equals 1 for all n.
Pascal's 2-simplex
⋀^2 is known as Pascal's triangle.
nth component
⋀^2_n (a line) consists of the coefficients of binomial expansion of a polynomial with 2 terms raised to the power of n:
$(x_1 + x_2)^n = \sum_{k=0}^{n} \binom{n}{k} x_1^{n-k} x_2^{k}.$
Arrangement of ⋀^2_n: the nth row of Pascal's triangle, $\binom{n}{0}, \binom{n}{1}, \ldots, \binom{n}{n}.$
Pascal's 3-simplex
⋀^3 is known as Pascal's tetrahedron.
nth component
⋀^3_n (a triangle) consists of the coefficients of trinomial expansion of a polynomial with 3 terms raised to the power of n:
$(x_1 + x_2 + x_3)^n = \sum_{k_1 + k_2 + k_3 = n} \binom{n}{k_1, k_2, k_3} x_1^{k_1} x_2^{k_2} x_3^{k_3}.$
Arrangement of ⋀^3_n: the nth layer of Pascal's tetrahedron, a triangular array of the trinomial coefficients.
Properties
Inheritance of components
⋀^{m−1}_n is numerically equal to each (m − 2)-face (there are m of them) of ⋀^m_n.
From this it follows that the whole ⋀^{m−1} is m-times included in ⋀^m.
Example
For more terms in the above array refer to
Equality of sub-faces
Conversely, ⋀^m_n is m-times bounded by ⋀^{m−1}_n.
From this it follows that, for a given n, all i-faces are numerically equal in the nth components of all Pascal's m-simplices with m > i.
Example
The 3rd component (2-simplex) of Pascal's 3-simplex is bounded by 3 equal 1-faces (lines). Each 1-face (line) is bounded by 2 equal 0-faces (vertices):
2-simplex 1-faces of 2-simplex 0-faces of 1-face
1 3 3 1 1 . . . . . . 1 1 3 3 1 1 . . . . . . 1
3 6 3 3 . . . . 3 . . .
3 3 3 . . 3 . .
1 1 1 .
Also, for all m and all n:
Number of coefficients
For the nth component ((m − 1)-simplex) of Pascal's m-simplex, the number of the coefficients of multinomial expansion it consists of is given by:
$\binom{n + m - 1}{m - 1} = \left(\!\!\binom{m}{n}\!\!\right)$
(where the latter is the multichoose notation). We can see this either as a sum of the number of coefficients of the (n − 1)th component ((m − 1)-simplex) of Pascal's m-simplex with the number of coefficients of the nth component ((m − 2)-simplex) of Pascal's (m − 1)-simplex, or by a number of all possible partitions of an nth power among m exponents.
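As an illustration, a short C sketch (the function names are ours, not from any library) that evaluates this count via the binomial coefficient:

    #include <stdio.h>

    /* Binomial coefficient C(n, k); the running product stays an integer
     * at every step because C(n-k+i, i) is itself an integer. */
    static unsigned long long binomial(unsigned n, unsigned k) {
        if (k > n) return 0;
        if (k > n - k) k = n - k;          /* use the symmetry C(n,k) = C(n,n-k) */
        unsigned long long result = 1;
        for (unsigned i = 1; i <= k; ++i)
            result = result * (n - k + i) / i;
        return result;
    }

    /* Number of multinomial coefficients in the nth component of
     * Pascal's m-simplex: C(n + m - 1, m - 1). */
    static unsigned long long component_size(unsigned m, unsigned n) {
        return binomial(n + m - 1, m - 1);
    }

    int main(void) {
        /* For m = 3 (Pascal's tetrahedron) the nth layer holds a
         * triangular number of coefficients: 1, 3, 6, 10, 15, ... */
        for (unsigned n = 0; n <= 4; ++n)
            printf("m = 3, n = %u -> %llu coefficients\n", n, component_size(3, n));
        return 0;
    }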
Example
The terms of this table comprise a Pascal triangle in the format of a symmetric Pascal matrix.
Symmetry
An nth component ((m − 1)-simplex) of Pascal's m-simplex has the (m!)-fold spatial symmetry.
Geometry
Orthogonal axes k1, ..., km in m-dimensional space; the vertices of the nth component lie at the value n on each axis, and the tip of the simplex is at the origin for n = 0.
Numeric construction
The wrapped nth power of a suitable big number gives, instantly, the nth component of a Pascal's simplex; the base must provide enough digit positions per coefficient that no coefficient carries into its neighbour (see the sketch below).
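For the 2-simplex (Pascal's triangle) this construction is easy to check numerically: raising 10^d + 1 to the nth power and reading the result in groups of d decimal digits reproduces the nth row, as long as every coefficient fits in d digits. A small C sketch under that assumption (d = 3, n = 4):

    #include <stdio.h>

    int main(void) {
        /* Base 10^3 + 1: three decimal digits per "slot". With 64-bit
         * arithmetic the power itself fits for n <= 6, and every binomial
         * coefficient with n <= 6 fits comfortably in three digits. */
        const unsigned long long base = 1001;     /* 10^3 + 1 */
        const unsigned long long slot = 1000;     /* 10^3     */
        const unsigned n = 4;

        unsigned long long power = 1;
        for (unsigned i = 0; i < n; ++i)
            power *= base;                        /* 1001^4 = 1004006004001 */

        /* Peel off three-digit groups; the row is symmetric, so the
         * printing order does not matter. */
        printf("Row %u of Pascal's triangle:", n);
        for (unsigned long long rest = power; rest > 0; rest /= slot)
            printf(" %llu", rest % slot);
        printf("\n");                             /* 1 4 6 4 1 */
        return 0;
    }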
Factorial and binomial topics
Triangles of numbers | Pascal's simplex | [
"Mathematics"
] | 818 | [
"Factorial and binomial topics",
"Triangles of numbers",
"Combinatorics"
] |
2,151,421 | https://en.wikipedia.org/wiki/Integer%20overflow | In computer programming, an integer overflow occurs when an arithmetic operation on integers attempts to create a numeric value that is outside of the range that can be represented with a given number of digits – either higher than the maximum or lower than the minimum representable value.
The most common result of an overflow is that the least significant representable digits of the result are stored; the result is said to wrap around the maximum (i.e. modulo a power of the radix, usually two in modern computers, but sometimes ten or other number). On some processors like graphics processing units (GPUs) and digital signal processors (DSPs) which support saturation arithmetic, overflowed results would be clamped, i.e. set to the minimum value in the representable range if the result is below the minimum and set to the maximum value in the representable range if the result is above the maximum, rather than wrapped around.
An overflow condition may give results leading to unintended behavior. In particular, if the possibility has not been anticipated, overflow can compromise a program's reliability and security.
For some applications, such as timers and clocks, wrapping on overflow can be desirable. The C11 standard states that for unsigned integers, modulo wrapping is the defined behavior and the term overflow never applies: "a computation involving unsigned operands can never overflow."
Origin
The register width of a processor determines the range of values that can be represented in its registers. Though the vast majority of computers can perform multiple-precision arithmetic on operands in memory, allowing numbers to be arbitrarily long and overflow to be avoided, the register width limits the sizes of numbers that can be operated on (e.g., added or subtracted) using a single instruction per operation. Typical binary register widths for unsigned integers include:
4-bit: maximum representable value 2^4 − 1 = 15
8-bit: maximum representable value 2^8 − 1 = 255
16-bit: maximum representable value 2^16 − 1 = 65,535
32-bit: maximum representable value 2^32 − 1 = 4,294,967,295 (the most common width for personal computers)
64-bit: maximum representable value 2^64 − 1 = 18,446,744,073,709,551,615 (the most common width for personal computer central processing units (CPUs))
128-bit: maximum representable value 2^128 − 1 = 340,282,366,920,938,463,463,374,607,431,768,211,455
When an unsigned arithmetic operation produces a result larger than the maximum above for an N-bit integer, an overflow reduces the result modulo the Nth power of 2, retaining only the least significant bits of the result and effectively causing a wrap around.
In particular, multiplying or adding two integers may result in a value that is unexpectedly small, and subtracting from a small integer may cause a wrap to a large positive value (for example, 8-bit integer addition 255 + 2 results in 1, which is 257 mod 256, and similarly subtraction 0 − 1 results in 255, a two's complement representation of −1).
Such wraparound may cause security detriments—if an overflowed value is used as the number of bytes to allocate for a buffer, the buffer will be allocated unexpectedly small, potentially leading to a buffer overflow which, depending on the use of the buffer, might in turn cause arbitrary code execution.
If the variable has a signed integer type, a program may make the assumption that a variable always contains a positive value. An integer overflow can cause the value to wrap and become negative, which violates the program's assumption and may lead to unexpected behavior (for example, 8-bit integer addition of 127 + 1 results in −128, a two's complement of 128). (A solution for this particular problem is to use unsigned integer types for values that a program expects and assumes will never be negative.)
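The wrap-around behaviour for unsigned types can be demonstrated directly in C (the 8-bit values match the examples above; signed overflow is deliberately not shown, since it is undefined behaviour in C):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t a = 255;
        uint8_t sum = (uint8_t)(a + 2);       /* 257 mod 256 = 1 */
        uint8_t diff = (uint8_t)(0 - 1);      /* wraps to 255    */

        uint32_t big = 4294967295u;           /* 2^32 - 1        */
        uint32_t wrapped = big + 1u;          /* wraps to 0      */

        printf("255 + 2      -> %u\n", (unsigned)sum);
        printf("0 - 1        -> %u\n", (unsigned)diff);
        printf("(2^32-1) + 1 -> %u\n", (unsigned)wrapped);
        return 0;
    }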
Flags
Most computers have two dedicated processor flags to check for overflow conditions.
The carry flag is set when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the given number of bits. This indicates an overflow with a carry or borrow from the most significant bit. An immediately following add with carry or subtract with borrow operation would use the contents of this flag to modify a register or a memory location that contains the higher part of a multi-word value.
The overflow flag is set when the result of an operation on signed numbers does not have the sign that one would predict from the signs of the operands, e.g., a negative result when adding two positive numbers. This indicates that an overflow has occurred and the signed result represented in two's complement form would not fit in the given number of bits.
Definition variations and ambiguity
For an unsigned type, when the ideal result of an operation is outside the type's representable range and the returned result is obtained by wrapping, then this event is commonly defined as an overflow. In contrast, the C11 standard defines that this event is not an overflow and states "a computation involving unsigned operands can never overflow."
When the ideal result of an integer operation is outside the type's representable range and the returned result is obtained by clamping, then this event is commonly defined as a saturation. Use varies as to whether a saturation is or is not an overflow. To eliminate ambiguity, the terms wrapping overflow and saturating overflow can be used.
Many references can be found to integer underflow. When the term integer underflow is used, it means the ideal result was closer to negative infinity than the output type's representable value closest to negative infinity. Depending on context, the definition of overflow may include all types including underflows, or it may only include cases where the ideal result was closer to positive infinity than the output type's representable value closest to positive infinity.
When the ideal result of an operation is not an exact integer, the meaning of overflow can be ambiguous in edge cases. Consider the case where the ideal result has a value of 127.25 and the output type's maximum representable value is 127. If overflow is defined as the ideal value being outside the representable range of the output type, then this case would be classified as an overflow. For operations that have well-defined rounding behavior, overflow classification may need to be postponed until after rounding is applied. The C11 standard defines that conversions from floating point to integer must round toward zero. If C is used to convert the floating point value 127.25 to integer, then rounding should be applied first to give an ideal integer output of 127. Since the rounded integer is in the output's range, the C standard would not classify this conversion as an overflow.
Inconsistent behavior
The behavior on occurrence of overflow may not be consistent in all circumstances. For example, in the language Rust, while functionality is provided to give users choice and control, the behavior for basic use of mathematic operators is naturally fixed; however, this fixed behavior differs between a program built in 'debug' mode and one built in 'release' mode. In C, unsigned integer overflow is defined to wrap around, while signed integer overflow causes undefined behavior.
Methods to address integer overflow problems
Detection
A run-time overflow detection implementation, UBSan (undefined behavior sanitizer), is available for C compilers.
In Java 8, there are overloaded methods, for example Math.addExact(), which will throw an ArithmeticException in case of overflow.
Computer emergency response team (CERT) developed the As-if Infinitely Ranged (AIR) integer model, a largely automated mechanism to eliminate integer overflow and truncation in C/C++ using run-time error handling.
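In addition to sanitizers, GCC and Clang expose checked-arithmetic builtins such as __builtin_add_overflow, which report an overflow instead of leaving the caller to inspect a wrapped result; a brief sketch:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int a = INT_MAX, b = 1, sum;

        /* Returns a nonzero value if the mathematically exact result does
         * not fit in an int; sum then holds the wrapped value. */
        if (__builtin_add_overflow(a, b, &sum)) {
            fprintf(stderr, "addition overflowed\n");
            return 1;
        }
        printf("sum = %d\n", sum);
        return 0;
    }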
Avoidance
By allocating variables with data types that are large enough to contain all values that may possibly be computed and stored in them, it is always possible to avoid overflow. Even when the available space or the fixed data types provided by a programming language or environment are too limited to allow for variables to be defensively allocated with generous sizes, by carefully ordering operations and checking operands in advance, it is often possible to ensure a priori that the result will never be larger than can be stored. Static analysis tools, formal verification and design by contract techniques can be used to more confidently and robustly ensure that an overflow cannot accidentally result.
Handling
If it is anticipated that overflow may occur, then tests can be inserted into the program to detect when it happens, or is about to happen, and do other processing to mitigate it. For example, if an important result computed from user input overflows, the program can stop, reject the input, and perhaps prompt the user for different input, rather than the program proceeding with the invalid overflowed input and probably malfunctioning as a consequence.
CPUs generally have a way to detect this to support addition of numbers larger than their register size, typically using a status bit. The technique is called multiple-precision arithmetic. Thus, it is possible to perform byte-wide addition on operands wider than a byte: first add the low bytes, store the result and check for overflow; then add the high bytes, and if necessary add the carry from the low bytes, then store the result.
Handling possible overflow of a calculation may sometimes present a choice between performing a check before a calculation (to determine whether or not overflow is going to occur), or after it (to consider whether or not it likely occurred based on the resulting value). Since some implementations might generate a trap condition on integer overflow, the most portable programs test in advance of performing the operation that might overflow.
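A portable way to test in advance, as described above, is to compare the operands against the limits of the type before the operation is performed; a sketch for signed int addition (the helper name is illustrative):

    #include <limits.h>
    #include <stdbool.h>

    /* Stores a + b in *out and returns true only when the sum is
     * representable; the tests themselves cannot overflow because they
     * only move the limits toward zero. */
    static bool add_int_checked(int a, int b, int *out) {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b)) {
            return false;        /* would overflow; caller decides what to do */
        }
        *out = a + b;
        return true;
    }

The same pattern extends to subtraction and multiplication, at the cost of a few more comparisons.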
Programming language support
Programming languages implement various mitigation methods against an accidental overflow: Ada, Seed7, and certain variants of functional languages trigger an exception condition on overflow, while Python (since 2.4) seamlessly converts the internal representation of the number to match its growth, eventually representing it as long – whose magnitude is limited only by the available memory.
In languages with native support for arbitrary-precision arithmetic and type safety (such as Python, Smalltalk, or Common Lisp), numbers are promoted to a larger size automatically when overflows occur, or exceptions thrown (conditions signaled) when a range constraint exists. Using such languages may thus be helpful to mitigate this issue. However, in some such languages, situations are still possible where an integer overflow can occur. An example is explicit optimization of a code path which is considered a bottleneck by the profiler. In the case of Common Lisp, this is possible by using an explicit declaration to type-annotate a variable to a machine-size word (fixnum) and lower the type safety level to zero for a particular code block.
In stark contrast to older languages such as C, some newer languages such as Rust provide built-in functions that allow easy detection and user choice over how overflow should be handled case-by-case. In Rust, while use of basic mathematic operators naturally lacks such flexibility, users can alternatively perform calculations via a set of methods provided by each of the integer primitive types. These methods give users several choices between performing a checked (or overflowing) operation (which indicates whether or not overflow occurred via the return type); an 'unchecked' operation; an operation that performs wrapping, or an operation which performs saturation at the numeric bounds.
Saturated arithmetic
In computer graphics or signal processing, it is typical to work on data that ranges from 0 to 1 or from −1 to 1. For example, take a grayscale image where 0 represents black, 1 represents white, and the values in between represent shades of gray. One operation that one may want to support is brightening the image by multiplying every pixel by a constant. Saturated arithmetic allows one to just blindly multiply every pixel by that constant without worrying about overflow by just sticking to a reasonable outcome that all these pixels larger than 1 (i.e., "brighter than white") just become white and all values "darker than black" just become black.
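A sketch of such a brightening step with saturation, assuming 8-bit pixel values and a fixed-point gain (names and scaling are illustrative):

    #include <stdint.h>

    /* Brighten an 8-bit grayscale pixel by a gain given in 1/256ths
     * (gain_q8 = 384 means "1.5 times brighter"). The intermediate is
     * computed in 32 bits and the result is clamped to 255 instead of
     * being allowed to wrap. */
    static uint8_t brighten_saturating(uint8_t pixel, uint16_t gain_q8) {
        uint32_t scaled = ((uint32_t)pixel * gain_q8) >> 8;
        return (scaled > 255u) ? 255u : (uint8_t)scaled;
    }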
Examples
Unanticipated arithmetic overflow is a fairly common cause of program errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests.
Taking the arithmetic mean of two numbers by adding them and dividing by two, as done in many search algorithms, causes error if the sum (although not the resulting mean) is too large to be represented and hence overflows.
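The midpoint computation in a binary search is the textbook case; a sketch of the safe form (the failing form is shown only as a comment, since signed overflow is undefined behaviour in C):

    #include <stdio.h>

    int main(void) {
        int low = 2000000000, high = 2100000000;   /* both fit in a 32-bit int */

        /* int mid = (low + high) / 2;   -- the sum is about 4.1e9, which
         * overflows a 32-bit int even though the mean itself would fit.  */

        int mid = low + (high - low) / 2;          /* the difference cannot
                                                      overflow when low <= high */
        printf("mid = %d\n", mid);                 /* 2050000000 */
        return 0;
    }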
Between 1985 and 1987, arithmetic overflow in the Therac-25 radiation therapy machines, along with a lack of hardware safety controls, caused the death of at least six people from radiation overdoses.
An unhandled arithmetic overflow in the engine steering software was the primary cause of the crash of the 1996 maiden flight of the Ariane 5 rocket. The software had been considered bug-free since it had been used in many previous flights, but those used smaller rockets which generated lower acceleration than Ariane 5. Frustratingly, the part of the software in which the overflow error occurred was not even required to be running for the Ariane 5 at the time that it caused the rocket to fail: it was a launch-regime process for a smaller predecessor of the Ariane 5 that had remained in the software when it was adapted for the new rocket. Further, the true cause of the failure was a flaw in the engineering specification of how the software dealt with the overflow when it was detected: it did a diagnostic dump to its bus, which would have been connected to test equipment during software testing during development but was connected to the rocket steering motors during flight; the data dump drove the engine nozzle hard to one side which put the rocket out of aerodynamic control and precipitated its rapid breakup in the air.
On 30 April 2015, the U.S. Federal Aviation Administration announced that it would order Boeing 787 operators to reset the aircraft's electrical system periodically, to avoid an integer overflow which could lead to loss of electrical power and ram air turbine deployment, and Boeing deployed a software update in the fourth quarter. The European Aviation Safety Agency followed on 4 May 2015. The error happens after 2^31 hundredths of a second (about 248 days), indicating a 32-bit signed integer.
Overflow bugs are evident in some computer games. In Super Mario Bros. for the NES, the stored number of lives is a signed byte (ranging from −128 to 127), meaning the player can safely have 127 lives; but when the player reaches their 128th life, the counter rolls over to zero lives (although the number counter is glitched before this happens) and stops keeping count. As such, if the player then dies it is an immediate game over. This overflow was a programming oversight: the developers apparently did not expect that so many lives could reasonably be earned in a full playthrough.
In the arcade game Donkey Kong, it is impossible to advance past level 22 due to an integer overflow in its time/bonus. The game calculates the time/bonus by taking the level number a user is on, multiplying it by 10, and adding 40. When they reach level 22, the time/bonus number is 260, which is too large for its 8-bit register (maximum value 255), so it wraps around to a value of 4 – too short to finish the level. In Donkey Kong Jr. Math, when trying to calculate a number over 10,000, it shows only the first 4 digits. Overflow is the cause of the famous "split-screen" level in Pac-Man. Such a bug also caused the Far Lands in Minecraft Java Edition, which existed from the Infdev development period to Beta 1.7.3; it was later fixed in Beta 1.8. The same bug also existed in Minecraft Bedrock Edition but has since been fixed.
In the Super Nintendo Entertainment System (SNES) game Lamborghini American Challenge, the player can cause their amount of money to drop below $0 during a race by being fined over the limit of remaining money after paying the fee for a race, which glitches the integer and grants the player $65,535,000 more than they would have had after going negative.
IBM–Microsoft Macro Assembler (MASM) version 1.00, and likely all other programs built by the same Pascal compiler, had an integer overflow and signedness error in the stack setup code, which prevented them from running on newer DOS machines or emulators under some common configurations with more than 512 KB of memory. The program either hangs or displays an error message and exits to DOS.
In August 2016, a casino machine at Resorts World casino printed a prize ticket of $42,949,672.76 as a result of an overflow bug. The casino refused to pay this amount, calling it a malfunction, using in their defense that the machine clearly stated that the maximum payout was $10,000, so any prize exceeding that had to be the result of a programming bug. The New York State Gaming Commission ruled in favor of the casino.
See also
Carry (arithmetic)
Modular arithmetic
Nuclear Gandhi
References
External links
Phrack #60, Basic Integer Overflows
Phrack #60, Big Loop Integer Protection
Efficient and Accurate Detection of Integer-based Attacks
WASC Threat Classification – Integer Overflows
Understanding Integer Overflow in C/C++
Binary Overflow – Binary Arithmetic
ISO C11 Standard
Software bugs
Computer security exploits
Computer arithmetic
de:Arithmetischer Überlauf | Integer overflow | [
"Mathematics",
"Technology"
] | 3,696 | [
"Computer arithmetic",
"Arithmetic",
"Computer security exploits"
] |
2,151,656 | https://en.wikipedia.org/wiki/Wave%20tank | A wave tank is a laboratory setup for observing the behavior of surface waves. The typical wave tank is a box filled with liquid, usually water, leaving open or air-filled space on top. At one end of the tank, an actuator generates waves; the other end usually has a wave-absorbing surface. A similar device is the ripple tank, which is flat and shallow and used for observing patterns of surface waves from above.
Wave basin
A wave basin is a wave tank which has a width and length of comparable magnitude, often used for testing ships, offshore structures and three-dimensional models of harbors (and their breakwaters).
Wave flume
A wave flume (or wave channel) is a special sort of wave tank: the width of the flume is much less than its length. The generated waves are therefore – more or less – two-dimensional in a vertical plane (2DV), meaning that the orbital flow velocity component in the direction perpendicular to the flume side wall is much smaller than the other two components of the three-dimensional velocity vector. This makes a wave flume a well-suited facility to study near-2DV structures, like cross-sections of a breakwater. Also (3D) constructions providing little blockage to the flow may be tested, e.g. measuring wave forces on vertical cylinders with a diameter much less than the flume width.
Wave flumes may be used to study the effects of water waves on coastal structures, offshore structures, sediment transport and other transport phenomena.
The waves are most often generated with a mechanical wavemaker, although there are also wind–wave flumes with (additional) wave generation by an air flow over the water – with the flume closed above by a roof above the free surface. The wavemaker frequently consists of a translating or rotating rigid wave board. Modern wavemakers are computer controlled, and can generate besides periodic waves also random waves, solitary waves, wave groups or even tsunami-like wave motion. The wavemaker is at one end of the wave flume, and at the other end is the construction being tested, or a wave absorber (a beach or special wave absorbing constructions).
Often, the side walls contain glass windows, or are completely made of glass, allowing for a clear visual observation of the experiment, and the easy deployment of optical instruments (e.g. by Laser Doppler velocimetry or particle image velocimetry).
Circular wave basin
In 2014, the first circular, combined current and wave test basin, FloWaveTT, was commissioned at the University of Edinburgh. This allows for "true" 360° waves to be generated to simulate rough storm conditions as well as scientific controlled waves in the same facility.
See also
Water tunnel (hydrodynamic)
Airy wave theory
Ocean waves
Ripple tank
Shallow water equations
Further reading
References
External links
Experimental physics
Hydrodynamics
Water waves
Scale modeling
Physical models
Articles containing video clips | Wave tank | [
"Physics",
"Chemistry"
] | 600 | [
"Scale modeling",
"Physical phenomena",
"Water waves",
"Hydrodynamics",
"Waves",
"Experimental physics",
"Physical objects",
"Physical models",
"Matter",
"Fluid dynamics"
] |
2,151,693 | https://en.wikipedia.org/wiki/Cassegrain%20reflector | The Cassegrain reflector is a combination of a primary concave mirror and a secondary convex mirror, often used in optical telescopes and radio antennas, the main characteristic being that the optical path folds back onto itself, relative to the optical system's primary mirror entrance aperture. This design puts the focal point at a convenient location behind the primary mirror and the convex secondary adds a telephoto effect creating a much longer focal length in a mechanically short system.
In a symmetrical Cassegrain both mirrors are aligned about the optical axis, and the primary mirror usually contains a hole in the center, thus permitting the light to reach an eyepiece, a camera, or an image sensor. Alternatively, as in many radio telescopes, the final focus may be in front of the primary. In an asymmetrical Cassegrain, the mirror(s) may be tilted to avoid obscuration of the primary or to avoid the need for a hole in the primary mirror (or both).
The classic Cassegrain configuration uses a parabolic reflector as the primary while the secondary mirror is hyperbolic. Modern variants may have a hyperbolic primary for increased performance (for example, the Ritchey–Chrétien design); and either or both mirrors may be spherical or elliptical for ease of manufacturing.
The Cassegrain reflector is named after a published reflecting telescope design that appeared in the April 25, 1672 Journal des sçavans which has been attributed to Laurent Cassegrain. Similar designs using convex secondary mirrors have been found in Bonaventura Cavalieri's 1632 writings describing burning mirrors and Marin Mersenne's 1636 writings describing telescope designs. James Gregory's 1662 attempts to create a reflecting telescope included a Cassegrain configuration, judging by a convex secondary mirror found among his experiments.
The Cassegrain design is also used in catadioptric systems.
Cassegrain designs
"Classic" Cassegrain telescopes
The "classic" Cassegrain has a parabolic primary mirror and a hyperbolic secondary mirror that reflects the light back down through a hole in the primary. Folding the optics makes this a compact design. On smaller telescopes, and camera lenses, the secondary is often mounted on an optically flat, optically clear glass plate that closes the telescope tube. This support eliminates the "star-shaped" diffraction effects caused by a straight-vaned support spider. The closed tube stays clean, and the primary is protected, at the cost of some loss of light-gathering power.
It makes use of the special properties of parabolic and hyperbolic reflectors. A concave parabolic reflector will reflect all incoming light rays parallel to its axis of symmetry to a single point, the focus. A convex hyperbolic reflector has two foci and will reflect all light rays directed at one of its two foci towards its other focus. The mirrors in this type of telescope are designed and positioned so that they share one focus and so that the second focus of the hyperbolic mirror will be at the same point at which the image is to be observed, usually just outside the eyepiece.
In most Cassegrain systems, the secondary mirror blocks a central portion of the aperture. This ring-shaped entrance aperture significantly reduces a portion of the modulation transfer function (MTF) over a range of low spatial frequencies, compared to a full-aperture design such as a refractor or an offset Cassegrain. This MTF notch has the effect of lowering image contrast when imaging broad features. In addition, the support for the secondary (the spider) may introduce diffraction spikes in images.
The radii of curvature of the primary and secondary mirrors, respectively, in the classic configuration are
$R_1 = -\frac{2DF}{F - B}$
and
$R_2 = -\frac{2DB}{F - B - D},$
where
$F$ is the effective focal length of the system,
$B$ is the back focal length (the distance from the secondary to the focus),
$D$ is the distance between the two mirrors and
$M = \frac{F - B}{D}$ is the secondary magnification.
If, instead of $B$ and $D$, the known quantities are the focal length of the primary mirror, $f_1$, and the distance to the focus behind the primary mirror, $b$, then $D = \frac{f_1 (F - b)}{F + f_1}$ and $B = D + b$.
The conic constant of the primary mirror is that of a parabola, $K_1 = -1$. Thanks to that there is no spherical aberration introduced by the primary mirror. The secondary mirror, however, is of a hyperbolic shape with one focus coinciding with that of the primary mirror and the other focus being at the back focal length $B$. Thus, the classical Cassegrain has ideal focus for the chief ray (the center spot diagram is one point). The conic constant of the secondary is
$K_2 = -e^2,$
where
$e$ is the eccentricity of the hyperboloid.
Actually, as the conic constants should not depend on scaling, the formulae for both $K_1$ and $K_2$ can be greatly simplified and presented only as functions of the secondary magnification. Finally,
$K_1 = -1$
and
$K_2 = -\left(\frac{M + 1}{M - 1}\right)^2.$
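As an illustration of these relations, a short calculation sketch in C (all numerical values are made up for the example; variable names mirror the symbols above):

    #include <stdio.h>

    int main(void) {
        /* Illustrative classical Cassegrain: primary focal length f1 = 500 mm,
         * effective focal length F = 2500 mm, focus b = 150 mm behind the
         * primary. All lengths in millimetres. */
        double F = 2500.0, f1 = 500.0, b = 150.0;

        double D = f1 * (F - b) / (F + f1);        /* mirror separation        */
        double B = D + b;                          /* back focal length        */
        double M = (F - B) / D;                    /* secondary magnification  */

        double R1 = -2.0 * D * F / (F - B);        /* primary radius           */
        double R2 = -2.0 * D * B / (F - B - D);    /* secondary radius         */
        double e  = (M + 1.0) / (M - 1.0);
        double K2 = -e * e;                        /* secondary conic constant */

        printf("D = %.1f  B = %.1f  M = %.2f\n", D, B, M);
        printf("R1 = %.1f  R2 = %.1f  K2 = %.3f\n", R1, R2, K2);
        return 0;
    }

With these inputs the sketch reproduces the expected consistency checks, e.g. M = F/f1 = 5 and R1 = −2 f1 = −1000 mm.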
Ritchey-Chrétien
The Ritchey-Chrétien is a specialized Cassegrain reflector which has two hyperbolic mirrors (instead of a parabolic primary). It is free of coma and spherical aberration at a flat focal plane, making it well suited for wide field and photographic observations. It was invented by George Willis Ritchey and Henri Chrétien in the early 1910s. This design is very common in large professional research telescopes, including the Hubble Space Telescope, the Keck Telescopes, and the Very Large Telescope (VLT); it is also found in high-grade amateur telescopes.
Dall-Kirkham
The Dall-Kirkham Cassegrain telescope design was created by Horace Dall in 1928 and took on the name in an article published in Scientific American in 1930 following discussion between amateur astronomer Allan Kirkham and Albert G. Ingalls, the magazine's astronomy editor at the time. It uses a concave elliptical primary mirror and a convex spherical secondary. While this system is easier to polish than a classic Cassegrain or Ritchey-Chretien system, the off-axis coma is significantly worse, so the image degrades quickly off-axis. Because this is less noticeable at longer focal ratios, Dall-Kirkhams are seldom faster than f/15.
Off-axis configurations
An unusual variant of the Cassegrain is the Schiefspiegler telescope ("skewed" or "oblique reflector"; also known as the "Kutter telescope" after its inventor, Anton Kutter) which uses tilted mirrors to avoid the secondary mirror casting a shadow on the primary. However, while eliminating diffraction patterns this leads to several other aberrations that must be corrected.
Several different off-axis configurations are used for radio antennas.
Another off-axis, unobstructed design and variant of the Cassegrain is the 'Yolo' reflector invented by Arthur Leonard. This design uses a spherical or parabolic primary and a mechanically warped spherical secondary to correct for off-axis induced astigmatism. When set up correctly the Yolo can give uncompromising unobstructed views of planetary objects and non-wide field targets, with no lack of contrast or image quality caused by spherical aberration. The lack of obstruction also eliminates the diffraction associated with Cassegrain and Newtonian reflector astrophotography.
Catadioptric Cassegrains
Catadioptric Cassegrains use two mirrors, often with a spherical primary mirror to reduce cost, combined with refractive corrector element(s) to correct the resulting aberrations.
Schmidt-Cassegrain
The Schmidt-Cassegrain was developed from the wide-field Schmidt camera, although the Cassegrain configuration gives it a much narrower field of view. The first optical element is a Schmidt corrector plate. The plate is figured by placing a vacuum on one side, and grinding the exact correction required to correct the spherical aberration caused by the spherical primary mirror. Schmidt-Cassegrains are popular with amateur astronomers. An early Schmidt-Cassegrain camera was patented in 1946 by artist/architect/physicist Roger Hayward, with the film holder placed outside the telescope.
Maksutov-Cassegrain
The Maksutov-Cassegrain is a variation of the Maksutov telescope named after the Soviet optician and astronomer Dmitri Dmitrievich Maksutov. It starts with an optically transparent corrector lens that is a section of a hollow sphere. It has a spherical primary mirror, and a spherical secondary that is usually a mirrored section of the corrector lens.
Argunov-Cassegrain
In the Argunov-Cassegrain telescope all optics are spherical, and the classical Cassegrain secondary mirror is replaced by a sub-aperture corrector consisting of three air spaced lens elements. The element farthest from the primary mirror is a Mangin mirror, which acts as a secondary mirror.
Klevtsov-Cassegrain
The Klevtsov-Cassegrain, like the Argunov-Cassegrain, uses a sub-aperture corrector consisting of a small meniscus lens and a Mangin mirror as its "secondary mirror".
Cassegrain radio antennas
Cassegrain designs are also utilized in satellite telecommunication earth station antennas and radio telescopes, ranging in size from 2.4 metres to 70 metres. The centrally located sub-reflector serves to focus radio frequency signals in a similar fashion to optical telescopes.
An example of a cassegrain radio antenna is the 70-meter dish at JPL's Goldstone antenna complex. For this antenna, the final focus is in front of the primary, at the top of the pedestal protruding from the mirror.
See also
Catadioptric system
Celestron (Schmidt–Cassegrains, Maksutov Cassegrains)
List of telescope types
Meade Instruments (Schmidt–Cassegrains, Maksutov Cassegrains)
Questar (Maksutov Cassegrains)
Refracting telescope
Vixen (Cassegrains, Klevtsov–Cassegrain)
References
External links
Antennas (radio)
Radio frequency propagation
Radio frequency antenna types
Telescope types | Cassegrain reflector | [
"Physics"
] | 2,054 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
2,151,745 | https://en.wikipedia.org/wiki/Subfunctor | In category theory, a branch of mathematics, a subfunctor is a special type of functor that is an analogue of a subset.
Definition
Let C be a category, and let F be a contravariant functor from C to the category of sets Set. A contravariant functor G from C to Set is a subfunctor of F if
For all objects c of C, G(c) ⊆ F(c), and
For all arrows f: c′ → c of C, G(f) is the restriction of F(f) to G(c).
This relation is often written as G ⊆ F.
For example, let 1 be the category with a single object and a single arrow. A functor F: 1 → Set maps the unique object of 1 to some set S and the unique identity arrow of 1 to the identity function 1S on S. A subfunctor G of F maps the unique object of 1 to a subset T of S and maps the unique identity arrow to the identity function 1T on T. Notice that 1T is the restriction of 1S to T. Consequently, subfunctors of F correspond to subsets of S.
Remarks
Subfunctors in general are like global versions of subsets. For example, if one imagines the objects of some category C to be analogous to the open sets of a topological space, then a contravariant functor from C to the category of sets gives a set-valued presheaf on C, that is, it associates sets to the objects of C in a way that is compatible with the arrows of C. A subfunctor then associates a subset to each set, again in a compatible way.
The most important examples of subfunctors are subfunctors of the Hom functor. Let c be an object of the category C, and consider the functor Hom(−, c). This functor takes an object c′ of C and gives back all of the morphisms c′ → c. A subfunctor of Hom(−, c) gives back only some of the morphisms. Such a subfunctor is called a sieve, and it is usually used when defining Grothendieck topologies.
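Stated symbolically (a minimal restatement of the two subfunctor conditions for this case), a sieve S on c assigns to each object c′ a subset of the morphisms into c, and compatibility with arrows amounts to closure under precomposition:

    S(c') \subseteq \operatorname{Hom}(c', c) \quad \text{for every object } c',
    \qquad f \in S(c') \ \text{and} \ g\colon c'' \to c' \ \Longrightarrow\ f \circ g \in S(c'').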
Open subfunctors
Subfunctors are also used in the construction of representable functors on the category of ringed spaces. Let F be a contravariant functor from the category of ringed spaces to the category of sets, and let G ⊆ F. Suppose that this inclusion morphism G → F is representable by open immersions, i.e., for any representable functor Hom(−, X) and any morphism Hom(−, X) → F, the fibered product G ×_F Hom(−, X) is a representable functor Hom(−, Y) and the morphism Y → X defined by the Yoneda lemma is an open immersion. Then G is called an open subfunctor of F. If F is covered by representable open subfunctors, then, under certain conditions, it can be shown that F is representable. This is a useful technique for the construction of ringed spaces. It was discovered and exploited heavily by Alexander Grothendieck, who applied it especially to the case of schemes. For a formal statement and proof, see Grothendieck, Éléments de géométrie algébrique, vol. 1, 2nd ed., chapter 0, section 4.5.
Functors | Subfunctor | [
"Mathematics"
] | 702 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Mathematical relations",
"Category theory",
"Functors"
] |
2,151,928 | https://en.wikipedia.org/wiki/M-SG%20reducing%20agent | In M-SG an alkali metal is absorbed into silica gel at elevated temperatures. The resulting black powder material is an effective reducing agent and safe to handle as opposed to the pure metal. The material can also be used as a desiccant and as a hydrogen source.
The metal is either sodium or a sodium–potassium alloy (Na2K). The molten metal is mixed with silica gel under constant agitation at room temperature. This phase 0 material must be handled in an inert atmosphere. Heating phase 0 converts it to phase I. When this material is exposed to dry oxygen the reducing power is not affected. On further heating, phase II is obtained, which can be handled safely in an ambient environment.
The metal reacts with the silica gel in an exothermic reaction in which Na4Si4 nanoparticles are formed. The powder reacts with water to form hydrogen.
Compounds such as biphenyl and naphthalene are reduced by the powder and form highly coloured radical anions. The powder can also be introduced in a column chromatography setup and eluted with organic reactants in order to probe the reducing power. The powder is mixed with additional (wet) silica gel which provides additional hydrogen. A Birch reduction of naphthalene takes 5 minutes elution time. The column converts benzyl chloride to bibenzyl in a Wurtz coupling and in a similar fashion dibenzothiophene is reduced to biphenyl.
See also
Potassium graphite
References
Desiccants
Reducing agents | M-SG reducing agent | [
"Physics",
"Chemistry"
] | 317 | [
"Redox",
"Reducing agents",
"Desiccants",
"Materials",
"Matter"
] |
2,151,949 | https://en.wikipedia.org/wiki/Mott%20problem | The Mott problem is an iconic challenge to quantum mechanics theory: how can the prediction of spherically symmetric wave function result in linear tracks seen in a cloud chamber. The problem was first formulated in 1927 by Albert Einstein and Max Born and solved in 1929 by Nevill Francis Mott. Mott's solution notably only uses the wave equation, not wavefunction collapse, and it is considered the earliest example of what is now called decoherence theory.
Spherical waves, particle tracks
The problem later associated with Mott concerns a spherical wave function associated with an alpha ray emitted from the decay of a radioactive atomic nucleus. Intuitively, one might think that such a wave function should randomly ionize atoms throughout the cloud chamber, but this is not the case. The result of such a decay is always observed as linear tracks seen in Wilson's cloud chamber. The origin of the tracks given the original spherical wave predicted by theory is the problem requiring physical explanation.
In practice, virtually all high energy physics experiments, such as those conducted at particle colliders, involve wave functions which are inherently spherical. Yet, when the results of a particle collision are detected, they are invariably in the form of linear tracks (see, for example, the illustrations accompanying the article on bubble chambers). It is somewhat strange to think that a spherically symmetric wave function should be observed as a straight track, and yet, this occurs on a daily basis in all particle collider experiments.
History
The problem of alpha particle track was discussed at the Fifth Solvay conference in 1927. Max Born described the problem as one that Albert Einstein pointed to, asking "how can the corpuscular character of the phenomenon be reconciled here with the representation by waves?". Born answers with Heisenberg's "reduction of the probability packet", now called wavefunction collapse, introduced in May 1927. Born says each droplet in the cloud chamber track corresponds to a reduction of the wave in the immediate vicinity of the droplet. At the suggestion of Wolfgang Pauli he also discusses a solution that includes the alpha emitter and two atoms all in the same state and without wave function collapse, but does not pursue the idea beyond a brief discussion.
In his highly influential 1930 book, Werner Heisenberg analyzed the problem qualitatively but in detail. He considers two cases: wavefunction collapse at each interaction or wavefunction collapse only at the final apparatus, concluding they are equivalent.
In 1929 Charles Galton Darwin analyzed the problem without using wavefunction collapse. He says the correct approach requires viewing the wavefunction as consisting of the system under study (the alpha particle) and the environment it interacts with (atoms of the cloud chamber). Starting with a simple spherical wave, each collision involves a wavefunction with more coordinates and increasing complexity. His model coincides with the strategy of modern quantum decoherence theory.
Mott's analysis
Nevill Mott picks up where Darwin left off, citing Darwin's paper explicitly.
Mott's goal is to calculate the probability of exciting multiple atoms in the cloud chamber to understand why the excitation with a spherical wave creates a linear track.
Mott starts with a spherical wave for the alpha particle and two representative cloud chamber atoms modeled as hydrogen atoms. The relative positions of the emitter (black dot in the diagram, taken as the origin in Mott's treatment) and the two atoms (orange dots) are fixed during the calculation of the track, meaning the velocity of the alpha particle is taken as much larger than the thermal motion of the gas atoms. These relative coordinates are parameters in the solution so the intensity of the excitations for various positions can be compared. The hydrogen atoms stand in for whatever might compose the cloud chamber gas.
Given the fixed positions of the atoms, Mott calculates the excitation of the electrons of those atoms. By assuming that the emitter and the hydrogen atoms are not close together, Mott represents the time-independent part of the three-body state of the system, $\Psi(\mathbf{R}, \mathbf{r}_1, \mathbf{r}_2)$, as a sum of products of hydrogen atom eigenfunctions $\psi_i$:
$\Psi(\mathbf{R}, \mathbf{r}_1, \mathbf{r}_2) = \sum_{i,j} f_{ij}(\mathbf{R})\, \psi_i(\mathbf{r}_1)\, \psi_j(\mathbf{r}_2)$
Here $\mathbf{R}$ is the position of the alpha particle, $\mathbf{r}_1$ and $\mathbf{r}_2$ the positions of the hydrogen atoms' electrons, and the sum runs over the excited states of the atoms I and II. The expansion factors $f_{ij}(\mathbf{R})$ have the physical interpretation of conditional probability for the alpha particle near $\mathbf{R}$, given that atom I is excited to state $i$ and atom II is excited to state $j$.
To solve for the expansion factors, Mott used the Born approximation, a form of perturbation theory for scattering that works well when the incident wave is not significantly altered by the scattering. Consequently, Mott is assuming that the alpha particle barely notices the atoms it excites as it races through the cloud chamber.
Mott analyzes the spatial properties of the factor which describes the scattered alpha-particle wave when the first atom is excited and the second is in its ground state. He shows that it is strongly peaked along the line from the emitter to the first atom.
Mott then shows that the probability that both atoms become excited depends on the product of the probability that one atom is excited and the spatial extent of the electron potential of the other atom. Both atoms are excited only for colinear configurations.
Mott demonstrated that by considering the interaction in configuration space, where all of the atoms of the cloud chamber play a role, it is overwhelmingly probable that all of the condensed droplets in the cloud chamber will lie close to the same straight line.
In his work on quantum measurement, Eugene Wigner cites Mott's insight on configuration space as a critical aspect of quantum mechanics: the configuration space approach allows spatial correlations like the line of atoms into the structure of quantum mechanics. What is uncertain is which straight line the wave packet will reduce to; the probability distribution of straight tracks is spherically symmetric.
Modern applications
Erich Joos and H. Dieter Zeh adopt Mott's model in the first concrete model of quantum decoherence theory. Mott's analysis, while it predates modern decoherence theory, fits squarely within its approach. Bryce DeWitt points to the dramatic mass difference between the alpha particle and the electrons in Mott's analysis as characteristic of decoherence of the state of the more massive system, the alpha particle.
In modern times, the Mott problem is occasionally considered theoretically in the context of astrophysics and cosmology, where the evolution of the wave function from the Big Bang or other astrophysical phenomena is considered.
See also
References
Quantum measurement | Mott problem | [
"Physics"
] | 1,334 | [
"Quantum measurement",
"Quantum mechanics"
] |
2,152,071 | https://en.wikipedia.org/wiki/Historical%20astronomy | Historical astronomy is the science of analysing historic astronomical data. The American Astronomical Society (AAS), established 1899, states that its Historical Astronomy Division "...shall exist for the purpose of advancing interest in topics relating to the historical nature of astronomy. By historical astronomy we include the history of astronomy; what has come to be known as archaeoastronomy; and the application of historical records to modern astrophysical problems." Historical and ancient observations are used to track theoretically long term trends, such as eclipse patterns and the velocity of nebular clouds. Conversely, using known and well documented phenomenological activity, historical astronomers apply computer models to verify the validity of ancient observations, as well as dating such observations and documents which would otherwise be unknown.
Examples
One example of such study would be the Crab Nebula, which is the remains of a supernova of July 1054, the SN 1054. During the Northern Song dynasty in China, a historical astronomical record was written, which lists unusual phenomena observed in the night sky. The event was also recorded by Japanese and Arab astronomers. Scholars often associate this with the formation of the Crab Nebula. [3]
Secondly, the astronomer Edmond Halley employed this science to deduce that three comets that appeared roughly 76 years apart were in fact the same object.
Similarly, the dwarf planet Pluto was found to have been photographed as early as 1915 although it was not recognized until 1930.
Quasars have been photographed since the late 19th century although they were not known to be unusual objects until the 1960s.
See also
Archaeoastronomy
Astronomical chronology
Astronomical interferometer
Cultural astronomy
F. Richard Stephenson
History of astronomy
References
Katherine Bracher, THE HISTORICAL ASTRONOMY DIVISION
https://web.archive.org/web/20011019220913/http://star-www.dur.ac.uk/~jms/group.html
Misner, Thorne, Wheeler; Gravitation; 1970 [3]
External links
American Astronomical Society
Donald Yeomans, Great Comets in History
Search Engine for Astronomy
History of astronomy
Astronomical sub-disciplines | Historical astronomy | [
"Astronomy"
] | 431 | [
"Astronomy stubs",
"Astronomical sub-disciplines",
"History of astronomy"
] |
2,152,181 | https://en.wikipedia.org/wiki/List%20of%20chemical%20elements | 118 chemical elements have been identified and named officially by IUPAC. A chemical element, often simply called an element, is a type of atom which has a specific number of protons in its atomic nucleus (i.e., a specific atomic number, or Z).
The definitive visualisation of all 118 elements is the periodic table of the elements, whose history along the principles of the periodic law was one of the founding developments of modern chemistry. It is a tabular arrangement of the elements by their chemical properties that usually uses abbreviated chemical symbols in place of full element names, but the linear list format presented here is also useful. Like the periodic table, the list below organizes the elements by the number of protons in their atoms; it can also be organized by other properties, such as atomic weight, density, and electronegativity. For more detailed information about the origins of element names, see List of chemical element name etymologies.
List
See also
List of people whose names are used in chemical element names
List of places used in the names of chemical elements
List of chemical element name etymologies
Roles of chemical elements
Extended periodic table – theories about undiscovered elements
References
External links
Atoms made thinkable, an interactive visualisation of the elements allowing physical and chemical properties of the elements to be compared | List of chemical elements | [
"Chemistry"
] | 266 | [
"Lists of chemical elements"
] |
2,152,217 | https://en.wikipedia.org/wiki/FreeRTOS | FreeRTOS is a real-time operating system kernel for embedded devices that has been ported to 40 microcontroller platforms. It is distributed under the MIT License.
History
The FreeRTOS kernel was originally developed by Richard Barry around 2003, and was later developed and maintained by Barry's company, Real Time Engineers Ltd. In 2017, the firm passed stewardship of the FreeRTOS project to Amazon Web Services (AWS). Barry continues to work on FreeRTOS as part of an AWS team. With the transition to Amazon control, subsequent releases of the project also switched licensing from GPL version 2 (with special exceptions for static linking to proprietary code outside the FreeRTOS kernel itself) to MIT.
Implementation
FreeRTOS is designed to be small and simple. It is mostly written in the C programming language to make it easy to port and maintain. It also comprises a few assembly language functions where needed, mostly in architecture-specific scheduler routines.
Process management
FreeRTOS provides methods for multiple threads or tasks, mutexes, semaphores and software timers. A tickless mode is provided for low power applications. Thread priorities are supported. FreeRTOS applications can be statically allocated, but objects can also be dynamically allocated with five schemes of memory management (allocation), listed below (see the allocation sketch after this list):
allocate only;
allocate and free with a very simple, fast algorithm;
a more complex but fast allocate and free algorithm with memory coalescence;
an alternative to the more complex scheme that includes memory coalescence and allows a heap to be spread across multiple memory areas;
C library allocate and free with some mutual exclusion protection.
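As a minimal sketch of the static and dynamic creation paths mentioned above (assuming a FreeRTOSConfig.h with both configSUPPORT_DYNAMIC_ALLOCATION and configSUPPORT_STATIC_ALLOCATION enabled, in which case the application must also provide vApplicationGetIdleTaskMemory()), the following fragment creates one task from the configured heap scheme and one from compile-time buffers. The task body, names and priorities are placeholders, not part of the FreeRTOS API.

```c
#include "FreeRTOS.h"
#include "task.h"

static void vBlinkTask(void *pvParameters)   /* hypothetical application task */
{
    (void) pvParameters;
    for (;;) {
        /* board-specific work would go here */
        vTaskDelay(pdMS_TO_TICKS(500));       /* block for 500 ms */
    }
}

/* Buffers for the statically allocated task are reserved at compile time. */
static StackType_t  xStack[configMINIMAL_STACK_SIZE];
static StaticTask_t xTaskBuffer;

int main(void)
{
    /* Dynamic creation: stack and TCB come from the configured heap scheme. */
    xTaskCreate(vBlinkTask, "BlinkDyn", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);

    /* Static creation: no heap use at all. */
    xTaskCreateStatic(vBlinkTask, "BlinkStat", configMINIMAL_STACK_SIZE,
                      NULL, tskIDLE_PRIORITY + 2, xStack, &xTaskBuffer);

    vTaskStartScheduler();   /* should never return */
    for (;;) {}
}
```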
RTOSes typically do not have the more advanced features that are found in operating systems like Linux and Microsoft Windows, such as device drivers, advanced memory management, and user accounts. The emphasis is on compactness and speed of execution. FreeRTOS can be thought of as a thread library rather than an operating system, although command line interface and POSIX-like input/output (I/O) abstraction are available.
FreeRTOS implements multiple threads by having the host program call a thread tick method at regular short intervals. The thread tick method switches tasks depending on priority and a round-robin scheduling scheme. The usual interval is 1 to 10 milliseconds (1/1000 to 1/100 of a second) via an interrupt from a hardware timer, but this interval is often changed to suit a given application.
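The tick rate is normally set in the application's FreeRTOSConfig.h. The excerpt below is an illustrative configuration, not a default shipped by the project: the macro names are the standard FreeRTOS configuration options, while the chosen values are assumptions for a 1 ms tick on an unspecified port.

```c
/* Illustrative FreeRTOSConfig.h excerpt (example values, not project defaults). */
#define configUSE_PREEMPTION      1                      /* pre-emptive scheduling     */
#define configTICK_RATE_HZ        ((TickType_t) 1000)    /* 1000 Hz -> 1 ms tick       */
#define configMAX_PRIORITIES      5
#define configMINIMAL_STACK_SIZE  128                    /* in words, port-dependent   */
#define configUSE_TICKLESS_IDLE   1                      /* tickless mode, low power   */
```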
The software distribution contains prepared configurations and demonstrations for every port and compiler, allowing rapid application design. The project website provides documentation and RTOS tutorials, and details of the RTOS design.
Key features
Book and reference manuals.
Small memory size, low overhead, and fast execution.
Tick-less option for low power applications.
Intended for both hobbyists and professional developers working on commercial products.
Scheduler can be configured for either preemptive or cooperative multitasking.
Coroutine support (coroutines in FreeRTOS are simple and lightweight tasks with limited use of the call stack)
Trace support through generic trace macros. Tools such as Tracealyzer, a commercial tool by FreeRTOS partner Percepio, can thereby record and visualize the runtime behavior of FreeRTOS-based systems for debugging and verification. This includes task scheduling and kernel calls for semaphore and queue operations.
Supported architectures
Altera Nios II
ARM architecture
ARM7
ARM9
ARM Cortex-M
ARM Cortex-A
Atmel
Atmel AVR
AVR32
SAM3, SAM4
SAM7, SAM9
SAMD20, SAML21
Ceva
Ceva-BXx
SensPro
Ceva-XC16
Ceva-XM6
Ceva-Xx
Ceva-XM4
Cortus
APS1
APS3
APS3R
APS5
FPS6
FPS8
Cypress
PSoC
Energy Micro
EFM32
eSi-RISC
eSi-16x0
eSi-32x0
DSP Group
DBMD7
Espressif
ESP8266
ESP32
Fujitsu
FM3
MB91460
MB96340
Freescale
Coldfire V1, V2
HCS12
Kinetis
IBM
PPC404, PPC405
Infineon
TriCore
Infineon XMC4000
Intel
x86
8052
Microchip Technology
PIC18, PIC24, dsPIC
PIC32
Microsemi
SmartFusion
Multiclet
P1
NXP
LPC1000
LPC2000
LPC4300
Renesas
78K0R
RL78
H8/S
RX600
RX200
SuperH
V850
RISC-V
RV32I
RV64I
PULP RI5CY
Silicon Labs
Gecko (ARM Cortex)
STMicroelectronics
STM32
STR7
Texas Instruments
C2000 series (TMS320F28x)
MSP430
Stellaris
Hercules (TMS570LS04 & RM42)
Xilinx
MicroBlaze
Zynq-7000
Derivations
Amazon FreeRTOS
Amazon provides a now-deprecated extension of FreeRTOS: FreeRTOS with libraries for Internet of things (IoT) support, specifically for Amazon Web Services. Since version 10.0.0 in 2017, Amazon has taken stewardship of the FreeRTOS code, including any updates to the original kernel.
SAFERTOS
SAFERTOS was developed as a complementary version of FreeRTOS, with common functions, but designed for safety-critical implementation. FreeRTOS was subject to hazard and operability study (HAZOP), and weaknesses were identified and resolved. The result was put through a full IEC 61508 SIL 3 development lifecycle, the highest level for a software-only component.
SAFERTOS was developed by Wittenstein High Integrity Systems, in partnership with Real Time Engineers Ltd, primary developer of the FreeRTOS project. Both SAFERTOS and FreeRTOS share the same scheduling algorithm, have similar application programming interfaces (APIs), and are otherwise very similar, but they were developed with differing objectives. SAFERTOS was developed solely in the C language to meet requirements for certification to IEC61508.
SAFERTOS can reside solely in the on-chip read-only memory (ROM) of a microcontroller for standards compliance. When implemented in hardware memory, SAFERTOS code can only be used in its original, certified configuration. This means certifying a system needs no retesting of the kernel portion of a design. SAFERTOS is included in the ROM of some Stellaris Microcontrollers from Texas Instruments. SAFERTOS source code does not need to be separately purchased. In this usage scenario, a C header file is used to map SAFERTOS API functions to their location in read-only memory.
OPENRTOS
OPENRTOS is a commercially licensed version of Amazon FreeRTOS, sold by Wittenstein High Integrity Systems. This product provides support and allows companies to use the Amazon FreeRTOS kernel and libraries without the a:FreeRTOS MIT license.
See also
Embedded operating system
References
External links
Amazon Web Services
ARM operating systems
Embedded operating systems
Free software operating systems
Microkernel-based operating systems
Microkernels
Real-time operating systems
X86 operating systems | FreeRTOS | [
"Technology"
] | 1,490 | [
"Real-time computing",
"Real-time operating systems"
] |
2,152,225 | https://en.wikipedia.org/wiki/Thin-layer%20chromatography | Thin-layer chromatography (TLC) is a chromatography technique that separates components in non-volatile mixtures.
It is performed on a TLC plate made up of a non-reactive solid coated with a thin layer of adsorbent material. This is called the stationary phase. The sample is deposited on the plate, which is eluted with a solvent or solvent mixture known as the mobile phase (or eluent). This solvent then moves up the plate via capillary action. As with all chromatography, some compounds are more attracted to the mobile phase, while others are more attracted to the stationary phase. Therefore, different compounds move up the TLC plate at different speeds and become separated. To visualize colourless compounds, the plate is viewed under UV light or is stained. Testing different stationary and mobile phases is often necessary to obtain well-defined and separated spots.
TLC is quick, simple, and gives high sensitivity for a relatively low cost. It can monitor reaction progress, identify compounds in a mixture, determine purity, or purify small amounts of compound.
Procedure
The process for TLC is similar to paper chromatography but provides faster runs, better separations, and the choice between different stationary phases. Plates can be labelled before or after the chromatography process with a pencil or other implement that will not interfere with the process.
There are four main stages to running a thin-layer chromatography plate:
Plate preparation: Using a capillary tube, a small amount of a concentrated solution of the sample is deposited near the bottom edge of a TLC plate. The solvent is allowed to completely evaporate before the next step. A vacuum chamber may be necessary for non-volatile solvents. To make sure there is sufficient compound to obtain a visible result, the spotting procedure can be repeated. Depending on the application, multiple different samples may be placed in a row the same distance from the bottom edge; each sample will move up the plate in its own "lane."
Development chamber preparation: The development solvent or solvent mixture is placed into a transparent container (separation/development chamber) to a depth of less than 1 centimetre. A strip of filter paper (aka "wick") is also placed along the container wall. This filter paper should touch the solvent and almost reach the top of the container. The container is covered with a lid and the solvent vapors are allowed to saturate the atmosphere of the container. Failure to do so results in poor separation and non-reproducible results.
Development: The TLC plate is placed in the container such that the sample spot(s) are not submerged into the mobile phase. The container is covered to prevent solvent evaporation. The solvent migrates up the plate by capillary action, meets the sample mixture, and carries it up the plate (elutes the sample). The plate is removed from the container before the solvent reaches the top of the plate; otherwise, the results will be misleading. The solvent front, the highest mark the solvent has travelled along the plate, is marked.
Visualization: The solvent evaporates from the plate. Visualization methods include UV light, staining, and many more.
Separation process and principle
The separation of compounds is due to differences in their attraction to the stationary phase and in their solubility in the solvent. As a result, the compounds and the mobile phase compete for binding sites on the stationary phase. Different compounds in the sample mixture travel at different rates due to the differences in their partition coefficients. Different solvents, or different solvent mixtures, give different separations. The retardation factor (Rf), or retention factor, quantifies the results. It is the distance traveled by a given substance divided by the distance traveled by the mobile phase.
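As a small worked example of the retardation factor, the sketch below computes Rf for two spots on the same plate; the distances are made-up illustrative values.

```c
/* Minimal sketch: Rf = spot distance / solvent-front distance (same units). */
#include <stdio.h>

static double rf(double spot_distance, double solvent_front_distance)
{
    return spot_distance / solvent_front_distance;
}

int main(void)
{
    double front  = 4.0;                 /* solvent front travelled 4.0 cm   */
    double spot_a = 2.8, spot_b = 1.1;   /* two separated compounds, in cm   */

    printf("Rf(A) = %.2f\n", rf(spot_a, front));   /* 0.70: moves further     */
    printf("Rf(B) = %.2f\n", rf(spot_b, front));   /* 0.28: retained more     */
    return 0;
}
```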
In normal-phase TLC, the stationary phase is polar. Silica gel is very common in normal-phase TLC. More polar compounds in a sample mixture interact more strongly with the polar stationary phase. As a result, more-polar compounds move less (resulting in smaller Rf) while less-polar compounds move higher up the plate (higher Rf). A more-polar mobile phase also binds more strongly to the plate, competing more with the compound for binding sites; a more-polar mobile phase also dissolves polar compounds more. As such, all compounds on the TLC plate move higher up the plate in polar solvent mixtures. "Strong" solvents move compounds higher up the plate, whereas "weak" solvents move them less.
If the stationary phase is non-polar, like C18-functionalized silica plates, it is called reverse-phase TLC. In this case, non-polar compounds move less and polar compounds move more. The solvent mixture will also be much more polar than in normal-phase TLC.
Solvent choice
An eluotropic series, which orders solvents by how much they move compounds, can help in selecting a mobile phase. Solvents are also divided into solvent selectivity groups. Using solvents with different elution strengths or different selectivity groups can often give very different results. While single-solvent mobile phases can sometimes give good separation, some cases may require solvent mixtures.
In normal-phase TLC, the most common solvent mixtures include ethyl acetate/hexanes (EtOAc/Hex) for less-polar compounds and methanol/dichloromethane (MeOH/DCM) for more polar compounds. Different solvent mixtures and solvent ratios can help give better separation. In reverse-phase TLC, solvent mixtures are typically water with a less-polar solvent: Typical choices are water with tetrahydrofuran (THF), acetonitrile (ACN), or methanol.
Analysis
As the chemicals being separated may be colourless, several methods exist to visualise the spots:
Placing the plate under blacklight (366 nm light) makes fluorescent compounds glow
TLC plates containing a small amount of fluorescent compound (usually manganese-activated zinc silicate) in the adsorbent layer allow for visualisation of some compounds under UV-C light (254 nm). The adsorbent layer will fluoresce light-green, while spots containing compounds that absorb UV-C light will not.
Placing the plate in a container filled with iodine vapours temporarily stains the spots. They typically become a yellow or brown colour.
The TLC plate can either be dipped in or sprayed with a stain and sometimes heated depending on the stain used. Many stains exist for a large range of chemical moieties but some examples include:
Potassium permanganate (no heating, for oxidisable groups)
Ninhydrin (heating, amines and amino-acids)
Acidic vanillin (heating, general reagent)
Phosphomolybdic acid (no heating, general reagent)
In the case of lipids, the chromatogram may be transferred to a polyvinylidene fluoride membrane and then subjected to further analysis, for example, mass spectrometry. This technique is known as far-eastern blot.
Plate production
TLC plates are usually commercially available, with standard particle size ranges to improve reproducibility. They are prepared by mixing the adsorbent, such as silica gel, with a small amount of inert binder like calcium sulfate (gypsum) and water. This mixture is spread as a thick slurry on an unreactive carrier sheet, usually glass, thick aluminum foil, or plastic. The resultant plate is dried and activated by heating in an oven for thirty minutes at 110 °C. The thickness of the absorbent layer is typically around 0.1–0.25 mm for analytical purposes and around 0.5–2.0 mm for preparative TLC. Other adsorbent coatings include aluminium oxide (alumina), or cellulose.
Applications
Reaction monitoring and characterization
TLC is a useful tool for reaction monitoring. For this, the plate normally contains a spot of starting material, a spot from the reaction mixture, and a co-spot (or cross-spot) containing both. The analysis will show if the starting material disappeared and if any new products appeared. This provides a quick and easy way to estimate how far a reaction has proceeded. In one study, TLC has been applied in the screening of organic reactions. The researchers react an alcohol and a catalyst directly in the co-spot of a TLC plate before developing it. This provides quick and easy small-scale testing of different reagents.
Compound characterization with TLC is also possible and is similar to reaction monitoring. However, rather than spotting with starting material and reaction mixture, it is with an unknown and a known compound. They may be the same compound if both spots have the same Rf and look the same under the chosen visualization method. However, co-elution complicates both reaction monitoring and characterization. This is because different compounds will move to the same spot on the plate. In such cases, different solvent mixtures may provide better separation.
Purity and purification
TLC helps show the purity of a sample. A pure sample should only contain one spot by TLC. TLC is also useful for small-scale purification. Because the separated compounds will be on different areas of the plate, a scientist can scrape off the stationary phase particles containing the desired compound and dissolve them into an appropriate solvent. Once all the compound dissolves in the solvent, they filter out the silica particles, then evaporate the solvent to isolate the product. Big preparative TLC plates with thick silica gel coatings can separate more than 100 mg of material.
For larger-scale purification and isolation, TLC is useful to quickly test solvent mixtures before running flash column chromatography on a large batch of impure material. As a rule of thumb, a compound elutes from a column after roughly 1/Rf column volumes of solvent have been collected. The eluent from flash column chromatography gets collected across several containers (for example, test tubes) called fractions. TLC helps show which fractions contain impurities and which contain pure compound.
Furthermore, two-dimensional TLC can help check if a compound is stable on a particular stationary phase. This test requires two runs on a square-shaped TLC plate. The plate is rotated by 90º before the second run. If the target compound appears on the diagonal of the square, it is stable on the chosen stationary phase. Otherwise, it is decomposing on the plate. If this is the case, an alternative stationary phase may prevent this decomposition.
TLC is also an analytical method for the direct separation of enantiomers and the control of enantiomeric purity, e.g. active pharmaceutical ingredients (APIs) that are chiral.
See also
Column chromatography
HPTLC
Radial chromatography
Chiral thin-layer chromatography
References
Bibliography
F. Geiss (1987): Fundamentals of thin layer chromatography planar chromatography, Heidelberg, Hüthig,
Justus G. Kirchner (1978): Thin-layer chromatography, 2nd edition, Wiley
Joseph Sherma, Bernard Fried (1991): Handbook of Thin-Layer Chromatography (= Chromatographic Science. Bd. 55). Marcel Dekker, New York NY, .
Elke Hahn-Deinstorp: Applied Thin-Layer Chromatography. Best Practice and Avoidance of Mistakes. Wiley-VCH, Weinheim u. a. 2000,
Chromatography | Thin-layer chromatography | [
"Chemistry"
] | 2,416 | [
"Chromatography",
"Separation processes"
] |
2,152,230 | https://en.wikipedia.org/wiki/Reverse%20bungee | The reverse bungee (also known as catapult bungee, slingshot, or ejection seat) is a modern type of fairground ride.
The ride consists of two telescopic gantry towers mounted on a platform, feeding two elastic ropes down to a two-person passenger car constructed from an open sphere of tubular steel. The passenger car is secured to the platform with an electro-magnetic latch as the elastic ropes are stretched. When the electromagnet is turned off, the passenger car is catapulted vertically with a g-force of 3–5, reaching an altitude of between and .
Installations
Safety issues
In August 1998, Jérôme Charron died in a reverse bungee ride accident at the Ottawa Exhibition in Ottawa, Ontario, Canada when he was hurled 40 m into the air before plummeting to his death as his harness had detached. In February 2000, the firm responsible for the ride, Anderson Ventures, was fined $145,000 for this incident.
References
External links
Amusement Ride Extravaganza
Amusement rides
Bungee jumping
Articles containing video clips | Reverse bungee | [
"Physics",
"Technology"
] | 220 | [
"Physical systems",
"Machines",
"Amusement rides"
] |
2,152,318 | https://en.wikipedia.org/wiki/Armature%20%28electrical%29 | In electrical engineering, the armature is the winding (or set of windings) of an electric machine which carries alternating current. The armature windings conduct AC even on DC machines, due to the commutator action (which periodically reverses current direction) or due to electronic commutation, as in brushless DC motors. The armature can be on either the rotor (rotating part) or the stator (stationary part), depending on the type of electric machine.
Shapes of armature used in motors include double-T and triple-T armatures.
The armature windings interact with the magnetic field (magnetic flux) in the air-gap; the magnetic field is generated either by permanent magnets, or electromagnets formed by a conducting coil.
The armature must carry current, so it is always a conductor or a conductive coil, oriented normal to both the field and to the direction of motion, torque (rotating machine), or force (linear machine). The armature's role is twofold. The first is to carry current across the field, thus creating shaft torque in a rotating machine or force in a linear machine. The second role is to generate an electromotive force (EMF).
In the armature, an electromotive force is created by the relative motion of the armature and the field. When the machine or motor is used as a motor, this EMF opposes the armature current, and the armature converts electrical power to mechanical power in the form of torque, and transfers it via the shaft. When the machine is used as a generator, the armature EMF drives the armature current, and the shaft's movement is converted to electrical power. In an induction generator, generated power is drawn from the stator.
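For an idealized DC machine, the two roles described above are linked by a single machine constant k: the back-EMF is k times the shaft speed and the torque is k times the armature current, so the electrical power converted equals the mechanical power developed. The sketch below evaluates this with assumed example values for k, speed and current.

```c
/* Hedged sketch of an idealized DC machine; all numbers are illustrative. */
#include <stdio.h>

int main(void)
{
    double k = 0.05;          /* machine constant, V*s/rad (= N*m/A in SI)   */
    double omega = 300.0;     /* shaft speed, rad/s                          */
    double i_armature = 12.0; /* armature current, A                         */

    double emf    = k * omega;        /* generated (back-)EMF, V             */
    double torque = k * i_armature;   /* electromagnetic torque, N*m         */

    double p_elec = emf * i_armature; /* electrical power converted, W       */
    double p_mech = torque * omega;   /* mechanical power developed, W       */

    printf("EMF = %.1f V, torque = %.2f N*m\n", emf, torque);
    printf("converted power: %.1f W (electrical) = %.1f W (mechanical)\n",
           p_elec, p_mech);
    return 0;
}
```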
A growler is used to check the armature for short and open circuits and leakages to ground.
Terminology
The word armature was first used in its electrical sense, i.e. as the keeper of a magnet, in the mid-19th century.
The parts of an alternator or related equipment can be expressed in either mechanical terms or electrical terms. Although distinctly separate, these two sets of terminology are frequently used interchangeably or in combinations that include one mechanical term and one electrical term. This may cause confusion when working with compound machines like brushless alternators, or in conversation among people who are accustomed to work with differently configured machinery.
In most generators, the field magnet is rotating, and is part of the rotor, while the armature is stationary, and is part of the stator. Both motors and generators can be built either with a stationary armature and a rotating field or a rotating armature and a stationary field. The pole piece of a permanent magnet or electromagnet and the moving, iron part of a solenoid, especially if the latter acts as a switch or relay, may also be referred to as armatures.
Armature reaction in a DC machine
In a DC machine, two sources of magnetic fluxes are present; 'armature flux' and 'main field flux'. The effect of armature flux on the main field flux is called "armature reaction". The armature reaction changes the distribution of the magnetic field, which affects the operation of the machine. The effects of the armature flux can be offset by adding a compensating winding to the main poles, or in some machines adding intermediate magnetic poles, connected in the armature circuit.
Armature reaction is essential in amplidyne rotating amplifiers.
Armature reaction drop is the effect of a magnetic field on the distribution of the flux under main poles of a generator.
Since an armature is wound with coils of wire, a magnetic field is set up in the armature whenever a current flows in the coils. This field is at right angles to the generator field and is called cross magnetization of the armature. The effect of the armature field is to distort the generator field and shift the neutral plane. The neutral plane is the position where the armature windings move parallel to the magnetic flux lines, which is why an axis lying in this plane is called the magnetic neutral axis (MNA). This effect is known as armature reaction and is proportional to the current flowing in the armature coils.
The geometrical neutral axis (GNA) is the axis that bisects the angle between the centre line of adjacent poles. The magnetic neutral axis (MNA) is the axis drawn perpendicular to the mean direction of the flux passing through the centre of the armature. No e.m.f. is produced in the armature conductors along this axis because then they cut no flux. When no current is there in the armature conductors, the MNA coincides with GNA.
The brushes of a generator must be set in the neutral plane; that is, they must contact segments of the commutator that are connected to armature coils having no induced emf. If the brushes were contacting commutator segments outside the neutral plane, they would short-circuit "live" coils and cause arcing and loss of power.
Without armature reaction, the magnetic neutral axis (MNA) would coincide with geometrical neutral axis (GNA). Armature reaction causes the neutral plane to shift in the direction of rotation, and if the brushes are in the neutral plane at no load, that is, when no armature current is flowing, they will not be in the neutral plane when armature current is flowing. For this reason it is desirable to incorporate a corrective system into the generator design.
There are two principal methods by which the effect of armature reaction is overcome. The first method is to shift the position of the brushes so that they are in the neutral plane when the generator is producing its normal load current. In the other method, special field poles, called interpoles, are installed in the generator to counteract the effect of armature reaction.
The brush-setting method is satisfactory in installations in which the generator operates under a fairly constant load. If the load varies to a marked degree, the neutral plane will shift proportionately, and the brushes will not be in the correct position at all times. The brush-setting method is the most common means of correcting for armature reaction in small generators (those producing approximately 1,000 W or less). Larger generators require the use of interpoles.
Winding circuits
Coils of the winding are distributed over the entire surface of the air gap, which may be the rotor or the stator of the machine. In a "lap" winding, there are as many current paths between the brush (or line) connections as there are poles in the field winding. In a "wave" winding, there are only two paths, and there are as many coils in series as half the number of poles. So, for a given rating of machine, a lap winding is more suitable for large currents at low voltages, while a wave winding suits higher voltages and lower currents.
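A small numeric illustration of the parallel-path difference, using example figures only: for a simplex 8-pole machine carrying 400 A of armature current, a lap winding splits the current over 8 paths while a wave winding splits it over only 2.

```c
/* Illustrative comparison of parallel paths in simplex lap and wave windings. */
#include <stdio.h>

int main(void)
{
    int poles = 8;
    double armature_current = 400.0;   /* total machine current, A            */

    int paths_lap  = poles;            /* simplex lap: paths equal pole count */
    int paths_wave = 2;                /* simplex wave: always two paths      */

    printf("lap : %d paths, %.0f A per conductor\n",
           paths_lap,  armature_current / paths_lap);   /*  50 A */
    printf("wave: %d paths, %.0f A per conductor\n",
           paths_wave, armature_current / paths_wave);  /* 200 A */
    return 0;
}
```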
Windings are held in slots in the rotor or armature covered by stator magnets. The exact distribution of the windings and selection of the number of slots per pole of the field greatly influences the design of the machine and its performance, affecting such factors as commutation in a DC machine or the waveform of an AC machine.
Winding materials
Armature wiring is made from copper or aluminum. Copper armature wiring enhances electrical efficiencies due to its higher electrical conductivity. Aluminum armature wiring is lighter and less expensive than copper.
See also
Balancing machine
Commutator
References
External links
Example Diagram of an Armature Coil and data used to specify armature coil parameters
How to Check a Motor Armature for Damaged Windings
Electromagnetic components
Electric motors | Armature (electrical) | [
"Technology",
"Engineering"
] | 1,634 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
2,152,371 | https://en.wikipedia.org/wiki/Methanol%20%28data%20page%29 | This page provides supplementary chemical data on methanol.
Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Safety Data Sheet (SDS) for this chemical from a reliable source such as SIRI, and follow its directions. SDS is available at MSDS, J.T. Baker and Loba Chemie
Structure and properties
Thermodynamic properties
Spectral data
Vapor pressure of liquid
Here is a similar formula from the 67th edition of the CRC handbook. Note that the form of this formula as given is a fit to the Clausius–Clapeyron equation, which is a good theoretical starting point for calculating saturation vapor pressures:
log10(P) = −(0.05223)a/T + b, where P is in mmHg, T is in kelvins, a = 38324, and b = 8.8017.
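As a quick check (a sketch, assuming 25 °C and the constants quoted above), the fit can be evaluated directly; the result comes out near the tabulated room-temperature vapor pressure of methanol (roughly 127 mmHg).

```c
/* Evaluating the CRC-style fit quoted above for methanol at 25 degrees C. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 38324.0, b = 8.8017;    /* constants from the fit above        */
    double T = 298.15;                 /* temperature in kelvins (25 C)       */

    double log10P = -(0.05223 * a) / T + b;   /* P in mmHg                    */
    double P = pow(10.0, log10P);

    printf("methanol vapor pressure at %.2f K: %.0f mmHg\n", T, P);
    /* Prints roughly 1.2e2 mmHg, close to tabulated values near 127 mmHg.    */
    return 0;
}
```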
Properties of aqueous methanol solutions
Data obtained from Lange's Handbook of Chemistry, 10th ed. and CRC Handbook of Chemistry and Physics 44th ed. The annotation, d a°C/b°C, indicates density of solution at temperature a divided by density of pure water at temperature b known as specific gravity. When temperature b is 4 °C, density of water is 0.999972 g/mL.
Distillation data
References
(sample table of physical properties)
External links
Chemical data pages
Methanol
Chemical data pages cleanup | Methanol (data page) | [
"Chemistry"
] | 306 | [
"Chemical data pages",
"nan"
] |
2,152,426 | https://en.wikipedia.org/wiki/Cyclone%20furnace | A cyclone furnace is a type of coal combustor commonly used in large industrial boilers.
Background
Developed in 1942 by Babcock & Wilcox to take advantage of coal grades not suitable for pulverized coal combustion, cyclone furnaces feed coal in a spiral manner into a combustion chamber for maximum combustion efficiency.
During coal combustion in a furnace, volatile components burn without much difficulty. Fuel carbon “char” particles (heavier, less volatile coal constituents) require much higher temperatures and a continuing supply of oxygen. Cyclone furnaces are able to provide a thorough mixing of coal particles and air with sufficient turbulence to provide fresh air to surfaces of the coal particles.
Cyclone furnaces were originally designed to take advantage of four things:
Lower fuel preparation time and costs
Smaller more compact furnaces
Less fly ash and convective pass slagging
Flexibility in fuel types
Operation
A cyclone furnace consists of a horizontal cylindrical barrel attached through the side of a boiler furnace. The cyclone barrel is built from water-cooled, tangentially oriented tubes. Inside the cyclone barrel are short, densely spaced pin studs welded to the outside of the tubes. The studs are coated with a refractory material, usually silica or aluminium based, that allows the cyclone to operate at a high enough temperature to keep the slag in a molten state and allow removal through the tap.
Crushed coal and a small amount of primary air enter from the front of the cyclone into the burner. In the main cyclone burner, secondary air is introduced tangentially, causing a circulating gas flow pattern. The products, flue gas and un-combusted fuel, then leave the burner and pass over the boiler tubes. Tertiary air is then released further downstream to complete combustion of the remaining fuel, greatly reducing NOx formation. A layer of molten slag coats the burner and flows through traps at the bottom of the burners, reducing the amount of slag that would otherwise form on the boiler tubes.
Cyclone Furnaces can handle a wide range of fuels. Low volatile bituminous coals, lignite coal, mineral rich anthracitic coal, wood chips, petroleum coke, and old tires can and have all been used in cyclones.
The crushed coal is fed into the cyclone burner and fired at high rates of heat release; combustion of the coal is completed before the hot gases enter the boiler furnace. The coal is burned by the centrifugal action imparted by the primary air, which enters tangentially, and by the secondary air, which also enters tangentially at the top at high speed, while tertiary air is admitted at the centre.
The whirling action of the coal and air generates a large amount of heat (1500–1600 °C) that coats the surface of the cyclone and transforms the ash into molten slag. The molten slag is drained from the boiler furnace through a slag tap.
References
Power station technology
Boilers
Energy conversion
Industrial furnaces | Cyclone furnace | [
"Chemistry"
] | 608 | [
"Metallurgical processes",
"Boilers",
"Industrial furnaces",
"Pressure vessels"
] |
2,152,465 | https://en.wikipedia.org/wiki/Antiisomorphism | In category theory, a branch of mathematics, an antiisomorphism (or anti-isomorphism) between structured sets A and B is an isomorphism from A to the opposite of B (or equivalently from the opposite of A to B). If there exists an antiisomorphism between two structures, they are said to be antiisomorphic.
Intuitively, to say that two mathematical structures are antiisomorphic is to say that they are basically opposites of one another.
The concept is particularly useful in an algebraic setting, as, for instance, when applied to rings.
Simple example
Let A be the binary relation (or directed graph) consisting of elements {1,2,3} and binary relation defined as follows:
Let B be the binary relation set consisting of elements {a,b,c} and binary relation defined as follows:
Note that the opposite of B (denoted Bop) is the same set of elements with the opposite binary relation (that is, reverse all the arcs of the directed graph):
If we replace a, b, and c with 1, 2, and 3 respectively, we see that each rule in Bop is the same as some rule in A. That is, we can define an isomorphism φ from A to Bop by φ(1) = a, φ(2) = b, and φ(3) = c. This φ is then an antiisomorphism between A and B.
Ring anti-isomorphisms
Specializing the general language of category theory to the algebraic topic of rings, we have:
Let R and S be rings and f: R → S be a bijection. Then f is a ring anti-isomorphism if f(x + y) = f(x) + f(y) and f(xy) = f(y)f(x) for all x, y in R.
If R = S then f is a ring anti-automorphism.
An example of a ring anti-automorphism is given by the conjugate mapping of quaternions, which sends a + bi + cj + dk to a − bi − cj − dk: the conjugate of a product equals the product of the conjugates taken in reverse order.
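The sketch below checks this property numerically for two arbitrarily chosen quaternions: conjugating a product gives the same result as multiplying the conjugates in the reverse order.

```c
/* Numerical check (illustrative values) that quaternion conjugation reverses
 * products: conj(p*q) == conj(q)*conj(p), the defining anti-automorphism law. */
#include <stdio.h>

struct quat { double w, x, y, z; };

static struct quat qmul(struct quat a, struct quat b)   /* Hamilton product */
{
    struct quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

static struct quat qconj(struct quat a)
{
    struct quat r = { a.w, -a.x, -a.y, -a.z };
    return r;
}

static void show(const char *label, struct quat q)
{
    printf("%s = %g + %gi + %gj + %gk\n", label, q.w, q.x, q.y, q.z);
}

int main(void)
{
    struct quat p = { 1, 2, -1, 3 }, q = { 0.5, -2, 4, 1 };

    show("conj(p*q)      ", qconj(qmul(p, q)));
    show("conj(q)*conj(p)", qmul(qconj(q), qconj(p)));
    /* The two printed quaternions are identical; the reversal of order matters
     * because quaternion multiplication is not commutative. */
    return 0;
}
```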
Notes
References
Morphisms
Ring theory | Antiisomorphism | [
"Mathematics"
] | 362 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Morphisms"
] |
2,152,618 | https://en.wikipedia.org/wiki/COSILAB | COSILAB is a software tool for solving complex chemical kinetics problems. It is used worldwide in research and industry, in particular in automotive, combustion, and chemical processing applications.
Problems to be solved by COSILAB may involve thousands of reactions amongst hundreds of species for practically any mixture composition, pressure and temperature. Its computational capabilities allow for a complex chemical reaction to be studied in detail, including intermediate compounds, trace compounds and pollutants.
Whilst complex chemistry is accounted for, chemical reactor or combustion geometries that can be handled by COSILAB are relatively simple. For the purpose of "real-life" simulations this limitation can be overcome, however, by using a library of pre-compiled subroutines and functions that one can link to his or her own code written in Fortran, the C programming language or C++. In this way, it is possible to develop fully two-dimensional or three-dimensional CFD (computational fluid dynamics) codes that are able to capture fairly realistic geometries.
The development of codes like COSILAB is motivated by a worldwide attempt to keep the environment clean and to save—or at least make best use of—the continuously diminishing fossil fuel resources.
External links
United States Environmental Protection Agency on NOX
World Energy Council
Softpredict's COSILAB page,
Combustion
Computational chemistry software | COSILAB | [
"Chemistry"
] | 277 | [
"Computational chemistry software",
"Chemistry software",
"Theoretical chemistry stubs",
"Computational chemistry stubs",
"Combustion",
"Computational chemistry",
"Chemical reaction stubs",
"Physical chemistry stubs",
"Chemical process stubs"
] |
2,152,676 | https://en.wikipedia.org/wiki/Password-authenticated%20key%20agreement | In cryptography, a password-authenticated key agreement (PAK) method is an interactive method for two or more parties to establish cryptographic keys based on one or more parties' knowledge of a password.
An important property is that an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password without further interactions with the parties for each (few) guesses. This means that strong security can be obtained using weak passwords.
Types
Password-authenticated key agreement generally encompasses methods such as:
Balanced password-authenticated key exchange
Augmented password-authenticated key exchange
Password-authenticated key retrieval
Multi-server methods
Multi-party methods
In the most stringent password-only security models, there is no requirement for the user of the method to remember any secret or public data other than the password.
Password-authenticated key exchange (PAKE) is a method in which two or more parties, based only on their knowledge of a shared password, establish a cryptographic key using an exchange of messages, such that an unauthorized party (one who controls the communication channel but does not possess the password) cannot participate in the method and is constrained as much as possible from brute-force guessing the password. (The optimal case yields exactly one password guess per protocol run.) Two forms of PAKE are balanced and augmented methods.
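To make the message flow concrete, here is a toy sketch in the style of SPAKE2, one of the balanced schemes listed below. It uses a tiny integer group, hard-coded secrets, arbitrarily chosen mask elements and no hashing or key confirmation, so it only illustrates the structure of a balanced PAKE exchange and must not be mistaken for a secure implementation of any standardized protocol.

```c
/* Toy SPAKE2-style balanced PAKE flow over a small multiplicative group mod p.
 * Illustration only: parameters are far too small to be secure and secrets are
 * hard-coded. Do not use for real cryptography. */
#include <stdio.h>
#include <stdint.h>

static const uint64_t P = 2147483647ULL;   /* 2^31 - 1, a prime               */
static const uint64_t G = 7;               /* base element (primitive root)   */
static const uint64_t M = 5, N = 11;       /* fixed public "mask" elements    */

static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

static uint64_t modinv(uint64_t a, uint64_t p)   /* Fermat: a^(p-2) mod p */
{
    return modpow(a, p - 2, p);
}

int main(void)
{
    uint64_t w = 123456;      /* scalar derived from the shared password       */
    uint64_t x = 987654321;   /* Alice's ephemeral secret                      */
    uint64_t y = 192837465;   /* Bob's ephemeral secret                        */

    /* Messages actually sent over the wire: */
    uint64_t X = (modpow(G, x, P) * modpow(M, w, P)) % P;   /* Alice -> Bob    */
    uint64_t Y = (modpow(G, y, P) * modpow(N, w, P)) % P;   /* Bob -> Alice    */

    /* Each side strips the other's password mask, then exponentiates: */
    uint64_t K_alice = modpow((Y * modinv(modpow(N, w, P), P)) % P, x, P);
    uint64_t K_bob   = modpow((X * modinv(modpow(M, w, P), P)) % P, y, P);

    printf("Alice's key material: %llu\n", (unsigned long long) K_alice);
    printf("Bob's key material:   %llu\n", (unsigned long long) K_bob);
    /* Both values equal g^(x*y) mod p. In the real protocol, the masking by
     * M^w and N^w is what prevents an eavesdropper from testing password
     * guesses offline against the observed messages. */
    return 0;
}
```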
Balanced PAKE
Balanced PAKE assumes the two parties in either a client-client or client-server situation use the same secret password to negotiate and authenticate a shared key. Examples of these are:
Encrypted Key Exchange (EKE)
PAK and PPK
SPEKE (Simple password exponential key exchange)
Dragonfly – IEEE Std 802.11-2012, RFC 5931, RFC 6617
CPace
SPAKE1 and SPAKE2
SESPAKE
J-PAKE (Password Authenticated Key Exchange by Juggling) – ISO/IEC 11770-4 (2017), RFC 8236
ITU-T Recommendation X.1035
"Advanced modular handshake for key agreement and optional authentication"
Augmented PAKE
Augmented PAKE is a variation applicable to client/server scenarios, in which the server does not store password-equivalent data. This means that an attacker that stole the server data still cannot masquerade as the client unless they first perform a brute force search for the password.
Some augmented PAKE systems use an oblivious pseudorandom function to mix the user's secret password with the server's secret salt value, so that the user never learns the server's secret salt value and the server never learns the user's password (or password-equivalent value) or the final key.
Examples include:
AMP
Augmented-EKE
B-SPEKE
PAK-X
SRP
AugPAKE
OPAQUE
AuCPace
SPAKE2+
"Advanced modular handshake for key agreement and optional authentication"
Key retrieval
Password-authenticated key retrieval is a process in which a client obtains a static key in a password-based negotiation with a server that knows data associated with the password, such as the Ford and Kaliski methods. In the most stringent setting, one party uses only a password in conjunction with N (two or more) servers to retrieve a static key. This is completed in a way that protects the password (and key) even if N − 1 of the servers are completely compromised.
Brief history
The first successful password-authenticated key agreement methods were Encrypted Key Exchange methods described by Steven M. Bellovin and Michael Merritt in 1992. Although several of the first methods were flawed, the surviving and enhanced forms of EKE effectively amplify a shared password into a shared key, which can then be used for encryption and/or message authentication.
The first provably-secure PAKE protocols were given in work by M. Bellare, D. Pointcheval, and P. Rogaway (Eurocrypt 2000) and V. Boyko, P. MacKenzie, and S. Patel (Eurocrypt 2000). These protocols were proven secure in the so-called random oracle model (or even stronger variants), and the first protocols proven secure under standard assumptions were those of O. Goldreich and Y. Lindell (Crypto 2001) which serves as a plausibility proof but is not efficient, and J. Katz, R. Ostrovsky, and M. Yung (Eurocrypt 2001) which is practical.
The first password-authenticated key retrieval methods were described by Ford and Kaliski in 2000.
Following the original work of M. Bellare, D. Pointcheval, and P. Rogaway, a considerable number of alternative secure PAKE protocols, variations, and security proofs have been proposed in this growing class of password-authenticated key agreement methods. Current standards for these methods include IETF RFC 2945, RFC 5054, RFC 5931, RFC 5998, RFC 6124, RFC 6617, RFC 6628 and RFC 6631, IEEE Std 1363.2-2008, ITU-T X.1035 and ISO-IEC 11770-4:2006.
PAKE selection process for use in internet protocols
At the request of the Internet Engineering Task Force (IETF), a PAKE selection process was carried out in 2018 and 2019 by the IRTF Crypto Forum Research Group (CFRG).
The selection process has been carried out in several rounds.
In the final round in 2019 four finalists AuCPace, OPAQUE (augmented cases) and CPace, SPAKE2 (balanced PAKE) prevailed. As a result of the CFRG selection process, two winner protocols were declared as "recommended by the CFRG for usage in IETF protocols": CPace and OPAQUE.
See also
Cryptographic protocol
IEEE P1363
Simultaneous Authentication of Equals
Outline of cryptography
Zero-knowledge password proof
References
Further reading
ISO/IEC 11770-4:2006 Information technology—Security techniques—Key management—Part 4: Mechanisms based on weak secrets.
External links
IEEE P1363 Working Group
IEEE Std 1363.2-2008: IEEE Standard Specifications for Password-Based Public-Key Cryptographic Techniques
David Jablon's links for password-based cryptography
Simple Password-Based Encrypted Key Exchange Protocols Abdalla et al 2005
Manual and interactive exchange of complex passwords
Cryptography
Password authentication
Authentication protocols
Key-agreement protocols | Password-authenticated key agreement | [
"Mathematics",
"Engineering"
] | 1,313 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
2,152,687 | https://en.wikipedia.org/wiki/Warren%20Woods%20State%20Park | Warren Woods State Park is a nature preserve and public recreation area in Berrien County, Michigan, near the village of Three Oaks. The state park is leased by private owners to the state of Michigan.
History
The woods are named for Edward Kirk Warren (1847-1919), the inventor of the featherbone corset (which replaced the whalebone in corsets with turkey feathers and secured his fortune). Starting in 1879, Warren bought of the woods and of the dunes, setting them aside for preservation.
Natural features
The park is home to the last climax beech-maple forest in Michigan, which occupies . The virgin North American beech (Fagus grandifolia) and sugar maple (Acer saccharum) forest has specimens tall and with girths greater than in diameter. The remaining area in the park consists of floodplain oak-hickory forest. Because of the size and age of the trees, and the rarity of the ecosystem, the area has been designated since 1967 as a National Natural Landmark. Many of the beeches, with their smooth, thin, silver-grey bark, are heavily scarred by hand-carved graffiti, some of it decades old; however, the practice seems to have fallen out of favor in recent years.
Activities and amenities
The park has few facilities and is administered by nearby Warren Dunes State Park. Most visitors come to walk the of hiking trails, which run from the northern boundary on Warren Woods Road to a parking area accessed from the southern boundary on Elm Valley Road. In the middle of the park the trail crosses the Galien River on a pedestrian bridge, where there is an interpretive station. The park contains the Warren Woods Ecological Field Station owned and operated by the University of Chicago. Birders cite the park as a particularly good place to spot pileated woodpeckers. Other visitors come to picnic. The park is the subject of ecological studies because, in combination with the ecosystems preserved in nearby Warren Dunes State Park, it completes a progression of ecological seres.
References
External links
Warren Woods State Park Michigan Department of Natural Resources
Warren Woods State Park Map Michigan Department of Natural Resources
State parks of Michigan
Protected areas of Berrien County, Michigan
National Natural Landmarks in Michigan
Protected areas established in 1930
1930 establishments in Michigan
Old-growth forests | Warren Woods State Park | [
"Biology"
] | 461 | [
"Old-growth forests",
"Ecosystems"
] |
2,152,689 | https://en.wikipedia.org/wiki/Boardroom%20coup | A boardroom coup is a sudden and often unexpected takeover or transfer of power of an organisation or company. The coup is usually performed by an individual or a small group usually from within the corporation in order to seize power.
A boardroom coup draws upon the ideas of a coup d'état in the same way that a corrupt, dysfunctional or unpopular group is pushed out of power.
Examples
Paramount and DuMont
In 1940, Paramount Pictures took control of DuMont after failed attempts to work with other established companies in its field, including CBS, RCA and AT&T. Preceding these failures Paramount decided to obtain stocks in another television company, DuMont. After their purchase Allen DuMont, the owner of DuMont television, began to see his powers within the company flagging as Paramount now owned a large proportion of his company. Paramount with its newfound power proceeded to appoint its own directors amongst DuMont's. The company established their coup by giving crucial financial positions to those they had hired, which stopped DuMont from having any financial input into the company and effectively becoming its owners.
Anheuser–Busch
After the death of his youngest daughter in a freak car accident late in 1974, the aging Anheuser–Busch CEO Gussie Busch, who had already become unusually wary of spending company money on new projects, was so consumed with grief that he was impossible to work with. In May 1975, his oldest son, August Busch III, put into action a plan he had been working on with executives close to him for several years, and persuaded the board to replace his father with him. Busch was allowed to retain some of his company perks and continue running the St. Louis Cardinals, a baseball team that the company owned, but only if he represented his departure from the company leadership as voluntary. Only after his death in 1989 was it made public otherwise.
Apple (1985)
In 1985, Steve Jobs was stripped of his title as Apple's chief visionary after a boardroom meeting in which Apple's representatives sided with then-CEO John Sculley. Despite the success of a new advertising campaign released in 1984, there was a significant backlog of unsold stock, worth millions in revenue. Demand for the products dropped significantly between 1984 and 1985, which led to rising tensions between the CEO and Jobs. In an attempt to raise confidence in the company and in his own ability, Jobs launched a new computer, the Turbo Mac. The new machine featured a multitude of differences from previous Mac models, including a faster operating system, customised internal parts exclusive to the Turbo, and the ability to store data internally.
The project, however, was unsuccessful due to problems in the physical technology of the system. After that failure, with no sign of improvement in profits at the corporation, a boardroom meeting was set up to decide Apple's and, unknowingly to him, Jobs's future as well. No concrete solutions were made, and a few months later, in 1985, Jobs and Sculley had a showdown. Sculley ousted his rival, diminished his former title, and forced Jobs to leave the company.
Apple (1997)
After Jobs's removal, Apple had been faltering in its market position once again and was being overtaken by the other technological powerhouses. Sculley, in an attempt to save the company, began a search for a buyer. A multitude of companies were offered the corporation, but none accepted. In 1996, Gil Amelio took over the helm at Apple, and with the bad publicity the corporation was receiving, Amelio struggled in reinstating any hope for the future. He was fighting a losing battle for Apple and was considered by many to be worsening its struggle for survival. In June 1997 a meeting was set up with senior board members of Apple. The executives intended to discuss, exactly as had been the case nearly 10 years earlier, the state of the company and of Amelio. However, this time a clear decision was reached and a couple of weeks later Woolard, a member of the Board of Directors, delivered the news to Amelio by telephone that he would no longer be the head of Apple. Jobs, who had been busy after his resignation with his new company, NeXT Inc., was reinstated as interim CEO after Apple bought his business in 1996 and was formally restored to the board in July 1997, just days after Amelio was ousted.
General Motors
In 1990, Robert Stempel was appointed as CEO of General Motors, a company that he had worked for since 1962. He had started in its Oldsmobile Division and gradually climbed the various ranks of the company until he was given the position. An economic recession hit the globe in the late 1980s and into the early 1990s which significantly impacted on GM and the automotive industry in general. The pressure to keep the company profitable came upon Stempel. He, alongside the board of directors, looked for ways to cut costs in every division. Particularly, they looked to restructure the makeup of the company so that it would bear the brunt of the recession less forcefully than his rivals. However, in 1992 the board took action against Stempel. With a vote the managing directors, as well as external directors, fired him. They believed that he was responsible for the level of GM's losses, and they also accused him of doing little to return the company to the success they expected.
Rangers Football Club
In May 2013, an attempt was made to threaten the position of the Rangers Football Club's chairman, Malcolm Murray. The club had gone into liquidation a year before but after an agreement was reached and the club was repurchased, their prospects began to improve and confidence amongst shareholders was raised. Despite a prosperous looking future, a group of these shareholders decided in May to have Murray removed with other senior members of the board. Fans and investors alike were uncertain of the exact financial state of the club since little information had been released since their liquidation. Therefore, tensions rose between the board and shareholders, despite directors trying to assure investors that the club was financially secure. The shareholders, who owned 6.1% of the business, then successfully managed to oust Murray of his position and employ Walter Smith, a previous manager of the club, who was appointed in his place.
Attempt
Blackberry
In 2012 an attempt was made by Robin Chan for a complete overhaul of Blackberry in an attempt to save it. Chan was not an employee of Blackberry but instead a technological entrepreneur who set up his own business, the XPD Media Inc group. After Blackberry's share prices plummeted to $16 in 2012, Chan began to plan a takeover of the company with a team of specialists in finance and technology. His aim was to save the struggling company from having to be sold off or going bankrupt. He created a slideshow, which he named Project BBX, to present to the board. Within the slides there were details of Blackberry's current losses in a number of graphs and how he envisaged, with his team, to turn the company around. Some of the changes included completely overhauling Blackberry's operating system and ensuring the phone's security system was the best available to buy. In the end, the board's opposition to major changes, coupled with his lack of funding due to the size of his project, meant that Chan was never able to succeed in Project BBX, and his boardroom coup failed.
References
See also
Coup d'état
Takeover
Buyout
Workplace politics
Board of directors
Corporate governance
Organizational behavior
Organizational conflict | Boardroom coup | [
"Biology"
] | 1,517 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
2,152,845 | https://en.wikipedia.org/wiki/Apterygota | The name Apterygota is sometimes applied to a former subclass of small, agile insects, distinguished from other insects by their lack of wings in the present and in their evolutionary history; notable examples are the silverfish, the firebrat, and the jumping bristletails. Their first known occurrence in the fossil record is during the Devonian period, 417–354 million years ago. The group Apterygota is not a clade; it is paraphyletic, and not recognized in modern classification schemes. As defined, the group contains two separate clades of wingless insects: Archaeognatha comprises jumping bristletails, while Zygentoma comprises silverfish and firebrats. The Zygentoma are in the clade Dicondylia with winged insects, a clade that includes all other insects, while Archaeognatha is sister to this lineage.
The nymphs (younger stages) go through little or even no metamorphosis, hence they resemble the adult specimens (ametabolism).
Currently, no species are listed as being at conservation risk.
Characteristics
The primary characteristic of the apterygotes is they are primitively wingless. While some other insects, such as fleas, also lack wings, they nonetheless descended from winged insects but have lost them during the course of evolution. By contrast, the apterygotes are a primitive group of insects that diverged from other ancient orders before wings evolved.
Apterygotes, however, have the demonstrated capacity for directed, aerial gliding descent from heights. It has been suggested by researchers that this evolved gliding mechanism in apterygotes might have provided an evolutionary basis from which winged insects would later evolve the capability for powered flight.
Apterygotes also have a number of other primitive features not shared with other insects. Males deposit sperm packages, or spermatophores, rather than fertilizing the female internally. When hatched, the young closely resemble adults and do not undergo any significant metamorphosis, and lack even an identifiable nymphal stage. They continue to molt throughout life, undergoing multiple instars after reaching sexual maturity, whereas all other insects undergo only a single instar when sexually mature.
Apterygotes possess small unsegmented appendages, referred to as "styli", on some of their abdominal segments, but play no part in locomotion. They also have long, paired abdominal cerci and a single median, tail-like caudal filament, or telson.
While all winged insects (Pterygota) have a closed amniotic cavity during embryonic development, this varies within Apterygota. In Archaeognatha, species like Petrobius brevistylis and Pedetontus unimaculatus have a wide open cavity, whereas Trigoniophthalmus alternatus does not have an amniotic cavity at all. In Zygentoma, the cavity is open through a narrow canal called the amniopore in the species Thermobia domestica and Lepisma saccharina, but in other species like Ctenolepisma lineata it is completely closed.
History of the concept
The composition and classification of Apterygota changed over time. By the mid-20th century, the subclass included four orders (Collembola, Protura, Diplura, and Thysanura). With the advent of a more rigorous cladistic methodology, the subclass was proven paraphyletic. While the first three groups formed a monophyletic group, the Entognatha, distinguished by having mouthparts submerged in a pocket formed by the lateral and ventral parts of the head capsule, the Thysanura (Zygentoma plus Archaeognatha) appeared to be more closely related to winged insects. The most notable synapomorphy proving the monophyly of Thysanura+Pterygota is the absence of intrinsic antennal muscles, which connect the antennomeres in entognaths, myriapods, and crustaceans. For this reason, the whole group is often termed the Amyocerata, meaning "lacking antennal muscles".
However, the Zygentoma are now considered more closely related to the Pterygota than to the Archaeognatha, thus rendering even the amyocerate apterygotes paraphyletic, and resulting in the dissolution of Thysanura into two separate monophyletic orders.
References
Firefly Encyclopedia of Insects and Spiders, edited by Christopher O'Toole, , 2002
Insect taxonomy
Arthropod subclasses
Extant Devonian first appearances
Paraphyletic groups | Apterygota | [
"Biology"
] | 975 | [
"Phylogenetics",
"Paraphyletic groups"
] |
2,153,158 | https://en.wikipedia.org/wiki/Masahiko%20Fujiwara | Masahiko Fujiwara (; born July 9, 1943, in Shinkyo, Manchukuo) is a Japanese mathematician and writer who is known for his book The Dignity of the Nation. He is a professor emeritus at Ochanomizu University.
Life
Masahiko Fujiwara is the son of Jirō Nitta and Tei Fujiwara, who were both authors. He graduated from the University of Tokyo in 1966.
Biography
Masahiko Fujiwara began writing after a two-year position as associate professor at the University of Colorado, with a book Wakaki sugakusha no Amerika designed to explain American campus life to Japanese people. He also wrote about the University of Cambridge, after a year's visit (Harukanaru Kenburijji: Ichi sugakusha no Igirisu). In a popular book on mathematics, he categorized theorems as beautiful theorems or ugly theorems. He is also known in Japan for speaking out against government reforms in secondary education. He wrote The Dignity of the Nation, which according to Time Asia was the second best selling book in the first six months of 2006 in Japan.
In 2006, Fujiwara published Yo ni mo utsukushii sugaku nyumon ("An Introduction to the World's Most Elegant Mathematics") with the writer Yōko Ogawa: it is a dialogue between novelist and mathematician on the extraordinary beauty of numbers.
References
External links
Article in the Financial Times from 2007.
Online essay
Essay on Literature and Mathematics
Mathematics popularizers
Number theorists
20th-century Japanese mathematicians
21st-century Japanese mathematicians
1943 births
Living people
Recreational mathematicians
Japanese people from Manchukuo
University of Tokyo alumni
University of Colorado Boulder faculty
20th-century Japanese essayists
21st-century essayists
Academic staff of Ochanomizu University | Masahiko Fujiwara | [
"Mathematics"
] | 368 | [
"Recreational mathematics",
"Number theorists",
"Recreational mathematicians",
"Number theory"
] |
2,153,281 | https://en.wikipedia.org/wiki/Dark%20star%20%28Newtonian%20mechanics%29 | A dark star is a theoretical object compatible with Newtonian mechanics that, due to its large mass, has a surface escape velocity that equals or exceeds the speed of light. Whether light is affected by gravity under Newtonian mechanics is unclear but if it were accelerated the same way as projectiles, any light emitted at the surface of a dark star would be trapped by the star's gravity, rendering it dark, hence the name. Dark stars are analogous to black holes in general relativity.
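Setting the Newtonian escape velocity √(2GM/r) equal to the speed of light gives the critical radius r = 2GM/c², numerically the same as the Schwarzschild radius. The sketch below evaluates it for one solar mass using standard constant values, giving roughly 3 km.

```c
/* Sketch: Michell/Laplace critical radius r = 2GM/c^2 for one solar mass. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double G     = 6.674e-11;   /* m^3 kg^-1 s^-2 */
    const double c     = 2.998e8;     /* m/s            */
    const double M_sun = 1.989e30;    /* kg             */

    double r_crit = 2.0 * G * M_sun / (c * c);
    double v_esc  = sqrt(2.0 * G * M_sun / r_crit);   /* recovers c */

    printf("critical radius for 1 solar mass: %.2f km\n", r_crit / 1000.0);
    printf("escape velocity at that radius:   %.3e m/s\n", v_esc);
    return 0;
}
```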
Dark star theory history
John Michell and dark stars
In 1783, the geologist John Michell wrote a letter to Henry Cavendish outlining the expected properties of dark stars; it was published by The Royal Society in their 1784 volume. Michell calculated that when the escape velocity at the surface of a star was equal to or greater than lightspeed, the generated light would be gravitationally trapped so that the star would not be visible to a distant astronomer.
Michell's idea for calculating the number of such "invisible" stars anticipated 20th century astronomers' work: he suggested that since a certain proportion of double-star systems might be expected to contain at least one "dark" star, we could search for and catalogue as many double-star systems as possible, and identify cases where only a single circling star was visible. This would then provide a statistical baseline for calculating the amount of other unseen stellar matter that might exist in addition to the visible stars.
Dark stars and gravitational shifts
Michell also suggested that future astronomers might be able to identify the surface gravity of a distant star by seeing how far the star's light was shifted to the weaker end of the spectrum, a precursor of Einstein's 1911 gravity-shift argument. However, Michell cited Newton as saying that blue light was less energetic than red (Newton thought that more massive particles were associated with bigger wavelengths), so Michell's predicted spectral shifts were in the wrong direction. It is difficult to tell whether Michell's careful citing of Newton's position reflected a lack of conviction on his part about whether Newton was correct, or simply academic thoroughness.
Wave theory of light
In 1796, the mathematician Pierre-Simon Laplace promoted the same idea in the first and second editions of his book Exposition du système du Monde, independently of Michell.
With the development of the wave theory of light, Laplace may have removed the idea from later editions: light came to be thought of as a massless wave, and therefore not influenced by gravity, and physicists as a group dropped the idea. The German physicist, mathematician, and astronomer Johann Georg von Soldner nevertheless continued with Newton's corpuscular theory of light as late as 1804.
Comparisons with black holes
Indirect radiation
Dark stars and black holes both have a surface escape velocity equal to or greater than lightspeed, and a critical radius of r = 2M (in geometrized units where G = c = 1; in conventional units, R = 2GM/c²).
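That critical-radius relation is simple to evaluate numerically. The Python sketch below is purely illustrative (it is not drawn from the historical sources, and the physical constants are rounded): it sets the Newtonian escape velocity equal to the speed of light and solves for the radius.

```python
# Illustrative sketch: the Newtonian "dark star" radius follows from setting the
# surface escape velocity sqrt(2*G*M/R) equal to c, giving R = 2*G*M/c^2 --
# numerically the same as the Schwarzschild radius of general relativity.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def dark_star_radius(mass_kg: float) -> float:
    """Radius at which the Newtonian escape velocity equals the speed of light."""
    return 2.0 * G * mass_kg / c**2

for solar_masses in (1, 4.0e6):      # the Sun; a galactic-centre-scale mass
    r = dark_star_radius(solar_masses * M_SUN)
    print(f"{solar_masses} solar masses -> critical radius ~ {r / 1e3:.1f} km")
```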
However, the dark star is capable of emitting indirect radiation – outward-aimed light and matter can leave the r = 2M surface briefly before being recaptured, and while outside the critical surface, can interact with other matter, or be accelerated free from the star through such interactions. A dark star, therefore, has a rarefied atmosphere of "visiting particles", and this ghostly halo of matter and light can radiate, albeit weakly. Also, since faster-than-light speeds are possible in Newtonian mechanics, it is possible for particles to escape.
Radiation effects
A dark star may emit indirect radiation as described above. Black holes as described by current theories about quantum mechanics emit radiation through a different process, Hawking radiation, first postulated in 1975. The radiation emitted by a dark star depends on its composition and structure; Hawking radiation, by the no-hair theorem, is generally thought of as depending only on the black hole's mass, charge, and angular momentum, although the black hole information paradox makes this controversial.
Light-bending effects
If Newtonian physics does have a gravitational deflection of light (as Newton, Cavendish and Soldner assumed), general relativity predicts twice as much deflection in a light beam skimming the Sun. This difference can be explained by the additional contribution of the curvature of space under modern theory: while Newtonian gravitation is analogous to the space-time components of general relativity's Riemann curvature tensor, the curvature of space is contained in the purely spatial components, and both forms of curvature contribute to the total deflection.
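For a concrete sense of the factor of two, the sketch below evaluates the standard textbook deflection formulas for a ray grazing the Sun; the expressions 2GM/(c²b) for the corpuscular (Newtonian) estimate and 4GM/(c²b) for general relativity are assumed here rather than derived in this article.

```python
# Sketch comparing the Newtonian (corpuscular) and general-relativistic predictions
# for light deflection at impact parameter b: 2GM/(c^2 b) versus 4GM/(c^2 b).
import math

G, c = 6.674e-11, 2.998e8                 # SI units
M_SUN, R_SUN = 1.989e30, 6.957e8          # solar mass (kg) and solar radius (m)

def deflection_arcsec(mass_kg: float, impact_parameter_m: float, relativistic: bool) -> float:
    factor = 4.0 if relativistic else 2.0
    radians = factor * G * mass_kg / (c**2 * impact_parameter_m)
    return math.degrees(radians) * 3600.0

print(f"Newtonian estimate: {deflection_arcsec(M_SUN, R_SUN, False):.2f} arcsec")  # ~0.87
print(f"General relativity: {deflection_arcsec(M_SUN, R_SUN, True):.2f} arcsec")   # ~1.75
```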
See also
Black hole
Magnetospheric eternally collapsing object
Q star
References
Dark concepts in astrophysics
Obsolete theories in physics
Stellar black holes | Dark star (Newtonian mechanics) | [
"Physics"
] | 923 | [
"Black holes",
"Stellar black holes",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Dark concepts in astrophysics",
"Obsolete theories in physics"
] |
2,153,318 | https://en.wikipedia.org/wiki/Dana%20Foundation | The Dana Foundation (Charles A. Dana Foundation) is a private philanthropic organization based in New York dedicated to advancing neuroscience and society by supporting cross-disciplinary intersections such as neuroscience and ethics, law, policy, humanities, and arts.
Leadership
The foundation was founded in 1950 by Charles A. Dana, a legislator and businessman from New York State, and president of the Dana Corporation. He presided over the organization until 1960, and continued to participate until his death in 1975.
Steven E. Hyman is chairman of the board of directors of the foundation. Caroline Montojo is the current president of the foundation.
The Dana Alliance for Brain Initiatives
The Dana Foundation supported the Dana Alliance for Brain Initiatives (which included the European Dana Alliance for the Brain), a nonprofit organization of leading neuroscientists committed to advancing public awareness about the progress and promise of brain research, from 1993 to 2022.
Grant programs
In 2022, the Dana Foundation moved away from grants for research to grants that aim to strengthen neuroscience's positive role in the world. Current grants fall under three categories.
NextGen: To develop a new generation of interdisciplinary experts who shepherd neuroscience uses for a better world. Its current major project is creating Dana Centers for Neuroscience & Society.
Frontiers: To grow capacity for informed public reflection on emerging neuroscience and neurotechnology. Its projects include Judicial Seminars on Emerging Issues in Neuroscience, which provide state and federal judges in the US with a better understanding of the role neuroscience may play in making legal determinations in the courts, from the admissibility of neuroimaging evidence to decisions about criminal culpability. The foundation also provides funding for the Royal Society's Neuroscience and the Law program in the UK.
Education: To spark interest and support education around neuroscience and the many ways it interfaces with our everyday lives. Its projects include the annual Brain Awareness Week, next held March 11-17, 2024.
Past research grant programs
The Dana Foundation's area of research emphasis had been in neuroscience, focusing on neuroimaging and clinical neuroscience research. In 2019, the foundation paused awarding new research grants while the board of trustees worked to revise its strategic plan for future neuroscience grants.
Also supported were studies to develop ethical guidelines in brain research and explore other aspects of neuroethics.
Public education
The foundation has a range of outreach initiatives for the general public and for targeted audiences. Major initiatives include:
Brain Awareness Week (#brainweek) is the global campaign to increase public awareness of the progress and benefits of brain research. Partner organizations host creative and innovative activities in their communities to educate kids and adults about the brain. Brain Awareness Week 2023 is March 13 to 19; BrainWeek 2024 will be March 11 to 17.
The Dana Foundation website, dana.org, offers scientist-vetted information about the brain, including PDFs of publications, fact sheets, and lesson plans to download and share, as well as articles, videos, and podcasts targeted to non-scientists.
Web-based publications include reporting from neuroscience events, scientist Q&As, and Brain Basics.
References
External links
Dana Foundation website
The New York Times
Dana Foundation website, Grants
Inside Philanthropy, Dana Foundation Grants
Dana Foundation secondary website, Brain Awareness Week
AAAS website
Dana Foundation website, publications
Biomedical research foundations
Educational foundations in the United States
Medical and health foundations in the United States | Dana Foundation | [
"Engineering",
"Biology"
] | 678 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
2,153,462 | https://en.wikipedia.org/wiki/Reflector%20%28antenna%29 | An antenna reflector is a device that reflects electromagnetic waves. Antenna reflectors can exist as a standalone device for redirecting radio frequency (RF) energy, or can be integrated as part of an antenna assembly.
Standalone reflectors
The function of a standalone reflector is to redirect electromagnetic (EM) energy, generally in the radio wavelength range of the electromagnetic spectrum.
Common standalone reflector types are
corner reflector, which reflects the incoming signal back to the direction from which it came, commonly used in radar.
flat reflector, which reflects the signal like a mirror and is often used as a passive repeater.
Integrated reflectors
When integrated into an antenna assembly, the reflector serves to modify the radiation pattern of the antenna, increasing gain in a given direction.
Common integrated reflector types are
parabolic reflector, which focuses a beam signal into one point or directs a radiating signal into a beam.
a passive element slightly longer than, and located behind, a radiating dipole element, which absorbs and re-radiates the signal in a directional way, as in a Yagi antenna array.
a flat reflector such as used in a Short backfire antenna or Sector antenna.
a corner reflector used in UHF television antennas.
a cylindrical reflector as used in Cantenna.
Design criteria
Parameters that can directly influence the performance of an antenna with integrated reflector:
Dimensions of the reflector (Big ugly dish versus small dish)
Spillover (part of the feed antenna radiation misses the reflector)
Aperture blockage (also known as feed blockage: part of the feed energy is reflected back into the feed antenna and does not contribute to the main beam)
Illumination taper (feed illumination reduced at the edges of the reflector)
Reflector surface deviation
Defocusing
Cross polarization
Feed losses
Antenna feed mismatch
Non-uniform amplitude/phase distributions
The overall antenna efficiency is measured in terms of its effectiveness ratio (aperture efficiency), i.e. the ratio of the antenna's effective aperture to its physical aperture.
Any gain-degrading factors which raise side lobes have a two-fold effect, in that they contribute to system noise temperature in addition to reducing gain. Aperture blockage and deviation of reflector surface (from the designed "ideal") are two important cases. Aperture blockage is normally due to shadowing by feed, subreflector and/or support members. Deviations in reflector surfaces cause non-uniform aperture distributions, resulting in reduced gains.
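As a rough illustration of how reflector size and overall efficiency set the achievable gain, the sketch below uses the standard circular-aperture relation G = η(πD/λ)²; the diameter, frequency, and efficiency values are arbitrary examples, not figures taken from this article.

```python
# Illustrative sketch of aperture-antenna gain: G = eta * (pi * D / lambda)^2,
# where eta lumps together the gain-degrading factors listed above.
import math

def dish_gain_dbi(diameter_m: float, freq_hz: float, aperture_efficiency: float = 0.6) -> float:
    """Approximate boresight gain (dBi) of a circular parabolic reflector."""
    wavelength_m = 3.0e8 / freq_hz
    gain_linear = aperture_efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10.0 * math.log10(gain_linear)

# Example: a 1.2 m dish at 12 GHz with an assumed 60% aperture efficiency.
print(f"{dish_gain_dbi(1.2, 12e9):.1f} dBi")   # roughly 41 dBi
```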
The standard symmetrical, parabolic, Cassegrain reflector system is very popular in practice because it allows minimum feeder length to the terminal equipment. The major disadvantage of this configuration is blockage by the hyperbolic sub-reflector and its supporting struts (usually 3–4 are used). The blockage becomes very significant when the size of the parabolic reflector is small compared to the diameter of the sub-reflector.
To avoid blockage from the sub-reflector asymmetric designs such as the open Cassegrain can be employed. Note however that the asymmetry can have deleterious effects on some aspects of the antenna's performance - for example, inferior side-lobe levels, beam squint, poor cross-polar response, etc.
To avoid spillover from the effects of over-illumination of the main reflector surface and diffraction, a microwave absorber is sometimes employed. This lossy material helps prevent excessive side-lobe levels radiating from edge effects and over-illumination. Note that in the case of a front-fed Cassegrain the feed horn and feeder (usually waveguide) need to be covered with an edge absorber in addition to the circumference of the main paraboloid.
Measurements
Measurements are made on reflector antennas to establish important performance indicators such as the gain and sidelobe levels. For this purpose the measurements must be made at a distance at which the beam is fully formed. A distance of four Rayleigh distances is commonly adopted as the minimum distance at which measurements can be made, unless specialized techniques are used (see Antenna measurement).
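A minimal sketch of that measurement-distance criterion follows. It assumes the common convention in which the Rayleigh distance is D²/(2λ), so that four Rayleigh distances coincide with the familiar Fraunhofer far-field limit of 2D²/λ; the reflector size and frequency are arbitrary examples.

```python
# Sketch of the far-field measurement criterion for a reflector of diameter D:
# R_min = 2 * D^2 / lambda (equivalently, four Rayleigh distances of D^2 / (2*lambda)).
def min_measurement_distance_m(diameter_m: float, freq_hz: float) -> float:
    wavelength_m = 3.0e8 / freq_hz
    return 2.0 * diameter_m ** 2 / wavelength_m

# Example: a 3 m reflector measured at 10 GHz needs roughly 600 m of range.
print(f"{min_measurement_distance_m(3.0, 10e9):.0f} m")
```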
See also
Lens antenna
Radio astronomy
References
Radio frequency antenna types
Satellite broadcasting | Reflector (antenna) | [
"Engineering"
] | 825 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
2,153,700 | https://en.wikipedia.org/wiki/Prontosil | Prontosil is an antibacterial drug of the sulfonamide group. It has a relatively broad effect against gram-positive cocci but not against enterobacteria. One of the earliest antimicrobial drugs, it was widely used in the mid-20th century but is little used today because better options now exist. The discovery and development of this first sulfonamide drug opened a new era in medicine, because it greatly widened the success of antimicrobial chemotherapy in an era when many physicians doubted its still largely untapped potential. At the time, disinfectant cleaners and topical antiseptic wound care were widely used but there were very few antimicrobial drugs to use safely inside living bodies. Antibiotic drugs derived from microbes, which are relied on heavily today, did not yet exist. Prontosil was discovered in 1932 by a research team at the Bayer Laboratories of the IG Farben conglomerate in Germany led by Gerhard Domagk. Domagk received the 1939 Nobel Prize in Physiology or Medicine for that discovery.
Names
The capitalized name "Prontosil" is Bayer's trade name; the nonproprietary names include sulfonamidochrysoidine. Because the drug predates the modern system of drug nomenclature, which ensures that nonproprietary names are well known from the inception of marketing, it was generally known among the public only by its trade name, and the trade name was the origin of some of the nonproprietary names (as also happened with "aspirin").
History
This compound was first synthesized by Bayer chemists Josef Klarer and Fritz Mietzsch as part of a research program designed to find dyes that might act as antibacterial drugs in the body. The molecule was tested and in the late autumn of 1932 was found effective against some important bacterial infections in mice by Gerhard Domagk, who subsequently received the 1939 Nobel Prize in Medicine. Prontosil was the result of five years of testing involving thousands of compounds related to azo dyes.
The crucial test result (in a murine model of Streptococcus pyogenes systemic infection) that preliminarily established the antibacterial efficacy of Prontosil in mice dates from late December 1931. IG Farben filed a German patent application concerning its medical utility on December 25, 1932. The synthesis of the compound had been first reported by Paul Gelmo, a chemistry student working at the University of Vienna in his 1909 thesis, although he had not realized its medical potential.
The readily water-soluble sodium salt of sulfonamidochrysoidine, which gives a burgundy red solution and was trademarked Prontosil Solubile, was clinically investigated between 1932 and 1934, first at the nearby hospital at Wuppertal-Elberfeld headed by Philipp Klee, and then at the Düsseldorf University Hospital. The results were published in a series of articles in the February 15, 1935 issue of Germany's then preeminent medical scientific journal, Deutsche Medizinische Wochenschrift, and were initially received with some skepticism by a medical community bent on vaccination and crude immunotherapy.
Leonard Colebrook introduced it as a cure for puerperal fever. As impressive clinical successes with Prontosil started to be reported from all over Europe, and especially after a widely published treatment in 1936 of Franklin Delano Roosevelt, Jr. (a son of U.S. president Franklin D. Roosevelt), acceptance was quick and dozens of medicinal chemistry teams set out to improve Prontosil.
Eclipse and legacy
In late 1935, working at the Pasteur Institute in Paris in the laboratory of Dr. Ernest Fourneau, Jacques and Thérèse Tréfouël, Dr. Daniel Bovet and Federico Nitti discovered that Prontosil is metabolized to sulfanilamide (para-aminobenzenesulfonamide), a much simpler, colorless molecule, reclassifying Prontosil as a prodrug. Prontalbin became the first oral version of sulfanilamide by Bayer, which had actually obtained a German patent on sulfanilamide as early as 1909, without realizing its medical potential at this time.
It has been argued that IG Farben might have made its breakthrough discovery with sulfanilamide in 1932 but, recognizing that it would not be patentable as an antibacterial, had spent the next three years developing Prontosil as a new, and therefore more easily patentable, compound. However Dr. Bovet, who has received a Nobel Prize for medicine, and one of the authors of the French discovery, wrote in 1988: "Today, we have the proof that the chemists of Elberfeld were unaware of the properties of sulfanilamide at the time of our discovery and that it was by our communication that they were informed. To be convinced about it, it is enough to attentively examine the monthly reports of work of Mietzsch and Klarer during years 1935–1936 and especially the Log Book of Gerhard Domagk: the formula of sulphamide is consigned there – without comment – not before January 1936."
Dr. Alexander Ashley Weech (1895–1977), a pioneer pediatrician, while working at Columbia University's College of Physicians & Surgeons (in the affiliated New York Babies Hospital) treated the first patient in the United States with an antibiotic (sulfanilamide; prontosil) in 1935 which led to a new era of medicine across the Atlantic. Dr. Weech researched Domagk's work, translating the German article, and "was so intrigued by [the] experiments and by the three accompanying clinical articles on Prontosil that he contacted a pharmaceutical house, obtained a supply of the drug, and proceeded to treat a patient [a daughter of a colleague] who had serious streptococcal disease." Dr. Perrin Long and Dr. Eleanor Bliss of Johns Hopkins University began their pioneering work later on prontosil and sulfanilamide which led to the large scale production of this new treatment saving the lives of millions with systemic bacterial infections.
Sulfanilamide was cheap to produce and (due to the early date of its original composition of matter patent which made no reference to a medical use) was already off-patent when its antibacterial properties were first made public. Since the sulfanilamide moiety was also easy to link into other molecules, chemists soon gave rise to hundreds of second-generation sulfonamide drugs. As a result, Prontosil failed to make the profits in the marketplace hoped for by Bayer. Although quickly eclipsed by these newer "sulfa drugs" and, in the mid-1940s and through the 1950s by penicillin and other antibacterials that proved more effective against more types of bacteria, Prontosil remained on the market until the 1960s. Prontosil's discovery ushered in the era of antibacterial drugs and had a profound effect on pharmaceutical research, drug laws, and medical history.
Sulfonamide-trimethoprim combinations (co-trimoxazole) are still used extensively against opportunistic infections in patients with AIDS, urinary infections and in the treatment of burns. However, in many other situations, sulfa drugs have been replaced by beta-lactam antibacterials.
References
Further reading
Sulfonamide antibiotics
Azo dyes
Prodrugs
Drugs developed by Bayer
German inventions
1932 in science
1932 in Germany | Prontosil | [
"Chemistry"
] | 1,547 | [
"Chemicals in medicine",
"Prodrugs"
] |
2,153,809 | https://en.wikipedia.org/wiki/Seating%20capacity | Seating capacity is the number of people who can be seated in a specific space, in terms of both the physical space available, and limitations set by law. Seating capacity can be used in the description of anything ranging from an automobile that seats two to a stadium that seats hundreds of thousands of people. The largest sporting venue in the world, the Indianapolis Motor Speedway, has a permanent seating capacity for more than 235,000 people and infield seating that raises capacity to an approximate 400,000.
In transport
In venues
Safety is a primary concern in determining the seating capacity of a venue: "Seating capacity, seating layouts and densities are largely dictated by legal requirements for the safe evacuation of the occupants in the event of fire". The International Building Code specifies, "In places of assembly, the seats shall be securely fastened to the floor" but provides exceptions if the total number of seats is fewer than 100, if there is a substantial amount of space available between seats or if the seats are at tables. It also delineates the number of available exits for interior balconies and galleries based on the seating capacity, and sets forth the number of required wheelchair spaces in a table derived from the seating capacity of the space.
The International Fire Code, portions of which have been adopted by many jurisdictions, is directed more towards the use of a facility than the construction. It specifies, "For areas having fixed seating without dividing arms, the occupant load shall not be less than the number of seats based on one person for each 18 inches (457 mm) of seating length". It also requires that every public venue submit a detailed site plan to the local fire code official, including "details of the means of egress, seating capacity, [and] arrangement of the seating...."
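As a simple illustration of the bench-seating rule quoted above, the sketch below converts a total length of fixed seating without dividing arms into an occupant load at one person per 18 inches (457 mm); the row dimensions are invented for the example, and a real determination involves many additional code provisions.

```python
# Sketch of the occupant-load rule quoted above for fixed seating without dividing
# arms: one person for each 18 inches (457 mm) of seating length.
def bench_occupant_load(total_seating_length_mm: float) -> int:
    PERSON_WIDTH_MM = 457.0   # 18 inches
    return int(total_seating_length_mm // PERSON_WIDTH_MM)

# Example: ten bench rows, each 6 m long.
rows, row_length_mm = 10, 6000
print(bench_occupant_load(rows * row_length_mm), "persons")   # 131 persons
```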
Once safety considerations have been satisfied, determinations of seating capacity turn on the total size of the venue, and its purpose. For sports venues, the "decision on maximum seating capacity is determined by several factors. Chief among these are the primary sports program and the size of the market area". In motion picture venues, the "limit of seating capacity is determined by the maximal viewing distance for a given size of screen", with image quality for closer viewers declining as the screen is expanded to accommodate more distant viewers.
Seating capacity of venues also plays a role in what media they are able to provide and how they are able to provide it. In contracting to permit performers to use a theatre or other performing space, the "seating capacity of the performance facility must be disclosed". Seating capacity may influence the kind of contract to be used and the royalties to be given. The seating capacity must also be disclosed to the copyright owner in seeking a license for the copyrighted work to be performed in that venue.
Venues that may be leased for private functions such as ballrooms and auditoriums generally advertise their seating capacity. Seating capacity is also an important consideration in the construction and use of sports venues such as stadiums and arenas. When entities such as the National Football League's Super Bowl Committee decide on a venue for a particular event, seating capacity, which reflects the possible number of tickets that can be sold for the event, is an important consideration.
Legal capacity and total capacity
Seating capacity differs from total capacity (sometimes called public capacity), which describes the total number of people who can fit in a venue or in a vehicle either sitting or standing. Where seating capacity is a legal requirement, however, as it is in movie theatres and on aircraft, the law reflects the fact that the number of people allowed in should not exceed the number who can be seated.
Use of the term "public capacity" indicates that a venue is allowed to hold more people than it can actually seat. Again, the maximum total number of people can refer to either the physical space available or limitations set by law.
See also
All-seater stadium
List of stadiums by capacity
List of association football stadiums by capacity
List of American football stadiums by capacity
List of rugby league stadiums by capacity
List of rugby union stadiums by capacity
List of tennis stadiums by capacity
Seating assignment
References
Transportation planning
Transport law
Sports attendance | Seating capacity | [
"Physics"
] | 838 | [
"Physical systems",
"Transport",
"Transport law"
] |
2,154,069 | https://en.wikipedia.org/wiki/Papilloma | A papilloma (plural papillomas or papillomata) (papillo- + -oma) is a benign epithelial tumor growing exophytically (outwardly projecting) in nipple-like and often finger-like fronds. In this context, papilla refers to the projection created by the tumor, not a tumor on an already existing papilla (such as the nipple).
When used without context, it frequently refers to infections (squamous cell papilloma) caused by human papillomavirus (HPV), such as warts. Human papillomavirus infection is a major cause of cervical cancer, vulvar cancer, vaginal cancer, penis cancer, anal cancer, and HPV-positive oropharyngeal cancers. Most viral warts are caused by human papillomavirus infection (HPV), of which there are nearly 200 distinct human papillomaviruses (HPVs), and many HPV types are carcinogenic. There are, however, a number of other conditions that cause papilloma, as well as many cases in which there is no known cause.
Signs and symptoms
A benign papillomatous tumor is derived from epithelium, with cauliflower-like projections that arise from the mucosal surface.
It may appear white or normal colored. It may be pedunculated or sessile. The typical size is between 1 and 5 cm. Neither sex is significantly more likely to develop them. The most common site is the palate-uvula area, followed by the tongue and lips. Durations range from weeks to 10 years.
Presence of HPV
Immunoperoxidase stains have identified antigens of the human papillomavirus (HPV) types 6 and 11 in approximately 50% of cases of squamous cell papilloma.
Prognosis
There is no evidence that papillomas are premalignant.
Differential diagnosis
Intraoral verruca vulgaris,
Condyloma acuminatum, and
Focal epithelial hyperplasia.
Note: differentiation is done accurately by microscopic examination only.
Treatment
With conservative surgical excision, recurrence is rare.
See also
Skin tag
Inverted papilloma
Squamous cell papilloma
Urothelial papilloma
Intraductal papilloma of breast
Wart
Genital wart
Plantar wart
Papillomavirus
Human papillomavirus
References
External links
Choroid Plexus Papilloma - Palmer, Cheryl Ann and Daniel Keith Harrison; EMedicine; Jun 5, 2008
Benign neoplasms
Glandular and epithelial neoplasia
Histopathology | Papilloma | [
"Chemistry"
] | 573 | [
"Histopathology",
"Microscopy"
] |
2,154,197 | https://en.wikipedia.org/wiki/Anthracosauria | Anthracosauria is an order of extinct reptile-like amphibians (in the broad sense) that flourished during the Carboniferous and early Permian periods, although precisely which species are included depends on one's definition of the taxon. "Anthracosauria" is sometimes used to refer to all tetrapods more closely related to amniotes such as reptiles, mammals, and birds, than to lissamphibians such as frogs and salamanders. An equivalent term to this definition would be Reptiliomorpha. Anthracosauria has also been used to refer to a smaller group of large, crocodilian-like aquatic tetrapods also known as embolomeres.
Various definitions
As originally defined by Säve-Söderbergh in 1934, the anthracosaurs are a group of usually large aquatic Amphibia from the Carboniferous and lower Permian. As defined by Alfred Sherwood Romer, however, the anthracosaurs include all non-amniote "labyrinthodont" reptile-like amphibians, and Säve-Söderbergh's definition is more equivalent to Romer's suborder Embolomeri. This definition was also used by Edwin H. Colbert and Robert L. Carroll in their textbooks of Vertebrate Palaeontology (Colbert 1969, Carroll 1988). Dr A. L. Panchen, however, preferred Säve-Söderbergh's original definition of Anthracosauria in his Handbuch der Paläoherpetologie, 1970.
With the advent of cladistics, things changed again. Gauthier, Kluge and Rowe (1988) defined Anthracosauria as a clade including "Amniota plus all other tetrapods that are more closely related to amniotes than they are to amphibians" (Amphibia in turn was defined by these authors as a clade including Lissamphibia and those tetrapods that are more closely related to lissamphibians than they are to amniotes). Similarly, Michel Laurin (1996) uses the term in a cladistic sense to refer to only the most advanced reptile-like amphibians. Thus his definition includes Diadectomorpha, Solenodonsauridae and the amniotes. As Ruta, Coates and Quicke (2003) pointed out, this definition is problematic, because, depending on the exact phylogenetic position of Lissamphibia within Tetrapoda, using it might lead to the situation where some taxa traditionally classified as anthracosaurs, including even the genus Anthracosaurus itself, would not belong to Anthracosauria. Laurin (2001) created a different phylogenetic definition of Anthracosauria, defining it as "the largest clade that includes Anthracosaurus russelli but not Ascaphus truei". However, Michael Benton (2000, 2004) makes the anthracosaurs a paraphyletic order within the superorder Reptiliomorpha, along with the orders Seymouriamorpha and Diadectomorpha, thus making the Anthracosaurians the "lower" reptile-like amphibians. In his definition, the group encompasses the Embolomeri, Chroniosuchia and possibly the family Gephyrostegidae.
Many studies since have suggested that anthracosaurs or embolomeres are likely reptiliomorphs closer to amniotes, but some recent studies either retain them as amphibians or argue that their relationships are still ambiguous, with the group perhaps more likely to be stem-tetrapods.
Etymology
The name "Anthracosauria" is Greek ('coal lizards'), because many of its fossils were found in the Coal Measures.
References and external links
Benton, M. J. (2004), Vertebrate Palaeontology, Blackwell Science Ltd 3rd ed. - see also taxonomic hierarchy of the vertebrates, according to Benton 2004
Carroll, R. L., 1988: Vertebrate Paleontology and Evolution. W. H. Freeman and company, New York
Clack, J. A. (2002), Gaining Ground: the Origin and Evolution of Tetrapods Indiana Univ. Press, 369 pp.
Colbert, E. H. (1969), Evolution of the Vertebrates, John Wiley & Sons Inc (2nd ed.)
Laurin, Michel (1996) Terrestrial Vertebrates - Stegocephalians: Tetrapods and other digit-bearing vertebrates
Palaeos Anthracosauroidea
Panchen, A. L. (1970) Handbuch der Paläoherpetologie - Encyclopedia of Paleoherpetology Part 5a - Batrachosauria (Anthracosauria), Gustav Fischer Verlag - Stuttgart & Portland, 83 pp., web page
Systema Naturae 2000 / Classification Order Anthracosauria
Citations
Stegocephalians
Reptiliomorphs
Mississippian first appearances
Carnian extinctions
Paraphyletic groups
Tetrapodomorph orders | Anthracosauria | [
"Biology"
] | 1,064 | [
"Phylogenetics",
"Paraphyletic groups"
] |
2,154,225 | https://en.wikipedia.org/wiki/Psocoptera | Psocoptera are a paraphyletic group of insects that are commonly known as booklice, barklice or barkflies. The name Psocoptera has been replaced with Psocodea in recent literature, with the inclusion of the former order Phthiraptera into Psocodea (as part of the suborder Troctomorpha).
These insects first appeared in the Permian period, 295–248 million years ago. They are often regarded as the most primitive of the hemipteroids. Their name originates from the Greek word ψῶχος (psokhos), meaning "gnawed" or "rubbed" and πτερά (ptera), meaning "wings". There are more than 5,500 species in 41 families in three suborders. Many of these species have only been described in the early twenty-first century. They range in size from in length.
The species known as booklice received their common name because they are commonly found amongst old books—they feed upon the paste used in binding. The barklice are found on trees, feeding on algae and lichen.
Anatomy and biology
Psocids are small, scavenging insects with a relatively generalized body plan. They feed primarily on fungi, algae, lichen, and organic detritus in nature but are also known to feed on starch-based household items like grains, wallpaper glue and book bindings. They have chewing mandibles, and the central lobe of the maxilla is modified into a slender rod. This rod is used to brace the insect while it scrapes up detritus with its mandibles. They also have a swollen forehead, large compound eyes, and three ocelli. Their bodies are soft with a segmented abdomen. Some species can spin silk from glands in their mouth. They may festoon large sections of trunk and branches in dense swathes of silk.
Some psocids have small ovipositors that are up to 1.5 times as long as the hindwings, and all four wings have a relatively simple venation pattern, with few cross-veins. The wings, if present, are held tent-like over the body. The legs are slender and adapted for jumping, rather than gripping, as in the true lice. The abdomen has nine segments, and no cerci.
There is often considerable variation in the appearance of individuals within the same species. Many have no wings or ovipositors, and may have a different shape to the thorax. Other, more subtle, variations are also known, such as changes to the development of the setae. The significance of such changes is uncertain, but their function appears to be different from similar variations in, for example, aphids. Like aphids, however, many psocids are parthenogenic, and the presence of males may even vary between different races of the same species.
Psocids lay their eggs in minute crevices or on foliage, although a few species are known to be viviparous. The young are born as miniature, wingless versions of the adult. These nymphs typically molt six times before reaching full adulthood. The total lifespan of a psocid is rarely more than a few months.
Booklice range from approximately . Some species are wingless and they are easily mistaken for bedbug nymphs and vice versa. Booklouse eggs take two to four weeks to hatch and can reach adulthood approximately two months later. Adult booklice can live for six months. Besides damaging books, they also sometimes infest food storage areas, where they feed on dry, starchy materials. Although some psocids feed on starchy household products, the majority of psocids are woodland insects with little to no contact with humans, therefore they are of little economic importance. They are scavengers and do not bite humans.
Psocids can affect the ecosystems in which they reside. Many psocids can affect decomposition by feeding on detritus, especially in environments with lower densities of predacious micro arthropods that may eat psocids. The nymph of a psocid species, Psilopsocus mimulus, is the first known wood-boring psocopteran. These nymphs make their own burrows in woody material, rather than inhabiting vacated, existing burrows. This boring activity can create habitats that other organisms may use.
Interaction with humans
Some species of psocids, such as Liposcelis bostrychophila, are common pests of stored products. Psocids, among other arthropods, have been studied to develop new pest control techniques in food manufacturing. One study found that modified atmospheres during packing (MAP) helped to control the reoccurrence of pests during the manufacturing process and prevented further infestation in the final products that go to consumers.
Classification
In the 2000s, morphological and molecular phylogenetic evidence has shown that the parasitic lice (Phthiraptera) evolved from within the psocopteran suborder Troctomorpha, thus making Psocoptera paraphyletic with respect to Phthiraptera. In modern systematics, Psocoptera and Phthiraptera are therefore treated together in the order Psocodea.
References
External links
National Barkfly Recording Scheme
Psoco Net
Tree of Life: Psocodea
Archipsocus nomas, a webbing barklouse on the UF / IFAS Featured Creatures Web site
Insect orders
Paraphyletic groups
Paraneoptera | Psocoptera | [
"Biology"
] | 1,185 | [
"Phylogenetics",
"Paraphyletic groups"
] |
2,154,303 | https://en.wikipedia.org/wiki/Licenciado%20Gustavo%20D%C3%ADaz%20Ordaz%20International%20Airport | Licenciado Gustavo Díaz Ordaz International Airport, simply known as Puerto Vallarta International Airport, is an international airport serving Puerto Vallarta, Jalisco, Mexico. It serves as a gateway to the Mexican tourist destination of Riviera Nayarit and the Jalisco coast year-round, offering flights to and from Mexico, the United States, Canada, and the United Kingdom. The airport also houses facilities for the Mexican Army and supports various tourism, flight training, and general aviation activities. Operated by Grupo Aeroportuario del Pacífico, it is named after President Gustavo Díaz Ordaz.
Ranked as the fifth-busiest airport in Mexico for international passenger traffic and the seventh-busiest in terms of passenger numbers and aircraft operations, it has witnessed rapid growth, becoming one of the country's fastest-growing airports: in 2021, it served 4.1 million passengers, increasing to almost 6.8 million in 2023. The airport connects travelers to 52 destinations, including 13 domestic and 39 international, served by 24 airlines.
Facilities
The airport is situated within the Puerto Vallarta Urban area, just one km north of Marina Vallarta, at an elevation of above mean sea level. It features a single runway, designated as 04/22, measuring in length with an asphalt surface. The commercial aviation apron provides twelve aircraft parking positions next to the terminal and eight remote positions. The general aviation apron offers stands for fixed-wing aircraft and heliports for private aviation.
Passenger terminals
The passenger terminal is a two-story structure. The ground floor includes the main entrance, a check-in area, and the arrivals section, housing customs and immigration facilities, as well as baggage claim services. Additionally, amenities such as car rental services, taxi stands, snack bars, and souvenir shops are available. The upper terminal floor features a security checkpoint and a departures area divided into two sections.
Concourse A (Gates 1-5A) caters to domestic flights and includes waiting areas with shops, food stands, and a VIP Lounge. The concourse is equipped with five gates: gates 1-3 on the top floor have jet bridges, while gates 4 and 5 on the ground floor allow passengers to board directly from the apron. Airlines operating from this concourse include Aeromexico, Aeromexico Connect, Viva Aerobus, Volaris, TAR, and Magni.
Concourse B is situated in a satellite building connected to the main terminal by a walkway. This concourse serves international flights, primarily from the United States and Canadian airlines. It offers seating areas, food stands, restaurants, a VIP lounge, and duty-free shops. The satellite has 15 gates (gates 6-20B) spread across two floors, with those on the top floor equipped with jet bridges. All international airlines operate from this area.
In 2022, the construction of a new Terminal 2 officially began. The terminal is projected to cover more than , featuring significant expansions, resulting in an increase from 9 to 16 remote boarding gates and from 11 to 19 boarding bridges. The development also encompasses improvements to parking facilities and the establishment of a new bus terminal. Terminal 2 is planned to have the capacity to mobilize 4.5 million passengers annually and aims to become the first airport in Latin America certified as NET Zero.
Other facilities
In the vicinity of the passenger terminal, various facilities are situated, including civil aviation hangars, courier and logistics companies, and cargo services. Additionally, there is a dedicated general aviation terminal that supports a range of activities such as tourism, flight training, executive aviation, and general aviation.
Air Force Station No. 5 () (E.A.M. No. 5) is located on the airport grounds, north of Runway 04/22. This station does not currently have active squadrons assigned to it. It features an aviation platform spanning , one hangar, and other facilities designed to accommodate Air Force personnel.
Airlines and destinations
Intense seasonal tourism to Puerto Vallarta means that passenger traffic at the airport is notably focused on flights to the United States and Canada. Among the busiest routes at the airport are those to Los Angeles, Dallas, and Phoenix. WestJet stands out as the airline serving the largest number of destinations, connecting Puerto Vallarta with 12 Canadian airports during the high season.
Passenger
Destinations map
Statistics
Passengers
Busiest routes
Notes
See also
List of the busiest airports in Mexico
List of airports in Mexico
List of airports by ICAO code: M
List of busiest airports in North America
List of the busiest airports in Latin America
Transportation in Mexico
Tourism in Mexico
Grupo Aeroportuario del Pacífico
List of beaches in Mexico
List of Mexican military installations
Mexican Air Force
Economy of Jalisco
Riviera Nayarit
Nuevo Vallarta
Notes
References
External links
Official website
Grupo Aeroportuario del Pacífico
Puerto Vallarta Airport information at Great Circle Mapper
Airports in Mexico
Airports in Jalisco
Transportation in Jalisco
Tourist attractions in Jalisco
Puerto Vallarta
WAAS reference stations
Mexican Air Force bases
Mexican Air Force
Military installations of Mexico | Licenciado Gustavo Díaz Ordaz International Airport | [
"Technology"
] | 1,028 | [
"Global Positioning System",
"WAAS reference stations"
] |
2,154,325 | https://en.wikipedia.org/wiki/Cannabis%20indica | Cannabis indica is an annual plant species in the family Cannabaceae, indigenous to the Hindu Kush mountains of Southern Asia. The plant produces large amounts of tetrahydrocannabinol (THC) and tetrahydrocannabivarin (THCV), with total cannabinoid levels being as high as 53.7%. It is now widely grown in China, India, Nepal, Thailand, Afghanistan, and Pakistan, as well as southern and western Africa, and is cultivated for purposes including hashish production in India. The high concentrations of THC or THCV produce euphoric effects, making the plant popular for a range of purposes, from simple pleasure to clinical drug research, the search for new drugs, and alternative medicine, among others.
Taxonomy
In 1785, Jean-Baptiste Lamarck published a description of a second species of Cannabis, which he named Cannabis indica. Lamarck based his description of the newly named species on plant specimens collected in India. Richard Evans Schultes described C. indica as relatively short, conical, and densely branched, whereas C. sativa was described as tall and laxly branched. Loran C. Anderson described C. indica plants as having short, broad leaflets whereas those of C. sativa were characterized as relatively long and narrow. C. indica plants conforming to Schultes's and Anderson's descriptions originated from the Hindu Kush mountain range. Because of the often harsh and variable climate of those parts (extremely cold winters and warm summers), C. indica is well-suited for cultivation in temperate climates.
The specific epithet indica is Latin for "of India" and has come to be synonymous with the cannabis strain.
There was very little debate about the taxonomy of Cannabis until the 1970s, when botanists like Richard Evans Schultes began testifying in court on behalf of accused persons who sought to avoid criminal charges of possession of C. sativa by arguing that the plant material could instead be C. indica.
Cultivation
Broad-leafed C. indica plants in the Indian Subcontinent are traditionally cultivated for the production of charas, a form of hashish. Pharmacologically, C. indica landraces tend to have higher THC content than C. sativa strains. Some users report more of a "stoned" feeling and less of a "high" from C. indica when compared to C. sativa. (The terms sativa and indica, used in this sense, are more appropriately termed "narrow-leaflet" and "wide-leaflet" drug type, respectively.) The C. indica high is often referred to as a "body buzz" and has beneficial properties such as pain relief in addition to being an effective treatment for insomnia and an anxiolytic, as opposed to C. sativa's more common reports of a cerebral, creative and energetic high, and even (albeit rarely) including hallucinations. Differences in the terpenoid content of the essential oil may account for some of these differences in effect. Common C. indica strains for recreational or medicinal use include Kush and Northern Lights.
A recent genetic analysis included both the narrow-leaflet and wide-leaflet drug "biotypes" under C. indica, as well as southern and eastern Asian hemp (fiber/seed) landraces and wild Himalayan populations.
Genome
In 2011, a team of Canadian researchers led by Andrew Sud announced that they had sequenced a draft genome of the Purple Kush strain of C. indica.
Gallery
References
External links
Popular Indica Marijuana Strains
Four full pages of photos of cannabis cultivation in Morocco (Rif) on geopium.org
Photos of Indica cannabis availability in Canada (Rif)
Trending Indica Cannabis Strains
Cannabis strains
Entheogens
Hemp
Euphoriants
Plants described in 1785 | Cannabis indica | [
"Biology"
] | 803 | [
"Cannabis strains",
"Biopiracy"
] |
2,154,347 | https://en.wikipedia.org/wiki/Cannabis%20ruderalis | Cannabis ruderalis is a variety, subspecies, or species of Cannabis native to Central and Eastern Europe and Russia. It contains a relatively low quantity of psychoactive compound tetrahydrocannabinol (THC) and does not require photoperiod to blossom (unlike C. indica and C. sativa). Some scholars accept C. ruderalis as its own species due to its unique traits and phenotypes which distinguish it from C. indica and C. sativa; others debate whether ruderalis is a subspecies under C. sativa.
Description
This species is smaller than other species of the genus, rarely growing over in height. The plants have "thin, slightly fibrous stems" with little branching. The foliage is typically open with large leaves. C. ruderalis reaches maturity much quicker than other species of Cannabis, typically 5–7 weeks after being planted from seed.
Unlike other species of the genus, C. ruderalis enters the flowering stage based on the plant's maturity rather than its light cycle. With C. sativa and C. indica varieties, the plant stays in the vegetative state indefinitely as long as a long daylight cycle is maintained. Cannabis geneticists today refer to this feature as "autoflowering" when C. ruderalis is cross-bred.
Regarding its cannabinoid profile, it usually contains less tetrahydrocannabinol (THC) in its resin compared to other Cannabis species but is often high in cannabidiol (CBD).
Taxonomy
Species description
There is no consensus in the botanical community on whether C. ruderalis is a separate species or a subspecies of C. sativa. It was first described in 1924 by D. E. Janischewsky, who noted visible differences in the fruit (an achene), in its shape and size, from previously classified Cannabis sativa.
Genomic studies
Recently, genomic DNA studies utilizing molecular markers and different varieties of plants from diverse geographical origins have been employed to enrich the Cannabis taxonomy discussion. In 2005, Hillig reinforced the polytypic classification system based on allozyme variation at 17 genomic loci. Hillig's approach proposed a more detailed taxonomy encompassing three species with seven subspecies or varieties:
C. sativa
C. sativa subsp. sativa var. sativa
C. sativa subsp. sativa var. spontanea
C. sativa subsp. indica var. kafiristanica
C. indica
C. indica
C. indica sensu
C. chinensis
C. ruderalis.
Clarke and Merlin carried out further studies in 2013, analyzing the genus with a combination of molecular markers, chemotypes, and morphological characteristics. They proposed a refinement of Hillig's hypothesis and suggested that C. ruderalis could be the wild ancestor of C. sativa and C. indica. However, these conclusions were based on a limited sample size.
Etymology
The term ruderalis is derived from the Latin rūdera, which is the plural form of rūdus, meaning "rubble", "lump", or "rough piece of bronze". In botanical Latin, ruderalis means "weedy" or "growing among waste". A ruderal species refers to any plant that is the first to colonize land after a disturbance removing competition.
Distribution and habitat
C. ruderalis was first scientifically described in 1924 (from plants collected in southern Siberia), although it grows wild in other areas of Russia. The Russian botanist, Janischewski, was studying wild Cannabis in the Volga River system and realized he had come upon a third species. C. ruderalis is a hardier variety grown in the northern Himalayas and southern states of the former Soviet Union, characterized by a more sparse, "weedy" growth.
Similar C. ruderalis populations can be found in most of the areas where hemp cultivation was once prevalent. The most notable region in North America is the midwestern United States, though populations occur sporadically throughout the United States and Canada. Large wild C. ruderalis populations are found in central and eastern Europe, most of them in Ukraine, Lithuania, Belarus, Latvia, Estonia and adjacent countries. Without human selection, these plants have lost many of the traits they were originally selected for, and have acclimated to their environment.
Cultivation
Seeds of C. ruderalis were brought to Amsterdam in the early 1980s in order to enhance the breeding program of seed banks.
C. ruderalis has lower THC content than either C. sativa or C. indica, so it is rarely grown for recreational use. Also, the shorter stature of C. ruderalis limits its application for hemp production. C. ruderalis strains are high in the cannabinoid cannabidiol (CBD), so they are grown by some medical marijuana users.
Because C. ruderalis transitions from the vegetative stage to the flowering stage with age, as opposed to the light cycle required with photoperiod strains, it is bred with other household sativa and indica strains of cannabis to create "auto-flowering cannabis strains". This trait offers breeders some agricultural possibilities and advantages over the photoperiodic flowering varieties, as well as a degree of resistance to insect and disease pressures.
C. indica strains are frequently cross-bred with C. ruderalis to produce autoflowering plants with high THC content, improved hardiness and reduced height. Cannabis x intersita Sojak, a strain identified in 1960, is a cross between C. sativa and C. ruderalis. Attempts to produce a Cannabis strain with a shorter growing season are another application of cultivating C. ruderalis. C. ruderalis when crossed with sativa and indica strains will carry the recessive autoflowering trait. Further crosses will stabilise this trait and give a plant which flowers automatically and can be fully mature in as little as 10 weeks.
Cultivators also favor ruderalis plants due to their reduced production time, typically finishing in 3–4 months rather than 6–8 months. The auto-flowering trait is extremely beneficial because it allows for multiple harvests in one outdoor growing season without the use of light deprivation techniques necessary for multiple harvests of photoperiod-dependent strains.
Uses
C. ruderalis is traditionally used in Russian and Mongolian folk medicine, especially in treating depression. Because it is among the lowest-THC-producing biotypes of Cannabis, C. ruderalis is rarely used for recreational purposes.
In modern use, C. ruderalis has been crossed with Bedrocan strains to produce the strain Bediol for patients with medical prescriptions. The typically higher concentration of CBD may make ruderalis plants viable for the treatment of anxiety or epilepsy.
Bibliography
Books
Articles
References
External links
Cannabis strains
Flora of Nepal
Ruderal species | Cannabis ruderalis | [
"Biology"
] | 1,432 | [
"Cannabis strains",
"Biopiracy"
] |
2,154,371 | https://en.wikipedia.org/wiki/Extreme%20ultraviolet%20lithography | Extreme ultraviolet lithography (EUVL, also known simply as EUV) is a technology used in the semiconductor industry for manufacturing integrated circuits (ICs). It is a type of photolithography that uses 13.5 nm extreme ultraviolet (EUV) light from a laser-pulsed tin (Sn) plasma to create intricate patterns on semiconductor substrates.
ASML Holding is the only company that produces and sells EUV systems for chip production, targeting 5 nanometer (nm) and 3 nm process nodes.
The EUV wavelengths used in EUVL are near 13.5 nanometers (nm): light from a laser-pulsed tin (Sn) droplet plasma is used, via a reflective photomask, to expose a substrate covered by photoresist. Tin ions in the ionic states from Sn IX to Sn XIV give photon emission spectral peaks around 13.5 nm from 4p⁶4dⁿ – 4p⁵4dⁿ⁺¹ + 4dⁿ⁻¹4f ionic state transitions.
History and economic impact
In the 1960s, visible light was used for the production of integrated circuits, with wavelengths as small as 435 nm (mercury "g line").
Later, ultraviolet (UV) light was used, at first with a wavelength of 365 nm (mercury "i line"), then with excimer wavelengths, first of 248 nm (krypton fluoride laser), then 193 nm (argon fluoride laser), which was called deep UV.
The next step, going even smaller, was called extreme UV, or EUV. The EUV technology was considered impossible by many.
EUV light is absorbed by glass and air, so instead of using lenses to focus the beams of light as done previously, mirrors in vacuum would be needed. A reliable production of EUV was also problematic. Then, leading producers of steppers Canon and Nikon stopped development, and some predicted the end of Moore's law.
In 1991, scientists at Bell Labs published a paper demonstrating the possibility of using a wavelength of 13.8 nm for the so-called soft X-ray projection lithography.
To address the challenge of EUV lithography, researchers at Lawrence Livermore National Laboratory, Lawrence Berkeley National Laboratory, and Sandia National Laboratories were funded in the 1990s to perform basic research into the technical obstacles. The results of this successful effort were disseminated via a public/private partnership Cooperative R&D Agreement (CRADA) with the invention and rights wholly owned by the US government, but licensed and distributed under approval by DOE and Congress. The CRADA consisted of a consortium of private companies and the Labs, manifested as an entity called the Extreme Ultraviolet Limited Liability Company (EUV LLC).
Intel, Canon, and Nikon (leaders in the field at the time), as well as the Dutch company ASML and Silicon Valley Group (SVG) all sought licensing. Congress denied the Japanese companies the necessary permission, as they were perceived as strong technical competitors at the time and should not benefit from taxpayer-funded research at the expense of American companies. In 2001 SVG was acquired by ASML, leaving ASML as the sole benefactor of the critical technology.
By 2018, ASML succeeded in deploying the intellectual property from the EUV-LLC after several decades of developmental research, with incorporation of European-funded EUCLIDES (Extreme UV Concept Lithography Development System) and long-standing partner German optics manufacturer ZEISS and synchrotron light source supplier Oxford Instruments. This led MIT Technology Review to name it "the machine that saved Moore's law". The first prototype in 2006 produced one wafer in 23 hours. As of 2022, a scanner produces up to 200 wafers per hour. The scanner uses Zeiss optics, which that company calls "the most precise mirrors in the world", produced by locating imperfections and then knocking off individual molecules with techniques such as ion beam figuring.
This made the once-small company ASML the world leader in the production of scanners and a monopolist in this cutting-edge technology, and resulted in a record turnover of 27.4 billion euros in 2021, dwarfing its competitors Canon and Nikon, who were denied IP access. Because it is such a key technology for development in many fields, the United States, as licensor, pressured Dutch authorities not to sell these machines to China. ASML has followed the guidelines of Dutch export controls and until further notice will have no authority to ship the machines to China.
Along with multiple patterning, EUV has paved the way for higher transistor densities, allowing the production of higher-performance processors. Smaller transistors also require less power to operate, resulting in more energy-efficient electronics.
Market growth projection
According to a report by Pragma Market Research, the global extreme ultraviolet (EUV) lithography market is projected to grow from US$8,957.8 million in 2024 to US$17,350 million by 2030, at a compound annual growth rate (CAGR) of 11.7%. This significant growth reflects the rising demand for miniaturized electronics in various sectors, including smartphones, artificial intelligence, and high-performance computing.
Fab tool output
The number of EUV steppers a fab requires depends on the number of layers in the design that require EUV and the desired throughput of the fab, assuming 24-hour-per-day operation; a rough sizing calculation is sketched below.
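The sketch below illustrates how these quantities relate. All input values (wafer starts per month, EUV layer count, scanner throughput, and availability) are illustrative assumptions rather than figures from this article.

# Rough estimate of the number of EUV scanners a fab needs.
# All parameter values below are illustrative assumptions.
import math

wafer_starts_per_month = 30000      # assumed wafer starts per month
euv_layers_per_wafer = 15           # assumed number of EUV-exposed layers
wafers_per_hour = 150               # assumed scanner throughput (wafers/hour)
availability = 0.80                 # assumed fraction of time the tool is productive

exposures_per_month = wafer_starts_per_month * euv_layers_per_wafer
hours_per_month = 24 * 30
capacity_per_tool = wafers_per_hour * hours_per_month * availability  # exposures per tool per month
tools_needed = math.ceil(exposures_per_month / capacity_per_tool)

print(f"EUV exposures per month: {exposures_per_month:,}")
print(f"Capacity per tool per month: {capacity_per_tool:,.0f} exposures")
print(f"Scanners required: {tools_needed}")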
Masks
EUV photomasks work by reflecting light, which is achieved by using multiple alternating layers of molybdenum and silicon. This is in contrast to conventional photomasks which work by blocking light using a single chromium layer on a quartz substrate. An EUV mask consists of 40–50 alternating silicon and molybdenum layers; this is a multilayer which acts to reflect the extreme ultraviolet light through Bragg diffraction; the reflectance is a strong function of incident angle and wavelength, with longer wavelengths reflecting more near normal incidence and shorter wavelengths reflecting more away from normal incidence. The multilayer may be protected by a thin ruthenium layer, called a capping layer. The pattern is defined in a tantalum-based absorbing layer over the capping layer.
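The reflection can be approximated with the first-order Bragg condition. The sketch below neglects refraction inside the layers (the actual Mo/Si period is closer to 7 nm once refraction is included); it estimates the bilayer period for 13.5 nm light at the nominal 6° angle of incidence and shows how the peak reflected wavelength shifts with angle, as described above.

# Bragg-mirror approximation for a Mo/Si multilayer (refraction neglected).
import math

wavelength_nm = 13.5

def multilayer_period(wavelength_nm, angle_from_normal_deg):
    # First-order Bragg condition near normal incidence: wavelength = 2 * d * cos(theta)
    theta = math.radians(angle_from_normal_deg)
    return wavelength_nm / (2 * math.cos(theta))

d = multilayer_period(wavelength_nm, 6.0)
print(f"Approximate bilayer period at 6 deg incidence: {d:.2f} nm")

# Peak reflected wavelength vs. angle of incidence for a fixed period:
for angle in (0.0, 6.0, 10.0, 15.0):
    peak = 2 * d * math.cos(math.radians(angle))
    print(f"angle {angle:4.1f} deg -> peak wavelength {peak:.2f} nm")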
Blank photomasks are mainly made by two companies: AGC Inc. and Hoya Corporation. Ion-beam deposition equipment mainly made by Veeco is often used to deposit the multilayer. A blank photomask is covered with photoresist, which is then baked (solidified) in an oven, and later the pattern is defined on the photoresist using maskless lithography with an electron beam. This step is called exposure. The exposed photoresist is developed (removed), and the unprotected areas are etched. The remaining photoresist is then removed. Masks are then inspected and later repaired using an electron beam. Etching must be done only in the absorbing layer and thus there is a need to distinguish between the capping and the absorbing layer, which is known as etch selectivity and is unlike etching in conventional photomasks, which only have one layer critical to their function.
Tool
An EUV tool (EUV photolithography machine) has a laser-driven tin (Sn) plasma light source and reflective optics comprising multilayer mirrors, all contained within a hydrogen gas ambient. The hydrogen keeps the EUV collector mirror (the first mirror in the source, which collects EUV emitted over a large angular range, ~2π sr, from the Sn plasma) free of Sn deposition. Specifically, the hydrogen buffer gas in the EUV source chamber or vessel decelerates, or possibly pushes back, Sn ions and Sn debris traveling toward the EUV collector (collector protection), and enables the chemical reaction Sn(s) + 4H(g) → SnH4(g), which removes Sn deposited on the collector in the form of SnH4 gas (collector reflectivity restoration).
EUVL is a significant departure from the deep-ultraviolet lithography standard. All matter absorbs EUV radiation; hence, EUV lithography requires a vacuum. All optical elements, including the photomask, must use defect-free molybdenum/silicon (Mo/Si) multilayers (consisting of 50 Mo/Si bilayers, whose theoretical reflectivity limit at 13.5 nm is ~75%) that act to reflect light by means of interlayer wave interference; each of these mirrors absorbs around 30% of the incident light, so mirror temperature control is important.
EUVL systems, as of 2002-2009, contain at least two condenser multilayer mirrors, six projection multilayer mirrors and a multilayer object (mask). Since the mirrors absorb 96% of the EUV light, the ideal EUV source needs to be much brighter than its predecessors. EUV source development has focused on plasmas generated by laser or discharge pulses. The mirror responsible for collecting the light is directly exposed to the plasma and is vulnerable to damage from high-energy ions and other debris such as tin droplets, which require the costly collector mirror to be replaced every year.
Resource requirements
The required utility resources are significantly larger for EUV compared to 193 nm immersion, even with two exposures using the latter. At the 2009 EUV Symposium, Hynix reported that the wall plug efficiency was ~0.02% for EUV, i.e., to get 200 watts at intermediate focus for 100 wafers per hour, one would require 1 megawatt of input power, compared to 165 kilowatts for an ArF immersion scanner, and that even at the same throughput, the footprint of the EUV scanner was ~3× the footprint of an ArF immersion scanner, resulting in productivity loss. Additionally, to confine ion debris, a superconducting magnet may be required.
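The 1 megawatt figure follows directly from the reported numbers, assuming "wall plug efficiency" here means optical power at intermediate focus divided by electrical input power:

# Back-of-envelope check of the wall-plug power figure reported above.
power_at_intermediate_focus_W = 200
wall_plug_efficiency = 0.0002          # ~0.02% as reported
input_power_W = power_at_intermediate_focus_W / wall_plug_efficiency
print(f"Required electrical input: {input_power_W / 1e6:.1f} MW")   # -> 1.0 MW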
A typical EUV tool weighs nearly 200 tons and costs around 180 million USD.
EUV tools consume at least 10× more energy than immersion tools.
Summary of key features
The following table summarizes key differences between EUV systems in development and ArF immersion systems which are widely used in production today:
The different degrees of resolution among the 0.33 NA tools are due to the different illumination options. Despite the potential of the optics to reach sub-20 nm resolution, secondary electrons in resist practically limit the resolution to around 20 nm (more on this below).
Light source power, throughput, and uptime
Neutral atoms or condensed matter cannot emit EUV radiation. Ionization must precede EUV emission in matter. The thermal production of multicharged positive ions is only possible in a hot dense plasma, which itself strongly absorbs EUV. As of 2025, the established EUV light source is a laser-pulsed tin plasma. The ions absorb the EUV light they emit and are easily neutralized by electrons in the plasma to lower charge states, which produce light mainly at other, unusable wavelengths, resulting in a much reduced efficiency of light generation for lithography at higher plasma power density.
The throughput is tied to the source power, divided by the dose. A higher dose requires a slower stage motion (lower throughput) if pulse power cannot be increased.
EUV collector reflectivity degrades ~0.1–0.3% per billion 50 kHz pulses (~10% in ~2 weeks), leading to loss of uptime and throughput, while even for the first few billion pulses (within one day), there is still 20% (±10%) fluctuation. This could be due to the accumulating Sn residue mentioned above which is not completely cleaned off. On the other hand, conventional immersion lithography tools for double-patterning provide consistent output for up to a year.
The NXE:3400B illuminator features a smaller pupil fill ratio (PFR), down to 20%, without transmission loss. PFR is maximized, and greater than 0.2, around a metal pitch of 45 nm.
Due to the use of EUV mirrors which also absorb EUV light, only a small fraction of the source light is finally available at the wafer. There are 4 mirrors used for the illumination optics and 6 mirrors for the projection optics. The EUV mask or reticle is itself an additional mirror. With 11 reflections, only ~2% of the EUV source light is available at the wafer.
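The ~2% figure is consistent with a loss of roughly 30% per reflection; a minimal check, assuming ~70% reflectivity for each multilayer mirror:

# Fraction of source light reaching the wafer after 11 reflections,
# assuming ~70% reflectivity per multilayer mirror (approximate).
reflectivity = 0.70
mirrors = 4 + 6 + 1          # illumination optics + projection optics + mask
transmission = reflectivity ** mirrors
print(f"Overall optical transmission: {transmission:.1%}")   # ~2%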
The throughput is determined by the EUV resist dose, which in turn depends on the required resolution. A dose of 40 mJ/cm2 is expected to be maintained for adequate throughput.
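The relationship between dose, power, and throughput can be illustrated with a simple estimate. In the sketch below, the power at the wafer, the overhead per wafer, and the exposed area are assumed values chosen only for illustration (the power value is roughly consistent with a 250 W source and ~2% optical transmission).

# Illustrative throughput estimate from resist dose and power at the wafer.
# All numbers are assumptions for the sake of the calculation.
import math

dose_mJ_cm2 = 40.0            # resist dose, as discussed above
power_at_wafer_W = 5.0        # assumed usable EUV power at the wafer plane
wafer_diameter_mm = 300
overhead_s_per_wafer = 10.0   # assumed stage/handling overhead per wafer

wafer_area_cm2 = math.pi * (wafer_diameter_mm / 10 / 2) ** 2   # ~707 cm^2
energy_per_wafer_J = dose_mJ_cm2 / 1000 * wafer_area_cm2       # ~28 J
exposure_time_s = energy_per_wafer_J / power_at_wafer_W
wafers_per_hour = 3600 / (exposure_time_s + overhead_s_per_wafer)
print(f"Exposure time per wafer: {exposure_time_s:.1f} s")
print(f"Estimated throughput: {wafers_per_hour:.0f} wafers/hour")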
Tool uptime
The EUV light source limits tool uptime besides throughput. In a two-week period, for example, over seven hours downtime may be scheduled, while total actual downtime including unscheduled issues could easily exceed a day. A dose error over 2% warrants tool downtime.
The wafer exposure throughput steadily expanded to around 1,000 wafers per day per system over the 2019–2022 period, which still indicates substantial idle time, while at the same time more than 120 wafers per day were being run, on average, on a number of multipatterned EUV layers per EUV wafer.
Comparison to other lithography light sources
EUV (10–121 nm) is the band longer than X-rays (0.1–10 nm) and shorter than the hydrogen Lyman-alpha line.
While state-of-the-art 193 nm ArF excimer lasers offer intensities of 200 W/cm2, lasers for producing EUV-generating plasmas need to be much more intense, on the order of 10^11 W/cm2. A state-of-the-art ArF immersion lithography 120 W light source requires no more than 40 kW electrical power, while EUV sources are targeted to exceed 40 kW.
The optical power target for EUV lithography is at least 250 W, while for other conventional lithography sources, it is much less. For example, immersion lithography light sources target 90 W, dry ArF sources 45 W, and KrF sources 40 W. High-NA EUV sources are expected to require at least 500 W.
EUV-specific optical issues
Reflective optics
A fundamental aspect of EUVL tools, resulting from the use of reflective optics, is the off-axis illumination (at an angle of 6°, in a different direction at different positions within the illumination slit) on a multilayer mask (reticle). This leads to shadowing effects and an asymmetry in the diffraction pattern that degrade pattern fidelity in various ways, as described below. For example, one side (behind the shadow) would appear brighter than the other (within the shadow).
The behavior of light rays within the plane of reflection (affecting horizontal lines) is different from the behavior of light rays out of the plane of reflection (affecting vertical lines). Most conspicuously, identically sized horizontal and vertical lines on the EUV mask are printed at different sizes on the wafer.
The combination of the off-axis asymmetry and the mask shadowing effect leads to a fundamental inability of two identical features even in close proximity to be in focus simultaneously. One of EUVL's key issues is the asymmetry between the top and bottom line of a pair of horizontal lines (the so-called "two-bar"). Some ways to partly compensate are the use of assist features as well as asymmetric illumination.
An extension of the two-bar case to a grating consisting of many horizontal lines shows similar sensitivity to defocus. It is manifest in the critical dimension (CD) difference between the top and bottom edge lines of the set of 11 horizontal lines.
Reflection also partially polarizes the EUV light, which favors the imaging of lines perpendicular to the plane of reflection.
Pattern shift from defocus (non-telecentricity)
The EUV mask absorber, due to partial transmission, generates a phase difference between the 0th and 1st diffraction orders of a line-space pattern, resulting in image shifts (at a given illumination angle) as well as changes in peak intensity (leading to linewidth changes) which are further enhanced due to defocus. Ultimately, this results in different positions of best focus for different pitches and different illumination angles. Generally, the image shift is balanced out due to illumination source points being paired (each on opposite sides of the optical axis). However, the separate images are superposed and the resulting image contrast is degraded when the individual source image shifts are large enough. The phase difference ultimately also determines the best focus position.
The multilayer is also responsible for image shifting due to phase shifts from diffracted light within the multilayer itself. This is inevitable due to light passing twice through the mask pattern.
The use of reflection causes wafer exposure position to be extremely sensitive to the reticle flatness and the reticle clamp. Reticle clamp cleanliness is therefore required to be maintained. Small (milliradian-scale) deviations in the local slope of the mask, coupled with wafer defocus, translate into image placement errors. More significantly, mask defocus has been found to result in large overlay errors. In particular, for a 10 nm node metal 1 layer (including 48 nm, 64 nm, 70 nm pitches, isolated, and power lines), the uncorrectable pattern placement error was 1 nm for 40 nm mask z-position shift. This is a global pattern shift of the layer with respect to previously defined layers. However, features at different locations will also shift differently due to different local deviations from mask flatness, e.g., from defects buried under the multilayer. It can be estimated that the contribution of mask non-flatness to overlay error is roughly 1/40 times the peak-to-valley thickness variation. With the blank peak-to-valley spec of 50 nm, ~1.25 nm image placement error is possible. Blank thickness variations up to 80 nm also contribute, which lead to up to 2 nm image shift.
The off-axis illumination of the reticle is also the cause of non-telecentricity in wafer defocus, which consumes most of the 1.4 nm overlay budget of the NXE:3400 EUV scanner even for design rules as loose as 100 nm pitch. The worst uncorrectable pattern placement error for a 24 nm line was about 1.1 nm, relative to an adjacent 72 nm power line, per 80 nm wafer focus position shift at a single slit position; when across-slit performance is included, the worst error is over 1.5 nm in the wafer defocus window. In 2017, an actinic microscope mimicking a 0.33 NA EUV lithography system with 0.2/0.9 quasar 45 illumination showed that an 80 nm pitch contact array shifted −0.6 to 1.0 nm while a 56 nm pitch contact array shifted −1.7 to 1.0 nm relative to a horizontal reference line, within a ±50 nm defocus window.
Wafer defocus also leads to image placement errors due to deviations from local mask flatness. If the local slope is indicated by an angle α, the image is projected to be shifted in a 4× projection tool by 8α·DOF, where DOF is the depth of focus. For a depth of focus of 100 nm, a small local deviation from flatness of 2.5 mrad (0.14°) can lead to a pattern shift of 2 nm.
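A quick numerical check of this relation, using the numbers quoted above:

# Image shift at the wafer from a local mask slope error in a 4x reduction tool:
# shift = 8 * alpha * DOF, with alpha in radians and DOF the wafer defocus.
alpha_rad = 2.5e-3   # 2.5 mrad local slope deviation
dof_nm = 100.0       # depth of focus / wafer defocus
shift_nm = 8 * alpha_rad * dof_nm
print(f"Pattern shift: {shift_nm:.1f} nm")   # -> 2.0 nm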
Simulations as well as experiments have shown that pupil imbalances in EUV lithography can result in pitch-dependent pattern placement errors. Since the pupil imbalance changes with EUV collector mirror aging or contamination, such placement errors may not be stable over time. The situation is specifically challenging for logic devices, where multiple pitches have critical requirements at the same time. The issue is ideally addressed by multiple exposures with tailored illuminations.
Slit position dependence
The direction of illumination is also highly dependent on slit position, essentially rotated azimuthally. Nanya Technology and Synopsys found that horizontal vs. vertical bias changed across the slit with dipole illumination. The rotating plane of incidence (azimuthal range within −25° to 25°) is confirmed in the SHARP actinic review microscope at CXRO, which mimics the optics of EUV projection lithography systems.

The reason for this is that a mirror is used to transform straight rectangular fields into arc-shaped fields. In order to preserve a fixed plane of incidence, the reflection from the previous mirror would be at a different angle with the surface for a different slit position; this causes non-uniformity of reflectivity. To preserve uniformity, rotational symmetry with a rotating plane of incidence is used. More generally, so-called "ring-field" systems reduce aberrations by relying on the rotational symmetry of an arc-shaped field derived from an off-axis annulus. This is preferred, as reflective systems must use off-axis paths, which aggravate aberrations. Hence identical die patterns within different halves of the arc-shaped slit would require different OPC. This renders them uninspectable by die-to-die comparison, as they are no longer truly identical dies.

For pitches requiring dipole, quadrupole, or hexapole illumination, the rotation also causes a mismatch with the same pattern layout at a different slit position, i.e., edge vs. center. Even with annular or circular illumination, the rotational symmetry is destroyed by the angle-dependent multilayer reflectance described above. Although the azimuthal angle range is about ±20° (field data indicated over 18°) on 0.33 NA scanners, at 7 nm design rules (36–40 nm pitch) the tolerance for illumination can be ±15°, or even less. Annular illumination nonuniformity and asymmetry also significantly impact the imaging. Newer systems have azimuthal angle ranges going up to ±30°. On 0.33 NA systems, 30 nm pitch and lower already suffer sufficient reduction of pupil fill to significantly affect throughput.
The trend toward larger incident angles for pitch-dependent dipole illumination across the slit does not affect horizontal-line shadowing much, but vertical-line shadowing does increase going from the center to the edge of the slit. In addition, higher-NA systems may offer only limited relief from shadowing, as they target tighter pitches.
The slit position dependence is particularly difficult for the tilted patterns encountered in DRAM. Besides the more complicated effects due to shadowing and pupil rotation, tilted edges are converted to a stair shape, which may be distorted by OPC. In fact, 32 nm pitch DRAM patterned by EUV is expected to stretch to a cell area of at least 9F2, where F is the active-area half-pitch (traditionally, it had been 6F2). With a 2-D self-aligned double-patterning active-area cut, the cell area is still lower, at 8.9F2.
Aberrations, originating from deviations of optical surfaces from subatomic (<0.1 nm) specifications as well as thermal deformations and possibly including polarized reflectance effects, are also dependent on slit position, as will be further discussed below with regard to source-mask optimization (SMO). The thermally induced aberrations are expected to exhibit differences among different positions across the slit, corresponding to different field positions, as each position encounters different parts of the deformed mirrors. Ironically, the use of substrate materials with high thermal and mechanical stability makes it more difficult to compensate for wavefront errors.
In combination with the range of wavelengths, the rotated plane of incidence aggravates the already severe stochastic impact on EUV imaging.
Wavelength bandwidth (chromatic aberration)
Unlike deep ultraviolet (DUV) lithography sources, which are based on excimer lasers, EUV plasma sources produce light across a broad range of wavelengths, roughly spanning a 2% FWHM bandwidth near 13.5 nm (13.36–13.65 nm at 50% power).
Though the EUV spectrum is not completely monochromatic, nor even as spectrally pure as DUV laser sources, the working wavelength has generally been taken to be 13.5 nm. In actuality, the reflected power is distributed mostly in the 13.3-13.7 nm range. The bandwidth of EUV light reflected by a multilayer mirror used for EUV lithography is over +/-2% (>270 pm); the phase changes due to wavelength changes at a given illumination angle may be calculated
and compared to the aberration budget. Wavelength dependence of reflectance also affects the apodization, or illumination distribution across the pupil (for different angles); different wavelengths effectively 'see' different illuminations as they are reflected differently by the multilayer of the mask. This effective source illumination tilt can lead to large image shifts due to defocus. Conversely, the peak reflected wavelength varies across the pupil due to different incident angles. This is aggravated when the angles span a wide radius, e.g., annular illumination. The peak reflectance wavelength increases for smaller incident angles. Aperiodic multilayers have been proposed to reduce the sensitivity at the cost of lower reflectivity but are too sensitive to random fluctuations of layer thicknesses, such as from thickness control imprecision or interdiffusion.
A narrower bandwidth would increase sensitivity to mask absorber and buffer thickness on the 1 nm scale.
Flare
Flare is the presence of background light originating from scattering off of surface features which are not resolved by the light. In EUV systems, this light can be EUV or out-of-band (OoB) light that is also produced by the EUV source. The OoB light adds the complication of affecting the resist exposure in ways other than accounted for by the EUV exposure. OoB light exposure may be alleviated by a layer coated above the resist, as well as 'black border' features on the EUV mask. However, the layer coating inevitably absorbs EUV light, and the black border adds EUV mask processing cost.
Line tip effects
A key challenge for EUV is the counter-scaling behavior of the line tip-to-tip (T2T) distance as half-pitch (hp) is scaled down. This is in part due to lower image contrast for the binary masks used in EUV lithography, which is not encountered with the use of phase shift masks in immersion lithography. The rounding of the corners of the line end leads to line end shortening, and this is worse for binary masks. The use of phase-shift masks in EUV lithography has been studied but encounters difficulties from phase control in thin layers as well as the bandwidth of the EUV light itself. More conventionally, optical proximity correction (OPC) is used to address the corner rounding and line-end shortening. In spite of this, it has been shown that the tip-to-tip resolution and the line tip printability are traded off against each other, being effectively CDs of opposite polarity.
In unidirectional metal layers, tip-to-tip spacing is one of the more severe issues for single exposure patterning. For the 40 nm pitch vertical lines, an 18 nm nominal tip-to-tip drawn gap resulted in an actual tip-to-tip distance of 29 nm with OPC, while for 32 nm pitch horizontal lines, the tip-to-tip distance with a 14 nm nominal gap went to 31 nm with OPC. These actual tip-to-tip distances define a lower limit of the half-pitch of the metal running in the direction perpendicular to the tip. In this case, the lower limit is around 30 nm. With further optimization of the illumination (discussed in the section on source-mask optimization), the lower limit can be further reduced to around 25 nm.
For larger pitches, where conventional illumination can be used, the line tip-to-tip distance is generally larger. For the 24 nm half-pitch lines, with a 20 nm nominally drawn gap, the distance was actually 45 nm, while for 32 nm half-pitch lines, the same nominal gap resulted in a tip-to-tip distance of 34 nm. With OPC, these become 39 nm and 28 nm for 24 nm half-pitch and 32 nm half-pitch, respectively.
Enhancement opportunities for EUV patterning
Assist features
Assist features are often used to help balance asymmetry from non-telecentricity at different slit positions, due to different illumination angles, starting at the 7 nm node, where the pitch is ~ 41 nm for a wavelength ~13.5 nm and NA=0.33, corresponding to k1 ~ 0.5. However, the asymmetry is reduced but not eliminated, since the assist features mainly enhance the highest spatial frequencies, whereas intermediate spatial frequencies, which also affect feature focus and position, are not much affected. The coupling between the primary image and the self images is too strong for the asymmetry to be eliminated by assist features; only asymmetric illumination can achieve this. Assist features may also get in the way of access to power/ground rails. Power rails are expected to be wider, which also limits the effectiveness of using assist features, by constraining the local pitch. Local pitches between 1× and 2× the minimum pitch forbid assist feature placement, as there is simply no room to preserve the local pitch symmetry. In fact, for the application to the two-bar asymmetry case, the optimum assist feature placement may be less than or exceed the two-bar pitch. Depending on the parameter to be optimized (process window area, depth of focus, exposure latitude), the optimum assist feature configuration can be very different, e.g., pitch between assist feature and bar being different from two-bar pitch, symmetric or asymmetric, etc..
At pitches smaller than 58 nm, there is a tradeoff between depth of focus enhancement and contrast loss by assist feature placement. Generally, there is still a focus-exposure tradeoff as the dose window is constrained by the need to have the assist features not print accidentally.
An additional concern comes from shot noise; sub-resolution assist features (SRAFs) cause the required dose to be lower, so as not to print the assist features accidentally. This results in fewer photons defining smaller features (see discussion in section on shot noise).
As SRAFs are smaller features than primary features and are not supposed to receive doses high enough to print, they are more susceptible to stochastic dose variations causing printing errors; this is particularly prohibitive for EUV, where phase-shift masks may need to be used.
Source-mask optimization
Due to the effects of non-telecentricity, standard illumination pupil shapes, such as disc or annular, are not sufficient to be used for feature sizes of ~20 nm or below (10 nm node and beyond). Instead certain parts of the pupil (often over 50%) must be asymmetrically excluded. The parts to be excluded depend on the pattern. In particular, the densest allowed lines need to be aligned along one direction and prefer a dipole shape. For this situation, double exposure lithography would be required for 2D patterns, due to the presence of both X- and Y-oriented patterns, each requiring its own 1D pattern mask and dipole orientation. There may be 200–400 illuminating points, each contributing its weight of the dose to balance the overall image through focus. Thus the shot noise effect (to be discussed later) critically affects the image position through focus, in a large population of features.
Double- or multiple-patterning would also be required if a pattern consists of sub-patterns which require significantly different optimized illuminations, due to different pitches, orientations, shapes, and sizes.
Impact of slit position and aberrations
Largely due to the slit shape, and the presence of residual aberrations, the effectiveness of SMO varies across slit position. At each slit position, there are different aberrations and different azimuthal angles of incidence leading to different shadowing. Consequently, there could be uncorrected variations across the slit for aberration-sensitive features, which may not be obviously seen with regular line-space patterns. At each slit position, optical proximity correction (OPC), including the assist features mentioned above, may also be applied to address the aberrations, but the corrections feed back into the illumination specification, since the benefits differ for different illumination conditions. This would necessitate the use of different source-mask combinations at each slit position, i.e., multiple mask exposures per layer.
The above-mentioned chromatic aberrations, due to mask-induced apodization, also lead to inconsistent source-mask optimizations for different wavelengths.
Pitch-dependent focus windows
The best focus for a given feature size varies as a strong function of pitch, polarity, and orientation under a given illumination. At 36 nm pitch, horizontal and vertical darkfield features have more than 30 nm difference of focus. The 34 nm pitch and 48 nm pitch features have the largest difference of best focus regardless of feature type. In the 48–64 nm pitch range, the best focus position shifts roughly linearly as a function of pitch, by as much as 10–20 nm. For the 34–48 nm pitch range, the best focus position shifts roughly linearly in the opposite direction as a function of pitch. This can be correlated with the phase difference between the zero and first diffraction orders. Assist features, if they can fit within the pitch, were found not to reduce this tendency much for a range of intermediate pitches, or even worsened it for the case of 18–27 nm and quasar illumination. 50 nm contact holes on 100 nm and 150 nm pitches had best focus positions separated by roughly 25 nm; smaller features are expected to be worse. Contact holes in the 48–100 nm pitch range showed a 37 nm best focus range. The best focus position vs. pitch is also dependent on the resist. Critical layers often contain lines at one minimum pitch of one polarity, e.g., darkfield trenches, in one orientation, e.g., vertical, mixed with spaces of the other polarity of the other orientation. This often magnifies the best focus differences, and challenges the tip-to-tip and tip-to-line imaging.
Reduction of pupil fill
A consequence of SMO and shifting focus windows has been the reduction of pupil fill. In other words, the optimum illumination is necessarily an optimized overlap of the preferred illuminations for the various patterns that need to be considered. This leads to lower pupil fill providing better results. However, throughput is affected below 20% pupil fill due to absorption.
Phase shift masks
A commonly touted advantage of EUV has been the relative ease of lithography, as indicated by the k1 ratio, i.e., the feature size multiplied by the numerical aperture and divided by the wavelength. An 18 nm metal linewidth has a k1 of 0.44 for 13.5 nm wavelength and 0.33 NA, for example. For k1 approaching 0.5, some weak resolution enhancement, including attenuated phase-shift masks, has been essential to production at the ArF laser wavelength (193 nm), whereas this resolution enhancement is not available for EUV. In particular, 3D mask effects, including scattering at the absorber edges, distort the desired phase profile. Also, the phase profile is effectively derived from the plane-wave spectrum reflected from the multilayer through the absorber rather than from the incident plane wave. Without absorbers, near-field distortion also occurs at an etched multilayer sidewall due to the oblique-incidence illumination; some light traverses only a limited number of bilayers near the sidewall. Additionally, the different polarizations (TE and TM) have different phase shifts.

Fundamentally, a chromeless phase shift mask enables pitch splitting by suppression of the zeroth diffracted order on the mask, but fabricating a high-quality phase shift mask for EUV is not a trivial task. One possible way to achieve the same effect is through spatial filtering at the Fourier plane of the mask pattern: in a demonstration at Lawrence Berkeley National Lab, the zeroth-order light is blocked by a central obscuration while the +/-1 diffracted orders are captured by the clear aperture, providing a functional equivalent of a chromeless phase shift mask while using a conventional binary amplitude mask.
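The quoted k1 value follows from the definition k1 = linewidth × NA / wavelength; a quick check (the 13 nm and 0.55 NA cases are added only for illustration):

# k1 = critical dimension * numerical aperture / wavelength,
# the conventional measure of lithographic difficulty.
def k1(cd_nm, na, wavelength_nm=13.5):
    return cd_nm * na / wavelength_nm

print(f"18 nm line, 0.33 NA: k1 = {k1(18, 0.33):.2f}")   # ~0.44, as quoted above
print(f"13 nm line, 0.33 NA: k1 = {k1(13, 0.33):.2f}")   # illustrative
print(f"18 nm line, 0.55 NA: k1 = {k1(18, 0.55):.2f}")   # illustrative high-NA case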
EUV photoresist exposure: the role of electrons
EUV light generates photoelectrons upon absorption by matter. These photoelectrons in turn generate secondary electrons, which slow down before engaging in chemical reactions. At sufficient doses, 40 eV electrons are known to penetrate 180 nm of resist, leading to development. At a dose of 160 μC/cm2, corresponding to 15 mJ/cm2 EUV dose assuming one electron/photon, 30 eV electrons removed 7 nm of PMMA resist after standard development. For a higher 30 eV dose of 380 μC/cm2, equivalent to 36 mJ/cm2 at one electron/photon, 10.4 nm of PMMA resist is removed. These figures indicate the distances the electrons can travel in resist, regardless of direction.
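The quoted equivalence between EUV dose and electron dose can be reproduced by assuming one emitted electron per absorbed 92 eV (13.5 nm) photon; a minimal check:

# Convert an EUV dose (mJ/cm^2) into an equivalent electron dose (uC/cm^2),
# assuming one emitted electron per absorbed 13.5 nm photon (~92 eV).
E_PHOTON_J = 92 * 1.602e-19
E_CHARGE_C = 1.602e-19

def electron_dose_uC_cm2(euv_dose_mJ_cm2):
    photons_per_cm2 = (euv_dose_mJ_cm2 / 1000) / E_PHOTON_J
    return photons_per_cm2 * E_CHARGE_C * 1e6

print(f"15 mJ/cm^2 -> {electron_dose_uC_cm2(15):.0f} uC/cm^2")   # ~163, close to the 160 quoted above
print(f"36 mJ/cm^2 -> {electron_dose_uC_cm2(36):.0f} uC/cm^2")   # ~391, close to the 380 quoted above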
The degree of photoelectron emission from the layer underlying the EUV photoresist has been shown to affect the depth of focus. Unfortunately, hardmask layers tend to increase photoelectron emission, degrading the depth of focus. Electrons from defocused images in the resist can also affect the best focus image.
The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images. The scale length of electron blur itself has a distribution. Intel demonstrated with a rigorous simulation that EUV-released electrons scatter distances larger than 15 nm in EUV resists.
The electron blur is also affected by total internal reflection from the top surface of the resist film.
Effect of underlying layers
Secondary electrons from layers underneath the resist can affect the resist profile as well as pattern collapse. Hence, the selection of both the underlayer and the layer beneath it is an important consideration for EUV lithography. Moreover, the electrons from defocused images can aggravate the stochastic nature of the image.
Contamination effects
Resist outgassing
Due to the high efficiency of absorption of EUV by photoresists, heating and outgassing become primary concerns. One well-known issue is contamination deposition on the resist from ambient or outgassed hydrocarbons, which results from EUV- or electron-driven reactions. Organic photoresists outgas hydrocarbons while metal oxide photoresists outgas water and oxygen and metal (in a hydrogen ambient); the last is uncleanable. The carbon contamination is known to affect multilayer reflectivity while the oxygen is particularly harmful for the ruthenium capping layers (relatively stable under EUV and hydrogen conditions) on the EUV multilayer optics.
Tin redeposition
Atomic hydrogen in the tool chambers is used to clean tin and carbon which deposit on the EUV optical surfaces. Atomic hydrogen is produced by EUV light directly photoionizing H2:
hν + H2 → H+ + H + e−.
Electrons generated in the above reaction may also dissociate H2 to form atomic hydrogen:
e− + H2 → H+ + H + 2e−.
The reaction with tin in the light source (e.g., tin on an optical surface in the source) to form volatile SnH4 (stannane) that can be pumped out from the source proceeds via the reaction
Sn(s) + 4 H(g) → SnH4(g).
The SnH4 can reach the coatings of other EUV optical surfaces, where it redeposits Sn via the reaction
SnH4 → Sn(s) + 2 H2(g).
Redeposition may also occur by other intermediate reactions.
The redeposited Sn might be subsequently removed by atomic-hydrogen exposure. However, overall, the tin cleaning efficiency (the ratio of the removed tin flux from a tin sample to the atomic-hydrogen flux to the tin sample) is less than 0.01%, due to both redeposition and hydrogen desorption, which forms hydrogen molecules at the expense of atomic hydrogen. The tin cleaning efficiency for tin oxide is roughly twice that of tin (with a native oxide layer of ~2 nm). Injecting a small amount of oxygen into the light source may improve the tin cleaning rate.
Hydrogen blistering
Hydrogen also reacts with metal-containing compounds to reduce them to metal, and diffuses through the silicon and molybdenum in the multilayer, eventually causing blistering. Capping layers that mitigate hydrogen-related damage often reduce reflectivity to well below 70%. Capping layers are known to be permeable to ambient gases including oxygen and hydrogen, as well as susceptible to the hydrogen-induced blistering defects. Hydrogen may also react with the capping layer, resulting in its removal.
Tin spitting
Hydrogen can penetrate molten tin (Sn), creating hydrogen bubbles inside it. If a bubble reaches the molten tin surface, it bursts and ejects tin over a large angular range. This phenomenon is called tin spitting and is one of the sources of EUV collector contamination.
Resist erosion
Hydrogen also reacts with resists to etch or decompose them. Besides photoresist, hydrogen plasmas can also etch silicon, albeit very slowly.
Membrane
To help mitigate the above effects, the EUV tool introduced in 2017, the NXE:3400B, features a membrane that separates the wafer from the projection optics of the tool, protecting the latter from outgassing from the resist on the wafer. The membrane contains layers which absorb DUV and IR radiation, and transmits 85–90% of the incident EUV radiation. There is, of course, accumulated contamination from wafer outgassing as well as particles in general (although the latter are out of focus, they may still obstruct light).
EUV-induced plasma
EUV lithographic systems operate in a 1–10 Pa hydrogen background gas, in which the EUV light generates a plasma. The plasma is a source of VUV radiation as well as electrons and hydrogen ions. This plasma is known to etch exposed materials.
In 2023, a study supported at TSMC was published which indicated net charging by electrons from the plasma as well as from electron emission. The charging was found to occur even outside the EUV exposure area, indicating that the surrounding area had been exposed to electrons.
Due to chemical sputtering of carbon by the hydrogen plasma, there can be generation of nanoparticles, which can obstruct the EUV resist exposure.
Mask defects
Reducing defects on extreme ultraviolet (EUV) masks is currently one of the most critical issues to be addressed for commercialization of EUV lithography. Defects can be buried underneath or within the multilayer stack or sit on top of the multilayer stack. Mesas or protrusions form on the sputtering targets used for multilayer deposition, and these may fall off as particles during the multilayer deposition. In fact, defects of atomic-scale height (0.3–0.5 nm) with 100 nm FWHM can still print, producing a 10% CD impact. IBM and Toppan reported at Photomask Japan 2015 that smaller defects, e.g., 50 nm size, can have 10% CD impact even with 0.6 nm height, yet remain undetectable.
Furthermore, the edge of a phase defect will further reduce reflectivity by more than 10% if its deviation from flatness exceeds 3 degrees, due to the deviation from the target angle of incidence of 84 degrees with respect to the surface. Even if the defect height is shallow, the edge still deforms the overlying multilayer, producing an extended region where the multilayer is sloped. The more abrupt the deformation, the narrower the defect edge extension, the greater the loss in reflectivity.
EUV mask defect repair is also more complicated due to the across-slit illumination variation mentioned above. Due to the varying shadowing sensitivity across the slit, the repair deposition height must be controlled very carefully, being different at different positions across the EUV mask illumination slit.
Multilayer reflectivity random variations
GlobalFoundries and Lawrence Berkeley Labs carried out a Monte Carlo study to simulate the effects of intermixing between the molybdenum (Mo) and silicon (Si) layers in the multilayer that is used to reflect EUV light from the EUV mask. The results indicated high sensitivity to the atomic-scale variations of layer thickness. Such variations could not be detected by wide-area reflectivity measurements but would be significant on the scale of the critical dimension (CD). The local variation of reflectivity could be on the order of 10% for a few nm standard deviation.
Multilayer damage
Multiple EUV pulses at less than 10 mJ/cm2 could accumulate damage to a Ru-capped Mo/Si multilayer mirror optic element. The angle of incidence was 16° or 0.28 rads, which is within the range of angles for a 0.33 NA optical system.
Pellicles
Production EUV tools need a pellicle to protect the mask from contamination. Pellicles are normally expected to protect the mask from particles during transport, entry into or exit from the exposure chamber, as well as the exposure itself. Without pellicles, particle adders would reduce yield, which has not been an issue for conventional optical lithography with 193 nm light and pellicles. However, for EUV, the feasibility of pellicle use is severely challenged, due to the required thinness of the shielding films to prevent excessive EUV absorption. Particle contamination would be prohibitive if pellicles were not stable above 200 W, i.e., the targeted power for manufacturing.
Heating of the EUV mask pellicle (film temperature up to 750 K for 80 W incident power) is a significant concern, due to the resulting deformation and transmission decrease. ASML developed a 70 nm thick polysilicon pellicle membrane, which allows EUV transmission of 82%; however, less than half of the membranes survived expected EUV power levels. SiNx pellicle membranes also failed at 82 W equivalent EUV source power levels. At target 250 W levels, the pellicle is expected to reach 686 degrees Celsius, well over the melting point of aluminum. Alternative materials need to allow sufficient transmission as well as maintain mechanical and thermal stability. However, graphite, graphene or other carbon nanomaterials (nanosheets, nanotubes) are damaged by EUV due to the release of electrons and also too easily etched in the hydrogen cleaning plasma expected to be deployed in EUV scanners. Hydrogen plasmas can also etch silicon as well. A coating helps improve hydrogen resistance, but this reduces transmission and/or emissivity, and may also affect mechanical stability (e.g., bulging).
Wrinkles on pellicles can cause CD nonuniformity due to uneven absorption; this is worse for smaller wrinkles and more coherent illumination, i.e., lower pupil fill.
In the absence of pellicles, EUV mask cleanliness would have to be checked before actual product wafers are exposed, using wafers specially prepared for defect inspection. These wafers are inspected after printing for repeating defects indicating a dirty mask; if any are found, the mask must be cleaned and another set of inspection wafers are exposed, repeating the flow until the mask is clean. Any affected product wafers must be reworked.
TSMC reported starting limited use of its own pellicle in 2019 and continuing to expand afterwards, and Samsung is planning pellicle introduction in 2022.
Hydrogen bulging defects
As discussed above, with regard to contamination removal, hydrogen used in recent EUV systems can penetrate into the EUV mask layers. TSMC indicated in its patent that hydrogen would enter from the mask edge. Once trapped, bulge defects or blisters were produced, which could lead to film peeling. These are essentially the blister defects which arise after a sufficient number of EUV mask exposures in the hydrogen environment. TSMC proposed some means for mitigating hydrogen blistering defects on EUV masks, which may impact productivity.
EUV stochastic issues
EUV lithography is particularly sensitive to stochastic effects. In a large population of features printed by EUV, although the overwhelming majority are resolved, some suffer complete failure to print, e.g. missing holes or bridging lines. A known significant contribution to this effect is the dose used to print. This is related to shot noise, to be discussed further below. Due to the stochastic variations in arriving photon numbers, some areas designated to print actually fail to reach the threshold to print, leaving unexposed defect regions. Some areas may be overexposed, leading to excessive resist loss or crosslinking. The probability of stochastic failure increases exponentially as feature size decreases, and for the same feature size, increasing distance between features also significantly increases the probability. Line cuts which are misshapen are a significant issue due to potential arcing and shorting. Yield requires detection of stochastic failures down to below 1e-12.
The tendency to stochastic defects is worse from defocus over a large pupil fill.
Multiple failure modes may exist for the same population. For example, besides bridging of trenches, the lines separating the trenches may be broken. This can be attributed to stochastic resist loss, from secondary electrons. The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images.
The coexistence of stochastically underexposed and overexposed defect regions leads to a loss of dose window at a certain post-etch defect level between the low-dose and high-dose patterning cliffs. Hence, the resolution benefit from shorter wavelength is lost.
The resist underlayer also plays an important role. This could be due to the secondary electrons generated by the underlayer. Secondary electrons may remove over 10 nm of resist from the exposed edge.
The defect level is on the order of 1,000/mm2. In 2020, Samsung reported that its 5 nm layouts carried risks of process defects and that it had started implementing automated checking and fixing.
Photon shot noise also leads to stochastic edge placement error. The photon shot noise is augmented to some degree by blurring factors such as secondary electrons or acids in chemically amplified resists; when significant the blur also reduces the image contrast at the edge. An edge placement error (EPE) as large as 8.8 nm was measured for a 48 nm pitch EUV-printed metal pattern.
With the natural Poisson distribution due to the random arrival and absorption times of the photons, there is an expected natural dose (photon number) variation of at least several percent 3 sigma, making the exposure process susceptible to stochastic variations. The dose variation leads to a variation of the feature edge position, effectively becoming a blur component. Unlike the hard resolution limit imposed by diffraction, shot noise imposes a softer limit, with the main guideline being the ITRS line width roughness (LWR) spec of 8% (3s) of linewidth. Increasing the dose will reduce the shot noise, but this also requires higher source power.
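The "several percent 3 sigma" figure follows from Poisson statistics of the photon count over a small area. The sketch below uses an assumed 40 mJ/cm2 dose and an assumed 10 nm × 10 nm reference area; both values are illustrative.

# Poisson (shot-noise) estimate of dose variation over a small feature area.
# All parameter values are illustrative assumptions.
import math

dose_mJ_cm2 = 40.0            # incident EUV dose
feature_nm = 10.0             # assumed edge-defining area: 10 nm x 10 nm
E_PHOTON_J = 92 * 1.602e-19   # 13.5 nm photon energy (~92 eV)

area_cm2 = (feature_nm * 1e-7) ** 2
incident_photons = dose_mJ_cm2 / 1000 * area_cm2 / E_PHOTON_J
rel_3sigma = 3 / math.sqrt(incident_photons)
print(f"Incident photons over {feature_nm:.0f} nm x {feature_nm:.0f} nm: {incident_photons:.0f}")
print(f"3-sigma relative photon-count variation: {rel_3sigma:.1%}")
# Only a fraction of these photons is actually absorbed in the resist,
# so the effective variation is larger than this estimate.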
The two issues of shot noise and EUV-released electrons point out two constraining factors: 1) keeping dose high enough to reduce shot noise to tolerable levels, but also 2) avoiding too high a dose due to the increased contribution of EUV-released photoelectrons and secondary electrons to the resist exposure process, increasing the edge blur and thereby limiting the resolution. Aside from the resolution impact, higher dose also increases outgassing and limits throughput, and crosslinking occurs at very high dose levels. For chemically amplified resists, higher dose exposure also increases line edge roughness due to acid generator decomposition.
Even with higher absorption at the same dose, EUV has a larger shot noise concern than the ArF (193 nm) wavelength, mainly because it is applied to thinner resists.
Due to stochastic considerations, the IRDS 2022 lithography roadmap now acknowledges increasing doses for smaller feature sizes. However, an upper limit to how much dose can be increased is imposed by resist loss.
Due to resist thinning with increased dose, EUV stochastic defectivity limits will define a narrow CD or dose window. The thinner resist at higher incident dose reduces absorption, and hence, absorbed dose.
EUV resolution will likely be compromised by stochastic effects. Stochastic defect densities have exceeded 1/cm2, at 36 nm pitch. In 2024, an EUV resist exposure by ASML revealed a missing+bridging 32 nm pitch contact hole defect density floor >0.25/cm2 (177 defects per wafer), made worse with thinner resist. ASML indicated 30 nm pitch would not use direct exposure but double patterning. Intel did not use EUV for 30 nm pitch.
Pupil fill ratio
For pitches less than half-wavelength divided by numerical aperture, dipole illumination is necessary. This illumination fills at most a leaf-shaped area at the edge of the pupil. However, due to 3D effects in the EUV mask, smaller pitches require even smaller portions of this leaf shape. Below 20% of the pupil, the throughput and dose stability begin to suffer. Higher numerical aperture allows a higher pupil fill to be used for the same pitch, but depth of focus is significantly reduced.
A larger pupil fill is more susceptible to stochastic fluctuations from point to point in the pupil.
Use with multiple-patterning
EUV is anticipated to use double-patterning at around 34 nm pitch with 0.33 NA. This resolution is equivalent to '1Y' for DRAM. In 2020, ASML reported that 5 nm M0 layer (30 nm minimum pitch) required double-patterning.
In H2 2018, TSMC confirmed that its 5 nm EUV scheme still used multi-patterning, also indicating that mask count did not decrease from its 7 nm node, which used extensive DUV multi-patterning, to its 5 nm node, which used extensive EUV. EDA vendors also indicated the continued use of multi-patterning flows. While Samsung introduced its own 7 nm process with EUV single-patterning, it encountered severe photon shot noise causing excessive line roughness, which required higher dose, resulting in lower throughput. TSMC's 5 nm node uses even tighter design rules. Samsung indicated smaller dimensions would have more severe shot noise.
In Intel's complementary lithography scheme at 20 nm half-pitch, EUV would be used only in a second line-cutting exposure after a first 193 nm line-printing exposure.
Multiple exposures would also be expected where two or more patterns in the same layer, e.g., different pitches or widths, must use different optimized source pupil shapes. For example, when considering a staggered bar array of 64 nm vertical pitch, changing the horizontal pitch from 64 nm to 90 nm changes the optimized illumination significantly. Source-mask optimization that is based on line-space gratings and tip-to-tip gratings only does not entail improvements for all parts of a logic pattern, e.g., a dense trench with a gap on one side.
In 2020, ASML reported that for the 3 nm node, center-to-center contact/via spacings of 40 nm or less would require double- or triple-patterning for some contact/via arrangements.
For the 24–36 nm metal pitch, it was found that using EUV as a (second) cutting exposure had a significantly wider process window than as a complete single exposure for the metal layer. However, using a second exposure in the LELE approach for double patterning does not get around the vulnerability to stochastic defects.
Multiple exposures of the same mask are also expected for defect management without pellicles, limiting productivity similarly to multiple-patterning.
Self-aligned litho-etch-litho-etch (SALELE) is a hybrid SADP/LELE technique whose implementation has started in 7 nm.
Self-aligned litho-etch-litho-etch (SALELE) has become an accepted form of double-patterning to be used with EUV.
Single-patterning extension: anamorphic high-NA
A return to extended generations of single-patterning would be possible with higher numerical aperture (NA) tools. An NA of 0.45 could require retuning of a few percent. Increasing demagnification could avoid this retuning, but the reduced field size severely affects large patterns (one die per 26 mm × 33 mm field), such as the many-core, multi-billion-transistor 14 nm Xeon chips, by requiring field stitching of two mask exposures.
In 2015, ASML disclosed details of its anamorphic next-generation EUV scanner, with an NA of 0.55. These machines cost around USD 360 million. The demagnification is increased from 4× to 8× only in one direction (in the plane of incidence). However, the 0.55 NA has a much smaller depth of focus than immersion lithography. Also, an anamorphic 0.52 NA tool has been found to exhibit too much CD and placement variability for 5 nm node single exposure and multi-patterning cutting.
The reduction of depth of focus with increasing NA is also a concern, especially in comparison with multi-patterning exposures using 193 nm immersion lithography.
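The scaling of depth of focus with NA can be compared with the common estimate DOF ≈ k2·λ/NA². The sketch below sets k2 = 1 purely for a relative comparison; the listed systems and NA values are illustrative.

# Relative depth-of-focus scaling, DOF ~ k2 * wavelength / NA^2, with k2 set to 1.
systems = {
    "ArF immersion, 193 nm, NA 1.35": (193.0, 1.35),
    "EUV, 13.5 nm, NA 0.33": (13.5, 0.33),
    "High-NA EUV, 13.5 nm, NA 0.55": (13.5, 0.55),
}
for name, (wavelength_nm, na) in systems.items():
    dof_nm = wavelength_nm / na ** 2
    print(f"{name:32s} relative DOF ~ {dof_nm:6.1f} nm")
# The simple paraxial formula understates the DOF of immersion systems
# (the fluid index is not accounted for), so this is a scaling comparison only.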
High-NA EUV tools focus horizontal and vertical lines differently from low-NA systems, due to the different demagnification for horizontal lines.
High-NA EUV tools also suffer from obscuration, which can cause errors in the imaging of certain patterns.
The first high-NA tools are expected at Intel by 2025 at the earliest.
For sub-2nm nodes, high-NA EUV systems will be affected by a host of issues: throughput, new masks, polarization, thinner resists, and secondary electron blur and randomness. Reduced depth of focus requires resist thickness less than 30 nm, which in turn increases stochastic effects, due to reduced photon absorption.
Electron blur is estimated to be at least ~2 nm, which is enough to thwart the benefit of High-NA EUV lithography.
Beyond high-NA, ASML in 2024 announced plans for the development of a hyper-NA EUV tool with an NA beyond 0.55, such as an NA of 0.75 or 0.85. These machines could cost USD 720 million each and are expected to be available in 2030. A problem with Hyper-NA is polarization of the EUV light causing a reduction in image contrast.
Beyond EUV wavelength
A much shorter wavelength (~6.7 nm) would be beyond EUV, and is often referred to as BEUV (beyond extreme ultraviolet). With current technology, BEUV wavelengths would suffer worse shot-noise effects unless a sufficient dose is ensured. (The generally accepted short-wavelength border of the UV range is 10 nm, below which the soft X-ray region begins.)
References
Further reading
Michael Purvis, An Introduction to EUV Sources for Lithography, ASML, STROBE, 2020-09-25.
Igor Fomenkov, EUV Source for Lithography in HVM - performance and prospects, ASML Fellow, Source workshop, Amsterdam, 2019-11-05.
Related links
EUV presents economic challenges
Industry mulls 6.7-nm wavelength EUV
Lithography (microfabrication)
Extreme ultraviolet | Extreme ultraviolet lithography | [
"Chemistry",
"Materials_science"
] | 12,510 | [
"Microtechnology",
"Ultraviolet radiation",
"Extreme ultraviolet",
"Nanotechnology",
"Lithography (microfabrication)"
] |
2,154,436 | https://en.wikipedia.org/wiki/Next-generation%20lithography | Next-generation lithography or NGL is a term used in integrated circuit manufacturing to describe the lithography technologies in development which are intended to replace current techniques. Driven by Moore's law in the semiconductor industries, the shrinking of the chip size and critical dimension continues. The term applies to any lithography method which uses a shorter-wavelength light or beam type than the current state of the art, such as X-ray lithography, electron beam lithography, focused ion beam lithography, and nanoimprint lithography. The term may also be used to describe techniques which achieve finer resolution features from an existing light wavelength.
Many technologies once termed "next generation" have entered commercial production, and open-air photolithography, with visible light projected through hand-drawn photomasks, has gradually progressed to deep-UV immersion lithography using optical proximity correction, inverse lithography technology, off-axis illumination, phase-shift masks, double patterning, and multiple patterning. In the late 2010s, the combination of many such techniques was able to achieve features on the order of 20 nm with the 193 nm-wavelength ArF excimer laser in the 14 nm, 10 nm and 7 nm processes, though at the expense of additional processing steps and therefore higher cost.
13.5 nm extreme ultraviolet (EUV) lithography, long considered a leading candidate for next-generation lithography, began to enter commercial mass-production in 2018. As of 2021, Samsung and TSMC were gradually phasing EUV lithography into their production lines, as it became economical to replace multiple processing steps with single EUV steps. As of the early 2020s, many EUV techniques are still in development and many challenges remain to be solved, positioning EUV lithography as being in transition from "next generation" to "state of the art."
Candidates for next-generation lithography beyond EUV include X-ray lithography, electron beam lithography, focused ion beam lithography, nanoimprint lithography, and quantum lithography. Several of these technologies have experienced periods of popularity, but have remained outcompeted by the continuing improvements in photolithography. Electron beam lithography was most popular during the 1970s, but was replaced in popularity by X-ray lithography during the 1980s and early 1990s, and then by EUV lithography from the mid-1990s to the mid-2000s. Focused ion beam lithography has carved a niche for itself in the area of defect repair. Nanoimprint lithography's popularity is rising, and it is positioned to succeed EUV as the most popular choice for next-generation lithography, due to its inherent simplicity and low cost of operation as well as its success in the LED, hard disk drive and microfluidics sectors.
The rise and fall in popularity of each NGL candidate has largely hinged on its throughput capability and its cost of operation and implementation. Electron beam and nanoimprint lithography are limited mainly by the throughput, while EUV and X-ray lithography are limited by implementation and operation costs. The projection of charged particles (ions or electrons) through stencil masks was also popularly considered in the early 2000s but eventually fell victim to both low throughput and implementation difficulties.
Issues
Fundamental issues
Regardless of whether NGL or photolithography is used, etching of the polymer (resist) is the last step. Ultimately, the quality (roughness) as well as the resolution of this polymer etching limits the inherent resolution of the lithography technique. Next-generation lithography also generally makes use of ionizing radiation, which produces secondary electrons that can limit the effective resolution to more than about 20 nm.
Studies have also found that, for NGL to reach line edge roughness (LER) objectives, ways must be found to control variables such as polymer size, image contrast, and resist contrast.
Market issues
The above-mentioned competition between NGL and the recurring extension of photolithography, where the latter consistently wins, may be more a strategic than a technical matter. If a highly scalable NGL technology were to become readily available, late adopters of leading-edge technology would immediately have the opportunity to leapfrog the current use of advanced but costly photolithography techniques, at the expense of the early adopters of leading-edge technology, who have been the key investors in NGL. While this would level the playing field, it is disruptive enough to the industry landscape that the leading semiconductor companies would probably not want to see it happen.
The following example makes this clearer. Suppose company A manufactures down to 28 nm, while company B manufactures down to 7 nm by extending its photolithography capability with double patterning. If an NGL were deployed for the 5 nm node, both companies would benefit, but company A, currently manufacturing at the 28 nm node, would benefit much more: it could immediately use the NGL for manufacturing at all design rules from 22 nm down to 7 nm (skipping the multiple patterning mentioned above), while company B would only benefit starting at the 5 nm node, having already spent heavily on extending photolithography from its 22 nm process down to 7 nm. The gap between company B, whose customers expect it to advance the leading edge, and company A, whose customers do not expect an equally aggressive roadmap, will continue to widen as NGL is delayed and photolithography is extended at greater and greater cost, making the deployment of NGL less and less attractive strategically for company B. With NGL deployment, customers would also be able to demand lower prices for products made at advanced generations.
This becomes clearer when considering that each resolution enhancement technique applied to photolithography generally extends the capability by only one or two generations. For this reason, the observation that "optical lithography will live forever" will likely hold, as the early adopters of leading-edge technology will never benefit from highly scalable lithography technologies in a competitive environment.
There is therefore great pressure to deploy an NGL as soon as possible, but the NGL ultimately may be realized in the form of photolithography with more efficient multiple patterning, such as directed self-assembly or aggressive cut reduction.
See also
Computational lithography
Nanolithography
Quantum lithography
References
Lithography (microfabrication) | Next-generation lithography | [
"Materials_science"
] | 1,318 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
2,154,534 | https://en.wikipedia.org/wiki/Dehydroascorbic%20acid | Dehydroascorbic acid (DHA) is an oxidized form of ascorbic acid (vitamin C). It is actively imported into the endoplasmic reticulum of cells via glucose transporters. It is trapped therein by reduction back to ascorbic acid by glutathione and other thiols. The (free) chemical radical semidehydroascorbic acid (SDA) also belongs to the group of oxidized ascorbic acids.
Structure and physiology
Top: ascorbic acid (reduced form of vitamin C). Bottom: dehydroascorbic acid (nominal oxidized form of vitamin C).
Although sodium-dependent transporters for vitamin C exist, they are present mainly in specialized cells, whereas glucose transporters, most notably GLUT1, transport DHA into most cells, where recycling back to ascorbic acid generates the necessary enzyme cofactor and intracellular antioxidant (see Transport to mitochondria).
The structure shown here for DHA is the commonly shown textbook structure. This 1,2,3-tricarbonyl is too electrophilic to survive more than a few milliseconds in aqueous solution, however. The actual structure shown by spectroscopic studies is the result of rapid hemiketal formation between the 6-OH and the 3-carbonyl groups. Hydration of the 2-carbonyl is also observed. The lifetime of the stabilized species is commonly said to be about 6 minutes under biological conditions. Destruction results from irreversible hydrolysis of the lactone bond, with additional degradation reactions following. Crystallization of solutions of DHA gives a pentacyclic dimer structure of indefinite stability. Recycling of vitamin C via active transport of DHA into cells, followed by reduction and reuse, mitigates the inability of humans to synthesize it from glucose.
Transport to mitochondria
Vitamin C accumulates in mitochondria, where most of the free radicals are produced, by entering as DHA through the glucose transporter GLUT10. Ascorbic acid protects the mitochondrial genome and membrane.
Transport to the brain
Vitamin C does not pass from the bloodstream into the brain, although the brain is one of the organs that have the greatest concentration of vitamin C. Instead, DHA is transported through the blood–brain barrier via GLUT1 transporters, and then reduced back to ascorbic acid.
Use
Dehydroascorbic acid has been used as a vitamin C dietary supplement.
As a cosmetic ingredient, dehydroascorbic acid is used to enhance the appearance of the skin. It may be used in a process for permanent waving of hair and in a process for sunless tanning of skin.
In a cell culture growth medium, dehydroascorbic acid has been used to assure the uptake of vitamin C into cell types that do not contain ascorbic acid transporters.
As a pharmaceutical agent, some research has suggested that administration of dehydroascorbic acid may confer protection from neuronal injury following an ischemic stroke. The literature contains many reports on the antiviral effects of vitamin C, and one study suggests dehydroascorbic acid has stronger antiviral effects and a different mechanism of action than ascorbic acid. Solutions in water containing ascorbic acid and copper ions and/or peroxide, resulting in rapid oxidation of ascorbic acid to dehydroascorbic acid, have been shown to possess powerful but short-lived antimicrobial, antifungal, and antiviral properties, and have been used to treat gingivitis, periodontal disease, and dental plaque. A pharmaceutical product named Ascoxal is an example of such a solution used as a mouth rinse as an oral mucolytic and prophylactic agent against gingivitis. Ascoxal solution has also been tested with positive results as a treatment for recurrent mucocutaneous herpes, and as a mucolytic agent in acute and chronic pulmonary disease such as emphysema, bronchitis, and asthma by aerosol inhalation.
References
Further reading
External links
Organic acids
Vitamin C
Diketones | Dehydroascorbic acid | [
"Chemistry"
] | 898 | [
"Organic acids",
"Acids",
"Redox",
"Oxidizing agents",
"Organic compounds"
] |
2,154,537 | https://en.wikipedia.org/wiki/NanoInk | NanoInk, Inc. was a nanotechnology company headquartered in Skokie, Illinois, with a MEMS fabrication facility in Campbell, California.
Overview
A spin-off of Northwestern University and founded by Northwestern professor Chad Mirkin, NanoInk specialized in nanometer-scale manufacturing and applications development for the life science and semiconductor industries. Dip Pen Nanolithography (DPN) was a patented and proprietary nanofabrication technology marketed as an anti-counterfeiting aid for pharmaceutical products.
Other key applications included nanoscale additive repair and nanoscale rapid prototyping. Located in the Illinois Science and Technology Park, north of Chicago, NanoInk had nearly 400 patents and applications filed worldwide and had licensing agreements with Northwestern University, Stanford University, the University of Illinois at Urbana-Champaign, and the Georgia Institute of Technology.
History
Within seven months of its formation, NanoInk released its first product, the DPN-System-1, which turned any atomic force microscope into a DPN machine.
In February 2013, NanoInk announced it would be shutting down due to insufficient funding when its primary backer, Ann Lurie, decided to pull the plug after investing $150 million over a decade.
See also
Nanosys
References
External links
Official Site
NanoInk Writes its Own Ticket Using Quills on the Nanoscale
Out of Sight, Out of Mind
Protect the Product, Not the Package
Role of nanotechnology in brand protection
Nanotechnology institutions
Companies based in Skokie, Illinois
Nanotechnology companies | NanoInk | [
"Materials_science"
] | 310 | [
"Nanotechnology",
"Nanotechnology institutions",
"Nanotechnology companies"
] |
2,154,567 | https://en.wikipedia.org/wiki/Techreport | Techreport (formerly "The Tech Report") is one of the oldest hardware news and review sites. Techreport specialized in hardware and produced quarterly system build guides at various price points, along with occasional price-versus-performance scatter plots. It has an online community and used to have an active podcast. Some of the site's investigative articles on hardware benchmarking have been cited by other technology news sites such as Anandtech and PC World. Currently, the publication focuses on technology news, informational how-tos, and consumer guides on topics such as software, cryptocurrency, artificial intelligence, VPNs, antivirus software, and more.
The site went through an ownership change and a major redesign in mid-2019, after which its focus and content changed significantly: it no longer specialized in hardware, stopped producing system guides and podcasts, and was no longer centered on computer technology.
In 2023, the editorial team was rebuilt and the content process was overhauled to emphasize content quality, alignment with consumer interests, and the cornerstones of journalistic ethics, namely objectivity, accuracy, and truthfulness. The Techreport website was also redesigned to improve the user experience, while the branding (formerly "The Tech Report", now "Techreport") and logo were updated to a more modern look.
History
Tech Report was founded by Scott Wasson, a Harvard Divinity School graduate, and Andy Brown. Both started by writing at Ars Technica in 1998. The two later decided to launch their website. The site eventually grew into a business enterprise with multiple full-time staff members.
Tech Report was originally located at tech-report.com in 1999. The site was moved to techreport.com in 2003.
On August 20, 2007, a beta for a new site design was posted in the forums for review by the user community. It was later moved to live.
On January 1, 2011, the new site design, TR 3.0, rolled out. It offered a completely new layout and two user-switchable color schemes, blue and white, along with a reduced format for mobile devices.
On December 2, 2015, Scott Wasson, the founder and Editor-In-Chief stepped down as he accepted a role in AMD's graphics division. Wasson subsequently sold the company in March 2018 to Adam Eiberger, the Tech Report's business manager.
On December 21, 2018 Jeff Kampman stepped down as Editor-In-Chief. The site was then sold to investors John Rampton and John Rall, and Renee Johnson took over as Editor-in-Chief.
On July 7, 2019, coinciding with the release of AMD's Ryzen 3000 CPUs and Navi GPUs, a site redesign was launched, moving from the Tech Report's former custom CMS and functionality to a WordPress template. On July 9, Johnson posted an introduction to the design. The redesign was met with criticism from users. In August of the same year, TechReport's senior editing team went through a series of changes, which was also reflected in a change of direction for the site, which no longer covered hardware reviews and system guides to the same extent.
When the site's editorial team was revamped in 2023, the company recruited a team of technology, finance, and cybersecurity experts with the aim of increasing editorial quality and restoring the journalistic integrity of the content process. The new iteration of Techreport's editorial team is focused on creating unbiased, human-only technology content that addresses the needs of consumers, at no cost to the consumer.
A product testing methodology was introduced by the new Techreport team. Contributors now have to test and interact with products and services, providing images and proof of testing as part of the publication process. In addition, Techreport now maintains editorial independence, which means that coverage and product recommendations are the result of testing and represent the views of the author at the time of publication. While the site still uses an affiliate marketing model to fund the cost of publishing, earnings from vendors do not dictate the team's views on any products or services.
Leadership and Culture
Techreport's content is written, edited, and fact-checked by people. Writers never use AI tools, and the company is committed to the idea that exceptional tech content written by people takes time. In addition, the editorial team now works 100% remotely, and the organization has switched to a model of empathetic leadership, where people's ability to produce quality work without undue pressure is a chief consideration when establishing individual KPIs.
AMD TLB bug investigation
TechReport was one of the first sites in 2007 to document and benchmark the flaw in the translation lookaside buffer (TLB) of AMD Phenom CPUs. Despite claims by AMD that the initial BIOS fix would result in only a 10% performance decrease, TechReport's benchmarks revealed that the performance impact of the initial BIOS fix was much more severe, nearly 20% on average, with some applications such as Firefox showing a performance decrease of 57% in tests. The site was also the first to report that AMD had stopped shipments of processors because of this bug.
Video game and GPU performance benchmarking research
On September 8, 2011, Scott Wasson posted an article titled "Inside the Second: A New Look at Game Benchmarking." It showed gamers that frames per second (FPS) is not the only thing that matters for "smooth" gameplay; frame latency also plays a large part. This benchmarking method was later acknowledged by other publications such as Anandtech, which described it as "a revolution in the 3D game benchmarking scene", and Overclockers.
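The distinction between average frame rate and frame-time consistency can be made concrete with a short calculation. The frame times below are invented purely for illustration; they are not measurements from the article or from the original benchmarks.

```python
# Illustrative sketch (invented frame times): two runs with roughly the same
# average FPS can feel very different if one has occasional long frames.
frame_times_ms_smooth = [16.7] * 60                      # steady ~60 FPS
frame_times_ms_spiky  = [10.0] * 54 + [77.0] * 6         # same total time, with spikes

def summarize(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    p99 = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
    return avg_fps, p99

for name, times in [("smooth", frame_times_ms_smooth), ("spiky", frame_times_ms_spiky)]:
    fps, p99 = summarize(times)
    print(f"{name}: {fps:.1f} average FPS, 99th-percentile frame time {p99:.1f} ms")
```

Both runs report about 60 FPS on average, but the spiky run's worst frames are several times longer, which is the kind of difference frame-latency analysis is meant to expose.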
SSD Endurance Experiment
In 2013, TechReport started an experiment using several SSDs to determine how many writes they could endure. The test ran for more than 18 months before all of the drives failed, with the drives enduring a much larger amount of written data than the manufacturers' own ratings, and it even prompted one manufacturer, Samsung, to release a humorous music video dedicated to the test.
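To put a write-endurance rating into perspective, a quick back-of-the-envelope calculation helps. The rating and write rate below are assumed values chosen only for illustration; they are not figures from the Tech Report experiment or from any specific drive.

```python
# Illustrative sketch (assumed numbers, not from the Tech Report experiment):
# how long a hypothetical rated write endurance lasts at a constant write rate.
rated_endurance_tb = 75.0        # assumed manufacturer rating, terabytes written (TBW)
write_rate_gb_per_day = 20.0     # assumed sustained host writes per day

days_to_rating = rated_endurance_tb * 1000.0 / write_rate_gb_per_day
print(f"~{days_to_rating:.0f} days (~{days_to_rating / 365:.1f} years) to reach the rated TBW")
# The experiment's point was that the tested drives kept working well past such ratings.
```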
Site Structure
Main Page
A large portion of the main page was dedicated to "News" and "Blog" entries. Among the news entries were "Shortbread" posts which offered a summary breakdown of reviews and news offered by other sites. Featured articles were often reviews of newly released PC hardware that had been tested by the site's editors and judged on several metrics including performance and value compared to other available hardware.
As of 2023, the editorial team reduced the scope of Techreport's product reviews and consumer technology guides, allowing contributors to perform thorough, hands-on testing of all the products and services being discussed. The change was necessary in order to align content processes with the ethos of producing useful and informative consumer-focused content.
The structure of Techreport's information and homepage was updated to make it easier for readers to navigate. "News" is covered in its own category, and "Software Guides" are organized by parent category, for example VPNs, Antivirus, and Artificial Intelligence.
Podcast
Adapting to the general trend toward more digestible content, The Tech Report launched its podcast on February 9, 2008, hosted by Jordan Drake. While the schedule varied, it provided a casual but quite in-depth look back at the topics that made news, presented by a panel of the site's staff. After 2015, episodes were released irregularly, frequently discussing the release of a new microarchitecture with David Kanter of Real World Technologies. The last episode was released in January 2018.
Community and Forum
Tech Report has a phpBB-styled forum that is unrestricted in read-only form and open to the public for contribution via simple registration. The forum is primarily structured around computer technology and related topics, but debates also range from politics and religion in the "opt-in only" R&P forum to general random chatter in the Back Porch. Contributors to the website also have access to a restricted forum called the Smoky Back Room. Registered users may respond to news topics and other entries posted on the front page in an isolated threaded comments section that automatically attaches to each new entry. Although access to the main page comments is linked to the user database, the discussions are logged separately from the forum area of the site and are not counted toward the user forum statistics.
As of 2023, Techreport forums have been taken offline due to outdated forum software.
References
External links
American technology news websites
Computing websites
Internet properties established in 1999
1999 establishments in the United States | Techreport | [
"Technology"
] | 1,791 | [
"Computing websites"
] |
2,154,572 | https://en.wikipedia.org/wiki/Nanobiotechnology | Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Given that the subject is one that has only emerged very recently, bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies.
This discipline helps to indicate the merger of biological research with various fields of nanotechnology. Concepts that are enhanced through nanobiology include: nanodevices (such as biological machines), nanoparticles, and nanoscale phenomena that occur within the discipline of nanotechnology. This technical approach to biology allows scientists to imagine and create systems that can be used for biological research. Biologically inspired nanotechnology uses biological systems as the inspiration for technologies not yet created. However, as with nanotechnology and biotechnology, bionanotechnology does have many potential ethical issues associated with it.
The most important objectives that are frequently found in nanobiology involve applying nanotools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptoid nanosheets, for medical and biological purposes is another primary objective in nanotechnology. New nanotools are often made by refining the applications of the nanotools that are already being used. The imaging of native biomolecules, biological membranes, and tissues is also a major topic for nanobiology researchers. Other topics concerning nanobiology include the use of cantilever array sensors and the application of nanophotonics for manipulating molecular processes in living cells.
Recently, the use of microorganisms to synthesize functional nanoparticles has been of great interest. Microorganisms can change the oxidation state of metals. These microbial processes have opened up new opportunities for us to explore novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical and physical methods, microbial processes for synthesizing nanomaterials can be achieved in aqueous phase under gentle and environmentally benign conditions. This approach has become an attractive focus in current green bionanotechnology research towards sustainable development.
Terminology
The terms are often used interchangeably. When a distinction is intended, though, it is based on whether the focus is on applying biological ideas or on studying biology with nanotechnology. Bionanotechnology generally refers to the study of how the goals of nanotechnology can be guided by studying how biological "machines" work and adapting these biological motifs into improving existing nanotechnologies or creating new ones. Nanobiotechnology, on the other hand, refers to the ways that nanotechnology is used to create devices to study biological systems.
In other words, nanobiotechnology is essentially miniaturized biotechnology, whereas bionanotechnology is a specific application of nanotechnology. For example, DNA nanotechnology or cellular engineering would be classified as bionanotechnology because they involve working with biomolecules on the nanoscale. Conversely, many new medical technologies involving nanoparticles as delivery systems or as sensors would be examples of nanobiotechnology since they involve using nanotechnology to advance the goals of biology.
The definitions enumerated above will be utilized whenever a distinction between nanobio and bionano is made in this article. However, given the overlapping usage of the terms in modern parlance, individual technologies may need to be evaluated to determine which term is more fitting. As such, they are best discussed in parallel.
Concepts
Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are to be used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as biological computing (e.g. DNA computing) and agriculture (targeted delivery of pesticides, hormones, and fertilizers).
The impact of bionanoscience, achieved through structural and mechanistic analyses of biological processes at nanoscale, is their translation into synthetic and technological applications through nanotechnology.
Nanobiotechnology takes most of its fundamentals from nanotechnology. Most of the devices designed for nano-biotechnological use are directly based on other existing nanotechnologies. Nanobiotechnology is often used to describe the overlapping multidisciplinary activities associated with biosensors, particularly where photonics, chemistry, biology, biophysics, nanomedicine, and engineering converge. Measurement in biology using wave guide techniques, such as dual-polarization interferometry, is another example.
Applications
Applications of bionanotechnology are extremely widespread. Insofar as the distinction holds, nanobiotechnology is much more commonplace in that it simply provides more tools for the study of biology. Bionanotechnology, on the other hand, promises to recreate biological mechanisms and pathways in a form that is useful in other ways.
Nanomedicine
Nanomedicine is a field of medical science whose applications are increasing.
Nanobots
The field includes nanorobots and biological machines, which constitute very useful tools for developing this area of knowledge. In recent years, researchers have made many improvements in the devices and systems required to develop functional nanorobots, such as motion and magnetic guidance. This suggests a new way of treating diseases such as cancer: thanks to nanorobots, the side effects of chemotherapy could be controlled, reduced, or even eliminated, so that in some years cancer patients could be offered an alternative to chemotherapy, which kills not only cancerous cells but also healthy ones and causes secondary effects such as hair loss, fatigue, or nausea. Nanobots could be used for various therapies, surgery, diagnosis, and medical imaging, such as via targeted drug delivery to the brain (similar to nanoparticles) and other sites. Programmability for combinations of features such as "tissue penetration, site-targeting, stimuli responsiveness, and cargo-loading" makes such nanobots promising candidates for "precision medicine".
At a clinical level, cancer treatment with nanomedicine would consist of supplying nanorobots to the patient through an injection; the nanorobots would then search for cancerous cells while leaving healthy ones untouched. Patients treated with nanomedicine would therefore not notice the presence of these nanomachines inside them; the only thing noticeable would be the progressive improvement of their health. Nanobiotechnology may also be useful for medicine formulation.
"Precision antibiotics" has been proposed to make use of bacteriocin-mechanisms for targeted antibiotics.
Nanoparticles
Nanoparticles are already widely used in medicine. Their applications overlap with those of nanobots, and in some cases it may be difficult to distinguish between them. They can be used for diagnosis and targeted drug delivery, encapsulating medicine. Some can be manipulated using magnetic fields; for example, remote-controlled hormone release has been achieved experimentally this way.
One example of an advanced application under development is "Trojan horse" designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks and is currently the most common cause of death globally.
Artificial cells
Artificial cells such as synthetic red blood cells that have all or many of the natural cells' known broad natural properties and abilities could be used to load functional cargos such as hemoglobin, drugs, magnetic nanoparticles, and ATP biosensors which may enable additional non-native functionalities.
Other
Nanofibers that mimic the matrix around cells and contain molecules engineered to wiggle have been shown to be a potential therapy for spinal cord injury in mice.
Technically, gene therapy can also be considered a form of nanobiotechnology, or as moving towards it. An example of a genome-editing-related development that is more clearly nanobiotechnology than more conventional gene therapies is the synthetic fabrication of functional materials in tissues. Researchers made C. elegans worms synthesize, fabricate, and assemble bioelectronic materials in their brain cells. They enabled modulation of membrane properties in specific neuron populations and manipulation of behavior in the living animals, which might be useful in the study of and treatments for diseases such as multiple sclerosis, and which demonstrates the viability of such synthetic in vivo fabrication. Moreover, such genetically modified neurons may enable connecting external components, such as prosthetic limbs, to nerves.
Nanosensors based on, e.g., nanotubes, nanowires, cantilevers, or atomic force microscopy could be applied to diagnostic devices and sensors.
Nanobiotechnology
Nanobiotechnology (sometimes referred to as nanobiology) in medicine may be best described as helping modern medicine progress from treating symptoms to generating cures and regenerating biological tissues.
Three American patients have received whole cultured bladders with the help of doctors who use nanobiology techniques in their practice. Also, it has been demonstrated in animal studies that a uterus can be grown outside the body and then placed in the body in order to produce a baby. Stem cell treatments have been used to treat diseases of the human heart and are in clinical trials in the United States. There is also funding for research into allowing people to have new limbs without having to resort to prostheses. Artificial proteins might also become available to manufacture without the need for harsh chemicals and expensive machines. It has even been surmised that by the year 2055, computers may be made out of biochemicals and organic salts.
In vivo biosensors
Another example of current nanobiotechnological research involves nanospheres coated with fluorescent polymers. Researchers are seeking to design polymers whose fluorescence is quenched when they encounter specific molecules. Different polymers would detect different metabolites. The polymer-coated spheres could become part of new biological assays, and the technology might someday lead to particles which could be introduced into the human body to track down metabolites associated with tumors and other health problems. Another example, from a different perspective, would be evaluation and therapy at the nanoscopic level, i.e. the treatment of nanobacteria (25-200 nm sized) as is done by NanoBiotech Pharma.
In vitro biosensors
"Nanoantennas" made out of DNA – a novel type of nano-scale optical antenna – can be attached to proteins and produce a signal via fluorescence when these perform their biological functions, in particular for their distinct conformational changes. This could be used for further nanobiotechnology such as various types of nanomachines, to develop new drugs, for bioresearch and for new avenues in biochemistry.
Energy
It may also be useful in sustainable energy: in 2022, researchers reported 3D-printed nano-"skyscraper" electrodes (nanotechnology) – albeit micro-scale, the pillars had nanoscale porosity from the printed metal nanoparticle inks – that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis (biotechnology) than in earlier studies.
Nanobiology
While nanobiology is in its infancy, there are many promising methods that may rely on it in the future. Biological systems are inherently nano in scale; nanoscience must merge with biology in order to deliver biomacromolecules and molecular machines that are similar to those in nature. Controlling and mimicking the devices and processes that are constructed from molecules is a tremendous challenge for the converging disciplines of nanobiotechnology. All living things, including humans, can be considered to be nanofoundries. Natural evolution has optimized the "natural" form of nanobiology over millions of years. In the 21st century, humans have developed the technology to artificially tap into nanobiology. This process is best described as "organic merging with synthetic". Colonies of live neurons can live together on a biochip device, according to research from Gunther Gross at the University of North Texas. Self-assembling nanotubes have the ability to be used as a structural system. They would be composed together with rhodopsins, which would facilitate the optical computing process and help with the storage of biological materials. DNA (as the software for all living things) can be used as a structural proteomic system, a logical component for molecular computing. Ned Seeman, a researcher at New York University, along with other researchers, is currently researching similar concepts.
Bionanotechnology
Distinction from nanobiotechnology
Broadly, bionanotechnology can be distinguished from nanobiotechnology in that it refers to nanotechnology that makes use of biological materials or components, though it could in principle, or does in some cases, use abiotic components as well. It plays a smaller role in medicine (which is concerned with biological organisms). It makes use of natural or biomimetic systems or elements for unique nanoscale structures and various applications that may not be directly associated with biology, rather than mostly biological applications. In contrast, nanobiotechnology uses biotechnology miniaturized to nanometer size or incorporates nanomolecules into biological systems. In some future applications, the two fields could merge.
DNA
DNA nanotechnology is one important example of bionanotechnology. The utilization of the inherent properties of nucleic acids like DNA to create useful materials or devices – such as biosensors – is a promising area of modern research.
DNA digital data storage refers mostly to the use of synthesized but otherwise conventional strands of DNA to store digital data, which could be useful for e.g. high-density long-term data storage that isn't accessed and written to frequently as an alternative to 5D optical data storage or for use in combination with other nanobiotechnology.
Membrane materials
Another important area of research involves taking advantage of membrane properties to generate synthetic membranes. Proteins that self-assemble to generate functional materials could be used as a novel approach for the large-scale production of programmable nanomaterials. One example is the development of amyloids found in bacterial biofilms as engineered nanomaterials that can be programmed genetically to have different properties.
Lipid nanotechnology
Lipid nanotechnology is another major area of research in bionanotechnology, where the physico-chemical properties of lipids, such as their antifouling behavior and self-assembly, are exploited to build nanodevices with applications in medicine and engineering. Lipid nanotechnology approaches can also be used to develop next-generation emulsion methods to maximize both the absorption of fat-soluble nutrients and the ability to incorporate them into popular beverages.
Computing
"Memristors" fabricated from protein nanowires of the bacterium Geobacter sulfurreducens which function at substantially lower voltages than previously described ones may allow the construction of artificial neurons which function at voltages of biological action potentials. The nanowires have a range of advantages over silicon nanowires and the memristors may be used to directly process biosensing signals, for neuromorphic computing (see also: wetware computer) and/or direct communication with biological neurons.
Other
Protein folding studies provide a third important avenue of research, but one that has been largely inhibited by our inability to predict protein folding with a sufficiently high degree of accuracy. Given the myriad uses that biological systems have for proteins, though, research into understanding protein folding is of high importance and could prove fruitful for bionanotechnology in the future.
Agriculture
In the agriculture industry, engineered nanoparticles have been serving as nano carriers, containing herbicides, chemicals, or genes, which target particular plant parts to release their content.
Previously, nanocapsules containing herbicides have been reported to penetrate effectively through cuticles and tissues, allowing the slow and constant release of the active substances. Likewise, other literature describes how the nano-encapsulated slow release of fertilizers has become a trend for reducing fertilizer consumption and minimizing environmental pollution through precision farming. These are only a few examples from numerous research works which might open up exciting opportunities for nanobiotechnology applications in agriculture. The compatibility of such engineered nanoparticles with plants should also be considered before they are employed in agricultural practice. A thorough literature survey shows that only limited reliable information is available to explain the biological consequences of engineered nanoparticles on treated plants. Certain reports underline the phytotoxicity of engineered nanoparticles of various origins to plants, depending on their concentrations and sizes. At the same time, an equal number of studies reported positive outcomes, with nanoparticles promoting growth in treated plants. In particular, compared to other nanoparticles, applications based on silver and gold nanoparticles have elicited beneficial results in various plant species with little or no toxicity. Leaves of asparagus treated with silver nanoparticles (AgNPs) showed increased ascorbate and chlorophyll content. Similarly, AgNP-treated common bean and corn were reported to show increased shoot and root length, leaf surface area, and chlorophyll, carbohydrate, and protein contents. Gold nanoparticles have been used to induce growth and seed yield in Brassica juncea.
Nanobiotechnology is used in tissue culture. The administration of micronutrients at the level of individual atoms and molecules allows for the stimulation of various developmental stages, the initiation of cell division, and differentiation in the production of plant material, which must be qualitatively uniform and genetically homogeneous. The use of zinc oxide (ZnO NPs) and silver (AgNPs) nanoparticles gives very good results in the micropropagation of chrysanthemums using the method of single-node shoot fragments.
Tools
This field relies on a variety of research methods, including experimental tools (e.g. imaging, characterization via AFM/optical tweezers etc.), x-ray diffraction based tools, synthesis via self-assembly, characterization of self-assembly (using e.g. MP-SPR, DPI, recombinant DNA methods, etc.), theory (e.g. statistical mechanics, nanomechanics, etc.), as well as computational approaches (bottom-up multi-scale simulation, supercomputing).
Risk management
As of 2009, the risks of nanobiotechnologies were poorly understood, and in the U.S. there was no solid national consensus on what kind of regulatory policy principles should be followed. For example, nanobiotechnologies may have hard-to-control effects on the environment, ecosystems, and human health. Metal-based nanoparticles used for biomedical purposes are attractive in various applications due to their distinctive physicochemical characteristics, which allow them to influence cellular processes at the biological level. The fact that metal-based nanoparticles have high surface-to-volume ratios makes them reactive or catalytic. Due to their small size, they are more likely to penetrate biological barriers such as cell membranes and cause cellular dysfunction in living organisms. Indeed, the high toxicity of some transition metals can make it challenging to use mixed-oxide nanoparticles in biomedical applications. They can trigger adverse effects in organisms, causing oxidative stress, stimulating the formation of reactive oxygen species (ROS), perturbing mitochondria, and modulating cellular functions, with fatal results in some cases.
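The "high surface-to-volume ratio" point above can be made quantitative for idealized spherical particles, for which the ratio equals 6/d. The diameters in the sketch below are arbitrary choices for illustration, not values from the text.

```python
import math

# Illustrative sketch: the surface-to-volume ratio of an ideal sphere is 6/d,
# so nanoscale particles expose vastly more surface per unit volume than bulk
# material, which is part of why they are so reactive.
def surface_to_volume_per_nm(diameter_nm: float) -> float:
    r = diameter_nm / 2.0
    surface = 4.0 * math.pi * r ** 2
    volume = (4.0 / 3.0) * math.pi * r ** 3
    return surface / volume          # algebraically equal to 6 / diameter_nm

for d in [1_000_000.0, 1_000.0, 10.0]:   # 1 mm, 1 µm, 10 nm (arbitrary examples)
    print(f"diameter {d:>12,.0f} nm -> surface/volume = {surface_to_volume_per_nm(d):.2e} per nm")
```

Shrinking the diameter by a factor of 1000 raises the ratio by the same factor, which is the geometric reason small particles interact so strongly with their surroundings.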
Bonin notes that "Nanotechnology is not a specific determinate homogenous entity, but a collection of diverse capabilities and applications" and that nanobiotechnology research and development is – as one of many fields – affected by dual-use problems.
See also
Biomimicry
Colloidal gold
Genome editing (bacteria, (micro-borgs))
Gold nanoparticle
Nanomedicine
Nanobiomechanics
Nanoparticle–biomolecule conjugate
Nanosubmarine
Nanozymes
References
External links
What is Bionanotechnology?—a video introduction to the field
Nanobiotechnology in Orthopaedic
Nanotechnology
Biotechnology
Nanomedicine | Nanobiotechnology | [
"Materials_science",
"Engineering",
"Biology"
] | 4,320 | [
"Materials science",
"Biotechnology",
"Nanomedicine",
"nan",
"Nanotechnology"
] |
2,154,601 | https://en.wikipedia.org/wiki/Heyndrickxia%20coagulans | Heyndrickxia coagulans (formerly Bacillus coagulans) is a lactic acid–forming bacterial species. This species was transferred to Weizmannia in 2020, then to Heyndrickxia in 2023.
Description
H. coagulans is a Gram-positive, catalase-positive, spore-forming, motile, facultative anaerobe rod that measures approximately 0.9 μm by 3.0 μm to 5.0 μm. It may appear Gram negative when entering the stationary phase of growth. The optimum temperature for growth is ; the range of temperatures tolerated is . IMViC tests VP and MR (methyl red) are positive.
Taxonomic history
The species was first isolated and described in 1915 by B.W. Hammer at the Iowa Agricultural Experiment Station as a cause of an outbreak of coagulation in evaporated milk packed by an Iowa condensary. Separately isolated in 1935 and described as Lactobacillus sporogenes in the fifth edition of Bergey's Manual of Systematic Bacteriology, it exhibits characteristics typical of both genera Lactobacillus and Bacillus; its taxonomic position between the families Lactobacillaceae and Bacillaceae was often debated. However, in the seventh edition of Bergey's, it was finally transferred to the genus Bacillus. DNA-based technology was used in distinguishing between the two genera of bacteria, which are morphologically similar and possess similar physiological and biochemical characteristics.
In 2020, further genetic evidence showed that it is sufficiently different from other members of Bacillus to be transferred into its own genus. As a result, it became the type species of Weizmannia. In 2023, still further genetic evidence showed that Weizmannia was not sufficiently distinct from Heyndrickxia to be an independent genus; as a result, all members of Weizmannia were moved to Heyndrickxia.
Uses
H. coagulans has been added by the EFSA to their Qualified Presumption of Safety list and has been approved for veterinary purposes as GRAS by the U.S. Food and Drug Administration's Center for Veterinary Medicine, as well as by the European Union, and is listed by AAFCO for use as a direct-fed microbial in livestock production. It is often used in veterinary applications, especially as a probiotic in pigs, cattle, poultry, and shrimp. Many references to use of this bacterium in humans exist, especially in improving the vaginal flora, improving abdominal pain and bloating in irritable bowel syndrome patients, and increasing immune response to viral challenges. There is evidence from animal research that suggests that H. coagulans is effective in both treating as well as preventing recurrence of Clostridioides difficile associated diarrhea. Further, one animal research study showed that it can alter inflammatory processes in the context of multiple sclerosis. One strain of this bacterium has also been assessed for safety as a food ingredient. Spores are activated in the acidic environment of the stomach and begin germinating and proliferating in the intestine. Sporeforming H. coagulans strains are used in some countries as probiotics for patients on antibiotics.
Marketing
H. coagulans is often marketed as Lactobacillus sporogenes or a 'sporeforming lactic acid bacterium' probiotic, but this is an outdated name due to taxonomic changes in 1939. Although H. coagulans does produce L+lactic acid, the bacterium used in these products is not a lactic-acid bacterium, as Bacillaceae species do not belong to the lactic acid bacteria (Lactobacillales). By definition, lactic acid bacteria (Lactobacillus, Bifidobacterium) do not form spores. Therefore, using the name Lactobacillus sporogenes is scientifically incorrect.
The 2023 name H. coagulans is nowhere near as common as the former name Bacillus coagulans. The former name remains valid under the Prokaryotic Code.
References
Notes
External links
coagulans
Digestive system
Probiotics
Bacteria described in 1915 | Heyndrickxia coagulans | [
"Biology"
] | 871 | [
"Digestive system",
"Organ systems"
] |
2,154,737 | https://en.wikipedia.org/wiki/Detention%20basin | A detention basin or retarding basin is an excavated area installed on, or adjacent to, tributaries of rivers, streams, lakes or bays to protect against flooding and, in some cases, downstream erosion by storing water for a limited period of time. These basins are also called dry ponds, holding ponds or dry detention basins if no permanent pool of water exists.
Detention ponds that are designed to retain some volume of water at all times are called retention basins. In its basic form, a detention basin is used to manage water quantity while having limited effectiveness in protecting water quality, unless it includes a permanent pool feature.
Functions and design
Detention basins are storm water best management practices that provide general flood protection and can also control extreme floods such as a 1 in 100-year storm event. The basins are typically built during the construction of new land development projects including residential subdivisions or shopping centers. The ponds help manage the excess urban runoff generated by newly constructed impervious surfaces such as roads, parking lots and rooftops.
A basin functions by allowing large flows of water to enter but limits the outflow by having a small opening at the lowest point of the structure. The size of this opening is determined by the capacity of underground and downstream culverts and washes to handle the release of the contained water.
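The fill-fast, drain-slow behavior described above can be sketched with a simple level-pool routing calculation using the standard orifice equation Q = Cd·A·√(2gh). The basin geometry, inflow hydrograph, and coefficients below are invented for illustration and are not design values from this article.

```python
import math

# Minimal level-pool routing sketch (all numbers invented for illustration):
# a basin fills quickly from a storm inflow and drains slowly through a small
# low-level opening, so the peak outflow is far lower than the peak inflow.
G = 9.81                 # gravitational acceleration, m/s^2
AREA_BASIN = 5_000.0     # assumed basin surface area, m^2 (treated as constant)
AREA_ORIFICE = 0.05      # assumed outlet opening area, m^2
CD = 0.6                 # assumed orifice discharge coefficient
DT = 60.0                # time step, s

def outflow(depth_m: float) -> float:
    """Orifice equation: Q = Cd * A * sqrt(2 g h)."""
    return CD * AREA_ORIFICE * math.sqrt(2.0 * G * max(depth_m, 0.0))

# Invented triangular inflow hydrograph: ramps up to 3 m^3/s, then back down,
# followed by six hours of no inflow while the basin drains.
inflow = [3.0 * min(t, 120 - t) / 60.0 for t in range(0, 121)] + [0.0] * 360

depth, peak_out = 0.0, 0.0
for q_in in inflow:
    q_out = outflow(depth)
    depth += (q_in - q_out) * DT / AREA_BASIN      # mass balance: dS/dt = Qin - Qout
    peak_out = max(peak_out, q_out)

print(f"peak inflow ~3.0 m^3/s, peak outflow ~{peak_out:.2f} m^3/s, final depth {depth:.2f} m")
```

With these assumed numbers the peak outflow is an order of magnitude below the peak inflow, which is exactly the attenuation a detention basin is built to provide.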
Frequently the inflow area is constructed to protect the structure from some types of damage. Offset concrete blocks in the entrance spillways are used to reduce the speed of entering flood water. These structures may also have debris drop vaults to collect large rocks. These vaults are deep holes under the entrance to the structure. The holes are wide enough to allow large rocks and other debris to fall into the holes before they can damage the rest of the structure. These vaults must be emptied after each storm event.
Research has shown that detention basins built with real-time control of the outflow from the basin are significantly more effective at retaining total suspended solids and associated contaminants, such as heavy metals, when compared to basins without control.
Extended detention basin
A variant basin design called an extended detention dry basin can limit downstream erosion and control of some pollutants such as suspended solids. This basin type differs from a retention basin, also known as a "wet pond," which includes a permanent pool of water.
While basic detention ponds are typically designed to empty within 6 to 12 hours after a storm, extended detention (ED) dry basins improve the basic detention design by lengthening the storage time, for example, to 24 or 48 hours. Longer detention allows for more settling of suspended solids, resulting in higher-quality water.
See also
Best management practice for water pollution
Groundwater banking
Retention basin
Stream restoration
Sustainable urban drainage systems
Sustainable Flood Retention Basin
Balancing lake
References
External links
Detention vs. retention - Project Brays (Harris County, Texas)
Maintaining Your BMPs: A Guidebook for Private Owners & Operators in Northern Virginia
Environmental engineering
Hydraulic engineering
Hydrology
Infrastructure
Ponds
Water treatment
Stormwater management
Water supply | Detention basin | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 598 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Water supply",
"Water pollution",
"Physical systems",
"Construction",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Water technology",
"Hydraulic engineering",
"Infrastructure"
] |
2,154,814 | https://en.wikipedia.org/wiki/Digital%20Cinema%20Initiatives | Digital Cinema Initiatives, LLC (DCI) is a consortium of major motion picture studios, formed to establish specifications for a common systems architecture for digital cinema systems.
The organization was formed in March 2002 by Metro-Goldwyn-Mayer, Paramount Pictures, Sony Pictures, 20th Century Studios, Universal Studios, Walt Disney Studios and Warner Bros.
The primary purpose of DCI is to establish and document specifications for an open architecture for digital cinema that ensures a uniform and high level of technical performance, reliability and quality. By establishing a common set of content requirements, distributors, studios, exhibitors, d-cinema manufacturers and vendors can be assured of interoperability and compatibility. Because of the relationship of DCI to many of Hollywood's key studios, conformance to DCI's specifications is considered a requirement by software developers or equipment manufacturers targeting the digital cinema market.
Specification
On July 20, 2005, DCI released Version 1.0 of its "Digital Cinema System Specification", commonly referred to as the "DCI Specification". The document describes overall system requirements and specifications for digital cinema. Between March 28, 2006, and March 21, 2007, DCI issued 148 errata to Version 1.0.
DCI released Version 1.1 of the DCI Specification on April 12, 2007, incorporating the previous 148 errata into the DCI Specification. On April 15, 2007, at the annual NAB Digital Cinema Summit, DCI announced the new version, as well as some future plans. They released the "Stereoscopic Digital Cinema Addendum" to begin to establish 3-D technical specifications in response to the popularity of 3-D stereoscopic films. It was also announced "which studios would take over the leadership roles in DCI after the current leadership term expires at the end of September."
Subsequently, between August 27, 2007, and February 1, 2008, DCI issued 100 errata to Version 1.1. So, DCI released Version 1.2 of the DCI Specification on March 7, 2008, again incorporating the previous 100 errata into the specification document. An additional 96 errata were issued by August 30, 2012, so a revised Version 1.2 incorporating those additional errata was approved on October 10, 2012. DCI approved DCI Specification Version 1.3 on June 27, 2018, integrating the 45 errata issued to the previous version into a new document.
On July 20, 2020, fifteen years to the day after Version 1.0, DCI issued a new DCI Specification Version 1.4 that assimilated 29 errata issued since Version 1.3. On October 13, 2021, DCI approved a new DCI Specification Version 1.4.1 that integrated the 23 errata that had been issued to DCI Specification Version 1.4. For the convenience of users, DCI also created an online HTML version of DCI Specification, Version 1.4.1. Due to the HTML conversion process, the footnotes in the DCSS now appear as endnotes. The PDF version contains pagination and page numbers whereas the HTML version does not.
DCI Specification Version 1.4.2, dated June 15, 2022, includes revisions and refinements respecting Object-Based Audio Essence (OBAE), also known as Immersive Audio Bitstream (IAB). Version 1.4.2 also implements post-show log record collection utilizing SMPTE 430-17 SMS-OMB Communications Protocol Specification. Additionally, Version 1.4.2 incorporated two prior addenda: the Digital Cinema Object-Based Audio Addendum, dated October 1, 2018 and the Stereoscopic Digital Cinema Addendum, Version 1.0, dated July 11, 2007. Users using Version 1.4.2 no longer need to refer to the separate addenda. Previous DCSS versions are archived on the DCI web site.
Based on many SMPTE and ISO standards, such as JPEG 2000-compressed image and "broadcast wave" PCM/WAV sound, the DCI Specification explains the route to create an entire Digital Cinema Package (DCP) from a raw collection of files known as the Digital Cinema Distribution Master (DCDM), as well as the specifics of its content protection, encryption, and forensic marking.
The DCI Specification also establishes standards for the decoder requirements and the presentation environment itself, such as ambient light levels, pixel aspect and shape, image luminance, white point chromaticity, and those tolerances to be kept.
Even though it specifies what kind of information is required, the DCI Specification does not include specific information about how data within a distribution package is to be formatted. Formatting of this information is defined by the Society of Motion Picture and Television Engineers (SMPTE) digital cinema standards and related documents.
Image and audio capability overview
2D image
2048×1080 (2K) at 24 frame/s or 48 frame/s, or 4096×2160 (4K) at 24 frame/s
In 2K, for Scope (2.39:1) presentation 2048×858 pixels of the imager is used
In 2K, for Flat (1.85:1) presentation 1998×1080 pixels of the imager is used
In 4K, for Scope (2.39:1) presentation 4096×1716 pixels of the imager is used
In 4K, for Flat (1.85:1) presentation 3996×2160 pixels of the imager is used
12 bits per color component (36 bits per pixel) via dual HD-SDI (encrypted)
10 bits only permitted for 2K at 48 frame/s
CIE XYZ color space, gamma-corrected
TIFF 6.0 container format (one file per frame)
JPEG 2000 compression
From 0 to 5 or from 1 to 6 wavelet decomposition levels for 2K or 4K resolutions, respectively
Compression rate of 4.71 bits/pixel (2K @ 24 frame/s), 2.35 bits/pixel (2K @ 48 frame/s), 1.17 bits/pixel (4K @ 24 frame/s)
250 Mbit/s maximum image bit rate
Stereoscopic 3D image
2048×1080 (2K) at 48 frame/s - 24 frame/s per eye (4096×2160 4K not supported)
In 2K, for Scope (2.39:1) presentation 2048×858 pixels of the imager is used
In 2K, for Flat (1.85:1) presentation 1998×1080 pixels of the imager is used
Optionally, in the HD-SDI link only: 12 bit color, YCxCz 4:2:2 (i.e. chroma subsampling in XYZ space), each eye in separate stream
Audio
24 bits per sample, 48 kHz or 96 kHz
Up to 16 channels
WAV container, uncompressed PCM
DCI has additionally published a document outlining recommended practice for High Frame Rate digital cinema. This document discloses the following proposed frame rates: 60, 96, and 120 frames per second for 2D at 2K resolution; 48 and 60 for stereoscopic 3D at 2K resolution; 48 and 60 for 2D at 4K resolution. The maximum compressed bit rate for support of all proposed frame rates should be 500 Mbit/s.
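The bits-per-pixel figures quoted in the overview above follow directly from the 250 Mbit/s ceiling divided by the pixel rate. The sketch below is simple arithmetic on those quoted numbers, not material from the specification itself.

```python
# Arithmetic check of the bits/pixel figures quoted above: the ceiling bit rate
# divided by (pixels per frame * frames per second). Uses 1 Mbit = 10**6 bits.
def bits_per_pixel(max_bit_rate_mbps: float, width: int, height: int, fps: int) -> float:
    return max_bit_rate_mbps * 1e6 / (width * height * fps)

print(f"2K @ 24 fps: {bits_per_pixel(250, 2048, 1080, 24):.2f} bits/pixel")   # ~4.71
print(f"2K @ 48 fps: {bits_per_pixel(250, 2048, 1080, 48):.2f} bits/pixel")   # ~2.35
print(f"4K @ 24 fps: {bits_per_pixel(250, 4096, 2160, 24):.2f} bits/pixel")   # ~1.18 (spec table rounds to 1.17)
```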
Related information
The idea for DCI was originally mooted in late 1999 by Tom McGrath, then COO of Paramount Pictures, who applied to the U.S. Department of Justice for anti-trust waivers to allow the joint cooperation of all seven major motion picture studios.
Universal Pictures made one of the first feature-length DCPs created to DCI specifications, using their film Serenity. Although it was not distributed theatrically, it had one public screening on November 7, 2005, at the USC Entertainment Technology Center's Digital Cinema Laboratory in the Pacific Theatre, Hollywood. Inside Man was Universal's first DCP commercial release, and, in addition to 35mm film distribution, was delivered via hard drive to 20 theatres in the United States along with two trailers.
The Academy Film Archive houses the Digital Cinema Initiatives, LLC Collection, which includes film and digital elements from DCI's Standard Evaluation Material (StEM), a 12-minute production shot on 35mm and 65mm film, created for vendors and standards organizations to test and evaluate image compression and digital projection technologies.
Notes
References
Bibliography
High frame rates digital cinema recommended practice, DCI, 2015 (PDF)
External links
Film and video technology
Digital media
The Walt Disney Company
2002 establishments in California
20th Century Studios
Paramount Pictures
Sony Pictures Entertainment
Warner Bros.
Universal Pictures
Mass media companies established in 2002
Communications and media organizations based in the United States | Digital Cinema Initiatives | [
"Technology"
] | 1,806 | [
"Multimedia",
"Digital media"
] |
2,154,852 | https://en.wikipedia.org/wiki/Dovecot%20%28software%29 | Dovecot is an open-source IMAP and POP3 server for Unix-like operating systems, written primarily with security in mind. Timo Sirainen originated Dovecot and first released it in July 2002. Dovecot developers primarily aim to produce a lightweight, fast and easy-to-set-up open-source email server.
The primary purpose of Dovecot is to act as a mail storage server. Mail is delivered to the server using a mail delivery agent (MDA) and is stored for later access with an email client (mail user agent, or MUA). Dovecot can also act as a mail proxy server, forwarding connections to another mail server, or as a lightweight MUA in order to retrieve and manipulate mail on a remote server, for example for mail migration.
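As an illustration of the MUA side of that arrangement, the short sketch below uses Python's standard-library imaplib to list messages over IMAP. The hostname and credentials are placeholders, and nothing here is specific to Dovecot beyond the fact that it speaks standard IMAP.

```python
import imaplib

# Minimal IMAP client sketch using Python's standard library. The host and
# credentials below are placeholders; any standards-compliant IMAP server
# (Dovecot included) should accept the same commands.
HOST = "mail.example.org"           # placeholder hostname
USER, PASSWORD = "alice", "secret"  # placeholder credentials

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    status, (count_bytes,) = imap.select("INBOX", readonly=True)
    print("messages in INBOX:", int(count_bytes))
    status, (ids_bytes,) = imap.search(None, "UNSEEN")
    print("unseen message ids:", ids_bytes.split())
```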
According to the Open Email Survey, as of 2020, Dovecot has an installed base of at least 2.9 million IMAP servers, and has a global market share of 76.9% of all IMAP servers. The results of the same survey in 2019 gave figures of 2.6 million and 76.2%, respectively.
Features
Dovecot can work with standard mbox, Maildir, and its own native high-performance dbox formats. It is fully compatible with the UW IMAP and Courier IMAP servers' implementations of these formats, as well as with mail clients accessing the mailboxes directly.
Dovecot also includes a mail delivery agent (called Local delivery agent in Dovecot's documentation) and an LMTP server, with the optional Sieve filtering support.
Dovecot supports a variety of authentication schemas for IMAP, POP and message submission agent (MSA) access, including CRAM-MD5 and the more secure DIGEST-MD5.
Version 2.2 added several new features to Dovecot, e.g. additional IMAP command extensions, rewritten and optimized internal components, and support for per-user flags in shared mailboxes.
Version 2.3 adds a message submission agent, Lua scripting for authentication, and some other improvements.
Apple Inc. has included Dovecot for email services since Mac OS X Server 10.6 Snow Leopard.
In 2017, Mozilla, via the Mozilla Open Source Support program, conducted a security audit on the Dovecot software, the first public audit of the Dovecot code. The team that performed the audit was extremely impressed with the quality of the Dovecot code, writing that "despite much effort and thoroughly all-encompassing approach, the Cure53 testers only managed to assert the excellent security-standing of Dovecot. More specifically, only three minor security issues have been found in the codebase, thus translating to an exceptionally good outcome for Dovecot, and a true testament to the fact that keeping security promises is at the core of the Dovecot development and operations."
See also
Comparison of mail servers
References
External links
Official docs with how-tos and documentation
Free email server software
Free software programmed in C
Email server software for Linux
Software using the MIT license
Software developed in Finland | Dovecot (software) | [
"Technology"
] | 635 | [
"Software developed in Finland"
] |
2,154,963 | https://en.wikipedia.org/wiki/Interior%20product | In mathematics, the interior product (also known as interior derivative, interior multiplication, inner multiplication, inner derivative, insertion operator, or inner derivation) is a degree −1 (anti)derivation on the exterior algebra of differential forms on a smooth manifold. The interior product, named in opposition to the exterior product, should not be confused with an inner product. The interior product $\iota_X \omega$ is sometimes written as $X \mathbin{\lrcorner} \omega$.
Definition
The interior product is defined to be the contraction of a differential form with a vector field. Thus if $X$ is a vector field on the manifold $M$, then
$\iota_X\colon \Omega^{p}(M) \to \Omega^{p-1}(M)$
is the map which sends a $p$-form $\omega$ to the $(p-1)$-form $\iota_X \omega$ defined by the property that
$(\iota_X \omega)(X_1,\ldots,X_{p-1}) = \omega(X, X_1,\ldots,X_{p-1})$
for any vector fields $X_1,\ldots,X_{p-1}$.
When $\omega$ is a scalar field (0-form), $\iota_X \omega = 0$ by convention.
The interior product is the unique antiderivation of degree −1 on the exterior algebra such that on one-forms $\alpha$,
$\iota_X \alpha = \alpha(X) = \langle \alpha, X \rangle,$
where $\langle \cdot,\cdot \rangle$ is the duality pairing between $\alpha$ and the vector $X$. Explicitly, if $\beta$ is a $p$-form and $\gamma$ is a $q$-form, then
$\iota_X(\beta \wedge \gamma) = (\iota_X \beta) \wedge \gamma + (-1)^{p}\, \beta \wedge (\iota_X \gamma).$
The above relation says that the interior product obeys a graded Leibniz rule. An operation satisfying linearity and a Leibniz rule is called a derivation.
Properties
If in local coordinates the vector field $X$ is given by
$X = f_1 \frac{\partial}{\partial x_1} + \cdots + f_n \frac{\partial}{\partial x_n},$
then the interior product is given by
$\iota_X (dx_1 \wedge \cdots \wedge dx_n) = \sum_{r=1}^{n} (-1)^{r-1} f_r\, dx_1 \wedge \cdots \wedge \widehat{dx_r} \wedge \cdots \wedge dx_n,$
where $dx_1 \wedge \cdots \wedge \widehat{dx_r} \wedge \cdots \wedge dx_n$ is the form obtained by omitting $dx_r$ from $dx_1 \wedge \cdots \wedge dx_n$.
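For instance (a standard illustration, not drawn from this article): on $\mathbb{R}^3$ with $X = \partial/\partial x_1$, the formula gives $\iota_X(dx_1 \wedge dx_2 \wedge dx_3) = dx_2 \wedge dx_3$, while $\iota_X(dx_2 \wedge dx_3) = 0$, since only terms containing $dx_1$ survive the contraction.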
By antisymmetry of forms,
$\iota_X \iota_Y \omega = -\iota_Y \iota_X \omega,$
and so $\iota_X \circ \iota_X = 0$. This may be compared to the exterior derivative $d$, which has the property $d \circ d = 0$.
The interior product with respect to the commutator of two vector fields $X$, $Y$ satisfies the identity
$\iota_{[X,Y]} = \left[\mathcal{L}_X, \iota_Y\right] = \mathcal{L}_X \iota_Y - \iota_Y \mathcal{L}_X.$
Proof. For any $k$-form $\Omega$, $\mathcal{L}_X(\iota_Y \Omega) = \iota_{\mathcal{L}_X Y}(\Omega) + \iota_Y(\mathcal{L}_X \Omega) = \iota_{[X,Y]}(\Omega) + \iota_Y(\mathcal{L}_X \Omega)$, and similarly for the other result.
Cartan identity
The interior product relates the exterior derivative and Lie derivative of differential forms by the Cartan formula (also known as the Cartan identity, Cartan homotopy formula or Cartan magic formula):
$\mathcal{L}_X \omega = d(\iota_X \omega) + \iota_X (d\omega) = \{d, \iota_X\}\,\omega,$
where the anticommutator $\{d, \iota_X\} = d \circ \iota_X + \iota_X \circ d$ was used. This identity defines a duality between the exterior and interior derivatives. Cartan's identity is important in symplectic geometry and general relativity: see moment map. The Cartan homotopy formula is named after Élie Cartan.
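As a quick check of the formula (a standard textbook example, not taken from this article), take $\omega = x\,dy$ on $\mathbb{R}^2$ and $X = \partial/\partial x$. Then $\iota_X \omega = 0$ and $d\omega = dx \wedge dy$, so $d(\iota_X \omega) + \iota_X(d\omega) = 0 + dy = dy$; computing directly, $\mathcal{L}_X \omega = (\mathcal{L}_X x)\,dy = dy$, in agreement with Cartan's formula.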
See also
Notes
References
Theodore Frankel, The Geometry of Physics: An Introduction; Cambridge University Press, 3rd ed. 2011
Loring W. Tu, An Introduction to Manifolds, 2e, Springer. 2011.
Differential forms
Differential geometry
Multilinear algebra | Interior product | [
"Engineering"
] | 461 | [
"Tensors",
"Differential forms"
] |
2,155,310 | https://en.wikipedia.org/wiki/Common%20ethanol%20fuel%20mixtures | Several common ethanol fuel mixtures are in use around the world. The use of pure hydrous or anhydrous ethanol in internal combustion engines (ICEs) is only possible if the engines are designed or modified for that purpose, and used only in automobiles, light-duty trucks and motorcycles. Anhydrous ethanol can be blended with :gasoline (petrol) for use in gasoline engines, but with high ethanol content only after engine modifications to meter increased fuel volume since pure ethanol contains only 2/3 of the BTUs of an equivalent volume of pure gasoline. High percentage ethanol mixtures are used in some racing engine applications as the very high octane rating of ethanol is compatible with very high compression ratios.
Ethanol fuel mixtures have "E" numbers which describe the percentage of ethanol fuel in the mixture by volume, for example, E85 is 85% anhydrous ethanol and 15% gasoline. Low-ethanol blends are typically from E5 to E25, although internationally the most common use of the term refers to the E10 blend.
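Because the E-number is a volume fraction and, as noted in the lead, pure ethanol contains roughly two-thirds of the volumetric energy of gasoline, the relative energy content of a blend can be estimated directly from its E-number. The sketch below is a minimal illustration under that two-thirds assumption; the function and constant names are invented for the example.

```python
ETHANOL_ENERGY_RATIO = 2 / 3   # ethanol's volumetric energy relative to gasoline (approximate)

def relative_energy(e_number):
    """Energy per unit volume of an E-numbered blend, relative to pure gasoline (E0 = 1.0)."""
    ethanol_fraction = e_number / 100.0
    return ethanol_fraction * ETHANOL_ENERGY_RATIO + (1 - ethanol_fraction) * 1.0

for blend in (10, 15, 25, 85, 100):
    print(f"E{blend:<3}: ~{relative_energy(blend):.0%} of the energy of pure gasoline")
```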
Blends of E10 or less are used in more than 20 countries around the world, led by the United States, where ethanol represented 10% of the U.S. gasoline fuel supply in 2011. Blends from E20 to E25 have been used in Brazil since the late 1970s. E85 is commonly used in the U.S. and Europe for flexible-fuel vehicles. Hydrous ethanol or E100 is used in Brazilian neat ethanol vehicles and flex-fuel light vehicles and hydrous E15 called hE15 for modern petrol cars in the Netherlands.
E10 or less
E10, a fuel mixture of 10% anhydrous ethanol and 90% gasoline sometimes called gasohol, can be used in the internal combustion engines of most modern automobiles and light-duty vehicles without need for any modification on the engine or fuel system. E10 blends are typically rated as being 2 to 3 octane numbers higher than regular gasoline and are approved for use in all new U.S. automobiles, and mandated in some areas for emissions and other reasons.
Other common blends include E5 and E7. These concentrations are generally safe for recent engines that should run on pure gasoline. As of 2006, mandates for blending bioethanol into vehicle fuels had been enacted in at least 36 states/provinces and 17 countries at the national level, with most mandates requiring a blend of 10 to 15% ethanol with gasoline.
One measure of alternative fuels in the U.S. is the "gasoline-equivalent gallon" (GEG). In 2002, the U.S. used as motor fuel, ethanol equal to , the energy equivalent of of gasoline. This was less than 1% of the total fuel used that year.
E10 and other blends of ethanol are considered to be useful in decreasing U.S. dependence on foreign oil, and can reduce carbon monoxide (CO) emissions by 20 to 30% under the right conditions. Although E10 does decrease emissions of CO and greenhouse gases such as CO2 by an estimated 2% over regular gasoline, it can cause increases in evaporative emissions and some pollutants depending on factors such as the age of the vehicle and weather conditions. According to the Philippine Department of Energy, the use of up to 10% ethanol-gasoline mixture is not harmful to cars' fuel systems. Generally, automobile gasoline containing alcohol (ethanol or methanol) is not recommended to be used in aircraft.
Availability
E10 became the standard fuel at petrol stations in the United Kingdom as of September 2021.
E10 was introduced nationwide in Thailand in 2007, and replaced 91 octane pure gasoline in that country in 2013.
E10 is commonly available in the Midwestern United States. It was also mandated for use in all standard automobile fuel in the state of Florida by the end of 2010. Due to the phasing out of MTBE as a gasoline additive and mainly due to the mandates established in the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007, ethanol blends have increased throughout the United States, and by 2009, the ethanol market share in the U.S. gasoline supply reached almost 8% by volume.
Mandatory blending of ethanol was approved in Mozambique, but the percentage in the blend has not been specified.
South Africa approved a biofuel strategy in 2007, and mandated an 8% blend of ethanol by 2013.
A 2007 Uruguayan law mandates a minimum of 5% of ethanol blended with gasoline starting in January 2015. The state-owned fuel monopoly ANCAP started blending premium gasoline with 10% bioethanol in December 2009, with availability throughout the country expected by early January 2010.
The Dominican Republic has a mandate for blending 15% of ethanol by 2015.
Chile is considering the introduction of E5, and Panama, Bolivia and Venezuela of E10.
India achieved the target of 10 percent ethanol blending, 5 months ahead of schedule, in June 2022.
From January 2018, all 92-octane fuel in Vietnam is mandated to contain 5 percent ethanol (E5). No ethanol blending is required for 95-octane fuel.
From June 2021, Argentina approved an E12 minimum (Law 27640), and after October 2022 a waiver for a maximum of E15.
A 2011 study conducted by VTT Technical Research Centre of Finland found practically no difference in fuel consumption in normal driving conditions between commercial gasoline grades 95E10 and 98E5 sold in Finland, despite the public perception that fuel consumption is significantly higher with 95E10. VTT performed the comparison test under controlled laboratory conditions and their measurements showed that over a distance of , the cars tested used an average of of 95E10, as opposed to of 98E5. The difference was 0.07 in favor of 98E5 on average, meaning that using 95E10 gasoline, which has a higher ethanol content, increases consumption by 0.7%. When the measurements are normalized, the difference becomes 1.0%, a result that is highly consistent with an estimation of calorific values based on approximate fuel composition, which came out at 1.1% in favour of E5.
Sweden
In Sweden, all 95-octane gasoline is E10 (6 to 10 percent of ethanol) since 1 August 2021, when the proportion of ethanol was increased from E5. In the early-mid-1990s, some fuel chains also sold E10. All newer and many older petrol cars bought in Sweden should handle this, since from January 2011, the Fuel Quality Directive (Directive 2009/30/EC) applied through its transposition into the law of Sweden as a member of the 27 member states of the EU.
E15
E15 contains 15% ethanol and 85% gasoline. This is generally the highest ratio of ethanol to gasoline that can be used in vehicles recommended by some U.S. auto manufacturers to run on E10, a limit due to ethanol's hydrophilicity and solvent power.
As a result of the Energy Independence and Security Act of 2007, which mandates an increase in renewable fuels for the transport sector, the U.S. Department of Energy began assessments for the feasibility of using intermediate ethanol blends in the existing vehicle fleet as a way to allow higher consumption of ethanol fuel. The National Renewable Energy Laboratory (NREL) conducted tests to evaluate the potential impacts of intermediate ethanol blends on legacy vehicles and other engines. In a preliminary report released in October 2008, the NREL presented the results of the first evaluations of the effects of E10, E15 and E20 gasoline blends on tailpipe and evaporative emissions, catalyst and engine durability, vehicle driveability, engine operability, and vehicle and engine materials. This preliminary report found none of the vehicles displayed a malfunction indicator light as a result of the ethanol blend used; no fuel filter plugging symptoms were observed; no cold start problems were observed at and laboratory conditions; and as expected, computer technology available in newer model vehicles adapts to the higher octane causing lower emissions with greater horsepower and in some cases greater fuel economy.
Other sources make the opposite claim about fuel economy. According to Consumer Reports, "ethanol isn’t as energy-dense as regular gasoline so you will see worse fuel economy with E15 gas.”
In March 2009, a lobbying group from the ethanol industry, Growth Energy, formally requested the U.S. Environmental Protection Agency (EPA) to allow the ethanol content in gasoline to be increased from 10% to 15%. Organizations doing such studies included the Energy Department, the State of Minnesota, the Renewable Fuels Association, the Rochester Institute of Technology, the Minnesota Center for Automotive Research, and Stockholm University in Sweden.
In October 2010, the EPA granted a waiver to allow up to 15% of ethanol blended with gasoline to be sold only for cars and light pickup trucks with a model year of 2007 or later, representing about 15% of vehicles on U.S. roads. In January 2011, the waiver was expanded to authorize use of E15 to include model year 2001 through 2006 passenger vehicles. The EPA also decided not to grant any waiver for E15 use in any motorcycles, heavy-duty vehicles, or nonroad engines because current testing data do not support such a waiver. According to the Renewable Fuels Association, the E15 waivers now cover 62% of vehicles on the road in the US, and the ethanol group estimates if all 2001 and newer cars and pickups were to use E15, the theoretical blend wall for ethanol use would be approximately 17.5 billion gallons (66.2 billion liters) per year. The EPA was still studying if older cars can withstand a 15% ethanol blend.
The EPA waiver authorizes sale of E15 only from Sep 15 to May 31 out of a black hose and a yellow hose to flex fuel vehicles only from June 1 to Sep 14. Retailers have shunned building infrastructure due to the costly regulatory requirements which have created a practical barrier to the commercialization of the higher blend. Most fuel stations do not have enough pumps to offer the new blend, few existing pumps are certified to dispense E15, and no dedicated tanks are readily available to store E15. Also, some state and federal regulations would have to change before E15 can be legally sold. The National Association of Convenience Stores, which represents most gasoline retailers, considers the potential for actual E15 demand is small, "because the auto industry is not embracing the fuel and is not adjusting their warranties or recommendations for the fuel type." One possible solution to the infrastructure barriers is the introduction of blender pumps that allow consumers to turn a dial to select the level of ethanol, which would also allow owners of flexible-fuel cars to buy E85 fuel.
In June 2011 EPA, in cooperation with the Federal Trade Commission, issued its final ruling regarding the E15 warning label required to be displayed in all E15 fuel dispensers in the U.S. to inform consumers about what vehicles can, and what vehicles and equipment cannot, use the E15 blend. Both the Alliance of Automobile Manufacturers and the National Petrochemical and Refiners Association complained that relying solely on this warning label is not enough to protect consumers from misfueling. In July 2012, a fueling station in Lawrence, Kansas became the first in the U.S. to sell the E15 blend. The fuel is sold through a blender pump that allows customers to choose between E10, E15, E30 or E85, with the latter blends sold only to flexible-fuel vehicles. , there are about 24 fueling stations selling E15 out of 180,000 stations across the U.S.
In December 2010, several groups, including the Alliance of Automobile Manufacturers, the American Petroleum Institute, the Association of International Automobile Manufacturers, the National Marine Manufacturers Association, the Outdoor Power Equipment Institute, and the Grocery Manufacturers Association, filed suit against the EPA in the United States Court of Appeals for the District of Columbia Circuit. The plaintiffs argued the EPA does not have the authority to issue a “partial waiver” that covers some cars and not others. Among other arguments, the groups argued that the higher ethanol blend is not only a problem for cars, but also for fuel pumps and underground tanks not designed for the E15 mixture. It was also argued that the rise in ethanol has contributed to the big jump in corn prices in recent years. In August 2012, the federal appeals court rejected the suit against the EPA. The case was thrown out on a technical reason, as the court ruled the groups did not have legal standing to challenge EPA's decision to issue the waiver for E15. In June 2013 the U.S. Supreme Court declined to hear an appeal from industry groups opposed to the EPA ruling about E15, and let the 2012 federal appeals court ruling stand.
, sales of E15 are not authorized in California, and according to the California Air Resources Board (CARB), the blend is still awaiting approval, and in a public statement the agency said that "it would take several years to complete the vehicle testing and rule development necessary to introduce a new transportation fuel into California's market."
According to a survey conducted by the American Automobile Association (AAA) in 2012, only about 12 million out of the more than 240 million light-duty vehicles on U.S. roads in 2012 were approved by manufacturers as fully compliant with E15 gasoline. According to the association, BMW, Chrysler, Nissan, Toyota, and Volkswagen warned that their warranties will not cover E15-related damage. Despite the controversy, in order to adjust to EPA regulations, 2012 and 2013 model year vehicles manufactured by General Motors can use fuel containing up to 15 percent ethanol, as indicated in the vehicle owners' manuals. However, the carmaker warned that for model year 2011 or earlier vehicles, they "strongly recommend that GM customers refer to their owners manuals for the proper fuel designation for their vehicles." Ford Motor Company also is manufacturing all of its 2013 vehicles E15 compatible, including hybrid electrics and vehicles with Ecoboost engines. Also Porsches built since 2001 are approved by their manufacturer to use E15. Volkswagen announced that for the 2014 model year, its entire lineup will be E15 capable. Fiat Chrysler Automobiles announced in August 2015 that all 2016 model year Chrysler/Fiat, Jeep, Dodge and Ram vehicles will be E15 compatible.
In November 2013, the Environmental Protection Agency opened for public comment its proposal to reduce the amount of ethanol required in the U.S. gasoline supply as mandated by the Energy Independence and Security Act of 2007. The agency cited problems with increasing the blend of ethanol above 10%. This limit, known as the "blend wall," refers to the practical difficulty in incorporating increasing amounts of ethanol into the transportation fuel supply at volumes exceeding those achieved by the sale of nearly all gasoline as E10.
hE15
A 15% hydrous ethanol and 85% gasoline blend, hE15, has been introduced at public gas stations in the Netherlands since 2008. Ethanol fuel specifications worldwide traditionally dictate use of anhydrous ethanol (less than 1% water) for gasoline blending. This results in additional costs, energy usage and environmental impacts associated with the extra processing step required to dehydrate the hydrous ethanol produced via distillation (3.5-4.9 vol.% water) to meet the current anhydrous ethanol specifications. A patented discovery reveals hydrous ethanol can be effectively used in most ethanol/gasoline blending applications.
According to the Brazilian Agência Nacional do Petróleo (ANP) specification, hydrous ethanol contains up to 4.9 vol.% water. In hE15, this would be up to 0.74 vol.% water in the overall mixture. Japanese and German scientific evidence revealed that this water acts as an inhibitor of corrosion by ethanol.
The experiments show that water in fuel ethanol inhibits dry corrosion. At 10,000 ppm water in the E50 experiments by JARI and 3,500 ppm water in the E20 experiments by TU Darmstadt the alcoholate/alkoxide corrosion stopped. Relative to the ethanol fraction of the fuel, this corresponds to 20,000 ppm, or 2 volume %, in the case of JARI and 5 × 3,500 = 17,500 ppm, or 1.75 volume %, in the case of TU Darmstadt. The observations are in line with the fact that hydrous ethanol is known for being less corrosive than anhydrous ethanol. The reaction mechanism will be the same at lower-mid blends. When enough water is present in the fuel, the aluminum reacts preferentially with water to produce aluminum oxide, repairing the protective aluminum oxide layer, which is why the corrosion stops. The aluminum alcoholate/alkoxide does not form a tight oxide layer, which is why the corrosion continues. In other words, water is essential to repair the holes in the oxide layer. Based on the Japanese/German results, a minimum of 2 vol.% or 2.52% m/m water is currently proposed in the revision of the hydrous ethanol specification for blending in petrol at E10+ levels. Water injection has additional positive effects on the engine performance (thermodynamic efficiency) and reduces overall CO2 emissions.
Overall, a transition from anhydrous to hydrous ethanol for gasoline blending is expected to make a significant contribution to ethanol's cost-competitiveness, fuel cycle net energy balance, air quality, and greenhouse gas emissions.
The level of blending above 10% (V/V) is chosen both from a technical (safety) perspective and to distinguish the product in Europe from regular unleaded petrol for reasons of taxes and customer clarity. Small-scale tests have shown many vehicles with modern engine types can run smoothly on this hydrous ethanol blend. Mixed tanking scenarios with anhydrous ethanol blends at 5% or 10% level do not induce phase separation. As avoiding mixing with E0, in particular at extremely low temperatures, in logistic systems and engines is not recommended, a separate specification for controlled usage is presented in a Netherlands Technical Agreement NTA 8115. The NTA 8115 is written for a worldwide application in trading and fuel blending.
E20, E25
E20 contains 20% ethanol and 80% gasoline, while E25 contains 25% ethanol. These blends have been widely used in Brazil since the late 1970s. As a response to the 1973 oil crisis, the Brazilian government made mandatory the blend of ethanol fuel with gasoline, fluctuating between 10% and 22% from 1976 until 1992. Due to this mandatory minimum gasoline blend, pure gasoline (E0) is no longer sold in Brazil. A federal law was passed in October 1993 establishing a mandatory blend of 22% anhydrous ethanol (E22) in the entire country. This law also authorized the Executive to set different percentages of ethanol within pre-established boundaries, and since 2003, these limits were fixed at a maximum of 25% (E25) and a minimum of 20% (E20) by volume. Since then, the government has set the percentage on the ethanol blend according to the results of the sugarcane harvest and ethanol production from sugarcane, resulting in blend variations even within the same year.
Since July 1, 2007, the mandatory blend was set at 25% of anhydrous ethanol (E25) by executive decree, and this has been the standard gasoline blend sold throughout Brazil most of the time as of 2011. However, as a result of a supply shortage and the resulting high ethanol fuel prices, in 2010, the government mandated a temporary 90-day blend reduction from E25 to E20 beginning February 1, 2010. As prices rose abruptly again due to supply shortages that took place again between the 2010 and 2011 harvest seasons, some ethanol had to be imported from the United States, and in April 2011, the government reduced the minimum mandatory blend to 18%, leaving the mandatory blend range between E18 and E25.
All Brazilian automakers have adapted their gasoline engines to run smoothly with this range of mixtures, thus, all gasoline vehicles are built to run with blends from E20 to E25, defined by local law as "common gasoline type C". Some vehicles might work properly with lower concentrations of ethanol, but with a few exceptions, they are unable to run smoothly with pure gasoline, which causes engine knocking, as vehicles traveling to neighboring South American countries have demonstrated. Flex-fuel vehicles, which can run on any type of gasoline E20-E25 up to 100% hydrous ethanol (E100 or hydrated ethanol) ratios, were first available in mid-2003. In July 2008, 86% of all new light vehicles sold in Brazil were flexible-fuel, and only two carmakers build models with a flex-fuel engine optimized to operate with pure gasoline (E0): Renault with the models Clio, Symbol, Logan, Sandero and Mégane, and Fiat with the Siena Tetrafuel.
Thailand introduced E20 in 2008, but shortages in ethanol supplies by mid-2008 caused a delay in the expansion of the E20 fueling station network in the country. By mid-2010, 161 fueling stations were selling E20, and sales have risen 80% since April 2009. The rapid growth in E20 demand is because most vehicle models launched since 2009 were E20-compatible, and sales of E20 are expected to grow faster once more local automakers start producing small, E20-compatible, fuel-efficient cars. The Thai government is promoting ethanol usage through subsidies, as ethanol costs four baht (about 12 US cents) a litre more than gasoline.
A state law approved in Minnesota in 2005 mandated that ethanol comprise 20% of all gasoline sold in this American state beginning in 2013. Successful tests have been conducted to determine the performance under E20 by current vehicles and fuel dispensing equipment designed for E10. However, this mandate was later delayed to 2015, and has never taken effect because the federal EPA has yet to authorize the use of E20 as a replacement for gasoline.
A study commissioned by BP and published in September 2013, concluded that the use of advanced biofuels in the UK, and particularly E20 cellulosic ethanol, is a more cost-effective way of reducing emissions than using plug-in electric vehicles (PEVs) in the timeframe to 2030. The study also found that the use of higher blends of biofuels is complementary to hybrid electric vehicles (HEVs) and plug-in hybrids (PHEVs). Battery electric vehicles (BEVs) can deliver strong CO2 savings with a decarbonised electric grid, but are expected to have significantly higher costs than internal combustion engine vehicles and hybrid cars to 2030, as the latter are expected to be the most popular models by 2030. According to the study, in 2030 an E20 blend in an HEV can achieve a 10% emission savings compared to an HEV running on E5, for an annual fuel cost premium of compared to an annual cost of for an all-electric car.
E70, E75
E70 contains 70% ethanol and 30% gasoline, while E75 contains 75% ethanol. These winter blends are used in the United States and Sweden for E85 flexible-fuel vehicles during the cold weather, but still sold at the pump labeled as E85. The seasonal reduction of the ethanol content to an E85 winter blend is mandated to avoid cold starting problems at low temperatures.
In the US, this seasonal reduction of the ethanol content to E70 applies only in cold regions, where temperatures fall below during the winter. In Wyoming for example, E70 is sold as E85 from October to May. In Sweden, all E85 flexible-fuel vehicles use an E75 winter blend. This blend was introduced since the winter 2006-07 and E75 is used from November until March.
For temperatures below , all E85 flex vehicles require an engine block heater to avoid cold starting problems. The use of this device is also recommended for gasoline vehicles when temperatures drop below . Another option when extreme cold weather is expected is to add more pure gasoline in the tank, thus reducing the ethanol content below the E70 winter blend, or simply not to use E85 during extreme low temperature spells.
E85
E85, a mixture of 85% ethanol and ~15% gasoline, is generally the highest ethanol fuel mixture found in the United States and several European countries, particularly in Sweden, as this blend is the standard fuel for flexible-fuel vehicles. This mixture has an octane rating of 108. In addition, the ethanol molecule carries an oxygen atom, whereas gasoline does not, so the engine effectively ingests less air per unit volume of fuel, which reduces pumping losses and intensifies the exothermic combustion reaction. Ethanol fuel is therefore considered – although not widely known as – a form of "chemical supercharging", similar to that of nitrous oxide (N2O) and nitromethane (CH3NO2).
The 85% limit in the ethanol content was set to reduce ethanol emissions at low temperatures and to avoid cold starting problems during cold weather, at temperatures lower than . A further reduction in the ethanol content is used during the winter in regions where temperatures fall below and this blend is called Winter E85, as the fuel is still sold under the E85 label. A winter blend of E70 is mandated in some regions in the US, while Sweden mandates E75. Some regions in the United States now allow E51 (51% ethanol, 49% gasoline) to be sold as E85 in the winter months.
As of October 2010, nearly 3,000 E85 fuel pumps were in Europe, led by Sweden with 1,699 filling stations. The United States had 3,354 public E85 fuel pumps located in 2,154 cities by August 2014, mostly concentrated in the Midwest.
Thailand introduced E85 fuel by the end of 2008, and by mid-2010, only four E85 filling stations were available, with plans to expand to 15 stations by 2012.
A major restriction hampering sales of E85 flex vehicles or fuelling with E85, is the limited infrastructure available to sell E85 to the public, as by 2014 only 2 percent of motor fuel stations offered E85, up from about 1 percent in 2011. , there were only 3,218 gasoline fueling stations selling E85 to the public in the entire U.S., while about 156,000 retail motor fuel outlets do not offer the E85 blend. The number of E85 grew from 1,229 in 2007 to 2,442 in 2011, but only increased by 7% from 2011 to 2013, when the total reached 2,625. There is a great concentration of E85 stations in the Corn Belt states, and , the leading state is Minnesota with 274 stations, followed by Michigan with 231, Illinois with 225, Iowa with 204, Indiana with 188, Texas with 181, Wisconsin with 152, and Ohio with 126. Only eight states do not have E85 available to the public, Alaska, Delaware, Hawaii, Montana, Maine, New Hampshire, Rhode Island, and Vermont. The main constraint for a more rapid expansion of E85 availability is that it requires dedicated storage tanks at filling stations, at an estimated cost of for each dedicated ethanol tank. A study conducted by the U.S. Department of Energy concluded that every service station in America could be converted to handle E85 at a cost of $3.4 billion to $10.1 billion.
ED95
ED95 designates a blend of 95% ethanol and 5% ignition improver; it is used in modified diesel engines where high compression is used to ignite the fuel, as opposed to the operation of gasoline engines, where spark plugs are used. This fuel was developed by Swedish ethanol producer SEKAB. Because of the high ignition temperatures of pure ethanol, the addition of ignition improver is necessary for successful diesel engine operation. A diesel engine running on ethanol also has a higher compression ratio and an adapted fuel system.
This fuel has been used with success in many Swedish Scania buses since 1985, which has produced around 700 ethanol buses, more than 600 of them to Swedish cities, and more recently has also delivered ethanol buses for commercial service in Great Britain, Spain, Italy, Belgium, and Norway. As of June 2010 Stockholm has the largest ethanol ED95 bus fleet in the world.
As of 2010, the Swedish ED95 engine is in its third generation and already has complied with Euro 5 emission standards, without any kind of post-treatment of the exhaust gases. The ethanol-powered engine is also being certified as environmentally enhanced vehicle (EEV) in the Stockholm municipality. The EEV rule still has no date to enter into force in Europe and is stricter than the Euro 5 standard.
Nottingham became the first city in England to operate a regular bus service with ethanol-fuelled vehicles. Three ED95 single-deck buses entered regular service in the city in March 2008. Soon after, Reading also introduced ED95 double-deck buses.
Under the auspices of the BioEthanol for Sustainable Transport project, more than 138 bioethanol ED95 buses were part of demonstration trial at four cities, three in Europe, and one in Brazil, between 2006 and 2009. A total of 127 ED95 buses operated in Stockholm, five buses operated in Madrid, three in La Spezia, and one in Brazil.
In Brazil, the first Scania ED95 bus with a modified diesel engine was introduced as a trial in São Paulo city in December 2007, and since November 2009, two ED95 buses were in regular service. The Brazilian trial project ran for three years and performance and emissions were monitored by the National Reference Center on Biomass (CENBIO- ) at the Universidade de São Paulo.
In November 2010, the municipal government of São Paulo city signed an agreement with UNICA, Cosan, Scania and Viação Metropolitana, a local bus operator, to introduced a fleet of 50 ethanol-powered ED95 buses by May 2011. Scania manufactures the bus engine and chassis in its plant located in São Bernardo do Campo, São Paulo, using the same technology and fuel as the ED95 buses already operating in Stockholm. The bus body is a Brazilian CAIO. The first ethanol-powered buses were delivered in May 2011, and the 50 buses will start regular service in June 2011 in the southern region of São Paulo. The 50 ED95 buses had a cost of R$ 20 million () and due to the higher cost of the ED95 fuel and the lower energy content of ethanol as compared to diesel, one of the firms participating in the cooperation agreement, Raísen (a joint venture between Royal Dutch Shell and Cosan), supplies the fuel to the municipality at 70% of the market price of regular diesel.
E100
E100 is pure ethanol fuel. Straight hydrous ethanol as an automotive fuel has been widely used in Brazil since the late 1970s for neat ethanol vehicles and more recently for flexible-fuel vehicles. The ethanol fuel used in Brazil is distilled close to the azeotrope mixture of 95.63% ethanol and 4.37% water (by weight) which is approximately 3.5% water by volume.
The azeotrope is the highest concentration of ethanol that can be achieved by simple fractional distillation. The maximum water concentration according to the Agência Nacional do Petróleo (ANP) specification is 4.9 vol.% (approximately 6.1 weight%). The E nomenclature is not adopted in Brazil, but hydrated ethanol can be tagged as E100, meaning it does not have any gasoline, because the water content is not an additive, but rather a residue from the distillation process. However, straight hydrous ethanol is also called E95 by some authors.
The first commercial vehicle capable of running on pure ethanol was the Ford Model T, produced from 1908 through 1927. It was fitted with a carburetor with adjustable jetting, allowing use of gasoline or ethanol, or a combination of both. At that time, other car manufacturers also provided engines for ethanol fuel use. Thereafter, and as a response to the 1973 and 1979 energy crises, the first modern vehicle capable of running with pure hydrous ethanol (E100) was launched in the Brazilian market, the Fiat 147, after testing with several prototypes developed by the Brazilian subsidiaries of Fiat, Volkswagen, General Motors and Ford. , there were 1.1 million neat ethanol vehicles still in use in Brazil. Since 2003, Brazilian newer flex-fuel vehicles are capable of running on pure hydrous ethanol (E100) or blended with any combination of E20 to E27.5 gasoline (a mixture made with anhydrous ethanol), the national mandatory blend. , there were 17.1 million flexible-fuel vehicles running on Brazilian roads.
E100 imposes a limitation on normal vehicle operation, as ethanol's lower evaporative pressure (as compared to gasoline) causes problems when cold starting the engine at temperatures below . For this reason, both pure ethanol and E100 flex-fuel vehicles are built with an additional small gasoline reservoir inside the engine compartment to help in starting the engine when cold by initially injecting gasoline. Once started, the engine is then switched back to ethanol. An improved flex-fuel engine generation was developed to eliminate the need for the secondary gas tank by warming the ethanol fuel during starting, and allowing them to start at temperatures as low as , the lowest temperature expected anywhere in the Brazilian territory. The Polo E-Flex, launched in March 2009, was the first flex-fuel model without an auxiliary tank for cold start. The warming system, called Flex Start, was developed by Robert Bosch GmbH.
Swedish carmakers have developed ethanol-only capable engines for the new Saab Aero X BioPower 100 Concept E100, with a V6 engine which is fuelled entirely by E100 bioethanol, and the limited edition of the Koenigsegg CCXR, a version of the CCX converted to use E85 or E100, as well as standard 98-octane gasoline, and currently the fastest and most powerful flex-fuel vehicle with its twin-supercharged V8 producing 1018 hp when running on biofuel, as compared to 806 hp on 91-octane unleaded gasoline.
The higher fuel efficiency of E100 (compared to methanol) in high performance race cars resulted in Indianapolis 500 races in 2007 and 2008 being run on 100% fuel-grade ethanol.
Use limitations
Modifications to engines
The use of ethanol blends in conventional gasoline vehicles is restricted to low mixtures, as ethanol-gasoline is corrosive and can degrade some of the materials in the engine and fuel system. Also, the engine has to be adjusted for a higher compression ratio as compared to a pure gasoline engine to take advantage of ethanol's higher oxygen content, thus allowing an improvement in fuel efficiency and a reduction of tailpipe emissions. The following table shows the required modifications to gasoline engines to run smoothly and without degrading any materials. This information is based on the modifications made by the Brazilian automotive industry at the beginning of the ethanol program in that country in the late 1970s, and reflects the experience of Volkswagen do Brasil.
Disadvantages to ethanol fuel blends when used in engines designed exclusively for gasoline include lowered fuel mileage, metal corrosion, deterioration of plastic and rubber fuel system components, clogged fuel systems, fuel injectors, and carburetors, delamination of composite fuel tanks, varnish buildup on engine parts, damaged or destroyed internal engine components, water absorption, fuel phase separation, and shortened fuel storage life. Many major auto, marine, motorcycle, lawn equipment, generator, and other internal combustion engine manufacturers have issued warnings and precautions about the use of ethanol-blended gasolines of any type in their engines, and the Federal Aviation Administration and major aviation engine manufacturers have prohibited the use of automotive gasolines blended with ethanol in light aircraft due to safety issues from fuel system and engine damage.
Other disadvantages
See also
Butanol fuel
Ethanol fuel
Ethanol fuel energy balance
Ethanol fuel in Brazil
Biofuel in Sweden
Ethanol fuel in the United States
Food vs. fuel
Indirect land use change impacts of biofuels
List of flexible-fuel vehicles by car manufacturer
List of gasoline additives
Notes
References
External links
2011 NACS Annual Fuels Report
Ethanol fuel
Petroleum products | Common ethanol fuel mixtures | [
"Chemistry"
] | 7,470 | [
"Petroleum",
"Petroleum products"
] |
2,155,356 | https://en.wikipedia.org/wiki/Adomian%20decomposition%20method | The Adomian decomposition method (ADM) is a semi-analytical method for solving ordinary and partial nonlinear differential equations. The method was developed from the 1970s to the 1990s by George Adomian, chair of the Center for Applied Mathematics at the University of Georgia.
It is further extensible to stochastic systems by using the Ito integral.
The aim of this method is towards a unified theory for the solution of partial differential equations (PDE); an aim which has been superseded by the more general theory of the homotopy analysis method.
The crucial aspect of the method is employment of the "Adomian polynomials" which allow for solution convergence of the nonlinear portion of the equation, without simply linearizing the system. These polynomials mathematically generalize to a Maclaurin series about an arbitrary external parameter; which gives the solution method more flexibility than direct Taylor series expansion.
Ordinary differential equations
Adomian method is well suited to solve Cauchy problems, an important class of problems which include initial conditions problems.
Application to a first order nonlinear system
An example of an initial condition problem for an ordinary differential equation is the following:
To solve the problem, the highest degree differential operator (written here as L) is put on the left side, in the following way:
with $L = d/dt$ and . Now the solution is assumed to be an infinite series of contributions: $y = \sum_{n=0}^{\infty} y_n = y_0 + y_1 + y_2 + \cdots$
Replacing in the previous expression, we obtain:
Now we identify y0 with some explicit expression on the right, and yi, i = 1, 2, 3, ..., with some expression on the right containing terms of lower order than i. For instance:
In this way, any contribution can be explicitly calculated at any order. If we settle for the first four terms, the approximant is $y \approx y_0 + y_1 + y_2 + y_3$.
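As a concrete illustration of this iterative scheme, the following sketch applies it with SymPy to the simple initial-value problem $y' = y^2$, $y(0) = 1$ (exact solution $1/(1-t)$); this particular problem and all names in the code are chosen for illustration and are not necessarily the example treated above.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

def adomian_polynomial(N, comps, n):
    """A_n = (1/n!) * d^n/dlam^n N(sum_k lam**k * y_k), evaluated at lam = 0."""
    parametrized = sum(c * lam**k for k, c in enumerate(comps))
    return sp.diff(N(parametrized), lam, n).subs(lam, 0) / sp.factorial(n)

# Solve y' = y^2, y(0) = 1, whose exact solution is 1/(1 - t).
N = lambda y: y**2                     # nonlinear term
comps = [sp.Integer(1)]                # y_0 = y(0)
for n in range(5):
    A_n = sp.expand(adomian_polynomial(N, comps, n))
    comps.append(sp.integrate(A_n, (t, 0, t)))   # y_{n+1}(t) = integral of A_n from 0 to t

print(sp.expand(sum(comps)))   # 1 + t + t**2 + ... : the Maclaurin series of 1/(1 - t)
```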
Application to Blasius equation
A second example, with more complex boundary conditions, is the Blasius equation for a flow in a boundary layer:
$\frac{d^{3}f}{d\eta^{3}} + \frac{f}{2}\,\frac{d^{2}f}{d\eta^{2}} = 0$
With the following conditions at the boundaries:
$f(0) = 0, \quad f'(0) = 0, \quad f'(\eta) \to 1 \ \text{as}\ \eta \to \infty.$
Linear and non-linear operators are now called and , respectively. Then, the expression becomes:
and the solution may be expressed, in this case, in the following simple way:
where: If:
and:
Adomian's polynomials to linearize the non-linear term can be obtained systematically by using the following rule:
$A_n = \frac{1}{n!} \left[ \frac{d^{n}}{d\lambda^{n}}\, N\!\left( \sum_{k=0}^{\infty} \lambda^{k} u_{k} \right) \right]_{\lambda = 0},$
where $\lambda$ is a grouping parameter that keeps track of the order of the contributions and is set to zero after the differentiation.
Boundary conditions must be applied, in general, at the end of each approximation. In this case, the integration constants must be grouped into three final independent constants. However, in our example, the three constants appear grouped from the beginning in the form shown in the formal solution above. After applying the two first boundary conditions we obtain the so-called Blasius series:
To obtain γ we have to apply boundary conditions at ∞, which may be done by writing the series as a Padé approximant:
where $L = M$. The limit at infinity of this expression is $a_L/b_M$.
If we choose b0 = 1, M linear equations for the b coefficients are obtained:
Then, we obtain the a coefficients by means of the following sequence:
In our example:
Which when γ = 0.0408 becomes:
with the limit:
Which is approximately equal to 1 (from boundary condition (3)) with an accuracy of 4/1000.
Partial differential equations
Application to a rectangular system with nonlinearity
One of the most frequent problems in physical sciences is to obtain the solution of a (linear or nonlinear) partial differential equation which satisfies a set of functional values on a rectangular boundary. An example is the following problem:
with the following boundary conditions defined on a rectangle:
This kind of partial differential equation appears frequently coupled with others in science and engineering. For instance, in the incompressible fluid flow problem, the Navier–Stokes equations must be solved in parallel with a Poisson equation for the pressure.
Decomposition of the system
Let us use the following notation for the problem (1):
where Lx, Ly are double derivative operators and N is a non-linear operator.
The formal solution of (2) is:
Expanding now u as a set of contributions to the solution we have:
By substitution in (3) and making a one-to-one correspondence between the contributions on the left side and the terms on the right side we obtain the following iterative scheme:
where the couple {an(y), bn(y)} is the solution of the following system of equations:
here $\sum_{k=0}^{n} u_k$ is the nth-order approximant to the solution and $Nu$ has been consistently expanded in Adomian polynomials:
$Nu = \sum_{n=0}^{\infty} A_n,$
where $A_n = \sum_{\nu=1}^{n} C(\nu, n)\, f^{(\nu)}(u_0)$ and $f(u) = u^2$ in the example (1).
Here C(ν, n) are products (or sum of products) of ν components of u whose subscripts sum up to n, divided by the factorial of the number of repeated subscripts. It is only a thumb-rule to order systematically the decomposition to be sure that all the combinations appearing are utilized sooner or later.
The series $\sum_{n=0}^{\infty} A_n$ is equal to the sum of a generalized Taylor series about $u_0$.
For the example (1) the Adomian polynomials are:
Other possible choices are also possible for the expression of An.
Series solutions
Cherruault established that the series terms obtained by Adomian's method approach zero as 1/(mn)! if m is the order of the highest linear differential operator and that . With this method the solution can be found by systematically integrating along any of the two directions: in the x-direction we would use expression (3); in the alternative y-direction we would use the following expression:
where c(x), d(x) are obtained from the boundary conditions at y = −yl and y = yl:
If we call the two respective solutions x-partial solution and y-partial solution, one of the most interesting consequences of the method is that the x-partial solution uses only the two boundary conditions (1-a) and the y-partial solution uses only the conditions (1-b).
Thus, one of the two sets of boundary functions {f1, f2} or {g1, g2} is redundant, and this implies that a partial differential equation with boundary conditions on a rectangle cannot have arbitrary boundary conditions on the borders, since the conditions at x = x1, x = x2 must be consistent with those imposed at y = y1 and y = y2.
An example to clarify this point is the solution of the Poisson problem with the following boundary conditions:
By using Adomian's method and a symbolic processor (such as Mathematica or Maple) it is easy to obtain the third order approximant to the solution. This approximant has an error lower than 5×10−16 in any point, as it can be proved by substitution in the initial problem and by displaying the absolute value of the residual obtained as a function of (x, y).
The solution at y = -0.25 and y = 0.25 is given by specific functions that in this case are:
and g2(x) = g1(x) respectively.
If a (double) integration is now performed in the y-direction using these two boundary functions the same solution will be obtained, which satisfy u(x=0, y) = 0 and u(x=0.5, y) = 0 and cannot satisfy any other condition on these borders.
Some people are surprised by these results; it seems strange that not all initial-boundary conditions must be explicitly used to solve a differential system. However, it is a well established fact that any elliptic equation has one and only one solution for any functional conditions in the four sides of a rectangle provided there is no discontinuity on the edges.
The cause of the misconception is that scientists and engineers normally think in a boundary condition in terms of weak convergence in a Hilbert space (the distance to the boundary function is small enough to practical purposes). In contrast, Cauchy problems impose a point-to-point convergence to a given boundary function and to all its derivatives (and this is a quite strong condition!).
For the first ones, a function satisfies a boundary condition when the area (or another functional distance) between it and the true function imposed in the boundary is so small as desired; for the second ones, however, the function must tend to the true function imposed in any and every point of the interval.
The commented Poisson problem does not have a solution for any functional boundary conditions f1, f2, g1, g2; however, given f1, f2 it is always possible to find boundary functions g1*, g2* so close to g1, g2 as desired (in the weak convergence meaning) for which the problem has solution. This property makes it possible to solve Poisson's and many other problems with arbitrary boundary conditions but never for analytic functions exactly specified on the boundaries.
The reader can convince himself (herself) of the high sensitivity of PDE solutions to small changes in the boundary conditions by solving this problem integrating along the x-direction, with boundary functions slightly different even though visually not distinguishable. For instance, the solution with the boundary conditions:
at x = 0 and x = 0.5, and the solution with the boundary conditions:
at x = 0 and x = 0.5, produce lateral functions with different sign convexity even though both functions are visually not distinguishable.
Solutions of elliptic problems and other partial differential equations are highly sensitive to small changes in the boundary function imposed when only two sides are used. And this sensitivity is not easily compatible with models that are supposed to represent real systems, which are described by means of measurements containing experimental errors and are normally expressed as initial-boundary value problems in a Hilbert space.
Improvements to the decomposition method
At least three methods have been reported
to obtain the boundary functions g1*, g2* that are compatible with any lateral set of conditions {f1, f2} imposed. This makes it possible to find the analytical solution of any PDE boundary problem on a closed rectangle with the required accuracy, so allowing to solve a wide range of problems that the standard Adomian's method was not able to address.
The first one perturbs the two boundary functions imposed at x = 0 and x = x1 (condition 1-a) with a Nth-order polynomial in y: p1, p2 in such a way that: f1' = f1 + p1, f2' = f2 + p2, where the norm of the two perturbation functions are smaller than the accuracy needed at the boundaries. These p1, p2 depend on a set of polynomial coefficients ci, i = 1, ..., N. Then, the Adomian method is applied and functions are obtained at the four boundaries which depend on the set of ci, i = 1, ..., N. Finally, a boundary function F(c1, c2, ..., cN) is defined as the sum of these four functions, and the distance between F(c1, c2, ..., cN) and the real boundary functions ((1-a) and (1-b)) is minimized. The problem has been reduced, in this way, to the global minimization of the function F(c1, c2, ..., cN) which has a global minimum for some combination of the parameters ci, i = 1, ..., N. This minimum may be found by means of a genetic algorithm or by using some other optimization method, as the one proposed by Cherruault (1999).
A second method to obtain analytic approximants of initial-boundary problems is to combine Adomian decomposition with spectral methods.
Finally, the third method proposed by García-Olivares is based on imposing analytic solutions at the four boundaries, but modifying the original differential operator in such a way that it is different from the original one only in a narrow region close to the boundaries, and it forces the solution to satisfy exactly analytic conditions at the four boundaries.
Integral Equations
The Adomian decomposition method may also be applied to linear and nonlinear integral equations to obtain solutions. This corresponds to the fact that many differential equations can be converted into integral equations.
Adomian Decomposition Method
The Adomian decomposition method for a nonhomogeneous Fredholm integral equation of the second kind goes as follows:
Given an integral equation of the form:
$\varphi(x) = f(x) + \lambda \int_a^b K(x,t)\,\varphi(t)\,dt$
We assume we may express the solution in series form:
$\varphi(x) = \sum_{n=0}^{\infty} \varphi_n(x)$
Plugging the series form into the integral equation then yields:
$\sum_{n=0}^{\infty} \varphi_n(x) = f(x) + \lambda \int_a^b K(x,t) \sum_{n=0}^{\infty} \varphi_n(t)\,dt$
Assuming that the sum converges absolutely to $\varphi$, we may interchange the sum and integral as follows:
$\sum_{n=0}^{\infty} \varphi_n(x) = f(x) + \lambda \sum_{n=0}^{\infty} \int_a^b K(x,t)\,\varphi_n(t)\,dt$
Expanding the sum on both sides yields:
$\varphi_0(x) + \varphi_1(x) + \cdots = f(x) + \lambda \int_a^b K(x,t)\,\varphi_0(t)\,dt + \lambda \int_a^b K(x,t)\,\varphi_1(t)\,dt + \cdots$
Hence we may associate each $\varphi_n$ in the following recurrent manner:
$\varphi_0(x) = f(x), \qquad \varphi_{n+1}(x) = \lambda \int_a^b K(x,t)\,\varphi_n(t)\,dt,$
which gives us the solution in the solution form above.
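A minimal sketch of this recursion in SymPy is shown below; the kernel, interval and all names are illustrative assumptions chosen for demonstration and are not the example treated in this article.

```python
import sympy as sp

x, t = sp.symbols("x t")

def adm_fredholm(f, K, lam, a=0, b=1, n_terms=8):
    """phi_0 = f(x); phi_{k+1}(x) = lam * integral_a^b K(x,t) phi_k(t) dt."""
    comps = [f]
    for _ in range(n_terms - 1):
        prev_in_t = comps[-1].subs(x, t)                      # phi_k written as a function of t
        comps.append(sp.simplify(lam * sp.integrate(K * prev_in_t, (t, a, b))))
    return sp.simplify(sum(comps))

# Illustrative equation: phi(x) = x + (1/2) * integral_0^1 x*t*phi(t) dt,
# whose exact solution is phi(x) = (6/5) x.
approx = adm_fredholm(x, x * t, sp.Rational(1, 2))
print(approx)    # approaches 6*x/5 as more terms are kept
```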
Example
Given the Fredholm integral equation:
Since , we can set:
...
Hence the solution may be written as:
Since this is a telescoping series, we can see that every term after cancels and may be regarded as "noise". Thus, becomes:
Gallery
See also
Order of approximation
References
Differential equations | Adomian decomposition method | [
"Mathematics"
] | 2,689 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |
13,224,331 | https://en.wikipedia.org/wiki/Ecological%20resilience | In ecology, resilience is the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and subsequently recovering. Such perturbations and disturbances can include stochastic events such as fires, flooding, windstorms, insect population explosions, and human activities such as deforestation, fracking of the ground for oil extraction, pesticide sprayed in soil, and the introduction of exotic plant or animal species. Disturbances of sufficient magnitude or duration can profoundly affect an ecosystem and may force an ecosystem to reach a threshold beyond which a different regime of processes and structures predominates. When such thresholds are associated with a critical or bifurcation point, these regime shifts may also be referred to as critical transitions.
Human activities that adversely affect ecological resilience such as reduction of biodiversity, exploitation of natural resources, pollution, land use, and anthropogenic climate change are increasingly causing regime shifts in ecosystems, often to less desirable and degraded conditions. Interdisciplinary discourse on resilience now includes consideration of the interactions of humans and ecosystems via socio-ecological systems, and the need for shift from the maximum sustainable yield paradigm to environmental resource management and ecosystem management, which aim to build ecological resilience through "resilience analysis, adaptive resource management, and adaptive governance". Ecological resilience has inspired other fields and continues to challenge the way they interpret resilience, e.g. supply chain resilience.
Definitions
The IPCC Sixth Assessment Report defines resilience as, “not just the ability to maintain essential function, identity and structure, but also the capacity for transformation.” The IPCC considers resilience both in terms of ecosystem recovery as well as the recovery and adaptation of human societies to natural disasters.
The concept of resilience in ecological systems was first introduced by the Canadian ecologist C.S. Holling in order to describe the persistence of natural systems in the face of changes in ecosystem variables due to natural or anthropogenic causes. Resilience has been defined in two ways in ecological literature:
as the time required for an ecosystem to return to an equilibrium or steady-state following a perturbation (which is also defined as stability by some authors). This definition of resilience is used in other fields such as physics and engineering, and hence has been termed ‘engineering resilience’ by Holling.
as "the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks".
The second definition has been termed ‘ecological resilience’, and it presumes the existence of multiple stable states or regimes.
For example, some shallow temperate lakes can exist within either clear water regime, which provides many ecosystem services, or a turbid water regime, which provides reduced ecosystem services and can produce toxic algae blooms. The regime or state is dependent upon lake phosphorus cycles, and either regime can be resilient dependent upon the lake's ecology and management.
Likewise, Mulga woodlands of Australia can exist in a grass-rich regime that supports sheep herding, or a shrub-dominated regime of no value for sheep grazing. Regime shifts are driven by the interaction of fire, herbivory, and variable rainfall. Either state can be resilient dependent upon management.
Theory
Ecologists Brian Walker, C S Holling and others describe four critical aspects of resilience: latitude, resistance, precariousness, and panarchy.
The first three can apply both to a whole system or the sub-systems that make it up.
Latitude: the maximum amount a system can be changed before losing its ability to recover (before crossing a threshold which, if breached, makes recovery difficult or impossible).
Resistance: the ease or difficulty of changing the system; how “resistant” it is to being changed.
Precariousness: how close the current state of the system is to a limit or “threshold”.
Panarchy: the degree to which a certain hierarchical level of an ecosystem is influenced by other levels. For example, organisms living in communities that are in isolation from one another may be organized differently from the same type of organism living in a large continuous population, thus the community-level structure is influenced by population-level interactions.
Closely linked to resilience is adaptive capacity, which is the property of an ecosystem that describes change in stability landscapes and resilience. Adaptive capacity in socio-ecological systems refers to the ability of humans to deal with change in their environment by observation, learning and altering their interactions.
Human impacts
Resilience refers to an ecosystem's stability and its capability of tolerating disturbance and restoring itself. If the disturbance is of sufficient magnitude or duration, a threshold may be reached where the ecosystem undergoes a regime shift, possibly permanently. Sustainable use of environmental goods and services requires understanding and consideration of the resilience of the ecosystem and its limits. However, the elements which influence ecosystem resilience are complicated. For example, elements such as the water cycle, fertility, biodiversity, plant diversity and climate interact strongly and affect different systems.
There are many areas where human activity impacts upon and is also dependent upon the resilience of terrestrial, aquatic and marine ecosystems. These include agriculture, deforestation, pollution, mining, recreation, overfishing, dumping of waste into the sea and climate change.
Agriculture
Agriculture is a significant case study in which the resilience of terrestrial ecosystems should be considered. The organic matter (carbon- and nitrogen-containing material) in soil, which is replenished by diverse plant growth, is the main source of nutrients for crop growth. In response to global food demand and shortages, however, intensive agricultural practices, including the application of herbicides to control weeds, fertilisers to accelerate and increase crop growth, and pesticides to control insects, reduce plant biodiversity while the supply of organic matter that replenishes soil nutrients and prevents surface runoff is diminished. This leads to a reduction in soil fertility and productivity. More sustainable agricultural practices would take into account and estimate the resilience of the land, and monitor and balance the input and output of organic matter.
Deforestation
In this context, deforestation means crossing the threshold of a forest's resilience, so that it loses the ability to return to its original stable state. To recover, a forest ecosystem needs suitable interactions among climatic conditions and biological processes, and enough area. Generally, the resilience of a forest system allows recovery from relatively small-scale damage (such as lightning or landslides) affecting up to 10 percent of its area. The larger the scale of damage, the more difficult it is for the forest ecosystem to restore and maintain its balance.
Deforestation also decreases biodiversity of both plant and animal life and can lead to an alteration of the climatic conditions of an entire area. According to the IPCC Sixth Assessment Report, carbon emissions due to land use and land use changes predominantly come from deforestation, thereby increasing the long-term exposure of forest ecosystems to drought and other climate change-induced damages. Deforestation can also lead to species extinction, which can have a domino effect particularly when keystone species are removed or when a significant number of species is removed and their ecological function is lost.
Climate change
Overfishing
The United Nations Food and Agriculture Organisation has estimated that over 70% of the world's fish stocks are either fully exploited or depleted, which means that overfishing, driven largely by the rapid growth of fishing technology, threatens marine ecosystem resilience. One of the negative effects on marine ecosystems is that over the last half-century the stocks of coastal fish have been drastically reduced as a result of overfishing for its economic benefits. Bluefin tuna is at particular risk of extinction. Depletion of fish stocks results in lowered biodiversity, and consequently imbalance in the food chain and increased vulnerability to disease.
In addition to overfishing, coastal communities are suffering as growing numbers of large commercial fishing vessels displace small local fishing fleets. Many local lowland rivers which are sources of fresh water have become degraded because of inflows of pollutants and sediments.
Dumping of waste into the sea
Dumping both depends upon and threatens ecosystem resilience. Dumping of sewage and other contaminants into the ocean is often undertaken on the assumption that the oceans will disperse the material and that marine life can adapt to and process the marine debris and contaminants. However, waste dumping threatens marine ecosystems by poisoning marine life and causing eutrophication.
Poisoning marine life
According to the International Maritime Organisation oil spills can have serious effects on marine life. The OILPOL Convention recognized that most oil pollution resulted from routine shipboard operations such as the cleaning of cargo tanks. In the 1950s, the normal practice was simply to wash the tanks out with water and then pump the resulting mixture of oil and water into the sea. OILPOL 54 prohibited the dumping of oily wastes within a certain distance from land and in 'special areas' where the danger to the environment was especially acute. In 1962 the limits were extended by means of an amendment adopted at a conference organized by IMO. Meanwhile, IMO in 1965 set up a Subcommittee on Oil Pollution, under the auspices of its Maritime Safety committee, to address oil pollution issues.
The threat of oil spills to marine life is recognised by those likely to be responsible for the pollution, such as the International Tanker Owners Pollution Federation:
The marine ecosystem is highly complex and natural fluctuations in species composition, abundance and distribution are a basic feature of its normal function. The extent of damage can therefore be difficult to detect against this background variability. Nevertheless, the key to understanding damage and its importance is whether spill effects result in a downturn in breeding success, productivity, diversity and the overall functioning of the system. Spills are not the only pressure on marine habitats; chronic urban and industrial contamination or the exploitation of the resources they provide are also serious threats.
Eutrophication and algal blooms
The Woods Hole Oceanographic Institution calls nutrient pollution the most widespread, chronic environmental problem in the coastal ocean. The discharges of nitrogen, phosphorus, and other nutrients come from agriculture, waste disposal, coastal development, and fossil fuel use. Once nutrient pollution reaches the coastal zone, it stimulates harmful overgrowths of algae, which can have direct toxic effects and ultimately result in low-oxygen conditions. Certain types of algae are toxic. Overgrowths of these algae result in harmful algal blooms, which are more colloquially referred to as "red tides" or "brown tides". Zooplankton eat the toxic algae and begin passing the toxins up the food chain, affecting edibles like clams, and ultimately working their way up to seabirds, marine mammals, and humans. The result can be illness and sometimes death.
Sustainable development
There is increasing awareness that a greater understanding of, and emphasis on, ecosystem resilience is required to reach the goal of sustainable development. A similar conclusion is drawn by Perman et al., who use resilience to describe one of six concepts of sustainability: "A sustainable state is one which satisfies minimum conditions for ecosystem resilience through time". Resilience science has been evolving over the past decade, expanding beyond ecology to reflect systems thinking in fields such as economics and political science. As more and more people move into densely populated cities, using massive amounts of water, energy, and other resources, the need to combine these disciplines to consider the resilience of urban ecosystems and cities is of paramount importance.
Academic perspectives
The interdependence of ecological and social systems has gained renewed recognition since the late 1990s by academics including Berkes and Folke, and was developed further in 2002 by Folke et al. The concept of sustainable development has since evolved beyond the three pillars of sustainable development to place greater political emphasis on economic development, a movement which causes wide concern in environmental and social forums and which Clive Hamilton describes as "the growth fetish".
The proposed purpose of ecological resilience is ultimately to avert extinction; Walker, citing Holling, writes that "resilience is concerned with [measuring] the probabilities of extinction" (1973, p. 20). The significance of the environment and of resilience to sustainable development is becoming more apparent in academic writing. Folke et al. state that the likelihood of sustaining development is raised by "Managing for resilience", whilst Perman et al. propose that safeguarding the environment to "deliver a set of services" should be a "necessary condition for an economy to be sustainable". The growing application of resilience to sustainable development has produced a diversity of approaches and scholarly debates.
The flaw of the free market
The challenge of applying the concept of ecological resilience to the context of sustainable development is that it sits at odds with conventional economic ideology and policy making. Resilience questions the free market model within which global markets operate. Inherent to the successful operation of a free market is specialisation, which is required to achieve efficiency and increase productivity. This very act of specialisation weakens resilience by permitting systems to become accustomed to and dependent upon their prevailing conditions. In the event of unanticipated shocks, this dependency reduces the ability of the system to adapt to these changes. Correspondingly, Perman et al. note that "Some economic activities appear to reduce resilience, so that the level of disturbance to which the ecosystem can be subjected to without parametric change taking place is reduced".
Moving beyond sustainable development
Berkes and Folke table a set of principles to assist with "building resilience and sustainability" which consolidate approaches of adaptive management, local knowledge-based management practices and conditions for institutional learning and self-organisation.
More recently, it has been suggested by Andrea Ross that the concept of sustainable development is no longer adequate in assisting policy development fit for today's global challenges and objectives. This is because the concept of sustainable development is "based on weak sustainability" which doesn't take account of the reality of "limits to earth's resilience". Ross draws on the impact of climate change on the global agenda as a fundamental factor in the "shift towards ecological sustainability" as an alternative approach to that of sustainable development.
Because climate change is a major and growing driver of biodiversity loss, and because biodiversity and ecosystem functions and services significantly contribute to climate change adaptation, mitigation and disaster risk reduction, proponents of ecosystem-based adaptation suggest that the resilience of vulnerable human populations and of the ecosystem services upon which they depend are critical factors for sustainable development in a changing climate.
In environmental policy
Scientific research associated with resilience is beginning to play a role in influencing policy-making and subsequent environmental decision making.
This occurs in a number of ways:
Observed resilience within specific ecosystems drives management practice. When resilience is observed to be low, or impacts appear to be approaching a threshold, the management response can be to alter human behavior so as to reduce adverse impacts on the ecosystem.
Ecosystem resilience affects the way that development is permitted and environmental decision making is undertaken, similar to the way that existing ecosystem health affects what development is permitted. For instance, remnant vegetation in the states of Queensland and New South Wales is classified in terms of ecosystem health and abundance. Any impact that development has upon threatened ecosystems must consider the health and resilience of these ecosystems. This is governed by the Threatened Species Conservation Act 1995 in New South Wales and the Vegetation Management Act 1999 in Queensland.
International-level initiatives aim at improving socio-ecological resilience worldwide through the cooperation and contributions of scientific and other experts. An example of such an initiative is the Millennium Ecosystem Assessment, whose objective is "to assess the consequences of ecosystem change for human well-being and the scientific basis for action needed to enhance the conservation and sustainable use of those systems and their contribution to human well-being". Similarly, the United Nations Environment Programme's aim is "to provide leadership and encourage partnership in caring for the environment by inspiring, informing, and enabling nations and peoples to improve their quality of life without compromising that of future generations".
Environmental management in legislation
Ecological resilience and the thresholds by which resilience is defined are closely interrelated in the way that they influence environmental policy-making, legislation and subsequently environmental management. The ability of ecosystems to recover from certain levels of environmental impact is not explicitly noted in legislation; however, because of ecosystem resilience, some levels of environmental impact associated with development are made permissible by environmental policy-making and ensuing legislation.
Some examples of the consideration of ecosystem resilience within legislation include:
Environmental Planning and Assessment Act 1979 (NSW) – A key goal of the Environmental Assessment procedure is to determine whether proposed development will have a significant impact upon ecosystems.
Protection of the Environment (Operations) Act 1997 (NSW) – Pollution control is dependent upon keeping levels of pollutants emitted by industrial and other human activities below levels which would be harmful to the environment and its ecosystems. Environmental protection licenses are administered to maintain the environmental objectives of the POEO Act and breaches of license conditions can attract heavy penalties and in some cases criminal convictions.
Threatened Species Conservation Act 1995 (NSW) – This Act seeks to protect threatened species while balancing that protection with development.
History
The theoretical basis for many of the ideas central to climate resilience has existed since the 1960s. Originally an idea defined for strictly ecological systems, resilience in ecology was initially outlined by C.S. Holling as the capacity for ecological systems and relationships within those systems to persist and absorb changes to "state variables, driving variables, and parameters." This definition helped form the foundation for the notion of ecological equilibrium: the idea that the behavior of natural ecosystems is dictated by a homeostatic drive towards some stable set point. Under this school of thought (which maintained quite a dominant status during this time period), ecosystems were perceived to respond to disturbances largely through negative feedback systems – if a change occurred, the ecosystem would act to mitigate that change as much as possible and attempt to return to its prior state.
As more scientific research in ecological adaptation and natural resource management was conducted, it became clear that natural systems were often subject to dynamic, transient behaviors that changed how they reacted to significant changes in state variables: rather than working back towards a predetermined equilibrium, the absorbed change was harnessed to establish a new baseline to operate under. Rather than minimizing imposed changes, ecosystems could integrate and manage those changes, and use them to fuel the evolution of novel characteristics. This new perspective of resilience as a concept that inherently works synergistically with elements of uncertainty and entropy first began to facilitate changes in the field of adaptive management and environmental resources, through work whose basis was again built by Holling and colleagues.
By the mid 1970s, resilience began gaining momentum as an idea in anthropology, culture theory, and other social sciences. There was significant work in these relatively non-traditional fields that helped facilitate the evolution of the resilience perspective as a whole. Part of the reason resilience began moving away from an equilibrium-centric view and towards a more flexible, malleable description of social-ecological systems was due to work such as that of Andrew Vayda and Bonnie McCay in the field of social anthropology, where more modern versions of resilience were deployed to challenge traditional ideals of cultural dynamics.
See also
Climate change mitigation
Climate resilience
Ecology and Society
Resilience of coral reefs
Resistance (ecology)
Regeneration (ecology)
Stability (ecology)
Socio-ecological system
Soil resilience
Vulnerability
Homeostasis
References
Further reading
Hulme, M. (2009). "Why We Disagree About Climate Change: Understanding Controversy, Inaction and Opportunity". Cambridge University Press.
Lee, M. (2005). "EU Environmental Law: Challenges, Change and Decision-Making". Hart. 26.
Maclean, K., Cuthill, M., Ross, H. (2013). "Six attributes of social resilience". Journal of Environmental Planning and Management. (online first)
Pearce, D.W. (1993). "Blueprint 3: Measuring Sustainable Development". Earthscan.
External links
Resilience Alliance — a research network that focuses on social-ecological resilience
Stockholm Resilience Centre — an international centre that advances trans disciplinary research for governance of social-ecological systems with a special emphasis on resilience — the ability to deal with change and continue to develop
TURaS — a European project mapping urban transitioning towards resilience and sustainability
Microdocs:Resilience — a short documentary on resilience
Ecology terminology
Ecological restoration
Conservation biology
sv:Resiliens | Ecological resilience | [
"Chemistry",
"Engineering",
"Biology"
] | 4,253 | [
"Ecology terminology",
"Conservation biology",
"Ecological restoration",
"Environmental engineering"
] |
13,224,606 | https://en.wikipedia.org/wiki/Sodium%20phenoxide | Sodium phenoxide (sodium phenolate) is an organic compound with the formula NaOC6H5. It is a white crystalline solid. Its anion, phenoxide, also known as phenolate, is the conjugate base of phenol. It is used as a precursor to many other organic compounds, such as aryl ethers.
Synthesis and structure
Most commonly, solutions of sodium phenoxide are produced by treating phenol with sodium hydroxide. Anhydrous derivatives can be prepared by combining phenol and sodium. A related, updated procedure uses sodium methoxide instead of sodium hydroxide:
NaOCH3 + HOC6H5 → NaOC6H5 + HOCH3
Sodium phenoxide can also be produced by the "alkaline fusion" of benzenesulfonic acid, whereby the sulfonate groups are displaced by hydroxide:
C6H5SO3Na + 2 NaOH → C6H5ONa + Na2SO3 + H2O
This route once was the principal industrial route to phenol.
Structure
Like other sodium alkoxides, solid sodium phenoxide adopts a complex structure involving multiple Na-O bonds. Solvent-free material is polymeric, each Na center being bound to three oxygen ligands as well as the phenyl ring. Adducts of sodium phenoxide are molecular, such as the cubane-type cluster [NaOPh]4(HMPA)4.
Reactions
Sodium phenoxide is a moderately strong base. Acidification gives phenol:
PhOH ⇌ PhO− + H+ (K = 10^−10)
The acid-base behavior is complicated by homoassociation, reflecting the association of phenol and phenoxide.
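As a rough illustration of what the quoted equilibrium constant implies, the short sketch below applies the Henderson-Hasselbalch relationship (pKa ≈ 10, ignoring activity effects and the homoassociation noted above) to estimate how the phenol/phenoxide balance shifts with pH; the numbers are illustrative, not measured values.

```python
def phenoxide_fraction(pH, pKa=10.0):
    """Fraction of total phenol present as phenoxide at a given pH.

    From Ka = [PhO-][H+]/[PhOH]: [PhO-]/[PhOH] = 10**(pH - pKa).
    """
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

for pH in (7, 10, 13):
    print(f"pH {pH}: fraction present as phenoxide = {phenoxide_fraction(pH):.3f}")
# Near neutral pH almost all of the material is phenol (acidification gives phenol),
# while in strong base it is almost entirely phenoxide.
```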
Sodium phenoxide reacts with alkylating agents to afford alkyl phenyl ethers:
NaOC6H5 + RBr → ROC6H5 + NaBr
The conversion is an extension of the Williamson ether synthesis. With acylating agents, one obtains phenyl esters:
NaOC6H5 + RC(O)Cl → RCO2C6H5 + NaCl
Sodium phenoxide is susceptible to certain types of electrophilic aromatic substitutions. For example, it reacts with carbon dioxide to form 2-hydroxybenzoate, the conjugate base of salicylic acid. In general however, electrophiles irreversibly attack the oxygen center in phenoxide.
References
External links
Phenolates
Organic sodium salts | Sodium phenoxide | [
"Chemistry"
] | 525 | [
"Organic sodium salts",
"Phenolates",
"Salts"
] |
13,224,789 | https://en.wikipedia.org/wiki/Sextant%20%28astronomy%29 | In astronomy, sextants are devices depicting a sixth of a circle, used primarily for measuring the position of stars. There are two types of astronomical sextants, mural instruments and frame-based instruments.
They are of significant historical importance, but have been replaced over time by transit telescopes, other astrometry techniques, and satellites such as Hipparcos.
Mural sextants
The first known mural sextant was constructed in Ray, Iran, by Abu-Mahmud al-Khujandi in 994. To measure the obliquity of the ecliptic, al-Khujandī invented a device that he called al-Fakhri sextant (al-suds al Fakhrī), a reference to his patron, Buwayhid ruler, Fakhr al Dawla (976–997). This instrument was a sixty-degree arc on a wall aligned along a meridian (north–south) line. Al Khujandi's instrument was larger than previous instruments; it had a radius of about twenty meters. The main improvement incorporated in al-Fakhri sextants over earlier instruments was bringing the precision of reading to seconds while older instruments could only be read in degrees and minutes. This was confirmed by al-Birūni, al-Marrākushī and al-Kāshī. Al-Khujandī used his device to measure the sun's angle above the horizon at the summer and winter solstices; these two measurements allow computation of the latitude of the sextant's location and the obliquity of the ecliptic.
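The geometry behind al-Khujandi's two solstice measurements is simple: at local noon the sun's altitude is 90° minus the latitude plus the solar declination, and the declination at the solstices is plus or minus the obliquity of the ecliptic. The sketch below works through that arithmetic with illustrative altitudes roughly appropriate for the latitude of Ray; the input values are assumptions for demonstration, not al-Khujandi's recorded readings.

```python
def latitude_and_obliquity(alt_summer, alt_winter):
    """Latitude and obliquity from the sun's noon altitude at the two solstices
    (northern-hemisphere site, sun culminating south of the zenith):
        altitude = 90 - latitude + declination, with declination = +/- obliquity.
    """
    latitude = 90.0 - (alt_summer + alt_winter) / 2.0
    obliquity = (alt_summer - alt_winter) / 2.0
    return latitude, obliquity

# Illustrative solstice altitudes for a site near 35.6 degrees north.
lat, eps = latitude_and_obliquity(77.9, 30.9)
print(f"latitude ~ {lat:.1f} deg, obliquity ~ {eps:.1f} deg")  # ~35.6 and ~23.5
```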
Ulugh Beg constructed a Fakhri Sextant that had a radius of 40.4 meters, the largest instrument of its type in the 15th century. Housed in the Ulugh Beg Observatory, the sextant had a finely constructed arc with a staircase on either side to provide access for the assistants who performed the measurements.
Framed sextants
A sextant based on a large metal frame had an advantage over a mural instrument in that it could be used at any orientation. This allowed the measurement of angular distances between astronomical bodies.
These instruments differ substantially from a navigator's sextant in that the latter is a reflecting instrument. The navigator's sextant uses mirrors to bring the image of the sun, moon or a star to the horizon and measure the altitude of the object. Due to the use of the mirrors, the angle that can be measured is twice the angle subtended by the instrument's arc. Hence, the navigator's sextant measures 120° on an arc with an included angle of 60°. By comparison, the astronomical sextants are large and measure angles directly — a 60° arc will measure at most 60°.
Construction
These large sextants are made primarily of wood, brass or a combination of both materials. The frame is heavy enough to be stiff and provide reliable measures without flexural changes in the instrument compromising the quality of the observation. The frame is mounted on a support structure that holds it in position while in use. In some cases, the position of the sextant can be adjusted to allow measurements to be made with any instrument orientation. Owing to the size and weight of the instrument, attention was paid to balancing it so that it could be moved with ease.
Observations were typically made with an alidade, though newer versions could use a telescope. In some cases, a system of counter-weights and pulleys was used to allow the observer to manipulate the instrument in spite of its size.
Usage
These instruments were used in much the same way as smaller instruments, with effort possibly scaled due to the size. Some of the instruments might have needed more than one person to operate.
If the sextant is permanently fixed in position, only the position of the alidade or similar index needs to be determined. In that case, the observer moves the alidade until the object of interest is centered in the sights and then reads the graduations marked on the arc.
For instruments that could be moved, the process was more complex. It was necessary to sight along two lines. The edge of the instrument would typically be supplied with sights, and the instrument was aligned with one of the two objects of interest. The alidade was then aligned with the second object. Once each object was centred in one set of sights, the reading could be taken. This could be a challenge for a moving star observed with a very large instrument, as a single person might not be able to confirm both sights with ease; an assistant was a great benefit. The illustration of the Hevelius instrument shows how two persons would use such a sextant: his wife Elisabetha is aligning the instrument while Johannes sets the alidade.
Well-known framed sextants
Taqi al-Din used a sextant for the determination of the equinoxes.
Tycho Brahe used a sextant for many of his stellar position measurements.
Johannes Hevelius used a sextant with a particularly ingenious alidade to provide stellar position measurements of great accuracy.
John Flamsteed, the first Astronomer Royal, used a sextant at the Royal Greenwich Observatory.
See also
List of astronomical instruments
References
Astronomical instruments
Measuring instruments
Historical scientific instruments
History of astronomy
Iranian inventions | Sextant (astronomy) | [
"Astronomy",
"Technology",
"Engineering"
] | 1,093 | [
"Astronomical instruments",
"Measuring instruments",
"History of astronomy"
] |
13,225,486 | https://en.wikipedia.org/wiki/Private%20VLAN | Private VLAN, also known as port isolation, is a technique in computer networking where a VLAN contains switch ports that are restricted such that they can only communicate with a given uplink. The restricted ports are called private ports. Each private VLAN typically contains many private ports, and a single uplink. The uplink will typically be a port (or link aggregation group) connected to a router, firewall, server, provider network, or similar central resource.
The concept was primarily introduced as a result of the limitation on the number of VLANs in network switches, a limit quickly exhausted in highly scaled scenarios. Hence, there was a requirement to create multiple network segregations with a minimum number of VLANs.
The switch forwards all frames received from a private port to the uplink port, regardless of VLAN ID or destination MAC address. Frames received from an uplink port are forwarded in the normal way (i.e. to the port hosting the destination MAC address, or to all ports of the VLAN for broadcast frames or for unknown destination MAC addresses). As a result, direct peer-to-peer traffic between peers through the switch is blocked, and any such communication must go through the uplink. While private VLANs provide isolation between peers at the data link layer, communication at higher layers may still be possible depending on further network configuration.
A typical application for a private VLAN is a hotel or Ethernet to the home network where each room or apartment has a port for Internet access. Similar port isolation is used in Ethernet-based ADSL DSLAMs. Allowing direct data link layer communication between customer nodes would expose the local network to various security attacks, such as ARP spoofing, as well as increase the potential for damage due to misconfiguration.
Another application of private VLANs is to simplify IP address assignment. Ports can be isolated from each other at the data link layer (for security, performance, or other reasons), while belonging to the same IP subnet. In such a case, direct communication between the IP hosts on the protected ports is only possible through the uplink connection by using MAC-Forced Forwarding or a similar Proxy ARP based solution.
VLAN Trunking Protocol
Version 3
Version 3 of VLAN Trunking Protocol saw support added for private VLANs.
Versions 1 and 2
If using VTP version 1 or 2, the switch must be in VTP transparent mode.
VTP v1 and 2 do not propagate private-VLAN configuration, so the administrator needs to configure it one by one.
Limitations of Private VLANs
Private VLANs have no support for:
Dynamic-access port VLAN membership.
Dynamic Trunking Protocol (DTP)
Port Aggregation Protocol (PAgP)
Link Aggregation Control Protocol (LACP)
Multicast VLAN Registration (MVR)
Voice VLAN
Web Cache Communication Protocol (WCCP)
Ethernet ring protection (ERP)
Flexible VLAN tagging
Egress VLAN firewall filters
Integrated routing and bridging (IRB) interface
Multichassis link aggregation groups (MC-LAGs)
Q-in-Q tunneling
Routing between sub-VLANs (Secondary) and Primary VLAN
IGMP snooping (not supported on Juniper devices)
Configuration limitations
An access interface cannot participate in two different primary VLANs; it is limited to one private VLAN.
Spanning Tree Protocol (STP) settings.
Private VLANs cannot be configured on VLAN 1 or VLANs 1002 to 1005 as primary or secondary VLANs, as these are special VLANs.
Cisco implementation
Cisco Systems' Private VLANs have the advantage that they can function across multiple switches. A Private VLAN divides a VLAN (Primary) into sub-VLANs (Secondary) while keeping the existing IP subnet and layer 3 configuration. A regular VLAN is a single broadcast domain, while a private VLAN partitions one broadcast domain into multiple smaller broadcast subdomains.
Primary VLAN: Simply the original VLAN. This type of VLAN is used to forward frames downstream to all Secondary VLANs.
Secondary VLAN: Secondary VLAN is configured with one of the following types:
Isolated: Any switch ports associated with an Isolated VLAN can reach the primary VLAN, but not any other Secondary VLAN. In addition, hosts associated with the same Isolated VLAN cannot reach each other. There can be multiple Isolated VLANs in one Private VLAN domain (which may be useful if the VLANs need to use distinct paths for security reasons); the ports remain isolated from each other within each VLAN.
Community: Any switch ports associated with a common community VLAN can communicate with each other and with the primary VLAN but not with any other secondary VLAN. There can be multiple distinct community VLANs within one Private VLAN domain.
There are mainly two types of ports in a Private VLAN: the Promiscuous port (P-Port) and the Host port. Host ports are further divided into two types: the Isolated port (I-Port) and the Community port (C-Port).
Promiscuous port (P-Port): The switch port connects to a router, firewall or other common gateway device. This port can communicate with anything else connected to the primary or any secondary VLAN. In other words, it is a type of a port that is allowed to send and receive frames from any other port on the VLAN.
Host Ports:
Isolated Port (I-Port): Connects to the regular host that resides on isolated VLAN. This port communicates only with P-Ports.
Community Port (C-Port): Connects to the regular host that resides on community VLAN. This port communicates with P-Ports and ports on the same community VLAN.
Example scenario: a switch with VLAN 100, converted into a Private VLAN with one P-Port, two I-Ports in Isolated VLAN 101 (Secondary) and two community VLANs 102 and 103 (Secondary), with 2 ports in each. The switch has one uplink port (trunk), connected to another switch. The diagram shows this configuration graphically.
The following table shows the traffic which can flow between all these ports.
Traffic from an Uplink port to an Isolated port will be denied if it is in the Isolated VLAN. Traffic from an Uplink port to an isolated port will be permitted if it is in the primary VLAN.
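The port-type rules described above can be summarised programmatically. The sketch below is an illustrative model of which port pairs may exchange traffic within a single private VLAN domain, based only on the promiscuous/isolated/community definitions given here; it deliberately ignores the trunk/uplink tagging nuance in the preceding paragraph and is not vendor configuration or vendor code.

```python
def can_communicate(port_a, port_b):
    """True if traffic may flow between two ports of a private VLAN domain.

    Each port is a (port_type, community_id) pair; community_id is only
    meaningful for community ports.
    """
    type_a, comm_a = port_a
    type_b, comm_b = port_b
    if "promiscuous" in (type_a, type_b):
        return True                       # P-Ports reach every port in the domain
    if type_a == type_b == "community":
        return comm_a == comm_b           # C-Ports reach only their own community
    return False                          # I-Ports reach nothing except P-Ports

p, i1, i2 = ("promiscuous", None), ("isolated", None), ("isolated", None)
c102a, c102b, c103 = ("community", 102), ("community", 102), ("community", 103)

print(can_communicate(i1, p))         # True:  isolated host to gateway
print(can_communicate(i1, i2))        # False: isolated hosts cannot reach each other
print(can_communicate(c102a, c102b))  # True:  same community VLAN
print(can_communicate(c102a, c103))   # False: different community VLANs
```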
Use cases
Network segregation
Private VLANs are used for network segregation when:
Moving from a flat network to a segregated network without changing the IP addressing of the hosts. A firewall can replace a router, and then hosts can be slowly moved to their secondary VLAN assignment without changing their IP addresses.
There is a need for a firewall with many tens, hundreds or even thousands interfaces. Using Private VLANs the firewall can have only one interface for all the segregated networks.
There is a need to preserve IP addressing. With Private VLANs, all Secondary VLANs can share the same IP subnet.
Overcome license fees for number of supported VLANs per firewall.
There is a need for more than 4095 segregated networks. With an Isolated VLAN, there can be an unlimited number of segregated networks.
Secure hosting
Private VLANs in hosting operation allows segregation between customers with the following benefits:
No need for separate IP subnet for each customer.
Using Isolated VLAN, there is no limit on the number of customers.
No need to change firewall's interface configuration to extend the number of configured VLANs.
Secure VDI
An Isolated VLAN can be used to segregate VDI desktops from each other, allowing filtering and inspection of desktop to desktop communication. Using non-isolated VLANs would require a different VLAN and subnet for each VDI desktop.
Backup network
On a backup network, there is no need for hosts to reach each other. Hosts should only reach their backup destination. Backup clients can be placed in one Isolated VLAN and the backup servers can be placed as promiscuous on the Primary VLAN, this will allow hosts to communicate only with the backup servers.
Broadcast mitigation
Because broadcast traffic on a network must be sent to each wireless host serially, it can consume large shares of air time, making the wireless network unresponsive. Where there is more than one wireless access point connected to a switch, private VLANs can prevent broadcast frames from propagating from one AP to another, preserving network performance for connected hosts.
Vendor support
Hardware switches
Alcatel-Lucent Enterprise OmniSwitch series
Arista Networks Data Center Switching
Brocade BigIron, TurboIron and FastIron switches
Cisco Systems Catalyst 2960-XR, 3560 and higher product lines switches
Extreme Networks XOS based switches
Fortinet FortiOS-based switches
Hewlett-Packard Enterprise Aruba Access Switches 2920 series and higher product lines switches
Juniper Networks EX switches
Lenovo CNOS based switches
Microsens G6 switch family
MikroTik All models (routers/switches) with switch chips since RouterOS v6.43
TP-Link T2600G series, T3700G series
TRENDnet many models
Ubiquiti Networks EdgeSwitch series, Unifi series
Software switches
Cisco Systems Nexus 1000V
Microsoft HyperV 2012
Oracle Oracle VM Server for SPARC 3.1.1.1
VMware vDS switch
Other private VLAN–aware products
Cisco Systems Firewall Services Module
Marathon Networks PVTD Private VLAN deployment and operation appliance
See also
Ethernet
Broadcast domain
VLAN hopping
References
External links
"Configuring Private VLAN" TP-Link Configuration Guide.
Further reading
CCNP BCMSN Official Exam Certification Guide, by David Hucaby.
Local area networks
Network architecture | Private VLAN | [
"Engineering"
] | 2,012 | [
"Network architecture",
"Computer networks engineering"
] |
13,225,683 | https://en.wikipedia.org/wiki/Jakob%20Ackeret | Jakob Ackeret, FRAeS (17 March 1898 – 27 March 1981) was a Swiss aeronautical engineer. He is widely viewed as one of the foremost aeronautics experts of the 20th century.
Birth and education
Jakob Ackeret was born in 1898 in Switzerland. He received his diploma degree in mechanical engineering from ETH Zurich in 1920 under the supervision of Aurel Stodola. From 1921 to 1927 he worked with Ludwig Prandtl at the "Aerodynamische Versuchsanstalt" in Göttingen, witnessing a legendary period in the development of modern fluid dynamics. He received his PhD from ETH Zurich in 1927.
Academic career
After completing his PhD, Ackeret worked at Escher Wyss AG in Zurich as chief engineer of hydraulics, where he applied, with great success, modern aerodynamics to the design of turbines.
He became a professor of Aerodynamics at ETH Zurich in 1931, where Wernher von Braun was one of his students.
Research
Ackeret was an expert on gas turbines and was known for his research on propellers and on high-speed propulsion problems.
When he was at ETH Zurich, he actively participated in the solution of practical engineering problems, such as the design of variable-pitch propellers for ships and airplanes. His most important invention, made together with C. Keller, was the closed-cycle gas turbine.
Ackeret also contributed significantly to research in supersonic aerodynamics. He led the initial work on calculating the lift and drag on a supersonic airfoil, and he proposed the designation "Mach number" for multiples of the speed of sound. At the Fifth Volta Conference in Rome in 1935, Ackeret had planned to talk about supersonic lift, but because of "sensitive developments" for the Luftwaffe, Adolf Busemann arranged for their topics to be swapped (Busemann's paper on swept wings, which seemed an academic curiosity at the time, later became seminal), and Ackeret instead presented a design for a supersonic wind tunnel.
Ackeret was awarded the Ludwig-Prandtl-Ring from the Deutsche Gesellschaft für Luft- und Raumfahrt (German Society for Aeronautics and Astronautics) for "outstanding contribution in the field of aerospace engineering" in 1965.
In 1976, he was elected foreign associate member of the American National Academy of Engineering for his "contributions to the understanding of high-speed and supersonic fluid mechanics, leading to significant improvements to the science of flight".
References
External links
Jakob Ackeret
Jakob Ackeret and the "Institut für Aerodynamik (IfA)"
1898 births
1981 deaths
Aerospace engineers
Aerodynamicists
ETH Zurich alumni
Academic staff of ETH Zurich
Ludwig-Prandtl-Ring recipients
Foreign associates of the National Academy of Engineering | Jakob Ackeret | [
"Engineering"
] | 560 | [
"Aerospace engineers",
"Aerospace engineering"
] |
13,226,237 | https://en.wikipedia.org/wiki/Mural%20instrument | A mural instrument is an angle measuring instrument mounted on or built into a wall. For astronomical purposes, these walls were oriented so they lie precisely on the meridian. A mural instrument that measured angles from 0 to 90 degrees was called a mural quadrant. They were utilized as astronomical devices in ancient Egypt and ancient Greece. Edmond Halley, due to the lack of an assistant and only one vertical wire in his transit, confined himself to the use of a mural quadrant built by George Graham after its erection in 1725 at the Royal Observatory, Greenwich. Bradley's first observation with that quadrant was made on 15 June 1742.
The mural quadrant has been called the "quintessential instrument" of 18th century (i.e. 1700s) observatories. It rose to prominence in the field of positional astronomy at this time.
Construction
Many older mural quadrants have been constructed by marking directly on the wall surfaces. More recent instruments were made with a frame that was constructed with precision and mounted permanently on the wall.
The arc is marked with divisions, almost always in degrees and fractions of a degree. In the oldest instruments, an indicator is placed at the centre of the arc. An observer can move a device with a second indicator along the arc until the line of sight from the movable device's indicator through the indicator at the centre of the arc aligns with the astronomical object. The angle is then read, yielding the elevation or altitude of the object. In smaller instruments, an alidade could be used. More modern mural instruments would use a telescope with a reticle eyepiece to observe the object.
Many mural quadrants were constructed, giving the observer the ability to measure a 90° range of elevation. There were also mural sextants that read 60°.
Mural quadrants of the 17th and early 18th centuries were noted for their expense; Flamsteed's 1689 quadrant and Edmond Halley's 1725 quadrant were both costly instruments, and such large fixed quadrants were more expensive than a typical portable quadrant, such as a Bird 2-foot quadrant costing 70 guineas.
Usage
In order to measure the position of, for example, a star, the observer needs a sidereal clock in addition to the mural instrument. With the clock measuring time, a star of interest is observed with the instrument until it crosses an indicator showing that it is transiting the meridian. At this instant, the time on the clock is recorded as well as the angular elevation of the star. This yields the position in the coordinates of the instrument. If the instrument's arc is not marked relative to the celestial equator, then the elevation is corrected for the difference, resulting in the star's declination. If the sidereal clock is precisely synchronized with the stars, the time yields the right ascension directly.
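For the common case of a star crossing the meridian south of the zenith at a northern-hemisphere observatory, the correction described above reduces to simple arithmetic: the declination equals the measured altitude plus the site latitude minus 90°. The sketch below works one illustrative example; the altitude used is an assumption for demonstration, not a historical observation, and stars culminating between the zenith and the pole need the complementary formula.

```python
def declination_from_transit(altitude_deg, site_latitude_deg):
    """Declination of an object at upper culmination south of the zenith:
        altitude = 90 - latitude + declination
    so  declination = altitude + latitude - 90.
    """
    return altitude_deg + site_latitude_deg - 90.0

# Illustrative reading from a mural quadrant at Greenwich (latitude ~51.5 N):
dec = declination_from_transit(45.0, 51.5)
print(f"declination ~ {dec:+.1f} deg")  # ~ +6.5 deg
# The right ascension is read from the sidereal clock at the instant of transit.
```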
Famous mural instruments
A mural sextant was constructed in Ray, Iran, by Abu-Mahmud al-Khujandi in 994.
Ulugh Beg constructed the "Fakhri Sextant" in Samarkand that had a radius of 40 meters. Seen in the image on the right, the arc was finely constructed with a staircase on either side to provide access for the assistants who performed the measurements.
Tycho Brahe's mural quadrant in Uraniborg at Hven (now Ven in Sweden).
The mural quadrant at the Royal Observatory, Greenwich, in east London.
Ptolemy's mural quadrant at Alexandria. This instrument is also referred to as a plinth.
The obsolete constellation Quadrans Muralis represents a mural quadrant.
The mural quadrant at the Mannheim Observatory in Germany. This is another of John Bird's instruments.
See also
List of astronomical instruments
References
External links
Ancient Greek astronomy
Ancient Egyptian science
Ancient Egyptian technology
Astronomical instruments
Historical scientific instruments
Astronomy in the medieval Islamic world
Technology in the medieval Islamic world
Angle measuring instruments
Egyptian inventions | Mural instrument | [
"Astronomy"
] | 782 | [
"History of astronomy",
"Astronomy in the medieval Islamic world",
"Astronomical instruments"
] |
13,226,961 | https://en.wikipedia.org/wiki/Calcium-sensing%20receptor | The calcium-sensing receptor (CaSR) is a Class C G-protein coupled receptor which senses extracellular levels of calcium ions. It is primarily expressed in the parathyroid gland, the renal tubules of the kidney and the brain. In the parathyroid gland, it controls calcium homeostasis by regulating the release of parathyroid hormone (PTH). In the kidney it has an inhibitory effect on the reabsorption of calcium, potassium, sodium, and water depending on which segment of the tubule is being activated.
Since the initial review of CaSR, there has been in-depth analysis of its role in parathyroid disease and of other roles related to tissues and organs in the body. In 1993, Brown et al. isolated a clone named BoPCaR (bovine parathyroid calcium receptor) which replicated the effect when introduced to polyvalent cations. Because of this, full-length CaSRs could subsequently be cloned from mammals.
Structure
Each protomer of the receptor has a large N-terminal extracellular domain that folds into a VFT (Venus flytrap) domain. The receptor has a CR (cysteine-rich) domain that links the VFT to the seven-transmembrane (7TM) domain of the receptor. The 7TM domain is followed by a long cytoplasmic tail. The tail is unstructured, but it nevertheless has an important role in trafficking and phosphorylation.
The CaSR is a homodimeric receptor. Signal transmission occurs only when the agonist binds to the homodimeric CaSR; agonist binding to a single protomer does not lead to signal transmission. In vitro experiments showed that the receptor can form a heterodimer with mGlu1/5 or with the GABAB receptor. The heterodimerization may facilitate the varied functional roles of the CaSR in different tissues, particularly in the brain.
The cryo-EM structures of the CaSR homodimer were recently solved.
Extracellular domain
The VFT extends outside the cell and is composed of two lobe subdomains. Each lobe forms part of the ligand binding cleft.
In contrast to the conserved structure of other class C GPCRs, the CaSR cleft is an allosteric or co-agonist binding site, with the cations (Ca2+) binding elsewhere.
In the inactive state of the receptor, the two extracellular domains are oriented in an open conformation with an empty interdomain cleft. When the receptor is activated, the two lobes interact with each other, creating a rotation about the interdomain cleft.
Cation binding sites
The cation binding sites vary in their location and number.
The receptor has four Calcium binding sites that have a role in the stabilization of the extracellular domain (ECD) and in the activation of the receptor. The stabilization maintains the receptor in its active conformation.
Calcium cations bind to the first calcium binding site in the inactive conformation. At the second binding site, calcium cations are bound in both the active and inactive structures. At the third binding site, the binding of calcium facilitates the closure of lobes 1 and 2. This closure permits the interaction between the two lobes. The fourth binding site is located on lobe 2, close to the CR domain. Agonist binding to the fourth binding site leads to the formation of a homodimer interface bridge. This bridge, between the lobe 2 domain of subunit 1 and the CR domain of subunit 2, stabilizes the open conformation.
The order of calcium binding affinity for the four binding sites is as follows: 1 = 2 > 3 > 4. The lower affinity of calcium for site 4 indicates that the receptor is activated only when the calcium concentration is elevated above the required level. This behavior gives calcium binding at site 4 a major role in stabilization.
The CaSR also has binding sites for Magnesium and Gadolinium.
Anion binding sites
There are four anion binding sites in the ECD. Sites 1-3 are occupied in the inactive structure, whereas in the active structure only sites 2 and 4 are occupied.
7-Transmembrane domain
Based on the similarity of CaSR to mGlu5, it is believed that in the inactivated form of the receptor the VFT domain disrupts the interface between the 7TM domains, and that activation of the receptor forces a reorientation of the 7TM domains.
Signal transduction
The inactivated form of the receptor has an open conformation. Upon binding at the fourth binding site, the structure of the receptor changes to a closed conformation. The change in conformation leads to inhibition of PTH release.
On the intracellular side, the activated receptor initiates the phospholipase C pathway, presumably through a Gqα type of G protein, which ultimately increases the intracellular concentration of calcium, which in turn inhibits vesicle fusion and exocytosis of parathyroid hormone. It also inhibits (not stimulates, as some sources state) the cAMP dependent pathway.
Ligands
Agonists
Calcium
Spermine
Neomycin
Vitamin D
Positive allosteric modulators
Gamma-Glutamyl peptides
L- amino acids
Cinacalcet
Evocalcet
NPS R-568
NPS R-467
Etelcalcetide
Calhex 231
Antagonists
Calcilytics
Phosphate
Negative allosteric modulators
NPS 2143
Ronacaleret
Calhex 231
It is unknown whether Ca2+ alone can activate the receptor, but L-amino acids and gamma-glutamyl peptides have been shown to act as co-activators of the receptor. These molecules intensify the intracellular responses evoked by calcium cations.
Pathology
Mutations that inactivate a CaSR gene cause familial hypocalciuric hypercalcemia (FHH) (also known as familial benign hypercalcemia because it is generally asymptomatic and does not require treatment), when present in heterozygotes. Patients who are homozygous for CaSR inactivating mutations have more severe hypercalcemia. Other mutations that activate CaSR are the cause of autosomal dominant hypocalcemia or Type 5 Bartter syndrome. An alternatively spliced transcript variant encoding 1088 aa has been found for this gene, but its full-length nature has not been defined.
Role in Chronic kidney disease
In CKD, the dysregulation of CaSR leads to secondary hyperparathyroidism linked with osteoporosis, which is considered one of the main complications.
Patients suffering from secondary hyperparathyroidism need to make changes in their diet in order to manage the disease. The dietary recommendation includes restriction of calcium, phosphate, and protein intake. These nutrients are abundant in the diet, and avoiding foods that contain them may limit dietary options and can lead to other nutrient deficiencies.
Therapeutic application
The drugs cinacalcet and etelcalcetide are allosteric modifiers of the calcium-sensing receptor. They are classified as calcimimetics, binding to the calcium-sensing receptor and decreasing parathyroid hormone release.
Calcilytic drugs, which block CaSR, produce increased bone density in animal studies and have been researched for the treatment of osteoporosis. Unfortunately, clinical trial results in humans have proved disappointing, with sustained changes in bone density not observed despite the drug being well tolerated. More recent research has shown the CaSR to be involved in numerous other conditions including Alzheimer's disease, asthma and some forms of cancer, and calcilytic drugs are being researched as potential treatments for these. Recently it has been shown that biomimetic bone-like apatite inhibits the formation of bone through the endochondral ossification pathway via hyperstimulation of the extracellular calcium-sensing receptor.
Transactivation across the dimer can result in unique pharmacology for CaSR allosteric modulators. For example, Calhex 231 shows positive allosteric activity when bound to the allosteric site of just one protomer. In contrast, it shows negative allosteric activity when occupying both allosteric sites of the dimer.
Interactions
Calcium-sensing receptor has been shown to interact with filamin.
Role in sensory evaluation of food
Kokumi was discovered in Japan in 1989. It is defined as a sensation that enhances existing flavors and creates feelings of roundness, complexity, and richness in the mouth. Kokumi is present in different foods such as fish sauce, soybeans, garlic, and beans. The kokumi substances are gamma-glutamyl peptides.
CaSR is known to be expressed in the parathyroid gland and kidneys, but recent experiments showed that the receptor is also expressed in the alimentary canal (the digestive tract) and near the taste buds on the back of the tongue.
Gamma-glutamyl peptides are allosteric modulators of the CaSR, and the binding of those peptides to the CaSR on the tongue is what mediates the Kokumi sensation in the mouth.
In the mouth, unlike in other tissues, the influx of extracellular calcium does not affect the receptor activity. Instead, the CaSR is activated by the binding of gamma-glutamyl peptides.
Taste signaling involves a release of intracellular calcium in response to a molecule binding to a taste receptor, which leads to secretion of neurotransmitter and taste perception. The simultaneous binding of gamma-glutamyl peptides to the CaSR increases the level of intracellular calcium, which intensifies the taste perception.
References
Further reading
External links
CASRdb - Calcium Sensing Receptor Database, McGill University
G protein-coupled receptors | Calcium-sensing receptor | [
"Chemistry"
] | 2,048 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,227,332 | https://en.wikipedia.org/wiki/Amdahl%20UTS | UTS is a discontinued implementation of the UNIX operating system for IBM mainframe (and compatible) computers. Amdahl created the first versions of UTS, and released it in May 1981, with UTS Global acquiring rights to the product in 2002. UTS Global has since gone out of business.
System requirements
UTS Release 4.5 supports the following S/390 model processors and their successors:
Amdahl 5990, 5995A, 5995M series of ECL processors
Amdahl Millennium Global Server series of CMOS processors
Fujitsu Global Server
IBM ES/9000/9021 series of ECL processors
IBM G4, G5 & G6 Servers (the 9672 R and X series of CMOS processors)
History
The UTS project had its origins in work started at Princeton University in 1975 to port UNIX to the IBM VM/370 system. Team members there were Tom Lyon, Joseph Skudlarek, Peter Eichenberger, and Eric Schmidt. Tom Lyon joined Amdahl in 1978, and by 1979 there was a full Version 6 Unix system on the Amdahl 470 being used internally for design automation engineering. In late 1979 this was updated to the more commonly ported Version 7.
In 1980 Amdahl announced support for Unix on the System 470. Five years later, IBM announced its own mainframe Unix, IX/370, as a competitive response to Amdahl.
The commercial versions of UTS were based on UNIX System III and UNIX System V. In 1986, Amdahl announced the first version to run natively on IBM/370-compatible hardware, UTS/580 for its Amdahl 580 series of machines; previous Unix ports always ran as "guests" under the IBM VM hypervisor. Version 4.5 was based on Unix System V, Release 4 (SVR4).
See also
Linux on IBM Z
OpenSolaris for System z
UNIX System Services in OS/390 and its successors
UNIX-RT
RTLinux
References
External links
UTS Global home page (archived page at Archive.org, April 2008)
Unix variants
1981 software | Amdahl UTS | [
"Technology"
] | 433 | [
"Operating system stubs",
"Computing stubs"
] |
13,227,535 | https://en.wikipedia.org/wiki/RA-1%20Enrico%20Fermi | RA-1 Enrico Fermi is a research reactor in Argentina. It was the first nuclear reactor to be built in that country and the first research reactor in the southern hemisphere.
Construction started in April 1957, with first criticality on 20 January 1958. By contrast, the HIFAR reactor in Australia went critical just six days later, on 26 January 1958. The RA-1 reactor produced the first medical and industrial radioisotopes made in Argentina, and was used to train staff for the first two nuclear power stations there.
It is a pool type, with enriched uranium oxide fuel (20% U-235), light water coolant and moderator, and a graphite reflector. It produces 40 kilowatts of thermal power at full authorized power.
It has been modernized on several occasions, and is currently used for research and teaching.
External links
Report of the National Atomic Energy Commission of Argentina (CNEA), November 2004, (PDF, 2353KB)
El Reactor RA - 1, CNEA web page (in Spanish)
El Reactor RA - 1 - Características, CNEA web page (in Spanish)
Nuclear research reactors
Light water reactors | RA-1 Enrico Fermi | [
"Physics"
] | 238 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |