Thomas Bayes ( /beɪz/ BAYZ; c. 1701 – 7 April 1761 [ 2 ] [ 4 ] [ note 1 ] ) was an English statistician, philosopher and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes' theorem .
Bayes never published what would become his most famous accomplishment; his notes were edited and published posthumously by Richard Price . [ 5 ]
Thomas Bayes was the son of London Presbyterian minister Joshua Bayes , [ 6 ] and was possibly born in Hertfordshire . [ 7 ] He came from a prominent nonconformist family from Sheffield . In 1719, he enrolled at the University of Edinburgh to study logic and theology. On his return around 1722, he assisted his father at the latter's chapel in London before moving to Tunbridge Wells , Kent, around 1734. There he was minister of the Mount Sion Chapel, until 1752. [ 8 ]
He is known to have published two works in his lifetime, one theological and one mathematical:
Bayes was elected as a Fellow of the Royal Society in 1742. His nomination letter was signed by Philip Stanhope , Martin Folkes , James Burrow , Cromwell Mortimer , and John Eames . It is speculated that he was accepted by the society on the strength of the Introduction to the Doctrine of Fluxions , as he is not known to have published any other mathematical work during his lifetime. [ 9 ]
In his later years he took a deep interest in probability. Historian Stephen Stigler thinks that Bayes became interested in the subject while reviewing a work written in 1755 by Thomas Simpson , [ 10 ] but George Alfred Barnard thinks he learned mathematics and probability from a book by Abraham de Moivre . [ 11 ] Others speculate he was motivated to rebut David Hume 's argument against believing in miracles on the evidence of testimony in An Enquiry Concerning Human Understanding . [ 12 ] His work and findings on probability theory were passed in manuscript form to his friend Richard Price after his death.
He was ill by 1755 and died in Tunbridge Wells in 1761. He was buried in Bunhill Fields burial ground in Moorgate, London, where many nonconformists lie.
In 2018, the University of Edinburgh opened a £45 million research centre connected to its informatics department named after its alumnus, Bayes. [ 13 ]
In April 2021, it was announced that Cass Business School , whose City of London campus is on Bunhill Row , was to be renamed after Bayes. [ 13 ]
Bayes's solution to a problem of inverse probability was presented in An Essay Towards Solving a Problem in the Doctrine of Chances , which was read to the Royal Society in 1763 after Bayes's death. Richard Price shepherded the work through this presentation and its publication in the Philosophical Transactions of the Royal Society of London the following year. [ 14 ] This was an argument for using a uniform prior distribution for a binomial parameter and not merely a general postulate. [ 15 ] This essay gives the following theorem (stated here in present-day terminology).
Suppose a quantity R is uniformly distributed between 0 and 1. Suppose each of X₁, ..., Xₙ is equal to either 1 or 0 and the conditional probability that any of them is equal to 1, given the value of R, is R. Suppose they are conditionally independent given the value of R. Then the conditional probability distribution of R, given the values of X₁, ..., Xₙ, is the Beta distribution with density
f(r | X₁ = x₁, ..., Xₙ = xₙ) = ((n + 1)! / (s! (n − s)!)) · r^s (1 − r)^(n−s) for 0 ≤ r ≤ 1, where s = x₁ + ... + xₙ.
Thus, for example,
P(R ≤ r₀ | X₁ = x₁, ..., Xₙ = xₙ) = ((n + 1)! / (s! (n − s)!)) · ∫₀^{r₀} r^s (1 − r)^(n−s) dr.
This is a special case of Bayes' theorem.
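A minimal Python sketch of this special case follows; the function name and the example data are illustrative and not taken from Bayes's essay.

```python
from math import comb

def posterior_density(r, data):
    """Posterior density of R at r, given 0/1 observations, assuming a uniform prior.

    With s ones among n observations, the posterior is Beta(s + 1, n - s + 1),
    whose density is (n + 1)! / (s! (n - s)!) * r**s * (1 - r)**(n - s).
    """
    n, s = len(data), sum(data)
    norm = (n + 1) * comb(n, s)  # equals (n + 1)! / (s! (n - s)!)
    return norm * r ** s * (1 - r) ** (n - s)

# Illustrative data: 7 ones in 10 draws; density evaluated at r = 0.7
print(posterior_density(0.7, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))
```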
In the first decades of the eighteenth century, many problems concerning the probability of certain events, given specified conditions, were solved. For example: given a specified number of white and black balls in an urn, what is the probability of drawing a black ball? Or the converse: given that one or more balls has been drawn, what can be said about the number of white and black balls in the urn? These are sometimes called " inverse probability " problems.
Bayes's Essay contains his solution to a similar problem posed by Abraham de Moivre , author of The Doctrine of Chances (1718).
In addition, a paper by Bayes on asymptotic series was published posthumously.
Bayesian probability is the name given to several related interpretations of probability as an amount of epistemic confidence – the strength of beliefs, hypotheses etc. – rather than a frequency. This allows the application of probability to all sorts of propositions rather than just ones that come with a reference class. "Bayesian" has been used in this sense since about 1950. Since its rebirth in the 1950s, advancements in computing technology have allowed scientists from many disciplines to pair traditional Bayesian statistics with random walk techniques. The use of Bayes' theorem has been extended in science and in other fields. [ 16 ]
Bayes himself might not have embraced the broad interpretation now called Bayesian, which was in fact pioneered and popularised by Pierre-Simon Laplace ; [ 17 ] it is difficult to assess Bayes's philosophical views on probability, since his essay does not go into questions of interpretation. There, Bayes defines probability of an event as "the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening" (Definition 5). In modern utility theory, the same definition would result by rearranging the definition of expected utility (the probability of an event times the payoff received in case of that event – including the special cases of buying risk for small amounts or buying security for big amounts) to solve for the probability. As Stigler points out, [ 10 ] this is a subjective definition, and does not require repeated events; however, it does require that the event in question be observable, for otherwise it could never be said to have "happened". Stigler argues that Bayes intended his results in a more limited way than modern Bayesians. Given Bayes's definition of probability, his result concerning the parameter of a binomial distribution makes sense only to the extent that one can bet on its observable consequences.
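As a minimal numerical illustration of this reading of Definition 5 (the wager and its prices below are hypothetical, chosen only to show the rearrangement of expected value to solve for the probability):

```python
# Hypothetical wager: a ticket pays 10 units if the event happens, nothing otherwise.
payoff = 10.0
fair_price = 3.0  # assumed value at which the expectation "ought to be computed"

# Expected value of the ticket = probability * payoff; setting this equal to the
# fair price and rearranging gives the probability, as in Bayes's Definition 5.
probability = fair_price / payoff
print(probability)  # 0.3
```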
The philosophy of Bayesian statistics is at the core of almost every modern estimation approach that includes conditioned probabilities, such as sequential estimation, probabilistic machine learning techniques, risk assessment, simultaneous localization and mapping, regularization or information theory. The rigorous axiomatic framework for probability theory as a whole, however, was developed 200 years later during the early and middle 20th century, starting with insightful results in ergodic theory by Plancherel in 1913. [ citation needed ] | https://en.wikipedia.org/wiki/Thomas_Bayes |
Thomas Carnelley (22 October 1854 – 27 August 1890) was a British chemist who contributed to physical chemistry and was involved in introducing German-inspired chemistry research into Britain as professor of chemistry at the University of Dundee and later at Aberdeen. He studied the relationships between the melting and boiling points of the salts of elements and their positions in the periodic table. He also examined relationships between molecular structures and physical properties and came up with a rule that is sometimes called "Carnelley's Rule". [ 1 ]
Carnelley was born in Manchester , the son of William Carnelley. He studied at King's College School, London and joined Owens College, Manchester in 1868. He showed scholastic brilliance, receiving a Bachelor of Science in 1871 placed third in the Third Class Honors in Chemistry, and in 1872 he obtained second place in the First Class Honors in Chemistry, qualifying for a university scholarship. His study of the vanadates of thallium led to a Dalton Chemical Scholarship, and he worked as a private assistant to Henry Enfield Roscoe in 1872-74 while giving evening lectures at Owens College. He then went to the University of Bonn and studied under August Kekulé (1829–1896), Theodor Zincke (1843–1928), and Otto Wallach (1847–1931), working on the reactions of carbon disulfide and alcohol with hot copper catalysts [ 2 ] and the synthesis of tolylphenyl. He received a doctorate in 1876 and a DSc from the University of London in 1879. He was appointed to Firth College, Sheffield in 1879, where he established a chemistry laboratory, and in 1881 he moved to the University College of Dundee, where he had more resources. He taught with great zeal and was popular among students. He also conducted research on the heating and ventilation of schools, the quality of air in buildings and related topics, which led to his being elected to the school board. He also established at the college a museum and a dye-house, with material contributed by his father. In 1888 he accepted the chair of chemistry at the University of Aberdeen following the death of James Smith Brazier . Two years after moving to Aberdeen, he suffered a sudden illness and an internal abscess. He died at home in Cults, Aberdeen at the age of 36. [ 3 ] [ 4 ]
Carnelley helped introduce the German style of chemical research and industrial applications into Britain. He was elected to the Chemical Society of London in 1874. He published more than 50 papers and several textbooks. He studied the synthesis of several hydrocarbons, including tolylphenyl [ 5 ] and ditolyl [ 6 ] , and examined the physics of ice. In his 1879 work, he examined the melting points of metallic salts and related them to their positions in the periodic table. [ 7 ] [ 8 ] [ 9 ] Mendeleeff took notice of the work and wrote to Henry Roscoe that Carnelley's work deserved wide knowledge, stating: “ The labors of Carnelley connected with the periodic law of the elements have been so remarkable that the history of the subject would be incomplete if his name were omitted .” [ 10 ] Carnelley and Thomas Burton developed a new pyrometer to measure high temperatures. [ 11 ] It consisted of a coil of copper tubing through which water flowed; the water temperature was measured at the inlet and at the outlet, and these readings were calibrated against known temperatures. In 1881 he claimed that it was possible to maintain ice in the solid phase at temperatures above the normal melting point under pressure. [ 12 ] In 1880 he demonstrated the sublimation of ice at low temperatures. [ 3 ]
Carnelley's Rule states that of two or more isomers, those whose atoms are the more symmetrically and the more compactly arranged melt higher than those in which the atomic arrangement is asymmetrical or in the form of long chains. [ 1 ] [ 13 ]
His interest in public hygiene led to his being appointed in 1886 to a committee to examine the air and smells in the House of Commons. Carnelley and J.S. Haldane were asked to examine the quality of the air; they studied the carbon dioxide levels in the sewers and in the rooms. [ 14 ] He also adapted a bacteriological analysis using Hesse's method. [ 3 ] [ 15 ] [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Thomas_Carnelley |
Thomas Gilbert Henry Jones CBE (1895–1970) was an Australian organic chemist and academic, notable for his pioneering work on essential oils and other natural products from the Queensland flora.
Thomas Gilbert Henry Jones was born on 14 July 1895 in Owens Gap, Hunter Valley, New South Wales , the son of Thomas Jones, a schoolteacher, and his wife Margaret Bell. [ 1 ] He attended Newcastle High School where he won prizes in his Junior and Senior years. [ 2 ] He entered the University of Sydney in 1912, where he studied for his B.Sc, graduating with first-class honours in mathematics and chemistry in 1915 [ 3 ] and winning the Levy Scholarship for chemistry and physics, the Slade Prize for practical chemistry, the Caird Scholarship for chemistry II and the University medals for mathematics and chemistry. In 1915, Jones was awarded a government research scholarship and was appointed an assistant lecturer and demonstrator at the University of Queensland .
Jones was selected as one of a group of chemists sent to England to undertake research for the munitions factories. [ 4 ] [ 5 ] His work at HM Factory, Gretna on the manufacture of nitroglycerin led to further work on solvent recovery, including the solvents used in cordite manufacture. At the end of the war he was admitted as an associate of the British Chemical Institute (BCI) for his service. He returned to Australia in 1919, resuming his work, and was promoted to lecturer in 1921. He earned his DSc from the University of Sydney in 1926. [ 6 ]
Jones was awarded the H. G. Smith Memorial Medal by the Royal Australian Chemical Institute (RACI) in 1930. [ 7 ] He served as President of the Queensland branch of the ACI (1938–39) and as President of the Australian branch in 1939. [ 8 ] Jones was promoted to Professor and Head of the Chemistry department at the University of Queensland in 1940, [ 9 ] following the death of Professor L.S. Bagster. He was a member of the University Senate from 1944 to 1968, and Dean of the Faculty of Science from 1942 to 1949 and in 1960–61. He was President of the Professorial Board from 1951 to 1956, and served on every senior committee, including that of the library for twelve years. As acting president of the Professorial Board in April 1957, he addressed a public meeting of 2500 people in Brisbane's City Hall , protesting against a new bill of the then Gair government, which threatened the university's ability to make autonomous appointments. [ 1 ]
Jones was awarded a CBE in 1960 and retired in 1965. He received an honorary LLD from the University of Queensland in 1960 and the University of Newcastle in 1966.
He published over 40 papers during his career.
Jones married Vera Haines, a dispensing chemist in Gympie in 1923. [ 10 ] They had two children. He died on 11 August 1970 in Brisbane.
Jones was honoured with a stone grotesque on the Forgan Smith building in the Great Court of the University of Queensland. [ 11 ] An annual lecture is presented in his name at the University of Queensland in the School of Chemistry and Molecular Biosciences. [ 12 ]
Jones, T. G. H., & Robinson, R. (1917). Experiments on the orientation of substituted catechol ethers. Journal of the Chemical Society, Transactions , 111 : 903-929.
Jones, T. G. H., & Smith, F. (1923). Notes on the essential oil of Daphnandra aromatica. Proc. Roy. Soc. Queensland . 35: 61-62.
Jones, T. G. H., & Smith, F. (1923). The composition of the volatile oil of the leaf of Daphnandra aromatica. Proc. Roy. Soc. Queensland . 35: 133-136.
Jones, T. G. H., & Smith, F. B. (1925). Olefinic terpene ketones from the volatile oil of flowering Tagetes glandulifera. Part I. Journal of the Chemical Society, Transactions , 127 , 2530-2539.
Jones, T. G. H., & Berry-Smith, F. (1925). The essential oil of Australia Menthas. 1. Mentha satureoides. Proceedings of the Royal Society of Queensland . 37: 89-91.
Jones, T. G. H., & Smith, F. B. (1925). Olefinic Terpene Ketones from the Volatile Oil of Flowering Tagetes glandulifera. Part II. Journal of the Chem. Soc . 127: 2530.
Jones, T. G. H. (1926). Olefinic terpene ketones from the volatile oil of flowering Tagetes glandulifera. Part II. Journal of the Chemical Society , 129 , 2767-2770.
Jones, T. G. H., & White, M. (1928). The essential oil of Eucalyptus andrewsi from Queensland. Proc. Roy. Soc. Queensland . 40: 132-133.
Jones, T. G. H., & Smith, F. B. (1928). Campnospermonol, a ketonic phenol from Campnospermum brevipetiolatum. Journal of the Chemical Society , 65-70.
Jones, T. G. H., & Smith, F. B. (1930). The volatile oil of Queensland sandalwood. Proceedings of the Royal Society of Queensland . 41: 17-22.
Jones, T. G. H., & White, M. (1931). Essential oils from the Queensland flora. III. Agonis luehmanni. Proc. Roy. Soc. Queensland . 43: 24-27.
Jones, T. G. H. (1932). Essential oils from the Queensland flora. IV. Agonis elliptica. Proc. Roy. Soc. Qld . 43: 3-5.
Jones, T. G. H., & Lahey, F. N. (1933). Essential oils from the Queensland flora. Part V. Eriostemon glasshousiensis. Proceedings of the Royal Society of Queensland , 44 : 151-152.
Jones, T. G. H. (1934). Reactions of Tagetone. I. Proc. Roy. Soc. Queensland . 45: 45.
Jones, T. G. H., & Harvey, J. M. (1936). Essential oils from the Queensland flora. Part VIII. The identity of melaleucol with nerolidol. Proc. R. Soc. Queensland . 47: 92-93.
Jones, T. G. H., & Lahey, F. N. (1936). Essential oils from the Queensland flora. VII. Melaleuca pubescens. Proceedings of the Royal Society of Queensland . 48 : 20-1.
Jones, T.G.H. and Haenke, W.L. (1937). Essential oils from the Queensland Flora, Part IX-Melaleuca Viridiflora, Part 1. Papers, University of Queensland, Department of Chemistry . 1 (1).
Jones, T. G. H., & Haenke, W. L. (1937). Essential oils from the Queensland flora. X. Melaleuca linariifolia. Proceedings of the Royal Society of Queensland , 48 , 48-50.
Jones, T. G. H., & Lahey, F. N. (1938). Essential oils of the Queensland flora—Part XIII. Backhousia hughesii. Proc. Roy Soc. Qld. 49: 152-153.
Lahey, F. N., & Jones, T. G. H. (1938). Essential oils from the Queensland flora. XV. Backhousia bancroftii , Papers, University of Queensland, Department of Chemistry 1(8): 41-42.
Lahey, F. N., & Jones, T. G. H. (1939). Essential oils from Queensland flora Part XVII, The essential oil of Evodia littoralis and the occurrence of a new phenolic ketone. Papers, University of Queensland, Department of Chemistry . 1(13).
Lahey, F.N. and Jones, T.G.H. (1939). Essential oils from the Queensland flora, XIV-Eucalyptus Conglomerata. Papers, University of Queensland, Department of Chemistry . 1(5).
Hancox, N.C. and Jones, T.G.H. (1939). 1-a-Phellandrene and its Monohydrochloride, Part 1. Papers, University of Queensland, Department of Chemistry . 1(6).
Hancox, N.C. and Jones, T.G.H. (1939). A new derivative of Terpinen-4-ol. Papers, University of Queensland, Department of Chemistry. 1 (7).
Lahey, F.N. and Jones, T.G.H. (1939). Essential oils from the Queensland Flora Part XV. Backhousia Bancroftii and Daphandra Rapandula. Papers, University of Queensland, Department of Chemistry . 1 (8).
Jones, T.G.H. (1939). The reduction of Tagetone to Tagetol. University of Queensland Papers, Department of Chemistry . 1(11).
Lahey, F.N. and Jones, T.G.H. (1939). The constitution and synthesis of Conglomerone. University of Queensland Papers, Department of Chemistry . 1(12).
Hancox, N.C., Jones, T.G.H. (1939). Optically pure 1-a-Phellandrene. University of Queensland Papers, Department of Chemistry . 1(14).
Lahey, F. N., & Jones, T. G. H. (1939). Essential oils from Queensland flora XIV Eucalyptus conglomerata. In Proc. Roy. Soc. Qld . 51: 10-13.
Lahey, F. N., & Jones, T. G. H. (1939). Essential oils from the Queensland flora. XVII. Evodia littoralis and the occurrence of a new phenolic ketone. Univ. of Queensland Papers, Dept. Chem. , 1 (13), 4.
Lahey, F.N and Jones, T.G.H. (1939). Essential oils from the Queensland Flora, XVI. Eucalyptus Microcorys. Papers, University of Queensland, Department of Chemistry . 1(9).
Jones, T. G. H., & Oakes, H. C. (1940). The crystalline solid formed in the oil of Melaleuca linariifolia. Univ. Queensl. Pap., Dep. Chem , 1 (18), 1-3.
Jones, T.G.H. and Lahey, F.N. (1942). The ultra-violet absorption spectra of Tagetone and related ketones. Papers, University of Queensland, Department of Chemistry . 1 (22).
Jones, T. G. H., & Lahey, F. (1943). Essential oils of the Queensland flora. Part XIX. The essential oils of Halfordia kendack. Proceedings of the Royal Society of Queensland . 55: 85-86.
Jones, T. G. H., & Wright, S. E. (1946). Essential oils from the Queensland flora. 21. The essential oil of Evodia Elleryana. Univ. Queensland Papers, Dept. Chem , 1 (27), 1-7.
Davenport, J.B., Jones, T.G.H. and Sutherland, M.D. (1949). The essential oils of the Queensland flora, part XXIII: a re-examination of the essential oil of Melaleuca linariifolia. Papers of the Department of Chemistry, University of Queensland . 1 (36).
Jones, T.G.H., Lahey, G. and Sutherland, M. (1949). Essential oils of the Queensland flora, Part XXIV, The essential oil of Calythris tetragona lab from the Glasshouse Mountains. Papers, University of Queensland, Department of Chemistry . 1 (37). | https://en.wikipedia.org/wiki/Thomas_Gilbert_Henry_Jones |
Thomas Henry Haines (August 9, 1933 – December 17, 2023) was an American author, social activist, biochemist and academic. He was a professor of chemistry at City College of New York and of biochemistry at the Sophie Davis School of Biomedical Education . He was a visiting professor in the Laboratory of Thomas Sakmar at Rockefeller University . [ 1 ] He also served on the board of the Graham School , a social services and foster care agency in New York City. His scientific research focused on the structure and function of the living cell membrane . He is the father of Avril Haines , the seventh Director of National Intelligence .
Thomas Haines was born on August 9, 1933, to Elsie Cubbon Haines (1894–1955) and Charles Haines, who deserted the family when Haines was two. In 1937, "by reason of the insanity of the mother", a judge placed him at the Graham School, an orphanage in Hastings-on-Hudson, New York . The orphanage, now a social services and foster care agency, was founded in 1806 by Isabella Graham and Elizabeth Hamilton, the recently widowed wife of Alexander Hamilton . Haines remained at the orphanage until high school, when he became a resident houseboy and gardener for a wealthy Hastings family. The story of Haines' early life appears as "From the Orphanage to the Lab" in the Story Collider podcast [ 2 ] and in his autobiography with Mindy Lewis, A Curious Life: From Rebel Orphan to Innovative Scientist . [ 3 ]
Haines attended the City College of New York , earning a B.S. in chemistry in 1957 and an M.S. in education in 1959. During that time he worked as a live-in babysitter for the then-blacklisted American songwriter Jay Gorney (co-writer with Yip Harburg of the Depression era anthem, “ Brother, Can You Spare a Dime ?”) and his wife Sondra. There Haines came to know many other blacklisted professionals, including actors Zero Mostel , Paul Robeson , and Lionel Stander , philosopher Barrows Dunham , and Bella Abzug , then a young lawyer defending blacklisted artists and intellectuals at HUAC hearings. [ 3 ]
After CCNY, Haines taught elementary school science at the Ethical Culture Fieldston School . He then became a laboratory assistant to Richard Block [ 4 ] at the Boyce Thompson Institute where he studied the microorganism Ochromonas danica . [ 5 ] When Block died in a plane crash, Haines took over his research projects. In 1964 he obtained his Doctor of Philosophy degree in chemistry from Rutgers University . [ 6 ]
Haines became assistant professor of chemistry at City College in 1964 and full professor of chemistry in 1972, a position he held until retiring in 2007. In 1972 he co-founded the Sophie Davis School of Biomedical Education with University President Robert Marshak . The program admitted new undergraduates directly into medical training and continues today as the CUNY School of Medicine . Haines taught biochemistry to undergraduates and served as director of biochemistry at the school from 1974 to 2006. Deeply committed to his students, he also taught remedial summer school and regularly counseled struggling students and their parents. On many occasions he was voted most popular professor.
Haines simultaneously conducted laboratory research and taught as professor of biochemistry in the doctoral program in biochemistry at the Graduate Center of the City University of New York . He published extensively on the structure and function of living membranes, including the function of cholesterol in blocking sodium leakage through membranes and, most recently, the function of cardiolipin in the mitochondrial membrane.
From 1994 to 2001, Haines chaired the Partnership for Responsible Drug Information , which organized lectures and conferences to educate the public about alternatives to the "War on Drugs."
Haines served as visiting professor at the Mitsubishi Institute in Japan, at the University of California at Berkeley, and in many other universities. On his retirement from CCNY, he became a visiting professor of biochemistry at the Sakmar Laboratory at Rockefeller University .
In 2020, Haines was elected a Fellow of the American Association for the Advancement of Science "For initiating and setting up the CUNY Medical School at City College of New York to educate minority and disadvantaged students." [ 7 ]
In 1960, Haines married painter Adrienne Rappaport, who used the name Adrian Rappin professionally. [ citation needed ] They had one daughter, Avril Haines , an attorney who is serving as the current Director of National Intelligence in the Biden administration . [ 8 ] Rappaport died in 1985 after developing chronic obstructive pulmonary disease and later contracting avian tuberculosis . [ citation needed ]
In 1986, Haines married the economist Mary "Polly" Cleveland . [ 3 ]
In 1964, Haines and Rappaport purchased two small run-down rent-controlled apartment buildings on New York's Upper West Side for $140,000, $10,000 down [ citation needed ] and for a time employed Al Pacino as the building superintendent. [ 9 ] When Haines and Cleveland sold the buildings for many millions of dollars in 2009, they put half the net proceeds into a foundation for the benefit of scientific and economic education.
Haines died in New York on December 17, 2023, at the age of 90. [ 10 ] | https://en.wikipedia.org/wiki/Thomas_H._Haines |
Thomas Harriot ( /ˈhæriət/ ; [ 2 ] c. 1560 – 2 July 1621), also spelled Harriott , Hariot or Heriot , was an English astronomer , mathematician , ethnographer and translator to whom the theory of refraction is attributed. Harriot was also recognized for his contributions to navigational techniques, [ 3 ] working closely with John White to create advanced maps for navigation. [ 3 ] Although Harriot worked extensively on numerous papers on astronomy, mathematics and navigation, he remains relatively obscure because he published almost none of this work; [ 4 ] his only published book was A Briefe and True Report of the New Found Land of Virginia (1588). [ 3 ] This book includes descriptions of English settlements and financial issues in Virginia at the time. [ 3 ] He is sometimes credited with the introduction of the potato to the British Isles . [ 5 ] Harriot invented binary notation and arithmetic several decades before Gottfried Wilhelm Leibniz , but this remained unknown until the 1920s. [ 6 ] He was also the first person to make a drawing of the Moon through a telescope, on 5 August 1609, about four months before Galileo Galilei . [ 7 ] [ 8 ]
After graduating from St Mary Hall , Oxford , Harriot travelled to the Americas , accompanying the 1585 expedition to Roanoke island funded by Sir Walter Raleigh and led by Sir Ralph Lane . He learned the Carolina Algonquian language from two Native Americans, Wanchese and Manteo , and could translate it, making him a vital member of the expedition. On his return to England, he worked for the 9th Earl of Northumberland .
Born in 1560 in Oxford , England , Thomas Harriot attended St Mary Hall, Oxford . His name appears in the hall's registry dating from 1577. [ 9 ]
Harriot started to study navigation shortly after receiving his bachelor's degree from Oxford University . [ 4 ] His study of navigation concentrated on the open seas and how to cross the Atlantic Ocean to the New World. [ 3 ] He used instruments such as the astrolabe and sextants to aid these studies. [ 3 ] Drawing on his astronomical and nautical studies, Harriot taught his navigational techniques to other captains while in Raleigh's service. [ 4 ] His findings were recorded in the Articon, which has never been found. [ 3 ]
After his graduation from Oxford in 1580, Harriot was first hired by Sir Walter Raleigh as a mathematics tutor; he used his knowledge of astronomy/astrology to provide navigational expertise, help design Raleigh's ships, and serve as his accountant. Prior to his expedition with Raleigh, Harriot wrote a treatise on navigation. [ 10 ] He made efforts to communicate with Manteo and Wanchese , two Native Americans who had been brought to England. Harriot devised a phonetic alphabet to transcribe their Carolina Algonquian language.
Harriot and Manteo spent many days in one another's company; Harriot interrogated Manteo closely about life in the New World and learned much that was to the advantage of the English settlers. [ 11 ] In addition, he recorded the sense of awe with which the Native Americans viewed European technology: [ 11 ]
Many things they sawe with us...as mathematical instruments, sea compasses...[and] spring clocks that seemed to goe of themselves – and many other things we had – were so strange unto them, and so farre exceeded their capacities to comprehend the reason and meanes how they should be made and done, that they thought they were rather the works of gods than men.
He made only one expedition, around 1585–86, and spent some time in the New World visiting Roanoke Island off the coast of North Carolina , expanding his knowledge by improving his understanding of the Carolina Algonquian language. As the only Englishman who had learned Algonquin prior to the voyage, Harriot was vital to the success of the expedition. [ 12 ]
His account of the voyage, named A Briefe and True Report of the New Found Land of Virginia , was published in 1588 (probably written a year before). The True Report contains an early account of the Native American population encountered by the expedition; it proved very influential upon later English explorers and colonists. He wrote: "Whereby it may be hoped, if means of good government be used, that they may in short time be brought to civility and the embracing of true religion." [ 13 ] At the same time, his views of Native Americans' industry and capacity to learn were later largely ignored in favor of the parts of the "True Report" about extractable minerals and resources. [ citation needed ]
As a scientific adviser during the voyage, Harriot was asked by Raleigh to find the most efficient way to stack cannonballs on the deck of the ship. His ensuing theory about the close-packing of spheres shows a striking resemblance to atomism and modern atomic theory, which he was later accused of believing. His correspondence about optics with Johannes Kepler , in which he described some of his ideas, later influenced Kepler's conjecture . [ citation needed ]
Harriot was employed for many years by Henry Percy, 9th Earl of Northumberland , with whom he resided at Syon House , which was run by Henry Percy's cousin Thomas Percy . [ citation needed ]
The Earl was surrounded by many scholars and learned men and provided a more stable form of patronage than Raleigh. In 1595 the Earl of Northumberland made property in Durham over to Harriot, moving him up the social ladder into the 'landed gentry'. Not long after the Durham transactions, the Earl gave Harriot the use of one of the houses on the estate at Syon, to work on optics and the sine law of refraction. [ 14 ]
Harriot's sponsors began to fall from favor: Raleigh was the first, and Harriot's other patron Henry Percy , the Earl of Northumberland, was imprisoned in 1605 in connection with the Gunpowder Plot as he was closely connected to one of the conspirators, Thomas Percy . Harriot himself was interrogated and briefly imprisoned but was soon released. [ 3 ] Walter Warner , Robert Hues , William Lower , and other scientists were present around the Earl of Northumberland's mansion as they worked for him and assisted in the teaching of the family's children. [ 9 ]
While this was occurring, Harriot continued his work involving mainly astronomy, and in 1607 Harriot used his notes from the observations of Halley's Comet from Ilfracombe [ 15 ] to elaborate on his understanding of its orbit. [ 3 ] Soon after, in 1609 and 1610 respectively, Harriot turned his attention towards the physical aspects of the Moon and his observations of the first sightings of sunspots . [ 4 ]
In early 1609, he bought a "Dutch trunke" (telescope), invented in 1608, and his observations were among the first uses of a telescope for astronomy. Harriot is now credited as the first astronomer to draw an astronomical object after viewing it through a telescope: he drew a map of the Moon on 5 August 1609 [O.S. 26 July 1609], preceding Galileo by several months. By 1613, Harriot had created two maps of the whole Moon, with many identifiable features such as lunar craters depicted in their correct relative positions that were not to be improved upon for several decades. [ 16 ] [ 17 ] He also observed sunspots in December 1610. [ 18 ]
From 1614, Harriot consulted Theodore de Mayerne , who was among James I 's doctors, about an apparent cancer of the left nostril that was gradually eating away the septum [ 19 ] and was apparently linked to a cancerous ulcer on his lip. This progressed until 1621, by which time he was living with a friend named Thomas Buckner on Threadneedle Street . There he died, apparently from skin cancer . It was suspected that Harriot's cancer was due to excessive tobacco consumption. [ 3 ]
He died on 2 July 1621, three days after writing his will (discovered by Henry Stevens ). [ 20 ] His executors posthumously published his Artis Analyticae Praxis on algebra in 1631; Nathaniel Torporley was the intended executor of Harriot's wishes, but Walter Warner in the end pulled the book into shape. [ 21 ] It may be a compendium of some of his works but does not represent all that he left unpublished (more than 400 sheets of annotated writing). Its organization does not follow that of the manuscripts, and it fails to convey the full significance of Harriot's writings. [ 9 ]
Thomas Harriot was buried in the church of St Christopher le Stocks in Threadneedle Street, near where he died. The church was subsequently damaged in the Great Fire of London , and demolished in 1781 to enable expansion of the Bank of England .
Harriot's 5 August [O.S. 26 July] 1609 drawings of his observations of the Moon have been noted as the first recorded telescopic observations ever made, predating Galileo Galilei 's 30 November 1609 observation by almost four months. [ 22 ] [ 23 ] [ 8 ] Galileo's drawings, which were the first such observations to be published, contained greater detail, identifying previously unknown features such as mountains and craters. [ 22 ] Harriot inaccurately drew how far the crescent Moon would be illuminated around its limb, misplaced the craters, and did not draw the relief details that one would see along the Moon's light/dark terminator . [ 24 ] Critics, such as Terrie Bloom, accused Harriot of plagiarizing depictions directly from Galileo's works and argued that Harriot's representation of the Moon was inadequate and needed to be improved. [ 24 ] However, both sets of descriptions were also deemed valuable because the two scientists focused on different observations: [ 22 ] Galileo described the arrangement topographically, while Harriot used cartographical concepts to illustrate his views of the Moon. [ 22 ] Harriot used a 6X Dutch telescope for his observations of the Moon. [ 22 ] His recordings and descriptions were very simple, with minimal detail, making his sketches difficult for later scientists to analyze. [ 24 ] Galileo's astronomical observations of the Moon were published in his book Sidereus Nuncius in 1610, while Harriot's observations were published in 1784, with some not coming to light until 1965. [ 22 ] Harriot's lack of publication is presumed to be connected to the issues with the Ninth Earl of Northumberland and the Gunpowder Plot . [ 22 ] Harriot was also known to have read and admired Galileo's work in Sidereus Nuncius . Harriot continued his observations of the Moon until 1612. [ 22 ]
Thomas Harriot is recognized as the first person to observe sunspots, in 1610, with the use of a telescope. [ 25 ] Harriot observed the sunspots directly through his telescope, a hazardous method. [ 26 ] Even though he observed the Sun directly, there were no recorded injuries to his eyes. [ 4 ] Harriot's sunspot observations were documented in 199 drawings that provided details about the solar rotation and its acceleration. [ 26 ] Like many of Harriot's other notes, his depictions of the sunspots were not published. [ 26 ] As with the early observations of the Moon, Galileo also observed sunspots and published his findings in 1613. [ 25 ] The specifics of how Harriot's telescope was set up remain largely unknown. [ 26 ] However, it is known that Harriot used telescopes of different magnifications, with 10X and 20X power being used most often. [ 26 ] Harriot chose to observe the sunspots after sunrise because it made the vertical easier to analyze. [ 26 ] According to Harriot's notes, a total of 690 observations of sunspots were recorded. [ 26 ] Harriot's findings challenged the idea of the unchanging heavens by demonstrating the Sun's axial rotation and provided further support for the heliocentric theory . [ 4 ]
Harriot's unpublished papers from around 1620 include an early basis of continuous compounding . [ 27 ] In them Harriot used what are now standard mathematical tools to explain the process behind continuous compounding. [ 27 ] With compound interest, the more often interest is added within the year (the rate remaining the same), the larger the final amount will be. [ 27 ] Based on this observation, Harriot developed equations involving logarithms and series calculations to illustrate the concept. [ 27 ]
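A brief sketch of the limiting behaviour behind continuous compounding (the principal, rate, and compounding frequencies below are arbitrary illustrative values, not taken from Harriot's papers):

```python
import math

# Illustrative values only: principal, nominal annual rate, and time in years.
principal, rate, years = 100.0, 0.05, 1.0

# Compounding m times per year: holding the nominal rate fixed, more frequent
# compounding gives a larger final amount, approaching the continuous limit.
for m in (1, 4, 12, 365, 100_000):
    print(m, principal * (1 + rate / m) ** (m * years))

# Continuous limit: P * e^(r * t)
print("limit", principal * math.exp(rate * years))
```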
Harriot also studied optics and refraction , and apparently discovered Snell's law 20 years before Snellius did; like so many of his works, this remained unpublished. In Virginia he learned the local Algonquian language, which may have had some effect on his mathematical thinking. [ citation needed ] He founded the "English school" of algebra . Around 1600, he introduced an algebraic symbolism close to modern notation; thus, computation with unknowns became as easy as with numbers. [ 28 ] He is also credited with discovering Girard's theorem , although the formula bears Girard's name as he was the first to publish it. [ 29 ]
His algebra book Artis Analyticae Praxis [ 30 ] (1631) was published posthumously in Latin. Unfortunately, the editors did not understand much of his reasoning and removed the parts they did not comprehend such as the negative and complex roots of equations. Because of the dispersion of Harriot's writings the full annotated English translation of the Praxis was not completed until 2007. [ 31 ] A more complete manuscript, De numeris triangularibus et inde de progressionibus arithmeticis: Magisteria magna , was finally published in facsimile form with commentary by Janet Beery and Jackie Stedall in 2009. [ 32 ]
The first biography of Harriot was written in 1876 by Henry Stevens of Vermont but not published until 1900, [ 20 ] fourteen years after Stevens's death. The publication was limited to 167 copies, so the work was not widely known until a reprint edition appeared in 1972. [ 33 ] The prominent American poet, novelist and biographer Muriel Rukeyser wrote an extended literary inquiry into the life and significance of Hariot (her preferred spelling), The Traces of Thomas Hariot (1970, 1971). Interest in Harriot continued to revive with the convening of a symposium at the University of Delaware in April 1971, with the proceedings published by the Oxford University Press in 1974. [ 34 ] The editor, John W. Shirley (1908-1988), went on to publish A Sourcebook for the Study of Thomas Harriot [ 35 ] and his Harriot biography (1983). [ 36 ] The papers of John Shirley are held in Special Collections at the University of Delaware. [ 37 ]
Harriot's accomplishments remain relatively obscure because he did not publish any of his results and also because many of his manuscripts have been lost; those that survive are in the British Museum and in the archives of the Percy family at Petworth House (Sussex) and Alnwick Castle (Northumberland). He was frequently accused of being an atheist, and it has been suggested that he deliberately refrained from publishing for fear of intensifying such attacks; as the literary historian Stephen Greenblatt writes "... he preferred life to fame. And who can blame him?" [ 38 ]
An event was held at Syon House , West London, on 26 July 2009 to celebrate the 400th anniversary of Harriot's first observations of the Moon. This event, Telescope400, [ 39 ] included the unveiling by Lord Egremont of a plaque commemorating Harriot. The plaque can now be seen by visitors to Syon House, the location of Harriot's historic observations. His drawing made 400 years earlier is believed to be based on the first observations of the Moon through a telescope. The event (sponsored by the Royal Astronomical Society ) was run as part of the International Year of Astronomy (IYA).
The original documents showing Harriot's Moon map of c. 1611, observations of Jupiter's satellites, and first observations of sunspots were on display at the Science Museum, London , from 23 July 2009 until the end of the IYA. [ 40 ]
The observatory in the campus of the College of William & Mary is named in Harriot's honor. A crater on the Moon was named after him in 1970; it is on the Moon's far side and hence unobservable from Earth. [ citation needed ]
In July 2014 the International Astronomical Union launched NameExoWorlds , a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Harriot for the exoplanet 55 Cancri f . The winning name was submitted by the Royal Netherlands Association for Meteorology and Astronomy of the Netherlands . It honors the astronomer. [ 41 ]
The Thomas Harriot College of Arts and Sciences at East Carolina University in Greenville, NC is named in recognition of Harriot's scientific contributions to the New World, such as his work A Briefe and True Report of the New Found Land of Virginia . [ 11 ] | https://en.wikipedia.org/wiki/Thomas_Harriot |
Thomas Shirley Hele (24 October 1881 – 23 January 1953) was a British academic. [ 1 ]
Hele was educated at Carlisle Grammar School ; Sedbergh School ; Emmanuel College, Cambridge ( Fellow , 1911); and Barts . He was University Lecturer in Biochemistry from 1921; Tutor at Emmanuel from 1922 to 1935; its Master from 1935 [ 2 ] to 1951; [ 3 ] and Vice-Chancellor of the University of Cambridge from 1943 to 1945. [ 4 ]
| https://en.wikipedia.org/wiki/Thomas_Hele_(biochemist) |
Thomas Hobbes ( / h ɒ b z / HOBZ ; 5 April 1588 – 4 December 1679) was an English philosopher , best known for his 1651 book Leviathan , in which he expounds an influential formulation of social contract theory. [ 4 ] He is considered to be one of the founders of modern political philosophy . [ 5 ] [ 6 ]
In his early life, overshadowed by his father's departure following a fight, he was taken under the care of his wealthy uncle. Hobbes's academic journey began in Westport , leading him to Oxford University, where he was exposed to classical literature and mathematics. He then graduated from the University of Cambridge in 1608. He became a tutor to the Cavendish family , which connected him to intellectual circles and initiated his extensive travels across Europe. These experiences, including meetings with figures like Galileo , shaped his intellectual development.
After returning to England from France in 1637, Hobbes witnessed the destruction and brutality of the English Civil War from 1642 to 1651 between Parliamentarians and Royalists, which heavily influenced his advocacy for governance by an absolute sovereign in Leviathan , as the solution to human conflict and societal breakdown. Aside from social contract theory, Leviathan also popularized ideas such as the state of nature ( "war of all against all" ) and laws of nature . His other major works include the trilogy De Cive (1642), De Corpore (1655), and De Homine (1658) as well as the posthumous work Behemoth (1681).
Hobbes contributed to a diverse array of fields, including history, jurisprudence , geometry , optics , theology , classical translations, ethics , as well as philosophy in general, marking him as a polymath . Despite controversies and challenges, including accusations of atheism and contentious debates with contemporaries, Hobbes's work profoundly influenced the understanding of political structure and human nature.
Thomas Hobbes was born on 5 April 1588 (Old Style), in Westport , now part of Malmesbury in Wiltshire , England. Having been born prematurely when his mother heard of the coming invasion of the Spanish Armada , Hobbes later reported that "my mother gave birth to twins: myself and fear." [ 7 ] Hobbes had a brother, Edmund, about two years older, as well as a sister, Anne.
Although Thomas Hobbes's childhood is unknown to a large extent, as is his mother's name, [ 8 ] it is known that Hobbes's father, Thomas Sr., was the vicar of both Charlton and Westport. Hobbes's father was uneducated, according to John Aubrey , Hobbes's biographer, and he "disesteemed learning." [ 9 ] Thomas Sr. was involved in a fight with the local clergy outside his church, forcing him to leave for London . As a result, the family was left in the care of Thomas Sr.'s older brother, Francis, a wealthy glove manufacturer with no family of his own.
Hobbes was educated at Westport church from age four, went to the Malmesbury school , and then to a private school kept by a young man named Robert Latimer, a graduate of the University of Oxford . [ 10 ] Hobbes was a good pupil, and between 1601 and 1602 he went to Magdalen Hall , the predecessor to Hertford College, Oxford , where he was taught scholastic logic and mathematics. [ 11 ] [ 12 ] [ 13 ] The principal, John Wilkinson, was a Puritan and had some influence on Hobbes. Before going up to Oxford, Hobbes translated Euripides ' Medea from Greek into Latin verse . [ 9 ]
At university, Thomas Hobbes appears to have followed his own curriculum as he was little attracted by the scholastic learning. [ 10 ] Leaving Oxford, Hobbes completed his B.A. degree by incorporation at St John's College, Cambridge , in 1608. [ 14 ] He was recommended by Sir James Hussey, his master at Magdalen, as tutor to William , the son of William Cavendish , [ 10 ] Baron of Hardwick (and later Earl of Devonshire ), and began a lifelong connection with that family. [ 15 ] William Cavendish was elevated to the peerage on his father's death in 1626, holding it for two years before his death in 1628. His son, also William, likewise became the 3rd Earl of Devonshire. Hobbes served as a tutor and secretary to both men. The 1st Earl's younger brother, Charles Cavendish, had two sons who were patrons of Hobbes. The elder son, William Cavendish , later 1st Duke of Newcastle , was a leading supporter of Charles I during the Civil War in which he personally financed an army for the king, having been governor to the Prince of Wales , Charles James, Duke of Cornwall. It was to this William Cavendish that Hobbes dedicated his Elements of Law . [ 9 ]
Hobbes became a companion to the younger William Cavendish and they both took part in a grand tour of Europe between 1610 and 1615. Hobbes was exposed to European scientific and critical methods during the tour, in contrast to the scholastic philosophy that he had learned in Oxford. In Venice, Hobbes made the acquaintance of Fulgenzio Micanzio , an associate of Paolo Sarpi , a Venetian scholar and statesman. [ 9 ]
His scholarly efforts at the time were aimed at a careful study of classical Greek and Latin authors, the outcome of which was, in 1628, his edition of Thucydides ' History of the Peloponnesian War , [ 10 ] the first translation of that work into English directly from a Greek manuscript. Hobbes professed a deep admiration for Thucydides, praising him as "the most politic historiographer that ever writ," and one scholar has suggested that "Hobbes' reading of Thucydides confirmed, or perhaps crystallized, the broad outlines and many of the details of [Hobbes'] own thought." [ 16 ] It has been argued that three of the discourses in the 1620 publication known as Horae Subsecivae: Observations and Discourses also represent the work of Hobbes from this period. [ 17 ]
Although he did associate with literary figures like Ben Jonson and briefly worked as Francis Bacon 's amanuensis , translating several of his Essays into Latin, [ 9 ] he did not extend his efforts into philosophy until after 1629. In June 1628, his employer Cavendish, then the Earl of Devonshire, died of the plague , and his widow, the countess Christian , dismissed Hobbes. [ 18 ] [ 19 ]
Hobbes soon (in 1629) found work as a tutor to Gervase Clifton , the son of Sir Gervase Clifton, 1st Baronet , and continued in this role until November 1630. [ 20 ] He spent most of this time in Paris. Thereafter, he again found work with the Cavendish family, tutoring William Cavendish, 3rd Earl of Devonshire , the eldest son of his previous pupil. Over the next seven years, as well as tutoring, he expanded his own knowledge of philosophy, awakening in him curiosity over key philosophic debates. He visited Galileo Galilei in Florence while he was under house arrest upon condemnation , in 1636, and was later a regular debater in philosophic groups in Paris, held together by Marin Mersenne . [ 18 ]
Hobbes's first area of study was an interest in the physical doctrine of motion and physical momentum. Despite his interest in this phenomenon, he disdained experimental work in physics. He went on to conceive the system of thought to the elaboration of which he would devote his life. His scheme was first to work out, in a separate treatise , a systematic doctrine of body, showing how physical phenomena were universally explicable in terms of motion, at least as motion or mechanical action was then understood. He then singled out Man from the realm of Nature and plants. Then, in another treatise, he showed what specific bodily motions were involved in the production of the peculiar phenomena of sensation, knowledge, affections and passions whereby Man came into relation with Man. Finally, he considered, in his crowning treatise, how Men were moved to enter into society, and argued how this must be regulated if people were not to fall back into "brutishness and misery". Thus he proposed to unite the separate phenomena of Body, Man, and the State. [ 18 ]
Hobbes came back home from Paris, in 1637, to a country riven with discontent, which disrupted the orderly execution of his philosophic plan. [ 18 ] However, by the end of the Short Parliament in 1640, he had written a short treatise called The Elements of Law, Natural and Politic . It was not published and only circulated as a manuscript among his acquaintances. A pirated version, however, was published about ten years later. Although it seems that much of The Elements of Law was composed before the sitting of the Short Parliament , there are polemical pieces of the work that clearly mark the influences of the rising political crisis. Nevertheless, many (though not all) elements of Hobbes's political thought were unchanged between The Elements of Law and Leviathan , which demonstrates that the events of the English Civil War had little effect on his contractarian methodology. However, the arguments in Leviathan were modified from The Elements of Law when it came to the necessity of consent in creating political obligation: Hobbes wrote in The Elements of Law that patrimonial kingdoms were not necessarily formed by the consent of the governed , while in Leviathan he argued that they were. This was perhaps a reflection either of Hobbes's thoughts about the engagement controversy or of his reaction to treatises published by Patriarchalists , such as Sir Robert Filmer , between 1640 and 1651. [ citation needed ]
When in November 1640 the Long Parliament succeeded the Short, Hobbes felt that he was in disfavour due to the circulation of his treatise and fled to Paris. He did not return for 11 years. In Paris, he rejoined the coterie around Mersenne and wrote a critique of the Meditations on First Philosophy of René Descartes , which was printed as third among the sets of "Objections" appended, with "Replies" from Descartes, in 1641. A different set of remarks on other works by Descartes succeeded only in ending all correspondence between the two. [ 21 ]
Hobbes also extended his own works in a way, working on the third section, De Cive , which was finished in November 1641. Although it was initially only circulated privately, it was well received, and included lines of argumentation that were repeated a decade later in Leviathan . He then returned to hard work on the first two sections of his work and published little except a short treatise on optics ( Tractatus opticus ), included in the collection of scientific tracts published by Mersenne as Cogitata physico-mathematica in 1644. He built a good reputation in philosophic circles and in 1645 was chosen with Descartes, Gilles de Roberval and others to referee the controversy between John Pell and Longomontanus over the problem of squaring the circle . [ 21 ]
The English Civil War began in 1642, and when the royalist cause began to decline in mid-1644, many royalists came to Paris and were known to Hobbes. [ 21 ] This revitalised Hobbes's political interests, and the De Cive was republished and more widely distributed. The printing began in 1646 by Samuel de Sorbiere through the Elsevier press in Amsterdam with a new preface and some new notes in reply to objections. [ 21 ]
In 1647, Hobbes took up a position as mathematical instructor to the young Charles, Prince of Wales , who had come to Paris from Jersey around July. This engagement lasted until 1648 when Charles went to Holland. [ 21 ]
The company of the exiled royalists led Hobbes to produce Leviathan , which set forth his theory of civil government in relation to the political crisis resulting from the war. Hobbes compared the State to a monster ( leviathan ) composed of men, created under pressure of human needs and dissolved by civil strife due to human passions. The work closed with a general "Review and Conclusion", in response to the war, which answered the question: Does a subject have the right to change allegiance when a former sovereign's power to protect is irrevocably lost? [ 21 ]
During the years of composing Leviathan , Hobbes remained in or near Paris. In 1647, he suffered a near-fatal illness that disabled him for six months. [ 21 ] On recovering, he resumed his literary task and completed it by 1650. Meanwhile, a translation of De Cive was being produced; scholars disagree about whether it was Hobbes who translated it. [ 22 ]
In 1650, a pirated edition of The Elements of Law, Natural and Politic was published. [ 23 ] It was divided into two small volumes: Human Nature, or the Fundamental Elements of Policie ; and De corpore politico, or the Elements of Law, Moral and Politick . [ 22 ]
In 1651, the translation of De Cive was published under the title Philosophical Rudiments concerning Government and Society . [ 24 ] Also, the printing of the greater work proceeded, and finally appeared in mid-1651, titled Leviathan, or the Matter, Forme, and Power of a Common Wealth, Ecclesiastical and Civil . It had a famous title-page engraving depicting a crowned giant above the waist towering above hills overlooking a landscape, holding a sword and a crozier and made up of tiny human figures. The work had immediate impact. [ 22 ] Soon, Hobbes was more lauded and decried than any other thinker of his time. [ 22 ] The first effect of its publication was to sever his link with the exiled royalists, who might well have killed him. [ 22 ] The secularist spirit of his book greatly angered both Anglicans and French Catholics . [ 22 ] Hobbes appealed to the revolutionary English government for protection and fled back to London in winter 1651. [ 22 ] After his submission to the Council of State , he was allowed to subside into private life [ 22 ] in Fetter Lane . [ citation needed ]
In 1658, Hobbes published the final section of his philosophical system, completing the scheme he had planned more than 19 years before. De Homine consisted for the most part of an elaborate theory of vision. The remainder of the treatise dealt partially with some of the topics more fully treated in the Human Nature and the Leviathan . In addition to publishing some controversial writings on mathematics, including disciplines like geometry, Hobbes also continued to produce philosophical works. [ 22 ]
From the time of the Restoration , he acquired a new prominence; "Hobbism" became a byword for all that respectable society ought to denounce. The young king, Hobbes's former pupil, now Charles II, remembered Hobbes and called him to the court to grant him a pension of £100. [ 25 ]
The king was important in protecting Hobbes when, in 1666, the House of Commons introduced a bill against atheism and profaneness. That same year, on 17 October 1666, it was ordered that the committee to which the bill was referred "should be empowered to receive information touching such books as tend to atheism, blasphemy and profaneness... in particular... the book of Mr. Hobbes called the Leviathan ." [ 26 ] Hobbes was terrified at the prospect of being labelled a heretic , and proceeded to burn some of his compromising papers. At the same time, he examined the actual state of the law of heresy. The results of his investigation were first announced in three short Dialogues added as an Appendix to his Latin translation of Leviathan , published in Amsterdam in 1668. In this appendix, Hobbes aimed to show that, since the High Court of Commission had been put down, there remained no court of heresy at all to which he was amenable, and that nothing could be heresy except opposing the Nicene Creed , which, he maintained, Leviathan did not do. [ 27 ]
The only consequence that came of the bill was that Hobbes could never thereafter publish anything in England on subjects relating to human conduct. The 1668 edition of his works was printed in Amsterdam because he could not obtain the censor's licence for its publication in England. Other writings were not made public until after his death, including Behemoth: the History of the Causes of the Civil Wars of England and of the Counsels and Artifices by which they were carried on from the year 1640 to the year 1662 . For some time, Hobbes was not even allowed to respond to any attacks by his enemies. Despite this, his reputation abroad was formidable. [ 27 ]
Hobbes spent the last four or five years of his life with his patron, William Cavendish, 1st Duke of Devonshire , at the family's Chatsworth House estate. He had been a friend of the family since 1608 when he first tutored an earlier William Cavendish. [ 28 ] After Hobbes's death, many of his manuscripts would be found at Chatsworth House. [ 29 ]
His final works were an autobiography in Latin verse in 1672, and a translation of four books of the Odyssey into "rugged" English rhymes that in 1673 led to a complete translation of both Iliad and Odyssey in 1675. [ 27 ]
In October 1679 Hobbes suffered a bladder disorder , and then a paralytic stroke , from which he died on 4 December 1679, aged 91, [ 27 ] [ 30 ] at Hardwick Hall , owned by the Cavendish family. [ 29 ]
His last words were said to have been "A great leap in the dark", uttered in his final conscious moments. [ 31 ] His body was interred in St John the Baptist's Church, Ault Hucknall , in Derbyshire. [ 32 ]
Hobbes, influenced by contemporary scientific ideas, had intended for his political theory to be a quasi-geometrical system, in which the conclusions followed inevitably from the premises. [ 9 ] The main practical conclusion of Hobbes's political theory is that the state or society cannot be secure unless it is at the disposal of an absolute sovereign. From this follows the view that no individual can hold rights of property against the sovereign, and that the sovereign may therefore take the goods of its subjects without their consent. This particular view owes its significance to its being first developed in the 1630s when Charles I had sought to raise revenues without the consent of Parliament, and therefore of his subjects. [ 9 ] Hobbes rejected one of the most famous theses of Aristotle 's politics, namely that human beings are naturally suited to life in a polis and do not fully realize their natures until they exercise the role of citizen . [ 33 ] It is perhaps also important to note that Hobbes extrapolated his mechanistic understanding of nature into the social and political realm, making him a progenitor of the term ' social structure .'
In Leviathan , Hobbes set out his doctrine of the foundation of states and legitimate governments, and of creating an objective science of morality. [ 34 ] Much of the book is occupied with demonstrating the necessity of a strong central authority to avoid the evil of discord and civil war.
Beginning from a mechanistic understanding of human beings and their passions, Hobbes postulates what life would be like without government, a condition which he calls the state of nature . In that state, each person would have a right, or license, to everything in the world. This, Hobbes argues, would lead to a "war of all against all" ( bellum omnium contra omnes ). The description contains one of the best-known passages in English philosophy, which describes the natural state humankind would be in, were it not for political community: [ 35 ]
In such condition, there is no place for industry; because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving, and removing, such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short. [ 36 ]
In such a state, people fear death and lack both the things necessary for comfortable living and the hope of being able to obtain them. So, in order to avoid this condition, people accede to a social contract and establish a civil society . According to Hobbes, society is a population and a sovereign authority , to whom all individuals in that society cede some right [ 37 ] for the sake of protection. Power exercised by this authority cannot be resisted, because the protector's sovereign power derives from individuals' surrendering their own sovereign power for protection. The individuals are thereby the authors of all decisions made by the sovereign: [ 38 ] "he that complaineth of injury from his sovereign complaineth that whereof he himself is the author, and therefore ought not to accuse any man but himself, no nor himself of injury because to do injury to one's self is impossible". There is no doctrine of separation of powers in Hobbes's discussion. He argues that any division of authority would lead to internal strife, jeopardizing the stability provided by an absolute sovereign. [ 39 ] [ 40 ] According to Hobbes, the sovereign must control civil, military, judicial and ecclesiastical powers, even the words. [ 41 ]
In 1654 a small treatise, Of Liberty and Necessity , directed at Hobbes, was published by Bishop John Bramhall . [ 22 ] [ 42 ] Bramhall, a strong Arminian , had met and debated with Hobbes and afterwards wrote down his views and sent them privately to be answered in this form by Hobbes. Hobbes duly replied, but not for publication. However, a French acquaintance took a copy of the reply and published it with "an extravagantly laudatory epistle". [ 22 ] Bramhall countered in 1655, when he printed everything that had passed between them (under the title of A Defence of the True Liberty of Human Actions from Antecedent or Extrinsic Necessity ). [ 22 ]
In 1656, Hobbes was ready with The Questions Concerning Liberty, Necessity and Chance , in which he replied "with astonishing force" [ 22 ] to the bishop. As perhaps the first clear exposition of the psychological doctrine of determinism, Hobbes's own two pieces were important in the history of the free will controversy. The bishop returned to the charge in 1658 with Castigations of Mr Hobbes's Animadversions , and also included a bulky appendix entitled The Catching of Leviathan the Great Whale . [ 43 ]
Hobbes opposed the existing academic arrangements, and assailed the system of the original universities in Leviathan . He went on to publish De Corpore , which contained not only tendentious views on mathematics but also an erroneous proof of the squaring of the circle . This all led mathematicians to target him for polemics and sparked John Wallis to become one of his most persistent opponents. From 1655, the publishing date of De Corpore , Hobbes and Wallis continued name-calling and bickering for nearly a quarter of a century, with Hobbes failing to admit his error to the end of his life. [ 44 ] After years of debate, the spat over proving the squaring of the circle gained such notoriety that it has become one of the most infamous feuds in mathematical history.
The religious opinions of Hobbes remain controversial as many positions have been attributed to him and range from atheism to orthodox Christianity. In The Elements of Law , Hobbes provided a cosmological argument for the existence of God, saying that God is "the first cause of all causes". [ 45 ]
Hobbes was accused of atheism by several contemporaries; Bramhall accused him of teachings that could lead to atheism. This was an important accusation, and Hobbes himself wrote, in his answer to Bramhall's The Catching of Leviathan , that "atheism, impiety, and the like are words of the greatest defamation possible". [ 46 ] Hobbes always defended himself from such accusations. [ 47 ] In more recent times also, much has been made of his religious views by scholars such as Richard Tuck and J. G. A. Pocock , but there is still widespread disagreement about the exact significance of Hobbes's unusual views on religion.
As Martinich has pointed out, in Hobbes's time the term "atheist" was often applied to people who believed in God but not in divine providence , or to people who believed in God but also maintained other beliefs that were considered to be inconsistent with such belief or judged incompatible with orthodox Christianity. He says that this "sort of discrepancy has led to many errors in determining who was an atheist in the early modern period ". [ 48 ] In this extended early modern sense of atheism, Hobbes did take positions that strongly disagreed with church teachings of his time. For example, he argued repeatedly that there are no incorporeal substances, and that all things, including human thoughts, and even God, heaven, and hell are corporeal, matter in motion. He argued that "though Scripture acknowledge spirits, yet doth it nowhere say, that they are incorporeal, meaning thereby without dimensions and quantity". [ 49 ] (In this view, Hobbes claimed to be following Tertullian .) Like John Locke , he also stated that true revelation can never disagree with human reason and experience, [ 50 ] although he also argued that people should accept revelation and its interpretations for the same reason that they should accept the commands of their sovereign: in order to avoid war.
While in Venice on tour, Hobbes made the acquaintance of Fulgenzio Micanzio, a close associate of Paolo Sarpi, who had written against the pretensions of the papacy to temporal power in response to the Interdict of Pope Paul V against Venice , which refused to recognise papal prerogatives. James I had invited both men to England in 1612. Micanzio and Sarpi had argued that God willed human nature, and that human nature indicated the autonomy of the state in temporal affairs. When he returned to England in 1615, William Cavendish maintained correspondence with Micanzio and Sarpi, and Hobbes translated the latter's letters from Italian, which were circulated among the Duke's circle. [ 9 ]
| https://en.wikipedia.org/wiki/Thomas_Hobbes
Thomas Frank Hofmann (born 1968) is a German food chemist and academic administrator . Since 2019, he has been President of the Technical University of Munich .
Hofmann passed the Abitur in 1987 at the Meranier-Gymnasium Lichtenfels and studied food chemistry at the University of Erlangen–Nuremberg from 1988 to 1992. In 1995, he received his PhD from the Technical University of Munich and habilitated in 1998. [ 1 ]
Until 2002, Hofmann taught as a Privatdozent for Food Chemistry at the Technical University of Munich. From 2002 to 2006, he was Professor (C4) and Managing Director of the Institute of Food Chemistry at the University of Münster . From 2007, he held the Chair of Food Chemistry and Molecular Sensory Science at the Weihenstephan Science Center of the Technical University of Munich.
From 2009 to 2019, Hofmann was TUM's Executive Vice President for Research and Innovation. In October 2018, he was elected by the University Council to succeed Wolfgang A. Herrmann as President of TUM, effective 1 October 2019. [ 2 ] | https://en.wikipedia.org/wiki/Thomas_Hofmann |
Thomas Kurtzman is an American physical chemist most notable for his research into the use of convolutional neural networks (CNNs) to improve pharmaceutical design. According to Bioworld , [ 1 ] Kurtzman's research reached the devastating conclusion that "the entirety" of apparent deep learning produced over the course of several years by a CNN dataset highly regarded in academia and industry was illusory. The perceived scientific progress, Kurtzman wrote, was due to CNNs' effective learning of the deficiencies in the dataset. "This is alarming," the article continued, "as companies have been built on this research." [ 2 ]
During the COVID-19 pandemic , a computational tool Kurtzman developed, GIST, was used to research potential new drugs to treat the illness. [ 3 ]
Kurtzman is a professor of chemistry at the Lehman College and the Graduate Center of the City University of New York . [ 2 ] His research is conducted at the affiliated Kurtzman Lab [ 4 ] and funded by the National Institutes of Health . [ 5 ]
He is married to Mor Armony, vice dean for faculty and research at New York University 's Stern School of Business .
| https://en.wikipedia.org/wiki/Thomas_Kurtzman
Thomas M. Connelly Jr. (born June 1952) is an American business executive with a focus on chemical engineering. In February 2015, he succeeded Madeleine Jacobs as chief executive officer and executive director of the American Chemical Society . [ 1 ]
In November 2014, E. I. du Pont de Nemours and Company announced that Connelly was retiring from his position as executive vice president and chief innovation officer after 36 years with the company. [ 2 ]
Connelly studied at Princeton University earning degrees in Chemical Engineering and Economics in 1974. He then attended the University of Cambridge as a Winston Churchill Scholar, [ 3 ] where he received a Ph.D. in chemical engineering.
Connelly was employed by E. I. du Pont de Nemours and Company for 36 years. He joined the company in 1977 as a research engineer at the DuPont Experimental Station in Wilmington, Delaware. [ 4 ] He had assignments in Kentucky and West Virginia before starting his overseas assignments. He had positions in England, Switzerland and China – the final position with responsibility for DuPont's Asia Pacific businesses. He then returned to Wilmington in 1999 and was named vice president and general manager of DuPont Fluoroproducts. He was named senior vice-president of research and Chief Science and Technology Officer in 2001. He was promoted to Executive Vice President, the Chief Innovation Officer and a member of the Office of the Chief Executives of DuPont in 2006. In this position, he had responsibility for DuPont's Applied BioSciences, Nutrition & Health, Performance Polymers and Packaging & Industrial Polymers businesses. He also had responsibility for Integrated Operations which includes Operations, Sourcing & Logistics and Engineering. DuPont announced he was retiring from the company in 2014. [ 2 ]
He is a member of the Department of Chemical Engineering Advisory Committee of Princeton University . As part of the Chemical Heritage Foundation "Heritage Day 2005" ceremonies, Connelly received the 2005 Award for Executive Excellence of the Commercial Development and Marketing Association (CDMA). | https://en.wikipedia.org/wiki/Thomas_M._Connelly |
Thomas Matthias Klapötke (born 24 February 1961 in Göttingen ) is a German inorganic chemist at Ludwig Maximilian University of Munich , studying explosives.
Klapötke grew up in Berlin and studied at Technische Universität Berlin (TU Berlin), completing his undergraduate degree in 1982, his PhD in 1986, and his habilitation in 1990. Klapötke worked as a lecturer at TU Berlin until 1995, when the University of Glasgow hired him for the Ramsay professorship. Since 1997, Klapötke has worked at the Ludwig Maximilian University of Munich (LMU) as a professor of Inorganic Chemistry. [ 1 ]
Klapötke's lab at the University of Munich is a group of about 30 employees, mainly studying explosives. Klapötke's goal is to generate "green" explosives, that either burn to completion or leave few toxic residues. [ 2 ] [ 3 ] Die Zeit called it "the only university laboratory in Germany investigating implements of war". [ 2 ] For this reason, the Federal Office for the Protection of the Constitution watches Klapötke's lab quite closely. [ 3 ] Klapötke is funded both by the German federal government and the US military [ 4 ] and has won a number of awards, including the 1986 Schering Prize. [ 1 ]
| https://en.wikipedia.org/wiki/Thomas_M._Klapötke
Thomas Maclear (17 March 1794 – 14 July 1879) was an Irish -born Cape Colony astronomer who became Her Majesty's astronomer at the Cape of Good Hope . [ 1 ]
Born on 17 March 1794, in Newtownstewart , the eldest son of Rev. James Maclear and Mary Magrath. [ 2 ] In 1808, he was sent to England to be educated in the medical profession. After passing his examinations, in 1815 he was accepted into the Royal College of Surgeons of England. He then worked as house-surgeon in the Bedford Infirmary.
In 1823, he went into partnership with his uncle at Biggleswade , Bedfordshire . In 1825, he was married to Mary Pearse, the daughter of Theed Pearse, Clerk of the Peace for the county of Bedford.
Maclear had a keen interest in amateur astronomy, and would begin a long association with the Royal Astronomical Society , of which he would be named a Fellow. In 1833, when the post became vacant, he was named as Her Majesty's Astronomer at the Cape of Good Hope , and arrived there aboard the Tam O'Shanter with his wife and five daughters, to take up his new duties in 1834. He worked with John Herschel until 1838, performing a survey of the southern sky, and continued to perform important astronomical observations over several more decades. [ 3 ] The Maclears and Herschels formed a close friendship, the wives drawn together by the unusual occupations of their husbands and the raising of their large families. Mary Maclear, like Margaret Herschel, was noted for her beauty and intelligence, though she suffered from extreme deafness.
Between 1841 and 1848, Maclear would be occupied in performing a geodetic survey for the purpose of recalculating the figure of the Earth (its dimensions and shape) via an arc measurement . He caused a beacon to be erected on top of Table Mountain which was used as a triangulation station for the checking of de Lacaille's arc measurement .
He became close friends with David Livingstone , and they shared a common interest in the exploration of Africa. He performed many other useful scientific activities, including collecting meteorological, magnetic and tide data.
In 1861, his wife died. In 1863, he was granted a pension, but did not retire from the observatory until 1870. He lived thereafter at Grey Villa, Mowbray. By 1876, he had lost his sight, and he died on 14 July 1879, aged 85, in Cape Town . He is buried next to his wife in the grounds of the Royal Observatory, Cape of Good Hope . [ 3 ] | https://en.wikipedia.org/wiki/Thomas_Maclear
Thomas Messinger Drown (March 19, 1842 – November 17, 1904) was the fourth University President of Lehigh University in Bethlehem, Pennsylvania , United States. He was also an analytical chemist and metallurgist . [ 1 ]
He was born in Philadelphia , Pennsylvania, in 1842. He graduated from Central High School in Philadelphia in 1859, [ 2 ] and then went on to study medicine at the University of Pennsylvania and graduated in 1862. He went abroad to Germany to study chemistry in Freiberg, Saxony , and mining at the University of Heidelberg . From 1869 to 1870 he was an instructor of metallurgy at Harvard University . In 1870, he started a consulting business in Philadelphia. [ 3 ] In 1872, he hired a former student, John Townsend Baker , as an assistant. From 1874 to 1881, he was professor of Analytical Chemistry at Lafayette College . [ 2 ] Baker followed him to Lafayette and later would found the J. T. Baker Chemical Co. , which merged with Mallinckrodt and was absorbed and spun off of Tyco International as a component company of Covidien . [ 4 ] In 1875, he was elected as a member of the American Philosophical Society . [ 5 ]
His professional career was interrupted in 1881, when, after the death of his father, he devoted himself to family matters. He restarted his professional work in 1885 by accepting a professorship at the Massachusetts Institute of Technology . [ 3 ]
He helped start MIT's chemical engineering curriculum in the late 1880s. [ 6 ] In 1887, he was appointed by the newly formed Massachusetts Board of Health to a landmark study of sanitary quality of the state's inland waters. As consulting chemist to the Massachusetts State Board of Health, he was in charge of the famous Lawrence Experiment Station laboratory conducting the water sampling, testing, and analysis. There he put to work the environmental chemist and first female graduate of MIT, Ellen Swallow Richards . This research created the famous "normal chlorine" map of Massachusetts that was the first of its kind and was the template for others. As a result, Massachusetts established the first water-quality standards in America, and the first modern sewage treatment plant was created. [ 7 ]
As a professor, Drown published a number of papers on metallurgy, mostly in Transactions of the American Institute of Mining Engineers . He was a founding member of the Institute, [ 2 ] and served as its secretary, and editor of its Transactions from 1871 till 1884. He was elected its president in 1897.
In 1895 he left MIT to become the fourth president of Lehigh University . Lehigh's endowment was predominantly in the stock of the major company of its founder, Asa Packer 's Lehigh Valley Railroad . The Panic of 1893 crashed the market, brought the country into a depression that lasted years, and nearly brought the university to financial insolvency. Many prominent railroads such as the Northern Pacific Railway , the Union Pacific Railroad and the Atchison, Topeka & Santa Fe Railroad went into bankruptcy, and over 15,000 companies and 500 banks failed. In order to gain new sources of funding, President Drown broke the university's ties with the Episcopal Church in 1897, qualifying the university for aid from the Commonwealth of Pennsylvania . During his term, which started during a major financial crisis, he was able to save Lehigh from bankruptcy, grow enrollment, which had dipped seriously, strengthen academics, and even have one major building erected.
A broad intellectual with interests in various fields, he nonetheless thought the key to Lehigh's success would be the school of technology. There he sought to broaden and deepen the offerings, increase the quality and quantity of laboratory space, equipment and apparatus, as funding permitted. Additionally, and in consultation with the faculty and the board of trustees, he created many new tiers of teaching, including the associate and assistant professorships. His idea was that this would create resources for top professors to be invited to Lehigh, and so help enlarge the curricula. During his tenure, the university's first emeritus professorship was granted (Harding of Physics), and first doctorate awarded (Joseph W. Richards). Many new degrees in the technical school were now being offered, such as Metallurgy (1891), Electrometallurgy, and Chemical Engineering (1902). [ 8 ] The curriculum leading to a degree in arts and engineering was established, as was the department of zoology and biology . New courses (majors, that is, or degree offerings, as it is now known) were also adopted in geology, and physics.
Dr. Drown eventually gained in popularity on campus, with his forward ideas, success, idiosyncratic pince-nez glasses and mustache. Faculty members eventually came to refer to Dr. Drown as "chief". Unfortunately, T. M. Drown would not live long enough to see all his ideas to fruition, as he died in office, following abdominal surgery, November 16, 1904. [ 9 ] [ 10 ] [ 11 ]
Williams Hall (1903), a Beaux Arts inspired Brick structure, was erected to house the growing departments of Biology and Geology, among other functions.
In 1908, Lehigh University opened up Drown Hall which now houses Lehigh's English Department. | https://en.wikipedia.org/wiki/Thomas_Messinger_Drown |
A. Thomas Tymoczko (September 1, 1943 – August 8, 1996) was a philosopher specializing in logic and the philosophy of mathematics . He taught at Smith College in Northampton , Massachusetts from 1971 until his death from stomach cancer in 1996, aged 52. [ 1 ] [ 2 ]
His publications include New Directions in the Philosophy of Mathematics , an edited collection of essays for which he wrote individual introductions, and Sweet Reason: A Field Guide to Modern Logic , co-authored by Jim Henle . In addition, he published a number of philosophical articles, such as " The Four-Color Problem and its Philosophical Significance", which argues that the increasing use of computers is changing the nature of mathematical proof.
He is considered a member of the fallibilist school in philosophy of mathematics . Philip Kitcher dubbed this school the "maverick" tradition in the philosophy of mathematics. ( Paul Ernest ) [ citation needed ]
He completed an undergraduate degree from Harvard University in 1965, and his PhD from the same university in 1972. [ 1 ]
Tymoczko was married to comparative literature scholar Maria Tymoczko of the University of Massachusetts Amherst . Their three children include music composer Dmitri Tymoczko and Smith College mathematics professor Julianna Tymoczko . [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Thomas_Tymoczko
Thomas Vogt (born 1958) is a German chemist and material scientist . He is an Educational Foundation Distinguished Professor in the Department of Chemistry and Biochemistry at the University of South Carolina . [ 1 ]
Vogt is most known for his work in structural chemistry , chemical synthesis , and structure-property correlations of metal oxides based on diffraction techniques using electrons , x-rays , and neutrons . [ 2 ] He has authored and co-authored over 300 peer-reviewed journal articles and several books such as Solid State Materials Chemistry and Modelling Nanoscale Imaging in Electron Microscopy . He is the recipient of the 1996 R&D 100 award from R&D Magazine , the 2002 Design and Engineering Award of Popular Mechanics , the 2018 Carolina Trustee Professorship Award, and the 2019 USC Educational Foundation Award in Science, Mathematics and Engineering. [ 3 ]
Vogt is a Fellow of the American Physical Society , [ 4 ] the American Association for the Advancement of Science , [ 5 ] the Neutron Scattering Society of America, [ 6 ] as well as of the Institute of Advanced Study at Durham University [ 7 ] and was a Founding Member of the editorial board for Physical Review Applied . [ 8 ]
Vogt earned a Diploma in Chemistry in 1985, followed by a PhD in 1987, both from the University of Tübingen . [ 7 ]
After working at a European and US national laboratory ( Institute Laue Langevin and Brookhaven National Laboratory ), Vogt began an academic career at the Department of Philosophy at the University of South Carolina . He teaches The History and Philosophy of Chemistry in the South Carolina Honors College. Later he became a professor in the Department of Chemistry and Biochemistry at the University of South Carolina, where he has been the Educational Foundation Distinguished Professor since 2010. [ 1 ]
From 2005 to 2023, Vogt served as Director of the NanoCenter at the University of South Carolina [ 9 ] and was Associate Vice President for Research from 2011-2013, and a member on the Board of Directors of the USC Research Foundation from 2008 to 2012. He was the co-chair of the Search Committee for Provost [ 10 ] and Chief Academic Officer in 2019 and later a Pearce Faculty Fellow in the South Carolina Honors College from 2020 to 2022. [ 11 ]
Before joining the University of South Carolina, Vogt worked as a Scientist at the Institute Laue-Langevin, France until 1992, then joined Brookhaven National Laboratory (BNL) as an Associate Physicist, promoted to Physicist in 1995, [ 12 ] and by 2000, he led the Powder Diffraction Group in BNL's Physics Department. From 2003 to 2005, he held various roles at BNL, including Head of Materials Synthesis and Characterization Group, Cluster Leader of Materials Synthesis in the Center for Functional Nanomaterials (CFN), and Technical Coordinator for scientific equipment in the CFN building project. Moreover, he led three startups, Nanosource, LUMINOF and Sens4 as the Chief Technology Officer. He is a limited partner of TEXXMO mobile solutions, a wearable computer company and IOT button manufacturer. [ 1 ]
Vogt has conducted basic research using neutron, x-ray, and electron diffraction techniques to study structure-property relationships in materials, while also exploring philosophical and ethical implications of science and technology, particularly concerning the emergence of the periodic table of chemical elements. [ 13 ] He holds 11 US patents such as the development of multidimensional integrated detection and analysis system (MIDAS) [ 14 ] [ 15 ] and neutron scintillating materials. [ 16 ]
Vogt investigated complex material structures using aberration-corrected scanning transmission electron microscopy (STEM). [ 17 ] He helped develop new image simulation and modeling methodologies, such as super-resolution techniques, specialized de-noising methods, mathematical and statistical learning theories, and applications of compressed sensing, outlined in the book Modelling Nanoscale Imaging in Electron Microscopy . In a review for Physics Today , Les J. Allen commented, "In six chapters, the editors tackle the ambitious challenge of bridging the gap between high-level applied mathematics and experimental electron microscopy. They have met the challenge admirably... That work is also applicable to the new generation of x-ray free-electron lasers, which have similar prospective applications, and illustrates nicely the importance of applied mathematics in the physical sciences." [ 18 ]
Vogt and collaborators using STEM imaging with spherical aberration imaged the M1 phase, a MoVNbTe oxide partial oxidation catalyst, highlighting its potential applications in complex materials structure analysis. [ 19 ] He also used the annular dark-field STEM to analyze nanoscale domains of complex oxide phases in disordered solids development. [ 20 ] Furthermore, he and Douglas Blom employed parallel computing to analyze compositional disorder in a Mo, V-oxide bronze, highlighting discrepancies between experimental and simulated V content along metal-oxygen atomic columns, validated by HAADF-STEM imaging. [ 21 ]
Vogt used high-resolution neutron diffraction techniques to investigate structural changes in molecules. Alongside Andrew N. Fitch and Jeremy K. Cockcroft, he revealed the low-temperature crystal structure of Rhenium heptafluoride (ReF7), confirming its molecular configuration as a distorted pentagonal bipyramid with Cs (m) symmetry. [ 22 ] In another joint study published in Science , he observed negative thermal expansion in ZrW 2 O 8 , using diffraction to analyze its cubic structure. [ 23 ]
Using high-resolution neutron powder diffraction, Czjzek and Vogt located the hydrogen positions in zeolite Y. [ 24 ] Subsequently, with Yongjae Lee, he examined structural changes in zeolites at high pressures, showing a pronounced rearrangement of non-framework metal ions and pressure-induced hydration/superhydration. [ 25 ]
Vogt's work on solid-state chemistry has focused on the temperature and pressure-dependent structural arrangements of materials. In 2021, he co-authored a textbook Solid State Materials Chemistry with Patrick M. Woodward, Pavel Karen and John S.O. Evans, covering structure, defects, bonding, and properties of solid state materials. He reported a spin ordering transition in oxygen-deficient YBaCo 2 O 5 , accompanied by structural changes and spin state alterations, marking the first observation of this phenomenon induced by long-range orbital and charge ordering. [ 26 ] He collaborated on the characterization of a new solid electrolyte, Bi 2 La 8 [(GeO 4 ) 6 ]O 3 , identifying oxide ion interstitials as key to its ionic conductivity using advanced dark field electron microscopy. [ 27 ] [ 28 ] Furthermore, he investigated the cubic structure of CaCu 3 Ti 4 O 12 , a material with a large optical conductivity, ruling out ferroelectricity in favor of relaxor-like dynamics responsible for its giant dielectric effect. [ 29 ] [ 30 ]
In a paper published in Nature Chemistry , Vogt and collaborators demonstrated the irreversible insertion and trapping of xenon in Ag-natrolite under moderate conditions, a possible explanation for the xenon deficiency in terrestrial and Martian atmospheres. [ 31 ] He also observed water insertion into kaolinite at 2.7 GPa and 200 °C, shedding light on water release in subduction zones and its effects on seismicity and volcanic activity. Furthermore, his research showcased a pressure-driven metathesis reaction resulting in the formation of a water-free pollucite phase, CsAlSi 2 O 6 , with potential applications in nuclear waste remediation. [ 32 ]
Vogt and colleagues used advanced laser techniques to observe sub-nanosecond structural dynamics of iron, revealing intricate wave patterns during compression and shock decay. [ 33 ] He also examined the structural phase transitions in silicon 2D-nanosheets under high pressure, revealing size and shape-dependent behavior and the formation of 1D nanowires with reduced thermal conductivity. [ 34 ]
Vogt contributed to the development of white phosphors for fluorescent lighting. Together with Sangmoon Park, he developed a family of self-activating and doped UV phosphors for fluorescent white-light production. [ 35 ] They also developed up-conversion phosphors emitting shorter-wavelength light in an ordered oxyfluoride compound. [ 36 ] | https://en.wikipedia.org/wiki/Thomas_Vogt |
Thomas Whitwell (24 October 1837 – 5 August 1878) was a British engineer, inventor and metallurgist.
Known as Tom, he was the third son of William and Sarah Whitwell of Kendal. Tom was initially educated at home by private tutors before being sent to the Quaker-run York School at 10 years old. In 1858, at 16, he travelled with his elder brother William to Darlington . As apprentice to Alfred Kitching in his locomotive building shop he learned engineering and metallurgy. From there he continued to build his skills, working with Robert Stephenson & Co in Newcastle. [ 1 ]
In 1859 he and William started iron-smelting at Thornaby. Iron ore had been discovered in the area four years previously. [ 1 ] The brothers designed and built large-scale hot blast fire brick stoves that were much larger and more efficient than anything built in the area until that point. By 1873 the three re-built blast furnaces were 80 feet high and 22 feet in diameter and the works had over 750 employees. [ 2 ]
In 1878 Tom died due to an accident at his works. A steam explosion caught him and his foreman John Thompson whilst they were investigating a problem with the rolling mill furnace. [ 1 ] [ 3 ]
The works continued to run under family ownership, under the chairmanship of Tom's nephew William Fry Whitwell until 1922 when they were eventually closed due to a global glut of pig iron.
Tom filed at least five patents in the UK and two in the US. He invented and patented the technology used at Thornaby as the Whitwell Heating Stove. Over two hundred stoves were installed in over 70 furnaces around the globe. He also patented a continuous brick-burning kiln and a more efficient fire grate. [ 1 ]
The City of Whitwell in Tennessee is named in his honour. Tom was a founder and Chairman of the Southern States Coal, Iron and Land Company which developed coal mining in Whitwell and Iron smelting in nearby South Pittsburg . Tom visited the area at least twice hosting a banquet for five hundred workers and guests of ‘all classes’. [ 4 ] After his death, the company was acquired by the Tennessee Coal and Iron Company .
Throughout his life, Tom was a committed Christian and contributed to wider society, helping to form over thirty Young Men's Christian Associations across the North of England. He was also Captain of his local Fire Brigade. One of his lasting legacies is the Cleveland Institution of Engineers . The Institution is one of the oldest such engineering bodies in the world. Tom hosted the inaugural meeting at his home on Church Road in Stockton and was the first secretary of the organisation. There were 12 members at that first meeting, but by the time of his death (when he was president) the ranks had grown to over 460. [ 1 ]
Thousands of residents assembled to pay respects to Tom at his funeral, filling the south end of Stockton High Street and the entire length of Bridge Road. His funeral procession was four deep and numbered about two thousand people – an unusual turnout for a 40-year-old industrialist and engineer. | https://en.wikipedia.org/wiki/Thomas_Whitwell
Thomas–Fermi screening is a theoretical approach to calculate the effects of electric field screening by electrons in a solid. [ 1 ] It is a special case of the more general Lindhard theory ; in particular, Thomas–Fermi screening is the limit of the Lindhard formula when the wavevector (the reciprocal of the length-scale of interest) is much smaller than the Fermi wavevector, i.e. the long-distance limit. [ 1 ] It is named after Llewellyn Thomas and Enrico Fermi .
The Thomas–Fermi wavevector (in Gaussian-cgs units ) is [ 1 ] k 0 2 = 4 π e 2 ∂ n ∂ μ , {\displaystyle k_{0}^{2}=4\pi e^{2}{\frac {\partial n}{\partial \mu }},} where μ is the chemical potential ( Fermi level ), n is the electron concentration and e is the elementary charge .
For the example of semiconductors that are not too heavily doped, the charge density n ∝ e μ / k B T , where k B is Boltzmann constant and T is temperature. In this case, k 0 2 = 4 π e 2 n k B T , {\displaystyle k_{0}^{2}={\frac {4\pi e^{2}n}{k_{\rm {B}}T}},}
i.e. 1/ k 0 is given by the familiar formula for Debye length . In the opposite extreme, in the low-temperature limit T = 0 ,
electrons behave as quantum particles ( fermions ). Such an approximation is valid for metals at room temperature, and the Thomas–Fermi screening wavevector k TF given in atomic units is k T F 2 = 4 ( 3 n π ) 1 / 3 . {\displaystyle k_{\rm {TF}}^{2}=4\left({\frac {3n}{\pi }}\right)^{1/3}.}
If we restore the electron mass m e {\displaystyle m_{e}} and the Planck constant ℏ {\displaystyle \hbar } , the screening wavevector in Gaussian units is k 0 2 = k T F 2 ( m e / ℏ 2 ) {\displaystyle k_{0}^{2}=k_{\rm {TF}}^{2}(m_{e}/\hbar ^{2})} .
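As a rough numerical illustration of the degenerate-limit formula above (a sketch that is not part of the source; the copper-like electron density is an assumed example value), one can evaluate the Thomas–Fermi wavevector in atomic units:

```python
import math

# Minimal sketch (not part of the source): Thomas-Fermi screening in the
# degenerate (T = 0) limit, using the atomic-unit formula quoted above,
# k_TF^2 = 4 * (3 n / pi)^(1/3), with n in electrons per bohr^3.

BOHR_CM = 5.29177e-9  # Bohr radius in cm

def thomas_fermi_wavevector(n_per_cm3):
    """Return k_TF in inverse bohr for an electron density given in cm^-3."""
    n_au = n_per_cm3 * BOHR_CM ** 3          # convert to electrons per bohr^3
    return math.sqrt(4.0 * (3.0 * n_au / math.pi) ** (1.0 / 3.0))

# Assumed example density, roughly that of conduction electrons in copper.
n_example = 8.5e22                            # cm^-3 (illustrative value)
k_tf = thomas_fermi_wavevector(n_example)
print(f"k_TF ~ {k_tf:.2f} bohr^-1, screening length ~ {1.0 / k_tf:.2f} bohr")
```

The resulting screening length of roughly one bohr radius illustrates why static fields are screened out over atomic distances in good metals.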
For more details and discussion, including the one-dimensional and two-dimensional cases, see the article on Lindhard theory .
The internal chemical potential (closely related to Fermi level , see below) of a system of electrons describes how much energy is required to put an extra electron into the system, neglecting electrical potential energy. As the number of electrons in the system increases (with fixed temperature and volume), the internal chemical potential increases. This consequence is largely because electrons satisfy the Pauli exclusion principle : only one electron may occupy an energy level and lower-energy electron states are already full, so the new electrons must occupy higher and higher energy states.
Given a Fermi gas of density n {\displaystyle n} , the highest occupied momentum state (at zero temperature) is known as the Fermi momentum, k F {\displaystyle k_{\rm {F}}} .
Then the required relationship is described by the electron number density n ( μ ) {\displaystyle n(\mu )} as a function of μ , the internal chemical potential. The exact functional form depends on the system. For example, for a three-dimensional Fermi gas , a noninteracting electron gas, at absolute zero temperature, the relation is n ( μ ) ∝ μ 3 / 2 {\displaystyle n(\mu )\propto \mu ^{3/2}} .
Proof: Including spin degeneracy,
n = 2 1 ( 2 π ) 3 4 3 π k F 3 , μ = ℏ 2 k F 2 2 m . {\displaystyle n=2{\frac {1}{(2\pi )^{3}}}{\frac {4}{3}}\pi k_{\rm {F}}^{3}\quad ,\quad \mu ={\frac {\hbar ^{2}k_{\rm {F}}^{2}}{2m}}.} Eliminating k F {\displaystyle k_{\rm {F}}} between these two relations gives n ∝ μ 3 / 2 {\displaystyle n\propto \mu ^{3/2}} , as stated above.
(in this context—i.e., absolute zero—the internal chemical potential is more commonly called the Fermi energy ).
As another example, for an n-type semiconductor at low to moderate electron concentration, n ( μ ) ∝ e μ / k B T {\displaystyle n(\mu )\propto e^{\mu /k_{\rm {B}}T}} .
The main assumption in the Thomas–Fermi model is that there is an internal chemical potential at each point r that depends only on the electron concentration at the same point r . This behaviour cannot be exactly true because of the Heisenberg uncertainty principle . No electron can exist at a single point; each is spread out into a wavepacket of size ≈ 1 / k F , where k F is the Fermi wavenumber, i.e. a typical wavenumber for the states at the Fermi surface . Therefore, it is not possible to define a chemical potential at a single point, independent of the electron density at nearby points.
Nevertheless, the Thomas–Fermi model is likely to be a reasonably accurate approximation as long as the potential does not vary much over lengths comparable or smaller than 1 / k F . This length usually corresponds to a few atoms in metals.
Finally, the Thomas–Fermi model assumes that the electrons are in equilibrium, meaning that the total chemical potential is the same at all points. (In electrochemistry terminology, "the electrochemical potential of electrons is the same at all points". In semiconductor physics terminology, "the Fermi level is flat".) This balance requires that the variations in internal chemical potential are matched by equal and opposite variations in the electric potential energy. This gives rise to the "basic equation of nonlinear Thomas–Fermi theory": [ 1 ] ρ induced ( r ) = − e [ n ( μ 0 + e ϕ ( r ) ) − n ( μ 0 ) ] {\displaystyle \rho ^{\text{induced}}(\mathbf {r} )=-e[n(\mu _{0}+e\phi (\mathbf {r} ))-n(\mu _{0})]} where n ( μ ) is the function discussed above (electron density as a function of internal chemical potential), e is the elementary charge , r is the position, and ρ induced ( r ) {\displaystyle \rho ^{\text{induced}}(\mathbf {r} )} is the induced charge at r . The electric potential ϕ {\displaystyle \phi } is defined in such a way that ϕ ( r ) = 0 {\displaystyle \phi (\mathbf {r} )=0} at the points where the material is charge-neutral (the number of electrons is exactly equal to the number of ions), and similarly μ 0 is defined as the internal chemical potential at the points where the material is charge-neutral.
If the chemical potential does not vary too much, the above equation can be linearized: ρ induced ( r ) ≈ − e 2 ∂ n ∂ μ ϕ ( r ) {\displaystyle \rho ^{\text{induced}}(\mathbf {r} )\approx -e^{2}{\frac {\partial n}{\partial \mu }}\phi (\mathbf {r} )} where ∂ n / ∂ μ {\displaystyle \partial n/\partial \mu } is evaluated at μ 0 and treated as a constant.
This relation can be converted into a wavevector-dependent dielectric function : [ 1 ] (in cgs-Gaussian units) ε ( q ) = 1 + k 0 2 q 2 {\displaystyle \varepsilon (\mathbf {q} )=1+{\frac {k_{0}^{2}}{q^{2}}}} where k 0 = 4 π e 2 ∂ n ∂ μ . {\displaystyle k_{0}={\sqrt {4\pi e^{2}{\frac {\partial n}{\partial \mu }}}}.} At long distances ( q → 0 ), the dielectric constant approaches infinity, reflecting the fact that charges get closer and closer to perfectly screened as you observe them from further away.
If a point charge Q is placed at r = 0 in a solid, what field will it produce, taking electron screening into account?
One seeks a self-consistent solution to two equations: the Thomas–Fermi relation above, which gives the induced charge density at each point as a function of the electric potential ϕ {\displaystyle \phi } at that point, and Poisson's equation , which gives ϕ {\displaystyle \phi } as a function of the total charge density (the point charge Q plus the induced charge).
For the nonlinear Thomas–Fermi formula, solving these simultaneously can be difficult, and usually there is no analytical solution. However, the linearized formula has a simple solution (in cgs-Gaussian units): ϕ ( r ) = Q r e − k 0 r {\displaystyle \phi (\mathbf {r} )={\frac {Q}{r}}e^{-k_{0}r}} With k 0 = 0 (no screening), this becomes the familiar Coulomb's law .
Note that there may be dielectric permittivity in addition to the screening discussed here; for example due to the polarization of immobile core electrons. In that case, replace Q by Q / ε , where ε is the relative permittivity due to these other contributions.
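To make the effect of the exponential factor concrete, here is a minimal sketch (not from the source) that tabulates the bare and screened potentials of the linearized solution for illustrative values Q = 1 and k0 = 1 in cgs-Gaussian units:

```python
import math

# Sketch (illustrative values, cgs-Gaussian units): compare the bare Coulomb
# potential Q/r with the linearized Thomas-Fermi result (Q/r) * exp(-k0 * r).
Q, k0 = 1.0, 1.0   # assumed values for illustration

def bare(r):
    return Q / r

def screened(r):
    return (Q / r) * math.exp(-k0 * r)

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:4.1f}   bare = {bare(r):7.3f}   screened = {screened(r):7.3f}")
```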
For a three-dimensional Fermi gas (noninteracting electron gas), the screening wavevector k 0 {\displaystyle k_{0}} can be expressed as a function of both temperature and Fermi energy E F {\displaystyle E_{\rm {F}}} . The first step is calculating the internal chemical potential μ {\displaystyle \mu } , which involves the inverse of a Fermi–Dirac integral , μ k B T = F 1 / 2 − 1 [ 2 3 Γ ( 3 / 2 ) ( E F T ) 3 / 2 ] . {\displaystyle {\frac {\mu }{k_{\rm {B}}T}}=F_{1/2}^{-1}\left[{2 \over {3\Gamma (3/2)}}\left({E_{\rm {F}} \over T}\right)^{3/2}\right].}
We can express k 0 {\displaystyle k_{0}} in terms of an effective temperature T e f f {\displaystyle T_{\rm {eff}}} : k 0 2 = 4 π e 2 n / k B T e f f {\displaystyle k_{0}^{2}=4\pi e^{2}n/k_{\rm {B}}T_{\rm {eff}}} , or k B T e f f = n ∂ μ / ∂ n {\displaystyle k_{\rm {B}}T_{\rm {eff}}=n\partial \mu /\partial n} . The general result for T e f f {\displaystyle T_{\rm {eff}}} is T e f f T = 4 3 Γ ( 1 / 2 ) ( E F / k B T ) 3 / 2 F − 1 / 2 ( μ / k B T ) . {\displaystyle {T_{\rm {eff}} \over T}={4 \over 3\Gamma (1/2)}{(E_{\rm {F}}/k_{\rm {B}}T)^{3/2} \over F_{-1/2}(\mu /k_{\rm {B}}T)}.} In the classical limit k B T ≫ E F {\displaystyle k_{\rm {B}}T\gg E_{\rm {F}}} , we find T e f f = T {\displaystyle T_{\rm {eff}}=T} , while in the degenerate limit k B T ≪ E F {\displaystyle k_{\rm {B}}T\ll E_{\rm {F}}} we find k B T e f f = ( 2 / 3 ) E F . {\displaystyle k_{\rm {B}}T_{\rm {eff}}=(2/3)E_{\rm {F}}.} A simple approximate form that recovers both limits correctly is k B T e f f = [ ( k B T ) p + ( 2 E F / 3 ) p ] 1 / p , {\displaystyle k_{\rm {B}}T_{\rm {eff}}=\left[(k_{\rm {B}}T)^{p}+(2E_{\rm {F}}/3)^{p}\right]^{1/p},} for any power p {\displaystyle p} . A value that gives decent agreement with the exact result for all k B T / E F {\displaystyle k_{\rm {B}}T/E_{\rm {F}}} is p = 1.8 {\displaystyle p=1.8} , [ 2 ] which has a maximum relative error of < 2.3%.
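The interpolation formula can be checked numerically; the following sketch (an illustration, not part of the source) evaluates it in units where E F = 1 and confirms that it reproduces the classical and degenerate limits:

```python
# Sketch: the interpolation formula quoted above,
#   k_B T_eff = [ (k_B T)^p + (2 E_F / 3)^p ]^(1/p)  with p = 1.8,
# evaluated in units where E_F = 1 so that both limits are easy to check.
def t_eff(t_over_ef, p=1.8):
    return (t_over_ef ** p + (2.0 / 3.0) ** p) ** (1.0 / p)

for t in (0.0, 0.1, 1.0, 10.0):
    print(f"k_B T / E_F = {t:5.1f}  ->  k_B T_eff / E_F = {t_eff(t):6.3f}")
# t -> 0 gives 2/3 (degenerate limit); t >> 1 approaches t (classical limit).
```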
In the effective temperature given above, the temperature is used to construct an effective classical model. However, this form of the effective temperature does not correctly recover the specific heat and most other properties of the finite- T {\displaystyle T} electron fluid even for the non-interacting electron gas. It does not of course attempt to include electron-electron interaction effects. A simple form for an effective temperature which correctly recovers all the density-functional properties of even the interacting electron gas, including the pair-distribution functions at finite T {\displaystyle T} , has been given using the classical map hyper-netted-chain ( CHNC ) model of the electron fluid. That is T e f f E F = ( T 2 E F 2 + T q 2 E F 2 ) 1 / 2 {\displaystyle {\frac {T_{\rm {eff}}}{E_{\rm {F}}}}=\left({\frac {T^{2}}{E_{\rm {F}}^{2}}}+{\frac {T_{q}^{2}}{E_{\rm {F}}^{2}}}\right)^{1/2}} where the quantum temperature T q {\displaystyle T_{q}} is defined as: T q E F = 1 a + b r s + c r s {\displaystyle {\frac {T_{q}}{E_{\rm {F}}}}={\frac {1}{a+b{\sqrt {r_{\rm {s}}}}+cr_{\rm {s}}}}} where a = 1.594 , b = −0.3160 , c = 0.0240 . Here r s {\displaystyle r_{\rm {s}}} is the Wigner–Seitz radius corresponding to a sphere in atomic units containing one electron. That is, if n {\displaystyle n} is the number of electrons in a unit volume using atomic units where the unit of length is the Bohr, viz., 5.291 77 × 10 −9 cm , then r s = ( 3 4 π n ) 1 / 3 . {\displaystyle r_{\rm {s}}=\left({\frac {3}{4\pi n}}\right)^{1/3}.} For a dense electron gas, e.g., with r s ≈ 1 {\displaystyle r_{\rm {s}}\approx 1} or less, electron-electron interactions become negligible compared to the Fermi energy, then, using a value of r s {\displaystyle r_{\rm {s}}} close to unity, we see that the CHNC effective temperature at T = 0 {\displaystyle T=0} approximates towards the form 2 E F / 3 {\displaystyle 2E_{\rm {F}}/3} . Other mappings for the 3D case, [ 3 ] and similar formulae for the effective temperature have been given for the classical map of the 2-dimensional electron gas as well. [ 4 ] | https://en.wikipedia.org/wiki/Thomas–Fermi_screening |
Thompson sampling , [ 1 ] [ 2 ] [ 3 ] named after William R. Thompson , is a heuristic for choosing actions that address the exploration–exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
Consider a set of contexts X {\displaystyle {\mathcal {X}}} , a set of actions A {\displaystyle {\mathcal {A}}} , and rewards in R {\displaystyle \mathbb {R} } . The aim of the player is to play actions under the various contexts, such as to maximize the cumulative rewards. Specifically, in each round, the player obtains a context x ∈ X {\displaystyle x\in {\mathcal {X}}} , plays an action a ∈ A {\displaystyle a\in {\mathcal {A}}} and receives a reward r ∈ R {\displaystyle r\in \mathbb {R} } following a distribution that depends on the context and the issued action.
The elements of Thompson sampling are as follows: a likelihood function P ( r | θ , a , x ) {\displaystyle P(r|\theta ,a,x)} for the reward; a set Θ {\displaystyle \Theta } of parameters θ {\displaystyle \theta } of the reward distribution; a prior distribution P ( θ ) {\displaystyle P(\theta )} on these parameters; past observations D {\displaystyle {\mathcal {D}}} consisting of (context, action, reward) triplets; and a posterior distribution P ( θ | D ) ∝ P ( D | θ ) P ( θ ) {\displaystyle P(\theta |{\mathcal {D}})\propto P({\mathcal {D}}|\theta )P(\theta )} . [ 3 ] : sec. 4
Thompson sampling consists of playing the action a ∗ ∈ A {\displaystyle a^{\ast }\in {\mathcal {A}}} according to the probability that it maximizes the expected reward; action a ∗ {\displaystyle a^{\ast }} is chosen with probability [ 3 ] : Algorithm 4 ∫ I [ E ( r | a ∗ , x , θ ) = max a ′ E ( r | a ′ , x , θ ) ] P ( θ | D ) d θ , {\displaystyle \int \mathbb {I} \left[\mathbb {E} (r|a^{\ast },x,\theta )=\max _{a'}\mathbb {E} (r|a',x,\theta )\right]P(\theta |{\mathcal {D}})\,d\theta ,}
where I {\displaystyle \mathbb {I} } is the indicator function .
In practice, the rule is implemented by sampling. In each round, parameters θ ∗ {\displaystyle \theta ^{\ast }} are sampled from the posterior P ( θ | D ) {\displaystyle P(\theta |{\mathcal {D}})} , [ 3 ] : 7 and an action a ∗ {\displaystyle a^{\ast }} chosen that maximizes E [ r | θ ∗ , a ∗ , x ] {\displaystyle \mathbb {E} [r|\theta ^{\ast },a^{\ast },x]} , i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques. [ 3 ] : sec. 5
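As a concrete illustration of this sampling rule, the following sketch (not taken from the cited tutorial; the Beta–Bernoulli model and all numerical values are assumptions chosen for simplicity) implements Thompson sampling for a context-free Bernoulli bandit, where the posterior over each arm's success probability is a Beta distribution:

```python
import random

# Minimal sketch: Thompson sampling for a Bernoulli bandit with a Beta prior.
# Each arm's unknown success probability has a Beta(1, 1) prior; the posterior
# after s successes and f failures is Beta(1 + s, 1 + f).
def thompson_bandit(true_probs, n_rounds=10_000, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Sample one plausible model theta from each arm's posterior ...
        samples = [rng.betavariate(1 + successes[a], 1 + failures[a])
                   for a in range(n_arms)]
        # ... and act greedily with respect to the sampled model.
        a = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[a] else 0
        total_reward += reward
        if reward:
            successes[a] += 1
        else:
            failures[a] += 1
    return total_reward, successes, failures

reward, s, f = thompson_bandit([0.3, 0.5, 0.7])
print("total reward:", reward, "pulls per arm:", [si + fi for si, fi in zip(s, f)])
```

Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior update reduces to incrementing success and failure counts, which is one reason this special case is so widely used in A/B testing.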
Thompson sampling was originally described by Thompson in 1933. [ 1 ] It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] A first proof of convergence for the bandit case has been shown in 1997. [ 4 ] The first application to Markov decision processes was in 2000. [ 6 ] A related approach (see Bayesian control rule ) was published in 2010. [ 5 ] In 2010 it was also shown that Thompson sampling is instantaneously self-correcting . [ 9 ] Asymptotic convergence results for contextual bandits were published in 2011. [ 7 ] Thompson Sampling has been widely used in many online learning problems including A/B testing in website design and online advertising, [ 10 ] and accelerated learning in decentralized decision making. [ 11 ] A Double Thompson Sampling (D-TS) [ 12 ] algorithm has been proposed for dueling bandits , a variant of traditional MAB, where feedback comes in the form of pairwise comparison.
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule , has been shown to be the optimal solution to the adaptive coding problem with actions and observations. [ 5 ] In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.
The setup is as follows. Let a 1 , a 2 , … , a T {\displaystyle a_{1},a_{2},\ldots ,a_{T}} be the actions issued by an agent up to time T {\displaystyle T} , and let o 1 , o 2 , … , o T {\displaystyle o_{1},o_{2},\ldots ,o_{T}} be the observations gathered by the agent up to time T {\displaystyle T} . Then, the agent issues the action a T + 1 {\displaystyle a_{T+1}} with probability: [ 5 ] P ( a T + 1 | a ^ 1 : T , o 1 : T ) , {\displaystyle P(a_{T+1}|{\hat {a}}_{1:T},o_{1:T}),}
where the "hat"-notation a ^ t {\displaystyle {\hat {a}}_{t}} denotes the fact that a t {\displaystyle a_{t}} is a causal intervention (see Causality ), and not an ordinary observation. If the agent holds beliefs θ ∈ Θ {\displaystyle \theta \in \Theta } over its behaviors, then the Bayesian control rule becomes
where P ( θ | a ^ 1 : T , o 1 : T ) {\displaystyle P(\theta |{\hat {a}}_{1:T},o_{1:T})} is the posterior distribution over the parameter θ {\displaystyle \theta } given actions a 1 : T {\displaystyle a_{1:T}} and observations o 1 : T {\displaystyle o_{1:T}} .
In practice, the Bayesian control amounts to sampling, at each time step, a parameter θ ∗ {\displaystyle \theta ^{\ast }} from the posterior distribution P ( θ | a ^ 1 : T , o 1 : T ) {\displaystyle P(\theta |{\hat {a}}_{1:T},o_{1:T})} , where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations o 1 , o 2 , … , o T {\displaystyle o_{1},o_{2},\ldots ,o_{T}} and ignoring the (causal) likelihoods of the actions a 1 , a 2 , … , a T {\displaystyle a_{1},a_{2},\ldots ,a_{T}} , and then by sampling the action a T + 1 ∗ {\displaystyle a_{T+1}^{\ast }} from the action distribution P ( a T + 1 | θ ∗ , a ^ 1 : T , o 1 : T ) {\displaystyle P(a_{T+1}|\theta ^{\ast },{\hat {a}}_{1:T},o_{1:T})} .
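The following sketch (an illustration under simplifying assumptions, not an implementation from the cited paper) applies this recipe to a two-armed Bernoulli bandit with two candidate hypotheses, each paired with the behaviour that is optimal if that hypothesis is true; the posterior is updated with observation likelihoods only:

```python
import math
import random

# Sketch (all numbers and the two-hypothesis setup are assumptions): the
# Bayesian control rule for a two-armed Bernoulli bandit.  Each hypothesis
# fixes the arms' success probabilities and is paired with the "expert"
# behaviour that always pulls the arm that is best under that hypothesis.
HYPOTHESES = [
    {"probs": (0.8, 0.2), "best_arm": 0},
    {"probs": (0.2, 0.8), "best_arm": 1},
]

def bayesian_control_rule(true_probs, n_rounds=1000, seed=0):
    rng = random.Random(seed)
    log_post = [0.0] * len(HYPOTHESES)   # log of an (unnormalized) uniform prior
    for _ in range(n_rounds):
        # Sample a hypothesis from the current posterior, then act as its expert.
        m = max(log_post)
        weights = [math.exp(lp - m) for lp in log_post]
        theta = rng.choices(range(len(HYPOTHESES)), weights=weights)[0]
        a = HYPOTHESES[theta]["best_arm"]
        o = 1 if rng.random() < true_probs[a] else 0
        # Update with the observation likelihood only: the issued action is
        # treated as an intervention, so its own probability does not enter.
        for i, h in enumerate(HYPOTHESES):
            p = h["probs"][a]
            log_post[i] += math.log(p if o == 1 else 1.0 - p)
    return log_post

print(bayesian_control_rule(true_probs=(0.8, 0.2)))
```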
Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling [ 13 ] or unify regret analysis across both these algorithms and many classes of problems. [ 14 ] | https://en.wikipedia.org/wiki/Thompson_sampling |
In mathematical finite group theory, Thompson's original uniqueness theorem ( Feit & Thompson 1963 , theorems 24.5 and 25.2) states that in a minimal simple finite group of odd order there is a unique maximal subgroup containing a given elementary abelian subgroup of rank 3. Bender (1970) gave a shorter proof of the uniqueness theorem.
| https://en.wikipedia.org/wiki/Thompson_uniqueness_theorem
A transversely isotropic (also known as polar anisotropic ) material is one with physical properties that are symmetric about an axis that is normal to a plane of isotropy . This transverse plane has infinite planes of symmetry and thus, within this plane, the material properties are the same in all directions. In geophysics, vertically transverse isotropy (VTI) is also known as radial anisotropy.
This type of material exhibits hexagonal symmetry (though technically this ceases to be true for tensors of rank 6 and higher), so the number of independent constants in the (fourth-rank) elasticity tensor is reduced to 5 (from a total of 21 independent constants in the case of a fully anisotropic solid ). The (second-rank) tensors of electrical resistivity, permeability, etc. have two independent constants.
An example of a transversely isotropic material is the so-called on-axis unidirectional fiber composite lamina where the fibers are circular in cross section. In a unidirectional composite, the plane normal to the fiber direction can be considered as the isotropic plane, at long wavelengths (low frequencies) of excitation. If the fibers are aligned with the x 2 {\displaystyle x_{2}} axis, then that axis is normal to the plane of isotropy.
In terms of effective properties, geological layers of rocks are often interpreted as being transversely isotropic. Calculating the effective elastic properties of such layers in petrology has been coined Backus upscaling , which is described below.
The material matrix K _ _ {\displaystyle {\underline {\underline {\boldsymbol {K}}}}} has a symmetry with respect to a given orthogonal transformation ( A {\displaystyle {\boldsymbol {A}}} ) if it does not change when subjected to that transformation.
For invariance of the material properties under such a transformation we require
Hence the condition for material symmetry is (using the definition of an orthogonal transformation)
Orthogonal transformations can be represented in Cartesian coordinates by a 3 × 3 {\displaystyle 3\times 3} matrix A _ _ {\displaystyle {\underline {\underline {\boldsymbol {A}}}}} given by
Therefore, the symmetry condition can be written in matrix form as
For a transversely isotropic material, the matrix A _ _ {\displaystyle {\underline {\underline {\boldsymbol {A}}}}} has the form
where the x 3 {\displaystyle x_{3}} -axis is the axis of symmetry . The material matrix remains invariant under rotation by any angle θ {\displaystyle \theta } about the x 3 {\displaystyle x_{3}} -axis.
Linear material constitutive relations in physics can be expressed in the form
where d , f {\displaystyle \mathbf {d} ,\mathbf {f} } are two vectors representing physical quantities and K {\displaystyle {\boldsymbol {K}}} is a second-order material tensor. In matrix form,
Examples of physical problems that fit the above template are listed in the table below. [ 1 ]
Using θ = π {\displaystyle \theta =\pi } in the A _ _ {\displaystyle {\underline {\underline {\boldsymbol {A}}}}} matrix implies that K 13 = K 31 = K 23 = K 32 = 0 {\displaystyle K_{13}=K_{31}=K_{23}=K_{32}=0} . Using θ = π 2 {\displaystyle \theta ={\tfrac {\pi }{2}}} leads to K 11 = K 22 {\displaystyle K_{11}=K_{22}} and K 12 = − K 21 {\displaystyle K_{12}=-K_{21}} . Energy restrictions usually require K 12 , K 21 ≥ 0 {\displaystyle K_{12},K_{21}\geq 0} and hence we must have K 12 = K 21 = 0 {\displaystyle K_{12}=K_{21}=0} . Therefore, the material properties of a transversely isotropic material are described by the matrix
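As a non-authoritative numerical check, the Python sketch below assumes the standard rotation matrix about the x 3 -axis and the diagonal form K = diag(K 11 , K 11 , K 33 ) that the argument above leads to, and verifies that the symmetry condition K = A K A^T holds for arbitrary rotation angles; the numerical values are illustrative.

import numpy as np

def rotation_about_x3(theta):
    # Orthogonal transformation: rotation by theta about the x3-axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,   s,  0.0],
                     [-s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

# Transversely isotropic second-rank material tensor: K11 = K22, off-diagonals zero.
K = np.diag([2.5, 2.5, 4.0])   # illustrative values

for theta in np.linspace(0.0, 2 * np.pi, 7):
    A = rotation_about_x3(theta)
    assert np.allclose(A @ K @ A.T, K)   # symmetry condition
print("K is invariant under any rotation about the x3-axis")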
In linear elasticity , the stress and strain are related by Hooke's law , i.e.,
or, using Voigt notation ,
The condition for material symmetry in linear elastic materials is [ 2 ]
where
Using the specific values of θ {\displaystyle \theta } in matrix A _ _ {\displaystyle {\underline {\underline {\boldsymbol {A}}}}} , [ 3 ] it can be shown that the fourth-rank elasticity stiffness tensor may be written in 2-index Voigt notation as the matrix
The elasticity stiffness matrix C i j {\displaystyle C_{ij}} has 5 independent constants, which are related to well known engineering elastic moduli in the following way. These engineering moduli are experimentally determined.
The compliance matrix (inverse of the elastic stiffness matrix) is
where Δ := ( C 11 − C 12 ) [ ( C 11 + C 12 ) C 33 − 2 C 13 C 13 ] {\displaystyle \Delta :=(C_{11}-C_{12})[(C_{11}+C_{12})C_{33}-2C_{13}C_{13}]} . In engineering notation,
Comparing these two forms of the compliance matrix shows us that the longitudinal Young's modulus is given by
Similarly, the transverse Young's modulus is
The inplane shear modulus is
and the Poisson's ratio for loading along the polar axis is
Here, L represents the longitudinal (polar) direction and T represents the transverse direction.
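Numerically, these relations can be checked by inverting the stiffness matrix directly. The sketch below (Python) uses illustrative stiffness values and the common Voigt ordering (11, 22, 33, 23, 13, 12) with x 3 as the symmetry axis; both are assumptions for the example rather than values from the text.

import numpy as np

# Illustrative transversely isotropic stiffness matrix (Voigt notation, GPa).
C11, C12, C13, C33, C44 = 160.0, 60.0, 55.0, 140.0, 43.0
C66 = (C11 - C12) / 2.0
C = np.array([[C11, C12, C13, 0.0, 0.0, 0.0],
              [C12, C11, C13, 0.0, 0.0, 0.0],
              [C13, C13, C33, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, C44, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, C44, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, C66]])

S = np.linalg.inv(C)              # compliance matrix
E_L   = 1.0 / S[2, 2]             # longitudinal (polar) Young's modulus
E_T   = 1.0 / S[0, 0]             # transverse Young's modulus
G_T   = 1.0 / S[5, 5]             # in-plane shear modulus
nu_LT = -S[0, 2] / S[2, 2]        # Poisson's ratio for loading along the polar axis
print(E_L, E_T, G_T, nu_LT)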
In geophysics, a common assumption is that the rock formations of the crust are locally polar anisotropic (transversely isotropic); this is the simplest case of geophysical interest. Backus upscaling [ 4 ] is often used to determine the effective transversely isotropic elastic constants of layered media for long wavelength seismic waves.
Assumptions that are made in the Backus approximation are:
For shorter wavelengths, the behavior of seismic waves is described using the superposition of plane waves . Transversely isotropic media support three types of elastic plane waves:
Solutions to wave propagation problems in such media may be constructed from these plane waves, using Fourier synthesis .
A layered model of homogeneous and isotropic materials can be up-scaled to a transversely isotropic medium, as proposed by Backus. [ 4 ]
Backus presented an equivalent-medium theory: a heterogeneous medium can be replaced by a homogeneous one that predicts wave propagation in the actual medium. [ 5 ] Backus showed that layering on a scale much finer than the wavelength has an impact and that a number of isotropic layers can be replaced by a homogeneous transversely isotropic medium that behaves exactly in the same manner as the actual medium under static load in the infinite wavelength limit.
If each layer i {\displaystyle i} is described by 5 transversely isotropic parameters ( a i , b i , c i , d i , e i ) {\displaystyle (a_{i},b_{i},c_{i},d_{i},e_{i})} , specifying the matrix
The elastic moduli for the effective medium will be
where
⟨ ⋅ ⟩ {\displaystyle \langle \cdot \rangle } denotes the volume weighted average over all layers.
This includes isotropic layers, as the layer is isotropic if b i = a i − 2 e i {\displaystyle b_{i}=a_{i}-2e_{i}} , a i = c i {\displaystyle a_{i}=c_{i}} and d i = e i {\displaystyle d_{i}=e_{i}} .
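For the special case of a stack of isotropic layers, the volume-weighted averages take a simple form. The Python sketch below implements what are widely quoted as the standard Backus (1962) averages in terms of the layers' Lamé parameters; treat it as an illustrative sketch (the layer thicknesses and moduli are made-up values, and the formulas should be checked against the reference before use).

import numpy as np

def backus_average(thickness, lam, mu):
    # Effective VTI stiffnesses (C11, C13, C33, C44, C66) of a finely layered
    # stack of isotropic layers with Lame parameters lam, mu (Backus, 1962).
    w = np.asarray(thickness, dtype=float)
    w = w / w.sum()                              # volume (thickness) weights
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    avg = lambda x: np.sum(w * x)                # <.> volume-weighted average

    C33 = 1.0 / avg(1.0 / (lam + 2.0 * mu))
    C13 = C33 * avg(lam / (lam + 2.0 * mu))
    C11 = avg(4.0 * mu * (lam + mu) / (lam + 2.0 * mu)) \
          + C33 * avg(lam / (lam + 2.0 * mu)) ** 2
    C44 = 1.0 / avg(1.0 / mu)
    C66 = avg(mu)
    return C11, C13, C33, C44, C66

# Two illustrative isotropic layers of equal thickness, Lame parameters in GPa.
print(backus_average([1.0, 1.0], lam=[10.0, 30.0], mu=[8.0, 20.0]))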
Solutions to wave propagation problems in linear elastic transversely isotropic media can be constructed by superposing solutions for the quasi-P wave, the quasi S-wave, and a S-wave polarized orthogonal to the quasi S-wave.
However, the equations for the angular variation of velocity are algebraically complex, and the plane-wave velocities are functions of the propagation angle θ {\displaystyle \theta } . [ 6 ] The direction-dependent wave speeds for elastic waves through the material can be found by using the Christoffel equation and are given by [ 7 ]
where θ {\displaystyle \theta } is the angle between the axis of symmetry and the wave propagation direction, ρ {\displaystyle \rho } is mass density and the C i j {\displaystyle C_{ij}} are elements of the elastic stiffness matrix . The Thomsen parameters are used to simplify these expressions and make them easier to understand.
Thomsen parameters [ 8 ] are dimensionless combinations of elastic moduli that characterize transversely isotropic materials, which are encountered, for example, in geophysics . In terms of the components of the elastic stiffness matrix , these parameters are defined as:
where index 3 indicates the axis of symmetry ( e 3 {\displaystyle \mathbf {e} _{3}} ) . These parameters, in conjunction with the associated P wave and S wave velocities, can be used to characterize wave propagation through weakly anisotropic, layered media. Empirically, the Thomsen parameters for most layered rock formations are much lower than 1.
The name refers to Leon Thomsen, professor of geophysics at the University of Houston , who proposed these parameters in his 1986 paper "Weak Elastic Anisotropy".
In geophysics the anisotropy in elastic properties is usually weak, in which case δ , γ , ϵ ≪ 1 {\displaystyle \delta ,\gamma ,\epsilon \ll 1} . When the exact expressions for the wave velocities above are linearized in these small quantities, they simplify to
where
are the P and S wave velocities in the direction of the axis of symmetry ( e 3 {\displaystyle \mathbf {e} _{3}} ) (in geophysics, this is usually, but not always, the vertical direction). Note that δ {\displaystyle \delta } may be further linearized, but this does not lead to further simplification.
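A small Python sketch of these relations follows; the parameter definitions and linearized velocities are the ones usually attributed to Thomsen (1986), and the stiffness values, density, and angle in the usage line are illustrative assumptions, not data from the text. Stiffnesses in Pa and density in kg/m3 give velocities in m/s.

import numpy as np

def thomsen_parameters(C11, C13, C33, C44, C66):
    # Dimensionless anisotropy parameters of a VTI medium (x3 symmetry axis).
    epsilon = (C11 - C33) / (2.0 * C33)
    gamma   = (C66 - C44) / (2.0 * C44)
    delta   = ((C13 + C44) ** 2 - (C33 - C44) ** 2) / (2.0 * C33 * (C33 - C44))
    return epsilon, delta, gamma

def weak_anisotropy_velocities(theta, C33, C44, rho, epsilon, delta, gamma):
    # Linearized phase velocities for weak anisotropy; theta is measured
    # from the symmetry axis.
    vp0 = np.sqrt(C33 / rho)      # P velocity along the symmetry axis
    vs0 = np.sqrt(C44 / rho)      # S velocity along the symmetry axis
    s, c = np.sin(theta), np.cos(theta)
    vp  = vp0 * (1.0 + delta * s**2 * c**2 + epsilon * s**4)
    vsv = vs0 * (1.0 + (vp0 / vs0)**2 * (epsilon - delta) * s**2 * c**2)
    vsh = vs0 * (1.0 + gamma * s**2)
    return vp, vsv, vsh

eps, dlt, gam = thomsen_parameters(C11=45e9, C13=12e9, C33=38e9, C44=14e9, C66=18e9)
print(weak_anisotropy_velocities(np.radians(30.0), 38e9, 14e9, 2500.0, eps, dlt, gam))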
The approximate expressions for the wave velocities are simple enough to be physically interpreted, and sufficiently accurate for most geophysical applications. These expressions are also useful in some contexts where the anisotropy is not weak. | https://en.wikipedia.org/wiki/Thomsen_parameters |
In thermochemistry , the Thomsen–Berthelot principle is a hypothesis in the history of chemistry which argued that all chemical changes are accompanied by the production of heat and that processes which occur will be ones in which the most heat is produced. [ 1 ] This principle was formulated in slightly different versions by the Danish chemist Julius Thomsen in 1854 and by the French chemist Marcellin Berthelot in 1864. This early postulate in classical thermochemistry became the controversial foundation of a research program that would last three decades.
This principle came to be associated with what was called the thermal theory of affinity , which postulated that the heat evolved in a chemical reaction was the true measure of its affinity .
The experimental objections to the Thomsen–Berthelot principle include incomplete dissociation, reversibility, and spontaneous endothermic processes. [ 2 ] Such cases were dismissed by orthodox thermochemists as outliers not covered by the principle, or were reconciled with it through somewhat contrived justifications that were later disproved. [ 2 ] In 1873, Thomsen acknowledged that his theory might not have universal or definitive credibility. [ 3 ] Later, within the newly created framework of chemical thermodynamics , the principle was shown to be valid only as an idealization under extreme conditions (i.e., at absolute zero ). [ 2 ] Thomsen openly admitted that his initial understanding was merely a close approximation of reality, emphasizing that while chemical reactions typically release heat, this heat is not always a trustworthy indicator of the strength of the bonds formed. [ 4 ] Berthelot , on the other hand, was more resistant and continued to assert the validity of the principle until 1894. [ 5 ] In 1882 the German scientist Hermann von Helmholtz proved that affinity is given not by the heat evolved in a chemical reaction but rather by the maximum work, or free energy , produced when the reaction is carried out reversibly .
Thomsen–Friedenreich antigen (Galβ1-3GalNAcα1-Ser/Thr) is a disaccharide that serves as a core 1 structure in O -linked glycosylation . [ 1 ] First described by Thomsen as a red blood cell antigen, it was later determined to be an oncofetal antigen . [ 2 ] It is present in the body as a part of membrane transport proteins where it is normally masked from the immune system . [ 3 ] It is commonly unmasked in cancer cells , being expressed in up to 90% of carcinomas , which makes it a potential target for immunotherapy . [ 2 ]
| https://en.wikipedia.org/wiki/Thomsen–Friedenreich_antigen
The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of N electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law . The physicist J. J. Thomson posed the problem in 1904 [ 1 ] after proposing an atomic model , later called the plum pudding model , based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms.
Related problems include the study of the geometry of the minimum energy configuration and the study of the large N behavior of the minimum energy.
The electrostatic interaction energy occurring between each pair of electrons of equal charges ( e i = e j = e {\displaystyle e_{i}=e_{j}=e} , with e {\displaystyle e} the elementary charge of an electron) is given by Coulomb's law ,
where ϵ 0 {\displaystyle \epsilon _{0}} is the electric constant and r i j = | r i − r j | {\displaystyle r_{ij}=|\mathbf {r} _{i}-\mathbf {r} _{j}|} is the distance between each pair of electrons located at points on the sphere defined by vectors r i {\displaystyle \mathbf {r} _{i}} and r j {\displaystyle \mathbf {r} _{j}} , respectively.
Simplified units of e = 1 {\displaystyle e=1} and k e = 1 / 4 π ϵ 0 = 1 {\displaystyle k_{e}=1/4\pi \epsilon _{0}=1} (the Coulomb constant ) are used without loss of generality. Then,
The total electrostatic potential energy of each N -electron configuration may then be expressed as the sum of all pair-wise interaction energies
The global minimization of U ( N ) {\displaystyle U(N)} over all possible configurations of N distinct points is typically found by numerical minimization algorithms.
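A minimal Python sketch of such a numerical search is given below; it uses a general-purpose local optimizer from a random starting configuration, so it is only guaranteed to find a local minimum (the solver choice and the parameterization by spherical angles are assumptions for the example).

import numpy as np
from scipy.optimize import minimize

def thomson_energy(x, n):
    # U(N): sum of 1/r_ij over all pairs of points on the unit sphere;
    # x packs the spherical angles (theta_1..theta_n, phi_1..phi_n).
    theta, phi = x[:n], x[n:]
    pts = np.column_stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    i, j = np.triu_indices(n, k=1)
    r = np.linalg.norm(pts[i] - pts[j], axis=1)
    return np.sum(1.0 / r)

def minimize_thomson(n, seed=0):
    rng = np.random.default_rng(seed)
    x0 = np.concatenate([np.arccos(rng.uniform(-1.0, 1.0, n)),   # random start
                         rng.uniform(0.0, 2.0 * np.pi, n)])
    res = minimize(thomson_energy, x0, args=(n,), method="L-BFGS-B")
    return res.fun

print(minimize_thomson(2))   # ~0.5   (antipodal points)
print(minimize_thomson(4))   # ~3.674 (regular tetrahedron)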
Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere". [ 2 ] The main difference is that in Smale's problem the function to minimise is not the electrostatic potential 1 r i j {\displaystyle 1 \over r_{ij}} but a logarithmic potential given by − log r i j . {\displaystyle -\log r_{ij}.} A second difference is that Smale's question is about the asymptotic behaviour of the total potential when the number N of points goes to infinity, not for concrete values of N .
The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, r i j = 2 r = 2 {\displaystyle r_{ij}=2r=2} , giving U ( 2 ) = 1 / 2 in the simplified units above.
Mathematically exact minimum energy configurations have been rigorously identified in only a handful of cases.
Geometric solutions of the Thomson problem for N = 4, 6, and 12 electrons are Platonic solids whose faces are all congruent equilateral triangles. Numerical solutions for N = 8 and 20 are not the regular convex polyhedral configurations of the remaining two Platonic solids, the cube and dodecahedron respectively. [ 7 ]
One can also ask for ground states of particles interacting with arbitrary potentials.
To be mathematically precise, let f be a decreasing real-valued function, and define the energy functional
∑ i < j f ( | x i − x j | ) . {\displaystyle \sum _{i<j}f(|x_{i}-x_{j}|).}
Traditionally, one considers f ( x ) = x − α {\displaystyle f(x)=x^{-\alpha }} also known as Riesz α {\displaystyle \alpha } -kernels. For integrable Riesz kernels see the 1972 work of Landkof. [ 8 ] For non-integrable Riesz kernels, the poppy-seed bagel theorem holds, see the 2004 work of Hardin and Saff. [ 9 ] Notable cases include: [ 10 ]
One may also consider configurations of N points on a sphere of higher dimension . See spherical design .
Several algorithms have been applied to this problem. The focus since the millennium has been on local optimization methods applied to the energy function, although random walks have made their appearance: [ 10 ]
While the objective is to minimize the global electrostatic potential energy of each N -electron case, several algorithmic starting cases are of interest.
The energy of a continuous spherical shell of charge distributed across its surface is given by
and is, in general, greater than the energy of every Thomson problem solution. Note: Here N is used as a continuous variable that represents the infinitely divisible charge, Q , distributed across the spherical shell. For example, a spherical shell of N = 1 {\displaystyle N=1} represents the uniform distribution of a single electron's charge, − e {\displaystyle -e} , across the entire shell.
The expected global energy of a system of electrons distributed in a purely random manner across the surface of the sphere is given by
and is, in general, greater than the energy of every Thomson problem solution.
Here, N is a discrete variable that counts the number of electrons in the system. As well, U rand ( N ) < U shell ( N ) {\displaystyle U_{\text{rand}}(N)<U_{\text{shell}}(N)} .
For every N th solution of the Thomson problem there is an ( N + 1 ) {\displaystyle (N+1)} th configuration that includes an electron at the origin of the sphere whose energy is simply the addition of N to the energy of the N th solution. That is, [ 11 ]
Thus, if U Thom ( N ) {\displaystyle U_{\text{Thom}}(N)} is known exactly, then U 0 ( N + 1 ) {\displaystyle U_{0}(N+1)} is known exactly.
In general, U 0 ( N + 1 ) {\displaystyle U_{0}(N+1)} is greater than U Thom ( N + 1 ) {\displaystyle U_{\text{Thom}}(N+1)} , but is remarkably closer to each ( N + 1 ) {\displaystyle (N+1)} th Thomson solution than U shell ( N + 1 ) {\displaystyle U_{\text{shell}}(N+1)} and U rand ( N + 1 ) {\displaystyle U_{\text{rand}}(N+1)} . Therefore, the charge-centered distribution represents a smaller "energy gap" to cross to arrive at a solution of each Thomson problem than algorithms that begin with the other two charge configurations.
The Thomson problem is a natural consequence of J. J. Thomson's plum pudding model in the absence of its uniform positive background charge. [ 12 ]
"No fact discovered about the atom can be trivial, nor fail to accelerate the progress of physical science, for the greater part of natural philosophy is the outcome of the structure and mechanism of the atom."
Though experimental evidence led to the abandonment of Thomson's plum pudding model as a complete atomic model, irregularities observed in numerical energy solutions of the Thomson problem have been found to correspond with electron shell-filling in naturally occurring atoms throughout the periodic table of elements. [ 14 ]
The Thomson problem also plays a role in the study of other physical models including multi-electron bubbles and the surface ordering of liquid metal drops confined in Paul traps .
The generalized Thomson problem arises, for example, in determining arrangements of protein subunits that comprise the shells of spherical viruses . The "particles" in this application are clusters of protein subunits arranged on a shell. Other realizations include regular arrangements of colloid particles in colloidosomes , proposed for encapsulation of active ingredients such as drugs, nutrients or living cells, fullerene patterns of carbon atoms, and VSEPR theory . An example with long-range logarithmic interactions is provided by Abrikosov vortices that form at low temperatures in a superconducting metal shell with a large monopole at its center.
In the following table [ citation needed ] N {\displaystyle N} is the number of points (charges) in a configuration, U Thom {\displaystyle U_{\textrm {Thom}}} is the energy, the symmetry type is given in Schönflies notation (see Point groups in three dimensions ), and r i {\displaystyle r_{i}} are the positions of the charges. Most symmetry types require the vector sum of the positions (and thus the electric dipole moment ) to be zero.
It is customary to also consider the polyhedron formed by the convex hull of the points. Thus, v i {\displaystyle v_{i}} is the number of vertices where the given number of edges meet, e {\displaystyle e} is the total number of edges, f 3 {\displaystyle f_{3}} is the number of triangular faces, f 4 {\displaystyle f_{4}} is the number of quadrilateral faces, and θ 1 {\displaystyle \theta _{1}} is the smallest angle subtended by vectors associated with the nearest charge pair. Note that the edge lengths are generally not equal. Thus, except in the cases N = 2, 3, 4, 6, 12, and the geodesic polyhedra , the convex hull is only topologically equivalent to the figure listed in the last column. [ 15 ]
According to a conjecture, if P {\displaystyle P} is the polyhedron formed by the convex hull of the solution configuration of the Thomson problem for m {\displaystyle m} electrons and q {\displaystyle q} is the number of quadrilateral faces of P {\displaystyle P} , then P {\displaystyle P} has f ( m ) = δ 0 , m − 2 + 3 ( m − 2 ) − q {\displaystyle f(m)=\delta _{0,m-2}+3(m-2)-q} edges. [ 16 ] [ clarification needed ] | https://en.wikipedia.org/wiki/Thomson_problem
Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle , as described by classical electromagnetism . It is the low-energy limit of Compton scattering : the particle's kinetic energy and photon frequency do not change as a result of the scattering. [ 1 ] This limit is valid as long as the photon energy is much smaller than the mass energy of the particle: ν ≪ m c 2 / h {\displaystyle \nu \ll mc^{2}/h} , or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays). [ 2 ]
Thomson scattering describes the classical limit of electromagnetic radiation scattering from a free particle. An incident plane wave accelerates a charged particle which consequently emits radiation of the same frequency. The net effect is to scatter the incident radiation. [ 3 ] : 679
Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson . As long as the motion of the particle is non- relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be due to the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. [ 2 ] : 15 The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation . The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized.
In the diagram, everything happens in the plane of the diagram. Electric fields of the incoming and outgoing wave can be divided up into perpendicular components. Those perpendicular to the plane are "tangential" and are not affected. Those components lying in the plane are referred to as "radial". The incoming and outgoing wave directions are also in the plane, and perpendicular to the electric components, as usual. (It is difficult to make these terms seem natural, but it is standard terminology.)
It can be shown that the amplitude of the outgoing wave will be proportional to the cosine of χ {\displaystyle \chi } , the angle between the incident and scattered outgoing waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos 2 ( χ {\displaystyle \chi } ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way.
The scattering is best described by an emission coefficient which is defined as ε where ε dt dV d Ω dλ is the energy scattered by a volume element d V {\displaystyle dV} in time dt into solid angle d Ω between wavelengths λ and λ + dλ . From the point of view of an observer, there are two emission coefficients, ε r corresponding to radially polarized light and ε t corresponding to tangentially polarized light. For unpolarized incident light, these are given by: ε t = 3 16 π σ t I n ε r = 3 16 π σ t I n cos 2 χ {\displaystyle {\begin{aligned}\varepsilon _{t}&={\frac {3}{16\pi }}\sigma _{t}In\\[1ex]\varepsilon _{r}&={\frac {3}{16\pi }}\sigma _{t}In\cos ^{2}\chi \end{aligned}}}
where n {\displaystyle n} is the density of charged particles at the scattering point, I {\displaystyle I} is incident flux (i.e. energy/time/area/wavelength), χ {\displaystyle \chi } is the angle between the incident and scattered photons (see figure above) and σ t {\displaystyle \sigma _{t}} is the Thomson cross section for the charged particle, defined below. The total energy radiated by a volume element d V {\displaystyle dV} in time dt between wavelengths λ and λ + dλ is found by integrating the sum of the emission coefficients over all directions (solid angle): ∫ ε d Ω = ∫ 0 2 π d φ ∫ 0 π d χ ( ε t + ε r ) sin χ = I 3 σ t 16 π n 2 π ( 2 + 2 / 3 ) = σ t I n . {\displaystyle \int \varepsilon \,d\Omega =\int _{0}^{2\pi }d\varphi \int _{0}^{\pi }d\chi (\varepsilon _{t}+\varepsilon _{r})\sin \chi =I{\frac {3\sigma _{t}}{16\pi }}n2\pi (2+2/3)=\sigma _{t}In.}
The Thomson differential cross section, related to the sum of the emissivity coefficients, is given by d σ t d Ω = ( q 2 4 π ε 0 m c 2 ) 2 1 + cos 2 χ 2 {\displaystyle {\frac {d\sigma _{t}}{d\Omega }}=\left({\frac {q^{2}}{4\pi \varepsilon _{0}mc^{2}}}\right)^{2}{\frac {1+\cos ^{2}\chi }{2}}} expressed in SI units; q is the charge per particle, m the mass of particle, and ε 0 {\displaystyle \varepsilon _{0}} a constant, the permittivity of free space. (To obtain an expression in cgs units , drop the factor of 4 π ε 0 .) Integrating over the solid angle, we obtain the Thomson cross section σ t = 8 π 3 ( q 2 4 π ε 0 m c 2 ) 2 {\displaystyle \sigma _{t}={\frac {8\pi }{3}}\left({\frac {q^{2}}{4\pi \varepsilon _{0}mc^{2}}}\right)^{2}} in SI units.
The important feature is that the cross section is independent of light frequency. The cross section is proportional by a simple numerical factor to the square of the classical radius of a point particle of mass m and charge q , namely [ 2 ] : 17 σ t = 8 π 3 r e 2 {\displaystyle \sigma _{t}={\frac {8\pi }{3}}r_{e}^{2}}
Alternatively, this can be expressed in terms of λ c {\displaystyle \lambda _{c}} , the Compton wavelength , and the fine structure constant : σ t = 8 π 3 ( α λ c 2 π ) 2 {\displaystyle \sigma _{t}={\frac {8\pi }{3}}\left({\frac {\alpha \lambda _{c}}{2\pi }}\right)^{2}}
For an electron, the Thomson cross-section is numerically given by: [ 4 ] σ t = 8 π 3 ( α ℏ c m c 2 ) 2 = 6.6524587321 ( 60 ) × 10 − 29 m 2 ≈ 66.5 fm 2 = 0.665 b {\displaystyle \sigma _{t}={\frac {8\pi }{3}}\left({\frac {\alpha \hbar c}{mc^{2}}}\right)^{2}=6.6524587321(60)\times 10^{-29}{\text{ m}}^{2}\approx 66.5{\text{ fm}}^{2}=0.665{\text{ b}}}
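The same number can be reproduced from tabulated constants; the short Python sketch below (using SciPy's CODATA table, an illustrative tooling choice) evaluates σ t = (8π/3) r e 2 for the electron.

import math
from scipy import constants

r_e = constants.physical_constants["classical electron radius"][0]  # metres
sigma_t = 8.0 * math.pi / 3.0 * r_e ** 2

print(f"Thomson cross section: {sigma_t:.6e} m^2")    # ~6.6525e-29 m^2
print(f"                     = {sigma_t / 1e-28:.3f} barn")   # ~0.665 b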
The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002.
The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites.
In tokamaks , corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator uses Nd:YAG lasers to emit multiple pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events. [ 5 ]
In the Sunyaev–Zeldovich effect , where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron. [ 6 ]
Models for X-ray crystallography are based on Thomson scattering. | https://en.wikipedia.org/wiki/Thomson_scattering |
Thoriated glass is a glass material used in the manufacture of optical systems, specifically photographic lenses . It is useful for this purpose because of its high refractive index . Thoriated glass is radioactive due to the inclusion of thorium dioxide , an oxide of the radioactive element thorium . It has therefore been succeeded as a material of choice by glasses containing lanthanum oxide . Thoriated glass can contain up to 30% thorium by weight. [ 2 ] Over time, the thoriated glass elements in lenses develop a brown tint that reduces transmission and interferes with neutral color reproduction.
Many Kodak, Fuji and Asahi Takumar lenses that were produced prior to the 1970s are radioactive. [ 3 ] [ 4 ]
Over extended time periods, thoriated glass may develop significant discoloration. This is due to induced F-centers forming in the glass as the radioactive decay of the thorium progresses. [ 5 ] The formation of F-centers is due to the ionizing effect of the high energy thorium decay products. This process can potentially be reversed by annealing the glass or exposing it to light . [ 6 ]
| https://en.wikipedia.org/wiki/Thoriated_glass
Thorin (also called thoron or thoronol ; systematically, 2-(3,6-disulfo-2-hydroxy-1-naphthylazo)benzenearsonic acid, disodium salt) is an indicator used in the determination of barium , beryllium , lithium , uranium and thorium compounds. Being a compound of arsenic , it is highly toxic. [ 1 ]
| https://en.wikipedia.org/wiki/Thorin_(chemistry)
Thorium(IV) nitrate is a chemical compound , a salt of thorium and nitric acid with the formula Th(NO 3 ) 4 . A white solid in its anhydrous form, it can form tetra- and penta hydrates . As a salt of thorium it is weakly radioactive .
Thorium(IV) nitrate hydrate can be prepared by the reaction of thorium(IV) hydroxide and nitric acid : Th(OH) 4 + 4 HNO 3 → Th(NO 3 ) 4 + 4 H 2 O
Different hydrates are produced by crystallizing in different conditions. When a solution is very dilute, the nitrate is hydrolysed. Although various hydrates have been reported over the years, and some suppliers even claim to stock them, [1] only the tetrahydrate and pentahydrate actually exist. [ 2 ] What is called a hexahydrate, crystallized from a neutral solution, is probably a basic salt. [ 3 ]
The pentahydrate is the most common form. It is crystallized from dilute nitric acid solution. [ 4 ]
The tetrahydrate, Th(NO 3 ) 4 •4H 2 O is formed by crystallizing from a stronger nitric acid solution. Concentrations of nitric acid from 4 to 59% result in the tetrahydrate forming. [ 2 ] The thorium atom has 12-coordination, with four bidentate nitrate groups and four water molecules attached to each thorium atom. [ 3 ]
To obtain the anhydrous thorium(IV) nitrate, thermal decomposition of Th(NO 3 ) 4 ·2N 2 O 5 is required. The decomposition occurs at 150-160 °C. [ 5 ]
Anhydrous thorium nitrate is a white substance. It is covalently bonded, with a low melting point of 55 °C. [ 2 ]
The pentahydrate Th(NO 3 ) 4 •5H 2 O crystallizes with clear colourless crystals [ 6 ] in the orthorhombic system. The unit cell size is a=11.191 b=22.889 c=10.579 Å. Each thorium atom is connected twice to each of four bidentate nitrate groups, and to three water molecules via their oxygen atoms. In total the thorium is eleven-coordinated. There are also two other water molecules in the crystal structure. The water is hydrogen bonded to other water, or to nitrate groups. [ 7 ] The density is 2.80 g/cm 3 . [ 4 ] Vapour pressure of the pentahydrate at 298K is 0.7 torr , and increases to 1.2 torr at 315K, and at 341K it is up to 10.7 torr. At 298.15K the heat capacity is about 114.92 calK −1 mol −1 . This heat capacity shrinks greatly at cryogenic temperatures. Entropy of formation of thorium nitrate pentahydrate at 298.15K is −547.0 calK −1 mol −1 . The standard Gibbs energy of formation is −556.1 kcalmol −1 . [ 8 ]
Thorium nitrate can dissolve in several different organic solvents [ 7 ] including alcohols, ketones, esters and ethers. [ 3 ] This can be used to separate different metals such as the lanthanides. With ammonium nitrate in the aqueous phase, thorium nitrate prefers the organic liquid, and the lanthanides stay with the water. [ 3 ]
Thorium nitrate dissolved in water lowers its freezing point. The maximum freezing point depression is −37 °C with a concentration of 2.9 mol/kg. [ 9 ]
At 25 °C a saturated solution of thorium nitrate contains 4.013 moles per liter. At this concentration the vapour pressure of water in the solution is 1745.2 pascals, compared to 3167.2 Pa for pure water. [ 10 ]
When thorium nitrate pentahydrate is heated, nitrates with less water are produced, however the compounds also lose some nitrate. At 140 °C a basic nitrate, ThO(NO 3 ) 2 is produced. When strongly heated thorium dioxide is produced. [ 7 ]
A polymeric peroxynitrate is precipitated when hydrogen peroxide combines with thorium nitrate in solution with dilute nitric acid. Its formula is Th 6 (OO) 10 (NO 3 ) 4 •10H 2 O. [ 7 ]
The hydrolysis of thorium nitrate solutions produces basic nitrates Th 2 (OH) 4 (NO 3 ) 4 • x H 2 O and Th 2 (OH) 2 (NO 3 ) 6 •8H 2 O. In crystals of Th 2 (OH) 2 (NO 3 ) 6 •8H 2 O a pair of thorium atoms are connected by two bridging oxygen atoms. Each thorium atom is surrounded by three bidentate nitrate groups and three water molecules, bringing the coordination number to 11. [ 7 ]
When oxalic acid is added to a thorium nitrate solution, insoluble thorium oxalate precipitates. [ 11 ] Other organic acids added to thorium nitrate solution produce precipitates of organic or basic salts; these include citric acid , tartaric acid , adipic acid , malic acid , gluconic acid , phenylacetic acid , and valeric acid . [ 12 ] Other precipitates are also formed from sebacic acid and azelaic acid .
Hexanitratothorates with the generic formula M I 2 Th(NO 3 ) 6 or M II Th(NO 3 ) 6 •8H 2 O are made by mixing other metal nitrates with thorium nitrate in dilute nitric acid solution. M II can be Mg, Mn, Co, Ni, or Zn. M I can be Cs, (NO) + or (NO 2 ) + . [ 7 ] Crystals of the divalent metal thorium hexanitrate octahydrates have a monoclinic form with similar unit cell dimensions: β=97°, a=9.08, b=8.75-8, c=12.61-3. [ 13 ] Pentanitratothorates with the generic formula M I Th(NO 3 ) 5 • x H 2 O are known for M I being Na or K. [ 7 ]
K 3 Th(NO 3 ) 7 and K 3 H 3 Th(NO 3 ) 10 •4H 2 O are also known. [ 3 ]
Thorium nitrate also crystallizes with other ligands and organic solvates including ethylene glycol diethyl ether , tri(n‐butyl)phosphate , butylamine , dimethylamine , and trimethylphosphine oxide . [ 3 ] | https://en.wikipedia.org/wiki/Thorium(IV)_nitrate |
The thorium fuel cycle is a nuclear fuel cycle that uses an isotope of thorium , 232 Th , as the fertile material . In the reactor, 232 Th is transmuted into the fissile artificial uranium isotope 233 U which is the nuclear fuel . Unlike natural uranium , natural thorium contains only trace amounts of fissile material (such as 231 Th ), which are insufficient to initiate a nuclear chain reaction . Additional fissile material or another neutron source is necessary to initiate the fuel cycle. In a thorium-fuelled reactor, 232 Th absorbs neutrons to produce 233 U . This parallels the process in uranium breeder reactors whereby fertile 238 U absorbs neutrons to form fissile 239 Pu . Depending on the design of the reactor and fuel cycle, the generated 233 U either fissions in situ or is chemically separated from the used nuclear fuel and formed into new nuclear fuel.
The thorium fuel cycle has several potential advantages over a uranium fuel cycle , including thorium's greater abundance , superior physical and nuclear properties, reduced plutonium and actinide production, [ 1 ] and better resistance to nuclear weapons proliferation when used in a traditional light water reactor [ 1 ] [ 2 ] though not in a molten salt reactor . [ 3 ] [ 4 ] [ 5 ]
Concerns about the limits of worldwide uranium resources motivated initial interest in the thorium fuel cycle. [ 6 ] It was envisioned that as uranium reserves were depleted, thorium would supplement uranium as a fertile material. However, for most countries uranium was relatively abundant and research in thorium fuel cycles waned. A notable exception was India's three-stage nuclear power programme . [ 7 ] In the twenty-first century thorium's claimed potential for improving proliferation resistance and waste characteristics led to renewed interest in the thorium fuel cycle. [ 8 ] [ 9 ] [ 10 ] While thorium is more abundant in the continental crust than uranium and easily extracted from monazite as a side product of rare earth element mining, it is much less abundant in seawater than uranium. [ 11 ]
At Oak Ridge National Laboratory in the 1960s, the Molten-Salt Reactor Experiment used 233 U as the fissile fuel in an experiment to demonstrate a part of the Molten Salt Breeder Reactor that was designed to operate on the thorium fuel cycle. Molten salt reactor (MSR) experiments assessed thorium's feasibility, using thorium(IV) fluoride dissolved in a molten salt fluid that eliminated the need to fabricate fuel elements. The MSR program was defunded in 1976 after its patron Alvin Weinberg was fired. [ 12 ]
In 1993, Carlo Rubbia proposed the concept of an energy amplifier or "accelerator driven system" (ADS), which he saw as a novel and safe way to produce nuclear energy that exploited existing accelerator technologies. Rubbia's proposal offered the potential to incinerate high-activity nuclear waste and produce energy from natural thorium and depleted uranium . [ 13 ] [ 14 ]
Kirk Sorensen, former NASA scientist and Chief Technologist at Flibe Energy, has been a long-time promoter of thorium fuel cycle and particularly liquid fluoride thorium reactors (LFTRs). He first researched thorium reactors while working at NASA , while evaluating power plant designs suitable for lunar colonies. In 2006 Sorensen started "energyfromthorium.com" to promote and make information available about this technology. [ 15 ]
A 2011 MIT study concluded that although there is little in the way of barriers to a thorium fuel cycle, with current or near term light-water reactor designs there is also little incentive for any significant market penetration to occur. As such they conclude there is little chance of thorium cycles replacing conventional uranium cycles in the current nuclear power market, despite the potential benefits. [ 16 ]
In the thorium cycle, fuel is formed when 232 Th captures a neutron (whether in a fast reactor or thermal reactor ) to become 233 Th . This normally emits an electron and an anti-neutrino ( ν ) by β − decay to become 233 Pa . This then emits another electron and anti-neutrino by a second β − decay to become 233 U , the fuel:
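Written out as a sketch, with the intermediate half-lives rounded (approximately 22 minutes for 233 Th, and about 27 days for 233 Pa as noted later in this article), the breeding chain described above reads:
{\displaystyle {}^{232}{\text{Th}}+n\ \longrightarrow \ {}^{233}{\text{Th}}\ {\xrightarrow {\beta ^{-},\ {\bar {\nu }},\ \approx 22\ {\text{min}}}}\ {}^{233}{\text{Pa}}\ {\xrightarrow {\beta ^{-},\ {\bar {\nu }},\ \approx 27\ {\text{d}}}}\ {}^{233}{\text{U}}}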
Nuclear fission produces radioactive fission products which can have half-lives from days to greater than 200,000 years . According to some toxicity studies, [ 17 ] the thorium cycle can fully recycle actinide wastes and only emit fission product wastes, and after a few hundred years, the waste from a thorium reactor can be less toxic than the uranium ore that would have been used to produce low enriched uranium fuel for a light water reactor of the same power.
Other studies assume some actinide losses and find that actinide wastes dominate thorium cycle waste radioactivity at some future periods. [ 18 ] Some fission products have been proposed for nuclear transmutation , which would further reduce the amount of nuclear waste and the duration during which it would have to be stored (whether in a deep geological repository or elsewhere). However, while the principal feasibility of some of those reactions has been demonstrated at laboratory scale, there is, as of 2024, no large scale deliberate transmutation of fission products anywhere in the world, and the upcoming MYRRHA research project into transmutation is mostly focused on transuranic waste. Furthermore, the cross section of some fission products is relatively low and others - such as caesium - are present as a mixture of stable, short lived and long lived isotopes in nuclear waste, making transmutation dependent on expensive isotope separation .
In a reactor, when a neutron hits a fissile atom (such as certain isotopes of uranium), it either splits the nucleus or is captured and transmutes the atom. In the case of 233 U , the transmutations tend to produce useful nuclear fuels rather than transuranic waste. When 233 U absorbs a neutron, it either fissions or becomes 234 U . The chance of fissioning on absorption of a thermal neutron is about 92%; the capture-to-fission ratio of 233 U , therefore, is about 1:12 – which is better than the corresponding capture vs. fission ratios of 235 U (about 1:6), or 239 Pu or 241 Pu (both about 1:3). [ 6 ] [ 19 ] The result is less transuranic waste than in a reactor using the uranium-plutonium fuel cycle.
234 U , like most actinides with an even number of neutrons, is not fissile, but neutron capture produces fissile 235 U . If the fissile isotope fails to fission on neutron capture, it produces 236 U , 237 Np , 238 Pu , and eventually fissile 239 Pu and heavier isotopes of plutonium . The 237 Np can be removed and stored as waste or retained and transmuted to plutonium, where more of it fissions, while the remainder becomes 242 Pu , then americium and curium , which in turn can be removed as waste or returned to reactors for further transmutation and fission.
However, the 231 Pa (with a half-life of 3.27 × 10 4 years ) formed via ( n ,2 n ) reactions with 232 Th (yielding 231 Th that decays to 231 Pa ), while not a transuranic waste, is a major contributor to the long-term radiotoxicity of spent nuclear fuel. While 231 Pa can in principle be converted back to 232 Th by neutron absorption , its neutron absorption cross section is relatively low, making this rather difficult and possibly uneconomic.
232 U is also formed in this process, via ( n ,2 n ) reactions between fast neutrons and 233 U , 233 Pa , and 232 Th :
Unlike most even numbered heavy isotopes, 232 U is also a fissile fuel fissioning just over half the time when it absorbs a thermal neutron. [ 20 ] 232 U has a relatively short half-life ( 68.9 years ), and some decay products emit high energy gamma radiation , such as 220 Rn , 212 Bi and particularly 208 Tl . The full decay chain , along with half-lives and relevant gamma energies, is:
232 U decays to 228 Th where it joins the decay chain of 232 Th
Thorium-cycle fuels produce hard gamma emissions , which damage electronics, limiting their use in bombs. 232 U cannot be chemically separated from 233 U from used nuclear fuel ; however, chemical separation of thorium from uranium removes the decay product 228 Th and the radiation from the rest of the decay chain, which gradually build up as 228 Th reaccumulates. The contamination could also be avoided by using a molten-salt breeder reactor and separating the 233 Pa before it decays into 233 U . [ 3 ] The hard gamma emissions also create a radiological hazard which requires remote handling during reprocessing.
As a fertile material thorium is similar to 238 U , the major part of natural and depleted uranium. The thermal neutron absorption cross section (σ a ) and resonance integral (average of neutron cross sections over intermediate neutron energies) for 232 Th are about three and one third times those of the respective values for 238 U .
The primary physical advantage of thorium fuel is that it uniquely makes possible a breeder reactor that runs with slow neutrons , otherwise known as a thermal breeder reactor . [ 6 ] These reactors are often considered simpler than the more traditional fast-neutron breeders. Although the thermal neutron fission cross section (σ f ) of the resulting 233 U is comparable to 235 U and 239 Pu , it has a much lower capture cross section (σ γ ) than the latter two fissile isotopes, providing fewer non-fissile neutron absorptions and improved neutron economy . The ratio of neutrons released per neutron absorbed (η) in 233 U is greater than two over a wide range of energies, including the thermal spectrum. A breeding reactor in the uranium–plutonium cycle needs to use fast neutrons, because in the thermal spectrum one neutron absorbed by 239 Pu on average leads to less than two neutrons.
Thorium is estimated to be about three to four times more abundant than uranium in Earth's crust, [ 21 ] although present knowledge of reserves is limited. Current demand for thorium has been satisfied as a by-product of rare-earth extraction from monazite sands. Notably, there is very little thorium dissolved in seawater, so seawater extraction is not viable, as it is with uranium. Using breeder reactors, known thorium and uranium resources can both generate world-scale energy for thousands of years.
Thorium-based fuels also display favorable physical and chemical properties that improve reactor and repository performance. Compared to the predominant reactor fuel, uranium dioxide ( UO 2 ), thorium dioxide ( ThO 2 ) has a higher melting point , higher thermal conductivity , and lower coefficient of thermal expansion . Thorium dioxide also exhibits greater chemical stability and, unlike uranium dioxide, does not further oxidize . [ 6 ]
Because the 233 U produced in thorium fuels is significantly contaminated with 232 U in proposed power reactor designs, thorium-based used nuclear fuel possesses inherent proliferation resistance. 232 U cannot be chemically separated from 233 U and has several decay products that emit high-energy gamma radiation . These high-energy photons are a radiological hazard that necessitate the use of remote handling of separated uranium and aid in the passive detection of such materials.
The long-term (on the order of roughly 10 3 to 10 6 years ) radiological hazard of conventional uranium-based used nuclear fuel is dominated by plutonium and other minor actinides , after which long-lived fission products become significant contributors again. A single neutron capture in 238 U is sufficient to produce transuranic elements , whereas five captures are generally necessary to do so from 232 Th . 98–99% of thorium-cycle fuel nuclei would fission at either 233 U or 235 U , so fewer long-lived transuranics are produced. Because of this, thorium is a potentially attractive alternative to uranium in mixed oxide (MOX) fuels to minimize the generation of transuranics and maximize the destruction of plutonium. [ 22 ]
There are several challenges to the application of thorium as a nuclear fuel, particularly for solid fuel reactors:
In contrast to uranium, naturally occurring thorium is effectively mononuclidic and contains no fissile isotopes; fissile material, generally 233 U , 235 U or plutonium, must be added to achieve criticality . This, along with the high sintering temperature necessary to make thorium-dioxide fuel, complicates fuel fabrication. Oak Ridge National Laboratory experimented with thorium tetrafluoride as fuel in a molten salt reactor from 1964 to 1969, which was expected to be easier to process and separate from contaminants that slow or stop the chain reaction.
In an open fuel cycle (i.e. utilizing 233 U in situ), higher burnup is necessary to achieve a favorable neutron economy . Although thorium dioxide performed well at burnups of 170,000 MWd/t and 150,000 MWd/t at Fort St. Vrain Generating Station and AVR respectively, [ 6 ] challenges complicate achieving this in light water reactors (LWR), which compose the vast majority of existing power reactors.
In a once-through thorium fuel cycle, thorium-based fuels produce far fewer long-lived transuranics than uranium-based fuels; however, some long-lived actinide products constitute a long-term radiological impact, especially 231 Pa and 233 U . [ 17 ] In a closed cycle, 233 U and 231 Pa can be reprocessed. 231 Pa is also considered an excellent burnable poison absorber in light water reactors. [ 23 ]
Another challenge associated with the thorium fuel cycle is the comparatively long interval over which 232 Th breeds to 233 U . The half-life of 233 Pa is about 27 days, which is an order of magnitude longer than the half-life of 239 Np . As a result, substantial 233 Pa develops in thorium-based fuels. 233 Pa is a significant neutron absorber and, although it eventually breeds into fissile 235 U , this requires two more neutron absorptions, which degrades neutron economy and increases the likelihood of transuranic production.
Alternatively, if solid thorium is used in a closed fuel cycle in which 233 U is recycled , remote handling is necessary for fuel fabrication because of the high radiation levels resulting from the decay products of 232 U . This is also true of recycled thorium because of the presence of 228 Th , which is part of the 232 U decay sequence. Further, unlike proven uranium fuel recycling technology (e.g. PUREX ), recycling technology for thorium (e.g. THOREX) is only under development.
Although the presence of 232 U complicates matters, there are public documents showing that 233 U has been used once in a nuclear weapon test. The United States tested a composite 233 U -plutonium bomb core in the MET (Military Effects Test) blast during Operation Teapot in 1955, though with much lower yield than expected. [ 24 ]
Advocates for liquid core and molten salt reactors such as LFTRs claim that these technologies negate thorium's disadvantages present in solid fuelled reactors. As only two liquid-core fluoride salt reactors have been built (the ORNL ARE and MSRE ) and neither have used thorium, it is hard to validate the exact benefits. [ 6 ]
Thorium fuels have fueled several different reactor types, including light water reactors , heavy water reactors , high temperature gas reactors , sodium-cooled fast reactors , and molten salt reactors . [ 25 ]
From IAEA TECDOC-1450 "Thorium Fuel Cycle – Potential Benefits and Challenges", Table 1: Thorium utilization in different experimental and power reactors. [ 6 ] Additionally from Energy Information Administration, "Spent Nuclear Fuel Discharges from U. S. Reactors", Table B4: Dresden 1 Assembly Class. [ 26 ]
| https://en.wikipedia.org/wiki/Thorium_fuel_cycle
The Thorne miniature rooms are a set of approximately 100 miniature models of rooms created between 1932 and 1940 under the direction of Narcissa Niblack Thorne . Ninety-nine of the rooms are believed still to be in existence; the majority (68) are on display at the Art Institute of Chicago , while 20 are at the Phoenix Art Museum , nine at the Knoxville Museum of Art , and one each at The Children's Museum of Indianapolis and the Kaye Miniature Museum in Los Angeles . The Art Institute's rooms document European and American interiors from the late 13th century to the 1930s and the 17th century to the 1930s, respectively. Constructed on a 1:12 scale , the rooms are largely made of the same materials as full-sized rooms, and some even include original works of art.
The model rooms were the brainchild of Narcissa Niblack Thorne , [ 1 ] [ 2 ] who was born in 1882 in Vincennes, Indiana . [ 3 ] [ 4 ] During her childhood, her uncle Albert Parker Niblack , a United States Navy vice admiral, sent her many antique dollhouse miniatures from around the world. [ 3 ] The idea for the model rooms also developed from Thorne's collection of miniature furniture and household accessories, which she began assembling around 1900, and her desire to house and display these items. [ 1 ] A further inspiration may have been a miniature shadow box that she encountered at a bazaar in Istanbul during the 1920s. [ 3 ]
When she was 19, Thorne married Montgomery Ward department store heir James Ward Thorne , whose fortune would help finance her hobby. [ 3 ] [ 4 ] They lived together in Lake Forest, Illinois . [ 4 ] By 1930, Thorne was researching period architecture, interior design, and decorative arts to create sketches and blueprints for miniature rooms to house her dollhouse miniatures and other miniature furniture. [ 3 ]
During the Great Depression , Thorne had access to some of the top architects, interior designers, and craftsmen in the United States, who between 1932 and 1940 created approximately 100 "period rooms" under her direction. [ 1 ] [ 2 ] [ 3 ] In total, 99 of the rooms are believed still to be in existence. [ 1 ] [ 5 ] The original 30 were placed on display at the 1933 Century of Progress Exposition in Chicago , [ 1 ] [ 4 ] and in 1940 they were the subject of a LIFE magazine article. [ 1 ] Twenty of these original rooms were donated to the Phoenix Art Museum , where they remain on display. [ 1 ]
The majority of the rooms, 68 in all, are on display at the Art Institute of Chicago , where they document European and American interiors from the late 13th century to the 1930s and the 17th century to the 1930s, respectively. [ 1 ] [ 2 ] [ 6 ] The Art Institute's rooms were created by Thorne and her craftsmen between 1932 and 1940 at her studio on Oak Street on the city's Near North Side ; [ 4 ] [ 5 ] the 31 European rooms were finished by 1937, while the 37 American rooms were completed by 1940. [ 4 ] The rooms were gifted to the museum in 1941, and put on permanent display in 1954. [ 4 ] [ 5 ] The Art Institute of Chicago's rooms are among the museum's most popular permanent collections. [ 5 ]
The Knoxville Museum of Art is home to 9 of the remaining rooms, while The Children's Museum of Indianapolis and the Kaye Miniature Museum in Los Angeles have one each. [ 1 ]
Some of the Thorne rooms are miniature replicas of actual rooms. [ 1 ] They were constructed on a 1:12 scale , [ 1 ] or in other words a scale of 1 inch (2.5 cm) to 1 foot (0.30 m). [ 2 ] The rooms are largely made of the same materials as full-sized rooms; for example, they include bowls made of silver, chandeliers made of crystal, and even original works of art, both miniature paintings (by Fernand Léger , Hildreth Meière , Amédée Ozenfant , and Léopold Survage ) and sculptures (by John Storrs ). [ 5 ]
In 2010, the Art Institute of Chicago began decorating a few of its rooms for Christmas , Hanukkah , and New Year's , using period-appropriate decorations for each of the involved rooms. [ 4 ] [ 6 ] Lindsey Mican Morgan, who is responsible for the rooms at the Art Institute, began the practice of decorating the rooms for the holidays after discovering Thorne's great affection for Christmas while researching. [ 4 ] | https://en.wikipedia.org/wiki/Thorne_miniature_rooms |
In plant morphology , thorns , spines , and prickles , and in general spinose structures (sometimes called spinose teeth or spinose apical processes ), are hard, rigid extensions or modifications of leaves , roots , stems , or buds with sharp, stiff ends, and generally serve the same function: physically defending plants against herbivory .
In common language, the terms are used more or less interchangeably, but in botanical terms, thorns are derived from shoots (so that they may or may not be branched, they may or may not have leaves, and they may or may not arise from a bud), [ 1 ] [ 2 ] [ 3 ] [ 4 ] spines are derived from leaves (either the entire leaf or some part of the leaf that has vascular bundles inside, like the petiole or a stipule ), [ 1 ] [ 2 ] [ 3 ] [ 4 ] and prickles are derived from epidermis tissue (so that they can be found anywhere on the plant and do not have vascular bundles inside [ 4 ] ). [ 1 ] [ 2 ] [ 3 ]
Leaf margins may also have teeth, and if those teeth are sharp, they are called spinose teeth on a spinose leaf margin [ 1 ] [ 2 ] (some authors consider them a kind of spine [ 2 ] ). On a leaf apex, if there is an apical process (generally an extension of the midvein), and if it is especially sharp, stiff, and spine-like, it may be referred to as spinose or as a pungent apical process [ 1 ] (again, some authors call them a kind of spine [ 2 ] ). When the leaf epidermis is covered with very long, stiff trichomes (more correctly called bristles in this case; [ 1 ] for some authors a kind of prickle [ 2 ] ), it may be referred to as a hispid vestiture ; [ 1 ] [ 2 ] [ 3 ] if the trichomes are stinging trichomes, it may be called a urent vestiture . [ 1 ]
Spines or spinose structures can also be derived from roots. [ 5 ]
The predominant function of thorns, spines, and prickles is to deter herbivory mechanically. For this reason, they are classified as physical or mechanical defenses, as opposed to chemical defenses.
Not all functions of spines or glochids are limited to defense from physical attacks by herbivores and other animals. In some cases, spines have been shown to shade or insulate the plants that grow them, thereby protecting them from extreme temperatures. For example, saguaro cactus spines shade the apical meristem in summer, and in members of the Opuntioideae , glochids insulate the apical meristem in winter.
Agrawal et al. (2000) found that spines seem to have little effect on specialist pollinators, on which many plants rely in order to reproduce. [ 6 ]
Pointing or spinose processes can broadly be divided by the presence of vascular tissue: thorns and spines are derived from shoots and leaves respectively, and have vascular bundles inside, whereas prickles (like rose prickles) do not have vascular bundles inside, so that they can be removed more easily and cleanly than thorns and spines.
Thorns are modified branches or stems . They may be simple or branched.
Spines are modified leaves , stipules , or parts of leaves, such as extensions of leaf veins. Some authors prefer not to distinguish spines from thorns because, like thorns, and unlike prickles, they commonly contain vascular tissue . [ 7 ]
Spines are variously described as petiolar spines (as in Fouquieria ), leaflet spines (as in Phoenix ), or stipular spines (as in Euphorbia ), all of which are examples of spines developing from a part of a leaf containing the petiole, midrib, or a secondary vein. [ 1 ] The plants of the cactus family are particularly well known for their dense covering of spines. Some cacti also have glochids (or glochidia , singular glochidium) – a particular kind of spine of different origin, which are smaller and deciduous, with numerous retrorse barbs along their length (as found in the areoles of Opuntia ). [ 1 ]
Prickles are comparable to hairs but can be quite coarse (for example, rose prickles). They are extensions of the cortex and epidermis . [ 8 ] [ 9 ] Technically speaking, many plants commonly thought of as having thorns or spines actually have prickles. Roses , for instance, have prickles. [ 7 ] While the positions of thorns and spines are known to be controlled by phyllotaxis , the positioning of prickles appears to be truly random, or at least governed by a phyllotaxis so arcane as to give the appearance of randomness. [ citation needed ] The largest prickles are found on the trunk and major limbs of the Barrigudo ( Chorisia inermis , or Ceiba speciosa ), where they can be two inches (five cm) in length and diameter.
A peer-reviewed study published in the journal Science concluded that plants with these types of prickles share a common gene family. [ 10 ]
Other similar structures are spinose teeth, spinose apical processes, and trichomes. Trichomes , in particular, are distinct from thorns, spines, and prickles in that they are much smaller (often microscopic) outgrowths of epidermal tissue, and they are less rigid and more hair-like in appearance; they typically consist of just a few cells of the outermost layer of epidermis, whereas prickles may include cortex tissue. Trichomes are often effective defenses against small insect herbivores; thorns, spines, and prickles are usually only effective against larger herbivores like birds and mammals.
Spinescent is a term describing plants that bear any sharp structures that deter herbivory. It also can refer to the state of tending to be or become spiny in some sense or degree, as in: "... the division of the African acacias on the basis of spinescent stipules versus non-spinescent stipules..." [ 11 ]
There are also spines derived from roots, like the ones on the trunk of the "Root Spine Palms" ( Cryosophila spp.). The trunk roots of Cryosophila guagara grow downwards to a length of 6–12 cm, then stop growing and transform into a spine. [ 5 ] The anatomy of crown roots on this species (roots among the bases of the living fronds) also alters during their life. [ 5 ] They initially grow upwards and then turn down and finally they, too, become spinous. [ 5 ] Lateral roots on these two types of roots, as well as those on the stilt roots on this species, also become spinous. [ 5 ] Some authors believe that some of these short spiny laterals have a ventilating function so they are 'pneumorhizae'. [ 5 ] Short spiny laterals that may have a ventilating function may also be found on roots of Iriartea exorrhiza . [ 5 ]
There are also spines that function as pneumorhizae on the palm Euterpe oleracea . [ 5 ] In Cryosophila nana (formerly Acanthorhiza aculeata ), there are spiny roots; some authors prefer to term these "root spines" if the length of the root is less than 10x the thickness and "spine roots" if the length is more than 10x the thickness. [ 5 ] Adventitious spiny roots have also been described on the trunks of dicotyledonous trees from tropical Africa (e.g. Euphorbiaceae, as in Macaranga barteri , Bridelia micrantha and B. pubescens ; Ixonanthaceae, Sterculiaceae), and may also be found protecting perennating organs such as tubers and corms (e.g. Dioscorea prehensilis , Dioscoreaceae, and Moraea spp., Iridaceae, respectively). [ 5 ] Short root spines cover the tuberous base of the epiphytic ant-plant Myrmecodia tuberosa (Rubiaceae); these probably give protection to the ants, which inhabit chambers within the tuber, as they wander over the plant's surface (Jackson 1986 [ 5 ] and references therein). In many respects, the pattern of spine formation is similar to that which occurs in the development of thorns from lateral shoots (Jackson 1986 [ 5 ] and references therein).
It has been proposed that thorny structures may have first evolved as a defense mechanism in plants growing in sandy environments that provided inadequate resources for fast regeneration of damage. [ 12 ] [ 13 ]
Spinose structures occur in a wide variety of ecologies, and their morphology also varies greatly. They occur as:
Some thorns are hollow and act as myrmecodomatia ; others (e.g. in Crataegus monogyna ) bear leaves. The thorns of many species are branched (e.g. in Crataegus crus-galli and Carissa macrocarpa ).
Plants bearing thorns, spines, or prickles are often used as a defense against burglary , being strategically planted below windows or around the entire perimeter of a property. [ 17 ] They also have been used to protect crops and livestock against marauding animals. Examples include hawthorn hedges in Europe, agaves or ocotillos in the Americas and in other countries where they have been introduced, Osage orange in the prairie states of the US, and Sansevieria in Africa. [ 18 ] [ page needed ] | https://en.wikipedia.org/wiki/Thorns,_spines,_and_prickles |
The Thorpe reaction is a chemical reaction described as a self-condensation of aliphatic nitriles catalyzed by base to form enamines . [ 1 ] [ 2 ] [ 3 ] The reaction was discovered by Jocelyn Field Thorpe .
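As an illustrative sketch only (the substrate and product shown are the usual textbook example, assumed here rather than taken from the cited sources), the base-mediated self-condensation of acetonitrile can be written as:

```latex
% Illustrative scheme (assumed textbook example, not from the cited sources):
% base-catalysed self-condensation of acetonitrile to the enamine
% 3-aminocrotononitrile.
\[
  2\,\mathrm{CH_3CN}
  \;\xrightarrow{\text{base}}\;
  \mathrm{CH_3{-}C(NH_2){=}CH{-}CN}
\]
```

On this reading, one nitrile is deprotonated at the α-carbon and the resulting carbanion adds to the nitrile carbon of a second molecule; tautomerisation of the resulting imine then gives the enamine.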
The Thorpe–Ziegler reaction (named after Jocelyn Field Thorpe and Karl Ziegler ), or Ziegler method , is the intramolecular modification with a dinitrile as a reactant and a cyclic ketone as the final reaction product after acidic hydrolysis. The reaction is conceptually related to the Dieckmann condensation . [ 3 ] | https://en.wikipedia.org/wiki/Thorpe_reaction |
The Thorpe–Ingold effect , gem-dimethyl effect , or angle compression is an effect observed in chemistry where increasing steric hindrance favours ring closure and intramolecular reactions. The effect was first reported by Beesley, Thorpe , and Ingold in 1915 as part of a study of cyclization reactions . [ 1 ] It has since been generalized to many areas of chemistry. [ 2 ]
The comparative rates of lactone formation (lactonization) of various 2-hydroxybenzenepropionic acids illustrate the effect. The placement of an increasing number of methyl groups accelerates the cyclization process. [ 3 ]
One application of this effect is the addition of a quaternary carbon (e.g., a gem -dimethyl group ) to an alkyl chain to increase the reaction rate and/or equilibrium constant of cyclization reactions. An example of this is an olefin metathesis reaction. [ 4 ] In the field of peptide foldamers , amino acid residues containing quaternary carbons, such as 2-aminoisobutyric acid , are used to promote the formation of certain types of helices. [ 5 ]
One proposed explanation for this effect is that the increased size of the substituents increases the angle between them. As a result, the angle between the other two substituents decreases. By moving them closer together, reactions between them are accelerated. It is thus a kinetic effect.
The effect also has some thermodynamic contribution, as the in silico strain energy decreases on going from cyclobutane to 1-methylcyclobutane and 1,1-dimethylcyclobutane by values ranging from 1.5 kcal/mol [ 7 ] to 8 kcal/mol. [ 6 ] A noteworthy example of the Thorpe–Ingold effect in supramolecular catalysis is given by diphenylmethane derivatives provided with guanidinium groups. [ 8 ] These compounds are active in the cleavage of the RNA model compound HPNP. Substitution of the methylene group of the parent diphenylmethane spacer with cyclohexylidene and adamantylidene moieties enhances catalytic efficiency, with gem -dialkyl effect accelerations of 4.5 and 9.1, respectively.
Thorson's rule (named after Gunnar Thorson by S. A. Mileikovsky in 1971) [ 1 ] is an ecogeographical rule which states that benthic marine invertebrates at low latitudes tend to produce large numbers of eggs developing to pelagic (often planktotrophic [plankton-feeding]) and widely dispersing larvae, whereas at high latitudes such organisms tend to produce fewer and larger lecithotrophic (yolk-feeding) eggs and larger offspring, often by viviparity or ovoviviparity , which are often brooded. [ 2 ]
The rule was originally established for marine bottom invertebrates, but it also applies to a group of parasitic flatworms , monogenean ectoparasites on the gills of marine fish. [ 3 ] Most low-latitude species of Monogenea produce large numbers of ciliated larvae . However, at high latitudes, species of the entirely viviparous family Gyrodactylidae, which produce few nonciliated offspring and are very rare at low latitudes , represent the majority of gill Monogenea , i.e., about 80–90% of all species at high northern latitudes, and about one third of all species in Antarctic and sub-Antarctic waters, against less than 1% in tropical waters. Data compiled by A.V. Gusev in 1978 indicates that Gyrodactylidae may also be more common in cold than tropical freshwater systems, suggesting that Thorson's rule may apply to freshwater invertebrates. [ 4 ]
There are exceptions to the rule, such as ascoglossan snails : tropical ascoglossans have a higher incidence of lecithotrophy and direct development than temperate species. [ 5 ] A study in 2001 indicated that two factors are important for Thorson's rule to be valid for marine gastropods: 1) the habitat must include rocky substrates , because soft-bottom habitats appear to favour non-pelagic development; and 2) a diverse assemblage of taxa need to be compared to avoid the problem of phyletic constraints, which could limit the evolution of different developmental modes. [ 6 ]
The temperature gradient from warm surface waters to the deep sea is similar to that along latitudinal gradients. A gradient as described by Thorson's rule may therefore be expected. However, evidence for such a gradient is ambiguous; [ 1 ] Gyrodactylidae have not yet been found in the deep sea. [ 3 ]
Several explanations of the rule have been given. They include:
Most of these explanations can be excluded for the Monogenea: their larvae are never planktotrophic (eliminating explanations 1 and 2) and are always short-lived (3), and Gyrodactylidae are most common not only close to melting ice but in cold seas generally (5). Explanation 6 is unlikely because small organisms are common in cold seas; Gyrodactylidae are among the smallest Monogenea (7), and Monogenea do not possess calcareous skeletons (8). The conclusion is that the most likely explanation for the Monogenea (and by implication for other groups) is that small larvae cannot locate suitable habitats at low temperatures, where physiological (including sensory) processes are slowed, and/or that low temperatures prevent the production of sufficient numbers of pelagic larvae, which would be necessary to find suitable habitats in the vast oceanic spaces. [ 3 ]
Rapoport's rule states that latitudinal ranges of species are generally smaller at low than at high latitudes. Thorson's rule contradicts this rule, because species disperse more widely at low than at high latitudes, supplementing much evidence against the generality of Rapoport's rule and for the fact that tropical species often have wider geographical ranges than high latitude species. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Thorson's_rule |
Brain-reading or thought identification uses the responses of multiple voxels in the brain, evoked by a stimulus and detected by fMRI, to decode the original stimulus. Advances in research have made this possible by using human neuroimaging to decode a person's conscious experience based on non-invasive measurements of an individual's brain activity. [ 1 ] Brain reading studies differ in the type of decoding (i.e. classification, identification and reconstruction) employed, the target (i.e. decoding visual patterns, auditory patterns, cognitive states ), and the decoding algorithms ( linear classification , nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.) employed.
Identification of complex natural images is possible using voxels from early visual cortex and from anterior visual areas forward of it (visual areas V3A, V3B, V4, and the lateral occipital), together with Bayesian inference . This brain reading approach uses three components: [ 2 ] a structural encoding model that characterizes responses in early visual areas; a semantic encoding model that characterizes responses in anterior visual areas; and a Bayesian prior that describes the distribution of structural and semantic scene statistics . [ 2 ]
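A minimal, hypothetical sketch of this identification scheme is shown below. The encoding models, noise model, prior, and all numbers are invented for illustration and are not the authors' published code or data; the point is only to show how a structural likelihood, a semantic likelihood, and a prior can be combined to pick the most probable candidate image.

```python
# Hypothetical sketch of Bayesian image identification from voxel responses.
# Encoding models and data are simulated; this is not the published method's code.
import numpy as np

rng = np.random.default_rng(0)
n_voxels_early, n_voxels_anterior, n_candidates = 50, 30, 120

# Assumed encoding models: predicted voxel responses for each candidate image.
pred_early = rng.normal(size=(n_candidates, n_voxels_early))        # "structural" model
pred_anterior = rng.normal(size=(n_candidates, n_voxels_anterior))  # "semantic" model
prior = np.full(n_candidates, 1.0 / n_candidates)                   # flat prior over scenes

# Simulated measurement: responses evoked by candidate image 42, plus noise.
true_idx = 42
measured_early = pred_early[true_idx] + rng.normal(scale=0.5, size=n_voxels_early)
measured_anterior = pred_anterior[true_idx] + rng.normal(scale=0.5, size=n_voxels_anterior)

def gaussian_loglik(measured, predicted, sigma=0.5):
    """Log-likelihood of the measured voxels under a Gaussian noise model."""
    return -0.5 * np.sum((measured - predicted) ** 2, axis=1) / sigma ** 2

# Posterior over candidate images combines both encoding models and the prior.
log_posterior = (gaussian_loglik(measured_early, pred_early)
                 + gaussian_loglik(measured_anterior, pred_anterior)
                 + np.log(prior))
print("identified image index:", int(np.argmax(log_posterior)))  # should print 42
```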
Experimentally, the procedure is for subjects to view 1750 black-and-white natural images, which are correlated with the voxel activation they evoke in the subjects' brains. The subjects then view another 120 novel target images, and information from the earlier scans is used to reconstruct them. Natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage. [ 2 ]
In 2008 IBM applied for a patent on how to extract mental images of human faces from the human brain. It uses a feedback loop based on brain measurements of the fusiform gyrus area in the brain which activates proportionate with degree of facial recognition. [ 3 ]
In 2011, a team led by Shinji Nishimoto used only brain recordings to partially reconstruct what volunteers were seeing. The researchers applied a new model of how moving-object information is processed in human brains while volunteers watched clips from several videos. An algorithm searched through thousands of hours of external YouTube video footage (none of the videos were the same as the ones the volunteers watched) to select the clips that were most similar. [ 4 ] [ 5 ] The authors have uploaded demos comparing the watched and the computer-estimated videos. [ 6 ] [ 7 ]
In 2017 a face perception study in monkeys reported the reconstruction of human faces by analyzing electrical activity from 205 neurons. [ 8 ] [ 9 ]
In 2023 image reconstruction was reported utilizing Stable Diffusion on human brain activity obtained via fMRI. [ 10 ] [ 11 ]
In 2024, a study demonstrated that images imagined in the mind, without visual stimulation, can be reconstructed from fMRI brain signals utilizing machine learning and generative AI technology. [ 12 ] [ 13 ] [ 14 ] Another 2024 study reported the reconstruction of images from EEG. [ 15 ]
Brain-reading has been suggested as an alternative to polygraph machines as a form of lie detection . [ 16 ] Another alternative to polygraph machines is blood-oxygen-level-dependent (BOLD) functional MRI technology. This technique involves the interpretation of the local change in the concentration of oxygenated hemoglobin in the brain, although the relationship between this blood flow and neural activity is not yet completely understood. [ 16 ] Another technique to find concealed information is brain fingerprinting , which uses EEG to ascertain if a person has a specific memory or information by identifying P300 event related potentials. [ 17 ]
A number of concerns have been raised about the accuracy and ethical implications of brain-reading for this purpose. Laboratory studies have found rates of accuracy of up to 85%; however, there are concerns about what this means for false positive results: "If the prevalence of "prevaricators" in the group being examined is low, the test will yield far more false-positive than true-positive results; about one person in five will be incorrectly identified by the test." [ 16 ] Ethical problems involved in the use of brain-reading as lie detection include misapplications due to adoption of the technology before its reliability and validity can be properly assessed and due to misunderstanding of the technology, and privacy concerns due to unprecedented access to individual's private thoughts. [ 16 ] However, it has been noted that the use of polygraph lie detection carries similar concerns about the reliability of the results [ 16 ] and violation of privacy. [ 18 ]
Brain-reading has also been proposed as a method of improving human–machine interfaces , by the use of EEG to detect relevant brain states of a human. [ 19 ] In recent years, there has been a rapid increase in patents for technology involved in reading brainwaves, rising from fewer than 400 from 2009–2012 to 1600 in 2014. [ 20 ] These include proposed ways to control video games via brain waves and " neuro-marketing " to determine someone's thoughts about a new product or advertisement. [ citation needed ]
Emotiv Systems , an Australian electronics company, has demonstrated a headset that can be trained to recognize a user's thought patterns for different commands. Tan Le demonstrated the headset's ability to manipulate virtual objects on screen, and discussed various future applications for such brain-computer interface devices , from powering wheel chairs to replacing the mouse and keyboard. [ 21 ]
It is possible to track which of two forms of rivalrous binocular illusions a person was subjectively experiencing from fMRI signals. [ 22 ]
When humans think of an object, such as a screwdriver, many different areas of the brain activate. Marcel Just and his colleague, Tom Mitchell, have used fMRI brain scans to teach a computer to identify the various parts of the brain associated with specific thoughts. [ 23 ] This technology also yielded a discovery: similar thoughts in different human brains are surprisingly similar neurologically. To illustrate this, Just and Mitchell used their computer to predict, based on nothing but fMRI data, which of several images a volunteer was thinking about. The computer was 100% accurate, but so far the machine is only distinguishing between 10 images. [ 23 ]
The category of event which a person freely recalls can be identified from fMRI before they say what they remembered. [ 24 ]
On 16 December 2015, a study conducted by Toshimasa Yamazaki at the Kyushu Institute of Technology found that, during a rock-paper-scissors game, a computer was able to determine the choice made by the subjects before they moved their hand. An EEG was used to measure activity in Broca's area to detect the chosen words two seconds before they were uttered. [ 25 ] [ 26 ] [ 27 ]
In 2023, the University of Texas in Austin trained a non-invasive brain decoder to translate volunteers' brainwaves into the GPT-1 language model . After lengthy training on each individual volunteer, the decoder usually failed to reconstruct the exact words, but could nevertheless reconstruct meanings close enough that the decoder could, most of the time, identify what timestamp of a given book the subject was listening to. [ 28 ] [ 29 ]
Statistical analysis of EEG brainwaves has been claimed to allow the recognition of phonemes , [ 30 ] and (in 1999) at a 60% to 75% level color and visual shape words. [ 31 ]
On 31 January 2012 Brian Pasley and colleagues of University of California Berkeley published their paper in PLoS Biology wherein subjects' internal neural processing of auditory information was decoded and reconstructed as sound on computer by gathering and analyzing electrical signals directly from subjects' brains. [ 32 ] The research team conducted their studies on the superior temporal gyrus, a region of the brain that is involved in higher order neural processing to make semantic sense from auditory information. [ 33 ] The research team used a computer model to analyze various parts of the brain that might be involved in neural firing while processing auditory signals. Using the computational model, scientists were able to identify the brain activity involved in processing auditory information when subjects were presented with recording of individual words. [ 34 ] Later, the computer model of auditory information processing was used to reconstruct some of the words back into sound based on the neural processing of the subjects. However the reconstructed sounds were not of good quality and could be recognized only when the audio wave patterns of the reconstructed sound were visually matched with the audio wave patterns of the original sound that was presented to the subjects. [ 34 ] However this research marks a direction towards more precise identification of neural activity in cognition. [ citation needed ]
Some researchers in 2008 were able to predict, with 60% accuracy, whether a subject was going to push a button with their left or right hand. This is notable, not just because the accuracy is better than chance, but also because the scientists were able to make these predictions up to 10 seconds before the subject acted – well before the subject felt they had decided. [ 35 ] This data is even more striking in light of other research suggesting that the decision to move, and possibly the ability to cancel that movement at the last second, [ 36 ] may be the results of unconscious processing. [ 37 ]
John-Dylan Haynes has also demonstrated that fMRI can be used to identify whether a volunteer is about to add or subtract two numbers in their head. [ 23 ]
Neural decoding techniques have been used to test theories about the predictive brain , and to investigate how top-down predictions affect brain areas such as the visual cortex . Studies using fMRI decoding techniques have found that predictable sensory events [ 38 ] and the expected consequences of our actions [ 39 ] are better decoded in visual brain areas, suggesting that prediction 'sharpens' representations in line with expectations.
It has also been shown that brain-reading can be achieved in a complex virtual environment . [ 40 ]
Just and Mitchell also claim they are beginning to be able to identify kindness, hypocrisy, and love in the brain. [ 23 ]
In 2013 a project led by University of California Berkeley professor John Chuang published findings on the feasibility of brainwave-based computer authentication as a substitute for passwords. The use of biometrics for computer authentication has continually improved since the 1980s, but this research team was looking for a method faster and less intrusive than today's retina scans, fingerprinting, and voice recognition. The technology chosen to improve security measures is the electroencephalogram (EEG), or brainwave measurer, used to turn passwords into "pass thoughts." Using this method, Chuang and his team were able to customize tasks and their authentication thresholds to the point where they could reduce error rates to under 1%, significantly better than other recent methods. To better attract users to this new form of security, the team is still researching mental tasks that are enjoyable for the user to perform while having their brainwaves identified. In the future this method could be as cheap, accessible, and straightforward as thought itself. [ 41 ]
John-Dylan Haynes states that fMRI can also be used to identify recognition in the brain. He provides the example of a criminal being interrogated about whether he recognizes the scene of the crime or murder weapons. [ 23 ]
In classification, a pattern of activity across multiple voxels is used to determine the particular class from which the stimulus was drawn. [ 42 ]
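As an illustration of the idea (with simulated voxel patterns and a simple nearest-centroid decoder standing in for the linear or nonlinear classifiers mentioned above, so none of the data or design choices below come from a real study):

```python
# Toy sketch of voxel-pattern classification on simulated fMRI data.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials_per_class, classes = 100, 20, ["faces", "houses"]

# Simulated training data: each class has its own mean voxel pattern plus noise.
class_means = {c: rng.normal(size=n_voxels) for c in classes}
train = {c: class_means[c] + rng.normal(scale=1.0, size=(n_trials_per_class, n_voxels))
         for c in classes}
centroids = {c: train[c].mean(axis=0) for c in classes}

def classify(pattern):
    """Assign a new voxel pattern to the class with the nearest centroid."""
    return min(classes, key=lambda c: np.linalg.norm(pattern - centroids[c]))

test_pattern = class_means["houses"] + rng.normal(scale=1.0, size=n_voxels)
print(classify(test_pattern))  # should print "houses" for this simulated trial
```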
In reconstruction brain reading the aim is to create a literal picture of the image that was presented. Early studies used voxels from early visual cortex areas (V1, V2, and V3) to reconstruct geometric stimuli made up of flickering checkerboard patterns. [ 43 ] [ 44 ]
EEG has also been used to identify recognition of specific information or memories by the P300 event related potential, which has been dubbed ' brain fingerprinting '. [ 45 ]
Brain-reading accuracy is increasing steadily as the quality of the data and the complexity of the decoding algorithms improve. In one recent experiment it was possible to identify which single image was being seen from a set of 120. [ 46 ] In another it was possible to identify which of two categories the stimulus came from 90% of the time, and the specific semantic category (out of 23) of the target image 40% of the time. [ 2 ]
It has been noted that so far brain-reading is limited. Naselaris et al. report that: "In practice, exact reconstructions are impossible to achieve by any reconstruction algorithm on the basis of brain activity signals acquired by fMRI. This is because all reconstructions will inevitably be limited by inaccuracies in the encoding models and noise in the measured signals. Our results demonstrate that the natural image prior is a powerful (if unconventional) tool for mitigating the effects of these fundamental limitations. A natural image prior with only six million images is sufficient to produce reconstructions that are structurally and semantically similar to a target image." [ 2 ]
With brain scanning technology becoming increasingly accurate, experts predict important debates over how and when it should be used. One potential area of application is criminal law. Haynes states that simply refusing to use brain scans on suspects also prevents the wrongly accused from proving their innocence. [ 47 ] US scholars generally believe that involuntary brain reading, and involuntary polygraph tests, would violate the Fifth Amendment's right to not self-incriminate. [ 48 ] [ 49 ] One perspective is to consider whether brain imaging is like testimony, or instead like DNA, blood, or semen. Paul Root Wolpe, director of the Center for Ethics at Emory University in Atlanta predicts that this question will be decided by a Supreme Court case. [ 50 ]
In other countries outside the United States, thought identification has already been used in criminal law. In 2008 an Indian woman was convicted of murder after an EEG of her brain allegedly revealed that she was familiar with the circumstances surrounding the poisoning of her ex-fiancé. [ 50 ] Some neuroscientists and legal scholars doubt the validity of using thought identification as a whole for anything past research on the nature of deception and the brain. [ 51 ]
The Economist cautioned people to be "afraid" of the future impact, and some ethicists argue that privacy laws should protect private thoughts. Legal scholar Hank Greely argues that the court systems could benefit from such technology, and neuroethicist Julian Savulescu states that brain data is not fundamentally different from other types of evidence. [ 52 ] In Nature , journalist Liam Drew writes about emerging projects to attach brain-reading devices to speech synthesizers or other output devices for the benefit of tetraplegics . Such devices could create concerns of accidentally broadcasting the patient's "inner thoughts" rather than merely conscious speech. [ 53 ]
Psychologist John-Dylan Haynes experienced breakthroughs in brain imaging research in 2006 by using fMRI . This research included new findings on visual object recognition, tracking dynamic mental processes, lie detecting , and decoding unconscious processing. The combination of these four discoveries revealed such a significant amount of information about an individual's thoughts that Haynes termed it "brain reading". [ 1 ]
The fMRI has allowed research to expand by significant amounts because it can track the activity in an individual's brain by measuring the brain's blood flow. It is currently thought to be the best method for measuring brain activity, which is why it has been used in multiple research experiments in order to improve the understanding of how doctors and psychologists can identify thoughts. [ 54 ]
In a 2020 study, AI using implanted electrodes could correctly transcribe a sentence read aloud from a fifty-sentence test set 97% of the time, given 40 minutes of training data per participant. [ 55 ]
Experts are unsure of how far thought identification can expand, but Marcel Just believed in 2014 that in 3–5 years there will be a machine that is able to read complex thoughts such as 'I hate so-and-so'. [ 50 ]
Professor of neuropsychology Barbara Sahakian qualified, "A lot of neuroscientists in the field are very cautious and say we can't talk about reading individuals' minds, and right now that is very true, but we're moving ahead so rapidly, it's not going to be that long before we will be able to tell whether someone's making up a story, or whether someone intended to do a crime with a certain degree of certainty." [ 47 ]
Frederic Gilbert and Ingrid Russo assert that the field of BCI/BMI related brain reading has significant levels of "hype", similar to the field of artificial intelligence . [ 56 ]
Donald Marks, founder and chief science officer of MMT, is working on playing back individuals' thoughts after they have been recorded. [ 57 ]
Researchers at the University of California Berkeley have already been successful in forming, erasing, and reactivating memories in rats. Marks says they are working on applying the same techniques to humans. This discovery could be monumental for war veterans who suffer from PTSD . [ 57 ]
Further research is also being done in analyzing brain activity during video games to detect criminals, neuromarketing , and using brain scans in government security checks. [ 50 ] [ 54 ]
The episode Black Hole of American medical drama House , which aired on 15 March 2010, featured an experimental "cognitive imaging" device that supposedly allowed seeing into a patient's subconscious mind. The patient was first put in a preparation phase of six hours while watching video clips, attached to a neuroimaging device looking like electroencephalography or functional near-infrared spectroscopy , to train the neuroimaging classifier. Then the patient was put under twilight anesthesia , and the same device was used to try to infer what was going through the patient's mind. The fictional episode somewhat anticipated the study by Nishimoto et al. published the following year, in which fMRI was used instead. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
In the movie Dumb and Dumber To , one scene shows a brain reader.
In the Henry Danger episode, "Dream Busters," a machine shows Henry's dream . | https://en.wikipedia.org/wiki/Thought_identification |
" Thoughts on Flash " is an open letter published by Steve Jobs , co-founder and then-chief executive officer of Apple Inc. , on April 29, 2010. The letter criticizes Adobe Systems ' Flash platform and outlines reasons why the technology would not be allowed on Apple's iOS hardware products. The letter drew accusations of falsehood, hypocrisy, and ulterior motive. In retrospect many publications came to agree with Jobs.
On April 29, 2010, Steve Jobs , the co-founder and then-chief executive officer of Apple Inc. , published an open letter called "Thoughts on Flash" explaining why Apple would not allow Flash on the iPhone , iPod Touch and iPad . He cited the rapid energy consumption, computer crashes , poor performance on mobile devices, abysmal security, lack of touch support, and desire to avoid "a third party layer of software coming between the platform and the developer". He touched on the idea of Flash being "open", claiming "by almost any definition, Flash is a closed system". Jobs dismissed the idea that Apple customers are missing out by being sold devices without Flash compatibility by quoting a number of statistics, concluding with "Flash is no longer necessary to watch video or consume any kind of web content." [ 1 ] [ 2 ] [ 3 ]
The letter drew immediate attention. In response to Jobs' accusations, Adobe's CEO Shantanu Narayen described the open letter as an "extraordinary attack", and, during an interview with The Wall Street Journal , called the problems mentioned by Jobs "really a smokescreen". He further fired back at Apple, stating that computer crashes were due to Apple's operating system, and that allegations of battery drain were "patently false". [ 4 ] [ 5 ] Various publications had different opinions on the topic. Wired 's Brian Chen had in a 2009 article claimed Apple would not allow Flash on the iPhone for business reasons, due to the technology being able to divert users away from the App Store . [ 6 ] John Sullivan of Ars Technica agreed with Jobs, but highlighted the hypocrisy in his reasoning, writing: "every criticism he makes of Adobe's proprietary approach applies equally to Apple". [ 7 ] Dan Rayburn of Business Insider accused Steve Jobs of lying, particularly regarding the sentiment that most content on the Internet is available in a different format. [ 8 ]
Retrospectively, more publications have agreed with Jobs. Ryan Lawler of TechCrunch wrote in 2012 "Jobs was right", adding Android users had poor experiences with watching Flash content and interactive Flash experiences were "often wonky or didn't perform well, even on high-powered phones". [ 9 ] Mike Isaac of Wired wrote in 2011 that "In [our] testing of multiple Flash-compatible devices, choppiness and browser crashes were common", and a former Adobe employee stated "Flash is a resource hog [...] It's a battery drain, and it's unreliable on mobile web browsers". [ 10 ] Kyle Wagner of Gizmodo wrote in 2011 that "Adobe was never really able to smooth over performance, battery, and security issues". [ 11 ]
In April 2010, Apple announced changes to its iPhone Developer Agreement, with details on new developer restrictions, particularly that only apps built using "approved" programming languages would be allowed on the App Store. The change impacted a number of companies that had developed tools for porting applications from their respective languages into native iPhone apps, with the most prominent example being Adobe's "Packager for iPhone", an iOS development tool in beta at the time. [ 12 ] [ 13 ] [ 14 ] The New York Times quoted an Adobe supporter alleging the policy to be anti-competitive. [ 15 ]
On May 3, 2010, New York Post reported that the US Federal Trade Commission (FTC) and the United States Department of Justice (DOJ) were deciding which agency would launch an antitrust investigation into the matter. [ 16 ] [ 17 ]
In September 2010, after having "listened to our developers and taken much of their feedback to heart", Apple removed the restrictions on third-party tools, languages and frameworks, again allowing the deployment of Flash applications on iOS using Adobe's iOS Packager. [ 18 ] [ 19 ]
On November 8, 2011, Adobe announced that it was ceasing development of the Flash Player plug-in for web browsers on mobile devices, and shifting its focus toward building tools to develop applications for mobile app stores. [ 20 ] [ 21 ] [ 22 ]
In 2021, former Apple head of software engineering Scott Forstall said in a taped deposition in the Epic Games v. Apple lawsuit that Apple had once helped Adobe try to port Flash for iPhone and iPad. Performance was "abysmal and embarrassing", and Apple never allowed Flash to be released for iOS. [ 23 ]
In July 2017, Adobe announced its intention to discontinue Flash (including security updates) altogether by the year 2020. [ 24 ] [ 25 ] As of December 31, 2020, Flash support has ended. Adobe blocked Flash content from running in Flash Player beginning January 12, 2021. [ 26 ] | https://en.wikipedia.org/wiki/Thoughts_on_Flash |
The Thouless energy is a characteristic energy scale of diffusive disordered conductors . It was first introduced by the Scottish-American physicist David J. Thouless when studying Anderson localization , [ 1 ] as a measure of the sensitivity of energy levels to a change in the boundary conditions of the system. Though being a classical quantity, it has been shown to play an important role in the quantum-mechanical treatment of disordered systems. [ 2 ]
It is defined by \( E_{\mathrm{Th}} = \hbar D / L^2 \), where D is the diffusion constant and L the size of the system; it is thereby inversely proportional to the diffusion time \( t_D = L^2 / D \) through the system.
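As a rough numerical illustration (the diffusion constant and system size below are assumed values of the order found in mesoscopic metallic samples, not taken from a specific experiment):

```python
# Illustrative estimate with assumed parameters: Thouless energy E_Th = hbar*D/L^2
# and the corresponding diffusion time t_D = L^2/D.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
eV = 1.602176634e-19     # joules per electronvolt

D = 1e-3                 # diffusion constant, m^2/s (assumed, ~10 cm^2/s)
L = 1e-6                 # system size, m (assumed, 1 micrometre)

t_D = L**2 / D           # diffusion time through the system, in seconds
E_Th = hbar * D / L**2   # Thouless energy, in joules

print(f"t_D  = {t_D:.2e} s")
print(f"E_Th = {E_Th:.2e} J  ({E_Th / eV * 1e6:.2f} micro-eV)")
```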
This condensed matter physics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thouless_energy |
TPTP (Thousands of Problems for Theorem Provers) [ 1 ] is a freely available collection of problems for automated theorem proving . It is used to evaluate the efficacy of automated reasoning algorithms. [ 2 ] [ 3 ] [ 4 ] Problems are expressed in a simple text-based format for first order logic or higher-order logic. [ 5 ] TPTP is used as the source of some problems in CASC .
This mathematical logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thousands_of_Problems_for_Theorem_Provers |
The Thraustochytrium mitochondrial code (translation table 23) is a genetic code found in the mitochondria of the labyrinthulid protist Thraustochytrium aureum . [ 1 ] The mitochondrial genome was sequenced by the Organelle Genome Megasequencing Program .
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V)
It is similar to the bacterial code ( translation table 11 ), but it contains an additional stop codon (TTA) and also has a different set of start codons.
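The practical consequence of the TTA reassignment can be illustrated with a small, hypothetical translation routine; only a handful of codons are included and the sequence is invented, so this is a sketch rather than a complete implementation of either code:

```python
# Toy sketch: translating a short DNA fragment when TTA is read as leucine
# (bacterial-style) versus as a stop codon (as in translation table 23).
# Only a tiny, assumed subset of codons is included.

def translate(dna, tta_meaning):
    codon_to_aa = {"ATG": "M", "GCT": "A", "AAA": "K", "TAA": "*",
                   "TTA": tta_meaning}   # illustrative subset, not the full table
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = codon_to_aa[dna[i:i + 3]]
        if aa == "*":                    # a stop codon terminates translation
            break
        protein.append(aa)
    return "".join(protein)

seq = "ATGGCTTTAAAATAA"
print(translate(seq, "L"))   # TTA as leucine  -> "MALK"
print(translate(seq, "*"))   # TTA as stop     -> "MA"
```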
This article incorporates text from the United States National Library of Medicine , which is in the public domain . [ 2 ]
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thraustochytrium_mitochondrial_code |
A thread protector is used to protect the threads of a pipe during transportation and storage . Thread protectors are generally manufactured from plastic or steel and can be applied to the pipe manually or automatically (by machine).
Thread protectors are used frequently in the oil and gas industry to protect pipes during transportation to the oil and gas fields. Metal thread protectors can be cleaned and re-used, while plastic thread protectors are often collected and either re-used or recycled .
Thread protectors are widely used on firearms to protect threaded barrels. Some firearms leave the factory with a threaded barrel and protector fitted, but most thread protectors are added as part of the aftermarket process of fitting a sound moderator (silencer), muzzle brake or flash hider . They protect the threads from mechanical damage and ensure that the center lines line up when the muzzle device is replaced.
This engineering-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thread_protector |
A threaded pipe is a pipe with screw-threaded ends for assembly.
The threaded pipes used in some plumbing installations for the delivery of gases or liquids under pressure have a tapered thread that is slightly conical (in contrast to the parallel sided cylindrical section commonly found on bolts and leadscrews ). The seal provided by a threaded pipe joint depends upon multiple factors: the labyrinth seal created by the threads; a positive seal between the threads created by thread deformation when they are tightened to the proper torque; and sometimes on the presence of a sealing coating , such as thread seal tape or a liquid or paste pipe sealant such as pipe dope . Tapered thread joints typically do not include a gasket .
Especially precise threads are known as "dry fit" or "dry seal" and require no sealant for a gas-tight seal. Such threads are needed where the sealant would contaminate or react with the media inside the piping, e.g., oxygen service.
Tapered threaded fittings are sometimes used on plastic piping. Due to the wedging effect of the tapered thread, extreme care must be used to avoid overtightening the joint. The overstressed female fitting may split days, weeks, or even years after initial installation. Therefore, many municipal plumbing codes restrict the use of threaded plastic pipe fittings.
Both British standard and National pipe thread standards specify a thread taper of 1:16; the change in diameter is one sixteenth the distance travelled along the thread. The nominal diameter is achieved some small distance (the "gauge length") from the end of the pipe.
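A small worked example (with assumed dimensions, purely for illustration) shows what the 1:16 taper means in practice:

```python
# Illustrative calculation with assumed dimensions: a 1:16 taper means the thread
# diameter changes by 1 unit for every 16 units travelled along the pipe axis.
taper = 1 / 16            # change in diameter per unit of axial travel
engagement = 12.0         # mm of thread engagement (assumed)

diameter_change = taper * engagement
print(f"diameter change over {engagement} mm of engagement: {diameter_change:.3f} mm")
# -> 0.750 mm: the joint tightens as the slightly conical threads wedge together
```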
Pipes may also be threaded with cylindrical threaded sections, in which case the threads do not themselves provide any sealing function other than some labyrinth seal effect, which may not be enough to satisfy either functional or code requirements. Instead, an O-ring seated between the shoulder of the male pipe section and an interior surface on the female, provides the seal. | https://en.wikipedia.org/wiki/Threaded_pipe |
In molecular biology , protein threading , also known as fold recognition , is a method of protein modeling which is used to model those proteins which have the same fold as proteins of known structures , but do not have homologous proteins with known structure.
It differs from the homology modeling method of structure prediction as it (protein threading) is used for proteins which do not have their homologous protein structures deposited in the Protein Data Bank (PDB), whereas homology modeling is used for those proteins which do. Threading works by using statistical knowledge of the relationship between the structures deposited in the PDB and the sequence of the protein which one wishes to model.
The prediction is made by "threading" (i.e. placing, aligning) each amino acid in the target sequence to a position in the template structure, and evaluating how well the target fits the template. After the best-fit template is selected, the structural model of the sequence is built based on the alignment with the chosen template. Protein threading is based on two basic observations: that the number of different folds in nature is fairly small (approximately 1300); and that 90% of the new structures submitted to the PDB in the past three years have similar structural folds to ones already in the PDB.
The Structural Classification of Proteins database (SCOP) provides a detailed and comprehensive description of the structural and evolutionary relationships of known structure. Proteins are classified to reflect both structural and evolutionary relatedness. Many levels exist in the hierarchy, but the principal levels are family , superfamily , and fold:
A general paradigm of protein threading consists of the following four steps:
Homology modeling and protein threading are both template-based methods and there is no rigorous boundary between them in terms of prediction techniques. But the protein structures of their targets are different. Homology modeling is for those targets which have homologous proteins with known structure (usually/maybe of same family), while protein threading is for those targets with only fold-level homology found. In other words, homology modeling is for "easier" targets and protein threading is for "harder" targets.
Homology modeling treats the template in an alignment as a sequence, and only sequence homology is used for prediction. Protein threading treats the template in an alignment as a structure, and both sequence and structure information extracted from the alignment are used for prediction. When there is no significant homology found, protein threading can make a prediction based on the structure information. That also explains why protein threading may be more effective than homology modeling in many cases.
In practice, when the sequence identity in a sequence-sequence alignment is low (i.e. <25%), homology modeling may not produce a significant prediction. In this case, if there is distant homology found for the target, protein threading can generate a good prediction.
Fold recognition methods can be broadly divided into two types: those that derive a 1-D profile for each structure in the fold library and align the target sequence to these profiles; and those that consider the full 3-D structure of the protein template. A simple example of a profile representation would be to take each amino acid in the structure and simply label it according to whether it is buried in the core of the protein or exposed on the surface. More elaborate profiles might take into account the local secondary structure (e.g. whether the amino acid is part of an alpha helix ) or even evolutionary information (how conserved the amino acid is). In the 3-D representation, the structure is modeled as a set of inter-atomic distances, i.e. the distances are calculated between some or all of the atom pairs in the structure. This is a much richer and far more flexible description of the structure, but is much harder to use in calculating an alignment. The profile-based fold recognition approach was first described by Bowie, Lüthy and David Eisenberg in 1991. [ 1 ] The term threading was first coined by David Jones , William R. Taylor and Janet Thornton in 1992, [ 2 ] and originally referred specifically to the use of a full 3-D structure atomic representation of the protein template in fold recognition. Today, the terms threading and fold recognition are frequently (though somewhat incorrectly) used interchangeably.
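A toy sketch of the 1-D profile idea follows; the burial profile, match scores, and gap penalty are illustrative assumptions rather than any published scoring scheme, and real methods use far richer profiles and scoring functions:

```python
# Hypothetical sketch: align a target sequence to a template "burial profile"
# (buried/exposed labels per template position) by global dynamic programming,
# rewarding hydrophobic residues at buried positions. Scores are invented.

HYDROPHOBIC = set("AVLIMFWCY")

def match_score(residue, burial):
    hydrophobic = residue in HYDROPHOBIC
    if burial == "buried":
        return 2 if hydrophobic else -1
    return 1 if not hydrophobic else -1

def thread_score(sequence, profile, gap=-2):
    """Needleman-Wunsch-style global alignment score of sequence vs. profile."""
    n, m = len(sequence), len(profile)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + match_score(sequence[i - 1], profile[j - 1]),
                score[i - 1][j] + gap,   # residue left unmatched
                score[i][j - 1] + gap,   # template position left empty
            )
    return score[n][m]

profile = ["exposed", "buried", "buried", "exposed", "buried", "exposed"]
for candidate in ["KVLDIS", "KDEDES"]:
    print(candidate, thread_score(candidate, profile))
# The first sequence fits the assumed profile better and scores higher.
```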
Fold recognition methods are widely used and effective because it is believed that there are a strictly limited number of different protein folds in nature, mostly as a result of evolution but also due to constraints imposed by the basic physics and chemistry of polypeptide chains. There is, therefore, a good chance (currently 70-80%) that a protein which has a similar fold to the target protein has already been studied by X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy and can be found in the PDB. Currently there are nearly 1300 different protein folds known, but new folds are still being discovered every year due in significant part to the ongoing structural genomics projects.
Many different algorithms have been proposed for finding the correct threading of a sequence onto a structure, though many make use of dynamic programming in some form. For full 3-D threading, the problem of identifying the best alignment is very difficult (it is an NP-hard problem for some models of threading). [ citation needed ] Researchers have made use of many combinatorial optimization methods such as conditional random fields , simulated annealing , branch and bound , and linear programming , searching to arrive at heuristic solutions. It is interesting to compare threading methods to methods which attempt to align two protein structures ( protein structural alignment ), and indeed many of the same algorithms have been applied to both problems. | https://en.wikipedia.org/wiki/Threading_(protein_sequence) |
In computer security , a threat is a potential negative action or event enabled by a vulnerability that results in an unwanted impact to a computer system or application.
A threat can be either a negative " intentional " event (i.e. hacking: an individual cracker or a criminal organization) or an " accidental " negative event (e.g. the possibility of a computer malfunctioning, or the possibility of a natural disaster event such as an earthquake , a fire , or a tornado ) or otherwise a circumstance, capability, action, or event ( incident is often used as a blanket term). [ 1 ] A threat actor is an individual or group that can perform the threat action, such as exploiting a vulnerability to actualise a negative impact. An exploit is the means by which a threat actor makes use of a vulnerability to cause an incident.
A more comprehensive definition, tied to an Information assurance point of view, can be found in " Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems " by NIST of United States of America [ 2 ]
National Information Assurance Glossary defines threat as:
ENISA gives a similar definition: [ 3 ]
The Open Group defines threat as: [ 4 ]
Factor analysis of information risk defines threat as: [ 5 ]
National Information Assurance Training and Education Center gives a more articulated definition of threat : [ 6 ] [ 7 ]
The term "threat" relates to some other basic security terms as shown in the following diagram: [ 1 ] A resource (both physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality , integrity or availability properties of resources (potentially different than the vulnerable one) of the organization and others involved parties (customers, suppliers). The so-called CIA triad is the basis of information security .
The attack can be active when it attempts to alter system resources or affect their operation: so it compromises Integrity or Availability. A " passive attack " attempts to learn or make use of information from the system but does not affect system resources: so it compromises Confidentiality. [ 1 ]
OWASP (see figure) depicts the same phenomenon in slightly different terms: a threat agent through an attack vector exploits a weakness (vulnerability) of the system and the related security controls causing a technical impact on an IT resource (asset) connected to a business impact.
A set of policies concerned with information security management, the Information security management systems (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to carry out a security strategy set up following the rules and regulations applicable in a country. Countermeasures are also called security controls; when applied to the transmission of information they are named security services . [ 8 ]
The overall picture represents the risk factors of the risk scenario. [ 9 ]
The widespread dependence on computer systems and the consequent increase in the impact of a successful attack led to a new term, cyberwarfare .
Nowadays many real attacks exploit psychology at least as much as technology. Phishing , pretexting and other methods are called social engineering techniques. [ 10 ] Web 2.0 applications, specifically social network services , can be a means of getting in touch with people in charge of system administration or even system security, inducing them to reveal sensitive information. [ 11 ] One famous case is Robin Sage . [ 12 ]
Most documentation on computer insecurity concerns technical threats such as computer viruses , trojans and other malware , but a serious study of cost-effective countermeasures can only be conducted following a rigorous IT risk analysis in the framework of an ISMS: a purely technical approach leaves out the psychological attacks, which are an increasing threat.
Threats can be classified according to their type and origin: [ 13 ]
Note that a threat type can have multiple origins.
Recent trends in computer threats show an increase in ransomware attacks, supply chain attacks, and fileless malware. Ransomware attacks involve the encryption of a victim's files and a demand for payment to restore access. Supply chain attacks target the weakest links in a supply chain to gain access to high-value targets. Fileless malware attacks use techniques that allow malware to run in memory, making it difficult to detect. [ 14 ]
Below are a few common emerging threats:
Microsoft published a mnemonic, STRIDE , [ 15 ] from the initials of six threat groups: Spoofing of user identity, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
Microsoft previously rated the risk of security threats using five categories in a classification called DREAD (risk assessment model). The model is considered obsolete by Microsoft.
The categories were: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.
The DREAD name comes from the initials of the five categories listed.
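As an illustration only (the 1-to-10 scale and the averaging convention below are common practice rather than a fixed part of the model, and the threat ratings are invented):

```python
# Toy sketch of a DREAD rating: one common convention averages the five category
# scores into a single risk value. Scale and example ratings are assumptions.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD category ratings (here each on a 1-10 scale)."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical threat: easy to reproduce and exploit, moderate damage.
print(dread_score(damage=6, reproducibility=9, exploitability=8,
                  affected_users=7, discoverability=9))   # -> 7.8
```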
The spread of threats over a network can lead to dangerous situations. In military and civil fields, threat levels have been defined: for example, INFOCON is a threat level used by the US. Leading antivirus software vendors publish a global threat level on their websites. [ 16 ] [ 17 ]
The term Threat Agent is used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company. [ 18 ]
Individuals within a threat population: practically anyone and anything can, under the right circumstances, be a threat agent – the well-intentioned but inept computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable. [ 5 ]
Threat agents can take one or more of the following actions against an asset: [ 5 ]
Each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization's productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that does not play a critical role in productivity would not directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it is the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will be driven primarily by that agent's motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than they are to steal an easily pawned asset like a laptop. [ 5 ]
It is important to distinguish the event in which a threat agent comes into contact with the asset (even virtually, i.e. through the network) from the event in which a threat agent acts against the asset. [ 5 ]
OWASP collects a list of potential threat agents to help prevent system designers and programmers from inserting vulnerabilities into the software. [ 18 ]
Threat Agent = Capabilities + Intentions + Past Activities
These individuals and groups can be classified as follows: [ 18 ]
Threat sources are those who wish a compromise to occur. The term distinguishes them from threat agents/actors, who actually carry out the attack and who may be commissioned or persuaded by the threat source to carry it out knowingly or unknowingly. [ 19 ]
Threat action is an assault on system security. A complete security architecture deals with both intentional acts (i.e. attacks) and accidental events. [ 20 ]
Various kinds of threat actions are defined as subentries under "threat consequence".
Threat analysis is the analysis of the probability of occurrences and consequences of damaging actions to a system. [ 1 ] It is the basis of risk analysis .
Threat modeling is a process that helps organizations identify and prioritize potential threats to their systems. It involves analyzing the system's architecture, identifying potential threats, and prioritizing them based on their impact and likelihood. By using threat modeling, organizations can develop a proactive approach to security and prioritize their resources to address the most significant risks. [ 21 ]
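A minimal sketch of the prioritization step might rank identified threats by the product of their estimated impact and likelihood; the threats, scores, and scales below are illustrative assumptions rather than values from the cited sources:

```python
# Toy sketch: ranking hypothetical threats by estimated impact x likelihood.
threats = [
    {"name": "SQL injection on login form", "impact": 9, "likelihood": 6},
    {"name": "Lost unencrypted backup tape", "impact": 8, "likelihood": 3},
    {"name": "Brute-force of admin password", "impact": 7, "likelihood": 7},
]

for t in sorted(threats, key=lambda t: t["impact"] * t["likelihood"], reverse=True):
    print(f'{t["impact"] * t["likelihood"]:3d}  {t["name"]}')
```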
Threat intelligence is the practice of collecting and analyzing information about potential and current threats to an organization. This information can include indicators of compromise, attack techniques, and threat actor profiles. By using threat intelligence, organizations can develop a better understanding of the threat landscape and improve their ability to detect and respond to threats. [ 22 ]
Threat consequence is a security violation that results from a threat action. [ 1 ] It includes disclosure, deception, disruption, and usurpation.
The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence. [ 1 ] Threat actions that are accidental events are marked by "*".
A threat landscape is a collection of threats in a particular domain or context, with information on identified vulnerable assets, threats, risks, threat actors and observed trends. [ 23 ] [ 24 ]
Threats should be managed by operating an ISMS and performing all the IT risk management activities required by laws, standards and methodologies.
Very large organizations tend to adopt business continuity management plans in order to protect, maintain and recover business-critical processes and systems. Some of these plans are implemented by a computer security incident response team (CSIRT).
Threat management must identify, evaluate, and categorize threats. There are two primary methods of threat assessment :
Many organizations perform only a subset of these methods, adopting countermeasures based on a non-systematic approach, resulting in computer insecurity .
Information security awareness is a significant market. There has been a lot of software developed to deal with IT threats, including both open-source software and proprietary software . [ 25 ]
Threat management involves a wide variety of threats, including physical threats like flood and fire. While the ISMS risk assessment process does incorporate threat management for cyber threats such as remote buffer overflows, it does not include processes such as threat intelligence management or response procedures.
Cyber threat management (CTM) is emerging as the best practice for managing cyber threats beyond the basic risk assessment found in ISMS. It enables early identification of threats, data-driven situational awareness, accurate decision-making, and timely threat mitigating actions. [ 26 ]
CTM includes:
Cyber threat hunting is "the process of proactively and iteratively searching through networks to detect and isolate advanced threats that evade existing security solutions." [ 27 ] This is in contrast to traditional threat management measures, such as firewalls , intrusion detection systems , and SIEMs , which typically involve an investigation after there has been a warning of a potential threat, or an incident has occurred.
Threat hunting can be a manual process, in which a security analyst sifts through various data sources using their knowledge and familiarity with the network to create hypotheses about potential threats. To be even more effective and efficient, however, threat hunting can be partially automated, or machine-assisted, as well. In this case, the analyst utilizes software that harnesses machine learning and user and entity behaviour analytics (UEBA) to inform the analyst of potential risks. The analyst then investigates these potential risks, tracking suspicious behaviour in the network. Thus, hunting is an iterative process, meaning that it must be continuously carried out in a loop, beginning with a hypothesis. There are three types of hypotheses:
The analyst researches their hypothesis by going through vast amounts of data about the network. The results are then stored so that they can be used to improve the automated portion of the detection system and to serve as a foundation for future hypotheses.
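As an illustration of a single iteration of this loop, the sketch below tests one hypothesis – that a compromised account logs in from rarely used hosts outside office hours – against a tiny set of hypothetical authentication records; the log format, field names and thresholds are assumptions made for the example.

```python
from collections import Counter

# Hypothetical, simplified authentication log records
events = [
    {"user": "alice", "src": "10.0.0.5",    "hour": 9},
    {"user": "alice", "src": "10.0.0.5",    "hour": 10},
    {"user": "bob",   "src": "10.0.0.7",    "hour": 14},
    {"user": "alice", "src": "203.0.113.9", "hour": 3},  # rare host, odd hour
]

# Hypothesis: a compromised account logs in from hosts it rarely uses, outside office hours
usual_hosts = Counter((e["user"], e["src"]) for e in events)

def suspicious(event):
    rare_host = usual_hosts[(event["user"], event["src"])] == 1
    off_hours = event["hour"] < 6 or event["hour"] > 22
    return rare_host and off_hours

leads = [e for e in events if suspicious(e)]
for e in leads:
    print("investigate:", e)  # confirmed or rejected leads feed the next hypothesis
```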
The SANS Institute has conducted research and surveys on the effectiveness of threat hunting to track and disrupt cyber adversaries as early in their process as possible. According to a survey performed in 2019, "61% [of the respondents] report at least an 11% measurable improvement in their overall security posture" and 23.6% of the respondents have experienced a 'significant improvement' in reducing the dwell time . [ 29 ]
To protect yourself from computer threats, it's essential to keep your software up-to-date, use strong and unique passwords, and be cautious when clicking on links or downloading attachments. Additionally, using antivirus software and regularly backing up your data can help mitigate the impact of a threat. | https://en.wikipedia.org/wiki/Threat_(computer_security) |
Threat Intelligence Platform (TIP) is an emerging technology discipline that helps organizations aggregate, correlate, and analyze threat data from multiple sources in real time to support defensive actions. TIPs have evolved to address the growing amount of data generated by a variety of internal and external resources (such as system logs and threat intelligence feeds) and help security teams identify the threats that are relevant to their organization. By importing threat data from multiple sources and formats, correlating that data, and then exporting it into an organization’s existing security systems or ticketing systems, a TIP automates proactive threat management and mitigation. A true TIP differs from typical enterprise security products in that it is a system that can be programmed by outside developers, in particular, users of the platform. TIPs can also use APIs to gather data to generate configuration analysis , Whois information, reverse IP lookup , website content analysis, name servers , and SSL certificates .
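A minimal sketch of the aggregate–correlate–export flow described above is shown below; the two feed payloads, their formats and the "seen in multiple independent sources" correlation rule are illustrative assumptions rather than any particular vendor's API.

```python
import csv
import io
import json

# Hypothetical feed payloads in two different formats
feed_a = json.dumps([{"indicator": "198.51.100.7", "type": "ip", "source": "feed-a"}])
feed_b = "ip,confidence\n198.51.100.7,80\n192.0.2.44,60\n"

def normalize(feed_json, feed_csv):
    """Merge indicators from a JSON feed and a CSV feed into one dictionary."""
    indicators = {}
    for item in json.loads(feed_json):
        indicators.setdefault(item["indicator"], {"type": item["type"], "sources": set()})
        indicators[item["indicator"]]["sources"].add(item["source"])
    for row in csv.DictReader(io.StringIO(feed_csv)):
        indicators.setdefault(row["ip"], {"type": "ip", "sources": set()})
        indicators[row["ip"]]["sources"].add("feed-b")
    return indicators

# Correlate: an indicator reported by several independent feeds is more interesting
merged = normalize(feed_a, feed_b)
for ioc, meta in merged.items():
    if len(meta["sources"]) > 1:
        # Export step: in a real TIP this would be pushed to a SIEM or ticketing system
        print(f"block-list candidate: {ioc} (sources: {sorted(meta['sources'])})")
```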
The traditional approach to enterprise security involves security teams using a variety of processes and tools to conduct incident response, network defense, and threat analysis. Integration between these teams and sharing of threat data is often a manual process that relies on email, spreadsheets, or a portal ticketing system. This approach does not scale as the team and enterprise grow and the number of threats and events increases. With attack sources changing by the minute, hour, and day, scalability and efficiency are difficult to achieve. The tools used by large Security Operations Centers (SOCs), for example, produce hundreds of millions of events per day, from endpoint and network alerts to log events, making it difficult to filter down to a manageable number of suspicious events for triage.
Threat intelligence platforms make it possible for organizations to gain an advantage over the adversary by detecting the presence of threat actors, blocking and tackling their attacks, or degrading their infrastructure. Using threat intelligence, businesses and government agencies can also identify the threat sources and data that are the most useful and relevant to their own environment, potentially reducing the costs associated with unnecessary commercial threat feeds. [ 1 ]
Tactical use cases for threat intelligence include security planning, monitoring and detection, incident response , threat discovery and threat assessment. A TIP also drives smarter practices back into SIEMs , intrusion detection , and other security tools because of the finely curated, relevant, and widely sourced threat intelligence that a TIP produces.
An advantage held by TIPs is the ability to share threat intelligence with other stakeholders and communities. Adversaries typically coordinate their efforts across forums and platforms. A TIP provides a common habitat that makes it possible for security teams to share threat information among their own trusted circles, interface with security and intelligence experts, and receive guidance on implementing coordinated counter-measures. Full-featured TIPs enable security analysts to simultaneously coordinate these tactical and strategic activities with incident response, security operations, and risk management teams while aggregating data from trusted communities. [ 2 ]
Threat intelligence platforms [ 3 ] are made up of several primary feature areas [ 4 ] that allow organizations to implement an intelligence-driven security approach. These stages are supported by automated workflows that streamline the threat detection, management, analysis, and defensive process and track it through to completion:
Threat intelligence platforms can be deployed as software or as an appliance (physical or virtual), on-premises or in dedicated or public clouds, for enhanced community collaboration. | https://en.wikipedia.org/wiki/Threat_Intelligence_Platform
Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and countermeasures prioritized. [ 1 ] The purpose of threat modeling is to provide defenders with a systematic analysis of what controls or defenses need to be included, given the nature of the system, the probable attacker's profile, the most likely attack vectors, and the assets most desired by an attacker. Threat modeling answers questions like "Where am I most vulnerable to attack?" , "What are the most relevant threats?" , and "What do I need to do to safeguard against these threats?" .
Conceptually, most people incorporate some form of threat modeling in their daily life and don't even realize it. [ citation needed ] Commuters use threat modeling to consider what might go wrong during the morning journey to work and to take preemptive action to avoid possible accidents. Children engage in threat modeling when determining the best path toward an intended goal while avoiding the playground bully. In a more formal sense, threat modeling has been used to prioritize military defensive preparations since antiquity.
Shortly after shared computing made its debut in the early 1960s, individuals began seeking ways to exploit security vulnerabilities for personal gain. [ 2 ] As a result, engineers and computer scientists soon began developing threat modeling concepts for information technology systems.
Early technology-centered threat modeling methodologies were based on the concept of architectural patterns [ 3 ] first presented by Christopher Alexander in 1977. In 1988 Robert Barnard developed and successfully applied the first profile for an IT-system attacker.
In 1994, Edward Amoroso put forth the concept of a "threat tree" in his book Fundamentals of Computer Security Technology . [ 4 ] The concept of a threat tree was based on decision tree diagrams. Threat trees graphically represent how a potential threat to an IT system can be exploited.
Independently, similar work was conducted by the NSA and DARPA on a structured graphical representation of how specific attacks against IT-systems could be executed. The resulting representation was called " attack trees ." In 1998 Bruce Schneier published his analysis of cyber risks utilizing attack trees in his paper entitled "Toward a Secure System Engineering Methodology". [ 5 ] The paper proved to be a seminal contribution in the evolution of threat modeling for IT-systems. In Schneier's analysis, the attacker's goal is represented as a "root node," with the potential means of reaching the goal represented as "leaf nodes." Utilizing the attack tree in this way allowed cybersecurity professionals to systematically consider multiple attack vectors against any defined target.
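The following short sketch illustrates the attack-tree idea in code: the attacker's goal is the root, the leaves are concrete actions, and AND/OR nodes control how leaf actions combine into complete attack paths. The tree contents and node format are hypothetical.

```python
# Minimal attack-tree sketch: the attacker's goal is the root node,
# leaves are concrete means of reaching it (names are illustrative).
tree = {
    "goal": "read customer database",
    "type": "OR",
    "children": [
        {"goal": "steal DBA credentials", "type": "OR",
         "children": [{"goal": "phish DBA"}, {"goal": "keylogger on DBA laptop"}]},
        {"goal": "exploit SQL injection"},
    ],
}

def attack_paths(node):
    """Enumerate the sets of leaf actions that achieve this node's goal."""
    children = node.get("children")
    if not children:
        return [[node["goal"]]]
    child_paths = [attack_paths(c) for c in children]
    if node.get("type") == "AND":
        # every child must succeed: combine one path from each child
        paths = [[]]
        for cp in child_paths:
            paths = [p + q for p in paths for q in cp]
        return paths
    # OR node: any single child suffices
    return [p for cp in child_paths for p in cp]

for path in attack_paths(tree):
    print(" -> ".join(path))
```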
In 1999, Microsoft cybersecurity professionals Loren Kohnfelder and Praerit Garg developed a model for considering attacks relevant to the Microsoft Windows development environment. ( STRIDE [ 1 ] is an acrostic for: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, Elevation of privilege) The resultant mnemonic helps security professionals systematically determine how a potential attacker could utilize any threat included in STRIDE.
In 2003, the OCTAVE [ 6 ] (Operationally Critical Threat, Asset, and Vulnerability Evaluation) method, an operations-centric threat modeling methodology, was introduced with a focus on organizational risk management.
In 2004, Frank Swiderski and Window Snyder wrote "Threat Modeling," published by Microsoft Press. In it they developed the concept of using threat models to create secure applications.
In 2014, Ryan Stillions expressed the idea that cyber threats should be expressed with different semantic levels, and proposed the DML (Detection Maturity Level) model. [ 7 ] An attack is an instantiation of a threat scenario which is caused by a specific attacker with a specific goal in mind and a strategy for reaching that goal. The goal and strategy represent the highest semantic levels of the DML model. This is followed by the TTP (Tactics, Techniques and Procedures) which represent intermediate semantic levels. The lowest semantic levels of the DML model are the tools used by the attacker, host and observed network artifacts such as packets and payloads, and finally atomic indicators such as IP addresses at the lowest semantic level. Current SIEM (Security Information and Event Management) tools typically only provide indicators at the lowest semantic levels. There is therefore a need to develop SIEM tools that can provide threat indicators at higher semantic levels. [ 8 ]
The Threat Modeling Manifesto is a document published in 2020 by threat modeling authorities in order to clearly state the core values and principles that every threat modeler should know and follow. [ 9 ]
In 2024 the same group of authors followed up the Manifesto with a Threat Modeling Capabilities document, which "...provides a catalog of capabilities to help you cultivate value from your Threat Modeling practice". [ 10 ]
Conceptually, a threat modeling practice flows from a methodology. Numerous threat modeling methodologies are available for implementation. Typically, threat modeling has been implemented using one of five approaches independently: asset-centric, attacker-centric, software-centric, value and stakeholder-centric, and hybrid. Based on the volume of published online content, the methodologies discussed below are the most well known.
STRIDE was created in 1999 at Microsoft as a mnemonic for developers to find 'threats to our products'. [ 11 ] STRIDE can be used as a simple prompt or checklist, or in more structured approaches such as STRIDE per element. STRIDE, Patterns and Practices, and Asset/entry point were amongst the threat modeling approaches developed and published by Microsoft. References to "the" Microsoft methodology commonly mean STRIDE and Data Flow Diagrams.
The Process for Attack Simulation and Threat Analysis (PASTA) is a seven-step, risk-centric methodology [ 12 ] for aligning business objectives and technical requirements, taking into account compliance issues and business analysis. The intent of the method is to provide a dynamic threat identification, enumeration, and scoring process. Once the threat model is completed, security subject matter experts develop a detailed analysis of the identified threats. Finally, appropriate security controls can be enumerated. This methodology is intended to provide an attacker-centric view of the application and infrastructure from which defenders can develop an asset-centric mitigation strategy.
Researchers created this hybrid method to combine the positive elements of different methodologies, [ 13 ] [ 14 ] [ 15 ] including SQUARE, [ 16 ] the Security Cards, [ 17 ] and Personae Non Gratae. [ 18 ]
All IT-related threat modeling processes start with creating a visual representation of the application, infrastructure or both being analyzed. The application or infrastructure is decomposed into various elements to aid in the analysis. Once completed, the visual representation is used to identify and enumerate potential threats. Further analysis of the model regarding risks associated with identified threats, prioritization of threats, and enumeration of the appropriate mitigating controls depends on the methodological basis for the threat model process being utilized. Threat modeling approaches can focus on the system in use, attackers, or assets.
Most threat modeling approaches use data flow diagrams (DFD). DFDs were developed in the 1970s as a tool for system engineers to communicate, at a high level, how an application causes data to flow, be stored, and be manipulated by the infrastructure upon which the application runs. Traditionally, DFDs utilize only four unique symbols: data flows, data stores, processes, and interactors. In the early 2000s, an additional symbol, the trust boundary, was added to improve the usefulness of DFDs for threat modeling.
Once the application-infrastructure system is decomposed into its five elements, security experts consider each identified threat entry point against all known threat categories. Once the potential threats are identified, mitigating security controls can be enumerated or additional analysis can be performed.
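As a rough illustration of the "each element against each threat category" step, the sketch below enumerates STRIDE categories per DFD element type; the applicability map is a simplification of published STRIDE-per-element charts, and the element list is hypothetical.

```python
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

# Illustrative applicability map (a simplification of published STRIDE-per-element charts)
APPLIES = {
    "external interactor": {"Spoofing", "Repudiation"},
    "process": set(STRIDE),
    "data store": {"Tampering", "Repudiation", "Information disclosure", "Denial of service"},
    "data flow": {"Tampering", "Information disclosure", "Denial of service"},
}

elements = [("browser", "external interactor"), ("web app", "process"),
            ("user DB", "data store"), ("login request", "data flow")]

for name, kind in elements:
    for category in STRIDE:
        if category in APPLIES[kind]:
            print(f"{name:14} [{kind}] -> consider {category}")
```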
Threat modeling is being applied not only to IT but also to other areas such as vehicle, [ 26 ] [ 27 ] building and home automation . [ 28 ] In this context, threats to security and privacy like information about the inhabitant's movement profiles, working times, and health situations are modeled as well as physical or network-based attacks. The latter could make use of more and more available smart building features, i.e., sensors (e.g., to spy on the inhabitant) and actuators (e.g., to unlock doors). [ 28 ] | https://en.wikipedia.org/wiki/Threat_model |
A threatened species is any species (including animals , plants and fungi ) which is vulnerable to extinction in the near future. Species that are threatened are sometimes characterised by the population dynamics measure of critical depensation , a mathematical measure of biomass related to population growth rate . This quantitative metric is one method of evaluating the degree of endangerment without direct reference to human activity. [ 1 ]
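For illustration, one commonly used growth model exhibiting critical depensation (a strong Allee effect) can be written as follows; the specific functional form shown here is one textbook example, not the only formulation used in the literature.

```latex
% Illustrative growth model with critical depensation (strong Allee effect):
% the population declines whenever its size N falls below the critical level A.
\[
  \frac{\mathrm{d}N}{\mathrm{d}t}
    = r\,N\left(\frac{N}{A}-1\right)\left(1-\frac{N}{K}\right),
  \qquad 0 < A < K,
\]
% r : intrinsic growth rate
% K : carrying capacity
% A : critical threshold; dN/dt < 0 for 0 < N < A, so small populations collapse
```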
The International Union for Conservation of Nature (IUCN) is the foremost authority on threatened species, and treats threatened species not as a single category, but as a group of three categories, depending on the degree to which they are threatened: [ 2 ] : 8–11
Less-than-threatened categories are near threatened , least concern , and the no longer assigned category of conservation dependent . Species that have not been evaluated (NE), or do not have sufficient data ( data deficient ) also are not considered "threatened" by the IUCN.
Although threatened and vulnerable may be used interchangeably when discussing IUCN categories, the term threatened is generally used to refer to the three categories (critically endangered, endangered, and vulnerable), while vulnerable is used to refer to the least at risk of those three categories. They may be used interchangeably in most contexts however, as all vulnerable species are threatened species ( vulnerable is a category of threatened species ); and, as the more at-risk categories of threatened species (namely endangered and critically endangered ) must, by definition, also qualify as vulnerable species, all threatened species may also be considered vulnerable.
Threatened species are also referred to as red-listed species, as they are listed in the IUCN Red List of Threatened Species .
Subspecies , populations and stocks may also be classified as threatened.
The Commonwealth of Australia (federal government) has legislation for categorising and protecting endangered species, namely the Environment Protection and Biodiversity Conservation Act 1999 , which is known in short as the EPBC Act . This Act has six categories: extinct, extinct in the wild, critically endangered, endangered, vulnerable, and conservation dependent, as defined in Section 179 of the Act. [ 3 ] These could be summarised as: [ 4 ]
The EPBC Act also recognises and protects threatened ecosystems such as plant communities, and Ramsar Convention wetlands used by migratory birds . [ 4 ]
Lists of threatened species are drawn up under the Act and these lists are the primary reference to threatened species in Australia. The Species Profile and Threats Database (SPRAT) is a searchable online database about species and ecological communities listed under the EPBC Act . It provides information on what the species looks like, its population and distribution, habitat, movements, feeding, reproduction and taxonomic comments. [ 5 ]
A Threatened Mammal Index , publicly launched on 22 April 2020 and combined as of June 2020 with the Threatened Bird Index (created 2018 [ 6 ] ) as the Threatened Species Index , is a research collaboration of the National Environmental Science Program's Threatened Species Recovery Hub, the University of Queensland and BirdLife Australia . It does not show detailed data of individual species, but shows overall trends, and the data can be downloaded via a web-app "to allow trends for different taxonomic groups or regions to be explored and compared". [ 7 ] The Index uses data visualisation tools to show data clearly in graphic form, including a graph from 1985 to present of the main index, geographical representation, monitoring consistency and time series and species accumulation. [ 8 ] In April 2020 the Mammal Index reported that there had been a decline of more than a third of threatened mammal numbers in the 20 years between 1995 and 2016, but the data also show that targeted conservation efforts are working. The Threatened Mammal Index "is compiled from more than 400,000 individual surveys, and contains population trends for 57 of Australia's threatened or near-threatened terrestrial and marine mammal species". [ 6 ]
Individual states and territories of Australia are bound under the EPBC Act, but may also have legislation which gives further protection to certain species, for example Western Australia 's Wildlife Conservation Act 1950 . Some species, such as Lewin's rail ( Lewinia pectoralis ), are not listed as threatened species under the EPBC Act, but they may be recognised as threatened by individual states or territories.
Pests and weeds, climate change and habitat loss are some of the key threatening processes faced by native plants and animals listed by the Department of Planning, Industry and Environment of New South Wales . [ 9 ]
The German Federal Agency for Nature Conservation ( German : Bundesamt für Naturschutz , BfN) publishes a regional Red List for Germany of at least 48,000 animals and 24,000 plants and fungi. The categorization scheme is similar to that of the IUCN, but it adds a "warning list" and includes species endangered to an unknown extent, as well as rare species that are not endangered but are at high risk of extinction because of their small populations. [ 11 ]
Under the Endangered Species Act in the United States, "threatened" is defined as "any species which is likely to become an endangered species within the foreseeable future throughout all or a significant portion of its range". [ 12 ] It is the less protected of the two protected categories. The Bay checkerspot butterfly ( Euphydryas editha bayensis ) is an example of a threatened subspecies protected under the Endangered Species Act .
Within the U.S., state wildlife agencies have the authority under the ESA to manage species which are considered endangered or threatened within their state but not within all states, and which therefore are not included on the national list of endangered and threatened species. For example, the trumpeter swan ( Cygnus buccinator ) is threatened in the state of Minnesota , while large populations still remain in Canada and Alaska . [ 13 ] | https://en.wikipedia.org/wiki/Threatened_species |
The three-axis acceleration switch is a micromachined microelectromechanical systems (MEMS) sensor that detects whether an acceleration event has exceeded a predefined threshold. [ 1 ] It is a small, compact device, only 5mm by 5mm, and measures acceleration in the x, y, and z axes. [ 2 ] It was developed by the Army Research Laboratory for the purposes of traumatic brain injury (TBI) research and was first introduced in 2012 at the 25th International Conference on Micro Electro Mechanical Systems (MEMS). [ 1 ]
The three-axis acceleration switch was designed to obtain acceleration data more effectively than a conventional accelerometer in order to more accurately characterize the forces and shocks responsible for TBI. [ 2 ] While miniature accelerometers require a constant power draw, the three-axis acceleration switch only draws current when it senses an acceleration event, using less energy and allowing the use of smaller batteries. [ 1 ] The three-axis acceleration switch has been shown to exhibit an expected battery lifetime about 100 times longer than that of a digital accelerometer. In return, however, the acceleration switch has a lower resolution than a digital or analog accelerometer.
One potential application of the three-axis acceleration switch is in studying the head impacts of players in high-risk contact sports. [ 2 ] Due to the size of conventional accelerometers, measuring the acceleration requires the device to be implemented inside the player's helmet, which is designed to mitigate the collision forces and thus may not accurately reflect the true level of injury potential. In contrast, the miniature nature of the acceleration switch makes it easier for the switch to be affixed directly onto the participant's head. | https://en.wikipedia.org/wiki/Three-Axis_Acceleration_Switch |
A three-body force is a force that does not exist in a system of two objects but appears in a three-body system. In general, if the behaviour of a system of more than two objects cannot be described by the two-body interactions between all possible pairs, as a first approximation, the deviation is mainly due to a three-body force.
The fundamental strong interaction does exhibit such behaviour, the most important example being the stability experimentally observed for the helium-3 isotope, which can be described as a three-body quantum cluster of two protons and one neutron [PNP] in stable superposition. Direct evidence of a three-body force in helium-3 has been reported. [1] The existence of a stable [PNP] cluster calls into question models of the atomic nucleus that restrict nucleon interactions within shells to two-body phenomena. The three-nucleon interaction is fundamentally possible because gluons , the mediators of the strong interaction, can couple to themselves. In particle physics , the interactions between the three quarks that compose hadrons can be described in a diquark model, which might be equivalent to the hypothesis of a three-body force. There is growing evidence in the field of nuclear physics that three-body forces exist among the nucleons inside atomic nuclei for many different isotopes ( three-nucleon force ). | https://en.wikipedia.org/wiki/Three-body_force
In physics , specifically classical mechanics , the three-body problem is to take the initial positions and velocities (or momenta ) of three point masses orbiting each other in space and then calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation . [ 1 ]
Unlike the two-body problem , the three-body problem has no general closed-form solution , meaning there is no equation that always solves it. [ 1 ] When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions . Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods .
The three-body problem is a special case of the n -body problem . Historically, the first specific three-body problem to receive extended study was the one involving the Earth , the Moon , and the Sun . [ 2 ] In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles.
The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for vector positions {\displaystyle \ \mathbf {r} _{i}=(x_{i},y_{i},z_{i})\ } of three gravitationally interacting bodies with masses {\displaystyle m_{i}} :
{\displaystyle {\begin{aligned}{\ddot {\mathbf {r} }}_{1}&=-Gm_{2}{\frac {\left(\mathbf {r} _{1}-\mathbf {r} _{2}\right)}{\ \left|\mathbf {r} _{1}-\mathbf {r} _{2}\right|^{3}}}-Gm_{3}{\frac {\left(\mathbf {r} _{1}-\mathbf {r} _{3}\right)}{\ \left|\mathbf {r} _{1}-\mathbf {r} _{3}\right|^{3}}}\ ,\\{\ddot {\mathbf {r} }}_{2}&=-Gm_{3}{\frac {\left(\mathbf {r} _{2}-\mathbf {r} _{3}\right)}{\ \left|\mathbf {r} _{2}-\mathbf {r} _{3}\right|^{3}}}-Gm_{1}{\frac {\left(\mathbf {r} _{2}-\mathbf {r} _{1}\right)}{\ \left|\mathbf {r} _{2}-\mathbf {r} _{1}\right|^{3}}}\ ,\\{\ddot {\mathbf {r} }}_{3}&=-Gm_{1}{\frac {\left(\mathbf {r} _{3}-\mathbf {r} _{1}\right)}{\ \left|\mathbf {r} _{3}-\mathbf {r} _{1}\right|^{3}}}-Gm_{2}{\frac {\left(\mathbf {r} _{3}-\mathbf {r} _{2}\right)}{\ \left|\mathbf {r} _{3}-\mathbf {r} _{2}\right|^{3}}}~.\end{aligned}}} where {\displaystyle \ G\ } is the gravitational constant . As astronomer Juhan Frank describes, "These three second-order vector differential equations are equivalent to 18 first order scalar differential equations." [ 3 ] [ better source needed ] As June Barrow-Green notes with regard to an alternative presentation, if
{\displaystyle P_{i}} represent three particles with masses {\displaystyle m_{i}} , distances {\displaystyle \ P_{i}P_{j}=r_{ij}\ ,} and coordinates {\displaystyle \ q_{ij}\ } {\displaystyle \ (i,j=1,2,3)\ } in an inertial coordinate system ... the problem is described by nine second-order differential equations. [ 4 ] : 8
The problem can also be stated equivalently in the Hamiltonian formalism , in which case it is described by a set of 18 first-order differential equations, one for each component of the positions {\displaystyle \ \mathbf {r} _{i}\ } and momenta {\displaystyle \ \mathbf {p} _{i}\ } : [ citation needed ] [ 5 ]
{\displaystyle {\frac {\mathrm {d} \ \mathbf {r} _{i}}{\mathrm {d} \ t}}={\frac {\partial \ {\mathcal {H}}}{\partial \ \mathbf {p} _{i}}}\ ,\qquad {\frac {\mathrm {d} \ \mathbf {p} _{i}}{\mathrm {d} \ t}}=-{\frac {\partial \ {\mathcal {H}}}{\partial \ \mathbf {r} _{i}}}\ ,}
where {\displaystyle {\mathcal {H}}} is the Hamiltonian : [ citation needed ]
{\displaystyle {\mathcal {H}}\ =\ -{\frac {Gm_{1}m_{2}}{\left|\mathbf {r} _{1}-\mathbf {r} _{2}\right|}}\ -\ {\frac {Gm_{2}m_{3}}{\left|\mathbf {r} _{3}-\mathbf {r} _{2}\right|}}\ -\ {\frac {Gm_{3}m_{1}}{\left|\mathbf {r} _{3}-\mathbf {r} _{1}\right|}}\ +\ {\frac {\left|\mathbf {p} _{1}\right|^{2}}{2m_{1}}}\ +\ {\frac {\left|\mathbf {p} _{2}\right|^{2}}{2m_{2}}}\ +\ {\frac {\left|\mathbf {p} _{3}\right|^{2}}{2m_{3}}}~.}
In this case, {\displaystyle {\mathcal {H}}} is simply the total energy of the system, gravitational plus kinetic. [ citation needed ]
In the restricted three-body problem formulation, in the description of Barrow-Green, [ 4 ] : 11–14
two... bodies revolve around their centre of mass in circular orbits under the influence of their mutual gravitational attraction, and... form a two body system... [whose] motion is known. A third body (generally known as a planetoid), assumed massless with respect to the other two, moves in the plane defined by the two revolving bodies and, while being gravitationally influenced by them, exerts no influence of its own. [ 4 ] : 11
Per Barrow-Green, "[t]he problem is then to ascertain the motion of the third body." [ 4 ] : 11
That is to say, this two-body motion is taken to consist of circular orbits around the center of mass , and the planetoid is assumed to move in the plane defined by the circular orbits. [ clarification needed ] (That is, it is useful to consider the effective potential . [ clarification needed ] [ according to whom? ] ) With respect to a rotating reference frame , the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points , or move around them, for instance on a horseshoe orbit . [ citation needed ]
The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well since it accurately describes many real-world problems, the most important example being the Earth–Moon–Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem. [ 6 ]
Mathematically, the problem is stated as follows. [ citation needed ] Let {\displaystyle \ m_{1},m_{2}\ } be the masses of the two massive bodies, with (planar) coordinates {\displaystyle \ (x_{1},y_{1})\ } and {\displaystyle \ (x_{2},y_{2})\ ,} and let {\displaystyle \ (x,y)\ } be the coordinates of the planetoid. For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to {\displaystyle \ 1~.} Then, the motion of the planetoid is given by: [ citation needed ]
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} ^{2}x}{\mathrm {d} \ t^{2}}}=-m_{1}{\frac {(x-x_{1})}{r_{1}^{3}}}-m_{2}{\frac {(x-x_{2})}{r_{2}^{3}}}\ ,\\{\frac {\mathrm {d} ^{2}y}{\mathrm {d} \ t^{2}}}=-m_{1}{\frac {(y-y_{1})}{r_{1}^{3}}}-m_{2}{\frac {(y-y_{2})}{r_{2}^{3}}}\ ,\end{aligned}}}
where {\displaystyle \ r_{i}\equiv {\sqrt {(x-x_{i})^{2}+(y-y_{i})^{2}\;}}~.} [ citation needed ] In this form the equations of motion carry an explicit time dependence through the coordinates {\displaystyle \ x_{i}(t),y_{i}(t)\ ;} [ citation needed ] however, if the two bodies are uniformly rotating, this time dependence can be removed through a transformation to their rotating reference frame, which simplifies any subsequent analysis. [ original research? ] [ 7 ]
There is no general closed-form solution to the three-body problem. [ 1 ] In other words, it does not have a general solution that can be expressed in terms of a finite number of standard mathematical operations. Moreover, the motion of three bodies is generally non-repeating, except in special cases. [ 8 ]
However, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists an analytic solution to the three-body problem in the form of a Puiseux series , specifically a power series in terms of powers of t^{1/3} . [ 9 ] This series converges for all real t , except for initial conditions corresponding to zero angular momentum . In practice, the latter restriction is insignificant since initial conditions with zero angular momentum are rare, having Lebesgue measure zero.
An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the three-body problems. As is briefly discussed below, the only singularities in the three-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant).
Collisions of any number are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. But there is no criterion known to be put on the initial state in order to avoid collisions for the corresponding solution. So Sundman's strategy consisted of the following steps:
This finishes the proof of Sundman's theorem.
The corresponding series converges extremely slowly. That is, obtaining a value of meaningful precision requires so many terms that this solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, then the computations would involve at least 10^{8,000,000} terms. [ 10 ]
In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant.
In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses . These four families are the only known solutions for which there are explicit analytic formulae. In the special case of the circular restricted three-body problem , these solutions, viewed in a frame rotating with the primaries, become points called Lagrangian points and labeled L 1 , L 2 , L 3 , L 4 , and L 5 , with L 4 and L 5 being symmetric instances of Lagrange's solution.
In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem.
In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle , with the heaviest body at the right angle and the lightest at the smaller acute angle. Burrau [ 11 ] further investigated this problem in 1913. In 1967 Victor Szebehely and C. Frederick Peters established eventual escape of the lightest body for this problem using numerical integration, while at the same time finding a nearby periodic solution. [ 12 ]
In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Hénon–Hadjidemetriou family. In this family, the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions, two of the bodies follow the same path. [ 14 ]
In 1993, physicist Cris Moore at the Santa Fe Institute found a zero angular momentum solution with three equal masses moving around a figure-eight shape. [ 15 ] In 2000, mathematicians Alain Chenciner and Richard Montgomery proved its formal existence. [ 16 ] [ 17 ] The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which makes it possible for such orbits to be observed in the physical universe. But it has been argued that this is unlikely since the domain of stability is small. For instance, the probability of a binary–binary scattering event [ clarification needed ] resulting in a figure-8 orbit has been estimated to be a small fraction of a percent. [ 18 ]
In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem. [ 8 ] [ 14 ]
In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem. [ 19 ]
In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. [ 20 ] This was followed in 2018 by an additional 1,223 new solutions for a zero-angular-momentum system of unequal masses. [ 21 ]
In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. [ 22 ] The free-fall formulation starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forward and backward along an open "track".
In 2023, Ivan Hristov, Radoslava Hristova, Dmitrašinović and Kiyotaka Tanikawa published a search for periodic free-fall orbits of the three-body problem, limited to the equal-mass case, and found 12,409 distinct solutions. [ 23 ]
Using a computer, the problem may be solved to arbitrarily high precision using numerical integration . There have been attempts to create computer programs that numerically solve the three-body problem (and by extension, the n-body problem ) involving both electromagnetic and gravitational interactions, and incorporating modern theories of physics such as special relativity . [ 24 ] In addition, using the theory of random walks , an approximate probability of different outcomes may be computed. [ 25 ] [ 26 ]
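A minimal numerical-integration sketch in Python is shown below, using SciPy's general-purpose ODE solver on the Newtonian equations of motion given earlier; the unit masses, normalized constants and initial conditions are illustrative and do not correspond to any particular known orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                        # gravitational constant in normalized units
m = np.array([1.0, 1.0, 1.0])  # three unit masses (illustrative)

def rhs(t, y):
    # y = [x1,y1,z1, x2,y2,z2, x3,y3,z3, vx1,...,vz3]
    r = y[:9].reshape(3, 3)
    v = y[9:].reshape(3, 3)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[i] - r[j]
                a[i] -= G * m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

# Illustrative initial conditions (not a known periodic orbit)
r0 = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
v0 = np.array([[0.0, -0.3, 0.0], [0.0, 0.3, 0.0], [0.0, 0.0, 0.0]])
y0 = np.concatenate([r0.ravel(), v0.ravel()])

sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9, dense_output=True)
print("final positions:\n", sol.y[:9, -1].reshape(3, 3))
```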
The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his Philosophiæ Naturalis Principia Mathematica , in which Newton attempted to determine whether any long-term stability is possible, especially for a system like that of the Earth , the Moon , and the Sun , after having solved the two-body problem . [ 27 ] Guided by major Renaissance astronomers Nicolaus Copernicus , Tycho Brahe and Johannes Kepler , Newton introduced later generations to the beginning of the gravitational three-body problem. [ 28 ] In Proposition 66 of Book 1 of the Principia , and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory , the motion of the Moon under the gravitational influence of Earth and the Sun. [ 29 ] Later, this problem was also applied to other planets' interactions with the Earth and the Sun. [ 28 ]
The physical problem was first addressed by Amerigo Vespucci and subsequently by Galileo Galilei , as well as Simon Stevin , but they did not realize what they had contributed. Though Galileo determined that the speed of fall of all bodies changes uniformly and in the same way, he did not apply it to planetary motions. [ 28 ] In 1499, Vespucci used knowledge of the position of the Moon to determine his position in Brazil. [ 30 ] The problem became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea ; this was solved in practice by John Harrison 's invention of the marine chronometer . However, the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around Earth.
Jean le Rond d'Alembert and Alexis Clairaut , who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. [ 31 ] It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" ( French : Problème des trois Corps ) began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747. [ 32 ]
From the end of the 19th century to the early 20th century, the approach of solving the three-body problem with the use of short-range attractive two-body forces was developed by scientists, which gave P. F. Bedaque, H.-W. Hammer and U. van Kolck the idea of renormalizing the short-range three-body problem, providing a rare example of a renormalization group limit cycle at the beginning of the 21st century. [ 33 ] George William Hill worked on the restricted problem in the late 19th century with an application to the motion of Venus and Mercury . [ 34 ]
At the beginning of the 20th century, Karl Sundman approached the problem mathematically and systematically by providing a function-theoretic proof, valid for all values of time, that a solution exists. It was the first time scientists had theoretically solved the three-body problem. However, because the solution was not qualitative enough and converged too slowly to be practically applied, it still left some issues unresolved. [ 35 ] In the 1970s, an implication for the three-body problem arising from two-body forces was discovered by V. Efimov , and was named the Efimov effect . [ 36 ]
In 2017, Shijun Liao and Xiaoming Li applied a new strategy of numerical simulation for chaotic systems called the clean numerical simulation (CNS), with the use of a national supercomputer, to successfully gain 695 families of periodic solutions of the three-body system with equal mass. [ 37 ]
In 2019, Breen et al. announced a fast neural network solver for the three-body problem, trained using a numerical integrator. [ 38 ]
In September 2023, several possible solutions to the problem were reported to have been found. [ 39 ] [ 40 ]
The term "three-body problem" is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies.
A quantum-mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom , in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction . Like the gravitational three-body problem, the helium atom cannot be solved exactly. [ 41 ]
In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force that do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. [ 42 ] This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom. [ 42 ] [ 43 ]
Within the point vortex model , the motion of vortices in a two-dimensional ideal fluid is described by equations of motion that contain only first-order time derivatives. I.e. in contrast to Newtonian mechanics, it is the velocity and not the acceleration that is determined by their relative positions. As a consequence, the three-vortex problem is still integrable , [ 44 ] while at least four vortices are required to obtain chaotic behavior. [ 45 ] One can draw parallels between the motion of a passive tracer particle in the velocity field of three vortices and the restricted three-body problem of Newtonian mechanics. [ 46 ]
The gravitational three-body problem has also been studied using general relativity . Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole . However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required.
Even the full two-body problem (i.e. for arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity. [ 47 ]
The three-body problem is a special case of the n -body problem , which describes how n objects move under one of the physical forces, such as gravity . These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for n = 3 and by Qiudong Wang for n > 3 (see n -body problem for details). However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; [ 48 ] therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see n -body simulation ). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum n -body problem. Among classical physical systems, the n -body problem usually refers to a galaxy or to a cluster of galaxies ; planetary systems , such as stars , planets , and their satellites , can also be treated as n -body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory. | https://en.wikipedia.org/wiki/Three-body_problem |
The 3-center 4-electron (3c–4e) bond is a model used to explain bonding in certain hypervalent molecules such as tetratomic and hexatomic interhalogen compounds, sulfur tetrafluoride , the xenon fluorides , and the bifluoride ion. [ 1 ] [ 2 ] It is also known as the Pimentel–Rundle three-center model after the work published by George C. Pimentel in 1951, [ 3 ] which built on concepts developed earlier by Robert E. Rundle for electron-deficient bonding. [ 4 ] [ 5 ] An extended version of this model is used to describe the whole class of hypervalent molecules such as phosphorus pentafluoride and sulfur hexafluoride as well as multi-center π-bonding such as ozone and sulfur trioxide .
There are also molecules such as diborane (B 2 H 6 ) and dialane (Al 2 H 6 ) which have three-center two-electron (3c–2e) bonds.
While the term "hypervalent" was not introduced in the chemical literature until 1969, [ 6 ] Irving Langmuir and G. N. Lewis debated the nature of bonding in hypervalent molecules as early as 1921. [ 7 ] [ 8 ] While Lewis supported the viewpoint of expanded octet, invoking s-p-d hybridized orbitals and maintaining 2c–2e bonds between neighboring atoms, Langmuir instead opted for maintaining the octet rule , invoking an ionic basis for bonding in hypervalent compounds (see Hypervalent molecule , valence bond theory diagrams for PF 5 and SF 6 ). [ 9 ]
In a 1951 seminal paper, [ 3 ] Pimentel rationalized the bonding in hypervalent trihalide ions ( X − 3 , X = F, Br, Cl, I) via a molecular orbital (MO) description, building on the concept of the "half-bond" introduced by Rundle in 1947. [ 4 ] [ 5 ] In this model, two of the four electrons occupy an all in-phase bonding MO, while the other two occupy a non-bonding MO, leading to an overall bond order of 0.5 between adjacent atoms (see Molecular orbital description ).
More recent theoretical studies on hypervalent molecules support the Langmuir view, confirming that the octet rule serves as a good first approximation to describing bonding in the s- and p-block elements. [ 10 ] [ 11 ]
The σ molecular orbitals (MOs) of triiodide can be constructed by considering the in-phase and out-of-phase combinations of the central atom's p orbital (collinear with the bond axis) with the p orbitals of the peripheral atoms. [ 12 ] This exercise generates the diagram at right (Figure 1). Three molecular orbitals result from the combination of the three relevant atomic orbitals, with the four electrons occupying the two MOs lowest in energy – a bonding MO delocalized across all three centers, and a non-bonding MO localized on the peripheral centers. Using this model, one sidesteps the need to invoke hypervalent bonding considerations at the central atom, since the bonding orbital effectively consists of two 2-center-1-electron bonds (which together do not violate the octet rule), and the other two electrons occupy the non-bonding orbital.
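The qualitative picture above can be reproduced with a minimal Hückel-type calculation for three collinear p orbitals; the on-site and coupling parameters below are arbitrary illustrative values, not fitted to any real molecule.

```python
import numpy as np

# Hückel-type model for three collinear p orbitals (terminal–central–terminal).
# alpha: on-site energy, beta: nearest-neighbour coupling (beta < 0 by convention).
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta,  0.0],
              [beta,  alpha, beta],
              [0.0,   beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)   # eigenvalues returned in ascending order
labels = ["bonding", "non-bonding", "antibonding"]
for E, label, c in zip(energies, labels, coeffs.T):
    print(f"{label:12s} E = {E:+.3f}  coefficients = {np.round(c, 3)}")

# With four electrons, the bonding and non-bonding MOs are filled;
# the non-bonding MO has zero weight on the central atom,
# so the extra electron density sits on the terminal atoms.
```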
In the natural bond orbital viewpoint of 3c–4e bonding, the triiodide anion is constructed from the combination of the diiodine (I 2 ) σ molecular orbitals and an iodide (I − ) lone pair. The I − lone pair acts as a 2-electron donor, while the I 2 σ* antibonding orbital acts as a 2-electron acceptor. [ 12 ] Combining the donor and acceptor in in-phase and out-of-phase combinations results in the diagram depicted at right (Figure 2). Combining the donor lone pair with the acceptor σ* antibonding orbital results in an overall lowering in energy of the highest-occupied orbital (ψ 2 ). While the diagram depicted in Figure 2 shows the right-hand atom as the donor, an equivalent diagram can be constructed using the left-hand atom as the donor. This bonding scheme is succinctly summarized by the following two resonance structures: I—I···I − ↔ I − ···I—I (where "—" represents a single bond and "···" represents a "dummy bond" with formal bond order 0 whose purpose is only to indicate connectivity), which when averaged reproduces the I—I bond order of 0.5 obtained both from natural bond orbital analysis and from molecular orbital theory.
More recent theoretical investigations suggest the existence of a novel type of donor-acceptor interaction that may dominate in triatomic species with so-called "inverted electronegativity"; [ 13 ] that is, a situation in which the central atom is more electronegative than the peripheral atoms. Molecules of theoretical curiosity such as neon difluoride (NeF 2 ) and beryllium dilithide (BeLi 2 ) represent examples of inverted electronegativity. [ 13 ] As a result of unusual bonding situation, the donor lone pair ends up with significant electron density on the central atom, while the acceptor is the "out-of-phase" combination of the p orbitals on the peripheral atoms. This bonding scheme is depicted in Figure 3 for the theoretical noble gas dihalide NeF 2 .
The valence bond description and accompanying resonance structures A—B···C − ↔ A − ···B—C suggest that molecules exhibiting 3c–4e bonding can serve as models for studying the transition states of bimolecular nucleophilic substitution reactions . [ 12 ] | https://en.wikipedia.org/wiki/Three-center_four-electron_bond |
A three-center two-electron (3c–2e) bond is an electron-deficient chemical bond where three atoms share two electrons . The combination of three atomic orbitals form three molecular orbitals : one bonding, one non -bonding, and one anti -bonding. The two electrons go into the bonding orbital, resulting in a net bonding effect and constituting a chemical bond among all three atoms. In many common bonds of this type, the bonding orbital is shifted towards two of the three atoms instead of being spread equally among all three. Example molecules with 3c–2e bonds are the trihydrogen cation ( H + 3 ) and diborane ( B 2 H 6 ). In these two structures, the three atoms in each 3c–2e bond form an angular geometry, leading to a bent bond .
An extended version of the 3c–2e bond model features heavily in cluster compounds described by the polyhedral skeletal electron pair theory, such as boranes and carboranes . These molecules derive their stability from having a completely filled set of bonding molecular orbitals as outlined by Wade's rules .
The monomer BH 3 is unstable since the boron atom has an empty p-orbital. A B−H−B 3-center-2-electron bond is formed when a boron atom shares electrons with a B−H bond on another boron atom. The two electrons (corresponding to one bond) in a B−H−B bonding molecular orbital are spread out across three internuclear spaces. [ 1 ]
In diborane (B 2 H 6 ), there are two such 3c–2e bonds: two H atoms bridge the two B atoms, leaving two additional H atoms in ordinary B−H bonds on each B. As a result, the molecule achieves stability since each B participates in a total of four bonds and all bonding molecular orbitals are filled, although two of the four bonds are 3-center B−H−B bonds. The reported bond order for each B−H interaction in a bridge is 0.5, [ 2 ] so that the bridging B−H−B bonds are weaker and longer than the terminal B−H bonds, as shown by the bond lengths in the structural diagram.
Three-center, two-electron bonding is pervasive in organotransition metal chemistry. A celebrated family of compounds featuring such interactions is called agostic complexes .
This bonding pattern is also seen in trimethylaluminium , which forms a dimer Al 2 (CH 3 ) 6 with the carbon atoms of two of the methyl groups in bridging positions. This type of bond also occurs in carbon compounds, where it is sometimes referred to as hyperconjugation - another name for asymmetrical three-center two-electron bonds.
The first stable subvalent Be complex ever observed contains a three-center two-electron π-bond that consists of donor-acceptor interactions over the C-Be-C core of a Be(0)-carbene adduct. [ 4 ]
Carbocation rearrangement reactions occur through three-center bond transition states. Because the three center bond structures have about the same energy as carbocations, there is generally virtually no activation energy for these rearrangements so they occur with extraordinarily high rates.
Carbonium ions such as ethanium C 2 H + 7 have three-center two-electron bonds. Perhaps the best known and studied structure of this sort is the 2-Norbornyl cation . | https://en.wikipedia.org/wiki/Three-center_two-electron_bond |
The three-click rule or three click rule is an unofficial web design rule concerning the design of website navigation. It suggests that a user of a website should be able to find any information with no more than three mouse clicks. [ 1 ] It is based on the belief that users of a site will become frustrated and often leave if they cannot find the information within the three clicks. [ 2 ]
One of the earliest mentions of the three click rule comes from Jeffrey Zeldman , who wrote in Taking Your Talent to the Web (2001), that the Three-Click Rule is "based on the way people use the Web" and "the rule can help you create sites with intuitive, logical hierarchical structures". [ 3 ] Although there is little analytical evidence that this is the case, it is a commonly held belief amongst designers that the rule is part of a good system of navigation. Critics of the rule suggest that the number of clicks is not as important as the success of the clicks or information sent. [ 4 ]
The principle of the "three-click rule" is often used to test the user-friendliness of a program or application. The implementation of the rule is evident in the design of modern-day operating systems and applications, where users can go from starting the computer or app to completing a desired task in three clicks or fewer. [ 5 ]
In 2024, the Federal Trade Commission announced a "click-to-cancel" rule that would require online sellers to simplify the process for users to cancel services. This involved both transparent communication around cancellation and simplifying the user experience of canceling an online service. [ 6 ]
The three click rule has been challenged by usability test results, which have shown that the number of clicks needed to access the desired information affects neither user satisfaction, nor success rate. [ 7 ] [ 8 ]
On eCommerce websites, the rule can often be detrimental: to adhere to it, products on offer to customers must be grouped into categories that are far too large to be easily browsed. | https://en.wikipedia.org/wiki/Three-click_rule
Three-dimensional X-ray diffraction ( 3DXRD ) is a microscopy technique using hard X-rays (with energy in the 30-100 keV range) to investigate the internal structure of polycrystalline materials in three dimensions. [ 1 ] [ 2 ] For a given sample, 3DXRD returns the shape, juxtaposition, and orientation of the crystallites ( "grains" ) it is made of. 3DXRD allows investigating micrometer- to millimetre-sized samples with resolution ranging from hundreds of nanometers to micrometers. Other techniques employing X-rays to investigate the internal structure of polycrystalline materials include X-ray diffraction contrast tomography (DCT) [ 3 ] and high energy X-ray diffraction (HEDM). [ 4 ]
Compared with destructive techniques, e.g. three-dimensional electron backscatter diffraction (3D EBSD), [ 5 ] with which the sample is serially sectioned and imaged, 3DXRD and similar X-ray nondestructive techniques have the following advantages:
3DXRD measurements are performed using various experimental geometries. The classical 3DXRD setup is similar to the conventional tomography setting used at synchrotrons: [ 6 ] the sample, mounted on a rotation stage, is illuminated using a quasi-parallel monochromatic X-ray beam. Each time a certain grain within the sample satisfies the Bragg condition , a diffracted beam is generated. This signal is transmitted through the sample and collected by two-dimensional detectors. Since different grains satisfy the Bragg condition at different angles, the sample is rotated to probe the complete sample structure. Crucial for 3DXRD is the idea of mimicking a three-dimensional detector by positioning a number of two-dimensional detectors at different distances from the centre of rotation of the sample, and exposing these either simultaneously (many detectors are semi-transparent to hard X-rays) or at different times.
A 3DXRD microscope is installed at the Materials Science beamline [ 7 ] of the ESRF .
To determine the crystallographic orientation of the grains in the considered sample, the following software packages are in use: Fable [ 8 ] and GrainSpotter. [ 9 ] Reconstructing the 3D shape of the grains is nontrivial, and several approaches are available to do so, based respectively on simple back-projection, forward projection, the algebraic reconstruction technique, and Monte Carlo method-based reconstruction. [ 10 ]
With 3DXRD, it is possible to study in situ the time evolution of materials under different conditions. Among others, the technique has been used to map the elastic strains and stresses in a pre-strained nickel-titanium wire. [ 11 ]
The scientists involved in developing 3DXRD contributed to the development of three other three-dimensional non-destructive techniques for the material sciences, respectively using electrons and neutrons as a probe: three-dimensional orientation mapping in the transmission electron microscope (3D-OMiTEM), [ 12 ] time-of-flight 3D neutron diffraction for multigrain crystallography (ToF 3DND) [ 13 ] [ 14 ] and Laue 3D neutron diffraction (Laue3DND). [ 15 ]
Using a system of lenses, the synchrotron technique dark-field X-ray microscopy (DFXRM) [ 16 ] extends the capabilities of 3DXRD, making it possible to focus on a deeply embedded single grain and to reconstruct its 3D structure and its crystalline properties. DFXRM is under development at the European Synchrotron Radiation Facility ( ESRF ), beamline ID06. [ 17 ]
In a laboratory setting, 3D grain maps using X-rays as a probe can be obtained using laboratory diffraction contrast tomography (LabDCT), a technique derived from 3DXRD. [ 18 ] | https://en.wikipedia.org/wiki/Three-dimensional_X-ray_diffraction |
Three-dimensional beamforming ( 3DBF ), full-dimension MIMO or tilt angle adaptation is an interference coordination method in cellular networks and radar systems which brings significant improvements in comparison with conventional 2D beamforming techniques. Most beamforming schemes currently employed in wireless cellular networks control the beam pattern radiation in the horizontal plane. In contrast to such two-dimensional beamforming (2DBF), 3DBF adapts the radiation beam pattern in both the elevation and azimuth planes to provide more degrees of freedom in supporting users. [ 1 ] By utilizing angle-of-arrival (AoA) information for users, provided by suitable antenna hardware such as a sector antenna or planar array operating in both the elevation and azimuth planes, and by estimating the direction of arrival (DoA) of each user's signal, the base station can distinguish different users through proper beamforming and can also steer the array's beam in a desired direction that optimizes some preferred performance metric of the network.
Depending on the way that the antenna downtilt is changed, 3DBF can be classified into two categories: [ 1 ]
| https://en.wikipedia.org/wiki/Three-dimensional_beamforming
Three-dimensional losses and correlation in turbomachinery refers to the measurement of flow fields in three dimensions, where measuring the loss of smoothness of flow, and the resulting inefficiencies, becomes difficult, unlike two-dimensional losses, where the mathematical complexity is substantially less.
Three-dimensionality takes into account large pressure gradients in every direction, design/curvature of blades, shock waves, heat transfer, cavitation , and viscous effects , which generate secondary flow , vortices, tip leakage vortices, and other effects that interrupt smooth flow and cause loss of efficiency. Viscous effects in turbomachinery block flow by the formation of viscous layers around blade profiles, which affects pressure rise and fall and reduces the effective area of a flow field. Interaction between these effects increases rotor instability and decreases the efficiency of turbomachinery.
In calculating three-dimensional losses, every element affecting a flow path is taken into account, such as axial spacing between vane and blade rows, end-wall curvature, radial distribution of pressure gradient, hub/tip ratio, dihedral, lean, tip clearance, flare, aspect ratio, skew, sweep, platform cooling holes, surface roughness, and off-take bleeds. Associated with blade profiles are parameters such as camber distribution, stagger angle, blade spacing, blade camber, chord, surface roughness, leading- and trailing-edge radii, and maximum thickness.
Two-dimensional losses are readily evaluated using the Navier-Stokes equations, but three-dimensional losses are difficult to evaluate, so correlations are used; even these are difficult to establish with so many parameters. Correlations based on geometric similarity have therefore been developed in many industries, in the form of charts, graphs, data statistics, and performance data.
Three-dimensional losses are generally classified as:
Three-dimensional quartz phenolic ( 3DQP ) [ 1 ] is a phenolic -based material composed of a quartz cloth material impregnated with a phenolic resin and hot-pressed. When cured, 3DQP can be machined in the same way as metals and is tough and fire-resistant. 3DQP is used for the manufacture of nuclear weapon re-entry vehicles (RV).
The quartz material 'hardens' the RV, protecting the nuclear warhead against high-energy neutrons emitted by exo-atmospheric anti-ballistic missile (ABM) bursts before re-entry. [ 2 ]
3DQP was first used on the British Chevaline improved front end ( IFE ) for the Royal Navy 's UGM-27 Polaris system that was in service from 1982 to 1996, when it was replaced by Trident D5 .
A licence to manufacture 3DQP in the US was acquired from the British Government and production was undertaken by AVCO , one of the two suppliers of US RVs, the other being General Electric . First production examples of the Chevaline ReBs were manufactured by AVCO, now part of Textron . Subsequently, production was undertaken in the UK at the Royal Ordnance Factory Burghfield using quartz material purchased from France. [ 3 ] Documents declassified in 2013 describe in some detail the process whereby a quartz two-dimensional cloth is hand woven from quartz threads with conventional wefts and warps within a stainless steel matrix before being impregnated under pressure with a phenolic resin. A third dimension was then woven through the cavities created by removal of the stainless steel matrix before these threads were impregnated with phenolic resin inside an autoclave pressurised at up to 2000 psi. The resulting material was then lathe-turned to the ReB profile, internally and externally. [ 4 ]
The supply by AVCO of test samples of 3DQP to France without UK permission caused friction between the British Government and AVCO, and action in the US courts by the British government. [ 5 ] | https://en.wikipedia.org/wiki/Three-dimensional_quartz_phenolic |
In geometry , a three-dimensional space ( 3D space , 3-space or, rarely, tri-dimensional space ) is a mathematical space in which three values ( coordinates ) are required to determine the position of a point . Most commonly, it is the three-dimensional Euclidean space , that is, the Euclidean space of dimension three, which models physical space . More general three-dimensional spaces are called 3-manifolds .
The term may also refer colloquially to a subset of space, a three-dimensional region (or 3D domain ), [ 1 ] a solid figure .
Technically, a tuple of n numbers can be understood as the Cartesian coordinates of a location in an n -dimensional Euclidean space. The set of these n -tuples is commonly denoted R n , {\displaystyle \mathbb {R} ^{n},} and can be identified with the pair formed by an n -dimensional Euclidean space and a Cartesian coordinate system .
When n = 3 , this space is called the three-dimensional Euclidean space (or simply "Euclidean space" when the context is clear). [ 2 ] In classical physics , it serves as a model of the physical universe , in which all known matter exists. When relativity theory is considered, it can be considered a local subspace of space-time . [ 3 ] While this space remains the most compelling and useful way to model the world as it is experienced, [ 4 ] it is only one example of a 3-manifold. In this classical example, when the three values refer to measurements in different directions ( coordinates ), any three directions can be chosen, provided that these directions do not lie in the same plane . Furthermore, if these directions are pairwise perpendicular , the three values are often labeled by the terms width /breadth , height /depth , and length .
Books XI to XIII of Euclid's Elements dealt with three-dimensional geometry. Book XI develops notions of orthogonality and parallelism of lines and planes, and defines solids including parallelepipeds, pyramids, prisms, spheres, octahedra, icosahedra and dodecahedra. Book XII develops notions of similarity of solids. Book XIII describes the construction of the five regular Platonic solids in a sphere.
In the 17th century, three-dimensional space was described with Cartesian coordinates , with the advent of analytic geometry developed by René Descartes in his work La Géométrie and Pierre de Fermat in the manuscript Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci), which was unpublished during Fermat's lifetime. However, only Fermat's work dealt with three-dimensional space.
In the 19th century, developments of the geometry of three-dimensional space came with William Rowan Hamilton 's development of the quaternions . In fact, it was Hamilton who coined the terms scalar and vector , and they were first defined within his geometric framework for quaternions . Three dimensional space could then be described by quaternions q = a + u i + v j + w k {\displaystyle q=a+ui+vj+wk} which had vanishing scalar component, that is, a = 0 {\displaystyle a=0} . While not explicitly studied by Hamilton, this indirectly introduced notions of basis, here given by the quaternion elements i , j , k {\displaystyle i,j,k} , as well as the dot product and cross product , which correspond to (the negative of) the scalar part and the vector part of the product of two vector quaternions.
It was not until Josiah Willard Gibbs that these two products were identified in their own right, and the modern notation for the dot and cross product were introduced in his classroom teaching notes, found also in the 1901 textbook Vector Analysis written by Edwin Bidwell Wilson based on Gibbs' lectures.
Also during the 19th century came developments in the abstract formalism of vector spaces, with the work of Hermann Grassmann and Giuseppe Peano , the latter of whom first gave the modern definition of vector spaces as an algebraic structure.
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin , the point at which they cross. They are usually labeled x , y , and z . Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers , each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes. [ 5 ]
Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates , though there are an infinite number of possible methods. For more, see Euclidean space .
Below are images of the above-mentioned systems.
Two distinct points always determine a (straight) line . Three distinct points are either collinear or determine a unique plane . On the other hand, four distinct points can either be collinear, coplanar , or determine the entire space.
Two distinct lines can either intersect, be parallel or be skew . Two parallel lines, or two intersecting lines , lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane.
Two distinct planes can either meet in a common line or are parallel (i.e., do not meet). Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common. In the last case, the three lines of intersection of each pair of planes are mutually parallel.
A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line.
A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes. In terms of Cartesian coordinates, the points of a hyperplane satisfy a single linear equation , so planes in this 3-space are described by linear equations. A line can be described by a pair of independent linear equations—each representing a plane having this line as a common intersection.
Varignon's theorem states that the midpoints of any quadrilateral in R 3 {\displaystyle \mathbb {R} ^{3}} form a parallelogram , and hence are coplanar.
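As a quick numerical illustration of Varignon's theorem, a minimal Python sketch (the four vertices below are an arbitrary, non-planar example, not taken from the text):

```python
import numpy as np

# Varignon's theorem: the midpoints of the sides of any quadrilateral in R^3
# form a parallelogram.  The vertices below are an arbitrary example.
P = [np.array(v, dtype=float) for v in
     [(0, 0, 0), (4, 1, 2), (5, 5, -1), (1, 3, 7)]]

M = [(P[i] + P[(i + 1) % 4]) / 2 for i in range(4)]   # side midpoints

# Opposite sides of the midpoint quadrilateral are equal as vectors,
# so the figure is a parallelogram (and hence planar).
print(np.allclose(M[1] - M[0], M[2] - M[3]))   # True
print(np.allclose(M[2] - M[1], M[3] - M[0]))   # True
```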
A sphere in 3-space (also called a 2-sphere because it is a 2-dimensional object) consists of the set of all points in 3-space at a fixed distance r from a central point P . The solid enclosed by the sphere is called a ball (or, more precisely a 3-ball ).
The volume of the ball is given by
V = 4 3 π r 3 , {\displaystyle V={\frac {4}{3}}\pi r^{3},} and the surface area of the sphere is A = 4 π r 2 . {\displaystyle A=4\pi r^{2}.} Another type of sphere arises from a 4-ball, whose three-dimensional surface is the 3-sphere : points equidistant to the origin of the Euclidean space R 4 . If a point has coordinates P ( x , y , z , w ) , then x 2 + y 2 + z 2 + w 2 = 1 {\displaystyle x^{2}+y^{2}+z^{2}+w^{2}=1} characterizes those points on the unit 3-sphere centered at the origin.
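A minimal numerical sketch of these two formulas (the radius value is an arbitrary example):

```python
import math

# Ball volume and sphere surface area for an arbitrary example radius.
r = 2.0
volume = 4.0 / 3.0 * math.pi * r ** 3    # V = 4/3 pi r^3  ~ 33.51
area = 4.0 * math.pi * r ** 2            # A = 4 pi r^2    ~ 50.27
print(volume, area)
```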
This 3-sphere is an example of a 3-manifold: a space which 'looks locally' like 3-D space. In precise topological terms, each point of the 3-sphere has a neighborhood which is homeomorphic to an open subset of 3-D space.
In three dimensions, there are nine regular polytopes: the five convex Platonic solids and the four nonconvex Kepler-Poinsot polyhedra .
A surface generated by revolving a plane curve about a fixed line in its plane as an axis is called a surface of revolution . The plane curve is called the generatrix of the surface. A section of the surface, made by intersecting the surface with a plane that is perpendicular (orthogonal) to the axis, is a circle.
Simple examples occur when the generatrix is a line. If the generatrix line intersects the axis line, the surface of revolution is a right circular cone with vertex (apex) the point of intersection. However, if the generatrix and axis are parallel, then the surface of revolution is a circular cylinder .
In analogy with the conic sections , the set of points whose Cartesian coordinates satisfy the general equation of the second degree, namely, A x 2 + B y 2 + C z 2 + F x y + G y z + H x z + J x + K y + L z + M = 0 , {\displaystyle Ax^{2}+By^{2}+Cz^{2}+Fxy+Gyz+Hxz+Jx+Ky+Lz+M=0,} where A , B , C , F , G , H , J , K , L and M are real numbers and not all of A , B , C , F , G and H are zero, is called a quadric surface . [ 6 ]
There are six types of non-degenerate quadric surfaces:
The degenerate quadric surfaces are the empty set, a single point, a single line, a single plane, a pair of planes or a quadratic cylinder (a surface consisting of a non-degenerate conic section in a plane π and all the lines of R 3 through that conic that are normal to π ). [ 6 ] Elliptic cones are sometimes considered to be degenerate quadric surfaces as well.
Both the hyperboloid of one sheet and the hyperbolic paraboloid are ruled surfaces , meaning that they can be made up from a family of straight lines. In fact, each has two families of generating lines; the members of each family are disjoint, and each member of one family intersects, with just one exception, every member of the other family. [ 7 ] Each family is called a regulus .
Another way of viewing three-dimensional space is found in linear algebra , where the idea of independence is crucial. Space has three dimensions because the length of a box is independent of its width or breadth. In the technical language of linear algebra, space is three-dimensional because every point in space can be described by a linear combination of three independent vectors .
A vector can be pictured as an arrow. The vector's magnitude is its length, and its direction is the direction the arrow points. A vector in R 3 {\displaystyle \mathbb {R} ^{3}} can be represented by an ordered triple of real numbers. These numbers are called the components of the vector.
The dot product of two vectors A = [ A 1 , A 2 , A 3 ] and B = [ B 1 , B 2 , B 3 ] is defined as: [ 8 ] A ⋅ B = A 1 B 1 + A 2 B 2 + A 3 B 3 . {\displaystyle \mathbf {A} \cdot \mathbf {B} =A_{1}B_{1}+A_{2}B_{2}+A_{3}B_{3}.}
The magnitude of a vector A is denoted by || A || . The dot product of a vector A = [ A 1 , A 2 , A 3 ] with itself is A ⋅ A = ‖ A ‖ 2 = A 1 2 + A 2 2 + A 3 2 , {\displaystyle \mathbf {A} \cdot \mathbf {A} =\|\mathbf {A} \|^{2}=A_{1}^{2}+A_{2}^{2}+A_{3}^{2},} which gives ‖ A ‖ = A 1 2 + A 2 2 + A 3 2 , {\displaystyle \|\mathbf {A} \|={\sqrt {A_{1}^{2}+A_{2}^{2}+A_{3}^{2}}},} the formula for the Euclidean length of the vector.
Without reference to the components of the vectors, the dot product of two non-zero Euclidean vectors A and B is given by [ 9 ] A ⋅ B = ‖ A ‖ ‖ B ‖ cos ⁡ θ , {\displaystyle \mathbf {A} \cdot \mathbf {B} =\|\mathbf {A} \|\,\|\mathbf {B} \|\cos \theta ,}
where θ is the angle between A and B .
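A short numerical sketch of the dot product, the Euclidean length, and the angle formula above (the two vectors are arbitrary examples):

```python
import numpy as np

# Dot product, Euclidean length, and the angle formula A.B = |A||B| cos(theta).
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, -1.0, 2.0])

dot = A @ B                                   # 1*4 + 2*(-1) + 3*2 = 8
length_A = np.sqrt(A @ A)                     # ||A|| = sqrt(14)
theta = np.arccos(dot / (np.linalg.norm(A) * np.linalg.norm(B)))
print(dot, length_A, np.degrees(theta))
```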
The cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. The cross product A × B of the vectors A and B is a vector that is perpendicular to both and therefore normal to the plane containing them. It has many applications in mathematics, physics , and engineering .
In function language, the cross product is a function × : R 3 × R 3 → R 3 {\displaystyle \times :\mathbb {R} ^{3}\times \mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}} .
The components of the cross product are A × B = [ A 2 B 3 − B 2 A 3 , A 3 B 1 − B 3 A 1 , A 1 B 2 − B 1 A 2 ] {\displaystyle \mathbf {A} \times \mathbf {B} =[A_{2}B_{3}-B_{2}A_{3},A_{3}B_{1}-B_{3}A_{1},A_{1}B_{2}-B_{1}A_{2}]} , and can also be written in components, using Einstein summation convention as ( A × B ) i = ε i j k A j B k {\displaystyle (\mathbf {A} \times \mathbf {B} )_{i}=\varepsilon _{ijk}A_{j}B_{k}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the Levi-Civita symbol . It has the property that A × B = − B × A {\displaystyle \mathbf {A} \times \mathbf {B} =-\mathbf {B} \times \mathbf {A} } .
Its magnitude is related to the angle θ {\displaystyle \theta } between A {\displaystyle \mathbf {A} } and B {\displaystyle \mathbf {B} } by the identity ‖ A × B ‖ = ‖ A ‖ ⋅ ‖ B ‖ ⋅ | sin θ | . {\displaystyle \left\|\mathbf {A} \times \mathbf {B} \right\|=\left\|\mathbf {A} \right\|\cdot \left\|\mathbf {B} \right\|\cdot \left|\sin \theta \right|.}
The space and product form an algebra over a field , which is neither commutative nor associative , but is a Lie algebra with the cross product being the Lie bracket. Specifically, the space together with the product, ( R 3 , × ) {\displaystyle (\mathbb {R} ^{3},\times )} is isomorphic to the Lie algebra of three-dimensional rotations, denoted s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . In order to satisfy the axioms of a Lie algebra, instead of associativity the cross product satisfies the Jacobi identity . For any three vectors A , B {\displaystyle \mathbf {A} ,\mathbf {B} } and C {\displaystyle \mathbf {C} }
A × ( B × C ) + B × ( C × A ) + C × ( A × B ) = 0 {\displaystyle \mathbf {A} \times (\mathbf {B} \times \mathbf {C} )+\mathbf {B} \times (\mathbf {C} \times \mathbf {A} )+\mathbf {C} \times (\mathbf {A} \times \mathbf {B} )=0}
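A short numerical sketch checking the component formula for the cross product, the magnitude identity, anticommutativity, and the Jacobi identity (the three vectors are arbitrary examples):

```python
import numpy as np

# Cross product: component formula, magnitude identity, anticommutativity,
# and the Jacobi identity, checked on arbitrary example vectors.
A = np.array([1.0, 2.0, 3.0])
B = np.array([0.0, 1.0, 4.0])
C = np.array([2.0, -1.0, 1.0])

AxB = np.array([A[1]*B[2] - B[1]*A[2],
                A[2]*B[0] - B[2]*A[0],
                A[0]*B[1] - B[0]*A[1]])
assert np.allclose(AxB, np.cross(A, B))
assert np.allclose(np.cross(A, B), -np.cross(B, A))   # A x B = -(B x A)

cos_t = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
sin_t = np.sqrt(1.0 - cos_t ** 2)
assert np.isclose(np.linalg.norm(AxB),
                  np.linalg.norm(A) * np.linalg.norm(B) * sin_t)

jacobi = (np.cross(A, np.cross(B, C))
          + np.cross(B, np.cross(C, A))
          + np.cross(C, np.cross(A, B)))
assert np.allclose(jacobi, 0.0)                        # Jacobi identity
```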
One can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions . [ 10 ]
It can be useful to describe three-dimensional space as a three-dimensional vector space V {\displaystyle V} over the real numbers. This differs from R 3 {\displaystyle \mathbb {R} ^{3}} in a subtle way. By definition, there exists a basis B = { e 1 , e 2 , e 3 } {\displaystyle {\mathcal {B}}=\{e_{1},e_{2},e_{3}\}} for V {\displaystyle V} . Any choice of such a basis corresponds to an isomorphism between V {\displaystyle V} and R 3 {\displaystyle \mathbb {R} ^{3}} . However, there is no 'preferred' or 'canonical basis' for V {\displaystyle V} .
On the other hand, there is a preferred basis for R 3 {\displaystyle \mathbb {R} ^{3}} , which is due to its description as a Cartesian product of copies of R {\displaystyle \mathbb {R} } , that is, R 3 = R × R × R {\displaystyle \mathbb {R} ^{3}=\mathbb {R} \times \mathbb {R} \times \mathbb {R} } . This allows the definition of canonical projections, π i : R 3 → R {\displaystyle \pi _{i}:\mathbb {R} ^{3}\rightarrow \mathbb {R} } , where 1 ≤ i ≤ 3 {\displaystyle 1\leq i\leq 3} . For example, π 1 ( x 1 , x 2 , x 3 ) = x 1 {\displaystyle \pi _{1}(x_{1},x_{2},x_{3})=x_{1}} . This then allows the definition of the standard basis B Standard = { E 1 , E 2 , E 3 } {\displaystyle {\mathcal {B}}_{\text{Standard}}=\{E_{1},E_{2},E_{3}\}} defined by π i ( E j ) = δ i j {\displaystyle \pi _{i}(E_{j})=\delta _{ij}} where δ i j {\displaystyle \delta _{ij}} is the Kronecker delta . Written out in full, the standard basis is E 1 = ( 1 0 0 ) , E 2 = ( 0 1 0 ) , E 3 = ( 0 0 1 ) . {\displaystyle E_{1}={\begin{pmatrix}1\\0\\0\end{pmatrix}},E_{2}={\begin{pmatrix}0\\1\\0\end{pmatrix}},E_{3}={\begin{pmatrix}0\\0\\1\end{pmatrix}}.}
Therefore R 3 {\displaystyle \mathbb {R} ^{3}} can be viewed as the abstract vector space, together with the additional structure of a choice of basis. Conversely, V {\displaystyle V} can be obtained by starting with R 3 {\displaystyle \mathbb {R} ^{3}} and 'forgetting' the Cartesian product structure, or equivalently the standard choice of basis.
As opposed to a general vector space V {\displaystyle V} , the space R 3 {\displaystyle \mathbb {R} ^{3}} is sometimes referred to as a coordinate space. [ 11 ]
Physically, it is conceptually desirable to use the abstract formalism in order to assume as little structure as possible if it is not given by the parameters of a particular problem. For example, in a problem with rotational symmetry, working with the more concrete description of three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} assumes a choice of basis, corresponding to a set of axes. But in rotational symmetry, there is no reason why one set of axes should be preferred over, say, the same set of axes rotated arbitrarily. Stated another way, a preferred choice of axes breaks the rotational symmetry of physical space.
Computationally, it is necessary to work with the more concrete description R 3 {\displaystyle \mathbb {R} ^{3}} in order to do concrete computations.
A more abstract description still is to model physical space as a three-dimensional affine space E ( 3 ) {\displaystyle E(3)} over the real numbers. This is unique up to affine isomorphism. It is sometimes referred to as three-dimensional Euclidean space. Just as the vector space description came from 'forgetting the preferred basis' of R 3 {\displaystyle \mathbb {R} ^{3}} , the affine space description comes from 'forgetting the origin' of the vector space. Euclidean spaces are sometimes called Euclidean affine spaces for distinguishing them from Euclidean vector spaces. [ 12 ]
This is physically appealing as it makes the translation invariance of physical space manifest. A preferred origin breaks the translational invariance.
The above discussion does not involve the dot product . The dot product is an example of an inner product . Physical space can be modelled as a vector space which additionally has the structure of an inner product. The inner product defines notions of length and angle (and therefore in particular the notion of orthogonality). For any inner product, there exist bases under which the inner product agrees with the dot product, but again, there are many different possible bases, none of which are preferred. They differ from one another by a rotation, an element of the group of rotations SO(3) .
In a rectangular coordinate system, the gradient of a (differentiable) function f : R 3 → R {\displaystyle f:\mathbb {R} ^{3}\rightarrow \mathbb {R} } is given by ∇ f = ∂ f ∂ x i + ∂ f ∂ y j + ∂ f ∂ z k {\displaystyle \nabla f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} }
and in index notation is written
( ∇ f ) i = ∂ i f . {\displaystyle (\nabla f)_{i}=\partial _{i}f.}
The divergence of a (differentiable) vector field F = U i + V j + W k , that is, a function F : R 3 → R 3 {\displaystyle \mathbf {F} :\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}} , is equal to the scalar-valued function: ∇ ⋅ F = ∂ U ∂ x + ∂ V ∂ y + ∂ W ∂ z . {\displaystyle \nabla \cdot \mathbf {F} ={\frac {\partial U}{\partial x}}+{\frac {\partial V}{\partial y}}+{\frac {\partial W}{\partial z}}.}
In index notation, with Einstein summation convention this is ∇ ⋅ F = ∂ i F i . {\displaystyle \nabla \cdot \mathbf {F} =\partial _{i}F_{i}.}
Expanded in Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), the curl ∇ × F is, for F composed of [ F x , F y , F z ]: ∇ × F = | i j k ∂ ∂ x ∂ ∂ y ∂ ∂ z F x F y F z | {\displaystyle \nabla \times \mathbf {F} ={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}}
where i , j , and k are the unit vectors for the x -, y -, and z -axes, respectively. This expands as follows: [ 13 ] ∇ × F = ( ∂ F z ∂ y − ∂ F y ∂ z ) i + ( ∂ F x ∂ z − ∂ F z ∂ x ) j + ( ∂ F y ∂ x − ∂ F x ∂ y ) k {\displaystyle \nabla \times \mathbf {F} =\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} }
In index notation, with Einstein summation convention this is ( ∇ × F ) i = ϵ i j k ∂ j F k , {\displaystyle (\nabla \times \mathbf {F} )_{i}=\epsilon _{ijk}\partial _{j}F_{k},} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the totally antisymmetric symbol, the Levi-Civita symbol .
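For readers who wish to experiment with these operators, a minimal symbolic sketch using SymPy's vector module; the scalar field f and vector field F below are arbitrary examples, not taken from the text:

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

# Symbolic gradient, divergence and curl in Cartesian coordinates.
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * y + z                     # example scalar field
F = x*y*N.i + y*z*N.j + z*x*N.k      # example vector field

print(gradient(f))       # (2xy) i + (x^2) j + (1) k
print(divergence(F))     # x + y + z
print(curl(F))           # (-y) i + (-z) j + (-x) k
```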
For some scalar field f : U ⊆ R n → R , the line integral along a piecewise smooth curve C ⊂ U is defined as ∫ C f d s = ∫ a b f ( r ( t ) ) | r ′ ( t ) | d t , {\displaystyle \int _{C}f\,ds=\int _{a}^{b}f(\mathbf {r} (t))\,|\mathbf {r} '(t)|\,dt,}
where r : [a, b] → C is an arbitrary bijective parametrization of the curve C such that r ( a ) and r ( b ) give the endpoints of C and a < b {\displaystyle a<b} .
For a vector field F : U ⊆ R n → R n , the line integral along a piecewise smooth curve C ⊂ U , in the direction of r , is defined as ∫ C F ( r ) ⋅ d r = ∫ a b F ( r ( t ) ) ⋅ r ′ ( t ) d t , {\displaystyle \int _{C}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {F} (\mathbf {r} (t))\cdot \mathbf {r} '(t)\,dt,}
where · is the dot product and r : [a, b] → C is a bijective parametrization of the curve C such that r ( a ) and r ( b ) give the endpoints of C .
A surface integral is a generalization of multiple integrals to integration over surfaces . It can be thought of as the double integral analog of the line integral. To find an explicit formula for the surface integral, we need to parameterize the surface of interest, S , by considering a system of curvilinear coordinates on S , like the latitude and longitude on a sphere . Let such a parameterization be x ( s , t ), where ( s , t ) varies in some region T in the plane . Then, the surface integral is given by ∬ S f d S = ∬ T f ( x ( s , t ) ) ‖ ∂ x ∂ s × ∂ x ∂ t ‖ d s d t , {\displaystyle \iint _{S}f\,dS=\iint _{T}f(\mathbf {x} (s,t))\left\|{\frac {\partial \mathbf {x} }{\partial s}}\times {\frac {\partial \mathbf {x} }{\partial t}}\right\|ds\,dt,}
where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x ( s , t ), and is known as the surface element . Given a vector field v on S , that is a function that assigns to each x in S a vector v ( x ), the surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector.
A volume integral is an integral over a three-dimensional domain or region.
When the integrand is trivial (unity), the volume integral is simply the region's volume . [ 14 ] [ 1 ] It can also mean a triple integral within a region D in R 3 of a function f ( x , y , z ) , {\displaystyle f(x,y,z),} and is usually written as: ∭ D f ( x , y , z ) d x d y d z . {\displaystyle \iiint _{D}f(x,y,z)\,dx\,dy\,dz.}
The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve.
Let φ : U ⊆ R n → R {\displaystyle \varphi :U\subseteq \mathbb {R} ^{n}\to \mathbb {R} } . Then ∫ γ ∇ φ ( r ) ⋅ d r = φ ( q ) − φ ( p ) , {\displaystyle \int _{\gamma }\nabla \varphi (\mathbf {r} )\cdot d\mathbf {r} =\varphi (\mathbf {q} )-\varphi (\mathbf {p} ),} where p and q are the endpoints of a piecewise smooth curve γ contained in U .
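A numerical sketch of this theorem, assuming an arbitrary example field φ and curve (not taken from the text); the discretized line integral of ∇φ matches φ evaluated at the endpoints:

```python
import numpy as np

# Gradient theorem check: the line integral of grad(phi) along a curve
# equals phi(end) - phi(start).  Field and curve are arbitrary examples.
phi = lambda x, y, z: x**2 + y*z
grad_phi = lambda x, y, z: np.array([2*x, z, y])

t = np.linspace(0.0, np.pi / 2, 20001)
r = np.stack([np.cos(t), np.sin(t), t], axis=1)         # curve r(t)
dr_dt = np.gradient(r, t, axis=0)

g = grad_phi(r[:, 0], r[:, 1], r[:, 2]).T                # grad(phi) along the curve
integrand = np.einsum('ij,ij->i', g, dr_dt)
line_integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

print(line_integral)                  # ~ 0.5708
print(phi(*r[-1]) - phi(*r[0]))       # pi/2 - 1 ~ 0.5708
```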
Stokes' theorem relates the surface integral of the curl of a vector field F over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary ∂Σ: ∬ Σ ∇ × F ⋅ d Σ = ∮ ∂ Σ F ⋅ d r . {\displaystyle \iint _{\Sigma }\nabla \times \mathbf {F} \cdot d\mathbf {\Sigma } =\oint _{\partial \Sigma }\mathbf {F} \cdot d\mathbf {r} .}
Suppose V is a subset of R n {\displaystyle \mathbb {R} ^{n}} (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂ V = S ). If F is a continuously differentiable vector field defined on a neighborhood of V , then the divergence theorem says: [ 15 ] ∭ V ( ∇ ⋅ F ) d V = ∬ S ( F ⋅ n ) d S . {\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {F} \right)\,dV=\iint _{S}(\mathbf {F} \cdot \mathbf {n} )\,dS.}
The left side is a volume integral over the volume V , the right side is the surface integral over the boundary of the volume V . The closed manifold ∂ V is quite generally the boundary of V oriented by outward-pointing normals , and n is the outward pointing unit normal field of the boundary ∂ V . ( d S may be used as a shorthand for n dS .)
Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimension numbers. For example, at least three dimensions are required to tie a knot in a piece of string. [ 16 ]
In differential geometry the generic three-dimensional spaces are 3-manifolds , which locally resemble R 3 {\displaystyle {\mathbb {R} }^{3}} .
Many ideas of dimension can be tested with finite geometry . The simplest instance is PG(3,2) , which has Fano planes as its 2-dimensional subspaces. It is an instance of Galois geometry , a study of projective geometry using finite fields . Thus, for any Galois field GF( q ), there is a projective space PG(3, q ) of three dimensions. For example, any three skew lines in PG(3, q ) are contained in exactly one regulus . [ 17 ] | https://en.wikipedia.org/wiki/Three-dimensional_space |
The three-domain system is a taxonomic classification system that groups all cellular life into three domains , namely Archaea , Bacteria and Eukarya , introduced by Carl Woese , Otto Kandler and Mark Wheelis in 1990. [ 1 ] The key difference from earlier classifications such as the two-empire system and the five-kingdom classification is the splitting of Archaea (previously named "archaebacteria") from Bacteria as completely different organisms.
The three domain hypothesis is considered obsolete by some since it is thought that eukaryotes do not form a separate domain of life; instead, they arose from a fusion between two different species, one from within Archaea and one from within Bacteria. [ 2 ] [ 3 ] [ 4 ] (see Two-domain system )
Woese argued, on the basis of differences in 16S rRNA genes , that bacteria, archaea, and eukaryotes each arose separately from an ancestor with poorly developed genetic machinery, often called a progenote . To reflect these primary lines of descent, he treated each as a domain, divided into several different kingdoms . Originally his split of the prokaryotes was into Eubacteria (now Bacteria ) and Archaebacteria (now Archaea ). [ 5 ] Woese initially used the term "kingdom" to refer to the three primary phylogenic groupings, and this nomenclature was widely used until the term "domain" was adopted in 1990. [ 1 ]
Acceptance of the validity of Woese's phylogenetic classification was a slow process. Prominent biologists including Salvador Luria and Ernst Mayr objected to his division of the prokaryotes. [ 6 ] [ 7 ] Not all criticism of him was restricted to the scientific level. A decade of labor-intensive oligonucleotide cataloging left him with a reputation as "a crank", and Woese would go on to be dubbed "Microbiology's Scarred Revolutionary" by a news article printed in the journal Science in 1997. [ 8 ] The growing amount of supporting data led the scientific community to accept the Archaea by the mid-1980s. [ 9 ] Today, very few scientists still accept the concept of a unified Prokarya. [ 10 ]
The three-domain system adds a level of classification (the domains) "above" the kingdoms present in the previously used five- or six-kingdom systems . This classification system recognizes the fundamental divide between the two prokaryotic groups, insofar as Archaea appear to be more closely related to eukaryotes than they are to other prokaryotes – bacteria-like organisms with no cell nucleus . The three-domain system sorts the previously known kingdoms into these three domains: Archaea , Bacteria , and Eukarya . [ 2 ]
The Archaea are prokaryotic , with no nuclear membrane, but with biochemistry and RNA markers that are distinct from bacteria. The archaeans possess a unique, ancient evolutionary history, for which they are considered some of the oldest species of organisms on Earth; they are most notable for their diverse, exotic metabolisms.
Some examples of archaeal organisms are:
The Bacteria are also prokaryotic ; their domain consists of cells with bacterial rRNA, no nuclear membrane, and whose membranes possess primarily diacyl glycerol diester lipids . Traditionally classified as bacteria, many thrive in the same environments favored by humans, and were the first prokaryotes discovered; they were briefly called the Eubacteria or "true" bacteria when the Archaea were first recognized as a distinct clade .
Most known pathogenic prokaryotic organisms belong to bacteria (see [ 11 ] for exceptions). For that reason, and because the Archaea are typically difficult to grow in laboratories, Bacteria are currently studied more extensively than Archaea.
Some examples of bacteria include:
Eukaryota are organisms whose cells contain a membrane-bound nucleus. They include many large single-celled organisms and all known non-microscopic organisms . The domain contains, for example:
Each of the three cell types tends to fit into recurring specialities or roles. Bacteria tend to be the most prolific reproducers, at least in moderate environments. Archaeans tend to adapt quickly to extreme environments, such as high temperatures, high acids, high sulfur, etc. This includes adapting to use a wide variety of food sources. Eukaryotes are the most flexible with regard to forming cooperative colonies, such as in multi-cellular organisms, including humans. In fact, the structure of a eukaryote is likely to have derived from a joining of different cell types, forming organelles .
Parakaryon myojinensis ( incertae sedis ) is a single-celled organism known to be a unique example. "This organism appears to be a life form distinct from prokaryotes and eukaryotes ", [ 12 ] with features of both.
Parts of the three-domain theory have been challenged by scientists including Ernst Mayr , Thomas Cavalier-Smith , and Radhey S. Gupta . [ 13 ] [ 14 ] [ 15 ]
Recent work has proposed that Eukaryota may have actually branched off from the domain Archaea. According to Spang et al. , Lokiarchaeota forms a monophyletic group with eukaryotes in phylogenomic analyses. The associated genomes also encode an expanded repertoire of eukaryotic signature proteins that are suggestive of sophisticated membrane remodelling capabilities. [ 16 ] This work suggests a two-domain system as opposed to the three-domain system. [ 3 ] [ 4 ] [ 2 ] Exactly how and when Archaea, Bacteria, and Eucarya developed and how they are related continues to be debated. [ 17 ] [ 2 ] [ 18 ] | https://en.wikipedia.org/wiki/Three-domain_system |
Three-phase electric power (abbreviated 3ϕ [ 1 ] ) is a common type of alternating current (AC) used in electricity generation , transmission , and distribution . [ 2 ] It is a type of polyphase system employing three wires (or four including an optional neutral return wire) and is the most common method used by electrical grids worldwide to transfer power.
Three-phase electrical power was developed in the 1880s by several people. In three-phase power, the voltage on each wire is 120 degrees phase shifted relative to each of the other wires. Because it is an AC system, it allows the voltages to be easily stepped up using transformers to high voltage for transmission and back down for distribution, giving high efficiency.
A three-wire three-phase circuit is usually more economical than an equivalent two-wire single-phase circuit at the same line-to-ground voltage because it uses less conductor material to transmit a given amount of electrical power. [ 3 ] Three-phase power is mainly used directly to power large induction motors , other electric motors and other heavy loads. Small loads often use only a two-wire single-phase circuit, which may be derived from a three-phase system.
The conductors between a voltage source and a load are called lines, and the voltage between any two lines is called line voltage . The voltage measured between any line and neutral is called phase voltage . [ 4 ] For example, in countries with nominal 230 V power, the line voltage is 400 V and the phase voltage is 230 V. For a 208/120 V service, the line voltage is 208 V and the phase voltage is 120 V.
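A minimal sketch of the √3 relation between the phase (line-to-neutral) and line (line-to-line) voltages quoted above:

```python
import math

# Line (line-to-line) voltage is sqrt(3) times the phase (line-to-neutral)
# voltage for the two nominal systems mentioned above.
print(230 * math.sqrt(3))   # ~ 398 V, quoted nominally as 400 V
print(120 * math.sqrt(3))   # ~ 208 V
```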
Polyphase power systems were independently invented by Galileo Ferraris , Mikhail Dolivo-Dobrovolsky , Jonas Wenström , John Hopkinson , William Stanley Jr. , and Nikola Tesla in the late 1880s. [ 5 ]
Three phase power evolved out of electric motor development. In 1885, Galileo Ferraris was doing research on rotating magnetic fields . Ferraris experimented with different types of asynchronous electric motors . The research and his studies resulted in the development of an alternator , which may be thought of as an alternating-current motor operating in reverse, so as to convert mechanical (rotating) power into electric power (as alternating current). On 11 March 1888, Ferraris published his research in a paper to the Royal Academy of Sciences in Turin . [ 6 ]
Two months later Nikola Tesla gained U.S. patent 381,968 for a three-phase electric motor design, application filed October 12, 1887. Figure 13 of this patent shows that Tesla envisaged his three-phase motor being powered from the generator via six wires.
These alternators operated by creating systems of alternating currents displaced from one another in phase by definite amounts, and depended on rotating magnetic fields for their operation. The resulting source of polyphase power soon found widespread acceptance. The invention of the polyphase alternator is key in the history of electrification, as is the power transformer. These inventions enabled power to be transmitted by wires economically over considerable distances. Polyphase power enabled the use of water-power (via hydroelectric generating plants in large dams) in remote places, thereby allowing the mechanical energy of the falling water to be converted to electricity, which then could be fed to an electric motor at any location where mechanical work needed to be done. This versatility sparked the growth of power-transmission network grids on continents around the globe.
Mikhail Dolivo-Dobrovolsky developed a three-phase electrical generator and a three-phase electric motor in 1888 and studied star and delta connections . [ 7 ] [ 8 ] [ 9 ] His three-phase three-wire transmission system was displayed in 1891 in Germany at the International Electrotechnical Exhibition , where Dolivo-Dobrovolsky used the system to transmit electric power at the distance of 176 km (110 miles) with 75% efficiency . In 1891 he also created a three-phase transformer and short-circuited ( squirrel-cage ) induction motor . [ 10 ] [ 11 ] [ 12 ] He designed the world's first three-phase hydroelectric power plant in 1891. [ 13 ] Inventor Jonas Wenström received in 1890 a Swedish patent on the same three-phase system. [ 14 ] The possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine. A 45 m fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase 9.5 kV system was used to transfer 400 horsepower (300 kW) a distance of 15 km (10 miles), becoming the first commercial application. [ 15 ]
In a symmetric three-phase power supply system, three conductors each carry an alternating current of the same frequency and voltage amplitude relative to a common reference, but with a phase difference of one third of a cycle (i.e., 120 degrees out of phase) between each. The common reference is usually connected to ground and often to a current-carrying conductor called the neutral. Due to the phase difference, the voltage on any conductor reaches its peak at one third of a cycle after one of the other conductors and one third of a cycle before the remaining conductor. This phase delay gives constant power transfer to a balanced linear load. It also makes it possible to produce a rotating magnetic field in an electric motor and generate other phase arrangements using transformers (for instance, a two-phase system using a Scott-T transformer ). The amplitude of the voltage difference between two phases is 3 = 1.732 … {\displaystyle {\sqrt {3}}=1.732\ldots } times the amplitude of the voltage of the individual phases.
The symmetric three-phase systems described here are simply referred to as three-phase systems because, although it is possible to design and implement asymmetric three-phase power systems (i.e., with unequal voltages or phase shifts), they are not used in practice because they lack the most important advantages of symmetric systems.
In a three-phase system feeding a balanced and linear load, the sum of the instantaneous currents of the three conductors is zero. In other words, the current in each conductor is equal in magnitude to the sum of the currents in the other two, but with the opposite sign. The return path for the current in any phase conductor is the other two phase conductors.
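A short numerical sketch of this cancellation: three equal-amplitude currents spaced 120° apart sum to zero at every instant (the 50 Hz frequency and unit amplitude are arbitrary example values):

```python
import numpy as np

# Three equal-amplitude currents, 120 degrees apart, sum to zero at every
# instant (balanced linear load assumed; 50 Hz and unit amplitude arbitrary).
t = np.linspace(0.0, 0.04, 1000)            # two cycles at 50 Hz
w = 2 * np.pi * 50
i1 = np.sin(w * t)
i2 = np.sin(w * t - 2 * np.pi / 3)
i3 = np.sin(w * t + 2 * np.pi / 3)
print(np.max(np.abs(i1 + i2 + i3)))         # ~ 0 (numerical round-off)
```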
Constant power transfer is possible with any number of phases greater than one. However, two-phase systems do not have neutral-current cancellation and thus use conductors less efficiently, and more than three phases complicates infrastructure unnecessarily.
Additionally, in some practical generators and motors, two phases can result in a less smooth (pulsating) torque. [ 16 ]
Three-phase systems may have a fourth wire, common in low-voltage distribution. This is the neutral wire. The neutral allows three separate single-phase supplies to be provided at a constant voltage and is commonly used for supplying multiple single-phase loads. The connections are arranged so that, as far as possible in each group, equal power is drawn from each phase. Further up the distribution system , the currents are usually well balanced. Transformers may be wired to have a four-wire secondary and a three-wire primary, while allowing unbalanced loads and the associated secondary-side neutral currents.
Wiring for three phases is typically identified by colors that vary by country and voltage. The phases must be connected in the correct order to achieve the intended direction of rotation of three-phase motors. For example, pumps and fans do not work as intended in reverse. Maintaining the identity of phases is required if two sources could be connected at the same time. A direct connection between two different phases is a short circuit and leads to flow of unbalanced current.
As compared to a single-phase AC power supply that uses two current-carrying conductors (phase and neutral ), a three-phase supply with no neutral and the same phase-to-ground voltage and current capacity per phase can transmit three times as much power by using just 1.5 times as many wires (i.e., three instead of two). Thus, the ratio of capacity to conductor material is doubled. [ 17 ] The ratio of capacity to conductor material increases to 3:1 with an ungrounded three-phase and center-grounded single-phase system (or 2.25:1 if both use grounds with the same gauge as the conductors). That leads to higher efficiency, lower weight, and cleaner waveforms.
Three-phase supplies have properties that make them desirable in electric power distribution systems:
However, most loads are single-phase. In North America, single-family houses and individual apartments are supplied one phase from the power grid and use a split-phase system to the panelboard from which most branch circuits will carry 120 V. Circuits designed for higher powered devices such as stoves, dryers, or outlets for electric vehicles carry 240 V.
In Europe, three-phase power is normally delivered to the panelboard and further to higher powered devices.
At the power station , an electrical generator converts mechanical power into a set of three AC electric currents , one from each coil (or winding) of the generator. The windings are arranged such that the currents are at the same frequency but with the peaks and troughs of their wave forms offset to provide three complementary currents with a phase separation of one-third cycle ( 120° or 2π ⁄ 3 radians ). The generator frequency is typically 50 or 60 Hz , depending on the country.
At the power station, transformers change the voltage from generators to a level suitable for transmission in order to minimize losses.
After further voltage conversions in the transmission network, the voltage is finally transformed to the standard utilization before power is supplied to customers.
Most automotive alternators generate three-phase AC and rectify it to DC with a diode bridge . [ 20 ]
A "delta" (Δ) connected transformer winding is connected between phases of a three-phase system. A "wye" (Y) transformer connects each winding from a phase wire to a common neutral point.
A single three-phase transformer can be used, or three single-phase transformers.
In an "open delta" or "V" system, only two transformers are used. A closed delta made of three single-phase transformers can operate as an open delta if one of the transformers has failed or needs to be removed. [ 21 ] In open delta, each transformer must carry current for its respective phases as well as current for the third phase, therefore capacity is reduced to 87%. With one of three transformers missing and the remaining two at 87% efficiency, the capacity is 58% ( 2 ⁄ 3 of 87%). [ 22 ] [ 23 ]
Where a delta-fed system must be grounded for detection of stray current to ground or protection from surge voltages, a grounding transformer (usually a zigzag transformer ) may be connected to allow ground fault currents to return from any phase to ground. Another variation is a "corner grounded" delta system, which is a closed delta that is grounded at one of the junctions of transformers. [ 24 ]
There are two basic three-phase configurations: wye (Y) and delta (Δ). As shown in the diagram, a delta configuration requires only three wires for transmission, but a wye (star) configuration may have a fourth wire. The fourth wire, if present, is provided as a neutral and is normally grounded. The three-wire and four-wire designations do not count the ground wire present above many transmission lines, which is solely for fault protection and does not carry current under normal use.
A four-wire system with symmetrical voltages between phase and neutral is obtained when the neutral is connected to the "common star point" of all supply windings. In such a system, all three phases will have the same magnitude of voltage relative to the neutral. Other non-symmetrical systems have been used.
The four-wire wye system is used when a mixture of single-phase and three-phase loads are to be served, such as mixed lighting and motor loads. An example of application is local distribution in Europe (and elsewhere), where each customer may be only fed from one phase and the neutral (which is common to the three phases). When a group of customers sharing the neutral draw unequal phase currents, the common neutral wire carries the currents resulting from these imbalances. Electrical engineers try to design the three-phase power system for any one location so that the power drawn from each of three phases is the same, as far as possible at that site. [ 25 ] Electrical engineers also try to arrange the distribution network so the loads are balanced as much as possible, since the same principles that apply to individual premises also apply to the wide-scale distribution system power. Hence, every effort is made by supply authorities to distribute the power drawn on each of the three phases over a large number of premises so that, on average, as nearly as possible a balanced load is seen at the point of supply.
For domestic use, some countries such as the UK may supply one phase and neutral at a high current (up to 100 A ) to one property, while others such as Germany may supply 3 phases and neutral to each customer, but at a lower fuse rating, typically 40–63 A per phase, and "rotated" to avoid the effect that more load tends to be put on the first phase. [ citation needed ]
Based on wye (Y) and delta (Δ) connection. Generally, there are four different types of three-phase transformer winding connections for transmission and distribution purposes:
In North America, a high-leg delta supply is sometimes used where one winding of a delta-connected transformer feeding the load is center-tapped and that center tap is grounded and connected as a neutral as shown in the second diagram. This setup produces three different voltages: If the voltage between the center tap (neutral) and each of the top and bottom taps (phase and anti-phase) is 120 V (100%), the voltage across the phase and anti-phase lines is 240 V (200%), and the neutral to "high leg" voltage is ≈ 208 V (173%). [ 21 ]
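A phasor sketch of these voltages, using the nominal 120/240 V example above; the delta corners are placed as an equilateral triangle in the complex plane and the neutral is the centre tap of one winding:

```python
# High-leg delta voltages as phasors in the complex plane (nominal 120/240 V).
A = 0 + 0j
B = 240 + 0j
C = 120 + 240 * (3 ** 0.5) / 2 * 1j      # third corner of the equilateral delta
N = (A + B) / 2                           # centre-tap neutral of the A-B winding

print(abs(A - N), abs(B - N))             # 120 V each (phase and anti-phase)
print(abs(A - B))                         # 240 V across the tapped winding
print(abs(C - N))                         # ~ 208 V from neutral to the "high leg"
```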
The reason for providing the delta connected supply is usually to power large motors requiring a rotating field. However, the premises concerned will also require the "normal" North American 120 V supplies, two of which are derived (180 degrees "out of phase") between the "neutral" and either of the center-tapped phase points.
In the perfectly balanced case all three lines share equivalent loads. Examining the circuits, we can derive relationships between line voltage and current, and load voltage and current for wye- and delta-connected loads.
In a balanced system each line will produce equal voltage magnitudes at phase angles equally spaced from each other. With V 1 as our reference and V 3 lagging V 2 lagging V 1 , using angle notation , and V LN the voltage between the line and the neutral, we have: [ 26 ] V 1 = V LN ∠0°, V 2 = V LN ∠−120°, and V 3 = V LN ∠+120°.
These voltages feed into either a wye- or delta-connected load.
The voltage seen by the load will depend on the load connection; for the wye case, connecting each load across a phase (line-to-neutral) voltage gives [ 26 ] I 1 = V 1 / Z total , I 2 = V 2 / Z total , and I 3 = V 3 / Z total , each current having magnitude V LN / | Z total | and lagging its phase voltage by θ ,
where Z total is the sum of line and load impedances ( Z total = Z LN + Z Y ), and θ is the phase of the total impedance ( Z total ).
The phase angle difference between voltage and current of each phase is not necessarily 0 and depends on the type of load impedance, Z y . Inductive and capacitive loads will cause current to either lag or lead the voltage. However, the relative phase angle between each pair of lines (1 to 2, 2 to 3, and 3 to 1) will still be −120°.
By applying Kirchhoff's current law (KCL) to the neutral node, the three phase currents sum to the total current in the neutral line. In the balanced case: I N = I 1 + I 2 + I 3 = 0.
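A phasor sketch of the balanced wye case, assuming arbitrary example values for the phase voltage and load impedance; it checks the equal current magnitudes, the zero neutral current, and the √3 line-to-line ratio:

```python
import cmath

# Balanced wye load: equal current magnitudes 120 degrees apart, zero neutral
# current, and a sqrt(3) line-to-line voltage ratio.  Values are examples.
V_LN = 230.0
V1 = cmath.rect(V_LN, 0.0)
V2 = cmath.rect(V_LN, -2 * cmath.pi / 3)
V3 = cmath.rect(V_LN, +2 * cmath.pi / 3)

Z_Y = complex(10.0, 5.0)                  # example per-phase load impedance
I1, I2, I3 = V1 / Z_Y, V2 / Z_Y, V3 / Z_Y

print(abs(I1), abs(I2), abs(I3))          # equal magnitudes
print(abs(I1 + I2 + I3))                  # ~ 0: no neutral current (KCL)
print(abs(V1 - V2) / V_LN)                # ~ 1.732 = sqrt(3)
```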
In the delta circuit, loads are connected across the lines, and so loads see line-to-line voltages: [ 26 ] V 12 = V 1 − V 2 = √ 3 V LN ∠(Φ v1 + 30°), V 23 = V 2 − V 3 = √ 3 V LN ∠(Φ v2 + 30°), and V 31 = V 3 − V 1 = √ 3 V LN ∠(Φ v3 + 30°).
(Φ v1 is the phase shift for the first voltage, commonly taken to be 0°; in this case, Φ v2 = −120° and Φ v3 = −240° or 120°.)
Further, the load (phase) currents are I 12 = V 12 / Z Δ , I 23 = V 23 / Z Δ , and I 31 = V 31 / Z Δ , each with magnitude √ 3 V LN / | Z Δ | and lagging its line-to-line voltage by θ , where θ is the phase of delta impedance ( Z Δ ).
Relative angles are preserved, so I 31 lags I 23 lags I 12 by 120°. Calculating line currents by using KCL at each delta node gives I 1 = I 12 − I 31 = √ 3 I 12 ∠−30° (relative to I 12 ), and similarly for each other line: I 2 = I 23 − I 12 and I 3 = I 31 − I 23 . Each line current has magnitude √ 3 V LL / | Z Δ | and lags its line-to-line voltage by θ + 30°, where, again, θ is the phase of delta impedance ( Z Δ ).
Inspection of a phasor diagram, or conversion from phasor notation to complex notation, illuminates how the difference between two line-to-neutral voltages yields a line-to-line voltage that is greater by a factor of √ 3 . As a delta configuration connects a load across phases of a transformer, it delivers the line-to-line voltage difference, which is √ 3 times greater than the line-to-neutral voltage delivered to a load in the wye configuration. As the power transferred is V 2 / Z , the impedance in the delta configuration must be 3 times what it would be in a wye configuration for the same power to be transferred.
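A matching phasor sketch for a balanced delta load (again with arbitrary example values), checking that each line current is √3 times a load current and shifted by 30°:

```python
import cmath

# Balanced delta load: each line current is sqrt(3) times a load (phase)
# current and shifted by 30 degrees.  Values are arbitrary examples.
V_LL = 400.0
V12 = cmath.rect(V_LL, 0.0)
V23 = cmath.rect(V_LL, -2 * cmath.pi / 3)
V31 = cmath.rect(V_LL, +2 * cmath.pi / 3)

Z_D = complex(30.0, 15.0)                 # example delta load impedance
I12, I23, I31 = V12 / Z_D, V23 / Z_D, V31 / Z_D

I1 = I12 - I31                            # KCL at node 1
print(abs(I1) / abs(I12))                 # ~ 1.732 = sqrt(3)
print(cmath.phase(I1) - cmath.phase(I12)) # ~ -pi/6, i.e. a 30 degree shift
```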
Except in a high-leg delta system and a corner-grounded delta system, single-phase loads may be connected across any two phases, or a load can be connected from phase to neutral. [ 28 ] Distributing single-phase loads among the phases of a three-phase system balances the load and makes most economical use of conductors and transformers.
In a symmetrical three-phase four-wire wye system, the three phase conductors have the same voltage to the system neutral. The voltage between line conductors is √ 3 times the phase conductor to neutral voltage: [ 29 ] V LL = √ 3 V LN .
The currents returning from the customers' premises to the supply transformer all share the neutral wire. If the loads are evenly distributed on all three phases, the sum of the returning currents in the neutral wire is approximately zero. Any unbalanced phase loading on the secondary side of the transformer will use the transformer capacity inefficiently.
If the supply neutral is broken, phase-to-neutral voltage is no longer maintained. Phases with higher relative loading will experience reduced voltage, and phases with lower relative loading will experience elevated voltage, up to the phase-to-phase voltage.
A high-leg delta provides a phase-to-neutral relationship of V LL = 2 V LN ; however, the line-to-neutral load is imposed on only one phase. [ 21 ] A transformer manufacturer's page suggests that line-to-neutral loading not exceed 5% of transformer capacity. [ 30 ]
Since √ 3 ≈ 1.73, defining V LN as 100% gives V LL ≈ 100% × 1.73 = 173% . If V LL is instead defined as 100%, then V LN ≈ 57.7% .
When the currents on the three live wires of a three-phase system are not equal or are not at an exact 120° phase angle, the power loss is greater than for a perfectly balanced system. The method of symmetrical components is used to analyze unbalanced systems.
With linear loads, the neutral only carries the current due to imbalance between the phases. Gas-discharge lamps and devices that utilize rectifier-capacitor front-end such as switch-mode power supplies , computers, office equipment and such produce third-order harmonics that are in-phase on all the supply phases. Consequently, such harmonic currents add in the neutral in a wye system (or in the grounded (zigzag) transformer in a delta system), which can cause the neutral current to exceed the phase current. [ 28 ] [ 31 ]
An important class of three-phase load is the electric motor . A three-phase induction motor has a simple design, inherently high starting torque and high efficiency. Such motors are applied in industry for many applications. A three-phase motor is more compact and less costly than a single-phase motor of the same voltage class and rating, and single-phase AC motors above 10 hp (7.5 kW) are uncommon. Three-phase motors also vibrate less and hence last longer than single-phase motors of the same power used under the same conditions. [ 32 ]
Resistive heating loads such as electric boilers or space heating may be connected to three-phase systems. Electric lighting may also be similarly connected.
Line frequency flicker in light is detrimental to high-speed cameras used in sports event broadcasting for slow-motion replays. It can be reduced by evenly spreading line frequency operated light sources across the three phases so that the illuminated area is lit from all three phases. This technique was applied successfully at the 2008 Beijing Olympics. [ 33 ]
Rectifiers may use a three-phase source to produce a six-pulse DC output. [ 34 ] The output of such rectifiers is much smoother than rectified single phase and, unlike single-phase, does not drop to zero between pulses. Such rectifiers may be used for battery charging, electrolysis processes such as aluminium production and the electric arc furnace used in steelmaking , and for operation of DC motors. Zigzag transformers may make the equivalent of six-phase full-wave rectification, twelve pulses per cycle, and this method is occasionally employed to reduce the cost of the filtering components, while improving the quality of the resulting DC.
In many European countries electric stoves are usually designed for a three-phase feed with permanent connection. Individual heating units are often connected between phase and neutral to allow for connection to a single-phase circuit if three-phase is not available. [ 35 ] Other usual three-phase loads in the domestic field are tankless water heating systems and storage heaters . Homes in Europe have standardized on a nominal 230 V ±10% between any phase and ground. Most groups of houses are fed from a three-phase street transformer so that individual premises with above-average demand can be fed with a second or third phase connection.
Phase converters are used when three-phase equipment needs to be operated on a single-phase power source. They are used when three-phase power is not available or cost is not justifiable. Such converters may also allow the frequency to be varied, allowing speed control. Some railway locomotives use a single-phase source to drive three-phase motors fed through an electronic drive. [ 36 ]
A rotary phase converter is a three-phase motor with special starting arrangements and power factor correction that produces balanced three-phase voltages. When properly designed, these rotary converters can allow satisfactory operation of a three-phase motor on a single-phase source. In such a device, the energy storage is performed by the inertia (flywheel effect) of the rotating components. An external flywheel is sometimes found on one or both ends of the shaft.
A three-phase generator can be driven by a single-phase motor. This motor-generator combination can provide a frequency changer function as well as phase conversion, but requires two machines with all their expenses and losses. The motor-generator method can also form an uninterruptible power supply when used in conjunction with a large flywheel and a battery-powered DC motor; such a combination will deliver nearly constant power, in contrast to the temporary frequency drop that a standby generator set gives until the standby generator kicks in.
Capacitors and autotransformers can be used to approximate a three-phase system in a static phase converter, but the voltage and phase angle of the additional phase may only be useful for certain loads.
Variable-frequency drives and digital phase converters use power electronic devices to synthesize a balanced three-phase supply from single-phase input power.
Verification of the phase sequence in a circuit is of considerable practical importance. Two sources of three-phase power must not be connected in parallel unless they have the same phase sequence, for example, when connecting a generator to an energized distribution network or when connecting two transformers in parallel. Otherwise, the interconnection will behave like a short circuit, and excess current will flow. The direction of rotation of three-phase motors can be reversed by interchanging any two phases; it may be impractical or harmful to test a machine by momentarily energizing the motor to observe its rotation. Phase sequence of two sources can be verified by measuring voltage between pairs of terminals and observing that terminals with very low voltage between them will have the same phase, whereas pairs that show a higher voltage are on different phases.
Where the absolute phase identity is not required, phase rotation test instruments can be used to identify the rotation sequence with one observation. The phase rotation test instrument may contain a miniature three-phase motor, whose direction of rotation can be directly observed through the instrument case. Another pattern uses a pair of lamps and an internal phase-shifting network to display the phase rotation. Another type of instrument can be connected to a de-energized three-phase motor and can detect the small voltages induced by residual magnetism, when the motor shaft is rotated by hand. A lamp or other indicator lights to show the sequence of voltages at the terminals for the given direction of shaft rotation. [ 37 ]
Conductors of a three-phase system are usually identified by a color code, to facilitate balanced loading and to assure the correct phase rotation for motors . Colors used may adhere to International Standard IEC 60446 (later IEC 60445 ), older standards or to no standard at all and may vary even within a single installation. For example, in the U.S. and Canada, different color codes are used for grounded (earthed) and ungrounded systems.
| https://en.wikipedia.org/wiki/Three-phase_electric_power
Three-photon microscopy ( 3PEF ) is a high-resolution fluorescence microscopy technique based on a nonlinear excitation effect. [ 1 ] [ 2 ] [ 3 ] Unlike two-photon excitation microscopy , it uses three excitation photons. It typically uses lasers of 1300 nm or longer wavelength to excite fluorescent dyes by three simultaneously absorbed photons. The fluorescent dyes then emit one photon whose energy is slightly smaller than three times the energy of each incident photon. Compared to two-photon microscopy, the out-of-focus fluorescence in three-photon microscopy falls off as 1 / z 4 {\displaystyle 1/z^{4}} , which is much faster than the 1 / z 2 {\displaystyle 1/z^{2}} fall-off of two-photon microscopy. [ 4 ] In addition, three-photon microscopy employs near- infrared light, which is scattered less by tissue. For these reasons, three-photon microscopy achieves higher resolution than conventional microscopy .
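A minimal sketch (arbitrary units, values chosen only for illustration) of the two fall-off laws quoted above:

```python
import numpy as np

z = np.array([1.0, 2.0, 5.0, 10.0, 50.0])   # distance from the focal plane (arbitrary units)
two_photon = 1.0 / z**2                      # out-of-focus background for 2PEF
three_photon = 1.0 / z**4                    # out-of-focus background for 3PEF
for zi, b2, b3 in zip(z, two_photon, three_photon):
    print(f"z = {zi:5.1f}   2PEF {b2:.2e}   3PEF {b3:.2e}")
```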
Three-photon excited fluorescence was first observed by Singh and Bradley in 1964 when they estimated the three-photon absorption cross section of naphthalene crystals. [ 5 ] In 1996, Stefan W. Hell designed experiments to validate the feasibility of applying three-photon excitation to scanning fluorescence microscopy, which further proved the concept of three-photon excited fluorescence. [ 6 ]
Three-photon microscopy shares several similarities with two-photon excitation microscopy . Both employ the point-scanning method, both can image 3D samples by adjusting the position of the focus along the axial and lateral directions, and neither system requires a pinhole to block out-of-focus light. However, three-photon microscopy differs from two-photon excitation microscopy in its point spread function , resolution , penetration depth, resistance to out-of-focus light, and degree of photobleaching .
In three-photon excitation, the fluorophore absorbs three photons almost simultaneously. The excitation laser wavelength is about 1200 nm or more in three-photon microscopy, with the emission wavelength slightly longer than one-third of the excitation wavelength. Three-photon microscopy achieves deeper tissue penetration because of the longer excitation wavelengths and the higher-order nonlinear excitation. However, a three-photon microscope needs a laser with higher power because of the relatively small three-photon excitation cross-section of the dyes, which is on the order of 10 − 82 cm 6 ( s / photon ) 2 {\displaystyle 10^{-82}{\text{cm}}^{6}(s/{\text{photon}})^{2}} . This is much smaller than the typical two-photon excitation cross-sections of 10 − 49 cm 4 s / photon {\displaystyle 10^{-49}{\text{cm}}^{4}s/{\text{photon}}} . [ 7 ] The ultrashort pulses used are usually around 100 fs long.
For three photon fluorescence scanning microscopy, the three dimensional intensity point-spread function (IPSF) can be denoted as,
where ⊗ 3 {\displaystyle \otimes _{3}} denotes the 3-D convolution operation, D {\displaystyle D} denotes the intensity sensitivity of an incoherent detector, and I 1 ( ν , u ) {\displaystyle I_{1}(\nu ,u)} , I 2 ( ν , u ) {\displaystyle I_{2}(\nu ,u)} denote the 3-D IPSFs of the objective lens and the collector lens in single-photon fluorescence, respectively. The 3-D IPSF I 1 ( ν , u ) {\displaystyle I_{1}(\nu ,u)} can be expressed as
where J 0 {\displaystyle J_{0}} is a Bessel function of the first kind of order zero. The axial and radial coordinates u {\displaystyle u} and ν {\displaystyle \nu } are defined by
where α 0 {\displaystyle \alpha _{0}} is the numerical aperture of the objective lens, z {\displaystyle z} is the real defocus, and r {\displaystyle r} is the radial coordinate.
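As an illustration, the single-photon IPSF I 1 ( ν , u ) can be evaluated numerically. The sketch below assumes the common paraxial form I 1 = | 2 ∫ 0 1 J 0 ( ν ρ ) exp ( − i u ρ 2 / 2 ) ρ d ρ | 2 ; this specific integral and its normalization are assumptions made for illustration, not taken from the article:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

def ipsf_single_photon(v, u, n=4000):
    """Paraxial single-photon IPSF I1(v, u), normalized so that I1(0, 0) = 1."""
    rho = np.linspace(0.0, 1.0, n)
    integrand = j0(v * rho) * np.exp(-1j * u * rho**2 / 2) * rho
    h = 2 * trapezoid(integrand, rho)
    return abs(h)**2

print(ipsf_single_photon(0.0, 0.0))    # 1.0 at the focus
print(ipsf_single_photon(3.83, 0.0))   # ~0, near the first dark ring of the Airy pattern
```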
Correlative images can be obtained using different multiphoton schemes such as 2PEF , 3PEF, and third-harmonic generation (THG), in parallel (since the corresponding wavelengths are different, they can be easily separated onto different detectors). A multichannel image is then constructed. [ 9 ]
Compared with 2PEF , 3PEF generally shows a smaller degradation of the signal-to-background ratio (SBR) with depth, even though the emitted signal is weaker than with 2PEF. [ 9 ]
After three-photon excited fluorescence was observed by Singh and Bradley and further validated by Hell, Chris Xu and Watt W. Webb reported measurement of excitation cross sections of several native chromophores and biological indicators, and implemented three-photon excited fluorescence in Laser Scanning Microscopy of living cells. [ 10 ] In November 1996, David Wokosin applied three photon excitation fluorescence for fixed in vivo biological specimen imaging.
In the 2010s, three-photon microscopy was applied to deep tissue imaging using excitation wavelengths beyond 1060 nm. In January 2013, Horton, Wang, Kobat and Xu demonstrated in vivo deep imaging of an intact mouse brain by applying the point-scanning method to a three-photon microscope at the long-wavelength window of 1700 nm. [ 4 ] In February 2017, Dimitre Ouzounov, Tainyu Wang, and Chris Xu demonstrated deep activity imaging of GCaMP6-labeled neurons in the hippocampus of an intact, adult mouse brain using three-photon microscopy at the 1300 nm wavelength window. [ 11 ] In May 2017, Rowlands applied wide-field three-photon excitation for larger penetration depth. [ 12 ] In October 2018, T. Wang, D. Ouzounov, and C. Xu imaged vasculature and GCaMP6 calcium activity through the intact mouse skull using a three-photon microscope. [ 13 ]
Three-photon microscopy has application fields similar to those of two-photon excitation microscopy, including neuroscience [ 14 ] and oncology. [ 15 ] However, compared to standard single-photon or two-photon excitation, three-photon excitation has several benefits: the use of longer wavelengths reduces the effects of light scattering and increases the penetration depth of the illumination beam into the sample. [ 16 ] The nonlinear nature of three-photon microscopy confines the excitation to a smaller volume, reducing out-of-focus light as well as minimizing photobleaching of the biological sample. [ 16 ] These advantages give three-photon microscopy an edge in visualizing in vivo and ex vivo tissue morphology and physiology at a cellular level deep within scattering tissue [ 4 ] and in rapid volumetric imaging. [ 17 ] In a recent study, Xu demonstrated the potential of three-photon imaging for noninvasive studies of live biological systems. [ 13 ] The paper used three-photon fluorescence microscopy at a spectral excitation window of 1,320 nm to image mouse brain structure and function through the intact skull with high spatial and temporal resolution (lateral and axial FWHM of 0.96 μm and 4.6 μm), large fields of view (hundreds of micrometers), and substantial depth (>500 μm). This work demonstrates the advantage of higher-order nonlinear excitation for imaging through a highly scattering layer, in addition to the previously reported advantage of 3PM for deep imaging of densely labeled samples. Localized isomerization of photoswitchable drugs in vivo using three-photon excitation at 1560 nm has also been reported and used to control neuronal activity in a pharmacologically specific way. [ 18 ] | https://en.wikipedia.org/wiki/Three-photon_microscopy
In genetics , a three-point cross is used to determine the loci of three genes in an organism's genome .
An individual heterozygous for three mutations is crossed with a homozygous recessive individual, and the phenotypes of the progeny are scored. The two most common phenotypes that result are the parental gametes ; the two least common phenotypes that result come from a double crossover in gamete formation . By comparing the parental and double-crossover phenotypes, the geneticist can determine which gene is located between the others on the chromosome.
The recombinant frequency is the ratio of non-parental phenotypes to total individuals. It is expressed as a percentage , which is equivalent to the number of map units (or centiMorgans ) between two genes. For example, if 100 out of 1000 individuals display the phenotype resulting from a crossover between genes a and b , then the recombination frequency is 10 percent and genes a and b are 10 map-units apart on the chromosome.
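A minimal sketch of this arithmetic, using the hypothetical counts from the example above:

```python
def map_distance(recombinant_count, total_progeny):
    """Recombination frequency in percent, equal to the number of map units (centiMorgans)."""
    return 100.0 * recombinant_count / total_progeny

# The worked example above: 100 recombinant progeny out of 1000 total.
print(map_distance(100, 1000))   # 10.0 map units between genes a and b
```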
If the recombination frequency is greater than 50 percent, it means that the genes are unlinked: they are either located on different chromosomes or are sufficiently distant from each other on the same chromosome. Any recombination frequency greater than 50 percent is expressed as exactly 50 percent because unlinked genes are equally likely as not to be separated during gamete formation. [ 1 ] | https://en.wikipedia.org/wiki/Three-point_cross
The three-point bending flexural test provides values for the modulus of elasticity
in bending E f {\displaystyle E_{f}} , flexural stress σ f {\displaystyle \sigma _{f}} , flexural strain ϵ f {\displaystyle \epsilon _{f}} and the flexural stress–strain response of the material. This test is performed on a universal testing machine (tensile testing machine or tensile tester) with a three-point or four-point bend fixture. The main advantage of a three-point flexural test is the ease of the specimen preparation and testing. However, this method also has some disadvantages: the results are sensitive to the specimen geometry, the loading geometry, and the strain rate.
The test method for conducting the test usually involves a specified test fixture on a universal testing machine . Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart.
Calculation of the flexural stress σ f {\displaystyle \sigma _{f}}
Calculation of the flexural strain ϵ f {\displaystyle \epsilon _{f}}
Calculation of flexural modulus E f {\displaystyle E_{f}} [ 2 ]
In these formulas the following parameters are used (they are defined again in the sketch below):
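A minimal sketch assuming the standard three-point-bend formulas for a beam of rectangular cross-section, σ f = 3 F L / ( 2 b d 2 ) , ϵ f = 6 D d / L 2 and E f = L 3 m / ( 4 b d 3 ) ; these specific forms are the usual textbook ones and are supplied here as assumptions, together with hypothetical specimen values:

```python
def flexural_stress(F, L, b, d):
    """F: load (N), L: support span (mm), b: specimen width (mm), d: specimen thickness (mm)."""
    return 3 * F * L / (2 * b * d**2)          # MPa when N and mm are used

def flexural_strain(D, L, d):
    """D: mid-span deflection (mm)."""
    return 6 * D * d / L**2                    # dimensionless

def flexural_modulus(m, L, b, d):
    """m: slope of the initial straight-line portion of the load-deflection curve (N/mm)."""
    return L**3 * m / (4 * b * d**3)           # MPa

print(flexural_stress(F=500, L=80, b=10, d=4))   # hypothetical specimen: 375 MPa
```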
The fracture toughness of a specimen can also be determined using a three-point flexural test. The stress intensity factor at the crack tip of a single edge notch bending specimen is [ 3 ]
where P {\displaystyle P} is the applied load, B = W / 2 {\displaystyle B=W/2} is the thickness of the specimen, a {\displaystyle a} is the crack length, and W {\displaystyle W} is the width of the specimen. In a three-point bend test, a fatigue crack is created at the tip of the notch by cyclic loading. The length of the crack is measured. The specimen is then loaded monotonically. A plot of the load versus the crack opening displacement is used to determine the load at which the crack starts growing. This load is substituted into the above formula to find the fracture toughness K I c {\displaystyle K_{Ic}} .
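A hedged sketch of the single-edge-notch-bend calculation just described. The geometry function used below is a form commonly quoted for a span-to-width ratio S / W = 4 ; its coefficients are an assumption to be checked against the relevant standard rather than a value taken from this article:

```python
import math

def k_senb(P, B, W, a):
    """P: load at crack growth (N), B: thickness (mm), W: width (mm), a: crack length (mm)."""
    x = a / W
    # Geometry function for S/W = 4 (assumed coefficients; verify against the standard).
    f = (6 * math.sqrt(x)
         * (1.99 - x * (1 - x) * (2.15 - 3.93 * x + 2.7 * x**2))
         / ((1 + 2 * x) * (1 - x)**1.5))
    return P / (B * math.sqrt(W)) * f          # MPa*sqrt(mm) when N and mm are used

print(k_senb(P=100, B=5, W=10, a=5))           # hypothetical specimen with a/W = 0.5
```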
The ASTM D5045-14 [ 4 ] and E1290-08 [ 5 ] standards suggest the relation
where
The predicted values of K I {\displaystyle K_{\rm {I}}} are nearly identical for the ASTM and Bower equations for crack lengths less than 0.6 W {\displaystyle W} . | https://en.wikipedia.org/wiki/Three-point_flexural_test |
The three-process view is a psychological term coined by Janet E. Davidson and Robert Sternberg .
According to this concept, there are three kinds of insight: selective-encoding, selective-comparison, and selective-combination. [ 1 ]
| https://en.wikipedia.org/wiki/Three-process_view
Three-taxon analysis (or TTS , three-item analysis , 3ia ) is a cladistics-based method of phylogenetic reconstruction. Introduced by Nelson and Platnick in 1991 [ 2 ] to reconstruct organisms' phylogeny, this method can also be applied to biogeographic areas . It attempts to reconstruct complex phylogenetic trees by breaking the problem down into simpler chunks. Rather than trying to resolve the relationships of all X taxa at once, it considers taxa three at a time. It is relatively easy to generate three-taxon statements (3is); that is, statements of the form "A and B are more closely related to one another than to C". [ 3 ] Once each group of three taxa has been considered, the method constructs a tree that is consistent with as many three-item statements as possible. [ 3 ]
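A minimal sketch (with a hypothetical rooted tree) of how such statements can be read off a tree: every pair of leaves inside a clade is more closely related to one another than to any leaf outside that clade:

```python
def leaves(node):
    """Leaves of a nested-tuple tree; a string is a leaf, a tuple is an internal node."""
    return [node] if isinstance(node, str) else [x for child in node for x in leaves(child)]

def clades(node, acc=None):
    """Leaf sets of all internal nodes, including the root."""
    acc = [] if acc is None else acc
    if not isinstance(node, str):
        acc.append(set(leaves(node)))
        for child in node:
            clades(child, acc)
    return acc

def three_item_statements(tree):
    all_leaves = set(leaves(tree))
    stmts = set()
    for clade in clades(tree):
        if clade == all_leaves:
            continue                           # the root clade separates nothing
        outside = all_leaves - clade
        for a in clade:
            for b in clade:
                if a < b:
                    for c in outside:
                        stmts.add((a, b, c))   # read as "(a, b) closer than c"
    return stmts

tree = ((("A", "B"), "C"), "D")                # hypothetical rooted tree
for a, b, c in sorted(three_item_statements(tree)):
    print(f"{a} and {b} are more closely related to one another than to {c}")
```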
From a theoretical point of view, the method has three main problems: (1) character evolution is a priori assumed to be irreversible; (2) 3is that are not logically independent are treated as if they are; (3) 3is that are considered as independent support for a given tree may be mutually exclusive on that tree. [ 4 ]
A computer program that implements three-taxon analysis is LisBeth [ 5 ] (for systematic and biogeographic studies). LisBeth has been freely released. [ 6 ] A recent simulation-based study found that three-taxon analysis yields good power and an error rate intermediate between parsimony with ordered states and parsimony with unordered states. [ 1 ] | https://en.wikipedia.org/wiki/Three-taxon_analysis
In nonlinear systems , the three-wave equations , sometimes called the three-wave resonant interaction equations or triad resonances , describe small-amplitude waves in a variety of non-linear media, including electrical circuits and non-linear optics . They are a set of completely integrable nonlinear partial differential equations . Because they provide the simplest, most direct example of a resonant interaction , have broad applicability in the sciences, and are completely integrable, they have been intensively studied since the 1970s. [ 1 ]
The three-wave equation arises by consideration of some of the simplest imaginable non-linear systems . Linear differential systems have the generic form
for some differential operator D . The simplest non-linear extension of this is to write
How can one solve this? Several approaches are available. In a few exceptional cases, there might be known exact solutions to equations of this form. In general, these are found in some ad hoc fashion after applying some ansatz . A second approach is to assume that ε ≪ 1 {\displaystyle \varepsilon \ll 1} and use perturbation theory to find "corrections" to the linearized theory. A third approach is to apply techniques from scattering matrix ( S-matrix ) theory.
In the S-matrix approach, one considers particles or plane waves coming in from infinity, interacting, and then moving out to infinity. Counting from zero, the zero-particle case corresponds to the vacuum , consisting entirely of the background. The one-particle case is a wave that comes in from the distant past and then disappears into thin air; this can happen when the background is absorbing, deadening or dissipative . Alternately, a wave appears out of thin air and moves away. This occurs when the background is unstable and generates waves: one says that the system " radiates ". The two-particle case consists of a particle coming in, and then going out. This is appropriate when the background is non-uniform: for example, an acoustic plane wave comes in, scatters from an enemy submarine , and then moves out to infinity; by careful analysis of the outgoing wave, characteristics of the spatial inhomogeneity can be deduced. There are two more possibilities: pair creation and pair annihilation . In this case, a pair of waves is created "out of thin air" (by interacting with some background), or disappear into thin air.
Next on this count is the three-particle interaction. It is unique, in that it does not require any interacting background or vacuum, nor is it "boring" in the sense of a non-interacting plane-wave in a homogeneous background. Writing ψ 1 , ψ 2 , ψ 3 {\displaystyle \psi _{1},\psi _{2},\psi _{3}} for these three waves moving from/to infinity, this simplest quadratic interaction takes the form of
and cyclic permutations thereof. This generic form can be called the three-wave equation ; a specific form is presented below. A key point is that all quadratic resonant interactions can be written in this form (given appropriate assumptions). For time-varying systems where λ {\displaystyle \lambda } can be interpreted as energy , one may write
for a time-dependent version.
Formally, the three-wave equation is
where j , ℓ , m = 1 , 2 , 3 {\displaystyle j,\ell ,m=1,2,3} cyclic, v j {\displaystyle v_{j}} is the group velocity for the wave having k → j , ω j {\displaystyle {\vec {k}}_{j},\omega _{j}} as the wave-vector and angular frequency , and ∇ {\displaystyle \nabla } the gradient , taken in flat Euclidean space in n dimensions. The η j {\displaystyle \eta _{j}} are the interaction coefficients; by rescaling the wave, they can be taken η j = ± 1 {\displaystyle \eta _{j}=\pm 1} . By cyclic permutation , there are four classes of solutions. Writing η = η 1 η 2 η 3 {\displaystyle \eta =\eta _{1}\eta _{2}\eta _{3}} one has η = ± 1 {\displaystyle \eta =\pm 1} . The η = − 1 {\displaystyle \eta =-1} are all equivalent under permutation. In 1+1 dimensions, there are three distinct η = + 1 {\displaystyle \eta =+1} solutions: the + + + {\displaystyle +++} solutions, termed explosive ; the − − + {\displaystyle --+} cases, termed stimulated backscatter , and the − + − {\displaystyle -+-} case, termed soliton exchange . These correspond to very distinct physical processes. [ 2 ] [ 3 ] One interesting solution is termed the simulton , it consists of three comoving solitons, moving at a velocity v that differs from any of the three group velocities v 1 , v 2 , v 3 {\displaystyle v_{1},v_{2},v_{3}} . This solution has a possible relationship to the "three sisters" observed in rogue waves , even though deep water does not have a three-wave resonant interaction.
The lecture notes by Harvey Segur provide an introduction. [ 4 ]
The equations have a Lax pair , and are thus completely integrable . [ 1 ] [ 5 ] The Lax pair is a 3x3 matrix pair, to which the inverse scattering method can be applied, using techniques by Fokas . [ 6 ] [ 7 ] The class of spatially uniform solutions is known; these are given by the Weierstrass elliptic ℘-function . [ 8 ] The resonant interaction relations are in this case called the Manley–Rowe relations ; the invariants that they describe are easily related to the modular invariants g 2 {\displaystyle g_{2}} and g 3 . {\displaystyle g_{3}.} [ 9 ] That these appear is perhaps not entirely surprising, as there is a simple intuitive argument. Subtracting one wave-vector from the other two, one is left with two vectors that generate a period lattice . All possible relative positions of two vectors are given by Klein's j-invariant , and thus one should expect solutions to be characterized by this.
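The spatially uniform case reduces to ordinary differential equations, and Manley–Rowe-type invariants can be checked numerically. The sketch below assumes one particular sign convention (of several possible) for the reduced system, so the specific equations and the initial amplitudes are illustrative assumptions rather than forms taken from the article:

```python
import numpy as np

def rhs(A):
    """Assumed spatially uniform three-wave system: dA_j/dt = i * conj(A_l) * conj(A_m)."""
    A1, A2, A3 = A
    return np.array([1j * np.conj(A2) * np.conj(A3),
                     1j * np.conj(A1) * np.conj(A3),
                     1j * np.conj(A1) * np.conj(A2)])

A = np.array([0.3 + 0j, 0.2 + 0.1j, 0.1 - 0.2j])   # hypothetical initial amplitudes
dt, steps = 1e-3, 2000
for _ in range(steps):                             # classical fourth-order Runge-Kutta
    k1 = rhs(A)
    k2 = rhs(A + dt * k1 / 2)
    k3 = rhs(A + dt * k2 / 2)
    k4 = rhs(A + dt * k3)
    A = A + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

n = np.abs(A)**2
# With this convention |A1|^2 - |A2|^2 and |A1|^2 - |A3|^2 are conserved;
# both should stay at their initial value of 0.04.
print(n[0] - n[1], n[0] - n[2])
```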
A variety of exact solutions for various boundary conditions are known. [ 10 ] A "nearly general solution" to the full non-linear PDE for the three-wave equation has recently been given. It is expressed in terms of five functions that can be freely chosen, and a Laurent series for the sixth parameter. [ 8 ] [ 9 ]
Some selected applications of the three-wave equations include: | https://en.wikipedia.org/wiki/Three-wave_equation |
The Three Rs ( 3Rs ) are guiding principles for more ethical use of animals in product testing and scientific research. They were first described by W. M. S. Russell and R. L. Burch in 1959. [ 1 ] The 3Rs are:
The 3Rs have a broader scope than simply encouraging alternatives to animal testing , but aim to improve animal welfare and scientific quality where the use of animals cannot be avoided. In many countries, these 3Rs are now explicit in legislation governing animal use. It is usual to capitalise the first letter of each of the three 'R' principles (i.e. 'Replacement' rather than 'replacement') to avoid ambiguity and clarify reference to the 3Rs principles.
In 1954, the Universities Federation for Animal Welfare (UFAW) decided to sponsor systematic research on the progress of humane techniques in the laboratory. [ 2 ] In October of that year, William Russell, described as a brilliant young zoologist who happened to be also a psychologist and a classical scholar, and Rex Burch, a microbiologist, were appointed to inaugurate a systematic study of laboratory techniques in their ethical aspects. In 1956, they prepared a general report to the Federation's committees, and this report formed the nucleus of the book which was completed at the beginning of 1958. Over much of the period they worked with a special Consultative Committee, chaired by Professor Peter Medawar .
As a contribution to the centenary of The Origin of Species , the quotations at the head of each chapter are all from the works of Charles Darwin .
A common misconception of the 3Rs is that they refer only to replacement; [ 3 ] however, their scope is much broader. Moreover, while the 3Rs were designed for research on laboratory animal models, their implementation has been encouraged also in farmed animals [ 4 ] [ 5 ] and wildlife conservation research. [ 6 ] [ 7 ]
Replacement: In the original book, the 3Rs were restricted, arbitrarily, to vertebrates. Russell and Burch discussed the possibility of suffering with reference to sentience . They used the term "replacement technique" for any scientific method using non-sentient material to replace methods which use conscious living vertebrates. [ 1 ] This non-sentient material included higher plants, microorganisms, and the more degenerate metazoan endoparasites which, they argued, had nervous and sensory systems that were almost atrophied. They acknowledged that the arbitrary exclusion of invertebrates meant that in several contexts, these species could be considered as possible replacements for vertebrate subjects; they termed this "comparative substitution". Russell and Burch also considered levels of replacement. In "relative replacement", animals are still required, though during an experiment they are exposed, probably or certainly, to no distress at all. In "absolute replacement", animals are not required at all at any stage.
Replacement strategies include:
More recent interpretations of the replacement principle suggest the preferred use of non-animal methods over animal methods whenever it is possible to achieve the same scientific aims, i.e. invertebrates are not considered suitable replacements for vertebrates. However, others such as the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) advocate the use of some invertebrates in replacement studies. [ 8 ] Therefore, the term 'Replacement' can refer to the use of a supposedly less sentient species, [ 9 ] as in "relative replacement".
Russell and Burch, writing six decades ago, could not have anticipated some of the technologies that have emerged today. One of these technologies, 3D cell cultures , also known as organoids or mini-organs, has replaced animal models for some types of research. In recent years, scientists have produced organoids that can be used to model disease and test new drugs. Organoids grow in vitro on scaffolds (biological or synthetic hydrogels such as Matrigel ) or in a culture medium. [ 10 ] Organoids are derived from three kinds of human or animal stem cells—embryonic pluripotent stem cells (ESCs), adult somatic stem cells (ASCs), and induced pluripotent stem cells (iPSCs). These organoids are grown in vitro and mimic the structure and function of different organs such as the brain, liver, lung, kidney, and intestine. Organoids have been developed to study infectious disease. Scientists at Johns Hopkins University have developed mini-brain organoids to model how COVID-19 can affect the brain. [ 11 ] Researchers have used brain organoids to model how the Zika virus disrupts fetal brain development. Tumoroids—3D cell cultures derived from cells biopsied from human patients—can be used in studying the genomics and drug resistance of tumors in different organs. Organoids are also used in modeling genetic diseases such as cystic fibrosis, [ 12 ] neurodegenerative diseases such as Alzheimer's and Parkinson's, infectious diseases such as MERS-CoV and norovirus, and parasitic infections such as Toxoplasma gondii . [ 10 ] Human- and animal-cell-derived organoids are also used extensively in pharmacological and toxicological research. [ 13 ] [ 14 ]
Reduction: Reduction refers to methods which minimise the number of animals used per study. [ 8 ] Russell and Burch suggested a reduction in the number of animals used could be achieved in several ways. One general way in which great reduction may occur is by the right choice of strategies in the planning and performance of whole lines of research. A second method is by controlling variation amongst the animals used in studies, and a third method is careful design and analysis of studies.
With the advent, development and availability of computers since the original 3Rs, large data-sets can be used in statistical analysis, thereby reducing the numbers of animals used. In some cases, by using previously published studies, the use of animals can be totally avoided by avoiding unnecessary replication. Modern imaging techniques in conjunction with new statistical analysis methods also allow reductions in the numbers of animals used, for example, by providing greater information per animal. [ 15 ] [ 16 ]
Refinement: Russell and Burch wrote "Suppose, for a particular purpose, we cannot use replacing techniques. Suppose it is agreed that we shall be using every device of theory and practice to reduce to a minimum the number of animals we have to employ. It is at this point that refinement starts, and its object is simply to reduce to an absolute minimum the amount of distress imposed on those animals that are still used." [ 1 ] Amongst areas of experiments that can be refined are the procedure to be used, the appropriateness of the species (its suitability for the procedure and its responses to a laboratory environment in general).
Refinement techniques may include: [ 17 ]
The definition of Refinement has evolved from that provided by Russell and Burch. A newer definition is now commonly accepted: "any approach which avoids or minimises the actual or potential pain, distress and other adverse effects experienced at any time during the life of the animals involved, and which enhances their wellbeing." [ 18 ] Refinement encompasses not only the direct harms associated with animal use, but the indirect, or contingent harms associated with breeding, transportation, housing and husbandry.
Some have criticized the Three Rs for what they call "ambiguities" and tensions in the understanding and implementation of different prongs of the approach –Refinement, Reduction and Replacement. [ 19 ] This is, in part, because different stakeholders (e.g. animal experimenters, institutional figures, policy makers, activists and the public) may interpret the Three Rs differently. [ 19 ] [ 20 ] The 3Rs principles do not address some issues, such as the ethics of using animals in research and focus instead on improving the humane use of animals which are used. [ 19 ]
Others have noted that promotion of the 3Rs has failed to reduce the number of animals used in experiments. [ 21 ] [ 22 ] However, this may be the result of a misunderstanding of the definition of 'Reduction', which refers not to an absolute reduction in the number of animals used, but to a reduction in the number of animals used per study. By its nature, it is difficult to estimate the number of animals not used in scientific procedures as a result of Replacement or Reduction techniques; however, despite the rapid increases in medical research, animal numbers have not increased at the same rate. [ 23 ]
In a review of dozens of articles involving mice in prolonged pain experiments, researchers found "there were no references to the '3Rs ' " which in turn "raise serious questions about whether the 3Rs' principles of Replacement, Reduction, and Refinement are being appropriately implemented by researchers and institutions". The researchers continued, [ 24 ]
That the 3Rs or any of the 3Rs' components—Replace, Reduce, or Refine—were not mentioned in any of the... studies suggests that prolonged mouse pain researchers may be unaware of or indifferent to the 3Rs framework and that this aspect is not considered relevant in the peer review process of manuscripts for scientific journals... [T]he growing proportion of the number of studies...in this paper suggests that adherence to guidelines and/or animal use committee requirements is not translating into significant progress from a reduction or replacement perspective.
Following a review of the quality of experimental design in published journal articles, [ 25 ] including the use of the 3Rs, it was found that the use and reporting of these principles was sporadic. As a result, the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed [ 26 ] and published in 2010. The ARRIVE guidelines present a 20-point list of items which must be reported in publications which have used animals in scientific research, including sample size calculations, explicit descriptions of the environmental enrichment employed and welfare-related assessments made during the study. Many journals now require authors to comply with the ARRIVE guidelines in the preparation of manuscripts. [ 27 ] A follow-up review published in 2014 [ 28 ] found that there were still low reporting levels of some elements, such as reporting of appropriate statistical methods and the avoidance of bias.
In a survey of scientists in Portugal who had recently undergone training in the Three Rs, researchers found that a "surprisingly large number of researchers were unaware of the 3Rs principle, even those who had worked with animal models for over 10 years" and that subsequent training in the Three Rs "did not change perceptions on the current and future needs for animal use in research", but did increase knowledge of the application of the 3Rs. [ 29 ] The authors found that the training they provided "appear to have little influence on researchers' acceptance of replacement alternatives to animal use". [ 29 ]
There are a number of organisations which promote the implementation of the 3Rs and methods that avoid the use of animals in research. Amongst the earliest is FRAME (the Fund for the Replacement of Animals in Medical Experiments ) in the UK, established in 1969. The ZEBET (Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergänzungsmethoden zum Tierversuch) was founded in Germany in 1989, as the first governmental institution with the mandate to reduce animal experiments on a scientific basis. The United Kingdom's Home Office led the Inter-Departmental Group on Reduction, Refinement and Replacement, which aims to improve the application of the 3Rs and promote research into alternatives, reducing the need for toxicity testing through better sharing of data, and encouraging the validation and acceptance of alternatives. The Group reported to Ministers that there was support for a body which would act as a means to better publicise and coordinate what is already done by way of research into the 3Rs. In May 2004, the NC3Rs was announced in the UK to act as a focal point for research into the 3Rs. [ 30 ] Although the principles of the 3Rs were implicit in UK law under the Animals (Scientific Procedures) Act (1986) , the Directive 2010/63/EU governing animal use within the European Union [ 31 ] makes the principles explicit and researchers must demonstrate the use of Replacement, Reduction and Refinement techniques in research involving animals. The Directive introduced a new level of transparency to help progress towards eventually replacing animal use in science and was instrumental in accelerating the concrete application of the 3Rs and the establishment of institutions and centres dedicated to dissemination, education and research based on the Principles across Europe. To date there are such centres in Austria, Belgium, Denmark, Germany, Ireland, Italy, Luxembourg, The Netherlands, Norway, Spain, Sweden and Switzerland. [ 32 ] | https://en.wikipedia.org/wiki/Three_Rs_(animal_research)
Three dots (∴), also known as "tripunctual abbreviation" or "triple dot", is a symbol used all over the world in Freemasonry for abbreviations, signatures, and symbolic representation . The dots are typically arranged in a triangular pattern and carry multiple layers of meaning within Masonic tradition. [ 1 ] The (∴) is used only for Masonic abbreviations; any non-Masonic abbreviation must be written with a simple dot. For example, a date on a Masonic document could be written 6024 A∴L∴/2024 A.D.
The symbol has been used in Freemasonry since its earliest speculative days, at least as early as 1764, where it is found in the registers of La Sincerité Lodge in Besançon , France which strongly indicates an earlier use. [ 1 ] While some attribute its widespread adoption to a circular issued by the Grand Orient de France on August 12, 1774, evidence shows earlier usage. [ 2 ]
The symbol predates Freemasonry, appearing in various contexts: [ 3 ]
The triple dot is used in Masonic writing to denote abbreviations of Masonic terms and titles: [ 4 ]
For plural forms, the initial letter is doubled:
The three dots symbol (∴) is an integral part of Masonic written tradition, used exclusively within Masonic context. All Master Masons are entitled to use these dots when writing Masonic terms, titles, or positions. The usage is strictly reserved for Masonic terminology and should not be applied to non-Masonic (profane) words or phrases.
A widespread misconception holds that the three dots are exclusively reserved for Grand Lodge usage. This error likely originated from historical circumstances, particularly following the Morgan Affair (1826). [ 17 ] During this period, many individual Lodges abandoned or lost various traditional practices, while Grand Lodges maintained strict adherence to Masonic protocols and writing conventions. As Grand Lodges often became the primary preservers of these writing traditions while individual Lodges departed from them, particularly in the United States, this may have contributed to the misconception of exclusive Grand Lodge usage, but the three dots can be used for all Masonic communication, individual Lodges, messages, communications and attached to a signature by any Master Masons. [ 18 ]
The proper representation of the three dots is crucial for preserving Masonic written tradition. Several improper variations have emerged over time; these are deprecated: [ citation needed ]
The correct format is W∴M∴, using the proper symbol (∴) rather than substituting periods or colons. This standardization plays a vital role in preserving Masonic tradition and ensures clear communication within the fraternity. Using the proper symbol helps prevent degradation of the traditional format and maintains the integrity of Masonic written communication. [ citation needed ]
Only Master Masons may incorporate the triple dot symbol into their signatures as a mark of identification. This practice became widespread in the late 18th and early 19th centuries and is reserved for Master Masons, used as proof that the person has attained the degree of Master Mason. [ 2 ] When traveling, these three dots after a signature serve as a discreet sign of recognition. A fellow Mason seeing this symbol would recognize the traveler as an accomplished Master Mason and could therefore extend appropriate fraternal courtesies and assistance to the brother, even as a stranger in unfamiliar surroundings.
The Masonic three dots have appeared in political contexts as deliberate identifiers. During the French Revolution and Empire period (late 18th to early 19th century), government officials who were Freemasons would often incorporate the three dots into their signatures on official documents. [ 19 ] This practice created networks of mutual recognition and support within government institutions. A notable modern example emerged when former French President Nicolas Sarkozy's signature appeared to contain three points in a triangular formation, prompting public speculation about potential Masonic connections. The controversy intensified when observers noted these points mysteriously disappeared from photocopies of the same documents displayed at the Palace of Justice. [ 20 ]
The triple dot symbol carries multiple interpretations within Masonic tradition:
The arrangement of the three dots inherently forms a triangle , a fundamental geometric shape deeply significant in Freemasonry and directly related to the symbol of the Luminous Delta (or Radiant Delta). [ 21 ] The Luminous Delta is a prominent Masonic emblem, typically depicted as an equilateral triangle , often with an All-Seeing Eye or the Tetragrammaton (the four-letter Hebrew name for God, יהוה) at its center. This symbol is frequently displayed in the East of the Masonic Lodge, above the seat of the Worshipful Master.
Its symbolism is rich and multifaceted:
The three dots are associated with a wide array of triadic concepts in Masonic philosophy, reflecting the significance of the number three. As Rizzardo da Camino notes, these can include: [ 25 ] | https://en.wikipedia.org/wiki/Three_dots_(Freemasonry) |
In molecular genetics , the three prime untranslated region ( 3′-UTR ) is the section of messenger RNA (mRNA) that immediately follows the translation termination codon . The 3′-UTR often contains regulatory regions that post-transcriptionally influence gene expression .
During gene expression , an mRNA molecule is transcribed from the DNA sequence and is later translated into a protein . Several regions of the mRNA molecule are not translated into a protein including the 5' cap , 5' untranslated region , 3′ untranslated region and poly(A) tail . Regulatory regions within the 3′-untranslated region can influence polyadenylation , translation efficiency, localization, and stability of the mRNA. [ 1 ] [ 2 ] The 3′-UTR contains binding sites for both regulatory proteins and microRNAs (miRNAs). By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also has silencer regions which bind to repressor proteins and will inhibit the expression of the mRNA.
Many 3′-UTRs also contain AU-rich elements (AREs). Proteins bind AREs to affect the stability or decay rate of transcripts in a localized manner or affect translation initiation. Furthermore, the 3′-UTR contains the sequence AAUAAA that directs addition of several hundred adenine residues called the poly(A) tail to the end of the mRNA transcript. Poly(A) binding protein (PABP) binds to this tail, contributing to regulation of mRNA translation, stability, and export. For example, poly(A) tail bound PABP interacts with proteins associated with the 5' end of the transcript, causing a circularization of the mRNA that promotes translation.
The 3′-UTR can also contain sequences that attract proteins to associate the mRNA with the cytoskeleton , transport it to or from the cell nucleus , or perform other types of localization. In addition to sequences within the 3′-UTR, the physical characteristics of the region, including its length and secondary structure , contribute to translation regulation. These diverse mechanisms of gene regulation ensure that the correct genes are expressed in the correct cells at the appropriate times.
The 3′-UTR of mRNA has a great variety of regulatory functions that are controlled by the physical characteristics of the region. One such characteristic is the length of the 3′-UTR, which in the mammalian genome has considerable variation. This region of the mRNA transcript can range from 60 nucleotides to about 4000. [ 3 ] On average the length for the 3′-UTR in humans is approximately 800 nucleotides, while the average length of 5'-UTRs is only about 200 nucleotides. [ 4 ] The length of the 3′-UTR is significant since longer 3′-UTRs are associated with lower levels of gene expression. One possible explanation for this phenomenon is that longer regions have a higher probability of possessing more miRNA binding sites that have the ability to inhibit translation. In addition to length, the nucleotide composition also differs significantly between the 5' and 3′-UTR. The mean G+C percentage of the 5'-UTR in warm-blooded vertebrates is about 60% as compared to only 45% for 3′-UTRs. This is important because an inverse correlation has been observed between the G+C% of 5' and 3′-UTRs and their corresponding lengths. The UTRs that are GC-poor tend to be longer than those located in GC-rich genomic regions. [ 4 ]
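A small sketch (with a hypothetical sequence) of the two statistics discussed above, the length of a 3′-UTR and its G+C percentage:

```python
def utr_stats(seq):
    """Return (length in nucleotides, G+C percentage) for an RNA sequence."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return len(seq), 100.0 * gc / len(seq)

utr = "AUUUAUUUAGCGCAAUAAAGCUAGCUAACCGG"   # hypothetical 3'-UTR fragment
length, gc_percent = utr_stats(utr)
print(length, round(gc_percent, 1))
```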
Sequences within the 3′-UTR also have the ability to degrade or stabilize the mRNA transcript. Modifications that control a transcript's stability allow expression of a gene to be rapidly controlled without altering translation rates. One group of elements in the 3′-UTR that can help destabilize an mRNA transcript are the AU-rich elements (AREs). These elements range in size from 50 to 150 base pairs and generally contain multiple copies of the pentanucleotide AUUUA. Early studies indicated that AREs can vary in sequence and fall into three main classes that differ in the number and arrangement of motifs. [ 1 ] Another set of elements that is present in both the 5' and 3′-UTR are iron response elements (IREs). The IRE is a stem-loop structure within the untranslated regions of mRNAs that encode proteins involved in cellular iron metabolism. The mRNA transcript containing this element is either degraded or stabilized depending upon the binding of specific proteins and the intracellular iron concentrations. [ 3 ]
The 3′-UTR also contains sequences that signal additions to be made, either to the transcript itself or to the product of translation. For example, there are two different polyadenylation signals present within the 3′-UTR that signal the addition of the poly(A) tail. These signals initiate the synthesis of the poly(A) tail to a defined length of about 250 base pairs. [ 1 ] The primary signal used is the nuclear polyadenylation signal (PAS) with the sequence AAUAAA located toward the end of the 3′-UTR. [ 3 ] However, during early development cytoplasmic polyadenylation can occur instead and regulate the translational activation of maternal mRNAs. The element that controls this process is called the CPE, which is AU-rich and located in the 3′-UTR as well. The CPE generally has the structure UUUUUUAU and is usually within 100 base pairs of the nuclear PAS. [ 3 ] Another specific addition signaled by the 3′-UTR is the incorporation of selenocysteine at UGA codons of mRNAs encoding selenoproteins. Normally the UGA codon signals a stop to translation, but in this case a conserved stem-loop structure called the selenocysteine insertion sequence (SECIS) causes the insertion of selenocysteine instead. [ 4 ]
The 3′-untranslated region plays a crucial role in gene expression by influencing the localization, stability, export, and translation efficiency of an mRNA. It contains various sequences that are involved in gene expression, including microRNA response elements (MREs), AU-rich elements (AREs), and the poly(A) tail. In addition, the structural characteristics of the 3′-UTR as well as its use of alternative polyadenylation play a role in gene expression.
The 3′-UTR often contains microRNA response elements (MREs), which are sequences to which miRNAs bind. miRNAs are short, non-coding RNA molecules capable of binding to mRNA transcripts and regulating their expression. One miRNA mechanism involves partial base pairing of the 5' seed sequence of an miRNA to an MRE within the 3′-UTR of an mRNA; this binding then causes translational repression.
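A sketch (with hypothetical sequences) of the seed-matching idea: the 3′-UTR is scanned for sites complementary to the miRNA's 5' seed (bases 2–7). The sequences and the 6-nucleotide seed definition below are illustrative assumptions, not data from the article:

```python
def reverse_complement(rna):
    """Reverse complement of an RNA string."""
    return rna.translate(str.maketrans("AUGC", "UACG"))[::-1]

def seed_sites(mirna, utr):
    """Start positions in the 3'-UTR that match the reverse complement of the miRNA seed."""
    seed = mirna[1:7]                      # 6-nt seed, miRNA positions 2-7
    target = reverse_complement(seed)      # what the mRNA must contain
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"           # hypothetical miRNA
utr = "AAGCCUACCUCAGGGAAUACCUCAUUU"        # hypothetical 3'-UTR fragment
print(seed_sites(mirna, utr))              # start positions of candidate MREs
```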
In addition to containing MREs, the 3′-UTR also often contains AU-rich elements (AREs) , which are 50 to 150 bp in length and usually include many copies of the sequence AUUUA. ARE binding proteins (ARE-BPs) bind to AU-rich elements in a manner that is dependent upon tissue type, cell type, timing, cellular localization, and environment. In response to different intracellular and extracellular signals, ARE-BPs can promote mRNA decay, affect mRNA stability, or activate translation. This mechanism of gene regulation is involved in cell growth, cellular differentiation , and adaptation to external stimuli. It therefore acts on transcripts encoding cytokines , growth factors , tumor suppressors, proto-oncogenes , cyclins , enzymes , transcription factors , receptors , and membrane proteins . [ 1 ]
The poly(A) tail contains binding sites for poly(A) binding proteins (PABPs). These proteins cooperate with other factors to affect the export, stability, decay, and translation of an mRNA. PABPs bound to the poly(A) tail may also interact with proteins, such as translation initiation factors, that are bound to the 5' cap of the mRNA. This interaction causes circularization of the transcript, which subsequently promotes translation initiation. Furthermore, it allows for efficient translation by causing recycling of ribosomes . [ 1 ] [ 2 ] While the presence of a poly(A) tail usually aids in triggering translation, the absence or removal of one often leads to exonuclease-mediated degradation of the mRNA. Polyadenylation itself is regulated by sequences within the 3′-UTR of the transcript. These sequences include cytoplasmic polyadenylation elements (CPEs), which are uridine-rich sequences that contribute to both polyadenylation activation and repression. CPE-binding protein (CPEB) binds to CPEs in conjunction with a variety of other proteins in order to elicit different responses. [ 2 ]
While the sequence that constitutes the 3′-UTR contributes greatly to gene expression, the structural characteristics of the 3′-UTR also play a large role. In general, longer 3′-UTRs correspond to lower expression rates since they often contain more miRNA and protein binding sites that are involved in inhibiting translation. [ 1 ] [ 2 ] [ 5 ] Human transcripts possess 3′-UTRs that are on average twice as long as other mammalian 3′-UTRs. This trend reflects the high level of complexity involved in human gene regulation. In addition to length, the secondary structure of the 3′-untranslated region also has regulatory functions. Protein factors can either aid or disrupt folding of the region into various secondary structures. The most common structure is a stem-loop, which provides a scaffold for RNA binding proteins and non-coding RNAs that influence expression of the transcript. [ 1 ]
Another mechanism involving the structure of the 3′-UTR is called alternative polyadenylation (APA), which results in mRNA isoforms that differ only in their 3′-UTRs. This mechanism is especially useful for complex organisms as it provides a means of expressing the same protein but in varying amounts and locations. It is utilized by about half of human genes. APA can result from the presence of multiple polyadenylation sites or mutually exclusive terminal exons . Since it can affect the presence of protein and miRNA binding sites, APA can cause differential expression of mRNA transcripts by influencing their stability, export to the cytoplasm, and translation efficiency. [ 1 ] [ 5 ] [ 6 ]
Scientists use a number of methods to study the complex structures and functions of the 3′ UTR. Even if a given 3′-UTR in an mRNA is shown to be present in a tissue, the effects of localization, functional half-life, translational efficiency, and trans-acting elements must be determined to understand the 3′-UTR's full functionality. [ 7 ] Computational approaches, primarily by sequence analysis, have shown the existence of AREs in approximately 5 to 8% of human 3′-UTRs and the presence of one or more miRNA targets in as many as 60% or more of human 3′-UTRs. Software can rapidly compare millions of sequences at once to find similarities between various 3′ UTRs within the genome. Experimental approaches have been used to define sequences that associate with specific RNA-binding proteins; specifically, recent improvements in sequencing and cross-linking techniques have enabled fine mapping of protein binding sites within the transcript. [ 8 ] Induced site-specific mutations, for example those that affect the termination codon, polyadenylation signal, or secondary structure of the 3′-UTR, can show how mutated regions can cause translation deregulation and disease. [ 9 ] These types of transcript-wide methods should help our understanding of known cis elements and trans-regulatory factors within 3′-UTRs.
3′-UTR mutations can be very consequential because one alteration can be responsible for the altered expression of many genes. Transcriptionally, a mutation may affect only the allele and genes that are physically linked. However, since 3′-UTR binding proteins also function in the processing and nuclear export of mRNA, a mutation can also affect other unrelated genes. [ 9 ] Dysregulation of ARE-binding proteins (AUBPs) due to mutations in AU-rich regions can lead to diseases including tumorigenesis (cancer), hematopoietic malignancies, leukemogenesis, and developmental delay/autism spectrum disorders. [ 10 ] [ 11 ] [ 12 ] An expanded number of trinucleotide (CTG) repeats in the 3’-UTR of the dystrophia myotonica protein kinase (DMPK) gene causes myotonic dystrophy . [ 7 ] Retro-transposal 3-kilobase insertion of tandem repeat sequences within the 3′-UTR of fukutin protein is linked to Fukuyama-type congenital muscular dystrophy. [ 7 ] Elements in the 3′-UTR have also been linked to human acute myeloid leukemia , alpha-thalassemia , neuroblastoma , Keratinopathy , Aniridia , IPEX syndrome , and congenital heart defects . [ 9 ] The few UTR-mediated diseases identified only hint at the countless links yet to be discovered.
Despite current understanding, 3′-UTRs remain relatively poorly characterized. Since mRNAs usually contain several overlapping control elements, it is often difficult to specify the identity and function of each 3′-UTR element, let alone the regulatory factors that may bind at these sites. Additionally, each 3′-UTR contains many alternative AU-rich elements and polyadenylation signals. These cis- and trans-acting elements, along with miRNAs, offer a virtually limitless range of control possibilities within a single mRNA. [ 7 ] Future research through the increased use of deep-sequencing-based ribosome profiling will reveal more regulatory subtleties as well as new control elements and AUBPs. [ 1 ] | https://en.wikipedia.org/wiki/Three_prime_untranslated_region
The three prisoners problem appeared in Martin Gardner 's " Mathematical Games " column in Scientific American in 1959. [ 1 ] [ 2 ] It is mathematically equivalent to the Monty Hall problem with car and goat replaced respectively with freedom and execution. [ 3 ]
Three prisoners, A, B, and C, are in separate cells and sentenced to death. The governor has selected one of them at random to be pardoned. The warden knows which one is pardoned, but is not allowed to tell. Prisoner A begs the warden to let him know the identity of one of the two who are going to be executed. "If B is to be pardoned, give me C's name. If C is to be pardoned, give me B's name. And if I'm to be pardoned, secretly flip a coin to decide whether to give me name B or C."
The warden gives him B's name. Prisoner A is pleased because he believes that his probability of surviving has gone up from 1 / 3 to 1 / 2 , as it is now between him and C. Prisoner A secretly tells C the news, who reasons that A's chance of being pardoned is unchanged at 1 / 3 , but he is pleased because his own chance has gone up to 2 / 3 . Which prisoner is correct?
The answer is that prisoner A did not gain any information about his own fate, since he already knew that the warden would give him the name of someone else. Prisoner A, prior to hearing from the warden, estimates his chances of being pardoned as 1 / 3 , the same as both B and C. As the warden says B will be executed, it is either because C will be pardoned ( 1 / 3 chance), or A will be pardoned ( 1 / 3 chance) and the coin to decide whether to name B or C the warden flipped came up B ( 1 / 2 chance; for an overall 1 / 2 × 1 / 3 = 1 / 6 chance B was named because A will be pardoned). Hence, after hearing that B will be executed, the estimate of A's chance of being pardoned is half that of C. This means his chances of being pardoned, now knowing B is not, again are 1 / 3 , but C has a 2 / 3 chance of being pardoned.
The explanation above may be summarised in the following table. As the warden is asked by A, he can only answer B or C to be executed (or "not pardoned").

pardoned    warden says "not B"    warden says "not C"    sum
A ( 1 / 3 )    1 / 6                  1 / 6                  1 / 3
B ( 1 / 3 )    0                      1 / 3                  1 / 3
C ( 1 / 3 )    1 / 3                  0                      1 / 3
As the warden has answered that B will not be pardoned, the solution comes from the second column "not B". It appears that the odds for A vs. C to be pardoned are 1:2.
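These joint probabilities can be checked by direct enumeration. The following sketch is an illustration rather than part of the original problem statement; it lists every combination of pardoned prisoner and warden's answer with its probability, then conditions on the answer that B will be executed.

```python
from fractions import Fraction

# Joint distribution over (pardoned prisoner, name the warden gives to A).
# If A is pardoned the warden flips a fair coin; otherwise he must name the
# one of B, C who is not pardoned.
joint = {
    ("A", "B"): Fraction(1, 3) * Fraction(1, 2),
    ("A", "C"): Fraction(1, 3) * Fraction(1, 2),
    ("B", "C"): Fraction(1, 3),
    ("C", "B"): Fraction(1, 3),
}

says_b = {k: v for k, v in joint.items() if k[1] == "B"}   # warden names B
p_says_b = sum(says_b.values())                            # = 1/2
posterior = {k[0]: v / p_says_b for k, v in says_b.items()}

print(posterior)   # {'A': Fraction(1, 3), 'C': Fraction(2, 3)}
```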
Call A {\displaystyle A} , B {\displaystyle B} and C {\displaystyle C} the events that the corresponding prisoner will be pardoned, and b {\displaystyle b} the event that the warden tells A that prisoner B is to be executed. Then, using Bayes' theorem , the posterior probability of A being pardoned is: [ 4 ]
{\displaystyle P(A|b)={\frac {P(b|A)P(A)}{P(b|A)P(A)+P(b|B)P(B)+P(b|C)P(C)}}={\frac {{\tfrac {1}{2}}\cdot {\tfrac {1}{3}}}{{\tfrac {1}{2}}\cdot {\tfrac {1}{3}}+0\cdot {\tfrac {1}{3}}+1\cdot {\tfrac {1}{3}}}}={\tfrac {1}{3}}}
The probability of C being pardoned, on the other hand, is:
{\displaystyle P(C|b)={\frac {P(b|C)P(C)}{P(b|A)P(A)+P(b|B)P(B)+P(b|C)P(C)}}={\frac {1\cdot {\tfrac {1}{3}}}{{\tfrac {1}{2}}\cdot {\tfrac {1}{3}}+0\cdot {\tfrac {1}{3}}+1\cdot {\tfrac {1}{3}}}}={\tfrac {2}{3}}}
The crucial difference making A and C unequal is that P ( b | A ) = 1 2 {\displaystyle P(b|A)={\tfrac {1}{2}}} but P ( b | C ) = 1 {\displaystyle P(b|C)=1} . If A will be pardoned, the warden can tell A that either B or C is to be executed, and hence P ( b | A ) = 1 2 {\displaystyle P(b|A)={\tfrac {1}{2}}} ; whereas if C will be pardoned, the warden can only tell A that B is executed, so P ( b | C ) = 1 {\displaystyle P(b|C)=1} .
Prisoner A only has a 1 / 3 chance of pardon. Knowing whether B or C will be executed does not change his chance. After he hears B will be executed, Prisoner A realizes that if he will not get the pardon himself it must only be going to C. That means there is a 2/3 chance for C to get a pardon. This is comparable to the Monty Hall problem .
The following scenarios may arise:
1. A is pardoned and the warden mentions B to be executed: probability 1 / 3 × 1 / 2 = 1 / 6
2. A is pardoned and the warden mentions C to be executed: probability 1 / 3 × 1 / 2 = 1 / 6
3. B is pardoned and the warden mentions C to be executed: probability 1 / 3
4. C is pardoned and the warden mentions B to be executed: probability 1 / 3
With the stipulation that the warden will choose randomly, in the 1 / 3 of the time that A is to be pardoned, there is a 1 / 2 chance he will say B and 1 / 2 chance he will say C. This means that taken overall, 1 / 6 of the time ( 1 / 3 [that A is pardoned] × 1 / 2 [that warden says B]), the warden will say B because A will be pardoned, and 1 / 6 of the time ( 1 / 3 [that A is pardoned] × 1 / 2 [that warden says C]) he will say C because A is being pardoned. This adds up to the total of 1 / 3 of the time ( 1 / 6 + 1 / 6 ) A is being pardoned, which is accurate.
It is now clear that if the warden answers B to A ( 1 / 2 of all cases), then 1 / 3 of the time C is pardoned and A will still be executed (case 4), and only 1 / 6 of the time A is pardoned (case 1). Hence C's chances are ( 1 / 3 )/( 1 / 2 ) = 2 / 3 and A's are ( 1 / 6 )/( 1 / 2 ) = 1 / 3 .
The key to this problem is that the warden may not reveal the name of a prisoner who will be pardoned. If we eliminate this requirement, the original problem can be demonstrated in another way. The only change in this example is that prisoner A asks the warden to reveal the fate of one of the other prisoners (not specifying one that will be executed). In this case, the warden flips a coin and chooses one of B and C to reveal the fate of. The cases are as follows:
1. A is pardoned, the coin selects B, and the warden says: "B is executed"
2. A is pardoned, the coin selects C, and the warden says: "C is executed"
3. B is pardoned, the coin selects B, and the warden says: "B is pardoned"
4. B is pardoned, the coin selects C, and the warden says: "C is executed"
5. C is pardoned, the coin selects B, and the warden says: "B is executed"
6. C is pardoned, the coin selects C, and the warden says: "C is pardoned"
Each scenario has a 1 / 6 probability. The original three prisoners problem can be seen in this light: The warden in that problem still has these six cases, each with a 1 / 6 probability of occurring. However, the warden in the original case cannot reveal the fate of a pardoned prisoner. Therefore, in case 3 for example, since saying "B is pardoned" is not an option, the warden says "C is executed" instead (making it the same as case 4). That leaves cases 4 and 5 each with a 1 / 3 probability of occurring and leaves us with the same probability as before.
The tendency of people to provide the answer 1/2 is likely due to a tendency to ignore context that may seem inconsequential. For example, how the question is posed to the warden can affect the answer. This can be shown by considering a modified case, where P ( A ) = 1 4 , P ( B ) = 1 4 , P ( C ) = 1 2 {\displaystyle P(A)={\frac {1}{4}},P(B)={\frac {1}{4}},P(C)={\frac {1}{2}}} and everything else about the problem remains the same. [ 4 ] Using Bayes' theorem once again:
{\displaystyle P(A|b)={\frac {{\tfrac {1}{2}}\cdot {\tfrac {1}{4}}}{{\tfrac {1}{2}}\cdot {\tfrac {1}{4}}+0\cdot {\tfrac {1}{4}}+1\cdot {\tfrac {1}{2}}}}={\tfrac {1}{5}}}
However, if A simply asks if B will be executed, and the warden responds with "yes", the probability that A is pardoned becomes:
{\displaystyle P(A|B{\text{ executed}})={\frac {1\cdot {\tfrac {1}{4}}}{1\cdot {\tfrac {1}{4}}+0\cdot {\tfrac {1}{4}}+1\cdot {\tfrac {1}{2}}}}={\tfrac {1}{3}}}
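A quick numerical check of the two computations above (an illustrative sketch using the modified priors):

```python
from fractions import Fraction

prior = {"A": Fraction(1, 4), "B": Fraction(1, 4), "C": Fraction(1, 2)}

# Warden asked as in the original protocol: names B with probability 1/2 if A is
# pardoned, never if B is pardoned, always if C is pardoned.
lik_names_b = {"A": Fraction(1, 2), "B": Fraction(0), "C": Fraction(1)}
p_a_given_b = prior["A"] * lik_names_b["A"] / sum(prior[x] * lik_names_b[x] for x in prior)
print(p_a_given_b)    # 1/5

# A instead asks "will B be executed?" and the warden truthfully says yes.
lik_b_executed = {"A": Fraction(1), "B": Fraction(0), "C": Fraction(1)}
p_a_given_yes = prior["A"] * lik_b_executed["A"] / sum(prior[x] * lik_b_executed[x] for x in prior)
print(p_a_given_yes)  # 1/3
```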
A further assumption is that A plans beforehand to ask the warden for this information. A similar case to the above arises if A does not plan to ask the warden anything and the warden simply informs him that he will be executing B. [ 5 ]
Another likely overlooked assumption is that the warden has a probabilistic choice. Let us define p {\displaystyle p} as the conditional probability that the warden will name B given that C will be executed. The conditional probability P ( A | b ) {\displaystyle P(A|b)} can then be expressed as: [ 6 ]
{\displaystyle P(A|b)={\frac {P(b|A)P(A)}{P(b|A)P(A)+P(b|B)P(B)+P(b|C)P(C)}}={\frac {p\cdot {\tfrac {1}{3}}}{p\cdot {\tfrac {1}{3}}+0\cdot {\tfrac {1}{3}}+1\cdot {\tfrac {1}{3}}}}={\frac {p}{p+1}}}
If we assume that p = 1 {\displaystyle p=1} , that is, that we do not take into account that the warden is making a probabilistic choice, then P ( A | b ) = 1 2 {\displaystyle P(A|b)={\frac {1}{2}}} . However, the reality of the problem is that the warden is flipping a coin ( p = 1 2 {\displaystyle p={\frac {1}{2}}} ), so P ( A | b ) = 1 3 {\displaystyle P(A|b)={\frac {1}{3}}} . [ 5 ]
Judea Pearl (1988) used a variant of this example to demonstrate that belief updates must depend not merely on the facts observed but also on the experiment (i.e., query) that led to those facts. [ 7 ] | https://en.wikipedia.org/wiki/Three_prisoners_problem |
A three roll mill or triple roll mill [ 1 ] is a machine that uses shear force created by three horizontally positioned rolls rotating in opposite directions and different speeds relative to each other, in order to mix, refine, disperse, or homogenize viscous materials fed into it.
The three-roll mill has proven to be the most successful of the range of roll mills which saw extensive development in the 19th century. These included the single-roll mill and the five-roll mill. The single-roll mill works by material passing between the roll and a fixed bar pressing against the roll. The five-roll mill incorporates four successively smaller in-running nips and hence, compared to the three-roll mill, allows the use of larger agglomerates as part of the input material, but is correspondingly more complicated and expensive. [ 2 ]
The three adjacent rolls of a three roll mill (called the feed roll, centre roll and apron roll) rotate at progressively higher speeds. Material, usually in the form of paste , is fed between the feed roll and the center roll. Due to the narrowing space between the rolls, most of the paste initially remains in the feed region. The part that makes it through the first in-running nip experiences very high shear force due to the different rotation speeds of the two rolls. Upon exiting, the material that remains on the center roll moves through the second nip between the center roll and apron roll. This subjects it to an even higher shear force, due to the higher speed of the apron roll and typically, a smaller gap than between the feed and centre rolls. A knife blade then scrapes the processed material off the apron roll and the paste rolls down the apron. This milling cycle can be repeated several times to maximize dispersion .
The gaps between the rolls can be mechanically or hydraulically adjusted and maintained. Typically, the gap distance is far greater than the particle size. In some operations, the gap distance is gradually decreased to achieve the desired level of dispersion. The rollers are normally internally water-cooled. [ 3 ] [ 4 ]
Three roll mills are widely used to mix printing inks , electronic thick film inks, high performance ceramics , cosmetics , plastisols , carbon / graphite , paints , pharmaceuticals , chemicals, glass coatings, dental composites , pigment , coatings, adhesives , sealants , and foods. With the recent development in technology, they are also utilized in the production of cable cover, electronics, soap , and artificial plastics .
Small bench models are used for bench-top development work, laboratory work, and low volume production. Larger bench and floor models are built to meet different production needs from pilot plants to large volume productions.
Particular advantages of this process are that it allows high-viscosity pastes to be milled, and that the high surface contact with the cooled rollers allow the temperature to remain low despite the high amount of dispersion work being put in. A notable disadvantage is that the large open area of paste on the rollers causes loss of volatiles. | https://en.wikipedia.org/wiki/Three_roll_mill |
In mathematics , the three spheres inequality bounds the L 2 {\displaystyle L^{2}} norm of a harmonic function on a given sphere in terms of the L 2 {\displaystyle L^{2}} norm of this function on two spheres, one with bigger radius and one with smaller radius.
Let u {\displaystyle u} be a harmonic function on R n {\displaystyle \mathbb {R} ^{n}} . Then for all 0 < r 1 < r < r 2 {\displaystyle 0<r_{1}<r<r_{2}} one has
{\displaystyle \|u\|_{L^{2}(S_{r})}\leq \|u\|_{L^{2}(S_{r_{1}})}^{\alpha }\,\|u\|_{L^{2}(S_{r_{2}})}^{1-\alpha },}
where S ρ := { x ∈ R n : | x | = ρ } {\displaystyle S_{\rho }:=\{x\in \mathbb {R} ^{n}\colon \vert x\vert =\rho \}} for ρ > 0 {\displaystyle \rho >0} is the sphere of radius ρ {\displaystyle \rho } centred at the origin and where
{\displaystyle \alpha :={\frac {\log(r_{2}/r)}{\log(r_{2}/r_{1})}}.}
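As an illustration (not part of the original statement), the inequality can be checked numerically for a particular harmonic function; the sketch below uses u(x, y) = x + x^2 - y^2 on the plane and plain surface integrals over circles.

```python
import math

def norm_on_circle(u, r, m=20000):
    """L2 norm of u over the circle of radius r (plain surface integral)."""
    s = sum(u(r * math.cos(2 * math.pi * k / m), r * math.sin(2 * math.pi * k / m)) ** 2
            for k in range(m))
    return math.sqrt(s * 2 * math.pi * r / m)

def u(x, y):
    return x + x * x - y * y       # a harmonic function on R^2

r1, r, r2 = 1.0, 2.0, 4.0
alpha = math.log(r2 / r) / math.log(r2 / r1)

lhs = norm_on_circle(u, r)
rhs = norm_on_circle(u, r1) ** alpha * norm_on_circle(u, r2) ** (1 - alpha)
print(lhs, "<=", rhs)              # roughly 11.2 <= 12.1
```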
Here we use the following normalisation for the L 2 {\displaystyle L^{2}} norm:
{\displaystyle \|u\|_{L^{2}(S_{\rho })}^{2}:={\frac {1}{\rho ^{n-1}}}\int _{S_{\rho }}u^{2}\,d\sigma .} | https://en.wikipedia.org/wiki/Three_spheres_inequality
Threose nucleic acid ( TNA ) is an artificial genetic polymer in which the natural five-carbon ribose sugar found in RNA has been replaced by an unnatural four-carbon threose sugar. [ 1 ] Invented by Albert Eschenmoser as part of his quest to explore the chemical etiology of RNA, [ 2 ] TNA has become an important synthetic genetic polymer ( XNA ) due to its ability to efficiently base pair with complementary sequences of DNA and RNA. [ 1 ] The main difference between TNA and DNA/RNA is their backbones. DNA and RNA have their phosphate backbones attached to the 5' carbon of the deoxyribose or ribose sugar ring, respectively. TNA, on the other hand, has its phosphate backbone directly attached to the 3' carbon in the ring, since it does not have a 5' carbon. This modified backbone [ 3 ] makes TNA, unlike DNA and RNA, completely refractory to nuclease digestion, making it a promising nucleic acid analog for therapeutic and diagnostic applications. [ 4 ]
TNA oligonucleotides were first constructed by automated solid-phase synthesis using phosphoramidite chemistry. Methods for chemically synthesizing TNA monomers (phosphoramidites and nucleoside triphosphates) have been heavily optimized to support synthetic biology projects aimed at advancing TNA research. [ 5 ] More recently, polymerase engineering efforts have identified TNA polymerases that can copy genetic information back and forth between DNA and TNA. [ 6 ] [ 7 ] TNA replication occurs through a process that mimics RNA replication. In these systems, TNA is reverse transcribed into DNA, the DNA is amplified by the polymerase chain reaction , and then forward transcribed back into TNA.
The availability of TNA polymerases has enabled the in vitro selection of biologically stable TNA aptamers to both small molecule and protein targets. [ 8 ] [ 9 ] [ 10 ] Such experiments demonstrate that the properties of heredity and evolution are not limited to the natural genetic polymers of DNA and RNA. [ 11 ] The high biological stability of TNA relative to other nucleic acid systems that are capable of undergoing Darwinian evolution suggests that TNA is a strong candidate for the development of next-generation therapeutic aptamers.
The mechanism of TNA synthesis by a laboratory evolved TNA polymerase has been studied using X-ray crystallography to capture the five major steps of nucleotide addition. [ 12 ] These structures demonstrate imperfect recognition of the incoming TNA nucleotide triphosphate and support the need for further directed evolution experiments to create TNA polymerases with improved activity. The binary structure of a TNA reverse transcriptase has also been solved by X-ray crystallography, revealing the importance of structural plasticity as a possible mechanism for template recognition. [ 13 ]
John Chaput, a professor in the department of Pharmaceutical Sciences at the University of California, Irvine , has theorized that issues concerning the prebiotic synthesis of ribose sugars and the non-enzymatic replication of RNA may provide circumstantial evidence of an earlier genetic system more readily produced under primitive earth conditions. [ citation needed ] TNA could have been an early genetic system and a precursor to RNA. [ 14 ] TNA is simpler than RNA and can be synthesized from a single starting material. TNA is able to exchange information back and forth with RNA and with strands of itself that are complementary to the RNA. TNA has been shown to fold into tertiary structures with discrete ligand-binding properties. [ 8 ]
Although TNA research is still in its infancy, practical applications are already apparent. Its ability to undergo Darwinian evolution, coupled with its nuclease resistance, make TNA a promising candidate for the development of diagnostic and therapeutic applications that require high biological stability. This would include the evolution of TNA aptamers that can bind to specific small molecule and protein targets, as well as the development of TNA enzymes (threozymes) that can catalyze a chemical reaction. In addition, TNA is a promising candidate for RNA therapeutics that involve gene silencing technology. For example, TNA has been evaluated in a model system for antisense technology. [ 15 ] | https://en.wikipedia.org/wiki/Threose_nucleic_acid |
In materials science , the threshold displacement energy ( T d ) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as "displacement threshold energy" or just "displacement energy". In a crystal , a separate threshold displacement energy exists for each crystallographic direction. One should then distinguish between the minimum ( T d ,min ) and the average ( T d ,ave ) threshold displacement energy over all lattice directions. In amorphous solids, it may be possible to define an effective displacement energy to describe some other average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The threshold displacement energy is a materials property relevant during high-energy particle radiation of materials.
The maximum energy T m a x {\displaystyle T_{max}} that an irradiating particle can transfer in a binary collision to an atom in a material is given by (including relativistic effects)
T m a x = 2 M E ( E + 2 m c 2 ) ( m + M ) 2 c 2 + 2 M E {\displaystyle T_{max}={2ME(E+2mc^{2}) \over (m+M)^{2}c^{2}+2ME}}
where E is the kinetic energy and m the mass of the incoming irradiating particle and M the mass of the material atom. c is the velocity of light.
If the kinetic energy E is much smaller than the mass m c 2 {\displaystyle mc^{2}} of the irradiating particle, the equation reduces to
T m a x = E 4 M m ( m + M ) 2 {\displaystyle T_{max}=E{4Mm \over (m+M)^{2}}}
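For a sense of scale, the transfer formula can be evaluated numerically. The sketch below is illustrative only: the case of a 1 MeV electron striking a copper atom and the rounded rest energies are assumed example values.

```python
# Maximum energy transferred to a lattice atom in a binary collision,
# using the relativistic formula above with rest energies in eV.
m_c2 = 0.511e6            # electron rest energy m*c^2, eV
M_c2 = 63.5 * 931.5e6     # Cu atom rest energy M*c^2, eV (A ~ 63.5)
E = 1.0e6                 # electron kinetic energy, eV (assumed example)

T_max = 2 * M_c2 * E * (E + 2 * m_c2) / ((m_c2 + M_c2) ** 2 + 2 * M_c2 * E)
print(f"T_max = {T_max:.1f} eV")   # about 68 eV, i.e. above typical 10-50 eV thresholds
```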
In order for a permanent defect to be produced from an initially perfect crystal lattice, the kinetic energy T m a x {\displaystyle T_{max}} that the atom receives must be larger than the formation energy of a Frenkel pair .
However, while the Frenkel pair formation energies in crystals are typically around 5–10 eV, the average threshold displacement energies are much higher, 20–50 eV. [ 1 ] The reason for this apparent discrepancy is that the defect formation is a complex multi-body collision process (a small collision cascade ) where the atom that receives a recoil energy can also bounce back, or kick another atom back to its lattice site. Hence, even the minimum threshold displacement energy is usually clearly higher than the Frenkel pair formation energy.
Each crystal direction has in principle its own threshold displacement energy, so for a full description one should know the full threshold displacement surface T d ( θ , ϕ ) = T d ( [ h k l ] ) {\displaystyle T_{d}(\theta ,\phi )=T_{d}([hkl])} for all non-equivalent crystallographic directions [hkl]. Then T d , m i n = min ( T d ( θ , ϕ ) ) {\displaystyle T_{d,min}=\min(T_{d}(\theta ,\phi ))} and T d , a v e = a v e ( T d ( θ , ϕ ) ) {\displaystyle T_{d,ave}={\rm {ave}}(T_{d}(\theta ,\phi ))} where the minimum and average is with respect to all angles in three dimensions.
An additional complication is that the threshold displacement energy for a given direction is not necessarily a step function, but there can be an intermediate energy region where a defect may or may not be formed depending on the random atom displacements. Then one can define a lower threshold where a defect may be formed, T d l {\displaystyle T_{d}^{l}} , and an upper one where it is certainly formed, T d u {\displaystyle T_{d}^{u}} . [ 6 ] The difference between these two may be surprisingly large, and whether or not this effect is taken into account may have a large effect on the average threshold displacement energy. [ 7 ]
It is not possible to write down a single analytical equation that would relate e.g. elastic material properties or defect formation energies to the threshold displacement energy. Hence theoretical study of the threshold displacement energy is conventionally carried out using either classical [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] or quantum mechanical [ 12 ] [ 13 ] [ 14 ] [ 15 ] molecular dynamics computer simulations. Although an analytical description of the displacement is not possible, the "sudden approximation" gives fairly good approximations of the threshold displacement energies at least in covalent materials and low-index crystal directions. [ 13 ]
An example molecular dynamics simulation of a threshold displacement event is available in 100_20eV.avi . The animation shows how a defect ( Frenkel pair , i.e. an interstitial and vacancy ) is formed in silicon when a lattice atom is given a recoil energy of 20 eV in the 100 direction. The data for the animation was obtained from density functional theory molecular dynamics computer simulations. [ 15 ]
Such simulations have given significant qualitative insights into the threshold displacement energy, but the quantitative results should be viewed with caution.
The classical interatomic potentials are usually fit only to equilibrium properties, and hence their predictive capability may be limited. Even in the most studied materials such as Si and Fe, there are variations of more than a factor of two in the predicted threshold displacement energies. [ 7 ] [ 15 ] The quantum mechanical simulations based on density functional theory (DFT) are likely to be much more accurate, but very few comparative studies of different DFT methods on this issue have yet been carried out to assess their quantitative reliability.
The threshold displacement energies have been studied extensively with electron irradiation experiments. Electrons with kinetic energies of the order of hundreds of keVs or a few MeVs can to a very good approximation be considered to collide with a single lattice atom at a time. Since the initial energy for electrons coming from a particle accelerator is accurately known, one can thus at least in principle determine the lower minimum threshold displacement energy T d , m i n l {\displaystyle T_{d,min}^{l}} by irradiating a crystal with electrons of increasing energy until defect formation is observed. Using the equations given above one can then translate the electron energy E into the threshold energy T. If the irradiation is carried out on a single crystal in a known crystallographic direction, one can also determine direction-specific thresholds T d l ( θ , ϕ ) {\displaystyle T_{d}^{l}(\theta ,\phi )} . [ 1 ] [ 3 ] [ 4 ] [ 16 ] [ 17 ]
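The translation from a threshold energy to the required electron energy can also be done numerically. The following sketch is illustrative: the 25 eV threshold and the copper target are assumed example values, and the electron energy is simply scanned upwards until the maximum transferable energy reaches the threshold.

```python
# Find (by simple scanning) the minimum electron kinetic energy E for which the
# maximum transferred energy T_max reaches an assumed threshold T_d.
m_c2 = 0.511e6            # electron rest energy, eV
M_c2 = 63.5 * 931.5e6     # Cu atom rest energy, eV (assumed target)
T_d = 25.0                # assumed threshold displacement energy, eV

def t_max(E):
    return 2 * M_c2 * E * (E + 2 * m_c2) / ((m_c2 + M_c2) ** 2 + 2 * M_c2 * E)

E = 1.0e4                 # start at 10 keV
while t_max(E) < T_d:
    E += 1.0e3            # 1 keV steps
print(f"electron energy needed: about {E / 1e3:.0f} keV")   # about 490 keV for these values
```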
There are several complications in interpreting the experimental results, however. To name a few, in thick samples the electron beam will spread, and hence the measurement on single crystals does not probe only a single well-defined crystal direction. Impurities may cause the threshold to appear lower than it would be in pure materials.
Particular care has to be taken when interpreting threshold displacement energies at temperatures where defects are mobile and can recombine. At such temperatures, one should consider two distinct processes: the creation of the defect by the high-energy ion (stage A), and subsequent thermal recombination effects (stage B).

The initial stage A of defect creation, until all excess kinetic energy has dissipated in the lattice and it is back to its initial temperature T 0 , takes < 5 ps. This is the fundamental ("primary damage") threshold displacement energy, and also the one usually simulated by molecular dynamics computer simulations. After this (stage B), however, close Frenkel pairs may be recombined by thermal processes. Since low-energy recoils just above the threshold only produce close Frenkel pairs, recombination is quite likely.

Hence on experimental time scales and temperatures above the first (stage I) recombination temperature, what one sees is the combined effect of stage A and stage B. Hence the net effect often is that the threshold energy appears to increase with increasing temperature, since the Frenkel pairs produced by the lowest-energy recoils above threshold all recombine, and only defects produced by higher-energy recoils remain. Since thermal recombination is time-dependent, any stage B kind of recombination also implies that the results may have a dependence on the ion irradiation flux.

In a wide range of materials, defect recombination occurs already below room temperature. E.g. in metals the initial ("stage I") close Frenkel pair recombination and interstitial migration starts to happen already around 10–20 K. [ 18 ] Similarly, in Si major recombination of damage happens already around 100 K during ion irradiation and 4 K during electron irradiation. [ 19 ]

Even the stage A threshold displacement energy can be expected to have a temperature dependence, due to effects such as thermal expansion, temperature dependence of the elastic constants and increased probability of recombination before the lattice has cooled down back to the ambient temperature T 0 . These effects are, however, likely to be much weaker than the stage B thermal recombination effects.
The threshold displacement energy is often used to estimate the total amount of defects produced by higher energy irradiation using the Kinchin-Pease or NRT equations, [ 20 ] [ 21 ] which say that the number of Frenkel pairs produced N F P {\displaystyle N_{FP}} for a nuclear deposited energy of F D n {\displaystyle F_{Dn}} is
N F P = 0.8 F D n 2 T d , a v e {\displaystyle N_{FP}=0.8{F_{Dn} \over 2T_{d,ave}}}
for any nuclear deposited energy above 2 T d , a v e / 0.8 {\displaystyle 2T_{d,ave}/0.8} .
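As a concrete illustration of this estimate, the sketch below evaluates the equation for assumed example values (a 10 keV nuclear deposited energy and a 40 eV average threshold, which are not taken from the references):

```python
# NRT / Kinchin-Pease estimate: N_FP = 0.8 * F_Dn / (2 * T_d_ave),
# valid for deposited energies above 2 * T_d_ave / 0.8.
F_Dn = 10e3      # nuclear deposited energy, eV (assumed example)
T_d_ave = 40.0   # average threshold displacement energy, eV (assumed example)

if F_Dn > 2 * T_d_ave / 0.8:
    N_FP = 0.8 * F_Dn / (2 * T_d_ave)
    print(f"NRT estimate: {N_FP:.0f} Frenkel pairs")   # 100 for these numbers
```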
However, this equation should be used with great caution for several reasons. For instance, it does not account for any thermally activated recombination of damage, nor the well-known fact that in metals the damage production at high energies is only something like 20% of the Kinchin-Pease prediction. [ 4 ]
The threshold displacement energy is also often used in binary collision approximation computer codes such as SRIM [ 22 ] to estimate damage. However, the same caveats as for the Kinchin-Pease equation also apply for these codes (unless they are extended with a damage recombination model).
Moreover, neither the Kinchin-Pease equation nor SRIM takes into account ion channeling , which may in crystalline or polycrystalline materials reduce the nuclear deposited energy and thus the damage production dramatically for some ion-target combinations. For instance, keV ion implantation into the Si 110 crystal direction leads to massive channeling and thus reductions in stopping power. [ 23 ] Similarly, light-ion (e.g. He) irradiation of a BCC metal like Fe leads to massive channeling even in a randomly selected crystal direction. [ 24 ] | https://en.wikipedia.org/wiki/Threshold_displacement_energy
In particle physics , the threshold energy for production of a particle is the minimum kinetic energy that must be imparted to one of a pair of particles in order for their collision to produce a given result. [ 1 ] If the desired result is to produce a third particle then the threshold energy is greater than or equal to the rest energy of the desired particle. In most cases, since momentum is also conserved, the threshold energy is significantly greater than the rest energy of the desired particle.
The threshold energy should not be confused with the threshold displacement energy , which is the minimum energy needed to permanently displace an atom in a crystal to produce a crystal defect in radiation material science .
Consider the collision of a mobile proton with a stationary proton so that a π 0 {\displaystyle {\pi }^{0}} meson is produced: [ 1 ] p + + p + → p + + p + + π 0 {\displaystyle p^{+}+p^{+}\to p^{+}+p^{+}+\pi ^{0}}
We can calculate the minimum energy that the moving proton must have in order to create a pion.
Transforming into the ZMF (Zero Momentum Frame or Center of Mass Frame) and assuming the outgoing particles have no KE (kinetic energy) when viewed in the ZMF, the conservation of energy equation is:
E = 2 γ m p c 2 = 2 m p c 2 + m π c 2 {\displaystyle E=2\gamma m_{p}c^{2}=2m_{p}c^{2}+m_{\pi }c^{2}}
Rearranged to
γ = 1 1 − β 2 = 2 m p c 2 + m π c 2 2 m p c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-\beta ^{2}}}}={\frac {2m_{p}c^{2}+m_{\pi }c^{2}}{2m_{p}c^{2}}}}
By assuming that the outgoing particles have no KE in the ZMF, we have effectively considered an inelastic collision in which the product particles move with a combined momentum equal to that of the incoming proton in the Lab Frame.
Our c 2 {\displaystyle c^{2}} terms in our expression will cancel, leaving us with:
β 2 = 1 − ( 2 m p 2 m p + m π ) 2 ≈ 0.130 {\displaystyle \beta ^{2}=1-\left({\frac {2m_{p}}{2m_{p}+m_{\pi }}}\right)^{2}\approx 0.130}
β ≈ 0.360 {\displaystyle \beta \approx 0.360}
Using relativistic velocity additions:
v lab = u cm + V cm 1 + u cm V cm / c 2 {\displaystyle v_{\text{lab}}={\frac {u_{\text{cm}}+V_{\text{cm}}}{1+u_{\text{cm}}V_{\text{cm}}/c^{2}}}}
We know that V c m {\displaystyle V_{cm}} is equal to the speed of one proton as viewed in the ZMF, so we can re-write with u c m = V c m {\displaystyle u_{cm}=V_{cm}} :
v lab = 2 u cm 1 + u cm 2 / c 2 ≈ 0.64 c {\displaystyle v_{\text{lab}}={\frac {2u_{\text{cm}}}{1+u_{\text{cm}}^{2}/c^{2}}}\approx 0.64c}
So the energy of the proton must be E = γ m p c 2 = m p c 2 1 − ( v lab / c ) 2 = 1221 {\displaystyle E=\gamma m_{p}c^{2}={\frac {m_{p}c^{2}}{\sqrt {1-(v_{\text{lab}}/c)^{2}}}}=1221\,} MeV.
Therefore, the minimum kinetic energy for the proton must be T = E − m p c 2 ≈ 280 {\displaystyle T=E-{m_{p}c^{2}}\approx 280} MeV.
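The worked example can be reproduced numerically; the short sketch below uses approximate rest energies and recovers β ≈ 0.360, v_lab ≈ 0.64 c and T ≈ 280 MeV.

```python
import math

mp = 938.272    # proton rest energy, MeV (approximate)
mpi = 134.977   # neutral pion rest energy, MeV (approximate)

gamma = (2 * mp + mpi) / (2 * mp)        # Lorentz factor of each proton in the ZMF
beta = math.sqrt(1 - 1 / gamma ** 2)     # proton speed in the ZMF, in units of c
v_lab = 2 * beta / (1 + beta ** 2)       # relativistic velocity addition
T = mp / math.sqrt(1 - v_lab ** 2) - mp  # kinetic energy of the moving proton, MeV

print(f"beta = {beta:.3f}, v_lab = {v_lab:.2f} c, T = {T:.0f} MeV")  # ~0.360, ~0.64 c, ~280 MeV
```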
At higher energy, the same collision can produce an antiproton :
{\displaystyle p^{+}+p^{+}\to p^{+}+p^{+}+p^{+}+{\bar {p}}}
If one of the two initial protons is stationary, we find that the impinging proton must be given at least 6 m p c 2 {\displaystyle 6m_{p}c^{2}} of energy, that is, 5.63 GeV. On the other hand, if both protons are accelerated one towards the other (in a collider ) with equal energies, then each needs to be given only m p c 2 {\displaystyle m_{p}c^{2}} of energy. [ 1 ]
Consider the case where a particle 1 with lab energy E 1 {\displaystyle E_{1}} (momentum p 1 {\displaystyle p_{1}} ) and mass m 1 {\displaystyle m_{1}} impinges on a target particle 2 at rest in the lab, i.e. with lab energy E 2 {\displaystyle E_{2}} and mass m 2 {\displaystyle m_{2}} . The threshold energy E 1 , thr {\displaystyle E_{1,{\text{thr}}}} to produce three particles of masses m a {\displaystyle m_{a}} , m b {\displaystyle m_{b}} , m c {\displaystyle m_{c}} , i.e.
1 + 2 → a + b + c , {\displaystyle 1+2\to a+b+c,}
is then found by assuming that these three particles are at rest in the center of mass frame (symbols with a hat indicate quantities in the center of mass frame):
E cm = m a c 2 + m b c 2 + m c c 2 = E ^ 1 + E ^ 2 = γ ( E 1 − β p 1 c ) + γ m 2 c 2 {\displaystyle E_{\text{cm}}=m_{a}c^{2}+m_{b}c^{2}+m_{c}c^{2}={\hat {E}}_{1}+{\hat {E}}_{2}=\gamma (E_{1}-\beta p_{1}c)+\gamma m_{2}c^{2}}
Here E cm {\displaystyle E_{\text{cm}}} is the total energy available in the center of mass frame.
Using γ = E 1 + m 2 c 2 E cm {\displaystyle \gamma ={\frac {E_{1}+m_{2}c^{2}}{E_{\text{cm}}}}} , β = p 1 c E 1 + m 2 c 2 {\displaystyle \beta ={\frac {p_{1}c}{E_{1}+m_{2}c^{2}}}} and p 1 2 c 2 = E 1 2 − m 1 2 c 4 {\displaystyle p_{1}^{2}c^{2}=E_{1}^{2}-m_{1}^{2}c^{4}} one derives that
E 1 , thr = ( m a + m b + m c ) 2 − ( m 1 2 + m 2 2 ) 2 m 2 c 2 {\displaystyle E_{1,{\text{thr}}}={\frac {(m_{a}+m_{b}+m_{c})^{2}-(m_{1}^{2}+m_{2}^{2})}{2m_{2}}}c^{2}} [ 2 ]
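This closed-form expression is easy to evaluate, and the same invariant-mass argument extends to any number of outgoing particles. The sketch below (with approximate rest energies) recovers the two earlier examples: about 280 MeV of kinetic energy for pion production and about 5.6 GeV for antiproton production.

```python
def threshold_energy(m1, m2, m_out):
    """Total lab-frame threshold energy of particle 1 (same units as the rest
    energies) for 1 + 2 -> outgoing particles with rest energies m_out,
    with particle 2 at rest."""
    s = sum(m_out)
    return (s ** 2 - m1 ** 2 - m2 ** 2) / (2 * m2)

mp, mpi = 938.272, 134.977   # approximate rest energies, MeV

E = threshold_energy(mp, mp, [mp, mp, mpi])          # p p -> p p pi0
print(f"pion production:       T = {E - mp:.0f} MeV")    # ~280 MeV

E = threshold_energy(mp, mp, [mp, mp, mp, mp])       # p p -> p p p pbar
print(f"antiproton production: T = {E - mp:.0f} MeV")    # ~5630 MeV, i.e. 6 m_p c^2
```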
| https://en.wikipedia.org/wiki/Threshold_energy
The mitochondrial threshold effect is a phenomenon in which the amount of mutated mtDNA surpasses a certain threshold, causing the electron transport chain and ATP synthesis of a mitochondrion to fail. [ 1 ] There is no single fixed number that must be surpassed; rather, failure is associated with an increasing amount of mutated mtDNA . When 60-80% of the mtDNA present is mutated, that is said to be the threshold level. [ 1 ] While 60-80% is the general threshold level, the exact value also depends on the individual, the specific organ in question and the specific mutation. There are three specific types of mitochondrial threshold effects: the phenotypic threshold effect, the biochemical threshold effect and the translational threshold effect. [ citation needed ]
Threshold expression is a phenomenon in which phenotypic expression of a mitochondrial disease within an organ system occurs when the severity of the mutation, relative number of mutant mtDNA , and reliance of the organ system on oxidative phosphorylation combine in such a way that ATP production of the tissue falls below the level required by the tissue. The phenotype may be expressed even if the percentage of mutant mtDNA is below 50% if the mutation is severe enough. [ citation needed ]
The phenotypic threshold effect occurs when there is a certain amount of wild-type mtDNA present in the mitochondrion which is able to balance out the mutated mtDNA . [ 2 ] As a result, the phenotype is normal. However, if the amount of wild-type mtDNA decreases and the amount of mutant mtDNA increases, the resulting imbalance crosses the threshold level, which causes complications. This occurs because the wild-type mtDNA molecules are able to keep the electron transport chain and ATP synthesis functioning even when only a small number of them is present. They are able to counterbalance the mutated mtDNA ; however, when their number drops below the threshold level, the mutant mtDNA take over. [ 2 ]
| https://en.wikipedia.org/wiki/Threshold_expression
Threshold host density (N T ) , in the context of wildlife disease ecology , refers to the concentration of a population of a particular organism as it relates to disease. Specifically, the threshold host density (N T ) of a species refers to the minimum concentration of individuals necessary to sustain a given disease within a population . [ 1 ]
Threshold host density (N T ) only applies to density dependent diseases, where there is an "aggregation of risk" to the host in either high host density or low host density patches. When low host density causes an increase in incidence of parasitism or disease, this is known as inverse host density dependence, whereas when incidence of parasitism or disease is elevated in high host density conditions, it is known as direct host density dependence.
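The idea can be made concrete with a standard density-dependent transmission model. The sketch below is a generic textbook-style illustration, not a formula from the sources cited here: with a transmission term βSI and a removal rate γ, an infection can invade only if R 0 = βN/γ exceeds 1, giving a threshold host density N T = γ/β; the parameter values are assumed.

```python
# Generic illustration (not from the cited sources): with density-dependent
# transmission beta*S*I and removal rate gamma, R0 = beta*N/gamma, so the
# threshold host density is N_T = gamma/beta. Parameter values are assumed.
beta = 0.002   # transmission coefficient, per individual per day (assumed)
gamma = 0.1    # recovery/removal rate, per day (assumed)

N_T = gamma / beta
print(f"threshold host density N_T = {N_T:.0f}")
for N in (20, 50, 200):
    R0 = beta * N / gamma
    print(f"N = {N:3d}: R0 = {R0:.1f} -> {'invades' if R0 > 1 else 'fades out'}")
```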
Host density independent diseases show no correlation between the concentration of a given host population and the incidence of a particular disease. Some examples of host density independent diseases are sexually transmitted diseases in both humans and other animals. This is due to the relatively constant rate of sexual contact in such diseases: even if only 20 individuals of a given population remain, survival of the species requires sexual contact, and so the disease continues to spread.
Density dependent diseases are significantly less likely to cause extinction of a population, [ 2 ] as the natural course of disease will bring down the density, and thus the propinquity of individuals in the population. In other words, a smaller number of individuals, as caused by disease, means lower infection rates and a population equilibrium.
This graph shows the direct relationship between disease spread through contact and population density. As the population density increases, so do transmission events between individuals.
For sexually transmitted diseases, there is a rapid initial increase in disease transmission as the population increases from zero, followed by a plateau in transmission across most of the density range. As sexual contact is required in nearly all sexually reproducing species, transmission is not very host density dependent. It is only in cases of near-extinction where sexually transmitted diseases show any dependence on host density. It is for this reason that sexually transmitted diseases are more likely than density dependent diseases to cause extinction. [ 4 ]
This graph shows the relationship between population density and the transmission of vector-borne disease. Initially, the number of contacts between individuals and vectors increases as population density increases. Eventually, however, the advantage of host density diminishes as the density becomes too great for the vector to maintain its natural ecological relationship with the host, and transmission decreases. [ citation needed ] | https://en.wikipedia.org/wiki/Threshold_host_density |
In mathematical or statistical modeling a threshold model is any model where a threshold value, or set of threshold values, is used to distinguish ranges of values where the behaviour predicted by the model varies in some important way. A particularly important instance arises in toxicology, where the model for the effect of a drug may be that there is zero effect for a dose below a critical or threshold value, while an effect of some significance exists above that value. [ 1 ] Certain types of regression model may include threshold effects. [ 1 ]
Threshold models are often used to model the behavior of groups, ranging from social insects to animal herds to human society.
Classic threshold models were introduced by Sakoda, [ 2 ] in his 1949 dissertation and the Journal of Mathematical Sociology (JMS vol 1 #1, 1971). [ 3 ] They were subsequently developed by Schelling, Axelrod, and Granovetter to model collective behavior . Schelling used a special case of Sakoda's model to describe the dynamics of segregation motivated by individual interactions in America (JMS vol 1 #2, 1971) [ 4 ] by constructing two simulation models. Schelling demonstrated that “there is no simple correspondence of individual incentive to collective results,” and that the dynamics of movement influenced patterns of segregation. In doing so Schelling highlighted the significance of “a general theory of ‘tipping’”.
Mark Granovetter, following Schelling, proposed the threshold model (Granovetter & Soong, 1983, 1986, 1988), which assumes that individuals’ behavior depends on the number of other individuals already engaging in that behavior (both Schelling and Granovetter classify their term of “threshold” as behavioral threshold.). He used the threshold model to explain the riot, residential segregation, and the spiral of silence . In the spirit of Granovetter's threshold model, the “threshold” is “the number or proportion of others who must make one decision before a given actor does so”.
The determinants of the threshold merit emphasis. Different individuals have different thresholds, which may be influenced by many factors: socioeconomic status, education, age, personality, etc. Further, Granovetter relates the “threshold” to the utility one gets from participating in collective behavior or not: using a utility function, each individual calculates his or her cost and benefit from undertaking an action. The situation may also change the cost and benefit of the behavior, so the threshold is situation-specific.
The distribution of the thresholds determines the outcome of the aggregate behavior (for example, public opinion).
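Granovetter's point about the threshold distribution can be illustrated with a toy cascade: each person joins once the fraction of the population already participating reaches their personal threshold, and the process is iterated to a fixed point. The sketch below is illustrative; the threshold distributions are assumptions, not taken from the sources.

```python
# Toy version of Granovetter's threshold model of collective behaviour.
def cascade(thresholds):
    n = len(thresholds)
    participating = 0
    while True:
        # Everyone whose threshold is met by the current participation fraction joins.
        new = sum(1 for t in thresholds if t <= participating / n)
        if new == participating:
            return participating
        participating = new

# Uniform thresholds 0.00, 0.01, ..., 0.99: the whole population eventually joins.
print(cascade([i / 100 for i in range(100)]))                    # 100

# Change the single threshold 0.01 to 0.02 and the cascade stops at once.
print(cascade([0.00, 0.02] + [i / 100 for i in range(2, 100)]))  # 1
```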
The models used in segmented regression analysis are threshold models.
Certain deterministic recursive multivariate models which include threshold effects have been shown to produce fractal effects. [ 5 ]
Several classes of nonlinear autoregressive models formulated for time series applications have been threshold models. [ 5 ]
A threshold model used in toxicology posits that anything above a certain dose of a toxin is dangerous, and anything below it safe. This model is usually applied to non- carcinogenic health hazards.
Edward J. Calabrese and Linda A. Baldwin wrote:
An alternative type of model in toxicology is the linear no-threshold model (LNT), while hormesis corresponds to the existence of opposite effects at low vs. high dose, which usually gives a U- or inverted U-shaped dose-response curve.
The liability-threshold model is a threshold model of categorical (usually binary) outcomes in which a large number of variables are summed to yield an overall 'liability' score; the observed outcome is determined by whether the latent score is smaller or larger than the threshold. The liability-threshold model is frequently employed in medicine and genetics to model risk factors contributing to disease.
In a genetic context, the variables are all the genes and different environmental conditions, which protect against or increase the risk of a disease, and the threshold z is the biological limit past which disease develops. The threshold can be estimated from the population prevalence of the disease (which is usually low). Because the threshold is defined relative to the population and environment, the liability score is generally modelled as a N(0, 1) normally distributed random variable .
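For example, the threshold on the standard-normal liability scale follows directly from the prevalence; the sketch below is illustrative, with an assumed prevalence of 1%.

```python
from statistics import NormalDist

K = 0.01                                  # assumed population prevalence (1%)
z = NormalDist().inv_cdf(1 - K)           # liability threshold on the N(0, 1) scale
print(f"threshold z = {z:.2f} standard deviations above the mean")   # about 2.33
```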
Early genetics models were developed to deal with very rare genetic diseases by treating them as Mendelian diseases caused by 1 or 2 genes: the presence or absence of the gene corresponds to the presence or absence of the disease, and the occurrence of the disease will follow predictable patterns within families. Continuous traits like height or intelligence could be modeled as normal distributions , influenced by a large number of genes, and the heritability and effects of selection easily analyzed. Some diseases, like alcoholism, epilepsy, or schizophrenia , cannot be Mendelian diseases because they are common; do not appear in Mendelian ratios; respond slowly to selection against them; and often occur in families with no prior history of that disease; however, relatives and adoptees of someone with that disease are far more likely (but not certain) to develop it, indicating a strong genetic component. The liability threshold model was developed to deal with these non-Mendelian binary cases; the model proposes that there is a continuous, normally distributed trait expressing risk, polygenically influenced by many genes, such that all individuals above a certain value develop the disease and all below it do not.
The first threshold models in genetics were introduced by Sewall Wright , examining the propensity of guinea pig strains to have an extra hind toe, a phenomenon which could not be explained as a dominant or recessive gene, or continuous "blending inheritance". [ 7 ] [ 8 ] The modern liability-threshold model was introduced into human research by geneticist Douglas Scott Falconer in his textbook [ 9 ] and two papers. [ 10 ] [ 11 ] Falconer had been asked about the topic of modeling 'threshold characters' by Cyril Clarke who had diabetes . [ 12 ]
An early application of liability-threshold models was to schizophrenia by Irving Gottesman & James Shields , finding substantial heritability & little shared-environment influence [ 13 ] and undermining the "cold mother" theory of schizophrenia.
The proposition that global temperature will rise in a non-linear manner once it crosses a hypothetical threshold value has been made in several studies. [ 14 ] A recent threshold model [ 15 ] predicts that in this suprathreshold state temperature rise will be dramatically sharp and non-graded. | https://en.wikipedia.org/wiki/Threshold_model
The threshold of toxicological concern (or TTC ) is a method for determining the level of exposure to a chemical above which toxicity would be a concern, [ 1 ] in cases where toxicity data about such chemicals are scarce or non-existent. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
| https://en.wikipedia.org/wiki/Threshold_of_toxilogical_concern
The thrifty gene hypothesis , or Gianfranco's hypothesis, [ citation needed ] is an attempt by geneticist James V. Neel to explain why certain populations and subpopulations in the modern day are prone to diabetes mellitus type 2 . He proposed the hypothesis in 1962 to resolve a fundamental problem: diabetes is clearly a very harmful medical condition, yet it is quite common, and it was already evident to Neel that it likely had a strong genetic basis. The problem is to understand how a disease with a likely genetic component and with such negative effects may have been favoured by the process of natural selection. Neel suggested the resolution to this problem is that genes which predispose to diabetes (called 'thrifty genes') were historically advantageous, but they became detrimental in the modern world. In his words they were "rendered detrimental by 'progress'". Neel's primary interest was in diabetes, but the idea was soon expanded to encompass obesity as well. Thrifty genes are genes which enable individuals to efficiently collect and process food to deposit fat during periods of food abundance in order to provide for periods of food shortage (feast and famine). [ citation needed ]
According to the hypothesis, the 'thrifty' genotype would have been advantageous for hunter-gatherer populations, especially child-bearing women, because it would allow them to fatten more quickly during times of abundance. Fatter individuals carrying the thrifty genes would thus better survive times of food scarcity. However, in modern societies with a constant abundance of food, this genotype effectively prepares individuals for a famine that never comes. The result of this mismatch between the environment in which the brain evolved and the environment of today is widespread chronic obesity and related health problems like diabetes.
The hypothesis has received various criticisms and several modified or alternative hypotheses have been proposed.
James Neel, a professor of Human Genetics at the University of Michigan Medical School , proposed the "thrifty genotype" hypothesis in 1962 in his paper "Diabetes Mellitus: A 'Thrifty' Genotype Rendered Detrimental by 'Progress'?" Neel intended the paper to provoke further contemplation and research on the possible evolutionary and genetic causes of diabetes among populations that had only recently come into regular contact with Westerners . [ 1 ]
The genetic paradox Neel sought to address was this: diabetes conferred a significant reproductive (and thus evolutionary) disadvantage to anyone who had it; yet the populations Neel studied had diabetes in such high frequencies that a genetic predisposition to develop diabetes seemed plausible. Neel sought to unravel the mystery of why genes that promote diabetes had not been naturally-selected out of the population's gene pool . [ 2 ]
Neel proposed that a genetic predisposition to develop diabetes was adaptive to the feast and famine cycles of Paleolithic human existence, allowing humans to fatten rapidly and profoundly during times of feast in order that they might better survive during times of famine. This would have been advantageous then but not in the current environment. [ 3 ]
The hypothesis was proposed before there was a clear distinction between the different types of diabetes. Neel later stated that the hypothesis applied to non-insulin-dependent diabetes mellitus . In its original form the theory more specifically stated that diabetes may be due to a rapid insulin response which would prevent loss of glucose in the urine. Furthermore, it made use of a then-popular theory which was later disproven. This argued that specific insulin antagonists were released in response to insulin, thereby causing diabetes. [ 4 ]
In the decades following the publications of his first paper on the "thrifty genotype" hypothesis, Neel researched the frequency of diabetes and (increasingly) obesity in a number of other populations and sought out observations that might disprove or discount his "thrifty gene" hypothesis. [ citation needed ]
Neel's further investigations cast doubt on the "thrifty genotype" hypothesis. If a propensity to develop diabetes were an evolutionary adaptation, then diabetes would have been a disease of long standing in those populations currently experiencing a high frequency of diabetes. However, Neel found no evidence of diabetes among these populations earlier in the century. [ 5 ] And when he tested younger members of these populations for glucose intolerance - which might have indicated a predisposition for diabetes - he found none. [ 6 ]
In 1989, Neel published a review of his further research based on the "thrifty genotype" hypothesis and in the Introduction noted the following: "The data on which that (rather soft) hypothesis was based has now largely collapsed." However, Neel argued that "...the concept of a "thrifty genotype" remains as viable as when first advanced...". He went on to advance that the thrifty genotype concept be thought of in the context of a "compromised" genotype that affects several other metabolically related diseases. [ 7 ]
Neel in a 1998 review described an expanded form of the original hypothesis, diabetes being caused by "thrifty genes" adapted specifically for intermittent starvation, to a more complex theory of several related diseases such as diabetes, obesity, and hypertension (see also metabolic syndrome ) being caused by physiological systems adapted for an older environment being pushed beyond their limits by environmental changes. Thus, one possible remedy for these diseases is changing diet and exercise activity to more closely reflect that of the ancestral environment. [ 8 ]
The thrifty genotype hypothesis has been used to explain high, and rapidly escalating, levels of obesity and diabetes among groups newly introduced to western diets and environments, from South Pacific Islanders , [ 9 ] to Sub Saharan Africans , [ 10 ] to Native Americans in the Southwestern United States , [ 11 ] to Inuit . [ 12 ]
The original "thrifty gene" hypothesis argued that famines were common and severe enough to select for thrifty gene in the 2.5 million years of human paleolithic history. This assumption is contradicted by some anthropological evidence. [ 13 ] [ 14 ] [ 15 ] [ 16 ] Many of the populations that later developed high rates of obesity and diabetes appeared to have no discernible history of famine or starvation (for example, Pacific Islanders whose "tropical-equatorial islands had luxuriant vegetation all year round and were surrounded by lukewarm waters full of fish."). [ 14 ] [ 15 ] However, this implies that the period after which humans migrated out of Africa would have provided sufficient time to reverse any pre-existing famine-adapted alleles, for which there is little to no evidence. One criticism of the 'thrifty gene' idea is that it predicts that modern hunter gatherers should get fat in the periods between famines. Data on the body mass index of hunter-gatherer and subsistence agriculturalists show that between famines they do not deposit large fat stores. [ 16 ] However, genes that promote only limited fat deposition in the context of pre-industrialized lifestyles and diets may promote excessive fat deposition and obesity when caloric intake is increased and expenditure is decreased beyond the range of the environments these genes evolved in (a gene x environment interaction).
As a response to such criticisms, a modified "thrifty" gene hypothesis is that the famines and seasonal shortages of food that occurred only during the agricultural period may have exerted enough pressure to select for "thrifty" genes. [ 17 ]
The thrifty phenotype hypothesis arose from challenges posed to the thrifty gene hypothesis. The thrifty phenotype hypothesis theorizes that instead of arising genetically, the "thrifty factors" developed as a direct result of the environment within the womb during development. The development of insulin resistance is theorized to be directly related to the body "predicting" a life of starvation for the developing fetus. [ 18 ]
Hence, one of the main causes of type 2 diabetes has been attributed to poor fetal and infant growth and the subsequent development of the metabolic syndrome. Since the hypothesis was proposed, many studies worldwide have confirmed the initial epidemiological evidence. Although the relationship with insulin resistance is clear at all ages studied, the relation of insulin secretion is less clear. The relative contribution of genes and environment to these relationships remains a matter of debate. [ 19 ]
Other relevant observations arose from metabolism researchers who note that for practically every other species on earth, fat metabolism is well regulated [ 20 ] and that "most wild animals are in fact very lean" and that they remain lean "even when adequate food is supplied."
In response to the criticisms of the original thrifty genotype theory, several new ideas have been proposed for explaining the evolutionary bases of obesity and related diseases. [ citation needed ]
The "thrifty epigenomic hypothesis" is a combination of the thrifty phenotype and thrifty genotype hypotheses. While it argues that there is an ancient, canalized (genetically coded) physiological system for being "thrifty", the hypothesis argues that an individual's disease risk is primarily determined by epigenetic events. Subtle, epigenetic modifications at many genomic loci ( gene regulatory networks ) alter the shape of the canal in response to environmental influences and thereby establish a predisposition for complex diseases such as metabolic syndrome . There may be epigenetic inheritance of disease risk. [ 21 ]
Watve and Yajnik suggested that changing insulin resistance mediates two phenotypic transitions: a transition in reproductive strategy from "r" (large number of offspring with smaller investment in each) to "K" (smaller number of offspring with greater investment in each) (see r/K selection theory ); and a switch from a lifestyle dependent upon muscular strength to one dependent on brain power ("soldier to diplomat"). Because the environmental conditions that would facilitate each transition are heavily overlapping, the scientists surmise, a common switch could have evolved for the two transitions. [ 18 ]
The main problem with this idea is the timing at which the transition is presumed to have happened, and how this would then translate into the genetic predisposition to type 2 diabetes and obesity [ citation needed ] . For example, the decline in reproductive investment in human societies (the so-called r to K shift) has occurred far too recently to have been caused by a change in genetics.
Sellayah and colleagues have postulated an 'Out of Africa' theory to explain the evolutionary origins of obesity. The theory cites diverse ethnicity-based differences in obesity susceptibility in western civilizations to contend that neither the thrifty nor the drifty gene hypothesis can explain the demographics of the modern obesity crisis, although the arguments against these patterns arising due to 'drift' are unclear. Sellayah et al. argue that ethnic groups whose ancestors were adapted to hot climates have low metabolic rates due to lack of thermogenic capacity, whereas those groups whose ancestors were cold-adapted were endowed with greater thermogenic capacity and higher metabolic rates. Sellayah and colleagues provide evidence of thermogenic capacity, metabolic rates and obesity prevalence in various indigenous populations in support of their argument. [ 22 ] Contrasting this analysis, however, a study of the spatial distribution of obesity across the mainland USA showed that once the effects of poverty and race were accounted for, there was no association between ambient temperature and obesity rates. [ 23 ]
The most highly cited alternative to the thrifty gene hypothesis is the drifty gene hypothesis proposed by the British biologist John Speakman . This idea differs fundamentally from all the other ideas in that it does not propose any selective advantage for the obese state, either now or in the past. The main feature of this hypothesis is that the current pattern of obesity does not suggest that obesity has been under strong positive selection for a protracted period of time. It is argued instead that the obesity comes about because of genetic drift in the genes controlling the upper limit on our body fatness. Such drift may have started because around 2 million years ago ancestral humans effectively removed the risk from predators, which was probably a key factor selecting against fatness. The drifty gene hypothesis was presented as part of a presidential debate at the 2007 Obesity Society meeting in New Orleans, with the counter-arguments favouring the thrifty gene presented by British nutritionist Andrew Prentice. The main thrust of Prentice's argument against the drifty gene idea is that Speakman's critique of the thrifty gene hypothesis ignores the huge impact that famines have on fertility. It is argued by Prentice that famine may actually have only been a force driving evolution of thrifty genes for the past 15,000 years or so (since the invention of agriculture), but because famines exert effects on both survival and fertility the selection pressure may have been sufficient even over such a short timescale to generate a pressure for "thrifty" genes. These alternative arguments were published in two back-to-back papers in the International Journal of Obesity in November 2008. [ 17 ] [ 24 ]
Prentice et al. [ 17 ] predicted that the emerging molecular genetics field would ultimately provide a way to test between the adaptive 'thrifty gene' idea and the non-adaptive 'drifty gene' idea, because it would be possible to find signatures of positive selection in the human genome, at genes that are linked to both obesity and type 2 diabetes, if the 'thrifty gene' hypothesis is correct. Two comprehensive studies have been performed seeking such signatures of selection. Ayub et al. (2014) [ 25 ] searched for signatures of positive selection at 65 genes linked to type 2 diabetes, and Wang and Speakman (2016) [ 26 ] searched for signatures of selection at 115 genes linked to obesity. In both cases there was no evidence for such selection signatures at a higher rate than in random genes selected for matched GC content and recombination rate. These two papers provide strong evidence against the thrifty gene idea, and indeed against any adaptive explanation that relies on selection during our recent evolutionary history, and instead provide strong support for the 'drifty gene' interpretation.
Many attempts have been made to search for one or more genes contributing to thrift. Modern genome-wide association studies have revealed many genes with small effects associated with obesity or type 2 diabetes, but all of them together explain only between 1.4 and 10% of population variance. [ 27 ] [ 28 ] This leaves a large gap between the pregenomic and emerging genomic estimates of heritability of obesity and type 2 diabetes, sometimes called the " missing heritability problem ." The reasons for this discrepancy are not completely understood. A likely possibility is that the missing heritability is explained by rare variants of large effect that are found only in limited populations. These would be impossible to detect by standard whole genome sequencing approaches even with hundreds of thousands of participants. At the extreme end of this distribution are the so-called 'monogenic' obesities, where most of the impact on body weight can be tied to a mutation in a single gene that runs in a single family. The classic example of such a genetic effect is the presence of mutations in the leptin gene. [ 29 ]
An important unanswered question is whether such rare variants exist because of chance mutations, population founder events and maintenance by processes such as drift, or whether there is any selective advantage involved in their maintenance and spread. An example of such a rare variant effect was recently discovered among Samoan islanders. [ 30 ] Among the islanders the variant is extremely common, but in other populations it is extremely rare or absent. The variant predisposes to obesity but strangely is protective against type 2 diabetes. Based on cell studies it was suggested the variant may protect individuals against periods of 'famine' and there is also evidence that it has been under positive selection. The most likely scenario then is that this rare variant was established in the islanders by a founder effect among a small initial colonising population, and was able to spread because of a selective advantage it conferred within that small group. Hence, in small populations under particular environmental conditions it may be feasible that the 'thrifty gene' idea is correct. It remains to be seen if rare variants that fill the gap in the missing heritability estimates are also 'thrifty genes' or if they are rare chance events sustained by drift, as implicated for the common variants currently linked to obesity and type 2 diabetes. [ 25 ] [ 26 ] | https://en.wikipedia.org/wiki/Thrifty_gene_hypothesis |
Thrifty phenotype refers to the correlation between low birth weight of neonates and the increased risk of developing metabolic syndromes later in life, including type 2 diabetes and cardiovascular diseases . [ 1 ] Although early life undernutrition is thought to be the key driving factor to the hypothesis, other environmental factors have been explored for their role in susceptibility, such as physical inactivity. Genes may also play a role in susceptibility of these diseases, as they may make individuals predisposed to factors that lead to increased disease risk. [ 2 ]
The term thrifty phenotype was first coined by Charles Nicholas Hales and David Barker in a study published in 1992. [ 3 ] In their study, the authors reviewed the literature available up to that point and addressed five central questions regarding the role of different factors in type 2 diabetes, on which they based their hypothesis. These questions included the following:
From the review of the existing literature, they posited that poor nutritional status in fetal and early neonatal stages could hamper the development and proper functioning of the pancreatic beta cells by impacting structural features of islet anatomy, which could consequently make the individual more susceptible to the development of type 2 diabetes in later life. However, they did not exclude other causal factors such as obesity, ageing and physical inactivity as determining factors of type 2 diabetes. [ 4 ]
In a later study, Barker et al. [ 5 ] analyzed living patient data from Hertfordshire, UK, and found that men in their sixties having low birthweight (2.95 kg or less) were 10 times more likely to develop syndrome X (type 2 diabetes, hypertension and hyperlipidemia ) than men of the same age whose birthweight was 4.31 kg or more. This statistical correlation was independent of the gestation period and other possible confounding factors such as current social class or social class at birth, smoking, and consumption of alcohol. Furthermore, they argued that they were likely to underestimate this association, since they could only sample the surviving patients, and patients having more severe manifestations of syndrome X were less likely to survive to that age.
In 1994, Phillips et al. [ 6 ] found a statistically significant association between thinness at birth (measured as the Ponderal index ) and insulin resistance , the association being independent of the length of the gestation period , adult body mass index , and confounding factors such as then-current social class or social class at birth.
In 2001, Hales and Barker [ 7 ] updated the hypothesis by positing that the thrifty phenotype may be an evolutionary adaptation: the thrifty phenotype responds to fetal malnutrition by selectively preserving more vital organs of the body and preparing the fetus for a postnatal environment where resources will be scarce.
Maternal nutrition can affect the development of the unborn child in poor nutritional environments such that it will be prepared for survival within that poor environment. This results in a thrifty phenotype (Hales & Barker, 1992 [ 8 ] [ 9 ] ). It is sometimes called Barker's hypothesis , after Professor David J. P. Barker, researching at the University of Southampton who published the theory in 1990. [ 10 ]
The thrifty phenotype hypothesis says that early-life metabolic adaptations help in survival of the organism by selecting an appropriate trajectory of growth in response to environmental cues. An example of this is type 2 diabetes. In their review, Barker and Hales discuss evidence that beta cells develop abnormally due to malnutrition during fetal development, causing insulin abnormalities later in life. The review also notes that low birth weight alone is not necessarily a manifestation of the thrifty phenotype: since low birth weight is not exclusively caused by maternal malnutrition, other factors could influence the low birth weight–disease relationship. [ 8 ]
Before the term thrifty phenotype was coined, Barker had noted the phenomenon with cardiovascular disease. In his lecture paper, he discusses the role of malnutrition during fetal development in obstructed lung disease (now known as chronic obstructive pulmonary disease [COPD]), ischemic heart disease , and blood pressure. For each of these diseases, there was an association between social class and the prevalence of the disease. This was determined to be due to malnutrition during key points of organ development in utero. [ 11 ]
However, environmental changes during early development may result in the selected trajectory becoming inappropriate, resulting in adverse effects on health. This paradox generates doubts about whether the thrifty phenotype is adaptive for human offspring. Thus, the thrifty phenotype should be considered as the capacity of all offspring to respond to environmental cues during early ontogenetic development. It has been suggested that the thrifty phenotype is the consequence of three distinct adaptive processes: maternal effects, niche construction and developmental plasticity, all of which are influenced by the brain. While developmental plasticity demonstrates an adaptation by the offspring, niche construction and parental effects are the result of parental selection rather than offspring fitness. Therefore, the thrifty phenotype can be described as a manipulation of the offspring phenotype for the benefit of maternal fitness. The information that enters the offspring phenotype during early development mirrors the mother's own developmental experience and the quality of the environment during her own maturation, rather than predicting the possible future environment of the offspring. [ 12 ]
Not all research into this topic has been conducted on diseases. Other research has explored the thrifty phenotype hypothesis as a causal factor for differing development into puberty and adulthood. A review of the literature up to 2013 discussed not only hierarchical tissue preservation within pancreatic cells, but also research on limb shortening to preserve the development of more vital organs and bones. [ 13 ] An example of this phenomenon is a study published in 2018 by the Royal Society, which found that hypoxic stress at differing altitudes affected offspring limb length. [ 14 ] Fetal overnutrition may also play a key role in development, increasing the likelihood of early puberty and obesity. [ 15 ]
Many human diseases in adulthood are related to growth patterns during early life, with early-life nutrition implicated as the underlying mechanism. Individuals with a thrifty phenotype will have "a smaller body size, a lowered metabolic rate and a reduced level of behavioral activity… adaptations to an environment that is chronically short of food" (Bateson & Martin, 1999 [ 16 ] ). Those with a thrifty phenotype who actually develop in an affluent environment may be more prone to metabolic disorders, such as obesity and type II diabetes , whereas those who have received a positive maternal forecast will be adapted to good conditions and therefore better able to cope with rich diets. This idea (Barker, 1992 [ 17 ] ) is now widely (if not universally) accepted and is a source of concern for societies undergoing a transition from sparse to better nutrition (Robinson, 2001 [ 18 ] ).
Risk factors of thrifty phenotype include advanced maternal age and placental insufficiency . [ 19 ]
The ability to conserve, acquire and expend energy is believed to be an innate, ancient trait that is embedded in the genome in a way that is quite protected against mutations . [ 20 ] These changes are also believed to possibly be inherited across generations. [ 20 ] Leptin has been identified as a possible gene for the acquisition of these thrifty traits. [ 20 ]
On a larger anatomic scale, the molecular mechanisms are broadly caused by a suboptimal environment in the reproductive tract or maternal physiological adaptations to pregnancy . [ 19 ] | https://en.wikipedia.org/wiki/Thrifty_phenotype |
A throat culture is a laboratory diagnostic test that evaluates for the presence of a bacterial or fungal infection in the throat. A sample from the throat is collected by swabbing the throat and placing the sample into a special cup ( culture ) that allows infections to grow. If an organism grows, the culture is positive and the presence of an infection is confirmed. The type of infection is found using a microscope , chemical tests, or both. If no infection grows, the culture is negative. Common infectious organisms tested for by a throat culture include Candida albicans known for causing thrush and Group A streptococcus known for causing strep throat , [ 1 ] scarlet fever , and rheumatic fever . [ 1 ] Throat cultures are more sensitive (81% sensitive) than the rapid strep test (70%) for diagnosing strep throat, but are nearly equal in terms of specificity. [ 1 ]
A throat culture may be done to investigate the cause of a sore throat . Most sore throats are caused by viral infections. However, in some cases the cause of a sore throat may be unclear and a throat culture can be used to determine if the infection is bacterial. Identifying the responsible organism can guide treatment.
The person receiving the throat culture is asked to tilt his or her head back and open his or her mouth. The health professional will press the tongue down with a tongue depressor and examine the mouth and throat. A clean swab will be rubbed over the back of the throat, around the tonsils , and over any red areas or sores to collect a sample.
The sample may also be collected using a throat washout. For this test, the patient will gargle a small amount of salt water and then spit the fluid into a clean cup. This method gives a larger sample than a throat swab and may make the culture more reliable.
A culture for Streptococcus pyogenes can take 18–24 hours when grown at 37 degrees Celsius (body temperature). [ 1 ] | https://en.wikipedia.org/wiki/Throat_culture |
In hydrology , throughfall is the process by which wet leaves shed excess water onto the ground surface. These drops have greater erosive power because they are heavier than rain drops. Furthermore, where there is a high canopy, falling drops may reach terminal velocity , about 8 metres per second (26 ft/s), thus maximizing the drop's erosive potential. [ 2 ]
Rates of throughfall are higher in areas of forest where the trees are broad-leaved, because the flat leaves allow water to collect. Drip-tips also facilitate throughfall. Rates of throughfall are lower in coniferous forests, as conifers can hold only individual droplets of water on their needles.
Throughfall is a crucial process to consider when designing pesticides for foliar application, since it governs how they are washed off the foliage and the fate of potential pollutants in the environment. [ 3 ]
| https://en.wikipedia.org/wiki/Throughfall |
In hydrology , throughflow , a subtype of interflow (percolation), is the lateral unsaturated flow of water in the soil zone, typically through a highly permeable geologic unit overlying a less permeable one. Water thus returns to the surface, as return flow , before or upon entering a stream or the groundwater. [ 1 ] [ 2 ] Once water infiltrates into the soil, it is still affected by gravity and percolates down to the water table or, if permeability varies laterally, travels downslope. [ 1 ] Throughflow usually occurs during peak hydrologic events (such as high precipitation). Flow rates depend on the hydraulic conductivity of the geologic medium. [ 1 ] | https://en.wikipedia.org/wiki/Throughflow |
Throwback uniforms , throwback jerseys , retro kits or heritage guernseys are sports uniforms styled to resemble the uniforms that a team wore in the past . One-time or limited-time retro uniforms are sometimes produced to be worn by teams in games, on special occasions such as anniversaries of significant events.
Throwback uniforms have proven popular in all major pro and college sports in North America, not only with fans, but with the teams' merchandising departments. Because the "authentic" uniforms (accurate reproductions) and less-authentic "replicas" had been so popular at retail, the professional leagues institutionalized throwbacks as " third jerseys ". In some instances, teams will wear "fauxbacks", which are new retro-style uniforms harkening back to a time that predates the team itself. For example, though the Tampa Bay Rays first took the field in 1998, they have worn 1979-style uniforms on several occasions since introducing them in 2012, and have also worn pre-1998 jerseys of several defunct local minor league teams, including the Tampa Tarpons and Tampa Smokers . [ 1 ] [ 2 ]
Throwbacks were introduced in the NFL in 1991 at retail through the NFL Throwbacks Collection. [ citation needed ] The rights to produce the vintage apparel were limited to six apparel licensees, including Tiedman & Company Sportswear (exclusive to jerseys), Riddell (helmets), Starter (caps), Nutmeg Mills (sweatshirts), and DeLong (jackets). In 1994, to honor the NFL's 75th Anniversary, teams were allowed to wear modern versions of their old uniform styles.
For their 80th anniversary, the Pittsburgh Steelers released a throwback uniform that honored the 1934 team. The uniform was a gold and black horizontally striped jersey with white squares containing the numbers. [ 3 ] The throwback uniform was worn twice during the regular season [ 4 ] [ 5 ] and drew major media attention. USA Today said that the Steelers looked like "bumblebee[s] in a Depression-era chain gang." [ 6 ]
The NFL imposed a new rule for the 2013 season prohibiting the use of alternate colored helmets, eliminating many of the historically accurate throwback uniforms that had been in use up to that point. Teams are still allowed to use alternate decals (or no decals at all) for their throwbacks, but they must use them on the regular helmets. [ 7 ] The one-helmet rule was repealed in 2022 , allowing a number of teams to revisit classic uniforms from the past, such as the New York Giants ' 1980s blue uniforms, and the Tennessee Titans ' powder blue uniforms of their predecessors, the Houston Oilers . [ 8 ]
The Clemson University football team wore throwback uniforms in a single game during the 1995 season (October 7 vs. Georgia), in commemoration of the 100th anniversary of Clemson's football program. The uniforms resembled those of the 1939 Tigers, Clemson's first bowl team. In a continuation of the centennial celebration, the uniforms were also worn for one game the following season, a September 7, 1996, contest against Furman. [ 9 ]
The Texas Longhorns college football team wore throwback uniforms for a single game during their 2005 national championship season as a way of honoring the past. The throwback jerseys were similar to jerseys worn during their 1963 National Championship season under Coach Darrell K Royal . [ 10 ] [ 11 ] [ 12 ]
The University of Illinois football team wore throwback uniforms in a single game on September 6, 2008, in honor of the re-dedication of the renovated Memorial Stadium . The uniforms were styled after the 1960s-era uniforms worn by linebacker Dick Butkus . [ 13 ]
The University of Virginia football team wore throwback uniforms in a single game on September 6, 2008, in honor of Virginia's teams from 1984 through 1993. The university's athletic department termed the game a "Retro Game" instead of using the term "throwback". [ 14 ] The University of Virginia football team also wore throwback uniforms in a single game on September 29, 2012, in honor of Virginia's 1968 team and Frank Quayle . [ 15 ]
The University of Florida football team wore throwback uniforms in a single game on September 30, 2006, in honor of Florida's teams in the 1960s. [ 16 ]
The University of Washington football team wore throwback uniforms on September 29, 2007, to honor the 1960 national championship team. The throwback jerseys were dark blue with gold helmets. [ 17 ] [ 18 ] On October 16, 2021 the Huskies again wore throwback jerseys to honor the 30th anniversary of the 1991 national championship team. [ 19 ] [ 20 ]
For the 2009 and 2010 seasons, as part of Nike's Pro Combat program, several college football teams, including Ohio State and Oregon State wore throwback-inspired uniforms. In addition, for the 2009 playing of the "Holy War" rivalry against the University of Utah Utes (and also in the Las Vegas Bowl ), the BYU Cougars donned royal blue throwback uniforms to commemorate the 25th anniversary of their 1984 National Championship season. These throwbacks, along with another alternate royal blue uniform, have been employed occasionally in subsequent seasons; since 2014, they have been worn for the team's homecoming game each year. [ 21 ]
The Kansas Jayhawks football team wore throwback uniforms on October 1, 2011, to honor the 50th anniversary of the 1961 KU football team, winners of the 1961 Bluebonnet Bowl , the program's first-ever bowl victory. [ 22 ]
The University of Oregon football team wore throwback uniforms on October 8, 2016 to honor the 100th anniversary of the 1916 team, then known as the Webfoots. The jerseys were navy blue with yellow "Webfoots" lettering across the chest. [ 23 ]
'Retro shirts', as they are known in the United Kingdom, are also sometimes used in association football, albeit with modern fabrics. In 2005–06 Arsenal changed their home colours from their traditional red and white to a variant of maroon known as redcurrant as a commemoration of their final season at Highbury Stadium ; this colour was supposedly the same shade the team had worn when they first played at Highbury in 1913, [ 24 ] although later evidence suggested that Arsenal's main colour at that time was a more standard shade known as 'Garibaldi red'. [ 25 ] Redcurrant has nonetheless still featured in their kits since, most recently on the yellow change kit worn between 2010 and 2012, which had redcurrant shorts and pinstripes on the shirt and socks.
Manchester United wore several retro-style kits in the 1990s and 2000s, based on kits worn by the club in the 1950s and '60s, as well as that worn by their first ever team, known then as Newton Heath . The Newton Heath-inspired kit was introduced in 1992 and worn for two seasons as a third kit. [ 26 ] They wore a replica of their jersey from 1958 during the Manchester derby against Manchester City on February 10, 2008, at Old Trafford to mark the 50th anniversary of the Munich air disaster four days earlier. United were granted special dispensation by the Premier League to wear the one-off uniform, which was devoid of logos and kit markings, and used the traditional "one to eleven" numbering scheme rather than squad numbers . [ 27 ] In a gesture of solidarity, Manchester City similarly removed the sponsor and manufacturer logos from their kits for the game, giving their shirts the same clean and empty look resembling the plain shirts of the 1950s, when logos and team badges were not worn. However, they used the current season's kit style and chose not to go the whole distance in producing a retro-looking kit, retaining the club crest, competition sleeve patches, and the player names and squad numbers, but adding a black ribbon above the right breast. [ citation needed ] The previous season, 2006–07 , United introduced a similar 1950s-style uniform to celebrate 50 years of the Busby Babes ' first league championship . [ 28 ] After their Champions League victory in 2008, United introduced another retro-style kit for 2008–09 , celebrating the 40th anniversary of their first European Cup win. The club unveiled an all-blue third kit, based on the one worn against Benfica in the 1968 final . [ 29 ]
In 1999, to celebrate the 100th anniversary of its foundation, A.C. Milan introduced a retro kit that was worn on several official matches by its players across the 1999–2000 season. The kit resembled the thin stripes design of the first silk shirts used by the club in the first decade of the 20th century. [ 30 ]
More authentic reproductions of kits from the past have become popular fashion items, especially jerseys linked to successful or memorable teams. When France won the 1998 World Cup , their uniform was reminiscent of the design of the triumphant Euro 1984 team, with a red horizontal stripe and three thin horizontal stripes across the chest. Germany clearly based their 2018 World Cup design, featuring an unusual angular stripe pattern across the chest, on the shirt they wore while winning the trophy in 1990, [ 31 ] although they failed to attain the same level of performance. Several other nations at that tournament had designs based on 'classics' of 20–30 years earlier. [ 32 ]
When the United States men's soccer team took the field between 1999 and 2001, their plain white uniform with a thick V-neck collar looked reminiscent of the U.S. Soccer Federation 's first uniform worn in 1916. A similar uniform was produced in 2013, complete with vintage crest, to mark the 100th anniversary of the USSF. [ citation needed ]
For the FIFA Centenary Match in 2004, France and Brazil played in kits resembling their first ever home kits. The Brazilian team wore white tops with blue trim, the original colours of their home kit, which was replaced in 1951 by today's yellow top with green trim after the 1950 World Cup defeat . [ 33 ]
Tottenham Hotspur celebrated their 125th anniversary during the 2007–08 season by launching a special kit in the club's early colours, sky blue and white, which were originally worn in 1885. [ 34 ] The kit was worn for one game only, a 4–4 home draw to Aston Villa . [ 35 ]
During the later stages of the 2011–12 season, financially troubled Scottish club Rangers wore their normal blue shirts on the pitch, but began selling and encouraging fans to wear throwback red and black striped scarves, the traditional colours of the burgh of Govan (where Ibrox Stadium is located) in an attempt to raise money. The club would be placed in administration, face liquidation and then sold to a new ownership group, and forced to re-apply for entry to the fourth (lowest) tier of the senior Scottish football system for the 2012–13 season . That year, Rangers and rivals Celtic both released retro-style simple kits with round collars and small sponsor logos to acknowledge historic anniversaries, despite being with different suppliers; however, Rangers' absence from the top division meant they never met wearing the 'matching' designs. [ 36 ] [ 37 ]
English club West Bromwich Albion wore a replica of their 1968 FA Cup Final kit, in their Premier League game against Leicester City on April 11, 2015. [ 38 ] It was worn to honour the match-winning goalscorer in the 1968 Cup Final, Jeff Astle , who died in 2002 due to chronic traumatic encephalopathy as a result of heading the heavy leather footballs through his career. The kit used the 1–11 numbering system save for the goalkeeper's shirt, which was left blank as they were in those days.
For the 2019 Copa América , the Brazil national team released a 100th anniversary shirt, in white with blue details. It resembled the shirt worn in the first official match, against Exeter City in 1919. The white uniform had last been worn in the 1950 FIFA World Cup 'final' that Brazil lost to Uruguay at the Maracanã . [ 39 ] The retro kit debuted in the first match, against Bolivia .
In December 2023, German supplier Adidas released its Originals/Lifestyle collection line consisting of now classic 1970s, 1980s and 1990s national team kits. Some kits included were Mexico's 1983–84 away kit, Argentina's 1994–97 away kit featured in their group play match against Greece at the 1994 World Cup ; the kit was worn well into the 1997 FIFA World Youth Championship , both of Germany (home and away) and Spain's (home) 1996–97 kits which were featured at Euro 96 . All items were released on 1 December 2023 with a pricing of $110.00 USD with several items selling out in less than three minutes. [ 40 ]
In June 2024, Nike re-launched Brazil's home kit as worn for the 1998 World Cup and both the 1999 Copa América and FIFA Confederations Cup . [ 41 ]
In 1990, the Chicago White Sox wore replicas of their 1917 World Series uniforms against the Milwaukee Brewers as part of the White Sox celebration of the final season at Comiskey Park . During the game, the scoreboard and public address system were turned off, and the lineups announced with a hand-held megaphone. [ 42 ]
In 2003 the St. Louis Cardinals hosted the Baltimore Orioles with teams wearing retro St. Louis Cardinals and St. Louis Browns (the predecessor to today's Orioles, which moved to Baltimore for 1954) uniforms, respectively. The scoreboard that day said "Browns" and the stadium announcer played along with the fantasy as well. [ 43 ]
The Tampa Bay Rays have staged a "Turn Back the Clock" promotion with a retro theme and throwback uniforms almost every season of their existence. Because the franchise does not yet have a long history from which to choose uniforms, they have often worn the uniforms of historical local teams such as the Tampa Tarpons of the Florida State League (worn in 1999, 2006, and 2010), the St. Petersburg Pelicans of the Senior Professional Baseball Association (worn in 2008), the St. Petersburg Saints (2007) and Tampa Smokers (2011) of the Florida International League , and the University of Tampa Spartans (2000). The Rays have worn their own uniforms for Turn Back the Clock night only once: in 2009, when they wore Devil Rays "rainbow" uniforms from their 1998 inaugural season. [ 44 ]
In 2019, the Cincinnati Reds wore a total of 15 throwback uniforms throughout the season as a part of their 150th anniversary celebration. [ 45 ]
In 1921 a baseball game held at Rickwood Field as part of the Semicentennial of Birmingham, Alabama was played in "old-style" uniforms and according to "the rules of the games as they were in 1872". [ 46 ] Since 1996, Rickwood Field has been the site of the annual "Rickwood Classic", a regular season Birmingham Barons game in which both Southern League teams wear uniforms honoring some period of their respective histories.
The Philadelphia Wings indoor lacrosse team ditched their silver, red, and black uniforms for a game to wear their original orange and white jerseys worn in the early 1970s from the original National Lacrosse League . For the 100th anniversary of the rivalry between Johns Hopkins and Maryland in men's lacrosse , both teams wore special retro jerseys. [ 47 ] During the 35th anniversary of women's field hockey at Dartmouth College , the Big Green are wearing a special harlequin-design throwback uniform.
The first documented use of a throwback uniform came during the 1998 season , when the Calgary Stampeders wore 1948 red striped jerseys to celebrate the first Grey Cup championship won by the Stampeders franchise. The jerseys were worn on October 4, 1998, against the BC Lions . [ 48 ] The BC Lions were the next to wear throwback jerseys in 2003, as they were celebrating their 50th season with orange replica jerseys from the 1954 BC Lions season . Those jerseys were worn four times that season with the first being the home opener that year. [ 49 ] In both cases, neither uniform was accurate as the jerseys were paired with pants and helmets from both teams' present day sets. In 2007 , the Saskatchewan Roughriders wore green replica jerseys from the late 1960s to 1970s with double white striping over the shoulders. As opposed to the single season usage the Stampeders and Lions employed, the Roughriders wore these jerseys from 2007 to 2013 , including their usage in the 97th and 98th Grey Cup .
It wasn't until the 2008 CFL season that the league started to truly embrace throwback uniforms when they announced that the Winnipeg Blue Bombers and Toronto Argonauts would play two games (September 12 in Toronto and October 10 in Winnipeg ) to celebrate and recognize the 1950s and in particular, the 1950 Mud Bowl Grey Cup game . [ 50 ] Both teams wore coloured jerseys, as was common during the 1950s. Toronto's jerseys were a light blue in colour, with dark blue striping on the sleeves and the team's old "Pull Together" football-as-a-ship logo on the shoulders. The Blue Bombers' jerseys were dark blue in colour, with gold sleeve stripes. The team's 1950s-era logo was on the front of the jersey, just below the V in the neck. A special CFL "Retro Week" logo adorned each jersey as well, that logo being a take-off of the maple leaf one used as the league symbol from 1954 through 1969.
For the 2009 CFL season , all eight teams wore retro uniforms , this time based on uniforms from the 1960s. [ 51 ] Week 3 of the 2009 season featured all teams wearing their retro uniforms. When revealed at the time, four teams had white retro jerseys and four had coloured retro jerseys. As the season progressed, Saskatchewan added a green 1960s jersey for the Labour Day Classic and Calgary wore a white 1960s jersey for the Labour Day rematch versus Edmonton . 11 games were scheduled during the season to feature both teams wearing these uniforms while more were added later on.
In 2010, all eight teams again wore retro uniforms and for this season it was based on uniforms worn from the 1970s. [ 52 ] Teams wore retro uniforms during weeks 6 and 7, however, contrary to the previous year, only the Saskatchewan Roughriders wore white throwback uniforms, meaning most teams wore their regular white uniforms as the away team. [ 53 ] The Roughriders wore their regular 1970s throwback jersey during retro games they hosted. Additionally, during this season, the Roughriders were celebrating their 100th anniversary as a franchise and wore black, red, and silver throwback uniforms similar to the ones worn by the Regina Roughriders from 1912 to 1947. These uniforms were worn on July 17, 2010. [ 54 ]
While the league had originally planned to celebrate with retro uniforms each season leading up to the 100th Grey Cup , the CFL did not introduce 1980s-themed uniforms for the 2011 CFL season . [ 50 ] Some teams (Calgary, Saskatchewan, Toronto, and Winnipeg) continued wearing the previous year's retro uniforms while the rest wore no throwback uniforms at all. In 2012, all teams remodeled their full uniform set with only Saskatchewan and Winnipeg carrying over their 1970s throwback uniforms.
In 2013, the Toronto Argonauts wore 1980s throwback uniforms on August 23, 2013, to celebrate the 30th anniversary of the 71st Grey Cup championship. [ 55 ] Also that year, the Hamilton Tiger-Cats wore red, black, and white replicas of the 1943 Hamilton Flying Wildcats to celebrate the 70th anniversary of their 31st Grey Cup victory. [ 56 ]
In February 2005 at Eden Park , Auckland, Australia and New Zealand contested the very first Twenty20 cricket international match. Both teams appeared in retro 1980s-style tight-fitting One Day International uniforms without team names, numbers or sponsors' logos. The Australians wore their original "yellow and gold" whilst New Zealand were in "beige" inspired by the Beige Brigade sports fans. The game was played in a light-hearted manner with both teams sporting 1980s-style head bands, moustaches and hairstyles. [ 57 ]
In the 1991–92 NHL season , the Original Six teams, the Boston Bruins , Chicago Blackhawks , Detroit Red Wings , Montreal Canadiens , New York Rangers , and Toronto Maple Leafs all wore throwback jerseys for select games, based on uniforms from before the modern expansion era. In addition, the All-Star Game featured throwbacks based upon the original All-Star uniforms from 1947–1959. In subsequent seasons, teams have worn throwback jerseys on special occasions to celebrate team or even stadium anniversaries, or in annual "heritage uniform" games. The NHL Heritage Classic and NHL Winter Classic also initially began with participating teams wearing throwback uniforms, with later games in both series seeing teams wear hybrid designs inspired by past uniforms, and on occasion from prior teams in the participants' home cities.
For the 2021 and 2022–23 seasons, Adidas introduced the "Reverse Retro" program, in which each team created a specialty jersey that took elements of old jerseys with a twist. Some teams wore colour-swapped traditional uniforms, such as the Edmonton Oilers , who swapped orange and blue on their 1980s uniforms, and the New Jersey Devils , who swapped green and red on their '80s uniforms. Other teams merged two eras of their jerseys: the Los Angeles Kings took their original colours of forum blue and gold and applied them to their 1990s style of jerseys, which had only been black and silver. There were also teams that combined the looks of their current and former franchises, such as the Minnesota Wild using North Star colours on their own logo, and the Colorado Avalanche applying their current colours to the style of the Quebec Nordiques jersey. [ 58 ] [ 59 ] | https://en.wikipedia.org/wiki/Throwback_uniform |
Thrust-to-weight ratio is a dimensionless ratio of thrust to weight of a rocket , jet engine , propeller engine, or a vehicle propelled by such an engine that is an indicator of the performance of the engine or vehicle.
The instantaneous thrust-to-weight ratio of a vehicle varies continually during operation due to progressive consumption of fuel or propellant and in some cases a gravity gradient . The thrust-to-weight ratio based on initial thrust and weight is often published and used as a figure of merit for quantitative comparison of a vehicle's initial performance.
The thrust-to-weight ratio is calculated by dividing the thrust (in SI units – in newtons ) by the weight (in newtons) of the engine or vehicle. The weight (N) is calculated by multiplying the mass in kilograms (kg) by the acceleration due to gravity (m/s 2 ). The thrust can also be measured in pound-force (lbf), provided the weight is measured in pounds (lb). Division using these two values still gives the numerically correct (dimensionless) thrust-to-weight ratio. For valid comparison of the initial thrust-to-weight ratio of two or more engines or vehicles, thrust must be measured under controlled conditions.
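A minimal sketch of this calculation in code (the function name and the example figures are illustrative, not taken from the cited sources):

#include <iostream>

constexpr double kStandardGravity = 9.80665;  // m/s^2

// Thrust-to-weight ratio from thrust in newtons and mass in kilograms.
double thrust_to_weight(double thrust_newtons, double mass_kg) {
    return thrust_newtons / (mass_kg * kStandardGravity);  // weight = m * g
}

int main() {
    // Hypothetical vehicle: 7,600 kN of thrust lifting a 550,000 kg vehicle.
    std::cout << thrust_to_weight(7.6e6, 550000.0) << "\n";  // prints about 1.41
}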
Because an aircraft's weight can vary considerably, depending on factors such as munition load, fuel load, cargo weight, or even the weight of the pilot, the thrust-to-weight ratio is also variable and even changes during flight operations. There are several standards for determining the weight of an aircraft used to calculate the thrust-to-weight ratio range.
The thrust-to-weight ratio and lift-to-drag ratio are the two most important parameters in determining the performance of an aircraft.
The thrust-to-weight ratio varies continually during a flight. Thrust varies with throttle setting, airspeed , altitude , air temperature, etc. Weight varies with fuel burn and payload changes. For aircraft, the quoted thrust-to-weight ratio is often the maximum static thrust at sea level divided by the maximum takeoff weight . [ 2 ] Aircraft with thrust-to-weight ratio greater than 1:1 can pitch straight up and maintain airspeed until performance decreases at higher altitude. [ 3 ]
A plane can take off even if the thrust is less than its weight as, unlike a rocket, the lifting force is produced by lift from the wings, not directly by thrust from the engine. As long as the aircraft can produce enough thrust to travel at a horizontal speed above its stall speed, the wings will produce enough lift to counter the weight of the aircraft.
For propeller-driven aircraft, the thrust-to-weight ratio can be calculated as follows in imperial units: [ 4 ]

thrust/weight = (550 × η p × hp) / ( V × W )

where η p is propulsive efficiency (typically 0.65 for wooden propellers, 0.75 for metal fixed-pitch propellers and up to 0.85 for constant-speed propellers), hp is the engine's shaft horsepower , V is true airspeed in feet per second, and W is the weight in pounds; the factor 550 converts horsepower to foot-pounds-force per second.

The equivalent metric formula, with engine power P in watts, true airspeed V in metres per second and mass m in kilograms, is

thrust/weight = ( η p × P ) / ( V × m × g 0 ).
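A short sketch of the imperial-unit propeller formula above in code (identifier names and the sample figures are illustrative):

#include <iostream>

// 550 converts shaft horsepower to foot-pounds-force per second.
double prop_thrust_to_weight(double eta_p, double shaft_hp,
                             double airspeed_ftps, double weight_lb) {
    return (550.0 * eta_p * shaft_hp) / (airspeed_ftps * weight_lb);
}

int main() {
    // Hypothetical light aircraft: constant-speed propeller (eta_p = 0.85),
    // 180 hp, 200 ft/s true airspeed, 2,400 lb gross weight.
    std::cout << prop_thrust_to_weight(0.85, 180.0, 200.0, 2400.0) << "\n";  // about 0.18
}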
The thrust-to-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of gravitational acceleration g . [ 5 ]
Rockets and rocket-propelled vehicles operate in a wide range of gravitational environments, including the weightless environment. The thrust-to-weight ratio is usually calculated from initial gross weight at sea level on earth [ 6 ] and is sometimes called thrust-to-Earth-weight ratio . [ 7 ] The thrust-to-Earth-weight ratio of a rocket or rocket-propelled vehicle is an indicator of its acceleration expressed in multiples of earth's gravitational acceleration, g 0 . [ 5 ]
The thrust-to-weight ratio of a rocket improves as the propellant is burned. With constant thrust, the maximum ratio (maximum acceleration of the vehicle) is achieved just before the propellant is fully consumed. Each rocket has a characteristic thrust-to-weight curve, or acceleration curve, not just a scalar quantity.
The thrust-to-weight ratio of an engine is greater than that of the complete launch vehicle, but is nonetheless useful because it determines the maximum acceleration that any vehicle using that engine could theoretically achieve with minimum propellant and structure attached.
For a takeoff from the surface of the earth using thrust and no aerodynamic lift , the thrust-to-weight ratio for the whole vehicle must be greater than one . In general, the thrust-to-weight ratio is numerically equal to the g-force that the vehicle can generate. [ 5 ] Take-off can occur when the vehicle's g-force exceeds local gravity (expressed as a multiple of g 0 ).
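For example, a vehicle lifting off vertically with an initial thrust-to-weight ratio of 1.5 generates 1.5 g of thrust acceleration, of which 1 g is spent opposing gravity, leaving a net upward acceleration of 0.5 g 0 (about 4.9 m/s 2 ).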
The thrust-to-weight ratio of rockets typically greatly exceeds that of airbreathing jet engines because the comparatively far greater density of rocket fuel eliminates the need for much engineering materials to pressurize it.
Many factors affect thrust-to-weight ratio. The instantaneous value typically varies over the duration of flight with the variations in thrust due to speed and altitude, together with changes in weight due to the amount of remaining propellant, and payload mass. Factors with the greatest effect include freestream air temperature , pressure , density , and composition. Depending on the engine or vehicle under consideration, the actual performance will often be affected by buoyancy and local gravitational field strength . | https://en.wikipedia.org/wiki/Thrust-to-weight_ratio |
In modular arithmetic , Thue's lemma roughly states that every modular integer may be represented by a "modular fraction" such that the numerator and the denominator have absolute values not greater than the square root of the modulus.
More precisely, for every pair of integers ( a , m ) with m > 1 , given two positive integers X and Y such that X ≤ m < XY , there are two integers x and y such that

a y ≡ x (mod m )

and

| x | < X , 0 < y < Y .
Usually, one takes X and Y equal to the smallest integer greater than the square root of m , but the general form is sometimes useful, and makes the uniqueness theorem (below) easier to state. [ 1 ]
The first known proof is attributed to Axel Thue ( 1902 ) [ 2 ] who used a pigeonhole argument. [ 3 ] It can be used to prove Fermat's theorem on sums of two squares by taking m to be a prime p that is congruent to 1 modulo 4 and taking a to satisfy a 2 + 1 ≡ 0 mod p . (Such an " a " is guaranteed for " p " by Wilson's theorem . [ 4 ] )
In general, the solution whose existence is asserted by Thue's lemma is not unique. For example, when a = 1 there are usually several solutions ( x , y ) = (1, 1), (2, 2), (3, 3), ... , provided that X and Y are not too small. Therefore, one may only hope for uniqueness for the rational number x / y , to which a is congruent modulo m if y and m are coprime . Nevertheless, this rational number need not be unique; for example, if m = 5 , a = 2 and X = Y = 3 , one has the two solutions ( x , y ) = (2, 1) and (−1, 2), that is, 2 ≡ 2/1 ≡ −1/2 (mod 5).
However, for X and Y small enough, if a solution exists, it is unique. More precisely, with above notation, if

a y 1 ≡ x 1 (mod m )

and

a y 2 ≡ x 2 (mod m ),

with

| x 1 | < X , | x 2 | < X and 0 < y 1 < Y , 0 < y 2 < Y ,

and

2 XY < m ,

then

x 1 y 2 = x 2 y 1 .
This result is the basis for rational reconstruction , which allows using modular arithmetic for computing rational numbers for which one knows bounds for numerators and denominators. [ 5 ]
The proof is rather easy: by multiplying each congruence by the other y i and subtracting, one gets

y 2 x 1 − y 1 x 2 ≡ 0 (mod m ).
The hypotheses imply that each term has an absolute value lower than XY < m / 2 , and thus that the absolute value of their difference is lower than m . This implies that y 2 x 1 − y 1 x 2 = 0 , hence the result.
The original proof of Thue's lemma is not efficient, in the sense that it does not provide any fast method for computing the solution.
The extended Euclidean algorithm allows us to provide a proof that leads to an efficient algorithm having the same computational complexity as the Euclidean algorithm. [ 6 ]
More precisely, given the two integers m and a appearing in Thue's lemma, the extended Euclidean algorithm computes three sequences of integers ( t i ) , ( x i ) and ( y i ) such that

t i m + y i a = x i ,
where the x i are non-negative and strictly decreasing. The desired solution is, up to the sign, the first pair ( x i , y i ) such that x i < X . | https://en.wikipedia.org/wiki/Thue's_lemma |
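A compact sketch of this construction (the function name and interface are illustrative, not from any particular library):

#include <utility>

// Extended Euclidean algorithm on (m, a), maintaining the invariant
// a*y ≡ x (mod m) for each remainder x and cofactor y, and stopping at the
// first remainder below the bound X from Thue's lemma.
std::pair<long long, long long> thue_pair(long long a, long long m, long long X) {
    long long x0 = m, y0 = 0;                  // a*y0 ≡ x0 (mod m)
    long long x1 = ((a % m) + m) % m, y1 = 1;  // a*y1 ≡ x1 (mod m)
    while (x1 != 0 && x1 >= X) {
        long long q = x0 / x1;
        long long x2 = x0 - q * x1;            // next remainder
        long long y2 = y0 - q * y1;            // matching cofactor
        x0 = x1; y0 = y1;
        x1 = x2; y1 = y2;
    }
    return {x1, y1};  // e.g. a = 2, m = 5, X = 3 returns (2, 1), i.e. 2 ≡ 2/1 (mod 5)
}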
In mathematics , a Thue equation is a Diophantine equation of the form
f ( x , y ) = r ,
where f is an irreducible bivariate form of degree at least 3 over the rational numbers , and r is a nonzero rational number. It is named after Axel Thue , who in 1909 proved that a Thue equation can have only finitely many solutions in integers x and y , a result known as Thue's theorem . [ 1 ]
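As a concrete illustration (a naive bounded search over one illustrative equation, not one of the practical algorithms mentioned below), the following program lists the small integer solutions of the cubic Thue equation x^3 − 2y^3 = 1; Thue's theorem guarantees that only finitely many exist in total:

#include <cstdio>

int main() {
    const long long bound = 100;   // search box |x|, |y| <= 100
    for (long long x = -bound; x <= bound; ++x)
        for (long long y = -bound; y <= bound; ++y)
            if (x * x * x - 2 * y * y * y == 1)
                std::printf("x = %lld, y = %lld\n", x, y);  // finds (1, 0) and (-1, -1)
    return 0;
}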
The Thue equation is solvable effectively : there is an explicit bound on the solutions x , y of the form $(C_1 r)^{C_2}$ , where constants C 1 and C 2 depend only on the form f . A stronger result holds: if K is the field generated by the roots of f , then the equation has only finitely many solutions with x and y integers of K , and again these may be effectively determined. [ 2 ]
Thue's original proof that the equation named in his honour has finitely many solutions goes through the proof of what is now known as Thue's theorem : it asserts that for any algebraic number α having degree d ≥ 3 and for any ε > 0 there exist only finitely many coprime integers p , q with q > 0 such that $|\alpha - p/q| < q^{-(d+1+\varepsilon)/2}$ . Applying this theorem allows one to almost immediately deduce the finiteness of solutions. However, Thue's proof, as well as subsequent improvements by Siegel , Dyson , and Roth , were all ineffective.
Finding all solutions to a Thue equation can be achieved by a practical algorithm, [ 3 ] which has been implemented in the following computer algebra systems :
While there are several effective methods to solve Thue equations (including using Baker's method and Skolem's p -adic method), these are not able to give the best theoretical bounds on the number of solutions. One may qualify an effective bound C ( f , r ) of the Thue equation f ( x , y ) = r by the parameters it depends on, and how "good" the dependence is.
The best result known today, essentially building on pioneering work of Bombieri and Schmidt , [ 4 ] gives a bound of the shape $C(f,r) = C \cdot (\deg f)^{1+\omega(r)}$ , where C is an absolute constant (that is, independent of both f and r ) and ω ( ⋅ ) is the number of distinct prime factors of r . The most significant qualitative improvement to the theorem of Bombieri and Schmidt is due to Stewart , [ 5 ] who obtained a bound of the form $C(f,r) = C \cdot (\deg f)^{1+\omega(g)}$ , where g is a divisor of r exceeding $|r|^{3/4}$ in absolute value . It is conjectured that one may take the bound $C(f,r) = C(\deg f)$ ; that is, depending only on the degree of f but not its coefficients , and completely independent of the integer r on the right hand side of the equation.
This is a weaker form of a conjecture of Stewart , and is a special case of the uniform boundedness conjecture for rational points . This conjecture has been proven for "small" integers r , where smallness is measured in terms of the discriminant of the form f , by various authors, including Evertse , Stewart , and Akhtari . Stewart and Xiao demonstrated that a strong form of this conjecture, asserting that the number of solutions is absolutely bounded, holds on average (as r ranges over the interval | r | ≤ Z with Z → ∞ ). [ 6 ]
| https://en.wikipedia.org/wiki/Thue_equation |
Thujaplicinol is either of two isomeric tropolone -related natural products . They are found primarily in the bark, needles and xylem of tree species of the family Cupressaceae , such as Cupressus, Thuja, Juniperus and Thujopsis . [ 1 ] [ 2 ] The thujaplicinols are structurally equivalent to the thujaplicins with an additional hydroxyl group . They belong to the class of natural terpenoids having two free hydroxyl groups at the C3 and C5 positions.
The thujaplicinols are highly volatile compounds. It is known that the presence of such tropolones, including alpha-tropolone and its isopropyl derivatives, results in the high natural durability of wood species such as western red cedar , juniper and cypress .
Alpha-thujaplicinol , an isomer of thujaplicinol, is often encountered in the Asian species Thujopsis dolabrata and exhibits high antibacterial and antifungal activities. It has been found to be effective against Enterococcus faecalis and Legionella pneumophila , even at low inhibitory concentrations (1.56 to 50 mg/ml). [ 3 ]
In a 2004 study, α-thujaplicinol was shown to have high cytotoxic effects on several cancer cell lines, such as human stomach cancer and murine lymphocytic leukemia. [ 4 ]
The other isomer of the compound is called β-thujaplicinol . A recent study found that it inhibited the development of hepatocellular carcinoma cells because it triggered autophagic cell death and subsequent apoptosis. [ 5 ]
Earlier studies have shown that β-thujaplicinol can additionally suppress estrogen-dependent breast cancer by regulating estrogen receptor signaling. [ 6 ] | https://en.wikipedia.org/wiki/Thujaplicinol |
Thumbcast is a term used for the mobile delivery of text, picture, audio, or video content via short message service , multimedia messaging service , WAP push , or other mobile distribution mechanism. The term is an evolution specialized for original mobile content, coming from the generally audio-based podcast , the input mechanisms of multi-tap and predictive text , and the distribution of content directly to mobile phones .
Think: podcast for your phone. Our thumbcasts are hand-crafted by local correspondents and delivered direct to you as text messages.
Thumbcasts are coming to Boston.
The term 'thumbcast' became one of that day's 'Hot Searches' on the site.
Johnson, Carolyn (2007-04-09). "Snippets of news, via cellphone" . The Boston Globe . New York Times. pp. E1, E4 . Retrieved 2007-04-09 . | https://en.wikipedia.org/wiki/Thumbcast |
Thumbshots are screenshots of online documents such as web page in small thumbnail sizes. Thumbshots help users to visualize web sites or preview links before clicking. The dimension of a thumbshot image (usually under 120 pixels in width by 90 pixels in height) is generally much smaller than the actual online document allowing users to download and view a sample of the document quickly. Thumbshots can be automatically generated by custom software or manually screen captured using popular graphics programs. [ 1 ]
Some thumbshot pictures are enhanced with informative icons or text highlights. Normally, thumbshots are found embedded inline beside hyperlinks in a web page to help improve web browser navigation by helping users to locate information faster. Thumbshots are often used to provide visual hints in search engines and web directories where a large number of text links are displayed on a page.
One of the early applications of thumbshots was developed by Jakob Nielsen while working for Sun Microsystems . The application consisted of capturing and then displaying a small image of a webpage that a user had saved as a bookmark in their web browser . At that time, they were referred to as visual bookmarks.
Since then, thumbshots have been used in numerous different applications, ranging from Windows Explorer and a desktop search engine like Copernic to Internet search engines and web directories which provide a thumbshot preview of a webpage alongside search results. | https://en.wikipedia.org/wiki/Thumbshot |
Thunderstorm asthma (also referred to in the media as thunder fever or a pollen bomb [ 1 ] ) is the triggering of an asthma attack by environmental conditions directly caused by a local thunderstorm . Due to the acute nature of the onset and wide exposure of local populations to the same triggering conditions, severe epidemic thunderstorm asthma events can put significant and unmanageable stress on public health facilities.
Widely recognised but not fully understood, it has been proposed that during a thunderstorm, pollen grains can absorb moisture and then burst into much smaller fragments with these fragments being easily dispersed by wind. [ 2 ] [ 3 ] While larger pollen grains are usually filtered by hairs in the nose, the smaller pollen fragments are able to pass through and enter the lungs, triggering the asthma attack. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The phenomenon of thunderstorm asthma has been recognised since the 1980s, with an event in Birmingham , England, in July 1983 often considered the first prominent example. [ 8 ] A 2013 study which reviewed instances of abnormally high asthma-related admissions to emergency departments between 1983 and 2013 identified strong correlation between those instances and thunderstorms, while noting that such events were so rare that very little detailed research into the phenomenon had occurred. [ 9 ]
A significant impetus in the study of the phenomenon occurred after an event in November 2016 in Melbourne , Australia. Recognised as the most severe epidemic thunderstorm asthma event on record, the onset overwhelmed the city's ambulance system and some local hospitals, saw a ten-fold increase in asthma cases presenting to emergency departments compared with average, and resulted in ten deaths. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] One month later, an epidemic thunderstorm asthma event in Kuwait resulted in at least 5 deaths and many admissions to the ICU. [ 16 ] [ 17 ]
Since then there have been further reports of epidemic thunderstorm asthma events in Wagga Wagga , Australia; London , England; Naples , Italy; [ 18 ] Atlanta , United States; [ 19 ] and Ahvaz , Iran. [ 20 ]
Many of those affected during a thunderstorm asthma outbreak may have never experienced an asthma attack before. [ 21 ]
It has been found that 95% of those affected by thunderstorm asthma had a history of hayfever , and 96% of those people had tested positive to grass pollen allergies, particularly rye grass. [ 22 ] A rye grass pollen grain can hold up to 700 tiny starch granules, each measuring 0.6 to 2.5 μm, small enough to reach the lower airways in the lung. [ 23 ] [ 24 ] [ 25 ]
Patients with a history of grass allergies should be tested for asthma and treated for the grass allergies, and for asthma if it is also present. Patients with known asthma should be treated and counseled on the importance of adherence to preventative medication protocols. [ 26 ] Preventative treatments found useful for severe asthma include allergen immunotherapy (AIT), particularly sublingual immunotherapy (SLIT). [ 27 ] | https://en.wikipedia.org/wiki/Thunderstorm_asthma |
In computer programming , a thunk is a subroutine used to inject a calculation into another subroutine. Thunks are primarily used to delay a calculation until its result is needed, or to insert operations at the beginning or end of the other subroutine. They have many other applications in compiler code generation and modular programming .
The term originated as a whimsical irregular form of the verb think . It refers to the original use of thunks in ALGOL 60 compilers, which required special analysis (thought) to determine what type of routine to generate. [ 1 ] [ 2 ]
The early years of compiler research saw broad experimentation with different evaluation strategies . A key question was how to compile a subroutine call if the arguments can be arbitrary mathematical expressions rather than constants. One approach, known as " call by value ", calculates all of the arguments before the call and then passes the resulting values to the subroutine. In the rival " call by name " approach, the subroutine receives the unevaluated argument expression and must evaluate it.
A simple implementation of "call by name" might substitute the code of an argument expression for each appearance of the corresponding parameter in the subroutine, but this can produce multiple versions of the subroutine and multiple copies of the expression code. As an improvement, the compiler can generate a helper subroutine, called a thunk , that calculates the value of the argument. The address and environment [ a ] of this helper subroutine are then passed to the original subroutine in place of the original argument, where it can be called as many times as needed. Peter Ingerman first described thunks in reference to the ALGOL 60 programming language, which supports call-by-name evaluation. [ 4 ]
Although the software industry largely standardized on call-by-value and call-by-reference evaluation, [ 5 ] active study of call-by-name continued in the functional programming community. This research produced a series of lazy evaluation programming languages in which some variant of call-by-name is the standard evaluation strategy. Compilers for these languages, such as the Glasgow Haskell Compiler , have relied heavily on thunks, with the added feature that the thunks save their initial result so that they can avoid recalculating it; [ 6 ] this is known as memoization or call-by-need .
Functional programming languages have also allowed programmers to explicitly generate thunks. This is done in source code by wrapping an argument expression in an anonymous function that has no parameters of its own. This prevents the expression from being evaluated until a receiving function calls the anonymous function, thereby achieving the same effect as call-by-name. [ 7 ] The adoption of anonymous functions into other programming languages has made this capability widely available.
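As a rough illustration of this pattern (a sketch only, not code from the article; the function name compute and the example expression are invented), a C++ caller can wrap an argument expression in a parameterless lambda so that it is evaluated only if and when the callee invokes it:

```cpp
#include <functional>
#include <iostream>

// The callee receives a thunk instead of a pre-computed value and may
// invoke it zero, one, or many times -- mirroring call-by-name semantics.
int compute(const std::function<int()>& thunk, bool needed) {
    if (!needed) {
        return 0;              // the argument expression is never evaluated
    }
    return thunk() + thunk();  // re-evaluated on each use
}

int main() {
    int x = 20;
    // Wrapping "x + 1" in a lambda delays its evaluation: an explicit thunk.
    std::cout << compute([&] { return x + 1; }, true) << '\n';   // prints 42
    std::cout << compute([&] { return x + 1; }, false) << '\n';  // prints 0
}
```

A lazy language's call-by-need behaviour would additionally cache the first result inside the thunk rather than re-evaluating it on every use.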
Thunks are useful in object-oriented programming platforms that allow a class to inherit multiple interfaces , leading to situations where the same method might be called via any of several interfaces. The following code illustrates such a situation in C++ .
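The code listing referred to here was not carried over into this extract; the following is a minimal sketch of the kind of hierarchy the next paragraph describes, assuming A and B are interface-like base classes with a virtual method Access that C implements (everything other than the names A, B, C and Access is invented for the example):

```cpp
struct A {
    virtual int Access() const = 0;
    virtual ~A() = default;
};

struct B {
    virtual int Access() const = 0;
    virtual ~B() = default;
};

// C inherits two interfaces; its single Access() implementation can be
// reached through an A*, a B*, or a C*.
struct C : public A, public B {
    int value = 42;  // invented member, stands in for C's real state
    int Access() const override { return value; }
};

int test() {
    C c;
    B* b = &c;           // b points at the B subobject inside c
    return b->Access();  // dispatch must still hand C::Access the whole C object
}
```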
In this example, the code generated for each of the classes A, B and C will include a dispatch table that can be used to call Access on an object of that type, via a reference that has the same type. Class C will have an additional dispatch table, used to call Access on an object of type C via a reference of type B. The expression b->Access() will use B's own dispatch table or the additional C table, depending on the type of object b refers to. If it refers to an object of type C, the compiler must ensure that C's Access implementation receives an instance address for the entire C object, rather than the inherited B part of that object. [ 8 ]
As a direct approach to this pointer adjustment problem, the compiler can include an integer offset in each dispatch table entry. This offset is the difference between the reference's address and the address required by the method implementation. The code generated for each call through these dispatch tables must then retrieve the offset and use it to adjust the instance address before calling the method.
The solution just described has problems similar to the naïve implementation of call-by-name described earlier: the compiler generates several copies of code to calculate an argument (the instance address), while also increasing the dispatch table sizes to hold the offsets. As an alternative, the compiler can generate an adjustor thunk along with C's implementation of Access that adjusts the instance address by the required amount and then calls the method. The thunk can appear in C's dispatch table for B, thereby eliminating the need for callers to adjust the address themselves. [ 9 ]
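As a very loose, hand-written analogue of what the compiler emits (this is not real compiler output; the names and the offset value are invented for illustration), an adjustor thunk is simply a tiny function that fixes up the instance pointer before forwarding to the real implementation:

```cpp
#include <cstddef>

// Hypothetical stand-in for the real implementation of C::Access, which
// expects the address of the complete C object.
int C_Access_impl(char* c_this);

// Invented offset of the B subobject within C; the compiler knows the
// real value at compile time.
constexpr std::ptrdiff_t kOffsetOfBInC = 8;

// Adjustor thunk: entered with the address of the B subobject, it recovers
// the whole-object address and forwards the call. Placing this entry in
// C's dispatch table for B spares every caller from doing the adjustment.
int C_Access_adjustor_thunk(char* b_this) {
    return C_Access_impl(b_this - kOffsetOfBInC);
}
```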
Thunks have been widely used to provide interoperability between software modules whose routines cannot call each other directly. This may occur because the routines have different calling conventions , run in different CPU modes or address spaces , or at least one runs in a virtual machine . A compiler (or other tool) can solve this problem by generating a thunk that automates the additional steps needed to call the target routine, whether that is transforming arguments, copying them to another location, or switching the CPU mode. A successful thunk minimizes the extra work the caller must do compared to a normal call.
Much of the literature on interoperability thunks relates to various Wintel platforms, including MS-DOS , OS/2 , [ 10 ] Windows [ 11 ] [ 12 ] [ 13 ] [ 14 ] and .NET , and to the transition from 16-bit to 32-bit memory addressing. As customers have migrated from one platform to another, thunks have been essential to support legacy software written for the older platforms. UEFI CSM is another example, using thunks to support legacy boot loaders .
The transition from 32-bit to 64-bit code on x86 also uses a form of thunking ( WoW64 ). However, because the x86-64 address space is larger than the one available to 32-bit code, the old "generic thunk" mechanism could not be used to call 64-bit code from 32-bit code. [ 15 ] The only case of 32-bit code calling 64-bit code is WoW64's thunking of Windows APIs to 32-bit.
On systems that lack automatic virtual memory hardware, thunks can implement a limited form of virtual memory known as overlays . With overlays, a developer divides a program's code into segments that can be loaded and unloaded independently, and identifies the entry points into each segment. A segment that calls into another segment must do so indirectly via a branch table . When a segment is in memory, its branch table entries jump into the segment. When a segment is unloaded, its entries are replaced with "reload thunks" that can reload it on demand. [ 16 ]
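A minimal sketch of the reload-thunk idea in C++ (names invented; historical overlay systems were generated by linkers rather than written by hand like this):

```cpp
#include <cstdio>

// The real entry point of an overlay segment.
static void segment_entry_real() { std::puts("segment code running"); }

// Branch-table slot: callers always jump through this pointer.
static void (*segment_entry)();

// Reload thunk, installed in the slot while the segment is unloaded.
// It (notionally) loads the segment from storage, patches the slot to
// point at the real entry, and then completes the interrupted call.
static void segment_entry_reload_thunk() {
    std::puts("loading overlay segment...");   // stands in for the real I/O
    segment_entry = &segment_entry_real;       // re-point the branch table
    segment_entry_real();                      // finish the original call
}

int main() {
    segment_entry = &segment_entry_reload_thunk;  // segment starts "unloaded"
    segment_entry();  // first call goes through the reload thunk
    segment_entry();  // later calls jump straight into the segment
}
```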
Similarly, systems that dynamically link modules of a program together at run-time can use thunks to connect the modules. Each module can call the others through a table of thunks that the linker fills in when it loads the module. This way the modules can interact without prior knowledge of where they are located in memory. [ 17 ] | https://en.wikipedia.org/wiki/Thunk |
Thuraya ( Arabic : الثريا , Gulf Arabic pron.: [ɐθ.θʊˈrɑj.jɐ] ; from the Arabic name for the Pleiades , Thurayya ) [ 1 ] is a United Arab Emirates -based regional mobile-satellite service (MSS) provider. The company operates two geosynchronous satellites and provides telecommunications coverage in about 150 countries in Europe , the Middle East , North, Central and East Africa and Asia. [ 2 ] [ 3 ] Thuraya's L-band network delivers voice and data services.
Thuraya is the mobile satellite services subsidiary of Yahsat , a global satellite operator based in the United Arab Emirates, fully owned by Mubadala Investment Company . [ citation needed ] The geostationary nature of the service implies high round-trip times from satellite to Earth, leading to a noticeable lag during voice calls.
Thuraya's country calling code is +882 16, which is part of the ITU-T International Networks numbering group. Thuraya is not part of the +881 country calling code numbering group, as this is allocated by ITU-T for networks in the Global Mobile Satellite System , of which Thuraya is not a part, being a regional rather than a global system.
Transceivers communicate directly with the satellites using an antenna of roughly the same length as the handset and have a maximum output power of 2 watts . QPSK modulation is used for the air interface . Thuraya SIM cards work in regular GSM telephones, and ordinary GSM SIM cards can be used on the satellite network as long as the SIM provider has a roaming agreement with Thuraya. As with all geosynchronous voice services, a noticeable lag is present while making a call.
Due to the relatively high gain of the antennas contained within handsets, it is necessary to roughly aim the antenna at the satellite. As the handsets contain a GPS receiver, it is possible to program the ground position of the satellites as waypoints to assist with aiming.
The service operates on L-band carriers assigned in blocks to areas of coverage referred to as "spotbeams", which are Thuraya's equivalent to cells or service areas. In L-band, 34 MHz of bandwidth from 1.525 GHz to 1.559 GHz is assigned for downlink (space-to-Earth) communication, while the uplink (Earth-to-space) operates between 1.6265 GHz and 1.6605 GHz. The uplink and downlink channels comprise 1087 paired carrier frequencies on a 31.25 kHz raster. A time-division multiple access (TDMA) time-slot architecture is employed which allocates a carrier in time slots of a fixed length. [ citation needed ]
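As a rough consistency check of these figures (simple arithmetic, not a value quoted by Thuraya), dividing the band by the channel raster gives 34 MHz / 31.25 kHz = 1088 {\displaystyle 34\ {\text{MHz}}/31.25\ {\text{kHz}}=1088} raster positions, in line with the 1087 paired carrier frequencies stated above.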
Every Thuraya phone and standalone transceiver unit is fitted with a GPS receiver and transmits its location to the Thuraya gateway periodically. [ 4 ] [ 5 ] The built-in GPS capability can be used for waypoint navigation.
Thuraya operates two communications satellites built by Boeing .
The first satellite, named Thuraya 1, had deficient solar panels and could not operate properly; this satellite was positioned above Korea for testing purposes. It was launched on 21 October 2000 by Sea Launch on a Zenit 3SL rocket. [ 6 ] At launch it weighed 5250 kg. [ 7 ] The satellite was used for testing and backup until May 2007, when it was moved to a junk orbit and declared to have reached its end of life. [ 8 ]
Thuraya 2 was launched by Sea Launch on 10 June 2003. [ 9 ] It is located in geosynchronous orbit at 44° E longitude with 6.3° inclination. [ 10 ] The satellite can handle 13,750 simultaneous voice calls. This satellite currently serves most of Europe, the Middle East, Africa and parts of Asia. The craft had a weight of 3200 kg and an expected life of 12 years. The two solar-panel wings, each containing five panels, generate 11 kW of electric power. The craft has two antenna systems: a round C-band antenna 1.27 meters in diameter, and a 12 × 16 meter, 128-element L-band AstroMesh reflector antenna supplied by Astro Aerospace in Carpinteria, California . These antennas support up to 351 separate spot beams , each configurable to concentrate power where usage needs it. [ 11 ] Observations by amateur astronomers suggested that the nearby MENTOR 4 USA-202 , a satellite belonging to the US National Reconnaissance Office , was eavesdropping on Thuraya 2, and this was reported to be confirmed by documents released on 9 September 2016 [ 12 ] by The Intercept as part of the Snowden files . [ 13 ]
The third satellite was planned for launch by Sea Launch in 2007, and the start of Far East and Australia service was planned for 15 October 2007. The failure in January 2007 of the NSS-8 mission on another Sea Launch rocket led to a substantial delay in the launch of Thuraya-3, which was rescheduled for 14 November 2007, but the launch was postponed several times due to sea conditions. [ 14 ] The launch vessels set out from port again on 2 January 2008, and launch occurred successfully at 11:49 GMT on 15 January 2008. [ 15 ] [ 16 ] The Thuraya 3 satellite is technically the same as Thuraya 2, but located in geosynchronous orbit at 98.5° E longitude with 6.2° inclination.
On 15 April 2024, Thuraya 3 suffered an "unexpected payload anomaly" and, despite attempted recovery efforts, the coverage provided by this satellite could not be restored. [ 17 ] Thuraya advised that it had suffered a sustained force majeure event and withdrew services from the affected regions (such as Australia, New Caledonia, Papua New Guinea, Papua, East Timor, and Indonesia).
Thuraya 4-NGS (Next Generation Satellite) [ 18 ] is a satellite launched on a SpaceX Falcon 9 rocket on 3 January 2025. [ 19 ] [ 20 ] [ 21 ] It will replace Thuraya 2. This process will span several months until Thuraya 4 reaches its operational geostationary orbit at 44° East, approximately 36,000 kilometers above Earth. [ 21 ] | https://en.wikipedia.org/wiki/Thuraya |
Thure E. Cerling (born 1949) [ 1 ] is a Distinguished Professor of Geology and Geophysics and a Distinguished Professor of Biology at the University of Utah . [ 2 ] Cerling is a leading expert in the evolution of modern landscapes including modern mammals and their associated grassland ecologies and stable isotope analyses of the atmosphere. [ 3 ] Cerling lives in Salt Lake City, Utah.
Cerling's research interests are primarily focused on Earth surface geochemistry processes and on the geological record of ecological change. [ 3 ] In particular, working in conservation biology , Cerling has analyzed modern animal diet and physiology using stable isotopes as natural tracers, as well as studying dietary changes of different mammalian lineages extending over millions of years.
Emphasizing continental ecologies of lakes and modern soils and ecosystems, Cerling has written extensively about the evolution of ecosystems, the inception and strengthening of monsoons , and the atmosphere over geological time scales, drawing on evidence about the fractionation of stable isotopes in these systems.
Current research work includes a focus on the development of landforms in semi-arid regions, the geology of Old World paleo-anthropologic sites and on contaminant migration in surface and ground waters, including the use of tritium and helium as hydrological tracers.
Together with James Ehleringer , he established the Stable Isotope Biogeochemistry and Ecology ( IsoCamp ) summer course at the University of Utah , which "trains students in the fundamental environmental and biological theory underlying isotope fractionation processes across a broad spectrum of ecological and environmental applications".
Thure E. Cerling received his Bachelor of Science degree in geology and chemistry from Iowa State University , in Ames, Iowa , in 1972, and, in 1973, his Master of Science in geology from Iowa State. In 1977 he was awarded a Ph.D. in geology from the University of California at Berkeley . From 1977 to 1979 he worked as a research scientist at Oak Ridge National Laboratory , and since 1979 he has been a member of the University of Utah 's faculty.
With the publication of "Expansion of C 4 ecosystems as an indicator of global ecological change in the late Miocene" in 1993, Cerling, together with Yang Wang and Jay Quade , made important contributions to carbon isotope studies. Through analysis of palaeovegetation recorded in palaeosols and of palaeodiet measured in fossil tooth enamel, they demonstrated a global increase in the biomass of plants using C 4 photosynthesis between 7 and 5 million years ago. The decline of atmospheric CO 2 concentrations below the threshold that favors C 3 -photosynthesizing plants was considered a plausible cause of the global expansion of C 4 biomass.
The publication "Global vegetation change through the Miocene/Pliocene boundary" in 1997 confirmed these results, further demonstrating that at lower latitudes the change appeared to occur earlier, because the threshold for C 3 photosynthesis is higher at warmer temperatures.
Thure Cerling and James Ehleringer , a biology professor at the University of Utah, founded Isoforensics in 2003, a company aimed at interpreting the stable isotope composition of various biological and synthetic materials. This was the first step towards the discovery they first published on February 25, 2008, in the " Proceedings of the National Academy of Sciences " under the title "Hydrogen and oxygen isotope ratios in human hair are related to geography".
Where people have been and where they have lived for a time can be determined by analyzing the stable isotope composition of their scalp hair. Cerling discovered that a strand of hair can provide valuable clues about a person's travels, by studying the variation of hydrogen-2 ( δ 2 H ) and oxygen-18 (δ18O) isotopes and comparing them to those in the local drinking water. The amount of information that can be deduced depends on the length of the hair: the longer the hair, the more information can be extracted. The geographic variation of isotope concentrations is linked to precipitation, cloud temperatures and the amount of water that evaporates from soil and plants. As clouds move off the ocean towards inland areas, the ratios of oxygen-18 to oxygen-16 and of hydrogen-2 to hydrogen-1 tend to decrease, because rain water containing oxygen-18 and hydrogen-2, being heavier, tends to fall first.
Samples of tap water were collected from more than 600 cities across the United States, as well as hair samples from barbershops in 65 cities in 20 states. The comparison showed that the hair and drinking water samples had the same isotopic variations. In order to display this information, the scientists produced color-coded maps based on the correlation of the isotopes in hair with those in drinking water. These maps show how the ratios of hydrogen and oxygen isotopes in scalp hair vary in different areas of the United States. This demonstrated that the water a person drinks leaves a record in the hair, in the form of oxygen and hydrogen isotope ratios matching those of the local tap water.
The technique was expected to provide a new tool for police, anthropologists, archaeologists and doctors.
Professor Cerling, helped by James Ehleringer and Christopher Remien (two University of Utah colleagues), George Wittemyer of Colorado State University, a member of "Save the Elephants" in Nairobi, and Iain Douglas-Hamilton , who founded the association "Save the Elephants", conducted research around the Samburu and Buffalo Springs national reserves in northern Kenya, analyzing carbon and other stable isotopes in elephant tail hair to discover where and what Victoria, Anastasia and Cleopatra, three daughters of a mother elephant named Queen Elizabeth, usually ate over a six-year period (2000–2006). In order to monitor their lives, the elephants were equipped with a Global Positioning System that recorded their positions every hour for the whole research period. To obtain samples of tail hair, the elephants were immobilized with drug-filled dart guns when necessary. Since the hair grows about an inch per month, a single hair contained isotopic information about diet during an 18-month period.
Analysis of the ratios of carbon-13 to carbon-12 along the length of a single elephant hair allowed Cerling and his team to reconstruct the elephants' diet. During the wet season, after the grass had grown long enough for the elephants to grab with their trunks, their tail hair showed a different form of carbon, indicating a large amount of high-protein grass in the diet. During the dry season, on the other hand, the hair analysis showed that the elephants had switched over to shrubs and trees.
In the Samburu and Buffalo Springs reserves, five weeks after the rainy season had started, the grass became rich in nutrients and the females were most likely to conceive, giving birth 22 months later, just in time for another rainy season to provide nutrients to the grass they would eat; the cycle could then restart.
The research also highlighted the competition between elephants and cattle: during the elephants' typical wet-season feeding, overgrazing by cattle kept the grass very short, limiting the elephants' access to it and effectively out-competing them. This situation could have affected the elephants' ability to bulk up for pregnancy.
These analyses also indicated that some elephant families are friendlier than others, and that dominant families settle in the best places, where there is plenty of food and water. | https://en.wikipedia.org/wiki/Thure_E._Cerling
William Thurston 's elliptization conjecture states that a closed 3-manifold with finite fundamental group is spherical , i.e. has a Riemannian metric of constant positive sectional curvature.
A 3-manifold with a Riemannian metric of constant positive sectional curvature is covered by the 3-sphere; moreover, the covering transformations are isometries of the 3-sphere.
If the original 3-manifold had in fact a trivial fundamental group, then it is homeomorphic to the 3-sphere (via the covering map ). Thus, proving the elliptization conjecture would prove the Poincaré conjecture as a corollary. In fact, the elliptization conjecture is logically equivalent to two simpler conjectures: the Poincaré conjecture and the spherical space form conjecture .
The elliptization conjecture is a special case of Thurston's geometrization conjecture , which was proved in 2003 by G. Perelman .
For the proof of the conjectures, see the references in the articles on geometrization conjecture or Poincaré conjecture .
This Riemannian geometry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thurston_elliptization_conjecture |
In mathematics, the Thurston norm is a function on the second homology group of an oriented 3-manifold introduced by William Thurston , which measures in a natural way the topological complexity of homology classes represented by surfaces.
Let M {\displaystyle M} be a differentiable manifold and c ∈ H 2 ( M ) {\displaystyle c\in H_{2}(M)} . Then c {\displaystyle c} can be represented by a smooth embedding S → M {\displaystyle S\to M} , where S {\displaystyle S} is a (not necessarily connected) surface that is compact and without boundary. The Thurston norm of c {\displaystyle c} is then defined to be [ 1 ]
‖ c ‖ = min S ∑ i χ − ( S i ) {\displaystyle \|c\|=\min _{S}\sum _{i}\chi _{-}(S_{i})}
where the minimum is taken over all embedded surfaces S = ⋃ i S i {\displaystyle S=\bigcup _{i}S_{i}} (the S i {\displaystyle S_{i}} being the connected components) representing c {\displaystyle c} as above, and χ − ( F ) = max ( 0 , − χ ( F ) ) {\displaystyle \chi _{-}(F)=\max(0,-\chi (F))} is the absolute value of the Euler characteristic for surfaces which are not spheres (and 0 for spheres).
This function satisfies the following properties: ‖ n c ‖ = | n | ‖ c ‖ {\displaystyle \|nc\|=|n|\,\|c\|} for all integers n {\displaystyle n} and all c ∈ H 2 ( M ) {\displaystyle c\in H_{2}(M)} , and ‖ c + c ′ ‖ ≤ ‖ c ‖ + ‖ c ′ ‖ {\displaystyle \|c+c'\|\leq \|c\|+\|c'\|} for all c , c ′ ∈ H 2 ( M ) {\displaystyle c,c'\in H_{2}(M)} .
These properties imply that ‖ ⋅ ‖ {\displaystyle \|\cdot \|} extends to a function on H 2 ( M , Q ) {\displaystyle H_{2}(M,\mathbb {Q} )} which can then be extended by continuity to a seminorm ‖ ⋅ ‖ T {\displaystyle \|\cdot \|_{T}} on H 2 ( M , R ) {\displaystyle H_{2}(M,\mathbb {R} )} . [ 2 ] By Poincaré duality , one can define the Thurston norm on H 1 ( M , R ) {\displaystyle H^{1}(M,\mathbb {R} )} .
When M {\displaystyle M} is compact with boundary, the Thurston norm is defined in a similar manner on the relative homology group H 2 ( M , ∂ M , R ) {\displaystyle H_{2}(M,\partial M,\mathbb {R} )} and its Poincaré dual H 1 ( M , R ) {\displaystyle H^{1}(M,\mathbb {R} )} .
It follows from further work of David Gabai [ 3 ] that one can also define the Thurston norm using only immersed surfaces. This implies that the Thurston norm is also equal to half the Gromov norm on homology.
The Thurston norm was introduced in view of its applications to fiberings and foliations of 3-manifolds.
The unit ball B {\displaystyle B} of the Thurston norm of a 3-manifold M {\displaystyle M} is a polytope with integer vertices. It can be used to describe the structure of the set of fiberings of M {\displaystyle M} over the circle: if M {\displaystyle M} can be written as the mapping torus of a diffeomorphism f {\displaystyle f} of a surface S {\displaystyle S} then the embedding S ↪ M {\displaystyle S\hookrightarrow M} represents a class in a top-dimensional (or open) face of B {\displaystyle B} : moreover all other integer points on the same face are also fibers in such a fibration. [ 4 ]
Embedded surfaces which minimise the Thurston norm in their homology class are exactly the closed leaves of foliations of M {\displaystyle M} . [ 3 ] | https://en.wikipedia.org/wiki/Thurston_norm |
[Diagram of chloroplast ultrastructure, labelled: 1 granum, 2 chloroplast envelope, 3 thylakoid ("you are here"), 4 stromal thylakoid, 5 stroma, 6 nucleoid (DNA ring), 7 ribosome, 8 plastoglobulus, 9 starch granule]
Thylakoids are membrane-bound compartments inside chloroplasts and cyanobacteria . They are the site of the light-dependent reactions of photosynthesis . Thylakoids consist of a thylakoid membrane surrounding a thylakoid lumen . Chloroplast thylakoids frequently form stacks of disks referred to as grana (singular: granum ). Grana are connected by intergranal or stromal thylakoids, which join granum stacks together as a single functional compartment.
In thylakoid membranes, chlorophyll pigments are found in packets called quantasomes . Each quantasome contains 230 to 250 chlorophyll molecules.
The word Thylakoid comes from the Greek word thylakos or θύλακος , meaning "sac" or "pouch". [ 1 ] Thus, thylakoid means "sac-like" or "pouch-like".
Thylakoids are membrane-bound structures embedded in the chloroplast stroma . A stack of thylakoids is called a granum and resembles a stack of coins.
The thylakoid membrane is the site of the light-dependent reactions of photosynthesis, with the photosynthetic pigments embedded directly in the membrane. The membrane appears as an alternating pattern of dark and light bands measuring one nanometer each. [ 3 ] The thylakoid lipid bilayer shares characteristic features with prokaryotic membranes and the inner chloroplast membrane. For example, acidic lipids can be found in thylakoid membranes, cyanobacteria and other photosynthetic bacteria and are involved in the functional integrity of the photosystems. [ 4 ] The thylakoid membranes of higher plants are composed primarily of phospholipids [ 5 ] and galactolipids that are asymmetrically arranged along and across the membranes. [ 6 ] Thylakoid membranes are richer in galactolipids than phospholipids; they predominantly consist of hexagonal phase II-forming monogalactosyl diglyceride lipid. Despite this composition, plant thylakoid membranes have been shown to assume largely lipid-bilayer dynamic organization. [ 7 ] Lipids forming the thylakoid membranes, rich in high-fluidity linolenic acid, [ 8 ] are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and the inner membrane of the plastid envelope, and are transported from the inner membrane to the thylakoids via vesicles. [ 9 ]
The thylakoid lumen is a continuous aqueous phase enclosed by the thylakoid membrane . It plays an important role for photophosphorylation during photosynthesis . During the light-dependent reaction, protons are pumped across the thylakoid membrane into the lumen making it acidic down to pH 4.
In higher plants thylakoids are organized into a granum-stroma membrane assembly. A granum (plural grana ) is a stack of thylakoid discs. Chloroplasts can have from 10 to 100 grana. Grana are connected by stroma thylakoids, also called intergranal thylakoids or lamellae . Grana thylakoids and stroma thylakoids can be distinguished by their different protein composition. Grana contribute to chloroplasts' large surface area to volume ratio. A recent electron tomography study of the thylakoid membranes has shown that the stroma lamellae are organized in wide sheets perpendicular to the grana stack axis and form multiple right-handed helical surfaces at the granal interface. [ 2 ] Left-handed helical surfaces consolidate between the right-handed helices and sheets. This complex network of alternating helical membrane surfaces of different radii and pitch was shown to minimize the surface and bending energies of the membranes. [ 2 ] This new model, the most extensive one generated to date, revealed that features from two, seemingly contradictory, older models [ 10 ] [ 11 ] coexist in the structure. Notably, similar arrangements of helical elements of alternating handedness, often referred to as "parking garage" structures, were proposed to be present in the endoplasmic reticulum [ 12 ] and in ultradense nuclear matter. [ 13 ] [ 14 ] [ 15 ] This structural organization may constitute a fundamental geometry for connecting between densely packed layers or sheets. [ 2 ]
Chloroplasts develop from proplastids when seedlings emerge from the ground. Thylakoid formation requires light. In the plant embryo and in the absence of light, proplastids develop into etioplasts that contain semicrystalline membrane structures called prolamellar bodies. When exposed to light, these prolamellar bodies develop into thylakoids. This does not happen in seedlings grown in the dark, which undergo etiolation . Insufficient exposure to light can cause the thylakoids to fail, which in turn causes the chloroplasts to fail, resulting in the death of the plant.
Thylakoid formation requires the action of vesicle-inducing protein in plastids 1 (VIPP1). Plants cannot survive without this protein, and reduced VIPP1 levels lead to slower growth and paler plants with reduced ability to photosynthesize. VIPP1 appears to be required for basic thylakoid membrane formation, but not for the assembly of protein complexes of the thylakoid membrane. [ 16 ] It is conserved in all organisms containing thylakoids, including cyanobacteria, [ 17 ] green algae, such as Chlamydomonas , [ 18 ] and higher plants, such as Arabidopsis thaliana . [ 19 ]
Thylakoids can be purified from plant cells using a combination of differential and gradient centrifugation . [ 20 ] Disruption of isolated thylakoids, for example by mechanical shearing, releases the lumenal fraction. Peripheral and integral membrane fractions can be extracted from the remaining membrane fraction. Treatment with sodium carbonate (Na 2 CO 3 ) detaches peripheral membrane proteins , whereas treatment with detergents and organic solvents solubilizes integral membrane proteins .
Thylakoids contain many integral and peripheral membrane proteins, as well as lumenal proteins. Recent proteomics studies of thylakoid fractions have provided further details on the protein composition of the thylakoids. [ 21 ] These data have been summarized in several plastid protein databases that are available online. [ 22 ] [ 23 ]
According to these studies, the thylakoid proteome consists of at least 335 different proteins. Out of these, 89 are in the lumen, 116 are integral membrane proteins, 62 are peripheral proteins on the stroma side, and 68 peripheral proteins on the lumenal side. Additional low-abundance lumenal proteins can be predicted through computational methods. [ 20 ] [ 24 ] Of the thylakoid proteins with known functions, 42% are involved in photosynthesis. The next largest functional groups include proteins involved in protein targeting , processing and folding with 11%, oxidative stress response (9%) and translation (8%). [ 22 ]
Thylakoid membranes contain integral membrane proteins which play an important role in light-harvesting and the light-dependent reactions of photosynthesis. There are four major protein complexes in the thylakoid membrane:
Photosystem II is located mostly in the grana thylakoids, whereas photosystem I and ATP synthase are mostly located in the stroma thylakoids and the outer layers of grana. The cytochrome b6f complex is distributed evenly throughout thylakoid membranes. Due to the separate location of the two photosystems in the thylakoid membrane system, mobile electron carriers are required to shuttle electrons between them. These carriers are plastoquinone and plastocyanin. Plastoquinone shuttles electrons from photosystem II to the cytochrome b6f complex, whereas plastocyanin carries electrons from the cytochrome b6f complex to photosystem I.
Together, these proteins make use of light energy to drive electron transport chains that generate a chemiosmotic potential across the thylakoid membrane and NADPH , a product of the terminal redox reaction. The ATP synthase uses the chemiosmotic potential to make ATP during photophosphorylation .
These photosystems are light-driven redox centers, each consisting of an antenna complex that uses chlorophylls and accessory photosynthetic pigments such as carotenoids and phycobiliproteins to harvest light at a variety of wavelengths. Each antenna complex has between 250 and 400 pigment molecules and the energy they absorb is shuttled by resonance energy transfer to a specialized chlorophyll a at the reaction center of each photosystem. When either of the two chlorophyll a molecules at the reaction center absorbs energy, an electron is excited and transferred to an electron-acceptor molecule. Photosystem I contains a pair of chlorophyll a molecules, designated P700 , at its reaction center that maximally absorbs 700 nm light. Photosystem II contains P680 chlorophyll that absorbs 680 nm light best (note that these wavelengths correspond to deep red – see the visible spectrum ). The P is short for pigment and the number is the specific absorption peak in nanometers for the chlorophyll molecules in each reaction center. Chlorophyll is the green pigment present in plants; individual pigment molecules are not visible to the unaided eye.
The cytochrome b6f complex is part of the thylakoid electron transport chain and couples electron transfer to the pumping of protons into the thylakoid lumen. Energetically, it is situated between the two photosystems and transfers electrons from photosystem II-plastoquinone to plastocyanin-photosystem I.
The thylakoid ATP synthase is a CF1FO-ATP synthase similar to the mitochondrial ATPase. It is integrated into the thylakoid membrane with the CF1-part sticking into the stroma. Thus, ATP synthesis occurs on the stromal side of the thylakoids where the ATP is needed for the light-independent reactions of photosynthesis.
The electron transport protein plastocyanin is present in the lumen and shuttles electrons from the cytochrome b6f protein complex to photosystem I. While plastoquinones are lipid-soluble and therefore move within the thylakoid membrane, plastocyanin moves through the thylakoid lumen.
The lumen of the thylakoids is also the site of water oxidation by the oxygen evolving complex associated with the lumenal side of photosystem II.
Lumenal proteins can be predicted computationally based on their targeting signals. In Arabidopsis, out of the predicted lumenal proteins possessing the Tat signal, the largest groups with known functions are 19% involved in protein processing (proteolysis and folding), 18% in photosynthesis, 11% in metabolism, and 7% redox carriers and defense. [ 20 ]
Chloroplasts have their own genome , which encodes a number of thylakoid proteins. However, during the course of plastid evolution from their cyanobacterial endosymbiotic ancestors, extensive gene transfer from the chloroplast genome to the cell nucleus took place. This results in the four major thylakoid protein complexes being encoded in part by the chloroplast genome and in part by the nuclear genome. Plants have developed several mechanisms to co-regulate the expression of the different subunits encoded in the two different organelles to assure the proper stoichiometry and assembly of these protein complexes. For example, transcription of nuclear genes encoding parts of the photosynthetic apparatus is regulated by light . Biogenesis, stability and turnover of thylakoid protein complexes are regulated by phosphorylation via redox-sensitive kinases in the thylakoid membranes. [ 25 ] The translation rate of chloroplast-encoded proteins is controlled by the presence or absence of assembly partners (control by epistasy of synthesis). [ 26 ] This mechanism involves negative feedback through binding of excess protein to the 5' untranslated region of the chloroplast mRNA . [ 27 ] Chloroplasts also need to balance the ratios of photosystem I and II for the electron transfer chain. The redox state of the electron carrier plastoquinone in the thylakoid membrane directly affects the transcription of chloroplast genes encoding proteins of the reaction centers of the photosystems, thus counteracting imbalances in the electron transfer chain. [ 28 ]
Thylakoid proteins are targeted to their destination via signal peptides and prokaryotic-type secretory pathways inside the chloroplast. Most thylakoid proteins encoded by a plant's nuclear genome need two targeting signals for proper localization: An N-terminal chloroplast targeting peptide (shown in yellow in the figure), followed by a thylakoid targeting peptide (shown in blue). Proteins are imported through the translocon of the outer and inner membrane ( Toc and Tic ) complexes. After entering the chloroplast, the first targeting peptide is cleaved off by a protease processing imported proteins. This unmasks the second targeting signal and the protein is exported from the stroma into the thylakoid in a second targeting step. This second step requires the action of protein translocation components of the thylakoids and is energy-dependent. Proteins are inserted into the membrane via the SRP-dependent pathway (1), the Tat-dependent pathway (2), or spontaneously via their transmembrane domains (not shown in the figure). Lumenal proteins are exported across the thylakoid membrane into the lumen by either the Tat-dependent pathway (2) or the Sec-dependent pathway (3) and released by cleavage from the thylakoid targeting signal. The different pathways utilize different signals and energy sources. The Sec (secretory) pathway requires ATP as an energy source and consists of SecA, which binds to the imported protein and a Sec membrane complex to shuttle the protein across. Proteins with a twin arginine motif in their thylakoid signal peptide are shuttled through the Tat (twin arginine translocation) pathway, which requires a membrane-bound Tat complex and the pH gradient as an energy source. Some other proteins are inserted into the membrane via the SRP ( signal recognition particle ) pathway. The chloroplast SRP can interact with its target proteins either post-translationally or co-translationally, thus transporting imported proteins as well as those that are translated inside the chloroplast. The SRP pathway requires GTP and the pH gradient as energy sources. Some transmembrane proteins may also spontaneously insert into the membrane from the stromal side without energy requirement. [ 29 ]
The thylakoids are the site of the light-dependent reactions of photosynthesis. These include light-driven water oxidation and oxygen evolution , the pumping of protons across the thylakoid membranes coupled with the electron transport chain of the photosystems and cytochrome complex, and ATP synthesis by the ATP synthase utilizing the generated proton gradient.
The first step in photosynthesis is the light-driven reduction (splitting) of water to provide the electrons for the photosynthetic electron transport chains as well as protons for the establishment of a proton gradient. The water-splitting reaction occurs on the lumenal side of the thylakoid membrane and is driven by the light energy captured by the photosystems. This oxidation of water conveniently produces the waste product O 2 that is vital for cellular respiration . The molecular oxygen formed by the reaction is released into the atmosphere.
Two different variations of electron transport are used during photosynthesis:
The noncyclic variety involves the participation of both photosystems, while the cyclic electron flow is dependent on only photosystem I.
A major function of the thylakoid membrane and its integral photosystems is the establishment of chemiosmotic potential. The carriers in the electron transport chain use some of the electron's energy to actively transport protons from the stroma to the lumen . During photosynthesis, the lumen becomes acidic , as low as pH 4, compared to pH 8 in the stroma. [ 30 ] This represents a 10,000 fold concentration gradient for protons across the thylakoid membrane.
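The 10,000-fold figure follows directly from the pH difference, since pH is a base-10 logarithmic measure of proton concentration (a check of the arithmetic using the values given above, not an additional measurement): [ H + ] lumen / [ H + ] stroma = 10 pH stroma − pH lumen = 10 8 − 4 = 10 4 {\displaystyle [{\text{H}}^{+}]_{\text{lumen}}/[{\text{H}}^{+}]_{\text{stroma}}=10^{{\text{pH}}_{\text{stroma}}-{\text{pH}}_{\text{lumen}}}=10^{8-4}=10^{4}} .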
The protons in the lumen come from three primary sources.
The proton gradient is also caused by the consumption of protons in the stroma to make NADPH from NADP+ at the NADP reductase.
The molecular mechanism of ATP (Adenosine triphosphate) generation in chloroplasts is similar to that in mitochondria and takes the required energy from the proton motive force (PMF). [ citation needed ] However, chloroplasts rely more on the chemical potential of the PMF to generate the potential energy required for ATP synthesis. The PMF is the sum of a proton chemical potential (given by the proton concentration gradient) and a transmembrane electrical potential (given by charge separation across the membrane). Compared to the inner membranes of mitochondria, which have a significantly higher membrane potential due to charge separation, thylakoid membranes lack a charge gradient. [ citation needed ] To compensate for this, the 10,000 fold proton concentration gradient across the thylakoid membrane is much higher compared to a 10 fold gradient across the inner membrane of mitochondria. The resulting chemiosmotic potential between the lumen and stroma is high enough to drive ATP synthesis using the ATP synthase . As the protons travel back down the gradient through channels in ATP synthase , ADP + P i are combined into ATP. In this manner, the light-dependent reactions are coupled to the synthesis of ATP via the proton gradient. [ citation needed ]
Cyanobacteria are photosynthetic prokaryotes with highly differentiated membrane systems. Cyanobacteria have an internal system of thylakoid membranes where the fully functional electron transfer chains of photosynthesis and respiration reside. The presence of different membrane systems lends these cells a unique complexity among bacteria . Cyanobacteria must be able to reorganize the membranes, synthesize new membrane lipids, and properly target proteins to the correct membrane system. The outer membrane , plasma membrane , and thylakoid membranes each have specialized roles in the cyanobacterial cell. Understanding the organization, functionality, protein composition, and dynamics of the membrane systems remains a great challenge in cyanobacterial cell biology. [ 31 ]
In contrast to the thylakoid network of higher plants, which is differentiated into grana and stroma lamellae, the thylakoids in cyanobacteria are organized into multiple concentric shells that split and fuse into parallel layers forming a highly connected network. This results in a continuous network that encloses a single lumen (as in higher‐plant chloroplasts) and allows water‐soluble and lipid‐soluble molecules to diffuse through the entire membrane network. Moreover, perforations are often observed within the parallel thylakoid sheets. These gaps in the membrane allow for the traffic of particles of different sizes throughout the cell, including ribosomes, glycogen granules, and lipid bodies. [ 32 ] The relatively large distance between the thylakoids provides space for the external light-harvesting antennae, the phycobilisomes . [ 33 ] This macrostructure, as in the case of higher plants, shows some flexibility during changes in the physicochemical environment. [ 34 ] Thylakoid membranes in cyanobacteria have a variety of different spatial distributions which are characteristic of different species, and these distributions have in the past been used to infer taxonomic relationships between species, but DNA evidence suggests that the type of spatial distribution does not reliably reflect taxonomic relationships between species. [ 35 ] | https://en.wikipedia.org/wiki/Thylakoid
Thymic mimetic cells are a heterogeneous population of cells located in the thymus that exhibit phenotypes of a wide variety of differentiated peripheral cells. They arise from medullary thymic epithelial cells (mTECs) and also function in negative selection of self-reactive T cells . [ 1 ]
Some subsets of these cells were observed as early as the mid-1800s because of their distinct, seemingly misplaced, phenotype. The most readily observed subsets were those accumulating and forming microscopic structures, most notably Hassall's corpuscles resembling skin keratinocytes . Many subsets with a more dispersed distribution were found later. Substantial progress has been made in recent years owing to the rapid development of single cell sequencing methods, such as scRNA-seq or scATAC-seq . [ 1 ]
Although thymic mimetic cells exhibit transcriptional programmes of cells from other tissues, they are not identical to them and share a part of their gene expression with mTECs from which they arise. The entire range of phenotypes as well as the pathways that lead to them are still in need of further research. A recent review recognizes (based on expression of lineage specific transcription factors and cell products) the following subtypes: Basal (skin/lung) mTEC, Enterocyte/hepatocyte mTEC, Ciliated mTEC, Ionocyte mTEC, Keratinocyte mTEC, Microfold mTEC, Muscle mTEC, Neuroendocrine mTEC, Parathyroid mTEC, Secretory mTEC, Thyroid mTEC, Tuft mTEC. [ 1 ] [ 2 ]
Since its discovery in 2001, [ 3 ] AIRE (Autoimmune regulator) has been the main focus of studies of thymic (central) immune tolerance . AIRE induces the expression of many antigens specific to differentiated cells not found in the thymus (termed peripheral tissue antigens or tissue restricted antigens), thus helping to detect and remove T cells that react with these antigens. [ 4 ] The mechanism of AIRE is complicated and there are reasons to believe that it is not the sole mechanism of TRA (tissue restricted antigen) expression. AIRE is not sequence-specific, making its action stochastic and not well targeted; TRAs can also be detected in cells with AIRE knocked out, [ 5 ] and patients with AIRE deficiency ( APS-1 ) share some autoimmune symptoms but can have other symptoms which are not shared by most. [ 6 ]
The expression of peripheral antigens in mimetic cells strongly implies a function in establishing central immune tolerance. This has been reported [ 7 ] but further studies are needed. It is unknown what prompts the mTECs to differentiate into mimetic cells; the lineage-specific transcription factors could be induced by AIRE or perhaps by other signals. Lineage-specific transcription factors expressed by some mimetic cell subtypes have been associated with autoimmune disorders. [ 1 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
In addition, some mimetic cells can shape the environment and function of the thymus by producing cytokines . [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Thymic_mimetic_cells |
Thymidine diphosphate ( TDP ) or deoxythymidine diphosphate ( dTDP ) (also thymidine pyrophosphate , dTPP ) is a nucleotide diphosphate . It is an ester of pyrophosphoric acid with the nucleoside thymidine . dTDP consists of the pyrophosphate group , the pentose sugar deoxyribose , and the nucleobase thymine . Unlike the other deoxyribonucleotides , thymidine diphosphate does not always contain the "deoxy" prefix in its name. [ 1 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thymidine_diphosphate |
Thymidine diphosphate glucose (often abbreviated dTDP-glucose or TDP-glucose ) is a nucleotide-linked sugar consisting of deoxythymidine diphosphate linked to glucose . It is the starting compound for the syntheses of many deoxysugars . [ 1 ]
DTDP-glucose is produced by the enzyme glucose-1-phosphate thymidylyltransferase and is synthesized from dTTP and glucose-1-phosphate . Pyrophosphate is a byproduct of the reaction.
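Written as an overall reaction (standard notation for this enzyme-catalysed step, not a formula quoted from the article): dTTP + glucose 1-phosphate → dTDP-glucose + PP i {\displaystyle {\text{dTTP}}+{\text{glucose 1-phosphate}}\rightarrow {\text{dTDP-glucose}}+{\text{PP}}_{i}} .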
DTDP-glucose goes on to form a variety of compounds in nucleotide sugars metabolism . Many bacteria utilize dTDP-glucose to form exotic sugars that are incorporated into their lipopolysaccharides or into secondary metabolites such as antibiotics . During the syntheses of many of these exotic sugars, dTDP-glucose undergoes a combined oxidation/reduction reaction via the enzyme dTDP-glucose 4,6-dehydratase , producing dTDP-4-keto-6-deoxy-glucose. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Thymidine_diphosphate_glucose |
Thymidine monophosphate ( TMP ), also known as thymidylic acid ( conjugate base thymidylate ), deoxythymidine monophosphate ( dTMP ), or deoxythymidylic acid ( conjugate base deoxythymidylate ), is a nucleotide that is used as a monomer in DNA . It is an ester of phosphoric acid with the nucleoside thymidine . dTMP consists of a phosphate group , the pentose sugar deoxyribose , and the nucleobase thymine . Unlike the other deoxyribonucleotides , thymidine monophosphate often does not contain the "deoxy" prefix in its name; nevertheless, its symbol often includes a "d" ("dTMP"). [ 1 ] Dorland’s Illustrated Medical Dictionary [ 2 ] provides an explanation of the nomenclature variation at its entry for thymidine.
As a substituent , it is called by the prefix thymidylyl- .
This biochemistry article is a stub . You can help Wikipedia by expanding it .
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Thymidine_monophosphate |