Ramipril, sold under the brand name Altace among others, is a medication of the ACE inhibitor type used to treat high blood pressure, heart failure, and diabetic kidney disease. [ 1 ] It can also be used as a preventive medication in people over 55 years old who are at high risk of heart attack, stroke, or cardiovascular death, such as some diabetics and patients with vascular disease. [ 2 ] [ 3 ] [ 4 ] It is a reasonable initial treatment for high blood pressure. [ 1 ] It is taken by mouth. [ 1 ]
Common side effects include headaches, dizziness, fatigue, and cough. [ 1 ] Serious side effects may include liver problems, angioedema , kidney problems , and high blood potassium . [ 1 ] Use in pregnancy and breastfeeding is not recommended. [ 5 ] It is an ACE inhibitor and works by decreasing renin-angiotensin-aldosterone system activity. [ 1 ]
Ramipril was patented in 1981 and approved for medical use in 1989. [ 6 ] It is available as a generic medication . [ 7 ] In 2022, it was the 187th most commonly prescribed medication in the United States, with more than 2 million prescriptions. [ 8 ] [ 9 ]
Ramipril is a pro-drug. The molecule must be hydrolyzed by an esterase at its ethyl ester group (–OCH2CH3) to form the active carboxylate. This carboxylate then interacts with the positively charged Zn²⁺ ion located at the active site of the ACE enzyme. [ 10 ] Ramipril is similar in structure to another ACE inhibitor, trandolapril, but its second (fused) ring is a cyclopentane ring instead of a cyclohexane ring.
Medical uses include:
Contraindications to its use include volume-depleted patients, a history of angioedema while on an ACE inhibitor , pregnancy and hypotension . [ citation needed ]
People should not take ramipril (or any ACE inhibitors) if they have hyperkalemia . It is also recommended to avoid using salt-substitutes as this can further increase potassium levels in the blood. [ 1 ]
Ramipril can be considered in patients with significant unilateral or bilateral renal artery stenosis (RAS). [ 13 ] An early rise in serum creatinine above baseline is expected after initiation of therapy with ramipril, so monitoring serum biochemistry and renal function after initiation is crucial. [ 13 ] [ 14 ] In some patients with significant narrowing of the arteries to both kidneys, treatment with ramipril can increase the serum creatinine concentration (measured in a blood test), which returns to baseline upon therapy cessation. [ 15 ]
Serious allergic reactions to this drug are unlikely, but immediate medical attention must be sought if they occur. Symptoms of a serious allergic reaction include, but are not limited to, a rash or swelling of the face, mouth, tongue, or throat. In extreme cases, ramipril may lead to potentially fatal liver problems.
ACE inhibitors inhibit the actions of angiotensin converting enzyme (ACE), thereby lowering the production of angiotensin II and decreasing the breakdown of bradykinin . The decrease in angiotensin II results in relaxation of arteriole smooth muscle leading to a decrease in total peripheral resistance , reducing blood pressure as the blood is pumped through widened vessels. Its effect on bradykinin is responsible for the dry cough side effect .
Ramipril, a prodrug or precursor drug, is converted to the active metabolite ramiprilat by carboxylesterase 1 . [ 17 ] [ 18 ] Ramiprilat is mostly excreted by the kidneys . Its half-life is variable (3–16 hours), and is prolonged by heart and liver failure , as well as kidney failure . Peak effect occurs between 3 and 6 hours after dosing, with approximately 50% of this effect retained after 24 hours. [ 19 ]
The penultimate step in the synthesis of ramipril combines an alanine derivative with a ( S,S,S )-2-azabicyclo-[3.3.0]-octane-3-carboxylic acid protected as its benzyl ester. [ 20 ] In the original patented route, these components were obtained by a multi-step process. [ 21 ]
The acid chloride forms an amide bond with the amino group of the pyrrolidine ring in the presence of triethylamine and ramipril is the product after the benzyl ester has been removed by hydrogenation . [ 20 ]
The compound was protected by a patent which was assigned to the German pharmaceutical company Hoechst AG (since merged into Aventis ) on 29 October 1991. [ 21 ] The patent was scheduled to expire on 29 October 2008. On 11 September 2007, in an appeal by the Indian company Lupin Ltd. , the United States Court of Appeals for the Federal Circuit reversed a district court trial verdict and found that Aventis's patent on ramipril was invalid for "obviousness", opening this drug to generic manufacturers.
Ramipril is marketed as Prilace by Arrow Pharmaceuticals in Australia, Ramipro by Westfield Pharma in the Philippines, Triatec by Sanofi-Aventis in Italy and United States and Altace by King Pharmaceuticals in the United States, Novapril by Pharmanova in Ghana, Ramitens by PharmaSwiss, Ampril by Krka in Slovenia, Corpril by Cemelog-BRS in Hungary, Piramil and Prilinda by Hemofarm in Serbia, by Lek in Poland and by Novartis and Opsonin Pharma Limited as Ramace in Bangladesh, and in Canada as Altace (Sanofi-Aventis) and Ramipril (Pharmascience).
Ramipril is marketed in India under the brand names Cardace, Zigpril, Ramistar, Odipril and Zorem . Ramipril is marketed in Myanmar under brand name Endpril .
The 2001 Heart Outcomes and Prevention Evaluation trial seemed to show ramipril possessed cardioprotective qualities which extended beyond its qualities as an antihypertensive. [ 22 ] [ 23 ] However, the trial and the interpretation of its results have been criticised. [ 24 ]
The Acute Infarction Ramipril Efficacy (AIRE) trial [ 17 ] [ 25 ] showed a 27% reduction in mortality for patients receiving ramipril for chronic heart failure following a myocardial infarction .
Ramipril was found to produce results similar to those of telmisartan, an angiotensin II receptor blocker. [ 26 ] | https://en.wikipedia.org/wiki/Ramipril |
Ramism was a collection of theories on rhetoric , logic , and pedagogy based on the teachings of Petrus Ramus , a French academic, philosopher, and Huguenot convert, who was murdered during the St. Bartholomew's Day massacre in August 1572. [ 1 ]
According to British historian Jonathan Israel :
"[Ramism], despite its crudity, enjoyed vast popularity in late sixteenth-century Europe, and at the outset of the seventeenth, providing as it did a method of systematizing all branches of knowledge, emphasizing the relevance of theory to practical applications [...]" [ 2 ]
Ramus was a cleric and professor of philosophy who gained notoriety first by his criticism of Aristotle and then by conversion to Protestantism . He was killed in the St Bartholomew's Day Massacre of 1572 , and a biography by Banosius (Théophile de Banos) appeared by 1576. [ 3 ] His status as Huguenot martyr certainly had something to do with the early dissemination of his ideas. [ 4 ] His ideas had influence in some (but not all) parts of Protestant Europe , strong in Germany and the Netherlands , and on Puritan and Calvinist theologians of England , Scotland , and in the American colonies of New England , via Puritan colonists on the Mayflower . [ 5 ]
He had little effect however on mainstream Swiss Calvinists, and was largely ignored in Catholic countries. [ 6 ] The progress of Ramism in the half-century from roughly 1575 to 1625 was closely related to, and mediated by, university education : the religious factor came in through the different reception in Protestant and Catholic universities, all over Europe. [ 7 ]
Outside France, for example, there was the 1574 English translation by the Scot Roland MacIlmaine of the University of St Andrews . [ 8 ] [ 9 ] Ramus's works and influence then appeared in the logical textbooks of the Scottish universities, and equally he had followers in England. [ 1 ]
Audomarus Talaeus ( Omer Talon ) was one early French disciple and writer on Ramism. [ 10 ] The work of Ramus gained early international attention, with Roger Ascham corresponding about him with Johann Sturm, teacher of Ramus and collaborator with Ascham; Ascham supported Ramus's stance against Joachim Perion, one early opponent, but also expressed some reservations. Later, Ascham found Ramus's lack of respect for Cicero, rather than the excesses of his proponents, simply unacceptable. [ 11 ]
As late as 1626, Francis Burgersdyk divides the logicians of his day into the Aristotelians, the Ramists and the Semi-Ramists. [ 1 ] [ 12 ] [ 13 ] These last endeavoured, like Rudolph Goclenius of Marburg and Amandus Polanus of Basel , to mediate between the contending parties. [ 1 ] Ramism was closely linked to systematic Calvinism , but the hybrid Philippo-Ramism (which is where the Semi-Ramists fit in) arose as a blend of Ramus with the logic of Philipp Melanchthon . [ 14 ]
Ramism, while in fashion, met with considerable hostility. The Jesuits were completely opposed. [ 15 ] The Calvinist Aristotelian Theodore Beza was also a strong opponent of Ramism. [ 16 ] Similarly the leading Lutheran Aristotelian philosopher Jakob Schegk resolutely rejected Ramus and opposed his visit to Tübingen . [ 17 ] In Heidelberg the efforts of Giulio Pace to teach Ramist dialectic to Polish private students were forbidden. [ 18 ]
Where universities were open to Ramist teaching, there still could be dislike and negative reactions, stemming from the perceived personality of Ramus (arrogant, a natural polemicist), or of that of his supporters (young men in a hurry). There was tacit adoption of some of the techniques such as the epitome, without acceptance of the whole package of reform including junking Aristotle in favour of the new textbooks, and making Ramus an authoritative figure. John Rainolds at Oxford was an example of an older academic torn by the issue; his follower Richard Hooker was firmly against "Ramystry". [ 19 ]
Gerhard Johann Vossius at Leiden wrote massive works on classical rhetoric and opposed Ramism. He defended and enriched the Aristotelian tradition for the seventeenth century. [ 20 ] He was a representative Dutch opponent; Ramism did not take permanent hold in the universities of the Netherlands, and once William Ames had died, it declined. [ 21 ]
Mid-century, Ramism was still under attack, from Cartesians such as Johannes Clauberg , who defended Aristotle against Ramus. [ 22 ]
Frances Yates proposed a subtle relationship of Ramism to the legacy of Lullism , the art of memory , and Renaissance hermetism . She considers that Ramism drew on Lullism, but is more superficial; was opposed to the classical art of memory; and moved in an opposite direction to the occult (reducing rather than increasing the role of images). [ 23 ] He "abandoned imagery and the creative imagination". [ 24 ] Mary Carruthers referred back to Albertus Magnus and Thomas Aquinas :
"It is one of those ironies of history that Peter Ramus, who, in the sixteenth century, thought he was reacting against Aristotelianism by taking memoria from rhetoric and making it part of dialectic, was essentially remaking a move made 300 years before by two Dominican professors who were attempting to reshape memorial study in conformity with Aristotle." [ 25 ]
An alternative to this aspect of Ramism, as belated and diminishing, is the discussion initiated by Walter Ong of Ramus in relation to several evolutionary steps. Ong's position, on the importance of Ramus as historical figure and humanist , has been summed up as the center of controversies about method (both in teaching and in scientific discovery) and about rhetoric and logic and their role in communication . [ 26 ]
The best known of Ong's theses is Ramus the post- Gutenberg writer, in other words the calibration of the indexing and schematics involved in Ramism to the transition away from written manuscripts, and the spoken word. [ 27 ] Extensive charts were instead used, drawing on the resources of typography, to organise material, from left to right across a printed page, particularly in theological treatises. [ 28 ] The cultural impact of Ramism depended on the nexus of printing (trees regularly laid out with braces ) and rhetoric, forceful and persuasive at least to some Protestants ; and it had partly been anticipated in cataloguing and indexing knowledge and its encyclopedism by Conrad Gesner . [ 29 ] The term Ramean tree became standard in logic books, applying to the classical Porphyrian tree , or any binary tree , without clear distinction between the underlying structure and the way of displaying it; now scholars use the clearer term Ramist epitome to signify the structure. Ong argued that, a chart being a visual aid and logic having come down to charts, the role of voice and dialogue is placed squarely and rigidly in the domain of rhetoric, and in a lower position. [ 30 ]
Two other theses of Ong on Ramism are: the end of copia or profuseness for its own sake in writing, making Ramus an opponent of the Erasmus of Copia: Foundations of the Abundant Style ; and the beginning of the later Cartesian emphasis on clarity. Ong, though, consistently argues that Ramus is thin, insubstantial as a scholar, a beneficiary of fashion supported by the new medium of printing, as well as a transitional figure. [ 31 ]
These ideas, from the 1950s and 1960s onwards, have been reconsidered. Brian Vickers summed up the view a generation or so later: dismissive of Yates, he notes that bracketed tables existed in older manuscripts, and he finds Ong's emphases unconvincing. Further, methodus , the Ramists' major slogan, was specific to figures of speech , deriving from Hermogenes of Tarsus via George of Trebizond . And the particular moves used by Ramus in the reconfiguration of rhetoric were in no sense innovative by themselves. [ 32 ] Lisa Jardine agrees with Ong that Ramus was not a first-rank innovator, more a successful textbook writer adapting earlier insights centred on topics-logic , but insists on his importance and influence in humanistic logic . She takes the Ramean tree to be a "voguish" pedagogic advance. [ 33 ]
It has been said that:
Puritans believed the maps proved well suited to rationalize and order the Christian view of revealed truth and the language and knowledge of the new learning , specifically the scientific and philosophical paradigms arising out of the Renaissance. [ 34 ]
Donald R. Kelley writes of the "new learning" ( nova doctrina ) or opposition in Paris to traditional scholasticism as a "trivial revolution", i.e. growing out of specialist teachers of the trivium . He argues that:
The aim was a fundamental change of priorities, the transformation of hierarchy of disciplines into a 'circle' of learning, an 'encyclopedia' embracing human culture in all of its richness and concreteness and organized for persuasive transmission to society as a whole. This was the rationale of the Ramist method, which accordingly emphasized mnemonics and pedagogical technique at the expense of discovery and the advancement of learning. [ 35 ]
The need for demarcation was seen in "redundancies and overlapping categories". [ 36 ]
This was taken to the lengths where it could be mocked in the Port-Royal Logic (1662). There, the authors claimed that "everything that is useful to logic belongs to it", with a swipe at the "torments" the Ramists put themselves through. [ 37 ]
The method of demarcation was applied within the trivium , made up of grammar , logic (for which Ramists usually preferred a traditional name, dialectic ), and rhetoric . Logic falls, according to Ramus, into two parts: invention (treating of the notion and definition) and judgment (comprising the judgment proper, syllogism and method). [ 1 ] In this he was influenced by Rodolphus Agricola . [ 38 ] What Ramus does here in fact redefines rhetoric. There is a new configuration, with logic and rhetoric each having two parts: rhetoric was to cover elocutio (mainly figures of speech) and pronuntiatio (oratorical delivery). In general, Ramism liked to deal with binary trees as method for organising knowledge. [ 39 ]
Rhetoric, traditionally, had had five parts, of which inventio (invention) was the first. Two others were dispositio (arrangement) and memoria (memory). Ramus proposed transferring those back to the realm of dialectic (logic), merging them under a new heading and renaming them iudicium (judgment). [ 40 ] This was the final effect; at an intermediate stage, memoria was left with rhetoric. [ citation needed ]
In the end the art of memory was diminished in Ramism, displaced by an idea of "method": better mental organisation would be more methodical, and mnemonic techniques drop away. This was a step in the direction of Descartes . The construction of disciplines, for Ramus, was subject to some laws, his methodus . There were three, with clear origins in Aristotle, and his Posterior Analytics . [ citation needed ]
They comprised the lex veritatis (French du tout , law of truth), lex justitiae ( par soi , law of justice), and lex sapientiae ( universalité , or law of wisdom). The third was in the terms of Ramus "universel premièrement", or to make the universal the first instance. The "wisdom" is therefore to start with the universal, and set up a ramifying binary tree by subdivision. [ 41 ] [ 42 ]
As Ramism evolved, these characteristic binary trees, set up rigidly, were treated differently in various fields. In theology, for example, this procedure was turned on its head, since the search for God, the universal, would appear as the goal rather than the starting point. [ 43 ]
Émile Bréhier wrote that after Ramus, "order" as a criterion of the methodical had become commonplace; Descartes needed only to supply to method the idea of relation, exemplified by the idea of a mathematical sequence based on a functional relationship of an element to its successor. [ 44 ] Therefore, for Cartesians, the Ramist insights were quite easily absorbed. [ citation needed ]
For the Baconian method , on the other hand, the rigidity of Ramist distinctions was a serious criticism. Francis Bacon , a Cambridge graduate, was early aware of Ramism, but the near-equation of dispositio with method was unsatisfactory, for Baconians, because arrangement of material was seen to be inadequate for research. The Novum Organum implied in its title a further reform of Aristotle, and its aphorism viii of Book I made this exact point. [ 45 ]
A Ramist tradition took root in Christ's College, Cambridge in the 1570s, when Laurence Chaderton became the leading Ramist, and Gabriel Harvey lectured on the rhetoric of Ramus. [ 46 ] [ 47 ] Marshall McLuhan 's dissertation on Thomas Nashe (via the classical trivium ), who was involved in a high-profile literary quarrel with Harvey, was shaped by his interest in aligning Harvey with dialectic and the plain style (logic in the sense of Ramus), and Nashe with the full resources of Elizabethan rhetoric. [ 48 ] After Chaderton, there was a succession of important theologians using Ramist logic, including William Perkins , [ 49 ] and William Ames (Amesius), [ 50 ] who made Ramist dialectic integral to his approach. [ citation needed ]
William Temple annotated a 1584 reprint of the Dialectics in Cambridge. [ 51 ] Known as an advocate of Ramism, and involved in controversy with Everard Digby of Oxford, [ 52 ] he became secretary to Sir Philip Sidney about a year later, in 1585. [ 53 ] Temple was with Sidney when he died in 1586, and wrote a Latin Ramist commentary on An Apology for Poetry . [ 54 ] Sidney himself is supposed to have learned Ramist theory from John Dee , and was the dedicatee of the biography by Banosius, but was not in any strict sense a Ramist. [ 55 ]
This Ramist school was influential:
The Ramist system was introduced into Cambridge University by Sir William Temple, in 1580, and contributed to the growth of Cambridge Platonism . It became the basis of Congregational apologetics. The Cambridge Puritans were represented by Alexander Richardson , George Downame , Anthony Wotton , and especially by William Ames, whose writings became the favorite philosophy texts of early New England. In 1672, the same year in which Ames's edition of Ramus's Dialectics with Commentary appeared, Milton published his Institutions of the Art of Logic Based on the Method of Peter Ramus. Other Puritan divines who popularized the Ramist philosophy and Covenant Theology were William Perkins, John Preston , and Thomas Hooker . [ 56 ]
Christopher Marlowe encountered Ramist thought as a student at Cambridge (B.A. in 1584), and made Peter Ramus a character in The Massacre at Paris . He also cited Ramus in Dr. Faustus : Bene disserere est finis logices is a line given to Faustus, who states it is from Aristotle , when it is from the Dialecticae of Ramus. [ 57 ] [ 58 ]
There is a short treatise by John Milton , who was a student at Christ's from 1625, published two years before his death, called Artis Logicae Plenior Institutio ad Petri Rami Methodum concinnata . [ 1 ] [ 59 ] It was one of the last commentaries on Ramist logic. [ 60 ] Although composed in the 1640s, it was not published until 1672. Milton, whose first tutor at Christ's William Chappell used Ramist method, [ 61 ] can take little enough credit for the content. Most of the text proper is adapted from the 1572 edition of Ramus's logic; most of the commentary is adapted from George Downham 's Commentarii in P. Rami Dialecticam (1601) [ 62 ] —Downham, also affiliated with Christ's, was a professor of logic at Cambridge. [ 63 ] The biography of Ramus is a cut-down version of that of Johann Thomas Freigius (1543–83). [ 64 ]
Herborn Academy in Germany was founded in 1584, as a Protestant university, and initially was associated with a group of Reformed theologians who developed covenant theology . It was also a centre of Ramism, and in particular of its encyclopedic form. In turn, it was the birthplace of pansophism . [ 65 ] Heinrich Alsted taught there, and John Amos Comenius studied with him. [ citation needed ]
Ramism was built into the curriculum, with the professors required to give Ramist treatments of the trivium . Johannes Piscator anticipated the foundation in writing introductory Ramist texts, Johannes Althusius and Lazarus Schöner likewise wrote respectively on social science topics and mathematics, and Piscator later produced a Ramist theology text. [ 66 ]
Brian Vickers argues that the Ramist influence did add something to rhetoric: it concentrated more on the remaining aspect of elocutio or effective use of language, and emphasised the role of vernacular European languages (rather than Latin). The outcome was that rhetoric was applied in literature. [ 67 ]
In 1588 Abraham Fraunce , a protégé of Philip Sidney, published Arcadian Rhetorike , a Ramist-style rhetoric book cut down largely to a discussion of figures of speech (in prose and verse), and referring by its title to Sidney's Arcadia . It was based on a translation of Talon's Rhetoricae , and was a companion to The Lawiers Logike of 1585, an adapted translation of the Dialecticae of Ramus. Through it, Sidney's usage of figures was disseminated as the Ramist "Arcadian rhetoric" of standard English literary components and ornaments, before the source Arcadia had been published. It quickly lent itself to floridity of style. William Wimsatt and Cleanth Brooks consider that the Ramist reform at least created a tension between the ornamented and the plain style (of preachers and scientific scholars), into the seventeenth century, and contributed to the emergence of the latter. [ 68 ] With the previous work of Dudley Fenner (1584), and the later book of Charles Butler (1598), Ramist rhetoric in Elizabethan England accepts the reduction to elocutio and pronuntiatio , puts all the emphasis on the former, and reduces its scope to the trope . [ 69 ]
Geoffrey Hill classified Robert Burton 's Anatomy of Melancholy (1621) as a "post-Ramist anatomy ". It is a work (he says against Ong) of a rooted scholar with a "method" but turning Ramism back on itself. [ 70 ]
Samuel Taylor Coleridge combined Aristotelian logic with the Holy Trinity to create his "cinque spotted spider making its way upstream by fits & starts," his logical system based on Ramist logic (thesis, antithesis, synthesis, mesothesis, exothesis). [ 71 ] | https://en.wikipedia.org/wiki/Ramism |
The term ramogen refers to a biological factor, typically a growth factor or other protein , that causes a developing biological cell or tissue to branch in a tree-like manner. Ramogenic molecules are branch-promoting molecules found throughout the human body. [ 1 ]
Brief History
The term was first coined (from the Latin ramus = branch and the Greek genesis = creation) in an article about kidney development by Davies and Davey (Pediatr Nephrol. 1999 Aug;13(6):535-41). In the article, Davies and Davey describe glial cell line-derived neurotrophic factor, neurturin and persephin as "ramogens" in the kidney. [ 2 ] The term has now passed into general use in the technical literature concerned with branching of biological structures.
Function
A ramogen is a biochemical signal that enables the creation of a physiological branch. The signal can be in the form of a growth factor or a hormone that makes a tube branch. One specific example is hormonal signalling acting on the simple tube from which the mammary glands begin to develop, causing the formation of a highly branched "tree" of milk ducts in females. [ 3 ]
Types of Ramogens
Mesenchyme -derived ramogens are found throughout the body and serve as chemoattractants to branching tissues.
An example of how this works is found in a study using a bead soaked in the renal ramogen GDNF. When the bead was placed next to a kidney sample in culture, the nearby ureteric parts branched and grew toward it. [ 4 ]
Another example of a ramogen in use is found in the lungs, where Sprouty2 is expressed in response to signalling by the ramogen FGF10 and serves as an inhibitor of branching. [ 5 ]
The following table lists key ramogens in the branching organs of the mouse. [ 6 ]
Studies involving Ramogens
The physiological capabilities of ramogens are still being investigated in medical studies of kidney function in mice.
In development, maturing nephrons and stroma may cease to produce ramogens and may begin to secrete anti-ramogenic factors, such as Bmp2 and Tgfβ. [ 7 ]
The pattern of branching and the rate of cell proliferation can contribute to the shape of different organs. As such, glial cell line-derived neurotrophic factor (GDNF) has been found to contribute to ureteric tissues. [ 8 ]
The implication of this is that the introduction of ramogens to the body can promote cell repair through the creation of side branches induced by ramogenic signals. [ 9 ]
This is evidenced by studies demonstrating that ureteric stalks were capable of forming new tips if provided with fresh mesenchyme or with a Matrigel artificially loaded with ramogens, such as GDNF and FGF1. The ramogens used in this study were manufactured with fresh mesenchyme. [ 10 ] | https://en.wikipedia.org/wiki/Ramogen |
Ramon Torrecillas (Oviedo, 30 August 1963) is a Spanish physicist and materials scientist internationally recognized for his research in the fields of nanomaterials and biomaterials. Since December 2019 he has been Head of the Brussels Office of the Spanish National Research Council (CSIC), [ 1 ] which holds the institutional representation of the CSIC before the institutions of the EU and other relevant organizations and forums.
He has published more than 220 scientific articles and book chapters [ 2 ] [ 3 ] and filed 19 patents. [ 4 ]
He received his bachelor's degree in physics in 1986 from the University of Zaragoza. Afterwards he moved to the Institut National des Sciences Appliquées of Lyon (INSA-Lyon) in France, where he started his research on the thermomechanical properties of advanced ceramics. In 1991 he received a PhD in physics from the National Distance Education University (supervised by J. S. Moya and S. de Aza) and became director of the Ceramics and Refractories department of the Instituto Tecnológico de Materiales de Asturias, a Spanish technological research center, a position he held until 1994. In December of that same year he obtained his PhD in Materials Engineering from the Institut National des Sciences Appliquées of Lyon with a thesis titled "Mechanical Behavior of Mullite and Mullite-Zirconia Composites Obtained by Reactive Sintering". [ 5 ]
In 1994 he joined the National Institute of Coal (INCAR) belonging to the Spanish National Research Council (CSIC) where he established and headed until 2008 the Department of Nanostructured Ceramics. In 2008 he became full research Professor of the CSIC and was appointed founding director of the Nanomaterials and Nanotechnology Research Center (CINN).
In 2009 he was appointed general manager of the Asturian Materials Technology Center (ITMA), [ 6 ] holding this position until 2011 alongside the managing directorship of the CINN.
In 2011 he founded the company NANOKER Research SL, which manufactures advanced technical ceramics, nanomaterials and nanocomposites for optical and biomedical applications and for use in extreme conditions. [ 7 ]
In December 2019 he was appointed Delegate of the Spanish National Research Council to the EU in Brussels.
He has led some of the most relevant European projects in the fields of Biomaterials and Nanomaterials. | https://en.wikipedia.org/wiki/Ramon_Torrecillas |
The Ramsauer–Townsend effect , also sometimes called the Ramsauer effect or the Townsend effect , is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas . This effect is a result of quantum mechanics . The effect is named for Carl Ramsauer and John Sealy Townsend , who each independently studied the collisions between atoms and low-energy electrons in 1921.
When an electron moves through a gas, its interactions with the gas atoms cause scattering to occur. These interactions are classified as inelastic if they cause excitation or ionization of the atom to occur and elastic if they do not.
The probability of scattering in such a system is defined as the number of electrons scattered, per unit electron current, per unit path length, per unit pressure at 0 °C, per unit solid angle . The number of collisions equals the total number of electrons scattered elastically and inelastically in all angles, and the probability of collision is the total number of collisions, per unit electron current, per unit path length, per unit pressure at 0 °C.
Because noble gas atoms have a relatively high first ionization energy and the electrons do not carry enough energy to cause excited electronic states, ionization and excitation of the atom are unlikely, and the probability of elastic scattering over all angles is approximately equal to the probability of collision.
If one tries to predict the probability of collision with a classical model that treats the electron and atom as hard spheres , one finds that the probability of collision should be independent of the incident electron energy. [ 1 ] However, Ramsauer and Townsend independently observed [ 2 ] [ 3 ] that for slow-moving electrons in argon , krypton , or xenon , the probability of collision between the electrons and gas atoms reaches a minimum for electrons with a certain kinetic energy (about 1 electron volt for xenon gas [ 4 ] ). [ 5 ]
No good explanation for the phenomenon existed until the introduction of quantum mechanics , which explains that the effect results from the wave-like properties of the electron. A simple model of the collision that makes use of wave theory can predict the existence of the Ramsauer–Townsend minimum. Niels Bohr presented a simple model for the phenomenon that considers the atom as a finite square potential well . [ 6 ] [ 7 ]
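A minimal one-dimensional sketch of this idea, assuming an electron scattering off a finite square well: the well becomes almost perfectly transparent whenever an integer number of half-wavelengths fits inside it, the 1-D analogue of the Ramsauer–Townsend minimum in the collision probability. The well depth and width below are illustrative assumptions (chosen so the first transparency falls near 1 eV, the figure quoted above for xenon), not fitted parameters for a real atom, and the real effect is a three-dimensional s-wave phase-shift phenomenon.

```python
# 1-D toy model of Ramsauer-Townsend transparency: transmission over a finite
# square well.  V0 and L are illustrative assumptions, not real atomic values.
import numpy as np

HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # kg, electron mass
EV = 1.602176634e-19      # J per eV

V0 = 8.4 * EV             # assumed well depth
L = 2.0e-10               # assumed well width (2 angstrom)

def transmission(E):
    """Transmission coefficient for an electron of energy E (in joules)."""
    k2 = np.sqrt(2.0 * M_E * (E + V0)) / HBAR          # wavenumber inside the well
    return 1.0 / (1.0 + V0**2 * np.sin(k2 * L)**2 / (4.0 * E * (E + V0)))

E = np.linspace(0.05, 5.0, 2000) * EV
T = transmission(E)

# Near-perfect transmission (the transparency) occurs where k2 * L = n * pi.
print(f"Maximum transmission T = {T.max():.4f} at about {E[np.argmax(T)] / EV:.2f} eV")

# Closed-form check of the first resonance: E_1 = (pi*hbar/L)**2 / (2*m) - V0
E1 = (np.pi * HBAR / L) ** 2 / (2.0 * M_E) - V0
print(f"Resonance condition k2*L = pi predicts {E1 / EV:.2f} eV")
```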
Predicting from theory the kinetic energy that will produce a Ramsauer–Townsend minimum is quite complicated since the problem involves understanding the wave nature of particles. However, the problem has been extensively investigated both experimentally and theoretically and is well understood. [ 8 ] | https://en.wikipedia.org/wiki/Ramsauer–Townsend_effect |
Ramsay , also referred to as Ramsay Malware , is a cyber espionage framework and toolkit that was discovered by ESET Research in 2020. [ 1 ]
Ramsay is specifically tailored for Windows systems on networks that are not connected to the internet and that are also isolated from company intranets, so-called air-gapped networks, from which it steals sensitive documents such as Word documents after first collecting them in a hidden storage folder. [ 2 ] [ 3 ]
ESET researchers found various versions of the malware, and believe that in May 2020 it was still under development. They numbered the versions Ramsay Version 1, Ramsay Version 2a and Ramsay Version 2b. The very first encounter with the malware was a sample uploaded from Japan to VirusTotal . The first version was compiled in September 2019. The last version that they found was the most advanced. [ 1 ]
The discovery of Ramsay was seen as significant as malware is rarely able to target physically isolated devices. [ 4 ]
While authorship has not been attributed, the malware shares many artefacts with Retro, a backdoor by the hacking group Darkhotel , believed to operate in the interests of South Korea . [ 5 ]
The three versions of Ramsay that ESET found have different workings.
Ramsay version 1 does not include a rootkit , whilst the later versions do.
Ramsay version 1 and 2.b exploit CVE-2017-0199, a "Microsoft Office/WordPad Remote Code Execution Vulnerability w/Windows API." [ 6 ]
Version 2.b also uses exploit CVE-2017-11882 as an attack vector . [ 2 ]
The way in which Ramsay can spread is via removable media like USB sticks and network shares. In this way, the malware can jump the air gap. [ 3 ] | https://en.wikipedia.org/wiki/Ramsay_Malware |
Ramsay grease is a vacuum grease , used as a lubrication and a sealant of ground glass joints and cocks on laboratory glassware , e.g. burettes . It is usable to about 10 −2 mbar (about 1 Pa) and about 30 °C. [ 1 ] Its vapor pressure at 20 °C is about 10 −4 mbar (0.01 Pa). [ 2 ] It is named after Sir William Ramsay . [ 3 ]
Different grades exist (e.g. thick or viscous, soft). The viscous one is used for standard stopcocks and ground joints. The soft grade is for large stopcocks and ground joints, desiccators , and for lower temperature use. Ramsay grease consists of paraffin wax , petroleum jelly , and crude natural rubber , in ratio 1:3:7 to 1:8:16. Due to the rubber content it has less tendency to flow. [ 4 ]
One recipe for a grease usable up to 25 °C consists of 6 parts of petroleum jelly, 1 part of paraffin wax, and 6 parts of Pará rubber . [ 5 ]
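As an aside, the parts-based recipes above convert to weight fractions in the obvious way; the short sketch below does that arithmetic for the 6:1:6 recipe, with the 100 g batch size an arbitrary assumption.

```python
# Convert the quoted 6:1:6 parts recipe (petroleum jelly : paraffin wax : Para
# rubber) into component masses.  The batch size is an arbitrary assumption;
# only the ratio comes from the text above.
recipe_parts = {"petroleum jelly": 6, "paraffin wax": 1, "Para rubber": 6}
batch_mass_g = 100.0  # assumed batch size

total_parts = sum(recipe_parts.values())
for component, parts in recipe_parts.items():
    mass = batch_mass_g * parts / total_parts
    print(f"{component}: {mass:.1f} g ({100.0 * parts / total_parts:.1f} %)")
```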
The dropping point of Leybold-brand Ramsay grease is 56 °C; its maximum service temperature is 25-30 °C. Its vapor pressure at 25 °C is 10 −7 torr (0.013 mPa), at 38 °C it is 10 −4 torr (13 mPa). [ 6 ]
An equivalent of Ramsay grease can be made by cooking lanolin with natural rubber extracted from golf balls . [ 7 ] | https://en.wikipedia.org/wiki/Ramsay_grease |
Ramsbottom carbon residue ( RCR ) is well known in the petroleum industry as a method to calculate the carbon residue of a fuel. The carbon residue value is considered by some to give an approximate indication of the combustibility and deposit forming tendencies of the fuel. [ 1 ]
The Ramsbottom test is used to measure carbon residues of an oil. In brief, the carbon residue of a fuel is the tendency to form carbon deposits under high temperature conditions in an inert atmosphere. This is an important value for the crude oil refinery, and usually one of the measurements in a crude oil assay . Carbon residue is an important measurement for the feed to the refinery process fluid catalytic cracking and delayed coking .
There are three methods to calculate this carbon residue. It may be expressed as Ramsbottom carbon residue (RCR), Conradson carbon residue (CCR) or micro carbon residue (MCR). Numerically, the CCR value is the same as that of MCR.
Sometimes the carbon residue value can be listed as residual carbon content , RCC, which is normally the same as MCR/CCR.
For the test, 4 grams of the sample are put into a weighed glass bulb. The sample in the bulb is heated in a bath at 553°C for 20 minutes. After cooling the bulb is weighed again and the difference noted.
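The result is normally reported as the residue mass expressed as a weight percentage of the original sample. A minimal sketch of that arithmetic follows; the masses are made up for illustration, and the full procedure (apparatus, heating profile, tolerances) is defined by the governing test standard rather than by this calculation.

```python
# Weight-percent arithmetic behind the Ramsbottom carbon residue result:
# residue mass after heating, expressed as a percentage of the sample mass.
def ramsbottom_carbon_residue(sample_mass_g: float, residue_mass_g: float) -> float:
    """Return the carbon residue as weight percent of the original sample."""
    return 100.0 * residue_mass_g / sample_mass_g

# Illustrative (made-up) numbers: a 4 g sample leaving 0.06 g of residue.
print(f"RCR = {ramsbottom_carbon_residue(4.0, 0.06):.2f} wt%")  # -> RCR = 1.50 wt%
```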
| https://en.wikipedia.org/wiki/Ramsbottom_carbon_residue |
In combinatorics , Ramsey's theorem , in one of its graph-theoretic forms, states that one will find monochromatic cliques in any edge labelling (with colours) of a sufficiently large complete graph . To demonstrate the theorem for two colours (say, blue and red), let r and s be any two positive integers . [ a ] Ramsey's theorem states that there exists a least positive integer R ( r , s ) for which every blue-red edge colouring of the complete graph on R ( r , s ) vertices contains a blue clique on r vertices or a red clique on s vertices. (Here R ( r , s ) signifies an integer that depends on both r and s .)
Ramsey's theorem is a foundational result in combinatorics. The first version of this result was proved by Frank Ramsey . This initiated the combinatorial theory now called Ramsey theory , that seeks regularity amid disorder: general conditions for the existence of substructures with regular properties. In this application it is a question of the existence of monochromatic subsets , that is, subsets of connected edges of just one colour.
An extension of this theorem applies to any finite number of colours, rather than just two. More precisely, the theorem states that for any given number of colours, c , and any given integers n 1 , …, n c , there is a number, R ( n 1 , …, n c ) , such that if the edges of a complete graph of order R ( n 1 , …, n c ) are coloured with c different colours, then for some i between 1 and c , it must contain a complete subgraph of order n i whose edges are all colour i . The special case above has c = 2 (and n 1 = r and n 2 = s ).
Suppose the edges of a complete graph on 6 vertices are coloured red and blue. Pick a vertex, v . There are 5 edges incident to v and so (by the pigeonhole principle ) at least 3 of them must be the same colour. Without loss of generality we can assume at least 3 of these edges, connecting the vertex, v , to vertices, r , s and t , are blue. (If not, exchange red and blue in what follows.) If any of the edges, ( rs ) , ( rt ) , ( st ) , are also blue then we have an entirely blue triangle. If not, then those three edges are all red and we have an entirely red triangle. Since this argument works for any colouring, any K 6 contains a monochromatic K 3 , and therefore R (3, 3) ≤ 6 . The popular version of this is called the theorem on friends and strangers .
An alternative proof works by double counting . It goes as follows: Count the number of ordered triples of vertices, x , y , z , such that the edge, ( xy ) , is red and the edge, ( yz ) , is blue. Firstly, any given vertex will be the middle of either 0 × 5 = 0 (all edges from the vertex are the same colour), 1 × 4 = 4 (four are the same colour, one is the other colour), or 2 × 3 = 6 (three are the same colour, two are the other colour) such triples. Therefore, there are at most 6 × 6 = 36 such triples. Secondly, for any non-monochromatic triangle ( xyz ) , there exist precisely two such triples. Therefore, there are at most 18 non-monochromatic triangles. Therefore, at least 2 of the 20 triangles in the K 6 are monochromatic.
Conversely, it is possible to 2-colour a K 5 without creating any monochromatic K 3 , showing that R (3, 3) > 5 . The unique [ b ] colouring is shown to the right. Thus R (3, 3) = 6 .
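Because K 6 has only 2^15 = 32,768 red/blue edge colourings, both halves of this argument can also be checked directly by brute force; a short sketch in Python follows, using the cyclic colouring of K 5 described above as the counterexample.

```python
# Brute-force check of R(3,3) = 6: every 2-colouring of K6 contains a
# monochromatic triangle, while K5 admits a colouring with none
# (5-cycle edges red, diagonals blue).
from itertools import combinations, product

def has_mono_triangle(n, colour):
    """colour maps each edge (i, j) with i < j to 0 (red) or 1 (blue)."""
    return any(
        colour[(a, b)] == colour[(b, c)] == colour[(a, c)]
        for a, b, c in combinations(range(n), 3)
    )

# All 2**15 colourings of K6 contain a monochromatic K3.
edges6 = list(combinations(range(6), 2))
assert all(
    has_mono_triangle(6, dict(zip(edges6, bits)))
    for bits in product((0, 1), repeat=len(edges6))
)

# The cyclic colouring of K5 (cycle edges red, diagonals blue) has none, so R(3,3) > 5.
edges5 = list(combinations(range(5), 2))
cyclic = {(i, j): 0 if (j - i) % 5 in (1, 4) else 1 for i, j in edges5}
assert not has_mono_triangle(5, cyclic)
print("Verified: R(3,3) = 6")
```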
The task of proving that R (3, 3) ≤ 6 was one of the problems of the William Lowell Putnam Mathematical Competition in 1953, as well as in the Hungarian Math Olympiad in 1947.
A multicolour Ramsey number is a Ramsey number using 3 or more colours. There are (up to symmetries) only two non-trivial multicolour Ramsey numbers for which the exact value is known, namely R (3, 3, 3) = 17 and R (3, 3, 4) = 30 . [ 1 ]
Suppose that we have an edge colouring of a complete graph using 3 colours, red, green and blue. Suppose further that the edge colouring has no monochromatic triangles. Select a vertex v . Consider the set of vertices that have a red edge to the vertex v . This is called the red neighbourhood of v . The red neighbourhood of v cannot contain any red edges, since otherwise there would be a red triangle consisting of the two endpoints of that red edge and the vertex v . Thus, the induced edge colouring on the red neighbourhood of v has edges coloured with only two colours, namely green and blue. Since R (3, 3) = 6 , the red neighbourhood of v can contain at most 5 vertices. Similarly, the green and blue neighbourhoods of v can contain at most 5 vertices each. Since every vertex, except for v itself, is in one of the red, green or blue neighbourhoods of v , the entire complete graph can have at most 1 + 5 + 5 + 5 = 16 vertices. Thus, we have R (3, 3, 3) ≤ 17 .
To see that R (3, 3, 3) = 17 , it suffices to draw an edge colouring on the complete graph on 16 vertices with 3 colours that avoids monochromatic triangles. It turns out that there are exactly two such colourings on K 16 , the so-called untwisted and twisted colourings. Both colourings are shown in the figures to the right, with the untwisted colouring on the left, and the twisted colouring on the right.
If we select any colour of either the untwisted or twisted colouring on K 16 , and consider the graph whose edges are precisely those edges that have the specified colour, we will get the Clebsch graph .
It is known that there are exactly two edge colourings with 3 colours on K 15 that avoid monochromatic triangles, which can be constructed by deleting any vertex from the untwisted and twisted colourings on K 16 , respectively.
It is also known that there are exactly 115 edge colourings with 3 colours on K 14 that avoid monochromatic triangles, provided that we consider edge colourings that differ by a permutation of the colours as being the same.
The theorem for the 2-colour case can be proved by induction on r + s. [ 2 ] It is clear from the definition that for all n, R(n, 2) = R(2, n) = n. This starts the induction. We prove that R(r, s) exists by finding an explicit bound for it. By the inductive hypothesis R(r − 1, s) and R(r, s − 1) exist.
Lemma 1. R(r, s) ≤ R(r − 1, s) + R(r, s − 1).
Proof. Consider a complete graph on R(r − 1, s) + R(r, s − 1) vertices whose edges are coloured with two colours. Pick a vertex v from the graph, and partition the remaining vertices into two sets M and N, such that for every vertex w, w is in M if the edge (vw) is blue, and w is in N if (vw) is red. Because the graph has R(r − 1, s) + R(r, s − 1) = |M| + |N| + 1 vertices, it follows that either |M| ≥ R(r − 1, s) or |N| ≥ R(r, s − 1). In the former case, if M has a red K_s then so does the original graph and we are finished. Otherwise M has a blue K_{r−1}, and so M ∪ {v} has a blue K_r by the definition of M. The latter case is analogous. Thus the claim is true and we have completed the proof for 2 colours.
In this 2-colour case, if R(r − 1, s) and R(r, s − 1) are both even, the induction inequality can be strengthened to: [ 3 ] R(r, s) ≤ R(r − 1, s) + R(r, s − 1) − 1.
Proof. Suppose p = R(r − 1, s) and q = R(r, s − 1) are both even. Let t = p + q − 1 and consider a two-coloured graph of t vertices. If d_i is the degree of the i-th vertex in the blue subgraph, then by the Handshaking lemma, the sum d_1 + d_2 + ⋯ + d_t is even. Given that t is odd, there must be an even d_i. Assume without loss of generality that d_1 is even, and that M and N are the vertices incident to vertex 1 in the blue and red subgraphs, respectively. Then both |M| = d_1 and |N| = t − 1 − d_1 are even. By the Pigeonhole principle, either |M| ≥ p − 1 or |N| ≥ q. Since |M| is even and p − 1 is odd, the first inequality can be strengthened, so either |M| ≥ p or |N| ≥ q. Suppose |M| ≥ p = R(r − 1, s). Then either the M subgraph has a red K_s and the proof is complete, or it has a blue K_{r−1} which along with vertex 1 makes a blue K_r. The case |N| ≥ q = R(r, s − 1) is treated similarly.
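The recurrence of Lemma 1, together with the base cases R(n, 2) = R(2, n) = n, can be turned directly into explicit upper bounds; a minimal sketch follows. With these base cases the plain recursion reproduces exactly the binomial-coefficient bound of Erdős and Szekeres discussed below. The parity refinement just proved (the extra −1 when both smaller Ramsey numbers are even) is stated for the exact values, so it is not applied here; using it with known exact small values gives the sharper bounds R(4, 3) ≤ 9 and R(4, 4) ≤ 18 quoted later.

```python
# Upper bounds on two-colour Ramsey numbers from Lemma 1:
#   R(r, s) <= R(r-1, s) + R(r, s-1),  with  R(r, 2) = r and R(2, s) = s.
# With these base cases the recursion equals the binomial bound C(r+s-2, r-1).
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def ramsey_upper_bound(r, s):
    if r == 2:
        return s
    if s == 2:
        return r
    return ramsey_upper_bound(r - 1, s) + ramsey_upper_bound(r, s - 1)

for r, s in [(3, 3), (3, 4), (4, 4), (5, 5)]:
    bound = ramsey_upper_bound(r, s)
    assert bound == comb(r + s - 2, r - 1)
    print(f"R({r},{s}) <= {bound}")
# Prints: R(3,3) <= 6, R(3,4) <= 10, R(4,4) <= 20, R(5,5) <= 70.
```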
Lemma 2. If c > 2, then R(n_1, …, n_c) ≤ R(n_1, …, n_{c−2}, R(n_{c−1}, n_c)).
Proof. Consider a complete graph of R(n_1, …, n_{c−2}, R(n_{c−1}, n_c)) vertices and colour its edges with c colours. Now 'go colour-blind' and pretend that c − 1 and c are the same colour. Thus the graph is now (c − 1)-coloured. Due to the definition of R(n_1, …, n_{c−2}, R(n_{c−1}, n_c)), such a graph contains either a K_{n_i} mono-chromatically coloured with colour i for some 1 ≤ i ≤ c − 2, or a K_{R(n_{c−1}, n_c)} coloured in the 'blurred colour'. In the former case we are finished. In the latter case, we recover our sight again and see from the definition of R(n_{c−1}, n_c) that we must have either a (c − 1)-monochrome K_{n_{c−1}} or a c-monochrome K_{n_c}. In either case the proof is complete.
Lemma 1 implies that any R ( r , s ) is finite. The right hand side of the inequality in Lemma 2 expresses a Ramsey number for c colours in terms of Ramsey numbers for fewer colours. Therefore, any R ( n 1 , …, n c ) is finite for any number of colours. This proves the theorem.
The numbers R ( r , s ) in Ramsey's theorem (and their extensions to more than two colours) are known as Ramsey numbers. The Ramsey number R ( m , n ) gives the solution to the party problem, which asks the minimum number of guests, R ( m , n ) , that must be invited so that at least m will know each other or at least n will not know each other. In the language of graph theory, the Ramsey number is the minimum number of vertices, v = R ( m , n ) , such that all undirected simple graphs of order v , contain a clique of order m , or an independent set of order n . Ramsey's theorem states that such a number exists for all m and n .
By symmetry, it is true that R ( m , n ) = R ( n , m ) . An upper bound for R ( r , s ) can be extracted from the proof of the theorem, and other arguments give lower bounds. (The first exponential lower bound was obtained by Paul Erdős using the probabilistic method .) However, there is a vast gap between the tightest lower bounds and the tightest upper bounds. There are also very few numbers r and s for which we know the exact value of R ( r , s ) .
Computing a lower bound L for R ( r , s ) usually requires exhibiting a blue/red colouring of the graph K L −1 with no blue K r subgraph and no red K s subgraph. Such a counterexample is called a Ramsey graph . Brendan McKay maintains a list of known Ramsey graphs. [ 4 ] Upper bounds are often considerably more difficult to establish: one either has to check all possible colourings to confirm the absence of a counterexample, or to present a mathematical argument for its absence.
Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R (5, 5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R (6, 6) . In that case, he believes, we should attempt to destroy the aliens. [ 5 ]
A sophisticated computer program does not need to look at all colourings individually in order to eliminate all of them; nevertheless it is a very difficult computational task that existing software can only manage on small sizes. Each complete graph K_n has n(n − 1)/2 edges, so there would be a total of c^{n(n−1)/2} graphs to search through (for c colours) if brute force is used. [ 6 ] Therefore, the complexity for searching all possible graphs (via brute force ) is O(c^{n^2}) for c colourings and at most n nodes.
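A quick way to see why exhaustive enumeration is hopeless beyond very small cases is simply to evaluate this count for a few values of n with two colours:

```python
# Size of the brute-force search space c**(n*(n-1)/2) for c = 2 colours.
for n in (5, 6, 10, 17, 43):
    edges = n * (n - 1) // 2
    print(f"n = {n:2d}: 2-colourings = 2**{edges} ~ {2.0 ** edges:.3e}")
# Already at n = 43 (the best known lower bound for R(5,5)) there are about
# 6.8e271 colourings, far beyond any conceivable enumeration.
```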
The situation is unlikely to improve with the advent of quantum computers . One of the best-known searching algorithms for unstructured datasets exhibits only a quadratic speedup (cf. Grover's algorithm ) relative to classical computers, so that the computation time is still exponential in the number of nodes. [ 7 ] [ 8 ]
As described above, R (3, 3) = 6 . It is easy to prove that R (4, 2) = 4 , and, more generally, that R ( s , 2) = s for all s : a graph on s − 1 nodes with all edges coloured red serves as a counterexample and proves that R ( s , 2) ≥ s ; among colourings of a graph on s nodes, the colouring with all edges coloured red contains a s -node red subgraph, and all other colourings contain a 2-node blue subgraph (that is, a pair of nodes connected with a blue edge.)
Using induction inequalities and the handshaking lemma , it can be concluded that R(4, 3) ≤ R(4, 2) + R(3, 3) − 1 = 9, and therefore R(4, 4) ≤ R(4, 3) + R(3, 4) ≤ 18. There are only two (4, 4, 16) graphs (that is, 2-colourings of a complete graph on 16 nodes without 4-node red or blue complete subgraphs) among 6.4 × 10^22 different 2-colourings of 16-node graphs, and only one (4, 4, 17) graph (the Paley graph of order 17) among 2.46 × 10^26 colourings. [ 4 ] It follows that R(4, 4) = 18.
The fact that R (4, 5) = 25 was first established by Brendan McKay and Stanisław Radziszowski in 1995. [ 9 ]
The exact value of R (5, 5) is unknown, although it is known to lie between 43 (Geoffrey Exoo (1989) [ 10 ] ) and 46 (Angeltveit and McKay (2024) [ 11 ] ), inclusive.
In 1997, McKay, Radziszowski and Exoo employed computer-assisted graph generation methods to conjecture that R (5, 5) = 43 . They were able to construct exactly 656 (5, 5, 42) graphs, arriving at the same set of graphs through different routes. None of the 656 graphs can be extended to a (5, 5, 43) graph. [ 12 ]
For R ( r , s ) with r , s > 5 , only weak bounds are available. Lower bounds for R (6, 6) and R (8, 8) have not been improved since 1965 and 1972, respectively. [ 1 ]
R ( r , s ) with r , s ≤ 10 are shown in the table below. Where the exact value is unknown, the table lists the best known bounds. R ( r , s ) with r < 3 are given by R (1, s ) = 1 and R (2, s ) = s for all values of s .
The standard survey on the development of Ramsey number research is the Dynamic Survey 1 of the Electronic Journal of Combinatorics , by Stanisław Radziszowski , which is periodically updated. [ 1 ] [ 13 ] Where not cited otherwise, entries in the table below are taken from the June 2024 edition. (Note there is a trivial symmetry across the diagonal since R ( r , s ) = R ( s , r ) .)
It is also notable that Erdős showed
R(P_n, K_m) = (n − 1)(m − 1) + 1,
for a path and a complete graph with n and m vertices respectively. Also, Chvátal showed
R(T_n, K_m) = (n − 1)(m − 1) + 1,
for a tree and a complete graph with n and m vertices respectively. These two results are among the best examples of exact Ramsey number formulas for special classes of graphs.
The inequality R(r, s) ≤ R(r − 1, s) + R(r, s − 1) may be applied inductively to prove that R(r, s) ≤ C(r + s − 2, r − 1), where C denotes the binomial coefficient.
In particular, this result, due to Erdős and Szekeres , implies that when r = s, R(s, s) ≤ [1 + o(1)] 4^{s−1}/√(π(s−1)).
An exponential lower bound,
was given by Erdős in 1947 and was instrumental in his introduction of the probabilistic method. There is a huge gap between these two bounds: for example, for s = 10 , this gives 101 ≤ R (10, 10) ≤ 48,620 . Nevertheless, the exponential growth factors of either bound were not improved for a long time, and for the lower bound it still stands at √ 2 . There is no known explicit construction producing an exponential lower bound. The best known lower and upper bounds for diagonal Ramsey numbers are
due to Spencer and Conlon , respectively; a 2023 preprint by Campos, Griffiths, Morris and Sahasrabudhe claims to have made exponential progress using an algorithmic construction relying on a graph structure called a " book ", [ 17 ] [ 18 ] improving the upper bound to
with ε = 2^{−7} and δ = 50^{−1}.
A 2024 preprint [ 19 ] by Gupta, Ndiaye, Norin, and Wei claims an improvement of δ to 0.14e^{−1} (≥ 20^{−1}), and the diagonal Ramsey upper bound to
R(s, s) ≤ (4e^{−0.14e^{−1}})^{s+o(s)} = 3.7792...^{s+o(s)}
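The exponential lower bound of Erdős mentioned above rests on a counting (first-moment) argument: if C(n, s) · 2^{1 − C(s, 2)} < 1, then a uniformly random two-colouring of K_n contains, in expectation, fewer than one monochromatic K_s, so some colouring contains none and R(s, s) > n. The sketch below computes the largest n certified this way; for s = 10 it recovers the figure 101 ≤ R(10, 10) quoted earlier (the companion upper figure 48,620 is the binomial bound C(18, 9)). This is only the basic argument, not the sharper refinements due to Spencer.

```python
# Erdos's 1947 counting argument: if C(n, s) * 2**(1 - C(s, 2)) < 1, some
# red/blue colouring of K_n has no monochromatic K_s, hence R(s, s) > n.
from math import comb

def erdos_lower_bound(s):
    """Largest n with C(n, s) * 2**(1 - C(s, 2)) < 1; then R(s, s) >= n + 1."""
    n = s
    while comb(n + 1, s) < 2 ** (comb(s, 2) - 1):
        n += 1
    return n

for s in (5, 10):
    print(f"R({s},{s}) >= {erdos_lower_bound(s) + 1}")
# Prints R(5,5) >= 12 and R(10,10) >= 101.
```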
For the off-diagonal Ramsey numbers R(3, t), it is known that they are of order t^2/log t; this may be stated equivalently as saying that the smallest possible independence number in an n-vertex triangle-free graph is Θ(√(n log n)).
The upper bound for R (3, t ) is given by Ajtai , Komlós , and Szemerédi , [ 20 ] the lower bound was obtained originally by Kim , [ 21 ] and the implicit constant was improved independently by Fiz Pontiveros, Griffiths and Morris , [ 22 ] and Bohman and Keevash , [ 23 ] by analysing the triangle-free process.
In general, studying the more general " H -free process" has set the best known asymptotic lower bounds for general off-diagonal Ramsey numbers, [ 24 ] R ( s , t )
In particular this gives an upper bound of R(4, t) ≤ c_s t^3 (log t)^{−2}. Mattheus and Verstraete (2024) [ 25 ] [ 26 ] gave a lower bound of R(4, t) ≥ c′_s t^3 (log t)^{−4}, determining the asymptotics of R(4, t) up to logarithmic factors, and settling a question of Erdős, who offered 250 dollars for a proof that the lower limit has the form c′_s t^3 (log t)^{−d}. [ 27 ] [ 28 ]
The Ramsey numbers R(3, 8) and R(3, 9) have been formally verified to be 28 and 36. [ 29 ] This verification was achieved using a combination of Boolean satisfiability (SAT) solving and computer algebra systems (CAS). The proof was generated automatically using the SAT+CAS approach, marking the first certifiable proof of R(3, 8) = 28 and R(3, 9) = 36. The verification process for R(3, 8) and R(3, 9) was conducted using the SAT+CAS framework MathCheck, which integrates a SAT solver with a computer algebra system. The verification for R(3, 8) = 28 was completed in approximately 8 hours of wall clock time, producing a total proof size of 5.8 GiB. The verification for R(3, 9) = 36 was significantly more computationally intensive, requiring 26 hours of wall clock time and generating 289 GiB of proof data. The correctness of these results was independently verified using a modified version of the DRAT-trim proof checker. [ 30 ]
The Ramsey number R(4, 5) has been formally verified to be 25. [ 31 ] The original proof, developed by McKay and Radziszowski in 1995, combined high-level mathematical arguments with computational steps and used multiple independent implementations to reduce the possibility of programming errors. The formal proof was carried out using the HOL4 interactive theorem prover, limiting the potential for errors to the HOL4 kernel. Rather than directly verifying the original algorithms, the authors utilized HOL4's interface to the MiniSat SAT solver to formally prove key gluing lemmas.
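For readers unfamiliar with how SAT solving enters such results, the generic encoding is straightforward: one Boolean variable per edge of K_n (true = red), a clause for every r-subset of vertices forbidding an all-red K_r, and a clause for every s-subset forbidding an all-blue K_s; the resulting formula is satisfiable exactly when n < R(r, s). The sketch below emits this encoding in DIMACS CNF form. It shows only the textbook encoding, not the instrumented encodings actually used by MathCheck or in the HOL4 verification described above.

```python
# Generic SAT encoding of "K_n admits a 2-colouring with no red K_r and no
# blue K_s" in DIMACS CNF form.  Satisfiable iff n < R(r, s).
from itertools import combinations

def ramsey_cnf(n, r, s):
    edges = list(combinations(range(n), 2))
    var = {e: i + 1 for i, e in enumerate(edges)}      # 1-based variable per edge
    clauses = []
    for clique in combinations(range(n), r):           # forbid all-red K_r
        clauses.append([-var[e] for e in combinations(clique, 2)])
    for clique in combinations(range(n), s):           # forbid all-blue K_s
        clauses.append([var[e] for e in combinations(clique, 2)])
    return len(edges), clauses

num_vars, clauses = ramsey_cnf(5, 3, 3)                # satisfiable, since R(3,3) = 6 > 5
print(f"p cnf {num_vars} {len(clauses)}")
for clause in clauses:
    print(" ".join(map(str, clause + [0])))
```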
There is a less well-known yet interesting analogue of Ramsey's theorem for induced subgraphs . Roughly speaking, instead of finding a monochromatic subgraph, we are now required to find a monochromatic induced subgraph. In this variant, it is no longer sufficient to restrict our focus to complete graphs , since a complete graph contains no induced copy of any non-complete graph. The qualitative statement of the theorem in the next section was first proven independently by Erdős , Hajnal and Pósa , Deuber and Rödl in the 1970s. [ 32 ] [ 33 ] [ 34 ] Since then, there has been much research in obtaining good bounds for induced Ramsey numbers.
Let H be a graph on n vertices. Then, there exists a graph G such that any coloring of the edges of G using two colors contains a monochromatic induced copy of H (i.e. an induced subgraph of G such that it is isomorphic to H and its edges are monochromatic). The smallest possible number of vertices of G is the induced Ramsey number r ind ( H ) .
Sometimes, we also consider the asymmetric version of the problem. We define r ind ( X , Y ) to be the smallest possible number of vertices of a graph G such that every coloring of the edges of G using only red or blue contains a red induced subgraph of X or blue induced subgraph of Y .
Similar to Ramsey's theorem, it is unclear a priori whether induced Ramsey numbers exist for every graph H . In the early 1970s, Erdős , Hajnal and Pósa , Deuber, and Rödl independently proved that this is the case. [ 32 ] [ 33 ] [ 34 ] However, the original proofs gave terrible bounds (e.g. towers of twos ) on the induced Ramsey numbers. It is interesting to ask if better bounds can be achieved. In 1974, Paul Erdős conjectured that there exists a constant c such that every graph H on k vertices satisfies r ind ( H ) ≤ 2 ck . [ 35 ] If this conjecture is true, it would be optimal up to the constant c because the complete graph achieves a lower bound of this form (in fact, it's the same as Ramsey numbers). However, this conjecture is still open as of now.
In 1984, Erdős and Hajnal claimed that they proved the bound [ 36 ]
However, that was still far from the exponential bound conjectured by Erdős. It was not until 1998 that a major breakthrough was achieved by Kohayakawa , Prömel and Rödl, who proved the first almost-exponential bound of r ind ( H ) ≤ 2 ck (log k ) 2 for some constant c . Their approach was to consider a suitable random graph constructed on projective planes and show that it has the desired properties with nonzero probability. The idea of using random graphs on projective planes has also previously been used in studying Ramsey properties with respect to vertex colorings and the induced Ramsey problem on bounded degree graphs H . [ 37 ]
Kohayakawa, Prömel and Rödl's bound remained the best general bound for a decade. In 2008, Fox and Sudakov provided an explicit construction for induced Ramsey numbers with the same bound. [ 38 ] In fact, they showed that every ( n , d ,λ) -graph G with small λ and suitable d contains an induced monochromatic copy of any graph on k vertices in any coloring of edges of G in two colors. In particular, for some constant c , the Paley graph on n ≥ 2 ck log 2 k vertices is such that all of its edge colorings in two colors contain an induced monochromatic copy of every k -vertex graph.
In 2010, Conlon , Fox and Sudakov were able to improve the bound to r ind ( H ) ≤ 2 ck log k , which remains the current best upper bound for general induced Ramsey numbers. [ 39 ] Similar to the previous work in 2008, they showed that every ( n , d ,λ) -graph G with small λ and edge density 1 ⁄ 2 contains an induced monochromatic copy of every graph on k vertices in any edge coloring in two colors. Currently, Erdős's conjecture that r ind ( H ) ≤ 2 ck remains open and is one of the important problems in extremal graph theory .
For lower bounds, not much is known in general except for the fact that induced Ramsey numbers must be at least the corresponding Ramsey numbers. Some lower bounds have been obtained for some special cases (see Special Cases).
It is sometimes quite difficult to compute the Ramsey number. Indeed, the inequalities
were proved by Erdős and Szekeres in 1947. [ 40 ]
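To make the difficulty concrete, here is a minimal brute-force sketch (my own illustration, not from the source) that verifies R (3, 3) = 6 by exhausting all 2-colourings of the edges of K 5 and K 6 ; since the number of colourings grows as 2^C( N , 2), this approach is hopeless for anything but the smallest cases.

```python
from itertools import combinations, product

def has_mono_triangle(colour):
    """Check whether an edge 2-colouring (dict edge -> 0/1) contains a monochromatic triangle."""
    vertices = sorted({v for e in colour for v in e})
    for a, b, c in combinations(vertices, 3):
        if colour[(a, b)] == colour[(a, c)] == colour[(b, c)]:
            return True
    return False

def every_colouring_has_mono_triangle(n):
    """True if every 2-colouring of the edges of K_n contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):   # 2^C(n,2) colourings
        colour = dict(zip(edges, bits))
        if not has_mono_triangle(colour):
            return False
    return True

print(every_colouring_has_mono_triangle(5))  # False: some colouring of K_5 has no monochromatic triangle, so R(3,3) > 5
print(every_colouring_has_mono_triangle(6))  # True:  every colouring of K_6 has one, so R(3,3) = 6
```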
While the general bounds for the induced Ramsey numbers are exponential in the size of the graph, the behaviour is much different on special classes of graphs (in particular, sparse ones). Many of these classes have induced Ramsey numbers polynomial in the number of vertices.
If H is a cycle , path or star on k vertices, it is known that r ind ( H ) is linear in k . [ 38 ]
If H is a tree on k vertices, it is known that r ind ( H ) = O ( k 2 log 2 k ) . [ 41 ] It is also known that r ind ( H ) is superlinear (i.e. r ind ( H ) = ω( k ) ). Note that this is in contrast to the usual Ramsey numbers, where the Burr–Erdős conjecture (now proven) tells us that r ( H ) is linear (since trees are 1- degenerate ).
For graphs H with number of vertices k and bounded degree Δ , it was conjectured that r ind ( H ) ≤ ck d (Δ) , for some constant d depending only on Δ . The conjecture was first proven by Łuczak and Rödl in 1996, with d (Δ) growing as a tower of twos with height O (Δ 2 ) . [ 42 ] More reasonable bounds for d (Δ) have been obtained since then. In 2013, Conlon, Fox and Zhao showed using a counting lemma for sparse pseudorandom graphs that r ind ( H ) ≤ ck 2Δ+8 , where the exponent is best possible up to constant factors. [ 43 ]
Similar to Ramsey numbers, we can generalize the notion of induced Ramsey numbers to hypergraphs and multicolor settings.
We can also generalize the induced Ramsey's theorem to a multicolor setting. For graphs H 1 , H 2 , …, H r , define r ind ( H 1 , H 2 , …, H r ) to be the minimum number of vertices in a graph G such that, given any coloring of the edges of G into r colors, there exists an i such that 1 ≤ i ≤ r and such that G contains an induced subgraph isomorphic to H i whose edges are all colored in the i -th color. Let r ind ( H ; q ) := r ind ( H , H , …, H ) ( q copies of H ).
It is possible to derive a bound on r ind ( H ; q ) which is approximately a tower of twos of height ~ log q by iteratively applying the bound on the two-color case. The current best known bound is due to Fox and Sudakov, which achieves r ind ( H ; q ) ≤ 2 ck 3 , where k is the number of vertices of H and c is a constant depending only on q . [ 44 ]
We can extend the definition of induced Ramsey numbers to d -uniform hypergraphs by simply changing the word graph in the statement to hypergraph . Furthermore, we can define the multicolor version of induced Ramsey numbers in the same way as the previous subsection.
Let H be a d -uniform hypergraph with k vertices. Define the tower function t r ( x ) by letting t 1 ( x ) = x and for i ≥ 1 , t i +1 ( x ) = 2 t i ( x ) . Using the hypergraph container method, Conlon, Dellamonica, La Fleur, Rödl and Schacht were able to show that for d ≥ 3, q ≥ 2 , r ind ( H ; q ) ≤ t d ( ck ) for some constant c depending on only d and q . In particular, this result mirrors the best known bound for the usual Ramsey number when d = 3 . [ 45 ]
A further result, also commonly called Ramsey's theorem , applies to infinite graphs. In a context where finite graphs are also being discussed it is often called the "Infinite Ramsey theorem". As intuition provided by the pictorial representation of a graph is diminished when moving from finite to infinite graphs, theorems in this area are usually phrased in set-theoretic terminology. [ 46 ]
Proof : The proof is by induction on n , the size of the subsets. For n = 1 , the statement is equivalent to saying that if you split an infinite set into a finite number of sets, then one of them is infinite. This is evident. Assuming the theorem is true for n ≤ r , we prove it for n = r + 1 . Given a c -colouring of the ( r + 1) -element subsets of X , let a 0 be an element of X and let Y = X \ { a 0 }. We then induce a c -colouring of the r -element subsets of Y , by just adding a 0 to each r -element subset (to get an ( r + 1) -element subset of X ). By the induction hypothesis, there exists an infinite subset Y 1 of Y such that every r -element subset of Y 1 is coloured the same colour in the induced colouring. Thus there is an element a 0 and an infinite subset Y 1 such that all the ( r + 1) -element subsets of X consisting of a 0 and r elements of Y 1 have the same colour. By the same argument, there is an element a 1 in Y 1 and an infinite subset Y 2 of Y 1 with the same properties. Inductively, we obtain a sequence { a 0 , a 1 , a 2 , …} such that the colour of each ( r + 1) -element subset ( a i (1) , a i (2) , …, a i ( r + 1) ) with i (1) < i (2) < … < i ( r + 1) depends only on the value of i (1) . Further, there are infinitely many values of i ( n ) such that this colour will be the same. Take these a i ( n ) 's to get the desired monochromatic set.
A stronger but unbalanced infinite form of Ramsey's theorem for graphs, the Erdős–Dushnik–Miller theorem , states that every infinite graph contains either a countably infinite independent set, or an infinite clique of the same cardinality as the original graph. [ 47 ]
It is possible to deduce the finite Ramsey theorem from the infinite version by a proof by contradiction . Suppose the finite Ramsey theorem is false. Then there exist integers c , n , T such that for every integer k , there exists a c -colouring of [ k ] ( n ) without a monochromatic set of size T . Let C k denote the c -colourings of [ k ] ( n ) without a monochromatic set of size T .
For any k , the restriction of a colouring in C k +1 to [ k ] ( n ) (by ignoring the colour of all sets containing k + 1 ) is a colouring in C k . Define C k 1 {\displaystyle C_{k}^{1}} to be the colourings in C k which are restrictions of colourings in C k +1 . Since C k +1 is not empty, neither is C k 1 {\displaystyle C_{k}^{1}} .
Similarly, the restriction of any colouring in C k + 1 1 {\displaystyle C_{k+1}^{1}} is in C k 1 {\displaystyle C_{k}^{1}} , allowing one to define C k 2 {\displaystyle C_{k}^{2}} as the set of all such restrictions, a non-empty set. Continuing so, define C k m {\displaystyle C_{k}^{m}} for all integers m , k .
Now, for any integer k , C k ⊇ C k 1 ⊇ C k 2 ⊇ ⋯ {\displaystyle C_{k}\supseteq C_{k}^{1}\supseteq C_{k}^{2}\supseteq \cdots }
and each set is non-empty. Furthermore, C k is finite as | C k | ≤ c k ! / ( n ! ( k − n ) ! ) {\displaystyle |C_{k}|\leq c^{k!/(n!(k-n)!)}}
It follows that the intersection of all of these sets is non-empty, and let D k = C k ∩ C k 1 ∩ C k 2 ∩ ⋯ {\displaystyle D_{k}=C_{k}\cap C_{k}^{1}\cap C_{k}^{2}\cap \cdots }
Then every colouring in D k is the restriction of a colouring in D k +1 . Therefore, by unrestricting a colouring in D k to a colouring in D k +1 , and continuing doing so, one constructs a colouring of N ( n ) {\displaystyle \mathbb {N} ^{(n)}} without any monochromatic set of size T . This contradicts the infinite Ramsey theorem.
If a suitable topological viewpoint is taken, this argument becomes a standard compactness argument showing that the infinite version of the theorem implies the finite version. [ 48 ]
The theorem can also be extended to hypergraphs . An m -hypergraph is a graph whose "edges" are sets of m vertices – in a normal graph an edge is a set of 2 vertices. The full statement of Ramsey's theorem for hypergraphs is that for any integers m and c , and any integers n 1 , …, n c , there is an integer R ( n 1 , …, n c ; m) such that if the hyperedges of a complete m -hypergraph of order R ( n 1 , …, n c ; m ) are coloured with c different colours, then for some i between 1 and c , the hypergraph must contain a complete sub- m -hypergraph of order n i whose hyperedges are all colour i . This theorem is usually proved by induction on m , the 'hyper-ness' of the graph. The base case for the proof is m = 2 , which is exactly the theorem above.
For m = 3 we know the exact value of one non-trivial Ramsey number, namely R (4, 4; 3) = 13 . This fact was established by Brendan McKay and Stanisław Radziszowski in 1991. [ 49 ] Additionally, we have: R (4, 5; 3) ≥ 35 , [ 50 ] R (4, 6; 3) ≥ 63 and R (5, 5; 3) ≥ 88 . [ 50 ]
It is also possible to define Ramsey numbers for directed graphs; these were introduced by P. Erdős and L. Moser ( 1964 ). Let R ( n ) be the smallest number Q such that any complete graph with singly directed arcs (also called a "tournament") and with ≥ Q nodes contains an acyclic (also called "transitive") n -node subtournament.
This is the directed-graph analogue of what (above) has been called R ( n , n ; 2) , the smallest number Z such that any 2-colouring of the edges of a complete undirected graph with ≥ Z nodes contains a monochromatic complete graph on n nodes. (The directed analogue of the two possible arc colours is the two directions of the arcs, the analogue of "monochromatic" is "all arc-arrows point the same way"; i.e., "acyclic.")
We have R (0) = 0 , R (1) = 1 , R (2) = 2 , R (3) = 4 , R (4) = 8 , R (5) = 14 , R (6) = 28 , and 34 ≤ R (7) ≤ 47 . [ 51 ] [ 52 ]
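The small values above are easy to confirm computationally. The following sketch (illustrative code of my own; the names are arbitrary) checks R (3) = 4 by enumerating every tournament on 3 and on 4 nodes and testing for a transitive (acyclic) 3-node subtournament.

```python
from itertools import combinations, permutations, product

def tournaments(n):
    """Yield each tournament on n nodes as a set of directed arcs (i, j), meaning i -> j."""
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        yield {(i, j) if b else (j, i) for (i, j), b in zip(pairs, bits)}

def has_transitive_triple(arcs):
    """True if some nodes a, b, c satisfy a->b, b->c and a->c (a transitive 3-subtournament)."""
    nodes = {v for arc in arcs for v in arc}
    for a, b, c in permutations(nodes, 3):
        if (a, b) in arcs and (b, c) in arcs and (a, c) in arcs:
            return True
    return False

print(all(has_transitive_triple(t) for t in tournaments(3)))  # False: the cyclic triangle has none, so R(3) > 3
print(all(has_transitive_triple(t) for t in tournaments(4)))  # True:  every 4-node tournament has one, so R(3) = 4
```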
In terms of the partition calculus, Ramsey's theorem can be stated as ℵ 0 → ( ℵ 0 ) k n {\displaystyle \aleph _{0}\rightarrow (\aleph _{0})_{k}^{n}} for all finite n and k . Wacław Sierpiński showed that the Ramsey theorem does not extend to graphs of size ℵ 1 {\displaystyle \aleph _{1}} by showing that 2 ℵ 0 ↛ ( ℵ 1 ) 2 2 {\displaystyle 2^{\aleph _{0}}\nrightarrow (\aleph _{1})_{2}^{2}} . In particular, the continuum hypothesis implies that ℵ 1 ↛ ( ℵ 1 ) 2 2 {\displaystyle \aleph _{1}\nrightarrow (\aleph _{1})_{2}^{2}} . Stevo Todorčević showed that in fact in ZFC , ℵ 1 ↛ [ ℵ 1 ] ℵ 1 2 {\displaystyle \aleph _{1}\nrightarrow [\aleph _{1}]_{\aleph _{1}}^{2}} , a much stronger statement than ℵ 1 ↛ ( ℵ 1 ) 2 2 {\displaystyle \aleph _{1}\nrightarrow (\aleph _{1})_{2}^{2}} . Justin T. Moore has strengthened this result further. On the positive side, a Ramsey cardinal is a large cardinal κ {\displaystyle \kappa } axiomatically defined to satisfy the related formula: κ → ( κ ) 2 < ω {\displaystyle \kappa \rightarrow (\kappa )_{2}^{<\omega }} . The existence of Ramsey cardinals cannot be proved in ZFC.
In reverse mathematics , there is a significant difference in proof strength between the version of Ramsey's theorem for infinite graphs (the case n = 2) and for infinite hypergraphs (the case n ≥ 3). The hypergraph version of the theorem is equivalent in strength to the arithmetical comprehension axiom , making it part of the subsystem ACA 0 of second-order arithmetic , one of the big five subsystems in reverse mathematics. In contrast, by a theorem of David Seetapun , the graph version of the theorem is weaker than ACA 0 , and (combining Seetapun's result with others) it does not fall into one of the big five subsystems. [ 53 ] Over ZF , however, the graph version implies the classical Kőnig's lemma , whereas the converse implication does not hold, [ 54 ] since Kőnig's lemma is equivalent to countable choice from finite sets in this setting. [ 55 ] | https://en.wikipedia.org/wiki/Ramsey's_theorem |
The Ramsey–Cass–Koopmans model (also known as the Ramsey growth model or the neoclassical growth model ) is a foundational model in neoclassical economics that describes the dynamics of economic growth over time. It builds upon the pioneering work of Frank P. Ramsey (1928), [ 1 ] with later extensions by David Cass and Tjalling Koopmans in the 1960s. [ 2 ] [ 3 ]
The model extends the Solow–Swan model by endogenizing the savings rate through explicit microfoundations of consumption behavior: rather than assuming a constant saving rate, the model derives it from the intertemporal optimization of a representative agent who chooses consumption to maximize utility over an infinite horizon. This approach leads to a richer dynamic structure in the transition to the long-run steady state , and yields a Pareto efficient outcome. [ note 1 ]
Ramsey originally formulated the model as a social planner ’s problem—maximizing aggregate consumption across generations [ 4 ] —before it was reformulated by Cass and Koopmans as a decentralized economy with a representative agent and competitive markets. The model is designed to explain long-run growth trends rather than short-term business cycle fluctuations and does not incorporate elements like market imperfections , heterogeneous agents , or exogenous shocks . Later developments, such as real business cycle theory , extended the model’s structure, allowing for government purchases, employment variations, and other shocks.
In the usual setup, time is continuous, starting, for simplicity, at t = 0 {\displaystyle t=0} and continuing forever. By assumption, the only productive factors are capital K {\displaystyle K} and labour L {\displaystyle L} , both required to be nonnegative. The labour force, which makes up the entire population, is assumed to grow at a constant rate n {\displaystyle n} , i.e. L ˙ = d L d t = n L {\displaystyle {\dot {L}}={\tfrac {\mathrm {d} L}{\mathrm {d} t}}=nL} , implying that L = L 0 e n t {\displaystyle L=L_{0}e^{nt}} with initial level L 0 > 0 {\displaystyle L_{0}>0} at t = 0 {\displaystyle t=0} . Finally, let Y {\displaystyle Y} denote aggregate production and C {\displaystyle C} denote aggregate consumption.
The variables that the Ramsey–Cass–Koopmans model ultimately aims to describe are the per capita (or more accurately, per labour ) consumption: c = C L {\displaystyle c={\frac {C}{L}}} and capital intensity : k = K L {\displaystyle k={\frac {K}{L}}} It does so by connecting capital accumulation , written K ˙ = d K d t {\displaystyle {\dot {K}}={\tfrac {\mathrm {d} K}{\mathrm {d} t}}} in Newton's notation , with consumption C {\displaystyle C} , describing a consumption-investment trade-off. More specifically, since the existing capital stock decays by depreciation rate δ {\displaystyle \delta } (assumed to be constant), it requires investment of current-period production output Y {\displaystyle Y} . Thus, K ˙ = Y − δ K − c L {\displaystyle {\dot {K}}=Y-\delta K-cL}
The relationship between the productive factors and aggregate output is described by the aggregate production function , Y = F ( K , L ) {\displaystyle Y=F(K,L)} . A common choice is the Cobb–Douglas production function F ( K , L ) = A K 1 − α L α {\displaystyle F(K,L)=AK^{1-\alpha }L^{\alpha }} , but generally, any production function satisfying the Inada conditions is permissible. Importantly, though, F {\displaystyle F} is required to be homogeneous of degree 1 , which economically implies constant returns to scale . With this assumption, we can re-express aggregate output in per capita terms F ( K , L ) = L ⋅ F ( K L , 1 ) = L ⋅ f ( k ) {\displaystyle F(K,L)=L\cdot F\left({\frac {K}{L}},1\right)=L\cdot f(k)} For example, if we use the Cobb–Douglas production function with A = 1 , α = 0.5 {\displaystyle A=1,\alpha =0.5} , then f ( k ) = k 0.5 {\displaystyle f(k)=k^{0.5}} .
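As a quick numerical check of the per-capita reduction permitted by constant returns to scale, the following sketch uses the Cobb–Douglas example from the text with made-up values of K and L (the numbers are illustrative assumptions only):

```python
# Cobb-Douglas with A = 1, alpha = 0.5, as in the text: F(K, L) = K**0.5 * L**0.5
F = lambda K, L: K**0.5 * L**0.5
f = lambda k: k**0.5                      # per-capita (intensive) form f(k) = F(k, 1)

K, L = 400.0, 100.0                       # illustrative aggregate capital and labour
k = K / L                                 # capital intensity
print(F(K, L), L * f(k))                  # both equal 200.0, i.e. F(K, L) = L * f(K/L)
```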
To obtain the first key equation of the Ramsey–Cass–Koopmans model, the dynamic equation for the capital stock needs to be expressed in per capita terms. Noting the quotient rule for d d t ( K L ) {\displaystyle {\tfrac {\mathrm {d} }{\mathrm {d} t}}\left({\tfrac {K}{L}}\right)} , we have
k ˙ = f ( k ) − ( n + δ ) k − c {\displaystyle {\dot {k}}=f(k)-(n+\delta )k-c}
This is a non-linear differential equation, akin to that of the Solow–Swan model , but it incorporates endogenous consumption c , reflecting the model's microfoundations .
If we ignore the problem of how consumption is distributed, then the rate of utility U {\displaystyle U} is a function of aggregate consumption. That is, U = U ( C , t ) {\displaystyle U=U(C,t)} . To avoid the problem of infinity, we exponentially discount future utility at a discount rate ρ ∈ ( 0 , ∞ ) {\displaystyle \rho \in (0,\infty )} . A high ρ {\displaystyle \rho } reflects high impatience .
The social planner 's problem is maximizing the social welfare function U 0 = ∫ 0 ∞ e − ρ t U ( C , t ) d t {\displaystyle U_{0}=\int _{0}^{\infty }e^{-\rho t}U(C,t)\,\mathrm {d} t} Assume that the economy is populated by identical immortal individuals with unchanging utility functions u ( c ) {\displaystyle u(c)} (a representative agent ), such that the total utility is: U ( C , t ) = L u ( c ) = L 0 e n t u ( c ) {\displaystyle U(C,t)=Lu(c)=L_{0}e^{nt}u(c)} The utility function is assumed to be strictly increasing (i.e., there is no bliss point ) and concave in c {\displaystyle c} , with lim c → 0 u c = ∞ {\displaystyle \lim _{c\to 0}u_{c}=\infty } , [ note 2 ] where u c {\displaystyle u_{c}} is marginal utility of consumption ∂ u ∂ c {\displaystyle {\tfrac {\partial u}{\partial c}}} . Thus, we have the social planner's problem:
max c ∫ 0 ∞ e − ( ρ − n ) t u ( c ( t ) ) d t {\displaystyle \max _{c}\int _{0}^{\infty }e^{-(\rho -n)t}u(c(t))\,\mathrm {d} t} subject to k ˙ = f ( k ) − ( n + δ ) k − c , {\displaystyle {\dot {k}}=f(k)-(n+\delta )k-c,} where an initial non-zero capital stock k ( 0 ) = k 0 > 0 {\displaystyle k(0)=k_{0}>0} is given. To ensure that the integral is well-defined, we impose ρ > n {\displaystyle \rho >n} .
The solution, usually found by using a Hamiltonian function , [ note 3 ] [ note 4 ] is a differential equation that describes the optimal evolution of consumption,
c ˙ = σ ( c ) [ f k ( k ) − δ − ρ ] ⋅ c {\displaystyle {\dot {c}}=\sigma (c)\left[f_{k}(k)-\delta -\rho \right]\cdot c}
known as the Keynes–Ramsey rule . [ 5 ]
The term f k ( k ) − δ − ρ {\displaystyle f_{k}(k)-\delta -\rho } , where f k = ∂ k f {\displaystyle f_{k}=\partial _{k}f} is the marginal product of capital , reflects the marginal return on net investment , accounting for capital depreciation and time discounting.
Here σ ( c ) {\displaystyle \sigma (c)} is the elasticity of intertemporal substitution (EIS), defined by σ ( c ) = − u c ( c ) c ⋅ u c c ( c ) = − d ln c d ln ( u ′ ( c ) ) {\displaystyle \sigma (c)=-{\frac {u_{c}(c)}{c\cdot u_{cc}(c)}}=-{\frac {d\ln c}{d\ln(u'(c))}}} It is formally equivalent to the inverse of relative risk aversion . The quantity reflects the curvature of the utility function and indicates how much the representative agent wishes to smooth consumption over time. If the agent has high relative risk aversion, it has low EIS and thus would be more willing to smooth consumption over time.
It is often assumed that u {\displaystyle u} is strictly monotonically increasing and concave, thus σ > 0 {\displaystyle \sigma >0} . In particular, if utility is logarithmic, then it is constant: u ( c ) = u 0 ln c ⟹ σ ( c ) = 1 {\displaystyle u(c)=u_{0}\ln c\implies \sigma (c)=1} We can rewrite the Ramsey rule as d d t ln c ⏟ consumption delay rate = σ ( c ) ⏟ EIS at current consumption level [ f k ( k ) − δ − ρ ] ⏟ marginal return on net investment {\displaystyle \underbrace {{\frac {d}{dt}}\ln c} _{\text{consumption delay rate}}=\underbrace {\sigma (c)} _{{\text{EIS at current consumption level}}\quad }\underbrace {[f_{k}(k)-\delta -\rho ]} _{\text{marginal return on net investment}}} where we interpret d d t ln c {\displaystyle {\frac {d}{dt}}\ln c} as the "consumption delay rate," indicating the rate at which current consumption is being postponed in favor of future consumption. A higher value implies that the agent prioritizes saving over consuming today, thereby deferring consumption later.
The two coupled differential equations for k {\displaystyle k} and c {\displaystyle c} form the Ramsey–Cass–Koopmans dynamical system .
{ k ˙ = f ( k ) − ( n + δ ) k − c c ˙ = σ ( c ) [ f k ( k ) − δ − ρ ] ⋅ c {\displaystyle {\begin{cases}{\dot {k}}=f(k)-(n+\delta )k-c\\{\dot {c}}=\sigma (c)\left[f_{k}(k)-\delta -\rho \right]\cdot c\end{cases}}}
A steady state ( k ∗ , c ∗ ) {\displaystyle (k^{\ast },c^{\ast })} for the system is found by setting k ˙ {\displaystyle {\dot {k}}} and c ˙ {\displaystyle {\dot {c}}} equal to zero. There are three solutions: the interior solution with f k ( k ∗ ) = δ + ρ {\displaystyle f_{k}(k^{\ast })=\delta +\rho } and c ∗ = f ( k ∗ ) − ( n + δ ) k ∗ > 0 {\displaystyle c^{\ast }=f(k^{\ast })-(n+\delta )k^{\ast }>0} ; the origin k ∗ = 0 , c ∗ = 0 {\displaystyle k^{\ast }=0,\;c^{\ast }=0} ; and the all-saving solution with c ∗ = 0 {\displaystyle c^{\ast }=0} and k ∗ > 0 {\displaystyle k^{\ast }>0} satisfying f ( k ∗ ) = ( n + δ ) k ∗ {\displaystyle f(k^{\ast })=(n+\delta )k^{\ast }} .
The first is the only solution in the interior of the upper quadrant. It is a saddle point (as shown below). The second is a repelling point. The third is a degenerate stable equilibrium. The first solution is meant by default, although the other two are important to keep track of.
Any optimal trajectory must follow the dynamical system. However, since the variable c {\displaystyle c} is a control variable , at each capital intensity k {\displaystyle k} , to find its corresponding optimal trajectory, we still need to find its starting consumption rate c ( 0 ) {\displaystyle c(0)} . As it turns out, the optimal trajectory is the unique one that converges to the interior equilibrium point. Any other trajectory either converges to the all-saving equilibrium with k ∗ > 0 , c ∗ = 0 {\displaystyle k^{*}>0,c^{*}=0} , or diverges to k → 0 , c → ∞ {\displaystyle k\to 0,c\to \infty } , which means that the economy expends all its capital in finite time. Both achieve a lower overall utility than the trajectory toward the interior equilibrium point.
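The following sketch illustrates this shooting logic for one standard parameterisation: Cobb–Douglas technology and logarithmic utility, so that σ = 1. All parameter values are illustrative assumptions rather than anything taken from the source. The interior steady state follows from f k ( k *) = δ + ρ, and bisection on the initial consumption c (0) separates trajectories that exhaust the capital stock from those that drift toward the all-saving point; the boundary between the two regimes approximates the saddle path.

```python
import math

# Illustrative parameters (assumptions, not from the source)
alpha, delta, rho, n = 0.3, 0.05, 0.03, 0.01
f  = lambda k: k**alpha                  # per-capita Cobb-Douglas production
fk = lambda k: alpha * k**(alpha - 1)    # marginal product of capital
sigma = 1.0                              # log utility => EIS = 1

# Interior steady state: f_k(k*) = delta + rho, c* = f(k*) - (n + delta) k*
k_star = (alpha / (delta + rho)) ** (1.0 / (1.0 - alpha))
c_star = f(k_star) - (n + delta) * k_star

def simulate(c0, k0, T=300.0, dt=0.01):
    """Euler-integrate the Ramsey-Cass-Koopmans system. Return +1 if capital is
    exhausted (c0 too high), -1 if consumption collapses toward the all-saving
    point (c0 too low), and 0 if the path stays interior for the whole horizon."""
    k, c = k0, c0
    for _ in range(int(T / dt)):
        dk = f(k) - (n + delta) * k - c
        dc = sigma * (fk(k) - delta - rho) * c
        k, c = k + dt * dk, c + dt * dc
        if k <= 1e-6:
            return +1
        if c <= 1e-6:
            return -1
    return 0

k0 = 0.5 * k_star                        # start below the steady state
lo, hi = 1e-6, f(k0)                     # bracket for the initial consumption choice
for _ in range(40):                      # bisection: the boundary is the saddle path
    mid = 0.5 * (lo + hi)
    if simulate(mid, k0) == +1:
        hi = mid
    else:
        lo = mid

print("steady state  k* = %.4f, c* = %.4f" % (k_star, c_star))
print("saddle-path c(0) for k(0) = %.4f: about %.4f" % (k0, 0.5 * (lo + hi)))
```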
A qualitative statement about the stability of the solution ( k ∗ , c ∗ ) {\displaystyle (k^{\ast },c^{\ast })} requires a linearization by a first-order Taylor polynomial ( k ˙ c ˙ ) ≈ J ( k ∗ , c ∗ ) ( k − k ∗ c − c ∗ ) {\displaystyle {\begin{pmatrix}{\dot {k}}\\{\dot {c}}\end{pmatrix}}\approx \mathbf {J} (k^{\ast },c^{\ast }){\begin{pmatrix}k-k^{\ast }\\c-c^{\ast }\end{pmatrix}}}
where J ( k ∗ , c ∗ ) {\displaystyle \mathbf {J} (k^{\ast },c^{\ast })} is the Jacobian matrix evaluated at steady state, [ note 5 ] given by J ( k ∗ , c ∗ ) = ( ρ − n − 1 σ ( c ∗ ) f k k ( k ∗ ) c ∗ 0 ) {\displaystyle \mathbf {J} (k^{\ast },c^{\ast })={\begin{pmatrix}\rho -n&-1\\\sigma (c^{\ast })f_{kk}(k^{\ast })c^{\ast }&0\end{pmatrix}}}
which has determinant | J ( k ∗ , c ∗ ) | = σ ( c ∗ ) f k k ( k ∗ ) ⋅ c ∗ < 0 {\displaystyle \left|\mathbf {J} \left(k^{\ast },c^{\ast }\right)\right|=\sigma (c^{\ast })f_{kk}(k^{\ast })\cdot c^{\ast }<0} since c ∗ > 0 {\displaystyle c^{*}>0} , σ {\displaystyle \sigma } is positive by assumption, and f k k < 0 {\displaystyle f_{kk}<0} since f {\displaystyle f} is concave (Inada condition). Since the determinant equals the product of the eigenvalues , the eigenvalues must be real and opposite in sign. [ 6 ]
Hence, by the stable manifold theorem , the equilibrium is a saddle point , and there exists a unique stable arm, or "saddle path," that converges on the equilibrium, indicated by the blue curve in the phase diagram.
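A minimal numerical confirmation of the saddle-point property, using the same illustrative Cobb–Douglas / log-utility parameterisation as in the sketch above (assumed values, not from the source): the Jacobian at the interior steady state has a negative determinant and therefore one negative and one positive real eigenvalue.

```python
import numpy as np

alpha, delta, rho, n, sigma = 0.3, 0.05, 0.03, 0.01, 1.0
fk  = lambda k: alpha * k**(alpha - 1)
fkk = lambda k: alpha * (alpha - 1) * k**(alpha - 2)

k_star = (alpha / (delta + rho)) ** (1.0 / (1.0 - alpha))
c_star = k_star**alpha - (n + delta) * k_star

# Jacobian of (k_dot, c_dot) evaluated at the interior steady state
J = np.array([[fk(k_star) - (n + delta), -1.0],
              [sigma * fkk(k_star) * c_star, 0.0]])

print(np.linalg.eigvals(J))     # one negative and one positive real eigenvalue => saddle point
print(np.linalg.det(J) < 0)     # True: the determinant is negative, as argued in the text
```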
The system is called "saddle path stable" since all unstable trajectories are ruled out by the "no Ponzi scheme " condition: [ 7 ] lim t → ∞ k ( t ) e − ∫ 0 t ( f k ( k ( s ) ) − n − δ ) d s ≥ 0 {\displaystyle \lim _{t\to \infty }k(t)\,e^{-\int _{0}^{t}\left(f_{k}(k(s))-n-\delta \right)\mathrm {d} s}\geq 0}
implying that the present value of the capital stock cannot be negative. [ note 6 ]
Spear and Young re-examine the history of optimal growth during the 1950s and 1960s, [ 8 ] focusing in part on the veracity of the claimed simultaneous and independent development of Cass' "Optimum growth in an aggregative model of capital accumulation" (published in 1965 in the Review of Economic Studies ), and Tjalling Koopmans' "On the concept of optimal economic growth" (published in Study Week on the Econometric Approach to Development Planning, 1965, Rome: Pontifical Academy of Science).
Over their lifetimes, neither Cass nor Koopmans ever suggested that their results characterizing optimal growth in the one-sector, continuous-time growth model were anything other than "simultaneous and independent". The priority issue became a discussion point because, in the published version of Koopmans' work, he cited the chapter from Cass' thesis that later became the RES paper. In his paper, Koopmans states in a footnote that Cass independently obtained conditions similar to what he finds. Cass also considers the limiting case where the discount rate goes to zero in his paper. For his part, Cass notes that "after the original version of this paper was completed, a very similar analysis by Koopmans came to our attention. We draw on his results in discussing the limiting case, where the effective social discount rate goes to zero". In the interview that Cass gave to Macroeconomic Dynamics , he credits Koopmans with pointing him to Frank Ramsey's previous work, claiming to have been embarrassed not to have known of it, but says nothing to dispel the basic claim that his work and Koopmans' were independent.
Spear and Young dispute this history, based upon a previously overlooked working paper version of Koopmans' paper, [ 9 ] which was the basis for Koopmans' oft-cited presentation at a conference held by the Pontifical Academy of Sciences in October 1963. [ 10 ] In this Cowles Discussion paper, there is an error. Koopmans claims in his main result that the Euler equations are both necessary and sufficient to characterize optimal trajectories in the model because any solutions to the Euler equations that do not converge to the optimal steady-state would hit either a zero consumption or zero capital boundary in finite time. This error was presented at the Vatican conference, although no participant commented on the problem at the time of Koopmans' presentation. This can be inferred because the discussion after each paper presentation at the Vatican conference is reproduced verbatim in the conference volume.
In the Vatican volume discussion following the presentation of a paper by Edmond Malinvaud , the issue does arise because of Malinvaud's explicit inclusion of a so-called "transversality condition" (which Malinvaud calls Condition I) in his paper. At the end of the presentation, Koopmans asks Malinvaud whether it is not the case that Condition I guarantees that solutions to the Euler equations that do not converge to the optimal steady-state hit a boundary in finite time. Malinvaud replies that this is not the case and suggests that Koopmans look at the example with log utility functions and Cobb-Douglas production functions.
At this point, Koopmans recognizes he has a problem. However, based on a confusing appendix to a later version of the paper produced after the Vatican conference, he seems unable to decide how to deal with the issue raised by Malinvaud's Condition I.
From the Macroeconomic Dynamics interview with Cass, it is clear that Koopmans met with Cass' thesis advisor, Hirofumi Uzawa , at the winter meetings of the Econometric Society in January 1964, where Uzawa advised him that his student [Cass] had solved this problem already. Uzawa must have then provided Koopmans with the copy of Cass' thesis chapter, which he sent along in the guise of the IMSSS Technical Report that Koopmans cited in the published version of his paper. The word "guise" is appropriate here because the TR number listed in Koopmans' citation would have put the issue date of the report in the early 1950s, which it was not.
In the published version of Koopmans' paper, he imposes a new Condition Alpha in addition to the Euler equations, stating that the only admissible trajectory among those satisfying the Euler equations is the one that converges to the optimal steady-state equilibrium of the model. This result is derived in Cass' paper via the imposition of a transversality condition that Cass deduced from relevant sections of a book by Lev Pontryagin . [ 11 ] Spear and Young conjecture that Koopmans took this route because he did not want to appear to be "borrowing" either Malinvaud's or Cass' transversality technology.
Based on this and other examination of Malinvaud's contributions in 1950s—specifically his intuition of the importance of the transversality condition—Spear and Young suggest that the neo-classical growth model might better be called the Ramsey–Malinvaud–Cass model than the established Ramsey–Cass–Koopmans honorific. | https://en.wikipedia.org/wiki/Ramsey–Cass–Koopmans_model |
Ramus Pomifer ( Latin for apple branch ) was a constellation between Hercules and Lyra .
It was depicted in the form of a branch held in Hercules' left hand. The also-obsolete constellation of Cerberus - made up of much the same stars - became combined with it in later depictions, with the name "Cerberus et Ramus". [ 1 ]
| https://en.wikipedia.org/wiki/Ramus_Pomifer |
The Ramón Margalef Award for Excellence in Education was launched in 2008 by the Association for the Sciences of Limnology and Oceanography to recognize innovations and excellence in teaching and mentoring students in the fields of limnology and oceanography . The criteria for the award require "adherence to the highest standards of excellence" in pedagogy as well as verification that the teaching techniques have furthered the field of aquatic science. [ 1 ] The award is not affiliated with the Ramon Margalef Prize in Ecology , often referred to as the Ramon Margalef Award, given by the Generalitat de Catalunya in Barcelona. The award has been presented annually since 2009. [ 1 ]
The winners have included:
The information in this table is from the Association for the Sciences of Limnology and Oceanography. [ 3 ] | https://en.wikipedia.org/wiki/Ramón_Margalef_Award_for_Excellence_in_Education |
Rancidification is the process of complete or incomplete autoxidation or hydrolysis of fats and oils when exposed to air, light, moisture, or bacterial action, producing short-chain aldehydes , ketones and free fatty acids . [ 1 ]
When these processes occur in food, undesirable odors and flavors can result. In processed meats, these flavors are collectively known as warmed-over flavor . In certain cases, however, the flavors can be desirable (as in aged cheeses ). [ 2 ]
Rancidification can also detract from the nutritional value of food, as some vitamins are sensitive to oxidation. [ 3 ] Similar to rancidification, oxidative degradation also occurs in other hydrocarbons, such as lubricating oils , fuels , and mechanical cutting fluids . [ 4 ]
Five pathways for rancidification are recognized: [ 5 ]
Hydrolytic rancidity refers to the odor that develops when triglycerides are hydrolyzed and free fatty acids are released. This reaction of lipid with water may require a catalyst (such as a lipase , [ 6 ] or acidic or alkaline conditions) leading to the formation of free fatty acids and glycerol . In particular, short-chain fatty acids , such as butyric acid , are malodorous . [ 7 ] When short-chain fatty acids are produced, they serve as catalysts themselves, further accelerating the reaction, a form of autocatalysis . [ 7 ]
Oxidative rancidity is associated with the degradation by oxygen in the air.
The double bonds of an unsaturated fatty acid can be cleaved by free-radical reactions involving molecular oxygen. This reaction causes the release of malodorous and highly volatile aldehydes and ketones . Because of the nature of free-radical reactions, the reaction is catalyzed by sunlight. [ 7 ] Oxidation primarily occurs with unsaturated fats. For example, even though meat is held under refrigeration or in a frozen state, the poly-unsaturated fat will continue to oxidize and slowly become rancid. The fat oxidation process, potentially resulting in rancidity, begins immediately after the animal is slaughtered and the muscle, intra-muscular, inter-muscular and surface fat becomes exposed to oxygen of the air. This chemical process continues during frozen storage, though more slowly at lower temperature. Oxidative rancidity can be prevented by light-proof packaging, oxygen-free atmosphere (air-tight containers) and by the addition of antioxidants . [ 7 ]
A double bond of an unsaturated fatty acid can be oxidised by oxygen from the air in reactions catalysed by plant or animal lipoxygenase enzymes, [ 6 ] producing a hydroperoxide as a reactive intermediate, as in free-radical peroxidation. The final products depend on conditions: the lipoxygenase article shows that if a hydroperoxide lyase enzyme is present, it can cleave the hydroperoxide to yield short-chain fatty acids and dicarboxylic acids (several of which were first discovered in rancid fats).
Microbial rancidity refers to a water-dependent process in which microorganisms, such as bacteria or molds , use their enzymes such as lipases to break down fat. [ 6 ] Pasteurization and/or addition of antioxidant ingredients such as vitamin E , can reduce this process by destroying or inhibiting microorganisms. [ 6 ]
Despite concerns among the scientific community, there is little data on the health effects of rancidity or lipid oxidation in humans. [ 8 ] [ 9 ] Animal studies show evidence of organ damage, inflammation, carcinogenesis, and advanced atherosclerosis, although typically the dose of oxidized lipids is larger than what would be consumed by humans. [ 10 ] [ 11 ] [ 12 ]
Antioxidants are often used as preservatives in fat-containing foods to delay the onset or slow the development of rancidity due to oxidation. Natural antioxidants include ascorbic acid (vitamin C) and tocopherols (vitamin E). Synthetic antioxidants include butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), TBHQ , propyl gallate and ethoxyquin . The natural antioxidants tend to be short-lived, [ 13 ] so synthetic antioxidants are used when a longer shelf-life is preferred. The effectiveness of water-soluble antioxidants is limited in preventing direct oxidation within fats, but is valuable in intercepting free radicals that travel through the aqueous parts of foods. A combination of water-soluble and fat-soluble antioxidants is ideal, usually in the ratio of fat to water.
In addition, rancidification can be decreased by storing fats and oils in a cool, dark place with little exposure to oxygen or free radicals, since heat and light accelerate the rate of reaction of fats with oxygen. Antimicrobial agents can also delay or prevent rancidification by inhibiting the growth of bacteria or other micro-organisms that affect the process. [ 1 ]
Oxygen scavenging technology can be used to remove oxygen from food packaging and therefore prevent oxidative rancidification.
Oxidative stability is a measure of oil or fat resistance to oxidation. Because the process takes place through a chain reaction , the oxidation reaction has a period when it is relatively slow, before it suddenly speeds up. The time for this to happen is called the "induction time", and it is repeatable under identical conditions (temperature, air flow, etc.). There are a number of ways to measure the progress of the oxidation reaction. One of the most popular methods currently in use is the Rancimat method.
The Rancimat method is carried out using an air current at temperatures between 50 and 220 °C. The volatile oxidation products (largely formic acid [ 14 ] ) are carried by the air current into the measuring vessel, where they are absorbed (dissolve) in the measuring fluid ( distilled water ). By continuous measurement of the conductivity of this solution, oxidation curves can be generated. The cusp point of the oxidation curve (the point where a rapid rise in the conductivity starts) gives the induction time of the rancidification reaction, [ 15 ] and can be taken as an indication of the oxidative stability of the sample.
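As an illustration of how the induction time can be extracted from a conductivity curve, the following sketch (my own construction on synthetic data, not from the source) uses a common tangent-intersection recipe: it intersects the baseline tangent with the tangent at the point of steepest rise to locate the cusp point.

```python
import numpy as np

# Synthetic Rancimat-style conductivity curve: flat baseline, then a rapid rise
t = np.linspace(0.0, 12.0, 1201)                       # time in hours
conductivity = 2.0 + 0.05 * t + 30.0 / (1.0 + np.exp(-(t - 8.0) / 0.5))

# Baseline tangent: fit the early, pre-oxidation part of the curve
base_slope, base_icpt = np.polyfit(t[t < 4.0], conductivity[t < 4.0], 1)

# Tangent at the steepest point of the rise
d = np.gradient(conductivity, t)
i = int(np.argmax(d))
rise_slope = d[i]
rise_icpt = conductivity[i] - rise_slope * t[i]

# Induction time = intersection of the two tangents (the "cusp point")
induction_time = (base_icpt - rise_icpt) / (rise_slope - base_slope)
print(f"estimated induction time: {induction_time:.2f} h")   # roughly 7 h for this synthetic curve
```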
The Rancimat method, the oxidative stability instrument (OSI) and the oxidograph were all developed as automatic versions of the more complicated AOM (active oxygen method), which is based on measuring peroxide values [ 15 ] for determining the induction time of fats and oils. Over time, the Rancimat method has become established, and it has been accepted into a number of national and international standards, for example AOCS Cd 12b-92 and ISO 6886. | https://en.wikipedia.org/wiki/Rancidification |
The Randall–Selitto test or paw pressure test is a technique for the measurement of the pain response in animals. It is used in basic pain research and to test the effectiveness of analgesics by observing the reaction to gradually increasing pressure on an inflamed paw. [ 1 ] [ 2 ] Pain is deemed to be present if the animal starts to exhibit the flight or struggle response. [ 3 ]
Randall and Selitto exploited the fact that inflammation increases pain sensitivity and this sensitivity is modifiable by analgesics. The inflammation may be induced by injecting a dry yeast suspension into the underside of the hind limb. [ 3 ]
| https://en.wikipedia.org/wiki/Randall–Selitto_test |
In physics , Randall–Sundrum models (RS) (also called 5-dimensional warped geometry theory ) are models that describe the world in terms of a warped-geometry higher-dimensional universe , or more concretely as a 5-dimensional anti-de Sitter space where the elementary particles (except the graviton ) are localized on a (3 + 1)- dimensional brane or branes.
The two models were proposed in two articles in 1999 by Lisa Randall and Raman Sundrum because they were dissatisfied with the universal extra-dimensional models then in vogue. Such models require two fine tunings; one for the value of the bulk cosmological constant and the other for the brane tensions . Later, while studying RS models in the context of the anti-de Sitter / conformal field theory (AdS/CFT) correspondence , they showed how it can be dual to technicolor models .
The first of the two models, called RS1 , has a finite size for the extra dimension with two branes, one at each end. [ 1 ] The second, RS2 , is similar to the first, but one brane has been placed infinitely far away, so that there is only one brane left in the model. [ 2 ]
The model is a braneworld theory developed while trying to solve the hierarchy problem of the Standard Model . It involves a finite five-dimensional bulk that is extremely warped and contains two branes : the Planckbrane (where gravity is a relatively strong force; also called "Gravitybrane") and the Tevbrane (our home with the Standard Model particles; also called "Weakbrane"). In this model, the two branes are separated in the not-necessarily large fifth dimension by approximately 16 units (the units based on the brane and bulk energies). The Planckbrane has positive brane energy, and the Tevbrane has negative brane energy. These energies are the cause of the extremely warped spacetime .
In this warped spacetime that is warped only along the fifth dimension, the graviton 's probability function is extremely high at the Planckbrane, but it drops exponentially as it moves closer towards the Tevbrane. As a result, gravity would be much weaker on the Tevbrane than on the Planckbrane.
The RS1 model attempts to address the hierarchy problem . The warping of the extra dimension is analogous to the warping of spacetime in the vicinity of a massive object, such as a black hole . This warping, or red-shifting, generates a large ratio of energy scales, so that the natural energy scale at one end of the extra dimension is much larger than at the other end:
where k is some constant, and η has "−+++" metric signature . This space has boundaries at y = 1/ k and y = 1/( Wk ), with 0 ≤ 1 / k ≤ 1 / ( W k ) {\displaystyle 0\leq 1/k\leq 1/(Wk)} , where k is around the Planck scale , W is the warp factor, and Wk is around a TeV . The boundary at y = 1/ k is called the Planck brane , and the boundary at y = 1/( Wk ) is called the TeV brane . The particles of the Standard Model reside on the TeV brane. The distance between both branes is only −ln( W )/ k , though.
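A back-of-the-envelope numerical sketch of the hierarchy generated by the warping (illustrative only; the exponential form W = e^{−kπr_c} is the usual textbook parameterisation, and the numbers are assumptions, not values from this article):

```python
import math

M_planck_GeV = 1.2e19       # illustrative Planck scale
target_TeV_GeV = 1.0e3      # illustrative electroweak / TeV scale

# Warp suppression of mass scales between the Planck brane and the TeV brane,
# using the usual exponential parameterisation W = exp(-k * pi * r_c).
k_rc_needed = math.log(M_planck_GeV / target_TeV_GeV) / math.pi
print(f"k * r_c of about {k_rc_needed:.1f} suppresses {M_planck_GeV:.1e} GeV "
      f"down to {M_planck_GeV * math.exp(-math.pi * k_rc_needed):.1e} GeV")
# A modestly sized extra dimension (k * r_c ~ 12) thus produces a 10^16 hierarchy.
```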
In another coordinate system ,
so that
and
The RS2 model uses the same geometry as RS1, but there is no TeV brane. The particles of the standard model are presumed to be on the Planck brane. This model was originally of interest because it represented an infinite 5-dimensional model, which, in many respects, behaved as a 4-dimensional model. This setup may also be of interest for studies of the AdS/CFT conjecture.
In 1998/99 Merab Gogberashvili published on arXiv a number of articles on a very similar theme. [ 3 ] [ 4 ] [ 5 ] He showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then there is a possibility to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that four-dimensionality of the Universe is the result of stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
In August 2016, experimental results from the LHC excluded RS gravitons with masses below 3.85 and 4.45 TeV for ˜k = 0.1 and 0.2 respectively, and, for ˜k = 0.01, graviton masses below 1.95 TeV, except for the region between 1.75 TeV and 1.85 TeV. These are currently the most stringent limits on RS graviton production. [ 6 ]
The Randić index , also known as the connectivity index , of a graph is the sum of bond contributions 1 / ( d i d j ) 1 / 2 {\displaystyle 1/(d_{i}d_{j})^{1/2}} where d i {\displaystyle d_{i}} and d j {\displaystyle d_{j}} are the degrees of the vertices making bond i ~ j .
This graph invariant was introduced by Milan Randić in 1975. [ 1 ] It is often used in chemoinformatics for investigations of organic compounds .
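A minimal sketch computing the Randić index from an edge list (my own illustrative code; the example graph, the path on four vertices corresponding to the carbon skeleton of n-butane, is chosen only for illustration):

```python
from math import sqrt

def randic_index(edges):
    """Randic (connectivity) index: sum over edges of 1 / sqrt(d_i * d_j)."""
    degree = {}
    for i, j in edges:
        degree[i] = degree.get(i, 0) + 1
        degree[j] = degree.get(j, 0) + 1
    return sum(1.0 / sqrt(degree[i] * degree[j]) for i, j in edges)

# Path graph on 4 vertices (degrees 1, 2, 2, 1)
print(randic_index([(1, 2), (2, 3), (3, 4)]))   # 2/sqrt(2) + 1/2, approximately 1.914
```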
| https://en.wikipedia.org/wiki/Randić's_molecular_connectivity_index |
The Randle cycle , also known as the glucose fatty-acid cycle , is a metabolic process involving the cross inhibition of glucose and fatty acids for substrates . [ 1 ] It is theorized to play a role in explaining type 2 diabetes and insulin resistance . [ 2 ] [ 3 ]
It was named for Philip Randle , who described it in 1963. [ 4 ]
The Randle cycle is a biochemical mechanism involving the competition between glucose and fatty acids for their oxidation and uptake in muscle and adipose tissue . The cycle controls fuel selection and adapts the substrate supply and demand in normal tissues. This cycle adds a nutrient-mediated fine tuning on top of the more coarse hormonal control on fuel metabolism. This adaptation to nutrient availability applies to the interaction between adipose tissue and muscle. Hormones that control adipose tissue lipolysis affect circulating concentrations of fatty acids; these in turn control the fuel selection in muscle. Mechanisms involved in the Randle Cycle include allosteric control, reversible phosphorylation and the expression of key enzymes. [ 5 ] The energy balance from meals composed of differing macronutrient composition is identical, but the glucose and fat balances that contribute to the overall energy balance change reciprocally with meal composition. [ 6 ]
When fasting, the activation of lipolysis provides fatty acids as the preferred fuel source for respiration. In the liver β-oxidation of fatty acids fulfills the local energy needs and may lead to ketogenesis (creating ketone bodies out of fatty acids.) The ketone bodies are then used to meet the demands of tissues other than the liver. This inhibition of glucose oxidation at the level of pyruvate dehydrogenase preserves pyruvate and lactate , both of which are gluconeogenic precursors. [ 5 ]
The glucose fatty acid cycle is also observed in the fed state after a high-fat meal or during exercise. This is when plasma concentrations of fatty acids or ketone bodies are increased. The glucose that is not oxidized is then rerouted to glycogen . This rerouting to glycogen explains the rapid resynthesis of muscle glycogen after exercise as well as the increased glycogen content in muscles found in starvation or diabetes. This mechanism replenishes the intermediates of the citric acid cycle . [ 5 ]
The impairment of glucose metabolism by fatty acid oxidation is mediated by the short-term inhibition of several glycolytic processes. The extent of inhibition increases along the glycolytic pathway, being most severe at the level of pyruvate dehydrogenase and less severe at the level of glucose uptake and 6-phosphofructo-1-kinase ( PFK-1 ). [ 5 ] This sequence occurs because the initial event, triggered by fatty acid oxidation, is an increase in the mitochondrial ratios of [acetyl-CoA]/[CoA] and [NADH]/[NAD+]. These both serve to inhibit pyruvate dehydrogenase activity. [ 7 ] It has been proposed that these changes lead to an accumulation of cytosolic citrate, which in turn inhibits PFK-1, followed by an increase in glucose 6-phosphate, which eventually inhibits hexokinase. [ 5 ]
Hemodynamic stress overrides fatty acid inhibition of glucose metabolism. During this time there is a decrease in substrate supply and an increase in the substrate demand. This leads to an activation of AMP-activated protein kinase (AMPK) as the AMP concentration rises in intracellular fluids and the ATP concentration decreases. The stress-induced activation of AMPK provides an immediate metabolic adaption and protects the heart from ischemic stress. [ 5 ] [ 8 ] [ 9 ]
Malonyl-CoA signals glucose utilization and it controls the entry and oxidation of long-chain fatty acids (LCFA) in the mitochondria . Circulating glucose in the liver stimulates its uptake. Glucose oxidation produces citrate which can be converted to malonyl-CoA by acetyl-CoA carboxylase. Malonyl-CoA inhibits the carnitine palmitoyltransferase (CPT) that controls the entry and oxidation of LCFA. The glucose-derived malonyl-CoA prevents the oxidation of fatty acids and favors fatty acid esterification. [ 4 ] [ 5 ]
The concentration of malonyl-CoA depends on the balance between acetyl-CoA carboxylase (ACC) and malonyl-CoA decarboxylase (MCD). AMP-activated protein kinase (AMPK) is reported to phosphorylate and inactivate liver ACC. This in turn decreases malonyl-CoA concentrations which stimulates fatty acid oxidation and ketogenesis by glucagon in the liver. AMPK phosphorylates and inactivates ACC in the liver and other tissues. [ 4 ] [ 5 ]
Inhibition of fatty acid oxidation requires that ACC be active while both AMPK and MCD are inactive and glucose uptake is stimulated. The LCFAs are then rerouted to esterification. [ 10 ] These conditions exist in tissues rich in oxygen, in which AMPK is inactive and glucose inactivates the AMPK (researched in skeletal muscle). [ 11 ]
The inhibition of MCD suppresses the oxidation of fatty acids and stimulates glucose oxidation. In a study on MCD deficient mice there was no difference in the oxidation of fatty acids and glucose in the heart under aerobic conditions. It is theorized that the overexpression of fatty acids being used makes up for the lack of MCD. [ 12 ]
Long chain fatty acid uptake is mediated by several transporters, including FAT (fatty acid translocase)/CD36. CD36 deletion rescues lipotoxic cardiomyopathy. FAT/CD36 may be controlled by insulin and AMPK. Increased transport coupled to the formation of the CoA derivatives and the resulting AMPK activation should ensure efficient fatty acid uptake and metabolism. [ 5 ]
Fatty acids are preferentially oxidized because of the inactivation of PDH by fatty acid oxidation inhibiting glucose oxidation. This suggests that mitochondrial metabolism may control fuel selection. Cellular respiration is stimulated by fatty acids and this relates to an increase in the mitochondrial NADH to NAD+ ratio, suggesting that energy provision overtakes energy consumption. Switching from glucose to fatty acid oxidation leads to a bigger proportion of electrons being transported to complex 2 rather than complex 1 of the respiratory chain. This difference leads to a less efficient oxidative phosphorylation. By oxidizing fatty acids, mitochondria increase their respiration while increasing the production of ROS. [ 5 ]
Fatty acids may act directly upon the pancreatic β-cell to regulate glucose-stimulated insulin secretion. This effect is biphasic. Initially fatty acids potentiate the effects of glucose. After prolonged exposure to high fatty acid concentrations this changes to an inhibition. [ 13 ] Randle suggested that the term fatty acid syndrome would be appropriate to apply to the biochemical syndrome resulting from the high concentration of fatty acids and the relationship to abnormalities of carbohydrate metabolism, including starvation, diabetes and Cushing’s syndrome . [ 4 ] | https://en.wikipedia.org/wiki/Randle_cycle |
In electrochemistry , the Randles–Ševčík equation describes the effect of scan rate on the peak current ( i p ) for a cyclic voltammetry experiment. For simple redox events where the reaction is electrochemically reversible, and the products and reactants are both soluble, such as the ferrocene / ferrocenium couple, i p depends not only on the concentration and diffusional properties of the electroactive species but also on scan rate. [ 1 ]
i p = 0.4463 n F A C ( n F v D R T ) 1 / 2 {\displaystyle i_{p}=0.4463\,nFAC\left({\frac {nFvD}{RT}}\right)^{1/2}} where i p is the peak current, n is the number of electrons transferred in the redox event, F is the Faraday constant , A is the electrode area, C is the bulk concentration of the analyte, D is its diffusion coefficient, v is the scan rate, R is the gas constant and T is the temperature. Or if the solution is at 25 °C: [ 2 ] i p = ( 2.69 × 10 5 ) n 3 / 2 A D 1 / 2 C v 1 / 2 {\displaystyle i_{p}=(2.69\times 10^{5})\,n^{3/2}AD^{1/2}Cv^{1/2}}
For novices in electrochemistry, the predictions of this equation appear counter-intuitive, i.e. that i p increases at faster voltage scan rates. It is important to remember that current, i, is charge (or electrons passed) per unit time. In cyclic voltammetry, the current passing through the electrode is limited by the diffusion of species to the electrode surface. This diffusion flux is influenced by the concentration gradient near the electrode. The concentration gradient, in turn, is affected by the concentration of species at the electrode, and how fast the species can diffuse through solution. By changing the cell voltage, the concentration of the species at the electrode surface is also changed, as set by the Nernst equation . Therefore, a faster voltage sweep causes a larger concentration gradient near the electrode, resulting in a higher current.
This equation is derived using the following governing equations and initial/boundary conditions:
∂ C O ∂ t = D O ∂ 2 C O ∂ x 2 {\displaystyle {\frac {\partial C_{O}}{\partial t}}=D_{O}{\frac {\partial ^{2}C_{O}}{\partial x^{2}}}}
C O ( x , 0 ) = C O ∗ {\displaystyle C_{O}(x,0)=C_{O}^{*}}
lim x → ∞ C O ( x , t ) = C O ∗ {\displaystyle \lim _{x\rightarrow \infty }C_{O}(x,t)=C_{O}^{*}}
∂ C R ∂ t = D R ∂ 2 C R ∂ x 2 {\displaystyle {\frac {\partial C_{R}}{\partial t}}=D_{R}{\frac {\partial ^{2}C_{R}}{\partial x^{2}}}}
C R ( x , 0 ) = C R ∗ {\displaystyle C_{R}(x,0)=C_{R}^{*}}
lim x → ∞ C R ( x , t ) = C R ∗ {\displaystyle \lim _{x\rightarrow \infty }C_{R}(x,t)=C_{R}^{*}}
D O ( ∂ C O ∂ x ) x = 0 + D R ( ∂ C R ∂ x ) x = 0 = 0 {\displaystyle D_{O}\left({\frac {\partial C_{O}}{\partial x}}\right)_{x=0}+D_{R}\left({\frac {\partial C_{R}}{\partial x}}\right)_{x=0}=0}
E = E i + v t = E 0 ′ + R T n F l n ( C O ( 0 , t ) C R ( 0 , t ) ) {\displaystyle E=E_{i}+vt=E^{0'}+{\frac {RT}{nF}}ln\left({\frac {C_{O}(0,t)}{C_{R}(0,t)}}\right)}
Using the relationships defined by this equation, the diffusion coefficient of the electroactive species can be determined. Linear plots of i p vs. ν 1/2 and peak potentials ( E p ) that are not dependent on ν provide evidence for an electrochemically reversible redox process. For species where the diffusion coefficient is known (or can be estimated), the slope of the plot of i p vs. ν 1/2 provides information into the stoichiometry of the redox process, the concentration of the analyte, the area of the electrode, etc.
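As an illustration of this procedure, the following sketch fits i p against ν 1/2 on fabricated data and recovers the diffusion coefficient from the slope, using the 25 °C form of the equation; every numerical input is an assumption made up for the example.

```python
import numpy as np

# Randles-Sevcik at 25 degC: i_p = 2.69e5 * n**1.5 * A * sqrt(D) * C * sqrt(v),
# with i_p in A, A in cm^2, D in cm^2/s, C in mol/cm^3 and v in V/s.
n_electrons = 1
area_cm2 = 0.071                 # assumed electrode area
conc_mol_cm3 = 5e-6              # 5 mM analyte expressed in mol/cm^3
D_true = 7.2e-6                  # "true" diffusion coefficient used to fabricate the data

scan_rates = np.array([0.01, 0.02, 0.05, 0.1, 0.2])                 # V/s
i_peak = 2.69e5 * n_electrons**1.5 * area_cm2 * np.sqrt(D_true) * conc_mol_cm3 * np.sqrt(scan_rates)
i_peak *= 1 + 0.01 * np.random.default_rng(0).standard_normal(len(scan_rates))  # mock noise

# Linear plot of i_p against sqrt(v): for a reversible couple, the slope yields D
slope = np.polyfit(np.sqrt(scan_rates), i_peak, 1)[0]
D_est = (slope / (2.69e5 * n_electrons**1.5 * area_cm2 * conc_mol_cm3)) ** 2
print(f"recovered D = {D_est:.2e} cm^2/s (input was {D_true:.2e})")
```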
A more general investigation method is the plot of the peak currents as a function of the scan rate on a logarithmically scaled x-axis. Deviations become easily detectable and the more general fit formula j p = j 0 + A v x {\displaystyle j_{p}=j_{0}+Av^{x}} can be used.
In this equation j 0 {\displaystyle j_{0}{}} is the current at zero scan rate at the equilibrium potential E 0 {\displaystyle E_{0}} . In the electrochemical lab experiment j 0 {\displaystyle j_{0}{}} may be small but can nowadays easily be monitored with modern equipment. For example, corrosion processes may lead to a non-vanishing but still detectable j 0 {\displaystyle j_{0}{}} . When j 0 << A {\displaystyle j_{0}<<A} and x is close to 0.5, a reaction mechanism according to Randles–Ševčík can be assigned.
An example for this kind of reaction mechanism is the redox reaction of F e 3 + / F e 2 + {\displaystyle \mathrm {Fe^{3+}/Fe^{2+}} } species as an analyte (concentration 5 mM for each species) in a highly concentrated (1 M) background solution of K N O 3 {\displaystyle \mathrm {KNO_{3}} } on a graphite electrode.
| https://en.wikipedia.org/wiki/Randles–Sevcik_equation |
In mathematics , the random Fibonacci sequence is a stochastic analogue of the Fibonacci sequence defined by the recurrence relation f n = f n − 1 ± f n − 2 {\displaystyle f_{n}=f_{n-1}\pm f_{n-2}} , where the signs + or − are chosen at random with equal probability 1 2 {\displaystyle {\tfrac {1}{2}}} , independently for different n {\displaystyle n} . By a theorem of Harry Kesten and Hillel Furstenberg , random recurrent sequences of this kind grow at a certain exponential rate , but it is difficult to compute the rate explicitly. In 1999, Divakar Viswanath showed that the growth rate of the random Fibonacci sequence is equal to 1.1319882487943... (sequence A078416 in the OEIS ), a mathematical constant that was later named Viswanath's constant. [ 1 ] [ 2 ] [ 3 ]
A random Fibonacci sequence is an integer random sequence given by the numbers f n {\displaystyle f_{n}} for natural numbers n {\displaystyle n} , where f 1 = f 2 = 1 {\displaystyle f_{1}=f_{2}=1} and the subsequent terms are chosen randomly according to the random recurrence relation f n = { f n − 1 + f n − 2 , with probability 1 2 ; f n − 1 − f n − 2 , with probability 1 2 . {\displaystyle f_{n}={\begin{cases}f_{n-1}+f_{n-2},&{\text{ with probability }}{\tfrac {1}{2}};\\f_{n-1}-f_{n-2},&{\text{ with probability }}{\tfrac {1}{2}}.\end{cases}}} An instance of the random Fibonacci sequence starts with 1,1 and the value of each subsequent term is determined by a fair coin toss: given two consecutive elements of the sequence, the next element is either their sum or their difference with probability 1/2, independently of all the choices made previously. If in the random Fibonacci sequence the plus sign is chosen at each step, the corresponding instance is the Fibonacci sequence ( F n ), 1 , 1 , 2 , 3 , 5 , 8 , 13 , 21 , 34 , 55 , … . {\displaystyle 1,1,2,3,5,8,13,21,34,55,\ldots .} If the signs alternate in minus-plus-plus-minus-plus-plus-... pattern, the result is the sequence 1 , 1 , 0 , 1 , 1 , 0 , 1 , 1 , 0 , 1 , … . {\displaystyle 1,1,0,1,1,0,1,1,0,1,\ldots .}
However, such patterns occur with vanishing probability in a random experiment. In a typical run, the terms will not follow a predictable pattern: 1 , 1 , 2 , 3 , 1 , − 2 , − 3 , − 5 , − 2 , − 3 , … for the signs + , + , + , − , − , + , − , − , … . {\displaystyle 1,1,2,3,1,-2,-3,-5,-2,-3,\ldots {\text{ for the signs }}+,+,+,-,-,+,-,-,\ldots .}
Similarly to the deterministic case, the random Fibonacci sequence may be profitably described via matrices:
$$\binom{f_{n-1}}{f_n} = \begin{pmatrix} 0 & 1 \\ \pm 1 & 1 \end{pmatrix} \binom{f_{n-2}}{f_{n-1}},$$
where the signs are chosen independently for different $n$ with equal probabilities for + or −. Thus
$$\binom{f_{n-1}}{f_n} = M_n M_{n-1} \ldots M_3 \binom{f_1}{f_2},$$
where $(M_k)$ is a sequence of independent identically distributed random matrices taking values $A$ or $B$ with probability 1/2:
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix}.$$
Johannes Kepler discovered that as $n$ increases, the ratio of successive terms of the Fibonacci sequence $(F_n)$ approaches the golden ratio $\varphi = (1+\sqrt{5})/2$, which is approximately 1.61803. In 1765, Leonhard Euler published an explicit formula, known today as the Binet formula,
$$F_n = \frac{\varphi^n - (-1/\varphi)^n}{\sqrt{5}}.$$
It demonstrates that the Fibonacci numbers grow at an exponential rate equal to the golden ratio $\varphi$.
In 1960, Hillel Furstenberg and Harry Kesten showed that for a general class of random matrix products, the norm grows as $\lambda^n$, where $n$ is the number of factors. Their results apply to a broad class of random sequence generating processes that includes the random Fibonacci sequence. As a consequence, the $n$th root of $|f_n|$ converges to a constant value almost surely, that is, with probability one:
$$\sqrt[n]{|f_n|} \to 1.1319882487943\dots \quad \text{as } n \to \infty.$$
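This convergence is easy to observe numerically. The following sketch (not taken from the references) computes the $n$th root of $|f_n|$ for a single run, relying on Python's arbitrary-precision integers; for moderate $n$ the estimate fluctuates around the limiting value from run to run.

```python
import math
import random

def random_fib_growth(n=10_000, seed=0):
    """Estimate |f_n|**(1/n) for one run of the random Fibonacci sequence."""
    rng = random.Random(seed)
    a, b = 1, 1  # f_1, f_2; Python ints never overflow
    for _ in range(n - 2):
        sign = 1 if rng.random() < 0.5 else -1
        a, b = b, b + sign * a
    if b == 0:           # possible in principle, though very unlikely at the end
        return 0.0
    return math.exp(math.log(abs(b)) / n)  # n-th root via logarithms

print(random_fib_growth())  # tends to 1.13198... as n grows
```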
An explicit expression for this constant was found by Divakar Viswanath in 1999. It uses Furstenberg's formula for the Lyapunov exponent of a random matrix product and integration over a certain fractal measure on the Stern–Brocot tree . Moreover, Viswanath computed the numerical value above using floating point arithmetic validated by an analysis of the rounding error .
Mark Embree and Nick Trefethen showed in 1999 that the sequence
$$f_n = \pm f_{n-1} \pm \beta f_{n-2}$$
decays almost surely if $\beta$ is less than a critical value $\beta^* \approx 0.70258$, known as the Embree–Trefethen constant, and otherwise grows almost surely. They also showed that the asymptotic ratio $\sigma(\beta)$ between consecutive terms converges almost surely for every value of $\beta$. The graph of $\sigma(\beta)$ appears to have a fractal structure, with a global minimum near $\beta_{\min} \approx 0.36747$, where $\sigma(\beta_{\min}) \approx 0.89517$. [4] | https://en.wikipedia.org/wiki/Random_Fibonacci_sequence |
Random amplified polymorphic DNA (RAPD), pronounced "rapid", [1] is a type of polymerase chain reaction (PCR) in which the segments of DNA that are amplified are random. [2] The scientist performing RAPD creates several arbitrary, short primers (10–12 nucleotides), then proceeds with the PCR using a large template of genomic DNA, hoping that fragments will amplify. By resolving the resulting patterns, a semi-unique profile can be gleaned from a RAPD reaction.
No knowledge of the DNA sequence of the targeted genome is required, as the primers will bind somewhere in the sequence, but it is not certain exactly where. This makes the method popular for comparing the DNA of biological systems that have not had the attention of the scientific community, or in a system in which relatively few DNA sequences are compared (it is not suitable for forming a cDNA databank). Because it relies on a large, intact DNA template sequence, it has some limitations in the use of degraded DNA samples. Its resolving power is much lower than targeted, species-specific DNA comparison methods, such as short tandem repeats . In recent years, RAPD has been used to characterize, and trace, the phylogeny of diverse plant and animal species.
RAPD markers are decamer (10-nucleotide) DNA fragments produced by PCR amplification of random segments of genomic DNA with a single primer of arbitrary nucleotide sequence; they are able to differentiate between genetically distinct individuals, although not necessarily in a reproducible way.
It is used to analyze the genetic diversity of an individual by using random primers. Due to problems with experimental reproducibility, many scientific journals no longer accept experiments based merely on RAPDs.
RAPD requires only one primer for amplification.
After amplification with PCR, samples are loaded into a gel (either agarose or polyacrylamide) for gel electrophoresis . The differing sizes created through random amplification will separate along the gel in a repeatable manner depending on the sample source. This creates a distinct DNA fingerprint.
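As a toy illustration of how such fingerprints arise, the sketch below predicts amplified fragments from primer binding positions, using the annealing rules detailed in the next paragraph. All sequences are randomly generated stand-ins, the 3 kb size cutoff is an arbitrary assumption, and a 6-mer primer is used (real RAPD primers are ~10-mers) so that exact matches occur in a genome this small.

```python
import random

rng = random.Random(0)
BASES = "ACGT"
genome = "".join(rng.choice(BASES) for _ in range(100_000))
primer = "".join(rng.choice(BASES) for _ in range(6))  # arbitrary short primer

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def sites(seq, pat):
    """Start positions where pat matches seq exactly."""
    return [i for i in range(len(seq) - len(pat) + 1)
            if seq[i:i + len(pat)] == pat]

fwd = sites(genome, primer)           # primer annealed on the plus strand
rev = sites(genome, revcomp(primer))  # primer annealed on the minus strand

# A fragment is predicted only when a plus-strand site lies upstream of a
# minus-strand site (3' ends facing each other) within an amplifiable
# distance -- here an arbitrary 3 kb cutoff
fragments = [(f, r + len(primer)) for f in fwd for r in rev if 0 < r - f <= 3000]
print(f"{len(fwd)} forward sites, {len(rev)} reverse sites, "
      f"{len(fragments)} predicted fragments")
```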
Unlike traditional PCR analysis, RAPD does not require any specific knowledge of the DNA sequence of the target organism: the identical 10-mer primers will or will not amplify a segment of DNA, depending on positions that are complementary to the primers' sequence. For example, no fragment is produced if the primers anneal too far apart or if the 3' ends of the primers do not face each other. Therefore, if a mutation has occurred in the template DNA at a site that was previously complementary to the primer, a PCR product will not be produced, resulting in a different pattern of amplified DNA segments on the gel. | https://en.wikipedia.org/wiki/Random_amplification_of_polymorphic_DNA |
Random chimeragenesis on transient templates (RACHITT) is a method to perform molecular mutagenesis at a high recombination rate. [ 1 ] For example, RACHITT can be used to generate increased rate and extent of biodesulfurization of diesel by modification of dibenzothiophene mono-oxygenase. DNA shuffling is a similar but less powerful method used in directed evolution experiments.
| https://en.wikipedia.org/wiki/Random_chimeragenesis_on_transient_templates |
In statistical mechanics, probability theory, and graph theory, the random cluster model is a random graph that generalizes and unifies the Ising model, Potts model, and percolation model. It is used to study random combinatorial structures, electrical networks, etc. [1][2] It is also referred to as the RC model or sometimes the FK representation after its founders Cees Fortuin and Piet Kasteleyn. [3] The random cluster model has a critical limit, described by a conformal field theory.
Let $G = (V, E)$ be a graph, and $\omega : E \to \{0, 1\}$ be a bond configuration on the graph that maps each edge to a value of either 0 or 1. We say that a bond is closed on edge $e \in E$ if $\omega(e) = 0$, and open if $\omega(e) = 1$. If we let $A(\omega) = \{e \in E : \omega(e) = 1\}$ be the set of open bonds, then an open cluster or FK cluster is any connected component of the graph $(V, A(\omega))$. Note that an open cluster can be a single vertex (if that vertex is not incident to any open bonds).
Suppose an edge is open independently with probability $p$ and closed otherwise; then this is just the standard Bernoulli percolation process, and the probability measure of a configuration $\omega$ is given as
$$\mu(\omega) = \prod_{e \in E} p^{\omega(e)} (1-p)^{1-\omega(e)}.$$
The RC model is a generalization of percolation, where each cluster is weighted by a factor of $q$. Given a configuration $\omega$, we let $C(\omega)$ be the number of open clusters, or alternatively the number of connected components formed by the open bonds. Then for any $q > 0$, the probability measure of a configuration $\omega$ is given as
$$\mu(\omega) = \frac{q^{C(\omega)}}{Z} \prod_{e \in E} p^{\omega(e)} (1-p)^{1-\omega(e)},$$
where $Z$ is the partition function, i.e. the sum of the unnormalized weights over all configurations,
$$Z = \sum_{\omega \in \{0,1\}^E} q^{C(\omega)} \prod_{e \in E} p^{\omega(e)} (1-p)^{1-\omega(e)}.$$
The partition function of the RC model is a specialization of the Tutte polynomial , which itself is a specialization of the multivariate Tutte polynomial. [ 4 ]
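For very small graphs, $Z$ can be evaluated by brute-force enumeration of all bond configurations. The sketch below does this (the union-find helper and the example graph are illustrative choices) and checks the $q = 1$ special case, where the weights reduce to Bernoulli percolation probabilities and $Z = 1$.

```python
from itertools import product

def rc_partition_function(vertices, edges, p, q):
    """Brute-force Z = sum over bond configurations w of
    q**C(w) * p**(open bonds) * (1-p)**(closed bonds)."""
    Z = 0.0
    for omega in product((0, 1), repeat=len(edges)):
        parent = {v: v for v in vertices}   # union-find for open clusters

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        for bit, (u, v) in zip(omega, edges):
            if bit:
                parent[find(u)] = find(v)
        clusters = len({find(v) for v in vertices})
        n_open = sum(omega)
        Z += q**clusters * p**n_open * (1 - p)**(len(edges) - n_open)
    return Z

# Sanity check on a 4-cycle: at q = 1 the weights sum to 1
print(rc_partition_function(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], 0.5, 1.0))
```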
The parameter q {\displaystyle q} of the random cluster model can take arbitrary complex values. This includes the following special cases:
The Edwards-Sokal (ES) representation [ 5 ] of the Potts model is named after Robert G. Edwards and Alan D. Sokal . It provides a unified representation of the Potts and random cluster models in terms of a joint distribution of spin and bond configurations.
Let $G = (V, E)$ be a graph, with the number of vertices being $n = |V|$ and the number of edges being $m = |E|$. We denote a spin configuration as $\sigma \in \mathbb{Z}_q^n$ and a bond configuration as $\omega \in \{0,1\}^m$. The joint measure of $(\sigma, \omega)$ is given as
$$\mu(\sigma, \omega) = Z^{-1}\, \psi(\sigma)\, \phi_p(\omega)\, 1_A(\sigma, \omega),$$
where $\psi$ is the uniform measure, $\phi_p$ is the product measure with density $p = 1 - e^{-\beta}$, and $Z$ is an appropriate normalizing constant. Importantly, the indicator function $1_A$ of the set
$$A = \{(\sigma, \omega) : \sigma_i = \sigma_j \text{ for every edge } (i,j) \text{ with } \omega(i,j) = 1\}$$
enforces the constraint that a bond can only be open on an edge if the adjacent spins are of the same state, also known as the SW rule.
The statistics of the Potts spins can be recovered from the cluster statistics (and vice versa), thanks to the following features of the ES representation: [ 2 ]
There are several complications of the ES representation once frustration is present in the spin model (e.g. the Ising model with both ferromagnetic and anti-ferromagnetic couplings in the same lattice). In particular, there is no longer a correspondence between the spin statistics and the cluster statistics, [ 7 ] and the correlation length of the RC model will be greater than the correlation length of the spin model. This is the reason behind the inefficiency of the SW algorithm for simulating frustrated systems.
If the underlying graph $G$ is a planar graph, there is a duality between the random cluster models on $G$ and on the dual graph $G^*$. [8] At the level of the partition function, the duality relates the model on $G$ at coupling $v = p/(1-p)$ to the model on $G^*$ at the dual coupling $v^* = q/v$.
On a self-dual graph such as the square lattice, a phase transition can only occur at the self-dual coupling $v_{\text{self-dual}} = \sqrt{q}$. [9]
The random cluster model on a planar graph can be reformulated as a loop model on the corresponding medial graph. For a configuration $\omega$ of the random cluster model, the corresponding loop configuration is the set of self-avoiding loops that separate the clusters from the dual clusters. In the transfer matrix approach, the loop model is written in terms of a Temperley–Lieb algebra with the parameter $\delta = q + q^{-1}$. In two dimensions, the random cluster model is therefore closely related to the O(n) model, which is also a loop model.
In two dimensions, the critical random cluster model is described by a conformal field theory whose central charge varies continuously with $q$.
Known exact results include the conformal dimensions of the fields that detect whether a point belongs to an FK cluster or a spin cluster. In terms of Kac indices, these conformal dimensions are respectively $2h_{0,\frac{1}{2}}$ and $2h_{\frac{1}{2},0}$, corresponding to the fractal dimensions $2 - 2h_{0,\frac{1}{2}}$ and $2 - 2h_{\frac{1}{2},0}$ of the clusters.
RC models were introduced in 1969 by Fortuin and Kasteleyn, mainly to solve combinatorial problems. [1][10][6] After their founders, they are sometimes referred to as FK models. [3] In 1971 they used the model to obtain the FKG inequality. After 1987, interest in the model and its applications in statistical physics was reignited. It became the inspiration for the Swendsen–Wang algorithm describing the time evolution of Potts models. [11] Michael Aizenman and coauthors used it to study the phase boundaries in 1D Ising and Potts models. [12][10] | https://en.wikipedia.org/wiki/Random_cluster_model |
In polymer chemistry , a random coil is a conformation of polymers where the monomer subunits are oriented randomly while still being bonded to adjacent units. It is not one specific shape, but a statistical distribution of shapes for all the chains in a population of macromolecules . The conformation's name is derived from the idea that, in the absence of specific, stabilizing interactions, a polymer backbone will "sample" all possible conformations randomly. Many unbranched , linear homopolymers — in solution , or above their melting temperatures — assume (approximate) random coils.
There are an enormous number of different ways in which a chain can be curled around in a relatively compact shape, like an unraveling ball of twine with much open space , and comparatively few ways it can be more or less stretched out. So, if each conformation has an equal probability or statistical weight, chains are much more likely to be ball-like than they are to be extended — a purely entropic effect. In an ensemble of chains, most of them will, therefore, be loosely balled up . This is the kind of shape any one of them will have most of the time.
Consider a linear polymer to be a freely-jointed chain with $N$ subunits, each of length $\ell$, that occupy zero volume, so that no part of the chain excludes another from any location. One can regard the segments of each such chain in an ensemble as performing a random walk (or "random flight") in three dimensions, limited only by the constraint that each segment must be joined to its neighbors. This is the ideal chain mathematical model. It is clear that the maximum, fully extended length $L$ of the chain is $N\ell$. If we assume that each possible chain conformation has an equal statistical weight, it can be shown that the probability $P(r)$ of a polymer chain in the population having distance $r$ between its ends obeys a characteristic distribution described by the formula
$$P(r) = 4\pi r^2 \left( \frac{3}{2\pi \langle r^2 \rangle} \right)^{3/2} \exp\!\left( -\frac{3r^2}{2\langle r^2 \rangle} \right),$$
where $\langle r^2 \rangle$ is the mean of $r^2$.
The average (root mean square) end-to-end distance for the chain, $\sqrt{\langle r^2 \rangle}$, turns out to be $\ell$ times the square root of $N$; in other words, the average distance scales as $N^{0.5}$.
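This scaling is easy to check numerically. The sketch below samples freely-jointed chains as sums of random unit vectors and confirms that the RMS end-to-end distance is close to $\ell\sqrt{N}$; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def freely_jointed_chain(N, ell=1.0, samples=2000):
    """Sample end-to-end distances of freely-jointed chains of N segments."""
    # Uniform random unit vectors: normalize Gaussian triples
    steps = rng.normal(size=(samples, N, 3))
    steps /= np.linalg.norm(steps, axis=2, keepdims=True)
    ends = ell * steps.sum(axis=1)          # end-to-end vectors
    return np.linalg.norm(ends, axis=1)     # end-to-end distances

r = freely_jointed_chain(N=1000)
print(np.sqrt((r**2).mean()))  # ~ sqrt(1000) = 31.6, i.e. ell * N**0.5
```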
A real polymer is not freely-jointed; a −C−C− single bond has a fixed tetrahedral angle of 109.5 degrees. The value of $L$ is well-defined for, say, fully extended polyethylene or nylon, but it is less than $N\ell$ because of the zig-zag backbone. There is, however, free rotation about many chain bonds, so the model above can be enhanced: a longer, "effective" unit length can be defined such that the chain can be regarded as freely-jointed, along with a smaller $N$, such that the constraint $L = N\ell$ is still obeyed. This, too, gives a Gaussian distribution. However, specific cases can also be precisely calculated. The average end-to-end distance for freely-rotating (not freely-jointed) polymethylene (polyethylene with each −C−C− considered as a subunit) is $\ell$ times the square root of $2N$, an increase by a factor of about 1.4. Unlike the zero volume assumed in a random walk calculation, all real polymers' segments occupy space because of the van der Waals radii of their atoms, including bulky substituent groups that interfere with bond rotations. This can also be taken into account in calculations. All such effects increase the mean end-to-end distance.
Because their polymerization is stochastically driven, chain lengths in any real population of synthetic polymers will obey a statistical distribution. In that case, we should take N to be an average value. Also, many polymers have random branching.
Even with corrections for local constraints, the random walk model ignores steric interference between chains, and between distal parts of the same chain. A chain often cannot move from a given conformation to a closely related one by a small displacement because one part of it would have to pass through another part, or through a neighbor. We may still hope that the ideal-chain, random-coil model will be at least a qualitative indication of the shapes and dimensions of real polymers in solution , and in the amorphous state, as long as there are only weak physicochemical interactions between the monomers. This model, and the Flory-Huggins Solution Theory , [ 1 ] [ 2 ] for which Paul Flory received the Nobel Prize in Chemistry in 1974, ostensibly apply only to ideal, dilute solutions . But there is reason to believe (e.g., neutron diffraction studies) that excluded volume effects may cancel out, so that, under certain conditions, chain dimensions in amorphous polymers have approximately the ideal, calculated size [ 3 ] When separate chains interact cooperatively, as in forming crystalline regions in solid thermoplastics, a different mathematical approach must be used.
Stiffer polymers such as helical polypeptides, Kevlar , and double-stranded DNA can be treated by the worm-like chain model.
Even copolymers with monomers of unequal length will distribute in random coils if the subunits lack any specific interactions. The parts of branched polymers may also assume random coils.
Below their melting temperatures, most thermoplastic polymers ( polyethylene , nylon , etc.) have amorphous regions in which the chains approximate random coils, alternating with regions that are crystalline . The amorphous regions contribute elasticity and the crystalline regions contribute strength and rigidity .
More complex polymers such as proteins, with various interacting chemical groups attached to their backbones, self-assemble into well-defined structures. But segments of proteins, and polypeptides that lack secondary structure, are often assumed to exhibit a random-coil conformation in which the only fixed relationship is the joining of adjacent amino acid residues by a peptide bond. This is not actually the case, since the ensemble will be energy-weighted due to interactions between amino acid side-chains, with lower-energy conformations being present more frequently. In addition, even arbitrary sequences of amino acids tend to exhibit some hydrogen bonding and secondary structure. For this reason, the term "statistical coil" is occasionally preferred. The conformational entropy of the random coil stabilizes the unfolded protein state and represents the main free-energy contribution that opposes protein folding.
A random-coil conformation can be detected using spectroscopic techniques. The arrangement of the planar amide bonds results in a distinctive signal in circular dichroism. The chemical shift of amino acids in a random-coil conformation is well known in nuclear magnetic resonance (NMR). Deviations from these signatures often indicate the presence of some secondary structure, rather than complete random coil. Furthermore, there are signals in multidimensional NMR experiments that indicate that stable, non-local amino acid interactions are absent for polypeptides in a random-coil conformation. Likewise, in the images produced by crystallography experiments, segments of random coil result simply in a reduction in "electron density" or contrast. A randomly coiled state for any polypeptide chain can be attained by denaturing the system. However, there is evidence that proteins are never truly random coils, even when denatured (Shortle & Ackerman). | https://en.wikipedia.org/wiki/Random_coil |
Random coil index (RCI) predicts protein flexibility by calculating an inverse weighted average of backbone secondary chemical shifts and predicting values of model-free order parameters as well as per-residue RMSD of NMR and molecular dynamics ensembles from this parameter. [ 1 ]
This protocol has several key advantages over existing methods of studying protein flexibility.
The application of secondary chemical shifts to characterize protein flexibility is based on an assumption that the proximity of chemical shifts to random coil values is a manifestation of increased protein mobility, while significant differences from random coil values are an indication of a relatively rigid structure. [ 1 ]
Even though chemical shifts of rigid residues may adopt random coil values as a result of comparable contributions of shielding and deshielding effects (e.g. from torsion angles, hydrogen bonds, ring currents, etc.), combining the chemical shifts from multiple nuclei into a single parameter allows one to decrease the influence of these flexibility false positives. The improved performance originates from the different probabilities of random coil chemical shifts from different nuclei being found among amino acid residues in flexible regions versus rigid regions. Typically, residues in rigid helices or rigid beta-strands are less likely to have more than one random coil chemical shift among their backbone shifts than residues in mobile regions. [ 2 ]
The actual calculation of the RCI involves several additional steps, including smoothing of secondary shifts over several adjacent residues, neighboring-residue corrections, chemical shift re-referencing, gap filling, chemical shift scaling, and numeric adjustments to prevent divide-by-zero problems. 13C, 15N and 1H secondary chemical shifts are then scaled to account for the characteristic resonance frequencies of these nuclei and to provide numeric consistency among different parts of the protocol. Once these scaling corrections have been done, the RCI is calculated. The "end-effect correction" can also be applied at this point. The last step of the protocol involves smoothing the initial set of RCI values by three-point averaging. [3][4] | https://en.wikipedia.org/wiki/Random_coil_index |
Random column packing is the practice of packing a distillation column with randomly fitting filtration material in order to optimize surface area over which reactants can interact while minimizing the complexity of construction of such columns. Random column packing is an alternative to structured column packing .
Packed columns utilizing filter media for chemical exchange are the most common devices used in the chemical industry for reactant contact optimization. Packed columns are used in a range of industries to allow intimate contact between two immiscible/partly immiscible fluids, which can be liquid/gas or liquid/liquid. The fluids are passed through a column in a countercurrent flow .
In the column it is important to maintain an effective mass transfer , so it is essential that a packing is selected which will support a large surface area for mass transfer . [ 1 ]
Random packing was used as early as 1820. Originally the packing material consisted of glass spheres, however in 1850 they were replaced by a more porous pumice stone and pieces of coke .
Random packed columns are used in a variety of applications, including:
The Raschig ring is a piece of tube, invented circa 1914, [ 2 ] that is used in large numbers in a packing column. Raschig rings are usually made of ceramic or metals, and they provide a large surface area within the column, allowing for interaction between liquid and gas vapors.
Lessing rings are a type of random packing similar to the Raschig ring invented in the early 20th century by German-born British chemist Rudolf Lessing (1878-1964) of Mond Nickel Company . [ 3 ] Originally wrapped from steel strips according to his 1919 patent, [ 4 ] now they are made of ceramic. Lessing rings have partitions insides which increase the surface area and enhance mass transfer efficiency. Lessing rings have a high density and an excellent heat and acid resistance. Lessing rings withstand corrosion and are used in regenerative oxide systems and transfer systems.
Pall rings are the most common form of random packing. They are similar to Lessing rings and were developed from the Raschig ring . Pall rings have similar cylindrical dimensions but has rows of windows which increase performance by increasing the surface area . They are suited for low pressure drop and high capacity applications. They have a degree of randomness and a relatively high liquid hold up, promoting a high absorption, especially when the rate of reaction is slow. The cross structure of the Pall ring makes it mechanically robust and suitable for use in deep packed beds.
The Bialecki ring was patented in 1974 by Zbigniew Białecki, a Polish chemical engineer from Kraków. Białecki rings are an improved version of Raschig rings. The rings may be injection-moulded from plastics or press-formed from metal sheet without welding. The specific surface area of the packing ranges between 60 and 440 m²/m³. [5]
Dixon rings have a similar design to Lessing rings. They are made of stainless steel mesh, which gives them a low pressure drop, especially after pre-wetting. Dixon rings also have a very large surface area and a large liquid hold-up, resulting in a high rate of mass transfer. They are used for laboratory distillation and scrubbing applications. | https://en.wikipedia.org/wiki/Random_column_packing |
In the statistical physics of disordered systems, the random energy model is a toy model of a system with quenched disorder, such as a spin glass, having a first-order phase transition. [1][2] It concerns the statistics of a collection of $N$ spins (i.e. degrees of freedom $\boldsymbol{\sigma} \equiv \{\sigma_i\}_{i=1}^N$ that can take one of two possible values $\sigma_i = \pm 1$), so that the number of possible states for the system is $2^N$. The energies of such states are independent and identically distributed Gaussian random variables $E_x \sim \mathcal{N}(0, N/2)$ with zero mean and a variance of $N/2$. Many properties of this model can be computed exactly. Its simplicity makes this model suitable for pedagogical introduction of concepts like quenched disorder and replica symmetry.
Critical energy per particle: $h_c = \sqrt{\ln 2}$.
Critical inverse temperature: $\beta_c = 2\sqrt{\ln 2}$.
Partition function: $Z(\beta) = \sum_s e^{-\beta H(s)}$, which at large $N$ becomes $2^N \mathbb{E}_E[e^{-\beta E}]$ when $\beta < \beta_c$, that is, when condensation does not occur. When this is true, we say that the model has the self-averaging property.
Free entropy per particle:
$$f(\beta) = \lim_{N\to\infty} \frac{1}{N} \ln Z = \begin{cases} \ln 2 + \frac{1}{4}\beta^2, & \beta < \beta_c,\\ \beta\sqrt{\ln 2}, & \beta > \beta_c. \end{cases}$$
Entropy per particle:
$$s(h) = \max_\beta\,\bigl(f(\beta) - \beta h\bigr) = \begin{cases} \ln 2 - h^2, & h \in [-h_c, +h_c],\\ 0, & \text{otherwise.} \end{cases}$$
When $\beta < \beta_c$, the Boltzmann distribution of the system is concentrated at energy per particle $h = -\beta/2$, of which there are $\sim e^{N(\ln 2 - \beta^2/4)}$ states.
When $\beta > \beta_c$, the Boltzmann distribution of the system is concentrated at $h = -h_c$, and since the entropy per particle at that point is zero, the Boltzmann distribution is concentrated on a sub-exponential number of states. This is a phase transition called condensation.
Define the participation ratio as
$$Y = \sum_E p_E^2 = \frac{\sum_E e^{-2\beta E}}{\left(\sum_E e^{-\beta E}\right)^2}.$$
The participation ratio measures the amount of condensation in the Boltzmann distribution. It can be interpreted as the probability that two randomly sampled states are exactly the same state. Indeed, it is precisely the Simpson index, a commonly used diversity index.
For each $N, \beta$, the participation ratio is a random variable determined by the energy levels.
When $\beta < \beta_c$, the system is not in the condensed phase, and so by asymptotic equipartition the Boltzmann distribution is asymptotically uniformly distributed over $\sim e^{N(\ln 2 - \beta^2/4)}$ states. The participation ratio is then
$$Y \sim e^{N(\ln 2 - \beta^2/4)} \times \left(e^{-N(\ln 2 - \beta^2/4)}\right)^2 = e^{-N(\ln 2 - \beta^2/4)},$$
which decays exponentially to zero.
When $\beta > \beta_c$, the participation ratio satisfies
$$\lim_{N\to\infty} \mathbb{E}[Y] = 1 - \frac{\beta_c}{\beta},$$
where the expectation is taken over all random energy levels.
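A small numerical experiment, sketched below, makes the condensation transition in the participation ratio visible; at finite $N$ the agreement with the limiting values is only approximate, and all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_participation_ratio(N, beta, trials=100):
    """Monte Carlo estimate of E[Y] for the REM with 2**N levels."""
    ys = []
    for _ in range(trials):
        E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)  # iid Gaussian levels
        w = np.exp(-beta * (E - E.min()))               # shift avoids overflow
        p = w / w.sum()
        ys.append(np.sum(p**2))
    return np.mean(ys)

beta_c = 2 * np.sqrt(np.log(2))
print(mean_participation_ratio(16, 0.5 * beta_c))  # ~ 0: no condensation
print(mean_participation_ratio(16, 2.0 * beta_c))  # ~ 1 - beta_c/beta = 0.5,
                                                   # up to finite-N effects
```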
The $r$-spin infinite-range model, in which all sets of $r$ spins interact with a random, independent, identically distributed interaction constant, becomes the random energy model in a suitably defined $r \to \infty$ limit. [3]
More precisely, if the Hamiltonian of the model is defined by
$$H(\sigma) = -\sum_{\{i_1,\ldots,i_r\}} J_{i_1,\ldots,i_r}\, \sigma_{i_1}\cdots\sigma_{i_r},$$
where the sum runs over all $\binom{N}{r}$ distinct sets of $r$ indices, and, for each such set $\{i_1,\ldots,i_r\}$, $J_{i_1,\ldots,i_r}$ is an independent Gaussian variable of mean 0 and variance $J^2 r!/(2N^{r-1})$, the random energy model is recovered in the $r \to \infty$ limit.
As its name suggests, in the REM each microscopic state has an independent distribution of energy. For a particular realization of the disorder, $P(E) = \delta(E - H(\sigma))$, where $\sigma = (\sigma_i)$ refers to the individual spin configuration described by the state and $H(\sigma)$ is the energy associated with it. The final extensive variables, like the free energy, need to be averaged over all realizations of the disorder, just as in the case of the Edwards–Anderson model. Averaging $P(E)$ over all possible realizations, we find that the probability that a given configuration of the disordered system has an energy equal to $E$ is given by
$$[P(E)] = \frac{1}{\sqrt{\pi N J^2}} \exp\!\left(-\frac{E^2}{N J^2}\right),$$
where $[\cdots]$ denotes the average over all realizations of the disorder. Moreover, the joint probability distribution of the energy values of two different microscopic configurations of the spins, $\sigma$ and $\sigma'$, factorizes:
$$[P(E, E')] = [P(E)]\,[P(E')].$$
It can be seen that the probability of a given spin configuration only depends on the energy of that state and not on the individual spin configuration. [ 4 ]
The entropy of the REM is given by [5]
$$S(E) = N\left[\log 2 - \left(\frac{E}{NJ}\right)^2\right]$$
for $|E| < NJ\sqrt{\log 2}$. However, this expression only holds if the entropy per spin, $\lim_{N\to\infty} S(E)/N$, is finite, i.e., when $|E| < NJ\sqrt{\log 2}$. Since $(1/T) = \partial S/\partial E$, this corresponds to $T > T_c = 1/(2\sqrt{\log 2})$. For $T < T_c$, the system remains "frozen" in a small number of configurations of energy $E \simeq -NJ\sqrt{\log 2}$, and the entropy per spin vanishes in the thermodynamic limit. | https://en.wikipedia.org/wiki/Random_energy_model |
Random flip-flop (RFF) is a theoretical concept of a non-sequential logic circuit capable of generating true randomness. By definition, it operates as an "ordinary" edge-triggered clocked flip-flop, except that its clock input acts randomly, with probability p = 1/2. [1] Unlike Boolean circuits, which behave deterministically, a random flip-flop behaves non-deterministically. By definition, a random flip-flop is electrically compatible with Boolean logic circuits. Together with them, the RFF makes up a full set of logic circuits capable of performing arbitrary algorithms, namely of realizing a probabilistic Turing machine.
Random flip-flops come in all the varieties that ordinary edge-triggered clocked flip-flops do, for example: D-type random flip-flop (DRFF), T-type random flip-flop (TRFF), JK-type random flip-flop (JKRFF), etc. Symbols for the DRFF, TRFF and JKRFF are shown in Fig. 1.
While varieties are possible, not all of them are needed: a single RFF type can be used to emulate all other types. Emulation of one type of RFF by another can be done using the same additional gate circuitry as for ordinary flip-flops. Examples are shown in Fig. 2.
By definition, the action of a theoretical RFF is truly random. This is difficult to achieve in practice and is probably best realized through the use of physical randomness. An RFF based on the quantum-random effect of photon emission in a semiconductor and subsequent detection has been demonstrated to work well up to a clock frequency of 25 MHz. [1] At higher clock frequencies, subsequent actions of the RFF become correlated. This RFF was built using bulk components, and the effort resulted in only a handful of units. Recently, a monolithic chip containing 2800 integrated RFFs based on quantum randomness has been demonstrated [2][3] in a Bipolar-CMOS-DMOS (BCD) process.
One straightforward application of an RFF is the generation of random bits, as shown in Fig. 3.
Since each RFF operates independently of all others, N RFFs can generate N bits per clock cycle; the overall generation throughput of such a random number generator is limited only by the number of available RFFs and their maximum operating clock frequency.
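A software model can mimic this scheme, though it only emulates an RFF: in the sketch below, OS-level entropy stands in for the physical randomness of a real device, and the T-type convention (the stored bit toggles on a randomly accepted clock edge) is an assumption made for illustration.

```python
import secrets

class RandomFlipFlop:
    """Software model of a T-type random flip-flop: each clock edge
    toggles the stored bit with probability 1/2."""
    def __init__(self):
        self.q = 0

    def clock(self):
        if secrets.randbits(1):  # the RFF "fires" with p = 1/2
            self.q ^= 1
        return self.q

# N independent RFFs yield N bits per clock cycle; here N = 8
rffs = [RandomFlipFlop() for _ in range(8)]
word = 0
for ff in rffs:
    word = (word << 1) | ff.clock()
print(f"random byte: {word:08b}")
```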
The biggest difference between an RFF and a true random number generator is that a plethora of RFFs can work concurrently, independently of each other, with or without any synchronicity among them. This is useful in stochastic computing, [4][5][6][7][8] also known as random pulse computing (RPC), where many information-processing circuits work in parallel. RFFs could also find use in prosthetic implants such as artificial cochleae or prosthetic limbs, in near-sensor image processing, [9] as well as in artificial intelligence processors. [10][11][12][13] Furthermore, given its high speed, a single RFF can be used to generate on the order of a hundred thousand 256-bit cryptographic keys per second, or nonce data, without requiring any special or proprietary communication protocol, making it a potentially indispensable piece of security hardware for IoT devices, smart cards, car keys, and any computer or digital communication device.
While the technology for realizing RFFs on a chip is young, [2] it is conceivable that in the future the RFF as an electronic element will appear in universal logic chips (such as 7400-series integrated circuits), in application-specific integrated circuits (ASICs), and in field-programmable gate array (FPGA) chips, thus facilitating designs that could benefit from it. | https://en.wikipedia.org/wiki/Random_flip-flop |
The random generalized Lotka–Volterra model (rGLV) is an ecological model and random set of coupled ordinary differential equations in which the parameters of the generalized Lotka–Volterra equation are sampled from a probability distribution, analogously to quenched disorder. The rGLV models the dynamics of a community of species in which each species' abundance grows towards a carrying capacity but is depleted by competition from the presence of other species. It is often analyzed in the many-species limit using tools from statistical physics, in particular from spin glass theory.
The rGLV has been used as a tool to analyze emergent macroscopic behavior in microbial communities with dense, strong interspecies interactions. The model has served as a context for theoretical investigations studying diversity–stability relations in community ecology [1] and properties of static and dynamic coexistence. [2][3] Dynamical behavior in the rGLV has been mapped experimentally in community microcosms. [4] The rGLV model has also served as an object of interest for the spin glass and disordered systems physics community to develop new techniques and numerical methods. [5][6][7][8][9]
The random generalized Lotka–Volterra model is written as the system of coupled ordinary differential equations [1][2][4][10]
$$\frac{\mathrm{d}N_i}{\mathrm{d}t} = \frac{r_i}{K_i} N_i \left( K_i - N_i - \sum_{j(\neq i)} \alpha_{ij} N_j \right), \qquad i = 1, \dots, S,$$
where $N_i$ is the abundance of species $i$, $S$ is the number of species, $K_i$ is the carrying capacity of species $i$ in the absence of interactions, $r_i$ sets a timescale, and $\alpha$ is a random matrix whose entries are random variables with mean $\langle \alpha_{ij} \rangle = \mu_\alpha/S$, variance $\mathrm{var}(\alpha_{ij}) = \sigma_\alpha^2/S$, and correlations $\mathrm{corr}(\alpha_{ij}, \alpha_{ji}) = \gamma$ for $i \neq j$, where $-1 \le \gamma \le 1$. The interaction matrix, $\alpha$, may be parameterized as
$$\alpha_{ij} = \frac{\mu_\alpha}{S} + \frac{\sigma_\alpha}{\sqrt{S}}\, a_{ij},$$
where the $a_{ij}$ are standard random variables (i.e., zero mean and unit variance) with $\langle a_{ij} a_{ji} \rangle = \gamma$ for $i \neq j$. The matrix entries may have any distribution with common finite first and second moments and will yield identical results in the large-$S$ limit due to the central limit theorem. The carrying capacities may also be treated as random variables with $\langle K_i \rangle = K$ and $\operatorname{var}(K_i) = \sigma_K^2$. Analyses by statistical physics-inspired methods have revealed phase transitions between different qualitative behaviors of the model in the many-species limit. In some cases, this may include transitions between the existence of a unique globally-attractive fixed point and chaotic, persistent fluctuations.
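A minimal numerical sketch of these dynamics follows (not taken from the references): forward-Euler integration with $r_i = K_i = 1$, a triangle-by-triangle construction of $a_{ij}$ enforcing $\mathrm{corr}(a_{ij}, a_{ji}) = \gamma$, and parameters chosen in the regime where a unique fixed point is expected.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rglv(S=200, mu=4.0, sigma=0.5, gamma=0.0, T=100.0, dt=0.01):
    """Euler integration of the rGLV with r_i = K_i = 1 for all species."""
    # Build a_ij with zero mean, unit variance, corr(a_ij, a_ji) = gamma
    z1, z2 = rng.normal(size=(S, S)), rng.normal(size=(S, S))
    iu = np.triu_indices(S, k=1)
    a = np.zeros((S, S))
    a[iu] = z1[iu]
    a.T[iu] = gamma * z1[iu] + np.sqrt(1 - gamma**2) * z2[iu]
    alpha = mu / S + sigma / np.sqrt(S) * a
    np.fill_diagonal(alpha, 0.0)  # the j != i sum excludes self-interaction

    N = rng.uniform(0.1, 1.0, size=S)
    for _ in range(int(T / dt)):
        dN = N * (1.0 - N - alpha @ N)
        N = np.maximum(N + dt * dN, 0.0)  # forbid negative abundances
    return N

N_final = simulate_rglv()
print(f"surviving fraction: {(N_final > 1e-6).mean():.2f}")
```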
In the thermodynamic limit (i.e., the community has a very large number of species) where a unique globally-attractive fixed point exists, the distribution of species abundances can be computed using the cavity method, while assuming the system is self-averaging. The self-averaging assumption means that the distribution of any one species' abundance between samplings of model parameters matches the distribution of species abundances within a single sampling of model parameters. In the cavity method, an additional mean-field species $i = 0$ is introduced and the response of the system is approximated linearly.
The cavity calculation yields a self-consistent equation describing the distribution of species abundances as a mean-field random variable, $N_0$. When $\sigma_K = 0$, the mean-field equation is [1]
$$0 = N_0 \left( K - \mu_\alpha m - N_0 + \sqrt{q\left(\mu_\alpha^2 + \gamma \sigma_\alpha^2\right)}\, Z + \sigma_\alpha^2 \gamma \chi N_0 \right),$$
where $m = \langle N_0 \rangle$, $q = \langle N_0^2 \rangle$, $\chi = \langle \partial N_0/\partial K_0 \rangle$, and $Z \sim \mathcal{N}(0,1)$ is a standard normal random variable. Only ecologically uninvadable solutions are taken (i.e., the largest solution for $N_0$ in the quadratic equation is selected). The relevant susceptibility and moments of $N_0$, which has a truncated normal distribution, are determined self-consistently.
In the thermodynamic limit where there is an asymptotically large number of species (i.e., $S \to \infty$), there are three distinct phases: one in which there is a unique fixed point (UFP), another with multiple attractors (MA), and a third with unbounded growth. In the MA phase, depending on whether species abundances are replenished at a small rate, may approach arbitrarily small population sizes, or are removed from the community when the population falls below some cutoff, the resulting dynamics may be chaotic with persistent fluctuations or approach an initial-conditions-dependent steady state. [1]
The transition from the UFP to the MA phase is signaled by the cavity solution becoming unstable to disordered perturbations. When $\sigma_K = 0$, the phase transition boundary occurs when the parameters satisfy
$$\sigma_\alpha = \frac{\sqrt{2}}{1+\gamma}.$$
In the $\sigma_K > 0$ case, the phase boundary can still be calculated analytically, but no closed-form solution has been found; numerical methods are necessary to solve the self-consistent equations determining the phase boundary.
The transition to the unbounded growth phase is signaled by the divergence of $\langle N_0 \rangle$ as computed in the cavity calculation.
The cavity method can also be used to derive a dynamical mean-field theory model for the dynamics. The cavity calculation yields a self-consistent equation describing the dynamics as a Gaussian process defined by the self-consistent equation (for $\sigma_K = 0$) [8]
$$\frac{\mathrm{d}N_0}{\mathrm{d}t} = N_0(t)\left[ K_0 - N_0(t) - \mu_\alpha m(t) - \sigma_\alpha \eta(t) + \gamma \sigma_\alpha^2 \int_0^t \mathrm{d}t'\, \chi(t,t')\, N_0(t') \right],$$
where $m(t) = \langle N_0(t) \rangle$, $\eta$ is a zero-mean Gaussian process with autocorrelation $\langle \eta(t)\eta(t') \rangle = \langle N_0(t) N_0(t') \rangle$, and $\chi(t,t') = \langle \delta N_0(t)/\delta K_0(t')\,|_{K_0(t')=K_0} \rangle$ is the dynamical susceptibility, defined in terms of a functional derivative of the dynamics with respect to a time-dependent perturbation of the carrying capacity.
Using dynamical mean-field theory, it has been shown that at long times the dynamics exhibit aging, in which the characteristic time scale defining the decay of correlations increases linearly in the duration of the dynamics. That is, $C_N(t, t + t\tau) \to f(\tau)$ when $t$ is large, where $C_N(t,t') = \langle N(t)N(t') \rangle$ is the autocorrelation function of the dynamics and $f(\tau)$ is a common scaling-collapse function. [8][11]
When a small immigration rate $\lambda \ll 1$ is added (i.e., a small constant is added to the right-hand side of the equations of motion), the dynamics reach a time-translationally invariant state. In this case, the dynamics exhibit jumps between $O(1)$ and $O(\lambda)$ abundances. [12] | https://en.wikipedia.org/wiki/Random_generalized_Lotka–Volterra_model |
A random hexamer, or random hexanucleotide, is a short oligonucleotide used in various PCR applications, such as rolling circle amplification, to prime DNA synthesis.
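A trivial sketch of what "synthesised entirely randomly" amounts to in silico follows; the function and its parameters are illustrative only.

```python
import random

def random_hexamers(n, k=6, seed=None):
    """Generate n random k-mer primer sequences (k = 6 for hexamers)."""
    rng = random.Random(seed)
    return ["".join(rng.choice("ACGT") for _ in range(k)) for _ in range(n)]

print(random_hexamers(4, seed=0))  # e.g. four random 6-base sequences
```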
Random hexamers are oligonucleotide sequences of 6 bases which are synthesised entirely randomly, giving a very large range of sequences that have the potential to anneal at many random points on a DNA sequence and act as primers to commence first-strand cDNA synthesis. [1][2][3] | https://en.wikipedia.org/wiki/Random_hexamer |
Random number generation is a process by which, often by means of a random number generator ( RNG ), a sequence of numbers or symbols is generated that cannot be reasonably predicted better than by random chance. This means that the particular outcome sequence will contain some patterns detectable in hindsight but impossible to foresee. True random number generators can be hardware random-number generators (HRNGs), wherein each generation is a function of the current value of a physical environment's attribute that is constantly changing in a manner that is practically impossible to model. This would be in contrast to so-called "random number generations" done by pseudorandom number generators (PRNGs), which generate numbers that only look random but are in fact predetermined—these generations can be reproduced simply by knowing the state of the PRNG. [ 1 ]
Various applications of randomness have led to the development of different methods for generating random data. Some of these have existed since ancient times, including well-known examples like the rolling of dice , coin flipping , the shuffling of playing cards , the use of yarrow stalks (for divination ) in the I Ching , as well as countless other techniques. Because of the mechanical nature of these techniques, generating large quantities of sufficiently random numbers (important in statistics) required much work and time. Thus, results would sometimes be collected and distributed as random number tables .
Several computational methods for pseudorandom number generation exist. All fall short of the goal of true randomness, although they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). This generally makes them unusable for applications such as cryptography. However, carefully designed cryptographically secure pseudorandom number generators (CSPRNGs) also exist, with special features specifically designed for use in cryptography.
Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. In applications having unpredictability as the paramount feature, such as security applications, hardware generators are generally preferred over pseudorandom algorithms, where feasible.
Pseudorandom number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed . They are also used in cryptography – so long as the seed is secret. The sender and receiver can generate the same set of numbers automatically to use as keys.
The generation of pseudorandom numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "random quote of the day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms .
Some applications that appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a truly random system would have no restriction on the same item appearing two or three times in succession.
There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise , thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy (as a measure of unpredictability or surprise of the number generation process).
The speed at which entropy can be obtained from natural sources is dependent on the underlying physical phenomena being measured. Thus, sources of naturally occurring true entropy are said to be blocking – they are rate-limited until enough entropy is harvested to meet the demand. On some Unix-like systems, including most Linux distributions , the pseudo device file /dev/random will block until sufficient entropy is harvested from the environment. [ 2 ] Due to this blocking behavior, large bulk reads from /dev/random , such as filling a hard disk drive with random bits, can often be slow on systems that use this type of entropy source.
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed value or key . As a result, the entire seemingly random sequence can be reproduced if the seed value is known. This type of random number generator is often called a pseudorandom number generator . This type of generator typically does not rely on sources of naturally occurring entropy, though it may be periodically seeded by natural sources. This generator type is non-blocking, so they are not rate-limited by an external event, making large bulk reads a possibility.
Some systems take a hybrid approach, providing randomness harvested from natural sources when available, and falling back to periodically re-seeded software-based cryptographically secure pseudorandom number generators (CSPRNGs). The fallback occurs when the desired read rate of randomness exceeds the ability of the natural harvesting approach to keep up with the demand. This approach avoids the rate-limited blocking behavior of random number generators based on slower and purely environmental methods.
While a pseudorandom number generator based solely on deterministic logic can never be regarded as a true random number source in the purest sense of the word, in practice they are generally sufficient even for demanding security-critical applications. Carefully designed and implemented pseudorandom number generators can be certified for security-critical cryptographic purposes, as is the case with the yarrow algorithm and fortuna. The former is the basis of the /dev/random source of entropy on FreeBSD, AIX, macOS, NetBSD, and others. OpenBSD uses a pseudorandom number algorithm known as arc4random. [3]
The earliest methods for generating random numbers, such as dice, coin flipping and roulette wheels, are still used today, mainly in games and gambling as they tend to be too slow for most applications in statistics and cryptography.
A hardware random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics . [ 4 ] [ 5 ] Sources of entropy include radioactive decay , thermal noise , shot noise , avalanche noise in Zener diodes , clock drift , the timing of actual movements of a hard disk read-write head, and radio noise . However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random. A randomness extractor , such as a cryptographic hash function , can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate.
The appearance of wideband photonic entropy sources, such as optical chaos and amplified spontaneous emission noise, has greatly aided the development of physical random number generators. Among them, optical chaos [6][7] has a high potential to physically produce high-speed random numbers due to its high bandwidth and large amplitude. A prototype of a high-speed, real-time physical random bit generator based on a chaotic laser was built in 2013. [8]
Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measured radioactive decay with Geiger–Müller tubes, [9] while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.
Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. [ 10 ] Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators. [ 11 ]
Most computer-generated random numbers use PRNGs, which are algorithms that can automatically create long runs of numbers with good random properties, but eventually the sequence repeats (or the memory usage grows without bound). These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. [12] The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNGs is the linear congruential generator, which uses the recurrence
$$X_{n+1} = (a X_n + b) \bmod m$$
to generate numbers, where $a$, $b$ and $m$ are large integers, and $X_{n+1}$ is the next in $X$ as a series of pseudorandom numbers. The maximum number of numbers the formula can produce is the modulus, $m$. The recurrence relation can be extended to matrices to have much longer periods and better statistical properties. [13] To avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient, $a$, can be used in parallel, with a master random number generator that selects from among the several different generators.
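A minimal sketch of such a generator follows; the constants are the well-known Numerical Recipes choices, used here purely for illustration, and the output is not cryptographically secure.

```python
def lcg(seed, a=1664525, b=1013904223, m=2**32):
    """Linear congruential generator: X_{n+1} = (a*X_n + b) mod m.
    Constants from Numerical Recipes; period at most m."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

gen = lcg(seed=42)
print([next(gen) for _ in range(5)])
```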
A simple pen-and-paper method for generating random numbers is the so-called middle-square method suggested by John von Neumann . While simple to implement, its output is of poor quality. It has a very short period and severe weaknesses, such as the output sequence almost always converging to zero. A recent innovation is to combine the middle square with a Weyl sequence . This method produces high-quality output through a long period. [ 14 ]
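A sketch of this combination, in the spirit of Widynski's middle-square Weyl sequence generator, is shown below; the 64-bit arithmetic is emulated with explicit masking, and the odd Weyl constant is one commonly quoted example (any sufficiently irregular odd 64-bit constant should behave similarly).

```python
MASK64 = (1 << 64) - 1

def middle_square_weyl(n=5, s=0xb5ad4eceda1ce2a9):
    """Middle-square generator stabilized by a Weyl sequence w += s,
    where s is an odd 64-bit constant."""
    x = w = 0
    s |= 1  # the Weyl increment must be odd
    out = []
    for _ in range(n):
        x = (x * x) & MASK64          # square
        w = (w + s) & MASK64          # advance the Weyl sequence
        x = (x + w) & MASK64          # inject the Weyl term
        x = ((x >> 32) | (x << 32)) & MASK64  # keep the "middle" bits
        out.append(x & 0xFFFFFFFF)    # 32-bit output
    return out

print(middle_square_weyl())
```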
Most computer programming languages include functions or library routines that provide random number generators. They are often designed to provide a random byte or word, or a floating point number uniformly distributed between 0 and 1.
The quality, i.e. the randomness, of such library functions varies widely, from completely predictable output to cryptographically secure. The default random number generator in many languages, including Python, Ruby, R, IDL and PHP, is based on the Mersenne Twister algorithm and is not sufficient for cryptography purposes, as is explicitly stated in the language documentation. Such library functions often have poor statistical properties, and some will repeat patterns after only tens of thousands of trials. They are often initialized using a computer's real-time clock as the seed, since such a clock is 64-bit and measures in nanoseconds, far beyond a person's precision. These functions may provide enough randomness for certain tasks (for example video games) but are unsuitable where high-quality randomness is required, such as in cryptography applications or statistics. [ 15 ]
Much higher quality random number sources are available on most operating systems; for example /dev/random on various BSD flavors, Linux, Mac OS X, IRIX, and Solaris, or CryptGenRandom for Microsoft Windows. Most programming languages, including those mentioned above, provide a means to access these higher-quality sources.
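As an illustration, in Python the operating system's entropy source can be reached through os.urandom or the secrets module, which wrap the platform CSPRNG (for example /dev/urandom on Unix-like systems or the Windows system generator).

```python
# Reading from the operating-system CSPRNG rather than the default PRNG.
import os
import secrets

raw = os.urandom(16)              # 16 random bytes from the OS entropy source
token = secrets.token_hex(16)     # hex token suitable for keys or nonces
n = secrets.randbelow(100)        # uniform integer in [0, 100)
print(raw.hex(), token, n)
```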
Random number generation may also be performed by humans, in the form of collecting various inputs from end users and using them as a randomization source. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence of e.g. digits or letters. They may alternate too much between choices when compared to a good random generator; [ 16 ] thus, this approach is not widely used. However, for the very reason that humans perform poorly in this task, human random number generation can be used as a tool to gain insights into brain functions otherwise not accessible. [ 17 ]
Even given a source of plausible random numbers (perhaps from a quantum mechanically based hardware generator), obtaining numbers which are completely unbiased takes care. In addition, behavior of these generators often changes with temperature, power supply voltage, the age of the device, or other outside interference.
Generated random numbers are sometimes subjected to statistical tests before use to ensure that the underlying source is still working, and then post-processed to improve their statistical properties. An example would be the TRNG9803 [ 18 ] hardware random number generator, which uses an entropy measurement as a hardware test, and then post-processes the random sequence with a shift register stream cipher. It is generally hard to use statistical tests to validate the generated random numbers. Wang and Nicol [ 19 ] proposed a distance-based statistical testing technique that is used to identify the weaknesses of several random generators. Li and Wang [ 20 ] proposed a method of testing random numbers based on laser chaotic entropy sources using Brownian motion properties.
Statistical tests are also used to give confidence that the post-processed final output from a random number generator is truly unbiased, with numerous randomness test suites being developed.
Most random number generators natively work with integers or individual bits, so an extra step is required to arrive at the canonical uniform distribution between 0 and 1. The implementation is not as trivial as dividing the integer by its maximum possible value. [ 21 ] [ 22 ]
The mainstream algorithm, used by OpenJDK , Rust , and NumPy , is described in a proposal for C++ 's STL. It does not use the extra precision and suffers from bias only in the last bit due to round-to-even. [ 23 ] Other numeric concerns are warranted when shifting this canonical uniform distribution to a different range. [ 24 ] A proposed method for the Swift programming language claims to use the full precision everywhere. [ 25 ]
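A common approach, sketched below in Python under the assumption of a 64-bit integer source and IEEE-754 doubles, is to keep the top 53 bits and scale; library implementations differ in details such as rounding and whether 0 or 1 can be produced.

```python
# Mapping a 64-bit random integer to a double in [0, 1) using 53 significant bits.
import random

def uniform01(u64: int) -> float:
    return (u64 >> 11) * 2.0 ** -53    # discard 64 - 53 = 11 low bits, then scale

print(uniform01(random.getrandbits(64)))
```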
Uniformly distributed integers are commonly used in algorithms such as the Fisher–Yates shuffle . Again, a naive implementation may induce a modulo bias into the result, so more involved algorithms must be used. A method that nearly never performs division was described in 2018 by Daniel Lemire, [ 26 ] with the current state-of-the-art being the arithmetic encoding-inspired 2021 "optimal algorithm" by Stephen Canon of Apple Inc. [ 27 ]
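The sketch below shows the classic rejection approach to drawing a bounded integer without modulo bias, assuming a 32-bit uniform source; Lemire's 2018 method and Canon's 2021 algorithm achieve the same goal while avoiding almost all divisions, and are not reproduced here.

```python
# Unbiased bounded integers: naive modulo reduction vs. rejection sampling.
import random

def bounded_biased(n):
    return random.getrandbits(32) % n          # biased unless n divides 2**32

def bounded_unbiased(n):
    limit = (1 << 32) - ((1 << 32) % n)        # largest multiple of n below 2**32
    while True:
        x = random.getrandbits(32)
        if x < limit:                          # reject values in the short final range
            return x % n

print(bounded_unbiased(6), bounded_biased(6))
```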
Most 0 to 1 RNGs include 0 but exclude 1, while others include or exclude both.
Given a source of uniform random numbers, there are a couple of methods to create a new random source that corresponds to a probability density function . One method, called the inversion method , involves integrating up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions). A second method, called the acceptance-rejection method , involves choosing an x and y value and testing whether the function of x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again. [ 28 ] [ 29 ]
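A minimal Python illustration of both methods, assuming random.random() as the uniform source: inversion is shown for the exponential distribution (whose inverse CDF has a closed form), and acceptance-rejection for the density f(x) = 3x² on [0, 1].

```python
# Inversion and acceptance-rejection sampling from a uniform source.
import math
import random

def exponential_inversion(lam=1.0):
    u = random.random()
    return -math.log(1.0 - u) / lam        # inverse CDF of the exponential distribution

def rejection_f3x2():
    # Target density f(x) = 3*x**2 on [0, 1], bounded above by M = 3.
    while True:
        x, y = random.random(), random.random()
        if y < x * x:                       # accept when y < f(x) / M
            return x

print(exponential_inversion(), rejection_f3x2())
```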
As an example for rejection sampling, to generate a pair of statistically independent standard normally distributed random numbers ( x , y ), one may first generate the polar coordinates ( r , θ ), where r 2 ~ χ 2 2 and θ ~ UNIFORM(0,2π) (see Box–Muller transform ).
The outputs of multiple independent RNGs can be combined (for example, using a bit-wise XOR operation) to provide a combined RNG at least as good as the best RNG used. This is referred to as software whitening .
Computational and hardware random number generators are sometimes combined to reflect the benefits of both kinds. Computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate true randomness.
Some computations making use of a random number generator can be summarized as the computation of a total or average value, such as the computation of integrals by the Monte Carlo method . For such problems, it may be possible to find a more accurate solution by the use of so-called low-discrepancy sequences , also called quasirandom numbers. Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a truly random sequence may, and usually does, leave larger gaps.
The following sites make available random number samples:
Since much cryptography depends on a cryptographically secure random number generator for key and cryptographic nonce generation, if a random number generator can be made predictable, it can be used as a backdoor by an attacker to break the encryption.
The NSA is reported to have inserted a backdoor into the NIST-certified cryptographically secure pseudorandom number generator Dual EC DRBG . If, for example, an SSL connection is created using this random number generator, then according to Matthew Green it would allow the NSA to determine the state of the random number generator, and thereby eventually be able to read all data sent over the SSL connection. [ 30 ] Even though it was apparent that Dual_EC_DRBG was a very poor and possibly backdoored pseudorandom number generator long before the NSA backdoor was confirmed in 2013, it had seen significant usage in practice until 2013, for example by the prominent security company RSA Security . [ 31 ] There have subsequently been accusations that RSA Security knowingly inserted an NSA backdoor into its products, possibly as part of the Bullrun program . RSA has denied knowingly inserting a backdoor into its products. [ 32 ]
It has also been theorized that hardware RNGs could be secretly modified to have less entropy than stated, which would make encryption using the hardware RNG susceptible to attack. One such method that has been published works by modifying the dopant mask of the chip, which would be undetectable to optical reverse-engineering. [ 33 ] For example, for random number generation in Linux, it is seen as unacceptable to use Intel's RDRAND hardware RNG without mixing in the RDRAND output with other sources of entropy to counteract any backdoors in the hardware RNG, especially after the revelation of the NSA Bullrun program. [ 34 ] [ 35 ]
In 2010, a U.S. lottery draw was rigged by the information security director of the Multi-State Lottery Association (MUSL), who surreptitiously installed backdoor malware on the MUSL's secure RNG computer during routine maintenance. [ 36 ] Through these rigged draws he won a total of $16,500,000 over multiple years. | https://en.wikipedia.org/wiki/Random_number_generation
The statistics of random permutations , such as the cycle structure of a random permutation are of fundamental importance in the analysis of algorithms , especially of sorting algorithms, which operate on random permutations. Suppose, for example, that we are using quickselect (a cousin of quicksort ) to select a random element of a random permutation. Quickselect will perform a partial sort on the array, as it partitions the array according to the pivot. Hence a permutation will be less disordered after quickselect has been performed. The amount of disorder that remains may be analysed with generating functions. These generating functions depend in a fundamental way on the generating functions of random permutation statistics. Hence it is of vital importance to compute these generating functions.
The article on random permutations contains an introduction to random permutations.
Permutations are sets of labelled cycles. Using the labelled case of the Flajolet–Sedgewick fundamental theorem and writing P {\displaystyle \scriptstyle {\mathcal {P}}} for the set of permutations and Z {\displaystyle \scriptstyle {\mathcal {Z}}} for the singleton set, we have
Translating into exponential generating functions (EGFs), we have
where we have used the fact that the EGF of the combinatorial species of permutations (there are n ! permutations of n elements) is
This one equation allows one to derive a large number of permutation statistics. Firstly, by dropping terms from SET {\displaystyle \scriptstyle \operatorname {SET} } , i.e. exp, we may constrain the number of cycles that a permutation contains, e.g. by restricting the EGF to SET 2 {\displaystyle \scriptstyle \operatorname {SET} _{2}} we obtain permutations containing two cycles. Secondly, note that the EGF of labelled cycles, i.e. of CYC ( Z ) {\displaystyle \scriptstyle \operatorname {CYC} ({\mathcal {Z}})} , is ∑ k ≥ 1 ( k − 1 ) ! z k k ! = ∑ k ≥ 1 z k k = log 1 1 − z {\displaystyle \sum _{k\geq 1}{\frac {(k-1)!z^{k}}{k!}}=\sum _{k\geq 1}{\frac {z^{k}}{k}}=\log {\frac {1}{1-z}}} because there are k ! / k labelled cycles. This means that by dropping terms from this generating function, we may constrain the size of the cycles that occur in a permutation and obtain an EGF of the permutations containing only cycles of a given size.
Instead of removing and selecting cycles, one can also put different weights on different size cycles. If b : N → R {\displaystyle b:\mathbb {N} \rightarrow \mathbb {R} } is a weight function that depends only on the size k of the cycle and for brevity we write
defining the value of b for a permutation σ {\displaystyle \sigma } to be the sum of its values on the cycles, then we may mark cycles of length k with u b ( k ) and obtain a two-variable generating function
This is a "mixed" generating function: it is an exponential generating function in z and an ordinary generating function in the secondary parameter u. Differentiating and evaluating at u = 1, we have
This is the probability generating function of the expectation of b . In other words, the coefficient of z n {\displaystyle z^{n}} in this power series is the expected value of b on permutations in S n {\displaystyle S_{n}} , given that each permutation is chosen with the same probability 1 / n ! {\displaystyle 1/n!} .
This article uses the coefficient extraction operator [ z n ], documented on the page for formal power series .
An involution is a permutation σ so that σ 2 = 1 under permutation composition. It follows that σ may only contain cycles of length one or two, i.e. the exponential generating function g ( z ) of these permutations is [ 1 ]
This gives the explicit formula for the total number I ( n ) {\displaystyle I(n)} of involutions among the permutations σ ∈ S n : [ 1 ]
Dividing by n ! yields the probability that a random permutation is an involution.
These numbers are known as telephone numbers .
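As a numerical check (not part of the derivation above), the telephone numbers can be computed from the well-known recurrence I(n) = I(n−1) + (n−1)·I(n−2), which reflects that an involution either fixes the element n or swaps it with one of the other n−1 elements.

```python
# Telephone numbers I(n) and the probability that a random permutation is an involution.
from math import factorial

def involutions(nmax):
    I = [1, 1]                                   # I(0) = I(1) = 1
    for n in range(2, nmax + 1):
        I.append(I[n - 1] + (n - 1) * I[n - 2])  # fix n, or pair n with one of the other n-1
    return I

I = involutions(8)
print(I)                                         # 1, 1, 2, 4, 10, 26, 76, 232, 764
print([I[n] / factorial(n) for n in range(9)])   # I(n) / n!
```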
This generalizes the concept of an involution. An m th root of unity is a permutation σ so that σ m = 1 under permutation composition. Now every time we apply σ we move one step in parallel along all of its cycles. A cycle of length d applied d times produces the identity permutation on d elements ( d fixed points) and d is the smallest value to do so. Hence m must be a multiple of all cycle sizes d , i.e. the only possible cycles are those whose length d is a divisor of m . It follows that the EGF g ( x ) of these permutations is
When m = p , where p is prime, this simplifies to
This one can be done by Möbius inversion . Working with the same concept as in the previous entry we note that the combinatorial species Q {\displaystyle {\mathcal {Q}}} of permutations whose order divides k is given by
Translation to exponential generating functions we obtain the EGF of permutations whose order divides k , which is
Now we can use this generating function to count permutations of order exactly k . Let p n , d {\displaystyle p_{n,d}} be the number of permutations on n elements whose order is exactly d and q n , k {\displaystyle q_{n,k}} the number of permutations on n elements whose order divides k .
Then we have
It follows by Möbius inversion that
Therefore, we have the EGF
The desired count is then given by
This formula produces e.g. for k = 6 the EGF
with the sequence of values starting at n = 5
For k = 8 we get the EGF
with the sequence of values starting at n = 8
Finally for k = 12 we get the EGF
with the sequence of values starting at n = 7
Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella? This problem is equivalent to counting permutations with no fixed points (called derangements ), and hence the EGF, where we subtract out fixed points (cycles of length 1) by removing the term z from the fundamental relation is
Multiplication by 1 / ( 1 − z ) {\displaystyle 1/(1-z)} sums the coefficients of e − z {\displaystyle e^{-z}} , so D ( n ) {\displaystyle D(n)} , the total number of derangements, is given by:
Hence there are about n ! / e {\displaystyle n!/e} derangements and the probability that a random permutation is a derangement is 1 / e . {\displaystyle 1/e.}
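For concreteness (this listing is not part of the original discussion), the derangement numbers can be computed directly from the inclusion–exclusion sum D(n) = n! Σ_{k=0}^{n} (−1)^k / k! and compared with the rounded value of n!/e.

```python
# Derangement numbers D(n) and the n!/e approximation.
from math import factorial, e

def derangements(n):
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

for n in range(1, 8):
    print(n, derangements(n), round(factorial(n) / e))   # D(n) equals round(n!/e) for n >= 1
```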
This result may also be proved by inclusion–exclusion . Using the sets A p {\displaystyle A_{p}} where 1 ≤ p ≤ n {\displaystyle {\begin{matrix}1\leq p\leq n\end{matrix}}} to denote the set of permutations that fix p , we have
This formula counts the number of permutations that have at least one fixed point.
The cardinalities are as follows:
Hence the number of permutations with no fixed point is
or
and we have the claim.
There is a generalization of these numbers, known as rencontres numbers : the number D ( n , m ) {\displaystyle D(n,m)} of permutations of [ n ] {\displaystyle [n]} containing m fixed points. The corresponding EGF is obtained by marking cycles of size one with the variable u , i.e. choosing b ( k ) equal to one for k = 1 {\displaystyle k=1} and zero otherwise, which yields the generating function g ( z , u ) {\displaystyle g(z,u)} of the set of permutations by the number of fixed points:
It follows that
and hence
This immediately implies that
for n large, m fixed.
If P is a permutation, the order of P is the smallest positive integer n for which P n {\displaystyle P^{n}} is the identity permutation. This is the least common multiple of the lengths of the cycles of P .
A theorem of Goh and Schmutz [ 2 ] states that if μ n {\displaystyle \mu _{n}} is the expected order of a random permutation of size n , then
where the constant c is
We can use the same construction as in the previous section to compute the number of derangements D 0 ( n ) {\displaystyle D_{0}(n)} containing an even number of cycles and the number D 1 ( n ) {\displaystyle D_{1}(n)} containing an odd number of cycles. To do this we need to mark all cycles and subtract fixed points, giving
Now some very basic reasoning shows that the EGF q ( z ) {\displaystyle q(z)} of D 0 ( n ) {\displaystyle D_{0}(n)} is given by
We thus have
which is
Subtracting D 0 ( n ) {\displaystyle D_{0}(n)} from D ( n ) {\displaystyle D(n)} , we find
The difference of these two ( D 0 ( n ) {\displaystyle D_{0}(n)} and D 1 ( n ) {\displaystyle D_{1}(n)} ) is n − 1. {\displaystyle n-1.}
A prison warden wants to make room in his prison and is considering liberating one hundred prisoners, thereby freeing one hundred cells. He therefore assembles one hundred prisoners and asks them to play the following game: he lines up one hundred urns in a row, each containing the name of one prisoner, where every prisoner's name occurs exactly once. The game is played as follows: every prisoner is allowed to look inside fifty urns. If he or she does not find his or her name in one of the fifty urns, all prisoners will immediately be executed, otherwise the game continues. The prisoners have a few moments to decide on a strategy, knowing that once the game has begun, they will not be able to communicate with each other, mark the urns in any way or move the urns or the names inside them. Choosing urns at random, their chances of survival are almost zero, but there is a strategy giving them a 30% chance of survival, assuming that the names are assigned to urns randomly – what is it?
First of all, the survival probability using random choices is
so this is definitely not a practical strategy.
The 30% survival strategy is to consider the contents of the urns to be a permutation of the prisoners, and traverse cycles. To keep the notation simple, assign a number to each prisoner, for example by sorting their names alphabetically. The urns may thereafter be considered to contain numbers rather than names. Now clearly the contents of the urns define a permutation. The first prisoner opens the first urn. If he finds his name, he has finished and survives. Otherwise he opens the urn with the number he found in the first urn. The process repeats: the prisoner opens an urn and survives if he finds his name, otherwise he opens the urn with the number just retrieved, up to a limit of fifty urns. The second prisoner starts with urn number two, the third with urn number three, and so on. This strategy is precisely equivalent to a traversal of the cycles of the permutation represented by the urns. Every prisoner starts with the urn bearing his number and keeps on traversing his cycle up to a limit of fifty urns. The number of the urn that contains his number is the pre-image of that number under the permutation. Hence the prisoners survive if all cycles of the permutation contain at most fifty elements. We have to show that this probability is at least 30%.
Note that this assumes that the warden chooses the permutation randomly; if the warden anticipates this strategy, he can simply choose a permutation with a cycle of length 51. To overcome this, the prisoners may agree in advance on a random permutation of their names.
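A short Monte Carlo check of the strategy (a sketch, not part of the original discussion): all prisoners survive exactly when the random permutation hidden in the urns has no cycle longer than fifty, and the simulated survival rate comes out near 31%.

```python
# Simulating the cycle-following strategy for the 100 prisoners problem.
import random

def all_survive(n_prisoners=100, limit=50):
    urns = list(range(n_prisoners))
    random.shuffle(urns)                      # urns[i] = number found inside urn i
    for prisoner in range(n_prisoners):
        box = prisoner
        for _ in range(limit):
            if urns[box] == prisoner:
                break                         # found own number
            box = urns[box]
        else:
            return False                      # ran out of the fifty allowed urns
    return True

trials = 100_000
print(sum(all_survive() for _ in range(trials)) / trials)   # roughly 0.31
```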
We consider the general case of 2 n {\displaystyle 2n} prisoners and n {\displaystyle n} urns being opened. We first calculate the complementary probability, i.e. that there is a cycle of more than n {\displaystyle n} elements. With this in mind, we introduce
or
so that the desired probability is
because the cycle of more than n {\displaystyle n} elements will necessarily be unique. Using the fact that 2 ( n + 1 ) > 2 n {\displaystyle 2(n+1)>2n} , we find that
which yields
Finally, using an integral estimate such as Euler–Maclaurin summation , or the asymptotic expansion of the n th harmonic number , we obtain
so that
or at least 30%, as claimed.
A related result is that asymptotically, the expected length of the longest cycle is λn, where λ is the Golomb–Dickman constant , approximately 0.62.
This example is due to Anna Gál and Peter Bro Miltersen; consult the paper by Peter Winkler for more information, and see the discussion on Les-Mathematiques.net. Consult the references on 100 prisoners for links to these sources.
The above computation may be performed in a simpler and more direct way, as follows: first note that a permutation of 2 n {\displaystyle 2n} elements contains at most one cycle of length strictly greater than n {\displaystyle n} . Thus, if we denote
then
For k > n {\displaystyle k>n} , the number of permutations that contain a cycle of length exactly k {\displaystyle k} is
Explanation: ( 2 n k ) {\displaystyle {{2n} \choose k}} is the number of ways of choosing the k {\displaystyle k} elements that comprise the cycle; k ! k {\displaystyle {\frac {k!}{k}}} is the number of ways of arranging k {\displaystyle k} items in a cycle; and ( 2 n − k ) ! {\displaystyle (2n-k)!} is the number of ways to permute the remaining elements. There is no double counting here because there is at most one cycle of length k {\displaystyle k} when k > n {\displaystyle k>n} . Thus,
We conclude that
There is a closely related problem that fits the method presented here quite nicely. Say you have n ordered boxes. Every box contains a key to some other box, or possibly to itself, giving a permutation of the keys. You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that, using these keys, you can open all n boxes, where a found key is used to open the box it belongs to and the process is repeated?
The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n , also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n .
The species Q {\displaystyle {\mathcal {Q}}} of permutations by cycles with some non-empty subset of every cycle being marked has the specification
The index in the inner sum starts at one because we must have at least one mark on every cycle.
Translating the specification to generating functions we obtain the bivariate generating function
This simplifies to
or
In order to extract coefficients from this, rewrite it like so
It now follows that
and hence
Divide by ( n k ) {\displaystyle {n \choose k}} to obtain
We do not need to divide by n! because G ( z , u ) {\displaystyle G(z,u)} is exponential in z .
Applying the Flajolet–Sedgewick fundamental theorem , i.e. the labelled enumeration theorem with G = S m {\displaystyle G=S_{m}} , to the set
we obtain the generating function
The term
yields the signed Stirling numbers of the first kind , and g m ( z ) {\displaystyle g_{m}(z)} is the EGF of the unsigned Stirling numbers of the first kind, i.e.
We can compute the OGF of the signed Stirling numbers for n fixed, i.e.
Start with
which yields
Summing this, we obtain
Using the formula involving the logarithm for g m ( z ) {\displaystyle g_{m}(z)} on the left, the definition of s n ( w ) {\displaystyle s_{n}(w)} on the right, and the binomial theorem , we obtain
Comparing the coefficients of z n {\displaystyle z^{n}} , and using the definition of the binomial coefficient , we finally have
a falling factorial . The computation of the OGF of the unsigned Stirling numbers of the first kind works in a similar way.
In this problem we use a bivariate generating function g ( z , u ) as described in the introduction. The value of b for a cycle not of size m is zero, and one for a cycle of size m . We have
or
This means that the expected number of cycles of size m in a permutation of length n < m is zero (obviously). A random permutation of length at least m contains on average 1/ m cycles of length m . In particular, a random permutation contains on average exactly one fixed point.
The OGF of the expected number of cycles of length less than or equal to m is therefore
where H m is the m th harmonic number . Hence the expected number of cycles of length at most m in a random permutation is about ln m .
The mixed GF g ( z , u ) {\displaystyle g(z,u)} of the set of permutations by the number of fixed points is
Let the random variable X be the number of fixed points of a random permutation.
Using Stirling numbers of the second kind , we have the following formula for the m th moment of X :
where ( X ) k {\displaystyle (X)_{k}} is a falling factorial .
Using g ( z , u ) {\displaystyle g(z,u)} , we have
which is zero when k > n {\displaystyle k>n} , and one otherwise.
Hence only terms with k ≤ n {\displaystyle k\leq n} contribute to the sum.
This yields
Suppose you pick a random permutation σ {\displaystyle \sigma } and raise it to some power k {\displaystyle k} , with k {\displaystyle k} a positive integer and ask about the expected number of fixed points in the result. Denote this value by E [ F k ] {\displaystyle E[F_{k}]} .
For every divisor d {\displaystyle d} of k {\displaystyle k} a cycle of length d {\displaystyle d} splits into d {\displaystyle d} fixed points when raised to the power k . {\displaystyle k.} Hence we need to mark these cycles with u d . {\displaystyle u^{d}.} To illustrate this consider E [ F 6 ] . {\displaystyle E[F_{6}].}
We get
which is
Once more continuing as described in the introduction, we find
which is
The conclusion is that E [ F 6 ] = 4 {\displaystyle E[F_{6}]=4} for n ≥ 6 {\displaystyle n\geq 6} and there are four fixed points on average.
The general procedure is
Once more continuing as before, we find
We have shown that the value of E [ F k ] {\displaystyle E[F_{k}]} is equal to τ ( k ) {\displaystyle \tau (k)} (the number of divisors of k {\displaystyle k} ) as soon as n ≥ k . {\displaystyle n\geq k.} It starts out at 1 {\displaystyle 1} for n = 1 {\displaystyle n=1} and increases by one every time n {\displaystyle n} hits a divisor of k {\displaystyle k} up to and including k {\displaystyle k} itself.
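A quick Monte Carlo sanity check of this statement (a sketch, not from the original text): raising a random permutation of n ≥ k elements to the power k and counting fixed points gives an average close to τ(k), e.g. about 4 for k = 6.

```python
# Average number of fixed points of a random permutation raised to the power k.
import random

def fixed_points_of_power(n, k):
    p = list(range(n))
    random.shuffle(p)
    q = list(range(n))                 # start from the identity
    for _ in range(k):
        q = [p[i] for i in q]          # compose with p once more; after k steps q = p^k
    return sum(1 for i in range(n) if q[i] == i)

n, k, trials = 20, 6, 50_000
print(sum(fixed_points_of_power(n, k) for _ in range(trials)) / trials)   # near tau(6) = 4
```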
We construct the bivariate generating function g ( z , u ) {\displaystyle g(z,u)} using b ( k ) {\displaystyle b(k)} , where b ( k ) {\displaystyle b(k)} is one for all cycles (every cycle contributes one to the total number of cycles).
Note that g ( z , u ) {\displaystyle g(z,u)} has the closed form
and generates the unsigned Stirling numbers of the first kind .
We have
Hence the expected number of cycles is the harmonic number H n {\displaystyle H_{n}} , or about log n {\displaystyle \log n} .
(Note that Section One hundred prisoners contains exactly the same problem with a very similar calculation, plus also a simpler elementary proof.)
Once more, start with the exponential generating function g ( z , u ) {\displaystyle g(z,u)} , this time of the class P {\displaystyle {\mathcal {P}}} of permutations according to size where cycles of length more than n / 2 {\displaystyle n/2} are marked with the variable u {\displaystyle u} :
There can only be one cycle of length more than n 2 {\displaystyle {\frac {n}{2}}} , hence the answer to the question is given by
or
which is
The exponent of z {\displaystyle z} in the term being raised to the power m + 1 {\displaystyle m+1} is larger than ⌊ n 2 ⌋ {\displaystyle \lfloor {\frac {n}{2}}\rfloor } and hence no value for m > 0 {\displaystyle m>0} can possibly contribute to [ z n ] . {\displaystyle [z^{n}].}
It follows that the answer is
The sum has an alternate representation that one encounters e.g. in OEIS : A024167 ,
finally giving
We can use the disjoint cycle decomposition of a permutation to factorize it as a product of transpositions by replacing a cycle of length k by k − 1 transpositions. E.g. the cycle ( 1 2 3 4 ) {\displaystyle (1\;2\;3\;4)} factors as ( 1 2 ) ( 2 3 ) ( 3 4 ) {\displaystyle (1\;2)\;(2\;3)\;(3\;4)} . The function b ( k ) {\displaystyle b(k)} for cycles is equal to k − 1 {\displaystyle k-1} and we obtain
and
Hence the expected number of transpositions T ( n ) {\displaystyle T(n)} is
where H n {\displaystyle H_{n}} is the n t h {\displaystyle n^{th}} Harmonic number .
We could also have obtained this formula by noting that the number of transpositions is obtained by adding the lengths of all cycles (which gives n ) and subtracting one for every cycle (which on average gives H n {\displaystyle H_{n}} , about log n {\displaystyle \log n} , by the previous section).
Note that g ( z , u ) {\displaystyle g(z,u)} again generates the unsigned Stirling numbers of the first kind , but in reverse order. More precisely, we have
To see this, note that the above is equivalent to
and that
which we saw to be the EGF of the unsigned Stirling numbers of the first kind in the section on permutations consisting of precisely m cycles.
We select a random element q of a random permutation σ {\displaystyle \sigma } and ask about the expected size of the cycle that contains q . Here the function b ( k ) {\displaystyle b(k)} is equal to k 2 {\displaystyle k^{2}} , because a cycle of length k contributes k elements that are on cycles of length k . Note that unlike the previous computations, we need to average out this parameter after we extract it from the generating function (divide by n ). We have
Hence the expected length of the cycle that contains q is
This average parameter represents the probability that if we again select a random element of [ n ] {\displaystyle [n]} of a random permutation, the element lies on a cycle of size m . The function b ( k ) {\displaystyle b(k)} is equal to m {\displaystyle m} for m = k {\displaystyle m=k} and zero otherwise, because only cycles of length m contribute, namely m elements that lie on a cycle of length m . We have
It follows that the probability that a random element lies on a cycle of length m is
Select a random subset Q of [ n ] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b ( k ) is equal to ( k m ) {\displaystyle {\begin{matrix}{k \choose m}\end{matrix}}} , because a cycle of length k contributes ( k m ) {\displaystyle {\begin{matrix}{k \choose m}\end{matrix}}} subsets of size m , where ( k m ) = 0 {\displaystyle {\begin{matrix}{k \choose m}=0\end{matrix}}} for k < m . This yields
Averaging out we obtain that the probability of the elements of Q being on the same cycle is
or
In particular, the probability that two elements p < q are on the same cycle is 1/2.
We may use the Flajolet–Sedgewick fundamental theorem directly and compute more advanced permutation statistics. (Check that page for an explanation of how the operators we will use are computed.) For example, the set of permutations containing an even number of even cycles is given by
Translating to exponential generating functions (EGFs), we obtain
or
This simplifies to
or
This says that there is one permutation of size zero containing an even number of even cycles (the empty permutation, which contains zero cycles of even length), one such permutation of size one (the fixed point, which also contains zero cycles of even length), and that for n ≥ 2 {\displaystyle n\geq 2} , there are n ! / 2 {\displaystyle n!/2} such permutations.
Consider what happens when we square a permutation. Fixed points are mapped to fixed points. Odd cycles are mapped to odd cycles in a one-to-one correspondence, e.g. ( 1 8 9 11 13 ) {\displaystyle (1\;8\;9\;11\;13)} turns into ( 1 9 13 8 11 ) {\displaystyle (1\;9\;13\;8\;11)} . Even cycles split in two and produce a pair of cycles of half the size of the original cycle, e.g. ( 5 13 6 9 ) {\displaystyle (5\;13\;6\;9)} turns into ( 5 6 ) ( 9 13 ) {\displaystyle (5\;6)\;(9\;13)} . Hence permutations that are squares may contain any number of odd cycles, and an even number of cycles of size two, an even number of cycles of size four etc., and are given by
which yields the EGF
The types of permutations presented in the preceding two sections, i.e. permutations containing an even number of even cycles and permutations that are squares, are examples of so-called odd cycle invariants , studied by Sung and Zhang (see external links ). The term odd cycle invariant simply means that membership in the respective combinatorial class is independent of the size and number of odd cycles occurring in the permutation. In fact we can prove that all odd cycle invariants obey a simple recurrence, which we will derive. First, here are some more examples of odd cycle invariants.
This class has the specification
and the generating function
The first few values are
This class has the specification
and the generating function
There is a semantic nuance here. We could consider permutations containing no even cycles as belonging to this class, since zero is even . The first few values are
This class has the specification
and the generating function
The first few values are
Observe carefully how the specifications of the even cycle component are constructed. It is best to think of them in terms of parse trees. These trees have three levels. The nodes at the lowest level represent sums of products of even-length cycles of the singleton Z {\displaystyle {\mathcal {Z}}} . The nodes at the middle level represent restrictions of the set operator. Finally the node at the top level sums products of contributions from the middle level. Note that restrictions of the set operator, when applied to a generating function that is even, will preserve this feature, i.e. produce another even generating function. But all the inputs to the set operators are even since they arise from even-length cycles. The result is that all generating functions involved have the form
where h ( z ) {\displaystyle h(z)} is an even function. This means that
is even, too, and hence
Letting g n = n ! [ z n ] g ( z ) {\textstyle g_{n}=n![z^{n}]g(z)} and extracting coefficients, we find that
which yields the recurrence
A link to the Putnam competition website appears in the section External links .
The problem asks for a proof that
where the sum is over all n ! {\displaystyle n!} permutations of [ n ] {\displaystyle [n]} , σ ( π ) {\displaystyle \sigma (\pi )} is the sign of π {\displaystyle \pi } , i.e. σ ( π ) = 1 {\displaystyle \sigma (\pi )=1} if π {\displaystyle \pi } is even and σ ( π ) = − 1 {\displaystyle \sigma (\pi )=-1} if π {\displaystyle \pi } is odd, and ν ( π ) {\displaystyle \nu (\pi )} is the number of fixed points of π {\displaystyle \pi } .
Now the sign of π {\displaystyle \pi } is given by
where the product is over all cycles c of π {\displaystyle \pi } ,
as explained e.g. on the page on even and odd permutations .
Hence we consider the combinatorial class
where U {\displaystyle {\mathcal {U}}} marks one minus the length of a contributing cycle,
and V {\displaystyle {\mathcal {V}}} marks fixed points. Translating to generating functions, we obtain
or
Now we have
and hence the desired quantity is given by
Doing the computation, we obtain
or
Extracting coefficients, we find that the coefficient of 1 / z {\displaystyle 1/z} is zero.
The constant is one, which does not agree with the formula (should be zero).
For n {\displaystyle n} positive, however, we obtain
or
which is the desired result.
As an interesting aside, we observe that g ( z , u , v ) {\displaystyle g(z,u,v)} may be used to evaluate the following determinant of an n × n {\displaystyle n\times n} matrix:
where a , b ≠ 0 {\displaystyle a,b\neq 0} . Recall the formula for the determinant:
Now the value of the product on the right for a permutation π {\displaystyle \pi } is a f b n − f {\displaystyle a^{f}b^{n-f}} ,
where f is the number of fixed points of π {\displaystyle \pi } . Hence
which yields
and finally
Here we seek to show that this difference is given by
Recall that the sign σ ( π ) {\displaystyle \sigma (\pi )} of a permutation π {\displaystyle \pi } is given by
where the product ranges over the cycles c from the disjoint cycle composition of π {\displaystyle \pi } .
It follows that the combinatorial species Q {\displaystyle {\mathcal {Q}}} that reflects the signs and the cycle count of the set of permutations is given by
where we have used U {\displaystyle {\mathcal {U}}} to mark signs and V {\displaystyle {\mathcal {V}}} for the cycle count.
Translating to generating functions we have
This simplifies to
which is
Now the two generating functions Q 1 ( z , v ) {\displaystyle Q_{1}(z,v)} and Q 2 ( z , v ) {\displaystyle Q_{2}(z,v)} of even and odd permutations by cycle count are given by
and
We require the quantity
which is
Finally, extracting coefficients from this generating function, we obtain
which is
which is in turn
This concludes the proof.
Similar statistics are available for random endomorphisms on a finite set. [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Random_permutation_statistics |
The random phase approximation ( RPA ) is an approximation method in condensed matter physics and nuclear physics . It was first introduced by David Bohm and David Pines as an important result in a series of seminal papers of 1952 and 1953. [ 1 ] [ 2 ] [ 3 ] For decades physicists had been trying to incorporate the effect of microscopic quantum mechanical interactions between electrons in the theory of matter. Bohm and Pines' RPA accounts for the weak screened Coulomb interaction and is commonly used for describing the dynamic linear electronic response of electron systems. It was further developed to the relativistic form (RRPA) by solving the Dirac equation . [ 4 ] [ 5 ]
In the RPA, electrons are assumed to respond only to the total electric potential V ( r ) which is the sum of the external perturbing potential V ext ( r ) and a screening potential V sc ( r ). The external perturbing potential is assumed to oscillate at a single frequency ω , so that the model yields via a self-consistent field (SCF) method [ 6 ] a dynamic dielectric function denoted by ε RPA ( k , ω ).
The contribution to the dielectric function from the total electric potential is assumed to average out , so that only the potential at wave vector k contributes. This is what is meant by the random phase approximation. The resulting dielectric function, also called the Lindhard dielectric function , [ 7 ] [ 8 ] correctly predicts a number of properties of the electron gas, including plasmons . [ 9 ]
The RPA was criticized in the late 1950s for overcounting the degrees of freedom and the call for justification led to intense work among theoretical physicists. In a seminal paper Murray Gell-Mann and Keith Brueckner showed that the RPA can be derived from a summation of leading-order chain Feynman diagrams in a dense electron gas. [ 10 ]
The consistency in these results became an important justification and motivated a very strong growth in theoretical physics in the late 50s and 60s.
The RPA vacuum | R P A ⟩ {\displaystyle \left|\mathrm {RPA} \right\rangle } for a bosonic system can be expressed in terms of non-correlated bosonic vacuum | M F T ⟩ {\displaystyle \left|\mathrm {MFT} \right\rangle } and original boson excitations a i † {\displaystyle \mathbf {a} _{i}^{\dagger }}
where Z is a symmetric matrix with | Z | ≤ 1 {\displaystyle |Z|\leq 1} and
The normalization can be calculated by
where Z i j = ( X t ) i k z k X j k {\displaystyle Z_{ij}=(X^{\mathrm {t} })_{i}^{k}z_{k}X_{j}^{k}} is the singular value decomposition of Z i j {\displaystyle Z_{ij}} . In terms of the new boson operators q ~ i = ( X † ) j i a j {\displaystyle {\tilde {\mathbf {q} }}^{i}=(X^{\dagger })_{j}^{i}\mathbf {a} ^{j}} , the connection between new and old excitations is given by | https://en.wikipedia.org/wiki/Random_phase_approximation
Random sample consensus ( RANSAC ) is an iterative method to estimate parameters of a mathematical model from a set of observed data that contains outliers , when outliers are to be accorded no influence [ clarify ] on the values of the estimates. Therefore, it also can be interpreted as an outlier detection method. [ 1 ] It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles at SRI International in 1981. They used RANSAC to solve the location determination problem (LDP), where the goal is to determine the points in the space that project onto an image into a set of landmarks with known locations.
RANSAC uses repeated random sub-sampling . [ 2 ] A basic assumption is that the data consists of "inliers", i.e., data whose distribution can be explained by some set of model parameters, though may be subject to noise, and "outliers", which are data that do not fit the model. The outliers can come, for example, from extreme values of the noise or from erroneous measurements or incorrect hypotheses about the interpretation of data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure that can estimate the parameters of a model optimally explaining or fitting this data.
A simple example is fitting a line in two dimensions to a set of observations. Assuming that this set contains both inliers , i.e., points which approximately can be fitted to a line, and outliers , points which cannot be fitted to this line, a simple least squares method for line fitting will generally produce a line with a bad fit to the data including inliers and outliers. The reason is that it is optimally fitted to all points, including the outliers. RANSAC, on the other hand, attempts to exclude the outliers and find a linear model that only uses the inliers in its calculation. This is done by fitting linear models to several random samplings of the data and returning the model that has the best fit to a subset of the data. Since the inliers tend to be more linearly related than a random mixture of inliers and outliers, a random subset that consists entirely of inliers will have the best model fit. In practice, there is no guarantee that a subset of inliers will be randomly sampled, and the probability of the algorithm succeeding depends on the proportion of inliers in the data as well as the choice of several algorithm parameters.
The RANSAC algorithm is a learning technique to estimate parameters of a model by random sampling of observed data. Given a dataset whose data elements contain both inliers and outliers, RANSAC uses the voting scheme to find the optimal fitting result. Data elements in the dataset are used to vote for one or multiple models. The implementation of this voting scheme is based on two assumptions: that the noisy features will not vote consistently for any single model (few outliers) and there are enough features to agree on a good model (few missing data). The RANSAC algorithm is essentially composed of two steps that are iteratively repeated:
The set of inliers obtained for the fitting model is called the consensus set . The RANSAC algorithm will iteratively repeat the above two steps until the obtained consensus set in certain iteration has enough inliers.
The input to the RANSAC algorithm is a set of observed data values, a model to fit to the observations, and some confidence parameters defining outliers. In more detail than the aforementioned RANSAC algorithm overview, RANSAC achieves its goal by repeating the following steps:
To converge to a sufficiently good model parameter set, this procedure is repeated a fixed number of times, each time producing either the rejection of a model because too few points are a part of the consensus set, or a refined model with a consensus set size larger than the previous consensus set.
The generic RANSAC algorithm works as the following pseudocode :
A Python implementation mirroring the pseudocode. This also defines a LinearRegressor based on least squares, applies RANSAC to a 2D regression problem, and visualizes the outcome:
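The referenced listing is not reproduced here; the following is a minimal sketch of the idea, assuming NumPy and a 2D line-fitting problem, with illustrative parameter names (k_iters, t_thresh, d_min) rather than any particular published interface.

```python
# Minimal RANSAC sketch for fitting a line y = slope*x + intercept to 2D points.
import numpy as np

def ransac_line(points, n_sample=2, k_iters=1000, t_thresh=0.5, d_min=20, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_count = None, 0
    for _ in range(k_iters):
        sample = points[rng.choice(len(points), n_sample, replace=False)]
        slope, intercept = np.polyfit(sample[:, 0], sample[:, 1], 1)      # maybe_model
        residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        inliers = points[residuals < t_thresh]                            # consensus set
        if len(inliers) >= d_min and len(inliers) > best_count:
            best_model = np.polyfit(inliers[:, 0], inliers[:, 1], 1)      # refit on all inliers
            best_count = len(inliers)
    return best_model, best_count

# Synthetic data: a noisy line plus a block of gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.3, 100)
y[:20] += rng.uniform(-20, 20, 20)
print(ransac_line(np.column_stack([x, y])))
```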
The threshold value to determine when a data point fits a model ( t ), and the number of inliers (data points fitted to the model within t ) required to assert that the model fits well to data ( d ) are determined based on specific requirements of the application and the dataset, and possibly based on experimental evaluation. The number of iterations ( k ), however, can be roughly determined as a function of the desired probability of success ( p ) as shown below.
Let p be the desired probability that the RANSAC algorithm provides at least one useful result after running. In the extreme case (to simplify the derivation), RANSAC returns a successful result if in some iteration it selects only inliers from the input data set when it chooses n points from the data set from which the model parameters are estimated. (In other words, all the selected n data points are inliers of the model estimated by these points). Let w {\displaystyle w} be the probability of choosing an inlier each time a single data point is selected; that is, roughly, w = (number of inliers in data) / (number of points in data).
A common case is that w {\displaystyle w} is not well known beforehand because of an unknown number of inliers in data before running the RANSAC algorithm, but some rough value can be given. With a given rough value of w {\displaystyle w} and roughly assuming that the n points needed for estimating a model are selected independently (a rough assumption, because each data point selection reduces the number of candidate data points in the next selection), w n {\displaystyle w^{n}} is the probability that all n points are inliers and 1 − w n {\displaystyle 1-w^{n}} is the probability that at least one of the n points is an outlier, a case which implies that a bad model will be estimated from this point set. That probability to the power of k (the number of iterations in running the algorithm) is the probability that the algorithm never selects a set of n points which all are inliers, and, in the extreme case considered above, this is the same as 1 − p {\displaystyle 1-p} (the probability that the algorithm does not result in a successful model estimation). Consequently,
1 − p = ( 1 − w n ) k {\displaystyle 1-p=(1-w^{n})^{k}}
which, after taking the logarithm of both sides, leads to
k = log ( 1 − p ) / log ( 1 − w n ) {\displaystyle k={\frac {\log(1-p)}{\log(1-w^{n})}}}
This result assumes that the n data points are selected independently, that is, a point which has been selected once is replaced and can be selected again in the same iteration. This is often not a reasonable approach and the derived value for k should be taken as an upper limit in the case that the points are selected without replacement. For example, in the case of finding a line which fits the data set illustrated in the above figure, the RANSAC algorithm typically chooses two points in each iteration and computes maybe_model as the line between the points and it is then critical that the two points are distinct.
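A small worked example of this formula (illustrative values only): for a desired success probability p = 0.99, inlier fraction w = 0.5 and sample size n = 2, the required number of iterations comes out to about 17.

```python
# Required RANSAC iterations k = log(1 - p) / log(1 - w**n).
import math

def ransac_iterations(p=0.99, w=0.5, n=2):
    return math.ceil(math.log(1 - p) / math.log(1 - w ** n))

print(ransac_iterations())   # about 17 for the default values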
To gain additional confidence, the standard deviation or multiples thereof can be added to k . The standard deviation of k is defined as
S D ( k ) = √( 1 − w n ) / w n {\displaystyle SD(k)={\frac {\sqrt {1-w^{n}}}{w^{n}}}}
An advantage of RANSAC is its ability to do robust estimation [ 3 ] of the model parameters, i.e., it can estimate the parameters with a high degree of accuracy even when a significant number of outliers are present in the data set. A disadvantage of RANSAC is that there is no upper bound on the time it takes to compute these parameters (except exhaustion). When the number of iterations computed is limited, the solution obtained may not be optimal, and it may not even be one that fits the data in a good way. In this way RANSAC offers a trade-off; by computing a greater number of iterations, the probability of a reasonable model being produced is increased. Moreover, RANSAC is not always able to find the optimal set even for moderately contaminated sets, and it usually performs badly when the number of inliers is less than 50%. Optimal RANSAC [ 4 ] was proposed to handle both these problems and is capable of finding the optimal set for heavily contaminated sets, even for an inlier ratio under 5%. Another disadvantage of RANSAC is that it requires the setting of problem-specific thresholds.
RANSAC can only estimate one model for a particular data set. As for any one-model approach when two (or more) model instances exist, RANSAC may fail to find either one. The Hough transform is one alternative robust estimation technique that may be useful when more than one model instance is present. Another approach for multi-model fitting is known as PEARL, [ 5 ] which combines model sampling from data points as in RANSAC with iterative re-estimation of inliers and the multi-model fitting being formulated as an optimization problem with a global energy function describing the quality of the overall solution.
The RANSAC algorithm is often used in computer vision , e.g., to simultaneously solve the correspondence problem and estimate the fundamental matrix related to a pair of stereo cameras; see also: Structure from motion , scale-invariant feature transform , image stitching , rigid motion segmentation .
Since 1981 RANSAC has become a fundamental tool in the computer vision and image processing community. In 2006, for the 25th anniversary of the algorithm, a workshop was organized at the International Conference on Computer Vision and Pattern Recognition (CVPR) to summarize the most recent contributions and variations to the original algorithm, mostly meant to improve the speed of the algorithm, the robustness and accuracy of the estimated solution and to decrease the dependency from user defined constants.
RANSAC can be sensitive to the choice of the correct noise threshold that defines which data points fit a model instantiated with a certain set of parameters. If such a threshold is too large, then all the hypotheses tend to be ranked equally (good). On the other hand, when the noise threshold is too small, the estimated parameters tend to be unstable ( i.e. simply adding or removing a datum to the set of inliers may cause the estimate of the parameters to fluctuate). To partially compensate for this undesirable effect, Torr et al. proposed two modifications of RANSAC called MSAC (M-estimator SAmple and Consensus) and MLESAC (Maximum Likelihood Estimation SAmple and Consensus). [ 6 ] The main idea is to evaluate the quality of the consensus set ( i.e. the data that fit a model and a certain set of parameters) by calculating its likelihood (whereas in the original formulation by Fischler and Bolles the rank was the cardinality of such set). An extension to MLESAC which takes into account the prior probabilities associated to the input dataset is proposed by Tordoff. [ 7 ] The resulting algorithm is dubbed Guided-MLESAC. Along similar lines, Chum proposed to guide the sampling procedure if some a priori information regarding the input data is known, i.e. whether a datum is likely to be an inlier or an outlier. The proposed approach is called PROSAC, PROgressive SAmple Consensus. [ 8 ]
Chum et al. also proposed a randomized version of RANSAC called R-RANSAC [ 9 ] to reduce the computational burden to identify a good consensus set. The basic idea is to initially evaluate the goodness of the currently instantiated model using only a reduced set of points instead of the entire dataset. A sound strategy will tell with high confidence when it is the case to evaluate the fitting of the entire dataset or when the model can be readily discarded. It is reasonable to think that the impact of this approach is more relevant in cases where the percentage of inliers is large. The type of strategy proposed by Chum et al. is called preemption scheme. Nistér proposed a paradigm called Preemptive RANSAC [ 10 ] that allows real time robust estimation of the structure of a scene and of the motion of the camera. The core idea of the approach consists in generating a fixed number of hypotheses so that the comparison happens with respect to the quality of the generated hypothesis rather than against some absolute quality metric.
Other researchers tried to cope with difficult situations where the noise scale is not known and/or multiple model instances are present. The first problem has been tackled in the work by Wang and Suter. [ 11 ] Toldo et al. represent each datum with the characteristic function of the set of random models that fit the point. Then multiple models are revealed as clusters which group the points supporting the same model. The clustering algorithm, called J-linkage, does not require prior specification of the number of models, nor does it necessitate manual parameters tuning. [ 12 ]
RANSAC has also been tailored for recursive state estimation applications, where the input measurements are corrupted by outliers and Kalman filter approaches, which rely on a Gaussian distribution of the measurement error, are doomed to fail. Such an approach is dubbed KALMANSAC. [ 13 ] | https://en.wikipedia.org/wiki/Random_sample_consensus |
Random sequential adsorption ( RSA ) refers to a process where particles are randomly introduced in a system, and if they do not overlap any previously adsorbed particle, they adsorb and remain fixed for the rest of the process. RSA can be carried out in computer simulation , in a mathematical analysis, or in experiments. It was first studied by one-dimensional models: the attachment of pendant groups in a polymer chain by Paul Flory , and the car-parking problem by Alfréd Rényi . [ 1 ] Other early works include those of Benjamin Widom . [ 2 ] In two and higher dimensions many systems have been studied by computer simulation, including in 2d, disks, randomly oriented squares and rectangles, aligned squares and rectangles, various other shapes, etc.
An important result is the maximum surface coverage, called the saturation coverage or the packing fraction. On this page we list that coverage for many systems.
The blocking process has been studied in detail in terms of the random sequential adsorption (RSA) model. [ 3 ] The simplest RSA model related to deposition of spherical particles considers irreversible adsorption of circular disks. One disk after another is placed randomly at a surface. Once a disk is placed, it sticks at the same spot, and cannot be removed. When an attempt to deposit a disk would result in an overlap with an already deposited disk, this attempt is rejected. Within this model, the surface is initially filled rapidly, but the more one approaches saturation the slower the surface is being filled. Within the RSA model, saturation is sometimes referred to as jamming. For circular disks, saturation occurs at a coverage of 0.547. When the depositing particles are polydisperse, much higher surface coverage can be reached, since the small particles will be able to deposit into the holes in between the larger deposited particles. On the other hand, rod like particles may lead to much smaller coverage, since a few misaligned rods may block a large portion of the surface.
For the one-dimensional parking-car problem, Renyi [ 1 ] has shown that the maximum coverage is equal to
θ 1 = ∫ 0 ∞ exp ( − 2 ∫ 0 x 1 − e − y y d y ) d x = 0.7475979202534 … {\displaystyle \theta _{1}=\int _{0}^{\infty }\exp \left(-2\int _{0}^{x}{\frac {1-e^{-y}}{y}}dy\right)dx=0.7475979202534\ldots }
the so-called Renyi car-parking constant. [ 4 ]
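A Monte Carlo sketch of the one-dimensional car-parking process (illustrative parameters only): unit-length cars are dropped at uniformly random positions on an interval and kept only when they do not overlap previously parked cars, and the covered fraction slowly approaches the Rényi constant as the number of attempts grows.

```python
# Random sequential adsorption of unit "cars" on a line segment.
import bisect
import random

def parked_fraction(length=200.0, attempts=400_000):
    cars = []                                        # sorted left endpoints of parked cars
    for _ in range(attempts):
        x = random.uniform(0.0, length - 1.0)
        i = bisect.bisect_left(cars, x)
        fits_left = (i == 0) or (cars[i - 1] + 1.0 <= x)
        fits_right = (i == len(cars)) or (x + 1.0 <= cars[i])
        if fits_left and fits_right:
            cars.insert(i, x)
    return len(cars) / length

print(parked_fraction())    # slowly approaches ~0.7476 as attempts increase
```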
Then followed the conjecture of Ilona Palásti , [ 5 ] who proposed that the coverage of d-dimensional aligned squares, cubes and hypercubes is equal to θ 1 d . This conjecture led to a great deal of work arguing in favor of it, against it, and finally computer simulations in two and three dimensions showing that it was a good approximation but not exact. The accuracy of this conjecture in higher dimensions is not known.
For k {\displaystyle k} -mers on a one-dimensional lattice, we have for the fraction of vertices covered, [ 6 ]
θ k = k ∫ 0 ∞ exp ( − u − 2 ∑ j = 1 k − 1 1 − e − j u j ) d u = k ∫ 0 1 exp ( − 2 ∑ j = 1 k − 1 1 − v j j ) d v {\displaystyle \theta _{k}=k\int _{0}^{\infty }\exp \left(-u-2\sum _{j=1}^{k-1}{\frac {1-e^{-ju}}{j}}\right)du=k\int _{0}^{1}\exp \left(-2\sum _{j=1}^{k-1}{\frac {1-v^{j}}{j}}\right)dv}
When k {\displaystyle k} goes to infinity, this gives the Renyi result above. For k = 2, this gives the Flory [ 7 ] result θ 2 = 1 − e − 2 {\displaystyle \theta _{2}=1-e^{-2}} .
For percolation thresholds related to random sequentially adsorbed particles, see Percolation threshold .
Asymptotic behavior: θ k ∼ θ ∞ + 0.2162 / k + … {\displaystyle \theta _{k}\sim \theta _{\infty }+0.2162/k+\ldots } .
R = size ratio of segments. Assume equal rates of adsorption
Asymptotic behavior: \theta_{k}\sim\theta_{\infty}+\ldots
For k = ∞, see "2d aligned squares" below.
Asymptotic behavior: [ 25 ] \theta_{k}\sim\theta_{\infty}+0.316/k+0.114/k^{2}+\ldots
See also [ 27 ] | https://en.wikipedia.org/wiki/Random_sequential_adsorption |
In statistical mechanics , the random-subcube model (RSM) is an exactly solvable model that reproduces key properties of hard constraint satisfaction problems (CSPs) and optimization problems , such as geometrical organization of solutions, the effects of frozen variables, and the limitations of various algorithms like decimation schemes.
The RSM consists of a set of N binary variables , where solutions are defined as points in a hypercube . The model introduces clusters, which are random subcubes of the hypercube, representing groups of solutions sharing specific characteristics. As the density of constraints increases, the solution space undergoes a series of phase transitions similar to those observed in CSPs like random k-satisfiability ( k-SAT ) and random k-coloring (k-COL). These transitions include clustering, condensation, and ultimately the unsatisfiable phase where no solutions exist.
The RSM is equivalent to these real CSPs in the limit of large constraint size. Notably, it reproduces the cluster size distribution and freezing properties of k-SAT and k-COL in the large-k limit. This is similar to how the random energy model is the large-p limit of the p-spin glass model .
There are N {\displaystyle N} particles. Each particle can be in one of two states − 1 , + 1 {\displaystyle -1,+1} .
The state space { − 1 , + 1 } N {\displaystyle \{-1,+1\}^{N}} has 2 N {\displaystyle 2^{N}} states. Not all are available. Only those satisfying the constraints are allowed.
Each constraint is a subset A i {\displaystyle A_{i}} of the state space. Each A i {\displaystyle A_{i}} is a "subcube", structured like A i = ∏ j ∈ 1 : N A i j {\displaystyle A_{i}=\prod _{j\in 1:N}A_{ij}} where each A i j {\displaystyle A_{ij}} can be one of { − 1 } , { + 1 } , { − 1 , + 1 } {\displaystyle \{-1\},\{+1\},\{-1,+1\}} .
The set of available states is the union of these subsets: S=\cup_{i}A_{i}
Each random subcube model is defined by two parameters α , p ∈ ( 0 , 1 ) {\displaystyle \alpha ,p\in (0,1)} .
To generate a random subcube A_{i}, sample its components A_{ij} IID according to \Pr(A_{ij}=\{-1\})=p/2,\quad \Pr(A_{ij}=\{+1\})=p/2,\quad \Pr(A_{ij}=\{-1,+1\})=1-p
Now sample 2 ( 1 − α ) N {\displaystyle 2^{(1-\alpha )N}} random subcubes, and union them together.
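The construction above is straightforward to sample directly. The sketch below draws the clusters of one instance (the function name and the choice of 0 to represent a free component are illustrative only).

```python
import numpy as np

def random_subcube_model(N, alpha, p, rng=None):
    """Sample the clusters of one random-subcube model instance.

    Each cluster is an array of length N whose entries are -1 or +1 for a
    frozen component and 0 for a free component (i.e. the set {-1, +1}).
    """
    rng = np.random.default_rng(rng)
    n_clusters = int(round(2 ** ((1 - alpha) * N)))
    return rng.choice([-1, 1, 0], size=(n_clusters, N), p=[p / 2, p / 2, 1 - p])

clusters = random_subcube_model(N=20, alpha=0.5, p=0.6, rng=0)
s = (clusters == 0).mean(axis=1)   # entropy density of each cluster, in bits
print(clusters.shape, s[:5])
```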
The entropy density of the r {\displaystyle r} -th cluster in bits is s r := 1 N log 2 | A r | {\displaystyle s_{r}:={\frac {1}{N}}\log _{2}|A_{r}|}
The entropy density of the system in bits is s := 1 N log 2 | ∪ r A r | {\displaystyle s:={\frac {1}{N}}\log _{2}|\cup _{r}A_{r}|}
Let n(s) be the number of clusters with entropy density s; then it is binomially distributed , thus E[n(s)]=2^{(1-\alpha)N}P\to 2^{N\Sigma(s)+o(N)},\quad \operatorname{Var}[n(s)]=2^{(1-\alpha)N}P(1-P),\quad \frac{\operatorname{Var}[n(s)]}{E[n(s)]^{2}}\to 2^{-N\Sigma(s)}, where P:=\binom{N}{sN}p^{(1-s)N}(1-p)^{sN}, \Sigma(s):=1-\alpha-D_{KL}(s\|1-p), and D_{KL}(s\|1-p):=s\log_{2}\frac{s}{1-p}+(1-s)\log_{2}\frac{1-s}{p}.
By the Chebyshev inequality , if Σ > 0 {\displaystyle \Sigma >0} , then n ( s ) {\displaystyle n(s)} concentrates to its mean value. Otherwise, since E [ n ( s ) ] → 0 {\displaystyle E[n(s)]\to 0} , n ( s ) {\displaystyle n(s)} also concentrates to 0 {\displaystyle 0} by the Markov inequality .
Thus, n(s)\to\begin{cases}2^{N\Sigma(s)+o(N)}&\text{if }\Sigma(s)>0\\0&\text{if }\Sigma(s)<0\end{cases} almost surely as N\to\infty.
When Σ = 0 {\displaystyle \Sigma =0} exactly, the two forces exactly balance each other out, and n ( s ) {\displaystyle n(s)} does not collapse, but instead converges in distribution to the Poisson distribution P o i s s o n ( 1 ) {\displaystyle Poisson(1)} by the law of small numbers.
For each state, the number of clusters it is in is also binomially distributed, with expectation 2 ( 1 − α ) N ( 1 − p / 2 ) N = 2 N ( log 2 ( 2 − p ) − α ) {\displaystyle 2^{(1-\alpha )N}(1-p/2)^{N}=2^{N(\log _{2}(2-p)-\alpha )}}
So if α < log 2 ( 2 − p ) {\displaystyle \alpha <\log _{2}(2-p)} , then it concentrates to 2 N ( log 2 ( 2 − p ) − α ) {\displaystyle 2^{N(\log _{2}(2-p)-\alpha )}} , and so each state is in an exponential number of clusters.
Indeed, in that case, the probability that all states are allowed is [1-[1-(1-p/2)^{N}]^{2^{(1-\alpha)N}}]^{2^{N}}\sim e^{-e^{-2^{N(\log_{2}(2-p)-\alpha)}+N\ln 2}}\to 1.
Thus almost surely, all states are allowed, and the entropy density is 1 bit per particle.
If \alpha>\alpha_{d}:=\log_{2}(2-p), then it concentrates to zero exponentially, and so most states are not in any cluster. Those states that are in at least one cluster are exponentially unlikely to be in more than one. Thus, we find that almost all states are in zero clusters, and of those in at least one cluster, almost all are in just one cluster. The state space is thus, roughly speaking, the disjoint union of the clusters.
Almost surely, there are n ( s ) = 2 N Σ ( s ) {\displaystyle n(s)=2^{N\Sigma (s)}} clusters of size 2 N s {\displaystyle 2^{Ns}} , therefore, the state space is dominated by clusters with optimal entropy density s ∗ = arg max s ( Σ ( s ) + s ) {\displaystyle s^{*}=\arg \max _{s}(\Sigma (s)+s)} .
Thus, in the clustered phase, the state space is almost entirely partitioned among 2 N Σ ( s ∗ ) {\displaystyle 2^{N\Sigma (s^{*})}} clusters of size 2 N s ∗ {\displaystyle 2^{Ns^{*}}} each. Roughly, the state space looks like exponentially many equally-sized clusters.
Another phase transition occurs when Σ ( s ∗ ) = 0 {\displaystyle \Sigma (s^{*})=0} , that is, α = α c := p ( 2 − p ) + log 2 ( 2 − p ) {\displaystyle \alpha =\alpha _{c}:={\frac {p}{(2-p)}}+\log _{2}(2-p)} When α > α c {\displaystyle \alpha >\alpha _{c}} , the optimal entropy density becomes unreachable, as there almost surely exists zero clusters with entropy density s ∗ {\displaystyle s^{*}} . Instead, the state space is dominated by clusters with entropy close to s c {\displaystyle s_{c}} , the larger solution to Σ ( s c ) = 0 {\displaystyle \Sigma (s_{c})=0} .
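As a small worked illustration, the two thresholds can be evaluated for a given p. This is a sketch: the function names are illustrative, and the maximiser s^{*}=2(1-p)/(2-p) used below is obtained by setting the derivative of \Sigma(s)+s to zero, a step not spelled out in the text.

```python
import numpy as np

def rsm_transitions(p):
    """Clustering threshold alpha_d and condensation threshold alpha_c."""
    return np.log2(2 - p), p / (2 - p) + np.log2(2 - p)

def complexity(s, alpha, p):
    """Sigma(s) = 1 - alpha - D_KL(s || 1-p), in bits."""
    dkl = s * np.log2(s / (1 - p)) + (1 - s) * np.log2((1 - s) / p)
    return 1 - alpha - dkl

p = 0.6
alpha_d, alpha_c = rsm_transitions(p)
s_star = 2 * (1 - p) / (2 - p)            # maximiser of Sigma(s) + s
print(alpha_d, alpha_c)                   # ~0.485 and ~0.914
print(complexity(s_star, alpha_c, p))     # ~0: Sigma(s*) vanishes exactly at alpha_c
```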
Near s_{c}, the contribution of clusters with entropy density s=s_{c}-\delta to the total state space is \underbrace{2^{Ns}}_{\text{size of clusters}}\times\underbrace{2^{N\Sigma(s)}}_{\text{number of clusters}}=2^{N(s+\Sigma(s))}=2^{N(s_{c}-\delta-\Sigma'(s_{c})\delta)}. At large N, the possible entropy densities are s_{c},s_{c}-1/N,s_{c}-2/N,\dots, and the contribution of each is 2^{Ns_{c}},\;2^{Ns_{c}}2^{-(1+\Sigma'(s_{c}))},\;2^{Ns_{c}}2^{-2(1+\Sigma'(s_{c}))},\dots
These contributions form a decreasing geometric sequence with ratio 2^{-(1+\Sigma'(s_{c}))}, so the total is dominated by the first few terms.
Thus, we see that for any ϵ > 0 {\displaystyle \epsilon >0} , at N → ∞ {\displaystyle N\to \infty } limit, over 1 − ϵ {\displaystyle 1-\epsilon } of the total state space is covered by only a finite number of clusters. The state space looks partitioned into clusters with exponentially decaying sizes. This is the condensation phase.
When α > 1 {\displaystyle \alpha >1} , the number of clusters is zero, so there are no states.
The RSM can be extended to include energy landscapes, allowing for the study of glassy behavior, temperature chaos, and the dynamic transition. | https://en.wikipedia.org/wiki/Random_subcube_model |
Random walk closeness centrality is a measure of centrality in a network , which describes the average speed with which randomly walking processes reach a node from other nodes of the network. It is similar to the closeness centrality except that the farness is measured by the expected length of a random walk rather than by the shortest path .
The concept was first proposed by White and Smyth (2003) under the name Markov centrality . [ 1 ]
Consider a network with a finite number of nodes and a random walk process that starts in a certain node and proceeds from node to node along the edges. From each node, it chooses randomly the edge to be followed. In an unweighted network, the probability of choosing a certain edge is equal across all available edges, while in a weighted network it is proportional to the edge weights.
A node is considered to be close to other nodes if a random walk process initiated from any node of the network arrives at this particular node in relatively few steps on average.
Consider a weighted network – either directed or undirected – with n nodes denoted by j=1, …, n; and a random walk process on this network with a transition matrix M. The element m_{ij} of M is the probability that the random walker, having reached node i, proceeds directly to node j. These probabilities are defined in the following way: m_{ij}=\frac{a_{ij}}{\sum_{k=1}^{n}a_{ik}},
where a i j {\displaystyle a_{ij}} is the (i,j)th element of the weighting matrix A of the network. When there is no edge between two nodes, the corresponding element of the A matrix is zero.
The random walk closeness centrality of a node i is the inverse of the average mean first passage time to that node: C_{i}^{RWC}=\frac{n}{\sum_{j=1}^{n}H(j,i)},
where H ( j , i ) {\displaystyle H(j,i)} is the mean first passage time from node j to node i.
The mean first passage time from node i to node j is the expected number of steps it takes for the process to reach node j from node i for the first time: H(i,j)=\sum_{r=1}^{\infty}r\,P(i,j,r),
where P(i,j,r) denotes the probability that it takes exactly r steps to reach j from i for the first time.
To calculate these probabilities of reaching a node for the first time in r steps, it is useful to regard the target node as an absorbing one, and introduce a transformation of M by deleting its j-th row and column and denoting it by M − j {\displaystyle M_{-j}} . As the probability of a process starting at i and being in k after r-1 steps is simply given by the (i,k)th element of M − j r − 1 {\displaystyle M_{-j}^{r-1}} , P(i,j,r) can be expressed as
Substituting this into the expression for mean first passage time yields
Using the formula for the summation of geometric series for matrices yields
where I is the n-1 dimensional identity matrix .
For computational convenience, this expression can be vectorized as H(.,j)=(I-M_{-j})^{-1}e,
where H ( . , j ) {\displaystyle H(.,j)} is the vector for first passage times for a walk ending at node j, and e is an n-1 dimensional vector of ones.
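Putting the pieces together, a minimal sketch of the computation follows (function names are illustrative; the normalisation C(i) = n / Σ_j H(j,i) with H(i,i) = 0 is one common convention and is assumed here).

```python
import numpy as np

def random_walk_closeness(A):
    """Random walk (Markov) closeness centrality from a non-negative weight matrix A."""
    n = A.shape[0]
    M = A / A.sum(axis=1, keepdims=True)        # row-normalised transition matrix
    closeness = np.zeros(n)
    for i in range(n):
        keep = [k for k in range(n) if k != i]
        Mi = M[np.ix_(keep, keep)]               # M with the i-th row and column deleted
        H = np.linalg.solve(np.eye(n - 1) - Mi, np.ones(n - 1))   # H(., i)
        closeness[i] = n / H.sum()               # inverse average passage time, H(i, i) = 0
    return closeness

# Undirected path graph 0 - 1 - 2: the middle node is the most central.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(random_walk_closeness(A))   # roughly [0.43, 1.5, 0.43]
```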
Mean first passage time is not symmetric, even for undirected graphs.
According to simulations performed by Noh and Rieger (2004), the distribution of random walk closeness centrality in a Barabási-Albert model is mainly determined by the degree distribution . In such a network, the random walk closeness centrality of a node is roughly proportional to, but does not increase monotonically with its degree.
Random walk closeness centrality is a more relevant measure than simple closeness centrality for applications where the concept of shortest paths is not meaningful or is too restrictive for a reasonable assessment of the nature of the system.
This is the case for example when the analyzed process evolves in the network without any specific intention to reach a certain point, or without the ability of finding the shortest path to reach its target. One example for a random walk in a network is the way a certain coin circulates in an economy: it is passed from one person to another through transactions, without any intention of reaching a specific individual.
Another example where the concept of shortest paths is not very useful is a densely connected network. Furthermore, as shortest paths are not influenced by self-loops , random walk closeness centrality is a more adequate measure than closeness centrality when analyzing networks where self-loops are important.
An important application on the field of economics is the analysis of the input-output model of an economy, which is represented by a densely connected weighted network with important self-loops . [ 2 ]
The concept is widely used in natural sciences as well. One biological application is the analysis of protein-protein interactions . [ 3 ]
A related concept, proposed by Newman, [ 4 ] is random walk betweenness centrality . Just as random walk closeness centrality is a random walk counterpart of closeness centrality , random walk betweenness centrality is, similarly, the random walk counterpart of betweenness centrality . Unlike the usual betweenness centrality measure, it does not only count shortest paths passing through the given node, but all possible paths crossing it.
Formally, the random walk betweenness centrality of a node is
where the r j k {\displaystyle r_{jk}} element of matrix R contains the probability of a random walk starting at node j with absorbing node k, passing through node i.
Calculating random walk betweenness in large networks is computationally very intensive. [ 5 ]
Another random walk based centrality is the second order centrality . [ 6 ] Instead of counting the shortest paths passing through a given node (as for random walk betweenness centrality), it focuses on another characteristic of random walks on graphs. The expectation of the standard deviation of the return times of a random walk to a node constitutes its centrality . The lower that deviation, the more central that node is.
Calculating the second order betweenness on large arbitrary graphs is also intensive, as its complexity is O ( n 3 ) {\displaystyle O(n^{3})} (worst case achieved on the Lollipop graph ). | https://en.wikipedia.org/wiki/Random_walk_closeness_centrality |
In mobility management , the random waypoint model is a random model for the movement of mobile users, and how their location, velocity and acceleration change over time. [ 1 ] Mobility models are used for simulation purposes when new network protocols are evaluated. The random waypoint model was first proposed by Johnson and Maltz. [ 2 ] It is one of the most popular mobility models [ 3 ] to evaluate mobile ad hoc network (MANET) routing protocols, because of its simplicity and wide availability.
In random-based mobility simulation models, the mobile nodes move randomly and freely without restrictions. To be more specific, the destination, speed and direction are all chosen randomly and independently of other nodes. This kind of model has been used in many simulation studies.
The random walk model and the random direction model are two variants of the random waypoint model.
The movement of nodes is governed in the following manner: each node begins by pausing for a fixed number of seconds. The node then selects a random destination in the simulation area and a random speed between 0 (excluded) and some maximum speed. The node moves to this destination and again pauses for a fixed period before selecting another random destination and speed. This behaviour is repeated for the length of the simulation. [ 4 ]
Simulation of model
BonnMotion is one of the tools for generating mobility scenarios based on the random waypoint model and many other mobility models, including the random walk model and the random direction model.
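A minimal sketch of the movement rule described above for a single node is given below; the parameter names and default values are illustrative and not taken from any particular simulator.

```python
import numpy as np

def random_waypoint(n_steps=1000, area=(1000.0, 1000.0), v_max=20.0,
                    pause=2.0, dt=1.0, rng=None):
    """Positions of one node under the random waypoint model, sampled every dt seconds."""
    rng = np.random.default_rng(rng)
    pos = rng.random(2) * area
    dest, speed, wait = pos.copy(), 0.0, pause
    trace = []
    for _ in range(n_steps):
        if wait > 0:                              # pausing at a waypoint
            wait -= dt
        else:
            d = np.linalg.norm(dest - pos)
            if d <= speed * dt:                   # waypoint reached (or still at start)
                pos = dest.copy()
                dest = rng.random(2) * area       # new uniform random destination
                speed = rng.uniform(0.0, v_max)   # new random speed in (0, v_max]
                wait = pause
            else:
                pos = pos + (dest - pos) / d * speed * dt
        trace.append(pos.copy())
    return np.array(trace)

print(random_waypoint(rng=0).shape)   # (1000, 2)
```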
In the context of mmWave communication, optical wireless communication, and Terahertz networks, the orientation of a device is also important (in contrast to the radio frequency networks). Therefore, it is essential to incorporate the orientation of the device with the mobility model. This concept was first introduced in the paper entitled "Modeling the Random Orientation of Mobile Devices: Measurement, Analysis and LiFi Use Case". [ 5 ] [ 6 ] [ 7 ]
| https://en.wikipedia.org/wiki/Random_waypoint_model
In statistical decision theory , a randomised decision rule or mixed decision rule is a decision rule that associates probabilities with deterministic decision rules. In finite decision problems, randomised decision rules define a risk set which is the convex hull of the risk points of the nonrandomised decision rules.
As nonrandomised alternatives always exist to randomised Bayes rules, randomisation is not needed in Bayesian statistics , although frequentist statistical theory sometimes requires the use of randomised rules to satisfy optimality conditions such as minimax , most notably when deriving confidence intervals and hypothesis tests about discrete probability distributions .
A statistical test making use of a randomized decision rule is called a randomized test .
Let D = { d 1 , d 2 . . . , d h } {\displaystyle {\mathcal {D}}=\{d_{1},d_{2}...,d_{h}\}} be a set of non-randomised decision rules with associated probabilities p 1 , p 2 , . . . , p h {\displaystyle p_{1},p_{2},...,p_{h}} . Then the randomised decision rule d ∗ {\displaystyle d^{*}} is defined as ∑ i = 1 h p i d i {\displaystyle \sum _{i=1}^{h}p_{i}d_{i}} and its associated risk function R ( θ , d ∗ ) {\displaystyle R(\theta ,d^{*})} is ∑ i = 1 h p i R ( θ , d i ) {\displaystyle \sum _{i=1}^{h}p_{i}R(\theta ,d_{i})} . [ 1 ] This rule can be treated as a random experiment in which the decision rules d 1 , . . . , d h ∈ D {\displaystyle d_{1},...,d_{h}\in {\mathcal {D}}} are selected with probabilities p 1 , . . . p h {\displaystyle p_{1},...p_{h}} respectively. [ 2 ]
Alternatively, a randomised decision rule may assign probabilities directly on elements of the actions space A {\displaystyle {\mathcal {A}}} for each member of the sample space . More formally, d ∗ ( x , A ) {\displaystyle d^{*}(x,A)} denotes the probability that an action a ∈ A {\displaystyle a\in {\mathcal {A}}} is chosen. Under this approach, its loss function is also defined directly as: ∫ A ∈ A d ∗ ( x , A ) L ( θ , A ) d A {\displaystyle \int _{A\in {\mathcal {A}}}d^{*}(x,A)L(\theta ,A)dA} . [ 3 ]
The introduction of randomised decision rules thus creates a larger decision space from which the statistician may choose his decision. As non-randomised decision rules are a special case of randomised decision rules where one decision or action has probability 1, the original decision space D {\displaystyle {\mathcal {D}}} is a proper subset of the new decision space D ∗ {\displaystyle {\mathcal {D}}^{*}} . [ 4 ]
As with nonrandomised decision rules, randomised decision rules may satisfy favourable properties such as admissibility, minimaxity and Bayes. This shall be illustrated in the case of a finite decision problem, i.e. a problem where the parameter space is a finite set of, say, k {\displaystyle k} elements.
The risk set, henceforth denoted as S {\displaystyle {\mathcal {S}}} , is the set of all vectors in which each entry is the value of the risk function associated with a randomised decision rule under a certain parameter: it contains all vectors of the form ( R ( θ 1 , d ∗ ) , . . . R ( θ k , d ∗ ) ) , d ∗ ∈ D ∗ {\displaystyle (R(\theta _{1},d^{*}),...R(\theta _{k},d^{*})),d^{*}\in {\mathcal {D}}^{*}} . Note that by the definition of the randomised decision rule, the risk set is the convex hull of the risks ( R ( θ 1 , d ) , . . . R ( θ k , d ) ) , d ∈ D {\displaystyle (R(\theta _{1},d),...R(\theta _{k},d)),d\in {\mathcal {D}}} . [ 5 ]
In the case where the parameter space has only two elements θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} , this constitutes a subset of R 2 {\displaystyle \mathbb {R} ^{2}} , so it may be drawn with respect to the coordinate axes R 1 {\displaystyle R_{1}} and R 2 {\displaystyle R_{2}} corresponding to the risks under θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} respectively. [ 6 ] An example is shown on the right.
An admissible decision rule is one that is not dominated by any other decision rule, i.e. there is no decision rule that has equal risk as or lower risk than it for all parameters and strictly lower risk than it for some parameter. In a finite decision problem, the risk point of an admissible decision rule has either lower x-coordinates or y-coordinates than all other risk points or, more formally, it is the set of rules with risk points of the form ( a , b ) {\displaystyle (a,b)} such that { ( R 1 , R 2 ) : R 1 ≤ a , R 2 ≤ b } ∩ S = ( a , b ) {\displaystyle \{(R_{1},R_{2}):R_{1}\leq a,R_{2}\leq b\}\cap {\mathcal {S}}=(a,b)} . Thus the left side of the lower boundary of the risk set is the set of admissible decision rules. [ 6 ] [ 7 ]
A minimax rule is one that minimises the supremum risk \sup_{\theta\in\Theta}R(\theta,d^{*}) among all decision rules in \mathcal{D}^{*}. Sometimes, a randomised decision rule may perform better than all nonrandomised decision rules in this regard. [ 1 ]
In a finite decision problem with two possible parameters, the minimax rule can be found by considering the family of squares Q ( c ) = { ( R 1 , R 2 ) : 0 ≤ R 1 ≤ c , 0 ≤ R 2 ≤ c } {\displaystyle Q(c)=\{(R_{1},R_{2}):0\leq R_{1}\leq c,0\leq R_{2}\leq c\}} . [ 8 ] The value of c {\displaystyle c} for the smallest of such squares that touches S {\displaystyle {\mathcal {S}}} is the minimax risk and the corresponding point or points on the risk set is the minimax rule.
If the risk set intersects the line R 1 = R 2 {\displaystyle R_{1}=R_{2}} , then the admissible decision rule lying on the line is minimax. If R 2 > R 1 {\displaystyle R_{2}>R_{1}} or R 1 > R 2 {\displaystyle R_{1}>R_{2}} holds for every point in the risk set, then the minimax rule can either be an extreme point (i.e. a nonrandomised decision rule) or a line connecting two extreme points (nonrandomised decision rules). [ 9 ] [ 6 ]
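For a finite problem, the minimax mixture over a given set of nonrandomised rules can also be found numerically as a small linear programme. The sketch below is illustrative: the two risk points are made up, and the function name is not from the literature.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_mixture(risks):
    """Mixing probabilities minimising the worst-case risk over a finite rule set.

    risks[i, t] = R(theta_t, d_i). Solve: minimise c subject to
    sum_i p_i * risks[i, t] <= c for every t, sum_i p_i = 1, p_i >= 0.
    """
    h, k = risks.shape
    c_obj = np.r_[np.zeros(h), 1.0]                 # variables (p_1..p_h, c)
    A_ub = np.c_[risks.T, -np.ones(k)]              # risks^T p - c <= 0
    A_eq = np.r_[np.ones(h), 0.0].reshape(1, -1)    # probabilities sum to one
    res = linprog(c_obj, A_ub=A_ub, b_ub=np.zeros(k), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * h + [(None, None)])
    return res.x[:h], res.x[-1]

# Two nonrandomised rules with risk points (1, 4) and (3, 1) under (theta_1, theta_2):
p, worst = minimax_mixture(np.array([[1.0, 4.0], [3.0, 1.0]]))
print(p, worst)   # ~[0.4, 0.6] and 2.2: the minimax mixture equalises the two risks
```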
A randomised Bayes rule is one that has infimum Bayes risk r ( π , d ∗ ) {\displaystyle r(\pi ,d^{*})} among all decision rules. In the special case where the parameter space has two elements, the line π 1 R 1 + ( 1 − π 1 ) R 2 = c {\displaystyle \pi _{1}R_{1}+(1-\pi _{1})R_{2}=c} , where π 1 {\displaystyle \pi _{1}} and π 2 {\displaystyle \pi _{2}} denote the prior probabilities of θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} respectively, is a family of points with Bayes risk c {\displaystyle c} . The minimum Bayes risk for the decision problem is therefore the smallest c {\displaystyle c} such that the line touches the risk set. [ 10 ] [ 11 ] This line may either touch only one extreme point of the risk set, i.e. correspond to a nonrandomised decision rule, or overlap with an entire side of the risk set, i.e. correspond to two nonrandomised decision rules and randomised decision rules combining the two. This is illustrated by the three situations below:
As different priors result in different slopes, the set of all rules that are Bayes with respect to some prior are the same as the set of admissible rules. [ 12 ]
Note that no situation is possible where a nonrandomised Bayes rule does not exist but a randomised Bayes rule does. The existence of a randomised Bayes rule implies the existence of a nonrandomised Bayes rule. This is also true in the general case, even with infinite parameter space, infinite Bayes risk, and regardless of whether the infimum Bayes risk can be attained. [ 3 ] [ 12 ] This supports the intuitive notion that the statistician need not utilise randomisation to arrive at statistical decisions. [ 4 ]
As randomised Bayes rules always have nonrandomised alternatives, they are unnecessary in Bayesian statistics . However, in frequentist statistics, randomised rules are theoretically necessary under certain situations, [ 13 ] and were thought to be useful in practice when they were first invented: Egon Pearson forecast that
they 'will not meet with strong objection'. [ 14 ] However, few statisticians actually implement them nowadays. [ 14 ] [ 15 ]
Randomized tests should not be confused with permutation tests . [ 16 ]
In the usual formulation of the likelihood ratio test , the null hypothesis is rejected whenever the likelihood ratio \Lambda exceeds some constant K, and accepted otherwise. However, this is sometimes problematic when \Lambda is discrete under the null hypothesis, when \Lambda=K is possible.
A solution is to define a test function \phi(x), whose value is the probability with which the null hypothesis is rejected: [ 17 ] [ 18 ]
\phi(x)=\begin{cases}1&\text{if }\Lambda>K\\p(x)&\text{if }\Lambda=K\\0&\text{if }\Lambda<K\end{cases}
This can be interpreted as flipping a biased coin with a probability p(x) of returning heads whenever \Lambda=K and rejecting the null hypothesis if a heads turns up. [ 15 ]
A generalised form of the Neyman–Pearson lemma states that this test has maximum power among all tests at the same significance level α {\displaystyle \alpha } , that such a test must exist for any significance level α {\displaystyle \alpha } , and that the test is unique under normal situations. [ 19 ]
As an example, consider the case where the underlying distribution is Bernoulli with probability p {\displaystyle p} , and we would like to test the null hypothesis p ≤ λ {\displaystyle p\leq \lambda } against the alternative hypothesis p > λ {\displaystyle p>\lambda } . It is natural to choose some k {\displaystyle k} such that P ( p ^ > k | H 0 ) = α {\displaystyle P({\hat {p}}>k|H_{0})=\alpha } , and reject the null whenever p ^ > k {\displaystyle {\hat {p}}>k} , where p ^ {\displaystyle {\hat {p}}} is the test statistic . However, to take into account cases where p ^ = k {\displaystyle {\hat {p}}=k} , we define the test function:
\phi(x)=\begin{cases}1&\text{if }{\hat{p}}>k\\\gamma&\text{if }{\hat{p}}=k\\0&\text{if }{\hat{p}}<k\end{cases}
where γ {\displaystyle \gamma } is chosen such that P ( p ^ > k | H 0 ) + γ P ( p ^ = k | H 0 ) = α {\displaystyle P({\hat {p}}>k|H_{0})+\gamma P({\hat {p}}=k|H_{0})=\alpha } .
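A sketch of this computation, working with the success count X rather than the proportion \hat{p} and using an illustrative function name, might look as follows.

```python
from scipy.stats import binom

def randomized_binomial_test(n, lam, alpha):
    """Critical value k and randomisation probability gamma.

    Reject H0: p <= lam when the success count X exceeds k, and reject with
    probability gamma when X == k, so that the size is exactly alpha.
    """
    k = 0
    while binom.sf(k, n, lam) > alpha:   # smallest k with P(X > k | lam) <= alpha
        k += 1
    tail = binom.sf(k, n, lam)
    gamma = (alpha - tail) / binom.pmf(k, n, lam)
    return k, gamma

k, gamma = randomized_binomial_test(n=20, lam=0.5, alpha=0.05)
print(k, gamma)   # k = 14, gamma ~ 0.79; size = P(X>14) + gamma*P(X=14) = 0.05
```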
An analogous problem arises in the construction of confidence intervals. For instance, the Clopper-Pearson interval is always conservative because of the discrete nature of the binomial distribution . An alternative is to find the upper and lower confidence limits U {\displaystyle U} and L {\displaystyle L} by solving the following equations: [ 14 ]
\begin{cases}\Pr({\hat{p}}<k\mid p=U)+\gamma\Pr({\hat{p}}=k\mid p=U)=\alpha/2\\\Pr({\hat{p}}>k\mid p=L)+\gamma\Pr({\hat{p}}=k\mid p=L)=\alpha/2\end{cases}
where γ {\displaystyle \gamma } is a uniform random variable on (0, 1). | https://en.wikipedia.org/wiki/Randomised_decision_rule |
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations . [ 1 ] Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM [ 2 ] and Google [ 3 ] to test the performance of the quantum operations.
The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, [ 1 ] considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. [ 4 ] as an application of the theory of unitary t-designs . In current usage randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures. [ 15 ]
Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for full characterization of errors (called tomography ) grows exponentially with the number of quantum bits (called qubits ). This makes tomographic methods impractical for even small systems of just 3 or 4 qubits. In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as number of qubits in the system increases. [ 4 ] Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum computing, procedures for state preparation and measurement (SPAM) are also error-prone, and thus quantum process tomography is unable to distinguish errors associated with gate operations from errors associated with SPAM. In contrast, RB protocols are robust to state-preparation and measurement errors [ 1 ] [ 7 ]
Randomized benchmarking protocols estimate key features of the errors that affect a set of quantum operations by examining how the observed fidelity of the final quantum state decreases as the length of the random sequence increases. If the set of operations satisfies certain mathematical properties, [ 1 ] [ 4 ] [ 7 ] [ 16 ] [ 10 ] [ 11 ] [ 12 ] such as comprising a sequence of twirls [ 5 ] [ 17 ] with unitary two-designs , [ 4 ] then the measured decay can be shown to be an invariant exponential with a rate fixed uniquely by features of the error model.
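As an illustration of how the decay is used in practice, the sketch below fits the standard model A p^m + B to survival probabilities at several sequence lengths and converts the decay parameter to an average error per gate via r = (1 - p)(d - 1)/d for a single qubit (d = 2). The data are synthetic and all numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Average sequence fidelity after m random Clifford gates."""
    return A * p ** m + B

rng = np.random.default_rng(0)
m = np.array([1, 5, 10, 20, 50, 100, 200])
F = 0.5 * 0.99 ** m + 0.5 + rng.normal(0.0, 0.005, m.size)   # synthetic "measurements"

(A, p, B), _ = curve_fit(rb_decay, m, F, p0=[0.5, 0.95, 0.5])
r = (1 - p) * (1 - 1 / 2)    # average error per gate for one qubit (d = 2)
print(p, r)                  # decay parameter ~0.99, error rate ~0.005
```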
Randomized benchmarking was proposed in Scalable noise estimation with random unitary operators , [ 1 ] where it was shown that long sequences of quantum gates sampled uniformly at random from the Haar measure on the group SU( d ) would lead to an exponential decay at a rate that was uniquely fixed by the error model. Emerson, Alicki and Zyczkowski also showed, under the assumption of gate-independent errors, that the measured decay rate is directly related to an important figure of merit, the average gate fidelity , and is independent of the choice of initial state and any errors in the initial state, as well as the specific random sequences of quantum gates. This protocol applied for arbitrary dimension d and an arbitrary number n of qubits, where d = 2^n. The SU( d ) RB protocol had two important limitations that were overcome in a modified protocol proposed by Dankert et al. , [ 4 ] who proposed sampling the gate operations uniformly at random from any unitary two-design, such as the Clifford group. They proved that this would produce the same exponential decay rate as the random SU( d ) version of the protocol proposed in Emerson et al. [ 1 ] This follows from the observation that a random sequence of gates is equivalent to an independent sequence of twirls under that group, as conjectured in [ 1 ] and later proven in [ 5 ]. This Clifford-group approach to randomized benchmarking [ 1 ] [ 4 ] is now the standard method for assessing error rates in quantum computers. A variation of this protocol was proposed by NIST in 2008 [ 6 ] for the first experimental implementation of an RB-type protocol for single-qubit gates. However, the sampling of random gates in the NIST protocol was later proven not to reproduce any unitary two-design. [ 12 ] The NIST RB protocol was later shown to also produce an exponential fidelity decay, albeit with a rate that depends on non-invariant features of the error model. [ 12 ]
In recent years a rigorous theoretical framework has been developed for Clifford-group RB protocols to show that they work reliably under very broad experimental conditions. In 2011 and 2012, Magesan et al. [ 7 ] [ 8 ] proved that the exponential decay rate is fully robust to arbitrary state preparation and measurement errors (SPAM). They also proved a connection between the average gate fidelity and diamond norm metric of error that is relevant to the fault-tolerant threshold. They also provided evidence that the observed decay was exponential and related to the average gate fidelity even if the error model varied across the gate operations, so-called gate-dependent errors, which is the experimentally realistic situation.
In 2018, Wallman [ 16 ] and Dugas et al. [ 11 ] showed that, despite concerns raised in [ 18 ], even under very strong gate-dependent errors the standard RB protocols produce an exponential decay at a rate that precisely measures the average gate fidelity of the experimentally relevant errors. The results of Wallman [ 16 ] in particular proved that the RB error rate is so robust to gate-dependent error models that it provides an extremely sensitive tool for detecting non- Markovian errors. This follows because under a standard RB experiment only non-Markovian errors (including time-dependent Markovian errors) can produce a statistically significant deviation from an exponential decay. [ 16 ]
The standard RB protocol was first implemented for single qubit gate operations in 2012 at Yale on a superconducting qubit. [ 19 ] A variation of this standard protocol that is only defined for single qubit operations was implemented by NIST in 2008 [ 6 ] on a trapped ion. The first implementation of the standard RB protocol for two-qubit gates was performed in 2012 at NIST for a system of two trapped ions [ 20 ] | https://en.wikipedia.org/wiki/Randomized_benchmarking |
In passing through matter, charged particles ionize and thus lose energy in many steps, until their energy is (almost) zero. The distance to this point is called the range of the particle. The range depends on the type of particle, on its initial energy and on the material through which it passes.
For example, if the ionising particle passing through the material is a positive ion like an alpha particle or proton , it will collide with atomic electrons in the material via Coulombic interaction . Since the mass of the proton or alpha particle is much greater than that of the electron , there will be no significant deviation from the radiation's incident path and very little kinetic energy will be lost in each collision. As such, it will take many successive collisions for such heavy ionising radiation to come to a halt within the stopping medium or material. Maximum energy loss will take place in a head-on collision with an electron .
Since large angle scattering is rare for positive ions, a range may be well defined for that radiation , depending on its energy and charge , as well as the ionisation energy of the stopping medium. Since the nature of such interactions is statistical, the number of collisions required to bring a radiation particle to rest within the medium will vary slightly with each particle (i.e., some may travel further and undergo fewer collisions than others). Hence, there will be a small variation in the range, known as straggling .
The energy loss per unit distance (and hence, the density of ionization), or stopping power also depends on the type and energy of the particle and on the material. Usually, the energy loss per unit distance increases while the particle slows down. The curve describing this fact is called the Bragg curve. Shortly before the end, the energy loss passes through a maximum, the Bragg Peak , and then drops to zero (see the figures in Bragg Peak and in stopping power ). This fact is of great practical importance for radiation therapy .
The range of alpha particles in ambient air amounts to only several centimeters; this type of radiation can therefore be stopped by a sheet of paper. Although beta particles scatter much more than alpha particles, a range can still be defined; it frequently amounts to several hundred centimeters of air.
The mean range can be calculated by integrating the inverse stopping power over energy.
The range of a heavy charged particle is approximately proportional to the mass of the particle and the inverse of the density of the medium, and is a function of the initial velocity of the particle.
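As a sketch of the range calculation described above, the snippet below integrates the inverse stopping power. The power-law stopping power used here is a crude illustrative stand-in; real calculations use tabulated stopping powers such as the NIST ASTAR/PSTAR data.

```python
from scipy.integrate import quad

def mean_range(E0, stopping_power, E_min=1e-3):
    """CSDA-style mean range: integral of dE / S(E) from ~0 up to the initial energy E0."""
    value, _ = quad(lambda E: 1.0 / stopping_power(E), E_min, E0)
    return value

# Toy stopping power S(E) = k/E (e.g. MeV per cm); purely illustrative.
k = 100.0
print(mean_range(5.0, lambda E: k / E))   # = E0**2 / (2*k) = 0.125 for this toy model
```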
| https://en.wikipedia.org/wiki/Range_(particle_radiation)
Range coding (or range encoding ) is an entropy coding method defined by G. Nigel N. Martin in a 1979 paper, [ 1 ] which effectively rediscovered the FIFO arithmetic code first introduced by Richard Clark Pasco in 1976. [ 2 ] Given a stream of symbols and their probabilities, a range coder produces a space-efficient stream of bits to represent these symbols and, given the stream and the probabilities, a range decoder reverses the process.
Range coding is very similar to arithmetic coding , except that coding is done with digits in any base, instead of with bits, and so it is faster when using larger bases (e.g. a byte ) at small cost in compression efficiency. [ 3 ] After the expiration of the first (1978) arithmetic coding patent, [ 4 ] range coding appeared to clearly be free of patent encumbrances. This particularly drove interest in the technique in the open source community. Since that time, patents on various well-known arithmetic coding techniques have also expired.
Range coding conceptually encodes all the symbols of the message into one number, unlike Huffman coding which assigns each symbol a bit-pattern and concatenates all the bit-patterns together. Thus range coding can achieve greater compression ratios than the one-bit-per-symbol lower bound on Huffman coding and it does not suffer the inefficiencies that Huffman does when dealing with probabilities that are not an exact power of two .
The central concept behind range coding is this: given a large-enough range of integers , and a probability estimation for the symbols, the initial range can easily be divided into sub-ranges whose sizes are proportional to the probability of the symbol they represent. Each symbol of the message can then be encoded in turn, by reducing the current range down to just that sub-range which corresponds to the next symbol to be encoded. The decoder must have the same probability estimation the encoder used, which can either be sent in advance, derived from already transferred data or be part of the compressor and decompressor.
When all symbols have been encoded, merely identifying the sub-range is enough to communicate the entire message (presuming of course that the decoder is somehow notified when it has extracted the entire message). A single integer is actually sufficient to identify the sub-range, and it may not even be necessary to transmit the entire integer; if there is a sequence of digits such that every integer beginning with that prefix falls within the sub-range, then the prefix alone is all that's needed to identify the sub-range and thus transmit the message.
Suppose we want to encode the message "AABA<EOM>", where <EOM> is the end-of-message symbol. For this example it is assumed that the decoder knows that we intend to encode exactly five symbols in the base 10 number system (allowing for 10^5 different combinations of symbols with the range [0, 100000)) using the probability distribution {A: .60; B: .20; <EOM>: .20}. The encoder breaks down the range [0, 100000) into three subranges:
Since our first symbol is an A, it reduces our initial range down to [0, 60000). The second symbol choice leaves us with three sub-ranges of this range. We show them following the already-encoded 'A':
With two symbols encoded, our range is now [0, 36000) and our third symbol leads to the following choices:
This time it is the second of our three choices that represent the message we want to encode, and our range becomes [21600, 28800). It may look harder to determine our sub-ranges in this case, but it is actually not: we can merely subtract the lower bound from the upper bound to determine that there are 7200 numbers in our range; that the first 4320 of them represent 0.60 of the total, the next 1440 represent the next 0.20, and the remaining 1440 represent the remaining 0.20 of the total. Adding back the lower bound gives us our ranges:
Finally, with our range narrowed down to [21600, 25920), we have just one more symbol to encode. Using the same technique as before for dividing up the range between the lower and upper bound, we find the three sub-ranges are:
And since <EOM> is our final symbol, our final range is [25056, 25920). Because all five-digit integers starting with "251" fall within our final range, it is one of the three-digit prefixes we could transmit that would unambiguously convey our original message. (The fact that there are actually eight such prefixes in all implies we still have inefficiencies. They have been introduced by our use of base 10 rather than base 2 .)
The central problem may appear to be selecting an initial range large enough that no matter how many symbols we have to encode, we will always have a current range large enough to divide into non-zero sub-ranges. In practice, however, this is not a problem, because instead of starting with a very large range and gradually narrowing it down, the encoder works with a smaller range of numbers at any given time. After some number of digits have been encoded, the leftmost digits will not change. In the example after coding just three symbols, we already knew that our final result would start with "2". More digits are shifted in on the right as digits on the left are sent off. This is illustrated in the following code:
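A hedged Python sketch of such a base-10 encoder is given below; the class and method names are illustrative, and the finishing step and the underflow adjustment discussed next are omitted here.

```python
class RangeEncoder:
    def __init__(self):
        self.low = 0
        self.range = 100000
        self.out = []                               # digits emitted so far

    def emit_digit(self):
        self.out.append(self.low // 10000)          # send the settled leading digit
        self.low = (self.low % 10000) * 10          # shift it out on the left...
        self.range *= 10                            # ...and widen the range on the right

    def encode(self, start, size, total):
        # narrow [low, low+range) to the symbol's sub-range; the symbol owns the
        # cumulative-frequency slice [start, start+size) out of `total`
        self.range //= total
        self.low += start * self.range
        self.range *= size
        # emit leading digits that can no longer change
        while self.low // 10000 == (self.low + self.range) // 10000:
            self.emit_digit()

enc = RangeEncoder()
for start, size in [(0, 6), (0, 6), (6, 2), (0, 6), (8, 2)]:   # A A B A <EOM>, total 10
    enc.encode(start, size, 10)
print(enc.out, enc.low, enc.range)    # [2, 5] plus the remaining interval state
```

Running this on the "AABA<EOM>" example emits the digits 2 and 5; the finishing step described below would then emit the remaining digits of a number such as "251".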
To finish off we may need to emit a few extra digits. The top digit of low is probably too small so we need to increment it, but we have to make sure we don't increment it past low+range . So first we need to make sure range is large enough.
One problem that can occur with the Encode function above is that range might become very small but low and low+range still have differing first digits. This could result in the interval having insufficient precision to distinguish between all of the symbols in the alphabet. When this happens we need to fudge a little, output the first couple of digits even though we might be off by one, and re-adjust the range to give us as much room as possible.
For example, imagine the input stream has led the encoder to the right-open interval [59888, 60188), that is, low = 59888 and range = 300 . The trick is to narrow down the interval to [59888, 60000) = [ 59 888, 59 999], which allows the encoder to emit two of the left-most digits of low , and readjust the interval to [88800, 99999] = [88800, 100000), that is, low = 88800 and range = 100000 - low .
The decoder will be following the same steps so it will know when it needs to do this to keep in sync.
Base 10 was used in this example, but a real implementation would just use binary, with the full range of the native integer data type. Instead of 10000 and 1000 you would likely use hexadecimal constants such as 0x1000000 and 0x10000 . Instead of emitting a digit at a time you would emit a byte at a time and use a byte-shift operation instead of multiplying by 10.
Decoding uses exactly the same algorithm with the addition of keeping track of the current code value consisting of the digits read from the compressor. Instead of emitting the top digit of low you just throw it away, but you also shift out the top digit of code and shift in a new digit read from the compressor. Use AppendDigit below instead of EmitDigit .
In order to determine which probability intervals to apply, the decoder needs to look at the current value of code within the interval [low, low+range) and decide which symbol this represents.
For the AABA<EOM> example above, this would return a value in the range 0 to 9. Values 0 through 5 would represent A, 6 and 7 would represent B, and 8 and 9 would represent <EOM>.
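A matching decoder sketch, again with illustrative names and without the underflow handling, primed with the digits of the example:

```python
class RangeDecoder:
    def __init__(self, digits):
        self.digits = iter(digits)
        self.low = 0
        self.range = 100000
        self.code = 0
        for _ in range(5):                        # read the first five digits
            self.code = self.code * 10 + next(self.digits, 0)

    def append_digit(self):
        # throw away the top digit of low, shift code, and read a new digit
        self.low = (self.low % 10000) * 10
        self.range *= 10
        self.code = (self.code % 10000) * 10 + next(self.digits, 0)

    def get_value(self, total):
        # which cumulative-frequency slot (0..total-1) the current code falls in
        return (self.code - self.low) // (self.range // total)

    def decode(self, start, size, total):
        # mirror of encode: narrow the range, then shift digits like the encoder
        self.range //= total
        self.low += start * self.range
        self.range *= size
        while self.low // 10000 == (self.low + self.range) // 10000:
            self.append_digit()

dec = RangeDecoder([2, 5, 1, 0, 0])       # the transmitted "251", padded with zeros
for _ in range(5):
    v = dec.get_value(10)                 # a digit 0..9, as described above
    if v < 6:   sym, start, size = "A", 0, 6
    elif v < 8: sym, start, size = "B", 6, 2
    else:       sym, start, size = "EOM", 8, 2
    print(sym, end=" ")
    dec.decode(start, size, 10)           # prints: A A B A EOM
```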
Arithmetic coding is the same as range coding, but with the integers taken as being the numerators of fractions . These fractions have an implicit, common denominator, such that all the fractions fall in the range [0,1). Accordingly, the resulting arithmetic code is interpreted as beginning with an implicit "0". As these are just different interpretations of the same coding methods, and as the resulting arithmetic and range codes are identical, each arithmetic coder is its corresponding range encoder, and vice versa. In other words, arithmetic coding and range coding are just two, slightly different ways of understanding the same thing.
In practice, though, so-called range encoders tend to be implemented pretty much as described in Martin's paper, [ 1 ] while arithmetic coders more generally tend not to be called range encoders. An often noted feature of such range encoders is the tendency to perform renormalization a byte at a time, rather than one bit at a time (as is usually the case). In other words, range encoders tend to use bytes as coding digits, rather than bits. While this does reduce the amount of compression that can be achieved by a very small amount, it is faster than when performing renormalization for each bit. | https://en.wikipedia.org/wiki/Range_coding |
Range Condition Scoring was developed as a way to quantify biodiversity in a given rangeland system. This practice is widely used in the Sand Hills region of Nebraska , as well as the tallgrass prairie regions, as evidenced by the authoritative book on the subject, "Range Judging Handbook and Contest Guide for Nebraska." This book outlines the steps required to evaluate, or score, a particular region of rangeland; and it serves as a baseline for the understanding of this method of judging rangeland health.
A certain area of land is chosen for a survey and random selections are made to determine where species composition measurements must be taken. Once these areas are selected, plant species composition measurements are taken by clipping the plants in a cordoned off area and measuring the mass of each type of plant species. This can then be compared to the entire plant mass in the area to determine the percent of each species located within the area.
Once these percentages are determined, they can be compared with the "Guides for determining range condition" located in the Range Judging Handbook. These tables show the amount of each species that is allowed in each area of rangeland. The tables differ depending upon average rainfall as well as soil type. These differences occur because the climax plant community would differ as the variables of rainfall and soil type change.
The score that is computed will fall in the range of 0-25% if the range is in "Poor" Condition, 26-50% if the range is in "Fair" Condition, 51-75% if the range is in "Good" Condition, and 76-100% if the range is in "Excellent" Condition.
By taking the range condition score that is determined, the researcher then can use Table 4 in the Nebraska Cooperative Extension Circular EC 86-113-C to determine an "Adjustment Factor for Initial Stocking Rate." This adjustment factor is then multiplied with the correct number found in Table 3 of the same Extension Circular to determine an initial stocking rate value for livestock. This stocking rate is expressed in units called AUM/acre (Animal Unit Months per acre). AUMs are based on the amount of forage that a 1000-pound animal will graze in one month's time, which is roughly 780 pounds of air-dry forage. This information is further detailed in the "Nebraska Handbook of Range Management" (EC 92-124-E by Reece and Stubbendieck).
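A hedged sketch of this bookkeeping is shown below. All species percentages, allowable caps, base stocking rates and adjustment factors are made-up placeholders; the real figures come from the range-condition guides and the Extension Circular tables cited above.

```python
def range_condition_score(composition, allowable):
    """Sum each species' share of clipped biomass, capped at its allowable share (percent)."""
    return sum(min(composition.get(sp, 0.0), cap) for sp, cap in allowable.items())

def condition_class(score):
    if score <= 25: return "Poor"
    if score <= 50: return "Fair"
    if score <= 75: return "Good"
    return "Excellent"

# Hypothetical clipped-biomass composition (percent of total) and allowable caps.
composition = {"big bluestem": 30, "prairie sandreed": 25, "Kentucky bluegrass": 20,
               "sand dropseed": 15, "annual forbs": 10}
allowable = {"big bluestem": 35, "prairie sandreed": 30, "sand dropseed": 10}

score = range_condition_score(composition, allowable)
print(score, condition_class(score))     # 65 -> "Good"

base_aum_per_acre = 0.6                  # placeholder for a Table-3-style entry
adjustment = 0.75                        # placeholder adjustment factor for this score
print(base_aum_per_acre * adjustment, "AUM/acre initial stocking rate")
```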
By connecting the research completed involving quantifying rangeland health to the research completed involving livestock grazing and distribution, we now have a system in place to more properly manage stocking rates of grazing livestock. In addition, we have a system that determines the amount of forage that should not be grazed to provide adequate support for wildlife biodiversity. This use, as well as others, is detailed below.
For livestock ranchers , landowners, wildlife conservationists , business owners of fee hunting enterprises and many others, that range condition score of certain tracts of rangeland can prove very valuable. The reasons for this include either valuation of rangeland or for impacts regarding changes in management of rangeland . When management practices are put in place to improve biodiversity and overall range health, this method of range condition scoring is one way of monitoring the improvements (or lack thereof). One example of the value of range condition scoring as a management tool can be seen in Leonard Sisson's research entitled "Recommendations for Management of Sharp-Tailed Grouse in the Nebraska Sand Hills ." In this work, he correlates the rise in range condition scores of rangeland to the increase in population of Sharp-tailed Grouse .
Other uses for range condition scores involve inclusion in leases that specify a certain level of range condition must be maintained. This may involve clauses which force the landowner to remove cattle from a certain area of leased range once a specified decrease in range condition score has occurred. For instance, when grazing pressure has increased, certain undesirable plants that do not contribute to a high range condition score (such as annual plants or non-natives) may increase. When this occurs, grazing rights must be relinquished to allow for adequate rest of the plant community. This should stimulate the lessee to maintain good grazing management to make his livestock grazing patterns more sustainable.
Other uses for range condition scoring include making adjustments to the stocking rate of grazing livestock as the range condition score changes in a certain pasture. This is evident when annual changes in precipitation have an effect on rangeland health. When a producer can determine the estimated forage available before the grazing season begins, this will allow the producer to be more flexible in his grazing management decisions. In EC 91-123, Reece et al. show how specific grazing management techniques may be used in order to more effectively mitigate drought and other precipitation changes on rangeland.
An owner of a fee-hunting enterprise would be able to determine suitability for hunting certain species of game animals because research has shown that wildlife populations and overall biodiversity increase as the range condition score increases. Increases in biodiversity are most apparent because range condition scoring is a direct measure of plant population biodiversity relative to the climax plant community. Animal biodiversity is tougher to correlate, but Sisson does so in the previously cited work.
A producer is able to use stocking rate data to more efficiently and evenly distribute grazing livestock in areas of rangeland. By determining the costs of cross-fencing a certain pasture, for example, the producer would compute the materials and labor needed to complete the task. By determining the benefits, he would determine the increase in harvest efficiency of the rangeland gained by more evenly distributing the livestock, moving the herd from one smaller pasture to another rather than continuously grazing the herd in one larger pasture. By cross-fencing and rotationally grazing, the livestock producer may be able to decrease or eliminate harmful effects of his operation on the rangeland, and also increase his productivity by being able to increase his herd size. This process is detailed in numerous grazing management research papers, one of which is by Waller, S., et al. and is titled "Understanding Grass Growth: the Key to Profitable Livestock Production."
There has been a shift in thought regarding grassland ecology as a new theory of "stable state ecology" has been proposed by van Andel and Grootjans (2006). This may serve as an alternative to the Range Condition Scoring method in terms of management towards climax plant communities.
Holistic Resource Management, an acronym commonly mistaken for "Holistic Ranch Management," is a system of resource management which emphasizes decision making for the long-term. This is a concept identified by Allan Savory , the famed Zimbabwean biologist, rancher, and environmentalist. This concept focuses on healing damaged land, while increasing productivity and sustainability. Range Condition Scoring is an important factor in this process as it serves as a quantifier. Even though qualitative evidence like "the range looks better" or "it looks like there is better ground cover" or "the grass seems more resilient to drought than in the 1930s because of the way ranchers manage things today," it is important that we use quantitative evidence rather than qualitative. A number of concepts and principles may be in place to solve rangeland health and degradation issues, but none of them are possible if monitoring and quantitative evidence are not present. Savory also developed the Savory brittleness scale which reflects the distribution of humidity throughout the year and how well the land can recover if left after being cleared. | https://en.wikipedia.org/wiki/Range_condition_scoring |
In mathematics , the range of a function may refer to either of two closely related concepts:
In some cases the codomain and the image of a function are the same set; such a function is called surjective or onto . For any non-surjective function f : X → Y , {\displaystyle f:X\to Y,} the codomain Y {\displaystyle Y} and the image Y ~ {\displaystyle {\tilde {Y}}} are different; however, a new function can be defined with the original function's image as its codomain, f ~ : X → Y ~ {\displaystyle {\tilde {f}}:X\to {\tilde {Y}}} where f ~ ( x ) = f ( x ) . {\displaystyle {\tilde {f}}(x)=f(x).} This new function is surjective.
Given two sets X and Y , a binary relation f between X and Y is a function (from X to Y ) if for every element x in X there is exactly one y in Y such that f relates x to y . The sets X and Y are called the domain and codomain of f , respectively. The image of the function f is the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f ( x ) = y .
As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain . [ 1 ] More modern books, if they use the word "range" at all, generally use it to mean what is now called the image . [ 2 ] To avoid any confusion, a number of modern books don't use the word "range" at all. [ 3 ]
Given a function
with domain X {\displaystyle X} , the range of f {\displaystyle f} , sometimes denoted ran ( f ) {\displaystyle \operatorname {ran} (f)} or Range ( f ) {\displaystyle \operatorname {Range} (f)} , [ 4 ] may refer to the codomain or target set Y {\displaystyle Y} (i.e., the set into which all of the output of f {\displaystyle f} is constrained to fall), or to f ( X ) {\displaystyle f(X)} , the image of the domain of f {\displaystyle f} under f {\displaystyle f} (i.e., the subset of Y {\displaystyle Y} consisting of all actual outputs of f {\displaystyle f} ). The image of a function is always a subset of the codomain of the function. [ 5 ]
As an example of the two different usages, consider the function f(x)=x^{2} as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers \mathbb{R}, but its image is the set of non-negative real numbers \mathbb{R}^{+}, since x^{2} is never negative if x is real. For this function, if we use "range" to mean codomain, it refers to \mathbb{R}; if we use "range" to mean image, it refers to \mathbb{R}^{+}.
For some functions, the image and the codomain coincide; these functions are called surjective or onto . For example, consider the function f ( x ) = 2 x , {\displaystyle f(x)=2x,} which inputs a real number and outputs its double. For this function, both the codomain and the image are the set of all real numbers, so the word range is unambiguous.
Even in cases where the image and codomain of a function are different, a new function can be uniquely defined with its codomain as the image of the original function. For example, as a function from the integers to the integers, the doubling function f ( n ) = 2 n {\displaystyle f(n)=2n} is not surjective because only the even integers are part of the image. However, a new function f ~ ( n ) = 2 n {\displaystyle {\tilde {f}}(n)=2n} whose domain is the integers and whose codomain is the even integers is surjective. For f ~ , {\displaystyle {\tilde {f}},} the word range is unambiguous. | https://en.wikipedia.org/wiki/Range_of_a_function |
Range of motion (or ROM ) is the linear or angular distance that a moving object may normally travel while properly attached to another.
In biomechanics and strength training , ROM refers to the angular distance and direction a joint can move between the flexed position and the extended position. [ 1 ] The act of attempting to increase this distance through therapeutic exercises (range of motion therapy— stretching from flexion to extension for physiological gain) is also sometimes called range of motion.
In mechanical engineering , it (also called range of travel or ROT ) is used particularly when talking about mechanical devices, such as a sound volume control knob.
Each specific joint has a normal range of motion that is expressed in degrees. The reference values for the normal ROM in individuals differ slightly depending on age and sex. [ 2 ] For example, as an individual ages, they typically lose a small amount of ROM.
Analog and traditional devices to measure range of motion in the joints of the body include the goniometer and inclinometer which use a stationary arm, protractor, fulcrum, and movement arm to measure angle from axis of the joint. As measurement results will vary by the degree of resistance, two levels of range of motion results are recorded in most cases.
Recent technological advances in 3D motion capture technology allow for the measurement of joints concurrently, which can be used to measure a patient's active range of motion.
Limited range of motion refers to a joint that has a reduction in its ability to move. The reduced motion may be a problem with the specific joint or it may be caused by injury or diseases such as osteoarthritis, rheumatoid arthritis, or other types of arthritis. Pain, swelling, and stiffness associated with arthritis can limit the range of motion of a particular joint and impair function and the ability to perform usual daily activities.
Limited range of motion can affect extension or flexion. If there is limited range of extension, it is called " flexion contracture " or " flexion deformity ". If the flexion is deficient, it is called " limited range of flexion " or " limited flexion range ".
Physical and occupational therapy can help to improve joint function by focusing on range of motion exercises. The goal of these exercises is to gently increase range of motion while decreasing pain, swelling, and stiffness. There are three types of range of motion exercises: | https://en.wikipedia.org/wiki/Range_of_motion |
In geology , range offset is the time difference between the last fossil occurrence of a taxon and the actual disappearance of this taxon. Range offset can be used as a measure of biostratigraphic precision [ 1 ] and determines among others how much information about extinctions can be derived from fossil occurrences.
The range offset of a taxon is defined as [ 2 ]
Range offset is strongly affected by sequence stratigraphy . Simulations show that range offset changes by up to three orders of magnitude dependent on the position in the systems tracts . [ 2 ]
| https://en.wikipedia.org/wiki/Range_offset |
Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species , taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters , any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. [ 1 ] [ 2 ] Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states.
Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy.
An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals ( CMS , or the “ Bonn Convention ”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states. [ 2 ]
| https://en.wikipedia.org/wiki/Range_state |
Rangeland health refers to the degree to which the integrity of the soil and the ecological processes of rangeland ecosystems are sustained. [ 1 ] The attributes evaluated during rangeland health assessments are: 1) Soil and Site Stability, 2) Hydrologic Function, and 3) Biotic Integrity. [ 2 ]
| https://en.wikipedia.org/wiki/Rangeland_health |
A ranging rod , or range rod , is a surveying instrument used for marking the position of stations, and for sightings of those stations, as well as for ranging straight lines . [ 1 ] Initially these were made of light, thin, and straight bamboo , or of well seasoned wood such as teak , pine , or deodar . They were shod with iron at the bottom and surmounted with a flag about 250 mm square in size. [ 2 ] Nowadays [ when? ] they are made of wood, metal, or fibreglass . The rods are usually about 30 mm in diameter and 2 or 3 m long, painted with alternating bands, such as red and white, red and yellow, or black and white, in lengths of 200 mm (i.e. one link length of metric chain), 500 mm, or 1 foot. These colours are used so that the rod can be properly sighted in case of long distance or bad weather. Ranging rods of greater length, e.g. 3 to 6 m, are called ranging or range poles , and are used for very long survey lines. [ 3 ] Another type of ranging rod is known as an offset rod , which has no flag at the top. It is used for measuring small offsets from the survey line when the work is of an ordinary nature. [ 4 ]
When ranging rods are in short supply, thin wooden sticks 400 mm to 1 m in length, with white paper inserted into cuts at their tops, can serve the same purpose. Such sticks are pointed at the bottom. These are called whites. | https://en.wikipedia.org/wiki/Ranging_rod |
In apportionment theory , rank-index methods [ 1 ] : Sec.8 are a set of apportionment methods that generalize the divisor method . These have also been called Huntington methods , [ 2 ] since they generalize an idea by Edward Vermilye Huntington .
Like all apportionment methods, the inputs of any rank-index method are:
Its output is a vector of integers a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} with ∑ i = 1 n a i = h {\displaystyle \sum _{i=1}^{n}a_{i}=h} , called an apportionment of h {\displaystyle h} , where a i {\displaystyle a_{i}} is the number of items allocated to agent i .
Every rank-index method is parametrized by a rank-index function r ( t , a ) {\displaystyle r(t,a)} , which is increasing in the entitlement t {\displaystyle t} and decreasing in the current allocation a {\displaystyle a} . The apportionment is computed iteratively as follows: initially, every agent receives 0 items; then, in each of h {\displaystyle h} iterations, one additional item is given to an agent whose current rank-index r ( t i , a i ) {\displaystyle r(t_{i},a_{i})} is largest.
Divisor methods are a special case of rank-index methods: a divisor method with divisor function d ( a ) {\displaystyle d(a)} is equivalent to a rank-index method with rank-index function r ( t , a ) = t / d ( a ) {\displaystyle r(t,a)=t/d(a)} .
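A minimal sketch of this iterative procedure is given below, assuming the rank-index function r ( t , a ) = t / d ( a ) with d ( a ) = a + 1 (the Jefferson/D'Hondt divisor); the function, the entitlements and the house size are illustrative choices, not taken from the text.

```python
# Sketch of a rank-index apportionment (illustrative only).
# Assumes r(t, a) = t / d(a) with d(a) = a + 1, i.e. the Jefferson/D'Hondt
# divisor method viewed as a rank-index method.

def rank_index_apportionment(entitlements, house_size, r=lambda t, a: t / (a + 1)):
    allocation = [0] * len(entitlements)
    for _ in range(house_size):
        # Give the next item to an agent with the highest rank-index
        # (ties broken here by lowest index).
        scores = [r(t, a) for t, a in zip(entitlements, allocation)]
        allocation[scores.index(max(scores))] += 1
    return allocation

# Example: three parties with vote shares 0.52, 0.30, 0.18 and 10 seats.
print(rank_index_apportionment([0.52, 0.30, 0.18], 10))  # [5, 3, 2]
```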
Every rank-index method can be defined using a min-max inequality: a is an allocation for the rank-index method with function r , if-and-only-if: [ 1 ] : Thm.8.1
min i : a i > 0 r ( t i , a i − 1 ) ≥ max i r ( t i , a i ) {\displaystyle \min _{i:a_{i}>0}r(t_{i},a_{i}-1)\geq \max _{i}r(t_{i},a_{i})} .
Every rank-index method is house-monotone . This means that, when h {\displaystyle h} increases, the allocation of each agent weakly increases. This immediately follows from the iterative procedure.
Every rank-index method is uniform . This means that, if we take some subset of the agents 1 , … , k {\displaystyle 1,\ldots ,k} and apply the same method to their combined allocation, then the result is exactly the vector ( a 1 , … , a k ) {\displaystyle (a_{1},\ldots ,a_{k})} . In other words: every part of a fair allocation is fair too. This immediately follows from the min-max inequality.
Moreover:
A quota-capped divisor method is an apportionment method where we begin by assigning every state its lower quota of seats. Then, we add seats one-by-one to the state with the highest votes-per-seat average, so long as adding an additional seat does not result in the state exceeding its upper quota. [ 3 ]
Every quota-capped divisor method satisfies house monotonicity . Moreover, quota-capped divisor methods satisfy the quota rule . [ 5 ] : Thm.7.1
However, quota-capped divisor methods violate the participation criterion (also called population monotonicity )—it is possible for a party to lose a seat as a result of winning more votes. [ 5 ] : Tbl.A7.2 This occurs when:
Moreover, quota-capped versions of other algorithms frequently violate the true quota in the presence of error (e.g. census miscounts). Jefferson's method frequently violates the true quota, even after being quota-capped, while Webster's method and Huntington-Hill perform well even without quota-caps. [ 6 ] | https://en.wikipedia.org/wiki/Rank-index_method |
Rank-width is a graph width parameter used in graph theory and parameterized complexity , and defined using linear algebra .
It is defined from hierarchical clusterings of the vertices of a given graph, which can be visualized as ternary trees having the vertices as their leaves. Removing any edge from such a tree disconnects it into two subtrees and partitions the vertices into two subsets. The graph edges that cross from one side of the partition to the other can be described by a biadjacency matrix ; for the purposes of rank-width, this matrix is defined over the finite field GF(2) rather than using real numbers . The rank-width of a graph is the maximum of the ranks of the biadjacency matrices, for a clustering chosen to minimize this maximum. [ 1 ]
Rank-width is closely related to clique-width : k ≤ c ≤ 2 k + 1 − 1 {\displaystyle k\leq c\leq 2^{k+1}-1} , where c {\displaystyle c} is the clique-width and k {\displaystyle k} the rank-width. However, clique-width is NP-hard to compute, for graphs of large clique-width, and its parameterized complexity is unknown. In contrast, testing whether the rank-width is at most a constant k {\displaystyle k} takes polynomial time , and even when the rank-width is not constant it can be approximated, with a constant approximation ratio , in polynomial time. For this reason, rank-width can be used as a more easily computed substitute for clique-width. [ 1 ]
An example of a family of graphs with high rank-width is provided by the square grid graphs . For an n × n {\displaystyle n\times n} grid graph, the rank-width is exactly n − 1 {\displaystyle n-1} . [ 2 ]
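As an illustration of the quantity being optimized, the sketch below computes the GF(2) rank of the biadjacency matrix induced by a single vertex bipartition; the graph (a 4-cycle) and the chosen partition are hypothetical examples, not taken from the text.

```python
import numpy as np

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), via Gaussian elimination mod 2."""
    m = np.array(matrix, dtype=np.uint8) % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move the pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]               # eliminate modulo 2
        rank += 1
    return rank

# Hypothetical 4-cycle 0-1-2-3-0, split into the sides {0, 1} and {2, 3}.
# Rows index {0, 1}, columns index {2, 3}; the crossing edges are 1-2 and 3-0.
biadjacency = [[0, 1],   # vertex 0 is adjacent to vertex 3
               [1, 0]]   # vertex 1 is adjacent to vertex 2
print(gf2_rank(biadjacency))  # 2
```

Rank-width itself is the minimum, over all hierarchical clusterings, of the maximum such cut-rank.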
| https://en.wikipedia.org/wiki/Rank-width |
In graph theory , a branch of mathematics, the rank of an undirected graph has two unrelated definitions. Let n equal the number of vertices of the graph.
A sample graph and matrix:
(corresponding to the four edges, e1–e4):
( 0 1 1 1 1 0 0 0 1 0 0 1 1 0 1 0 ) . {\displaystyle {\begin{pmatrix}0&1&1&1\\1&0&0&0\\1&0&0&1\\1&0&1&0\\\end{pmatrix}}.}
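As a quick numerical check (a sketch using NumPy over the real numbers, not part of the original example), the rank of this adjacency matrix can be computed directly:

```python
import numpy as np

# Adjacency matrix from the example above.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])

print(np.linalg.matrix_rank(A))  # 4
```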
In this example, the matrix theory rank of the matrix is 4, because its column vectors are linearly independent. | https://en.wikipedia.org/wiki/Rank_(graph_theory) |
In linear algebra , the rank of a matrix A is the dimension of the vector space generated (or spanned ) by its columns. [ 1 ] [ 2 ] [ 3 ] This corresponds to the maximal number of linearly independent columns of A . This, in turn, is identical to the dimension of the vector space spanned by its rows. [ 4 ] Rank is thus a measure of the " nondegenerateness " of the system of linear equations and linear transformation encoded by A . There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted by rank( A ) or rk( A ) ; [ 2 ] sometimes the parentheses are not written, as in rank A . [ i ]
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these.
The column rank of A is the dimension of the column space of A , while the row rank of A is the dimension of the row space of A .
A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank , below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A .
A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank.
The rank of a linear map or operator Φ {\displaystyle \Phi } is defined as the dimension of its image : [ 5 ] [ 6 ] [ 7 ] [ 8 ] rank ( Φ ) := dim ( img ( Φ ) ) {\displaystyle \operatorname {rank} (\Phi ):=\dim(\operatorname {img} (\Phi ))} where dim {\displaystyle \dim } is the dimension of a vector space, and img {\displaystyle \operatorname {img} } is the image of a map.
The matrix [ 1 0 1 0 1 1 0 1 1 ] {\displaystyle {\begin{bmatrix}1&0&1\\0&1&1\\0&1&1\end{bmatrix}}} has rank 2: the first two columns are linearly independent , so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3.
The matrix A = [ 1 1 0 2 − 1 − 1 0 − 2 ] {\displaystyle A={\begin{bmatrix}1&1&0&2\\-1&-1&0&-2\end{bmatrix}}} has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose A T = [ 1 − 1 1 − 1 0 0 2 − 2 ] {\displaystyle A^{\mathrm {T} }={\begin{bmatrix}1&-1\\1&-1\\0&0\\2&-2\end{bmatrix}}} of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A , the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank( A ) = rank( A T ) .
A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form , by elementary row operations . Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.
For example, the matrix A given by A = [ 1 2 1 − 2 − 3 1 3 5 0 ] {\displaystyle A={\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}} can be put in reduced row-echelon form by using the following elementary row operations: [ 1 2 1 − 2 − 3 1 3 5 0 ] → 2 R 1 + R 2 → R 2 [ 1 2 1 0 1 3 3 5 0 ] → − 3 R 1 + R 3 → R 3 [ 1 2 1 0 1 3 0 − 1 − 3 ] → R 2 + R 3 → R 3 [ 1 2 1 0 1 3 0 0 0 ] → − 2 R 2 + R 1 → R 1 [ 1 0 − 5 0 1 3 0 0 0 ] . {\displaystyle {\begin{aligned}{\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}&\xrightarrow {2R_{1}+R_{2}\to R_{2}} {\begin{bmatrix}1&2&1\\0&1&3\\3&5&0\end{bmatrix}}\xrightarrow {-3R_{1}+R_{3}\to R_{3}} {\begin{bmatrix}1&2&1\\0&1&3\\0&-1&-3\end{bmatrix}}\\&\xrightarrow {R_{2}+R_{3}\to R_{3}} \,\,{\begin{bmatrix}1&2&1\\0&1&3\\0&0&0\end{bmatrix}}\xrightarrow {-2R_{2}+R_{1}\to R_{1}} {\begin{bmatrix}1&0&-5\\0&1&3\\0&0&0\end{bmatrix}}~.\end{aligned}}} The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix A is 2.
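The same computation can be reproduced with a computer algebra system; the sketch below uses SymPy's rref on the example matrix.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [-2, -3, 1],
               [3, 5, 0]])

rref_form, pivot_columns = A.rref()
print(rref_form)            # Matrix([[1, 0, -5], [0, 1, 3], [0, 0, 0]])
print(len(pivot_columns))   # 2 pivots, hence rank 2
print(A.rank())             # 2
```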
When applied to floating point computations on computers, basic Gaussian elimination ( LU decomposition ) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization ), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
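A minimal sketch of this idea is shown below: count the singular values above a tolerance. The example matrix is an illustrative rank-2 matrix (its third column is the sum of the first two), and the tolerance mirrors the default used by numpy.linalg.matrix_rank.

```python
import numpy as np

# Illustrative rank-deficient matrix: column 3 = column 1 + column 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()   # practical "zero" criterion

print(s)                          # roughly [3, 1, ~0]
print(int(np.sum(s > tol)))       # 2: the numerical rank
print(np.linalg.matrix_rank(A))   # 2, computed the same way by NumPy
```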
The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms . Here is a variant of this proof:
It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation . As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix is the number of its nonzero entries.
We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field . The proof is based upon Wardlaw (2005). [ 9 ] The second uses orthogonality and is valid for matrices over the real numbers ; it is based upon Mackiw (1995). [ 4 ] Both proofs can be found in the book by Banerjee and Roy (2014). [ 10 ]
Let A be an m × n matrix. Let the column rank of A be r , and let c 1 , ..., c r be any basis for the column space of A . Place these as the columns of an m × r matrix C . Every column of A can be expressed as a linear combination of the r columns in C . This means that there is an r × n matrix R such that A = CR . R is the matrix whose i th column is formed from the coefficients giving the i th column of A as a linear combination of the r columns of C . In other words, R is the matrix which contains the multiples for the bases of the column space of A (which is C ), which are then used to form A as a whole. Now, each row of A is given by a linear combination of the r rows of R . Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma , the row rank of A cannot exceed r . This proves that the row rank of A is less than or equal to the column rank of A . This result can be applied to any matrix, so apply the result to the transpose of A . Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A , this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A . (Also see Rank factorization .)
Let A be an m × n matrix with entries in the real numbers whose row rank is r . Therefore, the dimension of the row space of A is r . Let x 1 , x 2 , …, x r be a basis of the row space of A . We claim that the vectors A x 1 , A x 2 , …, A x r are linearly independent . To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c 1 , c 2 , …, c r : 0 = c 1 A x 1 + c 2 A x 2 + ⋯ + c r A x r = A ( c 1 x 1 + c 2 x 2 + ⋯ + c r x r ) = A v , {\displaystyle 0=c_{1}A\mathbf {x} _{1}+c_{2}A\mathbf {x} _{2}+\cdots +c_{r}A\mathbf {x} _{r}=A(c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r})=A\mathbf {v} ,} where v = c 1 x 1 + c 2 x 2 + ⋯ + c r x r . We make two observations: (a) v is a linear combination of vectors in the row space of A , which implies that v belongs to the row space of A , and (b) since A v = 0 , the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A . The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v , c 1 x 1 + c 2 x 2 + ⋯ + c r x r = 0. {\displaystyle c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r}=0.} But recall that the x i were chosen as a basis of the row space of A and so are linearly independent. This implies that c 1 = c 2 = ⋯ = c r = 0 . It follows that A x 1 , A x 2 , …, A x r are linearly independent.
Now, each A x i is obviously a vector in the column space of A . So, A x 1 , A x 2 , …, A x r is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A ) must be at least as big as r . This proves that row rank of A is no larger than the column rank of A . Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F .
Given the matrix A {\displaystyle A} , there is an associated linear mapping f : F n → F m {\displaystyle f:F^{n}\to F^{m}} defined by f ( x ) = A x . {\displaystyle f(x)=Ax.} The rank of A {\displaystyle A} is the dimension of the image of f {\displaystyle f} . This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f . The rank–nullity theorem states that this definition is equivalent to the preceding one.
The rank of A is the maximal number of linearly independent columns c 1 , c 2 , … , c k {\displaystyle \mathbf {c} _{1},\mathbf {c} _{2},\dots ,\mathbf {c} _{k}} of A ; this is the dimension of the column space of A (the column space being the subspace of F m generated by the columns of A , which is in fact just the image of the linear map f associated to A ).
The rank of A is the maximal number of linearly independent rows of A ; this is the dimension of the row space of A .
The rank of A is the smallest positive integer k such that A can be factored as A = C R {\displaystyle A=CR} , where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k , the following are equivalent:
Indeed, the following equivalences are obvious: ( 1 ) ⇔ ( 2 ) ⇔ ( 3 ) ⇔ ( 4 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (2)\Leftrightarrow (3)\Leftrightarrow (4)\Leftrightarrow (5)} .
For example, to prove (3) from (2), take C to be the matrix whose columns are c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} from (2).
To prove (2) from (3), take c 1 , … , c k {\displaystyle \mathbf {c} _{1},\ldots ,\mathbf {c} _{k}} to be the columns of C .
It follows from the equivalence ( 1 ) ⇔ ( 5 ) {\displaystyle (1)\Leftrightarrow (5)} that the row rank is equal to the column rank.
As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W . Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details.
The rank of A equals the number of non-zero singular values , which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition A = U Σ V ∗ {\displaystyle A=U\Sigma V^{*}} .
The rank of A is the largest order of any non-zero minor in A . (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing p -minor ( p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p , then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p , then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).
The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c ⋅ r {\displaystyle c\cdot r} of a column vector c and a row vector r . This notion of rank is called tensor rank ; it can be generalized in the separable models interpretation of the singular value decomposition .
We assume that A is an m × n matrix, and we define the linear map f by f ( x ) = A x as above.
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations . According to the Rouché–Capelli theorem , the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix . If on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
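A sketch of this rank-based classification is given below; the system of equations is a hypothetical example.

```python
import numpy as np

# Hypothetical system: x + y = 2 and 2x + 2y = 4.
A = np.array([[1., 1.],
              [2., 2.]])
b = np.array([[2.],
              [4.]])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))   # rank of the augmented matrix

if rank_Ab > rank_A:
    print("inconsistent: no solution")
elif rank_A == A.shape[1]:
    print("unique solution")
else:
    print(f"infinitely many solutions with {A.shape[1] - rank_A} free parameter(s)")
```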
In control theory , the rank of a matrix can be used to determine whether a linear system is controllable , or observable .
In the field of communication complexity , the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
There are different generalizations of the concept of rank to matrices over arbitrary rings , where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist.
Thinking of matrices as tensors , the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices.
There is a notion of rank for smooth maps between smooth manifolds . It is equal to the linear rank of the derivative .
Matrix rank should not be confused with tensor order , which is called tensor rank. Tensor order is the number of indices required to write a tensor , and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination; this definition agrees with matrix rank as discussed here. | https://en.wikipedia.org/wiki/Rank_(linear_algebra) |
A rank abundance curve or Whittaker plot is a chart used by ecologists to display relative species abundance , a component of biodiversity . It can also be used to visualize species richness and species evenness . It overcomes the shortcomings of biodiversity indices that cannot display the relative role different variables played in their calculation.
The curve is a 2D chart with relative abundance on the Y-axis and the abundance rank on the X-axis.
The rank abundance curve visually depicts both species richness and species evenness. Species richness can be viewed as the number of different species on the chart i.e., how many species were ranked. Species evenness is reflected in the slope of the line that fits the graph (assuming a linear, i.e. logarithmic series, relationship). A steep gradient indicates low evenness as the high-ranking species have much higher abundances than the low-ranking species. A shallow gradient indicates high evenness as the abundances of different species are similar.
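The chart itself is straightforward to produce. The sketch below (plain Python/Matplotlib rather than the R package mentioned below, and using made-up species counts) ranks relative abundances and plots them on a log scale so that the slope reflects evenness.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical community: individual counts per species.
counts = np.array([120, 60, 30, 15, 8, 4, 2, 1])

relative_abundance = counts / counts.sum()
ranked = np.sort(relative_abundance)[::-1]   # rank 1 = most abundant species
ranks = np.arange(1, len(ranked) + 1)

plt.semilogy(ranks, ranked, marker="o")      # log scale makes the slope (evenness) visible
plt.xlabel("Abundance rank")
plt.ylabel("Relative abundance")
plt.title("Rank abundance (Whittaker) plot")
plt.show()
```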
Quantitative comparison of rank abundance curves of different communities can be done using RADanalysis package in R . This package uses the max rank normalization method [ 1 ] in which a rank abundance distribution is made by normalization of rank abundance curves of communities to the same number of ranks and then normalize the relative abundances to one. | https://en.wikipedia.org/wiki/Rank_abundance_curve |
In coding theory , rank codes (also called Gabidulin codes ) are non-binary [ 1 ] linear error-correcting codes defined over the rank metric rather than the Hamming metric. They provide a systematic way of building codes that can detect and correct multiple random rank errors. By adding redundancy, i.e. encoding a k -symbol word into an n -symbol word, a rank code can correct any error of rank up to t = ⌊ ( d − 1) / 2 ⌋, where d is the code distance. As an erasure code , it can correct up to d − 1 known erasures.
A rank code is an algebraic linear code over the finite field G F ( q N ) {\displaystyle GF(q^{N})} similar to Reed–Solomon code .
The rank of the vector over G F ( q N ) {\displaystyle GF(q^{N})} is the maximum number of linearly independent components over G F ( q ) {\displaystyle GF(q)} . The rank distance between two vectors over G F ( q N ) {\displaystyle GF(q^{N})} is the rank of the difference of these vectors.
The rank code corrects all errors with rank of the error vector not greater than t .
Let X n {\displaystyle X^{n}} be an n -dimensional vector space over the finite field G F ( q N ) {\displaystyle GF\left({q^{N}}\right)} , where q {\displaystyle q} is a power of a prime and N {\displaystyle N} is a positive integer. Let ( u 1 , u 2 , … , u N ) {\displaystyle \left(u_{1},u_{2},\dots ,u_{N}\right)} , with u i ∈ G F ( q N ) {\displaystyle u_{i}\in GF(q^{N})} , be a base of G F ( q N ) {\displaystyle GF\left({q^{N}}\right)} as a vector space over the field G F ( q ) {\displaystyle GF\left({q}\right)} .
Every element x i ∈ G F ( q N ) {\displaystyle x_{i}\in GF\left({q^{N}}\right)} can be represented as x i = a 1 i u 1 + a 2 i u 2 + ⋯ + a N i u N {\displaystyle x_{i}=a_{1i}u_{1}+a_{2i}u_{2}+\dots +a_{Ni}u_{N}} . Hence, every vector x → = ( x 1 , x 2 , … , x n ) {\displaystyle {\vec {x}}=\left({x_{1},x_{2},\dots ,x_{n}}\right)} over G F ( q N ) {\displaystyle GF\left({q^{N}}\right)} can be written as matrix:
Rank of the vector x → {\displaystyle {\vec {x}}} over the field G F ( q N ) {\displaystyle GF\left({q^{N}}\right)} is a rank of the corresponding matrix A ( x → ) {\displaystyle A\left({\vec {x}}\right)} over the field G F ( q ) {\displaystyle GF\left({q}\right)} denoted by r ( x → ; q ) {\displaystyle r\left({{\vec {x}};q}\right)} .
The set of all vectors x → {\displaystyle {\vec {x}}} is a space X n = A N n {\displaystyle X^{n}=A_{N}^{n}} . The map x → → r ( x → ; q ) {\displaystyle {\vec {x}}\to r\left({\vec {x}};q\right)} defines a norm over X n {\displaystyle X^{n}} and a rank metric :
A set { x 1 , x 2 , … , x n } {\displaystyle \{x_{1},x_{2},\dots ,x_{n}\}} of vectors from X n {\displaystyle X^{n}} is called a code with code distance d = min d ( x i , x j ) {\displaystyle d=\min d\left(x_{i},x_{j}\right)} . If the set also forms a k -dimensional subspace of X n {\displaystyle X^{n}} , then it is called a linear ( n , k )-code with distance d {\displaystyle d} . Such a linear rank metric code always satisfies the Singleton bound d ≤ n − k + 1 {\displaystyle d\leq n-k+1} with equality.
There are several known constructions of rank codes, which are maximum rank distance (or MRD) codes with d = n − k + 1.
The easiest one to construct is known as the (generalized) Gabidulin code, it was discovered first by Delsarte (who called it a Singleton system ) and later by Gabidulin [ 2 ] (and Kshevetskiy [ 3 ] ).
Let's define a Frobenius power [ i ] {\displaystyle [i]} of the element x ∈ G F ( q N ) {\displaystyle x\in GF(q^{N})} as
Then, every vector g → = ( g 1 , g 2 , … , g n ) , g i ∈ G F ( q N ) , n ≤ N {\displaystyle {\vec {g}}=(g_{1},g_{2},\dots ,g_{n}),~g_{i}\in GF(q^{N}),~n\leq N} , linearly independent over G F ( q ) {\displaystyle GF(q)} , defines a generating matrix of the MRD ( n , k , d = n − k + 1)-code.
where gcd ( m , N ) = 1 {\displaystyle \gcd(m,N)=1} .
There are several proposals for public-key cryptosystems based on rank codes. However, most of them have been proven insecure (see e.g. Journal of Cryptology, April 2008 [ 4 ] ).
Rank codes are also useful for error and erasure correction in network coding . | https://en.wikipedia.org/wiki/Rank_error-correcting_code |
In mathematics , given a field F {\displaystyle \mathbb {F} } , non-negative integers m , n {\displaystyle m,n} , and a matrix A ∈ F m × n {\displaystyle A\in \mathbb {F} ^{m\times n}} , a rank decomposition or rank factorization of A is a factorization of A of the form A = CF , where C ∈ F m × r {\displaystyle C\in \mathbb {F} ^{m\times r}} and F ∈ F r × n {\displaystyle F\in \mathbb {F} ^{r\times n}} , where r = rank A {\displaystyle r=\operatorname {rank} A} is the rank of A {\displaystyle A} .
Every finite-dimensional matrix has a rank decomposition: Let A {\textstyle A} be an m × n {\textstyle m\times n} matrix whose column rank is r {\textstyle r} . Therefore, there are r {\textstyle r} linearly independent columns in A {\textstyle A} ; equivalently, the dimension of the column space of A {\textstyle A} is r {\textstyle r} . Let c 1 , c 2 , … , c r {\textstyle \mathbf {c} _{1},\mathbf {c} _{2},\ldots ,\mathbf {c} _{r}} be any basis for the column space of A {\textstyle A} and place them as column vectors to form the m × r {\textstyle m\times r} matrix C = [ c 1 c 2 ⋯ c r ] {\textstyle C={\begin{bmatrix}\mathbf {c} _{1}&\mathbf {c} _{2}&\cdots &\mathbf {c} _{r}\end{bmatrix}}} . Therefore, every column vector of A {\textstyle A} is a linear combination of the columns of C {\textstyle C} . To be precise, if A = [ a 1 a 2 ⋯ a n ] {\textstyle A={\begin{bmatrix}\mathbf {a} _{1}&\mathbf {a} _{2}&\cdots &\mathbf {a} _{n}\end{bmatrix}}} is an m × n {\textstyle m\times n} matrix with a j {\textstyle \mathbf {a} _{j}} as the j {\textstyle j} -th column, then
where f i j {\textstyle f_{ij}} 's are the scalar coefficients of a j {\textstyle \mathbf {a} _{j}} in terms of the basis c 1 , c 2 , … , c r {\textstyle \mathbf {c} _{1},\mathbf {c} _{2},\ldots ,\mathbf {c} _{r}} . This implies that A = C F {\textstyle A=CF} , where f i j {\textstyle f_{ij}} is the ( i , j ) {\textstyle (i,j)} -th element of F {\textstyle F} .
If A = C 1 F 1 {\textstyle A=C_{1}F_{1}} is a rank factorization, taking C 2 = C 1 R {\textstyle C_{2}=C_{1}R} and F 2 = R − 1 F 1 {\textstyle F_{2}=R^{-1}F_{1}} gives another rank factorization for any invertible matrix R {\textstyle R} of compatible dimensions.
Conversely, if A = F 1 G 1 = F 2 G 2 {\textstyle A=F_{1}G_{1}=F_{2}G_{2}} are two rank factorizations of A {\textstyle A} , then there exists an invertible matrix R {\textstyle R} such that F 1 = F 2 R {\textstyle F_{1}=F_{2}R} and G 1 = R − 1 G 2 {\textstyle G_{1}=R^{-1}G_{2}} . [ 1 ]
In practice, we can construct one specific rank factorization as follows: we can compute B {\textstyle B} , the reduced row echelon form of A {\textstyle A} . Then C {\textstyle C} is obtained by removing from A {\textstyle A} all non- pivot columns (which can be determined by looking for columns in B {\textstyle B} which do not contain a pivot), and F {\textstyle F} is obtained by eliminating any all-zero rows of B {\textstyle B} .
Note: For a full-rank square matrix (i.e. when n = m = r {\textstyle n=m=r} ), this procedure will yield the trivial result C = A {\textstyle C=A} and F = B = I n {\textstyle F=B=I_{n}} (the n × n {\textstyle n\times n} identity matrix ).
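A sketch of this construction with a computer algebra system is shown below. The 4 × 4 matrix used is illustrative (the worked example that follows is not reproduced in full here) and is chosen so that its only non-pivot column is the third.

```python
import sympy as sp

# Illustrative matrix whose only non-pivot column is the third.
A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])

B, pivot_cols = A.rref()                                 # reduced row echelon form of A
C = A.extract(list(range(A.rows)), list(pivot_cols))     # keep the pivot columns of A
F = B[:len(pivot_cols), :]                               # keep the nonzero rows of B

assert A == C * F                  # A = CF, with C of size m x r and F of size r x n
print(A.rank(), C.shape, F.shape)  # 3 (4, 3) (3, 4)
```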
Consider the matrix
B {\textstyle B} is in reduced echelon form.
Then C {\textstyle C} is obtained by removing the third column of A {\textstyle A} , the only one which is not a pivot column, and F {\textstyle F} by getting rid of the last row of zeroes from B {\textstyle B} , so
It is straightforward to check that
Let P {\textstyle P} be an n × n {\textstyle n\times n} permutation matrix such that A P = ( C , D ) {\textstyle AP=(C,D)} in block partitioned form, where the columns of C {\textstyle C} are the r {\textstyle r} pivot columns of A {\textstyle A} . Every column of D {\textstyle D} is a linear combination of the columns of C {\textstyle C} , so there is a matrix G {\textstyle G} such that D = C G {\textstyle D=CG} , where the columns of G {\textstyle G} contain the coefficients of each of those linear combinations. So A P = ( C , C G ) = C ( I r , G ) {\textstyle AP=(C,CG)=C(I_{r},G)} , I r {\textstyle I_{r}} being the r × r {\textstyle r\times r} identity matrix. We will show now that ( I r , G ) = F P {\textstyle (I_{r},G)=FP} .
Transforming A {\textstyle A} into its reduced row echelon form B {\textstyle B} amounts to left-multiplying by a matrix E {\textstyle E} which is a product of elementary matrices , so E A P = B P = E C ( I r , G ) {\textstyle EAP=BP=EC(I_{r},G)} , where E C = ( I r 0 ) {\textstyle EC={\begin{pmatrix}I_{r}\\0\end{pmatrix}}} . We then can write B P = ( I r G 0 0 ) {\textstyle BP={\begin{pmatrix}I_{r}&G\\0&0\end{pmatrix}}} , which allows us to identify ( I r , G ) = F P {\textstyle (I_{r},G)=FP} , i.e. the nonzero r {\textstyle r} rows of the reduced echelon form, with the same permutation on the columns as we did for A {\textstyle A} . We thus have A P = C F P {\textstyle AP=CFP} , and since P {\textstyle P} is invertible this implies A = C F {\textstyle A=CF} , and the proof is complete.
If F ∈ { R , C } , {\displaystyle \mathbb {F} \in \{\mathbb {R} ,\mathbb {C} \},} then one can also construct a full-rank factorization of A {\textstyle A} via a singular value decomposition
Since U 1 {\textstyle U_{1}} is a full-column-rank matrix and Σ r V 1 ∗ {\textstyle \Sigma _{r}V_{1}^{*}} is a full-row-rank matrix, we can take C = U 1 {\textstyle C=U_{1}} and F = Σ r V 1 ∗ {\textstyle F=\Sigma _{r}V_{1}^{*}} .
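A short numerical sketch of this SVD-based factorization, applied to the same illustrative matrix as in the earlier sketch:

```python
import numpy as np

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > max(A.shape) * np.finfo(A.dtype).eps * s.max()))  # numerical rank

C = U[:, :r]                      # U_1: full column rank, m x r
F = np.diag(s[:r]) @ Vt[:r, :]    # Sigma_r V_1^*: full row rank, r x n

print(r, np.allclose(A, C @ F))   # 3 True
```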
An immediate consequence of rank factorization is that the rank of A {\textstyle A} is equal to the rank of its transpose A T {\textstyle A^{\textsf {T}}} . Since the columns of A {\textstyle A} are the rows of A T {\textstyle A^{\textsf {T}}} , the column rank of A {\textstyle A} equals its row rank . [ 2 ]
Proof: To see why this is true, let us first define rank to mean column rank. Since A = C F {\textstyle A=CF} , it follows that A T = F T C T {\textstyle A^{\textsf {T}}=F^{\textsf {T}}C^{\textsf {T}}} . From the definition of matrix multiplication , this means that each column of A T {\textstyle A^{\textsf {T}}} is a linear combination of the columns of F T {\textstyle F^{\textsf {T}}} . Therefore, the column space of A T {\textstyle A^{\textsf {T}}} is contained within the column space of F T {\textstyle F^{\textsf {T}}} and, hence, rank ( A T ) ≤ rank ( F T ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(F^{\textsf {T}}\right)} .
Now, F T {\textstyle F^{\textsf {T}}} is n × r {\textstyle n\times r} , so there are r {\textstyle r} columns in F T {\textstyle F^{\textsf {T}}} and, hence, rank ( A T ) ≤ r = rank ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq r=\operatorname {rank} \left(A\right)} . This proves that rank ( A T ) ≤ rank ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(A\right)} .
Now apply the result to A T {\textstyle A^{\textsf {T}}} to obtain the reverse inequality: since ( A T ) T = A {\textstyle \left(A^{\textsf {T}}\right)^{\textsf {T}}=A} , we can write rank ( A ) = rank ( ( A T ) T ) ≤ rank ( A T ) {\textstyle \operatorname {rank} \left(A\right)=\operatorname {rank} \left(\left(A^{\textsf {T}}\right)^{\textsf {T}}\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} . This proves rank ( A ) ≤ rank ( A T ) {\textstyle \operatorname {rank} \left(A\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} .
We have, therefore, proved rank ( A T ) ≤ rank ( A ) {\textstyle \operatorname {rank} \left(A^{\textsf {T}}\right)\leq \operatorname {rank} \left(A\right)} and rank ( A ) ≤ rank ( A T ) {\textstyle \operatorname {rank} \left(A\right)\leq \operatorname {rank} \left(A^{\textsf {T}}\right)} , so rank ( A ) = rank ( A T ) {\textstyle \operatorname {rank} \left(A\right)=\operatorname {rank} \left(A^{\textsf {T}}\right)} . | https://en.wikipedia.org/wiki/Rank_factorization |
The rank product is a biologically motivated rank test for the detection of differentially expressed genes in replicated microarray experiments.
It is a simple non-parametric statistical method based on ranks of fold changes. In addition to its use in expression profiling , it can be used to combine ranked lists in various application domains, including proteomics , metabolomics , statistical meta-analysis , and general feature selection .
Given n genes and k replicates, let r g , i {\displaystyle r_{g,i}} be the rank of gene g in the i -th replicate.
Compute the rank product via the geometric mean :
Simple permutation-based estimation is used to determine how likely a given RP value or better is observed in a random experiment.
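The sketch below computes rank products for simulated data and estimates, by permutation, how often the smallest observed rank product would arise by chance; all data and parameter choices are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: fold changes for n genes in k replicates (rows: genes).
n_genes, k = 1000, 4
fold_changes = rng.normal(size=(n_genes, k))
fold_changes[:5] += 2.0                     # pretend the first 5 genes are up-regulated

# Rank genes within each replicate (rank 1 = largest fold change).
ranks = (-fold_changes).argsort(axis=0).argsort(axis=0) + 1

# Rank product: geometric mean of a gene's ranks across the k replicates.
rank_product = np.exp(np.log(ranks).mean(axis=1))
observed_min = rank_product.min()

# Crude permutation estimate of how often a rank product this small (or smaller)
# arises when the ranks in each replicate are shuffled independently.
n_perm = 200
null_min = []
for _ in range(n_perm):
    shuffled = np.column_stack([rng.permutation(ranks[:, j]) for j in range(k)])
    null_min.append(np.exp(np.log(shuffled).mean(axis=1)).min())

print("smallest rank product:", observed_min)
print("approximate p-value:", np.mean([m <= observed_min for m in null_min]))
```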
Permutation re-sampling requires a computationally demanding number of permutations to get reliable estimates of the p -values for the most differentially expressed genes, if n is large. Eisinga, Breitling and Heskes (2013) provide the exact probability mass distribution of the rank product statistic. Calculation of the exact p -values offers a substantial improvement over permutation approximation, most significantly for the part of the distribution that rank product analysis is most interested in, i.e., the thin right tail. However, exact statistical significance of large rank products may take an unacceptably long time to compute. Heskes, Eisinga and Breitling (2014) provide a method to determine accurate approximate p -values of the rank product statistic in a computationally fast manner. | https://en.wikipedia.org/wiki/Rank_product |
In mathematics , a ranked poset is a partially ordered set in which one of the following (non-equivalent) conditions holds: it is
The second definition differs from the first in that it requires all minimal elements to have the same rank; for posets with a least element, however, the two requirements are equivalent. The third definition is even more strict in that it excludes posets with infinite chains and also requires all maximal elements to have the same rank. Richard P. Stanley defines a graded poset of length n as one in which all maximal chains have length n . [ 1 ]
| https://en.wikipedia.org/wiki/Ranked_poset |
The Rankine body , discovered by Scottish physicist and engineer William Rankine , is a feature of naval architecture involving the flow of liquid around a body/surface.
In fluid mechanics, a fluid flow pattern formed by combining a uniform stream with a source and a sink of equal strengths, with the line joining the source and sink along the stream direction, conforms to the shape of a Rankine body.
| https://en.wikipedia.org/wiki/Rankine_body |
In the field of fluid dynamics , a Rankine half body is a feature of fluid flow discovered by Scottish physicist and engineer William Rankine that is formed when a fluid source is added to a fluid undergoing potential flow . Superposition of uniform flow and source flow yields the Rankine half body flow. A practical example of this type of flow is a bridge pier or a strut placed in a uniform stream. The resulting stream function ( ψ {\displaystyle \psi } ) and velocity potential ( ϕ {\displaystyle \phi } ) are obtained by simply adding the stream function and velocity potential for each individual flow.
The flow equations of the Rankine half body are solved using the principle of superposition , combining the solution for the uniform (linear) flow of the stream with that for the radially symmetric flow of the source.
Given the linear flow field U {\displaystyle U} and the source m {\displaystyle m} , we have
The stagnation point for this flow can be determined by setting both velocity components to zero. Because of the symmetry of the flow in the y-direction, the stagnation point must lie on the x-axis.
Equating both u {\displaystyle u} and v {\displaystyle v} to zero, we obtain U = m 2 π b {\displaystyle U={\frac {m}{2\pi b}}} .
At r = b {\displaystyle r=b} and θ = π {\displaystyle \theta =\pi } we have the stagnation point.
Now, we note that m 2 = π b U {\displaystyle {\frac {m}{2}}=\pi bU} , so following this constant streamline gives the outline of the body:
Then, r = b ( π − θ ) sin θ {\displaystyle r={\frac {b(\pi -\theta )}{\sin {\theta }}}} describes the half body outline.
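A small numerical sketch of these results is given below; the stream velocity and source strength are assumed values chosen for illustration.

```python
import numpy as np

U = 1.0                      # uniform stream velocity (assumed value)
m = 2.0                      # source strength (assumed value)
b = m / (2 * np.pi * U)      # from U = m / (2*pi*b)

# Upper half of the body outline r(theta) = b*(pi - theta)/sin(theta), 0 < theta < pi.
theta = np.linspace(0.05, np.pi - 0.05, 200)
r = b * (np.pi - theta) / np.sin(theta)
x, y = r * np.cos(theta), r * np.sin(theta)   # the lower half follows by symmetry

print("stagnation point at (x, y) =", (-b, 0.0))   # r = b, theta = pi
print("half-width far downstream  =", np.pi * b)   # y -> pi*b as x -> +infinity
```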
This type of flow provides important information about the flow around the front part of a streamlined body. Near the boundary the potential-flow model probably does not represent the real flow exactly. The pressure and velocity of the flow close to the boundary layer are calculated by applying Bernoulli's principle to the potential-flow approximation. The above equations may be used to calculate the stress on a body placed into the flow stream. | https://en.wikipedia.org/wiki/Rankine_half_body |
The Rankine–Hugoniot conditions , also referred to as Rankine–Hugoniot jump conditions or Rankine–Hugoniot relations , describe the relationship between the states on both sides of a shock wave or a combustion wave ( deflagration or detonation ) in a one-dimensional flow in fluids or a one-dimensional deformation in solids. They are named in recognition of the work carried out by Scottish engineer and physicist William John Macquorn Rankine [ 1 ] and French engineer Pierre Henri Hugoniot . [ 2 ] [ 3 ]
The basic idea of the jump conditions is to consider what happens to a fluid when it undergoes a rapid change. Consider, for example, driving a piston into a tube filled with non-reacting gas. A disturbance is propagated through the fluid somewhat faster than the speed of sound . Because the disturbance propagates supersonically , it is a shock wave , and the fluid downstream of the shock has no advance information of it. In a frame of reference moving with the wave, atoms or molecules in front of the wave slam into the wave supersonically. On a microscopic level, they undergo collisions on the scale of the mean free path length until they come to rest in the post-shock flow (but moving in the frame of reference of the wave or of the tube). The bulk transfer of kinetic energy heats the post-shock flow. Because the mean free path length is assumed to be negligible in comparison to all other length scales in a hydrodynamic treatment, the shock front is essentially a hydrodynamic discontinuity . The jump conditions then establish the transition between the pre- and post-shock flow, based solely upon the conservation of mass, momentum, and energy. The conditions are correct even though the shock actually has a positive thickness. This non-reacting example of a shock wave also generalizes to reacting flows, where a combustion front (either a detonation or a deflagration) can be modeled as a discontinuity in a first approximation.
In a coordinate system that is moving with the discontinuity, the Rankine–Hugoniot conditions can be expressed as: [ 4 ]
where m is the mass flow rate per unit area, ρ 1 and ρ 2 are the mass densities of the fluid upstream and downstream of the wave, u 1 and u 2 are the fluid velocities upstream and downstream of the wave, p 1 and p 2 are the pressures in the two regions, and h 1 and h 2 are the specific (with the sense of per unit mass ) enthalpies in the two regions. If, in addition, the flow is reactive, then the species conservation equations demand that
to vanish both upstream and downstream of the discontinuity. Here, ω i {\displaystyle \omega _{i}} is the mass production rate of the i -th species out of the total of N species involved in the reaction.
Combining conservation of mass and momentum gives us
which defines a straight line known as the Michelson–Rayleigh line , named after the Russian physicist Vladimir A. Mikhelson (usually anglicized as Michelson) and Lord Rayleigh , that has a negative slope (since m 2 {\displaystyle m^{2}} is always positive) in the p − ρ − 1 {\displaystyle p-\rho ^{-1}} plane. [ 5 ] Using the Rankine–Hugoniot equations for the conservation of mass and momentum to eliminate u 1 and u 2 , the equation for the conservation of energy can be expressed as the Hugoniot equation:
The inverse of the density can also be expressed as the specific volume , v = 1 / ρ {\displaystyle v=1/\rho } . Along with these, one has to specify the relation between the upstream and downstream equation of state
where Y i {\displaystyle Y_{i}} is the mass fraction of the species. Finally, the calorific equation of state h = h ( p , ρ , Y i ) {\displaystyle h=h(p,\rho ,Y_{i})} is assumed to be known, i.e.,
The following assumptions are made in order to simplify the Rankine–Hugoniot equations. The mixture is assumed to obey the ideal gas law , so that relation between the downstream and upstream equation of state can be written as
where R {\displaystyle R} is the universal gas constant and the mean molecular weight W ¯ {\displaystyle {\overline {W}}} is assumed to be constant (otherwise, W ¯ {\displaystyle {\overline {W}}} would depend on the mass fraction of the all species). If one assumes that the specific heat at constant pressure c p {\displaystyle c_{p}} is also constant across the wave, the change in enthalpies (calorific equation of state) can be simply written as
where the first term in the above expression represents the amount of heat released per unit mass of the upstream mixture by the wave and the second term represents the sensible heating. Eliminating temperature using the equation of state and substituting the above expression for the change in enthalpies into the Hugoniot equation, one obtains an Hugoniot equation expressed only in terms of pressure and densities,
where γ {\displaystyle \gamma } is the specific heat ratio , which for air at ordinary room temperature (298 K) is 1.40. An Hugoniot curve without heat release ( q = 0 {\displaystyle q=0} ) is often called a "shock Hugoniot", or simply an "Hugoniot". Along with the Rayleigh line equation, the above equation completely determines the state of the system. These two equations can be written compactly by introducing the following non-dimensional scales,
The Rayleigh line equation and the Hugoniot equation then simplifies to
Given the upstream conditions, the intersection of the above two equations in the v ~ {\displaystyle {\tilde {v}}} - p ~ {\displaystyle {\tilde {p}}} plane determines the downstream conditions; in the v ~ {\displaystyle {\tilde {v}}} - p ~ {\displaystyle {\tilde {p}}} plane, the upstream condition corresponds to the point ( v ~ , p ~ ) = ( 1 , 1 ) {\displaystyle ({\tilde {v}},{\tilde {p}})=(1,1)} . If no heat release occurs, for example, shock waves without chemical reaction, then α = 0 {\displaystyle \alpha =0} . The Hugoniot curves asymptote to the lines v ~ = ( γ − 1 ) / ( γ + 1 ) {\displaystyle {\tilde {v}}=(\gamma -1)/(\gamma +1)} and p ~ = − ( γ − 1 ) / ( γ + 1 ) {\displaystyle {\tilde {p}}=-(\gamma -1)/(\gamma +1)} , which are depicted as dashed lines in the figure. As mentioned in the figure, only the white region bounded by these two asymptotes is allowed so that μ {\displaystyle \mu } is positive. Shock waves and detonations correspond to the top-left white region wherein p ~ > 1 {\displaystyle {\tilde {p}}>1} and v ~ < 1 {\displaystyle {\tilde {v}}<1} , that is to say, the pressure increases and the specific volume decreases across the wave (the Chapman–Jouguet condition for detonation is where the Rayleigh line is tangent to the Hugoniot curve). Deflagrations, on the other hand, correspond to the bottom-right white region wherein p ~ < 1 {\displaystyle {\tilde {p}}<1} and v ~ > 1 {\displaystyle {\tilde {v}}>1} , that is to say, the pressure decreases and the specific volume increases across the wave; the pressure decrease across a flame is typically very small and is seldom considered when studying deflagrations.
For shock waves and detonations, the pressure increase across the wave can take any values between 0 ≤ p ~ < ∞ {\displaystyle 0\leq {\tilde {p}}<\infty } ; the steeper the slope of the Rayleigh line, the stronger is the wave. On the contrary, here the specific volume ratio is restricted to the finite interval ( γ − 1 ) / ( γ + 1 ) ≤ v ~ ≤ 2 α + ( γ + 1 ) / ( γ − 1 ) {\displaystyle (\gamma -1)/(\gamma +1)\leq {\tilde {v}}\leq 2\alpha +(\gamma +1)/(\gamma -1)} (the upper bound is derived for the case p ~ → 0 {\displaystyle {\tilde {p}}\rightarrow 0} because pressure cannot take negative values). If γ = 1.4 {\displaystyle \gamma =1.4} (diatomic gas without the vibrational mode excitation), the interval is 1 / 6 ≤ v ~ ≤ 2 α + 6 {\displaystyle 1/6\leq {\tilde {v}}\leq 2\alpha +6} , in other words, the shock wave can increase the density at most by a factor of 6. For monatomic gas , γ = 5 / 3 {\displaystyle \gamma =5/3} , the allowed interval is 1 / 4 ≤ v ~ ≤ 2 α + 4 {\displaystyle 1/4\leq {\tilde {v}}\leq 2\alpha +4} . For diatomic gases with vibrational mode excited, we have γ = 9 / 7 {\displaystyle \gamma =9/7} leading to the interval 1 / 8 ≤ v ~ ≤ 2 α + 8 {\displaystyle 1/8\leq {\tilde {v}}\leq 2\alpha +8} . In reality, the specific heat ratio is not constant in the shock wave due to molecular dissociation and ionization, but even in these cases, the density ratio in general does not exceed a factor of about 11–13 . [ 6 ]
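The density-ratio limit quoted above can be illustrated with the standard normal-shock relations for a calorically ideal gas without heat release; the formulas below are the usual textbook results, stated here without derivation.

```python
# Sketch: normal-shock (no heat release) jump ratios for a calorically ideal gas,
# illustrating the density-ratio limit (gamma + 1)/(gamma - 1) quoted in the text.

def shock_ratios(M1, gamma=1.4):
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    return p_ratio, rho_ratio

for M1 in (1.5, 3.0, 10.0, 100.0):
    p, rho = shock_ratios(M1)
    print(f"M1 = {M1:6.1f}: p2/p1 = {p:10.1f}, rho2/rho1 = {rho:5.3f}")
# rho2/rho1 approaches (gamma + 1)/(gamma - 1) = 6 as M1 grows, while p2/p1 is unbounded.
```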
Consider gas in a one-dimensional container (e.g., a long thin tube). Assume that the fluid is inviscid (i.e., it shows no viscosity effects as for example friction with the tube walls). Furthermore, assume that there is no heat transfer by conduction or radiation and that gravitational acceleration can be neglected. Such a system can be described by the following system of conservation laws , known as the 1D Euler equations , that in conservation form is:
where
Assume further that the gas is calorically ideal and that therefore a polytropic equation-of-state of the simple form
is valid, where γ {\displaystyle \gamma } is the constant ratio of specific heats c p / c v {\displaystyle c_{p}/c_{v}} . This quantity also appears as the polytropic exponent of the polytropic process described by
For an extensive list of compressible flow equations, etc., refer to NACA Report 1135 (1953). [ 7 ]
Note: For a calorically ideal gas γ {\displaystyle \gamma } is a constant and for a thermally ideal gas γ {\displaystyle \gamma } is a function of temperature. In the latter case, the dependence of pressure on mass density and internal energy might differ from that given by equation ( 4 ).
Before proceeding further it is necessary to introduce the concept of a jump condition – a condition that holds at a discontinuity or abrupt change.
Consider a 1D situation where there is a jump in the scalar conserved physical quantity w {\displaystyle w} , which is governed by integral conservation law
for any x 1 {\displaystyle x_{1}} , x 2 {\displaystyle x_{2}} , x 1 < x 2 {\displaystyle x_{1}<x_{2}} , and, therefore, by partial differential equation
for smooth solutions. [ 8 ]
Let the solution exhibit a jump (or shock) at x = x s ( t ) {\displaystyle x=x_{s}(t)} , where x 1 < x s ( t ) {\displaystyle x_{1}<x_{s}(t)} and x s ( t ) < x 2 {\displaystyle x_{s}(t)<x_{2}} , then
The subscripts 1 and 2 indicate conditions just upstream and just downstream of the jump respectively, i.e. w 1 = lim ϵ → 0 + w ( x s − ϵ ) {\textstyle w_{1}=\lim _{\epsilon \to 0^{+}}w\left(x_{s}-\epsilon \right)} and w 2 = lim ϵ → 0 + w ( x s + ϵ ) {\textstyle w_{2}=\lim _{\epsilon \to 0^{+}}w\left(x_{s}+\epsilon \right)} . ∴ {\displaystyle \therefore } is the therefore sign .
Note, to arrive at equation ( 8 ) we have used the fact that d x 1 / d t = 0 {\displaystyle dx_{1}/dt=0} and d x 2 / d t = 0 {\displaystyle dx_{2}/dt=0} .
Now, let x 1 → x s ( t ) − ϵ {\displaystyle x_{1}\to x_{s}(t)-\epsilon } and x 2 → x s ( t ) + ϵ {\displaystyle x_{2}\to x_{s}(t)+\epsilon } , so that we have ∫ x 1 x s ( t ) − ϵ w t d x → 0 {\textstyle \int _{x_{1}}^{x_{s}(t)-\epsilon }w_{t}\,dx\to 0} and ∫ x s ( t ) + ϵ x 2 w t d x → 0 {\textstyle \int _{x_{s}(t)+\epsilon }^{x_{2}}w_{t}\,dx\to 0} , and in the limit
where we have defined u s = d x s ( t ) / d t {\displaystyle u_{s}=dx_{s}(t)/dt} (the system characteristic or shock speed ), which by simple division is given by
Equation ( 9 ) represents the jump condition for conservation law ( 6 ). A shock situation arises in a system where its characteristics intersect, and under these conditions a requirement for a unique single-valued solution is that the solution should satisfy the admissibility condition or entropy condition . For physically real applications this means that the solution should satisfy the Lax entropy condition
where f ′ ( w 1 ) {\displaystyle f'\left(w_{1}\right)} and f ′ ( w 2 ) {\displaystyle f'\left(w_{2}\right)} represent characteristic speeds at upstream and downstream conditions respectively.
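The following minimal sketch illustrates the jump condition obtained above, u s = ( f ( w 2 ) − f ( w 1 ) ) / ( w 2 − w 1 ) , for a scalar conservation law; the choice of flux f ( w ) = w 2 / 2 (the inviscid Burgers equation) and the numerical states are illustrative assumptions, not taken from the text:

```python
# Hedged illustration of the scalar jump condition u_s = (f(w2) - f(w1)) / (w2 - w1).
# The flux f(w) = w**2 / 2 (inviscid Burgers) is chosen purely as an example.
def shock_speed(f, w1, w2):
    return (f(w2) - f(w1)) / (w2 - w1)

burgers_flux = lambda w: 0.5 * w**2
print(shock_speed(burgers_flux, w1=2.0, w2=1.0))   # -> 1.5, the average of w1 and w2
```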
In the case of the hyperbolic conservation law ( 6 ), we have seen that the shock speed can be obtained by simple division. However, for the 1D Euler equations ( 1 ), ( 2 ) and ( 3 ), we have the vector state variable [ ρ ρ u E ] T {\displaystyle {\begin{bmatrix}\rho &\rho u&E\end{bmatrix}}^{\mathsf {T}}} and the jump conditions become
Equations ( 12 ), ( 13 ) and ( 14 ) are known as the Rankine–Hugoniot conditions for the Euler equations and are derived by enforcing the conservation laws in integral form over a control volume that includes the shock. For this situation u s {\displaystyle u_{s}} cannot be obtained by simple division. However, it can be shown by transforming the problem to a moving co-ordinate system
(setting u s ′ := u s − u 1 {\displaystyle u_{s}':=u_{s}-u_{1}} , u 1 ′ := 0 {\displaystyle u'_{1}:=0} , u 2 ′ := u 2 − u 1 {\displaystyle u'_{2}:=u_{2}-u_{1}} to remove u 1 {\displaystyle u_{1}} ) and some algebraic manipulation (involving the elimination of u 2 ′ {\displaystyle u'_{2}} from the transformed equation ( 13 ) using the transformed equation ( 12 )), that the shock speed is given by
where c 1 = γ p 1 / ρ 1 {\textstyle c_{1}={\sqrt {\gamma p_{1}/\rho _{1}}}} is the speed of sound in the fluid at upstream conditions. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ]
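For readers who want a concrete feel for these jump conditions, the sketch below evaluates the classical stationary normal-shock relations for a calorically ideal gas in terms of the upstream Mach number M 1 ; these relations follow from the Rankine–Hugoniot conditions but are not the equation numbered ( 15 ), and the values of γ and M 1 are illustrative only:

```python
# Hedged sketch: classical stationary normal-shock relations for a calorically
# ideal gas, giving the pressure, density and temperature jumps from the
# upstream Mach number M1. Values of gamma and M1 are illustrative.
def normal_shock_jumps(M1, gamma=1.4):
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    T_ratio = p_ratio / rho_ratio
    return p_ratio, rho_ratio, T_ratio

print(normal_shock_jumps(M1=2.0))   # -> (4.5, 2.666..., 1.6875)
```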
For shocks in solids, a closed form expression such as equation ( 15 ) cannot be derived from first principles. Instead, experimental observations [ 15 ] indicate that a linear relation [ 16 ] (called the shock Hugoniot in the u s - u p plane) can be used, which has the form
where c 0 is the bulk speed of sound in the material (in uniaxial compression), s is a parameter (the slope of the shock Hugoniot) obtained from fits to experimental data, and u p = u 2 is the particle velocity inside the compressed region behind the shock front.
The above relation, when combined with the Hugoniot equations for the conservation of mass and momentum, can be used to determine the shock Hugoniot in the p - v plane, where v is the specific volume (per unit mass): [ 19 ]
Alternative equations of state, such as the Mie–Grüneisen equation of state may also be used instead of the above equation.
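A minimal sketch of one common closed form that results from combining the linear u s – u p fit with the mass and momentum jump conditions (taking the initial pressure as zero) is given below; the roughly copper-like material constants are illustrative assumptions only:

```python
# Hedged sketch of the p-v shock Hugoniot implied by the linear fit u_s = c0 + s*u_p,
# combined with mass and momentum conservation (initial pressure taken as zero).
# The material constants below (roughly copper-like) are illustrative assumptions.
def hugoniot_pressure(v, v0, rho0, c0, s):
    eta = 1.0 - v / v0                        # compression, eta = 1 - v/v0
    return rho0 * c0**2 * eta / (1.0 - s * eta)**2

rho0 = 8930.0          # kg/m^3  (assumed initial density)
v0 = 1.0 / rho0
c0 = 3940.0            # m/s     (assumed bulk sound speed)
s = 1.49               # dimensionless slope (assumed)
print(hugoniot_pressure(0.9 * v0, v0, rho0, c0, s) / 1e9, "GPa at 10% compression")
```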
The shock Hugoniot describes the locus of all possible thermodynamic states a material can exist in behind a shock, projected onto a two dimensional state-state plane. It is therefore a set of equilibrium states and does not specifically represent the path through which a material undergoes transformation.
Weak shocks are isentropic, and the isentrope represents the path through which the material is loaded from the initial to the final state by a compression wave with converging characteristics. In the case of weak shocks, the Hugoniot will therefore fall directly on the isentrope and can be used directly as the equivalent path. In the case of a strong shock we can no longer make that simplification directly. However, for engineering calculations, it is deemed that the isentrope is close enough to the Hugoniot that the same assumption can be made.
If the Hugoniot is approximately the loading path between states for an "equivalent" compression wave, then the jump conditions for the shock loading path can be determined by drawing a straight line between the initial and final states. This line is called the Rayleigh line and has the following equation:
Most solid materials undergo plastic deformations when subjected to strong shocks. The point on the shock Hugoniot at which a material transitions from a purely elastic state to an elastic-plastic state is called the Hugoniot elastic limit (HEL) and the pressure at which this transition takes place is denoted p HEL . Values of p HEL can range from 0.2 GPa to 20 GPa. Above the HEL, the material loses much of its shear strength and starts behaving like a fluid.
Rankine–Hugoniot conditions in magnetohydrodynamics are interesting to consider since they are very relevant to astrophysical applications. Across the discontinuity the normal component H n {\displaystyle H_{n}} of the magnetic field H {\displaystyle \mathbf {H} } and the tangential component E t {\displaystyle \mathbf {E} _{t}} of the electric field E = − u × H / c {\displaystyle \mathbf {E} =-\mathbf {u} \times \mathbf {H} /c} (infinite conductivity limit) must be continuous. We thus have [ 20 ]
where [ [ ⋅ ] ] {\displaystyle [\![\cdot ]\!]} is the difference between the values of any physical quantity on the two sides of the discontinuity. The remaining conditions are given by [ 20 ]
These conditions are general in the sense that they include contact discontinuities ( j = 0 , H n ≠ 0 , [ [ u ] ] = [ [ p ] ] = [ [ H ] ] = 0 , [ [ ρ ] ] ≠ 0 {\displaystyle j=0,\,H_{n}\neq 0,\,[\![\mathbf {u} ]\!]=[\![p]\!]=[\![\mathbf {H} ]\!]=0,\,[\![\rho ]\!]\neq 0} ) tangential discontinuities ( j = H n = 0 , [ [ u t ρ ] ] ≠ 0 , [ [ H t ] ] ≠ 0 , [ [ ρ ] ] ≠ 0 {\displaystyle j=H_{n}=0,\,[\![\mathbf {u} _{t}\rho ]\!]\neq 0,\,[\![\mathbf {H} _{t}]\!]\neq 0,\,[\![\rho ]\!]\neq 0} ), rotational or Alfvén discontinuities ( j = H n ρ / 4 π ≠ 0 , [ [ ρ ] ] = [ [ u n ] ] = [ [ p ] ] = [ [ H t ] ] = 0 {\textstyle j=H_{n}{\sqrt {\rho /4\pi }}\neq 0,\,[\![\rho ]\!]=[\![u_{n}]\!]=[\![p]\!]=[\![\mathbf {H} _{t}]\!]=0} ) and shock waves ( j ≠ 0 , [ [ ρ ] ] ≠ 0 {\displaystyle j\neq 0,\,[\![\rho ]\!]\neq 0} ). | https://en.wikipedia.org/wiki/Rankine–Hugoniot_conditions |
A ranking is a relationship between a set of items, often recorded in a list , such that, for any two items, the first is either "ranked higher than", "ranked lower than", or "ranked equal to" the second. [ 1 ] In mathematics , this is known as a weak order or total preorder of objects. It is not necessarily a total order of objects because two different objects can have the same ranking. The rankings themselves are totally ordered. For example, materials are totally preordered by hardness , while degrees of hardness are totally ordered. If two items are the same in rank it is considered a tie.
By reducing detailed measures to a sequence of ordinal numbers , rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance , making it possible for the user quickly to select the pages they are likely to want to see.
Analysis of data obtained by ranking commonly requires non-parametric statistics .
It is not always possible to assign rankings uniquely. For example, in a race or competition two (or more) entrants might tie for a place in the ranking. [ 2 ] When computing an ordinal measurement , two (or more) of the quantities being ranked might measure equal. In these cases, one of the strategies below for assigning the rankings may be adopted.
A common shorthand way to distinguish these ranking strategies is by the ranking numbers that would be produced for four items, with the first item ranked ahead of the second and third (which compare equal) which are both ranked ahead of the fourth. [ 3 ] These names are also shown below.
In competition ranking, items that compare equal receive the same ranking number, and then a gap is left in the ranking numbers. The number of ranking numbers that are left out in this gap is one less than the number of items that compared equal. Equivalently, each item's ranking number is 1 plus the number of items ranked above it. This ranking strategy is frequently adopted for competitions, as it means that if two (or more) competitors tie for a position in the ranking, the position of all those ranked below them is unaffected (i.e., a competitor only comes second if exactly one person scores better than them, third if exactly two people score better than them, fourth if exactly three people score better than them, etc.).
Thus if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first"), B gets ranking number 2 ("joint second"), C also gets ranking number 2 ("joint second") and D gets ranking number 4 ("fourth").
This method is called "Low" by IBM SPSS [ 4 ] and "min" by the R programming language [ 5 ] in their methods to handle ties.
Sometimes, competition ranking is done by leaving the gaps in the ranking numbers before the sets of equal-ranking items (rather than after them as in standard competition ranking). The number of ranking numbers that are left out in this gap remains one less than the number of items that compared equal. Equivalently, each item's ranking number is equal to the number of items ranked equal to it or above it. This ranking ensures that a competitor only comes second if they score higher than all but one of their opponents, third if they score higher than all but two of their opponents, etc.
Thus if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first"), B gets ranking number 3 ("joint third"), C also gets ranking number 3 ("joint third") and D gets ranking number 4 ("fourth"). In this case, nobody would get ranking number 2 ("second") and that would be left as a gap.
This method is called "High" by IBM SPSS [ 4 ] and "max" by the R programming language [ 5 ] in their methods to handle ties.
In dense ranking, items that compare equally receive the same ranking number, and the next items receive the immediately following ranking number. Equivalently, each item's ranking number is 1 plus the number of items ranked above it that are distinct with respect to the ranking order.
Thus if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first"), B gets ranking number 2 ("joint second"), C also gets ranking number 2 ("joint second") and D gets ranking number 3 ("third").
This method is called "Sequential" by IBM SPSS [ 4 ] and "dense" by the R programming language [ 6 ] in their methods to handle ties.
In ordinal ranking, all items receive distinct ordinal numbers, including items that compare equal. The assignment of distinct ordinal numbers to items that compare equal can be done at random, or arbitrarily, but it is generally preferable to use a system that is arbitrary but consistent, as this gives stable results if the ranking is done multiple times. An example of an arbitrary but consistent system would be to incorporate other attributes into the ranking order (such as alphabetical ordering of the competitor's name) to ensure that no two items exactly match.
With this strategy, if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first") and D gets ranking number 4 ("fourth"), and either B gets ranking number 2 ("second") and C gets ranking number 3 ("third") or C gets ranking number 2 ("second") and B gets ranking number 3 ("third").
In computer data processing, ordinal ranking is also referred to as "row numbering".
This method corresponds to the "first", "last", and "random" methods in the R programming language [ 5 ] to handle ties.
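A minimal sketch of ordinal ("1234") ranking with an arbitrary but consistent tie-break on the competitor's name; the names and scores are illustrative:

```python
# Ordinal ("1234") ranking with an arbitrary-but-consistent tie-break:
# ties are broken alphabetically by name. The entries are illustrative.
entries = [("Ann", 9), ("Bob", 7), ("Cid", 7), ("Dee", 5)]
order = sorted(entries, key=lambda e: (-e[1], e[0]))       # score descending, then name
ranks = {name: i + 1 for i, (name, _) in enumerate(order)}
print(ranks)   # -> {'Ann': 1, 'Bob': 2, 'Cid': 3, 'Dee': 4}
```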
Items that compare equal receive the same ranking number, which is the mean of what they would have under ordinal rankings; equivalently, the ranking number is 1 plus the number of items ranked above it plus half the number of other items equal to it. This strategy has the property that the sum of the ranking numbers is the same as under ordinal ranking. For this reason, it is used in computing Borda counts and in statistical tests (see below).
Thus if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first"), B and C each get ranking number 2.5 (average of "joint second/third") and D gets ranking number 4 ("fourth").
Here is an example:
Suppose you have the data set 1.0, 1.0, 2.0, 3.0, 3.0, 4.0, 5.0, 5.0, 5.0.
The ordinal ranks are 1, 2, 3, 4, 5, 6, 7, 8, 9.
For v = 1.0, the fractional rank is the average of the ordinal ranks: (1 + 2) / 2 = 1.5.
In a similar manner, for v = 5.0, the fractional rank is (7 + 8 + 9) / 3 = 8.0.
Thus the fractional ranks are: 1.5, 1.5, 3.0, 4.5, 4.5, 6.0, 8.0, 8.0, 8.0.
This method is called "Mean" by IBM SPSS [ 4 ] and "average" by the R programming language [ 5 ] in their methods to handle ties.
In statistics , ranking is the data transformation in which numerical or ordinal values are replaced by their rank when the data are sorted.
For example, the ranks of the numerical data 3.4, 5.1, 2.6, 7.3 are 2, 3, 1, 4.
As another example, the ordinal data hot, cold, warm would be replaced by 3, 1, 2. In these examples, the ranks are assigned to values in ascending order, although descending ranks can also be used.
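In practice this transformation is usually performed with library routines; for example, SciPy's rankdata function implements the tie-handling strategies discussed above through its method argument. The sketch below reuses data from this section:

```python
# Hedged illustration using SciPy's rankdata and its tie-handling `method` argument.
from scipy.stats import rankdata

print(rankdata([3.4, 5.1, 2.6, 7.3]))        # -> [2. 3. 1. 4.]
data = [1.0, 1.0, 2.0, 3.0, 3.0, 4.0, 5.0, 5.0, 5.0]
print(rankdata(data))                        # fractional ("average") ranks
print(rankdata(data, method="min"))          # standard competition ranking
print(rankdata(data, method="dense"))        # dense ranking
```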
League tables are used to compare the academic achievements of different institutions. College and university rankings order institutions in higher education by combinations of factors. In addition to entire institutions, specific programs, departments, and schools are ranked. These rankings are usually conducted by magazines, newspapers, governments and academics. For example, league tables of British universities are published annually by The Independent , The Sunday Times , and The Times . [ 7 ] The primary aim of these rankings is to inform potential applicants about British universities based on a range of criteria. Similarly, in countries such as India, league tables are being developed, and a popular magazine, Education World, has published them based on data from TheLearningPoint.net . [ citation needed ]
Critics complain that ranking England's schools according to rigid guidelines that fail to take wider social conditions into account actually makes failing schools even worse. This is because the most involved parents will then avoid such schools, leaving only the children of non-ambitious parents to attend. [ 8 ]
In business, league tables list the leaders in the business activity within a specific industry, ranking companies based on different criteria including revenue, earnings, and other relevant key performance indicators (such as market share and meeting customer expectations), enabling people to quickly analyze significant data. [ 9 ]
The rank methodology based on some specific indices is one of the most common systems used by policy makers and international organizations in order to assess the socio-economic context of countries. Some notable examples include the Human Development Index (United Nations), Doing Business Index ( World Bank ), Corruption Perceptions Index (Transparency International), and Index of Economic Freedom (the Heritage Foundation). For instance, the Doing Business Indicator of the World Bank measures business regulations and their enforcement in 190 countries. Countries are ranked according to ten indicators that are synthesized to produce the final rank. Each indicator is composed of sub-indicators; for instance, the Registering Property Indicator is composed of four sub-indicators measuring time, procedures, costs, and quality of the land registration system. These kinds of ranks are based on subjective criteria for assigning the score. Sometimes, the adopted parameters may produce discrepancies with the empirical observations; therefore, potential biases and paradoxes may emerge from the application of these criteria. [ 10 ] | https://en.wikipedia.org/wiki/Ranking |
The rank–nullity theorem is a theorem in linear algebra , which asserts that the dimension of the domain of a linear map is the sum of its rank (the dimension of its image ) and its nullity (the dimension of its kernel ).
It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity .
Let T : V → W {\displaystyle T:V\to W} be a linear transformation between two vector spaces where T {\displaystyle T} 's domain V {\displaystyle V} is finite dimensional. Then rank ( T ) + nullity ( T ) = dim V , {\displaystyle \operatorname {rank} (T)~+~\operatorname {nullity} (T)~=~\dim V,} where rank ( T ) {\textstyle \operatorname {rank} (T)} is the rank of T {\displaystyle T} (the dimension of its image ) and nullity ( T ) {\displaystyle \operatorname {nullity} (T)} is the nullity of T {\displaystyle T} (the dimension of its kernel ). In other words, dim ( Im T ) + dim ( Ker T ) = dim ( Domain ( T ) ) . {\displaystyle \dim(\operatorname {Im} T)+\dim(\operatorname {Ker} T)=\dim(\operatorname {Domain} (T)).} This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since T {\displaystyle T} induces an isomorphism from V / Ker ( T ) {\displaystyle V/\operatorname {Ker} (T)} to Im ( T ) , {\displaystyle \operatorname {Im} (T),} the existence of a basis for V {\displaystyle V} that extends any given basis of Ker ( T ) {\displaystyle \operatorname {Ker} (T)} implies, via the splitting lemma, that Im ( T ) ⊕ Ker ( T ) ≅ V . {\displaystyle \operatorname {Im} (T)\oplus \operatorname {Ker} (T)\cong V.} Taking dimensions, the rank–nullity theorem follows.
Linear maps can be represented with matrices . More precisely, an m × n {\displaystyle m\times n} matrix M represents a linear map f : F n → F m , {\displaystyle f:F^{n}\to F^{m},} where F {\displaystyle F} is the underlying field . [ 5 ] So, the dimension of the domain of f {\displaystyle f} is n , the number of columns of M , and the rank–nullity theorem for an m × n {\displaystyle m\times n} matrix M is rank ( M ) + nullity ( M ) = n . {\displaystyle \operatorname {rank} (M)+\operatorname {nullity} (M)=n.}
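A hedged numerical illustration of the matrix form of the theorem, using an arbitrary example matrix (NumPy and SciPy are assumed to be available):

```python
# Numerical check of rank(M) + nullity(M) = n for an arbitrary illustrative matrix.
import numpy as np
from scipy.linalg import null_space

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])              # a 2 x 3 matrix of rank 1
rank = np.linalg.matrix_rank(M)
nullity = null_space(M).shape[1]             # dimension of the kernel
print(rank, nullity, rank + nullity == M.shape[1])   # -> 1 2 True
```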
Here we provide two proofs. The first [ 2 ] operates in the general case, using linear maps. The second proof [ 6 ] looks at the homogeneous system A x = 0 , {\displaystyle \mathbf {Ax} =\mathbf {0} ,} where A {\displaystyle \mathbf {A} } is an m × n {\displaystyle m\times n} matrix with rank r , {\displaystyle r,} and shows explicitly that there exists a set of n − r {\displaystyle n-r} linearly independent solutions that span the null space of A {\displaystyle \mathbf {A} } .
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
Let V , W {\displaystyle V,W} be vector spaces over some field F , {\displaystyle F,} and T {\displaystyle T} defined as in the statement of the theorem with dim V = n {\displaystyle \dim V=n} .
As Ker T ⊂ V {\displaystyle \operatorname {Ker} T\subset V} is a subspace , there exists a basis for it. Suppose dim Ker T = k {\displaystyle \dim \operatorname {Ker} T=k} and let K := { v 1 , … , v k } ⊂ Ker ( T ) {\displaystyle {\mathcal {K}}:=\{v_{1},\ldots ,v_{k}\}\subset \operatorname {Ker} (T)} be such a basis.
We may now, by the Steinitz exchange lemma , extend K {\displaystyle {\mathcal {K}}} with n − k {\displaystyle n-k} linearly independent vectors w 1 , … , w n − k {\displaystyle w_{1},\ldots ,w_{n-k}} to form a full basis of V {\displaystyle V} .
Let S := { w 1 , … , w n − k } ⊂ V ∖ Ker ( T ) {\displaystyle {\mathcal {S}}:=\{w_{1},\ldots ,w_{n-k}\}\subset V\setminus \operatorname {Ker} (T)} such that B := K ∪ S = { v 1 , … , v k , w 1 , … , w n − k } ⊂ V {\displaystyle {\mathcal {B}}:={\mathcal {K}}\cup {\mathcal {S}}=\{v_{1},\ldots ,v_{k},w_{1},\ldots ,w_{n-k}\}\subset V} is a basis for V {\displaystyle V} .
From this, we know that Im T = Span T ( B ) = Span { T ( v 1 ) , … , T ( v k ) , T ( w 1 ) , … , T ( w n − k ) } {\displaystyle \operatorname {Im} T=\operatorname {Span} T({\mathcal {B}})=\operatorname {Span} \{T(v_{1}),\ldots ,T(v_{k}),T(w_{1}),\ldots ,T(w_{n-k})\}}
We now claim that T ( S ) {\displaystyle T({\mathcal {S}})} is a basis for Im T {\displaystyle \operatorname {Im} T} .
The above equality already states that T ( S ) {\displaystyle T({\mathcal {S}})} is a generating set for Im T {\displaystyle \operatorname {Im} T} ; it remains to be shown that it is also linearly independent to conclude that it is a basis.
Suppose T ( S ) {\displaystyle T({\mathcal {S}})} is not linearly independent, and let ∑ j = 1 n − k α j T ( w j ) = 0 W {\displaystyle \sum _{j=1}^{n-k}\alpha _{j}T(w_{j})=0_{W}} for some α j ∈ F {\displaystyle \alpha _{j}\in F} .
Thus, owing to the linearity of T {\displaystyle T} , it follows that T ( ∑ j = 1 n − k α j w j ) = 0 W ⟹ ( ∑ j = 1 n − k α j w j ) ∈ Ker T = Span K ⊂ V . {\displaystyle T\left(\sum _{j=1}^{n-k}\alpha _{j}w_{j}\right)=0_{W}\implies \left(\sum _{j=1}^{n-k}\alpha _{j}w_{j}\right)\in \operatorname {Ker} T=\operatorname {Span} {\mathcal {K}}\subset V.} This is a contradiction to B {\displaystyle {\mathcal {B}}} being a basis, unless all α j {\displaystyle \alpha _{j}} are equal to zero. This shows that T ( S ) {\displaystyle T({\mathcal {S}})} is linearly independent, and more specifically that it is a basis for Im T {\displaystyle \operatorname {Im} T} .
To summarize, we have K {\displaystyle {\mathcal {K}}} , a basis for Ker T {\displaystyle \operatorname {Ker} T} , and T ( S ) {\displaystyle T({\mathcal {S}})} , a basis for Im T {\displaystyle \operatorname {Im} T} .
Finally we may state that Rank ( T ) + Nullity ( T ) = dim Im T + dim Ker T = ( n − k ) + k = n = dim V {\displaystyle \operatorname {Rank} (T)+\operatorname {Nullity} (T)=\dim \operatorname {Im} T+\dim \operatorname {Ker} T=(n-k)+k=n=\dim V} .
This concludes our proof.
Let A {\displaystyle \mathbf {A} } be an m × n {\displaystyle m\times n} matrix with r {\displaystyle r} linearly independent columns (i.e. Rank ( A ) = r {\displaystyle \operatorname {Rank} (\mathbf {A} )=r} ). We will show that the nullity of A {\displaystyle \mathbf {A} } is n − r {\displaystyle n-r} , so that Rank ( A ) + Nullity ( A ) = n {\displaystyle \operatorname {Rank} (\mathbf {A} )+\operatorname {Nullity} (\mathbf {A} )=n} .
To do this, we will produce an n × ( n − r ) {\displaystyle n\times (n-r)} matrix X {\displaystyle \mathbf {X} } whose columns form a basis of the null space of A {\displaystyle \mathbf {A} } .
Without loss of generality, assume that the first r {\displaystyle r} columns of A {\displaystyle \mathbf {A} } are linearly independent. So, we can write A = ( A 1 A 2 ) , {\displaystyle \mathbf {A} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{2}\end{pmatrix}},} where
This means that A 2 = A 1 B {\displaystyle \mathbf {A} _{2}=\mathbf {A} _{1}\mathbf {B} } for some r × ( n − r ) {\displaystyle r\times (n-r)} matrix B {\displaystyle \mathbf {B} } (see rank factorization ) and, hence, A = ( A 1 A 1 B ) . {\displaystyle \mathbf {A} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}.}
Let X = ( − B I n − r ) , {\displaystyle \mathbf {X} ={\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}},} where I n − r {\displaystyle \mathbf {I} _{n-r}} is the ( n − r ) × ( n − r ) {\displaystyle (n-r)\times (n-r)} identity matrix . So, X {\displaystyle \mathbf {X} } is an n × ( n − r ) {\displaystyle n\times (n-r)} matrix such that A X = ( A 1 A 1 B ) ( − B I n − r ) = − A 1 B + A 1 B = 0 m × ( n − r ) . {\displaystyle \mathbf {A} \mathbf {X} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}{\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}=-\mathbf {A} _{1}\mathbf {B} +\mathbf {A} _{1}\mathbf {B} =\mathbf {0} _{m\times (n-r)}.}
Therefore, each of the n − r {\displaystyle n-r} columns of X {\displaystyle \mathbf {X} } is a particular solution of A x = 0 F m {\displaystyle \mathbf {Ax} ={0}_{{F}^{m}}} .
Furthermore, the n − r {\displaystyle n-r} columns of X {\displaystyle \mathbf {X} } are linearly independent because X u = 0 F n {\displaystyle \mathbf {Xu} =\mathbf {0} _{{F}^{n}}} will imply u = 0 F n − r {\displaystyle \mathbf {u} =\mathbf {0} _{{F}^{n-r}}} for u ∈ F n − r {\displaystyle \mathbf {u} \in {F}^{n-r}} : X u = 0 F n ⟹ ( − B I n − r ) u = 0 F n ⟹ ( − B u u ) = ( 0 F r 0 F n − r ) ⟹ u = 0 F n − r . {\displaystyle \mathbf {X} \mathbf {u} =\mathbf {0} _{{F}^{n}}\implies {\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}\mathbf {u} =\mathbf {0} _{{F}^{n}}\implies {\begin{pmatrix}-\mathbf {B} \mathbf {u} \\\mathbf {u} \end{pmatrix}}={\begin{pmatrix}\mathbf {0} _{{F}^{r}}\\\mathbf {0} _{{F}^{n-r}}\end{pmatrix}}\implies \mathbf {u} =\mathbf {0} _{{F}^{n-r}}.} Therefore, the column vectors of X {\displaystyle \mathbf {X} } constitute a set of n − r {\displaystyle n-r} linearly independent solutions for A x = 0 F m {\displaystyle \mathbf {Ax} =\mathbf {0} _{\mathbb {F} ^{m}}} .
We next prove that any solution of A x = 0 F m {\displaystyle \mathbf {Ax} =\mathbf {0} _{{F}^{m}}} must be a linear combination of the columns of X {\displaystyle \mathbf {X} } .
For this, let u = ( u 1 u 2 ) ∈ F n {\displaystyle \mathbf {u} ={\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}\in {F}^{n}}
be any vector such that A u = 0 F m {\displaystyle \mathbf {Au} =\mathbf {0} _{{F}^{m}}} . Since the columns of A 1 {\displaystyle \mathbf {A} _{1}} are linearly independent, A 1 x = 0 F m {\displaystyle \mathbf {A} _{1}\mathbf {x} =\mathbf {0} _{{F}^{m}}} implies x = 0 F r {\displaystyle \mathbf {x} =\mathbf {0} _{{F}^{r}}} .
Therefore, A u = 0 F m ⟹ ( A 1 A 1 B ) ( u 1 u 2 ) = A 1 u 1 + A 1 B u 2 = A 1 ( u 1 + B u 2 ) = 0 F m ⟹ u 1 + B u 2 = 0 F r ⟹ u 1 = − B u 2 {\displaystyle {\begin{array}{rcl}\mathbf {A} \mathbf {u} &=&\mathbf {0} _{{F}^{m}}\\\implies {\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}{\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}&=&\mathbf {A} _{1}\mathbf {u} _{1}+\mathbf {A} _{1}\mathbf {B} \mathbf {u} _{2}&=&\mathbf {A} _{1}(\mathbf {u} _{1}+\mathbf {B} \mathbf {u} _{2})&=&\mathbf {0} _{\mathbb {F} ^{m}}\\\implies \mathbf {u} _{1}+\mathbf {B} \mathbf {u} _{2}&=&\mathbf {0} _{{F}^{r}}\\\implies \mathbf {u} _{1}&=&-\mathbf {B} \mathbf {u} _{2}\end{array}}} ⟹ u = ( u 1 u 2 ) = ( − B I n − r ) u 2 = X u 2 . {\displaystyle \implies \mathbf {u} ={\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}={\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}\mathbf {u} _{2}=\mathbf {X} \mathbf {u} _{2}.}
This proves that any vector u {\displaystyle \mathbf {u} } that is a solution of A x = 0 {\displaystyle \mathbf {Ax} =\mathbf {0} } must be a linear combination of the n − r {\displaystyle n-r} special solutions given by the columns of X {\displaystyle \mathbf {X} } . And we have already seen that the columns of X {\displaystyle \mathbf {X} } are linearly independent. Hence, the columns of X {\displaystyle \mathbf {X} } constitute a basis for the null space of A {\displaystyle \mathbf {A} } . Therefore, the nullity of A {\displaystyle \mathbf {A} } is n − r {\displaystyle n-r} . Since r {\displaystyle r} equals rank of A {\displaystyle \mathbf {A} } , it follows that Rank ( A ) + Nullity ( A ) = n {\displaystyle \operatorname {Rank} (\mathbf {A} )+\operatorname {Nullity} (\mathbf {A} )=n} . This concludes our proof.
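A hedged numerical illustration of the construction used in this proof, building A = ( A 1 A 1 B ) and X from arbitrary illustrative blocks and checking that A X = 0 :

```python
# Numerical illustration of the second proof's construction:
# with A = (A1  A1*B), the block matrix X = [[-B], [I]] satisfies A @ X = 0.
# A1 and B below are arbitrary illustrative choices.
import numpy as np

A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])        # m x r with independent columns (m = 3, r = 2)
B = np.array([[2.0],
              [3.0]])              # r x (n - r), here n - r = 1
A = np.hstack([A1, A1 @ B])        # m x n
X = np.vstack([-B, np.eye(B.shape[1])])
print(np.allclose(A @ X, 0))       # -> True
```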
When T : V → W {\displaystyle T:V\to W} is a linear transformation between two finite-dimensional subspaces, with n = dim ( V ) {\displaystyle n=\dim(V)} and m = dim ( W ) {\displaystyle m=\dim(W)} (so can be represented by an m × n {\displaystyle m\times n} matrix M {\displaystyle M} ), the rank–nullity theorem asserts that if T {\displaystyle T} has rank r {\displaystyle r} , then n − r {\displaystyle n-r} is the dimension of the null space of M {\displaystyle M} , which represents the kernel of T {\displaystyle T} . In some texts, a third fundamental subspace associated to T {\displaystyle T} is considered alongside its image and kernel: the cokernel of T {\displaystyle T} is the quotient space W / Im ( T ) {\displaystyle W/\operatorname {Im} (T)} , and its dimension is m − r {\displaystyle m-r} . This dimension formula (which might also be rendered dim Im ( T ) + dim Coker ( T ) = dim ( W ) {\displaystyle \dim \operatorname {Im} (T)+\dim \operatorname {Coker} (T)=\dim(W)} ) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra . [ 7 ] [ 8 ]
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma .
In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that 0 → U → V → T R → 0 {\displaystyle 0\rightarrow U\rightarrow V\mathbin {\overset {T}{\rightarrow }} R\rightarrow 0} is a short exact sequence of vector spaces, then U ⊕ R ≅ V {\displaystyle U\oplus R\cong V} , hence dim ( U ) + dim ( R ) = dim ( V ) . {\displaystyle \dim(U)+\dim(R)=\dim(V).} Here R {\displaystyle R} plays the role of Im T {\displaystyle \operatorname {Im} T} and U {\displaystyle U} is Ker T {\displaystyle \operatorname {Ker} T} , i.e. 0 → ker T ↪ V → T im T → 0 {\displaystyle 0\rightarrow \ker T\mathbin {\hookrightarrow } V\mathbin {\overset {T}{\rightarrow }} \operatorname {im} T\rightarrow 0}
In the finite-dimensional case, this formulation is susceptible to a generalization: if 0 → V 1 → V 2 → ⋯ V r → 0 {\displaystyle 0\rightarrow V_{1}\rightarrow V_{2}\rightarrow \cdots V_{r}\rightarrow 0} is an exact sequence of finite-dimensional vector spaces, then [ 9 ] ∑ i = 1 r ( − 1 ) i dim ( V i ) = 0. {\displaystyle \sum _{i=1}^{r}(-1)^{i}\dim(V_{i})=0.} The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map T ∈ Hom ( V , W ) {\displaystyle T\in \operatorname {Hom} (V,W)} , where V {\displaystyle V} and W {\displaystyle W} are finite-dimensional, is defined by index T = dim Ker ( T ) − dim Coker T . {\displaystyle \operatorname {index} T=\dim \operatorname {Ker} (T)-\dim \operatorname {Coker} T.}
Intuitively, dim Ker T {\displaystyle \dim \operatorname {Ker} T} is the number of independent solutions v {\displaystyle v} of the equation T v = 0 {\displaystyle Tv=0} , and dim Coker T {\displaystyle \dim \operatorname {Coker} T} is the number of independent restrictions that have to be put on w {\displaystyle w} to make T v = w {\displaystyle Tv=w} solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement index T = dim V − dim W . {\displaystyle \operatorname {index} T=\dim V-\dim W.}
We see that we can easily read off the index of the linear map T {\displaystyle T} from the involved spaces, without any need to analyze T {\displaystyle T} in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces. | https://en.wikipedia.org/wiki/Rank–nullity_theorem |
A Ranney Collector is a type of radial well used to extract water from an aquifer with direct connection to a surface water source like a river or lake . The amount of water available from the collector is typically related more to the surface water source than to the piezometric surface of the aquifer. [ 1 ]
A caisson is constructed of reinforced concrete and installed into sand or gravel below the surface level of an adjacent river or lake. [ 1 ] Screened conduits (also referred to as laterals or lateral well screens) are extended horizontally from ports in the caisson about 60 meters (200 feet) into surrounding water-bearing alluvium . [ 2 ] The radial arrangement of screens forms a large infiltration gallery with a single central withdrawal point. [ 1 ] A single collector may produce as much as 25 million gallons per day. [ 2 ] Bank filtration of water through aquifer soils may reduce water treatment requirements. [ 2 ] [ 3 ]
Texas petroleum engineer Leo Ranney drilled horizontally for oil in the early 1920s. The first Ranney collector for water was installed in London in 1933. Hundreds of Ranney collectors have been built since. [ 4 ] | https://en.wikipedia.org/wiki/Ranney_collector |
Raon Digital was a Korean company that manufactured Ultra-Mobile PCs (UMPCs) such as the Raon Vega, Raon Everun and Everun Note . [ 1 ] The company closed in 2009. [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Raon_Digital |
Raoul Bott (September 24, 1923 – December 20, 2005) [ 1 ] was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem , the Morse–Bott functions which he used in this context, and the Borel–Bott–Weil theorem .
Bott was born in Budapest , Hungary , the son of Margit Kovács and Rudolph Bott. [ 2 ] His father was of Austrian descent, and his mother was of Hungarian Jewish descent; Bott was raised a Catholic by his mother and stepfather in Bratislava, Czechoslovakia, now the capital of Slovakia. [ 3 ] [ 4 ] Bott grew up in Czechoslovakia and spent his working life in the United States . His family emigrated to Canada in 1938, and subsequently he served in the Canadian Army in Europe during World War II .
Bott later went to college at McGill University in Montreal , where he studied electrical engineering . He then earned a PhD in mathematics from Carnegie Mellon University in Pittsburgh in 1949. His thesis, titled Electrical Network Theory , was written under the direction of Richard Duffin . Afterward, he began teaching at the University of Michigan in Ann Arbor . Bott continued his study at the Institute for Advanced Study in Princeton. [ 5 ] He was a professor at Harvard University from 1959 to 1999. In 2005 Bott died of cancer in San Diego .
With Richard Duffin at Carnegie Mellon, Bott studied the existence of electronic filters corresponding to given positive-real functions . In 1949 they proved [ 6 ] a fundamental theorem of filter synthesis . Duffin and Bott extended earlier work by Otto Brune showing that the requisite functions of complex frequency s could be realized by a passive network of inductors and capacitors . The proof relied on induction on the sum of the degrees of the polynomials in the numerator and denominator of the rational function. [ 7 ] In his 2000 interview [ 8 ] with Allyn Jackson of the American Mathematical Society , he explained that he sees "networks as discrete versions of harmonic theory", so his experience with network synthesis and electronic filter topology introduced him to algebraic topology .
Bott met Arnold S. Shapiro at the IAS and they worked together.
He studied the homotopy theory of Lie groups , using methods from Morse theory , leading to the Bott periodicity theorem (1957). In the course of this work, he introduced Morse–Bott functions , an important generalization of Morse functions .
This led to his role as collaborator over many years with Michael Atiyah , initially via the part played by periodicity in K-theory . Bott made important contributions towards the index theorem , especially in formulating related fixed-point theorems , in particular the so-called ' Woods Hole fixed-point theorem ', a combination of the Riemann–Roch theorem and Lefschetz fixed-point theorem (it is named after Woods Hole, Massachusetts , the site of a conference at which collective discussion formulated it). [ 9 ] [ citation needed ] The major Atiyah–Bott papers on what is now the Atiyah–Bott fixed-point theorem were written in the years up to 1968; they collaborated further in recovering, in contemporary language, Ivan Petrovsky 's work on Petrovsky lacunas of hyperbolic partial differential equations , prompted by Lars Gårding . In the 1980s, Atiyah and Bott investigated gauge theory , using the Yang–Mills equations on a Riemann surface to obtain topological information about the moduli spaces of stable bundles on Riemann surfaces. In 1983 he spoke to the Canadian Mathematical Society in a talk he called "A topologist marvels at Physics". [ 10 ]
He is also well known in connection with the Borel–Bott–Weil theorem on representation theory of Lie groups via holomorphic sheaves and their cohomology groups; and for work on foliations . With Chern he worked on Nevanlinna theory , studied holomorphic vector bundles over complex analytic manifolds and introduced the Bott-Chern classes, useful in the theory of Arakelov geometry and also in algebraic number theory .
He introduced Bott–Samelson varieties and the Bott residue formula for complex manifolds and the Bott cannibalistic class .
In 1964, he was awarded the Oswald Veblen Prize in Geometry by the American Mathematical Society . In 1983, he was awarded the Jeffery–Williams Prize [ 11 ] by the Canadian Mathematical Society . In 1987, he was awarded the National Medal of Science . [ 12 ]
In 2000, he received the Wolf Prize . In 2005, he was elected an Overseas Fellow of the Royal Society of London .
Bott had 35 PhD students, including Stephen Smale , Lawrence Conlon , Daniel Quillen , Peter Landweber , Robert MacPherson , Robert W. Brooks , Robin Forman , Rama Kocherlakota , Susan Tolman , András Szenes , Kevin Corlette , [ 13 ] and Eric Weinstein . [ 14 ] [ 15 ] [ 16 ] Smale and Quillen won Fields Medals in 1966 and 1978 respectively. | https://en.wikipedia.org/wiki/Raoul_Bott |
Raoult's law ( / ˈ r ɑː uː l z / law) is a relation of physical chemistry , with implications in thermodynamics . Proposed by French chemist François-Marie Raoult in 1887, [ 1 ] [ 2 ] it states that the partial pressure of each component of an ideal mixture of liquids is equal to the vapor pressure of the pure component (liquid or solid) multiplied by its mole fraction in the mixture. In consequence, the relative lowering of vapor pressure of a dilute solution of nonvolatile solute is equal to the mole fraction of solute in the solution.
Mathematically, Raoult's law for a single component in an ideal solution is stated as
where p i {\displaystyle p_{i}} is the partial pressure of the component i {\displaystyle i} in the gaseous mixture above the solution, p i ⋆ {\displaystyle p_{i}^{\star }} is the equilibrium vapor pressure of the pure component i {\displaystyle i} , and x i {\displaystyle x_{i}} is the mole fraction of the component i {\displaystyle i} in the liquid or solid solution. [ 3 ]
Where two volatile liquids A and B are mixed with each other to form a solution, the vapor phase consists of both components of the solution. Once the components in the solution have reached equilibrium , the total vapor pressure of the solution can be determined by combining Raoult's law with Dalton's law of partial pressures to give
In other words, the vapor pressure of the solution is the mole-weighted mean of the individual vapor pressures:
If a non-volatile solute B (it has zero vapor pressure, so does not evaporate ) is dissolved into a solvent A to form an ideal solution, the vapor pressure of the solution will be lower than that of the solvent. In an ideal solution of a nonvolatile solute, the decrease in vapor pressure is directly proportional to the mole fraction of solute:
If the solute associates or dissociates in the solution (such as an electrolyte/salt), the expression of the law includes the van 't Hoff factor as a correction factor. That is, the mole fraction must be calculated using the actual number of particles in solution. [ 4 ]
Raoult's law is a phenomenological relation that assumes ideal behavior based on the simple microscopic assumption that intermolecular forces between unlike molecules are equal to those between similar molecules, and that their molar volumes are the same: the conditions of an ideal solution. This is analogous to the ideal gas law , which is a limiting law valid when the interactive forces between molecules approach zero, for example as the concentration approaches zero. Raoult's law is instead valid if the physical properties of the components are identical. The more similar the components are, the more their behavior approaches that described by Raoult's law. For example, if the two components differ only in isotopic content, then Raoult's law is essentially exact.
Comparing measured vapor pressures to predicted values from Raoult's law provides information about the true relative strength of intermolecular forces . If the vapor pressure is less than predicted (a negative deviation), fewer molecules of each component than expected have left the solution in the presence of the other component, indicating that the forces between unlike molecules are stronger. The converse is true for positive deviations.
For a solution of two liquids A and B, Raoult's law predicts that if no other gases are present, then the total vapor pressure p {\displaystyle p} above the solution is equal to the weighted sum of the "pure" vapor pressures p A {\displaystyle p_{\text{A}}} and p B {\displaystyle p_{\text{B}}} of the two components. Thus the total pressure above the solution of A and B would be
Since the sum of the mole fractions is equal to one,
This is a linear function of the mole fraction x B {\displaystyle x_{\text{B}}} , as shown in the graph.
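A minimal sketch of this mole-fraction-weighted sum; the pure-component vapor pressures used below are illustrative numbers only:

```python
# Raoult's law for an ideal binary mixture: total vapor pressure as a
# mole-fraction-weighted sum of the pure-component vapor pressures.
# The pure-component pressures below are illustrative assumptions.
def total_vapor_pressure(x_B, p_A_pure, p_B_pure):
    x_A = 1.0 - x_B
    return x_A * p_A_pure + x_B * p_B_pure

print(total_vapor_pressure(x_B=0.25, p_A_pure=30.0, p_B_pure=80.0))   # kPa -> 42.5
```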
Raoult's law was first observed empirically and led François-Marie Raoult [ 1 ] [ 2 ] to postulate that the vapor pressure above an ideal mixture of liquids is equal to the sum of the vapor pressures of each component multiplied by its mole fraction. [ 5 ] : 325 Taking compliance with Raoult's Law as a defining characteristic of ideality in a solution, it is possible to deduce that the chemical potential of each component of the liquid is given by
where μ i ⋆ {\displaystyle \mu _{i}^{\star }} is the chemical potential in the pure state and x i {\displaystyle x_{i}} is the mole fraction of component i {\displaystyle i} in the ideal solution. From this equation, other thermodynamic properties of an ideal solution may be determined. If the assumption that the vapor follows the ideal gas law is added, Raoult's law may be derived as follows.
If the system is ideal, then, at equilibrium , the chemical potential of each component i {\displaystyle i} must be the same in the liquid and gas states. That is,
Substituting the formula for chemical potential gives
as the gas-phase mole fraction depends on its fugacity , f i {\displaystyle f_{i}} , as a fraction of the pressure in the reference state, p ⊖ {\displaystyle p^{\ominus }} .
The corresponding equation when the system consists purely of component i {\displaystyle i} in equilibrium with its vapor is
Subtracting these equations and re-arranging leads to the result [ 5 ] : 326
For the ideal gas, pressure and fugacity are equal, so introducing simple pressures to this result yields Raoult's law:
An ideal solution would follow Raoult's law, but most solutions deviate from ideality. Interactions between gas molecules are typically quite small, especially if the vapor pressures are low. However, the interactions in a liquid are very strong. For a solution to be ideal, the interactions between unlike molecules must be of the same magnitude as those between like molecules. [ 6 ] This approximation is only true when the different species are almost chemically identical. One can see that from considering the Gibbs free energy change of mixing :
This is always negative, so mixing is spontaneous. However, the expression is, apart from a factor − T {\displaystyle -T} , equal to the entropy of mixing. This leaves no room at all for an enthalpy effect and implies that Δ mix H {\displaystyle \Delta _{\text{mix}}H} must be equal to zero, and this can only be true if the interactions between the molecules are indifferent.
It can be shown using the Gibbs–Duhem equation that if Raoult's law holds over the entire concentration range x ∈ [ 0 , 1 ] {\displaystyle x\in [0,\ 1]} in a binary solution then, for the second component, the same must also hold.
If deviations from the ideal are not too large, Raoult's law is still valid in a narrow concentration range when approaching x → 1 {\displaystyle x\to 1} for the majority phase (the solvent ). The solute also shows a linear limiting law, but with a different coefficient. This relationship is known as Henry's law .
The presence of these limited linear regimes has been experimentally verified in a great number of systems, though large deviations occur in a variety of cases. Consequently, both its pedagogical value and utility have been questioned at the introductory college level. [ 7 ] In a perfectly ideal system, where ideal liquid and ideal vapor are assumed, a very useful equation emerges if Raoult's law is combined with Dalton's Law :
where x i {\displaystyle x_{i}} is the mole fraction of component i {\displaystyle i} in the solution , and y i {\displaystyle y_{i}} is its mole fraction in the gas phase . This equation shows that, for an ideal solution where each pure component has a different vapor pressure, the gas phase is enriched in the component with the higher vapor pressure when pure, and the solution is enriched in the component with the lower pure vapor pressure. This phenomenon is the basis for distillation .
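A minimal sketch of this combination for an ideal binary system, again with illustrative pure-component vapor pressures; it shows the vapor phase being enriched in the more volatile component:

```python
# Raoult's law plus Dalton's law for an ideal binary system: the vapor-phase mole
# fraction of each component is x_i * p_i_pure / p_total.
# Pure-component vapor pressures are illustrative numbers only.
def vapor_composition(x_A, p_A_pure, p_B_pure):
    x_B = 1.0 - x_A
    p_total = x_A * p_A_pure + x_B * p_B_pure
    return x_A * p_A_pure / p_total, x_B * p_B_pure / p_total

print(vapor_composition(x_A=0.5, p_A_pure=80.0, p_B_pure=30.0))
# vapor is enriched in the more volatile component A: (~0.727, ~0.273)
```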
In elementary applications, Raoult's law is generally valid when the liquid phase is either nearly pure or a mixture of similar substances. [ 8 ] Raoult's law may be adapted to non-ideal solutions by incorporating two factors that account for the interactions between molecules of different substances. The first factor is a correction for gas non-ideality, or deviations from the ideal-gas law . It is called the fugacity coefficient ( ϕ p , i {\displaystyle \phi _{p,i}} ). The second, the activity coefficient γ i {\displaystyle \gamma _{i}} , is a correction for interactions in the liquid phase between the different molecules. [ 5 ] : 326
This modified or extended Raoult's law is then written as [ 9 ]
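As a hedged illustration of this extended form, the sketch below evaluates the per-component terms γ i x i p i ⋆ with the fugacity-coefficient correction set to one and the activity coefficients taken from a one-parameter (two-suffix) Margules model; the model choice and the numbers are illustrative assumptions rather than part of the cited formulation:

```python
# Sketch of modified Raoult's law p_i = gamma_i * x_i * p_i_pure for a binary mixture,
# using a one-parameter (two-suffix) Margules activity-coefficient model as an
# illustrative assumption; the fugacity-coefficient correction is taken as 1 here.
import math

def modified_raoult_total_pressure(x1, p1_pure, p2_pure, A_margules):
    x2 = 1.0 - x1
    g1 = math.exp(A_margules * x2**2)
    g2 = math.exp(A_margules * x1**2)
    return g1 * x1 * p1_pure + g2 * x2 * p2_pure   # total pressure

print(modified_raoult_total_pressure(0.5, 80.0, 30.0, A_margules=0.8))
# larger than the ideal value 55.0, i.e. a positive deviation
```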
In many pairs of liquids, there is no uniformity of attractive forces, i.e., the adhesive (between dissimilar molecules) and cohesive forces (between similar molecules) are not uniform between the two liquids. Therefore, they deviate from Raoult's law, which applies only to ideal solutions.
Notably, when the concentration of A is small, its vapor pressure instead follows Henry's law , and likewise for substance B when its concentration is small.
When the adhesion is stronger than the cohesion, fewer liquid particles turn into vapor, thereby lowering the vapor pressure and leading to a negative deviation in the graph.
For example, the system of chloroform (CHCl 3 ) and acetone (CH 3 COCH 3 ) has a negative deviation [ 10 ] from Raoult's law, indicating an attractive interaction between the two components that have been described as a hydrogen bond . [ 11 ] The system HCl–water has a large enough negative deviation to form a minimum in the vapor pressure curve known as a (negative) azeotrope , corresponding to a mixture that evaporates without change of composition. [ 12 ] When these two components are mixed, the reaction is exothermic as ion-dipole intermolecular forces of attraction are formed between the resulting ions (H 3 O + and Cl – ) and the polar water molecules so that Δ H mix is negative.
When the adhesion is weaker than the cohesion, which is quite common, the liquid particles escape the solution more easily, which increases the vapor pressure and leads to a positive deviation.
If the deviation is large, then the vapor pressure curve shows a maximum at a particular composition and forms a positive azeotrope (low-boiling mixture). Some mixtures in which this happens are (1) ethanol and water , (2) benzene and methanol , (3) carbon disulfide and acetone , (4) chloroform and ethanol, and (5) glycine and water. When these pairs of components are mixed, the process is endothermic as weaker intermolecular interactions are formed so that Δ mix H is positive.
It is possible to have mixed deviations, which are positive for one component and negative for the other, and which switch between positive and negative while moving from x = 0 {\displaystyle x=0} to x = 1 {\displaystyle x=1} . These are not merely theoretically possible, as actual examples of mixed deviation exist. [ 13 ] The possible physical deviations are not entirely arbitrary however, as they are constrained by the Duhem–Margules equation : for example, if one component has positive deviation over the entire range then the other component cannot have negative deviation over the entire range. [ 13 ] | https://en.wikipedia.org/wiki/Raoult's_law |
In statistics , the Rao–Blackwell theorem , sometimes referred to as the Rao–Blackwell–Kolmogorov theorem , is a result that characterizes the transformation of an arbitrarily crude estimator into an estimator that is optimal by the mean-squared-error criterion or any of a variety of similar criteria.
The Rao–Blackwell theorem states that if g ( X ) is any kind of estimator of a parameter θ, then the conditional expectation of g ( X ) given T ( X ), where T is a sufficient statistic , is typically a better estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator g ( X ), and then evaluate that conditional expected value to get an estimator that is in various senses optimal.
The theorem is named after C.R. Rao and David Blackwell . The process of transforming an estimator using the Rao–Blackwell theorem can be referred to as Rao–Blackwellization . The transformed estimator is called the Rao–Blackwell estimator . [ 1 ] [ 2 ] [ 3 ]
One case of Rao–Blackwell theorem states:
In other words,
The essential tools of the proof besides the definition above are the law of total expectation and the fact that for any random variable Y , E( Y 2 ) cannot be less than [E( Y )] 2 . That inequality is a case of Jensen's inequality , although it may also be shown to follow instantly from the frequently mentioned fact that
More precisely, the mean square error of the Rao-Blackwell estimator has the following decomposition [ 4 ]
Since E [ Var ( δ ( X ) ∣ T ( X ) ) ] ≥ 0 {\displaystyle \operatorname {E} [\operatorname {Var} (\delta (X)\mid T(X))]\geq 0} , the Rao-Blackwell theorem immediately follows.
The more general version of the Rao–Blackwell theorem speaks of the "expected loss" or risk function :
where the "loss function" L may be any convex function . If the loss function is twice-differentiable, as in the case for mean-squared-error, then we have the sharper inequality [ 4 ]
The improved estimator is unbiased if and only if the original estimator is unbiased, as may be seen at once by using the law of total expectation . The theorem holds regardless of whether biased or unbiased estimators are used.
The theorem seems very weak: it says only that the Rao–Blackwell estimator is no worse than the original estimator. In practice, however, the improvement is often enormous. [ 5 ]
Phone calls arrive at a switchboard according to a Poisson process at an average rate of λ per minute. This rate is not observable, but the numbers X 1 , ..., X n of phone calls that arrived during n successive one-minute periods are observed. It is desired to estimate the probability e −λ that the next one-minute period passes with no phone calls.
An extremely crude estimator of the desired probability is
i.e., it estimates this probability to be 1 if no phone calls arrived in the first minute and zero otherwise. Despite the apparent limitations of this estimator, the result given by its Rao–Blackwellization is a very good estimator.
The sum
can be readily shown to be a sufficient statistic for λ, i.e., the conditional distribution of the data X 1 , ..., X n , depends on λ only through this sum. Therefore, we find the Rao–Blackwell estimator
After doing some algebra we have
Since the expected total number of calls arriving during the first n minutes is n λ, one might not be surprised if this estimator has a fairly high probability (if n is big, then by the WLLN the sample average converges in probability to the parameter λ) of being close to
So δ 1 is clearly a very much improved estimator of that last quantity. In fact, since S n is complete and δ 0 is unbiased, δ 1 is the unique minimum variance unbiased estimator by the Lehmann–Scheffé theorem .
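A hedged Monte Carlo sketch of this example, comparing the mean squared errors of the crude estimator and its Rao–Blackwellization; the values of λ, n and the number of replications are illustrative choices:

```python
# Monte Carlo comparison of the crude estimator delta0 = 1{X1 = 0} with its
# Rao-Blackwellization delta1 = (1 - 1/n)^{S_n} for the Poisson example above.
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 100_000
X = rng.poisson(lam, size=(reps, n))
delta0 = (X[:, 0] == 0).astype(float)        # crude estimator of exp(-lam)
delta1 = (1 - 1/n) ** X.sum(axis=1)          # Rao-Blackwell estimator

target = np.exp(-lam)
print("MSE of delta0:", np.mean((delta0 - target) ** 2))
print("MSE of delta1:", np.mean((delta1 - target) ** 2))   # markedly smaller
```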
Rao–Blackwellization is an idempotent operation. Using it to improve the already improved estimator does not obtain a further improvement, but merely returns as its output the same improved estimator.
If the conditioning statistic is both complete and sufficient , and the starting estimator is unbiased, then the Rao–Blackwell estimator is the unique " best unbiased estimator ": see Lehmann–Scheffé theorem .
An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete , was provided by Galili and Meilijson in 2016. [ 6 ] Let X 1 , … , X n {\displaystyle X_{1},\ldots ,X_{n}} be a random sample from a scale-uniform distribution X ∼ U ( ( 1 − k ) θ , ( 1 + k ) θ ) , {\displaystyle X\sim U\left((1-k)\theta ,(1+k)\theta \right),} with unknown mean E [ X ] = θ {\displaystyle E[X]=\theta } and known design parameter k ∈ ( 0 , 1 ) {\displaystyle k\in (0,1)} . In the search for "best" possible unbiased estimators for θ , {\displaystyle \theta ,} it is natural to consider X 1 {\displaystyle X_{1}} as an initial (crude) unbiased estimator for θ {\displaystyle \theta } and then try to improve it. Since X 1 {\displaystyle X_{1}} is not a function of T = ( X ( 1 ) , X ( n ) ) {\displaystyle T=\left(X_{(1)},X_{(n)}\right)} , the minimal sufficient statistic for θ {\displaystyle \theta } (where X ( 1 ) = min ( X i ) {\displaystyle X_{(1)}=\min(X_{i})} and X ( n ) = max ( X i ) {\displaystyle X_{(n)}=\max(X_{i})} ), it may be improved using the Rao–Blackwell theorem as follows:
However, the following unbiased estimator can be shown to have lower variance:
And in fact, it could be even further improved when using the following estimator:
The model is a scale model . Optimal equivariant estimators can then be derived for loss functions that are invariant . [ 7 ] | https://en.wikipedia.org/wiki/Rao–Blackwell_theorem |
Raphaël Horace Dubois (20 June 1849, Le Mans – 21 January 1929) was a French pharmacologist known for his work on bioluminescence and anesthesia . [ 1 ] He coined the terms proteon and bioproteon, from the Greek "proteon" for matter and "bios" for life. Bioproteon means "living matter". He concluded that there was no difference between matter and living matter.
"A consideration of radioactivity led Dubois, in 1904, to the view that the distinction between "matter of life" and "living matter" is superficial. He proposed the term bioproteon meaning the particular state of the "proteon" in living beings, and suggested the desirability of determining the radioactivity proper of the bioproteon. In a subsequent paper he says: "The unique principle of everything, of both force and matter, I have called 'proteon,' and when it pertains to a living being, 'bioproteon'." Proteon and bioproteon are only two different states of the same thing. When the bioproteon is dead it has only ceased to be radioactive and becomes simply proteon." [ 2 ]
Dubois' bioluminescence work began when he became a research assistant to Paul Bert in 1882. Although he initially planned to study the effects of anesthesia on mollusks, witnessing the bioluminescence of Pyrophorus noctilucus inspired him to study the beetle in more depth. Dubois discovered that not only do the adults glow, but so do the unfertilized eggs, embryos, and larvae. He later conducted studies on Scolioplanes crassipes , in which he found that the source of its luminescence lies in cells of the wall of the gut. Dubois published a paper on the light production of Pholas dactylus in 1887, in which he coined the terms luciferin and luciferase . [ 3 ]
| https://en.wikipedia.org/wiki/Raphaël_Dubois
The RapidIO architecture is a high-performance packet-switched electrical connection technology. It supports messaging, read/write and cache coherency semantics. Based on industry-standard electrical specifications such as those for Ethernet , RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect.
The RapidIO protocol was originally designed by Mercury Computer Systems and Motorola ( Freescale ) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. [ 1 ] The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies.
The RapidIO specification revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption.
The RapidIO specification revision 1.2, released in June 2002, [ 2 ] defined a serial interconnection based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, [ 3 ] imaging and military computing. [ 4 ]
The RapidIO specification revision 1.3 was released in June 2005.
The RapidIO specification revision 2.0 (6xN Gen2), was released in March 2008. [ 5 ] This added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s.
The RapidIO specification revision 2.1 was released in September 2009.
The RapidIO specification revision 2.2 was released in May 2011.
The RapidIO specification revision 3.0 (10xN Gen3) was released in October 2013. [ 6 ] It introduced a number of changes over the 2.x specifications.
The RapidIO specification revision 3.1 was released in October 2014. [ 7 ] It was developed through a collaboration between the RapidIO Trade Association and NGSIS, and introduced further changes relative to the 3.0 specification.
The RapidIO specification revision 3.2 was released in February 2016.
The RapidIO specification revision 4.0 (25xN Gen4) was released in June 2016. [ 8 ] It introduced further changes relative to the 3.x specifications.
The RapidIO specification revision 4.1 was released in July 2017. [ 9 ]
RapidIO fabrics are used in cellular infrastructure 3G, 4G and LTE networks with millions of RapidIO ports shipped [ 10 ] into wireless base stations worldwide. RapidIO fabrics were originally designed to support connecting different types of processors from different manufacturers together in a single system. This flexibility has driven the widespread use of RapidIO in wireless infrastructure equipment where there is a need to combine heterogeneous DSP, FPGA and communications processors in a tightly coupled system with low latency and high reliability.
Data center and HPC analytics systems have been deployed using a RapidIO 2D torus mesh fabric, [ 11 ] which provides a high-speed general-purpose interface among the system cartridges. This benefits applications that require high-bandwidth, low-latency node-to-node communication. The RapidIO 2D torus unified fabric is routed as a torus ring configuration connecting up to 45 server cartridges, providing 5 Gbit/s-per-lane connections in each direction to each cartridge's north, south, east and west neighbors. This suits many HPC applications where efficient localized traffic is needed.
Also, using an open modular data center and compute platform, [ 12 ] a heterogeneous HPC system has showcased the low latency attribute of RapidIO to enable real-time analytics. [ 13 ] In March 2015 a top-of-rack switch was announced to drive RapidIO into mainstream data center applications. [ 14 ]
The interconnect or "bus" is one of the critical technologies in the design and development of spacecraft avionic systems, dictating their architecture and level of complexity. A host of existing architectures remain in use because of their maturity, and they are sufficient for the needs and requirements they were designed around. For next-generation missions, however, a more capable avionics architecture is desired, well beyond what existing architectures can provide. A viable option for the design and development of these next-generation architectures is to leverage existing commercial protocols capable of accommodating high levels of data transfer.
In 2012, RapidIO was selected by the Next Generation Spacecraft Interconnect Standard (NGSIS) working group to serve as the foundation for standard communication interconnects to be used in spacecraft. The NGSIS is an umbrella standards effort that includes RapidIO Version 3.1 development, and a box-level hardware standards effort under VITA 78 called SpaceVPX or High Reliability VPX. The NGSIS requirements committee developed extensive requirements criteria with 47 different elements for the NGSIS interconnect. Independent trade study results by NGSIS member companies demonstrated the superiority of RapidIO over other existing commercial protocols, such as InfiniBand, Fibre Channel, and 10G Ethernet. As a result, the group decided that RapidIO offered the best overall interconnect for the needs of next-generation spacecraft. [ 15 ]
The RapidIO roadmap aligns with Ethernet PHY development. RapidIO specifications for 50 GBd and higher links are under investigation. [ 16 ]
The RapidIO protocol is defined in a three-layered specification, comprising a physical layer, a transport layer and a logical layer, each described below.
System specifications include:
The RapidIO electrical specifications are based on industry-standard Ethernet and Optical Interconnect Forum standards.
The RapidIO PCS/PMA layer supports two forms of encoding/framing.
Every RapidIO processing element transmits and receives three kinds of information: packets, control symbols, and an idle sequence.
Every packet has two values that control the physical layer exchange of that packet. The first is an acknowledge ID (ackID), which is the link-specific, unique, 5-, 6-, or 12-bit value that is used to track packets exchanged on a link. Packets are transmitted with serially increasing ackID values. Because the ackID is specific to a link, the ackID is not covered by CRC, but by protocol. This allows the ackID to change with each link it passes over, while the packet CRC can remain a constant end-to-end integrity check of the packet. When a packet is successfully received, it is acknowledged using the ackID of the packet. A transmitter must retain a packet until it has been successfully acknowledged by the link partner.
The second value is the packet's physical priority. The physical priority is composed of the Virtual Channel (VC) identifier bit, the Priority bits, and the Critical Request Flow (CRF) bit. The VC bit determines if the Priority and CRF bits identify a Virtual Channel from 1 to 8, or are used as the priority within Virtual Channel 0. Virtual Channels are assigned guaranteed minimum bandwidths. Within Virtual Channel 0, packets of higher priority can pass packets of lower priority. Response packets must have a physical priority higher than requests in order to avoid deadlock.
The physical layer contribution to RapidIO packets is a 2-byte header at the beginning of each packet that includes the ackID and physical priority, and a final 2-byte CRC value to check the integrity of the packet. Packets larger than 80 bytes also have an intermediate CRC after the first 80 bytes. With one exception a packet's CRC value(s) acts as an end-to-end integrity check.
RapidIO control symbols can be sent at any time, including within a packet. This gives RapidIO the lowest possible in-band control path latency, enabling the protocol to achieve high throughput with smaller buffers than other protocols.
Control symbols are used to delimit packets (Start of Packet, End of Packet, Stomp), to acknowledge packets (Packet Acknowledge, Packet Not Acknowledged), reset (Reset Device, Reset Port) and to distribute events within the RapidIO system (Multicast Event Control Symbol). Control symbols are also used for flow control (Retry, Buffer Status, Virtual Output Queue Backpressure) and for error recovery.
The error recovery procedure is very fast. When a receiver detects a transmission error in the received data stream, the receiver causes its associated transmitter to send a Packet Not Accepted control symbol. When the link partner receives a Packet Not Accepted control symbol, it stops transmitting new packets and sends a Link Request/Port Status control symbol. The Link Response control symbol indicates the ackID that should be used for the next packet transmitted. Packet transmission then resumes.
The IDLE sequence is used during link initialization for signal quality optimization. It is also transmitted when the link does not have any control symbols or packets to send.
Every RapidIO endpoint is uniquely identified by a Device Identifier (deviceID). Each RapidIO packet contains two device IDs. The first is the destination ID (destID), which indicates where the packet should be routed. The second is the source ID (srcID), which indicates where the packet originated. When an endpoint receives a RapidIO request packet that requires a response, the response packet is composed by swapping the srcID and destID of the request.
RapidIO switches use the destID of received packets to determine the output port or ports that should forward the packet. Typically, the destID is used to index into an array of control values. The indexing operation is fast and low cost to implement. RapidIO switches support a standard programming model for the routing table, which simplifies system control.
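The indexing scheme and the request/response addressing can be illustrated with a toy sketch (not from the RapidIO specification; the field names, table size and device IDs are invented):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest_id: int   # where the packet should be routed
    src_id: int    # where the packet originated
    payload: bytes

class Switch:
    """Toy switch: the destination ID indexes directly into a routing table."""
    def __init__(self, num_ids: int, default_port: int = 0):
        # One output-port entry per possible destination ID.
        self.route_table = [default_port] * num_ids

    def program_route(self, dest_id: int, out_port: int) -> None:
        self.route_table[dest_id] = out_port

    def forward(self, pkt: Packet) -> int:
        # Fast, constant-time lookup: destID -> output port.
        return self.route_table[pkt.dest_id]

def make_response(request: Packet, payload: bytes = b"") -> Packet:
    """Responses are addressed by swapping the request's srcID and destID."""
    return Packet(dest_id=request.src_id, src_id=request.dest_id, payload=payload)

# Example usage with made-up device IDs.
sw = Switch(num_ids=256)
sw.program_route(dest_id=7, out_port=3)
req = Packet(dest_id=7, src_id=42, payload=b"read")
print(sw.forward(req))          # -> 3
print(make_response(req))       # destination becomes 42
```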
The RapidIO transport layer supports any network topology, from simple trees and meshes to n-dimensional hypercubes , multi-dimensional toroids , and more esoteric architectures such as entangled networks.
The RapidIO transport layer enables hardware virtualization (for example, a RapidIO endpoint can support multiple device IDs). Portions of the destination ID of each packet can be used to identify specific pieces of virtual hardware within the endpoint.
The RapidIO logical layer is composed of several specifications, each providing packet formats and protocols for different transaction semantics.
The logical I/O layer defines packet formats for read, write, write-with-response, and various atomic transactions. Examples of atomic transactions are set, clear, increment, decrement, swap, test-and-swap, and compare-and-swap.
The Messaging specification defines Doorbells and Messages. Doorbells communicate a 16-bit event code. Messages transfer up to 4KiB of data, segmented into up to 16 packets each with a maximum payload of 256 bytes. Response packets must be sent for each Doorbell and Message request. The response packet status value indicates done, error, or retry. A status of retry requests the originator of the request to send the packet again. The logical level retry response allows multiple senders to access a small number of shared reception resources, leading to high throughput with low power.
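The size limits above imply a simple segmentation rule, sketched here as a toy model (field names are invented; the real packet formats are defined by the specification):

```python
def segment_message(data: bytes, max_segments: int = 16, max_payload: int = 256):
    """Split a message into at most `max_segments` packets of at most
    `max_payload` bytes each (16 x 256 bytes = 4 KiB maximum), mirroring the
    limits described above. The dictionary fields are illustrative only."""
    if len(data) > max_segments * max_payload:
        raise ValueError("message exceeds the 4 KiB maximum")
    segments = [data[i:i + max_payload] for i in range(0, len(data), max_payload)]
    # Each segment would be carried in its own packet with a segment index.
    return [{"segment": i, "payload": seg} for i, seg in enumerate(segments)]

packets = segment_message(b"x" * 1000)
print(len(packets), [len(p["payload"]) for p in packets])   # 4 packets: 256, 256, 256, 232
```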
The Flow Control specification defines packet formats and protocols for simple XON/XOFF flow control operations. Flow control packets can be originated by switches and endpoints. Reception of a XOFF flow control packet halts transmission of a flow or flows until an XON flow control packet is received or a timeout occurs. Flow Control packets can also be used as a generic mechanism for managing system resources.
The Globally Shared Memory specification defines packet formats and protocols for operating a cache coherent shared memory system over a RapidIO network.
The Data Streaming specification supports messaging with different packet formats and semantics than the Messaging specification. Data Streaming packet formats support the transfer of up to 64K of data, segmented over multiple packets. Each transfer is associated with a Class of Service and Stream Identifier, enabling thousands of unique flows between endpoints.
The Data Streaming specification also defines Extended Header flow control packet formats and semantics to manage performance within a client-server system. Each client uses extended header flow control packets to inform the server of the amount of work that could be sent to the server. The server responds with extended header flow control packets that use XON/XOFF, rate, or credit based protocols to control how quickly and how much work the client sends to the server.
Systems with a known topology can be initialized in a system specific manner without affecting interoperability. The RapidIO system initialization specification supports system initialization when system topology is unknown or dynamic. System initialization algorithms support the presence of redundant hosts, so system initialization need not have a single point of failure.
Each system host recursively enumerates the RapidIO fabric, seizing ownership of devices, allocating device IDs to endpoints and updating switch routing tables. When a conflict for ownership occurs, the system host with the larger deviceID wins. The "losing" host releases ownership of its devices and retreats, waiting for the "winning" host. The winning host completes enumeration, including seizing ownership of the losing host. Once enumeration is complete, the winning host releases ownership of the losing host. The losing host then discovers the system by reading the switch routing tables and registers on each endpoint to learn the system configuration. If the winning host does not complete enumeration in a known time period, the losing host determines that the winning host has failed and completes enumeration.
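The conflict-resolution rule can be modelled with a toy sketch (illustrative only; host and device identifiers are invented, and the real procedure involves register accesses over the fabric):

```python
def enumerate_fabric(hosts, devices):
    """Toy model of the rule described above: every host tries to seize every
    device; where two hosts collide, the host with the larger deviceID wins
    and the loser retreats until enumeration is complete."""
    ownership = {}                              # device -> winning host deviceID
    for host in hosts:
        for dev in devices:
            ownership[dev] = max(ownership.get(dev, -1), host)
    winner = max(hosts)
    losers = [h for h in hosts if h != winner]
    return winner, losers, ownership

winner, losers, ownership = enumerate_fabric(hosts=[5, 9], devices=["switch0", "ep1", "ep2"])
print(winner, losers)        # 9 [5] -- host 5 releases its devices and waits
print(ownership)             # every device ends up owned by host 9
```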
System enumeration is supported in Linux by the RapidIO subsystem.
RapidIO supports high availability, fault tolerant system design, including hot swap. The error conditions that require detection, and standard registers to communicate status and error information, are defined. A configurable isolation mechanism is also defined so that when it is not possible to exchange packets on a link, packets can be discarded to avoid congestion and enable diagnosis and recovery activities. In-band (port-write packet) and out-of-band (interrupt) notification mechanisms are defined.
The RapidIO specification does not discuss the subjects of form factors and connectors, leaving this to specific application-focussed communities. RapidIO is supported by the following form factors:
Processor-agnostic RapidIO support is found in the Linux kernel. [ citation needed ]
The RapidIO interconnect is used extensively in the following applications [ citation needed ] :
RapidIO is expanding into supercomputing, server, and storage applications. [ citation needed ]
PCI Express is targeted at the host to peripheral market, as opposed to embedded systems. Unlike RapidIO, PCIe is not optimized for peer-to-peer multi processor networks. PCIe is ideal for host to peripheral communication. PCIe does not scale as well in large multiprocessor peer-to-peer systems, as the basic PCIe assumption of a "root complex" creates fault tolerance and system management issues.
Another alternative interconnect technology is Ethernet . Ethernet is a robust approach to linking computers over large geographic areas, where network topology may change unexpectedly, the protocols used are in flux, and link latencies are large. To meet these challenges, systems based on Ethernet require significant amounts of processing power, software and memory throughout the network to implement protocols for flow control, data transfer, and packet routing. RapidIO is optimized for energy efficient, low latency, processor-to-processor communication in fault tolerant embedded systems that span geographic areas of less than one kilometer.
SpaceFibre is a competing technology for space applications. [ 17 ]
Time Triggered Ethernet is a competing technology for more complex backplane (VPX) and backbone applications for space (launchers and human-rated integrated avionics). | https://en.wikipedia.org/wiki/RapidIO |
Rapid Boot is an EFI BIOS alternative using a Linux kernel (in the BIOS flash part) developed by Intel Corporation , primarily intended for computer clusters .
| https://en.wikipedia.org/wiki/Rapid_Boot
Rapid amplification of cDNA ends ( RACE ) is a technique used in molecular biology to obtain the full-length sequence of an RNA transcript found within a cell. RACE results in the production of a cDNA copy of the RNA sequence of interest, produced through reverse transcription , followed by PCR amplification of the cDNA copies (see RT-PCR ). The amplified cDNA copies are then sequenced and, if long enough, should map to a unique genomic region. RACE is commonly followed by cloning of what were originally individual RNA molecules before sequencing. A higher-throughput alternative, useful for identifying novel transcript structures, is to sequence the RACE products with next-generation sequencing technologies.
RACE can provide the sequence of an RNA transcript from a small known sequence within the transcript to the 5' end (5' RACE-PCR) or 3' end (3' RACE-PCR) of the RNA. This technique is sometimes called one-sided PCR or anchored PCR .
The first step in RACE is to use reverse transcription to produce a cDNA copy of a region of the RNA transcript. In this process, an unknown end portion of a transcript is copied using a known sequence from the center of the transcript. The copied region is bounded by the known sequence, at either the 5' or 3' end.
The protocols for 5' and 3' RACE differ slightly. 5' RACE-PCR begins using mRNA as a template for a first round of cDNA synthesis (or reverse transcription ) reaction using an anti-sense (reverse) oligonucleotide primer that recognizes a known sequence in the middle of the gene of interest; the primer is called a gene specific primer (GSP). The primer binds to the mRNA, and the enzyme reverse transcriptase adds nucleotides to the 3' end of the primer to generate a specific single-stranded cDNA product; this is the reverse complement of the mRNA. Following cDNA synthesis, the enzyme terminal deoxynucleotidyl transferase (TdT) is used to add a string of identical nucleotides , known as a homopolymeric tail, to the 3' end of the cDNA. (There are other, much more efficient ways than homopolymeric tailing to add a 3'-terminal sequence to the first-strand cDNA, but the principle of the method remains the same.) PCR is then carried out, which uses a second anti-sense gene specific primer (GSP2) that binds to the known sequence, and a sense (forward) universal primer (UP) that binds the homopolymeric tail added to the 3' ends of the cDNAs to amplify a cDNA product from the 5' end.
3' RACE-PCR uses the natural polyA tail that exists at the 3' end of all eukaryotic mRNAs for priming during reverse transcription, so this method does not require the addition of nucleotides by TdT. cDNAs are generated using an Oligo-dT -adaptor primer (a primer with a short sequence of deoxy-thymine nucleotides) that complements the polyA stretch and adds a special adaptor sequence to the 5' end of each cDNA. PCR is then used to amplify 3' cDNA from a known region using a sense GSP, and an anti-sense primer complementary to the adaptor sequence.
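The sequence manipulations involved can be mimicked in silico; the following sketch (with an invented transcript and primer, U read as T, and strings written 5'→3') follows the 5' RACE steps of primer-directed reverse transcription and TdT tailing:

```python
DNA_COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(dna: str) -> str:
    """Reverse complement of a DNA string."""
    return "".join(DNA_COMP[b] for b in reversed(dna))

def reverse_transcribe(mrna: str, gsp: str) -> str:
    """First-strand cDNA synthesis primed by an antisense gene-specific primer:
    the region from the transcript's 5' end up to the GSP binding site is
    copied into its reverse complement (U is read as T for simplicity)."""
    template = mrna.replace("U", "T")        # work in DNA letters
    site = template.find(revcomp(gsp))       # where the antisense GSP anneals
    if site == -1:
        raise ValueError("GSP does not bind this transcript")
    return revcomp(template[: site + len(gsp)])

def tdt_tail(cdna: str, base: str = "C", length: int = 15) -> str:
    """Mimic terminal transferase (TdT) tailing: append a homopolymeric run of
    identical nucleotides to the 3' end (string end) of the first-strand cDNA."""
    return cdna + base * length

# Made-up transcript and primer, for illustration only.
mrna = "AUGGCUUACGAUCGUAGCUAGCUUAGGCAA"
gsp = "AAGCTAGCTACG"                         # antisense, binds the middle of the transcript
print(tdt_tail(reverse_transcribe(mrna, gsp)))
```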
The cDNA molecules generated by RACE can be sequenced using high-throughput sequencing technologies (an approach also called RACE-seq). High-throughput characterization of RACE fragments is more time-efficient, more sensitive, less costly and more technically feasible than traditional characterization of RACE fragments by molecular cloning followed by Sanger sequencing of a few clones.
RACE can be used to amplify unknown 5' (5'-RACE) or 3' (3'-RACE) parts of RNA molecules where part of the RNA sequence is known and targeted by a gene-specific primer. Combined with high-throughput sequencing to characterize the amplified RACE products, the approach can be applied to any type of coding or non-coding RNA molecule.
The idea of combining RACE with high-throughput sequencing was first introduced in 2009 as Deep-RACE, used to map the transcription start sites (TSS) of 17 genes in a single cell line. [ 1 ] The approach was first named RACE-seq in a 2014 study that used it to accurately map the cleavage sites of target RNAs directed by synthetic siRNAs . [ 2 ] The methodology was later used to characterize full-length unknown parts of novel transcripts and fusion transcripts in colorectal cancer . [ 3 ] In another study, aiming to characterize unknown transcript structures of lncRNAs , RACE was used in combination with semi-long 454 sequencing . [ 4 ] | https://en.wikipedia.org/wiki/Rapid_amplification_of_cDNA_ends
A rapid antigen test ( RAT ), sometimes called a rapid antigen detection test ( RADT ), antigen rapid test ( ART ), or loosely just a rapid test , is a rapid diagnostic test suitable for point-of-care testing that directly detects the presence or absence of an antigen . RATs are a type of lateral flow test detecting antigens, rather than antibodies ( antibody tests ) or nucleic acid ( nucleic acid tests ). Rapid tests generally give a result in 5 to 30 minutes, require minimal training or infrastructure, and have significant cost advantages. [ citation needed ] Rapid antigen tests for the detection of SARS-CoV-2 , the virus that causes COVID-19 , have been commonly used during the COVID-19 pandemic .
For many years, an early and major class of RATs—the rapid strep tests for streptococci—were so often the referent when RATs or RADTs were mentioned that the two latter terms were often loosely treated as synonymous with those. Since the COVID-19 pandemic , awareness of RATs is no longer limited to health professionals and COVID-19 has become the expected referent, so more precise usage is required in other circumstances.
RATs are based on the principle of antigen-antibody interaction . They detect antigens (generally a protein on the surface of a virus). A linear chromatography substrate (a porous piece of material) bears an indicator line, onto which antibodies directed against the target antigen are fixed. Antibodies are also fixed to a visualisation marker (generally a dye, though sometimes these antibodies are modified to fluoresce), to which the sample is added. Any virus particles present will bind to these markers. This mix then travels through the substrate through capillarity. When it reaches the indicator line, virus particles are immobilised by the antibodies fixed there, along with the visualisation marker, allowing concentration and thus visual detection of significant levels of virus in a sample. [ 1 ]
A positive result with an antigen test should generally be confirmed by RT-qPCR or some other test with higher sensitivity and specificity. [ 1 ]
Common examples of RATs or RADTs include:
Rapid antigen tests for COVID-19 are one of the most useful applications of these tests. Often called lateral flow tests , they have provided governments worldwide with several benefits. They are quick to implement with minimal training, offer significant cost advantages, costing a fraction of existing forms of PCR testing, and give users a result within 5–30 minutes. Rapid antigen tests have found their best use as part of mass testing or population-wide screening approaches. [ 3 ] They are successful in these approaches because, in addition to the aforementioned benefits, they identify individuals who are the most infectious and could potentially spread the virus to a large number of other people. [ 4 ] This differs slightly from other forms of COVID-19 tests, such as PCR, which are generally seen as useful tests for individuals. [ citation needed ]
As early as February 2021, the US Department of State considered the antigen test suitable for entry to the country. [ 5 ] In Canada, although the antigen test was not an accepted route to entry in January 2021, [ 6 ] Health Canada in August 2021 made subsidized rapid antigen tests available at no cost "to more small and medium-sized organizations through new pharmacy partners". [ 7 ]
RATs are immunochromatographic assays which give results that can be seen with the naked eye (with or without special illumination, such as a UV lamp). They are qualitative in nature, although within a certain range it is possible to make rough order of magnitude estimates of viral load from the results. RATs are generally screening tests, with relatively low sensitivity and specificity , thus results should be evaluated on the basis of confirmatory tests like PCR testing or western blot. [ citation needed ]
One inherent advantage of an antigen test over an antibody test (such as antibody-detecting rapid HIV tests ) is that it can take time for the immune system to develop antibodies after infection begins, but the foreign antigen is present right away. Although any diagnostic test may have false negatives , this latency period can open an especially wide avenue for false negatives in antibody tests, although the particulars depend on which disease and which test are involved. A rapid antigen test typically costs around US$5 to manufacture. [ citation needed ] | https://en.wikipedia.org/wiki/Rapid_antigen_test |
The rapid furfural test is a chemical test used to distinguish between glucose and fructose . The rapid furfural test is similar to Molisch's test but uses concentrated hydrochloric acid instead of concentrated sulfuric acid and the solution is boiled. [ 1 ] Dilute sugar solution is added to ethanolic 1-naphthol and concentrated hydrochloric acid. The solution is then boiled and if a purple colour forms within thirty seconds, fructose is present. If a purple colour does not appear before thirty seconds, glucose is present. [ 2 ]
| https://en.wikipedia.org/wiki/Rapid_furfural_test
Rapid modes of evolution have been proposed by several notable biologists after Charles Darwin proposed his theory of evolutionary descent by natural selection . In his book On the Origin of Species (1859), Darwin stressed the gradual nature of descent, writing:
Work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie such abrupt morphological transitions. Consequently, consideration of mechanisms of phylogenetic change that are actually (not just apparently) non-gradual is increasingly common in the field of evolutionary developmental biology , particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume Origination of Organismal Form . | https://en.wikipedia.org/wiki/Rapid_modes_of_evolution |
Rapid phase transition or RPT is an explosive boiling phenomenon realized in liquefied natural gas (LNG) incidents, in which LNG vaporizes violently upon coming into contact with water, causing what is known as a physical explosion. During such explosions there is no combustion; rather, a huge amount of energy is transferred as heat from the roughly room-temperature water to the LNG across a temperature difference of about 175 °C (about 315 °F).
Liquefied natural gas, or LNG, is natural gas that has been liquefied, at atmospheric pressure, by cooling it to −161.5 °C (111.7 K; −258.7 °F). It is odorless , tasteless , colorless and not poisonous, but it can cause asphyxia , and its cryogenic temperature can cause frostbite. If saturated LNG contacts liquid water (e.g. sea water , which has an average temperature of 15 °C), heat is transferred from the water to the LNG, rapidly vaporizing it. This results in an explosion because natural gas in its gaseous form occupies 600 times the volume of the liquid; this is the phenomenon of rapid phase transition. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Rapid_phase_transition
Rapid prototyping is a group of techniques used to quickly fabricate a scale model of a physical part or assembly using three-dimensional computer aided design ( CAD ) data. [ 1 ] [ 2 ] Construction of the part or assembly is usually done using 3D printing technology. [ 3 ]
The first methods for rapid prototyping became available in mid 1987 and were used to produce models and prototype parts. Today, they are used for a wide range of applications and are used to manufacture production-quality parts in relatively small numbers if desired without the typical unfavorable short-run economics. [ 4 ] This economy has encouraged online service bureaus. Historical surveys of RP technology [ 2 ] start with discussions of simulacra production techniques used by 19th-century sculptors. Some modern sculptors use the progeny technology to produce exhibitions and various objects. [ 5 ] The ability to reproduce designs from a dataset has given rise to issues of rights, as it is now possible to interpolate volumetric data from 2D images.
As with CNC subtractive methods , the computer-aided-design – computer-aided manufacturing CAD - CAM workflow in the traditional rapid prototyping process starts with the creation of geometric data, either as a 3D solid using a CAD workstation, or 2D slices using a scanning device. For rapid prototyping this data must represent a valid geometric model; namely, one whose boundary surfaces enclose a finite volume, contain no holes exposing the interior, and do not fold back on themselves. [ 6 ] In other words, the object must have an "inside". The model is valid if for each point in 3D space the computer can determine uniquely whether that point lies inside, on, or outside the boundary surface of the model. CAD post-processors will approximate the application vendors' internal CAD geometric forms (e.g., B-splines) with a simplified mathematical form, which in turn is expressed in a specified data format which is a common feature in additive manufacturing : STL file format, a de facto standard for transferring solid geometric models to SFF machines. [ 7 ]
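As an illustration of the STL interchange format mentioned above (a minimal sketch, not tied to any particular CAD package; a real model would need a closed, watertight set of facets), a single triangular facet can be emitted in ASCII STL as follows:

```python
def facet_to_ascii_stl(v0, v1, v2, name="part"):
    """Emit a minimal ASCII STL solid containing one triangular facet."""
    # Facet normal = normalized cross product of two edge vectors.
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
    nx, ny, nz = nx / length, ny / length, nz / length
    lines = [f"solid {name}",
             f"  facet normal {nx:e} {ny:e} {nz:e}",
             "    outer loop"]
    for v in (v0, v1, v2):
        lines.append(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}")
    lines += ["    endloop", "  endfacet", f"endsolid {name}"]
    return "\n".join(lines)

print(facet_to_ascii_stl((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```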
To obtain the necessary motion control trajectories to drive the actual SFF, rapid prototyping, 3D printing or additive manufacturing mechanism , the prepared geometric model is typically sliced into layers, and the slices are scanned into lines (producing a "2D drawing" used to generate trajectory as in CNC 's toolpath), mimicking in reverse the layer-to-layer physical building process. [ citation needed ]
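The slicing step can be sketched in a highly simplified form (illustrative only; production slicers must also handle vertices lying exactly on the cutting plane, chain the segments into closed contours, and generate fill paths):

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane at
    height z; return the intersection segment as two (x, y) points, or None
    if the plane misses the triangle. Degenerate contacts are ignored."""
    points = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:             # edge crosses the plane
            t = (z - a[2]) / (b[2] - a[2])          # linear interpolation factor
            points.append((a[0] + t * (b[0] - a[0]),
                           a[1] + t * (b[1] - a[1])))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, z):
    """Collect the 2D segments produced by cutting every triangle at height z."""
    return [seg for seg in (slice_triangle(t, z) for t in triangles) if seg]

# One triangle spanning z = 0..2, sliced at z = 1.
print(slice_mesh([((0, 0, 0), (2, 0, 2), (0, 2, 2))], z=1.0))
```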
Rapid prototyping is also commonly applied in software engineering to try out new business models and application architectures in sectors such as aerospace, automotive, financial services, product development, and healthcare. [ 8 ] Aerospace design and industrial teams rely on prototyping to develop new AM methodologies; using SLA they can quickly make multiple versions of a project in a few days and begin testing sooner. [ 9 ] Rapid prototyping gives designers and developers an accurate idea of how the finished product will turn out before too much time and money is invested in it. Using 3D printing for rapid prototyping enables industrial-scale 3D printing, with which everything from large moulds to spare parts can be produced quickly. [ 10 ]
In the 1970s, Joseph Henry Condon and others at Bell Labs developed the Unix Circuit Design System (UCDS), automating the laborious and error-prone task of manually converting drawings to fabricate circuit boards for the purposes of research and development. [ citation needed ]
By the 1980s, U.S. policy makers and industrial managers were forced to take note that America's dominance in machine tool manufacturing had evaporated, in what was named the machine tool crisis. Numerous projects sought to counter these trends in the traditional CNC CAM area, which had begun in the US. Later, when rapid prototyping systems moved out of labs to be commercialized, it was recognized that developments were already international and that U.S. rapid prototyping companies could not afford to let a lead slip away. Under the umbrella of the National Science Foundation, the National Aeronautics and Space Administration ( NASA ), the US Department of Energy , the US Department of Commerce NIST , the US Department of Defense , the Defense Advanced Research Projects Agency ( DARPA ), and the Office of Naval Research coordinated studies to inform strategic planners in their deliberations. One such report was the 1997 Rapid Prototyping in Europe and Japan Panel Report, [ 2 ] in which Joseph J. Beaman, [ 12 ] founder of DTM Corporation, provides a historical perspective:
The roots of rapid prototyping technology can be traced to practices in topography and photosculpture. Within topography, Blanther (1892) suggested a layered method for making a mold for raised-relief paper topographical maps . The process involved cutting the contour lines on a series of plates which were then stacked. Matsubara (1974) of Mitsubishi proposed a topographical process with a photo-hardening photopolymer resin to form thin layers stacked to make a casting mold. Photosculpture was a 19th-century technique to create exact three-dimensional replicas of objects. Most famously, François Willème (1860) placed 24 cameras in a circular array and simultaneously photographed an object. The silhouette of each photograph was then used to carve a replica. Morioka (1935, 1944) developed a hybrid photosculpture and topographic process using structured light to photographically create contour lines of an object. The lines could then be developed into sheets and cut and stacked, or projected onto stock material for carving. The Munz (1956) process reproduced a three-dimensional image of an object by selectively exposing, layer by layer, a photo emulsion on a lowering piston. After fixing , a solid transparent cylinder contains an image of the object.
"The Origins of Rapid Prototyping - RP stems from the ever-growing CAD industry, more specifically, the solid modeling side of CAD. Before solid modeling was introduced in the late 1980's, three-dimensional models were created with wire frames and surfaces. But not until the development of true solid modeling could innovative processes such as RP be developed. Charles Hull, who helped found 3D Systems in 1986, developed the first RP process. This process, called stereolithography, builds objects by curing thin consecutive layers of certain ultraviolet light-sensitive liquid resins with a low-power laser. With the introduction of RP, CAD solid models could suddenly come to life". [ 14 ]
The technologies referred to as solid freeform fabrication are what we recognize today as rapid prototyping, 3D printing or additive manufacturing : Swainson (1977) and Schwerzel (1984) worked on polymerization of a photosensitive polymer at the intersection of two computer-controlled laser beams . Ciraud (1972) considered magnetostatic or electrostatic deposition with electron beam , laser or plasma for sintered surface cladding. These were all proposed, but it is unknown whether working machines were built. Hideo Kodama of Nagoya Municipal Industrial Research Institute was the first to publish an account of a solid model fabricated using a photopolymer rapid prototyping system (1981). [ 2 ] The first 3D rapid prototyping system relying on Fused Deposition Modeling (FDM) was made in April 1992 by Stratasys, although the patent did not issue until June 9, 1992. Sanders Prototype, Inc introduced the first desktop inkjet 3D printer (3DP), the Modelmaker 6Pro, in late 1993, using an invention from August 4, 1992 (Helinski), and then the larger industrial 3D printer, the Modelmaker 2, in 1997. [ 15 ] Z-Corp, using the MIT 3DP powder-binding process for Direct Shell Casting (DSP) invented in 1993, introduced its system to the market in 1995. [ 16 ] Even at that early date the technology was seen as having a place in manufacturing practice. A low-resolution, low-strength output had value in design verification, mold making, production jigs and other areas. Outputs have steadily advanced toward higher specification uses. [ 17 ] Sanders Prototype, Inc. (Solidscape) started as a rapid prototyping 3D printing manufacturer with the Modelmaker 6Pro, which made sacrificial thermoplastic patterns of CAD models using drop-on-demand (DOD) single-nozzle inkjet technology. [ 16 ]
Innovations are constantly being sought, to improve speed and the ability to cope with mass production applications. [ 18 ] A dramatic development which RP shares with related CNC areas is the freeware open-sourcing of high level applications which constitute an entire CAD - CAM toolchain. This has created a community of low res device manufacturers. Hobbyists have even made forays into more demanding laser-effected device designs. [ 19 ]
The earliest list of RP processes or fabrication technologies, published in 1993, was written by Marshall Burns and explains each process very thoroughly. It also names some technologies that were precursors to the names on the list below. For example, Visual Impact Corporation produced only a prototype printer for wax deposition and then licensed the patent to Sanders Prototype, Inc instead. BPM used the same inkjets and materials. [ 20 ]
It accelerates the design process of any product, as it allows both low-fidelity and high-fidelity prototyping [ 21 ] to reveal the adjustments needed before the final production line is set up. As a result, it also cuts production costs for overall product development [ 21 ] and allows functionality testing at a fraction of the regular cost. It eliminates the risk of the design team suffering injuries and of the prototype being damaged during the modeling process. It also allows users or focus groups to be involved in the design process through interaction with each of the prototypes, from the initial prototype to the final model. For example, rapid tooling based on CNC-machined prototypes can reduce mould manufacturing cost, shorten the mould manufacturing cycle, and simplify the mould-making process flow, among other advantages. [ 22 ] Furthermore, it is an ideal way to test for ergonomics [ 23 ] and anthropometry ( human factors ) so that the designed product fulfils the user's needs and offers a unique experience of use.
Although rapid prototyping has various benefits, it also has drawbacks. There can be a lack of accuracy, [ 23 ] as the process cannot guarantee that the quality of the prototype will be high or that the different components will fit well together, owing to dimensional errors in the 3D model. The initial cost of the technique can also be high because of the technology it relies on. [ 23 ] It can limit the range of materials the product can be made from, [ 23 ] and, depending on the complexity of the design, it can require highly skilled labor. | https://en.wikipedia.org/wiki/Rapid_prototyping
The rapid sand filter or rapid gravity filter is a type of filter used in water purification and is commonly used in municipal drinking water facilities as part of a multiple-stage treatment system. [ 1 ] These systems are complex and expensive to operate and maintain, and therefore less suitable for small communities and developing nations .
Rapid sand filters were first developed in the 1890s, and improved designs were developed by the 1920s. [ 2 ] The first modern rapid sand filtration plant was designed and built by George W. Fuller in Little Falls, New Jersey . [ 3 ] Rapid sand filters were widely used in large municipal water systems by the 1920s, because they required smaller land areas compared to slow sand filters . [ 4 ]
Rapid sand filters are typically designed as part of multi-stage treatment systems used by large municipalities. These systems are complex and expensive to operate and maintain, and therefore less suitable for small communities and developing nations. The filtration system requires a relatively small land area in proportion to the population served, and the design is less sensitive to changes in raw water quality, e.g. turbidity , than slow sand filters.
Rapid sand filters use relatively coarse sand (0.5 to 1.0 mm) and other granular media, such as anthracite, in beds 0.6 to 1.2 metres deep to remove particles and impurities that have been trapped in a floc through the use of flocculation chemicals—typically alum . Since media other than silica sand can be used in such filters, a more modern term is "rapid filtration" instead of "rapid sand filtration." [ 5 ] [ 6 ] The unfiltered water flows at about 5 m/h through the filter medium, under gravity or under pumped pressure, and the floc material is trapped in the sand matrix. [ 7 ] [ 8 ]
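For a sense of scale (an illustrative calculation, not from the cited sources; the plant flow figure is invented), the required bed area follows directly from dividing the volumetric flow by the filtration rate:

```python
def filter_area_m2(flow_m3_per_day: float, filtration_rate_m_per_h: float = 5.0) -> float:
    """Rough sizing: required filter bed area = volumetric flow / filtration rate.
    The 5 m/h rate comes from the text above; the flow figure below is made up."""
    return flow_m3_per_day / 24.0 / filtration_rate_m_per_h

# A plant treating 12,000 m3/day at 5 m/h needs about 100 m2 of filter bed,
# typically split across several cells so one can be backwashed at a time.
print(round(filter_area_m2(12_000), 1))   # -> 100.0
```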
Mixing, flocculation and sedimentation processes are typical treatment stages that precede filtration. Chemical additives, such as coagulants, are often used in conjunction with the filtration system. [ 1 ] : 7–9
The two types of rapid sand filter are the gravity type (e.g. Paterson's filter) and pressure type (e.g. Candy's filter).
A disinfection system (typically using chlorine or ozone ) is commonly used following filtration. [ 1 ] : 9–11 Rapid sand filtration has very little effect on taste and smell and dissolved impurities of drinking water, unless activated carbon is included in the filter medium.
Rapid sand filters must be cleaned frequently, often several times a day, by backwashing , which involves reversing the direction of the water and adding compressed air . During backwashing, the bed is fluidized and care must be taken not to wash away the media.
The backwash sequence would typically be: [ 8 ]
The byproduct of backwashing is sludge. Most treatment works use a sludge-thickening process, except for plants which discharge untreated sludge to sewers where its composition is within tolerable limits. [ 8 ] The thickening process comprises batch settling tanks or continuous picket-fence thickeners. Polyelectrolytes are added upstream to enhance settleability. Liquid from the process is routed back to the inlet of the works. Thickening is followed by lagooning, drying beds or filter pressing. Thickened sludge may be discharged to a sewer system, tankered away to landfill, or incinerated. [ 8 ] | https://en.wikipedia.org/wiki/Rapid_sand_filter
In electronics , rapid single flux quantum ( RSFQ ) is a digital electronic technology that uses superconducting devices, namely Josephson junctions , to process digital signals. In RSFQ logic, information is stored in the form of magnetic flux quanta and transferred in the form of Single Flux Quantum (SFQ) voltage pulses. RSFQ is one family of superconducting or SFQ logic . Others include Reciprocal Quantum Logic (RQL) and ERSFQ, an energy-efficient RSFQ variant that does not use bias resistors. Josephson junctions are the active elements for RSFQ electronics, just as transistors are the active elements for semiconductor electronics. RSFQ is a classical digital, not quantum computing , technology.
RSFQ is very different from the CMOS transistor technology used in conventional computers:
An SFQ pulse is produced when magnetic flux through a superconducting loop containing a Josephson junction changes by one flux quantum, Φ 0 , as a result of the junction switching. SFQ pulses have a quantized area ∫ V ( t ) dt = Φ 0 ≈ 2.07 × 10 −15 Wb = 2.07 mV⋅ps = 2.07 mA⋅pH due to magnetic flux quantization , a fundamental property of superconductors. Depending on the parameters of the Josephson junctions, the pulses can be as narrow as 1 ps with an amplitude of about 2 mV, or broader (e.g., 5–10 ps) with correspondingly lower amplitude. The typical value of the pulse amplitude is approximately 2 I c R n , where I c R n is the product of the junction critical current, I c , and the junction damping resistor, R n . For Nb-based junction technology I c R n is on the order of 1 mV. | https://en.wikipedia.org/wiki/Rapid_single_flux_quantum
A rapier loom is a shuttleless weaving loom in which the filling yarn is carried through the shed of warp yarns to the other side of the loom by finger-like carriers called rapiers. [ 1 ]
A stationary package of yarn is used to supply the weft yarns in the rapier machine. One end of a rapier, a rod or steel tape, carries the weft yarn. The other end of the rapier is connected to the control system. The rapier moves across the width of the fabric, carrying the weft yarn across through the shed to the opposite side. The rapier is then retracted, leaving the new pick in place.
In some versions of the loom, two rapiers are used, each half the width of the fabric in size. One rapier carries the yarn to the center of the shed, where the opposing rapier picks up the yarn and carries it the remainder of the way across the shed. [ 2 ] The double rapier is used more frequently than the single rapier due to its increased pick insertion speed and ability to weave wider widths of fabric.
The housing for the rapiers must take up as much space as the width of the machine. To overcome this problem, looms with flexible rapiers have been devised. The flexible rapier can be coiled as it is withdrawn, therefore requiring less storage space. If, however, the rapier is too stiff then it will not coil; if it is too flexible, it will buckle. Rigid and flexible rapier machines operate at speeds ranging from about 200 to 260 ppm, using up to 1,300 meters of weft yarn every minute. They have a noise level similar to that of modern projectile looms. They can produce a wide variety of fabrics ranging from muslin to drapery and upholstery materials.
Newer rapier machines are built with two distinct weaving areas for two separate fabrics. On such machines, one rapier picks up the yarn from the centre, between the two fabrics, and carries it across one weaving area; as it finishes laying that pick, the opposite end of the rapier picks up another yarn from the centre, and the rapier moves in the other direction to lay a pick for the second weaving area, on the other half of the machine.
Rapier machines weave more rapidly than most shuttle machines but more slowly than most other projectile machines. An important advantage of rapier machines is their flexibility, which permits the laying of picks of different colours. They also weave yarns of any type of fiber and can weave fabrics up to 110 inches in width without modification.
The development of the rapier loom began in 1844, when John Smith of Salford was granted a patent on a loom design that eliminated the shuttle typical of earlier models of looms. [ 3 ] Subsequent patents were taken out by Phillippe and Maurice in 1855, W.S. Laycock in 1869, and W. Glover in 1874, with rigid rapiers being perfected by O. Hallensleben in 1899. The main breakthrough came in 1922 when John Gabler invented the principle of loop transfer in the middle of the shed. [ 4 ] Flexible rapiers of the type used today were proposed in 1925 by the Spanish inventor R.G. Moya, while R. Dewas introduced the idea of grasping the weft at its tip by the giver or a carrier rapier and transferring it to the taker or a receiver in the middle of the shed. It was not until the 1950s and 1960s that rapier weaving became fully commercialized, with loom technology developing rapidly. [ 5 ] | https://en.wikipedia.org/wiki/Rapier_loom |
Rapoport's rule is an ecogeographical rule that states that latitudinal ranges of plants and animals are generally smaller at lower latitudes than at higher latitudes.
Stevens (1989) [ 1 ] named the rule after Eduardo H. Rapoport , who had earlier provided evidence for the phenomenon for subspecies of mammals (Rapoport 1975, [ 2 ] 1982 [ 3 ] ). Stevens used the rule to "explain" greater species diversity in the tropics, in the sense that the latitudinal gradient in species diversity and the rule share the same exceptions and so must have the same underlying cause. Narrower ranges in the tropics would allow more species to coexist. He later extended the rule to altitudinal gradients, claiming that altitudinal ranges are greatest at greater latitudes (Stevens 1992 [ 4 ] ), and to depth gradients in the oceans (Stevens 1996 [ 5 ] ). The rule has been the focus of intense discussion and given much impetus to exploring distributional patterns of plants and animals. Stevens' original paper has been cited about 330 times in the scientific literature.
Support for the generality of the rule is at best equivocal. [ 6 ] For example, marine teleost fishes have the greatest latitudinal ranges at low latitudes. [ 7 ] [ 8 ] In contrast, freshwater fishes do show the trend, although only above a latitude of about 40 degrees North. [ 8 ] Some subsequent papers have found support for the rule; others, probably even more numerous, have found exceptions to it. [ 6 ] [ 9 ] For most groups that have been shown to follow the rule, it is restricted to or at least most distinct above latitudes of about 40–50 degrees. Rohde therefore concluded that the rule describes a local phenomenon. [ 10 ] Computer simulations using the Chowdhury Ecosystem Model did not find support for the rule. [ 11 ]
Rohde (1996) [ 10 ] explained the fact that the rule is restricted to very high latitudes by effects of glaciations which have wiped out species with narrow ranges, a view also expressed by Brown (1995). [ 12 ] Another explanation of Rapoport's rule is the "climatic variability" or "seasonal variability hypothesis". [ 5 ] [ 13 ] According to this hypothesis, seasonal variability selects for greater climatic tolerances and therefore wider latitudinal ranges (see also Fernandez and Vrba 2005 [ 14 ] ).
The methods used to demonstrate the rule have been subject to some controversy. Most commonly, authors plot means of latitudinal ranges in a particular 5° latitudinal band against latitude, although modal or median ranges have been used by some. [ 15 ] In the original paper by Stevens, all species occurring in each band were counted, i.e., a species with a range of 50 degrees occurs in 10 or 11 bands. However, this may lead to an artificial inflation of latitudinal ranges of species occurring at high latitudes, because even a few tropical species with wide ranges will affect the means of ranges at high latitudes, whereas the opposite effect due to high latitude species extending into the tropics is negligible: species diversity is much smaller at high than low latitudes. As an alternative method the "midpoint method" has been proposed, which avoids this problem. It counts only those species with the midpoint of their ranges in a particular latitudinal band. [ 8 ] An additional complication in assessing Rapoport's rule for data based on field sampling is the possibility of a spurious pattern driven by a sample-size artifact. Equal sampling effort at species-rich and species-poor localities tends to underestimate range size at the richer localities relative to the poorer, when in fact range sizes might not differ among localities. [ 16 ]
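The difference between the two tabulation approaches can be made concrete with a small sketch (the species ranges below are invented, purely for illustration):

```python
def band_means(species_ranges, band_width=5, lat_max=90):
    """Compare Stevens' method (a species counts in every latitudinal band its
    range overlaps) with the midpoint method (a species counts only in the band
    containing its range midpoint). Ranges are (southern, northern) latitudes."""
    bands = list(range(0, lat_max, band_width))
    stevens = {b: [] for b in bands}
    midpoint = {b: [] for b in bands}
    for lo, hi in species_ranges:
        extent = hi - lo
        mid_band = min(int((lo + hi) / 2 // band_width) * band_width, lat_max - band_width)
        midpoint[mid_band].append(extent)
        for b in bands:                          # every band the range overlaps
            if lo < b + band_width and hi > b:
                stevens[b].append(extent)
    mean = lambda xs: sum(xs) / len(xs) if xs else None
    return ({b: mean(v) for b, v in stevens.items()},
            {b: mean(v) for b, v in midpoint.items()})

# One wide-ranging tropical species inflates Stevens' means in every band it
# reaches, but affects only a single band under the midpoint method.
ranges = [(0, 70), (0, 10), (5, 15), (40, 55), (60, 75)]
stevens, midpoint = band_means(ranges)
print(stevens[65], midpoint[65])   # 42.5 vs 15.0
```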
Marine benthic invertebrates and some parasites have been shown to have smaller dispersal abilities in cold seas ( Thorson's rule ), which would counteract Rapoport's rule. The tropics also have far more uniform temperatures over a far wider latitudinal range (about 45 degrees) than higher latitudes do. As temperature is one of the most important (if not the most important) factors determining geographical distribution, wider latitudinal ranges in the tropics might therefore be expected.
The inconsistent results concerning Rapoport's rule suggest that certain characteristics of species may be responsible for their different latitudinal ranges. These characteristics may include, for example, their evolutionary age: species that have evolved recently in the tropics may have small latitudinal ranges because they have not had the time to spread far from their origin, whereas older species have extended their ranges. [ 17 ] | https://en.wikipedia.org/wiki/Rapoport's_rule |
In computer science , Raptor codes (from " rap id tor nado"; [ 1 ] see Tornado codes ) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes , which were the first practical class of fountain codes .
Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number k of equal size source symbols into a potentially limitless sequence of encoding symbols such that reception of any k or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received beyond k , becoming very close to 1 once the number of received encoding symbols is only slightly larger than k . For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k+2 encoding symbols have been received is less than one in a million. A symbol can be any size, from a single byte to hundreds or thousands of bytes.
Raptor codes may be systematic or non-systematic . In the systematic case, the symbols of the original source block, i.e. the source symbols, are included within the set of encoding symbols. Examples of systematic Raptor codes include the one used by the 3rd Generation Partnership Project in mobile cellular wireless broadcasting and multicasting, and the one used by the DVB-H standards for IP datacast to handheld devices. [ citation needed ] The Raptor code used in these standards is also defined in IETF RFC 5053.
Online codes are an example of a non-systematic fountain code.
The most advanced version of Raptor is the RaptorQ code defined in IETF RFC 6330. The RaptorQ code is a systematic code, can be implemented in a way to achieve linear time encoding and decoding performance, has near-optimal recovery properties, supports up to 56,403 source symbols, and can support an essentially unlimited number of encoding symbols.
The RaptorQ code defined in IETF RFC 6330 is specified as a part of the Next Gen TV ( ATSC 3.0 ) standard to enable high quality broadcast video streaming (robust mobile TV) and efficient and reliable broadcast file delivery (datacasting). In particular, the RaptorQ code is specified in A/331 within ATSC 3.0. [ 2 ] See List of ATSC standards for a list of the ATSC 3.0 standard parts. Next Gen TV (ATSC 3.0) goes well beyond traditional TV to provide a broadcast internet enabling general data-delivery services.
Raptor codes are formed by the concatenation of two codes.
A fixed rate erasure code , usually with a fairly high rate, is applied as a 'pre-code' or 'outer code'. This pre-code may itself be a concatenation of multiple codes, for example in the code standardized by 3GPP a high density parity check code derived from the binary Gray sequence is concatenated with a simple regular low density parity check code . Another possibility would be a concatenation of a Hamming code with a low density parity check code.
The inner code takes the result of the pre-coding operation and generates a sequence of encoding symbols. The inner code is a form of LT codes . Each encoding symbol is the XOR of a pseudo-randomly chosen set of symbols from the pre-code output. The number of symbols which are XOR'ed together to form an output symbol is chosen pseudo-randomly for each output symbol according to a specific probability distribution.
This distribution, as well as the mechanism for generating pseudo-random numbers for sampling this distribution and for choosing the symbols to be XOR'ed, must be known to both sender and receiver. In one approach, each symbol is accompanied with an identifier which can be used as a seed to a pseudo-random number generator to generate this information, with the same process being followed by both sender and receiver.
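As a rough sketch of the inner (LT-style) encoding step described above: the degree distribution, symbol sizes, and seeding scheme below are simplified placeholders rather than the distribution specified for standardized Raptor codes; the point is only that a sender and receiver sharing a symbol identifier derive the same pseudo-random degree and neighbour set.

```python
import random

def lt_encoding_symbol(precode_symbols, symbol_id):
    """Produce one encoding symbol as the XOR of pseudo-randomly chosen
    pre-code output symbols.

    precode_symbols: list of equal-length bytes objects (the pre-code output).
    symbol_id: identifier transmitted with the symbol; used as a PRNG seed so
               sender and receiver reconstruct the same neighbour set.
    The degree distribution here is a toy stand-in for the real one.
    """
    rng = random.Random(symbol_id)
    degree = rng.choice([1, 2, 2, 3, 3, 4, 8])          # toy degree distribution
    degree = min(degree, len(precode_symbols))
    neighbours = rng.sample(range(len(precode_symbols)), degree)
    out = bytearray(len(precode_symbols[0]))
    for i in neighbours:
        for j, byte in enumerate(precode_symbols[i]):
            out[j] ^= byte
    return bytes(out), neighbours

# Example: given only symbol_id 7, the receiver recomputes the same neighbours.
precode_output = [bytes([i] * 16) for i in range(10)]
symbol, neighbours = lt_encoding_symbol(precode_output, symbol_id=7)
```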
In the case of non-systematic Raptor codes, the source data to be encoded is used as the input to the pre-coding stage.
In the case of systematic Raptor codes, the input to the pre-coding stage is obtained by first applying the inverse of the encoding operation that generates the first k output symbols to the source data. Thus, applying the normal encoding operation to the resulting symbols causes the original source symbols to be regenerated as the first k output symbols of the code. It is necessary to ensure that the pseudo-random processes which generate the first k output symbols define an operation which is invertible.
Two approaches are possible for decoding Raptor codes. In a concatenated approach, the inner code is decoded first, using a belief propagation algorithm, as used for the LT codes. Decoding succeeds if this operation recovers a sufficient number of symbols, such that the outer code can recover the remaining symbols using the decoding algorithm appropriate for that code.
In a combined approach, the relationships between symbols defined by both the inner and outer codes are considered as a single combined set of simultaneous equations which can be solved by the usual means, typically by Gaussian elimination .
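A rough sketch of the combined decoding approach: the XOR relationships between received symbols and unknown intermediate symbols are treated as one linear system over GF(2) and solved by Gaussian elimination. The bitmask representation and the tiny example below are illustrative simplifications; a real RaptorQ decoder exploits the specific structure of the standardized equations rather than generic elimination.

```python
def solve_xor_system(equations, num_unknowns, symbol_len):
    """Solve a system of XOR equations by Gaussian elimination over GF(2).

    equations: list of (mask, value) pairs; bit i of `mask` means unknown i
               appears in the equation, and `value` (bytes of length
               symbol_len) is the XOR of the corresponding received data.
    Returns the recovered symbols, or None if the received equations do not
    have full rank (more encoding symbols are needed).
    """
    rows = [[mask, bytearray(value)] for mask, value in equations]
    pivot_row_for = {}
    used = set()
    for col in range(num_unknowns):
        pivot = next((i for i in range(len(rows))
                      if i not in used and (rows[i][0] >> col) & 1), None)
        if pivot is None:
            return None                       # rank deficiency
        used.add(pivot)
        pivot_row_for[col] = pivot
        for i in range(len(rows)):            # eliminate this unknown elsewhere
            if i != pivot and (rows[i][0] >> col) & 1:
                rows[i][0] ^= rows[pivot][0]
                for j in range(symbol_len):
                    rows[i][1][j] ^= rows[pivot][1][j]
    return [bytes(rows[pivot_row_for[c]][1]) for c in range(num_unknowns)]

# Tiny example: recover x0 and x1 from x0^x1 and x1 alone (2-byte symbols).
recovered = solve_xor_system([(0b11, b"\x05\x0c"), (0b10, b"\x03\x08")], 2, 2)
# recovered == [b"\x06\x04", b"\x03\x08"]
```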
Raptor codes require O(symbol size) time to generate an encoding symbol from a source block, and require O(source block size) time to recover a source block from at least k encoding symbols.
The overhead is how many additional encoding symbols, beyond the number k of source symbols in the original source block, need to be received to completely recover the source block.
(Based on elementary information theory considerations, complete recovery of a source block with k source symbols is not possible if less than k encoding symbols are received.)
The recovery probability is the probability that the source block is completely recovered upon receiving a given number of random encoding symbols generated from the source block.
The RaptorQ code specified in IETF RFC 6330 offers the trade-off between recovery probability and recovery overhead described above: a decoding failure probability below 1% with zero overhead, falling to below one in a million with two symbols of overhead.
These statements hold for the entire range of k supported in IETF RFC 6330, i.e., k =1,...,56403. See IETF RFC 6330 for more details. [ 3 ]
Qualcomm, Inc. has published an IPR statement for the Raptor code specified in IETF RFC 5053, and an IPR statement for the more advanced RaptorQ code specified in IETF RFC 6330. These statements mirror the licensing commitment Qualcomm, Inc. has made with respect to the MPEG DASH standard . The MPEG DASH standard has been deployed by a wide variety of companies, including DASH Industry Forum member companies. | https://en.wikipedia.org/wiki/Raptor_code |
A rare-cutter enzyme is a restriction enzyme with a recognition sequence which occurs only rarely in a genome. An example is NotI, which cuts after the first GC of a 5'-GCGGCCGC-3' sequence; restriction enzymes with seven and eight base pair recognition sequences are often also called rare-cutter enzymes (six bp recognition sequences are much more common).
For example, rare-cutter enzymes with 7-nucleotide recognition sites cut on average once every 4^7 bp (16,384 bp), and those with 8-nucleotide recognition sites once every 4^8 bp (65,536 bp). They are used in top-down mapping to cut a chromosome into chunks of these sizes on average.
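Assuming a random genome with equal, independent base frequencies (a simplification; real genomes have biased composition), the expected spacing follows directly from the 4^n relationship quoted above:

```python
# Expected average spacing between recognition sites for an n-base cutter,
# assuming all four bases are equally likely and independent.
def expected_cut_spacing(recognition_site_length: int) -> int:
    return 4 ** recognition_site_length

for n in (6, 7, 8):
    print(f"{n}-bp site: one cut every {expected_cut_spacing(n):,} bp on average")
# 6 bp -> 4,096 bp; 7 bp -> 16,384 bp; 8 bp -> 65,536 bp
```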
| https://en.wikipedia.org/wiki/Rare-cutter_enzyme |
In planetary astronomy and astrobiology , the Rare Earth hypothesis argues that the origin of life and the evolution of biological complexity , such as sexually reproducing , multicellular organisms on Earth , and subsequently human intelligence , required an improbable combination of astrophysical and geological events and circumstances. According to the hypothesis, complex extraterrestrial life is an improbable phenomenon and likely to be rare throughout the universe as a whole. The term "Rare Earth" originates from Rare Earth: Why Complex Life Is Uncommon in the Universe (2000), a book by Peter Ward , a geologist and paleontologist, and Donald E. Brownlee , an astronomer and astrobiologist, both faculty members at the University of Washington .
In the 1970s and 1980s, Carl Sagan and Frank Drake , among others, argued that Earth is a typical rocky planet in a typical planetary system , located in a non-exceptional region of a common barred spiral galaxy . From the principle of mediocrity (extended from the Copernican principle ), they argued that the evolution of life on Earth, including human beings, was also typical, and therefore that the universe teems with complex life. Ward and Brownlee argue that planets which have all the requirements for complex life are not typical at all but actually exceedingly rare.
There is no reliable or reproducible evidence that extraterrestrial organisms of any kind have visited Earth . [ 1 ] [ 2 ] No transmissions or evidence of intelligent life have been detected or observed anywhere other than Earth in the Universe . This runs counter to the knowledge that the Universe is filled with a very large number of planets, some of which likely hold the conditions hospitable for life. Life typically expands until it fills all available niches. [ 3 ] These contradictory facts form the basis for the Fermi paradox, of which the Rare Earth hypothesis is one proposed solution.
The Rare Earth hypothesis argues that the evolution of biological complexity anywhere in the universe requires the coincidence of a large number of fortuitous circumstances, including, among others, a galactic habitable zone ; a central star and planetary system having the requisite character (i.e. a circumstellar habitable zone ); a terrestrial planet of the right mass; the advantage of one or more gas giant guardians like Jupiter and possibly a large natural satellite to shield the planet from frequent impact events; conditions needed to ensure the planet has a magnetosphere and plate tectonics ; a chemistry similar to that present in the Earth's lithosphere , atmosphere , and oceans; the influence of periodic "evolutionary pumps" such as massive glaciations and bolide impacts; and whatever factors may have led to the emergence of eukaryotic cells , sexual reproduction , and the Cambrian explosion of animal , plant , and fungi phyla . The evolution of human beings and of human intelligence may have required yet further specific events and circumstances, all of which are extremely unlikely to have happened were it not for the Cretaceous–Paleogene extinction event 66 million years ago removing dinosaurs as the dominant terrestrial vertebrates .
In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several variables must fall within narrow ranges. The universe is so vast that it might still contain many Earth-like planets, but if such planets exist, they are likely to be separated from each other by many thousands of light-years . Such distances may preclude communication among any intelligent species that may evolve on such planets, which would resolve the Fermi paradox: if extraterrestrial aliens are common, why aren't they obvious?
Rare Earth suggests that much of the known universe, including large parts of our galaxy, are "dead zones" unable to support complex life. Those parts of a galaxy where complex life is possible make up the galactic habitable zone , which is primarily characterized by distance from the Galactic Center .
The requirement for sufficient heavy elements rules out the outermost reaches of a galaxy, while radiation and gravitational hazards rule out galactic inner regions. Hence a galaxy's habitable zone may be a relatively narrow ring of adequate conditions sandwiched between its uninhabitable center and outer reaches.
Also, a habitable planetary system must maintain its favorable location long enough for complex life to evolve. A star with an eccentric (elliptical or hyperbolic) galactic orbit will pass through some spiral arms, unfavorable regions of high star density; thus a life-bearing star must have a galactic orbit that is nearly circular, with a close synchronization between the orbital velocity of the star and of the spiral arms. This further restricts the galactic habitable zone within a fairly narrow range of distances from the Galactic Center. Lineweaver et al. calculate this zone to be a ring 7 to 9 kiloparsecs in radius, including no more than 10% of the stars in the Milky Way , [ 6 ] about 20 to 40 billion stars. Gonzalez et al. [ 7 ] would halve these numbers; they estimate that at most 5% of stars in the Milky Way fall within the galactic habitable zone.
Approximately 77% of observed galaxies are spiral, [ 8 ] two-thirds of all spiral galaxies are barred, and more than half, like the Milky Way, exhibit multiple arms. [ 9 ] According to Rare Earth, our own galaxy is unusually quiet and dim (see below), representing just 7% of its kind. [ 10 ] Even so, this would still represent more than 200 billion galaxies in the known universe.
Our galaxy also appears unusually favorable in suffering fewer collisions with other galaxies over the last 10 billion years, which can cause more supernovae and other disturbances. [ 11 ] Also, the Milky Way's central black hole seems to have neither too much nor too little activity. [ 12 ]
The orbit of the Sun around the center of the Milky Way is indeed almost perfectly circular, with a period of 226 Ma (million years), closely matching the rotational period of the galaxy. However, the majority of stars in barred spiral galaxies populate the spiral arms rather than the halo and tend to move in gravitationally aligned orbits , so there is little that is unusual about the Sun's orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of the Sun takes it through a major spiral arm approximately every 100 million years. [ 13 ] Some researchers have suggested that several mass extinctions do indeed correspond with previous crossings of the spiral arms. [ 14 ]
The terrestrial example suggests that complex life requires liquid water, the maintenance of which requires an orbital distance neither too close nor too far from the central star, another scale of habitable zone or Goldilocks principle . [ 15 ] The habitable zone varies with the star's type and age.
For advanced life, the star must also be highly stable, which is typical of the middle of a star's life; the Sun is about 4.6 billion years old. Proper metallicity and size are also important to stability. The Sun has a low (0.1%) luminosity variation. To date, no solar twin star, with an exact match of the Sun's luminosity variation, has been found, though some come close. The star must also have no stellar companions, as in binary systems , which would disrupt the orbits of any planets. Estimates suggest 50% or more of all star systems are binary. [ 16 ] [ 17 ] [ 18 ] [ 19 ] Stars gradually brighten over time and it takes hundreds of millions or billions of years for animal life to evolve. The requirement for a planet to remain in the habitable zone even as its boundaries move outwards over time restricts the size of what Ward and Brownlee call the "continuously habitable zone" for animals. They cite a calculation that it is very narrow, between 0.95 and 1.15 astronomical units (one AU is the distance between the Earth and the Sun), and argue that even this may be too large because it is based on the whole zone within which liquid water can exist, and water near boiling point may be much too hot for animal life. [ 20 ]
The liquid water and other gases available in the habitable zone bring the benefit of the greenhouse effect . Even though the Earth's atmosphere contains a water vapor concentration from 0% (in arid regions) to 4% (in rainforest and ocean regions) and – as of November 2022 – only 417.2 parts per million of CO 2 , [ 21 ] these small amounts suffice to raise the average surface temperature by about 40 °C, [ 22 ] with the dominant contribution being due to water vapor.
Rocky planets must orbit within the habitable zone for life to form. Although the habitable zone of such hot stars as Sirius or Vega is wide, hot stars also emit much more ultraviolet radiation that ionizes any planetary atmosphere . Such stars may also become red giants before advanced life evolves on their planets. These considerations rule out the massive and powerful stars of type F6 to O (see stellar classification ) as homes to evolved metazoan life .
Conversely, small red dwarf stars have small habitable zones wherein planets are in tidal lock , with one very hot side always facing the star and another very cold side always facing away, and they are also at increased risk of solar flares (see Aurelia ). As such, it is disputed whether they can support life. Rare Earth proponents claim that only stars from F7 to K1 types are hospitable. Such stars are rare: G type stars such as the Sun (between the hotter F and cooler K) comprise only 9% [ 23 ] of the hydrogen-burning stars in the Milky Way.
Such aged stars as red giants and white dwarfs are also unlikely to support life. Red giants are common in globular clusters and elliptical galaxies . White dwarfs are mostly dying stars that have already completed their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their youth and middle age (though theoretically planets at much greater distances may then become habitable ).
An energy output that varies over the lifetime of the star will likely prevent life (as with Cepheid variables , for example). A sudden decrease, even if brief, may freeze the water of orbiting planets, and a significant increase may evaporate it and cause a greenhouse effect that prevents the oceans from reforming.
All known life requires the complex chemistry of metallic elements. The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the early universe: globular clusters and other stars that formed when the universe was young, stars in most galaxies other than large spirals , and stars in the outer regions of all galaxies. Metal-rich central stars capable of supporting complex life are therefore believed to be most common in the less dense regions of the larger spiral galaxies—where radiation also happens to be weak. [ 24 ]
Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small, rocky inner planets and massive outer gas giants. [ 25 ] Without the protection of such "celestial vacuum cleaner" planets, such as Jupiter, with strong gravitational pulls, other planets would be subject to more frequent catastrophic asteroid collisions. An asteroid only twice the size of the one which caused the Cretaceous–Paleogene extinction might have wiped out all complex life. [ 26 ]
Observations of exoplanets have shown that arrangements of planets similar to the Solar System are rare. Most planetary systems have super-Earths, several times larger than Earth, close to their star, whereas the Solar System's inner region has only a few small rocky planets and none inside Mercury's orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable, nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun, sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and ferrying icy building blocks into the terrestrial region of the Solar System which provided the building blocks for the rocky planets. The two giant planets then drifted out again to their present positions. In the view of Batygin and his colleagues: "The concatenation of chance events required for this delicate choreography suggest that small, Earth-like rocky planets – and perhaps life itself – could be rare throughout the cosmos." [ 27 ]
Rare Earth proponents argue that a gas giant also must not be too close to a body where life is developing. Close placement of one or more gas giants could disrupt the orbit of a potential life-bearing planet, either directly or by drifting into the habitable zone.
Newtonian dynamics can produce chaotic planetary orbits , especially in a system having large planets at high orbital eccentricity . [ 28 ]
The need for stable orbits rules out stars with planetary systems that contain large planets with orbits close to the host star (called " hot Jupiters "). It is believed that hot Jupiters have migrated inwards to their current orbits. In the process, they would have catastrophically disrupted the orbits of any planets in the habitable zone. [ 29 ] To exacerbate matters, hot Jupiters are much more common orbiting F and G class stars. [ 30 ]
The Rare Earth hypothesis argues that life requires terrestrial planets like Earth, and since gas giants lack such a surface, that complex life cannot arise there. [ 31 ]
A planet that is too small cannot maintain much atmosphere, rendering its surface temperature low and variable and oceans impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons. The core will cool faster, and plate tectonics may be brief or entirely absent. A planet that is too large will retain too dense an atmosphere, like Venus . Although Venus is similar in size and mass to Earth, its surface atmospheric pressure is 92 times that of Earth, and its surface temperature is 735 K (462 °C; 863 °F). The early Earth once had a similar atmosphere, but may have lost it in the giant impact event which formed the Moon . [ 32 ]
Rare Earth proponents argue that plate tectonics and a strong magnetic field are essential for biodiversity , global temperature regulation , and the carbon cycle . [ 33 ] The lack of mountain chains elsewhere in the Solar System is evidence that Earth is the only body which now has plate tectonics, and thus the only one capable of supporting life. [ 34 ]
Plate tectonics depend on the right chemical composition and a long-lasting source of heat from radioactive decay . Continents must be made of less dense felsic rocks that "float" on underlying denser mafic rock. Taylor [ 35 ] emphasizes that tectonic subduction zones require the lubrication of oceans of water. Plate tectonics also provide a means of biochemical cycling . [ 36 ]
Plate tectonics and, as a result, continental drift and the creation of separate landmasses would create diversified ecosystems and biodiversity , one of the strongest defenses against extinction. [ 37 ] An example of species diversification and later competition on Earth's continents is the Great American Interchange . North and Middle America drifted into South America at around 3.5 to 3 Ma. The fauna of South America had already evolved separately for about 30 million years, since Antarctica separated, but, after the merger, many species were wiped out, mainly in South America, by competing North American animals.
The Moon is unusual because the other rocky planets in the Solar System either have no satellites ( Mercury and Venus ), or only relatively tiny satellites which are probably captured asteroids ( Mars ). After Charon , the Moon is also the largest natural satellite in the Solar System relative to the size of its parent body, being 27% the size of Earth. [ 38 ]
The giant-impact theory hypothesizes that the Moon resulted from the impact of a roughly Mars -sized body, dubbed Theia , with the young Earth. This giant impact also gave the Earth its axial tilt (inclination) and velocity of rotation. [ 35 ] Rapid rotation reduces the daily variation in temperature and makes photosynthesis viable. [ 39 ] The Rare Earth hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital plane ). A planet with a large tilt will experience extreme seasonal variations in climate. A planet with little or no tilt will lack the stimulus to evolution that climate variation provides. [ citation needed ] In this view, the Earth's tilt is "just right". The gravity of a large satellite also stabilizes the planet's tilt; without this effect, the variation in tilt would be chaotic , probably making complex life forms on land impossible. [ 40 ]
If the Earth had no Moon, the ocean tides resulting solely from the Sun's gravity would be only half that of the lunar tides. A large satellite gives rise to tidal pools , which may be essential for the formation of complex life , though this is far from certain. [ 41 ]
A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the planet's crust. [ citation needed ] The impact that formed the Moon may also have initiated plate tectonics, without which the continental crust would cover the entire planet, leaving no room for oceanic crust . [ citation needed ] It is possible that the large-scale mantle convection needed to drive plate tectonics could not have emerged if the crust had a uniform composition. A further theory indicates that such a large moon may also contribute to maintaining a planet's magnetic shield by continually acting upon a metallic planetary core as dynamo, thus protecting the surface of the planet from charged particles and cosmic rays, and helping to ensure the atmosphere is not stripped over time by solar winds. [ citation needed ]
A terrestrial planet must be the right size, like Earth and Venus, in order to retain an atmosphere. On Earth, once the giant impact of Theia thinned Earth's atmosphere , other events were needed to make the atmosphere capable of sustaining life. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia. [ 42 ] The development of an ozone layer generated a protective shield against ultraviolet (UV) sunlight. [ 43 ] [ 44 ] Nitrogen and carbon dioxide are needed in a correct ratio for life to form. [ 45 ] Lightning is needed for nitrogen fixation . [ 46 ] The gaseous carbon dioxide needed for life comes from sources such as volcanoes and geysers . Carbon dioxide is needed at relatively low levels (currently at approximately 400 ppm on Earth) because at high levels it is poisonous. [ 47 ] [ 48 ] Precipitation is needed to have a stable water cycle. [ 49 ] A proper atmosphere must reduce diurnal temperature variation . [ 50 ] [ 51 ]
Regardless of whether planets with similar physical attributes to the Earth are rare or not, some argue that life tends not to evolve into anything more complex than simple bacteria without being provoked by rare and specific circumstances. Biochemist Nick Lane argues that simple cells ( prokaryotes ) emerged soon after Earth's formation, but since almost half of the planet's history had passed before they evolved into complex ones ( eukaryotes ), all of which share a common ancestor , this event can only have happened once. According to some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a bacterium expanded up to eukaryotic proportions would have tens of thousands of times less energy available to power its metabolism. Two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved into mitochondria that supplied the vast increase in available energy that enabled the evolution of complex eukaryotic life. If this incorporation occurred only once in four billion years or is otherwise unlikely, then life on most planets remains simple. [ 52 ] An alternative view is that the evolution of mitochondria was environmentally triggered, and that mitochondria-containing organisms appeared soon after the first traces of atmospheric oxygen. [ 53 ]
The evolution and persistence of sexual reproduction is another mystery in biology. The purpose of sexual reproduction is unclear, as in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction . [ 54 ] Mating types (types of gametes , according to their compatibility) may have arisen as a result of anisogamy (gamete dimorphism), or the male and female sexes may have evolved before anisogamy. [ 55 ] [ 56 ] It is also unknown why most sexual organisms use a binary mating system , [ 57 ] and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest that sexual selection drives speciation ; without it, complex life would probably not have evolved.
While life on Earth is regarded as having emerged relatively early in the planet's history, the evolution from multicellular to intelligent organisms took around 800 million years. [ 58 ] Civilizations on Earth have existed for about 12,000 years, and radio communication reaching space has existed for little more than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, in which extreme climatic variations, super volcanoes, and large meteorite impacts were absent. These events would severely harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction , caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the extinction of 95% of known species around 251.2 Ma ago. The Chicxulub impact at the Cretaceous–Paleogene boundary (~66 Ma ago) on the Yucatán peninsula in Mexico led to another mass extinction.
The following discussion is adapted from Cramer. [ 59 ] The Rare Earth equation is Ward and Brownlee's riposte to the Drake equation . It calculates N , the number of Earth-like planets in the Milky Way having complex life forms, as the product of the number of stars in the Milky Way, the average number of planets per star lying in the habitable zone, and a series of fractional factors.
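One common way to write this product, consistent with the nine fractional factors cited later in this article ( f_g , f_p , f_pm , f_i , f_c , f_l , f_m , f_j , f_me ) and offered here as a reconstruction rather than a quotation of Ward and Brownlee, is:

```latex
N = N^{*} \cdot n_{e} \cdot f_{g} \cdot f_{p} \cdot f_{pm} \cdot f_{i}
    \cdot f_{c} \cdot f_{l} \cdot f_{m} \cdot f_{j} \cdot f_{me}
```

Here N* is the number of stars in the Milky Way, n_e is the average number of planets per star lying in the habitable zone, and each f term is the fraction of systems passing one of the successive filters discussed in this article: location in the galactic habitable zone, presence of planets, rocky composition, emergence of microbial life, evolution of complex life, the portion of a planet's lifespan during which complex life survives, presence of a large moon, presence of Jovian guardian planets, and a sufficiently low number of extinction events.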
We assume N* · n_e = 5 × 10^11. The Rare Earth hypothesis can then be viewed as asserting that the product of the other nine Rare Earth equation factors, which are all fractions, is no greater than 10^−10 and could plausibly be as small as 10^−12. In the latter case, N could be as small as 0 or 1. Ward and Brownlee do not actually calculate the value of N , because the numerical values of quite a few of the factors can only be conjectured. They cannot be estimated simply because we have but one data point : the Earth, a rocky planet orbiting a G2 star in a quiet suburb of a large barred spiral galaxy , and the home of the only intelligent species we know; namely, ourselves.
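A minimal numeric check of the two bounds quoted above, using only the figures given in the text (the assumed N* · n_e and the limits on the product of the nine fractions); the variable names are illustrative:

```python
# Illustrative arithmetic only: 5e11 is the assumed N* x n_e from the text,
# and 1e-10 / 1e-12 bound the product of the nine fractional factors.
stars_times_planets = 5e11
for fraction_product in (1e-10, 1e-12):
    n_complex = stars_times_planets * fraction_product
    print(f"factor product {fraction_product:g} -> N approximately {n_complex:g}")
# Prints N of about 50 at the upper bound and about 0.5 at the lower one,
# i.e. possibly no more than one such planet in the whole galaxy.
```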
Lammer, Scherf et al. define Earth-like habitats (EHs) as rocky exoplanets within the habitable zone of complex life (HZCL) on which Earth-like N2-O2-dominated atmospheres with minor amounts of CO2 can exist.
They estimate the maximum number of EHs in the Milky Way as 2.54 (+71.64, −2.48) × 10^5, with the actual number of EHs being possibly much less than that. [ 61 ] [ 62 ] This would reduce the value of N in the Rare Earth equation accordingly.
The Rare Earth equation, unlike the Drake equation , does not factor in the probability that complex life evolves into intelligent life that discovers technology. Barrow and Tipler review the consensus among evolutionary biologists that the evolutionary path from primitive Cambrian chordates (e.g., Pikaia ) to Homo sapiens was a highly improbable event. For example, the large brains of humans have marked adaptive disadvantages, requiring as they do an expensive metabolism , a long gestation period , and a childhood lasting more than 25% of the average total life span. [ 63 ] Humans also have other improbable features.
A number of writers and scientists have published arguments in support of the Rare Earth hypothesis.
Cases against the Rare Earth hypothesis take various forms.
The hypothesis concludes, more or less, that complex life is rare because it can evolve only on the surface of an Earth-like planet or on a suitable satellite of a planet. Some biologists, such as Jack Cohen , believe this assumption too restrictive and unimaginative; they see it as a form of circular reasoning . [ 73 ] [ page needed ]
According to David Darling , the Rare Earth hypothesis is neither hypothesis nor prediction , but merely a description of how life arose on Earth. [ 74 ] In his view, Ward and Brownlee have done nothing more than select the factors that best suit their case.
What matters is not whether there's anything unusual about the Earth; there's going to be something idiosyncratic about every planet in space. What matters is whether any of Earth's circumstances are not only unusual but also essential for complex life. So far we've seen nothing to suggest there is. [ 75 ]
Critics also argue that there is a link between the Rare Earth hypothesis and the unscientific idea of intelligent design . [ 76 ]
An increasing number of extrasolar planet discoveries are being made, with 5,943 planets in 4,461 planetary systems known as of 17 April 2025. [ 77 ] Rare Earth proponents argue life cannot arise outside Sun-like systems, due to tidal locking and ionizing radiation outside the F7–K1 range. However, some exobiologists have suggested that stars outside this range may give rise to life under the right circumstances; this possibility is a central point of contention for the theory because these late-K and M category stars make up about 82% of all hydrogen-burning stars. [ 23 ]
Current technology limits the testing of important Rare Earth criteria: surface water, tectonic plates, a large moon and biosignatures are currently undetectable. Though planets the size of Earth are difficult to detect and classify, scientists now think that rocky planets are common around Sun-like stars. [ 78 ] The Earth Similarity Index (ESI) of mass, radius and temperature provides a means of measurement, but falls short of the full Rare Earth criteria. [ 79 ] [ 80 ]
Some argue that Rare Earth's estimates of rocky planets in habitable zones ( n_e in the Rare Earth equation) are too restrictive. James Kasting cites the Titius–Bode law to contend that it is a misnomer to describe habitable zones as narrow when there is a 50% chance of at least one planet orbiting within one. [ 82 ] In 2013, astronomers using the Kepler space telescope 's data estimated that about one-fifth of G-type and K-type stars ( sun-like stars and orange dwarfs ) are expected to have an Earth-sized or super-Earth -sized planet (1–2 Earths wide) close to an Earth-like orbit (receiving 0.25–4 times the stellar flux that Earth receives), [ 83 ] yielding about 8.8 billion of them for the entire Milky Way Galaxy . [ 84 ] [ 85 ] [ 86 ]
The requirement for a system to have a Jovian planet as protector (Rare Earth equation factor f_j ) has been challenged, affecting the number of proposed extinction events (Rare Earth equation factor f_me ). Kasting's 2001 review of Rare Earth questions whether a Jupiter protector has any bearing on the incidence of complex life. [ 87 ] Computer modelling, including the 2005 Nice model and 2007 Nice 2 model, yields inconclusive results in relation to Jupiter's gravitational influence and impacts on the inner planets. [ 88 ] A study by Horner and Jones (2008) using computer simulation found that while the total effect on all orbital bodies within the Solar System is unclear, Jupiter has caused more impacts on Earth than it has prevented. [ 89 ] Lexell's Comet , a 1770 near miss that passed closer to Earth than any other comet in recorded history, was known to be caused by the gravitational influence of Jupiter. [ 90 ]
Ward and Brownlee argue that for complex life to evolve (Rare Earth equation factor f_c ), tectonics must be present to generate biogeochemical cycles , and predicted that such geological features would not be found outside of Earth, pointing to a lack of observable mountain ranges and subduction . [ 92 ] There is, however, no scientific consensus on the evolution of plate tectonics on Earth. Though it is believed that tectonic motion first began around three billion years ago, [ 93 ] by this time photosynthesis and oxygenation had already begun. Furthermore, recent studies point to plate tectonics as an episodic planetary phenomenon, and that life may evolve during periods of "stagnant-lid" rather than plate tectonic states. [ 94 ]
Recent evidence also points to similar activity either having occurred or continuing to occur elsewhere. The geology of Pluto , for example, described by Ward and Brownlee as "without mountains or volcanoes ... devoid of volcanic activity", [ 24 ] has since been found to be quite the contrary, with a geologically active surface possessing organic molecules [ 95 ] and mountain ranges [ 96 ] like Tenzing Montes and Hillary Montes comparable in relative size to those of Earth, and observations suggest the involvement of endogenic processes. [ 97 ] Plate tectonics has been suggested as a hypothesis for the Martian dichotomy , and in 2012 geologist An Yin put forward evidence for active plate tectonics on Mars . [ 98 ] Europa has long been suspected to have plate tectonics [ 99 ] and in 2014 NASA announced evidence of active subduction. [ 100 ] As with Europa, analysis of strike-slip faulting and surface materials of possible endogenic origin on Jupiter's largest moon Ganymede suggests that plate tectonics has also taken place there. [ 101 ] [ 102 ] In 2017, scientists studying the geology of Charon confirmed that icy plate tectonics also operated on Pluto's largest moon. [ 103 ] Since 2017 several studies of the geodynamics of Venus have also found that, contrary to the view that the lithosphere of Venus is static, it is actually being deformed via active processes similar to plate tectonics, though with less subduction, implying that geodynamics are not a rare occurrence in Earth-sized bodies . [ 104 ] [ 105 ]
Kasting suggests that there is nothing unusual about the occurrence of plate tectonics in large rocky planets and liquid water on the surface as most should generate internal heat even without the assistance of radioactive elements. [ 87 ] Studies by Valencia [ 106 ] and Cowan [ 107 ] suggest that plate tectonics may be inevitable for terrestrial planets Earth-sized or larger, that is, Super-Earths , which are now known to be more common in planetary systems. [ 108 ]
The hypothesis that molecular oxygen , necessary for animal life, is rare and that a Great Oxygenation Event (Rare Earth equation factor f_c ) could only have been triggered and sustained by tectonics, appears to have been invalidated by more recent discoveries.
Ward and Brownlee ask "whether oxygenation, and hence the rise of animals, would ever have occurred on a world where there were no continents to erode". [ 109 ] Extraterrestrial free oxygen has recently been detected around other solid objects, including Mercury, [ 110 ] Venus, [ 111 ] Mars, [ 112 ] Jupiter's four Galilean moons , [ 113 ] Saturn's moons Enceladus, [ 114 ] Dione [ 115 ] [ 116 ] and Rhea [ 117 ] and even the atmosphere of a comet. [ 118 ] This has led scientists to speculate whether processes other than photosynthesis could be capable of generating an environment rich in free oxygen. Wordsworth (2014) concludes that oxygen generated other than through photodissociation may be likely on Earth-like exoplanets, and could actually lead to false positive detections of life. [ 119 ] Narita (2015) suggests photocatalysis by titanium dioxide as a geochemical mechanism for producing oxygen atmospheres. [ 120 ]
Since Ward & Brownlee's assertion that "there is irrefutable evidence that oxygen is a necessary ingredient for animal life", [ 109 ] anaerobic metazoa have been found that indeed do metabolise without oxygen. Spinoloricus cinziae , for example, a species discovered in the hypersaline anoxic L'Atalante basin at the bottom of the Mediterranean Sea in 2010, appears to metabolise with hydrogen, lacking mitochondria and instead using hydrogenosomes . [ 121 ] [ 122 ] Studies since 2015 of the eukaryotic genus Monocercomonoides , which lacks mitochondrial organelles, are also significant, as there are no detectable signs that mitochondria are part of the organism. [ 123 ] Since then further eukaryotes, particularly parasites , have been identified that completely lack a mitochondrial genome, such as the 2020 discovery in Henneguya zschokkei . [ 124 ] Further investigation into the alternative metabolic pathways used by these organisms appears to present further problems for the premise.
Stevenson (2015) has proposed other membrane alternatives for complex life in worlds without oxygen. [ 125 ] In 2017, scientists from the NASA Astrobiology Institute discovered the necessary chemical preconditions for the formation of azotosomes on Saturn's moon Titan, a world that lacks atmospheric oxygen. [ 126 ] Independent studies by Schirrmeister and by Mills concluded that Earth's multicellular life existed prior to the Great Oxygenation Event, not as a consequence of it. [ 127 ] [ 128 ]
NASA scientists Hartman and McKay argue that plate tectonics may in fact slow the rise of oxygenation (and thus stymie complex life rather than promote it). [ 129 ] Computer modelling by Tilman Spohn in 2014 found that plate tectonics on Earth may have arisen from the effects of complex life's emergence, rather than the other way around as the Rare Earth might suggest. The action of lichens on rock may have contributed to the formation of subduction zones in the presence of water. [ 130 ] Kasting argues that if oxygenation caused the Cambrian explosion then any planet with oxygen producing photosynthesis should have complex life. [ 131 ]
The importance of Earth's magnetic field to the development of complex life has been disputed. The origin of Earth's magnetic field remains a mystery, [ 132 ] though the presence of a magnetosphere appears to be relatively common for larger planetary mass objects, as all Solar System planets larger than Earth possess one. [ 133 ] There is increasing evidence of present or past magnetic activity in terrestrial bodies such as the Moon, Ganymede, Mercury and Mars. [ 134 ] Without sufficient measurements, present studies rely heavily on modelling methods developed in 2006 by Olson & Christensen to predict field strength. [ 135 ] Using a sample of 496 planets, such models predict Kepler-186f to be one of few of Earth size that would support a magnetosphere (though such a field around this planet has not currently been confirmed). [ 135 ] However, recent empirical evidence points to the occurrence of much larger and more powerful fields than those found in our Solar System, some of which cannot be explained by these models. [ 136 ] [ 137 ]
Kasting argues that the atmosphere provides sufficient protection against cosmic rays even during times of magnetic pole reversal and atmosphere loss by sputtering. [ 87 ] Kasting also dismisses the role of the magnetic field in the evolution of eukaryotes, citing the age of the oldest known magnetofossils . [ 138 ]
The requirement of a large moon (Rare Earth equation factor f_m ) has also been challenged. Even if it were required, such an occurrence may not be as unique as predicted by the Rare Earth Hypothesis. Work by Edward Belbruno and J. Richard Gott of Princeton University suggests that giant impactors such as those that may have formed the Moon can indeed form in planetary trojan points ( L4 or L5 Lagrangian points ), which means that similar circumstances may occur in other planetary systems. [ 139 ]
The assertion that the Moon's stabilization of Earth's obliquity and spin is a requirement for complex life has been questioned. Kasting argues that a moonless Earth would still possess habitats with climates suitable for complex life and questions whether the spin rate of a moonless Earth can be predicted. [ 87 ] Although the giant impact theory posits that the impact forming the Moon increased Earth's rotational speed to make a day about 5 hours long, the Moon has slowly " stolen " much of this speed to reduce Earth's solar day since then to about 24 hours and continues to do so: in 100 million years Earth's solar day will be roughly 24 hours 38 minutes (the same as Mars's solar day); in 1 billion years, 30 hours 23 minutes. Larger secondary bodies would exert proportionally larger tidal forces that would in turn decelerate their primaries faster and potentially increase the solar day of a planet in all other respects like Earth to over 120 hours within a few billion years. This long solar day would make effective heat dissipation for organisms in the tropics and subtropics extremely difficult in a similar manner to tidal locking to a red dwarf star. Short days (high rotation speed) cause high wind speeds at ground level. Long days (slow rotation speed) cause the day and night temperatures to be too extreme. [ 140 ]
Many Rare Earth proponents argue that the Earth's plate tectonics would probably not exist if not for the tidal forces of the Moon or the impact of Theia (prolonging mantle effects). [ 141 ] [ 142 ] The hypothesis that the Moon's tidal influence initiated or sustained Earth's plate tectonics remains unproven, though at least one study implies a temporal correlation to the formation of the Moon. [ 143 ] Evidence for the past existence of plate tectonics on planets like Mars [ 144 ] which may never have had a large moon would counter this argument, although plate tectonics may fade anyway before a moon is relevant to life. [ 141 ] [ 142 ] Kasting argues that a large moon is not required to initiate plate tectonics. [ 87 ]
Rare Earth proponents argue that simple life may be common, though complex life requires specific environmental conditions to arise. Critics consider that life could arise on a moon of a gas giant, though this is less likely if life requires volcanicity. The moon must have stresses to induce tidal heating, but not so dramatic as those seen on Jupiter's Io. However, such a moon may lie within the gas giant's intense radiation belts, which would sterilize any biodiversity before it can become established. Dirk Schulze-Makuch disputes this, hypothesizing alternative biochemistries for alien life. [ 145 ] While Rare Earth proponents argue that only microbial extremophiles could exist in subsurface habitats beyond Earth, some argue that complex life can also arise in these environments. Examples of extremophile animals such as Hesiocaeca methanicola , an animal that inhabits ocean floor methane clathrates , substances more commonly found in the outer Solar System, the tardigrades , which can survive in the vacuum of space, [ 146 ] or Halicephalobus mephisto , which exists under crushing pressure, scorching temperatures and extremely low oxygen levels 3.6 kilometres (2.2 miles) deep in the Earth's crust, [ 147 ] are sometimes cited by critics as complex life capable of thriving in "alien" environments. Jill Tarter counters the classic counterargument that these species adapted to these environments rather than arose in them, by suggesting that we cannot assume conditions for life to emerge which are not actually known. [ 148 ] There are suggestions that complex life could arise in sub-surface conditions which may be similar to those where life may have arisen on Earth, such as the tidally heated subsurfaces of Europa or Enceladus. [ 149 ] [ 150 ] Ancient hydrothermal vent ecosystems such as these support complex life on Earth, such as Riftia pachyptila , that exists completely independently of the surface biosphere. [ 151 ]
Rare biosphere refers to a large number of rare species of microbial life, i.e. bacteria , archaea and fungi , that can be found in very low concentrations in an environment. [ 1 ]
Changes in the biodiversity of an ecosystem , whether marine or terrestrial , may affect its efficiency and function. Climate change or other anthropogenic perturbations can decrease productivity and disrupt global biogeochemical cycles . The possible ramifications of such changes are not well characterized or understood, and up to a point redundancy in an ecosystem may protect it from disruption. [ 2 ]
The dynamics of microbial ecosystems are tightly coupled to biogeochemical processes. [ 3 ] For example, in the marine microbial loop , bacteria decompose organics and recycle nutrients such as nitrogen for other organisms such as phytoplankton to use. [ 3 ] A reduction in recycled nitrogen would limit the production rate of phytoplankton, in turn limiting the growth of grazers , with effects throughout the food web and nitrogen cycle . To gauge such effects, a base line of microbial diversity is needed. The species of rare biosphere can offer the gene pool that can be activated under changing conditions, thus keeping the ecosystem functional. [ 4 ] Members of the rare biosphere have been recognised as important drivers of many key ecosystem functions, for example providing bioavailable nitrogen in marine and soil environment. [ 5 ] [ 6 ]
Previous attempts to characterize the in situ abundance of different microbial species in specific environments have been made through culturing and molecular biology techniques. [ 7 ] Culturing produces a very narrow picture of some of the rarer species present, especially when studying an environment where less than 0.1% of all microbes are cultivable with standard methods. [ 7 ] [ 8 ] Molecular biology techniques, such as Sanger sequencing , result in a much broader scope but highlight the more abundant species present. [ 9 ] [ 10 ] Neither of these techniques captures all of the diversity present. The current state-of-the-art practice is the use of high-throughput sequencing techniques, pioneered by Dr. Mitchell Sogin of the Marine Biological Laboratory . This method has broadened the scope of biodiversity, with the discovery of the rare biosphere. [ 11 ] High throughput sequencing , or "tag sequencing", divides unique rRNA gene (or other target gene) tag sequences into operational taxonomic units (OTUs) based upon similarities in the DNA code of the sequenced gene region. [ 11 ] Sanger sequencing, shotgun sequencing , and tag sequencing all organize sequences into OTUs. [ 9 ] However, it is the resolution that tag sequencing provides that sets it apart from other methods, resulting from the increased efficiency in serial analysis. [ 9 ] This efficiency increase is made possible through the use of internal primer sequences resulting in restriction digest overhanging sequences. [ 9 ] Though OTUs provide a means of distinguishing the possible number of phylogenetic groups, it is not possible to deduce phylogenetic relationships based upon OTUs. Tags associated with OTUs must be cross-referenced with gene banks , in order for tags to be phylotyped and relationships established. [ 11 ]
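The grouping of tag sequences into OTUs by sequence similarity can be sketched as a simple greedy clustering. The 97% identity threshold and the assumption of equal-length, pre-aligned tags below are illustrative simplifications; real pipelines align reads, sort seeds by abundance, and use more sophisticated clustering algorithms.

```python
def cluster_otus(tags, identity_threshold=0.97):
    """Greedy clustering of equal-length tag sequences into OTUs.

    Each tag is compared with the representative (seed) of existing OTUs; it
    joins the first OTU whose seed it matches at or above the identity
    threshold, otherwise it founds a new OTU. A toy illustration only.
    """
    def identity(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    otus = []                              # list of (seed, list of members)
    for tag in tags:
        for seed, members in otus:
            if identity(tag, seed) >= identity_threshold:
                members.append(tag)
                break
        else:
            otus.append((tag, [tag]))
    return otus

reads = ["ACGTACGTAC", "ACGTACGTAA", "TTGTACGGAC", "ACGTACGTAC"]
print(len(cluster_otus(reads)))            # number of OTUs found in this toy set
```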
Tag sequencing has produced orders-of-magnitude larger estimates of the OTUs present in ecosystems, revealing a long tail on species abundance curves . [ 12 ] [ 10 ] The taxa in this long tail each account for less than 0.1% of the total abundance in a particular ecosystem. At the same time the tail represents thousands of populations accounting for most of the phylogenetic diversity in an ecosystem. This low-abundance, high-diversity group is the rare biosphere. Using this method, Sogin et al.'s study of microbial diversity in North Atlantic deep water produced an estimate of 5266 different taxa . [ 11 ] This is particularly dramatic considering that previous studies employing more traditional PCR cloning techniques have resulted in estimates of up to 500. [ 10 ]
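The "low-abundance, high-diversity" pattern can be illustrated with a short sketch that counts how many taxa fall below a rarity threshold in a table of read counts per OTU; the counts and the 0.1% threshold below are made-up illustrative values, not data from the studies cited.

```python
def rare_biosphere_fraction(otu_counts, threshold=0.001):
    """Return (number of rare OTUs, share of total reads they represent).

    otu_counts: mapping of OTU label -> number of reads observed.
    An OTU is 'rare' here if its relative abundance is below `threshold`
    (0.1% by default). Purely illustrative; real surveys use far larger tables.
    """
    total = sum(otu_counts.values())
    rare = {otu: n for otu, n in otu_counts.items() if n / total < threshold}
    return len(rare), sum(rare.values()) / total

# Toy table: a few dominant OTUs and a long tail of rare ones.
counts = {"otu_a": 60000, "otu_b": 30000, "otu_c": 9000}
counts.update({f"otu_rare_{i}": 10 for i in range(100)})
print(rare_biosphere_fraction(counts))   # many OTUs, but a tiny share of the reads
```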
Considering their low abundance, members of the rare biosphere may represent ancient and persistent taxa. [ 11 ] As these less abundant species are limited in number, viral infection and ultimately death by lysis are less likely, because viruses depend on high concentrations of host organisms to persist. [ 10 ] Additionally, being less abundant implies limited growth and a position at the smaller end of the cell size spectrum. [ 10 ] This limits the likelihood of death by ingestion, as grazers prefer larger or more active microbes.
That these taxa are "rare" now does not mean that they were rare under previous conditions in our planet's history. [ 11 ] These taxa could have been episodically abundant as a result of global changes in biogeochemical cycles or small changes in the conditions of their environment. [ 11 ] Given their persistence, under the right conditions these taxa have the potential to dominate and become the more abundant taxa. [ 11 ] Such conditions may occur on many temporal scales. It may be possible that some rare taxa dominate only during anomalous years, such as during El Niño . [ 7 ] Change in abundance may also occur on a seasonal scale. [ 7 ]
Global climate change may provide some of these rare taxa with the conditions necessary to increase in abundance. Even in their low abundance, taxa belonging to the rare biosphere may be affecting global biogeochemical cycles. For example, recent evidence implicates that a rare minority may be responsible for fixing more cumulative nitrogen than the abundant majority of microbial cells in marine environment. [ 7 ] [ 5 ]
A subtler and less direct way in which the rare biosphere may affect ecosystems, in terms of biodiversity and biogeochemical cycles, is by acting as a nearly unlimited source of genetic diversity and material. [ 7 ] [ 11 ] There is currently much discussion and investigation of how microbial communities show resilience after environmental perturbation or catastrophe, and how closely related species may possess unique and novel genetic attributes compared to near relatives. [ 11 ] The rare biosphere could be seen as a seed bank, transferring genes resulting in fitter recombinants that rise to become the dominant majority. [ 11 ]
The rare biosphere has been studied in numerous different environments, including seas, lakes, soils and even deep bedrock. [ 5 ] [ 13 ] [ 14 ] [ 6 ] [ 15 ] [ 16 ] There is some debate concerning the distribution of taxa within the rare biosphere. Taxa within this group at a given site may be in the process of dispersal. [ 7 ] [ 12 ] Studies in the Arctic seabed identified thermophilic bacteria, arriving through mechanisms of dispersal, that could not be metabolically active. [ 12 ] Once these populations, such as the thermophilic bacteria in the Arctic, reach a suitable niche they will again become metabolically active and increase in abundance. This requires that one view these populations as non-discrete, not endemic to any one particular body of water. [ 12 ]
Alternatively, studies suggest that, given the biogeography of rare taxa, the idea of the rare biosphere being the product of dispersal seems unlikely. [ 13 ] A study in the Arctic Ocean on the biogeography of the rare biosphere found that between parcels of water within that ocean, the rare biosphere presented a large amount of diversity. This suggests that populations within the rare biosphere experience evolutionary forces specific to the location in which they are found, such as selection, speciation, and extinction. [ 13 ] Also, given the fact that many rare taxa cannot be identified in gene repositories, it seems unlikely that they are abundant elsewhere. [ 13 ] | https://en.wikipedia.org/wiki/Rare_biosphere |
A rare species is a group of organisms that are very uncommon, scarce, or infrequently encountered. This designation may be applied to either a plant or animal taxon , and is distinct from the term endangered or threatened . Designation of a rare species may be made by an official body, such as a national government, state, or province. The term more commonly appears without reference to specific criteria. The International Union for Conservation of Nature does not normally make such designations, but may use the term in scientific discussion. [ 1 ]
Rarity rests on a specific species being represented by a small number of organisms worldwide, usually fewer than 10,000. However, a species having a very narrow endemic range or fragmented habitat also influences the concept. [ 2 ] [ 3 ] Almost 75% of known species can be classified as "rare". [ 4 ]
Rare species are species with small populations. Many will move into the endangered or vulnerable category if the negative factors affecting them continue to operate. Well-known examples of rare species, familiar because they are large terrestrial animals, include the Himalayan brown bear , the fennec fox , the wild Asiatic buffalo , and the hornbill .
They are not endangered yet, but classified as "at risk", [ 5 ] [ 6 ] although the frontier between these categories is increasingly difficult to draw given the general paucity of data on rare species. This is especially the case in the ocean where many 'rare' species not seen for decades may well have gone extinct unnoticed, if they are not already on the verge of extinction like the Mexican Vaquita . [ 7 ]
A species may be endangered or vulnerable, but not considered rare if it has a large, dispersed population. IUCN uses the term "rare" as a designation for species found in isolated geographical locations. Rare species are generally considered threatened because a small population size is less likely to recover from ecological disasters .
Rare plants can be classified based on the size and distribution of their populations. Some species may be rare because they consist of only a few individuals, are confined to a limited geographic area, or both. Certain rare plants are found sparsely distributed across a wide area. Others might have a large number of individuals that are concentrated in a very small area, such as a single county or canyon. The rarest plants typically have both a small number of individuals and a very limited geographic range.
Assessments of the status of rare plants are conducted using the best available data and consider various factors, including:
A rare plant's legal status can be observed through the USDA 's Plants Database.
| https://en.wikipedia.org/wiki/Rare_species |
A rare sugar is a sugar that occurs in limited quantities in nature. [ 2 ] Rare sugars can be made using enzymes; when the substrate is known, the choice of enzymes can be guided by the Izumoring strategy. [ 3 ]
Specific examples of rare sugars are:
| https://en.wikipedia.org/wiki/Rare_sugar |
Rarefaction is the reduction of an item's density, the opposite of compression . [ 1 ] Like compression, which can travel in waves ( sound waves , for instance), rarefaction waves also exist in nature. A common rarefaction wave is the area of low relative pressure following a shock wave (see picture).
Rarefaction waves expand with time (much like sea waves spread out as they reach a beach); in most cases rarefaction waves keep the same overall profile ('shape') at all times throughout the wave's movement: it is a self-similar expansion . Each part of the wave travels at the local speed of sound, in the local medium. This expansion behaviour contrasts with that of pressure increases, which get narrower with time until they steepen into shock waves.
A natural example of rarefaction occurs in the layers of Earth's atmosphere . Because the atmosphere has mass , most atmospheric matter is nearer to the Earth due to the Earth's gravitation . Therefore, air at higher layers of the atmosphere is less dense, or rarefied , relative to air at lower layers. Thus, rarefaction can refer either to a reduction in density over space at a single point of time, or a reduction of density over time for one particular area.
Rarefaction can be easily observed by compressing a spring and releasing it.
Modern construction of guitars is an example of using rarefaction in manufacturing. By forcing the reduction of density (loss of oils and other impurities) in the cellular structure of the soundboard, a rarefied guitar top produces a tonal decompression affecting the sound of the instrument, mimicking aged wood. | https://en.wikipedia.org/wiki/Rarefaction |
In ecology , rarefaction is a technique to assess species richness from the results of sampling . Rarefaction allows the calculation of species richness for a given number of individual samples, based on the construction of so-called rarefaction curves. This curve is a plot of the number of species as a function of the number of samples. Rarefaction curves generally grow rapidly at first, as the most common species are found, but the curves plateau as only the rarest species remain to be sampled. [ 1 ]
The issue that occurs when sampling various species in a community is that the larger the number of individuals sampled, the more species will be found. Rarefaction curves are created by randomly re-sampling the pool of N samples multiple times and then plotting the average number of species found in each sample (1, 2, ..., N). "Thus rarefaction generates the expected number of species in a small collection of n individuals (or n samples) drawn at random from the large pool of N samples." [ 2 ]
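A minimal sketch of this re-sampling procedure is given below (an illustration, not code from the source; the species labels and abundances in the example pool are hypothetical):

```python
# Individual-based rarefaction by repeated random subsampling, as described above.
import random

def rarefaction_curve(individuals, trials=100):
    """Average number of species found in random subsamples of size 1..N."""
    N = len(individuals)
    curve = []
    for n in range(1, N + 1):
        richness = [len(set(random.sample(individuals, n))) for _ in range(trials)]
        curve.append(sum(richness) / trials)
    return curve

# Hypothetical pool: one species label per sampled individual (N = 100, K = 5 species).
pool = ["sp1"] * 50 + ["sp2"] * 30 + ["sp3"] * 15 + ["sp4"] * 4 + ["sp5"] * 1
print(rarefaction_curve(pool, trials=200)[:10])
```

Plotting the returned averages against the subsample size reproduces the characteristic curve that rises steeply at first and then plateaus.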
The technique of rarefaction was developed in 1968 by Howard Sanders in a biodiversity assay of marine benthic ecosystems, as he sought a model for diversity that would allow him to compare species richness data among sets with different sample sizes; he developed rarefaction curves as a method to compare the shape of a curve rather than absolute numbers of species. [ 4 ]
Following initial development by Sanders, the technique of rarefaction has undergone a number of revisions. In a paper criticizing many methods of assaying biodiversity, Stuart Hurlbert described what he saw as a problem with Sanders' rarefaction method, namely that it overestimated the number of species based on sample size, and attempted to refine his methods. [ 5 ] The issue of overestimation was also dealt with by Daniel Simberloff , while other improvements in rarefaction as a statistical technique were made by Ken Heck in 1975. [ 6 ]
Today, rarefaction has grown as a technique not just for measuring species diversity, but also for understanding diversity at higher taxonomic levels. Most commonly, the number of species is sampled to predict the number of genera in a particular community; similar techniques had been used to determine this level of diversity in studies several years before Sanders quantified his individual-to-species determination of rarefaction. [ 2 ] Rarefaction techniques are used to quantify species diversity of newly studied ecosystems, including human microbiomes, as well as in applied studies in community ecology , such as understanding pollution impacts on communities and other management applications.
Deriving rarefaction:
N = total number of items
K = total number of groups
N_i = the number of items in group i (i = 1, ..., K)
M_j = the number of groups consisting of j elements
From these definitions, it therefore follows that:
K = Σ_j M_j and N = Σ_i N_i = Σ_j j·M_j.
In a rarefied sample we have chosen a random subsample of n items from the total of N items. The relevance of a rarefied sample is that some groups may now necessarily be absent from this subsample. We therefore let:
X_n = the number of groups still present in the subsample of n items.
X_n is less than K whenever at least one group is missing from this subsample. Therefore the rarefaction curve , f(n), is defined as the expected value of X_n:
f(n) = E[X_n] = K − [Σ_{i=1}^{K} C(N − N_i, n)] / C(N, n),
where C(a, b) denotes the binomial coefficient.
From this it follows that 0 ≤ f(n) ≤ K.
Furthermore, f(0) = 0, f(1) = 1, and f(N) = K.
Despite being defined at discrete values of n, these curves are most frequently displayed as continuous functions. [ 7 ]
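The same curve can be obtained in closed form from the hypergeometric expression given above, without any random resampling. The following sketch is illustrative only, with a hypothetical abundance vector:

```python
# Analytical rarefaction: f(n) = K - C(N, n)^(-1) * sum_i C(N - N_i, n)
# (math.comb requires Python 3.8+ and returns 0 when n > N - N_i, as needed here).
from math import comb

def expected_richness(counts, n):
    """Expected number of groups (species) present in a random subsample of n items."""
    N = sum(counts)   # total number of items
    K = len(counts)   # total number of groups
    expected_absent = sum(comb(N - Ni, n) for Ni in counts) / comb(N, n)
    return K - expected_absent

counts = [50, 30, 15, 4, 1]   # hypothetical abundances N_i, giving N = 100 and K = 5
print([round(expected_richness(counts, n), 2) for n in (0, 1, 50, 100)])
# The endpoints reproduce f(0) = 0, f(1) = 1 and f(N) = K stated above.
```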
Rarefaction curves are necessary for estimating species richness. Raw species richness counts, which are used to create accumulation curves, can only be compared when the species richness has reached a clear asymptote . Rarefaction curves produce smoother lines that facilitate point-to-point or full dataset comparisons.
One can plot the number of species as a function of either the number of individuals sampled or the number of samples taken. The sample-based approach accounts for patchiness in the data that results from natural levels of sample heterogeneity. However, when sample-based rarefaction curves are used to compare taxon richness at comparable levels of sampling effort, the number of taxa should be plotted as a function of the accumulated number of individuals, not accumulated number of samples, because datasets may differ systematically in the mean number of individuals per sample.
One cannot simply divide the number of species found by the number of individuals sampled in order to correct for different sample sizes. Doing so would assume that the number of species increases linearly with the number of individuals present, which is not always true.
Rarefaction analysis assumes that the individuals in an environment are randomly distributed, the sample size is sufficiently large, that the samples are taxonomically similar, and that all of the samples have been performed in the same manner. If these assumptions are not met, the resulting curves will be greatly skewed. [ 8 ]
Rarefaction only works well when no taxon is extremely rare or common [ citation needed ] , or when beta diversity is very high. Rarefaction assumes that the number of occurrences of a species reflects the sampling intensity, but if one taxon is especially common or rare, the number of occurrences will reflect the extreme abundance of that taxon rather than the intensity of sampling.
The technique does not account for specific taxa. It examines the number of species present in a given sample, but does not look at which species are represented across samples. Thus, two samples that each contain 20 species may have completely different compositions, leading to a skewed estimate of species richness.
The technique does not recognize species abundance , only species richness. A true measure of diversity accounts for both the number of species present and the relative abundance of each.
Rarefaction is unrealistic in its assumption of random spatial distribution of individuals.
Rarefaction does not provide an estimate of asymptotic richness, so it cannot be used to extrapolate species richness trends in larger samples. [ 9 ] | https://en.wikipedia.org/wiki/Rarefaction_(ecology) |
Rarobacteraceae is a monotypic Actinomycetota family. [ 2 ] [ 1 ]
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) [ 3 ] and National Center for Biotechnology Information (NCBI) [ 4 ] and the phylogeny is based on 16S rRNA-based LTP release 106 by The All-Species Living Tree Project [ 5 ]
R. faecitabidus Yamamoto et al. 1988
R. incanus Yamamoto et al. 1994
This Actinomycetota -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Rarobacteraceae |
Rasagiline , sold under the brand name Azilect among others, is a medication which is used in the treatment of Parkinson's disease . [ 2 ] [ 6 ] It is used as a monotherapy to treat symptoms in early Parkinson's disease or as an adjunct therapy in more advanced cases. [ 7 ] The drug is taken by mouth . [ 2 ]
Side effects of rasagiline include insomnia and orthostatic hypotension , among others. [ 2 ] Rasagiline acts as an inhibitor of the enzyme monoamine oxidase (MAO) and hence is a monoamine oxidase inhibitor (MAOI). [ 2 ] More specifically, it is a selective inhibitor of monoamine oxidase B (MAO-B). [ 2 ] The drug is thought to work by increasing levels of the monoamine neurotransmitter dopamine in the brain . [ 2 ] Rasagiline shows pharmacological differences from the related drug selegiline , including having no amphetamine -like metabolites , monoamine-releasing activity, or monoaminergic activity enhancer actions, which may result in clinical differences between the medications. [ 8 ] [ 9 ]
Rasagiline was approved for medical use in the European Union in 2005 [ 10 ] and in the United States in 2006. [ 2 ] [ 11 ] Generic versions of rasagiline are available. [ 12 ] [ 13 ] [ 14 ]
Rasagiline is used to treat symptoms of Parkinson's disease both alone and in combination with other drugs. It has shown efficacy in both early and advanced Parkinson's, and appears to be especially useful in dealing with non-motor symptoms like fatigue . [ 15 ] [ 16 ] [ 2 ]
Teva conducted clinical trials attempting to prove that rasagiline did not just treat symptoms, but was a disease-modifying drug —that it actually prevented the death of the dopaminergic neurons that characterize Parkinson's disease and slowed disease progression. They conducted two clinical trials, called TEMPO and ADAGIO, to try to prove this. The United States Food and Drug Administration (FDA) advisory committee rejected their claim in 2011, saying that the clinical trial results did not prove that rasagiline was neuroprotective. The main reason was that in one of the trials, the lower dose (1 mg) was effective at slowing progression, but the higher dose (2 mg) was not, contradicting standard dose-response pharmacology. [ 17 ] [ 18 ]
MAO-B inhibitors like rasagiline may improve certain non-motor symptoms in Parkinson's disease. [ 19 ] These may include depression , sleep disturbances , and pain (particularly related to motor fluctuations), but are unlikely to include cognitive or olfactory dysfunctions. [ 19 ] The effects of MAO-B inhibitors like rasagiline on fatigue , autonomic dysfunctions , apathy , and impulse control disorders in people with Parkinson's disease remain unknown. [ 19 ] Rasagiline has been reported to significantly improve quality of life in people with Parkinson's disease, but the effect sizes were trivial to small and may not be clinically meaningful. [ 19 ] It showed a large effect size relative to placebo for depression in people with Parkinson's disease. [ 19 ] In other studies, rasagiline appeared to reduce fatigue in people with Parkinson's disease. [ 19 ] [ 20 ] [ 21 ] [ 22 ] However, its effect sizes for this effect in a large trial were described as trivial. [ 19 ]
Rasagiline is available in the form of 0.5 and 1 mg oral tablets . [ 2 ] [ 12 ]
Rasagiline has not been tested in pregnant women. [ 2 ]
The FDA label contains warnings that rasagiline may cause severe hypertension or hypotension , may make people sleepy , may make motor control worse in some people, may cause hallucinations and psychotic-like behavior , may cause impulse control disorders , may increase the risk of melanoma , and upon withdrawal , may cause high fever or confusion . [ 2 ]
Side effects when the drug is taken alone include flu-like symptoms , joint pain , depression , stomach upset , headache , dizziness , and insomnia . [ 2 ] When taken with levodopa , side effects include increased movement problems , accidental injury , sudden drops in blood pressure , joint pain and swelling , dry mouth , rash , abnormal dreams and digestive problems including vomiting , loss of appetite , weight loss , abdominal pain , nausea , and constipation . [ 2 ] When taken with Parkinson's drugs other than levodopa, side effects include peripheral edema, fall, joint pain, cough, and insomnia. [ 2 ]
In a 2013 meta-analysis , none of the most frequently reported side effects of rasagiline occurred significantly more often than with placebo . [ 23 ] It was concluded that rasagiline is well-tolerated . [ 23 ]
Rasagiline has been found to produce orthostatic hypotension as a side effect. [ 2 ] Rates of orthostatic hypotension in a selection of different clinical trials have been 1.2- to 5-fold higher than those of placebo, ranging from 3.1 to 44% with rasagiline and 0.6 to 33% with placebo. [ 2 ] [ note 1 ] Orthostatic hypotension tends to be worst in the first 2 months of treatment and then tends to decrease with time. [ 2 ] Rasagiline can also cause hypotension while supine and unrelated to standing. [ 2 ] In a clinical trial, the rate of hypotension was 3.2% with rasagiline versus 1.3% with placebo. [ 2 ]
Rarely, rasagiline has been reported to induce impulse control disorders , [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] obsessive–compulsive symptoms , [ 29 ] hypersexuality , [ 30 ] [ 31 ] [ 32 ] [ 27 ] and spontaneous orgasm or ejaculation . [ 33 ] [ 34 ] [ 35 ] [ 36 ] Other rare adverse effects associated with rasagiline include pleurothotonus (Pisa syndrome), [ 37 ] [ 38 ] [ 39 ] livedo reticularis , [ 40 ] tendon rupture , [ 41 ] and hypoglycemia . [ 42 ]
Serotonin syndrome has been reported rarely with rasagiline both alone and in combination with selective serotonin reuptake inhibitors (SSRIs) like escitalopram , paroxetine , and sertraline and other MAOIs like linezolid . [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ]
A withdrawal syndrome associated with rasagiline has been reported. [ 51 ]
Rasagiline has been studied at single doses of up to 20 mg and at repeated doses of up to 10 mg/day and was well-tolerated at these doses. [ 52 ] [ 53 ] [ 54 ] However, in a dose-escalation study with concomitant levodopa therapy, a dosage of 10 mg/day rasagiline was associated with cardiovascular side effects including hypertension and orthostatic hypotension in some people. [ 2 ] The symptoms of rasagiline overdose may be similar to the case of non-selective MAOIs. [ 2 ] Onset of symptoms may be delayed by 12 hours and may not peak for 24 hours. [ 2 ] A variety of different symptoms may occur and the central nervous system and cardiovascular system are prominently involved. [ 2 ] Death may result and immediate hospitalization is warranted. [ 2 ] Serotonin syndrome has occurred with rasagiline overdose and body temperature should be monitored closely. [ 2 ] There is no specific antidote for overdose and treatment is supportive and based on symptoms. [ 2 ]
Rasagiline is contraindicated with known serotonergic agents like selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), tetracyclic antidepressants (TeCAs), triazolopyridines or serotonin antagonists and reuptake inhibitors (SARIs) like trazodone , and other monoamine oxidase inhibitors (MAOIs), as well as meperidine (pethidine), tramadol , methadone , propoxyphene , dextromethorphan , St. John's wort , and cyclobenzaprine , due to potential risk of serotonin syndrome . [ 1 ] [ 2 ] However, the risk appears to be low, based on a large study of 1,504 people which looked for serotonin syndrome in people with Parkinson's disease who were treated with rasagiline plus antidepressants, rasagiline without antidepressants, or antidepressants plus Parkinson's drugs other than either rasagiline or selegiline, and in which no cases were identified. [ 15 ]
There is a risk of psychosis or bizarre behavior if rasagiline is used with dextromethorphan. [ 2 ]
There is a risk of non-selective MAO inhibition and hypertensive crisis if rasagiline is used with other MAOIs. [ 2 ]
Rasagiline may have a risk of hypertensive crisis in combination with sympathomimetic agents such as amphetamines , ephedrine , epinephrine , isometheptene , and pseudoephedrine . [ 1 ] However, based on widespread clinical experience with the related selective MAO-B inhibitor selegiline , occasional use of over-the-counter sympathomimetics like pseudoephedrine appears to pose minimal risk of hypertensive crisis. [ 1 ] In any case, the combination of sympathomimetics with MAO-B inhibitors like rasagiline and selegiline should be undertaken with caution. [ 1 ]
Parkinson's disease is characterized by the death of cells that produce dopamine , a neurotransmitter . An enzyme called monoamine oxidase (MAO) breaks down neurotransmitters. MAO has two forms, MAO-A and MAO-B . MAO-B is involved in the metabolism of dopamine . Rasagiline prevents the breakdown of dopamine by irreversibly binding to MAO-B. Dopamine is therefore more available, somewhat compensating for the diminished quantities made in the brains of people with Parkinson's disease. [ 15 ]
Rasagiline acts as a selective and potent irreversible inhibitor of the monoamine oxidases (MAO) enzymes monoamine oxidase B (MAO-B) and monoamine oxidase A (MAO-A). [ 2 ] It is selective for inhibition of MAO-B over MAO-A, but can also inhibit MAO-A at high doses or concentrations. [ 1 ] [ 2 ] MAO-B is involved in the metabolism of the monoamine neurotransmitter dopamine in the body and brain. [ 1 ] [ 2 ] By inhibiting MAO-B, rasagiline is thought to increase dopamine levels. [ 1 ] [ 2 ] In the case of Parkinson's disease , increased dopamine levels in the striatum are thought to be responsible for rasagiline's therapeutic effectiveness in treating the condition. [ 1 ] [ 2 ]
Rasagiline inhibits platelet MAO-B activity with single doses by 35% one hour after 1 mg, 55% after 2 mg, 79% after 5 mg, and 99% after 10 mg in healthy young people. [ 1 ] [ 55 ] [ 2 ] [ 54 ] With all dose levels, maximum inhibition is maintained for at least 48 hours after the dose. [ 1 ] [ 54 ] With repeated doses, rasagiline reaches greater than 99% platelet MAO-B inhibition after 6 days of 2 mg/day, 3 days of 5 mg/day, and 2 days of 10 mg/day. [ 1 ] [ 55 ] [ 54 ] Similarly, repeated administration of 0.5, 1, and 2 mg/day rasagiline resulted in complete MAO-B inhibition. [ 2 ] Clinically relevant inhibition of MAO-B is thought to require 80% inhibition and above. [ 1 ] Following the last dose, platelet MAO-B levels remain significantly inhibited for 7 days and return to baseline after 2 weeks. [ 1 ] [ 54 ] In people with Parkinson's disease, rasagiline at a dose of 1 mg/day achieved near-complete inhibition of platelet MAO-B after 3 days of dosing. [ 1 ] The recommended dosing schedule of rasagiline in Parkinson's disease (1 mg/day) has been described as somewhat questionable and potentially excessive from a pharmacological standpoint. [ 56 ]
The half-time for recovery of brain MAO-B following discontinuation of an MAO-B inhibitor (specifically selegiline) has been found to be approximately 40 days. [ 1 ] Similarly, recovery of brain MAO-B following rasagiline discontinuation was gradual and occurred over 6 weeks. [ 57 ] [ 58 ] The clinical effectiveness of rasagiline in Parkinson's disease has been found to persist during a 6-week washout phase with discontinuation of the medication. [ 1 ]
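A minimal illustrative model (not from the sources cited above) ties these numbers together: if brain MAO-B activity recovers with first-order kinetics and the roughly 40-day half-time quoted above, then after a 6-week washout only about half of the baseline activity has returned, consistent with the persistence of clinical effect described here. The first-order form is an assumption made purely for illustration.

```python
# Illustrative first-order recovery of brain MAO-B activity after stopping an
# irreversible inhibitor, assuming the ~40-day recovery half-time quoted in the text.
def fraction_recovered(days, half_time_days=40.0):
    """Fraction of baseline enzyme activity restored after a given washout period."""
    return 1.0 - 0.5 ** (days / half_time_days)

for d in (7, 14, 28, 42):
    print(f"day {d:2d}: {fraction_recovered(d):.0%} of baseline MAO-B activity restored")
```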
Rasagiline is about 30 to 100 times more potent in inhibiting MAO-B than MAO-A in vitro and is about 17 to 65 times more potent in inhibiting MAO-B over MAO-A in vivo in rodents. [ 1 ] Rasagiline does not importantly potentiate the pressor effects of tyramine challenge in humans, indicating that it is selective for MAO-B inhibition and does not meaningfully inhibit MAO-A. [ 1 ] [ 2 ] It is expected that at sufficiently high doses rasagiline would eventually become non-selective and additionally inhibit MAO-A in humans. [ 1 ] [ 2 ] However, it is unknown what dose threshold would be required for this to occur. [ 1 ]
Rasagiline is the R (+)-enantiomer of AGN-1135, a racemic mixture of rasagiline (TVP-1012) and the S (–)-enantiomer (TVP-1022). [ 55 ] [ 1 ] Virtually all of the MAO-inhibiting activity of AGN-1135 lies in the R (+)-enantiomer rasagiline, with this enantiomer having 1,000-fold greater inhibitory potency of MAO-B than the S (–)-enantiomer. [ 55 ] [ 1 ] In addition, the S (–)-enantiomer is poorly selective for MAO-B over MAO-A. [ 1 ] As a result, the purified R (+)-enantiomer rasagiline was the form of the compound advanced for clinical development. [ 55 ] [ 1 ]
Selegiline was the first selective MAO-B inhibitor. [ 59 ] Selegiline and rasagiline have similar selectivity for inhibition of MAO-B over MAO-A. [ 8 ] [ 60 ] However, rasagiline is 5- to 10-fold more potent than selegiline at inhibiting MAO-B, which results in the former being used at lower doses clinically than the latter (1 mg/day versus 5–10 mg/day, respectively). [ 8 ] [ 1 ] [ 60 ] In addition, selegiline is metabolized into levomethamphetamine and levoamphetamine . [ 61 ] These metabolites induce the release of norepinephrine and dopamine , have sympathomimetic and psychostimulant effects, and may contribute to the effects and side effects of selegiline. [ 8 ] [ 62 ] In contrast to selegiline, rasagiline does not convert into metabolites with amphetamine-like effects. [ 1 ] The amphetamine metabolites of selegiline may contribute to significant clinical differences between selegiline and rasagiline. [ 8 ]
Rasagiline metabolizes into ( R )-1-aminoindan which has no amphetamine-like effects and shows neuroprotective properties in cells and in animal models. [ 10 ]
Selective MAO-B inhibitors including rasagiline and selegiline have been found to increase dopamine levels in the striatum in rats in vivo . [ 63 ] [ 64 ] [ 65 ] It has been theorized that this might be due to strong inhibition of the metabolism of β-phenylethylamine , which is an endogenous MAO-B substrate that has monoaminergic activity enhancer and norepinephrine–dopamine releasing agent actions. [ 63 ] [ 64 ] [ 65 ] [ 66 ] [ 67 ] β-Phenylethylamine has been described as "endogenous amphetamine" and its brain levels are dramatically increased (10- to 30-fold) by MAO-B inhibitors like selegiline. [ 68 ] [ 69 ] [ 67 ] Elevation of β-phenylethylamine may be involved in the effects of MAO-B inhibitors in the treatment of Parkinson's disease. [ 63 ] [ 64 ] [ 65 ] [ 67 ]
In 2021, it was discovered that MAO-A is solely or almost entirely responsible for striatal dopamine catabolism in the rodent brain and that MAO-B is not importantly involved. [ 70 ] [ 71 ] [ 72 ] In contrast, MAO-B appears to mediate tonic γ-aminobutyric acid (GABA) synthesis from putrescine in the striatum, a minor and alternative metabolic pathway of GABA synthesis, and this synthesized GABA in turn inhibits dopaminergic neurons in this brain area. [ 70 ] [ 71 ] [ 72 ] [ 73 ] MAO-B specifically mediates the transformations of putrescine into γ-aminobutyraldehyde (GABAL or GABA aldehyde) and N -acetylputrescine into N -acetyl-γ-aminobutyraldehyde ( N -acetyl-GABAL or N -acetyl-GABA aldehyde), metabolic products that can then be converted into GABA via aldehyde dehydrogenase (ALDH) (and an unknown deacetylase enzyme in the case of N -acetyl-GABAL). [ 73 ] [ 74 ] [ 71 ] [ 72 ] These findings may warrant a rethinking of the pharmacological actions of MAO-B inhibitors like selegiline and rasagiline in the treatment of Parkinson's disease. [ 70 ] [ 71 ] [ 72 ]
Rasagiline is selective for inhibition of MAOs over interactions with other proteins , including α-adrenergic receptors , β-adrenergic receptors , muscarinic acetylcholine receptors , and other targets . [ 1 ] [ 60 ]
The major metabolite of rasagiline, ( R )-1-aminoindan , is either devoid of MAO inhibition or shows only weak inhibition of MAO-B. [ 1 ] [ 56 ] It also has no amphetamine -like activity. [ 1 ] [ 56 ] However, 1-aminoindan is not lacking in pharmacological activity. [ 1 ] [ 56 ] Like rasagiline, 1-aminoindan shows neuroprotective activity in some experimental models. [ 1 ] [ 56 ] In addition, 1-aminoindan has been found to enhance striatal dopaminergic neurotransmission and improve motor function independent of MAO inhibition in animal models of Parkinson's disease. [ 56 ]
2-Aminoindan , a closely related positional isomer of 1-aminoindan, is known to inhibit the reuptake and induce the release of dopamine and norepinephrine and to produce psychostimulant -like effects in rodents, albeit with lower potency than amphetamine , but rasagiline does not metabolize into this compound. [ 75 ] [ 76 ] 1-Aminoindan has been found to inhibit the reuptake of norepinephrine 28-fold less potently than 2-aminoindan and to inhibit the reuptake of dopamine 300-fold less potently than 2-aminoindan, with IC 50 ( half maximal inhibitory concentration ) values for dopamine reuptake inhibition in one study of 0.4 μM for amphetamine , 3.3 μM for 2-aminoindan, and 1 mM for 1-aminoindan. [ 76 ] [ 77 ] [ 78 ] In contrast to 2-aminoindan, which increased locomotor activity in rodents (+49%), 1-aminoindan suppressed locomotor activity (–69%). [ 76 ] On the other hand, 1-aminoindan has been found to enhance the psychostimulant-like effects of amphetamine in rodents. [ 77 ]
Whereas selegiline is a catecholaminergic activity enhancer , which may be mediated by agonism of the TAAR1 , rasagiline does not possess this action. [ 66 ] [ 79 ] [ 9 ] [ 80 ] Instead, rasagiline actually antagonizes selegiline's effects as a catecholaminergic activity enhancer, which may be mediated by TAAR1 antagonism . [ 9 ]
Rasagiline has been reported to directly bind to and inhibit glyceraldehyde-3-phosphate dehydrogenase (GAPDH). [ 8 ] [ 81 ] This might play a modulating role in its clinical effectiveness for Parkinson's disease. [ 8 ] [ 81 ] Selegiline also binds to and inhibits GAPDH. [ 8 ]
Rasagiline has been found to bind reversibly to α-synuclein , a major protein involved in the pathophysiology of Parkinson's disease, and this action might be neuroprotective . [ 82 ] [ 83 ] [ 84 ]
Rasagiline is rapidly absorbed from the gastrointestinal tract with oral administration and has approximately 36% absolute bioavailability . [ 1 ] [ 2 ] The peak and area-under-the-curve levels of rasagiline are linear and dose-proportional over a dose range of 0.5 to 10 mg. [ 1 ] [ 2 ] The time to peak levels of rasagiline is 0.5 to 0.7 hours and steady-state peak levels are on average 8.5 ng/mL. [ 1 ] [ 2 ]
At steady-state, the time to peak levels of rasagiline's major metabolite ( R )-1-aminoindan is 2.1 hours, its peak levels are 2.6 ng/mL, and its area-under-the-curve levels are 10.1 ng/h/mL. [ 1 ]
Taking rasagiline with food (as a high-fat meal) increases peak levels by approximately 60% and area-under-the-curve levels by approximately 20%, whereas time to peak levels is unchanged. [ 1 ] [ 2 ] Because exposure to rasagiline is not substantially modified, rasagiline can be taken with or without food. [ 1 ] [ 2 ]
The mean volume of distribution of rasagiline is 87 L or 182 to 243 L depending on the source. [ 1 ] [ 2 ] It readily crosses the blood–brain barrier and enters the central nervous system . [ 1 ]
The plasma protein binding of rasagiline is 60 to 70% or 88 to 94% depending on the source. [ 1 ] [ 2 ] In the case of the latter range, 61 to 63% of binding was to albumin . [ 2 ]
Rasagiline is extensively metabolized in the liver . [ 1 ] [ 2 ] It is metabolized primarily by hepatic N -dealkylation via the cytochrome P450 enzyme CYP1A2 which forms the major metabolite ( R )-1-aminoindan . [ 1 ] [ 2 ] [ 53 ] It is also metabolized by hydroxylation via cytochrome P450 enzymes to form 3-hydroxy- N -propargyl-1-aminoindan (3-OH-PAI) and 3-hydroxy-1-aminoindan (3-OH-AI). [ 2 ] Rasagiline and its metabolites also undergo conjugation via glucuronidation . [ 2 ]
Use of rasagiline should be monitored carefully in people taking other drugs that inhibit or induce CYP1A2. [ 2 ] [ 85 ] Variants in CYP1A2 have been found to modify exposure to rasagiline in some studies but not others. [ 85 ] [ 86 ] Tobacco smoking , a known inhibitor of CYP1A2, did not modify rasagiline exposure. [ 85 ] Drug transporters may be more important in influencing the pharmacokinetics of rasagiline than metabolizing enzymes. [ 86 ]
Exposure to rasagiline is increased in people with hepatic impairment . [ 1 ] [ 2 ] In those with mild hepatic impairment, peak levels of rasagiline are increased by 38% and area-under-the-curve levels by 80%, whereas in people with moderate hepatic impairment, peak levels are increased by 83% and area-under-the-curve levels by 568%. [ 1 ] [ 2 ] As a result, the dosage of rasagiline should be halved to 0.5 mg/day in people with mild hepatic impairment and rasagiline is considered to be contraindicated in people with moderate to severe hepatic impairment. [ 2 ]
Rasagiline is eliminated primarily in urine (62%) and to a much lesser extent in feces (7%). [ 2 ] Rasagiline is excreted unchanged in urine at an amount of less than 1%. [ 1 ] Hence, it is almost completely metabolized prior to excretion. [ 2 ]
The elimination half-life of rasagiline is 1.34 hours. [ 1 ] At steady-state , its half-life is 3 hours. [ 2 ] As rasagiline acts as an irreversible inhibitor of MAO-B, its actions and duration of effect are not dependent on its half-life or sustained concentrations in the body. [ 1 ] [ 2 ]
The oral clearance of rasagiline is 94.3 L/h and is similar to normal liver blood flow (90 L/h). [ 1 ] This indicates that non-hepatic mechanisms are not significantly involved in the elimination of rasagiline. [ 1 ]
Moderate renal impairment did not modify exposure to rasagiline, whereas that of ( R )-1-aminoindan was increased by 1.5-fold. [ 2 ] Since ( R )-1-aminoindan is not an MAO inhibitor, mild to moderate renal impairment does not require dosage adjustment of rasagiline. [ 2 ] No data are available in the case of severe or end-stage renal impairment. [ 2 ]
Rasagiline, also known as ( R )- N -propargyl-1-aminoindan and by its former developmental code name TVP-1012, is a secondary cyclic benzylamine propargylamine . [ 1 ] [ 55 ] It is the R (+)- enantiomer of the chiral racemic compound AGN-1135 ( N -propargyl-1-aminoindan), whereas the S (–)-enantiomer is TVP-1022 (( S )- N -propargyl-1-aminoindan). [ 1 ] [ 55 ] Rasagiline is a potent and selective MAO-B inhibitor , whereas TVP-1022 is a very weak and poorly selective MAO inhibitor. [ 1 ] [ 55 ]
Both the hydrochloride and mesylate salts of rasagiline were studied and were found to have similar pharmacological , pharmacokinetic , and toxicological profiles. [ 1 ] However, the mesylate salt of rasagiline was ultimately selected for its use as a pharmaceutical drug due to favorable chemical stability . [ 1 ] [ 2 ]
The propargyl moiety is essential in the pharmacodynamics of rasagiline. [ 1 ] It binds covalently and irreversibly with the flavin adenine dinucleotide (FAD) moiety of the MAO enzyme . [ 1 ] The selectivity of rasagiline for MAO-B over MAO-A depends on the maintenance of a distance of no more than two carbon atoms between the aromatic ring and the N -propargyl group. [ 1 ] The propargyl group of rasagiline is also essential for its neuroprotective and antiapoptopic actions, which are independent of its MAO inhibition. [ 1 ]
Rasagiline is closely structurally related to selegiline ( R (–)- N -propargylmethamphetamine). [ 1 ] However, in contrast to selegiline, rasagiline is not a substituted amphetamine and is instead a 1-aminoindan derivative . [ 1 ] The chemical structures of the amphetamines and aminoindans are very similar. [ 87 ] However, whereas selegiline metabolizes into levomethamphetamine and levoamphetamine and can produce amphetamine -like effects, rasagiline does not do so. [ 1 ] [ 75 ] Instead, it metabolizes into ( R )-1-aminoindan (TVP-136) and has no such actions. [ 1 ] [ 75 ] [ 88 ] [ 89 ]
SU-11739 (AGN-1133; N -methyl- N -propargyl-1-aminoindan), the N - methylated analogue of rasagiline, is also an MAO-B-preferring MAOI. [ 55 ] [ 90 ] [ 91 ] However, it is less selective for inhibition of MAO-B over MAO-A than rasagiline. [ 90 ] [ 91 ] Another structurally related selective MAO-B inhibitor, ladostigil ( N -propargyl-(3 R )-aminoindan-5-yl- N -propylcarbamate; TV-3326), was developed from structural modification of rasagiline and additionally acts as an acetylcholinesterase inhibitor due to its carbamate moiety. [ 1 ]
Rasagiline and its metabolite ( R )-1-aminoindan are structurally related to 2-aminoindan and derivatives like 5,6-methylenedioxy-2-aminoindane (MDAI), 5,6-methylenedioxy- N -methyl-2-aminoindane (MDMAI), and 5-iodo-2-aminoindane (5-IAI). [ 75 ]
AGN-1135 , the racemic form of the drug, was invented by Aspro Nicholas in the early 1970s. Moussa B. H. Youdim identified it as a potential drug for Parkinson's disease, and working with collaborators at Technion – Israel Institute of Technology in Israel and the drug company, Teva Pharmaceuticals , identified the R-isomer as the active form of the drug. [ 92 ] Teva brought it to market in partnership with Lundbeck in the European Union and Eisai in the United States and elsewhere.
Prior to the discovery of rasagiline, a closely related analogue called SU-11739 (AGN-1133; J-508; N -methyl- N -propargyl-1-aminoindan) was patented in 1965. [ 93 ] Initially, the N -methyl group was considered necessary, the agent being regarded as a ring-cyclized analogue of pargyline with about 20 times its potency. [ 94 ] However, the N -methyl compound was a non-selective MAOI. [ 91 ] In addition, SU-11739 has been reported to have strong catecholamine-releasing actions. [ 95 ]
Racemic rasagiline was discovered and patented by Aspro Nicholas in the 1970s as a drug candidate for treatment of hypertension . [ 96 ]
Moussa B. H. Youdim was involved in developing selegiline as a drug for Parkinson's, in collaboration with Peter Reiderer. [ 97 ] He called the compound AGN 1135. [ 98 ] In 1996 Youdim, in collaboration with scientists from Technion and the US National Institutes of Health , and using compounds developed with Teva Pharmaceuticals , published a paper in which the authors wrote that they were inspired by the racemic nature of deprenyl and the greater activity of one of its stereoisomers, L-deprenyl, which became selegiline , to explore the qualities of the isomers of the Aspro compound, and they found that the R-isomer had almost all the activity; this is the compound that became rasagiline. [ 98 ] They called the mesylate salt of the R-isomer TVP-1012 and the hydrochloride salt, TVP-101. [ 98 ]
Teva and Technion filed patent applications for this racemically pure compound, methods to make it, and methods to use it to treat Parkinson's disease and other disorders, and Technion eventually assigned its rights to Teva. [ 96 ]
Teva began development of rasagiline, and by 1999 was in Phase III trials, and entered into a partnership with Lundbeck in which Lundbeck agreed to share the costs and obtained the joint right to market the drug in the European Union. [ 99 ] In 2003, Teva partnered with Eisai , giving Eisai the right to jointly market the drug for Parkinson's in the US, and to co-develop and co-market the drug for Alzheimer's disease and other neurological diseases. [ 100 ]
It was approved by the European Medicines Agency for Parkinson's in 2005 [ 10 ] and in the United States in 2006. [ 11 ]
Following its approval, rasagiline was described by some authors as a " me-too drug " that offered nothing new in terms of effectiveness and tolerability compared to selegiline. [ 101 ] [ 102 ] However, others have contended that rasagiline shows significant differences from and improvements over selegiline, like its lack of amphetamine metabolites and associated monoamine releasing agent effects, which may improve tolerability and safety . [ 101 ] [ 8 ] Conversely, others have maintained that rasagiline may be less efficacious than selegiline due to its lack of catecholaminergic activity enhancer actions. [ 66 ] [ 9 ] [ 103 ] [ 104 ]
Rasagiline is the generic name of the drug and its INN (International Nonproprietary Name) and USAN (United States Adopted Name). [ 105 ] [ 106 ] It is also known by its former developmental code name TVP-1012 . [ 55 ] Rasagiline is marketed under the brand name Azilect, among others. [ 2 ] [ 12 ]
Lower-cost generic versions of rasagiline are available. [ 12 ] [ 13 ] [ 14 ]
Rasagiline was under development for the treatment of Alzheimer's disease . [ 107 ] However, development was discontinued. [ 107 ]
Rasagiline was tested for efficacy in people with multiple system atrophy in a large randomized, placebo-controlled, double-blind disease-modification trial; the drug failed. [ 16 ] [ 108 ]
Rasagiline has been reported to improve symptoms in people with freezing gait . [ 109 ] [ 110 ]
Rasagiline has been studied in the treatment of amyotrophic lateral sclerosis (ALS; Lou Gehrig's disease). [ 111 ] [ 112 ] [ 113 ] [ 114 ]
Rasagiline has been described as an emerging potential antidepressant . [ 115 ] MAO-B inhibitors have been found to reduce depressive symptoms in people with Parkinson's disease with a small effect size . [ 116 ] [ 19 ] However, rasagiline does not appear to have been studied in the treatment of depression in people without Parkinson's disease [ 117 ] and it has been neither developed nor approved for the treatment of depression. [ 107 ] In an animal study , selegiline was effective in models of antidepressant -like activity, whereas rasagiline was ineffective. [ 118 ] [ 119 ] The antidepressant effects of selegiline in animals appear to be independent of monoamine oxidase inhibition and may be related to its catecholaminergic activity enhancer (CAE) activity, which rasagiline lacks. [ 118 ] [ 119 ]
Rasagiline has not been studied in the treatment of psychostimulant addiction as of 2015. [ 120 ]
Rasagiline has been reported to improve restless legs syndrome (RLS). [ 121 ] [ 122 ] [ 123 ] | https://en.wikipedia.org/wiki/Rasagiline |
Rasaratna Samuccaya ( Devanagari : रसरत्न समुच्चय) is an Indian Sanskrit treatise on alchemy . The text is dated to between the 13th [ 1 ] and 16th centuries CE. [ 2 ]
The text contains detailed descriptions of various complex metallurgical processes, [ 3 ] [ 4 ] as well as descriptions of how to set up and equip a laboratory and other topics concerning Indian alchemy . It is a work that synthesises the writings and opinions of several earlier authors and presents a coherent account of medieval Indian alchemy.
Among the diverse scientific content of this text is: [ 5 ]
This article about an India -related book is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Rasaratna_Samuchaya |
The Raschig process for the production of hydroxylamine is one of three chemical processes developed by German chemist Friedrich Raschig . The main step in this process, patented by Raschig in 1887, is the reduction of nitrite with bisulfite to hydroxylamine disulfonate , which is hydrolysed to hydroxylammonium sulfate . [ 1 ] [ 2 ] Most of the hydroxylamine produced is used in the manufacture of caprolactam , the precursor to the polymer Nylon 6 . [ 3 ]
The commercially used Raschig process consists of the following steps: [ 3 ]
This chemical process -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Raschig_hydroxylamine_process |
A Raschig ring is a piece of tube, approximately equal in length and diameter, used in large numbers as a packed bed within columns for distillations and other chemical engineering processes. They are usually ceramic, metal, or glass and provide a large surface area within the volume of the column for interaction between liquid and gas vapours.
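As a rough, illustrative estimate of the interfacial area such a packing provides (not taken from the source; the ring dimensions and bed void fraction below are hypothetical, typical-order values), the geometric surface of a single ring with equal height and diameter can be scaled to a packed bed:

```python
# Geometric surface area offered by a bed of Raschig rings, per unit column volume.
import math

def ring_surface_area(d_outer, wall, height):
    """Outer and inner cylindrical surfaces plus the two annular end faces (m^2)."""
    d_inner = d_outer - 2 * wall
    ends = 2 * (math.pi / 4) * (d_outer**2 - d_inner**2)
    return math.pi * height * (d_outer + d_inner) + ends

def ring_solid_volume(d_outer, wall, height):
    d_inner = d_outer - 2 * wall
    return (math.pi / 4) * (d_outer**2 - d_inner**2) * height

d, t = 0.025, 0.003           # hypothetical 25 mm ring with a 3 mm wall
void_fraction = 0.73          # assumed bed voidage, typical order of magnitude
rings_per_m3 = (1 - void_fraction) / ring_solid_volume(d, t, d)
specific_area = rings_per_m3 * ring_surface_area(d, t, d)
print(f"~{specific_area:.0f} m^2 of packing surface per m^3 of column")
```

With these assumed numbers the estimate comes out around 200 m²/m³, the order of magnitude usually associated with packings of this size (compare the 60–440 m²/m³ range quoted later for Białecki rings).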
Raschig rings are named after their inventor, German chemist Friedrich Raschig , [ 1 ] [ 2 ] who patented them in 1914. [ 3 ] [ 4 ]
They form what is known as random packing , and enabled Raschig to perform distillations of much greater efficiency than his competitors using fractional distillation columns with trays. [ 1 ]
In a distillation column, the reflux or condensed vapour runs down the column, covering the surfaces of the rings, while vapour from the reboiler goes up the column. As the vapour and liquid pass each other countercurrently in a small space, they tend toward equilibrium. Thus, less-volatile material tends to go downward, and more-volatile material upward.
They are also used for devices where gas and liquid are put in contact for purposes of gas absorption, stripping , or chemical reaction, and as a support for biofilms in biological reactors.
Raschig rings made from borosilicate glass are sometimes employed in the handling of nuclear materials. They are used inside vessels and tanks containing solutions of fissile material, for example solutions of enriched uranyl nitrate . There they act as neutron absorbers to prevent a criticality accident . [ 5 ]
Given the success of the Raschig ring, there have been other forms developed to either improve upon it, or to avoid patents for particular designs.
The Pall-Ring, commonly spelled as Pall ring, developed by Wilhelm Pfannmüller of BASF during World War II, [ 6 ] attempts to increase the useful aspects of packing, by giving an increased number of edges to disrupt flow, while also reducing the volume taken up by the ring packing medium itself. Rather than using a solid-walled tube, the Pall ring resembles an open basket structure of thin bars. These form both a tube and also a radial structure of cross bars. [ 7 ] Pall rings may be injection moulded of plastics, moulded of ceramics or press-formed from metal sheet.
The Raschig Super Ring represents a further development of the same concepts behind the Pall ring. It optimises the production of turbulent film-type flows and prevents the formation of drops. [ 8 ] The 'rings' no longer resemble rings but are pressed from metal sheet in the form of wave shapes of narrow strips. Super rings appeared in 1995 and have been developed through several improved generations since. [ 9 ]
The Bialecki ring, developed by the same Pfannmüller as the Pall ring and first patented by him in 1944, [ 10 ] is mistakenly named after the Polish chemical engineer from Kraków Zbigniew Białecki. Like the Pall rings, they are an improved version of Raschig rings. The rings may be injection moulded of plastics or press-formed from metal sheet without welding. Specific surface area of filling ranges between 60 and 440 m²/m³. [ 11 ] Advantages of Białecki rings include: | https://en.wikipedia.org/wiki/Raschig_ring
The Raschig–Hooker process is a chemical process for the production of chlorobenzene and phenol . [ 1 ] [ 2 ]
The Raschig–Hooker process was patented by Friedrich Raschig , a German chemist and politician also known for the Raschig process , the Olin Raschig process and the Raschig ring . [ 3 ] He first began to use this reaction in 1891 in order to manufacture phenol.
The main steps in this process are the production of chlorobenzene from benzene , hydrochloric acid and oxygen , and the subsequent hydrolysis of chlorobenzene to phenol. [ 4 ] The first step uses either a copper or iron chloride catalyst and exposes the materials to air at 200–250 °C. [ 5 ] [ 6 ] [ 7 ] [ 8 ] In the second step, the resulting chlorobenzene is introduced to steam at 450 °C over a silicon catalyst that hydrolyses the chlorobenzene, giving phenol and hydrogen chloride that can then be recycled back to the first step. [ 6 ] [ 7 ] Due to the two step nature, the Raschig–Hooker process can be used to produce either chlorobenzene or phenol.
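A sketch of the balanced equations conventionally written for the two steps just described, an oxychlorination followed by a hydrolysis, with the conditions taken from the paragraph above:

```latex
\mathrm{C_6H_6 + HCl + \tfrac{1}{2}\,O_2
  \;\xrightarrow{\;\text{CuCl}_2 \text{ or FeCl}_3,\ 200\text{--}250\,^\circ\mathrm{C}\;}\;
  C_6H_5Cl + H_2O}

\mathrm{C_6H_5Cl + H_2O
  \;\xrightarrow{\;\text{steam},\ \sim 450\,^\circ\mathrm{C}\;}\;
  C_6H_5OH + HCl}
```

Summing the two steps, the HCl cancels, which is why the hydrogen chloride can in principle be recycled and the net feedstocks for phenol are benzene and oxygen.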
The Raschig–Hooker process's ability to make phenol makes it comparable to other methods, such as the Dow and Bayer process , which also converts benzene into phenol. In fact, the ability to recycle the hydrogen chloride made the Raschig–Hooker process preferable to the Dow and Bayer process, which requires its sodium chloride product to be converted into chlorine and sodium hydroxide. The reaction, however, takes place at very high temperatures in a very acidic environment with hydrogen chloride vapor and therefore the industrial setting must use highly corrosion resistant equipment for the reaction. While the Raschig–Hooker process does recycle the hydrogen chloride it produces, its catalyst experiences carbon deposition and must be frequently regenerated. The harsh chemical environment, use of catalysts, and large energy consumption has made it a target for green chemistry alternatives. [ 6 ]
The Raschig–Hooker process suffers from selectivity issues in both steps. In the first step, the reaction is only run to 10% to 15% conversion to prevent the second addition of a chlorine atom to the desired chlorobenzene. Despite this, the overall selectivity of the reaction is 70% to 85%. This second addition can be reversed using the Hooker modification, though it is also costly. The second step shares the low conversion rate and high selectivity of the first step. The small amount of conversion per reaction offsets the monetary benefit of recycling the hydrogen chloride due to the large initial cost of the reaction. Therefore, the Raschig–Hooker process needed to be run at high concentrations in large reactors to be industrially economical. [ 6 ]
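Why large recycle streams are needed can be seen from a quick illustrative calculation (not from the source), combining the per-pass conversion and selectivity figures quoted above:

```latex
Y_{\text{per pass}} = X \cdot S \approx (0.10\text{--}0.15) \times (0.70\text{--}0.85) \approx 0.07\text{--}0.13
```

That is, roughly 7–13% of the benzene fed is converted to the desired product on each pass, so the unconverted benzene must be separated and recycled repeatedly, in large reactors.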
Due to its low productivity, this process is largely unused today. As of 1997, every plant in the United States that was using the Raschig–Hooker process had been shut down, though it was still used by some plants in countries such as Argentina, India, Italy, and Poland. Rather than using the Raschig–Hooker process, some companies use the Hock or cumene process , which instead synthesizes acetone and phenol from benzene and propylene. This preferred process has dominated the market, especially as acetone is also a highly desired substance. [ 6 ] | https://en.wikipedia.org/wiki/Raschig–Hooker_process
The Rashba–Edelstein effect ( REE ) is a spintronics -related effect, consisting of the conversion of a bidimensional charge current into a spin accumulation. [ 1 ] [ 2 ] This effect is an intrinsic charge-to-spin conversion mechanism [ 1 ] and was predicted in 1990 by the scientist V.M. Edelstein. [ 3 ] It was demonstrated in 2013 [ 4 ] and confirmed by several experimental observations in the following years. [ 2 ] [ 5 ] [ 6 ] [ 7 ]
Its origin can be ascribed to the presence of spin-polarized surface or interface states. [ 8 ] Indeed, a structural inversion symmetry breaking (i.e., a structural inversion asymmetry (SIA)) causes the Rashba effect to occur: this effect breaks the spin degeneracy of the energy bands and causes the spin polarization to be locked to the momentum in each branch of the dispersion relation . [ 2 ] If a charge current flows in these spin-polarized surface states, it generates a spin accumulation. [ 8 ] In the case of a bidimensional Rashba gas, where this band splitting occurs, [ 9 ] this effect is called the Rashba–Edelstein effect . [ 1 ] [ 8 ]
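For readers unfamiliar with the Rashba splitting invoked here, a minimal textbook sketch (not taken from the cited sources) of the single-particle model for a two-dimensional electron gas with structural inversion asymmetry is:

```latex
H = \frac{\hbar^{2} k^{2}}{2 m^{*}}
    + \alpha_{\mathrm{R}} \, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{\mathbf{z}}
\quad \Longrightarrow \quad
E_{\pm}(k) = \frac{\hbar^{2} k^{2}}{2 m^{*}} \pm \alpha_{\mathrm{R}} \, k
```

The two branches E± carry opposite spin helicity, which is the spin-momentum locking exploited by the effect, and α_R is the Rashba parameter that reappears in the conversion-efficiency expressions later in the article.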
In a particular class of materials called topological insulators (TI) , spin-split surface states exist due to the surface topology, independently of the Rashba effect. [ 10 ] Topological insulators, indeed, display a spin-split linear dispersion relation on their surfaces (i.e., spin-polarized Dirac cones [ 11 ] ), while having a band gap in the bulk (this is why these materials are called insulators). [ 1 ] Also in this case, spin and momentum are locked [ 2 ] and, when a charge current flows in these spin-polarized surface states, a spin accumulation is produced; this effect is called the Edelstein effect . [ 8 ] In both cases, a 2D charge-to-spin conversion mechanism occurs. [ 8 ]
The reverse process is called the inverse Rashba–Edelstein effect, and it converts a spin accumulation into a bidimensional charge current, resulting in a 2D spin-to-charge conversion. [ 12 ]
The Rashba–Edelstein effect and its inverse are classified as spin-charge interconversion (SCI) mechanisms, like the direct and inverse spin Hall effects , and materials displaying these effects are promising candidates for becoming spin injectors, detectors and for other future technological applications. [ 1 ] [ 2 ] [ 4 ]
The Rashba–Edelstein effect is a surface effect, at variance with the spin Hall effect, which is a bulk effect. [ 1 ] Another difference between the two is that the Rashba–Edelstein effect is a purely intrinsic mechanism, while the spin Hall effect origin can be either intrinsic or extrinsic. [ 13 ]
The origin of the Rashba–Edelstein effect relies on the presence of spin-split surface or interface states, which can arise from a structural inversion asymmetry or because the material exhibits a topologically protected surface, being a topological insulator. [ 1 ] [ 8 ] In both cases, the material surface displays the spin polarization locked to the momentum, meaning that these two quantities are univocally linked and orthogonal to each other (this is clearly visible from the Fermi contours ). [ 1 ] [ 8 ] [ 10 ] [ 11 ] It is worth noting that a bulk inversion asymmetry could also be present, which would result in the Dresselhaus effect . [ 1 ] In fact, if, in addition to the spatial inversion asymmetry or to the topological insulator band structure, a bulk inversion asymmetry is also present, the spin and momentum are still locked but their relative orientation is not straightforwardly determinable (since the orientation of the charge current with respect to the crystallographic axes also plays a relevant role). [ 10 ] In the following discussion, the Dresselhaus effect will be neglected for simplicity. [ 10 ]
The topological insulator case is easier to visualize due to the presence of a single Fermi contour, and is therefore discussed first. Topological insulators display spin-split surface states in which spin-momentum locking is present. [ 1 ] [ 2 ] [ 11 ] Indeed, when a charge current flows in the surface states of the topological insulator, it can also be seen as a well-defined momentum shift Δk in reciprocal space , resulting in a different occupation of the spin-polarized branches of the Dirac cone. [ 1 ] This imbalance, according to the structure of the topological insulator band dispersion relation, produces a spin accumulation in the investigated material, i.e., a charge-to-spin conversion occurs. [ 3 ] The spin accumulation is orthogonal to the injected charge current, in accordance with spin-momentum locking. [ 14 ] Because these materials display a conductive behaviour on their surfaces while being insulating in their bulk, the charge current is only allowed to flow on the topological insulator surfaces: this is the origin of the bidimensionality of this charge-to-spin conversion mechanism. [ 1 ] [ 15 ]
In the Rashba–Edelstein effect, the spin-split dispersion relation consists of two bands displaced along the k -axis due to a structural inversion asymmetry (SIA), in accordance with the Rashba effect (i.e., these bands show a splitting linear in k due to the spin-orbit coupling [ 10 ] [ 16 ] ). This results in two Fermi contours , which are concentric at equilibrium, both displaying spin-momentum locking but with opposite helicity . [ 10 ] If the system is driven into an out-of-equilibrium condition by injecting a charge current, the two contours displace one from the other and a net spin accumulation arises. [ 10 ] This effect occurs, for instance, in a bidimensional Rashba gas. [ 1 ] The Rashba splitting complicates the understanding and the visualization of the spin-charge conversion mechanism, but the basic working principle of the Rashba–Edelstein effect is very similar to that of the Edelstein effect. [ 1 ] [ 4 ]
Experimentally speaking, the Rashba–Edelstein effect occurs if a charge current is electrically injected inside the topological insulator, for instance, by means of two electrodes where a potential difference is applied. The resulting spin accumulation can be probed in several ways, one of them is by employing the magneto optical Kerr effect (MOKE) . [ 1 ]
The reverse process, i.e., the inverse Rashba–Edelstein effect (I(R)EE), [ 14 ] occurs when a spin accumulation is generated inside the investigated material and a consequent charge current arises on the material surface (in this case, we have a 2D spin-to-charge conversion). [ 1 ] In order to have the inverse Rashba–Edelstein effect, a spin accumulation is required to be generated inside the analyzed material, and this spin injection is usually achieved by coupling the material under investigation with a ferromagnet in order to perform spin pumping [ 2 ] [ 17 ] or with a semiconductor where it is possible to perform optical orientation. [ 18 ] [ 19 ] [ 20 ] As for the direct effect, the inverse Rashba–Edelstein effect occurs in materials lacking structural inversion symmetry, while in topological insulators the inverse Edelstein effect arises. [ 1 ]
In the case of the inverse Edelstein effect, by looking at the section of the Dirac cone , the spin-to-charge conversion can be visualized as follows: the spin injection produces a piling up of spins of one character in one of the branches of the energy dispersion relation. [ 1 ] [ 8 ] This results in a spin imbalance due to the different branch occupations (i.e., a spin accumulation), which leads to a momentum imbalance and, therefore, to a charge current that can be electrically probed. [ 8 ] As for the direct effect, also in the inverse Edelstein effect the charge current can only flow on the topological insulator surfaces, due to the energy band conformation. [ 11 ] This is how the 2D spin-to-charge conversion occurs in these materials, and this could allow topological insulators to be exploited as spin detectors. [ 2 ]
As for the direct effect, this analysis has been carried out for the inverse Edelstein effect because in this case only two energy branches are present. In the case of the inverse Rashba–Edelstein effect, the process is very similar despite the presence of four energy branches, with spin-momentum locking, in the dispersion relation and two consequent Fermi contours with opposite helicity. [ 1 ] [ 8 ] In this case, when a spin accumulation is generated inside the material, the two Fermi contours will be displaced one from the other, generating a charge current, at variance with the equilibrium case in which the two Fermi contours are concentric and no net momentum imbalance or spin accumulation is present. [ 1 ] [ 10 ]
While both the Rashba–Edelstein effect and the inverse Rashba–Edelstein effect rely on a spin accumulation, the figure of merit of the processes is commonly computed by accounting for the spin current density related to the spin accumulation, instead of the spin accumulation itself, in analogy with the spin Hall angle for the spin Hall effect. [ 2 ] Indeed, the efficiency of the Rashba–Edelstein effect and of the inverse Rashba–Edelstein effect can be estimated by means of the Rashba–Edelstein length, i.e., the ratio between the charge current density flowing on the surface of the investigated material (a surface charge current density) and the three-dimensional spin current density (since the spin accumulation can diffuse in the three-dimensional space). [ 2 ]
In the Rashba–Edelstein effect the spin current is a consequence of the spin accumulation that occurs in the material as the charge current flows on its surface (under the influence of a potential difference and, therefore, of an electric field), while in the inverse Rashba–Edelstein effect the spin current is the quantity injected inside the material leading to a spin accumulation and resulting in a charge flow localized at the material surface. [ 1 ] [ 8 ] In both cases, the asymmetry in the charge and spin current dimensions results in a ratio which dimensionally has the units of a length: this is the origin of the name of this efficiency parameter. [ 1 ]
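A quick unit check (simple bookkeeping, not from the source) makes the last statement concrete: a surface charge current density is measured in A/m, while a bulk spin current density is measured in A/m², so their ratio indeed carries the dimension of a length:

```latex
\frac{[\,j_{c}^{\mathrm{2D}}\,]}{[\,j_{s}^{\mathrm{3D}}\,]}
  = \frac{\mathrm{A/m}}{\mathrm{A/m^{2}}}
  = \mathrm{m}
```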
Analytically, the value of the bidimensional charge current density can be computed employing the Boltzmann equation and considering the action of an electric field E , resulting in: [ 1 ] [ 10 ]
where q is the elementary charge, τ_m is the momentum scattering time, k_F and v_F are, respectively, the Fermi wavevector and the Fermi velocity, and ℏ is the reduced Planck constant .
The spin current density can also be analytically computed by integrating across the Fermi surface the product of the spin polarization and the corresponding distribution function .
In the Edelstein effect case, this quantity results in: [ 1 ] [ 10 ]
where n is the unit vector perpendicular to the surface on which the charge current flows.
From these formulas, the orthogonality of the spin and charge current densities can be observed. [ 1 ]
As regards the Edelstein effect and its inverse, the conversion efficiency can be expressed as follows. [ 1 ]
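In the simplest picture of a single, circular, spin-momentum-locked Fermi contour, the result commonly reported in the literature for the (inverse) Edelstein effect, quoted here in its standard form, is

$\lambda_{\mathrm{EE}} = v_{\mathrm{F}}\,\tau_{\mathrm{m}}$,

i.e., the product of the Fermi velocity and the momentum scattering time.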
This parameter is conventionally positive for a Fermi contour with a counterclockwise helicity. [ 2 ] The Rashba–Edelstein length derivation is the same as the Edelstein one, except that v F {\displaystyle v_{\rm {F}}} is substituted by the Rashba parameter α R {\displaystyle \alpha _{\rm {R}}} , [ 10 ] i.e., v F → α R ℏ {\displaystyle v_{\rm {F}}\to {\frac {\alpha _{\rm {R}}}{\hbar }}} , resulting in the expression given below. [ 1 ]
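Under the same assumptions, this substitution gives the form commonly reported in the literature,

$\lambda_{\mathrm{REE}} = \dfrac{\alpha_{\mathrm{R}}\,\tau_{\mathrm{m}}}{\hbar}$.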
The Rashba–Edelstein length of the investigated material can be compared to other spin-charge interconversion efficiencies, [ 2 ] such as the spin Hall angle, [ 1 ] to establish whether the material is an efficient spin-charge interconverter and, therefore, whether it could be suitable for spintronic applications. [ 2 ] The Rashba–Edelstein length (a 2D spin-charge interconversion efficiency) can be compared to the spin Hall angle (a 3D spin-charge interconversion efficiency) by dividing the λ R E E {\displaystyle \lambda _{\rm {REE}}} parameter by the thickness of the spin-split surface states in which the 2D conversion occurs. [ 4 ] This "equivalent" spin Hall angle for the Rashba–Edelstein effect often turns out to be close to, or even greater than, unity: [ 4 ] the Rashba–Edelstein effect is, on average, a more efficient spin-charge interconversion mechanism than the spin Hall effect, and this could lead to the future employment of materials displaying this effect in the technology industry. [ 2 ] [ 4 ] [ 21 ] | https://en.wikipedia.org/wiki/Rashba–Edelstein_effect
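In terms of the quantities just described, this comparison amounts to forming an equivalent, dimensionless spin Hall angle $\theta_{\mathrm{eq}} \approx \lambda_{\mathrm{REE}}/t$, where $t$ (a symbol introduced here only for illustration) denotes the thickness of the spin-split surface states hosting the 2D conversion.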
Rat Park was a series of studies into drug addiction conducted in the late 1970s and published between 1978 and 1981 by Canadian psychologist Bruce K. Alexander and his colleagues at Simon Fraser University in British Columbia, Canada.
At the time of the studies, research exploring the self-administration of morphine in animals often used small, solitary metal cages. Alexander hypothesized that these conditions might be responsible for exacerbating self-administration. [ 1 ] To test this hypothesis, Alexander and his colleagues built Rat Park, a large housing colony with 200 times the floor area of a standard laboratory cage. There were 16–20 rats of both sexes in residence, food, balls and wheels for play, and enough space for mating. [ 2 ] The results of the experiment appeared to support his hypothesis that improved housing conditions reduce the consumption of morphine water. [ 1 ] This research highlighted an important issue in the design of morphine self-administration studies of the time, namely the use of austere housing conditions, which confound the results. [ 3 ]
In Rat Park, the rats could drink a fluid from one of two drop dispensers, which automatically recorded how much each rat drank. One dispenser contained a sweetened morphine solution and the other plain tap water. Morphine solution was sweetened to reduce aversion to the taste of morphine; as a control, prior to morphine introduction, rats were offered a sweetened quinine solution instead.
Alexander designed a number of experiments to test the rats' willingness to consume the morphine. The Seduction Experiment involved four groups of 8 rats. [ 4 ] Group CC was isolated in laboratory cages when they were weaned at 22 days of age, and lived there until the experiment ended at 80 days of age; Group PP was housed in Rat Park for the same period; Group CP was moved from laboratory cages to Rat Park at 65 days of age; and Group PC was moved out of Rat Park and into cages at 65 days of age.
The caged rats (Groups CC and PC) took to the morphine instantly, even with relatively little sweetener, with the caged males drinking 19 times more morphine than the Rat Park males in one of the experimental conditions. The rats in Rat Park resisted the morphine water. They would try it occasionally—with the females trying it more often than the males—but they showed a statistically significant preference for the plain water. He writes that the most interesting group was Group CP, the rats who were brought up in cages but moved to Rat Park before the experiment began. These animals rejected the morphine solution when it was stronger, but as it became sweeter and more dilute, they began to drink almost as much as the rats that had lived in cages throughout the experiment. They wanted the sweet water, he concluded, so long as it did not disrupt their normal social behavior. [ 5 ] Even more significant, he writes, was that when he added naloxone , a drug which negates the effects of opioids , to the morphine-laced water, the Rat Park rats began to drink it.
In another experiment, he forced rats in ordinary lab cages to consume the morphine-laced solution for 57 days without other liquid available to drink. When they moved into Rat Park, they were allowed to choose between the morphine solution and plain water. They drank the plain water. He writes that they did show some signs of dependence . There were "some minor withdrawal signs, twitching, what have you, but there were none of the mythic seizures and sweats you so often hear about ..." [ 2 ]
The authors concluded that isolated cages, as well as female sex, caused an increased consumption of morphine. The authors advised that it is important to consider the conditions of testing, as well as the sex of the animals, when exploring self-administration of morphine. [ 1 ]
Studies that followed up on the contribution of environmental enrichment to addiction produced mixed results. A replication study found that both caged and "park" rats showed a decreased preference for morphine compared to Alexander's original study; the author suggested a genetic reason for the difference Alexander initially observed. [ 6 ] Another study found that while social isolation can influence levels of heroin self-administration, isolation is not a necessary condition for heroin or cocaine injections to be reinforcing. [ 7 ]
Other studies have reinforced the effect of environmental enrichment on self-administration, such as one that showed it reduced re-instatement of cocaine-seeking behavior in mice through cues (though not if that re-instatement was induced by cocaine itself) [ 8 ] and another that showed it can eliminate previously established addiction-related behaviors. [ 9 ] Furthermore, removing mice from enriched environments has been shown to increase vulnerability to cocaine addiction [ 10 ] and exposure to complex environments during early stages of life produced dramatic changes in the reward system of the brain that resulted in reduced effects of cocaine. [ 11 ]
Broadly speaking, there is mounting evidence that the impoverished small-cage environments that are standard for the housing of laboratory animals have undue influence on lab animal behavior and biology. [ 12 ] These conditions can jeopardize both a basic premise of biomedical research—that healthy control animals are healthy—and the relevance of these kinds of animal studies to human conditions. [ 13 ]
Bruce Petrie (1996), a graduate student of Alexander's, attempted to replicate the study with 20 rats and to correct for the original studies' use of two different methods for measuring morphine consumption between conditions (which had introduced a potential confound ). [ 6 ] The study was not able to replicate the results. The author suggested that strain differences between the rats that Alexander's research group used could be the reason. [ 6 ]
There has been little subsequent interest in replicating the studies due to several methodological issues present in the originals. [ 14 ] Issues included the small number of subjects used, the use of oral morphine, which does not mimic actual conditions of use (and introduces a confound because of the bitterness of morphine), and the measurement of morphine consumption, which differed between conditions. Other problems included equipment failures, lost data and rat deaths. However, some researchers have shown an interest in "conceptual" replication to continue exploring the contribution of environmental and social enrichment to addiction. [ 14 ]
Journalist [ 15 ] [ 16 ] Johann Hari gave a popular TED Talk about the results of the study in 2015. He interpreted Alexander's study as suggesting that biological underpinnings are not the cause of addiction, instead shifting the etiology to a lack of healthy relationships. [ 17 ] The YouTube channel Kurzgesagt created and published a video based on Hari's book, which garnered over 19 million views. The channel later took down the video, stating that it had improperly represented the evidence. [ 18 ]
Researchers have reiterated that the results of Alexander's studies highlight concerns about observations of rats kept in bare-bones lab environments and implicate the environment as a contributing factor in addiction. However, it has been suggested that the media have overstated the studies' importance by presenting them as a total paradigm shift in addiction research, since it is a mistake to conclude from the studies that the environment is the only factor in addiction. [ 3 ] | https://en.wikipedia.org/wiki/Rat_Park
Rat Sound Systems is a provider of touring sound reinforcement equipment and services to the concert touring industry, based in Camarillo, California.
Rat Sound Systems was established in 1980 by Dave Rat and Brian Benjamin, and is known for being one of the first sound companies to tour with hardcore punk bands such as Black Flag , Fear and the Dead Kennedys .
The list of artists and events that Rat Sound Systems has provided equipment for includes: Black Flag , Sonic Youth , Pearl Jam , [ 1 ] Red Hot Chili Peppers , [ 2 ] Jack Johnson , R.E.M. , AFI , Ben Harper , Blink 182 , The Offspring , Rage Against the Machine , Weezer , Queens of the Stone Age , Eddie Vedder , My Chemical Romance , Paramore , Jimmy Eat World , Beck , The Used and many other artists.
Rat Sound has been the primary audio vendor for the Coachella Valley Music and Arts Festival since 2001. [ 3 ] [ 4 ] Warped Tour and Taste of Chaos both tour with sound systems provided by Rat Sound. | https://en.wikipedia.org/wiki/Rat_Sound
Rata Die ( R.D. ) is a system for assigning numbers to calendar days (optionally with time of day), independent of any calendar, for the purposes of calendrical calculations . It was named (after the Latin ablative feminine singular for "from a fixed date ") by Howard Jacobson. [ 1 ] [ 2 ]
Rata Die is somewhat similar to Julian Dates (JD), in that the values are plain real numbers that increase by 1 each day. The systems differ principally in that JD takes on a particular value at a particular absolute time, and is the same in all contexts, whereas R.D. values may be relative to time zone , depending on the implementation. This makes R.D. more suitable for work on calendar dates, whereas JD is more suitable for work on time per se. The systems also differ trivially by having different epochs: R.D. is 1 at midnight (00:00) local time on January 1, AD 1 in the proleptic Gregorian calendar , while JD is 0 at noon (12:00) Universal Time on January 1, 4713 BC in the proleptic Julian calendar .
There are three distinct forms of R.D., each defined below in terms of Julian Dates.
Dershowitz and Reingold do not explicitly distinguish between these three forms, using the abbreviation "R.D." for all of them. [ 1 ]
Dershowitz and Reingold do not say that R.D. is based on Greenwich time, but on page 10 they state that an R.D. with a decimal fraction is called a moment, with the function moment-from-jd taking a floating-point Julian Date as its argument and returning that argument minus 1,721,424.5. Consequently, there is no requirement or opportunity to supply a time zone offset.
The first form of R.D. is a continuously-increasing fractional number, taking integer values at midnight local time. It is defined as RD = JD − 1,721,424.5.
Midnight local time on December 31, year 0 (1 BC) in the proleptic Gregorian calendar corresponds to Julian Date 1,721,424.5 and hence RD 0.
In the second form, R.D. is an integer that labels an entire day, from midnight to midnight local time. This is the result of rounding the first form of R.D. downwards (towards negative infinity). It is the same as the relation between Julian Date and Julian Day Number (JDN). Thus RD = ⌊JD − 1,721,424.5⌋.
In the third form, the R.D. is an integer labeling noon time, and incapable of labeling any other time of day. This is defined as RD = JD − 1,721,425,
where the R.D. value must be an integer, thus constraining the choice of JD. This form of R.D. is used by Dershowitz and Reingold for conversion of calendar dates between calendars that separate days on different boundaries. | https://en.wikipedia.org/wiki/Rata_Die |
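As a minimal, illustrative sketch of the three forms just described (written in Python; moment_from_jd echoes the function name mentioned above, while the other function names and example values are purely illustrative), the conversions can be expressed as:

from math import floor

RD_EPOCH_JD = 1_721_424.5  # Julian Date of midnight, 31 December 1 BC (proleptic Gregorian), i.e. R.D. 0

def moment_from_jd(jd):
    # First form: fractional R.D. ("moment"), integer-valued at local midnight.
    return jd - RD_EPOCH_JD

def day_from_jd(jd):
    # Second form: integer R.D. labelling a whole day, midnight to midnight
    # (the first form rounded towards negative infinity).
    return floor(jd - RD_EPOCH_JD)

def jd_of_noon(rd):
    # Third form: the Julian Date of local noon on integer day rd (R.D. = JD - 1,721,425).
    return rd + 1_721_425

# Midnight at the start of 1 January AD 1 is JD 1,721,425.5, i.e. R.D. 1.0 and day 1.
assert moment_from_jd(1_721_425.5) == 1.0
assert day_from_jd(1_721_426.25) == 1      # 18:00 on 1 January AD 1 still falls on day 1
assert jd_of_noon(1) == 1_721_426          # noon on 1 January AD 1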
The ratchet effect is a concept in sociology and economics illustrating the difficulty with reversing a course of action once a specific thing has occurred, analogous with the mechanical ratchet that allows movement in one direction and seizes or tightens in the opposite. The concept has been applied to multiple fields of study and is related to the phenomena of scope creep , mission creep , and feature creep .
The ratchet effect first came to light in Alan Peacock and Jack Wiseman 's 1961 report "The Growth of Public Expenditure in the United Kingdom." Peacock and Wiseman found that public spending increases like a ratchet following periods of crisis. [ 1 ]
The term was later expanded upon by American historian Robert Higgs in the 1987 book Crisis and Leviathan, highlighting Peacock and Wiseman's research as it relates to governments experiencing difficulty in rolling back huge bureaucratic organizations created initially for temporary needs, such as wartime measures, natural disasters, or economic crises. [ 2 ]
The effect may likewise afflict large businesses with myriad layers of bureaucracy which resist reform or dismantling. [ 3 ] In workplaces, "ratchet effects refer to the tendency for central controllers to base next year's targets on last year's performance, meaning that managers who expect still to be in place in the next target period have a perverse incentive not to exceed targets even if they could easily do so." [ 4 ]
Garrett Hardin , a biologist and environmentalist, used the phrase to describe how food aid keeps people alive who would otherwise die in a famine . They live and multiply in better times, making another bigger crisis inevitable, since the supply of food has not been increased. [ 5 ]
Jean Tirole used the concept in his pioneering work on regulation and monopolies. The ratchet effect can denote an economic strategy arising in an environment where incentive depends on both current and past production, such as in a competitive industry employing piece rates . The producers observe that since incentive is readjusted based on their production, any increase in production confers only a temporary increase in incentive while requiring a permanently greater expenditure of work. They therefore decide not to reveal hidden production capacity unless forced to do so. [ 6 ]
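As a purely hypothetical illustration of this incentive: suppose a worker on piece rates produces 100 units in a period, and the planner sets each period's target equal to the previous period's output. Producing 120 units once yields a one-off gain, but it also raises every subsequent target to 120, so the same relative reward now requires permanently greater effort; anticipating this, the worker prefers to keep output near 100 and conceal the spare capacity.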
The ratchet effect is central to the mathematical Parrondo's paradox .
In 1999 comparative psychologist Michael Tomasello used the ratchet effect metaphor to shed light on the evolution of culture . [ 7 ] He explains that the sharedness of human culture means that it is cumulative in character. Once a certain invention has been made, it can jump from one mind to another (by means of imitation) and thus a whole population can acquire a new trait (and so the ratchet has gone "up" one tooth). Comparative psychologist Claudio Tennie, Tomasello, and Josep Call call this the "cultural ratchet" and they describe it, amongst primates, as being unique to human culture. [ 8 ]
Receptors which initiate cell-fate transduction cascades in early embryo development exhibit a ratchet effect in response to morphogen concentrations. [ 9 ] Low receptor occupancy permits increases in occupancy that alter the cell fate, but the high receptor affinity does not allow ligand dissociation, so the cell cannot revert to the fate associated with a lower morphogen concentration.
The ratchet effect is reflected in the Collingridge dilemma .
The ratchet effect can be seen in long-term trends in the production of many consumer goods. Year by year, automobiles gradually acquire more features. Competitive pressures make it hard for manufacturers to cut back on the features unless forced by a true scarcity of raw materials (e.g., an oil shortage that drives costs up radically). University textbook publishers gradually get "stuck" in producing books that have excess content and features.
In software development, products that compete often will use specification lists of competitive products to add features, presuming that they must provide all of the features of the competitive product, plus add additional functionality. This can lead to " feature creep " in which it is considered necessary to add all of a competitor's features whether or not customers will use them.
Airlines initiate frequent-flyer programs that become ever harder to terminate. Successive generations of home appliances gradually acquire more features; new editions of software acquire more features; and so on. With all of these goods, there is ongoing debate as to whether the added features truly improve usability, or simply increase the tendency for people to buy the goods.
The term was included by the MAI Negotiating Group in the 1990s as the essence of a device to enforce legislative progress toward " free trade " by preventing legislative rollback with the compulsory assent of governments as a condition of participation.
Rollback is the liberalisation process by which the reduction and eventual elimination of nonconforming measures to the MAI would take place. It is a dynamic element linked with standstill, which provides its starting point. Combined with standstill, it would produce a "ratchet effect", where any new liberalisation measures would be "locked in" so they could not be rescinded or nullified over time. [ 10 ] | https://en.wikipedia.org/wiki/Ratchet_effect |
In continuum mechanics , ratcheting , or ratchetting , also known as cyclic creep , is a behavior in which plastic deformation accumulates due to cyclic mechanical or thermal stress. [ 1 ] [ 2 ]
In an article written by J. Bree in 1967, [ 3 ] the phenomenon of ratcheting is described as follows: "Unsymmetric cycles of stress between prescribed limits will cause progressive ' creep ' or 'ratchet(t)ing' in the direction of the mean stress". Ratcheting is a progressive, incremental inelastic deformation characterized by a shift of the stress-strain hysteresis loop along the strain axis. [ 4 ] When the amplitude of the cyclic stresses exceeds the elastic limit, the plastic deformation that occurs keeps accumulating, paving the way for a catastrophic failure of the structure. Nonlinear kinematic hardening, which occurs when the stress state reaches the yield surface , is considered the main mechanism behind ratcheting. [ 5 ] Several factors influence the extent of ratcheting, including the load condition, mean stress, stress amplitude, stress ratio, load history, plastic slip , dislocation movement, and cell deformations. [ 6 ]
The effect of structural ratcheting can sometimes be represented in terms of the Bree diagram. [ 7 ] Alternative material models have been proposed to simulate ratcheting, such as the Chaboche, Ohno–Wang, and Armstrong–Frederick models. [ 6 ]
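In common presentations of the Bree diagram (the notation here follows general usage rather than any single source), the axes are the dimensionless ratios $X=\sigma_{p}/\sigma_{y}$ (primary membrane stress normalized by the yield stress) and $Y=\sigma_{t}/\sigma_{y}$ (maximum thermal bending stress normalized by the yield stress); the location of a load cycle in the $(X,Y)$ plane then indicates whether the structure responds elastically, shakes down, cycles plastically, or ratchets.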
Ratcheting is a significant effect to consider when checking for permanent deformation in systems which undergo cyclic loading. Common examples of such repetitive stresses include sea waves, road traffic, and earthquakes. [ 8 ] It was initially studied to inspect the permanent deformation of thin nuclear fuel cans subjected to an internal pressure and a temperature gradient while undergoing repetitive non-zero mean stresses. [ 3 ] | https://en.wikipedia.org/wiki/Ratcheting